Lukasz Majewski [Wed, 2 Apr 2014 10:31:57 +0000 (12:31 +0200)]
cpufreq: LAB: Change method of preserving the boost state
It is not necessary to change the boost state when the LAB governor is entered,
since LAB will change it according to its own policy. Only the state at entry
is preserved.
When leaving LAB, work is scheduled to restore the initial boost state, but
only when required.
Change-Id: I6323f3c0011fe54a33d70c9ad0f9da5360b4a735
Signed-off-by: Lukasz Majewski <l.majewski@samsung.com>
Lukasz Majewski [Wed, 2 Apr 2014 10:14:25 +0000 (12:14 +0200)]
cpufreq:governor: Add serialization to the cpufreq_governor_dbs() function
It is necessary to serialize access to the cpufreq_governor_dbs() function,
since it can be reached from several paths that are not protected by any mutex,
such as the sysfs boost attribute or the LAB governor internals.
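A minimal sketch of the idea; the mutex name and the split into a locked
wrapper are illustrative only, not necessarily how the patch is structured:
    static DEFINE_MUTEX(dbs_mutex);     /* hypothetical name */

    int cpufreq_governor_dbs(struct cpufreq_policy *policy,
                             struct common_dbs_data *cdata, unsigned int event)
    {
            int ret;

            mutex_lock(&dbs_mutex);
            /* ... existing CPUFREQ_GOV_{POLICY_INIT,START,STOP,LIMITS} handling ... */
            ret = __cpufreq_governor_dbs(policy, cdata, event);  /* hypothetical unlocked body */
            mutex_unlock(&dbs_mutex);
            return ret;
    }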
Change-Id: Id7b62db6ca0b7c28f5e8c6286aec312d3d0c971e
Signed-off-by: Lukasz Majewski <l.majewski@samsung.com>
Zhaowei Yuan [Wed, 2 Apr 2014 07:22:33 +0000 (15:22 +0800)]
media: s5p_mfc: remove the code checking dev->plat_dev
Remove the code checking dev->plat_dev, since we can ensure the pointer
pdev cannot be NULL at this point.
Change-Id: Ibdc44403068ee4462e414d6e84757b8a4c2b512c
Signed-off-by: Zhaowei Yuan <zhaowei.yuan@samsung.com>
Stephen Boyd [Tue, 27 Aug 2013 18:47:29 +0000 (11:47 -0700)]
cpufreq: Fix timer/workqueue corruption due to double queueing
When a CPU is hot removed we'll cancel all the delayed work items
via gov_cancel_work(). Normally this will just cancel a delayed
timer on each CPU that the policy is managing and the work won't
run, but if the work is already running, the workqueue code will
wait for the work to finish before continuing, to prevent the
work items from re-queuing themselves as they normally do. This
scheme will work most of the time, except for the case where the
work function determines that it should adjust the delay for all
other CPUs that the policy is managing. If this scenario occurs,
the canceling CPU will cancel its own work but queue up the other
CPUs' works to run. For example:
CPU0                                        CPU1
----                                        ----
cpu_down()
 ...
 __cpufreq_remove_dev()
  cpufreq_governor_dbs()
   case CPUFREQ_GOV_STOP:
    gov_cancel_work(dbs_data, policy);
     cpu0 work is canceled
      timer is canceled
     cpu1 work is canceled              <work runs>
     <waits for cpu1>                   od_dbs_timer()
                                         gov_queue_work(*, *, true);
                                          cpu0 work queued
                                          cpu1 work queued
                                          cpu2 work queued
                                          ...
     cpu1 work is canceled
     cpu2 work is canceled
     ...
At the end of the GOV_STOP case cpu0 still has a work queued to
run although the code is expecting all of the works to be
canceled. __cpufreq_remove_dev() will then proceed to
re-initialize all the other CPUs' works except for the CPU that is
going down. The CPUFREQ_GOV_START case in cpufreq_governor_dbs()
will trample over the queued work and debugobjects will spit out
a warning:
WARNING: at lib/debugobjects.c:260 debug_print_object+0x94/0xbc()
ODEBUG: init active (active state 0) object type: timer_list hint: delayed_work_timer_fn+0x0/0x10
Modules linked in:
CPU: 0 PID: 1491 Comm: sh Tainted: G W 3.10.0 #19
[<c010c178>] (unwind_backtrace+0x0/0x11c) from [<c0109dec>] (show_stack+0x10/0x14)
[<c0109dec>] (show_stack+0x10/0x14) from [<c01904cc>] (warn_slowpath_common+0x4c/0x6c)
[<c01904cc>] (warn_slowpath_common+0x4c/0x6c) from [<c019056c>] (warn_slowpath_fmt+0x2c/0x3c)
[<c019056c>] (warn_slowpath_fmt+0x2c/0x3c) from [<c0388a7c>] (debug_print_object+0x94/0xbc)
[<c0388a7c>] (debug_print_object+0x94/0xbc) from [<c0388e34>] (__debug_object_init+0x2d0/0x340)
[<c0388e34>] (__debug_object_init+0x2d0/0x340) from [<c019e3b0>] (init_timer_key+0x14/0xb0)
[<c019e3b0>] (init_timer_key+0x14/0xb0) from [<c0635f78>] (cpufreq_governor_dbs+0x3e8/0x5f8)
[<c0635f78>] (cpufreq_governor_dbs+0x3e8/0x5f8) from [<c06325a0>] (__cpufreq_governor+0xdc/0x1a4)
[<c06325a0>] (__cpufreq_governor+0xdc/0x1a4) from [<c0633704>] (__cpufreq_remove_dev.isra.10+0x3b4/0x434)
[<c0633704>] (__cpufreq_remove_dev.isra.10+0x3b4/0x434) from [<c08989f4>] (cpufreq_cpu_callback+0x60/0x80)
[<c08989f4>] (cpufreq_cpu_callback+0x60/0x80) from [<c08a43c0>] (notifier_call_chain+0x38/0x68)
[<c08a43c0>] (notifier_call_chain+0x38/0x68) from [<c01938e0>] (__cpu_notify+0x28/0x40)
[<c01938e0>] (__cpu_notify+0x28/0x40) from [<c0892ad4>] (_cpu_down+0x7c/0x2c0)
[<c0892ad4>] (_cpu_down+0x7c/0x2c0) from [<c0892d3c>] (cpu_down+0x24/0x40)
[<c0892d3c>] (cpu_down+0x24/0x40) from [<c0893ea8>] (store_online+0x2c/0x74)
[<c0893ea8>] (store_online+0x2c/0x74) from [<c04519d8>] (dev_attr_store+0x18/0x24)
[<c04519d8>] (dev_attr_store+0x18/0x24) from [<c02a69d4>] (sysfs_write_file+0x100/0x148)
[<c02a69d4>] (sysfs_write_file+0x100/0x148) from [<c0255c18>] (vfs_write+0xcc/0x174)
[<c0255c18>] (vfs_write+0xcc/0x174) from [<c0255f70>] (SyS_write+0x38/0x64)
[<c0255f70>] (SyS_write+0x38/0x64) from [<c0106120>] (ret_fast_syscall+0x0/0x30)
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Change-Id: I3c74dd72e468c150c6664c9ea99083c0a5866b06
[k.kozlowski: Backport to 3.10 to fix CPU0 stall after CPU1 hotplug]
Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Donghwa Lee [Mon, 31 Mar 2014 02:31:56 +0000 (11:31 +0900)]
ARM: dts: set mmc clock-frequency for odroidx2
from: Jaehoon Chung <jh80.chung@samsung.com>
Set the mmc clock-frequency to 400MHz for odroidx2.
Change-Id: I94b9dccbdd8091e333debbe8b06a881bf3ea7ee9
Signed-off-by: Donghwa Lee <dh09.lee@samsung.com>
Signed-off-by: Jaehoon Chung <jh80.chung@samsung.com>
Donghwa Lee [Mon, 31 Mar 2014 02:13:09 +0000 (11:13 +0900)]
media: s5p-mfc: add to set clock rate
from: Seung-Woo Kim <sw0312.kim@samsung.com>
MFC needs 200MHz for sclk_mfc clock to work properly. The clock
rate setting was missed, so this patch adds it.
Change-Id: Ica696a5fda2babe81e885945fa5affd0b09ff5ba
Signed-off-by: Donghwa Lee <dh09.lee@samsung.com>
Signed-off-by: Seung-Woo Kim <sw0312.kim@samsung.com>
Krzysztof Kozlowski [Tue, 1 Apr 2014 09:30:24 +0000 (11:30 +0200)]
clocksource: exynos_mct: Fix too early ISR fire up on wrong CPU
After hotplugging CPU1, the first interrupt handler for the CPU1 oneshot
timer was called on CPU0 because it fired before the IRQ affinity was set.
During setup of the MCT timers, the clock event device should be
registered after setting the affinity for its interrupt. This prevents
the timer from starting too early.
Additionally, if the clock event device has an interrupt set up,
clockevents_config_and_register() will also set the affinity for it.
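A minimal sketch of the ordering described above, taken from the per-CPU
setup path; the rate and delta values are the driver's usual ones and are
illustrative here:
    /* the per-CPU MCT interrupt is requested first ... */
    irq_set_affinity(evt->irq, cpumask_of(cpu));
    /* ... and only then is the clock event device registered */
    clockevents_config_and_register(evt, clk_rate / (TICK_BASE_CNT + 1),
                                    0xf, 0x7fffffff);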
Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Change-Id: I64fee65b57106ad562f0ecc1160748a9548debad
Krzysztof Kozlowski [Mon, 24 Mar 2014 09:01:32 +0000 (10:01 +0100)]
clocksource: exynos_mct: Change exynos4_mct_tick_clear return type to void
Return value of exynos4_mct_tick_clear() was never checked so it can
be safely changed to void.
Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Change-Id: I14f872e244434002005c532adf8afc97ef77cea5
Krzysztof Kozlowski [Mon, 24 Mar 2014 08:27:22 +0000 (09:27 +0100)]
clocksource: exynos_mct: Fix stall after CPU hotplugging
Fix a stall after hotplugging CPU1. The stall was a result of starting the
CPU1 local timer on the L0 timer (which is used by CPU0) instead of L1.
Stall information:
[ 530.045259] INFO: rcu_preempt detected stalls on CPUs/tasks:
[ 530.045618] 1: (6 GPs behind) idle=6d0/0/0 softirq=369/369
[ 530.050987] (detected by 0, t=6589 jiffies, g=33, c=32, q=0)
[ 530.056721] Task dump for CPU 1:
[ 530.059928] swapper/1 R running 0 0 1 0x00001000
[ 530.066377] [<c0524e14>] (__schedule+0x414/0x9b4) from [<c00b6610>] (rcu_idle_enter+0x18/0x38)
[ 530.074955] [<c00b6610>] (rcu_idle_enter+0x18/0x38) from [<c0079a18>] (cpu_startup_entry+0x60/0x3bc)
[ 530.084069] [<c0079a18>] (cpu_startup_entry+0x60/0x3bc) from [<c0517d34>] (secondary_start_kernel+0x164/0x1a0)
[ 530.094029] [<c0517d34>] (secondary_start_kernel+0x164/0x1a0) from [<40517244>] (0x40517244)
The timers for CPU1 were missed:
[ 591.668436] cpu: 1
[ 591.670430] clock 0:
[ 591.672691] .base: c0ab7750
[ 591.676160] .index: 0
[ 591.679025] .resolution: 1 nsecs
[ 591.682404] .get_time: ktime_get
[ 591.685970] .offset: 0 nsecs
[ 591.689349] active timers:
[ 591.692045] #0: <dfb51f40>, hrtimer_wakeup, S:01
[ 591.696759] # expires at 454687834257-454687884257 nsecs [in -136770537232 to -136770487232 nsecs]
And the event_handler for next event was wrong:
[ 591.917120] Tick Device: mode: 1
[ 591.920676] Per CPU device: 0
[ 591.923621] Clock Event Device: mct_tick0
[ 591.927623] max_delta_ns: 178956969027
[ 591.931613] min_delta_ns: 1249
[ 591.934913] mult: 51539608
[ 591.938557] shift: 32
[ 591.941681] mode: 3
[ 591.944724] next_event: 595025000000 nsecs
[ 591.949227] set_next_event: exynos4_tick_set_next_event
[ 591.954522] set_mode: exynos4_tick_set_mode
[ 591.959296] event_handler: hrtimer_interrupt
[ 591.963730] retries: 0
[ 591.966761]
[ 591.968245] Tick Device: mode: 0
[ 591.971801] Per CPU device: 1
[ 591.974746] Clock Event Device: mct_tick1
[ 591.978750] max_delta_ns: 178956969027
[ 591.982739] min_delta_ns: 1249
[ 591.986037] mult: 51539608
[ 591.989681] shift: 32
[ 591.992806] mode: 3
[ 591.995848] next_event: 453685000000 nsecs
[ 592.000353] set_next_event: exynos4_tick_set_next_event
[ 592.005648] set_mode: exynos4_tick_set_mode
[ 592.010421] event_handler: tick_handle_periodic
[ 592.015115] retries: 0
[ 592.018145]
After turning off CPU1, the MCT L1 local timer was disabled but its
interrupt was not cleared. Turning CPU1 back on enabled the IRQ
with setup_irq(), but before the affinity was set to CPU1 the pending L1
timer interrupt was processed by CPU0 in exynos4_mct_tick_isr().
The ISR then called the event handler, which set up the next timer event for
the current CPU (CPU0). Therefore the MCT L1 timer wasn't actually started.
Fix the stall by:
1. Setting next timer event not on current CPU but on the CPU indicated
by cpumask in 'clock_event_device'.
2. Clearing the timer interrupt upon stopping the local timer.
The patch also moves the call to exynos4_mct_tick_stop() around, but this
is done only for code readability and is not essential for the fix.
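A minimal sketch of fix 1 above, using the driver's existing per-CPU helpers;
the exact backported code may differ:
    static int exynos4_tick_set_next_event(unsigned long cycles,
                                           struct clock_event_device *evt)
    {
            /* program the timer that belongs to evt's CPU, not the CPU
             * this callback happens to run on */
            int cpu = cpumask_first(evt->cpumask);

            exynos4_mct_tick_start(cycles, per_cpu_ptr(&percpu_mct_tick, cpu));
            return 0;
    }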
Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Change-Id: I3a1a23e2b970661b5f7c60fc633a7545aa80ed5e
Viresh Kumar [Sat, 31 Aug 2013 12:18:23 +0000 (17:48 +0530)]
cpufreq: serialize calls to __cpufreq_governor()
We can't take a big lock around __cpufreq_governor() as this causes
recursive locking for some cases. But calls to this routine must be
serialized for every policy. Otherwise we can see some unpredictable
events.
For example, consider following scenario:
__cpufreq_remove_dev()
 __cpufreq_governor(policy, CPUFREQ_GOV_STOP);
  policy->governor->governor(policy, CPUFREQ_GOV_STOP);
   cpufreq_governor_dbs()
    case CPUFREQ_GOV_STOP:
     mutex_destroy(&cpu_cdbs->timer_mutex)
     cpu_cdbs->cur_policy = NULL;
 <PREEMPT>
store()
 __cpufreq_set_policy()
  __cpufreq_governor(policy, CPUFREQ_GOV_LIMITS);
   policy->governor->governor(policy, CPUFREQ_GOV_LIMITS);
    case CPUFREQ_GOV_LIMITS:
     mutex_lock(&cpu_cdbs->timer_mutex); <-- Warning (destroyed mutex)
      if (policy->max < cpu_cdbs->cur_policy->cur) <- cur_policy == NULL
And so store() will eventually result in a crash if cur_policy is
NULL at this point.
Introduce an additional variable which would guarantee serialization
here.
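A minimal sketch of the "additional variable" approach; the flag name below
is illustrative:
    static int __cpufreq_governor(struct cpufreq_policy *policy,
                                  unsigned int event)
    {
            int ret;

            if (policy->governor_busy)      /* another call is already in flight */
                    return -EBUSY;
            policy->governor_busy = true;

            ret = policy->governor->governor(policy, event);

            policy->governor_busy = false;
            return ret;
    }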
Change-Id: Ibae767cbd9c25c7598b39d1405fa3d98d2125101
Reported-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Viresh Kumar [Tue, 2 Jul 2013 11:06:28 +0000 (16:36 +0530)]
cpufreq: Fix serialization of frequency transitions
Commit 7c30ed ("cpufreq: make sure frequency transitions are serialized")
interacts poorly with systems that have a single frequency shared by all
cores. On such systems we have a single policy for all cores, covering
several CPUs. When we do a frequency transition the governor calls the
pre- and post-change notifiers, which causes cpufreq_notify_transition()
to run once per CPU. Since the policy is the same for all of those CPUs,
and the warnings added by that commit are generated by checking a per-policy
flag, the warnings will be triggered for all cores after the first.
Fix this by allowing the notifier to be called up to n times, where n is the
number of CPUs in policy->cpus.
Change-Id: I5712dde7f992644f9c3ddc8313151f80bea0d877
Reported-and-tested-by: Mark Brown <broonie@linaro.org>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Viresh Kumar [Wed, 19 Jun 2013 04:46:55 +0000 (10:16 +0530)]
cpufreq: make sure frequency transitions are serialized
Whenever we change the frequency of a CPU, we call the PRECHANGE and
POSTCHANGE notifiers. They must be serialized, i.e. PRECHANGE or POSTCHANGE
shouldn't be called twice in a row.
This can happen due to bugs in users of __cpufreq_driver_target() or in the
cpufreq drivers that send these notifiers.
This patch adds some protection against this. Now we keep track of the last
transaction and check whether something went wrong.
Change-Id: I0f5465bd515c431ae2d3711d065f70aacec7e978
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Viresh Kumar [Fri, 31 May 2013 06:15:08 +0000 (06:15 +0000)]
cpufreq: remove unnecessary cpufreq_cpu_{get|put}() calls
struct cpufreq_policy is already passed as argument to some routines
like: __cpufreq_driver_getavg() and so we don't really need to do
cpufreq_cpu_get() before and cpufreq_cpu_put() in them to get a
policy structure.
Remove them.
Change-Id: I6a9ff8ed483a4f4faacc2ea047d93354dccdb0b6
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Xiaoguang Chen [Wed, 19 Jun 2013 07:00:07 +0000 (15:00 +0800)]
cpufreq: Fix governor start/stop race condition
Cpufreq governors' stop and start operations should be carried out
in sequence. Otherwise, there will be unexpected behavior, like in
the example below.
Suppose there are 4 CPUs and policy->cpu=CPU0, CPU1/2/3 are linked
to CPU0. The normal sequence is:
1) Current governor is userspace. An application tries to set the
governor to ondemand. It will call __cpufreq_set_policy() in
which it will stop the userspace governor and then start the
ondemand governor.
2) Current governor is userspace. The online of CPU3 runs on CPU0.
It will call cpufreq_add_policy_cpu() in which it will first
stop the userspace governor, and then start it again.
If the sequence of the above two cases interleaves, it becomes:
1) Application stops userspace governor
2) Hotplug stops userspace governor
which is a problem, because the governor shouldn't be stopped twice
in a row. What happens next is:
3) Application starts ondemand governor
4) Hotplug starts a governor
In step 4, the hotplug is supposed to start the userspace governor,
but now the governor has been changed by the application to ondemand,
so the ondemand governor is started once again, which is incorrect.
The solution is to prevent policy governors from being stopped
multiple times in a row. A governor should only be stopped once for
one policy. After it has been stopped, no more governor stop
operations should be executed.
Also add a mutex to serialize governor operations.
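A minimal sketch of the guard, with the lock and flag names treated as
illustrative:
    mutex_lock(&cpufreq_governor_lock);
    if ((event == CPUFREQ_GOV_STOP && !policy->governor_enabled) ||
        (event == CPUFREQ_GOV_START && policy->governor_enabled)) {
            /* refuse a second stop (or start) in a row for this policy */
            mutex_unlock(&cpufreq_governor_lock);
            return -EBUSY;
    }
    if (event == CPUFREQ_GOV_STOP)
            policy->governor_enabled = false;
    else if (event == CPUFREQ_GOV_START)
            policy->governor_enabled = true;
    mutex_unlock(&cpufreq_governor_lock);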
Change-Id: Ie380dc7c551f2721b81ceb8e4849efa09345ce4b
[rjw: Changelog. And you owe me a beverage of my choice.]
Signed-off-by: Xiaoguang Chen <chenxg@marvell.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
ChenZhen [Tue, 25 Mar 2014 07:52:50 +0000 (15:52 +0800)]
ASoC: max98090: Add of_match_table
It's necessary to add an of_match_table so that the driver can be matched
against its device tree node.
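A minimal sketch of what such a table looks like; the compatible string is
assumed here and should match the binding used by the board's device tree:
    static const struct of_device_id max98090_of_match[] = {
            { .compatible = "maxim,max98090" },
            { }
    };
    MODULE_DEVICE_TABLE(of, max98090_of_match);

    /* hooked up via .driver.of_match_table = of_match_ptr(max98090_of_match) */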
Change-Id: I7c09aef9a3f54180041398009b9141142de54ea4
Signed-off-by: ChenZhen <zhen1.chen@samsung.com>
ChenZhen [Tue, 25 Mar 2014 01:54:52 +0000 (09:54 +0800)]
ARM: dts: odroidx2: add i2c1 node for max98090
Add i2c1 node for the codec driver max98090.
Change-Id: Ib6afaf7574827540281959a1f8338d50e221df39
Signed-off-by: ChenZhen <zhen1.chen@samsung.com>
Chanho Park [Tue, 25 Mar 2014 05:03:00 +0000 (14:03 +0900)]
packaging: specify ExclusiveArch to arm/aarch64
This patch specifies "ExclusiveArch" for building only arm and
aarch64.
Change-Id: I33a484b478d7848257a4ea8b4375b0ea1994c47e
Signed-off-by: Chanho Park <chanho61.park@samsung.com>
Chanho Park [Mon, 24 Mar 2014 05:15:16 +0000 (14:15 +0900)]
packaging: change modules directory to /boot/lib/modules
This patch changes the default modules directory from /lib/modules
to /boot/lib/modules. The mobile kernel didn't use /lib/modules
for modules because the modules must match the kernel version and
must be recoverable together with the /boot directory. The old kernel used
a modules.img image containing the kernel modules and loop-mounted
it on /lib/modules. Instead of that, we'll use the /boot/lib/modules
directory, which provides the same functionality if the /boot directory
is read-only. We will also add a recovery partition for /boot.
Change-Id: Ie0f0af47f0f6d3fe25c780fb8685df745b587dd7
Signed-off-by: Chanho Park <chanho61.park@samsung.com>
Chanho Park [Mon, 24 Mar 2014 05:12:04 +0000 (14:12 +0900)]
packaging: update kernel version to 3.10.33
Recently, we updated the kernel version from 3.10.19 to 3.10.33.
This patch also updates the kernel version in the spec file.
Change-Id: Ifc5239ea428e2178aa70ddd3b94364fdbb7ebe79
Signed-off-by: Chanho Park <chanho61.park@samsung.com>
Michal Hocko [Wed, 31 Jul 2013 20:53:51 +0000 (13:53 -0700)]
vmpressure: make sure there are no events queued after memcg is offlined
vmpressure is called synchronously from reclaim, where the target_memcg
is guaranteed to be alive, but the eventfd is signaled from work
queue context. This means that the memcg (along with the vmpressure structure
embedded into it) might go away while the work item is pending,
which would result in a use-after-release bug.
There are two possible ways to fix this: either vmpressure pins the memcg
before it schedules vmpr->work and unpins it in vmpressure_work_fn, or it
explicitly flushes the work item from the css_offline context (as
suggested by Tejun).
This patch implements the latter: it introduces vmpressure_cleanup,
which flushes the vmpressure work queue item, and hooks it into
mem_cgroup_css_offline after the memcg itself is cleaned up.
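A minimal sketch of the flush approach (the second option above):
    void vmpressure_cleanup(struct vmpressure *vmpr)
    {
            /* make sure no pending event work can run after the memcg (and
             * the vmpressure structure embedded in it) is freed */
            flush_work(&vmpr->work);
    }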
[akpm@linux-foundation.org: coding-style fixes]
Signed-off-by: Michal Hocko <mhocko@suse.cz>
Reported-by: Tejun Heo <tj@kernel.org>
Cc: Anton Vorontsov <anton.vorontsov@linaro.org>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Li Zefan <lizefan@huawei.com>
Acked-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Change-Id: I1deefca16b6e243f86bd78b84c561db02e7a20e8
Michal Hocko [Wed, 31 Jul 2013 20:53:50 +0000 (13:53 -0700)]
vmpressure: do not check for pending work to prevent new work
The check is racy, and it doesn't give us much anyway since schedule_work()
already handles the case of work that is still queued.
Change-Id: I9946652da98eef2ed0312a5470e69db13fab0e4c
Signed-off-by: Michal Hocko <mhocko@suse.cz>
Reported-by: Tejun Heo <tj@kernel.org>
Cc: Anton Vorontsov <anton.vorontsov@linaro.org>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Li Zefan <lizefan@huawei.com>
Acked-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Michal Hocko [Wed, 31 Jul 2013 20:53:48 +0000 (13:53 -0700)]
vmpressure: change vmpressure::sr_lock to spinlock
Nothing can sleep inside the critical sections protected by
this lock, and those sections are really small, so it doesn't make much
sense to use a mutex for them. Change the lock to a spinlock.
Change-Id: I54c8361a88ec810676cf631f3754c5b860d54b01
Signed-off-by: Michal Hocko <mhocko@suse.cz>
Reported-by: Tejun Heo <tj@kernel.org>
Cc: Anton Vorontsov <anton.vorontsov@linaro.org>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Li Zefan <lizefan@huawei.com>
Reviewed-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Chanho Park [Fri, 21 Mar 2014 07:24:26 +0000 (16:24 +0900)]
tizen: enable zswap/zsmalloc/zbud to compress swap memory
Change-Id: Iccf427f0acecad261b7cd8baa7114ecf9a421914
Signed-off-by: Chanho Park <chanho61.park@samsung.com>
Jonghwa Lee [Thu, 20 Mar 2014 07:19:31 +0000 (16:19 +0900)]
modem: sipc4: Change the manner of receiving data for FMT, RFS type devices
When a packet arrives, the link device calls the iodev's helper function to
receive packets. The way IPC_FMT and IPC_RFS type iodevs receive data differs
from IPC_RAW and IPC_MULTI_RAW. This patch adds a dedicated method of receiving
data for the FMT and RFS types.
This modification references the TIZEN 2.2 kernel.
Change-Id: I01efa7678bbabfbd1011ceba42571fc221313c4d
Signed-off-by: Jonghwa Lee <jonghwa3.lee@samsung.com>
Joonyoung Shim [Thu, 20 Mar 2014 08:09:46 +0000 (17:09 +0900)]
drm/exynos: remove DRIVER_HAVE_IRQ feature
The exynos drm driver cannot support the DRIVER_HAVE_IRQ feature because it
uses its own driver-specific code, instead of the drm framework routines, to
install/uninstall the irq handler.
Change-Id: I5796d7113cbc4283cbb41591384aaa69011818d4
Signed-off-by: Joonyoung Shim <jy0922.shim@samsung.com>
Jan Kara [Tue, 12 Nov 2013 23:11:19 +0000 (15:11 -0800)]
rbtree: fix rbtree_postorder_for_each_entry_safe() iterator
The iterator rbtree_postorder_for_each_entry_safe() relies on pointer
underflow behavior when testing for loop termination. In particular it
expects that
&rb_entry(NULL, type, field)->field
is NULL. But the result of this expression is not defined by a C standard
and some gcc versions (e.g. 4.3.4) assume the above expression can never
be equal to NULL. The net result is an oops because the iteration is not
properly terminated.
Fix the problem by modifying the iterator to avoid pointer underflows.
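The idea can be illustrated with a small self-contained snippet: guard the
container_of()-style arithmetic with a NULL check instead of relying on
undefined pointer underflow (the my_ prefixed names are illustrative, not the
kernel's):
    #include <stddef.h>

    #define my_container_of(ptr, type, member) \
            ((type *)((char *)(ptr) - offsetof(type, member)))

    /* safe variant: never does pointer arithmetic on NULL, so the compiler
     * cannot assume the result is non-NULL and optimize the test away */
    #define my_entry_safe(ptr, type, member) \
            ((ptr) ? my_container_of(ptr, type, member) : NULL)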
Change-Id: I06d5983b5335412be6cb6ebd95db3c682e26ed38
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Cody P Schafer <cody@linux.vnet.ibm.com>
Cc: Michel Lespinasse <walken@google.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Artem Bityutskiy <dedekind1@gmail.com>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Jozsef Kadlecsik <kadlec@blackhole.kfki.hu>
Cc: Pablo Neira Ayuso <pablo@netfilter.org>
Cc: Patrick McHardy <kaber@trash.net>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Theodore Ts'o <tytso@mit.edu>
Cc: <stable@vger.kernel.org> [3.12.x]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Cody P Schafer [Wed, 11 Sep 2013 21:25:11 +0000 (14:25 -0700)]
rbtree: add rbtree_postorder_for_each_entry_safe() helper
Because deletion (of the entire tree) is a relatively common use of the
rbtree_postorder iteration, and because doing it safely means fiddling
with temporary storage, provide a helper to simplify postorder rbtree
iteration.
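A minimal usage sketch of the helper for the tree-teardown case it is meant
for:
    struct my_node {
            struct rb_node rb;
            /* payload ... */
    };

    static void destroy_all(struct rb_root *root)
    {
            struct my_node *pos, *n;

            /* children are visited before their parent, so freeing pos here
             * never touches an already-freed node */
            rbtree_postorder_for_each_entry_safe(pos, n, root, rb)
                    kfree(pos);
            *root = RB_ROOT;
    }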
Change-Id: I8442bc3efc79dca08bfbc6ebb63607cf4e83bcf6
Signed-off-by: Cody P Schafer <cody@linux.vnet.ibm.com>
Reviewed-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Cc: David Woodhouse <David.Woodhouse@intel.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Michel Lespinasse <walken@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Cody P Schafer [Wed, 11 Sep 2013 21:25:10 +0000 (14:25 -0700)]
rbtree: add postorder iteration functions
Postorder iteration yields all of a node's children prior to yielding the
node itself, and this particular implementation also avoids examining the
leaf links in a node after that node has been yielded.
In what I expect will be its most common usage, postorder iteration allows
the deletion of every node in an rbtree without modifying the rbtree nodes
(no _requirement_ that they be nulled) while avoiding referencing child
nodes after they have been "deleted" (most commonly, freed).
I have only updated zswap to use this functionality at this point, but
numerous bits of code (most notably in the filesystem drivers) use a hand
rolled postorder iteration that NULLs child links as it traverses the
tree. Each of those instances could be replaced with this common
implementation.
1 & 2 add rbtree postorder iteration functions.
3 adds testing of the iteration to the rbtree runtime tests
4 allows building the rbtree runtime tests as builtins
5 updates zswap.
This patch:
Add postorder iteration functions for rbtree. These are useful for safely
freeing an entire rbtree without modifying the tree at all.
Change-Id: Ibc97f0e13288030501b5e84defc6603eeb1adca6
Signed-off-by: Cody P Schafer <cody@linux.vnet.ibm.com>
Reviewed-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Cc: David Woodhouse <David.Woodhouse@intel.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Michel Lespinasse <walken@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Dan Streetman [Thu, 23 Jan 2014 23:52:48 +0000 (15:52 -0800)]
mm/zswap.c: change params from hidden to ro
The "compressor" and "enabled" params are currently hidden, this changes
them to read-only, so userspace can tell if zswap is enabled or not and
see what compressor is in use.
Change-Id: I42d8ac2544ccbf981de26d98b772417e183360f6
Signed-off-by: Dan Streetman <ddstreet@ieee.org>
Cc: Vladimir Murzin <murzin.v@gmail.com>
Cc: Bob Liu <bob.liu@oracle.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Weijie Yang <weijie.yang@samsung.com>
Acked-by: Seth Jennings <sjennings@variantweb.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Weijie Yang [Tue, 12 Nov 2013 23:08:27 +0000 (15:08 -0800)]
mm/zswap: refactor the get/put routines
The refcount routines did not fit the kernel get/put semantics exactly.
There were too many judgement statements on the refcount, and it could go
negative.
This patch does the following:
- move the refcount judgement into zswap_entry_put() to hide the resource-free function.
- add a new function, zswap_entry_find_get(), so that callers can easily use
it in the following pattern:
zswap_entry_find_get
.../* do something */
zswap_entry_put
- to eliminate a compile error, move some function declarations.
This patch is based on Minchan Kim <minchan@kernel.org> 's idea and suggestion.
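A rough sketch of the resulting get/put pattern (simplified; the real code
also guards against the count going negative):
    static void zswap_entry_get(struct zswap_entry *entry)
    {
            entry->refcount++;
    }

    /* caller holds tree->lock; frees the entry when the last reference drops */
    static void zswap_entry_put(struct zswap_tree *tree, struct zswap_entry *entry)
    {
            if (--entry->refcount == 0)
                    zswap_free_entry(tree, entry);
    }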
Change-Id: I8510ffe4f49a1a5f00b53be89b2ee33854464db8
Signed-off-by: Weijie Yang <weijie.yang@samsung.com>
Cc: Seth Jennings <sjennings@variantweb.net>
Acked-by: Minchan Kim <minchan@kernel.org>
Cc: Bob Liu <bob.liu@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Weijie Yang [Tue, 12 Nov 2013 23:08:26 +0000 (15:08 -0800)]
mm/zswap: bugfix: memory leak when invalidate and reclaim occur concurrently
Consider the following scenario:
thread 0: reclaim entry x (get refcount, but not call zswap_get_swap_cache_page)
thread 1: call zswap_frontswap_invalidate_page to invalidate entry x.
finished, entry x and its zbud is not freed as its refcount != 0
now, the swap_map[x] = 0
thread 0: now call zswap_get_swap_cache_page
swapcache_prepare return -ENOENT because entry x is not used any more
zswap_get_swap_cache_page return ZSWAP_SWAPCACHE_NOMEM
zswap_writeback_entry do nothing except put refcount
Now, the memory of zswap_entry x and its zpage leak.
Modify:
- check the refcount in fail path, free memory if it is not referenced.
- use ZSWAP_SWAPCACHE_FAIL instead of ZSWAP_SWAPCACHE_NOMEM as the fail path
can be not only caused by nomem but also by invalidate.
Change-Id: I3d76f21a11f2d9ff0bec412c90b3895efce6478d
Signed-off-by: Weijie Yang <weijie.yang@samsung.com>
Reviewed-by: Bob Liu <bob.liu@oracle.com>
Reviewed-by: Minchan Kim <minchan@kernel.org>
Acked-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Weijie Yang [Tue, 12 Nov 2013 23:07:52 +0000 (15:07 -0800)]
mm/zswap: avoid unnecessary page scanning
Add SetPageReclaim() before __swap_writepage() so that page can be moved
to the tail of the inactive list, which can avoid unnecessary page
scanning as this page was reclaimed by swap subsystem before.
Change-Id: If1ed52e3161c332d9f1f6fdd8851e97b5d3b4271
Signed-off-by: Weijie Yang <weijie.yang@samsung.com>
Reviewed-by: Bob Liu <bob.liu@oracle.com>
Reviewed-by: Minchan Kim <minchan@kernel.org>
Acked-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Weijie Yang [Wed, 16 Oct 2013 20:46:54 +0000 (13:46 -0700)]
mm/zswap: bugfix: memory leak when re-swapon
zswap_tree is not freed when swapoff, and it got re-kmalloced in swapon,
so a memory leak occurs.
Free the memory of zswap_tree in zswap_frontswap_invalidate_area().
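A minimal sketch of the fix:
    static void zswap_frontswap_invalidate_area(unsigned type)
    {
            struct zswap_tree *tree = zswap_trees[type];

            /* ... free all entries in tree->rbroot ... */
            kfree(tree);            /* was missing: the tree itself leaked */
            zswap_trees[type] = NULL;
    }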
Signed-off-by: Weijie Yang <weijie.yang@samsung.com>
Reviewed-by: Bob Liu <bob.liu@oracle.com>
Cc: Minchan Kim <minchan@kernel.org>
Reviewed-by: Minchan Kim <minchan@kernel.org>
Cc: <stable@vger.kernel.org>
From: Weijie Yang <weijie.yang@samsung.com>
Subject: mm/zswap: bugfix: memory leak when invalidate and reclaim occur concurrently
Consider the following scenario:
thread 0: reclaim entry x (get refcount, but not call zswap_get_swap_cache_page)
thread 1: call zswap_frontswap_invalidate_page to invalidate entry x.
finished, entry x and its zbud is not freed as its refcount != 0
now, the swap_map[x] = 0
thread 0: now call zswap_get_swap_cache_page
swapcache_prepare return -ENOENT because entry x is not used any more
zswap_get_swap_cache_page return ZSWAP_SWAPCACHE_NOMEM
zswap_writeback_entry do nothing except put refcount
Now, the memory of zswap_entry x and its zpage leak.
Modify:
- check the refcount in fail path, free memory if it is not referenced.
- use ZSWAP_SWAPCACHE_FAIL instead of ZSWAP_SWAPCACHE_NOMEM as the fail path
can be not only caused by nomem but also by invalidate.
[akpm@linux-foundation.org: coding-style fixes]
Signed-off-by: Weijie Yang <weijie.yang@samsung.com>
Reviewed-by: Bob Liu <bob.liu@oracle.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: <stable@vger.kernel.org>
Acked-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Change-Id: I4a875e48714d73bf2c1f75b60d90776365c047de
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Cody P Schafer [Wed, 11 Sep 2013 21:25:33 +0000 (14:25 -0700)]
mm/zswap: use postorder iteration when destroying rbtree
Change-Id: I83b93b7eaadb7c66981f1119eda1119c978d1b9c
Signed-off-by: Cody P Schafer <cody@linux.vnet.ibm.com>
Reviewed-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Cc: David Woodhouse <David.Woodhouse@intel.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Michel Lespinasse <walken@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Jianguo Wu [Wed, 11 Sep 2013 21:21:42 +0000 (14:21 -0700)]
mm/zbud: fix some trivial typos in comments
Change-Id: I1acb8c1f4ff9ab8dbd698380a731daef51d028fc
Signed-off-by: Jianguo Wu <wujianguo@huawei.com>
Cc: Seth Jennings <sjenning@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Sunghan Suh [Wed, 11 Sep 2013 21:20:22 +0000 (14:20 -0700)]
mm/zswap.c: get swapper address_space by using macro
There is a proper macro to get the corresponding swapper address space
from a swap entry. Instead of directly accessing "swapper_spaces" array,
use the "swap_address_space" macro.
Change-Id: I145f9a3fad914ff83853cd80c60af61f40eab1cf
Signed-off-by: Sunghan Suh <sunghan.suh@samsung.com>
Reviewed-by: Bob Liu <bob.liu@oracle.com>
Reviewed-by: Wanpeng Li <liwanp@linux.vnet.ibm.com>
Acked-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Heesub Shin [Wed, 31 Jul 2013 20:53:40 +0000 (13:53 -0700)]
mm: zbud: fix condition check on allocation size
zbud_alloc() verifies the allocation size limit incorrectly. It
should deny allocation requests greater than (PAGE_SIZE -
ZHDR_SIZE_ALIGNED - CHUNK_SIZE), not (PAGE_SIZE - ZHDR_SIZE_ALIGNED),
which leaves no remaining space for the buddy. There is no point in
spending an entire zbud page to store only a single page, since that
provides no benefit.
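A minimal sketch of the corrected check in zbud_alloc(), using the limits
named above:
    if (size > PAGE_SIZE - ZHDR_SIZE_ALIGNED - CHUNK_SIZE)
            return -ENOSPC;         /* no room left for a buddy in this zbud page */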
Change-Id: Ief305088b6983c01426300a0638520f51b17ad2a
Signed-off-by: Heesub Shin <heesub.shin@samsung.com>
Acked-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Cc: Bob Liu <bob.liu@oracle.com>
Cc: Dongjun Shin <d.j.shin@samsung.com>
Cc: Sunae Seo <sunae.seo@samsung.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Minchan Kim [Thu, 30 Jan 2014 23:46:06 +0000 (15:46 -0800)]
zram: remove zram->lock in read path and change it with mutex
Finally, we have separated the zram->lock dependency from the 32-bit stat/table
handling, so there is no reason to use a rw_semaphore between the read and
write paths. This patch removes the lock from the read path entirely and
replaces the rw_semaphore with a mutex. So we go from:
old:
read-read: OK
read-write: NO
write-write: NO
Now:
read-read: OK
read-write: OK
write-write: NO
The data below shows that the mixed workload performs about 11 times better,
and there is also an improvement on the write-write path because the current
rw_semaphore doesn't support SPIN_ON_OWNER. That is a side effect, but a good
one for us. Write-related tests perform better (from 61% to 1058%), while the
read path shows mixed results (from -2.22% to 1.45%) that are all marginal
within stddev.
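A rough sketch of the resulting read/write paths (simplified; the benchmark
configuration and results follow):
    static int zram_bvec_rw(struct zram *zram, struct bio_vec *bvec, u32 index,
                            int offset, struct bio *bio, int rw)
    {
            int ret;

            if (rw == READ) {
                    /* no zram->lock here any more; the table is protected by tb_lock */
                    ret = zram_bvec_read(zram, bvec, index, offset, bio);
            } else {
                    mutex_lock(&zram->lock);        /* was down_write(&zram->lock) */
                    ret = zram_bvec_write(zram, bvec, index, offset);
                    mutex_unlock(&zram->lock);
            }
            return ret;
    }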
CPU 12
iozone -t -T -l 12 -u 12 -r 16K -s 60M -I +Z -V 0
==Initial write ==Initial write
records: 10 records: 10
avg: 516189.16 avg: 839907.96
std: 22486.53 (4.36%) std: 47902.17 (5.70%)
max: 546970.60 max: 909910.35
min: 481131.54 min: 751148.38
==Rewrite ==Rewrite
records: 10 records: 10
avg: 509527.98 avg: 1050156.37
std: 45799.94 (8.99%) std: 40695.44 (3.88%)
max: 611574.27 max: 1111929.26
min: 443679.95 min: 980409.62
==Read ==Read
records: 10 records: 10
avg: 4408624.17 avg: 4472546.76
std: 281152.61 (6.38%) std: 163662.78 (3.66%)
max: 4867888.66 max: 4727351.03
min: 4058347.69 min: 4126520.88
==Re-read ==Re-read
records: 10 records: 10
avg: 4462147.53 avg: 4363257.75
std: 283546.11 (6.35%) std: 247292.63 (5.67%)
max: 4912894.44 max: 4677241.75
min: 4131386.50 min: 4035235.84
==Reverse Read ==Reverse Read
records: 10 records: 10
avg: 4565865.97 avg: 4485818.08
std: 313395.63 (6.86%) std: 248470.10 (5.54%)
max: 5232749.16 max: 4789749.94
min: 4185809.62 min: 3963081.34
==Stride read ==Stride read
records: 10 records: 10
avg: 4515981.80 avg: 4418806.01
std: 211192.32 (4.68%) std: 212837.97 (4.82%)
max: 4889287.28 max: 4686967.22
min: 4210362.00 min: 4083041.84
==Random read ==Random read
records: 10 records: 10
avg: 4410525.23 avg: 4387093.18
std: 236693.22 (5.37%) std: 235285.23 (5.36%)
max: 4713698.47 max: 4669760.62
min: 4057163.62 min: 3952002.16
==Mixed workload ==Mixed workload
records: 10 records: 10
avg: 243234.25 avg: 2818677.27
std: 28505.07 (11.72%) std: 195569.70 (6.94%)
max: 288905.23 max: 3126478.11
min: 212473.16 min: 2484150.69
==Random write ==Random write
records: 10 records: 10
avg: 555887.07 avg: 1053057.79
std: 70841.98 (12.74%) std: 35195.36 (3.34%)
max: 683188.28 max: 1096125.73
min: 437299.57 min: 992481.93
==Pwrite ==Pwrite
records: 10 records: 10
avg: 501745.93 avg: 810363.09
std: 16373.54 (3.26%) std: 19245.01 (2.37%)
max: 518724.52 max: 833359.70
min: 464208.73 min: 765501.87
==Pread ==Pread
records: 10 records: 10
avg: 4539894.60 avg: 4457680.58
std: 197094.66 (4.34%) std: 188965.60 (4.24%)
max: 4877170.38 max: 4689905.53
min: 4226326.03 min: 4095739.72
Change-Id: I7d2299149ce6982d76caaaadb936b7385cbee515
Signed-off-by: Minchan Kim <minchan@kernel.org>
Cc: Nitin Gupta <ngupta@vflare.org>
Tested-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Cc: Jerome Marchand <jmarchan@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Minchan Kim [Thu, 30 Jan 2014 23:46:04 +0000 (15:46 -0800)]
zram: remove workqueue for freeing removed pending slot
Commit a0c516cbfc74 ("zram: don't grab mutex in zram_slot_free_noity")
introduced the pending-free-request code to avoid sleeping on a mutex under a
spinlock, and it was a mess which made the code lengthy and increased
overhead.
Now we don't need zram->lock any more to free a slot, so this patch
reverts that change; tb_lock protects the table instead.
Change-Id: I3429e568bab78c197da3fc5cbd5afb9355bf7d21
Signed-off-by: Minchan Kim <minchan@kernel.org>
Cc: Nitin Gupta <ngupta@vflare.org>
Tested-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Cc: Jerome Marchand <jmarchan@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Minchan Kim [Thu, 30 Jan 2014 23:46:03 +0000 (15:46 -0800)]
zram: introduce zram->tb_lock
Currently, the zram table is protected by zram->lock, but it's a rather
coarse-grained lock and it hurts scalability.
Let's use our own rwlock instead of depending on zram->lock. This patch
adds new locking, so it could obviously slow things down, but it is just
preparation for removing the coarse-grained rw_semaphore (ie, zram->lock)
which is the hurdle for zram scalability.
The final patch in this series will remove the lock from the read path
and replace the rw_semaphore with a mutex in the write path. As a bonus, we
can drop the pending-slot-free mess in the next patch.
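A rough sketch of the new locking (simplified):
    struct zram_meta {
            rwlock_t tb_lock;       /* protects table */
            struct table *table;
            /* ... */
    };

    /* readers of the table */
    read_lock(&meta->tb_lock);
    handle = meta->table[index].handle;
    read_unlock(&meta->tb_lock);

    /* writers (freeing/updating an entry) */
    write_lock(&meta->tb_lock);
    zram_free_page(zram, index);
    write_unlock(&meta->tb_lock);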
Change-Id: If5456f871bc6b0d6ee1f8218fde3f5a13d261c8b
Signed-off-by: Minchan Kim <minchan@kernel.org>
Cc: Nitin Gupta <ngupta@vflare.org>
Tested-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Cc: Jerome Marchand <jmarchan@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Minchan Kim [Thu, 30 Jan 2014 23:46:02 +0000 (15:46 -0800)]
zram: use atomic operation for stat
Some of the fields in zram->stats are protected by zram->lock, which is
rather coarse-grained, so let's use atomic operations without explicit
locking.
This patch prepares for removing the read path's dependency on zram->lock,
which is a very coarse-grained rw_semaphore. Of course, this patch adds
new atomic operations so it might slow things down, but my 12-CPU test
couldn't spot any regression. All gains/losses are marginal within stddev.
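A rough sketch of the change (simplified); the iozone results follow:
    /* before: plain counters bumped under zram->lock */
    /* after:  atomic64_t counters, no lock needed     */
    struct zram_stats {
            atomic64_t compr_size;  /* compressed size of pages stored */
            atomic64_t num_reads;   /* failed + successful */
            atomic64_t num_writes;  /* --do-- */
            /* ... */
    };

    atomic64_inc(&zram->stats.num_reads);
    atomic64_add(clen, &zram->stats.compr_size);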
iozone -t -T -l 12 -u 12 -r 16K -s 60M -I +Z -V 0
==Initial write ==Initial write
records: 50 records: 50
avg: 412875.17 avg: 415638.23
std: 38543.12 (9.34%) std: 36601.11 (8.81%)
max: 521262.03 max: 502976.72
min: 343263.13 min: 351389.12
==Rewrite ==Rewrite
records: 50 records: 50
avg: 416640.34 avg: 397914.33
std: 60798.92 (14.59%) std: 46150.42 (11.60%)
max: 543057.07 max: 522669.17
min: 304071.67 min: 316588.77
==Read ==Read
records: 50 records: 50
avg: 4147338.63 avg: 4070736.51
std: 179333.25 (4.32%) std: 223499.89 (5.49%)
max: 4459295.28 max: 4539514.44
min: 3753057.53 min: 3444686.31
==Re-read ==Re-read
records: 50 records: 50
avg: 4096706.71 avg: 4117218.57
std: 229735.04 (5.61%) std: 171676.25 (4.17%)
max: 4430012.09 max: 4459263.94
min: 2987217.80 min: 3666904.28
==Reverse Read ==Reverse Read
records: 50 records: 50
avg: 4062763.83 avg: 4078508.32
std: 186208.46 (4.58%) std: 172684.34 (4.23%)
max: 4401358.78 max: 4424757.22
min: 3381625.00 min: 3679359.94
==Stride read ==Stride read
records: 50 records: 50
avg: 4094933.49 avg: 4082170.22
std: 185710.52 (4.54%) std: 196346.68 (4.81%)
max: 4478241.25 max: 4460060.97
min: 3732593.23 min: 3584125.78
==Random read ==Random read
records: 50 records: 50
avg: 4031070.04 avg: 4074847.49
std: 192065.51 (4.76%) std: 206911.33 (5.08%)
max: 4356931.16 max: 4399442.56
min: 3481619.62 min: 3548372.44
==Mixed workload ==Mixed workload
records: 50 records: 50
avg: 149925.73 avg: 149675.54
std: 7701.26 (5.14%) std: 6902.09 (4.61%)
max: 191301.56 max: 175162.05
min: 133566.28 min: 137762.87
==Random write ==Random write
records: 50 records: 50
avg: 404050.11 avg: 393021.47
std: 58887.57 (14.57%) std: 42813.70 (10.89%)
max: 601798.09 max: 524533.43
min: 325176.99 min: 313255.34
==Pwrite ==Pwrite
records: 50 records: 50
avg: 411217.70 avg: 411237.96
std: 43114.99 (10.48%) std: 33136.29 (8.06%)
max: 530766.79 max: 471899.76
min: 320786.84 min: 317906.94
==Pread ==Pread
records: 50 records: 50
avg: 4154908.65 avg: 4087121.92
std: 151272.08 (3.64%) std: 219505.04 (5.37%)
max: 4459478.12 max: 4435857.38
min: 3730512.41 min: 3101101.67
Change-Id: Ib0d538597fbc4a2037b0464f8d62fb73fa0b0c24
Signed-off-by: Minchan Kim <minchan@kernel.org>
Cc: Nitin Gupta <ngupta@vflare.org>
Tested-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Cc: Jerome Marchand <jmarchan@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Minchan Kim [Thu, 30 Jan 2014 23:46:01 +0000 (15:46 -0800)]
zram: remove unnecessary free
Commit a0c516cbfc74 ("zram: don't grab mutex in zram_slot_free_noity")
introduced a pending zram slot free in zram's write path, to cover a slot
free that is missed because of a memory allocation failure in
zram_slot_free_notify. It is not necessary, because we have already freed
the slot right before overwriting it.
Change-Id: I5048bce2ca8c377d9539f0397a04bddc5f5a5e92
Signed-off-by: Minchan Kim <minchan@kernel.org>
Cc: Nitin Gupta <ngupta@vflare.org>
Cc: Jerome Marchand <jmarchan@redhat.com>
Tested-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Minchan Kim [Thu, 30 Jan 2014 23:46:00 +0000 (15:46 -0800)]
zram: delay pending free request in read path
Sergey reported that we don't need to handle pending free requests on every
I/O, so this patch removes that handling from the read path while keeping it
in the write path.
Let's consider the example below.
The swap subsystem asks zram to free block "A" via swap_slot_free_notify, but
zram keeps the request pending without really freeing it. The swap subsystem
then allocates block "A" for new data, the long-pending request finally gets
handled, and zram blindly frees the new data on block "A". :(
That's why we can't remove the handling of pending free requests right before
a zram write.
Change-Id: Ib4409bfad7b1ae263e2708c74875c322da72c7b3
Signed-off-by: Minchan Kim <minchan@kernel.org>
Reported-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Tested-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Cc: Nitin Gupta <ngupta@vflare.org>
Cc: Jerome Marchand <jmarchan@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Minchan Kim [Thu, 30 Jan 2014 23:45:58 +0000 (15:45 -0800)]
zram: fix race between reset and flushing pending work
Dan and Sergey reported that there is a race between reset and flushing
of pending work: it could cause an oops because reset frees zram->meta
while zram_slot_free can still access zram->meta if a new request is
added during the race window.
This patch moves the flush after taking init_lock, so it prevents new
requests and closes the race.
Change-Id: Ibc09001d1ad4a4ef852d661384259b53f0f9c19b
Signed-off-by: Minchan Kim <minchan@kernel.org>
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Cc: Nitin Gupta <ngupta@vflare.org>
Cc: Jerome Marchand <jmarchan@redhat.com>
Tested-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Minchan Kim [Thu, 30 Jan 2014 23:45:57 +0000 (15:45 -0800)]
zsmalloc: add maintainers
Add maintainer information for zsmalloc into the MAINTAINERS file.
Signed-off-by: Minchan Kim <minchan@kernel.org>
Cc: Nitin Gupta <ngupta@vflare.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Conflicts:
MAINTAINERS
Change-Id: I315effbbfbd5aabb96dcadd1d35d9592ee24182d
Minchan Kim [Thu, 30 Jan 2014 23:45:56 +0000 (15:45 -0800)]
zram: add zram maintainers
Add maintainer information for zram into the MAINTAINERS file.
Change-Id: I8a6b11120f55b76aeccf18ce004293721e48081a
Signed-off-by: Minchan Kim <minchan@kernel.org>
Cc: Nitin Gupta <ngupta@vflare.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Minchan Kim [Thu, 30 Jan 2014 23:45:55 +0000 (15:45 -0800)]
zsmalloc: add copyright
Add my copyright to the zsmalloc source code which I maintain.
Change-Id: Ic3dd8dd11297ef902f4cb913e40c52249282d947
Signed-off-by: Minchan Kim <minchan@kernel.org>
Cc: Nitin Gupta <ngupta@vflare.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Minchan Kim [Thu, 30 Jan 2014 23:45:55 +0000 (15:45 -0800)]
zram: add copyright
Add my copyright to the zram source code which I maintain.
Change-Id: I8816064aa958c9304c53fae0972e011060cc2bcc
Signed-off-by: Minchan Kim <minchan@kernel.org>
Cc: Nitin Gupta <ngupta@vflare.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Minchan Kim [Thu, 30 Jan 2014 23:45:54 +0000 (15:45 -0800)]
zram: remove old private project comment
Remove the old private compcache project address so that upcoming patches
are sent to LKML, because the Linux kernel community will take care of it
from now on.
Change-Id: Ia5bf208791c8fa6e96161fd9fb842d6829f14698
Signed-off-by: Minchan Kim <minchan@kernel.org>
Cc: Nitin Gupta <ngupta@vflare.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Minchan Kim [Thu, 30 Jan 2014 23:45:52 +0000 (15:45 -0800)]
zram: promote zram from staging
Zram has lived in staging for a LONG LONG time and has been
fixed/improved by many contributors, so the code is clean and stable now. Of
course, there are lots of products using zram in real practice.
The major TV companies have been using zram as swap for two years, our
production team recently released an Android smartphone that uses zram as
swap, and Android KitKat has started to use zram for low-memory smartphones.
There was also a report that Google released ChromeOS with zram, CyanogenMod
has been using zram for a long time, and I heard some distros use the zram
block device for tmpfs. In addition, I saw reports from many other people;
for example, Lubuntu has started to use it.
The benefit of zram is very clear. In my experience, one of the
benefits was removing jitter in a video application under background memory
pressure. Part of that comes from more efficient memory usage through
compression, but the bigger issue is whether swap is present in the system at
all. Recent mobile platforms use Java, so there are many anonymous pages, but
embedded systems are normally reluctant to use eMMC or an SD card as swap
because of wear-leveling and latency issues. If we do not use
swap, we can't reclaim anonymous pages and, in the end, we could
hit the OOM killer. :(
Even when we do have real storage as swap, it is a problem too, because slow
swap storage performance sometimes ends up making the system very
unresponsive.
Quote from Luigi on Google
"Since Chrome OS was mentioned: the main reason why we don't use swap
to a disk (rotating or SSD) is because it doesn't degrade gracefully
and leads to a bad interactive experience. Generally we prefer to
manage RAM at a higher level, by transparently killing and restarting
processes. But we noticed that zram is fast enough to be competitive
with the latter, and it lets us make more efficient use of the
available RAM. " and he announced.
http://www.spinics.net/lists/linux-mm/msg57717.html
Another use case is zram as a plain block device. Zram is a block device,
so anyone can format it and mount it; some people on the internet use zram
for /var/tmp.
http://forums.gentoo.org/viewtopic-t-838198-start-0.html
Let's promote zram and enhance/maintain it instead of removing it.
Signed-off-by: Minchan Kim <minchan@kernel.org>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Acked-by: Nitin Gupta <ngupta@vflare.org>
Acked-by: Pekka Enberg <penberg@kernel.org>
Cc: Bob Liu <bob.liu@oracle.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Hugh Dickins <hughd@google.com>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Luigi Semenzato <semenzato@google.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Rik van Riel <riel@redhat.com>
Cc: Seth Jennings <sjenning@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Conflicts:
drivers/block/Makefile
Change-Id: I368f76a5368fffaacbf349cfd78f79cba5da0a0d
Minchan Kim [Thu, 30 Jan 2014 23:45:50 +0000 (15:45 -0800)]
zsmalloc: move it under mm
This patch moves zsmalloc under the mm directory.
Before that, this description explains why we have needed a custom
allocator.
Zsmalloc is a new slab-based memory allocator for storing compressed
pages. It is designed for low fragmentation and a high allocation success
rate for large, but <= PAGE_SIZE, allocations.
zsmalloc differs from the kernel slab allocator in two primary ways to
achieve these design goals.
zsmalloc never requires high order page allocations to back slabs, or
"size classes" in zsmalloc terms. Instead it allows multiple
single-order pages to be stitched together into a "zspage" which backs
the slab. This allows for higher allocation success rate under memory
pressure.
Also, zsmalloc allows objects to span page boundaries within the zspage.
This allows for lower fragmentation than could be had with the kernel
slab allocator for objects between PAGE_SIZE/2 and PAGE_SIZE. With the
kernel slab allocator, if a page compresses to 60% of its original size,
the memory savings gained through compression are lost in fragmentation
because another object of the same size can't be stored in the leftover
space.
This ability to span pages results in zsmalloc allocations not being
directly addressable by the user. The user is given a
non-dereferenceable handle in response to an allocation request. That
handle must be mapped, using zs_map_object(), which returns a pointer to
the mapped region that can be used. The mapping is necessary since the
object data may reside in two different non-contiguous pages.
zsmalloc fulfills the allocation needs of zram perfectly.
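A minimal usage sketch of the API described above:
    unsigned long handle = zs_malloc(pool, len);
    void *dst;

    if (!handle)
            return -ENOMEM;

    /* the handle is not a pointer; it must be mapped before use */
    dst = zs_map_object(pool, handle, ZS_MM_WO);
    memcpy(dst, src, len);
    zs_unmap_object(pool, handle);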
[sjenning@linux.vnet.ibm.com: borrow Seth's quote]
Signed-off-by: Minchan Kim <minchan@kernel.org>
Acked-by: Nitin Gupta <ngupta@vflare.org>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Bob Liu <bob.liu@oracle.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Hugh Dickins <hughd@google.com>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Luigi Semenzato <semenzato@google.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Seth Jennings <sjenning@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Conflicts:
mm/Kconfig
Change-Id: I57dad090a3c48db4a67c88e6fa20a4bdbb82d984
Nitin Cupta [Wed, 11 Dec 2013 02:04:37 +0000 (11:04 +0900)]
zsmalloc: add more comment
This patch adds lots of comments, which will help others review and enhance
the code.
Change-Id: I743596bf18e8acf1082c21437c2caef5f15aad71
Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Signed-off-by: Nitin Gupta <ngupta@vflare.org>
Signed-off-by: Minchan Kim <minchan@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Minchan Kim [Wed, 11 Dec 2013 02:04:36 +0000 (11:04 +0900)]
zsmalloc: add Kconfig for enabling page table method
Zsmalloc has two methods, 1) copy-based and 2) PTE-based, to
access objects that span two pages.
You can see the history of why we support the two approaches in [1].
But hard-coding which architectures use the PTE-based method was a bad
choice, because there are lots of SoCs within one architecture and they can
have different cache sizes, CPU speeds and so on, so it is better to expose
this to the user as a selectable Kconfig option, as Andrew Morton suggested.
[1] https://lkml.org/lkml/2012/7/11/58
Change-Id: Ieedde9cfac0a7d9bbcb3d5d5b36318efd41132eb
Acked-by: Nitin Gupta <ngupta@vflare.org>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Minchan Kim <minchan@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Rashika Kheria [Sun, 10 Nov 2013 16:43:53 +0000 (22:13 +0530)]
Staging: zram: Fix memory leak by refcount mismatch
As suggested by Minchan Kim and Jerome Marchand "The code in reset_store
get the block device (bdget_disk()) but it does not put it (bdput()) when
it's done using it. The usage count is therefore incremented but never
decremented."
This patch also adds bdput() calls in all error cases.
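A minimal sketch of the pattern being fixed; do_reset_work() is a hypothetical
stand-in for the body of reset_store():
    bdev = bdget_disk(zram->disk, 0);   /* takes a reference on the bdev */
    if (!bdev)
            return -ENOMEM;

    ret = do_reset_work(zram);          /* hypothetical helper */

    bdput(bdev);    /* must run on success *and* on every error path */
    return ret;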
Change-Id: I92198df5ff42242ef3627e5d3db4acece7940d61
Acked-by: Minchan Kim <minchan@kernel.org>
Acked-by: Jerome Marchand <jmarchan@redhat.com>
Cc: stable@vger.kernel.org
Signed-off-by: Rashika Kheria <rashika.kheria@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Rashika Kheria [Wed, 30 Oct 2013 13:06:32 +0000 (18:36 +0530)]
Staging: zram: Fix access of NULL pointer
This patch fixes the bug in reset_store caused by accessing NULL pointer.
The bdev gets its value from bdget_disk() which could fail when memory
pressure is severe and hence can return NULL because allocation of
inode in bdget could fail.
Hence, this patch introduces a check for bdev to prevent reference to a
NULL pointer in the later part of the code. It also removes unnecessary
check of bdev for fsync_bdev().
Change-Id: I47cdcc08076df7958a19406b8502627802c7bd07
Cc: stable <stable@vger.kernel.org>
Acked-by: Jerome Marchand <jmarchan@redhat.com>
Signed-off-by: Rashika Kheria <rashika.kheria@gmail.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Rashika Kheria [Wed, 30 Oct 2013 13:13:32 +0000 (18:43 +0530)]
Staging: zram: Fix variable dereferenced before check
This patch fixes the following Smatch warning in zram_drv.c:
drivers/staging/zram/zram_drv.c:899 destroy_device() warn: variable dereferenced before check 'zram->disk' (see line 896)
Change-Id: If920cb9e1328289c561de89e074523071cd772b5
Acked-by: Minchan Kim <minchan@kernel.org>
Acked-by: Jerome Marchand <jmarchan@redhat.com>
Signed-off-by: Rashika Kheria <rashika.kheria@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Majunath Goudar [Tue, 8 Oct 2013 10:32:15 +0000 (16:02 +0530)]
zsmalloc: Fix "map_vm_area" undefined reference errors.
This patch adds an MMU dependency for configuring ZSMALLOC in
drivers/staging/zsmalloc/Kconfig. Without this patch, the build
can fail. This was observed during randconfig testing, in which
ZSMALLOC was enabled without MMU being enabled. The following was
the error:
LD vmlinux
drivers/built-in.o: In function `__zs_map_object':
drivers/staging/zsmalloc/zsmalloc-main.c:650: undefined reference to `map_vm_area'
make: *** [vmlinux] Error 1
Change-Id: Ia78fb6b91949e85f3b9fee18eaffa18205aaccb9
Signed-off-by: Manjunath Goudar <csmanjuvijay@gmail.com>
Cc: Nitin Gupta <ngupta@vflare.org>
Cc: Seth Jennings <sjenning@linux.vnet.ibm.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: devel@driverdev.osuosl.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Greg Kroah-Hartman [Thu, 12 Sep 2013 22:41:31 +0000 (15:41 -0700)]
Revert "staging: zram: Add auto loading of module if user opens /dev/zram."
This reverts commit c70bda992c12e593e411c02a52e4bd6985407539.
It's incorrect, Kay writes:
Please just remove it. "devname" is meant to be used for
single-instance devices with a static dev_t, never for things
like zramX.
It will not do anything useful here, it does nothing really
without a statically assigned dev_t, and it should not be used
for devices of this kind anyway.
Change-Id: Ia1503b3cff95e5dd31c934f420025ea252f09129
Reported-by: Tom Gundersen <teg@jklm.no>
Reported-by: Kay Sievers <kay@vrfy.org>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Minchan Kim [Mon, 12 Aug 2013 06:13:56 +0000 (15:13 +0900)]
zram: don't grab mutex in zram_slot_free_notify
[1] introduced down_write in zram_slot_free_notify to prevent a race
between zram_slot_free_notify and zram_bvec_[read|write]. The race
could happen if somebody with the right permission to open the swap device
reads the swap device while it is being used by swap in parallel.
However, zram_slot_free_notify is called while holding the swap layer's
spin_lock, so we must not take the mutex there. Otherwise, lockdep
warns about it.
This patch adds a new list and a workqueue to handle free slots, so
zram_slot_free_notify just registers the slot index to be freed and
queues the request on the workqueue. When the work item runs, it takes
the mutex, so there is no problem any more.
When any I/O is issued, zram handles the pending slot-free requests
registered by zram_slot_free_notify right before handling the issued
request, because the work item may not have run yet and the zram I/O
path could otherwise miss them (see the sketch below).
Lastly, when zram is reset, flush_work handles all of the pending free
requests, so there is no memory leak.
NOTE: If the GFP_ATOMIC kmalloc in zram_slot_free_notify fails, the slot
will be freed when the next write I/O writes to that slot.
[1] [
57ab0485, zram: use zram->lock to protect zram_free_page()
in swap free notify path]
* from v2
* refactoring
* from v1
* totally redesign
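A minimal sketch of the registration path described above; the struct
layout, lock and field names are assumptions for illustration, not the
patch itself:
    /* Hedged sketch: defer the actual free to a work item that may sleep. */
    struct zram_slot_free {
        unsigned long index;
        struct zram_slot_free *next;
    };

    static void zram_slot_free_notify(struct block_device *bdev,
                                      unsigned long index)
    {
        struct zram *zram = bdev->bd_disk->private_data;
        struct zram_slot_free *free_rq;

        /* Called under the swap layer's spinlock: no mutex allowed here. */
        free_rq = kmalloc(sizeof(*free_rq), GFP_ATOMIC);
        if (!free_rq)
            return;             /* slot is reclaimed by the next write to it */

        free_rq->index = index;
        spin_lock(&zram->slot_free_lock);
        free_rq->next = zram->slot_free_rq;
        zram->slot_free_rq = free_rq;
        spin_unlock(&zram->slot_free_lock);

        schedule_work(&zram->free_work);    /* the work item takes zram->lock */
    }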
Change-Id: I7f478fa9b00215eb0a3e16433e607a2f73151f27
Cc: Nitin Gupta <ngupta@vflare.org>
Cc: Jiang Liu <jiang.liu@huawei.com>
Cc: stable@vger.kernel.org
Signed-off-by: Minchan Kim <minchan@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Minchan Kim [Mon, 12 Aug 2013 06:13:55 +0000 (15:13 +0900)]
zram: fix invalid memory access
[1] tried to fix an invalid memory access on zram->disk, but it didn't fix
it properly because get_disk fails during the module exit path.
Actually, we don't need to reset zram->disk's capacity to zero in the
module exit path, so this patch introduces a new argument "reset_capacity"
to zram_reset_device and only resets the capacity when reset_store is
called.
[1]
6030ea9b, zram: avoid invalid memory access in zram_exit()
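A minimal sketch of the new argument, with the function internals elided
and the lock name assumed:
    /* Hedged sketch: only the sysfs reset path asks for the capacity reset. */
    static void zram_reset_device(struct zram *zram, bool reset_capacity)
    {
        down_write(&zram->init_lock);
        /* ... free compressed pages and meta data, reset stats ... */
        if (reset_capacity)
            set_capacity(zram->disk, 0);
        up_write(&zram->init_lock);
    }

    /* reset_store():  zram_reset_device(zram, true);  */
    /* zram_exit():    zram_reset_device(zram, false); */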
Change-Id: Ibc554559bbd533bd986c6001ffb9cfb22f8c9f49
Cc: Nitin Gupta <ngupta@vflare.org>
Cc: Jiang Liu <jiang.liu@huawei.com>
Cc: stable@vger.kernel.org
Signed-off-by: Minchan Kim <minchan@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Kumar Gaurav [Thu, 8 Aug 2013 18:23:24 +0000 (23:53 +0530)]
Staging: zram: zram_drv.c: Fixed Error of trailing whitespace
Fixed by removing trailing whitespace
Change-Id: Ia789db213c6917b2f9eeb795d573d9d2ae4c19cd
Signed-off-by: Kumar Gaurav <kumargauravgupta3@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Sergey Senozhatsky [Wed, 26 Jun 2013 12:28:39 +0000 (15:28 +0300)]
staging: zram: protect zram_reset_device() call
Commit
9b3bb7abcdf2df0f1b2657e6cbc9d06bc2b3b36f (remove
zram_sysfs file (v2)) accidentally made zram_reset_device()
racy. Protect zram_reset_device() call with zram->lock.
Change-Id: Ia8cd8eb9eae37ccc142efc1c21980076f2aa007d
Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Acked-by: Jerome Marchand <jmarchand@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Sunghan Suh [Wed, 3 Jul 2013 11:10:05 +0000 (20:10 +0900)]
zram: prevent data loss in error cases of function zram_bvec_write()
In zram_bvec_write(), the previous data at the index was freed by
zram_free_page() before the new data was stored.
If compression or zs_malloc fails, there is no way to restore the old data.
Therefore, free the previous data only when the update is about to happen.
Also, there is no need to check whether the table entry is empty outside of
zram_free_page(), because the function already checks that internally.
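A minimal sketch of the reordering (a fragment; the handle, pool and table
field names are assumptions for illustration):
    /* Hedged sketch: allocate first, free the old data only on success. */
    unsigned long handle = zs_malloc(zram->mem_pool, clen);

    if (!handle)
        return -ENOMEM;         /* previous data at index is still intact */

    /*
     * Safe to drop the previous data now: the update is about to happen.
     * zram_free_page() checks for an empty table entry internally.
     */
    zram_free_page(zram, index);
    zram->table[index].handle = handle;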
Change-Id: Ieeecfa00839cd4440781ece7f9bbed123e3651d5
Signed-off-by: Sunghan Suh <sunghan.suh@samsung.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Konrad Rzeszutek Wilk [Fri, 12 Jul 2013 18:20:52 +0000 (14:20 -0400)]
staging: zram: Add auto loading of module if user opens /dev/zram.
Greg spotted that said driver is not subscribing to the automagic
mechanism of auto-loading if a user tries to open /dev/zram.
This fixes it.
Change-Id: I934d034752f1c677076f439b95da178b45384243
CC: Minchan Kim <minchan@kernel.org>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Sunghan Suh [Fri, 12 Jul 2013 07:08:13 +0000 (16:08 +0900)]
staging: zsmalloc: access page->private by using page_private macro
Change-Id: I3fb9c7c58baeac6268961750ed859518c421b770
Signed-off-by: Sunghan Suh <sunghan.suh@samsung.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Sergey Senozhatsky [Sat, 22 Jun 2013 14:21:00 +0000 (17:21 +0300)]
zram: allow request end to coincide with disksize
Pass the valid_io_request() check when the request end coincides with the
disksize (end equals the bound); fail only when a request attempts to go
beyond that bound.
Without this, mkfs.ext2 produces numerous errors:
[ 2164.632747] quiet_error: 1 callbacks suppressed
[ 2164.633260] Buffer I/O error on device zram0, logical block 153599
[ 2164.633265] lost page write due to I/O error on zram0
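A minimal sketch of the intended boundary check; the bio field names follow
the pre-3.14 layout and, like SECTOR_SHIFT, are assumptions here:
    /* Hedged sketch: end == bound is valid, only end > bound is rejected. */
    static inline int valid_io_request(struct zram *zram, struct bio *bio)
    {
        u64 start = bio->bi_sector << SECTOR_SHIFT;
        u64 end = start + bio->bi_size;
        u64 bound = zram->disksize;

        if (unlikely(start >= bound || end > bound || end < start))
            return 0;
        return 1;
    }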
Change-Id: Ife5ddef82610e1470d233ce4bdf042ed738064b6
Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Sergey Senozhatsky [Sat, 22 Jun 2013 00:21:18 +0000 (03:21 +0300)]
zram: remove zram_sysfs file (v2)
Move the zram sysfs code into zram_drv and remove the zram_sysfs.c file.
This makes it possible to turn a number of previously exported zram
functions, used only by zram sysfs (e.g. the internal
zram_meta_alloc/free()), into static functions. We can also drop the
zram_drv wrapper functions used from zram sysfs,
e.g. the zram_reset_device()/__zram_reset_device() pair.
v2: as suggested by Greg K-H, move MODULE description to the
bottom of the file.
Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Conflicts:
drivers/staging/zram/zram_drv.c
Change-Id: Icb484d81dde622e474a097f7ad7a2b7dfa016cdb
Jiang Liu [Thu, 6 Jun 2013 16:07:31 +0000 (00:07 +0800)]
zram: use atomic64_xxx() to replace zram_stat64_xxx()
Use atomic64_xxx() to replace open-coded zram_stat64_xxx().
Some architectures have native support of atomic64 operations,
so we can get rid of the spin_lock() in zram_stat64_xxx().
On the other hand, on platforms that use the generic atomic64
implementation, it may cause an extra save/restore of the interrupt flag,
so it's a tradeoff.
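A minimal before/after sketch of the conversion; the stat field and lock
names are assumptions for illustration:
    /* Before (hedged sketch): open-coded 64-bit counter under a spinlock. */
    spin_lock(&zram->stat64_lock);
    zram->stats.num_writes++;
    spin_unlock(&zram->stat64_lock);

    /* After: lock-free on architectures with native atomic64 support. */
    atomic64_inc(&zram->stats.num_writes);  /* stats fields become atomic64_t */
    atomic64_add(clen, &zram->stats.compr_size);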
Change-Id: Ie2582688a71f89b457b006c9d2f3cda1d24c93ec
Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jiang Liu [Thu, 6 Jun 2013 16:07:30 +0000 (00:07 +0800)]
zram: optimize memory operations with clear_page()/copy_page()
Some architectures provide architecture-specific, optimized versions of
clear_page()/copy_page(), which may perform better than memset()/memcpy().
So use clear_page()/copy_page() to optimize zram performance where
possible.
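A minimal sketch of the substitution; the helper name and its arguments are
hypothetical:
    /* Hedged sketch: use page-sized helpers instead of memset/memcpy. */
    static void zram_fill_page(struct page *page, void *src, bool zero_filled)
    {
        void *mem = kmap_atomic(page);

        if (zero_filled)
            clear_page(mem);        /* instead of memset(mem, 0, PAGE_SIZE) */
        else
            copy_page(mem, src);    /* instead of memcpy(mem, src, PAGE_SIZE) */
        kunmap_atomic(mem);
    }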
Change-Id: If33e01891f1bc3790144f85c02e0869a02d0faae
Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jiang Liu [Thu, 6 Jun 2013 16:07:29 +0000 (00:07 +0800)]
zram: kill unused zram_get_num_devices()
Now there's no caller of zram_get_num_devices(), so kill it.
And change zram_devices to static because it's only used in zram_drv.c.
Change-Id: Iffd94b2456d86692703b3aff02b6e6d7a3ac5957
Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jiang Liu [Thu, 6 Jun 2013 16:07:28 +0000 (00:07 +0800)]
zram: simplify and optimize dev_to_zram()
Simplify and optimize dev_to_zram() without walking the zram_devices
array.
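A minimal sketch of what such a simplification can look like; the exact
body is assumed, not quoted from the patch:
    /* Hedged sketch: reach the zram instance through the gendisk instead
     * of scanning the zram_devices array. */
    static inline struct zram *dev_to_zram(struct device *dev)
    {
        return (struct zram *)dev_to_disk(dev)->private_data;
    }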
Change-Id: Ib1ee8afe8fc0596f57c4cc7b8e41a817b405c263
Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Sara Bird [Mon, 20 May 2013 19:18:14 +0000 (15:18 -0400)]
staging/zsmalloc: Fixed up incorrect formatted comments
The existing comments are using an odd style. Fixed them up to adhere
to the StyleGuide. No code changes.
Change-Id: I8e6d8cfa84fa87fb79f5ffc6973d7ab12287ba67
Signed-off-by: Sara Bird <sara.bird.iar@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Marlies Ruck [Thu, 16 May 2013 18:30:39 +0000 (14:30 -0400)]
Staging: Fixes string split across lines in zram
Fixes the following checkpatch warning in zram_drv.c:
WARNING: quoted string split across lines
Change-Id: Icd77f2e0465dc821b3af5b27178029194e67ff70
Signed-off-by: Marlies Ruck <marlies.ruck@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Marlies Ruck [Wed, 15 May 2013 20:56:49 +0000 (16:56 -0400)]
Staging: Fixes string split across lines in zsmalloc zsmalloc-main
Fixes the following checkpatch warning:
WARNING: quoted string split across lines
Change-Id: Ib5855942b855de05743b974cb842fa2127ff07da
Signed-off-by: Marlies Ruck <marlies.ruck@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jonghwa Lee [Tue, 18 Mar 2014 15:55:10 +0000 (00:55 +0900)]
extcon: max77693: Fix a bug occurring when changing ADC debounce time.
During initialization of the max77693 muic device, it suffered from
abnormal interrupts and accidental resets of certain registers when
changing the ADC debounce time. All of this was caused by mistakenly
writing values to BLTDset and JIGset in the CONTROL3 register.
BLTDset and JIGset are not configurable and only reflect the actual pin
status. If any value other than 0 is written to them, the muic device will
return false information.
To set the ADC debounce time properly, write 0 to BLTDset and JIGset when
writing the CONTROL3 register.
The previous workaround patches are now purged.
Change-Id: If87e01785115d460b1153e24271a50125d1631fb
Signed-off-by: Jonghwa Lee <jonghwa3.lee@samsung.com>
Jonghwa Lee [Tue, 18 Mar 2014 15:41:36 +0000 (00:41 +0900)]
Revert "extcon: max77693: Fix bug related to MAX77693 irq when set ADC debounce time"
This reverts commit
b64315125400f4eec9f74f7536acca3e8eeca720.
An upcoming patch will fix the bug, thus all related workarounds are now purged.
Change-Id: Iab88c25ef9470a5167be73990ceb7859e076ddd8
Signed-off-by: Jonghwa Lee <jonghwa3.lee@samsung.com>
Jonghwa Lee [Tue, 18 Mar 2014 15:41:20 +0000 (00:41 +0900)]
Revert "extcon: max77693: Force using UART path for jig"
This reverts commit
ec05e8de55043b87a1dafccec5477db9a6ecf9a1.
An upcoming patch will fix the bug, thus all related workarounds are now purged.
Change-Id: I52b4456867dd887c7d29a4f485c952860c6dc45c
Signed-off-by: Jonghwa Lee <jonghwa3.lee@samsung.com>
Jonghwa Lee [Tue, 18 Mar 2014 15:41:04 +0000 (00:41 +0900)]
Revert "extcon: max77693: Fix inaccurate extcon event for JIG with USB cable"
This reverts commit
36bdb901c8c61448d9e1320641c3debebde03321.
An upcoming patch will fix the bug, thus all related workarounds are now purged.
Change-Id: Idafdbcadff0d99320502676ef371d6bc0bd1fcf7
Signed-off-by: Jonghwa Lee <jonghwa3.lee@samsung.com>
Jonghwa Lee [Mon, 17 Mar 2014 08:44:06 +0000 (17:44 +0900)]
extcon: max77693: Fix inaccurate extcon event for JIG with USB cable
Dmitry's patch 'extcon: max77693: Force using UART~' fixes a problem when
the device is connected to JIG with a USB cable. However, with that patch,
the driver sends an inaccurate "USB" event even though the muic keeps the
path set to UART, not USB. This would confuse users.
This patch keeps Dmitry's modification working and also suppresses the
"USB" event when the device is connected to JIG with a USB cable.
Change-Id: Ia5f3328e5b0827378178502c70cb90da48e9e3f7
Signed-off-by: Jonghwa Lee <jonghwa3.lee@samsung.com>
Joonyoung Shim [Tue, 11 Mar 2014 05:07:29 +0000 (14:07 +0900)]
ARM: dts: odroidx2: remove fimd node
Our odroidx2 board doesn't have any LCD panel, so it is unnecessary to
use fimd.
Change-Id: I1924ee52cd34d5763aab989035c51b387de98311
Signed-off-by: Joonyoung Shim <jy0922.shim@samsung.com>
Joonyoung Shim [Tue, 11 Mar 2014 04:44:20 +0000 (13:44 +0900)]
ARM: EXYNOS: register master power domain for genpd from DT
On EXYNOS4412 there is a case where the mixer needs the LCD0 power domain
as well as the TV power domain. If the LCD0 power domain is disabled, the
mixer underflows its graphic layer0 line buffer and HDMI cannot output
normal data. It is unclear why the mixer depends on the LCD0 power domain.
We can handle this dependency by registering the LCD0 power domain as the
master of the TV power domain in the genpd framework, as sketched below.
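A minimal sketch of the genpd call this relies on; lookup_domain() and the
domain names are purely hypothetical placeholders:
    /* Hedged sketch: make LCD0 the master of TV, so TV on implies LCD0 on. */
    struct generic_pm_domain *lcd0_pd = lookup_domain("pd-lcd0"); /* hypothetical */
    struct generic_pm_domain *tv_pd   = lookup_domain("pd-tv");   /* hypothetical */

    if (pm_genpd_add_subdomain(lcd0_pd, tv_pd))
        pr_err("failed to register TV as a subdomain of LCD0\n");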
Change-Id: I71ef5bba37393d2e00c4dd7ea3f7da09d72e8db7
Signed-off-by: Joonyoung Shim <jy0922.shim@samsung.com>
Joonyoung Shim [Tue, 11 Mar 2014 02:49:32 +0000 (11:49 +0900)]
ARM: odroidx2: update defconfig to enable DRM_EXYNOS_FIMC
DRM_EXYNOS_FIMC is needed to use FIMC from DRM.
Change-Id: I854807ae13d356761b68a19860a0d618eb4f14fc
Signed-off-by: Joonyoung Shim <jy0922.shim@samsung.com>
Seung-Woo Kim [Fri, 29 Nov 2013 07:57:34 +0000 (16:57 +0900)]
media: s5p-mfc: Add support for V4L2_MEMORY_DMABUF type
There is a memory constraint that buffers must be within 128MB of the
firmware address. But if IOMMU is supported, this constraint does not
apply and DMABUF importing can be used.
So this patch adds V4L2_MEMORY_DMABUF type support for both the decoder
and the encoder.
Change-Id: I2c893da31f906fcd3f26edeed67ad1e4667e6081
Signed-off-by: Seung-Woo Kim <sw0312.kim@samsung.com>
Signed-off-by: Donghwa Lee <dh09.lee@samsung.com>
Joonyoung Shim [Sat, 8 Mar 2014 05:04:57 +0000 (14:04 +0900)]
ARM: odroidx2: update defconfig to enable ANDROID_LOGGER
ANDROID_LOGGER is needed to use dlogutil on Tizen.
Change-Id: Ibcad389c6edf8216441948931e9733aa45b04aa1
Signed-off-by: Joonyoung Shim <jy0922.shim@samsung.com>
Joonyoung Shim [Sat, 8 Mar 2014 04:55:56 +0000 (13:55 +0900)]
ARM: dts: fix base address of sysmmu_tv for exynos4
The base address of sysmmu_tv is 0x12E20000, not 0x13E20000. The wrong base
address causes the error below when the kernel boots.
[ 2.450988] Unhandled fault: imprecise external abort (0xc06) at 0xb6fd2068
Change-Id: Icc896d02e25800b18659c66cad853fdbf54cae9c
Signed-off-by: Joonyoung Shim <jy0922.shim@samsung.com>
Beomho Seo [Thu, 6 Mar 2014 02:44:40 +0000 (11:44 +0900)]
dts: exynos4412-trats2: Remove unused device node on trats2 board
The GP2AP002A00F light/proximity sensor is unused on the trats2 board.
Change-Id: I4ef414eea4aaf336200ec0b97485fcfde2645c2f
Signed-off-by: Beomho Seo <beomho.seo@samsung.com>
Lukasz Majewski [Tue, 4 Mar 2014 11:29:04 +0000 (12:29 +0100)]
cpufreq:LAB: Remove MODULE_* macros since it is not possible to compile LAB as a module
LAB cannot be compiled as a module, so the MODULE_* macros are in fact dead
code.
Change-Id: I10f8e5650631b00850631aea1295c1d88e53104a
Signed-off-by: Lukasz Majewski <l.majewski@samsung.com>
Hyungwon Hwang [Wed, 5 Mar 2014 06:37:26 +0000 (15:37 +0900)]
ARM: odroidx2: update defconfig to fix the number of cores
The processor on Odroid X2 is the Exynos4412, which has 4 processing cores.
The number of cores in the config file was 2, not 4.
Change-Id: Ia153c28e5c00a7a8cec2cda9ee0d24caa0d0c1ba
Signed-off-by: Hyungwon Hwang <human.hwang@samsung.com>
Hyungwon Hwang [Wed, 5 Mar 2014 05:44:41 +0000 (14:44 +0900)]
dts: arm: odroidx2: add support for external buttons
Add support for two external buttons (power & home buttons)
Change-Id: I90544bf89c804329dfc78709d48d9eedc2c4a181
Signed-off-by: Hyungwon Hwang <human.hwang@samsung.com>
Marek Szyprowski [Tue, 4 Mar 2014 11:30:01 +0000 (12:30 +0100)]
ARM: odroidx2: update defconfig
Added drivers for MALI GPU, DMAbuf, DMAbuf-sync, exynos drm iommu and
cpufreq.
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Change-Id: I427332db24aa52daf4d4addbf9fbad8bc116b62e
Marek Szyprowski [Tue, 4 Mar 2014 11:28:51 +0000 (12:28 +0100)]
ARM: dts: odroidx2: add mali gpu and cpu freq
Add nodes to enable MALI gpu and CPU freq drivers.
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Change-Id: Id1cc80e7b4278698603422da8e8a7a07fca38904
Marek Szyprowski [Tue, 4 Mar 2014 11:27:53 +0000 (12:27 +0100)]
ARM: dts: odroidx2: remove obsoleted register debug node
The register-debug node was used for some internal debugging; it is not
needed in release code.
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Change-Id: I3cefb94755850a9c3caa609ed0fb5c5e56d1fc94
Lukasz Majewski [Tue, 4 Mar 2014 11:27:50 +0000 (12:27 +0100)]
cpufreq:LAB: Replace NR_CPUS with num_possible_cpus() function
Using the NR_CPUS constant is deprecated, since it only reflects the
maximum possible number of cores on an SMP machine.
num_possible_cpus() represents the number of cores available on the
running system.
As a result of this change, the global tables are replaced by ones
allocated with kzalloc(), as sketched below.
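A minimal sketch of such an allocation; the table and function names are
hypothetical stand-ins for the governor's per-CPU data:
    /* Hedged sketch: size run-time tables by num_possible_cpus(), not NR_CPUS. */
    static unsigned int *cpu_idle_avg;  /* hypothetical per-CPU history table */

    static int lab_alloc_tables(void)
    {
        cpu_idle_avg = kzalloc(num_possible_cpus() * sizeof(*cpu_idle_avg),
                               GFP_KERNEL);
        return cpu_idle_avg ? 0 : -ENOMEM;
    }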
Change-Id: Ib0bfa27296740a91a25b1af0ece0e573a9756846
Signed-off-by: Lukasz Majewski <l.majewski@samsung.com>
Lukasz Majewski [Tue, 4 Mar 2014 11:20:34 +0000 (12:20 +0100)]
cpufreq:LAB:cosmetic: Cosmetic code cleanup
Explicit initialization of a static variable is not necessary, since it is
placed in the BSS section.
Properly formatted comments have also been added.
Change-Id: Id5e30a97d7d3cdb851f09a69e944c77223fa8a82
Signed-off-by: Lukasz Majewski <l.majewski@samsung.com>
Lukasz Majewski [Wed, 26 Feb 2014 12:12:36 +0000 (13:12 +0100)]
cpufreq:LAB:dts:trats2: Add attributes necessary for correct LAB operation
Two extra attributes have been added. The "lab-num-of-states" attribute
describes how many ranges will be used during LAB operation; its span now
equals 20 (100 / 5).
The "lab-ctrl-freq" attribute maps the number of idle CPUs and the workload
(from the scheduler) to LAB operations.
It is possible to specify an exact frequency (e.g. 1300000), enable boost
(0xFFFFFFFF), set the frequency to the minimum (0xFFFFFFFE), or rely on the
ONDEMAND governor to find a suitable frequency.
Change-Id: I0ea6ffd8626aded5bed34166c59b9d34278feef2
Signed-off-by: Lukasz Majewski <l.majewski@samsung.com>
Lukasz Majewski [Thu, 27 Feb 2014 12:35:03 +0000 (13:35 +0100)]
cpufreq:LAB:ondemand: Enable usage of ONDEMAND specific methods at LAB governor
Two methods from ondemand, namely store_sampling_rate() and od_check_cpu(),
are now utilized by the LAB governor.
Moreover, the od_cpu_dbs_info_s structure is regarded as a common one, so
only its declaration is necessary in LAB.
Change-Id: I3408b2f8cfdb292cd69568c931da46d8f957099c
Signed-off-by: Lukasz Majewski <l.majewski@samsung.com>
Lukasz Majewski [Wed, 26 Feb 2014 14:50:05 +0000 (15:50 +0100)]
cpufreq:LAB:cpufreq_governor: Remove redundant LAB code from cpufreq_governor.[h|c]
Since the ondemand code has been reused for LAB, it is now possible to
remove the code specially defined for the LAB governor from
cpufreq_governor.[h|c].
Change-Id: I9c48eade8ffe6a94efd0145b7d48afb405961155
Signed-off-by: Lukasz Majewski <l.majewski@samsung.com>
Lukasz Majewski [Wed, 26 Feb 2014 12:42:04 +0000 (13:42 +0100)]
cpufreq:LAB:ondemand: Ondemand governor adjustments necessary for correct LAB operation
The ondemand code needs to be slightly modified for LAB governor operation.
The biggest problem is the update_sampling_rate function, which must not be
executed with the wrong governor.
Change-Id: I149204bda15b11546c57a77a75a51c4f4f8522b8
Signed-off-by: Lukasz Majewski <l.majewski@samsung.com>
Lukasz Majewski [Thu, 27 Feb 2014 12:30:06 +0000 (13:30 +0100)]
cpufreq:LAB:core: Redesign of LAB code to work on top of ONDEMAND governor
The LAB code has been redesigned to work on top of the ondemand governor.
The previous version of LAB, which used a polynomial approximation, has
been replaced with a more readable approach.
The LAB control policy is now read from the device tree.
The user can specify the following operations (based on load and the number
of idle CPUs):
- Force a particular frequency
- Explicitly enable boost for a specific kind of load
- Use the ondemand governor
By using ondemand, one can be sure that for non-critical frequencies the
correct value will be chosen.
LAB, which works on top of it, can clamp the frequency (and thereby manage
power) or explicitly enable boost.
Change-Id: Ieaf84d245463edf90fb2baaf825c0534970eab7e
Signed-off-by: Lukasz Majewski <l.majewski@samsung.com>
Lukasz Majewski [Wed, 26 Feb 2014 13:07:17 +0000 (14:07 +0100)]
cpufreq:LAB:ondemand: REMOVE from LAB governor code duplicated at ondemand
LAB is very similar to the ondemand governor in its structure.
Both use the same code for:
- governor init and exit
- the demand-based switching timer code
- governor-specific ops
In this way LAB can be stacked on top of the ondemand governor, and hence
it is possible to reuse its logic when needed.
Change-Id: I78e0da90bb2f07677fe6f8d451139107994f5a6f
Signed-off-by: Lukasz Majewski <l.majewski@samsung.com>