profile/ivi/kernel-adaptation-intel-automotive.git
15 years ago  perf_counter: remove rq->lock usage
Peter Zijlstra [Mon, 6 Apr 2009 09:45:12 +0000 (11:45 +0200)]
perf_counter: remove rq->lock usage

Now that all the task runtime clock users are gone, remove the ugly
rq->lock usage from perf counters, which solves the nasty deadlock
seen when a software task clock counter was read from an NMI overflow
context.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
LKML-Reference: <20090406094518.531137582@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years ago  perf_counter: rework the task clock software counter
Peter Zijlstra [Mon, 6 Apr 2009 09:45:11 +0000 (11:45 +0200)]
perf_counter: rework the task clock software counter

Rework the task clock software counter to use the context time instead
of the task runtime clock; this removes the last such user.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
LKML-Reference: <20090406094518.445450972@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years ago  perf_counter: rework context time
Peter Zijlstra [Mon, 6 Apr 2009 09:45:10 +0000 (11:45 +0200)]
perf_counter: rework context time

Since perf_counter_context is switched along with tasks, we can
maintain the context time without using the task runtime clock.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
LKML-Reference: <20090406094518.353552838@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years ago  perf_counter: change event definition
Peter Zijlstra [Mon, 6 Apr 2009 09:45:09 +0000 (11:45 +0200)]
perf_counter: change event definition

Currently the definition of an event is slightly ambiguous. We have
wakeup events, for poll() and SIGIO, which are either generated
when a record crosses a page boundary (hw_events.wakeup_events == 0),
or every wakeup_events new records.

Now a record can be either a counter overflow record, or a number of
different things, like the mmap PROT_EXEC region notifications.

Then there is the PERF_COUNTER_IOC_REFRESH event limit, which only
considers counter overflows.

This patch changes the wakeup_events and SIGIO notification to only
consider overflow events. Furthermore, it changes the SIGIO notification
to report SIGHUP when the event limit is reached and the counter will
be disabled.
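
As an illustration only, a minimal userspace sketch of these semantics, using
the struct and syscall names that appear elsewhere in this log (exact field
names are assumptions):

    /* Ask for a wakeup (poll()/SIGIO) for every overflow record,
     * instead of only when a record crosses a page boundary.       */
    struct perf_counter_hw_event hw_event = { 0 };

    hw_event.irq_period    = 100000;    /* overflow every 100000 events     */
    hw_event.wakeup_events = 1;         /* 0 would mean: wake per full page */

    int fd = sys_perf_counter_open(&hw_event, 0 /* self */, -1, -1, 0);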

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
LKML-Reference: <20090406094518.266679874@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years ago  perf_counter: comment the perf_event_type stuff
Peter Zijlstra [Mon, 6 Apr 2009 09:45:08 +0000 (11:45 +0200)]
perf_counter: comment the perf_event_type stuff

Describe the event format.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
LKML-Reference: <20090406094518.211174347@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years ago  perf_counter: counter overflow limit
Peter Zijlstra [Mon, 6 Apr 2009 09:45:07 +0000 (11:45 +0200)]
perf_counter: counter overflow limit

Provide means to auto-disable the counter after 'n' overflow events.

Create the counter with hw_event.disabled = 1, and then issue an
ioctl(fd, PERF_COUNTER_IOC_REFRESH, n); to set the limit and enable
the counter.
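
In code, the sequence above looks roughly like this (a sketch; error handling
omitted):

    struct perf_counter_hw_event hw_event = { 0 };
    int n = 10;                                 /* allow 10 more overflows */

    hw_event.disabled = 1;                      /* created disabled        */
    int fd = sys_perf_counter_open(&hw_event, 0, -1, -1, 0);

    ioctl(fd, PERF_COUNTER_IOC_REFRESH, n);     /* enable; auto-disable    */
                                                /* after n overflows       */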

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
LKML-Reference: <20090406094518.083139737@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years ago  perf_counter: PERF_RECORD_TIME
Peter Zijlstra [Mon, 6 Apr 2009 09:45:06 +0000 (11:45 +0200)]
perf_counter: PERF_RECORD_TIME

By popular request, provide means to log a timestamp along with the
counter overflow event.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
LKML-Reference: <20090406094518.024173282@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years ago  perf_counter: fix the mlock accounting
Peter Zijlstra [Mon, 6 Apr 2009 09:45:05 +0000 (11:45 +0200)]
perf_counter: fix the mlock accounting

Reading through the code I saw I forgot to finish the mlock accounting.
Do so now.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
LKML-Reference: <20090406094517.899767331@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years ago  perf_counter: there's more to overflow than writing events
Peter Zijlstra [Mon, 6 Apr 2009 09:45:04 +0000 (11:45 +0200)]
perf_counter: there's more to overflow than writing events

Prepare for more generic overflow handling. The new perf_counter_overflow()
method will handle the generic bits of the counter overflow, and can return
a non-zero value, in which case the counter should be (soft) disabled, so
that it won't count until it's properly disabled.

XXX: do powerpc and swcounter

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
LKML-Reference: <20090406094517.812109629@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years ago  perf_counter: x86: self-IPI for pending work
Peter Zijlstra [Mon, 6 Apr 2009 09:45:03 +0000 (11:45 +0200)]
perf_counter: x86: self-IPI for pending work

Implement set_perf_counter_pending() with a self-IPI so that it will
run ASAP in a usable context.

For now use a second IRQ vector, because the primary vector pokes
the apic in funny ways that seem to confuse things.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
LKML-Reference: <20090406094517.724626696@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years ago  perf_counter: generalize pending infrastructure
Peter Zijlstra [Mon, 6 Apr 2009 09:45:02 +0000 (11:45 +0200)]
perf_counter: generalize pending infrastructure

Prepare the pending infrastructure to do more than wakeups.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
LKML-Reference: <20090406094517.634732847@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years ago  perf_counter: SIGIO support
Peter Zijlstra [Mon, 6 Apr 2009 09:45:01 +0000 (11:45 +0200)]
perf_counter: SIGIO support

Provide support for fcntl() I/O availability signals.
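
For reference, the usual fcntl() setup a user of this would do on the counter
fd (standard O_ASYNC ownership calls, not taken from this commit):

    /* deliver SIGIO to this process when the counter fd becomes readable */
    fcntl(fd, F_SETOWN, getpid());
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL) | O_ASYNC);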

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
LKML-Reference: <20090406094517.579788800@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years ago  perf_counter: add more context information
Peter Zijlstra [Mon, 6 Apr 2009 09:45:00 +0000 (11:45 +0200)]
perf_counter: add more context information

Change the callchain context entries to u16, so as to gain some space.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
LKML-Reference: <20090406094517.457320003@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years ago  perf_counter: update mmap() counter read, take 2
Peter Zijlstra [Mon, 6 Apr 2009 09:44:59 +0000 (11:44 +0200)]
perf_counter: update mmap() counter read, take 2

Update the userspace read method.

Paul noted that:
 - userspace cannot observe ->lock & 1 on the same cpu.
 - we need a barrier() between reading ->lock and ->index
   to ensure we read them in that particular order.
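
Putting the two notes together, the userspace read loop looks roughly like
this (a sketch; 'lock' and 'index' are the fields named above, the 'offset'
field and the rdpmc() helper are assumptions):

    __u32 seq, idx;
    __u64 count;

    do {
            seq = pg->lock;
            barrier();              /* read ->lock before ->index         */

            idx   = pg->index;      /* hw counter in use, if any          */
            count = pg->offset;     /* 64-bit base ('offset' is assumed)  */
            if (idx)
                    count += rdpmc(idx - 1);

            barrier();
    } while (pg->lock != seq);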

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
LKML-Reference: <20090406094517.368446033@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years ago  perf_counter: update mmap() counter read
Peter Zijlstra [Thu, 2 Apr 2009 09:12:04 +0000 (11:12 +0200)]
perf_counter: update mmap() counter read

Paul noted that we don't need SMP barriers for the mmap() counter read
because it's always on the same cpu (otherwise you can't access the hw
counter anyway).

So remove the SMP barriers and replace them with regular compiler
barriers.

Further, update the comment to include a race free method of reading
said hardware counter. The primary change is putting the pmc_read
inside the seq-loop, otherwise we can still race and read rubbish.

Noticed-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
Orig-LKML-Reference: <20090402091319.577951445@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years ago  perf_counter: add more context information
Peter Zijlstra [Thu, 2 Apr 2009 09:12:03 +0000 (11:12 +0200)]
perf_counter: add more context information

Put in counts to tell which ips belong to what context.

  -----
   | |  hv
   | --
nr | |  kernel
   | --
   | |  user
  -----

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
Orig-LKML-Reference: <20090402091319.493101305@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years ago  perf_counter: kerneltop: update to new ABI
Peter Zijlstra [Thu, 2 Apr 2009 09:12:02 +0000 (11:12 +0200)]
perf_counter: kerneltop: update to new ABI

Update to reflect the new record_type ABI changes.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
Orig-LKML-Reference: <20090402091319.407283141@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years ago  perf_counter: per event wakeups
Peter Zijlstra [Thu, 2 Apr 2009 09:12:01 +0000 (11:12 +0200)]
perf_counter: per event wakeups

By request, provide a way to request a wakeup every 'n' events instead
of every page of output.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
Orig-LKML-Reference: <20090402091319.323309784@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years ago  perf_counter: move the event overflow output bits to record_type
Peter Zijlstra [Thu, 2 Apr 2009 09:11:59 +0000 (11:11 +0200)]
perf_counter: move the event overflow output bits to record_type

Per suggestion from Paul, move the event overflow bits to record_type
and sanitize the enums a bit.

Breaks the ABI -- again ;-)

Suggested-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
Orig-LKML-Reference: <20090402091319.151921176@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years ago  perf_counter tools: kerneltop: add real-time data acquisition thread
Mike Galbraith [Fri, 27 Mar 2009 11:13:43 +0000 (12:13 +0100)]
perf_counter tools: kerneltop: add real-time data acquisition thread

Decouple kerneltop display from event acquisition by introducing
a separate data acquisition thread. This fixes annoying kerneltop
display refresh jitter and missed events.

Also add a -r <prio> option, to switch the data acquisition thread
to real-time priority.

Signed-off-by: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Orig-LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years ago  perf_counter: pmc arbitration
Peter Zijlstra [Mon, 30 Mar 2009 17:07:16 +0000 (19:07 +0200)]
perf_counter: pmc arbitration

Follow the example set by powerpc and try to play nice with oprofile
and the nmi watchdog.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Paul Mackerras <paulus@samba.org>
Orig-LKML-Reference: <20090330171024.459968444@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years ago  perf_counter: x86: callchain support
Peter Zijlstra [Mon, 30 Mar 2009 17:07:15 +0000 (19:07 +0200)]
perf_counter: x86: callchain support

Provide the x86 perf_callchain() implementation.

Code based on the ftrace/sysprof code from Soeren Sandmann Pedersen.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Paul Mackerras <paulus@samba.org>
Cc: Soeren Sandmann Pedersen <sandmann@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Steven Rostedt <srostedt@redhat.com>
Orig-LKML-Reference: <20090330171024.341993293@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years ago  perf_counter: provide generic callchain bits
Peter Zijlstra [Mon, 30 Mar 2009 17:07:14 +0000 (19:07 +0200)]
perf_counter: provide generic callchain bits

Provide the generic callchain support bits. If hw_event->callchain is
set the arch specific perf_callchain() function is called upon to
provide a perf_callchain_entry structure filled with the current
callchain.

If it does so, it is added to the overflow output event.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Paul Mackerras <paulus@samba.org>
Orig-LKML-Reference: <20090330171024.254266860@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years ago  perf_counter tools: kerneltop: update event_types
Peter Zijlstra [Mon, 30 Mar 2009 17:07:13 +0000 (19:07 +0200)]
perf_counter tools: kerneltop: update event_types

Go along with the new perf_event_type ABI.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Paul Mackerras <paulus@samba.org>
Orig-LKML-Reference: <20090330171024.133985461@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years ago  perf_counter: re-arrange the perf_event_type
Peter Zijlstra [Mon, 30 Mar 2009 17:07:12 +0000 (19:07 +0200)]
perf_counter: re-arrange the perf_event_type

Breaks ABI yet again :-)

Change the event type so that [0, 2^31-1] are regular event types, but
[2^31, 2^32-1] forms a bitmask for overflow events.
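
In code terms (the macro name here is illustrative, not taken from the
commit):

    #define PERF_EVENT_TYPE_OVERFLOW   (1U << 31)

    if (event_type & PERF_EVENT_TYPE_OVERFLOW) {
            /* low 31 bits: bitmask describing the overflow record */
    } else {
            /* regular event type in [0, 2^31 - 1]                 */
    }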

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Paul Mackerras <paulus@samba.org>
Orig-LKML-Reference: <20090330171024.047961770@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years ago  perf_counter: small cleanup of the output routines
Peter Zijlstra [Mon, 30 Mar 2009 17:07:11 +0000 (19:07 +0200)]
perf_counter: small cleanup of the output routines

Move the nmi argument to the _begin() function, so that _end() only needs the
handle. This allows the _begin() function to generate a wakeup on event loss.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Paul Mackerras <paulus@samba.org>
Orig-LKML-Reference: <20090330171023.959404268@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years ago  perf_counter tools: optionally scale counter values in perfstat mode
Paul Mackerras [Mon, 30 Mar 2009 17:07:10 +0000 (19:07 +0200)]
perf_counter tools: optionally scale counter values in perfstat mode

Impact: new functionality

This adds an option to the perfstat mode of kerneltop to scale the
reported counter values according to the fraction of time that each
counter gets to count.  This is invoked with the -l option (I used 'l'
because s, c, a and e were all taken already.)  This uses the new
PERF_RECORD_TOTAL_TIME_{ENABLED,RUNNING} read format options.

With this, we get output like this:

$ ./perfstat -l -e 0:0,0:1,0:2,0:3,0:4,0:5 ./spin

 Performance counter stats for './spin':

     4016072055  CPU cycles           (events)  (scaled from 66.53%)
     2005887318  instructions         (events)  (scaled from 66.53%)
        1762849  cache references     (events)  (scaled from 66.69%)
         165229  cache misses         (events)  (scaled from 66.85%)
     1001298009  branches             (events)  (scaled from 66.78%)
          41566  branch misses        (events)  (scaled from 66.61%)

 Wall-clock time elapsed:  2438.227446 msecs

This also lets us detect when a counter is zero because the counter
never got to go on the CPU at all.  In that case we print <not counted>
rather than 0.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Orig-LKML-Reference: <20090330171023.871484899@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years ago  perf_counter: x86: proper error propagation for the x86 hw_perf_counter_init()
Peter Zijlstra [Mon, 30 Mar 2009 17:07:09 +0000 (19:07 +0200)]
perf_counter: x86: proper error propagation for the x86 hw_perf_counter_init()

Now that Paul cleaned up the error propagation paths, pass down the
x86 error as well.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Paul Mackerras <paulus@samba.org>
Orig-LKML-Reference: <20090330171023.792822360@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years ago  perf_counter: make it possible for hw_perf_counter_init to return error codes
Paul Mackerras [Mon, 30 Mar 2009 17:07:08 +0000 (19:07 +0200)]
perf_counter: make it possible for hw_perf_counter_init to return error codes

Impact: better error reporting

At present, if hw_perf_counter_init encounters an error, all it can do
is return NULL, which causes sys_perf_counter_open to return an EINVAL
error to userspace.  This isn't very informative for userspace; it means
that userspace can't tell the difference between "sorry, oprofile is
already using the PMU" and "we don't support this CPU" and "this CPU
doesn't support the requested generic hardware event".

This commit uses the PTR_ERR/ERR_PTR/IS_ERR set of macros to let
hw_perf_counter_init return an error code on error rather than just NULL
if it wishes.  If it does so, that error code will be returned from
sys_perf_counter_open to userspace.  If it returns NULL, an EINVAL
error will be returned to userspace, as before.

This also adapts the powerpc hw_perf_counter_init to make use of this
to return ENXIO, EINVAL, EBUSY, or EOPNOTSUPP as appropriate.  It would
be good to add extra error numbers in future to allow userspace to
distinguish the various errors that are currently reported as EINVAL,
i.e. irq_period < 0, too many events in a group, conflict between
exclude_* settings in a group, and PMU resource conflict in a group.

[ v2: fix a bug pointed out by Corey Ashford where error returns from
      hw_perf_counter_init were not handled correctly in the case of
      raw hardware events.]
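
The pattern described above, sketched with illustrative variable names:

    /* arch side: return a specific error rather than NULL             */
    if (pmu_taken_by_oprofile)
            return ERR_PTR(-EBUSY);

    /* core side: propagate the code to sys_perf_counter_open()        */
    ops = hw_perf_counter_init(counter);
    if (IS_ERR(ops))
            err = PTR_ERR(ops);
    else if (!ops)
            err = -EINVAL;          /* NULL still means EINVAL, as before */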

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Orig-LKML-Reference: <20090330171023.682428180@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years ago  perf_counter: powerpc: only reserve PMU hardware when we need it
Paul Mackerras [Mon, 30 Mar 2009 17:07:07 +0000 (19:07 +0200)]
perf_counter: powerpc: only reserve PMU hardware when we need it

Impact: cooperate with oprofile

At present, on PowerPC, if you have perf_counters compiled in, oprofile
doesn't work.  There is code to allow the PMU to be shared between
competing subsystems, such as perf_counters and oprofile, but currently
the perf_counter subsystem reserves the PMU for itself at boot time,
and never releases it.

This makes perf_counter play nicely with oprofile.  Now we keep a count
of how many perf_counter instances are counting hardware events, and
reserve the PMU when that count becomes non-zero, and release the PMU
when that count becomes zero.  This means that it is possible to have
perf_counters compiled in and still use oprofile, as long as there are
no hardware perf_counters active.  This also means that if oprofile is
active, sys_perf_counter_open will fail if the hw_event specifies a
hardware event.

To avoid races with other tasks creating and destroying perf_counters,
we use a mutex.  We use atomic_inc_not_zero and atomic_add_unless to
avoid having to take the mutex unless there is a possibility of the
count going between 0 and 1.
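
A sketch of that counting scheme (symbol names are illustrative, not the ones
in the commit):

    static atomic_t nr_hw_counters;
    static DEFINE_MUTEX(pmu_reserve_mutex);

    static int hw_counter_get_pmu(void)
    {
            int err = 0;

            /* fast path: no 0 <-> 1 transition possible, skip the mutex */
            if (atomic_inc_not_zero(&nr_hw_counters))
                    return 0;

            mutex_lock(&pmu_reserve_mutex);
            if (atomic_read(&nr_hw_counters) == 0)
                    err = reserve_pmu_hardware();  /* fails if oprofile owns it */
            if (!err)
                    atomic_inc(&nr_hw_counters);
            mutex_unlock(&pmu_reserve_mutex);

            return err;
    }

The release side is symmetric: atomic_add_unless(&nr_hw_counters, -1, 1)
covers the fast path, and the mutex is only taken when the count might drop
to zero, at which point the PMU is released.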

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Orig-LKML-Reference: <20090330171023.627912475@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years ago  perf_counter: kerneltop: parse the mmap data stream
Peter Zijlstra [Mon, 30 Mar 2009 17:07:06 +0000 (19:07 +0200)]
perf_counter: kerneltop: parse the mmap data stream

frob the kerneltop code to print the mmap data in the stream

Better use would be collecting the IPs per PID and mapping them onto
the provided userspace code.. TODO

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Paul Mackerras <paulus@samba.org>
Orig-LKML-Reference: <20090330171023.501902515@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years ago  perf_counter: executable mmap() information
Peter Zijlstra [Mon, 30 Mar 2009 17:07:05 +0000 (19:07 +0200)]
perf_counter: executable mmap() information

Currently the profiling information returns userspace IPs but provides no
way to correlate them to userspace code. Userspace could look into
/proc/$pid/maps but that might not be current or even present anymore
at the time of analyzing the IPs.

Therefore provide means to track the mmap information and provide it
in the output stream.

XXX: only covers mmap()/munmap(), mremap() and mprotect() are missing.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Paul Mackerras <paulus@samba.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Orig-LKML-Reference: <20090330171023.417259499@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years ago  perf_counter: kerneltop: simplify data_head read
Peter Zijlstra [Mon, 30 Mar 2009 17:07:04 +0000 (19:07 +0200)]
perf_counter: kerneltop: simplify data_head read

Now that the kernel side changed, match up again.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Paul Mackerras <paulus@samba.org>
Orig-LKML-Reference: <20090330171023.327144324@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years ago  perf_counter: fix update_userpage()
Peter Zijlstra [Mon, 30 Mar 2009 17:07:03 +0000 (19:07 +0200)]
perf_counter: fix update_userpage()

It just occurred to me it is possible to have multiple contending
updates of the userpage (mmap information vs overflow vs counter).
This would break the seqlock logic.

It appears the arch code uses this from NMI context, so we cannot
possibly serialize its use, therefore separate the data_head update
from it and let it return to its original use.

The arch code needs to make sure there are no contending callers by
disabling the counter before using it -- powerpc appears to do this
nicely.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Paul Mackerras <paulus@samba.org>
Orig-LKML-Reference: <20090330171023.241410660@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years ago  perf_counter: unify and fix delayed counter wakeup
Peter Zijlstra [Mon, 30 Mar 2009 17:07:02 +0000 (19:07 +0200)]
perf_counter: unify and fix delayed counter wakeup

While going over the wakeup code I noticed delayed wakeups only work
for hardware counters but basically all software counters rely on
them.

This patch unifies and generalizes the delayed wakeup to fix this
issue.

Since we're dealing with NMI context bits here, use a cmpxchg() based
single link list implementation to track counters that have pending
wakeups.

[ This should really be generic code for delayed wakeups, but since we
  cannot use cmpxchg()/xchg() in generic code, I've let it live in the
  perf_counter code. -- Eric Dumazet could use it to aggregate the
  network wakeups. ]

Furthermore, the x86 method of using TIF flags was flawed in that it's
quite possible to end up setting the bit on the idle task, losing the
wakeup.

The powerpc method uses per-cpu storage and does appear to be
sufficient.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Paul Mackerras <paulus@samba.org>
Orig-LKML-Reference: <20090330171023.153932974@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years ago  perf_counter: record time running and time enabled for each counter
Paul Mackerras [Wed, 25 Mar 2009 11:46:58 +0000 (22:46 +1100)]
perf_counter: record time running and time enabled for each counter

Impact: new functionality

Currently, if there are more counters enabled than can fit on the CPU,
the kernel will multiplex the counters on to the hardware using
round-robin scheduling.  That isn't too bad for sampling counters, but
for counting counters it means that the value read from a counter
represents some unknown fraction of the true count of events that
occurred while the counter was enabled.

This remedies the situation by keeping track of how long each counter
is enabled for, and how long it is actually on the cpu and counting
events.  These times are recorded in nanoseconds using the task clock
for per-task counters and the cpu clock for per-cpu counters.

These values can be supplied to userspace on a read from the counter.
Userspace requests that they be supplied after the counter value by
setting the PERF_FORMAT_TOTAL_TIME_ENABLED and/or
PERF_FORMAT_TOTAL_TIME_RUNNING bits in the hw_event.read_format field
when creating the counter.  (There is no way to change the read format
after the counter is created, though it would be possible to add some
way to do that.)

Using this information it is possible for userspace to scale the count
it reads from the counter to get an estimate of the true count:

true_count_estimate = count * total_time_enabled / total_time_running

This also lets userspace detect the situation where the counter never
got to go on the cpu: total_time_running == 0.

This functionality has been requested by the PAPI developers, and will
be generally needed for interpreting the count values from counting
counters correctly.

In the implementation, this keeps 5 time values (in nanoseconds) for
each counter: total_time_enabled and total_time_running are used when
the counter is in state OFF or ERROR and for reporting back to
userspace.  When the counter is in state INACTIVE or ACTIVE, it is the
tstamp_enabled, tstamp_running and tstamp_stopped values that are
relevant, and total_time_enabled and total_time_running are determined
from them.  (tstamp_stopped is only used in INACTIVE state.)  The
reason for doing it like this is that it means that only counters
being enabled or disabled at sched-in and sched-out time need to be
updated.  There are no new loops that iterate over all counters to
update total_time_enabled or total_time_running.

This also keeps separate child_total_time_running and
child_total_time_enabled fields that get added in when reporting the
totals to userspace.  They are separate fields so that they can be
atomic.  We don't want to use atomics for total_time_running,
total_time_enabled etc., because then we would have to use atomic
sequences to update them, which are slower than regular arithmetic and
memory accesses.

It is possible to measure total_time_running by adding a task_clock
counter to each group of counters, and total_time_enabled can be
measured approximately with a top-level task_clock counter (though
inaccuracies will creep in if you need to disable and enable groups
since it is not possible in general to disable/enable the top-level
task_clock counter simultaneously with another group).  However, that
adds extra overhead - I measured around 15% increase in the context
switch latency reported by lat_ctx (from lmbench) when a task_clock
counter was added to each of 2 groups, and around 25% increase when a
task_clock counter was added to each of 4 groups.  (In both cases a
top-level task-clock counter was also added.)

In contrast, the code added in this commit gives better information
with no overhead that I could measure (in fact in some cases I
measured lower times with this code, but the differences were all less
than one standard deviation).

[ v2: address review comments by Andrew Morton. ]
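
A hedged userspace sketch of reading and scaling such a counter (the layout
follows "supplied after the counter value" above):

    struct {
            __u64 value;
            __u64 time_enabled;     /* PERF_FORMAT_TOTAL_TIME_ENABLED */
            __u64 time_running;     /* PERF_FORMAT_TOTAL_TIME_RUNNING */
    } res;
    __u64 estimate = 0;

    if (read(fd, &res, sizeof(res)) == sizeof(res)) {
            if (res.time_running)
                    estimate = res.value * res.time_enabled / res.time_running;
            /* time_running == 0: the counter never got onto the CPU */
    }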

Signed-off-by: Paul Mackerras <paulus@samba.org>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Andrew Morton <akpm@linux-foundation.org>
Orig-LKML-Reference: <18890.6578.728637.139402@cargo.ozlabs.ibm.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years ago  perf_counter: allow and require one-page mmap on counting counters
Peter Zijlstra [Wed, 25 Mar 2009 11:48:31 +0000 (12:48 +0100)]
perf_counter: allow and require one-page mmap on counting counters

A brainfart stopped single page mmap()s working. The rest of the code
should be perfectly fine with not having any data pages.

Reported-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Orig-LKML-Reference: <1237981712.7972.812.camel@twins>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years ago  perf_counter: kerneltop: output event support
Peter Zijlstra [Wed, 25 Mar 2009 11:30:27 +0000 (12:30 +0100)]
perf_counter: kerneltop: output event support

Teach kerneltop about the new output ABI.

XXX: anybody fancy integrating the PID/TID data into the output?

Bump the mmap_data pages a little because we bloated the output and
have to be more careful about overruns with structured data.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Arjan van de Ven <arjan@infradead.org>
Cc: Wu Fengguang <fengguang.wu@intel.com>
Orig-LKML-Reference: <20090325113317.192910290@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years ago  perf_counter: kerneltop: mmap_pages argument
Peter Zijlstra [Wed, 25 Mar 2009 11:30:26 +0000 (12:30 +0100)]
perf_counter: kerneltop: mmap_pages argument

provide a knob to set the number of mmap data pages.

Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Arjan van de Ven <arjan@infradead.org>
Cc: Wu Fengguang <fengguang.wu@intel.com>
Orig-LKML-Reference: <20090325113317.104545398@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years ago  perf_counter: optionally provide the pid/tid of the sampled task
Peter Zijlstra [Wed, 25 Mar 2009 11:30:25 +0000 (12:30 +0100)]
perf_counter: optionally provide the pid/tid of the sampled task

Allow cpu wide counters to profile userspace by providing what process
the sample belongs to.

This raises the first issue with the output type: lots of these
options (group, tid, callchain, etc.) are non-exclusive and could be
combined, suggesting a bitfield.

However, things like the mmap() data stream don't fit in that.

How to split the type field...

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Arjan van de Ven <arjan@infradead.org>
Cc: Wu Fengguang <fengguang.wu@intel.com>
Orig-LKML-Reference: <20090325113317.013775235@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years ago  perf_counter: sanity check on the output API
Peter Zijlstra [Wed, 25 Mar 2009 11:30:24 +0000 (12:30 +0100)]
perf_counter: sanity check on the output API

Ensure we never write more than we said we would.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Arjan van de Ven <arjan@infradead.org>
Cc: Wu Fengguang <fengguang.wu@intel.com>
Orig-LKML-Reference: <20090325113316.921433024@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years ago  perf_counter: output objects
Peter Zijlstra [Wed, 25 Mar 2009 11:30:23 +0000 (12:30 +0100)]
perf_counter: output objects

Provide a {type,size} header for each output entry.

This should provide extensible output, and the ability to mix multiple streams.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Arjan van de Ven <arjan@infradead.org>
Cc: Wu Fengguang <fengguang.wu@intel.com>
Orig-LKML-Reference: <20090325113316.831607932@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years ago  perf_counter: more elaborate write API
Peter Zijlstra [Wed, 25 Mar 2009 11:30:22 +0000 (12:30 +0100)]
perf_counter: more elaborate write API

Provide a begin, copy, end interface to the output buffer.

begin() reserves the space,
 copy() copies the data over, considering page boundaries,
  end() finalizes the event and does the wakeup.
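
A sketch of how an overflow path would use such an interface, assuming the
helpers are named perf_output_begin()/copy()/end() (naming not confirmed by
this log):

    struct perf_output_handle handle;           /* name assumed             */

    if (perf_output_begin(&handle, counter, sizeof(rec), nmi) < 0)
            return;                             /* reservation failed       */

    perf_output_copy(&handle, &rec, sizeof(rec));  /* handles page crossings */
    perf_output_end(&handle);                      /* finalize + wakeup      */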

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Arjan van de Ven <arjan@infradead.org>
Cc: Wu Fengguang <fengguang.wu@intel.com>
Orig-LKML-Reference: <20090325113316.740550870@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years ago  perf_counter: fix perf_poll()
Peter Zijlstra [Tue, 24 Mar 2009 12:18:16 +0000 (13:18 +0100)]
perf_counter: fix perf_poll()

Impact: fix kerneltop 100% CPU usage

Only return a poll event when there's actually been one; poll_wait()
doesn't actually wait for the waitq you pass it, it only enqueues
you on it.

Only once all FDs have been iterated and none of them returned a
poll-event will it schedule().

Also make it return POLL_HUP when there's no mmap() area to read from.

Further, fix a silly bug in the write code.

Reported-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arjan van de Ven <arjan@infradead.org>
Orig-LKML-Reference: <1237897096.24918.181.camel@twins>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years ago  perf_counter: update documentation
Paul Mackerras [Sun, 22 Mar 2009 23:29:36 +0000 (10:29 +1100)]
perf_counter: update documentation

Impact: documentation fix

This updates the perfcounter documentation to reflect recent changes.

Signed-off-by: Paul Mackerras <paulus@samba.org>
15 years ago  perf_counter tools: remove glib dependency and fix bugs in kerneltop.c, fix poll()
Peter Zijlstra [Tue, 24 Mar 2009 09:50:24 +0000 (10:50 +0100)]
perf_counter tools: remove glib dependency and fix bugs in kerneltop.c, fix poll()

Paul Mackerras wrote:

> I noticed the poll stuff is bogus - we have a 2D array of struct
> pollfds (MAX_NR_CPUS x MAX_COUNTERS), we fill in a sub-array (with the
> rest being uninitialized, since the array is on the stack) and then
> pass the first nr_cpus elements to poll.  Not what we really meant, I
> suspect. :)  Not even if we only have one counter, since it's the
> counter dimension that varies fastest.

This should fix the most obvious poll fubar.. not enough to fix the
full problem though..

Reported-by: Paul Mackerras <paulus@samba.org>
Reported-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Orig-LKML-Reference: <18888.29986.340328.540512@cargo.ozlabs.ibm.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years ago  perf_counter tools: remove glib dependency and fix bugs in kerneltop.c
Paul Mackerras [Tue, 24 Mar 2009 05:52:34 +0000 (16:52 +1100)]
perf_counter tools: remove glib dependency and fix bugs in kerneltop.c

The glib dependency in kerneltop.c is only for a little bit of list
manipulation, and I find it inconvenient.  This adds a 'next' field to
struct source_line, which lets us link them together into a list.  The
code to do the linking ourselves turns out to be no longer or more
difficult than using glib.

This also fixes a few other problems:

- We need to #include <limits.h> to get PATH_MAX on powerpc.

- We need to #include <linux/types.h> rather than have our own
  definitions of __u64 and __s64; on powerpc the installed headers
  define them to be unsigned long and long respectively, and if we
  have our own, different definition here that causes a compile error.

- This takes out the x86 setting of errno from -ret in
  sys_perf_counter_open.  My experiments on x86 indicate that the
  glibc syscall() does this for us already.

- We had two CPU migration counters in the default set, which seems
  unnecessary; I changed one of them to a context switch counter.

- In perfstat mode we were printing CPU cycles and instructions as
  milliseconds, and the cpu clock and task clock counters as events.
  This fixes that.

- In perfstat mode we were still printing a blank line after the first
  counter, which was a holdover from when a task clock counter was
  automatically included as the first counter.  This removes the blank
  line.

- On a test machine here, parse_symbols() and parse_vmlinux() were
  taking long enough (almost 0.5 seconds) for the mmap buffer to
  overflow before we got to the first mmap_read() call, so this moves
  them before we open all the counters.

- The error message if sys_perf_counter_open fails needs to use errno,
  not -fd[i][counter].

Signed-off-by: Paul Mackerras <paulus@samba.org>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Mike Galbraith <efault@gmx.de>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Orig-LKML-Reference: <18888.29986.340328.540512@cargo.ozlabs.ibm.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years ago  perf_counter tools: increase cpu-cycles again
Ingo Molnar [Mon, 23 Mar 2009 21:29:50 +0000 (22:29 +0100)]
perf_counter tools: increase cpu-cycles again

Commit b7368fdd7d decreased the CPU cycles interval 100-fold, but
this is causing kerneltop failures on my Nehalem box:

 aldebaran:/home/mingo/linux/linux/Documentation/perf_counter>
 ./kerneltop
 KernelTop refresh period: 2 seconds
 ERROR: failed to keep up with mmap data

10,000 cycles is way too short.

What we should do instead on mostly-idle systems is some sort of
read/poll timeout, so that we display something every 2 seconds
for sure.

Cc: Wu Fengguang <fengguang.wu@intel.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years ago  perf_counter tools: fix build warning in kerneltop.c
Ingo Molnar [Mon, 23 Mar 2009 21:23:16 +0000 (22:23 +0100)]
perf_counter tools: fix build warning in kerneltop.c

Fix:

 kerneltop.c: In function 'record_ip':
 kerneltop.c:1005: warning: format '%016llx' expects type 'long long unsigned int', but argument 2 has type 'uint64_t'

Cc: Wu Fengguang <fengguang.wu@intel.com>
Cc: Paul Mackerras <paulus@samba.org>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Orig-LKML-Reference: <20090323172417.677932499@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years ago  perf_counter tools: tidy up in-kernel dependencies
Ingo Molnar [Mon, 23 Mar 2009 20:49:25 +0000 (21:49 +0100)]
perf_counter tools: tidy up in-kernel dependencies

Remove the now-unified perfstat.c and perf_counter.h, and link to the
in-kernel perf_counter.h.

Cc: Wu Fengguang <fengguang.wu@intel.com>
Cc: Paul Mackerras <paulus@samba.org>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Orig-LKML-Reference: <20090323172417.677932499@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years ago  perf_counter tools: use mmap() output
Peter Zijlstra [Mon, 23 Mar 2009 17:22:12 +0000 (18:22 +0100)]
perf_counter tools: use mmap() output

update kerneltop to use the mmap() output to gather overflow information

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Wu Fengguang <fengguang.wu@intel.com>
Cc: Paul Mackerras <paulus@samba.org>
Orig-LKML-Reference: <20090323172417.677932499@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years ago  perf_counter tools: update to new syscall ABI
Peter Zijlstra [Mon, 23 Mar 2009 17:22:11 +0000 (18:22 +0100)]
perf_counter tools: update to new syscall ABI

update the kerneltop userspace to work with the latest syscall ABI

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Wu Fengguang <fengguang.wu@intel.com>
Cc: Paul Mackerras <paulus@samba.org>
Orig-LKML-Reference: <20090323172417.559643732@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years ago  perf_counter: new output ABI - part 1
Peter Zijlstra [Mon, 23 Mar 2009 17:22:10 +0000 (18:22 +0100)]
perf_counter: new output ABI - part 1

Impact: Rework the perfcounter output ABI

use sys_read() only for instant data and provide mmap() output for all
async overflow data.

The first mmap() determines the size of the output buffer. The mmap()
size must be a PAGE_SIZE multiple of 1+pages, where pages must be a
power of 2 or 0. Further mmap()s of the same fd must have the same
size. Once all maps are gone, you can again mmap() with a new size.

In case of 0 extra pages there is no data output and the first page
only contains meta data.

When there are data pages, a poll() event will be generated for each
full page of data. Furthermore, the output is circular. This means
that although 1 page is a valid configuration, it's useless, since
we'll start overwriting it the instant we report a full page.

Future work will focus on the output format (currently maintained)
where we'll likely want each entry denoted by a header which includes a
type and length.

Further future work will allow splicing the fd, also containing the
async overflow data -- splice() would be mutually exclusive with
mmap() of the data.
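
A userspace sketch of the sizing rule above (one metadata page plus a
power-of-two number of data pages):

    long page_size  = sysconf(_SC_PAGESIZE);
    int  data_pages = 8;            /* must be a power of 2, or 0          */

    void *base = mmap(NULL, (1 + data_pages) * page_size,
                      PROT_READ, MAP_SHARED, fd, 0);
    /* base:             the metadata page                                 */
    /* base + page_size: start of the circular data area (if any)          */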

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Orig-LKML-Reference: <20090323172417.470536358@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years ago  mutex: drop "inline" from mutex_lock() inside kernel/mutex.c
H. Peter Anvin [Thu, 2 Apr 2009 00:21:56 +0000 (17:21 -0700)]
mutex: drop "inline" from mutex_lock() inside kernel/mutex.c

Impact: build fix

mutex_lock() was defined inline in kernel/mutex.c, but wasn't
declared as such in <linux/mutex.h>.  This didn't cause a problem until
checkin 3a2d367d9aabac486ac4444c6c7ec7a1dab16267 added the
atomic_dec_and_mutex_lock() inline in between declaration and
definition.

This broke building with CONFIG_ALLOW_WARNINGS=n, e.g. make
allnoconfig.

Neither in the source code nor in the allnoconfig binary output can I
find any internal references to mutex_lock() in kernel/mutex.c, so
presumably this "inline" is now-useless legacy.

Cc: Eric Paris <eparis@redhat.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Orig-LKML-Reference: <tip-3a2d367d9aabac486ac4444c6c7ec7a1dab16267@git.kernel.org>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
15 years ago  mutex: add atomic_dec_and_mutex_lock()
Eric Paris [Mon, 23 Mar 2009 17:22:09 +0000 (18:22 +0100)]
mutex: add atomic_dec_and_mutex_lock()

Much like the atomic_dec_and_lock() function, in which we take and hold a
spin_lock if we drop the atomic to 0, this function takes and holds the
mutex if we dec the atomic to 0.
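
Usage sketch (the object and helper names are illustrative): the mutex is
held on return only when the count actually dropped to zero:

    if (atomic_dec_and_mutex_lock(&obj->refcount, &obj->mutex)) {
            /* count hit 0 and obj->mutex is now held: safe to tear down */
            teardown(obj);
            mutex_unlock(&obj->mutex);
    }
    /* otherwise the count was only decremented and no lock was taken */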

Signed-off-by: Eric Paris <eparis@redhat.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Orig-LKML-Reference: <20090323172417.410913479@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years ago  perf_counter: add an mmap method to allow userspace to read hardware counters
Paul Mackerras [Mon, 23 Mar 2009 17:22:08 +0000 (18:22 +0100)]
perf_counter: add an mmap method to allow userspace to read hardware counters

Impact: new feature giving performance improvement

This adds the ability for userspace to do an mmap on a hardware counter
fd and get access to a read-only page that contains the information
needed to translate a hardware counter value to the full 64-bit
counter value that would be returned by a read on the fd.  This is
useful on architectures that allow user programs to read the hardware
counters, such as PowerPC.

The mmap will only succeed if the counter is a hardware counter
monitoring the current process.

On my quad 2.5GHz PowerPC 970MP machine, userspace can read a counter
and translate it to the full 64-bit value in about 30ns using the
mmapped page, compared to about 830ns for the read syscall on the
counter, so this does give a significant performance improvement.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Orig-LKML-Reference: <20090323172417.297057964@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years ago  perf_counter: avoid recursion
Peter Zijlstra [Mon, 23 Mar 2009 17:22:07 +0000 (18:22 +0100)]
perf_counter: avoid recursion

Tracepoint events like lock_acquire and software counters like
pagefaults can recurse into the perf counter code again; avoid that.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Orig-LKML-Reference: <20090323172417.152096433@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years ago  perf_counter: remove the event config bitfields
Peter Zijlstra [Mon, 23 Mar 2009 17:22:06 +0000 (18:22 +0100)]
perf_counter: remove the event config bitfields

Since the bitfields turned into a bit of a mess, remove them and rely on
good old masks.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Orig-LKML-Reference: <20090323172417.059499915@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years ago  perf_counter tools: when no command is fed to perfstat, display help and exit
Wu Fengguang [Fri, 20 Mar 2009 02:08:10 +0000 (10:08 +0800)]
perf_counter tools: when no command is fed to perfstat, display help and exit

Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years ago  perf_counter tools: cut down default count for cpu-cycles
Wu Fengguang [Fri, 20 Mar 2009 02:08:09 +0000 (10:08 +0800)]
perf_counter tools: cut down default count for cpu-cycles

On my system, it takes kerneltop dozens of minutes to
show usable numbers. Making the default count 100 times
smaller fixed this long startup latency.

I'm not sure if it's the right solution though.

Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years ago  perf_counter tools: fix event_id type
Wu Fengguang [Fri, 20 Mar 2009 02:08:08 +0000 (10:08 +0800)]
perf_counter tools: fix event_id type

Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years ago  perf_counter tools: fix comment for sym_weight()
Wu Fengguang [Fri, 20 Mar 2009 02:08:07 +0000 (10:08 +0800)]
perf_counter tools: fix comment for sym_weight()

Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years ago  perf_counter tools: move remaining code into kerneltop.c
Wu Fengguang [Fri, 20 Mar 2009 02:08:06 +0000 (10:08 +0800)]
perf_counter tools: move remaining code into kerneltop.c

- perfstat.c can be safely removed now
- perfstat: -s => -a for system wide accounting
- kerneltop: add -S/--stat for perfstat mode
- minor adjustments to kerneltop --help, perfstat --help

Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years ago  perf_counter tools: Reuse event_name() in kerneltop
Wu Fengguang [Fri, 20 Mar 2009 02:08:05 +0000 (10:08 +0800)]
perf_counter tools: Reuse event_name() in kerneltop

- can handle sw counters now
- the outputs will look slightly different

Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years ago  perf_counter tools: support symbolic event names in kerneltop
Wu Fengguang [Fri, 20 Mar 2009 02:08:04 +0000 (10:08 +0800)]
perf_counter tools: support symbolic event names in kerneltop

- kerneltop: --event_id => --event
- kerneltop: can accept SW event types now
- perfstat: it used to implicitly add event -2 (task-clock),
    the new code no longer does this. Shall we?

Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years ago  perf_counter tools: Move perfstat supporting code into perfcounters.h
Wu Fengguang [Fri, 20 Mar 2009 02:08:03 +0000 (10:08 +0800)]
perf_counter tools: Move perfstat supporting code into perfcounters.h

Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years ago  perf_counter tools: Merge common code into perfcounters.h
Wu Fengguang [Fri, 20 Mar 2009 02:08:02 +0000 (10:08 +0800)]
perf_counter tools: Merge common code into perfcounters.h

kerneltop's MAX_COUNTERS is increased from 8 to 64 (the value used by perfstat).

Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years ago  perf_counter: add sample user-space to Documentation/perf_counter/
Ingo Molnar [Mon, 23 Mar 2009 20:29:59 +0000 (21:29 +0100)]
perf_counter: add sample user-space to Documentation/perf_counter/

Initial version of kerneltop.c and perfstat.c.

Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years ago  perf_counter: create Documentation/perf_counter/ and move perfcounters.txt there
Ingo Molnar [Mon, 23 Mar 2009 20:26:52 +0000 (21:26 +0100)]
perf_counter: create Documentation/perf_counter/ and move perfcounters.txt there

We'll have more files in that directory, prepare for that.

Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years ago  perf_counter: fix type/event_id layout on big-endian systems
Paul Mackerras [Sat, 21 Mar 2009 04:31:47 +0000 (15:31 +1100)]
perf_counter: fix type/event_id layout on big-endian systems

Impact: build fix for powerpc

Commit db3a944aca35ae61 ("perf_counter: revamp syscall input ABI")
expanded the hw_event.type field into a union of structs containing
bitfields.  In particular it introduced a type field and a raw_type
field, with the intention that the 1-bit raw_type field should
overlay the most-significant bit of the 8-bit type field, and in fact
perf_counter_alloc() now assumes that (or at least, assumes that
raw_type doesn't overlay any of the bits that are 1 in the values of
PERF_TYPE_{HARDWARE,SOFTWARE,TRACEPOINT}).

Unfortunately this is not true on big-endian systems such as PowerPC,
where bitfields are laid out from left to right, i.e. from most
significant bit to least significant.  This means that setting
hw_event.type = PERF_TYPE_SOFTWARE will set hw_event.raw_type to 1.

This fixes it by making the layout depend on whether or not
__BIG_ENDIAN_BITFIELD is defined.  It's a bit ugly, but that's what
we get for using bitfields in a user/kernel ABI.
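
A simplified illustration of the issue (this is not the actual hw_event
layout): to keep a 1-bit field on the most-significant bit, the declaration
order has to flip with the bitfield endianness:

    struct example {
    #ifdef __BIG_ENDIAN_BITFIELD
            __u64   raw_type :  1,          /* MSB: declared first on BE */
                    type     :  7,
                    rest     : 56;
    #else
            __u64   rest     : 56,
                    type     :  7,
                    raw_type :  1;          /* MSB: declared last on LE  */
    #endif
    };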

Also, that commit didn't fix up some places in arch/powerpc/kernel/
perf_counter.c where hw_event.raw and hw_event.event_id were used.
This fixes them too.

Signed-off-by: Paul Mackerras <paulus@samba.org>
15 years ago  perf_counter: powerpc: clean up perf_counter_interrupt
Paul Mackerras [Thu, 19 Mar 2009 19:26:20 +0000 (20:26 +0100)]
perf_counter: powerpc: clean up perf_counter_interrupt

Impact: cleanup

This updates the powerpc perf_counter_interrupt following on from the
"perf_counter: unify irq output code" patch.  Since we now use the
generic perf_counter_output code, which sets the perf_counter_pending
flag directly, we no longer need the need_wakeup variable.

This removes need_wakeup and makes perf_counter_interrupt use
get_perf_counter_pending() instead.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Steven Rostedt <rostedt@goodmis.org>
Orig-LKML-Reference: <20090319194234.024464535@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years ago  perf_counter: unify irq output code
Peter Zijlstra [Thu, 19 Mar 2009 19:26:19 +0000 (20:26 +0100)]
perf_counter: unify irq output code

Impact: cleanup

Having 3 slightly different copies of the same code around does nobody
any good. First step in revamping the output format.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Orig-LKML-Reference: <20090319194233.929962222@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years ago  perf_counter: revamp syscall input ABI
Peter Zijlstra [Thu, 19 Mar 2009 19:26:18 +0000 (20:26 +0100)]
perf_counter: revamp syscall input ABI

Impact: modify ABI

The hardware/software classification in hw_event->type became a little
strained due to the addition of tracepoint tracing.

Instead split up the field and provide a type field to explicitly specify
the counter type, while using the event_id field to specify which event to
use.

Raw counters still work as before, only the raw config now goes into
raw_event.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Orig-LKML-Reference: <20090319194233.836807573@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years ago  perf_counter: hook up the tracepoint events
Peter Zijlstra [Thu, 19 Mar 2009 19:26:17 +0000 (20:26 +0100)]
perf_counter: hook up the tracepoint events

Impact: new perfcounters feature

Enable usage of tracepoints as perf counter events.

tracepoint event ids can be found in /debug/tracing/event/*/*/id
and (for now) are represented as -65536+id in the type field.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Orig-LKML-Reference: <20090319194233.744044174@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years ago  perf_counter: fix up counter free paths
Peter Zijlstra [Thu, 19 Mar 2009 19:26:16 +0000 (20:26 +0100)]
perf_counter: fix up counter free paths

Impact: fix crash during perfcounters use

I found another counter free path; create a free_counter() call to
accommodate generic tear-down.

Fixes an RCU bug.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Orig-LKML-Reference: <20090319194233.652078652@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years ago  perf_counter: generic context switch event
Peter Zijlstra [Thu, 19 Mar 2009 19:26:12 +0000 (20:26 +0100)]
perf_counter: generic context switch event

Impact: cleanup

Use the generic software events for context switches.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Orig-LKML-Reference: <20090319194233.283522645@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years ago  perf_counter: fix uninitialized usage of event_list
Peter Zijlstra [Thu, 19 Mar 2009 19:26:11 +0000 (20:26 +0100)]
perf_counter: fix uninitialized usage of event_list

Impact: fix boot crash

When doing the generic context switch event I ran into some early
boot hangs, which were caused by infinite function recursion (event,
fault, event, fault).

I eventually tracked it down to event_list not being initialized
at the time of the first event. Fix this.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Orig-LKML-Reference: <20090319194233.195392657@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years ago  perf_counter: abstract wakeup flag setting in core to fix powerpc build
Paul Mackerras [Mon, 16 Mar 2009 10:00:00 +0000 (21:00 +1100)]
perf_counter: abstract wakeup flag setting in core to fix powerpc build

Impact: build fix for powerpc

Commit bd753921015e7905 ("perf_counter: software counter event
infrastructure") introduced a use of TIF_PERF_COUNTERS into the core
perfcounter code.  This breaks the build on powerpc because we use
a flag in a per-cpu area to signal wakeups on powerpc rather than
a thread_info flag, because the thread_info flags have to be
manipulated with atomic operations and are thus slower than per-cpu
flags.

This fixes the problem by changing the core to use an abstracted
set_perf_counter_pending() function, which is defined on x86 to set
the TIF_PERF_COUNTERS flag and on powerpc to set the per-cpu flag
(paca->perf_counter_pending).  It changes the previous powerpc
definition of set_perf_counter_pending to not take an argument and
adds a clear_perf_counter_pending, so as to simplify the definition
on x86.

On x86, set_perf_counter_pending() is defined as a macro.  Defining
it as a static inline in arch/x86/include/asm/perf_counters.h causes
compile failures because <asm/perf_counters.h> gets included early in
<linux/sched.h>, and the definitions of set_tsk_thread_flag etc. are
therefore not available in <asm/perf_counters.h>.  (On powerpc this
problem is avoided by defining set_perf_counter_pending etc. in
<asm/hw_irq.h>.)
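
Roughly, the per-arch definitions as described here (exact header
placement aside):

/* x86: thread_info flag; kept as macros because of the header
 * ordering issue described above */
#define set_perf_counter_pending()	\
	set_tsk_thread_flag(current, TIF_PERF_COUNTERS)
#define clear_perf_counter_pending()	\
	clear_tsk_thread_flag(current, TIF_PERF_COUNTERS)

/* powerpc: cheap per-cpu flag in the paca, no atomic ops needed */
static inline void set_perf_counter_pending(void)
{
	get_paca()->perf_counter_pending = 1;
}

static inline void clear_perf_counter_pending(void)
{
	get_paca()->perf_counter_pending = 0;
}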

Signed-off-by: Paul Mackerras <paulus@samba.org>
15 years agoperf_counter: fix crash on perfmon v1 systems
Ingo Molnar [Wed, 18 Mar 2009 07:59:21 +0000 (08:59 +0100)]
perf_counter: fix crash on perfmon v1 systems

Impact: fix boot crash on Intel Perfmon Version 1 systems

Intel Perfmon v1 does not support the global MSRs, nor does
it offer the generalized MSR ranges. So support v2 and later
CPUs only.

Also mark pmc_ops as read-mostly, to avoid false cacheline
sharing.
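
A rough sketch of the two points above; the ops struct and the helper
name are illustrative, not necessarily what this tree uses:

static struct pmc_x86_ops *pmc_ops __read_mostly;	/* avoid false sharing */

static int intel_perfmon_version_ok(void)
{
	unsigned int eax, ebx, ecx, edx;

	/* CPUID leaf 0xa: architectural perfmon; version id in EAX[7:0] */
	cpuid(0xa, &eax, &ebx, &ecx, &edx);
	return (eax & 0xff) >= 2;
}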

Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years agoperf_counter: include missing header
Tim Blechmann [Sat, 14 Mar 2009 13:29:25 +0000 (14:29 +0100)]
perf_counter: include missing header

Impact: build fix

In order to compile a kernel with performance counter patches,
<asm/irq_regs.h> has to be included to provide the declaration of
struct pt_regs *get_irq_regs(void);

[ This bug was masked by unrelated x86 header file changes in the
  x86 tree, but occurs in the tip:perfcounters/core standalone
  tree. ]

Signed-off-by: Tim Blechmann <tim@klingt.org>
Orig-LKML-Reference: <20090314142925.49c29c17@thinkpad>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years agoperf_counter: fix hrtimer sampling
Peter Zijlstra [Fri, 13 Mar 2009 15:43:47 +0000 (16:43 +0100)]
perf_counter: fix hrtimer sampling

Impact: fix deadlock with perfstat

Fix for the perfstat fubar.

We cannot unconditionally call hrtimer_cancel() without ever having done
hrtimer_init() on the thing.
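
In sketch form (names illustrative): initialize the hrtimer once when
the software counter is set up, so a later cancel on the
disable/teardown path always operates on an initialized timer:

	/* counter setup */
	hrtimer_init(&hwc->hrtimer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);

	/* disable/teardown: only valid because the init above always ran */
	hrtimer_cancel(&hwc->hrtimer);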

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Orig-LKML-Reference: <1236959027.22447.149.camel@twins>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years agoperf_counter: add an event_list
Peter Zijlstra [Fri, 13 Mar 2009 11:21:36 +0000 (12:21 +0100)]
perf_counter: add an event_list

I noticed that the counter_list only includes top-level counters, so
perf_swcounter_event() will miss sw-counters in groups.

Since perf_swcounter_event() also wants an RCU-safe list, create a new
event_list that includes all counters, use RCU list ops for it, and use
call_rcu to free the counter structure.
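
In sketch form, with illustrative field and helper names, the RCU usage
this implies:

static void free_counter_rcu(struct rcu_head *head)
{
	struct perf_counter *counter;

	counter = container_of(head, struct perf_counter, rcu_head);
	kfree(counter);
}

	/* add side: every counter, not just group leaders */
	list_add_rcu(&counter->event_entry, &ctx->event_list);

	/* read side, e.g. from perf_swcounter_event() */
	rcu_read_lock();
	list_for_each_entry_rcu(counter, &ctx->event_list, event_entry)
		perf_swcounter_update(counter);	/* placeholder per-counter work */
	rcu_read_unlock();

	/* free side: defer the kfree() until all readers are done */
	call_rcu(&counter->rcu_head, free_counter_rcu);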

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years agoperf_counter: hrtimer based sampling for software time events
Peter Zijlstra [Fri, 13 Mar 2009 11:21:35 +0000 (12:21 +0100)]
perf_counter: hrtimer based sampling for software time events

Use hrtimers to provide timer-based sampling for the software time
counters.

This allows platforms without hardware counter support to still
perform sample based profiling.
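
A minimal sketch of such a sampling hrtimer callback; the
sample-emitting helper and the field names are illustrative, only the
hrtimer API calls are real:

static enum hrtimer_restart perf_swcounter_hrtimer(struct hrtimer *hrtimer)
{
	struct perf_counter *counter;

	counter = container_of(hrtimer, struct perf_counter, hw.hrtimer);

	emit_sample(counter, get_irq_regs());	/* placeholder for the output path */

	/* re-arm for the next sampling period */
	hrtimer_forward_now(hrtimer, ns_to_ktime(counter->hw.irq_period));
	return HRTIMER_RESTART;
}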

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years agoperf_counter: provide major/minor page fault software events
Peter Zijlstra [Fri, 13 Mar 2009 11:21:34 +0000 (12:21 +0100)]
perf_counter: provide major/minor page fault software events

Provide separate sw counters for major and minor page faults.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years agoperf_counter: provide pagefault software events
Peter Zijlstra [Fri, 13 Mar 2009 11:21:33 +0000 (12:21 +0100)]
perf_counter: provide pagefault software events

We use the generic software counter infrastructure to provide
page fault events.
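
In sketch form this is a call into the software counter code from the
arch fault handler; together with the separate major/minor counters
from the entry above, it could look roughly like this (the PERF_COUNT_*
names and the call signature are assumed for illustration):

static inline void count_page_fault(struct pt_regs *regs, int major)
{
	perf_swcounter_event(PERF_COUNT_PAGE_FAULTS, 1, 0, regs);

	if (major)
		perf_swcounter_event(PERF_COUNT_PAGE_FAULTS_MAJ, 1, 0, regs);
	else
		perf_swcounter_event(PERF_COUNT_PAGE_FAULTS_MIN, 1, 0, regs);
}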

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years agoperf_counter: software counter event infrastructure
Peter Zijlstra [Fri, 13 Mar 2009 11:21:32 +0000 (12:21 +0100)]
perf_counter: software counter event infrastructure

Provide generic software counter infrastructure that supports
software events.

This will be used to allow sample-based profiling of software events
such as pagefaults. The current infrastructure can only provide a
count of such events, with no place information.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years agoperf_counter: x86: use ULL postfix for 64bit constants
Peter Zijlstra [Fri, 13 Mar 2009 11:21:31 +0000 (12:21 +0100)]
perf_counter: x86: use ULL postfix for 64bit constants

Fix a build warning on 32bit machines by explicitly marking the
constants as 64-bit.
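
Illustrative example of the kind of constant involved (the name is
made up):

#define CNTR_MASK	((1ULL << 42) - 1)	/* ULL: still 64-bit on a 32-bit build */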

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years agoperf_counter: add comment to barrier
Peter Zijlstra [Fri, 13 Mar 2009 11:21:30 +0000 (12:21 +0100)]
perf_counter: add comment to barrier

We need to ensure the enabled=0 write happens before we
start disabling the actual counters, so that a pcm_amd_enable()
will not enable one underneath us.

I think the race is impossible anyway; we always balance the ops
within any one context and perform enable() with IRQs disabled.
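
The ordering in question, as a sketch with illustrative structure and
names:

	cpuc->enabled = 0;
	/*
	 * Make sure the enabled=0 store is emitted before we start poking
	 * the individual counters, so a concurrent enable path sees the
	 * context as disabled and won't re-enable one underneath us.
	 */
	barrier();

	for (idx = 0; idx < nr_counters; idx++)
		disable_counter(cpuc, idx);	/* placeholder per-counter disable */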

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years agoperf_counter: use list_move_tail()
Peter Zijlstra [Fri, 13 Mar 2009 11:21:29 +0000 (12:21 +0100)]
perf_counter: use list_move_tail()

Instead of del/add use a move list-op.
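
In generic terms (field names as used elsewhere in the perf_counter
code, shown for illustration):

	/* before */
	list_del(&counter->list_entry);
	list_add_tail(&counter->list_entry, &ctx->counter_list);

	/* after */
	list_move_tail(&counter->list_entry, &ctx->counter_list);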

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years agoperf_counter: x86: fix 32-bit irq_period assumption
Peter Zijlstra [Fri, 13 Mar 2009 11:21:28 +0000 (12:21 +0100)]
perf_counter: x86: fix 32-bit irq_period assumption

No need to assume the irq_period is 32-bit.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years agoMerge branch 'linus' into perfcounters/core-v2
Ingo Molnar [Mon, 6 Apr 2009 07:02:57 +0000 (09:02 +0200)]
Merge branch 'linus' into perfcounters/core-v2

Merge reason: we have gathered quite a few conflicts, need to merge upstream

Conflicts:
arch/powerpc/kernel/Makefile
arch/x86/ia32/ia32entry.S
arch/x86/include/asm/hardirq.h
arch/x86/include/asm/unistd_32.h
arch/x86/include/asm/unistd_64.h
arch/x86/kernel/cpu/common.c
arch/x86/kernel/irq.c
arch/x86/kernel/syscall_table_32.S
arch/x86/mm/iomap_32.c
include/linux/sched.h
kernel/Makefile

Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years agoMerge branch 'audit.b62' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/audit...
Linus Torvalds [Sun, 5 Apr 2009 19:36:11 +0000 (12:36 -0700)]
Merge branch 'audit.b62' of git://git./linux/kernel/git/viro/audit-current

* 'audit.b62' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/audit-current:
  Audit: remove spaces from audit_log_d_path
  audit: audit_set_auditable defined but not used
  audit: incorrect ref counting in audit tree tag_chunk
  audit: Fix possible return value truncation in audit_get_context()
  audit: ignore terminating NUL in AUDIT_USER_TTY messages
  Audit: fix handling of 'strings' with NULL characters
  make the e->rule.xxx shorter in kernel auditfilter.c
  auditsc: fix kernel-doc notation
  audit: EXECVE record - removed bogus newline

15 years agoMerge branch 'for-next' of git://git.o-hand.com/linux-mfd
Linus Torvalds [Sun, 5 Apr 2009 18:38:37 +0000 (11:38 -0700)]
Merge branch 'for-next' of git://git.o-hand.com/linux-mfd

* 'for-next' of git://git.o-hand.com/linux-mfd:
  mfd: fix da903x warning
  mfd: fix MAINTAINERS entry
  mfd: Use the value of the final spin when reading the AUXADC
  mfd: Storage class should be before const qualifier
  mfd: PASIC3: supply clock_rate to DS1WM via driver_data
  mfd: remove DS1WM clock handling
  mfd: remove unused PASIC3 bus_shift field
  pxa/magician: remove deprecated .bus_shift from PASIC3 platform_data
  mfd: convert PASIC3 to use MFD core
  mfd: convert DS1WM to use MFD core
  mfd: Support active high IRQs on WM835x
  mfd: Use bulk read to fill WM8350 register cache
  mfd: remove duplicated #include from pcf50633

15 years agoMerge branch 'for-linus' of git://repo.or.cz/cris-mirror
Linus Torvalds [Sun, 5 Apr 2009 18:36:31 +0000 (11:36 -0700)]
Merge branch 'for-linus' of git://repo.or.cz/cris-mirror

* 'for-linus' of git://repo.or.cz/cris-mirror:
  CRISv32: Remove extraneous space between -I and the path.
  cris: convert obsolete hw_interrupt_type to struct irq_chip
  BUG to BUG_ON changes
  cpumask: use mm_cpumask() wrapper: cris
  cpumask: Use accessors code.: cris
  cpumask: prepare for iterators to only go to nr_cpu_ids/nr_cpumask_bits.: cris

15 years agoMerge branch 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/lenb/linux...
Linus Torvalds [Sun, 5 Apr 2009 18:16:25 +0000 (11:16 -0700)]
Merge branch 'release' of git://git./linux/kernel/git/lenb/linux-acpi-2.6

* 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/lenb/linux-acpi-2.6: (140 commits)
  ACPI: processor: use .notify method instead of installing handler directly
  ACPI: button: use .notify method instead of installing handler directly
  ACPI: support acpi_device_ops .notify methods
  toshiba-acpi: remove MAINTAINERS entry
  ACPI: battery: asynchronous init
  acer-wmi: Update copyright notice & documentation
  acer-wmi: Cleanup the failure cleanup handling
  acer-wmi: Blacklist Acer Aspire One
  video: build fix
  thinkpad-acpi: rework brightness support
  thinkpad-acpi: enhanced debugging messages for the fan subdriver
  thinkpad-acpi: enhanced debugging messages for the hotkey subdriver
  thinkpad-acpi: enhanced debugging messages for rfkill subdrivers
  thinkpad-acpi: restrict access to some firmware LEDs
  thinkpad-acpi: remove HKEY disable functionality
  thinkpad-acpi: add new debug helpers and warn of deprecated atts
  thinkpad-acpi: add missing log levels
  thinkpad-acpi: cleanup debug helpers
  thinkpad-acpi: documentation cleanup
  thinkpad-acpi: drop ibm-acpi alias
  ...

15 years agoMerge git://git.kernel.org/pub/scm/linux/kernel/git/lethal/sh-2.6
Linus Torvalds [Sun, 5 Apr 2009 18:15:54 +0000 (11:15 -0700)]
Merge git://git./linux/kernel/git/lethal/sh-2.6

* git://git.kernel.org/pub/scm/linux/kernel/git/lethal/sh-2.6: (23 commits)
  sh: sh7785lcr: Map whole PCI address space.
  sh: Fix up DSP context save/restore.
  sh: Fix up number of on-chip DMA channels on SH7091.
  sh: update defconfigs.
  sh: Kill off broken direct-mapped cache mode.
  sh: Wire up ARCH_HAS_DEFAULT_IDLE for cpuidle.
  sh: Add a command line option for disabling I/O trapping.
  sh: Select ARCH_HIBERNATION_POSSIBLE.
  sh: migor: Fix up CEU use flags.
  input: migor_ts: add wakeup support
  rtc: rtc-sh: use set_irq_wake()
  input: sh_keysc: use enable/disable_irq_wake()
  sh: intc: set_irq_wake() support
  sh: intc: install enable, disable and shutdown callbacks
  clocksource: sh_cmt: use remove_irq() and remove clockevent workaround
  sh: ap325 and Migo-R use new sh_mobile_ceu_info flags
  sh: Fix up -Wformat-security whining.
  sh: ap325rxa: Add ov772x support, again.
  sh: Sanitize asm/mmu.h for assembly use.
  sh: Tidy up sh7786 pinmux table.
  ...

15 years agoMerge branch 'avr32-arch' of git://git.kernel.org/pub/scm/linux/kernel/git/hskinnemoe...
Linus Torvalds [Sun, 5 Apr 2009 18:15:28 +0000 (11:15 -0700)]
Merge branch 'avr32-arch' of git://git./linux/kernel/git/hskinnemoen/avr32-2.6

* 'avr32-arch' of git://git.kernel.org/pub/scm/linux/kernel/git/hskinnemoen/avr32-2.6:
  avr32: add hardware handshake support to atmel_serial
  avr32: add RTS/CTS/CLK pin selection for the USARTs
  Add RTC support for Merisc boards
  avr32: at32ap700x: setup DMA for AC97C in the machine code
  avr32: at32ap700x: setup DMA for ABDAC in the machine code
  Add Merisc board support
  avr32: use gpio_is_valid() to check USBA vbus_pin I/O line
  atmel-usba-udc: use gpio_is_valid() to check vbus_pin I/O line
  avr32: fix timing LCD parameters for EVKLCD10X boards
  avr32: use GPIO line PB15 on EVKLCD10x boards for backlight
  avr32: configure MCI detect and write protect pins for EVKLCD10x boards
  avr32: set pin mask to alternative 18 bpp for EVKLCD10x boards
  avr32: add pin mask for 18-bit color on the LCD controller
  avr32: fix 15-bit LCDC pin mask to use MSB lines

15 years agoMerge git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/staging-2.6
Linus Torvalds [Sun, 5 Apr 2009 18:06:45 +0000 (11:06 -0700)]
Merge git://git./linux/kernel/git/gregkh/staging-2.6

* git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/staging-2.6: (714 commits)
  Staging: sxg: slicoss: Specify the license for Sahara SXG and Slicoss drivers
  Staging: serqt_usb: fix build due to proc tty changes
  Staging: serqt_usb: fix checkpatch errors
  Staging: serqt_usb: add TODO file
  Staging: serqt_usb: Lindent the code
  Staging: add USB serial Quatech driver
  staging: document that the wifi staging drivers a bit better
  Staging: echo cleanup
  Staging: BUG to BUG_ON changes
  Staging: remove some pointless conditionals before kfree_skb()
  Staging: line6: fix build error, select SND_RAWMIDI
  Staging: line6: fix checkpatch errors in variax.c
  Staging: line6: fix checkpatch errors in toneport.c
  Staging: line6: fix checkpatch errors in pcm.c
  Staging: line6: fix checkpatch errors in midibuf.c
  Staging: line6: fix checkpatch errors in midi.c
  Staging: line6: fix checkpatch errors in dumprequest.c
  Staging: line6: fix checkpatch errors in driver.c
  Staging: line6: fix checkpatch errors in audio.c
  Staging: line6: fix checkpatch errors in pod.c
  ...

15 years agoMerge branch 'tracing-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git...
Linus Torvalds [Sun, 5 Apr 2009 18:04:19 +0000 (11:04 -0700)]
Merge branch 'tracing-for-linus' of git://git./linux/kernel/git/tip/linux-2.6-tip

* 'tracing-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (413 commits)
  tracing, net: fix net tree and tracing tree merge interaction
  tracing, powerpc: fix powerpc tree and tracing tree interaction
  ring-buffer: do not remove reader page from list on ring buffer free
  function-graph: allow unregistering twice
  trace: make argument 'mem' of trace_seq_putmem() const
  tracing: add missing 'extern' keywords to trace_output.h
  tracing: provide trace_seq_reserve()
  blktrace: print out BLK_TN_MESSAGE properly
  blktrace: extract duplidate code
  blktrace: fix memory leak when freeing struct blk_io_trace
  blktrace: fix blk_probes_ref chaos
  blktrace: make classic output more classic
  blktrace: fix off-by-one bug
  blktrace: fix the original blktrace
  blktrace: fix a race when creating blk_tree_root in debugfs
  blktrace: fix timestamp in binary output
  tracing, Text Edit Lock: cleanup
  tracing: filter fix for TRACE_EVENT_FORMAT events
  ftrace: Using FTRACE_WARN_ON() to check "freed record" in ftrace_release()
  x86: kretprobe-booster interrupt emulation code fix
  ...

Fix up trivial conflicts in
 arch/parisc/include/asm/ftrace.h
 include/linux/memory.h
 kernel/extable.c
 kernel/module.c

15 years agoAudit: remove spaces from audit_log_d_path
Eric Paris [Tue, 10 Mar 2009 22:00:14 +0000 (18:00 -0400)]
Audit: remove spaces from audit_log_d_path

audit_log_d_path had spaces in the strings which would be emitted on the
error paths.  This patch simply replaces those spaces with an _ or removes
the needless spaces entirely.

Signed-off-by: Eric Paris <eparis@redhat.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>