platform/kernel/linux-starfive.git
perf c2c: Update documentation for metrics reorganization
Leo Yan [Thu, 15 Oct 2020 14:45:48 +0000 (15:45 +0100)]
perf c2c: Update documentation for metrics reorganization

The output format for the metrics has been reorganized; update the
documentation to reflect the changes.

Signed-off-by: Leo Yan <leo.yan@linaro.org>
Cc: Al Grant <al.grant@arm.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Don Zickus <dzickus@redhat.com>
Cc: Ian Rogers <irogers@google.com>
Cc: James Clark <james.clark@arm.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Joe Mario <jmario@redhat.com>
Cc: Kan Liang <kan.liang@linux.intel.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lore.kernel.org/lkml/20201015144548.18482-10-leo.yan@linaro.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf c2c: Add metrics "RMT Load Hit"
Leo Yan [Wed, 14 Oct 2020 05:09:21 +0000 (06:09 +0100)]
perf c2c: Add metrics "RMT Load Hit"

The metrics "LLC Ld Miss" and "Load Dram" overlap with each other in the
items they account:

  "LLC Ld Miss" = "lcl_dram" + "rmt_dram" + "rmt_hit" + "rmt_hitm"
  "Load Dram"   = "lcl_dram" + "rmt_dram"

Furthermore, the metric "LLC Ld Miss" is not straightforward for showing
statistics: it contains only a summary value and cannot give a detailed
breakdown.

For this reason, add a new metric "RMT Load Hit" to present remote cache
hits; it contains two items:

  "RMT Load Hit" = remote hit ("rmt_hit") + remote hitm ("rmt_hitm")

As a result, the metric "LLC Ld Miss" is exactly divided into the two
metrics "RMT Load Hit" and "Load Dram".  There is no need to keep the
metric "LLC Ld Miss", so remove it.

Before:

  #        ----------- Cacheline ----------      Tot  ------- Load Hitm -------    Total    Total    Total  ---- Stores ----  ----- Core Load Hit -----  - LLC Load Hit --      LLC  --- Load Dram ----
  # Index             Address  Node  PA cnt     Hitm    Total  LclHitm  RmtHitm  records    Loads   Stores    L1Hit   L1Miss       FB       L1       L2    LclHit  LclHitm  Ld Miss       Lcl       Rmt
  # .....  ..................  ....  ......  .......  .......  .......  .......  .......  .......  .......  .......  .......  .......  .......  .......  ........  .......  .......  ........  ........
  #
        0      0x55f07d580100     0    1499   85.89%      481      481        0     7243     3879     3364     2599      765      548     2615       66       169      481        0         0         0
        1      0x55f07d580080     0       1   13.93%       78       78        0      664      664        0        0        0      187      361       27        11       78        0         0         0
        2      0x55f07d5800c0     0       1    0.18%        1        1        0      405      405        0        0        0      131        0       10       263        1        0         0         0

After:

  #        ----------- Cacheline ----------      Tot  ------- Load Hitm -------    Total    Total    Total  ---- Stores ----  ----- Core Load Hit -----  - LLC Load Hit --  - RMT Load Hit --  --- Load Dram ----
  # Index             Address  Node  PA cnt     Hitm    Total  LclHitm  RmtHitm  records    Loads   Stores    L1Hit   L1Miss       FB       L1       L2    LclHit  LclHitm    RmtHit  RmtHitm       Lcl       Rmt
  # .....  ..................  ....  ......  .......  .......  .......  .......  .......  .......  .......  .......  .......  .......  .......  .......  ........  .......  ........  .......  ........  ........
  #
        0      0x55f07d580100     0    1499   85.89%      481      481        0     7243     3879     3364     2599      765      548     2615       66       169      481         0        0         0         0
        1      0x55f07d580080     0       1   13.93%       78       78        0      664      664        0        0        0      187      361       27        11       78         0        0         0         0
        2      0x55f07d5800c0     0       1    0.18%        1        1        0      405      405        0        0        0      131        0       10       263        1         0        0         0         0

Signed-off-by: Leo Yan <leo.yan@linaro.org>
Tested-by: Joe Mario <jmario@redhat.com>
Acked-by: Jiri Olsa <jolsa@redhat.com>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Link: https://lore.kernel.org/r/20201014050921.5591-9-leo.yan@linaro.org
perf c2c: Correct LLC load hit metrics
Leo Yan [Wed, 14 Oct 2020 05:09:20 +0000 (06:09 +0100)]
perf c2c: Correct LLC load hit metrics

"rmt_hit" is accounted into two metrics: one is accounted into the
metrics "LLC Ld Miss" (see the function llc_miss() for calculation
"llcmiss"); and it's accounted into metrics "LLC Load Hit".  Thus,
for the literal meaning, it is contradictory that "rmt_hit" is
accounted for both "LLC Ld Miss" (LLC miss) and "LLC Load Hit"
(LLC hit).

This easily introduces confusion: "LLC Load Hit" gives the impression
that all of its items are LLC hits, when in fact "rmt_hit" is an LLC
miss that hits in a remote cache.

To give the metric "LLC Load Hit" clear semantics, move "rmt_hit" out of
it, so that "LLC Load Hit" contains two items:

  LLC Load Hit = LLC's hit ("ld_llchit") + LLC's hitm ("lcl_hitm")

For output alignment, adjust the header for "LLC Load Hit".

Signed-off-by: Leo Yan <leo.yan@linaro.org>
Tested-by: Joe Mario <jmario@redhat.com>
Acked-by: Jiri Olsa <jolsa@redhat.com>
Link: https://lore.kernel.org/r/20201014050921.5591-8-leo.yan@linaro.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf c2c: Change header for LLC local hit
Leo Yan [Wed, 14 Oct 2020 05:09:19 +0000 (06:09 +0100)]
perf c2c: Change header for LLC local hit

Replace the header string "Lcl" with "LclHit", which states more
explicitly that the event type is an LLC local hit.

Signed-off-by: Leo Yan <leo.yan@linaro.org>
Tested-by: Joe Mario <jmario@redhat.com>
Acked-by: Jiri Olsa <jolsa@redhat.com>
Link: https://lore.kernel.org/r/20201014050921.5591-7-leo.yan@linaro.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf c2c: Use more explicit headers for HITM
Leo Yan [Wed, 14 Oct 2020 05:09:18 +0000 (06:09 +0100)]
perf c2c: Use more explicit headers for HITM

Local and remote HITM use the headers 'Lcl' and 'Rmt' respectively.  If
we want to extend the tool to display these two dimensions under any
other metric, users cannot understand the semantics based only on the
header string 'Lcl' or 'Rmt'.

To express the meaning of the HITM items explicitly, this patch changes
the header strings to "LclHitm" and "RmtHitm"; these strings are more
readable and allow metrics to be extended with HITM items.

Signed-off-by: Leo Yan <leo.yan@linaro.org>
Tested-by: Joe Mario <jmario@redhat.com>
Acked-by: Jiri Olsa <jolsa@redhat.com>
Link: https://lore.kernel.org/r/20201014050921.5591-6-leo.yan@linaro.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf c2c: Change header from "LLC Load Hitm" to "Load Hitm"
Leo Yan [Wed, 14 Oct 2020 05:09:17 +0000 (06:09 +0100)]
perf c2c: Change header from "LLC Load Hitm" to "Load Hitm"

The metric "LLC Load Hitm" contains two items: one is "local Hitm" and
the other is "remote Hitm".

"local Hitm" means: L3 HIT and was serviced by another processor core
with a cross core snoop where modified copies were found; it's no doubt
that "local Hitm" belongs to LLC access.

But "remote Hitm", based on the code in util/mem-events, is the event
for a remote cache HIT that was serviced by another processor core with
modified copies.  Thus a remote Hitm is a hit in a remote cache, and is
actually an LLC load miss.

The current display format gives users the impression that "local Hitm"
and "remote Hitm" both belong to the LLC load, but as described above
that is not the case.

This patch changes the header from "LLC Load Hitm" to "Load Hitm", which
avoids giving the wrong impression that all Hitm items belong to the LLC.

Signed-off-by: Leo Yan <leo.yan@linaro.org>
Tested-by: Joe Mario <jmario@redhat.com>
Acked-by: Jiri Olsa <jolsa@redhat.com>
Link: https://lore.kernel.org/r/20201014050921.5591-5-leo.yan@linaro.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf c2c: Organize metrics based on memory hierarchy
Leo Yan [Wed, 14 Oct 2020 05:09:16 +0000 (06:09 +0100)]
perf c2c: Organize metrics based on memory hierarchy

The metrics are not organized according to the memory hierarchy: the
tool does not order them from the closest level (e.g. L1/L2 cache) to
the farthest (e.g. L3 cache and DRAM).

To output the metrics in a friendlier form, this patch reorders them
according to the memory hierarchy:

  "Core Load Hit" => "LLC Load Hit" => "LLC Ld Miss" => "Load Dram"

Signed-off-by: Leo Yan <leo.yan@linaro.org>
Tested-by: Joe Mario <jmario@redhat.com>
Acked-by: Jiri Olsa <jolsa@redhat.com>
Link: https://lore.kernel.org/r/20201014050921.5591-4-leo.yan@linaro.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
3 years agoperf c2c: Display "Total Stores" as a standalone metrics
Leo Yan [Wed, 14 Oct 2020 05:09:15 +0000 (06:09 +0100)]
perf c2c: Display "Total Stores" as a standalone metrics

The total store count is displayed under the metric "Store Reference".
To use the same format as the total records and all loads, extract the
total store count as a standalone metric "Total Stores".

After this patch, the tool shows the summary numbers ("Total records",
"Total loads", "Total Stores") in a unified form.

Before:

  #        ----------- Cacheline ----------      Tot  ----- LLC Load Hitm -----    Total    Total  ---- Store Reference ----  --- Load Dram ----      LLC  ----- Core Load Hit -----  -- LLC Load Hit --
  # Index             Address  Node  PA cnt     Hitm    Total      Lcl      Rmt  records    Loads    Total    L1Hit   L1Miss       Lcl       Rmt  Ld Miss       FB       L1       L2       Llc       Rmt
  # .....  ..................  ....  ......  .......  .......  .......  .......  .......  .......  .......  .......  .......  ........  ........  .......  .......  .......  .......  ........  ........
  #
        0      0x55f07d580100     0    1499   85.89%      481      481        0     7243     3879     3364     2599      765         0         0        0      548     2615       66       169         0
        1      0x55f07d580080     0       1   13.93%       78       78        0      664      664        0        0        0         0         0        0      187      361       27        11         0
        2      0x55f07d5800c0     0       1    0.18%        1        1        0      405      405        0        0        0         0         0        0      131        0       10       263         0

After:

  #        ----------- Cacheline ----------      Tot  ----- LLC Load Hitm -----    Total    Total    Total  ---- Stores ----  --- Load Dram ----      LLC  ----- Core Load Hit -----  -- LLC Load Hit --
  # Index             Address  Node  PA cnt     Hitm    Total      Lcl      Rmt  records    Loads   Stores    L1Hit   L1Miss       Lcl       Rmt  Ld Miss       FB       L1       L2       Llc       Rmt
  # .....  ..................  ....  ......  .......  .......  .......  .......  .......  .......  .......  .......  .......  ........  ........  .......  .......  .......  .......  ........  ........
  #
        0      0x55f07d580100     0    1499   85.89%      481      481        0     7243     3879     3364     2599      765         0         0        0      548     2615       66       169         0
        1      0x55f07d580080     0       1   13.93%       78       78        0      664      664        0        0        0         0         0        0      187      361       27        11         0
        2      0x55f07d5800c0     0       1    0.18%        1        1        0      405      405        0        0        0         0         0        0      131        0       10       263         0

Signed-off-by: Leo Yan <leo.yan@linaro.org>
Tested-by: Joe Mario <jmario@redhat.com>
Acked-by: Jiri Olsa <jolsa@redhat.com>
Link: https://lore.kernel.org/r/20201014050921.5591-3-leo.yan@linaro.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf c2c: Display the total numbers continuously
Leo Yan [Wed, 14 Oct 2020 05:09:14 +0000 (06:09 +0100)]
perf c2c: Display the total numbers continuously

When viewing statistics in "breakdown" mode, it is good to show the
summary numbers for the total records, all stores and all loads first,
so that the subsequent columns can break them down into more detailed
items.

To achieve this, this patch displays the summary numbers for
records/stores/loads continuously and places them before the breakdown
items, allowing users to easily read the summarized statistics.

Signed-off-by: Leo Yan <leo.yan@linaro.org>
Tested-by: Joe Mario <jmario@redhat.com>
Acked-by: Jiri Olsa <jolsa@redhat.com>
Link: https://lore.kernel.org/r/20201014050921.5591-2-leo.yan@linaro.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf bench: Use condition variables in numa.
Ian Rogers [Mon, 12 Oct 2020 16:16:11 +0000 (09:16 -0700)]
perf bench: Use condition variables in numa.

The existing approach to synchronization between threads in the numa
benchmark is unbalanced mutexes.

This synchronization causes thread sanitizer to warn of locks being
taken twice on a thread without an unlock, as well as unlocks with no
corresponding locks.

This change replaces the synchronization with more regular condition
variables.

While this fixes one class of thread sanitizer warnings, there still
remain warnings of data races due to threads reading and writing shared
memory without any atomics.
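
As a minimal sketch of the replacement pattern (illustrative only, not
the actual patch code), a counting barrier built on a condition variable
keeps every lock and unlock on the same thread, which is what thread
sanitizer expects:

  #include <pthread.h>

  /*
   * Initialize with PTHREAD_MUTEX_INITIALIZER/PTHREAD_COND_INITIALIZER
   * and nr_threads set; nr_waiting and generation start at 0.
   */
  struct cond_barrier {
          pthread_mutex_t mutex;
          pthread_cond_t  cond;
          unsigned int    nr_waiting;
          unsigned int    nr_threads;
          unsigned int    generation;  /* guards against spurious wakeups */
  };

  static void cond_barrier_wait(struct cond_barrier *b)
  {
          unsigned int gen;

          pthread_mutex_lock(&b->mutex);
          gen = b->generation;
          if (++b->nr_waiting == b->nr_threads) {
                  /* Last thread in: open the barrier for this round. */
                  b->nr_waiting = 0;
                  b->generation++;
                  pthread_cond_broadcast(&b->cond);
          } else {
                  /* pthread_cond_wait() may wake spuriously: re-check. */
                  while (gen == b->generation)
                          pthread_cond_wait(&b->cond, &b->mutex);
          }
          pthread_mutex_unlock(&b->mutex);
  }

Each thread locks and later unlocks the same mutex itself, so no
lock/unlock pair spans two threads.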

Committer testing:

  Basic run on a non-NUMA machine.

  # perf bench numa

          # List of available benchmarks for collection 'numa':

             mem: Benchmark for NUMA workloads
             all: Run all NUMA benchmarks

  # perf bench numa all
  # Running numa/mem benchmark...

   # Running main, "perf bench numa numa-mem"
   #
   # Running test on: Linux five 5.8.12-200.fc32.x86_64 #1 SMP Mon Sep 28 12:17:31 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
   #

   # Running RAM-bw-local, "perf bench numa mem -p 1 -t 1 -P 1024 -C 0 -M 0 -s 20 -zZq --thp  1 --no-data_rand_walk"
           20.076 secs slowest (max) thread-runtime
           20.000 secs fastest (min) thread-runtime
           20.073 secs average thread-runtime
            0.190 % difference between max/avg runtime
          241.828 GB data processed, per thread
          241.828 GB data processed, total
            0.083 nsecs/byte/thread runtime
           12.045 GB/sec/thread speed
           12.045 GB/sec total speed

   # Running RAM-bw-local-NOTHP, "perf bench numa mem -p 1 -t 1 -P 1024 -C 0 -M 0 -s 20 -zZq --thp  1 --no-data_rand_walk --thp -1"
           20.045 secs slowest (max) thread-runtime
           20.000 secs fastest (min) thread-runtime
           20.014 secs average thread-runtime
            0.111 % difference between max/avg runtime
          234.304 GB data processed, per thread
          234.304 GB data processed, total
            0.086 nsecs/byte/thread runtime
           11.689 GB/sec/thread speed
           11.689 GB/sec total speed

   # Running RAM-bw-remote, "perf bench numa mem -p 1 -t 1 -P 1024 -C 0 -M 1 -s 20 -zZq --thp  1 --no-data_rand_walk"

  Test not applicable, system has only 1 nodes.

   # Running RAM-bw-local-2x, "perf bench numa mem -p 2 -t 1 -P 1024 -C 0,2 -M 0x2 -s 20 -zZq --thp  1 --no-data_rand_walk"
           20.138 secs slowest (max) thread-runtime
           20.000 secs fastest (min) thread-runtime
           20.121 secs average thread-runtime
            0.342 % difference between max/avg runtime
          135.961 GB data processed, per thread
          271.922 GB data processed, total
            0.148 nsecs/byte/thread runtime
            6.752 GB/sec/thread speed
           13.503 GB/sec total speed

   # Running RAM-bw-remote-2x, "perf bench numa mem -p 2 -t 1 -P 1024 -C 0,2 -M 1x2 -s 20 -zZq --thp  1 --no-data_rand_walk"

  Test not applicable, system has only 1 nodes.

   # Running RAM-bw-cross, "perf bench numa mem -p 2 -t 1 -P 1024 -C 0,8 -M 1,0 -s 20 -zZq --thp  1 --no-data_rand_walk"

  Test not applicable, system has only 1 nodes.

   # Running  1x3-convergence, "perf bench numa mem -p 1 -t 3 -P 512 -s 100 -zZ0qcm --thp  1"
            0.747 secs latency to NUMA-converge
            0.747 secs slowest (max) thread-runtime
            0.000 secs fastest (min) thread-runtime
            0.714 secs average thread-runtime
           50.000 % difference between max/avg runtime
            3.228 GB data processed, per thread
            9.683 GB data processed, total
            0.231 nsecs/byte/thread runtime
            4.321 GB/sec/thread speed
           12.964 GB/sec total speed

   # Running  1x4-convergence, "perf bench numa mem -p 1 -t 4 -P 512 -s 100 -zZ0qcm --thp  1"
            1.127 secs latency to NUMA-converge
            1.127 secs slowest (max) thread-runtime
            1.000 secs fastest (min) thread-runtime
            1.089 secs average thread-runtime
            5.624 % difference between max/avg runtime
            3.765 GB data processed, per thread
           15.062 GB data processed, total
            0.299 nsecs/byte/thread runtime
            3.342 GB/sec/thread speed
           13.368 GB/sec total speed

   # Running  1x6-convergence, "perf bench numa mem -p 1 -t 6 -P 1020 -s 100 -zZ0qcm --thp  1"
            1.003 secs latency to NUMA-converge
            1.003 secs slowest (max) thread-runtime
            0.000 secs fastest (min) thread-runtime
            0.889 secs average thread-runtime
           50.000 % difference between max/avg runtime
            2.141 GB data processed, per thread
           12.847 GB data processed, total
            0.469 nsecs/byte/thread runtime
            2.134 GB/sec/thread speed
           12.805 GB/sec total speed

   # Running  2x3-convergence, "perf bench numa mem -p 2 -t 3 -P 1020 -s 100 -zZ0qcm --thp  1"
            1.814 secs latency to NUMA-converge
            1.814 secs slowest (max) thread-runtime
            1.000 secs fastest (min) thread-runtime
            1.716 secs average thread-runtime
           22.440 % difference between max/avg runtime
            3.747 GB data processed, per thread
           22.483 GB data processed, total
            0.484 nsecs/byte/thread runtime
            2.065 GB/sec/thread speed
           12.393 GB/sec total speed

   # Running  3x3-convergence, "perf bench numa mem -p 3 -t 3 -P 1020 -s 100 -zZ0qcm --thp  1"
            2.065 secs latency to NUMA-converge
            2.065 secs slowest (max) thread-runtime
            1.000 secs fastest (min) thread-runtime
            1.947 secs average thread-runtime
           25.788 % difference between max/avg runtime
            2.855 GB data processed, per thread
           25.694 GB data processed, total
            0.723 nsecs/byte/thread runtime
            1.382 GB/sec/thread speed
           12.442 GB/sec total speed

   # Running  4x4-convergence, "perf bench numa mem -p 4 -t 4 -P 512 -s 100 -zZ0qcm --thp  1"
            1.912 secs latency to NUMA-converge
            1.912 secs slowest (max) thread-runtime
            1.000 secs fastest (min) thread-runtime
            1.775 secs average thread-runtime
           23.852 % difference between max/avg runtime
            1.479 GB data processed, per thread
           23.668 GB data processed, total
            1.293 nsecs/byte/thread runtime
            0.774 GB/sec/thread speed
           12.378 GB/sec total speed

   # Running  4x4-convergence-NOTHP, "perf bench numa mem -p 4 -t 4 -P 512 -s 100 -zZ0qcm --thp  1 --thp -1"
            1.783 secs latency to NUMA-converge
            1.783 secs slowest (max) thread-runtime
            1.000 secs fastest (min) thread-runtime
            1.633 secs average thread-runtime
           21.960 % difference between max/avg runtime
            1.345 GB data processed, per thread
           21.517 GB data processed, total
            1.326 nsecs/byte/thread runtime
            0.754 GB/sec/thread speed
           12.067 GB/sec total speed

   # Running  4x6-convergence, "perf bench numa mem -p 4 -t 6 -P 1020 -s 100 -zZ0qcm --thp  1"
            5.396 secs latency to NUMA-converge
            5.396 secs slowest (max) thread-runtime
            4.000 secs fastest (min) thread-runtime
            4.928 secs average thread-runtime
           12.937 % difference between max/avg runtime
            2.721 GB data processed, per thread
           65.306 GB data processed, total
            1.983 nsecs/byte/thread runtime
            0.504 GB/sec/thread speed
           12.102 GB/sec total speed

   # Running  4x8-convergence, "perf bench numa mem -p 4 -t 8 -P 512 -s 100 -zZ0qcm --thp  1"
            3.121 secs latency to NUMA-converge
            3.121 secs slowest (max) thread-runtime
            2.000 secs fastest (min) thread-runtime
            2.836 secs average thread-runtime
           17.962 % difference between max/avg runtime
            1.194 GB data processed, per thread
           38.192 GB data processed, total
            2.615 nsecs/byte/thread runtime
            0.382 GB/sec/thread speed
           12.236 GB/sec total speed

   # Running  8x4-convergence, "perf bench numa mem -p 8 -t 4 -P 512 -s 100 -zZ0qcm --thp  1"
            4.302 secs latency to NUMA-converge
            4.302 secs slowest (max) thread-runtime
            3.000 secs fastest (min) thread-runtime
            4.045 secs average thread-runtime
           15.133 % difference between max/avg runtime
            1.631 GB data processed, per thread
           52.178 GB data processed, total
            2.638 nsecs/byte/thread runtime
            0.379 GB/sec/thread speed
           12.128 GB/sec total speed

   # Running  8x4-convergence-NOTHP, "perf bench numa mem -p 8 -t 4 -P 512 -s 100 -zZ0qcm --thp  1 --thp -1"
            4.418 secs latency to NUMA-converge
            4.418 secs slowest (max) thread-runtime
            3.000 secs fastest (min) thread-runtime
            4.104 secs average thread-runtime
           16.045 % difference between max/avg runtime
            1.664 GB data processed, per thread
           53.254 GB data processed, total
            2.655 nsecs/byte/thread runtime
            0.377 GB/sec/thread speed
           12.055 GB/sec total speed

   # Running  3x1-convergence, "perf bench numa mem -p 3 -t 1 -P 512 -s 100 -zZ0qcm --thp  1"
            0.973 secs latency to NUMA-converge
            0.973 secs slowest (max) thread-runtime
            0.000 secs fastest (min) thread-runtime
            0.955 secs average thread-runtime
           50.000 % difference between max/avg runtime
            4.124 GB data processed, per thread
           12.372 GB data processed, total
            0.236 nsecs/byte/thread runtime
            4.238 GB/sec/thread speed
           12.715 GB/sec total speed

   # Running  4x1-convergence, "perf bench numa mem -p 4 -t 1 -P 512 -s 100 -zZ0qcm --thp  1"
            0.820 secs latency to NUMA-converge
            0.820 secs slowest (max) thread-runtime
            0.000 secs fastest (min) thread-runtime
            0.808 secs average thread-runtime
           50.000 % difference between max/avg runtime
            2.555 GB data processed, per thread
           10.220 GB data processed, total
            0.321 nsecs/byte/thread runtime
            3.117 GB/sec/thread speed
           12.468 GB/sec total speed

   # Running  8x1-convergence, "perf bench numa mem -p 8 -t 1 -P 512 -s 100 -zZ0qcm --thp  1"
            0.667 secs latency to NUMA-converge
            0.667 secs slowest (max) thread-runtime
            0.000 secs fastest (min) thread-runtime
            0.607 secs average thread-runtime
           50.000 % difference between max/avg runtime
            1.009 GB data processed, per thread
            8.069 GB data processed, total
            0.661 nsecs/byte/thread runtime
            1.512 GB/sec/thread speed
           12.095 GB/sec total speed

   # Running 16x1-convergence, "perf bench numa mem -p 16 -t 1 -P 256 -s 100 -zZ0qcm --thp  1"
            1.546 secs latency to NUMA-converge
            1.546 secs slowest (max) thread-runtime
            1.000 secs fastest (min) thread-runtime
            1.485 secs average thread-runtime
           17.664 % difference between max/avg runtime
            1.162 GB data processed, per thread
           18.594 GB data processed, total
            1.331 nsecs/byte/thread runtime
            0.752 GB/sec/thread speed
           12.025 GB/sec total speed

   # Running 32x1-convergence, "perf bench numa mem -p 32 -t 1 -P 128 -s 100 -zZ0qcm --thp  1"
            0.812 secs latency to NUMA-converge
            0.812 secs slowest (max) thread-runtime
            0.000 secs fastest (min) thread-runtime
            0.739 secs average thread-runtime
           50.000 % difference between max/avg runtime
            0.309 GB data processed, per thread
            9.874 GB data processed, total
            2.630 nsecs/byte/thread runtime
            0.380 GB/sec/thread speed
           12.166 GB/sec total speed

   # Running  2x1-bw-process, "perf bench numa mem -p 2 -t 1 -P 1024 -s 20 -zZ0q --thp  1"
           20.044 secs slowest (max) thread-runtime
           20.000 secs fastest (min) thread-runtime
           20.020 secs average thread-runtime
            0.109 % difference between max/avg runtime
          125.750 GB data processed, per thread
          251.501 GB data processed, total
            0.159 nsecs/byte/thread runtime
            6.274 GB/sec/thread speed
           12.548 GB/sec total speed

   # Running  3x1-bw-process, "perf bench numa mem -p 3 -t 1 -P 1024 -s 20 -zZ0q --thp  1"
           20.148 secs slowest (max) thread-runtime
           20.000 secs fastest (min) thread-runtime
           20.090 secs average thread-runtime
            0.367 % difference between max/avg runtime
           85.267 GB data processed, per thread
          255.800 GB data processed, total
            0.236 nsecs/byte/thread runtime
            4.232 GB/sec/thread speed
           12.696 GB/sec total speed

   # Running  4x1-bw-process, "perf bench numa mem -p 4 -t 1 -P 1024 -s 20 -zZ0q --thp  1"
           20.169 secs slowest (max) thread-runtime
           20.000 secs fastest (min) thread-runtime
           20.100 secs average thread-runtime
            0.419 % difference between max/avg runtime
           63.144 GB data processed, per thread
          252.576 GB data processed, total
            0.319 nsecs/byte/thread runtime
            3.131 GB/sec/thread speed
           12.523 GB/sec total speed

   # Running  8x1-bw-process, "perf bench numa mem -p 8 -t 1 -P  512 -s 20 -zZ0q --thp  1"
           20.175 secs slowest (max) thread-runtime
           20.000 secs fastest (min) thread-runtime
           20.107 secs average thread-runtime
            0.433 % difference between max/avg runtime
           31.267 GB data processed, per thread
          250.133 GB data processed, total
            0.645 nsecs/byte/thread runtime
            1.550 GB/sec/thread speed
           12.398 GB/sec total speed

   # Running  8x1-bw-process-NOTHP, "perf bench numa mem -p 8 -t 1 -P  512 -s 20 -zZ0q --thp  1 --thp -1"
           20.216 secs slowest (max) thread-runtime
           20.000 secs fastest (min) thread-runtime
           20.113 secs average thread-runtime
            0.535 % difference between max/avg runtime
           30.998 GB data processed, per thread
          247.981 GB data processed, total
            0.652 nsecs/byte/thread runtime
            1.533 GB/sec/thread speed
           12.266 GB/sec total speed

   # Running 16x1-bw-process, "perf bench numa mem -p 16 -t 1 -P 256 -s 20 -zZ0q --thp  1"
           20.234 secs slowest (max) thread-runtime
           20.000 secs fastest (min) thread-runtime
           20.174 secs average thread-runtime
            0.577 % difference between max/avg runtime
           15.377 GB data processed, per thread
          246.039 GB data processed, total
            1.316 nsecs/byte/thread runtime
            0.760 GB/sec/thread speed
           12.160 GB/sec total speed

   # Running  1x4-bw-thread, "perf bench numa mem -p 1 -t 4 -T 256 -s 20 -zZ0q --thp  1"
           20.040 secs slowest (max) thread-runtime
           20.000 secs fastest (min) thread-runtime
           20.028 secs average thread-runtime
            0.099 % difference between max/avg runtime
           66.832 GB data processed, per thread
          267.328 GB data processed, total
            0.300 nsecs/byte/thread runtime
            3.335 GB/sec/thread speed
           13.340 GB/sec total speed

   # Running  1x8-bw-thread, "perf bench numa mem -p 1 -t 8 -T 256 -s 20 -zZ0q --thp  1"
           20.064 secs slowest (max) thread-runtime
           20.000 secs fastest (min) thread-runtime
           20.034 secs average thread-runtime
            0.160 % difference between max/avg runtime
           32.911 GB data processed, per thread
          263.286 GB data processed, total
            0.610 nsecs/byte/thread runtime
            1.640 GB/sec/thread speed
           13.122 GB/sec total speed

   # Running 1x16-bw-thread, "perf bench numa mem -p 1 -t 16 -T 128 -s 20 -zZ0q --thp  1"
           20.092 secs slowest (max) thread-runtime
           20.000 secs fastest (min) thread-runtime
           20.052 secs average thread-runtime
            0.230 % difference between max/avg runtime
           16.131 GB data processed, per thread
          258.088 GB data processed, total
            1.246 nsecs/byte/thread runtime
            0.803 GB/sec/thread speed
           12.845 GB/sec total speed

   # Running 1x32-bw-thread, "perf bench numa mem -p 1 -t 32 -T 64 -s 20 -zZ0q --thp  1"
           20.099 secs slowest (max) thread-runtime
           20.000 secs fastest (min) thread-runtime
           20.063 secs average thread-runtime
            0.247 % difference between max/avg runtime
            7.962 GB data processed, per thread
          254.773 GB data processed, total
            2.525 nsecs/byte/thread runtime
            0.396 GB/sec/thread speed
           12.676 GB/sec total speed

   # Running  2x3-bw-process, "perf bench numa mem -p 2 -t 3 -P 512 -s 20 -zZ0q --thp  1"
           20.150 secs slowest (max) thread-runtime
           20.000 secs fastest (min) thread-runtime
           20.120 secs average thread-runtime
            0.372 % difference between max/avg runtime
           44.827 GB data processed, per thread
          268.960 GB data processed, total
            0.450 nsecs/byte/thread runtime
            2.225 GB/sec/thread speed
           13.348 GB/sec total speed

   # Running  4x4-bw-process, "perf bench numa mem -p 4 -t 4 -P 512 -s 20 -zZ0q --thp  1"
           20.258 secs slowest (max) thread-runtime
           20.000 secs fastest (min) thread-runtime
           20.168 secs average thread-runtime
            0.636 % difference between max/avg runtime
           17.079 GB data processed, per thread
          273.263 GB data processed, total
            1.186 nsecs/byte/thread runtime
            0.843 GB/sec/thread speed
           13.489 GB/sec total speed

   # Running  4x6-bw-process, "perf bench numa mem -p 4 -t 6 -P 512 -s 20 -zZ0q --thp  1"
           20.559 secs slowest (max) thread-runtime
           20.000 secs fastest (min) thread-runtime
           20.382 secs average thread-runtime
            1.359 % difference between max/avg runtime
           10.758 GB data processed, per thread
          258.201 GB data processed, total
            1.911 nsecs/byte/thread runtime
            0.523 GB/sec/thread speed
           12.559 GB/sec total speed

   # Running  4x8-bw-process, "perf bench numa mem -p 4 -t 8 -P 512 -s 20 -zZ0q --thp  1"
           20.744 secs slowest (max) thread-runtime
           20.000 secs fastest (min) thread-runtime
           20.516 secs average thread-runtime
            1.792 % difference between max/avg runtime
            8.069 GB data processed, per thread
          258.201 GB data processed, total
            2.571 nsecs/byte/thread runtime
            0.389 GB/sec/thread speed
           12.447 GB/sec total speed

   # Running  4x8-bw-process-NOTHP, "perf bench numa mem -p 4 -t 8 -P 512 -s 20 -zZ0q --thp  1 --thp -1"
           20.855 secs slowest (max) thread-runtime
           20.000 secs fastest (min) thread-runtime
           20.561 secs average thread-runtime
            2.050 % difference between max/avg runtime
            8.069 GB data processed, per thread
          258.201 GB data processed, total
            2.585 nsecs/byte/thread runtime
            0.387 GB/sec/thread speed
           12.381 GB/sec total speed

   # Running  3x3-bw-process, "perf bench numa mem -p 3 -t 3 -P 512 -s 20 -zZ0q --thp  1"
           20.134 secs slowest (max) thread-runtime
           20.000 secs fastest (min) thread-runtime
           20.077 secs average thread-runtime
            0.333 % difference between max/avg runtime
           28.091 GB data processed, per thread
          252.822 GB data processed, total
            0.717 nsecs/byte/thread runtime
            1.395 GB/sec/thread speed
           12.557 GB/sec total speed

   # Running  5x5-bw-process, "perf bench numa mem -p 5 -t 5 -P 512 -s 20 -zZ0q --thp  1"
           20.588 secs slowest (max) thread-runtime
           20.000 secs fastest (min) thread-runtime
           20.375 secs average thread-runtime
            1.427 % difference between max/avg runtime
           10.177 GB data processed, per thread
          254.436 GB data processed, total
            2.023 nsecs/byte/thread runtime
            0.494 GB/sec/thread speed
           12.359 GB/sec total speed

   # Running 2x16-bw-process, "perf bench numa mem -p 2 -t 16 -P 512 -s 20 -zZ0q --thp  1"
           20.657 secs slowest (max) thread-runtime
           20.000 secs fastest (min) thread-runtime
           20.429 secs average thread-runtime
            1.589 % difference between max/avg runtime
            8.170 GB data processed, per thread
          261.429 GB data processed, total
            2.528 nsecs/byte/thread runtime
            0.395 GB/sec/thread speed
           12.656 GB/sec total speed

   # Running 1x32-bw-process, "perf bench numa mem -p 1 -t 32 -P 2048 -s 20 -zZ0q --thp  1"
           22.981 secs slowest (max) thread-runtime
           20.000 secs fastest (min) thread-runtime
           21.996 secs average thread-runtime
            6.486 % difference between max/avg runtime
            8.863 GB data processed, per thread
          283.606 GB data processed, total
            2.593 nsecs/byte/thread runtime
            0.386 GB/sec/thread speed
           12.341 GB/sec total speed

   # Running numa02-bw, "perf bench numa mem -p 1 -t 32 -T 32 -s 20 -zZ0q --thp  1"
           20.047 secs slowest (max) thread-runtime
           19.000 secs fastest (min) thread-runtime
           20.026 secs average thread-runtime
            2.611 % difference between max/avg runtime
            8.441 GB data processed, per thread
          270.111 GB data processed, total
            2.375 nsecs/byte/thread runtime
            0.421 GB/sec/thread speed
           13.474 GB/sec total speed

   # Running numa02-bw-NOTHP, "perf bench numa mem -p 1 -t 32 -T 32 -s 20 -zZ0q --thp  1 --thp -1"
           20.088 secs slowest (max) thread-runtime
           19.000 secs fastest (min) thread-runtime
           20.025 secs average thread-runtime
            2.709 % difference between max/avg runtime
            8.411 GB data processed, per thread
          269.142 GB data processed, total
            2.388 nsecs/byte/thread runtime
            0.419 GB/sec/thread speed
           13.398 GB/sec total speed

   # Running numa01-bw-thread, "perf bench numa mem -p 2 -t 16 -T 192 -s 20 -zZ0q --thp  1"
           20.293 secs slowest (max) thread-runtime
           20.000 secs fastest (min) thread-runtime
           20.175 secs average thread-runtime
            0.721 % difference between max/avg runtime
            7.918 GB data processed, per thread
          253.374 GB data processed, total
            2.563 nsecs/byte/thread runtime
            0.390 GB/sec/thread speed
           12.486 GB/sec total speed

   # Running numa01-bw-thread-NOTHP, "perf bench numa mem -p 2 -t 16 -T 192 -s 20 -zZ0q --thp  1 --thp -1"
           20.411 secs slowest (max) thread-runtime
           20.000 secs fastest (min) thread-runtime
           20.226 secs average thread-runtime
            1.006 % difference between max/avg runtime
            7.931 GB data processed, per thread
          253.778 GB data processed, total
            2.574 nsecs/byte/thread runtime
            0.389 GB/sec/thread speed
           12.434 GB/sec total speed

  #

Signed-off-by: Ian Rogers <irogers@google.com>
Acked-by: Jiri Olsa <jolsa@redhat.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Link: https://lore.kernel.org/r/20201012161611.366482-1-irogers@google.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf jevents: Fix event code for events referencing std arch events
John Garry [Thu, 8 Oct 2020 15:19:28 +0000 (23:19 +0800)]
perf jevents: Fix event code for events referencing std arch events

The event code for events referencing std arch events is incorrectly
evaluated in json_events().

The issue is that je.event is evaluated properly in try_fixup(), but is
later NULLified by the real_event() call, as "event" may be NULL.

Fix this by setting "event" to the same value as je.event in try_fixup().

Also remove support for overwriting the event code for events that use
std arch events, as it is not used.
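
A purely hypothetical illustration of the clobbering pattern (names
loosely borrowed from the commit, not the actual jevents code):

  #include <stdio.h>

  /* Returns its second argument, propagating NULL. */
  static const char *real_event(const char *name, const char *event)
  {
          if (!name || !event)
                  return NULL;  /* a NULL "event" wipes a prior value */
          return event;
  }

  int main(void)
  {
          const char *event = NULL;             /* never set directly  */
          const char *je_event = "event=0x3c";  /* set by the fixup    */

          /* Bug: the good je_event value is replaced with NULL. */
          je_event = real_event("cpu_cycles", event);
          printf("broken: %s\n", je_event ? je_event : "(null)");

          /* Fix: keep "event" in sync with je_event in the fixup. */
          je_event = "event=0x3c";
          event = je_event;
          je_event = real_event("cpu_cycles", event);
          printf("fixed:  %s\n", je_event);
          return 0;
  }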

Signed-off-by: John Garry <john.garry@huawei.com>
Reviewed-by: Kajol Jain <kjain@linux.ibm.com>
Acked-by: Jiri Olsa <jolsa@redhat.com>
Link: https://lore.kernel.org/r/1602170368-11892-1-git-send-email-john.garry@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf diff: Support hot streams comparison
Jin Yao [Fri, 9 Oct 2020 02:28:45 +0000 (10:28 +0800)]
perf diff: Support hot streams comparison

This patch enables perf-diff with the "--stream" option.

"--stream": Enable hot streams comparison

Now let's see an example.

perf record -b ...      Generate perf.data.old with branch data
perf record -b ...      Generate perf.data with branch data
perf diff --stream

[ Matched hot streams ]

hot chain pair 1:
            cycles: 1, hits: 27.77%                  cycles: 1, hits: 9.24%
        ---------------------------              --------------------------
                      main div.c:39                           main div.c:39
                      main div.c:44                           main div.c:44

hot chain pair 2:
           cycles: 34, hits: 20.06%                cycles: 27, hits: 16.98%
        ---------------------------              --------------------------
          __random_r random_r.c:360               __random_r random_r.c:360
          __random_r random_r.c:388               __random_r random_r.c:388
          __random_r random_r.c:388               __random_r random_r.c:388
          __random_r random_r.c:380               __random_r random_r.c:380
          __random_r random_r.c:357               __random_r random_r.c:357
              __random random.c:293                   __random random.c:293
              __random random.c:293                   __random random.c:293
              __random random.c:291                   __random random.c:291
              __random random.c:291                   __random random.c:291
              __random random.c:291                   __random random.c:291
              __random random.c:288                   __random random.c:288
                     rand rand.c:27                          rand rand.c:27
                     rand rand.c:26                          rand rand.c:26
                           rand@plt                                rand@plt
                           rand@plt                                rand@plt
              compute_flag div.c:25                   compute_flag div.c:25
              compute_flag div.c:22                   compute_flag div.c:22
                      main div.c:40                           main div.c:40
                      main div.c:40                           main div.c:40
                      main div.c:39                           main div.c:39

hot chain pair 3:
             cycles: 9, hits: 4.48%                  cycles: 6, hits: 4.51%
        ---------------------------              --------------------------
          __random_r random_r.c:360               __random_r random_r.c:360
          __random_r random_r.c:388               __random_r random_r.c:388
          __random_r random_r.c:388               __random_r random_r.c:388
          __random_r random_r.c:380               __random_r random_r.c:380

[ Hot streams in old perf data only ]

hot chain 1:
            cycles: 18, hits: 6.75%
         --------------------------
          __random_r random_r.c:360
          __random_r random_r.c:388
          __random_r random_r.c:388
          __random_r random_r.c:380
          __random_r random_r.c:357
              __random random.c:293
              __random random.c:293
              __random random.c:291
              __random random.c:291
              __random random.c:291
              __random random.c:288
                     rand rand.c:27
                     rand rand.c:26
                           rand@plt
                           rand@plt
              compute_flag div.c:25
              compute_flag div.c:22
                      main div.c:40

hot chain 2:
            cycles: 29, hits: 2.78%
         --------------------------
              compute_flag div.c:22
                      main div.c:40
                      main div.c:40
                      main div.c:39

[ Hot streams in new perf data only ]

hot chain 1:
                                                     cycles: 4, hits: 4.54%
                                                 --------------------------
                                                              main div.c:42
                                                      compute_flag div.c:28

hot chain 2:
                                                     cycles: 5, hits: 3.51%
                                                 --------------------------
                                                              main div.c:39
                                                              main div.c:44
                                                              main div.c:42
                                                      compute_flag div.c:28

Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Link: https://lore.kernel.org/r/20201009022845.13141-8-yao.jin@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf streams: Report hot streams
Jin Yao [Fri, 9 Oct 2020 02:28:44 +0000 (10:28 +0800)]
perf streams: Report hot streams

We show the streams separately. They are divided into different sections.

1. "Matched hot streams"

2. "Hot streams in old perf data only"

3. "Hot streams in new perf data only".

For each stream, we report the cycles and hot percent (hits%).

For example,

     cycles: 2, hits: 4.08%
 --------------------------
              main div.c:42
      compute_flag div.c:28

Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Link: https://lore.kernel.org/r/20201009022845.13141-7-yao.jin@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf streams: Calculate the sum of total streams hits
Jin Yao [Fri, 9 Oct 2020 02:28:43 +0000 (10:28 +0800)]
perf streams: Calculate the sum of total streams hits

We have used callchain_node->hit to measure how hot one stream is.  This
patch calculates the sum of the hits of all streams.

Thus, in the next patch, we can use the following formula to report the
hot percentage of one stream:

hot percent = callchain_node->hit / sum of total hits
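
As a small sketch of that formula (helper name and types assumed, not
the actual perf code):

  /* Percentage of one stream's hits over the sum across all streams. */
  static double stream_hot_percent(unsigned long long node_hit,
                                   unsigned long long total_hits)
  {
          return total_hits ? 100.0 * node_hit / total_hits : 0.0;
  }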

Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Link: https://lore.kernel.org/r/20201009022845.13141-6-yao.jin@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf streams: Link stream pair
Jin Yao [Fri, 9 Oct 2020 02:28:42 +0000 (10:28 +0800)]
perf streams: Link stream pair

In a previous patch, we created an evsel_streams object for one event,
and the top N hottest streams are saved in a stream array in
evsel_streams.
This patch compares all streams between two evsel_streams objects.

Once two streams fully match, they are linked as a pair.  From the pair,
we can know which streams are matched.

Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Link: https://lore.kernel.org/r/20201009022845.13141-5-yao.jin@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf streams: Compare two streams
Jin Yao [Fri, 9 Oct 2020 02:28:41 +0000 (10:28 +0800)]
perf streams: Compare two streams

A stream is the branch history aggregated from the branch records in
perf samples. Currently we support the callchain as a stream.

If the callchain entries of one stream fully match the callchain entries
of another stream, the two streams are considered matched.

For example,

   cycles: 1, hits: 26.80%                 cycles: 1, hits: 27.30%
   -----------------------                 -----------------------
             main div.c:39                           main div.c:39
             main div.c:44                           main div.c:44

The two streams above are matched (we do not consider the case where the
source code has changed).

The matching logic is: compare the chain strings first; if they do not
match, fall back to dso address comparison.
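
A hypothetical sketch of that rule (all names illustrative, not the
actual perf code):

  #include <stdbool.h>
  #include <string.h>

  struct chain_entry {
          const char    *sym_name;  /* e.g. "main div.c:39", may be NULL */
          unsigned long  ip;        /* dso-relative address              */
  };

  static bool entries_match(const struct chain_entry *a,
                            const struct chain_entry *b)
  {
          /* Compare the chain string first ...                 */
          if (a->sym_name && b->sym_name)
                  return strcmp(a->sym_name, b->sym_name) == 0;
          /* ... and fall back to dso address comparison.       */
          return a->ip == b->ip;
  }

  static bool streams_match(const struct chain_entry *s1,
                            const struct chain_entry *s2, int nr)
  {
          int i;

          for (i = 0; i < nr; i++)
                  if (!entries_match(&s1[i], &s2[i]))
                          return false;
          return true;  /* fully matched: every entry agrees */
  }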

Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Link: https://lore.kernel.org/r/20201009022845.13141-4-yao.jin@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf streams: Get the evsel_streams by evsel_idx
Jin Yao [Fri, 9 Oct 2020 02:28:40 +0000 (10:28 +0800)]
perf streams: Get the evsel_streams by evsel_idx

In a previous patch, we created the evsel_streams array.

This patch returns the evsel_streams entry specified by evsel_idx.

Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Link: https://lore.kernel.org/r/20201009022845.13141-3-yao.jin@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf streams: Introduce branch history "streams"
Jin Yao [Fri, 9 Oct 2020 02:28:39 +0000 (10:28 +0800)]
perf streams: Introduce branch history "streams"

We define a stream as the branch history aggregated from the branch
records in perf samples. For example, the callchains aggregated from the
branch records are considered streams.  By browsing a hot stream, we can
understand a hot code path.

For now, we only support the callchain as a stream. To measure how hot a
stream is, we use callchain_node->hit; higher is hotter.

Many callchains may be sampled, so we focus only on the top N hottest
ones.  N is a user-defined parameter or the predefined default value
(nr_streams_max).

This patch creates an evsel_streams array per event, and saves the top N
hottest streams in a stream array.

So now we can get the per-event top N hottest streams.
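
A rough sketch of the shape this takes (field names assumed, not the
actual perf definitions):

  struct callchain_node;  /* perf's aggregated callchain node */

  struct stream {
          struct callchain_node *cnode;  /* the callchain this wraps */
  };

  struct evsel_streams {
          struct stream *streams;     /* top N hottest, ranked by ->hit */
          int            nr_streams;  /* how many are actually stored   */
          int            evsel_idx;   /* event this array belongs to    */
  };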

Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Link: https://lore.kernel.org/r/20201009022845.13141-2-yao.jin@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf intel-pt: Improve PT documentation slightly
Andi Kleen [Wed, 14 Oct 2020 03:53:46 +0000 (20:53 -0700)]
perf intel-pt: Improve PT documentation slightly

Document the higher-level --insn-trace etc. perf script options.

Include the howto on building xed in the man page.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Link: http://lore.kernel.org/lkml/20201014035346.4772-1-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf tools: Add support for exclusive groups/events
Andi Kleen [Wed, 14 Oct 2020 14:42:55 +0000 (07:42 -0700)]
perf tools: Add support for exclusive groups/events

Peter suggested that using exclusive mode in perf could avoid some
problems with bad scheduling of groups.  Exclusive mode is implemented
in the kernel, but wasn't exposed by the perf tool, so it was hard to
use without writing custom low-level API code.

Add support for marking groups or events with :e for exclusive in the
perf tool.  The implementation is basically the same as the existing
pinned attribute.
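
As a minimal sketch of what the modifier maps to at the syscall level
(not the tool's code): perf_event_attr has an exclusive bit, analogous
to the pinned bit used for pinned events:

  #include <linux/perf_event.h>
  #include <sys/syscall.h>
  #include <unistd.h>
  #include <string.h>

  /* Open a system-wide cycles group leader on one CPU, marked
   * exclusive so the group owns the PMU whenever it is scheduled. */
  static int open_exclusive_cycles(int cpu)
  {
          struct perf_event_attr attr;

          memset(&attr, 0, sizeof(attr));
          attr.size      = sizeof(attr);
          attr.type      = PERF_TYPE_HARDWARE;
          attr.config    = PERF_COUNT_HW_CPU_CYCLES;
          attr.exclusive = 1;

          /* pid = -1, group_fd = -1: new system-wide group leader. */
          return syscall(__NR_perf_event_open, &attr, -1, cpu, -1, 0);
  }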

Committer testing:

  # perf test "parse event"
   6: Parse event definition strings                                  : Ok
  # perf test -v "parse event" |& grep :u*e
  running test 56 'instructions:uep'
  running test 57 '{cycles,cache-misses,branch-misses}:e'
  #
  #
  # grep "model name" -m1 /proc/cpuinfo
  model name : AMD Ryzen 9 3900X 12-Core Processor
  #
  # perf stat -a -e '{cycles,cache-misses,branch-misses}:e' sleep 1

   Performance counter stats for 'system wide':

       <not counted>      cycles                                                        (0.00%)
       <not counted>      cache-misses                                                  (0.00%)
       <not counted>      branch-misses                                                 (0.00%)

         1.001269893 seconds time elapsed

  Some events weren't counted. Try disabling the NMI watchdog:
   echo 0 > /proc/sys/kernel/nmi_watchdog
   perf stat ...
   echo 1 > /proc/sys/kernel/nmi_watchdog
  # echo 0 > /proc/sys/kernel/nmi_watchdog
  # perf stat -a -e '{cycles,cache-misses,branch-misses}:e' sleep 1

   Performance counter stats for 'system wide':

       1,298,663,141      cycles
          30,962,215      cache-misses
           5,325,150      branch-misses

         1.001474934 seconds time elapsed

  #
  # The output for asking for precise events on AMD needs to improve, it
  # supposedly works only for system wide or per CPU
  #
  # perf stat -a -e '{cycles,cache-misses,branch-misses}:uep' sleep 1
  Error:
  The sys_perf_event_open() syscall returned with 22 (Invalid argument) for event (cycles).
  /bin/dmesg | grep -i perf may provide additional information.

  # perf stat -a -e '{cycles,cache-misses,branch-misses}:ue' sleep 1

   Performance counter stats for 'system wide':

         746,363,126      cycles
          16,881,611      cache-misses
           2,871,259      branch-misses

         1.001636066 seconds time elapsed

  #

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lore.kernel.org/lkml/20201014144255.22699-1-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf test: Add build id shell test
Jiri Olsa [Tue, 13 Oct 2020 19:24:41 +0000 (21:24 +0200)]
perf test: Add build id shell test

Add a test for the build id cache that adds binaries with sha1 and md5
build ids and verifies they are added properly.

The test updates the build id cache with 'perf record' and 'perf
buildid-cache -a'.

Committer testing:

  # perf test "build id"
  82: build id cache operations                                       : Ok
  #
  # perf test -v "build id"
  82: build id cache operations                                       :
  --- start ---
  test child forked, pid 447218
  test binaries: /tmp/perf.ex.SHA1.B8I /tmp/perf.ex.MD5.7Nv
  Adding d1abc1eb7568358cf23c959566f23462461834d1 /tmp/perf.ex.SHA1.B8I: Ok
  build id: d1abc1eb7568358cf23c959566f23462461834d1
  link: /tmp/perf.debug.sS2/.build-id/d1/abc1eb7568358cf23c959566f23462461834d1
  file: /tmp/perf.debug.sS2/.build-id/d1/../../tmp/perf.ex.SHA1.B8I/d1abc1eb7568358cf23c959566f23462461834d1/elf
  OK for /tmp/perf.ex.SHA1.B8I
  Adding a50e350e97c43b4708d09bcd85ebfff7 /tmp/perf.ex.MD5.7Nv: Ok
  build id: a50e350e97c43b4708d09bcd85ebfff7
  link: /tmp/perf.debug.IuW/.build-id/a5/0e350e97c43b4708d09bcd85ebfff7
  file: /tmp/perf.debug.IuW/.build-id/a5/../../tmp/perf.ex.MD5.7Nv/a50e350e97c43b4708d09bcd85ebfff7/elf
  OK for /tmp/perf.ex.MD5.7Nv
  [ perf record: Woken up 1 times to write data ]
  [ perf record: Captured and wrote 0.034 MB /tmp/perf.data.xrH ]
  build id: d1abc1eb7568358cf23c959566f23462461834d1
  link: /tmp/perf.debug.eGR/.build-id/d1/abc1eb7568358cf23c959566f23462461834d1
  file: /tmp/perf.debug.eGR/.build-id/d1/../../tmp/perf.ex.SHA1.B8I/d1abc1eb7568358cf23c959566f23462461834d1/elf
  OK for /tmp/perf.ex.SHA1.B8I
  [ perf record: Woken up 2 times to write data ]
  [ perf record: Captured and wrote 0.034 MB /tmp/perf.data.cbE ]
  build id: a50e350e97c43b4708d09bcd85ebfff7
  link: /tmp/perf.debug.82t/.build-id/a5/0e350e97c43b4708d09bcd85ebfff7
  file: /tmp/perf.debug.82t/.build-id/a5/../../tmp/perf.ex.MD5.7Nv/a50e350e97c43b4708d09bcd85ebfff7/elf
  OK for /tmp/perf.ex.MD5.7Nv
  test child finished with 0
  ---- end ----
  build id cache operations: Ok
  #

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Acked-by: Ian Rogers <irogers@google.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Link: https://lore.kernel.org/r/20201013192441.1299447-10-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf tools: Align buildid list output for short build ids
Jiri Olsa [Tue, 13 Oct 2020 19:24:40 +0000 (21:24 +0200)]
perf tools: Align buildid list output for short build ids

With shorter md5 build ids we need to align their paths properly with
other build ids:

  $ perf buildid-list
  17f4e448cc746582ea1881528deb549f7fdb3fd5 [kernel.kallsyms]
  a50e350e97c43b4708d09bcd85ebfff7         .../tools/perf/buildid-ex-md5
  1805c738c8f3ec0f47b7ea09080c28f34d18a82b /usr/lib64/ld-2.31.so
  $

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Acked-by: Ian Rogers <irogers@google.com>
Link: https://lore.kernel.org/r/20201013192441.1299447-9-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf tools: Add size to 'struct perf_record_header_build_id'
Jiri Olsa [Tue, 13 Oct 2020 19:24:39 +0000 (21:24 +0200)]
perf tools: Add size to 'struct perf_record_header_build_id'

We do not store the size together with build ids in perf data, although
there is enough space to do so. Add the misc bit
PERF_RECORD_MISC_BUILD_ID_SIZE to mark build id events that carry a size.

With this fix, a dso with an md5 build id will have correct build id
data and will be usable for debuginfod processing if needed (coming in
following patches).
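
A sketch of the reader side (the helper name and the flag's value are
assumed for illustration):

  #include <stddef.h>

  #define BUILD_ID_SIZE                  20        /* legacy sha1 length */
  #define PERF_RECORD_MISC_BUILD_ID_SIZE (1 << 15) /* assumed bit value  */

  /* Pick the build id size for an event: new files record it
   * explicitly, old files always meant a 20-byte sha1.         */
  static size_t build_id_event_size(unsigned short misc,
                                    unsigned char stored_size)
  {
          if (misc & PERF_RECORD_MISC_BUILD_ID_SIZE)
                  return stored_size;
          return BUILD_ID_SIZE;
  }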

Committer notes:

Use %zu with size_t to fix this error on 32-bit arches:

  util/header.c: In function '__event_process_build_id':
  util/header.c:2105:3: error: format '%lu' expects argument of type 'long unsigned int', but argument 6 has type 'size_t' [-Werror=format=]
     pr_debug("build id event received for %s: %s [%lu]\n",
     ^

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Acked-by: Ian Rogers <irogers@google.com>
Link: https://lore.kernel.org/r/20201013192441.1299447-8-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf tools: Pass build_id object to dso__build_id_equal()
Jiri Olsa [Tue, 13 Oct 2020 19:24:38 +0000 (21:24 +0200)]
perf tools: Pass build_id object to dso__build_id_equal()

Pass the build_id object to dso__build_id_equal(), so we can properly
check build ids with a size different from sha1.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Acked-by: Ian Rogers <irogers@google.com>
Link: https://lore.kernel.org/r/20201013192441.1299447-7-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf tools: Pass build_id object to dso__set_build_id()
Jiri Olsa [Tue, 13 Oct 2020 19:24:37 +0000 (21:24 +0200)]
perf tools: Pass build_id object to dso__set_build_id()

Pass the build_id object to dso__set_build_id(), so it is easier to
initialize the dso's build id object.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Acked-by: Ian Rogers <irogers@google.com>
Link: https://lore.kernel.org/r/20201013192441.1299447-6-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
3 years agoperf tools: Pass build_id object to build_id__sprintf()
Jiri Olsa [Tue, 13 Oct 2020 19:24:36 +0000 (21:24 +0200)]
perf tools: Pass build_id object to build_id__sprintf()

Pass the build_id object to the build_id__sprintf() function, so it can
operate with the proper build id size.

This will create properly readable md5 build id names,
like the following:

  a50e350e97c43b4708d09bcd85ebfff7

instead of:

  a50e350e97c43b4708d09bcd85ebfff700000000

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Acked-by: Ian Rogers <irogers@google.com>
Link: https://lore.kernel.org/r/20201013192441.1299447-5-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
3 years agoperf tools: Pass build id object to sysfs__read_build_id()
Jiri Olsa [Tue, 13 Oct 2020 19:24:35 +0000 (21:24 +0200)]
perf tools: Pass build id object to sysfs__read_build_id()

Pass the build id object to the sysfs__read_build_id() function, so it
can populate the size of the build_id object.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Acked-by: Ian Rogers <irogers@google.com>
Link: https://lore.kernel.org/r/20201013192441.1299447-4-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
3 years agoperf tools: Pass build_id object to filename__read_build_id()
Jiri Olsa [Tue, 13 Oct 2020 19:24:34 +0000 (21:24 +0200)]
perf tools: Pass build_id object to filename__read_build_id()

Pass a build_id object to the filename__read_build_id() function, so it
can populate the size of the build_id object.

This changes the filename__read_build_id() code in both its ELF and
non-ELF implementations.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Acked-by: Ian Rogers <irogers@google.com>
Link: https://lore.kernel.org/r/20201013192441.1299447-3-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
3 years agoperf tools: Use build_id object in dso
Jiri Olsa [Tue, 13 Oct 2020 19:24:33 +0000 (21:24 +0200)]
perf tools: Use build_id object in dso

Replace the build_id byte array in struct dso with a struct build_id
object, and update all the code that references it.

The objective is to carry the size together with the build id array, so
it's better to keep both in a single object.

This is a preparatory change for the following patches; there is no
functional change.
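
A minimal sketch of the object, assuming the shape described above (the
exact field names in build-id.h may differ slightly):

  struct build_id {
          u8      data[BUILD_ID_SIZE];    /* raw build id bytes */
          size_t  size;                   /* how many of them are valid */
  };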

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Acked-by: Ian Rogers <irogers@google.com>
Link: https://lore.kernel.org/r/20201013192441.1299447-2-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
3 years agoperf config: Export the perf_config_from_file() function
Arnaldo Carvalho de Melo [Tue, 13 Oct 2020 20:03:19 +0000 (17:03 -0300)]
perf config: Export the perf_config_from_file() function

We'll use it to ask for extra config files to be loaded: profile-like
stuff that will be used first to make 'perf trace' mimic 'strace'
output, via a 'perf strace' command that just sets up 'perf trace'
output.

At some point it'll be used for regression tests, where we'll run some
simple commands like:

  perf strace ls > perf-strace.output
  strace ls > strace.output

And then use some mutable-syscall-arg-aware, diff-like tool to deal with
arguments that change at each execution (for things like mmap), so that
they are first ignored and then properly tracked when used across
multiple syscalls.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Ian Rogers <irogers@google.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
3 years agoperf python: Autodetect python3 binary
James Clark [Mon, 5 Oct 2020 08:06:45 +0000 (09:06 +0100)]
perf python: Autodetect python3 binary

Some distros don't come with python2 and only have python3 available.
This causes the "'import perf' in python" self test to fail.

This change adds python3 to the list of possible python versions
that are autodetected but maintains the priorities for
'python2' and 'python' detection. Python3 has the lowest priority.

Committer notes:

On a fedora system without python2 packages the 'perf test python'
continues to work:

  # python2
  bash: python2: command not found...
  Similar command is: 'python'
  # rpm -qa | grep python2
  #

That "Similar command" gives the clue:

  # rpm -qf /usr/bin/python
  python-unversioned-command-3.8.5-5.fc32.noarch
  # rpm -ql python-unversioned-command
  /usr/bin/python
  /usr/share/man/man1/python.1.gz
  #

With it in place the 'python' binary is found and perf builds the python
binding using python3:

  # perf test -v python
  19: 'import perf' in python                                         :
  --- start ---
  test child forked, pid 379988
  python usage test: "echo "import sys ; sys.path.append('/tmp/build/perf/python'); import perf" | '/usr/bin/python' "
  test child finished with 0
  ---- end ----
  'import perf' in python: Ok
  #

Looking at that path:

  # ls -la /tmp/build/perf/python
  total 1864
  drwxrwxr-x.  2 acme acme      60 Oct 13 16:20 .
  drwxrwxr-x. 18 acme acme    4420 Oct 13 16:28 ..
  -rwxrwxr-x.  1 acme acme 1907216 Oct 13 16:28 perf.cpython-38-x86_64-linux-gnu.so
  #

And:

  # ldd ~/bin/perf | grep python
   libpython3.8.so.1.0 => /lib64/libpython3.8.so.1.0 (0x00007f5471187000)
  #

As soon as we remove it:

  # rpm -e python-unversioned-command-3.8.5-5.fc32.noarch
  # hash -r
  # python
  bash: python: command not found...
  Install package 'python-unversioned-command' to provide command 'python'? [N/y] n
  #

And rebuilding perf now doesn't find python in the system:

  make: Entering directory '/home/acme/git/perf/tools/perf'
    BUILD:   Doing 'make -j24' parallel build
  <SNIP>
  Makefile.config:786: No python interpreter was found: disables Python support - please install python-devel/python-dev
  <SNIP>

After this patch:

  $ rpm -qi python-unversioned-command
  package python-unversioned-command is not installed
  $
  $ python
  bash: python: command not found...
  Install package 'python-unversioned-command' to provide command 'python'? [N/y] ^C
  $
  $ m
  make: Entering directory '/home/acme/git/perf/tools/perf'
    BUILD:   Doing 'make -j24' parallel build
  <SNIP>
    CC       /tmp/build/perf/tests/attr.o
    CC       /tmp/build/perf/tests/python-use.o
    DESCEND  plugins
    GEN      /tmp/build/perf/python/perf.so
    INSTALL  trace_plugins
    LD       /tmp/build/perf/tests/perf-in.o
    LD       /tmp/build/perf/perf-in.o
    LINK     /tmp/build/perf/perf
  <SNIP>
  make: Leaving directory '/home/acme/git/perf/tools/perf'
  19: 'import perf' in python                                         : Ok
  $ ldd ~/bin/perf | grep python
   libpython3.8.so.1.0 => /lib64/libpython3.8.so.1.0 (0x00007f2c8c708000)
  $ ls -la /tmp/build/perf/python
  total 1864
  drwxrwxr-x.  2 acme acme      60 Oct 13 16:20 .
  drwxrwxr-x. 18 acme acme    4420 Oct 13 16:31 ..
  -rwxrwxr-x.  1 acme acme 1907216 Oct 13 16:31 perf.cpython-38-x86_64-linux-gnu.so
  $

Signed-off-by: James Clark <james.clark@arm.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
LPU-Reference: 20201005080645.6588-1-james.clark@arm.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
3 years agoperf tests: Show python test script in verbose mode
Arnaldo Carvalho de Melo [Tue, 13 Oct 2020 19:22:03 +0000 (16:22 -0300)]
perf tests: Show python test script in verbose mode

To help figure out where it is getting the binding.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Ian Rogers <irogers@google.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
3 years agoperf build: Allow nested externs to enable BUILD_BUG() usage
Vasily Gorbik [Fri, 9 Oct 2020 12:25:23 +0000 (14:25 +0200)]
perf build: Allow nested externs to enable BUILD_BUG() usage

Currently the BUILD_BUG() macro expands to something like the following:

   do {
           extern void __compiletime_assert_0(void)
                   __attribute__((error("BUILD_BUG failed")));
           if (!(!(1)))
                   __compiletime_assert_0();
   } while (0);

If used in a function body, this obviously produces build errors
with -Wnested-externs and -Werror.

To enable BUILD_BUG() usage in tools/arch/x86/lib/insn.c which perf
includes in intel-pt-decoder, build perf without -Wnested-externs.

Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
Acked-by: Jiri Olsa <jolsa@redhat.com>
Tested-by: Stephen Rothwell <sfr@canb.auug.org.au> # build tested
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lore.kernel.org/lkml/patch-1.thread-251403.git-2514037e9477.your-ad-here.call-01602244460-ext-7088@work.hours
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
3 years agoperf trace: Fix off by ones in memset() after realloc() in arches using libaudit
Jiri Slaby [Thu, 1 Oct 2020 09:34:19 +0000 (11:34 +0200)]
perf trace: Fix off by ones in memset() after realloc() in arches using libaudit

'perf trace ls' started crashing after commit d21cb73a9025 on
!HAVE_SYSCALL_TABLE_SUPPORT configs (armv7l here) like this:

  0  strlen () at ../sysdeps/arm/armv6t2/strlen.S:126
  1  0xb6800780 in __vfprintf_internal (s=0xbeff9908, s@entry=0xbeff9900, format=0xa27160 "]: %s()", ap=..., mode_flags=<optimized out>) at vfprintf-internal.c:1688
  ...
  5  0x0056ecdc in fprintf (__fmt=0xa27160 "]: %s()", __stream=<optimized out>) at /usr/include/bits/stdio2.h:100
  6  trace__sys_exit (trace=trace@entry=0xbeffc710, evsel=evsel@entry=0xd968d0, event=<optimized out>, sample=sample@entry=0xbeffc3e8) at builtin-trace.c:2475
  7  0x00566d40 in trace__handle_event (sample=0xbeffc3e8, event=<optimized out>, trace=0xbeffc710) at builtin-trace.c:3122
  ...
  15 main (argc=2, argv=0xbefff6e8) at perf.c:538

It is because the memset in trace__read_syscall_info() zeroes the wrong
memory:

1) when initializing for the first time, it does not reset the last id.

2) in other cases, it resets the last id of the previous buffer.

ad 1) it causes the crash above, as the sc->name used in the fprintf
      above contains garbage.

ad 2) it sets 'nonexistent' from true back to false for id 11 here. Not
      sure what the consequences are.

So fix it by introducing a special case for the initial initialization,
and do the right +1 in both cases.
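
The resulting pattern, as a hedged standalone sketch (illustrative
types and names, not the exact builtin-trace.c code):

  #include <stdbool.h>
  #include <stdlib.h>
  #include <string.h>

  struct entry { bool nonexistent; const char *name; };  /* illustrative */

  /* Grow 'table' from old_nr to new_nr entries, zeroing only the newly
   * added slots.  On the initial allocation old_nr is 0, so every slot,
   * including the one for the last id, gets initialized. */
  static struct entry *grow_table(struct entry *table, size_t old_nr,
                                  size_t new_nr)
  {
          struct entry *t = realloc(table, new_nr * sizeof(*t));

          if (t != NULL)
                  memset(t + old_nr, 0, (new_nr - old_nr) * sizeof(*t));
          return t;
  }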

Fixes: d21cb73a9025 ("perf trace: Grow the syscall table as needed when using libaudit")
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lore.kernel.org/lkml/20201001093419.15761-1-jslaby@suse.cz
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
3 years agoperf c2c: Update usage for showing memory events
Leo Yan [Sun, 11 Oct 2020 12:10:22 +0000 (20:10 +0800)]
perf c2c: Update usage for showing memory events

Since commit b027cc6fdf1b ("perf c2c: Fix 'perf c2c record -e list' to
show the default events used"), the "perf c2c" tool can show the memory
events properly, so there is no reason to keep suggesting that users run
the command "perf mem record -e list" to show events.

This patch updates the usage to show memory events with the command
"perf c2c record -e list".

Signed-off-by: Leo Yan <leo.yan@linaro.org>
Acked-by: Jiri Olsa <jolsa@redhat.com>
Acked-by: Ian Rogers <irogers@google.com>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Link: https://lore.kernel.org/r/20201011121022.22409-1-leo.yan@linaro.org
3 years agoMerge branch 'perf/urgent' into perf/core
Arnaldo Carvalho de Melo [Tue, 13 Oct 2020 16:02:20 +0000 (13:02 -0300)]
Merge branch 'perf/urgent' into perf/core

To pick fixes that missed v5.9.

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
3 years agotools lib traceevent: Hide non API functions
Tzvetomir Stoyanov (VMware) [Wed, 30 Sep 2020 11:07:33 +0000 (14:07 +0300)]
tools lib traceevent: Hide non API functions

There are internal library functions which are not declared as static
because they are used inside the library from different files. Hide them
from library users, as they are not part of the API. These functions are
made hidden and are renamed to drop the "tep_" prefix (the usual hiding
mechanism is sketched after the list):
 tep_free_plugin_paths
 tep_peek_char
 tep_buffer_init
 tep_get_input_buf_ptr
 tep_get_input_buf
 tep_read_token
 tep_free_token
 tep_free_event
 tep_free_format_field
 __tep_parse_format
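
The usual mechanism for this is ELF symbol visibility; a minimal sketch
(assuming GCC/clang, not necessarily the exact macro used in the patch):

  /* in a library-internal header */
  #define __hidden __attribute__((visibility("hidden")))

  /* was tep_free_token(); now internal-only and unprefixed */
  __hidden void free_token(char *token);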

Link: https://lore.kernel.org/linux-trace-devel/e4afdd82deb5e023d53231bb13e08dca78085fb0.camel@decadent.org.uk/
Reported-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Tzvetomir Stoyanov (VMware) <tz.stoyanov@gmail.com>
Reviewed-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Cc: linux-trace-devel@vger.kernel.org
Link: http://lore.kernel.org/lkml/20200930110733.280534-1-tz.stoyanov@gmail.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
3 years agoperf sched: Show start of latency as well
Joel Fernandes (Google) [Fri, 25 Sep 2020 23:56:34 +0000 (19:56 -0400)]
perf sched: Show start of latency as well

The 'perf sched latency' tool is really useful at showing worst-case
latencies that a task encountered since wakeup. However it shows only
the end of the latency. Oftentimes the start of a latency is interesting
as it can show what else was going on at the time that caused the
latency. I certainly found myself spending a lot of time backtracking to
the start of the latency in "perf sched script", which wastes a lot of
time.

This patch therefore adds a new column "Max delay start". Considering
this, also rename "Maximum delay at" to "Max delay end" as it's easier
to understand.

Example of the new output:

  ----------------------------------------------------------------------------------------------------------------------------------
   Task                  | Runtime ms  | Switches | Avg delay ms  | Max delay ms   | Max delay start         | Max delay end       |
  ----------------------------------------------------------------------------------------------------------------------------------
   MediaScannerSer:11936 |  651.296 ms |    67978 | avg: 0.113 ms | max: 77.250 ms | max start: 477.691360 s | max end: 477.768610 s
   audio@2.0-servi:(3)   |    0.000 ms |     3440 | avg: 0.034 ms | max: 72.267 ms | max start: 477.697051 s | max end: 477.769318 s
   AudioOut_1D:8112      |    0.000 ms |     2588 | avg: 0.083 ms | max: 64.020 ms | max start: 477.710740 s | max end: 477.774760 s
   Time-limited te:14973 | 7966.090 ms |    24807 | avg: 0.073 ms | max: 15.563 ms | max start: 477.162746 s | max end: 477.178309 s
   surfaceflinger:8049   |    9.680 ms |      603 | avg: 0.063 ms | max: 13.275 ms | max start: 476.931791 s | max end: 476.945067 s
   HeapTaskDaemon:(3)    | 1588.830 ms |     7040 | avg: 0.065 ms | max:  6.880 ms | max start: 473.666043 s | max end: 473.672922 s
   mount-passthrou:(3)   | 1370.809 ms |    68904 | avg: 0.011 ms | max:  6.524 ms | max start: 478.090630 s | max end: 478.097154 s
   ReferenceQueueD:(3)   |   11.794 ms |     1725 | avg: 0.014 ms | max:  6.521 ms | max start: 476.119782 s | max end: 476.126303 s
   writer:14077          |   18.410 ms |     1427 | avg: 0.036 ms | max:  6.131 ms | max start: 474.169675 s | max end: 474.175805 s

Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org>
Acked-by: Namhyung Kim <namhyung@kernel.org>
Link: https://lore.kernel.org/r/20200925235634.4089867-1-joel@joelfernandes.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
3 years agoperf vendor events: Fix typos in power8 PMU events
Sandipan Das [Mon, 12 Oct 2020 05:02:05 +0000 (10:32 +0530)]
perf vendor events: Fix typos in power8 PMU events

This replaces the incorrectly spelled word "localtion" with "location"
in some power8 PMU event descriptions.

Fixes: 2a81fa3bb5ed ("perf vendor events: Add power8 PMU events")
Signed-off-by: Sandipan Das <sandipan@linux.ibm.com>
Reviewed-by: Kajol Jain <kjain@linux.ibm.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
Cc: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
Link: http://lore.kernel.org/lkml/20201012050205.328523-1-sandipan@linux.ibm.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
3 years agoperf bench: Run inject-build-id with --buildid-all option too
Namhyung Kim [Mon, 12 Oct 2020 07:02:14 +0000 (16:02 +0900)]
perf bench: Run inject-build-id with --buildid-all option too

For comparison, it now runs the benchmark twice - once with the regular
-b option and again with --buildid-all.

  $ perf bench internals inject-build-id
  # Running 'internals/inject-build-id' benchmark:
    Average build-id injection took: 21.002 msec (+- 0.172 msec)
    Average time per event: 2.059 usec (+- 0.017 usec)
    Average memory usage: 8169 KB (+- 0 KB)
    Average build-id-all injection took: 19.543 msec (+- 0.124 msec)
    Average time per event: 1.916 usec (+- 0.012 usec)
    Average memory usage: 7348 KB (+- 0 KB)

Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Acked-by: Ian Rogers <irogers@google.com>
Acked-by: Jiri Olsa <jolsa@redhat.com>
Link: https://lore.kernel.org/r/20201012070214.2074921-7-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
3 years agoperf inject: Add --buildid-all option
Namhyung Kim [Mon, 12 Oct 2020 07:02:13 +0000 (16:02 +0900)]
perf inject: Add --buildid-all option

Like in 'perf record', we can speed up build-id processing even more by
just using all DSOs.  Then we don't need to look at all the sample
events anymore.  The following patch will update 'perf bench' to show
the result of the --buildid-all option too.

Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Original-patch-by: Stephane Eranian <eranian@google.com>
Acked-by: Ian Rogers <irogers@google.com>
Acked-by: Jiri Olsa <jolsa@redhat.com>
Link: https://lore.kernel.org/r/20201012070214.2074921-6-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
3 years agoperf inject: Do not load map/dso when injecting build-id
Namhyung Kim [Mon, 12 Oct 2020 07:02:12 +0000 (16:02 +0900)]
perf inject: Do not load map/dso when injecting build-id

There is no need to load symbols in a DSO when injecting its build-id.
I guess the reason was to check whether the DSO is a special file, like
anon files.  Use some helper functions in map.c to check that before
reading the build-id.  Also pass the sample event's cpumode to the new
build-id event.

It brought a speedup in the benchmark from 25 to 21 msec on my laptop.
The memory usage (Max RSS) also went down by ~200 KB.

  # Running 'internals/inject-build-id' benchmark:
    Average build-id injection took: 21.389 msec (+- 0.138 msec)
    Average time per event: 2.097 usec (+- 0.014 usec)
    Average memory usage: 8225 KB (+- 0 KB)

Committer notes:

Before:

  $ perf stat -r5 perf bench internals inject-build-id > /dev/null

   Performance counter stats for 'perf bench internals inject-build-id' (5 runs):

            4,020.56 msec task-clock:u              #    1.271 CPUs utilized            ( +-  0.74% )
                   0      context-switches:u        #    0.000 K/sec
                   0      cpu-migrations:u          #    0.000 K/sec
             123,354      page-faults:u             #    0.031 M/sec                    ( +-  0.81% )
       7,119,951,568      cycles:u                  #    1.771 GHz                      ( +-  1.74% )  (83.27%)
         230,086,969      stalled-cycles-frontend:u #    3.23% frontend cycles idle     ( +-  1.97% )  (83.41%)
       1,168,298,765      stalled-cycles-backend:u  #   16.41% backend cycles idle      ( +-  1.13% )  (83.44%)
      11,173,083,669      instructions:u            #    1.57  insn per cycle
                                                    #    0.10  stalled cycles per insn  ( +-  1.58% )  (83.31%)
       2,413,908,936      branches:u                #  600.392 M/sec                    ( +-  1.69% )  (83.26%)
          46,576,289      branch-misses:u           #    1.93% of all branches          ( +-  2.20% )  (83.31%)

              3.1638 +- 0.0309 seconds time elapsed  ( +-  0.98% )

  $

After:

  $ perf stat -r5 perf bench internals inject-build-id > /dev/null

   Performance counter stats for 'perf bench internals inject-build-id' (5 runs):

            2,379.94 msec task-clock:u              #    1.473 CPUs utilized            ( +-  0.18% )
                   0      context-switches:u        #    0.000 K/sec
                   0      cpu-migrations:u          #    0.000 K/sec
              62,584      page-faults:u             #    0.026 M/sec                    ( +-  0.07% )
       2,372,389,668      cycles:u                  #    0.997 GHz                      ( +-  0.29% )  (83.14%)
         106,937,862      stalled-cycles-frontend:u #    4.51% frontend cycles idle     ( +-  4.89% )  (83.20%)
         581,697,915      stalled-cycles-backend:u  #   24.52% backend cycles idle      ( +-  0.71% )  (83.47%)
       3,659,692,199      instructions:u            #    1.54  insn per cycle
                                                    #    0.16  stalled cycles per insn  ( +-  0.10% )  (83.63%)
         791,372,961      branches:u                #  332.518 M/sec                    ( +-  0.27% )  (83.39%)
          10,648,083      branch-misses:u           #    1.35% of all branches          ( +-  0.22% )  (83.16%)

             1.61570 +- 0.00172 seconds time elapsed  ( +-  0.11% )

  $

Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Original-patch-by: Stephane Eranian <eranian@google.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Acked-by: Jiri Olsa <jolsa@redhat.com>
Link: https://lore.kernel.org/r/20201012070214.2074921-5-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
3 years agoperf inject: Enter namespace when reading build-id
Namhyung Kim [Mon, 12 Oct 2020 07:02:11 +0000 (16:02 +0900)]
perf inject: Enter namespace when reading build-id

We should be in the proper mnt namespace when accessing the file.

I think this caused no problem so far, since the build-id was actually
read from map__load() -> dso__load() already.  But I'd like to change
that in the following commit.
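
The switch is done with perf's existing namespace helpers; roughly like
this (a sketch, not the verbatim builtin-inject.c hunk):

  struct nscookie nsc;

  nsinfo__mountns_enter(dso->nsinfo, &nsc);  /* enter the dso's mnt ns */
  err = filename__read_build_id(dso->long_name, &bid);
  nsinfo__mountns_exit(&nsc);                /* and switch back */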

Acked-by: Jiri Olsa <jolsa@redhat.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Link: https://lore.kernel.org/r/20201012070214.2074921-4-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
3 years agoperf inject: Add missing callbacks in perf_tool
Namhyung Kim [Mon, 12 Oct 2020 07:02:10 +0000 (16:02 +0900)]
perf inject: Add missing callbacks in perf_tool

I found that some events (like PERF_RECORD_CGROUP) are not copied by
'perf inject' due to missing callbacks.  Let's add them.

While at it, I've changed the order of the callbacks to match struct
perf_tool, so that we can compare them easily.
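
A sketch of what the perf_tool setup in builtin-inject.c looks like
after the change (the handler assignments are illustrative):

  struct perf_tool tool = {
          .sample  = perf_event__repipe_sample,
          .mmap    = perf_event__repipe,
          .comm    = perf_event__repipe,
          .cgroup  = perf_event__repipe,    /* previously missing */
  };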

Acked-by: Jiri Olsa <jolsa@redhat.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Link: https://lore.kernel.org/r/20201012070214.2074921-3-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
3 years agoperf bench: Add build-id injection benchmark
Namhyung Kim [Mon, 12 Oct 2020 07:02:09 +0000 (16:02 +0900)]
perf bench: Add build-id injection benchmark

Sometimes I can see that 'perf record' piped with 'perf inject' takes a
long time processing build-ids.

So introduce an inject-build-id benchmark to the internals benchmark
suite to measure its overhead regularly.

It runs the 'perf inject' command internally and feeds it the given
number of synthesized events (basically MMAP2 + SAMPLE).

  Usage: perf bench internals inject-build-id <options>

    -i, --iterations <n>  Number of iterations used to compute average (default: 100)
    -m, --nr-mmaps <n>    Number of mmap events for each iteration (default: 100)
    -n, --nr-samples <n>  Number of sample events per mmap event (default: 100)
    -v, --verbose         be more verbose (show iteration count, DSO name, etc)

By default, it measures average processing time of 100 MMAP2 events
and 10000 SAMPLE events.  Below is a result on my laptop.

  $ perf bench internals inject-build-id
  # Running 'internals/inject-build-id' benchmark:
    Average build-id injection took: 25.789 msec (+- 0.202 msec)
    Average time per event: 2.528 usec (+- 0.020 usec)
    Average memory usage: 8411 KB (+- 7 KB)

Committer testing:

  $ perf bench
  Usage:
   perf bench [<common options>] <collection> <benchmark> [<options>]

          # List of all available benchmark collections:

           sched: Scheduler and IPC benchmarks
         syscall: System call benchmarks
             mem: Memory access benchmarks
            numa: NUMA scheduling and MM benchmarks
           futex: Futex stressing benchmarks
           epoll: Epoll stressing benchmarks
       internals: Perf-internals benchmarks
             all: All benchmarks

  $ perf bench internals

          # List of available benchmarks for collection 'internals':

      synthesize: Benchmark perf event synthesis
  kallsyms-parse: Benchmark kallsyms parsing
  inject-build-id: Benchmark build-id injection

  $ perf bench internals inject-build-id
  # Running 'internals/inject-build-id' benchmark:
    Average build-id injection took: 14.202 msec (+- 0.059 msec)
    Average time per event: 1.392 usec (+- 0.006 usec)
    Average memory usage: 12650 KB (+- 10 KB)
    Average build-id-all injection took: 12.831 msec (+- 0.071 msec)
    Average time per event: 1.258 usec (+- 0.007 usec)
    Average memory usage: 11895 KB (+- 10 KB)
  $

  $ perf stat -r5 perf bench internals inject-build-id
  # Running 'internals/inject-build-id' benchmark:
    Average build-id injection took: 14.380 msec (+- 0.056 msec)
    Average time per event: 1.410 usec (+- 0.006 usec)
    Average memory usage: 12608 KB (+- 11 KB)
    Average build-id-all injection took: 11.889 msec (+- 0.064 msec)
    Average time per event: 1.166 usec (+- 0.006 usec)
    Average memory usage: 11838 KB (+- 10 KB)
  # Running 'internals/inject-build-id' benchmark:
    Average build-id injection took: 14.246 msec (+- 0.065 msec)
    Average time per event: 1.397 usec (+- 0.006 usec)
    Average memory usage: 12744 KB (+- 10 KB)
    Average build-id-all injection took: 12.019 msec (+- 0.066 msec)
    Average time per event: 1.178 usec (+- 0.006 usec)
    Average memory usage: 11963 KB (+- 10 KB)
  # Running 'internals/inject-build-id' benchmark:
    Average build-id injection took: 14.321 msec (+- 0.067 msec)
    Average time per event: 1.404 usec (+- 0.007 usec)
    Average memory usage: 12690 KB (+- 10 KB)
    Average build-id-all injection took: 11.909 msec (+- 0.041 msec)
    Average time per event: 1.168 usec (+- 0.004 usec)
    Average memory usage: 11938 KB (+- 10 KB)
  # Running 'internals/inject-build-id' benchmark:
    Average build-id injection took: 14.287 msec (+- 0.059 msec)
    Average time per event: 1.401 usec (+- 0.006 usec)
    Average memory usage: 12864 KB (+- 10 KB)
    Average build-id-all injection took: 11.862 msec (+- 0.058 msec)
    Average time per event: 1.163 usec (+- 0.006 usec)
    Average memory usage: 12103 KB (+- 10 KB)
  # Running 'internals/inject-build-id' benchmark:
    Average build-id injection took: 14.402 msec (+- 0.053 msec)
    Average time per event: 1.412 usec (+- 0.005 usec)
    Average memory usage: 12876 KB (+- 10 KB)
    Average build-id-all injection took: 11.826 msec (+- 0.061 msec)
    Average time per event: 1.159 usec (+- 0.006 usec)
    Average memory usage: 12111 KB (+- 10 KB)

   Performance counter stats for 'perf bench internals inject-build-id' (5 runs):

            4,267.48 msec task-clock:u              #    1.502 CPUs utilized            ( +-  0.14% )
                   0      context-switches:u        #    0.000 K/sec
                   0      cpu-migrations:u          #    0.000 K/sec
             102,092      page-faults:u             #    0.024 M/sec                    ( +-  0.08% )
       3,894,589,578      cycles:u                  #    0.913 GHz                      ( +-  0.19% )  (83.49%)
         140,078,421      stalled-cycles-frontend:u #    3.60% frontend cycles idle     ( +-  0.77% )  (83.34%)
         948,581,189      stalled-cycles-backend:u  #   24.36% backend cycles idle      ( +-  0.46% )  (83.25%)
       5,835,587,719      instructions:u            #    1.50  insn per cycle
                                                    #    0.16  stalled cycles per insn  ( +-  0.21% )  (83.24%)
       1,267,423,636      branches:u                #  296.996 M/sec                    ( +-  0.22% )  (83.12%)
          17,484,290      branch-misses:u           #    1.38% of all branches          ( +-  0.12% )  (83.55%)

             2.84176 +- 0.00222 seconds time elapsed  ( +-  0.08% )

  $

Acked-by: Jiri Olsa <jolsa@redhat.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Link: https://lore.kernel.org/r/20201012070214.2074921-2-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
3 years agoperf stat: Fix out of bounds CPU map access when handling armv8_pmu events
Namhyung Kim [Wed, 7 Oct 2020 08:13:11 +0000 (17:13 +0900)]
perf stat: Fix out of bounds CPU map access when handling armv8_pmu events

It was reported that 'perf stat' crashed when used with armv8_pmu (CPU)
events in task mode.  As 'perf stat' uses an empty cpu map for task mode
but armv8_pmu has its own cpu mask, it got confused about which map it
should use when accessing file descriptors, and this causes segfaults:

  (gdb) bt
  #0  0x0000000000603fc8 in perf_evsel__close_fd_cpu (evsel=<optimized out>,
      cpu=<optimized out>) at evsel.c:122
  #1  perf_evsel__close_cpu (evsel=evsel@entry=0x716e950, cpu=7) at evsel.c:156
  #2  0x00000000004d4718 in evlist__close (evlist=0x70a7cb0) at util/evlist.c:1242
  #3  0x0000000000453404 in __run_perf_stat (argc=3, argc@entry=1, argv=0x30,
      argv@entry=0xfffffaea2f90, run_idx=119, run_idx@entry=1701998435)
      at builtin-stat.c:929
  #4  0x0000000000455058 in run_perf_stat (run_idx=1701998435, argv=0xfffffaea2f90,
      argc=1) at builtin-stat.c:947
  #5  cmd_stat (argc=1, argv=0xfffffaea2f90) at builtin-stat.c:2357
  #6  0x00000000004bb888 in run_builtin (p=p@entry=0x9764b8 <commands+288>,
      argc=argc@entry=4, argv=argv@entry=0xfffffaea2f90) at perf.c:312
  #7  0x00000000004bbb54 in handle_internal_command (argc=argc@entry=4,
      argv=argv@entry=0xfffffaea2f90) at perf.c:364
  #8  0x0000000000435378 in run_argv (argcp=<synthetic pointer>,
      argv=<synthetic pointer>) at perf.c:408
  #9  main (argc=4, argv=0xfffffaea2f90) at perf.c:538

To fix this, I simply used the given cpu map unless the evsel actually
is not a system-wide event (like uncore events).

Fixes: 7736627b865d ("perf stat: Use affinity for closing file descriptors")
Reported-by: Wei Li <liwei391@huawei.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Tested-by: Barry Song <song.bao.hua@hisilicon.com>
Acked-by: Jiri Olsa <jolsa@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lore.kernel.org/lkml/20201007081311.1831003-1-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
3 years agoperf python scripting: Fix printable strings in python3 scripts
Jiri Olsa [Mon, 28 Sep 2020 20:11:35 +0000 (22:11 +0200)]
perf python scripting: Fix printable strings in python3 scripts

Hagen reported broken strings in python3 tracepoint scripts:

  make PYTHON=python3
  perf record -e sched:sched_switch -a -- sleep 5
  perf script --gen-script py
  perf script -s ./perf-script.py

  [..]
  sched__sched_switch      7 563231.759525792        0 swapper   prev_comm=bytearray(b'swapper/7\x00\x00\x00\x00\x00\x00\x00'), prev_pid=0, prev_prio=120, prev_state=, next_comm=bytearray(b'mutex-thread-co\x00'),

The problem is in the is_printable_array() function, which does not take
the trailing zero byte into account and claims such strings are not
printable, so the code will create a byte array instead of a string.
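
The essence of the fix, as a hedged sketch of the helper (not the
verbatim tools/perf/util/print_binary.c code):

  #include <ctype.h>

  /* A NUL-terminated array of printable characters is printable; the
   * terminating zero byte (and anything after it) must not disqualify
   * the string. */
  static int is_printable_array(const char *p, unsigned int len)
  {
          unsigned int i;

          if (!p || !len || p[len - 1] != 0)
                  return 0;

          len--;
          for (i = 0; i < len && p[i]; i++) {
                  if (!isprint(p[i]) && !isspace(p[i]))
                          return 0;
          }
          return 1;
  }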

Committer testing:

After this fix:

sched__sched_switch 3 484522.497072626  1158680 kworker/3:0-eve  prev_comm=kworker/3:0, prev_pid=1158680, prev_prio=120, prev_state=I, next_comm=swapper/3, next_pid=0, next_prio=120
Sample: {addr=0, cpu=3, datasrc=84410401, datasrc_decode=N/A|SNP N/A|TLB N/A|LCK N/A, ip=18446744071841817196, period=1, phys_addr=0, pid=1158680, tid=1158680, time=484522497072626, transaction=0, values=[(0, 0)], weight=0}

sched__sched_switch 4 484522.497085610  1225814 perf             prev_comm=perf, prev_pid=1225814, prev_prio=120, prev_state=, next_comm=migration/4, next_pid=30, next_prio=0
Sample: {addr=0, cpu=4, datasrc=84410401, datasrc_decode=N/A|SNP N/A|TLB N/A|LCK N/A, ip=18446744071841817196, period=1, phys_addr=0, pid=1225814, tid=1225814, time=484522497085610, transaction=0, values=[(0, 0)], weight=0}

Fixes: 249de6e07458 ("perf script python: Fix string vs byte array resolving")
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Tested-by: Hagen Paul Pfeifer <hagen@jauu.net>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Michael Petlan <mpetlan@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: stable@vger.kernel.org
Link: http://lore.kernel.org/lkml/20200928201135.3633850-1-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
3 years agoperf trace: Use the autogenerated mmap 'prot' string/id table
Arnaldo Carvalho de Melo [Thu, 1 Oct 2020 14:23:38 +0000 (11:23 -0300)]
perf trace: Use the autogenerated mmap 'prot' string/id table

No change in behaviour:

  # perf trace -e mmap sleep 1
       0.000 ( 0.009 ms): sleep/751870 mmap(len: 143317, prot: READ, flags: PRIVATE, fd: 3)                  = 0x7fa96d0f7000
       0.028 ( 0.004 ms): sleep/751870 mmap(len: 8192, prot: READ|WRITE, flags: PRIVATE|ANONYMOUS)           = 0x7fa96d0f5000
       0.037 ( 0.005 ms): sleep/751870 mmap(len: 1872744, prot: READ, flags: PRIVATE|DENYWRITE, fd: 3)       = 0x7fa96cf2b000
       0.044 ( 0.011 ms): sleep/751870 mmap(addr: 0x7fa96cf50000, len: 1376256, prot: READ|EXEC, flags: PRIVATE|FIXED|DENYWRITE, fd: 3, off: 0x25000) = 0x7fa96cf50000
       0.056 ( 0.007 ms): sleep/751870 mmap(addr: 0x7fa96d0a0000, len: 307200, prot: READ, flags: PRIVATE|FIXED|DENYWRITE, fd: 3, off: 0x175000) = 0x7fa96d0a0000
       0.064 ( 0.007 ms): sleep/751870 mmap(addr: 0x7fa96d0eb000, len: 24576, prot: READ|WRITE, flags: PRIVATE|FIXED|DENYWRITE, fd: 3, off: 0x1bf000) = 0x7fa96d0eb000
       0.075 ( 0.005 ms): sleep/751870 mmap(addr: 0x7fa96d0f1000, len: 13160, prot: READ|WRITE, flags: PRIVATE|FIXED|ANONYMOUS) = 0x7fa96d0f1000
       0.253 ( 0.005 ms): sleep/751870 mmap(len: 218049136, prot: READ, flags: PRIVATE, fd: 3)               = 0x7fa95ff38000
  #
  #
  # set -o vi
  # strace -e mmap sleep 1
  mmap(NULL, 143317, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f333bd83000
  mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f333bd81000
  mmap(NULL, 1872744, PROT_READ, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f333bbb7000
  mmap(0x7f333bbdc000, 1376256, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x25000) = 0x7f333bbdc000
  mmap(0x7f333bd2c000, 307200, PROT_READ, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x175000) = 0x7f333bd2c000
  mmap(0x7f333bd77000, 24576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x1bf000) = 0x7f333bd77000
  mmap(0x7f333bd7d000, 13160, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7f333bd7d000
  mmap(NULL, 218049136, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f332ebc4000
  +++ exited with 0 +++
  #

And you can also tweak 'perf trace's output to more closely match
strace's:

  # perf config trace.show_arg_names=no
  # perf config trace.show_duration=no
  # perf config trace.show_prefix=yes
  # perf config trace.show_timestamp=no
  # perf config trace.show_zeros=yes
  # perf config trace.no_inherit=yes
  # perf trace -e mmap sleep 1
  mmap(NULL, 143317, PROT_READ, MAP_PRIVATE, 3, 0)                      = 0x7f0d287ca000
  mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS)     = 0x7f0d287c8000
  mmap(NULL, 1872744, PROT_READ, MAP_PRIVATE|MAP_DENYWRITE, 3, 0)       = 0x7f0d285fe000
  mmap(0x7f0d28623000, 1376256, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x25000) = 0x7f0d28623000
  mmap(0x7f0d28773000, 307200, PROT_READ, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x175000) = 0x7f0d28773000
  mmap(0x7f0d287be000, 24576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x1bf000) = 0x7f0d287be000
  mmap(0x7f0d287c4000, 13160, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS) = 0x7f0d287c4000
  mmap(NULL, 218049136, PROT_READ, MAP_PRIVATE, 3, 0)                   = 0x7f0d1b60b000
  #

  # perf config | grep ^trace
  trace.show_arg_names=no
  trace.show_duration=no
  trace.show_prefix=yes
  trace.show_timestamp=no
  trace.show_zeros=yes
  trace.no_inherit=yes
  #

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Ian Rogers <irogers@google.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
3 years agotools beauty: Add script to generate table of mmap's 'prot' argument
Arnaldo Carvalho de Melo [Thu, 1 Oct 2020 14:14:22 +0000 (11:14 -0300)]
tools beauty: Add script to generate table of mmap's 'prot' argument

Will be wired up in the following csets:

  $ tools/perf/trace/beauty/mmap_prot.sh
  static const char *mmap_prot[] = {
   [ilog2(0x1) + 1] = "READ",
  #ifndef PROT_READ
  #define PROT_READ 0x1
  #endif
   [ilog2(0x2) + 1] = "WRITE",
  #ifndef PROT_WRITE
  #define PROT_WRITE 0x2
  #endif
   [ilog2(0x4) + 1] = "EXEC",
  #ifndef PROT_EXEC
  #define PROT_EXEC 0x4
  #endif
   [ilog2(0x8) + 1] = "SEM",
  #ifndef PROT_SEM
  #define PROT_SEM 0x8
  #endif
   [ilog2(0x01000000) + 1] = "GROWSDOWN",
  #ifndef PROT_GROWSDOWN
  #define PROT_GROWSDOWN 0x01000000
  #endif
   [ilog2(0x02000000) + 1] = "GROWSUP",
  #ifndef PROT_GROWSUP
  #define PROT_GROWSUP 0x02000000
  #endif
  };
  $
  $
  $
  $ tools/perf/trace/beauty/mmap_prot.sh alpha
  static const char *mmap_prot[] = {
   [ilog2(0x4) + 1] = "EXEC",
  #ifndef PROT_EXEC
  #define PROT_EXEC 0x4
  #endif
   [ilog2(0x01000000) + 1] = "GROWSDOWN",
  #ifndef PROT_GROWSDOWN
  #define PROT_GROWSDOWN 0x01000000
  #endif
   [ilog2(0x02000000) + 1] = "GROWSUP",
  #ifndef PROT_GROWSUP
  #define PROT_GROWSUP 0x02000000
  #endif
   [ilog2(0x1) + 1] = "READ",
  #ifndef PROT_READ
  #define PROT_READ 0x1
  #endif
   [ilog2(0x8) + 1] = "SEM",
  #ifndef PROT_SEM
  #define PROT_SEM 0x8
  #endif
   [ilog2(0x2) + 1] = "WRITE",
  #ifndef PROT_WRITE
  #define PROT_WRITE 0x2
  #endif
  };
  $
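
The '+ 1' in the generated indices leaves slot 0 free, so a zero flags
value never maps to a name; a consumer then looks up names per bit,
roughly like this (illustrative, not the actual beautifier code):

  /* name of a single prot bit: ilog2(0x4) + 1 == 3 -> "EXEC" */
  const char *name = mmap_prot[ilog2(PROT_EXEC) + 1];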

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Ian Rogers <irogers@google.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
3 years agoperf beauty mmap_flags: Conditionally define the mmap flags
Arnaldo Carvalho de Melo [Wed, 30 Sep 2020 12:34:20 +0000 (09:34 -0300)]
perf beauty mmap_flags: Conditionally define the mmap flags

So that on older systems we get the definitions used in the mmap flags
scnprintf routines:

  $ tools/perf/trace/beauty/mmap_flags.sh  | head -9 2> /dev/null
  static const char *mmap_flags[] = {
   [ilog2(0x40) + 1] = "32BIT",
  #ifndef MAP_32BIT
  #define MAP_32BIT 0x40
  #endif
   [ilog2(0x01) + 1] = "SHARED",
  #ifndef MAP_SHARED
  #define MAP_SHARED 0x01
  #endif
  $

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Ian Rogers <irogers@google.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
3 years agoperf trace beauty: Add script to autogenerate mremap's flags args string/id table
Arnaldo Carvalho de Melo [Tue, 29 Sep 2020 21:07:27 +0000 (18:07 -0300)]
perf trace beauty: Add script to autogenerate mremap's flags args string/id table

It'll also conditionally generate the defines, so that if we don't have
them when building a new tool tarball on an older system, we still get
them.  They are sometimes needed in the actual scnprintf routine, such
as when checking whether a flag means we have an extra arg, like with
MREMAP_FIXED.

  $ tools/perf/trace/beauty/mremap_flags.sh
  static const char *mremap_flags[] = {
   [ilog2(1) + 1] = "MAYMOVE",
  #ifndef MREMAP_MAYMOVE
  #define MREMAP_MAYMOVE 1
  #endif
   [ilog2(2) + 1] = "FIXED",
  #ifndef MREMAP_FIXED
  #define MREMAP_FIXED 2
  #endif
   [ilog2(4) + 1] = "DONTUNMAP",
  #ifndef MREMAP_DONTUNMAP
  #define MREMAP_DONTUNMAP 4
  #endif
  };
  $

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Ian Rogers <irogers@google.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
3 years agoperf tools: Separate the checking of headers only used to build beautification tables
Arnaldo Carvalho de Melo [Tue, 29 Sep 2020 11:56:38 +0000 (08:56 -0300)]
perf tools: Separate the checking of headers only used to build beautification tables

Some headers are not used to build the tools directly, but instead to
generate tables that then get included as source code to do id->string
and string->id lookups for things like syscall flags and commands.

We were adding them directly to tools/include/, and this sometimes gets
in the way of building using system headers; let's untangle this a bit.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Ian Rogers <irogers@google.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
3 years agoMerge remote-tracking branch 'torvalds/master' into perf/core
Arnaldo Carvalho de Melo [Mon, 28 Sep 2020 18:44:52 +0000 (15:44 -0300)]
Merge remote-tracking branch 'torvalds/master' into perf/core

To pick up fixes and get v5.10 development in sync with the main kernel
sources.

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
3 years agoMerge tag 'nfs-for-5.9-3' of git://git.linux-nfs.org/projects/trondmy/linux-nfs
Linus Torvalds [Mon, 28 Sep 2020 18:05:56 +0000 (11:05 -0700)]
Merge tag 'nfs-for-5.9-3' of git://git.linux-nfs.org/projects/trondmy/linux-nfs

Pull NFS client bugfixes from Trond Myklebust:
 "Highlights include:

   - NFSv4.2: copy_file_range needs to invalidate caches on success

   - NFSv4.2: Fix security label length not being reset

   - pNFS/flexfiles: Ensure we initialise the mirror bsizes correctly
     on read

   - pNFS/flexfiles: Fix signed/unsigned type issues with mirror
     indices"

* tag 'nfs-for-5.9-3' of git://git.linux-nfs.org/projects/trondmy/linux-nfs:
  pNFS/flexfiles: Be consistent about mirror index types
  pNFS/flexfiles: Ensure we initialise the mirror bsizes correctly on read
  NFSv4.2: fix client's attribute cache management for copy_file_range
  nfs: Fix security label length not being reset

3 years agomm: do not rely on mm == current->mm in __get_user_pages_locked
Jason A. Donenfeld [Mon, 28 Sep 2020 10:35:07 +0000 (12:35 +0200)]
mm: do not rely on mm == current->mm in __get_user_pages_locked

It seems likely this block was pasted from internal_get_user_pages_fast,
which is not passed an mm struct and therefore uses current's.  But
__get_user_pages_locked is passed an explicit mm, and current->mm is not
always valid. This was hit when being called from i915, which uses:

  pin_user_pages_remote->
    __get_user_pages_remote->
      __gup_longterm_locked->
        __get_user_pages_locked

Before, this would lead to an OOPS:

  BUG: kernel NULL pointer dereference, address: 0000000000000064
  #PF: supervisor write access in kernel mode
  #PF: error_code(0x0002) - not-present page
  CPU: 10 PID: 1431 Comm: kworker/u33:1 Tainted: P S   U     O      5.9.0-rc7+ #140
  Hardware name: LENOVO 20QTCTO1WW/20QTCTO1WW, BIOS N2OET47W (1.34 ) 08/06/2020
  Workqueue: i915-userptr-acquire __i915_gem_userptr_get_pages_worker [i915]
  RIP: 0010:__get_user_pages_remote+0xd7/0x310
  Call Trace:
   __i915_gem_userptr_get_pages_worker+0xc8/0x260 [i915]
   process_one_work+0x1ca/0x390
   worker_thread+0x48/0x3c0
   kthread+0x114/0x130
   ret_from_fork+0x1f/0x30
  CR2: 0000000000000064

This commit fixes the problem by using the mm pointer passed to the
function rather than the bogus one in current.
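
Given the Fixes tag, the change boils down to something like this
(a sketch of the __get_user_pages_locked() hunk):

  /* before: dereferences current->mm, which may be NULL in a kworker */
  atomic_set(&current->mm->has_pinned, 1);

  /* after: use the mm explicitly passed to the function */
  atomic_set(&mm->has_pinned, 1);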

Fixes: 008cfe4418b3 ("mm: Introduce mm_struct.has_pinned")
Tested-by: Chris Wilson <chris@chris-wilson.co.uk>
Reported-by: Harald Arnesen <harald@skogtun.org>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
3 years agoperf test: Fix msan uninitialized use.
Ian Rogers [Wed, 23 Sep 2020 21:06:55 +0000 (14:06 -0700)]
perf test: Fix msan uninitialized use.

Ensure 'st' is initialized before an error branch is taken.
Fixes test "67: Parse and process metrics" with LLVM msan:

  ==6757==WARNING: MemorySanitizer: use-of-uninitialized-value
    #0 0x5570edae947d in rblist__exit tools/perf/util/rblist.c:114:2
    #1 0x5570edb1c6e8 in runtime_stat__exit tools/perf/util/stat-shadow.c:141:2
    #2 0x5570ed92cfae in __compute_metric tools/perf/tests/parse-metric.c:187:2
    #3 0x5570ed92cb74 in compute_metric tools/perf/tests/parse-metric.c:196:9
    #4 0x5570ed92c6d8 in test_recursion_fail tools/perf/tests/parse-metric.c:318:2
    #5 0x5570ed92b8c8 in test__parse_metric tools/perf/tests/parse-metric.c:356:2
    #6 0x5570ed8de8c1 in run_test tools/perf/tests/builtin-test.c:410:9
    #7 0x5570ed8ddadf in test_and_print tools/perf/tests/builtin-test.c:440:9
    #8 0x5570ed8dca04 in __cmd_test tools/perf/tests/builtin-test.c:661:4
    #9 0x5570ed8dbc07 in cmd_test tools/perf/tests/builtin-test.c:807:9
    #10 0x5570ed7326cc in run_builtin tools/perf/perf.c:313:11
    #11 0x5570ed731639 in handle_internal_command tools/perf/perf.c:365:8
    #12 0x5570ed7323cd in run_argv tools/perf/perf.c:409:2
    #13 0x5570ed731076 in main tools/perf/perf.c:539:3
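
The pattern of the fix, as a hedged sketch (not the verbatim
tests/parse-metric.c code; do_setup() is illustrative):

  static int __compute_metric(void)
  {
          struct runtime_stat st;
          int err;

          /* initialize before the first error branch, so the shared
           * cleanup below can always call runtime_stat__exit(&st) */
          runtime_stat__init(&st);

          err = do_setup();
          if (err)
                  goto out;
          /* ... compute the metric ... */
  out:
          runtime_stat__exit(&st);
          return err;
  }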

Fixes: commit f5a56570a3f2 ("perf test: Fix memory leaks in parse-metric test")
Signed-off-by: Ian Rogers <irogers@google.com>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: clang-built-linux@googlegroups.com
Link: http://lore.kernel.org/lkml/20200923210655.4143682-1-irogers@google.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
3 years agoperf parse-events: Reduce casts around bp_addr
Ian Rogers [Fri, 25 Sep 2020 00:39:03 +0000 (17:39 -0700)]
perf parse-events: Reduce casts around bp_addr

perf_event_attr bp_addr is a u64. parse-events.y parses it as a u64, but
casts it to a void* and then parse-events.c casts it back to a u64.
Rather than all the casts, change the type of the address to be a u64.
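
Illustratively (the variable names are hypothetical), the round trip
being removed looks like:

  /* before: the parsed u64 was squeezed through a void pointer */
  void *ptr = (void *)(uintptr_t)parsed_num;
  attr.bp_addr = (u64)(uintptr_t)ptr;

  /* after: keep the address a u64 end to end */
  attr.bp_addr = parsed_num;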

This removes an issue noted in:

  https://lore.kernel.org/lkml/20200903184359.GC3495158@kernel.org/

Signed-off-by: Ian Rogers <irogers@google.com>
Acked-by: Jiri Olsa <jolsa@redhat.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jin Yao <yao.jin@linux.intel.com>
Cc: Leo Yan <leo.yan@linaro.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lore.kernel.org/lkml/20200925003903.561568-1-irogers@google.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
3 years agoperf test: Add expand cgroup event test
Namhyung Kim [Thu, 24 Sep 2020 12:44:55 +0000 (21:44 +0900)]
perf test: Add expand cgroup event test

It'll expand the given events for cgroups A, B and C.

  $ perf test -v expansion
  69: Event expansion for cgroups                      :
  --- start ---
  test child forked, pid 983140
  metric expr 1 / IPC for CPI
  metric expr instructions / cycles for IPC
  found event instructions
  found event cycles
  adding {instructions,cycles}:W
  copying metric event for cgroup 'A': instructions (idx=0)
  copying metric event for cgroup 'B': instructions (idx=0)
  copying metric event for cgroup 'C': instructions (idx=0)
  test child finished with 0
  ---- end ----
  Event expansion for cgroups: Ok

Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Acked-by: Jiri Olsa <jolsa@redhat.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Ian Rogers <irogers@google.com>
Cc: John Garry <john.garry@huawei.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lore.kernel.org/lkml/20200924124455.336326-6-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
3 years agoperf tools: Allow creation of cgroup without open
Namhyung Kim [Thu, 24 Sep 2020 12:44:54 +0000 (21:44 +0900)]
perf tools: Allow creation of cgroup without open

This is a preparation for a test case of expanding events for multiple
cgroups.  Instead of using real system cgroups, the test will use fake
cgroups, so it needs a way to create them without an open file
descriptor.

Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Acked-by: Jiri Olsa <jolsa@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Ian Rogers <irogers@google.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lore.kernel.org/lkml/20200924124455.336326-5-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
3 years agoperf tools: Copy metric events properly when expand cgroups
Namhyung Kim [Thu, 24 Sep 2020 12:44:53 +0000 (21:44 +0900)]
perf tools: Copy metric events properly when expand cgroups

The metricgroup__copy_metric_events() function handles metric events
when expanding events for cgroups.  As the metric events keep pointers
to evsels, those should be refreshed when events are cloned during the
operation.

The perf_stat__collect_metric_expr() function is also called in case an
event has a metric directly.

During the copy, it references evsels by index, as the evlist now has
cloned evsels for the given cgroup.

Also, the kernel test robot found an issue in the python module import,
so add empty implementations of those two functions to fix it.

Reported-by: kernel test robot <rong.a.chen@intel.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Acked-by: Jiri Olsa <jolsa@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Ian Rogers <irogers@google.com>
Cc: John Garry <john.garry@huawei.com>
Cc: Kajol Jain <kjain@linux.ibm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lore.kernel.org/lkml/20200924124455.336326-4-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
3 years agoperf stat: Add --for-each-cgroup option
Namhyung Kim [Thu, 24 Sep 2020 12:44:52 +0000 (21:44 +0900)]
perf stat: Add --for-each-cgroup option

The --for-each-cgroup option is syntactic sugar to monitor a large
number of cgroups easily.  The current command line requires listing all
the events and cgroups even if users want to monitor the same events for
each cgroup.  This patch addresses that usage by copying the given
events for each cgroup on the user's behalf.

For instance, if they want to monitor 6 events for each of 200 cgroups,
they would have to write 1200 event names (with -e) AND 1200 cgroup
names (with -G) on the command line.  But with this change, they can
just specify 6 events and 200 cgroups with the new option.

A simpler example below: It wants to measure 3 events for 2 cgroups ('A'
and 'B').  The result is that total 6 events are counted like below.

  $ perf stat -a -e cpu-clock,cycles,instructions --for-each-cgroup A,B sleep 1

   Performance counter stats for 'system wide':

              988.18 msec cpu-clock                 A #    0.987 CPUs utilized
       3,153,761,702      cycles                    A #    3.200 GHz                      (100.00%)
       8,067,769,847      instructions              A #    2.57  insn per cycle           (100.00%)
              982.71 msec cpu-clock                 B #    0.982 CPUs utilized
       3,136,093,298      cycles                    B #    3.182 GHz                      (99.99%)
       8,109,619,327      instructions              B #    2.58  insn per cycle           (99.99%)

         1.001228054 seconds time elapsed

Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Acked-by: Jiri Olsa <jolsa@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Ian Rogers <irogers@google.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lore.kernel.org/lkml/20200924124455.336326-3-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
3 years agoperf evsel: Add evsel__clone() function
Namhyung Kim [Thu, 24 Sep 2020 12:44:51 +0000 (21:44 +0900)]
perf evsel: Add evsel__clone() function

The evsel__clone() function creates an exact copy of an evsel from the
same attributes.  The function assumes the given evsel is not configured
yet, so it only cares about the fields set during event parsing.  Those
fields are now moved together, as Jiri suggested.  Note that metric
events will be handled by a later patch.

It will be used by perf stat to generate separate events for each
cgroup.
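
A hedged sketch of that intended use (the helpers are perf's, but the
loop itself is illustrative):

  struct evsel *pos;

  evlist__for_each_entry(evlist, pos) {
          struct evsel *clone = evsel__clone(pos);

          if (clone == NULL)
                  return -ENOMEM;
          clone->cgrp = cgroup__get(cgrp);   /* one copy per cgroup */
  }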

Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Acked-by: Jiri Olsa <jolsa@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Ian Rogers <irogers@google.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lore.kernel.org/lkml/20200924124455.336326-2-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
3 years agoperf vendor events: Update SkylakeX events to v1.21
Jin Yao [Wed, 13 May 2020 08:13:33 +0000 (16:13 +0800)]
perf vendor events: Update SkylakeX events to v1.21

- Update SkylakeX events to v1.21.
- Update SkylakeX JSON metrics from TMAM 4.0.

Other fixes:

- Add NO_NMI_WATCHDOG metric constraint to Backend_Bound
- Fix misspelled error

Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
Reviewed-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Ian Rogers <irogers@google.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kan Liang <kan.liang@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: https://lore.kernel.org/lkml/20200922031918.3723-1-yao.jin@linux.intel.com/
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
3 years agoperf vendor events intel: Update CascadelakeX events to v1.08
Jin Yao [Tue, 22 Sep 2020 02:51:19 +0000 (10:51 +0800)]
perf vendor events intel: Update CascadelakeX events to v1.08

- Update CascadelakeX events to v1.08.
- Update CascadelakeX JSON metrics from TMAM 4.0.

Other fixes:

- Add NO_NMI_WATCHDOG metric constraint to Backend_Bound
- Change 'MB/sec' to 'MB' in UNC_M_PMM_BANDWIDTH.

Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
Reviewed-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Ian Rogers <irogers@google.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Kan Liang <kan.liang@intel.com>
Link: https://lore.kernel.org/lkml/20200922031918.3723-1-yao.jin@linux.intel.com/
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
3 years agoLinux 5.9-rc7
Linus Torvalds [Sun, 27 Sep 2020 21:38:10 +0000 (14:38 -0700)]
Linux 5.9-rc7

3 years agoMerge tag 'kbuild-fixes-v5.9-4' of git://git.kernel.org/pub/scm/linux/kernel/git...
Linus Torvalds [Sun, 27 Sep 2020 19:18:57 +0000 (12:18 -0700)]
Merge tag 'kbuild-fixes-v5.9-4' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild

Pull Kbuild fixes from Masahiro Yamada:

 - ignore compiler stubs for PPC to fix builds

 - fix the usage of --target mentioned in the LLVM document

* tag 'kbuild-fixes-v5.9-4' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild:
  Documentation/llvm: Fix clang target examples
  scripts/kallsyms: skip ppc compiler stub *.long_branch.* / *.plt_branch.*

3 years agoMerge tag 'x86-urgent-2020-09-27' of git://git.kernel.org/pub/scm/linux/kernel/git...
Linus Torvalds [Sun, 27 Sep 2020 19:15:21 +0000 (12:15 -0700)]
Merge tag 'x86-urgent-2020-09-27' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull x86 fixes from Thomas Gleixner:
 "Two fixes for the x86 interrupt code:

   - Unbreak the magic 'search the timer interrupt' logic in IO/APIC
     code which got wrecked when the core interrupt code made the
     state tracking logic stricter.

     That caused the interrupt line to stay masked after switching from
     IO/APIC to PIC delivery mode, which obviously prevents interrupts
     from being delivered.

   - Make run_on_irqstack_cond() typesafe. The function argument is a
     void pointer which is then cast to 'void (*fun)(void *)'.

     This breaks Control Flow Integrity checking in clang. Use proper
     helper functions for the three variants required"

* tag 'x86-urgent-2020-09-27' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/ioapic: Unbreak check_timer()
  x86/irq: Make run_on_irqstack_cond() typesafe

3 years agoMerge tag 'timers-urgent-2020-09-27' of git://git.kernel.org/pub/scm/linux/kernel...
Linus Torvalds [Sun, 27 Sep 2020 19:11:35 +0000 (12:11 -0700)]
Merge tag 'timers-urgent-2020-09-27' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull timer updates from Thomas Gleixner:
 "A set of clocksource/clockevents updates:

   - Reset the TI/DM timer before enabling it instead of doing it the
     other way round.

   - Initialize the reload value for the GX6605s timer correctly so the
     hardware counter starts at 0 again after overrun.

   - Make error return value negative in the h8300 timer init function"

* tag 'timers-urgent-2020-09-27' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  clocksource/drivers/timer-gx6605s: Fixup counter reload
  clocksource/drivers/timer-ti-dm: Do reset before enable
  clocksource/drivers/h8300_timer8: Fix wrong return value in h8300_8timer_init()

3 years agomm/thp: Split huge pmds/puds if they're pinned when fork()
Peter Xu [Fri, 25 Sep 2020 22:26:00 +0000 (18:26 -0400)]
mm/thp: Split huge pmds/puds if they're pinned when fork()

Pinned pages shouldn't be write-protected when fork() happens, because
follow up copy-on-write on these pages could cause the pinned pages to
be replaced by random newly allocated pages.

For huge PMDs, we split the huge pmd if pinning is detected, so that
future handling will be done at the PTE level (with our latest
changes, each of the small pages will be copied).  We achieve this by
letting copy_huge_pmd() return -EAGAIN for pinned pages, so that we
fall through in copy_pmd_range() and finally land in the next
copy_pte_range() call.
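
For illustration, a simplified sketch of that check in copy_huge_pmd()
(condition and helpers as in this series, locking abridged):

  if (unlikely(is_cow_mapping(vma->vm_flags) &&
               atomic_read(&src_mm->has_pinned) &&
               page_maybe_dma_pinned(src_page))) {
          /* Split and let copy_pmd_range() retry at the PTE level. */
          spin_unlock(src_ptl);
          spin_unlock(dst_ptl);
          __split_huge_pmd(vma, src_pmd, addr, false, NULL);
          return -EAGAIN;
  }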

Huge PUDs are even more special: so far they do not support anonymous
pages.  But they can be handled the same way as huge PMDs, even though
splitting huge PUDs means erasing the PUD entries.  That guarantees
that the follow-up fault-ins will remap the same pages in either the
parent or the child later.

This might not be the most efficient way, but it should be easy and
clean enough.  It should be fine, since we're tackling a very rare
case just to make sure userspace that pinned some THPs will still work
even without MADV_DONTFORK after it fork()ed.

Signed-off-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
3 years agomm: Do early cow for pinned pages during fork() for ptes
Peter Xu [Fri, 25 Sep 2020 22:25:59 +0000 (18:25 -0400)]
mm: Do early cow for pinned pages during fork() for ptes

This allows copy_pte_range() to do early cow if the pages were pinned on
the source mm.

Currently we don't have an accurate way to know whether a page is
pinned or not.  The only thing we have is page_maybe_dma_pinned().
However, that's good enough for now, especially with the newly added
mm->has_pinned flag, which makes sure we won't affect processes that
never pinned any pages.

It would be easier if we could do GFP_KERNEL allocations within
copy_one_pte().  Unfortunately, we can't, because the page table locks
are held for both the parent and child processes, so the page
allocation needs to be done outside copy_one_pte().

There is some trickery in copy_present_pte(), mainly the wrprotect
trick to block concurrent fast-gup; the comments in the function
explain it in place.
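
Roughly, the gate for the early copy looks like this (a simplified
sketch; the helpers are the ones used by this series):

  /* Returning 1 means "keep sharing the page"; otherwise copy now. */
  if (!is_cow_mapping(vma->vm_flags))
          return 1;
  if (likely(!atomic_read(&src_mm->has_pinned)))
          return 1;
  if (likely(!page_maybe_dma_pinned(page)))
          return 1;
  /* Fall through: copy into a preallocated page for the child. */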

Oleg Nesterov reported a (probably harmless) bug during review: we
didn't reset entry.val properly in copy_pte_range(), so there was a
chance of calling add_swap_count_continuation() multiple times on the
same swp entry.  However, that should be harmless since, even if it
happens, add_swap_count_continuation() will return directly after
noticing that there is enough space for the swp counter.  So instead
of a standalone stable patch, it is touched up in this patch directly.

Link: https://lore.kernel.org/lkml/20200914143829.GA1424636@nvidia.com/
Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
3 years agomm/fork: Pass new vma pointer into copy_page_range()
Peter Xu [Fri, 25 Sep 2020 22:25:58 +0000 (18:25 -0400)]
mm/fork: Pass new vma pointer into copy_page_range()

This prepares for the future work to trigger early cow on pinned pages
during fork().

No functional change intended.

Signed-off-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
3 years agomm: Introduce mm_struct.has_pinned
Peter Xu [Fri, 25 Sep 2020 22:25:57 +0000 (18:25 -0400)]
mm: Introduce mm_struct.has_pinned

(Commit message majorly collected from Jason Gunthorpe)

Reduce the chance of false positives from page_maybe_dma_pinned() by
keeping track of whether the mm_struct has ever been used with
pin_user_pages().  This allows cases that might drive up the page
ref_count to avoid any penalty from handling dma_pinned pages.

Future work is planned, to provide a more sophisticated solution, likely
to turn it into a real counter.  For now, make it atomic_t but use it as
a boolean for simplicity.
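
In shape, the flag and its update are as simple as (sketch):

  struct mm_struct {
          ...
          atomic_t has_pinned;    /* set once pin_user_pages*() is used */
  };

  /* In the pinning path: */
  if ((gup_flags & FOLL_PIN) && !atomic_read(&mm->has_pinned))
          atomic_set(&mm->has_pinned, 1);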

Suggested-by: Jason Gunthorpe <jgg@ziepe.ca>
Signed-off-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
3 years agoMerge tag 'timers-v5.9-rc4' of https://git.linaro.org/people/daniel.lezcano/linux...
Thomas Gleixner [Sun, 27 Sep 2020 09:24:34 +0000 (11:24 +0200)]
Merge tag 'timers-v5.9-rc4' of https://git.linaro.org/people/daniel.lezcano/linux into timers/urgent

Pull clocksource/clockevent fixes from Daniel Lezcano:

 - Fix wrong signed return value when checking of_iomap in the probe
   function for the h8300 timer (Tianjia Zhang)

 - Fix reset sequence when setting up the timer on the dm_timer (Tony
   Lindgren)

 - Fix counter reload when the interrupt fires on gx6605s (Guo Ren)

3 years agoMerge tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi
Linus Torvalds [Sat, 26 Sep 2020 18:18:37 +0000 (11:18 -0700)]
Merge tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi

Pull SCSI fixes from James Bottomley:
 "Three fixes: one in drivers (lpfc) and two for zoned block devices.

  The latter also impinges on the block layer but only to introduce a
  new block API for setting the zone model rather than fiddling with the
  queue directly in the zoned block driver"

* tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi:
  scsi: sd: sd_zbc: Fix ZBC disk initialization
  scsi: sd: sd_zbc: Fix handling of host-aware ZBC disks
  scsi: lpfc: Fix initial FLOGI failure due to BBSCN not supported

3 years agoMerge tag 'io_uring-5.9-2020-09-25' of git://git.kernel.dk/linux-block
Linus Torvalds [Sat, 26 Sep 2020 18:13:51 +0000 (11:13 -0700)]
Merge tag 'io_uring-5.9-2020-09-25' of git://git.kernel.dk/linux-block

Pull io_uring fixes from Jens Axboe:
 "Two fixes for regressions in this cycle, and one that goes to 5.8
  stable:

   - fix leak of getname() retrieved filename

   - remove plug->nowait assignment, fixing a regression with btrfs

   - fix for async buffered retry"

* tag 'io_uring-5.9-2020-09-25' of git://git.kernel.dk/linux-block:
  io_uring: ensure async buffered read-retry is setup properly
  io_uring: don't unconditionally set plug->nowait = true
  io_uring: ensure open/openat2 name is cleaned on cancelation

3 years agoMerge tag 'block-5.9-2020-09-25' of git://git.kernel.dk/linux-block
Linus Torvalds [Sat, 26 Sep 2020 18:07:36 +0000 (11:07 -0700)]
Merge tag 'block-5.9-2020-09-25' of git://git.kernel.dk/linux-block

Pull block fixes from Jens Axboe:
 "NVMe pull request from Christoph, and removal of a dead define.

   - fix an error during controller probe that caused IRQs to be freed
     twice (Keith Busch)

   - FC connection establishment fix (James Smart)

   - properly handle completions for invalid tags (Xianting Tian)

   - pass the correct nsid to the command effects and supported log
     (Chaitanya Kulkarni)"

* tag 'block-5.9-2020-09-25' of git://git.kernel.dk/linux-block:
  block: remove unused BLK_QC_T_EAGAIN flag
  nvme-core: don't use NVME_NSID_ALL for command effects and supported log
  nvme-fc: fail new connections to a deleted host or remote port
  nvme-pci: fix NULL req in completion handler
  nvme: return errors for hwmon init

3 years agoMerge tag 's390-5.9-7' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux
Linus Torvalds [Sat, 26 Sep 2020 18:01:18 +0000 (11:01 -0700)]
Merge tag 's390-5.9-7' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux

Pull s390 fix from Vasily Gorbik:
 "Fix truncated ZCRYPT_PERDEV_REQCNT ioctl result. Copy entire reqcnt
  list"

* tag 's390-5.9-7' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux:
  s390/zcrypt: Fix ZCRYPT_PERDEV_REQCNT ioctl

3 years agoMerge branch 'akpm' (patches from Andrew)
Linus Torvalds [Sat, 26 Sep 2020 17:53:35 +0000 (10:53 -0700)]
Merge branch 'akpm' (patches from Andrew)

Merge misc fixes from Andrew Morton:
 "9 patches.

  Subsystems affected by this patch series: mm (thp, memcg, gup,
  migration, memory-hotplug), lib, and x86"

* emailed patches from Andrew Morton <akpm@linux-foundation.org>:
  mm: don't rely on system state to detect hot-plug operations
  mm: replace memmap_context by meminit_context
  arch/x86/lib/usercopy_64.c: fix __copy_user_flushcache() cache writeback
  lib/memregion.c: include memregion.h
  lib/string.c: implement stpcpy
  mm/migrate: correct thp migration stats
  mm/gup: fix gup_fast with dynamic page table folding
  mm: memcontrol: fix missing suffix of workingset_restore
  mm, THP, swap: fix allocating cluster for swapfile by mistake

3 years agomm: validate pmd after splitting
Minchan Kim [Tue, 15 Sep 2020 06:32:15 +0000 (23:32 -0700)]
mm: validate pmd after splitting

syzbot reported the following KASAN splat:

  general protection fault, probably for non-canonical address 0xdffffc0000000003: 0000 [#1] PREEMPT SMP KASAN
  KASAN: null-ptr-deref in range [0x0000000000000018-0x000000000000001f]
  CPU: 1 PID: 6826 Comm: syz-executor142 Not tainted 5.9.0-rc4-syzkaller #0
  Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
  RIP: 0010:__lock_acquire+0x84/0x2ae0 kernel/locking/lockdep.c:4296
  Code: ff df 8a 04 30 84 c0 0f 85 e3 16 00 00 83 3d 56 58 35 08 00 0f 84 0e 17 00 00 83 3d 25 c7 f5 07 00 74 2c 4c 89 e8 48 c1 e8 03 <80> 3c 30 00 74 12 4c 89 ef e8 3e d1 5a 00 48 be 00 00 00 00 00 fc
  RSP: 0018:ffffc90004b9f850 EFLAGS: 00010006
  Call Trace:
    lock_acquire+0x140/0x6f0 kernel/locking/lockdep.c:5006
    __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
    _raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:151
    spin_lock include/linux/spinlock.h:354 [inline]
    madvise_cold_or_pageout_pte_range+0x52f/0x25c0 mm/madvise.c:389
    walk_pmd_range mm/pagewalk.c:89 [inline]
    walk_pud_range mm/pagewalk.c:160 [inline]
    walk_p4d_range mm/pagewalk.c:193 [inline]
    walk_pgd_range mm/pagewalk.c:229 [inline]
    __walk_page_range+0xe7b/0x1da0 mm/pagewalk.c:331
    walk_page_range+0x2c3/0x5c0 mm/pagewalk.c:427
    madvise_pageout_page_range mm/madvise.c:521 [inline]
    madvise_pageout mm/madvise.c:557 [inline]
    madvise_vma mm/madvise.c:946 [inline]
    do_madvise+0x12d0/0x2090 mm/madvise.c:1145
    __do_sys_madvise mm/madvise.c:1171 [inline]
    __se_sys_madvise mm/madvise.c:1169 [inline]
    __x64_sys_madvise+0x76/0x80 mm/madvise.c:1169
    do_syscall_64+0x31/0x70 arch/x86/entry/common.c:46
    entry_SYSCALL_64_after_hwframe+0x44/0xa9

The backing vma was shmem.

When a file-backed THP is split, madvise zaps the pmd instead of
remapping the sub-pages, so we need to check pmd validity after the
split.
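
In sketch form (assuming the 5.9-era madvise code), the fix re-checks
the pmd once the huge path falls back to the regular-page path:

  /* The pmd may have been zapped rather than remapped, so
   * validate it before starting the PTE walk. */
  if (pmd_trans_unstable(pmd))
          return 0;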

Reported-by: syzbot+ecf80462cb7d5d552bc7@syzkaller.appspotmail.com
Fixes: 1a4e58cce84e ("mm: introduce MADV_PAGEOUT")
Signed-off-by: Minchan Kim <minchan@kernel.org>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
3 years agomm: don't rely on system state to detect hot-plug operations
Laurent Dufour [Sat, 26 Sep 2020 04:19:31 +0000 (21:19 -0700)]
mm: don't rely on system state to detect hot-plug operations

In register_mem_sect_under_node() the system_state value is checked
to detect whether the call is made during boot or during a hot-plug
operation.  Unfortunately, that check against SYSTEM_BOOTING is wrong
because regular memory is registered at the SYSTEM_SCHEDULING state.
In addition, a memory hot-plug operation can be triggered at this
system state by ACPI [1].  So checking against the system state is
not enough.

The consequence is that, on systems with interleaved node ranges like this:

 Early memory node ranges
   node   1: [mem 0x0000000000000000-0x000000011fffffff]
   node   2: [mem 0x0000000120000000-0x000000014fffffff]
   node   1: [mem 0x0000000150000000-0x00000001ffffffff]
   node   0: [mem 0x0000000200000000-0x000000048fffffff]
   node   2: [mem 0x0000000490000000-0x00000007ffffffff]

This can be seen on a PowerPC LPAR after multiple memory hot-plug and
hot-unplug operations are done.  At the next reboot the node's memory
ranges can be interleaved, and since the call to link_mem_sections()
is made in topology_init() while the system is in the
SYSTEM_SCHEDULING state, the node's id is not checked, and the
sections get registered to multiple nodes:

  $ ls -l /sys/devices/system/memory/memory21/node*
  total 0
  lrwxrwxrwx 1 root root     0 Aug 24 05:27 node1 -> ../../node/node1
  lrwxrwxrwx 1 root root     0 Aug 24 05:27 node2 -> ../../node/node2

In that case, the system is able to boot, but if one of these memory
blocks is later hot-unplugged and then hot-plugged, the sysfs
inconsistency is detected and triggers a BUG_ON():

  kernel BUG at /Users/laurent/src/linux-ppc/mm/memory_hotplug.c:1084!
  Oops: Exception in kernel mode, sig: 5 [#1]
  LE PAGE_SIZE=64K MMU=Hash SMP NR_CPUS=2048 NUMA pSeries
  Modules linked in: rpadlpar_io rpaphp pseries_rng rng_core vmx_crypto gf128mul binfmt_misc ip_tables x_tables xfs libcrc32c crc32c_vpmsum autofs4
  CPU: 8 PID: 10256 Comm: drmgr Not tainted 5.9.0-rc1+ #25
  Call Trace:
    add_memory_resource+0x23c/0x340 (unreliable)
    __add_memory+0x5c/0xf0
    dlpar_add_lmb+0x1b4/0x500
    dlpar_memory+0x1f8/0xb80
    handle_dlpar_errorlog+0xc0/0x190
    dlpar_store+0x198/0x4a0
    kobj_attr_store+0x30/0x50
    sysfs_kf_write+0x64/0x90
    kernfs_fop_write+0x1b0/0x290
    vfs_write+0xe8/0x290
    ksys_write+0xdc/0x130
    system_call_exception+0x160/0x270
    system_call_common+0xf0/0x27c

This patch addresses the root cause by not relying on the
system_state value to detect whether the call is due to a hot-plug
operation.  Instead, an extra parameter is added to
link_mem_sections() indicating whether the operation is a hot-plug
one.

[1] According to Oscar Salvador, using this qemu command line, ACPI
memory hotplug operations are raised at the SYSTEM_SCHEDULING state:

  $QEMU -enable-kvm -machine pc -smp 4,sockets=4,cores=1,threads=1 -cpu host -monitor pty \
        -m size=$MEM,slots=255,maxmem=4294967296k  \
        -numa node,nodeid=0,cpus=0-3,mem=512 -numa node,nodeid=1,mem=512 \
        -object memory-backend-ram,id=memdimm0,size=134217728 -device pc-dimm,node=0,memdev=memdimm0,id=dimm0,slot=0 \
        -object memory-backend-ram,id=memdimm1,size=134217728 -device pc-dimm,node=0,memdev=memdimm1,id=dimm1,slot=1 \
        -object memory-backend-ram,id=memdimm2,size=134217728 -device pc-dimm,node=0,memdev=memdimm2,id=dimm2,slot=2 \
        -object memory-backend-ram,id=memdimm3,size=134217728 -device pc-dimm,node=0,memdev=memdimm3,id=dimm3,slot=3 \
        -object memory-backend-ram,id=memdimm4,size=134217728 -device pc-dimm,node=1,memdev=memdimm4,id=dimm4,slot=4 \
        -object memory-backend-ram,id=memdimm5,size=134217728 -device pc-dimm,node=1,memdev=memdimm5,id=dimm5,slot=5 \
        -object memory-backend-ram,id=memdimm6,size=134217728 -device pc-dimm,node=1,memdev=memdimm6,id=dimm6,slot=6 \

Fixes: 4fbce633910e ("mm/memory_hotplug.c: make register_mem_sect_under_node() a callback of walk_memory_range()")
Signed-off-by: Laurent Dufour <ldufour@linux.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: "Rafael J. Wysocki" <rafael@kernel.org>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Nathan Lynch <nathanl@linux.ibm.com>
Cc: Scott Cheloha <cheloha@linux.ibm.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: <stable@vger.kernel.org>
Link: https://lkml.kernel.org/r/20200915094143.79181-3-ldufour@linux.ibm.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
3 years agomm: replace memmap_context by meminit_context
Laurent Dufour [Sat, 26 Sep 2020 04:19:28 +0000 (21:19 -0700)]
mm: replace memmap_context by meminit_context

Patch series "mm: fix memory to node bad links in sysfs", v3.

Sometimes, firmware may expose interleaved memory layout like this:

 Early memory node ranges
   node   1: [mem 0x0000000000000000-0x000000011fffffff]
   node   2: [mem 0x0000000120000000-0x000000014fffffff]
   node   1: [mem 0x0000000150000000-0x00000001ffffffff]
   node   0: [mem 0x0000000200000000-0x000000048fffffff]
   node   2: [mem 0x0000000490000000-0x00000007ffffffff]

In that case, we can see memory blocks assigned to multiple nodes in
sysfs:

  $ ls -l /sys/devices/system/memory/memory21
  total 0
  lrwxrwxrwx 1 root root     0 Aug 24 05:27 node1 -> ../../node/node1
  lrwxrwxrwx 1 root root     0 Aug 24 05:27 node2 -> ../../node/node2
  -rw-r--r-- 1 root root 65536 Aug 24 05:27 online
  -r--r--r-- 1 root root 65536 Aug 24 05:27 phys_device
  -r--r--r-- 1 root root 65536 Aug 24 05:27 phys_index
  drwxr-xr-x 2 root root     0 Aug 24 05:27 power
  -r--r--r-- 1 root root 65536 Aug 24 05:27 removable
  -rw-r--r-- 1 root root 65536 Aug 24 05:27 state
  lrwxrwxrwx 1 root root     0 Aug 24 05:25 subsystem -> ../../../../bus/memory
  -rw-r--r-- 1 root root 65536 Aug 24 05:25 uevent
  -r--r--r-- 1 root root 65536 Aug 24 05:27 valid_zones

The same applies in the node directories, with a memory21 link in
both the node1 and node2 directories.

This is wrong but doesn't prevent the system from running.  However,
when one of these memory blocks is later hot-unplugged and then
hot-plugged, the system detects an inconsistency in the sysfs layout
and a BUG_ON() is raised:

  kernel BUG at /Users/laurent/src/linux-ppc/mm/memory_hotplug.c:1084!
  LE PAGE_SIZE=64K MMU=Hash SMP NR_CPUS=2048 NUMA pSeries
  Modules linked in: rpadlpar_io rpaphp pseries_rng rng_core vmx_crypto gf128mul binfmt_misc ip_tables x_tables xfs libcrc32c crc32c_vpmsum autofs4
  CPU: 8 PID: 10256 Comm: drmgr Not tainted 5.9.0-rc1+ #25
  Call Trace:
    add_memory_resource+0x23c/0x340 (unreliable)
    __add_memory+0x5c/0xf0
    dlpar_add_lmb+0x1b4/0x500
    dlpar_memory+0x1f8/0xb80
    handle_dlpar_errorlog+0xc0/0x190
    dlpar_store+0x198/0x4a0
    kobj_attr_store+0x30/0x50
    sysfs_kf_write+0x64/0x90
    kernfs_fop_write+0x1b0/0x290
    vfs_write+0xe8/0x290
    ksys_write+0xdc/0x130
    system_call_exception+0x160/0x270
    system_call_common+0xf0/0x27c

This has been seen on PowerPC LPAR.

The root cause of this issue is that when a node's memory is
registered, the range used can overlap another node's range; thus the
memory block is registered to multiple nodes in sysfs.

There are two issues here:

 (a) The sysfs memory and node layouts are broken due to these
     multiple links

 (b) The link errors in link_mem_sections() should not lead to a system
     panic.

To address (a), register_mem_sect_under_node() should not rely on the
system state to detect whether the link operation is triggered by a
hot-plug operation.  This is addressed by patches 1 and 2 of this
series.

Issue (b) will be addressed separately.

This patch (of 2):

The memmap_context enum is used to detect whether a memory operation
is due to a hot-add operation or is happening at boot time.

Make it generic to hotplug operations and rename it to
meminit_context.

This patch introduces no functional change.
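
After the rename, the enum reads (the values correspond to the former
MEMMAP_EARLY/MEMMAP_HOTPLUG):

  enum meminit_context {
          MEMINIT_EARLY,
          MEMINIT_HOTPLUG,
  };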

Suggested-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Laurent Dufour <ldufour@linux.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: "Rafael J . Wysocki" <rafael@kernel.org>
Cc: Nathan Lynch <nathanl@linux.ibm.com>
Cc: Scott Cheloha <cheloha@linux.ibm.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: <stable@vger.kernel.org>
Link: https://lkml.kernel.org/r/20200915094143.79181-1-ldufour@linux.ibm.com
Link: https://lkml.kernel.org/r/20200915132624.9723-1-ldufour@linux.ibm.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
3 years agoarch/x86/lib/usercopy_64.c: fix __copy_user_flushcache() cache writeback
Mikulas Patocka [Sat, 26 Sep 2020 04:19:24 +0000 (21:19 -0700)]
arch/x86/lib/usercopy_64.c: fix __copy_user_flushcache() cache writeback

If we copy fewer than 8 bytes and the destination crosses a cache
line, __copy_user_flushcache() would invalidate only the first cache
line.

This patch makes it invalidate the second cache line as well.
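
A sketch of the change, assuming the 5.9-era __copy_user_flushcache()
where the short-copy path cleaned a single byte's worth of cache:

  if (bytes < 8) {
          if (!IS_ALIGNED(dest, 4) || (bytes != 4))
                  clean_cache_range(dst, bytes);  /* was: (dst, 1) */
  }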

Fixes: 0aed55af88345b ("x86, uaccess: introduce copy_from_iter_flushcache for pmem / cache-bypass operations")
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Dan Williams <dan.j.wiilliams@intel.com>
Cc: Jan Kara <jack@suse.cz>
Cc: Jeff Moyer <jmoyer@redhat.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Toshi Kani <toshi.kani@hpe.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Matthew Wilcox <mawilcox@microsoft.com>
Cc: Ross Zwisler <ross.zwisler@linux.intel.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: <stable@vger.kernel.org>
Link: https://lkml.kernel.org/r/alpine.LRH.2.02.2009161451140.21915@file01.intranet.prod.int.rdu2.redhat.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
3 years agolib/memregion.c: include memregion.h
Jason Yan [Sat, 26 Sep 2020 04:19:21 +0000 (21:19 -0700)]
lib/memregion.c: include memregion.h

This addresses the following sparse warning:

  lib/memregion.c:8:5: warning: symbol 'memregion_alloc' was not declared. Should it be static?
  lib/memregion.c:14:6: warning: symbol 'memregion_free' was not declared. Should it be static?

Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: Jason Yan <yanaijie@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Link: https://lkml.kernel.org/r/20200921142852.875312-1-yanaijie@huawei.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
3 years agolib/string.c: implement stpcpy
Nick Desaulniers [Sat, 26 Sep 2020 04:19:18 +0000 (21:19 -0700)]
lib/string.c: implement stpcpy

LLVM recently implemented a "libcall optimization" that lowers calls
to `sprintf(dest, "%s", str)`, where the return value is used, into
`stpcpy(dest, str) - dest`.

This generally avoids the machinery involved in parsing format strings.
`stpcpy` is just like `strcpy` except it returns the pointer to the new
tail of `dest`.  This optimization was introduced into clang-12.
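
The classic implementation is tiny; a sketch in the usual textbook
form:

  char *stpcpy(char *__restrict__ dest, const char *__restrict__ src)
  {
          while ((*dest++ = *src++) != '\0')
                  /* nothing */;
          return --dest;
  }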

Implement this so that we don't observe linkage failures due to missing
symbol definitions for `stpcpy`.

This is similar to last year's fire drill with commit 5f074f3e192f
("lib/string.c: implement a basic bcmp").

The kernel is somewhere between a "freestanding" environment (no full
libc) and "hosted" environment (many symbols from libc exist with the
same type, function signature, and semantics).

As Peter Anvin notes, there's not really a great way to inform the
compiler that you're targeting a freestanding environment but would
like to opt in to some libcall optimizations (see pr/47280 below)
rather than opt out.

As Arvind notes, -fno-builtin-* behaves slightly differently between
GCC and Clang, and Clang is missing many __builtin_* definitions,
which I consider a bug in Clang and am working on fixing.

Masahiro summarizes the subtle distinction between compilers justly:
  To prevent transformation from foo() into bar(), there are two ways in
  Clang to do that; -fno-builtin-foo, and -fno-builtin-bar.  There is
  only one in GCC; -fno-builtin-foo.

(Any difference in that behavior in Clang is likely a bug from a missing
__builtin_* definition.)

Masahiro also notes:
  We want to disable optimization from foo() to bar(),
  but we may still benefit from the optimization from
  foo() into something else. If GCC implements the same transform, we
  would run into a problem because it is not -fno-builtin-bar, but
  -fno-builtin-foo that disables that optimization.

  In this regard, -fno-builtin-foo would be more future-proof than
  -fno-builtin-bar, but -fno-builtin-foo is still potentially overkill. We
  may want to prevent calls from foo() being optimized into calls to
  bar(), but we still may want other optimization on calls to foo().

It seems that compilers today don't quite provide the fine-grained
control over libcall optimizations that pseudo-freestanding
environments would prefer.

Finally, Kees notes that this interface is unsafe, so we should not
encourage its use.  As such, I've removed the declaration from any
header, but it still needs to be exported to avoid linkage errors in
modules.

Reported-by: Sami Tolvanen <samitolvanen@google.com>
Suggested-by: Andy Lavr <andy.lavr@gmail.com>
Suggested-by: Arvind Sankar <nivedita@alum.mit.edu>
Suggested-by: Joe Perches <joe@perches.com>
Suggested-by: Kees Cook <keescook@chromium.org>
Suggested-by: Masahiro Yamada <masahiroy@kernel.org>
Suggested-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Signed-off-by: Nick Desaulniers <ndesaulniers@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Tested-by: Nathan Chancellor <natechancellor@gmail.com>
Cc: <stable@vger.kernel.org>
Link: https://lkml.kernel.org/r/20200914161643.938408-1-ndesaulniers@google.com
Link: https://bugs.llvm.org/show_bug.cgi?id=47162
Link: https://bugs.llvm.org/show_bug.cgi?id=47280
Link: https://github.com/ClangBuiltLinux/linux/issues/1126
Link: https://man7.org/linux/man-pages/man3/stpcpy.3.html
Link: https://pubs.opengroup.org/onlinepubs/9699919799/functions/stpcpy.html
Link: https://reviews.llvm.org/D85963
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
3 years agomm/migrate: correct thp migration stats
Zi Yan [Sat, 26 Sep 2020 04:19:14 +0000 (21:19 -0700)]
mm/migrate: correct thp migration stats

PageTransHuge() returns true for both THP and hugetlb pages, so the
THP stats were counting both THP and hugetlb migrations.  Exclude
hugetlb migrations by setting the is_thp variable correctly.

Also clean up the THP handling code while we are at it.

Fixes: 1a5bae25e3cf ("mm/vmstat: add events for THP migration without split")
Signed-off-by: Zi Yan <ziy@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Daniel Jordan <daniel.m.jordan@oracle.com>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Link: https://lkml.kernel.org/r/20200917210413.1462975-1-zi.yan@sent.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
3 years agomm/gup: fix gup_fast with dynamic page table folding
Vasily Gorbik [Sat, 26 Sep 2020 04:19:10 +0000 (21:19 -0700)]
mm/gup: fix gup_fast with dynamic page table folding

Currently, to make sure that every page table entry is read just
once, the gup_fast walks perform READ_ONCE and pass the pXd value
down to the next gup_pXd_range function by value, e.g.:

  static int gup_pud_range(p4d_t p4d, unsigned long addr, unsigned long end,
                           unsigned int flags, struct page **pages, int *nr)
  ...
          pudp = pud_offset(&p4d, addr);

This function passes a reference on that local value copy to pXd_offset,
and might get the very same pointer in return.  This happens when the
level is folded (on most arches), and that pointer should not be
iterated.

On s390, due to the fact that each task might have a different 5-, 4-
or 3-level address translation, and hence different levels folded, the
logic is more complex, and a non-iterable pointer to a local copy
leads to severe problems.

Here is an example of what happens with gup_fast on s390, for a task
with 3-level paging, crossing a 2 GB pud boundary:

  // addr = 0x1007ffff000, end = 0x10080001000
  static int gup_pud_range(p4d_t p4d, unsigned long addr, unsigned long end,
                           unsigned int flags, struct page **pages, int *nr)
  {
        unsigned long next;
        pud_t *pudp;

        // pud_offset returns &p4d itself (a pointer to a value on stack)
        pudp = pud_offset(&p4d, addr);
        do {
                // on the second iteration this reads a "random" stack value
                pud_t pud = READ_ONCE(*pudp);

                // next = 0x10080000000, due to PUD_SIZE/MASK != PGDIR_SIZE/MASK on s390
                next = pud_addr_end(addr, end);
                ...
        } while (pudp++, addr = next, addr != end); // pudp++ iterating over stack

        return 1;
  }

This happens since s390 moved to common gup code with commit
d1874a0c2805 ("s390/mm: make the pxd_offset functions more robust") and
commit 1a42010cdc26 ("s390/mm: convert to the generic
get_user_pages_fast code").

s390 tried to mimic static level folding by changing pXd_offset
primitives to always calculate top level page table offset in pgd_offset
and just return the value passed when pXd_offset has to act as folded.

What is crucial for gup_fast and what has been overlooked is that
PxD_SIZE/MASK and thus pXd_addr_end should also change correspondingly.
And the latter is not possible with dynamic folding.

To fix the issue, in addition to the pXd values, pass the original
pXdp pointers down to the gup_pXd_range functions, and introduce
pXd_offset_lockless helpers, which take an additional pXd entry value
parameter.  This has already been discussed in

  https://lkml.kernel.org/r/20190418100218.0a4afd51@mschwideX1
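
For the pud level, the generic fallback of the new helper boils down
to (sketch; the extra value parameter lets a folded level keep
pointing at the caller's local copy instead of re-reading the table):

  #define pud_offset_lockless(p4dp, p4d, address) \
          pud_offset(&(p4d), address)

  /* gup_pud_range() then receives and uses the original p4dp: */
  pudp = pud_offset_lockless(p4dp, p4d, addr);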

Fixes: 1a42010cdc26 ("s390/mm: convert to the generic get_user_pages_fast code")
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Mike Rapoport <rppt@linux.ibm.com>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Jeff Dike <jdike@addtoit.com>
Cc: Richard Weinberger <richard@nod.at>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Heiko Carstens <hca@linux.ibm.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Claudio Imbrenda <imbrenda@linux.ibm.com>
Cc: <stable@vger.kernel.org> [5.2+]
Link: https://lkml.kernel.org/r/patch.git-943f1e5dcff2.your-ad-here.call-01599856292-ext-8676@work.hours
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
3 years agomm: memcontrol: fix missing suffix of workingset_restore
Muchun Song [Sat, 26 Sep 2020 04:19:05 +0000 (21:19 -0700)]
mm: memcontrol: fix missing suffix of workingset_restore

We forgot to add the suffix to the workingset_restore string, so fix it.

Also update the documentation in cgroup-v2.rst.
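
In sketch form, the fix turns the bare stat name into the suffixed
pair (entry layout simplified):

  -  "workingset_restore",
  +  "workingset_restore_anon",
  +  "workingset_restore_file",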

Fixes: 170b04b7ae49 ("mm/workingset: prepare the workingset detection infrastructure for anon LRU")
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Shakeel Butt <shakeelb@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Tejun Heo <tj@kernel.org>
Cc: Zefan Li <lizefan@huawei.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
Cc: Roman Gushchin <guro@fb.com>
Cc: Randy Dunlap <rdunlap@infradead.org>
Link: https://lkml.kernel.org/r/20200916100030.71698-1-songmuchun@bytedance.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
3 years agomm, THP, swap: fix allocating cluster for swapfile by mistake
Gao Xiang [Sat, 26 Sep 2020 04:19:01 +0000 (21:19 -0700)]
mm, THP, swap: fix allocating cluster for swapfile by mistake

SWP_FS is used to make swap_{read,write}page() go through the
filesystem, and it's only used for swap files over NFS.  So, for now,
!SWP_FS means non-NFS; it could be either file-backed or device-backed.
Something similar goes for the legacy SWP_FILE.

So in order to achieve the goal of the original patch, SWP_BLKDEV should
be used instead.
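
A sketch of the corrected check in get_swap_pages(), assuming the
5.9-era code:

  if (size == SWAPFILE_CLUSTER) {
          if (si->flags & SWP_BLKDEV)     /* was: !(si->flags & SWP_FS) */
                  n_ret = swap_alloc_cluster(si, swp_entries);
  }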

FS corruption can be observed with SSD device + XFS + fragmented
swapfile due to CONFIG_THP_SWAP=y.

I reproduced the issue with the following details:

Environment:

  QEMU + upstream kernel + buildroot + NVMe (2 GB)

Kernel config:

  CONFIG_BLK_DEV_NVME=y
  CONFIG_THP_SWAP=y

Some reproducible steps:

  mkfs.xfs -f /dev/nvme0n1
  mkdir /tmp/mnt
  mount /dev/nvme0n1 /tmp/mnt
  bs="32k"
  sz="1024m"    # doesn't matter too much, I also tried 16m
  xfs_io -f -c "pwrite -R -b $bs 0 $sz" -c "fdatasync" /tmp/mnt/sw
  xfs_io -f -c "pwrite -R -b $bs 0 $sz" -c "fdatasync" /tmp/mnt/sw
  xfs_io -f -c "pwrite -R -b $bs 0 $sz" -c "fdatasync" /tmp/mnt/sw
  xfs_io -f -c "pwrite -F -S 0 -b $bs 0 $sz" -c "fdatasync" /tmp/mnt/sw
  xfs_io -f -c "pwrite -R -b $bs 0 $sz" -c "fsync" /tmp/mnt/sw

  mkswap /tmp/mnt/sw
  swapon /tmp/mnt/sw

  stress --vm 2 --vm-bytes 600M   # doesn't matter too much as well

Symptoms:
 - FS corruption (e.g. checksum failure)
 - memory corruption at: 0xd2808010
 - segfault

Fixes: f0eea189e8e9 ("mm, THP, swap: Don't allocate huge cluster for file backed swap device")
Fixes: 38d8b4e6bdc8 ("mm, THP, swap: delay splitting THP during swap out")
Signed-off-by: Gao Xiang <hsiangkao@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: "Huang, Ying" <ying.huang@intel.com>
Reviewed-by: Yang Shi <shy828301@gmail.com>
Acked-by: Rafael Aquini <aquini@redhat.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Carlos Maiolino <cmaiolino@redhat.com>
Cc: Eric Sandeen <esandeen@redhat.com>
Cc: Dave Chinner <david@fromorbit.com>
Cc: <stable@vger.kernel.org>
Link: https://lkml.kernel.org/r/20200820045323.7809-1-hsiangkao@redhat.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
3 years agomm: slab: fix potential double free in ___cache_free
Shakeel Butt [Sat, 26 Sep 2020 14:13:41 +0000 (07:13 -0700)]
mm: slab: fix potential double free in ___cache_free

With commit 10befea91b61 ("mm: memcg/slab: use a single set of
kmem_caches for all allocations"), it became possible to call kfree()
from slabs_destroy().

The functions cache_flusharray() and do_drain() call slabs_destroy()
on the local CPU's array_cache without updating its size.  This
enables the kfree() call from slabs_destroy() to recursively call
cache_flusharray(), which can potentially call free_block() on the
same elements of the local CPU's array_cache, causing a double free
and memory corruption.

To fix the issue, simply update the local CPU's array_cache before
calling slabs_destroy().
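
A sketch of the reordering in cache_flusharray(), assuming the
5.9-era code:

  /* Shrink the local array_cache before slabs_destroy(), so a
   * recursive kfree() -> cache_flusharray() sees consistent state. */
  ac->avail -= batchcount;
  memmove(ac->entry, &(ac->entry[batchcount]),
          sizeof(void *) * ac->avail);
  slabs_destroy(cachep, &list);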

Fixes: 10befea91b61 ("mm: memcg/slab: use a single set of kmem_caches for all allocations")
Signed-off-by: Shakeel Butt <shakeelb@google.com>
Reviewed-by: Roman Gushchin <guro@fb.com>
Tested-by: Ming Lei <ming.lei@redhat.com>
Reported-by: kernel test robot <rong.a.chen@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Ted Ts'o <tytso@mit.edu>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
3 years agoDocumentation/llvm: Fix clang target examples
Florian Fainelli [Fri, 25 Sep 2020 15:21:14 +0000 (08:21 -0700)]
Documentation/llvm: Fix clang target examples

clang --target=<triple> is how we specify a particular toolchain
triple to be used; fix the two occurrences in the documentation.
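
For example (an illustrative invocation, not taken from the kernel
documentation):

  clang --target=aarch64-linux-gnu -c hello.c -o hello.o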

Fixes: fcf1b6a35c16 ("Documentation/llvm: add documentation on building w/ Clang/LLVM")
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Reviewed-by: Nathan Chancellor <natechancellor@gmail.com>
Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
3 years agoMerge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm
Linus Torvalds [Sat, 26 Sep 2020 00:15:19 +0000 (17:15 -0700)]
Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm

Pull more kvm fixes from Paolo Bonzini:
 "Five small fixes.

  The nested migration bug will be fixed with a better API in 5.10 or
  5.11; for now this is a fix that works with existing userspace but
  keeps the current ugly API"

* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm:
  KVM: SVM: Add a dedicated INVD intercept routine
  KVM: x86: Reset MMU context if guest toggles CR4.SMAP or CR4.PKE
  KVM: x86: fix MSR_IA32_TSC read for nested migration
  selftests: kvm: Fix assert failure in single-step test
  KVM: x86: VMX: Make smaller physical guest address space support user-configurable

3 years agoMerge tag 'mips_fixes_5.9_3' of git://git.kernel.org/pub/scm/linux/kernel/git/mips...
Linus Torvalds [Fri, 25 Sep 2020 22:24:04 +0000 (15:24 -0700)]
Merge tag 'mips_fixes_5.9_3' of git://git.kernel.org/pub/scm/linux/kernel/git/mips/linux

Pull MIPS fixes from Thomas Bogendoerfer:

 - fixed FP register access on Loongson-3

 - added missing 1074 cpu handling

 - fixed Loongson2ef build error

* tag 'mips_fixes_5.9_3' of git://git.kernel.org/pub/scm/linux/kernel/git/mips/linux:
  MIPS: BCM47XX: Remove the needless check with the 1074K
  MIPS: Add the missing 'CPU_1074K' into __get_cpu_type()
  MIPS: Loongson2ef: Disable Loongson MMI instructions
  MIPS: Loongson-3: Fix fp register access if MSA enabled

3 years agoMerge tag 'spi-fix-v5.9-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi
Linus Torvalds [Fri, 25 Sep 2020 22:21:54 +0000 (15:21 -0700)]
Merge tag 'spi-fix-v5.9-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi

Pull spi fixes from Mark Brown:
 "A small collection of driver specific fixes, the fsl-espi and bcm-qspi
  changes in particular have been causing breakage for users"

* tag 'spi-fix-v5.9-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi:
  spi: bcm-qspi: Fix probe regression on iProc platforms
  spi: fsl-dspi: fix use-after-free in remove path
  spi: fsl-espi: Only process interrupts for expected events
  spi: bcm2835: Make polling_limit_us static
  spi: spi-fsl-dspi: use XSPI mode instead of DMA for DPAA2 SoCs

3 years agoMerge tag 'regulator-fix-v5.9-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git...
Linus Torvalds [Fri, 25 Sep 2020 22:16:01 +0000 (15:16 -0700)]
Merge tag 'regulator-fix-v5.9-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/regulator

Pull regulator fix from Mark Brown:
 "A single fix for incorrect specification of some of the register
  fields on axp20x devices which would break voltage setting on affected
  systems"

* tag 'regulator-fix-v5.9-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/regulator:
  regulator: axp20x: fix LDO2/4 description

3 years agoMerge tag 'regmap-fix-v5.9-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git...
Linus Torvalds [Fri, 25 Sep 2020 22:11:24 +0000 (15:11 -0700)]
Merge tag 'regmap-fix-v5.9-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/regmap

Pull regmap fixes from Mark Brown:
 "Two issues here - one is a fix for use after free issues in the case
  where a regmap overrides its name using something dynamically
  generated, the other is that we weren't handling access checks
  non-incrementing I/O on registers within paged register regions
  correctly resulting in spurious errors.

  Both of these are quite rare but serious if they occur"

* tag 'regmap-fix-v5.9-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/regmap:
  regmap: fix page selection for noinc writes
  regmap: fix page selection for noinc reads
  regmap: debugfs: Add back in erroneously removed initialisation of ret
  regmap: debugfs: Fix handling of name string for debugfs init delays

3 years agoio_uring: ensure async buffered read-retry is setup properly
Jens Axboe [Fri, 25 Sep 2020 21:23:43 +0000 (15:23 -0600)]
io_uring: ensure async buffered read-retry is setup properly

A previous commit fixing up short reads botched the async retry path,
so we ended up going to worker threads more often than we should have.
Fix this up so retries work the way they were originally intended to.

Fixes: 227c0c9673d8 ("io_uring: internally retry short reads")
Reported-by: Hao_Xu <haoxu@linux.alibaba.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoMerge tag 'nfsd-5.9-2' of git://git.linux-nfs.org/projects/cel/cel-2.6
Linus Torvalds [Fri, 25 Sep 2020 17:46:11 +0000 (10:46 -0700)]
Merge tag 'nfsd-5.9-2' of git://git.linux-nfs.org/projects/cel/cel-2.6

Pull NFS server fix from Chuck Lever:
 "Fix incorrect calculation on platforms that implement
  flush_dcache_page()"

* tag 'nfsd-5.9-2' of git://git.linux-nfs.org/projects/cel/cel-2.6:
  SUNRPC: Fix svc_flush_dcache()

3 years agoMerge tag 'pm-5.9-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
Linus Torvalds [Fri, 25 Sep 2020 17:39:22 +0000 (10:39 -0700)]
Merge tag 'pm-5.9-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm

Pull power management fixes from Rafael Wysocki:
 "These fix more fallout of recent RCU-lockdep changes in CPU idle code
  and two devfreq issues.

  Specifics:

   - Export rcu_idle_{enter,exit} to modules to fix build issues
     introduced by recent RCU-lockdep fixes (Borislav Petkov)

   - Add missing return statement to a stub function in the ACPI
     processor driver to fix a build issue introduced by recent
     RCU-lockdep fixes (Rafael Wysocki)

   - Fix recently introduced suspicious RCU usage warnings in the PSCI
     cpuidle driver and drop stale comments regarding RCU_NONIDLE()
     usage from enter_s2idle_proper() (Ulf Hansson)

   - Fix error code path in the tegra30 devfreq driver (Dan Carpenter)

   - Add missing information to devfreq_summary debugfs (Chanwoo Choi)"

* tag 'pm-5.9-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm:
  ACPI: processor: Fix build for ARCH_APICTIMER_STOPS_ON_C3 unset
  PM / devfreq: tegra30: Disable clock on error in probe
  PM / devfreq: Add timer type to devfreq_summary debugfs
  cpuidle: Drop misleading comments about RCU usage
  cpuidle: psci: Fix suspicious RCU usage
  rcu/tree: Export rcu_idle_{enter,exit} to modules

3 years agoKVM: SVM: Add a dedicated INVD intercept routine
Tom Lendacky [Thu, 24 Sep 2020 18:41:57 +0000 (13:41 -0500)]
KVM: SVM: Add a dedicated INVD intercept routine

The INVD instruction intercept performs emulation, but emulation
can't be done on an SEV guest because the guest memory is encrypted.

Provide a dedicated intercept routine for the INVD intercept, and
since the instruction is emulated as a NOP, just skip it instead.
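
The resulting handler is essentially (sketch):

  static int invd_interception(struct vcpu_svm *svm)
  {
          /* Treat INVD as a NOP and just skip the instruction. */
          return kvm_skip_emulated_instruction(&svm->vcpu);
  }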

Fixes: 1654efcbc431 ("KVM: SVM: Add KVM_SEV_INIT command")
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Message-Id: <a0b9a19ffa7fef86a3cc700c7ea01cb2731e04e5.1600972918.git.thomas.lendacky@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agoMerge tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma
Linus Torvalds [Fri, 25 Sep 2020 16:49:19 +0000 (09:49 -0700)]
Merge tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma

Pull rdma fix from Jason Gunthorpe:
 "One fix for a bug that blktests hits when using rxe: tear down the CQ
  pool before waiting for all references to go away"

* tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma:
  RDMA/core: Fix ordering of CQ pool destruction