From f266f4fc56c3f6916efd292d51602d92867eaba2 Mon Sep 17 00:00:00 2001
From: Brendan Gregg
Date: Sun, 7 Feb 2016 12:22:50 -0800
Subject: [PATCH] more advice in the man page

---
 man/man8/fsslower.8        | 7 +++++++
 tools/fsslower_example.txt | 2 +-
 2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/man/man8/fsslower.8 b/man/man8/fsslower.8
index 11566ec..c360f7f 100644
--- a/man/man8/fsslower.8
+++ b/man/man8/fsslower.8
@@ -87,6 +87,13 @@ instrumented events using the bcc funccount tool, eg:
 # ./funccount.py -i 1 -r '^__vfs_(read|write)$'
 .PP
 This also costs overhead, but is somewhat less than fsslower.
+.PP
+If the overhead is prohibitive for your workload, I'd recommend moving
+down-stack a little from VFS into the file system functions (ext4, xfs, etc).
+Look for updates to bcc for specific file system tools that do this. The
+advantage of a per-file system approach is that we can trace post-cache,
+greatly reducing events and overhead. The disadvantage is needing custom
+tracing approaches for each different file system (whereas VFS is generic).
 .SH SOURCE
 This is from bcc.
 .IP
diff --git a/tools/fsslower_example.txt b/tools/fsslower_example.txt
index 9ff2617..e839435 100644
--- a/tools/fsslower_example.txt
+++ b/tools/fsslower_example.txt
@@ -97,7 +97,7 @@ TIME(s)  COMM           PID    D BYTES   LAT(ms) FILENAME
 2.977    supervise      1876   W 18         4.23 status.new
 
 This caught an individual I/O reaching 163.12 ms, for the "preconv" file. While
-the file system cache was flush, causing these to need to be read from disk,
+the file system cache was flushed, causing these to need to be read from disk,
 the duration here may not be entirely disk I/O: it can include file system
 locks, run queue latency, etc. These can be explored using other commands.
-- 
2.7.4
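
Note (not part of the patch): the down-stack approach the new man page text recommends amounts to swapping the VFS regex in the existing funccount example for a per-file-system one. A minimal sketch of that substitution follows; the ext4 symbol names used here are assumptions for illustration only — exact entry-point names vary by kernel version, so verify them against /proc/kallsyms before tracing.

```python
import re

# Hypothetical per-file-system pattern, narrowing the man page's VFS regex
# ('^__vfs_(read|write)$') to ext4 entry points. The symbol names below
# (ext4_file_open, ext4_file_write_iter) are illustrative assumptions --
# check /proc/kallsyms on your kernel before relying on them.
EXT4_PATTERN = r'^ext4_file_(open|write_iter)$'

def matches(symbol: str, pattern: str = EXT4_PATTERN) -> bool:
    """Return True if a kernel symbol name matches the tracing pattern."""
    return re.match(pattern, symbol) is not None

# The pattern would then be passed to bcc's funccount, analogous to the
# VFS example already in the man page:
#   # ./funccount.py -i 1 -r '^ext4_file_(open|write_iter)$'
```

Because these probes fire post-cache, the event rate (and hence overhead) should be far lower than instrumenting __vfs_read/__vfs_write, at the cost of an ext4-specific pattern.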