platform/kernel/linux-starfive.git
3 years ago  btrfs: use btrfs_read_node_slot in walk_down_tree
Josef Bacik [Thu, 5 Nov 2020 15:45:14 +0000 (10:45 -0500)]
btrfs: use btrfs_read_node_slot in walk_down_tree

We're open-coding btrfs_read_node_slot() here; replace it with the helper.

Reviewed-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years ago  btrfs: use btrfs_read_node_slot in replace_path
Josef Bacik [Thu, 5 Nov 2020 15:45:13 +0000 (10:45 -0500)]
btrfs: use btrfs_read_node_slot in replace_path

We're open-coding btrfs_read_node_slot() here; replace it with the helper.

Reviewed-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years ago  btrfs: use btrfs_read_node_slot in do_relocation
Josef Bacik [Thu, 5 Nov 2020 15:45:12 +0000 (10:45 -0500)]
btrfs: use btrfs_read_node_slot in do_relocation

We're open-coding btrfs_read_node_slot in do_relocation; replace this
with the proper helper.

Reviewed-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years ago  btrfs: use btrfs_read_node_slot in walk_down_reloc_tree
Josef Bacik [Thu, 5 Nov 2020 15:45:11 +0000 (10:45 -0500)]
btrfs: use btrfs_read_node_slot in walk_down_reloc_tree

We do not need to call read_tree_block() here; simply use the
btrfs_read_node_slot helper.

Reviewed-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years ago  btrfs: use btrfs_read_node_slot in btrfs_realloc_node
Josef Bacik [Thu, 5 Nov 2020 15:45:10 +0000 (10:45 -0500)]
btrfs: use btrfs_read_node_slot in btrfs_realloc_node

We have this open-coded nightmare in btrfs_realloc_node that does
the same thing that the normal read path does, which is to see if we
have the eb in memory already, and if not read it, and verify the eb is
uptodate.  Delete this open coding and simply use btrfs_read_node_slot.
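
As a rough illustration, the open-coded pattern and its replacement look
like this (a simplified sketch, not verbatim kernel code):

  /* before: open-coded lookup, read and verification */
  cur = find_extent_buffer(fs_info, blocknr);
  if (cur)
          uptodate = btrfs_buffer_uptodate(cur, gen, 0);
  if (!cur || !uptodate)
          cur = read_tree_block(fs_info, blocknr, gen, level - 1,
                                &first_key);
          /* plus the IS_ERR()/uptodate error handling */

  /* after: the helper does the lookup, read and verification */
  cur = btrfs_read_node_slot(parent, i);
  if (IS_ERR(cur))
          return PTR_ERR(cur);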

Reviewed-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years ago  btrfs: cleanup extent buffer readahead
Josef Bacik [Thu, 5 Nov 2020 15:45:09 +0000 (10:45 -0500)]
btrfs: cleanup extent buffer readahead

We're going to pass around more information when we allocate extent
buffers, in order to make how we do readahead cleaner.  Most of the
callers have the parent node that we're getting our blockptr from, with
the sole exception of relocation, which simply has the bytenr it wants
to read.

Add a helper that takes the current arguments that we need (bytenr and
gen), and add another helper for simply reading the slot out of a node.
In followup patches the helper that takes all the extra arguments will
be expanded, and the simpler helper won't need to have its arguments
adjusted.
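
The two helpers described above could look like this (signatures are a
sketch inferred from the description, not verbatim):

  /* readahead by bytenr/gen, for callers without a parent node */
  void btrfs_readahead_tree_block(struct btrfs_fs_info *fs_info,
                                  u64 bytenr, u64 gen, int level);

  /* readahead the block referenced by @slot of @node */
  void btrfs_readahead_node_child(struct extent_buffer *node, int slot);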

Reviewed-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years ago  btrfs: remove lockdep classes for the fs tree
Josef Bacik [Thu, 5 Nov 2020 15:45:08 +0000 (10:45 -0500)]
btrfs: remove lockdep classes for the fs tree

We have this weird problem where our lockdep class is set after we
read a tree block, which can race with concurrent readers and result in
erroneous lockdep errors.  We want to set the lockdep class at
allocation time if possible, but in certain cases we may not have the
actual root owner, such as with relocation or any backref lookups.  This
is only really a problem for reference counted trees, because all other
trees have their root reference set in their extent reference.  Remove
the fs tree specific lock class.  We still need to keep the reloc tree
one, as it's still reference counted, because replace_path will lock the
reloc tree and the destination tree, and if they're both set to
tree-<level> we'll have issues.

Reviewed-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years ago  btrfs: discard: reschedule work after sysfs param update
Pavel Begunkov [Wed, 4 Nov 2020 09:45:54 +0000 (09:45 +0000)]
btrfs: discard: reschedule work after sysfs param update

After sysfs updates discard's iops_limit or kbps_limit it also needs to
adjust the current timer through rescheduling, otherwise the discard work
may wait for a long time for the previous timer to expire or be bumped by
someone else.

Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years ago  btrfs: don't miss async discards after scheduled work override
Pavel Begunkov [Wed, 4 Nov 2020 09:45:53 +0000 (09:45 +0000)]
btrfs: don't miss async discards after scheduled work override

If btrfs_discard_schedule_work() is called with override=true, it sets
the delay anew regardless of how much time is left until the timer should
have fired. If delays are long (that can happen, for example, with a low
kbps_limit), they might get constantly overridden without having a
chance to run the discard work.

Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years ago  btrfs: discard: store async discard delay as ns not as jiffies
Pavel Begunkov [Wed, 4 Nov 2020 09:45:52 +0000 (09:45 +0000)]
btrfs: discard: store async discard delay as ns not as jiffies

Most delay calculations are done in ns or ms, so store
discard_ctl->delay in ns and convert the final delay to jiffies only at
the end.

Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years ago  btrfs: discard: speed up async discard up to iops_limit
Pavel Begunkov [Wed, 4 Nov 2020 09:45:51 +0000 (09:45 +0000)]
btrfs: discard: speed up async discard up to iops_limit

Instead of using iops_limit only for cutting off extremes, calculate the
discard delay directly from it, so it closely follows iops_limit and
doesn't under-discard even though quotas are not saturated.

The iops limit could be hit more often in some cases and could increase
the discard rate.
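
The core of the idea, as a simplified sketch (names follow the patch
description, but the exact code may differ):

  u32 iops_limit = READ_ONCE(discard_ctl->iops_limit);
  u64 delay = 0;

  if (iops_limit)
          /* e.g. iops_limit == 10 -> one discard every 100 ms */
          delay = MSEC_PER_SEC / iops_limit;

  /* the result is still clamped to the allowed min/max delay */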

Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years ago  btrfs: scrub: refactor scrub_find_csum()
Qu Wenruo [Tue, 3 Nov 2020 13:31:04 +0000 (21:31 +0800)]
btrfs: scrub: refactor scrub_find_csum()

The function scrub_find_csum() locates the csum for bytenr @logical in
sctx->csum_list.

However it lacks comments to explain things like how the csum_list is
organized and why we need to drop csum ranges which are before us.

Refactor the function by:

- Add more comments explaining the behavior
- Add comment explaining why we need to drop the csum range
- Put the csum copy in the main loop
  This is mostly for the incoming patches to make scrub_find_csum() able
  to find multiple checksums.
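
A condensed sketch of the resulting loop (simplified; drop_csum_range is
a stand-in name for the range-dropping step):

  while (!list_empty(&sctx->csum_list)) {
          sum = list_first_entry(&sctx->csum_list,
                                 struct btrfs_ordered_sum, list);
          /* the whole range is after us, no csum for @logical */
          if (sum->bytenr > logical)
                  return 0;
          /* range ends before us, it can never match, drop it */
          if (sum->bytenr + sum->len <= logical) {
                  drop_csum_range(sctx, sum);
                  continue;
          }
          /* @logical is inside this range, copy the csum out */
          index = (logical - sum->bytenr) >> fs_info->sectorsize_bits;
          memcpy(csum, sum->sums + index * fs_info->csum_size,
                 fs_info->csum_size);
          return 1;
  }
  return 0;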

Signed-off-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years ago  btrfs: scrub: remove the force parameter from scrub_pages
Qu Wenruo [Tue, 3 Nov 2020 13:31:02 +0000 (21:31 +0800)]
btrfs: scrub: remove the force parameter from scrub_pages

The @force parameter for scrub_pages() indicates whether we want to
force bio submission.  Currently it's only used for the super block, and
it can be easily determined from the @flags, so we can remove the
parameter.

Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years ago  btrfs: scrub: distinguish scrub page from regular page
Qu Wenruo [Tue, 3 Nov 2020 13:31:01 +0000 (21:31 +0800)]
btrfs: scrub: distinguish scrub page from regular page

There are several call sites where we declare something like
"struct scrub_page *page".

This is confusing as we also use regular pages in this code; rename the
variable to 'spage' where applicable.

Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years ago  btrfs: pass bvec to csum_dirty_buffer instead of page
Qu Wenruo [Tue, 3 Nov 2020 13:30:49 +0000 (21:30 +0800)]
btrfs: pass bvec to csum_dirty_buffer instead of page

Currently csum_dirty_buffer() uses the page to grab the extent buffer,
but that only works for the sector size == PAGE_SIZE case.

For subpage we need page + page_offset to grab the extent buffer.

Reviewed-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years ago  btrfs: extract extent buffer verification from btrfs_validate_metadata_buffer()
Qu Wenruo [Tue, 3 Nov 2020 13:30:48 +0000 (21:30 +0800)]
btrfs: extract extent buffer verification from btrfs_validate_metadata_buffer()

Currently btrfs_validate_metadata_buffer() only needs to handle one
extent buffer as currently one page maps to at most one extent buffer.

For the incoming subpage support, we need to extend this so that one
page can contain multiple extent buffers.

Split the function so we can call validate_extent_buffer on extent
buffers independently.

Reviewed-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years ago  btrfs: make csum_tree_block() handle node smaller than page
Qu Wenruo [Tue, 3 Nov 2020 13:30:47 +0000 (21:30 +0800)]
btrfs: make csum_tree_block() handle node smaller than page

For subpage size support, metadata blocks of nodesize are smaller than
one page and this needs to be handled when calculating the checksum.

The checksummed start and length need to be adjusted but only for the
first page:

- start is simply offset in the page

- length is nodesize (subpage) or PAGE_SIZE for all other cases
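
A sketch of the first-page handling (simplified, assumed form):

  /* part of the block that lives in the first page */
  const int first_page_part = min_t(u32, PAGE_SIZE, fs_info->nodesize);
  u8 *kaddr = page_address(buf->pages[0]) + offset_in_page(buf->start);

  /* the block's first BTRFS_CSUM_SIZE bytes hold the csum, skip them */
  crypto_shash_update(shash, kaddr + BTRFS_CSUM_SIZE,
                      first_page_part - BTRFS_CSUM_SIZE);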

Reviewed-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Goldwyn Rodrigues <rgoldwyn@suse.com>
Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years ago  btrfs: grab fs_info from extent_buffer in btrfs_mark_buffer_dirty
Qu Wenruo [Tue, 3 Nov 2020 13:30:46 +0000 (21:30 +0800)]
btrfs: grab fs_info from extent_buffer in btrfs_mark_buffer_dirty

Since commit f28491e0a6c4 ("Btrfs: move the extent buffer radix tree into
the fs_info"), fs_info can be grabbed from extent_buffer directly.

Reviewed-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years ago  btrfs: make buffer_radix take sector size units
Qu Wenruo [Wed, 21 Oct 2020 06:25:05 +0000 (14:25 +0800)]
btrfs: make buffer_radix take sector size units

For subpage sector size support, one page can contain multiple tree
blocks. The entries cannot be based on page size and the index must be
derived from the sectorsize. There is no change for the page size ==
sector size case.
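
In other words (a sketch):

  /* before: keyed by page index */
  eb = radix_tree_lookup(&fs_info->buffer_radix, start >> PAGE_SHIFT);

  /* after: keyed by sector index, works for subpage too */
  eb = radix_tree_lookup(&fs_info->buffer_radix,
                         start >> fs_info->sectorsize_bits);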

Reviewed-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years ago  btrfs: assert page mapping lock in attach_extent_buffer_page
Qu Wenruo [Wed, 21 Oct 2020 06:25:02 +0000 (14:25 +0800)]
btrfs: assert page mapping lock in attach_extent_buffer_page

When calling attach_extent_buffer_page(), either we're attaching
anonymous pages, called from btrfs_clone_extent_buffer(),
or we're attaching btree inode pages, called from alloc_extent_buffer().

For the latter case, we should hold page->mapping->private_lock to avoid
parallel changes to page->private.
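
The assertion could be as simple as (a sketch of the idea):

  /* only btree inode pages have a mapping attached at this point */
  if (page->mapping)
          lockdep_assert_held(&page->mapping->private_lock);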

Reviewed-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years ago  btrfs: protect fs_info->caching_block_groups by block_group_cache_lock
Josef Bacik [Fri, 23 Oct 2020 13:58:11 +0000 (09:58 -0400)]
btrfs: protect fs_info->caching_block_groups by block_group_cache_lock

I got the following lockdep splat

  ======================================================
  WARNING: possible circular locking dependency detected
  5.9.0+ #101 Not tainted
  ------------------------------------------------------
  btrfs-cleaner/3445 is trying to acquire lock:
  ffff89dbec39ab48 (btrfs-root-00){++++}-{3:3}, at: __btrfs_tree_read_lock+0x32/0x170

  but task is already holding lock:
  ffff89dbeaf28a88 (&fs_info->commit_root_sem){++++}-{3:3}, at: btrfs_find_all_roots+0x41/0x80

  which lock already depends on the new lock.

  the existing dependency chain (in reverse order) is:

  -> #2 (&fs_info->commit_root_sem){++++}-{3:3}:
 down_write+0x3d/0x70
 btrfs_cache_block_group+0x2d5/0x510
 find_free_extent+0xb6e/0x12f0
 btrfs_reserve_extent+0xb3/0x1b0
 btrfs_alloc_tree_block+0xb1/0x330
 alloc_tree_block_no_bg_flush+0x4f/0x60
 __btrfs_cow_block+0x11d/0x580
 btrfs_cow_block+0x10c/0x220
 commit_cowonly_roots+0x47/0x2e0
 btrfs_commit_transaction+0x595/0xbd0
 sync_filesystem+0x74/0x90
 generic_shutdown_super+0x22/0x100
 kill_anon_super+0x14/0x30
 btrfs_kill_super+0x12/0x20
 deactivate_locked_super+0x36/0xa0
 cleanup_mnt+0x12d/0x190
 task_work_run+0x5c/0xa0
 exit_to_user_mode_prepare+0x1df/0x200
 syscall_exit_to_user_mode+0x54/0x280
 entry_SYSCALL_64_after_hwframe+0x44/0xa9

  -> #1 (&space_info->groups_sem){++++}-{3:3}:
 down_read+0x40/0x130
 find_free_extent+0x2ed/0x12f0
 btrfs_reserve_extent+0xb3/0x1b0
 btrfs_alloc_tree_block+0xb1/0x330
 alloc_tree_block_no_bg_flush+0x4f/0x60
 __btrfs_cow_block+0x11d/0x580
 btrfs_cow_block+0x10c/0x220
 commit_cowonly_roots+0x47/0x2e0
 btrfs_commit_transaction+0x595/0xbd0
 sync_filesystem+0x74/0x90
 generic_shutdown_super+0x22/0x100
 kill_anon_super+0x14/0x30
 btrfs_kill_super+0x12/0x20
 deactivate_locked_super+0x36/0xa0
 cleanup_mnt+0x12d/0x190
 task_work_run+0x5c/0xa0
 exit_to_user_mode_prepare+0x1df/0x200
 syscall_exit_to_user_mode+0x54/0x280
 entry_SYSCALL_64_after_hwframe+0x44/0xa9

  -> #0 (btrfs-root-00){++++}-{3:3}:
 __lock_acquire+0x1167/0x2150
 lock_acquire+0xb9/0x3d0
 down_read_nested+0x43/0x130
 __btrfs_tree_read_lock+0x32/0x170
 __btrfs_read_lock_root_node+0x3a/0x50
 btrfs_search_slot+0x614/0x9d0
 btrfs_find_root+0x35/0x1b0
 btrfs_read_tree_root+0x61/0x120
 btrfs_get_root_ref+0x14b/0x600
 find_parent_nodes+0x3e6/0x1b30
 btrfs_find_all_roots_safe+0xb4/0x130
 btrfs_find_all_roots+0x60/0x80
 btrfs_qgroup_trace_extent_post+0x27/0x40
 btrfs_add_delayed_data_ref+0x3fd/0x460
 btrfs_free_extent+0x42/0x100
 __btrfs_mod_ref+0x1d7/0x2f0
 walk_up_proc+0x11c/0x400
 walk_up_tree+0xf0/0x180
 btrfs_drop_snapshot+0x1c7/0x780
 btrfs_clean_one_deleted_snapshot+0xfb/0x110
 cleaner_kthread+0xd4/0x140
 kthread+0x13a/0x150
 ret_from_fork+0x1f/0x30

  other info that might help us debug this:

  Chain exists of:
    btrfs-root-00 --> &space_info->groups_sem --> &fs_info->commit_root_sem

   Possible unsafe locking scenario:

 CPU0                    CPU1
 ----                    ----
    lock(&fs_info->commit_root_sem);
 lock(&space_info->groups_sem);
 lock(&fs_info->commit_root_sem);
    lock(btrfs-root-00);

   *** DEADLOCK ***

  3 locks held by btrfs-cleaner/3445:
   #0: ffff89dbeaf28838 (&fs_info->cleaner_mutex){+.+.}-{3:3}, at: cleaner_kthread+0x6e/0x140
   #1: ffff89dbeb6c7640 (sb_internal){.+.+}-{0:0}, at: start_transaction+0x40b/0x5c0
   #2: ffff89dbeaf28a88 (&fs_info->commit_root_sem){++++}-{3:3}, at: btrfs_find_all_roots+0x41/0x80

  stack backtrace:
  CPU: 0 PID: 3445 Comm: btrfs-cleaner Not tainted 5.9.0+ #101
  Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.13.0-2.fc32 04/01/2014
  Call Trace:
   dump_stack+0x8b/0xb0
   check_noncircular+0xcf/0xf0
   __lock_acquire+0x1167/0x2150
   ? __bfs+0x42/0x210
   lock_acquire+0xb9/0x3d0
   ? __btrfs_tree_read_lock+0x32/0x170
   down_read_nested+0x43/0x130
   ? __btrfs_tree_read_lock+0x32/0x170
   __btrfs_tree_read_lock+0x32/0x170
   __btrfs_read_lock_root_node+0x3a/0x50
   btrfs_search_slot+0x614/0x9d0
   ? find_held_lock+0x2b/0x80
   btrfs_find_root+0x35/0x1b0
   ? do_raw_spin_unlock+0x4b/0xa0
   btrfs_read_tree_root+0x61/0x120
   btrfs_get_root_ref+0x14b/0x600
   find_parent_nodes+0x3e6/0x1b30
   btrfs_find_all_roots_safe+0xb4/0x130
   btrfs_find_all_roots+0x60/0x80
   btrfs_qgroup_trace_extent_post+0x27/0x40
   btrfs_add_delayed_data_ref+0x3fd/0x460
   btrfs_free_extent+0x42/0x100
   __btrfs_mod_ref+0x1d7/0x2f0
   walk_up_proc+0x11c/0x400
   walk_up_tree+0xf0/0x180
   btrfs_drop_snapshot+0x1c7/0x780
   ? btrfs_clean_one_deleted_snapshot+0x73/0x110
   btrfs_clean_one_deleted_snapshot+0xfb/0x110
   cleaner_kthread+0xd4/0x140
   ? btrfs_alloc_root+0x50/0x50
   kthread+0x13a/0x150
   ? kthread_create_worker_on_cpu+0x40/0x40
   ret_from_fork+0x1f/0x30

while testing another lockdep fix.  This happens because we're using the
commit_root_sem to protect fs_info->caching_block_groups, which creates
a groups_sem -> commit_root_sem dependency.  This is problematic because
we will allocate blocks while holding tree roots locked.  Fix this by
making the list itself protected by the fs_info->block_group_cache_lock.

Reviewed-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years ago  btrfs: load free space cache asynchronously
Josef Bacik [Fri, 23 Oct 2020 13:58:10 +0000 (09:58 -0400)]
btrfs: load free space cache asynchronously

While documenting the usage of the commit_root_sem, I noticed that we do
not actually take the commit_root_sem in the case of the free space
cache.  This is problematic because we're supposed to hold that sem
while we're reading the commit roots, which is what we do for the free
space cache.

The reason I did it inline when I originally wrote the code was because
there's the case of unpinning where we need to make sure that the free
space cache is loaded if we're going to use the free space cache.  But
we can accomplish the same thing by simply waiting for the cache to be
loaded.

Rework this code to load the free space cache asynchronously.  This
allows us to greatly cleanup the caching code because now it's all
shared by the various caching methods.  We also are now in a position to
have the commit_root semaphore held while we're loading the free space
cache.  And finally our modification of ->last_byte_to_unpin is removed
because it can be handled in the proper way on commit.

Some care must be taken when replaying the log, as we expect that the
free space cache will be read entirely before we start excluding space
for replay; otherwise this could lead to overwriting space during
replay.

Reviewed-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years ago  btrfs: load the free space cache inode extents from commit root
Josef Bacik [Fri, 23 Oct 2020 13:58:09 +0000 (09:58 -0400)]
btrfs: load the free space cache inode extents from commit root

Historically we've allowed recursive locking specifically for the free
space inode.  This is because we are only doing reads and know that it's
safe.  However we don't actually need this feature, we can get away with
reading the commit root for the extents.  In fact if we want to allow
asynchronous loading of the free space cache we have to use the commit
root, otherwise we will deadlock.

Switch to using the commit root for the file extents.  These are only
read at load time, and are replaced as soon as we start writing the
cache out to disk.  The cache is never read again, so this is
legitimate.  This matches what we do for the inode itself, as we read
that from the commit root as well.

Reviewed-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years ago  btrfs: load free space cache into a temporary ctl
Josef Bacik [Fri, 23 Oct 2020 13:58:08 +0000 (09:58 -0400)]
btrfs: load free space cache into a temporary ctl

The free space cache has been special in that we would load it right
away instead of farming the work off to a worker thread.  This resulted
in some weirdness that had to be taken into account, namely that if we
ever found a block group being cached the fast way we had to wait for it
to finish, because we could get the cache before it had been validated
and we might throw the cache away.

To handle this particular case instead create a temporary
btrfs_free_space_ctl to load the free space cache into.  Then once we've
validated that it makes sense, copy its contents into the actual
block_group->free_space_ctl.  This allows us to avoid the problem of
needing to wait for the caching to complete; we can clean up the discard
extent handling stuff in __load_free_space_cache, and we no longer need
to do the merge_space_tree() because the space is added one by one into
the real free_space_ctl.  This will allow further reworks of how we
handle loading the free space cache.
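
The shape of the change, as a simplified sketch (copy_free_space_cache
stands in for the entry-by-entry copy step):

  struct btrfs_free_space_ctl tmp_ctl = {};

  btrfs_init_free_space_ctl(block_group, &tmp_ctl);
  ret = __load_free_space_cache(fs_root, inode, &tmp_ctl, path, offset);
  if (ret == 1)
          /* the cache checked out, move entries into the real ctl */
          ret = copy_free_space_cache(block_group, &tmp_ctl);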

Reviewed-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years ago  btrfs: cleanup btrfs_discard_update_discardable usage
Josef Bacik [Fri, 23 Oct 2020 13:58:07 +0000 (09:58 -0400)]
btrfs: cleanup btrfs_discard_update_discardable usage

This passes in the block_group and the free_space_ctl, but we can get
this from the block group itself.  Part of this is because we call it
from __load_free_space_cache, which can be called for the inode cache as
well.

Move that call into the block group specific load section, wrap it in
the right lock that we need for the assertion (but otherwise this is
safe without the lock because this happens in single-thread context).

Fix up the arguments to only take the block group.  Add a lockdep_assert
as well for good measure to make sure we don't mess up the locking
again.

Reviewed-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years ago  btrfs: explicitly protect ->last_byte_to_unpin in unpin_extent_range
Josef Bacik [Fri, 23 Oct 2020 13:58:06 +0000 (09:58 -0400)]
btrfs: explicitly protect ->last_byte_to_unpin in unpin_extent_range

Currently unpin_extent_range happens in the transaction commit context,
so we are protected from ->last_byte_to_unpin changing while we're
unpinning, because any new transactions would have to wait for us to
complete before modifying ->last_byte_to_unpin.

However in the future we may want to change how this works, for instance
with async unpinning or other such TODO items.  To prepare for that
future explicitly protect ->last_byte_to_unpin with the commit_root_sem
so we are sure it won't change while we're doing our work.

Reviewed-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years ago  btrfs: update last_byte_to_unpin in switch_commit_roots
Josef Bacik [Fri, 23 Oct 2020 13:58:05 +0000 (09:58 -0400)]
btrfs: update last_byte_to_unpin in switch_commit_roots

While writing an explanation for the need of the commit_root_sem for
btrfs_prepare_extent_commit, I realized we have a slight hole that could
result in leaked space if we have to do the old style caching.  Consider
the following scenario

 commit root
 +----+----+----+----+----+----+----+
 |\\\\|    |\\\\|\\\\|    |\\\\|\\\\|
 +----+----+----+----+----+----+----+
 0    1    2    3    4    5    6    7

 new commit root
 +----+----+----+----+----+----+----+
 |    |    |    |\\\\|    |    |\\\\|
 +----+----+----+----+----+----+----+
 0    1    2    3    4    5    6    7

Prior to this patch, we run btrfs_prepare_extent_commit, which updates
the last_byte_to_unpin, and then we subsequently run
switch_commit_roots.  In this example lets assume that
caching_ctl->progress == 1 at btrfs_prepare_extent_commit() time, which
means that cache->last_byte_to_unpin == 1.  Then we go and do the
switch_commit_roots(), but in the meantime the caching thread has made
some more progress, because we drop the commit_root_sem and re-acquired
it.  Now caching_ctl->progress == 3.  We swap out the commit root and
carry on to unpin.

The race can happen like:

  1) The caching thread was running using the old commit root when it
     found the extent for [2, 3);

  2) Then it released the commit_root_sem because it was in the last
     item of a leaf and the semaphore was contended, and set ->progress
     to 3 (value of 'last'), as the last extent item in the current leaf
     was for the extent for range [2, 3);

  3) Next time it gets the commit_root_sem, will start using the new
     commit root and search for a key with offset 3, so it never finds
     the hole for [2, 3).

  So the caching thread never saw [2, 3) as free space in any of the
  commit roots, and by the time finish_extent_commit() was called for
  the range [0, 3), ->last_byte_to_unpin was 1, so it only returned the
  subrange [0, 1) to the free space cache, skipping [2, 3).

In the unpin code we have last_byte_to_unpin == 1, so we unpin [0,1),
but do not unpin [2,3).  However because caching_ctl->progress == 3 we
do not see the newly freed section of [2,3), and thus do not add it to
our free space cache.  This results in us missing a chunk of free space
in memory (on disk too, unless we have a power failure before writing
the free space cache to disk).

Fix this by making sure the ->last_byte_to_unpin is set at the same time
that we swap the commit roots, this ensures that we will always be
consistent.

CC: stable@vger.kernel.org # 5.8+
Reviewed-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
[ update changelog with Filipe's review comments ]
Signed-off-by: David Sterba <dsterba@suse.com>
3 years ago  btrfs: do not shorten unpin len for caching block groups
Josef Bacik [Fri, 23 Oct 2020 13:58:04 +0000 (09:58 -0400)]
btrfs: do not shorten unpin len for caching block groups

While fixing up our ->last_byte_to_unpin locking I noticed that we will
shorten len based on ->last_byte_to_unpin if we're caching when we're
adding back the free space.  This is correct for the free space, as we
cannot unpin more than ->last_byte_to_unpin; however, we use len to
adjust the ->bytes_pinned counters and such, which need to track the
actual pinned usage.  This could result in
WARN_ON(space_info->bytes_pinned) triggering at unmount time.

Fix this by using a local variable for the amount to add to free space
cache, and leave len untouched in this case.

CC: stable@vger.kernel.org # 5.4+
Reviewed-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years ago  btrfs: reorder extent buffer members for better packing
David Sterba [Thu, 29 Oct 2020 14:33:45 +0000 (15:33 +0100)]
btrfs: reorder extent buffer members for better packing

After the rwsem replaced the tree lock implementation, the extent buffer
got smaller but left some holes behind.  By changing the log_index type
and reordering, we can squeeze the size further to 240 bytes, measured
on a release config on x86_64.  Log_index spans only 3 values and needs
to be signed.

Before:

struct extent_buffer {
        u64                        start;                /*     0     8 */
        long unsigned int          len;                  /*     8     8 */
        long unsigned int          bflags;               /*    16     8 */
        struct btrfs_fs_info *     fs_info;              /*    24     8 */
        spinlock_t                 refs_lock;            /*    32     4 */
        atomic_t                   refs;                 /*    36     4 */
        atomic_t                   io_pages;             /*    40     4 */
        int                        read_mirror;          /*    44     4 */
        struct callback_head       callback_head __attribute__((__aligned__(8))); /*    48    16 */
        /* --- cacheline 1 boundary (64 bytes) --- */
        pid_t                      lock_owner;           /*    64     4 */
        bool                       lock_recursed;        /*    68     1 */

        /* XXX 3 bytes hole, try to pack */

        struct rw_semaphore        lock;                 /*    72    40 */
        short int                  log_index;            /*   112     2 */

        /* XXX 6 bytes hole, try to pack */

        struct page *              pages[16];            /*   120   128 */

        /* size: 248, cachelines: 4, members: 14 */
        /* sum members: 239, holes: 2, sum holes: 9 */
        /* forced alignments: 1 */
        /* last cacheline: 56 bytes */
} __attribute__((__aligned__(8)));

After:

struct extent_buffer {
        u64                        start;                /*     0     8 */
        long unsigned int          len;                  /*     8     8 */
        long unsigned int          bflags;               /*    16     8 */
        struct btrfs_fs_info *     fs_info;              /*    24     8 */
        spinlock_t                 refs_lock;            /*    32     4 */
        atomic_t                   refs;                 /*    36     4 */
        atomic_t                   io_pages;             /*    40     4 */
        int                        read_mirror;          /*    44     4 */
        struct callback_head       callback_head __attribute__((__aligned__(8))); /*    48    16 */
        /* --- cacheline 1 boundary (64 bytes) --- */
        pid_t                      lock_owner;           /*    64     4 */
        bool                       lock_recursed;        /*    68     1 */
        s8                         log_index;            /*    69     1 */

        /* XXX 2 bytes hole, try to pack */

        struct rw_semaphore        lock;                 /*    72    40 */
        struct page *              pages[16];            /*   112   128 */

        /* size: 240, cachelines: 4, members: 14 */
        /* sum members: 238, holes: 1, sum holes: 2 */
        /* forced alignments: 1 */
        /* last cacheline: 48 bytes */
} __attribute__((__aligned__(8)));

Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years ago  btrfs: locking: rip out path->leave_spinning
Josef Bacik [Thu, 20 Aug 2020 15:46:11 +0000 (11:46 -0400)]
btrfs: locking: rip out path->leave_spinning

We no longer distinguish between blocking and spinning, so rip out all
this code.

Signed-off-by: Josef Bacik <josef@toxicpanda.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years ago  btrfs: locking: remove all the blocking helpers
Josef Bacik [Thu, 20 Aug 2020 15:46:10 +0000 (11:46 -0400)]
btrfs: locking: remove all the blocking helpers

Now that we're using a rw_semaphore we no longer need to indicate if a
lock is blocking or not, nor do we need to flip the entire path from
blocking to spinning.  Remove these helpers and all the places they are
called.

Signed-off-by: Josef Bacik <josef@toxicpanda.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years ago  btrfs: scrub: remove local copy of csum_size from context
David Sterba [Tue, 30 Jun 2020 15:44:49 +0000 (17:44 +0200)]
btrfs: scrub: remove local copy of csum_size from context

The context structure unnecessarily stores copy of the checksum size,
that can be now easily obtained from fs_info.

Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years ago  btrfs: check integrity: remove local copy of csum_size
David Sterba [Tue, 30 Jun 2020 15:42:23 +0000 (17:42 +0200)]
btrfs: check integrity: remove local copy of csum_size

The state structure unnecessarily stores copy of the checksum size, that
can be now easily obtained from fs_info.

Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years ago  btrfs: remove unnecessary local variables for checksum size
David Sterba [Tue, 30 Jun 2020 16:04:02 +0000 (18:04 +0200)]
btrfs: remove unnecessary local variables for checksum size

Remove local variable that is then used just once and replace it with
fs_info::csum_size.

Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years ago  btrfs: switch cached fs_info::csum_size from u16 to u32
David Sterba [Thu, 2 Jul 2020 09:27:30 +0000 (11:27 +0200)]
btrfs: switch cached fs_info::csum_size from u16 to u32

The fs_info value is 32bit, so switch the local u16 variables as well.
This leads to better assembly code generated due to movzwl.

This simple change will shave some bytes on x86_64 and release config:

   text    data     bss     dec     hex filename
1090000   17980   14912 1122892  11224c pre/btrfs.ko
1089794   17980   14912 1122686  11217e post/btrfs.ko

DELTA: -206

Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years ago  btrfs: use cached value of fs_info::csum_size everywhere
David Sterba [Tue, 30 Jun 2020 00:01:31 +0000 (02:01 +0200)]
btrfs: use cached value of fs_info::csum_size everywhere

btrfs_get_16 shows up in the system performance profiles (it's the
helper to read 16bit values from on-disk structures). This is partially
because of the checksum size that's frequently read along with data
reads/writes; other u16 uses are for item sizes or directory entries.

Replace all calls to btrfs_super_csum_size by the cached value from
fs_info.

Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Reviewed-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years ago  btrfs: precalculate checksums per leaf once
David Sterba [Thu, 2 Jul 2020 08:54:11 +0000 (10:54 +0200)]
btrfs: precalculate checksums per leaf once

btrfs_csum_bytes_to_leaves shows up in system profiles, which makes it a
candidate for optimizations. After the 64bit division has been replaced
by shift, there's still a calculation done each time the function is
called: checksums per leaf.

As this is a constant value for the entire filesystem lifetime, we
can calculate it once at mount time and reuse it. This also allows us to
reduce the division to 64bit/32bit as we know the constant will always
fit the 32bit type.

Replace the open-coded rounding up with a macro that internally handles
the 64bit division and as it's now a short function, make it static
inline (slight code increase, slight stack usage reduction).
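
A sketch of the result (the helper's assumed form, not verbatim):

  /* in the fs_info setup at mount time */
  fs_info->csums_per_leaf = BTRFS_MAX_ITEM_SIZE(fs_info) /
                            fs_info->csum_size;

  /* the helper then becomes a short static inline */
  static inline u64 btrfs_csum_bytes_to_leaves(
                  const struct btrfs_fs_info *fs_info, u64 csum_bytes)
  {
          const u64 num_csums = csum_bytes >> fs_info->sectorsize_bits;

          return DIV_ROUND_UP_ULL(num_csums, fs_info->csums_per_leaf);
  }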

Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years ago  btrfs: store precalculated csum_size in fs_info
David Sterba [Thu, 2 Jul 2020 09:10:18 +0000 (11:10 +0200)]
btrfs: store precalculated csum_size in fs_info

In many places we need the checksum size and it is inefficient to read
it from the raw superblock. Store the value into fs_info, actual use
will be in followup patches.  The size is u32 as it allows to generate
better assembly than with u16.

Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years ago  btrfs: replace s_blocksize_bits with fs_info::sectorsize_bits
David Sterba [Wed, 1 Jul 2020 19:19:09 +0000 (21:19 +0200)]
btrfs: replace s_blocksize_bits with fs_info::sectorsize_bits

The value of super_block::s_blocksize_bits is the same as
fs_info::sectorsize_bits, but we don't need to do the extra dereferences
in many functions and storing the bits as u32 (in fs_info) generates
shorter assembly.

Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Reviewed-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years ago  btrfs: replace div_u64 by shift in free_space_bitmap_size
David Sterba [Wed, 1 Jul 2020 19:07:40 +0000 (21:07 +0200)]
btrfs: replace div_u64 by shift in free_space_bitmap_size

Change free_space_bitmap_size to take btrfs_fs_info so we can get the
sectorsize_bits to do calculations.

Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Reviewed-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years ago  btrfs: use precalculated sectorsize_bits from fs_info
David Sterba [Wed, 1 Jul 2020 18:45:04 +0000 (20:45 +0200)]
btrfs: use precalculated sectorsize_bits from fs_info

We do a lot of calculations where we divide or multiply by sectorsize.
We also know and make sure that sectorsize is a power of two, so all
divisions can be turned into shifts, avoiding e.g. expensive u64/u32
divisions.

The type is u32 as it's more register friendly on x86_64 compared to u8
and the resulting assembly is smaller (movzbl vs movl).

There's also superblock s_blocksize_bits but it's usually one more
pointer dereference farther than fs_info.
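
The typical transformation (sketch):

  /* before: expensive 64bit division */
  index = div_u64(start, fs_info->sectorsize);

  /* after: sectorsize is a power of two, shift instead */
  index = start >> fs_info->sectorsize_bits;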

Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years ago  btrfs: rename page_size to io_size in submit_extent_page
Qu Wenruo [Wed, 21 Oct 2020 06:25:01 +0000 (14:25 +0800)]
btrfs: rename page_size to io_size in submit_extent_page

The variable @page_size in submit_extent_page() is not related to the
page size.

It can already be smaller than PAGE_SIZE, so rename it to io_size to
reduce confusion; this is especially important for later subpage
support.

Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years ago  btrfs: only require sector size alignment for page read
Qu Wenruo [Wed, 21 Oct 2020 06:24:58 +0000 (14:24 +0800)]
btrfs: only require sector size alignment for page read

If we're reading a partial page, btrfs will warn about it, as read/write
is always done in sector size units, which currently equal the page
size.

But for the upcoming subpage read-only support, our data read is only
aligned to sectorsize, which can be smaller than the page size.

Thus change the warning condition to check against sectorsize; the
behavior is unchanged for the regular sectorsize == PAGE_SIZE case, and
no error is reported for subpage reads.

Also, pass the proper start/end with bv_offset for check_data_csum() to
handle.

Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years ago  btrfs: rename pages_locked in process_pages_contig()
Qu Wenruo [Wed, 21 Oct 2020 06:24:57 +0000 (14:24 +0800)]
btrfs: rename pages_locked in process_pages_contig()

Function process_pages_contig() does not only handle page locking but
also other operations.  Rename the local variable pages_locked to
pages_processed to reduce confusion.

Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years ago  btrfs: sink parameter start and len to check_data_csum
Qu Wenruo [Wed, 21 Oct 2020 06:24:54 +0000 (14:24 +0800)]
btrfs: sink parameter start and len to check_data_csum

For check_data_csum(), the page we're using comes directly from the
inode mapping, thus it has a valid page_offset().

We can use (page_offset() + pgoff) to replace the @start parameter
completely, while @len should always be sectorsize.

Since we're here, also add some comments, as there is quite some
confusion in wording like start/offset, without explaining whether it
means a file offset or a logical bytenr.

This should not affect the existing behavior, as for the current
sectorsize == PAGE_SIZE case, @pgoff should always be 0, and len is
always PAGE_SIZE (or sectorsize from the dio read path).

Reviewed-by: Goldwyn Rodrigues <rgoldwyn@suse.com>
Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years ago  btrfs: replace fs_info and private_data with inode in btrfs_wq_submit_bio
Qu Wenruo [Wed, 21 Oct 2020 06:24:53 +0000 (14:24 +0800)]
btrfs: replace fs_info and private_data with inode in btrfs_wq_submit_bio

All callers of btrfs_wq_submit_bio() pass struct inode as @private_data,
so there is no need for it to be (void *); replace it with "struct inode
*inode".

As we can extract fs_info from struct inode, also remove the @fs_info
parameter.

Since we're here, also replace all the remaining (void *private_data)
uses with (struct inode *inode).

Reviewed-by: Goldwyn Rodrigues <rgoldwyn@suse.com>
Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years ago  btrfs: sink the failed_start parameter to set_extent_bit
Qu Wenruo [Wed, 21 Oct 2020 06:24:51 +0000 (14:24 +0800)]
btrfs: sink the failed_start parameter to set_extent_bit

The @failed_start parameter is only paired with @exclusive_bits, and
those parameters are only used for the EXTENT_LOCKED bit, which has its
own wrapper, lock_extent_bits().

Thus for regular set_extent_bit() calls, the failed_start makes no
sense; just sink the parameter.

Also, since @failed_start and @exclusive_bits are used in pairs, add
an assert to make it obvious.
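
The assert could look like this (a sketch of the idea):

  /* @failed_start and @exclusive_bits must come in pairs */
  if (exclusive_bits)
          ASSERT(failed_start);
  else
          ASSERT(failed_start == NULL);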

Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years ago  btrfs: update the comment for find_first_extent_bit
Qu Wenruo [Wed, 21 Oct 2020 06:24:50 +0000 (14:24 +0800)]
btrfs: update the comment for find_first_extent_bit

The pitfall here is that if the parameter @bits has multiple bits set,
we will return the first range which has just one of the specified bits
set.

This is a little tricky if we want an exact match.  Anyway, update the
comment to make that clear.

Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years ago  btrfs: fix the comment on lock_extent_buffer_for_io
Qu Wenruo [Wed, 21 Oct 2020 06:24:49 +0000 (14:24 +0800)]
btrfs: fix the comment on lock_extent_buffer_for_io

The return value of that function is completely wrong.

That function only returns 0 if the extent buffer doesn't need to be
submitted.  The "ret = 1" and "ret = 0" are determined by the return
value of "test_and_clear_bit(EXTENT_BUFFER_DIRTY, &eb->bflags)".

And if we get ret == 1, it's because the extent buffer is dirty, and we
set its status to EXTENT_BUFFER_WRITE_BACK, and continue to page
locking.

While if we get ret == 0, it means the extent buffer was not dirty to
begin with, so we don't need to write it back.

The caller also follows this, in btree_write_cache_pages(), if
lock_extent_buffer_for_io() returns 0, we just skip the extent buffer
completely.

So the comment is completely wrong.

Since we're here, also change the description a little.  The write bio
flushing won't be visible to the caller, thus it's not a major feature.
In the main description, only describe the locking part to make the
point clearer.

For reference, added in commit 2e3c25136adf ("btrfs: extent_io: add
proper error handling to lock_extent_buffer_for_io()")

Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years ago  btrfs: remove unnecessary casts in printk
David Sterba [Tue, 15 Sep 2020 12:18:23 +0000 (14:18 +0200)]
btrfs: remove unnecessary casts in printk

A long time ago the explicit casts were necessary for u64 but we don't
need them anymore.  Remove casts where the type matches, leaving only
cases that cast sector_t or loff_t.

Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years ago  btrfs: add set/get accessors for root_item::drop_level
David Sterba [Tue, 15 Sep 2020 19:44:52 +0000 (21:44 +0200)]
btrfs: add set/get accessors for root_item::drop_level

The drop_level member is used directly, unlike all the other int types
in root_item.  Add the definition and use it everywhere.  The type is u8
so there's no conversion necessary and the helpers are properly inlined;
this is for consistency.

Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years ago  btrfs: use root_item helpers for limit and flags in btrfs_create_tree
David Sterba [Tue, 15 Sep 2020 19:00:04 +0000 (21:00 +0200)]
btrfs: use root_item helpers for limit and flags in btrfs_create_tree

For consistency use the available helpers to set flags and limit.

Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years ago  btrfs: check-integrity: use proper helper to access btrfs_header
David Sterba [Tue, 15 Sep 2020 12:26:27 +0000 (14:26 +0200)]
btrfs: check-integrity: use proper helper to access btrfs_header

There's one raw use of le->cpu conversion but we have a helper to do
that for us, so use it.

Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years ago  btrfs: send: use helpers to access root_item::ctransid
David Sterba [Tue, 15 Sep 2020 12:30:15 +0000 (14:30 +0200)]
btrfs: send: use helpers to access root_item::ctransid

We have helpers to access the on-disk item members, use that for
root_item::ctransid instead of raw le64_to_cpu.

Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years ago  btrfs: generate lockdep keyset names at compile time
David Sterba [Tue, 29 Sep 2020 12:56:39 +0000 (14:56 +0200)]
btrfs: generate lockdep keyset names at compile time

The names in btrfs_lockdep_keysets are generated from a simple pattern
using snprintf but we can generate them directly with some macro magic
and remove the helpers.
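
The generation can be done with stringification along these lines (a
sketch; the in-tree macros may differ in detail):

  #define DEFINE_LEVEL(stem, level) \
          .names[level] = "btrfs-" stem "-0" #level,

  #define DEFINE_NAME(stem)                               \
          DEFINE_LEVEL(stem, 0) DEFINE_LEVEL(stem, 1)     \
          DEFINE_LEVEL(stem, 2) DEFINE_LEVEL(stem, 3)     \
          DEFINE_LEVEL(stem, 4) DEFINE_LEVEL(stem, 5)     \
          DEFINE_LEVEL(stem, 6) DEFINE_LEVEL(stem, 7)

  /* usage in the keyset table */
  { .id = BTRFS_ROOT_TREE_OBJECTID, DEFINE_NAME("root") },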

Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years ago  btrfs: use the right number of levels for lockdep keysets
David Sterba [Tue, 27 Oct 2020 14:54:08 +0000 (15:54 +0100)]
btrfs: use the right number of levels for lockdep keysets

BTRFS_MAX_LEVEL is 8 and the keyset table is supposed to have a key for
each level, but we'll never have more than 8 levels.  The values passed
to btrfs_set_buffer_lockdep_class are always derived from a valid extent
buffer.  Set the array sizes to the right value.

Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years ago  btrfs: remove dio iomap DSYNC workaround
Goldwyn Rodrigues [Thu, 24 Sep 2020 16:39:21 +0000 (11:39 -0500)]
btrfs: remove dio iomap DSYNC workaround

This effectively reverts 09745ff88d93 ("btrfs: dio iomap DSYNC
workaround") now that the iomap API has been updated to allow
iomap_dio_complete() not to be called under i_rwsem anymore.

Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: Goldwyn Rodrigues <rgoldwyn@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years ago  btrfs: call iomap_dio_complete() without inode_lock
Goldwyn Rodrigues [Thu, 24 Sep 2020 16:39:20 +0000 (11:39 -0500)]
btrfs: call iomap_dio_complete() without inode_lock

If direct writes are called with O_DIRECT | O_DSYNC, it will result in a
deadlock because iomap_dio_rw() is called under i_rwsem which calls:

  iomap_dio_complete()
    generic_write_sync()
      btrfs_sync_file()

btrfs_sync_file() requires i_rwsem, so call __iomap_dio_rw() with the
i_rwsem locked, and call iomap_dio_complete() after unlocking i_rwsem.
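
A sketch of the reordered sequence (simplified, error handling omitted):

  inode_lock(inode);
  dio = __iomap_dio_rw(iocb, from, &btrfs_dio_iomap_ops, &btrfs_dio_ops,
                       is_sync_kiocb(iocb));
  inode_unlock(inode);

  /* may call generic_write_sync() -> btrfs_sync_file(), which takes
   * i_rwsem itself, so it must run after the unlock */
  written = iomap_dio_complete(dio);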

Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: Goldwyn Rodrigues <rgoldwyn@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years ago  btrfs: remove btrfs_inode::dio_sem
Goldwyn Rodrigues [Thu, 24 Sep 2020 16:39:19 +0000 (11:39 -0500)]
btrfs: remove btrfs_inode::dio_sem

The inode dio_sem can be eliminated because all DIO synchronization is
now performed through inode->i_rwsem, which provides the same guarantees.

This reduces btrfs_inode size by 40 bytes.

Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: Goldwyn Rodrigues <rgoldwyn@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years ago  btrfs: use shared lock for direct writes within EOF
Goldwyn Rodrigues [Thu, 24 Sep 2020 16:39:18 +0000 (11:39 -0500)]
btrfs: use shared lock for direct writes within EOF

Direct writes within EOF are safe to be performed with the inode shared
lock to improve parallelization with other direct writes or reads,
because EOF is not changed and there is no race with truncate().

Direct reads are already performed under shared inode lock.

This patch is a precursor to removing btrfs_inode->dio_sem.

Signed-off-by: Goldwyn Rodrigues <rgoldwyn@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years ago  btrfs: push inode locking and unlocking into buffered/direct write
Goldwyn Rodrigues [Thu, 24 Sep 2020 16:39:17 +0000 (11:39 -0500)]
btrfs: push inode locking and unlocking into buffered/direct write

Push inode locking and unlocking closer to where we perform the I/O. For
this we need to move the write checks inside the respective functions as
well.

pos is evaluated after generic_write_checks because O_APPEND can change
iocb->ki_pos.

Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: Goldwyn Rodrigues <rgoldwyn@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years ago  btrfs: introduce btrfs_inode_lock()/unlock()
Goldwyn Rodrigues [Thu, 24 Sep 2020 16:39:16 +0000 (11:39 -0500)]
btrfs: introduce btrfs_inode_lock()/unlock()

btrfs_inode_lock/unlock() are wrappers around inode locks, separating
the type of lock and actual locking.

- 0 - default, exclusive lock
- BTRFS_ILOCK_SHARED - for shared locks, for possible parallel DIO
- BTRFS_ILOCK_TRY - for the RWF_NOWAIT sequence

The bits SHARED and TRY can be combined together.
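
Usage could then look like this (a sketch based on the flags above):

  /* default: exclusive lock */
  btrfs_inode_lock(inode, 0);

  /* shared lock, e.g. for DIO writes within EOF */
  btrfs_inode_lock(inode, BTRFS_ILOCK_SHARED);

  /* RWF_NOWAIT: try to lock, bail out instead of blocking */
  err = btrfs_inode_lock(inode, BTRFS_ILOCK_SHARED | BTRFS_ILOCK_TRY);
  if (err)
          return err;           /* e.g. -EAGAIN */

  btrfs_inode_unlock(inode, BTRFS_ILOCK_SHARED);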

Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: Goldwyn Rodrigues <rgoldwyn@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years ago  btrfs: introduce btrfs_write_check()
Goldwyn Rodrigues [Thu, 24 Sep 2020 16:39:15 +0000 (11:39 -0500)]
btrfs: introduce btrfs_write_check()

btrfs_write_check() checks write parameters in one place before
beginning a write. This does away with inode_unlock() after every check.
In the later patches, it will help push inode_lock/unlock() in buffered
and direct write functions respectively.

generic_write_checks needs to be called first, as it could truncate
iov_iter, and its return value is used as the count.

Signed-off-by: Goldwyn Rodrigues <rgoldwyn@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years ago  btrfs: check FS error state bit early during write
Goldwyn Rodrigues [Thu, 24 Sep 2020 16:39:14 +0000 (11:39 -0500)]
btrfs: check FS error state bit early during write

fs_info::fs_state is a filesystem-wide check, as opposed to a per-inode
one, and can be performed before we begin with the write checks.  This
eliminates the inode lock/unlock in case the error bit is set.

Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: Goldwyn Rodrigues <rgoldwyn@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years ago  btrfs: move pos increment and pagecache extension to btrfs_buffered_write
Goldwyn Rodrigues [Thu, 24 Sep 2020 16:39:13 +0000 (11:39 -0500)]
btrfs: move pos increment and pagecache extension to btrfs_buffered_write

While we do this, correct the call to pagecache_isize_extended:

 - pagecache_isize_extended needs to be called with the start of the
   write, as opposed to i_size

 - we don't need to check the range before the call; this is done in
   the function

Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: Goldwyn Rodrigues <rgoldwyn@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years ago  btrfs: split btrfs_direct_IO to read and write
Goldwyn Rodrigues [Thu, 24 Sep 2020 16:39:12 +0000 (11:39 -0500)]
btrfs: split btrfs_direct_IO to read and write

The read and write DIO don't have anything in common except for the
call to iomap_dio_rw. Extract the write call into a new function to get
rid of conditional statements for direct write.

Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: Goldwyn Rodrigues <rgoldwyn@suse.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years ago  btrfs: sysfs: add per-fs attribute for read policy
Anand Jain [Wed, 28 Oct 2020 13:14:47 +0000 (21:14 +0800)]
btrfs: sysfs: add per-fs attribute for read policy

Add

 /sys/fs/btrfs/UUID/read_policy

attribute so that the read policy for the raid1, raid1c34 and raid10 can
be tuned.

When this attribute is read, it will show all available policies, with
the active policy in [ ]. The read_policy attribute can be written using
one of the items listed there.

For example:
  $ cat /sys/fs/btrfs/UUID/read_policy
  [pid]
  $ echo pid > /sys/fs/btrfs/UUID/read_policy

Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: Anand Jain <anand.jain@oracle.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years ago  btrfs: create read policy framework
Anand Jain [Wed, 28 Oct 2020 13:14:46 +0000 (21:14 +0800)]
btrfs: create read policy framework

As of now, we use the pid method to read striped mirrored data, which
means the process id determines the stripe id to read. This type of
routing typically helps in a system with many small independent
processes trying to read random data. On the other hand, the pid based
read IO policy is inefficient because if there is a single process
trying to read a large file, the overall disk bandwidth remains
underutilized.

So this patch introduces a read policy framework so that we could add
more read policies, such as IO routing based on the device's wait-queue
or manual when we have a read-preferred device or a policy based on the
target storage caching.
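
The dispatch could be sketched like this (names assumed from the
description, not verbatim):

  switch (READ_ONCE(fs_devices->read_policy)) {
  case BTRFS_READ_POLICY_PID:
  default:
          /* route by PID: the reader's pid selects the stripe */
          preferred_mirror = first + (current->pid % num_stripes);
          break;
  }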

Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: Anand Jain <anand.jain@oracle.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years ago  btrfs: add helper for string match ignoring leading/trailing whitespace
Anand Jain [Wed, 28 Oct 2020 13:14:45 +0000 (21:14 +0800)]
btrfs: add helper for string match ignoring leading/trailing whitespace

Add a generic helper to match the string in a given buffer, and ignore
the leading and trailing whitespace.
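
Such a helper could look like this (a sketch; the in-tree name and form
may differ):

  static bool strmatch(const char *buffer, const char *string)
  {
          const size_t len = strlen(string);

          /* skip leading whitespace */
          buffer = skip_spaces(buffer);

          /* match @string, then allow only trailing whitespace */
          return strncmp(string, buffer, len) == 0 &&
                 strlen(skip_spaces(buffer + len)) == 0;
  }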

Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: Anand Jain <anand.jain@oracle.com>
Reviewed-by: David Sterba <dsterba@suse.com>
[ rename variables, add comments ]
Signed-off-by: David Sterba <dsterba@suse.com>
3 years ago  btrfs: do not start and wait for delalloc on snapshot roots on transaction commit
Filipe Manana [Tue, 27 Oct 2020 12:40:06 +0000 (12:40 +0000)]
btrfs: do not start and wait for delalloc on snapshot roots on transaction commit

We no longer need to start writeback for delalloc of roots that are
being snapshotted and wait for it to complete. This was done in commit
609e804d771f59 ("Btrfs: fix file corruption after snapshotting due to mix
of buffered/DIO writes") to fix a type of file corruption where files in a
snapshot end up having their i_size updated in a non-ordered way, leaving
implicit file holes, when buffered IO writes that increase a file's size
are followed by direct IO writes that also increase the file's size.

This is not needed anymore because we now have a more generic mechanism
to prevent a non-ordered i_size update since commit 9ddc959e802bf7
("btrfs: use the file extent tree infrastructure"), which addresses this
scenario involving snapshots as well.

Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years ago  btrfs: switch extent buffer tree lock to rw_semaphore
Josef Bacik [Thu, 20 Aug 2020 15:46:09 +0000 (11:46 -0400)]
btrfs: switch extent buffer tree lock to rw_semaphore

Historically we've implemented our own locking because we wanted to be
able to selectively spin or sleep based on what we were doing in the
tree.  For instance, if all of our nodes were in cache then there's
rarely a reason to need to sleep waiting for node locks, as they'll
likely become available soon.  At the time this code was written the
rw_semaphore didn't do adaptive spinning, and thus was orders of
magnitude slower than our home-grown locking.

However now the opposite is the case.  There are a few problems with how
we implement blocking locks, namely that we use a normal waitqueue and
simply wake everybody up in reverse sleep order.  This leads to some
suboptimal performance behavior, and a lot of context switches in highly
contended cases.  The rw_semaphores actually do this properly, and also
have adaptive spinning that works relatively well.
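
For reference, the generic semaphore API the rework builds on (standard
kernel primitives, shown as a sketch rather than the actual diff):

  struct rw_semaphore lock;

  init_rwsem(&lock);

  down_read(&lock);   /* shared: many readers may hold it */
  up_read(&lock);

  down_write(&lock);  /* exclusive: one writer, no readers */
  up_write(&lock);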

The locking code is also a bit of a bear to understand, and we lose the
benefit of lockdep for the most part because the blocking states of the
lock are simply ad-hoc and not mapped into lockdep.

So rework the locking code to drop all of this custom locking stuff, and
simply use a rw_semaphore for everything.  This makes the locking much
simpler for everything, as we can now drop a lot of cruft and blocking
transitions.  The performance numbers vary depending on the workload,
because generally speaking there doesn't tend to be a lot of contention
on the btree.  However, on my test system which is an 80 core single
socket system with 256GiB of RAM and a 2TiB NVMe drive I get the
following results (with all debug options off):

  dbench 200 baseline
  Throughput 216.056 MB/sec  200 clients  200 procs  max_latency=1471.197 ms

  dbench 200 with patch
  Throughput 737.188 MB/sec  200 clients  200 procs  max_latency=714.346 ms

Previously we also used fs_mark to test this sort of contention, and
those results are far less impressive, mostly because there aren't
enough tasks to really stress the locking:

  fs_mark -d /d[0-15] -S 0 -L 20 -n 100000 -s 0 -t 16

  baseline
    Average Files/sec:     160166.7
    p50 Files/sec:         165832
    p90 Files/sec:         123886
    p99 Files/sec:         123495

    real    3m26.527s
    user    2m19.223s
    sys     48m21.856s

  patched
    Average Files/sec:     164135.7
    p50 Files/sec:         171095
    p90 Files/sec:         122889
    p99 Files/sec:         113819

    real    3m29.660s
    user    2m19.990s
    sys     44m12.259s

Signed-off-by: Josef Bacik <josef@toxicpanda.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years agobtrfs: open code insert_orphan_item
Nikolay Borisov [Thu, 22 Oct 2020 15:40:46 +0000 (18:40 +0300)]
btrfs: open code insert_orphan_item

Just open code it in its sole caller and remove a level of indirection.

Reviewed-by: Anand Jain <anand.jain@oracle.com>
Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years agobtrfs: introduce mount option rescue=all
Josef Bacik [Fri, 16 Oct 2020 15:29:20 +0000 (11:29 -0400)]
btrfs: introduce mount option rescue=all

Now that we have the building blocks for some better recovery options
with corrupted file systems, add a rescue=all option to enable all of
the relevant rescue options.  This will allow distros to simply default
to rescue=all for the "oh dear lord the world's on fire" recovery
without needing to know all the different options that we have and may
add in the future.
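
A hypothetical recovery invocation could look like this (device and
mount point are placeholders; the options folded into rescue=all
require a read-only mount):

  $ mount -o ro,rescue=all /dev/sdX /mnt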

Reviewed-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years agobtrfs: introduce mount option rescue=ignoredatacsums
Josef Bacik [Fri, 16 Oct 2020 15:29:19 +0000 (11:29 -0400)]
btrfs: introduce mount option rescue=ignoredatacsums

There are cases where you can end up with bad data csums because of
misbehaving applications.  This happens when an application modifies a
buffer in flight while doing an O_DIRECT write.  In order to recover the
file we need a way to turn off data checksums so you can copy the file
off, and then you can delete the file and restore it properly later.

Reviewed-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years agobtrfs: introduce mount option rescue=ignorebadroots
Josef Bacik [Fri, 16 Oct 2020 15:29:18 +0000 (11:29 -0400)]
btrfs: introduce mount option rescue=ignorebadroots

In the face of extent root corruption, or any other core fs-wide root
corruption, we will fail to mount the file system.  This makes recovery
kind of a pain, because you need to fall back to userspace tools to
scrape off data.  Instead provide a mechanism to gracefully handle bad
roots, so we can at least mount read-only and possibly recover data from
the file system.

Signed-off-by: Josef Bacik <josef@toxicpanda.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years agobtrfs: show rescue=usebackuproot in /proc/mounts
Josef Bacik [Fri, 16 Oct 2020 15:29:17 +0000 (11:29 -0400)]
btrfs: show rescue=usebackuproot in /proc/mounts

The standalone option usebackuproot was intended as one-time use, so it
was not kept in the option list. Now that we're going to have more
rescue options, it's desirable to keep them all visible, as it could be
confusing if an option disappeared.

Signed-off-by: Josef Bacik <josef@toxicpanda.com>
Reviewed-by: David Sterba <dsterba@suse.com>
[ remove the btrfs_clear_opt part from open_ctree ]
Signed-off-by: David Sterba <dsterba@suse.com>
3 years agobtrfs: add a helper to print out rescue= options
Josef Bacik [Fri, 16 Oct 2020 15:29:16 +0000 (11:29 -0400)]
btrfs: add a helper to print out rescue= options

We're going to have a lot of rescue options, so add a helper to
collapse the /proc/mounts output to the rescue=option1:option2:option3
format, as sketched below.
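
A rough sketch of the collapsing logic (names are hypothetical; the
real helper prints into the seq_file backing /proc/mounts):

  /* Hypothetical sketch: print "rescue=" once, then every active
   * option separated by ':'. */
  static void sketch_show_rescue(struct seq_file *seq, bool backuproot,
                                 bool nologreplay)
  {
          const char *prefix = ",rescue=";

          if (backuproot) {
                  seq_printf(seq, "%susebackuproot", prefix);
                  prefix = ":";
          }
          if (nologreplay)
                  seq_printf(seq, "%snologreplay", prefix);
  }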

Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years agobtrfs: sysfs: export supported rescue= mount options
Josef Bacik [Fri, 16 Oct 2020 15:29:15 +0000 (11:29 -0400)]
btrfs: sysfs: export supported rescue= mount options

We're going to be adding a variety of different rescue options, so we
should advertise which ones we support to make userspace's life easier
in the future.

Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years agobtrfs: push the NODATASUM check into btrfs_lookup_bio_sums
Josef Bacik [Fri, 16 Oct 2020 15:29:14 +0000 (11:29 -0400)]
btrfs: push the NODATASUM check into btrfs_lookup_bio_sums

When we move to being able to handle NULL csum_roots it'll be cleaner to
just check in btrfs_lookup_bio_sums instead of at all of the caller
locations, so push the NODATASUM check into it as well so it's unified.
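
Conceptually the unified check is an early exit at the top of
btrfs_lookup_bio_sums; a sketch of the shape of it:

  /* Sketch: nothing to look up if the inode carries no data csums. */
  if (BTRFS_I(inode)->flags & BTRFS_INODE_NODATASUM)
          return BLK_STS_OK;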

Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Reviewed-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years agobtrfs: unify the ro checking for mount options
Josef Bacik [Fri, 16 Oct 2020 15:29:13 +0000 (11:29 -0400)]
btrfs: unify the ro checking for mount options

We're going to be adding more options that require RDONLY, so add a
helper to do the check and error out if we don't have RDONLY set.
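
A minimal sketch of such a helper (the function name is illustrative):

  /* Illustrative sketch: fail option parsing when an option that
   * needs a read-only mount is used on a read-write one. */
  static int sketch_require_ro(struct super_block *sb, const char *opt)
  {
          if (!sb_rdonly(sb)) {
                  pr_err("btrfs: %s is allowed only on read-only mounts\n",
                         opt);
                  return -EINVAL;
          }
          return 0;
  }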

Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Reviewed-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years agobtrfs: do not start readahead for csum tree when scrubbing non-data block groups
Filipe Manana [Mon, 12 Oct 2020 10:55:26 +0000 (11:55 +0100)]
btrfs: do not start readahead for csum tree when scrubbing non-data block groups

When scrubbing a stripe of a block group we always start readahead for
the checksums btree and wait for it to complete. However, when the
block group is not a data block group (or a mixed block group), this is
a waste of time, since there are no checksums for metadata extents in
that btree.

So skip the readahead when the block group does not have the data flag
set, saving some time on memory allocations, on queueing a job in the
readahead work queue and waiting for it to complete, and potentially
avoiding some IO as well (when csum tree extents are not in memory
already).
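
The gist is a flag test before kicking off the csum tree readahead;
roughly (the readahead call is a placeholder):

  /* Sketch: only data (or mixed) block groups carry data checksums,
   * so csum tree readahead only pays off for those. */
  if (cache->flags & BTRFS_BLOCK_GROUP_DATA)
          reada = sketch_start_csum_readahead(cache); /* placeholder */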

Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years agobtrfs: assert we are holding the reada_lock when releasing a readahead zone
Filipe Manana [Mon, 12 Oct 2020 10:55:25 +0000 (11:55 +0100)]
btrfs: assert we are holding the reada_lock when releasing a readahead zone

When we drop the last reference of a zone, we end up releasing it through
the callback reada_zone_release(), which deletes the zone from a device's
reada_zones radix tree. This tree is protected by the global readahead
lock at fs_info->reada_lock. Currently all places that are sure they
are dropping the last reference on a zone call kref_put() in a critical
section delimited by this lock, while all other places that are sure
they are not dropping the last reference do not bother holding that
lock around the kref_put() call.

When working on the previous fix for hangs and use-after-frees in the
readahead code, my initial attempts were different and I actually ended
up having reada_zone_release() called when not holding the lock, which
resulted in weird and unexpected problems. So just add an assertion
there to detect such problems more quickly and make the locking
dependency more obvious.
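
The check is the standard lockdep helper placed in the release
callback, roughly:

  /* In reada_zone_release(): catch a final kref_put() done without
   * fs_info->reada_lock held. */
  lockdep_assert_held(&fs_info->reada_lock);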

Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years agobtrfs: set EXTENT_NORESERVE bits inside btrfs_dirty_pages()
Goldwyn Rodrigues [Wed, 14 Oct 2020 14:55:45 +0000 (09:55 -0500)]
btrfs: set EXTENT_NORESERVE bits inside btrfs_dirty_pages()

Set the extent bits EXTENT_NORESERVE inside btrfs_dirty_pages() as
opposed to calling set_extent_bits again later.

Fold the check for the written length into the function.

Note: EXTENT_NORESERVE is set before unlocking extents.

Reviewed-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Goldwyn Rodrigues <rgoldwyn@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years agobtrfs: use round_down while calculating start position in btrfs_dirty_pages()
Goldwyn Rodrigues [Wed, 14 Oct 2020 14:55:44 +0000 (09:55 -0500)]
btrfs: use round_down while calculating start position in btrfs_dirty_pages()

round_down looks prettier than the bit mask operations.
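
For comparison, the two equivalent forms (assuming sectorsize is a
power of two, which it is):

  start_pos = pos & ~((u64)fs_info->sectorsize - 1);  /* before */
  start_pos = round_down(pos, fs_info->sectorsize);   /* after  */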

Reviewed-by: Nikolay Borisov <nborisov@suse.com>
Reviewed-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: Goldwyn Rodrigues <rgoldwyn@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years agobtrfs: use iosize while reading compressed pages
Goldwyn Rodrigues [Tue, 15 Sep 2020 15:41:40 +0000 (10:41 -0500)]
btrfs: use iosize while reading compressed pages

While using compression, a submitted bio is mapped to a compressed bio
which performs the read from disk, decompresses, and returns the
uncompressed data to the original bio. The original bio must reflect
the uncompressed size (iosize) of the I/O to be performed, or else the
page only gets the on-disk, compressed length of data (disk_io_size)
filled in. The compressed bio checks the extent map and gets the
correct length while performing the I/O from disk.

This came up in the subpage work, where only the compressed length of
the original bio was filled into the page. This worked correctly for
pagesize == sectorsize because both compressed and uncompressed data
are at pagesize boundaries and would end up filling the requested page.

Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: Goldwyn Rodrigues <rgoldwyn@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years agobtrfs: calculate num_pages, reserve_bytes once in btrfs_buffered_write
Goldwyn Rodrigues [Fri, 25 Sep 2020 20:36:38 +0000 (15:36 -0500)]
btrfs: calculate num_pages, reserve_bytes once in btrfs_buffered_write

write_bytes can change in btrfs_check_nocow_lock(). Calculate variables
such as num_pages and reserve_bytes only once we are sure of the final
value of write_bytes, so there is no need to recalculate them.
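
Both quantities are derived from write_bytes; a sketch of the shape of
the calculation (not the exact diff):

  num_pages = DIV_ROUND_UP(write_bytes + offset, PAGE_SIZE);
  reserve_bytes = round_up(write_bytes + sector_offset,
                           fs_info->sectorsize);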

Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: Goldwyn Rodrigues <rgoldwyn@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years agobtrfs: calculate more accurate remaining time to sleep in transaction_kthread
Nikolay Borisov [Tue, 20 Oct 2020 09:44:17 +0000 (12:44 +0300)]
btrfs: calculate more accurate remaining time to sleep in transaction_kthread

If transaction_kthread is woken up before btrfs_fs_info::commit_interval
seconds have elapsed it will sleep for a fixed period of 5 seconds. This
is not a problem per se, but it is not accurate. Instead, the code
should sleep for an interval which guarantees that on the next wakeup
commit_interval will have passed. Since time tracking is not precise,
subtract 1 second from the delta to ensure the delay we end up waiting
is longer than the wake-up period.
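
In pseudo-form, the remaining sleep time becomes something like this
(variable names assumed):

  /* Sketch: sleep only for what is left of commit_interval, minus 1s
   * of slack for coarse time tracking. */
  delay = msecs_to_jiffies((fs_info->commit_interval - delta - 1) *
                           MSEC_PER_SEC);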

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years agobtrfs: record delta directly in transaction_kthread
Nikolay Borisov [Thu, 8 Oct 2020 12:24:29 +0000 (15:24 +0300)]
btrfs: record delta directly in transaction_kthread

Rename 'now' to 'delta' and store there the delta between transaction
start time and current time. This is in preparation for optimising the
sleep logic in the next patch. No functional changes.

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years agobtrfs: remove redundant time check in transaction kthread loop
Nikolay Borisov [Thu, 8 Oct 2020 12:24:28 +0000 (15:24 +0300)]
btrfs: remove redundant time check in transaction kthread loop

The value obtained from ktime_get_seconds() is guaranteed to be
monotonically increasing since it's taken from CLOCK_MONOTONIC. As
transaction_kthread obtains a reference to the currently running
transaction while holding btrfs_fs_info::trans_lock, it's guaranteed to:

a) see an initialized 'cur', whose start_time is guaranteed to be smaller
   than 'now'

or

b) not obtain a 'cur' and simply go to sleep.

Given this, remove the unnecessary check; if the code ever saw
now < cur->start_time, that would imply there are far greater problems
on the machine.

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years agobtrfs: use helpers to convert from seconds to jiffies in transaction_kthread
Nikolay Borisov [Thu, 8 Oct 2020 12:24:27 +0000 (15:24 +0300)]
btrfs: use helpers to convert from seconds to jiffies in transaction_kthread

The kernel provides easy-to-understand helpers to convert from
human-understandable units to the kernel-friendly 'jiffies'. So let's
use those to make the code easier to understand. No functional changes.
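
For example, an open-coded multiplication by HZ becomes the
self-describing conversion helper (a sketch, not the exact diff):

  delay = HZ * fs_info->commit_interval;                      /* before */
  delay = msecs_to_jiffies(fs_info->commit_interval * 1000);  /* after  */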

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years agobtrfs: sysfs: export filesystem generation
Anand Jain [Wed, 7 Oct 2020 07:20:03 +0000 (15:20 +0800)]
btrfs: sysfs: export filesystem generation

Matching the information that's available from the FS_INFO ioctl,
export the filesystem generation as the per-filesystem attribute
/sys/fs/btrfs/UUID/generation, which can be used by scripts.
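
Usage mirrors the other per-filesystem attributes (the value shown is
an arbitrary example):

  $ cat /sys/fs/btrfs/UUID/generation
  123456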

Signed-off-by: Anand Jain <anand.jain@oracle.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
3 years agoLinux 5.10-rc7
Linus Torvalds [Sun, 6 Dec 2020 22:25:12 +0000 (14:25 -0800)]
Linux 5.10-rc7

3 years agoMerge tag 'char-misc-5.10-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/gregk...
Linus Torvalds [Sun, 6 Dec 2020 19:48:17 +0000 (11:48 -0800)]
Merge tag 'char-misc-5.10-rc7' of git://git./linux/kernel/git/gregkh/char-misc

Pull char/misc driver fixes from Greg KH:
 "Here are some small driver fixes, and one "large" revert, for
  5.10-rc7.

  They include:

   - revert mei patch from 5.10-rc1 that was using a reserved userspace
     value. It will be resubmitted once the proper id has been assigned
     by the virtio people.

   - habanalabs fixes found by the fall-through audit from Gustavo

   - speakup driver fixes for reported issues

   - fpga config build fix for reported issue.

  All of these except the revert have been in linux-next with no
  reported issues. The revert is "clean" and just removes a
  previously-added driver, so no real issue there"

* tag 'char-misc-5.10-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc:
  Revert "mei: virtio: virtualization frontend driver"
  fpga: Specify HAS_IOMEM dependency for FPGA_DFL
  habanalabs: put devices before driver removal
  habanalabs: free host huge va_range if not used
  speakup: Reject setting the speakup line discipline outside of speakup

3 years agoMerge tag 'tty-5.10-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/tty
Linus Torvalds [Sun, 6 Dec 2020 19:43:50 +0000 (11:43 -0800)]
Merge tag 'tty-5.10-rc7' of git://git./linux/kernel/git/gregkh/tty

Pull tty fixes from Greg KH:
 "Here are two tty core fixes for 5.10-rc7.

  They resolve some reported locking issues in the tty core. While they
  have not been in a released linux-next yet, they have passed all of
  the 0-day bot testing as well as the submitter's testing"

* tag 'tty-5.10-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/tty:
  tty: Fix ->session locking
  tty: Fix ->pgrp locking in tiocspgrp()

3 years agoMerge tag 'usb-5.10-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb
Linus Torvalds [Sun, 6 Dec 2020 19:38:36 +0000 (11:38 -0800)]
Merge tag 'usb-5.10-rc7' of git://git./linux/kernel/git/gregkh/usb

Pull USB fixes from Greg KH:
 "Here are some small USB fixes for 5.10-rc7 that resolve a number of
  reported issues, and add some new device ids.

  Nothing major here, but these solve some problems that people were
  having with the 5.10-rc tree:

   - reverts for USB storage dma settings that broke working devices

   - thunderbolt use-after-free fix

   - cdns3 driver fixes

   - gadget driver userspace copy fix

   - new device ids

  All of these except for the reverts have been in linux-next with no
  reported issues. The reverts are "clean" and were tested by Hans, as
  well as passing the 0-day tests"

* tag 'usb-5.10-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb:
  usb: gadget: f_fs: Use local copy of descriptors for userspace copy
  usb: ohci-omap: Fix descriptor conversion
  Revert "usb-storage: fix sdev->host->dma_dev"
  Revert "uas: fix sdev->host->dma_dev"
  Revert "uas: bump hw_max_sectors to 2048 blocks for SS or faster drives"
  USB: serial: kl5kusb105: fix memleak on open
  USB: serial: ch341: sort device-id entries
  USB: serial: ch341: add new Product ID for CH341A
  USB: serial: option: fix Quectel BG96 matching
  usb: cdns3: core: fix goto label for error path
  usb: cdns3: gadget: clear trb->length as zero after preparing every trb
  usb: cdns3: Fix hardware based role switch
  USB: serial: option: add support for Thales Cinterion EXS82
  USB: serial: option: add Fibocom NL668 variants
  thunderbolt: Fix use-after-free in remove_unplugged_switch()

3 years agoMerge tag 'x86-urgent-2020-12-06' of git://git.kernel.org/pub/scm/linux/kernel/git...
Linus Torvalds [Sun, 6 Dec 2020 19:22:39 +0000 (11:22 -0800)]
Merge tag 'x86-urgent-2020-12-06' of git://git./linux/kernel/git/tip/tip

Pull x86 fixes from Thomas Gleixner:
 "A set of fixes for x86:

   - Make the AMD L3 QoS code and data priorization enable/disable
     mechanism work correctly.

     The control bit was only set/cleared on one of the CPUs in a L3
     domain, but it has to be modified on all CPUs in the domain. The
     initial documentation was not clear about this, but the updated one
     from Oct 2020 spells it out.

   - Fix an off by one in the UV platform detection code which causes
     the UV hubs to be identified wrongly.

     The chip revisions start at 1 not at 0.

   - Fix a long standing bug in the evaluation of prefixes in the
     uprobes code which fails to handle repeated prefixes properly.

     The aggregate size of the prefixes can be larger than the bytes
     array but the code blindly iterated over the aggregate size beyond
     the array boundary. Add a macro to handle this case properly and
     use it at the affected places"

* tag 'x86-urgent-2020-12-06' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/sev-es: Use new for_each_insn_prefix() macro to loop over prefixes bytes
  x86/insn-eval: Use new for_each_insn_prefix() macro to loop over prefixes bytes
  x86/uprobes: Do not use prefixes.nbytes when looping over prefixes.bytes
  x86/platform/uv: Fix UV4 hub revision adjustment
  x86/resctrl: Fix AMD L3 QOS CDP enable/disable

3 years agoMerge tag 'perf-urgent-2020-12-06' of git://git.kernel.org/pub/scm/linux/kernel/git...
Linus Torvalds [Sun, 6 Dec 2020 19:20:18 +0000 (11:20 -0800)]
Merge tag 'perf-urgent-2020-12-06' of git://git./linux/kernel/git/tip/tip

Pull perf fixes from Thomas Gleixner:
 "Two fixes for performance monitoring on X86:

   - Add recursion protection to another callchain invoked from
     x86_pmu_stop() which can recurse back into x86_pmu_stop(). The
     first attempt to fix this missed this extra code path.

   - Use the already filtered status variable to check for PEBS counter
     overflow bits and not the unfiltered full status read from
     IA32_PERF_GLOBAL_STATUS which can have unrelated bits check which
     would be evaluated incorrectly"

* tag 'perf-urgent-2020-12-06' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  perf/x86/intel: Check PEBS status correctly
  perf/x86/intel: Fix a warning on x86_pmu_stop() with large PEBS

3 years agoMerge tag 'irq-urgent-2020-12-06' of git://git.kernel.org/pub/scm/linux/kernel/git...
Linus Torvalds [Sun, 6 Dec 2020 19:15:55 +0000 (11:15 -0800)]
Merge tag 'irq-urgent-2020-12-06' of git://git./linux/kernel/git/tip/tip

Pull irq fixes from Thomas Gleixner:
 "A set of updates for the interrupt subsystem:

   - Make multiqueue devices which use the managed interrupt affinity
     infrastructure work on PowerPC/Pseries. PowerPC does not use the
     generic infrastructure for setting up PCI/MSI interrupts and the
     multiqueue changes failed to update the legacy PCI/MSI
     infrastructure. Make this work by passing the affinity setup
     information down to the mapping and allocation functions.

   - Move Jason Cooper from MAINTAINERS to CREDITS as his mail is
     bouncing and he's not reachable. We hope all is well with him and
     say thanks for his work over the years"

* tag 'irq-urgent-2020-12-06' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  powerpc/pseries: Pass MSI affinity to irq_create_mapping()
  genirq/irqdomain: Add an irq_create_mapping_affinity() function
  MAINTAINERS: Move Jason Cooper to CREDITS

3 years agoMerge tag 'locking-urgent-2020-12-06' of git://git.kernel.org/pub/scm/linux/kernel...
Linus Torvalds [Sun, 6 Dec 2020 19:11:32 +0000 (11:11 -0800)]
Merge tag 'locking-urgent-2020-12-06' of git://git./linux/kernel/git/tip/tip

Pull intel_idle build fix from Thomas Gleixner:
 "A tiny build fix for a recent change in the intel_idle driver which
  missed a CONFIG dependency and broke the build for certain
  configurations"

* tag 'locking-urgent-2020-12-06' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  intel_idle: Build fix

3 years agoMerge tag 'kbuild-fixes-v5.10-2' of git://git.kernel.org/pub/scm/linux/kernel/git...
Linus Torvalds [Sun, 6 Dec 2020 18:31:39 +0000 (10:31 -0800)]
Merge tag 'kbuild-fixes-v5.10-2' of git://git./linux/kernel/git/masahiroy/linux-kbuild

Pull Kbuild fixes from Masahiro Yamada:

 - Move -Wcast-align to W=3, which tends to be false-positive and there
   is no tree-wide solution.

 - Pass -fmacro-prefix-map to KBUILD_CPPFLAGS because it is a
   preprocessor option and makes sense for .S files as well.

 - Disable -gdwarf-2 for Clang's integrated assembler to avoid warnings.

 - Disable --orphan-handling=warn for LLD 10.0.1 to avoid warnings.

 - Fix undesirable line breaks in *.mod files.

* tag 'kbuild-fixes-v5.10-2' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild:
  kbuild: avoid split lines in .mod files
  kbuild: Disable CONFIG_LD_ORPHAN_WARN for ld.lld 10.0.1
  kbuild: Hoist '--orphan-handling' into Kconfig
  Kbuild: do not emit debug info for assembly with LLVM_IAS=1
  kbuild: use -fmacro-prefix-map for .S sources
  Makefile.extrawarn: move -Wcast-align to W=3