bcache: Fix a writeback performance regression
author Kent Overstreet <kmo@daterainc.com>
Tue, 24 Sep 2013 06:17:31 +0000 (23:17 -0700)
committer Linus Torvalds <torvalds@linux-foundation.org>
Tue, 24 Sep 2013 21:41:43 +0000 (14:41 -0700)
commit c2a4f3183a1248f615a695fbd8905da55ad11bba
tree 51233866301869506c0728c812fe3668ae1e94ce
parent 61cbd250f867f98bb4738000afc6002d6f2b14bd
bcache: Fix a writeback performance regression

Background writeback works by scanning the btree for dirty data, adding
those keys into a fixed-size buffer (the keybuf), and then writing each
dirty key in the keybuf out to the backing device.
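
In outline, the loop looks something like the sketch below. This is a
simplified userspace model, not the actual bcache source; struct keybuf,
refill_keybuf() and write_dirty() are illustrative names invented here.

    /*
     * Simplified model of the keybuf-based writeback loop.  Names and
     * structure are illustrative only, not taken from drivers/md/bcache.
     */
    #include <stddef.h>

    #define KEYBUF_NR_KEYS 500          /* fixed number of slots */

    struct dirty_key {
        unsigned long long offset;      /* dirty extent start */
        size_t len;                     /* dirty extent length */
    };

    struct keybuf {
        struct dirty_key keys[KEYBUF_NR_KEYS];
        size_t nr;                      /* keys currently buffered */
    };

    /* Scan the btree for dirty extents and fill free keybuf slots (stub). */
    size_t refill_keybuf(struct keybuf *buf);

    /*
     * Submit an asynchronous write of one dirty extent to the backing
     * device; the key keeps its keybuf slot until the write completes, so
     * foreground writes can see it and avoid racing with it (stub).
     */
    void write_dirty(struct dirty_key *k);

    void background_writeback(struct keybuf *buf)
    {
        /* Rescans immediately; the fix below adds a wait for in-flight
         * writes before each rescan. */
        while (refill_keybuf(buf) > 0) {
            for (size_t i = 0; i < buf->nr; i++)
                write_dirty(&buf->keys[i]);
        }
    }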

When read_dirty() finishes and it's time to scan for more dirty data, we
need to wait for the outstanding writeback IO to complete - those writes
still take up slots in the keybuf (so that foreground writes can check
for them and avoid races). Without that wait, we continually rescan when
we'd be able to add at most a key or two to the keybuf, and that
rescanning takes locks that starve foreground IO.  Doh.
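
Conceptually, the fix amounts to tracking in-flight writeback IO and
draining it before the next scan, along these lines. Again this is an
illustrative userspace sketch using pthreads, not the kernel patch
itself; the in_flight counter and function names are made up here.

    /*
     * Illustrative sketch of the idea behind the fix; not the actual
     * patch.  in_flight, writeback_io_*() and wait_for_writeback_drain()
     * are invented names.
     */
    #include <pthread.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t drained = PTHREAD_COND_INITIALIZER;
    static unsigned int in_flight;      /* writeback IOs not yet completed */

    /* Called when a writeback write is submitted. */
    void writeback_io_submitted(void)
    {
        pthread_mutex_lock(&lock);
        in_flight++;
        pthread_mutex_unlock(&lock);
    }

    /* Called from the IO completion path: the key's keybuf slot is free. */
    void writeback_io_done(void)
    {
        pthread_mutex_lock(&lock);
        if (--in_flight == 0)
            pthread_cond_broadcast(&drained);
        pthread_mutex_unlock(&lock);
    }

    /*
     * Called when read_dirty() has issued everything in the keybuf and is
     * about to scan the btree again: block until every outstanding write
     * has released its slot, instead of rescanning over and over just to
     * add one or two keys each time.
     */
    void wait_for_writeback_drain(void)
    {
        pthread_mutex_lock(&lock);
        while (in_flight)
            pthread_cond_wait(&drained, &lock);
        pthread_mutex_unlock(&lock);
    }

With a wait like this in place, each pass over the btree refills a mostly
empty keybuf instead of repeatedly taking the btree locks away from
foreground IO for little gain.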

Signed-off-by: Kent Overstreet <kmo@daterainc.com>
Cc: linux-stable <stable@vger.kernel.org> # >= v3.10
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
drivers/md/bcache/bcache.h
drivers/md/bcache/util.c
drivers/md/bcache/util.h
drivers/md/bcache/writeback.c