* [PATCH 0/11] Per-bdi writeback flusher threads #4
@ 2009-05-18 12:19 Jens Axboe
  2009-05-18 12:19 ` [PATCH 01/11] writeback: move dirty inodes from super_block to backing_dev_info Jens Axboe
                   ` (12 more replies)
  0 siblings, 13 replies; 57+ messages in thread
From: Jens Axboe @ 2009-05-18 12:19 UTC
  To: linux-kernel, linux-fsdevel
  Cc: chris.mason, david, hch, akpm, jack, yanmin_zhang

Hi,

This is the fourth version of this patchset. Changes since v3:

- Dropped a prep patch; it has since been included in mainline.

- Added a work-to-do list to the bdi. This is struct bdi_work. Each
  wb thread will notice and execute work on bdi->work_list. The arguments
  are which sb (or NULL for all) to flush and how many pages to flush
  (a rough sketch of the mechanism follows below).

- Fixed a bug where not all bdi's would end up on the bdi_list, so
  potentially some data would not be flushed.

- Made wb_kupdated() pass on wbc->older_than_this so we maintain the same
  behaviour for kupdated flushes.

- Made the wb thread flush first before sleeping, to avoid losing the
  first flush on lazy register.

- Rebased to newer kernels.

- Little fixes here and there.

So generally not a lot of changes; the major one is using the ->work_list
and getting rid of writeback_acquire()/writeback_release(). This fixes
the concern Jan Kara had about sync/WB_SYNC_ALL being missed if writeback
was already in progress.
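
To make the ->work_list idea concrete, here is a rough sketch of how such
a queueing scheme could look. Note that struct bdi_work itself is not part
of the three patches quoted below; the field names, bdi->wb_lock, and
bdi_queue_work() are illustrative guesses based on the description above,
not the actual code:

struct bdi_work {
	struct list_head list;		/* entry on bdi->work_list */
	struct super_block *sb;		/* sb to flush, or NULL for all */
	unsigned long nr_pages;		/* how many pages to flush */
};

static void bdi_queue_work(struct backing_dev_info *bdi,
			   struct bdi_work *work)
{
	spin_lock(&bdi->wb_lock);	/* assumed lock for ->work_list */
	list_add_tail(&work->list, &bdi->work_list);
	spin_unlock(&bdi->wb_lock);

	/*
	 * Unlike writeback_acquire()/writeback_release(), a second
	 * caller queues another work item instead of being turned
	 * away, so a sync/WB_SYNC_ALL request can't be missed while
	 * writeback is already in progress.
	 */
	wake_up(&bdi->wait);
}

The key point is that callers always queue work instead of competing for
a single in-progress flag, so concurrent requests get serialized on the
list rather than dropped.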

I've run a few benchmarks today:

1) Large file writes from a single process.
2) Random file writes from multiple (16) processes.

Each benchmark was run 3 times on each kernel. The disk used was an
Intel X25-E and it was security erased before each run for consistency.
2.6.30-rc6 (22ef37eed673587ac984965dc88ba94c68873291) is the baseline
at 100. Filesystem was ext4 without barriers. The system was a Core 2
Quad with 2G of memory.

Kernel		Test		TPS		CPU
---------------------------------------------------
Baseline	1		100		100
Writeback	1		101		 95
Baseline	2		100		100
Writeback	2		105		 94

For the sequential test, speed is almost identical, but CPU usage is a
lot lower. For the random write case with 16 threads, the transaction rate
is up with the writeback patches while CPU usage is down as well.
So pretty good results for this initial test; I'd expect larger
improvements on systems with more disks. As soon as Intel sends me
4 more drives for testing, I'll update the results :-)

You can pull the patches from the block git repo, branch is 'writeback':

  git://git.kernel.dk/linux-2.6-block.git writeback

---

 b/block/blk-core.c            |    1 
 b/drivers/block/aoe/aoeblk.c  |    1 
 b/drivers/char/mem.c          |    1 
 b/fs/btrfs/disk-io.c          |   24 +
 b/fs/buffer.c                 |    2 
 b/fs/char_dev.c               |    1 
 b/fs/configfs/inode.c         |    1 
 b/fs/fs-writeback.c           |  689 ++++++++++++++++++++++++++++++++----------
 b/fs/fuse/inode.c             |    1 
 b/fs/hugetlbfs/inode.c        |    1 
 b/fs/nfs/client.c             |    1 
 b/fs/ntfs/super.c             |   32 -
 b/fs/ocfs2/dlm/dlmfs.c        |    1 
 b/fs/ramfs/inode.c            |    1 
 b/fs/super.c                  |    3 
 b/fs/sync.c                   |    2 
 b/fs/sysfs/inode.c            |    1 
 b/fs/ubifs/super.c            |    1 
 b/include/linux/backing-dev.h |   74 ++++
 b/include/linux/fs.h          |   11 
 b/include/linux/writeback.h   |   15 
 b/kernel/cgroup.c             |    1 
 b/mm/Makefile                 |    2 
 b/mm/backing-dev.c            |  481 ++++++++++++++++++++++++++++-
 b/mm/page-writeback.c         |  144 --------
 b/mm/swap_state.c             |    1 
 b/mm/vmscan.c                 |    2 
 mm/pdflush.c                  |  269 ----------------
 28 files changed, 1130 insertions(+), 634 deletions(-)

-- 
Jens Axboe



* [PATCH 01/11] writeback: move dirty inodes from super_block to backing_dev_info
  2009-05-18 12:19 [PATCH 0/11] Per-bdi writeback flusher threads #4 Jens Axboe
@ 2009-05-18 12:19 ` Jens Axboe
  2009-05-18 12:19 ` [PATCH 02/11] writeback: switch to per-bdi threads for flushing data Jens Axboe
                   ` (11 subsequent siblings)
  12 siblings, 0 replies; 57+ messages in thread
From: Jens Axboe @ 2009-05-18 12:19 UTC
  To: linux-kernel, linux-fsdevel
  Cc: chris.mason, david, hch, akpm, jack, yanmin_zhang, Jens Axboe

This is a first step toward introducing per-bdi flusher threads. There
should be no change in behaviour, although sb_has_dirty_inodes() is now
ridiculously expensive, as there's no longer an easy way to answer that
question. Not a huge problem, since it'll be deleted in a subsequent patch.

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 fs/fs-writeback.c           |  196 +++++++++++++++++++++++++++---------------
 fs/super.c                  |    3 -
 include/linux/backing-dev.h |    9 ++
 include/linux/fs.h          |    5 +-
 mm/backing-dev.c            |   30 +++++++
 mm/page-writeback.c         |    1 -
 6 files changed, 166 insertions(+), 78 deletions(-)

diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index 91013ff..34c8d1d 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -25,6 +25,7 @@
 #include <linux/buffer_head.h>
 #include "internal.h"
 
+#define inode_to_bdi(inode)	((inode)->i_mapping->backing_dev_info)
 
 /**
  * writeback_acquire - attempt to get exclusive writeback access to a device
@@ -158,12 +159,13 @@ void __mark_inode_dirty(struct inode *inode, int flags)
 			goto out;
 
 		/*
-		 * If the inode was already on s_dirty/s_io/s_more_io, don't
-		 * reposition it (that would break s_dirty time-ordering).
+		 * If the inode was already on b_dirty/b_io/b_more_io, don't
+		 * reposition it (that would break b_dirty time-ordering).
 		 */
 		if (!was_dirty) {
 			inode->dirtied_when = jiffies;
-			list_move(&inode->i_list, &sb->s_dirty);
+			list_move(&inode->i_list,
+					&inode_to_bdi(inode)->b_dirty);
 		}
 	}
 out:
@@ -184,31 +186,30 @@ static int write_inode(struct inode *inode, int sync)
  * furthest end of its superblock's dirty-inode list.
  *
  * Before stamping the inode's ->dirtied_when, we check to see whether it is
- * already the most-recently-dirtied inode on the s_dirty list.  If that is
+ * already the most-recently-dirtied inode on the b_dirty list.  If that is
  * the case then the inode must have been redirtied while it was being written
  * out and we don't reset its dirtied_when.
  */
 static void redirty_tail(struct inode *inode)
 {
-	struct super_block *sb = inode->i_sb;
+	struct backing_dev_info *bdi = inode_to_bdi(inode);
 
-	if (!list_empty(&sb->s_dirty)) {
-		struct inode *tail_inode;
+	if (!list_empty(&bdi->b_dirty)) {
+		struct inode *tail;
 
-		tail_inode = list_entry(sb->s_dirty.next, struct inode, i_list);
-		if (time_before(inode->dirtied_when,
-				tail_inode->dirtied_when))
+		tail = list_entry(bdi->b_dirty.next, struct inode, i_list);
+		if (time_before(inode->dirtied_when, tail->dirtied_when))
 			inode->dirtied_when = jiffies;
 	}
-	list_move(&inode->i_list, &sb->s_dirty);
+	list_move(&inode->i_list, &bdi->b_dirty);
 }
 
 /*
- * requeue inode for re-scanning after sb->s_io list is exhausted.
+ * requeue inode for re-scanning after bdi->b_io list is exhausted.
  */
 static void requeue_io(struct inode *inode)
 {
-	list_move(&inode->i_list, &inode->i_sb->s_more_io);
+	list_move(&inode->i_list, &inode_to_bdi(inode)->b_more_io);
 }
 
 static void inode_sync_complete(struct inode *inode)
@@ -255,18 +256,50 @@ static void move_expired_inodes(struct list_head *delaying_queue,
 /*
  * Queue all expired dirty inodes for io, eldest first.
  */
-static void queue_io(struct super_block *sb,
-				unsigned long *older_than_this)
+static void queue_io(struct backing_dev_info *bdi,
+		     unsigned long *older_than_this)
+{
+	list_splice_init(&bdi->b_more_io, bdi->b_io.prev);
+	move_expired_inodes(&bdi->b_dirty, &bdi->b_io, older_than_this);
+}
+
+static int sb_on_inode_list(struct super_block *sb, struct list_head *list)
 {
-	list_splice_init(&sb->s_more_io, sb->s_io.prev);
-	move_expired_inodes(&sb->s_dirty, &sb->s_io, older_than_this);
+	struct inode *inode;
+	int ret = 0;
+
+	spin_lock(&inode_lock);
+	list_for_each_entry(inode, list, i_list) {
+		if (inode->i_sb == sb) {
+			ret = 1;
+			break;
+		}
+	}
+	spin_unlock(&inode_lock);
+	return ret;
 }
 
 int sb_has_dirty_inodes(struct super_block *sb)
 {
-	return !list_empty(&sb->s_dirty) ||
-	       !list_empty(&sb->s_io) ||
-	       !list_empty(&sb->s_more_io);
+	struct backing_dev_info *bdi;
+	int ret = 0;
+
+	/*
+	 * This is REALLY expensive right now, but it'll go away
+	 * when the bdi writeback is introduced
+	 */
+	rcu_read_lock();
+	list_for_each_entry_rcu(bdi, &bdi_list, bdi_list) {
+		if (sb_on_inode_list(sb, &bdi->b_dirty) ||
+		    sb_on_inode_list(sb, &bdi->b_io) ||
+		    sb_on_inode_list(sb, &bdi->b_more_io)) {
+			ret = 1;
+			break;
+		}
+	}
+	rcu_read_unlock();
+
+	return ret;
 }
 EXPORT_SYMBOL(sb_has_dirty_inodes);
 
@@ -322,11 +355,11 @@ __sync_single_inode(struct inode *inode, struct writeback_control *wbc)
 			/*
 			 * We didn't write back all the pages.  nfs_writepages()
 			 * sometimes bales out without doing anything. Redirty
-			 * the inode; Move it from s_io onto s_more_io/s_dirty.
+			 * the inode; Move it from b_io onto b_more_io/b_dirty.
 			 */
 			/*
 			 * akpm: if the caller was the kupdate function we put
-			 * this inode at the head of s_dirty so it gets first
+			 * this inode at the head of b_dirty so it gets first
 			 * consideration.  Otherwise, move it to the tail, for
 			 * the reasons described there.  I'm not really sure
 			 * how much sense this makes.  Presumably I had a good
@@ -336,7 +369,7 @@ __sync_single_inode(struct inode *inode, struct writeback_control *wbc)
 			if (wbc->for_kupdate) {
 				/*
 				 * For the kupdate function we move the inode
-				 * to s_more_io so it will get more writeout as
+				 * to b_more_io so it will get more writeout as
 				 * soon as the queue becomes uncongested.
 				 */
 				inode->i_state |= I_DIRTY_PAGES;
@@ -402,10 +435,10 @@ __writeback_single_inode(struct inode *inode, struct writeback_control *wbc)
 	if ((wbc->sync_mode != WB_SYNC_ALL) && (inode->i_state & I_SYNC)) {
 		/*
 		 * We're skipping this inode because it's locked, and we're not
-		 * doing writeback-for-data-integrity.  Move it to s_more_io so
-		 * that writeback can proceed with the other inodes on s_io.
+		 * doing writeback-for-data-integrity.  Move it to b_more_io so
+		 * that writeback can proceed with the other inodes on b_io.
 		 * We'll have another go at writing back this inode when we
-		 * completed a full scan of s_io.
+		 * completed a full scan of b_io.
 		 */
 		requeue_io(inode);
 		return 0;
@@ -428,51 +461,34 @@ __writeback_single_inode(struct inode *inode, struct writeback_control *wbc)
 	return __sync_single_inode(inode, wbc);
 }
 
-/*
- * Write out a superblock's list of dirty inodes.  A wait will be performed
- * upon no inodes, all inodes or the final one, depending upon sync_mode.
- *
- * If older_than_this is non-NULL, then only write out inodes which
- * had their first dirtying at a time earlier than *older_than_this.
- *
- * If we're a pdflush thread, then implement pdflush collision avoidance
- * against the entire list.
- *
- * If `bdi' is non-zero then we're being asked to writeback a specific queue.
- * This function assumes that the blockdev superblock's inodes are backed by
- * a variety of queues, so all inodes are searched.  For other superblocks,
- * assume that all inodes are backed by the same queue.
- *
- * FIXME: this linear search could get expensive with many filesystems.  But
- * how to fix?  We need to go from an address_space to all inodes which share
- * a queue with that address_space.  (Easy: have a global "dirty superblocks"
- * list).
- *
- * The inodes to be written are parked on sb->s_io.  They are moved back onto
- * sb->s_dirty as they are selected for writing.  This way, none can be missed
- * on the writer throttling path, and we get decent balancing between many
- * throttled threads: we don't want them all piling up on inode_sync_wait.
- */
-void generic_sync_sb_inodes(struct super_block *sb,
-				struct writeback_control *wbc)
+static void generic_sync_bdi_inodes(struct backing_dev_info *bdi,
+				    struct writeback_control *wbc,
+				    struct super_block *sb,
+				    int is_blkdev_sb)
 {
 	const unsigned long start = jiffies;	/* livelock avoidance */
-	int sync = wbc->sync_mode == WB_SYNC_ALL;
 
 	spin_lock(&inode_lock);
-	if (!wbc->for_kupdate || list_empty(&sb->s_io))
-		queue_io(sb, wbc->older_than_this);
 
-	while (!list_empty(&sb->s_io)) {
-		struct inode *inode = list_entry(sb->s_io.prev,
+	if (!wbc->for_kupdate || list_empty(&bdi->b_io))
+		queue_io(bdi, wbc->older_than_this);
+
+	while (!list_empty(&bdi->b_io)) {
+		struct inode *inode = list_entry(bdi->b_io.prev,
 						struct inode, i_list);
-		struct address_space *mapping = inode->i_mapping;
-		struct backing_dev_info *bdi = mapping->backing_dev_info;
 		long pages_skipped;
 
+		/*
+		 * super block given and doesn't match, skip this inode
+		 */
+		if (sb && sb != inode->i_sb) {
+			redirty_tail(inode);
+			continue;
+		}
+
 		if (!bdi_cap_writeback_dirty(bdi)) {
 			redirty_tail(inode);
-			if (sb_is_blkdev_sb(sb)) {
+			if (is_blkdev_sb) {
 				/*
 				 * Dirty memory-backed blockdev: the ramdisk
 				 * driver does this.  Skip just this inode
@@ -494,14 +510,14 @@ void generic_sync_sb_inodes(struct super_block *sb,
 
 		if (wbc->nonblocking && bdi_write_congested(bdi)) {
 			wbc->encountered_congestion = 1;
-			if (!sb_is_blkdev_sb(sb))
+			if (!is_blkdev_sb)
 				break;		/* Skip a congested fs */
 			requeue_io(inode);
 			continue;		/* Skip a congested blockdev */
 		}
 
 		if (wbc->bdi && bdi != wbc->bdi) {
-			if (!sb_is_blkdev_sb(sb))
+			if (!is_blkdev_sb)
 				break;		/* fs has the wrong queue */
 			requeue_io(inode);
 			continue;		/* blockdev has wrong queue */
@@ -539,13 +555,55 @@ void generic_sync_sb_inodes(struct super_block *sb,
 			wbc->more_io = 1;
 			break;
 		}
-		if (!list_empty(&sb->s_more_io))
+		if (!list_empty(&bdi->b_more_io))
 			wbc->more_io = 1;
 	}
 
-	if (sync) {
+	spin_unlock(&inode_lock);
+	/* Leave any unwritten inodes on b_io */
+}
+
+/*
+ * Write out a superblock's list of dirty inodes.  A wait will be performed
+ * upon no inodes, all inodes or the final one, depending upon sync_mode.
+ *
+ * If older_than_this is non-NULL, then only write out inodes which
+ * had their first dirtying at a time earlier than *older_than_this.
+ *
+ * If we're a pdflush thread, then implement pdflush collision avoidance
+ * against the entire list.
+ *
+ * If `bdi' is non-zero then we're being asked to writeback a specific queue.
+ * This function assumes that the blockdev superblock's inodes are backed by
+ * a variety of queues, so all inodes are searched.  For other superblocks,
+ * assume that all inodes are backed by the same queue.
+ *
+ * FIXME: this linear search could get expensive with many filesystems.  But
+ * how to fix?  We need to go from an address_space to all inodes which share
+ * a queue with that address_space.  (Easy: have a global "dirty superblocks"
+ * list).
+ *
+ * The inodes to be written are parked on bdi->b_io.  They are moved back onto
+ * bdi->b_dirty as they are selected for writing.  This way, none can be missed
+ * on the writer throttling path, and we get decent balancing between many
+ * throttled threads: we don't want them all piling up on inode_sync_wait.
+ */
+void generic_sync_sb_inodes(struct super_block *sb,
+				struct writeback_control *wbc)
+{
+	const int is_blkdev_sb = sb_is_blkdev_sb(sb);
+	struct backing_dev_info *bdi;
+
+	rcu_read_lock();
+	list_for_each_entry_rcu(bdi, &bdi_list, bdi_list)
+		generic_sync_bdi_inodes(bdi, wbc, sb, is_blkdev_sb);
+	rcu_read_unlock();
+
+	if (wbc->sync_mode == WB_SYNC_ALL) {
 		struct inode *inode, *old_inode = NULL;
 
+		spin_lock(&inode_lock);
+
 		/*
 		 * Data integrity sync. Must wait for all pages under writeback,
 		 * because there may have been pages dirtied before our sync
@@ -583,10 +641,8 @@ void generic_sync_sb_inodes(struct super_block *sb,
 		}
 		spin_unlock(&inode_lock);
 		iput(old_inode);
-	} else
-		spin_unlock(&inode_lock);
+	}
 
-	return;		/* Leave any unwritten inodes on s_io */
 }
 EXPORT_SYMBOL_GPL(generic_sync_sb_inodes);
 
@@ -601,8 +657,8 @@ static void sync_sb_inodes(struct super_block *sb,
  *
  * Note:
  * We don't need to grab a reference to superblock here. If it has non-empty
- * ->s_dirty it hadn't been killed yet and kill_super() won't proceed
- * past sync_inodes_sb() until the ->s_dirty/s_io/s_more_io lists are all
+ * ->b_dirty it hadn't been killed yet and kill_super() won't proceed
+ * past sync_inodes_sb() until the ->b_dirty/b_io/b_more_io lists are all
  * empty. Since __sync_single_inode() regains inode_lock before it finally moves
  * inode from superblock lists we are OK.
  *
diff --git a/fs/super.c b/fs/super.c
index 1943fdf..76dd5b2 100644
--- a/fs/super.c
+++ b/fs/super.c
@@ -64,9 +64,6 @@ static struct super_block *alloc_super(struct file_system_type *type)
 			s = NULL;
 			goto out;
 		}
-		INIT_LIST_HEAD(&s->s_dirty);
-		INIT_LIST_HEAD(&s->s_io);
-		INIT_LIST_HEAD(&s->s_more_io);
 		INIT_LIST_HEAD(&s->s_files);
 		INIT_LIST_HEAD(&s->s_instances);
 		INIT_HLIST_HEAD(&s->s_anon);
diff --git a/include/linux/backing-dev.h b/include/linux/backing-dev.h
index 0ec2c59..86668c7 100644
--- a/include/linux/backing-dev.h
+++ b/include/linux/backing-dev.h
@@ -40,6 +40,8 @@ enum bdi_stat_item {
 #define BDI_STAT_BATCH (8*(1+ilog2(nr_cpu_ids)))
 
 struct backing_dev_info {
+	struct list_head bdi_list;
+
 	unsigned long ra_pages;	/* max readahead in PAGE_CACHE_SIZE units */
 	unsigned long state;	/* Always use atomic bitops on this */
 	unsigned int capabilities; /* Device capabilities */
@@ -58,6 +60,10 @@ struct backing_dev_info {
 
 	struct device *dev;
 
+	struct list_head	b_dirty;	/* dirty inodes */
+	struct list_head	b_io;		/* parked for writeback */
+	struct list_head	b_more_io;	/* parked for more writeback */
+
 #ifdef CONFIG_DEBUG_FS
 	struct dentry *debug_dir;
 	struct dentry *debug_stats;
@@ -72,6 +78,9 @@ int bdi_register(struct backing_dev_info *bdi, struct device *parent,
 int bdi_register_dev(struct backing_dev_info *bdi, dev_t dev);
 void bdi_unregister(struct backing_dev_info *bdi);
 
+extern spinlock_t bdi_lock;
+extern struct list_head bdi_list;
+
 static inline void __add_bdi_stat(struct backing_dev_info *bdi,
 		enum bdi_stat_item item, s64 amount)
 {
diff --git a/include/linux/fs.h b/include/linux/fs.h
index 3b534e5..6b475d4 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -712,7 +712,7 @@ static inline int mapping_writably_mapped(struct address_space *mapping)
 
 struct inode {
 	struct hlist_node	i_hash;
-	struct list_head	i_list;
+	struct list_head	i_list;		/* backing dev IO list */
 	struct list_head	i_sb_list;
 	struct list_head	i_dentry;
 	unsigned long		i_ino;
@@ -1329,9 +1329,6 @@ struct super_block {
 	struct xattr_handler	**s_xattr;
 
 	struct list_head	s_inodes;	/* all inodes */
-	struct list_head	s_dirty;	/* dirty inodes */
-	struct list_head	s_io;		/* parked for writeback */
-	struct list_head	s_more_io;	/* parked for more writeback */
 	struct hlist_head	s_anon;		/* anonymous dentries for (nfs) exporting */
 	struct list_head	s_files;
 	/* s_dentry_lru and s_nr_dentry_unused are protected by dcache_lock */
diff --git a/mm/backing-dev.c b/mm/backing-dev.c
index 493b468..883ee8a 100644
--- a/mm/backing-dev.c
+++ b/mm/backing-dev.c
@@ -22,6 +22,8 @@ struct backing_dev_info default_backing_dev_info = {
 EXPORT_SYMBOL_GPL(default_backing_dev_info);
 
 static struct class *bdi_class;
+DEFINE_SPINLOCK(bdi_lock);
+LIST_HEAD(bdi_list);
 
 #ifdef CONFIG_DEBUG_FS
 #include <linux/debugfs.h>
@@ -211,6 +213,10 @@ int bdi_register(struct backing_dev_info *bdi, struct device *parent,
 		goto exit;
 	}
 
+	spin_lock(&bdi_lock);
+	list_add_tail_rcu(&bdi->bdi_list, &bdi_list);
+	spin_unlock(&bdi_lock);
+
 	bdi->dev = dev;
 	bdi_debug_register(bdi, dev_name(dev));
 
@@ -225,9 +231,23 @@ int bdi_register_dev(struct backing_dev_info *bdi, dev_t dev)
 }
 EXPORT_SYMBOL(bdi_register_dev);
 
+static void bdi_remove_from_list(struct backing_dev_info *bdi)
+{
+	spin_lock(&bdi_lock);
+	list_del_rcu(&bdi->bdi_list);
+	spin_unlock(&bdi_lock);
+
+	/*
+	 * In case the bdi is freed right after unregister, we need to
+	 * make sure any RCU sections have exited
+	 */
+	synchronize_rcu();
+}
+
 void bdi_unregister(struct backing_dev_info *bdi)
 {
 	if (bdi->dev) {
+		bdi_remove_from_list(bdi);
 		bdi_debug_unregister(bdi);
 		device_unregister(bdi->dev);
 		bdi->dev = NULL;
@@ -245,6 +265,10 @@ int bdi_init(struct backing_dev_info *bdi)
 	bdi->min_ratio = 0;
 	bdi->max_ratio = 100;
 	bdi->max_prop_frac = PROP_FRAC_BASE;
+	INIT_LIST_HEAD(&bdi->bdi_list);
+	INIT_LIST_HEAD(&bdi->b_io);
+	INIT_LIST_HEAD(&bdi->b_dirty);
+	INIT_LIST_HEAD(&bdi->b_more_io);
 
 	for (i = 0; i < NR_BDI_STAT_ITEMS; i++) {
 		err = percpu_counter_init(&bdi->bdi_stat[i], 0);
@@ -259,6 +283,8 @@ int bdi_init(struct backing_dev_info *bdi)
 err:
 		while (i--)
 			percpu_counter_destroy(&bdi->bdi_stat[i]);
+
+		bdi_remove_from_list(bdi);
 	}
 
 	return err;
@@ -269,6 +295,10 @@ void bdi_destroy(struct backing_dev_info *bdi)
 {
 	int i;
 
+	WARN_ON(!list_empty(&bdi->b_dirty));
+	WARN_ON(!list_empty(&bdi->b_io));
+	WARN_ON(!list_empty(&bdi->b_more_io));
+
 	bdi_unregister(bdi);
 
 	for (i = 0; i < NR_BDI_STAT_ITEMS; i++)
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index bb553c3..2296ff4 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -319,7 +319,6 @@ static void task_dirty_limit(struct task_struct *tsk, long *pdirty)
 /*
  *
  */
-static DEFINE_SPINLOCK(bdi_lock);
 static unsigned int bdi_min_ratio;
 
 int bdi_set_min_ratio(struct backing_dev_info *bdi, unsigned int min_ratio)
-- 
1.6.3.rc0.1.gf800



* [PATCH 02/11] writeback: switch to per-bdi threads for flushing data
  2009-05-18 12:19 [PATCH 0/11] Per-bdi writeback flusher threads #4 Jens Axboe
  2009-05-18 12:19 ` [PATCH 01/11] writeback: move dirty inodes from super_block to backing_dev_info Jens Axboe
@ 2009-05-18 12:19 ` Jens Axboe
  2009-05-19 10:20   ` Richard Kennedy
                     ` (2 more replies)
  2009-05-18 12:19 ` [PATCH 03/11] writeback: get rid of pdflush completely Jens Axboe
                   ` (10 subsequent siblings)
  12 siblings, 3 replies; 57+ messages in thread
From: Jens Axboe @ 2009-05-18 12:19 UTC
  To: linux-kernel, linux-fsdevel
  Cc: chris.mason, david, hch, akpm, jack, yanmin_zhang, Jens Axboe

This gets rid of pdflush for bdi writeout and kupdated-style cleaning.
It is an experiment to see if we get better writeout behaviour with
per-bdi flushing. Some initial tests look pretty encouraging. A sample
ffsb workload that does random writes to files is about 8% faster here
on a simple SATA drive during the benchmark phase. File layout also seems
a LOT smoother in vmstat:

 r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa
 0  1      0 608848   2652 375372    0    0     0 71024  604    24  1 10 48 42
 0  1      0 549644   2712 433736    0    0     0 60692  505    27  1  8 48 44
 1  0      0 476928   2784 505192    0    0     4 29540  553    24  0  9 53 37
 0  1      0 457972   2808 524008    0    0     0 54876  331    16  0  4 38 58
 0  1      0 366128   2928 614284    0    0     4 92168  710    58  0 13 53 34
 0  1      0 295092   3000 684140    0    0     0 62924  572    23  0  9 53 37
 0  1      0 236592   3064 741704    0    0     4 58256  523    17  0  8 48 44
 0  1      0 165608   3132 811464    0    0     0 57460  560    21  0  8 54 38
 0  1      0 102952   3200 873164    0    0     4 74748  540    29  1 10 48 41
 0  1      0  48604   3252 926472    0    0     0 53248  469    29  0  7 47 45

where vanilla tends to fluctuate a lot in the creation phase:

 r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa
 1  1      0 678716   5792 303380    0    0     0 74064  565    50  1 11 52 36
 1  0      0 662488   5864 319396    0    0     4   352  302   329  0  2 47 51
 0  1      0 599312   5924 381468    0    0     0 78164  516    55  0  9 51 40
 0  1      0 519952   6008 459516    0    0     4 78156  622    56  1 11 52 37
 1  1      0 436640   6092 541632    0    0     0 82244  622    54  0 11 48 41
 0  1      0 436640   6092 541660    0    0     0     8  152    39  0  0 51 49
 0  1      0 332224   6200 644252    0    0     4 102800  728    46  1 13 49 36
 1  0      0 274492   6260 701056    0    0     4 12328  459    49  0  7 50 43
 0  1      0 211220   6324 763356    0    0     0 106940  515    37  1 10 51 39
 1  0      0 160412   6376 813468    0    0     0  8224  415    43  0  6 49 45
 1  1      0  85980   6452 886556    0    0     4 113516  575    39  1 11 54 34
 0  2      0  85968   6452 886620    0    0     0  1640  158   211  0  0 46 54

So apart from seemingly behaving better for buffered writeout, this also
allows us to potentially have more than one bdi thread flushing out data.
This may be useful for NUMA-type setups.

A 10-disk test with btrfs performs 26% faster with per-bdi flushing. Other
tests are pending.

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 fs/buffer.c                 |    2 +-
 fs/fs-writeback.c           |  309 ++++++++++++++++++++++++++-----------------
 fs/ntfs/super.c             |   32 +----
 fs/sync.c                   |    2 +-
 include/linux/backing-dev.h |   28 ++++
 include/linux/fs.h          |    3 +-
 include/linux/writeback.h   |    2 +-
 mm/backing-dev.c            |  198 ++++++++++++++++++++++++++--
 mm/page-writeback.c         |  141 +-------------------
 mm/vmscan.c                 |    2 +-
 10 files changed, 416 insertions(+), 303 deletions(-)

diff --git a/fs/buffer.c b/fs/buffer.c
index aed2977..14f0802 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -281,7 +281,7 @@ static void free_more_memory(void)
 	struct zone *zone;
 	int nid;
 
-	wakeup_pdflush(1024);
+	wakeup_flusher_threads(1024);
 	yield();
 
 	for_each_online_node(nid) {
diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index 34c8d1d..c40345c 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -19,6 +19,8 @@
 #include <linux/sched.h>
 #include <linux/fs.h>
 #include <linux/mm.h>
+#include <linux/kthread.h>
+#include <linux/freezer.h>
 #include <linux/writeback.h>
 #include <linux/blkdev.h>
 #include <linux/backing-dev.h>
@@ -61,10 +63,186 @@ int writeback_in_progress(struct backing_dev_info *bdi)
  */
 static void writeback_release(struct backing_dev_info *bdi)
 {
-	BUG_ON(!writeback_in_progress(bdi));
+	WARN_ON_ONCE(!writeback_in_progress(bdi));
+	bdi->wb_arg.nr_pages = 0;
+	bdi->wb_arg.sb = NULL;
 	clear_bit(BDI_pdflush, &bdi->state);
 }
 
+int bdi_start_writeback(struct backing_dev_info *bdi, struct super_block *sb,
+			 long nr_pages)
+{
+	/*
+	 * This only happens the first time someone kicks this bdi, so put
+	 * it out-of-line.
+	 */
+	if (unlikely(!bdi->task)) {
+		bdi_add_default_flusher_task(bdi);
+		return 1;
+	}
+
+	if (writeback_acquire(bdi)) {
+		bdi->wb_arg.nr_pages = nr_pages;
+		bdi->wb_arg.sb = sb;
+		/*
+		 * make above store seen before the task is woken
+		 */
+		smp_mb();
+		wake_up(&bdi->wait);
+	}
+
+	return 0;
+}
+
+/*
+ * The maximum number of pages to writeout in a single bdi flush/kupdate
+ * operation.  We do this so we don't hold I_SYNC against an inode for
+ * enormous amounts of time, which would block a userspace task which has
+ * been forced to throttle against that inode.  Also, the code reevaluates
+ * the dirty each time it has written this many pages.
+ */
+#define MAX_WRITEBACK_PAGES     1024
+
+/*
+ * Periodic writeback of "old" data.
+ *
+ * Define "old": the first time one of an inode's pages is dirtied, we mark the
+ * dirtying-time in the inode's address_space.  So this periodic writeback code
+ * just walks the superblock inode list, writing back any inodes which are
+ * older than a specific point in time.
+ *
+ * Try to run once per dirty_writeback_interval.  But if a writeback event
+ * takes longer than a dirty_writeback_interval interval, then leave a
+ * one-second gap.
+ *
+ * older_than_this takes precedence over nr_to_write.  So we'll only write back
+ * all dirty pages if they are all attached to "old" mappings.
+ */
+static void bdi_kupdated(struct backing_dev_info *bdi)
+{
+	unsigned long oldest_jif;
+	long nr_to_write;
+	struct writeback_control wbc = {
+		.bdi			= bdi,
+		.sync_mode		= WB_SYNC_NONE,
+		.older_than_this	= &oldest_jif,
+		.nr_to_write		= 0,
+		.for_kupdate		= 1,
+		.range_cyclic		= 1,
+	};
+
+	sync_supers();
+
+	oldest_jif = jiffies - msecs_to_jiffies(dirty_expire_interval * 10);
+
+	nr_to_write = global_page_state(NR_FILE_DIRTY) +
+			global_page_state(NR_UNSTABLE_NFS) +
+			(inodes_stat.nr_inodes - inodes_stat.nr_unused);
+
+	while (nr_to_write > 0) {
+		wbc.more_io = 0;
+		wbc.encountered_congestion = 0;
+		wbc.nr_to_write = MAX_WRITEBACK_PAGES;
+		generic_sync_bdi_inodes(NULL, &wbc);
+		if (wbc.nr_to_write > 0)
+			break;	/* All the old data is written */
+		nr_to_write -= MAX_WRITEBACK_PAGES;
+	}
+}
+
+static void bdi_pdflush(struct backing_dev_info *bdi)
+{
+	struct writeback_control wbc = {
+		.bdi			= bdi,
+		.sync_mode		= WB_SYNC_NONE,
+		.older_than_this	= NULL,
+		.range_cyclic		= 1,
+	};
+	long nr_pages = bdi->wb_arg.nr_pages;
+
+	for (;;) {
+		unsigned long background_thresh, dirty_thresh;
+		get_dirty_limits(&background_thresh, &dirty_thresh, NULL, NULL);
+		if ((global_page_state(NR_FILE_DIRTY) +
+		    global_page_state(NR_UNSTABLE_NFS) < background_thresh) &&
+		    nr_pages <= 0)
+			break;
+
+		wbc.more_io = 0;
+		wbc.encountered_congestion = 0;
+		wbc.nr_to_write = MAX_WRITEBACK_PAGES;
+		wbc.pages_skipped = 0;
+		generic_sync_bdi_inodes(bdi->wb_arg.sb, &wbc);
+		nr_pages -= MAX_WRITEBACK_PAGES - wbc.nr_to_write;
+		/*
+		 * If we ran out of stuff to write, bail unless more_io got set
+		 */
+		if (wbc.nr_to_write > 0 || wbc.pages_skipped > 0) {
+			if (wbc.more_io)
+				continue;
+			break;
+		}
+	}
+}
+
+/*
+ * Handle writeback of dirty data for the device backed by this bdi. Also
+ * wakes up periodically and does kupdated style flushing.
+ */
+int bdi_writeback_task(struct backing_dev_info *bdi)
+{
+	while (!kthread_should_stop()) {
+		unsigned long wait_jiffies;
+		DEFINE_WAIT(wait);
+
+		prepare_to_wait(&bdi->wait, &wait, TASK_INTERRUPTIBLE);
+		wait_jiffies = msecs_to_jiffies(dirty_writeback_interval * 10);
+		schedule_timeout(wait_jiffies);
+		try_to_freeze();
+
+		/*
+		 * We get here in two cases:
+		 *
+		 *  schedule_timeout() returned because the dirty writeback
+		 *  interval has elapsed. If that happens, we will be able
+		 *  to acquire the writeback lock and will proceed to do
+		 *  kupdated style writeout.
+		 *
+		 *  Someone called bdi_start_writeback(), which will acquire
+		 *  the writeback lock. This means our writeback_acquire()
+		 *  below will fail and we call into bdi_pdflush() for
+		 *  pdflush style writeout.
+		 *
+		 */
+		if (writeback_acquire(bdi))
+			bdi_kupdated(bdi);
+		else
+			bdi_pdflush(bdi);
+
+		writeback_release(bdi);
+		finish_wait(&bdi->wait, &wait);
+	}
+
+	return 0;
+}
+
+void bdi_writeback_all(struct super_block *sb, long nr_pages)
+{
+	struct backing_dev_info *bdi;
+
+	rcu_read_lock();
+
+restart:
+	list_for_each_entry_rcu(bdi, &bdi_list, bdi_list) {
+		if (!bdi_has_dirty_io(bdi))
+			continue;
+		if (bdi_start_writeback(bdi, sb, nr_pages))
+			goto restart;
+	}
+
+	rcu_read_unlock();
+}
+
 /**
  *	__mark_inode_dirty -	internal function
  *	@inode: inode to mark
@@ -263,46 +441,6 @@ static void queue_io(struct backing_dev_info *bdi,
 	move_expired_inodes(&bdi->b_dirty, &bdi->b_io, older_than_this);
 }
 
-static int sb_on_inode_list(struct super_block *sb, struct list_head *list)
-{
-	struct inode *inode;
-	int ret = 0;
-
-	spin_lock(&inode_lock);
-	list_for_each_entry(inode, list, i_list) {
-		if (inode->i_sb == sb) {
-			ret = 1;
-			break;
-		}
-	}
-	spin_unlock(&inode_lock);
-	return ret;
-}
-
-int sb_has_dirty_inodes(struct super_block *sb)
-{
-	struct backing_dev_info *bdi;
-	int ret = 0;
-
-	/*
-	 * This is REALLY expensive right now, but it'll go away
-	 * when the bdi writeback is introduced
-	 */
-	rcu_read_lock();
-	list_for_each_entry_rcu(bdi, &bdi_list, bdi_list) {
-		if (sb_on_inode_list(sb, &bdi->b_dirty) ||
-		    sb_on_inode_list(sb, &bdi->b_io) ||
-		    sb_on_inode_list(sb, &bdi->b_more_io)) {
-			ret = 1;
-			break;
-		}
-	}
-	rcu_read_unlock();
-
-	return ret;
-}
-EXPORT_SYMBOL(sb_has_dirty_inodes);
-
 /*
  * Write a single inode's dirty pages and inode data out to disk.
  * If `wait' is set, wait on the writeout.
@@ -461,11 +599,11 @@ __writeback_single_inode(struct inode *inode, struct writeback_control *wbc)
 	return __sync_single_inode(inode, wbc);
 }
 
-static void generic_sync_bdi_inodes(struct backing_dev_info *bdi,
-				    struct writeback_control *wbc,
-				    struct super_block *sb,
-				    int is_blkdev_sb)
+void generic_sync_bdi_inodes(struct super_block *sb,
+			     struct writeback_control *wbc)
 {
+	const int is_blkdev_sb = sb_is_blkdev_sb(sb);
+	struct backing_dev_info *bdi = wbc->bdi;
 	const unsigned long start = jiffies;	/* livelock avoidance */
 
 	spin_lock(&inode_lock);
@@ -516,13 +654,6 @@ static void generic_sync_bdi_inodes(struct backing_dev_info *bdi,
 			continue;		/* Skip a congested blockdev */
 		}
 
-		if (wbc->bdi && bdi != wbc->bdi) {
-			if (!is_blkdev_sb)
-				break;		/* fs has the wrong queue */
-			requeue_io(inode);
-			continue;		/* blockdev has wrong queue */
-		}
-
 		/*
 		 * Was this inode dirtied after sync_sb_inodes was called?
 		 * This keeps sync from extra jobs and livelock.
@@ -530,16 +661,10 @@ static void generic_sync_bdi_inodes(struct backing_dev_info *bdi,
 		if (inode_dirtied_after(inode, start))
 			break;
 
-		/* Is another pdflush already flushing this queue? */
-		if (current_is_pdflush() && !writeback_acquire(bdi))
-			break;
-
 		BUG_ON(inode->i_state & I_FREEING);
 		__iget(inode);
 		pages_skipped = wbc->pages_skipped;
 		__writeback_single_inode(inode, wbc);
-		if (current_is_pdflush())
-			writeback_release(bdi);
 		if (wbc->pages_skipped != pages_skipped) {
 			/*
 			 * writeback is not making progress due to locked
@@ -578,11 +703,6 @@ static void generic_sync_bdi_inodes(struct backing_dev_info *bdi,
  * a variety of queues, so all inodes are searched.  For other superblocks,
  * assume that all inodes are backed by the same queue.
  *
- * FIXME: this linear search could get expensive with many filesystems.  But
- * how to fix?  We need to go from an address_space to all inodes which share
- * a queue with that address_space.  (Easy: have a global "dirty superblocks"
- * list).
- *
  * The inodes to be written are parked on bdi->b_io.  They are moved back onto
  * bdi->b_dirty as they are selected for writing.  This way, none can be missed
  * on the writer throttling path, and we get decent balancing between many
@@ -591,13 +711,10 @@ static void generic_sync_bdi_inodes(struct backing_dev_info *bdi,
 void generic_sync_sb_inodes(struct super_block *sb,
 				struct writeback_control *wbc)
 {
-	const int is_blkdev_sb = sb_is_blkdev_sb(sb);
-	struct backing_dev_info *bdi;
-
-	rcu_read_lock();
-	list_for_each_entry_rcu(bdi, &bdi_list, bdi_list)
-		generic_sync_bdi_inodes(bdi, wbc, sb, is_blkdev_sb);
-	rcu_read_unlock();
+	if (wbc->bdi)
+		bdi_start_writeback(wbc->bdi, sb, 0);
+	else
+		bdi_writeback_all(sb, 0);
 
 	if (wbc->sync_mode == WB_SYNC_ALL) {
 		struct inode *inode, *old_inode = NULL;
@@ -653,58 +770,6 @@ static void sync_sb_inodes(struct super_block *sb,
 }
 
 /*
- * Start writeback of dirty pagecache data against all unlocked inodes.
- *
- * Note:
- * We don't need to grab a reference to superblock here. If it has non-empty
- * ->b_dirty it hadn't been killed yet and kill_super() won't proceed
- * past sync_inodes_sb() until the ->b_dirty/b_io/b_more_io lists are all
- * empty. Since __sync_single_inode() regains inode_lock before it finally moves
- * inode from superblock lists we are OK.
- *
- * If `older_than_this' is non-zero then only flush inodes which have a
- * flushtime older than *older_than_this.
- *
- * If `bdi' is non-zero then we will scan the first inode against each
- * superblock until we find the matching ones.  One group will be the dirty
- * inodes against a filesystem.  Then when we hit the dummy blockdev superblock,
- * sync_sb_inodes will seekout the blockdev which matches `bdi'.  Maybe not
- * super-efficient but we're about to do a ton of I/O...
- */
-void
-writeback_inodes(struct writeback_control *wbc)
-{
-	struct super_block *sb;
-
-	might_sleep();
-	spin_lock(&sb_lock);
-restart:
-	list_for_each_entry_reverse(sb, &super_blocks, s_list) {
-		if (sb_has_dirty_inodes(sb)) {
-			/* we're making our own get_super here */
-			sb->s_count++;
-			spin_unlock(&sb_lock);
-			/*
-			 * If we can't get the readlock, there's no sense in
-			 * waiting around, most of the time the FS is going to
-			 * be unmounted by the time it is released.
-			 */
-			if (down_read_trylock(&sb->s_umount)) {
-				if (sb->s_root)
-					sync_sb_inodes(sb, wbc);
-				up_read(&sb->s_umount);
-			}
-			spin_lock(&sb_lock);
-			if (__put_super_and_need_restart(sb))
-				goto restart;
-		}
-		if (wbc->nr_to_write <= 0)
-			break;
-	}
-	spin_unlock(&sb_lock);
-}
-
-/*
  * writeback and wait upon the filesystem's dirty inodes.  The caller will
  * do this in two passes - one to write, and one to wait.
  *
diff --git a/fs/ntfs/super.c b/fs/ntfs/super.c
index f76951d..c4cb157 100644
--- a/fs/ntfs/super.c
+++ b/fs/ntfs/super.c
@@ -2373,39 +2373,13 @@ static void ntfs_put_super(struct super_block *sb)
 		vol->mftmirr_ino = NULL;
 	}
 	/*
-	 * If any dirty inodes are left, throw away all mft data page cache
-	 * pages to allow a clean umount.  This should never happen any more
-	 * due to mft.c::ntfs_mft_writepage() cleaning all the dirty pages as
-	 * the underlying mft records are written out and cleaned.  If it does,
+	 * We should have no dirty inodes left, due to
+	 * mft.c::ntfs_mft_writepage() cleaning all the dirty pages as
+	 * the underlying mft records are written out and cleaned.  If it does
 	 * happen anyway, we want to know...
 	 */
 	ntfs_commit_inode(vol->mft_ino);
 	write_inode_now(vol->mft_ino, 1);
-	if (sb_has_dirty_inodes(sb)) {
-		const char *s1, *s2;
-
-		mutex_lock(&vol->mft_ino->i_mutex);
-		truncate_inode_pages(vol->mft_ino->i_mapping, 0);
-		mutex_unlock(&vol->mft_ino->i_mutex);
-		write_inode_now(vol->mft_ino, 1);
-		if (sb_has_dirty_inodes(sb)) {
-			static const char *_s1 = "inodes";
-			static const char *_s2 = "";
-			s1 = _s1;
-			s2 = _s2;
-		} else {
-			static const char *_s1 = "mft pages";
-			static const char *_s2 = "They have been thrown "
-					"away.  ";
-			s1 = _s1;
-			s2 = _s2;
-		}
-		ntfs_error(sb, "Dirty %s found at umount time.  %sYou should "
-				"run chkdsk.  Please email "
-				"linux-ntfs-dev@lists.sourceforge.net and say "
-				"that you saw this message.  Thank you.", s1,
-				s2);
-	}
 #endif /* NTFS_RW */
 
 	iput(vol->mft_ino);
diff --git a/fs/sync.c b/fs/sync.c
index 7abc65f..3887f10 100644
--- a/fs/sync.c
+++ b/fs/sync.c
@@ -23,7 +23,7 @@
  */
 static void do_sync(unsigned long wait)
 {
-	wakeup_pdflush(0);
+	wakeup_flusher_threads(0);
 	sync_inodes(0);		/* All mappings, inodes and their blockdevs */
 	vfs_dq_sync(NULL);
 	sync_supers();		/* Write the superblocks */
diff --git a/include/linux/backing-dev.h b/include/linux/backing-dev.h
index 86668c7..a848eea 100644
--- a/include/linux/backing-dev.h
+++ b/include/linux/backing-dev.h
@@ -24,6 +24,7 @@ struct dentry;
  */
 enum bdi_state {
 	BDI_pdflush,		/* A pdflush thread is working this device */
+	BDI_pending,		/* On its way to being activated */
 	BDI_async_congested,	/* The async (write) queue is getting full */
 	BDI_sync_congested,	/* The sync queue is getting full */
 	BDI_unused,		/* Available bits start here */
@@ -39,8 +40,14 @@ enum bdi_stat_item {
 
 #define BDI_STAT_BATCH (8*(1+ilog2(nr_cpu_ids)))
 
+struct bdi_writeback_arg {
+	unsigned long nr_pages;
+	struct super_block *sb;
+};
+
 struct backing_dev_info {
 	struct list_head bdi_list;
+	struct rcu_head rcu_head;
 
 	unsigned long ra_pages;	/* max readahead in PAGE_CACHE_SIZE units */
 	unsigned long state;	/* Always use atomic bitops on this */
@@ -60,6 +67,9 @@ struct backing_dev_info {
 
 	struct device *dev;
 
+	struct task_struct	*task;		/* writeback task */
+	wait_queue_head_t	wait;
+	struct bdi_writeback_arg wb_arg;	/* protected by BDI_pdflush */
 	struct list_head	b_dirty;	/* dirty inodes */
 	struct list_head	b_io;		/* parked for writeback */
 	struct list_head	b_more_io;	/* parked for more writeback */
@@ -77,10 +87,22 @@ int bdi_register(struct backing_dev_info *bdi, struct device *parent,
 		const char *fmt, ...);
 int bdi_register_dev(struct backing_dev_info *bdi, dev_t dev);
 void bdi_unregister(struct backing_dev_info *bdi);
+int bdi_start_writeback(struct backing_dev_info *bdi, struct super_block *sb,
+			 long nr_pages);
+int bdi_writeback_task(struct backing_dev_info *bdi);
+void bdi_writeback_all(struct super_block *sb, long nr_pages);
+void bdi_add_default_flusher_task(struct backing_dev_info *bdi);
 
 extern spinlock_t bdi_lock;
 extern struct list_head bdi_list;
 
+static inline int bdi_has_dirty_io(struct backing_dev_info *bdi)
+{
+	return !list_empty(&bdi->b_dirty) ||
+	       !list_empty(&bdi->b_io) ||
+	       !list_empty(&bdi->b_more_io);
+}
+
 static inline void __add_bdi_stat(struct backing_dev_info *bdi,
 		enum bdi_stat_item item, s64 amount)
 {
@@ -196,6 +218,7 @@ int bdi_set_max_ratio(struct backing_dev_info *bdi, unsigned int max_ratio);
 #define BDI_CAP_EXEC_MAP	0x00000040
 #define BDI_CAP_NO_ACCT_WB	0x00000080
 #define BDI_CAP_SWAP_BACKED	0x00000100
+#define BDI_CAP_FLUSH_FORKER	0x00000200
 
 #define BDI_CAP_VMFLAGS \
 	(BDI_CAP_READ_MAP | BDI_CAP_WRITE_MAP | BDI_CAP_EXEC_MAP)
@@ -265,6 +288,11 @@ static inline bool bdi_cap_swap_backed(struct backing_dev_info *bdi)
 	return bdi->capabilities & BDI_CAP_SWAP_BACKED;
 }
 
+static inline bool bdi_cap_flush_forker(struct backing_dev_info *bdi)
+{
+	return bdi->capabilities & BDI_CAP_FLUSH_FORKER;
+}
+
 static inline bool mapping_cap_writeback_dirty(struct address_space *mapping)
 {
 	return bdi_cap_writeback_dirty(mapping->backing_dev_info);
diff --git a/include/linux/fs.h b/include/linux/fs.h
index 6b475d4..ecdc544 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -2063,6 +2063,8 @@ extern int invalidate_inode_pages2_range(struct address_space *mapping,
 					 pgoff_t start, pgoff_t end);
 extern void generic_sync_sb_inodes(struct super_block *sb,
 				struct writeback_control *wbc);
+extern void generic_sync_bdi_inodes(struct super_block *sb,
+				struct writeback_control *);
 extern int write_inode_now(struct inode *, int);
 extern int filemap_fdatawrite(struct address_space *);
 extern int filemap_flush(struct address_space *);
@@ -2180,7 +2182,6 @@ extern int bdev_read_only(struct block_device *);
 extern int set_blocksize(struct block_device *, int);
 extern int sb_set_blocksize(struct super_block *, int);
 extern int sb_min_blocksize(struct super_block *, int);
-extern int sb_has_dirty_inodes(struct super_block *);
 
 extern int generic_file_mmap(struct file *, struct vm_area_struct *);
 extern int generic_file_readonly_mmap(struct file *, struct vm_area_struct *);
diff --git a/include/linux/writeback.h b/include/linux/writeback.h
index 9344547..a8e9f78 100644
--- a/include/linux/writeback.h
+++ b/include/linux/writeback.h
@@ -99,7 +99,7 @@ static inline void inode_sync_wait(struct inode *inode)
 /*
  * mm/page-writeback.c
  */
-int wakeup_pdflush(long nr_pages);
+void wakeup_flusher_threads(long nr_pages);
 void laptop_io_completion(void);
 void laptop_sync_completion(void);
 void throttle_vm_writeout(gfp_t gfp_mask);
diff --git a/mm/backing-dev.c b/mm/backing-dev.c
index 883ee8a..c759449 100644
--- a/mm/backing-dev.c
+++ b/mm/backing-dev.c
@@ -1,8 +1,11 @@
 
 #include <linux/wait.h>
 #include <linux/backing-dev.h>
+#include <linux/kthread.h>
+#include <linux/freezer.h>
 #include <linux/fs.h>
 #include <linux/pagemap.h>
+#include <linux/mm.h>
 #include <linux/sched.h>
 #include <linux/module.h>
 #include <linux/writeback.h>
@@ -16,7 +19,7 @@ EXPORT_SYMBOL(default_unplug_io_fn);
 struct backing_dev_info default_backing_dev_info = {
 	.ra_pages	= VM_MAX_READAHEAD * 1024 / PAGE_CACHE_SIZE,
 	.state		= 0,
-	.capabilities	= BDI_CAP_MAP_COPY,
+	.capabilities	= BDI_CAP_MAP_COPY | BDI_CAP_FLUSH_FORKER,
 	.unplug_io_fn	= default_unplug_io_fn,
 };
 EXPORT_SYMBOL_GPL(default_backing_dev_info);
@@ -24,6 +27,7 @@ EXPORT_SYMBOL_GPL(default_backing_dev_info);
 static struct class *bdi_class;
 DEFINE_SPINLOCK(bdi_lock);
 LIST_HEAD(bdi_list);
+LIST_HEAD(bdi_pending_list);
 
 #ifdef CONFIG_DEBUG_FS
 #include <linux/debugfs.h>
@@ -195,6 +199,146 @@ static int __init default_bdi_init(void)
 }
 subsys_initcall(default_bdi_init);
 
+static int bdi_start_fn(void *ptr)
+{
+	struct backing_dev_info *bdi = ptr;
+	struct task_struct *tsk = current;
+
+	/*
+	 * Add us to the active bdi_list
+	 */
+	spin_lock_bh(&bdi_lock);
+	list_add_rcu(&bdi->bdi_list, &bdi_list);
+	spin_unlock_bh(&bdi_lock);
+
+	tsk->flags |= PF_FLUSHER | PF_SWAPWRITE;
+	set_freezable();
+
+	/*
+	 * Our parent may run at a different priority, just set us to normal
+	 */
+	set_user_nice(tsk, 0);
+
+	/*
+	 * Clear pending bit and wakeup anybody waiting to tear us down
+	 */
+	clear_bit(BDI_pending, &bdi->state);
+	wake_up_bit(&bdi->state, BDI_pending);
+
+	return bdi_writeback_task(bdi);
+}
+
+static int bdi_forker_task(void *ptr)
+{
+	struct backing_dev_info *bdi, *me = ptr;
+
+	for (;;) {
+		DEFINE_WAIT(wait);
+
+		/*
+		 * Should never trigger on the default bdi
+		 */
+		WARN_ON(bdi_has_dirty_io(me));
+
+		prepare_to_wait(&me->wait, &wait, TASK_INTERRUPTIBLE);
+		smp_mb();
+		if (list_empty(&bdi_pending_list))
+			schedule();
+		else {
+repeat:
+			bdi = NULL;
+
+			spin_lock_bh(&bdi_lock);
+			if (!list_empty(&bdi_pending_list)) {
+				bdi = list_entry(bdi_pending_list.next,
+						 struct backing_dev_info,
+						 bdi_list);
+				list_del_init(&bdi->bdi_list);
+			}
+			spin_unlock_bh(&bdi_lock);
+
+			/*
+			 * If no bdi or bdi already got setup, continue
+			 */
+			if (!bdi || bdi->task)
+				continue;
+
+			bdi->task = kthread_run(bdi_start_fn, bdi, "bdi-%s",
+						dev_name(bdi->dev));
+			/*
+			 * If task creation fails, then readd the bdi to
+			 * the pending list and force writeout of the bdi
+			 * from this forker thread. That will free some memory
+			 * and we can try again.
+			 */
+			if (!bdi->task) {
+				struct writeback_control wbc = {
+					.bdi			= bdi,
+					.sync_mode		= WB_SYNC_NONE,
+					.older_than_this	= NULL,
+					.range_cyclic		= 1,
+				};
+
+				/*
+				 * Add this 'bdi' to the back, so we get
+				 * a chance to flush other bdi's to free
+				 * memory.
+				 */
+				spin_lock_bh(&bdi_lock);
+				list_add_tail(&bdi->bdi_list,
+						&bdi_pending_list);
+				spin_unlock_bh(&bdi_lock);
+
+				wbc.nr_to_write = 1024;
+				generic_sync_bdi_inodes(NULL, &wbc);
+				goto repeat;
+			}
+		}
+
+		finish_wait(&me->wait, &wait);
+	}
+
+	return 0;
+}
+
+/*
+ * Grace period has now ended, init bdi->bdi_list and add us to the
+ * list of bdi's that are pending for task creation. Wake up
+ * bdi_forker_task() to finish the job and add us back to the
+ * active bdi_list.
+ */
+static void bdi_add_to_pending(struct rcu_head *head)
+{
+	struct backing_dev_info *bdi;
+
+	bdi = container_of(head, struct backing_dev_info, rcu_head);
+	INIT_LIST_HEAD(&bdi->bdi_list);
+
+	spin_lock(&bdi_lock);
+	list_add_tail(&bdi->bdi_list, &bdi_pending_list);
+	spin_unlock(&bdi_lock);
+
+	wake_up(&default_backing_dev_info.wait);
+}
+
+void bdi_add_default_flusher_task(struct backing_dev_info *bdi)
+{
+	if (test_and_set_bit(BDI_pending, &bdi->state))
+		return;
+
+	spin_lock_bh(&bdi_lock);
+	list_del_rcu(&bdi->bdi_list);
+	spin_unlock_bh(&bdi_lock);
+
+	/*
+	 * We need to wait for the current grace period to end,
+	 * in case others were browsing the bdi_list as well.
+	 * So defer the adding and wakeup to after the RCU
+	 * grace period has ended.
+	 */
+	call_rcu(&bdi->rcu_head, bdi_add_to_pending);
+}
+
 int bdi_register(struct backing_dev_info *bdi, struct device *parent,
 		const char *fmt, ...)
 {
@@ -213,9 +357,23 @@ int bdi_register(struct backing_dev_info *bdi, struct device *parent,
 		goto exit;
 	}
 
-	spin_lock(&bdi_lock);
-	list_add_tail_rcu(&bdi->bdi_list, &bdi_list);
-	spin_unlock(&bdi_lock);
+	/*
+	 * Just start the forker thread for our default backing_dev_info,
+	 * and add other bdi's to the list. They will get a thread created
+	 * on-demand when they need it.
+	 */
+	if (bdi_cap_flush_forker(bdi)) {
+		bdi->task = kthread_run(bdi_forker_task, bdi, "bdi-%s",
+						dev_name(dev));
+		if (!bdi->task) {
+			ret = -ENOMEM;
+			goto exit;
+		}
+	} else {
+		spin_lock_bh(&bdi_lock);
+		list_add_tail_rcu(&bdi->bdi_list, &bdi_list);
+		spin_unlock_bh(&bdi_lock);
+	}
 
 	bdi->dev = dev;
 	bdi_debug_register(bdi, dev_name(dev));
@@ -231,11 +389,22 @@ int bdi_register_dev(struct backing_dev_info *bdi, dev_t dev)
 }
 EXPORT_SYMBOL(bdi_register_dev);
 
-static void bdi_remove_from_list(struct backing_dev_info *bdi)
+static int sched_wait(void *word)
 {
-	spin_lock(&bdi_lock);
+	schedule();
+	return 0;
+}
+
+static void bdi_wb_shutdown(struct backing_dev_info *bdi)
+{
+	/*
+	 * If setup is pending, wait for that to complete first
+	 */
+	wait_on_bit(&bdi->state, BDI_pending, sched_wait, TASK_UNINTERRUPTIBLE);
+
+	spin_lock_bh(&bdi_lock);
 	list_del_rcu(&bdi->bdi_list);
-	spin_unlock(&bdi_lock);
+	spin_unlock_bh(&bdi_lock);
 
 	/*
 	 * In case the bdi is freed right after unregister, we need to
@@ -247,7 +416,13 @@ static void bdi_remove_from_list(struct backing_dev_info *bdi)
 void bdi_unregister(struct backing_dev_info *bdi)
 {
 	if (bdi->dev) {
-		bdi_remove_from_list(bdi);
+		if (!bdi_cap_flush_forker(bdi)) {
+			bdi_wb_shutdown(bdi);
+			if (bdi->task) {
+				kthread_stop(bdi->task);
+				bdi->task = NULL;
+			}
+		}
 		bdi_debug_unregister(bdi);
 		device_unregister(bdi->dev);
 		bdi->dev = NULL;
@@ -257,14 +432,15 @@ EXPORT_SYMBOL(bdi_unregister);
 
 int bdi_init(struct backing_dev_info *bdi)
 {
-	int i;
-	int err;
+	int i, err;
 
+	INIT_RCU_HEAD(&bdi->rcu_head);
 	bdi->dev = NULL;
 
 	bdi->min_ratio = 0;
 	bdi->max_ratio = 100;
 	bdi->max_prop_frac = PROP_FRAC_BASE;
+	init_waitqueue_head(&bdi->wait);
 	INIT_LIST_HEAD(&bdi->bdi_list);
 	INIT_LIST_HEAD(&bdi->b_io);
 	INIT_LIST_HEAD(&bdi->b_dirty);
@@ -283,8 +459,6 @@ int bdi_init(struct backing_dev_info *bdi)
 err:
 		while (i--)
 			percpu_counter_destroy(&bdi->bdi_stat[i]);
-
-		bdi_remove_from_list(bdi);
 	}
 
 	return err;
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 2296ff4..76269f8 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -36,15 +36,6 @@
 #include <linux/pagevec.h>
 
 /*
- * The maximum number of pages to writeout in a single bdflush/kupdate
- * operation.  We do this so we don't hold I_SYNC against an inode for
- * enormous amounts of time, which would block a userspace task which has
- * been forced to throttle against that inode.  Also, the code reevaluates
- * the dirty each time it has written this many pages.
- */
-#define MAX_WRITEBACK_PAGES	1024
-
-/*
  * After a CPU has dirtied this many pages, balance_dirty_pages_ratelimited
  * will look to see if it needs to force writeback or throttling.
  */
@@ -117,8 +108,6 @@ EXPORT_SYMBOL(laptop_mode);
 /* End of sysctl-exported parameters */
 
 
-static void background_writeout(unsigned long _min_pages);
-
 /*
  * Scale the writeback cache size proportional to the relative writeout speeds.
  *
@@ -541,7 +530,7 @@ static void balance_dirty_pages(struct address_space *mapping)
 		 * been flushed to permanent storage.
 		 */
 		if (bdi_nr_reclaimable) {
-			writeback_inodes(&wbc);
+			generic_sync_bdi_inodes(NULL, &wbc);
 			pages_written += write_chunk - wbc.nr_to_write;
 			get_dirty_limits(&background_thresh, &dirty_thresh,
 				       &bdi_thresh, bdi);
@@ -592,7 +581,7 @@ static void balance_dirty_pages(struct address_space *mapping)
 			(!laptop_mode && (global_page_state(NR_FILE_DIRTY)
 					  + global_page_state(NR_UNSTABLE_NFS)
 					  > background_thresh)))
-		pdflush_operation(background_writeout, 0);
+		bdi_start_writeback(bdi, NULL, 0);
 }
 
 void set_page_dirty_balance(struct page *page, int page_mkwrite)
@@ -677,152 +666,36 @@ void throttle_vm_writeout(gfp_t gfp_mask)
 }
 
 /*
- * writeback at least _min_pages, and keep writing until the amount of dirty
- * memory is less than the background threshold, or until we're all clean.
- */
-static void background_writeout(unsigned long _min_pages)
-{
-	long min_pages = _min_pages;
-	struct writeback_control wbc = {
-		.bdi		= NULL,
-		.sync_mode	= WB_SYNC_NONE,
-		.older_than_this = NULL,
-		.nr_to_write	= 0,
-		.nonblocking	= 1,
-		.range_cyclic	= 1,
-	};
-
-	for ( ; ; ) {
-		unsigned long background_thresh;
-		unsigned long dirty_thresh;
-
-		get_dirty_limits(&background_thresh, &dirty_thresh, NULL, NULL);
-		if (global_page_state(NR_FILE_DIRTY) +
-			global_page_state(NR_UNSTABLE_NFS) < background_thresh
-				&& min_pages <= 0)
-			break;
-		wbc.more_io = 0;
-		wbc.encountered_congestion = 0;
-		wbc.nr_to_write = MAX_WRITEBACK_PAGES;
-		wbc.pages_skipped = 0;
-		writeback_inodes(&wbc);
-		min_pages -= MAX_WRITEBACK_PAGES - wbc.nr_to_write;
-		if (wbc.nr_to_write > 0 || wbc.pages_skipped > 0) {
-			/* Wrote less than expected */
-			if (wbc.encountered_congestion || wbc.more_io)
-				congestion_wait(WRITE, HZ/10);
-			else
-				break;
-		}
-	}
-}
-
-/*
  * Start writeback of `nr_pages' pages.  If `nr_pages' is zero, write back
  * the whole world.  Returns 0 if a pdflush thread was dispatched.  Returns
  * -1 if all pdflush threads were busy.
  */
-int wakeup_pdflush(long nr_pages)
+void wakeup_flusher_threads(long nr_pages)
 {
 	if (nr_pages == 0)
 		nr_pages = global_page_state(NR_FILE_DIRTY) +
 				global_page_state(NR_UNSTABLE_NFS);
-	return pdflush_operation(background_writeout, nr_pages);
+	bdi_writeback_all(NULL, nr_pages);
+	return;
 }
 
-static void wb_timer_fn(unsigned long unused);
 static void laptop_timer_fn(unsigned long unused);
 
-static DEFINE_TIMER(wb_timer, wb_timer_fn, 0, 0);
 static DEFINE_TIMER(laptop_mode_wb_timer, laptop_timer_fn, 0, 0);
 
 /*
- * Periodic writeback of "old" data.
- *
- * Define "old": the first time one of an inode's pages is dirtied, we mark the
- * dirtying-time in the inode's address_space.  So this periodic writeback code
- * just walks the superblock inode list, writing back any inodes which are
- * older than a specific point in time.
- *
- * Try to run once per dirty_writeback_interval.  But if a writeback event
- * takes longer than a dirty_writeback_interval interval, then leave a
- * one-second gap.
- *
- * older_than_this takes precedence over nr_to_write.  So we'll only write back
- * all dirty pages if they are all attached to "old" mappings.
- */
-static void wb_kupdate(unsigned long arg)
-{
-	unsigned long oldest_jif;
-	unsigned long start_jif;
-	unsigned long next_jif;
-	long nr_to_write;
-	struct writeback_control wbc = {
-		.bdi		= NULL,
-		.sync_mode	= WB_SYNC_NONE,
-		.older_than_this = &oldest_jif,
-		.nr_to_write	= 0,
-		.nonblocking	= 1,
-		.for_kupdate	= 1,
-		.range_cyclic	= 1,
-	};
-
-	sync_supers();
-
-	oldest_jif = jiffies - msecs_to_jiffies(dirty_expire_interval * 10);
-	start_jif = jiffies;
-	next_jif = start_jif + msecs_to_jiffies(dirty_writeback_interval * 10);
-	nr_to_write = global_page_state(NR_FILE_DIRTY) +
-			global_page_state(NR_UNSTABLE_NFS) +
-			(inodes_stat.nr_inodes - inodes_stat.nr_unused);
-	while (nr_to_write > 0) {
-		wbc.more_io = 0;
-		wbc.encountered_congestion = 0;
-		wbc.nr_to_write = MAX_WRITEBACK_PAGES;
-		writeback_inodes(&wbc);
-		if (wbc.nr_to_write > 0) {
-			if (wbc.encountered_congestion || wbc.more_io)
-				congestion_wait(WRITE, HZ/10);
-			else
-				break;	/* All the old data is written */
-		}
-		nr_to_write -= MAX_WRITEBACK_PAGES - wbc.nr_to_write;
-	}
-	if (time_before(next_jif, jiffies + HZ))
-		next_jif = jiffies + HZ;
-	if (dirty_writeback_interval)
-		mod_timer(&wb_timer, next_jif);
-}
-
-/*
  * sysctl handler for /proc/sys/vm/dirty_writeback_centisecs
  */
 int dirty_writeback_centisecs_handler(ctl_table *table, int write,
 	struct file *file, void __user *buffer, size_t *length, loff_t *ppos)
 {
 	proc_dointvec(table, write, file, buffer, length, ppos);
-	if (dirty_writeback_interval)
-		mod_timer(&wb_timer, jiffies +
-			msecs_to_jiffies(dirty_writeback_interval * 10));
-	else
-		del_timer(&wb_timer);
 	return 0;
 }
 
-static void wb_timer_fn(unsigned long unused)
-{
-	if (pdflush_operation(wb_kupdate, 0) < 0)
-		mod_timer(&wb_timer, jiffies + HZ); /* delay 1 second */
-}
-
-static void laptop_flush(unsigned long unused)
-{
-	sys_sync();
-}
-
 static void laptop_timer_fn(unsigned long unused)
 {
-	pdflush_operation(laptop_flush, 0);
+	wakeup_flusher_threads(0);
 }
 
 /*
@@ -905,8 +778,6 @@ void __init page_writeback_init(void)
 {
 	int shift;
 
-	mod_timer(&wb_timer,
-		  jiffies + msecs_to_jiffies(dirty_writeback_interval * 10));
 	writeback_set_ratelimit();
 	register_cpu_notifier(&ratelimit_nb);
 
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 5fa3eda..e37fd38 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1654,7 +1654,7 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
 		 */
 		if (total_scanned > sc->swap_cluster_max +
 					sc->swap_cluster_max / 2) {
-			wakeup_pdflush(laptop_mode ? 0 : total_scanned);
+			wakeup_flusher_threads(laptop_mode ? 0 : total_scanned);
 			sc->may_writepage = 1;
 		}
 
-- 
1.6.3.rc0.1.gf800


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH 03/11] writeback: get rid of pdflush completely
  2009-05-18 12:19 [PATCH 0/11] Per-bdi writeback flusher threads #4 Jens Axboe
  2009-05-18 12:19 ` [PATCH 01/11] writeback: move dirty inodes from super_block to backing_dev_info Jens Axboe
  2009-05-18 12:19 ` [PATCH 02/11] writeback: switch to per-bdi threads for flushing data Jens Axboe
@ 2009-05-18 12:19 ` Jens Axboe
  2009-05-18 12:19 ` [PATCH 04/11] writeback: separate the flushing state/task from the bdi Jens Axboe
                   ` (9 subsequent siblings)
  12 siblings, 0 replies; 57+ messages in thread
From: Jens Axboe @ 2009-05-18 12:19 UTC (permalink / raw
  To: linux-kernel, linux-fsdevel
  Cc: chris.mason, david, hch, akpm, jack, yanmin_zhang, Jens Axboe

It is now unused, so kill it off.
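
For context, a minimal sketch (not part of this patch) of what the
conversion in the preceding patches means for a former pdflush call
site; the old entry point could fail when the whole pool was busy,
while the per-bdi wakeup cannot:

	/* old: defer to the shared pdflush pool; returned -1 if every
	 * pdflush thread was already busy */
	if (pdflush_operation(background_writeout, nr_pages) < 0)
		mod_timer(&wb_timer, jiffies + HZ);	/* retry later */

	/* new: wake the per-bdi flusher threads directly */
	wakeup_flusher_threads(nr_pages);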

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 fs/fs-writeback.c         |    5 +
 include/linux/writeback.h |   12 --
 mm/Makefile               |    2 +-
 mm/pdflush.c              |  269 ---------------------------------------------
 4 files changed, 6 insertions(+), 282 deletions(-)
 delete mode 100644 mm/pdflush.c

diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index c40345c..8a25d14 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -29,6 +29,11 @@
 
 #define inode_to_bdi(inode)	((inode)->i_mapping->backing_dev_info)
 
+/*
+ * We don't actually have pdflush, but this one is exported through /proc...
+ */
+int nr_pdflush_threads;
+
 /**
  * writeback_acquire - attempt to get exclusive writeback access to a device
  * @bdi: the device's backing_dev_info structure
diff --git a/include/linux/writeback.h b/include/linux/writeback.h
index a8e9f78..baf04a9 100644
--- a/include/linux/writeback.h
+++ b/include/linux/writeback.h
@@ -14,17 +14,6 @@ extern struct list_head inode_in_use;
 extern struct list_head inode_unused;
 
 /*
- * Yes, writeback.h requires sched.h
- * No, sched.h is not included from here.
- */
-static inline int task_is_pdflush(struct task_struct *task)
-{
-	return task->flags & PF_FLUSHER;
-}
-
-#define current_is_pdflush()	task_is_pdflush(current)
-
-/*
  * fs/fs-writeback.c
  */
 enum writeback_sync_modes {
@@ -151,7 +140,6 @@ balance_dirty_pages_ratelimited(struct address_space *mapping)
 typedef int (*writepage_t)(struct page *page, struct writeback_control *wbc,
 				void *data);
 
-int pdflush_operation(void (*fn)(unsigned long), unsigned long arg0);
 int generic_writepages(struct address_space *mapping,
 		       struct writeback_control *wbc);
 int write_cache_pages(struct address_space *mapping,
diff --git a/mm/Makefile b/mm/Makefile
index ec73c68..2adb811 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -8,7 +8,7 @@ mmu-$(CONFIG_MMU)	:= fremap.o highmem.o madvise.o memory.o mincore.o \
 			   vmalloc.o
 
 obj-y			:= bootmem.o filemap.o mempool.o oom_kill.o fadvise.o \
-			   maccess.o page_alloc.o page-writeback.o pdflush.o \
+			   maccess.o page_alloc.o page-writeback.o \
 			   readahead.o swap.o truncate.o vmscan.o shmem.o \
 			   prio_tree.o util.o mmzone.o vmstat.o backing-dev.o \
 			   page_isolation.o mm_init.o $(mmu-y)
diff --git a/mm/pdflush.c b/mm/pdflush.c
deleted file mode 100644
index 235ac44..0000000
--- a/mm/pdflush.c
+++ /dev/null
@@ -1,269 +0,0 @@
-/*
- * mm/pdflush.c - worker threads for writing back filesystem data
- *
- * Copyright (C) 2002, Linus Torvalds.
- *
- * 09Apr2002	Andrew Morton
- *		Initial version
- * 29Feb2004	kaos@sgi.com
- *		Move worker thread creation to kthread to avoid chewing
- *		up stack space with nested calls to kernel_thread.
- */
-
-#include <linux/sched.h>
-#include <linux/list.h>
-#include <linux/signal.h>
-#include <linux/spinlock.h>
-#include <linux/gfp.h>
-#include <linux/init.h>
-#include <linux/module.h>
-#include <linux/fs.h>		/* Needed by writeback.h	  */
-#include <linux/writeback.h>	/* Prototypes pdflush_operation() */
-#include <linux/kthread.h>
-#include <linux/cpuset.h>
-#include <linux/freezer.h>
-
-
-/*
- * Minimum and maximum number of pdflush instances
- */
-#define MIN_PDFLUSH_THREADS	2
-#define MAX_PDFLUSH_THREADS	8
-
-static void start_one_pdflush_thread(void);
-
-
-/*
- * The pdflush threads are worker threads for writing back dirty data.
- * Ideally, we'd like one thread per active disk spindle.  But the disk
- * topology is very hard to divine at this level.   Instead, we take
- * care in various places to prevent more than one pdflush thread from
- * performing writeback against a single filesystem.  pdflush threads
- * have the PF_FLUSHER flag set in current->flags to aid in this.
- */
-
-/*
- * All the pdflush threads.  Protected by pdflush_lock
- */
-static LIST_HEAD(pdflush_list);
-static DEFINE_SPINLOCK(pdflush_lock);
-
-/*
- * The count of currently-running pdflush threads.  Protected
- * by pdflush_lock.
- *
- * Readable by sysctl, but not writable.  Published to userspace at
- * /proc/sys/vm/nr_pdflush_threads.
- */
-int nr_pdflush_threads = 0;
-
-/*
- * The time at which the pdflush thread pool last went empty
- */
-static unsigned long last_empty_jifs;
-
-/*
- * The pdflush thread.
- *
- * Thread pool management algorithm:
- * 
- * - The minimum and maximum number of pdflush instances are bound
- *   by MIN_PDFLUSH_THREADS and MAX_PDFLUSH_THREADS.
- * 
- * - If there have been no idle pdflush instances for 1 second, create
- *   a new one.
- * 
- * - If the least-recently-went-to-sleep pdflush thread has been asleep
- *   for more than one second, terminate a thread.
- */
-
-/*
- * A structure for passing work to a pdflush thread.  Also for passing
- * state information between pdflush threads.  Protected by pdflush_lock.
- */
-struct pdflush_work {
-	struct task_struct *who;	/* The thread */
-	void (*fn)(unsigned long);	/* A callback function */
-	unsigned long arg0;		/* An argument to the callback */
-	struct list_head list;		/* On pdflush_list, when idle */
-	unsigned long when_i_went_to_sleep;
-};
-
-static int __pdflush(struct pdflush_work *my_work)
-{
-	current->flags |= PF_FLUSHER | PF_SWAPWRITE;
-	set_freezable();
-	my_work->fn = NULL;
-	my_work->who = current;
-	INIT_LIST_HEAD(&my_work->list);
-
-	spin_lock_irq(&pdflush_lock);
-	for ( ; ; ) {
-		struct pdflush_work *pdf;
-
-		set_current_state(TASK_INTERRUPTIBLE);
-		list_move(&my_work->list, &pdflush_list);
-		my_work->when_i_went_to_sleep = jiffies;
-		spin_unlock_irq(&pdflush_lock);
-		schedule();
-		try_to_freeze();
-		spin_lock_irq(&pdflush_lock);
-		if (!list_empty(&my_work->list)) {
-			/*
-			 * Someone woke us up, but without removing our control
-			 * structure from the global list.  swsusp will do this
-			 * in try_to_freeze()->refrigerator().  Handle it.
-			 */
-			my_work->fn = NULL;
-			continue;
-		}
-		if (my_work->fn == NULL) {
-			printk("pdflush: bogus wakeup\n");
-			continue;
-		}
-		spin_unlock_irq(&pdflush_lock);
-
-		(*my_work->fn)(my_work->arg0);
-
-		spin_lock_irq(&pdflush_lock);
-
-		/*
-		 * Thread creation: For how long have there been zero
-		 * available threads?
-		 *
-		 * To throttle creation, we reset last_empty_jifs.
-		 */
-		if (time_after(jiffies, last_empty_jifs + 1 * HZ)) {
-			if (list_empty(&pdflush_list)) {
-				if (nr_pdflush_threads < MAX_PDFLUSH_THREADS) {
-					last_empty_jifs = jiffies;
-					nr_pdflush_threads++;
-					spin_unlock_irq(&pdflush_lock);
-					start_one_pdflush_thread();
-					spin_lock_irq(&pdflush_lock);
-				}
-			}
-		}
-
-		my_work->fn = NULL;
-
-		/*
-		 * Thread destruction: For how long has the sleepiest
-		 * thread slept?
-		 */
-		if (list_empty(&pdflush_list))
-			continue;
-		if (nr_pdflush_threads <= MIN_PDFLUSH_THREADS)
-			continue;
-		pdf = list_entry(pdflush_list.prev, struct pdflush_work, list);
-		if (time_after(jiffies, pdf->when_i_went_to_sleep + 1 * HZ)) {
-			/* Limit exit rate */
-			pdf->when_i_went_to_sleep = jiffies;
-			break;					/* exeunt */
-		}
-	}
-	nr_pdflush_threads--;
-	spin_unlock_irq(&pdflush_lock);
-	return 0;
-}
-
-/*
- * Of course, my_work wants to be just a local in __pdflush().  It is
- * separated out in this manner to hopefully prevent the compiler from
- * performing unfortunate optimisations against the auto variables.  Because
- * these are visible to other tasks and CPUs.  (No problem has actually
- * been observed.  This is just paranoia).
- */
-static int pdflush(void *dummy)
-{
-	struct pdflush_work my_work;
-	cpumask_var_t cpus_allowed;
-
-	/*
-	 * Since the caller doesn't even check kthread_run() worked, let's not
-	 * freak out too much if this fails.
-	 */
-	if (!alloc_cpumask_var(&cpus_allowed, GFP_KERNEL)) {
-		printk(KERN_WARNING "pdflush failed to allocate cpumask\n");
-		return 0;
-	}
-
-	/*
-	 * pdflush can spend a lot of time doing encryption via dm-crypt.  We
-	 * don't want to do that at keventd's priority.
-	 */
-	set_user_nice(current, 0);
-
-	/*
-	 * Some configs put our parent kthread in a limited cpuset,
-	 * which kthread() overrides, forcing cpus_allowed == cpu_all_mask.
-	 * Our needs are more modest - cut back to our cpusets cpus_allowed.
-	 * This is needed as pdflush's are dynamically created and destroyed.
-	 * The boottime pdflush's are easily placed w/o these 2 lines.
-	 */
-	cpuset_cpus_allowed(current, cpus_allowed);
-	set_cpus_allowed_ptr(current, cpus_allowed);
-	free_cpumask_var(cpus_allowed);
-
-	return __pdflush(&my_work);
-}
-
-/*
- * Attempt to wake up a pdflush thread, and get it to do some work for you.
- * Returns zero if it indeed managed to find a worker thread, and passed your
- * payload to it.
- */
-int pdflush_operation(void (*fn)(unsigned long), unsigned long arg0)
-{
-	unsigned long flags;
-	int ret = 0;
-
-	BUG_ON(fn == NULL);	/* Hard to diagnose if it's deferred */
-
-	spin_lock_irqsave(&pdflush_lock, flags);
-	if (list_empty(&pdflush_list)) {
-		ret = -1;
-	} else {
-		struct pdflush_work *pdf;
-
-		pdf = list_entry(pdflush_list.next, struct pdflush_work, list);
-		list_del_init(&pdf->list);
-		if (list_empty(&pdflush_list))
-			last_empty_jifs = jiffies;
-		pdf->fn = fn;
-		pdf->arg0 = arg0;
-		wake_up_process(pdf->who);
-	}
-	spin_unlock_irqrestore(&pdflush_lock, flags);
-
-	return ret;
-}
-
-static void start_one_pdflush_thread(void)
-{
-	struct task_struct *k;
-
-	k = kthread_run(pdflush, NULL, "pdflush");
-	if (unlikely(IS_ERR(k))) {
-		spin_lock_irq(&pdflush_lock);
-		nr_pdflush_threads--;
-		spin_unlock_irq(&pdflush_lock);
-	}
-}
-
-static int __init pdflush_init(void)
-{
-	int i;
-
-	/*
-	 * Pre-set nr_pdflush_threads...  If we fail to create,
-	 * the count will be decremented.
-	 */
-	nr_pdflush_threads = MIN_PDFLUSH_THREADS;
-
-	for (i = 0; i < MIN_PDFLUSH_THREADS; i++)
-		start_one_pdflush_thread();
-	return 0;
-}
-
-module_init(pdflush_init);
-- 
1.6.3.rc0.1.gf800


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH 04/11] writeback: separate the flushing state/task from the bdi
  2009-05-18 12:19 [PATCH 0/11] Per-bdi writeback flusher threads #4 Jens Axboe
                   ` (2 preceding siblings ...)
  2009-05-18 12:19 ` [PATCH 03/11] writeback: get rid of pdflush completely Jens Axboe
@ 2009-05-18 12:19 ` Jens Axboe
  2009-05-20 11:34   ` Jan Kara
  2009-05-18 12:19 ` [PATCH 05/11] writeback: support > 1 flusher thread per bdi Jens Axboe
                   ` (8 subsequent siblings)
  12 siblings, 1 reply; 57+ messages in thread
From: Jens Axboe @ 2009-05-18 12:19 UTC (permalink / raw
  To: linux-kernel, linux-fsdevel
  Cc: chris.mason, david, hch, akpm, jack, yanmin_zhang, Jens Axboe

Add a struct bdi_writeback for tracking and handling dirty IO. This
is in preparation for adding > 1 flusher task per bdi.
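
In shorthand (lifted from the diff below), code that used to reach the
dirty lists through the bdi now goes through the, for now single and
embedded, writeback structure:

	/* before: dirty inodes hang directly off the bdi */
	list_move(&inode->i_list, &inode_to_bdi(inode)->b_dirty);

	/* after: they hang off a bdi_writeback instead */
	struct bdi_writeback *wb = inode_get_wb(inode);

	list_move(&inode->i_list, &wb->b_dirty);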

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 fs/fs-writeback.c           |  140 +++++++++++++++++-----------
 include/linux/backing-dev.h |   42 +++++----
 mm/backing-dev.c            |  218 ++++++++++++++++++++++++++++--------------
 3 files changed, 256 insertions(+), 144 deletions(-)

diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index 8a25d14..50e21e8 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -46,9 +46,11 @@ int nr_pdflush_threads;
  * unless they implement their own.  Which is somewhat inefficient, as this
  * may prevent concurrent writeback against multiple devices.
  */
-static int writeback_acquire(struct backing_dev_info *bdi)
+static int writeback_acquire(struct bdi_writeback *wb)
 {
-	return !test_and_set_bit(BDI_pdflush, &bdi->state);
+	struct backing_dev_info *bdi = wb->bdi;
+
+	return !test_and_set_bit(wb->nr, &bdi->wb_active);
 }
 
 /**
@@ -59,19 +61,38 @@ static int writeback_acquire(struct backing_dev_info *bdi)
  */
 int writeback_in_progress(struct backing_dev_info *bdi)
 {
-	return test_bit(BDI_pdflush, &bdi->state);
+	return bdi->wb_active != 0;
 }
 
 /**
  * writeback_release - relinquish exclusive writeback access against a device.
  * @bdi: the device's backing_dev_info structure
  */
-static void writeback_release(struct backing_dev_info *bdi)
+static void writeback_release(struct bdi_writeback *wb)
 {
-	WARN_ON_ONCE(!writeback_in_progress(bdi));
-	bdi->wb_arg.nr_pages = 0;
-	bdi->wb_arg.sb = NULL;
-	clear_bit(BDI_pdflush, &bdi->state);
+	struct backing_dev_info *bdi = wb->bdi;
+
+	wb->nr_pages = 0;
+	wb->sb = NULL;
+	clear_bit(wb->nr, &bdi->wb_active);
+}
+
+static void wb_start_writeback(struct bdi_writeback *wb, struct super_block *sb,
+			       long nr_pages)
+{
+	if (!wb_has_dirty_io(wb))
+		return;
+
+	if (writeback_acquire(wb)) {
+		wb->nr_pages = nr_pages;
+		wb->sb = sb;
+
+		/*
+		 * make above store seen before the task is woken
+		 */
+		smp_mb();
+		wake_up(&wb->wait);
+	}
 }
 
 int bdi_start_writeback(struct backing_dev_info *bdi, struct super_block *sb,
@@ -81,21 +102,12 @@ int bdi_start_writeback(struct backing_dev_info *bdi, struct super_block *sb,
 	 * This only happens the first time someone kicks this bdi, so put
 	 * it out-of-line.
 	 */
-	if (unlikely(!bdi->task)) {
+	if (unlikely(!bdi->wb.task)) {
 		bdi_add_default_flusher_task(bdi);
 		return 1;
 	}
 
-	if (writeback_acquire(bdi)) {
-		bdi->wb_arg.nr_pages = nr_pages;
-		bdi->wb_arg.sb = sb;
-		/*
-		 * make above store seen before the task is woken
-		 */
-		smp_mb();
-		wake_up(&bdi->wait);
-	}
-
+	wb_start_writeback(&bdi->wb, sb, nr_pages);
 	return 0;
 }
 
@@ -123,12 +135,12 @@ int bdi_start_writeback(struct backing_dev_info *bdi, struct super_block *sb,
  * older_than_this takes precedence over nr_to_write.  So we'll only write back
  * all dirty pages if they are all attached to "old" mappings.
  */
-static void bdi_kupdated(struct backing_dev_info *bdi)
+static void wb_kupdated(struct bdi_writeback *wb)
 {
 	unsigned long oldest_jif;
 	long nr_to_write;
 	struct writeback_control wbc = {
-		.bdi			= bdi,
+		.bdi			= wb->bdi,
 		.sync_mode		= WB_SYNC_NONE,
 		.older_than_this	= &oldest_jif,
 		.nr_to_write		= 0,
@@ -155,15 +167,19 @@ static void bdi_kupdated(struct backing_dev_info *bdi)
 	}
 }
 
-static void bdi_pdflush(struct backing_dev_info *bdi)
+static void generic_sync_wb_inodes(struct bdi_writeback *wb,
+				   struct super_block *sb,
+				   struct writeback_control *wbc);
+
+static void wb_writeback(struct bdi_writeback *wb)
 {
 	struct writeback_control wbc = {
-		.bdi			= bdi,
+		.bdi			= wb->bdi,
 		.sync_mode		= WB_SYNC_NONE,
 		.older_than_this	= NULL,
 		.range_cyclic		= 1,
 	};
-	long nr_pages = bdi->wb_arg.nr_pages;
+	long nr_pages = wb->nr_pages;
 
 	for (;;) {
 		unsigned long background_thresh, dirty_thresh;
@@ -177,7 +193,7 @@ static void bdi_pdflush(struct backing_dev_info *bdi)
 		wbc.encountered_congestion = 0;
 		wbc.nr_to_write = MAX_WRITEBACK_PAGES;
 		wbc.pages_skipped = 0;
-		generic_sync_bdi_inodes(bdi->wb_arg.sb, &wbc);
+		generic_sync_wb_inodes(wb, wb->sb, &wbc);
 		nr_pages -= MAX_WRITEBACK_PAGES - wbc.nr_to_write;
 		/*
 		 * If we ran out of stuff to write, bail unless more_io got set
@@ -194,13 +210,13 @@ static void bdi_pdflush(struct backing_dev_info *bdi)
  * Handle writeback of dirty data for the device backed by this bdi. Also
  * wakes up periodically and does kupdated style flushing.
  */
-int bdi_writeback_task(struct backing_dev_info *bdi)
+int bdi_writeback_task(struct bdi_writeback *wb)
 {
 	while (!kthread_should_stop()) {
 		unsigned long wait_jiffies;
 		DEFINE_WAIT(wait);
 
-		prepare_to_wait(&bdi->wait, &wait, TASK_INTERRUPTIBLE);
+		prepare_to_wait(&wb->wait, &wait, TASK_INTERRUPTIBLE);
 		wait_jiffies = msecs_to_jiffies(dirty_writeback_interval * 10);
 		schedule_timeout(wait_jiffies);
 		try_to_freeze();
@@ -219,13 +235,13 @@ int bdi_writeback_task(struct backing_dev_info *bdi)
 		 *  pdflush style writeout.
 		 *
 		 */
-		if (writeback_acquire(bdi))
-			bdi_kupdated(bdi);
+		if (writeback_acquire(wb))
+			wb_kupdated(wb);
 		else
-			bdi_pdflush(bdi);
+			wb_writeback(wb);
 
-		writeback_release(bdi);
-		finish_wait(&bdi->wait, &wait);
+		writeback_release(wb);
+		finish_wait(&wb->wait, &wait);
 	}
 
 	return 0;
@@ -248,6 +264,14 @@ restart:
 	rcu_read_unlock();
 }
 
+/*
+ * We have only a single wb per bdi, so just return that.
+ */
+static inline struct bdi_writeback *inode_get_wb(struct inode *inode)
+{
+	return &inode_to_bdi(inode)->wb;
+}
+
 /**
  *	__mark_inode_dirty -	internal function
  *	@inode: inode to mark
@@ -346,9 +370,10 @@ void __mark_inode_dirty(struct inode *inode, int flags)
 		 * reposition it (that would break b_dirty time-ordering).
 		 */
 		if (!was_dirty) {
+			struct bdi_writeback *wb = inode_get_wb(inode);
+
 			inode->dirtied_when = jiffies;
-			list_move(&inode->i_list,
-					&inode_to_bdi(inode)->b_dirty);
+			list_move(&inode->i_list, &wb->b_dirty);
 		}
 	}
 out:
@@ -375,16 +400,16 @@ static int write_inode(struct inode *inode, int sync)
  */
 static void redirty_tail(struct inode *inode)
 {
-	struct backing_dev_info *bdi = inode_to_bdi(inode);
+	struct bdi_writeback *wb = inode_get_wb(inode);
 
-	if (!list_empty(&bdi->b_dirty)) {
+	if (!list_empty(&wb->b_dirty)) {
 		struct inode *tail;
 
-		tail = list_entry(bdi->b_dirty.next, struct inode, i_list);
+		tail = list_entry(wb->b_dirty.next, struct inode, i_list);
 		if (time_before(inode->dirtied_when, tail->dirtied_when))
 			inode->dirtied_when = jiffies;
 	}
-	list_move(&inode->i_list, &bdi->b_dirty);
+	list_move(&inode->i_list, &wb->b_dirty);
 }
 
 /*
@@ -392,7 +417,9 @@ static void redirty_tail(struct inode *inode)
  */
 static void requeue_io(struct inode *inode)
 {
-	list_move(&inode->i_list, &inode_to_bdi(inode)->b_more_io);
+	struct bdi_writeback *wb = inode_get_wb(inode);
+
+	list_move(&inode->i_list, &wb->b_more_io);
 }
 
 static void inode_sync_complete(struct inode *inode)
@@ -439,11 +466,10 @@ static void move_expired_inodes(struct list_head *delaying_queue,
 /*
  * Queue all expired dirty inodes for io, eldest first.
  */
-static void queue_io(struct backing_dev_info *bdi,
-		     unsigned long *older_than_this)
+static void queue_io(struct bdi_writeback *wb, unsigned long *older_than_this)
 {
-	list_splice_init(&bdi->b_more_io, bdi->b_io.prev);
-	move_expired_inodes(&bdi->b_dirty, &bdi->b_io, older_than_this);
+	list_splice_init(&wb->b_more_io, wb->b_io.prev);
+	move_expired_inodes(&wb->b_dirty, &wb->b_io, older_than_this);
 }
 
 /*
@@ -604,20 +630,20 @@ __writeback_single_inode(struct inode *inode, struct writeback_control *wbc)
 	return __sync_single_inode(inode, wbc);
 }
 
-void generic_sync_bdi_inodes(struct super_block *sb,
-			     struct writeback_control *wbc)
+static void generic_sync_wb_inodes(struct bdi_writeback *wb,
+				   struct super_block *sb,
+				   struct writeback_control *wbc)
 {
 	const int is_blkdev_sb = sb_is_blkdev_sb(sb);
-	struct backing_dev_info *bdi = wbc->bdi;
 	const unsigned long start = jiffies;	/* livelock avoidance */
 
 	spin_lock(&inode_lock);
 
-	if (!wbc->for_kupdate || list_empty(&bdi->b_io))
-		queue_io(bdi, wbc->older_than_this);
+	if (!wbc->for_kupdate || list_empty(&wb->b_io))
+		queue_io(wb, wbc->older_than_this);
 
-	while (!list_empty(&bdi->b_io)) {
-		struct inode *inode = list_entry(bdi->b_io.prev,
+	while (!list_empty(&wb->b_io)) {
+		struct inode *inode = list_entry(wb->b_io.prev,
 						struct inode, i_list);
 		long pages_skipped;
 
@@ -629,7 +655,7 @@ void generic_sync_bdi_inodes(struct super_block *sb,
 			continue;
 		}
 
-		if (!bdi_cap_writeback_dirty(bdi)) {
+		if (!bdi_cap_writeback_dirty(wb->bdi)) {
 			redirty_tail(inode);
 			if (is_blkdev_sb) {
 				/*
@@ -651,7 +677,7 @@ void generic_sync_bdi_inodes(struct super_block *sb,
 			continue;
 		}
 
-		if (wbc->nonblocking && bdi_write_congested(bdi)) {
+		if (wbc->nonblocking && bdi_write_congested(wb->bdi)) {
 			wbc->encountered_congestion = 1;
 			if (!is_blkdev_sb)
 				break;		/* Skip a congested fs */
@@ -685,7 +711,7 @@ void generic_sync_bdi_inodes(struct super_block *sb,
 			wbc->more_io = 1;
 			break;
 		}
-		if (!list_empty(&bdi->b_more_io))
+		if (!list_empty(&wb->b_more_io))
 			wbc->more_io = 1;
 	}
 
@@ -693,6 +719,14 @@ void generic_sync_bdi_inodes(struct super_block *sb,
 	/* Leave any unwritten inodes on b_io */
 }
 
+void generic_sync_bdi_inodes(struct super_block *sb,
+			     struct writeback_control *wbc)
+{
+	struct backing_dev_info *bdi = wbc->bdi;
+
+	generic_sync_wb_inodes(&bdi->wb, sb, wbc);
+}
+
 /*
  * Write out a superblock's list of dirty inodes.  A wait will be performed
  * upon no inodes, all inodes or the final one, depending upon sync_mode.
diff --git a/include/linux/backing-dev.h b/include/linux/backing-dev.h
index a848eea..a0c70f1 100644
--- a/include/linux/backing-dev.h
+++ b/include/linux/backing-dev.h
@@ -23,8 +23,8 @@ struct dentry;
  * Bits in backing_dev_info.state
  */
 enum bdi_state {
-	BDI_pdflush,		/* A pdflush thread is working this device */
 	BDI_pending,		/* On its way to being activated */
+	BDI_wb_alloc,		/* Default embedded wb allocated */
 	BDI_async_congested,	/* The async (write) queue is getting full */
 	BDI_sync_congested,	/* The sync queue is getting full */
 	BDI_unused,		/* Available bits start here */
@@ -40,15 +40,23 @@ enum bdi_stat_item {
 
 #define BDI_STAT_BATCH (8*(1+ilog2(nr_cpu_ids)))
 
-struct bdi_writeback_arg {
-	unsigned long nr_pages;
-	struct super_block *sb;
+struct bdi_writeback {
+	struct backing_dev_info *bdi;		/* our parent bdi */
+	unsigned int nr;
+
+	struct task_struct	*task;		/* writeback task */
+	wait_queue_head_t	wait;
+	struct list_head	b_dirty;	/* dirty inodes */
+	struct list_head	b_io;		/* parked for writeback */
+	struct list_head	b_more_io;	/* parked for more writeback */
+
+	unsigned long		nr_pages;
+	struct super_block	*sb;
 };
 
 struct backing_dev_info {
-	struct list_head bdi_list;
 	struct rcu_head rcu_head;
-
+	struct list_head bdi_list;
 	unsigned long ra_pages;	/* max readahead in PAGE_CACHE_SIZE units */
 	unsigned long state;	/* Always use atomic bitops on this */
 	unsigned int capabilities; /* Device capabilities */
@@ -65,14 +73,11 @@ struct backing_dev_info {
 	unsigned int min_ratio;
 	unsigned int max_ratio, max_prop_frac;
 
-	struct device *dev;
+	struct bdi_writeback wb;  /* default writeback info for this bdi */
+	unsigned long wb_active;  /* bitmap of active tasks */
+	unsigned long wb_mask;	  /* number of registered tasks */
 
-	struct task_struct	*task;		/* writeback task */
-	wait_queue_head_t	wait;
-	struct bdi_writeback_arg wb_arg;	/* protected by BDI_pdflush */
-	struct list_head	b_dirty;	/* dirty inodes */
-	struct list_head	b_io;		/* parked for writeback */
-	struct list_head	b_more_io;	/* parked for more writeback */
+	struct device *dev;
 
 #ifdef CONFIG_DEBUG_FS
 	struct dentry *debug_dir;
@@ -89,18 +94,19 @@ int bdi_register_dev(struct backing_dev_info *bdi, dev_t dev);
 void bdi_unregister(struct backing_dev_info *bdi);
 int bdi_start_writeback(struct backing_dev_info *bdi, struct super_block *sb,
 			 long nr_pages);
-int bdi_writeback_task(struct backing_dev_info *bdi);
+int bdi_writeback_task(struct bdi_writeback *wb);
 void bdi_writeback_all(struct super_block *sb, long nr_pages);
 void bdi_add_default_flusher_task(struct backing_dev_info *bdi);
+int bdi_has_dirty_io(struct backing_dev_info *bdi);
 
 extern spinlock_t bdi_lock;
 extern struct list_head bdi_list;
 
-static inline int bdi_has_dirty_io(struct backing_dev_info *bdi)
+static inline int wb_has_dirty_io(struct bdi_writeback *wb)
 {
-	return !list_empty(&bdi->b_dirty) ||
-	       !list_empty(&bdi->b_io) ||
-	       !list_empty(&bdi->b_more_io);
+	return !list_empty(&wb->b_dirty) ||
+	       !list_empty(&wb->b_io) ||
+	       !list_empty(&wb->b_more_io);
 }
 
 static inline void __add_bdi_stat(struct backing_dev_info *bdi,
diff --git a/mm/backing-dev.c b/mm/backing-dev.c
index c759449..677a8c6 100644
--- a/mm/backing-dev.c
+++ b/mm/backing-dev.c
@@ -199,17 +199,59 @@ static int __init default_bdi_init(void)
 }
 subsys_initcall(default_bdi_init);
 
+static void bdi_wb_init(struct bdi_writeback *wb, struct backing_dev_info *bdi)
+{
+	memset(wb, 0, sizeof(*wb));
+
+	wb->bdi = bdi;
+	init_waitqueue_head(&wb->wait);
+	INIT_LIST_HEAD(&wb->b_dirty);
+	INIT_LIST_HEAD(&wb->b_io);
+	INIT_LIST_HEAD(&wb->b_more_io);
+}
+
+static void bdi_flush_io(struct backing_dev_info *bdi)
+{
+	struct writeback_control wbc = {
+		.bdi			= bdi,
+		.sync_mode		= WB_SYNC_NONE,
+		.older_than_this	= NULL,
+		.range_cyclic		= 1,
+		.nr_to_write		= 1024,
+	};
+
+	generic_sync_bdi_inodes(NULL, &wbc);
+}
+
+static int wb_assign_nr(struct backing_dev_info *bdi, struct bdi_writeback *wb)
+{
+	set_bit(0, &bdi->wb_mask);
+	wb->nr = 0;
+	return 0;
+}
+
+static void bdi_put_wb(struct backing_dev_info *bdi, struct bdi_writeback *wb)
+{
+	clear_bit(wb->nr, &bdi->wb_mask);
+	clear_bit(BDI_wb_alloc, &bdi->state);
+}
+
+static struct bdi_writeback *bdi_new_wb(struct backing_dev_info *bdi)
+{
+	struct bdi_writeback *wb;
+
+	set_bit(BDI_wb_alloc, &bdi->state);
+	wb = &bdi->wb;
+	wb_assign_nr(bdi, wb);
+	return wb;
+}
+
 static int bdi_start_fn(void *ptr)
 {
-	struct backing_dev_info *bdi = ptr;
+	struct bdi_writeback *wb = ptr;
+	struct backing_dev_info *bdi = wb->bdi;
 	struct task_struct *tsk = current;
-
-	/*
-	 * Add us to the active bdi_list
-	 */
-	spin_lock_bh(&bdi_lock);
-	list_add_rcu(&bdi->bdi_list, &bdi_list);
-	spin_unlock_bh(&bdi_lock);
+	int ret;
 
 	tsk->flags |= PF_FLUSHER | PF_SWAPWRITE;
 	set_freezable();
@@ -225,77 +267,81 @@ static int bdi_start_fn(void *ptr)
 	clear_bit(BDI_pending, &bdi->state);
 	wake_up_bit(&bdi->state, BDI_pending);
 
-	return bdi_writeback_task(bdi);
+	ret = bdi_writeback_task(wb);
+
+	bdi_put_wb(bdi, wb);
+	return ret;
+}
+
+int bdi_has_dirty_io(struct backing_dev_info *bdi)
+{
+	return wb_has_dirty_io(&bdi->wb);
 }
 
 static int bdi_forker_task(void *ptr)
 {
-	struct backing_dev_info *bdi, *me = ptr;
+	struct bdi_writeback *me = ptr;
 
 	for (;;) {
+		struct backing_dev_info *bdi;
+		struct bdi_writeback *wb;
 		DEFINE_WAIT(wait);
 
 		/*
 		 * Should never trigger on the default bdi
 		 */
-		WARN_ON(bdi_has_dirty_io(me));
+		if (wb_has_dirty_io(me)) {
+			bdi_flush_io(me->bdi);
+			WARN_ON(1);
+		}
 
 		prepare_to_wait(&me->wait, &wait, TASK_INTERRUPTIBLE);
+
 		smp_mb();
 		if (list_empty(&bdi_pending_list))
 			schedule();
-		else {
+
+		finish_wait(&me->wait, &wait);
 repeat:
-			bdi = NULL;
+		bdi = NULL;
+		spin_lock_bh(&bdi_lock);
+		if (!list_empty(&bdi_pending_list)) {
+			bdi = list_entry(bdi_pending_list.next,
+					 struct backing_dev_info, bdi_list);
+			list_del_init(&bdi->bdi_list);
+		}
+		spin_unlock_bh(&bdi_lock);
 
-			spin_lock_bh(&bdi_lock);
-			if (!list_empty(&bdi_pending_list)) {
-				bdi = list_entry(bdi_pending_list.next,
-						 struct backing_dev_info,
-						 bdi_list);
-				list_del_init(&bdi->bdi_list);
-			}
-			spin_unlock_bh(&bdi_lock);
+		if (!bdi)
+			continue;
 
-			/*
-			 * If no bdi or bdi already got setup, continue
-			 */
-			if (!bdi || bdi->task)
-				continue;
+		wb = bdi_new_wb(bdi);
+		if (!wb)
+			goto readd_flush;
 
-			bdi->task = kthread_run(bdi_start_fn, bdi, "bdi-%s",
+		wb->task = kthread_run(bdi_start_fn, wb, "bdi-%s",
 						dev_name(bdi->dev));
+		/*
+		 * If task creation fails, then readd the bdi to
+		 * the pending list and force writeout of the bdi
+		 * from this forker thread. That will free some memory
+		 * and we can try again.
+		 */
+		if (!wb->task) {
+			bdi_put_wb(bdi, wb);
+readd_flush:
 			/*
-			 * If task creation fails, then readd the bdi to
-			 * the pending list and force writeout of the bdi
-			 * from this forker thread. That will free some memory
-			 * and we can try again.
+			 * Add this 'bdi' to the back, so we get
+			 * a chance to flush other bdi's to free
+			 * memory.
 			 */
-			if (!bdi->task) {
-				struct writeback_control wbc = {
-					.bdi			= bdi,
-					.sync_mode		= WB_SYNC_NONE,
-					.older_than_this	= NULL,
-					.range_cyclic		= 1,
-				};
-
-				/*
-				 * Add this 'bdi' to the back, so we get
-				 * a chance to flush other bdi's to free
-				 * memory.
-				 */
-				spin_lock_bh(&bdi_lock);
-				list_add_tail(&bdi->bdi_list,
-						&bdi_pending_list);
-				spin_unlock_bh(&bdi_lock);
-
-				wbc.nr_to_write = 1024;
-				generic_sync_bdi_inodes(NULL, &wbc);
-				goto repeat;
-			}
-		}
+			spin_lock_bh(&bdi_lock);
+			list_add_tail(&bdi->bdi_list, &bdi_pending_list);
+			spin_unlock_bh(&bdi_lock);
 
-		finish_wait(&me->wait, &wait);
+			bdi_flush_io(bdi);
+			goto repeat;
+		}
 	}
 
 	return 0;
@@ -318,11 +364,21 @@ static void bdi_add_to_pending(struct rcu_head *head)
 	list_add_tail(&bdi->bdi_list, &bdi_pending_list);
 	spin_unlock(&bdi_lock);
 
-	wake_up(&default_backing_dev_info.wait);
+	wake_up(&default_backing_dev_info.wb.wait);
 }
 
+/*
+ * Add a new flusher task that gets created for any bdi
+ * that has dirty data pending writeout
+ */
 void bdi_add_default_flusher_task(struct backing_dev_info *bdi)
 {
+	if (!bdi_cap_writeback_dirty(bdi))
+		return;
+
+	/*
+	 * Someone already marked this pending for task creation
+	 */
 	if (test_and_set_bit(BDI_pending, &bdi->state))
 		return;
 
@@ -363,9 +419,18 @@ int bdi_register(struct backing_dev_info *bdi, struct device *parent,
 	 * on-demand when they need it.
 	 */
 	if (bdi_cap_flush_forker(bdi)) {
-		bdi->task = kthread_run(bdi_forker_task, bdi, "bdi-%s",
+		struct bdi_writeback *wb;
+
+		wb = bdi_new_wb(bdi);
+		if (!wb) {
+			ret = -ENOMEM;
+			goto exit;
+		}
+
+		wb->task = kthread_run(bdi_forker_task, wb, "bdi-%s",
 						dev_name(dev));
-		if (!bdi->task) {
+		if (!wb->task) {
+			bdi_put_wb(bdi, wb);
 			ret = -ENOMEM;
 			goto exit;
 		}
@@ -395,34 +460,44 @@ static int sched_wait(void *word)
 	return 0;
 }
 
+/*
+ * Remove bdi from global list and shutdown any threads we have running
+ */
 static void bdi_wb_shutdown(struct backing_dev_info *bdi)
 {
+	if (!bdi_cap_writeback_dirty(bdi))
+		return;
+
 	/*
 	 * If setup is pending, wait for that to complete first
+	 * Make sure nobody finds us on the bdi_list anymore
 	 */
 	wait_on_bit(&bdi->state, BDI_pending, sched_wait, TASK_UNINTERRUPTIBLE);
 
+	/*
+	 * Make sure nobody finds us on the bdi_list anymore
+	 */
 	spin_lock_bh(&bdi_lock);
 	list_del_rcu(&bdi->bdi_list);
 	spin_unlock_bh(&bdi_lock);
 
 	/*
-	 * In case the bdi is freed right after unregister, we need to
-	 * make sure any RCU sections have exited
+	 * Now make sure that anybody who is currently looking at us from
+	 * the bdi_list iteration has exited.
 	 */
 	synchronize_rcu();
+
+	/*
+	 * Finally, kill the kernel thread
+	 */
+	kthread_stop(bdi->wb.task);
 }
 
 void bdi_unregister(struct backing_dev_info *bdi)
 {
 	if (bdi->dev) {
-		if (!bdi_cap_flush_forker(bdi)) {
+		if (!bdi_cap_flush_forker(bdi))
 			bdi_wb_shutdown(bdi);
-			if (bdi->task) {
-				kthread_stop(bdi->task);
-				bdi->task = NULL;
-			}
-		}
 		bdi_debug_unregister(bdi);
 		device_unregister(bdi->dev);
 		bdi->dev = NULL;
@@ -440,11 +515,10 @@ int bdi_init(struct backing_dev_info *bdi)
 	bdi->min_ratio = 0;
 	bdi->max_ratio = 100;
 	bdi->max_prop_frac = PROP_FRAC_BASE;
-	init_waitqueue_head(&bdi->wait);
 	INIT_LIST_HEAD(&bdi->bdi_list);
-	INIT_LIST_HEAD(&bdi->b_io);
-	INIT_LIST_HEAD(&bdi->b_dirty);
-	INIT_LIST_HEAD(&bdi->b_more_io);
+	bdi->wb_mask = bdi->wb_active = 0;
+
+	bdi_wb_init(&bdi->wb, bdi);
 
 	for (i = 0; i < NR_BDI_STAT_ITEMS; i++) {
 		err = percpu_counter_init(&bdi->bdi_stat[i], 0);
@@ -469,9 +543,7 @@ void bdi_destroy(struct backing_dev_info *bdi)
 {
 	int i;
 
-	WARN_ON(!list_empty(&bdi->b_dirty));
-	WARN_ON(!list_empty(&bdi->b_io));
-	WARN_ON(!list_empty(&bdi->b_more_io));
+	WARN_ON(bdi_has_dirty_io(bdi));
 
 	bdi_unregister(bdi);
 
-- 
1.6.3.rc0.1.gf800


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH 05/11] writeback: support > 1 flusher thread per bdi
  2009-05-18 12:19 [PATCH 0/11] Per-bdi writeback flusher threads #4 Jens Axboe
                   ` (3 preceding siblings ...)
  2009-05-18 12:19 ` [PATCH 04/11] writeback: separate the flushing state/task from the bdi Jens Axboe
@ 2009-05-18 12:19 ` Jens Axboe
  2009-05-18 12:19 ` [PATCH 06/11] writeback: include default_backing_dev_info in writeback Jens Axboe
                   ` (7 subsequent siblings)
  12 siblings, 0 replies; 57+ messages in thread
From: Jens Axboe @ 2009-05-18 12:19 UTC (permalink / raw
  To: linux-kernel, linux-fsdevel
  Cc: chris.mason, david, hch, akpm, jack, yanmin_zhang, Jens Axboe

Build on the bdi_writeback support by allowing registration of more
than one flusher thread. File systems can call bdi_add_flusher_task(bdi)
to add more flusher threads to the device. If they do so, they must also
provide a super_operations hook that returns the appropriate bdi_writeback
struct for any given inode.
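
As a purely illustrative sketch (example_get_wb, example_pick_wb and
example_sops are made-up names, not part of this series), a filesystem
that registers an extra flusher thread might wire things up roughly
like this:

	static struct bdi_writeback *example_get_wb(struct inode *inode)
	{
		/* filesystem-private policy mapping an inode to one of
		 * the bdi_writeback structs registered for its bdi */
		return example_pick_wb(inode);
	}

	static const struct super_operations example_sops = {
		.inode_get_wb	= example_get_wb,
		/* ...the usual methods... */
	};

	/* at mount time, add one more flusher thread to the device,
	 * where bdi is the filesystem's backing_dev_info */
	bdi_add_flusher_task(bdi);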

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 fs/fs-writeback.c           |  328 ++++++++++++++++++++++++++++++++-----------
 include/linux/backing-dev.h |   31 ++++-
 include/linux/fs.h          |    3 +
 mm/backing-dev.c            |  257 ++++++++++++++++++++++++++--------
 mm/page-writeback.c         |    4 +-
 5 files changed, 479 insertions(+), 144 deletions(-)

diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index 50e21e8..efdce88 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -34,84 +34,175 @@
  */
 int nr_pdflush_threads;
 
-/**
- * writeback_acquire - attempt to get exclusive writeback access to a device
- * @bdi: the device's backing_dev_info structure
- *
- * It is a waste of resources to have more than one pdflush thread blocked on
- * a single request queue.  Exclusion at the request_queue level is obtained
- * via a flag in the request_queue's backing_dev_info.state.
- *
- * Non-request_queue-backed address_spaces will share default_backing_dev_info,
- * unless they implement their own.  Which is somewhat inefficient, as this
- * may prevent concurrent writeback against multiple devices.
+static void generic_sync_wb_inodes(struct bdi_writeback *wb,
+				   struct super_block *sb,
+				   struct writeback_control *wbc);
+
+/*
+ * Work items for the bdi_writeback threads
  */
-static int writeback_acquire(struct bdi_writeback *wb)
+struct bdi_work {
+	struct list_head list;
+	struct rcu_head rcu_head;
+
+	unsigned long seen;
+	atomic_t pending;
+
+	unsigned long sb_data;
+	unsigned long nr_pages;
+
+	unsigned long state;
+};
+
+static struct super_block *bdi_work_sb(struct bdi_work *work)
 {
-	struct backing_dev_info *bdi = wb->bdi;
+	return (struct super_block *) (work->sb_data & ~1UL);
+}
+
+static inline bool bdi_work_on_stack(struct bdi_work *work)
+{
+	return work->sb_data & 1UL;
+}
+
+static inline void bdi_work_init(struct bdi_work *work, struct super_block *sb,
+				 unsigned long nr_pages)
+{
+	INIT_RCU_HEAD(&work->rcu_head);
+	work->sb_data = (unsigned long) sb;
+	work->nr_pages = nr_pages;
+	work->state = 0;
+}
 
-	return !test_and_set_bit(wb->nr, &bdi->wb_active);
+static inline void bdi_work_init_on_stack(struct bdi_work *work,
+					  struct super_block *sb,
+					  unsigned long nr_pages)
+{
+	bdi_work_init(work, sb, nr_pages);
+	set_bit(0, &work->state);
+	work->sb_data |= 1UL;
 }
 
 /**
  * writeback_in_progress - determine whether there is writeback in progress
  * @bdi: the device's backing_dev_info structure.
  *
- * Determine whether there is writeback in progress against a backing device.
+ * Determine whether there is writeback waiting to be handled against a
+ * backing device.
  */
 int writeback_in_progress(struct backing_dev_info *bdi)
 {
-	return bdi->wb_active != 0;
+	return !list_empty(&bdi->work_list);
 }
 
-/**
- * writeback_release - relinquish exclusive writeback access against a device.
- * @bdi: the device's backing_dev_info structure
- */
-static void writeback_release(struct bdi_writeback *wb)
+static void bdi_work_free(struct rcu_head *head)
 {
-	struct backing_dev_info *bdi = wb->bdi;
+	struct bdi_work *work = container_of(head, struct bdi_work, rcu_head);
 
-	wb->nr_pages = 0;
-	wb->sb = NULL;
-	clear_bit(wb->nr, &bdi->wb_active);
+	if (!bdi_work_on_stack(work))
+		kfree(work);
+	else {
+		clear_bit(0, &work->state);
+		wake_up_bit(&work->state, 0);
+	}
 }
 
-static void wb_start_writeback(struct bdi_writeback *wb, struct super_block *sb,
-			       long nr_pages)
+static void wb_clear_pending(struct bdi_writeback *wb, struct bdi_work *work)
 {
-	if (!wb_has_dirty_io(wb))
-		return;
+	/*
+	 * The caller has retrieved the work arguments from this work,
+	 * drop our reference. If this is the last ref, delete and free it
+	 */
+	if (atomic_dec_and_test(&work->pending)) {
+		struct backing_dev_info *bdi = wb->bdi;
 
-	if (writeback_acquire(wb)) {
-		wb->nr_pages = nr_pages;
-		wb->sb = sb;
+		spin_lock(&bdi->wb_lock);
+		list_del_rcu(&work->list);
+		spin_unlock(&bdi->wb_lock);
+
+		call_rcu(&work->rcu_head, bdi_work_free);
+	}
+}
+
+static void wb_start_writeback(struct bdi_writeback *wb, struct bdi_work *work)
+{
+	/*
+	 * If we failed to allocate the bdi work item, always wake up the
+	 * wb thread. As a safety precaution, it'll flush out everything
+	 */
+	if (!wb_has_dirty_io(wb) && work)
+		wb_clear_pending(wb, work);
+	else
+		wake_up(&wb->wait);
+}
+
+static int bdi_queue_writeback(struct backing_dev_info *bdi,
+			       struct bdi_work *work)
+{
+	if (work) {
+		work->seen = bdi->wb_mask;
+		atomic_set(&work->pending, bdi->wb_cnt);
 
 		/*
-		 * make above store seen before the task is woken
+		 * Make sure stores are seen before it appears on the list
 		 */
 		smp_mb();
-		wake_up(&wb->wait);
+
+		spin_lock(&bdi->wb_lock);
+		list_add_tail_rcu(&work->list, &bdi->work_list);
+		spin_unlock(&bdi->wb_lock);
 	}
-}
 
-int bdi_start_writeback(struct backing_dev_info *bdi, struct super_block *sb,
-			 long nr_pages)
-{
 	/*
 	 * This only happens the first time someone kicks this bdi, so put
 	 * it out-of-line.
 	 */
-	if (unlikely(!bdi->wb.task)) {
+	if (unlikely(list_empty_careful(&bdi->wb_list))) {
 		bdi_add_default_flusher_task(bdi);
 		return 1;
 	}
 
-	wb_start_writeback(&bdi->wb, sb, nr_pages);
+	if (!bdi_wblist_needs_lock(bdi))
+		wb_start_writeback(&bdi->wb, work);
+	else {
+		struct bdi_writeback *wb;
+		int idx;
+
+		idx = srcu_read_lock(&bdi->srcu);
+
+		list_for_each_entry_rcu(wb, &bdi->wb_list, list)
+			wb_start_writeback(wb, work);
+
+		srcu_read_unlock(&bdi->srcu, idx);
+	}
+
 	return 0;
 }
 
 /*
+ * Used for on-stack allocated work items. The caller needs to wait until
+ * the wb threads have acked the work before it's safe to continue.
+ */
+static void bdi_wait_on_work_start(struct bdi_work *work)
+{
+	wait_on_bit(&work->state, 0, bdi_sched_wait, TASK_UNINTERRUPTIBLE);
+}
+
+int bdi_start_writeback(struct backing_dev_info *bdi, struct super_block *sb,
+			 long nr_pages)
+{
+	struct bdi_work work;
+	int ret;
+
+	bdi_work_init_on_stack(&work, sb, nr_pages);
+
+	ret = bdi_queue_writeback(bdi, &work);
+
+	bdi_wait_on_work_start(&work);
+
+	return ret;
+}
+
+/*
  * The maximum number of pages to writeout in a single bdi flush/kupdate
  * operation.  We do this so we don't hold I_SYNC against an inode for
  * enormous amounts of time, which would block a userspace task which has
@@ -160,18 +251,15 @@ static void wb_kupdated(struct bdi_writeback *wb)
 		wbc.more_io = 0;
 		wbc.encountered_congestion = 0;
 		wbc.nr_to_write = MAX_WRITEBACK_PAGES;
-		generic_sync_bdi_inodes(NULL, &wbc);
+		generic_sync_wb_inodes(wb, NULL, &wbc);
 		if (wbc.nr_to_write > 0)
 			break;	/* All the old data is written */
 		nr_to_write -= MAX_WRITEBACK_PAGES;
 	}
 }
 
-static void generic_sync_wb_inodes(struct bdi_writeback *wb,
-				   struct super_block *sb,
-				   struct writeback_control *wbc);
-
-static void wb_writeback(struct bdi_writeback *wb)
+static void __wb_writeback(struct bdi_writeback *wb, long nr_pages,
+			   struct super_block *sb)
 {
 	struct writeback_control wbc = {
 		.bdi			= wb->bdi,
@@ -179,10 +267,10 @@ static void wb_writeback(struct bdi_writeback *wb)
 		.older_than_this	= NULL,
 		.range_cyclic		= 1,
 	};
-	long nr_pages = wb->nr_pages;
 
 	for (;;) {
 		unsigned long background_thresh, dirty_thresh;
+
 		get_dirty_limits(&background_thresh, &dirty_thresh, NULL, NULL);
 		if ((global_page_state(NR_FILE_DIRTY) +
 		    global_page_state(NR_UNSTABLE_NFS) < background_thresh) &&
@@ -193,7 +281,7 @@ static void wb_writeback(struct bdi_writeback *wb)
 		wbc.encountered_congestion = 0;
 		wbc.nr_to_write = MAX_WRITEBACK_PAGES;
 		wbc.pages_skipped = 0;
-		generic_sync_wb_inodes(wb, wb->sb, &wbc);
+		generic_sync_wb_inodes(wb, sb, &wbc);
 		nr_pages -= MAX_WRITEBACK_PAGES - wbc.nr_to_write;
 		/*
 		 * If we ran out of stuff to write, bail unless more_io got set
@@ -207,69 +295,135 @@ static void wb_writeback(struct bdi_writeback *wb)
 }
 
 /*
+ * Return the next bdi_work struct that hasn't been processed by this
+ * wb thread yet
+ */
+static struct bdi_work *get_next_work_item(struct backing_dev_info *bdi,
+					   struct bdi_writeback *wb)
+{
+	struct bdi_work *work, *ret = NULL;
+
+	rcu_read_lock();
+
+	list_for_each_entry_rcu(work, &bdi->work_list, list) {
+		if (!test_and_clear_bit(wb->nr, &work->seen))
+			continue;
+
+		ret = work;
+		break;
+	}
+
+	rcu_read_unlock();
+	return ret;
+}
+
+static void wb_writeback(struct bdi_writeback *wb)
+{
+	struct backing_dev_info *bdi = wb->bdi;
+	struct bdi_work *work;
+
+	while ((work = get_next_work_item(bdi, wb)) != NULL) {
+		struct super_block *sb = bdi_work_sb(work);
+		long nr_pages = work->nr_pages;
+
+		wb_clear_pending(wb, work);
+		__wb_writeback(wb, nr_pages, sb);
+	}
+}
+
+/*
+ * This will be inlined in bdi_writeback_task() once we get rid of any
+ * dirty inodes on the default_backing_dev_info
+ */
+static void wb_do_writeback(struct bdi_writeback *wb)
+{
+	/*
+	 * We get here in two cases:
+	 *
+	 *  schedule_timeout() returned because the dirty writeback
+	 *  interval has elapsed. If that happens, the work item list
+	 *  will be empty and we will proceed to do kupdated style writeout.
+	 *
+	 *  Someone called bdi_start_writeback(), which put one/more work
+	 *  items on the work_list. Process those.
+	 */
+	if (list_empty(&wb->bdi->work_list))
+		wb_kupdated(wb);
+	else
+		wb_writeback(wb);
+}
+
+/*
  * Handle writeback of dirty data for the device backed by this bdi. Also
  * wakes up periodically and does kupdated style flushing.
  */
 int bdi_writeback_task(struct bdi_writeback *wb)
 {
+	DEFINE_WAIT(wait);
+
 	while (!kthread_should_stop()) {
 		unsigned long wait_jiffies;
-		DEFINE_WAIT(wait);
+
+		wb_do_writeback(wb);
 
 		prepare_to_wait(&wb->wait, &wait, TASK_INTERRUPTIBLE);
 		wait_jiffies = msecs_to_jiffies(dirty_writeback_interval * 10);
 		schedule_timeout(wait_jiffies);
 		try_to_freeze();
-
-		/*
-		 * We get here in two cases:
-		 *
-		 *  schedule_timeout() returned because the dirty writeback
-		 *  interval has elapsed. If that happens, we will be able
-		 *  to acquire the writeback lock and will proceed to do
-		 *  kupdated style writeout.
-		 *
-		 *  Someone called bdi_start_writeback(), which will acquire
-		 *  the writeback lock. This means our writeback_acquire()
-		 *  below will fail and we call into bdi_pdflush() for
-		 *  pdflush style writeout.
-		 *
-		 */
-		if (writeback_acquire(wb))
-			wb_kupdated(wb);
-		else
-			wb_writeback(wb);
-
-		writeback_release(wb);
-		finish_wait(&wb->wait, &wait);
 	}
 
+	finish_wait(&wb->wait, &wait);
 	return 0;
 }
 
 void bdi_writeback_all(struct super_block *sb, long nr_pages)
 {
-	struct backing_dev_info *bdi;
+	struct list_head *entry = &bdi_list;
 
 	rcu_read_lock();
 
-restart:
-	list_for_each_entry_rcu(bdi, &bdi_list, bdi_list) {
+	list_for_each_continue_rcu(entry, &bdi_list) {
+		struct backing_dev_info *bdi;
+		struct list_head *next;
+		struct bdi_work *work;
+
+		bdi = list_entry(entry, struct backing_dev_info, bdi_list);
 		if (!bdi_has_dirty_io(bdi))
 			continue;
-		if (bdi_start_writeback(bdi, sb, nr_pages))
-			goto restart;
+
+		/*
+		 * If this allocation fails, we just wakeup the thread and
+		 * let it do kupdate writeback
+		 */
+		work = kmalloc(sizeof(*work), GFP_ATOMIC);
+		if (work)
+			bdi_work_init(work, sb, nr_pages);
+
+		/*
+		 * Prepare to start from previous entry if this one gets moved
+		 * to the bdi_pending list.
+		 */
+		next = entry->prev;
+		if (bdi_queue_writeback(bdi, work))
+			entry = next;
 	}
 
 	rcu_read_unlock();
 }
 
 /*
- * We have only a single wb per bdi, so just return that.
+ * If the filesystem didn't provide a way to map an inode to a dedicated
+ * flusher thread, it doesn't support more than 1 thread. So we know it's
+ * the default thread, return that.
  */
 static inline struct bdi_writeback *inode_get_wb(struct inode *inode)
 {
-	return &inode_to_bdi(inode)->wb;
+	const struct super_operations *sop = inode->i_sb->s_op;
+
+	if (!sop->inode_get_wb)
+		return &inode_to_bdi(inode)->wb;
+
+	return sop->inode_get_wb(inode);
 }
 
 /**
@@ -723,8 +877,24 @@ void generic_sync_bdi_inodes(struct super_block *sb,
 			     struct writeback_control *wbc)
 {
 	struct backing_dev_info *bdi = wbc->bdi;
+	struct bdi_writeback *wb;
+
+	/*
+	 * Common case is just a single wb thread and that is embedded in
+	 * the bdi, so it doesn't need locking
+	 */
+	if (!bdi_wblist_needs_lock(bdi))
+		generic_sync_wb_inodes(&bdi->wb, sb, wbc);
+	else {
+		int idx;
 
-	generic_sync_wb_inodes(&bdi->wb, sb, wbc);
+		idx = srcu_read_lock(&bdi->srcu);
+
+		list_for_each_entry_rcu(wb, &bdi->wb_list, list)
+			generic_sync_wb_inodes(wb, sb, wbc);
+
+		srcu_read_unlock(&bdi->srcu, idx);
+	}
 }
 
 /*
diff --git a/include/linux/backing-dev.h b/include/linux/backing-dev.h
index a0c70f1..6ccfa35 100644
--- a/include/linux/backing-dev.h
+++ b/include/linux/backing-dev.h
@@ -13,6 +13,8 @@
 #include <linux/proportions.h>
 #include <linux/kernel.h>
 #include <linux/fs.h>
+#include <linux/sched.h>
+#include <linux/srcu.h>
 #include <asm/atomic.h>
 
 struct page;
@@ -25,6 +27,7 @@ struct dentry;
 enum bdi_state {
 	BDI_pending,		/* On its way to being activated */
 	BDI_wb_alloc,		/* Default embedded wb allocated */
+	BDI_wblist_lock,	/* bdi->wb_list now needs locking */
 	BDI_async_congested,	/* The async (write) queue is getting full */
 	BDI_sync_congested,	/* The sync queue is getting full */
 	BDI_unused,		/* Available bits start here */
@@ -41,6 +44,8 @@ enum bdi_stat_item {
 #define BDI_STAT_BATCH (8*(1+ilog2(nr_cpu_ids)))
 
 struct bdi_writeback {
+	struct list_head list;			/* hangs off the bdi */
+
 	struct backing_dev_info *bdi;		/* our parent bdi */
 	unsigned int nr;
 
@@ -49,13 +54,13 @@ struct bdi_writeback {
 	struct list_head	b_dirty;	/* dirty inodes */
 	struct list_head	b_io;		/* parked for writeback */
 	struct list_head	b_more_io;	/* parked for more writeback */
-
-	unsigned long		nr_pages;
-	struct super_block	*sb;
 };
 
+#define BDI_MAX_FLUSHERS	32
+
 struct backing_dev_info {
 	struct rcu_head rcu_head;
+	struct srcu_struct srcu; /* for wb_list read side protection */
 	struct list_head bdi_list;
 	unsigned long ra_pages;	/* max readahead in PAGE_CACHE_SIZE units */
 	unsigned long state;	/* Always use atomic bitops on this */
@@ -74,8 +79,12 @@ struct backing_dev_info {
 	unsigned int max_ratio, max_prop_frac;
 
 	struct bdi_writeback wb;  /* default writeback info for this bdi */
-	unsigned long wb_active;  /* bitmap of active tasks */
-	unsigned long wb_mask;	  /* number of registered tasks */
+	spinlock_t wb_lock;	  /* protects update side of wb_list */
+	struct list_head wb_list; /* the flusher threads hanging off this bdi */
+	unsigned long wb_mask;	  /* bitmask of registered tasks */
+	unsigned int wb_cnt;	  /* number of registered tasks */
+
+	struct list_head work_list;
 
 	struct device *dev;
 
@@ -97,11 +106,17 @@ int bdi_start_writeback(struct backing_dev_info *bdi, struct super_block *sb,
 int bdi_writeback_task(struct bdi_writeback *wb);
 void bdi_writeback_all(struct super_block *sb, long nr_pages);
 void bdi_add_default_flusher_task(struct backing_dev_info *bdi);
+void bdi_add_flusher_task(struct backing_dev_info *bdi);
 int bdi_has_dirty_io(struct backing_dev_info *bdi);
 
 extern spinlock_t bdi_lock;
 extern struct list_head bdi_list;
 
+static inline int bdi_wblist_needs_lock(struct backing_dev_info *bdi)
+{
+	return test_bit(BDI_wblist_lock, &bdi->state);
+}
+
 static inline int wb_has_dirty_io(struct bdi_writeback *wb)
 {
 	return !list_empty(&wb->b_dirty) ||
@@ -314,4 +329,10 @@ static inline bool mapping_cap_swap_backed(struct address_space *mapping)
 	return bdi_cap_swap_backed(mapping->backing_dev_info);
 }
 
+static inline int bdi_sched_wait(void *word)
+{
+	schedule();
+	return 0;
+}
+
 #endif		/* _LINUX_BACKING_DEV_H */
diff --git a/include/linux/fs.h b/include/linux/fs.h
index ecdc544..d3bda5d 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -1550,11 +1550,14 @@ extern ssize_t vfs_readv(struct file *, const struct iovec __user *,
 extern ssize_t vfs_writev(struct file *, const struct iovec __user *,
 		unsigned long, loff_t *);
 
+struct bdi_writeback;
+
 struct super_operations {
    	struct inode *(*alloc_inode)(struct super_block *sb);
 	void (*destroy_inode)(struct inode *);
 
    	void (*dirty_inode) (struct inode *);
+	struct bdi_writeback *(*inode_get_wb) (struct inode *);
 	int (*write_inode) (struct inode *, int);
 	void (*drop_inode) (struct inode *);
 	void (*delete_inode) (struct inode *);
diff --git a/mm/backing-dev.c b/mm/backing-dev.c
index 677a8c6..b4bcb14 100644
--- a/mm/backing-dev.c
+++ b/mm/backing-dev.c
@@ -199,7 +199,42 @@ static int __init default_bdi_init(void)
 }
 subsys_initcall(default_bdi_init);
 
-static void bdi_wb_init(struct bdi_writeback *wb, struct backing_dev_info *bdi)
+static int wb_assign_nr(struct backing_dev_info *bdi, struct bdi_writeback *wb)
+{
+	unsigned long mask = BDI_MAX_FLUSHERS - 1;
+	unsigned int nr;
+
+	do {
+		if ((bdi->wb_mask & mask) == mask)
+			return 1;
+
+		nr = find_first_zero_bit(&bdi->wb_mask, BDI_MAX_FLUSHERS);
+	} while (test_and_set_bit(nr, &bdi->wb_mask));
+
+	wb->nr = nr;
+
+	spin_lock(&bdi->wb_lock);
+	bdi->wb_cnt++;
+	spin_unlock(&bdi->wb_lock);
+
+	return 0;
+}
+
+static void bdi_put_wb(struct backing_dev_info *bdi, struct bdi_writeback *wb)
+{
+	clear_bit(wb->nr, &bdi->wb_mask);
+
+	if (wb == &bdi->wb)
+		clear_bit(BDI_wb_alloc, &bdi->state);
+	else
+		kfree(wb);
+
+	spin_lock(&bdi->wb_lock);
+	bdi->wb_cnt--;
+	spin_unlock(&bdi->wb_lock);
+}
+
+static int bdi_wb_init(struct bdi_writeback *wb, struct backing_dev_info *bdi)
 {
 	memset(wb, 0, sizeof(*wb));
 
@@ -208,6 +243,30 @@ static void bdi_wb_init(struct bdi_writeback *wb, struct backing_dev_info *bdi)
 	INIT_LIST_HEAD(&wb->b_dirty);
 	INIT_LIST_HEAD(&wb->b_io);
 	INIT_LIST_HEAD(&wb->b_more_io);
+
+	return wb_assign_nr(bdi, wb);
+}
+
+static struct bdi_writeback *bdi_new_wb(struct backing_dev_info *bdi)
+{
+	struct bdi_writeback *wb;
+
+	/*
+	 * Default bdi->wb is already assigned, so just return it
+	 */
+	if (!test_and_set_bit(BDI_wb_alloc, &bdi->state))
+		wb = &bdi->wb;
+	else {
+		wb = kmalloc(sizeof(struct bdi_writeback), GFP_KERNEL);
+		if (wb) {
+			if (bdi_wb_init(wb, bdi)) {
+				kfree(wb);
+				wb = NULL;
+			}
+		}
+	}
+
+	return wb;
 }
 
 static void bdi_flush_io(struct backing_dev_info *bdi)
@@ -223,35 +282,26 @@ static void bdi_flush_io(struct backing_dev_info *bdi)
 	generic_sync_bdi_inodes(NULL, &wbc);
 }
 
-static int wb_assign_nr(struct backing_dev_info *bdi, struct bdi_writeback *wb)
+static void bdi_task_init(struct backing_dev_info *bdi,
+			  struct bdi_writeback *wb)
 {
-	set_bit(0, &bdi->wb_mask);
-	wb->nr = 0;
-	return 0;
-}
+	struct task_struct *tsk = current;
+	int was_empty;
 
-static void bdi_put_wb(struct backing_dev_info *bdi, struct bdi_writeback *wb)
-{
-	clear_bit(wb->nr, &bdi->wb_mask);
-	clear_bit(BDI_wb_alloc, &bdi->state);
-}
+	/*
+	 * Add us to the bdi's wb_list. If we are adding threads beyond
+	 * the default embedded bdi_writeback, then we need to start using
+	 * proper locking. Check the list for empty first, then set the
+	 * BDI_wblist_lock flag if there's > 1 entry on the list now
+	 */
+	spin_lock(&bdi->wb_lock);
 
-static struct bdi_writeback *bdi_new_wb(struct backing_dev_info *bdi)
-{
-	struct bdi_writeback *wb;
+	was_empty = list_empty(&bdi->wb_list);
+	list_add_tail_rcu(&wb->list, &bdi->wb_list);
+	if (!was_empty)
+		set_bit(BDI_wblist_lock, &bdi->state);
 
-	set_bit(BDI_wb_alloc, &bdi->state);
-	wb = &bdi->wb;
-	wb_assign_nr(bdi, wb);
-	return wb;
-}
-
-static int bdi_start_fn(void *ptr)
-{
-	struct bdi_writeback *wb = ptr;
-	struct backing_dev_info *bdi = wb->bdi;
-	struct task_struct *tsk = current;
-	int ret;
+	spin_unlock(&bdi->wb_lock);
 
 	tsk->flags |= PF_FLUSHER | PF_SWAPWRITE;
 	set_freezable();
@@ -260,6 +310,15 @@ static int bdi_start_fn(void *ptr)
 	 * Our parent may run at a different priority, just set us to normal
 	 */
 	set_user_nice(tsk, 0);
+}
+
+static int bdi_start_fn(void *ptr)
+{
+	struct bdi_writeback *wb = ptr;
+	struct backing_dev_info *bdi = wb->bdi;
+	int ret;
+
+	bdi_task_init(bdi, wb);
 
 	/*
 	 * Clear pending bit and wakeup anybody waiting to tear us down
@@ -267,25 +326,65 @@ static int bdi_start_fn(void *ptr)
 	clear_bit(BDI_pending, &bdi->state);
 	wake_up_bit(&bdi->state, BDI_pending);
 
+	/*
+	 * Make us discoverable on the bdi_list again
+	 */
+	spin_lock(&bdi_lock);
+	list_add_tail_rcu(&bdi->bdi_list, &bdi_list);
+	spin_unlock(&bdi_lock);
+
 	ret = bdi_writeback_task(wb);
 
+	/*
+	 * Remove us from the list
+	 */
+	spin_lock(&bdi->wb_lock);
+	list_del_rcu(&wb->list);
+	spin_unlock(&bdi->wb_lock);
+
+	/*
+	 * wait for rcu grace period to end, so we can free wb
+	 */
+	synchronize_srcu(&bdi->srcu);
+
 	bdi_put_wb(bdi, wb);
 	return ret;
 }
 
 int bdi_has_dirty_io(struct backing_dev_info *bdi)
 {
-	return wb_has_dirty_io(&bdi->wb);
+	struct bdi_writeback *wb;
+	int ret = 0;
+
+	if (!bdi_wblist_needs_lock(bdi))
+		ret = wb_has_dirty_io(&bdi->wb);
+	else {
+		int idx;
+
+		idx = srcu_read_lock(&bdi->srcu);
+
+		list_for_each_entry_rcu(wb, &bdi->wb_list, list) {
+			ret = wb_has_dirty_io(wb);
+			if (ret)
+				break;
+		}
+
+		srcu_read_unlock(&bdi->srcu, idx);
+	}
+
+	return ret;
 }
 
 static int bdi_forker_task(void *ptr)
 {
 	struct bdi_writeback *me = ptr;
+	DEFINE_WAIT(wait);
+
+	bdi_task_init(me->bdi, me);
 
 	for (;;) {
 		struct backing_dev_info *bdi;
 		struct bdi_writeback *wb;
-		DEFINE_WAIT(wait);
 
 		/*
 		 * Should never trigger on the default bdi
@@ -301,7 +400,6 @@ static int bdi_forker_task(void *ptr)
 		if (list_empty(&bdi_pending_list))
 			schedule();
 
-		finish_wait(&me->wait, &wait);
 repeat:
 		bdi = NULL;
 		spin_lock_bh(&bdi_lock);
@@ -344,6 +442,7 @@ readd_flush:
 		}
 	}
 
+	finish_wait(&me->wait, &wait);
 	return 0;
 }
 
@@ -367,34 +466,68 @@ static void bdi_add_to_pending(struct rcu_head *head)
 	wake_up(&default_backing_dev_info.wb.wait);
 }
 
-/*
- * Add a new flusher task that gets created for any bdi
- * that has dirty data pending writeout
- */
-void bdi_add_default_flusher_task(struct backing_dev_info *bdi)
+static void bdi_add_one_flusher_task(struct backing_dev_info *bdi,
+				     int(*func)(struct backing_dev_info *))
 {
 	if (!bdi_cap_writeback_dirty(bdi))
 		return;
 
 	/*
-	 * Someone already marked this pending for task creation
+	 * Check with the helper whether to proceed with adding a task. We
+	 * only abort if two or more simultaneous calls to
+	 * bdi_add_default_flusher_task() occurred; further additions will
+	 * block waiting for previous additions to finish.
 	 */
-	if (test_and_set_bit(BDI_pending, &bdi->state))
-		return;
+	if (!func(bdi)) {
+		spin_lock_bh(&bdi_lock);
+		list_del_rcu(&bdi->bdi_list);
+		spin_unlock_bh(&bdi_lock);
 
-	spin_lock_bh(&bdi_lock);
-	list_del_rcu(&bdi->bdi_list);
-	spin_unlock_bh(&bdi_lock);
+		/*
+		 * We need to wait for the current grace period to end,
+		 * in case others were browsing the bdi_list as well.
+		 * So defer the adding and wakeup to after the RCU
+		 * grace period has ended.
+		 */
+		call_rcu(&bdi->rcu_head, bdi_add_to_pending);
+	}
+}
 
-	/*
-	 * We need to wait for the current grace period to end,
-	 * in case others were browsing the bdi_list as well.
-	 * So defer the adding and wakeup to after the RCU
-	 * grace period has ended.
-	 */
-	call_rcu(&bdi->rcu_head, bdi_add_to_pending);
+static int flusher_add_helper_block(struct backing_dev_info *bdi)
+{
+	wait_on_bit_lock(&bdi->state, BDI_pending, bdi_sched_wait,
+				TASK_UNINTERRUPTIBLE);
+	return 0;
+}
+
+static int flusher_add_helper_test(struct backing_dev_info *bdi)
+{
+	return test_and_set_bit(BDI_pending, &bdi->state);
+}
+
+/*
+ * Add the default flusher task that gets created for any bdi
+ * that has dirty data pending writeout
+ */
+void bdi_add_default_flusher_task(struct backing_dev_info *bdi)
+{
+	bdi_add_one_flusher_task(bdi, flusher_add_helper_test);
 }
 
+/**
+ * bdi_add_flusher_task - add one more flusher task to this @bdi
+ * @bdi:	the bdi to add a flusher task for
+ *
+ * Add an additional flusher task to this @bdi. Will block waiting on
+ * previous additions, if any.
+ */
+void bdi_add_flusher_task(struct backing_dev_info *bdi)
+{
+	bdi_add_one_flusher_task(bdi, flusher_add_helper_block);
+}
+EXPORT_SYMBOL(bdi_add_flusher_task);
+
 int bdi_register(struct backing_dev_info *bdi, struct device *parent,
 		const char *fmt, ...)
 {
@@ -454,17 +587,13 @@ int bdi_register_dev(struct backing_dev_info *bdi, dev_t dev)
 }
 EXPORT_SYMBOL(bdi_register_dev);
 
-static int sched_wait(void *word)
-{
-	schedule();
-	return 0;
-}
-
 /*
  * Remove bdi from global list and shutdown any threads we have running
  */
 static void bdi_wb_shutdown(struct backing_dev_info *bdi)
 {
+	struct bdi_writeback *wb;
+
 	if (!bdi_cap_writeback_dirty(bdi))
 		return;
 
@@ -472,7 +601,8 @@ static void bdi_wb_shutdown(struct backing_dev_info *bdi)
 	 * If setup is pending, wait for that to complete first
 	 * Make sure nobody finds us on the bdi_list anymore
 	 */
-	wait_on_bit(&bdi->state, BDI_pending, sched_wait, TASK_UNINTERRUPTIBLE);
+	wait_on_bit(&bdi->state, BDI_pending, bdi_sched_wait,
+			TASK_UNINTERRUPTIBLE);
 
 	/*
 	 * Make sure nobody finds us on the bdi_list anymore
@@ -488,9 +618,11 @@ static void bdi_wb_shutdown(struct backing_dev_info *bdi)
 	synchronize_rcu();
 
 	/*
-	 * Finally, kill the kernel thread
+	 * Finally, kill the kernel threads. We don't need to be RCU
+	 * safe anymore, since the bdi is no longer visible on the bdi_list.
 	 */
-	kthread_stop(bdi->wb.task);
+	list_for_each_entry(wb, &bdi->wb_list, list)
+		kthread_stop(wb->task);
 }
 
 void bdi_unregister(struct backing_dev_info *bdi)
@@ -515,8 +647,12 @@ int bdi_init(struct backing_dev_info *bdi)
 	bdi->min_ratio = 0;
 	bdi->max_ratio = 100;
 	bdi->max_prop_frac = PROP_FRAC_BASE;
+	spin_lock_init(&bdi->wb_lock);
+	bdi->wb_mask = 0;
+	bdi->wb_cnt = 0;
 	INIT_LIST_HEAD(&bdi->bdi_list);
-	bdi->wb_mask = bdi->wb_active = 0;
+	INIT_LIST_HEAD(&bdi->wb_list);
+	INIT_LIST_HEAD(&bdi->work_list);
 
 	bdi_wb_init(&bdi->wb, bdi);
 
@@ -526,10 +662,15 @@ int bdi_init(struct backing_dev_info *bdi)
 			goto err;
 	}
 
+	err = init_srcu_struct(&bdi->srcu);
+	if (err)
+		goto err;
+
 	bdi->dirty_exceeded = 0;
 	err = prop_local_init_percpu(&bdi->completions);
 
 	if (err) {
+		cleanup_srcu_struct(&bdi->srcu);
 err:
 		while (i--)
 			percpu_counter_destroy(&bdi->bdi_stat[i]);
@@ -547,6 +688,8 @@ void bdi_destroy(struct backing_dev_info *bdi)
 
 	bdi_unregister(bdi);
 
+	cleanup_srcu_struct(&bdi->srcu);
+
 	for (i = 0; i < NR_BDI_STAT_ITEMS; i++)
 		percpu_counter_destroy(&bdi->bdi_stat[i]);
 
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 76269f8..de3178a 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -667,8 +667,7 @@ void throttle_vm_writeout(gfp_t gfp_mask)
 
 /*
  * Start writeback of `nr_pages' pages.  If `nr_pages' is zero, write back
- * the whole world.  Returns 0 if a pdflush thread was dispatched.  Returns
- * -1 if all pdflush threads were busy.
+ * the whole world.
  */
 void wakeup_flusher_threads(long nr_pages)
 {
@@ -676,7 +675,6 @@ void wakeup_flusher_threads(long nr_pages)
 		nr_pages = global_page_state(NR_FILE_DIRTY) +
 				global_page_state(NR_UNSTABLE_NFS);
 	bdi_writeback_all(NULL, nr_pages);
-	return;
 }
 
 static void laptop_timer_fn(unsigned long unused);
-- 
1.6.3.rc0.1.gf800


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH 06/11] writeback: include default_backing_dev_info in writeback
  2009-05-18 12:19 [PATCH 0/11] Per-bdi writeback flusher threads #4 Jens Axboe
                   ` (4 preceding siblings ...)
  2009-05-18 12:19 ` [PATCH 05/11] writeback: support > 1 flusher thread per bdi Jens Axboe
@ 2009-05-18 12:19 ` Jens Axboe
  2009-05-18 12:19 ` [PATCH 07/11] writeback: allow sleepy exit of default writeback task Jens Axboe
                   ` (6 subsequent siblings)
  12 siblings, 0 replies; 57+ messages in thread
From: Jens Axboe @ 2009-05-18 12:19 UTC (permalink / raw
  To: linux-kernel, linux-fsdevel
  Cc: chris.mason, david, hch, akpm, jack, yanmin_zhang, Jens Axboe

We occasionally see dirty inodes on the default_backing_dev_info, so
better be safe and write them out.

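In outline, the forker now runs the same check a regular flusher thread
does before it returns to its real job of creating threads. A minimal
sketch of the resulting loop (names as in the mm/backing-dev.c hunk
below, trimmed for brevity):

	for (;;) {
		prepare_to_wait(&me->wait, &wait, TASK_INTERRUPTIBLE);
		if (list_empty(&bdi_pending_list))
			schedule();

		/* flush dirty data that landed on the default bdi */
		if (wb_has_dirty_io(me) || !list_empty(&me->bdi->work_list))
			wb_do_writeback(me);

		/* then the real job: fork flusher tasks for pending bdis */
		...
	}
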
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 fs/fs-writeback.c         |    2 +-
 include/linux/writeback.h |    1 +
 mm/backing-dev.c          |   30 ++++++++++++++++++------------
 3 files changed, 20 insertions(+), 13 deletions(-)

diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index efdce88..d9cd3b7 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -335,7 +335,7 @@ static void wb_writeback(struct bdi_writeback *wb)
  * This will be inlined in bdi_writeback_task() once we get rid of any
  * dirty inodes on the default_backing_dev_info
  */
-static void wb_do_writeback(struct bdi_writeback *wb)
+void wb_do_writeback(struct bdi_writeback *wb)
 {
 	/*
 	 * We get here in two cases:
diff --git a/include/linux/writeback.h b/include/linux/writeback.h
index baf04a9..e414702 100644
--- a/include/linux/writeback.h
+++ b/include/linux/writeback.h
@@ -69,6 +69,7 @@ void writeback_inodes(struct writeback_control *wbc);
 int inode_wait(void *);
 void sync_inodes_sb(struct super_block *, int wait);
 void sync_inodes(int wait);
+void wb_do_writeback(struct bdi_writeback *wb);
 
 /* writeback.h requires fs.h; it, too, is not included from here. */
 static inline void wait_on_inode(struct inode *inode)
diff --git a/mm/backing-dev.c b/mm/backing-dev.c
index b4bcb14..89d6eea 100644
--- a/mm/backing-dev.c
+++ b/mm/backing-dev.c
@@ -386,20 +386,26 @@ static int bdi_forker_task(void *ptr)
 		struct backing_dev_info *bdi;
 		struct bdi_writeback *wb;
 
-		/*
-		 * Should never trigger on the default bdi
-		 */
-		if (wb_has_dirty_io(me)) {
-			bdi_flush_io(me->bdi);
-			WARN_ON(1);
-		}
-
 		prepare_to_wait(&me->wait, &wait, TASK_INTERRUPTIBLE);
 
 		smp_mb();
 		if (list_empty(&bdi_pending_list))
 			schedule();
 
+		/*
+		 * Ideally we'd like not to see any dirty inodes on the
+		 * default_backing_dev_info. Until these are tracked down,
+		 * perform the same writeback here that bdi_writeback_task
+		 * does. For logic, see comment in
+		 * fs/fs-writeback.c:bdi_writeback_task()
+		 */
+		if (wb_has_dirty_io(me) || !list_empty(&me->bdi->work_list))
+			wb_do_writeback(me);
+
+		/*
+		 * This is our real job - check for pending entries in
+		 * bdi_pending_list, and create the tasks that got added
+		 */
 repeat:
 		bdi = NULL;
 		spin_lock_bh(&bdi_lock);
@@ -567,12 +573,12 @@ int bdi_register(struct backing_dev_info *bdi, struct device *parent,
 			ret = -ENOMEM;
 			goto exit;
 		}
-	} else {
-		spin_lock_bh(&bdi_lock);
-		list_add_tail_rcu(&bdi->bdi_list, &bdi_list);
-		spin_unlock_bh(&bdi_lock);
 	}
 
+	spin_lock_bh(&bdi_lock);
+	list_add_tail_rcu(&bdi->bdi_list, &bdi_list);
+	spin_unlock_bh(&bdi_lock);
+
 	bdi->dev = dev;
 	bdi_debug_register(bdi, dev_name(dev));
 
-- 
1.6.3.rc0.1.gf800


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH 07/11] writeback: allow sleepy exit of default writeback task
  2009-05-18 12:19 [PATCH 0/11] Per-bdi writeback flusher threads #4 Jens Axboe
                   ` (5 preceding siblings ...)
  2009-05-18 12:19 ` [PATCH 06/11] writeback: include default_backing_dev_info in writeback Jens Axboe
@ 2009-05-18 12:19 ` Jens Axboe
  2009-05-18 12:19 ` [PATCH 08/11] writeback: btrfs must register its backing_devices Jens Axboe
                   ` (5 subsequent siblings)
  12 siblings, 0 replies; 57+ messages in thread
From: Jens Axboe @ 2009-05-18 12:19 UTC (permalink / raw
  To: linux-kernel, linux-fsdevel
  Cc: chris.mason, david, hch, akpm, jack, yanmin_zhang, Jens Axboe

Since we lazily create the default writeback task for a bdi, we can
allow it to exit if it has been completely idle for 5 minutes.

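The core of the exit logic, as a simplified sketch of the
fs/fs-writeback.c hunk below: remember when pages were last written, and
let the default task break out of its loop once it has idled past the
threshold (it will be recreated on demand when dirty data shows up
again):

	pages_written = wb_do_writeback(wb);

	if (pages_written)
		last_active = jiffies;
	else if (wait_jiffies != -1UL) {
		/* tolerate at least 5 min (or one wait period) of idle */
		unsigned long max_idle = max(5UL * 60 * HZ, wait_jiffies);

		if (time_after(jiffies, max_idle + last_active) &&
		    wb_is_default_task(wb))
			break;	/* exit; the forker recreates us if needed */
	}
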
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 fs/fs-writeback.c           |   52 ++++++++++++++++++++++++++++++++++--------
 include/linux/backing-dev.h |    5 ++++
 include/linux/writeback.h   |    2 +-
 3 files changed, 48 insertions(+), 11 deletions(-)

diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index d9cd3b7..7e70f80 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -226,10 +226,10 @@ int bdi_start_writeback(struct backing_dev_info *bdi, struct super_block *sb,
  * older_than_this takes precedence over nr_to_write.  So we'll only write back
  * all dirty pages if they are all attached to "old" mappings.
  */
-static void wb_kupdated(struct bdi_writeback *wb)
+static long wb_kupdated(struct bdi_writeback *wb)
 {
 	unsigned long oldest_jif;
-	long nr_to_write;
+	long nr_to_write, wrote = 0;
 	struct writeback_control wbc = {
 		.bdi			= wb->bdi,
 		.sync_mode		= WB_SYNC_NONE,
@@ -252,13 +252,16 @@ static void wb_kupdated(struct bdi_writeback *wb)
 		wbc.encountered_congestion = 0;
 		wbc.nr_to_write = MAX_WRITEBACK_PAGES;
 		generic_sync_wb_inodes(wb, NULL, &wbc);
+		wrote += MAX_WRITEBACK_PAGES - wbc.nr_to_write;
 		if (wbc.nr_to_write > 0)
 			break;	/* All the old data is written */
 		nr_to_write -= MAX_WRITEBACK_PAGES;
 	}
+
+	return wrote;
 }
 
-static void __wb_writeback(struct bdi_writeback *wb, long nr_pages,
+static long __wb_writeback(struct bdi_writeback *wb, long nr_pages,
 			   struct super_block *sb)
 {
 	struct writeback_control wbc = {
@@ -267,6 +270,7 @@ static void __wb_writeback(struct bdi_writeback *wb, long nr_pages,
 		.older_than_this	= NULL,
 		.range_cyclic		= 1,
 	};
+	long wrote = 0;
 
 	for (;;) {
 		unsigned long background_thresh, dirty_thresh;
@@ -283,6 +287,7 @@ static void __wb_writeback(struct bdi_writeback *wb, long nr_pages,
 		wbc.pages_skipped = 0;
 		generic_sync_wb_inodes(wb, sb, &wbc);
 		nr_pages -= MAX_WRITEBACK_PAGES - wbc.nr_to_write;
+		wrote += MAX_WRITEBACK_PAGES - wbc.nr_to_write;
 		/*
 		 * If we ran out of stuff to write, bail unless more_io got set
 		 */
@@ -292,6 +297,8 @@ static void __wb_writeback(struct bdi_writeback *wb, long nr_pages,
 			break;
 		}
 	}
+
+	return wrote;
 }
 
 /*
@@ -317,26 +324,31 @@ static struct bdi_work *get_next_work_item(struct backing_dev_info *bdi,
 	return ret;
 }
 
-static void wb_writeback(struct bdi_writeback *wb)
+static long wb_writeback(struct bdi_writeback *wb)
 {
 	struct backing_dev_info *bdi = wb->bdi;
 	struct bdi_work *work;
+	long wrote = 0;
 
 	while ((work = get_next_work_item(bdi, wb)) != NULL) {
 		struct super_block *sb = bdi_work_sb(work);
 		long nr_pages = work->nr_pages;
 
 		wb_clear_pending(wb, work);
-		__wb_writeback(wb, nr_pages, sb);
+		wrote += __wb_writeback(wb, nr_pages, sb);
 	}
+
+	return wrote;
 }
 
 /*
  * This will be inlined in bdi_writeback_task() once we get rid of any
  * dirty inodes on the default_backing_dev_info
  */
-void wb_do_writeback(struct bdi_writeback *wb)
+long wb_do_writeback(struct bdi_writeback *wb)
 {
+	long wrote;
+
 	/*
 	 * We get here in two cases:
 	 *
@@ -348,9 +360,11 @@ void wb_do_writeback(struct bdi_writeback *wb)
 	 *  items on the work_list. Process those.
 	 */
 	if (list_empty(&wb->bdi->work_list))
-		wb_kupdated(wb);
+		wrote = wb_kupdated(wb);
 	else
-		wb_writeback(wb);
+		wrote = wb_writeback(wb);
+
+	return wrote;
 }
 
 /*
@@ -359,12 +373,30 @@ void wb_do_writeback(struct bdi_writeback *wb)
  */
 int bdi_writeback_task(struct bdi_writeback *wb)
 {
+	unsigned long last_active = jiffies;
+	unsigned long wait_jiffies = -1UL;
+	long pages_written;
 	DEFINE_WAIT(wait);
 
 	while (!kthread_should_stop()) {
-		unsigned long wait_jiffies;
 
-		wb_do_writeback(wb);
+		pages_written = wb_do_writeback(wb);
+
+		if (pages_written)
+			last_active = jiffies;
+		else if (wait_jiffies != -1UL) {
+			unsigned long max_idle;
+
+			/*
+			 * Longest period of inactivity that we tolerate. If we
+			 * see dirty data again later, the task will get
+			 * recreated automatically.
+			 */
+			max_idle = max(5UL * 60 * HZ, wait_jiffies);
+			if (time_after(jiffies, max_idle + last_active) &&
+			    wb_is_default_task(wb))
+				break;
+		}
 
 		prepare_to_wait(&wb->wait, &wait, TASK_INTERRUPTIBLE);
 		wait_jiffies = msecs_to_jiffies(dirty_writeback_interval * 10);
diff --git a/include/linux/backing-dev.h b/include/linux/backing-dev.h
index 6ccfa35..5d93237 100644
--- a/include/linux/backing-dev.h
+++ b/include/linux/backing-dev.h
@@ -112,6 +112,11 @@ int bdi_has_dirty_io(struct backing_dev_info *bdi);
 extern spinlock_t bdi_lock;
 extern struct list_head bdi_list;
 
+static inline int wb_is_default_task(struct bdi_writeback *wb)
+{
+	return wb == &wb->bdi->wb;
+}
+
 static inline int bdi_wblist_needs_lock(struct backing_dev_info *bdi)
 {
 	return test_bit(BDI_wblist_lock, &bdi->state);
diff --git a/include/linux/writeback.h b/include/linux/writeback.h
index e414702..30e318b 100644
--- a/include/linux/writeback.h
+++ b/include/linux/writeback.h
@@ -69,7 +69,7 @@ void writeback_inodes(struct writeback_control *wbc);
 int inode_wait(void *);
 void sync_inodes_sb(struct super_block *, int wait);
 void sync_inodes(int wait);
-void wb_do_writeback(struct bdi_writeback *wb);
+long wb_do_writeback(struct bdi_writeback *wb);
 
 /* writeback.h requires fs.h; it, too, is not included from here. */
 static inline void wait_on_inode(struct inode *inode)
-- 
1.6.3.rc0.1.gf800


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH 08/11] writeback: btrfs must register its backing_devices
  2009-05-18 12:19 [PATCH 0/11] Per-bdi writeback flusher threads #4 Jens Axboe
                   ` (6 preceding siblings ...)
  2009-05-18 12:19 ` [PATCH 07/11] writeback: allow sleepy exit of default writeback task Jens Axboe
@ 2009-05-18 12:19 ` Jens Axboe
  2009-05-18 12:19 ` [PATCH 09/11] writeback: add some debug inode list counters to bdi stats Jens Axboe
                   ` (4 subsequent siblings)
  12 siblings, 0 replies; 57+ messages in thread
From: Jens Axboe @ 2009-05-18 12:19 UTC (permalink / raw
  To: linux-kernel, linux-fsdevel
  Cc: chris.mason, david, hch, akpm, jack, yanmin_zhang, Jens Axboe

btrfs puts dirty inodes on its private backing devices, so it must
register a flusher thread to handle them. This also fixes a failure to
check the bdi_init() return value, and the bad inheritance of the
->capabilities flags from the default bdi.

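The resulting contract is that setup_bdi() may leave a partially
initialized bdi behind on failure, and the caller unwinds with
bdi_destroy(). A sketch of the intended call pattern (matching the
open_ctree() hunk below):

	if (setup_bdi(fs_info, &fs_info->bdi))
		goto fail_bdi;
	...
fail_bdi:
	bdi_destroy(&fs_info->bdi);	/* safe after a partial setup */
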
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 fs/btrfs/disk-io.c |   23 ++++++++++++++++++-----
 1 files changed, 18 insertions(+), 5 deletions(-)

diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index 4b0ea0b..2dc19c9 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -1345,12 +1345,24 @@ static void btrfs_unplug_io_fn(struct backing_dev_info *bdi, struct page *page)
 	free_extent_map(em);
 }
 
+/*
+ * If this fails, caller must call bdi_destroy() to get rid of the
+ * bdi again.
+ */
 static int setup_bdi(struct btrfs_fs_info *info, struct backing_dev_info *bdi)
 {
-	bdi_init(bdi);
+	int err;
+
+	bdi->capabilities = BDI_CAP_MAP_COPY;
+	err = bdi_init(bdi);
+	if (err)
+		return err;
+
+	err = bdi_register(bdi, NULL, "btrfs");
+	if (err)
+		return err;
+
 	bdi->ra_pages	= default_backing_dev_info.ra_pages;
-	bdi->state		= 0;
-	bdi->capabilities	= default_backing_dev_info.capabilities;
 	bdi->unplug_io_fn	= btrfs_unplug_io_fn;
 	bdi->unplug_io_data	= info;
 	bdi->congested_fn	= btrfs_congested_fn;
@@ -1574,7 +1586,8 @@ struct btrfs_root *open_ctree(struct super_block *sb,
 	fs_info->sb = sb;
 	fs_info->max_extent = (u64)-1;
 	fs_info->max_inline = 8192 * 1024;
-	setup_bdi(fs_info, &fs_info->bdi);
+	if (setup_bdi(fs_info, &fs_info->bdi))
+		goto fail_bdi;
 	fs_info->btree_inode = new_inode(sb);
 	fs_info->btree_inode->i_ino = 1;
 	fs_info->btree_inode->i_nlink = 1;
@@ -1931,8 +1944,8 @@ fail_iput:
 
 	btrfs_close_devices(fs_info->fs_devices);
 	btrfs_mapping_tree_free(&fs_info->mapping_tree);
+fail_bdi:
 	bdi_destroy(&fs_info->bdi);
-
 fail:
 	kfree(extent_root);
 	kfree(tree_root);
-- 
1.6.3.rc0.1.gf800


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH 09/11] writeback: add some debug inode list counters to bdi stats
  2009-05-18 12:19 [PATCH 0/11] Per-bdi writeback flusher threads #4 Jens Axboe
                   ` (7 preceding siblings ...)
  2009-05-18 12:19 ` [PATCH 08/11] writeback: btrfs must register its backing_devices Jens Axboe
@ 2009-05-18 12:19 ` Jens Axboe
  2009-05-18 12:19 ` [PATCH 10/11] writeback: add name to backing_dev_info Jens Axboe
                   ` (3 subsequent siblings)
  12 siblings, 0 replies; 57+ messages in thread
From: Jens Axboe @ 2009-05-18 12:19 UTC (permalink / raw
  To: linux-kernel, linux-fsdevel
  Cc: chris.mason, david, hch, akpm, jack, yanmin_zhang, Jens Axboe

Not meant for inclusion, just to monitor what is going on while testing
this stuff.

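For reference, the stats file lives under debugfs (typically
/sys/kernel/debug/bdi/<device>/stats), and with this patch it grows the
new entries shown below. The values here are made up for illustration:

	BdiWriteback:            0 kB
	BdiReclaimable:       1856 kB
	BdiDirtyThresh:      25168 kB
	DirtyThresh:        100675 kB
	BackgroundThresh:    50337 kB
	WriteBack threads:       1
	b_dirty:              1856
	b_io:                    0
	b_more_io:               0
	state:                   8
	wb_mask:                 1
	wb_cnt:                  1
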
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 mm/backing-dev.c |   43 +++++++++++++++++++++++++++++++++++++++----
 1 files changed, 39 insertions(+), 4 deletions(-)

diff --git a/mm/backing-dev.c b/mm/backing-dev.c
index 89d6eea..314b739 100644
--- a/mm/backing-dev.c
+++ b/mm/backing-dev.c
@@ -43,9 +43,33 @@ static void bdi_debug_init(void)
 static int bdi_debug_stats_show(struct seq_file *m, void *v)
 {
 	struct backing_dev_info *bdi = m->private;
+	struct bdi_writeback *wb;
 	unsigned long background_thresh;
 	unsigned long dirty_thresh;
 	unsigned long bdi_thresh;
+	unsigned long nr_dirty, nr_io, nr_more_io, nr_wb;
+	struct inode *inode;
+
+	/*
+	 * inode lock is enough here, the bdi->wb_list is protected by
+	 * RCU on the reader side
+	 */
+	nr_wb = nr_dirty = nr_io = nr_more_io = 0;
+	spin_lock(&inode_lock);
+	list_for_each_entry(wb, &bdi->wb_list, list) {
+		nr_wb++;
+		list_for_each_entry(inode, &wb->b_dirty, i_list)
+			nr_dirty++;
+		list_for_each_entry(inode, &wb->b_io, i_list)
+			nr_io++;
+		list_for_each_entry(inode, &wb->b_more_io, i_list)
+			nr_more_io++;
+	}
+	spin_unlock(&inode_lock);
+
+	nr_dirty <<= (PAGE_CACHE_SHIFT - 10);
+	nr_io <<= (PAGE_CACHE_SHIFT - 10);
+	nr_more_io <<= (PAGE_CACHE_SHIFT - 10);
 
 	get_dirty_limits(&background_thresh, &dirty_thresh, &bdi_thresh, bdi);
 
@@ -55,12 +79,23 @@ static int bdi_debug_stats_show(struct seq_file *m, void *v)
 		   "BdiReclaimable:   %8lu kB\n"
 		   "BdiDirtyThresh:   %8lu kB\n"
 		   "DirtyThresh:      %8lu kB\n"
-		   "BackgroundThresh: %8lu kB\n",
+		   "BackgroundThresh: %8lu kB\n"
+		   "WriteBack threads:%8lu\n"
+		   "b_dirty:          %8lu\n"
+		   "b_io:             %8lu\n"
+		   "b_more_io:        %8lu\n"
+		   "bdi:              %8p\n"
+		   "bdi_list:         %8u\n"
+		   "state:            %8lx\n"
+		   "wb_mask:          %8lx\n"
+		   "wb_list:          %8u\n"
+		   "wb_cnt:           %8u\n",
 		   (unsigned long) K(bdi_stat(bdi, BDI_WRITEBACK)),
 		   (unsigned long) K(bdi_stat(bdi, BDI_RECLAIMABLE)),
-		   K(bdi_thresh),
-		   K(dirty_thresh),
-		   K(background_thresh));
+		   K(bdi_thresh), K(dirty_thresh),
+		   K(background_thresh), nr_wb, nr_dirty, nr_io, nr_more_io,
+		   bdi, !list_empty(&bdi->bdi_list), bdi->state, bdi->wb_mask,
+		   !list_empty(&bdi->wb_list), bdi->wb_cnt);
 #undef K
 
 	return 0;
-- 
1.6.3.rc0.1.gf800


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH 10/11] writeback: add name to backing_dev_info
  2009-05-18 12:19 [PATCH 0/11] Per-bdi writeback flusher threads #4 Jens Axboe
                   ` (8 preceding siblings ...)
  2009-05-18 12:19 ` [PATCH 09/11] writeback: add some debug inode list counters to bdi stats Jens Axboe
@ 2009-05-18 12:19 ` Jens Axboe
  2009-05-18 12:19 ` [PATCH 11/11] writeback: check for registered bdi in flusher add and inode dirty Jens Axboe
                   ` (2 subsequent siblings)
  12 siblings, 0 replies; 57+ messages in thread
From: Jens Axboe @ 2009-05-18 12:19 UTC (permalink / raw
  To: linux-kernel, linux-fsdevel
  Cc: chris.mason, david, hch, akpm, jack, yanmin_zhang, Jens Axboe

Just for testing purposes, to make it easier to track who does what.

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 block/blk-core.c            |    1 +
 drivers/block/aoe/aoeblk.c  |    1 +
 drivers/char/mem.c          |    1 +
 fs/btrfs/disk-io.c          |    1 +
 fs/char_dev.c               |    1 +
 fs/configfs/inode.c         |    1 +
 fs/fuse/inode.c             |    1 +
 fs/hugetlbfs/inode.c        |    1 +
 fs/nfs/client.c             |    1 +
 fs/ocfs2/dlm/dlmfs.c        |    1 +
 fs/ramfs/inode.c            |    1 +
 fs/sysfs/inode.c            |    1 +
 fs/ubifs/super.c            |    1 +
 include/linux/backing-dev.h |    2 ++
 kernel/cgroup.c             |    1 +
 mm/backing-dev.c            |    1 +
 mm/swap_state.c             |    1 +
 17 files changed, 18 insertions(+), 0 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index c89883b..d3f18b5 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -517,6 +517,7 @@ struct request_queue *blk_alloc_queue_node(gfp_t gfp_mask, int node_id)
 
 	q->backing_dev_info.unplug_io_fn = blk_backing_dev_unplug;
 	q->backing_dev_info.unplug_io_data = q;
+	q->backing_dev_info.name = "block";
 	err = bdi_init(&q->backing_dev_info);
 	if (err) {
 		kmem_cache_free(blk_requestq_cachep, q);
diff --git a/drivers/block/aoe/aoeblk.c b/drivers/block/aoe/aoeblk.c
index 2307a27..0efb8fc 100644
--- a/drivers/block/aoe/aoeblk.c
+++ b/drivers/block/aoe/aoeblk.c
@@ -265,6 +265,7 @@ aoeblk_gdalloc(void *vp)
 	}
 
 	blk_queue_make_request(&d->blkq, aoeblk_make_request);
+	d->blkq.backing_dev_info.name = "aoe";
 	if (bdi_init(&d->blkq.backing_dev_info))
 		goto err_mempool;
 	spin_lock_irqsave(&d->lock, flags);
diff --git a/drivers/char/mem.c b/drivers/char/mem.c
index 8f05c38..3b38093 100644
--- a/drivers/char/mem.c
+++ b/drivers/char/mem.c
@@ -820,6 +820,7 @@ static const struct file_operations zero_fops = {
  * - permits private mappings, "copies" are taken of the source of zeros
  */
 static struct backing_dev_info zero_bdi = {
+	.name		= "char/mem",
 	.capabilities	= BDI_CAP_MAP_COPY,
 };
 
diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index 2dc19c9..eff2a82 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -1353,6 +1353,7 @@ static int setup_bdi(struct btrfs_fs_info *info, struct backing_dev_info *bdi)
 {
 	int err;
 
+	bdi->name = "btrfs";
 	bdi->capabilities = BDI_CAP_MAP_COPY;
 	err = bdi_init(bdi);
 	if (err)
diff --git a/fs/char_dev.c b/fs/char_dev.c
index 38f7122..350ef9c 100644
--- a/fs/char_dev.c
+++ b/fs/char_dev.c
@@ -32,6 +32,7 @@
  * - no readahead or I/O queue unplugging required
  */
 struct backing_dev_info directly_mappable_cdev_bdi = {
+	.name = "char",
 	.capabilities	= (
 #ifdef CONFIG_MMU
 		/* permit private copies of the data to be taken */
diff --git a/fs/configfs/inode.c b/fs/configfs/inode.c
index 5d349d3..9a266cd 100644
--- a/fs/configfs/inode.c
+++ b/fs/configfs/inode.c
@@ -46,6 +46,7 @@ static const struct address_space_operations configfs_aops = {
 };
 
 static struct backing_dev_info configfs_backing_dev_info = {
+	.name		= "configfs",
 	.ra_pages	= 0,	/* No readahead */
 	.capabilities	= BDI_CAP_NO_ACCT_AND_WRITEBACK,
 };
diff --git a/fs/fuse/inode.c b/fs/fuse/inode.c
index 91f7c85..e5e8b03 100644
--- a/fs/fuse/inode.c
+++ b/fs/fuse/inode.c
@@ -484,6 +484,7 @@ int fuse_conn_init(struct fuse_conn *fc, struct super_block *sb)
 	INIT_LIST_HEAD(&fc->bg_queue);
 	INIT_LIST_HEAD(&fc->entry);
 	atomic_set(&fc->num_waiting, 0);
+	fc->bdi.name = "fuse";
 	fc->bdi.ra_pages = (VM_MAX_READAHEAD * 1024) / PAGE_CACHE_SIZE;
 	fc->bdi.unplug_io_fn = default_unplug_io_fn;
 	/* fuse does it's own writeback accounting */
diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index c1462d4..db1e537 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -43,6 +43,7 @@ static const struct inode_operations hugetlbfs_dir_inode_operations;
 static const struct inode_operations hugetlbfs_inode_operations;
 
 static struct backing_dev_info hugetlbfs_backing_dev_info = {
+	.name		= "hugetlbfs",
 	.ra_pages	= 0,	/* No readahead */
 	.capabilities	= BDI_CAP_NO_ACCT_AND_WRITEBACK,
 };
diff --git a/fs/nfs/client.c b/fs/nfs/client.c
index 75c9cd2..3a26d06 100644
--- a/fs/nfs/client.c
+++ b/fs/nfs/client.c
@@ -836,6 +836,7 @@ static void nfs_server_set_fsinfo(struct nfs_server *server, struct nfs_fsinfo *
 		server->rsize = NFS_MAX_FILE_IO_SIZE;
 	server->rpages = (server->rsize + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT;
 
+	server->backing_dev_info.name = "nfs";
 	server->backing_dev_info.ra_pages = server->rpages * NFS_MAX_READAHEAD;
 
 	if (server->wsize > max_rpc_payload)
diff --git a/fs/ocfs2/dlm/dlmfs.c b/fs/ocfs2/dlm/dlmfs.c
index 1c9efb4..02bf178 100644
--- a/fs/ocfs2/dlm/dlmfs.c
+++ b/fs/ocfs2/dlm/dlmfs.c
@@ -325,6 +325,7 @@ clear_fields:
 }
 
 static struct backing_dev_info dlmfs_backing_dev_info = {
+	.name		= "ocfs2-dlmfs",
 	.ra_pages	= 0,	/* No readahead */
 	.capabilities	= BDI_CAP_NO_ACCT_AND_WRITEBACK,
 };
diff --git a/fs/ramfs/inode.c b/fs/ramfs/inode.c
index 3a6b193..5a24199 100644
--- a/fs/ramfs/inode.c
+++ b/fs/ramfs/inode.c
@@ -46,6 +46,7 @@ static const struct super_operations ramfs_ops;
 static const struct inode_operations ramfs_dir_inode_operations;
 
 static struct backing_dev_info ramfs_backing_dev_info = {
+	.name		= "ramfs",
 	.ra_pages	= 0,	/* No readahead */
 	.capabilities	= BDI_CAP_NO_ACCT_AND_WRITEBACK |
 			  BDI_CAP_MAP_DIRECT | BDI_CAP_MAP_COPY |
diff --git a/fs/sysfs/inode.c b/fs/sysfs/inode.c
index 555f0ff..e57f98e 100644
--- a/fs/sysfs/inode.c
+++ b/fs/sysfs/inode.c
@@ -29,6 +29,7 @@ static const struct address_space_operations sysfs_aops = {
 };
 
 static struct backing_dev_info sysfs_backing_dev_info = {
+	.name		= "sysfs",
 	.ra_pages	= 0,	/* No readahead */
 	.capabilities	= BDI_CAP_NO_ACCT_AND_WRITEBACK,
 };
diff --git a/fs/ubifs/super.c b/fs/ubifs/super.c
index e9f7a75..2349e2c 100644
--- a/fs/ubifs/super.c
+++ b/fs/ubifs/super.c
@@ -1923,6 +1923,7 @@ static int ubifs_fill_super(struct super_block *sb, void *data, int silent)
 	 *
 	 * Read-ahead will be disabled because @c->bdi.ra_pages is 0.
 	 */
+	c->bdi.name = "ubifs";
 	c->bdi.capabilities = BDI_CAP_MAP_COPY;
 	c->bdi.unplug_io_fn = default_unplug_io_fn;
 	err  = bdi_init(&c->bdi);
diff --git a/include/linux/backing-dev.h b/include/linux/backing-dev.h
index 5d93237..14fa7b1 100644
--- a/include/linux/backing-dev.h
+++ b/include/linux/backing-dev.h
@@ -70,6 +70,8 @@ struct backing_dev_info {
 	void (*unplug_io_fn)(struct backing_dev_info *, struct page *);
 	void *unplug_io_data;
 
+	char *name;
+
 	struct percpu_counter bdi_stat[NR_BDI_STAT_ITEMS];
 
 	struct prop_local_percpu completions;
diff --git a/kernel/cgroup.c b/kernel/cgroup.c
index a7267bf..0863c5f 100644
--- a/kernel/cgroup.c
+++ b/kernel/cgroup.c
@@ -598,6 +598,7 @@ static struct inode_operations cgroup_dir_inode_operations;
 static struct file_operations proc_cgroupstats_operations;
 
 static struct backing_dev_info cgroup_backing_dev_info = {
+	.name		= "cgroup",
 	.capabilities	= BDI_CAP_NO_ACCT_AND_WRITEBACK,
 };
 
diff --git a/mm/backing-dev.c b/mm/backing-dev.c
index 314b739..89a8385 100644
--- a/mm/backing-dev.c
+++ b/mm/backing-dev.c
@@ -17,6 +17,7 @@ void default_unplug_io_fn(struct backing_dev_info *bdi, struct page *page)
 EXPORT_SYMBOL(default_unplug_io_fn);
 
 struct backing_dev_info default_backing_dev_info = {
+	.name		= "default",
 	.ra_pages	= VM_MAX_READAHEAD * 1024 / PAGE_CACHE_SIZE,
 	.state		= 0,
 	.capabilities	= BDI_CAP_MAP_COPY | BDI_CAP_FLUSH_FORKER,
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 3ecea98..323da00 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -34,6 +34,7 @@ static const struct address_space_operations swap_aops = {
 };
 
 static struct backing_dev_info swap_backing_dev_info = {
+	.name		= "swap",
 	.capabilities	= BDI_CAP_NO_ACCT_AND_WRITEBACK | BDI_CAP_SWAP_BACKED,
 	.unplug_io_fn	= swap_unplug_io_fn,
 };
-- 
1.6.3.rc0.1.gf800


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH 11/11] writeback: check for registered bdi in flusher add and inode dirty
  2009-05-18 12:19 [PATCH 0/11] Per-bdi writeback flusher threads #4 Jens Axboe
                   ` (9 preceding siblings ...)
  2009-05-18 12:19 ` [PATCH 10/11] writeback: add name to backing_dev_info Jens Axboe
@ 2009-05-18 12:19 ` Jens Axboe
  2009-05-19  6:11 ` [PATCH 0/11] Per-bdi writeback flusher threads #4 Zhang, Yanmin
  2009-05-25 15:57 ` Richard Kennedy
  12 siblings, 0 replies; 57+ messages in thread
From: Jens Axboe @ 2009-05-18 12:19 UTC (permalink / raw
  To: linux-kernel, linux-fsdevel
  Cc: chris.mason, david, hch, akpm, jack, yanmin_zhang, Jens Axboe

Also a debugging aid. We want to catch dirty inodes being added to
backing devices that don't do writeback.

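The check itself is small; condensed from the fs/fs-writeback.c hunk
below, it fires the first time an inode goes dirty on a bdi that claims
writeback capability but was never registered:

	if (bdi_cap_writeback_dirty(bdi) &&
	    !test_bit(BDI_registered, &bdi->state)) {
		WARN_ON(1);
		printk("bdi-%s not registered\n", bdi->name);
	}
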
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 fs/fs-writeback.c           |    7 +++++++
 include/linux/backing-dev.h |    1 +
 mm/backing-dev.c            |    6 ++++++
 3 files changed, 14 insertions(+), 0 deletions(-)

diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index 7e70f80..a287c09 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -557,6 +557,13 @@ void __mark_inode_dirty(struct inode *inode, int flags)
 		 */
 		if (!was_dirty) {
 			struct bdi_writeback *wb = inode_get_wb(inode);
+			struct backing_dev_info *bdi = wb->bdi;
+
+			if (bdi_cap_writeback_dirty(bdi) &&
+			    !test_bit(BDI_registered, &bdi->state)) {
+				WARN_ON(1);
+				printk("bdi-%s not registered\n", bdi->name);
+			}
 
 			inode->dirtied_when = jiffies;
 			list_move(&inode->i_list, &wb->b_dirty);
diff --git a/include/linux/backing-dev.h b/include/linux/backing-dev.h
index 14fa7b1..7c2874f 100644
--- a/include/linux/backing-dev.h
+++ b/include/linux/backing-dev.h
@@ -30,6 +30,7 @@ enum bdi_state {
 	BDI_wblist_lock,	/* bdi->wb_list now needs locking */
 	BDI_async_congested,	/* The async (write) queue is getting full */
 	BDI_sync_congested,	/* The sync queue is getting full */
+	BDI_registered,		/* bdi_register() was done */
 	BDI_unused,		/* Available bits start here */
 };
 
diff --git a/mm/backing-dev.c b/mm/backing-dev.c
index 89a8385..d45251f 100644
--- a/mm/backing-dev.c
+++ b/mm/backing-dev.c
@@ -514,6 +514,11 @@ static void bdi_add_one_flusher_task(struct backing_dev_info *bdi,
 	if (!bdi_cap_writeback_dirty(bdi))
 		return;
 
+	if (WARN_ON(!test_bit(BDI_registered, &bdi->state))) {
+		printk("bdi %p/%s is not registered!\n", bdi, bdi->name);
+		return;
+	}
+
 	/*
 	 * Check with the helper whether to proceed with adding a task. We
 	 * only abort if two or more simultaneous calls to
@@ -617,6 +622,7 @@ int bdi_register(struct backing_dev_info *bdi, struct device *parent,
 
 	bdi->dev = dev;
 	bdi_debug_register(bdi, dev_name(dev));
+	set_bit(BDI_registered, &bdi->state);
 
 exit:
 	return ret;
-- 
1.6.3.rc0.1.gf800


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* Re: [PATCH 0/11] Per-bdi writeback flusher threads #4
  2009-05-18 12:19 [PATCH 0/11] Per-bdi writeback flusher threads #4 Jens Axboe
                   ` (10 preceding siblings ...)
  2009-05-18 12:19 ` [PATCH 11/11] writeback: check for registered bdi in flusher add and inode dirty Jens Axboe
@ 2009-05-19  6:11 ` Zhang, Yanmin
  2009-05-19  6:20   ` Jens Axboe
  2009-05-25 15:57 ` Richard Kennedy
  12 siblings, 1 reply; 57+ messages in thread
From: Zhang, Yanmin @ 2009-05-19  6:11 UTC (permalink / raw
  To: Jens Axboe
  Cc: linux-kernel, linux-fsdevel, chris.mason, david, hch, akpm, jack

On Mon, 2009-05-18 at 14:19 +0200, Jens Axboe wrote:
> Hi,
> 
> This is the fourth version of this patchset. Changes since v3:
> 
> - Dropped a prep patch, it has been included in mainline since.
> 
> - Add a work-to-do list to the bdi. This is struct bdi_work. Each
>   wb thread will notice and execute work on bdi->work_list. The arguments
>   are which sb (or NULL for all) to flush and how many pages to flush.
> 
> - Fix a bug where not all bdi's would end up on the bdi_list, so potentially
>   some data would not be flushed.
> 
> - Make wb_kupdated() pass on wbc->older_than_this so we maintain the same
>   behaviour for kupdated flushes.
> 
> - Have the wb thread flush first before sleeping, to avoid losing the
>   first flush on lazy register.
> 
> - Rebase to newer kernels.
Jens,

Applied v4 to 2.6.30-rc6 and got some conflict reports.
----------patch-2----------
patching file fs/buffer.c
patching file fs/fs-writeback.c
patching file fs/ntfs/super.c
patching file fs/sync.c
patching file include/linux/backing-dev.h
patching file include/linux/fs.h
patching file include/linux/writeback.h
patching file mm/backing-dev.c
patching file mm/page-writeback.c
Hunk #5 FAILED at 666.
1 out of 6 hunks FAILED -- saving rejects to file mm/page-writeback.c.rej
patching file mm/vmscan.c
----------patch-3----------
patching file fs/fs-writeback.c
patching file include/linux/writeback.h
patching file mm/Makefile
patching file mm/pdflush.c
----------patch-4----------
patching file fs/fs-writeback.c
patching file include/linux/backing-dev.h
patching file mm/backing-dev.c
----------patch-5----------
patching file fs/fs-writeback.c
patching file include/linux/backing-dev.h
patching file include/linux/fs.h
patching file mm/backing-dev.c
patching file mm/page-writeback.c
Hunk #1 succeeded at 708 with fuzz 2 (offset 41 lines).
Hunk #2 FAILED at 716.
1 out of 2 hunks FAILED -- saving rejects to file mm/page-writeback.c.rej


Then I manually fixed the conflicts, but compilation reported errors.
Your patches don't seem to be clean.

  CC      fs/exec.o
mm/page-writeback.c: In function 'background_writeout':
mm/page-writeback.c:695: error: 'MAX_WRITEBACK_PAGES' undeclared (first use in this function)
mm/page-writeback.c:695: error: (Each undeclared identifier is reported only once
mm/page-writeback.c:695: error: for each function it appears in.)
mm/page-writeback.c: In function 'wb_kupdate':
mm/page-writeback.c:769: error: 'MAX_WRITEBACK_PAGES' undeclared (first use in this function)
mm/page-writeback.c: In function 'wb_timer_fn':
mm/page-writeback.c:802: error: implicit declaration of function 'pdflush_operation'
make[1]: *** [mm/page-writeback.o] Error 1
make[1]: *** Waiting for unfinished jobs....
  CC      fs/pipe.o


Yanmin

> 
> - Little fixes here and there.
> 
> So generally not a lot of changes, the major one is using the ->work_list
> and getting rid of writeback_acquire()/writeback_release(). This fixes
> the concern Jan Kara had about missing sync/WB_SYNC_ALL, if writeback
> was already in progress.
> 
> I've run a few benchmarks today:
> 
> 1) Large file writes from a single process
> 2) Random file writes from multiple (16) processes.



^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 0/11] Per-bdi writeback flusher threads #4
  2009-05-19  6:11 ` [PATCH 0/11] Per-bdi writeback flusher threads #4 Zhang, Yanmin
@ 2009-05-19  6:20   ` Jens Axboe
  2009-05-19  6:43     ` Zhang, Yanmin
  2009-05-20  7:51       ` Zhang, Yanmin
  0 siblings, 2 replies; 57+ messages in thread
From: Jens Axboe @ 2009-05-19  6:20 UTC (permalink / raw
  To: Zhang, Yanmin
  Cc: linux-kernel, linux-fsdevel, chris.mason, david, hch, akpm, jack

[-- Attachment #1: Type: text/plain, Size: 3386 bytes --]

On Tue, May 19 2009, Zhang, Yanmin wrote:
> On Mon, 2009-05-18 at 14:19 +0200, Jens Axboe wrote:
> > Hi,
> > 
> > This is the fourth version of this patchset. Changes since v3:
> > 
> > - Dropped a prep patch, it has been included in mainline since.
> > 
> > - Add a work-to-do list to the bdi. This is struct bdi_work. Each
> >   wb thread will notice and execute work on bdi->work_list. The arguments
> >   are which sb (or NULL for all) to flush and how many pages to flush.
> > 
> > - Fix a bug where not all bdi's would end up on the bdi_list, so potentially
> >   some data would not be flushed.
> > 
> > - Make wb_kupdated() pass on wbc->older_than_this so we maintain the same
> >   behaviour for kupdated flushes.
> > 
> > - Have the wb thread flush first before sleeping, to avoid losing the
> >   first flush on lazy register.
> > 
> > - Rebase to newer kernels.
> Jens,
> 
> Applied v4 to 2.6.30-rc6 and got some conflict reports.
> ----------patch-2----------
> patching file fs/buffer.c
> patching file fs/fs-writeback.c
> patching file fs/ntfs/super.c
> patching file fs/sync.c
> patching file include/linux/backing-dev.h
> patching file include/linux/fs.h
> patching file include/linux/writeback.h
> patching file mm/backing-dev.c
> patching file mm/page-writeback.c
> Hunk #5 FAILED at 666.
> 1 out of 6 hunks FAILED -- saving rejects to file mm/page-writeback.c.rej
> patching file mm/vmscan.c
> ----------patch-3----------
> patching file fs/fs-writeback.c
> patching file include/linux/writeback.h
> patching file mm/Makefile
> patching file mm/pdflush.c
> ----------patch-4----------
> patching file fs/fs-writeback.c
> patching file include/linux/backing-dev.h
> patching file mm/backing-dev.c
> ----------patch-5----------
> patching file fs/fs-writeback.c
> patching file include/linux/backing-dev.h
> patching file include/linux/fs.h
> patching file mm/backing-dev.c
> patching file mm/page-writeback.c
> Hunk #1 succeeded at 708 with fuzz 2 (offset 41 lines).
> Hunk #2 FAILED at 716.
> 1 out of 2 hunks FAILED -- saving rejects to file mm/page-writeback.c.rej

It's not against -rc6, it's against current -git. And current -git had a
one-liner fixup to the centiseconds calculation, so it'll fail. If you
apply the patch below to -rc6, then the series should apply cleanly on
top of that.

> Then I manually fixed the conflicts, but compilation reported errors.
> Your patches don't seem to be clean.
> 
>   CC      fs/exec.o
> mm/page-writeback.c: In function 'background_writeout':
> mm/page-writeback.c:695: error: 'MAX_WRITEBACK_PAGES' undeclared (first use in this function)
> mm/page-writeback.c:695: error: (Each undeclared identifier is reported only once
> mm/page-writeback.c:695: error: for each function it appears in.)
> mm/page-writeback.c: In function 'wb_kupdate':
> mm/page-writeback.c:769: error: 'MAX_WRITEBACK_PAGES' undeclared (first use in this function)
> mm/page-writeback.c: In function 'wb_timer_fn':
> mm/page-writeback.c:802: error: implicit declaration of function 'pdflush_operation'
> make[1]: *** [mm/page-writeback.o] Error 1
> make[1]: *** Waiting for unfinished jobs....
>   CC      fs/pipe.o

You still have remnants of pdflush, so there's definitely something
wrong with your manual patching :-)

I'm attaching two patches - apply #1 to -rc6, and then #2 is a roll-up
of the patch series that you can apply next.

-- 
Jens Axboe


[-- Attachment #2: writeback-fix.patch --]
[-- Type: text/x-diff, Size: 2003 bytes --]

commit 22ef37eed673587ac984965dc88ba94c68873291
Author: Toshiyuki Okajima <toshi.okajima@jp.fujitsu.com>
Date:   Sat May 16 22:56:28 2009 -0700

    page-writeback: fix the calculation of the oldest_jif in wb_kupdate()
    
    wb_kupdate() function has a bug on linux-2.6.30-rc5.  This bug causes
    generic_sync_sb_inodes() to start to write inodes back much earlier than
    our expectations because it miscalculates oldest_jif in wb_kupdate().
    
    This bug was introduced in 704503d836042d4a4c7685b7036e7de0418fbc0f
    ('mm: fix proc_dointvec_userhz_jiffies "breakage"').
    
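    Concretely, dirty_expire_interval is kept in centiseconds, so the
    default of 30 * 100 means 30 seconds. The broken code fed that value
    straight to msecs_to_jiffies():
    
        oldest_jif = jiffies - msecs_to_jiffies(3000);   /* 3 seconds  */
    
    whereas the fix below converts centiseconds to milliseconds first:
    
        oldest_jif = jiffies - msecs_to_jiffies(30000);  /* 30 seconds */
    
    so inodes were being expired after 3 seconds instead of the intended
    30.
    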
    Signed-off-by: Toshiyuki Okajima <toshi.okajima@jp.fujitsu.com>
    Cc: Alexey Dobriyan <adobriyan@gmail.com>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Nick Piggin <nickpiggin@yahoo.com.au>
    Cc: Al Viro <viro@zeniv.linux.org.uk>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 30351f0..bb553c3 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -94,12 +94,12 @@ unsigned long vm_dirty_bytes;
 /*
  * The interval between `kupdate'-style writebacks
  */
-unsigned int dirty_writeback_interval = 5 * 100; /* sentiseconds */
+unsigned int dirty_writeback_interval = 5 * 100; /* centiseconds */
 
 /*
  * The longest time for which data is allowed to remain dirty
  */
-unsigned int dirty_expire_interval = 30 * 100; /* sentiseconds */
+unsigned int dirty_expire_interval = 30 * 100; /* centiseconds */
 
 /*
  * Flag that makes the machine dump writes/reads and block dirtyings.
@@ -770,7 +770,7 @@ static void wb_kupdate(unsigned long arg)
 
 	sync_supers();
 
-	oldest_jif = jiffies - msecs_to_jiffies(dirty_expire_interval);
+	oldest_jif = jiffies - msecs_to_jiffies(dirty_expire_interval * 10);
 	start_jif = jiffies;
 	next_jif = start_jif + msecs_to_jiffies(dirty_writeback_interval * 10);
 	nr_to_write = global_page_state(NR_FILE_DIRTY) +

[-- Attachment #3: writeback-20090519 --]
[-- Type: text/plain, Size: 74972 bytes --]

diff --git a/block/blk-core.c b/block/blk-core.c
index c89883b..d3f18b5 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -517,6 +517,7 @@ struct request_queue *blk_alloc_queue_node(gfp_t gfp_mask, int node_id)
 
 	q->backing_dev_info.unplug_io_fn = blk_backing_dev_unplug;
 	q->backing_dev_info.unplug_io_data = q;
+	q->backing_dev_info.name = "block";
 	err = bdi_init(&q->backing_dev_info);
 	if (err) {
 		kmem_cache_free(blk_requestq_cachep, q);
diff --git a/drivers/block/aoe/aoeblk.c b/drivers/block/aoe/aoeblk.c
index 2307a27..0efb8fc 100644
--- a/drivers/block/aoe/aoeblk.c
+++ b/drivers/block/aoe/aoeblk.c
@@ -265,6 +265,7 @@ aoeblk_gdalloc(void *vp)
 	}
 
 	blk_queue_make_request(&d->blkq, aoeblk_make_request);
+	d->blkq.backing_dev_info.name = "aoe";
 	if (bdi_init(&d->blkq.backing_dev_info))
 		goto err_mempool;
 	spin_lock_irqsave(&d->lock, flags);
diff --git a/drivers/char/mem.c b/drivers/char/mem.c
index 8f05c38..3b38093 100644
--- a/drivers/char/mem.c
+++ b/drivers/char/mem.c
@@ -820,6 +820,7 @@ static const struct file_operations zero_fops = {
  * - permits private mappings, "copies" are taken of the source of zeros
  */
 static struct backing_dev_info zero_bdi = {
+	.name		= "char/mem",
 	.capabilities	= BDI_CAP_MAP_COPY,
 };
 
diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index 4b0ea0b..eff2a82 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -1345,12 +1345,25 @@ static void btrfs_unplug_io_fn(struct backing_dev_info *bdi, struct page *page)
 	free_extent_map(em);
 }
 
+/*
+ * If this fails, caller must call bdi_destroy() to get rid of the
+ * bdi again.
+ */
 static int setup_bdi(struct btrfs_fs_info *info, struct backing_dev_info *bdi)
 {
-	bdi_init(bdi);
+	int err;
+
+	bdi->name = "btrfs";
+	bdi->capabilities = BDI_CAP_MAP_COPY;
+	err = bdi_init(bdi);
+	if (err)
+		return err;
+
+	err = bdi_register(bdi, NULL, "btrfs");
+	if (err)
+		return err;
+
 	bdi->ra_pages	= default_backing_dev_info.ra_pages;
-	bdi->state		= 0;
-	bdi->capabilities	= default_backing_dev_info.capabilities;
 	bdi->unplug_io_fn	= btrfs_unplug_io_fn;
 	bdi->unplug_io_data	= info;
 	bdi->congested_fn	= btrfs_congested_fn;
@@ -1574,7 +1587,8 @@ struct btrfs_root *open_ctree(struct super_block *sb,
 	fs_info->sb = sb;
 	fs_info->max_extent = (u64)-1;
 	fs_info->max_inline = 8192 * 1024;
-	setup_bdi(fs_info, &fs_info->bdi);
+	if (setup_bdi(fs_info, &fs_info->bdi))
+		goto fail_bdi;
 	fs_info->btree_inode = new_inode(sb);
 	fs_info->btree_inode->i_ino = 1;
 	fs_info->btree_inode->i_nlink = 1;
@@ -1931,8 +1945,8 @@ fail_iput:
 
 	btrfs_close_devices(fs_info->fs_devices);
 	btrfs_mapping_tree_free(&fs_info->mapping_tree);
+fail_bdi:
 	bdi_destroy(&fs_info->bdi);
-
 fail:
 	kfree(extent_root);
 	kfree(tree_root);
diff --git a/fs/buffer.c b/fs/buffer.c
index aed2977..14f0802 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -281,7 +281,7 @@ static void free_more_memory(void)
 	struct zone *zone;
 	int nid;
 
-	wakeup_pdflush(1024);
+	wakeup_flusher_threads(1024);
 	yield();
 
 	for_each_online_node(nid) {
diff --git a/fs/char_dev.c b/fs/char_dev.c
index 38f7122..350ef9c 100644
--- a/fs/char_dev.c
+++ b/fs/char_dev.c
@@ -32,6 +32,7 @@
  * - no readahead or I/O queue unplugging required
  */
 struct backing_dev_info directly_mappable_cdev_bdi = {
+	.name = "char",
 	.capabilities	= (
 #ifdef CONFIG_MMU
 		/* permit private copies of the data to be taken */
diff --git a/fs/configfs/inode.c b/fs/configfs/inode.c
index 5d349d3..9a266cd 100644
--- a/fs/configfs/inode.c
+++ b/fs/configfs/inode.c
@@ -46,6 +46,7 @@ static const struct address_space_operations configfs_aops = {
 };
 
 static struct backing_dev_info configfs_backing_dev_info = {
+	.name		= "configfs",
 	.ra_pages	= 0,	/* No readahead */
 	.capabilities	= BDI_CAP_NO_ACCT_AND_WRITEBACK,
 };
diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index 91013ff..a287c09 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -19,49 +19,443 @@
 #include <linux/sched.h>
 #include <linux/fs.h>
 #include <linux/mm.h>
+#include <linux/kthread.h>
+#include <linux/freezer.h>
 #include <linux/writeback.h>
 #include <linux/blkdev.h>
 #include <linux/backing-dev.h>
 #include <linux/buffer_head.h>
 #include "internal.h"
 
+#define inode_to_bdi(inode)	((inode)->i_mapping->backing_dev_info)
 
-/**
- * writeback_acquire - attempt to get exclusive writeback access to a device
- * @bdi: the device's backing_dev_info structure
- *
- * It is a waste of resources to have more than one pdflush thread blocked on
- * a single request queue.  Exclusion at the request_queue level is obtained
- * via a flag in the request_queue's backing_dev_info.state.
- *
- * Non-request_queue-backed address_spaces will share default_backing_dev_info,
- * unless they implement their own.  Which is somewhat inefficient, as this
- * may prevent concurrent writeback against multiple devices.
+/*
+ * We don't actually have pdflush, but this one is exported though /proc...
  */
-static int writeback_acquire(struct backing_dev_info *bdi)
+int nr_pdflush_threads;
+
+static void generic_sync_wb_inodes(struct bdi_writeback *wb,
+				   struct super_block *sb,
+				   struct writeback_control *wbc);
+
+/*
+ * Work items for the bdi_writeback threads
+ */
+struct bdi_work {
+	struct list_head list;
+	struct rcu_head rcu_head;
+
+	unsigned long seen;
+	atomic_t pending;
+
+	unsigned long sb_data;
+	unsigned long nr_pages;
+
+	unsigned long state;
+};
+
+static struct super_block *bdi_work_sb(struct bdi_work *work)
+{
+	return (struct super_block *) (work->sb_data & ~1UL);
+}
+
+static inline bool bdi_work_on_stack(struct bdi_work *work)
+{
+	return work->sb_data & 1UL;
+}
+
+static inline void bdi_work_init(struct bdi_work *work, struct super_block *sb,
+				 unsigned long nr_pages)
 {
-	return !test_and_set_bit(BDI_pdflush, &bdi->state);
+	INIT_RCU_HEAD(&work->rcu_head);
+	work->sb_data = (unsigned long) sb;
+	work->nr_pages = nr_pages;
+	work->state = 0;
+}
+
+static inline void bdi_work_init_on_stack(struct bdi_work *work,
+					  struct super_block *sb,
+					  unsigned long nr_pages)
+{
+	bdi_work_init(work, sb, nr_pages);
+	set_bit(0, &work->state);
+	work->sb_data |= 1UL;
 }
 
 /**
  * writeback_in_progress - determine whether there is writeback in progress
  * @bdi: the device's backing_dev_info structure.
  *
- * Determine whether there is writeback in progress against a backing device.
+ * Determine whether there is writeback waiting to be handled against a
+ * backing device.
  */
 int writeback_in_progress(struct backing_dev_info *bdi)
 {
-	return test_bit(BDI_pdflush, &bdi->state);
+	return !list_empty(&bdi->work_list);
 }
 
-/**
- * writeback_release - relinquish exclusive writeback access against a device.
- * @bdi: the device's backing_dev_info structure
+static void bdi_work_free(struct rcu_head *head)
+{
+	struct bdi_work *work = container_of(head, struct bdi_work, rcu_head);
+
+	if (!bdi_work_on_stack(work))
+		kfree(work);
+	else {
+		clear_bit(0, &work->state);
+		wake_up_bit(&work->state, 0);
+	}
+}
+
+static void wb_clear_pending(struct bdi_writeback *wb, struct bdi_work *work)
+{
+	/*
+	 * The caller has retrieved the work arguments from this work,
+	 * drop our reference. If this is the last ref, delete and free it
+	 */
+	if (atomic_dec_and_test(&work->pending)) {
+		struct backing_dev_info *bdi = wb->bdi;
+
+		spin_lock(&bdi->wb_lock);
+		list_del_rcu(&work->list);
+		spin_unlock(&bdi->wb_lock);
+
+		call_rcu(&work->rcu_head, bdi_work_free);
+	}
+}
+
+static void wb_start_writeback(struct bdi_writeback *wb, struct bdi_work *work)
+{
+	/*
+	 * If we failed allocating the bdi work item, wake up the wb thread
+	 * always. As a safety precaution, it'll flush out everything
+	 */
+	if (!wb_has_dirty_io(wb) && work)
+		wb_clear_pending(wb, work);
+	else
+		wake_up(&wb->wait);
+}
+
+static int bdi_queue_writeback(struct backing_dev_info *bdi,
+			       struct bdi_work *work)
+{
+	if (work) {
+		work->seen = bdi->wb_mask;
+		atomic_set(&work->pending, bdi->wb_cnt);
+
+		/*
+		 * Make sure stores are seen before it appears on the list
+		 */
+		smp_mb();
+
+		spin_lock(&bdi->wb_lock);
+		list_add_tail_rcu(&work->list, &bdi->work_list);
+		spin_unlock(&bdi->wb_lock);
+	}
+
+	/*
+	 * This only happens the first time someone kicks this bdi, so put
+	 * it out-of-line.
+	 */
+	if (unlikely(list_empty_careful(&bdi->wb_list))) {
+		bdi_add_default_flusher_task(bdi);
+		return 1;
+	}
+
+	if (!bdi_wblist_needs_lock(bdi))
+		wb_start_writeback(&bdi->wb, work);
+	else {
+		struct bdi_writeback *wb;
+		int idx;
+
+		idx = srcu_read_lock(&bdi->srcu);
+
+		list_for_each_entry_rcu(wb, &bdi->wb_list, list)
+			wb_start_writeback(wb, work);
+
+		srcu_read_unlock(&bdi->srcu, idx);
+	}
+
+	return 0;
+}
+
+/*
+ * Used for on-stack allocated work items. The caller needs to wait until
+ * the wb threads have acked the work before it's safe to continue.
+ */
+static void bdi_wait_on_work_start(struct bdi_work *work)
+{
+	wait_on_bit(&work->state, 0, bdi_sched_wait, TASK_UNINTERRUPTIBLE);
+}
+
+int bdi_start_writeback(struct backing_dev_info *bdi, struct super_block *sb,
+			 long nr_pages)
+{
+	struct bdi_work work;
+	int ret;
+
+	bdi_work_init_on_stack(&work, sb, nr_pages);
+
+	ret = bdi_queue_writeback(bdi, &work);
+
+	bdi_wait_on_work_start(&work);
+
+	return ret;
+}
+
+/*
+ * The maximum number of pages to writeout in a single bdi flush/kupdate
+ * operation.  We do this so we don't hold I_SYNC against an inode for
+ * enormous amounts of time, which would block a userspace task which has
+ * been forced to throttle against that inode.  Also, the code reevaluates
+ * the dirty state each time it has written this many pages.
+ */
+#define MAX_WRITEBACK_PAGES     1024
+
+/*
+ * Periodic writeback of "old" data.
+ *
+ * Define "old": the first time one of an inode's pages is dirtied, we mark the
+ * dirtying-time in the inode's address_space.  So this periodic writeback code
+ * just walks the superblock inode list, writing back any inodes which are
+ * older than a specific point in time.
+ *
+ * Try to run once per dirty_writeback_interval.  But if a writeback event
+ * takes longer than a dirty_writeback_interval interval, then leave a
+ * one-second gap.
+ *
+ * older_than_this takes precedence over nr_to_write.  So we'll only write back
+ * all dirty pages if they are all attached to "old" mappings.
+ */
+static long wb_kupdated(struct bdi_writeback *wb)
+{
+	unsigned long oldest_jif;
+	long nr_to_write, wrote = 0;
+	struct writeback_control wbc = {
+		.bdi			= wb->bdi,
+		.sync_mode		= WB_SYNC_NONE,
+		.older_than_this	= &oldest_jif,
+		.nr_to_write		= 0,
+		.for_kupdate		= 1,
+		.range_cyclic		= 1,
+	};
+
+	sync_supers();
+
+	oldest_jif = jiffies - msecs_to_jiffies(dirty_expire_interval * 10);
+
+	nr_to_write = global_page_state(NR_FILE_DIRTY) +
+			global_page_state(NR_UNSTABLE_NFS) +
+			(inodes_stat.nr_inodes - inodes_stat.nr_unused);
+
+	while (nr_to_write > 0) {
+		wbc.more_io = 0;
+		wbc.encountered_congestion = 0;
+		wbc.nr_to_write = MAX_WRITEBACK_PAGES;
+		generic_sync_wb_inodes(wb, NULL, &wbc);
+		wrote += MAX_WRITEBACK_PAGES - wbc.nr_to_write;
+		if (wbc.nr_to_write > 0)
+			break;	/* All the old data is written */
+		nr_to_write -= MAX_WRITEBACK_PAGES;
+	}
+
+	return wrote;
+}
+
+static long __wb_writeback(struct bdi_writeback *wb, long nr_pages,
+			   struct super_block *sb)
+{
+	struct writeback_control wbc = {
+		.bdi			= wb->bdi,
+		.sync_mode		= WB_SYNC_NONE,
+		.older_than_this	= NULL,
+		.range_cyclic		= 1,
+	};
+	long wrote = 0;
+
+	for (;;) {
+		unsigned long background_thresh, dirty_thresh;
+
+		get_dirty_limits(&background_thresh, &dirty_thresh, NULL, NULL);
+		if ((global_page_state(NR_FILE_DIRTY) +
+		    global_page_state(NR_UNSTABLE_NFS) < background_thresh) &&
+		    nr_pages <= 0)
+			break;
+
+		wbc.more_io = 0;
+		wbc.encountered_congestion = 0;
+		wbc.nr_to_write = MAX_WRITEBACK_PAGES;
+		wbc.pages_skipped = 0;
+		generic_sync_wb_inodes(wb, sb, &wbc);
+		nr_pages -= MAX_WRITEBACK_PAGES - wbc.nr_to_write;
+		wrote += MAX_WRITEBACK_PAGES - wbc.nr_to_write;
+		/*
+		 * If we ran out of stuff to write, bail unless more_io got set
+		 */
+		if (wbc.nr_to_write > 0 || wbc.pages_skipped > 0) {
+			if (wbc.more_io)
+				continue;
+			break;
+		}
+	}
+
+	return wrote;
+}
+
+/*
+ * Return the next bdi_work struct that hasn't been processed by this
+ * wb thread yet
+ */
+static struct bdi_work *get_next_work_item(struct backing_dev_info *bdi,
+					   struct bdi_writeback *wb)
+{
+	struct bdi_work *work, *ret = NULL;
+
+	rcu_read_lock();
+
+	list_for_each_entry_rcu(work, &bdi->work_list, list) {
+		if (!test_and_clear_bit(wb->nr, &work->seen))
+			continue;
+
+		ret = work;
+		break;
+	}
+
+	rcu_read_unlock();
+	return ret;
+}
+
+static long wb_writeback(struct bdi_writeback *wb)
+{
+	struct backing_dev_info *bdi = wb->bdi;
+	struct bdi_work *work;
+	long wrote = 0;
+
+	while ((work = get_next_work_item(bdi, wb)) != NULL) {
+		struct super_block *sb = bdi_work_sb(work);
+		long nr_pages = work->nr_pages;
+
+		wb_clear_pending(wb, work);
+		wrote += __wb_writeback(wb, nr_pages, sb);
+	}
+
+	return wrote;
+}
+
+/*
+ * This will be inlined in bdi_writeback_task() once we get rid of any
+ * dirty inodes on the default_backing_dev_info
+ */
+long wb_do_writeback(struct bdi_writeback *wb)
+{
+	long wrote;
+
+	/*
+	 * We get here in two cases:
+	 *
+	 *  schedule_timeout() returned because the dirty writeback
+	 *  interval has elapsed. If that happens, the work item list
+	 *  will be empty and we will proceed to do kupdated style writeout.
+	 *
+	 *  Someone called bdi_start_writeback(), which put one/more work
+	 *  items on the work_list. Process those.
+	 */
+	if (list_empty(&wb->bdi->work_list))
+		wrote = wb_kupdated(wb);
+	else
+		wrote = wb_writeback(wb);
+
+	return wrote;
+}
+
+/*
+ * Handle writeback of dirty data for the device backed by this bdi. Also
+ * wakes up periodically and does kupdated style flushing.
  */
-static void writeback_release(struct backing_dev_info *bdi)
+int bdi_writeback_task(struct bdi_writeback *wb)
 {
-	BUG_ON(!writeback_in_progress(bdi));
-	clear_bit(BDI_pdflush, &bdi->state);
+	unsigned long last_active = jiffies;
+	unsigned long wait_jiffies = -1UL;
+	long pages_written;
+	DEFINE_WAIT(wait);
+
+	while (!kthread_should_stop()) {
+
+		pages_written = wb_do_writeback(wb);
+
+		if (pages_written)
+			last_active = jiffies;
+		else if (wait_jiffies != -1UL) {
+			unsigned long max_idle;
+
+			/*
+			 * Longest period of inactivity that we tolerate. If we
+			 * see dirty data again later, the task will get
+			 * recreated automatically.
+			 */
+			max_idle = max(5UL * 60 * HZ, wait_jiffies);
+			if (time_after(jiffies, max_idle + last_active) &&
+			    wb_is_default_task(wb))
+				break;
+		}
+
+		prepare_to_wait(&wb->wait, &wait, TASK_INTERRUPTIBLE);
+		wait_jiffies = msecs_to_jiffies(dirty_writeback_interval * 10);
+		schedule_timeout(wait_jiffies);
+		try_to_freeze();
+	}
+
+	finish_wait(&wb->wait, &wait);
+	return 0;
+}
+
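With the default dirty_writeback_interval of 500 centisecs the thread naps in
5 second slices, and the embedded per-bdi thread retires itself after
max(5 minutes, one interval) without work; explicitly added flushers are not
subject to the idle exit. Spelled out (illustrative):

    wait_jiffies = msecs_to_jiffies(500 * 10);	/* 5 * HZ   */
    max_idle = max(5UL * 60 * HZ, wait_jiffies);	/* 300 * HZ */
    /* the break is only taken when wb_is_default_task(wb); the forker
     * recreates that thread on demand when dirty data shows up again */
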
+void bdi_writeback_all(struct super_block *sb, long nr_pages)
+{
+	struct list_head *entry = &bdi_list;
+
+	rcu_read_lock();
+
+	list_for_each_continue_rcu(entry, &bdi_list) {
+		struct backing_dev_info *bdi;
+		struct list_head *next;
+		struct bdi_work *work;
+
+		bdi = list_entry(entry, struct backing_dev_info, bdi_list);
+		if (!bdi_has_dirty_io(bdi))
+			continue;
+
+		/*
+		 * If this allocation fails, we just wakeup the thread and
+		 * let it do kupdate writeback
+		 */
+		work = kmalloc(sizeof(*work), GFP_ATOMIC);
+		if (work)
+			bdi_work_init(work, sb, nr_pages);
+
+		/*
+		 * Prepare to start from previous entry if this one gets moved
+		 * to the bdi_pending list.
+		 */
+		next = entry->prev;
+		if (bdi_queue_writeback(bdi, work))
+			entry = next;
+	}
+
+	rcu_read_unlock();
+}
+
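Typical usage from a caller that just wants dirty data pushed out, as the
wakeup_flusher_threads() and generic_sync_sb_inodes() paths further down in
this patch do:

    /* nudge every dirty bdi; nr_pages == 0 means write back until the
     * background threshold is reached */
    bdi_writeback_all(NULL, 0);

    /* flush ~1024 pages per dirty bdi on behalf of one superblock */
    bdi_writeback_all(sb, 1024);
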
+/*
+ * If the filesystem didn't provide a way to map an inode to a dedicated
+ * flusher thread, it doesn't support more than 1 thread. So we know it's
+ * the default thread, return that.
+ */
+static inline struct bdi_writeback *inode_get_wb(struct inode *inode)
+{
+	const struct super_operations *sop = inode->i_sb->s_op;
+
+	if (!sop->inode_get_wb)
+		return &inode_to_bdi(inode)->wb;
+
+	return sop->inode_get_wb(inode);
 }
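
A filesystem that runs more than one flusher thread per device would provide
the hook; everything else transparently falls back to the bdi's embedded
thread. Hypothetical sketch; 'myfs' and its per-sb thread array are made up
for illustration:

    static struct bdi_writeback *myfs_inode_get_wb(struct inode *inode)
    {
    	struct myfs_sb_info *sbi = inode->i_sb->s_fs_info;

    	/* e.g. spread inodes across the flushers by inode number */
    	return sbi->wb[inode->i_ino % sbi->nr_wbs];
    }

    static const struct super_operations myfs_sops = {
    	.inode_get_wb	= myfs_inode_get_wb,
    	/* ... */
    };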
 
 /**
@@ -158,12 +552,21 @@ void __mark_inode_dirty(struct inode *inode, int flags)
 			goto out;
 
 		/*
-		 * If the inode was already on s_dirty/s_io/s_more_io, don't
-		 * reposition it (that would break s_dirty time-ordering).
+		 * If the inode was already on b_dirty/b_io/b_more_io, don't
+		 * reposition it (that would break b_dirty time-ordering).
 		 */
 		if (!was_dirty) {
+			struct bdi_writeback *wb = inode_get_wb(inode);
+			struct backing_dev_info *bdi = wb->bdi;
+
+			if (bdi_cap_writeback_dirty(bdi) &&
+			    !test_bit(BDI_registered, &bdi->state)) {
+				WARN_ON(1);
+				printk(KERN_ERR "bdi-%s not registered\n", bdi->name);
+			}
+
 			inode->dirtied_when = jiffies;
-			list_move(&inode->i_list, &sb->s_dirty);
+			list_move(&inode->i_list, &wb->b_dirty);
 		}
 	}
 out:
@@ -184,31 +587,32 @@ static int write_inode(struct inode *inode, int sync)
  * furthest end of its superblock's dirty-inode list.
  *
  * Before stamping the inode's ->dirtied_when, we check to see whether it is
- * already the most-recently-dirtied inode on the s_dirty list.  If that is
+ * already the most-recently-dirtied inode on the b_dirty list.  If that is
  * the case then the inode must have been redirtied while it was being written
  * out and we don't reset its dirtied_when.
  */
 static void redirty_tail(struct inode *inode)
 {
-	struct super_block *sb = inode->i_sb;
+	struct bdi_writeback *wb = inode_get_wb(inode);
 
-	if (!list_empty(&sb->s_dirty)) {
-		struct inode *tail_inode;
+	if (!list_empty(&wb->b_dirty)) {
+		struct inode *tail;
 
-		tail_inode = list_entry(sb->s_dirty.next, struct inode, i_list);
-		if (time_before(inode->dirtied_when,
-				tail_inode->dirtied_when))
+		tail = list_entry(wb->b_dirty.next, struct inode, i_list);
+		if (time_before(inode->dirtied_when, tail->dirtied_when))
 			inode->dirtied_when = jiffies;
 	}
-	list_move(&inode->i_list, &sb->s_dirty);
+	list_move(&inode->i_list, &wb->b_dirty);
 }
 
 /*
- * requeue inode for re-scanning after sb->s_io list is exhausted.
+ * requeue inode for re-scanning after bdi->b_io list is exhausted.
  */
 static void requeue_io(struct inode *inode)
 {
-	list_move(&inode->i_list, &inode->i_sb->s_more_io);
+	struct bdi_writeback *wb = inode_get_wb(inode);
+
+	list_move(&inode->i_list, &wb->b_more_io);
 }
 
 static void inode_sync_complete(struct inode *inode)
@@ -255,21 +659,12 @@ static void move_expired_inodes(struct list_head *delaying_queue,
 /*
  * Queue all expired dirty inodes for io, eldest first.
  */
-static void queue_io(struct super_block *sb,
-				unsigned long *older_than_this)
+static void queue_io(struct bdi_writeback *wb, unsigned long *older_than_this)
 {
-	list_splice_init(&sb->s_more_io, sb->s_io.prev);
-	move_expired_inodes(&sb->s_dirty, &sb->s_io, older_than_this);
+	list_splice_init(&wb->b_more_io, wb->b_io.prev);
+	move_expired_inodes(&wb->b_dirty, &wb->b_io, older_than_this);
 }
 
-int sb_has_dirty_inodes(struct super_block *sb)
-{
-	return !list_empty(&sb->s_dirty) ||
-	       !list_empty(&sb->s_io) ||
-	       !list_empty(&sb->s_more_io);
-}
-EXPORT_SYMBOL(sb_has_dirty_inodes);
-
 /*
  * Write a single inode's dirty pages and inode data out to disk.
  * If `wait' is set, wait on the writeout.
@@ -322,11 +717,11 @@ __sync_single_inode(struct inode *inode, struct writeback_control *wbc)
 			/*
 			 * We didn't write back all the pages.  nfs_writepages()
 			 * sometimes bales out without doing anything. Redirty
-			 * the inode; Move it from s_io onto s_more_io/s_dirty.
+			 * the inode; Move it from b_io onto b_more_io/b_dirty.
 			 */
 			/*
 			 * akpm: if the caller was the kupdate function we put
-			 * this inode at the head of s_dirty so it gets first
+			 * this inode at the head of b_dirty so it gets first
 			 * consideration.  Otherwise, move it to the tail, for
 			 * the reasons described there.  I'm not really sure
 			 * how much sense this makes.  Presumably I had a good
@@ -336,7 +731,7 @@ __sync_single_inode(struct inode *inode, struct writeback_control *wbc)
 			if (wbc->for_kupdate) {
 				/*
 				 * For the kupdate function we move the inode
-				 * to s_more_io so it will get more writeout as
+				 * to b_more_io so it will get more writeout as
 				 * soon as the queue becomes uncongested.
 				 */
 				inode->i_state |= I_DIRTY_PAGES;
@@ -402,10 +797,10 @@ __writeback_single_inode(struct inode *inode, struct writeback_control *wbc)
 	if ((wbc->sync_mode != WB_SYNC_ALL) && (inode->i_state & I_SYNC)) {
 		/*
 		 * We're skipping this inode because it's locked, and we're not
-		 * doing writeback-for-data-integrity.  Move it to s_more_io so
-		 * that writeback can proceed with the other inodes on s_io.
+		 * doing writeback-for-data-integrity.  Move it to b_more_io so
+		 * that writeback can proceed with the other inodes on b_io.
 		 * We'll have another go at writing back this inode when we
-		 * completed a full scan of s_io.
+		 * completed a full scan of b_io.
 		 */
 		requeue_io(inode);
 		return 0;
@@ -428,51 +823,34 @@ __writeback_single_inode(struct inode *inode, struct writeback_control *wbc)
 	return __sync_single_inode(inode, wbc);
 }
 
-/*
- * Write out a superblock's list of dirty inodes.  A wait will be performed
- * upon no inodes, all inodes or the final one, depending upon sync_mode.
- *
- * If older_than_this is non-NULL, then only write out inodes which
- * had their first dirtying at a time earlier than *older_than_this.
- *
- * If we're a pdflush thread, then implement pdflush collision avoidance
- * against the entire list.
- *
- * If `bdi' is non-zero then we're being asked to writeback a specific queue.
- * This function assumes that the blockdev superblock's inodes are backed by
- * a variety of queues, so all inodes are searched.  For other superblocks,
- * assume that all inodes are backed by the same queue.
- *
- * FIXME: this linear search could get expensive with many fileystems.  But
- * how to fix?  We need to go from an address_space to all inodes which share
- * a queue with that address_space.  (Easy: have a global "dirty superblocks"
- * list).
- *
- * The inodes to be written are parked on sb->s_io.  They are moved back onto
- * sb->s_dirty as they are selected for writing.  This way, none can be missed
- * on the writer throttling path, and we get decent balancing between many
- * throttled threads: we don't want them all piling up on inode_sync_wait.
- */
-void generic_sync_sb_inodes(struct super_block *sb,
-				struct writeback_control *wbc)
+static void generic_sync_wb_inodes(struct bdi_writeback *wb,
+				   struct super_block *sb,
+				   struct writeback_control *wbc)
 {
+	const int is_blkdev_sb = sb_is_blkdev_sb(sb);
 	const unsigned long start = jiffies;	/* livelock avoidance */
-	int sync = wbc->sync_mode == WB_SYNC_ALL;
 
 	spin_lock(&inode_lock);
-	if (!wbc->for_kupdate || list_empty(&sb->s_io))
-		queue_io(sb, wbc->older_than_this);
 
-	while (!list_empty(&sb->s_io)) {
-		struct inode *inode = list_entry(sb->s_io.prev,
+	if (!wbc->for_kupdate || list_empty(&wb->b_io))
+		queue_io(wb, wbc->older_than_this);
+
+	while (!list_empty(&wb->b_io)) {
+		struct inode *inode = list_entry(wb->b_io.prev,
 						struct inode, i_list);
-		struct address_space *mapping = inode->i_mapping;
-		struct backing_dev_info *bdi = mapping->backing_dev_info;
 		long pages_skipped;
 
-		if (!bdi_cap_writeback_dirty(bdi)) {
+		/*
+		 * super block given and doesn't match, skip this inode
+		 */
+		if (sb && sb != inode->i_sb) {
 			redirty_tail(inode);
-			if (sb_is_blkdev_sb(sb)) {
+			continue;
+		}
+
+		if (!bdi_cap_writeback_dirty(wb->bdi)) {
+			redirty_tail(inode);
+			if (is_blkdev_sb) {
 				/*
 				 * Dirty memory-backed blockdev: the ramdisk
 				 * driver does this.  Skip just this inode
@@ -492,21 +870,14 @@ void generic_sync_sb_inodes(struct super_block *sb,
 			continue;
 		}
 
-		if (wbc->nonblocking && bdi_write_congested(bdi)) {
+		if (wbc->nonblocking && bdi_write_congested(wb->bdi)) {
 			wbc->encountered_congestion = 1;
-			if (!sb_is_blkdev_sb(sb))
+			if (!is_blkdev_sb)
 				break;		/* Skip a congested fs */
 			requeue_io(inode);
 			continue;		/* Skip a congested blockdev */
 		}
 
-		if (wbc->bdi && bdi != wbc->bdi) {
-			if (!sb_is_blkdev_sb(sb))
-				break;		/* fs has the wrong queue */
-			requeue_io(inode);
-			continue;		/* blockdev has wrong queue */
-		}
-
 		/*
 		 * Was this inode dirtied after sync_sb_inodes was called?
 		 * This keeps sync from extra jobs and livelock.
@@ -514,16 +885,10 @@ void generic_sync_sb_inodes(struct super_block *sb,
 		if (inode_dirtied_after(inode, start))
 			break;
 
-		/* Is another pdflush already flushing this queue? */
-		if (current_is_pdflush() && !writeback_acquire(bdi))
-			break;
-
 		BUG_ON(inode->i_state & I_FREEING);
 		__iget(inode);
 		pages_skipped = wbc->pages_skipped;
 		__writeback_single_inode(inode, wbc);
-		if (current_is_pdflush())
-			writeback_release(bdi);
 		if (wbc->pages_skipped != pages_skipped) {
 			/*
 			 * writeback is not making progress due to locked
@@ -539,13 +904,71 @@ void generic_sync_sb_inodes(struct super_block *sb,
 			wbc->more_io = 1;
 			break;
 		}
-		if (!list_empty(&sb->s_more_io))
+		if (!list_empty(&wb->b_more_io))
 			wbc->more_io = 1;
 	}
 
-	if (sync) {
+	spin_unlock(&inode_lock);
+	/* Leave any unwritten inodes on b_io */
+}
+
+void generic_sync_bdi_inodes(struct super_block *sb,
+			     struct writeback_control *wbc)
+{
+	struct backing_dev_info *bdi = wbc->bdi;
+	struct bdi_writeback *wb;
+
+	/*
+	 * Common case is just a single wb thread and that is embedded in
+	 * the bdi, so it doesn't need locking
+	 */
+	if (!bdi_wblist_needs_lock(bdi))
+		generic_sync_wb_inodes(&bdi->wb, sb, wbc);
+	else {
+		int idx;
+
+		idx = srcu_read_lock(&bdi->srcu);
+
+		list_for_each_entry_rcu(wb, &bdi->wb_list, list)
+			generic_sync_wb_inodes(wb, sb, wbc);
+
+		srcu_read_unlock(&bdi->srcu, idx);
+	}
+}
+
+/*
+ * Write out a superblock's list of dirty inodes.  A wait will be performed
+ * upon no inodes, all inodes or the final one, depending upon sync_mode.
+ *
+ * If older_than_this is non-NULL, then only write out inodes which
+ * had their first dirtying at a time earlier than *older_than_this.
+ *
+ * If `bdi' is non-zero then we're being asked to writeback a specific queue.
+ * This function assumes that the blockdev superblock's inodes are backed by
+ * a variety of queues, so all inodes are searched.  For other superblocks,
+ * assume that all inodes are backed by the same queue.
+ *
+ * The inodes to be written are parked on bdi->b_io.  They are moved back onto
+ * bdi->b_dirty as they are selected for writing.  This way, none can be missed
+ * on the writer throttling path, and we get decent balancing between many
+ * throttled threads: we don't want them all piling up on inode_sync_wait.
+ */
+void generic_sync_sb_inodes(struct super_block *sb,
+				struct writeback_control *wbc)
+{
+	if (wbc->bdi)
+		bdi_start_writeback(wbc->bdi, sb, 0);
+	else
+		bdi_writeback_all(sb, 0);
+
+	if (wbc->sync_mode == WB_SYNC_ALL) {
 		struct inode *inode, *old_inode = NULL;
 
+		spin_lock(&inode_lock);
+
 		/*
 		 * Data integrity sync. Must wait for all pages under writeback,
 		 * because there may have been pages dirtied before our sync
@@ -583,10 +1006,8 @@ void generic_sync_sb_inodes(struct super_block *sb,
 		}
 		spin_unlock(&inode_lock);
 		iput(old_inode);
-	} else
-		spin_unlock(&inode_lock);
+	}
 
-	return;		/* Leave any unwritten inodes on s_io */
 }
 EXPORT_SYMBOL_GPL(generic_sync_sb_inodes);
 
@@ -597,58 +1018,6 @@ static void sync_sb_inodes(struct super_block *sb,
 }
 
 /*
- * Start writeback of dirty pagecache data against all unlocked inodes.
- *
- * Note:
- * We don't need to grab a reference to superblock here. If it has non-empty
- * ->s_dirty it's hadn't been killed yet and kill_super() won't proceed
- * past sync_inodes_sb() until the ->s_dirty/s_io/s_more_io lists are all
- * empty. Since __sync_single_inode() regains inode_lock before it finally moves
- * inode from superblock lists we are OK.
- *
- * If `older_than_this' is non-zero then only flush inodes which have a
- * flushtime older than *older_than_this.
- *
- * If `bdi' is non-zero then we will scan the first inode against each
- * superblock until we find the matching ones.  One group will be the dirty
- * inodes against a filesystem.  Then when we hit the dummy blockdev superblock,
- * sync_sb_inodes will seekout the blockdev which matches `bdi'.  Maybe not
- * super-efficient but we're about to do a ton of I/O...
- */
-void
-writeback_inodes(struct writeback_control *wbc)
-{
-	struct super_block *sb;
-
-	might_sleep();
-	spin_lock(&sb_lock);
-restart:
-	list_for_each_entry_reverse(sb, &super_blocks, s_list) {
-		if (sb_has_dirty_inodes(sb)) {
-			/* we're making our own get_super here */
-			sb->s_count++;
-			spin_unlock(&sb_lock);
-			/*
-			 * If we can't get the readlock, there's no sense in
-			 * waiting around, most of the time the FS is going to
-			 * be unmounted by the time it is released.
-			 */
-			if (down_read_trylock(&sb->s_umount)) {
-				if (sb->s_root)
-					sync_sb_inodes(sb, wbc);
-				up_read(&sb->s_umount);
-			}
-			spin_lock(&sb_lock);
-			if (__put_super_and_need_restart(sb))
-				goto restart;
-		}
-		if (wbc->nr_to_write <= 0)
-			break;
-	}
-	spin_unlock(&sb_lock);
-}
-
-/*
  * writeback and wait upon the filesystem's dirty inodes.  The caller will
  * do this in two passes - one to write, and one to wait.
  *
diff --git a/fs/fuse/inode.c b/fs/fuse/inode.c
index 91f7c85..e5e8b03 100644
--- a/fs/fuse/inode.c
+++ b/fs/fuse/inode.c
@@ -484,6 +484,7 @@ int fuse_conn_init(struct fuse_conn *fc, struct super_block *sb)
 	INIT_LIST_HEAD(&fc->bg_queue);
 	INIT_LIST_HEAD(&fc->entry);
 	atomic_set(&fc->num_waiting, 0);
+	fc->bdi.name = "fuse";
 	fc->bdi.ra_pages = (VM_MAX_READAHEAD * 1024) / PAGE_CACHE_SIZE;
 	fc->bdi.unplug_io_fn = default_unplug_io_fn;
 	/* fuse does it's own writeback accounting */
diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index c1462d4..db1e537 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -43,6 +43,7 @@ static const struct inode_operations hugetlbfs_dir_inode_operations;
 static const struct inode_operations hugetlbfs_inode_operations;
 
 static struct backing_dev_info hugetlbfs_backing_dev_info = {
+	.name		= "hugetlbfs",
 	.ra_pages	= 0,	/* No readahead */
 	.capabilities	= BDI_CAP_NO_ACCT_AND_WRITEBACK,
 };
diff --git a/fs/nfs/client.c b/fs/nfs/client.c
index 75c9cd2..3a26d06 100644
--- a/fs/nfs/client.c
+++ b/fs/nfs/client.c
@@ -836,6 +836,7 @@ static void nfs_server_set_fsinfo(struct nfs_server *server, struct nfs_fsinfo *
 		server->rsize = NFS_MAX_FILE_IO_SIZE;
 	server->rpages = (server->rsize + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT;
 
+	server->backing_dev_info.name = "nfs";
 	server->backing_dev_info.ra_pages = server->rpages * NFS_MAX_READAHEAD;
 
 	if (server->wsize > max_rpc_payload)
diff --git a/fs/ntfs/super.c b/fs/ntfs/super.c
index f76951d..c4cb157 100644
--- a/fs/ntfs/super.c
+++ b/fs/ntfs/super.c
@@ -2373,39 +2373,13 @@ static void ntfs_put_super(struct super_block *sb)
 		vol->mftmirr_ino = NULL;
 	}
 	/*
-	 * If any dirty inodes are left, throw away all mft data page cache
-	 * pages to allow a clean umount.  This should never happen any more
-	 * due to mft.c::ntfs_mft_writepage() cleaning all the dirty pages as
-	 * the underlying mft records are written out and cleaned.  If it does,
+	 * We should have no dirty inodes left, due to
+	 * mft.c::ntfs_mft_writepage() cleaning all the dirty pages as
+	 * the underlying mft records are written out and cleaned.  If they do
 	 * happen anyway, we want to know...
 	 */
 	ntfs_commit_inode(vol->mft_ino);
 	write_inode_now(vol->mft_ino, 1);
-	if (sb_has_dirty_inodes(sb)) {
-		const char *s1, *s2;
-
-		mutex_lock(&vol->mft_ino->i_mutex);
-		truncate_inode_pages(vol->mft_ino->i_mapping, 0);
-		mutex_unlock(&vol->mft_ino->i_mutex);
-		write_inode_now(vol->mft_ino, 1);
-		if (sb_has_dirty_inodes(sb)) {
-			static const char *_s1 = "inodes";
-			static const char *_s2 = "";
-			s1 = _s1;
-			s2 = _s2;
-		} else {
-			static const char *_s1 = "mft pages";
-			static const char *_s2 = "They have been thrown "
-					"away.  ";
-			s1 = _s1;
-			s2 = _s2;
-		}
-		ntfs_error(sb, "Dirty %s found at umount time.  %sYou should "
-				"run chkdsk.  Please email "
-				"linux-ntfs-dev@lists.sourceforge.net and say "
-				"that you saw this message.  Thank you.", s1,
-				s2);
-	}
 #endif /* NTFS_RW */
 
 	iput(vol->mft_ino);
diff --git a/fs/ocfs2/dlm/dlmfs.c b/fs/ocfs2/dlm/dlmfs.c
index 1c9efb4..02bf178 100644
--- a/fs/ocfs2/dlm/dlmfs.c
+++ b/fs/ocfs2/dlm/dlmfs.c
@@ -325,6 +325,7 @@ clear_fields:
 }
 
 static struct backing_dev_info dlmfs_backing_dev_info = {
+	.name		= "ocfs2-dlmfs",
 	.ra_pages	= 0,	/* No readahead */
 	.capabilities	= BDI_CAP_NO_ACCT_AND_WRITEBACK,
 };
diff --git a/fs/ramfs/inode.c b/fs/ramfs/inode.c
index 3a6b193..5a24199 100644
--- a/fs/ramfs/inode.c
+++ b/fs/ramfs/inode.c
@@ -46,6 +46,7 @@ static const struct super_operations ramfs_ops;
 static const struct inode_operations ramfs_dir_inode_operations;
 
 static struct backing_dev_info ramfs_backing_dev_info = {
+	.name		= "ramfs",
 	.ra_pages	= 0,	/* No readahead */
 	.capabilities	= BDI_CAP_NO_ACCT_AND_WRITEBACK |
 			  BDI_CAP_MAP_DIRECT | BDI_CAP_MAP_COPY |
diff --git a/fs/super.c b/fs/super.c
index 1943fdf..76dd5b2 100644
--- a/fs/super.c
+++ b/fs/super.c
@@ -64,9 +64,6 @@ static struct super_block *alloc_super(struct file_system_type *type)
 			s = NULL;
 			goto out;
 		}
-		INIT_LIST_HEAD(&s->s_dirty);
-		INIT_LIST_HEAD(&s->s_io);
-		INIT_LIST_HEAD(&s->s_more_io);
 		INIT_LIST_HEAD(&s->s_files);
 		INIT_LIST_HEAD(&s->s_instances);
 		INIT_HLIST_HEAD(&s->s_anon);
diff --git a/fs/sync.c b/fs/sync.c
index 7abc65f..3887f10 100644
--- a/fs/sync.c
+++ b/fs/sync.c
@@ -23,7 +23,7 @@
  */
 static void do_sync(unsigned long wait)
 {
-	wakeup_pdflush(0);
+	wakeup_flusher_threads(0);
 	sync_inodes(0);		/* All mappings, inodes and their blockdevs */
 	vfs_dq_sync(NULL);
 	sync_supers();		/* Write the superblocks */
diff --git a/fs/sysfs/inode.c b/fs/sysfs/inode.c
index 555f0ff..e57f98e 100644
--- a/fs/sysfs/inode.c
+++ b/fs/sysfs/inode.c
@@ -29,6 +29,7 @@ static const struct address_space_operations sysfs_aops = {
 };
 
 static struct backing_dev_info sysfs_backing_dev_info = {
+	.name		= "sysfs",
 	.ra_pages	= 0,	/* No readahead */
 	.capabilities	= BDI_CAP_NO_ACCT_AND_WRITEBACK,
 };
diff --git a/fs/ubifs/super.c b/fs/ubifs/super.c
index e9f7a75..2349e2c 100644
--- a/fs/ubifs/super.c
+++ b/fs/ubifs/super.c
@@ -1923,6 +1923,7 @@ static int ubifs_fill_super(struct super_block *sb, void *data, int silent)
 	 *
 	 * Read-ahead will be disabled because @c->bdi.ra_pages is 0.
 	 */
+	c->bdi.name = "ubifs";
 	c->bdi.capabilities = BDI_CAP_MAP_COPY;
 	c->bdi.unplug_io_fn = default_unplug_io_fn;
 	err  = bdi_init(&c->bdi);
diff --git a/include/linux/backing-dev.h b/include/linux/backing-dev.h
index 0ec2c59..7c2874f 100644
--- a/include/linux/backing-dev.h
+++ b/include/linux/backing-dev.h
@@ -13,6 +13,8 @@
 #include <linux/proportions.h>
 #include <linux/kernel.h>
 #include <linux/fs.h>
+#include <linux/sched.h>
+#include <linux/srcu.h>
 #include <asm/atomic.h>
 
 struct page;
@@ -23,9 +25,12 @@ struct dentry;
  * Bits in backing_dev_info.state
  */
 enum bdi_state {
-	BDI_pdflush,		/* A pdflush thread is working this device */
+	BDI_pending,		/* On its way to being activated */
+	BDI_wb_alloc,		/* Default embedded wb allocated */
+	BDI_wblist_lock,	/* bdi->wb_list now needs locking */
 	BDI_async_congested,	/* The async (write) queue is getting full */
 	BDI_sync_congested,	/* The sync queue is getting full */
+	BDI_registered,		/* bdi_register() was done */
 	BDI_unused,		/* Available bits start here */
 };
 
@@ -39,7 +44,25 @@ enum bdi_stat_item {
 
 #define BDI_STAT_BATCH (8*(1+ilog2(nr_cpu_ids)))
 
+struct bdi_writeback {
+	struct list_head list;			/* hangs off the bdi */
+
+	struct backing_dev_info *bdi;		/* our parent bdi */
+	unsigned int nr;
+
+	struct task_struct	*task;		/* writeback task */
+	wait_queue_head_t	wait;
+	struct list_head	b_dirty;	/* dirty inodes */
+	struct list_head	b_io;		/* parked for writeback */
+	struct list_head	b_more_io;	/* parked for more writeback */
+};
+
+#define BDI_MAX_FLUSHERS	32
+
 struct backing_dev_info {
+	struct rcu_head rcu_head;
+	struct srcu_struct srcu; /* for wb_list read side protection */
+	struct list_head bdi_list;
 	unsigned long ra_pages;	/* max readahead in PAGE_CACHE_SIZE units */
 	unsigned long state;	/* Always use atomic bitops on this */
 	unsigned int capabilities; /* Device capabilities */
@@ -48,6 +71,8 @@ struct backing_dev_info {
 	void (*unplug_io_fn)(struct backing_dev_info *, struct page *);
 	void *unplug_io_data;
 
+	char *name;
+
 	struct percpu_counter bdi_stat[NR_BDI_STAT_ITEMS];
 
 	struct prop_local_percpu completions;
@@ -56,6 +81,14 @@ struct backing_dev_info {
 	unsigned int min_ratio;
 	unsigned int max_ratio, max_prop_frac;
 
+	struct bdi_writeback wb;  /* default writeback info for this bdi */
+	spinlock_t wb_lock;	  /* protects update side of wb_list */
+	struct list_head wb_list; /* the flusher threads hanging off this bdi */
+	unsigned long wb_mask;	  /* bitmask of registered tasks */
+	unsigned int wb_cnt;	  /* number of registered tasks */
+
+	struct list_head work_list;
+
 	struct device *dev;
 
 #ifdef CONFIG_DEBUG_FS
@@ -71,6 +104,33 @@ int bdi_register(struct backing_dev_info *bdi, struct device *parent,
 		const char *fmt, ...);
 int bdi_register_dev(struct backing_dev_info *bdi, dev_t dev);
 void bdi_unregister(struct backing_dev_info *bdi);
+int bdi_start_writeback(struct backing_dev_info *bdi, struct super_block *sb,
+			 long nr_pages);
+int bdi_writeback_task(struct bdi_writeback *wb);
+void bdi_writeback_all(struct super_block *sb, long nr_pages);
+void bdi_add_default_flusher_task(struct backing_dev_info *bdi);
+void bdi_add_flusher_task(struct backing_dev_info *bdi);
+int bdi_has_dirty_io(struct backing_dev_info *bdi);
+
+extern spinlock_t bdi_lock;
+extern struct list_head bdi_list;
+
+static inline int wb_is_default_task(struct bdi_writeback *wb)
+{
+	return wb == &wb->bdi->wb;
+}
+
+static inline int bdi_wblist_needs_lock(struct backing_dev_info *bdi)
+{
+	return test_bit(BDI_wblist_lock, &bdi->state);
+}
+
+static inline int wb_has_dirty_io(struct bdi_writeback *wb)
+{
+	return !list_empty(&wb->b_dirty) ||
+	       !list_empty(&wb->b_io) ||
+	       !list_empty(&wb->b_more_io);
+}
 
 static inline void __add_bdi_stat(struct backing_dev_info *bdi,
 		enum bdi_stat_item item, s64 amount)
@@ -187,6 +247,7 @@ int bdi_set_max_ratio(struct backing_dev_info *bdi, unsigned int max_ratio);
 #define BDI_CAP_EXEC_MAP	0x00000040
 #define BDI_CAP_NO_ACCT_WB	0x00000080
 #define BDI_CAP_SWAP_BACKED	0x00000100
+#define BDI_CAP_FLUSH_FORKER	0x00000200
 
 #define BDI_CAP_VMFLAGS \
 	(BDI_CAP_READ_MAP | BDI_CAP_WRITE_MAP | BDI_CAP_EXEC_MAP)
@@ -256,6 +317,11 @@ static inline bool bdi_cap_swap_backed(struct backing_dev_info *bdi)
 	return bdi->capabilities & BDI_CAP_SWAP_BACKED;
 }
 
+static inline bool bdi_cap_flush_forker(struct backing_dev_info *bdi)
+{
+	return bdi->capabilities & BDI_CAP_FLUSH_FORKER;
+}
+
 static inline bool mapping_cap_writeback_dirty(struct address_space *mapping)
 {
 	return bdi_cap_writeback_dirty(mapping->backing_dev_info);
@@ -271,4 +337,10 @@ static inline bool mapping_cap_swap_backed(struct address_space *mapping)
 	return bdi_cap_swap_backed(mapping->backing_dev_info);
 }
 
+static inline int bdi_sched_wait(void *word)
+{
+	schedule();
+	return 0;
+}
+
 #endif		/* _LINUX_BACKING_DEV_H */
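
bdi_sched_wait() is the action routine for the wait_on_bit() family; it is
used below in mm/backing-dev.c to serialize on BDI_pending, e.g.:

    /* sleep until BDI_pending is clear, then take ownership of it */
    wait_on_bit_lock(&bdi->state, BDI_pending, bdi_sched_wait,
    			TASK_UNINTERRUPTIBLE);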
diff --git a/include/linux/fs.h b/include/linux/fs.h
index 3b534e5..d3bda5d 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -712,7 +712,7 @@ static inline int mapping_writably_mapped(struct address_space *mapping)
 
 struct inode {
 	struct hlist_node	i_hash;
-	struct list_head	i_list;
+	struct list_head	i_list;		/* backing dev IO list */
 	struct list_head	i_sb_list;
 	struct list_head	i_dentry;
 	unsigned long		i_ino;
@@ -1329,9 +1329,6 @@ struct super_block {
 	struct xattr_handler	**s_xattr;
 
 	struct list_head	s_inodes;	/* all inodes */
-	struct list_head	s_dirty;	/* dirty inodes */
-	struct list_head	s_io;		/* parked for writeback */
-	struct list_head	s_more_io;	/* parked for more writeback */
 	struct hlist_head	s_anon;		/* anonymous dentries for (nfs) exporting */
 	struct list_head	s_files;
 	/* s_dentry_lru and s_nr_dentry_unused are protected by dcache_lock */
@@ -1553,11 +1550,14 @@ extern ssize_t vfs_readv(struct file *, const struct iovec __user *,
 extern ssize_t vfs_writev(struct file *, const struct iovec __user *,
 		unsigned long, loff_t *);
 
+struct bdi_writeback;
+
 struct super_operations {
    	struct inode *(*alloc_inode)(struct super_block *sb);
 	void (*destroy_inode)(struct inode *);
 
    	void (*dirty_inode) (struct inode *);
+	struct bdi_writeback *(*inode_get_wb) (struct inode *);
 	int (*write_inode) (struct inode *, int);
 	void (*drop_inode) (struct inode *);
 	void (*delete_inode) (struct inode *);
@@ -2066,6 +2066,8 @@ extern int invalidate_inode_pages2_range(struct address_space *mapping,
 					 pgoff_t start, pgoff_t end);
 extern void generic_sync_sb_inodes(struct super_block *sb,
 				struct writeback_control *wbc);
+extern void generic_sync_bdi_inodes(struct super_block *sb,
+				struct writeback_control *wbc);
 extern int write_inode_now(struct inode *, int);
 extern int filemap_fdatawrite(struct address_space *);
 extern int filemap_flush(struct address_space *);
@@ -2183,7 +2185,6 @@ extern int bdev_read_only(struct block_device *);
 extern int set_blocksize(struct block_device *, int);
 extern int sb_set_blocksize(struct super_block *, int);
 extern int sb_min_blocksize(struct super_block *, int);
-extern int sb_has_dirty_inodes(struct super_block *);
 
 extern int generic_file_mmap(struct file *, struct vm_area_struct *);
 extern int generic_file_readonly_mmap(struct file *, struct vm_area_struct *);
diff --git a/include/linux/writeback.h b/include/linux/writeback.h
index 9344547..30e318b 100644
--- a/include/linux/writeback.h
+++ b/include/linux/writeback.h
@@ -14,17 +14,6 @@ extern struct list_head inode_in_use;
 extern struct list_head inode_unused;
 
 /*
- * Yes, writeback.h requires sched.h
- * No, sched.h is not included from here.
- */
-static inline int task_is_pdflush(struct task_struct *task)
-{
-	return task->flags & PF_FLUSHER;
-}
-
-#define current_is_pdflush()	task_is_pdflush(current)
-
-/*
  * fs/fs-writeback.c
  */
 enum writeback_sync_modes {
@@ -80,6 +69,7 @@ void writeback_inodes(struct writeback_control *wbc);
 int inode_wait(void *);
 void sync_inodes_sb(struct super_block *, int wait);
 void sync_inodes(int wait);
+long wb_do_writeback(struct bdi_writeback *wb);
 
 /* writeback.h requires fs.h; it, too, is not included from here. */
 static inline void wait_on_inode(struct inode *inode)
@@ -99,7 +89,7 @@ static inline void inode_sync_wait(struct inode *inode)
 /*
  * mm/page-writeback.c
  */
-int wakeup_pdflush(long nr_pages);
+void wakeup_flusher_threads(long nr_pages);
 void laptop_io_completion(void);
 void laptop_sync_completion(void);
 void throttle_vm_writeout(gfp_t gfp_mask);
@@ -151,7 +141,6 @@ balance_dirty_pages_ratelimited(struct address_space *mapping)
 typedef int (*writepage_t)(struct page *page, struct writeback_control *wbc,
 				void *data);
 
-int pdflush_operation(void (*fn)(unsigned long), unsigned long arg0);
 int generic_writepages(struct address_space *mapping,
 		       struct writeback_control *wbc);
 int write_cache_pages(struct address_space *mapping,
diff --git a/kernel/cgroup.c b/kernel/cgroup.c
index a7267bf..0863c5f 100644
--- a/kernel/cgroup.c
+++ b/kernel/cgroup.c
@@ -598,6 +598,7 @@ static struct inode_operations cgroup_dir_inode_operations;
 static struct file_operations proc_cgroupstats_operations;
 
 static struct backing_dev_info cgroup_backing_dev_info = {
+	.name		= "cgroup",
 	.capabilities	= BDI_CAP_NO_ACCT_AND_WRITEBACK,
 };
 
diff --git a/mm/Makefile b/mm/Makefile
index ec73c68..2adb811 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -8,7 +8,7 @@ mmu-$(CONFIG_MMU)	:= fremap.o highmem.o madvise.o memory.o mincore.o \
 			   vmalloc.o
 
 obj-y			:= bootmem.o filemap.o mempool.o oom_kill.o fadvise.o \
-			   maccess.o page_alloc.o page-writeback.o pdflush.o \
+			   maccess.o page_alloc.o page-writeback.o \
 			   readahead.o swap.o truncate.o vmscan.o shmem.o \
 			   prio_tree.o util.o mmzone.o vmstat.o backing-dev.o \
 			   page_isolation.o mm_init.o $(mmu-y)
diff --git a/mm/backing-dev.c b/mm/backing-dev.c
index 493b468..d45251f 100644
--- a/mm/backing-dev.c
+++ b/mm/backing-dev.c
@@ -1,8 +1,11 @@
 
 #include <linux/wait.h>
 #include <linux/backing-dev.h>
+#include <linux/kthread.h>
+#include <linux/freezer.h>
 #include <linux/fs.h>
 #include <linux/pagemap.h>
+#include <linux/mm.h>
 #include <linux/sched.h>
 #include <linux/module.h>
 #include <linux/writeback.h>
@@ -14,14 +17,18 @@ void default_unplug_io_fn(struct backing_dev_info *bdi, struct page *page)
 EXPORT_SYMBOL(default_unplug_io_fn);
 
 struct backing_dev_info default_backing_dev_info = {
+	.name		= "default",
 	.ra_pages	= VM_MAX_READAHEAD * 1024 / PAGE_CACHE_SIZE,
 	.state		= 0,
-	.capabilities	= BDI_CAP_MAP_COPY,
+	.capabilities	= BDI_CAP_MAP_COPY | BDI_CAP_FLUSH_FORKER,
 	.unplug_io_fn	= default_unplug_io_fn,
 };
 EXPORT_SYMBOL_GPL(default_backing_dev_info);
 
 static struct class *bdi_class;
+DEFINE_SPINLOCK(bdi_lock);
+LIST_HEAD(bdi_list);
+LIST_HEAD(bdi_pending_list);
 
 #ifdef CONFIG_DEBUG_FS
 #include <linux/debugfs.h>
@@ -37,9 +44,33 @@ static void bdi_debug_init(void)
 static int bdi_debug_stats_show(struct seq_file *m, void *v)
 {
 	struct backing_dev_info *bdi = m->private;
+	struct bdi_writeback *wb;
 	unsigned long background_thresh;
 	unsigned long dirty_thresh;
 	unsigned long bdi_thresh;
+	unsigned long nr_dirty, nr_io, nr_more_io, nr_wb;
+	struct inode *inode;
+
+	/*
+	 * inode lock is enough here, the bdi->wb_list is protected by
+	 * RCU on the reader side
+	 */
+	nr_wb = nr_dirty = nr_io = nr_more_io = 0;
+	spin_lock(&inode_lock);
+	list_for_each_entry(wb, &bdi->wb_list, list) {
+		nr_wb++;
+		list_for_each_entry(inode, &wb->b_dirty, i_list)
+			nr_dirty++;
+		list_for_each_entry(inode, &wb->b_io, i_list)
+			nr_io++;
+		list_for_each_entry(inode, &wb->b_more_io, i_list)
+			nr_more_io++;
+	}
+	spin_unlock(&inode_lock);
 
 	get_dirty_limits(&background_thresh, &dirty_thresh, &bdi_thresh, bdi);
 
@@ -49,12 +80,23 @@ static int bdi_debug_stats_show(struct seq_file *m, void *v)
 		   "BdiReclaimable:   %8lu kB\n"
 		   "BdiDirtyThresh:   %8lu kB\n"
 		   "DirtyThresh:      %8lu kB\n"
-		   "BackgroundThresh: %8lu kB\n",
+		   "BackgroundThresh: %8lu kB\n"
+		   "WriteBack threads:%8lu\n"
+		   "b_dirty:          %8lu\n"
+		   "b_io:             %8lu\n"
+		   "b_more_io:        %8lu\n"
+		   "bdi:              %8p\n"
+		   "bdi_list:         %8u\n"
+		   "state:            %8lx\n"
+		   "wb_mask:          %8lx\n"
+		   "wb_list:          %8u\n"
+		   "wb_cnt:           %8u\n",
 		   (unsigned long) K(bdi_stat(bdi, BDI_WRITEBACK)),
 		   (unsigned long) K(bdi_stat(bdi, BDI_RECLAIMABLE)),
-		   K(bdi_thresh),
-		   K(dirty_thresh),
-		   K(background_thresh));
+		   K(bdi_thresh), K(dirty_thresh),
+		   K(background_thresh), nr_wb, nr_dirty, nr_io, nr_more_io,
+		   bdi, !list_empty(&bdi->bdi_list), bdi->state, bdi->wb_mask,
+		   !list_empty(&bdi->wb_list), bdi->wb_cnt);
 #undef K
 
 	return 0;
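
The stats file (typically /sys/kernel/debug/bdi/<dev>/stats) then reads
something like the following; the values are invented for illustration:

    BdiWriteback:          512 kB
    BdiReclaimable:       8192 kB
    BdiDirtyThresh:     102400 kB
    DirtyThresh:        204800 kB
    BackgroundThresh:   102400 kB
    WriteBack threads:       1
    b_dirty:                42
    b_io:                    0
    b_more_io:               0
    bdi:              ffff880037aa9000
    bdi_list:                1
    state:                  20
    wb_mask:                 1
    wb_list:                 1
    wb_cnt:                  1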
@@ -193,6 +235,346 @@ static int __init default_bdi_init(void)
 }
 subsys_initcall(default_bdi_init);
 
+static int wb_assign_nr(struct backing_dev_info *bdi, struct bdi_writeback *wb)
+{
+	unsigned long mask = ~0UL >> (BITS_PER_LONG - BDI_MAX_FLUSHERS);
+	unsigned int nr;
+
+	do {
+		if ((bdi->wb_mask & mask) == mask)
+			return 1;
+
+		nr = find_first_zero_bit(&bdi->wb_mask, BDI_MAX_FLUSHERS);
+	} while (test_and_set_bit(nr, &bdi->wb_mask));
+
+	wb->nr = nr;
+
+	spin_lock(&bdi->wb_lock);
+	bdi->wb_cnt++;
+	spin_unlock(&bdi->wb_lock);
+
+	return 0;
+}
+
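Thread numbers come out of a single unsigned long, which is what caps
BDI_MAX_FLUSHERS at 32. A quick illustration of the allocation:

    /* wb_mask 0x0:     first thread claims nr 0, mask becomes 0x1 */
    /* wb_mask 0x5:     next thread claims nr 1,  mask becomes 0x7 */
    /* all 32 bits set: wb_assign_nr() gives up and returns 1      */
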
+static void bdi_put_wb(struct backing_dev_info *bdi, struct bdi_writeback *wb)
+{
+	clear_bit(wb->nr, &bdi->wb_mask);
+
+	if (wb == &bdi->wb)
+		clear_bit(BDI_wb_alloc, &bdi->state);
+	else
+		kfree(wb);
+
+	spin_lock(&bdi->wb_lock);
+	bdi->wb_cnt--;
+	spin_unlock(&bdi->wb_lock);
+}
+
+static int bdi_wb_init(struct bdi_writeback *wb, struct backing_dev_info *bdi)
+{
+	memset(wb, 0, sizeof(*wb));
+
+	wb->bdi = bdi;
+	init_waitqueue_head(&wb->wait);
+	INIT_LIST_HEAD(&wb->b_dirty);
+	INIT_LIST_HEAD(&wb->b_io);
+	INIT_LIST_HEAD(&wb->b_more_io);
+
+	return wb_assign_nr(bdi, wb);
+}
+
+static struct bdi_writeback *bdi_new_wb(struct backing_dev_info *bdi)
+{
+	struct bdi_writeback *wb;
+
+	/*
+	 * Hand out the default embedded bdi->wb if it isn't in use yet
+	 */
+	if (!test_and_set_bit(BDI_wb_alloc, &bdi->state))
+		wb = &bdi->wb;
+	else {
+		wb = kmalloc(sizeof(struct bdi_writeback), GFP_KERNEL);
+		if (wb) {
+			if (bdi_wb_init(wb, bdi)) {
+				kfree(wb);
+				wb = NULL;
+			}
+		}
+	}
+
+	return wb;
+}
+
+static void bdi_flush_io(struct backing_dev_info *bdi)
+{
+	struct writeback_control wbc = {
+		.bdi			= bdi,
+		.sync_mode		= WB_SYNC_NONE,
+		.older_than_this	= NULL,
+		.range_cyclic		= 1,
+		.nr_to_write		= 1024,
+	};
+
+	generic_sync_bdi_inodes(NULL, &wbc);
+}
+
+static void bdi_task_init(struct backing_dev_info *bdi,
+			  struct bdi_writeback *wb)
+{
+	struct task_struct *tsk = current;
+	int was_empty;
+
+	/*
+	 * Add us to the bdi's wb_list. If we are adding threads beyond
+	 * the default embedded bdi_writeback, then we need to start using
+	 * proper locking. Check the list for empty first, then set the
+	 * BDI_wblist_lock flag if there's > 1 entry on the list now
+	 */
+	spin_lock(&bdi->wb_lock);
+
+	was_empty = list_empty(&bdi->wb_list);
+	list_add_tail_rcu(&wb->list, &bdi->wb_list);
+	if (!was_empty)
+		set_bit(BDI_wblist_lock, &bdi->state);
+
+	spin_unlock(&bdi->wb_lock);
+
+	tsk->flags |= PF_FLUSHER | PF_SWAPWRITE;
+	set_freezable();
+
+	/*
+	 * Our parent may run at a different priority, just set us to normal
+	 */
+	set_user_nice(tsk, 0);
+}
+
+static int bdi_start_fn(void *ptr)
+{
+	struct bdi_writeback *wb = ptr;
+	struct backing_dev_info *bdi = wb->bdi;
+	int ret;
+
+	bdi_task_init(bdi, wb);
+
+	/*
+	 * Clear pending bit and wakeup anybody waiting to tear us down
+	 */
+	clear_bit(BDI_pending, &bdi->state);
+	wake_up_bit(&bdi->state, BDI_pending);
+
+	/*
+	 * Make us discoverable on the bdi_list again
+	 */
+	spin_lock(&bdi_lock);
+	list_add_tail_rcu(&bdi->bdi_list, &bdi_list);
+	spin_unlock(&bdi_lock);
+
+	ret = bdi_writeback_task(wb);
+
+	/*
+	 * Remove us from the list
+	 */
+	spin_lock(&bdi->wb_lock);
+	list_del_rcu(&wb->list);
+	spin_unlock(&bdi->wb_lock);
+
+	/*
+	 * wait for rcu grace period to end, so we can free wb
+	 */
+	synchronize_srcu(&bdi->srcu);
+
+	bdi_put_wb(bdi, wb);
+	return ret;
+}
+
+int bdi_has_dirty_io(struct backing_dev_info *bdi)
+{
+	struct bdi_writeback *wb;
+	int ret = 0;
+
+	if (!bdi_wblist_needs_lock(bdi))
+		ret = wb_has_dirty_io(&bdi->wb);
+	else {
+		int idx;
+
+		idx = srcu_read_lock(&bdi->srcu);
+
+		list_for_each_entry_rcu(wb, &bdi->wb_list, list) {
+			ret = wb_has_dirty_io(wb);
+			if (ret)
+				break;
+		}
+
+		srcu_read_unlock(&bdi->srcu, idx);
+	}
+
+	return ret;
+}
+
+static int bdi_forker_task(void *ptr)
+{
+	struct bdi_writeback *me = ptr;
+	DEFINE_WAIT(wait);
+
+	bdi_task_init(me->bdi, me);
+
+	for (;;) {
+		struct backing_dev_info *bdi;
+		struct bdi_writeback *wb;
+
+		prepare_to_wait(&me->wait, &wait, TASK_INTERRUPTIBLE);
+
+		smp_mb();
+		if (list_empty(&bdi_pending_list))
+			schedule();
+
+		/*
+		 * Ideally we'd like not to see any dirty inodes on the
+		 * default_backing_dev_info. Until these are tracked down,
+		 * perform the same writeback here that bdi_writeback_task
+		 * does. For logic, see comment in
+		 * does. For the logic, see the comment in
+		 * fs/fs-writeback.c:wb_do_writeback()
+		if (wb_has_dirty_io(me) || !list_empty(&me->bdi->work_list))
+			wb_do_writeback(me);
+
+		/*
+		 * This is our real job - check for pending entries in
+		 * bdi_pending_list, and create the tasks that got added
+		 */
+repeat:
+		bdi = NULL;
+		spin_lock_bh(&bdi_lock);
+		if (!list_empty(&bdi_pending_list)) {
+			bdi = list_entry(bdi_pending_list.next,
+					 struct backing_dev_info, bdi_list);
+			list_del_init(&bdi->bdi_list);
+		}
+		spin_unlock_bh(&bdi_lock);
+
+		if (!bdi)
+			continue;
+
+		wb = bdi_new_wb(bdi);
+		if (!wb)
+			goto readd_flush;
+
+		wb->task = kthread_run(bdi_start_fn, wb, "bdi-%s",
+						dev_name(bdi->dev));
+		/*
+		 * If task creation fails, then re-add the bdi to
+		 * the pending list and force writeout of the bdi
+		 * from this forker thread. That will free some memory
+		 * and we can try again.
+		 */
+		if (IS_ERR(wb->task)) {
+			wb->task = NULL;
+			bdi_put_wb(bdi, wb);
+readd_flush:
+			/*
+			 * Add this 'bdi' to the back, so we get
+			 * a chance to flush other bdi's to free
+			 * memory.
+			 */
+			spin_lock_bh(&bdi_lock);
+			list_add_tail(&bdi->bdi_list, &bdi_pending_list);
+			spin_unlock_bh(&bdi_lock);
+
+			bdi_flush_io(bdi);
+			goto repeat;
+		}
+	}
+
+	finish_wait(&me->wait, &wait);
+	return 0;
+}
+
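Putting the pieces together, on-demand thread creation for a bdi that goes
dirty looks roughly like this (call-flow sketch; the dirtying-side caller of
bdi_add_default_flusher_task() lives elsewhere in the series):

    dirty data shows up
      -> bdi_add_default_flusher_task(bdi)
           -> test_and_set_bit(BDI_pending) wins, bdi leaves bdi_list
           -> call_rcu() -> bdi_add_to_pending(): bdi onto bdi_pending_list,
              wake_up(&default_backing_dev_info.wb.wait)
    bdi_forker_task() pops the bdi, bdi_new_wb() + kthread_run(bdi_start_fn)
      -> bdi_start_fn(): clears BDI_pending, re-adds bdi to bdi_list,
         enters bdi_writeback_task()
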
+/*
+ * Grace period has now ended, init bdi->bdi_list and add us to the
+ * list of bdi's that are pending for task creation. Wake up
+ * bdi_forker_task() to finish the job and add us back to the
+ * active bdi_list.
+ */
+static void bdi_add_to_pending(struct rcu_head *head)
+{
+	struct backing_dev_info *bdi;
+
+	bdi = container_of(head, struct backing_dev_info, rcu_head);
+	INIT_LIST_HEAD(&bdi->bdi_list);
+
+	spin_lock(&bdi_lock);
+	list_add_tail(&bdi->bdi_list, &bdi_pending_list);
+	spin_unlock(&bdi_lock);
+
+	wake_up(&default_backing_dev_info.wb.wait);
+}
+
+static void bdi_add_one_flusher_task(struct backing_dev_info *bdi,
+				     int(*func)(struct backing_dev_info *))
+{
+	if (!bdi_cap_writeback_dirty(bdi))
+		return;
+
+	if (WARN_ON(!test_bit(BDI_registered, &bdi->state))) {
+		printk(KERN_ERR "bdi %p/%s is not registered!\n", bdi, bdi->name);
+		return;
+	}
+
+	/*
+	 * Check with the helper whether to proceed adding a task. Will only
+	 * abort if two or more simultaneous calls to
+	 * bdi_add_default_flusher_task() occurred; further additions will block
+	 * waiting for previous additions to finish.
+	 */
+	if (!func(bdi)) {
+		spin_lock_bh(&bdi_lock);
+		list_del_rcu(&bdi->bdi_list);
+		spin_unlock_bh(&bdi_lock);
+
+		/*
+		 * We need to wait for the current grace period to end,
+		 * in case others were browsing the bdi_list as well.
+		 * So defer the adding and wakeup to after the RCU
+		 * grace period has ended.
+		 */
+		call_rcu(&bdi->rcu_head, bdi_add_to_pending);
+	}
+}
+
+static int flusher_add_helper_block(struct backing_dev_info *bdi)
+{
+	wait_on_bit_lock(&bdi->state, BDI_pending, bdi_sched_wait,
+				TASK_UNINTERRUPTIBLE);
+	return 0;
+}
+
+static int flusher_add_helper_test(struct backing_dev_info *bdi)
+{
+	return test_and_set_bit(BDI_pending, &bdi->state);
+}
+
+/*
+ * Add the default flusher task that gets created for any bdi
+ * that has dirty data pending writeout
+ */
+void bdi_add_default_flusher_task(struct backing_dev_info *bdi)
+{
+	bdi_add_one_flusher_task(bdi, flusher_add_helper_test);
+}
+
+/**
+ * bdi_add_flusher_task - add one more flusher task to this @bdi
+ * @bdi:	the bdi
+ *
+ * Add an additional flusher task to this @bdi. Will block waiting on
+ * previous additions, if any.
+ */
+void bdi_add_flusher_task(struct backing_dev_info *bdi)
+{
+	bdi_add_one_flusher_task(bdi, flusher_add_helper_block);
+}
+EXPORT_SYMBOL(bdi_add_flusher_task);
+
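For a filesystem that wants more than one flusher per device, the intended
call is simply the following (hypothetical usage; such a caller must also
provide ->inode_get_wb so its dirty inodes land on the extra threads):

    /* spawn one additional flusher thread for this device */
    bdi_add_flusher_task(&sbi->bdi);
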
 int bdi_register(struct backing_dev_info *bdi, struct device *parent,
 		const char *fmt, ...)
 {
@@ -211,8 +593,36 @@ int bdi_register(struct backing_dev_info *bdi, struct device *parent,
 		goto exit;
 	}
 
+	/*
+	 * Just start the forker thread for our default backing_dev_info,
+	 * and add other bdi's to the list. They will get a thread created
+	 * on-demand when they need it.
+	 */
+	if (bdi_cap_flush_forker(bdi)) {
+		struct bdi_writeback *wb;
+
+		wb = bdi_new_wb(bdi);
+		if (!wb) {
+			ret = -ENOMEM;
+			goto exit;
+		}
+
+		wb->task = kthread_run(bdi_forker_task, wb, "bdi-%s",
+						dev_name(dev));
+		if (IS_ERR(wb->task)) {
+			wb->task = NULL;
+			bdi_put_wb(bdi, wb);
+			ret = -ENOMEM;
+			goto exit;
+		}
+	}
+
+	spin_lock_bh(&bdi_lock);
+	list_add_tail_rcu(&bdi->bdi_list, &bdi_list);
+	spin_unlock_bh(&bdi_lock);
+
 	bdi->dev = dev;
 	bdi_debug_register(bdi, dev_name(dev));
+	set_bit(BDI_registered, &bdi->state);
 
 exit:
 	return ret;
@@ -225,9 +635,49 @@ int bdi_register_dev(struct backing_dev_info *bdi, dev_t dev)
 }
 EXPORT_SYMBOL(bdi_register_dev);
 
+/*
+ * Remove bdi from global list and shutdown any threads we have running
+ */
+static void bdi_wb_shutdown(struct backing_dev_info *bdi)
+{
+	struct bdi_writeback *wb;
+
+	if (!bdi_cap_writeback_dirty(bdi))
+		return;
+
+	/*
+	 * If setup is pending, wait for that to complete first.
+	 */
+	wait_on_bit(&bdi->state, BDI_pending, bdi_sched_wait,
+			TASK_UNINTERRUPTIBLE);
+
+	/*
+	 * Make sure nobody finds us on the bdi_list anymore
+	 */
+	spin_lock_bh(&bdi_lock);
+	list_del_rcu(&bdi->bdi_list);
+	spin_unlock_bh(&bdi_lock);
+
+	/*
+	 * Now make sure that anybody who is currently looking at us from
+	 * the bdi_list iteration have exited.
+	 */
+	synchronize_rcu();
+
+	/*
+	 * Finally, kill the kernel threads. We don't need to be RCU
+	 * safe anymore, since the bdi is gone from visibility.
+	 */
+	list_for_each_entry(wb, &bdi->wb_list, list)
+		kthread_stop(wb->task);
+}
+
 void bdi_unregister(struct backing_dev_info *bdi)
 {
 	if (bdi->dev) {
+		if (!bdi_cap_flush_forker(bdi))
+			bdi_wb_shutdown(bdi);
 		bdi_debug_unregister(bdi);
 		device_unregister(bdi->dev);
 		bdi->dev = NULL;
@@ -237,14 +687,22 @@ EXPORT_SYMBOL(bdi_unregister);
 
 int bdi_init(struct backing_dev_info *bdi)
 {
-	int i;
-	int err;
+	int i, err;
 
+	INIT_RCU_HEAD(&bdi->rcu_head);
 	bdi->dev = NULL;
 
 	bdi->min_ratio = 0;
 	bdi->max_ratio = 100;
 	bdi->max_prop_frac = PROP_FRAC_BASE;
+	spin_lock_init(&bdi->wb_lock);
+	bdi->wb_mask = 0;
+	bdi->wb_cnt = 0;
+	INIT_LIST_HEAD(&bdi->bdi_list);
+	INIT_LIST_HEAD(&bdi->wb_list);
+	INIT_LIST_HEAD(&bdi->work_list);
+
+	bdi_wb_init(&bdi->wb, bdi);
 
 	for (i = 0; i < NR_BDI_STAT_ITEMS; i++) {
 		err = percpu_counter_init(&bdi->bdi_stat[i], 0);
@@ -252,10 +710,15 @@ int bdi_init(struct backing_dev_info *bdi)
 			goto err;
 	}
 
+	err = init_srcu_struct(&bdi->srcu);
+	if (err)
+		goto err;
+
 	bdi->dirty_exceeded = 0;
 	err = prop_local_init_percpu(&bdi->completions);
 
 	if (err) {
+		cleanup_srcu_struct(&bdi->srcu);
 err:
 		while (i--)
 			percpu_counter_destroy(&bdi->bdi_stat[i]);
@@ -269,8 +732,12 @@ void bdi_destroy(struct backing_dev_info *bdi)
 {
 	int i;
 
+	WARN_ON(bdi_has_dirty_io(bdi));
+
 	bdi_unregister(bdi);
 
+	cleanup_srcu_struct(&bdi->srcu);
+
 	for (i = 0; i < NR_BDI_STAT_ITEMS; i++)
 		percpu_counter_destroy(&bdi->bdi_stat[i]);
 
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index bb553c3..de3178a 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -36,15 +36,6 @@
 #include <linux/pagevec.h>
 
 /*
- * The maximum number of pages to writeout in a single bdflush/kupdate
- * operation.  We do this so we don't hold I_SYNC against an inode for
- * enormous amounts of time, which would block a userspace task which has
- * been forced to throttle against that inode.  Also, the code reevaluates
- * the dirty each time it has written this many pages.
- */
-#define MAX_WRITEBACK_PAGES	1024
-
-/*
  * After a CPU has dirtied this many pages, balance_dirty_pages_ratelimited
  * will look to see if it needs to force writeback or throttling.
  */
@@ -117,8 +108,6 @@ EXPORT_SYMBOL(laptop_mode);
 /* End of sysctl-exported parameters */
 
 
-static void background_writeout(unsigned long _min_pages);
-
 /*
  * Scale the writeback cache size proportional to the relative writeout speeds.
  *
@@ -319,7 +308,6 @@ static void task_dirty_limit(struct task_struct *tsk, long *pdirty)
 /*
  *
  */
-static DEFINE_SPINLOCK(bdi_lock);
 static unsigned int bdi_min_ratio;
 
 int bdi_set_min_ratio(struct backing_dev_info *bdi, unsigned int min_ratio)
@@ -542,7 +530,7 @@ static void balance_dirty_pages(struct address_space *mapping)
 		 * been flushed to permanent storage.
 		 */
 		if (bdi_nr_reclaimable) {
-			writeback_inodes(&wbc);
+			generic_sync_bdi_inodes(NULL, &wbc);
 			pages_written += write_chunk - wbc.nr_to_write;
 			get_dirty_limits(&background_thresh, &dirty_thresh,
 				       &bdi_thresh, bdi);
@@ -593,7 +581,7 @@ static void balance_dirty_pages(struct address_space *mapping)
 			(!laptop_mode && (global_page_state(NR_FILE_DIRTY)
 					  + global_page_state(NR_UNSTABLE_NFS)
 					  > background_thresh)))
-		pdflush_operation(background_writeout, 0);
+		bdi_start_writeback(bdi, NULL, 0);
 }
 
 void set_page_dirty_balance(struct page *page, int page_mkwrite)
@@ -678,152 +666,34 @@ void throttle_vm_writeout(gfp_t gfp_mask)
 }
 
 /*
- * writeback at least _min_pages, and keep writing until the amount of dirty
- * memory is less than the background threshold, or until we're all clean.
- */
-static void background_writeout(unsigned long _min_pages)
-{
-	long min_pages = _min_pages;
-	struct writeback_control wbc = {
-		.bdi		= NULL,
-		.sync_mode	= WB_SYNC_NONE,
-		.older_than_this = NULL,
-		.nr_to_write	= 0,
-		.nonblocking	= 1,
-		.range_cyclic	= 1,
-	};
-
-	for ( ; ; ) {
-		unsigned long background_thresh;
-		unsigned long dirty_thresh;
-
-		get_dirty_limits(&background_thresh, &dirty_thresh, NULL, NULL);
-		if (global_page_state(NR_FILE_DIRTY) +
-			global_page_state(NR_UNSTABLE_NFS) < background_thresh
-				&& min_pages <= 0)
-			break;
-		wbc.more_io = 0;
-		wbc.encountered_congestion = 0;
-		wbc.nr_to_write = MAX_WRITEBACK_PAGES;
-		wbc.pages_skipped = 0;
-		writeback_inodes(&wbc);
-		min_pages -= MAX_WRITEBACK_PAGES - wbc.nr_to_write;
-		if (wbc.nr_to_write > 0 || wbc.pages_skipped > 0) {
-			/* Wrote less than expected */
-			if (wbc.encountered_congestion || wbc.more_io)
-				congestion_wait(WRITE, HZ/10);
-			else
-				break;
-		}
-	}
-}
-
-/*
  * Start writeback of `nr_pages' pages.  If `nr_pages' is zero, write back
- * the whole world.  Returns 0 if a pdflush thread was dispatched.  Returns
- * -1 if all pdflush threads were busy.
+ * the whole world.
  */
-int wakeup_pdflush(long nr_pages)
+void wakeup_flusher_threads(long nr_pages)
 {
 	if (nr_pages == 0)
 		nr_pages = global_page_state(NR_FILE_DIRTY) +
 				global_page_state(NR_UNSTABLE_NFS);
-	return pdflush_operation(background_writeout, nr_pages);
+	bdi_writeback_all(NULL, nr_pages);
 }
 
-static void wb_timer_fn(unsigned long unused);
 static void laptop_timer_fn(unsigned long unused);
 
-static DEFINE_TIMER(wb_timer, wb_timer_fn, 0, 0);
 static DEFINE_TIMER(laptop_mode_wb_timer, laptop_timer_fn, 0, 0);
 
 /*
- * Periodic writeback of "old" data.
- *
- * Define "old": the first time one of an inode's pages is dirtied, we mark the
- * dirtying-time in the inode's address_space.  So this periodic writeback code
- * just walks the superblock inode list, writing back any inodes which are
- * older than a specific point in time.
- *
- * Try to run once per dirty_writeback_interval.  But if a writeback event
- * takes longer than a dirty_writeback_interval interval, then leave a
- * one-second gap.
- *
- * older_than_this takes precedence over nr_to_write.  So we'll only write back
- * all dirty pages if they are all attached to "old" mappings.
- */
-static void wb_kupdate(unsigned long arg)
-{
-	unsigned long oldest_jif;
-	unsigned long start_jif;
-	unsigned long next_jif;
-	long nr_to_write;
-	struct writeback_control wbc = {
-		.bdi		= NULL,
-		.sync_mode	= WB_SYNC_NONE,
-		.older_than_this = &oldest_jif,
-		.nr_to_write	= 0,
-		.nonblocking	= 1,
-		.for_kupdate	= 1,
-		.range_cyclic	= 1,
-	};
-
-	sync_supers();
-
-	oldest_jif = jiffies - msecs_to_jiffies(dirty_expire_interval * 10);
-	start_jif = jiffies;
-	next_jif = start_jif + msecs_to_jiffies(dirty_writeback_interval * 10);
-	nr_to_write = global_page_state(NR_FILE_DIRTY) +
-			global_page_state(NR_UNSTABLE_NFS) +
-			(inodes_stat.nr_inodes - inodes_stat.nr_unused);
-	while (nr_to_write > 0) {
-		wbc.more_io = 0;
-		wbc.encountered_congestion = 0;
-		wbc.nr_to_write = MAX_WRITEBACK_PAGES;
-		writeback_inodes(&wbc);
-		if (wbc.nr_to_write > 0) {
-			if (wbc.encountered_congestion || wbc.more_io)
-				congestion_wait(WRITE, HZ/10);
-			else
-				break;	/* All the old data is written */
-		}
-		nr_to_write -= MAX_WRITEBACK_PAGES - wbc.nr_to_write;
-	}
-	if (time_before(next_jif, jiffies + HZ))
-		next_jif = jiffies + HZ;
-	if (dirty_writeback_interval)
-		mod_timer(&wb_timer, next_jif);
-}
-
-/*
  * sysctl handler for /proc/sys/vm/dirty_writeback_centisecs
  */
 int dirty_writeback_centisecs_handler(ctl_table *table, int write,
 	struct file *file, void __user *buffer, size_t *length, loff_t *ppos)
 {
 	proc_dointvec(table, write, file, buffer, length, ppos);
-	if (dirty_writeback_interval)
-		mod_timer(&wb_timer, jiffies +
-			msecs_to_jiffies(dirty_writeback_interval * 10));
-	else
-		del_timer(&wb_timer);
 	return 0;
 }
 
-static void wb_timer_fn(unsigned long unused)
-{
-	if (pdflush_operation(wb_kupdate, 0) < 0)
-		mod_timer(&wb_timer, jiffies + HZ); /* delay 1 second */
-}
-
-static void laptop_flush(unsigned long unused)
-{
-	sys_sync();
-}
-
 static void laptop_timer_fn(unsigned long unused)
 {
-	pdflush_operation(laptop_flush, 0);
+	wakeup_flusher_threads(0);
 }
 
 /*
@@ -906,8 +776,6 @@ void __init page_writeback_init(void)
 {
 	int shift;
 
-	mod_timer(&wb_timer,
-		  jiffies + msecs_to_jiffies(dirty_writeback_interval * 10));
 	writeback_set_ratelimit();
 	register_cpu_notifier(&ratelimit_nb);
 
diff --git a/mm/pdflush.c b/mm/pdflush.c
deleted file mode 100644
index 235ac44..0000000
--- a/mm/pdflush.c
+++ /dev/null
@@ -1,269 +0,0 @@
-/*
- * mm/pdflush.c - worker threads for writing back filesystem data
- *
- * Copyright (C) 2002, Linus Torvalds.
- *
- * 09Apr2002	Andrew Morton
- *		Initial version
- * 29Feb2004	kaos@sgi.com
- *		Move worker thread creation to kthread to avoid chewing
- *		up stack space with nested calls to kernel_thread.
- */
-
-#include <linux/sched.h>
-#include <linux/list.h>
-#include <linux/signal.h>
-#include <linux/spinlock.h>
-#include <linux/gfp.h>
-#include <linux/init.h>
-#include <linux/module.h>
-#include <linux/fs.h>		/* Needed by writeback.h	  */
-#include <linux/writeback.h>	/* Prototypes pdflush_operation() */
-#include <linux/kthread.h>
-#include <linux/cpuset.h>
-#include <linux/freezer.h>
-
-
-/*
- * Minimum and maximum number of pdflush instances
- */
-#define MIN_PDFLUSH_THREADS	2
-#define MAX_PDFLUSH_THREADS	8
-
-static void start_one_pdflush_thread(void);
-
-
-/*
- * The pdflush threads are worker threads for writing back dirty data.
- * Ideally, we'd like one thread per active disk spindle.  But the disk
- * topology is very hard to divine at this level.   Instead, we take
- * care in various places to prevent more than one pdflush thread from
- * performing writeback against a single filesystem.  pdflush threads
- * have the PF_FLUSHER flag set in current->flags to aid in this.
- */
-
-/*
- * All the pdflush threads.  Protected by pdflush_lock
- */
-static LIST_HEAD(pdflush_list);
-static DEFINE_SPINLOCK(pdflush_lock);
-
-/*
- * The count of currently-running pdflush threads.  Protected
- * by pdflush_lock.
- *
- * Readable by sysctl, but not writable.  Published to userspace at
- * /proc/sys/vm/nr_pdflush_threads.
- */
-int nr_pdflush_threads = 0;
-
-/*
- * The time at which the pdflush thread pool last went empty
- */
-static unsigned long last_empty_jifs;
-
-/*
- * The pdflush thread.
- *
- * Thread pool management algorithm:
- * 
- * - The minimum and maximum number of pdflush instances are bound
- *   by MIN_PDFLUSH_THREADS and MAX_PDFLUSH_THREADS.
- * 
- * - If there have been no idle pdflush instances for 1 second, create
- *   a new one.
- * 
- * - If the least-recently-went-to-sleep pdflush thread has been asleep
- *   for more than one second, terminate a thread.
- */
-
-/*
- * A structure for passing work to a pdflush thread.  Also for passing
- * state information between pdflush threads.  Protected by pdflush_lock.
- */
-struct pdflush_work {
-	struct task_struct *who;	/* The thread */
-	void (*fn)(unsigned long);	/* A callback function */
-	unsigned long arg0;		/* An argument to the callback */
-	struct list_head list;		/* On pdflush_list, when idle */
-	unsigned long when_i_went_to_sleep;
-};
-
-static int __pdflush(struct pdflush_work *my_work)
-{
-	current->flags |= PF_FLUSHER | PF_SWAPWRITE;
-	set_freezable();
-	my_work->fn = NULL;
-	my_work->who = current;
-	INIT_LIST_HEAD(&my_work->list);
-
-	spin_lock_irq(&pdflush_lock);
-	for ( ; ; ) {
-		struct pdflush_work *pdf;
-
-		set_current_state(TASK_INTERRUPTIBLE);
-		list_move(&my_work->list, &pdflush_list);
-		my_work->when_i_went_to_sleep = jiffies;
-		spin_unlock_irq(&pdflush_lock);
-		schedule();
-		try_to_freeze();
-		spin_lock_irq(&pdflush_lock);
-		if (!list_empty(&my_work->list)) {
-			/*
-			 * Someone woke us up, but without removing our control
-			 * structure from the global list.  swsusp will do this
-			 * in try_to_freeze()->refrigerator().  Handle it.
-			 */
-			my_work->fn = NULL;
-			continue;
-		}
-		if (my_work->fn == NULL) {
-			printk("pdflush: bogus wakeup\n");
-			continue;
-		}
-		spin_unlock_irq(&pdflush_lock);
-
-		(*my_work->fn)(my_work->arg0);
-
-		spin_lock_irq(&pdflush_lock);
-
-		/*
-		 * Thread creation: For how long have there been zero
-		 * available threads?
-		 *
-		 * To throttle creation, we reset last_empty_jifs.
-		 */
-		if (time_after(jiffies, last_empty_jifs + 1 * HZ)) {
-			if (list_empty(&pdflush_list)) {
-				if (nr_pdflush_threads < MAX_PDFLUSH_THREADS) {
-					last_empty_jifs = jiffies;
-					nr_pdflush_threads++;
-					spin_unlock_irq(&pdflush_lock);
-					start_one_pdflush_thread();
-					spin_lock_irq(&pdflush_lock);
-				}
-			}
-		}
-
-		my_work->fn = NULL;
-
-		/*
-		 * Thread destruction: For how long has the sleepiest
-		 * thread slept?
-		 */
-		if (list_empty(&pdflush_list))
-			continue;
-		if (nr_pdflush_threads <= MIN_PDFLUSH_THREADS)
-			continue;
-		pdf = list_entry(pdflush_list.prev, struct pdflush_work, list);
-		if (time_after(jiffies, pdf->when_i_went_to_sleep + 1 * HZ)) {
-			/* Limit exit rate */
-			pdf->when_i_went_to_sleep = jiffies;
-			break;					/* exeunt */
-		}
-	}
-	nr_pdflush_threads--;
-	spin_unlock_irq(&pdflush_lock);
-	return 0;
-}
-
-/*
- * Of course, my_work wants to be just a local in __pdflush().  It is
- * separated out in this manner to hopefully prevent the compiler from
- * performing unfortunate optimisations against the auto variables.  Because
- * these are visible to other tasks and CPUs.  (No problem has actually
- * been observed.  This is just paranoia).
- */
-static int pdflush(void *dummy)
-{
-	struct pdflush_work my_work;
-	cpumask_var_t cpus_allowed;
-
-	/*
-	 * Since the caller doesn't even check kthread_run() worked, let's not
-	 * freak out too much if this fails.
-	 */
-	if (!alloc_cpumask_var(&cpus_allowed, GFP_KERNEL)) {
-		printk(KERN_WARNING "pdflush failed to allocate cpumask\n");
-		return 0;
-	}
-
-	/*
-	 * pdflush can spend a lot of time doing encryption via dm-crypt.  We
-	 * don't want to do that at keventd's priority.
-	 */
-	set_user_nice(current, 0);
-
-	/*
-	 * Some configs put our parent kthread in a limited cpuset,
-	 * which kthread() overrides, forcing cpus_allowed == cpu_all_mask.
-	 * Our needs are more modest - cut back to our cpusets cpus_allowed.
-	 * This is needed as pdflush's are dynamically created and destroyed.
-	 * The boottime pdflush's are easily placed w/o these 2 lines.
-	 */
-	cpuset_cpus_allowed(current, cpus_allowed);
-	set_cpus_allowed_ptr(current, cpus_allowed);
-	free_cpumask_var(cpus_allowed);
-
-	return __pdflush(&my_work);
-}
-
-/*
- * Attempt to wake up a pdflush thread, and get it to do some work for you.
- * Returns zero if it indeed managed to find a worker thread, and passed your
- * payload to it.
- */
-int pdflush_operation(void (*fn)(unsigned long), unsigned long arg0)
-{
-	unsigned long flags;
-	int ret = 0;
-
-	BUG_ON(fn == NULL);	/* Hard to diagnose if it's deferred */
-
-	spin_lock_irqsave(&pdflush_lock, flags);
-	if (list_empty(&pdflush_list)) {
-		ret = -1;
-	} else {
-		struct pdflush_work *pdf;
-
-		pdf = list_entry(pdflush_list.next, struct pdflush_work, list);
-		list_del_init(&pdf->list);
-		if (list_empty(&pdflush_list))
-			last_empty_jifs = jiffies;
-		pdf->fn = fn;
-		pdf->arg0 = arg0;
-		wake_up_process(pdf->who);
-	}
-	spin_unlock_irqrestore(&pdflush_lock, flags);
-
-	return ret;
-}
-
-static void start_one_pdflush_thread(void)
-{
-	struct task_struct *k;
-
-	k = kthread_run(pdflush, NULL, "pdflush");
-	if (unlikely(IS_ERR(k))) {
-		spin_lock_irq(&pdflush_lock);
-		nr_pdflush_threads--;
-		spin_unlock_irq(&pdflush_lock);
-	}
-}
-
-static int __init pdflush_init(void)
-{
-	int i;
-
-	/*
-	 * Pre-set nr_pdflush_threads...  If we fail to create,
-	 * the count will be decremented.
-	 */
-	nr_pdflush_threads = MIN_PDFLUSH_THREADS;
-
-	for (i = 0; i < MIN_PDFLUSH_THREADS; i++)
-		start_one_pdflush_thread();
-	return 0;
-}
-
-module_init(pdflush_init);
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 3ecea98..323da00 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -34,6 +34,7 @@ static const struct address_space_operations swap_aops = {
 };
 
 static struct backing_dev_info swap_backing_dev_info = {
+	.name		= "swap",
 	.capabilities	= BDI_CAP_NO_ACCT_AND_WRITEBACK | BDI_CAP_SWAP_BACKED,
 	.unplug_io_fn	= swap_unplug_io_fn,
 };
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 5fa3eda..e37fd38 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1654,7 +1654,7 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
 		 */
 		if (total_scanned > sc->swap_cluster_max +
 					sc->swap_cluster_max / 2) {
-			wakeup_pdflush(laptop_mode ? 0 : total_scanned);
+			wakeup_flusher_threads(laptop_mode ? 0 : total_scanned);
 			sc->may_writepage = 1;
 		}
 
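
As a cheat sheet for the conversion, the old and new entry points line up
roughly as follows; the signatures are taken from the hunks in this series,
but treat the pairing as illustrative rather than exhaustive (nr_pages being
whatever page count the caller wants flushed):

	/* old: ask the shared pdflush pool to run a callback,
	 * or kick pdflush for a number of pages */
	pdflush_operation(background_writeout, 0);
	wakeup_pdflush(nr_pages);

	/* new: ask the bdi's own flusher thread(s) directly;
	 * a NULL sb means "any superblock on this bdi" */
	bdi_start_writeback(bdi, NULL, 0);
	wakeup_flusher_threads(nr_pages);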

^ permalink raw reply related	[flat|nested] 57+ messages in thread

* Re: [PATCH 0/11] Per-bdi writeback flusher threads #4
  2009-05-19  6:20   ` Jens Axboe
@ 2009-05-19  6:43     ` Zhang, Yanmin
  2009-05-20  7:51       ` Zhang, Yanmin
  1 sibling, 0 replies; 57+ messages in thread
From: Zhang, Yanmin @ 2009-05-19  6:43 UTC (permalink / raw
  To: Jens Axboe
  Cc: linux-kernel, linux-fsdevel, chris.mason, david, hch, akpm, jack

On Tue, 2009-05-19 at 08:20 +0200, Jens Axboe wrote:
> On Tue, May 19 2009, Zhang, Yanmin wrote:
> > On Mon, 2009-05-18 at 14:19 +0200, Jens Axboe wrote:
> > > Hi,
> > > 
> > > This is the fourth version of this patchset. Changes since v3:
> > > 
> > > - Dropped a prep patch, it has been included in mainline since.
> > > 
> > > - Add a work-to-do list to the bdi. This is struct bdi_work. Each
> > >   wb thread will notice and execute work on bdi->work_list. The arguments
> > >   are which sb (or NULL for all) to flush and how many pages to flush.
> > > 
> > > - Fix a bug where not all bdi's would end up on the bdi_list, so potentially
> > >   some data would not be flushed.
> > > 
> > > - Make wb_kupdated() pass on wbc->older_than_this so we maintain the same
> > >   behaviour for kupdated flushes.
> > > 
> > > - Have the wb thread flush first before sleeping, to avoid losing the
> > >   first flush on lazy register.
> > > 
> > > - Rebase to newer kernels.
> > Jens,
> > 
> > Applied V4 to 2.6.30-rc6 and got some conflict reports.
> > ----------patch-2----------
> > patching file fs/buffer.c
> > patching file fs/fs-writeback.c
> > patching file fs/ntfs/super.c
> > patching file fs/sync.c
> > patching file include/linux/backing-dev.h
> > patching file include/linux/fs.h
> > patching file include/linux/writeback.h
> > patching file mm/backing-dev.c
> > patching file mm/page-writeback.c
> > Hunk #5 FAILED at 666.
> > 1 out of 6 hunks FAILED -- saving rejects to file mm/page-writeback.c.rej
> > patching file mm/vmscan.c
> > ----------patch-3----------
> > patching file fs/fs-writeback.c
> > patching file include/linux/writeback.h
> > patching file mm/Makefile
> > patching file mm/pdflush.c
> > ----------patch-4----------
> > patching file fs/fs-writeback.c
> > patching file include/linux/backing-dev.h
> > patching file mm/backing-dev.c
> > ----------patch-5----------
> > patching file fs/fs-writeback.c
> > patching file include/linux/backing-dev.h
> > patching file include/linux/fs.h
> > patching file mm/backing-dev.c
> > patching file mm/page-writeback.c
> > Hunk #1 succeeded at 708 with fuzz 2 (offset 41 lines).
> > Hunk #2 FAILED at 716.
> > 1 out of 2 hunks FAILED -- saving rejects to file mm/page-writeback.c.rej
> 
> It's not against -rc6, it's against current -git. And current -git had a
> one-liner fixup to the centisec calculation, so it'll fail. If you apply
> the below patch to -rc6, then the series should apply cleanly on top of
> that.
> 
> > Then, I manually fixed the conflicts, but compilation reported errors.
> > Your patches don't seem clean.
> > 
> >   CC      fs/exec.o
> > mm/page-writeback.c: In function 'background_writeout':
> > mm/page-writeback.c:695: error: 'MAX_WRITEBACK_PAGES' undeclared (first use in this function)
> > mm/page-writeback.c:695: error: (Each undeclared identifier is reported only once
> > mm/page-writeback.c:695: error: for each function it appears in.)
> > mm/page-writeback.c: In function 'wb_kupdate':
> > mm/page-writeback.c:769: error: 'MAX_WRITEBACK_PAGES' undeclared (first use in this function)
> > mm/page-writeback.c: In function 'wb_timer_fn':
> > mm/page-writeback.c:802: error: implicit declaration of function 'pdflush_operation'
> > make[1]: *** [mm/page-writeback.o] Error 1
> > make[1]: *** Waiting for unfinished jobs....
> >   CC      fs/pipe.o
> 
> You still have remnants of pdflush, so there's definitely something wrong
> with your manual patching :-)
> 
> I'm attaching two patches - apply #1 to -rc6, and then #2 is a roll-up
> of the patch series that you can apply next.
The new patches do work.

Thanks,
Yanmin



^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 02/11] writeback: switch to per-bdi threads for flushing data
  2009-05-18 12:19 ` [PATCH 02/11] writeback: switch to per-bdi threads for flushing data Jens Axboe
@ 2009-05-19 10:20   ` Richard Kennedy
  2009-05-19 12:23     ` Jens Axboe
  2009-05-20 11:18   ` Jan Kara
  2009-05-20 12:37   ` Christoph Hellwig
  2 siblings, 1 reply; 57+ messages in thread
From: Richard Kennedy @ 2009-05-19 10:20 UTC (permalink / raw
  To: Jens Axboe
  Cc: linux-kernel, linux-fsdevel, chris.mason, david, hch, akpm, jack,
	yanmin_zhang

Jens Axboe wrote:
> This gets rid of pdflush for bdi writeout and kupdated style cleaning.
> <snip>
> index 2296ff4..76269f8 100644
> --- a/mm/page-writeback.c
> +++ b/mm/page-writeback.c
> @@ -541,7 +530,7 @@ static void balance_dirty_pages(struct address_space *mapping)
>  		 * been flushed to permanent storage.
>  		 */
>  		if (bdi_nr_reclaimable) {
> -			writeback_inodes(&wbc);
> +			generic_sync_bdi_inodes(NULL, &wbc);
>  			pages_written += write_chunk - wbc.nr_to_write;
>  			get_dirty_limits(&background_thresh, &dirty_thresh,
>  				       &bdi_thresh, bdi);
> @@ -592,7 +581,7 @@ static void balance_dirty_pages(struct address_space *mapping)
>  			(!laptop_mode && (global_page_state(NR_FILE_DIRTY)
>  					  + global_page_state(NR_UNSTABLE_NFS)
>  					  > background_thresh)))
> -		pdflush_operation(background_writeout, 0);
> +		bdi_start_writeback(bdi, NULL, 0);
>  }
>  
Hi Jens,

I'm interested in this slight change of behaviour: when over the
background dirty limit, background_writeout will write any dirty pages,
while bdi_start_writeback writes only pages for the current bdi. Are
there any benefits in making this change?

Thinking about the case of 2 apps writing to different bdis. When app A
stops writing, then next time app B goes over the background dirty
threshold it will only be able to write its own pages, leaving any from
app A dirty until they reach their age limit.

So we may be keeping dirty pages for the app that's finished longer than
necessary. Keeping pages for a finished app while flushing pages from a
running app seems a bit strange. I guess this is an odd corner case and
may not be worth worrying about, but I'd be interested to hear what you
think.

Do you think your new code will require any changes to the per bdi dirty
limits? It may be informative & interesting to run some tests writing to
fast & slow devices at the same time.

regards
Richard






^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 02/11] writeback: switch to per-bdi threads for flushing data
  2009-05-19 10:20   ` Richard Kennedy
@ 2009-05-19 12:23     ` Jens Axboe
  2009-05-19 13:45       ` Richard Kennedy
  0 siblings, 1 reply; 57+ messages in thread
From: Jens Axboe @ 2009-05-19 12:23 UTC (permalink / raw
  To: Richard Kennedy
  Cc: linux-kernel, linux-fsdevel, chris.mason, david, hch, akpm, jack,
	yanmin_zhang

On Tue, May 19 2009, Richard Kennedy wrote:
> Jens Axboe wrote:
> > This gets rid of pdflush for bdi writeout and kupdated style cleaning.
> > <snip>
> > index 2296ff4..76269f8 100644
> > --- a/mm/page-writeback.c
> > +++ b/mm/page-writeback.c
> > @@ -541,7 +530,7 @@ static void balance_dirty_pages(struct address_space *mapping)
> >  		 * been flushed to permanent storage.
> >  		 */
> >  		if (bdi_nr_reclaimable) {
> > -			writeback_inodes(&wbc);
> > +			generic_sync_bdi_inodes(NULL, &wbc);
> >  			pages_written += write_chunk - wbc.nr_to_write;
> >  			get_dirty_limits(&background_thresh, &dirty_thresh,
> >  				       &bdi_thresh, bdi);
> > @@ -592,7 +581,7 @@ static void balance_dirty_pages(struct address_space *mapping)
> >  			(!laptop_mode && (global_page_state(NR_FILE_DIRTY)
> >  					  + global_page_state(NR_UNSTABLE_NFS)
> >  					  > background_thresh)))
> > -		pdflush_operation(background_writeout, 0);
> > +		bdi_start_writeback(bdi, NULL, 0);
> >  }
> >  
> Hi Jens,
> 
> I'm interested in this slight change of behaviour: when over the
> background dirty limit, background_writeout will write any dirty pages,
> while bdi_start_writeback writes only pages for the current bdi. Are
> there any benefits in making this change?
> 
> Thinking about the case of 2 apps writing to different bdis. When app A
> stops writing, then next time app B goes over the background dirty
> threshold it will only be able to write its own pages, leaving any from
> app A dirty until they reach their age limit.

The function in question balances dirty pages against a specific address
space, which has a specific mapping. The async part of the background
writeout could be global as you mention. The whole thing is a bit weird
in balance_dirty_pages(), for instance it checks for writeout against a
given queue then proceeds to do a global writeout if not busy. At least
it's consistent now.
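
Schematically, that oddity in the old code, from the hunk quoted above
(a sketch of those two lines, not the full function):

	if (bdi_nr_reclaimable) {
		/* the check is against this bdi's reclaimable count... */
		writeback_inodes(&wbc);	/* ...but the writeout was global */
	}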

> So we may be keeping dirty pages for the app that's finished longer than
> necessary. Keeping pages for a finished app while flushing pages from a
> running app seems a bit strange. I guess this is an odd corner case and
> may not be worth worrying about, but I'd be interested to hear what you
> think.

The kupdated() initiated background writeout will take care of that, if
nobody does a sync on that data first. If nobody is dirtying new data on
the given bdi, then it seems perfectly fine to let normal background
writeout handle it.

> Do you think your new code will require any changes to the per bdi dirty
> limits? It may be informative & interesting to run some tests writing to
> fast & slow devices at the same time.

Generally the code should behave fairly closely to the existing pdflush
based code, so I don't think bdi dirty limit tweaking will be necessary.
I'd definitely welcome some testing though, particularly slow vs fast as
you mention. I've mainly been doing benchmarking to make sure we don't
regress on performance, and that has been for fairly similar hardware.
Since testing does take a lot of time, it would be nice if someone else
would gather their own experiences, especially in areas that have been
problematic in the past (slow vs fast devices, for instance!).

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 02/11] writeback: switch to per-bdi threads for flushing data
  2009-05-19 12:23     ` Jens Axboe
@ 2009-05-19 13:45       ` Richard Kennedy
  2009-05-19 17:56         ` Jens Axboe
  0 siblings, 1 reply; 57+ messages in thread
From: Richard Kennedy @ 2009-05-19 13:45 UTC (permalink / raw
  To: Jens Axboe
  Cc: linux-kernel, linux-fsdevel, chris.mason, david, hch, akpm, jack,
	yanmin_zhang

On Tue, 2009-05-19 at 14:23 +0200, Jens Axboe wrote:
> On Tue, May 19 2009, Richard Kennedy wrote:
> > Jens Axboe wrote:
> > > This gets rid of pdflush for bdi writeout and kupdated style cleaning.
> > > <snip>
> > > index 2296ff4..76269f8 100644
> > > --- a/mm/page-writeback.c
> > > +++ b/mm/page-writeback.c
> > > @@ -541,7 +530,7 @@ static void balance_dirty_pages(struct address_space *mapping)
> > >  		 * been flushed to permanent storage.
> > >  		 */
> > >  		if (bdi_nr_reclaimable) {
> > > -			writeback_inodes(&wbc);
> > > +			generic_sync_bdi_inodes(NULL, &wbc);
> > >  			pages_written += write_chunk - wbc.nr_to_write;
> > >  			get_dirty_limits(&background_thresh, &dirty_thresh,
> > >  				       &bdi_thresh, bdi);
> > > @@ -592,7 +581,7 @@ static void balance_dirty_pages(struct address_space *mapping)
> > >  			(!laptop_mode && (global_page_state(NR_FILE_DIRTY)
> > >  					  + global_page_state(NR_UNSTABLE_NFS)
> > >  					  > background_thresh)))
> > > -		pdflush_operation(background_writeout, 0);
> > > +		bdi_start_writeback(bdi, NULL, 0);
> > >  }
> > >  
> > Hi Jens,
> > 
> > I'm interested in this slight change of behaviour: when over the
> > background dirty limit, background_writeout will write any dirty pages,
> > while bdi_start_writeback writes only pages for the current bdi. Are
> > there any benefits in making this change?
> > 
> > Thinking about the case of 2 apps writing to different bdis. When app A
> > stops writing, then next time app B goes over the background dirty
> > threshold it will only be able to write its own pages, leaving any from
> > app A dirty until they reach their age limit.
> 
> The function in question balances dirty pages against a specific address
> space, which has a specific mapping. The async part of the background
> writeout could be global as you mention. The whole thing is a bit weird
> in balance_dirty_pages(), for instance it checks for writeout against a
> given queue then proceeds to do a global writeout if not busy. At least
> it's consistent now.
> 
> > So we may be keeping dirty pages for the app that's finished longer than
> > necessary. Keeping pages for a finished app while flushing pages from a
> > running app seems a bit strange. I guess this is an odd corner case and
> > may not be worth worrying about, but I'd be interested to hear what you
> > think.
> 
> The kupdated() initiated background writeout will take care of that, if
> nobody does a sync on that data first. If nobody is dirtying new data on
> the given bdi, then it seems perfectly fine to let normal background
> writeout handle it.
> 
> > Do you think your new code will require any changes to the per bdi dirty
> > limits? It may be informative & interesting to run some tests writing to
> > fast & slow devices at the same time.
> 
> Generally the code should behave fairly closely to the existing pdflush
> based code, so I don't think bdi dirty limit tweaking will be necessary.
> I'd definitely welcome some testing though, particularly slow vs fast as
> you mention. I've mainly been doing benchmarking to make sure we don't
> regress on performance, and that has been for fairly similar hardware.
> Since testing does take a lot of time, it would be nice if someone else
> would gather their own experiences, especially in areas that have been
> problematic in the past (slow vs fast devices, for instance!).

Thanks for the explanation.
I'm definitely going to test this, although I don't have any interesting
hardware, only a basic workstation. But I'll let you know if I turn up
anything useful.

balance_dirty_pages() contains Peter Zijlstra's per-bdi write throttling
code, and I wonder if it will need tuning for best performance with your
changes, just because some of its assumptions may have changed. I'll run
some tests here and see what happens. Peter may have some insight and
possibly useful test cases.

regards
Richard


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 02/11] writeback: switch to per-bdi threads for flushing data
  2009-05-19 13:45       ` Richard Kennedy
@ 2009-05-19 17:56         ` Jens Axboe
  2009-05-19 22:11           ` Peter Zijlstra
  0 siblings, 1 reply; 57+ messages in thread
From: Jens Axboe @ 2009-05-19 17:56 UTC (permalink / raw
  To: Richard Kennedy
  Cc: linux-kernel, linux-fsdevel, chris.mason, david, hch, akpm, jack,
	yanmin_zhang, peterz

On Tue, May 19 2009, Richard Kennedy wrote:
> On Tue, 2009-05-19 at 14:23 +0200, Jens Axboe wrote:
> > On Tue, May 19 2009, Richard Kennedy wrote:
> > > Jens Axboe wrote:
> > > > This gets rid of pdflush for bdi writeout and kupdated style cleaning.
> > > > <snip>
> > > > index 2296ff4..76269f8 100644
> > > > --- a/mm/page-writeback.c
> > > > +++ b/mm/page-writeback.c
> > > > @@ -541,7 +530,7 @@ static void balance_dirty_pages(struct address_space *mapping)
> > > >  		 * been flushed to permanent storage.
> > > >  		 */
> > > >  		if (bdi_nr_reclaimable) {
> > > > -			writeback_inodes(&wbc);
> > > > +			generic_sync_bdi_inodes(NULL, &wbc);
> > > >  			pages_written += write_chunk - wbc.nr_to_write;
> > > >  			get_dirty_limits(&background_thresh, &dirty_thresh,
> > > >  				       &bdi_thresh, bdi);
> > > > @@ -592,7 +581,7 @@ static void balance_dirty_pages(struct address_space *mapping)
> > > >  			(!laptop_mode && (global_page_state(NR_FILE_DIRTY)
> > > >  					  + global_page_state(NR_UNSTABLE_NFS)
> > > >  					  > background_thresh)))
> > > > -		pdflush_operation(background_writeout, 0);
> > > > +		bdi_start_writeback(bdi, NULL, 0);
> > > >  }
> > > >  
> > > Hi Jens,
> > > 
> > > I'm interested in this slight change of behaviour: when over the
> > > background dirty limit, background_writeout will write any dirty pages,
> > > while bdi_start_writeback writes only pages for the current bdi. Are
> > > there any benefits in making this change?
> > > 
> > > Thinking about the case of 2 apps writing to different bdis. When app A
> > > stops writing, then next time app B goes over the background dirty
> > > threshold it will only be able to write its own pages, leaving any from
> > > app A dirty until they reach their age limit.
> > 
> > The function in question balances dirty pages against a specific address
> > space, which has a specific mapping. The async part of the background
> > writeout could be global as you mention. The whole thing is a bit weird
> > in balance_dirty_pages(), for instance it checks for writeout against a
> > given queue then proceeds to do a global writeout if not busy. At least
> > it's consistent now.
> > 
> > > So we may be keeping dirty pages for the app that's finished longer than
> > > necessary. Keeping pages for a finished app while flushing pages from a
> > > running app seems a bit strange. I guess this is an odd corner case and
> > > may not be worth worrying about, but I'd be interested to hear what you
> > > think.
> > 
> > The kupdated() initiated background writeout will take care of that, if
> > nobody does a sync on that data first. If nobody is dirtying new data on
> > the given bdi, then it seems perfectly fine to let normal background
> > writeout handle it.
> > 
> > > Do you think your new code will require any changes to the per bdi dirty
> > > limits? It may be informative & interesting to run some tests writing to
> > > fast & slow devices at the same time.
> > 
> > Generally the code should behave fairly closely to the existing pdflush
> > based code, so I don't think bdi dirty limit tweaking will be necessary.
> > I'd definitely welcome some testing though, particularly slow vs fast as
> > you mention. I've mainly been doing benchmarking to make sure we don't
> > regress on performance, and that has been for fairly similar hardware.
> > Since testing does take a lot of time, it would be nice if someone else
> > would gather their own experiences, especially in areas that have been
> > problematic in the past (slow vs fast devices, for instance!).
> 
> Thanks for the explanation.
> I'm definitely going to test this, although I don't have any interesting
> hardware, only a basic workstation. But I'll let you know if I turn up
> anything useful.

Any testing is useful, so go for it.

> balance_dirty_pages() contains Peter Zijlstra's per-bdi write throttling
> code, and I wonder if it will need tuning for best performance with your
> changes, just because some of its assumptions may have changed. I'll run
> some tests here and see what happens. Peter may have some insight and
> possibly useful test cases.

I'm assuming those are sitting in -mm? I'll take a look.

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 02/11] writeback: switch to per-bdi threads for flushing data
  2009-05-19 17:56         ` Jens Axboe
@ 2009-05-19 22:11           ` Peter Zijlstra
  0 siblings, 0 replies; 57+ messages in thread
From: Peter Zijlstra @ 2009-05-19 22:11 UTC (permalink / raw
  To: Jens Axboe
  Cc: Richard Kennedy, linux-kernel, linux-fsdevel, chris.mason, david,
	hch, akpm, jack, yanmin_zhang

On Tue, 2009-05-19 at 19:56 +0200, Jens Axboe wrote:
> > balance_dirty_pages() contains Peter Zijlstra's per-bdi write throttling
> > code, and I wonder if it will need tuning for best performance with your
> > changes, just because some of its assumptions may have changed. I'll run
> > some tests here and see what happens. Peter may have some insight and
> > possibly useful test cases.
> 
> I'm assuming those are sitting in -mm? I'll take a look.

Nah, those got merged ages ago (.24 iirc) and I don't think that would
need any touch ups wrt this series.



^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 0/11] Per-bdi writeback flusher threads #4
  2009-05-19  6:20   ` Jens Axboe
@ 2009-05-20  7:51       ` Zhang, Yanmin
  2009-05-20  7:51       ` Zhang, Yanmin
  1 sibling, 0 replies; 57+ messages in thread
From: Zhang, Yanmin @ 2009-05-20  7:51 UTC (permalink / raw
  To: Jens Axboe
  Cc: linux-kernel, linux-fsdevel, chris.mason, david, hch, akpm, jack

On Tue, 2009-05-19 at 08:20 +0200, Jens Axboe wrote:
> On Tue, May 19 2009, Zhang, Yanmin wrote:
> > On Mon, 2009-05-18 at 14:19 +0200, Jens Axboe wrote:
> > > Hi,
> > > 
> > > This is the fourth version of this patchset. Changes since v3:
> > > 
> > > - Dropped a prep patch, it has been included in mainline since.
> > > 
> > > - Add a work-to-do list to the bdi. This is struct bdi_work. Each
> > >   wb thread will notice and execute work on bdi->work_list. The arguments
> > >   are which sb (or NULL for all) to flush and how many pages to flush.
> > > 
> > > - Fix a bug where not all bdi's would end up on the bdi_list, so potentially
> > >   some data would not be flushed.
> > > 
> > > - Make wb_kupdated() pass on wbc->older_than_this so we maintain the same
> > >   behaviour for kupdated flushes.
> > > 
> > > - Have the wb thread flush first before sleeping, to avoid losing the
> > >   first flush on lazy register.
> > > 
> > > - Rebase to newer kernels.

> I'm attaching two patches - apply #1 to -rc6, and then #2 is a roll-up
> of the patch series that you can apply next.
Jens,

I ran into 2 issues with kernel 2.6.30-rc6+BDI_Flusher_V4. Below is one.

Tue May 19 00:00:00 CST 2009
BUG: unable to handle kernel NULL pointer dereference at 00000000000001d8
IP: [<ffffffff803f3c4c>] generic_make_request+0x10a/0x384
PGD 0
Oops: 0000 [#1] SMP
last sysfs file: /sys/block/sdb/stat
CPU 0
Modules linked in: igb
Pid: 1445, comm: bdi-8:16 Not tainted 2.6.30-rc6-bdiflusherv4 #1 X8DTN
RIP: 0010:[<ffffffff803f3c4c>]  [<ffffffff803f3c4c>] generic_make_request+0x10a/0x384
RSP: 0018:ffff8800bd04da60  EFLAGS: 00010206
RAX: 0000000000000000 RBX: ffff8801be45d500 RCX: 00000000038a0df8
RDX: 0000000000000008 RSI: 0000000000000576 RDI: ffff8801bf408680
RBP: ffff8801be45d500 R08: ffffe20001ee8140 R09: ffff8800bd04da98
R10: 0000000000000000 R11: ffff8800bd72eb40 R12: ffff8801be45d500
R13: ffff88005f51f310 R14: 0000000000000008 R15: ffff8800b15a5458
FS:  0000000000000000(0000) GS:ffffc20000000000(0000) knlGS:0000000000000000
CS:  0010 DS: 0018 ES: 0018 CR0: 000000008005003b
CR2: 00000000000001d8 CR3: 0000000000201000 CR4: 00000000000006e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Process bdi-8:16 (pid: 1445, threadinfo ffff8800bd04c000, task ffff8800bd1b75f0)
Stack:
 0000000000000008 ffffffff8027a613 00000000848dc000 ffffffffffffffff
 ffff8800a8190f50 ffffffff00000012 ffff8800a81938e0 ffffc2000000001b
 0000000000000000 0000000000000000 ffffe200026f9c30 0000000000000000
Call Trace:
 [<ffffffff8027a613>] ? mempool_alloc+0x59/0x10f
 [<ffffffff803f3f70>] ? submit_bio+0xaa/0xb1
 [<ffffffff802c6a3f>] ? submit_bh+0xe3/0x103
 [<ffffffff802c92ea>] ? __block_write_full_page+0x1fb/0x2f2
 [<ffffffff802c7d6a>] ? end_buffer_async_write+0x0/0xfb
 [<ffffffff8027e8d2>] ? __writepage+0xa/0x25
 [<ffffffff8027f036>] ? write_cache_pages+0x21c/0x338
 [<ffffffff8027e8c8>] ? __writepage+0x0/0x25
 [<ffffffff8027f195>] ? do_writepages+0x27/0x2d
 [<ffffffff802c22c1>] ? __writeback_single_inode+0x159/0x2b3
 [<ffffffff8071e52a>] ? thread_return+0x3e/0xaa
 [<ffffffff8027f267>] ? determine_dirtyable_memory+0xd/0x1d
 [<ffffffff8027f2dd>] ? get_dirty_limits+0x1d/0x255
 [<ffffffff802c27bc>] ? generic_sync_wb_inodes+0x1b4/0x220
 [<ffffffff802c3130>] ? wb_do_writeback+0x16c/0x215
 [<ffffffff802c323e>] ? bdi_writeback_task+0x65/0x10d
 [<ffffffff8024cc06>] ? autoremove_wake_function+0x0/0x2e
 [<ffffffff8024cb27>] ? bit_waitqueue+0x10/0xa0
 [<ffffffff80289257>] ? bdi_start_fn+0x0/0xba
 [<ffffffff802892c6>] ? bdi_start_fn+0x6f/0xba
 [<ffffffff8024c860>] ? kthread+0x54/0x80
 [<ffffffff8020c97a>] ? child_rip+0xa/0x20
 [<ffffffff8024c80c>] ? kthread+0x0/0x80
 [<ffffffff8020c970>] ? child_rip+0x0/0x20

The panic happened at the beginning of an mmap randrw after an mmap randwrite.

It's triggered in __generic_make_request => bdev_get_queue(bio->bi_bdev),
because bio->bi_bdev->bd_disk is equal to NULL.

The callchain is:
bdi_writeback_task =>
	wb_do_writeback =>
		generic_sync_wb_inodes =>
			__writeback_single_inode =>
				...
				__block_write_full_page =>
				submit_bh =>
				submit_bio =>
				generic_make_request
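
For the curious: bdev_get_queue() here is just a pointer chase, which is
why a NULL bd_disk oopses right at this point. Roughly, paraphrasing
include/linux/blkdev.h from memory (a sketch, not the exact source):

	static inline struct request_queue *bdev_get_queue(struct block_device *bdev)
	{
		/* bio->bi_bdev->bd_disk == NULL => NULL pointer dereference */
		return bdev->bd_disk->queue;
	}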

yanmin



^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 0/11] Per-bdi writeback flusher threads #4
  2009-05-20  7:51       ` Zhang, Yanmin
@ 2009-05-20  8:09       ` Jens Axboe
  2009-05-20  8:54         ` Jens Axboe
  -1 siblings, 1 reply; 57+ messages in thread
From: Jens Axboe @ 2009-05-20  8:09 UTC (permalink / raw
  To: Zhang, Yanmin
  Cc: linux-kernel, linux-fsdevel, chris.mason, david, hch, akpm, jack

On Wed, May 20 2009, Zhang, Yanmin wrote:
> On Tue, 2009-05-19 at 08:20 +0200, Jens Axboe wrote:
> > On Tue, May 19 2009, Zhang, Yanmin wrote:
> > > On Mon, 2009-05-18 at 14:19 +0200, Jens Axboe wrote:
> > > > Hi,
> > > >
> > > > This is the fourth version of this patchset. Changes since v3:
> > > >
> > > > - Dropped a prep patch, it has been included in mainline since.
> > > >
> > > > - Add a work-to-do list to the bdi. This is struct bdi_work. Each
> > > >   wb thread will notice and execute work on bdi->work_list. The arguments
> > > >   are which sb (or NULL for all) to flush and how many pages to flush.
> > > >
> > > > - Fix a bug where not all bdi's would end up on the bdi_list, so potentially
> > > >   some data would not be flushed.
> > > >
> > > > - Make wb_kupdated() pass on wbc->older_than_this so we maintain the same
> > > >   behaviour for kupdated flushes.
> > > >
> > > > - Have the wb thread flush first before sleeping, to avoid losing the
> > > >   first flush on lazy register.
> > > >
> > > > - Rebase to newer kernels.
> 
> > I'm attaching two patches - apply #1 to -rc6, and then #2 is a roll-up
> > of the patch series that you can apply next.
> Jens,
> 
> I ran into 2 issues with kernel 2.6.30-rc6+BDI_Flusher_V4. Below is one.
> 
> Tue May 19 00:00:00 CST 2009
> BUG: unable to handle kernel NULL pointer dereference at 00000000000001d8
> IP: [<ffffffff803f3c4c>] generic_make_request+0x10a/0x384
> PGD 0
> Oops: 0000 [#1] SMP
> last sysfs file: /sys/block/sdb/stat
> CPU 0
> Modules linked in: igb
> Pid: 1445, comm: bdi-8:16 Not tainted 2.6.30-rc6-bdiflusherv4 #1 X8DTN
> RIP: 0010:[<ffffffff803f3c4c>]  [<ffffffff803f3c4c>] generic_make_request+0x10a/0x384
> RSP: 0018:ffff8800bd04da60  EFLAGS: 00010206
> RAX: 0000000000000000 RBX: ffff8801be45d500 RCX: 00000000038a0df8
> RDX: 0000000000000008 RSI: 0000000000000576 RDI: ffff8801bf408680
> RBP: ffff8801be45d500 R08: ffffe20001ee8140 R09: ffff8800bd04da98
> R10: 0000000000000000 R11: ffff8800bd72eb40 R12: ffff8801be45d500
> R13: ffff88005f51f310 R14: 0000000000000008 R15: ffff8800b15a5458
> FS:  0000000000000000(0000) GS:ffffc20000000000(0000) knlGS:0000000000000000
> CS:  0010 DS: 0018 ES: 0018 CR0: 000000008005003b
> CR2: 00000000000001d8 CR3: 0000000000201000 CR4: 00000000000006e0
> DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
> Process bdi-8:16 (pid: 1445, threadinfo ffff8800bd04c000, task ffff8800bd1b75f0)
> Stack:
>  0000000000000008 ffffffff8027a613 00000000848dc000 ffffffffffffffff
>  ffff8800a8190f50 ffffffff00000012 ffff8800a81938e0 ffffc2000000001b
>  0000000000000000 0000000000000000 ffffe200026f9c30 0000000000000000
> Call Trace:
>  [<ffffffff8027a613>] ? mempool_alloc+0x59/0x10f
>  [<ffffffff803f3f70>] ? submit_bio+0xaa/0xb1
>  [<ffffffff802c6a3f>] ? submit_bh+0xe3/0x103
>  [<ffffffff802c92ea>] ? __block_write_full_page+0x1fb/0x2f2
>  [<ffffffff802c7d6a>] ? end_buffer_async_write+0x0/0xfb
>  [<ffffffff8027e8d2>] ? __writepage+0xa/0x25
>  [<ffffffff8027f036>] ? write_cache_pages+0x21c/0x338
>  [<ffffffff8027e8c8>] ? __writepage+0x0/0x25
>  [<ffffffff8027f195>] ? do_writepages+0x27/0x2d
>  [<ffffffff802c22c1>] ? __writeback_single_inode+0x159/0x2b3
>  [<ffffffff8071e52a>] ? thread_return+0x3e/0xaa
>  [<ffffffff8027f267>] ? determine_dirtyable_memory+0xd/0x1d
>  [<ffffffff8027f2dd>] ? get_dirty_limits+0x1d/0x255
>  [<ffffffff802c27bc>] ? generic_sync_wb_inodes+0x1b4/0x220
>  [<ffffffff802c3130>] ? wb_do_writeback+0x16c/0x215
>  [<ffffffff802c323e>] ? bdi_writeback_task+0x65/0x10d
>  [<ffffffff8024cc06>] ? autoremove_wake_function+0x0/0x2e
>  [<ffffffff8024cb27>] ? bit_waitqueue+0x10/0xa0
>  [<ffffffff80289257>] ? bdi_start_fn+0x0/0xba
>  [<ffffffff802892c6>] ? bdi_start_fn+0x6f/0xba
>  [<ffffffff8024c860>] ? kthread+0x54/0x80
>  [<ffffffff8020c97a>] ? child_rip+0xa/0x20
>  [<ffffffff8024c80c>] ? kthread+0x0/0x80
>  [<ffffffff8020c970>] ? child_rip+0x0/0x20
> 
> The panic happened at the beginning of an mmap randrw after an mmap randwrite.
> 
> It's triggered in __generic_make_request => bdev_get_queue(bio->bi_bdev),
> because bio->bi_bdev->bd_disk is equal to NULL.
> 
> The callchain is:
> bdi_writeback_task =>
> 	wb_do_writeback =>
> 		generic_sync_wb_inodes =>
> 			__writeback_single_inode =>
> 				...
> 				__block_write_full_page =>
> 				submit_bh =>
> 				submit_bio =>
> 				generic_make_request

Wow, that is really odd. Can you pass the details of the test you ran?

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 0/11] Per-bdi writeback flusher threads #4
  2009-05-20  8:09       ` Jens Axboe
@ 2009-05-20  8:54         ` Jens Axboe
  2009-05-20  9:19           ` Zhang, Yanmin
  0 siblings, 1 reply; 57+ messages in thread
From: Jens Axboe @ 2009-05-20  8:54 UTC (permalink / raw
  To: Zhang, Yanmin
  Cc: linux-kernel, linux-fsdevel, chris.mason, david, hch, akpm, jack

[-- Attachment #1: Type: text/plain, Size: 5097 bytes --]

On Wed, May 20 2009, Jens Axboe wrote:
> On Wed, May 20 2009, Zhang, Yanmin wrote:
> > On Tue, 2009-05-19 at 08:20 +0200, Jens Axboe wrote:
> > > On Tue, May 19 2009, Zhang, Yanmin wrote:
> > > > On Mon, 2009-05-18 at 14:19 +0200, Jens Axboe wrote:
> > > > > Hi,
> > > > >
> > > > > This is the fourth version of this patchset. Changes since v3:
> > > > >
> > > > > - Dropped a prep patch, it has been included in mainline since.
> > > > >
> > > > > - Add a work-to-do list to the bdi. This is struct bdi_work. Each
> > > > >   wb thread will notice and execute work on bdi->work_list. The arguments
> > > > >   are which sb (or NULL for all) to flush and how many pages to flush.
> > > > >
> > > > > - Fix a bug where not all bdi's would end up on the bdi_list, so potentially
> > > > >   some data would not be flushed.
> > > > >
> > > > > - Make wb_kupdated() pass on wbc->older_than_this so we maintain the same
> > > > >   behaviour for kupdated flushes.
> > > > >
> > > > > - Have the wb thread flush first before sleeping, to avoid losing the
> > > > >   first flush on lazy register.
> > > > >
> > > > > - Rebase to newer kernels.
> > 
> > > I'm attaching two patches - apply #1 to -rc6, and then #2 is a roll-up
> > > of the patch series that you can apply next.
> > Jens,
> > 
> > I ran into 2 issues with kernel 2.6.30-rc6+BDI_Flusher_V4. Below is one.
> > 
> > Tue May 19 00:00:00 CST 2009
> > BUG: unable to handle kernel NULL pointer dereference at 00000000000001d8
> > IP: [<ffffffff803f3c4c>] generic_make_request+0x10a/0x384
> > PGD 0
> > Oops: 0000 [#1] SMP
> > last sysfs file: /sys/block/sdb/stat
> > CPU 0
> > Modules linked in: igb
> > Pid: 1445, comm: bdi-8:16 Not tainted 2.6.30-rc6-bdiflusherv4 #1 X8DTN
> > RIP: 0010:[<ffffffff803f3c4c>]  [<ffffffff803f3c4c>] generic_make_request+0x10a/0x384
> > RSP: 0018:ffff8800bd04da60  EFLAGS: 00010206
> > RAX: 0000000000000000 RBX: ffff8801be45d500 RCX: 00000000038a0df8
> > RDX: 0000000000000008 RSI: 0000000000000576 RDI: ffff8801bf408680
> > RBP: ffff8801be45d500 R08: ffffe20001ee8140 R09: ffff8800bd04da98
> > R10: 0000000000000000 R11: ffff8800bd72eb40 R12: ffff8801be45d500
> > R13: ffff88005f51f310 R14: 0000000000000008 R15: ffff8800b15a5458
> > FS:  0000000000000000(0000) GS:ffffc20000000000(0000) knlGS:0000000000000000
> > CS:  0010 DS: 0018 ES: 0018 CR0: 000000008005003b
> > CR2: 00000000000001d8 CR3: 0000000000201000 CR4: 00000000000006e0
> > DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> > DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
> > Process bdi-8:16 (pid: 1445, threadinfo ffff8800bd04c000, task ffff8800bd1b75f0)
> > Stack:
> >  0000000000000008 ffffffff8027a613 00000000848dc000 ffffffffffffffff
> >  ffff8800a8190f50 ffffffff00000012 ffff8800a81938e0 ffffc2000000001b
> >  0000000000000000 0000000000000000 ffffe200026f9c30 0000000000000000
> > Call Trace:
> >  [<ffffffff8027a613>] ? mempool_alloc+0x59/0x10f
> >  [<ffffffff803f3f70>] ? submit_bio+0xaa/0xb1
> >  [<ffffffff802c6a3f>] ? submit_bh+0xe3/0x103
> >  [<ffffffff802c92ea>] ? __block_write_full_page+0x1fb/0x2f2
> >  [<ffffffff802c7d6a>] ? end_buffer_async_write+0x0/0xfb
> >  [<ffffffff8027e8d2>] ? __writepage+0xa/0x25
> >  [<ffffffff8027f036>] ? write_cache_pages+0x21c/0x338
> >  [<ffffffff8027e8c8>] ? __writepage+0x0/0x25
> >  [<ffffffff8027f195>] ? do_writepages+0x27/0x2d
> >  [<ffffffff802c22c1>] ? __writeback_single_inode+0x159/0x2b3
> >  [<ffffffff8071e52a>] ? thread_return+0x3e/0xaa
> >  [<ffffffff8027f267>] ? determine_dirtyable_memory+0xd/0x1d
> >  [<ffffffff8027f2dd>] ? get_dirty_limits+0x1d/0x255
> >  [<ffffffff802c27bc>] ? generic_sync_wb_inodes+0x1b4/0x220
> >  [<ffffffff802c3130>] ? wb_do_writeback+0x16c/0x215
> >  [<ffffffff802c323e>] ? bdi_writeback_task+0x65/0x10d
> >  [<ffffffff8024cc06>] ? autoremove_wake_function+0x0/0x2e
> >  [<ffffffff8024cb27>] ? bit_waitqueue+0x10/0xa0
> >  [<ffffffff80289257>] ? bdi_start_fn+0x0/0xba
> >  [<ffffffff802892c6>] ? bdi_start_fn+0x6f/0xba
> >  [<ffffffff8024c860>] ? kthread+0x54/0x80
> >  [<ffffffff8020c97a>] ? child_rip+0xa/0x20
> >  [<ffffffff8024c80c>] ? kthread+0x0/0x80
> >  [<ffffffff8020c970>] ? child_rip+0x0/0x20
> > 
> > The panic happened at the beginning of an mmap randrw after an mmap randwrite.
> > 
> > It's triggered in __generic_make_request => bdev_get_queue(bio->bi_bdev),
> > because bio->bi_bdev->bd_disk is equal to NULL.
> > 
> > The callchain is:
> > bdi_writeback_task =>
> > 	wb_do_writeback =>
> > 		generic_sync_wb_inodes =>
> > 			__writeback_single_inode =>
> > 				...
> > 				__block_write_full_page =>
> > 				submit_bh =>
> > 				submit_bio =>
> > 				generic_make_request
> 
> Wow, that is really odd. Can you pass the details of the test you ran?

I found one issue yesterday and one today that could cause issues, not
sure it would explain this one. But at least it's worth a try, if it's
reproducible. I'm attaching the three patches I have against the posted
series. The one in the middle is just an optimization, the first and
third are the bug fixes.

-- 
Jens Axboe


[-- Attachment #2: 0001-writeback-add-memory-barrier-before-wake_up_bit-in-b.patch --]
[-- Type: text/x-diff, Size: 857 bytes --]

From 9025f9ffc675c3d8bf6c25fdebe30ca98082bab6 Mon Sep 17 00:00:00 2001
From: Jens Axboe <jens.axboe@oracle.com>
Date: Tue, 19 May 2009 09:47:02 +0200
Subject: [PATCH 1/3] writeback: add memory barrier before wake_up_bit() in bdi_work_free()

As per the wake_up_bit() documentation. This was also triggered in the wild:
a process got stuck forever waiting for a bit clear that had already happened.

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 fs/fs-writeback.c |    1 +
 1 files changed, 1 insertions(+), 0 deletions(-)

diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index a287c09..6052701 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -102,6 +102,7 @@ static void bdi_work_free(struct rcu_head *head)
 		kfree(work);
 	else {
 		clear_bit(0, &work->state);
+		smp_mb__after_clear_bit();
 		wake_up_bit(&work->state, 0);
 	}
 }
-- 
1.6.3.9.g6345


[-- Attachment #3: 0002-writeback-attempt-to-allocate-work-struct-in-bdi_sta.patch --]
[-- Type: text/x-diff, Size: 1601 bytes --]

From b4c4af0be4ff04648d2033dc3ac4dd4d50d5864d Mon Sep 17 00:00:00 2001
From: Jens Axboe <jens.axboe@oracle.com>
Date: Tue, 19 May 2009 11:26:58 +0200
Subject: [PATCH 2/3] writeback: attempt to allocate work struct in bdi_start_writeback()

If the allocation works, then we don't have to wait for the threads
to wake up and notice the work. So it would potentially cause less
lag in bdi_start_writeback(). If it fails, just fall back to an on-stack
work struct again.

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 fs/fs-writeback.c |   19 +++++++++++++++----
 1 files changed, 15 insertions(+), 4 deletions(-)

diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index 6052701..f80afaa 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -191,14 +191,25 @@ static void bdi_wait_on_work_start(struct bdi_work *work)
 int bdi_start_writeback(struct backing_dev_info *bdi, struct super_block *sb,
 			 long nr_pages)
 {
-	struct bdi_work work;
+	struct bdi_work work_stack, *work;
 	int ret;
 
-	bdi_work_init_on_stack(&work, sb, nr_pages);
+	work = kmalloc(sizeof(*work), GFP_ATOMIC);
+	if (work)
+		bdi_work_init(work, sb, nr_pages);
+	else {
+		work = &work_stack;
+		bdi_work_init_on_stack(work, sb, nr_pages);
+	}
 
-	ret = bdi_queue_writeback(bdi, &work);
+	ret = bdi_queue_writeback(bdi, work);
 
-	bdi_wait_on_work_start(&work);
+	/*
+	 * If this came from our stack, we need to wait until the wb threads
+	 * have noticed this work before we return (and invalidate the stack)
+	 */
+	if (work == &work_stack)
+		bdi_wait_on_work_start(work);
 
 	return ret;
 }
-- 
1.6.3.9.g6345


[-- Attachment #4: 0003-writeback-mm-backing-dev.c-bdi_start_fn-should-use-b.patch --]
[-- Type: text/x-diff, Size: 992 bytes --]

From 81eabcf5ca618e2453d97a8822bc6b00fdad81c2 Mon Sep 17 00:00:00 2001
From: Jens Axboe <jens.axboe@oracle.com>
Date: Wed, 20 May 2009 10:53:44 +0200
Subject: [PATCH 3/3] writeback: mm/backing-dev.c:bdi_start_fn() should use bh disabling locks

bdi_lock is grabbed from softirq context, so we need to always use
bh disabling spinlocks. All the other callsites are OK, but this one
missed the _bh() postfix.

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 mm/backing-dev.c |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/backing-dev.c b/mm/backing-dev.c
index d45251f..60578bc 100644
--- a/mm/backing-dev.c
+++ b/mm/backing-dev.c
@@ -365,9 +365,9 @@ static int bdi_start_fn(void *ptr)
 	/*
 	 * Make us discoverable on the bdi_list again
 	 */
-	spin_lock(&bdi_lock);
+	spin_lock_bh(&bdi_lock);
 	list_add_tail_rcu(&bdi->bdi_list, &bdi_list);
-	spin_unlock(&bdi_lock);
+	spin_unlock_bh(&bdi_lock);
 
 	ret = bdi_writeback_task(wb);
 
-- 
1.6.3.9.g6345


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* Re: [PATCH 0/11] Per-bdi writeback flusher threads #4
  2009-05-20  8:54         ` Jens Axboe
@ 2009-05-20  9:19           ` Zhang, Yanmin
  2009-05-20  9:25             ` Jens Axboe
  0 siblings, 1 reply; 57+ messages in thread
From: Zhang, Yanmin @ 2009-05-20  9:19 UTC (permalink / raw
  To: Jens Axboe
  Cc: linux-kernel, linux-fsdevel, chris.mason, david, hch, akpm, jack

On Wed, 2009-05-20 at 10:54 +0200, Jens Axboe wrote:
> On Wed, May 20 2009, Jens Axboe wrote:
> > On Wed, May 20 2009, Zhang, Yanmin wrote:
> > > On Tue, 2009-05-19 at 08:20 +0200, Jens Axboe wrote:
> > > > On Tue, May 19 2009, Zhang, Yanmin wrote:
> > > > > On Mon, 2009-05-18 at 14:19 +0200, Jens Axboe wrote:
> > > > > > Hi,
> > > > > >
> > > > > > This is the fourth version of this patchset. Changes since v3:
> > > > > >
> > > > > > - Dropped a prep patch, it has been included in mainline since.
> > > > > >
> > > > > > - Add a work-to-do list to the bdi. This is struct bdi_work. Each
> > > > > >   wb thread will notice and execute work on bdi->work_list. The arguments
> > > > > >   are which sb (or NULL for all) to flush and how many pages to flush.
> > > > > >
> > > > > > - Fix a bug where not all bdi's would end up on the bdi_list, so potentially
> > > > > >   some data would not be flushed.
> > > > > >
> > > > > > - Make wb_kupdated() pass on wbc->older_than_this so we maintain the same
> > > > > >   behaviour for kupdated flushes.
> > > > > >
> > > > > > - Have the wb thread flush first before sleeping, to avoid losing the
> > > > > >   first flush on lazy register.
> > > > > >
> > > > > > - Rebase to newer kernels.
> > > 
> > > > I'm attaching two patches - apply #1 to -rc6, and then #2 is a roll-up
> > > > of the patch series that you can apply next.
> > > Jens,
> > > 
> > > I ran into 2 issues with kernel 2.6.30-rc6+BDI_Flusher_V4. Below is one.
> > > 
> > > Tue May 19 00:00:00 CST 2009
> > > BUG: unable to handle kernel NULL pointer dereference at 00000000000001d8
> > > IP: [<ffffffff803f3c4c>] generic_make_request+0x10a/0x384
> > > PGD 0
> > > Oops: 0000 [#1] SMP
> > > last sysfs file: /sys/block/sdb/stat
> > > CPU 0
> > > Modules linked in: igb
> > > Pid: 1445, comm: bdi-8:16 Not tainted 2.6.30-rc6-bdiflusherv4 #1 X8DTN
> > > RIP: 0010:[<ffffffff803f3c4c>]  [<ffffffff803f3c4c>] generic_make_request+0x10a/0x384
> > > RSP: 0018:ffff8800bd04da60  EFLAGS: 00010206
> > > RAX: 0000000000000000 RBX: ffff8801be45d500 RCX: 00000000038a0df8
> > > RDX: 0000000000000008 RSI: 0000000000000576 RDI: ffff8801bf408680
> > > RBP: ffff8801be45d500 R08: ffffe20001ee8140 R09: ffff8800bd04da98
> > > R10: 0000000000000000 R11: ffff8800bd72eb40 R12: ffff8801be45d500
> > > R13: ffff88005f51f310 R14: 0000000000000008 R15: ffff8800b15a5458
> > > FS:  0000000000000000(0000) GS:ffffc20000000000(0000) knlGS:0000000000000000
> > > CS:  0010 DS: 0018 ES: 0018 CR0: 000000008005003b
> > > CR2: 00000000000001d8 CR3: 0000000000201000 CR4: 00000000000006e0
> > > DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> > > DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
> > > Process bdi-8:16 (pid: 1445, threadinfo ffff8800bd04c000, task ffff8800bd1b75f0)
> > > Stack:
> > >  0000000000000008 ffffffff8027a613 00000000848dc000 ffffffffffffffff
> > >  ffff8800a8190f50 ffffffff00000012 ffff8800a81938e0 ffffc2000000001b
> > >  0000000000000000 0000000000000000 ffffe200026f9c30 0000000000000000
> > > Call Trace:
> > >  [<ffffffff8027a613>] ? mempool_alloc+0x59/0x10f
> > >  [<ffffffff803f3f70>] ? submit_bio+0xaa/0xb1
> > >  [<ffffffff802c6a3f>] ? submit_bh+0xe3/0x103
> > >  [<ffffffff802c92ea>] ? __block_write_full_page+0x1fb/0x2f2
> > >  [<ffffffff802c7d6a>] ? end_buffer_async_write+0x0/0xfb
> > >  [<ffffffff8027e8d2>] ? __writepage+0xa/0x25
> > >  [<ffffffff8027f036>] ? write_cache_pages+0x21c/0x338
> > >  [<ffffffff8027e8c8>] ? __writepage+0x0/0x25
> > >  [<ffffffff8027f195>] ? do_writepages+0x27/0x2d
> > >  [<ffffffff802c22c1>] ? __writeback_single_inode+0x159/0x2b3
> > >  [<ffffffff8071e52a>] ? thread_return+0x3e/0xaa
> > >  [<ffffffff8027f267>] ? determine_dirtyable_memory+0xd/0x1d
> > >  [<ffffffff8027f2dd>] ? get_dirty_limits+0x1d/0x255
> > >  [<ffffffff802c27bc>] ? generic_sync_wb_inodes+0x1b4/0x220
> > >  [<ffffffff802c3130>] ? wb_do_writeback+0x16c/0x215
> > >  [<ffffffff802c323e>] ? bdi_writeback_task+0x65/0x10d
> > >  [<ffffffff8024cc06>] ? autoremove_wake_function+0x0/0x2e
> > >  [<ffffffff8024cb27>] ? bit_waitqueue+0x10/0xa0
> > >  [<ffffffff80289257>] ? bdi_start_fn+0x0/0xba
> > >  [<ffffffff802892c6>] ? bdi_start_fn+0x6f/0xba
> > >  [<ffffffff8024c860>] ? kthread+0x54/0x80
> > >  [<ffffffff8020c97a>] ? child_rip+0xa/0x20
> > >  [<ffffffff8024c80c>] ? kthread+0x0/0x80
> > >  [<ffffffff8020c970>] ? child_rip+0x0/0x20
> > > 


> 
> I found one issue yesterday and one today that could cause issues, not
> sure it would explain this one. But at least it's worth a try, if it's
> reproducible.
I just reproduced it a moment ago manually.

[global]
direct=0
ioengine=mmap
iodepth=256
iodepth_batch=32
size=4G
bs=4k
pre_read=1
overwrite=1
numjobs=1
loops=5
runtime=600
group_reporting
directory=/mnt/stp/fiodata
[job_group0_sub0]
startdelay=0
rw=randwrite
filename=data0/f1:data0/f2
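
Assuming the job file above is saved as, say, randwrite.fio (the name is
arbitrary), it runs with just:

	fio randwrite.fio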


The fio binary includes my pre_read patch to flush the files to memory.

Before starting the second test, I dropped the caches with:
# echo 3 > /proc/sys/vm/drop_caches

I suspect drop_caches triggers it.

>  I'm attaching the three patches I have against the posted
> series. The one in the middle is just an optimization, the first and
> third are the bug fixes.
I will test it tomorrow.



^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 0/11] Per-bdi writeback flusher threads #4
  2009-05-20  9:19           ` Zhang, Yanmin
@ 2009-05-20  9:25             ` Jens Axboe
  2009-05-20 11:19               ` Jens Axboe
  0 siblings, 1 reply; 57+ messages in thread
From: Jens Axboe @ 2009-05-20  9:25 UTC (permalink / raw
  To: Zhang, Yanmin
  Cc: linux-kernel, linux-fsdevel, chris.mason, david, hch, akpm, jack

On Wed, May 20 2009, Zhang, Yanmin wrote:
> On Wed, 2009-05-20 at 10:54 +0200, Jens Axboe wrote:
> > On Wed, May 20 2009, Jens Axboe wrote:
> > > On Wed, May 20 2009, Zhang, Yanmin wrote:
> > > > On Tue, 2009-05-19 at 08:20 +0200, Jens Axboe wrote:
> > > > > On Tue, May 19 2009, Zhang, Yanmin wrote:
> > > > > > On Mon, 2009-05-18 at 14:19 +0200, Jens Axboe wrote:
> > > > > > > Hi,
> > > > > > >
> > > > > > > This is the fourth version of this patchset. Changes since v3:
> > > > > > >
> > > > > > > - Dropped a prep patch, it has been included in mainline since.
> > > > > > >
> > > > > > > - Add a work-to-do list to the bdi. This is struct bdi_work. Each
> > > > > > >   wb thread will notice and execute work on bdi->work_list. The arguments
> > > > > > >   are which sb (or NULL for all) to flush and how many pages to flush.
> > > > > > >
> > > > > > > - Fix a bug where not all bdi's would end up on the bdi_list, so potentially
> > > > > > >   some data would not be flushed.
> > > > > > >
> > > > > > > - Make wb_kupdated() pass on wbc->older_than_this so we maintain the same
> > > > > > >   behaviour for kupdated flushes.
> > > > > > >
> > > > > > > - Have the wb thread flush first before sleeping, to avoid losing the
> > > > > > >   first flush on lazy register.
> > > > > > >
> > > > > > > - Rebase to newer kernels.
> > > > 
> > > > > I'm attaching two patches - apply #1 to -rc6, and then #2 is a roll-up
> > > > > of the patch series that you can apply next.
> > > > Jens,
> > > > 
> > > > I ran into 2 issues with kernel 2.6.30-rc6+BDI_Flusher_V4. Below is one.
> > > > 
> > > > Tue May 19 00:00:00 CST 2009
> > > > BUG: unable to handle kernel NULL pointer dereference at 00000000000001d8
> > > > IP: [<ffffffff803f3c4c>] generic_make_request+0x10a/0x384
> > > > PGD 0
> > > > Oops: 0000 [#1] SMP
> > > > last sysfs file: /sys/block/sdb/stat
> > > > CPU 0
> > > > Modules linked in: igb
> > > > Pid: 1445, comm: bdi-8:16 Not tainted 2.6.30-rc6-bdiflusherv4 #1 X8DTN
> > > > RIP: 0010:[<ffffffff803f3c4c>]  [<ffffffff803f3c4c>] generic_make_request+0x10a/0x384
> > > > RSP: 0018:ffff8800bd04da60  EFLAGS: 00010206
> > > > RAX: 0000000000000000 RBX: ffff8801be45d500 RCX: 00000000038a0df8
> > > > RDX: 0000000000000008 RSI: 0000000000000576 RDI: ffff8801bf408680
> > > > RBP: ffff8801be45d500 R08: ffffe20001ee8140 R09: ffff8800bd04da98
> > > > R10: 0000000000000000 R11: ffff8800bd72eb40 R12: ffff8801be45d500
> > > > R13: ffff88005f51f310 R14: 0000000000000008 R15: ffff8800b15a5458
> > > > FS:  0000000000000000(0000) GS:ffffc20000000000(0000) knlGS:0000000000000000
> > > > CS:  0010 DS: 0018 ES: 0018 CR0: 000000008005003b
> > > > CR2: 00000000000001d8 CR3: 0000000000201000 CR4: 00000000000006e0
> > > > DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> > > > DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
> > > > Process bdi-8:16 (pid: 1445, threadinfo ffff8800bd04c000, task ffff8800bd1b75f0)
> > > > Stack:
> > > >  0000000000000008 ffffffff8027a613 00000000848dc000 ffffffffffffffff
> > > >  ffff8800a8190f50 ffffffff00000012 ffff8800a81938e0 ffffc2000000001b
> > > >  0000000000000000 0000000000000000 ffffe200026f9c30 0000000000000000
> > > > Call Trace:
> > > >  [<ffffffff8027a613>] ? mempool_alloc+0x59/0x10f
> > > >  [<ffffffff803f3f70>] ? submit_bio+0xaa/0xb1
> > > >  [<ffffffff802c6a3f>] ? submit_bh+0xe3/0x103
> > > >  [<ffffffff802c92ea>] ? __block_write_full_page+0x1fb/0x2f2
> > > >  [<ffffffff802c7d6a>] ? end_buffer_async_write+0x0/0xfb
> > > >  [<ffffffff8027e8d2>] ? __writepage+0xa/0x25
> > > >  [<ffffffff8027f036>] ? write_cache_pages+0x21c/0x338
> > > >  [<ffffffff8027e8c8>] ? __writepage+0x0/0x25
> > > >  [<ffffffff8027f195>] ? do_writepages+0x27/0x2d
> > > >  [<ffffffff802c22c1>] ? __writeback_single_inode+0x159/0x2b3
> > > >  [<ffffffff8071e52a>] ? thread_return+0x3e/0xaa
> > > >  [<ffffffff8027f267>] ? determine_dirtyable_memory+0xd/0x1d
> > > >  [<ffffffff8027f2dd>] ? get_dirty_limits+0x1d/0x255
> > > >  [<ffffffff802c27bc>] ? generic_sync_wb_inodes+0x1b4/0x220
> > > >  [<ffffffff802c3130>] ? wb_do_writeback+0x16c/0x215
> > > >  [<ffffffff802c323e>] ? bdi_writeback_task+0x65/0x10d
> > > >  [<ffffffff8024cc06>] ? autoremove_wake_function+0x0/0x2e
> > > >  [<ffffffff8024cb27>] ? bit_waitqueue+0x10/0xa0
> > > >  [<ffffffff80289257>] ? bdi_start_fn+0x0/0xba
> > > >  [<ffffffff802892c6>] ? bdi_start_fn+0x6f/0xba
> > > >  [<ffffffff8024c860>] ? kthread+0x54/0x80
> > > >  [<ffffffff8020c97a>] ? child_rip+0xa/0x20
> > > >  [<ffffffff8024c80c>] ? kthread+0x0/0x80
> > > >  [<ffffffff8020c970>] ? child_rip+0x0/0x20
> > > > 
> 
> 
> > 
> > I found one issue yesterday and one today that could cause issues, not
> > sure it would explain this one. But at least it's worth a try, if it's
> > reproducible.
> I just reproduced it a moment ago manually.
> 
> [global]
> direct=0
> ioengine=mmap
> iodepth=256
> iodepth_batch=32
> size=4G
> bs=4k
> pre_read=1
> overwrite=1
> numjobs=1
> loops=5
> runtime=600
> group_reporting
> directory=/mnt/stp/fiodata
> [job_group0_sub0]
> startdelay=0
> rw=randwrite
> filename=data0/f1:data0/f2
> 
> 
> The fio binary includes my pre_read patch, which reads the files into memory up front.
> 
> Before starting the second test, I dropped the caches with:
> # echo 3 > /proc/sys/vm/drop_caches
> 
> I suspect drop_caches triggers it.

Thanks, will try this. What filesystem and mount options did you use?

> >  I'm attaching the three patches I have against the posted
> > series. The one in the middle is just an optimization, the first and
> > third are the bug fixes.
> I will test it tomorrow.
> 
> 
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 02/11] writeback: switch to per-bdi threads for flushing data
  2009-05-18 12:19 ` [PATCH 02/11] writeback: switch to per-bdi threads for flushing data Jens Axboe
  2009-05-19 10:20   ` Richard Kennedy
@ 2009-05-20 11:18   ` Jan Kara
  2009-05-20 11:32     ` Jens Axboe
  2009-05-20 12:37   ` Christoph Hellwig
  2 siblings, 1 reply; 57+ messages in thread
From: Jan Kara @ 2009-05-20 11:18 UTC (permalink / raw
  To: Jens Axboe
  Cc: linux-kernel, linux-fsdevel, chris.mason, david, hch, akpm, jack,
	yanmin_zhang

  Hi Jens,

  a few comments here. Mainly, I still don't think sys_sync() is working
right - see comments below.

On Mon 18-05-09 14:19:43, Jens Axboe wrote:
> diff --git a/fs/buffer.c b/fs/buffer.c
> index aed2977..14f0802 100644
> --- a/fs/buffer.c
> +++ b/fs/buffer.c
> @@ -281,7 +281,7 @@ static void free_more_memory(void)
>  	struct zone *zone;
>  	int nid;
>  
> -	wakeup_pdflush(1024);
> +	wakeup_flusher_threads(1024);
>  	yield();
>  
>  	for_each_online_node(nid) {
> diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
> index 34c8d1d..c40345c 100644
> --- a/fs/fs-writeback.c
> +++ b/fs/fs-writeback.c
> @@ -19,6 +19,8 @@
>  #include <linux/sched.h>
>  #include <linux/fs.h>
>  #include <linux/mm.h>
> +#include <linux/kthread.h>
> +#include <linux/freezer.h>
>  #include <linux/writeback.h>
>  #include <linux/blkdev.h>
>  #include <linux/backing-dev.h>
> @@ -61,10 +63,186 @@ int writeback_in_progress(struct backing_dev_info *bdi)
>   */
>  static void writeback_release(struct backing_dev_info *bdi)
>  {
> -	BUG_ON(!writeback_in_progress(bdi));
> +	WARN_ON_ONCE(!writeback_in_progress(bdi));
> +	bdi->wb_arg.nr_pages = 0;
> +	bdi->wb_arg.sb = NULL;
>  	clear_bit(BDI_pdflush, &bdi->state);
>  }
>  
> +int bdi_start_writeback(struct backing_dev_info *bdi, struct super_block *sb,
> +			 long nr_pages)
> +{
> +	/*
> +	 * This only happens the first time someone kicks this bdi, so put
> +	 * it out-of-line.
> +	 */
> +	if (unlikely(!bdi->task)) {
> +		bdi_add_default_flusher_task(bdi);
> +		return 1;
> +	}
> +
> +	if (writeback_acquire(bdi)) {
> +		bdi->wb_arg.nr_pages = nr_pages;
> +		bdi->wb_arg.sb = sb;
> +		/*
> +		 * make above store seen before the task is woken
> +		 */
> +		smp_mb();
> +		wake_up(&bdi->wait);
> +	}
> +
> +	return 0;
> +}
> +
> +/*
> + * The maximum number of pages to writeout in a single bdi flush/kupdate
> + * operation.  We do this so we don't hold I_SYNC against an inode for
> + * enormous amounts of time, which would block a userspace task which has
> + * been forced to throttle against that inode.  Also, the code reevaluates
> + * the dirty each time it has written this many pages.
> + */
> +#define MAX_WRITEBACK_PAGES     1024
> +
> +/*
> + * Periodic writeback of "old" data.
> + *
> + * Define "old": the first time one of an inode's pages is dirtied, we mark the
> + * dirtying-time in the inode's address_space.  So this periodic writeback code
> + * just walks the superblock inode list, writing back any inodes which are
> + * older than a specific point in time.
> + *
> + * Try to run once per dirty_writeback_interval.  But if a writeback event
> + * takes longer than a dirty_writeback_interval interval, then leave a
> + * one-second gap.
> + *
> + * older_than_this takes precedence over nr_to_write.  So we'll only write back
> + * all dirty pages if they are all attached to "old" mappings.
> + */
> +static void bdi_kupdated(struct backing_dev_info *bdi)
> +{
> +	unsigned long oldest_jif;
> +	long nr_to_write;
> +	struct writeback_control wbc = {
> +		.bdi			= bdi,
> +		.sync_mode		= WB_SYNC_NONE,
> +		.older_than_this	= &oldest_jif,
> +		.nr_to_write		= 0,
> +		.for_kupdate		= 1,
> +		.range_cyclic		= 1,
> +	};
> +
> +	sync_supers();
> +
> +	oldest_jif = jiffies - msecs_to_jiffies(dirty_expire_interval * 10);
> +
> +	nr_to_write = global_page_state(NR_FILE_DIRTY) +
> +			global_page_state(NR_UNSTABLE_NFS) +
> +			(inodes_stat.nr_inodes - inodes_stat.nr_unused);
> +
> +	while (nr_to_write > 0) {
> +		wbc.more_io = 0;
> +		wbc.encountered_congestion = 0;
> +		wbc.nr_to_write = MAX_WRITEBACK_PAGES;
> +		generic_sync_bdi_inodes(NULL, &wbc);
> +		if (wbc.nr_to_write > 0)
> +			break;	/* All the old data is written */
> +		nr_to_write -= MAX_WRITEBACK_PAGES;
> +	}
> +}
> +
> +static void bdi_pdflush(struct backing_dev_info *bdi)
> +{
> +	struct writeback_control wbc = {
> +		.bdi			= bdi,
> +		.sync_mode		= WB_SYNC_NONE,
> +		.older_than_this	= NULL,
> +		.range_cyclic		= 1,
> +	};
> +	long nr_pages = bdi->wb_arg.nr_pages;
> +
> +	for (;;) {
> +		unsigned long background_thresh, dirty_thresh;
> +		get_dirty_limits(&background_thresh, &dirty_thresh, NULL, NULL);
> +		if ((global_page_state(NR_FILE_DIRTY) +
> +		    global_page_state(NR_UNSTABLE_NFS) < background_thresh) &&
> +		    nr_pages <= 0)
> +			break;
> +
> +		wbc.more_io = 0;
> +		wbc.encountered_congestion = 0;
> +		wbc.nr_to_write = MAX_WRITEBACK_PAGES;
> +		wbc.pages_skipped = 0;
> +		generic_sync_bdi_inodes(bdi->wb_arg.sb, &wbc);
> +		nr_pages -= MAX_WRITEBACK_PAGES - wbc.nr_to_write;
> +		/*
> +		 * If we ran out of stuff to write, bail unless more_io got set
> +		 */
> +		if (wbc.nr_to_write > 0 || wbc.pages_skipped > 0) {
> +			if (wbc.more_io)
> +				continue;
> +			break;
> +		}
> +	}
> +}
> +
> +/*
> + * Handle writeback of dirty data for the device backed by this bdi. Also
> + * wakes up periodically and does kupdated style flushing.
> + */
> +int bdi_writeback_task(struct backing_dev_info *bdi)
> +{
> +	while (!kthread_should_stop()) {
> +		unsigned long wait_jiffies;
> +		DEFINE_WAIT(wait);
> +
> +		prepare_to_wait(&bdi->wait, &wait, TASK_INTERRUPTIBLE);
> +		wait_jiffies = msecs_to_jiffies(dirty_writeback_interval * 10);
> +		schedule_timeout(wait_jiffies);
> +		try_to_freeze();
> +
> +		/*
> +		 * We get here in two cases:
> +		 *
> +		 *  schedule_timeout() returned because the dirty writeback
> +		 *  interval has elapsed. If that happens, we will be able
> +		 *  to acquire the writeback lock and will proceed to do
> +		 *  kupdated style writeout.
> +		 *
> +		 *  Someone called bdi_start_writeback(), which will acquire
> +		 *  the writeback lock. This means our writeback_acquire()
> +		 *  below will fail and we call into bdi_pdflush() for
> +		 *  pdflush style writeout.
> +		 *
> +		 */
> +		if (writeback_acquire(bdi))
> +			bdi_kupdated(bdi);
> +		else
> +			bdi_pdflush(bdi);
> +
> +		writeback_release(bdi);
> +		finish_wait(&bdi->wait, &wait);
> +	}
> +
> +	return 0;
> +}
> +
> +void bdi_writeback_all(struct super_block *sb, long nr_pages)
> +{
> +	struct backing_dev_info *bdi;
> +
> +	rcu_read_lock();
> +
> +restart:
> +	list_for_each_entry_rcu(bdi, &bdi_list, bdi_list) {
  Isn't the RCU list here a bit of overengineering? AFAICS we use the list
only here and, if I'm grepping right, generic_sync_sb_inodes() is currently
only used for data integrity sync (after your patches) from fs-writeback.c
and by UBIFS to do the equivalent of writeback_inodes(). So a simple
spinlock guarding the list should be just fine. Or am I missing something?
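
A minimal sketch of that spinlock-only variant, assuming bdi_lock protects
bdi_list as in the posted patch (note the restart is still needed, since
bdi_add_default_flusher_task() takes bdi_lock itself):

void bdi_writeback_all(struct super_block *sb, long nr_pages)
{
    struct backing_dev_info *bdi;

    spin_lock_bh(&bdi_lock);
restart:
    list_for_each_entry(bdi, &bdi_list, bdi_list) {
        if (!bdi_has_dirty_io(bdi))
            continue;
        if (bdi->task) {
            /* thread already exists, just hand it the work */
            bdi_start_writeback(bdi, sb, nr_pages);
            continue;
        }
        /*
         * No thread yet: drop the lock to fork one, then rescan.
         * The bdi has moved off bdi_list to the pending list, so
         * we still make forward progress.
         */
        spin_unlock_bh(&bdi_lock);
        bdi_add_default_flusher_task(bdi);
        spin_lock_bh(&bdi_lock);
        goto restart;
    }
    spin_unlock_bh(&bdi_lock);
}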

> +		if (!bdi_has_dirty_io(bdi))
> +			continue;
> +		if (bdi_start_writeback(bdi, sb, nr_pages))
> +			goto restart;
> +	}
> +
> +	rcu_read_unlock();
> +}
> +
>  /**
>   *	__mark_inode_dirty -	internal function
>   *	@inode: inode to mark
> @@ -263,46 +441,6 @@ static void queue_io(struct backing_dev_info *bdi,
>  	move_expired_inodes(&bdi->b_dirty, &bdi->b_io, older_than_this);
>  }
>  
> -static int sb_on_inode_list(struct super_block *sb, struct list_head *list)
> -{
> -	struct inode *inode;
> -	int ret = 0;
> -
> -	spin_lock(&inode_lock);
> -	list_for_each_entry(inode, list, i_list) {
> -		if (inode->i_sb == sb) {
> -			ret = 1;
> -			break;
> -		}
> -	}
> -	spin_unlock(&inode_lock);
> -	return ret;
> -}
> -
> -int sb_has_dirty_inodes(struct super_block *sb)
> -{
> -	struct backing_dev_info *bdi;
> -	int ret = 0;
> -
> -	/*
> -	 * This is REALLY expensive right now, but it'll go away
> -	 * when the bdi writeback is introduced
> -	 */
> -	rcu_read_lock();
> -	list_for_each_entry_rcu(bdi, &bdi_list, bdi_list) {
> -		if (sb_on_inode_list(sb, &bdi->b_dirty) ||
> -		    sb_on_inode_list(sb, &bdi->b_io) ||
> -		    sb_on_inode_list(sb, &bdi->b_more_io)) {
> -			ret = 1;
> -			break;
> -		}
> -	}
> -	rcu_read_unlock();
> -
> -	return ret;
> -}
> -EXPORT_SYMBOL(sb_has_dirty_inodes);
> -
>  /*
>   * Write a single inode's dirty pages and inode data out to disk.
>   * If `wait' is set, wait on the writeout.
> @@ -461,11 +599,11 @@ __writeback_single_inode(struct inode *inode, struct writeback_control *wbc)
>  	return __sync_single_inode(inode, wbc);
>  }
>  
> -static void generic_sync_bdi_inodes(struct backing_dev_info *bdi,
> -				    struct writeback_control *wbc,
> -				    struct super_block *sb,
> -				    int is_blkdev_sb)
> +void generic_sync_bdi_inodes(struct super_block *sb,
> +			     struct writeback_control *wbc)
>  {
> +	const int is_blkdev_sb = sb_is_blkdev_sb(sb);
> +	struct backing_dev_info *bdi = wbc->bdi;
>  	const unsigned long start = jiffies;	/* livelock avoidance */
>  
>  	spin_lock(&inode_lock);
> @@ -516,13 +654,6 @@ static void generic_sync_bdi_inodes(struct backing_dev_info *bdi,
>  			continue;		/* Skip a congested blockdev */
>  		}
>  
> -		if (wbc->bdi && bdi != wbc->bdi) {
> -			if (!is_blkdev_sb)
> -				break;		/* fs has the wrong queue */
> -			requeue_io(inode);
> -			continue;		/* blockdev has wrong queue */
> -		}
> -
>  		/*
>  		 * Was this inode dirtied after sync_sb_inodes was called?
>  		 * This keeps sync from extra jobs and livelock.
> @@ -530,16 +661,10 @@ static void generic_sync_bdi_inodes(struct backing_dev_info *bdi,
>  		if (inode_dirtied_after(inode, start))
>  			break;
>  
> -		/* Is another pdflush already flushing this queue? */
> -		if (current_is_pdflush() && !writeback_acquire(bdi))
> -			break;
> -
>  		BUG_ON(inode->i_state & I_FREEING);
>  		__iget(inode);
>  		pages_skipped = wbc->pages_skipped;
>  		__writeback_single_inode(inode, wbc);
> -		if (current_is_pdflush())
> -			writeback_release(bdi);
>  		if (wbc->pages_skipped != pages_skipped) {
>  			/*
>  			 * writeback is not making progress due to locked
> @@ -578,11 +703,6 @@ static void generic_sync_bdi_inodes(struct backing_dev_info *bdi,
>   * a variety of queues, so all inodes are searched.  For other superblocks,
>   * assume that all inodes are backed by the same queue.
>   *
> - * FIXME: this linear search could get expensive with many fileystems.  But
> - * how to fix?  We need to go from an address_space to all inodes which share
> - * a queue with that address_space.  (Easy: have a global "dirty superblocks"
> - * list).
> - *
>   * The inodes to be written are parked on bdi->b_io.  They are moved back onto
>   * bdi->b_dirty as they are selected for writing.  This way, none can be missed
>   * on the writer throttling path, and we get decent balancing between many
> @@ -591,13 +711,10 @@ static void generic_sync_bdi_inodes(struct backing_dev_info *bdi,
>  void generic_sync_sb_inodes(struct super_block *sb,
>  				struct writeback_control *wbc)
>  {
> -	const int is_blkdev_sb = sb_is_blkdev_sb(sb);
> -	struct backing_dev_info *bdi;
> -
> -	rcu_read_lock();
> -	list_for_each_entry_rcu(bdi, &bdi_list, bdi_list)
> -		generic_sync_bdi_inodes(bdi, wbc, sb, is_blkdev_sb);
> -	rcu_read_unlock();
> +	if (wbc->bdi)
> +		bdi_start_writeback(wbc->bdi, sb, 0);
> +	else
> +		bdi_writeback_all(sb, 0);
  It does not work like this. The way you call writeback here, you never
end up calling __writeback_single_inode() with WB_SYNC_ALL set in wbc (your
writeback routines always call inode writeback with WB_SYNC_NONE). And
that is required for proper data integrity sync... So you have to somehow
propagate this down to the writeback thread.
  Alternatively, what probably makes a lot of sense is to separate the data
integrity sync path from plain data writeback. In the first case we care
more about correctness; in the second we care more about performance and
overall throughput.
  BTW your patch also significantly changes one thing: with your patch,
data integrity sync is done by flusher threads while previously it was done
from the context of the thread calling sync(). I'm undecided whether it is
a good or bad thing but it definitely deserves a comment in the changelog.
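
One way to propagate it down, sketched against the structures in this
patch (the sync_mode field and the extra argument are additions for
illustration, not in the posted series):

struct bdi_writeback_arg {
    unsigned long nr_pages;
    struct super_block *sb;
    enum writeback_sync_modes sync_mode;    /* WB_SYNC_NONE / WB_SYNC_ALL */
};

int bdi_start_writeback(struct backing_dev_info *bdi,
                        struct super_block *sb, long nr_pages,
                        enum writeback_sync_modes sync_mode)
{
    if (unlikely(!bdi->task)) {
        bdi_add_default_flusher_task(bdi);
        return 1;
    }

    if (writeback_acquire(bdi)) {
        bdi->wb_arg.nr_pages = nr_pages;
        bdi->wb_arg.sb = sb;
        bdi->wb_arg.sync_mode = sync_mode;
        /*
         * make the above stores visible before the task is woken
         */
        smp_mb();
        wake_up(&bdi->wait);
    }

    return 0;
}

bdi_pdflush() would then set wbc.sync_mode from bdi->wb_arg.sync_mode
instead of hardcoding WB_SYNC_NONE.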

>  	if (wbc->sync_mode == WB_SYNC_ALL) {
>  		struct inode *inode, *old_inode = NULL;
> @@ -653,58 +770,6 @@ static void sync_sb_inodes(struct super_block *sb,
>  }
>  
>  /*
> - * Start writeback of dirty pagecache data against all unlocked inodes.
> - *
> - * Note:
> - * We don't need to grab a reference to superblock here. If it has non-empty
> - * ->b_dirty it's hadn't been killed yet and kill_super() won't proceed
> - * past sync_inodes_sb() until the ->b_dirty/b_io/b_more_io lists are all
> - * empty. Since __sync_single_inode() regains inode_lock before it finally moves
> - * inode from superblock lists we are OK.
> - *
> - * If `older_than_this' is non-zero then only flush inodes which have a
> - * flushtime older than *older_than_this.
> - *
> - * If `bdi' is non-zero then we will scan the first inode against each
> - * superblock until we find the matching ones.  One group will be the dirty
> - * inodes against a filesystem.  Then when we hit the dummy blockdev superblock,
> - * sync_sb_inodes will seekout the blockdev which matches `bdi'.  Maybe not
> - * super-efficient but we're about to do a ton of I/O...
> - */
> -void
> -writeback_inodes(struct writeback_control *wbc)
> -{
> -	struct super_block *sb;
> -
> -	might_sleep();
> -	spin_lock(&sb_lock);
> -restart:
> -	list_for_each_entry_reverse(sb, &super_blocks, s_list) {
> -		if (sb_has_dirty_inodes(sb)) {
> -			/* we're making our own get_super here */
> -			sb->s_count++;
> -			spin_unlock(&sb_lock);
> -			/*
> -			 * If we can't get the readlock, there's no sense in
> -			 * waiting around, most of the time the FS is going to
> -			 * be unmounted by the time it is released.
> -			 */
> -			if (down_read_trylock(&sb->s_umount)) {
> -				if (sb->s_root)
> -					sync_sb_inodes(sb, wbc);
> -				up_read(&sb->s_umount);
> -			}
> -			spin_lock(&sb_lock);
> -			if (__put_super_and_need_restart(sb))
> -				goto restart;
> -		}
> -		if (wbc->nr_to_write <= 0)
> -			break;
> -	}
> -	spin_unlock(&sb_lock);
> -}
> -
> -/*
>   * writeback and wait upon the filesystem's dirty inodes.  The caller will
>   * do this in two passes - one to write, and one to wait.
>   *
> diff --git a/fs/ntfs/super.c b/fs/ntfs/super.c
> index f76951d..c4cb157 100644
> --- a/fs/ntfs/super.c
> +++ b/fs/ntfs/super.c
> @@ -2373,39 +2373,13 @@ static void ntfs_put_super(struct super_block *sb)
>  		vol->mftmirr_ino = NULL;
>  	}
>  	/*
> -	 * If any dirty inodes are left, throw away all mft data page cache
> -	 * pages to allow a clean umount.  This should never happen any more
> -	 * due to mft.c::ntfs_mft_writepage() cleaning all the dirty pages as
> -	 * the underlying mft records are written out and cleaned.  If it does,
> +	 * We should have no dirty inodes left, due to
> +	 * mft.c::ntfs_mft_writepage() cleaning all the dirty pages as
> +	 * the underlying mft records are written out and cleaned.  If they do
>  	 * happen anyway, we want to know...
>  	 */
>  	ntfs_commit_inode(vol->mft_ino);
>  	write_inode_now(vol->mft_ino, 1);
> -	if (sb_has_dirty_inodes(sb)) {
> -		const char *s1, *s2;
> -
> -		mutex_lock(&vol->mft_ino->i_mutex);
> -		truncate_inode_pages(vol->mft_ino->i_mapping, 0);
> -		mutex_unlock(&vol->mft_ino->i_mutex);
> -		write_inode_now(vol->mft_ino, 1);
> -		if (sb_has_dirty_inodes(sb)) {
> -			static const char *_s1 = "inodes";
> -			static const char *_s2 = "";
> -			s1 = _s1;
> -			s2 = _s2;
> -		} else {
> -			static const char *_s1 = "mft pages";
> -			static const char *_s2 = "They have been thrown "
> -					"away.  ";
> -			s1 = _s1;
> -			s2 = _s2;
> -		}
> -		ntfs_error(sb, "Dirty %s found at umount time.  %sYou should "
> -				"run chkdsk.  Please email "
> -				"linux-ntfs-dev@lists.sourceforge.net and say "
> -				"that you saw this message.  Thank you.", s1,
> -				s2);
> -	}
>  #endif /* NTFS_RW */
>  
>  	iput(vol->mft_ino);
> diff --git a/fs/sync.c b/fs/sync.c
> index 7abc65f..3887f10 100644
> --- a/fs/sync.c
> +++ b/fs/sync.c
> @@ -23,7 +23,7 @@
>   */
>  static void do_sync(unsigned long wait)
>  {
> -	wakeup_pdflush(0);
> +	wakeup_flusher_threads(0);
>  	sync_inodes(0);		/* All mappings, inodes and their blockdevs */
>  	vfs_dq_sync(NULL);
>  	sync_supers();		/* Write the superblocks */
> diff --git a/include/linux/backing-dev.h b/include/linux/backing-dev.h
> index 86668c7..a848eea 100644
> --- a/include/linux/backing-dev.h
> +++ b/include/linux/backing-dev.h
> @@ -24,6 +24,7 @@ struct dentry;
>   */
>  enum bdi_state {
>  	BDI_pdflush,		/* A pdflush thread is working this device */
> +	BDI_pending,		/* On its way to being activated */
>  	BDI_async_congested,	/* The async (write) queue is getting full */
>  	BDI_sync_congested,	/* The sync queue is getting full */
>  	BDI_unused,		/* Available bits start here */
> @@ -39,8 +40,14 @@ enum bdi_stat_item {
>  
>  #define BDI_STAT_BATCH (8*(1+ilog2(nr_cpu_ids)))
>  
> +struct bdi_writeback_arg {
> +	unsigned long nr_pages;
> +	struct super_block *sb;
> +};
> +
>  struct backing_dev_info {
>  	struct list_head bdi_list;
> +	struct rcu_head rcu_head;
>  
>  	unsigned long ra_pages;	/* max readahead in PAGE_CACHE_SIZE units */
>  	unsigned long state;	/* Always use atomic bitops on this */
> @@ -60,6 +67,9 @@ struct backing_dev_info {
>  
>  	struct device *dev;
>  
> +	struct task_struct	*task;		/* writeback task */
> +	wait_queue_head_t	wait;
> +	struct bdi_writeback_arg wb_arg;	/* protected by BDI_pdflush */
>  	struct list_head	b_dirty;	/* dirty inodes */
>  	struct list_head	b_io;		/* parked for writeback */
>  	struct list_head	b_more_io;	/* parked for more writeback */
> @@ -77,10 +87,22 @@ int bdi_register(struct backing_dev_info *bdi, struct device *parent,
>  		const char *fmt, ...);
>  int bdi_register_dev(struct backing_dev_info *bdi, dev_t dev);
>  void bdi_unregister(struct backing_dev_info *bdi);
> +int bdi_start_writeback(struct backing_dev_info *bdi, struct super_block *sb,
> +			 long nr_pages);
> +int bdi_writeback_task(struct backing_dev_info *bdi);
> +void bdi_writeback_all(struct super_block *sb, long nr_pages);
> +void bdi_add_default_flusher_task(struct backing_dev_info *bdi);
>  
>  extern spinlock_t bdi_lock;
>  extern struct list_head bdi_list;
>  
> +static inline int bdi_has_dirty_io(struct backing_dev_info *bdi)
> +{
> +	return !list_empty(&bdi->b_dirty) ||
> +	       !list_empty(&bdi->b_io) ||
> +	       !list_empty(&bdi->b_more_io);
> +}
> +
>  static inline void __add_bdi_stat(struct backing_dev_info *bdi,
>  		enum bdi_stat_item item, s64 amount)
>  {
> @@ -196,6 +218,7 @@ int bdi_set_max_ratio(struct backing_dev_info *bdi, unsigned int max_ratio);
>  #define BDI_CAP_EXEC_MAP	0x00000040
>  #define BDI_CAP_NO_ACCT_WB	0x00000080
>  #define BDI_CAP_SWAP_BACKED	0x00000100
> +#define BDI_CAP_FLUSH_FORKER	0x00000200
>  
>  #define BDI_CAP_VMFLAGS \
>  	(BDI_CAP_READ_MAP | BDI_CAP_WRITE_MAP | BDI_CAP_EXEC_MAP)
> @@ -265,6 +288,11 @@ static inline bool bdi_cap_swap_backed(struct backing_dev_info *bdi)
>  	return bdi->capabilities & BDI_CAP_SWAP_BACKED;
>  }
>  
> +static inline bool bdi_cap_flush_forker(struct backing_dev_info *bdi)
> +{
> +	return bdi->capabilities & BDI_CAP_FLUSH_FORKER;
> +}
> +
>  static inline bool mapping_cap_writeback_dirty(struct address_space *mapping)
>  {
>  	return bdi_cap_writeback_dirty(mapping->backing_dev_info);
> diff --git a/include/linux/fs.h b/include/linux/fs.h
> index 6b475d4..ecdc544 100644
> --- a/include/linux/fs.h
> +++ b/include/linux/fs.h
> @@ -2063,6 +2063,8 @@ extern int invalidate_inode_pages2_range(struct address_space *mapping,
>  					 pgoff_t start, pgoff_t end);
>  extern void generic_sync_sb_inodes(struct super_block *sb,
>  				struct writeback_control *wbc);
> +extern void generic_sync_bdi_inodes(struct super_block *sb,
> +				struct writeback_control *);
>  extern int write_inode_now(struct inode *, int);
>  extern int filemap_fdatawrite(struct address_space *);
>  extern int filemap_flush(struct address_space *);
> @@ -2180,7 +2182,6 @@ extern int bdev_read_only(struct block_device *);
>  extern int set_blocksize(struct block_device *, int);
>  extern int sb_set_blocksize(struct super_block *, int);
>  extern int sb_min_blocksize(struct super_block *, int);
> -extern int sb_has_dirty_inodes(struct super_block *);
>  
>  extern int generic_file_mmap(struct file *, struct vm_area_struct *);
>  extern int generic_file_readonly_mmap(struct file *, struct vm_area_struct *);
> diff --git a/include/linux/writeback.h b/include/linux/writeback.h
> index 9344547..a8e9f78 100644
> --- a/include/linux/writeback.h
> +++ b/include/linux/writeback.h
> @@ -99,7 +99,7 @@ static inline void inode_sync_wait(struct inode *inode)
>  /*
>   * mm/page-writeback.c
>   */
> -int wakeup_pdflush(long nr_pages);
> +void wakeup_flusher_threads(long nr_pages);
>  void laptop_io_completion(void);
>  void laptop_sync_completion(void);
>  void throttle_vm_writeout(gfp_t gfp_mask);
> diff --git a/mm/backing-dev.c b/mm/backing-dev.c
> index 883ee8a..c759449 100644
> --- a/mm/backing-dev.c
> +++ b/mm/backing-dev.c
> @@ -1,8 +1,11 @@
>  
>  #include <linux/wait.h>
>  #include <linux/backing-dev.h>
> +#include <linux/kthread.h>
> +#include <linux/freezer.h>
>  #include <linux/fs.h>
>  #include <linux/pagemap.h>
> +#include <linux/mm.h>
>  #include <linux/sched.h>
>  #include <linux/module.h>
>  #include <linux/writeback.h>
> @@ -16,7 +19,7 @@ EXPORT_SYMBOL(default_unplug_io_fn);
>  struct backing_dev_info default_backing_dev_info = {
>  	.ra_pages	= VM_MAX_READAHEAD * 1024 / PAGE_CACHE_SIZE,
>  	.state		= 0,
> -	.capabilities	= BDI_CAP_MAP_COPY,
> +	.capabilities	= BDI_CAP_MAP_COPY | BDI_CAP_FLUSH_FORKER,
>  	.unplug_io_fn	= default_unplug_io_fn,
>  };
>  EXPORT_SYMBOL_GPL(default_backing_dev_info);
> @@ -24,6 +27,7 @@ EXPORT_SYMBOL_GPL(default_backing_dev_info);
>  static struct class *bdi_class;
>  DEFINE_SPINLOCK(bdi_lock);
>  LIST_HEAD(bdi_list);
> +LIST_HEAD(bdi_pending_list);
>  
>  #ifdef CONFIG_DEBUG_FS
>  #include <linux/debugfs.h>
> @@ -195,6 +199,146 @@ static int __init default_bdi_init(void)
>  }
>  subsys_initcall(default_bdi_init);
>  
> +static int bdi_start_fn(void *ptr)
> +{
> +	struct backing_dev_info *bdi = ptr;
> +	struct task_struct *tsk = current;
> +
> +	/*
> +	 * Add us to the active bdi_list
> +	 */
> +	spin_lock_bh(&bdi_lock);
> +	list_add_rcu(&bdi->bdi_list, &bdi_list);
> +	spin_unlock_bh(&bdi_lock);
> +
> +	tsk->flags |= PF_FLUSHER | PF_SWAPWRITE;
> +	set_freezable();
> +
> +	/*
> +	 * Our parent may run at a different priority, just set us to normal
> +	 */
> +	set_user_nice(tsk, 0);
> +
> +	/*
> +	 * Clear pending bit and wakeup anybody waiting to tear us down
> +	 */
> +	clear_bit(BDI_pending, &bdi->state);
> +	wake_up_bit(&bdi->state, BDI_pending);
> +
> +	return bdi_writeback_task(bdi);
> +}
> +
> +static int bdi_forker_task(void *ptr)
> +{
> +	struct backing_dev_info *bdi, *me = ptr;
> +
> +	for (;;) {
> +		DEFINE_WAIT(wait);
> +
> +		/*
> +		 * Should never trigger on the default bdi
> +		 */
> +		WARN_ON(bdi_has_dirty_io(me));
> +
> +		prepare_to_wait(&me->wait, &wait, TASK_INTERRUPTIBLE);
> +		smp_mb();
  Wouldn't the code look simpler like this:
	spin_lock_bh(&bdi_lock);
	if (list_empty(&bdi_pending_list)) {
		spin_unlock_bh(&bdi_lock);
		schedule();
	} else {
		bdi = list_entry(bdi_pending_list.next,
				 struct backing_dev_info, bdi_list);
		list_del_init(&bdi->bdi_list);
		spin_unlock_bh(&bdi_lock);
		if (bdi->task)
			continue;
		... do work ...
	}

> +		if (list_empty(&bdi_pending_list))
> +			schedule();
> +		else {
> +repeat:
> +			bdi = NULL;
> +
> +			spin_lock_bh(&bdi_lock);
> +			if (!list_empty(&bdi_pending_list)) {
> +				bdi = list_entry(bdi_pending_list.next,
> +						 struct backing_dev_info,
> +						 bdi_list);
> +				list_del_init(&bdi->bdi_list);
> +			}
> +			spin_unlock_bh(&bdi_lock);
> +
> +			/*
> +			 * If no bdi or bdi already got setup, continue
> +			 */
> +			if (!bdi || bdi->task)
> +				continue;
> +
> +			bdi->task = kthread_run(bdi_start_fn, bdi, "bdi-%s",
> +						dev_name(bdi->dev));
> +			/*
> +			 * If task creation fails, then readd the bdi to
> +			 * the pending list and force writeout of the bdi
> +			 * from this forker thread. That will free some memory
> +			 * and we can try again.
> +			 */
> +			if (!bdi->task) {
> +				struct writeback_control wbc = {
> +					.bdi			= bdi,
> +					.sync_mode		= WB_SYNC_NONE,
> +					.older_than_this	= NULL,
> +					.range_cyclic		= 1,
> +				};
> +
> +				/*
> +				 * Add this 'bdi' to the back, so we get
> +				 * a chance to flush other bdi's to free
> +				 * memory.
> +				 */
> +				spin_lock_bh(&bdi_lock);
> +				list_add_tail(&bdi->bdi_list,
> +						&bdi_pending_list);
> +				spin_unlock_bh(&bdi_lock);
> +
> +				wbc.nr_to_write = 1024;
> +				generic_sync_bdi_inodes(NULL, &wbc);
> +				goto repeat;
> +			}
> +		}
> +
> +		finish_wait(&me->wait, &wait);
> +	}
> +
> +	return 0;
> +}
> +
> +/*
> + * Grace period has now ended, init bdi->bdi_list and add us to the
> + * list of bdi's that are pending for task creation. Wake up
> + * bdi_forker_task() to finish the job and add us back to the
> + * active bdi_list.
> + */
> +static void bdi_add_to_pending(struct rcu_head *head)
> +{
> +	struct backing_dev_info *bdi;
> +
> +	bdi = container_of(head, struct backing_dev_info, rcu_head);
> +	INIT_LIST_HEAD(&bdi->bdi_list);
> +
> +	spin_lock(&bdi_lock);
> +	list_add_tail(&bdi->bdi_list, &bdi_pending_list);
> +	spin_unlock(&bdi_lock);
> +
> +	wake_up(&default_backing_dev_info.wait);
> +}
> +
> +void bdi_add_default_flusher_task(struct backing_dev_info *bdi)
> +{
> +	if (test_and_set_bit(BDI_pending, &bdi->state))
> +		return;
> +
> +	spin_lock_bh(&bdi_lock);
> +	list_del_rcu(&bdi->bdi_list);
> +	spin_unlock_bh(&bdi_lock);
> +
> +	/*
> +	 * We need to wait for the current grace period to end,
> +	 * in case others were browsing the bdi_list as well.
> +	 * So defer the adding and wakeup to after the RCU
> +	 * grace period has ended.
> +	 */
> +	call_rcu(&bdi->rcu_head, bdi_add_to_pending);
> +}
> +
>  int bdi_register(struct backing_dev_info *bdi, struct device *parent,
>  		const char *fmt, ...)
>  {
> @@ -213,9 +357,23 @@ int bdi_register(struct backing_dev_info *bdi, struct device *parent,
>  		goto exit;
>  	}
>  
> -	spin_lock(&bdi_lock);
> -	list_add_tail_rcu(&bdi->bdi_list, &bdi_list);
> -	spin_unlock(&bdi_lock);
> +	/*
> +	 * Just start the forker thread for our default backing_dev_info,
> +	 * and add other bdi's to the list. They will get a thread created
> +	 * on-demand when they need it.
> +	 */
> +	if (bdi_cap_flush_forker(bdi)) {
> +		bdi->task = kthread_run(bdi_forker_task, bdi, "bdi-%s",
> +						dev_name(dev));
> +		if (!bdi->task) {
> +			ret = -ENOMEM;
> +			goto exit;
> +		}
> +	} else {
> +		spin_lock_bh(&bdi_lock);
> +		list_add_tail_rcu(&bdi->bdi_list, &bdi_list);
> +		spin_unlock_bh(&bdi_lock);
> +	}
>  
>  	bdi->dev = dev;
>  	bdi_debug_register(bdi, dev_name(dev));
> @@ -231,11 +389,22 @@ int bdi_register_dev(struct backing_dev_info *bdi, dev_t dev)
>  }
>  EXPORT_SYMBOL(bdi_register_dev);
>  
> -static void bdi_remove_from_list(struct backing_dev_info *bdi)
> +static int sched_wait(void *word)
>  {
> -	spin_lock(&bdi_lock);
> +	schedule();
> +	return 0;
> +}
> +
> +static void bdi_wb_shutdown(struct backing_dev_info *bdi)
> +{
> +	/*
> +	 * If setup is pending, wait for that to complete first
> +	 */
> +	wait_on_bit(&bdi->state, BDI_pending, sched_wait, TASK_UNINTERRUPTIBLE);
> +
> +	spin_lock_bh(&bdi_lock);
>  	list_del_rcu(&bdi->bdi_list);
> -	spin_unlock(&bdi_lock);
> +	spin_unlock_bh(&bdi_lock);
>  
>  	/*
>  	 * In case the bdi is freed right after unregister, we need to
> @@ -247,7 +416,13 @@ static void bdi_remove_from_list(struct backing_dev_info *bdi)
>  void bdi_unregister(struct backing_dev_info *bdi)
>  {
>  	if (bdi->dev) {
> -		bdi_remove_from_list(bdi);
> +		if (!bdi_cap_flush_forker(bdi)) {
> +			bdi_wb_shutdown(bdi);
> +			if (bdi->task) {
> +				kthread_stop(bdi->task);
> +				bdi->task = NULL;
> +			}
> +		}
>  		bdi_debug_unregister(bdi);
>  		device_unregister(bdi->dev);
>  		bdi->dev = NULL;
> @@ -257,14 +432,15 @@ EXPORT_SYMBOL(bdi_unregister);
>  
>  int bdi_init(struct backing_dev_info *bdi)
>  {
> -	int i;
> -	int err;
> +	int i, err;
>  
> +	INIT_RCU_HEAD(&bdi->rcu_head);
>  	bdi->dev = NULL;
>  
>  	bdi->min_ratio = 0;
>  	bdi->max_ratio = 100;
>  	bdi->max_prop_frac = PROP_FRAC_BASE;
> +	init_waitqueue_head(&bdi->wait);
>  	INIT_LIST_HEAD(&bdi->bdi_list);
>  	INIT_LIST_HEAD(&bdi->b_io);
>  	INIT_LIST_HEAD(&bdi->b_dirty);
> @@ -283,8 +459,6 @@ int bdi_init(struct backing_dev_info *bdi)
>  err:
>  		while (i--)
>  			percpu_counter_destroy(&bdi->bdi_stat[i]);
> -
> -		bdi_remove_from_list(bdi);
>  	}
>  
>  	return err;
> diff --git a/mm/page-writeback.c b/mm/page-writeback.c
> index 2296ff4..76269f8 100644
> --- a/mm/page-writeback.c
> +++ b/mm/page-writeback.c
> @@ -36,15 +36,6 @@
>  #include <linux/pagevec.h>
>  
>  /*
> - * The maximum number of pages to writeout in a single bdflush/kupdate
> - * operation.  We do this so we don't hold I_SYNC against an inode for
> - * enormous amounts of time, which would block a userspace task which has
> - * been forced to throttle against that inode.  Also, the code reevaluates
> - * the dirty each time it has written this many pages.
> - */
> -#define MAX_WRITEBACK_PAGES	1024
> -
> -/*
>   * After a CPU has dirtied this many pages, balance_dirty_pages_ratelimited
>   * will look to see if it needs to force writeback or throttling.
>   */
> @@ -117,8 +108,6 @@ EXPORT_SYMBOL(laptop_mode);
>  /* End of sysctl-exported parameters */
>  
>  
> -static void background_writeout(unsigned long _min_pages);
> -
>  /*
>   * Scale the writeback cache size proportional to the relative writeout speeds.
>   *
> @@ -541,7 +530,7 @@ static void balance_dirty_pages(struct address_space *mapping)
>  		 * been flushed to permanent storage.
>  		 */
>  		if (bdi_nr_reclaimable) {
> -			writeback_inodes(&wbc);
> +			generic_sync_bdi_inodes(NULL, &wbc);
>  			pages_written += write_chunk - wbc.nr_to_write;
>  			get_dirty_limits(&background_thresh, &dirty_thresh,
>  				       &bdi_thresh, bdi);
> @@ -592,7 +581,7 @@ static void balance_dirty_pages(struct address_space *mapping)
>  			(!laptop_mode && (global_page_state(NR_FILE_DIRTY)
>  					  + global_page_state(NR_UNSTABLE_NFS)
>  					  > background_thresh)))
> -		pdflush_operation(background_writeout, 0);
> +		bdi_start_writeback(bdi, NULL, 0);
>  }
>  
>  void set_page_dirty_balance(struct page *page, int page_mkwrite)
> @@ -677,152 +666,36 @@ void throttle_vm_writeout(gfp_t gfp_mask)
>  }
>  
>  /*
> - * writeback at least _min_pages, and keep writing until the amount of dirty
> - * memory is less than the background threshold, or until we're all clean.
> - */
> -static void background_writeout(unsigned long _min_pages)
> -{
> -	long min_pages = _min_pages;
> -	struct writeback_control wbc = {
> -		.bdi		= NULL,
> -		.sync_mode	= WB_SYNC_NONE,
> -		.older_than_this = NULL,
> -		.nr_to_write	= 0,
> -		.nonblocking	= 1,
> -		.range_cyclic	= 1,
> -	};
> -
> -	for ( ; ; ) {
> -		unsigned long background_thresh;
> -		unsigned long dirty_thresh;
> -
> -		get_dirty_limits(&background_thresh, &dirty_thresh, NULL, NULL);
> -		if (global_page_state(NR_FILE_DIRTY) +
> -			global_page_state(NR_UNSTABLE_NFS) < background_thresh
> -				&& min_pages <= 0)
> -			break;
> -		wbc.more_io = 0;
> -		wbc.encountered_congestion = 0;
> -		wbc.nr_to_write = MAX_WRITEBACK_PAGES;
> -		wbc.pages_skipped = 0;
> -		writeback_inodes(&wbc);
> -		min_pages -= MAX_WRITEBACK_PAGES - wbc.nr_to_write;
> -		if (wbc.nr_to_write > 0 || wbc.pages_skipped > 0) {
> -			/* Wrote less than expected */
> -			if (wbc.encountered_congestion || wbc.more_io)
> -				congestion_wait(WRITE, HZ/10);
> -			else
> -				break;
> -		}
> -	}
> -}
> -
> -/*
>   * Start writeback of `nr_pages' pages.  If `nr_pages' is zero, write back
>   * the whole world.  Returns 0 if a pdflush thread was dispatched.  Returns
>   * -1 if all pdflush threads were busy.
>   */
> -int wakeup_pdflush(long nr_pages)
> +void wakeup_flusher_threads(long nr_pages)
>  {
>  	if (nr_pages == 0)
>  		nr_pages = global_page_state(NR_FILE_DIRTY) +
>  				global_page_state(NR_UNSTABLE_NFS);
> -	return pdflush_operation(background_writeout, nr_pages);
> +	bdi_writeback_all(NULL, nr_pages);
> +	return;
>  }
>  
> -static void wb_timer_fn(unsigned long unused);
>  static void laptop_timer_fn(unsigned long unused);
>  
> -static DEFINE_TIMER(wb_timer, wb_timer_fn, 0, 0);
>  static DEFINE_TIMER(laptop_mode_wb_timer, laptop_timer_fn, 0, 0);
>  
>  /*
> - * Periodic writeback of "old" data.
> - *
> - * Define "old": the first time one of an inode's pages is dirtied, we mark the
> - * dirtying-time in the inode's address_space.  So this periodic writeback code
> - * just walks the superblock inode list, writing back any inodes which are
> - * older than a specific point in time.
> - *
> - * Try to run once per dirty_writeback_interval.  But if a writeback event
> - * takes longer than a dirty_writeback_interval interval, then leave a
> - * one-second gap.
> - *
> - * older_than_this takes precedence over nr_to_write.  So we'll only write back
> - * all dirty pages if they are all attached to "old" mappings.
> - */
> -static void wb_kupdate(unsigned long arg)
> -{
> -	unsigned long oldest_jif;
> -	unsigned long start_jif;
> -	unsigned long next_jif;
> -	long nr_to_write;
> -	struct writeback_control wbc = {
> -		.bdi		= NULL,
> -		.sync_mode	= WB_SYNC_NONE,
> -		.older_than_this = &oldest_jif,
> -		.nr_to_write	= 0,
> -		.nonblocking	= 1,
> -		.for_kupdate	= 1,
> -		.range_cyclic	= 1,
> -	};
> -
> -	sync_supers();
> -
> -	oldest_jif = jiffies - msecs_to_jiffies(dirty_expire_interval * 10);
> -	start_jif = jiffies;
> -	next_jif = start_jif + msecs_to_jiffies(dirty_writeback_interval * 10);
> -	nr_to_write = global_page_state(NR_FILE_DIRTY) +
> -			global_page_state(NR_UNSTABLE_NFS) +
> -			(inodes_stat.nr_inodes - inodes_stat.nr_unused);
> -	while (nr_to_write > 0) {
> -		wbc.more_io = 0;
> -		wbc.encountered_congestion = 0;
> -		wbc.nr_to_write = MAX_WRITEBACK_PAGES;
> -		writeback_inodes(&wbc);
> -		if (wbc.nr_to_write > 0) {
> -			if (wbc.encountered_congestion || wbc.more_io)
> -				congestion_wait(WRITE, HZ/10);
> -			else
> -				break;	/* All the old data is written */
> -		}
> -		nr_to_write -= MAX_WRITEBACK_PAGES - wbc.nr_to_write;
> -	}
> -	if (time_before(next_jif, jiffies + HZ))
> -		next_jif = jiffies + HZ;
> -	if (dirty_writeback_interval)
> -		mod_timer(&wb_timer, next_jif);
> -}
> -
> -/*
>   * sysctl handler for /proc/sys/vm/dirty_writeback_centisecs
>   */
>  int dirty_writeback_centisecs_handler(ctl_table *table, int write,
>  	struct file *file, void __user *buffer, size_t *length, loff_t *ppos)
>  {
>  	proc_dointvec(table, write, file, buffer, length, ppos);
> -	if (dirty_writeback_interval)
> -		mod_timer(&wb_timer, jiffies +
> -			msecs_to_jiffies(dirty_writeback_interval * 10));
> -	else
> -		del_timer(&wb_timer);
>  	return 0;
>  }
>  
> -static void wb_timer_fn(unsigned long unused)
> -{
> -	if (pdflush_operation(wb_kupdate, 0) < 0)
> -		mod_timer(&wb_timer, jiffies + HZ); /* delay 1 second */
> -}
> -
> -static void laptop_flush(unsigned long unused)
> -{
> -	sys_sync();
> -}
> -
>  static void laptop_timer_fn(unsigned long unused)
>  {
> -	pdflush_operation(laptop_flush, 0);
> +	wakeup_flusher_threads(0);
>  }
>  
>  /*
> @@ -905,8 +778,6 @@ void __init page_writeback_init(void)
>  {
>  	int shift;
>  
> -	mod_timer(&wb_timer,
> -		  jiffies + msecs_to_jiffies(dirty_writeback_interval * 10));
>  	writeback_set_ratelimit();
>  	register_cpu_notifier(&ratelimit_nb);
>  
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 5fa3eda..e37fd38 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1654,7 +1654,7 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
>  		 */
>  		if (total_scanned > sc->swap_cluster_max +
>  					sc->swap_cluster_max / 2) {
> -			wakeup_pdflush(laptop_mode ? 0 : total_scanned);
> +			wakeup_flusher_threads(laptop_mode ? 0 : total_scanned);
>  			sc->may_writepage = 1;
>  		}
>  
> -- 
> 1.6.3.rc0.1.gf800
> 
-- 
Jan Kara <jack@suse.cz>
SUSE Labs, CR

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 0/11] Per-bdi writeback flusher threads #4
  2009-05-20  9:25             ` Jens Axboe
@ 2009-05-20 11:19               ` Jens Axboe
  2009-05-21  6:33                 ` Zhang, Yanmin
  0 siblings, 1 reply; 57+ messages in thread
From: Jens Axboe @ 2009-05-20 11:19 UTC (permalink / raw
  To: Zhang, Yanmin
  Cc: linux-kernel, linux-fsdevel, chris.mason, david, hch, akpm, jack

On Wed, May 20 2009, Jens Axboe wrote:
> On Wed, May 20 2009, Zhang, Yanmin wrote:
> > On Wed, 2009-05-20 at 10:54 +0200, Jens Axboe wrote:
> > > On Wed, May 20 2009, Jens Axboe wrote:
> > > > On Wed, May 20 2009, Zhang, Yanmin wrote:
> > > > > On Tue, 2009-05-19 at 08:20 +0200, Jens Axboe wrote:
> > > > > > On Tue, May 19 2009, Zhang, Yanmin wrote:
> > > > > > > On Mon, 2009-05-18 at 14:19 +0200, Jens Axboe wrote:
> > > > > > > > Hi,
> > > > > > > >
> > > > > > > > This is the fourth version of this patchset. Changes since v3:
> > > > > > > >
> > > > > > > > - Dropped a prep patch, it has been included in mainline since.
> > > > > > > >
> > > > > > > > - Add a work-to-do list to the bdi. This is struct bdi_work. Each
> > > > > > > >   wb thread will notice and execute work on bdi->work_list. The arguments
> > > > > > > >   are which sb (or NULL for all) to flush and how many pages to flush.
> > > > > > > >
> > > > > > > > - Fix a bug where not all bdi's would end up on the bdi_list, so potentially
> > > > > > > >   some data would not be flushed.
> > > > > > > >
> > > > > > > > - Make wb_kupdated() pass on wbc->older_than_this so we maintain the same
> > > > > > > >   behaviour for kupdated flushes.
> > > > > > > >
> > > > > > > > - Have the wb thread flush first before sleeping, to avoid losing the
> > > > > > > >   first flush on lazy register.
> > > > > > > >
> > > > > > > > - Rebase to newer kernels.
> > > > > 
> > > > > > I'm attaching two patches - apply #1 to -rc6, and then #2 is a roll-up
> > > > > > of the patch series that you can apply next.
> > > > > Jens,
> > > > > 
> > > > > I ran into 2 issues with kernel 2.6.30-rc6+BDI_Flusher_V4. Below is one.
> > > > > 
> > > > > Tue May 19 00:00:00 CST 2009
> > > > > BUG: unable to handle kernel NULL pointer dereference at 00000000000001d8
> > > > > IP: [<ffffffff803f3c4c>] generic_make_request+0x10a/0x384
> > > > > PGD 0
> > > > > Oops: 0000 [#1] SMP
> > > > > last sysfs file: /sys/block/sdb/stat
> > > > > CPU 0
> > > > > Modules linked in: igb
> > > > > Pid: 1445, comm: bdi-8:16 Not tainted 2.6.30-rc6-bdiflusherv4 #1 X8DTN
> > > > > RIP: 0010:[<ffffffff803f3c4c>]  [<ffffffff803f3c4c>] generic_make_request+0x10a/0x384
> > > > > RSP: 0018:ffff8800bd04da60  EFLAGS: 00010206
> > > > > RAX: 0000000000000000 RBX: ffff8801be45d500 RCX: 00000000038a0df8
> > > > > RDX: 0000000000000008 RSI: 0000000000000576 RDI: ffff8801bf408680
> > > > > RBP: ffff8801be45d500 R08: ffffe20001ee8140 R09: ffff8800bd04da98
> > > > > R10: 0000000000000000 R11: ffff8800bd72eb40 R12: ffff8801be45d500
> > > > > R13: ffff88005f51f310 R14: 0000000000000008 R15: ffff8800b15a5458
> > > > > FS:  0000000000000000(0000) GS:ffffc20000000000(0000) knlGS:0000000000000000
> > > > > CS:  0010 DS: 0018 ES: 0018 CR0: 000000008005003b
> > > > > CR2: 00000000000001d8 CR3: 0000000000201000 CR4: 00000000000006e0
> > > > > DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> > > > > DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
> > > > > Process bdi-8:16 (pid: 1445, threadinfo ffff8800bd04c000, task ffff8800bd1b75f0)
> > > > > Stack:
> > > > >  0000000000000008 ffffffff8027a613 00000000848dc000 ffffffffffffffff
> > > > >  ffff8800a8190f50 ffffffff00000012 ffff8800a81938e0 ffffc2000000001b
> > > > >  0000000000000000 0000000000000000 ffffe200026f9c30 0000000000000000
> > > > > Call Trace:
> > > > >  [<ffffffff8027a613>] ? mempool_alloc+0x59/0x10f
> > > > >  [<ffffffff803f3f70>] ? submit_bio+0xaa/0xb1
> > > > >  [<ffffffff802c6a3f>] ? submit_bh+0xe3/0x103
> > > > >  [<ffffffff802c92ea>] ? __block_write_full_page+0x1fb/0x2f2
> > > > >  [<ffffffff802c7d6a>] ? end_buffer_async_write+0x0/0xfb
> > > > >  [<ffffffff8027e8d2>] ? __writepage+0xa/0x25
> > > > >  [<ffffffff8027f036>] ? write_cache_pages+0x21c/0x338
> > > > >  [<ffffffff8027e8c8>] ? __writepage+0x0/0x25
> > > > >  [<ffffffff8027f195>] ? do_writepages+0x27/0x2d
> > > > >  [<ffffffff802c22c1>] ? __writeback_single_inode+0x159/0x2b3
> > > > >  [<ffffffff8071e52a>] ? thread_return+0x3e/0xaa
> > > > >  [<ffffffff8027f267>] ? determine_dirtyable_memory+0xd/0x1d
> > > > >  [<ffffffff8027f2dd>] ? get_dirty_limits+0x1d/0x255
> > > > >  [<ffffffff802c27bc>] ? generic_sync_wb_inodes+0x1b4/0x220
> > > > >  [<ffffffff802c3130>] ? wb_do_writeback+0x16c/0x215
> > > > >  [<ffffffff802c323e>] ? bdi_writeback_task+0x65/0x10d
> > > > >  [<ffffffff8024cc06>] ? autoremove_wake_function+0x0/0x2e
> > > > >  [<ffffffff8024cb27>] ? bit_waitqueue+0x10/0xa0
> > > > >  [<ffffffff80289257>] ? bdi_start_fn+0x0/0xba
> > > > >  [<ffffffff802892c6>] ? bdi_start_fn+0x6f/0xba
> > > > >  [<ffffffff8024c860>] ? kthread+0x54/0x80
> > > > >  [<ffffffff8020c97a>] ? child_rip+0xa/0x20
> > > > >  [<ffffffff8024c80c>] ? kthread+0x0/0x80
> > > > >  [<ffffffff8020c970>] ? child_rip+0x0/0x20
> > > > > 
> > 
> > 
> > > 
> > > I found one issue yesterday and one today that could cause issues, not
> > > sure it would explain this one. But at least it's worth a try, if it's
> > > reproducible.
> > I just reproduced it a moment ago manually.
> > 
> > [global]
> > direct=0
> > ioengine=mmap
> > iodepth=256
> > iodepth_batch=32
> > size=4G
> > bs=4k
> > pre_read=1
> > overwrite=1
> > numjobs=1
> > loops=5
> > runtime=600
> > group_reporting
> > directory=/mnt/stp/fiodata
> > [job_group0_sub0]
> > startdelay=0
> > rw=randwrite
> > filename=data0/f1:data0/f2
> > 
> > 
> > The fio binary includes my pre_read patch, which reads the files into memory up front.
> > 
> > Before starting the second test, I dropped the caches with:
> > # echo 3 > /proc/sys/vm/drop_caches
> > 
> > I suspect drop_caches triggers it.
> 
> Thanks, will try this. What filesystem and mount options did you use?

No luck reproducing so far. In other news, I have finally merged your
fio pre_read patch :-)

I've run it here many times; it works fine with the current writeback
branch. Since I did the runs anyway, I also compared mainline and the
writeback branch on this test. Each test was run 10 times; averages are
below. Throughput deviated by less than 1MB/sec, so the results are very
stable. CPU usage percentages were always within 0.5%.

Kernel          Throughput       usr         sys        disk util
-----------------------------------------------------------------
writeback       175MB/sec        17.55%      43.04%     97.80%
vanilla         147MB/sec        13.44%      47.33%     85.98%

The results for this test are particularly interesting, since it's very
heavy on the writeback side. The pdflush/bdi threads were pretty busy. User
time is up (even when corrected for the higher throughput), but system time
is down a lot. Vanilla isn't close to keeping the disk busy; with the
writeback patches we are basically there (100% would be pretty much
impossible to reach).

Please try with the patches I sent. If you still see problems, we need
to look more closely into that.

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 02/11] writeback: switch to per-bdi threads for flushing data
  2009-05-20 11:18   ` Jan Kara
@ 2009-05-20 11:32     ` Jens Axboe
  2009-05-20 12:11       ` Jan Kara
  0 siblings, 1 reply; 57+ messages in thread
From: Jens Axboe @ 2009-05-20 11:32 UTC (permalink / raw
  To: Jan Kara
  Cc: linux-kernel, linux-fsdevel, chris.mason, david, hch, akpm,
	yanmin_zhang

On Wed, May 20 2009, Jan Kara wrote:
>   Hi Jens,
> 
>   a few comments here. Mainly, I still don't think sys_sync() is working
> right - see comments below.

Thanks! I took the liberty of killing some of the code in between, to
make it easier to see.

> > +void bdi_writeback_all(struct super_block *sb, long nr_pages)
> > +{
> > +	struct backing_dev_info *bdi;
> > +
> > +	rcu_read_lock();
> > +
> > +restart:
> > +	list_for_each_entry_rcu(bdi, &bdi_list, bdi_list) {
>   Isn't the RCU list here a bit of overengineering? AFAICS we use the list
> only here and, if I'm grepping right, generic_sync_sb_inodes() is currently
> only used for data integrity sync (after your patches) from fs-writeback.c
> and by UBIFS to do the equivalent of writeback_inodes(). So a simple
> spinlock guarding the list should be just fine. Or am I missing something?

Sure, we could. But it's really not that much of a difference
implementation-wise.

> > @@ -591,13 +711,10 @@ static void generic_sync_bdi_inodes(struct backing_dev_info *bdi,
> >  void generic_sync_sb_inodes(struct super_block *sb,
> >  				struct writeback_control *wbc)
> >  {
> > -	const int is_blkdev_sb = sb_is_blkdev_sb(sb);
> > -	struct backing_dev_info *bdi;
> > -
> > -	rcu_read_lock();
> > -	list_for_each_entry_rcu(bdi, &bdi_list, bdi_list)
> > -		generic_sync_bdi_inodes(bdi, wbc, sb, is_blkdev_sb);
> > -	rcu_read_unlock();
> > +	if (wbc->bdi)
> > +		bdi_start_writeback(wbc->bdi, sb, 0);
> > +	else
> > +		bdi_writeback_all(sb, 0);
>   It does not work like this. The way you call writeback here, you never
> end up calling __writeback_single_inode() with WB_SYNC_ALL set in wbc (your
> writeback routines always call inode writeback with WB_SYNC_NONE). And
> that is required for proper data integrity sync... So you have to somehow
> propagate this down to the writeback thread.

Good point, we need to pass down sync mode too. Not a big problem, we
can just add that to bdi_work as well.
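
Something like this, perhaps (the bdi_work layout here is illustrative,
not lifted from the actual series):

struct bdi_work {
    struct list_head list;                  /* on bdi->work_list */
    struct super_block *sb;                 /* NULL == all supers on bdi */
    unsigned long nr_pages;
    enum writeback_sync_modes sync_mode;    /* the new field */
};

The wb thread then copies sync_mode into its writeback_control instead of
assuming WB_SYNC_NONE.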

>   Alternatively, what probably makes a lot of sense is to separate the data
> integrity sync path from plain data writeback. In the first case we care
> more about correctness; in the second we care more about performance and
> overall throughput.

Yep, agreed, that would clean it up as well. I'll include that in the next
revision, which I think I'll post on Friday.
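
Roughly like this, perhaps (only the single-bdi integrity case is shown;
the existing WB_SYNC_ALL wait-on-inodes loop would still follow):

    if (wbc->sync_mode == WB_SYNC_ALL && wbc->bdi) {
        /* data integrity: write synchronously, in the caller's context */
        generic_sync_bdi_inodes(sb, wbc);
    } else if (wbc->bdi)
        bdi_start_writeback(wbc->bdi, sb, 0);
    else
        bdi_writeback_all(sb, 0);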

>   BTW your patch also significantly changes one thing: with your patch,
> data integrity sync is done by flusher threads while previously it was done
> from the context of the thread calling sync(). I'm undecided whether it is
> a good or bad thing but it definitely deserves a comment in the changelog.

I'll look at the implications of this again, perhaps it'll be better to
just switch it back for now.

> > +static int bdi_forker_task(void *ptr)
> > +{
> > +	struct backing_dev_info *bdi, *me = ptr;
> > +
> > +	for (;;) {
> > +		DEFINE_WAIT(wait);
> > +
> > +		/*
> > +		 * Should never trigger on the default bdi
> > +		 */
> > +		WARN_ON(bdi_has_dirty_io(me));
> > +
> > +		prepare_to_wait(&me->wait, &wait, TASK_INTERRUPTIBLE);
> > +		smp_mb();
>   Wouldn't the code look simpler like this:
> 	spin_lock_bh(&bdi_lock);
> 	if (list_empty(&bdi_pending_list)) {
> 		spin_unlock_bh(&bdi_lock);
> 		schedule();
> 	} else {
> 		bdi = list_entry(bdi_pending_list.next,
> 				 struct backing_dev_info, bdi_list);
> 		list_del_init(&bdi->bdi_list);
> 		spin_unlock_bh(&bdi_lock);
> 		if (bdi->task)
> 			continue;
> 		... do work ...
> 	}

Not a bad suggestion, I'll fiddle with it a bit.
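
Folded together with your structure, it might end up looking something
like this (untested sketch, with the allocation-failure writeout
abbreviated):

static int bdi_forker_task(void *ptr)
{
    struct backing_dev_info *bdi, *me = ptr;

    for (;;) {
        struct task_struct *task;
        DEFINE_WAIT(wait);

        prepare_to_wait(&me->wait, &wait, TASK_INTERRUPTIBLE);

        spin_lock_bh(&bdi_lock);
        if (list_empty(&bdi_pending_list)) {
            spin_unlock_bh(&bdi_lock);
            schedule();
            finish_wait(&me->wait, &wait);
            continue;
        }

        bdi = list_entry(bdi_pending_list.next,
                         struct backing_dev_info, bdi_list);
        list_del_init(&bdi->bdi_list);
        spin_unlock_bh(&bdi_lock);
        finish_wait(&me->wait, &wait);

        if (bdi->task)
            continue;

        /*
         * Note that kthread_run() returns an ERR_PTR on failure, not
         * NULL, so IS_ERR() is the right test here (the posted patch
         * tests !bdi->task, which can never trigger).
         */
        task = kthread_run(bdi_start_fn, bdi, "bdi-%s",
                           dev_name(bdi->dev));
        if (IS_ERR(task)) {
            /* requeue and write out some memory, as in the patch */
            spin_lock_bh(&bdi_lock);
            list_add_tail(&bdi->bdi_list, &bdi_pending_list);
            spin_unlock_bh(&bdi_lock);
        } else {
            bdi->task = task;
        }
    }

    return 0;
}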

> 
> > +		if (list_empty(&bdi_pending_list))
> > +			schedule();
> > +		else {
> > +repeat:
> > +			bdi = NULL;
> > +
> > +			spin_lock_bh(&bdi_lock);
> > +			if (!list_empty(&bdi_pending_list)) {
> > +				bdi = list_entry(bdi_pending_list.next,
> > +						 struct backing_dev_info,
> > +						 bdi_list);
> > +				list_del_init(&bdi->bdi_list);
> > +			}
> > +			spin_unlock_bh(&bdi_lock);
> > +
> > +			/*
> > +			 * If no bdi or bdi already got setup, continue
> > +			 */
> > +			if (!bdi || bdi->task)
> > +				continue;
> > +
> > +			bdi->task = kthread_run(bdi_start_fn, bdi, "bdi-%s",
> > +						dev_name(bdi->dev));
> > +			/*
> > +			 * If task creation fails, then readd the bdi to
> > +			 * the pending list and force writeout of the bdi
> > +			 * from this forker thread. That will free some memory
> > +			 * and we can try again.
> > +			 */
> > +			if (!bdi->task) {
> > +				struct writeback_control wbc = {
> > +					.bdi			= bdi,
> > +					.sync_mode		= WB_SYNC_NONE,
> > +					.older_than_this	= NULL,
> > +					.range_cyclic		= 1,
> > +				};
> > +
> > +				/*
> > +				 * Add this 'bdi' to the back, so we get
> > +				 * a chance to flush other bdi's to free
> > +				 * memory.
> > +				 */
> > +				spin_lock_bh(&bdi_lock);
> > +				list_add_tail(&bdi->bdi_list,
> > +						&bdi_pending_list);
> > +				spin_unlock_bh(&bdi_lock);
> > +
> > +				wbc.nr_to_write = 1024;
> > +				generic_sync_bdi_inodes(NULL, &wbc);
> > +				goto repeat;
> > +			}
> > +		}
> > +
> > +		finish_wait(&me->wait, &wait);
> > +	}
> > +
> > +	return 0;

Thanks for your review Jan, always helpful!

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 04/11] writeback: separate the flushing state/task from the bdi
  2009-05-18 12:19 ` [PATCH 04/11] writeback: separate the flushing state/task from the bdi Jens Axboe
@ 2009-05-20 11:34   ` Jan Kara
  2009-05-20 11:39     ` Jens Axboe
  0 siblings, 1 reply; 57+ messages in thread
From: Jan Kara @ 2009-05-20 11:34 UTC (permalink / raw
  To: Jens Axboe
  Cc: linux-kernel, linux-fsdevel, chris.mason, david, hch, akpm, jack,
	yanmin_zhang

On Mon 18-05-09 14:19:45, Jens Axboe wrote:
> Add a struct bdi_writeback for tracking and handling dirty IO. This
> is in preparation for adding > 1 flusher task per bdi.
  Some changes (IMO the most complicated ones ;) in this patch seem to be
just reordering / cleanup of changes which happened in patch #2. Could you
maybe move them there? Comments below...

> 
> Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
> ---
>  fs/fs-writeback.c           |  140 +++++++++++++++++-----------
>  include/linux/backing-dev.h |   42 +++++----
>  mm/backing-dev.c            |  218 ++++++++++++++++++++++++++++--------------
>  3 files changed, 256 insertions(+), 144 deletions(-)
> 
> diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
> index 8a25d14..50e21e8 100644
> --- a/fs/fs-writeback.c
> +++ b/fs/fs-writeback.c
> @@ -46,9 +46,11 @@ int nr_pdflush_threads;
>   * unless they implement their own.  Which is somewhat inefficient, as this
>   * may prevent concurrent writeback against multiple devices.
>   */
> -static int writeback_acquire(struct backing_dev_info *bdi)
> +static int writeback_acquire(struct bdi_writeback *wb)
>  {
> -	return !test_and_set_bit(BDI_pdflush, &bdi->state);
> +	struct backing_dev_info *bdi = wb->bdi;
> +
> +	return !test_and_set_bit(wb->nr, &bdi->wb_active);
>  }
>  
>  /**
> @@ -59,19 +61,38 @@ static int writeback_acquire(struct backing_dev_info *bdi)
>   */
>  int writeback_in_progress(struct backing_dev_info *bdi)
>  {
> -	return test_bit(BDI_pdflush, &bdi->state);
> +	return bdi->wb_active != 0;
>  }
>  
>  /**
>   * writeback_release - relinquish exclusive writeback access against a device.
>   * @bdi: the device's backing_dev_info structure
>   */
> -static void writeback_release(struct backing_dev_info *bdi)
> +static void writeback_release(struct bdi_writeback *wb)
>  {
> -	WARN_ON_ONCE(!writeback_in_progress(bdi));
> -	bdi->wb_arg.nr_pages = 0;
> -	bdi->wb_arg.sb = NULL;
> -	clear_bit(BDI_pdflush, &bdi->state);
> +	struct backing_dev_info *bdi = wb->bdi;
> +
> +	wb->nr_pages = 0;
> +	wb->sb = NULL;
> +	clear_bit(wb->nr, &bdi->wb_active);
> +}
> +
> +static void wb_start_writeback(struct bdi_writeback *wb, struct super_block *sb,
> +			       long nr_pages)
> +{
> +	if (!wb_has_dirty_io(wb))
> +		return;
> +
> +	if (writeback_acquire(wb)) {
> +		wb->nr_pages = nr_pages;
> +		wb->sb = sb;
> +
> +		/*
> +		 * make above store seen before the task is woken
> +		 */
> +		smp_mb();
> +		wake_up(&wb->wait);
> +	}
>  }
>  
>  int bdi_start_writeback(struct backing_dev_info *bdi, struct super_block *sb,
> @@ -81,21 +102,12 @@ int bdi_start_writeback(struct backing_dev_info *bdi, struct super_block *sb,
>  	 * This only happens the first time someone kicks this bdi, so put
>  	 * it out-of-line.
>  	 */
> -	if (unlikely(!bdi->task)) {
> +	if (unlikely(!bdi->wb.task)) {
>  		bdi_add_default_flusher_task(bdi);
>  		return 1;
>  	}
>  
> -	if (writeback_acquire(bdi)) {
> -		bdi->wb_arg.nr_pages = nr_pages;
> -		bdi->wb_arg.sb = sb;
> -		/*
> -		 * make above store seen before the task is woken
> -		 */
> -		smp_mb();
> -		wake_up(&bdi->wait);
> -	}
> -
> +	wb_start_writeback(&bdi->wb, sb, nr_pages);
>  	return 0;
>  }
>  
> @@ -123,12 +135,12 @@ int bdi_start_writeback(struct backing_dev_info *bdi, struct super_block *sb,
>   * older_than_this takes precedence over nr_to_write.  So we'll only write back
>   * all dirty pages if they are all attached to "old" mappings.
>   */
> -static void bdi_kupdated(struct backing_dev_info *bdi)
> +static void wb_kupdated(struct bdi_writeback *wb)
>  {
>  	unsigned long oldest_jif;
>  	long nr_to_write;
>  	struct writeback_control wbc = {
> -		.bdi			= bdi,
> +		.bdi			= wb->bdi,
>  		.sync_mode		= WB_SYNC_NONE,
>  		.older_than_this	= &oldest_jif,
>  		.nr_to_write		= 0,
> @@ -155,15 +167,19 @@ static void bdi_kupdated(struct backing_dev_info *bdi)
>  	}
>  }
>  
> -static void bdi_pdflush(struct backing_dev_info *bdi)
> +static void generic_sync_wb_inodes(struct bdi_writeback *wb,
> +				   struct super_block *sb,
> +				   struct writeback_control *wbc);
> +
> +static void wb_writeback(struct bdi_writeback *wb)
>  {
>  	struct writeback_control wbc = {
> -		.bdi			= bdi,
> +		.bdi			= wb->bdi,
>  		.sync_mode		= WB_SYNC_NONE,
>  		.older_than_this	= NULL,
>  		.range_cyclic		= 1,
>  	};
> -	long nr_pages = bdi->wb_arg.nr_pages;
> +	long nr_pages = wb->nr_pages;
>  
>  	for (;;) {
>  		unsigned long background_thresh, dirty_thresh;
> @@ -177,7 +193,7 @@ static void bdi_pdflush(struct backing_dev_info *bdi)
>  		wbc.encountered_congestion = 0;
>  		wbc.nr_to_write = MAX_WRITEBACK_PAGES;
>  		wbc.pages_skipped = 0;
> -		generic_sync_bdi_inodes(bdi->wb_arg.sb, &wbc);
> +		generic_sync_wb_inodes(wb, wb->sb, &wbc);
>  		nr_pages -= MAX_WRITEBACK_PAGES - wbc.nr_to_write;
>  		/*
>  		 * If we ran out of stuff to write, bail unless more_io got set
> @@ -194,13 +210,13 @@ static void bdi_pdflush(struct backing_dev_info *bdi)
>   * Handle writeback of dirty data for the device backed by this bdi. Also
>   * wakes up periodically and does kupdated style flushing.
>   */
> -int bdi_writeback_task(struct backing_dev_info *bdi)
> +int bdi_writeback_task(struct bdi_writeback *wb)
>  {
>  	while (!kthread_should_stop()) {
>  		unsigned long wait_jiffies;
>  		DEFINE_WAIT(wait);
>  
> -		prepare_to_wait(&bdi->wait, &wait, TASK_INTERRUPTIBLE);
> +		prepare_to_wait(&wb->wait, &wait, TASK_INTERRUPTIBLE);
>  		wait_jiffies = msecs_to_jiffies(dirty_writeback_interval * 10);
>  		schedule_timeout(wait_jiffies);
>  		try_to_freeze();
> @@ -219,13 +235,13 @@ int bdi_writeback_task(struct backing_dev_info *bdi)
>  		 *  pdflush style writeout.
>  		 *
>  		 */
> -		if (writeback_acquire(bdi))
> -			bdi_kupdated(bdi);
> +		if (writeback_acquire(wb))
> +			wb_kupdated(wb);
>  		else
> -			bdi_pdflush(bdi);
> +			wb_writeback(wb);
>  
> -		writeback_release(bdi);
> -		finish_wait(&bdi->wait, &wait);
> +		writeback_release(wb);
> +		finish_wait(&wb->wait, &wait);
>  	}
>  
>  	return 0;
> @@ -248,6 +264,14 @@ restart:
>  	rcu_read_unlock();
>  }
>  
> +/*
> + * We have only a single wb per bdi, so just return that.
> + */
> +static inline struct bdi_writeback *inode_get_wb(struct inode *inode)
> +{
> +	return &inode_to_bdi(inode)->wb;
> +}
> +
>  /**
>   *	__mark_inode_dirty -	internal function
>   *	@inode: inode to mark
> @@ -346,9 +370,10 @@ void __mark_inode_dirty(struct inode *inode, int flags)
>  		 * reposition it (that would break b_dirty time-ordering).
>  		 */
>  		if (!was_dirty) {
> +			struct bdi_writeback *wb = inode_get_wb(inode);
> +
>  			inode->dirtied_when = jiffies;
> -			list_move(&inode->i_list,
> -					&inode_to_bdi(inode)->b_dirty);
> +			list_move(&inode->i_list, &wb->b_dirty);
>  		}
>  	}
>  out:
> @@ -375,16 +400,16 @@ static int write_inode(struct inode *inode, int sync)
>   */
>  static void redirty_tail(struct inode *inode)
>  {
> -	struct backing_dev_info *bdi = inode_to_bdi(inode);
> +	struct bdi_writeback *wb = inode_get_wb(inode);
>  
> -	if (!list_empty(&bdi->b_dirty)) {
> +	if (!list_empty(&wb->b_dirty)) {
>  		struct inode *tail;
>  
> -		tail = list_entry(bdi->b_dirty.next, struct inode, i_list);
> +		tail = list_entry(wb->b_dirty.next, struct inode, i_list);
>  		if (time_before(inode->dirtied_when, tail->dirtied_when))
>  			inode->dirtied_when = jiffies;
>  	}
> -	list_move(&inode->i_list, &bdi->b_dirty);
> +	list_move(&inode->i_list, &wb->b_dirty);
>  }
>  
>  /*
> @@ -392,7 +417,9 @@ static void redirty_tail(struct inode *inode)
>   */
>  static void requeue_io(struct inode *inode)
>  {
> -	list_move(&inode->i_list, &inode_to_bdi(inode)->b_more_io);
> +	struct bdi_writeback *wb = inode_get_wb(inode);
> +
> +	list_move(&inode->i_list, &wb->b_more_io);
>  }
>  
>  static void inode_sync_complete(struct inode *inode)
> @@ -439,11 +466,10 @@ static void move_expired_inodes(struct list_head *delaying_queue,
>  /*
>   * Queue all expired dirty inodes for io, eldest first.
>   */
> -static void queue_io(struct backing_dev_info *bdi,
> -		     unsigned long *older_than_this)
> +static void queue_io(struct bdi_writeback *wb, unsigned long *older_than_this)
>  {
> -	list_splice_init(&bdi->b_more_io, bdi->b_io.prev);
> -	move_expired_inodes(&bdi->b_dirty, &bdi->b_io, older_than_this);
> +	list_splice_init(&wb->b_more_io, wb->b_io.prev);
> +	move_expired_inodes(&wb->b_dirty, &wb->b_io, older_than_this);
>  }
>  
>  /*
> @@ -604,20 +630,20 @@ __writeback_single_inode(struct inode *inode, struct writeback_control *wbc)
>  	return __sync_single_inode(inode, wbc);
>  }
>  
> -void generic_sync_bdi_inodes(struct super_block *sb,
> -			     struct writeback_control *wbc)
> +static void generic_sync_wb_inodes(struct bdi_writeback *wb,
> +				   struct super_block *sb,
> +				   struct writeback_control *wbc)
>  {
>  	const int is_blkdev_sb = sb_is_blkdev_sb(sb);
> -	struct backing_dev_info *bdi = wbc->bdi;
>  	const unsigned long start = jiffies;	/* livelock avoidance */
>  
>  	spin_lock(&inode_lock);
>  
> -	if (!wbc->for_kupdate || list_empty(&bdi->b_io))
> -		queue_io(bdi, wbc->older_than_this);
> +	if (!wbc->for_kupdate || list_empty(&wb->b_io))
> +		queue_io(wb, wbc->older_than_this);
>  
> -	while (!list_empty(&bdi->b_io)) {
> -		struct inode *inode = list_entry(bdi->b_io.prev,
> +	while (!list_empty(&wb->b_io)) {
> +		struct inode *inode = list_entry(wb->b_io.prev,
>  						struct inode, i_list);
>  		long pages_skipped;
>  
> @@ -629,7 +655,7 @@ void generic_sync_bdi_inodes(struct super_block *sb,
>  			continue;
>  		}
>  
> -		if (!bdi_cap_writeback_dirty(bdi)) {
> +		if (!bdi_cap_writeback_dirty(wb->bdi)) {
>  			redirty_tail(inode);
>  			if (is_blkdev_sb) {
>  				/*
> @@ -651,7 +677,7 @@ void generic_sync_bdi_inodes(struct super_block *sb,
>  			continue;
>  		}
>  
> -		if (wbc->nonblocking && bdi_write_congested(bdi)) {
> +		if (wbc->nonblocking && bdi_write_congested(wb->bdi)) {
>  			wbc->encountered_congestion = 1;
>  			if (!is_blkdev_sb)
>  				break;		/* Skip a congested fs */
> @@ -685,7 +711,7 @@ void generic_sync_bdi_inodes(struct super_block *sb,
>  			wbc->more_io = 1;
>  			break;
>  		}
> -		if (!list_empty(&bdi->b_more_io))
> +		if (!list_empty(&wb->b_more_io))
>  			wbc->more_io = 1;
>  	}
>  
> @@ -693,6 +719,14 @@ void generic_sync_bdi_inodes(struct super_block *sb,
>  	/* Leave any unwritten inodes on b_io */
>  }
>  
> +void generic_sync_bdi_inodes(struct super_block *sb,
> +			     struct writeback_control *wbc)
> +{
> +	struct backing_dev_info *bdi = wbc->bdi;
> +
> +	generic_sync_wb_inodes(&bdi->wb, sb, wbc);
> +}
> +
>  /*
>   * Write out a superblock's list of dirty inodes.  A wait will be performed
>   * upon no inodes, all inodes or the final one, depending upon sync_mode.
> diff --git a/include/linux/backing-dev.h b/include/linux/backing-dev.h
> index a848eea..a0c70f1 100644
> --- a/include/linux/backing-dev.h
> +++ b/include/linux/backing-dev.h
> @@ -23,8 +23,8 @@ struct dentry;
>   * Bits in backing_dev_info.state
>   */
>  enum bdi_state {
> -	BDI_pdflush,		/* A pdflush thread is working this device */
>  	BDI_pending,		/* On its way to being activated */
> +	BDI_wb_alloc,		/* Default embedded wb allocated */
>  	BDI_async_congested,	/* The async (write) queue is getting full */
>  	BDI_sync_congested,	/* The sync queue is getting full */
>  	BDI_unused,		/* Available bits start here */
> @@ -40,15 +40,23 @@ enum bdi_stat_item {
>  
>  #define BDI_STAT_BATCH (8*(1+ilog2(nr_cpu_ids)))
>  
> -struct bdi_writeback_arg {
> -	unsigned long nr_pages;
> -	struct super_block *sb;
> +struct bdi_writeback {
> +	struct backing_dev_info *bdi;		/* our parent bdi */
> +	unsigned int nr;
> +
> +	struct task_struct	*task;		/* writeback task */
> +	wait_queue_head_t	wait;
> +	struct list_head	b_dirty;	/* dirty inodes */
> +	struct list_head	b_io;		/* parked for writeback */
> +	struct list_head	b_more_io;	/* parked for more writeback */
> +
> +	unsigned long		nr_pages;
> +	struct super_block	*sb;
>  };
>  
>  struct backing_dev_info {
> -	struct list_head bdi_list;
>  	struct rcu_head rcu_head;
> -
> +	struct list_head bdi_list;
>  	unsigned long ra_pages;	/* max readahead in PAGE_CACHE_SIZE units */
>  	unsigned long state;	/* Always use atomic bitops on this */
>  	unsigned int capabilities; /* Device capabilities */
> @@ -65,14 +73,11 @@ struct backing_dev_info {
>  	unsigned int min_ratio;
>  	unsigned int max_ratio, max_prop_frac;
>  
> -	struct device *dev;
> +	struct bdi_writeback wb;  /* default writeback info for this bdi */
> +	unsigned long wb_active;  /* bitmap of active tasks */
> +	unsigned long wb_mask;	  /* number of registered tasks */
>  
> -	struct task_struct	*task;		/* writeback task */
> -	wait_queue_head_t	wait;
> -	struct bdi_writeback_arg wb_arg;	/* protected by BDI_pdflush */
> -	struct list_head	b_dirty;	/* dirty inodes */
> -	struct list_head	b_io;		/* parked for writeback */
> -	struct list_head	b_more_io;	/* parked for more writeback */
> +	struct device *dev;
>  
>  #ifdef CONFIG_DEBUG_FS
>  	struct dentry *debug_dir;
> @@ -89,18 +94,19 @@ int bdi_register_dev(struct backing_dev_info *bdi, dev_t dev);
>  void bdi_unregister(struct backing_dev_info *bdi);
>  int bdi_start_writeback(struct backing_dev_info *bdi, struct super_block *sb,
>  			 long nr_pages);
> -int bdi_writeback_task(struct backing_dev_info *bdi);
> +int bdi_writeback_task(struct bdi_writeback *wb);
>  void bdi_writeback_all(struct super_block *sb, long nr_pages);
>  void bdi_add_default_flusher_task(struct backing_dev_info *bdi);
> +int bdi_has_dirty_io(struct backing_dev_info *bdi);
>  
>  extern spinlock_t bdi_lock;
>  extern struct list_head bdi_list;
>  
> -static inline int bdi_has_dirty_io(struct backing_dev_info *bdi)
> +static inline int wb_has_dirty_io(struct bdi_writeback *wb)
>  {
> -	return !list_empty(&bdi->b_dirty) ||
> -	       !list_empty(&bdi->b_io) ||
> -	       !list_empty(&bdi->b_more_io);
> +	return !list_empty(&wb->b_dirty) ||
> +	       !list_empty(&wb->b_io) ||
> +	       !list_empty(&wb->b_more_io);
>  }
>  
>  static inline void __add_bdi_stat(struct backing_dev_info *bdi,
> diff --git a/mm/backing-dev.c b/mm/backing-dev.c
> index c759449..677a8c6 100644
> --- a/mm/backing-dev.c
> +++ b/mm/backing-dev.c
> @@ -199,17 +199,59 @@ static int __init default_bdi_init(void)
>  }
>  subsys_initcall(default_bdi_init);
>  
> +static void bdi_wb_init(struct bdi_writeback *wb, struct backing_dev_info *bdi)
> +{
> +	memset(wb, 0, sizeof(*wb));
> +
> +	wb->bdi = bdi;
> +	init_waitqueue_head(&wb->wait);
> +	INIT_LIST_HEAD(&wb->b_dirty);
> +	INIT_LIST_HEAD(&wb->b_io);
> +	INIT_LIST_HEAD(&wb->b_more_io);
> +}
> +
> +static void bdi_flush_io(struct backing_dev_info *bdi)
> +{
> +	struct writeback_control wbc = {
> +		.bdi			= bdi,
> +		.sync_mode		= WB_SYNC_NONE,
> +		.older_than_this	= NULL,
> +		.range_cyclic		= 1,
> +		.nr_to_write		= 1024,
> +	};
> +
> +	generic_sync_bdi_inodes(NULL, &wbc);
> +}
> +
> +static int wb_assign_nr(struct backing_dev_info *bdi, struct bdi_writeback *wb)
> +{
> +	set_bit(0, &bdi->wb_mask);
> +	wb->nr = 0;
> +	return 0;
> +}
> +
> +static void bdi_put_wb(struct backing_dev_info *bdi, struct bdi_writeback *wb)
> +{
> +	clear_bit(wb->nr, &bdi->wb_mask);
> +	clear_bit(BDI_wb_alloc, &bdi->state);
> +}
> +
> +static struct bdi_writeback *bdi_new_wb(struct backing_dev_info *bdi)
> +{
> +	struct bdi_writeback *wb;
> +
> +	set_bit(BDI_wb_alloc, &bdi->state);
> +	wb = &bdi->wb;
> +	wb_assign_nr(bdi, wb);
> +	return wb;
> +}
> +
>  static int bdi_start_fn(void *ptr)
>  {
> -	struct backing_dev_info *bdi = ptr;
> +	struct bdi_writeback *wb = ptr;
> +	struct backing_dev_info *bdi = wb->bdi;
>  	struct task_struct *tsk = current;
> -
> -	/*
> -	 * Add us to the active bdi_list
> -	 */
> -	spin_lock_bh(&bdi_lock);
> -	list_add_rcu(&bdi->bdi_list, &bdi_list);
> -	spin_unlock_bh(&bdi_lock);
> +	int ret;
>  
>  	tsk->flags |= PF_FLUSHER | PF_SWAPWRITE;
>  	set_freezable();
> @@ -225,77 +267,81 @@ static int bdi_start_fn(void *ptr)
>  	clear_bit(BDI_pending, &bdi->state);
>  	wake_up_bit(&bdi->state, BDI_pending);
>  
> -	return bdi_writeback_task(bdi);
> +	ret = bdi_writeback_task(wb);
> +
> +	bdi_put_wb(bdi, wb);
> +	return ret;
> +}
> +
> +int bdi_has_dirty_io(struct backing_dev_info *bdi)
> +{
> +	return wb_has_dirty_io(&bdi->wb);
>  }
>  
>  static int bdi_forker_task(void *ptr)
>  {
> -	struct backing_dev_info *bdi, *me = ptr;
> +	struct bdi_writeback *me = ptr;
>  
>  	for (;;) {
> +		struct backing_dev_info *bdi;
> +		struct bdi_writeback *wb;
>  		DEFINE_WAIT(wait);
>  
>  		/*
>  		 * Should never trigger on the default bdi
>  		 */
> -		WARN_ON(bdi_has_dirty_io(me));
> +		if (wb_has_dirty_io(me)) {
> +			bdi_flush_io(me->bdi);
> +			WARN_ON(1);
> +		}
>  
>  		prepare_to_wait(&me->wait, &wait, TASK_INTERRUPTIBLE);
> +
>  		smp_mb();
>  		if (list_empty(&bdi_pending_list))
>  			schedule();
> -		else {
> +
> +		finish_wait(&me->wait, &wait);
>  repeat:
> -			bdi = NULL;
> +		bdi = NULL;
> +		spin_lock_bh(&bdi_lock);
> +		if (!list_empty(&bdi_pending_list)) {
> +			bdi = list_entry(bdi_pending_list.next,
> +					 struct backing_dev_info, bdi_list);
> +			list_del_init(&bdi->bdi_list);
> +		}
> +		spin_unlock_bh(&bdi_lock);
>  
> -			spin_lock_bh(&bdi_lock);
> -			if (!list_empty(&bdi_pending_list)) {
> -				bdi = list_entry(bdi_pending_list.next,
> -						 struct backing_dev_info,
> -						 bdi_list);
> -				list_del_init(&bdi->bdi_list);
> -			}
> -			spin_unlock_bh(&bdi_lock);
> +		if (!bdi)
> +			continue;
>  
> -			/*
> -			 * If no bdi or bdi already got setup, continue
> -			 */
> -			if (!bdi || bdi->task)
> -				continue;
> +		wb = bdi_new_wb(bdi);
> +		if (!wb)
> +			goto readd_flush;
>  
> -			bdi->task = kthread_run(bdi_start_fn, bdi, "bdi-%s",
> +		wb->task = kthread_run(bdi_start_fn, wb, "bdi-%s",
>  						dev_name(bdi->dev));
> +		/*
> +		 * If task creation fails, then readd the bdi to
> +		 * the pending list and force writeout of the bdi
> +		 * from this forker thread. That will free some memory
> +		 * and we can try again.
> +		 */
> +		if (!wb->task) {
> +			bdi_put_wb(bdi, wb);
> +readd_flush:
>  			/*
> -			 * If task creation fails, then readd the bdi to
> -			 * the pending list and force writeout of the bdi
> -			 * from this forker thread. That will free some memory
> -			 * and we can try again.
> +			 * Add this 'bdi' to the back, so we get
> +			 * a chance to flush other bdi's to free
> +			 * memory.
>  			 */
> -			if (!bdi->task) {
> -				struct writeback_control wbc = {
> -					.bdi			= bdi,
> -					.sync_mode		= WB_SYNC_NONE,
> -					.older_than_this	= NULL,
> -					.range_cyclic		= 1,
> -				};
> -
> -				/*
> -				 * Add this 'bdi' to the back, so we get
> -				 * a chance to flush other bdi's to free
> -				 * memory.
> -				 */
> -				spin_lock_bh(&bdi_lock);
> -				list_add_tail(&bdi->bdi_list,
> -						&bdi_pending_list);
> -				spin_unlock_bh(&bdi_lock);
> -
> -				wbc.nr_to_write = 1024;
> -				generic_sync_bdi_inodes(NULL, &wbc);
> -				goto repeat;
> -			}
> -		}
> +			spin_lock_bh(&bdi_lock);
> +			list_add_tail(&bdi->bdi_list, &bdi_pending_list);
> +			spin_unlock_bh(&bdi_lock);
>  
> -		finish_wait(&me->wait, &wait);
> +			bdi_flush_io(bdi);
> +			goto repeat;
> +		}
>  	}
  Quite a lot of changes in this function (and creation of bdi_flush_io())
are just cleanups of patch #2 so it would be nice to move them there...

>  
>  	return 0;
> @@ -318,11 +364,21 @@ static void bdi_add_to_pending(struct rcu_head *head)
>  	list_add_tail(&bdi->bdi_list, &bdi_pending_list);
>  	spin_unlock(&bdi_lock);
>  
> -	wake_up(&default_backing_dev_info.wait);
> +	wake_up(&default_backing_dev_info.wb.wait);
>  }
>  
> +/*
> + * Add a new flusher task that gets created for any bdi
> + * that has dirty data pending writeout
> + */
>  void bdi_add_default_flusher_task(struct backing_dev_info *bdi)
>  {
> +	if (!bdi_cap_writeback_dirty(bdi))
> +		return;
> +
> +	/*
> +	 * Someone already marked this pending for task creation
> +	 */
>  	if (test_and_set_bit(BDI_pending, &bdi->state))
>  		return;
>  
> @@ -363,9 +419,18 @@ int bdi_register(struct backing_dev_info *bdi, struct device *parent,
>  	 * on-demand when they need it.
>  	 */
>  	if (bdi_cap_flush_forker(bdi)) {
> -		bdi->task = kthread_run(bdi_forker_task, bdi, "bdi-%s",
> +		struct bdi_writeback *wb;
> +
> +		wb = bdi_new_wb(bdi);
> +		if (!wb) {
> +			ret = -ENOMEM;
> +			goto exit;
> +		}
> +
> +		wb->task = kthread_run(bdi_forker_task, wb, "bdi-%s",
>  						dev_name(dev));
> -		if (!bdi->task) {
> +		if (!wb->task) {
> +			bdi_put_wb(bdi, wb);
>  			ret = -ENOMEM;
>  			goto exit;
>  		}
> @@ -395,34 +460,44 @@ static int sched_wait(void *word)
>  	return 0;
>  }
>  
> +/*
> + * Remove bdi from global list and shutdown any threads we have running
> + */
>  static void bdi_wb_shutdown(struct backing_dev_info *bdi)
>  {
> +	if (!bdi_cap_writeback_dirty(bdi))
> +		return;
> +
>  	/*
>  	 * If setup is pending, wait for that to complete first
> +	 * Make sure nobody finds us on the bdi_list anymore
>  	 */
>  	wait_on_bit(&bdi->state, BDI_pending, sched_wait, TASK_UNINTERRUPTIBLE);
>  
> +	/*
> +	 * Make sure nobody finds us on the bdi_list anymore
> +	 */
>  	spin_lock_bh(&bdi_lock);
>  	list_del_rcu(&bdi->bdi_list);
>  	spin_unlock_bh(&bdi_lock);
>  
>  	/*
> -	 * In case the bdi is freed right after unregister, we need to
> -	 * make sure any RCU sections have exited
> +	 * Now make sure that anybody who is currently looking at us from
> +	 * the bdi_list iteration have exited.
>  	 */
>  	synchronize_rcu();
> +
> +	/*
> +	 * Finally, kill the kernel thread
> +	 */
> +	kthread_stop(bdi->wb.task);
>  }
>  
>  void bdi_unregister(struct backing_dev_info *bdi)
>  {
>  	if (bdi->dev) {
> -		if (!bdi_cap_flush_forker(bdi)) {
> +		if (!bdi_cap_flush_forker(bdi))
>  			bdi_wb_shutdown(bdi);
> -			if (bdi->task) {
> -				kthread_stop(bdi->task);
> -				bdi->task = NULL;
> -			}
> -		}
>  		bdi_debug_unregister(bdi);
>  		device_unregister(bdi->dev);
>  		bdi->dev = NULL;
  This whole chunk is just a cleanup of patch #2, isn't it? Maybe move it
there?

> @@ -440,11 +515,10 @@ int bdi_init(struct backing_dev_info *bdi)
>  	bdi->min_ratio = 0;
>  	bdi->max_ratio = 100;
>  	bdi->max_prop_frac = PROP_FRAC_BASE;
> -	init_waitqueue_head(&bdi->wait);
>  	INIT_LIST_HEAD(&bdi->bdi_list);
> -	INIT_LIST_HEAD(&bdi->b_io);
> -	INIT_LIST_HEAD(&bdi->b_dirty);
> -	INIT_LIST_HEAD(&bdi->b_more_io);
> +	bdi->wb_mask = bdi->wb_active = 0;
> +
> +	bdi_wb_init(&bdi->wb, bdi);
>  
>  	for (i = 0; i < NR_BDI_STAT_ITEMS; i++) {
>  		err = percpu_counter_init(&bdi->bdi_stat[i], 0);
> @@ -469,9 +543,7 @@ void bdi_destroy(struct backing_dev_info *bdi)
>  {
>  	int i;
>  
> -	WARN_ON(!list_empty(&bdi->b_dirty));
> -	WARN_ON(!list_empty(&bdi->b_io));
> -	WARN_ON(!list_empty(&bdi->b_more_io));
> +	WARN_ON(bdi_has_dirty_io(bdi));
>  
>  	bdi_unregister(bdi);
>  

									Honza
-- 
Jan Kara <jack@suse.cz>
SUSE Labs, CR

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 04/11] writeback: separate the flushing state/task from the bdi
  2009-05-20 11:34   ` Jan Kara
@ 2009-05-20 11:39     ` Jens Axboe
  2009-05-20 12:06       ` Jan Kara
  0 siblings, 1 reply; 57+ messages in thread
From: Jens Axboe @ 2009-05-20 11:39 UTC (permalink / raw
  To: Jan Kara
  Cc: linux-kernel, linux-fsdevel, chris.mason, david, hch, akpm,
	yanmin_zhang

On Wed, May 20 2009, Jan Kara wrote:
> On Mon 18-05-09 14:19:45, Jens Axboe wrote:
> > Add a struct bdi_writeback for tracking and handling dirty IO. This
> > is in preparation for adding > 1 flusher task per bdi.
>   Some changes (IMO the most complicated ones ;) in this patch set seem to
> be just reordering / cleanup of changes which happened in patch #2. Could
> you maybe move them there? Commented below...

Some of it is, but most of it is due to switching from one fixed thread to
the potential of having lots more. The code movement is mostly due to other
callers now having to use functions that were defined below them, and I'd
rather move the functions around than add prototypes at the top.

It would be easy to unify the two patches, but I wanted to separate the
switch from pdflush to 1 bdi thread from the transition from 1 bdi
thread to several.

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 04/11] writeback: separate the flushing state/task from the bdi
  2009-05-20 11:39     ` Jens Axboe
@ 2009-05-20 12:06       ` Jan Kara
  2009-05-20 12:09         ` Jens Axboe
  0 siblings, 1 reply; 57+ messages in thread
From: Jan Kara @ 2009-05-20 12:06 UTC (permalink / raw
  To: Jens Axboe
  Cc: linux-kernel, linux-fsdevel, chris.mason, david, hch, akpm,
	yanmin_zhang

On Wed 20-05-09 13:39:18, Jens Axboe wrote:
> On Wed, May 20 2009, Jan Kara wrote:
> > On Mon 18-05-09 14:19:45, Jens Axboe wrote:
> > > Add a struct bdi_writeback for tracking and handling dirty IO. This
> > > is in preparation for adding > 1 flusher task per bdi.
> >   Some changes (IMO the most complicated ones ;) in this patch set seem to
> > be just reordering / cleanup of changes which happened in patch #2. Could
> > you maybe move them there? Commented below...
> 
> Some of it is, but most of it is due to switching from one fixed thread to
> the potential of having lots more. The code movement is mostly due to other
> callers now having to use functions that were defined below them, and I'd
> rather move the functions around than add prototypes at the top.
  I meant mainly the changes in the forker thread and such.

> It would be easy to unify the two patches, but I wanted to separate the
> switch from pdflush to 1 bdi thread from the transition from 1 bdi
> thread to several.
  Yes, this is probably desirable.

									Honza
-- 
Jan Kara <jack@suse.cz>
SUSE Labs, CR

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 04/11] writeback: separate the flushing state/task from the bdi
  2009-05-20 12:06       ` Jan Kara
@ 2009-05-20 12:09         ` Jens Axboe
  0 siblings, 0 replies; 57+ messages in thread
From: Jens Axboe @ 2009-05-20 12:09 UTC (permalink / raw
  To: Jan Kara
  Cc: linux-kernel, linux-fsdevel, chris.mason, david, hch, akpm,
	yanmin_zhang

On Wed, May 20 2009, Jan Kara wrote:
> On Wed 20-05-09 13:39:18, Jens Axboe wrote:
> > On Wed, May 20 2009, Jan Kara wrote:
> > > On Mon 18-05-09 14:19:45, Jens Axboe wrote:
> > > > Add a struct bdi_writeback for tracking and handling dirty IO. This
> > > > is in preparation for adding > 1 flusher task per bdi.
> > >   Some changes (IMO the most complicated ones ;) in this patch set seem to
> > > be just reordering / cleanup of changes which happened in patch #2. Could
> > > you maybe move them there? Commented below...
> > 
> > Some of it is, but most of it is due to switching from one fixed thread to
> > the potential of having lots more. The code movement is mostly due to other
> > callers now having to use functions that were defined below them, and I'd
> > rather move the functions around than add prototypes at the top.
>   I meant mainly the changes in the forker thread and such.

OK, I'll double check for silly changes between the two. Since I added
some functionality at the end of the series and then later moved it back
up the chain, it's quite likely that there are silly diffs between those
two patches.

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 02/11] writeback: switch to per-bdi threads for flushing data
  2009-05-20 11:32     ` Jens Axboe
@ 2009-05-20 12:11       ` Jan Kara
  2009-05-20 12:16         ` Jens Axboe
  0 siblings, 1 reply; 57+ messages in thread
From: Jan Kara @ 2009-05-20 12:11 UTC (permalink / raw
  To: Jens Axboe
  Cc: linux-kernel, linux-fsdevel, chris.mason, david, hch, akpm,
	yanmin_zhang

On Wed 20-05-09 13:32:34, Jens Axboe wrote:
> On Wed, May 20 2009, Jan Kara wrote:
> >   Hi Jens,
> > 
> >   a few comments here. Mainly, I still don't think the sys_sync() is
> > working right - see comments below.
> 
> Thanks! I took the liberty of killing some of the code in between, to
> make it easier to see.
> 
> > > +void bdi_writeback_all(struct super_block *sb, long nr_pages)
> > > +{
> > > +	struct backing_dev_info *bdi;
> > > +
> > > +	rcu_read_lock();
> > > +
> > > +restart:
> > > +	list_for_each_entry_rcu(bdi, &bdi_list, bdi_list) {
> >   Isn't the RCU list here a bit overengineered? AFAICS we use the list
> > only here and if I'm grepping right, generic_sync_sb_inodes() is currently
> > only used for data integrity sync (after your patches) from fs-writeback.c
> > and by UBIFS to do the equivalent of writeback_inodes(). So a simple
> > spinlock guarding the list should be just fine. Or am I missing something?
> 
> Sure, we could. But it's really not that much of a difference,
> implementation wise.
  Yeah. It's just that when I see RCU, I'm a bit cautious about what's going
on. When I see a spinlock, everything is simple and clear ;). And I'm in
favor of using the simplest synchronization primitive that does its job
well enough ;).
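
For the record, the spinlock variant would be something like this (an
untested sketch against the bdi_list / bdi_lock symbols in this series,
not the actual patch):

void bdi_writeback_all(struct super_block *sb, long nr_pages)
{
	struct backing_dev_info *bdi;

	spin_lock_bh(&bdi_lock);
	list_for_each_entry(bdi, &bdi_list, bdi_list) {
		if (!bdi_has_dirty_io(bdi))
			continue;
		/*
		 * Sketch only: this assumes bdi_start_writeback() does
		 * not itself take bdi_lock. In the posted series it can
		 * end up in bdi_add_default_flusher_task(), which does,
		 * so the kick would have to happen outside the lock.
		 */
		bdi_start_writeback(bdi, sb, nr_pages);
	}
	spin_unlock_bh(&bdi_lock);
}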

								Honza
-- 
Jan Kara <jack@suse.cz>
SUSE Labs, CR

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 02/11] writeback: switch to per-bdi threads for flushing data
  2009-05-20 12:11       ` Jan Kara
@ 2009-05-20 12:16         ` Jens Axboe
  2009-05-20 12:24           ` Christoph Hellwig
  0 siblings, 1 reply; 57+ messages in thread
From: Jens Axboe @ 2009-05-20 12:16 UTC (permalink / raw
  To: Jan Kara
  Cc: linux-kernel, linux-fsdevel, chris.mason, david, hch, akpm,
	yanmin_zhang

On Wed, May 20 2009, Jan Kara wrote:
> On Wed 20-05-09 13:32:34, Jens Axboe wrote:
> > On Wed, May 20 2009, Jan Kara wrote:
> > >   Hi Jens,
> > > 
> > >   a few comments here. Mainly, I still don't think the sys_sync() is
> > > working right - see comments below.
> > 
> > Thanks! I took the liberty of killing some of the code in between, to
> > make it easier to see.
> > 
> > > > +void bdi_writeback_all(struct super_block *sb, long nr_pages)
> > > > +{
> > > > +	struct backing_dev_info *bdi;
> > > > +
> > > > +	rcu_read_lock();
> > > > +
> > > > +restart:
> > > > +	list_for_each_entry_rcu(bdi, &bdi_list, bdi_list) {
> > >   Isn't the RCU list here a bit overengineered? AFAICS we use the list
> > > only here and if I'm grepping right, generic_sync_sb_inodes() is currently
> > > only used for data integrity sync (after your patches) from fs-writeback.c
> > > and by UBIFS to do the equivalent of writeback_inodes(). So a simple
> > > spinlock guarding the list should be just fine. Or am I missing something?
> > 
> > Sure, we could. But it's really not that much of a difference,
> > implementation wise.
>   Yeah. It's just that when I see RCU, I'm a bit cautious about what's
> going on. When I see a spinlock, everything is simple and clear ;). And
> I'm in favor of using the simplest synchronization primitive that does
> its job well enough ;).

It's a fine rule, I agree ;-)

I'll take another look at this when splitting the sync paths.

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 02/11] writeback: switch to per-bdi threads for flushing data
  2009-05-20 12:16         ` Jens Axboe
@ 2009-05-20 12:24           ` Christoph Hellwig
  2009-05-20 12:48             ` Jens Axboe
  0 siblings, 1 reply; 57+ messages in thread
From: Christoph Hellwig @ 2009-05-20 12:24 UTC (permalink / raw
  To: Jens Axboe
  Cc: Jan Kara, linux-kernel, linux-fsdevel, chris.mason, david, hch,
	akpm, yanmin_zhang

On Wed, May 20, 2009 at 02:16:30PM +0200, Jens Axboe wrote:
> It's a fine rule, I agree ;-)
> 
> I'll take another look at this when splitting the sync paths.

Btw, there has been quite a bit of work on the higher level sync code in
the VFS tree, and I have some TODO list items for the lower level sync
code.  The most important one would be splitting data and metadata
writeback.

Currently __sync_single_inode first calls do_writepages to write back
the data, then write_inode to potentially write the metadata and then
finally filemap_fdatawait to wait for the inode's I/O to complete.

Now for one thing, doing the data wait after the metadata writeout is
wrong for all those filesystems performing some kind of metadata updates
in the I/O completion handler, and e.g. XFS has to work around this
by doing a wait by itself in its write_inode handler.

Second, inodes are usually clustered together, so if a filesystem can
issue writeback for multiple dirty inodes at the same time, performance
will be much better.

So an optimal sync would first issue data I/O for all inodes it
wants to write back, then wait for the data I/O to finish, and finally
write out the inodes in big clusters.
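
In rough pseudo-code (purely illustrative; the optimal_sync_inodes() name
is made up, inode_lock / refcounting / error handling are ignored, and
write_inode() stands for the static helper in fs-writeback.c):

static void optimal_sync_inodes(struct list_head *inodes)
{
	struct writeback_control wbc = {
		.sync_mode	= WB_SYNC_ALL,
		.nr_to_write	= LONG_MAX,
	};
	struct inode *inode;

	/* 1) issue data I/O for all inodes up front */
	list_for_each_entry(inode, inodes, i_list)
		do_writepages(inode->i_mapping, &wbc);

	/* 2) wait for the data I/O, so metadata updates done in the
	 *    I/O completion handlers have happened by now */
	list_for_each_entry(inode, inodes, i_list)
		filemap_fdatawait(inode->i_mapping);

	/* 3) only now write the inodes themselves, ideally batched so
	 *    that clustered inodes go out in big chunks */
	list_for_each_entry(inode, inodes, i_list)
		write_inode(inode, 1);
}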

I'm not quite sure when we'll get to that, just making sure we don't
work against this direction anywhere.

And yeah, I really need to take a detailed look at the current
incarnation of your patchset :)


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 02/11] writeback: switch to per-bdi threads for flushing data
  2009-05-18 12:19 ` [PATCH 02/11] writeback: switch to per-bdi threads for flushing data Jens Axboe
  2009-05-19 10:20   ` Richard Kennedy
  2009-05-20 11:18   ` Jan Kara
@ 2009-05-20 12:37   ` Christoph Hellwig
  2009-05-20 12:49     ` Jens Axboe
  2 siblings, 1 reply; 57+ messages in thread
From: Christoph Hellwig @ 2009-05-20 12:37 UTC (permalink / raw
  To: Jens Axboe
  Cc: linux-kernel, linux-fsdevel, chris.mason, david, hch, akpm, jack,
	yanmin_zhang, aia21, linux-ntfs-dev

Can you run the hunk below past Anton and get it upstream separately?
The code does indeed look extremely fishy, but I'd rather not see it
go in a large unrelated patch.


On Mon, May 18, 2009 at 02:19:43PM +0200, Jens Axboe wrote:
> diff --git a/fs/ntfs/super.c b/fs/ntfs/super.c
> index f76951d..c4cb157 100644
> --- a/fs/ntfs/super.c
> +++ b/fs/ntfs/super.c
> @@ -2373,39 +2373,13 @@ static void ntfs_put_super(struct super_block *sb)
>  		vol->mftmirr_ino = NULL;
>  	}
>  	/*
> -	 * If any dirty inodes are left, throw away all mft data page cache
> -	 * pages to allow a clean umount.  This should never happen any more
> -	 * due to mft.c::ntfs_mft_writepage() cleaning all the dirty pages as
> -	 * the underlying mft records are written out and cleaned.  If it does,
> +	 * We should have no dirty inodes left, due to
> +	 * mft.c::ntfs_mft_writepage() cleaning all the dirty pages as
> +	 * the underlying mft records are written out and cleaned.
>  	 * happen anyway, we want to know...
>  	 */
>  	ntfs_commit_inode(vol->mft_ino);
>  	write_inode_now(vol->mft_ino, 1);
> -	if (sb_has_dirty_inodes(sb)) {
> -		const char *s1, *s2;
> -
> -		mutex_lock(&vol->mft_ino->i_mutex);
> -		truncate_inode_pages(vol->mft_ino->i_mapping, 0);
> -		mutex_unlock(&vol->mft_ino->i_mutex);
> -		write_inode_now(vol->mft_ino, 1);
> -		if (sb_has_dirty_inodes(sb)) {
> -			static const char *_s1 = "inodes";
> -			static const char *_s2 = "";
> -			s1 = _s1;
> -			s2 = _s2;
> -		} else {
> -			static const char *_s1 = "mft pages";
> -			static const char *_s2 = "They have been thrown "
> -					"away.  ";
> -			s1 = _s1;
> -			s2 = _s2;
> -		}
> -		ntfs_error(sb, "Dirty %s found at umount time.  %sYou should "
> -				"run chkdsk.  Please email "
> -				"linux-ntfs-dev@lists.sourceforge.net and say "
> -				"that you saw this message.  Thank you.", s1,
> -				s2);
> -	}
>  #endif /* NTFS_RW */
>  
>  	iput(vol->mft_ino);


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 02/11] writeback: switch to per-bdi threads for flushing data
  2009-05-20 12:24           ` Christoph Hellwig
@ 2009-05-20 12:48             ` Jens Axboe
  0 siblings, 0 replies; 57+ messages in thread
From: Jens Axboe @ 2009-05-20 12:48 UTC (permalink / raw
  To: Christoph Hellwig
  Cc: Jan Kara, linux-kernel, linux-fsdevel, chris.mason, david, akpm,
	yanmin_zhang

On Wed, May 20 2009, Christoph Hellwig wrote:
> On Wed, May 20, 2009 at 02:16:30PM +0200, Jens Axboe wrote:
> > It's a fine rule, I agree ;-)
> > 
> > I'll take another look at this when splitting the sync paths.
> 
> Btw, there has been quite a bit of work on the higher level sync code in
> the VFS tree, and I have some TODO list items for the lower level sync
> code.  The most important one would be splitting data and metadata
> writeback.
> 
> Currently __sync_single_inode first calls do_writepages to write back
> the data, then write_inode to potentially write the metadata and then
> finally filemap_fdatawait to wait for the inode's I/O to complete.
> 
> Now for one thing, doing the data wait after the metadata writeout is
> wrong for all those filesystems performing some kind of metadata updates
> in the I/O completion handler, and e.g. XFS has to work around this
> by doing a wait by itself in its write_inode handler.
> 
> Second, inodes are usually clustered together, so if a filesystem can
> issue writeback for multiple dirty inodes at the same time, performance
> will be much better.
> 
> So an optimal sync would first issue data I/O for all inodes it
> wants to write back, then wait for the data I/O to finish, and finally
> write out the inodes in big clusters.
> 
> I'm not quite sure when we'll get to that, just making sure we don't
> work against this direction anywhere.
> 
> And yeah, I really need to take a detailed look at the current
> incarnation of your patchset :)

Please do, I'm particularly interested in the possibility of having
multiple inode placements. Would it be feasible to have the inode
backing be differentiated by type (e.g. data or meta-data)?

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 02/11] writeback: switch to per-bdi threads for flushing data
  2009-05-20 12:37   ` Christoph Hellwig
@ 2009-05-20 12:49     ` Jens Axboe
  2009-05-20 14:02         ` Anton Altaparmakov
  0 siblings, 1 reply; 57+ messages in thread
From: Jens Axboe @ 2009-05-20 12:49 UTC (permalink / raw
  To: Christoph Hellwig
  Cc: linux-kernel, linux-fsdevel, chris.mason, david, akpm, jack,
	yanmin_zhang, aia21, linux-ntfs-dev

On Wed, May 20 2009, Christoph Hellwig wrote:
> Can you run the hunk below past Anton and get it upstream separately?
> The code does indeed look extremely fishy, but I'd rather not see it
> go in a large unrelated patch.

Yes, it really should go out of this patchset and into a prep patch.
Anton, care to comment?

> On Mon, May 18, 2009 at 02:19:43PM +0200, Jens Axboe wrote:
> > diff --git a/fs/ntfs/super.c b/fs/ntfs/super.c
> > index f76951d..c4cb157 100644
> > --- a/fs/ntfs/super.c
> > +++ b/fs/ntfs/super.c
> > @@ -2373,39 +2373,13 @@ static void ntfs_put_super(struct super_block *sb)
> >  		vol->mftmirr_ino = NULL;
> >  	}
> >  	/*
> > -	 * If any dirty inodes are left, throw away all mft data page cache
> > -	 * pages to allow a clean umount.  This should never happen any more
> > -	 * due to mft.c::ntfs_mft_writepage() cleaning all the dirty pages as
> > -	 * the underlying mft records are written out and cleaned.  If it does,
> > +	 * We should have no dirty inodes left, due to
> > +	 * mft.c::ntfs_mft_writepage() cleaning all the dirty pages as
> > +	 * the underlying mft records are written out and cleaned.
> >  	 * happen anyway, we want to know...
> >  	 */
> >  	ntfs_commit_inode(vol->mft_ino);
> >  	write_inode_now(vol->mft_ino, 1);
> > -	if (sb_has_dirty_inodes(sb)) {
> > -		const char *s1, *s2;
> > -
> > -		mutex_lock(&vol->mft_ino->i_mutex);
> > -		truncate_inode_pages(vol->mft_ino->i_mapping, 0);
> > -		mutex_unlock(&vol->mft_ino->i_mutex);
> > -		write_inode_now(vol->mft_ino, 1);
> > -		if (sb_has_dirty_inodes(sb)) {
> > -			static const char *_s1 = "inodes";
> > -			static const char *_s2 = "";
> > -			s1 = _s1;
> > -			s2 = _s2;
> > -		} else {
> > -			static const char *_s1 = "mft pages";
> > -			static const char *_s2 = "They have been thrown "
> > -					"away.  ";
> > -			s1 = _s1;
> > -			s2 = _s2;
> > -		}
> > -		ntfs_error(sb, "Dirty %s found at umount time.  %sYou should "
> > -				"run chkdsk.  Please email "
> > -				"linux-ntfs-dev@lists.sourceforge.net and say "
> > -				"that you saw this message.  Thank you.", s1,
> > -				s2);
> > -	}
> >  #endif /* NTFS_RW */
> >  
> >  	iput(vol->mft_ino);
> 

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 02/11] writeback: switch to per-bdi threads for flushing data
  2009-05-20 12:49     ` Jens Axboe
@ 2009-05-20 14:02         ` Anton Altaparmakov
  0 siblings, 0 replies; 57+ messages in thread
From: Anton Altaparmakov @ 2009-05-20 14:02 UTC (permalink / raw
  To: Jens Axboe
  Cc: Christoph Hellwig, linux-kernel, linux-fsdevel, chris.mason,
	david, akpm, jack, yanmin_zhang, aia21, linux-ntfs-dev

Hi,

On 20 May 2009, at 13:49, Jens Axboe wrote:
> On Wed, May 20 2009, Christoph Hellwig wrote:
>> Can you run the hunk below past Anton and get it upstream separately?
>> The code does indeed look extremely fishy, but I'd rather not see it
>> go in a large unrelated patch.
>
> Yes, it really should go out of this patchset and into a prep patch.
> Anton, care to comment?
>
>> On Mon, May 18, 2009 at 02:19:43PM +0200, Jens Axboe wrote:
>>> diff --git a/fs/ntfs/super.c b/fs/ntfs/super.c
>>> index f76951d..c4cb157 100644
>>> --- a/fs/ntfs/super.c
>>> +++ b/fs/ntfs/super.c
>>> @@ -2373,39 +2373,13 @@ static void ntfs_put_super(struct  
>>> super_block *sb)
>>> 		vol->mftmirr_ino = NULL;
>>> 	}
>>> 	/*
>>> -	 * If any dirty inodes are left, throw away all mft data page  
>>> cache
>>> -	 * pages to allow a clean umount.  This should never happen any  
>>> more
>>> -	 * due to mft.c::ntfs_mft_writepage() cleaning all the dirty  
>>> pages as
>>> -	 * the underlying mft records are written out and cleaned.  If  
>>> it does,
>>> +	 * We should have no dirty inodes left, due to
>>> +	 * mft.c::ntfs_mft_writepage() cleaning all the dirty pages as
>>> +	 * the underlying mft records are written out and cleaned.
>>> 	 * happen anyway, we want to know...

You need to remove the above line, too.  It does not make sense to  
leave half a sentence there...

Otherwise you can apply this patch if you really want.  It is just a  
debug/bandaid.  I used to have problems where dirty inodes were left  
and I had put that in to allow the unmount to succeed properly.  I  
believe that should not happen any more as explained in the comment  
above but I left the fixup code as a sanity check that would produce  
output to the system log that people would hopefully report should my  
fix not be correct/sufficient...

Best regards,

Anton


>>>
>>> 	 */
>>> 	ntfs_commit_inode(vol->mft_ino);
>>> 	write_inode_now(vol->mft_ino, 1);
>>> -	if (sb_has_dirty_inodes(sb)) {
>>> -		const char *s1, *s2;
>>> -
>>> -		mutex_lock(&vol->mft_ino->i_mutex);
>>> -		truncate_inode_pages(vol->mft_ino->i_mapping, 0);
>>> -		mutex_unlock(&vol->mft_ino->i_mutex);
>>> -		write_inode_now(vol->mft_ino, 1);
>>> -		if (sb_has_dirty_inodes(sb)) {
>>> -			static const char *_s1 = "inodes";
>>> -			static const char *_s2 = "";
>>> -			s1 = _s1;
>>> -			s2 = _s2;
>>> -		} else {
>>> -			static const char *_s1 = "mft pages";
>>> -			static const char *_s2 = "They have been thrown "
>>> -					"away.  ";
>>> -			s1 = _s1;
>>> -			s2 = _s2;
>>> -		}
>>> -		ntfs_error(sb, "Dirty %s found at umount time.  %sYou should "
>>> -				"run chkdsk.  Please email "
>>> -				"linux-ntfs-dev@lists.sourceforge.net and say "
>>> -				"that you saw this message.  Thank you.", s1,
>>> -				s2);
>>> -	}
>>> #endif /* NTFS_RW */
>>>
>>> 	iput(vol->mft_ino);

-- 
Anton Altaparmakov <aia21 at cam.ac.uk> (replace at with @)
Unix Support, Computing Service, University of Cambridge, CB2 3QH, UK
Linux NTFS maintainer, http://www.linux-ntfs.org/


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 0/11] Per-bdi writeback flusher threads #4
  2009-05-20 11:19               ` Jens Axboe
@ 2009-05-21  6:33                 ` Zhang, Yanmin
  2009-05-21  9:10                   ` Jan Kara
  0 siblings, 1 reply; 57+ messages in thread
From: Zhang, Yanmin @ 2009-05-21  6:33 UTC (permalink / raw
  To: Jens Axboe
  Cc: linux-kernel, linux-fsdevel, chris.mason, david, hch, akpm, jack

On Wed, 2009-05-20 at 13:19 +0200, Jens Axboe wrote:
> On Wed, May 20 2009, Jens Axboe wrote:
> > On Wed, May 20 2009, Zhang, Yanmin wrote:
> > > On Wed, 2009-05-20 at 10:54 +0200, Jens Axboe wrote:
> > > > On Wed, May 20 2009, Jens Axboe wrote:
> > > > > On Wed, May 20 2009, Zhang, Yanmin wrote:
> > > > > > On Tue, 2009-05-19 at 08:20 +0200, Jens Axboe wrote:
> > > > > > > On Tue, May 19 2009, Zhang, Yanmin wrote:
> > > > > > > > On Mon, 2009-05-18 at 14:19 +0200, Jens Axboe wrote:
> > > > > > > > > Hi,
> > > > > > > > >
> > > > > > > > > This is the fourth version of this patchset. Changes since v3:
> > > > > > > > >
> > > > > > > > > - Dropped a prep patch, it has been included in mainline since.
> > > > > > > > >
> > > > > > > > > - Add a work-to-do list to the bdi. This is struct bdi_work. Each
> > > > > > > > >   wb thread will notice and execute work on bdi->work_list. The arguments
> > > > > > > > >   are which sb (or NULL for all) to flush and how many pages to flush.
> > > > > > > > >
> > > > > > > > > - Fix a bug where not all bdi's would end up on the bdi_list, so potentially
> > > > > > > > >   some data would not be flushed.
> > > > > > > > >
> > > > > > > > > - Make wb_kupdated() pass on wbc->older_than_this so we maintain the same
> > > > > > > > >   behaviour for kupdated flushes.
> > > > > > > > >
> > > > > > > > > - Have the wb thread flush first before sleeping, to avoid losing the
> > > > > > > > >   first flush on lazy register.
> > > > > > > > >
> > > > > > > > > - Rebase to newer kernels.
> > > > > > 
> > > > > > > I'm attaching two patches - apply #1 to -rc6, and then #2 is a roll-up
> > > > > > > of the patch series that you can apply next.
> > > > > > Jens,
> > > > > > 
> > > > > > I run into 2 issues with kernel 2.6.30-rc6+BDI_Flusher_V4. Below is one.
> > > > > > 
> > > > > > Tue May 19 00:00:00 CST 2009
> > > > > > BUG: unable to handle kernel NULL pointer dereference at 00000000000001d8
> > > > > > IP: [<ffffffff803f3c4c>] generic_make_request+0x10a/0x384
> > > > > > PGD 0
> > > > > > Oops: 0000 [#1] SMP
> > > > > > last sysfs file: /sys/block/sdb/stat
> > > > > > CPU 0
> > > > > > Modules linked in: igb
> > > > > > Pid: 1445, comm: bdi-8:16 Not tainted 2.6.30-rc6-bdiflusherv4 #1 X8DTN
> > > > > > RIP: 0010:[<ffffffff803f3c4c>]  [<ffffffff803f3c4c>] generic_make_request+0x10a/0x384
> > > > > > RSP: 0018:ffff8800bd04da60  EFLAGS: 00010206
> > > > > > RAX: 0000000000000000 RBX: ffff8801be45d500 RCX: 00000000038a0df8
> > > > > > RDX: 0000000000000008 RSI: 0000000000000576 RDI: ffff8801bf408680
> > > > > > RBP: ffff8801be45d500 R08: ffffe20001ee8140 R09: ffff8800bd04da98
> > > > > > R10: 0000000000000000 R11: ffff8800bd72eb40 R12: ffff8801be45d500
> > > > > > R13: ffff88005f51f310 R14: 0000000000000008 R15: ffff8800b15a5458
> > > > > > FS:  0000000000000000(0000) GS:ffffc20000000000(0000) knlGS:0000000000000000
> > > > > > CS:  0010 DS: 0018 ES: 0018 CR0: 000000008005003b
> > > > > > CR2: 00000000000001d8 CR3: 0000000000201000 CR4: 00000000000006e0
> > > > > > DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> > > > > > DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
> > > > > > Process bdi-8:16 (pid: 1445, threadinfo ffff8800bd04c000, task ffff8800bd1b75f0)
> > > > > > Stack:
> > > > > >  0000000000000008 ffffffff8027a613 00000000848dc000 ffffffffffffffff
> > > > > >  ffff8800a8190f50 ffffffff00000012 ffff8800a81938e0 ffffc2000000001b
> > > > > >  0000000000000000 0000000000000000 ffffe200026f9c30 0000000000000000
> > > > > > Call Trace:
> > > > > >  [<ffffffff8027a613>] ? mempool_alloc+0x59/0x10f
> > > > > >  [<ffffffff803f3f70>] ? submit_bio+0xaa/0xb1
> > > > > >  [<ffffffff802c6a3f>] ? submit_bh+0xe3/0x103
> > > > > >  [<ffffffff802c92ea>] ? __block_write_full_page+0x1fb/0x2f2
> > > > > >  [<ffffffff802c7d6a>] ? end_buffer_async_write+0x0/0xfb
> > > > > >  [<ffffffff8027e8d2>] ? __writepage+0xa/0x25
> > > > > >  [<ffffffff8027f036>] ? write_cache_pages+0x21c/0x338
> > > > > >  [<ffffffff8027e8c8>] ? __writepage+0x0/0x25
> > > > > >  [<ffffffff8027f195>] ? do_writepages+0x27/0x2d
> > > > > >  [<ffffffff802c22c1>] ? __writeback_single_inode+0x159/0x2b3
> > > > > >  [<ffffffff8071e52a>] ? thread_return+0x3e/0xaa
> > > > > >  [<ffffffff8027f267>] ? determine_dirtyable_memory+0xd/0x1d
> > > > > >  [<ffffffff8027f2dd>] ? get_dirty_limits+0x1d/0x255
> > > > > >  [<ffffffff802c27bc>] ? generic_sync_wb_inodes+0x1b4/0x220
> > > > > >  [<ffffffff802c3130>] ? wb_do_writeback+0x16c/0x215
> > > > > >  [<ffffffff802c323e>] ? bdi_writeback_task+0x65/0x10d
> > > > > >  [<ffffffff8024cc06>] ? autoremove_wake_function+0x0/0x2e
> > > > > >  [<ffffffff8024cb27>] ? bit_waitqueue+0x10/0xa0
> > > > > >  [<ffffffff80289257>] ? bdi_start_fn+0x0/0xba
> > > > > >  [<ffffffff802892c6>] ? bdi_start_fn+0x6f/0xba
> > > > > >  [<ffffffff8024c860>] ? kthread+0x54/0x80
> > > > > >  [<ffffffff8020c97a>] ? child_rip+0xa/0x20
> > > > > >  [<ffffffff8024c80c>] ? kthread+0x0/0x80
> > > > > >  [<ffffffff8020c970>] ? child_rip+0x0/0x20
> > > > > > 
> > > 
> > > 
> > > > 
> > > > I found one issue yesterday and one today that could cause issues, not
> > > > sure it would explain this one. But at least it's worth a try, if it's
> > > > reproducible.
> > > I just reproduced it a moment ago manually.
> > > 
> > > [global]
> > > direct=0
> > > ioengine=mmap
> > > iodepth=256
> > > iodepth_batch=32
> > > size=4G
> > > bs=4k
> > > pre_read=1
> > > overwrite=1
> > > numjobs=1
> > > loops=5
> > > runtime=600
> > > group_reporting
> > > directory=/mnt/stp/fiodata
> > > [job_group0_sub0]
> > > startdelay=0
> > > rw=randwrite
> > > filename=data0/f1:data0/f2
> > > 
> > > 
The fio build includes my pre_read patch to read the files into memory first.
> > > 
Before starting the second test, I drop the caches with:
#echo "3" > /proc/sys/vm/drop_caches
> > > 
I suspect the drop_caches triggers it.
> > 
> > Thanks, will try this. What filesystem and mount options did you use?
> 
> No luck reproducing so far.
All my testing is driven by automation scripts. I found the steps below can
trigger it:
1) Use an exclusive partition to test it; for example I use /dev/sdb1 on this
machine;
2) After running the fio test case, immediately umount and mount the disk back:
#sudo umount /dev/sdb1
#sudo mount /dev/sdb1 /mnt/stp


>  In other news, I have finally merged your
> fio pre_read patch :-)
Thanks.

> 
> I've run it here many times, works fine with the current writeback
> branch. Since I did the runs anyway, I did comparisons between mainline
> and writeback for this test. Each test was run 10 times, averages below.
> The throughput deviated less than 1MB/sec, so results are very stable.
> CPU usage percentages were always within 0.5%.
> 
> Kernel          Throughput       usr         sys        disk util
> -----------------------------------------------------------------
> writeback       175MB/sec        17.55%      43.04%     97.80%
> vanilla         147MB/sec        13.44%      47.33%     85.98%
> 
> The results for this test are particularly interesting, since it's very
> heavy on the writeback side. pdflush/bdi threads were pretty busy. User
> time is up (even if corrected for higher throughput), but system time is
> down a lot. Vanilla isn't close to keeping the disk busy; with the
> writeback patches we are basically there (100% would be pretty much
> impossible to reach).
> 
> Please try with the patches I sent. If you still see problems, we need
> to look more closely into that.
I tried the new patches. It seems they improve fio mmap randwrite 4k by about
50% on this machine (single disk). The old panic disappears, but there is a new panic.

[ROOT@LKP-NE01 ~]# BUG: unable to handle kernel NULL pointer dereference at 0000000000000190
IP: [<ffffffff803270b6>] ext3_invalidatepage+0x18/0x38
PGD 0
Oops: 0000 [#1] SMP
last sysfs file: /sys/block/sdb/stat
CPU 0
Modules linked in: igb
Pid: 7681, comm: umount Not tainted 2.6.30-rc6-bdiflusherv4fix #1 X8DTN
RIP: 0010:[<ffffffff803270b6>]  [<ffffffff803270b6>] ext3_invalidatepage+0x18/0x38
RSP: 0018:ffff8801bdc47d20  EFLAGS: 00010246
RAX: 0000000000000000 RBX: ffffe200058514a0 RCX: 0000000000000002
RDX: 000000000000000e RSI: 0000000000000000 RDI: ffffe200058514a0
RBP: 0000000000000000 R08: 0000000000000003 R09: 000000000000000e
R10: 000000000000000d R11: ffffffff8032709e R12: 0000000000000000
R13: 0000000000000000 R14: ffff8801bdc47d78 R15: ffff8800bc0dd888
FS:  00007f48d77237d0(0000) GS:ffffc20000000000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
CR2: 0000000000000190 CR3: 00000000bc867000 CR4: 00000000000006e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Process umount (pid: 7681, threadinfo ffff8801bdc46000, task ffff8801bde194d0)
Stack:
 ffffffff80280ef7 ffffe200058514a0 ffffffff80280ffd ffff8801bdc47d78
 0000000e0290c538 000000000049d801 0000000000000000 0000000000000000
 ffffffffffffffff 000000000000000e 0000000000000000 ffffe200058514a0
Call Trace:
 [<ffffffff80280ef7>] ? truncate_complete_page+0x1d/0x59
 [<ffffffff80280ffd>] ? truncate_inode_pages_range+0xca/0x32e
 [<ffffffff802ba8bc>] ? dispose_list+0x39/0xe4
 [<ffffffff802bac68>] ? invalidate_inodes+0xf1/0x10f
 [<ffffffff802ab77b>] ? generic_shutdown_super+0x78/0xde
 [<ffffffff802ab803>] ? kill_block_super+0x22/0x3a
 [<ffffffff802abe49>] ? deactivate_super+0x5f/0x76
 [<ffffffff802bdf2f>] ? sys_umount+0x2cd/0x2fc
 [<ffffffff8020ba2b>] ? system_call_fastpath+0x16/0x1b



ext3_invalidatepage dereferences EXT3_JOURNAL(page->mapping->host) while
EXT3_SB((inode)->i_sb) is NULL.

It seems umount triggers the new panic.
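
For reference, the macro chain as I read the ext3 headers of that era
(paraphrased, not copied verbatim):

/* include/linux/ext3_fs.h */
static inline struct ext3_sb_info *EXT3_SB(struct super_block *sb)
{
	return sb->s_fs_info;
}

/* include/linux/ext3_jbd.h */
#define EXT3_JOURNAL(inode)	(EXT3_SB((inode)->i_sb)->s_journal)

So with s_fs_info already NULL, the ->s_journal load turns into a NULL
pointer plus a small member offset, which would line up with the small
faulting address (0000000000000190) above.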

Yanmin



^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 0/11] Per-bdi writeback flusher threads #4
  2009-05-21  6:33                 ` Zhang, Yanmin
@ 2009-05-21  9:10                   ` Jan Kara
  2009-05-22  1:28                     ` Zhang, Yanmin
  2009-05-22  7:53                     ` Jens Axboe
  0 siblings, 2 replies; 57+ messages in thread
From: Jan Kara @ 2009-05-21  9:10 UTC (permalink / raw
  To: Zhang, Yanmin
  Cc: Jens Axboe, linux-kernel, linux-fsdevel, chris.mason, david, hch,
	akpm, jack

On Thu 21-05-09 14:33:47, Zhang, Yanmin wrote:
> On Wed, 2009-05-20 at 13:19 +0200, Jens Axboe wrote:
> > On Wed, May 20 2009, Jens Axboe wrote:
> > > On Wed, May 20 2009, Zhang, Yanmin wrote:
> > > > On Wed, 2009-05-20 at 10:54 +0200, Jens Axboe wrote:
> > > > > On Wed, May 20 2009, Jens Axboe wrote:
> > > > > > On Wed, May 20 2009, Zhang, Yanmin wrote:
> > > > > > > On Tue, 2009-05-19 at 08:20 +0200, Jens Axboe wrote:
> > > > > > > > On Tue, May 19 2009, Zhang, Yanmin wrote:
> > > > > > > > > On Mon, 2009-05-18 at 14:19 +0200, Jens Axboe wrote:
> > > > > > > > > > Hi,
> > > > > > > > > >
> > > > > > > > > > This is the fourth version of this patchset. Changes since v3:
> > > > > > > > > >
> > > > > > > > > > - Dropped a prep patch, it has been included in mainline since.
> > > > > > > > > >
> > > > > > > > > > - Add a work-to-do list to the bdi. This is struct bdi_work. Each
> > > > > > > > > >   wb thread will notice and execute work on bdi->work_list. The arguments
> > > > > > > > > >   are which sb (or NULL for all) to flush and how many pages to flush.
> > > > > > > > > >
> > > > > > > > > > - Fix a bug where not all bdi's would end up on the bdi_list, so potentially
> > > > > > > > > >   some data would not be flushed.
> > > > > > > > > >
> > > > > > > > > > - Make wb_kupdated() pass on wbc->older_than_this so we maintain the same
> > > > > > > > > >   behaviour for kupdated flushes.
> > > > > > > > > >
> > > > > > > > > > - Have the wb thread flush first before sleeping, to avoid losing the
> > > > > > > > > >   first flush on lazy register.
> > > > > > > > > >
> > > > > > > > > > - Rebase to newer kernels.
> > > > > > > 
> > > > > > > > I'm attaching two patches - apply #1 to -rc6, and then #2 is a roll-up
> > > > > > > > of the patch series that you can apply next.
> > > > > > > Jens,
> > > > > > > 
> > > > > > > I run into 2 issues with kernel 2.6.30-rc6+BDI_Flusher_V4. Below is one.
> > > > > > > 
> > > > > > > Tue May 19 00:00:00 CST 2009
> > > > > > > BUG: unable to handle kernel NULL pointer dereference at 00000000000001d8
> > > > > > > IP: [<ffffffff803f3c4c>] generic_make_request+0x10a/0x384
> > > > > > > PGD 0
> > > > > > > Oops: 0000 [#1] SMP
> > > > > > > last sysfs file: /sys/block/sdb/stat
> > > > > > > CPU 0
> > > > > > > Modules linked in: igb
> > > > > > > Pid: 1445, comm: bdi-8:16 Not tainted 2.6.30-rc6-bdiflusherv4 #1 X8DTN
> > > > > > > RIP: 0010:[<ffffffff803f3c4c>]  [<ffffffff803f3c4c>] generic_make_request+0x10a/0x384
> > > > > > > RSP: 0018:ffff8800bd04da60  EFLAGS: 00010206
> > > > > > > RAX: 0000000000000000 RBX: ffff8801be45d500 RCX: 00000000038a0df8
> > > > > > > RDX: 0000000000000008 RSI: 0000000000000576 RDI: ffff8801bf408680
> > > > > > > RBP: ffff8801be45d500 R08: ffffe20001ee8140 R09: ffff8800bd04da98
> > > > > > > R10: 0000000000000000 R11: ffff8800bd72eb40 R12: ffff8801be45d500
> > > > > > > R13: ffff88005f51f310 R14: 0000000000000008 R15: ffff8800b15a5458
> > > > > > > FS:  0000000000000000(0000) GS:ffffc20000000000(0000) knlGS:0000000000000000
> > > > > > > CS:  0010 DS: 0018 ES: 0018 CR0: 000000008005003b
> > > > > > > CR2: 00000000000001d8 CR3: 0000000000201000 CR4: 00000000000006e0
> > > > > > > DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> > > > > > > DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
> > > > > > > Process bdi-8:16 (pid: 1445, threadinfo ffff8800bd04c000, task ffff8800bd1b75f0)
> > > > > > > Stack:
> > > > > > >  0000000000000008 ffffffff8027a613 00000000848dc000 ffffffffffffffff
> > > > > > >  ffff8800a8190f50 ffffffff00000012 ffff8800a81938e0 ffffc2000000001b
> > > > > > >  0000000000000000 0000000000000000 ffffe200026f9c30 0000000000000000
> > > > > > > Call Trace:
> > > > > > >  [<ffffffff8027a613>] ? mempool_alloc+0x59/0x10f
> > > > > > >  [<ffffffff803f3f70>] ? submit_bio+0xaa/0xb1
> > > > > > >  [<ffffffff802c6a3f>] ? submit_bh+0xe3/0x103
> > > > > > >  [<ffffffff802c92ea>] ? __block_write_full_page+0x1fb/0x2f2
> > > > > > >  [<ffffffff802c7d6a>] ? end_buffer_async_write+0x0/0xfb
> > > > > > >  [<ffffffff8027e8d2>] ? __writepage+0xa/0x25
> > > > > > >  [<ffffffff8027f036>] ? write_cache_pages+0x21c/0x338
> > > > > > >  [<ffffffff8027e8c8>] ? __writepage+0x0/0x25
> > > > > > >  [<ffffffff8027f195>] ? do_writepages+0x27/0x2d
> > > > > > >  [<ffffffff802c22c1>] ? __writeback_single_inode+0x159/0x2b3
> > > > > > >  [<ffffffff8071e52a>] ? thread_return+0x3e/0xaa
> > > > > > >  [<ffffffff8027f267>] ? determine_dirtyable_memory+0xd/0x1d
> > > > > > >  [<ffffffff8027f2dd>] ? get_dirty_limits+0x1d/0x255
> > > > > > >  [<ffffffff802c27bc>] ? generic_sync_wb_inodes+0x1b4/0x220
> > > > > > >  [<ffffffff802c3130>] ? wb_do_writeback+0x16c/0x215
> > > > > > >  [<ffffffff802c323e>] ? bdi_writeback_task+0x65/0x10d
> > > > > > >  [<ffffffff8024cc06>] ? autoremove_wake_function+0x0/0x2e
> > > > > > >  [<ffffffff8024cb27>] ? bit_waitqueue+0x10/0xa0
> > > > > > >  [<ffffffff80289257>] ? bdi_start_fn+0x0/0xba
> > > > > > >  [<ffffffff802892c6>] ? bdi_start_fn+0x6f/0xba
> > > > > > >  [<ffffffff8024c860>] ? kthread+0x54/0x80
> > > > > > >  [<ffffffff8020c97a>] ? child_rip+0xa/0x20
> > > > > > >  [<ffffffff8024c80c>] ? kthread+0x0/0x80
> > > > > > >  [<ffffffff8020c970>] ? child_rip+0x0/0x20
> > > > > > > 
> > > > 
> > > > 
> > > > > 
> > > > > I found one issue yesterday and one today that could cause issues, not
> > > > > sure it would explain this one. But at least it's worth a try, if it's
> > > > > reproducible.
> > > > I just reproduced it a moment ago manually.
> > > > 
> > > > [global]
> > > > direct=0
> > > > ioengine=mmap
> > > > iodepth=256
> > > > iodepth_batch=32
> > > > size=4G
> > > > bs=4k
> > > > pre_read=1
> > > > overwrite=1
> > > > numjobs=1
> > > > loops=5
> > > > runtime=600
> > > > group_reporting
> > > > directory=/mnt/stp/fiodata
> > > > [job_group0_sub0]
> > > > startdelay=0
> > > > rw=randwrite
> > > > filename=data0/f1:data0/f2
> > > > 
> > > > 
> > > > The fio includes my preread patch to flush files to memory.
> > > > 
> > > > Before starting the second testing, I did a cache dropping by:
> > > > #echo "3">/proc/sys/vm/drop_caches.
> > > > 
> > > > I suspect the drop_caches trigger it.
> > > 
> > > Thanks, will try this. What filesystem and mount options did you use?
> > 
> > No luck reproducing so far.
> All my testing are started with automation scripts. I found below step could
> trigger it.
> 1) Use an exclusive partition to test it; for example I use /dev/sdb1 on this
> machine;
> 2) After running the fio test case, immediately umount and mount the disk back:
> #sudo umount /dev/sdb1
> #sudo mount /dev/sdb1 /mnt/stp
> 
> 
> >  In other news, I have finally merged your
> > fio pre_read patch :-)
> Thanks.
> 
> > 
> > I've run it here many times, works fine with the current writeback
> > branch. Since I did the runs anyway, I did comparisons between mainline
> > and writeback for this test. Each test was run 10 times, averages below.
> > The throughput deviated less than 1MB/sec, so results are very stable.
> > CPU usage percentages were always within 0.5%.
> > 
> > Kernel          Throughput       usr         sys        disk util
> > -----------------------------------------------------------------
> > writeback       175MB/sec        17.55%      43.04%     97.80%
> > vanilla         147MB/sec        13.44%      47.33%     85.98%
> > 
> > The results for this test is particularly interesting, since it's very
> > heavy on the writeback side. pdflush/bdi threads were pretty busy. User
> > time is up (even if corrected for higher throughput), but system time is
> > down a lot. Vanilla isn't close to keeping the disk busy, with the
> > writeback patches we are basically there (100% would be pretty much
> > impossible to reach).
> > 
> > Please try with the patches I sent. If you still see problems, we need
> > to look more closely into that.
> I tried the new patches. It seems it improves fio mmap randwrite 4k for about
> 50% on the machine (single disk). The old panic disappears, but there is a new panic.
> 
> [ROOT@LKP-NE01 ~]# BUG: unable to handle kernel NULL pointer dereference at 0000000000000190
> IP: [<ffffffff803270b6>] ext3_invalidatepage+0x18/0x38
> PGD 0
> Oops: 0000 [#1] SMP
> last sysfs file: /sys/block/sdb/stat
> CPU 0
> Modules linked in: igb
> Pid: 7681, comm: umount Not tainted 2.6.30-rc6-bdiflusherv4fix #1 X8DTN
> RIP: 0010:[<ffffffff803270b6>]  [<ffffffff803270b6>] ext3_invalidatepage+0x18/0x38
> RSP: 0018:ffff8801bdc47d20  EFLAGS: 00010246
> RAX: 0000000000000000 RBX: ffffe200058514a0 RCX: 0000000000000002
> RDX: 000000000000000e RSI: 0000000000000000 RDI: ffffe200058514a0
> RBP: 0000000000000000 R08: 0000000000000003 R09: 000000000000000e
> R10: 000000000000000d R11: ffffffff8032709e R12: 0000000000000000
> R13: 0000000000000000 R14: ffff8801bdc47d78 R15: ffff8800bc0dd888
> FS:  00007f48d77237d0(0000) GS:ffffc20000000000(0000) knlGS:0000000000000000
> CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
> CR2: 0000000000000190 CR3: 00000000bc867000 CR4: 00000000000006e0
> DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
> Process umount (pid: 7681, threadinfo ffff8801bdc46000, task ffff8801bde194d0)
> Stack:
>  ffffffff80280ef7 ffffe200058514a0 ffffffff80280ffd ffff8801bdc47d78
>  0000000e0290c538 000000000049d801 0000000000000000 0000000000000000
>  ffffffffffffffff 000000000000000e 0000000000000000 ffffe200058514a0
> Call Trace:
>  [<ffffffff80280ef7>] ? truncate_complete_page+0x1d/0x59
>  [<ffffffff80280ffd>] ? truncate_inode_pages_range+0xca/0x32e
>  [<ffffffff802ba8bc>] ? dispose_list+0x39/0xe4
>  [<ffffffff802bac68>] ? invalidate_inodes+0xf1/0x10f
>  [<ffffffff802ab77b>] ? generic_shutdown_super+0x78/0xde
>  [<ffffffff802ab803>] ? kill_block_super+0x22/0x3a
>  [<ffffffff802abe49>] ? deactivate_super+0x5f/0x76
>  [<ffffffff802bdf2f>] ? sys_umount+0x2cd/0x2fc
>  [<ffffffff8020ba2b>] ? system_call_fastpath+0x16/0x1b
> 
> 
> 
> ext3_invalidatepage =>  EXT3_JOURNAL(page->mapping->host) while
> EXT3_SB((inode)->i_sb) is equal to NULL.
> 
> It seems umount triggers the new panic.
  Hmm, unlike the previous oops in ext3, this does not seem to be an ext3
problem (at least at first sight). Somehow invalidate_inodes() is able to
find already-invalidated inodes on i_sb_list...
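
To make that concrete, here is a paraphrase of the invalidate_inodes() walk
as it looked around 2.6.30 (simplified, locking elided, kernel headers
assumed -- a sketch, not the actual fs/inode.c source):

	/* Paraphrased sketch: every inode still on the per-sb list at umount
	 * is moved to a dispose list and torn down. If writeback can leave an
	 * already-invalidated inode reachable here, dispose_list() ends up
	 * truncating its pages against a superblock that is going away. */
	static int invalidate_list(struct list_head *head, struct list_head *dispose)
	{
		struct inode *inode, *next;
		int busy = 0;

		list_for_each_entry_safe(inode, next, head, i_sb_list) {
			invalidate_inode_buffers(inode);
			if (!atomic_read(&inode->i_count)) {
				list_move(&inode->i_list, dispose);
				inode->i_state |= I_FREEING;
				continue;
			}
			busy = 1;	/* inode still in use */
		}
		return busy;
	}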

								Honza
-- 
Jan Kara <jack@suse.cz>
SUSE Labs, CR

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 0/11] Per-bdi writeback flusher threads #4
  2009-05-21  9:10                   ` Jan Kara
@ 2009-05-22  1:28                     ` Zhang, Yanmin
  2009-05-22  8:15                       ` Jens Axboe
  2009-05-22  7:53                     ` Jens Axboe
  1 sibling, 1 reply; 57+ messages in thread
From: Zhang, Yanmin @ 2009-05-22  1:28 UTC (permalink / raw
  To: Jan Kara
  Cc: Jens Axboe, linux-kernel, linux-fsdevel, chris.mason, david, hch,
	akpm

On Thu, 2009-05-21 at 11:10 +0200, Jan Kara wrote:
> On Thu 21-05-09 14:33:47, Zhang, Yanmin wrote:
> > On Wed, 2009-05-20 at 13:19 +0200, Jens Axboe wrote:
> > > On Wed, May 20 2009, Jens Axboe wrote:
> > > > On Wed, May 20 2009, Zhang, Yanmin wrote:
> > > > > On Wed, 2009-05-20 at 10:54 +0200, Jens Axboe wrote:
> > > > > > On Wed, May 20 2009, Jens Axboe wrote:
> > > > > > > On Wed, May 20 2009, Zhang, Yanmin wrote:
> > > > > > > > On Tue, 2009-05-19 at 08:20 +0200, Jens Axboe wrote:
> > > > > > > > > On Tue, May 19 2009, Zhang, Yanmin wrote:
> > > > > > > > > > On Mon, 2009-05-18 at 14:19 +0200, Jens Axboe wrote:
> > > > > > > > > > > Hi,
> > > > > > > > > > >
> > > > > > > > > > > This is the fourth version of this patchset. Chances since v3:
> > > > > > > > > > >
> > > > > > > > > > > - Dropped a prep patch, it has been included in mainline since.
> > > > > > > > > > >
> > > > > > > > > > > - Add a work-to-do list to the bdi. This is struct bdi_work. Each
> > > > > > > > > > >   wb thread will notice and execute work on bdi->work_list. The arguments
> > > > > > > > > > >   are which sb (or NULL for all) to flush and how many pages to flush.
> > > > > > > > > > >
> > > > > > > > > > > - Fix a bug where not all bdi's would end up on the bdi_list, so potentially
> > > > > > > > > > >   some data would not be flushed.
> > > > > > > > > > >
> > > > > > > > > > > - Make wb_kupdated() pass on wbc->older_than_this so we maintain the same
> > > > > > > > > > >   behaviour for kupdated flushes.
> > > > > > > > > > >
> > > > > > > > > > > - Have the wb thread flush first before sleeping, to avoid losing the
> > > > > > > > > > >   first flush on lazy register.
> > > > > > > > > > >
> > > > > > > > > > > - Rebase to newer kernels.
> > > > > > > > 
> > > > > > > > > I'm attaching two patches - apply #1 to -rc6, and then #2 is a roll-up
> > > > > > > > > of the patch series that you can apply next.
> > > > > > > > Jens,
> > > > > > > > 
> > > > > > > > I run into 2 issues with kernel 2.6.30-rc6+BDI_Flusher_V4. Below is one.
> > > > > > > > 
> > > > > > > > Tue May 19 00:00:00 CST 2009
> > > > > > > > BUG: unable to handle kernel NULL pointer dereference at 00000000000001d8
> > > > > > > > IP: [<ffffffff803f3c4c>] generic_make_request+0x10a/0x384
> > > > > > > > PGD 0
> > > > > > > > Oops: 0000 [#1] SMP
> > > > > > > > last sysfs file: /sys/block/sdb/stat
> > > > > > > > CPU 0
> > > > > > > > Modules linked in: igb
> > > > > > > > Pid: 1445, comm: bdi-8:16 Not tainted 2.6.30-rc6-bdiflusherv4 #1 X8DTN
> > > > > > > > RIP: 0010:[<ffffffff803f3c4c>]  [<ffffffff803f3c4c>] generic_make_request+0x10a/0x384
> > > > > > > > RSP: 0018:ffff8800bd04da60  EFLAGS: 00010206
> > > > > > > > RAX: 0000000000000000 RBX: ffff8801be45d500 RCX: 00000000038a0df8
> > > > > > > > RDX: 0000000000000008 RSI: 0000000000000576 RDI: ffff8801bf408680
> > > > > > > > RBP: ffff8801be45d500 R08: ffffe20001ee8140 R09: ffff8800bd04da98
> > > > > > > > R10: 0000000000000000 R11: ffff8800bd72eb40 R12: ffff8801be45d500
> > > > > > > > R13: ffff88005f51f310 R14: 0000000000000008 R15: ffff8800b15a5458
> > > > > > > > FS:  0000000000000000(0000) GS:ffffc20000000000(0000) knlGS:0000000000000000
> > > > > > > > CS:  0010 DS: 0018 ES: 0018 CR0: 000000008005003b
> > > > > > > > CR2: 00000000000001d8 CR3: 0000000000201000 CR4: 00000000000006e0
> > > > > > > > DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> > > > > > > > DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
> > > > > > > > Process bdi-8:16 (pid: 1445, threadinfo ffff8800bd04c000, task ffff8800bd1b75f0)
> > > > > > > > Stack:
> > > > > > > >  0000000000000008 ffffffff8027a613 00000000848dc000 ffffffffffffffff
> > > > > > > >  ffff8800a8190f50 ffffffff00000012 ffff8800a81938e0 ffffc2000000001b
> > > > > > > >  0000000000000000 0000000000000000 ffffe200026f9c30 0000000000000000
> > > > > > > > Call Trace:
> > > > > > > >  [<ffffffff8027a613>] ? mempool_alloc+0x59/0x10f
> > > > > > > >  [<ffffffff803f3f70>] ? submit_bio+0xaa/0xb1
> > > > > > > >  [<ffffffff802c6a3f>] ? submit_bh+0xe3/0x103
> > > > > > > >  [<ffffffff802c92ea>] ? __block_write_full_page+0x1fb/0x2f2
> > > > > > > >  [<ffffffff802c7d6a>] ? end_buffer_async_write+0x0/0xfb
> > > > > > > >  [<ffffffff8027e8d2>] ? __writepage+0xa/0x25
> > > > > > > >  [<ffffffff8027f036>] ? write_cache_pages+0x21c/0x338
> > > > > > > >  [<ffffffff8027e8c8>] ? __writepage+0x0/0x25
> > > > > > > >  [<ffffffff8027f195>] ? do_writepages+0x27/0x2d
> > > > > > > >  [<ffffffff802c22c1>] ? __writeback_single_inode+0x159/0x2b3
> > > > > > > >  [<ffffffff8071e52a>] ? thread_return+0x3e/0xaa
> > > > > > > >  [<ffffffff8027f267>] ? determine_dirtyable_memory+0xd/0x1d
> > > > > > > >  [<ffffffff8027f2dd>] ? get_dirty_limits+0x1d/0x255
> > > > > > > >  [<ffffffff802c27bc>] ? generic_sync_wb_inodes+0x1b4/0x220
> > > > > > > >  [<ffffffff802c3130>] ? wb_do_writeback+0x16c/0x215
> > > > > > > >  [<ffffffff802c323e>] ? bdi_writeback_task+0x65/0x10d
> > > > > > > >  [<ffffffff8024cc06>] ? autoremove_wake_function+0x0/0x2e
> > > > > > > >  [<ffffffff8024cb27>] ? bit_waitqueue+0x10/0xa0
> > > > > > > >  [<ffffffff80289257>] ? bdi_start_fn+0x0/0xba
> > > > > > > >  [<ffffffff802892c6>] ? bdi_start_fn+0x6f/0xba
> > > > > > > >  [<ffffffff8024c860>] ? kthread+0x54/0x80
> > > > > > > >  [<ffffffff8020c97a>] ? child_rip+0xa/0x20
> > > > > > > >  [<ffffffff8024c80c>] ? kthread+0x0/0x80
> > > > > > > >  [<ffffffff8020c970>] ? child_rip+0x0/0x20
> > > > > > > > 
> > > > > 
> > > > > 
> > > > > > 
> > > > > > I found one issue yesterday and one today that could cause issues, not
> > > > > > sure it would explain this one. But at least it's worth a try, if it's
> > > > > > reproducible.
> > > > > I just reproduced it a moment ago manually.
> > > > > 
> > > > > [global]
> > > > > direct=0
> > > > > ioengine=mmap
> > > > > iodepth=256
> > > > > iodepth_batch=32
> > > > > size=4G
> > > > > bs=4k
> > > > > pre_read=1
> > > > > overwrite=1
> > > > > numjobs=1
> > > > > loops=5
> > > > > runtime=600
> > > > > group_reporting
> > > > > directory=/mnt/stp/fiodata
> > > > > [job_group0_sub0]
> > > > > startdelay=0
> > > > > rw=randwrite
> > > > > filename=data0/f1:data0/f2
> > > > > 
> > > > > 
> > > > > The fio includes my preread patch to flush files to memory.
> > > > > 
> > > > > Before starting the second testing, I did a cache dropping by:
> > > > > #echo "3">/proc/sys/vm/drop_caches.
> > > > > 
> > > > > I suspect the drop_caches trigger it.
> > > > 
> > > > Thanks, will try this. What filesystem and mount options did you use?
> > > 
> > > No luck reproducing so far.
> > All my testing are started with automation scripts. I found below step could
> > trigger it.
> > 1) Use an exclusive partition to test it; for example I use /dev/sdb1 on this
> > machine;
> > 2) After running the fio test case, immediately umount and mount the disk back:
> > #sudo umount /dev/sdb1
> > #sudo mount /dev/sdb1 /mnt/stp
> > 
> > 
> > >  In other news, I have finally merged your
> > > fio pre_read patch :-)
> > Thanks.
> > 
> > > 
> > > I've run it here many times, works fine with the current writeback
> > > branch. Since I did the runs anyway, I did comparisons between mainline
> > > and writeback for this test. Each test was run 10 times, averages below.
> > > The throughput deviated less than 1MB/sec, so results are very stable.
> > > CPU usage percentages were always within 0.5%.
> > > 
> > > Kernel          Throughput       usr         sys        disk util
> > > -----------------------------------------------------------------
> > > writeback       175MB/sec        17.55%      43.04%     97.80%
> > > vanilla         147MB/sec        13.44%      47.33%     85.98%
> > > 
> > > The results for this test is particularly interesting, since it's very
> > > heavy on the writeback side. pdflush/bdi threads were pretty busy. User
> > > time is up (even if corrected for higher throughput), but system time is
> > > down a lot. Vanilla isn't close to keeping the disk busy, with the
> > > writeback patches we are basically there (100% would be pretty much
> > > impossible to reach).
> > > 
> > > Please try with the patches I sent. If you still see problems, we need
> > > to look more closely into that.
> > I tried the new patches. It seems it improves fio mmap randwrite 4k for about
> > 50% on the machine (single disk). The old panic disappears, but there is a new panic.
> > 
> > [ROOT@LKP-NE01 ~]# BUG: unable to handle kernel NULL pointer dereference at 0000000000000190
> > IP: [<ffffffff803270b6>] ext3_invalidatepage+0x18/0x38
> > PGD 0
> > Oops: 0000 [#1] SMP
> > last sysfs file: /sys/block/sdb/stat
> > CPU 0
> > Modules linked in: igb
> > Pid: 7681, comm: umount Not tainted 2.6.30-rc6-bdiflusherv4fix #1 X8DTN
> > RIP: 0010:[<ffffffff803270b6>]  [<ffffffff803270b6>] ext3_invalidatepage+0x18/0x38
> > RSP: 0018:ffff8801bdc47d20  EFLAGS: 00010246
> > RAX: 0000000000000000 RBX: ffffe200058514a0 RCX: 0000000000000002
> > RDX: 000000000000000e RSI: 0000000000000000 RDI: ffffe200058514a0
> > RBP: 0000000000000000 R08: 0000000000000003 R09: 000000000000000e
> > R10: 000000000000000d R11: ffffffff8032709e R12: 0000000000000000
> > R13: 0000000000000000 R14: ffff8801bdc47d78 R15: ffff8800bc0dd888
> > FS:  00007f48d77237d0(0000) GS:ffffc20000000000(0000) knlGS:0000000000000000
> > CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
> > CR2: 0000000000000190 CR3: 00000000bc867000 CR4: 00000000000006e0
> > DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> > DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
> > Process umount (pid: 7681, threadinfo ffff8801bdc46000, task ffff8801bde194d0)
> > Stack:
> >  ffffffff80280ef7 ffffe200058514a0 ffffffff80280ffd ffff8801bdc47d78
> >  0000000e0290c538 000000000049d801 0000000000000000 0000000000000000
> >  ffffffffffffffff 000000000000000e 0000000000000000 ffffe200058514a0
> > Call Trace:
> >  [<ffffffff80280ef7>] ? truncate_complete_page+0x1d/0x59
> >  [<ffffffff80280ffd>] ? truncate_inode_pages_range+0xca/0x32e
> >  [<ffffffff802ba8bc>] ? dispose_list+0x39/0xe4
> >  [<ffffffff802bac68>] ? invalidate_inodes+0xf1/0x10f
> >  [<ffffffff802ab77b>] ? generic_shutdown_super+0x78/0xde
> >  [<ffffffff802ab803>] ? kill_block_super+0x22/0x3a
> >  [<ffffffff802abe49>] ? deactivate_super+0x5f/0x76
> >  [<ffffffff802bdf2f>] ? sys_umount+0x2cd/0x2fc
> >  [<ffffffff8020ba2b>] ? system_call_fastpath+0x16/0x1b
> > 
> > 
> > 
> > ext3_invalidatepage =>  EXT3_JOURNAL(page->mapping->host) while
> > EXT3_SB((inode)->i_sb) is equal to NULL.
> > 
> > It seems umount triggers the new panic.
>   Hmm, unlike previous oops in ext3, this does not seem to be ext3 problem
> (at least at the first sight). Somehow invalidate_inodes() is able to find
> invalidated inodes on i_sb_list...
I caught the previous oops again.
My script does a sync after the fio testing and before umounting /dev/sdb1.


BUG: unable to handle kernel NULL pointer dereference at 00000000000001d8
IP: [<ffffffff803f3cec>] generic_make_request+0x10a/0x384
PGD 0 
Oops: 0000 [#1] SMP 
last sysfs file: /sys/block/sdb/stat
CPU 0 
Modules linked in: igb
Pid: 1446, comm: bdi-8:16 Not tainted 2.6.30-rc6-bdiflusherV4fix #1 X8DTN
RIP: 0010:[<ffffffff803f3cec>]  [<ffffffff803f3cec>] generic_make_request+0x10a/0x384
RSP: 0018:ffff8800bd295a60  EFLAGS: 00010206
RAX: 0000000000000000 RBX: ffff8800bd405b00 RCX: 0000000002cd1a40
RDX: 0000000000000008 RSI: 0000000000000576 RDI: ffff8801bf4096c0
RBP: ffff8800bd405b00 R08: ffffe20006141cf8 R09: ffff8800bd295a98
R10: 0000000000000000 R11: ffff8800bd405c80 R12: ffff8800bd405b00
R13: ffff88008bc4c150 R14: 0000000000000008 R15: ffff88008059dda0
FS:  0000000000000000(0000) GS:ffffc20000000000(0000) knlGS:0000000000000000
CS:  0010 DS: 0018 ES: 0018 CR0: 000000008005003b
CR2: 00000000000001d8 CR3: 0000000000201000 CR4: 00000000000006e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Process bdi-8:16 (pid: 1446, threadinfo ffff8800bd294000, task ffff8800bd2375f0)
Stack:
 0000000000000008 ffffffff8027a613 00000000bd0f60d0 ffffffffffffffff
 ffff88007b5cfb10 0000000000000001 ffff88007d504000 ffff880000000006
 0000000000011200 ffff8800bd61d444 ffffffffffffffcf 0000000000000000
Call Trace:
 [<ffffffff8027a613>] ? mempool_alloc+0x59/0x10f
 [<ffffffff803f4010>] ? submit_bio+0xaa/0xb1
 [<ffffffff802c6aeb>] ? submit_bh+0xe3/0x103
 [<ffffffff802c9396>] ? __block_write_full_page+0x1fb/0x2f2
 [<ffffffff802c7e16>] ? end_buffer_async_write+0x0/0xfb
 [<ffffffff8027e8d2>] ? __writepage+0xa/0x25
 [<ffffffff8027f036>] ? write_cache_pages+0x21c/0x338
 [<ffffffff8027e8c8>] ? __writepage+0x0/0x25
 [<ffffffff8027f195>] ? do_writepages+0x27/0x2d
 [<ffffffff802c22c9>] ? __writeback_single_inode+0x159/0x2b3
 [<ffffffff8071e5ca>] ? thread_return+0x3e/0xaa
 [<ffffffff8027f267>] ? determine_dirtyable_memory+0xd/0x1d
 [<ffffffff8027f2dd>] ? get_dirty_limits+0x1d/0x255
 [<ffffffff802c27c4>] ? generic_sync_wb_inodes+0x1b4/0x220
 [<ffffffff802c31dd>] ? wb_do_writeback+0x16c/0x215
 [<ffffffff802c32eb>] ? bdi_writeback_task+0x65/0x10d
 [<ffffffff8024cc06>] ? autoremove_wake_function+0x0/0x2e
 [<ffffffff8024cb27>] ? bit_waitqueue+0x10/0xa0
 [<ffffffff80289257>] ? bdi_start_fn+0x0/0xc0
 [<ffffffff802892cc>] ? bdi_start_fn+0x75/0xc0
 [<ffffffff8024c860>] ? kthread+0x54/0x80
 [<ffffffff8020c97a>] ? child_rip+0xa/0x20
 [<ffffffff8024c80c>] ? kthread+0x0/0x80
 [<ffffffff8020c970>] ? child_rip+0x0/0x20
Code: 39 c8 0f 82 ba 01 00 00 44 89 f0 c7 44 24 14 00 00 00 00 48 c7 44 24 18 ff ff ff ff 48 89 04 24 48 8b 7d 10 48 8b 87  
RIP  [<ffffffff803f3cec>] generic_make_request+0x10a/0x384



^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 0/11] Per-bdi writeback flusher threads #4
  2009-05-21  9:10                   ` Jan Kara
  2009-05-22  1:28                     ` Zhang, Yanmin
@ 2009-05-22  7:53                     ` Jens Axboe
  2009-05-22  7:53                       ` Jens Axboe
  1 sibling, 1 reply; 57+ messages in thread
From: Jens Axboe @ 2009-05-22  7:53 UTC (permalink / raw
  To: Jan Kara
  Cc: Zhang, Yanmin, linux-kernel, linux-fsdevel, chris.mason, david,
	hch, akpm

On Thu, May 21 2009, Jan Kara wrote:
> On Thu 21-05-09 14:33:47, Zhang, Yanmin wrote:
> > On Wed, 2009-05-20 at 13:19 +0200, Jens Axboe wrote:
> > > On Wed, May 20 2009, Jens Axboe wrote:
> > > > On Wed, May 20 2009, Zhang, Yanmin wrote:
> > > > > On Wed, 2009-05-20 at 10:54 +0200, Jens Axboe wrote:
> > > > > > On Wed, May 20 2009, Jens Axboe wrote:
> > > > > > > On Wed, May 20 2009, Zhang, Yanmin wrote:
> > > > > > > > On Tue, 2009-05-19 at 08:20 +0200, Jens Axboe wrote:
> > > > > > > > > On Tue, May 19 2009, Zhang, Yanmin wrote:
> > > > > > > > > > On Mon, 2009-05-18 at 14:19 +0200, Jens Axboe wrote:
> > > > > > > > > > > Hi,
> > > > > > > > > > >
> > > > > > > > > > > This is the fourth version of this patchset. Chances since v3:
> > > > > > > > > > >
> > > > > > > > > > > - Dropped a prep patch, it has been included in mainline since.
> > > > > > > > > > >
> > > > > > > > > > > - Add a work-to-do list to the bdi. This is struct bdi_work. Each
> > > > > > > > > > >   wb thread will notice and execute work on bdi->work_list. The arguments
> > > > > > > > > > >   are which sb (or NULL for all) to flush and how many pages to flush.
> > > > > > > > > > >
> > > > > > > > > > > - Fix a bug where not all bdi's would end up on the bdi_list, so potentially
> > > > > > > > > > >   some data would not be flushed.
> > > > > > > > > > >
> > > > > > > > > > > - Make wb_kupdated() pass on wbc->older_than_this so we maintain the same
> > > > > > > > > > >   behaviour for kupdated flushes.
> > > > > > > > > > >
> > > > > > > > > > > - Have the wb thread flush first before sleeping, to avoid losing the
> > > > > > > > > > >   first flush on lazy register.
> > > > > > > > > > >
> > > > > > > > > > > - Rebase to newer kernels.
> > > > > > > > 
> > > > > > > > > I'm attaching two patches - apply #1 to -rc6, and then #2 is a roll-up
> > > > > > > > > of the patch series that you can apply next.
> > > > > > > > Jens,
> > > > > > > > 
> > > > > > > > I run into 2 issues with kernel 2.6.30-rc6+BDI_Flusher_V4. Below is one.
> > > > > > > > 
> > > > > > > > Tue May 19 00:00:00 CST 2009
> > > > > > > > BUG: unable to handle kernel NULL pointer dereference at 00000000000001d8
> > > > > > > > IP: [<ffffffff803f3c4c>] generic_make_request+0x10a/0x384
> > > > > > > > PGD 0
> > > > > > > > Oops: 0000 [#1] SMP
> > > > > > > > last sysfs file: /sys/block/sdb/stat
> > > > > > > > CPU 0
> > > > > > > > Modules linked in: igb
> > > > > > > > Pid: 1445, comm: bdi-8:16 Not tainted 2.6.30-rc6-bdiflusherv4 #1 X8DTN
> > > > > > > > RIP: 0010:[<ffffffff803f3c4c>]  [<ffffffff803f3c4c>] generic_make_request+0x10a/0x384
> > > > > > > > RSP: 0018:ffff8800bd04da60  EFLAGS: 00010206
> > > > > > > > RAX: 0000000000000000 RBX: ffff8801be45d500 RCX: 00000000038a0df8
> > > > > > > > RDX: 0000000000000008 RSI: 0000000000000576 RDI: ffff8801bf408680
> > > > > > > > RBP: ffff8801be45d500 R08: ffffe20001ee8140 R09: ffff8800bd04da98
> > > > > > > > R10: 0000000000000000 R11: ffff8800bd72eb40 R12: ffff8801be45d500
> > > > > > > > R13: ffff88005f51f310 R14: 0000000000000008 R15: ffff8800b15a5458
> > > > > > > > FS:  0000000000000000(0000) GS:ffffc20000000000(0000) knlGS:0000000000000000
> > > > > > > > CS:  0010 DS: 0018 ES: 0018 CR0: 000000008005003b
> > > > > > > > CR2: 00000000000001d8 CR3: 0000000000201000 CR4: 00000000000006e0
> > > > > > > > DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> > > > > > > > DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
> > > > > > > > Process bdi-8:16 (pid: 1445, threadinfo ffff8800bd04c000, task ffff8800bd1b75f0)
> > > > > > > > Stack:
> > > > > > > >  0000000000000008 ffffffff8027a613 00000000848dc000 ffffffffffffffff
> > > > > > > >  ffff8800a8190f50 ffffffff00000012 ffff8800a81938e0 ffffc2000000001b
> > > > > > > >  0000000000000000 0000000000000000 ffffe200026f9c30 0000000000000000
> > > > > > > > Call Trace:
> > > > > > > >  [<ffffffff8027a613>] ? mempool_alloc+0x59/0x10f
> > > > > > > >  [<ffffffff803f3f70>] ? submit_bio+0xaa/0xb1
> > > > > > > >  [<ffffffff802c6a3f>] ? submit_bh+0xe3/0x103
> > > > > > > >  [<ffffffff802c92ea>] ? __block_write_full_page+0x1fb/0x2f2
> > > > > > > >  [<ffffffff802c7d6a>] ? end_buffer_async_write+0x0/0xfb
> > > > > > > >  [<ffffffff8027e8d2>] ? __writepage+0xa/0x25
> > > > > > > >  [<ffffffff8027f036>] ? write_cache_pages+0x21c/0x338
> > > > > > > >  [<ffffffff8027e8c8>] ? __writepage+0x0/0x25
> > > > > > > >  [<ffffffff8027f195>] ? do_writepages+0x27/0x2d
> > > > > > > >  [<ffffffff802c22c1>] ? __writeback_single_inode+0x159/0x2b3
> > > > > > > >  [<ffffffff8071e52a>] ? thread_return+0x3e/0xaa
> > > > > > > >  [<ffffffff8027f267>] ? determine_dirtyable_memory+0xd/0x1d
> > > > > > > >  [<ffffffff8027f2dd>] ? get_dirty_limits+0x1d/0x255
> > > > > > > >  [<ffffffff802c27bc>] ? generic_sync_wb_inodes+0x1b4/0x220
> > > > > > > >  [<ffffffff802c3130>] ? wb_do_writeback+0x16c/0x215
> > > > > > > >  [<ffffffff802c323e>] ? bdi_writeback_task+0x65/0x10d
> > > > > > > >  [<ffffffff8024cc06>] ? autoremove_wake_function+0x0/0x2e
> > > > > > > >  [<ffffffff8024cb27>] ? bit_waitqueue+0x10/0xa0
> > > > > > > >  [<ffffffff80289257>] ? bdi_start_fn+0x0/0xba
> > > > > > > >  [<ffffffff802892c6>] ? bdi_start_fn+0x6f/0xba
> > > > > > > >  [<ffffffff8024c860>] ? kthread+0x54/0x80
> > > > > > > >  [<ffffffff8020c97a>] ? child_rip+0xa/0x20
> > > > > > > >  [<ffffffff8024c80c>] ? kthread+0x0/0x80
> > > > > > > >  [<ffffffff8020c970>] ? child_rip+0x0/0x20
> > > > > > > > 
> > > > > 
> > > > > 
> > > > > > 
> > > > > > I found one issue yesterday and one today that could cause issues, not
> > > > > > sure it would explain this one. But at least it's worth a try, if it's
> > > > > > reproducible.
> > > > > I just reproduced it a moment ago manually.
> > > > > 
> > > > > [global]
> > > > > direct=0
> > > > > ioengine=mmap
> > > > > iodepth=256
> > > > > iodepth_batch=32
> > > > > size=4G
> > > > > bs=4k
> > > > > pre_read=1
> > > > > overwrite=1
> > > > > numjobs=1
> > > > > loops=5
> > > > > runtime=600
> > > > > group_reporting
> > > > > directory=/mnt/stp/fiodata
> > > > > [job_group0_sub0]
> > > > > startdelay=0
> > > > > rw=randwrite
> > > > > filename=data0/f1:data0/f2
> > > > > 
> > > > > 
> > > > > The fio includes my preread patch to flush files to memory.
> > > > > 
> > > > > Before starting the second testing, I did a cache dropping by:
> > > > > #echo "3">/proc/sys/vm/drop_caches.
> > > > > 
> > > > > I suspect the drop_caches trigger it.
> > > > 
> > > > Thanks, will try this. What filesystem and mount options did you use?
> > > 
> > > No luck reproducing so far.
> > All my testing are started with automation scripts. I found below step could
> > trigger it.
> > 1) Use an exclusive partition to test it; for example I use /dev/sdb1 on this
> > machine;
> > 2) After running the fio test case, immediately umount and mount the disk back:
> > #sudo umount /dev/sdb1
> > #sudo mount /dev/sdb1 /mnt/stp
> > 
> > 
> > >  In other news, I have finally merged your
> > > fio pre_read patch :-)
> > Thanks.
> > 
> > > 
> > > I've run it here many times, works fine with the current writeback
> > > branch. Since I did the runs anyway, I did comparisons between mainline
> > > and writeback for this test. Each test was run 10 times, averages below.
> > > The throughput deviated less than 1MB/sec, so results are very stable.
> > > CPU usage percentages were always within 0.5%.
> > > 
> > > Kernel          Throughput       usr         sys        disk util
> > > -----------------------------------------------------------------
> > > writeback       175MB/sec        17.55%      43.04%     97.80%
> > > vanilla         147MB/sec        13.44%      47.33%     85.98%
> > > 
> > > The results for this test is particularly interesting, since it's very
> > > heavy on the writeback side. pdflush/bdi threads were pretty busy. User
> > > time is up (even if corrected for higher throughput), but system time is
> > > down a lot. Vanilla isn't close to keeping the disk busy, with the
> > > writeback patches we are basically there (100% would be pretty much
> > > impossible to reach).
> > > 
> > > Please try with the patches I sent. If you still see problems, we need
> > > to look more closely into that.
> > I tried the new patches. It seems it improves fio mmap randwrite 4k for about
> > 50% on the machine (single disk). The old panic disappears, but there is a new panic.
> > 
> > [ROOT@LKP-NE01 ~]# BUG: unable to handle kernel NULL pointer dereference at 0000000000000190
> > IP: [<ffffffff803270b6>] ext3_invalidatepage+0x18/0x38
> > PGD 0
> > Oops: 0000 [#1] SMP
> > last sysfs file: /sys/block/sdb/stat
> > CPU 0
> > Modules linked in: igb
> > Pid: 7681, comm: umount Not tainted 2.6.30-rc6-bdiflusherv4fix #1 X8DTN
> > RIP: 0010:[<ffffffff803270b6>]  [<ffffffff803270b6>] ext3_invalidatepage+0x18/0x38
> > RSP: 0018:ffff8801bdc47d20  EFLAGS: 00010246
> > RAX: 0000000000000000 RBX: ffffe200058514a0 RCX: 0000000000000002
> > RDX: 000000000000000e RSI: 0000000000000000 RDI: ffffe200058514a0
> > RBP: 0000000000000000 R08: 0000000000000003 R09: 000000000000000e
> > R10: 000000000000000d R11: ffffffff8032709e R12: 0000000000000000
> > R13: 0000000000000000 R14: ffff8801bdc47d78 R15: ffff8800bc0dd888
> > FS:  00007f48d77237d0(0000) GS:ffffc20000000000(0000) knlGS:0000000000000000
> > CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
> > CR2: 0000000000000190 CR3: 00000000bc867000 CR4: 00000000000006e0
> > DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> > DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
> > Process umount (pid: 7681, threadinfo ffff8801bdc46000, task ffff8801bde194d0)
> > Stack:
> >  ffffffff80280ef7 ffffe200058514a0 ffffffff80280ffd ffff8801bdc47d78
> >  0000000e0290c538 000000000049d801 0000000000000000 0000000000000000
> >  ffffffffffffffff 000000000000000e 0000000000000000 ffffe200058514a0
> > Call Trace:
> >  [<ffffffff80280ef7>] ? truncate_complete_page+0x1d/0x59
> >  [<ffffffff80280ffd>] ? truncate_inode_pages_range+0xca/0x32e
> >  [<ffffffff802ba8bc>] ? dispose_list+0x39/0xe4
> >  [<ffffffff802bac68>] ? invalidate_inodes+0xf1/0x10f
> >  [<ffffffff802ab77b>] ? generic_shutdown_super+0x78/0xde
> >  [<ffffffff802ab803>] ? kill_block_super+0x22/0x3a
> >  [<ffffffff802abe49>] ? deactivate_super+0x5f/0x76
> >  [<ffffffff802bdf2f>] ? sys_umount+0x2cd/0x2fc
> >  [<ffffffff8020ba2b>] ? system_call_fastpath+0x16/0x1b
> > 
> > 
> > 
> > ext3_invalidatepage =>  EXT3_JOURNAL(page->mapping->host) while
> > EXT3_SB((inode)->i_sb) is equal to NULL.
> > 
> > It seems umount triggers the new panic.
>   Hmm, unlike previous oops in ext3, this does not seem to be ext3 problem
> (at least at the first sight). Somehow invalidate_inodes() is able to find
> invalidated inodes on i_sb_list...

Could this be due to the missing WB_SYNC_ALL carry? Or the out-of-line
flushing in generic_sync_sb_inodes()? The latter could be exposing a
missing wait somewhere.
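
To illustrate the first suspicion: if the queued work item does not carry the
caller's sync mode, the flusher thread ends up doing WB_SYNC_NONE writeback
for what was meant to be a data-integrity sync. A minimal sketch of that bug
shape follows; the struct layout and the helper name are assumptions based on
the patch description, not the actual code:

	/* Hypothetical sketch of losing WB_SYNC_ALL across the bdi work queue. */
	struct super_block;

	enum writeback_sync_modes { WB_SYNC_NONE, WB_SYNC_ALL };

	struct writeback_control {		/* trimmed to the relevant fields */
		long nr_to_write;
		enum writeback_sync_modes sync_mode;
	};

	struct bdi_work {
		struct super_block *sb;		/* NULL means "all supers" */
		long nr_pages;
		enum writeback_sync_modes sync_mode;	/* must be carried over */
	};

	/* Stand-in for the flusher's per-sb writeback entry point. */
	void generic_sync_wb_inodes_for(struct super_block *sb,
					struct writeback_control *wbc);

	static void wb_run_work(struct bdi_work *work)
	{
		struct writeback_control wbc = {
			.nr_to_write = work->nr_pages,
			/* Bug shape: hardcoding WB_SYNC_NONE here, instead of
			 * propagating work->sync_mode, silently downgrades a
			 * sync(2)-initiated WB_SYNC_ALL flush. */
			.sync_mode   = work->sync_mode,
		};

		generic_sync_wb_inodes_for(work->sb, &wbc);
	}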

I'll see about reproducing and fixing it locally.

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 0/11] Per-bdi writeback flusher threads #4
  2009-05-22  7:53                     ` Jens Axboe
@ 2009-05-22  7:53                       ` Jens Axboe
  0 siblings, 0 replies; 57+ messages in thread
From: Jens Axboe @ 2009-05-22  7:53 UTC (permalink / raw
  To: Jan Kara
  Cc: Zhang, Yanmin, linux-kernel, linux-fsdevel, chris.mason, david,
	hch, akpm

On Fri, May 22 2009, Jens Axboe wrote:
> On Thu, May 21 2009, Jan Kara wrote:
> > On Thu 21-05-09 14:33:47, Zhang, Yanmin wrote:
> > > On Wed, 2009-05-20 at 13:19 +0200, Jens Axboe wrote:
> > > > On Wed, May 20 2009, Jens Axboe wrote:
> > > > > On Wed, May 20 2009, Zhang, Yanmin wrote:
> > > > > > On Wed, 2009-05-20 at 10:54 +0200, Jens Axboe wrote:
> > > > > > > On Wed, May 20 2009, Jens Axboe wrote:
> > > > > > > > On Wed, May 20 2009, Zhang, Yanmin wrote:
> > > > > > > > > On Tue, 2009-05-19 at 08:20 +0200, Jens Axboe wrote:
> > > > > > > > > > On Tue, May 19 2009, Zhang, Yanmin wrote:
> > > > > > > > > > > On Mon, 2009-05-18 at 14:19 +0200, Jens Axboe wrote:
> > > > > > > > > > > > Hi,
> > > > > > > > > > > >
> > > > > > > > > > > > This is the fourth version of this patchset. Chances since v3:
> > > > > > > > > > > >
> > > > > > > > > > > > - Dropped a prep patch, it has been included in mainline since.
> > > > > > > > > > > >
> > > > > > > > > > > > - Add a work-to-do list to the bdi. This is struct bdi_work. Each
> > > > > > > > > > > >   wb thread will notice and execute work on bdi->work_list. The arguments
> > > > > > > > > > > >   are which sb (or NULL for all) to flush and how many pages to flush.
> > > > > > > > > > > >
> > > > > > > > > > > > - Fix a bug where not all bdi's would end up on the bdi_list, so potentially
> > > > > > > > > > > >   some data would not be flushed.
> > > > > > > > > > > >
> > > > > > > > > > > > - Make wb_kupdated() pass on wbc->older_than_this so we maintain the same
> > > > > > > > > > > >   behaviour for kupdated flushes.
> > > > > > > > > > > >
> > > > > > > > > > > > - Have the wb thread flush first before sleeping, to avoid losing the
> > > > > > > > > > > >   first flush on lazy register.
> > > > > > > > > > > >
> > > > > > > > > > > > - Rebase to newer kernels.
> > > > > > > > > 
> > > > > > > > > > I'm attaching two patches - apply #1 to -rc6, and then #2 is a roll-up
> > > > > > > > > > of the patch series that you can apply next.
> > > > > > > > > Jens,
> > > > > > > > > 
> > > > > > > > > I run into 2 issues with kernel 2.6.30-rc6+BDI_Flusher_V4. Below is one.
> > > > > > > > > 
> > > > > > > > > Tue May 19 00:00:00 CST 2009
> > > > > > > > > BUG: unable to handle kernel NULL pointer dereference at 00000000000001d8
> > > > > > > > > IP: [<ffffffff803f3c4c>] generic_make_request+0x10a/0x384
> > > > > > > > > PGD 0
> > > > > > > > > Oops: 0000 [#1] SMP
> > > > > > > > > last sysfs file: /sys/block/sdb/stat
> > > > > > > > > CPU 0
> > > > > > > > > Modules linked in: igb
> > > > > > > > > Pid: 1445, comm: bdi-8:16 Not tainted 2.6.30-rc6-bdiflusherv4 #1 X8DTN
> > > > > > > > > RIP: 0010:[<ffffffff803f3c4c>]  [<ffffffff803f3c4c>] generic_make_request+0x10a/0x384
> > > > > > > > > RSP: 0018:ffff8800bd04da60  EFLAGS: 00010206
> > > > > > > > > RAX: 0000000000000000 RBX: ffff8801be45d500 RCX: 00000000038a0df8
> > > > > > > > > RDX: 0000000000000008 RSI: 0000000000000576 RDI: ffff8801bf408680
> > > > > > > > > RBP: ffff8801be45d500 R08: ffffe20001ee8140 R09: ffff8800bd04da98
> > > > > > > > > R10: 0000000000000000 R11: ffff8800bd72eb40 R12: ffff8801be45d500
> > > > > > > > > R13: ffff88005f51f310 R14: 0000000000000008 R15: ffff8800b15a5458
> > > > > > > > > FS:  0000000000000000(0000) GS:ffffc20000000000(0000) knlGS:0000000000000000
> > > > > > > > > CS:  0010 DS: 0018 ES: 0018 CR0: 000000008005003b
> > > > > > > > > CR2: 00000000000001d8 CR3: 0000000000201000 CR4: 00000000000006e0
> > > > > > > > > DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> > > > > > > > > DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
> > > > > > > > > Process bdi-8:16 (pid: 1445, threadinfo ffff8800bd04c000, task ffff8800bd1b75f0)
> > > > > > > > > Stack:
> > > > > > > > >  0000000000000008 ffffffff8027a613 00000000848dc000 ffffffffffffffff
> > > > > > > > >  ffff8800a8190f50 ffffffff00000012 ffff8800a81938e0 ffffc2000000001b
> > > > > > > > >  0000000000000000 0000000000000000 ffffe200026f9c30 0000000000000000
> > > > > > > > > Call Trace:
> > > > > > > > >  [<ffffffff8027a613>] ? mempool_alloc+0x59/0x10f
> > > > > > > > >  [<ffffffff803f3f70>] ? submit_bio+0xaa/0xb1
> > > > > > > > >  [<ffffffff802c6a3f>] ? submit_bh+0xe3/0x103
> > > > > > > > >  [<ffffffff802c92ea>] ? __block_write_full_page+0x1fb/0x2f2
> > > > > > > > >  [<ffffffff802c7d6a>] ? end_buffer_async_write+0x0/0xfb
> > > > > > > > >  [<ffffffff8027e8d2>] ? __writepage+0xa/0x25
> > > > > > > > >  [<ffffffff8027f036>] ? write_cache_pages+0x21c/0x338
> > > > > > > > >  [<ffffffff8027e8c8>] ? __writepage+0x0/0x25
> > > > > > > > >  [<ffffffff8027f195>] ? do_writepages+0x27/0x2d
> > > > > > > > >  [<ffffffff802c22c1>] ? __writeback_single_inode+0x159/0x2b3
> > > > > > > > >  [<ffffffff8071e52a>] ? thread_return+0x3e/0xaa
> > > > > > > > >  [<ffffffff8027f267>] ? determine_dirtyable_memory+0xd/0x1d
> > > > > > > > >  [<ffffffff8027f2dd>] ? get_dirty_limits+0x1d/0x255
> > > > > > > > >  [<ffffffff802c27bc>] ? generic_sync_wb_inodes+0x1b4/0x220
> > > > > > > > >  [<ffffffff802c3130>] ? wb_do_writeback+0x16c/0x215
> > > > > > > > >  [<ffffffff802c323e>] ? bdi_writeback_task+0x65/0x10d
> > > > > > > > >  [<ffffffff8024cc06>] ? autoremove_wake_function+0x0/0x2e
> > > > > > > > >  [<ffffffff8024cb27>] ? bit_waitqueue+0x10/0xa0
> > > > > > > > >  [<ffffffff80289257>] ? bdi_start_fn+0x0/0xba
> > > > > > > > >  [<ffffffff802892c6>] ? bdi_start_fn+0x6f/0xba
> > > > > > > > >  [<ffffffff8024c860>] ? kthread+0x54/0x80
> > > > > > > > >  [<ffffffff8020c97a>] ? child_rip+0xa/0x20
> > > > > > > > >  [<ffffffff8024c80c>] ? kthread+0x0/0x80
> > > > > > > > >  [<ffffffff8020c970>] ? child_rip+0x0/0x20
> > > > > > > > > 
> > > > > > 
> > > > > > 
> > > > > > > 
> > > > > > > I found one issue yesterday and one today that could cause issues, not
> > > > > > > sure it would explain this one. But at least it's worth a try, if it's
> > > > > > > reproducible.
> > > > > > I just reproduced it a moment ago manually.
> > > > > > 
> > > > > > [global]
> > > > > > direct=0
> > > > > > ioengine=mmap
> > > > > > iodepth=256
> > > > > > iodepth_batch=32
> > > > > > size=4G
> > > > > > bs=4k
> > > > > > pre_read=1
> > > > > > overwrite=1
> > > > > > numjobs=1
> > > > > > loops=5
> > > > > > runtime=600
> > > > > > group_reporting
> > > > > > directory=/mnt/stp/fiodata
> > > > > > [job_group0_sub0]
> > > > > > startdelay=0
> > > > > > rw=randwrite
> > > > > > filename=data0/f1:data0/f2
> > > > > > 
> > > > > > 
> > > > > > The fio includes my preread patch to flush files to memory.
> > > > > > 
> > > > > > Before starting the second testing, I did a cache dropping by:
> > > > > > #echo "3">/proc/sys/vm/drop_caches.
> > > > > > 
> > > > > > I suspect the drop_caches trigger it.
> > > > > 
> > > > > Thanks, will try this. What filesystem and mount options did you use?
> > > > 
> > > > No luck reproducing so far.
> > > All my testing are started with automation scripts. I found below step could
> > > trigger it.
> > > 1) Use an exclusive partition to test it; for example I use /dev/sdb1 on this
> > > machine;
> > > 2) After running the fio test case, immediately umount and mount the disk back:
> > > #sudo umount /dev/sdb1
> > > #sudo mount /dev/sdb1 /mnt/stp
> > > 
> > > 
> > > >  In other news, I have finally merged your
> > > > fio pre_read patch :-)
> > > Thanks.
> > > 
> > > > 
> > > > I've run it here many times, works fine with the current writeback
> > > > branch. Since I did the runs anyway, I did comparisons between mainline
> > > > and writeback for this test. Each test was run 10 times, averages below.
> > > > The throughput deviated less than 1MB/sec, so results are very stable.
> > > > CPU usage percentages were always within 0.5%.
> > > > 
> > > > Kernel          Throughput       usr         sys        disk util
> > > > -----------------------------------------------------------------
> > > > writeback       175MB/sec        17.55%      43.04%     97.80%
> > > > vanilla         147MB/sec        13.44%      47.33%     85.98%
> > > > 
> > > > The results for this test is particularly interesting, since it's very
> > > > heavy on the writeback side. pdflush/bdi threads were pretty busy. User
> > > > time is up (even if corrected for higher throughput), but system time is
> > > > down a lot. Vanilla isn't close to keeping the disk busy, with the
> > > > writeback patches we are basically there (100% would be pretty much
> > > > impossible to reach).
> > > > 
> > > > Please try with the patches I sent. If you still see problems, we need
> > > > to look more closely into that.
> > > I tried the new patches. It seems it improves fio mmap randwrite 4k for about
> > > 50% on the machine (single disk). The old panic disappears, but there is a new panic.
> > > 
> > > [ROOT@LKP-NE01 ~]# BUG: unable to handle kernel NULL pointer dereference at 0000000000000190
> > > IP: [<ffffffff803270b6>] ext3_invalidatepage+0x18/0x38
> > > PGD 0
> > > Oops: 0000 [#1] SMP
> > > last sysfs file: /sys/block/sdb/stat
> > > CPU 0
> > > Modules linked in: igb
> > > Pid: 7681, comm: umount Not tainted 2.6.30-rc6-bdiflusherv4fix #1 X8DTN
> > > RIP: 0010:[<ffffffff803270b6>]  [<ffffffff803270b6>] ext3_invalidatepage+0x18/0x38
> > > RSP: 0018:ffff8801bdc47d20  EFLAGS: 00010246
> > > RAX: 0000000000000000 RBX: ffffe200058514a0 RCX: 0000000000000002
> > > RDX: 000000000000000e RSI: 0000000000000000 RDI: ffffe200058514a0
> > > RBP: 0000000000000000 R08: 0000000000000003 R09: 000000000000000e
> > > R10: 000000000000000d R11: ffffffff8032709e R12: 0000000000000000
> > > R13: 0000000000000000 R14: ffff8801bdc47d78 R15: ffff8800bc0dd888
> > > FS:  00007f48d77237d0(0000) GS:ffffc20000000000(0000) knlGS:0000000000000000
> > > CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
> > > CR2: 0000000000000190 CR3: 00000000bc867000 CR4: 00000000000006e0
> > > DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> > > DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
> > > Process umount (pid: 7681, threadinfo ffff8801bdc46000, task ffff8801bde194d0)
> > > Stack:
> > >  ffffffff80280ef7 ffffe200058514a0 ffffffff80280ffd ffff8801bdc47d78
> > >  0000000e0290c538 000000000049d801 0000000000000000 0000000000000000
> > >  ffffffffffffffff 000000000000000e 0000000000000000 ffffe200058514a0
> > > Call Trace:
> > >  [<ffffffff80280ef7>] ? truncate_complete_page+0x1d/0x59
> > >  [<ffffffff80280ffd>] ? truncate_inode_pages_range+0xca/0x32e
> > >  [<ffffffff802ba8bc>] ? dispose_list+0x39/0xe4
> > >  [<ffffffff802bac68>] ? invalidate_inodes+0xf1/0x10f
> > >  [<ffffffff802ab77b>] ? generic_shutdown_super+0x78/0xde
> > >  [<ffffffff802ab803>] ? kill_block_super+0x22/0x3a
> > >  [<ffffffff802abe49>] ? deactivate_super+0x5f/0x76
> > >  [<ffffffff802bdf2f>] ? sys_umount+0x2cd/0x2fc
> > >  [<ffffffff8020ba2b>] ? system_call_fastpath+0x16/0x1b
> > > 
> > > 
> > > 
> > > ext3_invalidatepage =>  EXT3_JOURNAL(page->mapping->host) while
> > > EXT3_SB((inode)->i_sb) is equal to NULL.
> > > 
> > > It seems umount triggers the new panic.
> >   Hmm, unlike previous oops in ext3, this does not seem to be ext3 problem
> > (at least at the first sight). Somehow invalidate_inodes() is able to find
> > invalidated inodes on i_sb_list...
> 
> Could this be due to the missing WB_SYNC_ALL carry? Or the out-of-line
> flushing in generic_sync_sb_inodes()? The latter could be exposing a
> missing wait somewhere.

The former got fixed this morning, btw:

http://git.kernel.dk/?p=linux-2.6-block.git;a=commit;h=237af7b3c87a37ab8aacd99eb842e6bd35a30289

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 0/11] Per-bdi writeback flusher threads #4
  2009-05-22  1:28                     ` Zhang, Yanmin
@ 2009-05-22  8:15                       ` Jens Axboe
  2009-05-22 20:44                         ` Jens Axboe
  2009-05-25  8:54                           ` Zhang, Yanmin
  0 siblings, 2 replies; 57+ messages in thread
From: Jens Axboe @ 2009-05-22  8:15 UTC (permalink / raw
  To: Zhang, Yanmin
  Cc: Jan Kara, linux-kernel, linux-fsdevel, chris.mason, david, hch,
	akpm

On Fri, May 22 2009, Zhang, Yanmin wrote:
> On Thu, 2009-05-21 at 11:10 +0200, Jan Kara wrote:
> > On Thu 21-05-09 14:33:47, Zhang, Yanmin wrote:
> > > On Wed, 2009-05-20 at 13:19 +0200, Jens Axboe wrote:
> > > > On Wed, May 20 2009, Jens Axboe wrote:
> > > > > On Wed, May 20 2009, Zhang, Yanmin wrote:
> > > > > > On Wed, 2009-05-20 at 10:54 +0200, Jens Axboe wrote:
> > > > > > > On Wed, May 20 2009, Jens Axboe wrote:
> > > > > > > > On Wed, May 20 2009, Zhang, Yanmin wrote:
> > > > > > > > > On Tue, 2009-05-19 at 08:20 +0200, Jens Axboe wrote:
> > > > > > > > > > On Tue, May 19 2009, Zhang, Yanmin wrote:
> > > > > > > > > > > On Mon, 2009-05-18 at 14:19 +0200, Jens Axboe wrote:
> > > > > > > > > > > > Hi,
> > > > > > > > > > > >
> > > > > > > > > > > > This is the fourth version of this patchset. Chances since v3:
> > > > > > > > > > > >
> > > > > > > > > > > > - Dropped a prep patch, it has been included in mainline since.
> > > > > > > > > > > >
> > > > > > > > > > > > - Add a work-to-do list to the bdi. This is struct bdi_work. Each
> > > > > > > > > > > >   wb thread will notice and execute work on bdi->work_list. The arguments
> > > > > > > > > > > >   are which sb (or NULL for all) to flush and how many pages to flush.
> > > > > > > > > > > >
> > > > > > > > > > > > - Fix a bug where not all bdi's would end up on the bdi_list, so potentially
> > > > > > > > > > > >   some data would not be flushed.
> > > > > > > > > > > >
> > > > > > > > > > > > - Make wb_kupdated() pass on wbc->older_than_this so we maintain the same
> > > > > > > > > > > >   behaviour for kupdated flushes.
> > > > > > > > > > > >
> > > > > > > > > > > > - Have the wb thread flush first before sleeping, to avoid losing the
> > > > > > > > > > > >   first flush on lazy register.
> > > > > > > > > > > >
> > > > > > > > > > > > - Rebase to newer kernels.
> > > > > > > > > 
> > > > > > > > > > I'm attaching two patches - apply #1 to -rc6, and then #2 is a roll-up
> > > > > > > > > > of the patch series that you can apply next.
> > > > > > > > > Jens,
> > > > > > > > > 
> > > > > > > > > I run into 2 issues with kernel 2.6.30-rc6+BDI_Flusher_V4. Below is one.
> > > > > > > > > 
> > > > > > > > > Tue May 19 00:00:00 CST 2009
> > > > > > > > > BUG: unable to handle kernel NULL pointer dereference at 00000000000001d8
> > > > > > > > > IP: [<ffffffff803f3c4c>] generic_make_request+0x10a/0x384
> > > > > > > > > PGD 0
> > > > > > > > > Oops: 0000 [#1] SMP
> > > > > > > > > last sysfs file: /sys/block/sdb/stat
> > > > > > > > > CPU 0
> > > > > > > > > Modules linked in: igb
> > > > > > > > > Pid: 1445, comm: bdi-8:16 Not tainted 2.6.30-rc6-bdiflusherv4 #1 X8DTN
> > > > > > > > > RIP: 0010:[<ffffffff803f3c4c>]  [<ffffffff803f3c4c>] generic_make_request+0x10a/0x384
> > > > > > > > > RSP: 0018:ffff8800bd04da60  EFLAGS: 00010206
> > > > > > > > > RAX: 0000000000000000 RBX: ffff8801be45d500 RCX: 00000000038a0df8
> > > > > > > > > RDX: 0000000000000008 RSI: 0000000000000576 RDI: ffff8801bf408680
> > > > > > > > > RBP: ffff8801be45d500 R08: ffffe20001ee8140 R09: ffff8800bd04da98
> > > > > > > > > R10: 0000000000000000 R11: ffff8800bd72eb40 R12: ffff8801be45d500
> > > > > > > > > R13: ffff88005f51f310 R14: 0000000000000008 R15: ffff8800b15a5458
> > > > > > > > > FS:  0000000000000000(0000) GS:ffffc20000000000(0000) knlGS:0000000000000000
> > > > > > > > > CS:  0010 DS: 0018 ES: 0018 CR0: 000000008005003b
> > > > > > > > > CR2: 00000000000001d8 CR3: 0000000000201000 CR4: 00000000000006e0
> > > > > > > > > DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> > > > > > > > > DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
> > > > > > > > > Process bdi-8:16 (pid: 1445, threadinfo ffff8800bd04c000, task ffff8800bd1b75f0)
> > > > > > > > > Stack:
> > > > > > > > >  0000000000000008 ffffffff8027a613 00000000848dc000 ffffffffffffffff
> > > > > > > > >  ffff8800a8190f50 ffffffff00000012 ffff8800a81938e0 ffffc2000000001b
> > > > > > > > >  0000000000000000 0000000000000000 ffffe200026f9c30 0000000000000000
> > > > > > > > > Call Trace:
> > > > > > > > >  [<ffffffff8027a613>] ? mempool_alloc+0x59/0x10f
> > > > > > > > >  [<ffffffff803f3f70>] ? submit_bio+0xaa/0xb1
> > > > > > > > >  [<ffffffff802c6a3f>] ? submit_bh+0xe3/0x103
> > > > > > > > >  [<ffffffff802c92ea>] ? __block_write_full_page+0x1fb/0x2f2
> > > > > > > > >  [<ffffffff802c7d6a>] ? end_buffer_async_write+0x0/0xfb
> > > > > > > > >  [<ffffffff8027e8d2>] ? __writepage+0xa/0x25
> > > > > > > > >  [<ffffffff8027f036>] ? write_cache_pages+0x21c/0x338
> > > > > > > > >  [<ffffffff8027e8c8>] ? __writepage+0x0/0x25
> > > > > > > > >  [<ffffffff8027f195>] ? do_writepages+0x27/0x2d
> > > > > > > > >  [<ffffffff802c22c1>] ? __writeback_single_inode+0x159/0x2b3
> > > > > > > > >  [<ffffffff8071e52a>] ? thread_return+0x3e/0xaa
> > > > > > > > >  [<ffffffff8027f267>] ? determine_dirtyable_memory+0xd/0x1d
> > > > > > > > >  [<ffffffff8027f2dd>] ? get_dirty_limits+0x1d/0x255
> > > > > > > > >  [<ffffffff802c27bc>] ? generic_sync_wb_inodes+0x1b4/0x220
> > > > > > > > >  [<ffffffff802c3130>] ? wb_do_writeback+0x16c/0x215
> > > > > > > > >  [<ffffffff802c323e>] ? bdi_writeback_task+0x65/0x10d
> > > > > > > > >  [<ffffffff8024cc06>] ? autoremove_wake_function+0x0/0x2e
> > > > > > > > >  [<ffffffff8024cb27>] ? bit_waitqueue+0x10/0xa0
> > > > > > > > >  [<ffffffff80289257>] ? bdi_start_fn+0x0/0xba
> > > > > > > > >  [<ffffffff802892c6>] ? bdi_start_fn+0x6f/0xba
> > > > > > > > >  [<ffffffff8024c860>] ? kthread+0x54/0x80
> > > > > > > > >  [<ffffffff8020c97a>] ? child_rip+0xa/0x20
> > > > > > > > >  [<ffffffff8024c80c>] ? kthread+0x0/0x80
> > > > > > > > >  [<ffffffff8020c970>] ? child_rip+0x0/0x20
> > > > > > > > > 
> > > > > > 
> > > > > > 
> > > > > > > 
> > > > > > > I found one issue yesterday and one today that could cause issues, not
> > > > > > > sure it would explain this one. But at least it's worth a try, if it's
> > > > > > > reproducible.
> > > > > > I just reproduced it a moment ago manually.
> > > > > > 
> > > > > > [global]
> > > > > > direct=0
> > > > > > ioengine=mmap
> > > > > > iodepth=256
> > > > > > iodepth_batch=32
> > > > > > size=4G
> > > > > > bs=4k
> > > > > > pre_read=1
> > > > > > overwrite=1
> > > > > > numjobs=1
> > > > > > loops=5
> > > > > > runtime=600
> > > > > > group_reporting
> > > > > > directory=/mnt/stp/fiodata
> > > > > > [job_group0_sub0]
> > > > > > startdelay=0
> > > > > > rw=randwrite
> > > > > > filename=data0/f1:data0/f2
> > > > > > 
> > > > > > 
> > > > > > The fio includes my preread patch to flush files to memory.
> > > > > > 
> > > > > > Before starting the second testing, I did a cache dropping by:
> > > > > > #echo "3">/proc/sys/vm/drop_caches.
> > > > > > 
> > > > > > I suspect the drop_caches trigger it.
> > > > > 
> > > > > Thanks, will try this. What filesystem and mount options did you use?
> > > > 
> > > > No luck reproducing so far.
> > > All my testing are started with automation scripts. I found below step could
> > > trigger it.
> > > 1) Use an exclusive partition to test it; for example I use /dev/sdb1 on this
> > > machine;
> > > 2) After running the fio test case, immediately umount and mount the disk back:
> > > #sudo umount /dev/sdb1
> > > #sudo mount /dev/sdb1 /mnt/stp
> > > 
> > > 
> > > >  In other news, I have finally merged your
> > > > fio pre_read patch :-)
> > > Thanks.
> > > 
> > > > 
> > > > I've run it here many times, works fine with the current writeback
> > > > branch. Since I did the runs anyway, I did comparisons between mainline
> > > > and writeback for this test. Each test was run 10 times, averages below.
> > > > The throughput deviated less than 1MB/sec, so results are very stable.
> > > > CPU usage percentages were always within 0.5%.
> > > > 
> > > > Kernel          Throughput       usr         sys        disk util
> > > > -----------------------------------------------------------------
> > > > writeback       175MB/sec        17.55%      43.04%     97.80%
> > > > vanilla         147MB/sec        13.44%      47.33%     85.98%
> > > > 
> > > > The results for this test is particularly interesting, since it's very
> > > > heavy on the writeback side. pdflush/bdi threads were pretty busy. User
> > > > time is up (even if corrected for higher throughput), but system time is
> > > > down a lot. Vanilla isn't close to keeping the disk busy, with the
> > > > writeback patches we are basically there (100% would be pretty much
> > > > impossible to reach).
> > > > 
> > > > Please try with the patches I sent. If you still see problems, we need
> > > > to look more closely into that.
> > > I tried the new patches. It seems it improves fio mmap randwrite 4k for about
> > > 50% on the machine (single disk). The old panic disappears, but there is a new panic.
> > > 
> > > [ROOT@LKP-NE01 ~]# BUG: unable to handle kernel NULL pointer dereference at 0000000000000190
> > > IP: [<ffffffff803270b6>] ext3_invalidatepage+0x18/0x38
> > > PGD 0
> > > Oops: 0000 [#1] SMP
> > > last sysfs file: /sys/block/sdb/stat
> > > CPU 0
> > > Modules linked in: igb
> > > Pid: 7681, comm: umount Not tainted 2.6.30-rc6-bdiflusherv4fix #1 X8DTN
> > > RIP: 0010:[<ffffffff803270b6>]  [<ffffffff803270b6>] ext3_invalidatepage+0x18/0x38
> > > RSP: 0018:ffff8801bdc47d20  EFLAGS: 00010246
> > > RAX: 0000000000000000 RBX: ffffe200058514a0 RCX: 0000000000000002
> > > RDX: 000000000000000e RSI: 0000000000000000 RDI: ffffe200058514a0
> > > RBP: 0000000000000000 R08: 0000000000000003 R09: 000000000000000e
> > > R10: 000000000000000d R11: ffffffff8032709e R12: 0000000000000000
> > > R13: 0000000000000000 R14: ffff8801bdc47d78 R15: ffff8800bc0dd888
> > > FS:  00007f48d77237d0(0000) GS:ffffc20000000000(0000) knlGS:0000000000000000
> > > CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
> > > CR2: 0000000000000190 CR3: 00000000bc867000 CR4: 00000000000006e0
> > > DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> > > DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
> > > Process umount (pid: 7681, threadinfo ffff8801bdc46000, task ffff8801bde194d0)
> > > Stack:
> > >  ffffffff80280ef7 ffffe200058514a0 ffffffff80280ffd ffff8801bdc47d78
> > >  0000000e0290c538 000000000049d801 0000000000000000 0000000000000000
> > >  ffffffffffffffff 000000000000000e 0000000000000000 ffffe200058514a0
> > > Call Trace:
> > >  [<ffffffff80280ef7>] ? truncate_complete_page+0x1d/0x59
> > >  [<ffffffff80280ffd>] ? truncate_inode_pages_range+0xca/0x32e
> > >  [<ffffffff802ba8bc>] ? dispose_list+0x39/0xe4
> > >  [<ffffffff802bac68>] ? invalidate_inodes+0xf1/0x10f
> > >  [<ffffffff802ab77b>] ? generic_shutdown_super+0x78/0xde
> > >  [<ffffffff802ab803>] ? kill_block_super+0x22/0x3a
> > >  [<ffffffff802abe49>] ? deactivate_super+0x5f/0x76
> > >  [<ffffffff802bdf2f>] ? sys_umount+0x2cd/0x2fc
> > >  [<ffffffff8020ba2b>] ? system_call_fastpath+0x16/0x1b
> > > 
> > > 
> > > 
> > > ext3_invalidatepage => EXT3_JOURNAL(page->mapping->host) dereferences
> > > EXT3_SB(inode->i_sb), which is NULL here, hence the oops.
> > > 
> > > It seems umount triggers the new panic.
> >   Hmm, unlike previous oops in ext3, this does not seem to be ext3 problem
> > (at least at the first sight). Somehow invalidate_inodes() is able to find
> > invalidated inodes on i_sb_list...
> Caught the previous oops again.
> I (my script) do a sync after the fio run and before umounting /dev/sdb1.
> 
> 
>                             BUG: unable to handle kernel NULL pointer dereference at 00000000000001d8
> IP: [<ffffffff803f3cec>] generic_make_request+0x10a/0x384
> PGD 0 
> Oops: 0000 [#1] SMP 
> last sysfs file: /sys/block/sdb/stat
> CPU 0 
> Modules linked in: igb
> Pid: 1446, comm: bdi-8:16 Not tainted 2.6.30-rc6-bdiflusherV4fix #1 X8DTN
> RIP: 0010:[<ffffffff803f3cec>]  [<ffffffff803f3cec>] generic_make_request+0x10a/0x384
> RSP: 0018:ffff8800bd295a60  EFLAGS: 00010206
> RAX: 0000000000000000 RBX: ffff8800bd405b00 RCX: 0000000002cd1a40
> RDX: 0000000000000008 RSI: 0000000000000576 RDI: ffff8801bf4096c0
> RBP: ffff8800bd405b00 R08: ffffe20006141cf8 R09: ffff8800bd295a98
> R10: 0000000000000000 R11: ffff8800bd405c80 R12: ffff8800bd405b00
> R13: ffff88008bc4c150 R14: 0000000000000008 R15: ffff88008059dda0
> FS:  0000000000000000(0000) GS:ffffc20000000000(0000) knlGS:0000000000000000
> CS:  0010 DS: 0018 ES: 0018 CR0: 000000008005003b
> CR2: 00000000000001d8 CR3: 0000000000201000 CR4: 00000000000006e0
> DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
> Process bdi-8:16 (pid: 1446, threadinfo ffff8800bd294000, task ffff8800bd2375f0)
> Stack:
>  0000000000000008 ffffffff8027a613 00000000bd0f60d0 ffffffffffffffff
>  ffff88007b5cfb10 0000000000000001 ffff88007d504000 ffff880000000006
>  0000000000011200 ffff8800bd61d444 ffffffffffffffcf 0000000000000000
> Call Trace:
>  [<ffffffff8027a613>] ? mempool_alloc+0x59/0x10f
>  [<ffffffff803f4010>] ? submit_bio+0xaa/0xb1
>  [<ffffffff802c6aeb>] ? submit_bh+0xe3/0x103
>  [<ffffffff802c9396>] ? __block_write_full_page+0x1fb/0x2f2
>  [<ffffffff802c7e16>] ? end_buffer_async_write+0x0/0xfb
>  [<ffffffff8027e8d2>] ? __writepage+0xa/0x25
>  [<ffffffff8027f036>] ? write_cache_pages+0x21c/0x338
>  [<ffffffff8027e8c8>] ? __writepage+0x0/0x25
>  [<ffffffff8027f195>] ? do_writepages+0x27/0x2d
>  [<ffffffff802c22c9>] ? __writeback_single_inode+0x159/0x2b3
>  [<ffffffff8071e5ca>] ? thread_return+0x3e/0xaa
>  [<ffffffff8027f267>] ? determine_dirtyable_memory+0xd/0x1d
>  [<ffffffff8027f2dd>] ? get_dirty_limits+0x1d/0x255
>  [<ffffffff802c27c4>] ? generic_sync_wb_inodes+0x1b4/0x220
>  [<ffffffff802c31dd>] ? wb_do_writeback+0x16c/0x215
>  [<ffffffff802c32eb>] ? bdi_writeback_task+0x65/0x10d
>  [<ffffffff8024cc06>] ? autoremove_wake_function+0x0/0x2e
>  [<ffffffff8024cb27>] ? bit_waitqueue+0x10/0xa0
>  [<ffffffff80289257>] ? bdi_start_fn+0x0/0xc0
>  [<ffffffff802892cc>] ? bdi_start_fn+0x75/0xc0
>  [<ffffffff8024c860>] ? kthread+0x54/0x80
>  [<ffffffff8020c97a>] ? child_rip+0xa/0x20
>  [<ffffffff8024c80c>] ? kthread+0x0/0x80
>  [<ffffffff8020c970>] ? child_rip+0x0/0x20
> Code: 39 c8 0f 82 ba 01 00 00 44 89 f0 c7 44 24 14 00 00 00 00 48 c7 44 24 18 ff ff ff ff 48 89 04 24 48 8b 7d 10 48 8b 87  
> RIP  [<ffffffff803f3cec>] generic_make_request+0x10a/0x384

Thanks, I'll get this reproduced and fixed. Can you post the results
you got comparing writeback and vanilla in the meantime?

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 0/11] Per-bdi writeback flusher threads #4
  2009-05-22  8:15                       ` Jens Axboe
@ 2009-05-22 20:44                         ` Jens Axboe
  2009-05-23 19:15                           ` Jens Axboe
  2009-05-25  8:54                           ` Zhang, Yanmin
  1 sibling, 1 reply; 57+ messages in thread
From: Jens Axboe @ 2009-05-22 20:44 UTC (permalink / raw
  To: Zhang, Yanmin
  Cc: Jan Kara, linux-kernel, linux-fsdevel, chris.mason, david, hch,
	akpm

On Fri, May 22 2009, Jens Axboe wrote:
> On Fri, May 22 2009, Zhang, Yanmin wrote:
> > On Thu, 2009-05-21 at 11:10 +0200, Jan Kara wrote:
> > > On Thu 21-05-09 14:33:47, Zhang, Yanmin wrote:
> > > > On Wed, 2009-05-20 at 13:19 +0200, Jens Axboe wrote:
> > > > > On Wed, May 20 2009, Jens Axboe wrote:
> > > > > > On Wed, May 20 2009, Zhang, Yanmin wrote:
> > > > > > > On Wed, 2009-05-20 at 10:54 +0200, Jens Axboe wrote:
> > > > > > > > On Wed, May 20 2009, Jens Axboe wrote:
> > > > > > > > > On Wed, May 20 2009, Zhang, Yanmin wrote:
> > > > > > > > > > On Tue, 2009-05-19 at 08:20 +0200, Jens Axboe wrote:
> > > > > > > > > > > On Tue, May 19 2009, Zhang, Yanmin wrote:
> > > > > > > > > > > > On Mon, 2009-05-18 at 14:19 +0200, Jens Axboe wrote:
> > > > > > > > > > > > > Hi,
> > > > > > > > > > > > >
> > > > > > > > > > > > > This is the fourth version of this patchset. Chances since v3:
> > > > > > > > > > > > >
> > > > > > > > > > > > > - Dropped a prep patch, it has been included in mainline since.
> > > > > > > > > > > > >
> > > > > > > > > > > > > - Add a work-to-do list to the bdi. This is struct bdi_work. Each
> > > > > > > > > > > > >   wb thread will notice and execute work on bdi->work_list. The arguments
> > > > > > > > > > > > >   are which sb (or NULL for all) to flush and how many pages to flush.
> > > > > > > > > > > > >
> > > > > > > > > > > > > - Fix a bug where not all bdi's would end up on the bdi_list, so potentially
> > > > > > > > > > > > >   some data would not be flushed.
> > > > > > > > > > > > >
> > > > > > > > > > > > > - Make wb_kupdated() pass on wbc->older_than_this so we maintain the same
> > > > > > > > > > > > >   behaviour for kupdated flushes.
> > > > > > > > > > > > >
> > > > > > > > > > > > > - Have the wb thread flush first before sleeping, to avoid losing the
> > > > > > > > > > > > >   first flush on lazy register.
> > > > > > > > > > > > >
> > > > > > > > > > > > > - Rebase to newer kernels.
> > > > > > > > > > 
> > > > > > > > > > > I'm attaching two patches - apply #1 to -rc6, and then #2 is a roll-up
> > > > > > > > > > > of the patch series that you can apply next.
> > > > > > > > > > Jens,
> > > > > > > > > > 
> > > > > > > > > > I run into 2 issues with kernel 2.6.30-rc6+BDI_Flusher_V4. Below is one.
> > > > > > > > > > 
> > > > > > > > > > Tue May 19 00:00:00 CST 2009
> > > > > > > > > > BUG: unable to handle kernel NULL pointer dereference at 00000000000001d8
> > > > > > > > > > IP: [<ffffffff803f3c4c>] generic_make_request+0x10a/0x384
> > > > > > > > > > PGD 0
> > > > > > > > > > Oops: 0000 [#1] SMP
> > > > > > > > > > last sysfs file: /sys/block/sdb/stat
> > > > > > > > > > CPU 0
> > > > > > > > > > Modules linked in: igb
> > > > > > > > > > Pid: 1445, comm: bdi-8:16 Not tainted 2.6.30-rc6-bdiflusherv4 #1 X8DTN
> > > > > > > > > > RIP: 0010:[<ffffffff803f3c4c>]  [<ffffffff803f3c4c>] generic_make_request+0x10a/0x384
> > > > > > > > > > RSP: 0018:ffff8800bd04da60  EFLAGS: 00010206
> > > > > > > > > > RAX: 0000000000000000 RBX: ffff8801be45d500 RCX: 00000000038a0df8
> > > > > > > > > > RDX: 0000000000000008 RSI: 0000000000000576 RDI: ffff8801bf408680
> > > > > > > > > > RBP: ffff8801be45d500 R08: ffffe20001ee8140 R09: ffff8800bd04da98
> > > > > > > > > > R10: 0000000000000000 R11: ffff8800bd72eb40 R12: ffff8801be45d500
> > > > > > > > > > R13: ffff88005f51f310 R14: 0000000000000008 R15: ffff8800b15a5458
> > > > > > > > > > FS:  0000000000000000(0000) GS:ffffc20000000000(0000) knlGS:0000000000000000
> > > > > > > > > > CS:  0010 DS: 0018 ES: 0018 CR0: 000000008005003b
> > > > > > > > > > CR2: 00000000000001d8 CR3: 0000000000201000 CR4: 00000000000006e0
> > > > > > > > > > DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> > > > > > > > > > DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
> > > > > > > > > > Process bdi-8:16 (pid: 1445, threadinfo ffff8800bd04c000, task ffff8800bd1b75f0)
> > > > > > > > > > Stack:
> > > > > > > > > >  0000000000000008 ffffffff8027a613 00000000848dc000 ffffffffffffffff
> > > > > > > > > >  ffff8800a8190f50 ffffffff00000012 ffff8800a81938e0 ffffc2000000001b
> > > > > > > > > >  0000000000000000 0000000000000000 ffffe200026f9c30 0000000000000000
> > > > > > > > > > Call Trace:
> > > > > > > > > >  [<ffffffff8027a613>] ? mempool_alloc+0x59/0x10f
> > > > > > > > > >  [<ffffffff803f3f70>] ? submit_bio+0xaa/0xb1
> > > > > > > > > >  [<ffffffff802c6a3f>] ? submit_bh+0xe3/0x103
> > > > > > > > > >  [<ffffffff802c92ea>] ? __block_write_full_page+0x1fb/0x2f2
> > > > > > > > > >  [<ffffffff802c7d6a>] ? end_buffer_async_write+0x0/0xfb
> > > > > > > > > >  [<ffffffff8027e8d2>] ? __writepage+0xa/0x25
> > > > > > > > > >  [<ffffffff8027f036>] ? write_cache_pages+0x21c/0x338
> > > > > > > > > >  [<ffffffff8027e8c8>] ? __writepage+0x0/0x25
> > > > > > > > > >  [<ffffffff8027f195>] ? do_writepages+0x27/0x2d
> > > > > > > > > >  [<ffffffff802c22c1>] ? __writeback_single_inode+0x159/0x2b3
> > > > > > > > > >  [<ffffffff8071e52a>] ? thread_return+0x3e/0xaa
> > > > > > > > > >  [<ffffffff8027f267>] ? determine_dirtyable_memory+0xd/0x1d
> > > > > > > > > >  [<ffffffff8027f2dd>] ? get_dirty_limits+0x1d/0x255
> > > > > > > > > >  [<ffffffff802c27bc>] ? generic_sync_wb_inodes+0x1b4/0x220
> > > > > > > > > >  [<ffffffff802c3130>] ? wb_do_writeback+0x16c/0x215
> > > > > > > > > >  [<ffffffff802c323e>] ? bdi_writeback_task+0x65/0x10d
> > > > > > > > > >  [<ffffffff8024cc06>] ? autoremove_wake_function+0x0/0x2e
> > > > > > > > > >  [<ffffffff8024cb27>] ? bit_waitqueue+0x10/0xa0
> > > > > > > > > >  [<ffffffff80289257>] ? bdi_start_fn+0x0/0xba
> > > > > > > > > >  [<ffffffff802892c6>] ? bdi_start_fn+0x6f/0xba
> > > > > > > > > >  [<ffffffff8024c860>] ? kthread+0x54/0x80
> > > > > > > > > >  [<ffffffff8020c97a>] ? child_rip+0xa/0x20
> > > > > > > > > >  [<ffffffff8024c80c>] ? kthread+0x0/0x80
> > > > > > > > > >  [<ffffffff8020c970>] ? child_rip+0x0/0x20
> > > > > > > > > > 
> > > > > > > 
> > > > > > > 
> > > > > > > > 
> > > > > > > > I found one issue yesterday and one today that could cause issues, not
> > > > > > > > sure it would explain this one. But at least it's worth a try, if it's
> > > > > > > > reproducible.
> > > > > > > I just reproduced it a moment ago manually.
> > > > > > > 
> > > > > > > [global]
> > > > > > > direct=0
> > > > > > > ioengine=mmap
> > > > > > > iodepth=256
> > > > > > > iodepth_batch=32
> > > > > > > size=4G
> > > > > > > bs=4k
> > > > > > > pre_read=1
> > > > > > > overwrite=1
> > > > > > > numjobs=1
> > > > > > > loops=5
> > > > > > > runtime=600
> > > > > > > group_reporting
> > > > > > > directory=/mnt/stp/fiodata
> > > > > > > [job_group0_sub0]
> > > > > > > startdelay=0
> > > > > > > rw=randwrite
> > > > > > > filename=data0/f1:data0/f2
> > > > > > > 
> > > > > > > 
> > > > > > > This fio binary includes my pre_read patch to bring the files into memory.
> > > > > > > 
> > > > > > > Before starting the second test, I dropped the caches with:
> > > > > > > #echo "3" > /proc/sys/vm/drop_caches
> > > > > > > 
> > > > > > > I suspect the drop_caches triggers it.
> > > > > > 
> > > > > > Thanks, will try this. What filesystem and mount options did you use?
> > > > > 
> > > > > No luck reproducing so far.
> > > > All my testing is done with automation scripts. I found the steps below can
> > > > trigger it:
> > > > 1) Use an exclusive partition for the test; for example, I use /dev/sdb1 on
> > > > this machine;
> > > > 2) After running the fio test case, immediately umount and remount the disk:
> > > > #sudo umount /dev/sdb1
> > > > #sudo mount /dev/sdb1 /mnt/stp
> > > > 
> > > > 
> > > > >  In other news, I have finally merged your
> > > > > fio pre_read patch :-)
> > > > Thanks.
> > > > 
> > > > > 
> > > > > I've run it here many times; it works fine with the current writeback
> > > > > branch. Since I did the runs anyway, I did comparisons between mainline
> > > > > and writeback for this test. Each test was run 10 times, averages below.
> > > > > The throughput deviated by less than 1MB/sec, so the results are very stable.
> > > > > CPU usage percentages were always within 0.5%.
> > > > > 
> > > > > Kernel          Throughput       usr         sys        disk util
> > > > > -----------------------------------------------------------------
> > > > > writeback       175MB/sec        17.55%      43.04%     97.80%
> > > > > vanilla         147MB/sec        13.44%      47.33%     85.98%
> > > > > 
> > > > > The results for this test are particularly interesting, since it's very
> > > > > heavy on the writeback side. pdflush/bdi threads were pretty busy. User
> > > > > time is up (even when corrected for the higher throughput), but system time is
> > > > > down a lot. Vanilla isn't close to keeping the disk busy, with the
> > > > > writeback patches we are basically there (100% would be pretty much
> > > > > impossible to reach).
> > > > > 
> > > > > Please try with the patches I sent. If you still see problems, we need
> > > > > to look more closely into that.
> > > > I tried the new patches. They seem to improve fio mmap randwrite 4k by about
> > > > 50% on this machine (single disk). The old panic is gone, but there is a new one.
> > > > 
> > > > [ROOT@LKP-NE01 ~]# BUG: unable to handle kernel NULL pointer dereference at 0000000000000190
> > > > IP: [<ffffffff803270b6>] ext3_invalidatepage+0x18/0x38
> > > > PGD 0
> > > > Oops: 0000 [#1] SMP
> > > > last sysfs file: /sys/block/sdb/stat
> > > > CPU 0
> > > > Modules linked in: igb
> > > > Pid: 7681, comm: umount Not tainted 2.6.30-rc6-bdiflusherv4fix #1 X8DTN
> > > > RIP: 0010:[<ffffffff803270b6>]  [<ffffffff803270b6>] ext3_invalidatepage+0x18/0x38
> > > > RSP: 0018:ffff8801bdc47d20  EFLAGS: 00010246
> > > > RAX: 0000000000000000 RBX: ffffe200058514a0 RCX: 0000000000000002
> > > > RDX: 000000000000000e RSI: 0000000000000000 RDI: ffffe200058514a0
> > > > RBP: 0000000000000000 R08: 0000000000000003 R09: 000000000000000e
> > > > R10: 000000000000000d R11: ffffffff8032709e R12: 0000000000000000
> > > > R13: 0000000000000000 R14: ffff8801bdc47d78 R15: ffff8800bc0dd888
> > > > FS:  00007f48d77237d0(0000) GS:ffffc20000000000(0000) knlGS:0000000000000000
> > > > CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
> > > > CR2: 0000000000000190 CR3: 00000000bc867000 CR4: 00000000000006e0
> > > > DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> > > > DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
> > > > Process umount (pid: 7681, threadinfo ffff8801bdc46000, task ffff8801bde194d0)
> > > > Stack:
> > > >  ffffffff80280ef7 ffffe200058514a0 ffffffff80280ffd ffff8801bdc47d78
> > > >  0000000e0290c538 000000000049d801 0000000000000000 0000000000000000
> > > >  ffffffffffffffff 000000000000000e 0000000000000000 ffffe200058514a0
> > > > Call Trace:
> > > >  [<ffffffff80280ef7>] ? truncate_complete_page+0x1d/0x59
> > > >  [<ffffffff80280ffd>] ? truncate_inode_pages_range+0xca/0x32e
> > > >  [<ffffffff802ba8bc>] ? dispose_list+0x39/0xe4
> > > >  [<ffffffff802bac68>] ? invalidate_inodes+0xf1/0x10f
> > > >  [<ffffffff802ab77b>] ? generic_shutdown_super+0x78/0xde
> > > >  [<ffffffff802ab803>] ? kill_block_super+0x22/0x3a
> > > >  [<ffffffff802abe49>] ? deactivate_super+0x5f/0x76
> > > >  [<ffffffff802bdf2f>] ? sys_umount+0x2cd/0x2fc
> > > >  [<ffffffff8020ba2b>] ? system_call_fastpath+0x16/0x1b
> > > > 
> > > > 
> > > > 
> > > > ext3_invalidatepage => EXT3_JOURNAL(page->mapping->host) dereferences
> > > > EXT3_SB(inode->i_sb), which is NULL here, hence the oops.
> > > > 
> > > > It seems umount triggers the new panic.
> > >   Hmm, unlike the previous oops in ext3, this does not seem to be an ext3
> > > problem (at least at first sight). Somehow invalidate_inodes() is able to find
> > > invalidated inodes on i_sb_list...
> > Caught the previous oops again.
> > I (my script) do a sync after the fio run and before umounting /dev/sdb1.
> > 
> > 
> >                             BUG: unable to handle kernel NULL pointer dereference at 00000000000001d8
> > IP: [<ffffffff803f3cec>] generic_make_request+0x10a/0x384
> > PGD 0 
> > Oops: 0000 [#1] SMP 
> > last sysfs file: /sys/block/sdb/stat
> > CPU 0 
> > Modules linked in: igb
> > Pid: 1446, comm: bdi-8:16 Not tainted 2.6.30-rc6-bdiflusherV4fix #1 X8DTN
> > RIP: 0010:[<ffffffff803f3cec>]  [<ffffffff803f3cec>] generic_make_request+0x10a/0x384
> > RSP: 0018:ffff8800bd295a60  EFLAGS: 00010206
> > RAX: 0000000000000000 RBX: ffff8800bd405b00 RCX: 0000000002cd1a40
> > RDX: 0000000000000008 RSI: 0000000000000576 RDI: ffff8801bf4096c0
> > RBP: ffff8800bd405b00 R08: ffffe20006141cf8 R09: ffff8800bd295a98
> > R10: 0000000000000000 R11: ffff8800bd405c80 R12: ffff8800bd405b00
> > R13: ffff88008bc4c150 R14: 0000000000000008 R15: ffff88008059dda0
> > FS:  0000000000000000(0000) GS:ffffc20000000000(0000) knlGS:0000000000000000
> > CS:  0010 DS: 0018 ES: 0018 CR0: 000000008005003b
> > CR2: 00000000000001d8 CR3: 0000000000201000 CR4: 00000000000006e0
> > DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> > DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
> > Process bdi-8:16 (pid: 1446, threadinfo ffff8800bd294000, task ffff8800bd2375f0)
> > Stack:
> >  0000000000000008 ffffffff8027a613 00000000bd0f60d0 ffffffffffffffff
> >  ffff88007b5cfb10 0000000000000001 ffff88007d504000 ffff880000000006
> >  0000000000011200 ffff8800bd61d444 ffffffffffffffcf 0000000000000000
> > Call Trace:
> >  [<ffffffff8027a613>] ? mempool_alloc+0x59/0x10f
> >  [<ffffffff803f4010>] ? submit_bio+0xaa/0xb1
> >  [<ffffffff802c6aeb>] ? submit_bh+0xe3/0x103
> >  [<ffffffff802c9396>] ? __block_write_full_page+0x1fb/0x2f2
> >  [<ffffffff802c7e16>] ? end_buffer_async_write+0x0/0xfb
> >  [<ffffffff8027e8d2>] ? __writepage+0xa/0x25
> >  [<ffffffff8027f036>] ? write_cache_pages+0x21c/0x338
> >  [<ffffffff8027e8c8>] ? __writepage+0x0/0x25
> >  [<ffffffff8027f195>] ? do_writepages+0x27/0x2d
> >  [<ffffffff802c22c9>] ? __writeback_single_inode+0x159/0x2b3
> >  [<ffffffff8071e5ca>] ? thread_return+0x3e/0xaa
> >  [<ffffffff8027f267>] ? determine_dirtyable_memory+0xd/0x1d
> >  [<ffffffff8027f2dd>] ? get_dirty_limits+0x1d/0x255
> >  [<ffffffff802c27c4>] ? generic_sync_wb_inodes+0x1b4/0x220
> >  [<ffffffff802c31dd>] ? wb_do_writeback+0x16c/0x215
> >  [<ffffffff802c32eb>] ? bdi_writeback_task+0x65/0x10d
> >  [<ffffffff8024cc06>] ? autoremove_wake_function+0x0/0x2e
> >  [<ffffffff8024cb27>] ? bit_waitqueue+0x10/0xa0
> >  [<ffffffff80289257>] ? bdi_start_fn+0x0/0xc0
> >  [<ffffffff802892cc>] ? bdi_start_fn+0x75/0xc0
> >  [<ffffffff8024c860>] ? kthread+0x54/0x80
> >  [<ffffffff8020c97a>] ? child_rip+0xa/0x20
> >  [<ffffffff8024c80c>] ? kthread+0x0/0x80
> >  [<ffffffff8020c970>] ? child_rip+0x0/0x20
> > Code: 39 c8 0f 82 ba 01 00 00 44 89 f0 c7 44 24 14 00 00 00 00 48 c7 44 24 18 ff ff ff ff 48 89 04 24 48 8b 7d 10 48 8b 87  
> > RIP  [<ffffffff803f3cec>] generic_make_request+0x10a/0x384
> 
> Thanks, I'll get this reproduced and fixed. Can you post the results
> you got comparing writeback and vanilla in the meantime?

Please try this combined patch against what you are running now; it
should resolve the issue. It needs a bit more work, but I'm running out
of time today. I'll get it finalized, cleaned up, and integrated. Then
I'll post a new revision of the patch set.

diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index f80afaa..e9fc346 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -50,6 +50,7 @@ struct bdi_work {
 
 	unsigned long sb_data;
 	unsigned long nr_pages;
+	enum writeback_sync_modes sync_mode;
 
 	unsigned long state;
 };
@@ -65,19 +66,22 @@ static inline bool bdi_work_on_stack(struct bdi_work *work)
 }
 
 static inline void bdi_work_init(struct bdi_work *work, struct super_block *sb,
-				 unsigned long nr_pages)
+				 unsigned long nr_pages,
+				 enum writeback_sync_modes sync_mode)
 {
 	INIT_RCU_HEAD(&work->rcu_head);
 	work->sb_data = (unsigned long) sb;
 	work->nr_pages = nr_pages;
+	work->sync_mode = sync_mode;
 	work->state = 0;
 }
 
 static inline void bdi_work_init_on_stack(struct bdi_work *work,
 					  struct super_block *sb,
-					  unsigned long nr_pages)
+					  unsigned long nr_pages,
+				 	  enum writeback_sync_modes sync_mode)
 {
-	bdi_work_init(work, sb, nr_pages);
+	bdi_work_init(work, sb, nr_pages, sync_mode);
 	set_bit(0, &work->state);
 	work->sb_data |= 1UL;
 }
@@ -189,17 +193,17 @@ static void bdi_wait_on_work_start(struct bdi_work *work)
 }
 
 int bdi_start_writeback(struct backing_dev_info *bdi, struct super_block *sb,
-			 long nr_pages)
+			 long nr_pages, enum writeback_sync_modes sync_mode)
 {
 	struct bdi_work work_stack, *work;
 	int ret;
 
 	work = kmalloc(sizeof(*work), GFP_ATOMIC);
 	if (work)
-		bdi_work_init(work, sb, nr_pages);
+		bdi_work_init(work, sb, nr_pages, sync_mode);
 	else {
 		work = &work_stack;
-		bdi_work_init_on_stack(work, sb, nr_pages);
+		bdi_work_init_on_stack(work, sb, nr_pages, sync_mode);
 	}
 
 	ret = bdi_queue_writeback(bdi, work);
@@ -274,11 +278,12 @@ static long wb_kupdated(struct bdi_writeback *wb)
 }
 
 static long __wb_writeback(struct bdi_writeback *wb, long nr_pages,
-			   struct super_block *sb)
+			   struct super_block *sb,
+			   enum writeback_sync_modes sync_mode)
 {
 	struct writeback_control wbc = {
 		.bdi			= wb->bdi,
-		.sync_mode		= WB_SYNC_NONE,
+		.sync_mode		= sync_mode,
 		.older_than_this	= NULL,
 		.range_cyclic		= 1,
 	};
@@ -345,9 +350,10 @@ static long wb_writeback(struct bdi_writeback *wb)
 	while ((work = get_next_work_item(bdi, wb)) != NULL) {
 		struct super_block *sb = bdi_work_sb(work);
 		long nr_pages = work->nr_pages;
+		enum writeback_sync_modes sync_mode = work->sync_mode;
 
 		wb_clear_pending(wb, work);
-		wrote += __wb_writeback(wb, nr_pages, sb);
+		wrote += __wb_writeback(wb, nr_pages, sb, sync_mode);
 	}
 
 	return wrote;
@@ -420,39 +426,36 @@ int bdi_writeback_task(struct bdi_writeback *wb)
 	return 0;
 }
 
-void bdi_writeback_all(struct super_block *sb, long nr_pages)
+/*
+ * Do in-line writeback of all backing devices. Expensive!
+ */
+void bdi_writeback_all(struct super_block *sb, long nr_pages,
+		       enum writeback_sync_modes sync_mode)
 {
-	struct list_head *entry = &bdi_list;
+	struct backing_dev_info *bdi;
 
-	rcu_read_lock();
+	mutex_lock(&bdi_mutex);
 
-	list_for_each_continue_rcu(entry, &bdi_list) {
-		struct backing_dev_info *bdi;
-		struct list_head *next;
-		struct bdi_work *work;
-
-		bdi = list_entry(entry, struct backing_dev_info, bdi_list);
+	list_for_each_entry(bdi, &bdi_list, bdi_list) {
 		if (!bdi_has_dirty_io(bdi))
 			continue;
 
-		/*
-		 * If this allocation fails, we just wakeup the thread and
-		 * let it do kupdate writeback
-		 */
-		work = kmalloc(sizeof(*work), GFP_ATOMIC);
-		if (work)
-			bdi_work_init(work, sb, nr_pages);
+		if (!bdi_wblist_needs_lock(bdi))
+			__wb_writeback(&bdi->wb, 0, sb, sync_mode);
+		else {
+			struct bdi_writeback *wb;
+			int idx;
 
-		/*
-		 * Prepare to start from previous entry if this one gets moved
-		 * to the bdi_pending list.
-		 */
-		next = entry->prev;
-		if (bdi_queue_writeback(bdi, work))
-			entry = next;
+			idx = srcu_read_lock(&bdi->srcu);
+
+			list_for_each_entry_rcu(wb, &bdi->wb_list, list)
+				__wb_writeback(wb, 0, sb, sync_mode);
+
+			srcu_read_unlock(&bdi->srcu, idx);
+		}
 	}
 
-	rcu_read_unlock();
+	mutex_unlock(&bdi_mutex);
 }
 
 /*
@@ -972,9 +975,9 @@ void generic_sync_sb_inodes(struct super_block *sb,
 				struct writeback_control *wbc)
 {
 	if (wbc->bdi)
-		bdi_start_writeback(wbc->bdi, sb, 0);
+		generic_sync_bdi_inodes(sb, wbc);
 	else
-		bdi_writeback_all(sb, 0);
+		bdi_writeback_all(sb, 0, wbc->sync_mode);
 
 	if (wbc->sync_mode == WB_SYNC_ALL) {
 		struct inode *inode, *old_inode = NULL;
diff --git a/include/linux/backing-dev.h b/include/linux/backing-dev.h
index 7c2874f..c9ddca4 100644
--- a/include/linux/backing-dev.h
+++ b/include/linux/backing-dev.h
@@ -15,6 +15,7 @@
 #include <linux/fs.h>
 #include <linux/sched.h>
 #include <linux/srcu.h>
+#include <linux/writeback.h>
 #include <asm/atomic.h>
 
 struct page;
@@ -60,7 +61,6 @@ struct bdi_writeback {
 #define BDI_MAX_FLUSHERS	32
 
 struct backing_dev_info {
-	struct rcu_head rcu_head;
 	struct srcu_struct srcu; /* for wb_list read side protection */
 	struct list_head bdi_list;
 	unsigned long ra_pages;	/* max readahead in PAGE_CACHE_SIZE units */
@@ -105,14 +105,15 @@ int bdi_register(struct backing_dev_info *bdi, struct device *parent,
 int bdi_register_dev(struct backing_dev_info *bdi, dev_t dev);
 void bdi_unregister(struct backing_dev_info *bdi);
 int bdi_start_writeback(struct backing_dev_info *bdi, struct super_block *sb,
-			 long nr_pages);
+			 long nr_pages, enum writeback_sync_modes sync_mode);
 int bdi_writeback_task(struct bdi_writeback *wb);
-void bdi_writeback_all(struct super_block *sb, long nr_pages);
+void bdi_writeback_all(struct super_block *sb, long nr_pages,
+			enum writeback_sync_modes sync_mode);
 void bdi_add_default_flusher_task(struct backing_dev_info *bdi);
 void bdi_add_flusher_task(struct backing_dev_info *bdi);
 int bdi_has_dirty_io(struct backing_dev_info *bdi);
 
-extern spinlock_t bdi_lock;
+extern struct mutex bdi_mutex;
 extern struct list_head bdi_list;
 
 static inline int wb_is_default_task(struct bdi_writeback *wb)
diff --git a/mm/backing-dev.c b/mm/backing-dev.c
index 60578bc..0e09051 100644
--- a/mm/backing-dev.c
+++ b/mm/backing-dev.c
@@ -26,7 +26,7 @@ struct backing_dev_info default_backing_dev_info = {
 EXPORT_SYMBOL_GPL(default_backing_dev_info);
 
 static struct class *bdi_class;
-DEFINE_SPINLOCK(bdi_lock);
+DEFINE_MUTEX(bdi_mutex);
 LIST_HEAD(bdi_list);
 LIST_HEAD(bdi_pending_list);
 
@@ -360,14 +360,15 @@ static int bdi_start_fn(void *ptr)
 	 * Clear pending bit and wakeup anybody waiting to tear us down
 	 */
 	clear_bit(BDI_pending, &bdi->state);
+	smp_mb__after_clear_bit();
 	wake_up_bit(&bdi->state, BDI_pending);
 
 	/*
 	 * Make us discoverable on the bdi_list again
 	 */
-	spin_lock_bh(&bdi_lock);
-	list_add_tail_rcu(&bdi->bdi_list, &bdi_list);
-	spin_unlock_bh(&bdi_lock);
+	mutex_lock(&bdi_mutex);
+	list_add_tail(&bdi->bdi_list, &bdi_list);
+	mutex_unlock(&bdi_mutex);
 
 	ret = bdi_writeback_task(wb);
 
@@ -422,12 +423,6 @@ static int bdi_forker_task(void *ptr)
 		struct backing_dev_info *bdi;
 		struct bdi_writeback *wb;
 
-		prepare_to_wait(&me->wait, &wait, TASK_INTERRUPTIBLE);
-
-		smp_mb();
-		if (list_empty(&bdi_pending_list))
-			schedule();
-
 		/*
 		 * Ideally we'd like not to see any dirty inodes on the
 		 * default_backing_dev_info. Until these are tracked down,
@@ -438,19 +433,23 @@ static int bdi_forker_task(void *ptr)
 		if (wb_has_dirty_io(me) || !list_empty(&me->bdi->work_list))
 			wb_do_writeback(me);
 
+		prepare_to_wait(&me->wait, &wait, TASK_INTERRUPTIBLE);
+
+		mutex_lock(&bdi_mutex);
+		if (list_empty(&bdi_pending_list)) {
+			mutex_unlock(&bdi_mutex);
+			schedule();
+			continue;
+		}
+
 		/*
 		 * This is our real job - check for pending entries in
 		 * bdi_pending_list, and create the tasks that got added
 		 */
-repeat:
-		bdi = NULL;
-		spin_lock_bh(&bdi_lock);
-		if (!list_empty(&bdi_pending_list)) {
-			bdi = list_entry(bdi_pending_list.next,
+		bdi = list_entry(bdi_pending_list.next,
 					 struct backing_dev_info, bdi_list);
-			list_del_init(&bdi->bdi_list);
-		}
-		spin_unlock_bh(&bdi_lock);
+		list_del_init(&bdi->bdi_list);
+		mutex_unlock(&bdi_mutex);
 
 		if (!bdi)
 			continue;
@@ -475,12 +474,11 @@ readd_flush:
 			 * a chance to flush other bdi's to free
 			 * memory.
 			 */
-			spin_lock_bh(&bdi_lock);
+			mutex_lock(&bdi_mutex);
 			list_add_tail(&bdi->bdi_list, &bdi_pending_list);
-			spin_unlock_bh(&bdi_lock);
+			mutex_unlock(&bdi_mutex);
 
 			bdi_flush_io(bdi);
-			goto repeat;
 		}
 	}
 
@@ -488,26 +486,6 @@ readd_flush:
 	return 0;
 }
 
-/*
- * Grace period has now ended, init bdi->bdi_list and add us to the
- * list of bdi's that are pending for task creation. Wake up
- * bdi_forker_task() to finish the job and add us back to the
- * active bdi_list.
- */
-static void bdi_add_to_pending(struct rcu_head *head)
-{
-	struct backing_dev_info *bdi;
-
-	bdi = container_of(head, struct backing_dev_info, rcu_head);
-	INIT_LIST_HEAD(&bdi->bdi_list);
-
-	spin_lock(&bdi_lock);
-	list_add_tail(&bdi->bdi_list, &bdi_pending_list);
-	spin_unlock(&bdi_lock);
-
-	wake_up(&default_backing_dev_info.wb.wait);
-}
-
 static void bdi_add_one_flusher_task(struct backing_dev_info *bdi,
 				     int(*func)(struct backing_dev_info *))
 {
@@ -526,17 +504,15 @@ static void bdi_add_one_flusher_task(struct backing_dev_info *bdi,
 	 * waiting for previous additions to finish.
 	 */
 	if (!func(bdi)) {
-		spin_lock_bh(&bdi_lock);
-		list_del_rcu(&bdi->bdi_list);
-		spin_unlock_bh(&bdi_lock);
+		mutex_lock(&bdi_mutex);
+		list_move_tail(&bdi->bdi_list, &bdi_pending_list);
+		mutex_unlock(&bdi_mutex);
 
 		/*
-		 * We need to wait for the current grace period to end,
-		 * in case others were browsing the bdi_list as well.
-		 * So defer the adding and wakeup to after the RCU
-		 * grace period has ended.
+		 * We are now on the pending list, wake up bdi_forker_task()
+		 * to finish the job and add us back to the active bdi_list
 		 */
-		call_rcu(&bdi->rcu_head, bdi_add_to_pending);
+		wake_up(&default_backing_dev_info.wb.wait);
 	}
 }
 
@@ -593,6 +569,14 @@ int bdi_register(struct backing_dev_info *bdi, struct device *parent,
 		goto exit;
 	}
 
+	mutex_lock(&bdi_mutex);
+	list_add_tail_rcu(&bdi->bdi_list, &bdi_list);
+	mutex_unlock(&bdi_mutex);
+
+	bdi->dev = dev;
+	bdi_debug_register(bdi, dev_name(dev));
+	set_bit(BDI_registered, &bdi->state);
+
 	/*
 	 * Just start the forker thread for our default backing_dev_info,
 	 * and add other bdi's to the list. They will get a thread created
@@ -614,16 +598,16 @@ int bdi_register(struct backing_dev_info *bdi, struct device *parent,
 			ret = -ENOMEM;
 			goto exit;
 		}
+	} else {
+		/*
+		 * start the default thread. this will exit if nothing
+		 * happens for a while, but it's important to start it here
+		 * or we will not notice that we have dirty data there,
+		 * until memory pressure sets in.
+		 */
+		bdi_add_default_flusher_task(bdi);
 	}
 
-	spin_lock_bh(&bdi_lock);
-	list_add_tail_rcu(&bdi->bdi_list, &bdi_list);
-	spin_unlock_bh(&bdi_lock);
-
-	bdi->dev = dev;
-	bdi_debug_register(bdi, dev_name(dev));
-	set_bit(BDI_registered, &bdi->state);
-
 exit:
 	return ret;
 }
@@ -655,15 +639,9 @@ static void bdi_wb_shutdown(struct backing_dev_info *bdi)
 	/*
 	 * Make sure nobody finds us on the bdi_list anymore
 	 */
-	spin_lock_bh(&bdi_lock);
+	mutex_lock(&bdi_mutex);
 	list_del_rcu(&bdi->bdi_list);
-	spin_unlock_bh(&bdi_lock);
-
-	/*
-	 * Now make sure that anybody who is currently looking at us from
-	 * the bdi_list iteration have exited.
-	 */
-	synchronize_rcu();
+	mutex_unlock(&bdi_mutex);
 
 	/*
 	 * Finally, kill the kernel threads. We don't need to be RCU
@@ -689,7 +667,6 @@ int bdi_init(struct backing_dev_info *bdi)
 {
 	int i, err;
 
-	INIT_RCU_HEAD(&bdi->rcu_head);
 	bdi->dev = NULL;
 
 	bdi->min_ratio = 0;
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index de3178a..f1785bb 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -313,9 +313,8 @@ static unsigned int bdi_min_ratio;
 int bdi_set_min_ratio(struct backing_dev_info *bdi, unsigned int min_ratio)
 {
 	int ret = 0;
-	unsigned long flags;
 
-	spin_lock_irqsave(&bdi_lock, flags);
+	mutex_lock(&bdi_mutex);
 	if (min_ratio > bdi->max_ratio) {
 		ret = -EINVAL;
 	} else {
@@ -327,27 +326,26 @@ int bdi_set_min_ratio(struct backing_dev_info *bdi, unsigned int min_ratio)
 			ret = -EINVAL;
 		}
 	}
-	spin_unlock_irqrestore(&bdi_lock, flags);
+	mutex_unlock(&bdi_mutex);
 
 	return ret;
 }
 
 int bdi_set_max_ratio(struct backing_dev_info *bdi, unsigned max_ratio)
 {
-	unsigned long flags;
 	int ret = 0;
 
 	if (max_ratio > 100)
 		return -EINVAL;
 
-	spin_lock_irqsave(&bdi_lock, flags);
+	mutex_lock(&bdi_mutex);
 	if (bdi->min_ratio > max_ratio) {
 		ret = -EINVAL;
 	} else {
 		bdi->max_ratio = max_ratio;
 		bdi->max_prop_frac = (PROP_FRAC_BASE * max_ratio) / 100;
 	}
-	spin_unlock_irqrestore(&bdi_lock, flags);
+	mutex_unlock(&bdi_mutex);
 
 	return ret;
 }
@@ -581,7 +579,7 @@ static void balance_dirty_pages(struct address_space *mapping)
 			(!laptop_mode && (global_page_state(NR_FILE_DIRTY)
 					  + global_page_state(NR_UNSTABLE_NFS)
 					  > background_thresh)))
-		bdi_start_writeback(bdi, NULL, 0);
+		bdi_start_writeback(bdi, NULL, 0, WB_SYNC_NONE);
 }
 
 void set_page_dirty_balance(struct page *page, int page_mkwrite)
@@ -674,7 +672,7 @@ void wakeup_flusher_threads(long nr_pages)
 	if (nr_pages == 0)
 		nr_pages = global_page_state(NR_FILE_DIRTY) +
 				global_page_state(NR_UNSTABLE_NFS);
-	bdi_writeback_all(NULL, nr_pages);
+	bdi_writeback_all(NULL, nr_pages, WB_SYNC_NONE);
 }
 
 static void laptop_timer_fn(unsigned long unused);
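
As an aside, the essential idea here is small enough to model outside the
kernel: the sync mode must travel with the queued work item instead of being
hardcoded by the thread that executes it. Below is a minimal user-space
sketch of that idea (the names mirror the patch, but this is a simplified
stand-in, not the real kernel API):

/*
 * Toy model of the bdi_work sync_mode plumbing. Simplified stand-in,
 * not kernel code: no locking, no lists, no real writeback.
 */
#include <stdio.h>

enum writeback_sync_modes { WB_SYNC_NONE, WB_SYNC_ALL };

struct bdi_work {
	unsigned long nr_pages;
	enum writeback_sync_modes sync_mode;	/* carried with the work item */
};

static void bdi_work_init(struct bdi_work *work, unsigned long nr_pages,
			  enum writeback_sync_modes sync_mode)
{
	work->nr_pages = nr_pages;
	work->sync_mode = sync_mode;
}

/*
 * Flusher side: the mode is read back from the work item, so a
 * WB_SYNC_ALL request queued from the sync path can no longer be
 * silently executed as WB_SYNC_NONE.
 */
static void wb_execute(const struct bdi_work *work)
{
	printf("flushing %lu pages, mode=%s\n", work->nr_pages,
	       work->sync_mode == WB_SYNC_ALL ? "WB_SYNC_ALL" : "WB_SYNC_NONE");
}

int main(void)
{
	struct bdi_work work;

	bdi_work_init(&work, 1024, WB_SYNC_ALL);	/* e.g. from sync(2) */
	wb_execute(&work);
	return 0;
}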

-- 
Jens Axboe


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* Re: [PATCH 0/11] Per-bdi writeback flusher threads #4
  2009-05-22 20:44                         ` Jens Axboe
@ 2009-05-23 19:15                           ` Jens Axboe
  2009-05-25  8:02                             ` Zhang, Yanmin
  0 siblings, 1 reply; 57+ messages in thread
From: Jens Axboe @ 2009-05-23 19:15 UTC (permalink / raw
  To: Zhang, Yanmin
  Cc: Jan Kara, linux-kernel, linux-fsdevel, chris.mason, david, hch,
	akpm

On Fri, May 22 2009, Jens Axboe wrote:
> Please try this combined patch against what you are running now; it
> should resolve the issue. It needs a bit more work, but I'm running out
> of time today. I'll get it finalized, cleaned up, and integrated. Then
> I'll post a new revision of the patch set.
> 

This one has been tested and has a few more tweaks, so please try
it! It should be pretty close to final now; I'll repost the series on
Monday.

diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index f80afaa..33357c3 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -50,6 +50,7 @@ struct bdi_work {
 
 	unsigned long sb_data;
 	unsigned long nr_pages;
+	enum writeback_sync_modes sync_mode;
 
 	unsigned long state;
 };
@@ -65,19 +66,22 @@ static inline bool bdi_work_on_stack(struct bdi_work *work)
 }
 
 static inline void bdi_work_init(struct bdi_work *work, struct super_block *sb,
-				 unsigned long nr_pages)
+				 unsigned long nr_pages,
+				 enum writeback_sync_modes sync_mode)
 {
 	INIT_RCU_HEAD(&work->rcu_head);
 	work->sb_data = (unsigned long) sb;
 	work->nr_pages = nr_pages;
+	work->sync_mode = sync_mode;
 	work->state = 0;
 }
 
 static inline void bdi_work_init_on_stack(struct bdi_work *work,
 					  struct super_block *sb,
-					  unsigned long nr_pages)
+					  unsigned long nr_pages,
+				 	  enum writeback_sync_modes sync_mode)
 {
-	bdi_work_init(work, sb, nr_pages);
+	bdi_work_init(work, sb, nr_pages, sync_mode);
 	set_bit(0, &work->state);
 	work->sb_data |= 1UL;
 }
@@ -136,6 +140,9 @@ static void wb_start_writeback(struct bdi_writeback *wb, struct bdi_work *work)
 		wake_up(&wb->wait);
 }
 
+/*
+ * Add work to bdi work list.
+ */
 static int bdi_queue_writeback(struct backing_dev_info *bdi,
 			       struct bdi_work *work)
 {
@@ -189,17 +196,17 @@ static void bdi_wait_on_work_start(struct bdi_work *work)
 }
 
 int bdi_start_writeback(struct backing_dev_info *bdi, struct super_block *sb,
-			 long nr_pages)
+			 long nr_pages, enum writeback_sync_modes sync_mode)
 {
 	struct bdi_work work_stack, *work;
 	int ret;
 
 	work = kmalloc(sizeof(*work), GFP_ATOMIC);
 	if (work)
-		bdi_work_init(work, sb, nr_pages);
+		bdi_work_init(work, sb, nr_pages, sync_mode);
 	else {
 		work = &work_stack;
-		bdi_work_init_on_stack(work, sb, nr_pages);
+		bdi_work_init_on_stack(work, sb, nr_pages, sync_mode);
 	}
 
 	ret = bdi_queue_writeback(bdi, work);
@@ -273,24 +280,31 @@ static long wb_kupdated(struct bdi_writeback *wb)
 	return wrote;
 }
 
+static inline bool over_bground_thresh(void)
+{
+	unsigned long background_thresh, dirty_thresh;
+
+	get_dirty_limits(&background_thresh, &dirty_thresh, NULL, NULL);
+
+	return (global_page_state(NR_FILE_DIRTY) +
+		global_page_state(NR_UNSTABLE_NFS) >= background_thresh);
+}
+
 static long __wb_writeback(struct bdi_writeback *wb, long nr_pages,
-			   struct super_block *sb)
+			   struct super_block *sb,
+			   enum writeback_sync_modes sync_mode)
 {
 	struct writeback_control wbc = {
 		.bdi			= wb->bdi,
-		.sync_mode		= WB_SYNC_NONE,
+		.sync_mode		= sync_mode,
 		.older_than_this	= NULL,
 		.range_cyclic		= 1,
 	};
 	long wrote = 0;
 
 	for (;;) {
-		unsigned long background_thresh, dirty_thresh;
-
-		get_dirty_limits(&background_thresh, &dirty_thresh, NULL, NULL);
-		if ((global_page_state(NR_FILE_DIRTY) +
-		    global_page_state(NR_UNSTABLE_NFS) < background_thresh) &&
-		    nr_pages <= 0)
+		if (sync_mode == WB_SYNC_NONE && nr_pages <= 0 &&
+		    !over_bground_thresh())
 			break;
 
 		wbc.more_io = 0;
@@ -345,9 +359,10 @@ static long wb_writeback(struct bdi_writeback *wb)
 	while ((work = get_next_work_item(bdi, wb)) != NULL) {
 		struct super_block *sb = bdi_work_sb(work);
 		long nr_pages = work->nr_pages;
+		enum writeback_sync_modes sync_mode = work->sync_mode;
 
 		wb_clear_pending(wb, work);
-		wrote += __wb_writeback(wb, nr_pages, sb);
+		wrote += __wb_writeback(wb, nr_pages, sb, sync_mode);
 	}
 
 	return wrote;
@@ -420,39 +435,36 @@ int bdi_writeback_task(struct bdi_writeback *wb)
 	return 0;
 }
 
-void bdi_writeback_all(struct super_block *sb, long nr_pages)
+/*
+ * Do in-line writeback for all backing devices. Expensive!
+ */
+void bdi_writeback_all(struct super_block *sb, long nr_pages,
+		       enum writeback_sync_modes sync_mode)
 {
-	struct list_head *entry = &bdi_list;
+	struct backing_dev_info *bdi, *tmp;
 
-	rcu_read_lock();
-
-	list_for_each_continue_rcu(entry, &bdi_list) {
-		struct backing_dev_info *bdi;
-		struct list_head *next;
-		struct bdi_work *work;
+	mutex_lock(&bdi_lock);
 
-		bdi = list_entry(entry, struct backing_dev_info, bdi_list);
+	list_for_each_entry_safe(bdi, tmp, &bdi_list, bdi_list) {
 		if (!bdi_has_dirty_io(bdi))
 			continue;
 
-		/*
-		 * If this allocation fails, we just wakeup the thread and
-		 * let it do kupdate writeback
-		 */
-		work = kmalloc(sizeof(*work), GFP_ATOMIC);
-		if (work)
-			bdi_work_init(work, sb, nr_pages);
+		if (!bdi_wblist_needs_lock(bdi))
+			__wb_writeback(&bdi->wb, 0, sb, sync_mode);
+		else {
+			struct bdi_writeback *wb;
+			int idx;
 
-		/*
-		 * Prepare to start from previous entry if this one gets moved
-		 * to the bdi_pending list.
-		 */
-		next = entry->prev;
-		if (bdi_queue_writeback(bdi, work))
-			entry = next;
+			idx = srcu_read_lock(&bdi->srcu);
+
+			list_for_each_entry_rcu(wb, &bdi->wb_list, list)
+				__wb_writeback(wb, 0, sb, sync_mode);
+
+			srcu_read_unlock(&bdi->srcu, idx);
+		}
 	}
 
-	rcu_read_unlock();
+	mutex_unlock(&bdi_lock);
 }
 
 /*
@@ -972,9 +984,9 @@ void generic_sync_sb_inodes(struct super_block *sb,
 				struct writeback_control *wbc)
 {
 	if (wbc->bdi)
-		bdi_start_writeback(wbc->bdi, sb, 0);
+		generic_sync_bdi_inodes(sb, wbc);
 	else
-		bdi_writeback_all(sb, 0);
+		bdi_writeback_all(sb, 0, wbc->sync_mode);
 
 	if (wbc->sync_mode == WB_SYNC_ALL) {
 		struct inode *inode, *old_inode = NULL;
diff --git a/include/linux/backing-dev.h b/include/linux/backing-dev.h
index 7c2874f..0b20d4b 100644
--- a/include/linux/backing-dev.h
+++ b/include/linux/backing-dev.h
@@ -15,6 +15,7 @@
 #include <linux/fs.h>
 #include <linux/sched.h>
 #include <linux/srcu.h>
+#include <linux/writeback.h>
 #include <asm/atomic.h>
 
 struct page;
@@ -60,7 +61,6 @@ struct bdi_writeback {
 #define BDI_MAX_FLUSHERS	32
 
 struct backing_dev_info {
-	struct rcu_head rcu_head;
 	struct srcu_struct srcu; /* for wb_list read side protection */
 	struct list_head bdi_list;
 	unsigned long ra_pages;	/* max readahead in PAGE_CACHE_SIZE units */
@@ -105,14 +105,15 @@ int bdi_register(struct backing_dev_info *bdi, struct device *parent,
 int bdi_register_dev(struct backing_dev_info *bdi, dev_t dev);
 void bdi_unregister(struct backing_dev_info *bdi);
 int bdi_start_writeback(struct backing_dev_info *bdi, struct super_block *sb,
-			 long nr_pages);
+			 long nr_pages, enum writeback_sync_modes sync_mode);
 int bdi_writeback_task(struct bdi_writeback *wb);
-void bdi_writeback_all(struct super_block *sb, long nr_pages);
+void bdi_writeback_all(struct super_block *sb, long nr_pages,
+			enum writeback_sync_modes sync_mode);
 void bdi_add_default_flusher_task(struct backing_dev_info *bdi);
 void bdi_add_flusher_task(struct backing_dev_info *bdi);
 int bdi_has_dirty_io(struct backing_dev_info *bdi);
 
-extern spinlock_t bdi_lock;
+extern struct mutex bdi_lock;
 extern struct list_head bdi_list;
 
 static inline int wb_is_default_task(struct bdi_writeback *wb)
diff --git a/mm/backing-dev.c b/mm/backing-dev.c
index 60578bc..3ce3b57 100644
--- a/mm/backing-dev.c
+++ b/mm/backing-dev.c
@@ -26,7 +26,7 @@ struct backing_dev_info default_backing_dev_info = {
 EXPORT_SYMBOL_GPL(default_backing_dev_info);
 
 static struct class *bdi_class;
-DEFINE_SPINLOCK(bdi_lock);
+DEFINE_MUTEX(bdi_lock);
 LIST_HEAD(bdi_list);
 LIST_HEAD(bdi_pending_list);
 
@@ -360,14 +360,15 @@ static int bdi_start_fn(void *ptr)
 	 * Clear pending bit and wakeup anybody waiting to tear us down
 	 */
 	clear_bit(BDI_pending, &bdi->state);
+	smp_mb__after_clear_bit();
 	wake_up_bit(&bdi->state, BDI_pending);
 
 	/*
 	 * Make us discoverable on the bdi_list again
 	 */
-	spin_lock_bh(&bdi_lock);
-	list_add_tail_rcu(&bdi->bdi_list, &bdi_list);
-	spin_unlock_bh(&bdi_lock);
+	mutex_lock(&bdi_lock);
+	list_add_tail(&bdi->bdi_list, &bdi_list);
+	mutex_unlock(&bdi_lock);
 
 	ret = bdi_writeback_task(wb);
 
@@ -419,15 +420,9 @@ static int bdi_forker_task(void *ptr)
 	bdi_task_init(me->bdi, me);
 
 	for (;;) {
-		struct backing_dev_info *bdi;
+		struct backing_dev_info *bdi, *tmp;
 		struct bdi_writeback *wb;
 
-		prepare_to_wait(&me->wait, &wait, TASK_INTERRUPTIBLE);
-
-		smp_mb();
-		if (list_empty(&bdi_pending_list))
-			schedule();
-
 		/*
 		 * Ideally we'd like not to see any dirty inodes on the
 		 * default_backing_dev_info. Until these are tracked down,
@@ -438,19 +433,39 @@ static int bdi_forker_task(void *ptr)
 		if (wb_has_dirty_io(me) || !list_empty(&me->bdi->work_list))
 			wb_do_writeback(me);
 
+		prepare_to_wait(&me->wait, &wait, TASK_INTERRUPTIBLE);
+
+		mutex_lock(&bdi_lock);
+
+		/*
+		 * Check if any existing bdi's have dirty data without
+		 * a thread registered. If so, set that up.
+		 */
+		list_for_each_entry_safe(bdi, tmp, &bdi_list, bdi_list) {
+			if (!list_empty(&bdi->wb_list) ||
+			    !bdi_has_dirty_io(bdi))
+				continue;
+
+			bdi_add_default_flusher_task(bdi);
+		}
+
+		if (list_empty(&bdi_pending_list)) {
+			unsigned long wait;
+
+			mutex_unlock(&bdi_lock);
+			wait = msecs_to_jiffies(dirty_writeback_interval * 10);
+			schedule_timeout(wait);
+			continue;
+		}
+
 		/*
 		 * This is our real job - check for pending entries in
 		 * bdi_pending_list, and create the tasks that got added
 		 */
-repeat:
-		bdi = NULL;
-		spin_lock_bh(&bdi_lock);
-		if (!list_empty(&bdi_pending_list)) {
-			bdi = list_entry(bdi_pending_list.next,
+		bdi = list_entry(bdi_pending_list.next,
 					 struct backing_dev_info, bdi_list);
-			list_del_init(&bdi->bdi_list);
-		}
-		spin_unlock_bh(&bdi_lock);
+		list_del_init(&bdi->bdi_list);
+		mutex_unlock(&bdi_lock);
 
 		if (!bdi)
 			continue;
@@ -475,12 +490,11 @@ readd_flush:
 			 * a chance to flush other bdi's to free
 			 * memory.
 			 */
-			spin_lock_bh(&bdi_lock);
+			mutex_lock(&bdi_lock);
 			list_add_tail(&bdi->bdi_list, &bdi_pending_list);
-			spin_unlock_bh(&bdi_lock);
+			mutex_unlock(&bdi_lock);
 
 			bdi_flush_io(bdi);
-			goto repeat;
 		}
 	}
 
@@ -489,25 +503,8 @@ readd_flush:
 }
 
 /*
- * Grace period has now ended, init bdi->bdi_list and add us to the
- * list of bdi's that are pending for task creation. Wake up
- * bdi_forker_task() to finish the job and add us back to the
- * active bdi_list.
+ * bdi_lock held on entry
  */
-static void bdi_add_to_pending(struct rcu_head *head)
-{
-	struct backing_dev_info *bdi;
-
-	bdi = container_of(head, struct backing_dev_info, rcu_head);
-	INIT_LIST_HEAD(&bdi->bdi_list);
-
-	spin_lock(&bdi_lock);
-	list_add_tail(&bdi->bdi_list, &bdi_pending_list);
-	spin_unlock(&bdi_lock);
-
-	wake_up(&default_backing_dev_info.wb.wait);
-}
-
 static void bdi_add_one_flusher_task(struct backing_dev_info *bdi,
 				     int(*func)(struct backing_dev_info *))
 {
@@ -526,24 +523,22 @@ static void bdi_add_one_flusher_task(struct backing_dev_info *bdi,
 	 * waiting for previous additions to finish.
 	 */
 	if (!func(bdi)) {
-		spin_lock_bh(&bdi_lock);
-		list_del_rcu(&bdi->bdi_list);
-		spin_unlock_bh(&bdi_lock);
+		list_move_tail(&bdi->bdi_list, &bdi_pending_list);
 
 		/*
-		 * We need to wait for the current grace period to end,
-		 * in case others were browsing the bdi_list as well.
-		 * So defer the adding and wakeup to after the RCU
-		 * grace period has ended.
+		 * We are now on the pending list, wake up bdi_forker_task()
+		 * to finish the job and add us back to the active bdi_list
 		 */
-		call_rcu(&bdi->rcu_head, bdi_add_to_pending);
+		wake_up(&default_backing_dev_info.wb.wait);
 	}
 }
 
 static int flusher_add_helper_block(struct backing_dev_info *bdi)
 {
+	mutex_unlock(&bdi_lock);
 	wait_on_bit_lock(&bdi->state, BDI_pending, bdi_sched_wait,
 				TASK_UNINTERRUPTIBLE);
+	mutex_lock(&bdi_lock);
 	return 0;
 }
 
@@ -571,7 +566,9 @@ void bdi_add_default_flusher_task(struct backing_dev_info *bdi)
  */
 void bdi_add_flusher_task(struct backing_dev_info *bdi)
 {
+	mutex_lock(&bdi_lock);
 	bdi_add_one_flusher_task(bdi, flusher_add_helper_block);
+	mutex_unlock(&bdi_lock);
 }
 EXPORT_SYMBOL(bdi_add_flusher_task);
 
@@ -593,6 +590,14 @@ int bdi_register(struct backing_dev_info *bdi, struct device *parent,
 		goto exit;
 	}
 
+	mutex_lock(&bdi_lock);
+	list_add_tail(&bdi->bdi_list, &bdi_list);
+	mutex_unlock(&bdi_lock);
+
+	bdi->dev = dev;
+	bdi_debug_register(bdi, dev_name(dev));
+	set_bit(BDI_registered, &bdi->state);
+
 	/*
 	 * Just start the forker thread for our default backing_dev_info,
 	 * and add other bdi's to the list. They will get a thread created
@@ -616,14 +621,6 @@ int bdi_register(struct backing_dev_info *bdi, struct device *parent,
 		}
 	}
 
-	spin_lock_bh(&bdi_lock);
-	list_add_tail_rcu(&bdi->bdi_list, &bdi_list);
-	spin_unlock_bh(&bdi_lock);
-
-	bdi->dev = dev;
-	bdi_debug_register(bdi, dev_name(dev));
-	set_bit(BDI_registered, &bdi->state);
-
 exit:
 	return ret;
 }
@@ -655,15 +652,9 @@ static void bdi_wb_shutdown(struct backing_dev_info *bdi)
 	/*
 	 * Make sure nobody finds us on the bdi_list anymore
 	 */
-	spin_lock_bh(&bdi_lock);
-	list_del_rcu(&bdi->bdi_list);
-	spin_unlock_bh(&bdi_lock);
-
-	/*
-	 * Now make sure that anybody who is currently looking at us from
-	 * the bdi_list iteration have exited.
-	 */
-	synchronize_rcu();
+	mutex_lock(&bdi_lock);
+	list_del(&bdi->bdi_list);
+	mutex_unlock(&bdi_lock);
 
 	/*
 	 * Finally, kill the kernel threads. We don't need to be RCU
@@ -689,7 +680,6 @@ int bdi_init(struct backing_dev_info *bdi)
 {
 	int i, err;
 
-	INIT_RCU_HEAD(&bdi->rcu_head);
 	bdi->dev = NULL;
 
 	bdi->min_ratio = 0;
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index de3178a..7dd7de7 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -313,9 +313,8 @@ static unsigned int bdi_min_ratio;
 int bdi_set_min_ratio(struct backing_dev_info *bdi, unsigned int min_ratio)
 {
 	int ret = 0;
-	unsigned long flags;
 
-	spin_lock_irqsave(&bdi_lock, flags);
+	mutex_lock(&bdi_lock);
 	if (min_ratio > bdi->max_ratio) {
 		ret = -EINVAL;
 	} else {
@@ -327,27 +326,26 @@ int bdi_set_min_ratio(struct backing_dev_info *bdi, unsigned int min_ratio)
 			ret = -EINVAL;
 		}
 	}
-	spin_unlock_irqrestore(&bdi_lock, flags);
+	mutex_unlock(&bdi_lock);
 
 	return ret;
 }
 
 int bdi_set_max_ratio(struct backing_dev_info *bdi, unsigned max_ratio)
 {
-	unsigned long flags;
 	int ret = 0;
 
 	if (max_ratio > 100)
 		return -EINVAL;
 
-	spin_lock_irqsave(&bdi_lock, flags);
+	mutex_lock(&bdi_lock);
 	if (bdi->min_ratio > max_ratio) {
 		ret = -EINVAL;
 	} else {
 		bdi->max_ratio = max_ratio;
 		bdi->max_prop_frac = (PROP_FRAC_BASE * max_ratio) / 100;
 	}
-	spin_unlock_irqrestore(&bdi_lock, flags);
+	mutex_unlock(&bdi_lock);
 
 	return ret;
 }
@@ -581,7 +579,7 @@ static void balance_dirty_pages(struct address_space *mapping)
 			(!laptop_mode && (global_page_state(NR_FILE_DIRTY)
 					  + global_page_state(NR_UNSTABLE_NFS)
 					  > background_thresh)))
-		bdi_start_writeback(bdi, NULL, 0);
+		bdi_start_writeback(bdi, NULL, 0, WB_SYNC_NONE);
 }
 
 void set_page_dirty_balance(struct page *page, int page_mkwrite)
@@ -674,7 +672,7 @@ void wakeup_flusher_threads(long nr_pages)
 	if (nr_pages == 0)
 		nr_pages = global_page_state(NR_FILE_DIRTY) +
 				global_page_state(NR_UNSTABLE_NFS);
-	bdi_writeback_all(NULL, nr_pages);
+	bdi_writeback_all(NULL, nr_pages, WB_SYNC_NONE);
 }
 
 static void laptop_timer_fn(unsigned long unused);
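
For reference, the new stop condition in __wb_writeback can be modelled in
isolation. This is a user-space sketch only, with the page counters passed
in as plain parameters instead of being read via global_page_state():

/*
 * Toy model of the __wb_writeback termination rule. Simplified
 * stand-in, not kernel code.
 */
#include <stdbool.h>
#include <stdio.h>

enum writeback_sync_modes { WB_SYNC_NONE, WB_SYNC_ALL };

static bool over_bground_thresh(unsigned long nr_dirty,
				unsigned long nr_unstable,
				unsigned long background_thresh)
{
	return nr_dirty + nr_unstable >= background_thresh;
}

/*
 * WB_SYNC_NONE work may stop once the requested page quota is done and
 * we are back under the background threshold; WB_SYNC_ALL must keep
 * going regardless, which is what keeps sync(2) from being cut short.
 */
static bool should_stop(enum writeback_sync_modes sync_mode, long nr_pages,
			unsigned long nr_dirty, unsigned long nr_unstable,
			unsigned long background_thresh)
{
	return sync_mode == WB_SYNC_NONE && nr_pages <= 0 &&
	       !over_bground_thresh(nr_dirty, nr_unstable, background_thresh);
}

int main(void)
{
	/* quota done, below threshold: background writeback stops (1) */
	printf("%d\n", should_stop(WB_SYNC_NONE, 0, 100, 0, 1000));
	/* same state, but a WB_SYNC_ALL request keeps writing (0) */
	printf("%d\n", should_stop(WB_SYNC_ALL, 0, 100, 0, 1000));
	return 0;
}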

-- 
Jens Axboe


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* Re: [PATCH 0/11] Per-bdi writeback flusher threads #4
  2009-05-23 19:15                           ` Jens Axboe
@ 2009-05-25  8:02                             ` Zhang, Yanmin
  2009-05-25  8:06                               ` Jens Axboe
  2009-05-25  8:43                               ` Zhang, Yanmin
  0 siblings, 2 replies; 57+ messages in thread
From: Zhang, Yanmin @ 2009-05-25  8:02 UTC (permalink / raw
  To: Jens Axboe
  Cc: Jan Kara, linux-kernel, linux-fsdevel, chris.mason, david, hch,
	akpm

On Sat, 2009-05-23 at 21:15 +0200, Jens Axboe wrote:
> On Fri, May 22 2009, Jens Axboe wrote:
> > Please try this combined patch against what you are running now; it
> > should resolve the issue. It needs a bit more work, but I'm running out
> > of time today. I'll get it finalized, cleaned up, and integrated. Then
> > I'll post a new revision of the patch set.
> > 
> 
> This one has been tested and has a few more tweaks, so please try
> it! It should be pretty close to final now; I'll repost the series on
> Monday.
I ran the workload 10 times and didn't trigger it, so the bug is
fixed.

yanmin

> 
> diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
> index f80afaa..33357c3 100644
> --- a/fs/fs-writeback.c
> +++ b/fs/fs-writeback.c
> @@ -50,6 +50,7 @@ struct bdi_work {
>  
>  	unsigned long sb_data;
>  	unsigned long nr_pages;
> +	enum writeback_sync_modes sync_mode;
>  
>  	unsigned long state;
>  };
> @@ -65,19 +66,22 @@ static inline bool bdi_work_on_stack(struct bdi_work *work)
>  }
>  
>  static inline void bdi_work_init(struct bdi_work *work, struct super_block *sb,
> -				 unsigned long nr_pages)
> +				 unsigned long nr_pages,
> +				 enum writeback_sync_modes sync_mode)
>  {
>  	INIT_RCU_HEAD(&work->rcu_head);
>  	work->sb_data = (unsigned long) sb;
>  	work->nr_pages = nr_pages;
> +	work->sync_mode = sync_mode;
>  	work->state = 0;
>  }
>  
>  static inline void bdi_work_init_on_stack(struct bdi_work *work,
>  					  struct super_block *sb,
> -					  unsigned long nr_pages)
> +					  unsigned long nr_pages,
> +				 	  enum writeback_sync_modes sync_mode)
>  {
> -	bdi_work_init(work, sb, nr_pages);
> +	bdi_work_init(work, sb, nr_pages, sync_mode);
>  	set_bit(0, &work->state);
>  	work->sb_data |= 1UL;
>  }
> @@ -136,6 +140,9 @@ static void wb_start_writeback(struct bdi_writeback *wb, struct bdi_work *work)
>  		wake_up(&wb->wait);
>  }
>  
> +/*
> + * Add work to bdi work list.
> + */
>  static int bdi_queue_writeback(struct backing_dev_info *bdi,
>  			       struct bdi_work *work)
>  {
> @@ -189,17 +196,17 @@ static void bdi_wait_on_work_start(struct bdi_work *work)
>  }
>  
>  int bdi_start_writeback(struct backing_dev_info *bdi, struct super_block *sb,
> -			 long nr_pages)
> +			 long nr_pages, enum writeback_sync_modes sync_mode)
>  {
>  	struct bdi_work work_stack, *work;
>  	int ret;
>  
>  	work = kmalloc(sizeof(*work), GFP_ATOMIC);
>  	if (work)
> -		bdi_work_init(work, sb, nr_pages);
> +		bdi_work_init(work, sb, nr_pages, sync_mode);
>  	else {
>  		work = &work_stack;
> -		bdi_work_init_on_stack(work, sb, nr_pages);
> +		bdi_work_init_on_stack(work, sb, nr_pages, sync_mode);
>  	}
>  
>  	ret = bdi_queue_writeback(bdi, work);
> @@ -273,24 +280,31 @@ static long wb_kupdated(struct bdi_writeback *wb)
>  	return wrote;
>  }
>  
> +static inline bool over_bground_thresh(void)
> +{
> +	unsigned long background_thresh, dirty_thresh;
> +
> +	get_dirty_limits(&background_thresh, &dirty_thresh, NULL, NULL);
> +
> +	return (global_page_state(NR_FILE_DIRTY) +
> +		global_page_state(NR_UNSTABLE_NFS) >= background_thresh);
> +}
> +
>  static long __wb_writeback(struct bdi_writeback *wb, long nr_pages,
> -			   struct super_block *sb)
> +			   struct super_block *sb,
> +			   enum writeback_sync_modes sync_mode)
>  {
>  	struct writeback_control wbc = {
>  		.bdi			= wb->bdi,
> -		.sync_mode		= WB_SYNC_NONE,
> +		.sync_mode		= sync_mode,
>  		.older_than_this	= NULL,
>  		.range_cyclic		= 1,
>  	};
>  	long wrote = 0;
>  
>  	for (;;) {
> -		unsigned long background_thresh, dirty_thresh;
> -
> -		get_dirty_limits(&background_thresh, &dirty_thresh, NULL, NULL);
> -		if ((global_page_state(NR_FILE_DIRTY) +
> -		    global_page_state(NR_UNSTABLE_NFS) < background_thresh) &&
> -		    nr_pages <= 0)
> +		if (sync_mode == WB_SYNC_NONE && nr_pages <= 0 &&
> +		    !over_bground_thresh())
>  			break;
>  
>  		wbc.more_io = 0;
> @@ -345,9 +359,10 @@ static long wb_writeback(struct bdi_writeback *wb)
>  	while ((work = get_next_work_item(bdi, wb)) != NULL) {
>  		struct super_block *sb = bdi_work_sb(work);
>  		long nr_pages = work->nr_pages;
> +		enum writeback_sync_modes sync_mode = work->sync_mode;
>  
>  		wb_clear_pending(wb, work);
> -		wrote += __wb_writeback(wb, nr_pages, sb);
> +		wrote += __wb_writeback(wb, nr_pages, sb, sync_mode);
>  	}
>  
>  	return wrote;
> @@ -420,39 +435,36 @@ int bdi_writeback_task(struct bdi_writeback *wb)
>  	return 0;
>  }
>  
> -void bdi_writeback_all(struct super_block *sb, long nr_pages)
> +/*
> + * Do in-line writeback for all backing devices. Expensive!
> + */
> +void bdi_writeback_all(struct super_block *sb, long nr_pages,
> +		       enum writeback_sync_modes sync_mode)
>  {
> -	struct list_head *entry = &bdi_list;
> +	struct backing_dev_info *bdi, *tmp;
>  
> -	rcu_read_lock();
> -
> -	list_for_each_continue_rcu(entry, &bdi_list) {
> -		struct backing_dev_info *bdi;
> -		struct list_head *next;
> -		struct bdi_work *work;
> +	mutex_lock(&bdi_lock);
>  
> -		bdi = list_entry(entry, struct backing_dev_info, bdi_list);
> +	list_for_each_entry_safe(bdi, tmp, &bdi_list, bdi_list) {
>  		if (!bdi_has_dirty_io(bdi))
>  			continue;
>  
> -		/*
> -		 * If this allocation fails, we just wakeup the thread and
> -		 * let it do kupdate writeback
> -		 */
> -		work = kmalloc(sizeof(*work), GFP_ATOMIC);
> -		if (work)
> -			bdi_work_init(work, sb, nr_pages);
> +		if (!bdi_wblist_needs_lock(bdi))
> +			__wb_writeback(&bdi->wb, 0, sb, sync_mode);
> +		else {
> +			struct bdi_writeback *wb;
> +			int idx;
>  
> -		/*
> -		 * Prepare to start from previous entry if this one gets moved
> -		 * to the bdi_pending list.
> -		 */
> -		next = entry->prev;
> -		if (bdi_queue_writeback(bdi, work))
> -			entry = next;
> +			idx = srcu_read_lock(&bdi->srcu);
> +
> +			list_for_each_entry_rcu(wb, &bdi->wb_list, list)
> +				__wb_writeback(wb, 0, sb, sync_mode);
> +
> +			srcu_read_unlock(&bdi->srcu, idx);
> +		}
>  	}
>  
> -	rcu_read_unlock();
> +	mutex_unlock(&bdi_lock);
>  }
>  
>  /*
> @@ -972,9 +984,9 @@ void generic_sync_sb_inodes(struct super_block *sb,
>  				struct writeback_control *wbc)
>  {
>  	if (wbc->bdi)
> -		bdi_start_writeback(wbc->bdi, sb, 0);
> +		generic_sync_bdi_inodes(sb, wbc);
>  	else
> -		bdi_writeback_all(sb, 0);
> +		bdi_writeback_all(sb, 0, wbc->sync_mode);
>  
>  	if (wbc->sync_mode == WB_SYNC_ALL) {
>  		struct inode *inode, *old_inode = NULL;
> diff --git a/include/linux/backing-dev.h b/include/linux/backing-dev.h
> index 7c2874f..0b20d4b 100644
> --- a/include/linux/backing-dev.h
> +++ b/include/linux/backing-dev.h
> @@ -15,6 +15,7 @@
>  #include <linux/fs.h>
>  #include <linux/sched.h>
>  #include <linux/srcu.h>
> +#include <linux/writeback.h>
>  #include <asm/atomic.h>
>  
>  struct page;
> @@ -60,7 +61,6 @@ struct bdi_writeback {
>  #define BDI_MAX_FLUSHERS	32
>  
>  struct backing_dev_info {
> -	struct rcu_head rcu_head;
>  	struct srcu_struct srcu; /* for wb_list read side protection */
>  	struct list_head bdi_list;
>  	unsigned long ra_pages;	/* max readahead in PAGE_CACHE_SIZE units */
> @@ -105,14 +105,15 @@ int bdi_register(struct backing_dev_info *bdi, struct device *parent,
>  int bdi_register_dev(struct backing_dev_info *bdi, dev_t dev);
>  void bdi_unregister(struct backing_dev_info *bdi);
>  int bdi_start_writeback(struct backing_dev_info *bdi, struct super_block *sb,
> -			 long nr_pages);
> +			 long nr_pages, enum writeback_sync_modes sync_mode);
>  int bdi_writeback_task(struct bdi_writeback *wb);
> -void bdi_writeback_all(struct super_block *sb, long nr_pages);
> +void bdi_writeback_all(struct super_block *sb, long nr_pages,
> +			enum writeback_sync_modes sync_mode);
>  void bdi_add_default_flusher_task(struct backing_dev_info *bdi);
>  void bdi_add_flusher_task(struct backing_dev_info *bdi);
>  int bdi_has_dirty_io(struct backing_dev_info *bdi);
>  
> -extern spinlock_t bdi_lock;
> +extern struct mutex bdi_lock;
>  extern struct list_head bdi_list;
>  
>  static inline int wb_is_default_task(struct bdi_writeback *wb)
> diff --git a/mm/backing-dev.c b/mm/backing-dev.c
> index 60578bc..3ce3b57 100644
> --- a/mm/backing-dev.c
> +++ b/mm/backing-dev.c
> @@ -26,7 +26,7 @@ struct backing_dev_info default_backing_dev_info = {
>  EXPORT_SYMBOL_GPL(default_backing_dev_info);
>  
>  static struct class *bdi_class;
> -DEFINE_SPINLOCK(bdi_lock);
> +DEFINE_MUTEX(bdi_lock);
>  LIST_HEAD(bdi_list);
>  LIST_HEAD(bdi_pending_list);
>  
> @@ -360,14 +360,15 @@ static int bdi_start_fn(void *ptr)
>  	 * Clear pending bit and wakeup anybody waiting to tear us down
>  	 */
>  	clear_bit(BDI_pending, &bdi->state);
> +	smp_mb__after_clear_bit();
>  	wake_up_bit(&bdi->state, BDI_pending);
>  
>  	/*
>  	 * Make us discoverable on the bdi_list again
>  	 */
> -	spin_lock_bh(&bdi_lock);
> -	list_add_tail_rcu(&bdi->bdi_list, &bdi_list);
> -	spin_unlock_bh(&bdi_lock);
> +	mutex_lock(&bdi_lock);
> +	list_add_tail(&bdi->bdi_list, &bdi_list);
> +	mutex_unlock(&bdi_lock);
>  
>  	ret = bdi_writeback_task(wb);
>  
> @@ -419,15 +420,9 @@ static int bdi_forker_task(void *ptr)
>  	bdi_task_init(me->bdi, me);
>  
>  	for (;;) {
> -		struct backing_dev_info *bdi;
> +		struct backing_dev_info *bdi, *tmp;
>  		struct bdi_writeback *wb;
>  
> -		prepare_to_wait(&me->wait, &wait, TASK_INTERRUPTIBLE);
> -
> -		smp_mb();
> -		if (list_empty(&bdi_pending_list))
> -			schedule();
> -
>  		/*
>  		 * Ideally we'd like not to see any dirty inodes on the
>  		 * default_backing_dev_info. Until these are tracked down,
> @@ -438,19 +433,39 @@ static int bdi_forker_task(void *ptr)
>  		if (wb_has_dirty_io(me) || !list_empty(&me->bdi->work_list))
>  			wb_do_writeback(me);
>  
> +		prepare_to_wait(&me->wait, &wait, TASK_INTERRUPTIBLE);
> +
> +		mutex_lock(&bdi_lock);
> +
> +		/*
> +		 * Check if any existing bdi's have dirty data without
> +		 * a thread registered. If so, set that up.
> +		 */
> +		list_for_each_entry_safe(bdi, tmp, &bdi_list, bdi_list) {
> +			if (!list_empty(&bdi->wb_list) ||
> +			    !bdi_has_dirty_io(bdi))
> +				continue;
> +
> +			bdi_add_default_flusher_task(bdi);
> +		}
> +
> +		if (list_empty(&bdi_pending_list)) {
> +			unsigned long wait;
> +
> +			mutex_unlock(&bdi_lock);
> +			wait = msecs_to_jiffies(dirty_writeback_interval * 10);
> +			schedule_timeout(wait);
> +			continue;
> +		}
> +
>  		/*
>  		 * This is our real job - check for pending entries in
>  		 * bdi_pending_list, and create the tasks that got added
>  		 */
> -repeat:
> -		bdi = NULL;
> -		spin_lock_bh(&bdi_lock);
> -		if (!list_empty(&bdi_pending_list)) {
> -			bdi = list_entry(bdi_pending_list.next,
> +		bdi = list_entry(bdi_pending_list.next,
>  					 struct backing_dev_info, bdi_list);
> -			list_del_init(&bdi->bdi_list);
> -		}
> -		spin_unlock_bh(&bdi_lock);
> +		list_del_init(&bdi->bdi_list);
> +		mutex_unlock(&bdi_lock);
>  
>  		if (!bdi)
>  			continue;
> @@ -475,12 +490,11 @@ readd_flush:
>  			 * a chance to flush other bdi's to free
>  			 * memory.
>  			 */
> -			spin_lock_bh(&bdi_lock);
> +			mutex_lock(&bdi_lock);
>  			list_add_tail(&bdi->bdi_list, &bdi_pending_list);
> -			spin_unlock_bh(&bdi_lock);
> +			mutex_unlock(&bdi_lock);
>  
>  			bdi_flush_io(bdi);
> -			goto repeat;
>  		}
>  	}
>  
> @@ -489,25 +503,8 @@ readd_flush:
>  }
>  
>  /*
> - * Grace period has now ended, init bdi->bdi_list and add us to the
> - * list of bdi's that are pending for task creation. Wake up
> - * bdi_forker_task() to finish the job and add us back to the
> - * active bdi_list.
> + * bdi_lock held on entry
>   */
> -static void bdi_add_to_pending(struct rcu_head *head)
> -{
> -	struct backing_dev_info *bdi;
> -
> -	bdi = container_of(head, struct backing_dev_info, rcu_head);
> -	INIT_LIST_HEAD(&bdi->bdi_list);
> -
> -	spin_lock(&bdi_lock);
> -	list_add_tail(&bdi->bdi_list, &bdi_pending_list);
> -	spin_unlock(&bdi_lock);
> -
> -	wake_up(&default_backing_dev_info.wb.wait);
> -}
> -
>  static void bdi_add_one_flusher_task(struct backing_dev_info *bdi,
>  				     int(*func)(struct backing_dev_info *))
>  {
> @@ -526,24 +523,22 @@ static void bdi_add_one_flusher_task(struct backing_dev_info *bdi,
>  	 * waiting for previous additions to finish.
>  	 */
>  	if (!func(bdi)) {
> -		spin_lock_bh(&bdi_lock);
> -		list_del_rcu(&bdi->bdi_list);
> -		spin_unlock_bh(&bdi_lock);
> +		list_move_tail(&bdi->bdi_list, &bdi_pending_list);
>  
>  		/*
> -		 * We need to wait for the current grace period to end,
> -		 * in case others were browsing the bdi_list as well.
> -		 * So defer the adding and wakeup to after the RCU
> -		 * grace period has ended.
> +		 * We are now on the pending list; wake up bdi_forker_task()
> +		 * to finish the job and add us back to the active bdi_list.
>  		 */
> -		call_rcu(&bdi->rcu_head, bdi_add_to_pending);
> +		wake_up(&default_backing_dev_info.wb.wait);
>  	}
>  }
>  
>  static int flusher_add_helper_block(struct backing_dev_info *bdi)
>  {
> +	mutex_unlock(&bdi_lock);
>  	wait_on_bit_lock(&bdi->state, BDI_pending, bdi_sched_wait,
>  				TASK_UNINTERRUPTIBLE);
> +	mutex_lock(&bdi_lock);
>  	return 0;
>  }
>  
> @@ -571,7 +566,9 @@ void bdi_add_default_flusher_task(struct backing_dev_info *bdi)
>   */
>  void bdi_add_flusher_task(struct backing_dev_info *bdi)
>  {
> +	mutex_lock(&bdi_lock);
>  	bdi_add_one_flusher_task(bdi, flusher_add_helper_block);
> +	mutex_unlock(&bdi_lock);
>  }
>  EXPORT_SYMBOL(bdi_add_flusher_task);
>  
> @@ -593,6 +590,14 @@ int bdi_register(struct backing_dev_info *bdi, struct device *parent,
>  		goto exit;
>  	}
>  
> +	mutex_lock(&bdi_lock);
> +	list_add_tail(&bdi->bdi_list, &bdi_list);
> +	mutex_unlock(&bdi_lock);
> +
> +	bdi->dev = dev;
> +	bdi_debug_register(bdi, dev_name(dev));
> +	set_bit(BDI_registered, &bdi->state);
> +
>  	/*
>  	 * Just start the forker thread for our default backing_dev_info,
>  	 * and add other bdi's to the list. They will get a thread created
> @@ -616,14 +621,6 @@ int bdi_register(struct backing_dev_info *bdi, struct device *parent,
>  		}
>  	}
>  
> -	spin_lock_bh(&bdi_lock);
> -	list_add_tail_rcu(&bdi->bdi_list, &bdi_list);
> -	spin_unlock_bh(&bdi_lock);
> -
> -	bdi->dev = dev;
> -	bdi_debug_register(bdi, dev_name(dev));
> -	set_bit(BDI_registered, &bdi->state);
> -
>  exit:
>  	return ret;
>  }
> @@ -655,15 +652,9 @@ static void bdi_wb_shutdown(struct backing_dev_info *bdi)
>  	/*
>  	 * Make sure nobody finds us on the bdi_list anymore
>  	 */
> -	spin_lock_bh(&bdi_lock);
> -	list_del_rcu(&bdi->bdi_list);
> -	spin_unlock_bh(&bdi_lock);
> -
> -	/*
> -	 * Now make sure that anybody who is currently looking at us from
> -	 * the bdi_list iteration have exited.
> -	 */
> -	synchronize_rcu();
> +	mutex_lock(&bdi_lock);
> +	list_del(&bdi->bdi_list);
> +	mutex_unlock(&bdi_lock);
>  
>  	/*
>  	 * Finally, kill the kernel threads. We don't need to be RCU
> @@ -689,7 +680,6 @@ int bdi_init(struct backing_dev_info *bdi)
>  {
>  	int i, err;
>  
> -	INIT_RCU_HEAD(&bdi->rcu_head);
>  	bdi->dev = NULL;
>  
>  	bdi->min_ratio = 0;
> diff --git a/mm/page-writeback.c b/mm/page-writeback.c
> index de3178a..7dd7de7 100644
> --- a/mm/page-writeback.c
> +++ b/mm/page-writeback.c
> @@ -313,9 +313,8 @@ static unsigned int bdi_min_ratio;
>  int bdi_set_min_ratio(struct backing_dev_info *bdi, unsigned int min_ratio)
>  {
>  	int ret = 0;
> -	unsigned long flags;
>  
> -	spin_lock_irqsave(&bdi_lock, flags);
> +	mutex_lock(&bdi_lock);
>  	if (min_ratio > bdi->max_ratio) {
>  		ret = -EINVAL;
>  	} else {
> @@ -327,27 +326,26 @@ int bdi_set_min_ratio(struct backing_dev_info *bdi, unsigned int min_ratio)
>  			ret = -EINVAL;
>  		}
>  	}
> -	spin_unlock_irqrestore(&bdi_lock, flags);
> +	mutex_unlock(&bdi_lock);
>  
>  	return ret;
>  }
>  
>  int bdi_set_max_ratio(struct backing_dev_info *bdi, unsigned max_ratio)
>  {
> -	unsigned long flags;
>  	int ret = 0;
>  
>  	if (max_ratio > 100)
>  		return -EINVAL;
>  
> -	spin_lock_irqsave(&bdi_lock, flags);
> +	mutex_lock(&bdi_lock);
>  	if (bdi->min_ratio > max_ratio) {
>  		ret = -EINVAL;
>  	} else {
>  		bdi->max_ratio = max_ratio;
>  		bdi->max_prop_frac = (PROP_FRAC_BASE * max_ratio) / 100;
>  	}
> -	spin_unlock_irqrestore(&bdi_lock, flags);
> +	mutex_unlock(&bdi_lock);
>  
>  	return ret;
>  }
> @@ -581,7 +579,7 @@ static void balance_dirty_pages(struct address_space *mapping)
>  			(!laptop_mode && (global_page_state(NR_FILE_DIRTY)
>  					  + global_page_state(NR_UNSTABLE_NFS)
>  					  > background_thresh)))
> -		bdi_start_writeback(bdi, NULL, 0);
> +		bdi_start_writeback(bdi, NULL, 0, WB_SYNC_NONE);
>  }
>  
>  void set_page_dirty_balance(struct page *page, int page_mkwrite)
> @@ -674,7 +672,7 @@ void wakeup_flusher_threads(long nr_pages)
>  	if (nr_pages == 0)
>  		nr_pages = global_page_state(NR_FILE_DIRTY) +
>  				global_page_state(NR_UNSTABLE_NFS);
> -	bdi_writeback_all(NULL, nr_pages);
> +	bdi_writeback_all(NULL, nr_pages, WB_SYNC_NONE);
>  }
>  
>  static void laptop_timer_fn(unsigned long unused);
> 
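
A note on the hunk above: the reason bdi_lock changes from a spinlock to a
mutex is that bdi_writeback_all() now performs the writeback in-line ("Do
in-line writeback for all backing devices. Expensive!"), and __wb_writeback()
can sleep for a long time while the lock is held. Condensed to its core, the
pattern is the fragment below (a sketch only, names as in the patch; the
per-bdi wb_list branch and all error handling are omitted):

	/*
	 * Condensed sketch of the in-line writeback loop from the patch.
	 * bdi_lock must be a mutex here: we sleep in __wb_writeback()
	 * while holding it and walking bdi_list, and _safe iteration is
	 * needed because entries can move to bdi_pending_list meanwhile.
	 */
	mutex_lock(&bdi_lock);
	list_for_each_entry_safe(bdi, tmp, &bdi_list, bdi_list) {
		if (!bdi_has_dirty_io(bdi))
			continue;
		__wb_writeback(&bdi->wb, nr_pages, sb, sync_mode);
	}
	mutex_unlock(&bdi_lock);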



* Re: [PATCH 0/11] Per-bdi writeback flusher threads #4
  2009-05-25  8:02                             ` Zhang, Yanmin
@ 2009-05-25  8:06                               ` Jens Axboe
  2009-05-25  8:43                               ` Zhang, Yanmin
  1 sibling, 0 replies; 57+ messages in thread
From: Jens Axboe @ 2009-05-25  8:06 UTC (permalink / raw
  To: Zhang, Yanmin
  Cc: Jan Kara, linux-kernel, linux-fsdevel, chris.mason, david, hch,
	akpm

On Mon, May 25 2009, Zhang, Yanmin wrote:
> On Sat, 2009-05-23 at 21:15 +0200, Jens Axboe wrote:
> > On Fri, May 22 2009, Jens Axboe wrote:
> > > Please try with this combined patch against what you are running now; it
> > > should resolve the issue. It needs a bit more work, but I'm running out
> > > of time today. I'll get it finalized, cleaned up, and integrated. Then
> > > I'll post a new revision of the patch set.
> > > 
> > 
> > This one has tested out well and has a few more tweaks. So please try
> > that! It should be pretty close to final now; I'll repost the series on
> > Monday.
> I ran the workload 10 times and didn't trigger it, so the bug is
> fixed.

Goodness, thanks for retesting. Can you share some performance
comparisons and what hw/storage you are running it on?

The v5/v6 posting includes these fixes, so it should work fine now.

-- 
Jens Axboe



* Re: [PATCH 0/11] Per-bdi writeback flusher threads #4
  2009-05-25  8:02                             ` Zhang, Yanmin
  2009-05-25  8:06                               ` Jens Axboe
@ 2009-05-25  8:43                               ` Zhang, Yanmin
  2009-05-25  8:48                                 ` Jens Axboe
  1 sibling, 1 reply; 57+ messages in thread
From: Zhang, Yanmin @ 2009-05-25  8:43 UTC (permalink / raw
  To: Jens Axboe
  Cc: Jan Kara, linux-kernel, linux-fsdevel, chris.mason, david, hch,
	akpm

On Mon, 2009-05-25 at 16:02 +0800, Zhang, Yanmin wrote:
> On Sat, 2009-05-23 at 21:15 +0200, Jens Axboe wrote:
> > On Fri, May 22 2009, Jens Axboe wrote:
> > > Please try with this combined patch against what you are running now; it
> > > should resolve the issue. It needs a bit more work, but I'm running out
> > > of time today. I'll get it finalized, cleaned up, and integrated. Then
> > > I'll post a new revision of the patch set.
> > > 
> > 
> > This one has tested out well and has a few more tweaks. So please try
> > that! It should be pretty close to final now; I'll repost the series on
> > Monday.
> I ran the workload 10 times and didn't trigger it, so the bug is
> fixed.
> 
> yanmin
Another issue with V4 is that fio hangs when testing fio_sync_read_4k. It
seems to hang while preparing the data (part of the data is ready).
CPU idle is 100%. It happens randomly.

INFO: task fio:6566 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
fio           D ffff8800280a9300  4976  6566   6564
 ffff88022f8c0de0 0000000000000086 ffff8800b584fcb0 000000000000000a
 0000000000000002 ffff88022df0c560 ffff88022df0c8e8 000000010000daea
 ffffe200027457d8 0000000000000246 000000c10000000d 0000000000000313
Call Trace:
 [<ffffffff802b6897>] ? bdi_sched_wait+0x0/0xd
 [<ffffffff807254f6>] ? schedule+0x9/0x1d
 [<ffffffff802b68a0>] ? bdi_sched_wait+0x9/0xd
 [<ffffffff80725aa5>] ? __wait_on_bit+0x40/0x6f
 [<ffffffff802b6897>] ? bdi_sched_wait+0x0/0xd
 [<ffffffff80725b40>] ? out_of_line_wait_on_bit+0x6c/0x78
 [<ffffffff8024a42e>] ? wake_bit_function+0x0/0x23
 [<ffffffff802b62a4>] ? bdi_queue_writeback+0x7a/0xe6
 [<ffffffff802b6461>] ? bdi_start_writeback+0x63/0x6c
 [<ffffffff8027a3a9>] ? balance_dirty_pages_ratelimited_nr+0x2a9/0x2b8
 [<ffffffff80274c90>] ? generic_file_buffered_write+0x1d8/0x2b2
 [<ffffffff80275230>] ? __generic_file_aio_write_nolock+0x33b/0x3a5
 [<ffffffff802866ab>] ? handle_mm_fault+0x2e5/0x6f3
 [<ffffffff80275498>] ? generic_file_aio_write+0x61/0xc1
 [<ffffffff80315efe>] ? ext3_file_write+0x16/0x94
 [<ffffffff8029d8c2>] ? do_sync_write+0xc9/0x10c
 [<ffffffff8024a400>] ? autoremove_wake_function+0x0/0x2e
 [<ffffffff8024c8f6>] ? __hrtimer_start_range_ns+0x101/0x114
 [<ffffffff8029dfcf>] ? vfs_write+0xad/0x136
 [<ffffffff8029e513>] ? sys_write+0x45/0x6e
 [<ffffffff8020b9ab>] ? system_call_fastpath+0x16/0x1b


I didn't run into it with the 3 new patches and am not sure if it's resolved.

yanmin
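
(For reference, the trace above bottoms out in the bit-wait path: the
submitter queues work and then sleeps on a state bit until a wb thread
clears it. This is the same sleep/wake pattern the patchset uses for
BDI_pending; a condensed sketch follows, with bdi_sched_wait() as it appears
in the series. The exact bit and call site vary by path, so treat the wait
call as illustrative:)

	/* The bit-wait helper from the series: just give up the CPU. */
	static int bdi_sched_wait(void *word)
	{
		schedule();
		return 0;
	}

	/* Submitter side (condensed): block until the bit is cleared.
	 * If the waker loses the wakeup, this never returns, which is
	 * the hang seen in the trace. */
	wait_on_bit(&bdi->state, BDI_pending, bdi_sched_wait,
			TASK_UNINTERRUPTIBLE);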




* Re: [PATCH 0/11] Per-bdi writeback flusher threads #4
  2009-05-25  8:43                               ` Zhang, Yanmin
@ 2009-05-25  8:48                                 ` Jens Axboe
  0 siblings, 0 replies; 57+ messages in thread
From: Jens Axboe @ 2009-05-25  8:48 UTC (permalink / raw
  To: Zhang, Yanmin
  Cc: Jan Kara, linux-kernel, linux-fsdevel, chris.mason, david, hch,
	akpm

On Mon, May 25 2009, Zhang, Yanmin wrote:
> On Mon, 2009-05-25 at 16:02 +0800, Zhang, Yanmin wrote:
> > On Sat, 2009-05-23 at 21:15 +0200, Jens Axboe wrote:
> > > On Fri, May 22 2009, Jens Axboe wrote:
> > > > Please try with this combined patch against what you are running now; it
> > > > should resolve the issue. It needs a bit more work, but I'm running out
> > > > of time today. I'll get it finalized, cleaned up, and integrated. Then
> > > > I'll post a new revision of the patch set.
> > > > 
> > > 
> > > This one has tested out well and has a few more tweaks. So please try
> > > that! It should be pretty close to final now; I'll repost the series on
> > > Monday.
> > I ran the workload 10 times and didn't trigger it, so the bug is
> > fixed.
> > 
> > yanmin
> Another issue with V4 is that fio hangs when testing fio_sync_read_4k. It
> seems to hang while preparing the data (part of the data is ready).
> CPU idle is 100%. It happens randomly.
> 
> INFO: task fio:6566 blocked for more than 120 seconds.
> "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> fio           D ffff8800280a9300  4976  6566   6564
>  ffff88022f8c0de0 0000000000000086 ffff8800b584fcb0 000000000000000a
>  0000000000000002 ffff88022df0c560 ffff88022df0c8e8 000000010000daea
>  ffffe200027457d8 0000000000000246 000000c10000000d 0000000000000313
> Call Trace:
>  [<ffffffff802b6897>] ? bdi_sched_wait+0x0/0xd
>  [<ffffffff807254f6>] ? schedule+0x9/0x1d
>  [<ffffffff802b68a0>] ? bdi_sched_wait+0x9/0xd
>  [<ffffffff80725aa5>] ? __wait_on_bit+0x40/0x6f
>  [<ffffffff802b6897>] ? bdi_sched_wait+0x0/0xd
>  [<ffffffff80725b40>] ? out_of_line_wait_on_bit+0x6c/0x78
>  [<ffffffff8024a42e>] ? wake_bit_function+0x0/0x23
>  [<ffffffff802b62a4>] ? bdi_queue_writeback+0x7a/0xe6
>  [<ffffffff802b6461>] ? bdi_start_writeback+0x63/0x6c
>  [<ffffffff8027a3a9>] ? balance_dirty_pages_ratelimited_nr+0x2a9/0x2b8
>  [<ffffffff80274c90>] ? generic_file_buffered_write+0x1d8/0x2b2
>  [<ffffffff80275230>] ? __generic_file_aio_write_nolock+0x33b/0x3a5
>  [<ffffffff802866ab>] ? handle_mm_fault+0x2e5/0x6f3
>  [<ffffffff80275498>] ? generic_file_aio_write+0x61/0xc1
>  [<ffffffff80315efe>] ? ext3_file_write+0x16/0x94
>  [<ffffffff8029d8c2>] ? do_sync_write+0xc9/0x10c
>  [<ffffffff8024a400>] ? autoremove_wake_function+0x0/0x2e
>  [<ffffffff8024c8f6>] ? __hrtimer_start_range_ns+0x101/0x114
>  [<ffffffff8029dfcf>] ? vfs_write+0xad/0x136
>  [<ffffffff8029e513>] ? sys_write+0x45/0x6e
>  [<ffffffff8020b9ab>] ? system_call_fastpath+0x16/0x1b
> 
> 
> I didn't run into it with the 3 new patches and am not sure if it's
> resolved.

That's the wake_up_bit() race that was fixed with one of the 3 new
patches, so v5/6 should be good here too.
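
(For reference, the race follows the canonical bit-wake pattern: clear_bit()
is not a memory barrier, so without a barrier between clearing the bit and
wake_up_bit(), the waker can miss a sleeper that is just settling in to
wait, and that sleeper then waits forever, which is the hang above. The
waker side, as it appears in the combined patch:)

	/* Waker side from the combined patch: the barrier between the
	 * clear and the wakeup is what closes the race with a sleeper
	 * checking the bit on its way into the bit-wait code. */
	clear_bit(BDI_pending, &bdi->state);
	smp_mb__after_clear_bit();
	wake_up_bit(&bdi->state, BDI_pending);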

-- 
Jens Axboe



* Re: [PATCH 0/11] Per-bdi writeback flusher threads #4
  2009-05-22  8:15                       ` Jens Axboe
@ 2009-05-25  8:54                           ` Zhang, Yanmin
  0 siblings, 0 replies; 57+ messages in thread
From: Zhang, Yanmin @ 2009-05-25  8:54 UTC (permalink / raw
  To: Jens Axboe
  Cc: Jan Kara, linux-kernel, linux-fsdevel, chris.mason, david, hch,
	akpm

On Fri, 2009-05-22 at 10:15 +0200, Jens Axboe wrote:
> > > > > > > > > > > > > This is the fourth version of this patchset. Changes since v3:

> Thanks, I'll get this reproduced and fixed. Can you post the results
> you got comparing writeback and vanilla meanwhile?
I didn't post the results because some test cases benefit from the patches
while others are hurt by them. Sometimes a case benefits from the patches
on one machine but is hurt on another.

As a matter of fact, I tested the patches on 4 machines. The machine that
triggered the bug has only 1 disk. The other 3 machines have 1 JBOD each.
1) machine lkp-st02 (stoakley): has a Fibre Channel JBOD with 13 SCSI disks. Every
disk has 1 partition (ext3 filesystem). Memory is 8GB.
2) machine lkp-st01: has a SAS JBOD with 7 SAS disks. Every disk has 2
partitions. 8GB memory.
3) machine lkp-ne02 (nehalem): has a SATA JBOD with 11 disks. Every disk has
2 partitions. 6GB memory.

The HBA cards connecting to the JBODs either have no RAID capability,
or have it, but I don't turn RAID on.

ext3 is mounted with the '-o writeback' option.

The results below focus on the 3 machines that have a JBOD.

I use iozone/tiobench/fio/ffsb for this testing. With iozone/tiobench, I always
use one disk on all machines. But with fio/ffsb, which have lots of subtest cases,
I use all disks of the JBOD connected to the corresponding machine.

The comparison is between 2.6.30-rc6 and 2.6.30-rc6+V4_patches, optionally plus
the 3 new patches (starting with 0001~0003).

1) iozone: 500MB iozone testing shows no difference in results. But 1.2GB testing
has about a 40% regression on rewrite with the 3 new patches (0001~0003); without
the 3 new patches, the regression is more than 90%. write has a similar regression,
but it disappears with the 3 new patches.

2) tiobench: result variation is considered normal fluctuation.

3) fio: consists of more than 30 sub test cases, covering sync/aio/mmap
combined with block size (under 4k/4k/64k, sometimes 128k) and random access.
For write testing there is mostly one thread per partition.
   Mostly, fio_mmap_randwrite(randrw)_4k_preread shows a 5%~30% improvement. But with
the 3 new patches the improvement becomes smaller, for example dropping from 30% to 14%.
   fio_mmap_randwrite has a 5%~10% regression on lkp-st01 and lkp-ne02 (both machines'
JBODs have 2 partitions per disk), but a 2%~15% improvement on lkp-st02 (one partition
per disk). fio_mmap_randrw behaves similarly.
   fio_mmap_randwrite_4k_halfbusy (uses 4 disks and a lighter workload than the other
fio cases) has about a 20%~30% improvement.
   fio sync read has about a 15%~30% regression on lkp-st01, but the regression
disappears with the 3 new patches. The other machines don't have the issue.
   aio has no regression.

4) ffsb:
   ffsb_create (blocksize 4k, 64k) has a 10%~20% improvement on lkp-st01 and
lkp-ne02, but not on lkp-st02.
   The data from the other ffsb test cases looks suspicious, so I need to
double-check it, or tune parameters and rerun.

Yanmin




* Re: [PATCH 0/11] Per-bdi writeback flusher threads #4
  2009-05-18 12:19 [PATCH 0/11] Per-bdi writeback flusher threads #4 Jens Axboe
                   ` (11 preceding siblings ...)
  2009-05-19  6:11 ` [PATCH 0/11] Per-bdi writeback flusher threads #4 Zhang, Yanmin
@ 2009-05-25 15:57 ` Richard Kennedy
  2009-05-25 17:05   ` Jens Axboe
  12 siblings, 1 reply; 57+ messages in thread
From: Richard Kennedy @ 2009-05-25 15:57 UTC (permalink / raw
  To: Jens Axboe
  Cc: linux-kernel, linux-fsdevel, chris.mason, david, hch, akpm, jack,
	yanmin_zhang

Hi Jens,

I've been testing this from your git version which builds as
2.6.30-rc6-00057-g81eabcf.

Unfortunately it's not doing too well.

When building a kernel with 'make -j 8' on my AMD X2 64-bit box, the screen
repeatedly locked up for several minutes at a time, and my music player
also froze.
In total the full kernel build took over 80 minutes; normally it's only
about 15.

However, the machine seems to have recovered correctly, and now everything
is back to normal.

Maybe it does need the congestion handling after all?

regards
Richard





* Re: [PATCH 0/11] Per-bdi writeback flusher threads #4
  2009-05-25 15:57 ` Richard Kennedy
@ 2009-05-25 17:05   ` Jens Axboe
  0 siblings, 0 replies; 57+ messages in thread
From: Jens Axboe @ 2009-05-25 17:05 UTC (permalink / raw
  To: Richard Kennedy
  Cc: linux-kernel, linux-fsdevel, chris.mason, david, hch, akpm, jack,
	yanmin_zhang

On Mon, May 25 2009, Richard Kennedy wrote:
> Hi Jens,
> 
> I've been testing this from your git version which builds as
> 2.6.30-rc6-00057-g81eabcf.
> 
> Unfortunately it's not doing too well.
> 
> When building a kernel with 'make -j 8' on my AMDX2 64bit, the screen
> repeatedly locked up for several minutes at a time,and my music player
> also froze.
> In total the full kernel build took over 80 minutes, normally it's only
> about 15.
> 
> However the machine seems to have recovered correctly, & now everything
> is back to normal.

Weird, perhaps you hit an unlucky revision. I only use the git branch
for development, and it's continually rebased to collect and split
patches and fixes. So I don't generally recommend using that, just the
posted patches. I build -j8 or larger kernels with the writeback patches
all the time and haven't seen any issues. That's on a Core 2 Quad. Just
for kicks, can you send me your .config?

I'll post a new revision tomorrow, if you could try that I'd appreciate
it!

> Maybe it does need the congestion handling after all?

No it does not; by the very nature of the bdi threads being blocking,
congestion is not relevant.
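
(For contrast: the pdflush-era writeout path had to poll for congestion and
back off, because a shared, non-blocking flusher could not afford to sleep
on one device's queue. A rough sketch of that old-style backoff using the
2.6.30-era interfaces follows; the per-bdi threads simply block in the
request queue instead and need none of this:)

	/* Old pdflush-style backoff (sketch, inside a writeout loop):
	 * detect a congested device and retreat instead of blocking. */
	if (bdi_write_congested(bdi)) {
		congestion_wait(WRITE, HZ / 10);
		continue;	/* service other work, retry later */
	}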

-- 
Jens Axboe



Thread overview: 57+ messages
2009-05-18 12:19 [PATCH 0/11] Per-bdi writeback flusher threads #4 Jens Axboe
2009-05-18 12:19 ` [PATCH 01/11] writeback: move dirty inodes from super_block to backing_dev_info Jens Axboe
2009-05-18 12:19 ` [PATCH 02/11] writeback: switch to per-bdi threads for flushing data Jens Axboe
2009-05-19 10:20   ` Richard Kennedy
2009-05-19 12:23     ` Jens Axboe
2009-05-19 13:45       ` Richard Kennedy
2009-05-19 17:56         ` Jens Axboe
2009-05-19 22:11           ` Peter Zijlstra
2009-05-20 11:18   ` Jan Kara
2009-05-20 11:32     ` Jens Axboe
2009-05-20 12:11       ` Jan Kara
2009-05-20 12:16         ` Jens Axboe
2009-05-20 12:24           ` Christoph Hellwig
2009-05-20 12:48             ` Jens Axboe
2009-05-20 12:37   ` Christoph Hellwig
2009-05-20 12:49     ` Jens Axboe
2009-05-20 14:02       ` Anton Altaparmakov
2009-05-20 14:02         ` Anton Altaparmakov
2009-05-18 12:19 ` [PATCH 03/11] writeback: get rid of pdflush completely Jens Axboe
2009-05-18 12:19 ` [PATCH 04/11] writeback: separate the flushing state/task from the bdi Jens Axboe
2009-05-20 11:34   ` Jan Kara
2009-05-20 11:39     ` Jens Axboe
2009-05-20 12:06       ` Jan Kara
2009-05-20 12:09         ` Jens Axboe
2009-05-18 12:19 ` [PATCH 05/11] writeback: support > 1 flusher thread per bdi Jens Axboe
2009-05-18 12:19 ` [PATCH 06/11] writeback: include default_backing_dev_info in writeback Jens Axboe
2009-05-18 12:19 ` [PATCH 07/11] writeback: allow sleepy exit of default writeback task Jens Axboe
2009-05-18 12:19 ` [PATCH 08/11] writeback: btrfs must register its backing_devices Jens Axboe
2009-05-18 12:19 ` [PATCH 09/11] writeback: add some debug inode list counters to bdi stats Jens Axboe
2009-05-18 12:19 ` [PATCH 10/11] writeback: add name to backing_dev_info Jens Axboe
2009-05-18 12:19 ` [PATCH 11/11] writeback: check for registered bdi in flusher add and inode dirty Jens Axboe
2009-05-19  6:11 ` [PATCH 0/11] Per-bdi writeback flusher threads #4 Zhang, Yanmin
2009-05-19  6:20   ` Jens Axboe
2009-05-19  6:43     ` Zhang, Yanmin
2009-05-20  7:51     ` Zhang, Yanmin
2009-05-20  7:51       ` Zhang, Yanmin
2009-05-20  8:09       ` Jens Axboe
2009-05-20  8:54         ` Jens Axboe
2009-05-20  9:19           ` Zhang, Yanmin
2009-05-20  9:25             ` Jens Axboe
2009-05-20 11:19               ` Jens Axboe
2009-05-21  6:33                 ` Zhang, Yanmin
2009-05-21  9:10                   ` Jan Kara
2009-05-22  1:28                     ` Zhang, Yanmin
2009-05-22  8:15                       ` Jens Axboe
2009-05-22 20:44                         ` Jens Axboe
2009-05-23 19:15                           ` Jens Axboe
2009-05-25  8:02                             ` Zhang, Yanmin
2009-05-25  8:06                               ` Jens Axboe
2009-05-25  8:43                               ` Zhang, Yanmin
2009-05-25  8:48                                 ` Jens Axboe
2009-05-25  8:54                         ` Zhang, Yanmin
2009-05-22  7:53                     ` Jens Axboe
2009-05-22  7:53                       ` Jens Axboe
2009-05-25 15:57 ` Richard Kennedy
2009-05-25 17:05   ` Jens Axboe
