* [PATCH 0/4 v2] fs/dcache: Resolve the last RT woes.
@ 2022-07-27 11:49 Sebastian Andrzej Siewior
From: Sebastian Andrzej Siewior @ 2022-07-27 11:49 UTC
  To: linux-fsdevel; +Cc: Alexander Viro, Matthew Wilcox, Thomas Gleixner

This is v2 of the series, v1 is available at
   https://lkml.kernel.org/r/20220613140712.77932-1-bigeasy@linutronix.de/

v1…v2:
   - Turned Al's description of a bug in d_add_ci() into a patch. I took
     the liberty of making him the author and added his Signed-off-by,
     since I simply wrapped a patch body around his words.

   - The justification for why delaying the wakeup is safe has been
     replaced with Al's analysis of the code.

   - The wake up has been split differently (and I hope this is what Al
     meant): first the wake up has been pushed to the caller and then
     delayed to end_dir_add(), after preemption has been enabled.

   - There are now __d_lookup_unhash() and __d_lookup_unhash_wake();
     __d_lookup_done() is removed.
     __d_lookup_done() is removed because it is exported and its return
     value changes, which would affect out-of-tree users that are not
     aware of it.
     d_lookup_done() remains and now invokes __d_lookup_unhash_wake();
     the wake up itself can't stay in the header file because cyclic
     header dependencies prevent resolving wake_up_all() within the
     inline function.

The original cover letter:

PREEMPT_RT has two issues with the dcache code:

   1) i_dir_seq is a special cased sequence counter which uses the lowest
      bit as writer lock. This requires that preemption is disabled.

      On !RT kernels preemption is implicitly disabled by spin_lock(), but
      that's not the case on RT kernels.

      Replacing i_dir_seq on RT with a seqlock_t comes with its own
      problems due to arbitrary lock nesting. Using a seqcount with an
      associated lock is not possible either because the locks held by the
      callers are not necessarily related.

      Explicitly disabling preemption on RT kernels across the i_dir_seq
      write held critical section is the obvious and simplest solution. The
      critical section is small, so latency is not really a concern; a
      sketch of the counter pattern follows after this list.

   2) The wake up of dentry::d_wait waiters is in a preemption disabled
      section, which violates the RT constraints as wake_up_all() has
      to acquire the wait queue head lock which is a 'sleeping' spinlock
      on RT.

      There are two reasons for the non-preemptible section:

         A) The wake up happens under the hash list bit lock

         B) The wake up happens inside the i_dir_seq write side
            critical section

       #A is solvable by moving it outside of the hash list bit lock held
       section.

       Making #B preemptible on RT is hard or even impossible due to lock
       nesting constraints.

       A possible solution would be to replace the waitqueue by a simple
       waitqueue which can be woken up inside atomic sections on RT.

       But aside from Linus not being a fan of simple waitqueues, there is
       another observation about this wake up: the woken up waiter is
       likely to immediately contend on dentry::lock.

       It turns out that there is no reason to do the wake up within the
       i_dir_seq write held region. The only requirement is to do the wake
       up within the dentry::lock held region. Further details in the
       individual patches.

       That allows moving the wake up out of the non-preemptible section
       on RT, which also reduces the dentry::lock hold time after the wake
       up.
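
To make #1 concrete, below is a minimal userspace sketch of the counter
pattern (an illustration only: C11 atomics stand in for the kernel's
cmpxchg()/smp_load_acquire()/smp_store_release(), and the function names
merely mirror start_dir_add() and end_dir_add()):

	#include <stdatomic.h>

	static atomic_uint i_dir_seq;	/* lowest bit set => writer active */

	/* Writer entry, cf. start_dir_add(): make the count odd. */
	static unsigned int dir_write_begin(void)
	{
		for (;;) {
			unsigned int n = atomic_load(&i_dir_seq);

			if (!(n & 1) &&
			    atomic_compare_exchange_weak(&i_dir_seq, &n, n + 1))
				return n;
		}
	}

	/* Writer exit, cf. end_dir_add(): make the count even again. */
	static void dir_write_end(unsigned int n)
	{
		atomic_store_explicit(&i_dir_seq, n + 2, memory_order_release);
	}

	/* Reader: spin while the low bit indicates an active writer.
	 * If the writer is preempted and cannot run, this loop never
	 * terminates (the live lock described in #1). */
	static unsigned int dir_read_begin(void)
	{
		unsigned int n;

		do {
			n = atomic_load_explicit(&i_dir_seq,
						 memory_order_acquire);
		} while (n & 1);
		return n;
	}

Disabling preemption across dir_write_begin()/dir_write_end(), which is
what patch #2 does for the kernel's counterparts, removes exactly that
window.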

Thanks,

Sebastian




* [PATCH 1/4 v2] fs/dcache: d_add_ci() needs to complete parallel lookup.
@ 2022-07-27 11:49 ` Sebastian Andrzej Siewior
From: Sebastian Andrzej Siewior @ 2022-07-27 11:49 UTC
  To: linux-fsdevel
  Cc: Alexander Viro, Matthew Wilcox, Thomas Gleixner,
	Sebastian Andrzej Siewior

From: Al Viro <viro@zeniv.linux.org.uk>

Result of d_alloc_parallel() in d_add_ci() is fed to d_splice_alias(), which
*NORMALLY* feeds it to __d_add() or __d_move() in a way that will have
__d_lookup_done() applied to it.

However, there is a nasty possibility - d_splice_alias() might legitimately
fail without having marked the sucker not in-lookup. dentry will get dropped
by d_add_ci(), so ->d_wait won't end up pointing to freed object, but it's
still a bug - retain_dentry() will scream bloody murder upon seeing that, and
for a good reason; we'll get hash chain corrupted. It's impossible to hit
without corrupted fs image (ntfs or case-insensitive xfs), but it's a bug.

Invoke d_lookup_done() after d_splice_alias() to ensure that the
in-lookup flag is always cleared.

Fixes: d9171b9345261 ("parallel lookups machinery, part 4 (and last)")
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 fs/dcache.c |    1 +
 1 file changed, 1 insertion(+)

--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -2239,6 +2239,7 @@ struct dentry *d_add_ci(struct dentry *d
 		} 
 	}
 	res = d_splice_alias(inode, found);
+	d_lookup_done(found);
 	if (res) {
 		dput(found);
 		return res;

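For orientation, the resulting flow in d_add_ci() is roughly the
following (condensed; allocation and error handling elided, and the
argument names are placeholders rather than the exact ones in
fs/dcache.c):

	found = d_alloc_parallel(parent, name, wq); /* may be in-lookup */
	/* ... */
	res = d_splice_alias(inode, found);
	d_lookup_done(found);	/* complete the parallel lookup even when
				 * d_splice_alias() failed */
	if (res) {
		dput(found);
		return res;
	}
	return found;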

* [PATCH 2/4 v2] fs/dcache: Disable preemption on i_dir_seq write side on PREEMPT_RT
@ 2022-07-27 11:49 ` Sebastian Andrzej Siewior
From: Sebastian Andrzej Siewior @ 2022-07-27 11:49 UTC
  To: linux-fsdevel
  Cc: Alexander Viro, Matthew Wilcox, Thomas Gleixner,
	Sebastian Andrzej Siewior, Oleg.Karfich

i_dir_seq is a sequence counter whose lowest bit serves as the writer
lock. The writer updates the counter atomically, which ensures that it
can be modified by only one writer at a time. This requires preemption
to be disabled across the write side critical section.

On !PREEMPT_RT kernels this is implicitly ensured by the caller
acquiring dentry::lock. On PREEMPT_RT kernels spin_lock() does not
disable preemption, which means that a preempting writer or reader would
live lock. It is therefore required to disable preemption explicitly.

An alternative solution would be to replace i_dir_seq with a seqlock_t for
PREEMPT_RT, but that comes with its own set of problems due to arbitrary
lock nesting. A pure sequence count with an associated spinlock is not
possible because the locks held by the caller are not necessarily related.

As the critical section is small, disabling preemption is a sensible
solution.

Reported-by: Oleg.Karfich@wago.com
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Link: https://lkml.kernel.org/r/20220613140712.77932-2-bigeasy@linutronix.de
---
 fs/dcache.c |   12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -2564,7 +2564,15 @@ EXPORT_SYMBOL(d_rehash);
 
 static inline unsigned start_dir_add(struct inode *dir)
 {
-
+	/*
+	 * The caller holds a spinlock (dentry::d_lock). On !PREEMPT_RT
+	 * kernels spin_lock() implicitly disables preemption, but not on
+	 * PREEMPT_RT.  So for RT it has to be done explicitly to protect
+	 * the sequence count write side critical section against a reader
+	 * or another writer preempting, which would result in a live lock.
+	 */
+	if (IS_ENABLED(CONFIG_PREEMPT_RT))
+		preempt_disable();
 	for (;;) {
 		unsigned n = dir->i_dir_seq;
 		if (!(n & 1) && cmpxchg(&dir->i_dir_seq, n, n + 1) == n)
@@ -2576,6 +2584,8 @@ static inline unsigned start_dir_add(str
 static inline void end_dir_add(struct inode *dir, unsigned n)
 {
 	smp_store_release(&dir->i_dir_seq, n + 2);
+	if (IS_ENABLED(CONFIG_PREEMPT_RT))
+		preempt_enable();
 }
 
 static void d_wait_lookup(struct dentry *dentry)

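For context, the write side above pairs with readers such as
d_alloc_parallel(), whose retry logic looks roughly like this (heavily
condensed; the actual lookup and locking are elided):

	unsigned seq;
retry:
	seq = smp_load_acquire(&dir->i_dir_seq);
	/* ... RCU walk of the in-lookup hash ... */
	if (seq & 1)				/* writer active */
		goto retry;
	if (READ_ONCE(dir->i_dir_seq) != seq)	/* writer won the race */
		goto retry;

A reader that preempts the writer between the cmpxchg() and the
smp_store_release() would spin in this retry loop forever, which is why
the write side must not be preemptible.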

* [PATCH 3/4 v2] fs/dcache: Move the wakeup from __d_lookup_done() to the caller.
@ 2022-07-27 11:49 ` Sebastian Andrzej Siewior
From: Sebastian Andrzej Siewior @ 2022-07-27 11:49 UTC
  To: linux-fsdevel
  Cc: Alexander Viro, Matthew Wilcox, Thomas Gleixner,
	Sebastian Andrzej Siewior

__d_lookup_done() wakes waiters on dentry->d_wait.  On PREEMPT_RT we are
not allowed to do that with preemption disabled, since the wakeup
acquires wait_queue_head::lock, which is a "sleeping" spinlock on RT.

Calling it under dentry->d_lock is not a problem, since that is also a
"sleeping" spinlock on the same configs.  Unfortunately, two of its
callers (__d_add() and __d_move()) are holding more than just ->d_lock
and that needs to be dealt with.

The key observation is that wakeup can be moved to any point before
dropping ->d_lock.

As a first step to solve this, move the wake up outside of the
hlist_bl_lock() held section.

This is safe because:

Waiters get inserted into ->d_wait only after they'd taken ->d_lock
and observed DCACHE_PAR_LOOKUP in flags.  As long as they are
woken up (and evicted from the queue) between the moment __d_lookup_done()
has removed DCACHE_PAR_LOOKUP and dropping ->d_lock, we are safe,
since the waitqueue ->d_wait points to won't get destroyed without
having __d_lookup_done(dentry) called (under ->d_lock).

->d_wait is set only by d_alloc_parallel() and only in case when
it returns a freshly allocated in-lookup dentry.  Whenever that happens,
we are guaranteed that __d_lookup_done() will be called for resulting
dentry (under ->d_lock) before the wq in question gets destroyed.

With two exceptions, wq lives in the call frame of the caller of
d_alloc_parallel() and we have an explicit d_lookup_done() on the
resulting in-lookup dentry before we leave that frame.

One of those exceptions is nfs_call_unlink(), where wq is embedded into
(dynamically allocated) struct nfs_unlinkdata.  It is destroyed in
nfs_async_unlink_release() after an explicit d_lookup_done() on the
dentry wq went into.

The remaining exception is d_add_ci(). There, wq is what we'd found in
->d_wait of the d_add_ci() argument. Callers of d_add_ci() are two
instances of ->d_lookup() and they must have been given an in-lookup
dentry.  Which means that they'd been called by __lookup_slow() or
lookup_open(), with wq in the call frame of one of those.

Result of d_alloc_parallel() in d_add_ci() is fed to
d_splice_alias(), which either returns non-NULL (and d_add_ci() does
d_lookup_done()) or feeds dentry to __d_add() that will do
__d_lookup_done() under ->d_lock.  That concludes the analysis.

Let __d_lookup_unhash():

  1) Lock the lookup hash and clear DCACHE_PAR_LOOKUP
  2) Unhash the dentry
  3) Retrieve and clear dentry::d_wait
  4) Unlock the hash and return the retrieved waitqueue head pointer

and let the caller handle the wake up. Rename __d_lookup_done() to
__d_lookup_unhash_wake() to enforce build failures for out-of-tree code
that used __d_lookup_done() and is not aware of the new return value.

This does not yet solve the PREEMPT_RT problem completely because
preemption is still disabled due to i_dir_seq being held for write. This
will be addressed in subsequent steps.

An alternative solution would be to switch the waitqueue to a simple
waitqueue, but aside from Linus not being a fan of them, moving the wake
up closer to the place where dentry::lock is unlocked reduces the lock
contention time for the woken up waiter.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Link: https://lkml.kernel.org/r/20220613140712.77932-3-bigeasy@linutronix.de
---
 fs/dcache.c            |   35 ++++++++++++++++++++++++++++-------
 include/linux/dcache.h |    9 +++------
 2 files changed, 31 insertions(+), 13 deletions(-)

--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -2712,32 +2712,51 @@ struct dentry *d_alloc_parallel(struct d
 }
 EXPORT_SYMBOL(d_alloc_parallel);
 
-void __d_lookup_done(struct dentry *dentry)
+/*
+ * - Unhash the dentry
+ * - Retrieve and clear the waitqueue head in dentry
+ * - Return the waitqueue head
+ */
+static wait_queue_head_t *__d_lookup_unhash(struct dentry *dentry)
 {
-	struct hlist_bl_head *b = in_lookup_hash(dentry->d_parent,
-						 dentry->d_name.hash);
+	wait_queue_head_t *d_wait;
+	struct hlist_bl_head *b;
+
+	lockdep_assert_held(&dentry->d_lock);
+
+	b = in_lookup_hash(dentry->d_parent, dentry->d_name.hash);
 	hlist_bl_lock(b);
 	dentry->d_flags &= ~DCACHE_PAR_LOOKUP;
 	__hlist_bl_del(&dentry->d_u.d_in_lookup_hash);
-	wake_up_all(dentry->d_wait);
+	d_wait = dentry->d_wait;
 	dentry->d_wait = NULL;
 	hlist_bl_unlock(b);
 	INIT_HLIST_NODE(&dentry->d_u.d_alias);
 	INIT_LIST_HEAD(&dentry->d_lru);
+	return d_wait;
+}
+
+void __d_lookup_unhash_wake(struct dentry *dentry)
+{
+	spin_lock(&dentry->d_lock);
+	wake_up_all(__d_lookup_unhash(dentry));
+	spin_unlock(&dentry->d_lock);
 }
-EXPORT_SYMBOL(__d_lookup_done);
+EXPORT_SYMBOL(__d_lookup_unhash_wake);
 
 /* inode->i_lock held if inode is non-NULL */
 
 static inline void __d_add(struct dentry *dentry, struct inode *inode)
 {
+	wait_queue_head_t *d_wait;
 	struct inode *dir = NULL;
 	unsigned n;
 	spin_lock(&dentry->d_lock);
 	if (unlikely(d_in_lookup(dentry))) {
 		dir = dentry->d_parent->d_inode;
 		n = start_dir_add(dir);
-		__d_lookup_done(dentry);
+		d_wait = __d_lookup_unhash(dentry);
+		wake_up_all(d_wait);
 	}
 	if (inode) {
 		unsigned add_flags = d_flags_for_inode(inode);
@@ -2896,6 +2915,7 @@ static void __d_move(struct dentry *dent
 		     bool exchange)
 {
 	struct dentry *old_parent, *p;
+	wait_queue_head_t *d_wait;
 	struct inode *dir = NULL;
 	unsigned n;
 
@@ -2926,7 +2946,8 @@ static void __d_move(struct dentry *dent
 	if (unlikely(d_in_lookup(target))) {
 		dir = target->d_parent->d_inode;
 		n = start_dir_add(dir);
-		__d_lookup_done(target);
+		d_wait = __d_lookup_unhash(target);
+		wake_up_all(d_wait);
 	}
 
 	write_seqcount_begin(&dentry->d_seq);
--- a/include/linux/dcache.h
+++ b/include/linux/dcache.h
@@ -349,7 +349,7 @@ static inline void dont_mount(struct den
 	spin_unlock(&dentry->d_lock);
 }
 
-extern void __d_lookup_done(struct dentry *);
+extern void __d_lookup_unhash_wake(struct dentry *dentry);
 
 static inline int d_in_lookup(const struct dentry *dentry)
 {
@@ -358,11 +358,8 @@ static inline int d_in_lookup(const stru
 
 static inline void d_lookup_done(struct dentry *dentry)
 {
-	if (unlikely(d_in_lookup(dentry))) {
-		spin_lock(&dentry->d_lock);
-		__d_lookup_done(dentry);
-		spin_unlock(&dentry->d_lock);
-	}
+	if (unlikely(d_in_lookup(dentry)))
+		__d_lookup_unhash_wake(dentry);
 }
 
 extern void dput(struct dentry *);

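For reference, the waiter side that the analysis above relies on is
d_wait_lookup(), shown here condensed:

	static void d_wait_lookup(struct dentry *dentry)
	{
		if (d_in_lookup(dentry)) {
			DECLARE_WAITQUEUE(wait, current);
			add_wait_queue(dentry->d_wait, &wait);
			do {
				set_current_state(TASK_UNINTERRUPTIBLE);
				spin_unlock(&dentry->d_lock);
				schedule();
				spin_lock(&dentry->d_lock);
			} while (d_in_lookup(dentry));
		}
	}

The waiter enqueues itself and re-checks DCACHE_PAR_LOOKUP under
->d_lock, so a wake_up_all() issued anywhere before the lookup side
drops ->d_lock cannot be missed.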

* [PATCH 4/4 v2] fs/dcache: Move wakeup out of i_dir_seq write held region.
@ 2022-07-27 11:49 ` Sebastian Andrzej Siewior
From: Sebastian Andrzej Siewior @ 2022-07-27 11:49 UTC
  To: linux-fsdevel
  Cc: Alexander Viro, Matthew Wilcox, Thomas Gleixner,
	Sebastian Andrzej Siewior

__d_add() and __d_move() wake up waiters on dentry::d_wait from within
the i_dir_seq write held region.  This violates the PREEMPT_RT
constraints as the wake up acquires wait_queue_head::lock, which is a
"sleeping" spinlock on RT.

There is no requirement to do so. __d_lookup_unhash() has cleared
DCACHE_PAR_LOOKUP and dentry::d_wait and returned the now unreachable wait
queue head pointer to the caller, so the actual wake up can be postponed
until the i_dir_seq write side critical section is left. The only
requirement is that dentry::lock is held across the whole sequence
including the wake up. The previous commit includes an analysis why this
is considered safe.

Move the wake up past end_dir_add() which leaves the i_dir_seq write side
critical section and enables preemption.

For non-RT kernels there is no difference because preemption is still
disabled due to dentry::lock being held, but it shortens the time between
wake up and unlocking dentry::lock, which reduces the contention for the
woken up waiter.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 fs/dcache.c |   10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -2581,11 +2581,13 @@ static inline unsigned start_dir_add(str
 	}
 }
 
-static inline void end_dir_add(struct inode *dir, unsigned n)
+static inline void end_dir_add(struct inode *dir, unsigned int n,
+			       wait_queue_head_t *d_wait)
 {
 	smp_store_release(&dir->i_dir_seq, n + 2);
 	if (IS_ENABLED(CONFIG_PREEMPT_RT))
 		preempt_enable();
+	wake_up_all(d_wait);
 }
 
 static void d_wait_lookup(struct dentry *dentry)
@@ -2756,7 +2758,6 @@ static inline void __d_add(struct dentry
 		dir = dentry->d_parent->d_inode;
 		n = start_dir_add(dir);
 		d_wait = __d_lookup_unhash(dentry);
-		wake_up_all(d_wait);
 	}
 	if (inode) {
 		unsigned add_flags = d_flags_for_inode(inode);
@@ -2768,7 +2769,7 @@ static inline void __d_add(struct dentry
 	}
 	__d_rehash(dentry);
 	if (dir)
-		end_dir_add(dir, n);
+		end_dir_add(dir, n, d_wait);
 	spin_unlock(&dentry->d_lock);
 	if (inode)
 		spin_unlock(&inode->i_lock);
@@ -2947,7 +2948,6 @@ static void __d_move(struct dentry *dent
 		dir = target->d_parent->d_inode;
 		n = start_dir_add(dir);
 		d_wait = __d_lookup_unhash(target);
-		wake_up_all(d_wait);
 	}
 
 	write_seqcount_begin(&dentry->d_seq);
@@ -2983,7 +2983,7 @@ static void __d_move(struct dentry *dent
 	write_seqcount_end(&dentry->d_seq);
 
 	if (dir)
-		end_dir_add(dir, n);
+		end_dir_add(dir, n, d_wait);
 
 	if (dentry->d_parent != old_parent)
 		spin_unlock(&dentry->d_parent->d_lock);

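With this change the overall ordering in __d_add() (and analogously in
__d_move()) comes out as sketched below; this is a summary of the code
above, not a verbatim excerpt:

	spin_lock(&dentry->d_lock);
	n = start_dir_add(dir);		/* i_dir_seq odd; RT: preemption off */
	d_wait = __d_lookup_unhash(dentry); /* clear DCACHE_PAR_LOOKUP,
					     * fetch the waitqueue head */
	/* ... rehash / splice ... */
	end_dir_add(dir, n, d_wait);	/* i_dir_seq even; RT: preemption on;
					 * then wake_up_all(d_wait) */
	spin_unlock(&dentry->d_lock);	/* waiters can now take ->d_lock */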

* Re: [PATCH 0/4 v2] fs/dcache: Resolve the last RT woes.
@ 2022-07-30  4:41 ` Al Viro
From: Al Viro @ 2022-07-30  4:41 UTC
  To: Sebastian Andrzej Siewior; +Cc: linux-fsdevel, Matthew Wilcox, Thomas Gleixner

On Wed, Jul 27, 2022 at 01:49:00PM +0200, Sebastian Andrzej Siewior wrote:
> This is v2 of the series, v1 is available at
>    https://lkml.kernel.org/r/20220613140712.77932-1-bigeasy@linutronix.de/
> 
> v1…v2:
>    - Turned Al's description of a bug in d_add_ci() into a patch. I took
>      the liberty of making him the author and added his Signed-off-by,
>      since I simply wrapped a patch body around his words.
> 
>    - The justification for why delaying the wakeup is safe has been
>      replaced with Al's analysis of the code.
> 
>    - The wake up has been split differently (and I hope this is what Al
>      meant): first the wake up has been pushed to the caller and then
>      delayed to end_dir_add(), after preemption has been enabled.
> 
>    - There are now __d_lookup_unhash() and __d_lookup_unhash_wake();
>      __d_lookup_done() is removed.
>      __d_lookup_done() is removed because it is exported and its return
>      value changes, which would affect out-of-tree users that are not
>      aware of it.
>      d_lookup_done() remains and now invokes __d_lookup_unhash_wake();
>      the wake up itself can't stay in the header file because cyclic
>      header dependencies prevent resolving wake_up_all() within the
>      inline function.

Applied, with slightly different first commit (we only care about
d_lookup_done() if d_splice_alias() returns non-NULL).

See vfs.git#work.dcache.

