RCU Archive mirror
From: "Paul E. McKenney" <paulmck@kernel.org>
To: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>,
	Steven Rostedt <rostedt@goodmis.org>, linke li <lilinke99@qq.com>,
	joel@joelfernandes.org, boqun.feng@gmail.com, dave@stgolabs.net,
	frederic@kernel.org, jiangshanlai@gmail.com,
	josh@joshtriplett.org, linux-kernel@vger.kernel.org,
	qiang.zhang1211@gmail.com, quic_neeraju@quicinc.com,
	rcu@vger.kernel.org
Subject: Re: [PATCH] rcutorture: Fix rcu_torture_pipe_update_one()/rcu_torture_writer() data race and concurrency bug
Date: Thu, 7 Mar 2024 16:58:50 -0800	[thread overview]
Message-ID: <c3b94335-cf11-456d-bec7-4e4262235868@paulmck-laptop> (raw)
In-Reply-To: <2721d70f-67a8-428f-903b-1c7c6c6da7f9@efficios.com>

On Thu, Mar 07, 2024 at 02:53:43PM -0500, Mathieu Desnoyers wrote:
> On 2024-03-07 14:47, Paul E. McKenney wrote:
> > On Thu, Mar 07, 2024 at 08:53:05AM -0500, Mathieu Desnoyers wrote:
> > > On 2024-03-06 22:37, Paul E. McKenney wrote:
> > > > On Wed, Mar 06, 2024 at 10:06:21PM -0500, Mathieu Desnoyers wrote:
> > > [...]
> > > > 
> > > > > As far as the WRITE_ONCE(x, READ_ONCE(x) + 1) pattern
> > > > > is concerned, the only valid use-case I can think of is
> > > > > split counters or RCU implementations where there is a
> > > > > single updater doing the increment, and one or more
> > > > > concurrent reader threads that need to snapshot a
> > > > > consistent value with READ_ONCE().
> > > > 
> > > [...]
> > > > 
> > > > So what would you use that pattern for?
> > > > 
> > > > One possibility is a per-CPU statistical counter in userspace on a
> > > > fastpath, in cases where losing the occasional count is OK.  Then learning
> > > > your CPU (and possibly being immediately migrated to some other CPU),
> > > > READ_ONCE() of the count, increment, and WRITE_ONCE() might (or might not)
> > > > make sense.
> > > > 
> > > > I suppose the same in the kernel if there was a fastpath so extreme you
> > > > could not afford to disable preemption.
> > > > 
> > > > At best, very niche.
> > > > 
> > > > Or am I suffering a failure of imagination yet again?  ;-)
> > > 
> > > The (niche) use-cases I have in mind are split-counters and RCU
> > > grace period tracking, where precise counter totals are needed
> > > (no lost counts).
> > > 
> > > In the kernel, this could be:
> > 
> > Thank you for looking into this!
> > 
> > > - A per-cpu counter, each counter incremented from thread context with
> > >    preemption disabled (single updater per counter), read concurrently by
> > >    other threads. WRITE_ONCE/READ_ONCE is useful to make sure there
> > >    is no store/load tearing there. Atomics on the update would be stronger
> > >    than necessary on the increment fast-path.
> > 
> > But if preemption is disabled, the updater can read the value without
> > READ_ONCE() without risk of concurrent update.  Or are you concerned about
> > interrupt handlers?  This would have to be a read from the interrupt
> > handler, given that an update from the interrupt handler could result
> > in a lost count.
> 
> You are correct that the updater doesn't need READ_ONCE() there. It would
> however require a WRITE_ONCE() to match READ_ONCE() from concurrent
> reader threads.
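The single-updater discipline just agreed on can be sketched as follows (hypothetical names; userspace volatile-cast stand-ins for the kernel macros): the sole updater may read plainly but publishes with WRITE_ONCE(), while concurrent readers use READ_ONCE().

```c
#include <stdint.h>

/* Userspace stand-ins for the kernel's READ_ONCE()/WRITE_ONCE(). */
#define READ_ONCE(x)     (*(volatile __typeof__(x) *)&(x))
#define WRITE_ONCE(x, v) (*(volatile __typeof__(x) *)&(x) = (v))

static uint64_t counter;	/* one updater, many lockless readers */

/* Sole updater (preemption disabled, or a per-thread counter): the
 * load may be plain since nobody else writes, but the store must be
 * WRITE_ONCE() so concurrent readers never observe a torn value. */
static void counter_inc(void)
{
	WRITE_ONCE(counter, counter + 1);
}

/* Concurrent reader: READ_ONCE() pairs with the updater's
 * WRITE_ONCE() to rule out load tearing. */
static uint64_t counter_read(void)
{
	return READ_ONCE(counter);
}
```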
> 
> > 
> > > - A per-thread counter (e.g. within task_struct), only incremented by the
> > >    single thread, read by various threads concurrently.
> > 
> > Ditto.
> 
> Right, only WRITE_ONCE() on the single updater, READ_ONCE() on readers.
> 
> > 
> > > - A counter whose increment is already protected by a lock, read
> > >    by various threads without taking the lock. (technically doable, but
> > >    I'm not sure I see a relevant use-case for it)
> > 
> > In that case, the lock would exclude concurrent updates, so the lock
> > holder would not need READ_ONCE(), correct?
> 
> Correct.
> 
> > 
> > > In user-space:
> > > 
> > > - The "per-cpu" counter would have to use rseq for increments to prevent
> > >    inopportune migrations, which needs to be implemented in assembler anyway.
> > >    The counter reads would have to use READ_ONCE().
> > 
> > Fair enough!
> > 
> > > - The per-thread counter (Thread-Local Storage) incremented by a single
> > >    thread, read by various threads concurrently, is a good target
> > >    for WRITE_ONCE()/READ_ONCE() pairing. This is actually what we do in
> > >    various liburcu implementations which track read-side critical sections
> > >    per-thread.
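What the liburcu-style per-thread scheme might look like, much simplified (hypothetical names; the volatile-cast macros stand in for the real implementations, and registration is left unserialized for brevity): each thread owns one counter that only it increments, and a registry of pointers lets other threads read every counter.

```c
#include <stdint.h>

/* Userspace stand-ins for the kernel's READ_ONCE()/WRITE_ONCE(). */
#define READ_ONCE(x)     (*(volatile __typeof__(x) *)&(x))
#define WRITE_ONCE(x, v) (*(volatile __typeof__(x) *)&(x) = (v))

#define MAX_THREADS 64

static _Thread_local uint64_t my_count;	/* owned by one thread */
static uint64_t *registry[MAX_THREADS];	/* lets others find it */
static int nr_registered;

/* Register the calling thread's counter.  A real implementation
 * would serialize registration with a lock; omitted here. */
static void thread_register(void)
{
	registry[nr_registered++] = &my_count;
}

/* Owner-only increment: plain load (single updater), marked store
 * so that concurrent readers cannot see a torn value. */
static void thread_count_inc(void)
{
	WRITE_ONCE(my_count, my_count + 1);
}

/* Any thread may sum every registered counter with READ_ONCE(). */
static uint64_t count_sum(void)
{
	uint64_t sum = 0;

	for (int i = 0; i < nr_registered; i++)
		sum += READ_ONCE(*registry[i]);
	return sum;
}
```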
> > 
> > Agreed, but do any of these use WRITE_ONCE(x, READ_ONCE(x) + 1) or
> > similar?
> 
> Not quite, I recall it's more like WRITE_ONCE(x, READ_ONCE(y)) or such,
> so we can grab the value of the current gp counter and store it into a
> TLS variable.

Good point, you could use that pattern to grab a shared snapshot.

Still sounds niche, but you never know!  ;-)
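The WRITE_ONCE(x, READ_ONCE(y)) snapshot pattern Mathieu describes could be sketched like this (hypothetical names; userspace stand-ins for the kernel macros): a reader grabs the current grace-period counter and stores it into a TLS variable that the updater later scans.

```c
/* Userspace stand-ins for the kernel's READ_ONCE()/WRITE_ONCE(). */
#define READ_ONCE(x)     (*(volatile __typeof__(x) *)&(x))
#define WRITE_ONCE(x, v) (*(volatile __typeof__(x) *)&(x) = (v))

static unsigned long gp_ctr;	/* bumped only by the grace-period updater */
static _Thread_local unsigned long reader_gp;	/* per-reader TLS snapshot */

/* Reader side: snapshot the current grace-period counter into a TLS
 * variable the updater will later scan.  READ_ONCE() keeps the load
 * of the moving gp_ctr untorn; WRITE_ONCE() keeps the updater's
 * scan of reader_gp untorn. */
static void rcu_read_enter(void)
{
	WRITE_ONCE(reader_gp, READ_ONCE(gp_ctr));
}
```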

							Thanx, Paul

> > > - Same as for the kernel, a counter increment protected by a lock which
> > >    needs to be read from various threads concurrently without taking
> > >    the lock could be a valid use-case, though I fail to see how it is
> > >    useful due to lack of imagination on my part. ;-)
> > 
> > In RCU, we have "WRITE_ONCE(*sp, *sp + 1)" for this use case, though
> > here we have the WRITE_ONCE() but not the READ_ONCE() because we hold
> > the lock excluding any other updates.
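The lock-plus-lockless-readers case quoted above might look like this in userspace (a sketch with hypothetical names; a pthread mutex stands in for the kernel lock, and the macros for the kernel's marked accesses):

```c
#include <pthread.h>

/* Userspace stand-ins for the kernel's READ_ONCE()/WRITE_ONCE(). */
#define READ_ONCE(x)     (*(volatile __typeof__(x) *)&(x))
#define WRITE_ONCE(x, v) (*(volatile __typeof__(x) *)&(x) = (v))

static pthread_mutex_t sp_lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned long sp_counter;	/* stands in for *sp above */

/* Updater: the lock excludes all other updaters, so the read half
 * of the increment needs no READ_ONCE(); WRITE_ONCE() still
 * protects lockless readers from store tearing. */
static void sp_inc(void)
{
	pthread_mutex_lock(&sp_lock);
	WRITE_ONCE(sp_counter, sp_counter + 1);
	pthread_mutex_unlock(&sp_lock);
}

/* Reader: takes no lock; READ_ONCE() alone suffices. */
static unsigned long sp_read(void)
{
	return READ_ONCE(sp_counter);
}
```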
> 
> You are right, the READ_ONCE() is not needed in this case for the
> updater, only for the concurrent readers.
> 
> Thanks,
> 
> Mathieu
> 
> -- 
> Mathieu Desnoyers
> EfficiOS Inc.
> https://www.efficios.com
> 

Thread overview: 43+ messages
2024-03-04 10:54 [PATCH] rcutorture: Fix rcu_torture_pipe_update_one()/rcu_torture_writer() data race and concurrency bug linke li
2024-03-04 16:19 ` Joel Fernandes
2024-03-04 17:14   ` Paul E. McKenney
2024-03-04 19:10     ` Joel Fernandes
2024-03-04 19:44       ` Paul E. McKenney
2024-03-04 20:13         ` Joel Fernandes
2024-03-04 20:47           ` Paul E. McKenney
2024-03-05  3:30             ` linke
2024-03-05  6:24   ` linke li
2024-03-06 15:37     ` Steven Rostedt
2024-03-06 17:36       ` Paul E. McKenney
2024-03-06 18:01         ` Steven Rostedt
2024-03-06 18:09           ` Paul E. McKenney
2024-03-06 18:20             ` Steven Rostedt
2024-03-06 18:43           ` Linus Torvalds
2024-03-06 18:55             ` Steven Rostedt
2024-03-06 19:01               ` Linus Torvalds
2024-03-06 19:27                 ` Linus Torvalds
2024-03-06 19:47                   ` Steven Rostedt
2024-03-06 20:06                     ` Linus Torvalds
2024-03-07 13:20                       ` Steven Rostedt
2024-03-07 16:12                         ` Steven Rostedt
2024-03-06 19:27                 ` Steven Rostedt
2024-03-06 19:46                   ` Linus Torvalds
2024-03-06 20:20                     ` Linus Torvalds
2024-03-07  2:29                     ` Paul E. McKenney
2024-03-07  2:43                       ` Linus Torvalds
2024-03-07  2:49                         ` Linus Torvalds
2024-03-07  3:21                           ` Paul E. McKenney
2024-03-07  3:06                         ` Paul E. McKenney
2024-03-07  3:06                         ` Mathieu Desnoyers
2024-03-07  3:37                           ` Paul E. McKenney
2024-03-07  5:44                             ` Joel Fernandes
2024-03-07 19:05                               ` Paul E. McKenney
2024-03-07 13:53                             ` Mathieu Desnoyers
2024-03-07 19:47                               ` Paul E. McKenney
2024-03-07 19:53                                 ` Mathieu Desnoyers
2024-03-08  0:58                                   ` Paul E. McKenney [this message]
2024-03-07 20:00                                 ` Linus Torvalds
2024-03-07 20:57                                   ` Paul E. McKenney
2024-03-07 21:40                                     ` Julia Lawall
2024-03-07 22:09                                       ` Linus Torvalds
2024-03-08  0:55                                         ` Paul E. McKenney
