From: "Paul E. McKenney" <paulmck@kernel.org>
To: Bartosz Golaszewski <brgl@bgdev.pl>
Cc: Linus Walleij <linus.walleij@linaro.org>,
	Kent Gibson <warthog618@gmail.com>,
	Neil Armstrong <neil.armstrong@linaro.org>,
	"open list:GPIO SUBSYSTEM" <linux-gpio@vger.kernel.org>
Subject: Re: Performance regression in GPIOLIB with SRCU when using the user-space ABI in a *wrong* way
Date: Mon, 6 May 2024 11:07:24 -0700	[thread overview]
Message-ID: <1abdbbea-7d2b-46f6-851d-94531e889136@paulmck-laptop> (raw)
In-Reply-To: <CAMRc=MfF4kkXToy+RSt4QPXPtsOEUcM4xpXdosWTxtjUy9x6CA@mail.gmail.com>

On Mon, May 06, 2024 at 07:46:18PM +0200, Bartosz Golaszewski wrote:
> On Mon, May 6, 2024 at 7:01 PM Paul E. McKenney <paulmck@kernel.org> wrote:
> >
> > On Mon, May 06, 2024 at 06:34:27PM +0200, Bartosz Golaszewski wrote:
> > > On Mon, May 6, 2024 at 3:55 PM Paul E. McKenney <paulmck@kernel.org> wrote:
> > > >
> > > > On Mon, May 06, 2024 at 02:32:57PM +0200, Bartosz Golaszewski wrote:
> > > > >
> > > > > The offending kernel code looks like this:
> > > > >
> > > > >     old = rcu_replace_pointer(desc->label, new, 1);
> > > > >     synchronize_srcu(&desc->srcu);
> > > > >     kfree_const(old);
> > > > >
> > > > > I was wondering if we even have to synchronize here? The corresponding
> > > > > read-only sections call srcu_dereference(desc->label, &desc->srcu).
> > > > > Would it be enough to implement kfree_const_rcu() and use it here
> > > > > without synchronizing? Would the read-only users correctly see that
> > > > > last dereferenced address still points to valid memory until they all
> > > > > release the lock and the memory would only then be freed? Is my
> > > > > understanding of kfree_rcu() correct?
> > > >
> > > > It looks like kfree_const() just does a kfree(), so why not use
> > > > call_srcu() to replace the above calls to synchronize_srcu() and
> > > > kfree_const()?
> > > >
> > > > Something like this:
> > > >
> > > >         if (!is_kernel_rodata((unsigned long)(old)))
> > > >                 call_srcu(&desc->srcu, &desc->rh, gpio_cb);
> > > >
> > > > This requires adding an rcu_head field named "rh" to the structure
> > > > referenced by "desc" and creating a gpio_cb() wrapper function:
> > > >
> > > > static void connection_release(struct rcu_head *rhp)
> > > > {
> > > >         struct beats_me *bmp = container_of(rhp, struct beats_me, rh);
> > > >
> > > >         kfree(bmp);
> > > > }
> > > >
> > > > I could not find the code, so I don't know what "beats_me" above
> > > > should be replaced with.
> > > >
> > > > Would that work?
> > >
> > > Thanks, this looks like a potential solution but something's not clear
> > > to me. What I want to free here is "old". However, its address is a
> > > field in struct beats_me and it's replaced (using
> > > rcu_replace_pointer()) before scheduling the free. When
> > > connection_release() is eventually called, I think the address
> > > under bmp->label will be the new one? How would I pass some arbitrary
> > > user data to this callback (if that's at all possible)?
> >
> > You are quite right, that "&desc->rh" should be "&old->rh", and the
> > struct rcu_head would need to go into the structure referenced by "old".
> >
> > Apologies for my confusion, but in my defense, I could not find this code.
> 
> Sorry, I should have pointed to it in the first place. It lives in
> drivers/gpio/gpiolib.c in desc_set_label(). So "old" here is actually
> a const char *, so I guess it would have to be made into a full-blown
> struct for this to work?

I should have been able to find that.  Maybe too early on Monday.  :-/

Anyway, yes, and one approach would be to have a structure containing
an rcu_head structure at the beginning and then the const char array
following that.  Please note that the rcu_head structure is modified by
call_srcu(), and thus cannot be const.
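
A minimal sketch of such a wrapper (the names gpio_desc_label, desc_free_label,
and desc_label_alloc are my own invention, and the rcu_head/container_of
definitions below are userspace stand-ins so the layout can be shown outside
the kernel):

```c
#include <stddef.h>
#include <stdlib.h>
#include <string.h>

/* Userspace stand-ins for the kernel definitions, for illustration only. */
struct rcu_head {
	struct rcu_head *next;
	void (*func)(struct rcu_head *);
};
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* Hypothetical wrapper: rcu_head first (it is modified by call_srcu(), so
 * it cannot live in rodata), the label text following as a flexible array. */
struct gpio_desc_label {
	struct rcu_head rh;
	char str[];
};

/* The callback handed to call_srcu(): recover the enclosing structure
 * from the rcu_head pointer and free the whole allocation (kfree() in
 * the kernel; plain free() here). */
static void desc_free_label(struct rcu_head *rhp)
{
	free(container_of(rhp, struct gpio_desc_label, rh));
}

/* Allocate a wrapper holding a copy of the label text. */
static struct gpio_desc_label *desc_label_alloc(const char *text)
{
	struct gpio_desc_label *l = malloc(sizeof(*l) + strlen(text) + 1);

	if (l)
		strcpy(l->str, text);
	return l;
}
```

In the kernel, desc_set_label() would then rcu_replace_pointer() the old
label and hand &old->rh to call_srcu() with desc_free_label as the callback,
while readers keep using srcu_dereference() and read ->str.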

> Also: do I need to somehow manage struct rcu_head? Initialize/release
> it with some dedicated API? I couldn't find anything.

No management is needed unless the enclosing structure is allocated
on the stack.  In that case, you need to pass a pointer to the rcu_head
structure to init_rcu_head_on_stack() before passing it to call_rcu()
and to destroy_rcu_head_on_stack() before the rcu_head structure goes
out of scope.

But in the usual case where the rcu_head structure is dynamically
allocated, no management, initialization, or cleanup is needed.
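
The on-stack calling pattern looks like the sketch below.  The two hook
functions here are instrumented stand-ins for the real kernel ones (which
live in include/linux/rcupdate.h and are no-ops unless
CONFIG_DEBUG_OBJECTS_RCU_HEAD is set); the counters exist only so this
sketch can be checked outside the kernel:

```c
/* Stand-in type, for illustration outside the kernel. */
struct rcu_head {
	struct rcu_head *next;
	void (*func)(struct rcu_head *);
};

/* Instrumentation for this sketch only. */
static int stack_inits, stack_destroys;

/* Stand-ins for the kernel's debug-objects hooks. */
static void init_rcu_head_on_stack(struct rcu_head *head)
{
	(void)head;
	stack_inits++;
}

static void destroy_rcu_head_on_stack(struct rcu_head *head)
{
	(void)head;
	stack_destroys++;
}

/* The pattern: init before handing the rcu_head to call_rcu()/call_srcu(),
 * destroy before the rcu_head goes out of scope.  The function must wait
 * for the callback (e.g. via a completion) before returning. */
void on_stack_rcu_head_pattern(void)
{
	struct rcu_head rh;

	init_rcu_head_on_stack(&rh);
	/* ... call_srcu(&srcu, &rh, cb); wait for cb to run ... */
	destroy_rcu_head_on_stack(&rh);
}
```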

							Thanx, Paul

Thread overview: 6+ messages
2024-05-06 12:32 Performance regression in GPIOLIB with SRCU when using the user-space ABI in a *wrong* way Bartosz Golaszewski
2024-05-06 13:55 ` Paul E. McKenney
2024-05-06 16:34   ` Bartosz Golaszewski
2024-05-06 17:01     ` Paul E. McKenney
2024-05-06 17:46       ` Bartosz Golaszewski
2024-05-06 18:07         ` Paul E. McKenney [this message]
