IOMMU Archive mirror
From: "Li,Rongqing" <lirongqing@baidu.com>
To: Robin Murphy <robin.murphy@arm.com>,
	"joro@8bytes.org" <joro@8bytes.org>,
	"will@kernel.org" <will@kernel.org>,
	"iommu@lists.linux.dev" <iommu@lists.linux.dev>
Subject: RE: [PATCH] iommu/iova: Consider NUMA affinity when allocating memory for per-CPU iova_magazine
Date: Wed, 17 Apr 2024 12:06:00 +0000	[thread overview]
Message-ID: <b36e1e64d06149f7bdd70add8607ca46@baidu.com> (raw)
In-Reply-To: <1bdeb8a0-4b92-4cf4-93b9-60528fb487a5@arm.com>



> On 15/04/2024 7:56 am, Li RongQing wrote:
> > per-CPU iova_magazine are dominantly accessed from their own local
> > CPUs, so allocate them node-local to improve performance.
> 
> Note that this will only hold for certain workloads (typically rather light ones, I'd
> guess) - do you have one where this makes a measurable difference? If so I'd be
> interested to know what sort of numbers we're talking about.
> 

I used iperf to run some simple tests on Intel servers and saw no obvious difference; it might give a measurable win on some NUMA-unfriendly systems.
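(For anyone trying to reproduce this: the per-CPU node mapping that cpu_to_node() uses in the kernel can be inspected from userspace through the standard sysfs topology layout. A quick sketch; on a machine with a single NUMA node every CPU reports node 0 and the patch is effectively a no-op:)

```shell
# List each possible CPU and the NUMA node sysfs associates with it.
# This mirrors what cpu_to_node() returns in the kernel.
for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
    # Each cpuN directory contains a nodeM symlink on NUMA kernels.
    node_dir=$(ls -d "$cpu"/node* 2>/dev/null | head -n1)
    # Assume node 0 when no NUMA info is exposed (non-NUMA kernel/config).
    [ -n "$node_dir" ] || node_dir=node0
    echo "${cpu##*/}: node ${node_dir##*node}"
done
```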

> > Signed-off-by: Li RongQing <lirongqing@baidu.com>
> > ---
> >   drivers/iommu/iova.c | 15 +++++++++------
> >   1 file changed, 9 insertions(+), 6 deletions(-)
> >
> > diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
> > index d59d0ea..70f89ea 100644
> > --- a/drivers/iommu/iova.c
> > +++ b/drivers/iommu/iova.c
> > @@ -597,11 +597,11 @@ unsigned long iova_rcache_range(void)
> >   	return PAGE_SIZE << (IOVA_RANGE_CACHE_MAX_SIZE - 1);
> >   }
> >
> > -static struct iova_magazine *iova_magazine_alloc(gfp_t flags)
> > +static struct iova_magazine *iova_magazine_alloc_node(gfp_t flags, int node)
> >   {
> >   	struct iova_magazine *mag;
> >
> > -	mag = kmem_cache_alloc(iova_magazine_cache, flags);
> > +	mag = kmem_cache_alloc_node(iova_magazine_cache, flags, node);
> >   	if (mag)
> >   		mag->size = 0;
> >
> > @@ -707,7 +707,7 @@ static void iova_depot_work_func(struct work_struct *work)
> >   int iova_domain_init_rcaches(struct iova_domain *iovad)
> >   {
> >   	unsigned int cpu;
> > -	int i, ret;
> > +	int i, ret, nid;
> >
> >   	iovad->rcaches = kcalloc(IOVA_RANGE_CACHE_MAX_SIZE,
> >   				 sizeof(struct iova_rcache),
> > @@ -731,10 +731,11 @@ int iova_domain_init_rcaches(struct iova_domain *iovad)
> >   		}
> >   		for_each_possible_cpu(cpu) {
> >   			cpu_rcache = per_cpu_ptr(rcache->cpu_rcaches, cpu);
> > +			nid = cpu_to_node(cpu);
> >
> >   			spin_lock_init(&cpu_rcache->lock);
> > -			cpu_rcache->loaded = iova_magazine_alloc(GFP_KERNEL);
> > -			cpu_rcache->prev = iova_magazine_alloc(GFP_KERNEL);
> > +			cpu_rcache->loaded = iova_magazine_alloc_node(GFP_KERNEL, nid);
> > +			cpu_rcache->prev = iova_magazine_alloc_node(GFP_KERNEL, nid);
> 
> This much seems reasonable - however small the benefit might be, there's little
> harm in aiming for optimal initial conditions, for as long as they do happen to
> last...
> 
> >   			if (!cpu_rcache->loaded || !cpu_rcache->prev) {
> >   				ret = -ENOMEM;
> >   				goto out_err;
> > @@ -777,7 +778,9 @@ static bool __iova_rcache_insert(struct iova_domain *iovad,
> >   		swap(cpu_rcache->prev, cpu_rcache->loaded);
> >   		can_insert = true;
> >   	} else {
> > -		struct iova_magazine *new_mag = iova_magazine_alloc(GFP_ATOMIC);
> > +		int nid = cpu_to_node(raw_smp_processor_id());
> > +		struct iova_magazine *new_mag =
> > +			iova_magazine_alloc_node(GFP_ATOMIC, nid);
> 
> ...however if we get to this point then we're already busy enough to be churning
> magazines between the CPU caches and the global depot, so it's unlikely that
> new ones are going to stay CPU-local for very long either.
> 

We can drop this change.

Thanks

-Li


Thread overview: 3+ messages
2024-04-15  6:56 [PATCH] iommu/iova: Consider NUMA affinity when allocating memory for per-CPU iova_magazine Li RongQing
2024-04-15 14:04 ` Robin Murphy
2024-04-17 12:06   ` Li,Rongqing [this message]
