From: "Clément Léger" <cleger@rivosinc.com>
To: Alexandre Ghiti <alexghiti@rivosinc.com>, Paul Walmsley <paul.walmsley@sifive.com>, Palmer Dabbelt <palmer@dabbelt.com>, Albert Ou <aou@eecs.berkeley.edu>, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2] riscv: Call secondary mmu notifier when flushing the tlb
Date: Wed, 10 Apr 2024 15:30:08 +0200	[thread overview]
Message-ID: <b40ee162-0b65-4259-a14c-e927a4f90ea6@rivosinc.com> (raw)
In-Reply-To: <20240328073838.8776-1-alexghiti@rivosinc.com>

On 28/03/2024 08:38, Alexandre Ghiti wrote:
> This is required to allow the IOMMU driver to correctly flush its own
> TLB.
>
> Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
> ---
>
> Changes in v2:
> - Rebase on top of 6.9-rc1
>
>  arch/riscv/mm/tlbflush.c | 39 +++++++++++++++++++++++----------------
>  1 file changed, 23 insertions(+), 16 deletions(-)
>
> diff --git a/arch/riscv/mm/tlbflush.c b/arch/riscv/mm/tlbflush.c
> index 893566e004b7..854d984deb07 100644
> --- a/arch/riscv/mm/tlbflush.c
> +++ b/arch/riscv/mm/tlbflush.c
> @@ -4,6 +4,7 @@
>  #include <linux/smp.h>
>  #include <linux/sched.h>
>  #include <linux/hugetlb.h>
> +#include <linux/mmu_notifier.h>
>  #include <asm/sbi.h>
>  #include <asm/mmu_context.h>
>
> @@ -99,11 +100,19 @@ static void __ipi_flush_tlb_range_asid(void *info)
>  	local_flush_tlb_range_asid(d->start, d->size, d->stride, d->asid);
>  }
>
> -static void __flush_tlb_range(struct cpumask *cmask, unsigned long asid,
> +static inline unsigned long get_mm_asid(struct mm_struct *mm)

Hi Alex,

Nit: the inline attribute is probably useless.

> +{
> +	return (mm && static_branch_unlikely(&use_asid_allocator)) ?
> +		atomic_long_read(&mm->context.id) & asid_mask : FLUSH_TLB_NO_ASID;
> +}
> +
> +static void __flush_tlb_range(struct mm_struct *mm,
> +			      struct cpumask *cmask,
>  			      unsigned long start, unsigned long size,
>  			      unsigned long stride)
>  {
>  	struct flush_tlb_range_data ftd;
> +	unsigned long asid = get_mm_asid(mm);
>  	bool broadcast;
>
>  	if (cpumask_empty(cmask))
> @@ -137,31 +146,26 @@ static void __flush_tlb_range(struct cpumask *cmask, unsigned long asid,
>
>  	if (cmask != cpu_online_mask)
>  		put_cpu();
> -}
>
> -static inline unsigned long get_mm_asid(struct mm_struct *mm)
> -{
> -	return static_branch_unlikely(&use_asid_allocator) ?
> -		atomic_long_read(&mm->context.id) & asid_mask : FLUSH_TLB_NO_ASID;
> +	if (mm)
> +		mmu_notifier_arch_invalidate_secondary_tlbs(mm, start, start + size);
>  }
>
>  void flush_tlb_mm(struct mm_struct *mm)
>  {
> -	__flush_tlb_range(mm_cpumask(mm), get_mm_asid(mm),
> -			  0, FLUSH_TLB_MAX_SIZE, PAGE_SIZE);
> +	__flush_tlb_range(mm, mm_cpumask(mm), 0, FLUSH_TLB_MAX_SIZE, PAGE_SIZE);
>  }
>
>  void flush_tlb_mm_range(struct mm_struct *mm,
>  			unsigned long start, unsigned long end,
>  			unsigned int page_size)
>  {
> -	__flush_tlb_range(mm_cpumask(mm), get_mm_asid(mm),
> -			  start, end - start, page_size);
> +	__flush_tlb_range(mm, mm_cpumask(mm), start, end - start, page_size);
>  }
>
>  void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)
>  {
> -	__flush_tlb_range(mm_cpumask(vma->vm_mm), get_mm_asid(vma->vm_mm),
> +	__flush_tlb_range(vma->vm_mm, mm_cpumask(vma->vm_mm),
>  			  addr, PAGE_SIZE, PAGE_SIZE);
>  }
>
> @@ -194,13 +198,13 @@ void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
>  		}
>  	}
>
> -	__flush_tlb_range(mm_cpumask(vma->vm_mm), get_mm_asid(vma->vm_mm),
> +	__flush_tlb_range(vma->vm_mm, mm_cpumask(vma->vm_mm),
>  			  start, end - start, stride_size);
>  }
>
>  void flush_tlb_kernel_range(unsigned long start, unsigned long end)
>  {
> -	__flush_tlb_range((struct cpumask *)cpu_online_mask, FLUSH_TLB_NO_ASID,
> +	__flush_tlb_range(NULL, (struct cpumask *)cpu_online_mask,
>  			  start, end - start, PAGE_SIZE);
>  }
>
> @@ -208,7 +212,7 @@ void flush_tlb_kernel_range(unsigned long start, unsigned long end)
>  void flush_pmd_tlb_range(struct vm_area_struct *vma, unsigned long start,
>  			 unsigned long end)
>  {
> -	__flush_tlb_range(mm_cpumask(vma->vm_mm), get_mm_asid(vma->vm_mm),
> +	__flush_tlb_range(vma->vm_mm, mm_cpumask(vma->vm_mm),
>  			  start, end - start, PMD_SIZE);
>  }
>  #endif
> @@ -222,7 +226,10 @@ void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
>  			       struct mm_struct *mm,
>  			       unsigned long uaddr)
>  {
> +	unsigned long start = uaddr & PAGE_MASK;
> +
>  	cpumask_or(&batch->cpumask, &batch->cpumask, mm_cpumask(mm));
> +	mmu_notifier_arch_invalidate_secondary_tlbs(mm, start, start + PAGE_SIZE);
>  }
>
>  void arch_flush_tlb_batched_pending(struct mm_struct *mm)
> @@ -232,7 +239,7 @@ void arch_flush_tlb_batched_pending(struct mm_struct *mm)
>
>  void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
>  {
> -	__flush_tlb_range(&batch->cpumask, FLUSH_TLB_NO_ASID, 0,
> -			  FLUSH_TLB_MAX_SIZE, PAGE_SIZE);
> +	__flush_tlb_range(NULL, &batch->cpumask,
> +			  0, FLUSH_TLB_MAX_SIZE, PAGE_SIZE);
>  	cpumask_clear(&batch->cpumask);
>  }

Other than that, looks good to me,

Reviewed-by: Clément Léger <cleger@rivosinc.com>

Thanks,

Clément