Date: Wed, 3 Apr 2024 19:32:25 +0100
From: Catalin Marinas
To: Kefeng Wang
Cc: x86@kernel.org, Albert Ou, linux-s390@vger.kernel.org, Peter Zijlstra, Dave Hansen, Russell King, surenb@google.com, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, Palmer Dabbelt, Nicholas Piggin, Andy Lutomirski, Paul Walmsley, akpm@linux-foundation.org, Gerald Schaefer, Will Deacon, Alexander Gordeev, linux-arm-kernel@lists.infradead.org
Subject: Re: [PATCH 2/7] arm64: mm: accelerate pagefault when VM_FAULT_BADACCESS
In-Reply-To: <20240402075142.196265-3-wangkefeng.wang@huawei.com>
References: <20240402075142.196265-1-wangkefeng.wang@huawei.com> <20240402075142.196265-3-wangkefeng.wang@huawei.com>

On Tue, Apr 02, 2024 at 03:51:37PM +0800, Kefeng Wang wrote:
> The vm_flags of vma already checked under per-VMA lock, if it is a
> bad access, directly set fault to VM_FAULT_BADACCESS and handle error,
> no need to lock_mm_and_find_vma() and check vm_flags again, the latency
> time reduce 34% in lmbench 'lat_sig -P 1 prot lat_sig'.
>
> Signed-off-by: Kefeng Wang
> ---
>  arch/arm64/mm/fault.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
> index 9bb9f395351a..405f9aa831bd 100644
> --- a/arch/arm64/mm/fault.c
> +++ b/arch/arm64/mm/fault.c
> @@ -572,7 +572,9 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
>
>  	if (!(vma->vm_flags & vm_flags)) {
>  		vma_end_read(vma);
> -		goto lock_mmap;
> +		fault = VM_FAULT_BADACCESS;
> +		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
> +		goto done;
>  	}
>  	fault = handle_mm_fault(vma, addr, mm_flags | FAULT_FLAG_VMA_LOCK, regs);
>  	if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))

I think this makes sense. A concurrent modification of vma->vm_flags
(e.g. mprotect()) would do a vma_start_write(), so there is no need to
recheck the flags with the mmap lock held.

Reviewed-by: Catalin Marinas