perfbook.vger.kernel.org archive mirror
From: Alan Huang <mmpgouride@gmail.com>
To: paulmck@kernel.org, akiyks@gmail.com
Cc: perfbook@vger.kernel.org, Alan Huang <mmpgouride@gmail.com>
Subject: [PATCH] CodeSamples/count: Remove unnecessary memory barriers
Date: Fri,  7 Apr 2023 13:58:13 -0400	[thread overview]
Message-ID: <20230407175813.1334028-1-mmpgouride@gmail.com> (raw)

In count_lim_sig.c, only one ordering is required: the write to
counter must happen before theft is set to THEFT_READY in
add_count()'s and sub_count()'s fastpaths. A release store (a
partial memory barrier) therefore suffices, and the full memory
barriers can be removed.

Signed-off-by: Alan Huang <mmpgouride@gmail.com>
---
 CodeSamples/count/count_lim_sig.c | 10 +++-------
 count/count.tex                   | 12 ++++++------
 2 files changed, 9 insertions(+), 13 deletions(-)

diff --git a/CodeSamples/count/count_lim_sig.c b/CodeSamples/count/count_lim_sig.c
index 59da8077..c2f61197 100644
--- a/CodeSamples/count/count_lim_sig.c
+++ b/CodeSamples/count/count_lim_sig.c
@@ -56,12 +56,10 @@ static void flush_local_count_sig(int unused)	//\lnlbl{flush_sig:b}
 {
 	if (READ_ONCE(theft) != THEFT_REQ)	//\lnlbl{flush_sig:check:REQ}
 		return;				//\lnlbl{flush_sig:return:n}
-	smp_mb();				//\lnlbl{flush_sig:mb:1}
 	WRITE_ONCE(theft, THEFT_ACK);		//\lnlbl{flush_sig:set:ACK}
 	if (!counting) {			//\lnlbl{flush_sig:check:fast}
-		WRITE_ONCE(theft, THEFT_READY);	//\lnlbl{flush_sig:set:READY}
+		smp_store_release(&theft, THEFT_READY);	//\lnlbl{flush_sig:set:READY}
 	}
-	smp_mb();
 }						//\lnlbl{flush_sig:e}
 
 static void flush_local_count(void)			//\lnlbl{flush:b}
@@ -125,8 +123,7 @@ int add_count(unsigned long delta)			//\lnlbl{b}
 	WRITE_ONCE(counting, 0);			//\lnlbl{clearcnt}
 	barrier();					//\lnlbl{barrier:3}
 	if (READ_ONCE(theft) == THEFT_ACK) {		//\lnlbl{check:ACK}
-		smp_mb();				//\lnlbl{mb}
-		WRITE_ONCE(theft, THEFT_READY);		//\lnlbl{READY}
+		smp_store_release(&theft, THEFT_READY);		//\lnlbl{READY}
 	}
 	if (fastpath)
 		return 1;				//\lnlbl{return:fs}
@@ -164,8 +161,7 @@ int sub_count(unsigned long delta)
 	WRITE_ONCE(counting, 0);
 	barrier();
 	if (READ_ONCE(theft) == THEFT_ACK) {
-		smp_mb();
-		WRITE_ONCE(theft, THEFT_READY);
+		smp_store_release(&theft, THEFT_READY);
 	}
 	if (fastpath)
 		return 1;
diff --git a/count/count.tex b/count/count.tex
index 8ab67e2e..4c139d63 100644
--- a/count/count.tex
+++ b/count/count.tex
@@ -2425,12 +2425,11 @@ handler used in the theft process.
 \Clnref{check:REQ,return:n} check to see if
 the \co{theft} state is REQ, and, if not
 returns without change.
-\Clnref{mb:1} executes a \IX{memory barrier} to ensure that the sampling
-of the theft variable happens before any change to that variable.
 \Clnref{set:ACK} sets the \co{theft} state to ACK, and, if
 \clnref{check:fast} sees that
 this thread's fastpaths are not running, \clnref{set:READY} sets the \co{theft}
-state to READY\@.
+state to READY, with the release store ensuring that any change to
+\co{counter} in the fastpath happens before \co{theft} is set to READY\@.
 \end{fcvref}
 
 \begin{listing}
@@ -2595,9 +2594,10 @@ handlers to undertake theft.
 \Clnref{barrier:3} again disables compiler reordering, and then
 \clnref{check:ACK}
 checks to see if the signal handler deferred the \co{theft}
-state-change to READY, and, if so, \clnref{mb} executes a memory
-barrier to ensure that any CPU that sees \clnref{READY} setting state to
-READY also sees the effects of \clnref{add:f}.
+state-change to READY, and, if so, \clnref{READY} uses a release
+store to set \co{theft} to READY, ensuring that
+any CPU that sees the READY state also sees the effects
+of \clnref{add:f}.
 If the fastpath addition at \clnref{add:f} was executed, then
 \clnref{return:fs} returns
 success.
-- 
2.34.1



Thread overview: 10+ messages
2023-04-07 17:58 Alan Huang [this message]
2023-04-07 18:04 ` [PATCH] CodeSamples/count: Remove unnecessary memory barriers Alan Huang
2023-04-09  0:11   ` Akira Yokosawa
2023-04-10 19:06     ` Paul E. McKenney
2023-04-11 16:34       ` Alan Huang
2023-04-12 11:32         ` Elad Lahav
2023-04-12 18:50           ` Paul E. McKenney
2023-04-08  4:25 ` Akira Yokosawa
2023-04-08  6:02   ` Alan Huang
2023-04-08  6:04   ` Alan Huang
