Linux-Block Archive mirror
* Re: [PATCH v3 0/4] block/badblocks: fix badblocks setting error
  2023-06-21 17:20 [PATCH v3 0/4] block/badblocks: fix badblocks setting error linan666
@ 2023-06-21 13:32 ` Ashok Raj
  2023-06-25  8:13   ` Li Nan
  2023-06-21 17:20 ` [PATCH v3 1/4] block/badblocks: change some members of badblocks to bool linan666
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 13+ messages in thread
From: Ashok Raj @ 2023-06-21 13:32 UTC
  To: linan666
  Cc: axboe, linan122, dan.j.williams, vishal.l.verma, linux-block,
	linux-kernel, yukuai3, yi.zhang, houtao1, yangerkun, Ashok Raj

On Thu, Jun 22, 2023 at 01:20:48AM +0800, linan666@huaweicloud.com wrote:
> From: Li Nan <linan122@huawei.com>
> 
> This patch series fixes some simple bugs in setting badblocks and
> optimizes struct badblocks. Coly Li has been trying to refactor badblocks
> in patch series "badblocks improvement for multiple bad block ranges", but
> the workload is significant. Before that, I will fix some easily triggered
  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You mean the refactor is going to take longer to complete? 
If so, maybe state it that way...


* Re: [PATCH v3 1/4] block/badblocks: change some members of badblocks to bool
  2023-06-21 17:20 ` [PATCH v3 1/4] block/badblocks: change some members of badblocks to bool linan666
@ 2023-06-21 14:02   ` Ashok Raj
  2023-06-25  9:11     ` Li Nan
  0 siblings, 1 reply; 13+ messages in thread
From: Ashok Raj @ 2023-06-21 14:02 UTC
  To: linan666
  Cc: axboe, linan122, dan.j.williams, vishal.l.verma, linux-block,
	linux-kernel, yukuai3, yi.zhang, houtao1, yangerkun, Ashok Raj

On Thu, Jun 22, 2023 at 01:20:49AM +0800, linan666@huaweicloud.com wrote:
> From: Li Nan <linan122@huawei.com>
> 
> "changed" and "unacked_exist" are used as boolean type. Change the type
> of them to bool. And reorder fields to reduce memory hole.

minor nit: If you use a diff orderfile (diff.orderFile) that lists .h before .c, it
will make the patches easier to review in order.
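
Something like this works, I believe (the file name and location here are
arbitrary):

  $ printf '*.h\n*.c\n' > ~/.gitorderfile
  $ git config diff.orderFile "$HOME/.gitorderfile"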

I don't know if it's even worth doing this manual compaction unless you are
storing the entire struct in some flash or it sits in a cache-sensitive,
heavily thrashed structure.
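
If you do want to check the layout, pahole will show any holes (assuming a
vmlinux built with debug info), e.g.:

  $ pahole -C badblocks vmlinux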

bool is useful in that it makes the code easier to read and eliminates the
class of bugs you would otherwise need the !! operator to guard against.
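
To illustrate (standalone userspace sketch, not from the patch; the names
are made up):

  #include <stdbool.h>
  #include <stdio.h>

  #define DIRTY_BIT 0x2   /* hypothetical flag bits, for illustration only */
  #define VALID_BIT 0x4

  struct flags {
          int  dirty;     /* plain int used as a flag */
          bool valid;
  };

  int main(void)
  {
          unsigned int status = DIRTY_BIT | VALID_BIT;
          struct flags f;

          /* with a plain int you must normalize with !!, or f.dirty ends up as 0x2 */
          f.dirty = !!(status & DIRTY_BIT);
          /* with bool, the assignment itself normalizes any non-zero value to 1 */
          f.valid = status & VALID_BIT;

          printf("%d %d\n", f.dirty, f.valid);   /* prints "1 1" */
          return 0;
  }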

> 
> No functional change intended.
> 
> Signed-off-by: Li Nan <linan122@huawei.com>
> ---
>  block/badblocks.c         | 14 +++++++-------
>  include/linux/badblocks.h | 10 +++++-----
>  2 files changed, 12 insertions(+), 12 deletions(-)
> 
> diff --git a/block/badblocks.c b/block/badblocks.c
> index 3afb550c0f7b..1b4caa42c5f1 100644
> --- a/block/badblocks.c
> +++ b/block/badblocks.c
> @@ -141,7 +141,7 @@ static void badblocks_update_acked(struct badblocks *bb)
>  	}
>  
>  	if (!unacked)
> -		bb->unacked_exist = 0;
> +		bb->unacked_exist = false;
>  }
>  
>  /**
> @@ -302,9 +302,9 @@ int badblocks_set(struct badblocks *bb, sector_t s, int sectors,
>  		}
>  	}
>  
> -	bb->changed = 1;
> +	bb->changed = true;
>  	if (!acknowledged)
> -		bb->unacked_exist = 1;
> +		bb->unacked_exist = true;
>  	else
>  		badblocks_update_acked(bb);
>  	write_sequnlock_irqrestore(&bb->lock, flags);
> @@ -414,7 +414,7 @@ int badblocks_clear(struct badblocks *bb, sector_t s, int sectors)
>  	}
>  
>  	badblocks_update_acked(bb);
> -	bb->changed = 1;
> +	bb->changed = true;
>  out:
>  	write_sequnlock_irq(&bb->lock);
>  	return rv;
> @@ -435,7 +435,7 @@ void ack_all_badblocks(struct badblocks *bb)
>  		return;
>  	write_seqlock_irq(&bb->lock);
>  
> -	if (bb->changed == 0 && bb->unacked_exist) {
> +	if (bb->changed == false && bb->unacked_exist) {

	if (!bb->changed && bb->unacked_exist)


>  		u64 *p = bb->page;
>  		int i;
>  
> @@ -447,7 +447,7 @@ void ack_all_badblocks(struct badblocks *bb)
>  				p[i] = BB_MAKE(start, len, 1);
>  			}
>  		}
> -		bb->unacked_exist = 0;
> +		bb->unacked_exist = false;
>  	}
>  	write_sequnlock_irq(&bb->lock);
>  }
> @@ -493,7 +493,7 @@ ssize_t badblocks_show(struct badblocks *bb, char *page, int unack)
>  				length << bb->shift);
>  	}
>  	if (unack && len == 0)
> -		bb->unacked_exist = 0;
> +		bb->unacked_exist = false;
>  
>  	if (read_seqretry(&bb->lock, seq))
>  		goto retry;
> diff --git a/include/linux/badblocks.h b/include/linux/badblocks.h
> index 2426276b9bd3..c2723f97d22d 100644
> --- a/include/linux/badblocks.h
> +++ b/include/linux/badblocks.h
> @@ -27,15 +27,15 @@
>  struct badblocks {
>  	struct device *dev;	/* set by devm_init_badblocks */
>  	int count;		/* count of bad blocks */
> -	int unacked_exist;	/* there probably are unacknowledged
> -				 * bad blocks.  This is only cleared
> -				 * when a read discovers none
> -				 */
>  	int shift;		/* shift from sectors to block size
>  				 * a -ve shift means badblocks are
>  				 * disabled.*/
> +	bool unacked_exist;	/* there probably are unacknowledged
> +				 * bad blocks.  This is only cleared
> +				 * when a read discovers none

read of what?

> +				 */
> +	bool changed;
>  	u64 *page;		/* badblock list */
> -	int changed;
>  	seqlock_t lock;
>  	sector_t sector;
>  	sector_t size;		/* in sectors */
> -- 
> 2.39.2
> 


* Re: [PATCH v3 3/4] block/badblocks: fix badblocks loss when badblocks combine
  2023-06-21 17:20 ` [PATCH v3 3/4] block/badblocks: fix badblocks loss when badblocks combine linan666
@ 2023-06-21 14:09   ` Ashok Raj
  2023-06-25  9:16     ` Li Nan
  0 siblings, 1 reply; 13+ messages in thread
From: Ashok Raj @ 2023-06-21 14:09 UTC
  To: linan666
  Cc: axboe, linan122, dan.j.williams, vishal.l.verma, linux-block,
	linux-kernel, yukuai3, yi.zhang, houtao1, yangerkun, Ashok Raj

On Thu, Jun 22, 2023 at 01:20:51AM +0800, linan666@huaweicloud.com wrote:
> From: Li Nan <linan122@huawei.com>
> 
> badblocks will be lost if we set them as below:
> 
>   # echo 1 1 > bad_blocks
>   # echo 3 1 > bad_blocks
>   # echo 1 5 > bad_blocks
>   # cat bad_blocks
>     1 3
> 
> In badblocks_set(), if there is an intersection between p[lo] and p[hi],
> they are combined. The end of the combined range is currently taken from
> p[hi], but p[lo] may extend past p[hi], so the new end should be the larger
> of p[lo]'s end and p[hi]'s end.

Consider rewriting the commit log. It reads as if the code were converted
directly into a sentence ;-).

Also, it might help to show what the above example's cat bad_blocks output
looks like after the patch.
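
I'd guess the above would collapse into a single range after the fix,
something like:

  # cat bad_blocks
    1 5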

> 
> Fixes: 9e0e252a048b ("badblocks: Add core badblock management code")
> Signed-off-by: Li Nan <linan122@huawei.com>
> ---
>  block/badblocks.c | 10 ++++------
>  1 file changed, 4 insertions(+), 6 deletions(-)
> 
> diff --git a/block/badblocks.c b/block/badblocks.c
> index 7e6ebe2ac12c..2c2ef8284a3f 100644
> --- a/block/badblocks.c
> +++ b/block/badblocks.c
> @@ -267,16 +267,14 @@ int badblocks_set(struct badblocks *bb, sector_t s, int sectors,
>  	if (sectors == 0 && hi < bb->count) {
>  		/* we might be able to combine lo and hi */
>  		/* Note: 's' is at the end of 'lo' */
> -		sector_t a = BB_OFFSET(p[hi]);
> -		int lolen = BB_LEN(p[lo]);
> -		int hilen = BB_LEN(p[hi]);
> -		int newlen = lolen + hilen - (s - a);
> +		sector_t a = BB_OFFSET(p[lo]);
> +		int newlen = max(s, BB_OFFSET(p[hi]) + BB_LEN(p[hi])) - a;
>  
> -		if (s >= a && newlen < BB_MAX_LEN) {
> +		if (s >= BB_OFFSET(p[hi]) && newlen < BB_MAX_LEN) {
>  			/* yes, we can combine them */
>  			int ack = BB_ACK(p[lo]) && BB_ACK(p[hi]);
>  
> -			p[lo] = BB_MAKE(BB_OFFSET(p[lo]), newlen, ack);
> +			p[lo] = BB_MAKE(a, newlen, ack);
>  			memmove(p + hi, p + hi + 1,
>  				(bb->count - hi - 1) * 8);
>  			bb->count--;
> -- 
> 2.39.2
> 


* Re: [PATCH v3 4/4] block/badblocks: fix the bug of reverse order
  2023-06-21 17:20 ` [PATCH v3 4/4] block/badblocks: fix the bug of reverse order linan666
@ 2023-06-21 14:15   ` Ashok Raj
  2023-06-25  9:22     ` Li Nan
  0 siblings, 1 reply; 13+ messages in thread
From: Ashok Raj @ 2023-06-21 14:15 UTC
  To: linan666
  Cc: axboe, linan122, dan.j.williams, vishal.l.verma, linux-block,
	linux-kernel, yukuai3, yi.zhang, houtao1, yangerkun, Ashok Raj

On Thu, Jun 22, 2023 at 01:20:52AM +0800, linan666@huaweicloud.com wrote:
> From: Li Nan <linan122@huawei.com>
> 
> The order of badblocks will be reversed if we set a large area at once.
> Leaving 'hi' unchanged while adding consecutive badblocks is wrong: each
> new range starts beyond 'hi' and should be added at the next position.
> Increment 'hi' on each iteration of the loop.

The commitlog needs more work. 
> 
>   # echo 0 2048 > bad_blocks
>   # cat bad_blocks
>     1536 512
>     1024 512
>     512 512
>     0 512

Is the above before or after this patch is applied?

> 
> Fixes: 9e0e252a048b ("badblocks: Add core badblock management code")
> Signed-off-by: Li Nan <linan122@huawei.com>
> ---
>  block/badblocks.c | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/block/badblocks.c b/block/badblocks.c
> index 2c2ef8284a3f..3b816690b940 100644
> --- a/block/badblocks.c
> +++ b/block/badblocks.c
> @@ -301,6 +301,7 @@ int badblocks_set(struct badblocks *bb, sector_t s, int sectors,
>  			p[hi] = BB_MAKE(s, this_sectors, acknowledged);
>  			sectors -= this_sectors;
>  			s += this_sectors;
> +			hi++;
>  			changed = true;
>  		}
>  	}
> -- 
> 2.39.2
> 


* [PATCH v3 0/4] block/badblocks: fix badblocks setting error
@ 2023-06-21 17:20 linan666
  2023-06-21 13:32 ` Ashok Raj
                   ` (4 more replies)
  0 siblings, 5 replies; 13+ messages in thread
From: linan666 @ 2023-06-21 17:20 UTC
  To: axboe, linan122, dan.j.williams, vishal.l.verma
  Cc: linux-block, linux-kernel, yukuai3, yi.zhang, houtao1, yangerkun

From: Li Nan <linan122@huawei.com>

This patch series fixes some simple bugs in setting badblocks and
optimizes struct badblocks. Coly Li has been trying to refactor badblocks
in patch series "badblocks improvement for multiple bad block ranges", but
the workload is significant. Before that, I will fix some easily triggered
issues and optimize some code that does not conflict with Coly's changes.

Changes in v3:
 - drop patches with significant changes.

Li Nan (4):
  block/badblocks: change some members of badblocks to bool
  block/badblocks: only set bb->changed/unacked_exist when badblocks
    changes
  block/badblocks: fix badblocks loss when badblocks combine
  block/badblocks: fix the bug of reverse order

 block/badblocks.c         | 38 ++++++++++++++++++++++----------------
 include/linux/badblocks.h | 10 +++++-----
 2 files changed, 27 insertions(+), 21 deletions(-)

-- 
2.39.2



* [PATCH v3 1/4] block/badblocks: change some members of badblocks to bool
  2023-06-21 17:20 [PATCH v3 0/4] block/badblocks: fix badblocks setting error linan666
  2023-06-21 13:32 ` Ashok Raj
@ 2023-06-21 17:20 ` linan666
  2023-06-21 14:02   ` Ashok Raj
  2023-06-21 17:20 ` [PATCH v3 2/4] block/badblocks: only set bb->changed/unacked_exist when badblocks changes linan666
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 13+ messages in thread
From: linan666 @ 2023-06-21 17:20 UTC
  To: axboe, linan122, dan.j.williams, vishal.l.verma
  Cc: linux-block, linux-kernel, yukuai3, yi.zhang, houtao1, yangerkun

From: Li Nan <linan122@huawei.com>

"changed" and "unacked_exist" are used as boolean type. Change the type
of them to bool. And reorder fields to reduce memory hole.

No functional change intended.

Signed-off-by: Li Nan <linan122@huawei.com>
---
 block/badblocks.c         | 14 +++++++-------
 include/linux/badblocks.h | 10 +++++-----
 2 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/block/badblocks.c b/block/badblocks.c
index 3afb550c0f7b..1b4caa42c5f1 100644
--- a/block/badblocks.c
+++ b/block/badblocks.c
@@ -141,7 +141,7 @@ static void badblocks_update_acked(struct badblocks *bb)
 	}
 
 	if (!unacked)
-		bb->unacked_exist = 0;
+		bb->unacked_exist = false;
 }
 
 /**
@@ -302,9 +302,9 @@ int badblocks_set(struct badblocks *bb, sector_t s, int sectors,
 		}
 	}
 
-	bb->changed = 1;
+	bb->changed = true;
 	if (!acknowledged)
-		bb->unacked_exist = 1;
+		bb->unacked_exist = true;
 	else
 		badblocks_update_acked(bb);
 	write_sequnlock_irqrestore(&bb->lock, flags);
@@ -414,7 +414,7 @@ int badblocks_clear(struct badblocks *bb, sector_t s, int sectors)
 	}
 
 	badblocks_update_acked(bb);
-	bb->changed = 1;
+	bb->changed = true;
 out:
 	write_sequnlock_irq(&bb->lock);
 	return rv;
@@ -435,7 +435,7 @@ void ack_all_badblocks(struct badblocks *bb)
 		return;
 	write_seqlock_irq(&bb->lock);
 
-	if (bb->changed == 0 && bb->unacked_exist) {
+	if (bb->changed == false && bb->unacked_exist) {
 		u64 *p = bb->page;
 		int i;
 
@@ -447,7 +447,7 @@ void ack_all_badblocks(struct badblocks *bb)
 				p[i] = BB_MAKE(start, len, 1);
 			}
 		}
-		bb->unacked_exist = 0;
+		bb->unacked_exist = false;
 	}
 	write_sequnlock_irq(&bb->lock);
 }
@@ -493,7 +493,7 @@ ssize_t badblocks_show(struct badblocks *bb, char *page, int unack)
 				length << bb->shift);
 	}
 	if (unack && len == 0)
-		bb->unacked_exist = 0;
+		bb->unacked_exist = false;
 
 	if (read_seqretry(&bb->lock, seq))
 		goto retry;
diff --git a/include/linux/badblocks.h b/include/linux/badblocks.h
index 2426276b9bd3..c2723f97d22d 100644
--- a/include/linux/badblocks.h
+++ b/include/linux/badblocks.h
@@ -27,15 +27,15 @@
 struct badblocks {
 	struct device *dev;	/* set by devm_init_badblocks */
 	int count;		/* count of bad blocks */
-	int unacked_exist;	/* there probably are unacknowledged
-				 * bad blocks.  This is only cleared
-				 * when a read discovers none
-				 */
 	int shift;		/* shift from sectors to block size
 				 * a -ve shift means badblocks are
 				 * disabled.*/
+	bool unacked_exist;	/* there probably are unacknowledged
+				 * bad blocks.  This is only cleared
+				 * when a read discovers none
+				 */
+	bool changed;
 	u64 *page;		/* badblock list */
-	int changed;
 	seqlock_t lock;
 	sector_t sector;
 	sector_t size;		/* in sectors */
-- 
2.39.2



* [PATCH v3 2/4] block/badblocks: only set bb->changed/unacked_exist when badblocks changes
  2023-06-21 17:20 [PATCH v3 0/4] block/badblocks: fix badblocks setting error linan666
  2023-06-21 13:32 ` Ashok Raj
  2023-06-21 17:20 ` [PATCH v3 1/4] block/badblocks: change some members of badblocks to bool linan666
@ 2023-06-21 17:20 ` linan666
  2023-06-21 17:20 ` [PATCH v3 3/4] block/badblocks: fix badblocks loss when badblocks combine linan666
  2023-06-21 17:20 ` [PATCH v3 4/4] block/badblocks: fix the bug of reverse order linan666
  4 siblings, 0 replies; 13+ messages in thread
From: linan666 @ 2023-06-21 17:20 UTC
  To: axboe, linan122, dan.j.williams, vishal.l.verma
  Cc: linux-block, linux-kernel, yukuai3, yi.zhang, houtao1, yangerkun

From: Li Nan <linan122@huawei.com>

In badblocks_set(), bb->changed and bb->unacked_exist are set even if no
badblocks actually change. Only set them when the badblocks list changes.

Fixes: 9e0e252a048b ("badblocks: Add core badblock management code")
Signed-off-by: Li Nan <linan122@huawei.com>
---
 block/badblocks.c | 17 ++++++++++++-----
 1 file changed, 12 insertions(+), 5 deletions(-)

diff --git a/block/badblocks.c b/block/badblocks.c
index 1b4caa42c5f1..7e6ebe2ac12c 100644
--- a/block/badblocks.c
+++ b/block/badblocks.c
@@ -166,6 +166,7 @@ int badblocks_set(struct badblocks *bb, sector_t s, int sectors,
 	int lo, hi;
 	int rv = 0;
 	unsigned long flags;
+	bool changed = false;
 
 	if (bb->shift < 0)
 		/* badblocks are disabled */
@@ -229,6 +230,7 @@ int badblocks_set(struct badblocks *bb, sector_t s, int sectors,
 				s = a + BB_MAX_LEN;
 			}
 			sectors = e - s;
+			changed = true;
 		}
 	}
 	if (sectors && hi < bb->count) {
@@ -259,6 +261,7 @@ int badblocks_set(struct badblocks *bb, sector_t s, int sectors,
 			sectors = e - s;
 			lo = hi;
 			hi++;
+			changed = true;
 		}
 	}
 	if (sectors == 0 && hi < bb->count) {
@@ -277,6 +280,7 @@ int badblocks_set(struct badblocks *bb, sector_t s, int sectors,
 			memmove(p + hi, p + hi + 1,
 				(bb->count - hi - 1) * 8);
 			bb->count--;
+			changed = true;
 		}
 	}
 	while (sectors) {
@@ -299,14 +303,17 @@ int badblocks_set(struct badblocks *bb, sector_t s, int sectors,
 			p[hi] = BB_MAKE(s, this_sectors, acknowledged);
 			sectors -= this_sectors;
 			s += this_sectors;
+			changed = true;
 		}
 	}
 
-	bb->changed = true;
-	if (!acknowledged)
-		bb->unacked_exist = true;
-	else
-		badblocks_update_acked(bb);
+	if (changed) {
+		bb->changed = changed;
+		if (!acknowledged)
+			bb->unacked_exist = true;
+		else
+			badblocks_update_acked(bb);
+	}
 	write_sequnlock_irqrestore(&bb->lock, flags);
 
 	return rv;
-- 
2.39.2



* [PATCH v3 3/4] block/badblocks: fix badblocks loss when badblocks combine
  2023-06-21 17:20 [PATCH v3 0/4] block/badblocks: fix badblocks setting error linan666
                   ` (2 preceding siblings ...)
  2023-06-21 17:20 ` [PATCH v3 2/4] block/badblocks: only set bb->changed/unacked_exist when badblocks changes linan666
@ 2023-06-21 17:20 ` linan666
  2023-06-21 14:09   ` Ashok Raj
  2023-06-21 17:20 ` [PATCH v3 4/4] block/badblocks: fix the bug of reverse order linan666
  4 siblings, 1 reply; 13+ messages in thread
From: linan666 @ 2023-06-21 17:20 UTC
  To: axboe, linan122, dan.j.williams, vishal.l.verma
  Cc: linux-block, linux-kernel, yukuai3, yi.zhang, houtao1, yangerkun

From: Li Nan <linan122@huawei.com>

badblocks will be lost if we set them as below:

  # echo 1 1 > bad_blocks
  # echo 3 1 > bad_blocks
  # echo 1 5 > bad_blocks
  # cat bad_blocks
    1 3

In badblocks_set(), if there is an intersection between p[lo] and p[hi],
they are combined. The end of the combined range is currently taken from
p[hi], but p[lo] may extend past p[hi], so the new end should be the larger
of p[lo]'s end and p[hi]'s end.

Fixes: 9e0e252a048b ("badblocks: Add core badblock management code")
Signed-off-by: Li Nan <linan122@huawei.com>
---
 block/badblocks.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/block/badblocks.c b/block/badblocks.c
index 7e6ebe2ac12c..2c2ef8284a3f 100644
--- a/block/badblocks.c
+++ b/block/badblocks.c
@@ -267,16 +267,14 @@ int badblocks_set(struct badblocks *bb, sector_t s, int sectors,
 	if (sectors == 0 && hi < bb->count) {
 		/* we might be able to combine lo and hi */
 		/* Note: 's' is at the end of 'lo' */
-		sector_t a = BB_OFFSET(p[hi]);
-		int lolen = BB_LEN(p[lo]);
-		int hilen = BB_LEN(p[hi]);
-		int newlen = lolen + hilen - (s - a);
+		sector_t a = BB_OFFSET(p[lo]);
+		int newlen = max(s, BB_OFFSET(p[hi]) + BB_LEN(p[hi])) - a;
 
-		if (s >= a && newlen < BB_MAX_LEN) {
+		if (s >= BB_OFFSET(p[hi]) && newlen < BB_MAX_LEN) {
 			/* yes, we can combine them */
 			int ack = BB_ACK(p[lo]) && BB_ACK(p[hi]);
 
-			p[lo] = BB_MAKE(BB_OFFSET(p[lo]), newlen, ack);
+			p[lo] = BB_MAKE(a, newlen, ack);
 			memmove(p + hi, p + hi + 1,
 				(bb->count - hi - 1) * 8);
 			bb->count--;
-- 
2.39.2



* [PATCH v3 4/4] block/badblocks: fix the bug of reverse order
  2023-06-21 17:20 [PATCH v3 0/4] block/badblocks: fix badblocks setting error linan666
                   ` (3 preceding siblings ...)
  2023-06-21 17:20 ` [PATCH v3 3/4] block/badblocks: fix badblocks loss when badblocks combine linan666
@ 2023-06-21 17:20 ` linan666
  2023-06-21 14:15   ` Ashok Raj
  4 siblings, 1 reply; 13+ messages in thread
From: linan666 @ 2023-06-21 17:20 UTC
  To: axboe, linan122, dan.j.williams, vishal.l.verma
  Cc: linux-block, linux-kernel, yukuai3, yi.zhang, houtao1, yangerkun

From: Li Nan <linan122@huawei.com>

The order of badblocks will be reversed if we set a large area at once.
Leaving 'hi' unchanged while adding consecutive badblocks is wrong: each
new range starts beyond 'hi' and should be added at the next position.
Increment 'hi' on each iteration of the loop.

  # echo 0 2048 > bad_blocks
  # cat bad_blocks
    1536 512
    1024 512
    512 512
    0 512

Fixes: 9e0e252a048b ("badblocks: Add core badblock management code")
Signed-off-by: Li Nan <linan122@huawei.com>
---
 block/badblocks.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/block/badblocks.c b/block/badblocks.c
index 2c2ef8284a3f..3b816690b940 100644
--- a/block/badblocks.c
+++ b/block/badblocks.c
@@ -301,6 +301,7 @@ int badblocks_set(struct badblocks *bb, sector_t s, int sectors,
 			p[hi] = BB_MAKE(s, this_sectors, acknowledged);
 			sectors -= this_sectors;
 			s += this_sectors;
+			hi++;
 			changed = true;
 		}
 	}
-- 
2.39.2



* Re: [PATCH v3 0/4] block/badblocks: fix badblocks setting error
  2023-06-21 13:32 ` Ashok Raj
@ 2023-06-25  8:13   ` Li Nan
  0 siblings, 0 replies; 13+ messages in thread
From: Li Nan @ 2023-06-25  8:13 UTC
  To: Ashok Raj, linan666
  Cc: axboe, dan.j.williams, vishal.l.verma, linux-block, linux-kernel,
	yukuai3, yi.zhang, houtao1, yangerkun, Ashok Raj



On 2023/6/21 21:32, Ashok Raj wrote:
> On Thu, Jun 22, 2023 at 01:20:48AM +0800, linan666@huaweicloud.com wrote:
>> From: Li Nan <linan122@huawei.com>
>>
>> This patch series fixes some simple bugs in setting badblocks and
>> optimizes struct badblocks. Coly Li has been trying to refactor badblocks
>> in patch series "badblocks improvement for multiple bad block ranges", but
>> the workload is significant. Before that, I will fix some easily triggered
>    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> 
> You mean the refactor is going to take longer to complete?

Yes, refer to the link:
https://lore.kernel.org/all/20220721121152.4180-1-colyli@suse.de

> If so, maybe state it that way...
> 
> .

-- 
Thanks,
Nan



* Re: [PATCH v3 1/4] block/badblocks: change some members of badblocks to bool
  2023-06-21 14:02   ` Ashok Raj
@ 2023-06-25  9:11     ` Li Nan
  0 siblings, 0 replies; 13+ messages in thread
From: Li Nan @ 2023-06-25  9:11 UTC
  To: Ashok Raj
  Cc: axboe, dan.j.williams, vishal.l.verma, linux-block, linux-kernel,
	yukuai3, yi.zhang, houtao1, yangerkun, Ashok Raj, linan122



On 2023/6/21 22:02, Ashok Raj wrote:
> On Thu, Jun 22, 2023 at 01:20:49AM +0800, linan666@huaweicloud.com wrote:
>> From: Li Nan <linan122@huawei.com>
>>
>> "changed" and "unacked_exist" are used as boolean type. Change the type
>> of them to bool. And reorder fields to reduce memory hole.
> 
> minor nit: If you use a diff orderfile (diff.orderFile) that lists .h before .c, it
> will make the patches easier to review in order.
> 

I will configure my git accordingly.

> I don't know if it's even worth doing this manual compaction unless you are
> storing the entire struct in some flash or it sits in a cache-sensitive,
> heavily thrashed structure.
> 

Yeah, the manual compaction is not really worth it.

> bool is useful in that it makes the code easier to read and eliminates the
> class of bugs you would otherwise need the !! operator to guard against.
> 
>>
>> No functional change intended.
>>
>> Signed-off-by: Li Nan <linan122@huawei.com>
>> ---
>>   block/badblocks.c         | 14 +++++++-------
>>   include/linux/badblocks.h | 10 +++++-----
>>   2 files changed, 12 insertions(+), 12 deletions(-)
>>
>> diff --git a/block/badblocks.c b/block/badblocks.c
>> index 3afb550c0f7b..1b4caa42c5f1 100644
>> --- a/block/badblocks.c
>> +++ b/block/badblocks.c
>> @@ -141,7 +141,7 @@ static void badblocks_update_acked(struct badblocks *bb)
>>   	}
>>   
>>   	if (!unacked)
>> -		bb->unacked_exist = 0;
>> +		bb->unacked_exist = false;
>>   }
>>   
>>   /**
>> @@ -302,9 +302,9 @@ int badblocks_set(struct badblocks *bb, sector_t s, int sectors,
>>   		}
>>   	}
>>   
>> -	bb->changed = 1;
>> +	bb->changed = true;
>>   	if (!acknowledged)
>> -		bb->unacked_exist = 1;
>> +		bb->unacked_exist = true;
>>   	else
>>   		badblocks_update_acked(bb);
>>   	write_sequnlock_irqrestore(&bb->lock, flags);
>> @@ -414,7 +414,7 @@ int badblocks_clear(struct badblocks *bb, sector_t s, int sectors)
>>   	}
>>   
>>   	badblocks_update_acked(bb);
>> -	bb->changed = 1;
>> +	bb->changed = true;
>>   out:
>>   	write_sequnlock_irq(&bb->lock);
>>   	return rv;
>> @@ -435,7 +435,7 @@ void ack_all_badblocks(struct badblocks *bb)
>>   		return;
>>   	write_seqlock_irq(&bb->lock);
>>   
>> -	if (bb->changed == 0 && bb->unacked_exist) {
>> +	if (bb->changed == false && bb->unacked_exist) {
> 
> 	if (!bb->changed && bb->unacked_exist)

I will change it in the next version.

> 
> 
>>   		u64 *p = bb->page;
>>   		int i;
>>   
>> @@ -447,7 +447,7 @@ void ack_all_badblocks(struct badblocks *bb)
>>   				p[i] = BB_MAKE(start, len, 1);
>>   			}
>>   		}
>> -		bb->unacked_exist = 0;
>> +		bb->unacked_exist = false;
>>   	}
>>   	write_sequnlock_irq(&bb->lock);
>>   }
>> @@ -493,7 +493,7 @@ ssize_t badblocks_show(struct badblocks *bb, char *page, int unack)
>>   				length << bb->shift);
>>   	}
>>   	if (unack && len == 0)
>> -		bb->unacked_exist = 0;
>> +		bb->unacked_exist = false;
>>   
>>   	if (read_seqretry(&bb->lock, seq))
>>   		goto retry;
>> diff --git a/include/linux/badblocks.h b/include/linux/badblocks.h
>> index 2426276b9bd3..c2723f97d22d 100644
>> --- a/include/linux/badblocks.h
>> +++ b/include/linux/badblocks.h
>> @@ -27,15 +27,15 @@
>>   struct badblocks {
>>   	struct device *dev;	/* set by devm_init_badblocks */
>>   	int count;		/* count of bad blocks */
>> -	int unacked_exist;	/* there probably are unacknowledged
>> -				 * bad blocks.  This is only cleared
>> -				 * when a read discovers none
>> -				 */
>>   	int shift;		/* shift from sectors to block size
>>   				 * a -ve shift means badblocks are
>>   				 * disabled.*/
>> +	bool unacked_exist;	/* there probably are unacknowledged
>> +				 * bad blocks.  This is only cleared
>> +				 * when a read discovers none
> 
> read of what?

"... when a read of unacknowledged bad blocks discovers none"

Would this be better?

Thanks for your suggestion.

> 
>> +				 */
>> +	bool changed;
>>   	u64 *page;		/* badblock list */
>> -	int changed;
>>   	seqlock_t lock;
>>   	sector_t sector;
>>   	sector_t size;		/* in sectors */
>> -- 
>> 2.39.2
>>
> 
> .

-- 
Thanks,
Nan



* Re: [PATCH v3 3/4] block/badblocks: fix badblocks loss when badblocks combine
  2023-06-21 14:09   ` Ashok Raj
@ 2023-06-25  9:16     ` Li Nan
  0 siblings, 0 replies; 13+ messages in thread
From: Li Nan @ 2023-06-25  9:16 UTC
  To: Ashok Raj
  Cc: axboe, dan.j.williams, vishal.l.verma, linux-block, linux-kernel,
	yukuai3, yi.zhang, houtao1, yangerkun, Ashok Raj, linan122



On 2023/6/21 22:09, Ashok Raj wrote:
> On Thu, Jun 22, 2023 at 01:20:51AM +0800, linan666@huaweicloud.com wrote:
>> From: Li Nan <linan122@huawei.com>
>>
>> badblocks will be lost if we set them as below:
>>
>>    # echo 1 1 > bad_blocks
>>    # echo 3 1 > bad_blocks
>>    # echo 1 5 > bad_blocks
>>    # cat bad_blocks
>>      1 3
>>
>> In badblocks_set(), if there is an intersection between p[lo] and p[hi],
>> they are combined. The end of the combined range is currently taken from
>> p[hi], but p[lo] may extend past p[hi], so the new end should be the larger
>> of p[lo]'s end and p[hi]'s end.
> 
> Consider rewriting the commit log. It reads as if the code were converted
> directly into a sentence ;-).

I will rewrite the commit log.

> 
> Also, it might help to show what the above example's cat bad_blocks output
> looks like after the patch.
> 

After the patch:

# cat bad_blocks
   1 5

I will show it in the next version. Thanks for your suggestion.

>>
>> Fixes: 9e0e252a048b ("badblocks: Add core badblock management code")
>> Signed-off-by: Li Nan <linan122@huawei.com>
>> ---
>>   block/badblocks.c | 10 ++++------
>>   1 file changed, 4 insertions(+), 6 deletions(-)
>>
>> diff --git a/block/badblocks.c b/block/badblocks.c
>> index 7e6ebe2ac12c..2c2ef8284a3f 100644
>> --- a/block/badblocks.c
>> +++ b/block/badblocks.c
>> @@ -267,16 +267,14 @@ int badblocks_set(struct badblocks *bb, sector_t s, int sectors,
>>   	if (sectors == 0 && hi < bb->count) {
>>   		/* we might be able to combine lo and hi */
>>   		/* Note: 's' is at the end of 'lo' */
>> -		sector_t a = BB_OFFSET(p[hi]);
>> -		int lolen = BB_LEN(p[lo]);
>> -		int hilen = BB_LEN(p[hi]);
>> -		int newlen = lolen + hilen - (s - a);
>> +		sector_t a = BB_OFFSET(p[lo]);
>> +		int newlen = max(s, BB_OFFSET(p[hi]) + BB_LEN(p[hi])) - a;
>>   
>> -		if (s >= a && newlen < BB_MAX_LEN) {
>> +		if (s >= BB_OFFSET(p[hi]) && newlen < BB_MAX_LEN) {
>>   			/* yes, we can combine them */
>>   			int ack = BB_ACK(p[lo]) && BB_ACK(p[hi]);
>>   
>> -			p[lo] = BB_MAKE(BB_OFFSET(p[lo]), newlen, ack);
>> +			p[lo] = BB_MAKE(a, newlen, ack);
>>   			memmove(p + hi, p + hi + 1,
>>   				(bb->count - hi - 1) * 8);
>>   			bb->count--;
>> -- 
>> 2.39.2
>>
> 
> .

-- 
Thanks,
Nan



* Re: [PATCH v3 4/4] block/badblocks: fix the bug of reverse order
  2023-06-21 14:15   ` Ashok Raj
@ 2023-06-25  9:22     ` Li Nan
  0 siblings, 0 replies; 13+ messages in thread
From: Li Nan @ 2023-06-25  9:22 UTC
  To: Ashok Raj, linan666
  Cc: axboe, dan.j.williams, vishal.l.verma, linux-block, linux-kernel,
	yukuai3, yi.zhang, houtao1, yangerkun, Ashok Raj, linan122



On 2023/6/21 22:15, Ashok Raj wrote:
> On Thu, Jun 22, 2023 at 01:20:52AM +0800, linan666@huaweicloud.com wrote:
>> From: Li Nan <linan122@huawei.com>
>>
>> The order of badblocks will be reversed if we set a large area at once.
>> Leaving 'hi' unchanged while adding consecutive badblocks is wrong: each
>> new range starts beyond 'hi' and should be added at the next position.
>> Increment 'hi' on each iteration of the loop.
> 
> The commitlog needs more work.

OK, I will improve this.

>>
>>    # echo 0 2048 > bad_blocks
>>    # cat bad_blocks
>>      1536 512
>>      1024 512
>>      512 512
>>      0 512
> 
> Is the above before or after this patch is applied?

All badblocks are now arranged from smallest to largest. After the patch:

# cat bad_blocks
   0 512
   512 512
   1024 512
   1536 512

I will show it in the next version. Thanks for your suggestion.

> 
>>
>> Fixes: 9e0e252a048b ("badblocks: Add core badblock management code")
>> Signed-off-by: Li Nan <linan122@huawei.com>
>> ---
>>   block/badblocks.c | 1 +
>>   1 file changed, 1 insertion(+)
>>
>> diff --git a/block/badblocks.c b/block/badblocks.c
>> index 2c2ef8284a3f..3b816690b940 100644
>> --- a/block/badblocks.c
>> +++ b/block/badblocks.c
>> @@ -301,6 +301,7 @@ int badblocks_set(struct badblocks *bb, sector_t s, int sectors,
>>   			p[hi] = BB_MAKE(s, this_sectors, acknowledged);
>>   			sectors -= this_sectors;
>>   			s += this_sectors;
>> +			hi++;
>>   			changed = true;
>>   		}
>>   	}
>> -- 
>> 2.39.2
>>
> 
> .

-- 
Thanks,
Nan


