Stable Archive mirror
* [PATCH v4 2/3] mm/slub: Fix redzoning for small allocations
       [not found] <20210608183955.280836-1-keescook@chromium.org>
@ 2021-06-08 18:39 ` Kees Cook
  2021-06-11  9:13   ` Vlastimil Babka
  2021-06-08 18:39 ` [PATCH v4 3/3] mm/slub: Actually fix freelist pointer vs redzoning Kees Cook
  1 sibling, 1 reply; 5+ messages in thread
From: Kees Cook @ 2021-06-08 18:39 UTC
  To: Andrew Morton
  Cc: Kees Cook, stable, Vlastimil Babka, Marco Elver,
	Christoph Lameter, Lin, Zhenpeng, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Roman Gushchin, linux-kernel, linux-doc, linux-mm

The redzone area for SLUB exists between s->object_size and s->inuse
(which is at least the word-aligned object_size). If a cache were created
with an object_size smaller than sizeof(void *), the in-object stored
freelist pointer would overwrite the redzone (e.g. with boot param
"slub_debug=ZF"):

BUG test (Tainted: G    B            ): Right Redzone overwritten
-----------------------------------------------------------------------------

INFO: 0xffff957ead1c05de-0xffff957ead1c05df @offset=1502. First byte 0x1a instead of 0xbb
INFO: Slab 0xffffef3950b47000 objects=170 used=170 fp=0x0000000000000000 flags=0x8000000000000200
INFO: Object 0xffff957ead1c05d8 @offset=1496 fp=0xffff957ead1c0620

Redzone  (____ptrval____): bb bb bb bb bb bb bb bb    ........
Object   (____ptrval____): f6 f4 a5 40 1d e8          ...@..
Redzone  (____ptrval____): 1a aa                      ..
Padding  (____ptrval____): 00 00 00 00 00 00 00 00    ........
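
To make the overlap concrete, here is the arithmetic behind the dump
above (an illustrative sketch, not part of the original report; it
assumes a 64-bit kernel and a cache created with object_size = 6):

	s->object_size = 6;
	s->inuse  = ALIGN(6, sizeof(void *));	/* 8: redzone is bytes 6-7 */
	s->offset = 0;				/* free pointer stored in-object */

	/* Writing the 8-byte freelist pointer at offset 0 covers bytes 0-7,
	 * clobbering redzone bytes 6-7 -- the "1a aa" in the dump above. */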

Store the freelist pointer out of line when object_size is smaller than
sizeof(void *) and redzoning is enabled.

Additionally remove the "smaller than sizeof(void *)" check under
CONFIG_DEBUG_VM in kmem_cache_sanity_check() as it is now redundant:
SLAB and SLOB both handle small sizes.

(Note that no caches within this size range are known to exist in the
kernel currently.)

Fixes: 81819f0fc828 ("SLUB core")
Cc: stable@vger.kernel.org
Signed-off-by: Kees Cook <keescook@chromium.org>
---
 mm/slab_common.c | 3 +--
 mm/slub.c        | 8 +++++---
 2 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/mm/slab_common.c b/mm/slab_common.c
index a4a571428c51..7cab77655f11 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -97,8 +97,7 @@ EXPORT_SYMBOL(kmem_cache_size);
 #ifdef CONFIG_DEBUG_VM
 static int kmem_cache_sanity_check(const char *name, unsigned int size)
 {
-	if (!name || in_interrupt() || size < sizeof(void *) ||
-		size > KMALLOC_MAX_SIZE) {
+	if (!name || in_interrupt() || size > KMALLOC_MAX_SIZE) {
 		pr_err("kmem_cache_create(%s) integrity check failed\n", name);
 		return -EINVAL;
 	}
diff --git a/mm/slub.c b/mm/slub.c
index f91d9fe7d0d8..f58cfd456548 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3734,15 +3734,17 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
 	 */
 	s->inuse = size;
 
-	if (((flags & (SLAB_TYPESAFE_BY_RCU | SLAB_POISON)) ||
-		s->ctor)) {
+	if ((flags & (SLAB_TYPESAFE_BY_RCU | SLAB_POISON)) ||
+	    ((flags & SLAB_RED_ZONE) && s->object_size < sizeof(void *)) ||
+	    s->ctor) {
 		/*
 		 * Relocate free pointer after the object if it is not
 		 * permitted to overwrite the first word of the object on
 		 * kmem_cache_free.
 		 *
 		 * This is the case if we do RCU, have a constructor or
-		 * destructor or are poisoning the objects.
+		 * destructor, are poisoning the objects, or are
+		 * redzoning an object smaller than sizeof(void *).
 		 *
 		 * The assumption that s->offset >= s->inuse means free
 		 * pointer is outside of the object is used in the
-- 
2.25.1



* [PATCH v4 3/3] mm/slub: Actually fix freelist pointer vs redzoning
       [not found] <20210608183955.280836-1-keescook@chromium.org>
  2021-06-08 18:39 ` [PATCH v4 2/3] mm/slub: Fix redzoning for small allocations Kees Cook
@ 2021-06-08 18:39 ` Kees Cook
  2021-06-08 20:56   ` Andrew Morton
  1 sibling, 1 reply; 5+ messages in thread
From: Kees Cook @ 2021-06-08 18:39 UTC
  To: Andrew Morton
  Cc: Kees Cook, Marco Elver, Lin, Zhenpeng, stable, Vlastimil Babka,
	Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Roman Gushchin, linux-kernel, linux-doc, linux-mm

It turns out that SLUB redzoning ("slub_debug=Z") checks from
s->object_size rather than from s->inuse (which is normally bumped
to make room for the freelist pointer), so a cache created with an
object size less than 24 would have the freelist pointer written beyond
s->object_size, causing the redzone to be corrupted by the freelist
pointer. This was very visible with "slub_debug=ZF":

BUG test (Tainted: G    B            ): Right Redzone overwritten
-----------------------------------------------------------------------------

INFO: 0xffff957ead1c05de-0xffff957ead1c05df @offset=1502. First byte 0x1a instead of 0xbb
INFO: Slab 0xffffef3950b47000 objects=170 used=170 fp=0x0000000000000000 flags=0x8000000000000200
INFO: Object 0xffff957ead1c05d8 @offset=1496 fp=0xffff957ead1c0620

Redzone  (____ptrval____): bb bb bb bb bb bb bb bb               ........
Object   (____ptrval____): 00 00 00 00 00 f6 f4 a5               ........
Redzone  (____ptrval____): 40 1d e8 1a aa                        @....
Padding  (____ptrval____): 00 00 00 00 00 00 00 00               ........

Adjust the offset to stay within s->object_size.
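
For example (a worked sketch, not from the original message; it assumes
a 64-bit kernel, so sizeof(void *) == 8), take object_size = 18. The
old code rounded the midpoint up; the new code rounds it down:

	size = ALIGN(18, sizeof(void *));		/* 24 */

	/* before: pointer covers bytes 16-23, spilling past the 18-byte
	 * object into the redzone at bytes 18-23 */
	s->offset = ALIGN(24 / 2, sizeof(void *));	/* 16 */

	/* after: pointer covers bytes 8-15, entirely inside the object */
	s->offset = ALIGN_DOWN(18 / 2, sizeof(void *));	/* 8 */

The same round-up failure hits every object_size from 9 through 23
except exactly 16 (sizes of 8 and below are the out-of-line case fixed
by the previous patch), which is where "less than 24" comes from.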

(Note that no caches in this size range are known to exist in the
kernel currently.)

Reported-by: Marco Elver <elver@google.com>
Reported-by: "Lin, Zhenpeng" <zplin@psu.edu>
Link: https://lore.kernel.org/linux-mm/20200807160627.GA1420741@elver.google.com/
Fixes: 89b83f282d8b ("slub: avoid redzone when choosing freepointer location")
Cc: stable@vger.kernel.org
Tested-by: Marco Elver <elver@google.com>
Link: https://lore.kernel.org/lkml/CANpmjNOwZ5VpKQn+SYWovTkFB4VsT-RPwyENBmaK0dLcpqStkA@mail.gmail.com
Signed-off-by: Kees Cook <keescook@chromium.org>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Link: https://lore.kernel.org/lkml/0f7dd7b2-7496-5e2d-9488-2ec9f8e90441@suse.cz/
---
 mm/slub.c | 14 +++-----------
 1 file changed, 3 insertions(+), 11 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index f58cfd456548..fe30df460fad 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3689,7 +3689,6 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
 {
 	slab_flags_t flags = s->flags;
 	unsigned int size = s->object_size;
-	unsigned int freepointer_area;
 	unsigned int order;
 
 	/*
@@ -3698,13 +3697,6 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
 	 * the possible location of the free pointer.
 	 */
 	size = ALIGN(size, sizeof(void *));
-	/*
-	 * This is the area of the object where a freepointer can be
-	 * safely written. If redzoning adds more to the inuse size, we
-	 * can't use that portion for writing the freepointer, so
-	 * s->offset must be limited within this for the general case.
-	 */
-	freepointer_area = size;
 
 #ifdef CONFIG_SLUB_DEBUG
 	/*
@@ -3730,7 +3722,7 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
 
 	/*
 	 * With that we have determined the number of bytes in actual use
-	 * by the object. This is the potential offset to the free pointer.
+	 * by the object and redzoning.
 	 */
 	s->inuse = size;
 
@@ -3753,13 +3745,13 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
 		 */
 		s->offset = size;
 		size += sizeof(void *);
-	} else if (freepointer_area > sizeof(void *)) {
+	} else {
 		/*
 		 * Store freelist pointer near middle of object to keep
 		 * it away from the edges of the object to avoid small
 		 * sized over/underflows from neighboring allocations.
 		 */
-		s->offset = ALIGN(freepointer_area / 2, sizeof(void *));
+		s->offset = ALIGN_DOWN(s->object_size / 2, sizeof(void *));
 	}
 
 #ifdef CONFIG_SLUB_DEBUG
-- 
2.25.1



* Re: [PATCH v4 3/3] mm/slub: Actually fix freelist pointer vs redzoning
  2021-06-08 18:39 ` [PATCH v4 3/3] mm/slub: Actually fix freelist pointer vs redzoning Kees Cook
@ 2021-06-08 20:56   ` Andrew Morton
  2021-06-08 23:11     ` Kees Cook
  0 siblings, 1 reply; 5+ messages in thread
From: Andrew Morton @ 2021-06-08 20:56 UTC
  To: Kees Cook
  Cc: Marco Elver, Lin, Zhenpeng, stable, Vlastimil Babka,
	Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Roman Gushchin, linux-kernel, linux-doc, linux-mm

On Tue,  8 Jun 2021 11:39:55 -0700 Kees Cook <keescook@chromium.org> wrote:

> It turns out that SLUB redzoning ("slub_debug=Z") checks from
> s->object_size rather than from s->inuse (which is normally bumped
> to make room for the freelist pointer), so a cache created with an
> object size less than 24 would have the freelist pointer written beyond
> s->object_size, causing the redzone to be corrupted by the freelist
> pointer. This was very visible with "slub_debug=ZF":
> 
> BUG test (Tainted: G    B            ): Right Redzone overwritten
> -----------------------------------------------------------------------------
> 
> INFO: 0xffff957ead1c05de-0xffff957ead1c05df @offset=1502. First byte 0x1a instead of 0xbb
> INFO: Slab 0xffffef3950b47000 objects=170 used=170 fp=0x0000000000000000 flags=0x8000000000000200
> INFO: Object 0xffff957ead1c05d8 @offset=1496 fp=0xffff957ead1c0620
> 
> Redzone  (____ptrval____): bb bb bb bb bb bb bb bb               ........
> Object   (____ptrval____): 00 00 00 00 00 f6 f4 a5               ........
> Redzone  (____ptrval____): 40 1d e8 1a aa                        @....
> Padding  (____ptrval____): 00 00 00 00 00 00 00 00               ........
> 
> Adjust the offset to stay within s->object_size.
> 
> (Note that no caches in this size range are known to exist in the
> kernel currently.)

We already have
https://lkml.kernel.org/r/6746FEEA-FD69-4792-8DDA-C78F5FE7DA02@psu.edu.
Is this patch better?

> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3689,7 +3689,6 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
>  {
>  	slab_flags_t flags = s->flags;
>  	unsigned int size = s->object_size;
> -	unsigned int freepointer_area;
>  	unsigned int order;
>  
>  	/*
> @@ -3698,13 +3697,6 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
>  	 * the possible location of the free pointer.
>  	 */
>  	size = ALIGN(size, sizeof(void *));
> -	/*
> -	 * This is the area of the object where a freepointer can be
> -	 * safely written. If redzoning adds more to the inuse size, we
> -	 * can't use that portion for writing the freepointer, so
> -	 * s->offset must be limited within this for the general case.
> -	 */
> -	freepointer_area = size;
>  
>  #ifdef CONFIG_SLUB_DEBUG
>  	/*
> @@ -3730,7 +3722,7 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
>  
>  	/*
>  	 * With that we have determined the number of bytes in actual use
> -	 * by the object. This is the potential offset to the free pointer.
> +	 * by the object and redzoning.
>  	 */
>  	s->inuse = size;
>  
> @@ -3753,13 +3745,13 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
>  		 */
>  		s->offset = size;
>  		size += sizeof(void *);
> -	} else if (freepointer_area > sizeof(void *)) {
> +	} else {
>  		/*
>  		 * Store freelist pointer near middle of object to keep
>  		 * it away from the edges of the object to avoid small
>  		 * sized over/underflows from neighboring allocations.
>  		 */
> -		s->offset = ALIGN(freepointer_area / 2, sizeof(void *));
> +		s->offset = ALIGN_DOWN(s->object_size / 2, sizeof(void *));
>  	}
>  
>  #ifdef CONFIG_SLUB_DEBUG
> -- 
> 2.25.1


* Re: [PATCH v4 3/3] mm/slub: Actually fix freelist pointer vs redzoning
  2021-06-08 20:56   ` Andrew Morton
@ 2021-06-08 23:11     ` Kees Cook
  0 siblings, 0 replies; 5+ messages in thread
From: Kees Cook @ 2021-06-08 23:11 UTC
  To: Andrew Morton
  Cc: Marco Elver, Lin, Zhenpeng, stable, Vlastimil Babka,
	Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Roman Gushchin, linux-kernel, linux-doc, linux-mm

On Tue, Jun 08, 2021 at 01:56:33PM -0700, Andrew Morton wrote:
> On Tue,  8 Jun 2021 11:39:55 -0700 Kees Cook <keescook@chromium.org> wrote:
> 
> > It turns out that SLUB redzoning ("slub_debug=Z") checks from
> > s->object_size rather than from s->inuse (which is normally bumped
> > to make room for the freelist pointer), so a cache created with an
> > object size less than 24 would have the freelist pointer written beyond
> > s->object_size, causing the redzone to be corrupted by the freelist
> > pointer. This was very visible with "slub_debug=ZF":
> > 
> > BUG test (Tainted: G    B            ): Right Redzone overwritten
> > -----------------------------------------------------------------------------
> > 
> > INFO: 0xffff957ead1c05de-0xffff957ead1c05df @offset=1502. First byte 0x1a instead of 0xbb
> > INFO: Slab 0xffffef3950b47000 objects=170 used=170 fp=0x0000000000000000 flags=0x8000000000000200
> > INFO: Object 0xffff957ead1c05d8 @offset=1496 fp=0xffff957ead1c0620
> > 
> > Redzone  (____ptrval____): bb bb bb bb bb bb bb bb               ........
> > Object   (____ptrval____): 00 00 00 00 00 f6 f4 a5               ........
> > Redzone  (____ptrval____): 40 1d e8 1a aa                        @....
> > Padding  (____ptrval____): 00 00 00 00 00 00 00 00               ........
> > 
> > Adjust the offset to stay within s->object_size.
> > 
> > (Note that no caches in this size range are known to exist in the
> > kernel currently.)
> 
> We already have
> https://lkml.kernel.org/r/6746FEEA-FD69-4792-8DDA-C78F5FE7DA02@psu.edu.
> Is this patch better?

Yes, I believe so, since it reduces code and corrects the size checking
more directly (and more clearly demonstrates the redzone calculation
problem in the commit log).

-Kees

> 
> > --- a/mm/slub.c
> > +++ b/mm/slub.c
> > @@ -3689,7 +3689,6 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
> >  {
> >  	slab_flags_t flags = s->flags;
> >  	unsigned int size = s->object_size;
> > -	unsigned int freepointer_area;
> >  	unsigned int order;
> >  
> >  	/*
> > @@ -3698,13 +3697,6 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
> >  	 * the possible location of the free pointer.
> >  	 */
> >  	size = ALIGN(size, sizeof(void *));
> > -	/*
> > -	 * This is the area of the object where a freepointer can be
> > -	 * safely written. If redzoning adds more to the inuse size, we
> > -	 * can't use that portion for writing the freepointer, so
> > -	 * s->offset must be limited within this for the general case.
> > -	 */
> > -	freepointer_area = size;
> >  
> >  #ifdef CONFIG_SLUB_DEBUG
> >  	/*
> > @@ -3730,7 +3722,7 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
> >  
> >  	/*
> >  	 * With that we have determined the number of bytes in actual use
> > -	 * by the object. This is the potential offset to the free pointer.
> > +	 * by the object and redzoning.
> >  	 */
> >  	s->inuse = size;
> >  
> > @@ -3753,13 +3745,13 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
> >  		 */
> >  		s->offset = size;
> >  		size += sizeof(void *);
> > -	} else if (freepointer_area > sizeof(void *)) {
> > +	} else {
> >  		/*
> >  		 * Store freelist pointer near middle of object to keep
> >  		 * it away from the edges of the object to avoid small
> >  		 * sized over/underflows from neighboring allocations.
> >  		 */
> > -		s->offset = ALIGN(freepointer_area / 2, sizeof(void *));
> > +		s->offset = ALIGN_DOWN(s->object_size / 2, sizeof(void *));
> >  	}
> >  
> >  #ifdef CONFIG_SLUB_DEBUG
> > -- 
> > 2.25.1

-- 
Kees Cook


* Re: [PATCH v4 2/3] mm/slub: Fix redzoning for small allocations
  2021-06-08 18:39 ` [PATCH v4 2/3] mm/slub: Fix redzoning for small allocations Kees Cook
@ 2021-06-11  9:13   ` Vlastimil Babka
  0 siblings, 0 replies; 5+ messages in thread
From: Vlastimil Babka @ 2021-06-11  9:13 UTC
  To: Kees Cook, Andrew Morton
  Cc: stable, Marco Elver, Christoph Lameter, Lin, Zhenpeng,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Roman Gushchin,
	linux-kernel, linux-doc, linux-mm

On 6/8/21 8:39 PM, Kees Cook wrote:
> The redzone area for SLUB exists between s->object_size and s->inuse
> (which is at least the word-aligned object_size). If a cache were created
> with an object_size smaller than sizeof(void *), the in-object stored
> freelist pointer would overwrite the redzone (e.g. with boot param
> "slub_debug=ZF"):
> 
> BUG test (Tainted: G    B            ): Right Redzone overwritten
> -----------------------------------------------------------------------------
> 
> INFO: 0xffff957ead1c05de-0xffff957ead1c05df @offset=1502. First byte 0x1a instead of 0xbb
> INFO: Slab 0xffffef3950b47000 objects=170 used=170 fp=0x0000000000000000 flags=0x8000000000000200
> INFO: Object 0xffff957ead1c05d8 @offset=1496 fp=0xffff957ead1c0620
> 
> Redzone  (____ptrval____): bb bb bb bb bb bb bb bb    ........
> Object   (____ptrval____): f6 f4 a5 40 1d e8          ...@..
> Redzone  (____ptrval____): 1a aa                      ..
> Padding  (____ptrval____): 00 00 00 00 00 00 00 00    ........
> 
> Store the freelist pointer out of line when object_size is smaller than
> sizeof(void *) and redzoning is enabled.
> 
> Additionally remove the "smaller than sizeof(void *)" check under
> CONFIG_DEBUG_VM in kmem_cache_sanity_check() as it is now redundant:
> SLAB and SLOB both handle small sizes.
> 
> (Note that no caches within this size range are known to exist in the
> kernel currently.)
> 
> Fixes: 81819f0fc828 ("SLUB core")
> Cc: stable@vger.kernel.org
> Signed-off-by: Kees Cook <keescook@chromium.org>

Acked-by: Vlastimil Babka <vbabka@suse.cz>

> ---
>  mm/slab_common.c | 3 +--
>  mm/slub.c        | 8 +++++---
>  2 files changed, 6 insertions(+), 5 deletions(-)
> 
> diff --git a/mm/slab_common.c b/mm/slab_common.c
> index a4a571428c51..7cab77655f11 100644
> --- a/mm/slab_common.c
> +++ b/mm/slab_common.c
> @@ -97,8 +97,7 @@ EXPORT_SYMBOL(kmem_cache_size);
>  #ifdef CONFIG_DEBUG_VM
>  static int kmem_cache_sanity_check(const char *name, unsigned int size)
>  {
> -	if (!name || in_interrupt() || size < sizeof(void *) ||
> -		size > KMALLOC_MAX_SIZE) {
> +	if (!name || in_interrupt() || size > KMALLOC_MAX_SIZE) {
>  		pr_err("kmem_cache_create(%s) integrity check failed\n", name);
>  		return -EINVAL;
>  	}
> diff --git a/mm/slub.c b/mm/slub.c
> index f91d9fe7d0d8..f58cfd456548 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3734,15 +3734,17 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
>  	 */
>  	s->inuse = size;
>  
> -	if (((flags & (SLAB_TYPESAFE_BY_RCU | SLAB_POISON)) ||
> -		s->ctor)) {
> +	if ((flags & (SLAB_TYPESAFE_BY_RCU | SLAB_POISON)) ||
> +	    ((flags & SLAB_RED_ZONE) && s->object_size < sizeof(void *)) ||
> +	    s->ctor) {
>  		/*
>  		 * Relocate free pointer after the object if it is not
>  		 * permitted to overwrite the first word of the object on
>  		 * kmem_cache_free.
>  		 *
>  		 * This is the case if we do RCU, have a constructor or
> -		 * destructor or are poisoning the objects.
> +		 * destructor, are poisoning the objects, or are
> +		 * redzoning an object smaller than sizeof(void *).
>  		 *
>  		 * The assumption that s->offset >= s->inuse means free
>  		 * pointer is outside of the object is used in the
> 


