Linux Kernel Mentees Archive mirror
From: Juntong Deng <juntong.deng@outlook.com>
To: Andrey Konovalov <andreyknvl@gmail.com>
Cc: ryabinin.a.a@gmail.com, glider@google.com, dvyukov@google.com,
	vincenzo.frascino@arm.com, akpm@linux-foundation.org,
	kasan-dev@googlegroups.com, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org,
	linux-kernel-mentees@lists.linuxfoundation.org
Subject: Re: [PATCH v2] kasan: Improve free meta storage in Generic KASAN
Date: Wed, 22 Nov 2023 17:31:19 +0800	[thread overview]
Message-ID: <VI1P193MB0752C8CE08222B9EA4D744F399BAA@VI1P193MB0752.EURP193.PROD.OUTLOOK.COM> (raw)
In-Reply-To: <CA+fCnZfBM=UU0AyArERNMxBMeaPvbV-e6uyQDpwgqA5c6_f_DQ@mail.gmail.com>

On 2023/11/22 10:27, Andrey Konovalov wrote:
> On Tue, Nov 21, 2023 at 10:42 PM Juntong Deng <juntong.deng@outlook.com> wrote:
>>
>> Currently, the free meta can only be stored in the object if the
>> object is not smaller than the free meta.
>>
>> After this improvement, even when the object is smaller than the free
>> meta, part of the free meta can still be stored in the object, which
>> reduces how much the redzone has to grow.
>>
>> Example:
>>
>> free meta size: 16 bytes
>> alloc meta size: 16 bytes
>> object size: 8 bytes
>> optimal redzone size (object_size <= 64): 16 bytes
>>
>> Before improvement:
>> actual redzone size = alloc meta size + free meta size = 32 bytes
>>
>> After improvement:
>> actual redzone size = alloc meta size + (free meta size - object size)
>>                      = 24 bytes
>>
>> Suggested-by: Dmitry Vyukov <dvyukov@google.com>
>> Signed-off-by: Juntong Deng <juntong.deng@outlook.com>
> 
> I think this change as is does not work well with slub_debug.
> 
> slub_debug puts its metadata (redzone, tracks, and orig_size) right
> after the object (see calculate_sizes and the comment before
> check_pad_bytes). With the current code, KASAN's free meta either fits
> within the object or is placed after the slub_debug metadata and
> everything works well. With this change, KASAN's free meta tail goes
> right past object_size, overlaps with the slub_debug metadata, and
> thus can corrupt it.
> 
> Thus, to make this patch work properly, we need to carefully think
> about the layout of all the metadata and teach slub_debug that KASAN's free meta
> can go past object_size. Possibly, adjusting s->inuse by the size of
> KASAN's metas (along with moving kasan_cache_create and fixing up
> set_orig_size) would be enough. But I'm not familiar with the
> slub_debug code enough to be sure.
> 
> If you decide to proceed with improving this change, I've left some
> comments for the current code below.
> 
> Thank you!
> 


I delved into the memory layout of SLUB_DEBUG today.

I think the better option is to not let the free meta extend past the
object when SLUB_DEBUG is enabled.

In other words, when SLUB_DEBUG is enabled, the free meta continues to
be stored the way it was before this patch.

Even if we teach SLUB_DEBUG that KASAN's free meta may extend past the
object and move SLUB_DEBUG's metadata backward, that still defeats the
original design intent of SLUB_DEBUG.

SLUB_DEBUG detects out-of-bounds accesses by filling the redzones on
both sides of the object with magic numbers. If SLUB_DEBUG's redzones
move backward, a gap opens up right after the object, and an overflow
into that gap never touches the magic bytes, so the out-of-bounds
check is broken.

I will send patch V3 to fix this issue.


>> ---
>> V1 -> V2: Make kasan_metadata_size() adapt to the improved
>> free meta storage
>>
>>   mm/kasan/generic.c | 50 +++++++++++++++++++++++++++++++---------------
>>   1 file changed, 34 insertions(+), 16 deletions(-)
>>
>> diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
>> index 4d837ab83f08..802c738738d7 100644
>> --- a/mm/kasan/generic.c
>> +++ b/mm/kasan/generic.c
>> @@ -361,6 +361,8 @@ void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
>>   {
>>          unsigned int ok_size;
>>          unsigned int optimal_size;
>> +       unsigned int rem_free_meta_size;
>> +       unsigned int orig_alloc_meta_offset;
>>
>>          if (!kasan_requires_meta())
>>                  return;
>> @@ -394,6 +396,9 @@ void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
>>                  /* Continue, since free meta might still fit. */
>>          }
>>
>> +       ok_size = *size;
>> +       orig_alloc_meta_offset = cache->kasan_info.alloc_meta_offset;
>> +
>>          /*
>>           * Add free meta into redzone when it's not possible to store
>>           * it in the object. This is the case when:
>> @@ -401,21 +406,26 @@ void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
>>           *    be touched after it was freed, or
>>           * 2. Object has a constructor, which means it's expected to
>>           *    retain its content until the next allocation, or
> 
> Please drop "or" on the line above.
> 
>> -        * 3. Object is too small.
>>           * Otherwise cache->kasan_info.free_meta_offset = 0 is implied.
>> +        * Even if the object is smaller than free meta, it is still
>> +        * possible to store part of the free meta in the object.
>>           */
>> -       if ((cache->flags & SLAB_TYPESAFE_BY_RCU) || cache->ctor ||
>> -           cache->object_size < sizeof(struct kasan_free_meta)) {
>> -               ok_size = *size;
>> -
>> +       if ((cache->flags & SLAB_TYPESAFE_BY_RCU) || cache->ctor) {
>>                  cache->kasan_info.free_meta_offset = *size;
>>                  *size += sizeof(struct kasan_free_meta);
>> +       } else if (cache->object_size < sizeof(struct kasan_free_meta)) {
>> +               rem_free_meta_size = sizeof(struct kasan_free_meta) -
>> +                                                               cache->object_size;
>> +               *size += rem_free_meta_size;
>> +               if (cache->kasan_info.alloc_meta_offset != 0)
>> +                       cache->kasan_info.alloc_meta_offset += rem_free_meta_size;
>> +       }
>>
>> -               /* If free meta doesn't fit, don't add it. */
>> -               if (*size > KMALLOC_MAX_SIZE) {
>> -                       cache->kasan_info.free_meta_offset = KASAN_NO_FREE_META;
>> -                       *size = ok_size;
>> -               }
>> +       /* If free meta doesn't fit, don't add it. */
>> +       if (*size > KMALLOC_MAX_SIZE) {
>> +               cache->kasan_info.free_meta_offset = KASAN_NO_FREE_META;
>> +               cache->kasan_info.alloc_meta_offset = orig_alloc_meta_offset;
>> +               *size = ok_size;
>>          }
>>
>>          /* Calculate size with optimal redzone. */
>> @@ -464,12 +474,20 @@ size_t kasan_metadata_size(struct kmem_cache *cache, bool in_object)
>>          if (in_object)
>>                  return (info->free_meta_offset ?
>>                          0 : sizeof(struct kasan_free_meta));
> 
> This needs to be changed as well to something like
> min(cache->object_size, sizeof(struct kasan_free_meta)). However, with the slub_debug
> conflicts I mentioned above, we might need to change this to something
> else.
> 
> 
> 
>> -       else
>> -               return (info->alloc_meta_offset ?
>> -                       sizeof(struct kasan_alloc_meta) : 0) +
>> -                       ((info->free_meta_offset &&
>> -                       info->free_meta_offset != KASAN_NO_FREE_META) ?
>> -                       sizeof(struct kasan_free_meta) : 0);
>> +       else {
>> +               size_t alloc_meta_size = info->alloc_meta_offset ?
>> +                                                               sizeof(struct kasan_alloc_meta) : 0;
>> +               size_t free_meta_size = 0;
>> +
>> +               if (info->free_meta_offset != KASAN_NO_FREE_META) {
>> +                       if (info->free_meta_offset)
>> +                               free_meta_size = sizeof(struct kasan_free_meta);
>> +                       else if (cache->object_size < sizeof(struct kasan_free_meta))
>> +                               free_meta_size = sizeof(struct kasan_free_meta) -
>> +                                                                       cache->object_size;
>> +               }
>> +               return alloc_meta_size + free_meta_size;
>> +       }
>>   }
>>
>>   static void __kasan_record_aux_stack(void *addr, bool can_alloc)
>> --
>> 2.39.2


Thread overview:
2023-11-21 21:41 [PATCH v2] kasan: Improve free meta storage in Generic KASAN Juntong Deng
2023-11-22  2:27 ` Andrey Konovalov
2023-11-22  9:31   ` Juntong Deng [this message]
