From: Sagi Grimberg <sagi@grimberg.me>
To: Yi Zhang <yi.zhang@redhat.com>
Cc: Chaitanya Kulkarni <chaitanyak@nvidia.com>,
	Jens Axboe <axboe@kernel.dk>,
	linux-block <linux-block@vger.kernel.org>,
	"linux-nvme@lists.infradead.org" <linux-nvme@lists.infradead.org>,
	Shinichiro Kawasaki <shinichiro.kawasaki@wdc.com>
Subject: Re: [bug report] kmemleak observed with blktests nvme/tcp
Date: Mon, 22 Apr 2024 13:46:02 +0300
Message-ID: <54666822-4e7f-4ec2-982b-541380cec154@grimberg.me>
In-Reply-To: <CAHj4cs9UN_pV_raSL2+wNRP9yBeJWkx0_GtHSQ0QoC3jYxhfQA@mail.gmail.com>



On 22/04/2024 7:59, Yi Zhang wrote:
> On Sun, Apr 21, 2024 at 6:31 PM Sagi Grimberg <sagi@grimberg.me> wrote:
>>
>>
>> On 16/04/2024 6:19, Chaitanya Kulkarni wrote:
>>> +linux-nvme list for awareness ...
>>>
>>> -ck
>>>
>>>
>>> On 4/6/24 17:38, Yi Zhang wrote:
>>>> Hello
>>>>
>>>> I found this kmemleak issue after running the blktests nvme/tcp tests on the
>>>> latest linux-block/for-next. Please help check it, and let me know if you
>>>> need any info/testing for it. Thanks.
>>> It will help others if you specify which test case you are using ...
>>>
>>>> # dmesg | grep kmemleak
>>>> [ 2580.572467] kmemleak: 92 new suspected memory leaks (see
>>>> /sys/kernel/debug/kmemleak)
>>>>
>>>> # cat kmemleak.log
>>>> unreferenced object 0xffff8885a1abe740 (size 32):
>>>>      comm "kworker/40:1H", pid 799, jiffies 4296062986
>>>>      hex dump (first 32 bytes):
>>>>        c2 4a 4a 04 00 ea ff ff 00 00 00 00 00 10 00 00  .JJ.............
>>>>        00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
>>>>      backtrace (crc 6328eade):
>>>>        [<ffffffffa7f2657c>] __kmalloc+0x37c/0x480
>>>>        [<ffffffffa86a9b1f>] sgl_alloc_order+0x7f/0x360
>>>>        [<ffffffffc261f6c5>] lo_read_simple+0x1d5/0x5b0 [loop]
>>>>        [<ffffffffc26287ef>] 0xffffffffc26287ef
>>>>        [<ffffffffc262a2c4>] 0xffffffffc262a2c4
>>>>        [<ffffffffc262a881>] 0xffffffffc262a881
>>>>        [<ffffffffa76adf3c>] process_one_work+0x89c/0x19f0
>>>>        [<ffffffffa76b0813>] worker_thread+0x583/0xd20
>>>>        [<ffffffffa76ce2a3>] kthread+0x2f3/0x3e0
>>>>        [<ffffffffa74a804d>] ret_from_fork+0x2d/0x70
>>>>        [<ffffffffa7406e4a>] ret_from_fork_asm+0x1a/0x30
>>>> unreferenced object 0xffff88a8b03647c0 (size 16):
>>>>      comm "kworker/40:1H", pid 799, jiffies 4296062986
>>>>      hex dump (first 16 bytes):
>>>>        c0 4a 4a 04 00 ea ff ff 00 10 00 00 00 00 00 00  .JJ.............
>>>>      backtrace (crc 860ce62b):
>>>>        [<ffffffffa7f2657c>] __kmalloc+0x37c/0x480
>>>>        [<ffffffffc261f805>] lo_read_simple+0x315/0x5b0 [loop]
>>>>        [<ffffffffc26287ef>] 0xffffffffc26287ef
>>>>        [<ffffffffc262a2c4>] 0xffffffffc262a2c4
>>>>        [<ffffffffc262a881>] 0xffffffffc262a881
>>>>        [<ffffffffa76adf3c>] process_one_work+0x89c/0x19f0
>>>>        [<ffffffffa76b0813>] worker_thread+0x583/0xd20
>>>>        [<ffffffffa76ce2a3>] kthread+0x2f3/0x3e0
>>>>        [<ffffffffa74a804d>] ret_from_fork+0x2d/0x70
>>>>        [<ffffffffa7406e4a>] ret_from_fork_asm+0x1a/0x30
>> kmemleak suggests that the leakage is coming from lo_read_simple(). Is
>> this a regression that can be bisected?
>>
> It's not a regression; I tried 6.7 and it can also be reproduced there.

It's strange that the stack makes it look like lo_read_simple() is allocating
the sgl; it is probably nvmet-tcp, though.
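
For context, and paraphrasing from memory rather than the exact tree code:
nvmet-tcp allocates a per-command scatterlist plus an iovec for data-in
commands, roughly like the sketch below, which lines up with the
__kmalloc/sgl_alloc_order frames in the backtrace (the lo_read_simple symbols
are presumably just mis-resolved module addresses):

	/* sketch of the nvmet-tcp allocation side, not the exact tree code */
	cmd->req.sg = sgl_alloc(len, GFP_KERNEL, &cmd->req.sg_cnt);
	if (!cmd->req.sg)
		return NVME_SC_INTERNAL;
	if (nvmet_tcp_has_data_in(cmd)) {
		cmd->iov = kmalloc_array(cmd->req.sg_cnt, sizeof(*cmd->iov),
					 GFP_KERNEL);
		if (!cmd->iov) {
			sgl_free(cmd->req.sg);
			return NVME_SC_INTERNAL;
		}
	}

If a command still holds these buffers at queue teardown but does not (or no
longer) satisfies the nvmet_tcp_need_data_in() check, the current teardown path
skips it, which could leave exactly this kind of leak behind.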

Can you try the patch below:
--
diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
index a5422e2c979a..bfd1cf7cc1c2 100644
--- a/drivers/nvme/target/tcp.c
+++ b/drivers/nvme/target/tcp.c
@@ -348,6 +348,7 @@ static int nvmet_tcp_check_ddgst(struct nvmet_tcp_queue *queue, void *pdu)
 	return 0;
 }
 
+/* safe to call multiple times */
 static void nvmet_tcp_free_cmd_buffers(struct nvmet_tcp_cmd *cmd)
 {
 	kfree(cmd->iov);
@@ -1581,13 +1582,9 @@ static void nvmet_tcp_free_cmd_data_in_buffers(struct nvmet_tcp_queue *queue)
 	struct nvmet_tcp_cmd *cmd = queue->cmds;
 	int i;
 
-	for (i = 0; i < queue->nr_cmds; i++, cmd++) {
-		if (nvmet_tcp_need_data_in(cmd))
-			nvmet_tcp_free_cmd_buffers(cmd);
-	}
-
-	if (!queue->nr_cmds && nvmet_tcp_need_data_in(&queue->connect))
-		nvmet_tcp_free_cmd_buffers(&queue->connect);
+	for (i = 0; i < queue->nr_cmds; i++, cmd++)
+		nvmet_tcp_free_cmd_buffers(cmd);
+	nvmet_tcp_free_cmd_buffers(&queue->connect);
 }
 
 static void nvmet_tcp_release_queue_work(struct work_struct *w)
--
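
For reference, the helper being called unconditionally here already clears its
pointers after freeing (roughly the shape below, again from memory rather than
the exact tree code), and kfree()/sgl_free() both accept NULL, so invoking it
for every command, and for the connect command, should be safe even when the
buffers were already released on the normal completion path:

	/* sketch of the existing helper; the NULL-ing is what makes repeats safe */
	static void nvmet_tcp_free_cmd_buffers(struct nvmet_tcp_cmd *cmd)
	{
		kfree(cmd->iov);
		sgl_free(cmd->req.sg);
		cmd->iov = NULL;
		cmd->req.sg = NULL;
	}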

