From: Dan Williams <dan.j.williams@intel.com>
To: Gregory Price <gregory.price@memverge.com>, <linux-cxl@vger.kernel.org>
Cc: Dan Williams <dan.j.williams@intel.com>,
	Dave Jiang <dave.jiang@intel.com>
Subject: RE: [BUG] DAX access of Memory Expander on RCH topology fires BUG on page_table_check
Date: Mon, 17 Apr 2023 23:35:15 -0700
Message-ID: <643e3a2344460_556e294a2@dwillia2-mobl3.amr.corp.intel.com.notmuch>
In-Reply-To: <ZDb71ZXGtzz0ttQT@memverge.com>

Gregory Price wrote:
> 
> 
> I was looking to validate mlock-ability of various pages when CXL is in
> different states (numa, dax, etc), and I discovered a page_table_check
> BUG when accessing Memory Expander memory while the device is in devdax mode.
> 
> This happens on the very first access, i.e. the first page fault of the mapping:
> 
> int dax_fd = open(device_path, O_RDWR);
> void *mapped_memory = mmap(NULL, (1024*1024*2), PROT_READ | PROT_WRITE, MAP_SHARED, dax_fd, 0);
> ((char*)mapped_memory)[0] = 1;
> 
> 
> Full details of my test here:
> 
> Step 1) Test that memory onlined in NUMA node works
> 
> [user@host0 ~]# numactl --hardware
> available: 2 nodes (0-1)
> node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63
> node 0 size: 63892 MB
> node 0 free: 59622 MB
> node 1 cpus:
> node 1 size: 129024 MB
> node 1 free: 129024 MB
> node distances:
> node   0   1
>   0:  10  50
>   1:  255  10
> 
> 
> [user@host0 ~]# numactl --preferred=1 memhog 128G
> ... snip ...
> 
> Passes no problem, all memory is accessible and used.
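
For reference, a sketch of how the system-ram state in the listing below is
typically reached with daxctl (device name dax0.0 assumed from that listing;
the memory blocks may also be auto-onlined by the distro):

  daxctl reconfigure-device --mode=system-ram dax0.0   # bind dax0.0 to the kmem driver
  daxctl online-memory dax0.0                          # online the resulting memory blocks, if not already online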
> 
> 
> 
> Next, reconfigure the device to devdax mode
> 
> 
> [user@host0 ~]# daxctl list
> [
>   {
>     "chardev":"dax0.0",
>     "size":137438953472,
>     "target_node":1,
>     "align":2097152,
>     "mode":"system-ram",
>     "online_memblocks":63,
>     "total_memblocks":63,
>     "movable":true
>   }
> ]
> [user@host0 ~]# daxctl offline-memory dax0.0
> offlined memory for 1 device
> [user@host0 ~]# daxctl reconfigure-device --human --mode=devdax dax0.0
> {
>   "chardev":"dax0.0",
>   "size":"128.00 GiB (137.44 GB)",
>   "target_node":1,
>   "align":2097152,
>   "mode":"devdax"
> }
> reconfigured 1 device
> [user@host0 mapping0]# daxctl list -M -u
> {
>   "chardev":"dax0.0",
>   "size":"128.00 GiB (137.44 GB)",
>   "target_node":1,
>   "align":2097152,
>   "mode":"devdax",
>   "mappings":[
>     {
>       "page_offset":"0",
>       "start":"0x1050000000",
>       "end":"0x304fffffff",
>       "size":"128.00 GiB (137.44 GB)"
>     }
>   ]
> }
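
Before mapping, a couple of sanity checks on the reconfigured device may be
worth noting (illustrative only; exact /proc/iomem labels vary by platform):

  ls -l /dev/dax0.0    # the character device node should exist
  numactl --hardware   # node 1 should no longer report the 128 GiB as system RAM
  cat /proc/iomem      # 0x1050000000-0x304fffffff should no longer show up as "System RAM"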
> 
> 
> Now map and access the memory via /dev/dax0.0  (test program attached)
> 
> [ 1028.430734] kernel BUG at mm/page_table_check.c:53!

I have never tested DAX with CONFIG_PAGE_TABLE_CHECK=y, so I would need to
dig in further here. A quick run of the unit tests passes, but they do not
cover this "map dax after system-ram" scenario. Just for completeness, does
it behave correctly without that debug option enabled?
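
A quick way to confirm how the running kernel is configured (paths assumed; on
builds without CONFIG_PAGE_TABLE_CHECK_ENFORCED the checker can also be toggled
at boot time with page_table_check=on / page_table_check=off):

  grep PAGE_TABLE_CHECK /boot/config-$(uname -r)
  # or, if /proc/config.gz is available:
  zgrep PAGE_TABLE_CHECK /proc/config.gz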

[..] 
> 
> Test program:
> 
> #include <sys/mman.h>
> #include <sys/types.h>
> #include <sys/stat.h>
> #include <fcntl.h>
> #include <unistd.h>
> #include <stdio.h>
> #include <stdlib.h>
> #include <errno.h>
> #include <string.h>
> 
> int main() {
>     // Open the DAX device
>     const char *device_path = "/dev/dax0.0"; // Replace with your DAX device path
>     int dax_fd = open(device_path, O_RDWR);
> 
>     if (dax_fd < 0) {
>         printf("Error: Unable to open DAX device: %s\n", strerror(errno));
>         return 1;
>     }
>     printf("file opened\n");
> 
>     // Memory-map the DAX device
>     size_t size = 1024*1024*2; // 2MB
>     void *mapped_memory = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, dax_fd, 0);
> 
>     if (mapped_memory == MAP_FAILED) {
>         printf("Error: Unable to mmap DAX device: %s\n", strerror(errno));
>         close(dax_fd);
>         return 1;
>     }
>     printf("mmaped\n");
> 
>     ((char*)mapped_memory)[0] = 1;
> 
> /*

i.e. just touching the memory fails, no need to mlock it? This smells
more like the CONFIG_PAGE_TABLE_CHECK machinery is getting confused, but
I would have expected its metadata to be reset by the dax device
reconfiguration.
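
One way to isolate the "map dax after system-ram" aspect would be to run the
same touch test from a fresh boot without ever onlining the device as
system-ram (sequence is illustrative; ./dax_touch is a stand-in name for the
attached test program):

  # fresh boot, skip the system-ram step entirely
  daxctl reconfigure-device --mode=devdax dax0.0   # if the device does not already come up in devdax mode
  ./dax_touch
  # compare with: system-ram -> offline-memory -> reconfigure to devdax -> ./dax_touch

If the BUG only fires after the system-ram round trip, that points at stale
page_table_check state rather than the devdax mapping path itself.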
