From: Jason Gunthorpe <jgg@nvidia.com>
To: "Zeng, Oak" <oak.zeng@intel.com>
Cc: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>,
	"Daniel Vetter" <daniel@ffwll.ch>,
	"dri-devel@lists.freedesktop.org"
	<dri-devel@lists.freedesktop.org>,
	"intel-xe@lists.freedesktop.org" <intel-xe@lists.freedesktop.org>,
	"Brost, Matthew" <matthew.brost@intel.com>,
	"Welty, Brian" <brian.welty@intel.com>,
	"Ghimiray, Himal Prasad" <himal.prasad.ghimiray@intel.com>,
	"Bommu, Krishnaiah" <krishnaiah.bommu@intel.com>,
	"Vishwanathapura,
	Niranjana" <niranjana.vishwanathapura@intel.com>,
	"Leon Romanovsky" <leon@kernel.org>
Subject: Re: [PATCH 06/23] drm/xe/svm: Introduce a helper to build sg table from hmm range
Date: Mon, 6 May 2024 10:33:23 -0300	[thread overview]
Message-ID: <20240506133323.GI3341011@nvidia.com> (raw)
In-Reply-To: <SA1PR11MB69911B3E675B1072A17A3E49921F2@SA1PR11MB6991.namprd11.prod.outlook.com>

On Fri, May 03, 2024 at 08:29:39PM +0000, Zeng, Oak wrote:

> > > But we have a use case where we want to fault in pages other than
> > > the page which contains the GPU fault address, e.g., a user
> > > malloc'ed or mmap'ed 8MiB buffer, with no CPU touching of this
> > > buffer before the GPU accesses it. Let's say the GPU access caused
> > > a GPU page fault at a 2MiB place. The first hmm-range-fault would
> > > only fault in the page at the 2MiB place, because in the first
> > > call we only set REQ_FAULT on the pfn at the 2MiB place.
> > 
> > Honestly, that doesn't make a lot of sense to me, but if you really
> > want that you should add some new flag and have hmm_range_fault do
> > this kind of speculative faulting. I think you will end up
> > significantly over-faulting.
> 
> The above two-step hmm-range-fault approach is just my guess at what
> you were suggesting. Since you don't like the CPU vma lookup, we came
> up with this two-step hmm-range-fault scheme. The first step has the
> same functionality as a CPU vma lookup.

If you want to retain the GPU fault flag as a signal for changing
locality, then you have to correct the locality and resolve all faults
before calling hmm_range_fault(). hmm_range_fault() will never do
faulting; it will always just read in the already-resolved pages.
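
To make it concrete, the kind of snapshot-only call I mean could look
roughly like the below (illustrative names only, and the caller still
has to do the usual mmu_interval_read_retry() check under its page
table lock before it consumes the pfns):

  #include <linux/hmm.h>
  #include <linux/mm.h>
  #include <linux/mmu_notifier.h>

  /* Rough sketch: collect the already-resolved CPU pages for
   * [start, end) without asking hmm_range_fault() to fault anything
   * in.  With default_flags == 0 (no HMM_PFN_REQ_FAULT) non-present
   * pages simply come back without HMM_PFN_VALID set.
   */
  static int snapshot_range(struct mmu_interval_notifier *notifier,
                            struct mm_struct *mm, unsigned long start,
                            unsigned long end, unsigned long *pfns)
  {
      struct hmm_range range = {
          .notifier       = notifier,
          .start          = start,
          .end            = end,
          .hmm_pfns       = pfns,
          .default_flags  = 0,
          .pfn_flags_mask = 0,
      };
      int ret;

      do {
          range.notifier_seq = mmu_interval_read_begin(notifier);
          mmap_read_lock(mm);
          ret = hmm_range_fault(&range);
          mmap_read_unlock(mm);
      } while (ret == -EBUSY);

      return ret;
  }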

> > It also doesn't make sense to do faulting in hmm prefetch if you are
> > going to do migration to force the fault anyhow.
> 
> What do you mean by hmm prefetch?

I mean the pages that are not part of the critical fault
resolution: the pages you are preloading into the GPU page table
without an immediate need.
 
> As explained, we call hmm-range-fault in two scenarios:
> 
> 1) Call hmm-range-fault to get the current status of the CPU page
> table without causing a CPU fault. When the address range has already
> been accessed by the CPU before the GPU, or when we migrate such a
> range, we run into this scenario.

This is because you are trying to keep locality management outside of
the core code - it is creating this problem. As I said below, locality
management should be core code, not in drivers. It may be hmm core
code, not drm, but regardless.
 
> We do have another prefetch API which can be called from user space
> to prefetch before GPU job submission.

This API seems like it would break the use of faulting as a mechanism
to manage locality...

> > I'm not sure I fully agree there is a real need to aggressively
> > optimize the faulting path like you are describing when it
> > shouldn't really be used in a performant application :\
> 
> As a driver, we need to support all possible scenarios. 

Functionally supporting it is different from micro-optimizing it.

> > > 2) Decide a migration window per the migration granularity
> > > setting (e.g., 2MiB), inside the CPU VMA. If the CPU VMA is
> > > smaller than the migration granularity, the migration window is
> > > the whole CPU vma range; otherwise, part of the VMA range is
> > > migrated.
> > 
> > Seems rather arbitrary to me. You are quite likely to capture some
> > memory that is CPU memory and cause thrashing. As I said before, in
> > common cases the heap will be large single VMAs, so this kind of
> > scheme is just going to fault a whole bunch of unrelated malloc
> > objects over to the GPU.
> 
> I want to listen more here.
> 
> Here is my understanding. For a malloc of small size, such as less
> than one page or a few pages, memory is allocated from the heap.
> 
> When the malloc size is much more than one page, glibc's behavior is
> to mmap it directly from the OS rather than from the heap.

Yes "much more", there is some cross over where very large allocations
may get there own arena.
 
> In glibc the threshold is defined by MMAP_THRESHOLD. The default
> value is 128K:
> https://www.gnu.org/software/libc/manual/html_node/Memory-Allocation-Tunables.html

Sure
 
> So on the heap there are some small VMAs, each containing a few
> pages, normally one page per VMA. In the worst case, a heap VMA
> shouldn't be bigger than MMAP_THRESHOLD.

Huh? That isn't quite how it works. The glibc arenas for < 128K
allocations can be quite big; they will often come from the brk heap,
which is a single large VMA. The above only says that allocations over
128K will get their own VMAs. It doesn't say small allocations get
small VMAs.

Of course there are many allocator libraries with different schemes
and tunables.
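
If you want to see the default glibc behaviour on a live system, a
quick userspace check (assuming the default tunables) is something
like:

  #include <stdio.h>
  #include <stdlib.h>
  #include <unistd.h>

  int main(void)
  {
      void *small = malloc(64 * 1024);   /* below the ~128K threshold: heap/arena */
      void *big   = malloc(512 * 1024);  /* above it: its own anonymous mmap VMA */
      char cmd[64];

      printf("small=%p big=%p\n", small, big);
      snprintf(cmd, sizeof(cmd), "cat /proc/%d/maps", (int)getpid());
      system(cmd);    /* only 'big' shows up as a separate mapping */

      free(small);
      free(big);
      return 0;
  }

The small allocation lands inside the single [heap] VMA; only the
large one gets a mapping of its own.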

> In a reasonable GPU application, people use the GPU for compute,
> which usually involves large amounts of data: many MiB, sometimes
> even many GiB.

Then the application can also prefault the whole thing.

> Now going back to our scheme: I picture that in most applications
> the CPU vma search ends up with a big vma, MiB, GiB, etc.

I'm not sure. Some may, but not all, and not all memory touched by the
GPU will necessarily come from the giant allocation even in the apps
that do work that way.

> If we end up with a vma that is only a few pages, we fault in the
> whole vma. It is true that in this case we fault in unrelated malloc
> objects. Maybe we can fine-tune here to only fault in one page (which
> is the minimum fault size) for such a case. Admittedly one page can
> also hold a bunch of unrelated objects. But overall we think this
> should not be the common case.

This is the obvious solution. Without some kind of special knowledge
the kernel possibly shouldn't attempt to optimize by speculating how
to resolve the fault - or minimally the speculation needs to be a
tunable (ugh).
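
Purely as an illustration of what "a tunable" could mean (nothing like
this exists in the posted series), it could be as dumb as a module
parameter:

  #include <linux/module.h>

  /* Hypothetical knob: how many extra pages to speculatively fault in
   * around the page that actually took the GPU fault.  0 disables the
   * speculation entirely.
   */
  static unsigned int svm_prefault_pages;
  module_param(svm_prefault_pages, uint, 0644);
  MODULE_PARM_DESC(svm_prefault_pages,
                   "Extra pages to speculatively fault around a GPU fault (0 = off)");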

Broadly, I think using the fault indication to infer the locality of
pages that haven't been faulted is pretty bad. Locality indications
need to come from something that reliably indicates whether the device
is touching the pages at all.

Arguably this can never be performant, so I'd argue you should focus
on making things simply work (i.e. single fault, no prefault, basic
prefetch) and not expect to achieve high-quality dynamic locality.
The application must specify; the application must prefault &
prefetch.
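
For the prefault side, something as simple as the application
write-touching its buffer before submitting GPU work would do; a plain
userspace sketch (illustrative only):

  #include <stddef.h>
  #include <unistd.h>

  /* Touch every page of a buffer before handing it to the GPU so the
   * device fault path never has to resolve it later.
   */
  static void prefault_buffer(volatile char *buf, size_t len)
  {
      long page = sysconf(_SC_PAGESIZE);
      size_t off;

      for (off = 0; off < len; off += (size_t)page)
          buf[off] = buf[off];        /* write touch populates the page */
      if (len)
          buf[len - 1] = buf[len - 1];
  }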

Jason

