From: Oak Zeng <oak.zeng@intel.com>
To: dri-devel@lists.freedesktop.org, intel-xe@lists.freedesktop.org
Cc: matthew.brost@intel.com, Thomas.Hellstrom@linux.intel.com, brian.welty@intel.com, himal.prasad.ghimiray@intel.com, krishnaiah.bommu@intel.com, niranjana.vishwanathapura@intel.com
Subject: [PATCH 09/23] drm/xe/svm: Remap and provide memmap backing for GPU vram
Date: Wed, 17 Jan 2024 17:12:09 -0500
Message-ID: <20240117221223.18540-10-oak.zeng@intel.com> (raw)
In-Reply-To: <20240117221223.18540-1-oak.zeng@intel.com>

Remap GPU vram using devm_memremap_pages, so that each GPU vram page is
backed by a struct page. These struct pages allow hmm to migrate buffers
between GPU vram and CPU system memory using the existing Linux migration
mechanism (i.e., the one used to migrate between CPU system memory and hard
disk). This is preparatory work for enabling svm (shared virtual memory)
through the Linux kernel hmm framework.

The remap's page map type is set to MEMORY_DEVICE_PRIVATE for now. This
means that even though each GPU vram page gets a struct page and can be
mapped in the CPU page table, such pages are treated as the GPU's private
resource, so the CPU cannot access them. If the CPU accesses such a page, a
page fault is triggered and the page is migrated to system memory.

For GPU devices that support a coherent memory protocol between CPU and GPU
(such as CXL and CAPI), we could instead remap device memory as
MEMORY_DEVICE_COHERENT. This is TBD.
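In condensed form, the remap sequence performed by the new xe_svm_devm_add() below is (error handling and logging omitted; all identifiers are taken from the patch itself):

```c
/* Sketch only: condensed from xe_svm_devm_add() in this patch. */
struct resource *res;
void *addr;

/* Reserve a free range in the host physical address space */
res = devm_request_free_mem_region(dev, &iomem_resource, mr->usable_size);

/* Describe the VRAM as device-private ZONE_DEVICE memory */
mr->pagemap.type = MEMORY_DEVICE_PRIVATE;
mr->pagemap.range.start = res->start;
mr->pagemap.range.end = res->end;
mr->pagemap.nr_range = 1;
mr->pagemap.ops = &xe_devm_pagemap_ops; /* .migrate_to_ram handles CPU faults */
mr->pagemap.owner = tile->xe->drm.dev;

/* Create struct pages backing the reserved range */
addr = devm_memremap_pages(dev, &mr->pagemap);
mr->hpa_base = res->start;
```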
Signed-off-by: Oak Zeng <oak.zeng@intel.com>
Co-developed-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
Signed-off-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@intel.com>
Cc: Brian Welty <brian.welty@intel.com>
---
 drivers/gpu/drm/xe/xe_device_types.h |  8 +++
 drivers/gpu/drm/xe/xe_mmio.c         |  7 +++
 drivers/gpu/drm/xe/xe_svm.h          |  2 +
 drivers/gpu/drm/xe/xe_svm_devmem.c   | 87 ++++++++++++++++++++++++++++
 4 files changed, 104 insertions(+)
 create mode 100644 drivers/gpu/drm/xe/xe_svm_devmem.c

diff --git a/drivers/gpu/drm/xe/xe_device_types.h b/drivers/gpu/drm/xe/xe_device_types.h
index 7eda86bd4c2a..6dba5b0ab481 100644
--- a/drivers/gpu/drm/xe/xe_device_types.h
+++ b/drivers/gpu/drm/xe/xe_device_types.h
@@ -99,6 +99,14 @@ struct xe_mem_region {
 	resource_size_t actual_physical_size;
 	/** @mapping: pointer to VRAM mappable space */
 	void __iomem *mapping;
+	/** @pagemap: Used to remap device memory as ZONE_DEVICE */
+	struct dev_pagemap pagemap;
+	/**
+	 * @hpa_base: base host physical address
+	 *
+	 * Generated when the device memory is remapped as ZONE_DEVICE.
+	 */
+	resource_size_t hpa_base;
 };

 /**
diff --git a/drivers/gpu/drm/xe/xe_mmio.c b/drivers/gpu/drm/xe/xe_mmio.c
index c8c5d74b6e90..3d34dcfa3b3a 100644
--- a/drivers/gpu/drm/xe/xe_mmio.c
+++ b/drivers/gpu/drm/xe/xe_mmio.c
@@ -21,6 +21,7 @@
 #include "xe_macros.h"
 #include "xe_module.h"
 #include "xe_tile.h"
+#include "xe_svm.h"

 #define XEHP_MTCFG_ADDR		XE_REG(0x101800)
 #define TILE_COUNT		REG_GENMASK(15, 8)
@@ -285,6 +286,7 @@ int xe_mmio_probe_vram(struct xe_device *xe)
 		}

 		io_size -= min_t(u64, tile_size, io_size);
+		xe_svm_devm_add(tile, &tile->mem.vram);
 	}

 	xe->mem.vram.actual_physical_size = total_size;
@@ -353,10 +355,15 @@ void xe_mmio_probe_tiles(struct xe_device *xe)
 static void mmio_fini(struct drm_device *drm, void *arg)
 {
 	struct xe_device *xe = arg;
+	struct xe_tile *tile;
+	u8 id;
 	pci_iounmap(to_pci_dev(xe->drm.dev), xe->mmio.regs);
 	if (xe->mem.vram.mapping)
 		iounmap(xe->mem.vram.mapping);
+	for_each_tile(tile, xe, id) {
+		xe_svm_devm_remove(xe, &tile->mem.vram);
+	}
 }

 static int xe_verify_lmem_ready(struct xe_device *xe)
diff --git a/drivers/gpu/drm/xe/xe_svm.h b/drivers/gpu/drm/xe/xe_svm.h
index 191bce6425db..b54f7714a1fc 100644
--- a/drivers/gpu/drm/xe/xe_svm.h
+++ b/drivers/gpu/drm/xe/xe_svm.h
@@ -72,4 +72,6 @@ struct xe_svm *xe_lookup_svm_by_mm(struct mm_struct *mm);
 struct xe_svm_range *xe_svm_range_from_addr(struct xe_svm *svm,
 						unsigned long addr);
 int xe_svm_build_sg(struct hmm_range *range, struct sg_table *st);
+int xe_svm_devm_add(struct xe_tile *tile, struct xe_mem_region *mem);
+void xe_svm_devm_remove(struct xe_device *xe, struct xe_mem_region *mem);
 #endif
diff --git a/drivers/gpu/drm/xe/xe_svm_devmem.c b/drivers/gpu/drm/xe/xe_svm_devmem.c
new file mode 100644
index 000000000000..cf7882830247
--- /dev/null
+++ b/drivers/gpu/drm/xe/xe_svm_devmem.c
@@ -0,0 +1,87 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2023 Intel Corporation
+ */
+
+#include <linux/mm_types.h>
+#include <linux/sched/mm.h>
+
+#include "xe_device_types.h"
+#include "xe_trace.h"
+
+
+static vm_fault_t xe_devm_migrate_to_ram(struct vm_fault *vmf)
+{
+	return 0;
+}
+
+static void xe_devm_page_free(struct page *page)
+{
+}
+
+static const struct dev_pagemap_ops xe_devm_pagemap_ops = {
+	.page_free = xe_devm_page_free,
+	.migrate_to_ram = xe_devm_migrate_to_ram,
+};
+
+/**
+ * xe_svm_devm_add: Remap and provide memmap backing for device memory
+ * @tile: tile that the memory region belongs to
+ * @mr: memory region to remap
+ *
+ * This remaps device memory to the host physical address space and creates
+ * struct pages to back the device memory.
+ *
+ * Return: 0 on success, standard error code otherwise
+ */
+int xe_svm_devm_add(struct xe_tile *tile, struct xe_mem_region *mr)
+{
+	struct device *dev = &to_pci_dev(tile->xe->drm.dev)->dev;
+	struct resource *res;
+	void *addr;
+	int ret;
+
+	res = devm_request_free_mem_region(dev, &iomem_resource,
+					   mr->usable_size);
+	if (IS_ERR(res)) {
+		ret = PTR_ERR(res);
+		return ret;
+	}
+
+	mr->pagemap.type = MEMORY_DEVICE_PRIVATE;
+	mr->pagemap.range.start = res->start;
+	mr->pagemap.range.end = res->end;
+	mr->pagemap.nr_range = 1;
+	mr->pagemap.ops = &xe_devm_pagemap_ops;
+	mr->pagemap.owner = tile->xe->drm.dev;
+	addr = devm_memremap_pages(dev, &mr->pagemap);
+	if (IS_ERR(addr)) {
+		devm_release_mem_region(dev, res->start, resource_size(res));
+		ret = PTR_ERR(addr);
+		drm_err(&tile->xe->drm, "Failed to remap tile %d memory, errno %d\n",
+			tile->id, ret);
+		return ret;
+	}
+	mr->hpa_base = res->start;
+
+	drm_info(&tile->xe->drm, "Added tile %d memory [%llx-%llx] to devm, remapped to %pr\n",
+		 tile->id, mr->io_start, mr->io_start + mr->usable_size, res);
+	return 0;
+}
+
+/**
+ * xe_svm_devm_remove: Unmap device memory and free resources
+ * @xe: xe device
+ * @mr: memory region to remove
+ */
+void xe_svm_devm_remove(struct xe_device *xe, struct xe_mem_region *mr)
+{
+	struct device *dev = &to_pci_dev(xe->drm.dev)->dev;
+
+	if (mr->hpa_base) {
+		devm_memunmap_pages(dev, &mr->pagemap);
+		devm_release_mem_region(dev, mr->pagemap.range.start,
+					mr->pagemap.range.end - mr->pagemap.range.start + 1);
+	}
+}
+
--
2.26.3