From: Oak Zeng <oak.zeng@intel.com>
To: dri-devel@lists.freedesktop.org, intel-xe@lists.freedesktop.org
Cc: matthew.brost@intel.com, Thomas.Hellstrom@linux.intel.com, brian.welty@intel.com, himal.prasad.ghimiray@intel.com, krishnaiah.bommu@intel.com, niranjana.vishwanathapura@intel.com
Subject: [PATCH 11/23] drm/xe/svm: implement functions to allocate and free device memory
Date: Wed, 17 Jan 2024 17:12:11 -0500
Message-ID: <20240117221223.18540-12-oak.zeng@intel.com> (raw)
In-Reply-To: <20240117221223.18540-1-oak.zeng@intel.com>

Function xe_devm_alloc_pages allocates pages from the drm buddy allocator and performs housekeeping for all the pages allocated, such as taking a page refcount, keeping a bitmap of all pages to denote whether a page is in use, and putting pages on a drm lru list for eviction purposes.

Function xe_devm_free_blocks returns all memory blocks to the drm buddy allocator.

Function xe_devm_page_free is a callback function from the hmm layer. It is called whenever a page's refcount drops to 1. This function clears the bit of this page in the bitmap. If all the bits in the bitmap are cleared, it means all the pages have been freed, and we return all the pages in this memory block back to drm buddy.
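[Editor's note: the bookkeeping scheme the commit message describes — fill a per-block bitmap at allocation time, clear one bit per freed page, and release the block once the bitmap is empty — can be modeled outside the kernel. The sketch below is not part of the patch; `block_alloc` and `block_free_page` are hypothetical names, and the open-coded bit operations stand in for the kernel's bitmap_fill()/clear_bit()/bitmap_empty() helpers.]

```c
#include <assert.h>
#include <limits.h>
#include <stdlib.h>

#define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)

struct block_meta {
	unsigned long nbits;
	unsigned long bitmap[];	/* 1 = page in use, 0 = page idle */
};

struct block_meta *block_alloc(unsigned long npages)
{
	size_t nwords = (npages + BITS_PER_LONG - 1) / BITS_PER_LONG;
	struct block_meta *m = calloc(1, sizeof(*m) + nwords * sizeof(unsigned long));
	unsigned long i;

	if (!m)
		return NULL;
	m->nbits = npages;
	/* bitmap_fill() equivalent: every page starts out "in use" */
	for (i = 0; i < npages; i++)
		m->bitmap[i / BITS_PER_LONG] |= 1UL << (i % BITS_PER_LONG);
	return m;
}

/* Returns 1 when the last page was freed, i.e. the whole block can be
 * returned to the buddy allocator; 0 while any page is still in use. */
int block_free_page(struct block_meta *m, unsigned long i)
{
	size_t nwords = (m->nbits + BITS_PER_LONG - 1) / BITS_PER_LONG;
	size_t w;

	/* clear_bit() equivalent for the freed page */
	m->bitmap[i / BITS_PER_LONG] &= ~(1UL << (i % BITS_PER_LONG));

	/* bitmap_empty() equivalent */
	for (w = 0; w < nwords; w++)
		if (m->bitmap[w])
			return 0;
	return 1;
}
```

Freeing pages in any order leaves the block alive until the final bit is cleared, which is exactly the lifetime rule xe_devm_page_free implements below.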
Signed-off-by: Oak Zeng <oak.zeng@intel.com>
Co-developed-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
Signed-off-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@intel.com>
Cc: Brian Welty <brian.welty@intel.com>
---
 drivers/gpu/drm/xe/xe_svm.h        |   9 ++
 drivers/gpu/drm/xe/xe_svm_devmem.c | 146 ++++++++++++++++++++++++++++-
 2 files changed, 154 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/xe/xe_svm.h b/drivers/gpu/drm/xe/xe_svm.h
index b54f7714a1fc..8551df2b9780 100644
--- a/drivers/gpu/drm/xe/xe_svm.h
+++ b/drivers/gpu/drm/xe/xe_svm.h
@@ -74,4 +74,13 @@ struct xe_svm_range *xe_svm_range_from_addr(struct xe_svm *svm,
 int xe_svm_build_sg(struct hmm_range *range, struct sg_table *st);
 int xe_svm_devm_add(struct xe_tile *tile, struct xe_mem_region *mem);
 void xe_svm_devm_remove(struct xe_device *xe, struct xe_mem_region *mem);
+
+
+int xe_devm_alloc_pages(struct xe_tile *tile,
+			unsigned long npages,
+			struct list_head *blocks,
+			unsigned long *pfn);
+
+void xe_devm_free_blocks(struct list_head *blocks);
+void xe_devm_page_free(struct page *page);
 #endif
diff --git a/drivers/gpu/drm/xe/xe_svm_devmem.c b/drivers/gpu/drm/xe/xe_svm_devmem.c
index cf7882830247..445e0e1bc3b4 100644
--- a/drivers/gpu/drm/xe/xe_svm_devmem.c
+++ b/drivers/gpu/drm/xe/xe_svm_devmem.c
@@ -5,18 +5,162 @@

 #include <linux/mm_types.h>
 #include <linux/sched/mm.h>
+#include <linux/gfp.h>
+#include <linux/migrate.h>
+#include <linux/dma-mapping.h>
+#include <linux/dma-fence.h>
+#include <linux/bitops.h>
+#include <linux/bitmap.h>
+#include <drm/drm_buddy.h>

 #include "xe_device_types.h"
 #include "xe_trace.h"
+#include "xe_migrate.h"
+#include "xe_ttm_vram_mgr_types.h"
+#include "xe_assert.h"

+/**
+ * struct xe_svm_block_meta - svm uses this data structure to manage each
+ * block allocated from drm buddy. This will be set to the drm_buddy_block's
+ * private field.
+ *
+ * @lru: used to link this block to drm's lru lists. This will be replaced
+ * with struct drm_lru_entity later.
+ * @tile: tile from which we allocated this block
+ * @bitmap: a bitmap of each page in this block. 1 means this page is used,
+ * 0 means this page is idle. When all bits of this block are 0, it is time
+ * to return this block to the drm buddy subsystem.
+ */
+struct xe_svm_block_meta {
+	struct list_head lru;
+	struct xe_tile *tile;
+	unsigned long bitmap[];
+};
+
+static u64 block_offset_to_pfn(struct xe_mem_region *mr, u64 offset)
+{
+	/* DRM buddy's block offset is 0-based */
+	offset += mr->hpa_base;
+
+	return PHYS_PFN(offset);
+}
+
+/**
+ * xe_devm_alloc_pages() - allocate device pages from buddy allocator
+ *
+ * @tile: which tile to allocate device memory from
+ * @npages: how many pages to allocate
+ * @blocks: used to return the allocated blocks
+ * @pfn: used to return the pfn of all allocated pages. Must be big enough
+ * to hold at least @npages entries.
+ *
+ * This function allocates blocks of memory from the drm buddy allocator
+ * and performs initialization work: set struct page::zone_device_data to
+ * point to the memory block; set/initialize drm_buddy_block::private
+ * field; lock_page for each page allocated; add memory block to the lru
+ * manager's lru list - this is TBD.
+ *
+ * Return: 0 on success
+ * error code otherwise
+ */
+int xe_devm_alloc_pages(struct xe_tile *tile,
+			unsigned long npages,
+			struct list_head *blocks,
+			unsigned long *pfn)
+{
+	struct drm_buddy *mm = &tile->mem.vram_mgr->mm;
+	struct drm_buddy_block *block, *tmp;
+	u64 size = npages << PAGE_SHIFT;
+	int ret = 0, i, j = 0;
+
+	ret = drm_buddy_alloc_blocks(mm, 0, mm->size, size, PAGE_SIZE,
+				     blocks, DRM_BUDDY_TOPDOWN_ALLOCATION);
+
+	if (unlikely(ret))
+		return ret;
+
+	list_for_each_entry_safe(block, tmp, blocks, link) {
+		struct xe_mem_region *mr = &tile->mem.vram;
+		u64 block_pfn_first, pages_per_block;
+		struct xe_svm_block_meta *meta;
+		u32 meta_size;
+
+		size = drm_buddy_block_size(mm, block);
+		pages_per_block = size >> PAGE_SHIFT;
+		meta_size = BITS_TO_BYTES(pages_per_block) +
+			    sizeof(struct xe_svm_block_meta);
+		meta = kzalloc(meta_size, GFP_KERNEL);
+		bitmap_fill(meta->bitmap, pages_per_block);
+		meta->tile = tile;
+		block->private = meta;
+		block_pfn_first =
+			block_offset_to_pfn(mr, drm_buddy_block_offset(block));
+		for (i = 0; i < pages_per_block; i++) {
+			struct page *page;
+
+			pfn[j++] = block_pfn_first + i;
+			page = pfn_to_page(block_pfn_first + i);
+			/* Lock page per hmm requirement, see hmm.rst. */
+			zone_device_page_init(page);
+			page->zone_device_data = block;
+		}
+	}
+
+	return ret;
+}
+
+/*
+ * FIXME: we locked the page by calling zone_device_page_init in
+ * xe_devm_alloc_pages. Should we unlock pages here?
+ */
+static void free_block(struct drm_buddy_block *block)
+{
+	struct xe_svm_block_meta *meta =
+		(struct xe_svm_block_meta *)block->private;
+	struct xe_tile *tile = meta->tile;
+	struct drm_buddy *mm = &tile->mem.vram_mgr->mm;
+
+	kfree(block->private);
+	drm_buddy_free_block(mm, block);
+}
+
+/**
+ * xe_devm_free_blocks() - free all memory blocks
+ *
+ * @blocks: memory blocks list head
+ */
+void xe_devm_free_blocks(struct list_head *blocks)
+{
+	struct drm_buddy_block *block, *tmp;
+
+	list_for_each_entry_safe(block, tmp, blocks, link)
+		free_block(block);
+}

 static vm_fault_t xe_devm_migrate_to_ram(struct vm_fault *vmf)
 {
 	return 0;
 }

-static void xe_devm_page_free(struct page *page)
+void xe_devm_page_free(struct page *page)
 {
+	struct drm_buddy_block *block =
+		(struct drm_buddy_block *)page->zone_device_data;
+	struct xe_svm_block_meta *meta =
+		(struct xe_svm_block_meta *)block->private;
+	struct xe_tile *tile = meta->tile;
+	struct xe_mem_region *mr = &tile->mem.vram;
+	struct drm_buddy *mm = &tile->mem.vram_mgr->mm;
+	u64 size = drm_buddy_block_size(mm, block);
+	u64 pages_per_block = size >> PAGE_SHIFT;
+	u64 block_pfn_first =
+		block_offset_to_pfn(mr, drm_buddy_block_offset(block));
+	u64 page_pfn = page_to_pfn(page);
+	u64 i = page_pfn - block_pfn_first;
+
+	xe_assert(tile->xe, i < pages_per_block);
+	clear_bit(i, meta->bitmap);
+	if (bitmap_empty(meta->bitmap, pages_per_block))
+		free_block(block);
 }

 static const struct dev_pagemap_ops xe_devm_pagemap_ops = {
-- 
2.26.3
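[Editor's note: one detail worth calling out from the patch is the pfn arithmetic. drm buddy block offsets are 0-based within the VRAM region, so block_offset_to_pfn adds mr->hpa_base before converting to a page frame number, and pages within one block then occupy consecutive pfns. A standalone sketch of that arithmetic follows; it is not the kernel code — PAGE_SHIFT = 12 is an assumption (the kernel derives it from the architecture), and the helpers take plain integers instead of struct xe_mem_region.]

```c
#include <stdint.h>

#define PAGE_SHIFT 12	/* assumption: 4 KiB pages */

/* Mirrors the patch's block_offset_to_pfn(): buddy offsets are 0-based
 * within the VRAM region, so the region's host physical base address must
 * be added before shifting down to a page frame number. */
uint64_t block_offset_to_pfn(uint64_t hpa_base, uint64_t block_offset)
{
	return (hpa_base + block_offset) >> PAGE_SHIFT;
}

/* Pages inside one buddy block occupy consecutive pfns, which is why the
 * allocation loop can simply write pfn[j++] = block_pfn_first + i, and
 * why xe_devm_page_free can recover a page's bitmap index by subtracting
 * the block's first pfn from the page's pfn. */
uint64_t page_pfn(uint64_t block_pfn_first, uint64_t page_index)
{
	return block_pfn_first + page_index;
}
```

For example, with a VRAM region based at host physical address 0x100000000, the page at block offset 0x3000 gets pfn 0x100003, three pages past the block's first pfn.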