* [RFC PATCH] Documentation/gpu: Add a VM_BIND async draft document.
@ 2023-05-30  8:42 Thomas Hellström
  2023-05-30  9:42 ` [Intel-xe] " Nirmoy Das
  2023-05-30 14:06 ` Zeng, Oak
  0 siblings, 2 replies; 4+ messages in thread
From: Thomas Hellström @ 2023-05-30  8:42 UTC
  To: intel-xe
  Cc: Thomas Hellström, Matthew Brost, Danilo Krummrich,
	Joonas Lahtinen, dri-devel, linux-kernel

Add a motivation for and description of asynchronous VM_BIND operation

Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 Documentation/gpu/drm-vm-bind-async.rst | 138 ++++++++++++++++++++++++
 1 file changed, 138 insertions(+)
 create mode 100644 Documentation/gpu/drm-vm-bind-async.rst

diff --git a/Documentation/gpu/drm-vm-bind-async.rst b/Documentation/gpu/drm-vm-bind-async.rst
new file mode 100644
index 000000000000..7f7f8f7ddfea
--- /dev/null
+++ b/Documentation/gpu/drm-vm-bind-async.rst
@@ -0,0 +1,138 @@
+====================
+Asynchronous VM_BIND
+====================
+
+Nomenclature:
+=============
+
+* VRAM: On-device memory. Sometimes referred to as device local memory.
+
+* vm: A GPU address space. Typically per process, but can be shared by
+  multiple processes.
+
+* VM_BIND: An operation, or a list of operations, to modify a vm using
+  an IOCTL. The operations include mapping and unmapping system memory
+  or VRAM.
+
+* syncobj: A container that abstracts synchronization objects. The
+  synchronization objects can either be generic, like dma-fences, or
+  driver-specific. A syncobj typically indicates the type of the
+  underlying synchronization object.
+
+* in-syncobj: Argument to a VM_BIND IOCTL; the VM_BIND operation waits
+  for these to signal before starting.
+
+* out-syncobj: Argument to a VM_BIND IOCTL; the VM_BIND operation
+  signals these when the bind operation is complete.
+
+* memory fence: A synchronization object, distinct from a dma-fence,
+  that uses the value of a specified memory location to determine
+  signaled status. A memory fence can be awaited and signaled by both
+  the GPU and CPU, as shown in the sketch following this list. Memory
+  fences are sometimes referred to as user-fences.
+
+* long-running workload: A workload that may take more than the
+  current stipulated dma-fence maximum signal delay to complete and
+  which therefore needs to set the VM or the GPU execution context in
+  a certain mode that disallows completion dma-fences.
+
+* UMD: User-mode driver.
+
+* KMD: Kernel-mode driver.
+
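+As an illustration, here is a minimal sketch of how a memory fence
+could be awaited from the CPU side, assuming a 64-bit seqno written to
+a shared memory location. The helper below is purely illustrative and
+not part of any uAPI::
+
+  #include <stdatomic.h>
+  #include <stdint.h>
+
+  /* Hypothetical memory fence: signaled once *addr >= wait_value. */
+  static void memory_fence_cpu_wait(_Atomic uint64_t *addr,
+                                    uint64_t wait_value)
+  {
+          /* Busy-wait for brevity; a real UMD would back off or use a
+           * driver-provided wait IOCTL with a timeout. */
+          while (atomic_load_explicit(addr, memory_order_acquire) <
+                 wait_value)
+                  ;
+  }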
+
+Synchronous / Asynchronous VM_BIND operation
+============================================
+
+Synchronous VM_BIND
+___________________
+With synchronous VM_BIND, all VM_BIND operations complete before the
+IOCTL returns. A synchronous VM_BIND takes neither in-fences nor
+out-fences. Synchronous VM_BIND may block and wait for GPU operations;
+for example, swap-in or clearing, or even previous binds.
+
+Asynchronous VM_BIND
+____________________
+Asynchronous VM_BIND accepts both in-syncobjs and out-syncobjs. While the
+IOCTL may return immediately, the VM_BIND operations wait for the in-syncobjs
+before modifying the GPU page-tables, and signal the out-syncobjs when
+the modification is done, in the sense that the next execbuf that
+waits for the out-syncobjs will see the change. Errors are reported
+synchronously, assuming that the asynchronous part of the job never errors.
+In low-memory situations the implementation may block, performing the
+VM_BIND synchronously, because there might not be enough memory
+immediately available for preparing the asynchronous operation.
+
+If the VM_BIND IOCTL takes a list or an array of operations as an argument,
+the in-syncobjs need to signal before the first operation starts to
+execute, and the out-syncobjs signal after the last operation
+completes. Operations in the operation list can be assumed, where it
+matters, to complete in order.
+
+To aid in supporting user-space queues, the VM_BIND may take a bind
+context (also known as a bind engine) identifier as an argument. All
+VM_BIND operations using the same bind engine can then be assumed,
+where it matters, to complete in order. No such assumptions can be made
+between VM_BIND operations using separate bind contexts.
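+
+To make the above concrete, here is a sketch of what a multi-operation
+VM_BIND IOCTL argument could look like. It is purely illustrative; the
+field names and layout are assumptions, not a proposed uAPI::
+
+  #include <linux/types.h>
+
+  struct sample_vm_bind_op {
+          __u32 op;             /* MAP or UNMAP */
+          __u32 handle;         /* GEM handle of the backing object */
+          __u64 obj_offset;     /* Offset into the object */
+          __u64 addr;           /* GPU virtual address to modify */
+          __u64 range;          /* Size of the mapping */
+  };
+
+  struct sample_vm_bind {
+          __u32 vm_id;          /* The vm to modify */
+          __u32 bind_engine_id; /* Bind context; operations using the
+                                 * same id complete in order. */
+          __u64 ops;            /* Pointer to an array of bind ops */
+          __u32 num_ops;
+          __u32 num_in_syncs;
+          __u32 num_out_syncs;
+          __u32 pad;
+          __u64 in_syncs;       /* Signal before the first op starts */
+          __u64 out_syncs;      /* Signaled after the last op completes */
+  };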
+
+The purpose of an asynchronous VM_BIND operation is for user-mode
+drivers to be able to pipeline interleaved vm modifications and
+execbufs. For long-running workloads, such pipelining of a bind
+operation is not allowed, and any in-fences need to be awaited
+synchronously.
+
+Also, for VM_BINDs for long-running VMs, the user-mode driver should
+typically select memory fences as out-fences, since that gives greater
+flexibility for the kernel-mode driver to inject other operations into
+the bind / unbind operations, like, for example, inserting breakpoints
+into batch buffers. The workload execution can then easily be pipelined
+behind the bind completion, using the memory out-fence as the signal
+condition for a GPU semaphore embedded by UMD in the workload.
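+
+A sketch of that UMD-side pipelining follows; every function below is
+a hypothetical placeholder rather than an existing API::
+
+  #include <stdint.h>
+
+  /* Hypothetical UMD helpers, assumed for illustration only. */
+  extern void vm_bind_async_memfence(uint64_t fence_gpu_addr,
+                                     uint64_t fence_value);
+  extern void emit_gpu_semaphore_wait(void *cs, uint64_t gpu_addr,
+                                      uint64_t value);
+  extern void submit(void *cs);
+
+  static void bind_then_run_pipelined(void *cs, uint64_t fence_gpu_addr,
+                                      uint64_t fence_value)
+  {
+          /* Bind completion signals by writing fence_value to the
+           * fence location; no CPU-side wait is needed. */
+          vm_bind_async_memfence(fence_gpu_addr, fence_value);
+
+          /* The GPU blocks on this semaphore until the memory fence
+           * signals, pipelining workload execution behind the bind. */
+          emit_gpu_semaphore_wait(cs, fence_gpu_addr, fence_value);
+          submit(cs);
+  }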
+
+Multi-operation VM_BIND IOCTL error handling and interrupts
+===========================================================
+
+The VM_BIND operations of the IOCTL may error due to lack of resources
+to complete and also due to interrupted waits. In both situations UMD
+should preferably restart the IOCTL after taking suitable action. If
+UMD has overcommitted a memory resource, an -ENOSPC error will be
+returned, and UMD may then unbind resources that are not used at the
+moment and restart the IOCTL. On -EINTR, UMD should simply restart the
+IOCTL, and on -ENOMEM user-space may either attempt to free known
+system memory resources or abort the operation. If aborting as a
+result of a failed operation in a list of operations, some operations
+may still have completed, and to get back to a known state, user-space
+should therefore attempt to unbind all virtual memory regions touched
+by the failing IOCTL.
+
+Unbind operations are guaranteed not to cause any errors due to
+resource constraints.
+
+In between a failed VM_BIND IOCTL and a successful restart there may
+be implementation-defined restrictions on the use of the VM. For a
+description of why, please see the KMD implementation details under
+[error state saving]_.
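+
+A minimal sketch of the restart loop UMD might implement around the
+IOCTL, building on the struct sketch above; the request number and the
+helpers are hypothetical placeholders::
+
+  #include <errno.h>
+  #include <sys/ioctl.h>
+
+  #define SAMPLE_IOCTL_VM_BIND 0  /* Placeholder request number. */
+
+  /* Hypothetical helpers, assumed for illustration; each returns 0
+   * on success. */
+  extern int unbind_unused_resources(void);
+  extern int free_system_memory(void);
+  extern void unbind_all_touched_regions(struct sample_vm_bind *bind);
+
+  static int sample_vm_bind_restartable(int fd,
+                                        struct sample_vm_bind *bind)
+  {
+          while (ioctl(fd, SAMPLE_IOCTL_VM_BIND, bind) == -1) {
+                  if (errno == EINTR)
+                          continue;            /* Interrupted wait. */
+                  if (errno == ENOSPC && !unbind_unused_resources())
+                          continue;            /* Freed resources; retry. */
+                  if (errno == ENOMEM && !free_system_memory())
+                          continue;            /* Freed memory; retry. */
+                  /* Aborting: some operations may have completed, so
+                   * get back to a known state by unbinding all virtual
+                   * memory regions touched by the failing IOCTL. */
+                  unbind_all_touched_regions(bind);
+                  return -errno;
+          }
+          return 0;
+  }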
+
+
+KMD implementation details
+==========================
+
+.. [error state saving] Open: When the VM_BIND IOCTL returns an error,
+			some operations, or even parts of an operation,
+			may have been completed. If the IOCTL is
+			restarted, the KMD needs to know where to
+			restart. It can either put the VM in an error
+			state and save one instance of the needed
+			restart state internally; in that case, KMD
+			needs to block further modifications of the VM
+			state that may cause additional failures
+			requiring a restart state save, until the error
+			has been fully resolved. Alternatively, if the
+			uAPI defines a pointer to a UMD-allocated
+			cookie in the IOCTL struct, KMD could store the
+			restart state in that cookie.
+
+			The restart state may, for example, be the
+			number of successfully completed operations.
+
+			Easiest for UMD would of course be if KMD did
+			a full unwind on error so that no error state
+			needs to be saved.
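+
+			A sketch of what such a cookie could hold; the
+			layout is illustrative only::
+
+			  struct sample_vm_bind_cookie {
+			          /* Number of operations that completed
+			           * before the error; the point at
+			           * which a restart should resume. */
+			          __u32 num_completed_ops;
+			          __u32 pad;
+			  };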
-- 
2.39.2



* Re: [Intel-xe] [RFC PATCH] Documentation/gpu: Add a VM_BIND async draft document.
  2023-05-30  8:42 [RFC PATCH] Documentation/gpu: Add a VM_BIND async draft document Thomas Hellström
@ 2023-05-30  9:42 ` Nirmoy Das
  2023-05-30 14:06 ` Zeng, Oak
  1 sibling, 0 replies; 4+ messages in thread
From: Nirmoy Das @ 2023-05-30  9:42 UTC
  To: Thomas Hellström, intel-xe
  Cc: Joonas Lahtinen, linux-kernel, dri-devel, Danilo Krummrich

Hi Thomas,


On 5/30/2023 10:42 AM, Thomas Hellström wrote:
> Add a motivation for and description of asynchronous VM_BIND operation
>
> Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> ---
>
> [snip]
>
> +The VM_BIND operations of the ioctl may error due to lack of resources
> +to complete and also due to interrupted waits. In both situations UMD
> +should preferrably
s/preferrably/preferably
>   restart the IOCTL after taking suitable action. If
> +UMD has overcommited

s/overcommited/overcommitted


Thanks for documenting this complex topic.

Acked-by: Nirmoy Das <nirmoy.das@intel.com>


Regards,

Nirmoy


* RE: [RFC PATCH] Documentation/gpu: Add a VM_BIND async draft document.
  2023-05-30  8:42 [RFC PATCH] Documentation/gpu: Add a VM_BIND async draft document Thomas Hellström
  2023-05-30  9:42 ` [Intel-xe] " Nirmoy Das
@ 2023-05-30 14:06 ` Zeng, Oak
  2023-05-30 14:27   ` Thomas Hellström
  1 sibling, 1 reply; 4+ messages in thread
From: Zeng, Oak @ 2023-05-30 14:06 UTC
  To: Thomas Hellström, intel-xe@lists.freedesktop.org
  Cc: Brost, Matthew, linux-kernel@vger.kernel.org,
	dri-devel@lists.freedesktop.org, Danilo Krummrich

Hi Thomas,

Thanks for the document. See one question inline.

Thanks,
Oak

> -----Original Message-----
> From: dri-devel <dri-devel-bounces@lists.freedesktop.org> On Behalf Of
> Thomas Hellström
> Sent: May 30, 2023 4:43 AM
> To: intel-xe@lists.freedesktop.org
> Cc: Brost, Matthew <matthew.brost@intel.com>; Thomas Hellström
> <thomas.hellstrom@linux.intel.com>; linux-kernel@vger.kernel.org; dri-
> devel@lists.freedesktop.org; Danilo Krummrich <dakr@redhat.com>
> Subject: [RFC PATCH] Documentation/gpu: Add a VM_BIND async draft
> document.
> 
> Add a motivation for and description of asynchronous VM_BIND operation
> 
> Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> ---
>
> [snip]
>
> +* memory fence: A synchronization object, different from a dma-fence
> +  that uses the value of a specified memory location to determine
> +  signaled status. 

Are you saying that a memory fence (user fence) uses a specific memory location to determine signaled status, while a dma-fence doesn't?

My understanding is that both user fences and dma-fences use a memory location to determine status... in the dma-fence case, it is the seqno field of struct dma_fence. The difference between the two is that, for dma-fence, people agreed it has to be signaled within a certain amount of time, while a user fence has no such contract.

-Oak


* Re: [RFC PATCH] Documentation/gpu: Add a VM_BIND async draft document.
  2023-05-30 14:06 ` Zeng, Oak
@ 2023-05-30 14:27   ` Thomas Hellström
  0 siblings, 0 replies; 4+ messages in thread
From: Thomas Hellström @ 2023-05-30 14:27 UTC
  To: Zeng, Oak, intel-xe@lists.freedesktop.org
  Cc: Brost, Matthew, linux-kernel@vger.kernel.org,
	dri-devel@lists.freedesktop.org, Danilo Krummrich

Hi, Oak,

On 5/30/23 16:06, Zeng, Oak wrote:
> Hi Thomas,
>
> Thanks for the document. See one question inline.
>
> Thanks,
> Oak
>
>> -----Original Message-----
>> From: dri-devel <dri-devel-bounces@lists.freedesktop.org> On Behalf Of
>> Thomas Hellström
>> Sent: May 30, 2023 4:43 AM
>> To: intel-xe@lists.freedesktop.org
>> Cc: Brost, Matthew <matthew.brost@intel.com>; Thomas Hellström
>> <thomas.hellstrom@linux.intel.com>; linux-kernel@vger.kernel.org; dri-
>> devel@lists.freedesktop.org; Danilo Krummrich <dakr@redhat.com>
>> Subject: [RFC PATCH] Documentation/gpu: Add a VM_BIND async draft
>> document.
>>
>> [snip]
>>
>> +* memory fence: A synchronization object, different from a dma-fence
>> +  that uses the value of a specified memory location to determine
>> +  signaled status.
> Are you saying that a memory fence (user fence) uses a specific memory location to determine signaled status, while a dma-fence doesn't?
>
> My understanding is that both user fences and dma-fences use a memory location to determine status... in the dma-fence case, it is the seqno field of struct dma_fence. The difference between the two is that, for dma-fence, people agreed it has to be signaled within a certain amount of time, while a user fence has no such contract.

Yes, the section there wasn't intending to say anything about dma-fences 
other than that memory fences are different from dma-fences. I'll 
rephrase that to be a bit clearer.

Thanks,

Thomas



