All the mail mirrored from lore.kernel.org
 help / color / mirror / Atom feed
* [PATCH 0/2] Fixes to drm_sched_priority
@ 2020-08-14 19:13 Luben Tuikov
  2020-08-14 19:14 ` [PATCH 1/2] drm/scheduler: Scheduler priority fixes Luben Tuikov
                   ` (2 more replies)
  0 siblings, 3 replies; 12+ messages in thread
From: Luben Tuikov @ 2020-08-14 19:13 UTC (permalink / raw)
  To: amd-gfx; +Cc: Luben Tuikov

Some fixes to enum drm_sched_priority which I'd wanted to do
since last year.

For instance, renaming MAX to COUNT: a maximum is usually a
value which is part of the set of values (e.g. the maximum of
a function), and thus assignable, whereas a count is the size
of a set (the enumeration in this case). The new name also
makes it clearer when the value is used to define the size of
arrays.
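The difference shows up wherever the enumeration sizes or
iterates an array. A minimal standalone sketch (stand-in names
and only three levels, not the actual drm_sched_priority enum):

```c
#include <assert.h>

enum prio {
	PRIO_MIN,	/* 0: lowest priority, also the first array index */
	PRIO_NORMAL,	/* 1 */
	PRIO_HIGH,	/* 2: highest priority, i.e. the "maximum" value  */

	PRIO_COUNT	/* 3: number of levels; NOT itself a priority     */
};

/* COUNT sizes the array naturally; a last-valid-value "MAX"
 * sentinel would need an off-by-one-prone "+ 1" here.
 */
static int num_jobs[PRIO_COUNT];

static int total_jobs(void)
{
	int i, sum = 0;

	for (i = PRIO_MIN; i < PRIO_COUNT; i++)
		sum += num_jobs[i];
	return sum;
}
```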

It also removes some redundant naming. Perhaps better names
could be had for PRIORITY_SW and PRIORITY_HW, maybe "moderate"
and "temperate" respectively; however, I've left them as "sw"
and "hw".

Also remove a macro which was used in only a single place.

In the second patch, remove priority INVALID, since it is not a
priority: a scheduler cannot be assigned to and run at priority
"invalid". It is more of a directive, a status of user input,
and as such has no place in the enumeration of priority
levels.

Something else I'd like to do is to eliminate the priority
enumeration "UNSET", since it is not really a priority level,
but a directive, instructing the code to "reset the priority
of a context to the initial priority".

At the moment this functionality is overloaded onto a priority
value, whereas it should really be an IOCTL op rather than a
priority value.
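As a rough illustration only: the op names below are
hypothetical and do not exist in the kernel; they sketch what
separating the "reset" directive from the priority levels
might look like:

```c
#include <assert.h>

/* Hypothetical ops: "reset" becomes an operation, not a
 * magic priority value.
 */
enum ctx_op {
	CTX_OP_SET_PRIORITY,	/* payload: a real priority level     */
	CTX_OP_RESET_PRIORITY	/* no payload: restore initial prio   */
};

enum prio { PRIO_MIN, PRIO_NORMAL, PRIO_HIGH, PRIO_COUNT };

struct ctx {
	enum prio init_prio;	/* priority the context started with */
	enum prio prio;		/* current priority                  */
};

static int ctx_ioctl(struct ctx *c, enum ctx_op op, int p)
{
	switch (op) {
	case CTX_OP_SET_PRIORITY:
		if (p < 0 || p >= PRIO_COUNT)
			return -22;	/* stand-in for -EINVAL */
		c->prio = (enum prio)p;
		return 0;
	case CTX_OP_RESET_PRIORITY:
		c->prio = c->init_prio;	/* no "UNSET" value needed */
		return 0;
	}
	return -22;
}
```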

Tested on my desktop system, which is running a kernel with
this patch applied.

Luben Tuikov (2):
  drm/scheduler: Scheduler priority fixes
  drm/scheduler: Remove priority macro INVALID

 drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c   | 27 +++++-----
 drivers/gpu/drm/amd/amdgpu/amdgpu_job.c   |  2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c  |  2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h  |  2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c | 62 +++++++++++++++--------
 drivers/gpu/drm/amd/amdgpu/amdgpu_sched.h |  3 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c   |  2 +-
 drivers/gpu/drm/scheduler/sched_main.c    |  8 +--
 include/drm/gpu_scheduler.h               | 16 +++---
 9 files changed, 73 insertions(+), 51 deletions(-)

-- 
2.28.0.215.g878e727637

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCH 1/2] drm/scheduler: Scheduler priority fixes
  2020-08-14 19:13 [PATCH 0/2] Fixes to drm_sched_priority Luben Tuikov
@ 2020-08-14 19:14 ` Luben Tuikov
  2020-08-14 19:33   ` Alex Deucher
  2020-08-14 19:14 ` [PATCH 2/2] drm/scheduler: Remove priority macro INVALID Luben Tuikov
  2020-08-14 19:24   ` Alex Deucher
  2 siblings, 1 reply; 12+ messages in thread
From: Luben Tuikov @ 2020-08-14 19:14 UTC (permalink / raw)
  To: amd-gfx; +Cc: Luben Tuikov

Remove DRM_SCHED_PRIORITY_LOW, as it was used
in only one place.

Rename DRM_SCHED_PRIORITY_MAX to DRM_SCHED_PRIORITY_COUNT
and separate it from the priority levels by a blank line,
as it represents the (total) count of said priorities and
is used as such in loops throughout the code. (With 0-based
indexing, one past the last valid index is the count.)

Remove the redundant word HIGH from the priority names,
and rename *KERNEL* to *HIGH*, as that is what it really
means: high.
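The count-based upper bound also keeps the descending scans
over the run queues tidy; a simplified sketch of the loop
shape used in the diffs below (stand-in names, not the kernel
code itself):

```c
#include <assert.h>

enum prio { PRIO_MIN, PRIO_NORMAL, PRIO_HIGH, PRIO_COUNT };

/* Pick the first non-empty run queue, highest priority first:
 * the same loop shape as drm_sched_select_entity() after this
 * patch, with queue lengths standing in for run queues.
 */
static int select_rq(const int *rq_len)
{
	int i;

	for (i = PRIO_COUNT - 1; i >= PRIO_MIN; i--)
		if (rq_len[i] > 0)
			return i;	/* index of chosen run queue */
	return -1;			/* nothing queued */
}
```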

Signed-off-by: Luben Tuikov <luben.tuikov@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c   |  6 +++---
 drivers/gpu/drm/amd/amdgpu/amdgpu_job.c   |  2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c  |  2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h  |  2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c |  6 +++---
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c   |  2 +-
 drivers/gpu/drm/scheduler/sched_main.c    |  8 ++++----
 include/drm/gpu_scheduler.h               | 15 +++++++++------
 8 files changed, 23 insertions(+), 20 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
index d85d13f7a043..fd97ac34277b 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
@@ -46,7 +46,7 @@ const unsigned int amdgpu_ctx_num_entities[AMDGPU_HW_IP_NUM] = {
 static int amdgpu_ctx_priority_permit(struct drm_file *filp,
 				      enum drm_sched_priority priority)
 {
-	if (priority < 0 || priority >= DRM_SCHED_PRIORITY_MAX)
+	if (priority < 0 || priority >= DRM_SCHED_PRIORITY_COUNT)
 		return -EINVAL;
 
 	/* NORMAL and below are accessible by everyone */
@@ -65,8 +65,8 @@ static int amdgpu_ctx_priority_permit(struct drm_file *filp,
 static enum gfx_pipe_priority amdgpu_ctx_sched_prio_to_compute_prio(enum drm_sched_priority prio)
 {
 	switch (prio) {
-	case DRM_SCHED_PRIORITY_HIGH_HW:
-	case DRM_SCHED_PRIORITY_KERNEL:
+	case DRM_SCHED_PRIORITY_HW:
+	case DRM_SCHED_PRIORITY_HIGH:
 		return AMDGPU_GFX_PIPE_PRIO_HIGH;
 	default:
 		return AMDGPU_GFX_PIPE_PRIO_NORMAL;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
index 75d37dfb51aa..bb9e5481ff3c 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
@@ -251,7 +251,7 @@ void amdgpu_job_stop_all_jobs_on_sched(struct drm_gpu_scheduler *sched)
 	int i;
 
 	/* Signal all jobs not yet scheduled */
-	for (i = DRM_SCHED_PRIORITY_MAX - 1; i >= DRM_SCHED_PRIORITY_MIN; i--) {
+	for (i = DRM_SCHED_PRIORITY_COUNT - 1; i >= DRM_SCHED_PRIORITY_MIN; i--) {
 		struct drm_sched_rq *rq = &sched->sched_rq[i];
 
 		if (!rq)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
index 13ea8ebc421c..6d4fc79bf84a 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
@@ -267,7 +267,7 @@ int amdgpu_ring_init(struct amdgpu_device *adev, struct amdgpu_ring *ring,
 			&ring->sched;
 	}
 
-	for (i = 0; i < DRM_SCHED_PRIORITY_MAX; ++i)
+	for (i = DRM_SCHED_PRIORITY_MIN; i < DRM_SCHED_PRIORITY_COUNT; ++i)
 		atomic_set(&ring->num_jobs[i], 0);
 
 	return 0;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
index da871d84b742..7112137689db 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
@@ -243,7 +243,7 @@ struct amdgpu_ring {
 	bool			has_compute_vm_bug;
 	bool			no_scheduler;
 
-	atomic_t		num_jobs[DRM_SCHED_PRIORITY_MAX];
+	atomic_t		num_jobs[DRM_SCHED_PRIORITY_COUNT];
 	struct mutex		priority_mutex;
 	/* protected by priority_mutex */
 	int			priority;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c
index c799691dfa84..e05bc22a0a49 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c
@@ -36,14 +36,14 @@ enum drm_sched_priority amdgpu_to_sched_priority(int amdgpu_priority)
 {
 	switch (amdgpu_priority) {
 	case AMDGPU_CTX_PRIORITY_VERY_HIGH:
-		return DRM_SCHED_PRIORITY_HIGH_HW;
+		return DRM_SCHED_PRIORITY_HW;
 	case AMDGPU_CTX_PRIORITY_HIGH:
-		return DRM_SCHED_PRIORITY_HIGH_SW;
+		return DRM_SCHED_PRIORITY_SW;
 	case AMDGPU_CTX_PRIORITY_NORMAL:
 		return DRM_SCHED_PRIORITY_NORMAL;
 	case AMDGPU_CTX_PRIORITY_LOW:
 	case AMDGPU_CTX_PRIORITY_VERY_LOW:
-		return DRM_SCHED_PRIORITY_LOW;
+		return DRM_SCHED_PRIORITY_MIN;
 	case AMDGPU_CTX_PRIORITY_UNSET:
 		return DRM_SCHED_PRIORITY_UNSET;
 	default:
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
index 20fa0497aaa4..bc4bdb90d8f7 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
@@ -2114,7 +2114,7 @@ void amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
 		ring = adev->mman.buffer_funcs_ring;
 		sched = &ring->sched;
 		r = drm_sched_entity_init(&adev->mman.entity,
-				          DRM_SCHED_PRIORITY_KERNEL, &sched,
+					  DRM_SCHED_PRIORITY_HIGH, &sched,
 					  1, NULL);
 		if (r) {
 			DRM_ERROR("Failed setting up TTM BO move entity (%d)\n",
diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index 2f319102ae9f..c2947e73d1b6 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -335,9 +335,9 @@ void drm_sched_increase_karma(struct drm_sched_job *bad)
 	 * because sometimes GPU hang would cause kernel jobs (like VM updating jobs)
 	 * corrupt but keep in mind that kernel jobs always considered good.
 	 */
-	if (bad->s_priority != DRM_SCHED_PRIORITY_KERNEL) {
+	if (bad->s_priority != DRM_SCHED_PRIORITY_HIGH) {
 		atomic_inc(&bad->karma);
-		for (i = DRM_SCHED_PRIORITY_MIN; i < DRM_SCHED_PRIORITY_KERNEL;
+		for (i = DRM_SCHED_PRIORITY_MIN; i < DRM_SCHED_PRIORITY_HIGH;
 		     i++) {
 			struct drm_sched_rq *rq = &sched->sched_rq[i];
 
@@ -623,7 +623,7 @@ drm_sched_select_entity(struct drm_gpu_scheduler *sched)
 		return NULL;
 
 	/* Kernel run queue has higher priority than normal run queue*/
-	for (i = DRM_SCHED_PRIORITY_MAX - 1; i >= DRM_SCHED_PRIORITY_MIN; i--) {
+	for (i = DRM_SCHED_PRIORITY_COUNT - 1; i >= DRM_SCHED_PRIORITY_MIN; i--) {
 		entity = drm_sched_rq_select_entity(&sched->sched_rq[i]);
 		if (entity)
 			break;
@@ -851,7 +851,7 @@ int drm_sched_init(struct drm_gpu_scheduler *sched,
 	sched->name = name;
 	sched->timeout = timeout;
 	sched->hang_limit = hang_limit;
-	for (i = DRM_SCHED_PRIORITY_MIN; i < DRM_SCHED_PRIORITY_MAX; i++)
+	for (i = DRM_SCHED_PRIORITY_MIN; i < DRM_SCHED_PRIORITY_COUNT; i++)
 		drm_sched_rq_init(sched, &sched->sched_rq[i]);
 
 	init_waitqueue_head(&sched->wake_up_worker);
diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index 26b04ff62676..8ae9b8f73bf9 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -33,14 +33,17 @@
 struct drm_gpu_scheduler;
 struct drm_sched_rq;
 
+/* These are often used as an (initial) index
+ * to an array, and as such should start at 0.
+ */
 enum drm_sched_priority {
 	DRM_SCHED_PRIORITY_MIN,
-	DRM_SCHED_PRIORITY_LOW = DRM_SCHED_PRIORITY_MIN,
 	DRM_SCHED_PRIORITY_NORMAL,
-	DRM_SCHED_PRIORITY_HIGH_SW,
-	DRM_SCHED_PRIORITY_HIGH_HW,
-	DRM_SCHED_PRIORITY_KERNEL,
-	DRM_SCHED_PRIORITY_MAX,
+	DRM_SCHED_PRIORITY_SW,
+	DRM_SCHED_PRIORITY_HW,
+	DRM_SCHED_PRIORITY_HIGH,
+
+	DRM_SCHED_PRIORITY_COUNT,
 	DRM_SCHED_PRIORITY_INVALID = -1,
 	DRM_SCHED_PRIORITY_UNSET = -2
 };
@@ -273,7 +276,7 @@ struct drm_gpu_scheduler {
 	uint32_t			hw_submission_limit;
 	long				timeout;
 	const char			*name;
-	struct drm_sched_rq		sched_rq[DRM_SCHED_PRIORITY_MAX];
+	struct drm_sched_rq		sched_rq[DRM_SCHED_PRIORITY_COUNT];
 	wait_queue_head_t		wake_up_worker;
 	wait_queue_head_t		job_scheduled;
 	atomic_t			hw_rq_count;
-- 
2.28.0.215.g878e727637

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [PATCH 2/2] drm/scheduler: Remove priority macro INVALID
  2020-08-14 19:13 [PATCH 0/2] Fixes to drm_sched_priority Luben Tuikov
  2020-08-14 19:14 ` [PATCH 1/2] drm/scheduler: Scheduler priority fixes Luben Tuikov
@ 2020-08-14 19:14 ` Luben Tuikov
  2020-08-14 19:28     ` Alex Deucher
  2020-08-14 19:24   ` Alex Deucher
  2 siblings, 1 reply; 12+ messages in thread
From: Luben Tuikov @ 2020-08-14 19:14 UTC (permalink / raw)
  To: amd-gfx; +Cc: Luben Tuikov

Remove DRM_SCHED_PRIORITY_INVALID. Rather than carry
around an invalid priority, cut it off at the source.

The backwards-compatibility behaviour of the AMDGPU CTX
IOCTL, whereby garbage passed in for the context priority
from user space is mapped to DRM_SCHED_PRIORITY_NORMAL,
is preserved.
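The new calling convention can be sketched in miniature:
status comes back through the return value and the priority
through an out-parameter, so no sentinel "invalid" enumerator
is needed in the enum. The mapping below is a stand-in, not
the real AMDGPU_CTX_PRIORITY_* table:

```c
#include <assert.h>

enum prio { PRIO_MIN, PRIO_NORMAL, PRIO_HIGH, PRIO_COUNT };

/* Simplified model of the amdgpu_to_sched_priority() contract
 * after this patch: 0 on success with *out set, negative on
 * garbage input. The user_prio values here are made up.
 */
static int to_prio(int user_prio, enum prio *out)
{
	switch (user_prio) {
	case 0:
		*out = PRIO_NORMAL;
		return 0;
	case 1:
		*out = PRIO_HIGH;
		return 0;
	default:
		return -22;	/* stand-in for -EINVAL */
	}
}

/* CTX IOCTL caller: for backwards compatibility, garbage
 * priorities silently map to NORMAL instead of failing.
 */
static enum prio ctx_prio(int user_prio)
{
	enum prio p;

	if (to_prio(user_prio, &p))
		p = PRIO_NORMAL;
	return p;
}
```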

Signed-off-by: Luben Tuikov <luben.tuikov@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c   | 21 ++++----
 drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c | 62 +++++++++++++++--------
 drivers/gpu/drm/amd/amdgpu/amdgpu_sched.h |  3 +-
 include/drm/gpu_scheduler.h               |  1 -
 4 files changed, 53 insertions(+), 34 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
index fd97ac34277b..10d3bfa416c0 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
@@ -384,42 +384,41 @@ static int amdgpu_ctx_query2(struct amdgpu_device *adev,
 int amdgpu_ctx_ioctl(struct drm_device *dev, void *data,
 		     struct drm_file *filp)
 {
-	int r;
+	int res;
 	uint32_t id;
-	enum drm_sched_priority priority;
+	enum drm_sched_priority prio;
 
 	union drm_amdgpu_ctx *args = data;
 	struct amdgpu_device *adev = dev->dev_private;
 	struct amdgpu_fpriv *fpriv = filp->driver_priv;
 
-	r = 0;
 	id = args->in.ctx_id;
-	priority = amdgpu_to_sched_priority(args->in.priority);
+	res = amdgpu_to_sched_priority(args->in.priority, &prio);
 
 	/* For backwards compatibility reasons, we need to accept
 	 * ioctls with garbage in the priority field */
-	if (priority == DRM_SCHED_PRIORITY_INVALID)
-		priority = DRM_SCHED_PRIORITY_NORMAL;
+	if (res == -EINVAL)
+		prio = DRM_SCHED_PRIORITY_NORMAL;
 
 	switch (args->in.op) {
 	case AMDGPU_CTX_OP_ALLOC_CTX:
-		r = amdgpu_ctx_alloc(adev, fpriv, filp, priority, &id);
+		res = amdgpu_ctx_alloc(adev, fpriv, filp, prio, &id);
 		args->out.alloc.ctx_id = id;
 		break;
 	case AMDGPU_CTX_OP_FREE_CTX:
-		r = amdgpu_ctx_free(fpriv, id);
+		res = amdgpu_ctx_free(fpriv, id);
 		break;
 	case AMDGPU_CTX_OP_QUERY_STATE:
-		r = amdgpu_ctx_query(adev, fpriv, id, &args->out);
+		res = amdgpu_ctx_query(adev, fpriv, id, &args->out);
 		break;
 	case AMDGPU_CTX_OP_QUERY_STATE2:
-		r = amdgpu_ctx_query2(adev, fpriv, id, &args->out);
+		res = amdgpu_ctx_query2(adev, fpriv, id, &args->out);
 		break;
 	default:
 		return -EINVAL;
 	}
 
-	return r;
+	return res;
 }
 
 struct amdgpu_ctx *amdgpu_ctx_get(struct amdgpu_fpriv *fpriv, uint32_t id)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c
index e05bc22a0a49..2398eb646043 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c
@@ -32,24 +32,32 @@
 
 #include "amdgpu_vm.h"
 
-enum drm_sched_priority amdgpu_to_sched_priority(int amdgpu_priority)
+int amdgpu_to_sched_priority(int amdgpu_priority,
+			     enum drm_sched_priority *prio)
 {
 	switch (amdgpu_priority) {
 	case AMDGPU_CTX_PRIORITY_VERY_HIGH:
-		return DRM_SCHED_PRIORITY_HW;
+		*prio = DRM_SCHED_PRIORITY_HW;
+		break;
 	case AMDGPU_CTX_PRIORITY_HIGH:
-		return DRM_SCHED_PRIORITY_SW;
+		*prio = DRM_SCHED_PRIORITY_SW;
+		break;
 	case AMDGPU_CTX_PRIORITY_NORMAL:
-		return DRM_SCHED_PRIORITY_NORMAL;
+		*prio = DRM_SCHED_PRIORITY_NORMAL;
+		break;
 	case AMDGPU_CTX_PRIORITY_LOW:
 	case AMDGPU_CTX_PRIORITY_VERY_LOW:
-		return DRM_SCHED_PRIORITY_MIN;
+		*prio = DRM_SCHED_PRIORITY_MIN;
+		break;
 	case AMDGPU_CTX_PRIORITY_UNSET:
-		return DRM_SCHED_PRIORITY_UNSET;
+		*prio = DRM_SCHED_PRIORITY_UNSET;
+		break;
 	default:
 		WARN(1, "Invalid context priority %d\n", amdgpu_priority);
-		return DRM_SCHED_PRIORITY_INVALID;
+		return -EINVAL;
 	}
+
+	return 0;
 }
 
 static int amdgpu_sched_process_priority_override(struct amdgpu_device *adev,
@@ -116,30 +124,42 @@ int amdgpu_sched_ioctl(struct drm_device *dev, void *data,
 {
 	union drm_amdgpu_sched *args = data;
 	struct amdgpu_device *adev = dev->dev_private;
-	enum drm_sched_priority priority;
-	int r;
+	enum drm_sched_priority prio;
+	int res;
 
-	priority = amdgpu_to_sched_priority(args->in.priority);
-	if (priority == DRM_SCHED_PRIORITY_INVALID)
+	/* First check the op, then the op's argument.
+	 */
+	switch (args->in.op) {
+	case AMDGPU_SCHED_OP_PROCESS_PRIORITY_OVERRIDE:
+	case AMDGPU_SCHED_OP_CONTEXT_PRIORITY_OVERRIDE:
+		break;
+	default:
+		DRM_ERROR("Invalid sched op specified: %d\n", args->in.op);
 		return -EINVAL;
+	}
+
+	res = amdgpu_to_sched_priority(args->in.priority, &prio);
+	if (res)
+		return res;
 
 	switch (args->in.op) {
 	case AMDGPU_SCHED_OP_PROCESS_PRIORITY_OVERRIDE:
-		r = amdgpu_sched_process_priority_override(adev,
-							   args->in.fd,
-							   priority);
+		res = amdgpu_sched_process_priority_override(adev,
+							     args->in.fd,
+							     prio);
 		break;
 	case AMDGPU_SCHED_OP_CONTEXT_PRIORITY_OVERRIDE:
-		r = amdgpu_sched_context_priority_override(adev,
-							   args->in.fd,
-							   args->in.ctx_id,
-							   priority);
+		res = amdgpu_sched_context_priority_override(adev,
+							     args->in.fd,
+							     args->in.ctx_id,
+							     prio);
 		break;
 	default:
-		DRM_ERROR("Invalid sched op specified: %d\n", args->in.op);
-		r = -EINVAL;
+		/* Impossible.
+		 */
+		res = -EINVAL;
 		break;
 	}
 
-	return r;
+	return res;
 }
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.h
index 12299fd95691..67e5b2472f6a 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.h
@@ -30,7 +30,8 @@ enum drm_sched_priority;
 struct drm_device;
 struct drm_file;
 
-enum drm_sched_priority amdgpu_to_sched_priority(int amdgpu_priority);
+int amdgpu_to_sched_priority(int amdgpu_priority,
+			     enum drm_sched_priority *prio);
 int amdgpu_sched_ioctl(struct drm_device *dev, void *data,
 		       struct drm_file *filp);
 
diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index 8ae9b8f73bf9..d6ee3b2e8407 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -44,7 +44,6 @@ enum drm_sched_priority {
 	DRM_SCHED_PRIORITY_HIGH,
 
 	DRM_SCHED_PRIORITY_COUNT,
-	DRM_SCHED_PRIORITY_INVALID = -1,
 	DRM_SCHED_PRIORITY_UNSET = -2
 };
 
-- 
2.28.0.215.g878e727637

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 12+ messages in thread

* Re: [PATCH 0/2] Fixes to drm_sched_priority
  2020-08-14 19:13 [PATCH 0/2] Fixes to drm_sched_priority Luben Tuikov
@ 2020-08-14 19:24   ` Alex Deucher
  2020-08-14 19:14 ` [PATCH 2/2] drm/scheduler: Remove priority macro INVALID Luben Tuikov
  2020-08-14 19:24   ` Alex Deucher
  2 siblings, 0 replies; 12+ messages in thread
From: Alex Deucher @ 2020-08-14 19:24 UTC (permalink / raw)
  To: Luben Tuikov, Maling list - DRI developers; +Cc: amd-gfx list

+ dri-devel

On Fri, Aug 14, 2020 at 3:14 PM Luben Tuikov <luben.tuikov@amd.com> wrote:
>
> Some fixes to enum drm_sched_priority which I'd wanted to do
> since last year.
>
> For instance, renaming MAX to COUNT, as usually a maximum value
> is a value which is part of the set of values, (e.g. a maxima of
> a function), and thus assignable, whereby a count is the size of
> a set (the enumeration in this case). It also makes it clearer
> when used to define size of arrays.
>
> Removing some redundant naming and perhaps better naming could be
> had for PRIORITY_SW and PRIORITY_HW, maybe "moderate" and
> "temperate" respectively. However, I've left them as "sw" and
> "hw".
>
> Also remove a macro which was used in only a single place.
>
> In the second patch, remove priority INVALID, since it is not a
> priority, e.g. a scheduler cannot be assigned and run at priority
> "invalid". It seems to be more of a directive, a status, of user
> input, and as such has no place in the enumeration of priority
> levels.
>
> Something else I'd like to do, is to eliminate priority
> enumeration "UNSET", since it is not really a priority level,
> but  a directive, instructing the code to "reset the priority
> of a context to the initial priority".
>
> At the moment, this functionality is overloaded to this priority
> value, and it should be an IOCTL op, as opposed to a priority value.
>
> Tested on my desktop system, which is running a kernel with
> this patch applied.
>
> Luben Tuikov (2):
>   drm/scheduler: Scheduler priority fixes
>   drm/scheduler: Remove priority macro INVALID
>
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c   | 27 +++++-----
>  drivers/gpu/drm/amd/amdgpu/amdgpu_job.c   |  2 +-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c  |  2 +-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h  |  2 +-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c | 62 +++++++++++++++--------
>  drivers/gpu/drm/amd/amdgpu/amdgpu_sched.h |  3 +-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c   |  2 +-
>  drivers/gpu/drm/scheduler/sched_main.c    |  8 +--
>  include/drm/gpu_scheduler.h               | 16 +++---
>  9 files changed, 73 insertions(+), 51 deletions(-)
>
> --
> 2.28.0.215.g878e727637
>
> _______________________________________________
> amd-gfx mailing list
> amd-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/amd-gfx
_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH 2/2] drm/scheduler: Remove priority macro INVALID
  2020-08-14 19:14 ` [PATCH 2/2] drm/scheduler: Remove priority macro INVALID Luben Tuikov
@ 2020-08-14 19:28     ` Alex Deucher
  0 siblings, 0 replies; 12+ messages in thread
From: Alex Deucher @ 2020-08-14 19:28 UTC (permalink / raw)
  To: Luben Tuikov, Maling list - DRI developers; +Cc: amd-gfx list

+ dri-devel


On Fri, Aug 14, 2020 at 3:14 PM Luben Tuikov <luben.tuikov@amd.com> wrote:
>
> Remove DRM_SCHED_PRIORITY_INVALID. We no longer
> carry around an invalid priority and cut it off
> at the source.
>
> Backwards compatibility behaviour of AMDGPU CTX
> IOCTL passing in garbage for context priority
> from user space and then mapping that to
> DRM_SCHED_PRIORITY_NORMAL is preserved.
>
> Signed-off-by: Luben Tuikov <luben.tuikov@amd.com>
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c   | 21 ++++----
>  drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c | 62 +++++++++++++++--------
>  drivers/gpu/drm/amd/amdgpu/amdgpu_sched.h |  3 +-
>  include/drm/gpu_scheduler.h               |  1 -
>  4 files changed, 53 insertions(+), 34 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
> index fd97ac34277b..10d3bfa416c0 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
> @@ -384,42 +384,41 @@ static int amdgpu_ctx_query2(struct amdgpu_device *adev,
>  int amdgpu_ctx_ioctl(struct drm_device *dev, void *data,
>                      struct drm_file *filp)
>  {
> -       int r;
> +       int res;
>         uint32_t id;
> -       enum drm_sched_priority priority;
> +       enum drm_sched_priority prio;

The variable renaming is not directly related to the functional change
in the patch and should be split out as a separate patch if you think
it should be applied.  I personally prefer the original naming.  The
driver uses r or ret for return value checking pretty consistently.  I
also prefer to spell things out unless the names are really long.
Makes it more obvious what they are for.  Same comment on the
functions below as well.

Alex

>
>         union drm_amdgpu_ctx *args = data;
>         struct amdgpu_device *adev = dev->dev_private;
>         struct amdgpu_fpriv *fpriv = filp->driver_priv;
>
> -       r = 0;
>         id = args->in.ctx_id;
> -       priority = amdgpu_to_sched_priority(args->in.priority);
> +       res = amdgpu_to_sched_priority(args->in.priority, &prio);
>
>         /* For backwards compatibility reasons, we need to accept
>          * ioctls with garbage in the priority field */
> -       if (priority == DRM_SCHED_PRIORITY_INVALID)
> -               priority = DRM_SCHED_PRIORITY_NORMAL;
> +       if (res == -EINVAL)
> +               prio = DRM_SCHED_PRIORITY_NORMAL;
>
>         switch (args->in.op) {
>         case AMDGPU_CTX_OP_ALLOC_CTX:
> -               r = amdgpu_ctx_alloc(adev, fpriv, filp, priority, &id);
> +               res = amdgpu_ctx_alloc(adev, fpriv, filp, prio, &id);
>                 args->out.alloc.ctx_id = id;
>                 break;
>         case AMDGPU_CTX_OP_FREE_CTX:
> -               r = amdgpu_ctx_free(fpriv, id);
> +               res = amdgpu_ctx_free(fpriv, id);
>                 break;
>         case AMDGPU_CTX_OP_QUERY_STATE:
> -               r = amdgpu_ctx_query(adev, fpriv, id, &args->out);
> +               res = amdgpu_ctx_query(adev, fpriv, id, &args->out);
>                 break;
>         case AMDGPU_CTX_OP_QUERY_STATE2:
> -               r = amdgpu_ctx_query2(adev, fpriv, id, &args->out);
> +               res = amdgpu_ctx_query2(adev, fpriv, id, &args->out);
>                 break;
>         default:
>                 return -EINVAL;
>         }
>
> -       return r;
> +       return res;
>  }
>
>  struct amdgpu_ctx *amdgpu_ctx_get(struct amdgpu_fpriv *fpriv, uint32_t id)
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c
> index e05bc22a0a49..2398eb646043 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c
> @@ -32,24 +32,32 @@
>
>  #include "amdgpu_vm.h"
>
> -enum drm_sched_priority amdgpu_to_sched_priority(int amdgpu_priority)
> +int amdgpu_to_sched_priority(int amdgpu_priority,
> +                            enum drm_sched_priority *prio)
>  {
>         switch (amdgpu_priority) {
>         case AMDGPU_CTX_PRIORITY_VERY_HIGH:
> -               return DRM_SCHED_PRIORITY_HW;
> +               *prio = DRM_SCHED_PRIORITY_HW;
> +               break;
>         case AMDGPU_CTX_PRIORITY_HIGH:
> -               return DRM_SCHED_PRIORITY_SW;
> +               *prio = DRM_SCHED_PRIORITY_SW;
> +               break;
>         case AMDGPU_CTX_PRIORITY_NORMAL:
> -               return DRM_SCHED_PRIORITY_NORMAL;
> +               *prio = DRM_SCHED_PRIORITY_NORMAL;
> +               break;
>         case AMDGPU_CTX_PRIORITY_LOW:
>         case AMDGPU_CTX_PRIORITY_VERY_LOW:
> -               return DRM_SCHED_PRIORITY_MIN;
> +               *prio = DRM_SCHED_PRIORITY_MIN;
> +               break;
>         case AMDGPU_CTX_PRIORITY_UNSET:
> -               return DRM_SCHED_PRIORITY_UNSET;
> +               *prio = DRM_SCHED_PRIORITY_UNSET;
> +               break;
>         default:
>                 WARN(1, "Invalid context priority %d\n", amdgpu_priority);
> -               return DRM_SCHED_PRIORITY_INVALID;
> +               return -EINVAL;
>         }
> +
> +       return 0;
>  }
>
>  static int amdgpu_sched_process_priority_override(struct amdgpu_device *adev,
> @@ -116,30 +124,42 @@ int amdgpu_sched_ioctl(struct drm_device *dev, void *data,
>  {
>         union drm_amdgpu_sched *args = data;
>         struct amdgpu_device *adev = dev->dev_private;
> -       enum drm_sched_priority priority;
> -       int r;
> +       enum drm_sched_priority prio;
> +       int res;
>
> -       priority = amdgpu_to_sched_priority(args->in.priority);
> -       if (priority == DRM_SCHED_PRIORITY_INVALID)
> +       /* First check the op, then the op's argument.
> +        */
> +       switch (args->in.op) {
> +       case AMDGPU_SCHED_OP_PROCESS_PRIORITY_OVERRIDE:
> +       case AMDGPU_SCHED_OP_CONTEXT_PRIORITY_OVERRIDE:
> +               break;
> +       default:
> +               DRM_ERROR("Invalid sched op specified: %d\n", args->in.op);
>                 return -EINVAL;
> +       }
> +
> +       res = amdgpu_to_sched_priority(args->in.priority, &prio);
> +       if (res)
> +               return res;
>
>         switch (args->in.op) {
>         case AMDGPU_SCHED_OP_PROCESS_PRIORITY_OVERRIDE:
> -               r = amdgpu_sched_process_priority_override(adev,
> -                                                          args->in.fd,
> -                                                          priority);
> +               res = amdgpu_sched_process_priority_override(adev,
> +                                                            args->in.fd,
> +                                                            prio);
>                 break;
>         case AMDGPU_SCHED_OP_CONTEXT_PRIORITY_OVERRIDE:
> -               r = amdgpu_sched_context_priority_override(adev,
> -                                                          args->in.fd,
> -                                                          args->in.ctx_id,
> -                                                          priority);
> +               res = amdgpu_sched_context_priority_override(adev,
> +                                                            args->in.fd,
> +                                                            args->in.ctx_id,
> +                                                            prio);
>                 break;
>         default:
> -               DRM_ERROR("Invalid sched op specified: %d\n", args->in.op);
> -               r = -EINVAL;
> +               /* Impossible.
> +                */
> +               res = -EINVAL;
>                 break;
>         }
>
> -       return r;
> +       return res;
>  }
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.h
> index 12299fd95691..67e5b2472f6a 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.h
> @@ -30,7 +30,8 @@ enum drm_sched_priority;
>  struct drm_device;
>  struct drm_file;
>
> -enum drm_sched_priority amdgpu_to_sched_priority(int amdgpu_priority);
> +int amdgpu_to_sched_priority(int amdgpu_priority,
> +                            enum drm_sched_priority *prio);
>  int amdgpu_sched_ioctl(struct drm_device *dev, void *data,
>                        struct drm_file *filp);
>
> diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
> index 8ae9b8f73bf9..d6ee3b2e8407 100644
> --- a/include/drm/gpu_scheduler.h
> +++ b/include/drm/gpu_scheduler.h
> @@ -44,7 +44,6 @@ enum drm_sched_priority {
>         DRM_SCHED_PRIORITY_HIGH,
>
>         DRM_SCHED_PRIORITY_COUNT,
> -       DRM_SCHED_PRIORITY_INVALID = -1,
>         DRM_SCHED_PRIORITY_UNSET = -2
>  };
>
> --
> 2.28.0.215.g878e727637
>
_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


* Re: [PATCH 2/2] drm/scheduler: Remove priority macro INVALID
@ 2020-08-14 19:28     ` Alex Deucher
  0 siblings, 0 replies; 12+ messages in thread
From: Alex Deucher @ 2020-08-14 19:28 UTC (permalink / raw
  To: Luben Tuikov, Mailing list - DRI developers; +Cc: amd-gfx list

+ dri-devel


On Fri, Aug 14, 2020 at 3:14 PM Luben Tuikov <luben.tuikov@amd.com> wrote:
>
> Remove DRM_SCHED_PRIORITY_INVALID. We no longer
> carry around an invalid priority and cut it off
> at the source.
>
> Backwards compatibility behaviour of AMDGPU CTX
> IOCTL passing in garbage for context priority
> from user space and then mapping that to
> DRM_SCHED_PRIORITY_NORMAL is preserved.
>
> Signed-off-by: Luben Tuikov <luben.tuikov@amd.com>
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c   | 21 ++++----
>  drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c | 62 +++++++++++++++--------
>  drivers/gpu/drm/amd/amdgpu/amdgpu_sched.h |  3 +-
>  include/drm/gpu_scheduler.h               |  1 -
>  4 files changed, 53 insertions(+), 34 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
> index fd97ac34277b..10d3bfa416c0 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
> @@ -384,42 +384,41 @@ static int amdgpu_ctx_query2(struct amdgpu_device *adev,
>  int amdgpu_ctx_ioctl(struct drm_device *dev, void *data,
>                      struct drm_file *filp)
>  {
> -       int r;
> +       int res;
>         uint32_t id;
> -       enum drm_sched_priority priority;
> +       enum drm_sched_priority prio;

The variable renaming is not directly related to the functional change
in the patch and should be split out as a separate patch if you think
it should be applied.  I personally prefer the original naming.  The
driver uses r or ret for return value checking pretty consistently.  I
also prefer to spell things out unless the names are really long.
Makes it more obvious what they are for.  Same comment on the
functions below as well.

Alex

>
>         union drm_amdgpu_ctx *args = data;
>         struct amdgpu_device *adev = dev->dev_private;
>         struct amdgpu_fpriv *fpriv = filp->driver_priv;
>
> -       r = 0;
>         id = args->in.ctx_id;
> -       priority = amdgpu_to_sched_priority(args->in.priority);
> +       res = amdgpu_to_sched_priority(args->in.priority, &prio);
>
>         /* For backwards compatibility reasons, we need to accept
>          * ioctls with garbage in the priority field */
> -       if (priority == DRM_SCHED_PRIORITY_INVALID)
> -               priority = DRM_SCHED_PRIORITY_NORMAL;
> +       if (res == -EINVAL)
> +               prio = DRM_SCHED_PRIORITY_NORMAL;
>
>         switch (args->in.op) {
>         case AMDGPU_CTX_OP_ALLOC_CTX:
> -               r = amdgpu_ctx_alloc(adev, fpriv, filp, priority, &id);
> +               res = amdgpu_ctx_alloc(adev, fpriv, filp, prio, &id);
>                 args->out.alloc.ctx_id = id;
>                 break;
>         case AMDGPU_CTX_OP_FREE_CTX:
> -               r = amdgpu_ctx_free(fpriv, id);
> +               res = amdgpu_ctx_free(fpriv, id);
>                 break;
>         case AMDGPU_CTX_OP_QUERY_STATE:
> -               r = amdgpu_ctx_query(adev, fpriv, id, &args->out);
> +               res = amdgpu_ctx_query(adev, fpriv, id, &args->out);
>                 break;
>         case AMDGPU_CTX_OP_QUERY_STATE2:
> -               r = amdgpu_ctx_query2(adev, fpriv, id, &args->out);
> +               res = amdgpu_ctx_query2(adev, fpriv, id, &args->out);
>                 break;
>         default:
>                 return -EINVAL;
>         }
>
> -       return r;
> +       return res;
>  }
>
>  struct amdgpu_ctx *amdgpu_ctx_get(struct amdgpu_fpriv *fpriv, uint32_t id)
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c
> index e05bc22a0a49..2398eb646043 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c
> @@ -32,24 +32,32 @@
>
>  #include "amdgpu_vm.h"
>
> -enum drm_sched_priority amdgpu_to_sched_priority(int amdgpu_priority)
> +int amdgpu_to_sched_priority(int amdgpu_priority,
> +                            enum drm_sched_priority *prio)
>  {
>         switch (amdgpu_priority) {
>         case AMDGPU_CTX_PRIORITY_VERY_HIGH:
> -               return DRM_SCHED_PRIORITY_HW;
> +               *prio = DRM_SCHED_PRIORITY_HW;
> +               break;
>         case AMDGPU_CTX_PRIORITY_HIGH:
> -               return DRM_SCHED_PRIORITY_SW;
> +               *prio = DRM_SCHED_PRIORITY_SW;
> +               break;
>         case AMDGPU_CTX_PRIORITY_NORMAL:
> -               return DRM_SCHED_PRIORITY_NORMAL;
> +               *prio = DRM_SCHED_PRIORITY_NORMAL;
> +               break;
>         case AMDGPU_CTX_PRIORITY_LOW:
>         case AMDGPU_CTX_PRIORITY_VERY_LOW:
> -               return DRM_SCHED_PRIORITY_MIN;
> +               *prio = DRM_SCHED_PRIORITY_MIN;
> +               break;
>         case AMDGPU_CTX_PRIORITY_UNSET:
> -               return DRM_SCHED_PRIORITY_UNSET;
> +               *prio = DRM_SCHED_PRIORITY_UNSET;
> +               break;
>         default:
>                 WARN(1, "Invalid context priority %d\n", amdgpu_priority);
> -               return DRM_SCHED_PRIORITY_INVALID;
> +               return -EINVAL;
>         }
> +
> +       return 0;
>  }
>
>  static int amdgpu_sched_process_priority_override(struct amdgpu_device *adev,
> @@ -116,30 +124,42 @@ int amdgpu_sched_ioctl(struct drm_device *dev, void *data,
>  {
>         union drm_amdgpu_sched *args = data;
>         struct amdgpu_device *adev = dev->dev_private;
> -       enum drm_sched_priority priority;
> -       int r;
> +       enum drm_sched_priority prio;
> +       int res;
>
> -       priority = amdgpu_to_sched_priority(args->in.priority);
> -       if (priority == DRM_SCHED_PRIORITY_INVALID)
> +       /* First check the op, then the op's argument.
> +        */
> +       switch (args->in.op) {
> +       case AMDGPU_SCHED_OP_PROCESS_PRIORITY_OVERRIDE:
> +       case AMDGPU_SCHED_OP_CONTEXT_PRIORITY_OVERRIDE:
> +               break;
> +       default:
> +               DRM_ERROR("Invalid sched op specified: %d\n", args->in.op);
>                 return -EINVAL;
> +       }
> +
> +       res = amdgpu_to_sched_priority(args->in.priority, &prio);
> +       if (res)
> +               return res;
>
>         switch (args->in.op) {
>         case AMDGPU_SCHED_OP_PROCESS_PRIORITY_OVERRIDE:
> -               r = amdgpu_sched_process_priority_override(adev,
> -                                                          args->in.fd,
> -                                                          priority);
> +               res = amdgpu_sched_process_priority_override(adev,
> +                                                            args->in.fd,
> +                                                            prio);
>                 break;
>         case AMDGPU_SCHED_OP_CONTEXT_PRIORITY_OVERRIDE:
> -               r = amdgpu_sched_context_priority_override(adev,
> -                                                          args->in.fd,
> -                                                          args->in.ctx_id,
> -                                                          priority);
> +               res = amdgpu_sched_context_priority_override(adev,
> +                                                            args->in.fd,
> +                                                            args->in.ctx_id,
> +                                                            prio);
>                 break;
>         default:
> -               DRM_ERROR("Invalid sched op specified: %d\n", args->in.op);
> -               r = -EINVAL;
> +               /* Impossible.
> +                */
> +               res = -EINVAL;
>                 break;
>         }
>
> -       return r;
> +       return res;
>  }
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.h
> index 12299fd95691..67e5b2472f6a 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.h
> @@ -30,7 +30,8 @@ enum drm_sched_priority;
>  struct drm_device;
>  struct drm_file;
>
> -enum drm_sched_priority amdgpu_to_sched_priority(int amdgpu_priority);
> +int amdgpu_to_sched_priority(int amdgpu_priority,
> +                            enum drm_sched_priority *prio);
>  int amdgpu_sched_ioctl(struct drm_device *dev, void *data,
>                        struct drm_file *filp);
>
> diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
> index 8ae9b8f73bf9..d6ee3b2e8407 100644
> --- a/include/drm/gpu_scheduler.h
> +++ b/include/drm/gpu_scheduler.h
> @@ -44,7 +44,6 @@ enum drm_sched_priority {
>         DRM_SCHED_PRIORITY_HIGH,
>
>         DRM_SCHED_PRIORITY_COUNT,
> -       DRM_SCHED_PRIORITY_INVALID = -1,
>         DRM_SCHED_PRIORITY_UNSET = -2
>  };
>
> --
> 2.28.0.215.g878e727637
>
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel


* Re: [PATCH 1/2] drm/scheduler: Scheduler priority fixes
  2020-08-14 19:14 ` [PATCH 1/2] drm/scheduler: Scheduler priority fixes Luben Tuikov
@ 2020-08-14 19:33   ` Alex Deucher
  2020-08-14 20:58     ` Alex Deucher
  0 siblings, 1 reply; 12+ messages in thread
From: Alex Deucher @ 2020-08-14 19:33 UTC (permalink / raw
  To: Luben Tuikov; +Cc: amd-gfx list

+ dri-devel

On Fri, Aug 14, 2020 at 3:14 PM Luben Tuikov <luben.tuikov@amd.com> wrote:
>
> Remove DRM_SCHED_PRIORITY_LOW, as it was used
> in only one place.
>
> Rename and separate by a line
> DRM_SCHED_PRIORITY_MAX to DRM_SCHED_PRIORITY_COUNT
> as it represents a (total) count of said
> priorities and it is used as such in loops
> throughout the code. (With 0-based indexing,
> the count is one past the last valid index.)
>
> Remove redundant word HIGH in priority names,
> and rename *KERNEL* to *HIGH*, as it really
> means that, high.
>
> Signed-off-by: Luben Tuikov <luben.tuikov@amd.com>
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c   |  6 +++---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_job.c   |  2 +-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c  |  2 +-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h  |  2 +-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c |  6 +++---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c   |  2 +-
>  drivers/gpu/drm/scheduler/sched_main.c    |  8 ++++----
>  include/drm/gpu_scheduler.h               | 15 +++++++++------
>  8 files changed, 23 insertions(+), 20 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
> index d85d13f7a043..fd97ac34277b 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
> @@ -46,7 +46,7 @@ const unsigned int amdgpu_ctx_num_entities[AMDGPU_HW_IP_NUM] = {
>  static int amdgpu_ctx_priority_permit(struct drm_file *filp,
>                                       enum drm_sched_priority priority)
>  {
> -       if (priority < 0 || priority >= DRM_SCHED_PRIORITY_MAX)
> +       if (priority < 0 || priority >= DRM_SCHED_PRIORITY_COUNT)
>                 return -EINVAL;
>
>         /* NORMAL and below are accessible by everyone */
> @@ -65,8 +65,8 @@ static int amdgpu_ctx_priority_permit(struct drm_file *filp,
>  static enum gfx_pipe_priority amdgpu_ctx_sched_prio_to_compute_prio(enum drm_sched_priority prio)
>  {
>         switch (prio) {
> -       case DRM_SCHED_PRIORITY_HIGH_HW:
> -       case DRM_SCHED_PRIORITY_KERNEL:
> +       case DRM_SCHED_PRIORITY_HW:
> +       case DRM_SCHED_PRIORITY_HIGH:
>                 return AMDGPU_GFX_PIPE_PRIO_HIGH;
>         default:
>                 return AMDGPU_GFX_PIPE_PRIO_NORMAL;
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> index 75d37dfb51aa..bb9e5481ff3c 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> @@ -251,7 +251,7 @@ void amdgpu_job_stop_all_jobs_on_sched(struct drm_gpu_scheduler *sched)
>         int i;
>
>         /* Signal all jobs not yet scheduled */
> -       for (i = DRM_SCHED_PRIORITY_MAX - 1; i >= DRM_SCHED_PRIORITY_MIN; i--) {
> +       for (i = DRM_SCHED_PRIORITY_COUNT - 1; i >= DRM_SCHED_PRIORITY_MIN; i--) {
>                 struct drm_sched_rq *rq = &sched->sched_rq[i];
>
>                 if (!rq)
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
> index 13ea8ebc421c..6d4fc79bf84a 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
> @@ -267,7 +267,7 @@ int amdgpu_ring_init(struct amdgpu_device *adev, struct amdgpu_ring *ring,
>                         &ring->sched;
>         }
>
> -       for (i = 0; i < DRM_SCHED_PRIORITY_MAX; ++i)
> +       for (i = DRM_SCHED_PRIORITY_MIN; i < DRM_SCHED_PRIORITY_COUNT; ++i)
>                 atomic_set(&ring->num_jobs[i], 0);
>
>         return 0;
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
> index da871d84b742..7112137689db 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
> @@ -243,7 +243,7 @@ struct amdgpu_ring {
>         bool                    has_compute_vm_bug;
>         bool                    no_scheduler;
>
> -       atomic_t                num_jobs[DRM_SCHED_PRIORITY_MAX];
> +       atomic_t                num_jobs[DRM_SCHED_PRIORITY_COUNT];
>         struct mutex            priority_mutex;
>         /* protected by priority_mutex */
>         int                     priority;
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c
> index c799691dfa84..e05bc22a0a49 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c
> @@ -36,14 +36,14 @@ enum drm_sched_priority amdgpu_to_sched_priority(int amdgpu_priority)
>  {
>         switch (amdgpu_priority) {
>         case AMDGPU_CTX_PRIORITY_VERY_HIGH:
> -               return DRM_SCHED_PRIORITY_HIGH_HW;
> +               return DRM_SCHED_PRIORITY_HW;
>         case AMDGPU_CTX_PRIORITY_HIGH:
> -               return DRM_SCHED_PRIORITY_HIGH_SW;
> +               return DRM_SCHED_PRIORITY_SW;
>         case AMDGPU_CTX_PRIORITY_NORMAL:
>                 return DRM_SCHED_PRIORITY_NORMAL;
>         case AMDGPU_CTX_PRIORITY_LOW:
>         case AMDGPU_CTX_PRIORITY_VERY_LOW:
> -               return DRM_SCHED_PRIORITY_LOW;
> +               return DRM_SCHED_PRIORITY_MIN;
>         case AMDGPU_CTX_PRIORITY_UNSET:
>                 return DRM_SCHED_PRIORITY_UNSET;
>         default:
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> index 20fa0497aaa4..bc4bdb90d8f7 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> @@ -2114,7 +2114,7 @@ void amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
>                 ring = adev->mman.buffer_funcs_ring;
>                 sched = &ring->sched;
>                 r = drm_sched_entity_init(&adev->mman.entity,
> -                                         DRM_SCHED_PRIORITY_KERNEL, &sched,
> +                                         DRM_SCHED_PRIORITY_HIGH, &sched,
>                                           1, NULL);
>                 if (r) {
>                         DRM_ERROR("Failed setting up TTM BO move entity (%d)\n",
> diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
> index 2f319102ae9f..c2947e73d1b6 100644
> --- a/drivers/gpu/drm/scheduler/sched_main.c
> +++ b/drivers/gpu/drm/scheduler/sched_main.c
> @@ -335,9 +335,9 @@ void drm_sched_increase_karma(struct drm_sched_job *bad)
>          * because sometimes GPU hang would cause kernel jobs (like VM updating jobs)
>          * corrupt but keep in mind that kernel jobs always considered good.
>          */
> -       if (bad->s_priority != DRM_SCHED_PRIORITY_KERNEL) {
> +       if (bad->s_priority != DRM_SCHED_PRIORITY_HIGH) {
>                 atomic_inc(&bad->karma);
> -               for (i = DRM_SCHED_PRIORITY_MIN; i < DRM_SCHED_PRIORITY_KERNEL;
> +               for (i = DRM_SCHED_PRIORITY_MIN; i < DRM_SCHED_PRIORITY_HIGH;
>                      i++) {
>                         struct drm_sched_rq *rq = &sched->sched_rq[i];
>
> @@ -623,7 +623,7 @@ drm_sched_select_entity(struct drm_gpu_scheduler *sched)
>                 return NULL;
>
>         /* Kernel run queue has higher priority than normal run queue*/
> -       for (i = DRM_SCHED_PRIORITY_MAX - 1; i >= DRM_SCHED_PRIORITY_MIN; i--) {
> +       for (i = DRM_SCHED_PRIORITY_COUNT - 1; i >= DRM_SCHED_PRIORITY_MIN; i--) {
>                 entity = drm_sched_rq_select_entity(&sched->sched_rq[i]);
>                 if (entity)
>                         break;
> @@ -851,7 +851,7 @@ int drm_sched_init(struct drm_gpu_scheduler *sched,
>         sched->name = name;
>         sched->timeout = timeout;
>         sched->hang_limit = hang_limit;
> -       for (i = DRM_SCHED_PRIORITY_MIN; i < DRM_SCHED_PRIORITY_MAX; i++)
> +       for (i = DRM_SCHED_PRIORITY_MIN; i < DRM_SCHED_PRIORITY_COUNT; i++)
>                 drm_sched_rq_init(sched, &sched->sched_rq[i]);
>
>         init_waitqueue_head(&sched->wake_up_worker);
> diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
> index 26b04ff62676..8ae9b8f73bf9 100644
> --- a/include/drm/gpu_scheduler.h
> +++ b/include/drm/gpu_scheduler.h
> @@ -33,14 +33,17 @@
>  struct drm_gpu_scheduler;
>  struct drm_sched_rq;
>
> +/* These are often used as an (initial) index
> + * to an array, and as such should start at 0.
> + */
>  enum drm_sched_priority {
>         DRM_SCHED_PRIORITY_MIN,
> -       DRM_SCHED_PRIORITY_LOW = DRM_SCHED_PRIORITY_MIN,
>         DRM_SCHED_PRIORITY_NORMAL,
> -       DRM_SCHED_PRIORITY_HIGH_SW,
> -       DRM_SCHED_PRIORITY_HIGH_HW,

We originally added HIGH_SW and HIGH_HW to differentiate between two
different high priority modes; e.g., HIGH_SW meant that the gpu
scheduler would schedule with high priority, HIGH_HW meant that the
hardware would schedule with high priority. You can set different
queue priorities in the hardware and work from the high priority queue
will get scheduled on the hw sooner than lower priority queues.  Not
all engines support HW queue priorities however.

Alex
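
[Editorial note: the distinction Alex describes can be sketched in a few lines. The enums below are trimmed, illustrative copies, not the real kernel headers; they mirror the pre-patch mapping quoted above, where HIGH_SW only affects the software run queue while HIGH_HW (and KERNEL) also raise the hardware queue priority, cf. amdgpu_ctx_sched_prio_to_compute_prio().]

```c
#include <assert.h>

/* Hypothetical, trimmed-down copies of the kernel enums, for
 * illustration only; names mirror the pre-patch code quoted above. */
enum drm_sched_priority {
	DRM_SCHED_PRIORITY_MIN,
	DRM_SCHED_PRIORITY_NORMAL,
	DRM_SCHED_PRIORITY_HIGH_SW,
	DRM_SCHED_PRIORITY_HIGH_HW,
	DRM_SCHED_PRIORITY_KERNEL,
	DRM_SCHED_PRIORITY_MAX
};

enum gfx_pipe_priority {
	AMDGPU_GFX_PIPE_PRIO_NORMAL,
	AMDGPU_GFX_PIPE_PRIO_HIGH
};

/* Both HIGH_SW and HIGH_HW select a high-priority software run
 * queue, but only HIGH_HW and KERNEL additionally raise the
 * hardware queue priority. */
static enum gfx_pipe_priority
sched_prio_to_hw_prio(enum drm_sched_priority prio)
{
	switch (prio) {
	case DRM_SCHED_PRIORITY_HIGH_HW:
	case DRM_SCHED_PRIORITY_KERNEL:
		return AMDGPU_GFX_PIPE_PRIO_HIGH;
	default:
		return AMDGPU_GFX_PIPE_PRIO_NORMAL;
	}
}
```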

> -       DRM_SCHED_PRIORITY_KERNEL,
> -       DRM_SCHED_PRIORITY_MAX,
> +       DRM_SCHED_PRIORITY_SW,
> +       DRM_SCHED_PRIORITY_HW,
> +       DRM_SCHED_PRIORITY_HIGH,
> +
> +       DRM_SCHED_PRIORITY_COUNT,
>         DRM_SCHED_PRIORITY_INVALID = -1,
>         DRM_SCHED_PRIORITY_UNSET = -2
>  };
> @@ -273,7 +276,7 @@ struct drm_gpu_scheduler {
>         uint32_t                        hw_submission_limit;
>         long                            timeout;
>         const char                      *name;
> -       struct drm_sched_rq             sched_rq[DRM_SCHED_PRIORITY_MAX];
> +       struct drm_sched_rq             sched_rq[DRM_SCHED_PRIORITY_COUNT];
>         wait_queue_head_t               wake_up_worker;
>         wait_queue_head_t               job_scheduled;
>         atomic_t                        hw_rq_count;
> --
> 2.28.0.215.g878e727637
>


* Re: [PATCH 1/2] drm/scheduler: Scheduler priority fixes
  2020-08-14 19:33   ` Alex Deucher
@ 2020-08-14 20:58     ` Alex Deucher
  2020-08-14 21:32       ` Luben Tuikov
  0 siblings, 1 reply; 12+ messages in thread
From: Alex Deucher @ 2020-08-14 20:58 UTC (permalink / raw
  To: Luben Tuikov; +Cc: amd-gfx list

On Fri, Aug 14, 2020 at 3:33 PM Alex Deucher <alexdeucher@gmail.com> wrote:
>
> + dri-devel
>
> On Fri, Aug 14, 2020 at 3:14 PM Luben Tuikov <luben.tuikov@amd.com> wrote:
> >
> > Remove DRM_SCHED_PRIORITY_LOW, as it was used
> > in only one place.
> >
> > Rename and separate by a line
> > DRM_SCHED_PRIORITY_MAX to DRM_SCHED_PRIORITY_COUNT
> > as it represents a (total) count of said
> > priorities and it is used as such in loops
> > throughout the code. (With 0-based indexing,
> > the count is one past the last valid index.)
> >
> > Remove redundant word HIGH in priority names,
> > and rename *KERNEL* to *HIGH*, as it really
> > means that, high.
> >
> > Signed-off-by: Luben Tuikov <luben.tuikov@amd.com>
> > ---
> >  drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c   |  6 +++---
> >  drivers/gpu/drm/amd/amdgpu/amdgpu_job.c   |  2 +-
> >  drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c  |  2 +-
> >  drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h  |  2 +-
> >  drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c |  6 +++---
> >  drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c   |  2 +-
> >  drivers/gpu/drm/scheduler/sched_main.c    |  8 ++++----
> >  include/drm/gpu_scheduler.h               | 15 +++++++++------
> >  8 files changed, 23 insertions(+), 20 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
> > index d85d13f7a043..fd97ac34277b 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
> > @@ -46,7 +46,7 @@ const unsigned int amdgpu_ctx_num_entities[AMDGPU_HW_IP_NUM] = {
> >  static int amdgpu_ctx_priority_permit(struct drm_file *filp,
> >                                       enum drm_sched_priority priority)
> >  {
> > -       if (priority < 0 || priority >= DRM_SCHED_PRIORITY_MAX)
> > +       if (priority < 0 || priority >= DRM_SCHED_PRIORITY_COUNT)
> >                 return -EINVAL;
> >
> >         /* NORMAL and below are accessible by everyone */
> > @@ -65,8 +65,8 @@ static int amdgpu_ctx_priority_permit(struct drm_file *filp,
> >  static enum gfx_pipe_priority amdgpu_ctx_sched_prio_to_compute_prio(enum drm_sched_priority prio)
> >  {
> >         switch (prio) {
> > -       case DRM_SCHED_PRIORITY_HIGH_HW:
> > -       case DRM_SCHED_PRIORITY_KERNEL:
> > +       case DRM_SCHED_PRIORITY_HW:
> > +       case DRM_SCHED_PRIORITY_HIGH:
> >                 return AMDGPU_GFX_PIPE_PRIO_HIGH;
> >         default:
> >                 return AMDGPU_GFX_PIPE_PRIO_NORMAL;
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> > index 75d37dfb51aa..bb9e5481ff3c 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> > @@ -251,7 +251,7 @@ void amdgpu_job_stop_all_jobs_on_sched(struct drm_gpu_scheduler *sched)
> >         int i;
> >
> >         /* Signal all jobs not yet scheduled */
> > -       for (i = DRM_SCHED_PRIORITY_MAX - 1; i >= DRM_SCHED_PRIORITY_MIN; i--) {
> > +       for (i = DRM_SCHED_PRIORITY_COUNT - 1; i >= DRM_SCHED_PRIORITY_MIN; i--) {
> >                 struct drm_sched_rq *rq = &sched->sched_rq[i];
> >
> >                 if (!rq)
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
> > index 13ea8ebc421c..6d4fc79bf84a 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
> > @@ -267,7 +267,7 @@ int amdgpu_ring_init(struct amdgpu_device *adev, struct amdgpu_ring *ring,
> >                         &ring->sched;
> >         }
> >
> > -       for (i = 0; i < DRM_SCHED_PRIORITY_MAX; ++i)
> > +       for (i = DRM_SCHED_PRIORITY_MIN; i < DRM_SCHED_PRIORITY_COUNT; ++i)
> >                 atomic_set(&ring->num_jobs[i], 0);
> >
> >         return 0;
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
> > index da871d84b742..7112137689db 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
> > @@ -243,7 +243,7 @@ struct amdgpu_ring {
> >         bool                    has_compute_vm_bug;
> >         bool                    no_scheduler;
> >
> > -       atomic_t                num_jobs[DRM_SCHED_PRIORITY_MAX];
> > +       atomic_t                num_jobs[DRM_SCHED_PRIORITY_COUNT];
> >         struct mutex            priority_mutex;
> >         /* protected by priority_mutex */
> >         int                     priority;
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c
> > index c799691dfa84..e05bc22a0a49 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c
> > @@ -36,14 +36,14 @@ enum drm_sched_priority amdgpu_to_sched_priority(int amdgpu_priority)
> >  {
> >         switch (amdgpu_priority) {
> >         case AMDGPU_CTX_PRIORITY_VERY_HIGH:
> > -               return DRM_SCHED_PRIORITY_HIGH_HW;
> > +               return DRM_SCHED_PRIORITY_HW;
> >         case AMDGPU_CTX_PRIORITY_HIGH:
> > -               return DRM_SCHED_PRIORITY_HIGH_SW;
> > +               return DRM_SCHED_PRIORITY_SW;
> >         case AMDGPU_CTX_PRIORITY_NORMAL:
> >                 return DRM_SCHED_PRIORITY_NORMAL;
> >         case AMDGPU_CTX_PRIORITY_LOW:
> >         case AMDGPU_CTX_PRIORITY_VERY_LOW:
> > -               return DRM_SCHED_PRIORITY_LOW;
> > +               return DRM_SCHED_PRIORITY_MIN;
> >         case AMDGPU_CTX_PRIORITY_UNSET:
> >                 return DRM_SCHED_PRIORITY_UNSET;
> >         default:
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> > index 20fa0497aaa4..bc4bdb90d8f7 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> > @@ -2114,7 +2114,7 @@ void amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
> >                 ring = adev->mman.buffer_funcs_ring;
> >                 sched = &ring->sched;
> >                 r = drm_sched_entity_init(&adev->mman.entity,
> > -                                         DRM_SCHED_PRIORITY_KERNEL, &sched,
> > +                                         DRM_SCHED_PRIORITY_HIGH, &sched,
> >                                           1, NULL);
> >                 if (r) {
> >                         DRM_ERROR("Failed setting up TTM BO move entity (%d)\n",
> > diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
> > index 2f319102ae9f..c2947e73d1b6 100644
> > --- a/drivers/gpu/drm/scheduler/sched_main.c
> > +++ b/drivers/gpu/drm/scheduler/sched_main.c
> > @@ -335,9 +335,9 @@ void drm_sched_increase_karma(struct drm_sched_job *bad)
> >          * because sometimes GPU hang would cause kernel jobs (like VM updating jobs)
> >          * corrupt but keep in mind that kernel jobs always considered good.
> >          */
> > -       if (bad->s_priority != DRM_SCHED_PRIORITY_KERNEL) {
> > +       if (bad->s_priority != DRM_SCHED_PRIORITY_HIGH) {
> >                 atomic_inc(&bad->karma);
> > -               for (i = DRM_SCHED_PRIORITY_MIN; i < DRM_SCHED_PRIORITY_KERNEL;
> > +               for (i = DRM_SCHED_PRIORITY_MIN; i < DRM_SCHED_PRIORITY_HIGH;
> >                      i++) {
> >                         struct drm_sched_rq *rq = &sched->sched_rq[i];
> >
> > @@ -623,7 +623,7 @@ drm_sched_select_entity(struct drm_gpu_scheduler *sched)
> >                 return NULL;
> >
> >         /* Kernel run queue has higher priority than normal run queue*/
> > -       for (i = DRM_SCHED_PRIORITY_MAX - 1; i >= DRM_SCHED_PRIORITY_MIN; i--) {
> > +       for (i = DRM_SCHED_PRIORITY_COUNT - 1; i >= DRM_SCHED_PRIORITY_MIN; i--) {
> >                 entity = drm_sched_rq_select_entity(&sched->sched_rq[i]);
> >                 if (entity)
> >                         break;
> > @@ -851,7 +851,7 @@ int drm_sched_init(struct drm_gpu_scheduler *sched,
> >         sched->name = name;
> >         sched->timeout = timeout;
> >         sched->hang_limit = hang_limit;
> > -       for (i = DRM_SCHED_PRIORITY_MIN; i < DRM_SCHED_PRIORITY_MAX; i++)
> > +       for (i = DRM_SCHED_PRIORITY_MIN; i < DRM_SCHED_PRIORITY_COUNT; i++)
> >                 drm_sched_rq_init(sched, &sched->sched_rq[i]);
> >
> >         init_waitqueue_head(&sched->wake_up_worker);
> > diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
> > index 26b04ff62676..8ae9b8f73bf9 100644
> > --- a/include/drm/gpu_scheduler.h
> > +++ b/include/drm/gpu_scheduler.h
> > @@ -33,14 +33,17 @@
> >  struct drm_gpu_scheduler;
> >  struct drm_sched_rq;
> >
> > +/* These are often used as an (initial) index
> > + * to an array, and as such should start at 0.
> > + */
> >  enum drm_sched_priority {
> >         DRM_SCHED_PRIORITY_MIN,
> > -       DRM_SCHED_PRIORITY_LOW = DRM_SCHED_PRIORITY_MIN,
> >         DRM_SCHED_PRIORITY_NORMAL,
> > -       DRM_SCHED_PRIORITY_HIGH_SW,
> > -       DRM_SCHED_PRIORITY_HIGH_HW,
>
> We originally added HIGH_SW and HIGH_HW to differentiate between two
> different high priority modes; e.g., HIGH_SW meant that the gpu
> scheduler would schedule with high priority, HIGH_HW meant that the
> hardware would schedule with high priority. You can set different
> queue priorities in the hardware and work from the high priority queue
> will get scheduled on the hw sooner than lower priority queues.  Not
> all engines support HW queue priorities however.

Thinking about this more, I think we can probably just get rid of the
SW and HW settings and just have a HIGH setting.  We can use HIGH
whether or not we have additional hw priority features or not.  We
want to keep KERNEL however since that is the highest priority and it
makes it obvious what it is for.  Submissions from the kernel need to
be the highest priority (e.g., memory management or page table
updates).

Alex
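
[Editorial note: a minimal sketch of the enum as Alex proposes it here, with a single HIGH level, KERNEL kept as the top priority, and COUNT as the array-size terminator. This is an illustration of the suggestion, not the actual kernel header.]

```c
#include <assert.h>

/* Proposed shape: drop the SW/HW split, keep HIGH and KERNEL.
 * COUNT is not an assignable priority; it is the size of the set. */
enum drm_sched_priority {
	DRM_SCHED_PRIORITY_MIN,
	DRM_SCHED_PRIORITY_NORMAL,
	DRM_SCHED_PRIORITY_HIGH,
	DRM_SCHED_PRIORITY_KERNEL,

	DRM_SCHED_PRIORITY_COUNT,
	DRM_SCHED_PRIORITY_UNSET = -2
};

/* COUNT sizes arrays directly; the 0-based priority values index them. */
static int num_jobs[DRM_SCHED_PRIORITY_COUNT];
```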

>
> Alex
>
> > -       DRM_SCHED_PRIORITY_KERNEL,
> > -       DRM_SCHED_PRIORITY_MAX,
> > +       DRM_SCHED_PRIORITY_SW,
> > +       DRM_SCHED_PRIORITY_HW,
> > +       DRM_SCHED_PRIORITY_HIGH,
> > +
> > +       DRM_SCHED_PRIORITY_COUNT,
> >         DRM_SCHED_PRIORITY_INVALID = -1,
> >         DRM_SCHED_PRIORITY_UNSET = -2
> >  };
> > @@ -273,7 +276,7 @@ struct drm_gpu_scheduler {
> >         uint32_t                        hw_submission_limit;
> >         long                            timeout;
> >         const char                      *name;
> > -       struct drm_sched_rq             sched_rq[DRM_SCHED_PRIORITY_MAX];
> > +       struct drm_sched_rq             sched_rq[DRM_SCHED_PRIORITY_COUNT];
> >         wait_queue_head_t               wake_up_worker;
> >         wait_queue_head_t               job_scheduled;
> >         atomic_t                        hw_rq_count;
> > --
> > 2.28.0.215.g878e727637
> >
> > _______________________________________________
> > amd-gfx mailing list
> > amd-gfx@lists.freedesktop.org
> > https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH 1/2] drm/scheduler: Scheduler priority fixes
  2020-08-14 20:58     ` Alex Deucher
@ 2020-08-14 21:32       ` Luben Tuikov
  0 siblings, 0 replies; 12+ messages in thread
From: Luben Tuikov @ 2020-08-14 21:32 UTC (permalink / raw
  To: Alex Deucher; +Cc: amd-gfx list

On 2020-08-14 4:58 p.m., Alex Deucher wrote:
> On Fri, Aug 14, 2020 at 3:33 PM Alex Deucher <alexdeucher@gmail.com> wrote:
>>
>> + dri-devel
>>
>> On Fri, Aug 14, 2020 at 3:14 PM Luben Tuikov <luben.tuikov@amd.com> wrote:
>>>
>>> Remove DRM_SCHED_PRIORITY_LOW, as it was used
>>> in only one place.
>>>
>>> Rename and separate by a line
>>> DRM_SCHED_PRIORITY_MAX to DRM_SCHED_PRIORITY_COUNT
>>> as it represents a (total) count of said
>>> priorities and it is used as such in loops
>>> throughout the code.  (With 0-based indexing,
>>> the value after the last one is the count.)
>>>
>>> Remove redundant word HIGH in priority names,
>>> and rename *KERNEL* to *HIGH*, as it really
>>> means that, high.
>>>
>>> Signed-off-by: Luben Tuikov <luben.tuikov@amd.com>
>>> ---
>>>  drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c   |  6 +++---
>>>  drivers/gpu/drm/amd/amdgpu/amdgpu_job.c   |  2 +-
>>>  drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c  |  2 +-
>>>  drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h  |  2 +-
>>>  drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c |  6 +++---
>>>  drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c   |  2 +-
>>>  drivers/gpu/drm/scheduler/sched_main.c    |  8 ++++----
>>>  include/drm/gpu_scheduler.h               | 15 +++++++++------
>>>  8 files changed, 23 insertions(+), 20 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
>>> index d85d13f7a043..fd97ac34277b 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
>>> @@ -46,7 +46,7 @@ const unsigned int amdgpu_ctx_num_entities[AMDGPU_HW_IP_NUM] = {
>>>  static int amdgpu_ctx_priority_permit(struct drm_file *filp,
>>>                                       enum drm_sched_priority priority)
>>>  {
>>> -       if (priority < 0 || priority >= DRM_SCHED_PRIORITY_MAX)
>>> +       if (priority < 0 || priority >= DRM_SCHED_PRIORITY_COUNT)
>>>                 return -EINVAL;
>>>
>>>         /* NORMAL and below are accessible by everyone */
>>> @@ -65,8 +65,8 @@ static int amdgpu_ctx_priority_permit(struct drm_file *filp,
>>>  static enum gfx_pipe_priority amdgpu_ctx_sched_prio_to_compute_prio(enum drm_sched_priority prio)
>>>  {
>>>         switch (prio) {
>>> -       case DRM_SCHED_PRIORITY_HIGH_HW:
>>> -       case DRM_SCHED_PRIORITY_KERNEL:
>>> +       case DRM_SCHED_PRIORITY_HW:
>>> +       case DRM_SCHED_PRIORITY_HIGH:
>>>                 return AMDGPU_GFX_PIPE_PRIO_HIGH;
>>>         default:
>>>                 return AMDGPU_GFX_PIPE_PRIO_NORMAL;
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
>>> index 75d37dfb51aa..bb9e5481ff3c 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
>>> @@ -251,7 +251,7 @@ void amdgpu_job_stop_all_jobs_on_sched(struct drm_gpu_scheduler *sched)
>>>         int i;
>>>
>>>         /* Signal all jobs not yet scheduled */
>>> -       for (i = DRM_SCHED_PRIORITY_MAX - 1; i >= DRM_SCHED_PRIORITY_MIN; i--) {
>>> +       for (i = DRM_SCHED_PRIORITY_COUNT - 1; i >= DRM_SCHED_PRIORITY_MIN; i--) {
>>>                 struct drm_sched_rq *rq = &sched->sched_rq[i];
>>>
>>>                 if (!rq)
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
>>> index 13ea8ebc421c..6d4fc79bf84a 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
>>> @@ -267,7 +267,7 @@ int amdgpu_ring_init(struct amdgpu_device *adev, struct amdgpu_ring *ring,
>>>                         &ring->sched;
>>>         }
>>>
>>> -       for (i = 0; i < DRM_SCHED_PRIORITY_MAX; ++i)
>>> +       for (i = DRM_SCHED_PRIORITY_MIN; i < DRM_SCHED_PRIORITY_COUNT; ++i)
>>>                 atomic_set(&ring->num_jobs[i], 0);
>>>
>>>         return 0;
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
>>> index da871d84b742..7112137689db 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
>>> @@ -243,7 +243,7 @@ struct amdgpu_ring {
>>>         bool                    has_compute_vm_bug;
>>>         bool                    no_scheduler;
>>>
>>> -       atomic_t                num_jobs[DRM_SCHED_PRIORITY_MAX];
>>> +       atomic_t                num_jobs[DRM_SCHED_PRIORITY_COUNT];
>>>         struct mutex            priority_mutex;
>>>         /* protected by priority_mutex */
>>>         int                     priority;
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c
>>> index c799691dfa84..e05bc22a0a49 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c
>>> @@ -36,14 +36,14 @@ enum drm_sched_priority amdgpu_to_sched_priority(int amdgpu_priority)
>>>  {
>>>         switch (amdgpu_priority) {
>>>         case AMDGPU_CTX_PRIORITY_VERY_HIGH:
>>> -               return DRM_SCHED_PRIORITY_HIGH_HW;
>>> +               return DRM_SCHED_PRIORITY_HW;
>>>         case AMDGPU_CTX_PRIORITY_HIGH:
>>> -               return DRM_SCHED_PRIORITY_HIGH_SW;
>>> +               return DRM_SCHED_PRIORITY_SW;
>>>         case AMDGPU_CTX_PRIORITY_NORMAL:
>>>                 return DRM_SCHED_PRIORITY_NORMAL;
>>>         case AMDGPU_CTX_PRIORITY_LOW:
>>>         case AMDGPU_CTX_PRIORITY_VERY_LOW:
>>> -               return DRM_SCHED_PRIORITY_LOW;
>>> +               return DRM_SCHED_PRIORITY_MIN;
>>>         case AMDGPU_CTX_PRIORITY_UNSET:
>>>                 return DRM_SCHED_PRIORITY_UNSET;
>>>         default:
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
>>> index 20fa0497aaa4..bc4bdb90d8f7 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
>>> @@ -2114,7 +2114,7 @@ void amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
>>>                 ring = adev->mman.buffer_funcs_ring;
>>>                 sched = &ring->sched;
>>>                 r = drm_sched_entity_init(&adev->mman.entity,
>>> -                                         DRM_SCHED_PRIORITY_KERNEL, &sched,
>>> +                                         DRM_SCHED_PRIORITY_HIGH, &sched,
>>>                                           1, NULL);
>>>                 if (r) {
>>>                         DRM_ERROR("Failed setting up TTM BO move entity (%d)\n",
>>> diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
>>> index 2f319102ae9f..c2947e73d1b6 100644
>>> --- a/drivers/gpu/drm/scheduler/sched_main.c
>>> +++ b/drivers/gpu/drm/scheduler/sched_main.c
>>> @@ -335,9 +335,9 @@ void drm_sched_increase_karma(struct drm_sched_job *bad)
>>>          * because sometimes GPU hang would cause kernel jobs (like VM updating jobs)
>>>          * corrupt but keep in mind that kernel jobs always considered good.
>>>          */
>>> -       if (bad->s_priority != DRM_SCHED_PRIORITY_KERNEL) {
>>> +       if (bad->s_priority != DRM_SCHED_PRIORITY_HIGH) {
>>>                 atomic_inc(&bad->karma);
>>> -               for (i = DRM_SCHED_PRIORITY_MIN; i < DRM_SCHED_PRIORITY_KERNEL;
>>> +               for (i = DRM_SCHED_PRIORITY_MIN; i < DRM_SCHED_PRIORITY_HIGH;
>>>                      i++) {
>>>                         struct drm_sched_rq *rq = &sched->sched_rq[i];
>>>
>>> @@ -623,7 +623,7 @@ drm_sched_select_entity(struct drm_gpu_scheduler *sched)
>>>                 return NULL;
>>>
>>>         /* Kernel run queue has higher priority than normal run queue*/
>>> -       for (i = DRM_SCHED_PRIORITY_MAX - 1; i >= DRM_SCHED_PRIORITY_MIN; i--) {
>>> +       for (i = DRM_SCHED_PRIORITY_COUNT - 1; i >= DRM_SCHED_PRIORITY_MIN; i--) {
>>>                 entity = drm_sched_rq_select_entity(&sched->sched_rq[i]);
>>>                 if (entity)
>>>                         break;
>>> @@ -851,7 +851,7 @@ int drm_sched_init(struct drm_gpu_scheduler *sched,
>>>         sched->name = name;
>>>         sched->timeout = timeout;
>>>         sched->hang_limit = hang_limit;
>>> -       for (i = DRM_SCHED_PRIORITY_MIN; i < DRM_SCHED_PRIORITY_MAX; i++)
>>> +       for (i = DRM_SCHED_PRIORITY_MIN; i < DRM_SCHED_PRIORITY_COUNT; i++)
>>>                 drm_sched_rq_init(sched, &sched->sched_rq[i]);
>>>
>>>         init_waitqueue_head(&sched->wake_up_worker);
>>> diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
>>> index 26b04ff62676..8ae9b8f73bf9 100644
>>> --- a/include/drm/gpu_scheduler.h
>>> +++ b/include/drm/gpu_scheduler.h
>>> @@ -33,14 +33,17 @@
>>>  struct drm_gpu_scheduler;
>>>  struct drm_sched_rq;
>>>
>>> +/* These are often used as an (initial) index
>>> + * to an array, and as such should start at 0.
>>> + */
>>>  enum drm_sched_priority {
>>>         DRM_SCHED_PRIORITY_MIN,
>>> -       DRM_SCHED_PRIORITY_LOW = DRM_SCHED_PRIORITY_MIN,
>>>         DRM_SCHED_PRIORITY_NORMAL,
>>> -       DRM_SCHED_PRIORITY_HIGH_SW,
>>> -       DRM_SCHED_PRIORITY_HIGH_HW,
>>
>> We originally added HIGH_SW and HIGH_HW to differentiate between two
>> different high priority modes; e.g., HIGH_SW meant that the gpu
>> scheduler would schedule with high priority, HIGH_HW meant that the
>> hardware would schedule with high priority. You can set different
>> queue priorities in the hardware and work from the high priority queue
>> will get scheduled on the hw sooner than lower priority queues.  Not
>> all engines support HW queue priorities however.
> 
> Thinking about this more, I think we can probably just get rid of the
> SW and HW settings and just have a HIGH setting.  We can use HIGH
> whether or not we have additional hw priority features.  We
> want to keep KERNEL however since that is the highest priority and it
> makes it obvious what it is for.  Submissions from the kernel need to
> be the highest priority (e.g., memory management or page table
> updates).
> 

"MIN, NORMAL, HIGH, KERNEL", sure we can do that. That's better
and gets rid of the ambiguous "SW, HW". I'll do that.

Regards,
Luben


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH 2/2] drm/scheduler: Remove priority macro INVALID
  2020-08-14 19:28     ` Alex Deucher
@ 2020-08-14 21:37       ` Luben Tuikov
  -1 siblings, 0 replies; 12+ messages in thread
From: Luben Tuikov @ 2020-08-14 21:37 UTC (permalink / raw
  To: Alex Deucher, Maling list - DRI developers; +Cc: amd-gfx list

On 2020-08-14 3:28 p.m., Alex Deucher wrote:
> + dri-devel
> 
> 
> On Fri, Aug 14, 2020 at 3:14 PM Luben Tuikov <luben.tuikov@amd.com> wrote:
>>
>> Remove DRM_SCHED_PRIORITY_INVALID. We no longer
>> carry around an invalid priority and cut it off
>> at the source.
>>
>> Backwards compatibility behaviour of AMDGPU CTX
>> IOCTL passing in garbage for context priority
>> from user space and then mapping that to
>> DRM_SCHED_PRIORITY_NORMAL is preserved.
>>
>> Signed-off-by: Luben Tuikov <luben.tuikov@amd.com>
>> ---
>>  drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c   | 21 ++++----
>>  drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c | 62 +++++++++++++++--------
>>  drivers/gpu/drm/amd/amdgpu/amdgpu_sched.h |  3 +-
>>  include/drm/gpu_scheduler.h               |  1 -
>>  4 files changed, 53 insertions(+), 34 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
>> index fd97ac34277b..10d3bfa416c0 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
>> @@ -384,42 +384,41 @@ static int amdgpu_ctx_query2(struct amdgpu_device *adev,
>>  int amdgpu_ctx_ioctl(struct drm_device *dev, void *data,
>>                      struct drm_file *filp)
>>  {
>> -       int r;
>> +       int res;
>>         uint32_t id;
>> -       enum drm_sched_priority priority;
>> +       enum drm_sched_priority prio;
> 
> The variable renaming is not directly related to the functional change
> in the patch and should be split out as a separate patch if you think
> it should be applied.  I personally prefer the original naming.  The
> driver uses r or ret for return value checking pretty consistently.  I
> also prefer to spell things out unless the names are really long.
> Makes it more obvious what they are for.  Same comment on the
> functions below as well.

Sure, I can revert this. Personally, I don't like single letter
variables as they are very inexpressive and hard to search for.
I thought "prio" to be easier to type than "priority", but I can
revert this.

Regards,
Luben

> 
> Alex
> 
>>
>>         union drm_amdgpu_ctx *args = data;
>>         struct amdgpu_device *adev = dev->dev_private;
>>         struct amdgpu_fpriv *fpriv = filp->driver_priv;
>>
>> -       r = 0;
>>         id = args->in.ctx_id;
>> -       priority = amdgpu_to_sched_priority(args->in.priority);
>> +       res = amdgpu_to_sched_priority(args->in.priority, &prio);
>>
>>         /* For backwards compatibility reasons, we need to accept
>>          * ioctls with garbage in the priority field */
>> -       if (priority == DRM_SCHED_PRIORITY_INVALID)
>> -               priority = DRM_SCHED_PRIORITY_NORMAL;
>> +       if (res == -EINVAL)
>> +               prio = DRM_SCHED_PRIORITY_NORMAL;
>>
>>         switch (args->in.op) {
>>         case AMDGPU_CTX_OP_ALLOC_CTX:
>> -               r = amdgpu_ctx_alloc(adev, fpriv, filp, priority, &id);
>> +               res = amdgpu_ctx_alloc(adev, fpriv, filp, prio, &id);
>>                 args->out.alloc.ctx_id = id;
>>                 break;
>>         case AMDGPU_CTX_OP_FREE_CTX:
>> -               r = amdgpu_ctx_free(fpriv, id);
>> +               res = amdgpu_ctx_free(fpriv, id);
>>                 break;
>>         case AMDGPU_CTX_OP_QUERY_STATE:
>> -               r = amdgpu_ctx_query(adev, fpriv, id, &args->out);
>> +               res = amdgpu_ctx_query(adev, fpriv, id, &args->out);
>>                 break;
>>         case AMDGPU_CTX_OP_QUERY_STATE2:
>> -               r = amdgpu_ctx_query2(adev, fpriv, id, &args->out);
>> +               res = amdgpu_ctx_query2(adev, fpriv, id, &args->out);
>>                 break;
>>         default:
>>                 return -EINVAL;
>>         }
>>
>> -       return r;
>> +       return res;
>>  }
>>
>>  struct amdgpu_ctx *amdgpu_ctx_get(struct amdgpu_fpriv *fpriv, uint32_t id)
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c
>> index e05bc22a0a49..2398eb646043 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c
>> @@ -32,24 +32,32 @@
>>
>>  #include "amdgpu_vm.h"
>>
>> -enum drm_sched_priority amdgpu_to_sched_priority(int amdgpu_priority)
>> +int amdgpu_to_sched_priority(int amdgpu_priority,
>> +                            enum drm_sched_priority *prio)
>>  {
>>         switch (amdgpu_priority) {
>>         case AMDGPU_CTX_PRIORITY_VERY_HIGH:
>> -               return DRM_SCHED_PRIORITY_HW;
>> +               *prio = DRM_SCHED_PRIORITY_HW;
>> +               break;
>>         case AMDGPU_CTX_PRIORITY_HIGH:
>> -               return DRM_SCHED_PRIORITY_SW;
>> +               *prio = DRM_SCHED_PRIORITY_SW;
>> +               break;
>>         case AMDGPU_CTX_PRIORITY_NORMAL:
>> -               return DRM_SCHED_PRIORITY_NORMAL;
>> +               *prio = DRM_SCHED_PRIORITY_NORMAL;
>> +               break;
>>         case AMDGPU_CTX_PRIORITY_LOW:
>>         case AMDGPU_CTX_PRIORITY_VERY_LOW:
>> -               return DRM_SCHED_PRIORITY_MIN;
>> +               *prio = DRM_SCHED_PRIORITY_MIN;
>> +               break;
>>         case AMDGPU_CTX_PRIORITY_UNSET:
>> -               return DRM_SCHED_PRIORITY_UNSET;
>> +               *prio = DRM_SCHED_PRIORITY_UNSET;
>> +               break;
>>         default:
>>                 WARN(1, "Invalid context priority %d\n", amdgpu_priority);
>> -               return DRM_SCHED_PRIORITY_INVALID;
>> +               return -EINVAL;
>>         }
>> +
>> +       return 0;
>>  }
>>
>>  static int amdgpu_sched_process_priority_override(struct amdgpu_device *adev,
>> @@ -116,30 +124,42 @@ int amdgpu_sched_ioctl(struct drm_device *dev, void *data,
>>  {
>>         union drm_amdgpu_sched *args = data;
>>         struct amdgpu_device *adev = dev->dev_private;
>> -       enum drm_sched_priority priority;
>> -       int r;
>> +       enum drm_sched_priority prio;
>> +       int res;
>>
>> -       priority = amdgpu_to_sched_priority(args->in.priority);
>> -       if (priority == DRM_SCHED_PRIORITY_INVALID)
>> +       /* First check the op, then the op's argument.
>> +        */
>> +       switch (args->in.op) {
>> +       case AMDGPU_SCHED_OP_PROCESS_PRIORITY_OVERRIDE:
>> +       case AMDGPU_SCHED_OP_CONTEXT_PRIORITY_OVERRIDE:
>> +               break;
>> +       default:
>> +               DRM_ERROR("Invalid sched op specified: %d\n", args->in.op);
>>                 return -EINVAL;
>> +       }
>> +
>> +       res = amdgpu_to_sched_priority(args->in.priority, &prio);
>> +       if (res)
>> +               return res;
>>
>>         switch (args->in.op) {
>>         case AMDGPU_SCHED_OP_PROCESS_PRIORITY_OVERRIDE:
>> -               r = amdgpu_sched_process_priority_override(adev,
>> -                                                          args->in.fd,
>> -                                                          priority);
>> +               res = amdgpu_sched_process_priority_override(adev,
>> +                                                            args->in.fd,
>> +                                                            prio);
>>                 break;
>>         case AMDGPU_SCHED_OP_CONTEXT_PRIORITY_OVERRIDE:
>> -               r = amdgpu_sched_context_priority_override(adev,
>> -                                                          args->in.fd,
>> -                                                          args->in.ctx_id,
>> -                                                          priority);
>> +               res = amdgpu_sched_context_priority_override(adev,
>> +                                                            args->in.fd,
>> +                                                            args->in.ctx_id,
>> +                                                            prio);
>>                 break;
>>         default:
>> -               DRM_ERROR("Invalid sched op specified: %d\n", args->in.op);
>> -               r = -EINVAL;
>> +               /* Impossible.
>> +                */
>> +               res = -EINVAL;
>>                 break;
>>         }
>>
>> -       return r;
>> +       return res;
>>  }
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.h
>> index 12299fd95691..67e5b2472f6a 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.h
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.h
>> @@ -30,7 +30,8 @@ enum drm_sched_priority;
>>  struct drm_device;
>>  struct drm_file;
>>
>> -enum drm_sched_priority amdgpu_to_sched_priority(int amdgpu_priority);
>> +int amdgpu_to_sched_priority(int amdgpu_priority,
>> +                            enum drm_sched_priority *prio);
>>  int amdgpu_sched_ioctl(struct drm_device *dev, void *data,
>>                        struct drm_file *filp);
>>
>> diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
>> index 8ae9b8f73bf9..d6ee3b2e8407 100644
>> --- a/include/drm/gpu_scheduler.h
>> +++ b/include/drm/gpu_scheduler.h
>> @@ -44,7 +44,6 @@ enum drm_sched_priority {
>>         DRM_SCHED_PRIORITY_HIGH,
>>
>>         DRM_SCHED_PRIORITY_COUNT,
>> -       DRM_SCHED_PRIORITY_INVALID = -1,
>>         DRM_SCHED_PRIORITY_UNSET = -2
>>  };
>>
>> --
>> 2.28.0.215.g878e727637
>>


^ permalink raw reply	[flat|nested] 12+ messages in thread

end of thread, other threads:[~2020-08-14 21:38 UTC | newest]

Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-08-14 19:13 [PATCH 0/2] Fixes to drm_sched_priority Luben Tuikov
2020-08-14 19:14 ` [PATCH 1/2] drm/scheduler: Scheduler priority fixes Luben Tuikov
2020-08-14 19:33   ` Alex Deucher
2020-08-14 20:58     ` Alex Deucher
2020-08-14 21:32       ` Luben Tuikov
2020-08-14 19:14 ` [PATCH 2/2] drm/scheduler: Remove priority macro INVALID Luben Tuikov
2020-08-14 19:28   ` Alex Deucher
2020-08-14 19:28     ` Alex Deucher
2020-08-14 21:37     ` Luben Tuikov
2020-08-14 21:37       ` Luben Tuikov
2020-08-14 19:24 ` [PATCH 0/2] Fixes to drm_sched_priority Alex Deucher
2020-08-14 19:24   ` Alex Deucher
