Linux-ARM-MSM Archive mirror
* [PATCH v4 0/7] soc: qcom: add in-kernel pd-mapper implementation
@ 2024-03-11 15:34 Dmitry Baryshkov
  2024-03-11 15:34 ` [PATCH v4 1/7] soc: qcom: pdr: protect locator_addr with the main mutex Dmitry Baryshkov
                   ` (7 more replies)
  0 siblings, 8 replies; 26+ messages in thread
From: Dmitry Baryshkov @ 2024-03-11 15:34 UTC (permalink / raw
  To: Bjorn Andersson, Konrad Dybcio, Mathieu Poirier
  Cc: linux-arm-msm, linux-remoteproc, Johan Hovold

The protection domain mapper is a QMI service providing a mapping between
'protection domains' and the services supported / allowed in those domains.
For example, such a mapping is required for loading the WiFi firmware or
for properly starting up the UCSI / altmode / battery manager support.

The existing userspace implementation has several issues. It doesn't play
well with CONFIG_EXTRA_FIRMWARE, it doesn't reread the JSON files if the
firmware location is changed (or if the firmware was not available at
the time pd-mapper was started but the corresponding directory is
mounted later), etc.

However, this configuration is largely static and common between
different platforms. Provide an in-kernel service implementing static
per-platform data.

NOTE: the domain mapping data might be inaccurate (especially for the
SM6xxx and SC7xxx platforms), which is reflected by several TODO and
FIXME comments in the code.

--
2.39.2

---
Changes in v4:
- Fixed missing chunk, reenabled kfree in qmi_del_server (Konrad)
- Added configuration for sm6350 (Thanks to Luca)
- Removed RFC tag (Konrad)
- Link to v3: https://lore.kernel.org/r/20240304-qcom-pd-mapper-v3-0-6858fa1ac1c8@linaro.org

Changes in RFC v3:
- Send start / stop notifications when PD-mapper domain list is changed
- Reworked the way PD-mapper treats protection domains, register all of
  them in a single batch
- Added SC7180 domains configuration based on TCL Book 14 GO
- Link to v2: https://lore.kernel.org/r/20240301-qcom-pd-mapper-v2-0-5d12a081d9d1@linaro.org

Changes in RFC v2:
- Swapped num_domains / domains (Konrad)
- Fixed an issue with battery not working on sc8280xp
- Added missing configuration for QCS404

---
Dmitry Baryshkov (7):
      soc: qcom: pdr: protect locator_addr with the main mutex
      soc: qcom: qmi: add a way to remove running service
      soc: qcom: add pd-mapper implementation
      remoteproc: qcom: pas: correct data indentation
      remoteproc: qcom: adsp: add configuration for in-kernel pdm
      remoteproc: qcom: mss: add configuration for in-kernel pdm
      remoteproc: qcom: pas: add configuration for in-kernel pdm

 drivers/remoteproc/Kconfig          |   3 +
 drivers/remoteproc/qcom_q6v5_adsp.c |  82 +++++-
 drivers/remoteproc/qcom_q6v5_mss.c  |  80 +++++-
 drivers/remoteproc/qcom_q6v5_pas.c  | 502 ++++++++++++++++++++++++++++++------
 drivers/soc/qcom/Kconfig            |  10 +
 drivers/soc/qcom/Makefile           |   2 +
 drivers/soc/qcom/pdr_interface.c    |   6 +-
 drivers/soc/qcom/qcom_pdm.c         | 346 +++++++++++++++++++++++++
 drivers/soc/qcom/qcom_pdm_msg.c     | 188 ++++++++++++++
 drivers/soc/qcom/qcom_pdm_msg.h     |  66 +++++
 drivers/soc/qcom/qmi_interface.c    |  67 +++++
 include/linux/soc/qcom/pd_mapper.h  |  39 +++
 include/linux/soc/qcom/qmi.h        |   2 +
 13 files changed, 1302 insertions(+), 91 deletions(-)
---
base-commit: 1843e16d2df9d98427ef8045589571749d627cf7
change-id: 20240301-qcom-pd-mapper-e12d622d4ad0

Best regards,
-- 
Dmitry Baryshkov <dmitry.baryshkov@linaro.org>


^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCH v4 1/7] soc: qcom: pdr: protect locator_addr with the main mutex
  2024-03-11 15:34 [PATCH v4 0/7] soc: qcom: add in-kernel pd-mapper implementation Dmitry Baryshkov
@ 2024-03-11 15:34 ` Dmitry Baryshkov
  2024-03-13 23:35   ` Chris Lew
  2024-03-11 15:34 ` [PATCH v4 2/7] soc: qcom: qmi: add a way to remove running service Dmitry Baryshkov
                   ` (6 subsequent siblings)
  7 siblings, 1 reply; 26+ messages in thread
From: Dmitry Baryshkov @ 2024-03-11 15:34 UTC (permalink / raw
  To: Bjorn Andersson, Konrad Dybcio, Mathieu Poirier
  Cc: linux-arm-msm, linux-remoteproc, Johan Hovold

If the service locator server is restarted fast enough, the PDR code can
rewrite the locator_addr fields concurrently. Protect them by placing all
modification of those fields under the main pdr->lock.

Fixes: fbe639b44a82 ("soc: qcom: Introduce Protection Domain Restart helpers")
Signed-off-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
---
 drivers/soc/qcom/pdr_interface.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/soc/qcom/pdr_interface.c b/drivers/soc/qcom/pdr_interface.c
index a1b6a4081dea..7b5fdf9fd3d7 100644
--- a/drivers/soc/qcom/pdr_interface.c
+++ b/drivers/soc/qcom/pdr_interface.c
@@ -76,12 +76,12 @@ static int pdr_locator_new_server(struct qmi_handle *qmi,
 					      locator_hdl);
 	struct pdr_service *pds;
 
+	mutex_lock(&pdr->lock);
 	/* Create a local client port for QMI communication */
 	pdr->locator_addr.sq_family = AF_QIPCRTR;
 	pdr->locator_addr.sq_node = svc->node;
 	pdr->locator_addr.sq_port = svc->port;
 
-	mutex_lock(&pdr->lock);
 	pdr->locator_init_complete = true;
 	mutex_unlock(&pdr->lock);
 
@@ -104,10 +104,10 @@ static void pdr_locator_del_server(struct qmi_handle *qmi,
 
 	mutex_lock(&pdr->lock);
 	pdr->locator_init_complete = false;
-	mutex_unlock(&pdr->lock);
 
 	pdr->locator_addr.sq_node = 0;
 	pdr->locator_addr.sq_port = 0;
+	mutex_unlock(&pdr->lock);
 }
 
 static const struct qmi_ops pdr_locator_ops = {
@@ -133,11 +133,13 @@ static int pdr_register_listener(struct pdr_handle *pdr,
 	req.enable = enable;
 	strscpy(req.service_path, pds->service_path, sizeof(req.service_path));
 
+	mutex_lock(&pdr->lock);
 	ret = qmi_send_request(&pdr->notifier_hdl, &pds->addr,
 			       &txn, SERVREG_REGISTER_LISTENER_REQ,
 			       SERVREG_REGISTER_LISTENER_REQ_LEN,
 			       servreg_register_listener_req_ei,
 			       &req);
+	mutex_unlock(&pdr->lock);
 	if (ret < 0) {
 		qmi_txn_cancel(&txn);
 		return ret;

-- 
2.39.2



* [PATCH v4 2/7] soc: qcom: qmi: add a way to remove running service
  2024-03-11 15:34 [PATCH v4 0/7] soc: qcom: add in-kernel pd-mapper implementation Dmitry Baryshkov
  2024-03-11 15:34 ` [PATCH v4 1/7] soc: qcom: pdr: protect locator_addr with the main mutex Dmitry Baryshkov
@ 2024-03-11 15:34 ` Dmitry Baryshkov
  2024-03-12  0:53   ` Konrad Dybcio
  2024-03-14  0:03   ` Chris Lew
  2024-03-11 15:34 ` [PATCH v4 3/7] soc: qcom: add pd-mapper implementation Dmitry Baryshkov
                   ` (5 subsequent siblings)
  7 siblings, 2 replies; 26+ messages in thread
From: Dmitry Baryshkov @ 2024-03-11 15:34 UTC (permalink / raw
  To: Bjorn Andersson, Konrad Dybcio, Mathieu Poirier
  Cc: linux-arm-msm, linux-remoteproc, Johan Hovold

Add qmi_del_server(), a counterpart to qmi_add_server(), providing a way
to remove a running server from the QMI socket. This is necessary e.g. for
pd-mapper, which needs to re-add a server each time the DSP is started or
stopped.

Signed-off-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
---
 drivers/soc/qcom/qmi_interface.c | 67 ++++++++++++++++++++++++++++++++++++++++
 include/linux/soc/qcom/qmi.h     |  2 ++
 2 files changed, 69 insertions(+)

diff --git a/drivers/soc/qcom/qmi_interface.c b/drivers/soc/qcom/qmi_interface.c
index bb98b06e87f8..18ff2015c682 100644
--- a/drivers/soc/qcom/qmi_interface.c
+++ b/drivers/soc/qcom/qmi_interface.c
@@ -289,6 +289,73 @@ int qmi_add_server(struct qmi_handle *qmi, unsigned int service,
 }
 EXPORT_SYMBOL_GPL(qmi_add_server);
 
+static void qmi_send_del_server(struct qmi_handle *qmi, struct qmi_service *svc)
+{
+	struct qrtr_ctrl_pkt pkt;
+	struct sockaddr_qrtr sq;
+	struct msghdr msg = { };
+	struct kvec iv = { &pkt, sizeof(pkt) };
+	int ret;
+
+	memset(&pkt, 0, sizeof(pkt));
+	pkt.cmd = cpu_to_le32(QRTR_TYPE_DEL_SERVER);
+	pkt.server.service = cpu_to_le32(svc->service);
+	pkt.server.instance = cpu_to_le32(svc->version | svc->instance << 8);
+	pkt.server.node = cpu_to_le32(qmi->sq.sq_node);
+	pkt.server.port = cpu_to_le32(qmi->sq.sq_port);
+
+	sq.sq_family = qmi->sq.sq_family;
+	sq.sq_node = qmi->sq.sq_node;
+	sq.sq_port = QRTR_PORT_CTRL;
+
+	msg.msg_name = &sq;
+	msg.msg_namelen = sizeof(sq);
+
+	mutex_lock(&qmi->sock_lock);
+	if (qmi->sock) {
+		ret = kernel_sendmsg(qmi->sock, &msg, &iv, 1, sizeof(pkt));
+		if (ret < 0)
+			pr_err("send service deregistration failed: %d\n", ret);
+	}
+	mutex_unlock(&qmi->sock_lock);
+}
+
+/**
+ * qmi_del_server() - remove a service from the name service
+ * @qmi:	qmi handle
+ * @service:	type of the service
+ * @version:	version of the service
+ * @instance:	instance of the service
+ *
+ * Remove the registration of the service with the name service. This
+ * notifies clients that they should no longer send messages to the service
+ * associated with @qmi.
+ *
+ * Return: 0 on success, negative errno on failure.
+ */
+int qmi_del_server(struct qmi_handle *qmi, unsigned int service,
+		   unsigned int version, unsigned int instance)
+{
+	struct qmi_service *svc;
+	struct qmi_service *tmp;
+
+	list_for_each_entry_safe(svc, tmp, &qmi->services, list_node) {
+		if (svc->service != service ||
+		    svc->version != version ||
+		    svc->instance != instance)
+			continue;
+
+		qmi_send_del_server(qmi, svc);
+		list_del(&svc->list_node);
+		kfree(svc);
+
+		return 0;
+	}
+
+	return -EINVAL;
+}
+EXPORT_SYMBOL_GPL(qmi_del_server);
+
 /**
  * qmi_txn_init() - allocate transaction id within the given QMI handle
  * @qmi:	QMI handle
diff --git a/include/linux/soc/qcom/qmi.h b/include/linux/soc/qcom/qmi.h
index 469e02d2aa0d..5039c30e4bdc 100644
--- a/include/linux/soc/qcom/qmi.h
+++ b/include/linux/soc/qcom/qmi.h
@@ -241,6 +241,8 @@ int qmi_add_lookup(struct qmi_handle *qmi, unsigned int service,
 		   unsigned int version, unsigned int instance);
 int qmi_add_server(struct qmi_handle *qmi, unsigned int service,
 		   unsigned int version, unsigned int instance);
+int qmi_del_server(struct qmi_handle *qmi, unsigned int service,
+		   unsigned int version, unsigned int instance);
 
 int qmi_handle_init(struct qmi_handle *qmi, size_t max_msg_len,
 		    const struct qmi_ops *ops,

-- 
2.39.2



* [PATCH v4 3/7] soc: qcom: add pd-mapper implementation
  2024-03-11 15:34 [PATCH v4 0/7] soc: qcom: add in-kernel pd-mapper implementation Dmitry Baryshkov
  2024-03-11 15:34 ` [PATCH v4 1/7] soc: qcom: pdr: protect locator_addr with the main mutex Dmitry Baryshkov
  2024-03-11 15:34 ` [PATCH v4 2/7] soc: qcom: qmi: add a way to remove running service Dmitry Baryshkov
@ 2024-03-11 15:34 ` Dmitry Baryshkov
  2024-03-12  0:55   ` Konrad Dybcio
                     ` (3 more replies)
  2024-03-11 15:34 ` [PATCH v4 4/7] remoteproc: qcom: pas: correct data indentation Dmitry Baryshkov
                   ` (4 subsequent siblings)
  7 siblings, 4 replies; 26+ messages in thread
From: Dmitry Baryshkov @ 2024-03-11 15:34 UTC (permalink / raw
  To: Bjorn Andersson, Konrad Dybcio, Mathieu Poirier
  Cc: linux-arm-msm, linux-remoteproc, Johan Hovold

The existing userspace protection domain mapper implementation has
several issues. It doesn't play well with CONFIG_EXTRA_FIRMWARE, it
doesn't reread the JSON files if the firmware location is changed (or if
the firmware was not available at the time pd-mapper was started but the
corresponding directory is mounted later), etc.

Provide an in-kernel service implementing the protection domain mapping
required to work with several services, which are provided by the DSP
firmware.

Signed-off-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
---
 drivers/soc/qcom/Kconfig           |  10 ++
 drivers/soc/qcom/Makefile          |   2 +
 drivers/soc/qcom/qcom_pdm.c        | 346 +++++++++++++++++++++++++++++++++++++
 drivers/soc/qcom/qcom_pdm_msg.c    | 188 ++++++++++++++++++++
 drivers/soc/qcom/qcom_pdm_msg.h    |  66 +++++++
 include/linux/soc/qcom/pd_mapper.h |  39 +++++
 6 files changed, 651 insertions(+)

diff --git a/drivers/soc/qcom/Kconfig b/drivers/soc/qcom/Kconfig
index 5af33b0e3470..f236ce376c1b 100644
--- a/drivers/soc/qcom/Kconfig
+++ b/drivers/soc/qcom/Kconfig
@@ -72,6 +72,16 @@ config QCOM_OCMEM
 	  requirements. This is typically used by the GPU, camera/video, and
 	  audio components on some Snapdragon SoCs.
 
+config QCOM_PD_MAPPER
+	tristate "Qualcomm Protection Domain Mapper"
+	select QCOM_QMI_HELPERS
+	depends on NET && QRTR
+	help
+	  The Protection Domain Mapper maps registered services to the domains
+	  and instances handled by the remote DSPs. This is a kernel-space
+	  implementation of the service. It is a simpler alternative to the
+	  userspace daemon.
+
 config QCOM_PDR_HELPERS
 	tristate
 	select QCOM_QMI_HELPERS
diff --git a/drivers/soc/qcom/Makefile b/drivers/soc/qcom/Makefile
index ca0bece0dfff..65e33b5a2231 100644
--- a/drivers/soc/qcom/Makefile
+++ b/drivers/soc/qcom/Makefile
@@ -7,6 +7,8 @@ obj-$(CONFIG_QCOM_COMMAND_DB) += cmd-db.o
 obj-$(CONFIG_QCOM_GSBI)	+=	qcom_gsbi.o
 obj-$(CONFIG_QCOM_MDT_LOADER)	+= mdt_loader.o
 obj-$(CONFIG_QCOM_OCMEM)	+= ocmem.o
+obj-$(CONFIG_QCOM_PD_MAPPER)	+= qcom_pd_mapper.o
+qcom_pd_mapper-y += qcom_pdm.o qcom_pdm_msg.o
 obj-$(CONFIG_QCOM_PDR_HELPERS)	+= pdr_interface.o
 obj-$(CONFIG_QCOM_PMIC_GLINK)	+= pmic_glink.o
 obj-$(CONFIG_QCOM_PMIC_GLINK)	+= pmic_glink_altmode.o
diff --git a/drivers/soc/qcom/qcom_pdm.c b/drivers/soc/qcom/qcom_pdm.c
new file mode 100644
index 000000000000..9d14b18b8590
--- /dev/null
+++ b/drivers/soc/qcom/qcom_pdm.c
@@ -0,0 +1,346 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Qualcomm Protection Domain mapper
+ *
+ * Copyright (c) 2023 Linaro Ltd.
+ */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/soc/qcom/pd_mapper.h>
+#include <linux/soc/qcom/qmi.h>
+
+#include "qcom_pdm_msg.h"
+
+#define TMS_SERVREG_SERVICE "tms/servreg"
+
+struct qcom_pdm_domain {
+	struct list_head list;
+	const char *name;
+	u32 instance_id;
+};
+
+struct qcom_pdm_service {
+	struct list_head list;
+	struct list_head domains;
+	const char *name;
+};
+
+static LIST_HEAD(qcom_pdm_services);
+static DEFINE_MUTEX(qcom_pdm_mutex);
+static bool qcom_pdm_server_added;
+static struct qmi_handle qcom_pdm_handle;
+
+static struct qcom_pdm_service *qcom_pdm_find_locked(const char *name)
+{
+	struct qcom_pdm_service *service;
+
+	list_for_each_entry(service, &qcom_pdm_services, list) {
+		if (!strcmp(service->name, name))
+			return service;
+	}
+
+	return NULL;
+}
+
+static void qcom_pdm_del_service_domain_locked(const char *service_name, const char *domain_name)
+{
+	struct qcom_pdm_service *service;
+	struct qcom_pdm_domain *domain, *temp;
+
+	service = qcom_pdm_find_locked(service_name);
+	if (WARN_ON(!service))
+		return;
+
+	list_for_each_entry_safe(domain, temp, &service->domains, list) {
+		if (!strcmp(domain->name, domain_name)) {
+			list_del(&domain->list);
+			kfree(domain);
+
+			if (list_empty(&service->domains)) {
+				list_del(&service->list);
+				kfree(service->name);
+				kfree(service);
+			}
+
+			return;
+		}
+	}
+
+	WARN(1, "domain not found");
+}
+
+static int qcom_pdm_add_service_domain_locked(const char *service_name,
+					      const char *domain_name,
+					      u32 instance_id)
+{
+	struct qcom_pdm_service *service;
+	struct qcom_pdm_domain *domain;
+
+	service = qcom_pdm_find_locked(service_name);
+	if (service) {
+		list_for_each_entry(domain, &service->domains, list) {
+			if (!strcmp(domain->name, domain_name))
+				return -EBUSY;
+		}
+	} else {
+		service = kzalloc(sizeof(*service), GFP_KERNEL);
+		if (!service)
+			return -ENOMEM;
+
+		INIT_LIST_HEAD(&service->domains);
+		service->name = kstrdup(service_name, GFP_KERNEL);
+
+		list_add_tail(&service->list, &qcom_pdm_services);
+	}
+
+	domain = kzalloc(sizeof(*domain), GFP_KERNEL);
+	if (!domain) {
+		if (list_empty(&service->domains)) {
+			list_del(&service->list);
+			kfree(service->name);
+			kfree(service);
+		}
+
+		return -ENOMEM;
+	}
+
+	/*
+	 * The service name can outlive the calling module, so it must be strdup'ed.
+	 * The domain name cannot outlive the module, so there is no need to strdup it.
+	 */
+	domain->name = domain_name;
+	domain->instance_id = instance_id;
+	list_add_tail(&domain->list, &service->domains);
+
+	return 0;
+}
+
+static int qcom_pdm_add_domain_locked(const struct qcom_pdm_domain_data *data)
+{
+	int ret;
+	int i;
+
+	ret = qcom_pdm_add_service_domain_locked(TMS_SERVREG_SERVICE,
+						 data->domain,
+						 data->instance_id);
+	if (ret)
+		return ret;
+
+	for (i = 0; data->services[i]; i++) {
+		ret = qcom_pdm_add_service_domain_locked(data->services[i],
+							 data->domain,
+							 data->instance_id);
+		if (ret)
+			goto err;
+	}
+
+	return 0;
+
+err:
+	while (--i >= 0)
+		qcom_pdm_del_service_domain_locked(data->services[i], data->domain);
+
+	qcom_pdm_del_service_domain_locked(TMS_SERVREG_SERVICE, data->domain);
+
+	return ret;
+}
+
+static void qcom_pdm_del_domain_locked(const struct qcom_pdm_domain_data *data)
+{
+	int i;
+
+	for (i = 0; data->services[i]; i++)
+		qcom_pdm_del_service_domain_locked(data->services[i], data->domain);
+
+	qcom_pdm_del_service_domain_locked(TMS_SERVREG_SERVICE, data->domain);
+}
+
+int qcom_pdm_add_domains(const struct qcom_pdm_domain_data * const *data, size_t num_data)
+{
+	int ret;
+	int i;
+
+	mutex_lock(&qcom_pdm_mutex);
+
+	if (qcom_pdm_server_added) {
+		ret = qmi_del_server(&qcom_pdm_handle, SERVREG_QMI_SERVICE,
+				     SERVREG_QMI_VERSION, SERVREG_QMI_INSTANCE);
+		if (ret)
+			goto err_out;
+	}
+
+	for (i = 0; i < num_data; i++) {
+		ret = qcom_pdm_add_domain_locked(data[i]);
+		if (ret)
+			goto err;
+	}
+
+	ret = qmi_add_server(&qcom_pdm_handle, SERVREG_QMI_SERVICE,
+			     SERVREG_QMI_VERSION, SERVREG_QMI_INSTANCE);
+	if (ret) {
+		pr_err("PDM: error adding server %d\n", ret);
+		goto err;
+	}
+
+	qcom_pdm_server_added = true;
+
+	mutex_unlock(&qcom_pdm_mutex);
+
+	return 0;
+
+err:
+	while (--i >= 0)
+		qcom_pdm_del_domain_locked(data[i]);
+
+	qmi_add_server(&qcom_pdm_handle, SERVREG_QMI_SERVICE,
+		       SERVREG_QMI_VERSION, SERVREG_QMI_INSTANCE);
+
+err_out:
+	mutex_unlock(&qcom_pdm_mutex);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(qcom_pdm_add_domains);
+
+void qcom_pdm_del_domains(const struct qcom_pdm_domain_data * const *data, size_t num_data)
+{
+	int i;
+
+	mutex_lock(&qcom_pdm_mutex);
+
+	if (qcom_pdm_server_added) {
+		qmi_del_server(&qcom_pdm_handle, SERVREG_QMI_SERVICE,
+			       SERVREG_QMI_VERSION, SERVREG_QMI_INSTANCE);
+	}
+
+	for (i = 0; i < num_data; i++)
+		qcom_pdm_del_domain_locked(data[i]);
+
+	qmi_add_server(&qcom_pdm_handle, SERVREG_QMI_SERVICE,
+		       SERVREG_QMI_VERSION, SERVREG_QMI_INSTANCE);
+	qcom_pdm_server_added = true;
+
+	mutex_unlock(&qcom_pdm_mutex);
+}
+EXPORT_SYMBOL_GPL(qcom_pdm_del_domains);
+
+static void qcom_pdm_get_domain_list(struct qmi_handle *qmi,
+				     struct sockaddr_qrtr *sq,
+				     struct qmi_txn *txn,
+				     const void *decoded)
+{
+	const struct servreg_loc_get_domain_list_req *req = decoded;
+	struct servreg_loc_get_domain_list_resp *rsp = kzalloc(sizeof(*rsp), GFP_KERNEL);
+	struct qcom_pdm_service *service;
+	u32 offset;
+	int ret;
+
+	offset = req->offset_valid ? req->offset : 0;
+
+	rsp->rsp.result = QMI_RESULT_SUCCESS_V01;
+	rsp->rsp.error = QMI_ERR_NONE_V01;
+
+	rsp->db_revision_valid = true;
+	rsp->db_revision = 1;
+
+	rsp->total_domains_valid = true;
+	rsp->total_domains = 0;
+
+	mutex_lock(&qcom_pdm_mutex);
+
+	service = qcom_pdm_find_locked(req->name);
+	if (service) {
+		struct qcom_pdm_domain *domain;
+
+		rsp->domain_list_valid = true;
+		rsp->domain_list_len = 0;
+
+		list_for_each_entry(domain, &service->domains, list) {
+			u32 i = rsp->total_domains++;
+
+			if (i >= offset && i < SERVREG_LOC_MAX_DOMAINS) {
+				u32 j = rsp->domain_list_len++;
+
+				strscpy(rsp->domain_list[j].name, domain->name,
+					sizeof(rsp->domain_list[j].name));
+				rsp->domain_list[j].instance_id = domain->instance_id;
+
+				pr_debug("PDM: found %s / %d\n", domain->name,
+					 domain->instance_id);
+			}
+		}
+
+	}
+
+	mutex_unlock(&qcom_pdm_mutex);
+
+	pr_debug("PDM: service '%s' offset %d returning %d domains (of %d)\n", req->name,
+		 req->offset_valid ? req->offset : -1, rsp->domain_list_len, rsp->total_domains);
+
+	ret = qmi_send_response(qmi, sq, txn, SERVREG_LOC_GET_DOMAIN_LIST,
+				2658,
+				servreg_loc_get_domain_list_resp_ei, rsp);
+	if (ret)
+		pr_err("Error sending servreg response: %d\n", ret);
+
+	kfree(rsp);
+}
+
+static void qcom_pdm_pfr(struct qmi_handle *qmi,
+			 struct sockaddr_qrtr *sq,
+			 struct qmi_txn *txn,
+			 const void *decoded)
+{
+	const struct servreg_loc_pfr_req *req = decoded;
+	struct servreg_loc_pfr_resp rsp = {};
+	int ret;
+
+	pr_warn_ratelimited("PDM: service '%s' crash: '%s'\n", req->service, req->reason);
+
+	rsp.rsp.result = QMI_RESULT_SUCCESS_V01;
+	rsp.rsp.error = QMI_ERR_NONE_V01;
+
+	ret = qmi_send_response(qmi, sq, txn, SERVREG_LOC_PFR,
+				SERVREG_LOC_PFR_RESP_MSG_SIZE,
+				servreg_loc_pfr_resp_ei, &rsp);
+	if (ret)
+		pr_err("Error sending servreg response: %d\n", ret);
+}
+
+static const struct qmi_msg_handler qcom_pdm_msg_handlers[] = {
+	{
+		.type = QMI_REQUEST,
+		.msg_id = SERVREG_LOC_GET_DOMAIN_LIST,
+		.ei = servreg_loc_get_domain_list_req_ei,
+		.decoded_size = sizeof(struct servreg_loc_get_domain_list_req),
+		.fn = qcom_pdm_get_domain_list,
+	},
+	{
+		.type = QMI_REQUEST,
+		.msg_id = SERVREG_LOC_PFR,
+		.ei = servreg_loc_pfr_req_ei,
+		.decoded_size = sizeof(struct servreg_loc_pfr_req),
+		.fn = qcom_pdm_pfr,
+	},
+	{ },
+};
+
+static int qcom_pdm_init(void)
+{
+	return qmi_handle_init(&qcom_pdm_handle, 1024,
+			       NULL, qcom_pdm_msg_handlers);
+}
+
+static void qcom_pdm_exit(void)
+{
+	qmi_handle_release(&qcom_pdm_handle);
+
+	WARN_ON(!list_empty(&qcom_pdm_services));
+}
+
+module_init(qcom_pdm_init);
+module_exit(qcom_pdm_exit);
+
+MODULE_DESCRIPTION("Qualcomm Protection Domain Mapper");
+MODULE_LICENSE("GPL");
diff --git a/drivers/soc/qcom/qcom_pdm_msg.c b/drivers/soc/qcom/qcom_pdm_msg.c
new file mode 100644
index 000000000000..6f858df8f3dc
--- /dev/null
+++ b/drivers/soc/qcom/qcom_pdm_msg.c
@@ -0,0 +1,188 @@
+// SPDX-License-Identifier: BSD-3-Clause
+/*
+ * Copyright (c) 2018, Linaro Ltd.
+ * Copyright (c) 2016, Bjorn Andersson
+ */
+
+#include "qcom_pdm_msg.h"
+
+struct qmi_elem_info servreg_loc_domain_list_entry_ei[] = {
+	{
+		.data_type = QMI_STRING,
+		.elem_len = 65,
+		.elem_size = sizeof(char),
+		.offset = offsetof(struct servreg_loc_domain_list_entry, name)
+	},
+	{
+		.data_type = QMI_UNSIGNED_4_BYTE,
+		.elem_len = 1,
+		.elem_size = sizeof(u32),
+		.offset = offsetof(struct servreg_loc_domain_list_entry, instance_id),
+	},
+	{
+		.data_type = QMI_UNSIGNED_1_BYTE,
+		.elem_len = 1,
+		.elem_size = sizeof(uint8_t),
+		.offset = offsetof(struct servreg_loc_domain_list_entry, service_data_valid),
+	},
+	{
+		.data_type = QMI_UNSIGNED_4_BYTE,
+		.elem_len = 1,
+		.elem_size = sizeof(u32),
+		.offset = offsetof(struct servreg_loc_domain_list_entry, service_data),
+	},
+	{}
+};
+
+struct qmi_elem_info servreg_loc_get_domain_list_req_ei[] = {
+	{
+		.data_type = QMI_STRING,
+		.elem_len = 65,
+		.elem_size = sizeof(char),
+		.array_type = VAR_LEN_ARRAY,
+		.tlv_type = 1,
+		.offset = offsetof(struct servreg_loc_get_domain_list_req, name)
+	},
+	{
+		.data_type = QMI_OPT_FLAG,
+		.elem_len = 1,
+		.elem_size = sizeof(u8),
+		.tlv_type = 16,
+		.offset = offsetof(struct servreg_loc_get_domain_list_req, offset_valid),
+	},
+	{
+		.data_type = QMI_UNSIGNED_4_BYTE,
+		.elem_len = 1,
+		.elem_size = sizeof(u32),
+		.tlv_type = 16,
+		.offset = offsetof(struct servreg_loc_get_domain_list_req, offset),
+	},
+	{
+		.data_type	= QMI_EOTI,
+		.elem_len	= 0,
+		.elem_size	= 0,
+		.array_type	= NO_ARRAY,
+		.tlv_type	= QMI_COMMON_TLV_TYPE,
+		.offset		= 0,
+		.ei_array	= NULL,
+	}
+};
+
+struct qmi_elem_info servreg_loc_get_domain_list_resp_ei[] = {
+	{
+		.data_type = QMI_STRUCT,
+		.elem_len = 1,
+		.elem_size = sizeof_field(struct servreg_loc_get_domain_list_resp, rsp),
+		.tlv_type = 2,
+		.offset = offsetof(struct servreg_loc_get_domain_list_resp, rsp),
+		.ei_array = qmi_response_type_v01_ei,
+	},
+	{
+		.data_type = QMI_OPT_FLAG,
+		.elem_len = 1,
+		.elem_size = sizeof(u8),
+		.tlv_type = 16,
+		.offset = offsetof(struct servreg_loc_get_domain_list_resp, total_domains_valid),
+	},
+	{
+		.data_type = QMI_UNSIGNED_2_BYTE,
+		.elem_len = 1,
+		.elem_size = sizeof(u16),
+		.tlv_type = 16,
+		.offset = offsetof(struct servreg_loc_get_domain_list_resp, total_domains),
+	},
+	{
+		.data_type = QMI_OPT_FLAG,
+		.elem_len = 1,
+		.elem_size = sizeof(u8),
+		.tlv_type = 17,
+		.offset = offsetof(struct servreg_loc_get_domain_list_resp, db_revision_valid),
+	},
+	{
+		.data_type = QMI_UNSIGNED_2_BYTE,
+		.elem_len = 1,
+		.elem_size = sizeof(u16),
+		.tlv_type = 17,
+		.offset = offsetof(struct servreg_loc_get_domain_list_resp, db_revision),
+	},
+	{
+		.data_type = QMI_OPT_FLAG,
+		.elem_len = 1,
+		.elem_size = sizeof(u8),
+		.tlv_type = 18,
+		.offset = offsetof(struct servreg_loc_get_domain_list_resp, domain_list_valid),
+	},
+	{
+		.data_type = QMI_DATA_LEN,
+		.elem_len = 1,
+		.elem_size = sizeof(uint8_t),
+		.tlv_type = 18,
+		.offset = offsetof(struct servreg_loc_get_domain_list_resp, domain_list_len),
+	},
+	{
+		.data_type = QMI_STRUCT,
+		.elem_len = 32,
+		.elem_size = sizeof(struct servreg_loc_domain_list_entry),
+		.array_type = VAR_LEN_ARRAY,
+		.tlv_type = 18,
+		.offset = offsetof(struct servreg_loc_get_domain_list_resp, domain_list),
+		.ei_array = servreg_loc_domain_list_entry_ei,
+	},
+	{
+		.data_type	= QMI_EOTI,
+		.elem_len	= 0,
+		.elem_size	= 0,
+		.array_type	= NO_ARRAY,
+		.tlv_type	= QMI_COMMON_TLV_TYPE,
+		.offset		= 0,
+		.ei_array	= NULL,
+	}
+};
+
+struct qmi_elem_info servreg_loc_pfr_req_ei[] = {
+	{
+		.data_type = QMI_STRING,
+		.elem_len = 65,
+		.elem_size = sizeof(char),
+		.array_type = VAR_LEN_ARRAY,
+		.tlv_type = 1,
+		.offset = offsetof(struct servreg_loc_pfr_req, service)
+	},
+	{
+		.data_type = QMI_STRING,
+		.elem_len = 65,
+		.elem_size = sizeof(char),
+		.array_type = VAR_LEN_ARRAY,
+		.tlv_type = 2,
+		.offset = offsetof(struct servreg_loc_pfr_req, reason)
+	},
+	{
+		.data_type	= QMI_EOTI,
+		.elem_len	= 0,
+		.elem_size	= 0,
+		.array_type	= NO_ARRAY,
+		.tlv_type	= QMI_COMMON_TLV_TYPE,
+		.offset		= 0,
+		.ei_array	= NULL,
+	}
+};
+
+struct qmi_elem_info servreg_loc_pfr_resp_ei[] = {
+	{
+		.data_type = QMI_STRUCT,
+		.elem_len = 1,
+		.elem_size = sizeof_field(struct servreg_loc_pfr_resp, rsp),
+		.tlv_type = 2,
+		.offset = offsetof(struct servreg_loc_pfr_resp, rsp),
+		.ei_array = qmi_response_type_v01_ei,
+	},
+	{
+		.data_type	= QMI_EOTI,
+		.elem_len	= 0,
+		.elem_size	= 0,
+		.array_type	= NO_ARRAY,
+		.tlv_type	= QMI_COMMON_TLV_TYPE,
+		.offset		= 0,
+		.ei_array	= NULL,
+	}
+};
diff --git a/drivers/soc/qcom/qcom_pdm_msg.h b/drivers/soc/qcom/qcom_pdm_msg.h
new file mode 100644
index 000000000000..e576b87c67c0
--- /dev/null
+++ b/drivers/soc/qcom/qcom_pdm_msg.h
@@ -0,0 +1,66 @@
+/* SPDX-License-Identifier: BSD-3-Clause */
+/*
+ * Copyright (c) 2018, Linaro Ltd.
+ * Copyright (c) 2016, Bjorn Andersson
+ */
+
+#ifndef __QMI_SERVREG_LOC_H__
+#define __QMI_SERVREG_LOC_H__
+
+#include <linux/types.h>
+#include <linux/soc/qcom/qmi.h>
+
+#define SERVREG_QMI_SERVICE 64
+#define SERVREG_QMI_VERSION 257
+#define SERVREG_QMI_INSTANCE 0
+#define QMI_RESULT_SUCCESS 0
+#define QMI_RESULT_FAILURE 1
+#define QMI_ERR_NONE 0
+#define QMI_ERR_INTERNAL 1
+#define QMI_ERR_MALFORMED_MSG 2
+#define SERVREG_LOC_GET_DOMAIN_LIST 33
+#define SERVREG_LOC_PFR 36
+
+struct servreg_loc_domain_list_entry {
+	char name[65];
+	u32 instance_id;
+	u8 service_data_valid;
+	u32 service_data;
+};
+
+struct servreg_loc_get_domain_list_req {
+	char name[65];
+	u8 offset_valid;
+	u32 offset;
+};
+
+#define SERVREG_LOC_MAX_DOMAINS 32
+
+struct servreg_loc_get_domain_list_resp {
+	struct qmi_response_type_v01 rsp;
+	u8 total_domains_valid;
+	u16 total_domains;
+	u8 db_revision_valid;
+	u16 db_revision;
+	u8 domain_list_valid;
+	u32 domain_list_len;
+	struct servreg_loc_domain_list_entry domain_list[SERVREG_LOC_MAX_DOMAINS];
+};
+
+struct servreg_loc_pfr_req {
+	char service[65];
+	char reason[257];
+};
+
+struct servreg_loc_pfr_resp {
+	struct qmi_response_type_v01 rsp;
+};
+
+#define SERVREG_LOC_PFR_RESP_MSG_SIZE 10
+
+extern struct qmi_elem_info servreg_loc_get_domain_list_req_ei[];
+extern struct qmi_elem_info servreg_loc_get_domain_list_resp_ei[];
+extern struct qmi_elem_info servreg_loc_pfr_req_ei[];
+extern struct qmi_elem_info servreg_loc_pfr_resp_ei[];
+
+#endif
diff --git a/include/linux/soc/qcom/pd_mapper.h b/include/linux/soc/qcom/pd_mapper.h
new file mode 100644
index 000000000000..86438b7ca6fe
--- /dev/null
+++ b/include/linux/soc/qcom/pd_mapper.h
@@ -0,0 +1,39 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Qualcomm Protection Domain mapper
+ *
+ * Copyright (c) 2023 Linaro Ltd.
+ */
+#ifndef __QCOM_PD_MAPPER__
+#define __QCOM_PD_MAPPER__
+
+struct qcom_pdm_domain_data {
+	const char *domain;
+	u32 instance_id;
+	/* NULL-terminated array */
+	const char *services[];
+};
+
+#if IS_ENABLED(CONFIG_QCOM_PD_MAPPER)
+
+int qcom_pdm_add_domains(const struct qcom_pdm_domain_data * const *data,
+			 size_t num_data);
+void qcom_pdm_del_domains(const struct qcom_pdm_domain_data * const *data,
+			  size_t num_data);
+
+#else
+
+static inline int qcom_pdm_add_domains(const struct qcom_pdm_domain_data * const *data,
+				      size_t num_data)
+{
+	return 0;
+}
+
+static inline void qcom_pdm_del_domains(const struct qcom_pdm_domain_data * const *data,
+					size_t num_data)
+{
+}
+
+#endif
+
+#endif

-- 
2.39.2



* [PATCH v4 4/7] remoteproc: qcom: pas: correct data indentation
  2024-03-11 15:34 [PATCH v4 0/7] soc: qcom: add in-kernel pd-mapper implementation Dmitry Baryshkov
                   ` (2 preceding siblings ...)
  2024-03-11 15:34 ` [PATCH v4 3/7] soc: qcom: add pd-mapper implementation Dmitry Baryshkov
@ 2024-03-11 15:34 ` Dmitry Baryshkov
  2024-03-11 15:34 ` [PATCH v4 5/7] remoteproc: qcom: adsp: add configuration for in-kernel pdm Dmitry Baryshkov
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 26+ messages in thread
From: Dmitry Baryshkov @ 2024-03-11 15:34 UTC (permalink / raw
  To: Bjorn Andersson, Konrad Dybcio, Mathieu Poirier
  Cc: linux-arm-msm, linux-remoteproc, Johan Hovold

Correct indentation of several struct adsp_data instances to always use
a single TAB character instead of two.

Reviewed-by: Konrad Dybcio <konrad.dybcio@linaro.org>
Signed-off-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
---
 drivers/remoteproc/qcom_q6v5_pas.c | 124 ++++++++++++++++++-------------------
 1 file changed, 62 insertions(+), 62 deletions(-)

diff --git a/drivers/remoteproc/qcom_q6v5_pas.c b/drivers/remoteproc/qcom_q6v5_pas.c
index d0b1f0f38347..3235249d703d 100644
--- a/drivers/remoteproc/qcom_q6v5_pas.c
+++ b/drivers/remoteproc/qcom_q6v5_pas.c
@@ -807,24 +807,24 @@ static void adsp_remove(struct platform_device *pdev)
 }
 
 static const struct adsp_data adsp_resource_init = {
-		.crash_reason_smem = 423,
-		.firmware_name = "adsp.mdt",
-		.pas_id = 1,
-		.auto_boot = true,
-		.ssr_name = "lpass",
-		.sysmon_name = "adsp",
-		.ssctl_id = 0x14,
+	.crash_reason_smem = 423,
+	.firmware_name = "adsp.mdt",
+	.pas_id = 1,
+	.auto_boot = true,
+	.ssr_name = "lpass",
+	.sysmon_name = "adsp",
+	.ssctl_id = 0x14,
 };
 
 static const struct adsp_data sdm845_adsp_resource_init = {
-		.crash_reason_smem = 423,
-		.firmware_name = "adsp.mdt",
-		.pas_id = 1,
-		.auto_boot = true,
-		.load_state = "adsp",
-		.ssr_name = "lpass",
-		.sysmon_name = "adsp",
-		.ssctl_id = 0x14,
+	.crash_reason_smem = 423,
+	.firmware_name = "adsp.mdt",
+	.pas_id = 1,
+	.auto_boot = true,
+	.load_state = "adsp",
+	.ssr_name = "lpass",
+	.sysmon_name = "adsp",
+	.ssctl_id = 0x14,
 };
 
 static const struct adsp_data sm6350_adsp_resource = {
@@ -859,18 +859,18 @@ static const struct adsp_data sm6375_mpss_resource = {
 };
 
 static const struct adsp_data sm8150_adsp_resource = {
-		.crash_reason_smem = 423,
-		.firmware_name = "adsp.mdt",
-		.pas_id = 1,
-		.auto_boot = true,
-		.proxy_pd_names = (char*[]){
-			"cx",
-			NULL
-		},
-		.load_state = "adsp",
-		.ssr_name = "lpass",
-		.sysmon_name = "adsp",
-		.ssctl_id = 0x14,
+	.crash_reason_smem = 423,
+	.firmware_name = "adsp.mdt",
+	.pas_id = 1,
+	.auto_boot = true,
+	.proxy_pd_names = (char*[]){
+		"cx",
+		NULL
+	},
+	.load_state = "adsp",
+	.ssr_name = "lpass",
+	.sysmon_name = "adsp",
+	.ssctl_id = 0x14,
 };
 
 static const struct adsp_data sm8250_adsp_resource = {
@@ -906,17 +906,17 @@ static const struct adsp_data sm8350_adsp_resource = {
 };
 
 static const struct adsp_data msm8996_adsp_resource = {
-		.crash_reason_smem = 423,
-		.firmware_name = "adsp.mdt",
-		.pas_id = 1,
-		.auto_boot = true,
-		.proxy_pd_names = (char*[]){
-			"cx",
-			NULL
-		},
-		.ssr_name = "lpass",
-		.sysmon_name = "adsp",
-		.ssctl_id = 0x14,
+	.crash_reason_smem = 423,
+	.firmware_name = "adsp.mdt",
+	.pas_id = 1,
+	.auto_boot = true,
+	.proxy_pd_names = (char*[]){
+		"cx",
+		NULL
+	},
+	.ssr_name = "lpass",
+	.sysmon_name = "adsp",
+	.ssctl_id = 0x14,
 };
 
 static const struct adsp_data cdsp_resource_init = {
@@ -1063,33 +1063,33 @@ static const struct adsp_data sc8180x_mpss_resource = {
 };
 
 static const struct adsp_data msm8996_slpi_resource_init = {
-		.crash_reason_smem = 424,
-		.firmware_name = "slpi.mdt",
-		.pas_id = 12,
-		.auto_boot = true,
-		.proxy_pd_names = (char*[]){
-			"ssc_cx",
-			NULL
-		},
-		.ssr_name = "dsps",
-		.sysmon_name = "slpi",
-		.ssctl_id = 0x16,
+	.crash_reason_smem = 424,
+	.firmware_name = "slpi.mdt",
+	.pas_id = 12,
+	.auto_boot = true,
+	.proxy_pd_names = (char*[]){
+		"ssc_cx",
+		NULL
+	},
+	.ssr_name = "dsps",
+	.sysmon_name = "slpi",
+	.ssctl_id = 0x16,
 };
 
 static const struct adsp_data sdm845_slpi_resource_init = {
-		.crash_reason_smem = 424,
-		.firmware_name = "slpi.mdt",
-		.pas_id = 12,
-		.auto_boot = true,
-		.proxy_pd_names = (char*[]){
-			"lcx",
-			"lmx",
-			NULL
-		},
-		.load_state = "slpi",
-		.ssr_name = "dsps",
-		.sysmon_name = "slpi",
-		.ssctl_id = 0x16,
+	.crash_reason_smem = 424,
+	.firmware_name = "slpi.mdt",
+	.pas_id = 12,
+	.auto_boot = true,
+	.proxy_pd_names = (char*[]){
+		"lcx",
+		"lmx",
+		NULL
+	},
+	.load_state = "slpi",
+	.ssr_name = "dsps",
+	.sysmon_name = "slpi",
+	.ssctl_id = 0x16,
 };
 
 static const struct adsp_data wcss_resource_init = {

-- 
2.39.2


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH v4 5/7] remoteproc: qcom: adsp: add configuration for in-kernel pdm
  2024-03-11 15:34 [PATCH v4 0/7] soc: qcom: add in-kernel pd-mapper implementation Dmitry Baryshkov
                   ` (3 preceding siblings ...)
  2024-03-11 15:34 ` [PATCH v4 4/7] remoteproc: qcom: pas: correct data indentation Dmitry Baryshkov
@ 2024-03-11 15:34 ` Dmitry Baryshkov
  2024-03-16 18:15   ` Bjorn Andersson
  2024-03-11 15:34 ` [PATCH v4 6/7] remoteproc: qcom: mss: " Dmitry Baryshkov
                   ` (2 subsequent siblings)
  7 siblings, 1 reply; 26+ messages in thread
From: Dmitry Baryshkov @ 2024-03-11 15:34 UTC (permalink / raw)
  To: Bjorn Andersson, Konrad Dybcio, Mathieu Poirier
  Cc: linux-arm-msm, linux-remoteproc, Johan Hovold

Add domain / service configuration for the in-kernel protection domain
mapper service.

Signed-off-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
---
 drivers/remoteproc/Kconfig          |  1 +
 drivers/remoteproc/qcom_q6v5_adsp.c | 82 ++++++++++++++++++++++++++++++++++---
 2 files changed, 78 insertions(+), 5 deletions(-)

diff --git a/drivers/remoteproc/Kconfig b/drivers/remoteproc/Kconfig
index 48845dc8fa85..f1698d4c302e 100644
--- a/drivers/remoteproc/Kconfig
+++ b/drivers/remoteproc/Kconfig
@@ -181,6 +181,7 @@ config QCOM_Q6V5_ADSP
 	depends on QCOM_SYSMON || QCOM_SYSMON=n
 	depends on RPMSG_QCOM_GLINK || RPMSG_QCOM_GLINK=n
 	depends on QCOM_AOSS_QMP || QCOM_AOSS_QMP=n
+	depends on QCOM_PD_MAPPER || QCOM_PD_MAPPER=n
 	select MFD_SYSCON
 	select QCOM_PIL_INFO
 	select QCOM_MDT_LOADER
diff --git a/drivers/remoteproc/qcom_q6v5_adsp.c b/drivers/remoteproc/qcom_q6v5_adsp.c
index 93f9a1537ec6..ea74ca730a50 100644
--- a/drivers/remoteproc/qcom_q6v5_adsp.c
+++ b/drivers/remoteproc/qcom_q6v5_adsp.c
@@ -23,6 +23,7 @@
 #include <linux/remoteproc.h>
 #include <linux/reset.h>
 #include <linux/soc/qcom/mdt_loader.h>
+#include <linux/soc/qcom/pd_mapper.h>
 #include <linux/soc/qcom/smem.h>
 #include <linux/soc/qcom/smem_state.h>
 
@@ -75,6 +76,9 @@ struct adsp_pil_data {
 	const char **pd_names;
 	unsigned int num_pds;
 	const char *load_state;
+
+	const struct qcom_pdm_domain_data * const *domains;
+	size_t num_domains;
 };
 
 struct qcom_adsp {
@@ -116,6 +120,9 @@ struct qcom_adsp {
 	struct qcom_sysmon *sysmon;
 
 	int (*shutdown)(struct qcom_adsp *adsp);
+
+	const struct qcom_pdm_domain_data * const *domains;
+	size_t num_domains;
 };
 
 static int qcom_rproc_pds_attach(struct qcom_adsp *adsp, const char **pd_names,
@@ -375,10 +382,14 @@ static int adsp_start(struct rproc *rproc)
 	int ret;
 	unsigned int val;
 
-	ret = qcom_q6v5_prepare(&adsp->q6v5);
+	ret = qcom_pdm_add_domains(adsp->domains, adsp->num_domains);
 	if (ret)
 		return ret;
 
+	ret = qcom_q6v5_prepare(&adsp->q6v5);
+	if (ret)
+		goto del_domains;
+
 	ret = adsp_map_carveout(rproc);
 	if (ret) {
 		dev_err(adsp->dev, "ADSP smmu mapping failed\n");
@@ -446,6 +457,8 @@ static int adsp_start(struct rproc *rproc)
 	adsp_unmap_carveout(rproc);
 disable_irqs:
 	qcom_q6v5_unprepare(&adsp->q6v5);
+del_domains:
+	qcom_pdm_del_domains(adsp->domains, adsp->num_domains);
 
 	return ret;
 }
@@ -478,6 +491,8 @@ static int adsp_stop(struct rproc *rproc)
 	if (handover)
 		qcom_adsp_pil_handover(&adsp->q6v5);
 
+	qcom_pdm_del_domains(adsp->domains, adsp->num_domains);
+
 	return ret;
 }
 
@@ -690,6 +705,8 @@ static int adsp_probe(struct platform_device *pdev)
 	adsp->rproc = rproc;
 	adsp->info_name = desc->sysmon_name;
 	adsp->has_iommu = desc->has_iommu;
+	adsp->domains = desc->domains;
+	adsp->num_domains = desc->num_domains;
 
 	platform_set_drvdata(pdev, adsp);
 
@@ -764,7 +781,56 @@ static void adsp_remove(struct platform_device *pdev)
 	rproc_free(adsp->rproc);
 }
 
-static const struct adsp_pil_data adsp_resource_init = {
+static const struct qcom_pdm_domain_data adsp_audio_pd = {
+	.domain = "msm/adsp/audio_pd",
+	.instance_id = 74,
+	.services = {
+		"avs/audio",
+		NULL,
+	},
+};
+
+static const struct qcom_pdm_domain_data adsp_charger_pd = {
+	.domain = "msm/adsp/charger_pd",
+	.instance_id = 74,
+	.services = { NULL },
+};
+
+static const struct qcom_pdm_domain_data adsp_root_pd = {
+	.domain = "msm/adsp/root_pd",
+	.instance_id = 74,
+	.services = { NULL },
+};
+
+static const struct qcom_pdm_domain_data adsp_sensor_pd = {
+	.domain = "msm/adsp/sensor_pd",
+	.instance_id = 74,
+	.services = { NULL },
+};
+
+static const struct qcom_pdm_domain_data *sc7280_adsp_domains[] = {
+	&adsp_audio_pd,
+	&adsp_root_pd,
+	&adsp_charger_pd,
+	&adsp_sensor_pd
+};
+
+static const struct qcom_pdm_domain_data cdsp_root_pd = {
+	.domain = "msm/cdsp/root_pd",
+	.instance_id = 76,
+	.services = { NULL },
+};
+
+static const struct qcom_pdm_domain_data *qcs404_cdsp_domains[] = {
+	&cdsp_root_pd,
+};
+
+static const struct qcom_pdm_domain_data *sdm845_adsp_domains[] = {
+	&adsp_audio_pd,
+	&adsp_root_pd,
+};
+
+static const struct adsp_pil_data adsp_sdm845_resource_init = {
 	.crash_reason_smem = 423,
 	.firmware_name = "adsp.mdt",
 	.ssr_name = "lpass",
@@ -779,6 +845,8 @@ static const struct adsp_pil_data adsp_resource_init = {
 	.num_clks = 7,
 	.pd_names = (const char*[]) { "cx" },
 	.num_pds = 1,
+	.domains = sdm845_adsp_domains,
+	.num_domains = ARRAY_SIZE(sdm845_adsp_domains),
 };
 
 static const struct adsp_pil_data adsp_sc7280_resource_init = {
@@ -794,9 +862,11 @@ static const struct adsp_pil_data adsp_sc7280_resource_init = {
 		"gcc_cfg_noc_lpass", NULL
 	},
 	.num_clks = 1,
+	.domains = sc7280_adsp_domains,
+	.num_domains = ARRAY_SIZE(sc7280_adsp_domains),
 };
 
-static const struct adsp_pil_data cdsp_resource_init = {
+static const struct adsp_pil_data cdsp_qcs404_resource_init = {
 	.crash_reason_smem = 601,
 	.firmware_name = "cdsp.mdt",
 	.ssr_name = "cdsp",
@@ -811,6 +881,8 @@ static const struct adsp_pil_data cdsp_resource_init = {
 	.num_clks = 7,
 	.pd_names = (const char*[]) { "cx" },
 	.num_pds = 1,
+	.domains = qcs404_cdsp_domains,
+	.num_domains = ARRAY_SIZE(qcs404_cdsp_domains),
 };
 
 static const struct adsp_pil_data wpss_resource_init = {
@@ -831,10 +903,10 @@ static const struct adsp_pil_data wpss_resource_init = {
 };
 
 static const struct of_device_id adsp_of_match[] = {
-	{ .compatible = "qcom,qcs404-cdsp-pil", .data = &cdsp_resource_init },
+	{ .compatible = "qcom,qcs404-cdsp-pil", .data = &cdsp_qcs404_resource_init },
 	{ .compatible = "qcom,sc7280-adsp-pil", .data = &adsp_sc7280_resource_init },
 	{ .compatible = "qcom,sc7280-wpss-pil", .data = &wpss_resource_init },
-	{ .compatible = "qcom,sdm845-adsp-pil", .data = &adsp_resource_init },
+	{ .compatible = "qcom,sdm845-adsp-pil", .data = &adsp_sdm845_resource_init },
 	{ },
 };
 MODULE_DEVICE_TABLE(of, adsp_of_match);

-- 
2.39.2


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH v4 6/7] remoteproc: qcom: mss: add configuration for in-kernel pdm
  2024-03-11 15:34 [PATCH v4 0/7] soc: qcom: add in-kernel pd-mapper implementation Dmitry Baryshkov
                   ` (4 preceding siblings ...)
  2024-03-11 15:34 ` [PATCH v4 5/7] remoteproc: qcom: adsp: add configuration for in-kernel pdm Dmitry Baryshkov
@ 2024-03-11 15:34 ` Dmitry Baryshkov
  2024-03-11 15:34 ` [PATCH v4 7/7] remoteproc: qcom: pas: " Dmitry Baryshkov
  2024-03-11 17:18 ` [PATCH v4 0/7] soc: qcom: add in-kernel pd-mapper implementation neil.armstrong
  7 siblings, 0 replies; 26+ messages in thread
From: Dmitry Baryshkov @ 2024-03-11 15:34 UTC (permalink / raw)
  To: Bjorn Andersson, Konrad Dybcio, Mathieu Poirier
  Cc: linux-arm-msm, linux-remoteproc, Johan Hovold

Add domain / service configuration for the in-kernel protection domain
mapper service.

Signed-off-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
---
 drivers/remoteproc/Kconfig         |  1 +
 drivers/remoteproc/qcom_q6v5_mss.c | 80 +++++++++++++++++++++++++++++++++++++-
 2 files changed, 80 insertions(+), 1 deletion(-)

diff --git a/drivers/remoteproc/Kconfig b/drivers/remoteproc/Kconfig
index f1698d4c302e..8152e845f7a3 100644
--- a/drivers/remoteproc/Kconfig
+++ b/drivers/remoteproc/Kconfig
@@ -202,6 +202,7 @@ config QCOM_Q6V5_MSS
 	depends on QCOM_SYSMON || QCOM_SYSMON=n
 	depends on RPMSG_QCOM_GLINK || RPMSG_QCOM_GLINK=n
 	depends on QCOM_AOSS_QMP || QCOM_AOSS_QMP=n
+	depends on QCOM_PD_MAPPER || QCOM_PD_MAPPER=n
 	select MFD_SYSCON
 	select QCOM_MDT_LOADER
 	select QCOM_PIL_INFO
diff --git a/drivers/remoteproc/qcom_q6v5_mss.c b/drivers/remoteproc/qcom_q6v5_mss.c
index 394b2c1cb5e2..d9ef874c3722 100644
--- a/drivers/remoteproc/qcom_q6v5_mss.c
+++ b/drivers/remoteproc/qcom_q6v5_mss.c
@@ -26,6 +26,7 @@
 #include <linux/remoteproc.h>
 #include <linux/reset.h>
 #include <linux/soc/qcom/mdt_loader.h>
+#include <linux/soc/qcom/pd_mapper.h>
 #include <linux/iopoll.h>
 #include <linux/slab.h>
 
@@ -163,6 +164,9 @@ struct rproc_hexagon_res {
 	bool has_qaccept_regs;
 	bool has_ext_cntl_regs;
 	bool has_vq6;
+
+	const struct qcom_pdm_domain_data * const *domains;
+	size_t num_domains;
 };
 
 struct q6v5 {
@@ -242,6 +246,9 @@ struct q6v5 {
 	u64 mba_perm;
 	const char *hexagon_mdt_image;
 	int version;
+
+	const struct qcom_pdm_domain_data * const *domains;
+	size_t num_domains;
 };
 
 enum {
@@ -1581,10 +1588,14 @@ static int q6v5_start(struct rproc *rproc)
 	int xfermemop_ret;
 	int ret;
 
-	ret = q6v5_mba_load(qproc);
+	ret = qcom_pdm_add_domains(qproc->domains, qproc->num_domains);
 	if (ret)
 		return ret;
 
+	ret = q6v5_mba_load(qproc);
+	if (ret)
+		goto del_domains;
+
 	dev_info(qproc->dev, "MBA booted with%s debug policy, loading mpss\n",
 		 qproc->dp_size ? "" : "out");
 
@@ -1614,6 +1625,9 @@ static int q6v5_start(struct rproc *rproc)
 	q6v5_mba_reclaim(qproc);
 	q6v5_dump_mba_logs(qproc);
 
+del_domains:
+	qcom_pdm_del_domains(qproc->domains, qproc->num_domains);
+
 	return ret;
 }
 
@@ -1628,6 +1642,8 @@ static int q6v5_stop(struct rproc *rproc)
 
 	q6v5_mba_reclaim(qproc);
 
+	qcom_pdm_del_domains(qproc->domains, qproc->num_domains);
+
 	return 0;
 }
 
@@ -2013,6 +2029,9 @@ static int q6v5_probe(struct platform_device *pdev)
 
 	platform_set_drvdata(pdev, qproc);
 
+	qproc->domains = desc->domains;
+	qproc->num_domains = desc->num_domains;
+
 	qproc->has_qaccept_regs = desc->has_qaccept_regs;
 	qproc->has_ext_cntl_regs = desc->has_ext_cntl_regs;
 	qproc->has_vq6 = desc->has_vq6;
@@ -2153,6 +2172,54 @@ static void q6v5_remove(struct platform_device *pdev)
 	rproc_free(rproc);
 }
 
+static const struct qcom_pdm_domain_data mpss_root_pd = {
+	.domain = "msm/modem/root_pd",
+	.instance_id = 180,
+	.services = { NULL },
+};
+
+static const struct qcom_pdm_domain_data msm8996_mpss_root_pd = {
+	.domain = "msm/modem/root_pd",
+	.instance_id = 100,
+	.services = { NULL },
+};
+
+static const struct qcom_pdm_domain_data sm8150_mpss_root_pd = {
+	.domain = "msm/modem/root_pd",
+	.instance_id = 180,
+	.services = {
+		"gps/gps_service",
+		NULL,
+	},
+};
+
+static const struct qcom_pdm_domain_data mpss_wlan_pd = {
+	.domain = "msm/modem/wlan_pd",
+	.instance_id = 180,
+	.services = {
+		"kernel/elf_loader",
+		"wlan/fw",
+		NULL,
+	},
+};
+
+static const struct qcom_pdm_domain_data *msm8996_mpss_domains[] = {
+	&msm8996_mpss_root_pd,
+};
+
+static const struct qcom_pdm_domain_data *sdm660_mpss_domains[] = {
+	&mpss_wlan_pd,
+};
+
+static const struct qcom_pdm_domain_data *sdm845_mpss_domains[] = {
+	&mpss_root_pd,
+	&mpss_wlan_pd,
+};
+
+static const struct qcom_pdm_domain_data *sm8350_mpss_domains[] = {
+	&sm8150_mpss_root_pd,
+};
+
 static const struct rproc_hexagon_res sc7180_mss = {
 	.hexagon_mba_image = "mba.mbn",
 	.proxy_clk_names = (char*[]){
@@ -2184,6 +2251,7 @@ static const struct rproc_hexagon_res sc7180_mss = {
 	.has_ext_cntl_regs = false,
 	.has_vq6 = false,
 	.version = MSS_SC7180,
+	// FIXME: domains?
 };
 
 static const struct rproc_hexagon_res sc7280_mss = {
@@ -2212,6 +2280,8 @@ static const struct rproc_hexagon_res sc7280_mss = {
 	.has_ext_cntl_regs = true,
 	.has_vq6 = true,
 	.version = MSS_SC7280,
+	.domains = sm8350_mpss_domains,
+	.num_domains = ARRAY_SIZE(sm8350_mpss_domains),
 };
 
 static const struct rproc_hexagon_res sdm660_mss = {
@@ -2243,6 +2313,8 @@ static const struct rproc_hexagon_res sdm660_mss = {
 	.has_ext_cntl_regs = false,
 	.has_vq6 = false,
 	.version = MSS_SDM660,
+	.domains = sdm660_mpss_domains,
+	.num_domains = ARRAY_SIZE(sdm660_mpss_domains),
 };
 
 static const struct rproc_hexagon_res sdm845_mss = {
@@ -2278,6 +2350,8 @@ static const struct rproc_hexagon_res sdm845_mss = {
 	.has_ext_cntl_regs = false,
 	.has_vq6 = false,
 	.version = MSS_SDM845,
+	.domains = sdm845_mpss_domains,
+	.num_domains = ARRAY_SIZE(sdm845_mpss_domains),
 };
 
 static const struct rproc_hexagon_res msm8998_mss = {
@@ -2309,6 +2383,8 @@ static const struct rproc_hexagon_res msm8998_mss = {
 	.has_ext_cntl_regs = false,
 	.has_vq6 = false,
 	.version = MSS_MSM8998,
+	.domains = sdm845_mpss_domains,
+	.num_domains = ARRAY_SIZE(sdm845_mpss_domains),
 };
 
 static const struct rproc_hexagon_res msm8996_mss = {
@@ -2347,6 +2423,8 @@ static const struct rproc_hexagon_res msm8996_mss = {
 	.has_ext_cntl_regs = false,
 	.has_vq6 = false,
 	.version = MSS_MSM8996,
+	.domains = msm8996_mpss_domains,
+	.num_domains = ARRAY_SIZE(msm8996_mpss_domains),
 };
 
 static const struct rproc_hexagon_res msm8909_mss = {

-- 
2.39.2


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH v4 7/7] remoteproc: qcom: pas: add configuration for in-kernel pdm
  2024-03-11 15:34 [PATCH v4 0/7] soc: qcom: add in-kernel pd-mapper implementation Dmitry Baryshkov
                   ` (5 preceding siblings ...)
  2024-03-11 15:34 ` [PATCH v4 6/7] remoteproc: qcom: mss: " Dmitry Baryshkov
@ 2024-03-11 15:34 ` Dmitry Baryshkov
  2024-03-11 16:58   ` neil.armstrong
  2024-03-11 17:18 ` [PATCH v4 0/7] soc: qcom: add in-kernel pd-mapper implementation neil.armstrong
  7 siblings, 1 reply; 26+ messages in thread
From: Dmitry Baryshkov @ 2024-03-11 15:34 UTC (permalink / raw)
  To: Bjorn Andersson, Konrad Dybcio, Mathieu Poirier
  Cc: linux-arm-msm, linux-remoteproc, Johan Hovold

Add domain / service configuration for the in-kernel protection domain
mapper service.

Signed-off-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
---
 drivers/remoteproc/Kconfig         |   1 +
 drivers/remoteproc/qcom_q6v5_pas.c | 370 +++++++++++++++++++++++++++++++++++--
 2 files changed, 354 insertions(+), 17 deletions(-)

diff --git a/drivers/remoteproc/Kconfig b/drivers/remoteproc/Kconfig
index 8152e845f7a3..7c6ec54c7b35 100644
--- a/drivers/remoteproc/Kconfig
+++ b/drivers/remoteproc/Kconfig
@@ -223,6 +223,7 @@ config QCOM_Q6V5_PAS
 	depends on QCOM_SYSMON || QCOM_SYSMON=n
 	depends on RPMSG_QCOM_GLINK || RPMSG_QCOM_GLINK=n
 	depends on QCOM_AOSS_QMP || QCOM_AOSS_QMP=n
+	depends on QCOM_PD_MAPPER || QCOM_PD_MAPPER=n
 	select MFD_SYSCON
 	select QCOM_PIL_INFO
 	select QCOM_MDT_LOADER
diff --git a/drivers/remoteproc/qcom_q6v5_pas.c b/drivers/remoteproc/qcom_q6v5_pas.c
index 3235249d703d..ba53df7ea30e 100644
--- a/drivers/remoteproc/qcom_q6v5_pas.c
+++ b/drivers/remoteproc/qcom_q6v5_pas.c
@@ -23,6 +23,7 @@
 #include <linux/regulator/consumer.h>
 #include <linux/remoteproc.h>
 #include <linux/soc/qcom/mdt_loader.h>
+#include <linux/soc/qcom/pd_mapper.h>
 #include <linux/soc/qcom/smem.h>
 #include <linux/soc/qcom/smem_state.h>
 
@@ -56,6 +57,9 @@ struct adsp_data {
 	int region_assign_count;
 	bool region_assign_shared;
 	int region_assign_vmid;
+
+	const struct qcom_pdm_domain_data * const *domains;
+	size_t num_domains;
 };
 
 struct qcom_adsp {
@@ -112,6 +116,9 @@ struct qcom_adsp {
 
 	struct qcom_scm_pas_metadata pas_metadata;
 	struct qcom_scm_pas_metadata dtb_pas_metadata;
+
+	const struct qcom_pdm_domain_data * const *domains;
+	size_t num_domains;
 };
 
 static void adsp_segment_dump(struct rproc *rproc, struct rproc_dump_segment *segment,
@@ -256,10 +263,14 @@ static int adsp_start(struct rproc *rproc)
 	struct qcom_adsp *adsp = rproc->priv;
 	int ret;
 
-	ret = qcom_q6v5_prepare(&adsp->q6v5);
+	ret = qcom_pdm_add_domains(adsp->domains, adsp->num_domains);
 	if (ret)
 		return ret;
 
+	ret = qcom_q6v5_prepare(&adsp->q6v5);
+	if (ret)
+		goto del_domains;
+
 	ret = adsp_pds_enable(adsp, adsp->proxy_pds, adsp->proxy_pd_count);
 	if (ret < 0)
 		goto disable_irqs;
@@ -348,6 +359,9 @@ static int adsp_start(struct rproc *rproc)
 disable_irqs:
 	qcom_q6v5_unprepare(&adsp->q6v5);
 
+del_domains:
+	qcom_pdm_del_domains(adsp->domains, adsp->num_domains);
+
 	/* Remove pointer to the loaded firmware, only valid in adsp_load() & adsp_start() */
 	adsp->firmware = NULL;
 
@@ -394,6 +408,8 @@ static int adsp_stop(struct rproc *rproc)
 	if (handover)
 		qcom_pas_handover(&adsp->q6v5);
 
+	qcom_pdm_del_domains(adsp->domains, adsp->num_domains);
+
 	return ret;
 }
 
@@ -730,6 +746,10 @@ static int adsp_probe(struct platform_device *pdev)
 		adsp->dtb_firmware_name = dtb_fw_name;
 		adsp->dtb_pas_id = desc->dtb_pas_id;
 	}
+
+	adsp->domains = desc->domains;
+	adsp->num_domains = desc->num_domains;
+
 	platform_set_drvdata(pdev, adsp);
 
 	ret = device_init_wakeup(adsp->dev, true);
@@ -806,6 +826,172 @@ static void adsp_remove(struct platform_device *pdev)
 	rproc_free(adsp->rproc);
 }
 
+static const struct qcom_pdm_domain_data adsp_audio_pd = {
+	.domain = "msm/adsp/audio_pd",
+	.instance_id = 74,
+	.services = {
+		"avs/audio",
+		NULL,
+	},
+};
+
+static const struct qcom_pdm_domain_data adsp_charger_pd = {
+	.domain = "msm/adsp/charger_pd",
+	.instance_id = 74,
+	.services = { NULL },
+};
+
+static const struct qcom_pdm_domain_data adsp_root_pd = {
+	.domain = "msm/adsp/root_pd",
+	.instance_id = 74,
+	.services = { NULL },
+};
+
+static const struct qcom_pdm_domain_data adsp_root_pd_pdr = {
+	.domain = "msm/adsp/root_pd",
+	.instance_id = 74,
+	.services = {
+		"tms/pdr_enabled",
+		NULL,
+	},
+};
+
+static const struct qcom_pdm_domain_data adsp_sensor_pd = {
+	.domain = "msm/adsp/sensor_pd",
+	.instance_id = 74,
+	.services = { NULL },
+};
+
+static const struct qcom_pdm_domain_data msm8996_adsp_audio_pd = {
+	.domain = "msm/adsp/audio_pd",
+	.instance_id = 4,
+	.services = { NULL },
+};
+
+static const struct qcom_pdm_domain_data msm8996_adsp_root_pd = {
+	.domain = "msm/adsp/root_pd",
+	.instance_id = 4,
+	.services = { NULL },
+};
+
+static const struct qcom_pdm_domain_data *msm8996_adsp_domains[] = {
+	&msm8996_adsp_audio_pd,
+	&msm8996_adsp_root_pd,
+};
+
+static const struct qcom_pdm_domain_data *qcs404_adsp_domains[] = {
+	&adsp_audio_pd,
+	&adsp_root_pd,
+	&adsp_sensor_pd,
+};
+
+static const struct qcom_pdm_domain_data *sc7180_adsp_domains[] = {
+	&adsp_audio_pd,
+	&adsp_root_pd_pdr,
+	&adsp_sensor_pd,
+};
+
+static const struct qcom_pdm_domain_data *sc7280_adsp_domains[] = {
+	&adsp_audio_pd,
+	&adsp_root_pd_pdr,
+	&adsp_charger_pd,
+	&adsp_sensor_pd,
+};
+
+static const struct qcom_pdm_domain_data *sdm845_adsp_domains[] = {
+	&adsp_audio_pd,
+	&adsp_root_pd,
+};
+
+static const struct qcom_pdm_domain_data *sm8350_adsp_domains[] = {
+	&adsp_audio_pd,
+	&adsp_root_pd_pdr,
+	&adsp_charger_pd,
+};
+
+static const struct qcom_pdm_domain_data cdsp_root_pd = {
+	.domain = "msm/cdsp/root_pd",
+	.instance_id = 76,
+	.services = { NULL },
+};
+
+static const struct qcom_pdm_domain_data *sdm845_cdsp_domains[] = {
+	&cdsp_root_pd,
+};
+
+static const struct qcom_pdm_domain_data slpi_root_pd = {
+	.domain = "msm/slpi/root_pd",
+	.instance_id = 90,
+	.services = { NULL },
+};
+
+static const struct qcom_pdm_domain_data slpi_sensor_pd = {
+	.domain = "msm/slpi/sensor_pd",
+	.instance_id = 90,
+	.services = { NULL },
+};
+
+static const struct qcom_pdm_domain_data *sdm845_slpi_domains[] = {
+	&slpi_root_pd,
+	&slpi_sensor_pd,
+};
+
+static const struct qcom_pdm_domain_data mpss_root_pd = {
+	.domain = "msm/modem/root_pd",
+	.instance_id = 180,
+	.services = {
+		NULL,
+	},
+};
+
+static const struct qcom_pdm_domain_data mpss_root_pd_gps = {
+	.domain = "msm/modem/root_pd",
+	.instance_id = 180,
+	.services = {
+		"gps/gps_service",
+		NULL,
+	},
+};
+
+static const struct qcom_pdm_domain_data mpss_root_pd_gps_pdr = {
+	.domain = "msm/modem/root_pd",
+	.instance_id = 180,
+	.services = {
+		"gps/gps_service",
+		"tms/pdr_enabled",
+		NULL,
+	},
+};
+
+static const struct qcom_pdm_domain_data mpss_wlan_pd = {
+	.domain = "msm/modem/wlan_pd",
+	.instance_id = 180,
+	.services = {
+		"kernel/elf_loader",
+		"wlan/fw",
+		NULL,
+	},
+};
+
+static const struct qcom_pdm_domain_data *qcs404_mpss_domains[] = {
+	&mpss_root_pd,
+	&mpss_wlan_pd,
+};
+
+static const struct qcom_pdm_domain_data *sc7180_mpss_domains[] = {
+	&mpss_root_pd_gps_pdr,
+	&mpss_wlan_pd,
+};
+
+static const struct qcom_pdm_domain_data *sm8150_mpss_domains[] = {
+	&mpss_root_pd_gps,
+	&mpss_wlan_pd,
+};
+
+static const struct qcom_pdm_domain_data *sm8350_mpss_domains[] = {
+	&mpss_root_pd_gps,
+};
+
 static const struct adsp_data adsp_resource_init = {
 	.crash_reason_smem = 423,
 	.firmware_name = "adsp.mdt",
@@ -814,6 +1000,55 @@ static const struct adsp_data adsp_resource_init = {
 	.ssr_name = "lpass",
 	.sysmon_name = "adsp",
 	.ssctl_id = 0x14,
+	/* no domains */
+};
+
+static const struct adsp_data qcs404_adsp_resource = {
+	.crash_reason_smem = 423,
+	.firmware_name = "adsp.mdt",
+	.pas_id = 1,
+	.auto_boot = true,
+	.ssr_name = "lpass",
+	.sysmon_name = "adsp",
+	.ssctl_id = 0x14,
+	.domains = qcs404_adsp_domains,
+	.num_domains = ARRAY_SIZE(qcs404_adsp_domains),
+};
+
+static const struct adsp_data sc7180_adsp_resource = {
+	.crash_reason_smem = 423,
+	.firmware_name = "adsp.mdt",
+	.pas_id = 1,
+	.auto_boot = true,
+	.proxy_pd_names = (char*[]){
+		"lcx",
+		"lmx",
+		NULL
+	},
+	.load_state = "adsp",
+	.ssr_name = "lpass",
+	.sysmon_name = "adsp",
+	.ssctl_id = 0x14,
+	.domains = sc7180_adsp_domains,
+	.num_domains = ARRAY_SIZE(sc7180_adsp_domains),
+};
+
+static const struct adsp_data sc7280_adsp_resource = {
+	.crash_reason_smem = 423,
+	.firmware_name = "adsp.mdt",
+	.pas_id = 1,
+	.auto_boot = true,
+	.proxy_pd_names = (char*[]){
+		"lcx",
+		"lmx",
+		NULL
+	},
+	.load_state = "adsp",
+	.ssr_name = "lpass",
+	.sysmon_name = "adsp",
+	.ssctl_id = 0x14,
+	.domains = sc7280_adsp_domains,
+	.num_domains = ARRAY_SIZE(sc7280_adsp_domains),
 };
 
 static const struct adsp_data sdm845_adsp_resource_init = {
@@ -825,6 +1060,20 @@ static const struct adsp_data sdm845_adsp_resource_init = {
 	.ssr_name = "lpass",
 	.sysmon_name = "adsp",
 	.ssctl_id = 0x14,
+	.domains = sdm845_adsp_domains,
+	.num_domains = ARRAY_SIZE(sdm845_adsp_domains),
+};
+
+static const struct adsp_data sm6115_adsp_resource = {
+	.crash_reason_smem = 423,
+	.firmware_name = "adsp.mdt",
+	.pas_id = 1,
+	.auto_boot = true,
+	.ssr_name = "lpass",
+	.sysmon_name = "adsp",
+	.ssctl_id = 0x14,
+	.domains = sdm845_adsp_domains,
+	.num_domains = ARRAY_SIZE(sdm845_adsp_domains),
 };
 
 static const struct adsp_data sm6350_adsp_resource = {
@@ -841,6 +1090,8 @@ static const struct adsp_data sm6350_adsp_resource = {
 	.ssr_name = "lpass",
 	.sysmon_name = "adsp",
 	.ssctl_id = 0x14,
+	.domains = qcs404_adsp_domains,
+	.num_domains = ARRAY_SIZE(qcs404_adsp_domains),
 };
 
 static const struct adsp_data sm6375_mpss_resource = {
@@ -856,6 +1107,7 @@ static const struct adsp_data sm6375_mpss_resource = {
 	.ssr_name = "mpss",
 	.sysmon_name = "modem",
 	.ssctl_id = 0x12,
+	// TODO: domains
 };
 
 static const struct adsp_data sm8150_adsp_resource = {
@@ -871,6 +1123,8 @@ static const struct adsp_data sm8150_adsp_resource = {
 	.ssr_name = "lpass",
 	.sysmon_name = "adsp",
 	.ssctl_id = 0x14,
+	.domains = sdm845_adsp_domains,
+	.num_domains = ARRAY_SIZE(sdm845_adsp_domains),
 };
 
 static const struct adsp_data sm8250_adsp_resource = {
@@ -887,6 +1141,8 @@ static const struct adsp_data sm8250_adsp_resource = {
 	.ssr_name = "lpass",
 	.sysmon_name = "adsp",
 	.ssctl_id = 0x14,
+	.domains = sdm845_adsp_domains,
+	.num_domains = ARRAY_SIZE(sdm845_adsp_domains),
 };
 
 static const struct adsp_data sm8350_adsp_resource = {
@@ -903,6 +1159,8 @@ static const struct adsp_data sm8350_adsp_resource = {
 	.ssr_name = "lpass",
 	.sysmon_name = "adsp",
 	.ssctl_id = 0x14,
+	.domains = sm8350_adsp_domains,
+	.num_domains = ARRAY_SIZE(sm8350_adsp_domains),
 };
 
 static const struct adsp_data msm8996_adsp_resource = {
@@ -917,9 +1175,11 @@ static const struct adsp_data msm8996_adsp_resource = {
 	.ssr_name = "lpass",
 	.sysmon_name = "adsp",
 	.ssctl_id = 0x14,
+	.domains = msm8996_adsp_domains,
+	.num_domains = ARRAY_SIZE(msm8996_adsp_domains),
 };
 
-static const struct adsp_data cdsp_resource_init = {
+static const struct adsp_data qcs404_cdsp_resource = {
 	.crash_reason_smem = 601,
 	.firmware_name = "cdsp.mdt",
 	.pas_id = 18,
@@ -927,6 +1187,8 @@ static const struct adsp_data cdsp_resource_init = {
 	.ssr_name = "cdsp",
 	.sysmon_name = "cdsp",
 	.ssctl_id = 0x17,
+	.domains = sdm845_cdsp_domains,
+	.num_domains = ARRAY_SIZE(sdm845_cdsp_domains),
 };
 
 static const struct adsp_data sdm845_cdsp_resource_init = {
@@ -938,6 +1200,8 @@ static const struct adsp_data sdm845_cdsp_resource_init = {
 	.ssr_name = "cdsp",
 	.sysmon_name = "cdsp",
 	.ssctl_id = 0x17,
+	.domains = sdm845_cdsp_domains,
+	.num_domains = ARRAY_SIZE(sdm845_cdsp_domains),
 };
 
 static const struct adsp_data sm6350_cdsp_resource = {
@@ -954,6 +1218,8 @@ static const struct adsp_data sm6350_cdsp_resource = {
 	.ssr_name = "cdsp",
 	.sysmon_name = "cdsp",
 	.ssctl_id = 0x17,
+	.domains = sdm845_cdsp_domains,
+	.num_domains = ARRAY_SIZE(sdm845_cdsp_domains),
 };
 
 static const struct adsp_data sm8150_cdsp_resource = {
@@ -969,6 +1235,8 @@ static const struct adsp_data sm8150_cdsp_resource = {
 	.ssr_name = "cdsp",
 	.sysmon_name = "cdsp",
 	.ssctl_id = 0x17,
+	.domains = sdm845_cdsp_domains,
+	.num_domains = ARRAY_SIZE(sdm845_cdsp_domains),
 };
 
 static const struct adsp_data sm8250_cdsp_resource = {
@@ -984,6 +1252,8 @@ static const struct adsp_data sm8250_cdsp_resource = {
 	.ssr_name = "cdsp",
 	.sysmon_name = "cdsp",
 	.ssctl_id = 0x17,
+	.domains = sdm845_cdsp_domains,
+	.num_domains = ARRAY_SIZE(sdm845_cdsp_domains),
 };
 
 static const struct adsp_data sc8280xp_nsp0_resource = {
@@ -998,6 +1268,8 @@ static const struct adsp_data sc8280xp_nsp0_resource = {
 	.ssr_name = "cdsp0",
 	.sysmon_name = "cdsp",
 	.ssctl_id = 0x17,
+	.domains = sdm845_cdsp_domains,
+	.num_domains = ARRAY_SIZE(sdm845_cdsp_domains),
 };
 
 static const struct adsp_data sc8280xp_nsp1_resource = {
@@ -1012,6 +1284,7 @@ static const struct adsp_data sc8280xp_nsp1_resource = {
 	.ssr_name = "cdsp1",
 	.sysmon_name = "cdsp1",
 	.ssctl_id = 0x20,
+	// TODO: domains
 };
 
 static const struct adsp_data sm8350_cdsp_resource = {
@@ -1028,9 +1301,11 @@ static const struct adsp_data sm8350_cdsp_resource = {
 	.ssr_name = "cdsp",
 	.sysmon_name = "cdsp",
 	.ssctl_id = 0x17,
+	.domains = sdm845_cdsp_domains,
+	.num_domains = ARRAY_SIZE(sdm845_cdsp_domains),
 };
 
-static const struct adsp_data mpss_resource_init = {
+static const struct adsp_data sc7180_mpss_resource = {
 	.crash_reason_smem = 421,
 	.firmware_name = "modem.mdt",
 	.pas_id = 4,
@@ -1045,6 +1320,27 @@ static const struct adsp_data mpss_resource_init = {
 	.ssr_name = "mpss",
 	.sysmon_name = "modem",
 	.ssctl_id = 0x12,
+	.domains = sc7180_mpss_domains,
+	.num_domains = ARRAY_SIZE(sc7180_mpss_domains),
+};
+
+static const struct adsp_data sc7280_mpss_resource = {
+	.crash_reason_smem = 421,
+	.firmware_name = "modem.mdt",
+	.pas_id = 4,
+	.minidump_id = 3,
+	.auto_boot = false,
+	.proxy_pd_names = (char*[]){
+		"cx",
+		"mss",
+		NULL
+	},
+	.load_state = "modem",
+	.ssr_name = "mpss",
+	.sysmon_name = "modem",
+	.ssctl_id = 0x12,
+	.domains = sm8350_mpss_domains,
+	.num_domains = ARRAY_SIZE(sm8350_mpss_domains),
 };
 
 static const struct adsp_data sc8180x_mpss_resource = {
@@ -1060,6 +1356,27 @@ static const struct adsp_data sc8180x_mpss_resource = {
 	.ssr_name = "mpss",
 	.sysmon_name = "modem",
 	.ssctl_id = 0x12,
+	.domains = sm8150_mpss_domains,
+	.num_domains = ARRAY_SIZE(sm8150_mpss_domains),
+};
+
+static const struct adsp_data sm8150_mpss_resource = {
+	.crash_reason_smem = 421,
+	.firmware_name = "modem.mdt",
+	.pas_id = 4,
+	.minidump_id = 3,
+	.auto_boot = false,
+	.proxy_pd_names = (char*[]){
+		"cx",
+		"mss",
+		NULL
+	},
+	.load_state = "modem",
+	.ssr_name = "mpss",
+	.sysmon_name = "modem",
+	.ssctl_id = 0x12,
+	.domains = sm8150_mpss_domains,
+	.num_domains = ARRAY_SIZE(sm8150_mpss_domains),
 };
 
 static const struct adsp_data msm8996_slpi_resource_init = {
@@ -1074,6 +1391,7 @@ static const struct adsp_data msm8996_slpi_resource_init = {
 	.ssr_name = "dsps",
 	.sysmon_name = "slpi",
 	.ssctl_id = 0x16,
+	/* no domains */
 };
 
 static const struct adsp_data sdm845_slpi_resource_init = {
@@ -1090,9 +1408,11 @@ static const struct adsp_data sdm845_slpi_resource_init = {
 	.ssr_name = "dsps",
 	.sysmon_name = "slpi",
 	.ssctl_id = 0x16,
+	.domains = sdm845_slpi_domains,
+	.num_domains = ARRAY_SIZE(sdm845_slpi_domains),
 };
 
-static const struct adsp_data wcss_resource_init = {
+static const struct adsp_data qcs404_wcss_resource = {
 	.crash_reason_smem = 421,
 	.firmware_name = "wcnss.mdt",
 	.pas_id = 6,
@@ -1100,6 +1420,8 @@ static const struct adsp_data wcss_resource_init = {
 	.ssr_name = "mpss",
 	.sysmon_name = "wcnss",
 	.ssctl_id = 0x12,
+	.domains = qcs404_mpss_domains,
+	.num_domains = ARRAY_SIZE(qcs404_mpss_domains),
 };
 
 static const struct adsp_data sdx55_mpss_resource = {
@@ -1115,6 +1437,7 @@ static const struct adsp_data sdx55_mpss_resource = {
 	.ssr_name = "mpss",
 	.sysmon_name = "modem",
 	.ssctl_id = 0x22,
+	// TODO: domains
 };
 
 static const struct adsp_data sm8450_mpss_resource = {
@@ -1133,6 +1456,8 @@ static const struct adsp_data sm8450_mpss_resource = {
 	.ssr_name = "mpss",
 	.sysmon_name = "modem",
 	.ssctl_id = 0x12,
+	.domains = sm8350_mpss_domains,
+	.num_domains = ARRAY_SIZE(sm8350_mpss_domains),
 };
 
 static const struct adsp_data sm8550_adsp_resource = {
@@ -1152,6 +1477,8 @@ static const struct adsp_data sm8550_adsp_resource = {
 	.ssr_name = "lpass",
 	.sysmon_name = "adsp",
 	.ssctl_id = 0x14,
+	.domains = sm8350_adsp_domains,
+	.num_domains = ARRAY_SIZE(sm8350_adsp_domains),
 };
 
 static const struct adsp_data sm8550_cdsp_resource = {
@@ -1172,6 +1499,8 @@ static const struct adsp_data sm8550_cdsp_resource = {
 	.ssr_name = "cdsp",
 	.sysmon_name = "cdsp",
 	.ssctl_id = 0x17,
+	.domains = sdm845_cdsp_domains,
+	.num_domains = ARRAY_SIZE(sdm845_cdsp_domains),
 };
 
 static const struct adsp_data sm8550_mpss_resource = {
@@ -1195,6 +1524,8 @@ static const struct adsp_data sm8550_mpss_resource = {
 	.region_assign_idx = 2,
 	.region_assign_count = 1,
 	.region_assign_vmid = QCOM_SCM_VMID_MSS_MSA,
+	.domains = sm8350_mpss_domains,
+	.num_domains = ARRAY_SIZE(sm8350_mpss_domains),
 };
 
 static const struct adsp_data sc7280_wpss_resource = {
@@ -1235,6 +1566,8 @@ static const struct adsp_data sm8650_cdsp_resource = {
 	.region_assign_count = 1,
 	.region_assign_shared = true,
 	.region_assign_vmid = QCOM_SCM_VMID_CDSP,
+	.domains = sdm845_cdsp_domains,
+	.num_domains = ARRAY_SIZE(sdm845_cdsp_domains),
 };
 
 static const struct adsp_data sm8650_mpss_resource = {
@@ -1258,29 +1591,32 @@ static const struct adsp_data sm8650_mpss_resource = {
 	.region_assign_idx = 2,
 	.region_assign_count = 3,
 	.region_assign_vmid = QCOM_SCM_VMID_MSS_MSA,
+	.domains = sm8350_mpss_domains,
+	.num_domains = ARRAY_SIZE(sm8350_mpss_domains),
 };
 
 static const struct of_device_id adsp_of_match[] = {
 	{ .compatible = "qcom,msm8226-adsp-pil", .data = &adsp_resource_init},
+	// FIXME: is msm8996 adsp audio domain applicable to msm8953 ?
 	{ .compatible = "qcom,msm8953-adsp-pil", .data = &msm8996_adsp_resource},
 	{ .compatible = "qcom,msm8974-adsp-pil", .data = &adsp_resource_init},
 	{ .compatible = "qcom,msm8996-adsp-pil", .data = &msm8996_adsp_resource},
 	{ .compatible = "qcom,msm8996-slpi-pil", .data = &msm8996_slpi_resource_init},
 	{ .compatible = "qcom,msm8998-adsp-pas", .data = &msm8996_adsp_resource},
 	{ .compatible = "qcom,msm8998-slpi-pas", .data = &msm8996_slpi_resource_init},
-	{ .compatible = "qcom,qcs404-adsp-pas", .data = &adsp_resource_init },
-	{ .compatible = "qcom,qcs404-cdsp-pas", .data = &cdsp_resource_init },
-	{ .compatible = "qcom,qcs404-wcss-pas", .data = &wcss_resource_init },
-	{ .compatible = "qcom,sc7180-adsp-pas", .data = &sm8250_adsp_resource},
-	{ .compatible = "qcom,sc7180-mpss-pas", .data = &mpss_resource_init},
-	{ .compatible = "qcom,sc7280-adsp-pas", .data = &sm8350_adsp_resource},
+	{ .compatible = "qcom,qcs404-adsp-pas", .data = &qcs404_adsp_resource },
+	{ .compatible = "qcom,qcs404-cdsp-pas", .data = &qcs404_cdsp_resource },
+	{ .compatible = "qcom,qcs404-wcss-pas", .data = &qcs404_wcss_resource },
+	{ .compatible = "qcom,sc7180-adsp-pas", .data = &sc7180_adsp_resource},
+	{ .compatible = "qcom,sc7180-mpss-pas", .data = &sc7180_mpss_resource},
+	{ .compatible = "qcom,sc7280-adsp-pas", .data = &sc7280_adsp_resource},
 	{ .compatible = "qcom,sc7280-cdsp-pas", .data = &sm6350_cdsp_resource},
-	{ .compatible = "qcom,sc7280-mpss-pas", .data = &mpss_resource_init},
+	{ .compatible = "qcom,sc7280-mpss-pas", .data = &sc7280_mpss_resource},
 	{ .compatible = "qcom,sc7280-wpss-pas", .data = &sc7280_wpss_resource},
 	{ .compatible = "qcom,sc8180x-adsp-pas", .data = &sm8150_adsp_resource},
 	{ .compatible = "qcom,sc8180x-cdsp-pas", .data = &sm8150_cdsp_resource},
 	{ .compatible = "qcom,sc8180x-mpss-pas", .data = &sc8180x_mpss_resource},
-	{ .compatible = "qcom,sc8280xp-adsp-pas", .data = &sm8250_adsp_resource},
+	{ .compatible = "qcom,sc8280xp-adsp-pas", .data = &sm8350_adsp_resource},
 	{ .compatible = "qcom,sc8280xp-nsp0-pas", .data = &sc8280xp_nsp0_resource},
 	{ .compatible = "qcom,sc8280xp-nsp1-pas", .data = &sc8280xp_nsp1_resource},
 	{ .compatible = "qcom,sdm660-adsp-pas", .data = &adsp_resource_init},
@@ -1288,18 +1624,18 @@ static const struct of_device_id adsp_of_match[] = {
 	{ .compatible = "qcom,sdm845-cdsp-pas", .data = &sdm845_cdsp_resource_init},
 	{ .compatible = "qcom,sdm845-slpi-pas", .data = &sdm845_slpi_resource_init},
 	{ .compatible = "qcom,sdx55-mpss-pas", .data = &sdx55_mpss_resource},
-	{ .compatible = "qcom,sm6115-adsp-pas", .data = &adsp_resource_init},
-	{ .compatible = "qcom,sm6115-cdsp-pas", .data = &cdsp_resource_init},
+	{ .compatible = "qcom,sm6115-adsp-pas", .data = &sm6115_adsp_resource},
+	{ .compatible = "qcom,sm6115-cdsp-pas", .data = &qcs404_cdsp_resource},
 	{ .compatible = "qcom,sm6115-mpss-pas", .data = &sc8180x_mpss_resource},
 	{ .compatible = "qcom,sm6350-adsp-pas", .data = &sm6350_adsp_resource},
 	{ .compatible = "qcom,sm6350-cdsp-pas", .data = &sm6350_cdsp_resource},
-	{ .compatible = "qcom,sm6350-mpss-pas", .data = &mpss_resource_init},
+	{ .compatible = "qcom,sm6350-mpss-pas", .data = &sm8150_mpss_resource},
 	{ .compatible = "qcom,sm6375-adsp-pas", .data = &sm6350_adsp_resource},
 	{ .compatible = "qcom,sm6375-cdsp-pas", .data = &sm8150_cdsp_resource},
 	{ .compatible = "qcom,sm6375-mpss-pas", .data = &sm6375_mpss_resource},
 	{ .compatible = "qcom,sm8150-adsp-pas", .data = &sm8150_adsp_resource},
 	{ .compatible = "qcom,sm8150-cdsp-pas", .data = &sm8150_cdsp_resource},
-	{ .compatible = "qcom,sm8150-mpss-pas", .data = &mpss_resource_init},
+	{ .compatible = "qcom,sm8150-mpss-pas", .data = &sm8150_mpss_resource},
 	{ .compatible = "qcom,sm8150-slpi-pas", .data = &sdm845_slpi_resource_init},
 	{ .compatible = "qcom,sm8250-adsp-pas", .data = &sm8250_adsp_resource},
 	{ .compatible = "qcom,sm8250-cdsp-pas", .data = &sm8250_cdsp_resource},
@@ -1307,7 +1643,7 @@ static const struct of_device_id adsp_of_match[] = {
 	{ .compatible = "qcom,sm8350-adsp-pas", .data = &sm8350_adsp_resource},
 	{ .compatible = "qcom,sm8350-cdsp-pas", .data = &sm8350_cdsp_resource},
 	{ .compatible = "qcom,sm8350-slpi-pas", .data = &sdm845_slpi_resource_init},
-	{ .compatible = "qcom,sm8350-mpss-pas", .data = &mpss_resource_init},
+	{ .compatible = "qcom,sm8350-mpss-pas", .data = &sc7280_mpss_resource},
 	{ .compatible = "qcom,sm8450-adsp-pas", .data = &sm8350_adsp_resource},
 	{ .compatible = "qcom,sm8450-cdsp-pas", .data = &sm8350_cdsp_resource},
 	{ .compatible = "qcom,sm8450-slpi-pas", .data = &sdm845_slpi_resource_init},

-- 
2.39.2


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* Re: [PATCH v4 7/7] remoteproc: qcom: pas: add configuration for in-kernel pdm
  2024-03-11 15:34 ` [PATCH v4 7/7] remoteproc: qcom: pas: " Dmitry Baryshkov
@ 2024-03-11 16:58   ` neil.armstrong
  0 siblings, 0 replies; 26+ messages in thread
From: neil.armstrong @ 2024-03-11 16:58 UTC (permalink / raw)
  To: Dmitry Baryshkov, Bjorn Andersson, Konrad Dybcio, Mathieu Poirier
  Cc: linux-arm-msm, linux-remoteproc, Johan Hovold

On 11/03/2024 16:34, Dmitry Baryshkov wrote:
> Add domain / service configuration for the in-kernel protection domain
> mapper service.
> 
> Signed-off-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
> ---
>   drivers/remoteproc/Kconfig         |   1 +
>   drivers/remoteproc/qcom_q6v5_pas.c | 370 +++++++++++++++++++++++++++++++++++--
>   2 files changed, 354 insertions(+), 17 deletions(-)
> 
> diff --git a/drivers/remoteproc/Kconfig b/drivers/remoteproc/Kconfig
> index 8152e845f7a3..7c6ec54c7b35 100644
> --- a/drivers/remoteproc/Kconfig
> +++ b/drivers/remoteproc/Kconfig
> @@ -223,6 +223,7 @@ config QCOM_Q6V5_PAS
>   	depends on QCOM_SYSMON || QCOM_SYSMON=n
>   	depends on RPMSG_QCOM_GLINK || RPMSG_QCOM_GLINK=n
>   	depends on QCOM_AOSS_QMP || QCOM_AOSS_QMP=n
> +	depends on QCOM_PD_MAPPER || QCOM_PD_MAPPER=n
>   	select MFD_SYSCON
>   	select QCOM_PIL_INFO
>   	select QCOM_MDT_LOADER
> diff --git a/drivers/remoteproc/qcom_q6v5_pas.c b/drivers/remoteproc/qcom_q6v5_pas.c
> index 3235249d703d..ba53df7ea30e 100644
> --- a/drivers/remoteproc/qcom_q6v5_pas.c
> +++ b/drivers/remoteproc/qcom_q6v5_pas.c
> @@ -23,6 +23,7 @@
>   #include <linux/regulator/consumer.h>
>   #include <linux/remoteproc.h>
>   #include <linux/soc/qcom/mdt_loader.h>
> +#include <linux/soc/qcom/pd_mapper.h>
>   #include <linux/soc/qcom/smem.h>
>   #include <linux/soc/qcom/smem_state.h>
>   
> @@ -56,6 +57,9 @@ struct adsp_data {
>   	int region_assign_count;
>   	bool region_assign_shared;
>   	int region_assign_vmid;
> +
> +	const struct qcom_pdm_domain_data * const *domains;
> +	size_t num_domains;
>   };
>   
>   struct qcom_adsp {
> @@ -112,6 +116,9 @@ struct qcom_adsp {
>   
>   	struct qcom_scm_pas_metadata pas_metadata;
>   	struct qcom_scm_pas_metadata dtb_pas_metadata;
> +
> +	const struct qcom_pdm_domain_data * const *domains;
> +	size_t num_domains;
>   };
>   
>   static void adsp_segment_dump(struct rproc *rproc, struct rproc_dump_segment *segment,
> @@ -256,10 +263,14 @@ static int adsp_start(struct rproc *rproc)
>   	struct qcom_adsp *adsp = rproc->priv;
>   	int ret;
>   
> -	ret = qcom_q6v5_prepare(&adsp->q6v5);
> +	ret = qcom_pdm_add_domains(adsp->domains, adsp->num_domains);
>   	if (ret)
>   		return ret;
>   
> +	ret = qcom_q6v5_prepare(&adsp->q6v5);
> +	if (ret)
> +		goto del_domains;
> +
>   	ret = adsp_pds_enable(adsp, adsp->proxy_pds, adsp->proxy_pd_count);
>   	if (ret < 0)
>   		goto disable_irqs;
> @@ -348,6 +359,9 @@ static int adsp_start(struct rproc *rproc)
>   disable_irqs:
>   	qcom_q6v5_unprepare(&adsp->q6v5);
>   
> +del_domains:
> +	qcom_pdm_del_domains(adsp->domains, adsp->num_domains);
> +
>   	/* Remove pointer to the loaded firmware, only valid in adsp_load() & adsp_start() */
>   	adsp->firmware = NULL;
>   
> @@ -394,6 +408,8 @@ static int adsp_stop(struct rproc *rproc)
>   	if (handover)
>   		qcom_pas_handover(&adsp->q6v5);
>   
> +	qcom_pdm_del_domains(adsp->domains, adsp->num_domains);
> +
>   	return ret;
>   }
>   
> @@ -730,6 +746,10 @@ static int adsp_probe(struct platform_device *pdev)
>   		adsp->dtb_firmware_name = dtb_fw_name;
>   		adsp->dtb_pas_id = desc->dtb_pas_id;
>   	}
> +
> +	adsp->domains = desc->domains;
> +	adsp->num_domains = desc->num_domains;
> +
>   	platform_set_drvdata(pdev, adsp);
>   
>   	ret = device_init_wakeup(adsp->dev, true);
> @@ -806,6 +826,172 @@ static void adsp_remove(struct platform_device *pdev)
>   	rproc_free(adsp->rproc);
>   }
>   
> +static const struct qcom_pdm_domain_data adsp_audio_pd = {
> +	.domain = "msm/adsp/audio_pd",
> +	.instance_id = 74,
> +	.services = {
> +		"avs/audio",
> +		NULL,
> +	},
> +};
> +
> +static const struct qcom_pdm_domain_data adsp_charger_pd = {
> +	.domain = "msm/adsp/charger_pd",
> +	.instance_id = 74,
> +	.services = { NULL },
> +};
> +
> +static const struct qcom_pdm_domain_data adsp_root_pd = {
> +	.domain = "msm/adsp/root_pd",
> +	.instance_id = 74,
> +	.services = { NULL },
> +};
> +
> +static const struct qcom_pdm_domain_data adsp_root_pd_pdr = {
> +	.domain = "msm/adsp/root_pd",
> +	.instance_id = 74,
> +	.services = {
> +		"tms/pdr_enabled",
> +		NULL,
> +	},
> +};
> +
> +static const struct qcom_pdm_domain_data adsp_sensor_pd = {
> +	.domain = "msm/adsp/sensor_pd",
> +	.instance_id = 74,
> +	.services = { NULL },
> +};
> +
> +static const struct qcom_pdm_domain_data msm8996_adsp_audio_pd = {
> +	.domain = "msm/adsp/audio_pd",
> +	.instance_id = 4,
> +	.services = { NULL },
> +};
> +
> +static const struct qcom_pdm_domain_data msm8996_adsp_root_pd = {
> +	.domain = "msm/adsp/root_pd",
> +	.instance_id = 4,
> +	.services = { NULL },
> +};
> +
> +static const struct qcom_pdm_domain_data *msm8996_adsp_domains[] = {
> +	&msm8996_adsp_audio_pd,
> +	&msm8996_adsp_root_pd,
> +};
> +
> +static const struct qcom_pdm_domain_data *qcs404_adsp_domains[] = {
> +	&adsp_audio_pd,
> +	&adsp_root_pd,
> +	&adsp_sensor_pd,
> +};
> +
> +static const struct qcom_pdm_domain_data *sc7180_adsp_domains[] = {
> +	&adsp_audio_pd,
> +	&adsp_root_pd_pdr,
> +	&adsp_sensor_pd,
> +};
> +
> +static const struct qcom_pdm_domain_data *sc7280_adsp_domains[] = {
> +	&adsp_audio_pd,
> +	&adsp_root_pd_pdr,
> +	&adsp_charger_pd,
> +	&adsp_sensor_pd,
> +};
> +
> +static const struct qcom_pdm_domain_data *sdm845_adsp_domains[] = {
> +	&adsp_audio_pd,
> +	&adsp_root_pd,
> +};
> +
> +static const struct qcom_pdm_domain_data *sm8350_adsp_domains[] = {
> +	&adsp_audio_pd,
> +	&adsp_root_pd_pdr,
> +	&adsp_charger_pd,
> +};
> +
> +static const struct qcom_pdm_domain_data cdsp_root_pd = {
> +	.domain = "msm/cdsp/root_pd",
> +	.instance_id = 76,
> +	.services = { NULL },
> +};
> +
> +static const struct qcom_pdm_domain_data *sdm845_cdsp_domains[] = {
> +	&cdsp_root_pd,
> +};
> +
> +static const struct qcom_pdm_domain_data slpi_root_pd = {
> +	.domain = "msm/slpi/root_pd",
> +	.instance_id = 90,
> +	.services = { NULL },
> +};
> +
> +static const struct qcom_pdm_domain_data slpi_sensor_pd = {
> +	.domain = "msm/slpi/sensor_pd",
> +	.instance_id = 90,
> +	.services = { NULL },
> +};
> +
> +static const struct qcom_pdm_domain_data *sdm845_slpi_domains[] = {
> +	&slpi_root_pd,
> +	&slpi_sensor_pd,
> +};
> +
> +static const struct qcom_pdm_domain_data mpss_root_pd = {
> +	.domain = "msm/modem/root_pd",
> +	.instance_id = 180,
> +	.services = {
> +		NULL,
> +	},
> +};
> +
> +static const struct qcom_pdm_domain_data mpss_root_pd_gps = {
> +	.domain = "msm/modem/root_pd",
> +	.instance_id = 180,
> +	.services = {
> +		"gps/gps_service",
> +		NULL,
> +	},
> +};
> +
> +static const struct qcom_pdm_domain_data mpss_root_pd_gps_pdr = {
> +	.domain = "msm/modem/root_pd",
> +	.instance_id = 180,
> +	.services = {
> +		"gps/gps_service",
> +		"tms/pdr_enabled",
> +		NULL,
> +	},
> +};
> +
> +static const struct qcom_pdm_domain_data mpss_wlan_pd = {
> +	.domain = "msm/modem/wlan_pd",
> +	.instance_id = 180,
> +	.services = {
> +		"kernel/elf_loader",
> +		"wlan/fw",
> +		NULL,
> +	},
> +};
> +
> +static const struct qcom_pdm_domain_data *qcs404_mpss_domains[] = {
> +	&mpss_root_pd,
> +	&mpss_wlan_pd,
> +};
> +
> +static const struct qcom_pdm_domain_data *sc7180_mpss_domains[] = {
> +	&mpss_root_pd_gps_pdr,
> +	&mpss_wlan_pd,
> +};
> +
> +static const struct qcom_pdm_domain_data *sm8150_mpss_domains[] = {
> +	&mpss_root_pd_gps,
> +	&mpss_wlan_pd,
> +};
> +
> +static const struct qcom_pdm_domain_data *sm8350_mpss_domains[] = {
> +	&mpss_root_pd_gps,
> +};
> +
>   static const struct adsp_data adsp_resource_init = {
>   	.crash_reason_smem = 423,
>   	.firmware_name = "adsp.mdt",
> @@ -814,6 +1000,55 @@ static const struct adsp_data adsp_resource_init = {
>   	.ssr_name = "lpass",
>   	.sysmon_name = "adsp",
>   	.ssctl_id = 0x14,
> +	/* no domains */
> +};
> +
> +static const struct adsp_data qcs404_adsp_resource = {
> +	.crash_reason_smem = 423,
> +	.firmware_name = "adsp.mdt",
> +	.pas_id = 1,
> +	.auto_boot = true,
> +	.ssr_name = "lpass",
> +	.sysmon_name = "adsp",
> +	.ssctl_id = 0x14,
> +	.domains = qcs404_adsp_domains,
> +	.num_domains = ARRAY_SIZE(qcs404_adsp_domains),
> +};
> +
> +static const struct adsp_data sc7180_adsp_resource = {
> +	.crash_reason_smem = 423,
> +	.firmware_name = "adsp.mdt",
> +	.pas_id = 1,
> +	.auto_boot = true,
> +	.proxy_pd_names = (char*[]){
> +		"lcx",
> +		"lmx",
> +		NULL
> +	},
> +	.load_state = "adsp",
> +	.ssr_name = "lpass",
> +	.sysmon_name = "adsp",
> +	.ssctl_id = 0x14,
> +	.domains = sc7180_adsp_domains,
> +	.num_domains = ARRAY_SIZE(sc7180_adsp_domains),
> +};
> +
> +static const struct adsp_data sc7280_adsp_resource = {
> +	.crash_reason_smem = 423,
> +	.firmware_name = "adsp.mdt",
> +	.pas_id = 1,
> +	.auto_boot = true,
> +	.proxy_pd_names = (char*[]){
> +		"lcx",
> +		"lmx",
> +		NULL
> +	},
> +	.load_state = "adsp",
> +	.ssr_name = "lpass",
> +	.sysmon_name = "adsp",
> +	.ssctl_id = 0x14,
> +	.domains = sc7280_adsp_domains,
> +	.num_domains = ARRAY_SIZE(sc7280_adsp_domains),
>   };
>   
>   static const struct adsp_data sdm845_adsp_resource_init = {
> @@ -825,6 +1060,20 @@ static const struct adsp_data sdm845_adsp_resource_init = {
>   	.ssr_name = "lpass",
>   	.sysmon_name = "adsp",
>   	.ssctl_id = 0x14,
> +	.domains = sdm845_adsp_domains,
> +	.num_domains = ARRAY_SIZE(sdm845_adsp_domains),
> +};
> +
> +static const struct adsp_data sm6115_adsp_resource = {
> +	.crash_reason_smem = 423,
> +	.firmware_name = "adsp.mdt",
> +	.pas_id = 1,
> +	.auto_boot = true,
> +	.ssr_name = "lpass",
> +	.sysmon_name = "adsp",
> +	.ssctl_id = 0x14,
> +	.domains = sdm845_adsp_domains,
> +	.num_domains = ARRAY_SIZE(sdm845_adsp_domains),
>   };
>   
>   static const struct adsp_data sm6350_adsp_resource = {
> @@ -841,6 +1090,8 @@ static const struct adsp_data sm6350_adsp_resource = {
>   	.ssr_name = "lpass",
>   	.sysmon_name = "adsp",
>   	.ssctl_id = 0x14,
> +	.domains = qcs404_adsp_domains,
> +	.num_domains = ARRAY_SIZE(qcs404_adsp_domains),
>   };
>   
>   static const struct adsp_data sm6375_mpss_resource = {
> @@ -856,6 +1107,7 @@ static const struct adsp_data sm6375_mpss_resource = {
>   	.ssr_name = "mpss",
>   	.sysmon_name = "modem",
>   	.ssctl_id = 0x12,
> +	// TODO: domains
>   };
>   
>   static const struct adsp_data sm8150_adsp_resource = {
> @@ -871,6 +1123,8 @@ static const struct adsp_data sm8150_adsp_resource = {
>   	.ssr_name = "lpass",
>   	.sysmon_name = "adsp",
>   	.ssctl_id = 0x14,
> +	.domains = sdm845_adsp_domains,
> +	.num_domains = ARRAY_SIZE(sdm845_adsp_domains),
>   };
>   
>   static const struct adsp_data sm8250_adsp_resource = {
> @@ -887,6 +1141,8 @@ static const struct adsp_data sm8250_adsp_resource = {
>   	.ssr_name = "lpass",
>   	.sysmon_name = "adsp",
>   	.ssctl_id = 0x14,
> +	.domains = sdm845_adsp_domains,
> +	.num_domains = ARRAY_SIZE(sdm845_adsp_domains),
>   };
>   
>   static const struct adsp_data sm8350_adsp_resource = {
> @@ -903,6 +1159,8 @@ static const struct adsp_data sm8350_adsp_resource = {
>   	.ssr_name = "lpass",
>   	.sysmon_name = "adsp",
>   	.ssctl_id = 0x14,
> +	.domains = sm8350_adsp_domains,
> +	.num_domains = ARRAY_SIZE(sm8350_adsp_domains),
>   };
>   
>   static const struct adsp_data msm8996_adsp_resource = {
> @@ -917,9 +1175,11 @@ static const struct adsp_data msm8996_adsp_resource = {
>   	.ssr_name = "lpass",
>   	.sysmon_name = "adsp",
>   	.ssctl_id = 0x14,
> +	.domains = msm8996_adsp_domains,
> +	.num_domains = ARRAY_SIZE(msm8996_adsp_domains),
>   };
>   
> -static const struct adsp_data cdsp_resource_init = {
> +static const struct adsp_data qcs404_cdsp_resource = {
>   	.crash_reason_smem = 601,
>   	.firmware_name = "cdsp.mdt",
>   	.pas_id = 18,
> @@ -927,6 +1187,8 @@ static const struct adsp_data cdsp_resource_init = {
>   	.ssr_name = "cdsp",
>   	.sysmon_name = "cdsp",
>   	.ssctl_id = 0x17,
> +	.domains = sdm845_cdsp_domains,
> +	.num_domains = ARRAY_SIZE(sdm845_cdsp_domains),
>   };
>   
>   static const struct adsp_data sdm845_cdsp_resource_init = {
> @@ -938,6 +1200,8 @@ static const struct adsp_data sdm845_cdsp_resource_init = {
>   	.ssr_name = "cdsp",
>   	.sysmon_name = "cdsp",
>   	.ssctl_id = 0x17,
> +	.domains = sdm845_cdsp_domains,
> +	.num_domains = ARRAY_SIZE(sdm845_cdsp_domains),
>   };
>   
>   static const struct adsp_data sm6350_cdsp_resource = {
> @@ -954,6 +1218,8 @@ static const struct adsp_data sm6350_cdsp_resource = {
>   	.ssr_name = "cdsp",
>   	.sysmon_name = "cdsp",
>   	.ssctl_id = 0x17,
> +	.domains = sdm845_cdsp_domains,
> +	.num_domains = ARRAY_SIZE(sdm845_cdsp_domains),
>   };
>   
>   static const struct adsp_data sm8150_cdsp_resource = {
> @@ -969,6 +1235,8 @@ static const struct adsp_data sm8150_cdsp_resource = {
>   	.ssr_name = "cdsp",
>   	.sysmon_name = "cdsp",
>   	.ssctl_id = 0x17,
> +	.domains = sdm845_cdsp_domains,
> +	.num_domains = ARRAY_SIZE(sdm845_cdsp_domains),
>   };
>   
>   static const struct adsp_data sm8250_cdsp_resource = {
> @@ -984,6 +1252,8 @@ static const struct adsp_data sm8250_cdsp_resource = {
>   	.ssr_name = "cdsp",
>   	.sysmon_name = "cdsp",
>   	.ssctl_id = 0x17,
> +	.domains = sdm845_cdsp_domains,
> +	.num_domains = ARRAY_SIZE(sdm845_cdsp_domains),
>   };
>   
>   static const struct adsp_data sc8280xp_nsp0_resource = {
> @@ -998,6 +1268,8 @@ static const struct adsp_data sc8280xp_nsp0_resource = {
>   	.ssr_name = "cdsp0",
>   	.sysmon_name = "cdsp",
>   	.ssctl_id = 0x17,
> +	.domains = sdm845_cdsp_domains,
> +	.num_domains = ARRAY_SIZE(sdm845_cdsp_domains),
>   };
>   
>   static const struct adsp_data sc8280xp_nsp1_resource = {
> @@ -1012,6 +1284,7 @@ static const struct adsp_data sc8280xp_nsp1_resource = {
>   	.ssr_name = "cdsp1",
>   	.sysmon_name = "cdsp1",
>   	.ssctl_id = 0x20,
> +	// TODO: domains
>   };
>   
>   static const struct adsp_data sm8350_cdsp_resource = {
> @@ -1028,9 +1301,11 @@ static const struct adsp_data sm8350_cdsp_resource = {
>   	.ssr_name = "cdsp",
>   	.sysmon_name = "cdsp",
>   	.ssctl_id = 0x17,
> +	.domains = sdm845_cdsp_domains,
> +	.num_domains = ARRAY_SIZE(sdm845_cdsp_domains),
>   };
>   
> -static const struct adsp_data mpss_resource_init = {
> +static const struct adsp_data sc7180_mpss_resource = {
>   	.crash_reason_smem = 421,
>   	.firmware_name = "modem.mdt",
>   	.pas_id = 4,
> @@ -1045,6 +1320,27 @@ static const struct adsp_data mpss_resource_init = {
>   	.ssr_name = "mpss",
>   	.sysmon_name = "modem",
>   	.ssctl_id = 0x12,
> +	.domains = sc7180_mpss_domains,
> +	.num_domains = ARRAY_SIZE(sc7180_mpss_domains),
> +};
> +
> +static const struct adsp_data sc7280_mpss_resource = {
> +	.crash_reason_smem = 421,
> +	.firmware_name = "modem.mdt",
> +	.pas_id = 4,
> +	.minidump_id = 3,
> +	.auto_boot = false,
> +	.proxy_pd_names = (char*[]){
> +		"cx",
> +		"mss",
> +		NULL
> +	},
> +	.load_state = "modem",
> +	.ssr_name = "mpss",
> +	.sysmon_name = "modem",
> +	.ssctl_id = 0x12,
> +	.domains = sm8350_mpss_domains,
> +	.num_domains = ARRAY_SIZE(sm8350_mpss_domains),
>   };
>   
>   static const struct adsp_data sc8180x_mpss_resource = {
> @@ -1060,6 +1356,27 @@ static const struct adsp_data sc8180x_mpss_resource = {
>   	.ssr_name = "mpss",
>   	.sysmon_name = "modem",
>   	.ssctl_id = 0x12,
> +	.domains = sm8150_mpss_domains,
> +	.num_domains = ARRAY_SIZE(sm8150_mpss_domains),
> +};
> +
> +static const struct adsp_data sm8150_mpss_resource = {
> +	.crash_reason_smem = 421,
> +	.firmware_name = "modem.mdt",
> +	.pas_id = 4,
> +	.minidump_id = 3,
> +	.auto_boot = false,
> +	.proxy_pd_names = (char*[]){
> +		"cx",
> +		"mss",
> +		NULL
> +	},
> +	.load_state = "modem",
> +	.ssr_name = "mpss",
> +	.sysmon_name = "modem",
> +	.ssctl_id = 0x12,
> +	.domains = sm8150_mpss_domains,
> +	.num_domains = ARRAY_SIZE(sm8150_mpss_domains),
>   };
>   
>   static const struct adsp_data msm8996_slpi_resource_init = {
> @@ -1074,6 +1391,7 @@ static const struct adsp_data msm8996_slpi_resource_init = {
>   	.ssr_name = "dsps",
>   	.sysmon_name = "slpi",
>   	.ssctl_id = 0x16,
> +	/* no domains */
>   };
>   
>   static const struct adsp_data sdm845_slpi_resource_init = {
> @@ -1090,9 +1408,11 @@ static const struct adsp_data sdm845_slpi_resource_init = {
>   	.ssr_name = "dsps",
>   	.sysmon_name = "slpi",
>   	.ssctl_id = 0x16,
> +	.domains = sdm845_slpi_domains,
> +	.num_domains = ARRAY_SIZE(sdm845_slpi_domains),
>   };
>   
> -static const struct adsp_data wcss_resource_init = {
> +static const struct adsp_data qcs404_wcss_resource = {
>   	.crash_reason_smem = 421,
>   	.firmware_name = "wcnss.mdt",
>   	.pas_id = 6,
> @@ -1100,6 +1420,8 @@ static const struct adsp_data wcss_resource_init = {
>   	.ssr_name = "mpss",
>   	.sysmon_name = "wcnss",
>   	.ssctl_id = 0x12,
> +	.domains = qcs404_mpss_domains,
> +	.num_domains = ARRAY_SIZE(qcs404_mpss_domains),
>   };
>   
>   static const struct adsp_data sdx55_mpss_resource = {
> @@ -1115,6 +1437,7 @@ static const struct adsp_data sdx55_mpss_resource = {
>   	.ssr_name = "mpss",
>   	.sysmon_name = "modem",
>   	.ssctl_id = 0x22,
> +	// TODO: domains
>   };
>   
>   static const struct adsp_data sm8450_mpss_resource = {
> @@ -1133,6 +1456,8 @@ static const struct adsp_data sm8450_mpss_resource = {
>   	.ssr_name = "mpss",
>   	.sysmon_name = "modem",
>   	.ssctl_id = 0x12,
> +	.domains = sm8350_mpss_domains,
> +	.num_domains = ARRAY_SIZE(sm8350_mpss_domains),
>   };
>   
>   static const struct adsp_data sm8550_adsp_resource = {
> @@ -1152,6 +1477,8 @@ static const struct adsp_data sm8550_adsp_resource = {
>   	.ssr_name = "lpass",
>   	.sysmon_name = "adsp",
>   	.ssctl_id = 0x14,
> +	.domains = sm8350_adsp_domains,
> +	.num_domains = ARRAY_SIZE(sm8350_adsp_domains),

Looking at the JSON shipped with the SM8550 adsp firmware, it should be:

static const struct qcom_pdm_domain_data *sm8550_adsp_domains[] = {
	&adsp_audio_pd,
	&adsp_sensor_pd,
	&adsp_root_pd,
	&adsp_charger_pd,
};

since SLPI has been moved into the ADSP,
and no pdr_enabled service is listed for the root_pd.

>   };
>   
>   static const struct adsp_data sm8550_cdsp_resource = {
> @@ -1172,6 +1499,8 @@ static const struct adsp_data sm8550_cdsp_resource = {
>   	.ssr_name = "cdsp",
>   	.sysmon_name = "cdsp",
>   	.ssctl_id = 0x17,
> +	.domains = sdm845_cdsp_domains,
> +	.num_domains = ARRAY_SIZE(sdm845_cdsp_domains),
>   };
>   
>   static const struct adsp_data sm8550_mpss_resource = {
> @@ -1195,6 +1524,8 @@ static const struct adsp_data sm8550_mpss_resource = {
>   	.region_assign_idx = 2,
>   	.region_assign_count = 1,
>   	.region_assign_vmid = QCOM_SCM_VMID_MSS_MSA,
> +	.domains = sm8350_mpss_domains,
> +	.num_domains = ARRAY_SIZE(sm8350_mpss_domains),
>   };
>   
>   static const struct adsp_data sc7280_wpss_resource = {
> @@ -1235,6 +1566,8 @@ static const struct adsp_data sm8650_cdsp_resource = {
>   	.region_assign_count = 1,
>   	.region_assign_shared = true,
>   	.region_assign_vmid = QCOM_SCM_VMID_CDSP,
> +	.domains = sdm845_cdsp_domains,
> +	.num_domains = ARRAY_SIZE(sdm845_cdsp_domains),
>   };
>   
>   static const struct adsp_data sm8650_mpss_resource = {
> @@ -1258,29 +1591,32 @@ static const struct adsp_data sm8650_mpss_resource = {
>   	.region_assign_idx = 2,
>   	.region_assign_count = 3,
>   	.region_assign_vmid = QCOM_SCM_VMID_MSS_MSA,
> +	.domains = sm8350_mpss_domains,
> +	.num_domains = ARRAY_SIZE(sm8350_mpss_domains),

Ok for the SM8550/SM8650 CDSP/MPSS.

Neil

>   };
>   
>   static const struct of_device_id adsp_of_match[] = {
>   	{ .compatible = "qcom,msm8226-adsp-pil", .data = &adsp_resource_init},
> +	// FIXME: is msm8996 adsp audio domain applicable to msm8953 ?
>   	{ .compatible = "qcom,msm8953-adsp-pil", .data = &msm8996_adsp_resource},
>   	{ .compatible = "qcom,msm8974-adsp-pil", .data = &adsp_resource_init},
>   	{ .compatible = "qcom,msm8996-adsp-pil", .data = &msm8996_adsp_resource},
>   	{ .compatible = "qcom,msm8996-slpi-pil", .data = &msm8996_slpi_resource_init},
>   	{ .compatible = "qcom,msm8998-adsp-pas", .data = &msm8996_adsp_resource},
>   	{ .compatible = "qcom,msm8998-slpi-pas", .data = &msm8996_slpi_resource_init},
> -	{ .compatible = "qcom,qcs404-adsp-pas", .data = &adsp_resource_init },
> -	{ .compatible = "qcom,qcs404-cdsp-pas", .data = &cdsp_resource_init },
> -	{ .compatible = "qcom,qcs404-wcss-pas", .data = &wcss_resource_init },
> -	{ .compatible = "qcom,sc7180-adsp-pas", .data = &sm8250_adsp_resource},
> -	{ .compatible = "qcom,sc7180-mpss-pas", .data = &mpss_resource_init},
> -	{ .compatible = "qcom,sc7280-adsp-pas", .data = &sm8350_adsp_resource},
> +	{ .compatible = "qcom,qcs404-adsp-pas", .data = &qcs404_adsp_resource },
> +	{ .compatible = "qcom,qcs404-cdsp-pas", .data = &qcs404_cdsp_resource },
> +	{ .compatible = "qcom,qcs404-wcss-pas", .data = &qcs404_wcss_resource },
> +	{ .compatible = "qcom,sc7180-adsp-pas", .data = &sc7180_adsp_resource},
> +	{ .compatible = "qcom,sc7180-mpss-pas", .data = &sc7180_mpss_resource},
> +	{ .compatible = "qcom,sc7280-adsp-pas", .data = &sc7280_adsp_resource},
>   	{ .compatible = "qcom,sc7280-cdsp-pas", .data = &sm6350_cdsp_resource},
> -	{ .compatible = "qcom,sc7280-mpss-pas", .data = &mpss_resource_init},
> +	{ .compatible = "qcom,sc7280-mpss-pas", .data = &sc7280_mpss_resource},
>   	{ .compatible = "qcom,sc7280-wpss-pas", .data = &sc7280_wpss_resource},
>   	{ .compatible = "qcom,sc8180x-adsp-pas", .data = &sm8150_adsp_resource},
>   	{ .compatible = "qcom,sc8180x-cdsp-pas", .data = &sm8150_cdsp_resource},
>   	{ .compatible = "qcom,sc8180x-mpss-pas", .data = &sc8180x_mpss_resource},
> -	{ .compatible = "qcom,sc8280xp-adsp-pas", .data = &sm8250_adsp_resource},
> +	{ .compatible = "qcom,sc8280xp-adsp-pas", .data = &sm8350_adsp_resource},
>   	{ .compatible = "qcom,sc8280xp-nsp0-pas", .data = &sc8280xp_nsp0_resource},
>   	{ .compatible = "qcom,sc8280xp-nsp1-pas", .data = &sc8280xp_nsp1_resource},
>   	{ .compatible = "qcom,sdm660-adsp-pas", .data = &adsp_resource_init},
> @@ -1288,18 +1624,18 @@ static const struct of_device_id adsp_of_match[] = {
>   	{ .compatible = "qcom,sdm845-cdsp-pas", .data = &sdm845_cdsp_resource_init},
>   	{ .compatible = "qcom,sdm845-slpi-pas", .data = &sdm845_slpi_resource_init},
>   	{ .compatible = "qcom,sdx55-mpss-pas", .data = &sdx55_mpss_resource},
> -	{ .compatible = "qcom,sm6115-adsp-pas", .data = &adsp_resource_init},
> -	{ .compatible = "qcom,sm6115-cdsp-pas", .data = &cdsp_resource_init},
> +	{ .compatible = "qcom,sm6115-adsp-pas", .data = &sm6115_adsp_resource},
> +	{ .compatible = "qcom,sm6115-cdsp-pas", .data = &qcs404_cdsp_resource},
>   	{ .compatible = "qcom,sm6115-mpss-pas", .data = &sc8180x_mpss_resource},
>   	{ .compatible = "qcom,sm6350-adsp-pas", .data = &sm6350_adsp_resource},
>   	{ .compatible = "qcom,sm6350-cdsp-pas", .data = &sm6350_cdsp_resource},
> -	{ .compatible = "qcom,sm6350-mpss-pas", .data = &mpss_resource_init},
> +	{ .compatible = "qcom,sm6350-mpss-pas", .data = &sm8150_mpss_resource},
>   	{ .compatible = "qcom,sm6375-adsp-pas", .data = &sm6350_adsp_resource},
>   	{ .compatible = "qcom,sm6375-cdsp-pas", .data = &sm8150_cdsp_resource},
>   	{ .compatible = "qcom,sm6375-mpss-pas", .data = &sm6375_mpss_resource},
>   	{ .compatible = "qcom,sm8150-adsp-pas", .data = &sm8150_adsp_resource},
>   	{ .compatible = "qcom,sm8150-cdsp-pas", .data = &sm8150_cdsp_resource},
> -	{ .compatible = "qcom,sm8150-mpss-pas", .data = &mpss_resource_init},
> +	{ .compatible = "qcom,sm8150-mpss-pas", .data = &sm8150_mpss_resource},
>   	{ .compatible = "qcom,sm8150-slpi-pas", .data = &sdm845_slpi_resource_init},
>   	{ .compatible = "qcom,sm8250-adsp-pas", .data = &sm8250_adsp_resource},
>   	{ .compatible = "qcom,sm8250-cdsp-pas", .data = &sm8250_cdsp_resource},
> @@ -1307,7 +1643,7 @@ static const struct of_device_id adsp_of_match[] = {
>   	{ .compatible = "qcom,sm8350-adsp-pas", .data = &sm8350_adsp_resource},
>   	{ .compatible = "qcom,sm8350-cdsp-pas", .data = &sm8350_cdsp_resource},
>   	{ .compatible = "qcom,sm8350-slpi-pas", .data = &sdm845_slpi_resource_init},
> -	{ .compatible = "qcom,sm8350-mpss-pas", .data = &mpss_resource_init},
> +	{ .compatible = "qcom,sm8350-mpss-pas", .data = &sc7280_mpss_resource},
>   	{ .compatible = "qcom,sm8450-adsp-pas", .data = &sm8350_adsp_resource},
>   	{ .compatible = "qcom,sm8450-cdsp-pas", .data = &sm8350_cdsp_resource},
>   	{ .compatible = "qcom,sm8450-slpi-pas", .data = &sdm845_slpi_resource_init},
> 


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v4 0/7] soc: qcom: add in-kernel pd-mapper implementation
  2024-03-11 15:34 [PATCH v4 0/7] soc: qcom: add in-kernel pd-mapper implementation Dmitry Baryshkov
                   ` (6 preceding siblings ...)
  2024-03-11 15:34 ` [PATCH v4 7/7] remoteproc: qcom: pas: " Dmitry Baryshkov
@ 2024-03-11 17:18 ` neil.armstrong
  7 siblings, 0 replies; 26+ messages in thread
From: neil.armstrong @ 2024-03-11 17:18 UTC (permalink / raw
  To: Dmitry Baryshkov, Bjorn Andersson, Konrad Dybcio, Mathieu Poirier
  Cc: linux-arm-msm, linux-remoteproc, Johan Hovold

On 11/03/2024 16:34, Dmitry Baryshkov wrote:
> Protection domain mapper is a QMI service providing mapping between
> 'protection domains' and services supported / allowed in these domains.
> For example such mapping is required for loading of the WiFi firmware or
> for properly starting up the UCSI / altmode / battery manager support.
> 
> The existing userspace implementation has several issues. It doesn't play
> well with CONFIG_EXTRA_FIRMWARE, it doesn't reread the JSON files if the
> firmware location is changed (or if the firmware was not available at
> the time pd-mapper was started but the corresponding directory is
> mounted later), etc.
> 
> However this configuration is largely static and common between
> different platforms. Provide in-kernel service implementing static
> per-platform data.
> 
> NOTE: this is an RFC / RFT, the domain mapping data might be inaccurate
> (especially for SM6xxx and SC7xxx platforms), which is reflected by
> several TODO and FIXME comments in the code.
> 
> --
> 2.39.2
> 
> ---
> Changes in v4:
> - Fixed missing chunk, reenabled kfree in qmi_del_server (Konrad)
> - Added configuration for sm6350 (Thanks to Luca)
> - Removed RFC tag (Konrad)
> - Link to v3: https://lore.kernel.org/r/20240304-qcom-pd-mapper-v3-0-6858fa1ac1c8@linaro.org
> 
> Changes in RFC v3:
> - Send start / stop notifications when PD-mapper domain list is changed
> - Reworked the way PD-mapper treats protection domains, register all of
>    them in a single batch
> - Added SC7180 domains configuration based on TCL Book 14 GO
> - Link to v2: https://lore.kernel.org/r/20240301-qcom-pd-mapper-v2-0-5d12a081d9d1@linaro.org
> 
> Changes in RFC v2:
> - Swapped num_domains / domains (Konrad)
> - Fixed an issue with battery not working on sc8280xp
> - Added missing configuration for QCS404
> 
> ---
> Dmitry Baryshkov (7):
>        soc: qcom: pdr: protect locator_addr with the main mutex
>        soc: qcom: qmi: add a way to remove running service
>        soc: qcom: add pd-mapper implementation
>        remoteproc: qcom: pas: correct data indentation
>        remoteproc: qcom: adsp: add configuration for in-kernel pdm
>        remoteproc: qcom: mss: add configuration for in-kernel pdm
>        remoteproc: qcom: pas: add configuration for in-kernel pdm
> 
>   drivers/remoteproc/Kconfig          |   3 +
>   drivers/remoteproc/qcom_q6v5_adsp.c |  82 +++++-
>   drivers/remoteproc/qcom_q6v5_mss.c  |  80 +++++-
>   drivers/remoteproc/qcom_q6v5_pas.c  | 502 ++++++++++++++++++++++++++++++------
>   drivers/soc/qcom/Kconfig            |  10 +
>   drivers/soc/qcom/Makefile           |   2 +
>   drivers/soc/qcom/pdr_interface.c    |   6 +-
>   drivers/soc/qcom/qcom_pdm.c         | 346 +++++++++++++++++++++++++
>   drivers/soc/qcom/qcom_pdm_msg.c     | 188 ++++++++++++++
>   drivers/soc/qcom/qcom_pdm_msg.h     |  66 +++++
>   drivers/soc/qcom/qmi_interface.c    |  67 +++++
>   include/linux/soc/qcom/pd_mapper.h  |  39 +++
>   include/linux/soc/qcom/qmi.h        |   2 +
>   13 files changed, 1302 insertions(+), 91 deletions(-)
> ---
> base-commit: 1843e16d2df9d98427ef8045589571749d627cf7
> change-id: 20240301-qcom-pd-mapper-e12d622d4ad0
> 
> Best regards,


With:
=================><===============================================
diff --git a/drivers/remoteproc/qcom_q6v5_pas.c b/drivers/remoteproc/qcom_q6v5_pas.c
index ba53df7ea30e..1cfc52d5247b 100644
--- a/drivers/remoteproc/qcom_q6v5_pas.c
+++ b/drivers/remoteproc/qcom_q6v5_pas.c
@@ -909,6 +909,13 @@ static const struct qcom_pdm_domain_data *sm8350_adsp_domains[] = {
  	&adsp_charger_pd,
  };

+static const struct qcom_pdm_domain_data *sm8550_adsp_domains[] = {
+	&adsp_audio_pd,
+	&adsp_sensor_pd,
+	&adsp_root_pd,
+	&adsp_charger_pd,
+};
+
  static const struct qcom_pdm_domain_data cdsp_root_pd = {
  	.domain = "msm/cdsp/root_pd",
  	.instance_id = 76,
@@ -1477,8 +1484,8 @@ static const struct adsp_data sm8550_adsp_resource = {
  	.ssr_name = "lpass",
  	.sysmon_name = "adsp",
  	.ssctl_id = 0x14,
-	.domains = sm8350_adsp_domains,
-	.num_domains = ARRAY_SIZE(sm8350_adsp_domains),
+	.domains = sm8550_adsp_domains,
+	.num_domains = ARRAY_SIZE(sm8550_adsp_domains),
  };

  static const struct adsp_data sm8550_cdsp_resource = {
=================><===============================================

Tested-by: Neil Armstrong <neil.armstrong@linaro.org> # on SM8550-QRD
Tested-by: Neil Armstrong <neil.armstrong@linaro.org> # on SM8650-QRD

Thanks,
Neil



* Re: [PATCH v4 2/7] soc: qcom: qmi: add a way to remove running service
  2024-03-11 15:34 ` [PATCH v4 2/7] soc: qcom: qmi: add a way to remove running service Dmitry Baryshkov
@ 2024-03-12  0:53   ` Konrad Dybcio
  2024-03-12  1:03     ` Dmitry Baryshkov
  2024-03-14  0:03   ` Chris Lew
  1 sibling, 1 reply; 26+ messages in thread
From: Konrad Dybcio @ 2024-03-12  0:53 UTC (permalink / raw
  To: Dmitry Baryshkov, Bjorn Andersson, Mathieu Poirier
  Cc: linux-arm-msm, linux-remoteproc, Johan Hovold



On 3/11/24 16:34, Dmitry Baryshkov wrote:
> Add qmi_del_server(), a counterpart to qmi_add_server(), providing a way
> to remove a running server from the QMI socket. This is e.g. necessary
> for pd-mapper, which needs to re-add a server each time the DSP is
> started or stopped.
> 
> Signed-off-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
> ---
>   drivers/soc/qcom/qmi_interface.c | 67 ++++++++++++++++++++++++++++++++++++++++
>   include/linux/soc/qcom/qmi.h     |  2 ++
>   2 files changed, 69 insertions(+)
> 
> diff --git a/drivers/soc/qcom/qmi_interface.c b/drivers/soc/qcom/qmi_interface.c
> index bb98b06e87f8..18ff2015c682 100644
> --- a/drivers/soc/qcom/qmi_interface.c
> +++ b/drivers/soc/qcom/qmi_interface.c
> @@ -289,6 +289,73 @@ int qmi_add_server(struct qmi_handle *qmi, unsigned int service,
>   }
>   EXPORT_SYMBOL_GPL(qmi_add_server);
>   
> +static void qmi_send_del_server(struct qmi_handle *qmi, struct qmi_service *svc)
> +{
> +	struct qrtr_ctrl_pkt pkt;
> +	struct sockaddr_qrtr sq;
> +	struct msghdr msg = { };
> +	struct kvec iv = { &pkt, sizeof(pkt) };
> +	int ret;
> +
> +	memset(&pkt, 0, sizeof(pkt));

0-init instead?

> +	pkt.cmd = cpu_to_le32(QRTR_TYPE_DEL_SERVER);
> +	pkt.server.service = cpu_to_le32(svc->service);
> +	pkt.server.instance = cpu_to_le32(svc->version | svc->instance << 8);
> +	pkt.server.node = cpu_to_le32(qmi->sq.sq_node);
> +	pkt.server.port = cpu_to_le32(qmi->sq.sq_port);

Or perhaps C99-init?

Konrad


* Re: [PATCH v4 3/7] soc: qcom: add pd-mapper implementation
  2024-03-11 15:34 ` [PATCH v4 3/7] soc: qcom: add pd-mapper implementation Dmitry Baryshkov
@ 2024-03-12  0:55   ` Konrad Dybcio
  2024-03-12  9:43     ` Johan Hovold
  2024-03-12  9:36   ` Johan Hovold
                     ` (2 subsequent siblings)
  3 siblings, 1 reply; 26+ messages in thread
From: Konrad Dybcio @ 2024-03-12  0:55 UTC (permalink / raw
  To: Dmitry Baryshkov, Bjorn Andersson, Mathieu Poirier
  Cc: linux-arm-msm, linux-remoteproc, Johan Hovold



On 3/11/24 16:34, Dmitry Baryshkov wrote:
> Existing userspace protection domain mapper implementation has several
> issues. It doesn't play well with CONFIG_EXTRA_FIRMWARE, it doesn't
> reread JSON files if firmware location is changed (or if firmware was
> not available at the time pd-mapper was started but the corresponding
> directory is mounted later), etc.
> 
> Provide in-kernel service implementing protection domain mapping
> required to work with several services, which are provided by the DSP
> firmware.
> 
> Signed-off-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
> ---

I'm far from an expert on how this works, but to a non-expert eye,
there is nothing outrageously wrong.

One suggestion I have is to use cleanup.h and scoped guards to
save on some LoC

Konrad


* Re: [PATCH v4 2/7] soc: qcom: qmi: add a way to remove running service
  2024-03-12  0:53   ` Konrad Dybcio
@ 2024-03-12  1:03     ` Dmitry Baryshkov
  0 siblings, 0 replies; 26+ messages in thread
From: Dmitry Baryshkov @ 2024-03-12  1:03 UTC (permalink / raw
  To: Konrad Dybcio
  Cc: Bjorn Andersson, Mathieu Poirier, linux-arm-msm, linux-remoteproc,
	Johan Hovold

On Tue, 12 Mar 2024 at 02:53, Konrad Dybcio <konrad.dybcio@linaro.org> wrote:
>
>
>
> On 3/11/24 16:34, Dmitry Baryshkov wrote:
> > Add qmi_del_server(), a counterpart to qmi_add_server(), providing a way
> > to remove a running server from the QMI socket. This is e.g. necessary
> > for pd-mapper, which needs to re-add a server each time the DSP is
> > started or stopped.
> >
> > Signed-off-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
> > ---
> >   drivers/soc/qcom/qmi_interface.c | 67 ++++++++++++++++++++++++++++++++++++++++
> >   include/linux/soc/qcom/qmi.h     |  2 ++
> >   2 files changed, 69 insertions(+)
> >
> > diff --git a/drivers/soc/qcom/qmi_interface.c b/drivers/soc/qcom/qmi_interface.c
> > index bb98b06e87f8..18ff2015c682 100644
> > --- a/drivers/soc/qcom/qmi_interface.c
> > +++ b/drivers/soc/qcom/qmi_interface.c
> > @@ -289,6 +289,73 @@ int qmi_add_server(struct qmi_handle *qmi, unsigned int service,
> >   }
> >   EXPORT_SYMBOL_GPL(qmi_add_server);
> >
> > +static void qmi_send_del_server(struct qmi_handle *qmi, struct qmi_service *svc)
> > +{
> > +     struct qrtr_ctrl_pkt pkt;
> > +     struct sockaddr_qrtr sq;
> > +     struct msghdr msg = { };
> > +     struct kvec iv = { &pkt, sizeof(pkt) };
> > +     int ret;
> > +
> > +     memset(&pkt, 0, sizeof(pkt));
>
> 0-init instead?
>
> > +     pkt.cmd = cpu_to_le32(QRTR_TYPE_DEL_SERVER);
> > +     pkt.server.service = cpu_to_le32(svc->service);
> > +     pkt.server.instance = cpu_to_le32(svc->version | svc->instance << 8);
> > +     pkt.server.node = cpu_to_le32(qmi->sq.sq_node);
> > +     pkt.server.port = cpu_to_le32(qmi->sq.sq_port);
>
> Or perhaps C99-init?

This follows 1:1 qcom_send_add_server(). I don't think we should use
new style just for this function.

>
> Konrad



-- 
With best wishes
Dmitry
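
The style question above can be illustrated outside the kernel. Below is a minimal userspace sketch (the struct and field names are stand-ins, not the real qrtr_ctrl_pkt layout) showing that both forms produce identical, fully zeroed packets:

```c
#include <string.h>

/* Stand-in for struct qrtr_ctrl_pkt; fields are illustrative only. */
struct pkt {
	unsigned int cmd;
	unsigned int service;
	unsigned int instance;
};

/* Style used by the patch (matching qcom_send_add_server()): zero the
 * whole struct with memset(), then assign fields one by one. */
static void fill_memset(struct pkt *p)
{
	memset(p, 0, sizeof(*p));
	p->cmd = 1;
	p->service = 64;
}

/* The C99 designated-initializer style Konrad suggests: members that are
 * not named are implicitly zero-initialized, so no memset() is needed. */
static void fill_c99(struct pkt *p)
{
	*p = (struct pkt){
		.cmd = 1,
		.service = 64,
	};
}
```

Either way, fields not explicitly set (here `instance`) end up zero; the difference is purely stylistic, which is why consistency with the neighbouring function wins.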


* Re: [PATCH v4 3/7] soc: qcom: add pd-mapper implementation
  2024-03-11 15:34 ` [PATCH v4 3/7] soc: qcom: add pd-mapper implementation Dmitry Baryshkov
  2024-03-12  0:55   ` Konrad Dybcio
@ 2024-03-12  9:36   ` Johan Hovold
  2024-03-14 19:44   ` Chris Lew
  2024-04-07 23:14   ` Bjorn Andersson
  3 siblings, 0 replies; 26+ messages in thread
From: Johan Hovold @ 2024-03-12  9:36 UTC (permalink / raw
  To: Dmitry Baryshkov
  Cc: Bjorn Andersson, Konrad Dybcio, Mathieu Poirier, linux-arm-msm,
	linux-remoteproc, Johan Hovold

On Mon, Mar 11, 2024 at 05:34:03PM +0200, Dmitry Baryshkov wrote:
> Existing userspace protection domain mapper implementation has several
> issues. It doesn't play well with CONFIG_EXTRA_FIRMWARE, it doesn't
> reread JSON files if firmware location is changed (or if firmware was
> not available at the time pd-mapper was started but the corresponding
> directory is mounted later), etc.
> 
> Provide in-kernel service implementing protection domain mapping
> required to work with several services, which are provided by the DSP
> firmware.
> 
> Signed-off-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>

Just a couple of drive-by comments below.

> +int qcom_pdm_add_domains(const struct qcom_pdm_domain_data * const *data, size_t num_data)
> +{
> +	int ret;
> +	int i;
> +
> +	mutex_lock(&qcom_pdm_mutex);
> +
> +	if (qcom_pdm_server_added) {
> +		ret = qmi_del_server(&qcom_pdm_handle, SERVREG_QMI_SERVICE,
> +				     SERVREG_QMI_VERSION, SERVREG_QMI_INSTANCE);
> +		if (ret)
> +			goto err_out;

Name error labels after what they do, that is, 'err_unlock' in this
case.

> +	}
> +
> +	for (i = 0; i < num_data; i++) {
> +		ret = qcom_pdm_add_domain_locked(data[i]);
> +		if (ret)
> +			goto err;

And err_del_domains here.

> +	}
> +
> +	ret = qmi_add_server(&qcom_pdm_handle, SERVREG_QMI_SERVICE,
> +			     SERVREG_QMI_VERSION, SERVREG_QMI_INSTANCE);
> +	if (ret) {
> +		pr_err("PDM: error adding server %d\n", ret);
> +		goto err;
> +	}
> +
> +	qcom_pdm_server_added = true;
> +
> +	mutex_unlock(&qcom_pdm_mutex);
> +
> +	return 0;
> +
> +err:
> +	while (--i >= 0)
> +		qcom_pdm_del_domain_locked(data[i]);
> +
> +	qmi_add_server(&qcom_pdm_handle, SERVREG_QMI_SERVICE,
> +		       SERVREG_QMI_VERSION, SERVREG_QMI_INSTANCE);

Should the server really be added unconditionally here? And if so,
shouldn't you update that flag?

> +
> +err_out:
> +	mutex_unlock(&qcom_pdm_mutex);
> +
> +	return ret;
> +}
> +EXPORT_SYMBOL_GPL(qcom_pdm_add_domains);

> +static void qcom_pdm_get_domain_list(struct qmi_handle *qmi,
> +				     struct sockaddr_qrtr *sq,
> +				     struct qmi_txn *txn,
> +				     const void *decoded)
> +{
> +	const struct servreg_loc_get_domain_list_req *req = decoded;
> +	struct servreg_loc_get_domain_list_resp *rsp = kzalloc(sizeof(*rsp), GFP_KERNEL);
> +	struct qcom_pdm_service *service;
> +	u32 offset;
> +	int ret;
> +
> +	offset = req->offset_valid ? req->offset : 0;
> +
> +	rsp->rsp.result = QMI_RESULT_SUCCESS_V01;
> +	rsp->rsp.error = QMI_ERR_NONE_V01;
> +
> +	rsp->db_revision_valid = true;
> +	rsp->db_revision = 1;
> +
> +	rsp->total_domains_valid = true;
> +	rsp->total_domains = 0;
> +
> +	mutex_lock(&qcom_pdm_mutex);
> +
> +	service = qcom_pdm_find_locked(req->name);
> +	if (service) {
> +		struct qcom_pdm_domain *domain;
> +
> +		rsp->domain_list_valid = true;
> +		rsp->domain_list_len = 0;
> +
> +		list_for_each_entry(domain, &service->domains, list) {
> +			u32 i = rsp->total_domains++;
> +
> +			if (i >= offset && i < SERVREG_LOC_MAX_DOMAINS) {
> +				u32 j = rsp->domain_list_len++;
> +
> +				strscpy(rsp->domain_list[j].name, domain->name,
> +					sizeof(rsp->domain_list[i].name));
> +				rsp->domain_list[j].instance_id = domain->instance_id;
> +
> +				pr_debug("PDM: found %s / %d\n", domain->name,
> +					 domain->instance_id);
> +			}
> +		}
> +

Stray newline.

> +	}
> +
> +	mutex_unlock(&qcom_pdm_mutex);
> +
> +	pr_debug("PDM: service '%s' offset %d returning %d domains (of %d)\n", req->name,
> +		 req->offset_valid ? req->offset : -1, rsp->domain_list_len, rsp->total_domains);
> +
> +	ret = qmi_send_response(qmi, sq, txn, SERVREG_LOC_GET_DOMAIN_LIST,
> +				2658,
> +				servreg_loc_get_domain_list_resp_ei, rsp);
> +	if (ret)
> +		pr_err("Error sending servreg response: %d\n", ret);
> +
> +	kfree(rsp);
> +}

Johan
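
Johan's label-naming suggestion can be sketched in plain C (a pthread mutex stands in for the kernel mutex, and do_step() is a made-up failure-injection helper): each label names the unwinding it performs, so every goto can be checked locally without reading the whole function.

```c
#include <errno.h>
#include <pthread.h>
#include <stdlib.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Hypothetical setup step, used only to exercise the error paths. */
static int do_step(int should_fail)
{
	return should_fail ? -EINVAL : 0;
}

static int add_thing(int fail_early, int fail_late)
{
	int *buf;
	int ret;

	pthread_mutex_lock(&lock);

	ret = do_step(fail_early);
	if (ret)
		goto err_unlock;	/* nothing allocated yet */

	buf = malloc(16);
	if (!buf) {
		ret = -ENOMEM;
		goto err_unlock;
	}

	ret = do_step(fail_late);
	if (ret)
		goto err_free;		/* must undo the allocation too */

	free(buf);
	pthread_mutex_unlock(&lock);
	return 0;

err_free:
	free(buf);
err_unlock:
	pthread_mutex_unlock(&lock);
	return ret;
}
```

The labels fall through in reverse order of acquisition, so each goto jumps to exactly the right amount of cleanup.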


* Re: [PATCH v4 3/7] soc: qcom: add pd-mapper implementation
  2024-03-12  0:55   ` Konrad Dybcio
@ 2024-03-12  9:43     ` Johan Hovold
  0 siblings, 0 replies; 26+ messages in thread
From: Johan Hovold @ 2024-03-12  9:43 UTC (permalink / raw
  To: Konrad Dybcio
  Cc: Dmitry Baryshkov, Bjorn Andersson, Mathieu Poirier, linux-arm-msm,
	linux-remoteproc, Johan Hovold

On Tue, Mar 12, 2024 at 01:55:27AM +0100, Konrad Dybcio wrote:

> One suggestion I have is to use cleanup.h and scoped guards to
> save on some LoC

LoC is not the best metric for code quality; it really depends on whether it
makes the code more readable or not. Often being explicit, so that it's
obvious what is going on, is preferred even if it adds a few more lines
(c.f. all the ways that people manage to use devres wrong).

After just skimming this patch, I don't see that there would be any
benefit from using scoped guards.

Johan
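
For reference, the scoped-guard idea Konrad floated looks roughly like this in userspace, using the GCC/Clang cleanup attribute that the kernel's <linux/cleanup.h> is built on. This is an illustrative analog with invented names, not the kernel API:

```c
#include <pthread.h>

/* Cleanup callback: runs automatically when the guard variable
 * goes out of scope, on any return path. */
static void unlock_cleanup(pthread_mutex_t **m)
{
	if (*m)
		pthread_mutex_unlock(*m);
}

/* Rough userspace analog of guard(mutex)(...): lock now, unlock at
 * end of scope. */
#define scoped_guard_mutex(m) \
	pthread_mutex_t *_guard __attribute__((cleanup(unlock_cleanup), unused)) = \
		(pthread_mutex_lock(m), (m))

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int counter;

static int bump(void)
{
	scoped_guard_mutex(&lock);	/* unlocked automatically on return */
	counter++;
	return counter;
}
```

The trade-off Johan raises is visible here: the early-return paths get shorter, but the unlock point is no longer explicit at each return, which can hide what is going on.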


* Re: [PATCH v4 1/7] soc: qcom: pdr: protect locator_addr with the main mutex
  2024-03-11 15:34 ` [PATCH v4 1/7] soc: qcom: pdr: protect locator_addr with the main mutex Dmitry Baryshkov
@ 2024-03-13 23:35   ` Chris Lew
  2024-03-14  0:07     ` Dmitry Baryshkov
  0 siblings, 1 reply; 26+ messages in thread
From: Chris Lew @ 2024-03-13 23:35 UTC (permalink / raw
  To: Dmitry Baryshkov, Bjorn Andersson, Konrad Dybcio, Mathieu Poirier
  Cc: linux-arm-msm, linux-remoteproc, Johan Hovold



On 3/11/2024 8:34 AM, Dmitry Baryshkov wrote:
> @@ -133,11 +133,13 @@ static int pdr_register_listener(struct pdr_handle *pdr,
>  	req.enable = enable;
>  	strscpy(req.service_path, pds->service_path, sizeof(req.service_path));
>  
> +	mutex_lock(&pdr->lock);
>  	ret = qmi_send_request(&pdr->notifier_hdl, &pds->addr,
>  			       &txn, SERVREG_REGISTER_LISTENER_REQ,
>  			       SERVREG_REGISTER_LISTENER_REQ_LEN,
>  			       servreg_register_listener_req_ei,
>  			       &req);
> +	mutex_unlock(&pdr->lock);
>  	if (ret < 0) {
>  		qmi_txn_cancel(&txn);
>  		return ret;
> 

Hi Dmitry,

What is the reason for taking the pdr lock here? The addr struct passed
into qmi_send_request is from the pdr_service. I think this is different
from the pdr_handle we are protecting in the other parts of the patch.

Thanks,
Chris


* Re: [PATCH v4 2/7] soc: qcom: qmi: add a way to remove running service
  2024-03-11 15:34 ` [PATCH v4 2/7] soc: qcom: qmi: add a way to remove running service Dmitry Baryshkov
  2024-03-12  0:53   ` Konrad Dybcio
@ 2024-03-14  0:03   ` Chris Lew
  2024-03-14  0:09     ` Dmitry Baryshkov
  1 sibling, 1 reply; 26+ messages in thread
From: Chris Lew @ 2024-03-14  0:03 UTC (permalink / raw
  To: Dmitry Baryshkov, Bjorn Andersson, Konrad Dybcio, Mathieu Poirier
  Cc: linux-arm-msm, linux-remoteproc, Johan Hovold



On 3/11/2024 8:34 AM, Dmitry Baryshkov wrote:
> +/**
> + * qmi_del_server() - register a service with the name service

s/register/deregister/g

> + * @qmi:	qmi handle
> + * @service:	type of the service
> + * @instance:	instance of the service
> + * @version:	version of the service
> + *
> + * Remove registration of the service with the name service. This notifies
> + * clients that they should no longer send messages to the client associated
> + * with @qmi.
> + *
> + * Return: 0 on success, negative errno on failure.
> + */
> +int qmi_del_server(struct qmi_handle *qmi, unsigned int service,
> +		   unsigned int version, unsigned int instance)
> +{
> +	struct qmi_service *svc;
> +	struct qmi_service *tmp;
> +
> +	list_for_each_entry_safe(svc, tmp, &qmi->services, list_node) {
> +		if (svc->service != service ||
> +		    svc->version != version ||
> +		    svc->instance != instance)
> +			continue;
> +
> +		qmi_send_del_server(qmi, svc);
> +		list_del(&svc->list_node);
> +		kfree(svc);
> +
> +		return 0;
> +	}
> +

is list_for_each_entry_safe required if we stop iterating and return
after we find the first instance of the service?

> +	return -EINVAL;
> +}
> +EXPORT_SYMBOL_GPL(qmi_del_server);


* Re: [PATCH v4 1/7] soc: qcom: pdr: protect locator_addr with the main mutex
  2024-03-13 23:35   ` Chris Lew
@ 2024-03-14  0:07     ` Dmitry Baryshkov
  0 siblings, 0 replies; 26+ messages in thread
From: Dmitry Baryshkov @ 2024-03-14  0:07 UTC (permalink / raw
  To: Chris Lew
  Cc: Bjorn Andersson, Konrad Dybcio, Mathieu Poirier, linux-arm-msm,
	linux-remoteproc, Johan Hovold

On Thu, 14 Mar 2024 at 01:35, Chris Lew <quic_clew@quicinc.com> wrote:
>
>
>
> On 3/11/2024 8:34 AM, Dmitry Baryshkov wrote:
> > @@ -133,11 +133,13 @@ static int pdr_register_listener(struct pdr_handle *pdr,
> >       req.enable = enable;
> >       strscpy(req.service_path, pds->service_path, sizeof(req.service_path));
> >
> > +     mutex_lock(&pdr->lock);
> >       ret = qmi_send_request(&pdr->notifier_hdl, &pds->addr,
> >                              &txn, SERVREG_REGISTER_LISTENER_REQ,
> >                              SERVREG_REGISTER_LISTENER_REQ_LEN,
> >                              servreg_register_listener_req_ei,
> >                              &req);
> > +     mutex_unlock(&pdr->lock);
> >       if (ret < 0) {
> >               qmi_txn_cancel(&txn);
> >               return ret;
> >
>
> Hi Dmitry,
>
> What is the reason for taking the pdr lock here? The addr struct passed
> into qmi_send_request is from the pdr_service. I think this is different
> from the pdr_handle we are protecting in the other parts of the patch.

Indeed, we should be taking pdr->list_lock.


-- 
With best wishes
Dmitry


* Re: [PATCH v4 2/7] soc: qcom: qmi: add a way to remove running service
  2024-03-14  0:03   ` Chris Lew
@ 2024-03-14  0:09     ` Dmitry Baryshkov
  2024-03-14 17:15       ` Chris Lew
  0 siblings, 1 reply; 26+ messages in thread
From: Dmitry Baryshkov @ 2024-03-14  0:09 UTC (permalink / raw
  To: Chris Lew
  Cc: Bjorn Andersson, Konrad Dybcio, Mathieu Poirier, linux-arm-msm,
	linux-remoteproc, Johan Hovold

On Thu, 14 Mar 2024 at 02:03, Chris Lew <quic_clew@quicinc.com> wrote:
>
>
>
> On 3/11/2024 8:34 AM, Dmitry Baryshkov wrote:
> > +/**
> > + * qmi_del_server() - register a service with the name service
>
> s/register/deregister/g

ack

>
> > + * @qmi:     qmi handle
> > + * @service: type of the service
> > + * @instance:        instance of the service
> > + * @version: version of the service
> > + *
> > + * Remove registration of the service with the name service. This notifies
> > + * clients that they should no longer send messages to the client associated
> > + * with @qmi.
> > + *
> > + * Return: 0 on success, negative errno on failure.
> > + */
> > +int qmi_del_server(struct qmi_handle *qmi, unsigned int service,
> > +                unsigned int version, unsigned int instance)
> > +{
> > +     struct qmi_service *svc;
> > +     struct qmi_service *tmp;
> > +
> > +     list_for_each_entry_safe(svc, tmp, &qmi->services, list_node) {
> > +             if (svc->service != service ||
> > +                 svc->version != version ||
> > +                 svc->instance != instance)
> > +                     continue;
> > +
> > +             qmi_send_del_server(qmi, svc);
> > +             list_del(&svc->list_node);
> > +             kfree(svc);
> > +
> > +             return 0;
> > +     }
> > +
>
> is list_for_each_entry_safe required if we stop iterating and return
> after we find the first instance of the service?

Yes, it just adds a temp variable here.

>
> > +     return -EINVAL;
> > +}
> > +EXPORT_SYMBOL_GPL(qmi_del_server);



-- 
With best wishes
Dmitry


* Re: [PATCH v4 2/7] soc: qcom: qmi: add a way to remove running service
  2024-03-14  0:09     ` Dmitry Baryshkov
@ 2024-03-14 17:15       ` Chris Lew
  0 siblings, 0 replies; 26+ messages in thread
From: Chris Lew @ 2024-03-14 17:15 UTC (permalink / raw
  To: Dmitry Baryshkov
  Cc: Bjorn Andersson, Konrad Dybcio, Mathieu Poirier, linux-arm-msm,
	linux-remoteproc, Johan Hovold



On 3/13/2024 5:09 PM, Dmitry Baryshkov wrote:
> On Thu, 14 Mar 2024 at 02:03, Chris Lew <quic_clew@quicinc.com> wrote:
>>
>>
>>
>> On 3/11/2024 8:34 AM, Dmitry Baryshkov wrote:
>>> +/**
>>> + * qmi_del_server() - register a service with the name service
>>
>> s/register/deregister/g
> 
> ack
> 
>>
>>> + * @qmi:     qmi handle
>>> + * @service: type of the service
>>> + * @instance:        instance of the service
>>> + * @version: version of the service
>>> + *
>>> + * Remove registration of the service with the name service. This notifies
>>> + * clients that they should no longer send messages to the client associated
>>> + * with @qmi.
>>> + *
>>> + * Return: 0 on success, negative errno on failure.
>>> + */
>>> +int qmi_del_server(struct qmi_handle *qmi, unsigned int service,
>>> +                unsigned int version, unsigned int instance)
>>> +{
>>> +     struct qmi_service *svc;
>>> +     struct qmi_service *tmp;
>>> +
>>> +     list_for_each_entry_safe(svc, tmp, &qmi->services, list_node) {
>>> +             if (svc->service != service ||
>>> +                 svc->version != version ||
>>> +                 svc->instance != instance)
>>> +                     continue;
>>> +
>>> +             qmi_send_del_server(qmi, svc);
>>> +             list_del(&svc->list_node);
>>> +             kfree(svc);
>>> +
>>> +             return 0;
>>> +     }
>>> +
>>
>> is list_for_each_entry_safe required if we stop iterating and return
>> after we find the first instance of the service?
> 
> Yes, it just adds a temp variable here.
> 

Ok, I was thinking that tmp wasn't necessary because we don't continue
iterating through the list after we free the svc. I guess it never hurts
to be safe though.

>>
>>> +     return -EINVAL;
>>> +}
>>> +EXPORT_SYMBOL_GPL(qmi_del_server);
> 
> 
> 
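
The point can be demonstrated with a minimal userspace re-implementation of the list helpers (simplified stand-ins, not the kernel's <linux/list.h>): because the function returns immediately after the single deletion, plain forward iteration never dereferences the freed node, so the `_safe` variant's extra cursor is not strictly required.

```c
#include <stddef.h>
#include <stdlib.h>

/* Minimal intrusive doubly-linked list, just enough for the example. */
struct list_head { struct list_head *next, *prev; };

static void list_init(struct list_head *h) { h->next = h->prev = h; }

static void list_add(struct list_head *n, struct list_head *h)
{
	n->next = h->next; n->prev = h;
	h->next->prev = n; h->next = n;
}

static void list_del(struct list_head *n)
{
	n->prev->next = n->next; n->next->prev = n->prev;
}

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct svc {
	unsigned int service;
	struct list_head node;
};

/* Delete the first matching service. Plain iteration is fine here:
 * we return right after list_del() + free(), so the freed node's
 * ->next is never read. */
static int del_service(struct list_head *head, unsigned int service)
{
	struct list_head *pos;

	for (pos = head->next; pos != head; pos = pos->next) {
		struct svc *s = container_of(pos, struct svc, node);

		if (s->service != service)
			continue;

		list_del(&s->node);
		free(s);
		return 0;
	}

	return -1;
}
```

If the loop continued past the deletion (e.g. removing all matches), the safe variant's pre-fetched `tmp` cursor would become necessary.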


* Re: [PATCH v4 3/7] soc: qcom: add pd-mapper implementation
  2024-03-11 15:34 ` [PATCH v4 3/7] soc: qcom: add pd-mapper implementation Dmitry Baryshkov
  2024-03-12  0:55   ` Konrad Dybcio
  2024-03-12  9:36   ` Johan Hovold
@ 2024-03-14 19:44   ` Chris Lew
  2024-03-14 21:30     ` Dmitry Baryshkov
  2024-04-07 23:14   ` Bjorn Andersson
  3 siblings, 1 reply; 26+ messages in thread
From: Chris Lew @ 2024-03-14 19:44 UTC (permalink / raw
  To: Dmitry Baryshkov, Bjorn Andersson, Konrad Dybcio, Mathieu Poirier
  Cc: linux-arm-msm, linux-remoteproc, Johan Hovold



On 3/11/2024 8:34 AM, Dmitry Baryshkov wrote:
> +int qcom_pdm_add_domains(const struct qcom_pdm_domain_data * const *data, size_t num_data)
> +{
> +	int ret;
> +	int i;
> +
> +	mutex_lock(&qcom_pdm_mutex);
> +
> +	if (qcom_pdm_server_added) {
> +		ret = qmi_del_server(&qcom_pdm_handle, SERVREG_QMI_SERVICE,
> +				     SERVREG_QMI_VERSION, SERVREG_QMI_INSTANCE);
> +		if (ret)
> +			goto err_out;
> +	}
> +
> +	for (i = 0; i < num_data; i++) {
> +		ret = qcom_pdm_add_domain_locked(data[i]);
> +		if (ret)
> +			goto err;
> +	}
> +
> +	ret = qmi_add_server(&qcom_pdm_handle, SERVREG_QMI_SERVICE,
> +			     SERVREG_QMI_VERSION, SERVREG_QMI_INSTANCE);
> +	if (ret) {
> +		pr_err("PDM: error adding server %d\n", ret);
> +		goto err;
> +	}
> +
> +	qcom_pdm_server_added = true;
> +
> +	mutex_unlock(&qcom_pdm_mutex);
> +
> +	return 0;
> +
> +err:
> +	while (--i >= 0)
> +		qcom_pdm_del_domain_locked(data[i]);
> +
> +	qmi_add_server(&qcom_pdm_handle, SERVREG_QMI_SERVICE,
> +		       SERVREG_QMI_VERSION, SERVREG_QMI_INSTANCE);
> +
> +err_out:
> +	mutex_unlock(&qcom_pdm_mutex);
> +
> +	return ret;
> +}
> +EXPORT_SYMBOL_GPL(qcom_pdm_add_domains);
> +
> +void qcom_pdm_del_domains(const struct qcom_pdm_domain_data * const *data, size_t num_data)
> +{
> +	int i;
> +
> +	mutex_lock(&qcom_pdm_mutex);
> +
> +	if (qcom_pdm_server_added) {
> +		qmi_del_server(&qcom_pdm_handle, SERVREG_QMI_SERVICE,
> +			       SERVREG_QMI_VERSION, SERVREG_QMI_INSTANCE);
> +	}

I am confused as to why we need to reset the qmi handle anytime
qcom_pdm_del_domains or qcom_pdm_add_domains is called. Is this to
trigger some kind of re-broadcast type notification to clients because
the service list has been updated?

My concern would be that this causes some kind of unintended side-effect
on the rprocs that are not restarting.

> +
> +	for (i = 0; i < num_data; i++)
> +		qcom_pdm_del_domain_locked(data[i]);
> +
> +	qmi_add_server(&qcom_pdm_handle, SERVREG_QMI_SERVICE,
> +		       SERVREG_QMI_VERSION, SERVREG_QMI_INSTANCE);
> +	qcom_pdm_server_added = true;
> +
> +	mutex_unlock(&qcom_pdm_mutex);
> +}
> +EXPORT_SYMBOL_GPL(qcom_pdm_del_domains);
> +
> +static void qcom_pdm_get_domain_list(struct qmi_handle *qmi,
> +				     struct sockaddr_qrtr *sq,
> +				     struct qmi_txn *txn,
> +				     const void *decoded)
> +{
> +	const struct servreg_loc_get_domain_list_req *req = decoded;
> +	struct servreg_loc_get_domain_list_resp *rsp = kzalloc(sizeof(*rsp), GFP_KERNEL);
> +	struct qcom_pdm_service *service;
> +	u32 offset;
> +	int ret;
> +
> +	offset = req->offset_valid ? req->offset : 0;
> +
> +	rsp->rsp.result = QMI_RESULT_SUCCESS_V01;
> +	rsp->rsp.error = QMI_ERR_NONE_V01;
> +
> +	rsp->db_revision_valid = true;
> +	rsp->db_revision = 1;
> +
> +	rsp->total_domains_valid = true;
> +	rsp->total_domains = 0;
> +
> +	mutex_lock(&qcom_pdm_mutex);
> +
> +	service = qcom_pdm_find_locked(req->name);
> +	if (service) {
> +		struct qcom_pdm_domain *domain;
> +
> +		rsp->domain_list_valid = true;
> +		rsp->domain_list_len = 0;
> +
> +		list_for_each_entry(domain, &service->domains, list) {
> +			u32 i = rsp->total_domains++;
> +
> +			if (i >= offset && i < SERVREG_LOC_MAX_DOMAINS) {
> +				u32 j = rsp->domain_list_len++;
> +
> +				strscpy(rsp->domain_list[j].name, domain->name,
> +					sizeof(rsp->domain_list[i].name));
> +				rsp->domain_list[j].instance_id = domain->instance_id;
> +
> +				pr_debug("PDM: found %s / %d\n", domain->name,
> +					 domain->instance_id);
> +			}
> +		}
> +
> +	}
> +
> +	mutex_unlock(&qcom_pdm_mutex);
> +
> +	pr_debug("PDM: service '%s' offset %d returning %d domains (of %d)\n", req->name,
> +		 req->offset_valid ? req->offset : -1, rsp->domain_list_len, rsp->total_domains);
> +
> +	ret = qmi_send_response(qmi, sq, txn, SERVREG_LOC_GET_DOMAIN_LIST,
> +				2658,
> +				servreg_loc_get_domain_list_resp_ei, rsp);

Other QMI clients like pdr_interface have macros for the message size.
Can we consider adding one instead of using 2658 directly?

> +	if (ret)
> +		pr_err("Error sending servreg response: %d\n", ret);
> +
> +	kfree(rsp);
> +}
> +
> +static void qcom_pdm_pfr(struct qmi_handle *qmi,
> +			 struct sockaddr_qrtr *sq,
> +			 struct qmi_txn *txn,
> +			 const void *decoded)
> +{
> +	const struct servreg_loc_pfr_req *req = decoded;
> +	struct servreg_loc_pfr_resp rsp = {};
> +	int ret;
> +
> +	pr_warn_ratelimited("PDM: service '%s' crash: '%s'\n", req->service, req->reason);
> +
> +	rsp.rsp.result = QMI_RESULT_SUCCESS_V01;
> +	rsp.rsp.error = QMI_ERR_NONE_V01;
> +
> +	ret = qmi_send_response(qmi, sq, txn, SERVREG_LOC_PFR,
> +				SERVREG_LOC_PFR_RESP_MSG_SIZE,
> +				servreg_loc_pfr_resp_ei, &rsp);
> +	if (ret)
> +		pr_err("Error sending servreg response: %d\n", ret);
> +}
> +
> diff --git a/drivers/soc/qcom/qcom_pdm_msg.h b/drivers/soc/qcom/qcom_pdm_msg.h
> new file mode 100644
> index 000000000000..e576b87c67c0
> --- /dev/null
> +++ b/drivers/soc/qcom/qcom_pdm_msg.h
> @@ -0,0 +1,66 @@
> +// SPDX-License-Identifier: BSD-3-Clause
> +/*
> + * Copyright (c) 2018, Linaro Ltd.
> + * Copyright (c) 2016, Bjorn Andersson
> + */
> +
> +#ifndef __QMI_SERVREG_LOC_H__
> +#define __QMI_SERVREG_LOC_H__
> +

Should we update the header guards to reflect the new file name?

> +#include <linux/types.h>
> +#include <linux/soc/qcom/qmi.h>
> +
> +#define SERVREG_QMI_SERVICE 64
> +#define SERVREG_QMI_VERSION 257
> +#define SERVREG_QMI_INSTANCE 0
> +#define QMI_RESULT_SUCCESS 0
> +#define QMI_RESULT_FAILURE 1
> +#define QMI_ERR_NONE 0
> +#define QMI_ERR_INTERNAL 1
> +#define QMI_ERR_MALFORMED_MSG 2

I think these common QMI macro definitions are wrong. They should match
what is defined in <soc/qcom/qmi.h>. This is a bug in the userspace
pd-mapper header as well.

> +#endif
> diff --git a/include/linux/soc/qcom/pd_mapper.h b/include/linux/soc/qcom/pd_mapper.h
> new file mode 100644
> index 000000000000..86438b7ca6fe
> --- /dev/null
> +++ b/include/linux/soc/qcom/pd_mapper.h
> @@ -0,0 +1,39 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Qualcomm Protection Domain mapper
> + *
> + * Copyright (c) 2023 Linaro Ltd.
> + */
> +#ifndef __QCOM_PD_MAPPER__
> +#define __QCOM_PD_MAPPER__
> +
> +struct qcom_pdm_domain_data {
> +	const char *domain;
> +	u32 instance_id;
> +	/* NULL-terminated array */
> +	const char * services[];

s/char * services[]/char *services[]/

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v4 3/7] soc: qcom: add pd-mapper implementation
  2024-03-14 19:44   ` Chris Lew
@ 2024-03-14 21:30     ` Dmitry Baryshkov
  2024-03-15  0:57       ` Chris Lew
  0 siblings, 1 reply; 26+ messages in thread
From: Dmitry Baryshkov @ 2024-03-14 21:30 UTC (permalink / raw
  To: Chris Lew
  Cc: Bjorn Andersson, Konrad Dybcio, Mathieu Poirier, linux-arm-msm,
	linux-remoteproc, Johan Hovold

On Thu, 14 Mar 2024 at 21:44, Chris Lew <quic_clew@quicinc.com> wrote:
>
>
>
> On 3/11/2024 8:34 AM, Dmitry Baryshkov wrote:
> > +int qcom_pdm_add_domains(const struct qcom_pdm_domain_data * const *data, size_t num_data)
> > +{
> > +     int ret;
> > +     int i;
> > +
> > +     mutex_lock(&qcom_pdm_mutex);
> > +
> > +     if (qcom_pdm_server_added) {
> > +             ret = qmi_del_server(&qcom_pdm_handle, SERVREG_QMI_SERVICE,
> > +                                  SERVREG_QMI_VERSION, SERVREG_QMI_INSTANCE);
> > +             if (ret)
> > +                     goto err_out;
> > +     }
> > +
> > +     for (i = 0; i < num_data; i++) {
> > +             ret = qcom_pdm_add_domain_locked(data[i]);
> > +             if (ret)
> > +                     goto err;
> > +     }
> > +
> > +     ret = qmi_add_server(&qcom_pdm_handle, SERVREG_QMI_SERVICE,
> > +                          SERVREG_QMI_VERSION, SERVREG_QMI_INSTANCE);
> > +     if (ret) {
> > +             pr_err("PDM: error adding server %d\n", ret);
> > +             goto err;
> > +     }
> > +
> > +     qcom_pdm_server_added = true;
> > +
> > +     mutex_unlock(&qcom_pdm_mutex);
> > +
> > +     return 0;
> > +
> > +err:
> > +     while (--i >= 0)
> > +             qcom_pdm_del_domain_locked(data[i]);
> > +
> > +     qmi_add_server(&qcom_pdm_handle, SERVREG_QMI_SERVICE,
> > +                    SERVREG_QMI_VERSION, SERVREG_QMI_INSTANCE);
> > +
> > +err_out:
> > +     mutex_unlock(&qcom_pdm_mutex);
> > +
> > +     return ret;
> > +}
> > +EXPORT_SYMBOL_GPL(qcom_pdm_add_domains);
> > +
> > +void qcom_pdm_del_domains(const struct qcom_pdm_domain_data * const *data, size_t num_data)
> > +{
> > +     int i;
> > +
> > +     mutex_lock(&qcom_pdm_mutex);
> > +
> > +     if (qcom_pdm_server_added) {
> > +             qmi_del_server(&qcom_pdm_handle, SERVREG_QMI_SERVICE,
> > +                            SERVREG_QMI_VERSION, SERVREG_QMI_INSTANCE);
> > +     }
>
> I am confused as to why we need to reset the qmi handle anytime
> qcom_pdm_del_domains or qcom_pdm_add_domains is called. Is this to
> trigger some kind of re-broadcast type notification to clients because
> the service list has been updated?

Yes. Otherwise clients like pmic-glink will miss new domains.

>
> My concern would be that this causes some kind of unintended side-effect
> on the rprocs that are not restarting.

Well, an alternative is to match machine compatible strings and to
build a full list of domains right from the beginning.

>
> > +
> > +     for (i = 0; i < num_data; i++)
> > +             qcom_pdm_del_domain_locked(data[i]);
> > +
> > +     qmi_add_server(&qcom_pdm_handle, SERVREG_QMI_SERVICE,
> > +                    SERVREG_QMI_VERSION, SERVREG_QMI_INSTANCE);
> > +     qcom_pdm_server_added = true;
> > +
> > +     mutex_unlock(&qcom_pdm_mutex);
> > +}
> > +EXPORT_SYMBOL_GPL(qcom_pdm_del_domains);
> > +
> > +static void qcom_pdm_get_domain_list(struct qmi_handle *qmi,
> > +                                  struct sockaddr_qrtr *sq,
> > +                                  struct qmi_txn *txn,
> > +                                  const void *decoded)
> > +{
> > +     const struct servreg_loc_get_domain_list_req *req = decoded;
> > +     struct servreg_loc_get_domain_list_resp *rsp = kzalloc(sizeof(*rsp), GFP_KERNEL);
> > +     struct qcom_pdm_service *service;
> > +     u32 offset;
> > +     int ret;
> > +
> > +     offset = req->offset_valid ? req->offset : 0;
> > +
> > +     rsp->rsp.result = QMI_RESULT_SUCCESS_V01;
> > +     rsp->rsp.error = QMI_ERR_NONE_V01;
> > +
> > +     rsp->db_revision_valid = true;
> > +     rsp->db_revision = 1;
> > +
> > +     rsp->total_domains_valid = true;
> > +     rsp->total_domains = 0;
> > +
> > +     mutex_lock(&qcom_pdm_mutex);
> > +
> > +     service = qcom_pdm_find_locked(req->name);
> > +     if (service) {
> > +             struct qcom_pdm_domain *domain;
> > +
> > +             rsp->domain_list_valid = true;
> > +             rsp->domain_list_len = 0;
> > +
> > +             list_for_each_entry(domain, &service->domains, list) {
> > +                     u32 i = rsp->total_domains++;
> > +
> > +                     if (i >= offset && i < SERVREG_LOC_MAX_DOMAINS) {
> > +                             u32 j = rsp->domain_list_len++;
> > +
> > +                             strscpy(rsp->domain_list[j].name, domain->name,
> > +                                     sizeof(rsp->domain_list[i].name));
> > +                             rsp->domain_list[j].instance_id = domain->instance_id;
> > +
> > +                             pr_debug("PDM: found %s / %d\n", domain->name,
> > +                                      domain->instance_id);
> > +                     }
> > +             }
> > +
> > +     }
> > +
> > +     mutex_unlock(&qcom_pdm_mutex);
> > +
> > +     pr_debug("PDM: service '%s' offset %d returning %d domains (of %d)\n", req->name,
> > +              req->offset_valid ? req->offset : -1, rsp->domain_list_len, rsp->total_domains);
> > +
> > +     ret = qmi_send_response(qmi, sq, txn, SERVREG_LOC_GET_DOMAIN_LIST,
> > +                             2658,
> > +                             servreg_loc_get_domain_list_resp_ei, rsp);
>
> Other QMI clients like pdr_interface have macros for the message size.
> Can we consider adding one instead of using 2658 directly?


Ack

>
> > +     if (ret)
> > +             pr_err("Error sending servreg response: %d\n", ret);
> > +
> > +     kfree(rsp);
> > +}
> > +
> > +static void qcom_pdm_pfr(struct qmi_handle *qmi,
> > +                      struct sockaddr_qrtr *sq,
> > +                      struct qmi_txn *txn,
> > +                      const void *decoded)
> > +{
> > +     const struct servreg_loc_pfr_req *req = decoded;
> > +     struct servreg_loc_pfr_resp rsp = {};
> > +     int ret;
> > +
> > +     pr_warn_ratelimited("PDM: service '%s' crash: '%s'\n", req->service, req->reason);
> > +
> > +     rsp.rsp.result = QMI_RESULT_SUCCESS_V01;
> > +     rsp.rsp.error = QMI_ERR_NONE_V01;
> > +
> > +     ret = qmi_send_response(qmi, sq, txn, SERVREG_LOC_PFR,
> > +                             SERVREG_LOC_PFR_RESP_MSG_SIZE,
> > +                             servreg_loc_pfr_resp_ei, &rsp);
> > +     if (ret)
> > +             pr_err("Error sending servreg response: %d\n", ret);
> > +}
> > +
> > diff --git a/drivers/soc/qcom/qcom_pdm_msg.h b/drivers/soc/qcom/qcom_pdm_msg.h
> > new file mode 100644
> > index 000000000000..e576b87c67c0
> > --- /dev/null
> > +++ b/drivers/soc/qcom/qcom_pdm_msg.h
> > @@ -0,0 +1,66 @@
> > +// SPDX-License-Identifier: BSD-3-Clause
> > +/*
> > + * Copyright (c) 2018, Linaro Ltd.
> > + * Copyright (c) 2016, Bjorn Andersson
> > + */
> > +
> > +#ifndef __QMI_SERVREG_LOC_H__
> > +#define __QMI_SERVREG_LOC_H__
> > +
>
> Should we update the header guards to reflect the new file name?

Ack

>
> > +#include <linux/types.h>
> > +#include <linux/soc/qcom/qmi.h>
> > +
> > +#define SERVREG_QMI_SERVICE 64
> > +#define SERVREG_QMI_VERSION 257
> > +#define SERVREG_QMI_INSTANCE 0
> > +#define QMI_RESULT_SUCCESS 0
> > +#define QMI_RESULT_FAILURE 1
> > +#define QMI_ERR_NONE 0
> > +#define QMI_ERR_INTERNAL 1
> > +#define QMI_ERR_MALFORMED_MSG 2
>
> I think these common QMI macro definitions are wrong. They should match
> what is defined in <soc/qcom/qmi.h>. This is a bug in the userspace
> pd-mapper header as well.

Ack

>
> > +#endif
> > diff --git a/include/linux/soc/qcom/pd_mapper.h b/include/linux/soc/qcom/pd_mapper.h
> > new file mode 100644
> > index 000000000000..86438b7ca6fe
> > --- /dev/null
> > +++ b/include/linux/soc/qcom/pd_mapper.h
> > @@ -0,0 +1,39 @@
> > +/* SPDX-License-Identifier: GPL-2.0 */
> > +/*
> > + * Qualcomm Protection Domain mapper
> > + *
> > + * Copyright (c) 2023 Linaro Ltd.
> > + */
> > +#ifndef __QCOM_PD_MAPPER__
> > +#define __QCOM_PD_MAPPER__
> > +
> > +struct qcom_pdm_domain_data {
> > +     const char *domain;
> > +     u32 instance_id;
> > +     /* NULL-terminated array */
> > +     const char * services[];
>
> s/char * services[]/char *services[]/



-- 
With best wishes
Dmitry

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v4 3/7] soc: qcom: add pd-mapper implementation
  2024-03-14 21:30     ` Dmitry Baryshkov
@ 2024-03-15  0:57       ` Chris Lew
  0 siblings, 0 replies; 26+ messages in thread
From: Chris Lew @ 2024-03-15  0:57 UTC (permalink / raw
  To: Dmitry Baryshkov
  Cc: Bjorn Andersson, Konrad Dybcio, Mathieu Poirier, linux-arm-msm,
	linux-remoteproc, Johan Hovold



On 3/14/2024 2:30 PM, Dmitry Baryshkov wrote:
> On Thu, 14 Mar 2024 at 21:44, Chris Lew <quic_clew@quicinc.com> wrote:
>>
>>
>>
>> On 3/11/2024 8:34 AM, Dmitry Baryshkov wrote:
>>> +int qcom_pdm_add_domains(const struct qcom_pdm_domain_data * const *data, size_t num_data)
>>> +{
>>> +     int ret;
>>> +     int i;
>>> +
>>> +     mutex_lock(&qcom_pdm_mutex);
>>> +
>>> +     if (qcom_pdm_server_added) {
>>> +             ret = qmi_del_server(&qcom_pdm_handle, SERVREG_QMI_SERVICE,
>>> +                                  SERVREG_QMI_VERSION, SERVREG_QMI_INSTANCE);
>>> +             if (ret)
>>> +                     goto err_out;
>>> +     }
>>> +
>>> +     for (i = 0; i < num_data; i++) {
>>> +             ret = qcom_pdm_add_domain_locked(data[i]);
>>> +             if (ret)
>>> +                     goto err;
>>> +     }
>>> +
>>> +     ret = qmi_add_server(&qcom_pdm_handle, SERVREG_QMI_SERVICE,
>>> +                          SERVREG_QMI_VERSION, SERVREG_QMI_INSTANCE);
>>> +     if (ret) {
>>> +             pr_err("PDM: error adding server %d\n", ret);
>>> +             goto err;
>>> +     }
>>> +
>>> +     qcom_pdm_server_added = true;
>>> +
>>> +     mutex_unlock(&qcom_pdm_mutex);
>>> +
>>> +     return 0;
>>> +
>>> +err:
>>> +     while (--i >= 0)
>>> +             qcom_pdm_del_domain_locked(data[i]);
>>> +
>>> +     qmi_add_server(&qcom_pdm_handle, SERVREG_QMI_SERVICE,
>>> +                    SERVREG_QMI_VERSION, SERVREG_QMI_INSTANCE);
>>> +
>>> +err_out:
>>> +     mutex_unlock(&qcom_pdm_mutex);
>>> +
>>> +     return ret;
>>> +}
>>> +EXPORT_SYMBOL_GPL(qcom_pdm_add_domains);
>>> +
>>> +void qcom_pdm_del_domains(const struct qcom_pdm_domain_data * const *data, size_t num_data)
>>> +{
>>> +     int i;
>>> +
>>> +     mutex_lock(&qcom_pdm_mutex);
>>> +
>>> +     if (qcom_pdm_server_added) {
>>> +             qmi_del_server(&qcom_pdm_handle, SERVREG_QMI_SERVICE,
>>> +                            SERVREG_QMI_VERSION, SERVREG_QMI_INSTANCE);
>>> +     }
>>
>> I am confused as to why we need to reset the qmi handle anytime
>> qcom_pdm_del_domains or qcom_pdm_add_domains is called. Is this to
>> trigger some kind of re-broadcast type notification to clients because
>> the service list has been updated?
> 
> Yes. Otherwise clients like pmic-glink will miss new domains.
> 
>>
>> My concern would be that this causes some kind of unintended side-effect
>> on the rprocs that are not restarting.
> 
> Well, an alternative is to match machine compatible strings and to
> build a full list of domains right from the beginning.
> 

Ok, thanks for clarifying. I spoke to some of the firmware developers
and a quick scan seems to indicate their handling is robust enough to
handle this. It is a change in expected behavior, but I think the current
approach is reasonable.

>>
>>> +
>>> +     for (i = 0; i < num_data; i++)
>>> +             qcom_pdm_del_domain_locked(data[i]);
>>> +
>>> +     qmi_add_server(&qcom_pdm_handle, SERVREG_QMI_SERVICE,
>>> +                    SERVREG_QMI_VERSION, SERVREG_QMI_INSTANCE);
>>> +     qcom_pdm_server_added = true;
>>> +
>>> +     mutex_unlock(&qcom_pdm_mutex);
>>> +}
>>> +EXPORT_SYMBOL_GPL(qcom_pdm_del_domains);
>>> +
>>> +static void qcom_pdm_get_domain_list(struct qmi_handle *qmi,
>>> +                                  struct sockaddr_qrtr *sq,
>>> +                                  struct qmi_txn *txn,
>>> +                                  const void *decoded)
>>> +{
>>> +     const struct servreg_loc_get_domain_list_req *req = decoded;
>>> +     struct servreg_loc_get_domain_list_resp *rsp = kzalloc(sizeof(*rsp), GFP_KERNEL);
>>> +     struct qcom_pdm_service *service;
>>> +     u32 offset;
>>> +     int ret;
>>> +
>>> +     offset = req->offset_valid ? req->offset : 0;
>>> +
>>> +     rsp->rsp.result = QMI_RESULT_SUCCESS_V01;
>>> +     rsp->rsp.error = QMI_ERR_NONE_V01;
>>> +
>>> +     rsp->db_revision_valid = true;
>>> +     rsp->db_revision = 1;
>>> +
>>> +     rsp->total_domains_valid = true;
>>> +     rsp->total_domains = 0;
>>> +
>>> +     mutex_lock(&qcom_pdm_mutex);
>>> +
>>> +     service = qcom_pdm_find_locked(req->name);
>>> +     if (service) {
>>> +             struct qcom_pdm_domain *domain;
>>> +
>>> +             rsp->domain_list_valid = true;
>>> +             rsp->domain_list_len = 0;
>>> +
>>> +             list_for_each_entry(domain, &service->domains, list) {
>>> +                     u32 i = rsp->total_domains++;
>>> +
>>> +                     if (i >= offset && i < SERVREG_LOC_MAX_DOMAINS) {
>>> +                             u32 j = rsp->domain_list_len++;
>>> +
>>> +                             strscpy(rsp->domain_list[j].name, domain->name,
>>> +                                     sizeof(rsp->domain_list[i].name));
>>> +                             rsp->domain_list[j].instance_id = domain->instance_id;
>>> +
>>> +                             pr_debug("PDM: found %s / %d\n", domain->name,
>>> +                                      domain->instance_id);
>>> +                     }
>>> +             }
>>> +
>>> +     }
>>> +
>>> +     mutex_unlock(&qcom_pdm_mutex);
>>> +
>>> +     pr_debug("PDM: service '%s' offset %d returning %d domains (of %d)\n", req->name,
>>> +              req->offset_valid ? req->offset : -1, rsp->domain_list_len, rsp->total_domains);
>>> +
>>> +     ret = qmi_send_response(qmi, sq, txn, SERVREG_LOC_GET_DOMAIN_LIST,
>>> +                             2658,
>>> +                             servreg_loc_get_domain_list_resp_ei, rsp);
>>
>> Other QMI clients like pdr_interface have macros for the message size.
>> Can we consider adding one instead of using 2658 directly?
> 
> 
> Ack
> 
>>
>>> +     if (ret)
>>> +             pr_err("Error sending servreg response: %d\n", ret);
>>> +
>>> +     kfree(rsp);
>>> +}
>>> +
>>> +static void qcom_pdm_pfr(struct qmi_handle *qmi,
>>> +                      struct sockaddr_qrtr *sq,
>>> +                      struct qmi_txn *txn,
>>> +                      const void *decoded)
>>> +{
>>> +     const struct servreg_loc_pfr_req *req = decoded;
>>> +     struct servreg_loc_pfr_resp rsp = {};
>>> +     int ret;
>>> +
>>> +     pr_warn_ratelimited("PDM: service '%s' crash: '%s'\n", req->service, req->reason);
>>> +
>>> +     rsp.rsp.result = QMI_RESULT_SUCCESS_V01;
>>> +     rsp.rsp.error = QMI_ERR_NONE_V01;
>>> +
>>> +     ret = qmi_send_response(qmi, sq, txn, SERVREG_LOC_PFR,
>>> +                             SERVREG_LOC_PFR_RESP_MSG_SIZE,
>>> +                             servreg_loc_pfr_resp_ei, &rsp);
>>> +     if (ret)
>>> +             pr_err("Error sending servreg response: %d\n", ret);
>>> +}
>>> +
>>> diff --git a/drivers/soc/qcom/qcom_pdm_msg.h b/drivers/soc/qcom/qcom_pdm_msg.h
>>> new file mode 100644
>>> index 000000000000..e576b87c67c0
>>> --- /dev/null
>>> +++ b/drivers/soc/qcom/qcom_pdm_msg.h
>>> @@ -0,0 +1,66 @@
>>> +// SPDX-License-Identifier: BSD-3-Clause
>>> +/*
>>> + * Copyright (c) 2018, Linaro Ltd.
>>> + * Copyright (c) 2016, Bjorn Andersson
>>> + */
>>> +
>>> +#ifndef __QMI_SERVREG_LOC_H__
>>> +#define __QMI_SERVREG_LOC_H__
>>> +
>>
>> Should we update the header guards to reflect the new file name?
> 
> Ack
> 
>>
>>> +#include <linux/types.h>
>>> +#include <linux/soc/qcom/qmi.h>
>>> +
>>> +#define SERVREG_QMI_SERVICE 64
>>> +#define SERVREG_QMI_VERSION 257
>>> +#define SERVREG_QMI_INSTANCE 0
>>> +#define QMI_RESULT_SUCCESS 0
>>> +#define QMI_RESULT_FAILURE 1
>>> +#define QMI_ERR_NONE 0
>>> +#define QMI_ERR_INTERNAL 1
>>> +#define QMI_ERR_MALFORMED_MSG 2
>>
>> I think these common QMI macro definitions are wrong. They should match
>> what is defined in <soc/qcom/qmi.h>. This is a bug in the userspace
>> pd-mapper header as well.
> 
> Ack
> 
>>
>>> +#endif
>>> diff --git a/include/linux/soc/qcom/pd_mapper.h b/include/linux/soc/qcom/pd_mapper.h
>>> new file mode 100644
>>> index 000000000000..86438b7ca6fe
>>> --- /dev/null
>>> +++ b/include/linux/soc/qcom/pd_mapper.h
>>> @@ -0,0 +1,39 @@
>>> +/* SPDX-License-Identifier: GPL-2.0 */
>>> +/*
>>> + * Qualcomm Protection Domain mapper
>>> + *
>>> + * Copyright (c) 2023 Linaro Ltd.
>>> + */
>>> +#ifndef __QCOM_PD_MAPPER__
>>> +#define __QCOM_PD_MAPPER__
>>> +
>>> +struct qcom_pdm_domain_data {
>>> +     const char *domain;
>>> +     u32 instance_id;
>>> +     /* NULL-terminated array */
>>> +     const char * services[];
>>
>> s/char * services[]/char *services[]/
> 
> 
> 

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v4 5/7] remoteproc: qcom: adsp: add configuration for in-kernel pdm
  2024-03-11 15:34 ` [PATCH v4 5/7] remoteproc: qcom: adsp: add configuration for in-kernel pdm Dmitry Baryshkov
@ 2024-03-16 18:15   ` Bjorn Andersson
  0 siblings, 0 replies; 26+ messages in thread
From: Bjorn Andersson @ 2024-03-16 18:15 UTC (permalink / raw
  To: Dmitry Baryshkov
  Cc: Konrad Dybcio, Mathieu Poirier, linux-arm-msm, linux-remoteproc,
	Johan Hovold

On Mon, Mar 11, 2024 at 05:34:05PM +0200, Dmitry Baryshkov wrote:
> diff --git a/drivers/remoteproc/qcom_q6v5_adsp.c b/drivers/remoteproc/qcom_q6v5_adsp.c
[..]
>  static int qcom_rproc_pds_attach(struct qcom_adsp *adsp, const char **pd_names,
> @@ -375,10 +382,14 @@ static int adsp_start(struct rproc *rproc)
>  	int ret;
>  	unsigned int val;
>  
> -	ret = qcom_q6v5_prepare(&adsp->q6v5);
> +	ret = qcom_pdm_add_domains(adsp->domains, adsp->num_domains);

As we're not defining the PDs based on the loaded firmware, and you're
doing the reinitialization dance of the server on each add/remove of
domains, I think it would be better to just register these at probe time.

Regards,
Bjorn

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v4 3/7] soc: qcom: add pd-mapper implementation
  2024-03-11 15:34 ` [PATCH v4 3/7] soc: qcom: add pd-mapper implementation Dmitry Baryshkov
                     ` (2 preceding siblings ...)
  2024-03-14 19:44   ` Chris Lew
@ 2024-04-07 23:14   ` Bjorn Andersson
  2024-04-07 23:20     ` Dmitry Baryshkov
  3 siblings, 1 reply; 26+ messages in thread
From: Bjorn Andersson @ 2024-04-07 23:14 UTC (permalink / raw
  To: Dmitry Baryshkov
  Cc: Konrad Dybcio, Mathieu Poirier, linux-arm-msm, linux-remoteproc,
	Johan Hovold

On Mon, Mar 11, 2024 at 05:34:03PM +0200, Dmitry Baryshkov wrote:
> diff --git a/drivers/soc/qcom/qcom_pdm.c b/drivers/soc/qcom/qcom_pdm.c
[..]
> +int qcom_pdm_add_domains(const struct qcom_pdm_domain_data * const *data, size_t num_data)
> +{
> +	int ret;
> +	int i;
> +
> +	mutex_lock(&qcom_pdm_mutex);
> +
> +	if (qcom_pdm_server_added) {
> +		ret = qmi_del_server(&qcom_pdm_handle, SERVREG_QMI_SERVICE,
> +				     SERVREG_QMI_VERSION, SERVREG_QMI_INSTANCE);

Sorry for the late reply.

I met with the owners of the firmware side of this and we concluded that
this will unfortunately not work.

The typical case is that when the firmware comes up, it queries the
information from the pd-mapper about itself, so this would obviously
work just fine.

Furthermore, if another core causes the server to be deleted and then
re-added, the firmware will wait for pd-mapper to come up. So this will
work as well - as reported by Chris.

There is, however, a not too uncommon case where the firmware on one
remoteproc will inquire about information of another remoteproc. One
example is modem coming up, inquiring about where to find audio
services. This inquiry will be performed once at firmware boot and
decisions will be made based on the responses, no retries or updates.

As such, starting pd-mapper with an incomplete "database" is
unfortunately not supported.

Regards,
Bjorn

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v4 3/7] soc: qcom: add pd-mapper implementation
  2024-04-07 23:14   ` Bjorn Andersson
@ 2024-04-07 23:20     ` Dmitry Baryshkov
  0 siblings, 0 replies; 26+ messages in thread
From: Dmitry Baryshkov @ 2024-04-07 23:20 UTC (permalink / raw
  To: Bjorn Andersson
  Cc: Konrad Dybcio, Mathieu Poirier, linux-arm-msm, linux-remoteproc,
	Johan Hovold

On Mon, 8 Apr 2024 at 02:14, Bjorn Andersson <andersson@kernel.org> wrote:
>
> On Mon, Mar 11, 2024 at 05:34:03PM +0200, Dmitry Baryshkov wrote:
> > diff --git a/drivers/soc/qcom/qcom_pdm.c b/drivers/soc/qcom/qcom_pdm.c
> [..]
> > +int qcom_pdm_add_domains(const struct qcom_pdm_domain_data * const *data, size_t num_data)
> > +{
> > +     int ret;
> > +     int i;
> > +
> > +     mutex_lock(&qcom_pdm_mutex);
> > +
> > +     if (qcom_pdm_server_added) {
> > +             ret = qmi_del_server(&qcom_pdm_handle, SERVREG_QMI_SERVICE,
> > +                                  SERVREG_QMI_VERSION, SERVREG_QMI_INSTANCE);
>
> Sorry for the late reply.
>
> I met with the owners of the firmware side of this and we concluded that
> this will unfortunately not work.
>
> The typical case is that when the firmware comes up, it queries the
> information from the pd-mapper about itself, so this would obviously
> work just fine.
>
> Furthermore, if another core causes the server to be deleted and then
> re-added, the firmware will wait for pd-mapper to come up. So this will
> work as well - as reported by Chris.
>
> There is, however, a not too uncommon case where the firmware on one
> remoteproc will inquire about information of another remoteproc. One
> example is modem coming up, inquiring about where to find audio
> services. This inquiry will be performed once at firmware boot and
> decisions will be made based on the responses, no retries or updates.
>
> As such, starting pd-mapper with an incomplete "database" is
> unfortunately not supported.

Ack. Thanks for the info.

Xilin Wu has also reported a rare race condition, seemingly between
pmic-glink and the pd-mapper.

I think I will rework the code to bring up the full database based on
the machine compatible.

>
> Regards,
> Bjorn





--
With best wishes
Dmitry

^ permalink raw reply	[flat|nested] 26+ messages in thread

end of thread, other threads:[~2024-04-07 23:21 UTC | newest]

Thread overview: 26+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-03-11 15:34 [PATCH v4 0/7] soc: qcom: add in-kernel pd-mapper implementation Dmitry Baryshkov
2024-03-11 15:34 ` [PATCH v4 1/7] soc: qcom: pdr: protect locator_addr with the main mutex Dmitry Baryshkov
2024-03-13 23:35   ` Chris Lew
2024-03-14  0:07     ` Dmitry Baryshkov
2024-03-11 15:34 ` [PATCH v4 2/7] soc: qcom: qmi: add a way to remove running service Dmitry Baryshkov
2024-03-12  0:53   ` Konrad Dybcio
2024-03-12  1:03     ` Dmitry Baryshkov
2024-03-14  0:03   ` Chris Lew
2024-03-14  0:09     ` Dmitry Baryshkov
2024-03-14 17:15       ` Chris Lew
2024-03-11 15:34 ` [PATCH v4 3/7] soc: qcom: add pd-mapper implementation Dmitry Baryshkov
2024-03-12  0:55   ` Konrad Dybcio
2024-03-12  9:43     ` Johan Hovold
2024-03-12  9:36   ` Johan Hovold
2024-03-14 19:44   ` Chris Lew
2024-03-14 21:30     ` Dmitry Baryshkov
2024-03-15  0:57       ` Chris Lew
2024-04-07 23:14   ` Bjorn Andersson
2024-04-07 23:20     ` Dmitry Baryshkov
2024-03-11 15:34 ` [PATCH v4 4/7] remoteproc: qcom: pas: correct data indentation Dmitry Baryshkov
2024-03-11 15:34 ` [PATCH v4 5/7] remoteproc: qcom: adsp: add configuration for in-kernel pdm Dmitry Baryshkov
2024-03-16 18:15   ` Bjorn Andersson
2024-03-11 15:34 ` [PATCH v4 6/7] remoteproc: qcom: mss: " Dmitry Baryshkov
2024-03-11 15:34 ` [PATCH v4 7/7] remoteproc: qcom: pas: " Dmitry Baryshkov
2024-03-11 16:58   ` neil.armstrong
2024-03-11 17:18 ` [PATCH v4 0/7] soc: qcom: add in-kernel pd-mapper implementation neil.armstrong

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for read-only IMAP folder(s) and NNTP newsgroup(s).