From: Tomi Valkeinen <tomi.valkeinen@ideasonboard.com>
To: Vishal Sagar <vishal.sagar@amd.com>
Cc: michal.simek@amd.com, dmaengine@vger.kernel.org,
linux-arm-kernel@lists.infradead.org,
linux-kernel@vger.kernel.org, varunkumar.allagadapa@amd.com,
laurent.pinchart@ideasonboard.com, vkoul@kernel.org
Subject: Re: [PATCH v2 2/2] dmaengine: xilinx: dpdma: Add support for cyclic dma mode
Date: Wed, 27 Mar 2024 14:53:04 +0200
Message-ID: <15f85f9f-d995-4146-82a9-5f11d715799a@ideasonboard.com>
In-Reply-To: <20240228042124.3074044-3-vishal.sagar@amd.com>
On 28/02/2024 06:21, Vishal Sagar wrote:
> From: Rohit Visavalia <rohit.visavalia@xilinx.com>
>
> Add support for DPDMA cyclic DMA mode. Cyclic DMA transfers are
> required by audio streaming.
>
> Signed-off-by: Rohit Visavalia <rohit.visavalia@amd.com>
> Signed-off-by: Radhey Shyam Pandey <radhey.shyam.pandey@amd.com>
> Signed-off-by: Vishal Sagar <vishal.sagar@amd.com>
>
> ---
> drivers/dma/xilinx/xilinx_dpdma.c | 97 +++++++++++++++++++++++++++++++
> 1 file changed, 97 insertions(+)
>
> diff --git a/drivers/dma/xilinx/xilinx_dpdma.c b/drivers/dma/xilinx/xilinx_dpdma.c
> index 28d9af8f00f0..88ad2f35538a 100644
> --- a/drivers/dma/xilinx/xilinx_dpdma.c
> +++ b/drivers/dma/xilinx/xilinx_dpdma.c
> @@ -669,6 +669,84 @@ static void xilinx_dpdma_chan_free_tx_desc(struct virt_dma_desc *vdesc)
> kfree(desc);
> }
>
> +/**
> + * xilinx_dpdma_chan_prep_cyclic - Prepare a cyclic dma descriptor
> + * @chan: DPDMA channel
> + * @buf_addr: buffer address
> + * @buf_len: buffer length
> + * @period_len: length of a single period in bytes
> + * @flags: tx flags argument passed in to prepare function
> + *
> + * Prepare a tx descriptor including internal software/hardware descriptors
> + * for the given cyclic transaction.
> + *
> + * Return: A dma async tx descriptor on success, or NULL.
> + */
> +static struct dma_async_tx_descriptor *
> +xilinx_dpdma_chan_prep_cyclic(struct xilinx_dpdma_chan *chan,
> + dma_addr_t buf_addr, size_t buf_len,
> + size_t period_len, unsigned long flags)
> +{
> + struct xilinx_dpdma_tx_desc *tx_desc;
> + struct xilinx_dpdma_sw_desc *sw_desc, *last = NULL;
> + unsigned int periods = buf_len / period_len;
> + unsigned int i;
> +
> + tx_desc = xilinx_dpdma_chan_alloc_tx_desc(chan);
> + if (!tx_desc)
> + return (void *)tx_desc;
Just return NULL here?
> +
> + for (i = 0; i < periods; i++) {
> + struct xilinx_dpdma_hw_desc *hw_desc;
> +
> + if (!IS_ALIGNED(buf_addr, XILINX_DPDMA_ALIGN_BYTES)) {
> + dev_err(chan->xdev->dev,
> + "buffer should be aligned at %d B\n",
> + XILINX_DPDMA_ALIGN_BYTES);
> + goto error;
> + }
> +
> + sw_desc = xilinx_dpdma_chan_alloc_sw_desc(chan);
> + if (!sw_desc)
> + goto error;
> +
> + xilinx_dpdma_sw_desc_set_dma_addrs(chan->xdev, sw_desc, last,
> + &buf_addr, 1);
> + hw_desc = &sw_desc->hw;
> + hw_desc->xfer_size = period_len;
> + hw_desc->hsize_stride =
> + FIELD_PREP(XILINX_DPDMA_DESC_HSIZE_STRIDE_HSIZE_MASK,
> + period_len) |
> + FIELD_PREP(XILINX_DPDMA_DESC_HSIZE_STRIDE_STRIDE_MASK,
> + period_len);
> + hw_desc->control |= XILINX_DPDMA_DESC_CONTROL_PREEMBLE;
> + hw_desc->control |= XILINX_DPDMA_DESC_CONTROL_IGNORE_DONE;
> + hw_desc->control |= XILINX_DPDMA_DESC_CONTROL_COMPLETE_INTR;
You could:
hw_desc->control |= XILINX_DPDMA_DESC_CONTROL_PREEMBLE |
XILINX_DPDMA_DESC_CONTROL_IGNORE_DONE |
XILINX_DPDMA_DESC_CONTROL_COMPLETE_INTR;
Although... Shouldn't control always be 0 here, so you can just
hw_desc->control = ...;
> +
> + list_add_tail(&sw_desc->node, &tx_desc->descriptors);
> +
> + buf_addr += period_len;
> + last = sw_desc;
> + }
> +
> + sw_desc = list_first_entry(&tx_desc->descriptors,
> + struct xilinx_dpdma_sw_desc, node);
> + last->hw.next_desc = lower_32_bits(sw_desc->dma_addr);
> + if (chan->xdev->ext_addr)
> + last->hw.addr_ext |=
> + FIELD_PREP(XILINX_DPDMA_DESC_ADDR_EXT_NEXT_ADDR_MASK,
> + upper_32_bits(sw_desc->dma_addr));
> +
> + last->hw.control |= XILINX_DPDMA_DESC_CONTROL_LAST_OF_FRAME;
> +
> + return vchan_tx_prep(&chan->vchan, &tx_desc->vdesc, flags);
> +
> +error:
> + xilinx_dpdma_chan_free_tx_desc(&tx_desc->vdesc);
> +
> + return NULL;
> +}
> +
> /**
> * xilinx_dpdma_chan_prep_interleaved_dma - Prepare an interleaved dma
> * descriptor
> @@ -1190,6 +1268,23 @@ static void xilinx_dpdma_chan_handle_err(struct xilinx_dpdma_chan *chan)
> /* -----------------------------------------------------------------------------
> * DMA Engine Operations
> */
> +static struct dma_async_tx_descriptor *
> +xilinx_dpdma_prep_dma_cyclic(struct dma_chan *dchan, dma_addr_t buf_addr,
> + size_t buf_len, size_t period_len,
> + enum dma_transfer_direction direction,
> + unsigned long flags)
> +{
> + struct xilinx_dpdma_chan *chan = to_xilinx_chan(dchan);
> +
> + if (direction != DMA_MEM_TO_DEV)
> + return NULL;
> +
> + if (buf_len % period_len)
> + return NULL;
> +
> + return xilinx_dpdma_chan_prep_cyclic(chan, buf_addr, buf_len,
> + period_len, flags);
The parameters should be aligned above.
> +}
>
> static struct dma_async_tx_descriptor *
> xilinx_dpdma_prep_interleaved_dma(struct dma_chan *dchan,
> @@ -1673,6 +1768,7 @@ static int xilinx_dpdma_probe(struct platform_device *pdev)
>
> dma_cap_set(DMA_SLAVE, ddev->cap_mask);
> dma_cap_set(DMA_PRIVATE, ddev->cap_mask);
> + dma_cap_set(DMA_CYCLIC, ddev->cap_mask);
> dma_cap_set(DMA_INTERLEAVE, ddev->cap_mask);
> dma_cap_set(DMA_REPEAT, ddev->cap_mask);
> dma_cap_set(DMA_LOAD_EOT, ddev->cap_mask);
> @@ -1680,6 +1776,7 @@ static int xilinx_dpdma_probe(struct platform_device *pdev)
>
> ddev->device_alloc_chan_resources = xilinx_dpdma_alloc_chan_resources;
> ddev->device_free_chan_resources = xilinx_dpdma_free_chan_resources;
> + ddev->device_prep_dma_cyclic = xilinx_dpdma_prep_dma_cyclic;
> ddev->device_prep_interleaved_dma = xilinx_dpdma_prep_interleaved_dma;
> /* TODO: Can we achieve better granularity ? */
> ddev->device_tx_status = dma_cookie_status;
While I'm not too familiar with DMA engines, this looks fine to me. So,
other than the few cosmetic comments above:
Reviewed-by: Tomi Valkeinen <tomi.valkeinen@ideasonboard.com>
Tomi