From: "Cédric Le Goater" <clg@redhat.com>
To: Avihai Horon <avihaih@nvidia.com>, qemu-devel@nongnu.org
Cc: "Peter Xu" <peterx@redhat.com>, "Fabiano Rosas" <farosas@suse.de>,
"Alex Williamson" <alex.williamson@redhat.com>,
"Philippe Mathieu-Daudé" <philmd@linaro.org>,
"Markus Armbruster" <armbru@redhat.com>
Subject: Re: [PATCH v5 08/10] vfio: Add Error** argument to .get_dirty_bitmap() handler
Date: Tue, 14 May 2024 11:05:23 +0200
Message-ID: <1efd3c9c-9d31-4896-9183-84f74e9899f1@redhat.com>
In-Reply-To: <6f7766e0-2f50-4ef5-90e1-46bd7c5e4892@nvidia.com>
On 5/13/24 15:51, Avihai Horon wrote:
>
> On 06/05/2024 12:20, Cédric Le Goater wrote:
>>
>> Let the callers do the error reporting. Add documentation while at it.
>>
>> Signed-off-by: Cédric Le Goater <clg@redhat.com>
>> ---
>>
>> Changes in v5:
>>
>> - Replaced error_setg() by error_setg_errno() in
>> vfio_devices_query_dirty_bitmap() and vfio_legacy_query_dirty_bitmap()
>> - ':' -> '-' in vfio_iommu_map_dirty_notify()
>>
>> include/hw/vfio/vfio-common.h | 4 +-
>> include/hw/vfio/vfio-container-base.h | 17 +++++++-
>> hw/vfio/common.c | 59 ++++++++++++++++++---------
>> hw/vfio/container-base.c | 5 ++-
>> hw/vfio/container.c | 14 ++++---
>> 5 files changed, 68 insertions(+), 31 deletions(-)
>>
>> diff --git a/include/hw/vfio/vfio-common.h b/include/hw/vfio/vfio-common.h
>> index 46f88493634b5634a9c14a5caa33a463fbf2c50d..68911d36676667352e94a97895828aff4b194b57 100644
>> --- a/include/hw/vfio/vfio-common.h
>> +++ b/include/hw/vfio/vfio-common.h
>> @@ -274,9 +274,9 @@ bool
>> vfio_devices_all_device_dirty_tracking(const VFIOContainerBase *bcontainer);
>> int vfio_devices_query_dirty_bitmap(const VFIOContainerBase *bcontainer,
>> VFIOBitmap *vbmap, hwaddr iova,
>> - hwaddr size);
>> + hwaddr size, Error **errp);
>
> Nit: while at it, can we fix the line wrap here?
>
>> int vfio_get_dirty_bitmap(const VFIOContainerBase *bcontainer, uint64_t iova,
>> - uint64_t size, ram_addr_t ram_addr);
>> + uint64_t size, ram_addr_t ram_addr, Error **errp);
>>
>> /* Returns 0 on success, or a negative errno. */
>> int vfio_device_get_name(VFIODevice *vbasedev, Error **errp);
>> diff --git a/include/hw/vfio/vfio-container-base.h b/include/hw/vfio/vfio-container-base.h
>> index 326ceea52a2030eec9dad289a9845866c4a8c090..48c92e186231c2c2b548abed08800faff3f430a7 100644
>> --- a/include/hw/vfio/vfio-container-base.h
>> +++ b/include/hw/vfio/vfio-container-base.h
>> @@ -85,7 +85,7 @@ int vfio_container_set_dirty_page_tracking(VFIOContainerBase *bcontainer,
>> bool start, Error **errp);
>> int vfio_container_query_dirty_bitmap(const VFIOContainerBase *bcontainer,
>> VFIOBitmap *vbmap,
>> - hwaddr iova, hwaddr size);
>> + hwaddr iova, hwaddr size, Error **errp);
>
> Nit: while at it, can we fix the line wrap here?
Do you mean:
int vfio_container_query_dirty_bitmap(const VFIOContainerBase *bcontainer,
VFIOBitmap *vbmap, hwaddr iova,
hwaddr size, Error **errp);
?
>
>>
>> void vfio_container_init(VFIOContainerBase *bcontainer,
>> VFIOAddressSpace *space,
>> @@ -138,9 +138,22 @@ struct VFIOIOMMUClass {
>> */
>> int (*set_dirty_page_tracking)(const VFIOContainerBase *bcontainer,
>> bool start, Error **errp);
>> + /**
>> + * @query_dirty_bitmap
>> + *
>> + * Get list of dirty pages from container
>
> s/list/bitmap?
yep
Thanks,
C.
>
>> + *
>> + * @bcontainer: #VFIOContainerBase from which to get dirty pages
>> + * @vbmap: #VFIOBitmap internal bitmap structure
>> + * @iova: iova base address
>> + * @size: size of iova range
>> + * @errp: pointer to Error*, to store an error if it happens.
>> + *
>> + * Returns zero to indicate success and negative for error
>> + */
>> int (*query_dirty_bitmap)(const VFIOContainerBase *bcontainer,
>> VFIOBitmap *vbmap,
>> - hwaddr iova, hwaddr size);
>> + hwaddr iova, hwaddr size, Error **errp);
>> /* PCI specific */
>> int (*pci_hot_reset)(VFIODevice *vbasedev, bool single);
>>
>> diff --git a/hw/vfio/common.c b/hw/vfio/common.c
>> index da748563eb33843e93631a5240759964f33162f2..c3d82a9d6e434e33f361e4b96157bf912d5c3a2f 100644
>> --- a/hw/vfio/common.c
>> +++ b/hw/vfio/common.c
>> @@ -1141,7 +1141,7 @@ static int vfio_device_dma_logging_report(VFIODevice *vbasedev, hwaddr iova,
>>
>> int vfio_devices_query_dirty_bitmap(const VFIOContainerBase *bcontainer,
>> VFIOBitmap *vbmap, hwaddr iova,
>> - hwaddr size)
>> + hwaddr size, Error **errp)
>
> Nit: while at it, can we fix the line wrap here?
>
>> {
>> VFIODevice *vbasedev;
>> int ret;
>> @@ -1150,10 +1150,10 @@ int vfio_devices_query_dirty_bitmap(const VFIOContainerBase *bcontainer,
>> ret = vfio_device_dma_logging_report(vbasedev, iova, size,
>> vbmap->bitmap);
>> if (ret) {
>> - error_report("%s: Failed to get DMA logging report, iova: "
>> - "0x%" HWADDR_PRIx ", size: 0x%" HWADDR_PRIx
>> - ", err: %d (%s)",
>> - vbasedev->name, iova, size, ret, strerror(-ret));
>> + error_setg_errno(errp, -ret,
>> + "%s: Failed to get DMA logging report, iova: "
>> + "0x%" HWADDR_PRIx ", size: 0x%" HWADDR_PRIx,
>> + vbasedev->name, iova, size);
>>
>> return ret;
>> }
>> @@ -1163,7 +1163,7 @@ int vfio_devices_query_dirty_bitmap(const VFIOContainerBase *bcontainer,
>> }
>>
>> int vfio_get_dirty_bitmap(const VFIOContainerBase *bcontainer, uint64_t iova,
>> - uint64_t size, ram_addr_t ram_addr)
>> + uint64_t size, ram_addr_t ram_addr, Error **errp)
>> {
>> bool all_device_dirty_tracking =
>> vfio_devices_all_device_dirty_tracking(bcontainer);
>> @@ -1180,13 +1180,17 @@ int vfio_get_dirty_bitmap(const VFIOContainerBase *bcontainer, uint64_t iova,
>>
>> ret = vfio_bitmap_alloc(&vbmap, size);
>> if (ret) {
>> + error_setg_errno(errp, -ret,
>> + "Failed to allocate dirty tracking bitmap");
>> return ret;
>> }
>>
>> if (all_device_dirty_tracking) {
>> - ret = vfio_devices_query_dirty_bitmap(bcontainer, &vbmap, iova, size);
>> + ret = vfio_devices_query_dirty_bitmap(bcontainer, &vbmap, iova, size,
>> + errp);
>> } else {
>> - ret = vfio_container_query_dirty_bitmap(bcontainer, &vbmap, iova, size);
>> + ret = vfio_container_query_dirty_bitmap(bcontainer, &vbmap, iova, size,
>> + errp);
>> }
>>
>> if (ret) {
>> @@ -1234,12 +1238,13 @@ static void vfio_iommu_map_dirty_notify(IOMMUNotifier *n, IOMMUTLBEntry *iotlb)
>> }
>>
>> ret = vfio_get_dirty_bitmap(bcontainer, iova, iotlb->addr_mask + 1,
>> - translated_addr);
>> + translated_addr, &local_err);
>> if (ret) {
>> - error_report("vfio_iommu_map_dirty_notify(%p, 0x%"HWADDR_PRIx", "
>> - "0x%"HWADDR_PRIx") = %d (%s)",
>> - bcontainer, iova, iotlb->addr_mask + 1, ret,
>> - strerror(-ret));
>> + error_prepend(&local_err,
>> + "vfio_iommu_map_dirty_notify(%p, 0x%"HWADDR_PRIx", "
>> + "0x%"HWADDR_PRIx") failed - ", bcontainer, iova,
>> + iotlb->addr_mask + 1);
>> + error_report_err(local_err);
>> }
>>
>> out_lock:
>> @@ -1259,12 +1264,19 @@ static int vfio_ram_discard_get_dirty_bitmap(MemoryRegionSection *section,
>> const ram_addr_t ram_addr = memory_region_get_ram_addr(section->mr) +
>> section->offset_within_region;
>> VFIORamDiscardListener *vrdl = opaque;
>> + Error *local_err = NULL;
>> + int ret;
>>
>> /*
>> * Sync the whole mapped region (spanning multiple individual mappings)
>> * in one go.
>> */
>> - return vfio_get_dirty_bitmap(vrdl->bcontainer, iova, size, ram_addr);
>> + ret = vfio_get_dirty_bitmap(vrdl->bcontainer, iova, size, ram_addr,
>> + &local_err);
>> + if (ret) {
>> + error_report_err(local_err);
>> + }
>> + return ret;
>> }
>>
>> static int
>> @@ -1296,7 +1308,7 @@ vfio_sync_ram_discard_listener_dirty_bitmap(VFIOContainerBase *bcontainer,
>> }
>>
>> static int vfio_sync_dirty_bitmap(VFIOContainerBase *bcontainer,
>> - MemoryRegionSection *section)
>> + MemoryRegionSection *section, Error **errp)
>> {
>> ram_addr_t ram_addr;
>>
>> @@ -1327,7 +1339,14 @@ static int vfio_sync_dirty_bitmap(VFIOContainerBase *bcontainer,
>> }
>> return 0;
>> } else if (memory_region_has_ram_discard_manager(section->mr)) {
>> - return vfio_sync_ram_discard_listener_dirty_bitmap(bcontainer, section);
>> + int ret;
>> +
>> + ret = vfio_sync_ram_discard_listener_dirty_bitmap(bcontainer, section);
>> + if (ret) {
>> + error_setg(errp,
>> + "Failed to sync dirty bitmap with RAM discard listener");
>> + return ret;
>> + }
>> }
>>
>> ram_addr = memory_region_get_ram_addr(section->mr) +
>> @@ -1335,7 +1354,7 @@ static int vfio_sync_dirty_bitmap(VFIOContainerBase *bcontainer,
>>
>> return vfio_get_dirty_bitmap(bcontainer,
>> REAL_HOST_PAGE_ALIGN(section->offset_within_address_space),
>> - int128_get64(section->size), ram_addr);
>> + int128_get64(section->size), ram_addr, errp);
>> }
>>
>> static void vfio_listener_log_sync(MemoryListener *listener,
>> @@ -1344,16 +1363,16 @@ static void vfio_listener_log_sync(MemoryListener *listener,
>> VFIOContainerBase *bcontainer = container_of(listener, VFIOContainerBase,
>> listener);
>> int ret;
>> + Error *local_err = NULL;
>>
>> if (vfio_listener_skipped_section(section)) {
>> return;
>> }
>>
>> if (vfio_devices_all_dirty_tracking(bcontainer)) {
>> - ret = vfio_sync_dirty_bitmap(bcontainer, section);
>> + ret = vfio_sync_dirty_bitmap(bcontainer, section, &local_err);
>> if (ret) {
>> - error_report("vfio: Failed to sync dirty bitmap, err: %d (%s)", ret,
>> - strerror(-ret));
>> + error_report_err(local_err);
>> vfio_set_migration_error(ret);
>> }
>> }
>> diff --git a/hw/vfio/container-base.c b/hw/vfio/container-base.c
>> index 7c0764121d24b02b6c4e66e368d7dff78a6d65aa..8db59881873c3b1edee81104b966af737e5fa6f6 100644
>> --- a/hw/vfio/container-base.c
>> +++ b/hw/vfio/container-base.c
>> @@ -65,10 +65,11 @@ int vfio_container_set_dirty_page_tracking(VFIOContainerBase *bcontainer,
>>
>> int vfio_container_query_dirty_bitmap(const VFIOContainerBase *bcontainer,
>> VFIOBitmap *vbmap,
>> - hwaddr iova, hwaddr size)
>> + hwaddr iova, hwaddr size, Error **errp)
>
> Nit: while at it, can we fix the line wrap here?
>
> Thanks.
>
>> {
>> g_assert(bcontainer->ops->query_dirty_bitmap);
>> - return bcontainer->ops->query_dirty_bitmap(bcontainer, vbmap, iova, size);
>> + return bcontainer->ops->query_dirty_bitmap(bcontainer, vbmap, iova, size,
>> + errp);
>> }
>>
>> void vfio_container_init(VFIOContainerBase *bcontainer, VFIOAddressSpace *space,
>> diff --git a/hw/vfio/container.c b/hw/vfio/container.c
>> index c35221fbe7dc5453050f97cd186fc958e24f28f7..e00db6546c652c61a352f8e4cd65b212ecfb65bc 100644
>> --- a/hw/vfio/container.c
>> +++ b/hw/vfio/container.c
>> @@ -130,6 +130,7 @@ static int vfio_legacy_dma_unmap(const VFIOContainerBase *bcontainer,
>> };
>> bool need_dirty_sync = false;
>> int ret;
>> + Error *local_err = NULL;
>>
>> if (iotlb && vfio_devices_all_running_and_mig_active(bcontainer)) {
>> if (!vfio_devices_all_device_dirty_tracking(bcontainer) &&
>> @@ -165,8 +166,9 @@ static int vfio_legacy_dma_unmap(const VFIOContainerBase *bcontainer,
>>
>> if (need_dirty_sync) {
>> ret = vfio_get_dirty_bitmap(bcontainer, iova, size,
>> - iotlb->translated_addr);
>> + iotlb->translated_addr, &local_err);
>> if (ret) {
>> + error_report_err(local_err);
>> return ret;
>> }
>> }
>> @@ -236,7 +238,8 @@ vfio_legacy_set_dirty_page_tracking(const VFIOContainerBase *bcontainer,
>>
>> static int vfio_legacy_query_dirty_bitmap(const VFIOContainerBase *bcontainer,
>> VFIOBitmap *vbmap,
>> - hwaddr iova, hwaddr size)
>> + hwaddr iova, hwaddr size,
>> + Error **errp)
>> {
>> const VFIOContainer *container = container_of(bcontainer, VFIOContainer,
>> bcontainer);
>> @@ -264,9 +267,10 @@ static int vfio_legacy_query_dirty_bitmap(const VFIOContainerBase *bcontainer,
>> ret = ioctl(container->fd, VFIO_IOMMU_DIRTY_PAGES, dbitmap);
>> if (ret) {
>> ret = -errno;
>> - error_report("Failed to get dirty bitmap for iova: 0x%"PRIx64
>> - " size: 0x%"PRIx64" err: %d", (uint64_t)range->iova,
>> - (uint64_t)range->size, errno);
>> + error_setg_errno(errp, errno,
>> + "Failed to get dirty bitmap for iova: 0x%"PRIx64
>> + " size: 0x%"PRIx64, (uint64_t)range->iova,
>> + (uint64_t)range->size);
>> }
>>
>> g_free(dbitmap);
>> --
>> 2.45.0
>>
>