From: Lu Baolu <baolu.lu@linux.intel.com>
To: David Gibson <david@gibson.dropbear.id.au>
Cc: "kvm@vger.kernel.org" <kvm@vger.kernel.org>, Jason Wang <jasowang@redhat.com>,
	Kirti Wankhede <kwankhede@nvidia.com>,
	Jean-Philippe Brucker <jean-philippe@linaro.org>,
	"Jiang, Dave" <dave.jiang@intel.com>, "Raj, Ashok" <ashok.raj@intel.com>,
	Jonathan Corbet <corbet@lwn.net>, Jason Gunthorpe <jgg@nvidia.com>,
	"Tian, Kevin" <kevin.tian@intel.com>, "parav@mellanox.com" <parav@mellanox.com>,
	"Alex Williamson (alex.williamson@redhat.com)" <alex.williamson@redhat.com>,
	"Enrico Weigelt, metux IT consult" <lkml@metux.net>,
	David Woodhouse <dwmw2@infradead.org>, LKML <linux-kernel@vger.kernel.org>,
	Shenming Lu <lushenming@huawei.com>,
	"iommu@lists.linux-foundation.org" <iommu@lists.linux-foundation.org>,
	Paolo Bonzini <pbonzini@redhat.com>, Robin Murphy <robin.murphy@arm.com>
Subject: Re: Plan for /dev/ioasid RFC v2
Date: Fri, 18 Jun 2021 13:21:47 +0800
Message-ID: <b9c48526-8b8f-ff9e-4ece-4a39f476e3b7@linux.intel.com>
In-Reply-To: <YMrcLcTL+cUKd1a5@yekko>

Hi David,

On 6/17/21 1:22 PM, David Gibson wrote:
>> The iommu_group can guarantee the isolation among different physical
>> devices (represented by RIDs). But when it comes to sub-devices (ex.
>> mdev or vDPA devices represented by RID + SSID), we have to rely on
>> the device driver for isolation. The devices which are able to
>> generate sub-devices should either use their own on-device mechanisms
>> or use the platform features like Intel Scalable IOV to isolate the
>> sub-devices.
>
> This seems like a misunderstanding of groups. Groups are not tied to
> any PCI meaning. Groups are the smallest unit of isolation, no matter
> what is providing that isolation.
>
> If mdevs are isolated from each other by clever software, even though
> they're on the same PCI device they are in different groups from each
> other *by definition*.
> They are also in a different group from their parent device (however
> the mdevs only exist when the mdev driver is active, which implies
> that the parent device's group is owned by the kernel).

You are right. This is also my understanding of an "isolation group".
But, as I understand it, the iommu_group is only the isolation group
visible to the IOMMU. When we talk about sub-devices (sw-mdev or mdev
w/ pasid), only the device and its driver know the details of the
isolation, hence the iommu_group cannot be extended to cover them. The
device drivers should define their own isolation groups.

Otherwise, the device driver has to fake an iommu_group and add hacky
code to link the related IOMMU elements (iommu device, domain, group,
etc.) together. Actually, this is part of the problem that this
proposal tries to solve.

>
>> Under the above conditions, different sub-devices of the same RID
>> device could use different IOASIDs. This seems to mean that we can't
>> support a mixed mode where, for example, two RIDs share an
>> iommu_group and one (or both) of them have sub-devices.
>
> That doesn't necessarily follow. mdevs which can be successfully
> isolated by their mdev driver are in a different group from their
> parent device, and therefore need not be affected by whether the
> parent device shares a group with some other physical device. They
> *might* be, but that's up to the mdev driver to determine based on
> what it can safely isolate.

If we understand it as multiple levels of isolation, can we classify
the devices into the following categories?
1) Legacy devices
   - devices without device-level isolation
   - multiple devices could sit in a single iommu_group
   - only a single I/O address space could be bound to the IOMMU

2) Modern devices
   - devices capable of device-level isolation
   - able to have sub-devices
   - self-isolated, hence do not share an iommu_group with others
   - multiple I/O address spaces could be bound to the IOMMU

For 1), all devices in an iommu_group should be bound to a single
IOASID; the isolation is guaranteed by the iommu_group.

For 2), a single device could be bound to multiple IOASIDs, with each
sub-device corresponding to an IOASID. The isolation of each sub-device
is guaranteed by the device driver.

Best regards,
baolu
_______________________________________________
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu