From: Jason Gunthorpe <jgg@nvidia.com>
To: "Tian, Kevin" <kevin.tian@intel.com>
Cc: "Alex Williamson (alex.williamson@redhat.com)" <alex.williamson@redhat.com>,
 Joerg Roedel <joro@8bytes.org>, Jean-Philippe Brucker <jean-philippe@linaro.org>,
 David Gibson <david@gibson.dropbear.id.au>, Jason Wang <jasowang@redhat.com>,
 "parav@mellanox.com" <parav@mellanox.com>,
 "Enrico Weigelt, metux IT consult" <lkml@metux.net>,
 Paolo Bonzini <pbonzini@redhat.com>, Shenming Lu <lushenming@huawei.com>,
 Eric Auger <eric.auger@redhat.com>, Jonathan Corbet <corbet@lwn.net>,
 "Raj, Ashok" <ashok.raj@intel.com>, "Liu, Yi L" <yi.l.liu@intel.com>,
 "Wu, Hao" <hao.wu@intel.com>, "Jiang, Dave" <dave.jiang@intel.com>,
 Jacob Pan <jacob.jun.pan@linux.intel.com>, Kirti Wankhede <kwankhede@nvidia.com>,
 Robin Murphy <robin.murphy@arm.com>, "kvm@vger.kernel.org" <kvm@vger.kernel.org>,
 "iommu@lists.linux-foundation.org" <iommu@lists.linux-foundation.org>,
 David Woodhouse <dwmw2@infradead.org>, LKML <linux-kernel@vger.kernel.org>,
 Lu Baolu <baolu.lu@linux.intel.com>
Subject: Re: Plan for /dev/ioasid RFC v2
Date: Fri, 25 Jun 2021 11:36:16 -0300
Message-ID: <20210625143616.GT2371267@nvidia.com>
In-Reply-To: <BN9PR11MB5433B9C0577CF0BD8EFCC9BC8C069@BN9PR11MB5433.namprd11.prod.outlook.com>

On Fri, Jun 25, 2021 at 10:27:18AM +0000, Tian, Kevin wrote:

> - When receiving the binding call for the 1st device in a group, iommu_fd
>   calls iommu_group_set_block_dma(group, dev->driver) which does
>   several things:

The whole problem here is trying to match this new world, where we want
devices to be in charge of their own IOMMU configuration, with the old
world, where groups are in charge.

Inserting the group fd and then calling a device-centric
VFIO_GROUP_GET_DEVICE_FD_NEW doesn't resolve this conflict, and isn't
necessary. We can always get the group back from the device at any point
in the sequence to do a group-wide operation.
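To illustrate the "device-centric entry, group-wide operation" point, here is a minimal sketch in userspace C. The structures and helper names (`dev_iommu_group()`, `iommu_group_set_block_dma()` taking only a group) are hypothetical simplifications, not the real kernel API; the only point is that a back-pointer from device to group lets a device-centric call recover the group whenever a group-wide operation is needed:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical, simplified models of the objects under discussion:
 * each device records the iommu group it belongs to. */
struct iommu_group {
    int id;
    int dma_blocked;            /* group-wide DMA-blocked state */
};

struct device {
    const char *name;
    struct iommu_group *group;  /* back-pointer set at enumeration time */
};

/* Device-centric entry point: recover the group from the device... */
static struct iommu_group *dev_iommu_group(struct device *dev)
{
    return dev->group;
}

/* ...then perform the group-wide operation from that device-centric path. */
static void iommu_group_set_block_dma(struct iommu_group *group)
{
    group->dma_blocked = 1;
}
```

Nothing in this path needs a group fd: the caller holds only the device, and the group is reachable from it at every step.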
What I saw as the appeal of this sort of idea was to completely leave
all the difficult multi-device-group scenarios behind on the old
group-centric API, so we don't have to deal with them at all, or at
least not right away.

I'd see some progression where iommu_fd only works with 1:1 groups at
the start; other scenarios continue with the old API. Then maybe groups
where all devices use the same IOASID. Then 1:N groups, if the source
device is reliably identifiable - this requires iommu subsystem work to
attach domains to sub-group objects, and I'm not sure it is worthwhile.
But at least we can talk about each step with well-thought-out patches.

The only thing that needs to be done for the 1:1 step is to broadly
define how the other two cases will work, so we don't get into trouble,
and to set some way to exclude the problematic cases from ever reaching
iommu_fd in the first place. For instance, if we go ahead and create
/dev/vfio/device nodes, we could do this only if the group is 1:1;
otherwise the group cdev has to be used, along with its API.

> a) Check group viability. A group is viable only when all devices in
>    the group are in one of below states:
>
>    * driver-less
>    * bound to a driver which is same as dev->driver (vfio in this case)
>    * bound to an otherwise allowed driver (same list as in vfio)

This really shouldn't use hardwired driver checks. Attached drivers
should generically indicate to the iommu layer that they are safe for
iommu_fd usage by calling some function around probe().

Thus a group must contain only iommu_fd-safe drivers, or driver-less
devices, before any of it can be used. This is the more general
refactoring of what VFIO is doing.

> c) The iommu layer also verifies group viability on BUS_NOTIFY_
>    BOUND_DRIVER event. BUG_ON if viability is broken while block_dma
>    is set.
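The viability rule being argued for - a declared-safe flag instead of a hardwired driver name list - can be sketched like this. All names here (`iommufd_safe`, `group_is_viable()`, the simplified `device`/`device_driver` structs) are hypothetical illustrations, not the actual kernel interfaces:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical sketch: a group is viable for iommu_fd use only when
 * every member device is driver-less, bound to the same driver doing
 * the bind, or bound to a driver that declared itself iommu_fd-safe
 * around probe() - no hardwired name list. */
struct device_driver {
    const char *name;
    int iommufd_safe;   /* set generically by the driver at probe time */
};

struct device {
    struct device_driver *driver;   /* NULL if driver-less */
};

static int group_is_viable(struct device **devs, int ndev,
                           struct device_driver *binding_drv)
{
    for (int i = 0; i < ndev; i++) {
        struct device_driver *drv = devs[i]->driver;

        if (!drv)                   /* driver-less: OK */
            continue;
        if (drv == binding_drv)     /* same driver as the binder: OK */
            continue;
        if (drv->iommufd_safe)      /* declared safe at probe: OK */
            continue;
        return 0;                   /* any other driver breaks viability */
    }
    return 1;
}
```

The check itself is unchanged from the proposal; only the third condition moves from "name appears in vfio's allow-list" to "driver asserted its own safety", which is what makes it usable outside VFIO.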
And with this concept of iommu_fd safety being first-class, maybe we can
somehow eliminate this gross BUG_ON (and the hundreds of lines of code
used to create it) by denying probe to non-iommu-safe drivers.

> - Binding other devices in the group to iommu_fd just succeeds since
>   the group is already in block_dma.
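The "deny probe instead of BUG_ON" idea amounts to a gate like the following. This is a hypothetical sketch, assuming the simplified structs above and an invented `iommu_probe_allowed()` hook; the real bus probe path is of course more involved:

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

struct device_driver {
    int iommufd_safe;   /* driver declared itself iommu_fd-safe at probe */
};

struct iommu_group {
    int dma_blocked;    /* group currently claimed by an iommu_fd user */
};

/* Hypothetical probe-time gate: refuse to bind an unsafe driver into a
 * DMA-blocked group, so viability can never be broken after the fact
 * and no BUG_ON-style runtime verification is needed. */
static int iommu_probe_allowed(struct iommu_group *group,
                               struct device_driver *drv)
{
    if (group->dma_blocked && !drv->iommufd_safe)
        return -EBUSY;  /* deny probe up front */
    return 0;
}
```

The invariant is enforced at the only place it could be violated (driver bind), so the BUS_NOTIFY_BOUND_DRIVER re-verification machinery becomes unnecessary.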
I think the rest of this more or less describes the device-centric logic
for multi-device groups we've already talked about. I don't think it
benefits from having the group fd.

Jason