kdevops.lists.linux.dev archive mirror
From: Jeff Layton <jlayton@kernel.org>
To: Luis Chamberlain <mcgrof@kernel.org>
Cc: kdevops@lists.linux.dev, Jeff Layton <jlayton@kernel.org>
Subject: [PATCH] vagrant: allow the aio= and cache= options to be set on any drive type
Date: Wed, 23 Aug 2023 12:20:35 -0400
Message-ID: <20230823-fixes-v1-1-92612c3e2027@kernel.org>

The aio= and cache= suboptions of QEMU's -drive option don't depend on
the interface that is presented to the guest.

Rename *_VIRTIO_AIO_* to *_AIO_*, and remove the dependency on virtio in
the Kconfig. Add the appropriate cache= and aio= options in the
Vagrantfile template.

While we're in here, let's add support for io_uring on an experimental
basis.
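For illustration, a minimal Ruby sketch of how the template now composes
the -drive value for any interface type (variable names here are
illustrative, not the template's exact ones):

```ruby
# Hypothetical sketch: aio= and cache= are properties of the -drive
# backend, so they apply whether the guest-facing interface is ide,
# virtio (via if=none + -device), or nvme.
extra_disk = "/var/lib/libvirt/images/extra0.qcow2"
aio_mode   = "native"  # or "threads", or the experimental "io_uring"
cache_mode = "none"    # or writethrough/writeback/directsync/unsafe

drive_arg = "file=#{extra_disk},aio=#{aio_mode},cache=#{cache_mode},if=ide"
puts drive_arg
```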

Signed-off-by: Jeff Layton <jlayton@kernel.org>
---
 playbooks/roles/gen_nodes/defaults/main.yml        |  4 +-
 playbooks/roles/gen_nodes/templates/Vagrantfile.j2 |  8 ++--
 scripts/gen-nodes.Makefile                         |  5 ++-
 vagrant/Kconfig                                    | 47 +++++++++++-----------
 4 files changed, 33 insertions(+), 31 deletions(-)

diff --git a/playbooks/roles/gen_nodes/defaults/main.yml b/playbooks/roles/gen_nodes/defaults/main.yml
index 0c5d36467038..c57effe7346f 100644
--- a/playbooks/roles/gen_nodes/defaults/main.yml
+++ b/playbooks/roles/gen_nodes/defaults/main.yml
@@ -55,8 +55,8 @@ libvirt_extra_drive_id_prefix: 'drv'
 libvirt_extra_storage_drive_nvme: True
 libvirt_extra_storage_drive_virtio: False
 libvirt_extra_storage_drive_ide: False
-libvirt_extra_storage_virtio_aio_mode: "native"
-libvirt_extra_storage_virtio_aio_cache_mode: "none"
+libvirt_extra_storage_aio_mode: "native"
+libvirt_extra_storage_aio_cache_mode: "none"
 libvirt_extra_storage_nvme_logical_block_size: 512
 libvirt_extra_storage_virtio_logical_block_size: 512
 libvirt_extra_storage_virtio_physical_block_size: 512
diff --git a/playbooks/roles/gen_nodes/templates/Vagrantfile.j2 b/playbooks/roles/gen_nodes/templates/Vagrantfile.j2
index 8210627c3717..7ed59ff744ae 100644
--- a/playbooks/roles/gen_nodes/templates/Vagrantfile.j2
+++ b/playbooks/roles/gen_nodes/templates/Vagrantfile.j2
@@ -391,14 +391,14 @@ Vagrant.configure("2") do |config|
 {% endif %}
 {% if libvirt_extra_storage_drive_ide %}
             libvirt.qemuargs :value => "-drive"
-            libvirt.qemuargs :value => "file=#{extra_disk},if=ide,serial=#{serial_id}"
+            libvirt.qemuargs :value => "file=#{extra_disk},aio={{ libvirt_extra_storage_aio_mode }},cache={{ libvirt_extra_storage_aio_cache_mode }},if=ide,serial=#{serial_id}"
 {% elif libvirt_extra_storage_drive_virtio %}
             virtio_pbs = "{{ libvirt_extra_storage_virtio_physical_block_size }}"
             virtio_lbs = "{{ libvirt_extra_storage_virtio_logical_block_size }}"
             libvirt.qemuargs :value => "-object"
             libvirt.qemuargs :value => "iothread,id=kdevops-virtio-iothread-#{port}"
             libvirt.qemuargs :value => "-drive"
-            libvirt.qemuargs :value => "file=#{extra_disk},if=none,aio={{ libvirt_extra_storage_virtio_aio_mode }},cache={{ libvirt_extra_storage_virtio_aio_cache_mode }},id=#{disk_id}"
+            libvirt.qemuargs :value => "file=#{extra_disk},if=none,aio={{ libvirt_extra_storage_aio_mode }},cache={{ libvirt_extra_storage_aio_cache_mode }},id=#{disk_id}"
             libvirt.qemuargs :value => "-device"
             libvirt.qemuargs :value => "virtio-blk-pci,scsi=off,drive=#{disk_id},id=virtio-#{disk_id},serial=#{serial_id},bus=#{bus_for_extra_drives},addr=#{pci_function},iothread=kdevops-virtio-iothread-#{port}#{extra_drive_largio_args},logical_block_size=#{virtio_lbs},physical_block_size=#{virtio_pbs}"
 {% elif libvirt_extra_storage_drive_nvme  %}
@@ -409,14 +409,14 @@ Vagrant.configure("2") do |config|
             extra_drive_interface = "none"
             if zoned
               libvirt.qemuargs :value => "-drive"
-              libvirt.qemuargs :value => "file=#{extra_disk},if=#{extra_drive_interface},id=#{disk_id}"
+              libvirt.qemuargs :value => "file=#{extra_disk},if=#{extra_drive_interface},aio={{ libvirt_extra_storage_aio_mode }},cache={{ libvirt_extra_storage_aio_cache_mode }},id=#{disk_id}"
               libvirt.qemuargs :value => "-device"
               libvirt.qemuargs :value => "nvme,id={{ extra_disk_driver }}#{port},serial=#{serial_id},bus=#{bus_for_extra_drives},addr=#{pci_function},zoned.zasl=#{zone_zasl}"
               libvirt.qemuargs :value => "-device"
               libvirt.qemuargs :value => "nvme-ns,drive=#{disk_id},bus={{ extra_disk_driver }}#{port},nsid=1,logical_block_size=#{logical_block_size},physical_block_size=#{physical_block_size},zoned=true,zoned.zone_size=#{zone_size},zoned.zone_capacity=#{zone_capacity},zoned.max_open=#{zone_max_open},zoned.max_active=#{zone_max_active}"
             else
               libvirt.qemuargs :value => "-drive"
-              libvirt.qemuargs :value => "file=#{extra_disk},if=#{extra_drive_interface},id=#{disk_id}"
+              libvirt.qemuargs :value => "file=#{extra_disk},if=#{extra_drive_interface},aio={{ libvirt_extra_storage_aio_mode }},cache={{ libvirt_extra_storage_aio_cache_mode }},id=#{disk_id}"
               libvirt.qemuargs :value => "-device"
               libvirt.qemuargs :value => "nvme,id={{ extra_disk_driver }}#{port},serial=#{serial_id},bus=#{bus_for_extra_drives},addr=#{pci_function}"
               libvirt.qemuargs :value => "-device"
diff --git a/scripts/gen-nodes.Makefile b/scripts/gen-nodes.Makefile
index 579a2dfe4d54..533b19dbf771 100644
--- a/scripts/gen-nodes.Makefile
+++ b/scripts/gen-nodes.Makefile
@@ -56,10 +56,11 @@ endif
 ifeq (y,$(CONFIG_LIBVIRT_EXTRA_STORAGE_DRIVE_VIRTIO))
 GEN_NODES_EXTRA_ARGS += libvirt_extra_storage_drive_nvme='False'
 GEN_NODES_EXTRA_ARGS += libvirt_extra_storage_drive_virtio='True'
-GEN_NODES_EXTRA_ARGS += libvirt_extra_storage_virtio_aio_mode='$(subst ",,$(CONFIG_LIBVIRT_VIRTIO_AIO_MODE))'
-GEN_NODES_EXTRA_ARGS += libvirt_extra_storage_virtio_aio_cache_mode='$(subst ",,$(CONFIG_LIBVIRT_VIRTIO_AIO_CACHE_MODE))'
 endif
 
+GEN_NODES_EXTRA_ARGS += libvirt_extra_storage_aio_mode='$(subst ",,$(CONFIG_LIBVIRT_AIO_MODE))'
+GEN_NODES_EXTRA_ARGS += libvirt_extra_storage_aio_cache_mode='$(subst ",,$(CONFIG_LIBVIRT_AIO_CACHE_MODE))'
+
 ifeq (y,$(CONFIG_LIBVIRT_EXTRA_STORAGE_DRIVE_NVME))
 ifeq (y,$(CONFIG_LIBVIRT_EXTRA_STORAGE_NVME_LOGICAL_BLOCK_SIZE_512))
 GEN_NODES_EXTRA_ARGS += libvirt_extra_storage_nvme_logical_block_size='512'
diff --git a/vagrant/Kconfig b/vagrant/Kconfig
index 34f4e50e63c5..7f13791fa27f 100644
--- a/vagrant/Kconfig
+++ b/vagrant/Kconfig
@@ -615,13 +615,11 @@ config LIBVIRT_EXTRA_STORAGE_VIRTIO_LOGICAL_BLOCK_SIZE_2M
 
 endchoice
 
-if LIBVIRT_EXTRA_STORAGE_DRIVE_VIRTIO
-
 choice
-	prompt "Libvirt virtio aio"
-	default LIBVIRT_VIRTIO_AIO_MODE_NATIVE
+	prompt "Libvirt aio mode"
+	default LIBVIRT_AIO_MODE_NATIVE
 
-config LIBVIRT_VIRTIO_AIO_MODE_NATIVE
+config LIBVIRT_AIO_MODE_NATIVE
 	bool "aio=native"
 	help
 	  Use the aio=native mode. For some older kernels it is known that
@@ -640,25 +638,30 @@ config LIBVIRT_VIRTIO_AIO_MODE_NATIVE
 
 	  https://review.opendev.org/c/openstack/nova-specs/+/232514/7/specs/mitaka/approved/libvirt-aio-mode.rst
 
-config LIBVIRT_VIRTIO_AIO_MODE_THREADS
+config LIBVIRT_AIO_MODE_THREADS
 	bool "aio=threads"
 	help
 	  Use the aio=threads mode. This might be more suitable for you if on
 	  older kernels such as in RHEL6 and using sparsefiles and ext4 or xfs
 	  on this host with cache=none.
 
+config LIBVIRT_AIO_MODE_IO_URING
+	bool "aio=io_uring"
+	help
+	  Use the aio=io_uring mode. This is currently experimental.
 endchoice
 
-config LIBVIRT_VIRTIO_AIO_MODE
+config LIBVIRT_AIO_MODE
 	string
-	default "native" if LIBVIRT_VIRTIO_AIO_MODE_NATIVE
-	default "threads" if LIBVIRT_VIRTIO_AIO_MODE_THREADS
+	default "native" if LIBVIRT_AIO_MODE_NATIVE
+	default "threads" if LIBVIRT_AIO_MODE_THREADS
+	default "io_uring" if LIBVIRT_AIO_MODE_IO_URING
 
 choice
 	prompt "Libvirt cache mode"
-	default LIBVIRT_VIRTIO_AIO_CACHE_MODE_NONE
+	default LIBVIRT_AIO_CACHE_MODE_NONE
 
-config LIBVIRT_VIRTIO_AIO_CACHE_MODE_NONE
+config LIBVIRT_AIO_CACHE_MODE_NONE
 	bool "cache=none"
 	help
 	  Use the cache=none. IO from the guest is not cached on the host but
@@ -670,7 +673,7 @@ config LIBVIRT_VIRTIO_AIO_CACHE_MODE_NONE
 	  option for guests with large IO requirements. This is generally the
 	  best option and is required for live migration.
 
-config LIBVIRT_VIRTIO_AIO_CACHE_MODE_WRITETHROUGH
+config LIBVIRT_AIO_CACHE_MODE_WRITETHROUGH
 	bool "cache=writethrough"
 	help
 	  cache=writethrough. IO from the guest is cached on the host but
@@ -682,7 +685,7 @@ config LIBVIRT_VIRTIO_AIO_CACHE_MODE_WRITETHROUGH
 	  Best used for small number of guests with lower IO reqs. This should
 	  be used on older guests which do not support writeback cache.
 
-config LIBVIRT_VIRTIO_AIO_CACHE_MODE_WRITEBACK
+config LIBVIRT_AIO_CACHE_MODE_WRITEBACK
 	bool "cache=writeback"
 	help
 	  cache=writeback. IO from the guest is cached on the host so it is
@@ -691,7 +694,7 @@ config LIBVIRT_VIRTIO_AIO_CACHE_MODE_WRITEBACK
 	  storage controller is informed of the writeback cache and therefore
 	  expected to send flush commands as needed to manage data integrity.
 
-config LIBVIRT_VIRTIO_AIO_CACHE_MODE_DIRECTSYNC
+config LIBVIRT_AIO_CACHE_MODE_DIRECTSYNC
 	bool "cache=directsync"
 	help
 	  cache=directsync. Writes are reported only when the data has been
@@ -699,7 +702,7 @@ config LIBVIRT_VIRTIO_AIO_CACHE_MODE_DIRECTSYNC
 	  bypassed. This mode is useful for guests which that do not send
 	  flushes when needed.
 
-config LIBVIRT_VIRTIO_AIO_CACHE_MODE_UNSAFE
+config LIBVIRT_AIO_CACHE_MODE_UNSAFE
 	bool "cache=unsafe"
 	help
 	  cache=unsafe. Similar to writeback except all of the flush commands
@@ -708,15 +711,13 @@ config LIBVIRT_VIRTIO_AIO_CACHE_MODE_UNSAFE
 
 endchoice
 
-config LIBVIRT_VIRTIO_AIO_CACHE_MODE
+config LIBVIRT_AIO_CACHE_MODE
 	string
-	default "none" if LIBVIRT_VIRTIO_AIO_CACHE_MODE_NONE
-	default "writethrough" if LIBVIRT_VIRTIO_AIO_CACHE_MODE_WRITETHROUGH
-	default "writeback" if LIBVIRT_VIRTIO_AIO_CACHE_MODE_WRITEBACK
-	default "directsync" if LIBVIRT_VIRTIO_AIO_CACHE_MODE_DIRECTSYNC
-	default "unsafe" if LIBVIRT_VIRTIO_AIO_CACHE_MODE_UNSAFE
-
-endif
+	default "none" if LIBVIRT_AIO_CACHE_MODE_NONE
+	default "writethrough" if LIBVIRT_AIO_CACHE_MODE_WRITETHROUGH
+	default "writeback" if LIBVIRT_AIO_CACHE_MODE_WRITEBACK
+	default "directsync" if LIBVIRT_AIO_CACHE_MODE_DIRECTSYNC
+	default "unsafe" if LIBVIRT_AIO_CACHE_MODE_UNSAFE
 
 choice
 	prompt "Libvirt storage pool path"

---
base-commit: ed8c04809df15cff347ec011d95a9bf5a6f34c14
change-id: 20230823-fixes-7d36e8b073b9

Best regards,
-- 
Jeff Layton <jlayton@kernel.org>


Thread overview: 2+ messages
2023-08-23 16:20 Jeff Layton [this message]
2023-08-23 17:56 ` [PATCH] vagrant: allow the aio= and cache= options to be set on any drive type Luis Chamberlain
