All the mail mirrored from lore.kernel.org
* [Qemu-devel] [PATCH v2 0/9] virtio: aio handler API
@ 2016-04-01 13:19 Paolo Bonzini
  2016-04-01 13:19 ` [Qemu-devel] [PATCH 1/9] virtio-dataplane: pass assign=true to virtio_queue_aio_set_host_notifier_handler Paolo Bonzini
                   ` (13 more replies)
  0 siblings, 14 replies; 29+ messages in thread
From: Paolo Bonzini @ 2016-04-01 13:19 UTC
  To: qemu-devel; +Cc: famz, tubo, mst, borntraeger, stefanha, cornelia.huck

This version fixes some commit messages, is based on qemu.git master
and adds Cornelia's Reviewed-by tags.  There are no code changes apart
from context.

Michael S. Tsirkin (2):
  virtio: add aio handler
  virtio-blk: use aio handler for data plane

Paolo Bonzini (7):
  virtio-dataplane: pass assign=true to
    virtio_queue_aio_set_host_notifier_handler
  virtio: make virtio_queue_notify_vq static
  virtio-blk: fix disabled mode
  virtio-scsi: fix disabled mode
  virtio-scsi: use aio handler for data plane
  virtio: merge virtio_queue_aio_set_host_notifier_handler with
    virtio_queue_set_aio
  virtio: remove starting/stopping checks

 hw/block/dataplane/virtio-blk.c | 35 +++++++++++----------
 hw/block/virtio-blk.c           | 29 ++++++++++-------
 hw/scsi/virtio-scsi-dataplane.c | 56 +++++++++++++++++++++++----------
 hw/scsi/virtio-scsi.c           | 69 +++++++++++++++++++++++++++--------------
 hw/virtio/virtio.c              | 37 ++++++++++++++++------
 include/hw/virtio/virtio-blk.h  |  3 ++
 include/hw/virtio/virtio-scsi.h |  9 ++----
 include/hw/virtio/virtio.h      |  4 +--
 8 files changed, 158 insertions(+), 84 deletions(-)

-- 
1.8.3.1

^ permalink raw reply	[flat|nested] 29+ messages in thread

* [Qemu-devel] [PATCH 1/9] virtio-dataplane: pass assign=true to virtio_queue_aio_set_host_notifier_handler
  2016-04-01 13:19 [Qemu-devel] [PATCH v2 0/9] virtio: aio handler API Paolo Bonzini
@ 2016-04-01 13:19 ` Paolo Bonzini
  2016-04-01 14:02   ` Cornelia Huck
  2016-04-05 10:38   ` Michael S. Tsirkin
  2016-04-01 13:19 ` [Qemu-devel] [PATCH 2/9] virtio: make virtio_queue_notify_vq static Paolo Bonzini
                   ` (12 subsequent siblings)
  13 siblings, 2 replies; 29+ messages in thread
From: Paolo Bonzini @ 2016-04-01 13:19 UTC
  To: qemu-devel; +Cc: famz, tubo, mst, borntraeger, stefanha, cornelia.huck

There is no need to run the handler one last time; the device is
being reset and it is okay to drop requests that are pending in
the virtqueue.  Even in the case of migration, the requests would
be processed when ioeventfd is re-enabled on the destination
side: virtio_queue_set_host_notifier_fd_handler will call
virtio_queue_host_notifier_read, which will start dataplane; the host
notifier is then connected to the I/O thread and event_notifier_set is
called to start processing it.

By omitting this call, we dodge a possible cause of races between the
dataplane thread on one side and the main/vCPU threads on the other.

The virtio_queue_aio_set_host_notifier_handler function is now
only ever called with assign=true, but for now this is left as is
because the function parameters will change soon anyway.
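The assign/set_handler convention referred to above can be sketched with a reduced model (this is a hypothetical stand-in for illustration only: the real API works on EventNotifier and AioContext, which are simplified here to plain function pointers and a pending flag):

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-in for an EventNotifier registered with an AioContext. */
typedef struct FakeNotifier {
    int pending;                    /* models event_notifier_test_and_clear() */
    void (*read_handler)(struct FakeNotifier *n);
} FakeNotifier;

static int vq_processed;            /* counts virtio_queue_notify_vq() calls */

static void notifier_read(FakeNotifier *n)
{
    if (n->pending) {               /* event_notifier_test_and_clear() */
        n->pending = 0;
        vq_processed++;             /* would run the virtqueue handler */
    }
}

/* Sketch of virtio_queue_aio_set_host_notifier_handler's conventions. */
static void set_host_notifier_handler(FakeNotifier *n, int assign, int set_handler)
{
    if (assign && set_handler) {
        n->read_handler = notifier_read;   /* attach to the AioContext */
    } else {
        n->read_handler = NULL;            /* detach */
    }
    if (!assign) {
        /* Run the handler one final time so a pending kick is not lost. */
        notifier_read(n);
    }
}
```

With assign=true, set_handler=false (the combination this patch switches to), the handler is detached but any pending event is *not* drained, which is exactly the "drop requests pending in the virtqueue" behavior the commit message argues is safe across reset and migration.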

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 hw/block/dataplane/virtio-blk.c | 2 +-
 hw/scsi/virtio-scsi-dataplane.c | 6 +++---
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
index e666dd4..fddd3ab 100644
--- a/hw/block/dataplane/virtio-blk.c
+++ b/hw/block/dataplane/virtio-blk.c
@@ -262,7 +262,7 @@ void virtio_blk_data_plane_stop(VirtIOBlockDataPlane *s)
     aio_context_acquire(s->ctx);
 
     /* Stop notifications for new requests from guest */
-    virtio_queue_aio_set_host_notifier_handler(s->vq, s->ctx, false, false);
+    virtio_queue_aio_set_host_notifier_handler(s->vq, s->ctx, true, false);
 
     /* Drain and switch bs back to the QEMU main loop */
     blk_set_aio_context(s->conf->conf.blk, qemu_get_aio_context());
diff --git a/hw/scsi/virtio-scsi-dataplane.c b/hw/scsi/virtio-scsi-dataplane.c
index b44ac5d..21d5bfd 100644
--- a/hw/scsi/virtio-scsi-dataplane.c
+++ b/hw/scsi/virtio-scsi-dataplane.c
@@ -70,10 +70,10 @@ static void virtio_scsi_clear_aio(VirtIOSCSI *s)
     VirtIOSCSICommon *vs = VIRTIO_SCSI_COMMON(s);
     int i;
 
-    virtio_queue_aio_set_host_notifier_handler(vs->ctrl_vq, s->ctx, false, false);
-    virtio_queue_aio_set_host_notifier_handler(vs->event_vq, s->ctx, false, false);
+    virtio_queue_aio_set_host_notifier_handler(vs->ctrl_vq, s->ctx, true, false);
+    virtio_queue_aio_set_host_notifier_handler(vs->event_vq, s->ctx, true, false);
     for (i = 0; i < vs->conf.num_queues; i++) {
-        virtio_queue_aio_set_host_notifier_handler(vs->cmd_vqs[i], s->ctx, false, false);
+        virtio_queue_aio_set_host_notifier_handler(vs->cmd_vqs[i], s->ctx, true, false);
     }
 }
 
-- 
1.8.3.1


* [Qemu-devel] [PATCH 2/9] virtio: make virtio_queue_notify_vq static
  2016-04-01 13:19 [Qemu-devel] [PATCH v2 0/9] virtio: aio handler API Paolo Bonzini
  2016-04-01 13:19 ` [Qemu-devel] [PATCH 1/9] virtio-dataplane: pass assign=true to virtio_queue_aio_set_host_notifier_handler Paolo Bonzini
@ 2016-04-01 13:19 ` Paolo Bonzini
  2016-04-01 13:19 ` [Qemu-devel] [PATCH 3/9] virtio-blk: fix disabled mode Paolo Bonzini
                   ` (11 subsequent siblings)
  13 siblings, 0 replies; 29+ messages in thread
From: Paolo Bonzini @ 2016-04-01 13:19 UTC
  To: qemu-devel; +Cc: famz, tubo, mst, borntraeger, stefanha, cornelia.huck

Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 hw/virtio/virtio.c         | 2 +-
 include/hw/virtio/virtio.h | 1 -
 2 files changed, 1 insertion(+), 2 deletions(-)

diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index 14d5d91..264d4f6 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -1088,7 +1088,7 @@ void virtio_queue_set_align(VirtIODevice *vdev, int n, int align)
     virtio_queue_update_rings(vdev, n);
 }
 
-void virtio_queue_notify_vq(VirtQueue *vq)
+static void virtio_queue_notify_vq(VirtQueue *vq)
 {
     if (vq->vring.desc && vq->handle_output) {
         VirtIODevice *vdev = vq->vdev;
diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h
index 2b5b248..5afb51c 100644
--- a/include/hw/virtio/virtio.h
+++ b/include/hw/virtio/virtio.h
@@ -252,7 +252,6 @@ void virtio_queue_set_host_notifier_fd_handler(VirtQueue *vq, bool assign,
                                                bool set_handler);
 void virtio_queue_aio_set_host_notifier_handler(VirtQueue *vq, AioContext *ctx,
                                                 bool assign, bool set_handler);
-void virtio_queue_notify_vq(VirtQueue *vq);
 void virtio_irq(VirtQueue *vq);
 VirtQueue *virtio_vector_first_queue(VirtIODevice *vdev, uint16_t vector);
 VirtQueue *virtio_vector_next_queue(VirtQueue *vq);
-- 
1.8.3.1


* [Qemu-devel] [PATCH 3/9] virtio-blk: fix disabled mode
  2016-04-01 13:19 [Qemu-devel] [PATCH v2 0/9] virtio: aio handler API Paolo Bonzini
  2016-04-01 13:19 ` [Qemu-devel] [PATCH 1/9] virtio-dataplane: pass assign=true to virtio_queue_aio_set_host_notifier_handler Paolo Bonzini
  2016-04-01 13:19 ` [Qemu-devel] [PATCH 2/9] virtio: make virtio_queue_notify_vq static Paolo Bonzini
@ 2016-04-01 13:19 ` Paolo Bonzini
  2016-04-01 13:19 ` [Qemu-devel] [PATCH 4/9] virtio-scsi: " Paolo Bonzini
                   ` (10 subsequent siblings)
  13 siblings, 0 replies; 29+ messages in thread
From: Paolo Bonzini @ 2016-04-01 13:19 UTC
  To: qemu-devel; +Cc: famz, tubo, mst, borntraeger, stefanha, cornelia.huck

We must not call virtio_blk_data_plane_notify if dataplane is
disabled: we would hit a segmentation fault in notify_guest_bh,
as s->guest_notifier has not been set up and is NULL.
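The control flow being fixed can be modeled in a few lines (a hypothetical reduction for illustration; the field names mirror QEMU's but the crash is modeled as a NULL check rather than a real fault):

```c
#include <assert.h>
#include <stddef.h>

/* Reduced model of the request-completion path fixed by this patch. */
typedef struct {
    int dataplane_started;
    int dataplane_disabled;
    int *guest_notifier;     /* stays NULL when dataplane setup failed */
} FakeVirtIOBlock;

static int plain_notify;     /* counts virtio_notify() fallbacks */

/* notify_guest_bh() would dereference guest_notifier; model the crash. */
static int dataplane_notify(FakeVirtIOBlock *s)
{
    if (!s->guest_notifier) {
        return -1;           /* in real QEMU: segmentation fault */
    }
    return 0;
}

static int complete_request(FakeVirtIOBlock *s)
{
    /* The fix: take the dataplane path only when dataplane actually
     * started *and* was not disabled by a failed setup. */
    if (s->dataplane_started && !s->dataplane_disabled) {
        return dataplane_notify(s);
    }
    plain_notify++;
    return 0;
}
```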

Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 hw/block/dataplane/virtio-blk.c | 7 +++----
 hw/block/virtio-blk.c           | 2 +-
 include/hw/virtio/virtio-blk.h  | 1 +
 3 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
index fddd3ab..ed9d0ce 100644
--- a/hw/block/dataplane/virtio-blk.c
+++ b/hw/block/dataplane/virtio-blk.c
@@ -29,7 +29,6 @@
 struct VirtIOBlockDataPlane {
     bool starting;
     bool stopping;
-    bool disabled;
 
     VirtIOBlkConf *conf;
 
@@ -234,7 +233,7 @@ void virtio_blk_data_plane_start(VirtIOBlockDataPlane *s)
   fail_host_notifier:
     k->set_guest_notifiers(qbus->parent, 1, false);
   fail_guest_notifiers:
-    s->disabled = true;
+    vblk->dataplane_disabled = true;
     s->starting = false;
     vblk->dataplane_started = true;
 }
@@ -251,8 +250,8 @@ void virtio_blk_data_plane_stop(VirtIOBlockDataPlane *s)
     }
 
     /* Better luck next time. */
-    if (s->disabled) {
-        s->disabled = false;
+    if (vblk->dataplane_disabled) {
+        vblk->dataplane_disabled = false;
         vblk->dataplane_started = false;
         return;
     }
diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index 870d345..151fe78 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/virtio-blk.c
@@ -54,7 +54,7 @@ static void virtio_blk_req_complete(VirtIOBlockReq *req, unsigned char status)
 
     stb_p(&req->in->status, status);
     virtqueue_push(s->vq, &req->elem, req->in_len);
-    if (s->dataplane) {
+    if (s->dataplane_started && !s->dataplane_disabled) {
         virtio_blk_data_plane_notify(s->dataplane);
     } else {
         virtio_notify(vdev, s->vq);
diff --git a/include/hw/virtio/virtio-blk.h b/include/hw/virtio/virtio-blk.h
index ae84d92..59ae1e4 100644
--- a/include/hw/virtio/virtio-blk.h
+++ b/include/hw/virtio/virtio-blk.h
@@ -53,6 +53,7 @@ typedef struct VirtIOBlock {
     unsigned short sector_mask;
     bool original_wce;
     VMChangeStateEntry *change;
+    bool dataplane_disabled;
     bool dataplane_started;
     struct VirtIOBlockDataPlane *dataplane;
 } VirtIOBlock;
-- 
1.8.3.1


* [Qemu-devel] [PATCH 4/9] virtio-scsi: fix disabled mode
  2016-04-01 13:19 [Qemu-devel] [PATCH v2 0/9] virtio: aio handler API Paolo Bonzini
                   ` (2 preceding siblings ...)
  2016-04-01 13:19 ` [Qemu-devel] [PATCH 3/9] virtio-blk: fix disabled mode Paolo Bonzini
@ 2016-04-01 13:19 ` Paolo Bonzini
  2016-04-01 13:19 ` [Qemu-devel] [PATCH 5/9] virtio: add aio handler Paolo Bonzini
                   ` (9 subsequent siblings)
  13 siblings, 0 replies; 29+ messages in thread
From: Paolo Bonzini @ 2016-04-01 13:19 UTC
  To: qemu-devel; +Cc: famz, tubo, mst, borntraeger, stefanha, cornelia.huck

Add two missing checks for s->dataplane_fenced.  In one case, QEMU
would skip injecting an IRQ because it wrote to an uninitialized
EventNotifier's file descriptor.

In the second case, the dataplane_disabled field was used by mistake;
in fact, after fixing this occurrence, it is completely unused.

Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 hw/scsi/virtio-scsi.c           | 4 ++--
 include/hw/virtio/virtio-scsi.h | 1 -
 2 files changed, 2 insertions(+), 3 deletions(-)

diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c
index ade4972..38f1e2c 100644
--- a/hw/scsi/virtio-scsi.c
+++ b/hw/scsi/virtio-scsi.c
@@ -68,7 +68,7 @@ static void virtio_scsi_complete_req(VirtIOSCSIReq *req)
 
     qemu_iovec_from_buf(&req->resp_iov, 0, &req->resp, req->resp_size);
     virtqueue_push(vq, &req->elem, req->qsgl.size + req->resp_iov.size);
-    if (s->dataplane_started) {
+    if (s->dataplane_started && !s->dataplane_fenced) {
         virtio_scsi_dataplane_notify(vdev, req);
     } else {
         virtio_notify(vdev, vq);
@@ -773,7 +773,7 @@ static void virtio_scsi_hotplug(HotplugHandler *hotplug_dev, DeviceState *dev,
     VirtIOSCSI *s = VIRTIO_SCSI(vdev);
     SCSIDevice *sd = SCSI_DEVICE(dev);
 
-    if (s->ctx && !s->dataplane_disabled) {
+    if (s->ctx && !s->dataplane_fenced) {
         VirtIOSCSIBlkChangeNotifier *insert_notifier, *remove_notifier;
 
         if (blk_op_is_blocked(sd->conf.blk, BLOCK_OP_TYPE_DATAPLANE, errp)) {
diff --git a/include/hw/virtio/virtio-scsi.h b/include/hw/virtio/virtio-scsi.h
index 209eaa4..eef4e95 100644
--- a/include/hw/virtio/virtio-scsi.h
+++ b/include/hw/virtio/virtio-scsi.h
@@ -91,7 +91,6 @@ typedef struct VirtIOSCSI {
     bool dataplane_started;
     bool dataplane_starting;
     bool dataplane_stopping;
-    bool dataplane_disabled;
     bool dataplane_fenced;
     Error *blocker;
     uint32_t host_features;
-- 
1.8.3.1


* [Qemu-devel] [PATCH 5/9] virtio: add aio handler
  2016-04-01 13:19 [Qemu-devel] [PATCH v2 0/9] virtio: aio handler API Paolo Bonzini
                   ` (3 preceding siblings ...)
  2016-04-01 13:19 ` [Qemu-devel] [PATCH 4/9] virtio-scsi: " Paolo Bonzini
@ 2016-04-01 13:19 ` Paolo Bonzini
  2016-04-01 14:04   ` Cornelia Huck
  2016-04-01 13:19 ` [Qemu-devel] [PATCH 6/9] virtio-blk: use aio handler for data plane Paolo Bonzini
                   ` (8 subsequent siblings)
  13 siblings, 1 reply; 29+ messages in thread
From: Paolo Bonzini @ 2016-04-01 13:19 UTC
  To: qemu-devel; +Cc: famz, tubo, mst, borntraeger, stefanha, cornelia.huck

From: "Michael S. Tsirkin" <mst@redhat.com>

In addition to handling I/O in the vcpu thread and in the io thread,
blk dataplane introduces yet another mode: handling it by aio.

This reuses the same handler as the previous modes, which triggers
races because those handlers were not designed to be reentrant.

As a temporary fix, add a separate handler just for aio; this will make
it possible to disable the regular handlers when dataplane is active.
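The shape of the change is easy to see in a reduced model (hypothetical, for illustration: the real VirtQueue carries vring state, notifiers and a VirtIODevice back-pointer, omitted here):

```c
#include <assert.h>
#include <stddef.h>

/* Reduced VirtQueue showing the two handler slots this patch creates. */
typedef struct FakeVQ FakeVQ;
struct FakeVQ {
    void (*handle_output)(FakeVQ *);      /* vcpu/main-loop path */
    void (*handle_aio_output)(FakeVQ *);  /* iothread (aio) path */
    int main_calls, aio_calls;
};

static void main_handler(FakeVQ *vq) { vq->main_calls++; }
static void aio_handler(FakeVQ *vq)  { vq->aio_calls++; }

/* Analogues of virtio_queue_notify_vq()/virtio_queue_notify_aio_vq(). */
static void notify_vq(FakeVQ *vq)
{
    if (vq->handle_output) {
        vq->handle_output(vq);
    }
}

static void notify_aio_vq(FakeVQ *vq)
{
    if (vq->handle_aio_output) {
        vq->handle_aio_output(vq);
    }
}

/* virtio_set_queue_aio() analogue: a regular handler must exist first. */
static void set_queue_aio(FakeVQ *vq, void (*fn)(FakeVQ *))
{
    assert(vq->handle_output);
    vq->handle_aio_output = fn;
}
```

Because the two paths dispatch through distinct pointers, a device can later clear one without touching the other, which is what lets the follow-up patches disable the regular handler while dataplane owns the queue.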

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 hw/virtio/virtio.c         | 36 ++++++++++++++++++++++++++++++++----
 include/hw/virtio/virtio.h |  3 +++
 2 files changed, 35 insertions(+), 4 deletions(-)

diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index 264d4f6..eb04ac0 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -96,6 +96,7 @@ struct VirtQueue
 
     uint16_t vector;
     void (*handle_output)(VirtIODevice *vdev, VirtQueue *vq);
+    void (*handle_aio_output)(VirtIODevice *vdev, VirtQueue *vq);
     VirtIODevice *vdev;
     EventNotifier guest_notifier;
     EventNotifier host_notifier;
@@ -1088,6 +1089,16 @@ void virtio_queue_set_align(VirtIODevice *vdev, int n, int align)
     virtio_queue_update_rings(vdev, n);
 }
 
+static void virtio_queue_notify_aio_vq(VirtQueue *vq)
+{
+    if (vq->vring.desc && vq->handle_aio_output) {
+        VirtIODevice *vdev = vq->vdev;
+
+        trace_virtio_queue_notify(vdev, vq - vdev->vq, vq);
+        vq->handle_aio_output(vdev, vq);
+    }
+}
+
 static void virtio_queue_notify_vq(VirtQueue *vq)
 {
     if (vq->vring.desc && vq->handle_output) {
@@ -1143,10 +1154,19 @@ VirtQueue *virtio_add_queue(VirtIODevice *vdev, int queue_size,
     vdev->vq[i].vring.num_default = queue_size;
     vdev->vq[i].vring.align = VIRTIO_PCI_VRING_ALIGN;
     vdev->vq[i].handle_output = handle_output;
+    vdev->vq[i].handle_aio_output = NULL;
 
     return &vdev->vq[i];
 }
 
+void virtio_set_queue_aio(VirtQueue *vq,
+                          void (*handle_output)(VirtIODevice *, VirtQueue *))
+{
+    assert(vq->handle_output);
+
+    vq->handle_aio_output = handle_output;
+}
+
 void virtio_del_queue(VirtIODevice *vdev, int n)
 {
     if (n < 0 || n >= VIRTIO_QUEUE_MAX) {
@@ -1780,11 +1800,11 @@ EventNotifier *virtio_queue_get_guest_notifier(VirtQueue *vq)
     return &vq->guest_notifier;
 }
 
-static void virtio_queue_host_notifier_read(EventNotifier *n)
+static void virtio_queue_host_notifier_aio_read(EventNotifier *n)
 {
     VirtQueue *vq = container_of(n, VirtQueue, host_notifier);
     if (event_notifier_test_and_clear(n)) {
-        virtio_queue_notify_vq(vq);
+        virtio_queue_notify_aio_vq(vq);
     }
 }
 
@@ -1793,14 +1813,22 @@ void virtio_queue_aio_set_host_notifier_handler(VirtQueue *vq, AioContext *ctx,
 {
     if (assign && set_handler) {
         aio_set_event_notifier(ctx, &vq->host_notifier, true,
-                               virtio_queue_host_notifier_read);
+                               virtio_queue_host_notifier_aio_read);
     } else {
         aio_set_event_notifier(ctx, &vq->host_notifier, true, NULL);
     }
     if (!assign) {
         /* Test and clear notifier before after disabling event,
          * in case poll callback didn't have time to run. */
-        virtio_queue_host_notifier_read(&vq->host_notifier);
+        virtio_queue_host_notifier_aio_read(&vq->host_notifier);
+    }
+}
+
+static void virtio_queue_host_notifier_read(EventNotifier *n)
+{
+    VirtQueue *vq = container_of(n, VirtQueue, host_notifier);
+    if (event_notifier_test_and_clear(n)) {
+        virtio_queue_notify_vq(vq);
     }
 }
 
diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h
index 5afb51c..fa3f93b 100644
--- a/include/hw/virtio/virtio.h
+++ b/include/hw/virtio/virtio.h
@@ -142,6 +142,9 @@ VirtQueue *virtio_add_queue(VirtIODevice *vdev, int queue_size,
                             void (*handle_output)(VirtIODevice *,
                                                   VirtQueue *));
 
+void virtio_set_queue_aio(VirtQueue *vq,
+                          void (*handle_output)(VirtIODevice *, VirtQueue *));
+
 void virtio_del_queue(VirtIODevice *vdev, int n);
 
 void *virtqueue_alloc_element(size_t sz, unsigned out_num, unsigned in_num);
-- 
1.8.3.1


* [Qemu-devel] [PATCH 6/9] virtio-blk: use aio handler for data plane
  2016-04-01 13:19 [Qemu-devel] [PATCH v2 0/9] virtio: aio handler API Paolo Bonzini
                   ` (4 preceding siblings ...)
  2016-04-01 13:19 ` [Qemu-devel] [PATCH 5/9] virtio: add aio handler Paolo Bonzini
@ 2016-04-01 13:19 ` Paolo Bonzini
  2016-04-01 14:05   ` Cornelia Huck
  2016-04-01 13:19 ` [Qemu-devel] [PATCH 7/9] virtio-scsi: " Paolo Bonzini
                   ` (7 subsequent siblings)
  13 siblings, 1 reply; 29+ messages in thread
From: Paolo Bonzini @ 2016-04-01 13:19 UTC
  To: qemu-devel; +Cc: famz, tubo, mst, borntraeger, stefanha, cornelia.huck

From: "Michael S. Tsirkin" <mst@redhat.com>

In addition to handling I/O in the vcpu thread and in the io thread,
dataplane introduces yet another mode: handling it by aio.

This reuses the same handler as the previous modes, which triggers
races because those handlers were not designed to be reentrant.

Use a separate handler just for aio, and disable the regular handlers
when dataplane is active.
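The resulting control flow in the regular handler (start dataplane on the first kick, fall through only when dataplane could not be enabled) can be sketched as follows; this is a hypothetical model, not the actual QEMU code:

```c
#include <assert.h>

/* Model of the flow this patch gives virtio_blk_handle_output(). */
typedef struct {
    int have_dataplane;       /* s->dataplane != NULL */
    int dataplane_disabled;   /* setup failed: fall back to plain mode */
    int vq_handled;           /* counts virtio_blk_handle_vq() calls */
    int dp_started;           /* counts virtio_blk_data_plane_start() calls */
} FakeBlk;

static void data_plane_start(FakeBlk *s) { s->dp_started++; }

static void handle_output(FakeBlk *s)
{
    if (s->have_dataplane) {
        /* Guest kicked before DRIVER_OK: start dataplane here. */
        data_plane_start(s);
        if (!s->dataplane_disabled) {
            return;           /* the aio handler will process the kick */
        }
    }
    s->vq_handled++;          /* fallback: process in this thread */
}
```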

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 hw/block/dataplane/virtio-blk.c | 13 +++++++++++++
 hw/block/virtio-blk.c           | 27 +++++++++++++++++----------
 include/hw/virtio/virtio-blk.h  |  2 ++
 3 files changed, 32 insertions(+), 10 deletions(-)

diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
index ed9d0ce..fd06726 100644
--- a/hw/block/dataplane/virtio-blk.c
+++ b/hw/block/dataplane/virtio-blk.c
@@ -184,6 +184,17 @@ void virtio_blk_data_plane_destroy(VirtIOBlockDataPlane *s)
     g_free(s);
 }
 
+static void virtio_blk_data_plane_handle_output(VirtIODevice *vdev,
+                                                VirtQueue *vq)
+{
+    VirtIOBlock *s = (VirtIOBlock *)vdev;
+
+    assert(s->dataplane);
+    assert(s->dataplane_started);
+
+    virtio_blk_handle_vq(s, vq);
+}
+
 /* Context: QEMU global mutex held */
 void virtio_blk_data_plane_start(VirtIOBlockDataPlane *s)
 {
@@ -226,6 +237,7 @@ void virtio_blk_data_plane_start(VirtIOBlockDataPlane *s)
 
     /* Get this show started by hooking up our callbacks */
     aio_context_acquire(s->ctx);
+    virtio_set_queue_aio(s->vq, virtio_blk_data_plane_handle_output);
     virtio_queue_aio_set_host_notifier_handler(s->vq, s->ctx, true, true);
     aio_context_release(s->ctx);
     return;
@@ -262,6 +274,7 @@ void virtio_blk_data_plane_stop(VirtIOBlockDataPlane *s)
 
     /* Stop notifications for new requests from guest */
     virtio_queue_aio_set_host_notifier_handler(s->vq, s->ctx, true, false);
+    virtio_set_queue_aio(s->vq, NULL);
 
     /* Drain and switch bs back to the QEMU main loop */
     blk_set_aio_context(s->conf->conf.blk, qemu_get_aio_context());
diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index 151fe78..3f88f8c 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/virtio-blk.c
@@ -578,20 +578,11 @@ void virtio_blk_handle_request(VirtIOBlockReq *req, MultiReqBuffer *mrb)
     }
 }
 
-static void virtio_blk_handle_output(VirtIODevice *vdev, VirtQueue *vq)
+void virtio_blk_handle_vq(VirtIOBlock *s, VirtQueue *vq)
 {
-    VirtIOBlock *s = VIRTIO_BLK(vdev);
     VirtIOBlockReq *req;
     MultiReqBuffer mrb = {};
 
-    /* Some guests kick before setting VIRTIO_CONFIG_S_DRIVER_OK so start
-     * dataplane here instead of waiting for .set_status().
-     */
-    if (s->dataplane && !s->dataplane_started) {
-        virtio_blk_data_plane_start(s->dataplane);
-        return;
-    }
-
     blk_io_plug(s->blk);
 
     while ((req = virtio_blk_get_request(s))) {
@@ -605,6 +596,22 @@ static void virtio_blk_handle_output(VirtIODevice *vdev, VirtQueue *vq)
     blk_io_unplug(s->blk);
 }
 
+static void virtio_blk_handle_output(VirtIODevice *vdev, VirtQueue *vq)
+{
+    VirtIOBlock *s = (VirtIOBlock *)vdev;
+
+    if (s->dataplane) {
+        /* Some guests kick before setting VIRTIO_CONFIG_S_DRIVER_OK so start
+         * dataplane here instead of waiting for .set_status().
+         */
+        virtio_blk_data_plane_start(s->dataplane);
+        if (!s->dataplane_disabled) {
+            return;
+        }
+    }
+    virtio_blk_handle_vq(s, vq);
+}
+
 static void virtio_blk_dma_restart_bh(void *opaque)
 {
     VirtIOBlock *s = opaque;
diff --git a/include/hw/virtio/virtio-blk.h b/include/hw/virtio/virtio-blk.h
index 59ae1e4..8f2b056 100644
--- a/include/hw/virtio/virtio-blk.h
+++ b/include/hw/virtio/virtio-blk.h
@@ -86,4 +86,6 @@ void virtio_blk_handle_request(VirtIOBlockReq *req, MultiReqBuffer *mrb);
 
 void virtio_blk_submit_multireq(BlockBackend *blk, MultiReqBuffer *mrb);
 
+void virtio_blk_handle_vq(VirtIOBlock *s, VirtQueue *vq);
+
 #endif
-- 
1.8.3.1


* [Qemu-devel] [PATCH 7/9] virtio-scsi: use aio handler for data plane
  2016-04-01 13:19 [Qemu-devel] [PATCH v2 0/9] virtio: aio handler API Paolo Bonzini
                   ` (5 preceding siblings ...)
  2016-04-01 13:19 ` [Qemu-devel] [PATCH 6/9] virtio-blk: use aio handler for data plane Paolo Bonzini
@ 2016-04-01 13:19 ` Paolo Bonzini
  2016-04-05  9:06   ` Fam Zheng
  2016-04-01 13:19 ` [Qemu-devel] [PATCH 8/9] virtio: merge virtio_queue_aio_set_host_notifier_handler with virtio_queue_set_aio Paolo Bonzini
                   ` (6 subsequent siblings)
  13 siblings, 1 reply; 29+ messages in thread
From: Paolo Bonzini @ 2016-04-01 13:19 UTC
  To: qemu-devel; +Cc: famz, tubo, mst, borntraeger, stefanha, cornelia.huck

In addition to handling I/O in the vcpu thread and in the io thread,
dataplane introduces yet another mode: handling it by aio.

This reuses the same handler as the previous modes, which triggers
races because those handlers were not designed to be reentrant.

Use a separate handler just for aio, and disable the regular handlers
when dataplane is active.

Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 hw/scsi/virtio-scsi-dataplane.c | 43 ++++++++++++++++++++++++---
 hw/scsi/virtio-scsi.c           | 65 ++++++++++++++++++++++++++++-------------
 include/hw/virtio/virtio-scsi.h |  6 ++--
 3 files changed, 86 insertions(+), 28 deletions(-)

diff --git a/hw/scsi/virtio-scsi-dataplane.c b/hw/scsi/virtio-scsi-dataplane.c
index 21d5bfd..a497a2c 100644
--- a/hw/scsi/virtio-scsi-dataplane.c
+++ b/hw/scsi/virtio-scsi-dataplane.c
@@ -38,7 +38,35 @@ void virtio_scsi_set_iothread(VirtIOSCSI *s, IOThread *iothread)
     }
 }
 
-static int virtio_scsi_vring_init(VirtIOSCSI *s, VirtQueue *vq, int n)
+static void virtio_scsi_data_plane_handle_cmd(VirtIODevice *vdev,
+                                              VirtQueue *vq)
+{
+    VirtIOSCSI *s = (VirtIOSCSI *)vdev;
+
+    assert(s->ctx && s->dataplane_started);
+    virtio_scsi_handle_cmd_vq(s, vq);
+}
+
+static void virtio_scsi_data_plane_handle_ctrl(VirtIODevice *vdev,
+                                               VirtQueue *vq)
+{
+    VirtIOSCSI *s = VIRTIO_SCSI(vdev);
+
+    assert(s->ctx && s->dataplane_started);
+    virtio_scsi_handle_ctrl_vq(s, vq);
+}
+
+static void virtio_scsi_data_plane_handle_event(VirtIODevice *vdev,
+                                                VirtQueue *vq)
+{
+    VirtIOSCSI *s = VIRTIO_SCSI(vdev);
+
+    assert(s->ctx && s->dataplane_started);
+    virtio_scsi_handle_event_vq(s, vq);
+}
+
+static int virtio_scsi_vring_init(VirtIOSCSI *s, VirtQueue *vq, int n,
+				  void (*fn)(VirtIODevice *vdev, VirtQueue *vq))
 {
     BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(s)));
     VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(qbus);
@@ -54,6 +82,7 @@ static int virtio_scsi_vring_init(VirtIOSCSI *s, VirtQueue *vq, int n)
     }
 
     virtio_queue_aio_set_host_notifier_handler(vq, s->ctx, true, true);
+    virtio_set_queue_aio(vq, fn);
     return 0;
 }
 
@@ -71,9 +100,12 @@ static void virtio_scsi_clear_aio(VirtIOSCSI *s)
     int i;
 
     virtio_queue_aio_set_host_notifier_handler(vs->ctrl_vq, s->ctx, true, false);
+    virtio_set_queue_aio(vs->ctrl_vq, NULL);
     virtio_queue_aio_set_host_notifier_handler(vs->event_vq, s->ctx, true, false);
+    virtio_set_queue_aio(vs->event_vq, NULL);
     for (i = 0; i < vs->conf.num_queues; i++) {
         virtio_queue_aio_set_host_notifier_handler(vs->cmd_vqs[i], s->ctx, true, false);
+        virtio_set_queue_aio(vs->cmd_vqs[i], NULL);
     }
 }
 
@@ -104,16 +136,19 @@ void virtio_scsi_dataplane_start(VirtIOSCSI *s)
     }
 
     aio_context_acquire(s->ctx);
-    rc = virtio_scsi_vring_init(s, vs->ctrl_vq, 0);
+    rc = virtio_scsi_vring_init(s, vs->ctrl_vq, 0,
+				virtio_scsi_data_plane_handle_ctrl);
     if (rc) {
         goto fail_vrings;
     }
-    rc = virtio_scsi_vring_init(s, vs->event_vq, 1);
+    rc = virtio_scsi_vring_init(s, vs->event_vq, 1,
+				virtio_scsi_data_plane_handle_event);
     if (rc) {
         goto fail_vrings;
     }
     for (i = 0; i < vs->conf.num_queues; i++) {
-        rc = virtio_scsi_vring_init(s, vs->cmd_vqs[i], i + 2);
+        rc = virtio_scsi_vring_init(s, vs->cmd_vqs[i], i + 2,
+				    virtio_scsi_data_plane_handle_cmd);
         if (rc) {
             goto fail_vrings;
         }
diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c
index 38f1e2c..30415c6 100644
--- a/hw/scsi/virtio-scsi.c
+++ b/hw/scsi/virtio-scsi.c
@@ -374,7 +374,7 @@ fail:
     return ret;
 }
 
-void virtio_scsi_handle_ctrl_req(VirtIOSCSI *s, VirtIOSCSIReq *req)
+static void virtio_scsi_handle_ctrl_req(VirtIOSCSI *s, VirtIOSCSIReq *req)
 {
     VirtIODevice *vdev = (VirtIODevice *)s;
     uint32_t type;
@@ -412,20 +412,28 @@ void virtio_scsi_handle_ctrl_req(VirtIOSCSI *s, VirtIOSCSIReq *req)
     }
 }
 
-static void virtio_scsi_handle_ctrl(VirtIODevice *vdev, VirtQueue *vq)
+void virtio_scsi_handle_ctrl_vq(VirtIOSCSI *s, VirtQueue *vq)
 {
-    VirtIOSCSI *s = (VirtIOSCSI *)vdev;
     VirtIOSCSIReq *req;
 
-    if (s->ctx && !s->dataplane_started) {
-        virtio_scsi_dataplane_start(s);
-        return;
-    }
     while ((req = virtio_scsi_pop_req(s, vq))) {
         virtio_scsi_handle_ctrl_req(s, req);
     }
 }
 
+static void virtio_scsi_handle_ctrl(VirtIODevice *vdev, VirtQueue *vq)
+{
+    VirtIOSCSI *s = (VirtIOSCSI *)vdev;
+
+    if (s->ctx) {
+        virtio_scsi_dataplane_start(s);
+        if (!s->dataplane_fenced) {
+            return;
+        }
+    }
+    virtio_scsi_handle_ctrl_vq(s, vq);
+}
+
 static void virtio_scsi_complete_cmd_req(VirtIOSCSIReq *req)
 {
     /* Sense data is not in req->resp and is copied separately
@@ -508,7 +516,7 @@ static void virtio_scsi_fail_cmd_req(VirtIOSCSIReq *req)
     virtio_scsi_complete_cmd_req(req);
 }
 
-bool virtio_scsi_handle_cmd_req_prepare(VirtIOSCSI *s, VirtIOSCSIReq *req)
+static bool virtio_scsi_handle_cmd_req_prepare(VirtIOSCSI *s, VirtIOSCSIReq *req)
 {
     VirtIOSCSICommon *vs = &s->parent_obj;
     SCSIDevice *d;
@@ -550,7 +558,7 @@ bool virtio_scsi_handle_cmd_req_prepare(VirtIOSCSI *s, VirtIOSCSIReq *req)
     return true;
 }
 
-void virtio_scsi_handle_cmd_req_submit(VirtIOSCSI *s, VirtIOSCSIReq *req)
+static void virtio_scsi_handle_cmd_req_submit(VirtIOSCSI *s, VirtIOSCSIReq *req)
 {
     SCSIRequest *sreq = req->sreq;
     if (scsi_req_enqueue(sreq)) {
@@ -560,17 +568,11 @@ void virtio_scsi_handle_cmd_req_submit(VirtIOSCSI *s, VirtIOSCSIReq *req)
     scsi_req_unref(sreq);
 }
 
-static void virtio_scsi_handle_cmd(VirtIODevice *vdev, VirtQueue *vq)
+void virtio_scsi_handle_cmd_vq(VirtIOSCSI *s, VirtQueue *vq)
 {
-    /* use non-QOM casts in the data path */
-    VirtIOSCSI *s = (VirtIOSCSI *)vdev;
     VirtIOSCSIReq *req, *next;
     QTAILQ_HEAD(, VirtIOSCSIReq) reqs = QTAILQ_HEAD_INITIALIZER(reqs);
 
-    if (s->ctx && !s->dataplane_started) {
-        virtio_scsi_dataplane_start(s);
-        return;
-    }
     while ((req = virtio_scsi_pop_req(s, vq))) {
         if (virtio_scsi_handle_cmd_req_prepare(s, req)) {
             QTAILQ_INSERT_TAIL(&reqs, req, next);
@@ -582,6 +584,20 @@ static void virtio_scsi_handle_cmd(VirtIODevice *vdev, VirtQueue *vq)
     }
 }
 
+static void virtio_scsi_handle_cmd(VirtIODevice *vdev, VirtQueue *vq)
+{
+    /* use non-QOM casts in the data path */
+    VirtIOSCSI *s = (VirtIOSCSI *)vdev;
+
+    if (s->ctx) {
+        virtio_scsi_dataplane_start(s);
+        if (!s->dataplane_fenced) {
+            return;
+        }
+    }
+    virtio_scsi_handle_cmd_vq(s, vq);
+}
+
 static void virtio_scsi_get_config(VirtIODevice *vdev,
                                    uint8_t *config)
 {
@@ -725,17 +741,24 @@ out:
     }
 }
 
+void virtio_scsi_handle_event_vq(VirtIOSCSI *s, VirtQueue *vq)
+{
+    if (s->events_dropped) {
+        virtio_scsi_push_event(s, NULL, VIRTIO_SCSI_T_NO_EVENT, 0);
+    }
+}
+
 static void virtio_scsi_handle_event(VirtIODevice *vdev, VirtQueue *vq)
 {
     VirtIOSCSI *s = VIRTIO_SCSI(vdev);
 
-    if (s->ctx && !s->dataplane_started) {
+    if (s->ctx) {
         virtio_scsi_dataplane_start(s);
-        return;
-    }
-    if (s->events_dropped) {
-        virtio_scsi_push_event(s, NULL, VIRTIO_SCSI_T_NO_EVENT, 0);
+        if (!s->dataplane_fenced) {
+            return;
+        }
     }
+    virtio_scsi_handle_event_vq(s, vq);
 }
 
 static void virtio_scsi_change(SCSIBus *bus, SCSIDevice *dev, SCSISense sense)
diff --git a/include/hw/virtio/virtio-scsi.h b/include/hw/virtio/virtio-scsi.h
index eef4e95..ba2f5ce 100644
--- a/include/hw/virtio/virtio-scsi.h
+++ b/include/hw/virtio/virtio-scsi.h
@@ -139,9 +139,9 @@ void virtio_scsi_common_realize(DeviceState *dev, Error **errp,
                                 HandleOutput cmd);
 
 void virtio_scsi_common_unrealize(DeviceState *dev, Error **errp);
-void virtio_scsi_handle_ctrl_req(VirtIOSCSI *s, VirtIOSCSIReq *req);
-bool virtio_scsi_handle_cmd_req_prepare(VirtIOSCSI *s, VirtIOSCSIReq *req);
-void virtio_scsi_handle_cmd_req_submit(VirtIOSCSI *s, VirtIOSCSIReq *req);
+void virtio_scsi_handle_event_vq(VirtIOSCSI *s, VirtQueue *vq);
+void virtio_scsi_handle_cmd_vq(VirtIOSCSI *s, VirtQueue *vq);
+void virtio_scsi_handle_ctrl_vq(VirtIOSCSI *s, VirtQueue *vq);
 void virtio_scsi_init_req(VirtIOSCSI *s, VirtQueue *vq, VirtIOSCSIReq *req);
 void virtio_scsi_free_req(VirtIOSCSIReq *req);
 void virtio_scsi_push_event(VirtIOSCSI *s, SCSIDevice *dev,
-- 
1.8.3.1


* [Qemu-devel] [PATCH 8/9] virtio: merge virtio_queue_aio_set_host_notifier_handler with virtio_queue_set_aio
  2016-04-01 13:19 [Qemu-devel] [PATCH v2 0/9] virtio: aio handler API Paolo Bonzini
                   ` (6 preceding siblings ...)
  2016-04-01 13:19 ` [Qemu-devel] [PATCH 7/9] virtio-scsi: " Paolo Bonzini
@ 2016-04-01 13:19 ` Paolo Bonzini
  2016-04-03  9:06   ` Michael S. Tsirkin
  2016-04-01 13:19 ` [Qemu-devel] [PATCH 9/9] virtio: remove starting/stopping checks Paolo Bonzini
                   ` (5 subsequent siblings)
  13 siblings, 1 reply; 29+ messages in thread
From: Paolo Bonzini @ 2016-04-01 13:19 UTC
  To: qemu-devel; +Cc: famz, tubo, mst, borntraeger, stefanha, cornelia.huck

Eliminating the reentrancy is actually a nice thing that we can do
with the API that Michael proposed, so let's make it first class.
This also hides the complex assign/set_handler conventions from
callers of virtio_queue_aio_set_host_notifier_handler, which in
fact was always called with assign=true.

Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 hw/block/dataplane/virtio-blk.c |  7 +++----
 hw/scsi/virtio-scsi-dataplane.c | 12 ++++--------
 hw/virtio/virtio.c              | 19 ++++---------------
 include/hw/virtio/virtio.h      |  6 ++----
 4 files changed, 13 insertions(+), 31 deletions(-)

diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
index fd06726..74c6d37 100644
--- a/hw/block/dataplane/virtio-blk.c
+++ b/hw/block/dataplane/virtio-blk.c
@@ -237,8 +237,8 @@ void virtio_blk_data_plane_start(VirtIOBlockDataPlane *s)
 
     /* Get this show started by hooking up our callbacks */
     aio_context_acquire(s->ctx);
-    virtio_set_queue_aio(s->vq, virtio_blk_data_plane_handle_output);
-    virtio_queue_aio_set_host_notifier_handler(s->vq, s->ctx, true, true);
+    virtio_queue_aio_set_host_notifier_handler(s->vq, s->ctx,
+					       virtio_blk_data_plane_handle_output);
     aio_context_release(s->ctx);
     return;
 
@@ -273,8 +273,7 @@ void virtio_blk_data_plane_stop(VirtIOBlockDataPlane *s)
     aio_context_acquire(s->ctx);
 
     /* Stop notifications for new requests from guest */
-    virtio_queue_aio_set_host_notifier_handler(s->vq, s->ctx, true, false);
-    virtio_set_queue_aio(s->vq, NULL);
+    virtio_queue_aio_set_host_notifier_handler(s->vq, s->ctx, NULL);
 
     /* Drain and switch bs back to the QEMU main loop */
     blk_set_aio_context(s->conf->conf.blk, qemu_get_aio_context());
diff --git a/hw/scsi/virtio-scsi-dataplane.c b/hw/scsi/virtio-scsi-dataplane.c
index a497a2c..5494dcc 100644
--- a/hw/scsi/virtio-scsi-dataplane.c
+++ b/hw/scsi/virtio-scsi-dataplane.c
@@ -81,8 +81,7 @@ static int virtio_scsi_vring_init(VirtIOSCSI *s, VirtQueue *vq, int n,
         return rc;
     }
 
-    virtio_queue_aio_set_host_notifier_handler(vq, s->ctx, true, true);
-    virtio_set_queue_aio(vq, fn);
+    virtio_queue_aio_set_host_notifier_handler(vq, s->ctx, fn);
     return 0;
 }
 
@@ -99,13 +98,10 @@ static void virtio_scsi_clear_aio(VirtIOSCSI *s)
     VirtIOSCSICommon *vs = VIRTIO_SCSI_COMMON(s);
     int i;
 
-    virtio_queue_aio_set_host_notifier_handler(vs->ctrl_vq, s->ctx, true, false);
-    virtio_set_queue_aio(vs->ctrl_vq, NULL);
-    virtio_queue_aio_set_host_notifier_handler(vs->event_vq, s->ctx, true, false);
-    virtio_set_queue_aio(vs->event_vq, NULL);
+    virtio_queue_aio_set_host_notifier_handler(vs->ctrl_vq, s->ctx, NULL);
+    virtio_queue_aio_set_host_notifier_handler(vs->event_vq, s->ctx, NULL);
     for (i = 0; i < vs->conf.num_queues; i++) {
-        virtio_queue_aio_set_host_notifier_handler(vs->cmd_vqs[i], s->ctx, true, false);
-        virtio_set_queue_aio(vs->cmd_vqs[i], NULL);
+        virtio_queue_aio_set_host_notifier_handler(vs->cmd_vqs[i], s->ctx, NULL);
     }
 }
 
diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index eb04ac0..7fcfc24f 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -1159,14 +1159,6 @@ VirtQueue *virtio_add_queue(VirtIODevice *vdev, int queue_size,
     return &vdev->vq[i];
 }
 
-void virtio_set_queue_aio(VirtQueue *vq,
-                          void (*handle_output)(VirtIODevice *, VirtQueue *))
-{
-    assert(vq->handle_output);
-
-    vq->handle_aio_output = handle_output;
-}
-
 void virtio_del_queue(VirtIODevice *vdev, int n)
 {
     if (n < 0 || n >= VIRTIO_QUEUE_MAX) {
@@ -1809,19 +1801,16 @@ static void virtio_queue_host_notifier_aio_read(EventNotifier *n)
 }
 
 void virtio_queue_aio_set_host_notifier_handler(VirtQueue *vq, AioContext *ctx,
-                                                bool assign, bool set_handler)
+                                                void (*handle_output)(VirtIODevice *,
+							   VirtQueue *))
 {
-    if (assign && set_handler) {
+    vq->handle_aio_output = handle_output;
+    if (handle_output) {
         aio_set_event_notifier(ctx, &vq->host_notifier, true,
                                virtio_queue_host_notifier_aio_read);
     } else {
         aio_set_event_notifier(ctx, &vq->host_notifier, true, NULL);
     }
-    if (!assign) {
-        /* Test and clear notifier before after disabling event,
-         * in case poll callback didn't have time to run. */
-        virtio_queue_host_notifier_aio_read(&vq->host_notifier);
-    }
 }
 
 static void virtio_queue_host_notifier_read(EventNotifier *n)
diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h
index fa3f93b..6a37065 100644
--- a/include/hw/virtio/virtio.h
+++ b/include/hw/virtio/virtio.h
@@ -142,9 +142,6 @@ VirtQueue *virtio_add_queue(VirtIODevice *vdev, int queue_size,
                             void (*handle_output)(VirtIODevice *,
                                                   VirtQueue *));
 
-void virtio_set_queue_aio(VirtQueue *vq,
-                          void (*handle_output)(VirtIODevice *, VirtQueue *));
-
 void virtio_del_queue(VirtIODevice *vdev, int n);
 
 void *virtqueue_alloc_element(size_t sz, unsigned out_num, unsigned in_num);
@@ -254,7 +251,8 @@ EventNotifier *virtio_queue_get_host_notifier(VirtQueue *vq);
 void virtio_queue_set_host_notifier_fd_handler(VirtQueue *vq, bool assign,
                                                bool set_handler);
 void virtio_queue_aio_set_host_notifier_handler(VirtQueue *vq, AioContext *ctx,
-                                                bool assign, bool set_handler);
+                                                void (*fn)(VirtIODevice *,
+                                                           VirtQueue *));
 void virtio_irq(VirtQueue *vq);
 VirtQueue *virtio_vector_first_queue(VirtIODevice *vdev, uint16_t vector);
 VirtQueue *virtio_vector_next_queue(VirtQueue *vq);
-- 
1.8.3.1


* [Qemu-devel] [PATCH 9/9] virtio: remove starting/stopping checks
  2016-04-01 13:19 [Qemu-devel] [PATCH v2 0/9] virtio: aio handler API Paolo Bonzini
                   ` (7 preceding siblings ...)
  2016-04-01 13:19 ` [Qemu-devel] [PATCH 8/9] virtio: merge virtio_queue_aio_set_host_notifier_handler with virtio_queue_set_aio Paolo Bonzini
@ 2016-04-01 13:19 ` Paolo Bonzini
  2016-04-01 14:14   ` Christian Borntraeger
  2016-04-01 14:02 ` [Qemu-devel] [PATCH v2 0/9] virtio: aio handler API Christian Borntraeger
                   ` (4 subsequent siblings)
  13 siblings, 1 reply; 29+ messages in thread
From: Paolo Bonzini @ 2016-04-01 13:19 UTC
  To: qemu-devel; +Cc: famz, tubo, mst, borntraeger, stefanha, cornelia.huck

Reentrancy cannot happen while the BQL is being held.

Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 hw/block/dataplane/virtio-blk.c | 12 ++----------
 hw/scsi/virtio-scsi-dataplane.c |  9 +--------
 include/hw/virtio/virtio-scsi.h |  2 --
 3 files changed, 3 insertions(+), 20 deletions(-)

diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
index 74c6d37..cb00cdc 100644
--- a/hw/block/dataplane/virtio-blk.c
+++ b/hw/block/dataplane/virtio-blk.c
@@ -27,9 +27,6 @@
 #include "qom/object_interfaces.h"
 
 struct VirtIOBlockDataPlane {
-    bool starting;
-    bool stopping;
-
     VirtIOBlkConf *conf;
 
     VirtIODevice *vdev;
@@ -203,11 +200,10 @@ void virtio_blk_data_plane_start(VirtIOBlockDataPlane *s)
     VirtIOBlock *vblk = VIRTIO_BLK(s->vdev);
     int r;
 
-    if (vblk->dataplane_started || s->starting) {
+    if (vblk->dataplane_started) {
         return;
     }
 
-    s->starting = true;
     s->vq = virtio_get_queue(s->vdev, 0);
 
     /* Set up guest notifier (irq) */
@@ -226,7 +222,6 @@ void virtio_blk_data_plane_start(VirtIOBlockDataPlane *s)
         goto fail_host_notifier;
     }
 
-    s->starting = false;
     vblk->dataplane_started = true;
     trace_virtio_blk_data_plane_start(s);
 
@@ -246,7 +241,6 @@ void virtio_blk_data_plane_start(VirtIOBlockDataPlane *s)
     k->set_guest_notifiers(qbus->parent, 1, false);
   fail_guest_notifiers:
     vblk->dataplane_disabled = true;
-    s->starting = false;
     vblk->dataplane_started = true;
 }
 
@@ -257,7 +251,7 @@ void virtio_blk_data_plane_stop(VirtIOBlockDataPlane *s)
     VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(qbus);
     VirtIOBlock *vblk = VIRTIO_BLK(s->vdev);
 
-    if (!vblk->dataplane_started || s->stopping) {
+    if (!vblk->dataplane_started) {
         return;
     }
 
@@ -267,7 +261,6 @@ void virtio_blk_data_plane_stop(VirtIOBlockDataPlane *s)
         vblk->dataplane_started = false;
         return;
     }
-    s->stopping = true;
     trace_virtio_blk_data_plane_stop(s);
 
     aio_context_acquire(s->ctx);
@@ -286,5 +279,4 @@ void virtio_blk_data_plane_stop(VirtIOBlockDataPlane *s)
     k->set_guest_notifiers(qbus->parent, 1, false);
 
     vblk->dataplane_started = false;
-    s->stopping = false;
 }
diff --git a/hw/scsi/virtio-scsi-dataplane.c b/hw/scsi/virtio-scsi-dataplane.c
index 5494dcc..7511447 100644
--- a/hw/scsi/virtio-scsi-dataplane.c
+++ b/hw/scsi/virtio-scsi-dataplane.c
@@ -115,14 +115,11 @@ void virtio_scsi_dataplane_start(VirtIOSCSI *s)
     VirtIOSCSICommon *vs = VIRTIO_SCSI_COMMON(s);
 
     if (s->dataplane_started ||
-        s->dataplane_starting ||
         s->dataplane_fenced ||
         s->ctx != iothread_get_aio_context(vs->conf.iothread)) {
         return;
     }
 
-    s->dataplane_starting = true;
-
     /* Set up guest notifier (irq) */
     rc = k->set_guest_notifiers(qbus->parent, vs->conf.num_queues + 2, true);
     if (rc != 0) {
@@ -150,7 +147,6 @@ void virtio_scsi_dataplane_start(VirtIOSCSI *s)
         }
     }
 
-    s->dataplane_starting = false;
     s->dataplane_started = true;
     aio_context_release(s->ctx);
     return;
@@ -164,7 +160,6 @@ fail_vrings:
     k->set_guest_notifiers(qbus->parent, vs->conf.num_queues + 2, false);
 fail_guest_notifiers:
     s->dataplane_fenced = true;
-    s->dataplane_starting = false;
     s->dataplane_started = true;
 }
 
@@ -176,7 +171,7 @@ void virtio_scsi_dataplane_stop(VirtIOSCSI *s)
     VirtIOSCSICommon *vs = VIRTIO_SCSI_COMMON(s);
     int i;
 
-    if (!s->dataplane_started || s->dataplane_stopping) {
+    if (!s->dataplane_started) {
         return;
     }
 
@@ -186,7 +181,6 @@ void virtio_scsi_dataplane_stop(VirtIOSCSI *s)
         s->dataplane_started = false;
         return;
     }
-    s->dataplane_stopping = true;
     assert(s->ctx == iothread_get_aio_context(vs->conf.iothread));
 
     aio_context_acquire(s->ctx);
@@ -203,6 +197,5 @@ void virtio_scsi_dataplane_stop(VirtIOSCSI *s)
 
     /* Clean up guest notifier (irq) */
     k->set_guest_notifiers(qbus->parent, vs->conf.num_queues + 2, false);
-    s->dataplane_stopping = false;
     s->dataplane_started = false;
 }
diff --git a/include/hw/virtio/virtio-scsi.h b/include/hw/virtio/virtio-scsi.h
index ba2f5ce..d5352d8 100644
--- a/include/hw/virtio/virtio-scsi.h
+++ b/include/hw/virtio/virtio-scsi.h
@@ -89,8 +89,6 @@ typedef struct VirtIOSCSI {
     QTAILQ_HEAD(, VirtIOSCSIBlkChangeNotifier) remove_notifiers;
 
     bool dataplane_started;
-    bool dataplane_starting;
-    bool dataplane_stopping;
     bool dataplane_fenced;
     Error *blocker;
     uint32_t host_features;
-- 
1.8.3.1


* Re: [Qemu-devel] [PATCH v2 0/9] virtio: aio handler API
  2016-04-01 13:19 [Qemu-devel] [PATCH v2 0/9] virtio: aio handler API Paolo Bonzini
                   ` (8 preceding siblings ...)
  2016-04-01 13:19 ` [Qemu-devel] [PATCH 9/9] virtio: remove starting/stopping checks Paolo Bonzini
@ 2016-04-01 14:02 ` Christian Borntraeger
  2016-04-01 15:16   ` Christian Borntraeger
  2016-04-01 14:06 ` Cornelia Huck
                   ` (3 subsequent siblings)
  13 siblings, 1 reply; 29+ messages in thread
From: Christian Borntraeger @ 2016-04-01 14:02 UTC
  To: Paolo Bonzini, qemu-devel; +Cc: cornelia.huck, tubo, famz, stefanha, mst

On 04/01/2016 03:19 PM, Paolo Bonzini wrote:
> This version fixes some commit messages, is based on qemu.git master
> and adds Cornelia's Reviewed-by tags.  There are no code changes apart
> from context.
> 
> Michael S. Tsirkin (2):
>   virtio: add aio handler
>   virtio-blk: use aio handler for data plane
> 
> Paolo Bonzini (7):
>   virtio-dataplane: pass assign=true to
>     virtio_queue_aio_set_host_notifier_handler
>   virtio: make virtio_queue_notify_vq static
>   virtio-blk: fix disabled mode
>   virtio-scsi: fix disabled mode
>   virtio-scsi: use aio handler for data plane
>   virtio: merge virtio_queue_aio_set_host_notifier_handler with
>     virtio_queue_set_aio
>   virtio: remove starting/stopping checks
> 
>  hw/block/dataplane/virtio-blk.c | 35 +++++++++++----------
>  hw/block/virtio-blk.c           | 29 ++++++++++-------
>  hw/scsi/virtio-scsi-dataplane.c | 56 +++++++++++++++++++++++----------
>  hw/scsi/virtio-scsi.c           | 69 +++++++++++++++++++++++++++--------------
>  hw/virtio/virtio.c              | 37 ++++++++++++++++------
>  include/hw/virtio/virtio-blk.h  |  3 ++
>  include/hw/virtio/virtio-scsi.h |  9 ++----
>  include/hw/virtio/virtio.h      |  4 +--
>  8 files changed, 158 insertions(+), 84 deletions(-)
> 

2.6-rc0 + this patch gives several occurrences of segmentation faults when starting several
guests in a reboot loop, e.g. something like using 0 as a pointer for a pthread_mutex.
Thread 1 (Thread 0x3ff7b1ff910 (LWP 24233)):
#0  0x000003ff7d18a178 in pthread_mutex_lock () at /lib64/libpthread.so.0
#1  0x0000000080250572 in qemu_mutex_lock (mutex=mutex@entry=0xf0) at /home/cborntra/REPOS/qemu/util/qemu-thread-posix.c:64
#2  0x00000000801b3e14 in aio_bh_new (ctx=0x0, cb=cb@entry=0x801ef148 <blk_aio_complete_bh>, opaque=opaque@entry=0x3ff74000a50) at /home/cborntra/REPOS/qemu/async.c:55
#3  0x00000000801f0bf0 in blk_aio_prwv (blk=0x808ee1f0, offset=4096, qiov=0x3ff740009b8, co_entry=co_entry@entry=0x801efef8 <blk_aio_read_entry>, flags=flags@entry=(unknown: 0), cb=0x8007fb50 <virtio_blk_rw_complete>, opaque=0x3ff74000960) at /home/cborntra/REPOS/qemu/block/block-backend.c:904
#4  0x00000000801f0cc8 in blk_aio_readv (blk=<optimized out>, sector_num=<optimized out>, iov=<optimized out>, nb_sectors=<optimized out>, cb=<optimized out>, opaque=0x3ff74000960)
    at /home/cborntra/REPOS/qemu/block/block-backend.c:997
#5  0x000000008008062e in virtio_blk_submit_multireq (niov=<optimized out>, num_reqs=<optimized out>, start=<optimized out>, mrb=<optimized out>, blk=<optimized out>)
    at /home/cborntra/REPOS/qemu/hw/block/virtio-blk.c:361
#6  0x000000008008062e in virtio_blk_submit_multireq (blk=<optimized out>, mrb=mrb@entry=0x3ff7b1fe780) at /home/cborntra/REPOS/qemu/hw/block/virtio-blk.c:391
#7  0x00000000800811d4 in virtio_blk_handle_vq (s=0x8090c608, vq=<optimized out>) at /home/cborntra/REPOS/qemu/hw/block/virtio-blk.c:593
#8  0x000000008009c9ee in virtio_queue_host_notifier_aio_read (vq=0x80d361d0) at /home/cborntra/REPOS/qemu/hw/virtio/virtio.c:1098
#9  0x000000008009c9ee in virtio_queue_host_notifier_aio_read (n=0x80d36230) at /home/cborntra/REPOS/qemu/hw/virtio/virtio.c:1799
#10 0x00000000801bece6 in aio_dispatch (ctx=ctx@entry=0x808c35d0) at /home/cborntra/REPOS/qemu/aio-posix.c:327
#11 0x00000000801bef44 in aio_poll (ctx=0x808c35d0, blocking=<optimized out>) at /home/cborntra/REPOS/qemu/aio-posix.c:475
#12 0x00000000800e2db8 in iothread_run (opaque=0x808c3090) at /home/cborntra/REPOS/qemu/iothread.c:46
#13 0x000003ff7d187c2c in start_thread () at /lib64/libpthread.so.0
#14 0x000003ff7d08ec9a in thread_start () at /lib64/libc.so.6


* Re: [Qemu-devel] [PATCH 1/9] virtio-dataplane: pass assign=true to virtio_queue_aio_set_host_notifier_handler
  2016-04-01 13:19 ` [Qemu-devel] [PATCH 1/9] virtio-dataplane: pass assign=true to virtio_queue_aio_set_host_notifier_handler Paolo Bonzini
@ 2016-04-01 14:02   ` Cornelia Huck
  2016-04-05 10:38   ` Michael S. Tsirkin
  1 sibling, 0 replies; 29+ messages in thread
From: Cornelia Huck @ 2016-04-01 14:02 UTC
  To: Paolo Bonzini; +Cc: famz, borntraeger, mst, qemu-devel, tubo, stefanha

On Fri,  1 Apr 2016 15:19:46 +0200
Paolo Bonzini <pbonzini@redhat.com> wrote:

> There is no need to run the handler one last time; the device is
> being reset and it is okay to drop requests that are pending in
> the virtqueue.  Even in the case of migration, the requests would
> be processed when ioeventfd is re-enabled on the destination
> side: virtio_queue_set_host_notifier_fd_handler will call
> virtio_queue_host_notifier_read, which will start dataplane; the host
> notifier is then connected to the I/O thread and event_notifier_set is
> called to start processing it.
> 
> By omitting this call, we dodge a possible cause of races between the
> dataplane thread on one side and the main/vCPU threads on the other.
> 
> The virtio_queue_aio_set_host_notifier_handler function is now
> only ever called with assign=true, but for now this is left as is
> because the function parameters will change soon anyway.
> 
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>  hw/block/dataplane/virtio-blk.c | 2 +-
>  hw/scsi/virtio-scsi-dataplane.c | 6 +++---
>  2 files changed, 4 insertions(+), 4 deletions(-)

Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>


* Re: [Qemu-devel] [PATCH 5/9] virtio: add aio handler
  2016-04-01 13:19 ` [Qemu-devel] [PATCH 5/9] virtio: add aio handler Paolo Bonzini
@ 2016-04-01 14:04   ` Cornelia Huck
  0 siblings, 0 replies; 29+ messages in thread
From: Cornelia Huck @ 2016-04-01 14:04 UTC
  To: Paolo Bonzini; +Cc: famz, borntraeger, mst, qemu-devel, tubo, stefanha

On Fri,  1 Apr 2016 15:19:50 +0200
Paolo Bonzini <pbonzini@redhat.com> wrote:

> From: "Michael S. Tsirkin" <mst@redhat.com>
> 
> In addition to handling IO in vcpu thread and
> in io thread, blk dataplane introduces yet another mode:
> handling it by aio.
> 
> This reuses the same handler as previous modes,
> which triggers races as these were not designed to be reentrant.
> 
> As a temporary fix, add a separate handler just for aio, this will make
> it possible to disable regular handlers when dataplane is active.
> 
> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>  hw/virtio/virtio.c         | 36 ++++++++++++++++++++++++++++++++----
>  include/hw/virtio/virtio.h |  3 +++
>  2 files changed, 35 insertions(+), 4 deletions(-)

Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>


* Re: [Qemu-devel] [PATCH 6/9] virtio-blk: use aio handler for data plane
  2016-04-01 13:19 ` [Qemu-devel] [PATCH 6/9] virtio-blk: use aio handler for data plane Paolo Bonzini
@ 2016-04-01 14:05   ` Cornelia Huck
  0 siblings, 0 replies; 29+ messages in thread
From: Cornelia Huck @ 2016-04-01 14:05 UTC
  To: Paolo Bonzini; +Cc: famz, borntraeger, mst, qemu-devel, tubo, stefanha

On Fri,  1 Apr 2016 15:19:51 +0200
Paolo Bonzini <pbonzini@redhat.com> wrote:

> From: "Michael S. Tsirkin" <mst@redhat.com>
> 
> In addition to handling IO in vcpu thread and in io thread, dataplane
> introduces yet another mode: handling it by aio.
> 
> This reuses the same handler as previous modes, which triggers races as
> these were not designed to be reentrant.
> 
> Use a separate handler just for aio, and disable regular handlers when
> dataplane is active.
> 
> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>  hw/block/dataplane/virtio-blk.c | 13 +++++++++++++
>  hw/block/virtio-blk.c           | 27 +++++++++++++++++----------
>  include/hw/virtio/virtio-blk.h  |  2 ++
>  3 files changed, 32 insertions(+), 10 deletions(-)

Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>


* Re: [Qemu-devel] [PATCH v2 0/9] virtio: aio handler API
  2016-04-01 13:19 [Qemu-devel] [PATCH v2 0/9] virtio: aio handler API Paolo Bonzini
                   ` (9 preceding siblings ...)
  2016-04-01 14:02 ` [Qemu-devel] [PATCH v2 0/9] virtio: aio handler API Christian Borntraeger
@ 2016-04-01 14:06 ` Cornelia Huck
  2016-04-03  9:29 ` Michael S. Tsirkin
                   ` (2 subsequent siblings)
  13 siblings, 0 replies; 29+ messages in thread
From: Cornelia Huck @ 2016-04-01 14:06 UTC
  To: Paolo Bonzini; +Cc: famz, borntraeger, mst, qemu-devel, tubo, stefanha

On Fri,  1 Apr 2016 15:19:45 +0200
Paolo Bonzini <pbonzini@redhat.com> wrote:

> This version fixes some commit messages, is based on qemu.git master
> and adds Cornelia's Reviewed-by tags.  There are no code changes apart
> from context.
> 
> Michael S. Tsirkin (2):
>   virtio: add aio handler
>   virtio-blk: use aio handler for data plane
> 
> Paolo Bonzini (7):
>   virtio-dataplane: pass assign=true to
>     virtio_queue_aio_set_host_notifier_handler
>   virtio: make virtio_queue_notify_vq static
>   virtio-blk: fix disabled mode
>   virtio-scsi: fix disabled mode
>   virtio-scsi: use aio handler for data plane
>   virtio: merge virtio_queue_aio_set_host_notifier_handler with
>     virtio_queue_set_aio
>   virtio: remove starting/stopping checks
> 
>  hw/block/dataplane/virtio-blk.c | 35 +++++++++++----------
>  hw/block/virtio-blk.c           | 29 ++++++++++-------
>  hw/scsi/virtio-scsi-dataplane.c | 56 +++++++++++++++++++++++----------
>  hw/scsi/virtio-scsi.c           | 69 +++++++++++++++++++++++++++--------------
>  hw/virtio/virtio.c              | 37 ++++++++++++++++------
>  include/hw/virtio/virtio-blk.h  |  3 ++
>  include/hw/virtio/virtio-scsi.h |  9 ++----
>  include/hw/virtio/virtio.h      |  4 +--
>  8 files changed, 158 insertions(+), 84 deletions(-)
> 

This one survives a few reboots on my small setup. Let's see how it
fares on Tu Bo's setup.


* Re: [Qemu-devel] [PATCH 9/9] virtio: remove starting/stopping checks
  2016-04-01 13:19 ` [Qemu-devel] [PATCH 9/9] virtio: remove starting/stopping checks Paolo Bonzini
@ 2016-04-01 14:14   ` Christian Borntraeger
  2016-04-01 14:30     ` Cornelia Huck
  0 siblings, 1 reply; 29+ messages in thread
From: Christian Borntraeger @ 2016-04-01 14:14 UTC
  To: Paolo Bonzini, qemu-devel; +Cc: cornelia.huck, tubo, famz, stefanha, mst

On 04/01/2016 03:19 PM, Paolo Bonzini wrote:
> Reentrancy cannot happen while the BQL is being held.
> 
> Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

Reverting this patch makes the segfaults go away.
> ---
>  hw/block/dataplane/virtio-blk.c | 12 ++----------
>  hw/scsi/virtio-scsi-dataplane.c |  9 +--------
>  include/hw/virtio/virtio-scsi.h |  2 --
>  3 files changed, 3 insertions(+), 20 deletions(-)
> 
> diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
> index 74c6d37..cb00cdc 100644
> --- a/hw/block/dataplane/virtio-blk.c
> +++ b/hw/block/dataplane/virtio-blk.c
> @@ -27,9 +27,6 @@
>  #include "qom/object_interfaces.h"
> 
>  struct VirtIOBlockDataPlane {
> -    bool starting;
> -    bool stopping;
> -
>      VirtIOBlkConf *conf;
> 
>      VirtIODevice *vdev;
> @@ -203,11 +200,10 @@ void virtio_blk_data_plane_start(VirtIOBlockDataPlane *s)
>      VirtIOBlock *vblk = VIRTIO_BLK(s->vdev);
>      int r;
> 
> -    if (vblk->dataplane_started || s->starting) {
> +    if (vblk->dataplane_started) {
>          return;
>      }
> 
> -    s->starting = true;
>      s->vq = virtio_get_queue(s->vdev, 0);
> 
>      /* Set up guest notifier (irq) */
> @@ -226,7 +222,6 @@ void virtio_blk_data_plane_start(VirtIOBlockDataPlane *s)
>          goto fail_host_notifier;
>      }
> 
> -    s->starting = false;
>      vblk->dataplane_started = true;
>      trace_virtio_blk_data_plane_start(s);
> 
> @@ -246,7 +241,6 @@ void virtio_blk_data_plane_start(VirtIOBlockDataPlane *s)
>      k->set_guest_notifiers(qbus->parent, 1, false);
>    fail_guest_notifiers:
>      vblk->dataplane_disabled = true;
> -    s->starting = false;
>      vblk->dataplane_started = true;
>  }
> 
> @@ -257,7 +251,7 @@ void virtio_blk_data_plane_stop(VirtIOBlockDataPlane *s)
>      VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(qbus);
>      VirtIOBlock *vblk = VIRTIO_BLK(s->vdev);
> 
> -    if (!vblk->dataplane_started || s->stopping) {
> +    if (!vblk->dataplane_started) {
>          return;
>      }
> 
> @@ -267,7 +261,6 @@ void virtio_blk_data_plane_stop(VirtIOBlockDataPlane *s)
>          vblk->dataplane_started = false;
>          return;
>      }
> -    s->stopping = true;
>      trace_virtio_blk_data_plane_stop(s);
> 
>      aio_context_acquire(s->ctx);
> @@ -286,5 +279,4 @@ void virtio_blk_data_plane_stop(VirtIOBlockDataPlane *s)
>      k->set_guest_notifiers(qbus->parent, 1, false);
> 
>      vblk->dataplane_started = false;
> -    s->stopping = false;
>  }
> diff --git a/hw/scsi/virtio-scsi-dataplane.c b/hw/scsi/virtio-scsi-dataplane.c
> index 5494dcc..7511447 100644
> --- a/hw/scsi/virtio-scsi-dataplane.c
> +++ b/hw/scsi/virtio-scsi-dataplane.c
> @@ -115,14 +115,11 @@ void virtio_scsi_dataplane_start(VirtIOSCSI *s)
>      VirtIOSCSICommon *vs = VIRTIO_SCSI_COMMON(s);
> 
>      if (s->dataplane_started ||
> -        s->dataplane_starting ||
>          s->dataplane_fenced ||
>          s->ctx != iothread_get_aio_context(vs->conf.iothread)) {
>          return;
>      }
> 
> -    s->dataplane_starting = true;
> -
>      /* Set up guest notifier (irq) */
>      rc = k->set_guest_notifiers(qbus->parent, vs->conf.num_queues + 2, true);
>      if (rc != 0) {
> @@ -150,7 +147,6 @@ void virtio_scsi_dataplane_start(VirtIOSCSI *s)
>          }
>      }
> 
> -    s->dataplane_starting = false;
>      s->dataplane_started = true;
>      aio_context_release(s->ctx);
>      return;
> @@ -164,7 +160,6 @@ fail_vrings:
>      k->set_guest_notifiers(qbus->parent, vs->conf.num_queues + 2, false);
>  fail_guest_notifiers:
>      s->dataplane_fenced = true;
> -    s->dataplane_starting = false;
>      s->dataplane_started = true;
>  }
> 
> @@ -176,7 +171,7 @@ void virtio_scsi_dataplane_stop(VirtIOSCSI *s)
>      VirtIOSCSICommon *vs = VIRTIO_SCSI_COMMON(s);
>      int i;
> 
> -    if (!s->dataplane_started || s->dataplane_stopping) {
> +    if (!s->dataplane_started) {
>          return;
>      }
> 
> @@ -186,7 +181,6 @@ void virtio_scsi_dataplane_stop(VirtIOSCSI *s)
>          s->dataplane_started = false;
>          return;
>      }
> -    s->dataplane_stopping = true;
>      assert(s->ctx == iothread_get_aio_context(vs->conf.iothread));
> 
>      aio_context_acquire(s->ctx);
> @@ -203,6 +197,5 @@ void virtio_scsi_dataplane_stop(VirtIOSCSI *s)
> 
>      /* Clean up guest notifier (irq) */
>      k->set_guest_notifiers(qbus->parent, vs->conf.num_queues + 2, false);
> -    s->dataplane_stopping = false;
>      s->dataplane_started = false;
>  }
> diff --git a/include/hw/virtio/virtio-scsi.h b/include/hw/virtio/virtio-scsi.h
> index ba2f5ce..d5352d8 100644
> --- a/include/hw/virtio/virtio-scsi.h
> +++ b/include/hw/virtio/virtio-scsi.h
> @@ -89,8 +89,6 @@ typedef struct VirtIOSCSI {
>      QTAILQ_HEAD(, VirtIOSCSIBlkChangeNotifier) remove_notifiers;
> 
>      bool dataplane_started;
> -    bool dataplane_starting;
> -    bool dataplane_stopping;
>      bool dataplane_fenced;
>      Error *blocker;
>      uint32_t host_features;
> 


* Re: [Qemu-devel] [PATCH 9/9] virtio: remove starting/stopping checks
  2016-04-01 14:14   ` Christian Borntraeger
@ 2016-04-01 14:30     ` Cornelia Huck
  2016-04-03 10:30       ` Michael S. Tsirkin
  0 siblings, 1 reply; 29+ messages in thread
From: Cornelia Huck @ 2016-04-01 14:30 UTC
  To: Christian Borntraeger
  Cc: famz, mst, qemu-devel, tubo, stefanha, Paolo Bonzini

On Fri, 1 Apr 2016 16:14:22 +0200
Christian Borntraeger <borntraeger@de.ibm.com> wrote:

> On 04/01/2016 03:19 PM, Paolo Bonzini wrote:
> > Reentrancy cannot happen while the BQL is being held.
> > 
> > Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
> > Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> 
> Reverting this patch makes the segfaults go away.
> 
> 
> 
> 
> 
> > ---
> >  hw/block/dataplane/virtio-blk.c | 12 ++----------
> >  hw/scsi/virtio-scsi-dataplane.c |  9 +--------
> >  include/hw/virtio/virtio-scsi.h |  2 --
> >  3 files changed, 3 insertions(+), 20 deletions(-)

:(

On the one hand, I'm wondering what we're missing here.

On the other hand, let's just skip the cleanup patches and get the bug
fixed?


* Re: [Qemu-devel] [PATCH v2 0/9] virtio: aio handler API
  2016-04-01 14:02 ` [Qemu-devel] [PATCH v2 0/9] virtio: aio handler API Christian Borntraeger
@ 2016-04-01 15:16   ` Christian Borntraeger
  2016-04-03  9:00     ` Michael S. Tsirkin
  0 siblings, 1 reply; 29+ messages in thread
From: Christian Borntraeger @ 2016-04-01 15:16 UTC
  To: Paolo Bonzini, qemu-devel; +Cc: cornelia.huck, tubo, famz, stefanha, mst

Now with --enable-debug and better call traces

(gdb) thread apply all bt

Thread 5 (Thread 0x3ffa0e7f910 (LWP 29839)):
#0  0x000003ffa530334a in ioctl () at /lib64/libc.so.6
#1  0x0000000080081c84 in kvm_vcpu_ioctl (cpu=0x80e8c170, type=44672) at /home/cborntra/REPOS/qemu/kvm-all.c:1984
#2  0x000000008008154c in kvm_cpu_exec (cpu=0x80e8c170) at /home/cborntra/REPOS/qemu/kvm-all.c:1834
#3  0x000000008006075c in qemu_kvm_cpu_thread_fn (arg=0x80e8c170) at /home/cborntra/REPOS/qemu/cpus.c:1056
#4  0x000003ffa5407c2c in start_thread () at /lib64/libpthread.so.0
#5  0x000003ffa530ec9a in thread_start () at /lib64/libc.so.6

Thread 4 (Thread 0x3ffa3c7f910 (LWP 29830)):
#0  0x000003ffa530841e in syscall () at /lib64/libc.so.6
#1  0x00000000803e83fe in futex_wait (ev=0x80a65bd4 <rcu_call_ready_event>, val=4294967295) at /home/cborntra/REPOS/qemu/util/qemu-thread-posix.c:292
#2  0x00000000803e868e in qemu_event_wait (ev=0x80a65bd4 <rcu_call_ready_event>) at /home/cborntra/REPOS/qemu/util/qemu-thread-posix.c:399
#3  0x0000000080405dcc in call_rcu_thread (opaque=0x0) at /home/cborntra/REPOS/qemu/util/rcu.c:250
#4  0x000003ffa5407c2c in start_thread () at /lib64/libpthread.so.0
#5  0x000003ffa530ec9a in thread_start () at /lib64/libc.so.6

Thread 3 (Thread 0x3ffa16c3910 (LWP 29838)):
#0  0x000003ffa5410cd4 in __lll_lock_wait () at /lib64/libpthread.so.0
#1  0x000003ffa5413ed4 in __lll_lock_elision () at /lib64/libpthread.so.0
#2  0x00000000803e7ace in qemu_mutex_lock (mutex=0x8063ad08 <qemu_global_mutex>) at /home/cborntra/REPOS/qemu/util/qemu-thread-posix.c:64
#3  0x0000000080060f04 in qemu_mutex_lock_iothread () at /home/cborntra/REPOS/qemu/cpus.c:1232
#4  0x0000000080158866 in kvm_arch_handle_exit (cs=0x80e4d2b0, run=0x3ffa0e80000) at /home/cborntra/REPOS/qemu/target-s390x/kvm.c:2024
#5  0x000000008008181e in kvm_cpu_exec (cpu=0x80e4d2b0) at /home/cborntra/REPOS/qemu/kvm-all.c:1921
#6  0x000000008006075c in qemu_kvm_cpu_thread_fn (arg=0x80e4d2b0) at /home/cborntra/REPOS/qemu/cpus.c:1056
#7  0x000003ffa5407c2c in start_thread () at /lib64/libpthread.so.0
#8  0x000003ffa530ec9a in thread_start () at /lib64/libc.so.6

Thread 2 (Thread 0x3ffa6edbb90 (LWP 29795)):
#0  0x000003ffa540d68a in pthread_cond_wait@@GLIBC_2.3.2 () at /lib64/libpthread.so.0
#1  0x00000000803e7d66 in qemu_cond_wait (cond=0x80acd660, mutex=0x80acd630) at /home/cborntra/REPOS/qemu/util/qemu-thread-posix.c:123
#2  0x0000000080405618 in rfifolock_lock (r=0x80acd630) at /home/cborntra/REPOS/qemu/util/rfifolock.c:59
#3  0x00000000802dfafa in aio_context_acquire (ctx=0x80acd5d0) at /home/cborntra/REPOS/qemu/async.c:373
#4  0x00000000802eb7e0 in bdrv_set_aio_context (bs=0x80b00d30, new_context=0x80acd5d0) at /home/cborntra/REPOS/qemu/block.c:3682
#5  0x000000008034834e in blk_set_aio_context (blk=0x80af81f0, new_context=0x80acd5d0) at /home/cborntra/REPOS/qemu/block/block-backend.c:1371
#6  0x00000000800b7b08 in virtio_blk_data_plane_start (s=0x80f39530) at /home/cborntra/REPOS/qemu/hw/block/dataplane/virtio-blk.c:228
#7  0x00000000800b57ba in virtio_blk_handle_output (vdev=0x80f18008, vq=0x80fa25d0) at /home/cborntra/REPOS/qemu/hw/block/virtio-blk.c:607
#8  0x00000000800f0c7c in virtio_queue_notify_vq (vq=0x80fa25d0) at /home/cborntra/REPOS/qemu/hw/virtio/virtio.c:1108
#9  0x00000000800f3674 in virtio_queue_host_notifier_read (n=0x80fa2630) at /home/cborntra/REPOS/qemu/hw/virtio/virtio.c:1820
#10 0x00000000802f1914 in aio_dispatch (ctx=0x80abae30) at /home/cborntra/REPOS/qemu/aio-posix.c:327
#11 0x00000000802df3dc in aio_ctx_dispatch (source=0x80abae30, callback=0x0, user_data=0x0) at /home/cborntra/REPOS/qemu/async.c:233
#12 0x000003ffa5c51c0a in g_main_context_dispatch () at /lib64/libglib-2.0.so.0
#13 0x00000000802ee616 in glib_pollfds_poll () at /home/cborntra/REPOS/qemu/main-loop.c:213
#14 0x00000000802ee752 in os_host_main_loop_wait (timeout=1328000000) at /home/cborntra/REPOS/qemu/main-loop.c:258
#15 0x00000000802ee85e in main_loop_wait (nonblocking=0) at /home/cborntra/REPOS/qemu/main-loop.c:506
#16 0x000000008017db14 in main_loop () at /home/cborntra/REPOS/qemu/vl.c:1934
#17 0x0000000080185fe8 in main (argc=72, argv=0x3ffd157ea28, envp=0x3ffd157ec70) at /home/cborntra/REPOS/qemu/vl.c:4652

Thread 1 (Thread 0x3ffa347f910 (LWP 29831)):
#0  0x000003ffa540a178 in pthread_mutex_lock () at /lib64/libpthread.so.0
#1  0x00000000803e7ace in qemu_mutex_lock (mutex=0xf0) at /home/cborntra/REPOS/qemu/util/qemu-thread-posix.c:64
#2  0x00000000802deba8 in aio_bh_new (ctx=0x0, cb=0x80346910 <blk_aio_complete_bh>, opaque=0x3ff9c000a50) at /home/cborntra/REPOS/qemu/async.c:55
#3  0x0000000080346a6e in blk_aio_prwv (blk=0x80af81f0, offset=4096, qiov=0x3ff9c0021c8, co_entry=0x80346aa8 <blk_aio_read_entry>, flags=(unknown: 0), cb=0x800b3be8 <virtio_blk_rw_complete>, opaque=0x3ff9c002170) at /home/cborntra/REPOS/qemu/block/block-backend.c:904
#4  0x0000000080347030 in blk_aio_readv (blk=0x80af81f0, sector_num=8, iov=0x3ff9c0021c8, nb_sectors=8, cb=0x800b3be8 <virtio_blk_rw_complete>, opaque=0x3ff9c002170) at /home/cborntra/REPOS/qemu/block/block-backend.c:997
#5  0x00000000800b492c in submit_requests (blk=0x80af81f0, mrb=0x3ffa347e5c0, start=0, num_reqs=1, niov=-1) at /home/cborntra/REPOS/qemu/hw/block/virtio-blk.c:361
#6  0x00000000800b4a5e in virtio_blk_submit_multireq (blk=0x80af81f0, mrb=0x3ffa347e5c0) at /home/cborntra/REPOS/qemu/hw/block/virtio-blk.c:391
#7  0x00000000800b571e in virtio_blk_handle_vq (s=0x80f18008, vq=0x80fa25d0) at /home/cborntra/REPOS/qemu/hw/block/virtio-blk.c:593
#8  0x00000000800b78a2 in virtio_blk_data_plane_handle_output (vdev=0x80f18008, vq=0x80fa25d0) at /home/cborntra/REPOS/qemu/hw/block/dataplane/virtio-blk.c:192
#9  0x00000000800f0bbc in virtio_queue_notify_aio_vq (vq=0x80fa25d0) at /home/cborntra/REPOS/qemu/hw/virtio/virtio.c:1098
#10 0x00000000800f356c in virtio_queue_host_notifier_aio_read (n=0x80fa2630) at /home/cborntra/REPOS/qemu/hw/virtio/virtio.c:1799
#11 0x00000000802f1914 in aio_dispatch (ctx=0x80acd5d0) at /home/cborntra/REPOS/qemu/aio-posix.c:327
#12 0x00000000802f229a in aio_poll (ctx=0x80acd5d0, blocking=true) at /home/cborntra/REPOS/qemu/aio-posix.c:475
#13 0x0000000080165812 in iothread_run (opaque=0x80acd090) at /home/cborntra/REPOS/qemu/iothread.c:46
#14 0x000003ffa5407c2c in start_thread () at /lib64/libpthread.so.0
#15 0x000003ffa530ec9a in thread_start () at /lib64/libc.so.6

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [Qemu-devel] [PATCH v2 0/9] virtio: aio handler API
  2016-04-01 15:16   ` Christian Borntraeger
@ 2016-04-03  9:00     ` Michael S. Tsirkin
  0 siblings, 0 replies; 29+ messages in thread
From: Michael S. Tsirkin @ 2016-04-03  9:00 UTC (permalink / raw
  To: Christian Borntraeger
  Cc: famz, qemu-devel, tubo, stefanha, cornelia.huck, Paolo Bonzini

On Fri, Apr 01, 2016 at 05:16:06PM +0200, Christian Borntraeger wrote:
> Now with enable-debug and better call traces
> 
> (gdb) 
> (gdb) thread apply all bt
> 
> Thread 5 (Thread 0x3ffa0e7f910 (LWP 29839)):
> #0  0x000003ffa530334a in ioctl () at /lib64/libc.so.6
> #1  0x0000000080081c84 in kvm_vcpu_ioctl (cpu=0x80e8c170, type=44672) at /home/cborntra/REPOS/qemu/kvm-all.c:1984
> #2  0x000000008008154c in kvm_cpu_exec (cpu=0x80e8c170) at /home/cborntra/REPOS/qemu/kvm-all.c:1834
> #3  0x000000008006075c in qemu_kvm_cpu_thread_fn (arg=0x80e8c170) at /home/cborntra/REPOS/qemu/cpus.c:1056
> #4  0x000003ffa5407c2c in start_thread () at /lib64/libpthread.so.0
> #5  0x000003ffa530ec9a in thread_start () at /lib64/libc.so.6
> 
> Thread 4 (Thread 0x3ffa3c7f910 (LWP 29830)):
> #0  0x000003ffa530841e in syscall () at /lib64/libc.so.6
> #1  0x00000000803e83fe in futex_wait (ev=0x80a65bd4 <rcu_call_ready_event>, val=4294967295) at /home/cborntra/REPOS/qemu/util/qemu-thread-posix.c:292
> #2  0x00000000803e868e in qemu_event_wait (ev=0x80a65bd4 <rcu_call_ready_event>) at /home/cborntra/REPOS/qemu/util/qemu-thread-posix.c:399
> #3  0x0000000080405dcc in call_rcu_thread (opaque=0x0) at /home/cborntra/REPOS/qemu/util/rcu.c:250
> #4  0x000003ffa5407c2c in start_thread () at /lib64/libpthread.so.0
> #5  0x000003ffa530ec9a in thread_start () at /lib64/libc.so.6
> 
> Thread 3 (Thread 0x3ffa16c3910 (LWP 29838)):
> #0  0x000003ffa5410cd4 in __lll_lock_wait () at /lib64/libpthread.so.0
> #1  0x000003ffa5413ed4 in __lll_lock_elision () at /lib64/libpthread.so.0
> #2  0x00000000803e7ace in qemu_mutex_lock (mutex=0x8063ad08 <qemu_global_mutex>) at /home/cborntra/REPOS/qemu/util/qemu-thread-posix.c:64
> #3  0x0000000080060f04 in qemu_mutex_lock_iothread () at /home/cborntra/REPOS/qemu/cpus.c:1232
> #4  0x0000000080158866 in kvm_arch_handle_exit (cs=0x80e4d2b0, run=0x3ffa0e80000) at /home/cborntra/REPOS/qemu/target-s390x/kvm.c:2024
> #5  0x000000008008181e in kvm_cpu_exec (cpu=0x80e4d2b0) at /home/cborntra/REPOS/qemu/kvm-all.c:1921
> #6  0x000000008006075c in qemu_kvm_cpu_thread_fn (arg=0x80e4d2b0) at /home/cborntra/REPOS/qemu/cpus.c:1056
> #7  0x000003ffa5407c2c in start_thread () at /lib64/libpthread.so.0
> #8  0x000003ffa530ec9a in thread_start () at /lib64/libc.so.6
> 
> Thread 2 (Thread 0x3ffa6edbb90 (LWP 29795)):
> #0  0x000003ffa540d68a in pthread_cond_wait@@GLIBC_2.3.2 () at /lib64/libpthread.so.0
> #1  0x00000000803e7d66 in qemu_cond_wait (cond=0x80acd660, mutex=0x80acd630) at /home/cborntra/REPOS/qemu/util/qemu-thread-posix.c:123
> #2  0x0000000080405618 in rfifolock_lock (r=0x80acd630) at /home/cborntra/REPOS/qemu/util/rfifolock.c:59
> #3  0x00000000802dfafa in aio_context_acquire (ctx=0x80acd5d0) at /home/cborntra/REPOS/qemu/async.c:373
> #4  0x00000000802eb7e0 in bdrv_set_aio_context (bs=0x80b00d30, new_context=0x80acd5d0) at /home/cborntra/REPOS/qemu/block.c:3682
> #5  0x000000008034834e in blk_set_aio_context (blk=0x80af81f0, new_context=0x80acd5d0) at /home/cborntra/REPOS/qemu/block/block-backend.c:1371
> #6  0x00000000800b7b08 in virtio_blk_data_plane_start (s=0x80f39530) at /home/cborntra/REPOS/qemu/hw/block/dataplane/virtio-blk.c:228


So we are in virtio_blk_data_plane_start, but before we set the aio handler.

> #7  0x00000000800b57ba in virtio_blk_handle_output (vdev=0x80f18008, vq=0x80fa25d0) at /home/cborntra/REPOS/qemu/hw/block/virtio-blk.c:607
> #8  0x00000000800f0c7c in virtio_queue_notify_vq (vq=0x80fa25d0) at /home/cborntra/REPOS/qemu/hw/virtio/virtio.c:1108
> #9  0x00000000800f3674 in virtio_queue_host_notifier_read (n=0x80fa2630) at /home/cborntra/REPOS/qemu/hw/virtio/virtio.c:1820
> #10 0x00000000802f1914 in aio_dispatch (ctx=0x80abae30) at /home/cborntra/REPOS/qemu/aio-posix.c:327
> #11 0x00000000802df3dc in aio_ctx_dispatch (source=0x80abae30, callback=0x0, user_data=0x0) at /home/cborntra/REPOS/qemu/async.c:233
> #12 0x000003ffa5c51c0a in g_main_context_dispatch () at /lib64/libglib-2.0.so.0
> #13 0x00000000802ee616 in glib_pollfds_poll () at /home/cborntra/REPOS/qemu/main-loop.c:213
> #14 0x00000000802ee752 in os_host_main_loop_wait (timeout=1328000000) at /home/cborntra/REPOS/qemu/main-loop.c:258
> #15 0x00000000802ee85e in main_loop_wait (nonblocking=0) at /home/cborntra/REPOS/qemu/main-loop.c:506
> #16 0x000000008017db14 in main_loop () at /home/cborntra/REPOS/qemu/vl.c:1934
> #17 0x0000000080185fe8 in main (argc=72, argv=0x3ffd157ea28, envp=0x3ffd157ec70) at /home/cborntra/REPOS/qemu/vl.c:4652
> 
> Thread 1 (Thread 0x3ffa347f910 (LWP 29831)):
> #0  0x000003ffa540a178 in pthread_mutex_lock () at /lib64/libpthread.so.0
> #1  0x00000000803e7ace in qemu_mutex_lock (mutex=0xf0) at /home/cborntra/REPOS/qemu/util/qemu-thread-posix.c:64
> #2  0x00000000802deba8 in aio_bh_new (ctx=0x0, cb=0x80346910 <blk_aio_complete_bh>, opaque=0x3ff9c000a50) at /home/cborntra/REPOS/qemu/async.c:55
> #3  0x0000000080346a6e in blk_aio_prwv (blk=0x80af81f0, offset=4096, qiov=0x3ff9c0021c8, co_entry=0x80346aa8 <blk_aio_read_entry>, flags=(unknown: 0), cb=0x800b3be8 <virtio_blk_rw_complete>, opaque=0x3ff9c002170) at /home/cborntra/REPOS/qemu/block/block-backend.c:904
> #4  0x0000000080347030 in blk_aio_readv (blk=0x80af81f0, sector_num=8, iov=0x3ff9c0021c8, nb_sectors=8, cb=0x800b3be8 <virtio_blk_rw_complete>, opaque=0x3ff9c002170) at /home/cborntra/REPOS/qemu/block/block-backend.c:997
> #5  0x00000000800b492c in submit_requests (blk=0x80af81f0, mrb=0x3ffa347e5c0, start=0, num_reqs=1, niov=-1) at /home/cborntra/REPOS/qemu/hw/block/virtio-blk.c:361
> #6  0x00000000800b4a5e in virtio_blk_submit_multireq (blk=0x80af81f0, mrb=0x3ffa347e5c0) at /home/cborntra/REPOS/qemu/hw/block/virtio-blk.c:391
> #7  0x00000000800b571e in virtio_blk_handle_vq (s=0x80f18008, vq=0x80fa25d0) at /home/cborntra/REPOS/qemu/hw/block/virtio-blk.c:593
> #8  0x00000000800b78a2 in virtio_blk_data_plane_handle_output (vdev=0x80f18008, vq=0x80fa25d0) at /home/cborntra/REPOS/qemu/hw/block/dataplane/virtio-blk.c:192


but here, we are in the aio handler already. how come?

is it possible that data plane stop does not shut down
the running aio handler properly?

> #9  0x00000000800f0bbc in virtio_queue_notify_aio_vq (vq=0x80fa25d0) at /home/cborntra/REPOS/qemu/hw/virtio/virtio.c:1098
> #10 0x00000000800f356c in virtio_queue_host_notifier_aio_read (n=0x80fa2630) at /home/cborntra/REPOS/qemu/hw/virtio/virtio.c:1799
> #11 0x00000000802f1914 in aio_dispatch (ctx=0x80acd5d0) at /home/cborntra/REPOS/qemu/aio-posix.c:327
> #12 0x00000000802f229a in aio_poll (ctx=0x80acd5d0, blocking=true) at /home/cborntra/REPOS/qemu/aio-posix.c:475
> #13 0x0000000080165812 in iothread_run (opaque=0x80acd090) at /home/cborntra/REPOS/qemu/iothread.c:46
> #14 0x000003ffa5407c2c in start_thread () at /lib64/libpthread.so.0
> #15 0x000003ffa530ec9a in thread_start () at /lib64/libc.so.6

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [Qemu-devel] [PATCH 8/9] virtio: merge virtio_queue_aio_set_host_notifier_handler with virtio_queue_set_aio
  2016-04-01 13:19 ` [Qemu-devel] [PATCH 8/9] virtio: merge virtio_queue_aio_set_host_notifier_handler with virtio_queue_set_aio Paolo Bonzini
@ 2016-04-03  9:06   ` Michael S. Tsirkin
  2016-04-03 17:13     ` Paolo Bonzini
  0 siblings, 1 reply; 29+ messages in thread
From: Michael S. Tsirkin @ 2016-04-03  9:06 UTC (permalink / raw
  To: Paolo Bonzini
  Cc: famz, tubo, qemu-devel, borntraeger, stefanha, cornelia.huck

On Fri, Apr 01, 2016 at 03:19:53PM +0200, Paolo Bonzini wrote:
> Eliminating the reentrancy is actually a nice thing that we can do
> with the API that Michael proposed, so let's make it first class.
> This also hides the complex assign/set_handler conventions from
> callers of virtio_queue_aio_set_host_notifier_handler, which in
> fact was always called with assign=true.
> 
> Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>  hw/block/dataplane/virtio-blk.c |  7 +++----
>  hw/scsi/virtio-scsi-dataplane.c | 12 ++++--------
>  hw/virtio/virtio.c              | 19 ++++---------------
>  include/hw/virtio/virtio.h      |  6 ++----
>  4 files changed, 13 insertions(+), 31 deletions(-)
> 
> diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
> index fd06726..74c6d37 100644
> --- a/hw/block/dataplane/virtio-blk.c
> +++ b/hw/block/dataplane/virtio-blk.c
> @@ -237,8 +237,8 @@ void virtio_blk_data_plane_start(VirtIOBlockDataPlane *s)
>  
>      /* Get this show started by hooking up our callbacks */
>      aio_context_acquire(s->ctx);
> -    virtio_set_queue_aio(s->vq, virtio_blk_data_plane_handle_output);
> -    virtio_queue_aio_set_host_notifier_handler(s->vq, s->ctx, true, true);
> +    virtio_queue_aio_set_host_notifier_handler(s->vq, s->ctx,
> +					       virtio_blk_data_plane_handle_output);
>      aio_context_release(s->ctx);
>      return;
>  
> @@ -273,8 +273,7 @@ void virtio_blk_data_plane_stop(VirtIOBlockDataPlane *s)
>      aio_context_acquire(s->ctx);
>  
>      /* Stop notifications for new requests from guest */
> -    virtio_queue_aio_set_host_notifier_handler(s->vq, s->ctx, true, false);
> -    virtio_set_queue_aio(s->vq, NULL);
> +    virtio_queue_aio_set_host_notifier_handler(s->vq, s->ctx, NULL);
>  
>      /* Drain and switch bs back to the QEMU main loop */
>      blk_set_aio_context(s->conf->conf.blk, qemu_get_aio_context());
> diff --git a/hw/scsi/virtio-scsi-dataplane.c b/hw/scsi/virtio-scsi-dataplane.c
> index a497a2c..5494dcc 100644
> --- a/hw/scsi/virtio-scsi-dataplane.c
> +++ b/hw/scsi/virtio-scsi-dataplane.c
> @@ -81,8 +81,7 @@ static int virtio_scsi_vring_init(VirtIOSCSI *s, VirtQueue *vq, int n,
>          return rc;
>      }
>  
> -    virtio_queue_aio_set_host_notifier_handler(vq, s->ctx, true, true);
> -    virtio_set_queue_aio(vq, fn);
> +    virtio_queue_aio_set_host_notifier_handler(vq, s->ctx, fn);
>      return 0;
>  }
>  
> @@ -99,13 +98,10 @@ static void virtio_scsi_clear_aio(VirtIOSCSI *s)
>      VirtIOSCSICommon *vs = VIRTIO_SCSI_COMMON(s);
>      int i;
>  
> -    virtio_queue_aio_set_host_notifier_handler(vs->ctrl_vq, s->ctx, true, false);
> -    virtio_set_queue_aio(vs->ctrl_vq, NULL);
> -    virtio_queue_aio_set_host_notifier_handler(vs->event_vq, s->ctx, true, false);
> -    virtio_set_queue_aio(vs->event_vq, NULL);
> +    virtio_queue_aio_set_host_notifier_handler(vs->ctrl_vq, s->ctx, NULL);
> +    virtio_queue_aio_set_host_notifier_handler(vs->event_vq, s->ctx, NULL);
>      for (i = 0; i < vs->conf.num_queues; i++) {
> -        virtio_queue_aio_set_host_notifier_handler(vs->cmd_vqs[i], s->ctx, true, false);
> -        virtio_set_queue_aio(vs->cmd_vqs[i], NULL);
> +        virtio_queue_aio_set_host_notifier_handler(vs->cmd_vqs[i], s->ctx, NULL);
>      }
>  }
>  
> diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
> index eb04ac0..7fcfc24f 100644
> --- a/hw/virtio/virtio.c
> +++ b/hw/virtio/virtio.c
> @@ -1159,14 +1159,6 @@ VirtQueue *virtio_add_queue(VirtIODevice *vdev, int queue_size,
>      return &vdev->vq[i];
>  }
>  
> -void virtio_set_queue_aio(VirtQueue *vq,
> -                          void (*handle_output)(VirtIODevice *, VirtQueue *))
> -{
> -    assert(vq->handle_output);
> -
> -    vq->handle_aio_output = handle_output;
> -}
> -
>  void virtio_del_queue(VirtIODevice *vdev, int n)
>  {
>      if (n < 0 || n >= VIRTIO_QUEUE_MAX) {
> @@ -1809,19 +1801,16 @@ static void virtio_queue_host_notifier_aio_read(EventNotifier *n)
>  }
>  
>  void virtio_queue_aio_set_host_notifier_handler(VirtQueue *vq, AioContext *ctx,
> -                                                bool assign, bool set_handler)
> +                                                void (*handle_output)(VirtIODevice *,
> +							   VirtQueue *))
>  {
> -    if (assign && set_handler) {
> +    vq->handle_aio_output = handle_output;
> +    if (handle_output) {
>          aio_set_event_notifier(ctx, &vq->host_notifier, true,
>                                 virtio_queue_host_notifier_aio_read);
>      } else {
>          aio_set_event_notifier(ctx, &vq->host_notifier, true, NULL);
>      }
> -    if (!assign) {
> -        /* Test and clear notifier before after disabling event,
> -         * in case poll callback didn't have time to run. */
> -        virtio_queue_host_notifier_aio_read(&vq->host_notifier);
> -    }
>  }
>  
>  static void virtio_queue_host_notifier_read(EventNotifier *n)

This means that caller is now responsible for invoking the
handler after it sets handle_output = NULL.
I think it's cleaner to invoke it internally.


> diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h
> index fa3f93b..6a37065 100644
> --- a/include/hw/virtio/virtio.h
> +++ b/include/hw/virtio/virtio.h
> @@ -142,9 +142,6 @@ VirtQueue *virtio_add_queue(VirtIODevice *vdev, int queue_size,
>                              void (*handle_output)(VirtIODevice *,
>                                                    VirtQueue *));
>  
> -void virtio_set_queue_aio(VirtQueue *vq,
> -                          void (*handle_output)(VirtIODevice *, VirtQueue *));
> -
>  void virtio_del_queue(VirtIODevice *vdev, int n);
>  
>  void *virtqueue_alloc_element(size_t sz, unsigned out_num, unsigned in_num);
> @@ -254,7 +251,8 @@ EventNotifier *virtio_queue_get_host_notifier(VirtQueue *vq);
>  void virtio_queue_set_host_notifier_fd_handler(VirtQueue *vq, bool assign,
>                                                 bool set_handler);
>  void virtio_queue_aio_set_host_notifier_handler(VirtQueue *vq, AioContext *ctx,
> -                                                bool assign, bool set_handler);
> +                                                void (*fn)(VirtIODevice *,
> +                                                           VirtQueue *));
>  void virtio_irq(VirtQueue *vq);
>  VirtQueue *virtio_vector_first_queue(VirtIODevice *vdev, uint16_t vector);
>  VirtQueue *virtio_vector_next_queue(VirtQueue *vq);
> -- 
> 1.8.3.1
> 

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [Qemu-devel] [PATCH v2 0/9] virtio: aio handler API
  2016-04-01 13:19 [Qemu-devel] [PATCH v2 0/9] virtio: aio handler API Paolo Bonzini
                   ` (10 preceding siblings ...)
  2016-04-01 14:06 ` Cornelia Huck
@ 2016-04-03  9:29 ` Michael S. Tsirkin
  2016-04-05  9:13 ` Fam Zheng
  2016-04-05 10:15 ` Christian Borntraeger
  13 siblings, 0 replies; 29+ messages in thread
From: Michael S. Tsirkin @ 2016-04-03  9:29 UTC (permalink / raw
  To: Paolo Bonzini
  Cc: famz, tubo, qemu-devel, borntraeger, stefanha, cornelia.huck

On Fri, Apr 01, 2016 at 03:19:45PM +0200, Paolo Bonzini wrote:
> This version fixes some commit messages, is based on qemu.git master
> and adds Cornelia's Reviewed-by tags.  There are no code changes apart
> from context.

I agree with patches 1-7 for 2.6.

8 and 9 are cleanups and IMO they are best left for after 2.6.

Which is all nice and well, but we need to get to the bottom of
the crashes reported by Christian.

> Michael S. Tsirkin (2):
>   virtio: add aio handler
>   virtio-blk: use aio handler for data plane
> 
> Paolo Bonzini (7):
>   virtio-dataplane: pass assign=true to
>     virtio_queue_aio_set_host_notifier_handler
>   virtio: make virtio_queue_notify_vq static
>   virtio-blk: fix disabled mode
>   virtio-scsi: fix disabled mode
>   virtio-scsi: use aio handler for data plane
>   virtio: merge virtio_queue_aio_set_host_notifier_handler with
>     virtio_queue_set_aio
>   virtio: remove starting/stopping checks
> 
>  hw/block/dataplane/virtio-blk.c | 35 +++++++++++----------
>  hw/block/virtio-blk.c           | 29 ++++++++++-------
>  hw/scsi/virtio-scsi-dataplane.c | 56 +++++++++++++++++++++++----------
>  hw/scsi/virtio-scsi.c           | 69 +++++++++++++++++++++++++++--------------
>  hw/virtio/virtio.c              | 37 ++++++++++++++++------
>  include/hw/virtio/virtio-blk.h  |  3 ++
>  include/hw/virtio/virtio-scsi.h |  9 ++----
>  include/hw/virtio/virtio.h      |  4 +--
>  8 files changed, 158 insertions(+), 84 deletions(-)
> 
> -- 
> 1.8.3.1

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [Qemu-devel] [PATCH 9/9] virtio: remove starting/stopping checks
  2016-04-01 14:30     ` Cornelia Huck
@ 2016-04-03 10:30       ` Michael S. Tsirkin
  0 siblings, 0 replies; 29+ messages in thread
From: Michael S. Tsirkin @ 2016-04-03 10:30 UTC (permalink / raw
  To: Cornelia Huck
  Cc: famz, Christian Borntraeger, qemu-devel, tubo, stefanha,
	Paolo Bonzini

On Fri, Apr 01, 2016 at 04:30:44PM +0200, Cornelia Huck wrote:
> On Fri, 1 Apr 2016 16:14:22 +0200
> Christian Borntraeger <borntraeger@de.ibm.com> wrote:
> 
> > On 04/01/2016 03:19 PM, Paolo Bonzini wrote:
> > > Reentrancy cannot happen while the BQL is being held.
> > > 
> > > Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
> > > Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> > 
> > Reverting this patch makes the segfaults go away.
> > 
> > 
> > 
> > 
> > 
> > > ---
> > >  hw/block/dataplane/virtio-blk.c | 12 ++----------
> > >  hw/scsi/virtio-scsi-dataplane.c |  9 +--------
> > >  include/hw/virtio/virtio-scsi.h |  2 --
> > >  3 files changed, 3 insertions(+), 20 deletions(-)
> 
> :(
> 
> On the one hand, I'm wondering what we're missing here.
> 
> On the other hand, let's just skip the cleanup patches and get the bug
> fixed?

For 2.6, absolutely. I would also drop 8/9.
For debugging (that we can keep up for now), instead of dropping the
checks, let's replace them with asserts.

-- 
MST

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [Qemu-devel] [PATCH 8/9] virtio: merge virtio_queue_aio_set_host_notifier_handler with virtio_queue_set_aio
  2016-04-03  9:06   ` Michael S. Tsirkin
@ 2016-04-03 17:13     ` Paolo Bonzini
  0 siblings, 0 replies; 29+ messages in thread
From: Paolo Bonzini @ 2016-04-03 17:13 UTC (permalink / raw
  To: Michael S. Tsirkin
  Cc: famz, borntraeger, qemu-devel, tubo, stefanha, cornelia.huck



On 03/04/2016 11:06, Michael S. Tsirkin wrote:
> On Fri, Apr 01, 2016 at 03:19:53PM +0200, Paolo Bonzini wrote:
>> Eliminating the reentrancy is actually a nice thing that we can do
>> with the API that Michael proposed, so let's make it first class.
>> This also hides the complex assign/set_handler conventions from
>> callers of virtio_queue_aio_set_host_notifier_handler, which in
>> fact was always called with assign=true.
>>
>> Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
>> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
>> ---
>>  hw/block/dataplane/virtio-blk.c |  7 +++----
>>  hw/scsi/virtio-scsi-dataplane.c | 12 ++++--------
>>  hw/virtio/virtio.c              | 19 ++++---------------
>>  include/hw/virtio/virtio.h      |  6 ++----
>>  4 files changed, 13 insertions(+), 31 deletions(-)
>>
>> diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
>> index fd06726..74c6d37 100644
>> --- a/hw/block/dataplane/virtio-blk.c
>> +++ b/hw/block/dataplane/virtio-blk.c
>> @@ -237,8 +237,8 @@ void virtio_blk_data_plane_start(VirtIOBlockDataPlane *s)
>>  
>>      /* Get this show started by hooking up our callbacks */
>>      aio_context_acquire(s->ctx);
>> -    virtio_set_queue_aio(s->vq, virtio_blk_data_plane_handle_output);
>> -    virtio_queue_aio_set_host_notifier_handler(s->vq, s->ctx, true, true);
>> +    virtio_queue_aio_set_host_notifier_handler(s->vq, s->ctx,
>> +					       virtio_blk_data_plane_handle_output);
>>      aio_context_release(s->ctx);
>>      return;
>>  
>> @@ -273,8 +273,7 @@ void virtio_blk_data_plane_stop(VirtIOBlockDataPlane *s)
>>      aio_context_acquire(s->ctx);
>>  
>>      /* Stop notifications for new requests from guest */
>> -    virtio_queue_aio_set_host_notifier_handler(s->vq, s->ctx, true, false);
>> -    virtio_set_queue_aio(s->vq, NULL);
>> +    virtio_queue_aio_set_host_notifier_handler(s->vq, s->ctx, NULL);
>>  
>>      /* Drain and switch bs back to the QEMU main loop */
>>      blk_set_aio_context(s->conf->conf.blk, qemu_get_aio_context());
>> diff --git a/hw/scsi/virtio-scsi-dataplane.c b/hw/scsi/virtio-scsi-dataplane.c
>> index a497a2c..5494dcc 100644
>> --- a/hw/scsi/virtio-scsi-dataplane.c
>> +++ b/hw/scsi/virtio-scsi-dataplane.c
>> @@ -81,8 +81,7 @@ static int virtio_scsi_vring_init(VirtIOSCSI *s, VirtQueue *vq, int n,
>>          return rc;
>>      }
>>  
>> -    virtio_queue_aio_set_host_notifier_handler(vq, s->ctx, true, true);
>> -    virtio_set_queue_aio(vq, fn);
>> +    virtio_queue_aio_set_host_notifier_handler(vq, s->ctx, fn);
>>      return 0;
>>  }
>>  
>> @@ -99,13 +98,10 @@ static void virtio_scsi_clear_aio(VirtIOSCSI *s)
>>      VirtIOSCSICommon *vs = VIRTIO_SCSI_COMMON(s);
>>      int i;
>>  
>> -    virtio_queue_aio_set_host_notifier_handler(vs->ctrl_vq, s->ctx, true, false);
>> -    virtio_set_queue_aio(vs->ctrl_vq, NULL);
>> -    virtio_queue_aio_set_host_notifier_handler(vs->event_vq, s->ctx, true, false);
>> -    virtio_set_queue_aio(vs->event_vq, NULL);
>> +    virtio_queue_aio_set_host_notifier_handler(vs->ctrl_vq, s->ctx, NULL);
>> +    virtio_queue_aio_set_host_notifier_handler(vs->event_vq, s->ctx, NULL);
>>      for (i = 0; i < vs->conf.num_queues; i++) {
>> -        virtio_queue_aio_set_host_notifier_handler(vs->cmd_vqs[i], s->ctx, true, false);
>> -        virtio_set_queue_aio(vs->cmd_vqs[i], NULL);
>> +        virtio_queue_aio_set_host_notifier_handler(vs->cmd_vqs[i], s->ctx, NULL);
>>      }
>>  }
>>  
>> diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
>> index eb04ac0..7fcfc24f 100644
>> --- a/hw/virtio/virtio.c
>> +++ b/hw/virtio/virtio.c
>> @@ -1159,14 +1159,6 @@ VirtQueue *virtio_add_queue(VirtIODevice *vdev, int queue_size,
>>      return &vdev->vq[i];
>>  }
>>  
>> -void virtio_set_queue_aio(VirtQueue *vq,
>> -                          void (*handle_output)(VirtIODevice *, VirtQueue *))
>> -{
>> -    assert(vq->handle_output);
>> -
>> -    vq->handle_aio_output = handle_output;
>> -}
>> -
>>  void virtio_del_queue(VirtIODevice *vdev, int n)
>>  {
>>      if (n < 0 || n >= VIRTIO_QUEUE_MAX) {
>> @@ -1809,19 +1801,16 @@ static void virtio_queue_host_notifier_aio_read(EventNotifier *n)
>>  }
>>  
>>  void virtio_queue_aio_set_host_notifier_handler(VirtQueue *vq, AioContext *ctx,
>> -                                                bool assign, bool set_handler)
>> +                                                void (*handle_output)(VirtIODevice *,
>> +							   VirtQueue *))
>>  {
>> -    if (assign && set_handler) {
>> +    vq->handle_aio_output = handle_output;
>> +    if (handle_output) {
>>          aio_set_event_notifier(ctx, &vq->host_notifier, true,
>>                                 virtio_queue_host_notifier_aio_read);
>>      } else {
>>          aio_set_event_notifier(ctx, &vq->host_notifier, true, NULL);
>>      }
>> -    if (!assign) {
>> -        /* Test and clear notifier before after disabling event,
>> -         * in case poll callback didn't have time to run. */
>> -        virtio_queue_host_notifier_aio_read(&vq->host_notifier);
>> -    }
>>  }
>>  
>>  static void virtio_queue_host_notifier_read(EventNotifier *n)
> 
> This means that caller is now responsible for invoking the
> handler after it sets handle_output = NULL.
> I think it's cleaner to invoke it internally.

No, the caller is not responsible for that.  Ultimately, it's the virtio
core that will be responsible for setting up the handler again in the
main I/O thread, and that's how it will be called if it matters.  This
will happen when the API is cleaned up further by Cornelia.

Note that this patch doesn't change the semantics, it's patch 1 that
changes assign=false to assign=true in the call to
virtio_queue_aio_set_host_notifier_handler.  Without that change,
virtio_queue_host_notifier_aio_read can run the virtqueue handler in the
dataplane thread concurrently with the same handler in the main I/O
thread.  I consider that a fix for a latent/unknown bug, since we are
having so many headaches with reentrancy.

Of course I'm okay with dropping 9/9 (that's why I put it last), and
probably it was not 2.6 material even if it worked.

Paolo

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [Qemu-devel] [PATCH 7/9] virtio-scsi: use aio handler for data plane
  2016-04-01 13:19 ` [Qemu-devel] [PATCH 7/9] virtio-scsi: " Paolo Bonzini
@ 2016-04-05  9:06   ` Fam Zheng
  0 siblings, 0 replies; 29+ messages in thread
From: Fam Zheng @ 2016-04-05  9:06 UTC (permalink / raw
  To: Paolo Bonzini; +Cc: tubo, mst, qemu-devel, borntraeger, stefanha, cornelia.huck

On Fri, 04/01 15:19, Paolo Bonzini wrote:
> In addition to handling IO in vcpu thread and in io thread, dataplane
> introduces yet another mode: handling it by aio.
> 
> This reuses the same handler as previous modes, which triggers races as
> these were not designed to be reentrant.
> 
> Use a separate handler just for aio, and disable regular handlers when
> dataplane is active.
> 
> Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>  hw/scsi/virtio-scsi-dataplane.c | 43 ++++++++++++++++++++++++---
>  hw/scsi/virtio-scsi.c           | 65 ++++++++++++++++++++++++++++-------------
>  include/hw/virtio/virtio-scsi.h |  6 ++--
>  3 files changed, 86 insertions(+), 28 deletions(-)
> 
> diff --git a/hw/scsi/virtio-scsi-dataplane.c b/hw/scsi/virtio-scsi-dataplane.c
> index 21d5bfd..a497a2c 100644
> --- a/hw/scsi/virtio-scsi-dataplane.c
> +++ b/hw/scsi/virtio-scsi-dataplane.c
> @@ -38,7 +38,35 @@ void virtio_scsi_set_iothread(VirtIOSCSI *s, IOThread *iothread)
>      }
>  }
>  
> -static int virtio_scsi_vring_init(VirtIOSCSI *s, VirtQueue *vq, int n)
> +static void virtio_scsi_data_plane_handle_cmd(VirtIODevice *vdev,
> +                                              VirtQueue *vq)
> +{
> +    VirtIOSCSI *s = (VirtIOSCSI *)vdev;
> +
> +    assert(s->ctx && s->dataplane_started);
> +    virtio_scsi_handle_cmd_vq(s, vq);
> +}
> +
> +static void virtio_scsi_data_plane_handle_ctrl(VirtIODevice *vdev,
> +                                               VirtQueue *vq)
> +{
> +    VirtIOSCSI *s = VIRTIO_SCSI(vdev);
> +
> +    assert(s->ctx && s->dataplane_started);
> +    virtio_scsi_handle_ctrl_vq(s, vq);
> +}
> +
> +static void virtio_scsi_data_plane_handle_event(VirtIODevice *vdev,
> +                                                VirtQueue *vq)
> +{
> +    VirtIOSCSI *s = VIRTIO_SCSI(vdev);
> +
> +    assert(s->ctx && s->dataplane_started);
> +    virtio_scsi_handle_event_vq(s, vq);
> +}
> +
> +static int virtio_scsi_vring_init(VirtIOSCSI *s, VirtQueue *vq, int n,
> +				  void (*fn)(VirtIODevice *vdev, VirtQueue *vq))

Are tabs used by mistake? Three more occurrences below...

>  {
>      BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(s)));
>      VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(qbus);
> @@ -54,6 +82,7 @@ static int virtio_scsi_vring_init(VirtIOSCSI *s, VirtQueue *vq, int n)
>      }
>  
>      virtio_queue_aio_set_host_notifier_handler(vq, s->ctx, true, true);
> +    virtio_set_queue_aio(vq, fn);
>      return 0;
>  }
>  
>  
> @@ -104,16 +136,19 @@ void virtio_scsi_dataplane_start(VirtIOSCSI *s)
>      }
>  
>      aio_context_acquire(s->ctx);
> -    rc = virtio_scsi_vring_init(s, vs->ctrl_vq, 0);
> +    rc = virtio_scsi_vring_init(s, vs->ctrl_vq, 0,
> +				virtio_scsi_data_plane_handle_ctrl);

Tabs.

>      if (rc) {
>          goto fail_vrings;
>      }
> -    rc = virtio_scsi_vring_init(s, vs->event_vq, 1);
> +    rc = virtio_scsi_vring_init(s, vs->event_vq, 1,
> +				virtio_scsi_data_plane_handle_event);

Tabs.

>      if (rc) {
>          goto fail_vrings;
>      }
>      for (i = 0; i < vs->conf.num_queues; i++) {
> -        rc = virtio_scsi_vring_init(s, vs->cmd_vqs[i], i + 2);
> +        rc = virtio_scsi_vring_init(s, vs->cmd_vqs[i], i + 2,
> +				    virtio_scsi_data_plane_handle_cmd);

Tabs.

>          if (rc) {
>              goto fail_vrings;
>          }
> diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c
> index 38f1e2c..30415c6 100644
> --- a/hw/scsi/virtio-scsi.c
> +++ b/hw/scsi/virtio-scsi.c
> @@ -374,7 +374,7 @@ fail:
>      return ret;
>  }
>  
> -void virtio_scsi_handle_ctrl_req(VirtIOSCSI *s, VirtIOSCSIReq *req)
> +static void virtio_scsi_handle_ctrl_req(VirtIOSCSI *s, VirtIOSCSIReq *req)
>  {
>      VirtIODevice *vdev = (VirtIODevice *)s;
>      uint32_t type;
> @@ -412,20 +412,28 @@ void virtio_scsi_handle_ctrl_req(VirtIOSCSI *s, VirtIOSCSIReq *req)
>      }
>  }
>  
> -static void virtio_scsi_handle_ctrl(VirtIODevice *vdev, VirtQueue *vq)
> +void virtio_scsi_handle_ctrl_vq(VirtIOSCSI *s, VirtQueue *vq)
>  {
> -    VirtIOSCSI *s = (VirtIOSCSI *)vdev;
>      VirtIOSCSIReq *req;
>  
> -    if (s->ctx && !s->dataplane_started) {
> -        virtio_scsi_dataplane_start(s);
> -        return;
> -    }
>      while ((req = virtio_scsi_pop_req(s, vq))) {
>          virtio_scsi_handle_ctrl_req(s, req);
>      }
>  }
>  
> +static void virtio_scsi_handle_ctrl(VirtIODevice *vdev, VirtQueue *vq)
> +{
> +    VirtIOSCSI *s = (VirtIOSCSI *)vdev;
> +
> +    if (s->ctx) {
> +        virtio_scsi_dataplane_start(s);
> +        if (!s->dataplane_fenced) {
> +            return;
> +        }

It could be more readable if virtio_scsi_dataplane_start had a return value
indicating success.

> +    }
> +    virtio_scsi_handle_ctrl_vq(s, vq);
> +}
> +
>  static void virtio_scsi_complete_cmd_req(VirtIOSCSIReq *req)
>  {
>      /* Sense data is not in req->resp and is copied separately
> @@ -508,7 +516,7 @@ static void virtio_scsi_fail_cmd_req(VirtIOSCSIReq *req)
>      virtio_scsi_complete_cmd_req(req);
>  }
>  
> -bool virtio_scsi_handle_cmd_req_prepare(VirtIOSCSI *s, VirtIOSCSIReq *req)
> +static bool virtio_scsi_handle_cmd_req_prepare(VirtIOSCSI *s, VirtIOSCSIReq *req)

Isn't this line too long now?

>  {
>      VirtIOSCSICommon *vs = &s->parent_obj;
>      SCSIDevice *d;
> @@ -550,7 +558,7 @@ bool virtio_scsi_handle_cmd_req_prepare(VirtIOSCSI *s, VirtIOSCSIReq *req)
>      return true;
>  }
>  
> -void virtio_scsi_handle_cmd_req_submit(VirtIOSCSI *s, VirtIOSCSIReq *req)
> +static void virtio_scsi_handle_cmd_req_submit(VirtIOSCSI *s, VirtIOSCSIReq *req)
>  {
>      SCSIRequest *sreq = req->sreq;
>      if (scsi_req_enqueue(sreq)) {
> @@ -560,17 +568,11 @@ void virtio_scsi_handle_cmd_req_submit(VirtIOSCSI *s, VirtIOSCSIReq *req)
>      scsi_req_unref(sreq);
>  }
>  
> -static void virtio_scsi_handle_cmd(VirtIODevice *vdev, VirtQueue *vq)
> +void virtio_scsi_handle_cmd_vq(VirtIOSCSI *s, VirtQueue *vq)
>  {
> -    /* use non-QOM casts in the data path */
> -    VirtIOSCSI *s = (VirtIOSCSI *)vdev;
>      VirtIOSCSIReq *req, *next;
>      QTAILQ_HEAD(, VirtIOSCSIReq) reqs = QTAILQ_HEAD_INITIALIZER(reqs);
>  
> -    if (s->ctx && !s->dataplane_started) {
> -        virtio_scsi_dataplane_start(s);
> -        return;
> -    }
>      while ((req = virtio_scsi_pop_req(s, vq))) {
>          if (virtio_scsi_handle_cmd_req_prepare(s, req)) {
>              QTAILQ_INSERT_TAIL(&reqs, req, next);
> @@ -582,6 +584,20 @@ static void virtio_scsi_handle_cmd(VirtIODevice *vdev, VirtQueue *vq)
>      }
>  }
>  
> +static void virtio_scsi_handle_cmd(VirtIODevice *vdev, VirtQueue *vq)
> +{
> +    /* use non-QOM casts in the data path */
> +    VirtIOSCSI *s = (VirtIOSCSI *)vdev;
> +
> +    if (s->ctx) {
> +        virtio_scsi_dataplane_start(s);
> +        if (!s->dataplane_fenced) {
> +            return;
> +        }
> +    }
> +    virtio_scsi_handle_cmd_vq(s, vq);
> +}
> +
>  static void virtio_scsi_get_config(VirtIODevice *vdev,
>                                     uint8_t *config)
>  {
> @@ -725,17 +741,24 @@ out:
>      }
>  }
>  
> +void virtio_scsi_handle_event_vq(VirtIOSCSI *s, VirtQueue *vq)
> +{
> +    if (s->events_dropped) {
> +        virtio_scsi_push_event(s, NULL, VIRTIO_SCSI_T_NO_EVENT, 0);
> +    }
> +}
> +
>  static void virtio_scsi_handle_event(VirtIODevice *vdev, VirtQueue *vq)
>  {
>      VirtIOSCSI *s = VIRTIO_SCSI(vdev);
>  
> -    if (s->ctx && !s->dataplane_started) {
> +    if (s->ctx) {
>          virtio_scsi_dataplane_start(s);
> -        return;
> -    }
> -    if (s->events_dropped) {
> -        virtio_scsi_push_event(s, NULL, VIRTIO_SCSI_T_NO_EVENT, 0);
> +        if (!s->dataplane_fenced) {
> +            return;
> +        }
>      }
> +    virtio_scsi_handle_event_vq(s, vq);
>  }
>  
>  static void virtio_scsi_change(SCSIBus *bus, SCSIDevice *dev, SCSISense sense)
> diff --git a/include/hw/virtio/virtio-scsi.h b/include/hw/virtio/virtio-scsi.h
> index eef4e95..ba2f5ce 100644
> --- a/include/hw/virtio/virtio-scsi.h
> +++ b/include/hw/virtio/virtio-scsi.h
> @@ -139,9 +139,9 @@ void virtio_scsi_common_realize(DeviceState *dev, Error **errp,
>                                  HandleOutput cmd);
>  
>  void virtio_scsi_common_unrealize(DeviceState *dev, Error **errp);
> -void virtio_scsi_handle_ctrl_req(VirtIOSCSI *s, VirtIOSCSIReq *req);
> -bool virtio_scsi_handle_cmd_req_prepare(VirtIOSCSI *s, VirtIOSCSIReq *req);
> -void virtio_scsi_handle_cmd_req_submit(VirtIOSCSI *s, VirtIOSCSIReq *req);
> +void virtio_scsi_handle_event_vq(VirtIOSCSI *s, VirtQueue *vq);
> +void virtio_scsi_handle_cmd_vq(VirtIOSCSI *s, VirtQueue *vq);
> +void virtio_scsi_handle_ctrl_vq(VirtIOSCSI *s, VirtQueue *vq);
>  void virtio_scsi_init_req(VirtIOSCSI *s, VirtQueue *vq, VirtIOSCSIReq *req);
>  void virtio_scsi_free_req(VirtIOSCSIReq *req);
>  void virtio_scsi_push_event(VirtIOSCSI *s, SCSIDevice *dev,
> -- 
> 1.8.3.1
> 
> 

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [Qemu-devel] [PATCH v2 0/9] virtio: aio handler API
  2016-04-01 13:19 [Qemu-devel] [PATCH v2 0/9] virtio: aio handler API Paolo Bonzini
                   ` (11 preceding siblings ...)
  2016-04-03  9:29 ` Michael S. Tsirkin
@ 2016-04-05  9:13 ` Fam Zheng
  2016-04-05 10:15 ` Christian Borntraeger
  13 siblings, 0 replies; 29+ messages in thread
From: Fam Zheng @ 2016-04-05  9:13 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: tubo, mst, qemu-devel, borntraeger, stefanha, cornelia.huck

On Fri, 04/01 15:19, Paolo Bonzini wrote:
> This version fixes some commit messages, is based on qemu.git master
> and adds Cornelia's Reviewed-by tags.  There are no code changes apart
> from context.

Apart from the nit-picks in patch 7, for patches 1-7:

Reviewed-by: Fam Zheng <famz@redhat.com>

> 
> Michael S. Tsirkin (2):
>   virtio: add aio handler
>   virtio-blk: use aio handler for data plane
> 
> Paolo Bonzini (7):
>   virtio-dataplane: pass assign=true to
>     virtio_queue_aio_set_host_notifier_handler
>   virtio: make virtio_queue_notify_vq static
>   virtio-blk: fix disabled mode
>   virtio-scsi: fix disabled mode
>   virtio-scsi: use aio handler for data plane
>   virtio: merge virtio_queue_aio_set_host_notifier_handler with
>     virtio_queue_set_aio
>   virtio: remove starting/stopping checks
> 
>  hw/block/dataplane/virtio-blk.c | 35 +++++++++++----------
>  hw/block/virtio-blk.c           | 29 ++++++++++-------
>  hw/scsi/virtio-scsi-dataplane.c | 56 +++++++++++++++++++++++----------
>  hw/scsi/virtio-scsi.c           | 69 +++++++++++++++++++++++++++--------------
>  hw/virtio/virtio.c              | 37 ++++++++++++++++------
>  include/hw/virtio/virtio-blk.h  |  3 ++
>  include/hw/virtio/virtio-scsi.h |  9 ++----
>  include/hw/virtio/virtio.h      |  4 +--
>  8 files changed, 158 insertions(+), 84 deletions(-)
> 
> -- 
> 1.8.3.1
> 


* Re: [Qemu-devel] [PATCH v2 0/9] virtio: aio handler API
  2016-04-01 13:19 [Qemu-devel] [PATCH v2 0/9] virtio: aio handler API Paolo Bonzini
                   ` (12 preceding siblings ...)
  2016-04-05  9:13 ` Fam Zheng
@ 2016-04-05 10:15 ` Christian Borntraeger
  13 siblings, 0 replies; 29+ messages in thread
From: Christian Borntraeger @ 2016-04-05 10:15 UTC (permalink / raw)
  To: Paolo Bonzini, qemu-devel; +Cc: cornelia.huck, tubo, famz, stefanha, mst

On 04/01/2016 03:19 PM, Paolo Bonzini wrote:
> This version fixes some commit messages, is based on qemu.git master
> and adds Cornelia's Reviewed-by tags.  There are no code changes apart
> from context.
> 
> Michael S. Tsirkin (2):
>   virtio: add aio handler
>   virtio-blk: use aio handler for data plane
> 
> Paolo Bonzini (7):
>   virtio-dataplane: pass assign=true to
>     virtio_queue_aio_set_host_notifier_handler
>   virtio: make virtio_queue_notify_vq static
>   virtio-blk: fix disabled mode
>   virtio-scsi: fix disabled mode
>   virtio-scsi: use aio handler for data plane
>   virtio: merge virtio_queue_aio_set_host_notifier_handler with
>     virtio_queue_set_aio
>   virtio: remove starting/stopping checks
> 
>  hw/block/dataplane/virtio-blk.c | 35 +++++++++++----------
>  hw/block/virtio-blk.c           | 29 ++++++++++-------
>  hw/scsi/virtio-scsi-dataplane.c | 56 +++++++++++++++++++++++----------
>  hw/scsi/virtio-scsi.c           | 69 +++++++++++++++++++++++++++--------------
>  hw/virtio/virtio.c              | 37 ++++++++++++++++------
>  include/hw/virtio/virtio-blk.h  |  3 ++
>  include/hw/virtio/virtio-scsi.h |  9 ++----
>  include/hw/virtio/virtio.h      |  4 +--
>  8 files changed, 158 insertions(+), 84 deletions(-)

The problem goes away with patches 1-8 according to our test team. (I had added
patch 8 before you decided to defer it.) So I think we should push this patch set soon.


* Re: [Qemu-devel] [PATCH 1/9] virtio-dataplane: pass assign=true to virtio_queue_aio_set_host_notifier_handler
  2016-04-01 13:19 ` [Qemu-devel] [PATCH 1/9] virtio-dataplane: pass assign=true to virtio_queue_aio_set_host_notifier_handler Paolo Bonzini
  2016-04-01 14:02   ` Cornelia Huck
@ 2016-04-05 10:38   ` Michael S. Tsirkin
  2016-04-05 10:42     ` Paolo Bonzini
  1 sibling, 1 reply; 29+ messages in thread
From: Michael S. Tsirkin @ 2016-04-05 10:38 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: famz, tubo, qemu-devel, dgilbert, borntraeger, stefanha,
	cornelia.huck

On Fri, Apr 01, 2016 at 03:19:46PM +0200, Paolo Bonzini wrote:
> There is no need to run the handler one last time; the device is
> being reset and it is okay to drop requests that are pending in
> the virtqueue.  Even in the case of migration, the requests would
> be processed when ioeventfd is re-enabled on the destination
> side: virtio_queue_set_host_notifier_fd_handler will call
> virtio_queue_host_notifier_read, which will start dataplane;

I didn't get this part: here's virtio_queue_host_notifier_read:

    VirtQueue *vq = container_of(n, VirtQueue, host_notifier);
    if (event_notifier_test_and_clear(n)) {
        virtio_queue_notify_vq(vq);
    }

The event notifier is initially clear on migration, so how can we
be sure virtio_queue_notify_vq is invoked?


> the host
> notifier is then connected to the I/O thread and event_notifier_set is
> called to start processing it.
> 
> By omitting this call, we dodge a possible cause of races between the
> dataplane thread on one side and the main/vCPU threads on the other.
> 
> The virtio_queue_aio_set_host_notifier_handler function is now
> only ever called with assign=true, but for now this is left as is
> because the function parameters will change soon anyway.
> 
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>  hw/block/dataplane/virtio-blk.c | 2 +-
>  hw/scsi/virtio-scsi-dataplane.c | 6 +++---
>  2 files changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
> index e666dd4..fddd3ab 100644
> --- a/hw/block/dataplane/virtio-blk.c
> +++ b/hw/block/dataplane/virtio-blk.c
> @@ -262,7 +262,7 @@ void virtio_blk_data_plane_stop(VirtIOBlockDataPlane *s)
>      aio_context_acquire(s->ctx);
>  
>      /* Stop notifications for new requests from guest */
> -    virtio_queue_aio_set_host_notifier_handler(s->vq, s->ctx, false, false);
> +    virtio_queue_aio_set_host_notifier_handler(s->vq, s->ctx, true, false);
>  
>      /* Drain and switch bs back to the QEMU main loop */
>      blk_set_aio_context(s->conf->conf.blk, qemu_get_aio_context());
> diff --git a/hw/scsi/virtio-scsi-dataplane.c b/hw/scsi/virtio-scsi-dataplane.c
> index b44ac5d..21d5bfd 100644
> --- a/hw/scsi/virtio-scsi-dataplane.c
> +++ b/hw/scsi/virtio-scsi-dataplane.c
> @@ -70,10 +70,10 @@ static void virtio_scsi_clear_aio(VirtIOSCSI *s)
>      VirtIOSCSICommon *vs = VIRTIO_SCSI_COMMON(s);
>      int i;
>  
> -    virtio_queue_aio_set_host_notifier_handler(vs->ctrl_vq, s->ctx, false, false);
> -    virtio_queue_aio_set_host_notifier_handler(vs->event_vq, s->ctx, false, false);
> +    virtio_queue_aio_set_host_notifier_handler(vs->ctrl_vq, s->ctx, true, false);
> +    virtio_queue_aio_set_host_notifier_handler(vs->event_vq, s->ctx, true, false);
>      for (i = 0; i < vs->conf.num_queues; i++) {
> -        virtio_queue_aio_set_host_notifier_handler(vs->cmd_vqs[i], s->ctx, false, false);
> +        virtio_queue_aio_set_host_notifier_handler(vs->cmd_vqs[i], s->ctx, true, false);
>      }
>  }
>  
> -- 
> 1.8.3.1
> 


* Re: [Qemu-devel] [PATCH 1/9] virtio-dataplane: pass assign=true to virtio_queue_aio_set_host_notifier_handler
  2016-04-05 10:38   ` Michael S. Tsirkin
@ 2016-04-05 10:42     ` Paolo Bonzini
  2016-04-05 10:59       ` Paolo Bonzini
  0 siblings, 1 reply; 29+ messages in thread
From: Paolo Bonzini @ 2016-04-05 10:42 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: famz, borntraeger, qemu-devel, dgilbert, tubo, stefanha,
	cornelia.huck



On 05/04/2016 12:38, Michael S. Tsirkin wrote:
>> > There is no need to run the handler one last time; the device is
>> > being reset and it is okay to drop requests that are pending in
>> > the virtqueue.  Even in the case of migration, the requests would
>> > be processed when ioeventfd is re-enabled on the destination
>> > side: virtio_queue_set_host_notifier_fd_handler will call
>> > virtio_queue_host_notifier_read, which will start dataplane;
> I didn't get this part: here's virtio_queue_host_notifier_read:
> 
>     VirtQueue *vq = container_of(n, VirtQueue, host_notifier);
>     if (event_notifier_test_and_clear(n)) {
>         virtio_queue_notify_vq(vq);
>     }
> 
> event notifier is initially clear on migration, how can we
> be sure virtio_queue_notify_vq is invoked?

I think you're right about this.

Dataplane has an event_notifier_set; we should move it to
virtio_queue_aio_set_host_notifier_handler and add it to
virtio_queue_set_host_notifier_handler.  I'll send v3 today.

Paolo


* Re: [Qemu-devel] [PATCH 1/9] virtio-dataplane: pass assign=true to virtio_queue_aio_set_host_notifier_handler
  2016-04-05 10:42     ` Paolo Bonzini
@ 2016-04-05 10:59       ` Paolo Bonzini
  0 siblings, 0 replies; 29+ messages in thread
From: Paolo Bonzini @ 2016-04-05 10:59 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: famz, borntraeger, qemu-devel, dgilbert, tubo, stefanha,
	cornelia.huck



On 05/04/2016 12:42, Paolo Bonzini wrote:
> I think you're right about this.
> 
> Dataplane has an event_notifier_set; we should move it to
> virtio_queue_aio_set_host_notifier_handler and add it to
> virtio_queue_set_host_notifier_handler.  I'll send v3 today.

As discussed on IRC, even that wouldn't be safe.  The good news is that
this patch is not necessary, because
virtio_queue_aio_set_host_notifier_handler(s->vq, s->ctx, false, false)
is called inside aio_context_acquire.

Paolo


end of thread, other threads: [~2016-04-05 10:59 UTC | newest]

Thread overview: 29+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-04-01 13:19 [Qemu-devel] [PATCH v2 0/9] virtio: aio handler API Paolo Bonzini
2016-04-01 13:19 ` [Qemu-devel] [PATCH 1/9] virtio-dataplane: pass assign=true to virtio_queue_aio_set_host_notifier_handler Paolo Bonzini
2016-04-01 14:02   ` Cornelia Huck
2016-04-05 10:38   ` Michael S. Tsirkin
2016-04-05 10:42     ` Paolo Bonzini
2016-04-05 10:59       ` Paolo Bonzini
2016-04-01 13:19 ` [Qemu-devel] [PATCH 2/9] virtio: make virtio_queue_notify_vq static Paolo Bonzini
2016-04-01 13:19 ` [Qemu-devel] [PATCH 3/9] virtio-blk: fix disabled mode Paolo Bonzini
2016-04-01 13:19 ` [Qemu-devel] [PATCH 4/9] virtio-scsi: " Paolo Bonzini
2016-04-01 13:19 ` [Qemu-devel] [PATCH 5/9] virtio: add aio handler Paolo Bonzini
2016-04-01 14:04   ` Cornelia Huck
2016-04-01 13:19 ` [Qemu-devel] [PATCH 6/9] virtio-blk: use aio handler for data plane Paolo Bonzini
2016-04-01 14:05   ` Cornelia Huck
2016-04-01 13:19 ` [Qemu-devel] [PATCH 7/9] virtio-scsi: " Paolo Bonzini
2016-04-05  9:06   ` Fam Zheng
2016-04-01 13:19 ` [Qemu-devel] [PATCH 8/9] virtio: merge virtio_queue_aio_set_host_notifier_handler with virtio_queue_set_aio Paolo Bonzini
2016-04-03  9:06   ` Michael S. Tsirkin
2016-04-03 17:13     ` Paolo Bonzini
2016-04-01 13:19 ` [Qemu-devel] [PATCH 9/9] virtio: remove starting/stopping checks Paolo Bonzini
2016-04-01 14:14   ` Christian Borntraeger
2016-04-01 14:30     ` Cornelia Huck
2016-04-03 10:30       ` Michael S. Tsirkin
2016-04-01 14:02 ` [Qemu-devel] [PATCH v2 0/9] virtio: aio handler API Christian Borntraeger
2016-04-01 15:16   ` Christian Borntraeger
2016-04-03  9:00     ` Michael S. Tsirkin
2016-04-01 14:06 ` Cornelia Huck
2016-04-03  9:29 ` Michael S. Tsirkin
2016-04-05  9:13 ` Fam Zheng
2016-04-05 10:15 ` Christian Borntraeger
