From: Jason Wang <jasowang@redhat.com>
To: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Tobias Huschle <huschle@linux.ibm.com>,
Abel Wu <wuyun.abel@bytedance.com>,
Peter Zijlstra <peterz@infradead.org>,
Linux Kernel <linux-kernel@vger.kernel.org>,
kvm@vger.kernel.org, virtualization@lists.linux.dev,
netdev@vger.kernel.org
Subject: Re: EEVDF/vhost regression (bisected to 86bfbb7ce4f6 sched/fair: Add lag based placement)
Date: Mon, 11 Dec 2023 15:26:46 +0800
Message-ID: <CACGkMEuSGT-e-i-8U7hum-N_xEnsEKL+_07Mipf6gMLFFhj2Aw@mail.gmail.com>
In-Reply-To: <20231209053443-mutt-send-email-mst@kernel.org>
On Sat, Dec 9, 2023 at 6:42 PM Michael S. Tsirkin <mst@redhat.com> wrote:
>
> On Fri, Dec 08, 2023 at 12:41:38PM +0100, Tobias Huschle wrote:
> > On Fri, Dec 08, 2023 at 05:31:18AM -0500, Michael S. Tsirkin wrote:
> > > On Fri, Dec 08, 2023 at 10:24:16AM +0100, Tobias Huschle wrote:
> > > > On Thu, Dec 07, 2023 at 01:48:40AM -0500, Michael S. Tsirkin wrote:
> > > > > On Thu, Dec 07, 2023 at 07:22:12AM +0100, Tobias Huschle wrote:
> > > > > > 3. vhost looping endlessly, waiting for kworker to be scheduled
> > > > > >
> > > > > > I dug a little deeper on what the vhost is doing. I'm not an expert on
> > > > > > virtio whatsoever, so these are just educated guesses that maybe
> > > > > > someone can verify/correct. Please bear with me probably messing up
> > > > > > the terminology.
> > > > > >
> > > > > > - vhost is looping through available queues.
> > > > > > - vhost wants to wake up a kworker to process a found queue.
> > > > > > - kworker does something with that queue and terminates quickly.
> > > > > >
> > > > > > What I found by throwing in some very noisy trace statements was that,
> > > > > > if the kworker is not woken up, the vhost just keeps looping across
> > > > > > all available queues (and seems to repeat itself). So it essentially
> > > > > > relies on the scheduler to schedule the kworker fast enough. Otherwise
> > > > > > it will just keep on looping until it is migrated off the CPU.
> > > > >
> > > > >
> > > > > Normally it takes the buffers off the queue and is done with them.
> > > > > I am guessing that at the same time the guest is running on some other
> > > > > CPU and keeps adding available buffers?
> > > > >
> > > >
> > > > It seems to do just that; there are multiple other vhost instances
> > > > involved which might keep filling up those queues.
> > > >
> > >
> > > No, vhost only ever drains queues; it is the guest that fills them.
> > >
> > > > Unfortunately, this makes the problematic vhost instance stay on
> > > > the CPU and prevents said kworker from getting scheduled. The kworker
> > > > is explicitly woken up by vhost, so vhost wants it to do something.
It looks to me like vhost doesn't use a workqueue but rather its own dedicated worker.
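For reference, queueing a work item just pushes it onto a llist and wakes
the dedicated worker task, roughly like this (simplified sketch of
vhost_work_queue() in drivers/vhost/vhost.c; details vary across kernel
versions, so treat this as illustrative, not verbatim):

void vhost_work_queue(struct vhost_dev *dev, struct vhost_work *work)
{
        if (!test_and_set_bit(VHOST_WORK_QUEUED, &work->flags)) {
                /* test_and_set_bit() implies a barrier, so the worker
                 * observes the flag before it sees the new list node */
                llist_add(&work->node, &dev->worker->work_list);
                wake_up_process(dev->worker->vtsk->task); /* wake only */
        }
}

So the worker is only woken, never run synchronously; vhost depends
entirely on the scheduler to get it onto a CPU.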
> > > >
> > > > At this point it seems that there is an assumption about the scheduler
> > > > in place which is no longer fulfilled by EEVDF. From the discussion so
> > > > far, it seems like EEVDF does what is intended to do.
> > > >
> > > > Shouldn't there be a more explicit mechanism in use that allows the
> > > > kworker to be scheduled in favor of the vhost?
Vhost does a bunch of copy_from_user() calls, which should trigger
__might_fault() and thus __might_sleep() in most cases.
> > > >
> > > > It is also concerning that vhost seemingly cannot be preempted by
> > > > the scheduler while executing that loop.
> > >
> > >
> > > Which loop is that, exactly?
> >
> > The loop continuously passes through translate_desc() in drivers/vhost/vhost.c.
> > That's where I put the trace statements.
> >
> > The overall sequence seems to be (top to bottom):
> >
> > handle_rx
> > get_rx_bufs
> > vhost_get_vq_desc
> > vhost_get_avail_head
> > vhost_get_avail
> > __vhost_get_user_slow
> > translate_desc << trace statement in here
> > vhost_iotlb_itree_first
>
> I wonder why you keep missing the cache and re-translating.
> Is pr_debug enabled for you? If not, could you enable it and check
> whether it outputs anything?
> Or you can tweak:
>
> #define vq_err(vq, fmt, ...) do {                                  \
>         pr_debug(pr_fmt(fmt), ##__VA_ARGS__);                      \
>         if ((vq)->error_ctx)                                       \
>                 eventfd_signal((vq)->error_ctx, 1);                \
> } while (0)
>
> to do pr_err if you prefer.
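I.e. something like this (untested sketch of the suggested tweak):

#define vq_err(vq, fmt, ...) do {                                  \
        pr_err(pr_fmt(fmt), ##__VA_ARGS__); /* always loud */      \
        if ((vq)->error_ctx)                                       \
                eventfd_signal((vq)->error_ctx, 1);                \
} while (0)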
>
> > These functions show up as having increased overhead in perf.
> >
> > There are multiple loops going on in there.
> > Again the disclaimer though, I'm not familiar with that code at all.
>
>
> So there's a limit there: vhost_exceeds_weight should requeue work:
>
> } while (likely(!vhost_exceeds_weight(vq, ++recv_pkts, total_len)));
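(For reference, exceeding either limit requeues the poll work instead of
continuing inline; roughly, from drivers/vhost/vhost.c, lightly trimmed:)

bool vhost_exceeds_weight(struct vhost_virtqueue *vq, int pkts, int total_len)
{
        struct vhost_dev *dev = vq->dev;

        if ((dev->byte_weight && total_len >= dev->byte_weight) ||
            pkts >= dev->weight) {
                vhost_poll_queue(&vq->poll);    /* requeue, stop looping */
                return true;
        }

        return false;
}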
>
> then we invoke the scheduler (via cond_resched()) each time before re-executing it:
>
>
> static bool vhost_worker(void *data)
> {
>         struct vhost_worker *worker = data;
>         struct vhost_work *work, *work_next;
>         struct llist_node *node;
>
>         node = llist_del_all(&worker->work_list);
>         if (node) {
>                 __set_current_state(TASK_RUNNING);
>
>                 node = llist_reverse_order(node);
>                 /* make sure flag is seen after deletion */
>                 smp_wmb();
>                 llist_for_each_entry_safe(work, work_next, node, node) {
>                         clear_bit(VHOST_WORK_QUEUED, &work->flags);
>                         kcov_remote_start_common(worker->kcov_handle);
>                         work->fn(work);
>                         kcov_remote_stop();
>                         cond_resched();
>                 }
>         }
>
>         return !!node;
> }
>
> These are the byte and packet limits:
>
> /* Max number of bytes transferred before requeueing the job.
> * Using this limit prevents one virtqueue from starving others. */
> #define VHOST_NET_WEIGHT 0x80000
>
> /* Max number of packets transferred before requeueing the job.
> * Using this limit prevents one virtqueue from starving others with small
> * pkts.
> */
> #define VHOST_NET_PKT_WEIGHT 256
>
>
> Try reducing the VHOST_NET_WEIGHT limit and see if that improves things any?
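E.g. something like this; 0x20000 is an arbitrary trial value, picked
only to make the handlers requeue four times as often:

/* experiment only, not a proposed change: shrink the byte budget so
 * handle_rx()/handle_tx() bail out and requeue sooner */
#define VHOST_NET_WEIGHT 0x20000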
Or a dirty hack like adding a cond_resched() in translate_desc().
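Something like this, at the top of the main translation loop (untested,
debug only; the surrounding code is elided):

        /* dirty hack: yield once per iteration so the woken worker can
         * actually get on the CPU; not a real fix */
        while ((u64)len > s) {
                cond_resched();
                /* ... existing lookup via vhost_iotlb_itree_first() ... */
        }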
Thanks
>
> --
> MST
>