From: SeongJae Park <sj@kernel.org>
Cc: SeongJae Park <sj@kernel.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	David Hildenbrand <david@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Xuan Zhuo <xuanzhuo@linux.alibaba.com>,
	virtualization@lists.linux.dev, linux-kernel@vger.kernel.org,
	damon@lists.linux.dev, linux-mm@kvack.org
Subject: [RFC IDEA v2 6/6] drivers/virtio/virtio_balloon: integrate ACMA and ballooning
Date: Sun, 12 May 2024 12:36:57 -0700	[thread overview]
Message-ID: <20240512193657.79298-7-sj@kernel.org> (raw)
In-Reply-To: <20240512193657.79298-1-sj@kernel.org>

Let the host effectively inflate the balloon in an access/contiguity-aware
way when the guest kernel is built with a specific kernel config
(CONFIG_ACMA_BALLOON).  When the config is enabled and the host requests a
balloon size change, virtio-balloon adjusts ACMA's max-mem parameter
instead of allocating guest pages and putting them into the balloon.  As a
result, the host can use the requested amount of guest memory, so from the
host's perspective the ballooning just works, but in a transparent and
access/contiguity-aware way.

Signed-off-by: SeongJae Park <sj@kernel.org>
---
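
Below is a rough sketch of the ACMA call this patch builds on, plus a small
worked example of the target arithmetic.  The prototype is only inferred
from the call site in the hunk below and may differ from the one introduced
earlier in this series.

/*
 * Assumed ACMA interface: cap the amount of usable guest memory to
 * @max_mem_pages (in guest pages) and, when shrinking, apply strong
 * aggressiveness while stealing the memory needed to meet the new cap.
 */
void acma_set_max_mem_aggressive(unsigned long max_mem_pages);

/*
 * Example arithmetic, assuming 4 KiB guest pages (so
 * VIRTIO_BALLOON_PAGES_PER_PAGE == 1) and 1 GiB of guest RAM
 * (totalram_pages() == 262144):
 *
 *   host requests num_pages == 65536	(256 MiB worth of balloon pages)
 *   target  = ALIGN(65536, 1)		== 65536
 *   max mem = 262144 - 65536		== 196608 pages == 768 MiB
 *
 * ACMA then scales the guest down to 768 MiB instead of the driver
 * allocating 256 MiB worth of guest pages for the balloon.
 */

For reference, the host-requested balloon size change that triggers
virtballoon_changed() can come from, e.g., QEMU's HMP "balloon <MiB>"
command or the QMP "balloon" command.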
 drivers/virtio/virtio_balloon.c | 25 +++++++++++++++++++++++++
 1 file changed, 25 insertions(+)

diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
index 1f5b3dd31fcf..a954d75789ae 100644
--- a/drivers/virtio/virtio_balloon.c
+++ b/drivers/virtio/virtio_balloon.c
@@ -472,6 +472,31 @@ static void virtballoon_changed(struct virtio_device *vdev)
 	struct virtio_balloon *vb = vdev->priv;
 	unsigned long flags;
 
+#ifdef CONFIG_ACMA_BALLOON
+	s64 target;
+	u32 num_pages;
+
+	/* Legacy balloon config space is LE, unlike all other devices. */
+	virtio_cread_le(vb->vdev, struct virtio_balloon_config, num_pages,
+			&num_pages);
+
+	/*
+	 * Align the target up to the guest page size to avoid inflating and
+	 * deflating the balloon endlessly.
+	 */
+	target = ALIGN(num_pages, VIRTIO_BALLOON_PAGES_PER_PAGE);
+
+	/*
+	 * If the new max mem size is larger than ACMA's current max mem
+	 * size, this works the same as a normal max mem adjustment.
+	 * If the new max mem size is smaller than ACMA's current max mem
+	 * size, strong aggressiveness is applied while the memory that is
+	 * needed to meet the new max mem size is stolen.
+	 */
+	acma_set_max_mem_aggressive(totalram_pages() - target);
+	return;
+#endif
+
 	spin_lock_irqsave(&vb->stop_update_lock, flags);
 	if (!vb->stop_update) {
 		start_update_balloon_size(vb);
-- 
2.39.2



Thread overview: 2+ messages
2024-05-12 19:36 [RFC IDEA v2 0/6] mm/damon: introduce Access/Contiguity-aware Memory Auto-scaling (ACMA) SeongJae Park
2024-05-12 19:36 ` SeongJae Park [this message]
