From: Reima ISHII <ishiir@g.ecc.u-tokyo.ac.jp>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: xen-devel@lists.xenproject.org,
	 Takahiro Shinagawa <shina@ecc.u-tokyo.ac.jp>
Subject: Re: [BUG] Nested Virtualization Bug on x86-64 AMD CPU
Date: Wed, 6 Dec 2023 12:05:05 +0900	[thread overview]
Message-ID: <CA+aCS-H2wkiVOMvCS7cCPojduXdStMYzHn7SxintNyg0vS_Bhg@mail.gmail.com> (raw)
In-Reply-To: <1415ddc9-81f3-4d50-b735-7e44a7f656d5@citrix.com>

Thank you for your prompt response.

On Tue, Dec 5, 2023 at 11:43 PM Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> Who is still in 64-bit mode ?
>
> It is legal for a 64-bit L1 to VMRUN into a 32-bit L2 with PG=0.
>
> But I'm guessing that you mean L2 is also 64-bit, and we're clearing PG,
> thus creating an illegal state (LMA=1 && PG=0) in VMCB12.
>
> And yes, in that case (virtual) VMRUN at L1 ought to fail with
> VMEXIT_INVALID.

Yes, your understanding is correct. The issue is triggered by
executing VMRUN to enter a 64-bit L2 guest while CR0.PG is cleared in
VMCB12. Instead of failing with VMEXIT_INVALID as expected, the VMRUN
proceeds and the system hits a BUG().

> As an incidental observation, that function is particularly absurd and
> the two switches should be merged.
>
> VMExit reason 0x402 is AVIC_NOACCEL and Xen has no support for AVIC in
> the slightest right now.  i.e. Xen shouldn't have AVIC active in the
> VMCB, and should never see any AVIC-related VMExits.
>
> It is possible that we've got memory corruption, and have accidentally
> activated AVIC in the VMCB.

Memory corruption activating AVIC in the VMCB is an interesting
possibility. While I'm not sure how such corruption could occur, it
would explain the VMExit reason 0x402 (AVIC_NOACCEL), especially given
that Xen currently has no AVIC support.

> But, is this by any chance all running nested under KVM in your fuzzer?

No, KVM was not involved. The issue was observed in a Xen HVM domU
running directly on hardware; a simple custom hypervisor was running
inside that guest.

-- 
Graduate School of Information Science and Technology, The University of Tokyo
Reima Ishii
ishiir@g.ecc.u-tokyo.ac.jp


Thread overview: 3+ messages
2023-12-05 13:51 [BUG] Nested Virtualization Bug on x86-64 AMD CPU Reima ISHII
2023-12-05 14:43 ` Andrew Cooper
2023-12-06  3:05   ` Reima ISHII [this message]
