From: "Mickaël Salaün" <mic@digikod.net>
To: Sean Christopherson <seanjc@google.com>,
Nicolas Saenz Julienne <nsaenz@amazon.com>
Cc: "Borislav Petkov" <bp@alien8.de>,
"Dave Hansen" <dave.hansen@linux.intel.com>,
"H . Peter Anvin" <hpa@zytor.com>,
"Ingo Molnar" <mingo@redhat.com>,
"Kees Cook" <keescook@chromium.org>,
"Paolo Bonzini" <pbonzini@redhat.com>,
"Thomas Gleixner" <tglx@linutronix.de>,
"Vitaly Kuznetsov" <vkuznets@redhat.com>,
"Wanpeng Li" <wanpengli@tencent.com>,
"Rick P Edgecombe" <rick.p.edgecombe@intel.com>,
"Alexander Graf" <graf@amazon.com>,
"Angelina Vu" <angelinavu@linux.microsoft.com>,
"Anna Trikalinou" <atrikalinou@microsoft.com>,
"Chao Peng" <chao.p.peng@linux.intel.com>,
"Forrest Yuan Yu" <yuanyu@google.com>,
"James Gowans" <jgowans@amazon.com>,
"James Morris" <jamorris@linux.microsoft.com>,
"John Andersen" <john.s.andersen@intel.com>,
"Madhavan T . Venkataraman" <madvenka@linux.microsoft.com>,
"Marian Rotariu" <marian.c.rotariu@gmail.com>,
"Mihai Donțu" <mdontu@bitdefender.com>,
"Nicușor Cîțu" <nicu.citu@icloud.com>,
"Thara Gopinath" <tgopinath@microsoft.com>,
"Trilok Soni" <quic_tsoni@quicinc.com>,
"Wei Liu" <wei.liu@kernel.org>, "Will Deacon" <will@kernel.org>,
"Yu Zhang" <yu.c.zhang@linux.intel.com>,
"Ștefan Șicleru" <ssicleru@bitdefender.com>,
dev@lists.cloudhypervisor.org, kvm@vger.kernel.org,
linux-hardening@vger.kernel.org, linux-hyperv@vger.kernel.org,
linux-kernel@vger.kernel.org,
linux-security-module@vger.kernel.org, qemu-devel@nongnu.org,
virtualization@lists.linux-foundation.org, x86@kernel.org,
xen-devel@lists.xenproject.org
Subject: Re: [RFC PATCH v3 3/5] KVM: x86: Add notifications for Heki policy configuration and violation
Date: Tue, 14 May 2024 14:15:46 +0200 [thread overview]
Message-ID: <20240514.OoPohLaejai6@digikod.net> (raw)
In-Reply-To: <ZjpTxt-Bxia3bRwB@google.com>
On Tue, May 07, 2024 at 09:16:06AM -0700, Sean Christopherson wrote:
> On Tue, May 07, 2024, Mickaël Salaün wrote:
> > > Actually, potential bad/crazy idea. Why does the _host_ need to define policy?
> > > Linux already knows what assets it wants to (un)protect and when. What's missing
> > > is a way for the guest kernel to effectively deprivilege and re-authenticate
> > > itself as needed. We've been tossing around the idea of paired VMs+vCPUs to
> > > support VTLs and SEV's VMPLs, what if we usurped/piggybacked those ideas, with a
> > > bit of pKVM mixed in?
> > >
> > > Borrowing VTL terminology, where VTL0 is the least privileged, userspace launches
> > > the VM at VTL0. At some point, the guest triggers the deprivileging sequence and
> > > userspace creates VTL1. Userspace also provides a way for VTL0 to restrict access to
> > > its memory, e.g. to effectively make the page tables for the kernel's direct map
> > > writable only from VTL1, to make kernel text RO (or XO), etc. And VTL0 could then
> > > also completely remove its access to code that changes CR0/CR4.
> > >
> > > It would obviously require a _lot_ more upfront work, e.g. to isolate the kernel
> > > text that modifies CR0/CR4 so that it can be removed from VTL0, but that should
> > > be doable with annotations, e.g. tag relevant functions with __magic or whatever,
> > > throw them in a dedicated section, and then free/protect the section(s) at the
> > > appropriate time.
> > >
> > > KVM would likely need to provide the ability to switch VTLs (or whatever they get
> > > called), and host userspace would need to provide a decent amount of the backend
> > > mechanisms and "core" policies, e.g. to manage VTL0 memory, teardown (turn off?)
> > > VTL1 on kexec(), etc. But everything else could live in the guest kernel itself.
> > > E.g. to have CR pinning play nice with kexec(), toss the relevant kexec() code into
> > > VTL1. That way VTL1 can verify the kexec() target and tear itself down before
> > > jumping into the new kernel.
> > >
> > > This is very off the cuff and hand-wavy, e.g. I don't have much of an idea what
> > > it would take to harden kernel text patching, but keeping the policy in the guest
> > > seems like it'd make everything more tractable than trying to define an ABI
> > > between Linux and a VMM that is rich and flexible enough to support all the
> > > fancy things Linux does (and will do in the future).
> >
> > Yes, we agree that the guest needs to manage its own policy. That's why
> > we implemented Heki for KVM this way, but without VTLs because KVM
> > doesn't support them.
> >
> > To sum up, is the VTL approach the only one that would be acceptable for
> > KVM?
>
> Heh, that's not a question you want to be asking. You're effectively asking me
> to make an authoritative, "final" decision on a topic which I am only passingly
> familiar with.
>
> But since you asked it... :-) Probably?
>
> I see a lot of advantages to a VTL/VSM-like approach:
>
> 1. Provides Linux as a guest the flexibility it needs to meaningfully advance
> its security, with the least amount of policy built into the guest/host ABI.
>
> 2. Largely decouples guest policy from the host, i.e. should allow the guest to
> evolve/update its policy without needing to coordinate changes with the host.
>
> 3. The KVM implementation can be generic enough to be reusable for other features.
>
> 4. Other groups are already working on VTL-like support in KVM, e.g. for VSM
> itself, and potentially for VMPL/SVSM support.
>
> IMO, #2 is a *huge* selling point. Not having to coordinate changes across
> multiple code bases and/or organizations and/or maintainers is a big win for
> velocity, long term maintenance, and probably the very viability of HEKI.
Agree, this is our goal.
>
> Providing the guest with the tools to define and implement its own policy means
> end users don't have to wait for some third party, e.g. CSPs, to deploy the
> accompanying host-side changes, because there are no host-side changes.
>
> And encapsulating everything in the guest drastically reduces the friction with
> changes in the kernel that interact with hardening, both from a technical and a
> social perspective. I.e. giving the kernel (near) complete control over its
> destiny minimizes the number of moving parts, and will be far, far easier to sell
> to maintainers. I would expect maintainers to react much more favorably to being
> handed tools to harden the kernel, as opposed to being presented a set of APIs
> that can be used to make the kernel compliant with _someone else's_ vision of
> what kernel hardening should look like.
>
> E.g. imagine a new feature comes along that requires overriding CR0/CR4 pinning
> in a way that doesn't fit into existing policy. If the VMM is involved in
> defining/enforcing the CR pinning policy, then supporting said new feature would
> require new guest/host ABI and an updated host VMM in order to make the new
> feature compatible with HEKI. Inevitably, even if everything goes smoothly from
> an upstreaming perspective, that will result in guests that have to choose between
> HEKI and new feature X, because there is zero chance that all hosts that run Linux
> as a guest will be updated in advance of new feature X being deployed.
Sure. We need to find a generic-enough KVM interface that can
restrict a wide range of virtualization/hardware mechanisms (so as not
to rely too much on future KVM changes) and delegate most of the
enforcement/emulation to VTL1. In short: policy definition owned by
VTL0/guest, and policy enforcement shared between KVM (coarse grained)
and VTL1 (fine grained).
>
> And if/when things don't go smoothly, odds are very good that kernel maintainers
> will eventually tire of having to coordinate and negotiate with QEMU and other
> VMMs, and will become resistant to continuing to support/extend HEKI.
Yes, that was our concern too, and another reason why we chose to let
the guest handle its own security policy.
>
> > If yes, that would indeed require a *lot* of work for something we're not
> > sure will be accepted later on.
>
> Yes and no. The AWS folks are pursuing VSM support in KVM+QEMU, and SVSM support
> is trending toward the paired VM+vCPU model. IMO, it's entirely feasible to
> design KVM support such that much of the development load can be shared between
> the projects. And having 2+ use cases for a feature (set) makes it _much_ more
> likely that the feature(s) will be accepted.
>
> And similar to what Paolo said regarding HEKI not having a complete story, I
> don't see a clear line of sight for landing host-defined policy enforcement, as
> there are many open, non-trivial questions that need answers. I.e. upstreaming
> HEKI in its current form is also far from a done deal, and isn't guaranteed to
> be substantially less work when all is said and done.
I'm not sure I understand the "Heki not having a complete story"
remark. The goal is the same as for the current kernel self-protection
mechanisms.