From: Zhiquan Li <zhiquan1.li@intel.com>
To: linux-sgx@vger.kernel.org, tony.luck@intel.com
Cc: jarkko@kernel.org, dave.hansen@linux.intel.com,
	seanjc@google.com, fan.du@intel.com, zhiquan1.li@intel.com
Subject: [PATCH 0/4] x86/sgx: fine grained SGX MCA behavior
Date: Tue, 10 May 2022 11:16:46 +0800
Message-ID: <20220510031646.3181306-1-zhiquan1.li@intel.com>

Hi everyone,

This series contains a few patches to provide fine-grained SGX MCA behavior.

When a VM guest accesses an SGX EPC page that has a memory failure, the
current behavior kills the whole guest, while the expectation is that
only the SGX application inside it is killed.

To fix this, we send SIGBUS with code BUS_MCEERR_AR plus some extra
information that allows the hypervisor to inject the #MC into the guest,
which is helpful in the SGX virtualization case.

However, the current SGX data structures are insufficient to track the
EPC pages used by vepc, so we introduce a new struct sgx_vepc_page which
can act as the owner of EPC pages for vepc and record their useful
information, analogous to struct sgx_encl_page.
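
For illustration only, here is a minimal sketch of what such a struct
could look like; the field names below are assumptions, not necessarily
the exact layout in patch 2/4:

  /*
   * Hypothetical per-EPC-page tracking info for vepc, playing the same
   * "owner" role that struct sgx_encl_page plays for enclave pages.
   */
  struct sgx_vepc_page {
  	unsigned long vaddr;	/* userspace address the page is mapped at */
  	struct sgx_vepc *vepc;	/* back-pointer to the owning vepc instance */
  };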

Moreover, the canonical memory-failure path collects victim tasks by
iterating over all tasks one by one and using reverse mapping to obtain
each victim's virtual address. This is not necessary for SGX, as one EPC
page can be mapped to ONE enclave only. This 1:1 mapping enforcement
allows us to find the task's virtual address from the physical address
directly.

We then extend the solution to the normal (non-virtualized) SGX case, so
that the task has the opportunity to make a further decision when an EPC
page has a memory failure.
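
As a rough sketch of the idea (not the exact code in patches 3/4 and
4/4), the memory-failure handler can resolve the poisoned physical
address to its EPC page, take the virtual address recorded in the owner
structure, and send SIGBUS with BUS_MCEERR_AR directly; sgx_paddr_to_page()
and the owner layout are assumed here for illustration:

  /* Sketch only: notify the task that owns a poisoned vepc EPC page. */
  int arch_memory_failure(unsigned long pfn, int flags)
  {
  	struct sgx_epc_page *page = sgx_paddr_to_page(pfn << PAGE_SHIFT);
  	struct sgx_vepc_page *owner;
  	unsigned long vaddr;

  	if (!page)
  		return -ENXIO;

  	if (flags & MF_ACTION_REQUIRED) {
  		/*
  		 * One EPC page maps to ONE enclave only, so the faulting
  		 * virtual address comes straight from the page's owner
  		 * instead of a reverse-mapping walk over all tasks.
  		 */
  		owner = (struct sgx_vepc_page *)page->owner;
  		vaddr = owner->vaddr & PAGE_MASK;
  		force_sig_mceerr(BUS_MCEERR_AR, (void __user *)vaddr, PAGE_SHIFT);
  	}

  	return 0;
  }

For the action-required case the fault is synchronous, so signalling the
current task with force_sig_mceerr() is enough; an action-optional
notification would instead have to target the owning task explicitly.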

Tests:
1. MCE injection test for SGX in VM.
   As expected, the application was killed while the VM stayed alive.
2. MCE injection test for SGX on host.
   As expected, the application received SIGBUS with the extra information.
3. Kernel selftest/sgx: PASS
4. Internal SGX stress test: PASS
5. kmemleak test: No memory leaks detected.

Zhiquan Li (4):
  x86/sgx: Move struct sgx_vepc definition to sgx.h
  x86/sgx: add struct sgx_vepc_page to manage EPC pages for vepc
  x86/sgx: Fine grained SGX MCA behavior for virtualization
  x86/sgx: Fine grained SGX MCA behavior for normal case

 arch/x86/kernel/cpu/sgx/main.c | 24 ++++++++++++++++++++++--
 arch/x86/kernel/cpu/sgx/sgx.h  | 12 ++++++++++++
 arch/x86/kernel/cpu/sgx/virt.c | 29 +++++++++++++++++++----------
 3 files changed, 53 insertions(+), 12 deletions(-)

-- 
2.25.1



Thread overview: 12+ messages
2022-05-10  3:16 Zhiquan Li [this message]
2022-05-11 10:29 ` [PATCH 0/4] x86/sgx: fine grained SGX MCA behavior Jarkko Sakkinen
2022-05-12 12:03   ` Zhiquan Li
2022-05-13 14:38     ` Jarkko Sakkinen
2022-05-13 16:35       ` Luck, Tony
2022-05-14  5:39         ` Zhiquan Li
2022-05-15  3:35           ` Luck, Tony
2022-05-16  0:57             ` Zhiquan Li
2022-05-16  2:29           ` Kai Huang
2022-05-16  8:40             ` Zhiquan Li
2022-05-17  0:43               ` Kai Huang
2022-05-18  1:02                 ` Zhiquan Li
