From: "Zhang, Xiong Y" <xiong.y.zhang@linux.intel.com>
To: Peter Zijlstra <peterz@infradead.org>,
Mingwei Zhang <mizhang@google.com>
Cc: Sean Christopherson <seanjc@google.com>,
Paolo Bonzini <pbonzini@redhat.com>,
Xiong Zhang <xiong.y.zhang@intel.com>,
Dapeng Mi <dapeng1.mi@linux.intel.com>,
Kan Liang <kan.liang@intel.com>,
Zhenyu Wang <zhenyuw@linux.intel.com>,
Manali Shukla <manali.shukla@amd.com>,
Sandipan Das <sandipan.das@amd.com>,
Jim Mattson <jmattson@google.com>,
Stephane Eranian <eranian@google.com>,
Ian Rogers <irogers@google.com>,
Namhyung Kim <namhyung@kernel.org>,
gce-passthrou-pmu-dev@google.com,
Samantha Alt <samantha.alt@intel.com>,
Zhiyuan Lv <zhiyuan.lv@intel.com>,
Yanfei Xu <yanfei.xu@intel.com>, maobibo <maobibo@loongson.cn>,
Like Xu <like.xu.linux@gmail.com>,
kvm@vger.kernel.org, linux-perf-users@vger.kernel.org
Subject: Re: [PATCH v2 13/54] perf: core/x86: Forbid PMI handler when guest own PMU
Date: Thu, 9 May 2024 15:39:20 +0800 [thread overview]
Message-ID: <fd01485d-558b-4a96-bdc4-18663bf47759@linux.intel.com> (raw)
In-Reply-To: <20240507093311.GW40213@noisy.programming.kicks-ass.net>
On 5/7/2024 5:33 PM, Peter Zijlstra wrote:
> On Mon, May 06, 2024 at 05:29:38AM +0000, Mingwei Zhang wrote:
>
>> @@ -1749,6 +1749,23 @@ perf_event_nmi_handler(unsigned int cmd, struct pt_regs *regs)
>> u64 finish_clock;
>> int ret;
>>
>> + /*
>> + * When the guest PMU context is loaded, this handler must not run.
>> + * The reasons are:
>> + * 1. After x86_perf_guest_enter() is called, but before the CPU enters
>> + * non-root mode, an NMI could arrive, and x86_pmu_handle_irq() would
>> + * restore the PMU to using the NMI vector, destroying the KVM PMI
>> + * vector setting.
>> + * 2. While the VM is running, a host NMI other than a PMI causes a VM
>> + * exit, and KVM calls the host NMI handler (vmx_vcpu_enter_exit())
>> + * before it saves the guest PMU context (kvm_pmu_save_pmu_context());
>> + * x86_pmu_handle_irq() would clear the global_status MSR, which still
>> + * holds guest state, destroying the guest PMU status.
>> + * 3. After VM exit, but before KVM saves the guest PMU context, a host
>> + * NMI other than a PMI could arrive; again, x86_pmu_handle_irq() would
>> + * clear the global_status MSR while it still holds guest state.
>> + */
>> + if (perf_is_guest_context_loaded())
>> + return 0;
>
> A function call makes sense because? Also, isn't this naming at least
> very little misleading? Specifically this is about passthrough, not
> guest context per se.

The purpose of the function call is to reuse the per-cpu variable defined
in perf core; otherwise another per-cpu variable would have to be defined
in arch/x86/events/core.c. Whether to use a function call or a per-cpu
variable depends on the interface between perf and KVM.
>
>> /*
>> * All PMUs/events that share this PMI handler should make sure to
>> * increment active_events for their events.
>> diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
>> index acf16676401a..5da7de42954e 100644
>> --- a/include/linux/perf_event.h
>> +++ b/include/linux/perf_event.h
>> @@ -1736,6 +1736,7 @@ extern int perf_get_mediated_pmu(void);
>> extern void perf_put_mediated_pmu(void);
>> void perf_guest_enter(void);
>> void perf_guest_exit(void);
>> +bool perf_is_guest_context_loaded(void);
>> #else /* !CONFIG_PERF_EVENTS: */
>> static inline void *
>> perf_aux_output_begin(struct perf_output_handle *handle,
>> @@ -1830,6 +1831,10 @@ static inline int perf_get_mediated_pmu(void)
>> static inline void perf_put_mediated_pmu(void) { }
>> static inline void perf_guest_enter(void) { }
>> static inline void perf_guest_exit(void) { }
>> +static inline bool perf_is_guest_context_loaded(void)
>> +{
>> + return false;
>> +}
>> #endif
>>
>> #if defined(CONFIG_PERF_EVENTS) && defined(CONFIG_CPU_SUP_INTEL)
>> diff --git a/kernel/events/core.c b/kernel/events/core.c
>> index 4c6daf5cc923..184d06c23391 100644
>> --- a/kernel/events/core.c
>> +++ b/kernel/events/core.c
>> @@ -5895,6 +5895,11 @@ void perf_guest_exit(void)
>> perf_ctx_unlock(cpuctx, cpuctx->task_ctx);
>> }
>>
>> +bool perf_is_guest_context_loaded(void)
>> +{
>> + return __this_cpu_read(perf_in_guest);
>> +}
>> +
>> /*
>> * Holding the top-level event's child_mutex means that any
>> * descendant process that has inherited this event will block
>> --
>> 2.45.0.rc1.225.g2a3ae87e7f-goog
>>
>