Xen-Devel Archive mirror
* [XEN PATCH v3 0/6] x86: make Intel/AMD vPMU & MCE support configurable
@ 2024-05-14  8:16 Sergiy Kibrik
  2024-05-14  8:18 ` [XEN PATCH v3 1/6] x86/vpmu: separate amd/intel vPMU code Sergiy Kibrik
                   ` (5 more replies)
  0 siblings, 6 replies; 16+ messages in thread
From: Sergiy Kibrik @ 2024-05-14  8:16 UTC (permalink / raw
  To: xen-devel
  Cc: Sergiy Kibrik, Jan Beulich, Andrew Cooper, Roger Pau Monné,
	Stefano Stabellini

This series aims to separate support for Intel & AMD CPUs in the Xen build.
The code to drive both platforms used to be built unconditionally, until the
recent introduction of the CONFIG_{AMD,INTEL} Kconfig options.

This series extends coverage of these options to the vpmu and mcheck
subsystems, so that Intel or AMD vpmu/mcheck support is not built if support
for that CPU vendor has been explicitly disabled.

Since v2, one patch has been split into two for more clarity, hence the
series became one patch longer, and the last patch changed significantly.
Other changes are mostly cosmetic, plus the addition of tags.
Specific changes since v2 are provided per-patch.
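
For illustration, an Intel-only build would then be configured with something
like the following (hypothetical .config fragment; the CONFIG_{AMD,INTEL}
option names are the ones this series builds upon):

```
CONFIG_INTEL=y
# CONFIG_AMD is not set
```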

v2 series here:
https://lore.kernel.org/xen-devel/cover.1714640459.git.Sergiy_Kibrik@epam.com/

  -Sergiy

Sergiy Kibrik (6):
  x86/vpmu: separate amd/intel vPMU code
  x86/intel: move vmce_has_lmce() routine to header
  x86/MCE: guard access to Intel/AMD-specific MCA MSRs
  x86/MCE: guard {intel/amd}_mcheck_init() calls
  x86/MCE: add default switch case in init_nonfatal_mce_checker()
  x86/MCE: optional build of AMD/Intel MCE code

 xen/arch/x86/cpu/Makefile           |  4 +++-
 xen/arch/x86/cpu/mcheck/Makefile    |  8 ++++----
 xen/arch/x86/cpu/mcheck/mce.c       |  8 ++++++++
 xen/arch/x86/cpu/mcheck/mce.h       |  5 +++++
 xen/arch/x86/cpu/mcheck/mce_intel.c |  8 --------
 xen/arch/x86/cpu/mcheck/non-fatal.c |  8 ++++++++
 xen/arch/x86/cpu/mcheck/vmce.c      | 13 ++++++++++---
 xen/arch/x86/cpu/vpmu.c             |  4 ++++
 xen/arch/x86/include/asm/mce.h      |  1 -
 xen/arch/x86/msr.c                  |  2 ++
 10 files changed, 44 insertions(+), 17 deletions(-)

-- 
2.25.1




* [XEN PATCH v3 1/6] x86/vpmu: separate amd/intel vPMU code
  2024-05-14  8:16 [XEN PATCH v3 0/6] x86: make Intel/AMD vPMU & MCE support configurable Sergiy Kibrik
@ 2024-05-14  8:18 ` Sergiy Kibrik
  2024-05-14  8:20 ` [XEN PATCH v3 2/6] x86/intel: move vmce_has_lmce() routine to header Sergiy Kibrik
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 16+ messages in thread
From: Sergiy Kibrik @ 2024-05-14  8:18 UTC (permalink / raw
  To: xen-devel
  Cc: Sergiy Kibrik, Jan Beulich, Roger Pau Monné,
	Stefano Stabellini, Andrew Cooper

Build AMD vPMU when CONFIG_AMD is on, and Intel vPMU when CONFIG_INTEL
is on, respectively, allowing for a platform-specific build.

No functional change intended.

Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Acked-by: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>

---
changes in v3:
 - neither of the blank lines dropped
changes in v2:
 - drop static inline stubs, use #ifdef/#endif in vpmu_init()
changes in v1:
 - switch to CONFIG_{AMD,INTEL} instead of CONFIG_{SVM,VMX}
---
 xen/arch/x86/cpu/Makefile | 4 +++-
 xen/arch/x86/cpu/vpmu.c   | 4 ++++
 2 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/cpu/Makefile b/xen/arch/x86/cpu/Makefile
index 35561fe51d..eafce5f204 100644
--- a/xen/arch/x86/cpu/Makefile
+++ b/xen/arch/x86/cpu/Makefile
@@ -10,4 +10,6 @@ obj-y += intel.o
 obj-y += intel_cacheinfo.o
 obj-y += mwait-idle.o
 obj-y += shanghai.o
-obj-y += vpmu.o vpmu_amd.o vpmu_intel.o
+obj-y += vpmu.o
+obj-$(CONFIG_AMD) += vpmu_amd.o
+obj-$(CONFIG_INTEL) += vpmu_intel.o
diff --git a/xen/arch/x86/cpu/vpmu.c b/xen/arch/x86/cpu/vpmu.c
index b2e9881e06..a7bc0cd1fc 100644
--- a/xen/arch/x86/cpu/vpmu.c
+++ b/xen/arch/x86/cpu/vpmu.c
@@ -827,6 +827,7 @@ static int __init cf_check vpmu_init(void)
 
     switch ( vendor )
     {
+#ifdef CONFIG_AMD
     case X86_VENDOR_AMD:
         ops = amd_vpmu_init();
         break;
@@ -834,10 +835,13 @@ static int __init cf_check vpmu_init(void)
     case X86_VENDOR_HYGON:
         ops = hygon_vpmu_init();
         break;
+#endif
 
+#ifdef CONFIG_INTEL
     case X86_VENDOR_INTEL:
         ops = core2_vpmu_init();
         break;
+#endif
 
     default:
         printk(XENLOG_WARNING "VPMU: Unknown CPU vendor: %d. "
-- 
2.25.1




* [XEN PATCH v3 2/6] x86/intel: move vmce_has_lmce() routine to header
  2024-05-14  8:16 [XEN PATCH v3 0/6] x86: make Intel/AMD vPMU & MCE support configurable Sergiy Kibrik
  2024-05-14  8:18 ` [XEN PATCH v3 1/6] x86/vpmu: separate amd/intel vPMU code Sergiy Kibrik
@ 2024-05-14  8:20 ` Sergiy Kibrik
  2024-05-16  9:39   ` Jan Beulich
  2024-05-14  8:22 ` [XEN PATCH v3 3/6] x86/MCE: guard access to Intel/AMD-specific MCA MSRs Sergiy Kibrik
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 16+ messages in thread
From: Sergiy Kibrik @ 2024-05-14  8:20 UTC (permalink / raw
  To: xen-devel
  Cc: Sergiy Kibrik, Andrew Cooper, Roger Pau Monné, Jan Beulich,
	Stefano Stabellini

Moving this function out of mce_intel.c would make it possible to disable
build of Intel MCE code later on, because the function gets called from
common x86 code.

Also replace boilerplate code that checks for the MCG_LMCE_P flag with
vmce_has_lmce(), which might contribute a bit to readability.

Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
CC: Jan Beulich <jbeulich@suse.com>
---
changes in v3:
 - do not check for CONFIG_INTEL
 - remove CONFIG_INTEL from patch description
changes in v2:
 - move vmce_has_lmce() to cpu/mcheck/mce.h
 - move IS_ENABLED(CONFIG_INTEL) check inside vmce_has_lmce()
 - changed description
---
 xen/arch/x86/cpu/mcheck/mce.h       | 5 +++++
 xen/arch/x86/cpu/mcheck/mce_intel.c | 4 ----
 xen/arch/x86/cpu/mcheck/vmce.c      | 5 ++---
 xen/arch/x86/include/asm/mce.h      | 1 -
 xen/arch/x86/msr.c                  | 2 ++
 5 files changed, 9 insertions(+), 8 deletions(-)

diff --git a/xen/arch/x86/cpu/mcheck/mce.h b/xen/arch/x86/cpu/mcheck/mce.h
index 4806405f96..eba4b536c7 100644
--- a/xen/arch/x86/cpu/mcheck/mce.h
+++ b/xen/arch/x86/cpu/mcheck/mce.h
@@ -170,6 +170,11 @@ static inline int mce_bank_msr(const struct vcpu *v, uint32_t msr)
     return 0;
 }
 
+static inline bool vmce_has_lmce(const struct vcpu *v)
+{
+    return v->arch.vmce.mcg_cap & MCG_LMCE_P;
+}
+
 struct mce_callbacks {
     void (*handler)(const struct cpu_user_regs *regs);
     bool (*check_addr)(uint64_t status, uint64_t misc, int addr_type);
diff --git a/xen/arch/x86/cpu/mcheck/mce_intel.c b/xen/arch/x86/cpu/mcheck/mce_intel.c
index 3f5199b531..af43281cc6 100644
--- a/xen/arch/x86/cpu/mcheck/mce_intel.c
+++ b/xen/arch/x86/cpu/mcheck/mce_intel.c
@@ -1050,7 +1050,3 @@ int vmce_intel_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
     return 1;
 }
 
-bool vmce_has_lmce(const struct vcpu *v)
-{
-    return v->arch.vmce.mcg_cap & MCG_LMCE_P;
-}
diff --git a/xen/arch/x86/cpu/mcheck/vmce.c b/xen/arch/x86/cpu/mcheck/vmce.c
index 353d4f19b2..94d1f021e1 100644
--- a/xen/arch/x86/cpu/mcheck/vmce.c
+++ b/xen/arch/x86/cpu/mcheck/vmce.c
@@ -199,7 +199,7 @@ int vmce_rdmsr(uint32_t msr, uint64_t *val)
          * bits are always set in guest MSR_IA32_FEATURE_CONTROL by Xen, so it
          * does not need to check them here.
          */
-        if ( cur->arch.vmce.mcg_cap & MCG_LMCE_P )
+        if ( vmce_has_lmce(cur) )
         {
             *val = cur->arch.vmce.mcg_ext_ctl;
             mce_printk(MCE_VERBOSE, "MCE: %pv: rd MCG_EXT_CTL %#"PRIx64"\n",
@@ -324,8 +324,7 @@ int vmce_wrmsr(uint32_t msr, uint64_t val)
         break;
 
     case MSR_IA32_MCG_EXT_CTL:
-        if ( (cur->arch.vmce.mcg_cap & MCG_LMCE_P) &&
-             !(val & ~MCG_EXT_CTL_LMCE_EN) )
+        if ( vmce_has_lmce(cur) && !(val & ~MCG_EXT_CTL_LMCE_EN) )
             cur->arch.vmce.mcg_ext_ctl = val;
         else
             ret = -1;
diff --git a/xen/arch/x86/include/asm/mce.h b/xen/arch/x86/include/asm/mce.h
index 6ce56b5b85..2ec47a71ae 100644
--- a/xen/arch/x86/include/asm/mce.h
+++ b/xen/arch/x86/include/asm/mce.h
@@ -41,7 +41,6 @@ extern void vmce_init_vcpu(struct vcpu *v);
 extern int vmce_restore_vcpu(struct vcpu *v, const struct hvm_vmce_vcpu *ctxt);
 extern int vmce_wrmsr(uint32_t msr, uint64_t val);
 extern int vmce_rdmsr(uint32_t msr, uint64_t *val);
-extern bool vmce_has_lmce(const struct vcpu *v);
 extern int vmce_enable_mca_cap(struct domain *d, uint64_t cap);
 
 DECLARE_PER_CPU(unsigned int, nr_mce_banks);
diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index 9babd441f9..b0ec96f021 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -24,6 +24,8 @@
 
 #include <public/hvm/params.h>
 
+#include "cpu/mcheck/mce.h"
+
 DEFINE_PER_CPU(uint32_t, tsc_aux);
 
 int init_vcpu_msr_policy(struct vcpu *v)
-- 
2.25.1




* [XEN PATCH v3 3/6] x86/MCE: guard access to Intel/AMD-specific MCA MSRs
  2024-05-14  8:16 [XEN PATCH v3 0/6] x86: make Intel/AMD vPMU & MCE support configurable Sergiy Kibrik
  2024-05-14  8:18 ` [XEN PATCH v3 1/6] x86/vpmu: separate amd/intel vPMU code Sergiy Kibrik
  2024-05-14  8:20 ` [XEN PATCH v3 2/6] x86/intel: move vmce_has_lmce() routine to header Sergiy Kibrik
@ 2024-05-14  8:22 ` Sergiy Kibrik
  2024-05-14  8:24 ` [XEN PATCH v3 4/6] x86/MCE: guard {intel/amd}_mcheck_init() calls Sergiy Kibrik
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 16+ messages in thread
From: Sergiy Kibrik @ 2024-05-14  8:22 UTC (permalink / raw
  To: xen-devel
  Cc: Sergiy Kibrik, Jan Beulich, Andrew Cooper, Roger Pau Monné,
	Stefano Stabellini

Add build-time checks for the newly introduced INTEL/AMD config options when
calling the vmce_{intel/amd}_{rdmsr/wrmsr}() routines.
This way platform-specific code can be omitted from the vmce code if that
platform is disabled in the config.

Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
changes in v3:
 - neither of the blank lines dropped
changes in v2:
 - use #ifdef/#endif in switch instead of IS_ENABLED
 - fallback to returning default 0 if vendor not recognized
---
 xen/arch/x86/cpu/mcheck/vmce.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/xen/arch/x86/cpu/mcheck/vmce.c b/xen/arch/x86/cpu/mcheck/vmce.c
index 94d1f021e1..5abdf4cb5f 100644
--- a/xen/arch/x86/cpu/mcheck/vmce.c
+++ b/xen/arch/x86/cpu/mcheck/vmce.c
@@ -138,16 +138,20 @@ static int bank_mce_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
     default:
         switch ( boot_cpu_data.x86_vendor )
         {
+#ifdef CONFIG_INTEL
         case X86_VENDOR_CENTAUR:
         case X86_VENDOR_SHANGHAI:
         case X86_VENDOR_INTEL:
             ret = vmce_intel_rdmsr(v, msr, val);
             break;
+#endif
 
+#ifdef CONFIG_AMD
         case X86_VENDOR_AMD:
         case X86_VENDOR_HYGON:
             ret = vmce_amd_rdmsr(v, msr, val);
             break;
+#endif
 
         default:
             ret = 0;
@@ -271,14 +275,18 @@ static int bank_mce_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
     default:
         switch ( boot_cpu_data.x86_vendor )
         {
+#ifdef CONFIG_INTEL
         case X86_VENDOR_INTEL:
             ret = vmce_intel_wrmsr(v, msr, val);
             break;
+#endif
 
+#ifdef CONFIG_AMD
         case X86_VENDOR_AMD:
         case X86_VENDOR_HYGON:
             ret = vmce_amd_wrmsr(v, msr, val);
             break;
+#endif
 
         default:
             ret = 0;
-- 
2.25.1




* [XEN PATCH v3 4/6] x86/MCE: guard {intel/amd}_mcheck_init() calls
  2024-05-14  8:16 [XEN PATCH v3 0/6] x86: make Intel/AMD vPMU & MCE support configurable Sergiy Kibrik
                   ` (2 preceding siblings ...)
  2024-05-14  8:22 ` [XEN PATCH v3 3/6] x86/MCE: guard access to Intel/AMD-specific MCA MSRs Sergiy Kibrik
@ 2024-05-14  8:24 ` Sergiy Kibrik
  2024-05-14  8:26 ` [XEN PATCH v3 5/6] x86/MCE: add default switch case in init_nonfatal_mce_checker() Sergiy Kibrik
  2024-05-14  8:28 ` [XEN PATCH v3 6/6] x86/MCE: optional build of AMD/Intel MCE code Sergiy Kibrik
  5 siblings, 0 replies; 16+ messages in thread
From: Sergiy Kibrik @ 2024-05-14  8:24 UTC (permalink / raw
  To: xen-devel
  Cc: Sergiy Kibrik, Jan Beulich, Andrew Cooper, Roger Pau Monné,
	Stefano Stabellini

Guard calls to the CPU-specific mcheck init routines in common MCE code
using the new INTEL/AMD config options.

The purpose is to avoid building platform-specific mcheck code, and the calls
into it, when that platform is disabled in the config.

Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
changes in v3:
 - neither of the blank lines dropped
changes in v2:
 - use #ifdef/#endif in switch instead of IS_ENABLED
---
 xen/arch/x86/cpu/mcheck/mce.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/xen/arch/x86/cpu/mcheck/mce.c b/xen/arch/x86/cpu/mcheck/mce.c
index d179e6b068..fb9dec5b89 100644
--- a/xen/arch/x86/cpu/mcheck/mce.c
+++ b/xen/arch/x86/cpu/mcheck/mce.c
@@ -760,11 +760,14 @@ void mcheck_init(struct cpuinfo_x86 *c, bool bsp)
 
     switch ( c->x86_vendor )
     {
+#ifdef CONFIG_AMD
     case X86_VENDOR_AMD:
     case X86_VENDOR_HYGON:
         inited = amd_mcheck_init(c, bsp);
         break;
+#endif
 
+#ifdef CONFIG_INTEL
     case X86_VENDOR_INTEL:
         switch ( c->x86 )
         {
@@ -774,6 +777,7 @@ void mcheck_init(struct cpuinfo_x86 *c, bool bsp)
             break;
         }
         break;
+#endif
 
     default:
         break;
-- 
2.25.1




* [XEN PATCH v3 5/6] x86/MCE: add default switch case in init_nonfatal_mce_checker()
  2024-05-14  8:16 [XEN PATCH v3 0/6] x86: make Intel/AMD vPMU & MCE support configurable Sergiy Kibrik
                   ` (3 preceding siblings ...)
  2024-05-14  8:24 ` [XEN PATCH v3 4/6] x86/MCE: guard {intel/amd}_mcheck_init() calls Sergiy Kibrik
@ 2024-05-14  8:26 ` Sergiy Kibrik
  2024-05-16  9:44   ` Jan Beulich
  2024-05-14  8:28 ` [XEN PATCH v3 6/6] x86/MCE: optional build of AMD/Intel MCE code Sergiy Kibrik
  5 siblings, 1 reply; 16+ messages in thread
From: Sergiy Kibrik @ 2024-05-14  8:26 UTC (permalink / raw
  To: xen-devel
  Cc: Sergiy Kibrik, Andrew Cooper, Roger Pau Monné, Jan Beulich,
	Stefano Stabellini

The default switch case block is likely wanted here, to handle situation
e.g. of unexpected c->x86_vendor value -- then no mcheck init is done, but
misleading message still gets logged anyway.

Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
CC: Jan Beulich <jbeulich@suse.com>
---
 xen/arch/x86/cpu/mcheck/non-fatal.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/xen/arch/x86/cpu/mcheck/non-fatal.c b/xen/arch/x86/cpu/mcheck/non-fatal.c
index 33cacd15c2..c5d8e3aea9 100644
--- a/xen/arch/x86/cpu/mcheck/non-fatal.c
+++ b/xen/arch/x86/cpu/mcheck/non-fatal.c
@@ -29,9 +29,13 @@ static int __init cf_check init_nonfatal_mce_checker(void)
 		/* Assume we are on K8 or newer AMD or Hygon CPU here */
 		amd_nonfatal_mcheck_init(c);
 		break;
+
 	case X86_VENDOR_INTEL:
 		intel_nonfatal_mcheck_init(c);
 		break;
+
+	default:
+		return -ENODEV;
 	}
 	printk(KERN_INFO "mcheck_poll: Machine check polling timer started.\n");
 	return 0;
-- 
2.25.1




* [XEN PATCH v3 6/6] x86/MCE: optional build of AMD/Intel MCE code
  2024-05-14  8:16 [XEN PATCH v3 0/6] x86: make Intel/AMD vPMU & MCE support configurable Sergiy Kibrik
                   ` (4 preceding siblings ...)
  2024-05-14  8:26 ` [XEN PATCH v3 5/6] x86/MCE: add default switch case in init_nonfatal_mce_checker() Sergiy Kibrik
@ 2024-05-14  8:28 ` Sergiy Kibrik
  2024-05-16  9:55   ` Jan Beulich
  5 siblings, 1 reply; 16+ messages in thread
From: Sergiy Kibrik @ 2024-05-14  8:28 UTC (permalink / raw
  To: xen-devel
  Cc: Sergiy Kibrik, Andrew Cooper, Roger Pau Monné, Jan Beulich,
	Stefano Stabellini

Separate the Intel/AMD-specific MCE code using the CONFIG_{INTEL,AMD} config
options. Now we can avoid building mcheck code when support for a specific
platform is intentionally disabled by configuration.

Also the global variables lmce_support & cmci_support have to be moved from
the Intel-specific mce_intel.c to the common mce.c, as they get checked in
common code.

Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
CC: Jan Beulich <jbeulich@suse.com>
---
changes in v3:
 - default return value of init_nonfatal_mce_checker() done in separate patch
 - move lmce_support & cmci_support to common mce.c code
 - changed patch description
changes in v2:
 - fallback to original ordering in Makefile
 - redefine lmce_support & cmci_support global vars to false when !INTEL
 - changed patch description
---
 xen/arch/x86/cpu/mcheck/Makefile    | 8 ++++----
 xen/arch/x86/cpu/mcheck/mce.c       | 4 ++++
 xen/arch/x86/cpu/mcheck/mce_intel.c | 4 ----
 xen/arch/x86/cpu/mcheck/non-fatal.c | 4 ++++
 4 files changed, 12 insertions(+), 8 deletions(-)

diff --git a/xen/arch/x86/cpu/mcheck/Makefile b/xen/arch/x86/cpu/mcheck/Makefile
index f927f10b4d..e6cb4dd503 100644
--- a/xen/arch/x86/cpu/mcheck/Makefile
+++ b/xen/arch/x86/cpu/mcheck/Makefile
@@ -1,12 +1,12 @@
-obj-y += amd_nonfatal.o
-obj-y += mce_amd.o
+obj-$(CONFIG_AMD) += amd_nonfatal.o
+obj-$(CONFIG_AMD) += mce_amd.o
 obj-y += mcaction.o
 obj-y += barrier.o
-obj-y += intel-nonfatal.o
+obj-$(CONFIG_INTEL) += intel-nonfatal.o
 obj-y += mctelem.o
 obj-y += mce.o
 obj-y += mce-apei.o
-obj-y += mce_intel.o
+obj-$(CONFIG_INTEL) += mce_intel.o
 obj-y += non-fatal.o
 obj-y += util.o
 obj-y += vmce.o
diff --git a/xen/arch/x86/cpu/mcheck/mce.c b/xen/arch/x86/cpu/mcheck/mce.c
index fb9dec5b89..32c1b2756b 100644
--- a/xen/arch/x86/cpu/mcheck/mce.c
+++ b/xen/arch/x86/cpu/mcheck/mce.c
@@ -38,6 +38,10 @@ DEFINE_PER_CPU_READ_MOSTLY(unsigned int, nr_mce_banks);
 unsigned int __read_mostly firstbank;
 unsigned int __read_mostly ppin_msr;
 uint8_t __read_mostly cmci_apic_vector;
+bool __read_mostly cmci_support;
+
+/* If mce_force_broadcast == 1, lmce_support will be disabled forcibly. */
+bool __read_mostly lmce_support;
 
 DEFINE_PER_CPU_READ_MOSTLY(struct mca_banks *, poll_bankmask);
 DEFINE_PER_CPU_READ_MOSTLY(struct mca_banks *, no_cmci_banks);
diff --git a/xen/arch/x86/cpu/mcheck/mce_intel.c b/xen/arch/x86/cpu/mcheck/mce_intel.c
index af43281cc6..dd812f4b8a 100644
--- a/xen/arch/x86/cpu/mcheck/mce_intel.c
+++ b/xen/arch/x86/cpu/mcheck/mce_intel.c
@@ -26,16 +26,12 @@
 #include "mcaction.h"
 
 static DEFINE_PER_CPU_READ_MOSTLY(struct mca_banks *, mce_banks_owned);
-bool __read_mostly cmci_support;
 static bool __read_mostly ser_support;
 static bool __read_mostly mce_force_broadcast;
 boolean_param("mce_fb", mce_force_broadcast);
 
 static int __read_mostly nr_intel_ext_msrs;
 
-/* If mce_force_broadcast == 1, lmce_support will be disabled forcibly. */
-bool __read_mostly lmce_support;
-
 /* Intel SDM define bit15~bit0 of IA32_MCi_STATUS as the MC error code */
 #define INTEL_MCCOD_MASK 0xFFFF
 
diff --git a/xen/arch/x86/cpu/mcheck/non-fatal.c b/xen/arch/x86/cpu/mcheck/non-fatal.c
index c5d8e3aea9..52e16ad499 100644
--- a/xen/arch/x86/cpu/mcheck/non-fatal.c
+++ b/xen/arch/x86/cpu/mcheck/non-fatal.c
@@ -24,15 +24,19 @@ static int __init cf_check init_nonfatal_mce_checker(void)
 	 * Check for non-fatal errors every MCE_RATE s
 	 */
 	switch (c->x86_vendor) {
+#ifdef CONFIG_AMD
 	case X86_VENDOR_AMD:
 	case X86_VENDOR_HYGON:
 		/* Assume we are on K8 or newer AMD or Hygon CPU here */
 		amd_nonfatal_mcheck_init(c);
 		break;
+#endif
 
+#ifdef CONFIG_INTEL
 	case X86_VENDOR_INTEL:
 		intel_nonfatal_mcheck_init(c);
 		break;
+#endif
 
 	default:
 		return -ENODEV;
-- 
2.25.1




* Re: [XEN PATCH v3 2/6] x86/intel: move vmce_has_lmce() routine to header
  2024-05-14  8:20 ` [XEN PATCH v3 2/6] x86/intel: move vmce_has_lmce() routine to header Sergiy Kibrik
@ 2024-05-16  9:39   ` Jan Beulich
  2024-05-20  9:32     ` Sergiy Kibrik
  0 siblings, 1 reply; 16+ messages in thread
From: Jan Beulich @ 2024-05-16  9:39 UTC (permalink / raw
  To: Sergiy Kibrik
  Cc: Andrew Cooper, Roger Pau Monné, Stefano Stabellini,
	xen-devel

On 14.05.2024 10:20, Sergiy Kibrik wrote:
> Moving this function out of mce_intel.c would make it possible to disable
> build of Intel MCE code later on, because the function gets called from
> common x86 code.

Why "would"? "Will" or "is going to" would seem more to the point to me.
But anyway.

> --- a/xen/arch/x86/cpu/mcheck/mce.h
> +++ b/xen/arch/x86/cpu/mcheck/mce.h
> @@ -170,6 +170,11 @@ static inline int mce_bank_msr(const struct vcpu *v, uint32_t msr)
>      return 0;
>  }
>  
> +static inline bool vmce_has_lmce(const struct vcpu *v)
> +{
> +    return v->arch.vmce.mcg_cap & MCG_LMCE_P;
> +}

Is there a particular reason this is placed here, rather than ...

> --- a/xen/arch/x86/include/asm/mce.h
> +++ b/xen/arch/x86/include/asm/mce.h
> @@ -41,7 +41,6 @@ extern void vmce_init_vcpu(struct vcpu *v);
>  extern int vmce_restore_vcpu(struct vcpu *v, const struct hvm_vmce_vcpu *ctxt);
>  extern int vmce_wrmsr(uint32_t msr, uint64_t val);
>  extern int vmce_rdmsr(uint32_t msr, uint64_t *val);
> -extern bool vmce_has_lmce(const struct vcpu *v);
>  extern int vmce_enable_mca_cap(struct domain *d, uint64_t cap);

... in the file the declaration was in, thus avoiding ...

> --- a/xen/arch/x86/msr.c
> +++ b/xen/arch/x86/msr.c
> @@ -24,6 +24,8 @@
>  
>  #include <public/hvm/params.h>
>  
> +#include "cpu/mcheck/mce.h"

... the need for such a non-standard, cross-directory #include?

Jan



* Re: [XEN PATCH v3 5/6] x86/MCE: add default switch case in init_nonfatal_mce_checker()
  2024-05-14  8:26 ` [XEN PATCH v3 5/6] x86/MCE: add default switch case in init_nonfatal_mce_checker() Sergiy Kibrik
@ 2024-05-16  9:44   ` Jan Beulich
  2024-05-16  9:46     ` Jan Beulich
  0 siblings, 1 reply; 16+ messages in thread
From: Jan Beulich @ 2024-05-16  9:44 UTC (permalink / raw
  To: Sergiy Kibrik
  Cc: Andrew Cooper, Roger Pau Monné, Stefano Stabellini,
	xen-devel

On 14.05.2024 10:26, Sergiy Kibrik wrote:
> The default switch case block is likely wanted here, to handle situation
> e.g. of unexpected c->x86_vendor value -- then no mcheck init is done, but
> misleading message still gets logged anyway.

With "likely" dropped I'm okay with this as far as the addition of the default
label goes. However, ...

> --- a/xen/arch/x86/cpu/mcheck/non-fatal.c
> +++ b/xen/arch/x86/cpu/mcheck/non-fatal.c
> @@ -29,9 +29,13 @@ static int __init cf_check init_nonfatal_mce_checker(void)
>  		/* Assume we are on K8 or newer AMD or Hygon CPU here */
>  		amd_nonfatal_mcheck_init(c);
>  		break;
> +
>  	case X86_VENDOR_INTEL:
>  		intel_nonfatal_mcheck_init(c);
>  		break;
> +
> +	default:
> +		return -ENODEV;
>  	}
>  	printk(KERN_INFO "mcheck_poll: Machine check polling timer started.\n");
>  	return 0;

... converting this to an error return still isn't justified. On one hand it's
benign because we still don't check return values from initcalls. Yet otoh it
isn't really an error, is it?

Jan



* Re: [XEN PATCH v3 5/6] x86/MCE: add default switch case in init_nonfatal_mce_checker()
  2024-05-16  9:44   ` Jan Beulich
@ 2024-05-16  9:46     ` Jan Beulich
  2024-05-20  9:41       ` Sergiy Kibrik
  0 siblings, 1 reply; 16+ messages in thread
From: Jan Beulich @ 2024-05-16  9:46 UTC (permalink / raw
  To: Sergiy Kibrik
  Cc: Andrew Cooper, Roger Pau Monné, Stefano Stabellini,
	xen-devel

On 16.05.2024 11:44, Jan Beulich wrote:
> On 14.05.2024 10:26, Sergiy Kibrik wrote:
>> The default switch case block is likely wanted here, to handle situation
>> e.g. of unexpected c->x86_vendor value -- then no mcheck init is done, but
>> misleading message still gets logged anyway.
> 
> With "likely" dropped I'm okay with this as far as the addition of the default
> label goes. However, ...
> 
>> --- a/xen/arch/x86/cpu/mcheck/non-fatal.c
>> +++ b/xen/arch/x86/cpu/mcheck/non-fatal.c
>> @@ -29,9 +29,13 @@ static int __init cf_check init_nonfatal_mce_checker(void)
>>  		/* Assume we are on K8 or newer AMD or Hygon CPU here */
>>  		amd_nonfatal_mcheck_init(c);
>>  		break;
>> +
>>  	case X86_VENDOR_INTEL:
>>  		intel_nonfatal_mcheck_init(c);
>>  		break;
>> +
>> +	default:
>> +		return -ENODEV;
>>  	}
>>  	printk(KERN_INFO "mcheck_poll: Machine check polling timer started.\n");
>>  	return 0;
> 
> ... converting this to an error return still isn't justified. On one hand it's
> benign because we still don't check return values from initcalls. Yet otoh it
> isn't really an error, is it?

I realize earlier in the function there are error returns, too. I question at
least the former of them as well. And the latter shouldn't be an error at least
when the vendor isn't going to be handled in the switch().

Jan



* Re: [XEN PATCH v3 6/6] x86/MCE: optional build of AMD/Intel MCE code
  2024-05-14  8:28 ` [XEN PATCH v3 6/6] x86/MCE: optional build of AMD/Intel MCE code Sergiy Kibrik
@ 2024-05-16  9:55   ` Jan Beulich
  0 siblings, 0 replies; 16+ messages in thread
From: Jan Beulich @ 2024-05-16  9:55 UTC (permalink / raw
  To: Sergiy Kibrik
  Cc: Andrew Cooper, Roger Pau Monné, Stefano Stabellini,
	xen-devel

On 14.05.2024 10:28, Sergiy Kibrik wrote:
> --- a/xen/arch/x86/cpu/mcheck/mce.c
> +++ b/xen/arch/x86/cpu/mcheck/mce.c
> @@ -38,6 +38,10 @@ DEFINE_PER_CPU_READ_MOSTLY(unsigned int, nr_mce_banks);
>  unsigned int __read_mostly firstbank;
>  unsigned int __read_mostly ppin_msr;
>  uint8_t __read_mostly cmci_apic_vector;
> +bool __read_mostly cmci_support;
> +
> +/* If mce_force_broadcast == 1, lmce_support will be disabled forcibly. */
> +bool __read_mostly lmce_support;

While moving these, did you consider switching to __ro_after_init? Preferably
with that
Acked-by: Jan Beulich <jbeulich@suse.com>

Jan
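
[Editor's note] The annotation Jan suggests would presumably look like the
following sketch (not the actual follow-up patch; __ro_after_init is Xen's
attribute for data that is only written during boot and read-only thereafter):

```c
bool __ro_after_init cmci_support;

/* If mce_force_broadcast == 1, lmce_support will be disabled forcibly. */
bool __ro_after_init lmce_support;
```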



* Re: [XEN PATCH v3 2/6] x86/intel: move vmce_has_lmce() routine to header
  2024-05-16  9:39   ` Jan Beulich
@ 2024-05-20  9:32     ` Sergiy Kibrik
  2024-05-21  6:05       ` Jan Beulich
  0 siblings, 1 reply; 16+ messages in thread
From: Sergiy Kibrik @ 2024-05-20  9:32 UTC (permalink / raw
  To: Jan Beulich
  Cc: Andrew Cooper, Roger Pau Monné, Stefano Stabellini,
	xen-devel

16.05.24 12:39, Jan Beulich:
> On 14.05.2024 10:20, Sergiy Kibrik wrote:
>> Moving this function out of mce_intel.c would make it possible to disable
>> build of Intel MCE code later on, because the function gets called from
>> common x86 code.
> 
> Why "would"? "Will" or "is going to" would seem more to the point to me.

yes, sure

> But anyway.
> 
>> --- a/xen/arch/x86/cpu/mcheck/mce.h
>> +++ b/xen/arch/x86/cpu/mcheck/mce.h
>> @@ -170,6 +170,11 @@ static inline int mce_bank_msr(const struct vcpu *v, uint32_t msr)
>>       return 0;
>>   }
>>   
>> +static inline bool vmce_has_lmce(const struct vcpu *v)
>> +{
>> +    return v->arch.vmce.mcg_cap & MCG_LMCE_P;
>> +}
> 
> Is there a particular reason this is placed here, rather than ...
> 
>> --- a/xen/arch/x86/include/asm/mce.h
>> +++ b/xen/arch/x86/include/asm/mce.h
>> @@ -41,7 +41,6 @@ extern void vmce_init_vcpu(struct vcpu *v);
>>   extern int vmce_restore_vcpu(struct vcpu *v, const struct hvm_vmce_vcpu *ctxt);
>>   extern int vmce_wrmsr(uint32_t msr, uint64_t val);
>>   extern int vmce_rdmsr(uint32_t msr, uint64_t *val);
>> -extern bool vmce_has_lmce(const struct vcpu *v);
>>   extern int vmce_enable_mca_cap(struct domain *d, uint64_t cap);
> 
> ... in the file the declaration was in, thus avoiding ...
> 
>> --- a/xen/arch/x86/msr.c
>> +++ b/xen/arch/x86/msr.c
>> @@ -24,6 +24,8 @@
>>   
>>   #include <public/hvm/params.h>
>>   
>> +#include "cpu/mcheck/mce.h"
> 
> ... the need for such a non-standard, cross-directory #include?
> 


This is because MCG_LMCE_P is defined in arch/x86/cpu/mcheck/x86_mca.h 
-- so either MCG_LMCE_P (+ a bunch of MCG_* declarations) has to be 
moved to common header to be accessible, or local x86_mca.h got to be 
included from common arch/x86/include/asm/mce.h.

As for the MCG_* declarations movement I didn't think there's a good 
enough reason to do it; as for the inclusion of x86_mca.h it didn't look 
nice at all.

Are there any more options?

  -Sergiy



* Re: [XEN PATCH v3 5/6] x86/MCE: add default switch case in init_nonfatal_mce_checker()
  2024-05-16  9:46     ` Jan Beulich
@ 2024-05-20  9:41       ` Sergiy Kibrik
  0 siblings, 0 replies; 16+ messages in thread
From: Sergiy Kibrik @ 2024-05-20  9:41 UTC (permalink / raw
  To: Jan Beulich
  Cc: Andrew Cooper, Roger Pau Monné, Stefano Stabellini,
	xen-devel

16.05.24 12:46, Jan Beulich:
>>> --- a/xen/arch/x86/cpu/mcheck/non-fatal.c
>>> +++ b/xen/arch/x86/cpu/mcheck/non-fatal.c
>>> @@ -29,9 +29,13 @@ static int __init cf_check init_nonfatal_mce_checker(void)
>>>   		/* Assume we are on K8 or newer AMD or Hygon CPU here */
>>>   		amd_nonfatal_mcheck_init(c);
>>>   		break;
>>> +
>>>   	case X86_VENDOR_INTEL:
>>>   		intel_nonfatal_mcheck_init(c);
>>>   		break;
>>> +
>>> +	default:
>>> +		return -ENODEV;
>>>   	}
>>>   	printk(KERN_INFO "mcheck_poll: Machine check polling timer started.\n");
>>>   	return 0;
>> ... converting this to an error return still isn't justified. On one hand it's
>> benign because we still don't check return values from initcalls. Yet otoh it
>> isn't really an error, is it?
> I realize earlier in the function there are error returns, too. I question at
> least the former of them as well. And the latter shouldn't be an error at least
> when the vendor isn't going to be handled in the switch().

I'll just return 0 then, and maybe leave a comment, as this code will 
start to look a bit unnatural to me.
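
[Editor's note] The follow-up being agreed on here might then read roughly as
(sketch only, not the actual v4 change):

```c
	default:
		/* No non-fatal MCE poller for this vendor - not an error. */
		return 0;
	}
```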

   -Sergiy



* Re: [XEN PATCH v3 2/6] x86/intel: move vmce_has_lmce() routine to header
  2024-05-20  9:32     ` Sergiy Kibrik
@ 2024-05-21  6:05       ` Jan Beulich
  2024-05-21 10:00         ` Sergiy Kibrik
  0 siblings, 1 reply; 16+ messages in thread
From: Jan Beulich @ 2024-05-21  6:05 UTC (permalink / raw
  To: Sergiy Kibrik
  Cc: Andrew Cooper, Roger Pau Monné, Stefano Stabellini,
	xen-devel

On 20.05.2024 11:32, Sergiy Kibrik wrote:
> 16.05.24 12:39, Jan Beulich:
>> On 14.05.2024 10:20, Sergiy Kibrik wrote:
>>> Moving this function out of mce_intel.c would make it possible to disable
>>> build of Intel MCE code later on, because the function gets called from
>>> common x86 code.
>>
>> Why "would"? "Will" or "is going to" would seem more to the point to me.
> 
> yes, sure
> 
>> But anyway.
>>
>>> --- a/xen/arch/x86/cpu/mcheck/mce.h
>>> +++ b/xen/arch/x86/cpu/mcheck/mce.h
>>> @@ -170,6 +170,11 @@ static inline int mce_bank_msr(const struct vcpu *v, uint32_t msr)
>>>       return 0;
>>>   }
>>>   
>>> +static inline bool vmce_has_lmce(const struct vcpu *v)
>>> +{
>>> +    return v->arch.vmce.mcg_cap & MCG_LMCE_P;
>>> +}
>>
>> Is there a particular reason this is placed here, rather than ...
>>
>>> --- a/xen/arch/x86/include/asm/mce.h
>>> +++ b/xen/arch/x86/include/asm/mce.h
>>> @@ -41,7 +41,6 @@ extern void vmce_init_vcpu(struct vcpu *v);
>>>   extern int vmce_restore_vcpu(struct vcpu *v, const struct hvm_vmce_vcpu *ctxt);
>>>   extern int vmce_wrmsr(uint32_t msr, uint64_t val);
>>>   extern int vmce_rdmsr(uint32_t msr, uint64_t *val);
>>> -extern bool vmce_has_lmce(const struct vcpu *v);
>>>   extern int vmce_enable_mca_cap(struct domain *d, uint64_t cap);
>>
>> ... in the file the declaration was in, thus avoiding ...
>>
>>> --- a/xen/arch/x86/msr.c
>>> +++ b/xen/arch/x86/msr.c
>>> @@ -24,6 +24,8 @@
>>>   
>>>   #include <public/hvm/params.h>
>>>   
>>> +#include "cpu/mcheck/mce.h"
>>
>> ... the need for such a non-standard, cross-directory #include?
>>
> 
> 
> This is because MCG_LMCE_P is defined in arch/x86/cpu/mcheck/x86_mca.h 
> -- so either MCG_LMCE_P (plus a bunch of MCG_* declarations) has to be 
> moved to a common header to be accessible, or the local x86_mca.h has 
> to be included from the common arch/x86/include/asm/mce.h.
> 
> As for moving the MCG_* declarations, I didn't think there's a good 
> enough reason to do it; as for the inclusion of x86_mca.h, it didn't 
> look nice at all.

I'm afraid I don't follow the latter: Why's including x86_mca.h any worse
than what you do right now?

Jan



* Re: [XEN PATCH v3 2/6] x86/intel: move vmce_has_lmce() routine to header
  2024-05-21  6:05       ` Jan Beulich
@ 2024-05-21 10:00         ` Sergiy Kibrik
  2024-05-21 10:19           ` Jan Beulich
  0 siblings, 1 reply; 16+ messages in thread
From: Sergiy Kibrik @ 2024-05-21 10:00 UTC (permalink / raw
  To: Jan Beulich
  Cc: Andrew Cooper, Roger Pau Monné, Stefano Stabellini,
	xen-devel

21.05.24 09:05, Jan Beulich:
>> This is because MCG_LMCE_P is defined in arch/x86/cpu/mcheck/x86_mca.h
>> -- so either MCG_LMCE_P (plus a bunch of MCG_* declarations) has to be
>> moved to a common header to be accessible, or the local x86_mca.h has
>> to be included from the common arch/x86/include/asm/mce.h.
>>
>> As for moving the MCG_* declarations, I didn't think there's a good
>> enough reason to do it; as for the inclusion of x86_mca.h, it didn't
>> look nice at all.
> I'm afraid I don't follow the latter: Why's including x86_mca.h any worse
> than what you do right now?

To include x86_mca.h from asm/mce.h, something like this line would be 
needed:

#include "../../cpu/mcheck/x86_mca.h"

I've found only two includes of that kind, so I presume they're not common.
Besides, xen/sched.h includes asm/mce.h before the declaration of struct 
vcpu, so the body of vmce_has_lmce(const struct vcpu *v) can't really be 
compiled in asm/mce.h.

   -Sergiy



* Re: [XEN PATCH v3 2/6] x86/intel: move vmce_has_lmce() routine to header
  2024-05-21 10:00         ` Sergiy Kibrik
@ 2024-05-21 10:19           ` Jan Beulich
  0 siblings, 0 replies; 16+ messages in thread
From: Jan Beulich @ 2024-05-21 10:19 UTC (permalink / raw
  To: Sergiy Kibrik
  Cc: Andrew Cooper, Roger Pau Monné, Stefano Stabellini,
	xen-devel

On 21.05.2024 12:00, Sergiy Kibrik wrote:
> 21.05.24 09:05, Jan Beulich:
>>> This is because MCG_LMCE_P is defined in arch/x86/cpu/mcheck/x86_mca.h
>>> -- so either MCG_LMCE_P (plus a bunch of MCG_* declarations) has to be
>>> moved to a common header to be accessible, or the local x86_mca.h has
>>> to be included from the common arch/x86/include/asm/mce.h.
>>>
>>> As for moving the MCG_* declarations, I didn't think there's a good
>>> enough reason to do it; as for the inclusion of x86_mca.h, it didn't
>>> look nice at all.
>> I'm afraid I don't follow the latter: Why's including x86_mca.h any worse
>> than what you do right now?
> 
> To include x86_mca.h from asm/mce.h, something like this line would be 
> needed:
> 
> #include "../../cpu/mcheck/x86_mca.h"
> 
> I've found only two includes of that kind, so I presume they're not common.

Indeed, and I have to apologize for not reading your earlier reply quite
right.

Jan

> Besides, xen/sched.h includes asm/mce.h before the declaration of struct 
> vcpu, so the body of vmce_has_lmce(const struct vcpu *v) can't really be 
> compiled in asm/mce.h.
> 
>    -Sergiy




end of thread, other threads:[~2024-05-21 10:19 UTC | newest]

Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-05-14  8:16 [XEN PATCH v3 0/6] x86: make Intel/AMD vPMU & MCE support configurable Sergiy Kibrik
2024-05-14  8:18 ` [XEN PATCH v3 1/6] x86/vpmu: separate amd/intel vPMU code Sergiy Kibrik
2024-05-14  8:20 ` [XEN PATCH v3 2/6] x86/intel: move vmce_has_lmce() routine to header Sergiy Kibrik
2024-05-16  9:39   ` Jan Beulich
2024-05-20  9:32     ` Sergiy Kibrik
2024-05-21  6:05       ` Jan Beulich
2024-05-21 10:00         ` Sergiy Kibrik
2024-05-21 10:19           ` Jan Beulich
2024-05-14  8:22 ` [XEN PATCH v3 3/6] x86/MCE: guard access to Intel/AMD-specific MCA MSRs Sergiy Kibrik
2024-05-14  8:24 ` [XEN PATCH v3 4/6] x86/MCE: guard {intel/amd}_mcheck_init() calls Sergiy Kibrik
2024-05-14  8:26 ` [XEN PATCH v3 5/6] x86/MCE: add default switch case in init_nonfatal_mce_checker() Sergiy Kibrik
2024-05-16  9:44   ` Jan Beulich
2024-05-16  9:46     ` Jan Beulich
2024-05-20  9:41       ` Sergiy Kibrik
2024-05-14  8:28 ` [XEN PATCH v3 6/6] x86/MCE: optional build of AMD/Intel MCE code Sergiy Kibrik
2024-05-16  9:55   ` Jan Beulich
