* [RFC PATCH v1 0/2] Add a new acquire resource to query vcpu statistics
@ 2022-08-24 13:27 Matias Ezequiel Vara Larsen
  2022-08-24 13:27 ` [RFC PATCH v1 1/2] xen/memory : Add a stats_table resource type Matias Ezequiel Vara Larsen
                   ` (2 more replies)
  0 siblings, 3 replies; 5+ messages in thread
From: Matias Ezequiel Vara Larsen @ 2022-08-24 13:27 UTC
  To: xen-devel
  Cc: Matias Ezequiel Vara Larsen, Jan Beulich, Andrew Cooper, Wei Liu,
	George Dunlap, Julien Grall, Stefano Stabellini, Dario Faggioli,
	Anthony PERARD

Hello all,

The purpose of this RFC is to get feedback about a new acquire resource that
exposes vcpu statistics for a given domain. The current mechanism to get those
statistics is by querying the hypervisor. This mechanism relies on a hypercall
and holds the domctl spinlock during its execution. When a pv tool like xcp-rrdd
periodically samples these counters, it ends up affecting other paths that share
that spinlock. By using acquire resources, the pv tool only requires a few
hypercalls to set up the shared memory region, and samples are then obtained
without issuing any further hypercall. The original idea was suggested by Andrew
Cooper, with whom I have been discussing how to implement the current PoC. You
can find the RFC patch series at [1]. The series is rebased on top of stable-4.15.

I am currently a bit blocked on 1) what to expose and 2) how to expose it. For
1), I decided to expose what xcp-rrdd is querying, i.e., what
XEN_DOMCTL_getvcpuinfo returns. More precisely, xcp-rrdd reads
runstate.time[RUNSTATE_running], which is a uint64_t counter. However, the time
spent in the other runstates may be interesting too. Regarding 2), I am not sure
whether simply using an array of uint64_t is enough or whether a different
interface should be exposed. The remaining question is when to update the
values; for the moment, I am updating this counter during
vcpu_runstate_change().
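
To make 1) and 2) more concrete, below is a sketch of what an extended per-vcpu
entry could look like if the other runstates turn out to be useful. This layout
is only an illustration and is not part of this series; the field names are
hypothetical, and the runstates mirror the RUNSTATE_* values already defined in
xen/include/public/vcpu.h:

    struct vcpu_stats {
        /*
         * Time accumulated in each runstate, in nanoseconds, mirroring
         * v->runstate.time[]. Only RUNSTATE_running is exposed in this series.
         */
        uint64_t runstate_running_time;   /* RUNSTATE_running  */
        uint64_t runstate_runnable_time;  /* RUNSTATE_runnable */
        uint64_t runstate_blocked_time;   /* RUNSTATE_blocked  */
        uint64_t runstate_offline_time;   /* RUNSTATE_offline  */
    };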

The current series includes a simple pv tool that shows how this new interface is
used. This tool maps the counter and periodically samples it.
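
For reference, the core of that tool boils down to the following sequence
(abridged from patch 2 of this series; includes, declarations, error handling
and signal handling omitted):

    fh = xenforeignmemory_open(NULL, 0);
    xenforeignmemory_resource_size(fh, domid, XENMEM_resource_stats_table,
                                   0, &size);
    res = xenforeignmemory_map_resource(fh, domid, XENMEM_resource_stats_table,
                                        0, XENMEM_resource_stats_frame_vcpustats,
                                        size >> XC_PAGE_SHIFT, (void **)&info,
                                        PROT_READ, 0);
    while ( !interrupted ) {
        sleep(frec);
        printf("running cpu_time: %ld\n",
               info->vcpu_info[vcpu].runstate_running_time);
    }
    xenforeignmemory_unmap_resource(fh, res);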

Any feedback/help would be appreciated.

Thanks, Matias.

[1] https://github.com/MatiasVara/xen/tree/feature_stats

Changes in v1:
- rework how the resource is allocated and released
- rework when the resource is allocated so that allocation only happens when
  the resource is requested
- rework the structure shared between the tool and Xen to make it extensible to
  new counters and declare it in a public header

There are still the following open questions:
   - the consumer may fetch inconsistent data (one possible mitigation is
     sketched after this list)
   - the resource should be released when there are no more readers, otherwise
     we keep updating it on a hot path
   - one frame can host up to 512 vcpus. Should this limit be checked when
     updating? Should it be possible to allocate more than one frame for vcpu
     counters?
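
Regarding the first open question, one possible mitigation (not part of this
series, just a sketch of a well-known pattern) would be to extend the entry
with a version field that Xen increments before and after each update, so a
consumer can detect a torn read and retry. The producer would need write
barriers around the counter update, and the consumer read barriers likewise:

    /* Hypothetical layout; field names are illustrative only. */
    struct vcpu_stats {
        uint32_t version;             /* odd while an update is in progress */
        uint32_t pad;
        uint64_t runstate_running_time;
    };

    /* Consumer side: retry until the version is even and unchanged. */
    static uint64_t read_running_time(const volatile struct vcpu_stats *s)
    {
        uint32_t v1, v2;
        uint64_t t;

        do {
            v1 = s->version;
            t = s->runstate_running_time;
            v2 = s->version;
        } while ( (v1 & 1) || v1 != v2 );

        return t;
    }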

Matias Ezequiel Vara Larsen (2):
  xen/memory : Add a stats_table resource type
  tools/misc: Add xen-vcpus-stats tool

 tools/misc/Makefile          |  6 +++
 tools/misc/xen-vcpus-stats.c | 76 +++++++++++++++++++++++++++++
 xen/arch/x86/hvm/hvm.c       |  2 +
 xen/common/memory.c          | 94 ++++++++++++++++++++++++++++++++++++
 xen/common/sched/core.c      |  7 +++
 xen/include/public/memory.h  |  3 ++
 xen/include/public/vcpu.h    | 10 ++++
 xen/include/xen/mm.h         |  2 +
 xen/include/xen/sched.h      |  5 ++
 9 files changed, 205 insertions(+)
 create mode 100644 tools/misc/xen-vcpus-stats.c

-- 
2.25.1




* [RFC PATCH v1 1/2] xen/memory : Add a stats_table resource type
  2022-08-24 13:27 [RFC PATCH v1 0/2] Add a new acquire resource to query vcpu statistics Matias Ezequiel Vara Larsen
@ 2022-08-24 13:27 ` Matias Ezequiel Vara Larsen
  2022-08-24 13:27 ` [RFC PATCH v1 2/2] tools/misc: Add xen-vcpus-stats tool Matias Ezequiel Vara Larsen
  2022-09-29 15:55 ` [RFC PATCH v1 0/2] Add a new acquire resource to query vcpu statistics Jan Beulich
  2 siblings, 0 replies; 5+ messages in thread
From: Matias Ezequiel Vara Larsen @ 2022-08-24 13:27 UTC
  To: xen-devel
  Cc: Matias Ezequiel Vara Larsen, Jan Beulich, Andrew Cooper, Wei Liu,
	George Dunlap, Julien Grall, Stefano Stabellini, Dario Faggioli

This commit proposes a new mechanism to query the RUNSTATE_running counter for
a given vcpu from a dom0 userspace application, exposing that counter through
the acquire_resource interface. The current mechanism relies on
XEN_DOMCTL_getvcpuinfo, which holds the single global domctl_lock for the
entire hypercall and iterates over every vcpu in the system for every update,
thus impacting operations that share that lock.

The counter is instead exposed via the xenforeignmemory interface, which avoids
issuing the hypercall and holding the lock. For that purpose, a new resource
type named stats_table is added. The first frame of this resource stores
per-vcpu counters, with one entry of type struct vcpu_stats per vcpu. The frame
is only allocated when the resource is requested, and it is released when the
domain is destroyed.

Note that this counter is updated on a hot path; thus, in this commit, the copy
only happens when the shared frame has actually been allocated, i.e., when the
resource has been requested.

Note that the exposed structure is extensible in two ways. First, the structure
vcpu_stats can be extended with new per-vcpu counters as long as it fits in a
frame. Second, new frames can be added in case new counters are required.

Signed-off-by: Matias Ezequiel Vara Larsen <matias.vara@vates.fr>
---
Changes in v1:
- rework the allocation and releasing of the frames
- use the zero frame for per-vcpu counters that are listed as an array
- allocate vcpu stats frames only when the resource is requested
- rewrite commit message
- add the vcpu_stats structure to keep per-vcpu counters
- add the shared_vcpustatspage to keep an array of per-vcpu counters for a
  given domain
- declare the structures in a public header 
- define the vcpustats_page in the domain structure
---
 xen/arch/x86/hvm/hvm.c      |  2 +
 xen/common/memory.c         | 94 +++++++++++++++++++++++++++++++++++++
 xen/common/sched/core.c     |  7 +++
 xen/include/public/memory.h |  3 ++
 xen/include/public/vcpu.h   | 10 ++++
 xen/include/xen/mm.h        |  2 +
 xen/include/xen/sched.h     |  5 ++
 7 files changed, 123 insertions(+)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index ddd001a6ad..1ef6cb5ff0 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -741,6 +741,8 @@ void hvm_domain_relinquish_resources(struct domain *d)
 
     ioreq_server_destroy_all(d);
 
+    stats_free_vcpu_mfn(d);
+
     msixtbl_pt_cleanup(d);
 
     /* Stop all asynchronous timer actions. */
diff --git a/xen/common/memory.c b/xen/common/memory.c
index 297b98a562..749486d5d4 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -1078,6 +1078,12 @@ unsigned int ioreq_server_max_frames(const struct domain *d)
     return nr;
 }
 
+unsigned int stats_table_max_frames(const struct domain *d)
+{
+    /* One frame per 512 vcpus. */
+    return 1;
+}
+
 /*
  * Return 0 on any kind of error.  Caller converts to -EINVAL.
  *
@@ -1099,6 +1105,9 @@ static unsigned int resource_max_frames(const struct domain *d,
     case XENMEM_resource_vmtrace_buf:
         return d->vmtrace_size >> PAGE_SHIFT;
 
+    case XENMEM_resource_stats_table:
+        return stats_table_max_frames(d);
+
     default:
         return -EOPNOTSUPP;
     }
@@ -1162,6 +1171,88 @@ static int acquire_vmtrace_buf(
     return nr_frames;
 }
 
+void stats_free_vcpu_mfn(struct domain * d)
+{
+    struct page_info *pg = d->vcpustats_page.pg;
+
+    if ( !pg )
+        return;
+
+    d->vcpustats_page.pg = NULL;
+
+    if ( d->vcpustats_page.va )
+        unmap_domain_page_global(d->vcpustats_page.va);
+
+    d->vcpustats_page.va = NULL;
+
+    put_page_alloc_ref(pg);
+    put_page_and_type(pg);
+}
+
+static int stats_vcpu_alloc_mfn(struct domain *d)
+{
+    struct page_info *pg;
+
+    pg = alloc_domheap_page(d, MEMF_no_refcount);
+
+    if ( !pg )
+        return -ENOMEM;
+
+    if ( !get_page_and_type(pg, d, PGT_writable_page) ) {
+        put_page_alloc_ref(pg);
+        return -ENODATA;
+    }
+
+    d->vcpustats_page.va = __map_domain_page_global(pg);
+    if ( !d->vcpustats_page.va )
+        goto fail;
+
+    d->vcpustats_page.pg = pg;
+    clear_page(d->vcpustats_page.va);
+    return 1;
+
+fail:
+    put_page_alloc_ref(pg);
+    put_page_and_type(pg);
+
+    return -ENOMEM;
+}
+
+static int acquire_stats_table(struct domain *d,
+                                unsigned int id,
+                                unsigned int frame,
+                                unsigned int nr_frames,
+                                xen_pfn_t mfn_list[])
+{
+    mfn_t mfn;
+    int rc;
+    unsigned int i;
+
+    if ( !d )
+        return -ENOENT;
+
+    for ( i = 0; i < nr_frames; i++ )
+    {
+        switch ( i )
+        {
+        case XENMEM_resource_stats_frame_vcpustats:
+            if ( !d->vcpustats_page.pg ) {
+                rc = stats_vcpu_alloc_mfn(d);
+                if ( rc < 1 )
+                    return rc;
+            }
+            mfn = page_to_mfn(d->vcpustats_page.pg);
+            mfn_list[i] = mfn_x(mfn);
+            break;
+
+        default:
+            return -EINVAL;
+        }
+    }
+
+    return nr_frames;
+}
+
 /*
  * Returns -errno on error, or positive in the range [1, nr_frames] on
  * success.  Returning less than nr_frames contitutes a request for a
@@ -1182,6 +1273,9 @@ static int _acquire_resource(
     case XENMEM_resource_vmtrace_buf:
         return acquire_vmtrace_buf(d, id, frame, nr_frames, mfn_list);
 
+    case XENMEM_resource_stats_table:
+        return acquire_stats_table(d, id, frame, nr_frames, mfn_list);
+
     default:
         return -EOPNOTSUPP;
     }
diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index 8f4b1ca10d..87cfa172ad 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -264,6 +264,8 @@ static inline void vcpu_runstate_change(
 {
     s_time_t delta;
     struct sched_unit *unit = v->sched_unit;
+    shared_vcpustatspage_t * vcpustats_va;
+    struct domain *d = v->domain;
 
     ASSERT(spin_is_locked(get_sched_res(v->processor)->schedule_lock));
     if ( v->runstate.state == new_state )
@@ -287,6 +289,11 @@ static inline void vcpu_runstate_change(
     }
 
     v->runstate.state = new_state;
+
+    vcpustats_va = (shared_vcpustatspage_t*)d->vcpustats_page.va;
+    if ( vcpustats_va )
+        memcpy(&vcpustats_va->vcpu_info[v->vcpu_id].runstate_running_time,
+               &v->runstate.time[RUNSTATE_running], sizeof(v->runstate.time[RUNSTATE_running]));
 }
 
 void sched_guest_idle(void (*idle) (void), unsigned int cpu)
diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
index 50e73eef98..e1a10b8b97 100644
--- a/xen/include/public/memory.h
+++ b/xen/include/public/memory.h
@@ -626,6 +626,7 @@ struct xen_mem_acquire_resource {
 #define XENMEM_resource_ioreq_server 0
 #define XENMEM_resource_grant_table 1
 #define XENMEM_resource_vmtrace_buf 2
+#define XENMEM_resource_stats_table 3
 
     /*
      * IN - a type-specific resource identifier, which must be zero
@@ -683,6 +684,8 @@ struct xen_mem_acquire_resource {
 typedef struct xen_mem_acquire_resource xen_mem_acquire_resource_t;
 DEFINE_XEN_GUEST_HANDLE(xen_mem_acquire_resource_t);
 
+#define XENMEM_resource_stats_frame_vcpustats 0
+
 /*
  * XENMEM_get_vnumainfo used by guest to get
  * vNUMA topology from hypervisor.
diff --git a/xen/include/public/vcpu.h b/xen/include/public/vcpu.h
index 3623af932f..57a5ca777c 100644
--- a/xen/include/public/vcpu.h
+++ b/xen/include/public/vcpu.h
@@ -235,6 +235,16 @@ struct vcpu_register_time_memory_area {
 typedef struct vcpu_register_time_memory_area vcpu_register_time_memory_area_t;
 DEFINE_XEN_GUEST_HANDLE(vcpu_register_time_memory_area_t);
 
+struct vcpu_stats{
+   uint64_t runstate_running_time;
+};
+
+struct shared_vcpustatspage {
+   struct vcpu_stats vcpu_info[1];
+};
+
+typedef struct shared_vcpustatspage shared_vcpustatspage_t;
+
 #endif /* __XEN_PUBLIC_VCPU_H__ */
 
 /*
diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
index 667f9dac83..d1ca8b9aa8 100644
--- a/xen/include/xen/mm.h
+++ b/xen/include/xen/mm.h
@@ -134,6 +134,8 @@ int assign_pages(
 /* Dump info to serial console */
 void arch_dump_shared_mem_info(void);
 
+void stats_free_vcpu_mfn(struct domain * d);
+
 /*
  * Extra fault info types which are used to further describe
  * the source of an access violation.
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 5485d08afb..d9551ce35f 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -577,6 +577,11 @@ struct domain
         struct ioreq_server     *server[MAX_NR_IOREQ_SERVERS];
     } ioreq_server;
 #endif
+    /* Page that hosts vcpu stats */
+    struct {
+        struct page_info *pg;
+        void *va;
+    } vcpustats_page;
 };
 
 static inline struct page_list_head *page_to_list(
-- 
2.25.1




* [RFC PATCH v1 2/2] tools/misc: Add xen-vcpus-stats tool
  2022-08-24 13:27 [RFC PATCH v1 0/2] Add a new acquire resource to query vcpu statistics Matias Ezequiel Vara Larsen
  2022-08-24 13:27 ` [RFC PATCH v1 1/2] xen/memory : Add a stats_table resource type Matias Ezequiel Vara Larsen
@ 2022-08-24 13:27 ` Matias Ezequiel Vara Larsen
  2022-09-29 15:55 ` [RFC PATCH v1 0/2] Add a new acquire resource to query vcpu statistics Jan Beulich
  2 siblings, 0 replies; 5+ messages in thread
From: Matias Ezequiel Vara Larsen @ 2022-08-24 13:27 UTC
  To: xen-devel; +Cc: Matias Ezequiel Vara Larsen, Wei Liu, Anthony PERARD

Add a demonstration tool that uses the stats_table resource to
query vcpus' RUNSTATE_running counter for a DomU.

Signed-off-by: Matias Ezequiel Vara Larsen <matias.vara@vates.fr>
---
Changes in v1:
- change the name of the tool to xen-vcpus-stats
- set command line parameters in the same order that they are passed
- remove the libs.h header
- build by default
- remove errno, strerror, "\n", and fix indentation
- use errx() when errno is not needed
- better handle the number of pages requested and improve error messages
- use the shared_vcpustatspage_t structure
- use the correct frame id when requesting the resource
---
 tools/misc/Makefile          |  6 +++
 tools/misc/xen-vcpus-stats.c | 76 ++++++++++++++++++++++++++++++++++++
 2 files changed, 82 insertions(+)
 create mode 100644 tools/misc/xen-vcpus-stats.c

diff --git a/tools/misc/Makefile b/tools/misc/Makefile
index 2b683819d4..837e4b50da 100644
--- a/tools/misc/Makefile
+++ b/tools/misc/Makefile
@@ -49,6 +49,7 @@ TARGETS_COPY += xenpvnetboot
 
 # Everything which needs to be built
 TARGETS_BUILD := $(filter-out $(TARGETS_COPY),$(TARGETS_ALL))
+TARGETS_BUILD += xen-vcpus-stats
 
 # ... including build-only targets
 TARGETS_BUILD-$(CONFIG_X86)    += xen-vmtrace
@@ -135,4 +136,9 @@ xencov: xencov.o
 xen-ucode: xen-ucode.o
 	$(CC) $(LDFLAGS) -o $@ $< $(LDLIBS_libxenctrl) $(APPEND_LDFLAGS)
 
+xen-vcpus-stats.o: CFLAGS += $(CFLAGS_libxenforeignmemory)
+
+xen-vcpus-stats: xen-vcpus-stats.o
+	$(CC) $(LDFLAGS) -o $@ $< $(LDLIBS_libxenctrl) $(LDLIBS_libxenforeignmemory) $(APPEND_LDFLAGS)
+
 -include $(DEPS_INCLUDE)
diff --git a/tools/misc/xen-vcpus-stats.c b/tools/misc/xen-vcpus-stats.c
new file mode 100644
index 0000000000..d56d1493e4
--- /dev/null
+++ b/tools/misc/xen-vcpus-stats.c
@@ -0,0 +1,76 @@
+#include <err.h>
+#include <errno.h>
+#include <error.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/mman.h>
+#include <signal.h>
+
+#include <xenctrl.h>
+#include <xenforeignmemory.h>
+#include <xen/vcpu.h>
+
+static sig_atomic_t interrupted;
+static void close_handler(int signum)
+{
+    interrupted = 1;
+}
+
+int main(int argc, char **argv)
+{
+    xenforeignmemory_handle *fh;
+    xenforeignmemory_resource_handle *res;
+    size_t size;
+    int rc, domid, frec, vcpu;
+    shared_vcpustatspage_t * info;
+    struct sigaction act;
+
+    if (argc != 4 ) {
+        fprintf(stderr, "Usage: %s <domid> <vcpu> <period>\n", argv[0]);
+        return 1;
+    }
+
+    domid = atoi(argv[1]);
+    vcpu = atoi(argv[2]);
+    frec = atoi(argv[3]);
+
+    act.sa_handler = close_handler;
+    act.sa_flags = 0;
+    sigemptyset(&act.sa_mask);
+    sigaction(SIGHUP,  &act, NULL);
+    sigaction(SIGTERM, &act, NULL);
+    sigaction(SIGINT,  &act, NULL);
+    sigaction(SIGALRM, &act, NULL);
+
+    fh = xenforeignmemory_open(NULL, 0);
+
+    if ( !fh )
+        err(1, "xenforeignmemory_open");
+
+    rc = xenforeignmemory_resource_size(
+        fh, domid, XENMEM_resource_stats_table,
+        0, &size);
+
+    if ( rc )
+        err(1, "Fail: Get size");
+
+    res = xenforeignmemory_map_resource(
+        fh, domid, XENMEM_resource_stats_table,
+        0, XENMEM_resource_stats_frame_vcpustats, size >> XC_PAGE_SHIFT,
+        (void **)&info, PROT_READ, 0);
+
+    if ( !res )
+        err(1, "Fail: Map");
+
+    while ( !interrupted ) {
+        sleep(frec);
+        printf("running cpu_time: %ld\n", info->vcpu_info[vcpu].runstate_running_time);
+    }
+
+    rc = xenforeignmemory_unmap_resource(fh, res);
+    if ( rc )
+        err(1, "Fail: Unmap");
+
+    return 0;
+}
-- 
2.25.1




* Re: [RFC PATCH v1 0/2] Add a new acquire resource to query vcpu statistics
  2022-08-24 13:27 [RFC PATCH v1 0/2] Add a new acquire resource to query vcpu statistics Matias Ezequiel Vara Larsen
  2022-08-24 13:27 ` [RFC PATCH v1 1/2] xen/memory : Add a stats_table resource type Matias Ezequiel Vara Larsen
  2022-08-24 13:27 ` [RFC PATCH v1 2/2] tools/misc: Add xen-vcpus-stats tool Matias Ezequiel Vara Larsen
@ 2022-09-29 15:55 ` Jan Beulich
  2022-09-30 11:10   ` Matias Ezequiel Vara Larsen
  2 siblings, 1 reply; 5+ messages in thread
From: Jan Beulich @ 2022-09-29 15:55 UTC
  To: Matias Ezequiel Vara Larsen
  Cc: Matias Ezequiel Vara Larsen, Andrew Cooper, Wei Liu,
	George Dunlap, Julien Grall, Stefano Stabellini, Dario Faggioli,
	Anthony PERARD, xen-devel

On 24.08.2022 15:27, Matias Ezequiel Vara Larsen wrote:
> The purpose of this RFC is to get feedback about a new acquire resource that
> exposes vcpu statistics for a given domain. The current mechanism to get those
> statistics is by querying the hypervisor. This mechanism relies on a hypercall
> and holds the domctl spinlock during its execution. When a pv tool like xcp-rrdd
> periodically samples these counters, it ends up affecting other paths that share
> that spinlock. By using acquire resources, the pv tool only requires a few
> hypercalls to set the shared memory region and samples are got without issuing
> any other hypercall. The original idea has been suggested by Andrew Cooper to
> which I have been discussing about how to implement the current PoC. You can
> find the RFC patch series at [1]. The series is rebased on top of stable-4.15.
> 
> I am currently a bit blocked on 1) what to expose and 2) how to expose it. For
> 1), I decided to expose what xcp-rrdd is querying, e.g., XEN_DOMCTL_getvcpuinfo.
> More precisely, xcp-rrd gets runstate.time[RUNSTATE_running]. This is a uint64_t
> counter. However, the time spent in other states may be interesting too.
> Regarding 2), I am not sure if simply using an array of uint64_t is enough or if
> a different interface should be exposed. The remaining question is when to get
> new values. For the moment, I am updating this counter during
> vcpu_runstate_change().
> 
> The current series includes a simple pv tool that shows how this new interface is
> used. This tool maps the counter and periodically samples it.
> 
> Any feedback/help would be appreciated.

Before looking more closely - was there perhaps kind-of-review feedback
during the summit, which would make it more reasonable to look through
this once a v2 has appeared?

Jan



* Re: [RFC PATCH v1 0/2] Add a new acquire resource to query vcpu statistics
  2022-09-29 15:55 ` [RFC PATCH v1 0/2] Add a new acquire resource to query vcpu statistics Jan Beulich
@ 2022-09-30 11:10   ` Matias Ezequiel Vara Larsen
  0 siblings, 0 replies; 5+ messages in thread
From: Matias Ezequiel Vara Larsen @ 2022-09-30 11:10 UTC
  To: Jan Beulich
  Cc: Matias Ezequiel Vara Larsen, Andrew Cooper, Wei Liu,
	George Dunlap, Julien Grall, Stefano Stabellini, Dario Faggioli,
	Anthony PERARD, xen-devel

On Thu, Sep 29, 2022 at 05:55:50PM +0200, Jan Beulich wrote:
> On 24.08.2022 15:27, Matias Ezequiel Vara Larsen wrote:
> > The purpose of this RFC is to get feedback about a new acquire resource that
> > exposes vcpu statistics for a given domain. The current mechanism to get those
> > statistics is by querying the hypervisor. This mechanism relies on a hypercall
> > and holds the domctl spinlock during its execution. When a pv tool like xcp-rrdd
> > periodically samples these counters, it ends up affecting other paths that share
> > that spinlock. By using acquire resources, the pv tool only requires a few
> > hypercalls to set the shared memory region and samples are got without issuing
> > any other hypercall. The original idea has been suggested by Andrew Cooper to
> > which I have been discussing about how to implement the current PoC. You can
> > find the RFC patch series at [1]. The series is rebased on top of stable-4.15.
> > 
> > I am currently a bit blocked on 1) what to expose and 2) how to expose it. For
> > 1), I decided to expose what xcp-rrdd is querying, e.g., XEN_DOMCTL_getvcpuinfo.
> > More precisely, xcp-rrd gets runstate.time[RUNSTATE_running]. This is a uint64_t
> > counter. However, the time spent in other states may be interesting too.
> > Regarding 2), I am not sure if simply using an array of uint64_t is enough or if
> > a different interface should be exposed. The remaining question is when to get
> > new values. For the moment, I am updating this counter during
> > vcpu_runstate_change().
> > 
> > The current series includes a simple pv tool that shows how this new interface is
> > used. This tool maps the counter and periodically samples it.
> > 
> > Any feedback/help would be appreciated.
> 
> Before looking more closely - was there perhaps kind-of-review feedback
> during the summit, which would make it more reasonable to look through
> this once a v2 has appeared?
> 

Yes, there was. I will submit a v2 based on the feedback from the summit.
Thanks for pointing it out.

Matias 

