* [Qemu-devel] [PATCH 0/3] Migration Debugging Helper Device
From: Alexander Graf @ 2013-10-23 13:11 UTC
  To: QEMU Developers; +Cc: Lucas Meneghel Rodrigues, Anthony Liguori

This patch set adds support for a simple migration debugging method.

It adds a device that exports, as part of the migration stream itself, all
the metadata an external program needs in order to parse that stream. The
external program then does not need any knowledge of the device internals
of the target virtual machine.

The patch set also adds a Python script that serves as such an external
program, allowing users to easily introspect the contents of a live migration
stream.
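
To give a feel for how little such an external program needs to know, here is
a minimal sketch (not part of this series) that digs the embedded description
back out of a dump. It assumes a dump file called "mig", written as shown in
patch 3/3:

  # Sketch only: locate the embedded description in a migration dump and
  # pretty-print the first device entry.  Assumes the dump was produced
  # with the debug-migration device present in the source VM.
  import json

  with open("mig", "rb") as f:
      data = f.read()

  # The device stores a 16-byte "Debug Migration" magic followed by a
  # NUL-terminated JSON blob describing every device section.
  pos = data.find("Debug Migration")
  if pos == -1:
      raise Exception("no debug-migration section in this dump")
  blob = data[pos + 16:].split('\0', 1)[0]

  desc = json.loads(blob)
  print json.dumps(desc["devices"][0], indent=4)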

This approach deliberately does not modify the way QEMU operates; to QEMU it
is completely transparent. QEMU does not read the exported metadata back
either, so you cannot use it to recover from migration breakage within the
code. For that, we should simply make the migration protocol more future
proof.

This approach is about enabling offline introspection of migration stream data
and structure, so that we have one more tool in our hands to see what goes wrong
inside a virtual machine.

  Example decoded migration: http://csgraf.de/mig/mig.txt
  Presentation: https://www.youtube.com/watch?v=iq1x40Qsrew
  Slides: https://www.dropbox.com/s/otp2pk2n3g087zp/Live%20Migration.pdf

Alexander Graf (3):
  Export savevm handlers outside of savevm.c
  Add migration debug device
  Add migration stream analysis script

 hw/misc/Makefile.objs        |   1 +
 hw/misc/debug_migration.c    | 498 +++++++++++++++++++++++++++++++++++++++++++
 include/qemu/savevm.h        |  28 +++
 savevm.c                     |  24 +--
 scripts/analyze-migration.py | 483 +++++++++++++++++++++++++++++++++++++++++
 5 files changed, 1012 insertions(+), 22 deletions(-)
 create mode 100644 hw/misc/debug_migration.c
 create mode 100644 include/qemu/savevm.h
 create mode 100755 scripts/analyze-migration.py

-- 
1.7.12.4


* [Qemu-devel] [PATCH 1/3] Export savevm handlers outside of savevm.c
From: Alexander Graf @ 2013-10-23 13:11 UTC
  To: QEMU Developers; +Cc: Lucas Meneghel Rodrigues, Anthony Liguori

We need to be able to access the savevm handlers from code that lives
outside of savevm.c. Extract the relevant struct definitions and the
handler list declaration into a separate header file.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 include/qemu/savevm.h | 28 ++++++++++++++++++++++++++++
 savevm.c              | 24 ++----------------------
 2 files changed, 30 insertions(+), 22 deletions(-)
 create mode 100644 include/qemu/savevm.h

diff --git a/include/qemu/savevm.h b/include/qemu/savevm.h
new file mode 100644
index 0000000..5dae243
--- /dev/null
+++ b/include/qemu/savevm.h
@@ -0,0 +1,28 @@
+#ifndef QEMU_SAVEVM_H
+#define QEMU_SAVEVM_H
+
+typedef struct CompatEntry {
+    char idstr[256];
+    int instance_id;
+} CompatEntry;
+
+typedef struct SaveStateEntry {
+    QTAILQ_ENTRY(SaveStateEntry) entry;
+    char idstr[256];
+    int instance_id;
+    int alias_id;
+    int version_id;
+    int section_id;
+    SaveVMHandlers *ops;
+    const VMStateDescription *vmsd;
+    void *opaque;
+    CompatEntry *compat;
+    int no_migrate;
+    int is_ram;
+} SaveStateEntry;
+
+typedef QTAILQ_HEAD(EHCIQueueHead, EHCIQueue) EHCIQueueHead;
+typedef QTAILQ_HEAD(savevm_handlers, SaveStateEntry) SaveStateEntryHead;
+extern SaveStateEntryHead savevm_handlers;
+
+#endif /* QEMU_SAVEVM_H */
diff --git a/savevm.c b/savevm.c
index 2f631d4..eea45e1 100644
--- a/savevm.c
+++ b/savevm.c
@@ -1457,29 +1457,9 @@ const VMStateInfo vmstate_info_bitmap = {
     .put = put_bitmap,
 };
 
-typedef struct CompatEntry {
-    char idstr[256];
-    int instance_id;
-} CompatEntry;
-
-typedef struct SaveStateEntry {
-    QTAILQ_ENTRY(SaveStateEntry) entry;
-    char idstr[256];
-    int instance_id;
-    int alias_id;
-    int version_id;
-    int section_id;
-    SaveVMHandlers *ops;
-    const VMStateDescription *vmsd;
-    void *opaque;
-    CompatEntry *compat;
-    int no_migrate;
-    int is_ram;
-} SaveStateEntry;
-
+#include "qemu/savevm.h"
 
-static QTAILQ_HEAD(savevm_handlers, SaveStateEntry) savevm_handlers =
-    QTAILQ_HEAD_INITIALIZER(savevm_handlers);
+SaveStateEntryHead savevm_handlers = QTAILQ_HEAD_INITIALIZER(savevm_handlers);
 static int global_section_id;
 
 static int calculate_new_instance_id(const char *idstr)
-- 
1.7.12.4


* [Qemu-devel] [PATCH 2/3] Add migration debug device
From: Alexander Graf @ 2013-10-23 13:11 UTC
  To: QEMU Developers; +Cc: Lucas Meneghel Rodrigues, Anthony Liguori

This patch adds a pseudo device whose sole purpose is to embed a machine
readable description of the vmstate stream layout inside the stream itself.

With this device enabled in the system while a migration is happening, we
get the chance to decipher the contents of the stream from an external
program without any knowledge of the device layout of the guest.
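
For illustration, the description that ends up in the stream is a single JSON
object with one entry per migrated device section. Abbreviated, and with a
made-up example device, it has roughly this shape (written here as a Python
literal):

  # Hypothetical, abbreviated excerpt -- real dumps contain one entry per
  # registered section, and VMState based entries carry one "versions"
  # object per supported version_id.
  description = {
      "devices": [
          {
              "name": "somedev",         # SaveStateEntry idstr
              "instance_id": 0,
              "vmsd_name": "somedev",
              "versions": {
                  "1": {
                      "fields": [
                          {"name": "level", "size": 4, "type": "int32"},
                          {"name": "buf", "size": 16, "type": "buffer"},
                      ],
                  },
              },
          },
      ],
  }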

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 hw/misc/Makefile.objs     |   1 +
 hw/misc/debug_migration.c | 498 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 499 insertions(+)
 create mode 100644 hw/misc/debug_migration.c

diff --git a/hw/misc/Makefile.objs b/hw/misc/Makefile.objs
index 2578e29..4cfe8a4 100644
--- a/hw/misc/Makefile.objs
+++ b/hw/misc/Makefile.objs
@@ -41,3 +41,4 @@ obj-$(CONFIG_SLAVIO) += slavio_misc.o
 obj-$(CONFIG_ZYNQ) += zynq_slcr.o
 
 obj-$(CONFIG_PVPANIC) += pvpanic.o
+obj-y += debug_migration.o
diff --git a/hw/misc/debug_migration.c b/hw/misc/debug_migration.c
new file mode 100644
index 0000000..813041e
--- /dev/null
+++ b/hw/misc/debug_migration.c
@@ -0,0 +1,498 @@
+/*
+ *  QEMU pseudo-device to expose migration details
+ *
+ *  Copyright (c) 2013 Alexander Graf <agraf@suse.de>
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include "hw/hw.h"
+#include "qemu/savevm.h"
+#include "qapi/qmp/qstring.h"
+
+#define TYPE_DEBUG_MIGRATION_DEVICE "debug-migration"
+
+typedef struct DebugMigration {
+    DeviceState parent_obj;
+
+    uint8_t magic[16];
+    int32_t size;
+    char *data;
+} DebugMigration;
+
+
+/**************** QJSON *****************/
+
+typedef struct QJSON {
+    QString *str;
+    bool omit_comma;
+    unsigned long self_size_offset;
+} QJSON;
+
+static void json_emit_element(QJSON *json, const char *name)
+{
+    /* Check whether we need to print a , before an element */
+    if (json->omit_comma) {
+        json->omit_comma = false;
+    } else {
+        qstring_append(json->str, ", ");
+    }
+
+    if (name) {
+        qstring_append(json->str, "\"");
+        qstring_append(json->str, name);
+        qstring_append(json->str, "\" : ");
+    }
+}
+
+static void json_start_object(QJSON *json, const char *name)
+{
+    json_emit_element(json, name);
+    qstring_append(json->str, "{ ");
+    json->omit_comma = true;
+}
+
+static void json_end_object(QJSON *json)
+{
+    qstring_append(json->str, " }");
+    json->omit_comma = false;
+}
+
+static void json_start_array(QJSON *json, const char *name)
+{
+    json_emit_element(json, name);
+    qstring_append(json->str, "[ ");
+    json->omit_comma = true;
+}
+
+static void json_end_array(QJSON *json)
+{
+    qstring_append(json->str, " ]");
+    json->omit_comma = false;
+}
+
+static void json_prop_int(QJSON *json, const char *name, int64_t val)
+{
+    json_emit_element(json, name);
+    qstring_append_int(json->str, val);
+}
+
+static void json_prop_str(QJSON *json, const char *name, const char *str)
+{
+    json_emit_element(json, name);
+    qstring_append_chr(json->str, '"');
+    qstring_append(json->str, str);
+    qstring_append_chr(json->str, '"');
+}
+
+static QJSON *qjson_new(void)
+{
+    QJSON *json = g_new(QJSON, 1);
+    json->str = qstring_from_str("{ ");
+    json->omit_comma = true;
+    return json;
+}
+
+static void qjson_finish(QJSON *json)
+{
+    json_end_object(json);
+}
+
+
+/**************** fake_file *****************/
+
+
+static int fake_file_put_buffer(void *opaque, const uint8_t *json,
+                                int64_t pos, int size)
+{
+    size_t *offset = (size_t *)opaque;
+    *offset += size;
+    return size;
+}
+
+const QEMUFileOps fake_file_ops = {
+    .put_buffer         = fake_file_put_buffer,
+};
+
+
+/**************** debug_migration *****************/
+
+static void print_vmsd(QJSON *json, const VMStateDescription *vmsd,
+                       void *opaque);
+static void print_vmsd_one(QJSON *json, const VMStateDescription *vmsd,
+                           void *opaque, int version_id);
+static const VMStateDescription vmstate_debug_migration;
+
+
+static void print_non_vmstate(QJSON *json, SaveStateEntry *se)
+{
+    QEMUFile *fakefile;
+    size_t offset = 0;
+
+    fakefile = qemu_fopen_ops(&offset, &fake_file_ops);
+
+    offset = 0;
+    se->ops->save_state(fakefile, se->opaque);
+    qemu_fflush(fakefile);
+
+    json_prop_int(json, "size", offset);
+    json_start_array(json, "fields");
+    json_start_object(json, NULL);
+    json_prop_str(json, "name", "data");
+    json_prop_int(json, "size", offset);
+    json_prop_str(json, "type", "buffer");
+    json_end_object(json);
+    json_end_array(json);
+
+    qemu_fclose(fakefile);
+}
+
+static const char *unknown = "unknown";
+static const char *get_vmfield_type_name(QJSON *json,
+                                         const VMStateDescription *vmsd,
+                                         void *opaque, VMStateField *field)
+{
+    const char *type = unknown;
+
+    if (field->info == &vmstate_info_bool) {
+        type = "bool";
+    } else if (field->info == &vmstate_info_int8) {
+        type = "int8";
+    } else if (field->info == &vmstate_info_int16) {
+        type = "int16";
+    } else if (field->info == &vmstate_info_int32) {
+        type = "int32";
+    } else if (field->info == &vmstate_info_int64) {
+        type = "int64";
+    } else if (field->info == &vmstate_info_uint8_equal) {
+        type = "uint8";
+    } else if (field->info == &vmstate_info_uint16_equal) {
+        type = "uint16";
+    } else if (field->info == &vmstate_info_int32_equal) {
+        type = "int32_equal";
+    } else if (field->info == &vmstate_info_uint32_equal) {
+        type = "uint32";
+    } else if (field->info == &vmstate_info_uint64_equal) {
+        type = "uint64";
+    } else if (field->info == &vmstate_info_int32_le) {
+        type = "int32_le";
+    } else if (field->info == &vmstate_info_uint8) {
+        type = "uint8";
+    } else if (field->info == &vmstate_info_uint16) {
+        type = "uint16";
+    } else if (field->info == &vmstate_info_uint32) {
+        type = "uint32";
+    } else if (field->info == &vmstate_info_uint64) {
+        type = "uint64";
+    } else if (field->info == &vmstate_info_float64) {
+        type = "float64";
+    } else if (field->info == &vmstate_info_timer) {
+        type = "timer";
+    } else if (field->info == &vmstate_info_buffer) {
+        type = "buffer";
+    } else if (field->info == &vmstate_info_unused_buffer) {
+        type = "unused_buffer";
+    } else if (field->info == &vmstate_info_bitmap) {
+        type = "bitmap";
+    } else if (field->flags & VMS_STRUCT) {
+        type = "struct";
+    }
+
+    return type;
+}
+
+static void print_vmfield(QJSON *json, const VMStateDescription *vmsd,
+                          void *opaque, VMStateField *field, const char *name)
+{
+    void *base_addr = opaque + field->offset;
+    int size = field->size;
+    QEMUFile *fakefile;
+    size_t offset = 0;
+    int n_elems = 1;
+    int i;
+
+    fakefile = qemu_fopen_ops(&offset, &fake_file_ops);
+
+    if (field->flags & VMS_VBUFFER) {
+        size = *(int32_t *)(opaque + field->size_offset);
+        if (field->flags & VMS_MULTIPLY) {
+            size *= field->size;
+        }
+    }
+
+    if (field->flags & VMS_ARRAY) {
+        n_elems = field->num;
+    } else if (field->flags & VMS_VARRAY_INT32) {
+        n_elems = *(int32_t *)(opaque + field->num_offset);
+    } else if (field->flags & VMS_VARRAY_UINT32) {
+        n_elems = *(uint32_t *)(opaque + field->num_offset);
+    } else if (field->flags & VMS_VARRAY_UINT16) {
+        n_elems = *(uint16_t *)(opaque + field->num_offset);
+    } else if (field->flags & VMS_VARRAY_UINT8) {
+        n_elems = *(uint8_t *)(opaque + field->num_offset);
+    }
+
+    if (field->flags & VMS_POINTER) {
+        base_addr = *(void **)base_addr + field->start;
+    }
+
+    for (i = 0; i < n_elems; i++) {
+        void *addr = base_addr + size * i;
+        const char *type_name = get_vmfield_type_name(json, vmsd, addr, field);
+
+        if (field->flags & VMS_ARRAY_OF_POINTER) {
+            addr = *(void **)addr;
+        }
+
+        json_start_object(json, NULL);
+        json_prop_str(json, "name", name);
+        if (n_elems > 1) {
+            json_prop_int(json, "index", i);
+        }
+
+        /* Hack for the buffer we're writing to */
+        if (vmsd == &vmstate_debug_migration &&
+            field->info == &vmstate_info_buffer &&
+            field->flags & VMS_VBUFFER) {
+            json_prop_int(json, "size", 100000000);
+            json->self_size_offset = qstring_get_length(json->str) -
+                                     strlen("100000000");
+        } else if (type_name == unknown) {
+            offset = 0;
+            field->info->put(fakefile, opaque, size);
+            qemu_fflush(fakefile);
+            json_prop_int(json, "size", offset);
+        } else {
+            json_prop_int(json, "size", size);
+        }
+
+        json_prop_str(json, "type", type_name);
+
+        if (field->flags & VMS_STRUCT) {
+            /* Structs have hardcoded version IDs */
+            json_start_object(json, "struct");
+            json_prop_int(json, "version_id", field->vmsd->version_id);
+            print_vmsd_one(json, field->vmsd, addr, field->vmsd->version_id);
+            json_end_object(json);
+        }
+
+        json_end_object(json);
+    }
+
+    qemu_fclose(fakefile);
+}
+
+static int vmfield_name_num(VMStateField *start, VMStateField *search)
+{
+    VMStateField *field;
+    int found = 0;
+
+    for (field = start; field->name; field++) {
+        if (!strcmp(field->name, search->name)) {
+            if (field == search) {
+                return found;
+            }
+            found++;
+        }
+    }
+
+    return -1;
+}
+
+static bool vmfield_name_is_unique(VMStateField *start, VMStateField *search)
+{
+    VMStateField *field;
+    int found = 0;
+
+    for (field = start; field->name; field++) {
+        if (!strcmp(field->name, search->name)) {
+            found++;
+            /* name found more than once, so it's not unique */
+            if (found > 1) {
+                return false;
+            }
+        }
+    }
+
+    return true;
+}
+
+static void print_vmsd_one(QJSON *json, const VMStateDescription *vmsd,
+                           void *opaque, int version_id)
+{
+    const VMStateSubsection *sub;
+    VMStateField *field;
+    bool subsection_found = false;
+
+    json_start_array(json, "fields");
+    for (field = vmsd->fields; field->name; field++) {
+        char *name = g_strdup(field->name);
+        bool (*fe)(void *opaque, int version_id) = field->field_exists;
+
+        if (!vmfield_name_is_unique(vmsd->fields, field)) {
+            /* Field name is not unique, need to make it unique */
+            int num = vmfield_name_num(vmsd->fields, field);
+            name = g_strdup_printf("%s[%d]", name, num);
+        }
+
+        if ((fe && fe(opaque, version_id)) ||
+            (!fe && field->version_id <= version_id)) {
+            /* Field exists in the current version, print it */
+            print_vmfield(json, vmsd, opaque, field, name);
+        }
+
+        g_free(name);
+    }
+    json_end_array(json);
+
+    for (sub = vmsd->subsections; sub && sub->needed; sub++) {
+        if (sub->needed(opaque)) {
+            /* This particular vmsd also contains this subsection */
+
+            /* Only create subsection array when we have any */
+            if (!subsection_found) {
+                json_start_array(json, "subsections");
+                subsection_found = true;
+            }
+
+            /* Dump the subsection description */
+            json_start_object(json, NULL);
+            print_vmsd(json, sub->vmsd, opaque);
+            json_end_object(json);
+
+        }
+    }
+    if (subsection_found) {
+        json_end_array(json);
+    }
+}
+
+static void print_vmsd(QJSON *json, const VMStateDescription *vmsd,
+                       void *opaque)
+{
+    int version_id;
+    int min_ver = MIN(vmsd->minimum_version_id_old, vmsd->minimum_version_id);
+
+    json_prop_str(json, "vmsd_name", vmsd->name);
+    json_start_object(json, "versions");
+    for (version_id = min_ver; version_id <= vmsd->version_id; version_id++) {
+        char *str = g_strdup_printf("%d", version_id);
+        json_start_object(json, str);
+        print_vmsd_one(json, vmsd, opaque, version_id);
+        json_end_object(json);
+        g_free(str);
+    }
+    json_end_object(json);
+}
+
+static void debug_migration_pre_save(void *opaque)
+{
+    SaveStateEntry *se;
+    DebugMigration *dm = opaque;
+    char sizestr[10];
+
+    QJSON *json = qjson_new();
+
+    strcpy((char*)dm->magic, "Debug Migration");
+    json_start_array(json, "devices");
+
+    /* Print description of all device sections */
+    QTAILQ_FOREACH(se, &savevm_handlers, entry) {
+        if ((!se->ops || !se->ops->save_state) && !se->vmsd) {
+            continue;
+        }
+
+        json_start_object(json, NULL);
+        json_prop_str(json, "name", se->idstr);
+        json_prop_int(json, "instance_id", se->instance_id);
+
+        if (!se->vmsd) {
+            /* Not converted to VMState yet */
+            print_non_vmstate(json, se);
+        } else {
+            /* VMState based */
+            print_vmsd(json, se->vmsd, se->opaque);
+        }
+
+        json_end_object(json);
+    }
+
+    json_end_array(json);
+    qjson_finish(json);
+
+    dm->data = (char*)qstring_get_str(json->str);
+    dm->size = qstring_get_length(json->str) + 1;
+
+    /* Fix up my own size information */
+    sprintf(sizestr, "%9d", dm->size);
+    strncpy(&dm->data[json->self_size_offset], sizestr, 9);
+}
+
+static int debug_migration_pre_load(void *opaque)
+{
+    DebugMigration *dm = opaque;
+
+    /* Allocate big temporary buffer */
+    dm->data = g_malloc(10 * 1024 * 1024);
+
+    return 0;
+}
+
+static int debug_migration_post_load(void *opaque, int version_id)
+{
+    DebugMigration *dm = opaque;
+
+    /* Free the big buffer again */
+    g_free(dm->data);
+
+    return 0;
+}
+
+static const VMStateDescription vmstate_debug_migration = {
+    .name = "debug-migration",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .minimum_version_id_old = 1,
+    .pre_save = debug_migration_pre_save,
+    .pre_load = debug_migration_pre_load,
+    .post_load = debug_migration_post_load,
+    .fields      = (VMStateField[]) {
+        VMSTATE_INT32(size, DebugMigration),
+        VMSTATE_BUFFER(magic, DebugMigration),
+        VMSTATE_VBUFFER(data, DebugMigration, 0, NULL, 0, size),
+        VMSTATE_END_OF_LIST()
+    }
+};
+
+static void debug_migration_class_initfn(ObjectClass *klass, void *data)
+{
+    DeviceClass *dc = DEVICE_CLASS(klass);
+
+    dc->vmsd = &vmstate_debug_migration;
+    set_bit(DEVICE_CATEGORY_MISC, dc->categories);
+}
+
+static const TypeInfo debug_migration_info = {
+    .name          = TYPE_DEBUG_MIGRATION_DEVICE,
+    .parent        = TYPE_DEVICE,
+    .instance_size = sizeof(DebugMigration),
+    .class_init    = debug_migration_class_initfn,
+};
+
+static void debug_migration_register_types(void)
+{
+    type_register_static(&debug_migration_info);
+}
+
+type_init(debug_migration_register_types)
-- 
1.7.12.4


* [Qemu-devel] [PATCH 3/3] Add migration stream analysis script
From: Alexander Graf @ 2013-10-23 13:11 UTC
  To: QEMU Developers; +Cc: Lucas Meneghel Rodrigues, Anthony Liguori

This patch adds a Python tool to the scripts directory that can read a
dumped migration stream that contains the debug-migration device and
construct human readable JSON output from it.

It's very simple to use:

  $ qemu-system-x86_64 -device debug-migration
    (qemu) migrate "exec:cat > mig"
  $ ./scripts/analyze-migration.py -f mig
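
Since the tool prints plain JSON on stdout, its output is also easy to
post-process. A small sketch (assuming the dump from above was decoded and
redirected into a file called "mig.json"):

  # Sketch: list the device sections found in a decoded dump, e.g. after
  #   ./scripts/analyze-migration.py -f mig > mig.json
  import json

  with open("mig.json") as f:
      state = json.load(f)

  # Top-level keys combine section name and id, e.g. "ram (3)".
  for section in state:
      print section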

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 scripts/analyze-migration.py | 483 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 483 insertions(+)
 create mode 100755 scripts/analyze-migration.py

diff --git a/scripts/analyze-migration.py b/scripts/analyze-migration.py
new file mode 100755
index 0000000..bf70749
--- /dev/null
+++ b/scripts/analyze-migration.py
@@ -0,0 +1,483 @@
+#!/usr/bin/env python
+#
+#  Migration Stream Analyzer
+#
+#  Copyright (c) 2013 Alexander Graf <agraf@suse.de>
+#
+# This library is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2 of the License, or (at your option) any later version.
+#
+# This library is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+# Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public
+# License along with this library; if not, see <http://www.gnu.org/licenses/>.
+
+import numpy as np
+import json
+import os
+import argparse
+import collections
+import pprint
+
+class MigrationFile(object):
+    def __init__(self, filename):
+        self.filename = filename
+        self.file = open(self.filename, "rb")
+
+    def read64(self):
+        return np.asscalar(np.fromfile(self.file, count=1, dtype='>i8')[0])
+
+    def read32(self):
+        return np.asscalar(np.fromfile(self.file, count=1, dtype='>i4')[0])
+
+    def read8(self):
+        return np.asscalar(np.fromfile(self.file, count=1, dtype='>i1')[0])
+
+    def readstr(self, len = None):
+        if len is None:
+            len = self.read8()
+        if len == 0:
+            return ""
+        return np.fromfile(self.file, count=1, dtype=('S%d' % len))[0]
+
+    def readvar(self, size = None):
+        if size is None:
+            size = self.read8()
+        if size == 0:
+            return ""
+        value = self.file.read(size)
+        if len(value) != size:
+            raise Exception("Unexpected end of %s at 0x%x" % (self.filename, self.file.tell()))
+        return value
+
+    # Search the current file from the current position onwards for a JSON
+    # migration descriptor. Returns the JSON string blob.
+    def read_migration_debug_json(self):
+        pos = self.file.tell()
+        data = self.file.read()
+        dbgpos = data.find("Debug Migration")
+        if dbgpos == -1:
+            raise Exception("No Debug Migration device found")
+
+        # Reading the whole file left us at EOF; reopen it and seek back
+        self.file = open(self.filename, "rb")
+        self.file.seek(pos, 0)
+
+        # We assume that our JSON blob starts after the "Debug Migration" magic
+        # and is null terminated.
+        return data[(dbgpos + 16):].split('\0',1)[0]
+
+    def close(self):
+        self.file.close()
+
+
+class RamSection(object):
+    RAM_SAVE_FLAG_COMPRESS = 0x02
+    RAM_SAVE_FLAG_MEM_SIZE = 0x04
+    RAM_SAVE_FLAG_PAGE     = 0x08
+    RAM_SAVE_FLAG_EOS      = 0x10
+    RAM_SAVE_FLAG_CONTINUE = 0x20
+    RAM_SAVE_FLAG_XBZRLE   = 0x40
+    RAM_SAVE_FLAG_HOOK     = 0x80
+    # This can be dynamic, but all targets we care about have 4k pages
+    TARGET_PAGE_SIZE       = 0x1000
+    blocks = []
+
+    def __init__(self, file, version_id, device, section_key):
+        if version_id != 4:
+            raise Exception("Unknown RAM version %d" % version_id)
+
+        self.file = file
+        self.section_key = section_key
+
+    def read(self):
+        # Read all RAM sections
+        while True:
+            addr = self.file.read64()
+            flags = addr & 0xfff
+            addr &= 0xfffffffffffff000
+
+            if flags & self.RAM_SAVE_FLAG_MEM_SIZE:
+                while True:
+                    namelen = self.file.read8()
+                    # We assume that no RAM chunk is big enough to ever
+                    # hit the first byte of the address, so when we see
+                    # a zero here we know it has to be an address, not the
+                    # length of the next block.
+                    if namelen == 0:
+                        self.file.file.seek(-1, 1)
+                        break
+                    name = self.file.readstr(len = namelen)
+                    len = self.file.read64()
+                    self.blocks.append((name, len))
+                flags &= ~self.RAM_SAVE_FLAG_MEM_SIZE
+
+            if flags & self.RAM_SAVE_FLAG_COMPRESS:
+                if flags & self.RAM_SAVE_FLAG_CONTINUE:
+                    flags &= ~self.RAM_SAVE_FLAG_CONTINUE
+                else:
+                    name = self.file.readstr()
+                fill_char = self.file.read8()
+                # The page in question would be filled with fill_char now
+                flags &= ~self.RAM_SAVE_FLAG_COMPRESS
+            elif flags & self.RAM_SAVE_FLAG_PAGE:
+                if flags & self.RAM_SAVE_FLAG_CONTINUE:
+                    flags &= ~self.RAM_SAVE_FLAG_CONTINUE
+                else:
+                    name = self.file.readstr()
+                # Just skip RAM data for now
+                self.file.file.seek(self.TARGET_PAGE_SIZE, 1)
+                flags &= ~self.RAM_SAVE_FLAG_PAGE
+            elif flags & self.RAM_SAVE_FLAG_XBZRLE:
+                raise Exception("XBZRLE RAM compression is not supported yet")
+            elif flags & self.RAM_SAVE_FLAG_HOOK:
+                raise Exception("RAM hooks don't make sense with files")
+
+            # End of RAM section
+            if flags & self.RAM_SAVE_FLAG_EOS:
+                break
+
+            if flags != 0:
+                raise Exception("Unknown RAM flags: %x" % flags)
+
+    def getDict(self):
+        return ""
+
+class VMSDFieldGeneric(object):
+    def __init__(self, desc, file):
+        self.file = file
+        self.desc = desc
+        self.data = ""
+
+    def __repr__(self):
+        return str(self.__str__())
+
+    def __str__(self):
+        return " ".join("{0:02x}".format(ord(c)) for c in self.data)
+
+    def getDict(self):
+        return self.__str__()
+
+    def read(self):
+        size = int(self.desc['size'])
+        self.data = self.file.readvar(size)
+        return self.data
+
+class VMSDFieldInt(VMSDFieldGeneric):
+    def __init__(self, desc, file):
+        super(VMSDFieldInt, self).__init__(desc, file)
+        self.size = int(desc['size'])
+        self.format = '0x%%0%dx' % (self.size * 2)
+        self.sdtype = '>i%d' % self.size
+        self.udtype = '>u%d' % self.size
+
+    def __repr__(self):
+        if self.data < 0:
+            return ('%s (%d)' % ((self.format % self.udata), self.data))
+        else:
+            return self.format % self.data
+
+    def __str__(self):
+        return self.__repr__()
+
+    def getDict(self):
+        return self.__str__()
+
+    def read(self):
+        super(VMSDFieldInt, self).read()
+        self.sdata = np.fromstring(self.data, count=1, dtype=(self.sdtype))[0]
+        self.udata = np.fromstring(self.data, count=1, dtype=(self.udtype))[0]
+        self.data = self.sdata
+        return self.data
+
+class VMSDFieldUInt(VMSDFieldInt):
+    def __init__(self, desc, file):
+        super(VMSDFieldUInt, self).__init__(desc, file)
+
+    def read(self):
+        super(VMSDFieldUInt, self).read()
+        self.data = self.udata
+        return self.data
+
+class VMSDFieldIntLE(VMSDFieldInt):
+    def __init__(self, desc, file):
+        super(VMSDFieldIntLE, self).__init__(desc, file)
+        self.dtype = '<i%d' % self.size
+
+class VMSDFieldBool(VMSDFieldGeneric):
+    def __init__(self, desc, file):
+        super(VMSDFieldBool, self).__init__(desc, file)
+
+    def __repr__(self):
+        return self.data.__repr__()
+
+    def __str__(self):
+        return self.data.__str__()
+
+    def getDict(self):
+        return self.data
+
+    def read(self):
+        super(VMSDFieldBool, self).read()
+        if self.data[0] == 0:
+            self.data = False
+        else:
+            self.data = True
+        return self.data
+
+class VMSDFieldStruct(VMSDFieldGeneric):
+    QEMU_VM_SUBSECTION    = 0x05
+
+    def __init__(self, desc, file):
+        super(VMSDFieldStruct, self).__init__(desc, file)
+        self.data = collections.OrderedDict()
+
+    def __repr__(self):
+        return self.data.__repr__()
+
+    def __str__(self):
+        return self.data.__str__()
+
+    def read(self):
+        for field in self.desc['struct']['fields']:
+            field['data'] = vmsd_field_readers[field['type']](field, self.file)
+            field['data'].read()
+
+            if 'index' in field:
+                if field['name'] not in self.data:
+                    self.data[field['name']] = []
+                a = self.data[field['name']]
+                if len(a) != int(field['index']):
+                    raise Exception("internal index of data field unmatched (%d/%d)" % (len(a), int(field['index'])))
+                a.append(field['data'])
+            else:
+                self.data[field['name']] = field['data']
+
+        if 'subsections' in self.desc['struct']:
+            for subsection in self.desc['struct']['subsections']:
+                if self.file.read8() != self.QEMU_VM_SUBSECTION:
+                    raise Exception("Subsection %s not found" % subsection['vmsd_name'])
+                name = self.file.readstr()
+                version_id = self.file.read32()
+                self.data[name] = VMSDSection(self.file, version_id, subsection, (name, 0))
+                self.data[name].read()
+
+    def getDictItem(self, value):
+       # Strings would fall into the array category, treat
+       # them specially
+       if value.__class__ is ''.__class__:
+           return value
+
+       try:
+           return self.getDictOrderedDict(value)
+       except:
+           try:
+               return self.getDictArray(value)
+           except:
+               try:
+                   return value.getDict()
+               except:
+                   return value
+
+    def getDictArray(self, array):
+        r = []
+        for value in array:
+           r.append(self.getDictItem(value))
+        return r
+
+    def getDictOrderedDict(self, dict):
+        r = collections.OrderedDict()
+        for (key, value) in dict.items():
+            r[key] = self.getDictItem(value)
+        return r
+
+    def getDict(self):
+        return self.getDictOrderedDict(self.data)
+
+vmsd_field_readers = {
+    "bool" : VMSDFieldBool,
+    "int8" : VMSDFieldInt,
+    "int16" : VMSDFieldInt,
+    "int32" : VMSDFieldInt,
+    "int32_equal" : VMSDFieldInt,
+    "int32_le" : VMSDFieldIntLE,
+    "int64" : VMSDFieldInt,
+    "uint8" : VMSDFieldUInt,
+    "uint16" : VMSDFieldUInt,
+    "uint32" : VMSDFieldUInt,
+    "uint64" : VMSDFieldUInt,
+    "float64" : VMSDFieldGeneric,
+    "timer" : VMSDFieldGeneric,
+    "buffer" : VMSDFieldGeneric,
+    "unused_buffer" : VMSDFieldGeneric,
+    "bitmap" : VMSDFieldGeneric,
+    "struct" : VMSDFieldStruct,
+    "unknown" : VMSDFieldGeneric,
+}
+
+class VMSDSection(VMSDFieldStruct):
+    def __init__(self, file, version_id, device, section_key):
+        self.file = file
+        self.data = ""
+        self.section_key = section_key
+        if 'versions' in device:
+            # Normal VMSD description
+            desc = device['versions'][str(version_id)]
+            self.vmsd_name = device['vmsd_name']
+        else:
+            # A legacy non-VMSD section without detailed information.
+            desc = device
+            self.vmsd_name = ""
+
+        # A section really is nothing but a FieldStruct :)
+        super(VMSDSection, self).__init__({ 'struct' : desc }, file)
+
+class DebugMigrationSection(VMSDSection):
+    def __init__(self, file, version_id, device, section_key):
+        super(DebugMigrationSection, self).__init__(file, version_id, device, section_key)
+
+    # We define our own reader because it's very unlikely that the buffer size
+    # for the VMSD description is identical between different migration files.
+    #
+    # It also allows us to override the "data" field with something a lot
+    # less convoluted than the VMSD description JSON.
+    def read(self):
+        self.data['size'] = self.file.read32()
+        self.data['magic'] = str(self.file.readstr(len = 16))
+        self.data['data'] = self.file.readstr(len = int(self.data['size']))
+
+        # Don't include our VMSD description in the output
+        self.data['data'] = 'Omitted for the sake of readability'
+
+
+###############################################################################
+
+class MigrationDump(object):
+    QEMU_VM_FILE_MAGIC    = 0x5145564d
+    QEMU_VM_FILE_VERSION  = 0x00000003
+    QEMU_VM_EOF           = 0x00
+    QEMU_VM_SECTION_START = 0x01
+    QEMU_VM_SECTION_PART  = 0x02
+    QEMU_VM_SECTION_END   = 0x03
+    QEMU_VM_SECTION_FULL  = 0x04
+    QEMU_VM_SUBSECTION    = 0x05
+
+    def __init__(self, filename):
+        self.section_classes = { ( 'ram', 0 ) : ( RamSection, None ) }
+        self.filename = filename
+        self.vmsd_desc = None
+
+    def read(self, desc_only = False):
+        # Read in the whole file
+        file = MigrationFile(self.filename)
+
+        # File magic
+        data = file.read32()
+        if data != self.QEMU_VM_FILE_MAGIC:
+            raise Exception("Invalid file magic %x" % data)
+
+        # Version (has to be v3)
+        data = file.read32()
+        if data != self.QEMU_VM_FILE_VERSION:
+            raise Exception("Invalid version number %d" % data)
+
+        # Read sections
+        self.sections = collections.OrderedDict()
+
+        while True:
+            section_type = file.read8()
+            if section_type == self.QEMU_VM_EOF:
+                break
+            elif section_type == self.QEMU_VM_SECTION_START or section_type == self.QEMU_VM_SECTION_FULL:
+                section_id = file.read32()
+                name = file.readstr()
+                instance_id = file.read32()
+                version_id = file.read32()
+                section_key = (name, instance_id)
+                try:
+                    classdesc = self.section_classes[section_key]
+                    section = classdesc[0](file, version_id, classdesc[1], section_key)
+                except:
+                    # Could not find a decoder for that section
+                    if self.vmsd_desc is None:
+                        # Try to find the migration debug device and extract parsers
+                        # from there
+                        self.load_vmsd_json(file)
+                        # We only care about the vmsd description json, so drop out now
+                        if desc_only:
+                            return
+                        classdesc = self.section_classes[section_key]
+                        section = classdesc[0](file, version_id, classdesc[1], section_key)
+                    else:
+                        # This is a genuinely unknown section
+                        raise
+                self.sections[section_id] = section
+                section.read()
+            elif section_type == self.QEMU_VM_SECTION_PART or section_type == self.QEMU_VM_SECTION_END:
+                section_id = file.read32()
+                self.sections[section_id].read()
+            else:
+                raise Exception("Unknown section type: %d" % section_type)
+        file.close()
+
+    def load_vmsd_json(self, file):
+        vmsd_json = file.read_migration_debug_json()
+        self.vmsd_desc = json.loads(vmsd_json, object_pairs_hook=collections.OrderedDict)
+        for device in self.vmsd_desc['devices']:
+            key = (device['name'], device['instance_id'])
+            value = ( VMSDSection, device )
+            if device['name'] == 'debug-migration':
+                value = ( DebugMigrationSection, device )
+            self.section_classes[key] = value
+
+    def getDict(self):
+        r = collections.OrderedDict()
+        for (key, value) in self.sections.items(): 
+           key = "%s (%d)" % ( value.section_key[0], key )
+           r[key] = value.getDict()
+        return r
+
+###############################################################################
+
+class JSONEncoder(json.JSONEncoder):
+    def default(self, o):
+        if isinstance(o, VMSDFieldGeneric):
+            return str(o)
+        return json.JSONEncoder.default(self, o)
+
+parser = argparse.ArgumentParser()
+parser.add_argument("-f", "--file", help='migration dump to read from', required=True)
+parser.add_argument("-s", "--descriptionfile", help='migration dump to read vmstate description from')
+parser.add_argument("-d", "--dump", help='what to dump ("state" or "desc")', default='state')
+args = parser.parse_args()
+
+jsonenc = JSONEncoder(indent=4, separators=(',', ': '))
+
+if args.dump == "state":
+    dump = MigrationDump(args.file)
+    if args.descriptionfile:
+        # Fetch the vmstate description from the file passed through -s
+        desc_dump = MigrationDump(args.descriptionfile)
+        desc_dump.read(desc_only = True)
+        # and override all section readers and vmsd description in our
+        # data migration file with the ones from the vmstate description
+        # migration file
+        dump.vmsd_desc = desc_dump.vmsd_desc
+        dump.section_classes = desc_dump.section_classes
+    dump.read()
+    dict = dump.getDict()
+    print jsonenc.encode(dict)
+elif args.dump == "desc":
+    if args.descriptionfile:
+        dump = MigrationDump(args.descriptionfile)
+    else:
+        dump = MigrationDump(args.file)
+    dump.read(desc_only = True)
+    print jsonenc.encode(dump.vmsd_desc)
+else:
+    raise Exception("Unknown dump type \"%s\", available: \"state\", \"desc\"" % args.dump)
-- 
1.7.12.4
