lvm-devel.lists.linux.dev archive mirror
From: haaber <haaber@web.de>
To: lvm-devel@redhat.com
Subject: Data recovery -- thin provisioned LVM metadata (?) problem after hardware failure
Date: Tue, 25 Apr 2023 15:49:07 +0200	[thread overview]
Message-ID: <54726276-8f4e-7538-f0b5-e825920a0357@web.de> (raw)

Dear all,

I had a lethally bad hardware failure and had to replace the machine. Now I am trying to recover some data that is not contained in my half-year-old backups ... (I know! but it's too late to be sorry). OK, the old SSD is attached via a USB adapter to a brand-new machine. I started

sudo pvscan
sudo vgscan --mknodes
sudo vgchange -ay

Here is the unexpected output:

  PV /dev/mapper/OLDSSD   VG vg0   lvm2 [238.27 GiB / <15.79 GiB free]
  Total: 1 [238.27 GiB] / in use: 1 [238.27 GiB] / in no VG: 0 [0   ]
  Found volume group "vg0" using metadata type lvm2
  Check of pool vg0/pool00 failed (status:1). Manual repair required!
  1 logical volume(s) in volume group "vg0" now active

Then I consulted Dr. Google for a diagnosis, but found little help. This page

https://mellowhost.com/billing/index.php?rp=/knowledgebase/65/How-to-Repair-a-lvm-thin-pool.html

suggested deactivating all sub-volumes so that the repair can work correctly. It turned out that only swap was
active, so I deactivated it. But the repair still does not work:

lvconvert --repair vg0/pool00
terminate called after throwing an instance of 'std::runtime_error'
  what():  transaction_manager::new_block() couldn't allocate new block
  Child 21255 exited abnormally
  Repair of thin metadata volume of thin pool vg0/pool00 failed
(status:-1). Manual repair required!
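For completeness, here is roughly what the deactivation step looked like on my side (the swap LV name is from my setup; yours may differ):

```shell
# List all LVs in the VG, including hidden component volumes (-a)
lvs -a vg0

# Deactivate the only active sub-volume (swap, in my case)
lvchange -an vg0/swap

# Then retry the repair, which produced the error above
lvconvert --repair vg0/pool00
```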


I would like to find a good soul out there who can give more hints. In particular,
could it be a metadata overflow? How can I check? I am not looking for a full repair, just
one-time read access to the pool data ....
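In case it helps the diagnosis: from what I have read, metadata fullness can be shown by lvs, and with a recent LVM the pool's metadata sub-volume can be activated on its own (read-only, while the pool is inactive) and inspected with thin_check / thin_dump from thin-provisioning-tools. A sketch of what I could try (the _tmeta name is assumed from the usual naming convention):

```shell
# Show data and metadata usage of the thin pool
lvs -a -o name,size,data_percent,metadata_percent vg0

# Activate the pool's hidden metadata component (pool itself must be inactive)
lvchange -ay vg0/pool00_tmeta

# Check and dump the metadata without modifying it
thin_check /dev/vg0/pool00_tmeta
thin_dump /dev/vg0/pool00_tmeta > /tmp/pool00-metadata.xml
```

If metadata_percent is at or near 100, that would fit the "couldn't allocate new block" error, as far as I understand it.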

thank you so much!   Bernhard



Thread overview: 21+ messages
2023-04-25 13:49 haaber [this message]
2023-04-26 11:10 ` Data recovery -- thin provisioned LVM metadata (?) problem after hardware failure Zdenek Kabelac
2023-04-26 13:12   ` haaber
2023-04-27  9:29     ` Zdenek Kabelac
2023-05-03 16:48       ` haaber
2023-05-04 13:17         ` Zdenek Kabelac
2023-05-04 16:31           ` haaber
2023-05-05 15:14             ` Zdenek Kabelac
2023-05-04 17:06           ` haaber
2023-05-05  9:42             ` Ming Hung Tsai
2023-05-05 15:07             ` Zdenek Kabelac
2023-05-05 16:25               ` Ming Hung Tsai
2023-05-11  7:39               ` haaber
2023-05-12  3:29                 ` Ming Hung Tsai
2023-05-12 18:05                   ` haaber
2023-05-13  3:20                     ` Ming Hung Tsai
2023-05-17 15:17                 ` Ming Hung Tsai
2023-05-20 20:34                   ` haaber
2023-05-22  7:40                     ` Ming Hung Tsai
2023-05-23 15:24                       ` [SOLVED] " haaber
2023-04-26 12:06 ` Ming Hung Tsai
