* Failed SBOD RAID on old NAS, how to diagnose/resurrect?
From: Chris Green @ 2020-03-07 21:58 UTC
  To: linux-raid

I have an old (well, fairly old) WD NAS which has two 1TB disk drives
configured to be one big disk drive using RAID.  It's been sitting on
the shelf for at least two or three years, so I was actually quite
pleasantly surprised when it booted.

However, the RAID is broken: the second disk drive doesn't get added to
the RAID array.  The NAS's configuration GUI says that both physical
disk drives are healthy/OK, but marks the second half of the single
RAID virtual disk as 'failed'.

I have ssh access to the NAS (and root), and I'm quite at home on the
Unix/Linux command line doing sysadmin-type things, but I'm a total
newbie when it comes to RAID.

So how can I set about diagnosing and fixing this?  I can't even find
anything that tells me how the RAID is configured at the moment.  At
the most basic level 'mount' shows:-

    ~ # mount
    /dev/root on / type ext3 (rw,noatime,data=ordered)
    proc on /proc type proc (rw)
    sys on /sys type sysfs (rw)
    /dev/pts on /dev/pts type devpts (rw)
    securityfs on /sys/kernel/security type securityfs (rw)
    /dev/md3 on /var type ext3 (rw,noatime,data=ordered)
    /dev/md2 on /DataVolume type xfs (rw,noatime,uqnoenforce)
    /dev/ram0 on /mnt/ram type tmpfs (rw)
    /dev/md2 on /shares/Public type xfs (rw,noatime,uqnoenforce)
    /dev/md2 on /shares/Download type xfs (rw,noatime,uqnoenforce)
    /dev/md2 on /shares/chris type xfs (rw,noatime,uqnoenforce)
    /dev/md2 on /shares/laptop type xfs (rw,noatime,uqnoenforce)
    /dev/md2 on /shares/dps type xfs (rw,noatime,uqnoenforce)
    /dev/md2 on /shares/ben type xfs (rw,noatime,uqnoenforce)
    ~ # 

... and fdisk:-

    ~ # fdisk -l

    Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
    255 heads, 63 sectors/track, 121601 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes

       Device Boot    Start       End    Blocks   Id  System
    /dev/sda1               5         248     1959930   fd  Linux raid autodetect
    /dev/sda2             249         280      257040   fd  Linux raid autodetect
    /dev/sda3             281         403      987997+  fd  Linux raid autodetect
    /dev/sda4             404      121601   973522935   fd  Linux raid autodetect

    Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
    255 heads, 63 sectors/track, 121601 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes

       Device Boot    Start       End    Blocks   Id  System
    /dev/sdb1               5         248     1959930   fd  Linux raid autodetect
    /dev/sdb2             249         280      257040   fd  Linux raid autodetect
    /dev/sdb3             281         403      987997+  fd  Linux raid autodetect
    /dev/sdb4             404      121601   973522935   fd  Linux raid autodetect


There should be a /dev/md4 as well; /proc/mdstat shows:-

    /proc # more mdstat
    Personalities : [linear] [raid0] [raid1] 
    md2 : active raid1 sda4[0]
          973522816 blocks [2/1] [U_]
          
    md1 : active raid1 sdb2[1] sda2[0]
          256960 blocks [2/2] [UU]
          
    md3 : active raid1 sdb3[1] sda3[0]
          987904 blocks [2/2] [UU]
          
    md4 : active raid1 sdb4[0]
          973522816 blocks [2/1] [U_]
          
    md0 : active raid1 sdb1[1] sda1[0]
          1959808 blocks [2/2] [UU]
          
    unused devices: <none>
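
I assume the next step is to look at the arrays and at each member's
RAID superblock, something like this (if mdadm is even present on this
old firmware, which I haven't checked):

    ~ # mdadm --detail /dev/md2        # array state, which slot is missing
    ~ # mdadm --examine /dev/sda4      # superblock on the member that is present
    ~ # mdadm --examine /dev/sdb4      # superblock on the partition that isn't in md2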



Oh, and the kernel is:-

    Linux WDbackup 2.6.24.4 #1 Thu Apr 1 16:43:58 CST 2010 armv5tejl unknown

-- 
Chris Green


* Re: Failed SBOD RAID on old NAS, how to diagnose/resurrect?
From: Chris Green @ 2020-03-07 22:08 UTC
  To: linux-raid

More information, from dmesg:-


md: linear personality registered for level -1
md: raid0 personality registered for level 0
md: raid1 personality registered for level 1
device-mapper: ioctl: 4.12.0-ioctl (2007-10-02) initialised: dm-devel@redhat.com
TCP cubic registered
NET: Registered protocol family 1
NET: Registered protocol family 17
md: Autodetecting RAID arrays.
md: Scanned 8 and added 8 devices.
md: autorun ...
md: considering sdb4 ...
md:  adding sdb4 ...
md: sdb3 has different UUID to sdb4
md: sdb2 has different UUID to sdb4
md: sdb1 has different UUID to sdb4
md: sda4 has different UUID to sdb4
md: sda3 has different UUID to sdb4
md: sda2 has different UUID to sdb4
md: sda1 has different UUID to sdb4
md: created md4
md: bind<sdb4>
md: running: <sdb4>
raid1: raid set md4 active with 1 out of 2 mirrors
raid1 not hw raidable, needs two working disks.
md: considering sdb3 ...
md:  adding sdb3 ...
md: sdb2 has different UUID to sdb3
md: sdb1 has different UUID to sdb3
md: sda4 has different UUID to sdb3
md:  adding sda3 ...
md: sda2 has different UUID to sdb3
md: sda1 has different UUID to sdb3
md: created md3
md: bind<sda3>
md: bind<sdb3>
md: running: <sdb3><sda3>
raid1: raid set md3 active with 2 out of 2 mirrors
raid1 using hardware RAID 0x00000001
md: considering sdb2 ...
md:  adding sdb2 ...
md: sdb1 has different UUID to sdb2
md: sda4 has different UUID to sdb2
md:  adding sda2 ...
md: sda1 has different UUID to sdb2
md: created md1
md: bind<sda2>
md: bind<sdb2>
md: running: <sdb2><sda2>
raid1: raid set md1 active with 2 out of 2 mirrors
raid1 using hardware RAID 0x00000001
md: considering sdb1 ...
md:  adding sdb1 ...
md: sda4 has different UUID to sdb1
md:  adding sda1 ...
md: created md0
md: bind<sda1>
md: bind<sdb1>
md: running: <sdb1><sda1>
raid1: raid set md0 active with 2 out of 2 mirrors
raid1 using hardware RAID 0x00000001
md: considering sda4 ...
md:  adding sda4 ...
md: created md2
md: bind<sda4>
md: running: <sda4>
raid1: raid set md2 active with 1 out of 2 mirrors
raid1 not hw raidable, needs two working disks.
md: ... autorun DONE.
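
So the autodetect code thinks sda4 and sdb4 carry different array UUIDs
and assembles each of them into its own one-disk mirror (md2 and md4).
Presumably the way to confirm that, if mdadm is available here, is to
compare the UUIDs in the two superblocks directly, along the lines of:

    ~ # mdadm --examine /dev/sda4 | grep -i uuid
    ~ # mdadm --examine /dev/sdb4 | grep -i uuid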

-- 
Chris Green


* Re: Failed JBOD RAID on old NAS, how to diagnose/resurrect?
From: Chris Green @ 2020-03-08 14:14 UTC
  To: linux-raid

Well I've got it working again but I'm very confused as to *why* it
failed the way it did.

A 'cat /proc/mdstat' produced:-

    Personalities : [linear] [raid0] [raid1] 
    md4 : active raid1 sda4[0]
          973522816 blocks [2/1] [U_]
          
    md1 : active raid1 sdb2[0] sda2[1]
          256960 blocks [2/2] [UU]
          
    md3 : active raid1 sdb3[0] sda3[1]
          987904 blocks [2/2] [UU]
          
    md2 : active raid1 sdb4[0]
          973522816 blocks [2/1] [U_]
          
    md0 : active raid1 sdb1[0] sda1[1]
          1959808 blocks [2/2] [UU]

So md2 and md4 (the main parts of the two 1TB disk drives) seemed to
be OK from the RAID point of view.  But I noticed that the device node
/dev/md4 didn't exist:-

    ~ # ls -l /dev/md*
    brw-r-----    1 root     root       9,   0 Sep 29  2011 /dev/md0
    brw-r-----    1 root     root       9,   1 Sep 29  2011 /dev/md1
    brw-r-----    1 root     root       9,  10 Sep 29  2011 /dev/md10
    brw-r-----    1 root     root       9,  11 Sep 29  2011 /dev/md11
    brw-r-----    1 root     root       9,  12 Sep 29  2011 /dev/md12
    brw-r-----    1 root     root       9,  13 Sep 29  2011 /dev/md13
    brw-r-----    1 root     root       9,  14 Sep 29  2011 /dev/md14
    brw-r-----    1 root     root       9,  15 Sep 29  2011 /dev/md15
    brw-r-----    1 root     root       9,  16 Sep 29  2011 /dev/md16
    brw-r-----    1 root     root       9,  17 Sep 29  2011 /dev/md17
    brw-r-----    1 root     root       9,  18 Sep 29  2011 /dev/md18
    brw-r-----    1 root     root       9,  19 Sep 29  2011 /dev/md19
    brw-r-----    1 root     root       9,   2 Sep 29  2011 /dev/md2
    brw-r-----    1 root     root       9,  20 Sep 29  2011 /dev/md20
    brw-r-----    1 root     root       9,  21 Sep 29  2011 /dev/md21
    brw-r-----    1 root     root       9,  22 Sep 29  2011 /dev/md22
    brw-r-----    1 root     root       9,  23 Sep 29  2011 /dev/md23
    brw-r-----    1 root     root       9,  24 Sep 29  2011 /dev/md24
    brw-r-----    1 root     root       9,  25 Sep 29  2011 /dev/md25
    brw-r-----    1 root     root       9,  26 Sep 29  2011 /dev/md26
    brw-r-----    1 root     root       9,  27 Sep 29  2011 /dev/md27
    brw-r-----    1 root     root       9,  28 Sep 29  2011 /dev/md28
    brw-r-----    1 root     root       9,  29 Sep 29  2011 /dev/md29
    brw-r-----    1 root     root       9,   3 Sep 29  2011 /dev/md3
    brw-r-----    1 root     root       9,   5 Sep 29  2011 /dev/md5
    brw-r-----    1 root     root       9,   6 Sep 29  2011 /dev/md6
    brw-r-----    1 root     root       9,   7 Sep 29  2011 /dev/md7
    brw-r-----    1 root     root       9,   8 Sep 29  2011 /dev/md8
    brw-r-----    1 root     root       9,   9 Sep 29  2011 /dev/md9


The fix was simply to use 'mknod' to create the missing /dev/md4; now
I can mount the drive and see the data.
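
For the record, the incantation is basically this (major 9 is the md
block major and the minor matches the array number, as in the ls output
above; the mount point is just an example):

    ~ # mknod /dev/md4 b 9 4
    ~ # chmod 640 /dev/md4           # match the brw-r----- mode of the other md nodes
    ~ # mkdir -p /mnt/md4            # hypothetical mount point
    ~ # mount /dev/md4 /mnt/md4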

What I don't understand is where /dev/md4 went: how would it have got
deleted?  I have yet to reboot the system to see whether /dev/md4
disappears again, but if it does it's not a big problem to create it
again.

Should the RAID block devices get created as part of the RAID start
up? Maybe there's something gone awry there.
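
I suppose it depends on whether /dev here is static or managed: with a
static /dev (which looks likely given the 2011 dates on the nodes) the
kernel will assemble the array but nothing ever recreates a deleted
node, whereas udev/mdev would normally make it appear.  Something like
this should show which case applies (just a guess at where to look on
this firmware):

    ~ # cat /proc/sys/kernel/hotplug      # hotplug helper, if any (e.g. mdev)
    ~ # ps | grep -i -e udev -e mdev      # is a device manager running?
    ~ # grep -l mdev /etc/init.d/* 2>/dev/null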

(Oh, and sorry for talking about SBOD when I meant JBOD)


-- 
Chris Green


* Re: Failed JBOD RAID on old NAS, how to diagnose/resurrect?
From: Song Liu @ 2020-03-10 21:20 UTC
  To: linux-raid

On Sun, Mar 8, 2020 at 7:15 AM Chris Green <cl@isbd.net> wrote:
>
> Well I've got it working again but I'm very confused as to *why* it
> failed the way it did.
>
> [...]
>
> Should the RAID block devices get created as part of the RAID start
> up? Maybe there's something gone awry there.

Do you have proper /etc/md.conf?

Thanks,
Song


* Re: Failed JBOD RAID on old NAS, how to diagnose/resurrect?
From: Chris Green @ 2020-03-10 22:11 UTC
  To: linux-raid

On Tue, Mar 10, 2020 at 02:20:02PM -0700, Song Liu wrote:
> On Sun, Mar 8, 2020 at 7:15 AM Chris Green <cl@isbd.net> wrote:
> >
> > [...]
> >
> > Should the RAID block devices get created as part of the RAID start
> > up? Maybe there's something gone awry there.
> 
> Do you have proper /etc/md.conf?
> 
There is no /etc/md.conf or anything that I can see related to RAID
configuration anywhere in the system.
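
In case it helps, a quick way to double-check would be something like
the following, assuming this BusyBox build has find and which:

    ~ # which mdadm
    ~ # find /etc -name '*mdadm*' 2>/dev/null
    ~ # find /etc -name 'md.conf' 2>/dev/null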

-- 
Chris Green


* Re: Failed JBOD RAID on old NAS, how to diagnose/resurrect?
From: Song Liu @ 2020-03-10 23:46 UTC
  To: linux-raid

On Tue, Mar 10, 2020 at 3:11 PM Chris Green <cl@isbd.net> wrote:
>
> On Tue, Mar 10, 2020 at 02:20:02PM -0700, Song Liu wrote:
> > On Sun, Mar 8, 2020 at 7:15 AM Chris Green <cl@isbd.net> wrote:
> > >
> > > [...]
> >
> > Do you have proper /etc/md.conf?
> >
> There is no /etc/md.conf or anything that I can see related to RAID
> configuration anywhere in the system.

Sorry, I meant mdadm.conf.

Please refer to https://raid.wiki.kernel.org/index.php/RAID_setup for
ways to set up the configuration.
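
Typically something like this, though the path varies between
/etc/mdadm.conf and /etc/mdadm/mdadm.conf, and I am not sure the NAS
firmware ships mdadm at all:

    # record the currently assembled arrays in the config file (sketch only)
    mdadm --detail --scan >> /etc/mdadm.conf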

Thanks,
Song

