* help with xfs_repair on 10TB fs
@ 2009-01-17 17:13 Alberto Accomazzi
  2009-01-17 17:33 ` Eric Sandeen
  2009-01-17 17:35 ` Tru Huynh
  0 siblings, 2 replies; 9+ messages in thread
From: Alberto Accomazzi @ 2009-01-17 17:13 UTC
  To: xfs

I need some help with figuring out how to repair a large XFS
filesystem (10TB of data, 100+ million files).  xfs_repair seems to
have crapped out before finishing the job and now I'm not sure how to
proceed.

The system is a CentOS 5.2 storage server with a 3ware controller and
16 x 1TB drives, 32GB RAM and 64GB swap.  After clearing the issues
with bad blocks on the disks, yesterday we set out to fix the
filesystem.  This is the list of relevant packages that yum reports
installed:

kmod-xfs.x86_64                          0.4-1.2.6.18_53.1.14.e installed
kmod-xfs.x86_64                          0.4-2                  installed
kmod-xfs.x86_64                          0.4-1.2.6.18_92.1.10.e installed
xfsdump.x86_64                           2.2.46-1.el5.centos    installed
xfsprogs.x86_64                          2.9.4-1.el5.centos     installed
xfsprogs-devel.x86_64                    2.9.4-1.el5.centos     installed
kernel.x86_64                            2.6.18-92.1.13.el5.cen installed

After bringing the system back, a mount of the fs reported problems:

Starting XFS recovery on filesystem: sdb1 (logdev: internal)
Filesystem "sdb1": XFS internal error xfs_btree_check_sblock at line 334 of file
 /home/buildsvn/rpmbuild/BUILD/xfs-kmod-0.4/_kmod_build_/xfs_btree.c.  Caller 0x
ffffffff882fa8d2

Call Trace:
 [<ffffffff882eacc9>] :xfs:xfs_btree_check_sblock+0xbc/0xcb
 .....

An xfs_check on the device suggests how to solve the problem:

alberto@adsduo-54: sudo xfs_check /dev/sdb1
ERROR: The filesystem has valuable metadata changes in a log which needs to
be replayed.  Mount the filesystem to replay the log, and unmount it before
re-running xfs_check.  If you are unable to mount the filesystem, then use
the xfs_repair -L option to destroy the log and attempt a repair.
Note that destroying the log may cause corruption -- please attempt a mount
of the filesystem before doing this.
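
In other words, the suggested sequence is roughly this (my sketch of
it, not a verbatim transcript; the mount point is arbitrary):

  # try to replay the log via a mount/umount cycle first
  mount /dev/sdb1 /mnt/tmp && umount /mnt/tmp
  xfs_check /dev/sdb1

  # only if the mount itself fails: zap the log and repair
  xfs_repair -L /dev/sdb1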

xfs_info reports the following for the filesystem:

meta-data=/dev/sdb1              isize=256    agcount=32, agsize=98361855 blks
         =                       sectsz=512   attr=0
data     =                       bsize=4096   blocks=3147579360, imaxpct=25
         =                       sunit=0      swidth=0 blks, unwritten=1
naming   =version 2              bsize=4096
log      =internal               bsize=4096   blocks=32768, version=1
         =                       sectsz=512   sunit=0 blks, lazy-count=0
realtime =none                   extsz=4096   blocks=0, rtextents=0

So last night I started an "xfs_repair -L" on the device, which
proceeded through step 6 before quitting at some point in the middle
of the night without giving me many clues as to what went wrong.  I
know that this process uses a ton of memory so we loaded the server
with 32GB of RAM (the swap file is 64GB), and before going to sleep I
noticed that xfs_repair was using about 24GB of RAM.  I put the
complete log of xfs_repair online at:
http://www.cfa.harvard.edu/~alberto/ads/xfs_repair.log
The last lines were:

bad hash table for directory inode 58134992 (no data entry): rebuilding
rebuilding directory inode 58134992
rebuilding directory inode 58345355
rebuilding directory inode 60221905
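
For the record, this is roughly how I ran it and kept an eye on its
memory use (a sketch, not the exact command lines):

  xfs_repair -L /dev/sdb1 2>&1 | tee xfs_repair.log
  # from another terminal:
  watch -n 60 'ps -o pid,vsz,rss,cmd -C xfs_repair'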

So I'm led to believe that xfs_repair died before completing the job.
Should I try again?  Does anyone have an idea why this might have
happened?  Is it possible that we still don't have enough memory in
the system for xfs_repair to do the job?  Also, it's not clear to me
how xfs_repair works.  Assuming we won't be able to get it to complete
all of its steps, has it in fact repaired the filesystem somewhat or
are all the changes mentioned while it runs not committed to the
filesystem until the end of the run?

For lack of better ideas I'm running an xfs_check at the moment.  It's
been running for close to an hour and has used almost 29GB of memory
so far.  No errors reported.

TIA,

-- Alberto


* Re: help with xfs_repair on 10TB fs
  2009-01-17 17:13 help with xfs_repair on 10TB fs Alberto Accomazzi
@ 2009-01-17 17:33 ` Eric Sandeen
  2009-01-17 18:42   ` Alberto Accomazzi
  2009-01-17 17:35 ` Tru Huynh
  1 sibling, 1 reply; 9+ messages in thread
From: Eric Sandeen @ 2009-01-17 17:33 UTC
  To: Alberto Accomazzi; +Cc: xfs

Alberto Accomazzi wrote:
> I need some help with figuring out how to repair a large XFS
> filesystem (10TB of data, 100+ million files).  xfs_repair seems to
> have crapped out before finishing the job and now I'm not sure how to
> proceed.
> 
> The system is a CentOS 5.2 storage server with a 3ware controller and
> 16 x 1TB drives, 32GB RAM and 64GB swap.  After clearing the issues
> with bad blocks on the disks, yesterday we set out to fix the
> filesystem.  This is the list of relevant packages that yum reports
> installed:
> 
> kmod-xfs.x86_64                          0.4-1.2.6.18_53.1.14.e installed
> kmod-xfs.x86_64                          0.4-2                  installed
> kmod-xfs.x86_64                          0.4-1.2.6.18_92.1.10.e installed
> xfsdump.x86_64                           2.2.46-1.el5.centos    installed
> xfsprogs.x86_64                          2.9.4-1.el5.centos     installed
> xfsprogs-devel.x86_64                    2.9.4-1.el5.centos     installed
> kernel.x86_64                            2.6.18-92.1.13.el5.cen installed

How did it "crap out?"

You could pretty easily run the very latest xfsprogs here by rebuilding
the src.rpm from

http://kojipkgs.fedoraproject.org/packages/xfsprogs/2.10.2/3.fc11/src/
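
Something like this should do it (a sketch; the filename follows from
that directory listing, and on EL5 the rebuilt RPMs typically land
under /usr/src/redhat):

  wget http://kojipkgs.fedoraproject.org/packages/xfsprogs/2.10.2/3.fc11/src/xfsprogs-2.10.2-3.fc11.src.rpm
  rpmbuild --rebuild xfsprogs-2.10.2-3.fc11.src.rpm
  rpm -Uvh /usr/src/redhat/RPMS/x86_64/xfsprogs-*.x86_64.rpm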

> After bringing the system back, a mount of the fs reported problems:
> 
> Starting XFS recovery on filesystem: sdb1 (logdev: internal)
> Filesystem "sdb1": XFS internal error xfs_btree_check_sblock at line 334 of file
>  /home/buildsvn/rpmbuild/BUILD/xfs-kmod-0.4/_kmod_build_/xfs_btree.c.  Caller 0x
> ffffffff882fa8d2

so log replay is failing now; but that indicates an unclean shutdown.
Something else must have happened between the xfs_repair and this mount
instance?

> Call Trace:
>  [<ffffffff882eacc9>] :xfs:xfs_btree_check_sblock+0xbc/0xcb
>  .....
> 
> An xfs_check on the device suggests how to solve the problem:
> 
> alberto@adsduo-54: sudo xfs_check /dev/sdb1
> ERROR: The filesystem has valuable metadata changes in a log which needs to
> be replayed.  Mount the filesystem to replay the log, and unmount it before
> re-running xfs_check.  If you are unable to mount the filesystem, then use
> the xfs_repair -L option to destroy the log and attempt a repair.
> Note that destroying the log may cause corruption -- please attempt a mount
> of the filesystem before doing this.

Just means that you have a dirty log.
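
(If you're curious what's actually in it, xfs_logprint can dump the
log without replaying anything, something like:

  xfs_logprint -t /dev/sdb1

for the transactional view.)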

> xfs_info reports the following for the filesystem:
> 
> meta-data=/dev/sdb1              isize=256    agcount=32, agsize=98361855 blks
>          =                       sectsz=512   attr=0
> data     =                       bsize=4096   blocks=3147579360, imaxpct=25
>          =                       sunit=0      swidth=0 blks, unwritten=1
> naming   =version 2              bsize=4096
> log      =internal               bsize=4096   blocks=32768, version=1
>          =                       sectsz=512   sunit=0 blks, lazy-count=0
> realtime =none                   extsz=4096   blocks=0, rtextents=0
> 
> So last night I started an "xfs_repair -L" on the device, which
> proceeded through step 6 before quitting at some point in the middle
> of the night without giving me many clues as to what went wrong.  I
> know that this process uses a ton of memory so we loaded the server
> with 32GB of RAM (the swap file is 64GB), and before going to sleep I
> noticed that xfs_repair was using about 24GB of RAM.  I put the
> complete log of xfs_repair online at:
> http://www.cfa.harvard.edu/~alberto/ads/xfs_repair.log

wow, that's messy

> bad hash table for directory inode 58134992 (no data entry): rebuilding
> rebuilding directory inode 58134992
> rebuilding directory inode 58345355
> rebuilding directory inode 60221905
> 
> So I'm led to believe that xfs_repair died before completing the job.
> Should I try again?  Does anyone have an idea why this might have
> happened?  Is it possible that we still don't have enough memory in
> the system for xfs_repair to do the job?  Also, it's not clear to me
> how xfs_repair works.  Assuming we won't be able to get it to complete
> all of its steps, has it in fact repaired the filesystem somewhat or
> are all the changes mentioned while it runs not committed to the
> filesystem until the end of the run?

I don't see any evidence of it dying in the logs; it looks like
either it's still progressing or it's stuck.

> For lack of better ideas I'm running an xfs_check at the moment.  It's
> been running for close to an hour and has used almost 29GB of memory
> so far.  No errors reported.

xfs_check doesn't actually repair anything, just FWIW.

I'd rebuild the srpm I mentioned above and give xfs_repair another shot
with that newer version, at this point.

-Eric

> TIA,
> 
> -- Alberto


* Re: help with xfs_repair on 10TB fs
  2009-01-17 17:13 help with xfs_repair on 10TB fs Alberto Accomazzi
  2009-01-17 17:33 ` Eric Sandeen
@ 2009-01-17 17:35 ` Tru Huynh
  2009-01-17 18:45   ` Alberto Accomazzi
  1 sibling, 1 reply; 9+ messages in thread
From: Tru Huynh @ 2009-01-17 17:35 UTC
  To: Alberto Accomazzi; +Cc: xfs

On Sat, Jan 17, 2009 at 12:13:26PM -0500, Alberto Accomazzi wrote:
> I need some help with figuring out how to repair a large XFS
> filesystem (10TB of data, 100+ million files).  xfs_repair seems to
> have crapped out before finishing the job and now I'm not sure how to
> proceed.
> 
> The system is a CentOS 5.2 storage server with a 3ware controller and
> 16 x 1TB drives, 32GB RAM and 64GB swap.  After clearing the issues
> with bad blocks on the disks, yesterday we set out to fix the
> filesystem.  This is the list of relevant packages that yum reports
> installed:
> 
> kmod-xfs.x86_64                          0.4-1.2.6.18_53.1.14.e installed
> kmod-xfs.x86_64                          0.4-2                  installed
> kmod-xfs.x86_64                          0.4-1.2.6.18_92.1.10.e installed
> xfsdump.x86_64                           2.2.46-1.el5.centos    installed
> xfsprogs.x86_64                          2.9.4-1.el5.centos     installed
> xfsprogs-devel.x86_64                    2.9.4-1.el5.centos     installed
> kernel.x86_64                            2.6.18-92.1.13.el5.cen installed
> 
are you using the centosplus kernel? or just the regular kernel + kmod-xfs?

if xfsprogs ran out of memory, try the testing version available at
http://people.centos.org/tru/XFS/centos-5/RPMS/x86_64/
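
e.g. something along these lines (the version string is a
placeholder, check the directory listing for the real filename):

  rpm -Uvh http://people.centos.org/tru/XFS/centos-5/RPMS/x86_64/xfsprogs-<version>.x86_64.rpm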

Tru


* Re: help with xfs_repair on 10TB fs
  2009-01-17 17:33 ` Eric Sandeen
@ 2009-01-17 18:42   ` Alberto Accomazzi
  2009-01-17 18:50     ` Eric Sandeen
  0 siblings, 1 reply; 9+ messages in thread
From: Alberto Accomazzi @ 2009-01-17 18:42 UTC
  To: Eric Sandeen; +Cc: xfs

On Sat, Jan 17, 2009 at 12:33 PM, Eric Sandeen <sandeen@sandeen.net> wrote:

> Alberto Accomazzi wrote:
> > I need some help with figuring out how to repair a large XFS
> > filesystem (10TB of data, 100+ million files).  xfs_repair seems to
> > have crapped out before finishing the job and now I'm not sure how to
> > proceed.
>
> How did it "crap out?"


Well, in the way I described below, namely it ran for several hours and then
died without completing.  As you can see from the log (which captured both
stdout and stderr) there's nothing that indicates what terminated the
program.  And it's definitely not running now.


> the src.rpm from
>
> http://kojipkgs.fedoraproject.org/packages/xfsprogs/2.10.2/3.fc11/src/
>

Ok, I guess it's worth giving it a shot.  I assume I don't need to worry
about kernel modules because the xfsprogs don't depend on that, right?


> > After bringing the system back, a mount of the fs reported problems:
> >
> > Starting XFS recovery on filesystem: sdb1 (logdev: internal)
> > Filesystem "sdb1": XFS internal error xfs_btree_check_sblock at line 334 of file
> >  /home/buildsvn/rpmbuild/BUILD/xfs-kmod-0.4/_kmod_build_/xfs_btree.c.  Caller 0x
> > ffffffff882fa8d2
>
> so log replay is failing now; but that indicates an unclean shutdown.
> Something else must have happened between the xfs_repair and this mount
> instance?
>

Sorry, I wasn't clear: there was indeed an unclean shutdown (actually a
couple), after which the mount would not succeed presumably because of the
dirty log.  I was able to mount the system read-only and take enough of a
look to see that there was significant corruption of the data.  Running
xfs_repair -L at that point seemed the only option available.  But do let me
know if this line of thinking is incorrect.
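
For reference, the read-only mount was something like this (a sketch
from memory; norecovery is what allows mounting with a dirty log):

  mount -o ro,norecovery /dev/sdb1 /mnt/tmp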


> > So I'm led to believe that xfs_repair died before completing the job.
> > Should I try again?  Does anyone have an idea why this might have
> > happened?  Is it possible that we still don't have enough memory in
> > the system for xfs_repair to do the job?  Also, it's not clear to me
> > how xfs_repair works.  Assuming we won't be able to get it to complete
> > all of its steps, has it in fact repaired the filesystem somewhat or
> > are all the changes mentioned while it runs not committed to the
> > filesystem until the end of the run?
>
> I don't see any evidence of it dying in the logs; it looks like
> either it's still progressing or it's stuck.
>

It's definitely not running now, so it has died at some point.


> > For lack of better ideas I'm running an xfs_check at the moment.  It's
> > been running for close to an hour and has used almost 29GB of memory
> > so far.  No errors reported.
>
> xfs_check doesn't actually repair anything, just FWIW.


Right, but I'm hoping to get some clue as to the status of the filesystem at
this point.


>
> I'd rebuild the srpm I mentioned above and give xfs_repair another shot
> with that newer version, at this point.
>

Ok, will work on that and report back.  Thank you much for the suggestion.

-- Alberto



* Re: help with xfs_repair on 10TB fs
  2009-01-17 17:35 ` Tru Huynh
@ 2009-01-17 18:45   ` Alberto Accomazzi
  0 siblings, 0 replies; 9+ messages in thread
From: Alberto Accomazzi @ 2009-01-17 18:45 UTC
  To: Tru Huynh; +Cc: xfs

On Sat, Jan 17, 2009 at 12:35 PM, Tru Huynh <tru@pasteur.fr> wrote:

> On Sat, Jan 17, 2009 at 12:13:26PM -0500, Alberto Accomazzi wrote:
> > I need some help with figuring out how to repair a large XFS
> > filesystem (10TB of data, 100+ million files).  xfs_repair seems to
> > have crapped out before finishing the job and now I'm not sure how to
> > proceed.
> >
> > The system is a CentOS 5.2 storage server with a 3ware controller and
> > 16 x 1TB drives, 32GB RAM and 64GB swap.  After clearing the issues
> > with bad blocks on the disks, yesterday we set out to fix the
> > filesystem.  This is the list of relevant packages that yum reports
> > installed:
> >
> > kmod-xfs.x86_64                          0.4-1.2.6.18_53.1.14.e installed
> > kmod-xfs.x86_64                          0.4-2                  installed
> > kmod-xfs.x86_64                          0.4-1.2.6.18_92.1.10.e installed
> > xfsdump.x86_64                           2.2.46-1.el5.centos    installed
> > xfsprogs.x86_64                          2.9.4-1.el5.centos     installed
> > xfsprogs-devel.x86_64                    2.9.4-1.el5.centos     installed
> > kernel.x86_64                            2.6.18-92.1.13.el5.cen installed
> >
> are you using the centosplus kernel? or just the regular kernel + kmod-xfs?
>
> if xfsprogs ran out of memory, try the testing version available at
> http://people.centos.org/tru/XFS/centos-5/RPMS/x86_64/
>

Right now we're running centosplus:

Linux adsduo 2.6.18-92.1.13.el5.centos.plus #1 SMP Wed Oct 1 13:41:35 EDT
2008 x86_64 x86_64 x86_64 GNU/Linux
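
To double-check which xfs module the running kernel actually loaded,
something like (a sketch):

  grep xfs /proc/modules
  modinfo xfs | grep filename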

What is the current thinking about this vs. regular kernel + kmod-xfs?

-- AA



* Re: help with xfs_repair on 10TB fs
  2009-01-17 18:42   ` Alberto Accomazzi
@ 2009-01-17 18:50     ` Eric Sandeen
  2009-01-17 23:14       ` Alberto Accomazzi
  0 siblings, 1 reply; 9+ messages in thread
From: Eric Sandeen @ 2009-01-17 18:50 UTC
  To: Alberto Accomazzi; +Cc: xfs

Alberto Accomazzi wrote:
> On Sat, Jan 17, 2009 at 12:33 PM, Eric Sandeen <sandeen@sandeen.net> wrote:
> 
>> Alberto Accomazzi wrote:
>>> I need some help with figuring out how to repair a large XFS
>>> filesystem (10TB of data, 100+ million files).  xfs_repair seems to
>>> have crapped out before finishing the job and now I'm not sure how to
>>> proceed.
>> How did it "crap out?"
> 
> 
> Well, in the way I described below, namely it ran for several hours and then
> died without completing.  As you can see from the log (which captured both
> stdout and stderr) there's nothing that indicates what terminated the
> program.  And it's definitely not running now.
> 
> 
>> the src.rpm from
>>
>> http://kojipkgs.fedoraproject.org/packages/xfsprogs/2.10.2/3.fc11/src/
>>
> 
> Ok, I guess it's worth giving it a shot.  I assume I don't need to worry
> about kernel modules because the xfsprogs don't depend on that, right?

right.

> 
>>> After bringing the system back, a mount of the fs reported problems:
>>>
>>> Starting XFS recovery on filesystem: sdb1 (logdev: internal)
>>> Filesystem "sdb1": XFS internal error xfs_btree_check_sblock at line 334 of file
>>>  /home/buildsvn/rpmbuild/BUILD/xfs-kmod-0.4/_kmod_build_/xfs_btree.c.  Caller 0x
>>> ffffffff882fa8d2
>> so log replay is failing now; but that indicates an unclean shutdown.
>> Something else must have happened between the xfs_repair and this mount
>> instance?
>>
> 
> Sorry, I wasn't clear: there was indeed an unclean shutdown (actually a
> couple), after which the mount would not succeed presumably because of the
> dirty log.  I was able to mount the system read-only and take enough of a
> look to see that there was significant corruption of the data.  Running
> xfs_repair -L at that point seemed the only option available.  But do let me
> know if this line of thinking is incorrect.

yes, if you have a dirty log that won't replay, zapping the log via
repair is about the only option.  I wonder what the first hint of
trouble here was, though, what led to all this misery.... :)

-Eric


* Re: help with xfs_repair on 10TB fs
  2009-01-17 18:50     ` Eric Sandeen
@ 2009-01-17 23:14       ` Alberto Accomazzi
  2009-01-17 23:49         ` Eric Sandeen
  0 siblings, 1 reply; 9+ messages in thread
From: Alberto Accomazzi @ 2009-01-17 23:14 UTC
  To: Eric Sandeen; +Cc: xfs

On Sat, Jan 17, 2009 at 1:50 PM, Eric Sandeen <sandeen@sandeen.net> wrote:
> Alberto Accomazzi wrote:
>> On Sat, Jan 17, 2009 at 12:33 PM, Eric Sandeen <sandeen@sandeen.net> wrote:
>>
>>> Alberto Accomazzi wrote:
>>>> I need some help with figuring out how to repair a large XFS
>>>> filesystem (10TB of data, 100+ million files).  xfs_repair seems to
>>>> have crapped out before finishing the job and now I'm not sure how to
>>>> proceed.
>>> How did it "crap out?"
>>
>>
>> Well, in the way I described below, namely it ran for several hours and then
>> died without completing.  As you can see from the log (which captured both
>> stdout and stderr) there's nothing that indicates what terminated the
>> program.  And it's definitely not running now.
>>
>>
>>> the src.rpm from
>>>
>>> http://kojipkgs.fedoraproject.org/packages/xfsprogs/2.10.2/3.fc11/src/
>>>
>>
>> Ok, I guess it's worth giving it a shot.  I assume I don't need to worry
>> about kernel modules because the xfsprogs don't depend on that, right?

For the record, after upgrading to xfsprogs-2.10.2 as suggested by
Eric, xfs_repair completed successfully.  Unfortunately I'm now left
dealing with quite a mess: 388K files in lost+found, in a filesystem
of over 160M inodes.  Ugh...
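
The first step will probably be sorting out what landed in there;
I'm thinking something along these lines (a sketch, mount point
hypothetical):

  # entries in lost+found are named by inode number, so classify
  # them by type and size before deciding what to keep
  find /mnt/data/lost+found -type f -print0 | xargs -0 file > /tmp/lostfound-types.txt
  du -sh /mnt/data/lost+found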

Thanks for all your help, though...
-- Alberto


* Re: help with xfs_repair on 10TB fs
  2009-01-17 23:14       ` Alberto Accomazzi
@ 2009-01-17 23:49         ` Eric Sandeen
  2009-01-18 20:34           ` Alberto Accomazzi
  0 siblings, 1 reply; 9+ messages in thread
From: Eric Sandeen @ 2009-01-17 23:49 UTC
  To: Alberto Accomazzi; +Cc: xfs

Alberto Accomazzi wrote:


> For the record, after upgrading to xfsprogs-2.10.2 as suggested by
> Eric, xfs_repair completed successfully.  Unfortunately I'm now left
> dealing with quite a mess: 388K files in lost+found, in a filesystem
> of over 160M inodes.  Ugh...
> 
> Thanks for all your help, though...
> -- Alberto

Bummer.  What was the first thing that went wrong, by the way?

-Eric


* Re: help with xfs_repair on 10TB fs
  2009-01-17 23:49         ` Eric Sandeen
@ 2009-01-18 20:34           ` Alberto Accomazzi
  0 siblings, 0 replies; 9+ messages in thread
From: Alberto Accomazzi @ 2009-01-18 20:34 UTC
  To: Eric Sandeen; +Cc: xfs

On Sat, Jan 17, 2009 at 6:49 PM, Eric Sandeen <sandeen@sandeen.net> wrote:
> Alberto Accomazzi wrote:
>
>
>> For the record, after upgrading to xfsprogs-2.10.2 as suggested by
>> Eric, xfs_repair completed successfully.  Unfortunately I'm now left
>> dealing with quite a mess: 388K files in lost+found, in a filesystem
>> of over 160M inodes.  Ugh...
>>
>> Thanks for all your help, though...
>> -- Alberto
>
> Bummer.  What was the first thing that went wrong, by the way?

A hardware issue with the underlying RAID managed by a 3ware
controller.  Although this is running RAID 6 + hot spare, apparently
enough bad blocks accumulated on the drives that we experienced data
corruption.  I'm definitely not happy about it and I'm trying to
figure out whether there is something faulty going on here.  In fact
I just noticed that the drives in question (Seagate ES.2) have been
found to have problems with data loss under certain circumstances:
http://seagate.custkb.com/seagate/crm/selfservice/search.jsp?DocId=207931
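
Next I want to check the firmware revision on the drives behind the
3ware card; smartctl can reach them through the controller, something
like this (a sketch; the port number and /dev/twa0 vs /dev/twe0
depend on the controller model):

  smartctl -a -d 3ware,0 /dev/twa0 | grep -i firmware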

-- Alberto

