Linux-bcachefs Archive mirror
From: Michael <mclaud@roznica.com.ua>
To: linux-bcachefs@vger.kernel.org
Subject: Re: [WIP] bcachefs fs usage update
Date: Tue, 5 Mar 2024 14:33:33 +0200	[thread overview]
Message-ID: <d243882b-7bdd-e743-85ab-00b104befb3f@roznica.com.ua> (raw)
In-Reply-To: <gajhq3iyluwmr44ee2fzacfpgpxmr2jurwqg6aeiab4lfila3p@b3l7bywr3yed>

On 2024-03-02 04:14, Kent Overstreet wrote:
> I'm currently updating the 'bcachefs fs usage' command for the disk
> accounting rewrite, and looking for suggestions on any improvements we
> could make - ways to present the output that would be clearer and more
> useful, possibly ideas on new things to count...
> 
> I think a shorter form of the per-device section is in order, a table
> with data type on the x axis and the device on the y axis; we also want
> percentages.
> 
> The big thing I'm trying to figure out is how to present the snapshots
> counters in a useful way.
> 
> Snapshot IDs form trees, where subvolumes correspond to leaf nodes in
> snapshot trees and interior nodes represent data shared between multiple
> subvolumes.
> 
> That means it's straightforward to print how much data each subvolume
> is using directly - look up the subvolume for a given snapshot ID, look
> up the filesystem path of that subvolume - but I haven't come up with a
> good way of presenting how data is shared; these trees can be
> arbitrarily large.
> 
> Thoughts?
> 
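Kent's tree description maps naturally onto a bottom-up aggregation: a
node's total is its direct bytes plus its children's totals, and a leaf's
direct bytes are exclusive to that subvolume. A minimal sketch in Python,
with made-up node IDs and byte counts (not real accounting data):

```python
# Sketch: aggregate usage over a snapshot-ID tree.
# Interior nodes hold data shared by all leaf subvolumes below them;
# leaves hold data exclusive to one subvolume.
# The tree shape and byte counts are invented example values.

tree = {            # node id -> child ids
    1: [2, 3],      # root: data shared by both subvolumes
    2: [], 3: [],   # leaves: subvolumes
}
direct = {1: 4096, 2: 1024, 3: 2048}   # bytes charged to each snapshot ID

def total_usage(node):
    """Direct bytes plus everything beneath this node."""
    return direct[node] + sum(total_usage(c) for c in tree[node])

def exclusive(leaf):
    """Bytes only this subvolume references (its leaf's direct count)."""
    return direct[leaf]

print(total_usage(1))   # 7168: the whole tree
print(exclusive(2))     # 1024: exclusive to subvolume 2
```

The hard part Kent raises - presenting sharing for arbitrarily large
trees - isn't solved by this; it only shows the total/exclusive split
that is cheap to compute per subvolume.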

Short and long human-readable forms, plus something similar to JSON.
The short form could show Size, Used, and Free:
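As a rough illustration of the JSON idea - the key names here are
invented for the example, not an existing bcachefs-tools format:

```python
import json

# Hypothetical machine-readable form of the short output; key names
# are illustrative only, mirroring the human-readable fields above.
usage = {
    "filesystem": "77d3a40d-58b6-46c9-a4d2-e59c8681e152",
    "size_bytes": 11 * 2**30,
    "used_bytes": int(4.96 * 2**30),
}
usage["free_bytes"] = usage["size_bytes"] - usage["used_bytes"]
print(json.dumps(usage, indent=2))
```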

> Filesystem: 77d3a40d-58b6-46c9-a4d2-e59c8681e152
> Size:                       11.0 GiB
> Used:                       4.96 GiB
> Online reserved:                 0 B
> Inodes:                            4
> 
> Persistent reservations:
> 2x                          5.00 MiB
> 

An option to show the layout, with a warning if the layout is not optimal:

> Data type       Required/total  Durability    Devices
> btree:          1/2             2             [vdb vdc]           14.0 MiB
> btree:          1/2             2             [vdb vdd]           17.8 MiB
> btree:          1/2             2             [vdc vdd]           14.3 MiB
> user:           1/2             2             [vdb vdc]           1.64 GiB
> user:           1/2             2             [vdb vdd]           1.63 GiB
> user:           1/2             2             [vdc vdd]           1.64 GiB
> 
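Kent's idea of a compact table - data types on the x axis, devices on
the y axis, with percentages - might look something like this sketch
(all values invented, loosely echoing the listings below):

```python
# Sketch of a compact per-device table: data types as columns,
# devices as rows, plus a percent-of-capacity column.
# Byte counts are made-up example values.

GiB = 2**30
capacity = 4 * GiB
devices = {
    "vdb": {"btree": 15.9 * 2**20, "user": 1.64 * GiB, "cached": 0},
    "vdc": {"btree": 14.1 * 2**20, "user": 1.64 * GiB, "cached": 0},
}

cols = ["btree", "user", "cached"]
print(f"{'device':8}" + "".join(f"{c:>12}" for c in cols) + f"{'used%':>8}")
for dev, d in devices.items():
    used = sum(d.values())
    row = "".join(f"{d[c] / GiB:>11.2f}G" for c in cols)
    print(f"{dev:8}{row}{100 * used / capacity:>7.1f}%")
```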

An option for detailed output:

> Compression:      compressed    uncompressed     average extent size
> lz4                 4.63 GiB        6.57 GiB                 112 KiB
> incompressible       328 MiB         328 MiB                 113 KiB

> Snapshots:
> 4294967295          4.91 GiB
Per snapshot/subvolume, show all (or a masked subset of) info such as
Total, Exclusive, and a short layout summary (replicas size, EC size).
> 
> Btrees:
> extents             12.0 MiB
> inodes               256 KiB
> dirents              256 KiB
> alloc               10.8 MiB
> subvolumes           256 KiB
> snapshots            256 KiB
> lru                  256 KiB
> freespace            256 KiB
> need_discard         256 KiB
> backpointers        20.5 MiB
> bucket_gens          256 KiB
> snapshot_trees       256 KiB
> logged_ops           256 KiB
> accounting           256 KiB
> 
Maybe show the device label, or the device name from /dev/mapper/...,
not dm-0, dm-1, etc.
> (no label) (device 0):           vdb              rw
>                                  data         buckets    fragmented
>    free:                     2.27 GiB           18627
>    sb:                       3.00 MiB              25       124 KiB
>    journal:                  32.0 MiB             256
>    btree:                    15.9 MiB             127
>    user:                     1.64 GiB           13733      41.1 MiB
>    cached:                        0 B               0
>    parity:                        0 B               0
>    stripe:                        0 B               0
>    need_gc_gens:                  0 B               0
>    need_discard:                  0 B               0
>    capacity:                 4.00 GiB           32768
> 
> (no label) (device 1):           vdc              rw
>                                  data         buckets    fragmented
>    free:                     2.28 GiB           18652
>    sb:                       3.00 MiB              25       124 KiB
>    journal:                  32.0 MiB             256
>    btree:                    14.1 MiB             113
>    user:                     1.64 GiB           13722      38.5 MiB
>    cached:                        0 B               0
>    parity:                        0 B               0
>    stripe:                        0 B               0
>    need_gc_gens:                  0 B               0
>    need_discard:                  0 B               0
>    capacity:                 4.00 GiB           32768
> 
> (no label) (device 2):           vdd              rw
>                                  data         buckets    fragmented
>    free:                     2.28 GiB           18640
>    sb:                       3.00 MiB              25       124 KiB
>    journal:                  32.0 MiB             256
>    btree:                    16.0 MiB             128
>    user:                     1.64 GiB           13719      38.6 MiB
>    cached:                        0 B               0
>    parity:                        0 B               0
>    stripe:                        0 B               0
>    need_gc_gens:                  0 B               0
>    need_discard:                  0 B               0
>    capacity:                 4.00 GiB           32768
> 
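For what it's worth, the "fragmented" column in these listings looks
like bucket-granularity overhead: buckets times bucket size, minus live
data, with a 128 KiB bucket size implied by 4 GiB capacity over 32768
buckets. A sketch of that arithmetic - my reading of the output, not
documented behavior:

```python
# Bucket size inferred from the vdb listing: 4 GiB capacity / 32768 buckets.
BUCKET = 4 * 2**30 // 32768        # 131072 bytes = 128 KiB

def fragmented(buckets, data_bytes):
    """Space tied up in partially filled buckets."""
    return buckets * BUCKET - data_bytes

# vdb 'user' row: 13733 buckets holding ~1.64 GiB of data
frag = fragmented(13733, int(1.64 * 2**30))
print(f"{frag / 2**20:.1f} MiB")   # ~37 MiB vs the table's 41.1 MiB,
                                   # since 1.64 GiB is itself rounded
```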

And an option for full info :)

-- 
Michael
