From: Matthew Wilcox <willy@infradead.org>
To: Bart Van Assche <bvanassche@acm.org>
Cc: Ric Wheeler <ricwheeler@gmail.com>,
	lsf-pc@lists.linux-foundation.org,
	Linux FS Devel <linux-fsdevel@vger.kernel.org>,
	linux-block@vger.kernel.org
Subject: Re: [LSF/MM/BPF TOPIC] durability vs performance for flash devices (especially embedded!)
Date: Wed, 9 Jun 2021 19:30:12 +0100	[thread overview]
Message-ID: <YMEItMNXG2bHgJE+@casper.infradead.org> (raw)
In-Reply-To: <0e1ed05f-4e83-7c84-dee6-ac0160be8f5c@acm.org>

On Wed, Jun 09, 2021 at 11:05:22AM -0700, Bart Van Assche wrote:
> On 6/9/21 3:53 AM, Ric Wheeler wrote:
> > Consumer devices are pushed to use the highest capacity emmc class
> > devices, but they have horrible write durability.
> > 
> > At the same time, we layer on top of these devices our normal stack -
> > device mapper and ext4 or f2fs are common configurations today - which
> > causes write amplification and can burn out storage even faster. I think
> > it would be useful to discuss how we can minimize the write
> > amplification when we need to run on these low end parts & see where the
> > stack needs updating.
> > 
> > Great background paper which inspired me to spend time tormenting emmc
> > parts is:
> > 
> > http://www.cs.unc.edu/~porter/pubs/hotos17-final29.pdf
> 
> Without having read that paper, has zoned storage been considered? F2FS
> already supports zoned block devices. I'm not aware of a better solution
> to reduce write amplification for flash devices. Maybe I'm missing
> something?

Maybe you should read the paper.

" Thiscomparison demonstrates that using F2FS, a flash-friendly file
sys-tem, does not mitigate the wear-out problem, except inasmuch asit
inadvertently rate limitsallI/O to the device"
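
For readers who want to quantify the stack-induced write amplification Ric
describes: the part added by the filesystem and any device-mapper layers can
be estimated from the block layer's own counters. A minimal sketch, assuming
the part shows up as mmcblk0 and the standard /sys/block/<dev>/stat layout
(seventh field = sectors written, in 512-byte units); amplification inside
the eMMC's FTL is not visible at this layer and needs vendor health data:

/*
 * Sketch: print how many bytes the kernel has issued as writes to a
 * block device, read from /sys/block/<dev>/stat (7th field = sectors
 * written, 512-byte units).  mmcblk0 is only a placeholder default.
 */
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
        const char *dev = argc > 1 ? argv[1] : "mmcblk0";
        unsigned long long st[7] = { 0 };
        char path[256];
        FILE *f;

        snprintf(path, sizeof(path), "/sys/block/%s/stat", dev);
        f = fopen(path, "r");
        if (!f) {
                perror(path);
                return 1;
        }
        if (fscanf(f, "%llu %llu %llu %llu %llu %llu %llu",
                   &st[0], &st[1], &st[2], &st[3],
                   &st[4], &st[5], &st[6]) != 7) {
                fprintf(stderr, "unexpected format in %s\n", path);
                fclose(f);
                return 1;
        }
        fclose(f);

        /* st[6] = write sectors; the stat file always uses 512-byte sectors */
        printf("%s: %llu bytes written\n", dev, st[6] * 512ULL);
        return 0;
}

Run it before and after a workload that writes a known number of bytes; the
delta divided by the application's bytes gives the software-stack write
amplification factor, which is the part of the problem the stack can
actually address.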

> More information is available in this paper:
> https://dl.acm.org/doi/pdf/10.1145/3458336.3465300.
> 
> Thanks,
> 
> Bart.
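
As background to the zoned-storage suggestion: whether the kernel treats a
device as zoned can be read straight from sysfs. A minimal sketch, assuming
the block layer's queue/zoned and queue/nr_zones attributes (mmcblk0 is only
a placeholder default):

/*
 * Sketch: report a block device's zoned model ("none", "host-aware" or
 * "host-managed") and, if zoned, how many zones it exposes, using the
 * queue/zoned and queue/nr_zones sysfs attributes.
 */
#include <stdio.h>
#include <string.h>

static int read_attr(const char *dev, const char *attr, char *buf, size_t len)
{
        char path[256];
        FILE *f;

        snprintf(path, sizeof(path), "/sys/block/%s/queue/%s", dev, attr);
        f = fopen(path, "r");
        if (!f)
                return -1;
        if (!fgets(buf, len, f)) {
                fclose(f);
                return -1;
        }
        fclose(f);
        buf[strcspn(buf, "\n")] = '\0';   /* strip trailing newline */
        return 0;
}

int main(int argc, char **argv)
{
        const char *dev = argc > 1 ? argv[1] : "mmcblk0";
        char model[64], zones[64];

        if (read_attr(dev, "zoned", model, sizeof(model))) {
                fprintf(stderr, "%s: no zoned attribute (old kernel?)\n", dev);
                return 1;
        }
        printf("%s: zoned model = %s\n", dev, model);

        if (strcmp(model, "none") &&
            !read_attr(dev, "nr_zones", zones, sizeof(zones)))
                printf("%s: %s zones\n", dev, zones);

        return 0;
}

Typical eMMC parts report "none" here, i.e. the host cannot see or manage
erase blocks as zones, which leaves garbage collection entirely to the FTL
and is part of why write amplification is hard to control from the host side.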

Thread overview: 14+ messages
2021-06-09 10:53 [LSF/MM/BPF TOPIC] durability vs performance for flash devices (especially embedded!) Ric Wheeler
2021-06-09 18:05 ` Bart Van Assche
2021-06-09 18:30   ` Matthew Wilcox [this message]
2021-06-09 18:47     ` Bart Van Assche
2021-06-10  0:16       ` Damien Le Moal
2021-06-10  1:11         ` Ric Wheeler
2021-06-10  1:20       ` Ric Wheeler
2021-06-10 11:07         ` Tim Walker
2021-06-10 16:38           ` Keith Busch
     [not found]       ` <CAOtxgyeRf=+grEoHxVLEaSM=Yfx4KrSG5q96SmztpoWfP=QrDg@mail.gmail.com>
2021-06-10 16:22         ` Ric Wheeler
2021-06-10 17:06           ` Matthew Wilcox
2021-06-10 17:25             ` Ric Wheeler
2021-06-10 17:57           ` Viacheslav Dubeyko
2021-06-13 20:41 ` [LSF/MM/BPF TOPIC] SSDFS: LFS file system without GC operations + NAND flash devices lifetime prolongation Viacheslav Dubeyko
