Linux-LVM Archive mirror
From: Zdenek Kabelac <zdenek.kabelac@gmail.com>
To: matthew patton <pattonme@yahoo.com>,
	"lists.linux.dev@frank.fyi" <lists.linux.dev@frank.fyi>
Cc: "linux-lvm@lists.linux.dev" <linux-lvm@lists.linux.dev>
Subject: Re: add volatile flag to PV/LVs (for cache) to avoid degraded state on reboot
Date: Mon, 22 Jan 2024 11:58:47 +0100	[thread overview]
Message-ID: <59d111f1-bd73-4f88-b036-2cd09977ea24@gmail.com> (raw)
In-Reply-To: <1279983342.522347.1705803666448@mail.yahoo.com>

On 21. 01. 2024 at 3:21, matthew patton wrote:
>> As you already said, it was never ACKed, so the software that tried to write it never expected it to be written.
> 
> we don't care about the user program and what it thinks got written or not. 
> That's way higher up the stack.
> 
> Any write-thru cache has NO business writing new data to the cache first; it
> must hit the source media first. Once that is done, it can be ACK'd. The ONLY
> other part of the "transaction" is an update to the cache-management
> block mapping to invalidate the block, so as to prevent stale reads.
> 
> THEN, IF there is a case to be made for re-caching the new data (we know it
> was a block under active management), that is a SECOND OP that can also be
> made asynchronous. Write-thru should ALWAYS perform and behave as if the
> cache device doesn't exist at all.
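
Sketched as a small self-contained Python toy, the ordering proposed above would look roughly like this; everything below - the names, the 512 KiB block size, the in-memory stand-ins - is made up for illustration and is not dm-cache code:

    BLOCK_SIZE = 512 * 1024              # hypothetical cache-block size (512 KiB)
    origin = bytearray(4 * BLOCK_SIZE)   # stand-in for the origin device
    cache = {}                           # block_id -> bytes, stand-in for the cache
    recache_queue = []                   # deferred (asynchronous) promotions

    def writethrough_write(offset, data):
        # 1. Hit the source media first.
        origin[offset:offset + len(data)] = data
        # 2. Invalidate the cached copy so a stale read can never be served.
        block_id = offset // BLOCK_SIZE
        cache.pop(block_id, None)
        # 3. Only now is the write ACK'd; re-caching the block is a
        #    SECOND OP, queued to run asynchronously.
        recache_queue.append(block_id)
        return True                      # the ACK

    def drain_recache_queue():
        # The deferred second op: promote whole blocks back from the origin.
        while recache_queue:
            block_id = recache_queue.pop(0)
            start = block_id * BLOCK_SIZE
            cache[block_id] = bytes(origin[start:start + BLOCK_SIZE])

    writethrough_write(512, b"x" * 512)
    drain_recache_queue()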


Hi

Anyone can certainly write a caching policy following the rules above; however, 
the current DM cache works differently with its cached 'blocks'.

The method above would require first dropping/demoting the whole cached block 
out of the cache, then updating the content on the origin device, and then 
promoting the whole updated block back into the cache. I.e. a user writes a 
single 512 B sector and the whole 512 KiB cached block would need to be 
recached...
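
To put rough numbers on that, with the 512 KiB cache-block size used as an example above (the real block size is configurable; the figures below are only illustrative):

    sector = 512                 # bytes actually written by the user
    cache_block = 512 * 1024     # bytes per cached block (example value only)
    # promoting the block back means reading it from the origin and then
    # writing it into the cache device again:
    moved = 2 * cache_block
    print(moved // sector)       # -> 2048, roughly 2048x the user's write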

So here I can only wish good luck with the performance of such an engine. The 
current DM cache engine uses parallel writes - thus there can be a moment where 
the cache simply has the more recent and valid data.
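
Roughly sketched (only an illustration of that ordering window, not the actual dm-cache kernel code):

    import threading

    def cached_write(write_origin, write_cache, data):
        # Both copies are written in parallel and the write is ACK'd only
        # after both have completed.  Between the two completions there is
        # a window in which one copy - e.g. the cache - already holds the
        # newer data while the other does not yet.
        t1 = threading.Thread(target=write_origin, args=(data,))
        t2 = threading.Thread(target=write_cache, args=(data,))
        t1.start(); t2.start()
        t1.join(); t2.join()
        return True              # the ACK

    cached_write(lambda d: None, lambda d: None, b"x" * 512)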

The problem here arises when the origin has faulty sectors - the DM target 
accepts this risk. It should not have any impact on properly written software 
that uses transactional mechanisms correctly.

So if there is room for a much slower caching engine that will never, ever have 
any dirty pages - someone can bravely step in and write a new caching policy 
for such an engine.

Regards

Zdenek



Thread overview: 7+ messages
2024-01-12 18:19 add volatile flag to PV/LVs (for cache) to avoid degraded state on reboot lists.linux.dev
2024-01-17 11:08 ` Zdenek Kabelac
2024-01-17 22:00   ` Gionatan Danti
2024-01-18 15:40     ` Zdenek Kabelac
2024-01-18 19:50       ` Gionatan Danti
2024-01-20 20:29       ` lists.linux.dev
     [not found]         ` <1279983342.522347.1705803666448@mail.yahoo.com>
2024-01-22 10:58           ` Zdenek Kabelac [this message]
