All the mail mirrored from lore.kernel.org
* Re: VM20 behavior on a 486DX/66Mhz with 16mb of RAM
       [not found] <19990116115459.A7544@hexagon>
@ 1999-01-16 13:22 ` Andrea Arcangeli
  1999-01-16 16:37   ` Andrea Arcangeli
  1999-01-19 18:02   ` Stephen C. Tweedie
  0 siblings, 2 replies; 8+ messages in thread
From: Andrea Arcangeli @ 1999-01-16 13:22 UTC
  To: Nimrod Zimerman; +Cc: Linux Kernel mailing list, linux-mm

On Sat, 16 Jan 1999, Nimrod Zimerman wrote:

> Personally, I don't like the way the pager works. It is too magical. Change
> 'priority', and it might work better. Why? Because. I much preferred the old
> approach, of being able to simply tell the cache (and buffers, though I
> don't see this unless I explicitly try to enlarge it) to *never*, *ever* grow
> over some arbitrary limit. This is far better for smaller machines, at
> least as far as I can currently see.

Setting a high limit on the cache when we are low on memory is easily doable.
Comments from other mm guys?

Andrea Arcangeli

--
This is a majordomo managed list.  To unsubscribe, send a message with
the body 'unsubscribe linux-mm me@address' to: majordomo@kvack.org


* Re: VM20 behavior on a 486DX/66Mhz with 16mb of RAM
  1999-01-16 13:22 ` VM20 behavior on a 486DX/66Mhz with 16mb of RAM Andrea Arcangeli
@ 1999-01-16 16:37   ` Andrea Arcangeli
  1999-01-19 18:02   ` Stephen C. Tweedie
  1 sibling, 0 replies; 8+ messages in thread
From: Andrea Arcangeli @ 1999-01-16 16:37 UTC
  To: Nimrod Zimerman; +Cc: Linux Kernel mailing list, linux-mm

On Sat, 16 Jan 1999, Andrea Arcangeli wrote:

> On Sat, 16 Jan 1999, Nimrod Zimerman wrote:
> 
> > Personally, I don't like the way the pager works. It is too magical. Change
> > 'priority', and it might work better. Why? Because. I much preferred the old
> > approach, of being able to simply tell the cache (and buffers, though I
> > don't see this unless I explicitly try to enlarge it) to *never*, *ever* grow
> > over some arbitrary limit. This is far better for smaller machines, at
> > least as far as I can currently see.
> 
> Setting a high limit on the cache when we are low on memory is easily doable.
> Comments from other mm guys?

Please try out arca-vm-22. You can set the cache percentage
(buffer + file cache + swap cache) that the system will converge to when it
is low on memory. The cache percentage is tunable via the second number in
the sysctl `.../sys/vm/pager`.
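A minimal shell sketch of that two-field tuning, using a temp file to stand
in for the real sysctl file (whose full path is elided above); the `6 60`
contents and the meaning of the first field are assumptions for illustration:

```shell
# Simulate the two-field pager sysctl with a temp file; the real knob
# lives under the (elided) path mentioned above.
pager=$(mktemp)
echo "6 60" > "$pager"        # assumed fields: pager priority, cache %
read prio pct < "$pager"      # split the two numbers
echo "$prio 40" > "$pager"    # keep the priority, lower the cache % to 40
cat "$pager"
```

On a real system the same `read`/`echo` pair would be pointed at the sysctl
file itself instead of the temp file.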

ftp://e-mind.com/pub/linux/kernel-patches/2.2.0-pre7-arca-VM-22

Let me know if it's what you like.

Andrea Arcangeli


* Re: VM20 behavior on a 486DX/66Mhz with 16mb of RAM
  1999-01-16 13:22 ` VM20 behavior on a 486DX/66Mhz with 16mb of RAM Andrea Arcangeli
  1999-01-16 16:37   ` Andrea Arcangeli
@ 1999-01-19 18:02   ` Stephen C. Tweedie
  1999-01-19 20:09     ` John Alvord
  1 sibling, 1 reply; 8+ messages in thread
From: Stephen C. Tweedie @ 1999-01-19 18:02 UTC
  To: Andrea Arcangeli
  Cc: Nimrod Zimerman, Linux Kernel mailing list, linux-mm,
	Stephen Tweedie

Hi,

On Sat, 16 Jan 1999 14:22:10 +0100 (CET), Andrea Arcangeli
<andrea@e-mind.com> said:

> Setting a high limit on the cache when we are low on memory is easily doable.
> Comments from other mm guys?

Horrible --- smells like the old problem of "oh, our VM is hopeless at
tuning performance itself, so let's rely on magic numbers to constrain
it to reasonable performance".  I'd much much much much rather see a VM
which manages to work well without having to be constrained by tricks
like that (although by all means supply extra boundary limits for use in
special cases: just don't enable them on a default system).

--Stephen

* Re: VM20 behavior on a 486DX/66Mhz with 16mb of RAM
  1999-01-19 18:02   ` Stephen C. Tweedie
@ 1999-01-19 20:09     ` John Alvord
  1999-01-19 20:35       ` Andrea Arcangeli
  0 siblings, 1 reply; 8+ messages in thread
From: John Alvord @ 1999-01-19 20:09 UTC
  To: Stephen C. Tweedie
  Cc: Andrea Arcangeli, Nimrod Zimerman, Linux Kernel mailing list,
	linux-mm

On Tue, 19 Jan 1999, Stephen C. Tweedie wrote:

> Hi,
> 
> On Sat, 16 Jan 1999 14:22:10 +0100 (CET), Andrea Arcangeli
> <andrea@e-mind.com> said:
> 
> > Setting a high limit on the cache when we are low on memory is easily doable.
> > Comments from other mm guys?
> 
> Horrible --- smells like the old problem of "oh, our VM is hopeless at
> tuning performance itself, so let's rely on magic numbers to constrain
> it to reasonable performance".  I'd much much much much rather see a VM
> which manages to work well without having to be constrained by tricks
> like that (although by all means supply extra boundary limits for use in
> special cases: just don't enable them on a default system).
> 
We have at least one other case where a memory algorithm needed to be
tuned for smaller memory: the "target free space percent", which had
to be larger on small-memory machines. There could be a similar effect
in cache handling. No problem on larger machines, but a big problem
on small-memory machines.

John Alvord


* Re: VM20 behavior on a 486DX/66Mhz with 16mb of RAM
  1999-01-19 20:09     ` John Alvord
@ 1999-01-19 20:35       ` Andrea Arcangeli
  1999-01-21 14:47         ` Stephen C. Tweedie
  0 siblings, 1 reply; 8+ messages in thread
From: Andrea Arcangeli @ 1999-01-19 20:35 UTC
  To: Stephen C. Tweedie
  Cc: John Alvord, Nimrod Zimerman, Linux Kernel mailing list, linux-mm

This was written by Stephen:

> > Horrible --- smells like the old problem of "oh, our VM is hopeless at
> > tuning performance itself, so let's rely on magic numbers to constrain
> > it to reasonable performance".  I'd much much much much rather see a VM

My point is that, to do something useful and safe, the algorithm needs an
objective to reach. The algorithm needs to know what it has to do. I teach
the algorithm what to do, nothing more.

Swapping out when shrink_mmap() fails means nothing: you don't know what
will happen to the memory levels. This is the reason it works worse than
my way (and why it slows machines down after a few days).

And btw, I don't care about `working with magic'. I care that everything
works efficiently, stably, and comfortably. A single tunable cache-percentage
level looks fine to me (everything else works by magic and works fine,
but I need at least one fixed point to teach the algorithm what to do).
You could write a gtk app that lets the sysadmin move the _only_ cache
percentage level up and down. I dropped all the other bogus percentage
levels, so at least my code is six times less horrible than pre8 (and
sctvm) from your `must work (and mess) with magic' point of view.

If I am missing something (again ;) comments are always welcome.

Andrea Arcangeli


* Re: VM20 behavior on a 486DX/66Mhz with 16mb of RAM
  1999-01-19 20:35       ` Andrea Arcangeli
@ 1999-01-21 14:47         ` Stephen C. Tweedie
  1999-01-21 19:32           ` Andrea Arcangeli
  0 siblings, 1 reply; 8+ messages in thread
From: Stephen C. Tweedie @ 1999-01-21 14:47 UTC
  To: Andrea Arcangeli
  Cc: Stephen C. Tweedie, John Alvord, Nimrod Zimerman,
	Linux Kernel mailing list, linux-mm

Hi,

On Tue, 19 Jan 1999 21:35:33 +0100 (CET), Andrea Arcangeli
<andrea@e-mind.com> said:

> My point is that, to do something useful and safe, the algorithm needs an
> objective to reach. The algorithm needs to know what it has to do. I teach
> the algorithm what to do, nothing more.

No.  The algorithm should react to the current *load*, not to what it
thinks the ideal parameters should be.  There are specific things you
can do to the VM which completely invalidate any single set of cache
figures.  For example, you can create large ramdisks which effectively
lock large amounts of memory into the buffer cache, and there's nothing
you can do about that.  If you rely on magic numbers to get the
balancing right, then performance simply disappears when you do
something unexpected like that.

This is not supposition.  This is the observed performance of VMs which
think they know how much memory should be allocated for different
purposes.  You cannot say that cache should be larger than or smaller
than a particular value, because only the current load can tell you how
big the cache should be and that load can vary over time.

> I dropped all the other bogus percentage levels, so at least my code is
> six times less horrible than pre8 (and sctvm) from your `must work
> (and mess) with magic' point of view.

sctvm used figures of (I think) 1% and 100% for the minimum and maximum
buffer/cache values.  In other words, the mechanism was there to let the
user set limits, but it wasn't used by default.

> If I am missing something (again ;) comments are always welcome.

Yes.  Offer the functionality of VM limits, sure.  Relying on it is a
disaster if the user does something you didn't predict.

--Stephen

* Re: VM20 behavior on a 486DX/66Mhz with 16mb of RAM
  1999-01-21 14:47         ` Stephen C. Tweedie
@ 1999-01-21 19:32           ` Andrea Arcangeli
  1999-01-23 19:16             ` Stephen C. Tweedie
  0 siblings, 1 reply; 8+ messages in thread
From: Andrea Arcangeli @ 1999-01-21 19:32 UTC
  To: Stephen C. Tweedie
  Cc: John Alvord, Nimrod Zimerman, Linux Kernel mailing list, linux-mm,
	Linus Torvalds

On Thu, 21 Jan 1999, Stephen C. Tweedie wrote:

> No.  The algorithm should react to the current *load*, not to what it
> thinks the ideal parameters should be.  There are specific things you

Obviously, when the system has a lot of freeable memory available there are
no constraints. When instead the system is very low on memory, you have to
choose what to do.

Two choices:

1. You want to give most of the available memory to the process that is
   thrashing the VM; in this case you leave the balance percentage of
   freeable pages low.

2. You keep the number of freeable pages higher; this way other
   interactive processes will run smoothly even with the thrashing
   program in the background.

Which percentage of freeable-page balance you want at any given time can't
be known by the algorithm. 5% of freeable pages always works well here, but
you may want 30% of freeable pages (note, though, that too many pages in
the swap cache are a real risk for the __ugly__ O(n) search we currently
do in the cache, so raising the freeable percentage too much could
theoretically decrease performance (and obviously increase the swap space
really available for not-in-RAM pages)).

> can do to the VM which completely invalidate any single set of cache
> figures.  For example, you can create large ramdisks which effectively
> lock large amounts of memory into the buffer cache, and there's nothing
> you can do about that.  If you rely on magic numbers to get the
> balancing right, then performance simply disappears when you do
> something unexpected like that.

My current VM (not yet diffed and released, due to no time to play with
Linux today because of off-topic lessons at University) tries to stay close
to a balance percentage (5%) of freeable pages. Note: by freeable pages I
mean pages in the file cache (swapper_inode included) with a reference
count of 1, really shrinkable (does "shrunkable" exist? ;) by
shrink_mmap(). I implemented two new functions, page_get() and page_put()
(and hacked free_pages and friends), to keep nr_freeable_pages up to date.
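A rough sketch of that bookkeeping, assuming a simplified page structure:
the names page_get(), page_put(), and nr_freeable_pages come from the mail,
but the struct layout, the in_cache flag, and the plain-int counts are
invented for illustration (a real kernel uses atomic counts, and freeing at
count 0 is left to free_pages() as the mail says):

```c
/* Sketch: a page sitting in the file/swap cache with a reference count
 * of 1 is "freeable" (shrink_mmap() could reclaim it), so the global
 * counter moves whenever a get/put crosses the 1 <-> 2 boundary. */

static unsigned long nr_freeable_pages;

struct page {
    int count;      /* reference count (plain int for the sketch) */
    int in_cache;   /* page lives in the file/swap cache */
};

static void page_get(struct page *p)
{
    if (p->in_cache && p->count == 1)
        nr_freeable_pages--;    /* second user: no longer freeable */
    p->count++;
}

static void page_put(struct page *p)
{
    p->count--;
    if (p->in_cache && p->count == 1)
        nr_freeable_pages++;    /* back to a single (cache) reference */
}
```

The point of the counter is that the balancing code can compare
nr_freeable_pages against the 5% target directly, instead of guessing from
the raw cache size.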

> This is not supposition.  This is the observed performance of VMs which
> think they know how much memory should be allocated for different
> purposes.  You cannot say that cache should be larger than or smaller
> than a particular value, because only the current load can tell you how
> big the cache should be and that load can vary over time.

I know that already (I just hadn't noticed, because the old code was quite
good). The reason I can't trust the cache size is that some parts of the
cache are not freeable, and in fact I just moved my VM to check the
percentage of _freeable_ pages. The algorithm tries to get close to that
percentage because it knows it's reasonable, but it works fine even if it
can't reach that value. If you don't try to go in a reasonable direction,
you risk swapping out even if there are tons of freeable pages in the swap
cache (maybe because the pages are not distributed equally across the mmap,
so shrink_mmap() expires too early).

The current VM balance is based on (num_physpages << 1) / (priority+1),
and I find this bogus. In my current VM it changes really nothing whether
you use a starting prio of 6 or of 1. Sure, starting from 1 is less
responsive, but the vmstat numbers are ~the same.
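Concretely, the heuristic being objected to scales the shrink_mmap() scan
effort with total RAM and the decaying priority; a minimal sketch, with a
num_physpages of 4096 (16 MB of 4 KB pages) assumed for the example:

```c
/* 2.2-era balancing heuristic: how many cache pages to scan, given the
 * machine's total page count and the current scan "priority" (which
 * decays toward 0 as memory pressure rises, enlarging the scan). */
static unsigned long scan_count(unsigned long num_physpages, int priority)
{
    return (num_physpages << 1) / (priority + 1);
}
```

With 4096 physical pages, a starting priority of 6 scans 1170 pages per
pass while priority 1 scans 4096, which is the gap reported above as making
no measurable difference in vmstat.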

> > If I am missing something (again ;) comments are always welcome.
> 
> Yes.  Offer the functionality of VM limits, sure.  Relying on it is a
> disaster if the user does something you didn't predict.

Do you still think this even though I am trying to balance the number of
_freeable_ pages? Note, I never studied memory management. Everything I do
comes from instinct, so I can be wrong of course... but I am telling you
what I think right now.

BTW, do you remember the benchmark I am using, which dirties 160 Mbyte in
a loop and tells me how many seconds each loop takes?

Well, it was taking 100 sec per loop in pre1; it started taking 50 sec once
I killed kswapd and started doing async swapout from process context too;
and with my current experimental code (not yet released) it runs in 20 sec
per loop (the best ever seen here ;). I don't know if my new experimental
code (with the new nr_freeable_pages) is good in all respects, but it sure
gives a big boost, and it's real fun (at least Linux is fun; University is
going worse and worse).
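The benchmark described can be sketched as a loop that touches one byte per
page of a large buffer and times each pass; the 4 KB page step and the
helper name here are assumptions reconstructed from the description, not
the actual test program:

```c
#include <stddef.h>

#define PAGE_SIZE 4096UL

/* Touch one byte in every page of buf so each page becomes dirty and,
 * once the working set exceeds RAM, forces the VM to swap.  Returns
 * the number of pages written, so a caller can verify coverage. */
static size_t dirty_pages(char *buf, size_t size, char val)
{
    size_t touched = 0;
    for (size_t off = 0; off < size; off += PAGE_SIZE) {
        buf[off] = val;
        touched++;
    }
    return touched;
}
```

A driver would allocate 160 MB, call dirty_pages() on it in a loop, and
print the wall-time delta per pass, which is what the 100/50/20-second
figures above are measuring.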

Andrea Arcangeli


* Re: VM20 behavior on a 486DX/66Mhz with 16mb of RAM
  1999-01-21 19:32           ` Andrea Arcangeli
@ 1999-01-23 19:16             ` Stephen C. Tweedie
  0 siblings, 0 replies; 8+ messages in thread
From: Stephen C. Tweedie @ 1999-01-23 19:16 UTC
  To: Andrea Arcangeli
  Cc: Stephen C. Tweedie, John Alvord, Nimrod Zimerman,
	Linux Kernel mailing list, linux-mm, Linus Torvalds

Hi,

On Thu, 21 Jan 1999 20:32:32 +0100 (CET), Andrea Arcangeli
<andrea@e-mind.com> said:

> On Thu, 21 Jan 1999, Stephen C. Tweedie wrote:
>> No.  The algorithm should react to the current *load*, not to what it
>> thinks the ideal parameters should be.  There are specific things you

> Obviously, when the system has a lot of freeable memory available there
> are no constraints. When instead the system is very low on memory, you
> have to choose what to do.

> Two choices:

> 1. You want to give most of the available memory to the process that is
>    thrashing the VM; in this case you leave the balance percentage of
>    freeable pages low.

> 2. You keep the number of freeable pages higher; this way other
>    interactive processes will run smoothly even with the thrashing
>    program in the background.

Note that if you have a thrashing process, then by far the most
important factor to tune is the aggressiveness with which that process
charges through new pages.  It doesn't matter how many pages you try to
keep free: if you have any process which is trying to gobble them all,
then it is far more important to throttle the rate at which they can do
so than to have any hard and fast limits on freeable pages.  Otherwise,
you just end up freeing lots of pages for the thrashing task(s) to
reclaim them straight back.
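A toy illustration of throttling driven by load rather than by fixed
limits (the function, its linear policy, and all constants are invented
for the example, not anything from the kernels under discussion):

```c
/* Sketch: a task that leans heavily on page reclaim accumulates a
 * delay before its next allocation, so it cannot gobble freed pages
 * straight back; light allocators pay nothing.  The cap keeps
 * interactive tasks from ever stalling badly. */
static unsigned int reclaim_delay_ms(unsigned int pages_reclaimed_for_task)
{
    unsigned int delay = pages_reclaimed_for_task / 32;  /* linear policy */
    return delay > 100 ? 100 : delay;                    /* hard cap */
}
```

The delay scales with observed behavior, so a thrashing task throttles
itself while the rest of the system sees no tunable at all.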

This is what I mean by being tuned by the load, not by predetermined
limits.

--Stephen
--
To unsubscribe, send a message with 'unsubscribe linux-mm my@address' in
the body to majordomo@kvack.org.
For more info on Linux MM, see: http://humbolt.geo.uu.nl/Linux-MM/

