Linux-mm Archive mirror
* memory use in Linux
@ 1998-08-21  5:37 Lonnie Nunweiler
  1998-08-21  6:20 ` Adam Fritzler
                   ` (2 more replies)
  0 siblings, 3 replies; 4+ messages in thread
From: Lonnie Nunweiler @ 1998-08-21  5:37 UTC
  To: linux-mm

I am researching why Linux runs into memory problems.  We recently had to
convert our dialin server, email and web server to NT, because the Linux
machine would eventually eat up all ram, and then crash.  We were using
128MB machines, and it would take about 3 days before rebooting was
required.  If we didn't reboot soon enough, it was a very messy job
rebuilding some of the chewed files.

I have encountered the saying "free memory is wasted memory", and it got me
thinking.  I believe that statement is completely wrong, and is responsible
for the current problems that Linux is having for systems that keep running
(servers) as opposed to systems that get shut down nightly.

If we were to treat memory as money, we would not think that money sitting
idly in the bank is wasted.  I've been there, with no reserves, and it is
not fun.  If too much is sitting idle, it might not be best, but let it
sit.  It is ready in an instant should we need it.  If it is not there when
we need it, we scramble, and sometimes get embarrassed.

I think the memory manager should place limits on caching, so as to leave a
specified amount of free ram.

From what I have observed, processes will eventually use up all available
ram, and get into swapping.  Imagine having a buddy or partner that was
just following you around to get any money you earned, and immediately
spent it.  Eventually important things would be delayed until you could get
enough put aside to cover them.....only problem, that buddy is grabbing
anything you put away, and spending it.  You try as hard as you wish, but,
no way can you get ahead.  Then total disaster strikes.  Your partner has
gotten hold of a credit card.  At this point you can forget about ever
having anything to spare.  Time to reboot.

It's silly to have a 64M machine, running only a primary DNS task, and
having it slowly get its memory chewed up, and then get into swapping.
When it crashes due to no available memory, what was gained in a few
milliseconds faster disk access because of caching?

Is it possible to configure Linux to limit the performance speeder-uppers
to leave a specified chunk of ram available?  Do you think this would help
with reliability?  Can anyone tell me how to do it?

Thanks
Lonnie Nunweiler, President
WebWorld Warehouse Ltd.
1255 - 5 th Ave.
PO Box 1030
Valemount, BC.  V0E 2Z0

www.valemount.com
www.webworldwarehouse.com

lonnie@valemount.com
lonnie@vis.bc.ca
Voice: (250) 566-4698  Fax: (250) 566-9835

--
This is a majordomo managed list.  To unsubscribe, send a message with
the body 'unsubscribe linux-mm me@address' to: majordomo@kvack.org


* Re: memory use in Linux
  1998-08-21  5:37 memory use in Linux Lonnie Nunweiler
@ 1998-08-21  6:20 ` Adam Fritzler
  1998-08-21 23:48 ` Eric W. Biederman
  1998-08-24 10:36 ` Stephen C. Tweedie
  2 siblings, 0 replies; 4+ messages in thread
From: Adam Fritzler @ 1998-08-21  6:20 UTC
  To: Lonnie Nunweiler; +Cc: linux-mm


I'm nowhere near an expert, but "free memory is wasted memory" has
been the policy for far longer than Linux has existed, and AFAIK for
most if not all UNIX variants.

That your machine didn't stay up for more than 3 days does not sound
like a memory management issue in the kernel.  I would guess you had a
leaky userspace process.  Although some kernels have memory leaks, I
don't remember any of them being quite that bad.  Also, I don't
understand the relationship between running out of memory and corrupt
files.

Many, many people (including myself) have Linux machines running the same
services as you describe in far, far less RAM and have uptimes of many
months, and absolutely no corrupt files.

I'll leave the scrambling-to-free-caches issue to the more
knowledgeable, as I see your justification for leaving memory free as
more of an argument for the current policy than anything else.  When
you put money in the bank, do you not get paid interest?  This is like
using a checking account instead of a savings account for idle money!

af

On Thu, 20 Aug 1998, Lonnie Nunweiler wrote:

> I am researching why Linux runs into memory problems.  We recently had to
> convert our dialin server, email and web server to NT, because the Linux
> machine would eventually eat up all ram, and then crash.  We were using
> 128MB machines, and it would take about 3 days before rebooting was
> required.  If we didn't reboot soon enough, it was a very messy job
> rebuilding some of the chewed files.
> 
> I have encountered the saying "free memory is wasted memory", and it got me
> thinking.  I believe that statement is completely wrong, and is responsible
> for the current problems that Linux is having for systems that keep running
> (servers) as opposed to systems that get shut down nightly.
> 
> If we were to treat memory as money, we would not think that money sitting
> idly in the bank is wasted.  I've been there, with no reserves, and it is
> not fun.  If too much is sitting idle, it might not be best, but let it
> sit.  It is ready in an instant should we need it.  If it is not there when
> we need it, we scramble, and sometimes get embarrassed.
> 
> I think the memory manager should place limits on caching, so as to leave a
> specified amount of free ram.
> 
> From what I have observed, processes will eventually use up all available
> ram, and get into swapping.  Imagine having a buddy or partner that was
> just following you around to get any money you earned, and immediately
> spent it.  Eventually important things would be delayed until you could get
> enough put aside to cover them.....only problem, that buddy is grabbing
> anything you put away, and spending it.  You try as hard as you wish, but,
> no way can you get ahead.  Then total disaster strikes.  Your partner has
> gotten hold of a credit card.  At this point you can forget about ever
> having anything to spare.  Time to reboot.
> 
> It's silly to have a 64M machine, running only a primary DNS task, and
> having it slowly get its memory chewed up, and then get into swapping.
> When it crashes due to no available memory, what was gained in a few
> milliseconds faster disk access because of caching?
> 
> Is it possible to configure Linux to limit the performance speeder-uppers
> to leave a specified chunk of ram available?  Do you think this would help
> with reliability?  Can anyone tell me how to do it?
> 
> Thanks
> Lonnie Nunweiler, President
> WebWorld Warehouse Ltd.
> 1255 - 5 th Ave.
> PO Box 1030
> Valemount, BC.  V0E 2Z0
> 
> www.valemount.com
> www.webworldwarehouse.com
> 
> lonnie@valemount.com
> lonnie@vis.bc.ca
> Voice: (250) 566-4698  Fax: (250) 566-9835
> 
> --
> This is a majordomo managed list.  To unsubscribe, send a message with
> the body 'unsubscribe linux-mm me@address' to: majordomo@kvack.org
> 




-------------------------------------------------------------------------------
Adam Fritzler                           |
  afritz@delphid.ml.org                 | Animals who are not penguins can   
    afritz@iname.com                    |    can only wish they were.
      http://delphid.ml.org/~afritz/    |        -- Chicago Reader 
        http://www.pst.com/             |               15 Oct 1982
-------------------------------------------------------------------------------



* Re: memory use in Linux
  1998-08-21  5:37 memory use in Linux Lonnie Nunweiler
  1998-08-21  6:20 ` Adam Fritzler
@ 1998-08-21 23:48 ` Eric W. Biederman
  1998-08-24 10:36 ` Stephen C. Tweedie
  2 siblings, 0 replies; 4+ messages in thread
From: Eric W. Biederman @ 1998-08-21 23:48 UTC
  To: lonnie; +Cc: linux-mm

>>>>> "LN" == Lonnie Nunweiler <lonnie@valemount.com> writes:

LN> I am researching why Linux runs into memory problems.  We recently had to
LN> convert our dialin server, email and web server to NT, because the Linux
LN> machine would eventually eat up all ram, and then crash.  We were using
LN> 128MB machines, and it would take about 3 days before rebooting was
LN> required.  If we didn't reboot soon enough, it was a very messy job
LN> rebuilding some of the chewed files.

Instead of arguing in generalities, probably the best place to start is
to ask why Linux ran into problems in your case.

Which kernel were you running?
What were the specifics that killed your machine?

Did it look like a kernel memory leak, where more and more memory is
eaten and your machine begins to swap harder and harder until death?

Or was it a user space program that leaked memory, and Linux wasn't
able to cope with running out of swap?
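One way to tell those two cases apart (a sketch, not from the original
mail; it assumes a Linux /proc filesystem and modern procps syntax
rather than the 1998 tools) is to log memory figures over time and see
which number is actually growing:

```shell
#!/bin/sh
# Sample memory figures a few times so a slow leak shows up in the log.
# A kernel leak shows MemFree + Cached shrinking with no matching growth
# in any process's RSS; a userspace leak shows one process's RSS climbing
# instead.  The interval and sample count are purely illustrative.
for sample in 1 2 3; do
    date
    grep -E '^(MemTotal|MemFree|Cached|SwapFree):' /proc/meminfo
    ps aux --sort=-rss | head -n 4    # header plus three biggest processes
    sleep 1
done
```

In practice one would run something like this from cron and compare the
trend across hours, not a single snapshot.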

The cache, in general, is designed so that anything it holds may be
reclaimed when needed.

I think you are barking up the wrong tree, so please take it slow
until the real culprit can be found.

Eric


* Re: memory use in Linux
  1998-08-21  5:37 memory use in Linux Lonnie Nunweiler
  1998-08-21  6:20 ` Adam Fritzler
  1998-08-21 23:48 ` Eric W. Biederman
@ 1998-08-24 10:36 ` Stephen C. Tweedie
  2 siblings, 0 replies; 4+ messages in thread
From: Stephen C. Tweedie @ 1998-08-24 10:36 UTC
  To: lonnie; +Cc: linux-mm, Stephen Tweedie

Hi,

On Thu, 20 Aug 1998 22:37:33 -0700, Lonnie Nunweiler
<lonnie@valemount.com> said:

> I am researching why Linux runs into memory problems.  We recently had
> to convert our dialin server, email and web server to NT, because the
> Linux machine would eventually eat up all ram, and then crash.  We
> were using 128MB machines, and it would take about 3 days before
> rebooting was required.  If we didn't reboot soon enough, it was a
> very messy job rebuilding some of the chewed files.

> I have encountered the saying "free memory is wasted memory", and it
> got me thinking.  I believe that statement is completely wrong, and is
> responsible for the current problems that Linux is having for systems
> that keep running (servers) as opposed to systems that get shut down
> nightly.

No --- the statement "free memory is wasted memory" simply implies that
if we have otherwise unused memory available, we should not be afraid to
temporarily use it for cache.  Linux does not keep such memory
indefinitely, but should release it on demand.

Linux does keep a pool of completely free pages available at all times
to cope with short-term memory requirements.  As soon as we start eating
into this, the kernel will preemptively start releasing cache pages.

> From what I have observed, processes will eventually use up all
> available ram, and get into swapping.  

Nasty --- you should find out which process is hogging all of the
memory.  If you have an 80MB process on a 64MB machine, it is simply not
going to be happy.

There _is_ a known problem in that Linux is sometimes too reluctant to
kill a process which runs away with memory, but at that point there is
really no alternative to killing the process anyway: if you have let it
get that far, then there is a problem in some application on the system.

> It's silly to have a 64M machine, running only a primary DNS task, and
> having it slowly get its memory chewed up, and then get into swapping.
> When it crashes due to no available memory, what was gained in a few
> milliseconds faster disk access because of caching?

The system will not swap significantly if there is cache to be
reclaimed: the kernel is much more eager to reclaim cache than to swap.

The real question is: where is your memory being used?  Simply
assuming it is cache is not necessarily valid.  "ps max" will show the
memory consumption of all processes, and "free" will show the current
cache size (which includes ALL mapped library and program pages, so
don't be worried if it's a few MB in size: that memory is likely to be
actively in use, and reflected in the "shared" size).
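Run concretely, those checks look something like this (the `--sort`
flag is modern procps syntax, an assumption on my part; the "ps max"
form quoted above is the BSD-style syntax of the day):

```shell
# Per-process memory, largest resident set first
ps aux --sort=-rss | head -n 6

# Overall picture: "buffers" and "cached" are reclaimable on demand, so
# the interesting figure is free + buffers + cached, not "free" alone
free

# The same counters, straight from the kernel
grep -E '^(MemTotal|MemFree|Buffers|Cached):' /proc/meminfo
```

If the top of the `ps` listing shows one process with an ever-growing
RSS, that process, not the cache, is where the memory went.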

For what it's worth, named has a known problem (in all versions, on all
systems) that the DNS server can grow arbitrarily large.  Most sysadmins
I know at large sites restart DNS nightly to cope with this.  The latest
versions may be better, but it's still something to be aware of. 
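The nightly restart is typically done from cron.  A hypothetical
crontab entry for a BIND 8 box of that era might read as follows (the
ndc path and command are assumptions; adjust for your system):

```crontab
# Restart named at 03:00 every night to cap its memory growth
0 3 * * * /usr/sbin/ndc restart
```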

Finally, what version of the kernel are you running?  There are known
memory leaks in many versions.  In particular, version 2.0.30 had a
fairly bad worst-case-behaviour bug in leaving too large a cache
around.  That bug was present in only one version of the kernel and has
been fixed for over a year, so if you are running 2.0.30, that's almost
certainly part of the problem and you _really_ want to upgrade.

--Stephen

