All the mail mirrored from lore.kernel.org
* #define HZ 1024 -- negative effects?
@ 2001-04-24 23:20 Michael Rothwell
  2001-04-25 22:40 ` Nigel Gamble
  2001-04-29 21:44 ` Jim Gettys
  0 siblings, 2 replies; 18+ messages in thread
From: Michael Rothwell @ 2001-04-24 23:20 UTC (permalink / raw)
  To: linux-kernel

Are there any negative effects of editing include/asm/param.h to change 
HZ from 100 to 1024? Or any other number? This has been suggested as a 
way to improve the responsiveness of the GUI on a Linux system. Does it 
throw off anything else, like serial port timing, etc.?


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: #define HZ 1024 -- negative effects?
  2001-04-24 23:20 Michael Rothwell
@ 2001-04-25 22:40 ` Nigel Gamble
  2001-04-29 21:44 ` Jim Gettys
  1 sibling, 0 replies; 18+ messages in thread
From: Nigel Gamble @ 2001-04-25 22:40 UTC (permalink / raw)
  To: Michael Rothwell; +Cc: linux-kernel

On Tue, 24 Apr 2001, Michael Rothwell wrote:
> Are there any negative effects of editing include/asm/param.h to change 
> HZ from 100 to 1024? Or any other number? This has been suggested as a 
> way to improve the responsiveness of the GUI on a Linux system. Does it 
> throw off anything else, like serial port timing, etc.?

Why not just run the X server at a realtime priority?  Then it will get
to respond to existing events, such as keyboard and mouse input,
promptly without creating lots of superfluous extra clock interrupts.
I think you will find this is a better solution.

Nigel Gamble                                    nigel@nrg.org
Mountain View, CA, USA.                         http://www.nrg.org/

MontaVista Software                             nigel@mvista.com


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: #define HZ 1024 -- negative effects?
       [not found] <fa.gh4u8sv.17i1q6@ifi.uio.no>
@ 2001-04-26  2:02 ` Dan Maas
  2001-04-26  2:30   ` Werner Puschitz
                     ` (2 more replies)
  0 siblings, 3 replies; 18+ messages in thread
From: Dan Maas @ 2001-04-26  2:02 UTC (permalink / raw)
  To: Michael Rothwell; +Cc: linux-kernel

> Are there any negative effects of editing include/asm/param.h to change
> HZ from 100 to 1024? Or any other number? This has been suggested as a
> way to improve the responsiveness of the GUI on a Linux system.

I have also played around with HZ=1024 and wondered how it affects
interactivity. I don't quite understand why it could help - one thing I've
learned looking at kernel traces (LTT) is that interactive processes very,
very rarely eat up their whole timeslice (even hogs like X). So more
frequent timer interrupts shouldn't have much of an effect...

If you are burning CPU doing stuff like long compiles, then the increased HZ
might make the system appear more responsive because the CPU hog gets
pre-empted more often. However, you could get the same result just by
running the task 'nice'ly...

The only other possibility I can think of is a scheduler anomaly. A thread
arose on this list recently about strange scheduling behavior of processes
using local IPC - even though one process had readable data pending, the
kernel would still go idle until the next timer interrupt. If this is the
case, then HZ=1024 would kick the system back into action more quickly...

Of course, the appearance of better interactivity could just be a placebo
effect. Double-blind trials, anyone? =)

Regards,
Dan


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: #define HZ 1024 -- negative effects?
  2001-04-26  2:02 ` #define HZ 1024 -- negative effects? Dan Maas
@ 2001-04-26  2:30   ` Werner Puschitz
  2001-04-26  3:51   ` Mike Galbraith
  2001-04-28  8:23   ` Guus Sliepen
  2 siblings, 0 replies; 18+ messages in thread
From: Werner Puschitz @ 2001-04-26  2:30 UTC (permalink / raw)
  To: Dan Maas; +Cc: Michael Rothwell, linux-kernel

On Wed, 25 Apr 2001, Dan Maas wrote:

> > Are there any negative effects of editing include/asm/param.h to change
> > HZ from 100 to 1024? Or any other number? This has been suggested as a
> > way to improve the responsiveness of the GUI on a Linux system.
>
> I have also played around with HZ=1024 and wondered how it affects
> interactivity. I don't quite understand why it could help - one thing I've
> learned looking at kernel traces (LTT) is that interactive processes very,
> very rarely eat up their whole timeslice (even hogs like X). So more
> frequent timer interrupts shouldn't have much of an effect...
>
> If you are burning CPU doing stuff like long compiles, then the increased HZ
> might make the system appear more responsive because the CPU hog gets
> pre-empted more often. However, you could get the same result just by
> running the task 'nice'ly...

A tradeoff of gaining better system responsiveness by having the kernel
check more often whether a running process should be preempted is that
the CPU spends more time in Kernel Mode and less time in User Mode.
As a consequence, user programs run slower.

Regards,
Werner




^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: #define HZ 1024 -- negative effects?
  2001-04-26  2:02 ` #define HZ 1024 -- negative effects? Dan Maas
  2001-04-26  2:30   ` Werner Puschitz
@ 2001-04-26  3:51   ` Mike Galbraith
  2001-04-28  8:23   ` Guus Sliepen
  2 siblings, 0 replies; 18+ messages in thread
From: Mike Galbraith @ 2001-04-26  3:51 UTC (permalink / raw)
  To: Dan Maas; +Cc: Michael Rothwell, linux-kernel

On Wed, 25 Apr 2001, Dan Maas wrote:

> The only other possibility I can think of is a scheduler anomaly. A thread
> arose on this list recently about strange scheduling behavior of processes
> using local IPC - even though one process had readable data pending, the
> kernel would still go idle until the next timer interrupt. If this is the
> case, then HZ=1024 would kick the system back into action more quickly...

Hmm.  I've caught tasks looping here (experimental tree but..) with
interrupts enabled, but schedule never being called despite having
many runnable tasks.

	-Mike


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: #define HZ 1024 -- negative effects?
@ 2001-04-26 18:19 Adam J. Richter
  2001-04-26 18:31 ` Rik van Riel
  0 siblings, 1 reply; 18+ messages in thread
From: Adam J. Richter @ 2001-04-26 18:19 UTC (permalink / raw)
  To: linux-kernel

	I have not tried it, but I would think that setting HZ to 1024
should make a big improvement in responsiveness.

	Currently, the time slice allocated to a standard Linux process
is about 5 ticks (5/HZ seconds), or 50ms when HZ is 100.  That means
that you will notice keystrokes being echoed slowly in X when you have
just one or two running processes, no matter how fast your CPU is,
assuming these processes do not complete in that time.  Setting HZ to
1024 should improve that a lot, and the cost of the extra context
switches should still be quite small in comparison to time slice length
(a 1ms time slice = 1 million cycles on a 1GHz processor, or a maximum
of 532kB of memory bus utilization on a PC-133 bus that transfers 8
bytes on average every two cycles, based on 5-1-1-1 memory timing).

	I would think this would be particularly noticeable for internet
service providers that offer shell accounts or VNC accounts (like
WorkSpot and LastFoot).

	A few other approaches to consider if one is feeling
more ambitious are:
	1. Make the time slice size scale to the number of
	   currently runnable processes (more precisely, threads)
	   divided by number of CPU's.  I posted something about this
	   a week or two ago.  This way, responsiveness is maintained,
	   but people who are worried about the extra context switch
	   and caching effects can rest assured that these shorter time
	   slices would only happen when responsiveness would otherwise
	   be bad.
	2. Like #1, but only shrink the time slices when at least
	   one of the runnable processes is running at regular or high
	   CPU priority.
	3. Have the current process give up the CPU as soon as another
	   process awaiting the CPU has a higher current->count value.
	   That would increase the number of context switches like
	   increasing HZ by 5X (with basically the same trade-offs),
	   but without increasing the number of timer interrupts.
	   By itself, this is probably not worth the complexity.
	4. Similar to #3, but only switch on current->count!=0 when
	   another process has just become unblocked.
	5. I haven't looked at the code closely enough yet, but I tend
	   to wonder about the usefulness of having "ticks" at all when
	   you have a real time clock; you could avoid unnecessary
	   "tick" interrupts by just accounting based on microseconds
	   or something.  I understand there may be issues of inaccuracy
	   due to not knowing exactly where you are in the current RTC
	   tick, and the cost of the unnecessary tick interrupts is
	   probably pretty small.  I mention this just for completeness.

Adam J. Richter     __     ______________   4880 Stevens Creek Blvd, Suite 104
adam@yggdrasil.com     \ /                  San Jose, California 95129-1034
+1 408 261-6630         | g g d r a s i l   United States of America
fax +1 408 261-6631      "Free Software For The Rest Of Us."

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: #define HZ 1024 -- negative effects?
  2001-04-26 18:19 Adam J. Richter
@ 2001-04-26 18:31 ` Rik van Riel
  2001-04-26 20:24   ` Dan Mann
  0 siblings, 1 reply; 18+ messages in thread
From: Rik van Riel @ 2001-04-26 18:31 UTC (permalink / raw)
  To: Adam J. Richter; +Cc: linux-kernel

On Thu, 26 Apr 2001, Adam J. Richter wrote:

> 	I have not tried it, but I would think that setting HZ to 1024
> should make a big improvement in responsiveness.
>
> 	Currently, the time slice allocated to a standard Linux
> process is about 5 ticks (5/HZ seconds), or 50ms when HZ is 100.
> That means that you will notice keystrokes being echoed slowly
> in X when you have just one or two running processes,

Rubbish.  Whenever a higher-priority thread than the current
thread becomes runnable, the current thread will get preempted,
regardless of whether its timeslice is over or not.

And please, DO try things before proposing a radical change
to the kernel ;)

regards,

Rik
--
Linux MM bugzilla: http://linux-mm.org/bugzilla.shtml

Virtual memory is like a game you can't win;
However, without VM there's truly nothing to lose...

		http://www.surriel.com/
http://www.conectiva.com/	http://distro.conectiva.com/


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: #define HZ 1024 -- negative effects?
  2001-04-26 18:31 ` Rik van Riel
@ 2001-04-26 20:24   ` Dan Mann
  2001-04-27 10:04     ` Mike Galbraith
  0 siblings, 1 reply; 18+ messages in thread
From: Dan Mann @ 2001-04-26 20:24 UTC (permalink / raw)
  To: Rik van Riel; +Cc: linux-kernel

So, the kernel really doesn't have much of an effect on the interactivity of
the gui?  I really don't think there is a problem right now at the
console... but I am curious to improve it at the gui level.  Does it have
anything to do with the way the mouse is handled?  I've applied the mvista
preemptive + low latency patch, and my subjective experience is that it
"feels" the same.  I'd just like to help and I'll patch the hell out of my
kernel if you need someone to test it.  I don't really care if my hard
drive catches on fire as long as it doesn't burn my house down :-)

Dan

----- Original Message -----
From: "Rik van Riel" <riel@conectiva.com.br>
To: "Adam J. Richter" <adam@yggdrasil.com>
Cc: <linux-kernel@vger.kernel.org>
Sent: Thursday, April 26, 2001 2:31 PM
Subject: Re: #define HZ 1024 -- negative effects?


> On Thu, 26 Apr 2001, Adam J. Richter wrote:
>
> > I have not tried it, but I would think that setting HZ to 1024
> > should make a big improvement in responsiveness.
> >
> > Currently, the time slice allocated to a standard Linux
> > process is about 5 ticks (5/HZ seconds), or 50ms when HZ is
> > 100.  That means that you will notice keystrokes being echoed
> > slowly in X when you have just one or two running processes,
>
> Rubbish.  Whenever a higher-priority thread than the current
> thread becomes runnable, the current thread will get preempted,
> regardless of whether its timeslice is over or not.
>
> And please, DO try things before proposing a radical change
> to the kernel ;)
>
> regards,
>
> Rik
> --
> Linux MM bugzilla: http://linux-mm.org/bugzilla.shtml
>
> Virtual memory is like a game you can't win;
> However, without VM there's truly nothing to lose...
>
> http://www.surriel.com/
> http://www.conectiva.com/ http://distro.conectiva.com/
>
> -
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/
>

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: #define HZ 1024 -- negative effects?
  2001-04-26 20:24   ` Dan Mann
@ 2001-04-27 10:04     ` Mike Galbraith
  2001-04-27 15:06       ` Dan Mann
  2001-04-27 19:26       ` Nigel Gamble
  0 siblings, 2 replies; 18+ messages in thread
From: Mike Galbraith @ 2001-04-27 10:04 UTC (permalink / raw)
  To: linux-kernel

> > I have not tried it, but I would think that setting HZ to 1024
> > should make a big improvement in responsiveness.
> >
> > Currently, the time slice allocated to a standard Linux
> > process is about 5 ticks (5/HZ seconds), or 50ms when HZ is
> > 100.  That means that you will notice keystrokes being echoed
> > slowly in X when you have just one or two running processes,
>
> Rubbish.  Whenever a higher-priority thread than the current
> thread becomes runnable, the current thread will get preempted,
> regardless of whether its timeslice is over or not.

(hmm.. no one mentioned this, and it doesn't look like anyone is
going to volunteer to be my proxy [see ionut's .sig].  oh well)

What about SCHED_YIELD and allocating during vm stress times?

Say you have only two tasks.  One is the gui and is allocating,
the other is a pure compute task.  The compute task doesn't do
anything which will cause preemption except use up its slice.
The gui may yield the cpu but the compute job never will.

(The gui won't _become_ runnable if that matters.  It's marked
as running, has yielded its remaining slice and gone to sleep..
with its eyes open;)

Since increasing HZ reduces the timeslice, the maximum amount of time
that you can yield is also decreased.  In the above case, isn't
it true that changing HZ from 100 to 1000 decreases sleep time
for the yielder from 50ms to 5ms if the compute task is at the
start of its slice when the gui yields?

It seems likely that even if you're running a normal mix of tasks,
the gui, big fat oinker that these things tend to be, will yield
much more often than the slimmer tasks it's competing with for cpu
because it's likely allocating/yielding much more often.

It follows that increasing HZ must decrease latency for the gui if
there's any vm stress.. and that's the time that gui responsiveness
complaints usually refer to.  Throughput for yielding tasks should
also increase with a larger HZ value because the number of yields
is constant (tied to the number of allocations) but the amount of
cpu time lost per yield is smaller.

Correct?

(if big fat tasks _don't_ generally allocate more than slim tasks,
my referring to ionut's .sig was most unfortunate.  i hope it's safe
to assume that you can't become that obese without eating a lot;)

 	-Mike


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: #define HZ 1024 -- negative effects?
  2001-04-27 10:04     ` Mike Galbraith
@ 2001-04-27 15:06       ` Dan Mann
  2001-04-27 19:26       ` Nigel Gamble
  1 sibling, 0 replies; 18+ messages in thread
From: Dan Mann @ 2001-04-27 15:06 UTC (permalink / raw)
  To: linux-kernel

When you change the #define HZ setting in param.h, what effect does that
have on CLOCKS_PER_SEC?  Are you really going to get a different amount
of slice time, or is there another kernel source file (timex.h) that
just puts you back anyway?


Dan
----- Original Message -----
From: "Mike Galbraith" <mikeg@wen-online.de>
To: "linux-kernel" <linux-kernel@vger.kernel.org>
Sent: Friday, April 27, 2001 6:04 AM
Subject: Re: #define HZ 1024 -- negative effects?


> > > I have not tried it, but I would think that setting HZ to 1024
> > > should make a big improvement in responsiveness.
> > >
> > > Currently, the time slice allocated to a standard Linux
> > > process is about 5 ticks (5/HZ seconds), or 50ms when HZ is
> > > 100.  That means that you will notice keystrokes being echoed
> > > slowly in X when you have just one or two running processes,
> >
> > Rubbish.  Whenever a higher-priority thread than the current
> > thread becomes runnable, the current thread will get preempted,
> > regardless of whether its timeslice is over or not.
>
> (hmm.. no one mentioned this, and it doesn't look like anyone is
> going to volunteer to be my proxy [see ionut's .sig].  oh well)
>
> What about SCHED_YIELD and allocating during vm stress times?
>
> Say you have only two tasks.  One is the gui and is allocating,
> the other is a pure compute task.  The compute task doesn't do
> anything which will cause preemption except use up its slice.
> The gui may yield the cpu but the compute job never will.
>
> (The gui won't _become_ runnable if that matters.  It's marked
> as running, has yielded its remaining slice and gone to sleep..
> with its eyes open;)
>
> Since increasing HZ reduces the timeslice, the maximum amount of time
> that you can yield is also decreased.  In the above case, isn't
> it true that changing HZ from 100 to 1000 decreases sleep time
> for the yielder from 50ms to 5ms if the compute task is at the
> start of its slice when the gui yields?
>
> It seems likely that even if you're running a normal mix of tasks,
> the gui, big fat oinker that these things tend to be, will yield
> much more often than the slimmer tasks it's competing with for cpu
> because it's likely allocating/yielding much more often.
>
> It follows that increasing HZ must decrease latency for the gui if
> there's any vm stress.. and that's the time that gui responsiveness
> complaints usually refer to.  Throughput for yielding tasks should
> also increase with a larger HZ value because the number of yields
> is constant (tied to the number of allocations) but the amount of
> cpu time lost per yield is smaller.
>
> Correct?
>
> (if big fat tasks _don't_ generally allocate more than slim tasks,
> my referring to ionut's .sig was most unfortunate.  i hope it's safe
> to assume that you can't become that obese without eating a lot;)
>
>   -Mike
>
> -
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/
>

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: #define HZ 1024 -- negative effects?
  2001-04-27 10:04     ` Mike Galbraith
  2001-04-27 15:06       ` Dan Mann
@ 2001-04-27 19:26       ` Nigel Gamble
  2001-04-27 20:28         ` Mike Galbraith
  1 sibling, 1 reply; 18+ messages in thread
From: Nigel Gamble @ 2001-04-27 19:26 UTC (permalink / raw)
  To: Mike Galbraith; +Cc: linux-kernel

On Fri, 27 Apr 2001, Mike Galbraith wrote:
> > Rubbish.  Whenever a higher-priority thread than the current
> > thread becomes runnable, the current thread will get preempted,
> > regardless of whether its timeslice is over or not.
> 
> What about SCHED_YIELD and allocating during vm stress times?
> 
> Say you have only two tasks.  One is the gui and is allocating,
> the other is a pure compute task.  The compute task doesn't do
> anything which will cause preemtion except use up it's slice.
> The gui may yield the cpu but the compute job never will.
> 
> (The gui won't _become_ runnable if that matters.  It's marked
> as running, has yielded it's remaining slice and went to sleep..
> with it's eyes open;)

A well-written GUI should not be using SCHED_YIELD.  If it is
"allocating" anything, it won't be using SCHED_YIELD or be marked
runnable; it will be blocked, waiting until the resource becomes
available.  When that happens, it will preempt the compute task (if its
priority is high enough, which is very likely - and can be assured if
it's running at a real-time priority as I suggested earlier).

Nigel Gamble                                    nigel@nrg.org
Mountain View, CA, USA.                         http://www.nrg.org/

MontaVista Software                             nigel@mvista.com


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: #define HZ 1024 -- negative effects?
  2001-04-27 19:26       ` Nigel Gamble
@ 2001-04-27 20:28         ` Mike Galbraith
  2001-04-27 23:22           ` Nigel Gamble
  0 siblings, 1 reply; 18+ messages in thread
From: Mike Galbraith @ 2001-04-27 20:28 UTC (permalink / raw)
  To: Nigel Gamble; +Cc: linux-kernel

On Fri, 27 Apr 2001, Nigel Gamble wrote:

> > What about SCHED_YIELD and allocating during vm stress times?

snip

> A well-written GUI should not be using SCHED_YIELD.  If it is

I was referring to the gui (or other tasks) allocating memory during
vm stress periods, and running into the yield in __alloc_pages()..
not a voluntary yield.

	-Mike


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: #define HZ 1024 -- negative effects?
  2001-04-27 20:28         ` Mike Galbraith
@ 2001-04-27 23:22           ` Nigel Gamble
  2001-04-28  4:57             ` Mike Galbraith
  0 siblings, 1 reply; 18+ messages in thread
From: Nigel Gamble @ 2001-04-27 23:22 UTC (permalink / raw)
  To: Mike Galbraith; +Cc: linux-kernel

On Fri, 27 Apr 2001, Mike Galbraith wrote:
> On Fri, 27 Apr 2001, Nigel Gamble wrote:
> > > What about SCHED_YIELD and allocating during vm stress times?
> 
> snip
> 
> > A well-written GUI should not be using SCHED_YIELD.  If it is
> 
> I was referring to the gui (or other tasks) allocating memory during
> vm stress periods, and running into the yield in __alloc_pages()..
> not a voluntary yield.

Oh, I see.  Well, if this were causing the problem, then running the GUI
at a real-time priority would be a better solution than increasing the
clock frequency, since SCHED_YIELD has no effect on real-time tasks
unless there are other runnable real-time tasks at the same priority.
The call to schedule() would just reschedule the real-time GUI task
itself immediately.

However, in times of vm stress it is more likely that GUI performance
problems would be caused by parts of the GUI having been paged out,
rather than by anything which could be helped by scheduling differences.

Nigel Gamble                                    nigel@nrg.org
Mountain View, CA, USA.                         http://www.nrg.org/

MontaVista Software                             nigel@mvista.com


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: #define HZ 1024 -- negative effects?
  2001-04-27 23:22           ` Nigel Gamble
@ 2001-04-28  4:57             ` Mike Galbraith
  2001-04-29  8:46               ` george anzinger
  0 siblings, 1 reply; 18+ messages in thread
From: Mike Galbraith @ 2001-04-28  4:57 UTC (permalink / raw)
  To: Nigel Gamble; +Cc: linux-kernel

On Fri, 27 Apr 2001, Nigel Gamble wrote:

> On Fri, 27 Apr 2001, Mike Galbraith wrote:
> > On Fri, 27 Apr 2001, Nigel Gamble wrote:
> > > > What about SCHED_YIELD and allocating during vm stress times?
> >
> > snip
> >
> > > A well-written GUI should not be using SCHED_YIELD.  If it is
> >
> > I was referring to the gui (or other tasks) allocating memory during
> > vm stress periods, and running into the yield in __alloc_pages()..
> > not a voluntary yield.
>
> Oh, I see.  Well, if this were causing the problem, then running the GUI
> at a real-time priority would be a better solution than increasing the
> clock frequency, since SCHED_YIELD has no effect on real-time tasks
> unless there are other runnable real-time tasks at the same priority.
> The call to schedule() would just reschedule the real-time GUI task
> itself immediately.
>
> However, in times of vm stress it is more likely that GUI performance
> problems would be caused by parts of the GUI having been paged out,
> rather than by anything which could be helped by scheduling differences.

Agreed.  I wasn't thinking about swapping, only kswapd not quite keeping
up with laundering, and then user tasks having to pick up some of the
load.  Anyway, I've been told that for most values of HZ the slice is
50ms, so my reasoning wrt HZ/SCHED_YIELD was wrong.  (Which begs the
question: why do some archs use higher HZ values?)

	-Mike


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: #define HZ 1024 -- negative effects?
  2001-04-26  2:02 ` #define HZ 1024 -- negative effects? Dan Maas
  2001-04-26  2:30   ` Werner Puschitz
  2001-04-26  3:51   ` Mike Galbraith
@ 2001-04-28  8:23   ` Guus Sliepen
  2 siblings, 0 replies; 18+ messages in thread
From: Guus Sliepen @ 2001-04-28  8:23 UTC (permalink / raw)
  To: linux-kernel


On Wed, Apr 25, 2001 at 10:02:26PM -0400, Dan Maas wrote:

> > Are there any negative effects of editing include/asm/param.h to change
> > HZ from 100 to 1024? Or any other number? This has been suggested as a
> > way to improve the responsiveness of the GUI on a Linux system.
[...]
> Of course, the appearance of better interactivity could just be a placebo
> effect. Double-blind trials, anyone? =)

I tried HZ=1024 on my i386 kernel, to check two things.  One was a timer
routine.  The performance of the timer routine depends heavily on the
granularity of the nanosleep() or select() system call.  Since those calls
always block for at least 1/HZ seconds, the timer precision indeed increased
by a factor of 10 when I changed the HZ value from 100 to 1024.

However, another thing I wanted to do was to generate profiling statistics
for freesci.  Profiling is done with 1/HZ granularity.  Any subroutine in a
program that executes in less than 1/HZ seconds cannot be profiled correctly
(for example, a routine that executes in 1 nanosecond and one that needs
1/(2*HZ) seconds both show up as taking 1 sample).

Now, you would think that profiling would be a lot better with HZ=1024.
However, the program didn't even run anymore!  The reason is that some
system calls are interrupted by SIGPROF every 1/HZ seconds, and return
something like ERESTARTSYS to the libraries.  The libraries then try to
restart the system call, but a SIGPROF is bound to follow shortly, again
interrupting the system call, and so on...

-------------------------------------------
Met vriendelijke groet / with kind regards,
  Guus Sliepen <guus@sliepen.warande.net>
-------------------------------------------
See also: http://tinc.nl.linux.org/
          http://www.kernelbench.org/
-------------------------------------------


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: #define HZ 1024 -- negative effects?
  2001-04-28  4:57             ` Mike Galbraith
@ 2001-04-29  8:46               ` george anzinger
  0 siblings, 0 replies; 18+ messages in thread
From: george anzinger @ 2001-04-29  8:46 UTC (permalink / raw)
  To: Mike Galbraith; +Cc: Nigel Gamble, linux-kernel

Mike Galbraith wrote:
> 
> On Fri, 27 Apr 2001, Nigel Gamble wrote:
> 
> > On Fri, 27 Apr 2001, Mike Galbraith wrote:
> > > On Fri, 27 Apr 2001, Nigel Gamble wrote:
> > > > > What about SCHED_YIELD and allocating during vm stress times?
> > >
> > > snip
> > >
> > > > A well-written GUI should not be using SCHED_YIELD.  If it is
> > >
> > > I was referring to the gui (or other tasks) allocating memory during
> > > vm stress periods, and running into the yield in __alloc_pages()..
> > > not a voluntary yield.
> >
> > Oh, I see.  Well, if this were causing the problem, then running the GUI
> > at a real-time priority would be a better solution than increasing the
> > clock frequency, since SCHED_YIELD has no effect on real-time tasks
> > unless there are other runnable real-time tasks at the same priority.
> > The call to schedule() would just reschedule the real-time GUI task
> > itself immediately.
> >
> > However, in times of vm stress it is more likely that GUI performance
> > problems would be caused by parts of the GUI having been paged out,
> > rather than by anything which could be helped by scheduling differences.
> 
> Agreed.  I wasn't thinking about swapping, only kswapd not quite keeping
> up with laundering, and then user tasks having to pick up some of the
> load.  Anyway, I've been told that for most values of HZ the slice is
> 50ms, so my reasoning wrt HZ/SCHED_YIELD was wrong.  (Which begs the
> question: why do some archs use higher HZ values?)
> 
Well, almost.  Here is the scaling code:

#if HZ < 200
#define TICK_SCALE(x)	((x) >> 2)
#elif HZ < 400
#define TICK_SCALE(x)	((x) >> 1)
#elif HZ < 800
#define TICK_SCALE(x)	(x)
#elif HZ < 1600
#define TICK_SCALE(x)	((x) << 1)
#else
#define TICK_SCALE(x)	((x) << 2)
#endif

#define NICE_TO_TICKS(nice)	(TICK_SCALE(20-(nice))+1)

This, by the way, is new with 2.4.x.  As to why, it has more to do with
timer resolution than anything else.  Timer resolution is 1/HZ so higher
HZ => better resolution.  Of course, you must pay for it.  Nothing is
free :)  Higher HZ means more interrupts => higher overhead.

George

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: #define HZ 1024 -- negative effects?
  2001-04-24 23:20 Michael Rothwell
  2001-04-25 22:40 ` Nigel Gamble
@ 2001-04-29 21:44 ` Jim Gettys
  2001-04-29 21:59   ` Michael Rothwell
  1 sibling, 1 reply; 18+ messages in thread
From: Jim Gettys @ 2001-04-29 21:44 UTC (permalink / raw)
  To: Michael Rothwell; +Cc: linux-kernel

The biggest single issue in GUI responsiveness on Linux has been caused
by XFree86's implementation of mouse tracking in user space.

On typical UNIX systems, the mouse was often controlled in the kernel
driver.  Until recently (XFree86 4.0 days), the XFree86 server's reads
of mouse/keyboard events were not signal driven, so that if the X server
was loaded, the cursor stopped moving.

On most (but not all) current XFree86 implementations, this is now
signal driven, and further the internal X scheduler has been reworked to
make it difficult for a single client to monopolize the X server.

So the first thing you should try is to make sure you are using an X server
with this "silken mouse" support enabled; in other words, run XFree86 4.0x
and make sure the implementation has it enabled....

There may be more to do in Linux thereafter, but until you've done this, you
don't get to discuss the matter further....
					- Jim Gettys

--
Jim Gettys
Technology and Corporate Development
Compaq Computer Corporation
jg@pa.dec.com


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: #define HZ 1024 -- negative effects?
  2001-04-29 21:44 ` Jim Gettys
@ 2001-04-29 21:59   ` Michael Rothwell
  0 siblings, 0 replies; 18+ messages in thread
From: Michael Rothwell @ 2001-04-29 21:59 UTC (permalink / raw)
  To: Jim Gettys; +Cc: linux-kernel

Great. I'm running 4.02. How do I enable "silken mouse"?

Thanks,

-Michael

On 29 Apr 2001 14:44:11 -0700, Jim Gettys wrote:
> The biggest single issue in GUI responsiveness on Linux has been caused
> by XFree86's implementation of mouse tracking in user space.
> 
> On typical UNIX systems, the mouse was often controlled in the kernel
> driver.  Until recently (XFree86 4.0 days), the XFree86 server's reads
> of mouse/keyboard events were not signal driven, so that if the X server
> was loaded, the cursor stopped moving.
> 
> On most (but not all) current XFree86 implementations, this is now
> signal driven, and further the internal X scheduler has been reworked to
> make it difficult for a single client to monopolize the X server.
> 
> So the first thing you should try is to make sure you are using an X server
> with this "silken mouse" support enabled; in other words, run XFree86 4.0x
> and make sure the implementation has it enabled....
> 
> There may be more to do in Linux thereafter, but until you've done this, you
> don't get to discuss the matter further....
>                                       - Jim Gettys
> 
> --
> Jim Gettys
> Technology and Corporate Development
> Compaq Computer Corporation
> jg@pa.dec.com
> 
> -
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/


^ permalink raw reply	[flat|nested] 18+ messages in thread

end of thread, other threads:[~2001-04-29 21:59 UTC | newest]

Thread overview: 18+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <fa.gh4u8sv.17i1q6@ifi.uio.no>
2001-04-26  2:02 ` #define HZ 1024 -- negative effects? Dan Maas
2001-04-26  2:30   ` Werner Puschitz
2001-04-26  3:51   ` Mike Galbraith
2001-04-28  8:23   ` Guus Sliepen
2001-04-26 18:19 Adam J. Richter
2001-04-26 18:31 ` Rik van Riel
2001-04-26 20:24   ` Dan Mann
2001-04-27 10:04     ` Mike Galbraith
2001-04-27 15:06       ` Dan Mann
2001-04-27 19:26       ` Nigel Gamble
2001-04-27 20:28         ` Mike Galbraith
2001-04-27 23:22           ` Nigel Gamble
2001-04-28  4:57             ` Mike Galbraith
2001-04-29  8:46               ` george anzinger
  -- strict thread matches above, loose matches on Subject: below --
2001-04-24 23:20 Michael Rothwell
2001-04-25 22:40 ` Nigel Gamble
2001-04-29 21:44 ` Jim Gettys
2001-04-29 21:59   ` Michael Rothwell
