* [PATCH v2 0/2] sched: Minor changes for rd->overload access
@ 2024-03-22 14:16 Shrikanth Hegde
  2024-03-22 14:16 ` [PATCH v2 1/2] sched/fair: Check rd->overload value before update Shrikanth Hegde
  2024-03-22 14:16 ` [PATCH v2 2/2] sched/fair: Use helper functions to access rd->overload Shrikanth Hegde
  0 siblings, 2 replies; 5+ messages in thread
From: Shrikanth Hegde @ 2024-03-22 14:16 UTC
  To: mingo, peterz, vincent.guittot
  Cc: sshegde, dietmar.eggemann, qyousef, linux-kernel, vschneid,
	qperret


v1 -> v2:
- dropped Fixes tag.
- Added one of the perf probes in the changelog for reference.
- Added reviewed-by tags.

tl;dr
When running workloads on large systems, it was observed that access to
rd->overload was taking time. Since the value changes less often than it
is written, it is better to check it before updating. Patch 1/2 does
that. With the patch, updates happen only when necessary. CPU bus
traffic reduced a bit. No significant gains in workload performance were
seen.
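The core idea of patch 1/2 is to avoid the unconditional store; a minimal
sketch of it is below (rd stands for env->dst_rq->rd here, the actual
diff is in patch 1/2):

	int overload = sg_status & SG_OVERLOAD;

	/* Skip the store when rd->overload already holds the desired
	 * value, so the cacheline is not dirtied needlessly.
	 */
	if (READ_ONCE(rd->overload) != overload)
		WRITE_ONCE(rd->overload, overload);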

Qais suggested that it would be better to use helper functions to access
rd->overload instead. Patch 2/2 does that.

*These patches depend on the series below being applied first*
https://lore.kernel.org/all/20240307085725.444486-1-sshegde@linux.ibm.com/


-----------------------------------------------------------------------
Detailed perf annotation and probe stats
-----------------------------------------------------------------------
=======
6.8-rc5
=======
320 CPU system, SMT8
  NUMA node(s):          4
  NUMA node0 CPU(s):     0-79
  NUMA node1 CPU(s):     80-159
  NUMA node6 CPU(s):     160-239
  NUMA node7 CPU(s):     240-319

Perf annotate while running "schbench -t 320 -i 30 -r 30"
       │     if (!READ_ONCE(this_rq->rd->overload) ||
 18.05 │       ld       r9,2752(r31)
       │     sd = rcu_dereference_check_sched_domain(this_rq->sd);
  6.97 │       ld       r30,2760(r31)


Added some dummy code so that probes could be placed at the required lines.
perf probe -L update_sd_lb_stats
     46         if (env->sd->flags & SD_NUMA)
     47                 env->fbq_type = fbq_classify_group(&sds->busiest_stat);

     49         if (!env->sd->parent) {
                        /* update overload indicator if we are at root domain */
     51                 WRITE_ONCE(env->dst_rq->rd->overload, sg_status & SG_OVERLOAD);

perf probe -L newidle_balance
                rcu_read_lock();
     38         sd = rcu_dereference_check_sched_domain(this_rq->sd);

                if (!READ_ONCE(this_rq->rd->overload) ||
                    (sd && this_rq->avg_idle < sd->max_newidle_lb_cost)) {

perf probe -L add_nr_running
         #ifdef CONFIG_SMP
     11         if (prev_nr < 2 && rq->nr_running >= 2) {
     12                 if (!READ_ONCE(rq->rd->overload)) {
     13                         a = a +10;
                                WRITE_ONCE(rq->rd->overload, 1);
                        }
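
For reference, the probes above could have been added and counted with
something like the following (illustrative only; the exact commands are
not part of this posting and the relative line numbers depend on the
local source):

	perf probe 'update_sd_lb_stats_L51=update_sd_lb_stats:51'
	perf probe 'add_nr_running_L12=add_nr_running:12'
	perf stat -a -e probe:update_sd_lb_stats_L51,probe:add_nr_running_L12 -- \
		./hackbench 10 process 100000 loops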

Probe hits when running different workloads:
idle
320 probe:add_nr_running_L12
260 probe:add_nr_running_L13
1K probe:newidle_balance_L38
596 probe:update_sd_lb_stats_L51

./hackbench 10 process 100000 loops
130K probe:add_nr_running_L12
93 probe:add_nr_running_L13
1M probe:newidle_balance_L38
109K probe:update_sd_lb_stats_L51

./schbench -t 320 -i 30 -r 30
3K probe:add_nr_running_L12
436 probe:add_nr_running_L13
125K probe:newidle_balance_L38
33K probe:update_sd_lb_stats_L51

Modified stress-ng --wait
3K probe:add_nr_running_L12
1K probe:add_nr_running_L13
6M probe:newidle_balance_L38
11K probe:update_sd_lb_stats_L51

stress-ng --cpu=400 -l 20
833 probe:add_nr_running_L12
280 probe:add_nr_running_L13
2K probe:newidle_balance_L38
1K probe:update_sd_lb_stats_L51

stress-ng --cpu=400 -l 100
730 probe:add_nr_running_L12
0 probe:add_nr_running_L13
0 probe:newidle_balance_L38
0 probe:update_sd_lb_stats_L51

stress-ng --cpu=800 -l 50
2K probe:add_nr_running_L12
0 probe:add_nr_running_L13
2K probe:newidle_balance_L38
946 probe:update_sd_lb_stats_L51

stress-ng --cpu=800 -l 100
361 probe:add_nr_running_L12
0 probe:add_nr_running_L13
0 probe:newidle_balance_L38
0 probe:update_sd_lb_stats_L51

L13 counts are much lower than L12 counts. This indicates that the value
rarely changes.

------------------------------------------------------------------------------
==========
With Patch:
==========
Perf annotate while running "schbench -t 320 -i 30 -r 30"
       │     if (!READ_ONCE(this_rq->rd->overload) ||
       │       ld       r9,2752(r31)
       │     sd = rcu_dereference_check_sched_domain(this_rq->sd);
       │       ld       r30,2760(r31)
       │     if (!READ_ONCE(this_rq->rd->overload) ||
       │       lwz      r9,536(r9)
       │       cmpwi    r9,0
       │     ↓ beq      2b4
       │100:   mflr     r0
       │       cmpdi    r30,0
  0.38 │       std      r0,240(r1)
  1.56 │     ↓ beq      120


perf probe -L update_sd_lb_stats
     49         if (!env->sd->parent) {
     50                 int a;
                        /* update overload indicator if we are at root domain */
                        if ( READ_ONCE(env->dst_rq->rd->overload) != sg_status & SG_OVERLOAD) {
     53                         a= a+10;
                                WRITE_ONCE(env->dst_rq->rd->overload, sg_status & SG_OVERLOAD);
                        }

perf probe -L newidle_balance
     38         sd = rcu_dereference_check_sched_domain(this_rq->sd);

                if (!READ_ONCE(this_rq->rd->overload) ||
                    (sd && this_rq->avg_idle < sd->max_newidle_lb_cost)) {

perf probe -L add_nr_running
		#ifdef CONFIG_SMP
     11         if (prev_nr < 2 && rq->nr_running >= 2) {
     12                 if (!READ_ONCE(rq->rd->overload)) {
     13                         a = a +10;
                                WRITE_ONCE(rq->rd->overload, 1);
                        }

Perf probes when running different workloads. How often the value
actually changes in update_sd_lb_stats is indicated as L53/L50*100.
For example, for hackbench below this is 139/54K*100, i.e. roughly 0.25%.
idle
818 probe:newidle_balance_L38
262 probe:update_sd_lb_stats_L53	<-- 86%
321 probe:add_nr_running_L12
261 probe:add_nr_running_L13
304 probe:update_sd_lb_stats_L50

./hackbench 10 process 100000 loops
1M probe:newidle_balance_L38
139 probe:update_sd_lb_stats_L53	<-- 0.25%
129K probe:add_nr_running_L12
74 probe:add_nr_running_L13
54K probe:update_sd_lb_stats_L50

./schbench -t 320 -i 30 -r 30
101K probe:newidle_balance_L38
2K probe:update_sd_lb_stats_L53		<-- 9.09%
5K probe:add_nr_running_L12
1K probe:add_nr_running_L13
22K probe:update_sd_lb_stats_L50

Modified stress-ng --wait
6M probe:newidle_balance_L38
2K probe:update_sd_lb_stats_L53		<-- 25%
4K probe:add_nr_running_L12
2K probe:add_nr_running_L13
8K probe:update_sd_lb_stats_L50

stress-ng --cpu=400 -l 20
2K probe:newidle_balance_L38
286 probe:update_sd_lb_stats_L53	<-- 36.11%
746 probe:add_nr_running_L12
256 probe:add_nr_running_L13
792 probe:update_sd_lb_stats_L50

stress-ng --cpu=400 -l 100
2 probe:newidle_balance_L38
0 probe:update_sd_lb_stats_L53		<-- NA
923 probe:add_nr_running_L12
0 probe:add_nr_running_L13
0 probe:update_sd_lb_stats_L50

stress-ng --cpu=800 -l 50
2K probe:newidle_balance_L38
0 probe:update_sd_lb_stats_L53		<-- 0%
2K probe:add_nr_running_L12
0 probe:add_nr_running_L13
429 probe:update_sd_lb_stats_L50

stress-ng --cpu=800 -l 100
0 probe:newidle_balance_L38
0 probe:update_sd_lb_stats_L53		<-- NA
424 probe:add_nr_running_L12
0 probe:add_nr_running_L13
1 probe:update_sd_lb_stats_L50

This indicates that the value likely changes infrequently. So adding a
read before the update would help in generic workloads.
-------------------------------------------------------------------------------

Shrikanth Hegde (2):
  sched/fair: Check rd->overload value before update
  sched/fair: Use helper functions to access rd->overload

 kernel/sched/fair.c  |  6 ++++--
 kernel/sched/sched.h | 14 ++++++++++++--
 2 files changed, 16 insertions(+), 4 deletions(-)

--
2.39.3



* [PATCH v2 1/2] sched/fair: Check rd->overload value before update
  2024-03-22 14:16 [PATCH v2 0/2] sched: Minor changes for rd->overload access Shrikanth Hegde
@ 2024-03-22 14:16 ` Shrikanth Hegde
  2024-03-22 14:16 ` [PATCH v2 2/2] sched/fair: Use helper functions to access rd->overload Shrikanth Hegde
  1 sibling, 0 replies; 5+ messages in thread
From: Shrikanth Hegde @ 2024-03-22 14:16 UTC
  To: mingo, peterz, vincent.guittot
  Cc: sshegde, dietmar.eggemann, qyousef, linux-kernel, vschneid,
	qperret

Overload is an indicator that tells whether any rq in the root domain
has 2 or more running tasks. The root domain is a global structure per
cpuset island.

The overload status is set when a task is added to a runqueue.
It is cleared in update_sd_lb_stats during load balance if none of the
rqs has 2 or more running tasks. It is read during newidle balance to
see whether a load balance is worth doing. If it is set, then newidle
balance will try aggressively to pull a task.

Since commit 630246a06ae2 ("sched/fair: Clean-up update_sg_lb_stats
parameters"), overload is updated unconditionally. How often its value
changes depends on the workload; on typical workloads it likely doesn't
change often. Each write invalidates the cacheline, which causes true
sharing when the same variable is accessed in newidle_balance. On large
systems this causes more CPU bus traffic.

Perf probe stats for reference (detailed info on the perf probes can be
found in the cover letter). With the patch, while running hackbench:
1M probe:newidle_balance_L38
139 probe:update_sd_lb_stats_L53
129K probe:add_nr_running_L12
74 probe:add_nr_running_L13
54K probe:update_sd_lb_stats_L50

The perf probes show that the value actually changes much less often
than it is written (L50 vs L53). So it is better to check the value
before updating it. CPU bus traffic reduces with the patch.

Signed-off-by: Shrikanth Hegde <sshegde@linux.ibm.com>
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
---
 kernel/sched/fair.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 02d4d224b436..eeebadd7d9ae 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -10621,7 +10621,8 @@ static inline void update_sd_lb_stats(struct lb_env *env, struct sd_lb_stats *sd

 	if (!env->sd->parent) {
 		/* update overload indicator if we are at root domain */
-		WRITE_ONCE(env->dst_rq->rd->overload, sg_status & SG_OVERLOAD);
+		if (READ_ONCE(env->dst_rq->rd->overload) != (sg_status & SG_OVERLOAD))
+			WRITE_ONCE(env->dst_rq->rd->overload, sg_status & SG_OVERLOAD);

 		/* Update over-utilization (tipping point, U >= 0) indicator */
 		set_rd_overutilized_status(env->dst_rq->rd,
--
2.39.3



* [PATCH v2 2/2] sched/fair: Use helper functions to access rd->overload
  2024-03-22 14:16 [PATCH v2 0/2] sched: Minor changes for rd->overload access Shrikanth Hegde
  2024-03-22 14:16 ` [PATCH v2 1/2] sched/fair: Check rd->overload value before update Shrikanth Hegde
@ 2024-03-22 14:16 ` Shrikanth Hegde
  2024-03-24  1:16   ` Qais Yousef
  1 sibling, 1 reply; 5+ messages in thread
From: Shrikanth Hegde @ 2024-03-22 14:16 UTC
  To: mingo, peterz, vincent.guittot
  Cc: sshegde, dietmar.eggemann, qyousef, linux-kernel, vschneid,
	qperret

rd->overload is accessed in multiple places. Instead, it could use helper
get/set functions. This would make the code more readable and easier to
maintain.

No change in functionality intended.

Suggested-by: Qais Yousef <qyousef@layalina.io>
Signed-off-by: Shrikanth Hegde <sshegde@linux.ibm.com>
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
---
 kernel/sched/fair.c  |  7 ++++---
 kernel/sched/sched.h | 14 ++++++++++++--
 2 files changed, 16 insertions(+), 5 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index eeebadd7d9ae..cee99c93e6a4 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -10621,8 +10621,9 @@ static inline void update_sd_lb_stats(struct lb_env *env, struct sd_lb_stats *sd

 	if (!env->sd->parent) {
 		/* update overload indicator if we are at root domain */
-		if (READ_ONCE(env->dst_rq->rd->overload) != (sg_status & SG_OVERLOAD))
-			WRITE_ONCE(env->dst_rq->rd->overload, sg_status & SG_OVERLOAD);
+		if (is_rd_overloaded(env->dst_rq->rd) != (sg_status & SG_OVERLOAD))
+			set_rd_overload_status(env->dst_rq->rd,
+					       sg_status & SG_OVERLOAD);

 		/* Update over-utilization (tipping point, U >= 0) indicator */
 		set_rd_overutilized_status(env->dst_rq->rd,
@@ -12344,7 +12345,7 @@ static int sched_balance_newidle(struct rq *this_rq, struct rq_flags *rf)
 	rcu_read_lock();
 	sd = rcu_dereference_check_sched_domain(this_rq->sd);

-	if (!READ_ONCE(this_rq->rd->overload) ||
+	if (!is_rd_overloaded(this_rq->rd) ||
 	    (sd && this_rq->avg_idle < sd->max_newidle_lb_cost)) {

 		if (sd)
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 41024c1c49b4..c91eb8811bef 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -918,6 +918,16 @@ extern void rq_attach_root(struct rq *rq, struct root_domain *rd);
 extern void sched_get_rd(struct root_domain *rd);
 extern void sched_put_rd(struct root_domain *rd);

+static inline int is_rd_overloaded(struct root_domain *rd)
+{
+	return READ_ONCE(rd->overload);
+}
+
+static inline void set_rd_overload_status(struct root_domain *rd, int status)
+{
+	WRITE_ONCE(rd->overload, status);
+}
+
 #ifdef HAVE_RT_PUSH_IPI
 extern void rto_push_irq_work_func(struct irq_work *work);
 #endif
@@ -2518,8 +2528,8 @@ static inline void add_nr_running(struct rq *rq, unsigned count)

 #ifdef CONFIG_SMP
 	if (prev_nr < 2 && rq->nr_running >= 2) {
-		if (!READ_ONCE(rq->rd->overload))
-			WRITE_ONCE(rq->rd->overload, 1);
+		if (!is_rd_overloaded(rq->rd))
+			set_rd_overload_status(rq->rd, 1);
 	}
 #endif

--
2.39.3



* Re: [PATCH v2 2/2] sched/fair: Use helper functions to access rd->overload
  2024-03-22 14:16 ` [PATCH v2 2/2] sched/fair: Use helper functions to access rd->overload Shrikanth Hegde
@ 2024-03-24  1:16   ` Qais Yousef
  2024-03-25  5:12     ` Shrikanth Hegde
  0 siblings, 1 reply; 5+ messages in thread
From: Qais Yousef @ 2024-03-24  1:16 UTC
  To: Shrikanth Hegde
  Cc: mingo, peterz, vincent.guittot, dietmar.eggemann, linux-kernel,
	vschneid, qperret

On 03/22/24 19:46, Shrikanth Hegde wrote:
> rd->overload is accessed in multiple places. Instead, it could use helper
> get/set functions. This would make the code more readable and easier to
> maintain.
> 
> No change in functionality intended.
> 
> Suggested-by: Qais Yousef <qyousef@layalina.io>

Thanks for following up!

> Signed-off-by: Shrikanth Hegde <sshegde@linux.ibm.com>
> Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
> ---
>  kernel/sched/fair.c  |  7 ++++---
>  kernel/sched/sched.h | 14 ++++++++++++--
>  2 files changed, 16 insertions(+), 5 deletions(-)
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index eeebadd7d9ae..cee99c93e6a4 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -10621,8 +10621,9 @@ static inline void update_sd_lb_stats(struct lb_env *env, struct sd_lb_stats *sd
> 
>  	if (!env->sd->parent) {
>  		/* update overload indicator if we are at root domain */
> -		if (READ_ONCE(env->dst_rq->rd->overload) != (sg_status & SG_OVERLOAD))
> -			WRITE_ONCE(env->dst_rq->rd->overload, sg_status & SG_OVERLOAD);
> +		if (is_rd_overloaded(env->dst_rq->rd) != (sg_status & SG_OVERLOAD))
> +			set_rd_overload_status(env->dst_rq->rd,
> +					       sg_status & SG_OVERLOAD);

A bit picky, but..

Wouldn't it be better to encapsulate the check of whether we're writing a new
value inside set_rd_overload_status()? Only write if the value changed, and
then all future users wouldn't need to care.

I think no need to wrap the line too.

> 
>  		/* Update over-utilization (tipping point, U >= 0) indicator */
>  		set_rd_overutilized_status(env->dst_rq->rd,
> @@ -12344,7 +12345,7 @@ static int sched_balance_newidle(struct rq *this_rq, struct rq_flags *rf)
>  	rcu_read_lock();
>  	sd = rcu_dereference_check_sched_domain(this_rq->sd);
> 
> -	if (!READ_ONCE(this_rq->rd->overload) ||
> +	if (!is_rd_overloaded(this_rq->rd) ||
>  	    (sd && this_rq->avg_idle < sd->max_newidle_lb_cost)) {
> 
>  		if (sd)
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index 41024c1c49b4..c91eb8811bef 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -918,6 +918,16 @@ extern void rq_attach_root(struct rq *rq, struct root_domain *rd);
>  extern void sched_get_rd(struct root_domain *rd);
>  extern void sched_put_rd(struct root_domain *rd);
> 
> +static inline int is_rd_overloaded(struct root_domain *rd)
> +{
> +	return READ_ONCE(rd->overload);
> +}
> +
> +static inline void set_rd_overload_status(struct root_domain *rd, int status)
> +{
> +	WRITE_ONCE(rd->overload, status);
> +}
> +
>  #ifdef HAVE_RT_PUSH_IPI
>  extern void rto_push_irq_work_func(struct irq_work *work);
>  #endif
> @@ -2518,8 +2528,8 @@ static inline void add_nr_running(struct rq *rq, unsigned count)
> 
>  #ifdef CONFIG_SMP
>  	if (prev_nr < 2 && rq->nr_running >= 2) {
> -		if (!READ_ONCE(rq->rd->overload))
> -			WRITE_ONCE(rq->rd->overload, 1);
> +		if (!is_rd_overloaded(rq->rd))
> +			set_rd_overload_status(rq->rd, 1);

While at it, could you write SG_OVERLOAD instead of 1?

Both patches LGTM otherwise

Reviewed-by: Qais Yousef <qyousef@layalina.io>


Thanks!

--
Qais Yousef

>  	}
>  #endif
> 
> --
> 2.39.3
> 


* Re: [PATCH v2 2/2] sched/fair: Use helper functions to access rd->overload
  2024-03-24  1:16   ` Qais Yousef
@ 2024-03-25  5:12     ` Shrikanth Hegde
  0 siblings, 0 replies; 5+ messages in thread
From: Shrikanth Hegde @ 2024-03-25  5:12 UTC
  To: Qais Yousef
  Cc: mingo, peterz, vincent.guittot, dietmar.eggemann, linux-kernel,
	vschneid, qperret



On 3/24/24 6:46 AM, Qais Yousef wrote:
> On 03/22/24 19:46, Shrikanth Hegde wrote:

>>
>>  	if (!env->sd->parent) {
>>  		/* update overload indicator if we are at root domain */
>> -		if (READ_ONCE(env->dst_rq->rd->overload) != (sg_status & SG_OVERLOAD))
>> -			WRITE_ONCE(env->dst_rq->rd->overload, sg_status & SG_OVERLOAD);
>> +		if (is_rd_overloaded(env->dst_rq->rd) != (sg_status & SG_OVERLOAD))
>> +			set_rd_overload_status(env->dst_rq->rd,
>> +					       sg_status & SG_OVERLOAD);
> 
> A bit picky, but..
> 
> Wouldn't it be better to encapsulate the check of whether we're writing a new
> value inside set_rd_overload_status()? Only write if it the value changed and
> all future users wouldn't care then.
> 
> I think no need to wrap the line too.
> 

Yes, makes sense. All the places that do the check and the update can be
wrapped. I will send out v3.
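Something along these lines, for illustration only (not the final v3;
based on the helpers added in patch 2/2):

	static inline void set_rd_overload_status(struct root_domain *rd, int status)
	{
		/* Write only when the value actually changes, so callers
		 * don't need to do the check themselves.
		 */
		if (READ_ONCE(rd->overload) != status)
			WRITE_ONCE(rd->overload, status);
	}

The caller in update_sd_lb_stats() would then reduce to a single
set_rd_overload_status(env->dst_rq->rd, sg_status & SG_OVERLOAD) call.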

>>
>>  		/* Update over-utilization (tipping point, U >= 0) indicator */
>>  		set_rd_overutilized_status(env->dst_rq->rd,
>> @@ -12344,7 +12345,7 @@ static int sched_balance_newidle(struct rq *this_rq, struct rq_flags *rf)
>>  	rcu_read_lock();
>>  	sd = rcu_dereference_check_sched_domain(this_rq->sd);
>>
>> -	if (!READ_ONCE(this_rq->rd->overload) ||
>> +	if (!is_rd_overloaded(this_rq->rd) ||
>>  	    (sd && this_rq->avg_idle < sd->max_newidle_lb_cost)) {
>>
>>  		if (sd)
>> @@ -2518,8 +2528,8 @@ static inline void add_nr_running(struct rq *rq, unsigned count)
>>
>>  #ifdef CONFIG_SMP
>>  	if (prev_nr < 2 && rq->nr_running >= 2) {
>> -		if (!READ_ONCE(rq->rd->overload))
>> -			WRITE_ONCE(rq->rd->overload, 1);
>> +		if (!is_rd_overloaded(rq->rd))
>> +			set_rd_overload_status(rq->rd, 1);
> 
> While at it, could you write SG_OVERLOAD instead of 1? 
Done.  
> 
> Both patches LGTM otherwise
> 
> Reviewed-by: Qais Yousef <qyousef@layalina.io>
> 
> 



