From: g.medini@eurosoft.it
To: bigeasy@linutronix.de
Cc: linux-rt-users@vger.kernel.org
Subject: Re: High latency of a system based on 5.19 rt
Date: Tue, 3 Oct 2023 15:47:57 +0200
Message-ID: <S1YGZX$2ABA0246807C0BA80101FD71365D4784@eurosoft.it>
Good afternoon, I tried to reproduce the issue with CONFIG_X86_DEBUG_FPU but was not able to, since it depends on the environment where I run an RT task executing an EtherCAT master at high RT priority (priority 60).
On a system free of RT tasks, the wakeup_rt tracer shows the attached trace under the particular condition where I press the Windows-like key (and X continuously resizes the graphical windows).
In the end, the high latency seen by my RT task is connected with this kind of graphical activity, so I think it is better to investigate the system without my RT task running.
I have also tested stable 5.15, but the main issue remains the same.
Gianluca
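For reference, a wakeup_rt trace like the one attached below can be captured through the standard tracefs interface. This is a generic sketch (it assumes tracefs is mounted at /sys/kernel/tracing and must be run as root; the reproduction step in the middle is whatever triggers the latency, here the window-resize load):

```shell
# Capture a wakeup_rt latency trace via tracefs (run as root).
cd /sys/kernel/tracing
echo 0 > tracing_on                 # stop tracing while configuring
echo wakeup_rt > current_tracer     # trace wakeup latency of the highest-prio RT task
echo 1 > options/function-trace     # include the function call log, as in the attachment
echo 0 > tracing_max_latency        # reset the recorded maximum
echo 1 > tracing_on
# ... reproduce the load (e.g. the window-resize activity) ...
echo 0 > tracing_on
cat trace > /tmp/wakeup_rt.txt      # save the worst-case trace seen so far
```

The tracer keeps only the worst-case wakeup it observed, so the saved file corresponds to the maximum in tracing_max_latency.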
From "Sebastian Andrzej Siewior" bigeasy@linutronix.de
To g.medini@eurosoft.it
Cc linux-rt-users@vger.kernel.org
Date Mon, 2 Oct 2023 12:02:57 +0200
Subject Re: High latency of a system based on 5.19 rt
On 2023-09-25 18:30:44 [+0200], g.medini@eurosoft.it wrote:
>
> Hi All, I'm struggling with a realtime issue I have.
Hi,
> Using ftrace I found a problem related to CONFIG_X86_DEBUG_FPU that was introducing very large latency.
Do you have a trace and numeric value of "very large latency" for
CONFIG_X86_DEBUG_FPU?
> With that removed, I have another problem whose solution is not so trivial.
>
> The problem I actually have seems related to the i915 driver.
>
> With a particular load on the graphics card (basically using ALT-TAB to switch between graphical windows, or pressing the Win-like key to tile all the graphical windows on the screen) I see a huge latency that causes problems for an RT task I need for my application.
>
> I've tried everything to optimize the system.
>
> As expected, I found small improvements that help mitigate the problem (core isolation and so on), but the basic problem remains the same (it happens with X11 and also with Wayland).
What is the latency problem? The wakeup trace shows a wakeup from
gnome to irq_work. Is the graphics/i915 side lagging, or something else?
> Hoping someone can help me.
>
> Gianluca
>
> Cordiali saluti
> Best regards
Sebastian
[-- Attachment #2: wakeup_rt.txt --]
[-- Type: text/plain, Size: 19543 bytes --]
# tracer: wakeup_rt
#
# wakeup_rt latency trace v1.1.5 on 5.19.0-rt10
# --------------------------------------------------------------------
# latency: 371 us, #227/227, CPU#0 | (M:preempt_rt VP:0, KP:0, SP:0 HP:0 #P:2)
# -----------------
# | task: irq_work/0-23 (uid:0 nice:0 policy:1 rt_prio:1)
# -----------------
#
# _--------=> CPU#
# / _-------=> irqs-off/BH-disabled
# | / _------=> need-resched
# || / _-----=> need-resched-lazy
# ||| / _----=> hardirq/softirq
# |||| / _---=> preempt-depth
# ||||| / _--=> preempt-lazy-depth
# |||||| / _-=> migrate-disable
# ||||||| / delay
# cmd pid |||||||| time | caller
# \ / |||||||| \ | /
Xorg-771 0dn.h512 1us!: 771:120:R + [000] 23: 98:R irq_work/0
Xorg-771 0dn.h512 114us : <stack trace>
=> __ftrace_trace_stack
=> probe_wakeup
=> ttwu_do_wakeup
=> try_to_wake_up
=> __sysvec_irq_work
=> sysvec_irq_work
=> asm_sysvec_irq_work
=> native_apic_mem_read
=> native_apic_wait_icr_idle
=> irq_work_queue
=> i915_request_enable_breadcrumb
=> __dma_fence_enable_signaling
=> dma_fence_add_callback
=> i915_sw_fence_await_dma_fence
=> i915_sw_fence_await_reservation
=> intel_prepare_plane_fb
=> drm_atomic_helper_prepare_planes
=> intel_atomic_commit
=> drm_atomic_helper_page_flip
=> drm_mode_page_flip_ioctl
=> drm_ioctl_kernel
=> drm_ioctl
=> __x64_sys_ioctl
=> do_syscall_64
=> entry_SYSCALL_64_after_hwframe
Xorg-771 0dn.h512 115us : 0
Xorg-771 0dn.h412 118us : task_woken_rt <-ttwu_do_wakeup
Xorg-771 0dn.h412 118us : preempt_count_sub <-try_to_wake_up
Xorg-771 0dn.h312 120us : _raw_spin_unlock_irqrestore <-try_to_wake_up
Xorg-771 0dn.h312 121us : preempt_count_sub <-_raw_spin_unlock_irqrestore
Xorg-771 0dn.h212 121us : preempt_count_sub <-try_to_wake_up
Xorg-771 0dn.h112 123us : irq_exit_rcu <-sysvec_irq_work
Xorg-771 0dn.h112 123us : preempt_count_sub <-__irq_exit_rcu
Xorg-771 0dn..112 125us : idle_cpu <-__irq_exit_rcu
Xorg-771 0dn..112 126us+: raw_irqentry_exit_cond_resched <-irqentry_exit
Xorg-771 0dn..112 174us : irq_enter_rcu <-sysvec_apic_timer_interrupt
Xorg-771 0dn..112 174us : preempt_count_add <-irq_enter_rcu
Xorg-771 0dn.h112 176us : __sysvec_apic_timer_interrupt <-sysvec_apic_timer_interrupt
Xorg-771 0dn.h112 181us : hrtimer_interrupt <-__sysvec_apic_timer_interrupt
Xorg-771 0dn.h112 186us : _raw_spin_lock_irqsave <-hrtimer_interrupt
Xorg-771 0dn.h112 186us : preempt_count_add <-_raw_spin_lock_irqsave
Xorg-771 0dn.h212 187us : ktime_get_update_offsets_now <-hrtimer_interrupt
Xorg-771 0dn.h212 189us : __hrtimer_run_queues <-hrtimer_interrupt
Xorg-771 0dn.h212 191us : _raw_spin_unlock_irqrestore <-__hrtimer_run_queues
Xorg-771 0dn.h212 191us : preempt_count_sub <-_raw_spin_unlock_irqrestore
Xorg-771 0dn.h112 192us : tick_sched_timer <-__hrtimer_run_queues
Xorg-771 0dn.h112 193us : ktime_get <-tick_sched_timer
Xorg-771 0dn.h112 193us : tick_sched_do_timer <-tick_sched_timer
Xorg-771 0dn.h112 195us : tick_do_update_jiffies64 <-tick_sched_do_timer
Xorg-771 0dn.h112 197us : _raw_spin_lock <-tick_do_update_jiffies64
Xorg-771 0dn.h112 197us : preempt_count_add <-_raw_spin_lock
Xorg-771 0dn.h212 201us : calc_global_load <-tick_do_update_jiffies64
Xorg-771 0dn.h212 202us : preempt_count_sub <-tick_do_update_jiffies64
Xorg-771 0dn.h112 202us : update_wall_time <-tick_sched_do_timer
Xorg-771 0dn.h112 202us : timekeeping_advance <-update_wall_time
Xorg-771 0dn.h112 202us : _raw_spin_lock_irqsave <-timekeeping_advance
Xorg-771 0dn.h112 202us : preempt_count_add <-_raw_spin_lock_irqsave
Xorg-771 0dn.h212 208us : ntp_tick_length <-timekeeping_advance
Xorg-771 0dn.h212 209us : ntp_tick_length <-timekeeping_advance
Xorg-771 0dn.h212 209us : timekeeping_update <-timekeeping_advance
Xorg-771 0dn.h212 209us : ntp_get_next_leap <-timekeeping_update
Xorg-771 0dn.h212 210us : update_vsyscall <-timekeeping_update
Xorg-771 0dn.h212 214us : raw_notifier_call_chain <-timekeeping_update
Xorg-771 0dn.h212 215us : update_fast_timekeeper <-timekeeping_update
Xorg-771 0dn.h212 217us : update_fast_timekeeper <-timekeeping_update
Xorg-771 0dn.h212 218us : _raw_spin_unlock_irqrestore <-timekeeping_advance
Xorg-771 0dn.h212 218us : preempt_count_sub <-_raw_spin_unlock_irqrestore
Xorg-771 0dn.h112 219us : tick_sched_handle <-tick_sched_timer
Xorg-771 0dn.h112 219us : update_process_times <-tick_sched_handle
Xorg-771 0dn.h112 220us : account_process_tick <-update_process_times
Xorg-771 0dn.h112 222us : account_system_time <-update_process_times
Xorg-771 0dn.h112 222us : account_system_index_time <-update_process_times
Xorg-771 0dn.h112 224us : cpuacct_account_field <-account_system_index_time
Xorg-771 0dn.h112 225us : __cgroup_account_cputime_field <-account_system_index_time
Xorg-771 0dn.h112 225us : preempt_count_add <-__cgroup_account_cputime_field
Xorg-771 0dn.h212 227us : cgroup_base_stat_cputime_account_end.constprop.0 <-account_system_index_time
Xorg-771 0dn.h212 227us : cgroup_rstat_updated <-cgroup_base_stat_cputime_account_end.constprop.0
Xorg-771 0dn.h212 227us : preempt_count_sub <-cgroup_base_stat_cputime_account_end.constprop.0
Xorg-771 0dn.h112 227us : acct_account_cputime <-update_process_times
Xorg-771 0dn.h112 229us : hrtimer_run_queues <-update_process_times
Xorg-771 0dn.h112 229us : raise_timer_softirq <-update_process_times
Xorg-771 0dn.h112 230us : wake_up_process <-raise_timer_softirq
Xorg-771 0dn.h112 230us : try_to_wake_up <-raise_timer_softirq
Xorg-771 0dn.h112 231us : preempt_count_add <-try_to_wake_up
Xorg-771 0dn.h212 231us : _raw_spin_lock_irqsave <-try_to_wake_up
Xorg-771 0dn.h212 231us : preempt_count_add <-_raw_spin_lock_irqsave
Xorg-771 0dn.h312 232us : kthread_is_per_cpu <-is_cpu_allowed
Xorg-771 0dn.h312 235us : ttwu_queue_wakelist <-try_to_wake_up
Xorg-771 0dn.h312 235us : preempt_count_add <-try_to_wake_up
Xorg-771 0dn.h412 235us : _raw_spin_lock <-try_to_wake_up
Xorg-771 0dn.h412 235us : preempt_count_add <-_raw_spin_lock
Xorg-771 0dn.h512 236us : preempt_count_sub <-try_to_wake_up
Xorg-771 0dn.h412 236us : update_rq_clock <-try_to_wake_up
Xorg-771 0dn.h412 236us : ttwu_do_activate <-try_to_wake_up
Xorg-771 0dn.h412 236us : psi_task_change <-ttwu_do_activate
Xorg-771 0dn.h412 237us : psi_flags_change <-psi_task_change
Xorg-771 0dn.h412 238us : psi_group_change <-ttwu_do_activate
Xorg-771 0dn.h412 239us : enqueue_task_rt <-ttwu_do_activate
Xorg-771 0dn.h412 239us : dequeue_rt_stack <-enqueue_task_rt
Xorg-771 0dn.h412 239us : dequeue_top_rt_rq <-dequeue_rt_stack
Xorg-771 0dn.h412 240us : update_rt_migration <-enqueue_task_rt
Xorg-771 0dn.h412 241us : _raw_spin_lock <-enqueue_task_rt
Xorg-771 0dn.h412 241us : preempt_count_add <-_raw_spin_lock
Xorg-771 0dn.h512 241us : preempt_count_sub <-enqueue_task_rt
Xorg-771 0dn.h412 241us : enqueue_top_rt_rq <-enqueue_task_rt
Xorg-771 0dn.h412 241us : ttwu_do_wakeup <-try_to_wake_up
Xorg-771 0dn.h412 241us : check_preempt_curr <-ttwu_do_wakeup
Xorg-771 0dn.h412 241us : resched_curr <-check_preempt_curr
Xorg-771 0dn.h412 243us : task_woken_rt <-ttwu_do_wakeup
Xorg-771 0dn.h412 243us : preempt_count_sub <-try_to_wake_up
Xorg-771 0dn.h312 243us : _raw_spin_unlock_irqrestore <-try_to_wake_up
Xorg-771 0dn.h312 243us : preempt_count_sub <-_raw_spin_unlock_irqrestore
Xorg-771 0dn.h212 244us : preempt_count_sub <-try_to_wake_up
Xorg-771 0dn.h112 244us : rcu_sched_clock_irq <-update_process_times
Xorg-771 0dn.h112 245us : rcu_is_cpu_rrupt_from_idle <-rcu_sched_clock_irq
Xorg-771 0dn.h112 246us : rcu_is_cpu_rrupt_from_idle <-rcu_sched_clock_irq
Xorg-771 0dn.h112 248us : wake_up_process <-update_process_times
Xorg-771 0dn.h112 248us : try_to_wake_up <-update_process_times
Xorg-771 0dn.h112 249us : preempt_count_add <-try_to_wake_up
Xorg-771 0dn.h212 249us : _raw_spin_lock_irqsave <-try_to_wake_up
Xorg-771 0dn.h212 249us : preempt_count_add <-_raw_spin_lock_irqsave
Xorg-771 0dn.h312 250us : _raw_spin_unlock_irqrestore <-try_to_wake_up
Xorg-771 0dn.h312 250us : preempt_count_sub <-_raw_spin_unlock_irqrestore
Xorg-771 0dn.h212 252us : preempt_count_sub <-try_to_wake_up
Xorg-771 0dn.h112 253us : scheduler_tick <-update_process_times
Xorg-771 0dn.h112 254us : arch_scale_freq_tick <-scheduler_tick
Xorg-771 0dn.h112 256us : preempt_count_add <-scheduler_tick
Xorg-771 0dn.h212 256us : _raw_spin_lock <-scheduler_tick
Xorg-771 0dn.h212 256us : preempt_count_add <-_raw_spin_lock
Xorg-771 0dn.h312 257us : preempt_count_sub <-scheduler_tick
Xorg-771 0dn.h212 257us : update_rq_clock <-scheduler_tick
Xorg-771 0dn.h212 257us : task_tick_fair <-scheduler_tick
Xorg-771 0dn.h212 258us : update_curr <-task_tick_fair
Xorg-771 0dn.h212 259us : update_min_vruntime <-update_curr
Xorg-771 0dn.h212 259us : cpuacct_charge <-update_curr
Xorg-771 0dn.h212 260us : __cgroup_account_cputime <-update_curr
Xorg-771 0dn.h212 260us : preempt_count_add <-__cgroup_account_cputime
Xorg-771 0dn.h312 260us : cgroup_base_stat_cputime_account_end.constprop.0 <-update_curr
Xorg-771 0dn.h312 260us : cgroup_rstat_updated <-cgroup_base_stat_cputime_account_end.constprop.0
Xorg-771 0dn.h312 261us : preempt_count_sub <-cgroup_base_stat_cputime_account_end.constprop.0
Xorg-771 0dn.h212 262us : __update_load_avg_se <-update_load_avg
Xorg-771 0dn.h212 264us : __update_load_avg_cfs_rq <-update_load_avg
Xorg-771 0dn.h212 265us : update_cfs_group <-task_tick_fair
Xorg-771 0dn.h212 266us : hrtimer_active <-task_tick_fair
Xorg-771 0dn.h212 267us : update_curr <-task_tick_fair
Xorg-771 0dn.h212 267us : update_min_vruntime <-update_curr
Xorg-771 0dn.h212 269us : __update_load_avg_se <-update_load_avg
Xorg-771 0dn.h212 269us : __update_load_avg_cfs_rq <-update_load_avg
Xorg-771 0dn.h212 269us : update_cfs_group <-task_tick_fair
Xorg-771 0dn.h212 270us : reweight_entity <-task_tick_fair
Xorg-771 0dn.h212 270us : update_curr <-reweight_entity
Xorg-771 0dn.h212 270us : hrtimer_active <-task_tick_fair
Xorg-771 0dn.h212 270us : sched_slice <-task_tick_fair
Xorg-771 0dn.h212 271us : __calc_delta <-sched_slice
Xorg-771 0dn.h212 272us : calc_global_load_tick <-scheduler_tick
Xorg-771 0dn.h212 272us : preempt_count_sub <-scheduler_tick
Xorg-771 0dn.h112 273us : perf_event_task_tick <-scheduler_tick
Xorg-771 0dn.h112 274us : trigger_load_balance <-update_process_times
Xorg-771 0dn.h112 274us : run_posix_cpu_timers <-tick_sched_handle
Xorg-771 0dn.h112 276us : profile_tick <-tick_sched_timer
Xorg-771 0dn.h112 277us : hrtimer_forward <-tick_sched_timer
Xorg-771 0dn.h112 278us : _raw_spin_lock_irq <-__hrtimer_run_queues
Xorg-771 0dn.h112 278us : preempt_count_add <-_raw_spin_lock_irq
Xorg-771 0dn.h212 279us : enqueue_hrtimer <-__hrtimer_run_queues
Xorg-771 0dn.h212 279us : hrtimer_update_next_event <-hrtimer_interrupt
Xorg-771 0dn.h212 279us : __hrtimer_next_event_base <-hrtimer_update_next_event
Xorg-771 0dn.h212 280us : __hrtimer_next_event_base <-hrtimer_update_next_event
Xorg-771 0dn.h212 280us : _raw_spin_unlock_irqrestore <-hrtimer_interrupt
Xorg-771 0dn.h212 280us : preempt_count_sub <-_raw_spin_unlock_irqrestore
Xorg-771 0dn.h112 281us : tick_program_event <-hrtimer_interrupt
Xorg-771 0dn.h112 281us : clockevents_program_event <-hrtimer_interrupt
Xorg-771 0dn.h112 282us : ktime_get <-clockevents_program_event
Xorg-771 0dn.h112 282us : lapic_next_deadline <-clockevents_program_event
Xorg-771 0dn.h112 283us : irq_exit_rcu <-sysvec_apic_timer_interrupt
Xorg-771 0dn.h112 283us : preempt_count_sub <-__irq_exit_rcu
Xorg-771 0dn..112 284us : wake_up_process <-__irq_exit_rcu
Xorg-771 0dn..112 284us : try_to_wake_up <-__irq_exit_rcu
Xorg-771 0dn..112 284us : preempt_count_add <-try_to_wake_up
Xorg-771 0dn..212 284us : _raw_spin_lock_irqsave <-try_to_wake_up
Xorg-771 0dn..212 284us : preempt_count_add <-_raw_spin_lock_irqsave
Xorg-771 0dn..312 284us : _raw_spin_unlock_irqrestore <-try_to_wake_up
Xorg-771 0dn..312 284us : preempt_count_sub <-_raw_spin_unlock_irqrestore
Xorg-771 0dn..212 284us : preempt_count_sub <-try_to_wake_up
Xorg-771 0dn..112 285us : idle_cpu <-__irq_exit_rcu
Xorg-771 0dn..112 285us : raw_irqentry_exit_cond_resched <-irqentry_exit
Xorg-771 0.n..112 285us : preempt_count_sub <-irq_work_queue
Xorg-771 0dn..112 290us : rcu_note_context_switch <-__schedule
Xorg-771 0dn..112 290us : _raw_spin_lock <-rcu_note_context_switch
Xorg-771 0dn..112 293us : preempt_count_add <-_raw_spin_lock
Xorg-771 0dn..212 295us : preempt_count_sub <-rcu_note_context_switch
Xorg-771 0dn..112 295us : rcu_qs <-rcu_note_context_switch
Xorg-771 0dn..112 295us : preempt_count_add <-__schedule
Xorg-771 0dn..212 295us : _raw_spin_lock <-__schedule
Xorg-771 0dn..212 295us : preempt_count_add <-_raw_spin_lock
Xorg-771 0dn..312 296us : preempt_count_sub <-__schedule
Xorg-771 0dn..212 296us : update_rq_clock <-__schedule
Xorg-771 0dn..212 296us : balance_fair <-__schedule
Xorg-771 0dn..212 296us : put_prev_task_fair <-__schedule
Xorg-771 0dn..212 296us : put_prev_entity <-put_prev_task_fair
Xorg-771 0dn..212 298us : update_curr <-put_prev_entity
Xorg-771 0dn..212 298us : __update_load_avg_se <-update_load_avg
Xorg-771 0dn..212 299us : __update_load_avg_cfs_rq <-update_load_avg
Xorg-771 0dn..212 299us : put_prev_entity <-put_prev_task_fair
Xorg-771 0dn..212 299us : update_curr <-put_prev_entity
Xorg-771 0dn..212 299us : __update_load_avg_se <-update_load_avg
Xorg-771 0dn..212 300us : __update_load_avg_cfs_rq <-update_load_avg
Xorg-771 0dn..212 300us : pick_next_task_stop <-__schedule
Xorg-771 0dn..212 301us : pick_next_task_dl <-__schedule
Xorg-771 0dn..212 301us : pick_next_task_rt <-__schedule
Xorg-771 0dn..212 302us : update_rt_rq_load_avg <-pick_next_task_rt
Xorg-771 0d...212 304us : __do_set_cpus_allowed <-__schedule
Xorg-771 0d...212 306us : dequeue_task_fair <-__do_set_cpus_allowed
Xorg-771 0d...212 309us : dequeue_entity <-dequeue_task_fair
Xorg-771 0d...212 309us : update_curr <-dequeue_entity
Xorg-771 0d...212 309us : __update_load_avg_se <-update_load_avg
Xorg-771 0d...212 309us : __update_load_avg_cfs_rq <-update_load_avg
Xorg-771 0d...212 309us : clear_buddies <-dequeue_entity
Xorg-771 0d...212 310us : update_cfs_group <-dequeue_entity
Xorg-771 0d...212 310us : dequeue_entity <-dequeue_task_fair
Xorg-771 0d...212 310us : update_curr <-dequeue_entity
Xorg-771 0d...212 310us : __update_load_avg_se <-update_load_avg
Xorg-771 0d...212 310us : __update_load_avg_cfs_rq <-update_load_avg
Xorg-771 0d...212 311us : clear_buddies <-dequeue_entity
Xorg-771 0d...212 311us : update_cfs_group <-dequeue_entity
Xorg-771 0d...212 311us : reweight_entity <-dequeue_entity
Xorg-771 0d...212 311us : hrtick_update <-__do_set_cpus_allowed
Xorg-771 0d...212 311us : set_cpus_allowed_common <-__do_set_cpus_allowed
Xorg-771 0d...212 312us : enqueue_task_fair <-__schedule
Xorg-771 0d...212 312us : enqueue_entity <-enqueue_task_fair
Xorg-771 0d...212 312us : update_curr <-enqueue_entity
Xorg-771 0d...212 313us : __update_load_avg_se <-update_load_avg
Xorg-771 0d...212 313us : __update_load_avg_cfs_rq <-update_load_avg
Xorg-771 0d...212 313us : update_cfs_group <-enqueue_entity
Xorg-771 0d...212 313us : enqueue_entity <-enqueue_task_fair
Xorg-771 0d...212 314us : update_curr <-enqueue_entity
Xorg-771 0d...212 314us : __update_load_avg_se <-update_load_avg
Xorg-771 0d...212 315us : __update_load_avg_cfs_rq <-update_load_avg
Xorg-771 0d...212 315us : update_cfs_group <-enqueue_entity
Xorg-771 0d...212 316us : reweight_entity <-enqueue_entity
Xorg-771 0d...212 316us : se_is_idle <-enqueue_entity
Xorg-771 0d...212 318us : hrtick_update <-__schedule
Xorg-771 0d...212 319us : psi_task_switch <-__schedule
Xorg-771 0d...212 320us : psi_flags_change <-psi_task_switch
Xorg-771 0d...212 322us : psi_flags_change <-psi_task_switch
Xorg-771 0d...212 323us : psi_group_change <-psi_task_switch
Xorg-771 0d...212 324us : psi_group_change <-psi_task_switch
Xorg-771 0d...212 325us : psi_group_change <-psi_task_switch
Xorg-771 0d...312 326us : __schedule
Xorg-771 0d...312 326us+: 771:120:R ==> [000] 23: 98:R irq_work/0
Xorg-771 0d...312 370us : <stack trace>
=> __ftrace_trace_stack
=> probe_wakeup_sched_switch
=> __schedule
=> preempt_schedule
=> preempt_schedule_thunk
=> irq_work_queue
=> i915_request_enable_breadcrumb
=> __dma_fence_enable_signaling
=> dma_fence_add_callback
=> i915_sw_fence_await_dma_fence
=> i915_sw_fence_await_reservation
=> intel_prepare_plane_fb
=> drm_atomic_helper_prepare_planes
=> intel_atomic_commit
=> drm_atomic_helper_page_flip
=> drm_mode_page_flip_ioctl
=> drm_ioctl_kernel
=> drm_ioctl
=> __x64_sys_ioctl
=> do_syscall_64
=> entry_SYSCALL_64_after_hwframe
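To collect the numeric latency values asked for above across several captures, the figure in the trace header ("# latency: 371 us, ...") can be extracted with a small helper. This is a hypothetical one-liner, assuming dumps saved as /tmp/wakeup_rt*.txt in the standard wakeup_rt header format:

```shell
# Print the recorded worst-case latency (in us) from each wakeup_rt dump.
# The header line looks like: "# latency: 371 us, #227/227, CPU#0 | ..."
for f in /tmp/wakeup_rt*.txt; do
  awk -v file="$f" '/^# latency:/ { print file ": " $3 " us"; exit }' "$f"
done
```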