From: Arnd Bergmann <arnd@kernel.org>
To: Nikolai Zhubr <zhubr.2@gmail.com>
Cc: netdev <netdev@vger.kernel.org>
Subject: Re: Realtek 8139 problem on 486.
Date: Sun, 13 Jun 2021 00:41:37 +0200
Message-ID: <CAK8P3a3vnnaYf6+v9N1WmH0N7uG55DrC=Hy71mYi4Kt+FXBRuw@mail.gmail.com>
In-Reply-To: <60C4F187.3050808@gmail.com>

On Sat, Jun 12, 2021 at 7:40 PM Nikolai Zhubr <zhubr.2@gmail.com> wrote:
> 09.06.2021 10:09, Arnd Bergmann:
> [...]
> > If it's only a bit slower, that is not surprising; I'd expect it to
> > use fewer CPU cycles though, as it avoids the expensive polling.
> >
> > There are a couple of things you could do to make it faster without reducing
> > reliability, but I wouldn't recommend major surgery on this driver, I was just
> > going for the simplest change that would make it work right with broken
> > IRQ settings.
> >
> > You could play around a little with the order in which you process
> > events: doing RX first would help free up buffer space in the card
> > earlier; alternating between TX and RX one buffer at a time, or
> > processing both in a loop until the budget runs out, would also help.
>
> I've modified your patch so as to quickly test several approaches within
> a single file by just switching some conditional defines.
> My diff against 4.14 is here:
> https://pastebin.com/mgpLPciE
>
> The tests were performed using a simple shell script:
> https://pastebin.com/Vfr8JC3X
>
> Each cell in the resulting table shows:
> - tcp sender/receiver (Mbit/s) as reported by iperf3 (total)
> - udp sender/receiver (Mbit/s) as reported by iperf3 (total)
> - accumulated cpu utilization during the tcp+udp test.
>
> The first line in the table essentially corresponds to a standard
> unmodified kernel. The second line corresponds to your initially
> proposed approach.
>
> All tests were run with the same physical 8139D card against the
> same server.
>
> (The table is best viewed in a monospace font)
> +-------------------+-------------+-----------+-----------+
> | #Defines          ; i486dx2/66  ; Pentium3/ ; PentiumE/ |
> |                   ; (Edge IRQ)  ;  1200     ; Dual 2600 |
> +-------------------+-------------+-----------+-----------+
> | TX_WORK_IN_IRQ 1  ;             ; tcp 86/86 ; tcp 94/94 |
> | TX_WORK_IN_POLL 0 ;  (fails)    ; udp 96/96 ; udp 96/96 |
> | LOOP_IN_IRQ 0     ;             ; cpu 59%   ; cpu 15%   |
> | LOOP_IN_POLL 0    ;             ;           ;           |
> +-------------------+-------------+-----------+-----------+
> | TX_WORK_IN_IRQ 0  ; tcp 9.4/9.1 ; tcp 88/88 ; tcp 95/94 |
> | TX_WORK_IN_POLL 1 ; udp 5.5/5.5 ; udp 96/96 ; udp 96/96 |
> | LOOP_IN_IRQ 0     ; cpu 98%     ; cpu 55%   ; cpu 19%   |
> | LOOP_IN_POLL 0    ;             ;           ;           |
> +-------------------+-------------+-----------+-----------+
> | TX_WORK_IN_IRQ 0  ; tcp 9.0/8.7 ; tcp 87/87 ; tcp 95/94 |
> | TX_WORK_IN_POLL 1 ; udp 5.8/5.8 ; udp 96/96 ; udp 96/96 |
> | LOOP_IN_IRQ 0     ; cpu 98%     ; cpu 58%   ; cpu 20%   |
> | LOOP_IN_POLL 1    ;             ;           ;           |
> +-------------------+-------------+-----------+-----------+
> | TX_WORK_IN_IRQ 1  ; tcp 7.3/7.3 ; tcp 87/86 ; tcp 94/94 |
> | TX_WORK_IN_POLL 0 ; udp 6.2/6.2 ; udp 96/96 ; udp 96/96 |
> | LOOP_IN_IRQ 1     ; cpu 99%     ; cpu 57%   ; cpu 17%   |
> | LOOP_IN_POLL 0    ;             ;           ;           |
> +-------------------+-------------+-----------+-----------+
> | TX_WORK_IN_IRQ 1  ; tcp 6.5/6.5 ; tcp 88/88 ; tcp 94/94 |
> | TX_WORK_IN_POLL 1 ; udp 6.1/6.1 ; udp 96/96 ; udp 96/96 |
> | LOOP_IN_IRQ 1     ; cpu 99%     ; cpu 55%   ; cpu 16%   |
> | LOOP_IN_POLL 1    ;             ;           ;           |
> +-------------------+-------------+-----------+-----------+
> | TX_WORK_IN_IRQ 1  ; tcp 5.7/5.7 ; tcp 87/87 ; tcp 95/94 |
> | TX_WORK_IN_POLL 1 ; udp 6.1/6.1 ; udp 96/96 ; udp 96/96 |
> | LOOP_IN_IRQ 1     ; cpu 98%     ; cpu 56%   ; cpu 15%   |
> | LOOP_IN_POLL 0    ;             ;           ;           |
> +-------------------+-------------+-----------+-----------+
>
> Hopefully this helps to choose the most beneficial approach.

I think several variants can just be eliminated without looking
at the numbers:

- doing the TX work in the irq handler (with the loop) but not in
  the poll function is incorrect with edge interrupts, as it has
  the same race as before; it just becomes much harder to hit

- doing the TX work in both the irq handler and the poll function
  is probably not helpful, as you just do the work twice

- wrapping the TX cleanup loop in a second loop is not helpful
  if you don't do anything interesting after finding that all
  TX frames are done.

For best performance I would suggest restructuring the poll
function from your current

  while (boguscnt--) {
       handle_rare_events();
       while (tx_pending())
             handle_one_tx();
  }
  while (rx_pending() && work_done < budget)
         work_done += handle_one_rx();

to something like

   handle_rare_events();
   do {
      if (rx_pending())
          work_done += handle_one_rx();
      if (tx_pending())
          work_done += handle_one_tx();
   } while ((tx_pending() || rx_pending()) && work_done < budget);

This way, you catch the most events in a single poll call when new
work comes in while you are still processing the pending events.
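
To make this concrete, here is a rough sketch of what the poll
function could look like with that structure, against a 4.14-ish
8139too. The rx_pending()/tx_pending() checks and the handle_one_*()
helpers are the same made-up names as in the pseudocode above, and
enable_chip_interrupts() stands in for whatever the driver does to
unmask the chip; none of these exist under these names:

  static int rtl8139_poll(struct napi_struct *napi, int budget)
  {
        struct rtl8139_private *tp =
                container_of(napi, struct rtl8139_private, napi);
        int work_done = 0;

        handle_rare_events(tp);         /* link changes, errors, ... */

        do {
                if (rx_pending(tp))     /* pass one frame up the stack */
                        work_done += handle_one_rx(tp);
                if (tx_pending(tp))     /* reap one completed tx buffer */
                        work_done += handle_one_tx(tp);
        } while ((tx_pending(tp) || rx_pending(tp)) &&
                 work_done < budget);

        /* only unmask interrupts once both directions are drained */
        if (work_done < budget && napi_complete_done(napi, work_done))
                enable_chip_interrupts(tp);

        return work_done;
  }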

Or, to keep the change simpler, keep the inner loops in the tx
and rx processing: do all rx events before moving on to
processing all tx events, but then loop back to try both again
until either the budget runs out or no further events are
pending.
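
With the same made-up helpers, that batched variant might look
roughly like:

        int prev;

        handle_rare_events(tp);
        do {
                prev = work_done;
                /* drain the rx ring first to free buffer space */
                while (rx_pending(tp) && work_done < budget)
                        work_done += handle_one_rx(tp);
                /* then reap all completed tx buffers */
                while (tx_pending(tp) && work_done < budget)
                        work_done += handle_one_tx(tp);
                /* retry both until a full pass makes no progress */
        } while (work_done > prev && work_done < budget);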

      Arnd
