Linux-RDMA Archive mirror
From: shaozhengchao <shaozhengchao@huawei.com>
To: <saeedm@nvidia.com>, <tariqt@nvidia.com>, <borisp@nvidia.com>,
	<shayd@nvidia.com>, <msanalla@nvidia.com>,
	Rahul Rameshbabu <rrameshbabu@nvidia.com>, <weizhang@nvidia.com>,
	<kliteyn@nvidia.com>, <erezsh@nvidia.com>, <igozlan@nvidia.com>
Cc: netdev <netdev@vger.kernel.org>, <linux-rdma@vger.kernel.org>
Subject: [question] when bonding with CX5 network card that supports RoCE
Date: Mon, 6 May 2024 12:46:55 +0800	[thread overview]
Message-ID: <756aaf3c-5a15-8d18-89d4-ea7380cf845d@huawei.com> (raw)


With the 5.10 kernel, the ibv_devinfo command shows two IB devices.
----------------------------------
[root@localhost ~]# lspci
91:00.0 Ethernet controller: Mellanox Technologies MT27800 Family 
[ConnectX-5]
91:00.1 Ethernet controller: Mellanox Technologies MT27800 Family
----------------------------------
[root@localhost ~]# ibv_devinfo
hca_id: mlx5_0
         transport:                      InfiniBand (0)
         fw_ver:                         16.31.1014
         node_guid:                      f41d:6b03:006f:4743
         sys_image_guid:                 f41d:6b03:006f:4743
         vendor_id:                      0x02c9
         vendor_part_id:                 4119
         hw_ver:                         0x0
         board_id:                       HUA0000000004
         phys_port_cnt:                  1
                 port:   1
                         state:                  PORT_ACTIVE (4)
                         max_mtu:                4096 (5)
                         active_mtu:             1024 (3)
                         sm_lid:                 0
                         port_lid:               0
                         port_lmc:               0x00
                         link_layer:             Ethernet

hca_id: mlx5_1
         transport:                      InfiniBand (0)
         fw_ver:                         16.31.1014
         node_guid:                      f41d:6b03:006f:4744
         sys_image_guid:                 f41d:6b03:006f:4743
         vendor_id:                      0x02c9
         vendor_part_id:                 4119
         hw_ver:                         0x0
         board_id:                       HUA0000000004
         phys_port_cnt:                  1
                 port:   1
                         state:                  PORT_ACTIVE (4)
                         max_mtu:                4096 (5)
                         active_mtu:             1024 (3)
                         sm_lid:                 0
                         port_lid:               0
                         port_lmc:               0x00
                         link_layer:             Ethernet
----------------------------------
But after the two network ports are bonded, only one IB device is
available, and only PF0 can be used.
[root@localhost shaozhengchao]# ibv_devinfo
hca_id: mlx5_bond_0
         transport:                      InfiniBand (0)
         fw_ver:                         16.31.1014
         node_guid:                      f41d:6b03:006f:4743
         sys_image_guid:                 f41d:6b03:006f:4743
         vendor_id:                      0x02c9
         vendor_part_id:                 4119
         hw_ver:                         0x0
         board_id:                       HUA0000000004
         phys_port_cnt:                  1
                 port:   1
                         state:                  PORT_ACTIVE (4)
                         max_mtu:                4096 (5)
                         active_mtu:             1024 (3)
                         sm_lid:                 0
                         port_lid:               0
                         port_lmc:               0x00
                         link_layer:             Ethernet

The current Linux mainline driver behaves the same way.

I found the comment ("If bonded, we do not add an IB device for PF1.")
in the mlx5_lag_intf_add function of the 5.10 branch driver code.
Does this mean that when the two ports of the same NIC are bonded, only
PF0 supports an IB device? Are there any other constraints when enabling
bonding with CX5?
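For context, a minimal sketch of the kind of bonding setup that produces
this behavior (the interface names eth0/eth1 and the active-backup mode
are assumptions, not taken from the original report):

```shell
# Hypothetical reproduction sketch: enslave the two CX5 ports
# (interface names assumed) into one bond, then re-check RDMA devices.
# Requires root and ConnectX hardware; some mlx5 driver versions also
# require SR-IOV to be disabled on both PFs for RoCE LAG to activate.
modprobe bonding
ip link add bond0 type bond mode active-backup miimon 100
ip link set eth0 down
ip link set eth1 down
ip link set eth0 master bond0
ip link set eth1 master bond0
ip link set bond0 up
# After bonding, the mlx5 driver merges the two PFs into a single
# RDMA LAG device (mlx5_bond_0), which is what ibv_devinfo then reports:
ibv_devinfo
```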

Thank you
Zhengchao Shao


Thread overview: 7+ messages
2024-05-06  4:46 shaozhengchao [this message]
2024-05-06  8:26 ` [question] when bonding with CX5 network card that supports RoCE Zhu Yanjun
2024-05-06 10:45   ` shaozhengchao
2024-05-06 10:58     ` Zhu Yanjun
2024-05-06 11:33       ` shaozhengchao
2024-05-06 12:27         ` Zhu Yanjun
2024-05-07  1:23           ` shaozhengchao
