From: bugzilla@dpdk.org
To: dev@dpdk.org
Subject: [DPDK/other Bug 1417] RTE_FLOW not matching UDP ports in head fragments
Date: Thu, 18 Apr 2024 15:28:25 +0000
List-Id: DPDK patches and discussions

https://bugs.dpdk.org/show_bug.cgi?id=1417
           Bug ID: 1417
          Summary: RTE_FLOW not matching UDP ports in head fragments
          Product: DPDK
          Version: 24.03
         Hardware: All
               OS: All
           Status: UNCONFIRMED
         Severity: normal
         Priority: Normal
        Component: other
         Assignee: dev@dpdk.org
         Reporter: tony.hart@domainhart.com
 Target Milestone: ---

I'm using RTE_FLOW (with the asynchronous API) to match UDP packets
coming from a particular source port to a particular destination
address. This is on a Nvidia/Mellanox CX6. This works great while the
packets are not fragmented; however, when a packet is fragmented it
does not match. I was expecting it to still match the head fragment,
since that still contains the UDP header and hence the port (obviously
I'm not expecting it to match the tail fragment).
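
For reference, this is roughly what the group 3 match rule looks like
through the C API. It's only a minimal sketch (port/queue ids
hardcoded, error handling omitted, and 'tbl' is a placeholder for the
template-table handle created beforehand), not the exact code I'm
running:

#include <rte_flow.h>

/* Enqueue creation of the "ipv4 dst 2.2.3.1 / udp src 389" rule on
 * flow queue 0 of port 0. 'tbl' corresponds to table_id 8 in the
 * script below; index 0 selects that table's pattern and actions
 * templates. */
static struct rte_flow *
create_udp_rule(struct rte_flow_template_table *tbl)
{
	struct rte_flow_item_ipv4 ip_spec = {
		.hdr.dst_addr = RTE_BE32(RTE_IPV4(2, 2, 3, 1)),
	};
	struct rte_flow_item_udp udp_spec = {
		.hdr.src_port = RTE_BE16(389),
	};
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4, .spec = &ip_spec },
		{ .type = RTE_FLOW_ITEM_TYPE_UDP,  .spec = &udp_spec },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_COUNT },
		{ .type = RTE_FLOW_ACTION_TYPE_DROP },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	const struct rte_flow_op_attr op_attr = { .postpone = 0 };
	struct rte_flow_error err;

	return rte_flow_async_create(0 /* port */, 0 /* queue */, &op_attr,
				     tbl, pattern, 0, actions, 0,
				     NULL /* user_data */, &err);
}

(The enqueued operation still has to be flushed with rte_flow_push()
and its completion drained with rte_flow_pull().)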

Is this intended behavior? (Looking at the rte_mbuf_ptype code, it
appears that it is.)
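
For what it's worth, the software packet-type parser draws the same
line: any IPv4 header with the More-Fragments flag set or a nonzero
fragment offset is classified as RTE_PTYPE_L4_FRAG rather than
RTE_PTYPE_L4_UDP, so even the head fragment loses its L4 type. A
minimal sketch of that test, mirroring
rte_ipv4_frag_pkt_is_fragmented() from rte_ip_frag.h:

#include <rte_ip.h>

/* An IPv4 packet counts as a fragment if More-Fragments is set (true
 * for the head fragment) or the fragment offset is nonzero (true for
 * every later fragment). The head fragment still carries the UDP
 * header, yet it satisfies this test and so never gets
 * RTE_PTYPE_L4_UDP. */
static inline int
ipv4_is_fragment(const struct rte_ipv4_hdr *hdr)
{
	uint16_t flag_offset = rte_be_to_cpu_16(hdr->fragment_offset);

	return (flag_offset & RTE_IPV4_HDR_MF_FLAG) != 0 ||
	       (flag_offset & RTE_IPV4_HDR_OFFSET_MASK) != 0;
}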

In the script below I see the counter for the first entry in group 3
increment in the non-fragmented case, but when the length of the
(otherwise) same packet is one byte longer than the MTU, the counter
of the second group 3 entry is incremented instead.
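
(For concreteness, assuming a 1500-byte MTU and a 20-byte IPv4 header:
a packet one byte over the MTU splits into a head fragment carrying
the first 1480 payload bytes, with MF=1 and offset 0, i.e. a
fragment_offset field of 0x2000, and a tail fragment with MF=0 and
offset 1480/8 = 185, i.e. 0x00b9. If the hardware refuses to match udp
on anything it classifies as a fragment, that would explain why only
the second group 3 counter increments.)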

Thanks for any insights,
Tony

FYI, this is using the v24.03-rc2 DPDK release.

port stop all
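# pre-allocate async flow-engine resources (1 flow queue of depth 64,
# 1000 counters); this must be done while the port is stopped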
flow configure 0 queues_number 1 queues_size 64 counters_number 1000
port start all

# PATTERN TEMPLATES
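# template 0 matches any packet; template 1 matches on IPv4 dst
# address and UDP src port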
flow pattern_template 0 create pattern_template_id 0 ingress template end
flow pattern_template 0 create pattern_template_id 1 ingress template
eth / ipv4 dst mask 0xffffffff / udp src mask 0xffff / end

# ACTION TEMPLATES
flow actions_template 0 create actions_template_id 0 template jump /
end mask jump / end
flow actions_template 0 create actions_template_id 1 template count /
drop / end mask count / drop / end

# TEMPLATE TABLES
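# table 0 backs group 0; tables 8 and 9 both back group 3, where the
# priority-1 table is checked before the priority-99 catch-all (lower
# value = higher priority)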
flow template_table 0 create table_id 0 group 0 priority 0 ingress
rules_number 1 pattern_template 0 actions_template 0
flow template_table 0 create table_id 8 group 3 priority 1 ingress
rules_number 100 pattern_template 1 actions_template 1
flow template_table 0 create table_id 9 group 3 priority 99 ingress
rules_number 1 pattern_template 0 actions_template 1

# GROUP 0
flow queue 0 create 0 template_table 0 pattern_template 0
actions_template 0 postpone no pattern end actions jump group 3 / end

# GROUP 3:
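# entry 1 (table 8) counts+drops UDP src port 389 to 2.2.3.1;
# entry 2 (table 9) counts+drops anything else reaching group 3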
flow queue 0 create 0 template_table 8 pattern_template 0
actions_template 0 postpone no pattern eth / ipv4 dst spec 2.2.3.1 /
udp src spec 389 / end actions count / drop / end
flow queue 0 create 0 template_table 9 pattern_template 0
actions_template 0 postpone no pattern end actions count / drop / end