* [PATCH 0/8] Fixes for NFS/RDMA client
From: Chuck Lever @ 2014-02-03 21:01 UTC
  To: trond.myklebust; +Cc: linux-nfs, linux-rdma

---

Chuck Lever (8):
      NFS: incorrect "port=" value in /proc/mounts
      NFS: Use port 20049 by default for NFS/RDMA mounts
      NFS: advertise only supported callback netids
      NFS: Add debugging message in nfs4_callback_null()
      SUNRPC: Display tk_pid where possible
      SUNRPC: remove KERN_INFO from dprintk() call sites
      SUNRPC: Fix large reads on NFS/RDMA
      NFS: Fix READDIR oops with NFSv4 on RDMA


 fs/nfs/callback_xdr.c           |    1 
 fs/nfs/client.c                 |    1 
 fs/nfs/nfs4proc.c               |   24 +++++++++---
 fs/nfs/nfs4xdr.c                |    3 -
 fs/nfs/super.c                  |    2 +
 include/uapi/linux/nfs.h        |    1 
 net/sunrpc/xprtrdma/rpc_rdma.c  |   80 ++++++++++++++++++++++-----------------
 net/sunrpc/xprtrdma/transport.c |   29 ++++++++------
 8 files changed, 86 insertions(+), 55 deletions(-)

-- 
Chuck Lever

* [PATCH 1/8] NFS: Fix READDIR oops with NFSv4 on RDMA
From: Chuck Lever @ 2014-02-03 21:01 UTC
  To: trond.myklebust; +Cc: linux-nfs, linux-rdma

When starting the Connectathon basic tests on an NFSv4 RDMA
mount, I encountered this oops:

  BUG: unable to handle kernel NULL pointer dereference at (null)
  IP: [<ffffffff8129cc56>] memcpy+0x6/0x110
  PGD 2106cd067 PUD 20fef9067 PMD 0
  Oops: 0000 [#1] SMP

 ...

  [<ffffffffa05dc1b1>] ? xdr_inline_decode+0xb1/0x120 [sunrpc]
  [<ffffffffa071f19c>] nfs4_decode_dirent+0x4c/0x250 [nfsv4]
  [<ffffffff81178a02>] ? alloc_pages_current+0xb2/0x170
  [<ffffffffa06a1225>] nfs_readdir_page_filler+0xe5/0x2c0 [nfs]
  [<ffffffffa06a1622>] nfs_readdir_xdr_to_array+0x222/0x2e0 [nfs]
  [<ffffffffa06a1702>] nfs_readdir_filler+0x22/0x90 [nfs]
  [<ffffffff8112f975>] ? add_to_page_cache_lru+0x35/0x50
  [<ffffffff8112faee>] __read_cache_page+0x7e/0xe0
  [<ffffffffa06a16e0>] ? nfs_readdir_xdr_to_array+0x2e0/0x2e0 [nfs]
  [<ffffffffa06a16e0>] ? nfs_readdir_xdr_to_array+0x2e0/0x2e0 [nfs]
  [<ffffffff8113079c>] do_read_cache_page+0x3c/0x110
  [<ffffffff811308b9>] read_cache_page_async+0x19/0x20
  [<ffffffff811308ce>] read_cache_page+0xe/0x20
  [<ffffffffa06a1c1e>] nfs_readdir+0x14e/0x3d0 [nfs]
  [<ffffffffa071f150>] ? decode_pathconf+0x1c0/0x1c0 [nfsv4]
  [<ffffffff811a811d>] iterate_dir+0xad/0xd0
  [<ffffffff811a71ca>] ? do_fcntl+0x28a/0x370
  [<ffffffff811a82d5>] SyS_getdents+0x95/0x100
  [<ffffffff811a83e0>] ? SyS_old_readdir+0xa0/0xa0
  [<ffffffff815a7752>] system_call_fastpath+0x16/0x1b

The problem does not occur with NFSv3 over RDMA.

nfs4_decode_dirent() is confused because the xdr_buf's page vector
starts long after the first directory entry in the server's reply.

git bisect reports commit aa9c2669, "NFS: Client implementation of
Labeled-NFS," as the first bad commit.

That commit changed the decode_readdir_maxsz macro, which controls
where the generic XDR routines split incoming READDIR reply data
between the head[0] buffer and the page cache.

Security labels travel with each directory entry, so they are always
stored in the page cache, never in the head buffer.  The length of
the reply that goes in head[0] therefore should not change.
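
As an illustration, this is roughly how the READDIR encoder consumes
that macro (a condensed sketch, not the verbatim code from
fs/nfs/nfs4xdr.c): the reply's page vector begins right after the XDR
words reserved for head[0], so inflating decode_readdir_maxsz pushes
that boundary past the first directory entry.

    /* hdr.replen has accumulated the decode_*_maxsz budget (in XDR
     * words) for every op in the compound, including
     * decode_readdir_maxsz.  Reserve that much for head[0], and aim
     * the page vector at the caller's pages, where the dirents will
     * be decoded from.
     */
    xdr_inline_pages(&xdr->buf, hdr.replen << 2,
                     args->pages, args->pgbase, args->count);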

I've reverted the change to decode_readdir_maxsz.

Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=68371
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Cc: <stable@vger.kernel.org> # 3.11+
---

 fs/nfs/nfs4xdr.c |    3 +--
 1 files changed, 1 insertions(+), 2 deletions(-)

diff --git a/fs/nfs/nfs4xdr.c b/fs/nfs/nfs4xdr.c
index 5be2868..79e1d02 100644
--- a/fs/nfs/nfs4xdr.c
+++ b/fs/nfs/nfs4xdr.c
@@ -203,8 +203,7 @@ static int nfs4_stat_to_errno(int);
 				 2 + encode_verifier_maxsz + 5 + \
 				nfs4_label_maxsz)
 #define decode_readdir_maxsz	(op_decode_hdr_maxsz + \
-				 decode_verifier_maxsz + \
-				nfs4_label_maxsz + nfs4_fattr_maxsz)
+				 decode_verifier_maxsz)
 #define encode_readlink_maxsz	(op_encode_hdr_maxsz)
 #define decode_readlink_maxsz	(op_decode_hdr_maxsz + 1)
 #define encode_write_maxsz	(op_encode_hdr_maxsz + \


* [PATCH 2/8] SUNRPC: Fix large reads on NFS/RDMA
From: Chuck Lever @ 2014-02-03 21:02 UTC
  To: trond.myklebust; +Cc: linux-nfs, linux-rdma

After commit a11a2bf4, "SUNRPC: Optimise away unnecessary data moves
in xdr_align_pages" (Thu Aug 2 13:21:43 2012), READs larger than a
few hundred bytes via NFS/RDMA no longer work.  That commit exposed
a long-standing bug in rpcrdma_inline_fixup().

I reproduce this with an rsize=4096 mount using the cthon04 basic
tests.  Test 5 fails with an EIO error.

For my reproducer, the kernel log shows:

  NFS: server cheating in read reply: count 4096 > recvd 0

rpcrdma_inline_fixup() is zeroing the xdr_buf::page_len field, and
xdr_align_pages() is now returning that value to the READ XDR
decoder function.

That field is set up via xdr_inline_pages() by the READ XDR encoder
function.  As far as I can tell, it is supposed to be left alone
after that: it describes the dimensions of the reply buffer, not
the contents of that buffer.
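
For context, here is where that field is established, condensed from
xdr_inline_pages() in net/sunrpc/xdr.c (tail setup elided):

    void xdr_inline_pages(struct xdr_buf *xdr, unsigned int offset,
                          struct page **pages, unsigned int base,
                          unsigned int len)
    {
        xdr->head[0].iov_len = offset;

        xdr->pages = pages;
        xdr->page_base = base;
        xdr->page_len = len;    /* the field this patch stops clobbering */

        xdr->buflen += len;
    }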

Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=68391
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Cc: <stable@vger.kernel.org> # 3.7+
---

 net/sunrpc/xprtrdma/rpc_rdma.c |    4 +---
 1 files changed, 1 insertions(+), 3 deletions(-)

diff --git a/net/sunrpc/xprtrdma/rpc_rdma.c b/net/sunrpc/xprtrdma/rpc_rdma.c
index e03725b..96ead52 100644
--- a/net/sunrpc/xprtrdma/rpc_rdma.c
+++ b/net/sunrpc/xprtrdma/rpc_rdma.c
@@ -649,9 +649,7 @@ rpcrdma_inline_fixup(struct rpc_rqst *rqst, char *srcp, int copy_len, int pad)
 				break;
 			page_base = 0;
 		}
-		rqst->rq_rcv_buf.page_len = olen - copy_len;
-	} else
-		rqst->rq_rcv_buf.page_len = 0;
+	}
 
 	if (copy_len && rqst->rq_rcv_buf.tail[0].iov_len) {
 		curlen = copy_len;


* [PATCH 3/8] SUNRPC: remove KERN_INFO from dprintk() call sites
From: Chuck Lever @ 2014-02-03 21:02 UTC
  To: trond.myklebust; +Cc: linux-nfs, linux-rdma

dprintk() already supplies a printk level, so embedding KERN_INFO in
the format string causes its prefix to appear as literal garbage
characters when debugging is enabled.
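
Concretely (a sketch of the macros in include/linux/sunrpc/debug.h;
the exact bodies vary by kernel version):

    #define dfprintk(fac, args...)                  \
        do {                                        \
            if (ifdebug(fac))                       \
                printk(KERN_DEFAULT args);          \
        } while (0)
    #define dprintk(args...)    dfprintk(FACILITY, ## args)

    /* dprintk(KERN_INFO "...") therefore expands to
     *     printk(KERN_DEFAULT KERN_INFO "...")
     * and the KERN_INFO prefix ("\001" "6") is emitted as literal
     * bytes in the middle of the line.
     */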

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---

 net/sunrpc/xprtrdma/transport.c |   10 +++++-----
 1 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/net/sunrpc/xprtrdma/transport.c b/net/sunrpc/xprtrdma/transport.c
index 285dc08..1eb9c46 100644
--- a/net/sunrpc/xprtrdma/transport.c
+++ b/net/sunrpc/xprtrdma/transport.c
@@ -733,7 +733,7 @@ static void __exit xprt_rdma_cleanup(void)
 {
 	int rc;
 
-	dprintk(KERN_INFO "RPCRDMA Module Removed, deregister RPC RDMA transport\n");
+	dprintk("RPCRDMA Module Removed, deregister RPC RDMA transport\n");
 #ifdef RPC_DEBUG
 	if (sunrpc_table_header) {
 		unregister_sysctl_table(sunrpc_table_header);
@@ -755,14 +755,14 @@ static int __init xprt_rdma_init(void)
 	if (rc)
 		return rc;
 
-	dprintk(KERN_INFO "RPCRDMA Module Init, register RPC RDMA transport\n");
+	dprintk("RPCRDMA Module Init, register RPC RDMA transport\n");
 
-	dprintk(KERN_INFO "Defaults:\n");
-	dprintk(KERN_INFO "\tSlots %d\n"
+	dprintk("Defaults:\n");
+	dprintk("\tSlots %d\n"
 		"\tMaxInlineRead %d\n\tMaxInlineWrite %d\n",
 		xprt_rdma_slot_table_entries,
 		xprt_rdma_max_inline_read, xprt_rdma_max_inline_write);
-	dprintk(KERN_INFO "\tPadding %d\n\tMemreg %d\n",
+	dprintk("\tPadding %d\n\tMemreg %d\n",
 		xprt_rdma_inline_write_padding, xprt_rdma_memreg_strategy);
 
 #ifdef RPC_DEBUG


* [PATCH 4/8] SUNRPC: Display tk_pid where possible
From: Chuck Lever @ 2014-02-03 21:02 UTC
  To: trond.myklebust; +Cc: linux-nfs, linux-rdma

Until we get around to adding trace points to xprtrdma.ko, take the
easy road and try to polish up the existing debugging messages.
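
For illustration, this lines the xprtrdma messages up with the format
the rest of the RPC client uses (the tk_pid and byte counts below are
hypothetical):

  before:  RPC:       rpcrdma_marshal_req: too much data (512/1024) for inline
  after:   RPC:    42 rpcrdma_marshal_req: too much data (512/1024) for inline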

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---

 net/sunrpc/xprtrdma/rpc_rdma.c  |   76 +++++++++++++++++++++++----------------
 net/sunrpc/xprtrdma/transport.c |   19 ++++++----
 2 files changed, 56 insertions(+), 39 deletions(-)

diff --git a/net/sunrpc/xprtrdma/rpc_rdma.c b/net/sunrpc/xprtrdma/rpc_rdma.c
index 96ead52..4ab505b 100644
--- a/net/sunrpc/xprtrdma/rpc_rdma.c
+++ b/net/sunrpc/xprtrdma/rpc_rdma.c
@@ -216,8 +216,9 @@ rpcrdma_create_chunks(struct rpc_rqst *rqst, struct xdr_buf *target,
 			xdr_encode_hyper(
 					(__be32 *)&cur_rchunk->rc_target.rs_offset,
 					seg->mr_base);
-			dprintk("RPC:       %s: read chunk "
-				"elem %d@0x%llx:0x%x pos %u (%s)\n", __func__,
+			dprintk("RPC: %5u %s: read chunk "
+				"elem %d@0x%llx:0x%x pos %u (%s)\n",
+				rqst->rq_task->tk_pid, __func__,
 				seg->mr_len, (unsigned long long)seg->mr_base,
 				seg->mr_rkey, pos, n < nsegs ? "more" : "last");
 			cur_rchunk++;
@@ -228,8 +229,9 @@ rpcrdma_create_chunks(struct rpc_rqst *rqst, struct xdr_buf *target,
 			xdr_encode_hyper(
 					(__be32 *)&cur_wchunk->wc_target.rs_offset,
 					seg->mr_base);
-			dprintk("RPC:       %s: %s chunk "
-				"elem %d@0x%llx:0x%x (%s)\n", __func__,
+			dprintk("RPC: %5u %s: %s chunk "
+				"elem %d@0x%llx:0x%x (%s)\n",
+				rqst->rq_task->tk_pid, __func__,
 				(type == rpcrdma_replych) ? "reply" : "write",
 				seg->mr_len, (unsigned long long)seg->mr_base,
 				seg->mr_rkey, n < nsegs ? "more" : "last");
@@ -310,8 +312,9 @@ rpcrdma_inline_pullup(struct rpc_rqst *rqst, int pad)
 	if (pad < 0 || rqst->rq_slen - curlen < RPCRDMA_INLINE_PAD_THRESH)
 		pad = 0;	/* don't pad this request */
 
-	dprintk("RPC:       %s: pad %d destp 0x%p len %d hdrlen %d\n",
-		__func__, pad, destp, rqst->rq_slen, curlen);
+	dprintk("RPC: %5u %s: pad %d destp 0x%p len %d hdrlen %d\n",
+		rqst->rq_task->tk_pid, __func__,
+		pad, destp, rqst->rq_slen, curlen);
 
 	copy_len = rqst->rq_snd_buf.page_len;
 
@@ -322,8 +325,9 @@ rpcrdma_inline_pullup(struct rpc_rqst *rqst, int pad)
 				rqst->rq_snd_buf.tail[0].iov_base, curlen);
 			r_xprt->rx_stats.pullup_copy_count += curlen;
 		}
-		dprintk("RPC:       %s: tail destp 0x%p len %d\n",
-			__func__, destp + copy_len, curlen);
+		dprintk("RPC: %5u %s: tail destp 0x%p len %d\n",
+			rqst->rq_task->tk_pid, __func__,
+			destp + copy_len, curlen);
 		rqst->rq_svec[0].iov_len += curlen;
 	}
 	r_xprt->rx_stats.pullup_copy_count += copy_len;
@@ -336,8 +340,9 @@ rpcrdma_inline_pullup(struct rpc_rqst *rqst, int pad)
 		curlen = PAGE_SIZE - page_base;
 		if (curlen > copy_len)
 			curlen = copy_len;
-		dprintk("RPC:       %s: page %d destp 0x%p len %d curlen %d\n",
-			__func__, i, destp, copy_len, curlen);
+		dprintk("RPC: %5u %s: page %d destp 0x%p len %d curlen %d\n",
+			rqst->rq_task->tk_pid, __func__,
+			i, destp, copy_len, curlen);
 		srcp = kmap_atomic(ppages[i]);
 		memcpy(destp, srcp+page_base, curlen);
 		kunmap_atomic(srcp);
@@ -446,8 +451,9 @@ rpcrdma_marshal_req(struct rpc_rqst *rqst)
 	if (r_xprt->rx_ia.ri_memreg_strategy == RPCRDMA_BOUNCEBUFFERS &&
 	    (rtype != rpcrdma_noch || wtype != rpcrdma_noch)) {
 		/* forced to "pure inline"? */
-		dprintk("RPC:       %s: too much data (%d/%d) for inline\n",
-			__func__, rqst->rq_rcv_buf.len, rqst->rq_snd_buf.len);
+		dprintk("RPC: %5u %s: too much data (%d/%d) for inline\n",
+			rqst->rq_task->tk_pid, __func__,
+			rqst->rq_rcv_buf.len, rqst->rq_snd_buf.len);
 		return -1;
 	}
 
@@ -515,9 +521,10 @@ rpcrdma_marshal_req(struct rpc_rqst *rqst)
 	if (hdrlen == 0)
 		return -1;
 
-	dprintk("RPC:       %s: %s: hdrlen %zd rpclen %zd padlen %zd"
+	dprintk("RPC: %5u %s: %s: hdrlen %zd rpclen %zd padlen %zd"
 		" headerp 0x%p base 0x%p lkey 0x%x\n",
-		__func__, transfertypes[wtype], hdrlen, rpclen, padlen,
+		rqst->rq_task->tk_pid, __func__,
+		transfertypes[wtype], hdrlen, rpclen, padlen,
 		headerp, base, req->rl_iov.lkey);
 
 	/*
@@ -614,8 +621,9 @@ rpcrdma_inline_fixup(struct rpc_rqst *rqst, char *srcp, int copy_len, int pad)
 		rqst->rq_rcv_buf.head[0].iov_len = curlen;
 	}
 
-	dprintk("RPC:       %s: srcp 0x%p len %d hdrlen %d\n",
-		__func__, srcp, copy_len, curlen);
+	dprintk("RPC: %5u %s: srcp 0x%p len %d hdrlen %d\n",
+		rqst->rq_task->tk_pid, __func__,
+		srcp, copy_len, curlen);
 
 	/* Shift pointer for first receive segment only */
 	rqst->rq_rcv_buf.head[0].iov_base = srcp;
@@ -636,9 +644,10 @@ rpcrdma_inline_fixup(struct rpc_rqst *rqst, char *srcp, int copy_len, int pad)
 			curlen = PAGE_SIZE - page_base;
 			if (curlen > copy_len)
 				curlen = copy_len;
-			dprintk("RPC:       %s: page %d"
+			dprintk("RPC: %5u %s: page %d"
 				" srcp 0x%p len %d curlen %d\n",
-				__func__, i, srcp, copy_len, curlen);
+				rqst->rq_task->tk_pid, __func__,
+				i, srcp, copy_len, curlen);
 			destp = kmap_atomic(ppages[i]);
 			memcpy(destp + page_base, srcp, curlen);
 			flush_dcache_page(ppages[i]);
@@ -657,8 +666,9 @@ rpcrdma_inline_fixup(struct rpc_rqst *rqst, char *srcp, int copy_len, int pad)
 			curlen = rqst->rq_rcv_buf.tail[0].iov_len;
 		if (rqst->rq_rcv_buf.tail[0].iov_base != srcp)
 			memmove(rqst->rq_rcv_buf.tail[0].iov_base, srcp, curlen);
-		dprintk("RPC:       %s: tail srcp 0x%p len %d curlen %d\n",
-			__func__, srcp, copy_len, curlen);
+		dprintk("RPC: %5u %s: tail srcp 0x%p len %d curlen %d\n",
+			rqst->rq_task->tk_pid, __func__,
+			srcp, copy_len, curlen);
 		rqst->rq_rcv_buf.tail[0].iov_len = curlen;
 		copy_len -= curlen; ++i;
 	} else
@@ -672,9 +682,10 @@ rpcrdma_inline_fixup(struct rpc_rqst *rqst, char *srcp, int copy_len, int pad)
 	}
 
 	if (copy_len)
-		dprintk("RPC:       %s: %d bytes in"
+		dprintk("RPC: %5u %s: %d bytes in"
 			" %d extra segments (%d lost)\n",
-			__func__, olen, i, copy_len);
+			rqst->rq_task->tk_pid, __func__,
+			olen, i, copy_len);
 
 	/* TBD avoid a warning from call_decode() */
 	rqst->rq_private_buf = rqst->rq_rcv_buf;
@@ -771,15 +782,17 @@ repost:
 	req = rpcr_to_rdmar(rqst);
 	if (req->rl_reply) {
 		spin_unlock(&xprt->transport_lock);
-		dprintk("RPC:       %s: duplicate reply 0x%p to RPC "
-			"request 0x%p: xid 0x%08x\n", __func__, rep, req,
-			headerp->rm_xid);
+		dprintk("RPC: %5u %s: duplicate reply 0x%p to RPC "
+			"request 0x%p: xid 0x%08x\n",
+			rqst->rq_task->tk_pid, __func__,
+			rep, req, headerp->rm_xid);
 		goto repost;
 	}
 
-	dprintk("RPC:       %s: reply 0x%p completes request 0x%p\n"
+	dprintk("RPC: %5u %s: reply 0x%p completes request 0x%p\n"
 		"                   RPC request 0x%p xid 0x%08x\n",
-			__func__, rep, req, rqst, headerp->rm_xid);
+		rqst->rq_task->tk_pid, __func__,
+		rep, req, rqst, headerp->rm_xid);
 
 	/* from here on, the reply is no longer an orphan */
 	req->rl_reply = rep;
@@ -844,10 +857,11 @@ repost:
 
 badheader:
 	default:
-		dprintk("%s: invalid rpcrdma reply header (type %d):"
+		dprintk("RPC: %5u %s: invalid rpcrdma reply header (type %d):"
 				" chunks[012] == %d %d %d"
 				" expected chunks <= %d\n",
-				__func__, ntohl(headerp->rm_type),
+				rqst->rq_task->tk_pid, __func__,
+				ntohl(headerp->rm_type),
 				headerp->rm_body.rm_chunks[0],
 				headerp->rm_body.rm_chunks[1],
 				headerp->rm_body.rm_chunks[2],
@@ -878,8 +892,8 @@ badheader:
 		break;
 	}
 
-	dprintk("RPC:       %s: xprt_complete_rqst(0x%p, 0x%p, %d)\n",
-			__func__, xprt, rqst, status);
+	dprintk("RPC: %5u %s: xprt_complete_rqst(0x%p, 0x%p, %d)\n",
+		rqst->rq_task->tk_pid, __func__, xprt, rqst, status);
 	xprt_complete_rqst(rqst->rq_task, status);
 	spin_unlock(&xprt->transport_lock);
 }
diff --git a/net/sunrpc/xprtrdma/transport.c b/net/sunrpc/xprtrdma/transport.c
index 1eb9c46..5f31775 100644
--- a/net/sunrpc/xprtrdma/transport.c
+++ b/net/sunrpc/xprtrdma/transport.c
@@ -456,7 +456,8 @@ xprt_rdma_reserve_xprt(struct rpc_xprt *xprt, struct rpc_task *task)
 	/* == RPC_CWNDSCALE @ init, but *after* setup */
 	if (r_xprt->rx_buf.rb_cwndscale == 0UL) {
 		r_xprt->rx_buf.rb_cwndscale = xprt->cwnd;
-		dprintk("RPC:       %s: cwndscale %lu\n", __func__,
+		dprintk("RPC: %5u %s: cwndscale %lu\n",
+			task->tk_pid, __func__,
 			r_xprt->rx_buf.rb_cwndscale);
 		BUG_ON(r_xprt->rx_buf.rb_cwndscale <= 0);
 	}
@@ -482,9 +483,9 @@ xprt_rdma_allocate(struct rpc_task *task, size_t size)
 	BUG_ON(NULL == req);
 
 	if (size > req->rl_size) {
-		dprintk("RPC:       %s: size %zd too large for buffer[%zd]: "
+		dprintk("RPC: %5u %s: size %zd too large for buffer[%zd]: "
 			"prog %d vers %d proc %d\n",
-			__func__, size, req->rl_size,
+			task->tk_pid, __func__, size, req->rl_size,
 			task->tk_client->cl_prog, task->tk_client->cl_vers,
 			task->tk_msg.rpc_proc->p_proc);
 		/*
@@ -506,8 +507,9 @@ xprt_rdma_allocate(struct rpc_task *task, size_t size)
 		if (rpcx_to_rdmax(xprt)->rx_ia.ri_memreg_strategy ==
 				RPCRDMA_BOUNCEBUFFERS) {
 			/* forced to "pure inline" */
-			dprintk("RPC:       %s: too much data (%zd) for inline "
-					"(r/w max %d/%d)\n", __func__, size,
+			dprintk("RPC: %5u %s: too much data (%zd) for inline "
+					"(r/w max %d/%d)\n",
+					task->tk_pid, __func__, size,
 					rpcx_to_rdmad(xprt).inline_rsize,
 					rpcx_to_rdmad(xprt).inline_wsize);
 			size = req->rl_size;
@@ -542,7 +544,8 @@ xprt_rdma_allocate(struct rpc_task *task, size_t size)
 		req->rl_reply = NULL;
 		req = nreq;
 	}
-	dprintk("RPC:       %s: size %zd, request 0x%p\n", __func__, size, req);
+	dprintk("RPC: %5u %s: size %zd, request 0x%p\n",
+		task->tk_pid, __func__, size, req);
 out:
 	req->rl_connect_cookie = 0;	/* our reserved value */
 	return req->rl_xdr_buf;
@@ -634,8 +637,8 @@ xprt_rdma_send_request(struct rpc_task *task)
 	/* marshal the send itself */
 	if (req->rl_niovs == 0 && rpcrdma_marshal_req(rqst) != 0) {
 		r_xprt->rx_stats.failed_marshal_count++;
-		dprintk("RPC:       %s: rpcrdma_marshal_req failed\n",
-			__func__);
+		dprintk("RPC: %5u %s: rpcrdma_marshal_req failed\n",
+			task->tk_pid, __func__);
 		return -EIO;
 	}
 


* [PATCH 5/8] NFS: Add debugging message in nfs4_callback_null()
From: Chuck Lever @ 2014-02-03 21:02 UTC
  To: trond.myklebust; +Cc: linux-nfs, linux-rdma

It's useful to see whether a server's callback probe reached our
callback server.
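
With callback debugging enabled, a successful probe now leaves a line
like this in the log (the server address is hypothetical):

  NFS: received CB_NULL from server 192.0.2.15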

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---

 fs/nfs/callback_xdr.c |    1 +
 1 files changed, 1 insertions(+), 0 deletions(-)

diff --git a/fs/nfs/callback_xdr.c b/fs/nfs/callback_xdr.c
index f4ccfe6..c7e0c37 100644
--- a/fs/nfs/callback_xdr.c
+++ b/fs/nfs/callback_xdr.c
@@ -57,6 +57,7 @@ static struct callback_op callback_ops[];
 
 static __be32 nfs4_callback_null(struct svc_rqst *rqstp, void *argp, void *resp)
 {
+	dprintk("NFS: received CB_NULL from server %pIS\n", svc_addr(rqstp));
 	return htonl(NFS4_OK);
 }
 


* [PATCH 6/8] NFS: advertise only supported callback netids
From: Chuck Lever @ 2014-02-03 21:02 UTC
  To: trond.myklebust; +Cc: linux-nfs, linux-rdma

NFSv4.0 clients use the SETCLIENTID operation to inform NFS servers
how to contact a client's callback service.  If a server cannot
contact a client's callback service, that server will not delegate
to that client, which results in a performance loss.

Our client advertises "rdma" as the callback netid when the forward
channel is "rdma", but it only ever starts "tcp" and "tcp6"
callback services.

Instead of advertising the forward channel netid, advertise "tcp"
or "tcp6" as the callback netid, based on the value of the
clientaddr mount option, since those are what our client currently
supports.
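
For reference, this string populates the r_netid field of the
callback contact information that SETCLIENTID carries (XDR from
RFC 3530):

  struct clientaddr4 {
          string r_netid<>;     /* network id: "tcp" or "tcp6" */
          string r_addr<>;      /* universal address */
  };
  typedef clientaddr4 cb_client4;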

(Note: this approach is appropriate for older kernels too, but this
patch doesn't apply cleanly on kernels earlier than 3.11).

Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=69171
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Cc: <stable@vger.kernel.org> # 3.11+
---

 fs/nfs/nfs4proc.c |   24 ++++++++++++++++++------
 1 files changed, 18 insertions(+), 6 deletions(-)

diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
index 15052b8..09dd967 100644
--- a/fs/nfs/nfs4proc.c
+++ b/fs/nfs/nfs4proc.c
@@ -4883,6 +4883,20 @@ nfs4_init_uniform_client_string(const struct nfs_client *clp,
 				nodename);
 }
 
+/*
+ * nfs4_callback_up_net() starts only "tcp" and "tcp6" callback
+ * services.  Advertise one based on the address family of the
+ * clientaddr.
+ */
+static unsigned int
+nfs4_init_callback_netid(const struct nfs_client *clp, char *buf, size_t len)
+{
+	if (strchr(clp->cl_ipaddr, ':') != NULL)
+		return scnprintf(buf, len, "tcp6");
+	else
+		return scnprintf(buf, len, "tcp");
+}
+
 /**
  * nfs4_proc_setclientid - Negotiate client ID
  * @clp: state data structure
@@ -4924,12 +4938,10 @@ int nfs4_proc_setclientid(struct nfs_client *clp, u32 program,
 						setclientid.sc_name,
 						sizeof(setclientid.sc_name));
 	/* cb_client4 */
-	rcu_read_lock();
-	setclientid.sc_netid_len = scnprintf(setclientid.sc_netid,
-				sizeof(setclientid.sc_netid), "%s",
-				rpc_peeraddr2str(clp->cl_rpcclient,
-							RPC_DISPLAY_NETID));
-	rcu_read_unlock();
+	setclientid.sc_netid_len =
+				nfs4_init_callback_netid(clp,
+						setclientid.sc_netid,
+						sizeof(setclientid.sc_netid));
 	setclientid.sc_uaddr_len = scnprintf(setclientid.sc_uaddr,
 				sizeof(setclientid.sc_uaddr), "%s.%u.%u",
 				clp->cl_ipaddr, port >> 8, port & 255);


* [PATCH 7/8] NFS: Use port 20049 by default for NFS/RDMA mounts
From: Chuck Lever @ 2014-02-03 21:02 UTC
  To: trond.myklebust; +Cc: linux-nfs, linux-rdma

When providing NFSv2 and v3 service via RDMA, NFS servers MAY use an
alternative well-known port number, 20049.  They are not required to
register the service with their local rpcbind.  At least one server
implementation I am aware of does not register.

When providing NFSv4 service via RDMA, the server MUST use the
alternative port number, 20049.  As with NFSv4 on IP-based
transports, clients SHOULD connect to this port without consulting
the server's rpcbind, so we can't expect rpcbind to help here
either.

See section 6 of RFC 5667 for details.

Today, admins are required to specify "port=" when mounting an
NFS/RDMA server.  To comply with RFC 5667 and for cross-
compatibility with heterogeneous automounter maps, provide the proper
default port when mounting with "proto=rdma".

If the server does not use port 20049, the "port=" mount option is
still required to mount successfully.
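
Concretely (server name and export are placeholders):

  # before this patch, the port must be given explicitly:
  mount -t nfs -o proto=rdma,port=20049 server:/export /mnt

  # with it, proto=rdma alone implies port 20049:
  mount -t nfs -o proto=rdma server:/export /mnt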

Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=68401
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---

 fs/nfs/super.c           |    2 ++
 include/uapi/linux/nfs.h |    1 +
 2 files changed, 3 insertions(+), 0 deletions(-)

diff --git a/fs/nfs/super.c b/fs/nfs/super.c
index 910ed90..6432600 100644
--- a/fs/nfs/super.c
+++ b/fs/nfs/super.c
@@ -2157,6 +2157,8 @@ static int nfs_validate_text_mount_data(void *options,
 	} else
 		nfs_set_mount_transport_protocol(args);
 
+	if (args->nfs_server.protocol == XPRT_TRANSPORT_RDMA)
+		port = NFS_RDMA_PORT;
 	nfs_set_port(sap, &args->nfs_server.port, port);
 
 	return nfs_parse_devname(dev_name,
diff --git a/include/uapi/linux/nfs.h b/include/uapi/linux/nfs.h
index 5199a36..ccc6f8d 100644
--- a/include/uapi/linux/nfs.h
+++ b/include/uapi/linux/nfs.h
@@ -9,6 +9,7 @@
 
 #define NFS_PROGRAM	100003
 #define NFS_PORT	2049
+#define NFS_RDMA_PORT	20049
 #define NFS_MAXDATA	8192
 #define NFS_MAXPATHLEN	1024
 #define NFS_MAXNAMLEN	255


* [PATCH 8/8] NFS: incorrect "port=" value in /proc/mounts
From: Chuck Lever @ 2014-02-03 21:02 UTC
  To: trond.myklebust; +Cc: linux-nfs, linux-rdma

Mounting with "-o vers=4" shows the following in /proc/mounts:

  ... namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2, ...
                                ^^^^^^

The default port, 2049, was actually used, but was never recorded
in this mount point's nfs_server.
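
The stale value surfaces because /proc/mounts is generated from the
nfs_server attached to the superblock, along the lines of this
(assumed, condensed from the show-options code in fs/nfs/super.c):

  seq_printf(m, ",port=%u", nfss->port);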

Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=69241
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Cc: <stable@vger.kernel.org>
---

 fs/nfs/client.c |    1 +
 1 files changed, 1 insertions(+), 0 deletions(-)

diff --git a/fs/nfs/client.c b/fs/nfs/client.c
index 1d09289..2bb6dee 100644
--- a/fs/nfs/client.c
+++ b/fs/nfs/client.c
@@ -924,6 +924,7 @@ void nfs_server_copy_userdata(struct nfs_server *target, struct nfs_server *sour
 	target->flags = source->flags;
 	target->rsize = source->rsize;
 	target->wsize = source->wsize;
+	target->port = source->port;
 	target->acregmin = source->acregmin;
 	target->acregmax = source->acregmax;
 	target->acdirmin = source->acdirmin;

