* hung system with cifsd, cannot reduce timeout
From: Patrick Noffke @ 2013-12-05 21:03 UTC
  To: linux-cifs-u79uwXL29TY76Z2rM5mHXA

Hi,

I am having problems similar to those described here:
http://article.gmane.org/gmane.linux.kernel.cifs/9024

My system is an embedded Linux, kernel version 3.9.0, and the CIFS 
server is Windows Server 2003, SP1.  I can somewhat reliably reproduce a 
hang where the system stalls for about three minutes and then recovers.  
I would like to reduce this time, if possible, to recover more quickly 
from link failures or other conditions that cause the server to stop 
responding.

I tried changing SMB_ECHO_INTERVAL to 5 seconds (5 * HZ).  This appears 
to work for part of cifs, but I think another socket is still open and 
doesn't disconnect until about two minutes later, when the server sends 
a RST.
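
For reference, here is the change I made (in my 3.9 tree the macro is 
defined in fs/cifs/cifsglob.h, with a default of one minute):

	/* was: #define SMB_ECHO_INTERVAL (60 * HZ) */
	#define SMB_ECHO_INTERVAL (5 * HZ)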

Here is my sequence of actions:

1. Start lengthy process that accesses files on CIFS mount.
2. Pull Ethernet cable.
3. Wait about 20 seconds (with SMB_ECHO_INTERVAL at 5 seconds), then 
reconnect cable.  Process resumes almost immediately accessing files on 
CIFS mount.
4. Pull Ethernet cable again.
5. Wait about 20 seconds, then reconnect cable.  The process is hung.  
ps is also hung, printing every entry before, but not including, the 
hung process.  cifsd was reported in state DW (its PID comes before my 
process's, so it was printed in the ps output).
6. About 165 seconds later, the hung process resumes, and the system is 
functioning normally.

I have a wireshark capture for the above sequence.  I will try to 
describe the packet sequence corresponding to each of the above steps 
(except 1).

2. Last packet successfully transmitted is from server to client, which 
is a TCP segment of a reassembled PDU.  There are several 
retransmissions of packets from the server (when I pull the plug, I can 
still see packets from the server, since it is running on the same 
machine as wireshark).
3. Client sends new SYN packet (source port 43480), followed by 
Negotiate Protocol Request, followed by session setup and so forth (the 
server is responding as appropriate for client requests).
4. Last packet successfully transmitted is from server to client, and is 
a Read AndX Response, FID:  0x800f.  Again, there are several 
retransmissions from server to client.
5. Client sends new SYN packet (source port 43492), followed by 
Negotiate Protocol Request.
  - Server replies with a Negotiate Protocol Response.
  - Then nothing for about 9 seconds.
  - Client sends Echo Request *on previous TCP connection* (the one that 
had retransmissions in step 4, source port 43480).
  - Server sends RST for previous TCP connection (dest port 43480).
  - Then nothing for 111 sec, when server sends TCP keep-alive (this is 
also 120 seconds after Negotiate sequence, which is probably the 
configured TCP keep-alive interval).
  - Client ACKs keep-alive immediately.
  - 35 seconds later, server sends RST for new connection (dest port 43492).
  - Client immediately sends new SYN packet.
6. 10 seconds after the last SYN packet, the client sends a Negotiate 
Protocol Request, and normal communication resumes.

I do see klog messages that the CIFS server has not responded in 10 
seconds (twice SMB_ECHO_INTERVAL, as expected), and that it is 
reconnecting.  I believe these correspond to the first two SYN packets 
above, but it is hard to correlate those timestamps to wireshark, so I 
can't be sure.  But the last such log occurred 177 seconds before my 
process resumed working, which makes me think the logs correlate to the 
first two SYN packets.

Why would the Echo Request go out on the old connection after a new 
connection has been opened?  And why are there no Echo Requests on the 
new connection?

I did check the cifsd stack (cat /proc/<cifsd PID>/stack) for previous 
tests, and it was waiting on a recv, and its state was SW (not DW).  
Unfortunately, I did not get the stack for this test.

Please let me know if there's any more information I can provide.

Also, is reducing SMB_ECHO_INTERVAL expected to reduce the recovery time 
under such failures?  If so, should the total time to reconnect to the 
server be 2 * SMB_ECHO_INTERVAL, or are there other timeouts on top of this?
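
(For what it's worth, the 2 * SMB_ECHO_INTERVAL figure seems to come from 
the unresponsiveness check in fs/cifs/connect.c.  Paraphrasing from my 3.9 
tree, so the details may be slightly off:

	static bool server_unresponsive(struct TCP_Server_Info *server)
	{
		/* give the server two echo intervals to answer before
		   declaring it dead and forcing a reconnect */
		if (server->tcpStatus == CifsGood &&
		    time_after(jiffies, server->lstrp + 2 * SMB_ECHO_INTERVAL)) {
			cERROR(1, "Server %s has not responded in %d seconds. "
				  "Reconnecting...", server->hostname,
				  (2 * SMB_ECHO_INTERVAL / HZ));
			cifs_reconnect(server);
			wake_up(&server->response_q);
			return true;
		}
		return false;
	}

So detection alone should take about 2 * SMB_ECHO_INTERVAL, but the total 
recovery time also depends on how long cifs_reconnect() itself takes.)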

Best regards,
Patrick


* Re: hung system with cifsd, cannot reduce timeout
From: Patrick Noffke @ 2013-12-05 21:36 UTC
  To: linux-cifs-u79uwXL29TY76Z2rM5mHXA

On 12/05/2013 03:03 PM, Patrick Noffke wrote:
> Hi,
>
> I did check the cifsd stack (cat /proc/<cifsd PID>/stack) for previous
> tests, and it was waiting on a recv, and its state was SW (not DW).
> Unfortunately, I did not get the stack for this test.
>
I just repeated this test, and this time cifsd was in the SW state.

The stack was as follows:
sk_wait_data
tcp_recvmsg
inet_recvmsg
sock_recvmsg
kernel_recvmsg
cifs_readv_from_socket
cifs_read_from_socket
cifs_readv_discard
cifs_readv_receive
cifs_demultiplex_thread
kthread
ret_from_fork

For this test, my process and ps were hung after the first time pulling 
the cable (I hadn't rebooted from my earlier test, but I think the CIFS 
connection had disconnected due to inactivity).

As before, an Echo Request was sent on a previous connection after the 
SYN and Negotiate had been sent on a new connection.  The server sent a 
RST for the old connection right after the Echo Request, and 115 seconds 
later sent a RST for the new connection.  Another new connection was 
then made, the process resumed, and ps completed.

Patrick


* Re: hung system with cifsd, cannot reduce timeout
From: Patrick Noffke @ 2013-12-06 15:50 UTC
  To: linux-cifs-u79uwXL29TY76Z2rM5mHXA

I have a little more info from yet another test with the same symptoms.  I 
did:

echo 1 > /proc/fs/cifs/cifsFYI

and ran the test again.  I had klogd running, so the cifs debug logs went to 
the syslog.  I've attached the syslogs from around the time I pulled the 
cable and reattached it, up until the system recovered.

A few notes about these logs:

22:29:18:  Ethernet cable is disconnected.
22:29:22:  Echo request is sent
22:29:27:  Second echo request is sent
22:29:31:  Reconnect due to no echo response.  Socket is created at this time.
22:29:31:  (Last log for this timestamp) Ethernet cable is reconnected.  I 
believe the echo request must have been queued up for sending, which is why I 
see it in wireshark after reconnecting the cable (even though the echo 
requests were logged prior to the reconnection).  Server also sends RST for 
first connection at this time.
22:29:57:  Last cifs log until server sends RST.  Last message is 
"fs/cifs/inode.c: Getting info on"
22:32:09:  Server sends RST on second connection, client reconnects, and the 
system recovers.

It seems like the "Getting info on" log message is missing a filename.  Would 
this explain why it's hung until the server sends a RST?
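
For reference, the message appears to come from cifs_get_inode_info() in 
fs/cifs/inode.c (quoting from memory, so the exact context may differ):

	cFYI(1, "Getting info on %s", full_path);

If full_path is an empty string, nothing prints after "on", rather than a 
filename being lost somewhere.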

There are also logs like the following before the cable is reattached:
fs/cifs/misc.c: Null buffer passed to cifs_small_buf_release

Best regards,
Patrick

[-- Attachment #2: cifs-lockup-syslogs.gz --]
[-- Type: application/gzip, Size: 3467 bytes --]


* Re: hung system with cifsd, cannot reduce timeout
From: Patrick Noffke @ 2013-12-10 15:15 UTC
  To: linux-cifs-u79uwXL29TY76Z2rM5mHXA

On Friday, December 06, 2013 09:50:27 AM Patrick Noffke wrote:
> 
> 
> It seems like the "Getting info on" log message is missing a filename.  Would 
> this explain why it's hung until the server sends a RST?
> 
> There are also logs like the following before the cable is reattached:
> fs/cifs/misc.c: Null buffer passed to cifs_small_buf_release
> 

In inode.c, in the cifs_revalidate_dentry_attr function, if I add a check for an empty full_path (returning -ENOENT if it is empty), the system recovers much faster, but my process gets an error opening the file it was trying to open when the cable was disconnected.  I also noticed subsequent cifs logs from calls to cifs_revalidate_dentry_attr with the last file path, but with a NULL inode for several such calls.

I added this right after the check for a NULL full_path:

	if (0 == strcmp(full_path, "")) {
		cFYI(1, "Got empty full_path");
		rc = -ENOENT;
		goto out;
	}

I don't propose this as a patch -- I'm just curious if I'm on the right track.  Could I return something else, or is there some other way to help the file system recover faster?

Thanks,
Patrick


* Re: hung system with cifsd, cannot reduce timeout
From: Jeff Layton @ 2013-12-14 11:50 UTC
  To: Patrick Noffke; +Cc: linux-cifs-u79uwXL29TY76Z2rM5mHXA

On Tue, 10 Dec 2013 09:15:41 -0600
Patrick Noffke <patrick-0qqNeQ6W4hOzQB+pC5nmwQ@public.gmane.org> wrote:

> On Friday, December 06, 2013 09:50:27 AM Patrick Noffke wrote:
> > 
> > 
> > It seems like the "Getting info on" log message is missing a filename.  Would 
> > this explain why it's hung until the server sends a RST?
> > 
> > There are also logs like the following before the cable is reattached:
> > fs/cifs/misc.c: Null buffer passed to cifs_small_buf_release
> > 
> 
> In inode.c, in the cifs_revalidate_dentry_attr function, if I add a check for an empty full_path (returning -ENOENT if empty), then the system recovers much faster, but my process gets an error opening the file it was trying to open when the cable was disconnected.  I noticed cifs logs following trying to call cifs_revalidate_dentry_attr with the last file path, but there is a NULL inode for several such calls.
> 
> I added this right after the check for a NULL full_path:
> 
> 	if (0 == strcmp(full_path, "")) {
> 		cFYI(1, "Got empty full_path");
> 		rc = -ENOENT;
> 		goto out;
> 	}
> 
> I don't propose this as a patch -- I'm just curious if I'm on the right track.  Could I return something else, or is there some other way to help the file system recover faster?
> 

Complex problem...

I don't think you're on the right track there. An empty path is likely
just an indicator that it's trying to fetch info from the root of the
share. I think that's just happenstance. cifs.ko does a lot of dentry
revalidation by default, and that's probably where the syscalls are
when this occurs.

The problem you're describing seems more tied up with how the transport
layer and socket handling are working. I'd suggest focusing your
efforts there.

Basically, cifsd will just try to reconnect the socket if the
server->lstrp value is too far in the past. That's just a timestamp
that tells us when we last got a response from the server. The async
echoes are kicked off just to keep that value current. That code needs
some work.

One thing that looks wrong to me immediately is that lstrp is reset to
jiffies before we reconnect to the server. If the reconnect takes a
while, then that value could easily be too far in the past by the time
the connect succeeds. I'll attach an untested patch. It probably won't
make much difference for this test, but looks like a reasonable thing
to fix while we're in here.

As to why the client would try to send an echo on the old socket after
connecting the new one, I'm not sure. I'd suspect that has more to do
with the socket layer. Just because we shut down the socket doesn't
mean we purge the data that we attempted to send on the old one.

Note too that there are a number of hardcoded timeouts in this code
(e.g., the msleep(3000) in cifs_reconnect, and the socket timeouts).
Reducing SMB_ECHO_INTERVAL to 5s is likely to mean that you end up
overlapping with some of them. That may give you odd results.
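
For illustration, the reconnect retry loop looks roughly like this (3.9-era
code from memory; the patch below touches this same loop):

	do {
		try_to_freeze();
		mutex_lock(&server->srv_mutex);
		rc = generic_ip_connect(server);
		if (rc) {
			cFYI(1, "reconnect error %d", rc);
			/* hardcoded 3s delay between connect attempts */
			msleep(3000);
		} else {
			atomic_inc(&tcpSesReconnectCount);
			spin_lock(&GlobalMid_Lock);
			if (server->tcpStatus != CifsExiting)
				server->tcpStatus = CifsNeedNegotiate;
			spin_unlock(&GlobalMid_Lock);
		}
		mutex_unlock(&server->srv_mutex);
	} while (server->tcpStatus == CifsNeedReconnect);

With a 5s echo interval, a single failed connect attempt already eats more
than half of it.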

Also, 3.9.0 is pretty old at this point. I'd suggest working with
something closer to mainline while troubleshooting this, so you're not
treading over problems that have already been fixed.

Patch for lstrp problem follows. It compiles, but is otherwise untested.

-----------------------[snip]--------------------

[PATCH] cifs: reset server->lstrp after reconnecting

Currently we reset it before trying to reconnect. If the reconnect
attempt takes a long time, then this value could be too far in the past
by the time the connect succeeds. Reset it only after the connect
succeeds.

Cc: Patrick Noffke <patrick-0qqNeQ6W4hOzQB+pC5nmwQ@public.gmane.org>
Signed-off-by: Jeff Layton <jlayton-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
---
 fs/cifs/connect.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
index 8813ff7..6e83e8e 100644
--- a/fs/cifs/connect.c
+++ b/fs/cifs/connect.c
@@ -353,7 +353,6 @@ cifs_reconnect(struct TCP_Server_Info *server)
 	kfree(server->session_key.response);
 	server->session_key.response = NULL;
 	server->session_key.len = 0;
-	server->lstrp = jiffies;
 	mutex_unlock(&server->srv_mutex);
 
 	/* mark submitted MIDs for retry and issue callback */
@@ -390,6 +389,7 @@ cifs_reconnect(struct TCP_Server_Info *server)
 			if (server->tcpStatus != CifsExiting)
 				server->tcpStatus = CifsNeedNegotiate;
 			spin_unlock(&GlobalMid_Lock);
+			server->lstrp = jiffies;
 		}
 		mutex_unlock(&server->srv_mutex);
 	} while (server->tcpStatus == CifsNeedReconnect);
-- 
1.8.4.2


* Re: hung system with cifsd, cannot reduce timeout
From: ISHIKAWA,Chiaki @ 2013-12-31 22:43 UTC
  To: Patrick Noffke, linux-cifs-u79uwXL29TY76Z2rM5mHXA

Hi,

I am sorry that I could not follow up on this earlier.

Thank you for reporting the problem.  So I am not the only one seeing
the unexpectedly lengthy timeout, and the failure to recover gracefully
in the case of network errors.

I, too, have been wondering why the recovery attempted by the CIFS code
happens so much later (2+ min) than the timeout value (15 sec or so)
would suggest.  Such a long timeout before recovery is not good at all
for embedded devices, such as home AV devices that mount CIFS for file
storage.  To top it off, getting stuck in an unrecoverable state is not
nice at all.

A follow-up by Jeff Layton,
http://article.gmane.org/gmane.linux.kernel.cifs/9261
suggests that there may be problems in the lower-level code, and
possible disagreements among the various timeout values.

It would be interesting and helpful if we could reduce the 2+ min
timeout to a much lower value.

If anything comes up, please post tidbits.
I am interested in fixing this,
er, more like having it fixed by someone :-)

I would be happy to test new patches, etc.

BTW, I am using wireshark, and wonder if there is
any tool that is geared toward CIFS debugging.

Oh, well, wireshark may be good enough.
Maybe my question is more about kernel driver debugging:
we seem to have a problem with the kernel driver here.
We may want to find out where the timeouts are triggered,
in what order, and why, in relation to external events such as
network errors.

Is gdb good for debugging a kernel driver under Linux,
or do we have to use a special kernel debugger?

I am familiar with user-space debugging, and did a lot of low-level
Linux SCSI host adapter driver debugging more than 10 years ago, so I
hope I know the basics.  But back then, most of the debugging was done
by dumping information from kernel logging code, which requires an
edit-compile-test cycle.  (I never tried using gdb on the host adapter
driver.)

I wonder if there is a modern tool that makes driver debugging easier:
e.g., tracing the stack or dumping variable values without requiring a
recompile.

Back then, I used a serial console when things got rough and I could not
even reboot to a state where user processes could run reliably.  Today,
I could use a virtual machine environment to make debugging hopefully
much easier (no?).  I have to investigate a little more how a virtual
machine can help with kernel debugging.

Any tips about recent/modern tools are appreciated.

TIA
