
I'm running a Linux 2.6.36 kernel, and I'm seeing some random errors. Things like

ls: error while loading shared libraries: libpthread.so.0: cannot open shared object file: Error 23

Yes, my system can't consistently run an 'ls' command. :(

I note several errors in my dmesg output:

# dmesg | tail
[2808967.543203] EXT4-fs (sda3): re-mounted. Opts: (null)
[2837776.220605] xv[14450] general protection ip:7f20c20c6ac6 sp:7fff3641b368 error:0 in libpng14.so.14.4.0[7f20c20a9000+29000]
[4931344.685302] EXT4-fs (md16): re-mounted. Opts: (null)
[4982666.631444] VFS: file-max limit 1231582 reached
[4982666.764240] VFS: file-max limit 1231582 reached
[4982767.360574] VFS: file-max limit 1231582 reached
[4982901.904628] VFS: file-max limit 1231582 reached
[4982964.930556] VFS: file-max limit 1231582 reached
[4982966.352170] VFS: file-max limit 1231582 reached
[4982966.649195] top[31095]: segfault at 14 ip 00007fd6ace42700 sp 00007fff20746530 error 6 in libproc-3.2.8.so[7fd6ace3b000+e000]

Obviously, the file-max errors look suspicious, being clustered together and recent. (Error 23 is ENFILE, "too many open files in system", so the ls failures are almost certainly the same problem.)

# cat /proc/sys/fs/file-max
1231582
# cat /proc/sys/fs/file-nr
1231712 0       1231582

That also looks a bit odd to me, but the thing is, there's no way I have 1.2 million files open on this system. I'm the only one using it, and it's not visible to anyone outside the local network.
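To sanity-check that number against what userspace can actually see, something like this compares the kernel's count with the descriptors visible under /proc (a rough sketch, not exactly what I ran):

# cat /proc/sys/fs/file-nr
# find /proc/[0-9]*/fd -type l 2>/dev/null | wc -l

A large gap between the first field of file-nr and the second count would mean handles held inside the kernel rather than by any process. The usual userspace tools tell a similar story: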

# lsof | wc
  16046  148253 1882901
# ps -ef | wc 
    574    6104   44260
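A per-command breakdown of that lsof output can be had with something like this (note that lsof repeats entries per thread and per memory mapping, so the counts overstate the real number of descriptors):

# lsof | awk '{print $1}' | sort | uniq -c | sort -rn | head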

I saw some documentation saying:

file-max & file-nr:

The kernel allocates file handles dynamically, but as yet it doesn't free them again.

The value in file-max denotes the maximum number of file handles that the Linux kernel will allocate. When you get lots of error messages about running out of file handles, you might want to increase this limit.

Historically, the three values in file-nr denoted the number of allocated file handles, the number of allocated but unused file handles, and the maximum number of file handles. Linux 2.6 always reports 0 as the number of free file handles -- this is not an error, it just means that the number of allocated file handles exactly matches the number of used file handles.

Attempts to allocate more file descriptors than file-max are reported with printk, look for "VFS: file-max limit reached".
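For completeness, raising that ceiling as a stopgap is just a sysctl; the number below is arbitrary, and it only postpones the problem if handles really are leaking:

# sysctl -w fs.file-max=2500000
# echo 'fs.file-max = 2500000' >> /etc/sysctl.conf

The first takes effect immediately; the second makes it persist across reboots.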

My first reading of this is that the kernel basically has a built-in file descriptor leak, but I find that very hard to believe. It would imply that any system in active use needs to be rebooted every so often to free up the file descriptors. As I said, I can't believe this would be true, since it's normal to me to have Linux systems stay up for months (even years) at a time. On the other hand, I also can't believe that my nearly-idle system is holding over a million files open.

Does anyone have any ideas, either for fixes or further diagnosis? I could, of course, just reboot the system, but I don't want this to be a recurring problem every few weeks. As a stopgap measure, I've quit Firefox, which was accounting for almost 2000 lines of lsof output (!) even though I only had one window open, and now I can run 'ls' again, but I doubt that will fix the problem for long. (edit: Oops, spoke too soon. By the time I finished typing out this question, the symptom was/is back)

Thanks in advance for any help.

  • Hm, I didn't know about that one. Thanks for the pointer, I'll post over there instead.
    – Rick Koshi
    Feb 13, 2011 at 20:38
  • That documentation doesn't seem accurate, linux/fs/file_table.c both allocates and frees file handles. It does sound like you have a leak somewhere, though, and I'm not sure how to best track it down.
    – ephemient
    Feb 13, 2011 at 20:48
  • Do you have any "weird" stuff going on with the kernel? I.e., are you running any experimental or unusual modules, or anything which may be causing the kernel itself to open (and erroneously not close) file handles?
    – Brad
    Feb 13, 2011 at 23:00
  • ext4 is horribly unreliable compared with previous versions of ext. Just go back to ext3...
    Feb 14, 2011 at 1:04

1 Answer


I hate to leave a question open, so a summary for anyone who finds this.

I ended up reposting the question on serverfault instead (this article)

They weren't able to come up with anything, actually, but I did some more investigation and ultimately found that it's a genuine bug in NFSv4, specifically in the server-side locking code. I had an NFS client running a monitoring script every 5 seconds, using rrdtool to log some data to an NFS-mounted file. Every time it ran, it locked the file for writing, and the server allocated (but erroneously never released) an open file handle. That script (plus another that ran less frequently) consumed about 900 file handles per hour, and two months later the server hit the limit.
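In case it helps anyone chasing a similar leak: the correlation is easy to see by watching the server's allocated-handle count while starting and stopping the suspect client. Something like this (interval is arbitrary) is enough:

# watch -n 60 'cat /proc/sys/fs/file-nr'

If the first field climbs steadily while the client script runs and levels off when you stop it, you've found your culprit.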

Several solutions are possible:

1) Use NFSv3 instead (a mount-option sketch is below).
2) Stop running the monitoring script.
3) Store the monitoring results locally instead of on NFS.
4) Wait for the patch to NFSv4 that fixes this (Bruce Fields actually sent me a patch to try, but I haven't had time to test it).
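For what it's worth, option 1 is just a client-side mount option; a sketch, with the server name, export, and mount point as placeholders:

# mount -t nfs -o vers=3 fileserver:/export /mnt/export

(or the equivalent vers=3 entry in /etc/fstab).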

I'm sure you can think of other possible solutions.

Thanks for trying.

  • 5) Use a different NFS server, possibly NFS-Ganesha or the test server in pynfs.
    – ephemient
    Mar 5, 2011 at 18:15
  • 1
    Can you detail a bit how you troubleshooted this issue? How did you find the "guilty" process?
    – mhristache
    Feb 6, 2015 at 9:49
