File Descriptor Limits

Currently each server hosts 2000 virtual hosts. The resources they
consume are minimal: load average sits around 0.5 to 1, and I believe
it is even that high only because of the way disk I/O factors into the
load calculation, since the machines are instantly responsive.

This leads me to think they could do seriously more work.
Hardware/software: HP DL380 (G7, 8G RAM), Apache 2.4 on Slackware
(recently upgraded all to 3.6 kernels).

Am I right in assuming the FD usage is about 20, or let's say 50, FDs
for Apache internals,
plus one each for the access/error/suexec logs per virtual host,
TIMES the number of daemons (we use the event MPM, which seems to
start 5 of them)?

So, more or less, is it 3*2000 + 50*5? Or are those internal 20 or so
FDs per virtual host, not overall?
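That arithmetic can be sketched out; every number below is one of the
guesses from the question above (log count per vhost, per-child
overhead, child count), not a measured value:

```shell
# Back-of-envelope FD estimate from the guesses in the post above.
vhosts=2000
logs_per_vhost=3        # access + error + suexec, one FD each
shared_per_child=50     # modules/libraries/listeners per child -- a guess
children=5              # event MPM children observed
estimate=$(( vhosts * logs_per_vhost + shared_per_child * children ))
echo "$estimate"        # 6250 with these inputs
```

Whether the per-vhost log FDs count once overall or once per child is
exactly the open question here: if each child opens every log file,
the first term gets multiplied by the child count as well.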

Just trying to make sure we don't blow the limits and bork the servers
if we reduce our carbon footprint (and power bill) by decommissioning
a bunch of servers that can be turned off until needed.



Re: File Descriptor Limits

By Igor Cicimov at 12/11/2012 - 00:35

On Tue, Dec 11, 2012 at 2:38 PM, Nick Edwards <nick.z. ... at gmail dot com> wrote:

I would say not right. Each Apache process loads heaps of modules and
libraries for normal operation, and each of them uses a file descriptor.
Not to mention the network connections to the server, where each socket
means a file descriptor, since on Unix/Linux everything is a file.
The best way to find your average file descriptor usage is to run lsof
and grep for the Apache user, something like this:

System limit:
# ulimit -n

The network files opened by all Apache processes (IPv4 only):
# lsof -i 4 | grep httpd | wc -l

FDs used by the apache user:
# lsof -s -u apache | wc -l

Specific file groups:
# lsof | grep httpd | egrep 'REG|IPv' | wc -l

So after finding the total number of used FDs on your system, just
divide it by the total number of processes run by the apache user:

# ps -u apache | grep [0-9] | wc -l
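Putting those two steps together is just a division. As a sketch with
concrete figures (the 33209 total FDs reported further down this
thread, and the 5 event-MPM children from the first post -- both taken
from the thread, not measured here):

```shell
# Average FDs per httpd process, using figures quoted in this thread.
total_fds=33209   # total open FDs reported later in the thread
procs=5           # event MPM children mentioned in the original post
avg=$(( total_fds / procs ))
echo "$avg"       # several thousand per process with these figures
```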

Re: File Descriptor Limits

By Nick Edwards at 12/11/2012 - 23:30

So, we did it :->
We had to set ulimit to 32768 to make Apache start.

Currently it uses only 33209 FDs,
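For anyone wanting that limit to survive a reboot: the thread doesn't
say how the ulimit was set, so this is an assumption on my part, but
one common way to persist a higher "nofile" limit is via
/etc/security/limits.conf:

```
# /etc/security/limits.conf -- values from the thread, mechanism assumed
apache   soft   nofile   32768
apache   hard   nofile   32768
```

Since PAM limits only apply to sessions that go through PAM, a
boot-time start on Slackware may instead need an `ulimit -n 32768`
near the top of the rc script that launches httpd.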

with 916 TCP connections, including a few in WAIT states.

So on that basis it's not exact math or science, but it shows we can
easily handle it, and perhaps even move to 6K hosts. At 4K hosts the
load increased marginally to an average of 1, spiking occasionally to
3, with the machines still instantly responsive (that was tested over
a 1.5 Mbps ADSL line, not the LAN), so we are very happy.