The packaging should take care of these: http://wiki.apache.org/hadoop/Hbase/FAQ#A6
Why do I see "java.io.IOException...(Too many open files)" in my logs?

Currently HBase is a file handle glutton. Running an HBase instance loaded with more than a few regions, it's possible to blow past the common default file handle limit of 1024 for the user running the process. Running out of file handles is like an OOME: things start to fail in strange ways. To raise the user's file handle limit, edit /etc/security/limits.conf on all nodes and restart your cluster:

{{{
# Each line describes a limit for a user in the form:
#
# domain    type    item    value
#
hbase       -       nofile  32768
}}}

You may also need to edit sysctl.conf.

The math runs roughly as follows: per column family there is at least one mapfile, and possibly up to 5 or 6 if a region is under load (let's say 3 per column family on average). Multiply that by the number of regions per region server. So, for example, with a schema of 3 column families per region and 100 regions per region server, the JVM will open 3 * 3 * 100 = 900 mapfile descriptors, not counting open jar files, conf files, and so on. Run 'lsof -p REGIONSERVER_PID' to see for sure; a verification sketch is appended at the end of this message.

-------------------------

http://pero.blogs.aprilmayjune.org/2009/01/22/hadoop-and-linux-kernel-2627-epoll-limits/

A day and several installation routines later, we figured out that the available epoll resources were no longer sufficient. Java JDK 1.6 uses epoll to implement non-blocking I/O. With kernel 2.6.27, resource limits were introduced, and the default on openSuSE is 128 - way too low. Increasing the limit with

  echo 1024 > /proc/sys/fs/epoll/max_user_instances

fixed the cluster immediately. To make this setting boot-safe, add the following line to /etc/sysctl.conf:

  fs.epoll.max_user_instances = 1024

Thomas Koch, http://www.koch.ro
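
Appended for reference: a sketch of how to verify that the limits.conf change took effect and to compare the FAQ's estimate against reality. The user name "hbase" and the 'pgrep -f HRegionServer' match are assumptions about a typical installation, not guarantees.

  # Limit as seen by a fresh login shell for the hbase user;
  # should print 32768 once limits.conf is in place on this node.
  su - hbase -c 'ulimit -n'

  # Count the descriptors actually held by the running region server.
  PID=$(pgrep -f HRegionServer)
  lsof -p "$PID" | wc -l

  # The FAQ's estimate for comparison:
  #   3 mapfiles/family * 3 families/region * 100 regions = 900,
  # plus jars, conf files, sockets, and so on.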
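
Similarly, the epoll limit can be inspected and raised with sysctl instead of echoing into /proc. This is only a sketch, and it assumes a kernel around 2.6.27/2.6.28; the fs.epoll.max_user_instances knob was dropped from later kernels, which keep only fs.epoll.max_user_watches.

  # Show the current epoll instance limit (128 by default on openSuSE).
  sysctl fs.epoll.max_user_instances

  # Raise it until the next reboot (same effect as the echo above).
  sysctl -w fs.epoll.max_user_instances=1024

  # Persist the setting, then reload /etc/sysctl.conf.
  echo 'fs.epoll.max_user_instances = 1024' >> /etc/sysctl.conf
  sysctl -p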