Hello,
I'm experiencing strange behavior with relayd. relayd is used to
reverse-proxy an Apache web server instance on localhost and to do SSL
acceleration. The relayd engine crashes with the following errors:
$ cat /var/log/daemon
....
Aug 21 04:41:47 www-apps-int relayd[1592]: pfe exiting, pid 1592
Aug 21 04:41:47 www-apps-int relayd[24962]: hce exiting, pid 24962
Aug 21 04:41:47 www-apps-int relayd[19232]: lost child: relay terminated; signal 11
Aug 21 04:41:47 www-apps-int relayd[19232]: parent terminating, pid 19232
Aug 21 04:41:47 www-apps-int relayd[17554]: relay exiting, pid 17554
...
It seems that the crash is associated with a scan from Qualys IP
address ranges.
$ cat /var/www/logs/access_log
.......
[LAST ENTRY]: 64.39.111.34 - - [21/Aug/2013:04:41:47 +0300] "GET /post-nuke/html/ HTTP/1.1" 404 221 "-" "-"
....
The crash happened at the same time as that last access_log entry from
Qualys; it is the last entry because relayd crashed. There is a total of
1010 connections from that IP, at a rate of between 3 and 10
connections/second. The machine runs OpenBSD 5.3/amd64 GENERIC.MP.
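For reference, the per-second rate above was derived by counting access_log entries per timestamp. A minimal sketch of that count (the helper name `count_per_sec` is mine; it assumes the Apache common/combined log format shown above, where the timestamp is the fourth whitespace-separated field):

```shell
# count_per_sec LOGFILE IP:
# print, for each one-second timestamp, how many requests IP made.
# Field 4 of a common-format log line is "[21/Aug/2013:04:41:47",
# so grouping on it groups requests by second.
count_per_sec() {
    awk -v ip="$2" '$1 == ip { n[$4]++ }
                    END { for (t in n) print t, n[t] }' "$1" | sort
}
```

Running it as `count_per_sec /var/www/logs/access_log 64.39.111.34` and eyeballing the second column is how I would read off the 3-10 connections/second figure.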
$ sudo cat /etc/relayd.conf
ext_addr="10.10.13.93"
table <webhosts> { 127.0.0.1 }
#
# Relay and protocol for HTTP layer 7 loadbalancing and SSL acceleration
#
http protocol www_ssl_prot {
        header append "$REMOTE_ADDR" to "X-Forwarded-For"
        header append "$SERVER_ADDR:$SERVER_PORT" to "X-Forwarded-By"
        header change "Connection" to "close"

        # Various TCP performance options
        tcp { nodelay, sack, socket buffer 65536, backlog 128 }

        #ssl { ciphers "RC4:HIGH:!AES256-SHA:!AES128-SHA:!DES-CBC3-SHA:!MD5:!aNULL:!EDH" }
        ssl { ciphers "HIGH" }
        #ssl { no sslv2, sslv3, tlsv1, ciphers "HIGH" }
        ssl session cache disable
}

relay www_ssl {
        # Run as a SSL accelerator
        listen on $ext_addr port 443 ssl
        protocol www_ssl_prot

        # Forward to hosts in the webhosts table using a src/dst hash
        forward to <webhosts> port 8080
}
In /etc/pf.conf I have the following rules (for www):
ext_if="trunk0"
www_ports_ext = "{80, 443}"

altq on $ext_if cbq bandwidth 20Mb queue {std, interne, externe}
queue std bandwidth 1000Kb cbq(default)
queue externe bandwidth 5Mb {web, app, penalty}
        queue web bandwidth 94% priority 5 cbq(borrow red)
        queue app bandwidth 5% priority 7 cbq(borrow red)
        queue penalty bandwidth 6Kb priority 0 cbq
queue interne bandwidth 14Mb {ssh, servicii}
        queue ssh bandwidth 8Mb cbq(borrow) {ssh_prio, ssh_bulk}
                queue ssh_prio bandwidth 20% priority 7 cbq(borrow)
                queue ssh_bulk bandwidth 80% priority 0 cbq(borrow)
        queue servicii bandwidth 6Mb priority 5 cbq(borrow red)

pass in quick log on $ext_if inet proto tcp from <www_bad_hosts> to any port $www_ports_ext queue penalty

#
# WWW extern
#
# table <web_allowed> contains some IP ranges for testing purposes; it will be
# replaced by the keyword 'any' in production
#
pass in inet proto tcp from <web_allowed> to ($ext_if) port $www_ports_ext flags S/SA keep state \
        (max-src-conn-rate 100/10, \
        max-src-nodes 500, max-src-states 250, source-track rule, \
        overload <www_bad_hosts> flush global) queue web
$ sudo pfctl -t www_bad_hosts -T show
$
Table <www_bad_hosts> is empty, so the thresholds in the rule above are
not met.
What could cause this behavior?
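As a quick sanity check on that (values taken from the rule and from the observed rate above, and assuming max-src-conn-rate only trips when the limit is exceeded): even the peak of 10 connections/second over the rule's 10-second window gives exactly 100, which does not exceed 100/10.

```shell
# max-src-conn-rate 100/10 limits a single source to 100 new
# connections per 10 seconds; check the observed peak against it.
peak_rate=10      # connections/second, observed peak from access_log
window=10         # seconds, from the rule
limit=100         # connections, from the rule
total=$((peak_rate * window))
if [ "$total" -gt "$limit" ]; then
    echo "overload would trigger"
else
    echo "under limit"
fi
```

So the empty <www_bad_hosts> table is consistent with the scan rate sitting at or below the rule's boundary.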
From the logs it seems that the pfe child process triggers the crash,
which is why I am sending the relevant www pf rules. I would appreciate
it if somebody could point me in the right direction for fixing this.
Thank you in advance,
Bogdan