```
# cat mongrel2.conf
# a sample proxy route
node_fib = Proxy(addr='127.0.0.1', port=3000)

# your main host
mongrel2 = Host(name="localhost", routes={
    '/': node_fib
})

# the server to run them all
main = Server(
    uuid="2f62bd5-9e59-49cd-993c-3b6013c28f05",
    access_log="/logs/access.log",
    error_log="/logs/error.log",
    chroot="/home/test/deployment/",
    pid_file="/run/mongrel2.pid",
    default_host="localhost",
    name="main",
    port=8080,
    filters=[],
    hosts=[mongrel2]
)

settings = {
    "zeromq.threads": 1,
    "upload.temp_store": "/home/zedshaw/projects/mongrel2/tmp/upload.XXXXXX",
    "upload.temp_store_mode": "0666"
}

servers = [main]
```
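On the Handler question in the quoted mail below: a mongrel2 handler receives each request over ZeroMQ as `SENDER_UUID CONN_ID PATH SIZE:HEADERS,SIZE:BODY,`, with the headers as JSON inside a netstring (newer builds can also send tnetstrings). A minimal parsing sketch, with a made-up example message for illustration:

```python
import json

def parse_netstring(data):
    """Split one netstring ("LEN:PAYLOAD,") off the front of data."""
    length, rest = data.split(b':', 1)
    payload, rest = rest[:int(length)], rest[int(length):]
    assert rest[:1] == b',', "netstring missing trailing comma"
    return payload, rest[1:]

def parse_request(msg):
    """Split a mongrel2 handler request into its documented parts."""
    sender, conn_id, path, rest = msg.split(b' ', 3)
    headers_raw, rest = parse_netstring(rest)
    body, _ = parse_netstring(rest)
    return sender, conn_id, path, json.loads(headers_raw), body

# Hypothetical message in the documented wire format (UUID and body invented).
msg = (b'54c6755b-9628-40a4-9a2d-cc82a816345e 42 /100 '
       b'23:{"PATH":"/100","x":"y"},5:hello,')
print(parse_request(msg))
```

The stock Python handler library (`from mongrel2 import handler`) does this parsing for you; the sketch just shows how little work sits between the server and a handler.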
I haven't benchmarked this particular one via a Handler. Other handlers have
been pretty fast, though: ~19 ms on average.
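For a sense of scale, the two runs quoted below differ by roughly 300x. The arithmetic, with the durations copied straight from the httperf output (10000 requests each):

```python
# Figures copied from the two httperf runs quoted below.
requests = 10000
direct_duration_s = 2.011     # hitting the node service directly on :3000
proxied_duration_s = 637.910  # going through mongrel2's proxy on :8080

direct_ms_per_req = direct_duration_s / requests * 1000    # ~0.2 ms/req
proxied_ms_per_req = proxied_duration_s / requests * 1000  # ~63.8 ms/req
slowdown = proxied_duration_s / direct_duration_s          # ~317x

print(round(direct_ms_per_req, 1), round(proxied_ms_per_req, 1), round(slowdown))
```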
On Mon, Jul 21, 2014 at 5:44 PM, Paul Eipper <[email protected]> wrote:
> I probably cannot help, but I'm curious about your mongrel2.conf
>
> Also, did you try benchmarking the service as a Handler?
>
> Regards,
>
> --
> Paul Eipper
>
>
> On Mon, Jul 21, 2014 at 7:18 PM, John Jelinek IV <[email protected]>
> wrote:
> > Hi all,
> >
> > I am evaluating mongrel2 and wanted to get some simple perf benchmarks,
> > so I am using httperf to establish a baseline for proxying a service.
> >
> > Here I am hitting the service directly:
> >
> > ```
> > $ httperf --hog --server localhost --port 3000 --uri /100 --num-conn 200
> > --num-call 50
> > httperf --hog --client=0/1 --server=localhost --port=3000 --uri=/100
> > --send-buffer=4096 --recv-buffer=16384 --num-conns=200 --num-calls=50
> > httperf: warning: open file limit > FD_SETSIZE; limiting max. # of open
> > files to FD_SETSIZE
> > Maximum connect burst length: 1
> >
> > Total: connections 200 requests 10000 replies 10000 test-duration 2.011 s
> >
> > Connection rate: 99.5 conn/s (10.1 ms/conn, <=1 concurrent connections)
> > Connection time [ms]: min 8.4 avg 10.1 max 28.8 median 9.5 stddev 2.2
> > Connection time [ms]: connect 0.1
> > Connection length [replies/conn]: 50.000
> >
> > Request rate: 4973.8 req/s (0.2 ms/req)
> > Request size [B]: 65.0
> >
> > Reply rate [replies/s]: min 0.0 avg 0.0 max 0.0 stddev 0.0 (0 samples)
> > Reply time [ms]: response 0.2 transfer 0.0
> > Reply size [B]: header 163.0 content 21.0 footer 0.0 (total 184.0)
> > Reply status: 1xx=0 2xx=10000 3xx=0 4xx=0 5xx=0
> >
> > CPU time [s]: user 0.32 system 1.69 (user 15.9% system 84.1% total 100.0%)
> > Net I/O: 1209.4 KB/s (9.9*10^6 bps)
> >
> > Errors: total 0 client-timo 0 socket-timo 0 connrefused 0 connreset 0
> > Errors: fd-unavail 0 addrunavail 0 ftab-full 0 other 0
> > ```
> >
> > Here I am hitting the service through mongrel2's proxy feature:
> >
> > ```
> > $ httperf --hog --server localhost --port 8080 --uri /100 --num-conn 200
> > --num-call 50
> > httperf --hog --client=0/1 --server=localhost --port=8080 --uri=/100
> > --send-buffer=4096 --recv-buffer=16384 --num-conns=200 --num-calls=50
> > httperf: warning: open file limit > FD_SETSIZE; limiting max. # of open
> > files to FD_SETSIZE
> > Maximum connect burst length: 1
> >
> > Total: connections 200 requests 10000 replies 10000 test-duration 637.910 s
> >
> > Connection rate: 0.3 conn/s (3189.5 ms/conn, <=1 concurrent connections)
> > Connection time [ms]: min 2829.2 avg 3189.5 max 3307.2 median 3216.5 stddev 88.2
> > Connection time [ms]: connect 0.1
> > Connection length [replies/conn]: 50.000
> >
> > Request rate: 15.7 req/s (63.8 ms/req)
> > Request size [B]: 65.0
> >
> > Reply rate [replies/s]: min 15.2 avg 15.7 max 17.4 stddev 0.5 (127 samples)
> > Reply time [ms]: response 63.8 transfer 0.0
> > Reply size [B]: header 163.0 content 21.0 footer 0.0 (total 184.0)
> > Reply status: 1xx=0 2xx=10000 3xx=0 4xx=0 5xx=0
> >
> > CPU time [s]: user 103.46 system 534.38 (user 16.2% system 83.8% total 100.0%)
> > Net I/O: 3.8 KB/s (0.0*10^6 bps)
> >
> > Errors: total 0 client-timo 0 socket-timo 0 connrefused 0 connreset 0
> > Errors: fd-unavail 0 addrunavail 0 ftab-full 0 other 0
> > ```
> >
> > I am wondering why Mongrel2 is handling these requests so much slower. Is
> > that normal? Maybe some DDoS-prevention mechanism? For a low-latency,
> > high-traffic API, this kind of traffic might not be considered a DDoS.
> > I'd love your feedback.
> >
> > Note - this is the service I'm testing in this scenario:
> > https://github.com/glenjamin/node-fib/blob/master/app.js
> >
> > Thanks,
> > --John Jelinek IV
>