HTTPS health check is difficult for the check module, so I have added an
alternative feature for this request. The 'port' option can be
specified with a different port from the server's original port. For
example:
server {
    server 192.168.1.1:443;
    check interval=3000 rise=1 fall=3 timeout=2000 t
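A fuller sketch of such a block, assuming the directives of the upstream check
module; the upstream name, the second backend and the HTTP check settings are
illustrative additions, not taken from the original message:

    upstream backend {
        server 192.168.1.1:443;
        server 192.168.1.2:443;
        # port= lets the health check talk to a different port than the one
        # client traffic is proxied to (e.g. a plain-HTTP status port).
        check interval=3000 rise=1 fall=3 timeout=2000 type=http port=80;
        check_http_send "GET /status HTTP/1.0\r\n\r\n";
        check_http_expect_alive http_2xx http_3xx;
    }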
On Thu, Oct 31, 2013 at 09:39:54PM -0400, nehay2j wrote:
Hi there,
> I am making a GET call through the browser, like:
> https://example.com/ec2..com
So "$1" = "/ec2..com" and the proxy_pass argument is http:///ec2..com/test
> Error Logs-
>
> 2013/11/01 01:33:49 [error] 13086#0: *1 no host in upst
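One way this kind of setup is usually expressed is to capture the host name
without the leading slash and to add a resolver, since nginx resolves variable
host names at request time. A minimal sketch, not the poster's actual config;
the capture name and resolver address are illustrative, while the 8080 port and
/test path come from the error log above:

    location ~ ^/(?<backend_host>[^/]+) {
        resolver 8.8.8.8;
        # with a variable in proxy_pass, nginx needs a resolver and the
        # captured name must not start with "/" or the host ends up empty
        proxy_pass http://$backend_host:8080/test;
    }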
Thanks Francis.
I am making a GET call through the browser, like:
https://example.com/ec2..com
Error Logs-
2013/11/01 01:33:49 [error] 13086#0: *1 no host in upstream
"/ec2-xx-xxx-xxx-xxx..amazonaws.com:8080/test",
client: 10.10.4.167, server: clarity-test.cloud.tibco.com, request: "GET
/ec2-xx-xx-xx
On Thu, Oct 31, 2013 at 07:55:15PM -0400, nehay2j wrote:
Hi there,
> I need to do a proxy_pass to the host name passed in the url and rewrite the url as well.
> Since the host name is different with each request, I cannot provide an
> upstream for it. Below is the nginx configuration I am using, but it doesn't
Hi,
I need to do a proxy_pass to the host name passed in the url and rewrite the url as well.
Since the host name is different with each request, I cannot provide an
upstream for it. Below is the nginx configuration I am using, but it doesn't
do the proxy pass and it returns a 404 error. The hostname resembles ec2...com.
Hello!
On Thu, Oct 31, 2013 at 10:33:33AM -0400, j0nes2k wrote:
> Hello,
>
> I have nginx in front of an Apache server and a gunicorn server for
> different parts of my website. I am using the SSI module in nginx to display
> a snippet on every page. The websites include a snippet in this form:
Hello,
I have nginx in front of an Apache server and a gunicorn server for
different parts of my website. I am using the SSI module in nginx to display
a snippet on every page. The websites include a snippet in this form:
For static pages served by nginx everything is working fine, the same goes
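The excerpt cuts off before the snippet itself, but the usual shape of such a
setup looks roughly like the sketch below; the include URI, paths and backend
address are assumptions for illustration:

    # Pages reference the shared fragment with an SSI include such as:
    #   <!--# include virtual="/snippet.html" -->
    location /static/ {
        ssi on;                            # SSI processed in locally served pages
        root /var/www;
    }
    location / {
        ssi on;                            # SSI must also be enabled for proxied responses
        proxy_pass http://127.0.0.1:8000;  # e.g. the gunicorn backend
    }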
On Thu, Oct 31, 2013 at 02:26:41PM +0200, Pasi Kärkkäinen wrote:
> Hello,
>
> I'm using nginx as an http proxy / loadbalancer for an application
> which has the following setup on the backend servers:
>
> - https/443 provides the application at:
> - https://hostname-of-backend/app/
>
On Thursday 31 October 2013 14:01:20 luckyknight wrote:
> I have set up SPDY on my application and have observed some nice reductions
> in page load times. However, in a production environment my setup is
> somewhat different.
>
> At the moment my setup looks like this:
>
> server 1 running nginx,
Hello,
I'm using nginx as an http proxy / loadbalancer for an application
which has the following setup on the backend servers:
- https/443 provides the application at:
- https://hostname-of-backend/app/
- status monitoring url is available at:
- http://hostname-of-backend
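A rough sketch of the frontend side of such a setup; the host names, upstream
name and certificate paths are assumptions, not taken from the post:

    upstream app_backend {
        server backend1.example.com:443;
        server backend2.example.com:443;
    }
    server {
        listen 443 ssl;
        ssl_certificate     /etc/nginx/ssl/frontend.crt;
        ssl_certificate_key /etc/nginx/ssl/frontend.key;
        location / {
            # the application lives under /app/ on the backends
            proxy_pass https://app_backend/app/;
        }
    }

The plain-HTTP status URL on the backends is the kind of thing a health check
on a separate port, like the check module example earlier on this page, could
poll instead of the application port.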
I have set up SPDY on my application and have observed some nice reductions
in page load times. However, in a production environment my setup is somewhat
different.
At the moment my setup looks like this:
server 1 running nginx, terminates ssl and uses proxy_pass to server 2
server 2 running nginx,
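For the nginx versions these posts date from, SPDY is enabled only on the
client-facing listener, while the hop to the second server stays ordinary
proxied HTTP. A minimal sketch, with certificate paths and the backend address
assumed for illustration:

    server {
        listen 443 ssl spdy;                        # SPDY applies only to the client side
        ssl_certificate     /etc/nginx/ssl/site.crt;
        ssl_certificate_key /etc/nginx/ssl/site.key;
        location / {
            proxy_pass http://192.0.2.10;           # server 2; backend hop is plain HTTP/1.x
        }
    }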