I assume Liferay is throwing exceptions. Are these timeouts or indications
of broken connections?
A typical problem with the Elasticsearch Native Protocol is that it does not
like third-party tear-downs of connections it uses (e.g., by NGINX or some
load balancer).
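If the transport traffic does pass through nginx, a minimal sketch of a
stream proxy that avoids premature tear-downs could look like this (hosts,
ports and timeouts are assumptions, not taken from your setup):

stream {
    upstream elasticsearch_transport {
        server 10.0.0.10:9300;
    }
    server {
        listen 9300;
        proxy_pass elasticsearch_transport;
        # keep idle connections open far longer than the 10-minute default
        proxy_timeout 12h;
        proxy_socket_keepalive on;
    }
}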
The key requirement you mentioned now: the user needs to be logged in.
So, the next question is: how do we know the user is logged in? It can't be
just a simple cookie, because that could be faked (I could add "LOGGED_IN=1"
without the site authorizing this), and therefore there is no security at
all.
I would generally say this is not possible in the way you describe it. There
are, however, two ways this could be implemented:
1. You use one-time links to content files: all content retrieval URLs will
get a parameter expires=X (how long the link should be valid) and a
signature (e.g., an HMAC wi
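A minimal sketch of how such expiring, signed links can be validated in nginx
with the secure_link module; note that secure_link_md5 uses an MD5 hash
rather than a real HMAC, and the secret, paths and parameter names below are
placeholders:

location /content/ {
    secure_link     $arg_sig,$arg_expires;
    secure_link_md5 "$secure_link_expires$uri my_shared_secret";

    # no or invalid signature
    if ($secure_link = "") { return 403; }
    # signature valid but link expired
    if ($secure_link = "0") { return 410; }

    root /var/www/protected;
}

The issuing application computes the same MD5 over
"<expires><uri> my_shared_secret", base64url-encodes it, and appends it to
the URL as sig=...&expires=....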
Optimizing for production is not simply a matter of optimizing one component,
e.g., NGINX.
It is also about your security model, your application architecture, and
your ability to scale.
If you simply have static files to be served, place them into a memory-based
file system and you'll serve them blindingly fast.
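As a sketch of that idea, assuming /var/www/static-ram is a tmpfs mount
(e.g. mount -t tmpfs -o size=512m tmpfs /var/www/static-ram) populated by
your deployment; all paths are illustrative:

location /static/ {
    # files live in RAM, so disk I/O never becomes the bottleneck
    root       /var/www/static-ram;
    expires    1h;
    sendfile   on;
    tcp_nopush on;
}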
Hi Danny,
two comments:
1) Don't forget about $is_args$args to also pass any arguments supplied with
the URL.
2) You cannot redirect requests that carry a request body, most importantly
POST and PUT, so your rule is only applicable to GET/HEAD requests (see the
sketch below).
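A sketch combining both points; host names and paths are made up:

location /old-app/ {
    # a request with a body (POST, PUT, ...) should not be blindly redirected
    if ($request_method !~ ^(GET|HEAD)$) {
        return 405;
    }
    # $is_args$args carries over any query string from the original URL
    return 301 https://new.example.com/new-app/$is_args$args;
}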
I have no idea what you are really struggling with
PS: If, as you mentioned in the other reply, you want to create environments
dynamically, you could use the map directive with an include file that is
dynamically updated by the deployment process of such an environment (and
then do nginx -s reload), but even more elegant would be the replace
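A sketch of that include variant; the file name and its location are
assumptions:

map $urlprefix $urlproxy {
    default "https://standard.example.com";
    # environments.map is rewritten by the deployment process, one
    # "prefix  target;" line per environment, followed by: nginx -s reload
    include /etc/nginx/environments.map;
}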
Try something like this:
map $urlprefix $urlproxy {
    "foo"   "https://foohost.foo.com";
    "bar"   "http://barhost.blah.com";
    "fie"   "https://fie.special.domain.com/blubb";
    default "https://standard.com";
}
[...]
location ~ "^/(?<urlprefix>[^/]+)(?<urlsuffix>/.*)$" {
[...]
proxy_pass "$urlproxy$urlsuffix$is_args$args";
Robots exclusion is generally quite unreliable. Exclusions based on user
agents are also not really reliable. You can try all of the options for
robots exclusion and may still get undesired crawlers on your site.
The only way you can keep robots out is to require authentication for those
parts you want to protect.
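A sketch of that approach; the protected path and the htpasswd file are
assumptions:

location = /robots.txt {
    default_type text/plain;
    return 200 "User-agent: *\nDisallow: /private/\n";
}

location /private/ {
    # robots.txt is only advisory; authentication is what actually keeps
    # crawlers (and everybody else) out
    auth_basic           "restricted";
    auth_basic_user_file /etc/nginx/htpasswd;
}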
I did. They said it works as designed: keyval maps with type=ip have no
option to retrieve the status of entries other than by supplying IP
addresses, so values can no longer be retrieved once the key needs to be a
CIDR block.
I am doing a workaround now.
--j.
In order to redirect HTTP to HTTPS, you have to define a listener rule in
the ALB that redirects all traffic on port 80 to port 443 (of the ALB) with
the original path and query parameters. The status code should be a 301
(permanent redirection). That covers the connection between the client and
the ALB.
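With the AWS CLI, such a rule could be set up roughly like this (the listener
ARN is a placeholder; host, path and query are preserved by the redirect
action's defaults):

aws elbv2 modify-listener \
    --listener-arn arn:aws:elasticloadbalancing:...:listener/app/my-alb/... \
    --default-actions 'Type=redirect,RedirectConfig={Protocol=HTTPS,Port=443,StatusCode=HTTP_301}'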
A little correction to my earlier message: IPv6 addresses also seem to work.
In my test, I was checking for a dot in the key, and that excluded IPv6
addresses.
However, CIDR ranges still fail.
The new R19 introduces "type=ip" keyval maps.
Posting IP addresses (e.g., 1.2.3.4) seems to work both from the API 5 REST
calls and from JavaScript, except that IPv6 addresses are not accepted.
Posting CIDR blocks (e.g., 1.2.3.0/24) works fine via the API 5 REST calls
but not via JavaScript. CIDR ent
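For reference, the REST calls in question look roughly like this (zone name,
port and addresses are made up):

# IP entry - works via the API
curl -X POST -d '{"1.2.3.4":"blocked"}' \
    http://127.0.0.1:8080/api/5/http/keyvals/denylist

# CIDR entry - works via the API, but not when posted from JavaScript
curl -X POST -d '{"1.2.3.0/24":"blocked"}' \
    http://127.0.0.1:8080/api/5/http/keyvals/denylist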
I'm a big fan of throw-away certificates, i.e., self-signed certificates you
can dispose of at any time. It seems the generation of proper certificates is
still a mystery to some, so let me briefly include a recipe for how to create
them:
Create a cert-client.conf of the following form:
---
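# Illustrative sketch only (not necessarily the original recipe);
# every name below is a placeholder.
[req]
default_bits       = 2048
prompt             = no
distinguished_name = dn
x509_extensions    = ext

[dn]
CN = throwaway-client.example.com

[ext]
subjectAltName = DNS:throwaway-client.example.com
---

The certificate and key can then be produced with:

openssl req -x509 -new -nodes -newkey rsa:2048 \
    -keyout client.key -out client.crt -days 30 \
    -config cert-client.conf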
I've been following this, and I would take a slightly different approach.
1. Serve all apps under /{app}/releases/{version}/{path} as you have them
organized in the deployment structure in the file system.
2. Forget about symbolic links and other makeshift versioning/defaulting in
the file system
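A sketch of point 1, assuming the deployment root is /srv/deployments (the
path is an assumption):

location ~ ^/[^/]+/releases/[^/]+/ {
    # the URL layout mirrors the on-disk layout, so a plain root is enough;
    # no symlinks or per-version rewrites are needed
    root /srv/deployments;
}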