Milestone #29
opennginx configuration
0%
Description
Use SNI for routing HTTPS requests
The scenario is a proxy exposing the HTTPS protocol. SNI (Server Name Indication) is sent in clear text in the TLS ClientHello, before the handshake negotiation completes.
If SNI is used for routing HTTPS requests, the same port can be used for multiple domains.
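As a sketch, SNI-based routing can be done in nginx with the stream module and `ssl_preread`, which reads the server name from the ClientHello without decrypting anything (the upstream names and addresses below are hypothetical):

```nginx
# Route TLS connections by SNI without terminating SSL.
# Requires ngx_stream_ssl_preread_module.
stream {
    map $ssl_preread_server_name $backend {
        app.example.com   app_backend;
        api.example.com   api_backend;
        default           app_backend;
    }

    upstream app_backend { server 10.0.0.10:443; }
    upstream api_backend { server 10.0.0.11:443; }

    server {
        listen 443;
        ssl_preread on;      # read SNI from the ClientHello, do not decrypt
        proxy_pass $backend;
    }
}
```

Because nothing is decrypted, each domain can point to a different internal service while sharing the same public port.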
The problem is passing the headers:
- X-Forwarded-Proto
- X-Forwarded-Port
- X-Forwarded-Host
- X-Forwarded-For
These headers are required by the internal HTTP/1.1 service for:
- setting cookies
- crafting the internal URLs in each page served (base URI)
- internal redirections
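When the proxy does terminate SSL, these headers are set explicitly on the proxied request. A minimal sketch, assuming the internal service listens on a hypothetical 127.0.0.1:8080:

```nginx
# The proxy terminates SSL and forwards the original request
# context to the internal HTTP/1.1 service.
location / {
    proxy_pass http://127.0.0.1:8080;
    proxy_http_version 1.1;
    proxy_set_header Host              $host;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-Port  $server_port;
    proxy_set_header X-Forwarded-Host  $host;
    proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
}
```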
Adopting HTTPS on the backend service
The drawback of HTTPS adoption is:
- the SSL certificate is managed by the internal service: at the very least, the internal service must generate a self-signed certificate
SSL Termination
There are 2 options for the proxy service, for a given TCP port:
- act as SSL termination
- stream the SSL traffic to the internal service
If the proxy does SSL termination, the proxy certificate must be valid for the browser.
If the proxy streams the SSL data to the internal service, the internal service certificate must be valid.
If the proxy does SSL termination and then proxies requests to the internal service, the internal service is not aware of the calling client: how the client expects cookies to be set, and how the internal URLs in the pages must be crafted.
Proxy SSL termination is only usable for pure REST calls (or GraphQL, or similar).
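The two options look like this in nginx (certificate paths, addresses, and ports are hypothetical; the streamed variant listens on a different port here only to avoid clashing with the terminating one):

```nginx
# Option 1: SSL termination on the proxy; the proxy's certificate
# is what the browser validates.
server {
    listen 443 ssl;
    server_name app.example.com;
    ssl_certificate     /etc/nginx/certs/app.example.com.crt;
    ssl_certificate_key /etc/nginx/certs/app.example.com.key;

    location / {
        proxy_pass http://10.0.0.10:8080;
    }
}

# Option 2: stream the SSL data untouched; the internal service's
# certificate is what the browser validates.
stream {
    server {
        listen 8443;
        proxy_pass 10.0.0.10:443;
    }
}
```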
Conclusions
There is no option to serve both HTTPS and HTTP requests to the backend for the same proxy service on the same port.
This limitation does not apply to internal services providing pure REST.
Internal service CPU load
SSL handshake negotiation involves challenge solving and symmetric key generation for the communication, as described here: https://sematext.com/glossary/ssl-tls-handshake/
If the proxy implements SSL termination, this cost is paid once, on the proxy.
Connection bound and load balancing
When the proxy is set up not to terminate SSL, the connection between the external client and the internal service is bound one-to-one.
This has some implications:
- the internal service load depends on how many clients are bound to that specific service instance
- during the connection lifetime, the binding cannot be changed to balance multiple clients across multiple internal service instances
- if the internal instance has an error (connection error, node not reachable, instance crash), the client must handle the error: close the HTTPS channel, open a new HTTPS channel, negotiate the SSL handshake again. From the client's point of view these operations target the same URL, but end up talking to another internal instance.
It is worth mentioning that, in case of error, the client may receive a
- 502 Bad Gateway
- 504 Gateway Timeout
- 5XY (non-standard)
See https://en.wikipedia.org/wiki/List_of_HTTP_status_codes for proxy-related messages.
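By contrast, when the proxy does terminate SSL, a failed instance can be retried transparently instead of surfacing a 502/504 to the client. A sketch, with hypothetical addresses and certificate paths:

```nginx
# SSL is terminated on the proxy, so requests are not bound
# to one instance and can fail over.
upstream app_backend {
    server 10.0.0.10:8080;
    server 10.0.0.11:8080;
}

server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/certs/app.crt;
    ssl_certificate_key /etc/nginx/certs/app.key;

    location / {
        proxy_pass http://app_backend;
        # try the next instance on connection errors, timeouts, or 502/504
        proxy_next_upstream error timeout http_502 http_504;
        proxy_next_upstream_tries 2;
    }
}
```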
UDP
The SSL "to terminate or not to terminate" dilemma does not exist for UDP messages. In the UDP world SSL does not exist either, because the connection control flow must be implemented at the application level (OSI Layer 7). For UDP, these are the application's responsibilities:
- implement the security layer
- implement traffic control flow and deal with network errors
For each UDP-based service (mostly media streaming), there are libraries providing these features.
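The proxy can still relay UDP at L4; it simply has nothing to say about security or flow control. A minimal nginx stream sketch, with hypothetical ports and addresses:

```nginx
# L4 relay of UDP datagrams; encryption and retransmission
# remain the application's job.
stream {
    upstream media_backend {
        server 10.0.0.20:5004;
    }

    server {
        listen 5004 udp;
        proxy_pass media_backend;
        proxy_timeout 10s;   # close the session after 10s of silence
        proxy_responses 0;   # media stream: do not expect a reply per packet
    }
}
```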
Be careful with proxy stuff
The things to take into account for the proxy configuration depend on the application, the protocol, the backend service, and the client code.
All of this can have an impact on the user experience and also on the reliability of the service.
Balancing connections and distributing load may be more important than increasing the number of nodes and instances providing the same service.
This applies both to proxy configuration and adoption (nginx, haproxy), and to writing the service behind the proxy (returning the right HTTP codes, closing and freeing connection fds, etc.)
Most of the time, the client does not handle errors with 5xx codes; it just fails.
How many proxies?
All these problems seem related to exposing ports to the general public (the internet). In reality, proxying happens at every level:
- in Docker Swarm, a service is a proxy (L4) that balances requests across its instances
- in Kubernetes, a Service is a proxy (L4) that balances requests across pods, and also applies policies to check metrics and autoscale pod instances
- still in Kubernetes, a NodePort exposes a port on each node, but routes requests to target pods based on round-robin or metrics, so it implements a proxy (L4)
It is important to note that these proxy layers are L4, not L7 — the application level, the level of the HTTP(S) protocol.
At L4 there is no knowledge of the SSL session that binds client to server, so connections stay bound until they are explicitly closed.
The approach to TCP L4 proxy implementation is simple: keep the connection alive until one side demands to close it.
So it is only at L7 that the magic can happen. This is easy to understand: there is no information about the protocol and data passing through an L4 connection, so there is no way to apply routing policies at that level.
A proxy server can keep as many TCP connections as required, and distribute load between them, only if the stream served to the client is not bound to the target end.
Implementing SSL termination on the proxy removes the binding between client and internal service, keeping it only at the edge of the connection.
Security implications
Let's take a configuration where the proxy does not implement SSL termination.
If a client sets up an SSL connection, this route is almost fixed, and it binds the external client to the internal service instance.
If the client wants to probe the internal service, it can overload it by sending requests.
This can be mitigated:
- in the proxy, by measuring the transfer load
- in the internal service, by measuring requests-per-client
Depending on your application and audience, it might be useful to take this into account.
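On the proxy side, even without terminating SSL, nginx's stream module can cap connections and transfer rates per client address. A sketch with hypothetical limits:

```nginx
# L4 mitigation: the proxy cannot see the requests, but it can
# bound connections and bytes per client.
stream {
    limit_conn_zone $binary_remote_addr zone=per_ip:10m;

    server {
        listen 443;
        limit_conn per_ip 10;        # max concurrent TLS connections per client
        proxy_upload_rate   1m;      # throttle bytes/s from the client
        proxy_download_rate 4m;      # throttle bytes/s to the client
        proxy_pass 10.0.0.10:443;
    }
}
```

Request-per-client accounting, by contrast, can only happen where the traffic is decrypted: on the internal service, or on a terminating proxy.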
It is also important to note that not all attacks are effective when the load is distributed. Some attacks may require that only one instance is targeted.
It is easy to imagine an attack that accesses an unwanted resource by changing the service status to a particular state. This attack would be effective only if the client has a guarantee that the connection is bound to the targeted instance.
If the proxy implements SSL termination and applies pure round-robin, every request is distributed among all backend services.
Payload-based proxy setup
HTTP is used for a range of applications, from serving HTML pages, to serving JS, to REST APIs serving Ajax requests.
A REST API is session-less by nature: every single request is self-contained, with all the information needed to identify the issuer, the issuer's authorization, and the requested data.
This type of request can be multiplexed between backend services.
To implement this, it is enough for the proxy to implement TLS termination.
In fact, a REST call does not need the X-Forwarded-* headers to build its response.
In this case a high-availability proxy can have backends serving HTTP/1.1, HTTP/2, HTTP/3.
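A sketch of such a setup: TLS terminated at the proxy, stateless REST calls multiplexed over a pooled set of backends (addresses and paths are hypothetical):

```nginx
upstream api_backend {
    server 10.0.0.10:8080;
    server 10.0.0.11:8080;
    keepalive 32;                       # reuse upstream connections
}

server {
    listen 443 ssl http2;               # HTTP/2 towards the clients
    ssl_certificate     /etc/nginx/certs/api.crt;
    ssl_certificate_key /etc/nginx/certs/api.key;

    location /api/ {
        proxy_pass http://api_backend;
        proxy_http_version 1.1;
        proxy_set_header Connection ""; # required for upstream keepalive
    }
}
```

Each request is routed independently, so any backend instance can serve it.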
Updated by Daniele Cruciani 3 months ago
- Description updated (diff)