This kind of question has probably been asked here before, but I couldn't find one that really matches mine. I've heard that nginx's performance is quite impressive, but Apache has more docs and a bigger community (read: experts) to get help from.
Now what I want to know is how the two web servers compare in terms of performance, ease of configuration, level of customization, etc., as a reverse proxy server in a VPS environment.
I'm still weighing the two for a Ruby web app (not Rails) served with Thin (one of the Ruby web servers).
A specific answer will be much appreciated; a general answer not touching the Ruby part is okay too. I'm still a noob at web server administration.
mhd
5 Answers
I wanted to put this in a comment since I agree with the most important point of webdestroya's answer, but it got a bit too long.
You're in a VPS environment, which means you're most likely going to be low on RAM. For this reason alone you'll want Nginx, as its memory footprint is smaller than Apache's.
Also I do not agree with some of the arguments mentioned.
Easiness of Config:
Nginx is not more difficult than Apache. It's different. If you're used to Apache then change will always be more difficult, this does not mean that the configuration style itself is more difficult. I migrated completely from Apache to Nginx over a year ago and today I would struggle to configure an Apache server whereas I find Nginx extremely easy to configure.
For Ruby:
Nginx has Passenger, however, I usually see it described as the inferior method to connect to Ruby. I am not a Ruby programmer so I cannot verify this but I often see Unicorn and Thin mentioned as better alternatives.
In Conclusion:
Nginx was made to be a reverse proxy. Initially all it did was serve static files and reverse proxy to a backend server via HTTP/1.0. Since then FastCGI, load balancing, and various other features have been added, but its initial design purpose was to serve static files and reverse proxy. And it does this really well.
Apache, by contrast, is a general-purpose web server. I have no doubt that it can reverse proxy perfectly fine, but it was not designed to have a minimal memory footprint, and as a result it requires more resources than Nginx does, which means my initial VPS environment argument comes into play.
Martin Fjordvald
Performance:
Nginx. This server is known to be one of the best-performing web servers and is used by many different companies (notably MediaTemple).
Easiness of Config:
Apache. Apache's config is really simple, and really powerful. Nginx is powerful, but can be very hard to understand, as it seems more like a programming language than a config file.
Level of Customization:
Apache. Apache has tons of mods and other plugins written for it. While Nginx also has plugins made for it, I think Apache has many more than Nginx does.
For Ruby:
I know Nginx can be used as a powerful load balancer with Mongrel/WEBrick. However, Apache has Phusion Passenger, which makes the integration nicer.
Reverse Proxy Winner:
Nginx
Mitch Dempsey
Nginx is event-based, while apache is process-based. Under high load, this makes all the difference in the world... Apache has to fork or start a new thread for each connection, while nginx doesn't. This difference shows up mainly in memory usage, but also in user response time and other performance metrics. Nginx can handle tens of thousands of simultaneous HTTP keepalive connections on modern hardware. Apache will use 1-2 MB of stack for each connection, so doing the math you see that you can only handle a few hundred or maybe a thousand connections simultaneously without starting to swap.
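The back-of-the-envelope math in this answer can be sketched as follows. The 1–2 MB per-connection figure is the answer's own estimate, and the 10 KB figure for an event-based server is an illustrative assumption, not a measured nginx number:

```python
def max_connections(ram_kb, stack_per_conn_kb):
    """How many simultaneous connections fit in RAM before swapping."""
    return ram_kb // stack_per_conn_kb

# A small 512 MB VPS with ~2 MB of stack per Apache connection:
apache = max_connections(512 * 1024, 2 * 1024)  # = 256 connections
# The same box with an event-based server at an assumed ~10 KB per connection:
nginx = max_connections(512 * 1024, 10)         # = 52,428 connections
print(apache, nginx)
```

This is why the difference shows up first in memory usage: the per-connection cost, not raw CPU, caps the concurrency of a process-per-connection design.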
We use nginx in front of Apache and IIS in our environment as a load-balancing and caching proxy, and couldn't be happier. We use two small-ish nginx boxes in place of a pair of very expensive leased F5 devices and our sites are far faster in both feel and measured response times.
rmalayter
I was in the same dilemma as you about two weeks ago.
To give you a really terse answer: from my research, nginx is really fast and resource-friendly, but it was originally conceived to serve static files and reverse proxy. The rest are bolt-on solutions that you have to configure or script your way around.
AFAIK nginx has no .htaccess files, so you'll have to find a way around that if you depend on the feature. Everything needed works, though, and I've seen tutorials.
I will go with nginx for my testing and profiling setup. I have a typical LAMP application.
I have read that some people reverse proxy and serve static files from nginx and pass everything else, like PHP, to a running Apache instance. They claim it's a good tradeoff. I have no performance data about that, but you might want to know.
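The split setup described above might look roughly like this nginx sketch; the server name, document root, and backend port are placeholders, and the Apache instance is assumed to listen on 127.0.0.1:8080:

```nginx
server {
    listen 80;
    server_name example.com;      # placeholder

    # Serve static files directly from nginx.
    location /static/ {
        root /var/www/myapp;      # placeholder path
    }

    # Pass everything else (e.g. PHP) to the Apache instance.
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

Apache then only sees dynamic requests, so its per-connection memory cost applies to far fewer connections.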
deploymonkey
I’ve had serious problems with Apache’s mod_proxy on a variety of platforms in various different environments over the last couple of years. From time to time, it will simply stop working and the only cure seems to be to restart the Apache server.
Personally, I’d not be asking “nginx vs Apache”, but “nginx vs lighttpd” — and that’s a far tougher call!
Mo.
Reverse proxy built into Azure Service Fabric helps microservices running in a Service Fabric cluster discover and communicate with other services that have HTTP endpoints.
Microservices communication model
Microservices in Service Fabric run on a subset of nodes in the cluster and can migrate between nodes for various reasons. As a result, the endpoints for microservices can change dynamically. To discover and communicate with other services in the cluster, a microservice must go through the following steps:
- Resolve the service location through the naming service.
- Connect to the service.
- Wrap the preceding steps in a loop that implements service resolution and retry policies to apply on connection failures.
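The steps above can be sketched roughly as follows. `resolve_endpoint` and `send_request` are hypothetical stand-ins for whatever naming-service client and HTTP library a service actually uses; this is the pattern, not a Service Fabric API:

```python
import time

def call_service(service_name, resolve_endpoint, send_request,
                 retries=3, backoff_seconds=1.0):
    """Resolve a service endpoint, call it, and re-resolve on failure.

    resolve_endpoint(service_name) -> endpoint URL (naming-service lookup)
    send_request(endpoint) -> response; raises ConnectionError on failure
    """
    endpoint = resolve_endpoint(service_name)
    for attempt in range(retries):
        try:
            return send_request(endpoint)
        except ConnectionError:
            # The service may have moved to another node: back off,
            # then resolve the address again before retrying.
            time.sleep(backoff_seconds * (attempt + 1))
            endpoint = resolve_endpoint(service_name)
    raise ConnectionError(f"{service_name} unreachable after {retries} attempts")
```

The reverse proxy exists precisely so that every client service doesn't have to carry this loop itself.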
For more information, see Connect and communicate with services.
Communicating by using the reverse proxy
Reverse proxy is a service that runs on every node and handles endpoint resolution, automatic retry, and other connection failures on behalf of client services. Reverse proxy can be configured to apply various policies as it handles requests from client services. Using a reverse proxy allows the client service to use any client-side HTTP communication libraries and does not require special resolution and retry logic in the service.
Reverse proxy exposes one or more endpoints on the local node for client services to use for sending requests to other services.
Note
Supported Platforms
Reverse proxy in Service Fabric currently supports the following platforms:
- Windows Cluster: Windows 8 and later or Windows Server 2012 and later
- Linux Cluster: Reverse Proxy is not currently available for Linux clusters
Reaching microservices from outside the cluster
The default external communication model for microservices is an opt-in model where each service cannot be accessed directly from external clients. Azure Load Balancer, which is a network boundary between microservices and external clients, performs network address translation and forwards external requests to internal IP:port endpoints. To make a microservice's endpoint directly accessible to external clients, you must first configure Load Balancer to forward traffic to each port that the service uses in the cluster. Furthermore, most microservices, especially stateful microservices, don't live on all nodes of the cluster. The microservices can move between nodes on failover. In such cases, Load Balancer cannot effectively determine the location of the target node of the replicas to which it should forward traffic.
Reaching microservices via the reverse proxy from outside the cluster
Instead of configuring the port of an individual service in Load Balancer, you can configure just the port of the reverse proxy in Load Balancer. This configuration lets clients outside the cluster reach services inside the cluster by using the reverse proxy without additional configuration.
Warning
When you configure the reverse proxy's port in Load Balancer, all microservices in the cluster that expose an HTTP endpoint are addressable from outside the cluster. This means that microservices meant to be internal may be discoverable by a determined malicious user. This potentially presents serious vulnerabilities that can be exploited; for example:
- A malicious user may launch a denial of service attack by repeatedly calling an internal service that does not have a sufficiently hardened attack surface.
- A malicious user may deliver malformed packets to an internal service resulting in unintended behavior.
- A service meant to be internal may return private or sensitive information not intended to be exposed to services outside the cluster, thus exposing this sensitive information to a malicious user.
Make sure you fully understand and mitigate the potential security ramifications for your cluster and the apps running on it, before you make the reverse proxy port public.
URI format for addressing services by using the reverse proxy
The reverse proxy uses a specific uniform resource identifier (URI) format to identify the service partition to which the incoming request should be forwarded:
- http(s): The reverse proxy can be configured to accept HTTP or HTTPS traffic. For HTTPS forwarding, refer to Connect to a secure service with the reverse proxy once you have the reverse proxy set up to listen on HTTPS.
- Cluster fully qualified domain name (FQDN) | internal IP: For external clients, you can configure the reverse proxy so that it is reachable through the cluster domain, such as mycluster.eastus.cloudapp.azure.com. By default, the reverse proxy runs on every node. For internal traffic, the reverse proxy can be reached on localhost or on any internal node IP, such as 10.0.0.1.
- Port: This is the port, such as 19081, that has been specified for the reverse proxy.
- ServiceInstanceName: This is the fully qualified name of the deployed service instance that you are trying to reach, without the 'fabric:/' scheme. For example, to reach the fabric:/myapp/myservice service, you would use myapp/myservice. The service instance name is case-sensitive; using a different casing for it in the URL causes the requests to fail with 404 (Not Found).
- Suffix path: This is the actual URL path, such as myapi/values/add/3, for the service that you want to connect to.
- PartitionKey: For a partitioned service, this is the computed partition key of the partition that you want to reach. Note that this is not the partition ID GUID. This parameter is not required for services that use the singleton partition scheme.
- PartitionKind: This is the service partition scheme. This can be 'Int64Range' or 'Named'. This parameter is not required for services that use the singleton partition scheme.
- ListenerName: The endpoints from the service are of the form {'Endpoints':{'Listener1':'Endpoint1','Listener2':'Endpoint2' ...}}. When the service exposes multiple endpoints, this identifies the endpoint that the client request should be forwarded to. This can be omitted if the service has only one listener.
- TargetReplicaSelector: This specifies how the target replica or instance should be selected.
- When the target service is stateful, the TargetReplicaSelector can be one of the following: 'PrimaryReplica', 'RandomSecondaryReplica', or 'RandomReplica'. When this parameter is not specified, the default is 'PrimaryReplica'.
- When the target service is stateless, reverse proxy picks a random instance of the service partition to forward the request to.
- Timeout: This specifies the timeout for the HTTP request created by the reverse proxy to the service on behalf of the client request. The default value is 60 seconds. This is an optional parameter.
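Putting the parts together, composing a reverse proxy URL might look like this sketch. The helper name is ours for illustration, not part of any Service Fabric SDK:

```python
from urllib.parse import urlencode

def reverse_proxy_url(host, port, service_instance, suffix_path="",
                      partition_key=None, partition_kind=None,
                      listener_name=None, timeout=None, scheme="http"):
    """Compose a Service Fabric reverse proxy URL from its components."""
    url = f"{scheme}://{host}:{port}/{service_instance}"
    if suffix_path:
        url += f"/{suffix_path}"
    params = {}
    if partition_key is not None:
        params["PartitionKey"] = partition_key
    if partition_kind is not None:
        params["PartitionKind"] = partition_kind
    if listener_name is not None:
        params["ListenerName"] = listener_name
    if timeout is not None:
        params["Timeout"] = timeout
    return url + ("?" + urlencode(params) if params else "")

# The partitioned example from this article:
print(reverse_proxy_url("mycluster.eastus.cloudapp.azure.com", 19081,
                        "MyApp/MyService",
                        partition_key=3, partition_kind="Int64Range"))
# -> http://mycluster.eastus.cloudapp.azure.com:19081/MyApp/MyService?PartitionKey=3&PartitionKind=Int64Range
```

For a singleton service, omit the partition parameters and the query string disappears entirely.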
Example usage
As an example, let's take the fabric:/MyApp/MyService service that opens an HTTP listener on the following URL:
http://10.0.0.5:10592/3f0d39ad-924b-4233-b4a7-02617c6308a6-130834621071472715
Following are the resources for the service:
/index.html
/api/users/<userId>
If the service uses the singleton partitioning scheme, the PartitionKey and PartitionKind query string parameters are not required, and the service can be reached by using the gateway as:
- Externally:
http://mycluster.eastus.cloudapp.azure.com:19081/MyApp/MyService
- Internally:
http://localhost:19081/MyApp/MyService
If the service uses the Uniform Int64 partitioning scheme, the PartitionKey and PartitionKind query string parameters must be used to reach a partition of the service:
- Externally:
http://mycluster.eastus.cloudapp.azure.com:19081/MyApp/MyService?PartitionKey=3&PartitionKind=Int64Range
- Internally:
http://localhost:19081/MyApp/MyService?PartitionKey=3&PartitionKind=Int64Range
To reach the resources that the service exposes, simply place the resource path after the service name in the URL:
- Externally:
http://mycluster.eastus.cloudapp.azure.com:19081/MyApp/MyService/index.html?PartitionKey=3&PartitionKind=Int64Range
- Internally:
http://localhost:19081/MyApp/MyService/api/users/6?PartitionKey=3&PartitionKind=Int64Range
The gateway will then forward these requests to the service's URL:
http://10.0.0.5:10592/3f0d39ad-924b-4233-b4a7-02617c6308a6-130834621071472715/index.html
http://10.0.0.5:10592/3f0d39ad-924b-4233-b4a7-02617c6308a6-130834621071472715/api/users/6
Special handling for port-sharing services
The Service Fabric reverse proxy attempts to resolve a service address again and retry the request when a service cannot be reached. Generally, when a service cannot be reached, the service instance or replica has moved to a different node as part of its normal lifecycle. When this happens, the reverse proxy might receive a network connection error indicating that an endpoint is no longer open on the originally resolved address.
However, replicas or service instances can share a host process, and they might also share a port when hosted by an http.sys-based web server.
In this situation, it is likely that the web server is available in the host process and responding to requests, but the resolved service instance or replica is no longer available on the host. In this case, the gateway will receive an HTTP 404 response from the web server. Thus, an HTTP 404 response can have two distinct meanings:
- Case #1: The service address is correct, but the resource that the user requested does not exist.
- Case #2: The service address is incorrect, and the resource that the user requested might exist on a different node.
The first case is a normal HTTP 404, which is considered a user error. However, in the second case, the user has requested a resource that does exist. The reverse proxy was unable to locate it because the service itself has moved. The reverse proxy needs to resolve the address again and retry the request.
The reverse proxy thus needs a way to distinguish between these two cases. To make that distinction, a hint from the server is required.
- By default, the reverse proxy assumes case #2 and attempts to resolve and issue the request again.
- To indicate case #1 to the reverse proxy, the service should return the following HTTP response header:
X-ServiceFabric : ResourceNotFound
This HTTP response header indicates a normal HTTP 404 situation in which the requested resource does not exist, and the reverse proxy will not attempt to resolve the service address again.
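On the service side, a handler for a genuinely missing resource only needs to add that one header. This sketch uses Python's standard http.server purely for illustration; a real Service Fabric service would set the same header in whatever web framework it uses:

```python
from http.server import BaseHTTPRequestHandler

KNOWN_PATHS = {"/index.html"}  # stand-in for the service's real routes

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path not in KNOWN_PATHS:
            # Case #1: the address is right but the resource does not exist.
            # Tell the reverse proxy not to re-resolve and retry.
            self.send_response(404)
            self.send_header("X-ServiceFabric", "ResourceNotFound")
            self.end_headers()
            return
        body = b"hello"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
```

Without the header, the proxy treats the 404 as case #2 and burns its timeout re-resolving and retrying a request that can never succeed.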
Special handling for services running in containers
For services running inside containers, you can use the Fabric_NodeIPOrFQDN environment variable to construct the reverse proxy URL. For the local cluster, Fabric_NodeIPOrFQDN is set to 'localhost' by default. Start the local cluster with the -UseMachineName parameter to make sure containers can reach the reverse proxy running on the node. For more information, see Configure your developer environment to debug containers.
Service Fabric services that run within Docker Compose containers require a special http: or https: configuration in the Ports section of docker-compose.yml. For more information, see Docker Compose deployment support in Azure Service Fabric.
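Inside a container, the construction reduces to reading that environment variable; a minimal sketch, assuming the conventional reverse proxy port 19081 used in the examples above:

```python
import os

def proxy_base_url(port=19081, scheme="http"):
    """Base reverse proxy URL for the node this container runs on.

    Fabric_NodeIPOrFQDN is set by the Service Fabric runtime; we fall
    back to 'localhost' here, matching the local-cluster default.
    """
    host = os.environ.get("Fabric_NodeIPOrFQDN", "localhost")
    return f"{scheme}://{host}:{port}"

# On a local cluster this yields http://localhost:19081; a service path
# such as MyApp/MyService is then appended to it.
print(proxy_base_url())
```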
Next steps
- Set up and configure reverse proxy on a cluster.
- See an example of HTTP communication between services in a sample project on GitHub.
Traditional design says you put your public-facing servers in the DMZ. However, I'm hearing about people who keep their web apps on the internal network and then use a reverse proxy to secure things instead of a DMZ. I'm having a hard time wrapping my head around whether reverse proxies are a secure option to 'replace' a DMZ. Assuming you have an insecure web app and the server gets compromised, do you now have an adversary on the internal network? Or would it all depend on how it was compromised? Can anyone help me wrap my head around how secure reverse proxies are?