How do I use HTTP Load Balancing?

Load balancing ensures high system availability by distributing the workload across multiple components. Using several load-balanced components instead of a single one increases reliability through redundancy.
 
CloudJiffy uses two types of load balancing: TCP and HTTP.
 

HTTP load balancing is the primary method in CloudJiffy, with NGINX acting as the reverse proxy server for HTTP traffic.

NGINX is one of the most popular open-source web servers in the world, giving customers greater performance and efficiency for their applications. Using NGINX requires no extra deployment steps or pre-configuration. It offers built-in Layer 7 load balancing and content caching, providing a cost-effective and highly available platform for hosted applications, while its scalability, security, and low memory and CPU footprint make it exceptionally fast.

Let's examine how HTTP load balancing works in CloudJiffy.

The balancer acts as a frontend: it receives all incoming HTTP requests and distributes them among the backends, i.e. the application servers. It performs two-level balancing based on cookies.

The first level operates on a single node; the second operates on a group of nodes bound by the same sticky session.


When a user makes an HTTP request, the balancer sets two cookies:

C1 - node ID 
C2 - group ID 

The first cookie (node ID) routes the request to the required node (server). If that node suddenly dies, the balancer stops routing to it and instead picks a server that is still working. This active server is chosen, with the help of the second cookie (group ID), from the group of nodes that share the failed node's sticky session.

Note: storage is not shared between load-balanced instances; it is shared only between replicated ones.

