Now that we've built a recent version of
mod_jk, let's use one of its newly gained features: load balancing.
Suppose we have two hosts:
node1 runs an Apache and a Tomcat instance, while
node2 runs another Tomcat.
A browser will connect to the host running Apache. Since the load on a single server running a web application can be pretty severe, we're going to share the burden of serving servlets across multiple hosts (in our case, two). And we're going to make mod_jk do that for us.
First, let's take a look at what's needed to get Apache to talk to
mod_jk. These lines should go into your httpd.conf:
LoadModule jk_module /usr/lib/apache/1.3/mod_jk.so
JkWorkersFile /etc/apache/workers.properties
JkLogFile /var/log/apache/mod_jk.log
JkLogLevel info
JkLogStampFormat "[%a %b %d %H:%M:%S %Y] "

<VirtualHost *>
    ServerAdmin firstname.lastname@example.org
    ServerAlias www.example.com
    DocumentRoot /var/www
    ServerName example.com

    JkMount /* loadbalancer
    JkMount /status/* status

    ErrorLog /var/log/apache/example-com-error_log
    CustomLog /var/log/apache/example-com-access_log combined
</VirtualHost>
The JkWorkersFile directive tells
mod_jk where to look for its worker configuration. We're going to create this file in a short while, but let's first look at the other options.
JkMount /* loadbalancer
JkMount /status/* status
These lines route all requests matching /* to our JkWorker named
loadbalancer, and requests to /status/* to the worker named
status. The workers are specified in the workers.properties file named in the JkWorkersFile directive.
Let's have a look at this file:
# workers to contact, that's what you have in your httpd.conf
worker.list=loadbalancer,status

# setup node1
worker.node1.port=8009
worker.node1.host=localhost
worker.node1.type=ajp13
worker.node1.lbfactor=50

# setup node2
worker.node2.port=8009
worker.node2.host=host2
worker.node2.type=ajp13
worker.node2.lbfactor=100

# setup the load balancer
worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=node1,node2
worker.loadbalancer.sticky_session=True
#worker.loadbalancer.sticky_session_force=True

# status worker for managing the load balancer
worker.status.type=status
We need to supply mod_jk with a list of top-level workers; in our case, these are loadbalancer and status:

worker.list=loadbalancer,status
The configuration for our status worker is as easy as it gets:

worker.status.type=status
Configuring the workers that will actually do the hard work is no different in a load-balanced environment:
worker.node1.port=8009
worker.node1.host=localhost
worker.node1.type=ajp13
This is exactly what you'd do if you had only one Tomcat to connect to. Now comes the part that isn't in the standard playbook:
worker.node1.lbfactor=50
[...]
worker.node2.lbfactor=100
These lines tell the load balancer to spread the load 1:2 over node1 and node2. It's the ratio that matters, not the numbers themselves: lbfactors of 2 and 4 would yield the same result.
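To get a feel for what that ratio means in practice, here's a small sketch. Note this is not mod_jk's actual scheduling algorithm, just an illustration: a worker is picked by choosing whichever is furthest behind its target share, so node2 (lbfactor 100) ends up serving twice as many requests as node1 (lbfactor 50).

```python
from fractions import Fraction

# Illustration only -- NOT mod_jk's real scheduler. It shows the effect
# of the lbfactor ratio: node2 should get twice node1's share.
lbfactors = {"node1": 50, "node2": 100}

def pick_worker(counts):
    # Choose the worker with the lowest served/lbfactor ratio,
    # i.e. the one furthest behind its target share.
    return min(lbfactors, key=lambda w: Fraction(counts[w], lbfactors[w]))

counts = {w: 0 for w in lbfactors}
for _ in range(150):
    counts[pick_worker(counts)] += 1

print(counts)  # {'node1': 50, 'node2': 100} -- a 1:2 split
```

Doubling both lbfactors (or halving them) leaves the outcome unchanged, which is exactly the "ratio, not absolute numbers" point above.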
But we haven't even defined our
loadbalancer worker yet; let's do that now:
worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=node1,node2
worker.loadbalancer.sticky_session=True
#worker.loadbalancer.sticky_session_force=True
We're defining a worker named
loadbalancer with its type set to
lb (short for load balancer) and assigning node1 and
node2 to handle the load.
Sticky sessions are an important feature if you rely on JSESSIONIDs and are not using a session-replication layer. If sticky_session is
True, a request is always routed back to the node that assigned its JSESSIONID. If that host gets disconnected, crashes, or otherwise becomes unreachable, the request is forwarded to another host in our cluster (although not too successfully, as the session ID is invalid in its context).
You can prevent this from happening by setting sticky_session_force to
True. In that case, if the host handling the requested session ID fails, Apache responds with an internal server error (HTTP 500).
Now we've told
mod_jk about our setup. If you are using sticky sessions, you'll need to tell Tomcat to append its node ID to the session ID. This needs to match
worker.<name>.jvm_route, which by default is the worker's name (in our case node1 and node2).
Search for the
Engine tag in your server.xml and add the jvmRoute attribute:
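A minimal example of what that looks like, assuming the default Catalina engine and the worker names used above (on node2 the value would be jvmRoute="node2"):

```xml
<!-- server.xml on node1; jvmRoute must match the mod_jk worker name -->
<Engine name="Catalina" defaultHost="localhost" jvmRoute="node1">
    ...
</Engine>
```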
Do that on both Tomcat installations. If you don't, the load balancing will work but only for the first request per session. The following lines will appear in your log file:
[Thu Oct 26 17:28:36 2006] [3986:0000] [info]  get_most_suitable_worker::jk_lb_worker.c (672): all workers are in error state for session 1AB31B3F1F72D673E59D42F4A79E364C
[Thu Oct 26 17:28:36 2006] [3986:0000] [error] service::jk_lb_worker.c (984): All tomcat instances failed, no more workers left for recovery
[Thu Oct 26 17:28:36 2006] [3986:0000] [info]  jk_handler::mod_jk.c (1986): Service error=0 for worker=loadbalancer
This means mod_jk can't find the node that served your session.