Amazon Web Services’ Elastic Load Balancers can offer a great combination of security, availability, and performance for your JSS deployment. Keeping your servers behind an ELB makes them harder for attackers to compromise, since they’re not exposed directly to the public internet. With an ELB, you can run multiple Tomcat servers behind a single load balancer for redundancy, so if one of your servers goes down, the JSS service keeps running. Additionally, terminating SSL at the load balancer takes the expensive SSL workload off the webapps and frees up cycles for other tasks.
Elastic Load Balancers aren’t terribly difficult to set up, but there are a few gotchas to be aware of when using them with the JSS.
For maximum redundancy, you’re going to want to build your JSS across two availability zones within a region. For the purposes of this exercise, we’ll call these zones 1a and 1b. Setting up your VPC subnets in these availability zones is outside the scope of this article. You’ll want two Tomcat servers, one in each availability zone. We’ll refer to these as JSS1a and JSS1b.
It’s important to first understand the desired traffic flow. The Elastic Load Balancer will be listening to HTTPS connections on port 443, and forwarding the traffic on to the JSS servers via HTTP over port 8080. This is illustrated in the following diagram.
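In text form, the flow looks like this:

```
Client --HTTPS:443--> Elastic Load Balancer --HTTP:8080--> JSS1a / JSS1b
```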
Setting up AWS Security Groups
We will need to configure two security groups for this configuration: one for the Elastic Load Balancer, and one for the Tomcat servers.
In Amazon Web Services under the EC2 service, find “Security Groups.”
Next, click “Create Security Group.”
Elastic Load Balancer Security Group
We will be asked to give the Security Group a name and description, as well as assign it to a VPC. Make sure you are assigning it to the same VPC that you intend to host your Tomcat servers and load balancers.
We will also need to configure a rule that allows all traffic on port 443, as shown in the screenshot below.
Once you create the security group, find it in the list of security groups and make a note of the Group ID, as you will need it in the next step.
Tomcat Server Security Group
Create an additional security group, like before. This time we want to allow traffic on port 8080, but only from the security group we created for the load balancer. This is accomplished by placing the Security Group ID for the load balancer in the Source section with the tag Custom IP, like so.
The two security groups are now configured, and we’re ready to move on to the next step.
Tomcat Server Configuration
First, we must set up our Tomcat servers to work behind the load balancer. This can be done either through the JSS web interface or by editing server.xml on the Tomcat server. You will need:
- Two (or more) Tomcat servers with the JSS installed
Option 1: Through the JSS Web Interface
This can be done in the JSS web interface by navigating to Settings > System Settings > Apache Tomcat Settings. Click edit, then choose “Configure Tomcat for working behind a load balancer.”
You’ll want to enable “Remote IP Valve” and “Proxy Port”, put “443” into the Proxy Port field, and select HTTPS under Scheme.
Option 2: Editing Tomcat’s server.xml
If you’re unable to access the JSS web interface, or if you prefer working in the command line, you can change these settings directly in Tomcat’s server.xml. On a Linux system, you can find server.xml at the following path.
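On a default Linux installation of the JSS, server.xml typically lives in Tomcat’s conf directory (the exact path can vary by installer version, so verify on your system):

```
/usr/local/jss/tomcat/conf/server.xml
```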
You’ll want to find the connector for port 8080. It will usually start with:
<Connector URIEncoding="UTF-8" port="8080"
Make that line look like the following:
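A sketch of the modified connector: proxyPort and scheme are the additions described in this article, while the other attributes are illustrative stand-ins for whatever your existing connector already has.

```xml
<Connector URIEncoding="UTF-8" port="8080"
           protocol="HTTP/1.1"
           connectionTimeout="20000"
           proxyPort="443"
           scheme="https" />
```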
You’ll also want to find the <Host> tag, usually towards the end of server.xml. Add the following line inside the <Host> tags.
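The line in question is Tomcat’s standard Remote IP Valve:

```xml
<Valve className="org.apache.catalina.valves.RemoteIpValve" />
```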
About These Settings
Remote IP Valve
In computer records in the JSS, you will see two IP addresses: IP Address and Reported IP Address. The Reported IP Address is the address the computer has on its local network; IP Address is the address the JSS sees the computer connecting from. With the JSS behind a load balancer, all requests technically come from said load balancer. Enabling the Remote IP Valve tells Tomcat to look at the client IP address carried in the request headers rather than the IP address of the actual connection. Note: some load balancers require additional settings for the Remote IP Valve to work; Elastic Load Balancers, however, set the necessary X-Forwarded-For header automatically.
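To illustrate the idea (this is not the JSS’s actual code, and the addresses are hypothetical), here is roughly what the Remote IP Valve does with the X-Forwarded-For header the ELB adds:

```python
# Illustration of what Tomcat's RemoteIpValve does with the
# X-Forwarded-For header an Elastic Load Balancer adds.
def client_ip(headers, connection_ip):
    """Return the originating client IP, preferring X-Forwarded-For."""
    xff = headers.get("X-Forwarded-For")
    if xff:
        # The left-most entry is the original client; any later
        # entries are proxies the request passed through.
        return xff.split(",")[0].strip()
    return connection_ip

# Without the valve, the JSS records the load balancer's address:
print(client_ip({}, "10.0.1.25"))
# With the header honored, it records the real client:
print(client_ip({"X-Forwarded-For": "203.0.113.7, 10.0.1.25"}, "10.0.1.25"))
```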
Proxy Port
Since Tomcat is listening on a different port and protocol than clients are using to talk to the JSS, we have to tell Tomcat to reply using the originating port rather than the port it’s listening on. Since our Elastic Load Balancer will be listening on port 443, we want to make sure we specify port 443 here.
Scheme
Since we’re terminating SSL at the Elastic Load Balancer and the Tomcat servers are accepting requests over plain HTTP, we need to tell Tomcat to build its responses (redirects and generated URLs) as though the requests had arrived over HTTPS.
Disable Tomcat’s HTTPS Connector
The last thing you’ll need to do to properly configure Tomcat is disable the HTTPS connector. This must be done by editing the server.xml file.
Find the line that starts with:
<Connector URIEncoding="UTF-8" port="8443"
and comment the entire connector out by placing XML comment tags on either end, like so.
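With the comment tags in place, the connector would look something like this (its remaining attributes left exactly as they were):

```xml
<!--
<Connector URIEncoding="UTF-8" port="8443" ... />
-->
```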
After making all of these changes, restart Tomcat.
Here is an example of a properly configured server.xml.
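As a sketch, the relevant portions of server.xml come together like this. Elements not discussed in this article are omitted, and attribute values other than those covered above are illustrative:

```xml
<Server port="8005" shutdown="SHUTDOWN">
  <Service name="Catalina">
    <!-- HTTP connector, replying as HTTPS on port 443 -->
    <Connector URIEncoding="UTF-8" port="8080"
               protocol="HTTP/1.1"
               proxyPort="443"
               scheme="https" />

    <!-- HTTPS connector disabled: SSL terminates at the ELB
    <Connector URIEncoding="UTF-8" port="8443" ... />
    -->

    <Engine name="Catalina" defaultHost="localhost">
      <Host name="localhost" appBase="webapps">
        <!-- Trust the client IP in X-Forwarded-For -->
        <Valve className="org.apache.catalina.valves.RemoteIpValve" />
      </Host>
    </Engine>
  </Service>
</Server>
```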
Assign Security Group to Tomcat Servers
Finally, we need to assign the security group that allows traffic from the load balancer to the Tomcat servers.
Under EC2, find Instances, then select your Tomcat servers. Click Actions > Networking > Change Security Groups.
Find the Security Group we created for the Tomcat servers, check the box, and click Assign Security Groups.
The Tomcat servers are now configured for working behind an Elastic Load Balancer.
Elastic Load Balancer Configuration
Next, we will configure the Elastic Load Balancer. You will need:
- An SSL certificate signed by a publicly trusted Certificate Authority in PEM format
Setting up the Load Balancer
In Amazon Web Services under the EC2 service, find “Load Balancers” in the left sidebar.
Next, click on “Create Load Balancer”.
First, we’ll have to name the load balancer and select a VPC. Make sure you place the load balancer in the same VPC as the Tomcat servers.
Next, we’ll have to configure the listener. This is where we tell the load balancer on what port and protocol to listen, and on what port and protocol to forward traffic to the Tomcat servers. We want to listen on HTTPS via port 443 and forward traffic over HTTP via port 8080.
NOTE: If you already have clients enrolled to your JSS and you were previously using port 8443 (the default on JSS installations), you will need to add an additional HTTPS listener on port 8443 forwarding to 8080.
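For reference, the same listener setup can be sketched with the AWS CLI for a classic ELB. The load balancer name, certificate ARN, subnet IDs, and security group ID below are placeholders for your own values:

```shell
# Classic ELB: listen on HTTPS 443, forward to the instances on HTTP 8080.
aws elb create-load-balancer \
  --load-balancer-name my-jss-elb \
  --listeners "Protocol=HTTPS,LoadBalancerPort=443,InstanceProtocol=HTTP,InstancePort=8080,SSLCertificateId=arn:aws:iam::123456789012:server-certificate/my-jss-cert" \
  --subnets subnet-aaaa1111 subnet-bbbb2222 \
  --security-groups sg-0123456789abcdef0
```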
We are then asked to select two subnets in different availability zones. Make sure the subnets you select are configured to automatically assign public IP addresses to hosts in the subnet, otherwise the load balancer won’t be reachable from the public internet. When you’ve added two subnets, your screen should look like this.
Next, we are asked to assign security groups. Assign the security group we created earlier for the load balancer, then click next.
Now we need to add the SSL certificate for the load balancer. If you have not already added the SSL certificate to AWS, you will need to choose Upload a new SSL certificate, and paste in the public and private keys for your certificate.
We will then need to configure the health check. The health check is what the Elastic Load Balancer uses to make sure that a Tomcat server is up before sending traffic.
I haven’t had good luck using an HTTP request as a reliable check, so I prefer to use a TCP ping over port 8080.
UPDATE: My good friend Tom Larkin pointed out to me that a few versions ago, JAMF put a health check into the JSS. From Tom: “If you do a GET request against /healthCheck.html it will have a JSON response of two square brackets ‘[]’ and that means all is well. This is a lot more intelligent than a ping.”
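If you use that endpoint, the classic ELB health check fields would look something like this (a sketch; the port assumes the HTTP connector configured earlier):

```
Ping Protocol: HTTP
Ping Port:     8080
Ping Path:     /healthCheck.html
```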
You can set your health check intervals and timeouts to your preference, but I find leaving the defaults works just fine.
Finally, we have to specify the EC2 instances to which the load balancer will send traffic. Select all of the Tomcat servers that were configured to work behind this load balancer, and click next.
You will then be given the option to give the load balancer any tags you wish. These do not affect the functionality of the load balancer in any way, so you do not need to fill in anything here if you don’t want to. When you are finished with your tags, click Review and Create.
Look over your settings, then click Create.
Now that the Elastic Load Balancer has been created, we need to make a few tweaks to ensure it functions properly.
First, we have to enable stickiness to ensure that when a managed client communicates with the JSS, it keeps talking to the same Tomcat server across the series of requests that make up an operation like an enrollment or policy run. If enrollments and policies are failing when the JSS is behind a load balancer, lack of stickiness is almost always the cause.
To set stickiness, select the load balancer, and find “Port Configuration” located under the Description tab, then click edit.
You’ll want to select “Enable Load Balancer Generated Cookie Stickiness” and set a time. I find 300 seconds works well in most environments, but you may have to tweak this for your specific environment.
Next, we want to double check and make sure our EC2 instances are available to the load balancer. Click on the Instances tab and make sure you see “InService” next to your instances.
Finally, we have to point the DNS address for our JSS to the new load balancer. To do this, look under the Description tab for the DNS Name. It will be a fairly long string starting with the name you gave the load balancer and ending with .elb.amazonaws.com.
Create a CNAME record for your JSS hostname (something like jss.mycompany.com) pointing at this DNS name, and you should now have a fully functioning load-balanced JSS!
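In zone-file form, the record would look something like this (both names are hypothetical):

```
jss.mycompany.com.  IN  CNAME  my-jss-elb-1234567890.us-east-1.elb.amazonaws.com.
```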
If you’re having trouble getting it working, let me know in the comments and I’ll try to help.