JBoss.org Community Documentation

Chapter 21. TorqueBox Production Setup

21.1. Example Fedora 15 Production Setup
21.1.1. Package Installation
21.1.2. TorqueBox Installation
21.1.3. Installing TorqueBox as a Startup Service
21.1.4. Request Dispatching with mod_cluster
21.1.5. Capistrano Deployment
21.2. Clustering
21.2.1. Enabling Clustering
21.2.2. Multicast Out of the Box
21.2.3. Don't Bind to
21.3. Clustering TorqueBox Without Multicast
21.3.1. Clustering Infinispan
21.3.2. Clustering HornetQ
21.4. Sizing Number of HTTP Threads to Connection Pool
21.4.1. Setting Database Connection Pool Size
21.4.2. Setting Max Number of HTTP Threads
21.5. SSL Configuration
21.5.1. SSL Termination at Load Balancer
21.5.2. SSL Termination at TorqueBox

A basic TorqueBox installation running a Rails 3.x application on a Fedora 15 server is fairly straightforward to set up. This section outlines the steps needed to deploy TorqueBox into a production environment. The example scenario assumes a stock Fedora 15 installation.

Download the latest release from the website, and unzip it. By convention, TorqueBox is placed in /opt/torquebox/current and is owned by the torquebox user.

$ wget http://torquebox.org/release/org/torquebox/torquebox-dist/2.0.0.cr1/torquebox-dist-2.0.0.cr1-bin.zip
$ mkdir /opt/torquebox
$ chown torquebox:torquebox /opt/torquebox
$ su torquebox
$ unzip torquebox-dist-2.0.0.cr1-bin.zip -d /opt/torquebox/
$ cd /opt/torquebox
$ ln -s torquebox-dist-2.0.0.cr1 current

To ensure that TORQUEBOX_HOME and other relevant environment variables are available to system users, add them to /etc/profile.d/torquebox.sh:

export TORQUEBOX_HOME=/opt/torquebox/current
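Beyond TORQUEBOX_HOME itself, it is convenient to point JBOSS_HOME and JRUBY_HOME at the copies bundled in the distribution and to put the bundled JRuby on the PATH. A sketch of a fuller profile script, assuming the jboss and jruby directory names used by the distribution zip:

```shell
# /etc/profile.d/torquebox.sh -- a sketch; the jboss/ and jruby/
# subdirectory names assume the standard TorqueBox distribution layout.
export TORQUEBOX_HOME=/opt/torquebox/current
export JBOSS_HOME=$TORQUEBOX_HOME/jboss
export JRUBY_HOME=$TORQUEBOX_HOME/jruby
# Put the bundled JRuby (and the torquebox command) on the PATH
export PATH=$JRUBY_HOME/bin:$PATH
```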

You can test your installation by logging in with a new shell and running the torquebox command.

$ torquebox
  torquebox deploy ROOT        # Deploy an application to TorqueBox
  torquebox undeploy ROOT      # Undeploy an application from TorqueBox
  torquebox run                # Run TorqueBox
  torquebox rails ROOT         # Create a Rails application at ROOT using the...
  torquebox archive ROOT       # Create a nice self-contained application arc...
  torquebox cli                # Run the JBoss AS7 CLI
  torquebox env [VARIABLE]     # Display TorqueBox environment variables
  torquebox help [TASK]        # Describe available tasks or one specific task
  torquebox list               # List applications deployed to TorqueBox and ...

Check to see if the server starts correctly by executing torquebox run. You can just type ^C to kill the server and continue to set up your system.

As with MRI, a TorqueBox production server will typically have a request dispatcher fronting the application, accepting web requests and handing them off to your application. In this case, we will use Apache and mod_cluster to achieve that. Even though we're not running a cluster of servers, mod_cluster makes it very simple to get Apache and TorqueBox talking with each other. And when the application does outgrow a single backend, it's trivial to add more to the cluster.

Download and install mod_cluster using the instructions provided from the mod_cluster downloads page.

After downloading and installing, check the configuration file /etc/httpd/conf.d/mod_cluster.conf. It should look something like this.

LoadModule slotmem_module       modules/mod_slotmem.so
LoadModule proxy_cluster_module modules/mod_proxy_cluster.so
LoadModule advertise_module     modules/mod_advertise.so
LoadModule manager_module       modules/mod_manager.so

<Location /mod_cluster_manager>
    SetHandler mod_cluster-manager
    AllowDisplay On
</Location>

Listen torquebox-balancer:6666
<VirtualHost torquebox-balancer:6666>
  <Directory />
    Order deny,allow
    Deny from all
    Allow from all
  </Directory>

  KeepAliveTimeout 60
  MaxKeepAliveRequests 0

  ManagerBalancerName torquebox-balancer
  AllowDisplay On
  AdvertiseFrequency 5
  AdvertiseSecurityKey secret
</VirtualHost>

Note that you will likely need to add the AdvertiseSecurityKey setting to the configuration file that comes out of the box when you install. You'll also need to set this in your JBoss standalone-ha.xml file.

<mod-cluster-config advertise-socket="modcluster" proxy-list="torquebox-server:6666" advertise-security-key="secret"/>

The torquebox-server and torquebox-balancer names in these examples are host names that we've added to /etc/hosts, each mapped to the IP address of the corresponding machine.

With these settings, you should have Apache's httpd accepting web requests on your host and JBoss mod_cluster dispatching those requests to the TorqueBox system process.

By default when you start TorqueBox in clustered mode other members of the cluster are discovered using multicast. Sometimes this isn't the desired behavior, either because the environment doesn't support multicast or the administrator wants direct control over the members of a cluster. In these cases, it's possible to configure TorqueBox to use a predefined set of cluster members.

Infinispan is used for web session replication and can be used for clustered caching if your application is set up appropriately; see Chapter 7, TorqueBox Caching for more details. Under the hood, Infinispan uses a library called JGroups to handle cluster discovery and transport. An example of configuring Infinispan to cluster without multicast is below.
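As a rough sketch of a non-multicast JGroups setup (the host names, ports, and protocol list here are illustrative assumptions, not taken verbatim from a stock standalone-ha.xml), the jgroups subsystem can be switched to a TCP stack whose discovery protocol, TCPPING, takes a static list of cluster members:

```xml
<!-- Sketch: jgroups subsystem in standalone-ha.xml using a TCP-only stack.
     Host names, ports, and the exact protocol list are illustrative. -->
<subsystem xmlns="urn:jboss:domain:jgroups:1.1" default-stack="tcp">
  <stack name="tcp">
    <transport type="TCP" socket-binding="jgroups-tcp"/>
    <!-- TCPPING replaces multicast discovery with an explicit member list -->
    <protocol type="TCPPING">
      <property name="initial_hosts">torquebox-server1[7600],torquebox-server2[7600]</property>
      <property name="port_range">0</property>
    </protocol>
    <protocol type="MERGE2"/>
    <protocol type="FD_SOCK" socket-binding="jgroups-tcp-fd"/>
    <protocol type="FD"/>
    <protocol type="VERIFY_SUSPECT"/>
    <protocol type="pbcast.NAKACK"/>
    <protocol type="UNICAST2"/>
    <protocol type="pbcast.STABLE"/>
    <protocol type="pbcast.GMS"/>
  </stack>
</subsystem>
```

Every member of the cluster must appear in initial_hosts, and each host's own entry should match the port of its jgroups-tcp socket binding.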

HornetQ is used for all messaging. Right now HornetQ doesn't use JGroups for its cluster configuration, so we must configure it separately from Infinispan. An example of configuring HornetQ to cluster without multicast is below.

Example 21.2. HornetQ Configuration ($JBOSS_HOME/standalone/configuration/standalone-ha.xml)

<server name="xyz" xmlns="urn:jboss:domain:1.1">
  ...
  <subsystem xmlns="urn:jboss:domain:messaging:1.1">
    ...
    <netty-connector name="netty" socket-binding="messaging"/>
    <netty-connector name="server2-connector" socket-binding="messaging-server2"/>
    <netty-connector name="server3-connector" socket-binding="messaging-server3"/>
    ...
    <cluster-connection name="default-cluster-connection">
      <address>jms</address>
      <connector-ref>netty</connector-ref>
      <static-connectors>
        <connector-ref>server2-connector</connector-ref>
        <connector-ref>server3-connector</connector-ref>
      </static-connectors>
    </cluster-connection>
    ...
  </subsystem>
  ...
  <socket-binding-group name="standard-sockets" default-interface="public">
    ...
    <socket-binding name="messaging" port="5445"/>
    <outbound-socket-binding name="messaging-server2">
      <remote-destination host="" port="5445"/>
    </outbound-socket-binding>
    <outbound-socket-binding name="messaging-server3">
      <remote-destination host="" port="5445"/>
    </outbound-socket-binding>
    ...
  </socket-binding-group>
</server>

Change the outbound socket binding hosts and ports to match your environment. The port should match the value of the messaging socket binding configured on each host. Each additional host needs the netty-connector, connector-ref under static-connectors, and outbound-socket-binding elements.

When running under load in production against a database, size the number of HTTP threads concurrently processing web requests to match the number of connections available in your database connection pool; otherwise too many requests end up waiting to grab a connection from the pool and timing out. The specific ratio of HTTP threads to database connection pool size will depend on your application, but 1 to 1 is a good starting point.
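For a Rails application, the database side of that ratio is set in config/database.yml (the adapter, database name, and pool size below are illustrative):

```yaml
# config/database.yml -- illustrative values
production:
  adapter: postgresql
  database: myapp_production
  # Size of the database connection pool. With a 1-to-1 ratio, cap the
  # number of HTTP threads at this same number.
  pool: 16
```

With this pool size and a 1-to-1 ratio, the HTTP thread pool would then also be capped at 16.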