Chapter 21. TorqueBox Production Setup

21.1. Example Fedora 15 Production Setup
21.1.1. Package Installation
21.1.2. TorqueBox Installation
21.1.3. Installing TorqueBox as a Startup Service
21.1.4. Request Dispatching with mod_cluster
21.1.5. Capistrano Deployment
21.2. Clustering
21.2.1. Enabling Clustering
21.2.2. Multicast Out of the Box
21.2.3. Don't Bind to 0.0.0.0
21.3. Clustering TorqueBox Without Multicast
21.3.1. Clustering Infinispan
21.3.2. Clustering HornetQ
21.3.3. Clustering mod_cluster
21.4. Sizing Number of HTTP Threads to Connection Pool
21.4.1. Setting Database Connection Pool Size
21.4.2. Setting Max Number of HTTP Threads
21.5. SSL Configuration
21.5.1. SSL Termination at Load Balancer
21.5.2. SSL Termination at TorqueBox

21.1. Example Fedora 15 Production Setup

A basic TorqueBox installation running a Rails 3.x application on a Fedora 15 server is a fairly straightforward setup. This section outlines the steps needed to deploy TorqueBox into a production environment. The example scenario assumes a stock Fedora 15 installation.

21.1.2. TorqueBox Installation

Download the latest release from the TorqueBox website and unzip it. By convention, TorqueBox is placed in /opt/torquebox/current and is owned by the torquebox user. The commands below assume root privileges.
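
If the torquebox user does not already exist, create it first; a minimal sketch using the stock useradd command:

$ useradd -m torquebox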

$ wget http://torquebox.org/release/org/torquebox/torquebox-dist/2.3.1/torquebox-dist-2.3.1-bin.zip
$ mkdir /opt/torquebox
$ chown torquebox:torquebox /opt/torquebox
$ su torquebox
$ unzip torquebox-dist-2.3.1-bin.zip -d /opt/torquebox/
$ cd /opt/torquebox
$ ln -s torquebox-2.3.1 current

To ensure that TORQUEBOX_HOME and the other relevant environment variables are available to system users, add them to /etc/profile.d/torquebox.sh:

export TORQUEBOX_HOME=/opt/torquebox/current
export JBOSS_HOME=$TORQUEBOX_HOME/jboss
export JRUBY_HOME=$TORQUEBOX_HOME/jruby
PATH=$JBOSS_HOME/bin:$JRUBY_HOME/bin:$PATH
        

You can test your installation by logging in with a new shell and running the torquebox command.

$ torquebox
Tasks:
  torquebox deploy ROOT        # Deploy an application to TorqueBox
  torquebox undeploy ROOT      # Undeploy an application from TorqueBox
  torquebox run                # Run TorqueBox (binds to localhost, use -b to ov...
  torquebox rails ROOT         # Create a Rails application at ROOT using the...
  torquebox archive ROOT       # Create a nice self-contained application arc...
  torquebox cli                # Run the JBoss AS7 CLI
  torquebox env [VARIABLE]     # Display TorqueBox environment variables
  torquebox help [TASK]        # Describe available tasks or one specific task
  torquebox list               # List applications deployed to TorqueBox and ...

Check that the server starts correctly by executing torquebox run. Press Ctrl-C (^C) to stop the server and continue setting up your system.
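
As the help text above notes, torquebox run binds to localhost by default; a quick sketch of overriding the bind address with -b so the server accepts connections on other interfaces:

$ torquebox run -b 0.0.0.0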

21.1.4. Request Dispatching with mod_cluster

As with MRI, a TorqueBox production server will typically have a request dispatcher fronting the application, accepting web requests and handing them off to your application. In this case, we will use Apache and mod_cluster. Even though we're not running a cluster of servers, mod_cluster makes it very simple to get Apache and TorqueBox talking to each other, and when the application does outgrow a single backend, it's trivial to add more servers to the cluster.

Download and install mod_cluster using the instructions provided on the mod_cluster downloads page.

After downloading and installing, check the configuration file /etc/httpd/conf.d/mod_cluster.conf. It should look something like this.

LoadModule slotmem_module       modules/mod_slotmem.so
LoadModule proxy_cluster_module modules/mod_proxy_cluster.so
LoadModule advertise_module     modules/mod_advertise.so
LoadModule manager_module       modules/mod_manager.so

<Location /mod_cluster_manager>
    SetHandler mod_cluster-manager
    AllowDisplay On
</Location>

Listen 127.0.0.1:6666
<VirtualHost 127.0.0.1:6666>
 
  <Directory />
    Order deny,allow
    Deny from all
    Allow from all
  </Directory>
 
  KeepAliveTimeout 60
  MaxKeepAliveRequests 0

  EnableMCPMReceive
 
  ManagerBalancerName torquebox-balancer
  AllowDisplay On
  AdvertiseFrequency 5
 
</VirtualHost>

With these settings, Apache httpd accepts web requests on your host and JBoss mod_cluster dispatches those requests to the TorqueBox server process.
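
Fedora 15 manages services with systemd; assuming the stock httpd unit, enable and start Apache as root with:

$ systemctl enable httpd.service
$ systemctl start httpd.service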

21.3. Clustering TorqueBox Without Multicast

By default, when you start TorqueBox in clustered mode, other members of the cluster are discovered using multicast. Sometimes this isn't the desired behavior, either because the environment doesn't support multicast or because the administrator wants direct control over the members of the cluster. In these cases, it's possible to configure TorqueBox to use a predefined set of cluster members.
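
Both of the examples below modify the HA profile ($JBOSS_HOME/standalone/configuration/standalone-ha.xml), which is the configuration used when TorqueBox is started in clustered mode:

$ torquebox run --clustered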

21.3.1. Clustering Infinispan

Infinispan is used for web session replication and can be used for clustered caching if your application is set up appropriately; see Chapter 7, TorqueBox Caching for more details on this setup. Under the hood, Infinispan uses a library called JGroups to handle cluster discovery and transport. An example of configuring Infinispan to cluster without multicast is below.
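
A minimal sketch, assuming the stock AS7 jgroups subsystem in standalone-ha.xml and placeholder member addresses (10.100.10.2 and 10.100.10.3, matching the HornetQ example that follows): switch the default stack to tcp and replace the multicast-based MPING discovery protocol with a static TCPPING member list.

<subsystem xmlns="urn:jboss:domain:jgroups:1.1" default-stack="tcp">
  <stack name="tcp">
    <transport type="TCP" socket-binding="jgroups-tcp"/>
    <!-- Replaces the stock MPING (multicast) discovery protocol.
         List every cluster member in initial_hosts as host[port]. -->
    <protocol type="TCPPING">
      <property name="initial_hosts">10.100.10.2[7600],10.100.10.3[7600]</property>
      <property name="num_initial_members">2</property>
    </protocol>
    ...
  </stack>
</subsystem>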


21.3.2. Clustering HornetQ

HornetQ is used for all messaging. HornetQ doesn't currently use JGroups for its cluster configuration, so it must be configured separately from Infinispan. An example of configuring HornetQ to cluster without multicast is below.

Example 21.2. HornetQ Configuration ($JBOSS_HOME/standalone/configuration/standalone-ha.xml)

<server xmlns="urn:jboss:domain:1.3">
  <profile>
    ...
    <subsystem xmlns="urn:jboss:domain:messaging:1.2">
      <hornetq-server>
        ...
        <connectors>
          <netty-connector name="netty" socket-binding="messaging"/>
          ...
          <netty-connector name="server2-connector" socket-binding="messaging-server2"/>
          <netty-connector name="server3-connector" socket-binding="messaging-server3"/>
        </connectors>
        ...
        <cluster-connections>
          <cluster-connection name="my-cluster">
            <address>
              jms
            </address>
            <connector-ref>
              netty
            </connector-ref>
            <static-connectors>
              <connector-ref>
                server2-connector
              </connector-ref>
              <connector-ref>
                server3-connector
              </connector-ref>
            </static-connectors>
          </cluster-connection>
        </cluster-connections>
        ...
      </hornetq-server>
    </subsystem>
    ...
  </profile>
  <socket-binding-group name="standard-sockets" default-interface="public">
    ...
    <socket-binding name="messaging" port="5445"/>
    ...
    <outbound-socket-binding name="messaging-server2">
      <remote-destination host="10.100.10.2" port="5445"/>
    </outbound-socket-binding>
    <outbound-socket-binding name="messaging-server3">
      <remote-destination host="10.100.10.3" port="5445"/>
    </outbound-socket-binding>
  </socket-binding-group>
</server>
        

Change the outbound socket binding hosts and ports to match your environment. The port should match the value of the messaging socket binding configured on each remote host. Each additional host needs its own netty-connector, a connector-ref under static-connectors, and an outbound-socket-binding element.


21.4. Sizing Number of HTTP Threads to Connection Pool

When running under production load against a database, size the number of HTTP threads concurrently processing web requests to the number of connections available in your database connection pool; otherwise, requests may queue up waiting to grab a connection from the pool and time out. The specific ratio of HTTP threads to database connections depends on your application, but a 1:1 ratio is a good starting point.
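
For a Rails application, the pool size is set with the pool: key in config/database.yml (for example, pool: 16). A minimal sketch of capping HTTP threads to match, assuming the max-connections attribute on the AS7 web subsystem's HTTP connector in $JBOSS_HOME/standalone/configuration/standalone.xml:

<subsystem xmlns="urn:jboss:domain:web:1.1" default-virtual-server="default-host" native="false">
  <!-- Cap concurrent HTTP processing to match the database connection pool -->
  <connector name="http" protocol="HTTP/1.1" scheme="http" socket-binding="http" max-connections="16"/>
  ...
</subsystem>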