
<document> <header> <title>Resin Clustering</title> <description>

As traffic increases beyond a single server, Resin's clustering lets you add new machines to handle the load and simultaneously improves uptime and reliability by failing over requests from a downed or maintenance server to a backup transparently.

</description> </header>

<body>

<localtoc/>

<s1 title="Persistent Sessions">

A session needs to stay on the same JVM that started it. Otherwise, each JVM would see only every second or third request for that session and lose track of its state.

To make sure that sessions stay on the same JVM, Resin encodes the session cookie with the server index. For a three-server cluster, the servers generate cookies like:

<deftable> <tr>

 <th>index</th>
 <th>cookie prefix</th>

</tr> <tr>

 <td>1</td>
 <td>axxx</td>

</tr> <tr>

 <td>2</td>
 <td>bxxx</td>

</tr> <tr>

 <td>3</td>
 <td>cxxx</td>

</tr> </deftable>

On the web-tier, Resin decodes the cookie and sends the request to the matching server, so a session id like bacX8ZwooOz would go to app-b.

In the infrequent case that app-b fails, Resin sends the request to app-a. The user might lose the session, but that's a minor problem compared to showing a connection failure error.

The following example is a typical configuration for a distributed server using an external hardware load-balancer, i.e. where each Resin instance acts as the HTTP server. Each server is started as -server app-a or -server app-b to select its specific configuration.

In this example, sessions are only stored when the server shuts down, either for maintenance or when upgrading to a new version of the server. This is the most lightweight configuration and doesn't affect performance significantly. If the hardware or the JVM crashes, however, the sessions will be lost. (If you want sessions to survive hardware or JVM crashes, remove the <save-only-on-shutdown/> flag.)

<example title="resin.xml"> <resin xmlns="http://caucho.com/ns/resin"> <cluster id="app-tier">

 <server-default>
   <http port='80'/>
 </server-default>
 <server id='app-a' address='192.168.0.1'/>
 <server id='app-b' address='192.168.0.2'/>
 <server id='app-c' address='192.168.0.3'/>
 <web-app-default>
   <!-- enable tcp-store for all hosts/web-apps -->
   <session-config>
     <use-persistent-store/>
     <save-only-on-shutdown/>
   </session-config>
 </web-app-default>
 ...

</cluster> </resin> </example>

<s2 title="Choosing a backend server">

Requests can be made to specific servers in the app-tier. The web-tier uses the value of the jsessionid to maintain sticky sessions. You can include an explicit jsessionid to force the web-tier to use a particular server in the app-tier.

Resin uses the first character of the jsessionid to identify the backend server to use, starting with 'a' as the first backend server. If www.example.com resolves to your web-tier, then you can use:

  1. http://www.example.com/proxooladmin;jsessionid=abc
  2. http://www.example.com/proxooladmin;jsessionid=bcd
  3. http://www.example.com/proxooladmin;jsessionid=cde
  4. http://www.example.com/proxooladmin;jsessionid=def
  5. http://www.example.com/proxooladmin;jsessionid=efg
  6. etc.

</s2>

<s2 title="File Based">

For single-server configurations, the "cluster" store saves session data on disk, allowing sessions to be recovered after a server restart or during development.

Sessions are stored as files in the resin-data directory. When a session changes, the updates are written to its file. After Resin loads a web-app, it restores the stored sessions.
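A minimal single-server sketch, assuming the default resin-data location:

<example title="Example: single-server persistent sessions"> <resin xmlns="http://caucho.com/ns/resin"> <cluster id="app-tier">

 <server id="app-a" address="127.0.0.1" port="6800"/>
 <web-app-default>
   <session-config>
     <!-- sessions are written to files under resin-data and
          reloaded when the server restarts -->
     <use-persistent-store/>
   </session-config>
 </web-app-default>

</cluster> </resin> </example>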

</s2>

<s2 title="Distributed Sessions">

Distributed sessions are intrinsically more complicated than single-server sessions. A single-server session can be implemented as a simple memory-based Hashtable. Distributed sessions must communicate between machines to ensure the session state remains consistent.

Load balancing with multiple machines uses either sticky sessions or symmetrical sessions. Sticky sessions put more intelligence on the load balancer, and symmetrical sessions put more intelligence on the JVMs. The choice depends on your hardware, the number of machines, and how you use sessions.

Distributed sessions can use a database as a backing store, or they can distribute the backup among all the servers using TCP.

<s3 title="Symmetrical Sessions">

Symmetrical sessions happen with dumb load balancers like DNS round-robin. A single session may bounce from machine A to machine B and back to machine A. For JDBC sessions, the symmetrical case needs the always-load-session attribute described below: each request must load the most up-to-date version of the session.

In a symmetrical environment, distributed sessions are required for sessions to work at all; otherwise the state would end up scattered across the JVMs. However, because each request must reload its session data, this is less efficient than sticky sessions.

</s3>

<s3 title="Sticky Sessions">

Sticky sessions require more intelligence on the load-balancer, but are easier for the JVM. Once a session starts, the load-balancer will always send it to the same JVM. Resin's load balancing, for example, encodes the session id as 'aaaXXX' and 'baaXXX'. The 'aaa' session will always go to JVM-a and 'baa' will always go to JVM-b.

Distributed sessions with a sticky session environment add reliability. If JVM-a goes down, JVM-b can pick up the session without the user noticing any change. In addition, distributed sticky sessions are more efficient. The distributor only needs to update sessions when they change. So if you update the session once when the user logs in, the distributed sessions can be very efficient.

</s3>

<s3 title="always-load-session">

Symmetrical sessions must use the 'always-load-session' flag to reload session data on each request. always-load-session is only needed for jdbc-store sessions; tcp-store sessions use a more sophisticated protocol that eliminates the need for it, so tcp-store ignores the always-load-session flag.

The always-load-session attribute forces sessions to check the store for each request. By default, sessions are only loaded from persistent store when they are created. In a configuration with multiple symmetric web servers, sessions can be loaded on each request to ensure consistency.
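A sketch enabling always-load-session for a symmetric configuration:

<example> <web-app id="/foo"> <session-config>

 <use-persistent-store/>
 <!-- reload session data from the store on every request -->
 <always-load-session/>

</session-config> </web-app> </example>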

</s3>

<s3 title="always-save-session">

By default, Resin only saves session data when you add new values to the session object, i.e. if the request calls setAttribute. This may be insufficient when storing large objects. For example, if you change an internal field of a large object, Resin will not automatically detect that change and will not save the session object.

With always-save-session Resin will always write the session to the store at the end of each request. Although this is less efficient, it guarantees that updates will get stored in the backup after each request.

</s3>

</s2>


<s2 title="Cluster Sessions">

The distributed cluster store saves the sessions across the cluster servers. In some configurations, the cluster store may be more efficient than the database store; in others the database store will be more efficient.

With cluster sessions, each session has an owning JVM and a backup JVM. The session is always stored in both the owning JVM and the backup JVM.

The cluster store is configured in the <cluster>. It uses the <server> hosts in the <cluster> to distribute the sessions. The session store is enabled in the <session-config> with the <use-persistent-store> tag.

<example> <resin xmlns="http://caucho.com/ns/resin">

 ...
 <cluster id="app-tier">
   <server id="app-a" host="192.168.0.1" port="6802"/>
   <server id="app-b" host="192.168.0.2" port="6802"/>
   ...
 </cluster>

</resin> </example>

The configuration is enabled in the web-app.

<example> <web-app xmlns="http://caucho.com/ns/resin">

 <session-config>
   <use-persistent-store/>
 </session-config>

</web-app> </example>

The <server> hosts are treated as a cluster of servers. Each server uses the other servers as backups. When a session changes, the updates are sent to the backup server. When a server starts, it looks up old sessions in the other servers to update its own version of the persistent store.

<example title="Symmetric load-balanced servers"> <resin xmlns="http://caucho.com/ns/resin"> <cluster id="app-tier">

 <server-default>
   <http port='80'/>
 </server-default>
 <server id="app-a" address="192.168.2.10" port="6802"/>
 <server id="app-b" address="192.168.2.11" port="6803"/>
 <host id="">
 <web-app id="">
   <session-config>
     <use-persistent-store/>
   </session-config>
 </web-app>
 </host>

</cluster> </resin> </example> </s2>

<s2 title="Clustered Distributed Sessions">

Resin's cluster protocol for distributed sessions is an alternative to JDBC-based distributed sessions. In some configurations, cluster-stored sessions will be more efficient than JDBC-based sessions. Because sessions are always duplicated on separate servers, cluster sessions have no single point of failure. As the number of servers increases, JDBC-based sessions can start overloading the backing database; with clustered sessions, each additional server shares the backup load, so the main scalability issue reduces to network bandwidth. Like JDBC-based sessions, the cluster store uses sticky-session caching to avoid unnecessary network traffic.

</s2>

<s2 title="Configuration">

The cluster configuration must tell each host the servers in the cluster, and it must enable the persistent store in the session configuration with <a href="../reference/session-tags.xtp#session-config">use-persistent-store</a>. Because session configuration is specific to a virtual host and a web-application, each web-app needs use-persistent-store enabled individually. The <a href="../reference/webapp-tags.xtp#web-app-default">web-app-default</a> tag can be used to enable distributed sessions across an entire site; a sketch follows the example below.

<example title="resin.xml fragment"> <resin xmlns="http://caucho.com/ns/resin">

 ...
 
 <cluster id="app-tier">
   <server id="app-a" host="192.168.0.1"/>
   <server id="app-b" host="192.168.0.2"/>
   <server id="app-c" host="192.168.0.3"/>
   <server id="app-d" host="192.168.0.4"/>
   ...
   <host id="">
   <web-app id='myapp'>
     ...
     <session-config>
       <use-persistent-store/>
     </session-config>
   </web-app>
   </host>
 </cluster>

</resin> </example>
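As mentioned above, web-app-default can enable distributed sessions site-wide instead of per web-app; a minimal sketch:

<example title="resin.xml fragment"> <resin xmlns="http://caucho.com/ns/resin"> <cluster id="app-tier">

 ...
 <web-app-default>
   <session-config>
     <use-persistent-store/>
   </session-config>
 </web-app-default>

</cluster> </resin> </example>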

Usually, all servers share the same resin.xml. Each server is started with a different -server xx to select the correct <server> block. The startup looks like:

<example title="Starting Server C"> resin-4.0.x> java -jar lib/resin.jar -conf conf/resin.xml -server c start </example>

<s3 title="always-save-session">

Resin's distributed sessions needs to know when a session has changed in order to save the new session value. Although Resin can detect when an application calls HttpSession.setAttribute, it can't tell if an internal session value has changed. The following Counter class shows the issue:

<example title="Counter.java"> package test;

public class Counter implements java.io.Serializable {

 private int _count;
 public int nextCount() { return _count++; }

} </example>

Assuming a copy of the Counter is saved as a session attribute, Resin doesn't know if the application has called nextCount. If it can't detect a change, Resin will not back up the new session unless always-save-session is set. When always-save-session is true, Resin backs up the session on every request.

<example> ... <web-app id="/foo"> ... <session-config>

 <use-persistent-store/>
 <always-save-session/>

</session-config> ... </web-app> </example>
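Alternatively, since Resin does detect HttpSession.setAttribute, an application can avoid always-save-session by re-setting the attribute after mutating the object; a hypothetical servlet fragment:

<example> Counter counter = (Counter) session.getAttribute("counter");

 counter.nextCount();
 // re-setting the attribute tells Resin the session has changed,
 // so the new value is backed up without always-save-session
 session.setAttribute("counter", counter);

</example>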


</s3>

<s3 title="Serialization">

Resin's distributed sessions rely on Hessian serialization to save and restore sessions. Application objects must implement java.io.Serializable for distributed sessions to work.
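For example, fields that can't be serialized, such as loggers or open connections, can be marked transient so the rest of the object still serializes; a hypothetical session attribute:

<example title="Cart.java"> package test;

public class Cart implements java.io.Serializable {

 // serializable session state
 private java.util.ArrayList<String> _items = new java.util.ArrayList<String>();
 // transient fields are skipped by serialization and must be
 // reinitialized by the application after the session is restored
 private transient java.util.logging.Logger _log;

 public void add(String item) { _items.add(item); }

} </example>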

</s3>

</s2>

<s2 title="Protocol Examples">

<s3 title="Session Request">

To see how cluster sessions work, consider a case where the load balancer sends the request to a random host. Server C owns the session but the load balancer gives the request to Server A. In the following figure, the request modifies the session so it must be saved as well as loaded.

<figure src="srunc.gif"/>

The session id encodes the owning host. The example session id, ca8MbyA, decodes to a server index of 3, mapping to Server C. Resin determines the backup host from the cookie as well. Server A must know the owning host for every cookie so it can communicate with the owning srun. The example configuration defines all the sruns Server A needs to know about. If Server C is unavailable, Server A can use its configuration knowledge to use Server D as a backup for ca8MbyA instead.

When the request first accesses the session, Server A asks Server C for the serialized session data (2:load). Since Server A doesn't cache the session data, it must ask Server C for an update on each request. For requests that only read the session, this TCP load is the only extra overhead, i.e. they can skip 3-5. The always-save-session flag, in contrast, will always force a write.

At the end of the request, Server A writes any session updates to Server C (3:store). If always-save-session is false and the session doesn't change, this step can be skipped. Server A sends the new serialized session contents to Server C. Server C saves the session on its local disk (4:save) and saves a backup to Server D (5:backup).

</s3>

<s3 title="Sticky Session Request">

Smart load balancers that implement sticky sessions can improve cluster performance. In the previous example, Resin's cluster sessions maintain consistency for dumb load balancers or twisted clients like the AOL browsers; the cost is the additional network traffic for 2:load and 3:store. Smart load-balancers can avoid the network traffic of steps 2 and 3.

<figure src="same_srun.gif"/>

Server C decodes the session id, caaMbyA. Since it owns the session, Server C gives the session to the servlet with no work and no network traffic. For a read-only request, there's zero overhead for cluster sessions. So even a semi-intelligent load balancer will gain a performance advantage. Normal browsers will have zero overhead, and bogus AOL browsers will have the non-sticky session overhead.

A session write saves the new serialized session to disk (2:save) and to Server D (3:backup). always-save-session will determine if Resin can take advantage of read-only sessions or must save the session on each request.

</s3>

<s3 title="Disk copy">

Resin stores a disk copy of the session information, in the location specified by the path. The disk copy serves two purposes. The first is that it allows Resin to keep session information for a large number of sessions. An efficient memory cache keeps the most active sessions in memory and the disk holds all of the sessions without requiring large amounts of memory. The second purpose of the disk copy is that the sessions are recovered from disk when the server is restarted.

</s3>

<s3 title="Failover">

Since the session always has a current copy on two servers, the load balancer can direct requests to the next server in the ring. The backup server is always ready to take control. The failover will succeed even for dumb load balancers, as in the non-sticky-session case, because the srun hosts will use the backup as the new owning server.

In the example, either Server C or Server D can stop and the sessions will use the backup. Of course, the failover will work for scheduled downtime as well as server crashes. A site could upgrade one server at a time with no observable downtime.

</s3>

<s3 title="Recovery">

When Server C restarts, possibly with an upgraded version of Resin, it needs to use the most up-to-date version of the session; its file-saved session will probably be obsolete. When a "new" session arrives, Server C loads the saved session from both the file and from Server D. It will use the newest session as the current value. Once it's loaded the "new" session, it will remain consistent as if the server had never stopped.

</s3>

<s3 title="No Distributed Locking">

Resin's cluster sessions do not lock sessions. For browser-based sessions, only one request typically executes at a time. Since browser sessions have essentially no concurrency, there's no need for distributed locking. However, it's a good idea to be aware of the lack of distributed locking.

</s3>

</s2>

</s1>

 </body>

</document>

<document> <header> <product>resin</product> <title>Dynamic Servers</title> <description>

Resin includes the ability to add servers to clusters dynamically. These dynamic servers are able to use distributed sessions and the distributed object cache. The triad also updates these servers with applications that are deployed via the remote deployment server. The Resin load balancer is also able to dispatch requests to them as with any static server.

</description> </header>

<body>

<localtoc/>

<s1 title="Overview">

Adding a dynamic server to a cluster is a simple two-step process:

  1. Register the dynamic server with a triad server via JMX.
  2. Start the new dynamic server using the registration in the previous step.

</s1>

<s1 title="Preliminaries">

Before adding a dynamic server, you must (a combined configuration sketch follows the list):

  • Set up and start a cluster with a triad, e.g. <example title="Example: conf/resin.xml"> <resin xmlns="http://caucho.com/ns/resin"> <cluster id="app-tier"> ... <server id="triad-a" address="234.56.78.90" port="6800"/> <server id="triad-b" address="34.56.78.90" port="6800"/> <server id="triad-c" address="45.67.89.12" port="6800"/> </example>
  • Install at least one admin password, usually in admin-users.xml
  • Enable the RemoteAdminService for the cluster, e.g. <example> <resin xmlns="http://caucho.com/ns/resin"> <cluster id="app-tier"> ... <admin:RemoteAdminService xmlns:admin="urn:java:com.caucho.admin"/> ... </example>
  • Enable the dynamic servers for the cluster, e.g. <example> <resin xmlns="http://caucho.com/ns/resin"> <cluster id="app-tier"> ... <dynamic-server-enable>true</dynamic-server-enable> ... </example>
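A combined sketch of these prerequisites in a single resin.xml; the triad addresses and the admin-users.xml import path are assumptions:

<example title="Example: combined conf/resin.xml"> <resin xmlns="http://caucho.com/ns/resin"
       xmlns:resin="http://caucho.com/ns/resin/core"
       xmlns:admin="urn:java:com.caucho.admin">

   <cluster id="app-tier">
       <server id="triad-a" address="192.168.1.10" port="6800"/>
       <server id="triad-b" address="192.168.1.11" port="6800"/>
       <server id="triad-c" address="192.168.1.12" port="6800"/>
       <!-- assumed location of the admin password file -->
       <resin:import path="admin-users.xml"/>
       <admin:RemoteAdminService/>
       <dynamic-server-enable>true</dynamic-server-enable>
       ...
   </cluster>

</resin> </example>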

Check the main <a href="clustering.xtp">Clustering</a> section for more information on this topic.

</s1>

<s1 title="Registering a dynamic server">

For the first step of registration, you can use a JMX tool like jconsole or simply use the Resin administration web console. We'll show how to do the latter method here. For registration, you'll specify three values:

<deftable title="web-app deployment options"> <tr>

 <th>Name</th>
 <th>Description</th>

</tr> <tr>

 <td>Server id</td>
 <td>Symbolic identifier of the new dynamic server.  
     This is also specified when starting the new server.</td>

</tr> <tr>

 <td>IP</td>
 <td>The IP address of the new dynamic server.  May also be a host name.</td>

</tr> <tr>

 <td>Port</td>
 <td>The server port of the new dynamic server.  Usually 6800.</td>

</tr> </deftable>

With these three values, browse to the Resin administration application's "cluster" tab. If you have enabled dynamic servers for your cluster, you should see a form allowing you to register the server in the "Cluster Overview" table.

<figure src="dynamic-server-add.png"/>

Once you have entered the values and added the server, it should show up in the table as a dead server because we haven't started it yet. The dynamic server's registration will be propagated to all the servers in the cluster.

<figure src="dynamic-server-added.png"/> </s1>

<s1 title="Starting a dynamic server">

Now that we've registered the dynamic server, we can start it and have it join the cluster. In order for the new server to be recognized and accepted by the triad, it needs to start with the same resin.xml that the triad is using, the name of the cluster it is joining, and the values entered in the registration step. These can all be specified on the command line when starting the server:

<example> dynamic-server> java -jar $RESIN_HOME/lib/resin.jar -conf /etc/resin/resin.xml \

                    -dynamic-server app-tier:123.45.67.89:6800 start

</example>

Specifying the configuration file allows the new server to configure itself using the <server-default> options, to find the triad servers of the cluster it is joining, and to authenticate using the administration logins. This command starts the server, which immediately contacts the triad to join the cluster. Once it has successfully joined, the "Cluster" tab of the administration application should look like this:

<figure src="dynamic-server-started.png"/> </s1>

</body> </document>

<document> <header>

 <title>cluster: Cluster tag configuration</title>
 <version>Resin 3.1</version>
 <description>

Each <cluster> contains a set of <a href="virtual-host.xtp">virtual hosts</a> served by a collection of <<a href="server-tags.xtp">server</a>>s. The cluster provides <a href="resin-clustering.xtp">load-balancing</a> and <a href="config-sessions.xtp">distributed sessions</a> for scalability and reliability.

 </description>

</header> <body>

<localtoc/>

<s1 title="See Also">

  • See the <a href="index-tags.xtp">index</a> for a list of all the tags.
  • See <a href="webapp-tags.xtp">Web Application</a> configuration for web.xml (Servlet) configuration.
  • See <a href="server-tags.xtp">Server tags</a> for ports, threads, and JVM configuration.
  • See <a href="config-env.xtp">Resource</a> configuration for resources: classloader, databases, connectors, and resources.
  • See <a href="config-log.xtp">Log</a> configuration for access log configuration, java.util.logging, and stdout/stderr logging.

</s1>

<defun title="<access-log>">

<access-log> configures a HTTP access log for all virtual hosts in the cluster. See <a href="host-tags.xtp#%3caccess-log%3d">access-log</a> in the <host> tag for more information.

</defun>

<defun title="<cache>" version="Resin 3.1"> <parents>cluster</parents>

<cache> configures the proxy cache (requires Resin Professional). The proxy cache improves performance by caching the output of servlets, jsp and php pages. For database-heavy pages, this caching can improve performance and reduce database load by several orders of magnitude.

The proxy cache uses a combination of a memory cache and a disk-based cache to save large amounts of data with little overhead.

Management of the proxy cache uses the <a href="javadoc|com.caucho.management.server.ProxyCacheMXBean">ProxyCacheMXBean</a>.

<deftable title="<cache> Attributes"> <tr>

 <th>Attribute</th>
 <th>Description</th>
 <th>Default</th>

</tr> <tr><td>path</td>

   <td>Path to the persistent cache files.</td>
   <td>cache/</td></tr>

<tr><td>disk-size</td>

   <td>Maximum size of the cache saved on disk.</td>
   <td>1024M</td></tr>

<tr><td>enable</td>

   <td>Enables the proxy cache.</td>
   <td>true</td></tr>

<tr><td>enable-range</td>

   <td>Enables support for the HTTP Range header.</td>
   <td>true</td></tr>

<tr><td>entries</td>

   <td>Maximum number of pages stored in the cache.</td>
   <td>8192</td></tr>

<tr><td>max-entry-size</td>

   <td>Largest page size allowed in the cache.</td>
   <td>1M</td></tr>

<tr><td>memory-size</td>

   <td>Maximum heap memory used to cache blocks.</td>
   <td>8M</td></tr>

<tr><td>rewrite-vary-as-private</td>

   <td>Rewrite Vary headers as Cache-Control: private to avoid browser

and proxy-cache bugs (particularly IE).</td>

   <td>false</td></tr>

</deftable>

<def title="<cache> schema"> element cache {

 disk-size?
 & enable?
 & enable-range?
 & entries?
 & path?
 & max-entry-size?
 & memory-size?
 & rewrite-vary-as-private?

} </def>

<example title="Example: enabling proxy cache"> <resin xmlns="http://caucho.com/ns/resin">

   <cluster id="web-tier">
       <cache entries="16384" disk-size="2G" memory-size="256M"/>
       <server id="a" address="192.168.0.10"/>
       <host host-name="www.foo.com">
   </cluster>

</resin> </example>

</defun>

<defun title="<cluster>" version="Resin 3.1"> <parents>resin</parents>

<cluster> configures a set of identically-configured servers. The cluster typically configures a set of <server>s, each with some ports, and a set of virtual <host>s.

Only one <cluster> is active in any one server. At runtime, the <cluster> is selected by the <server> whose id matches the -server-id on the command line.

<deftable title="<cluster> Attributes"> <tr>

 <th>Attribute</th>
 <th>Description</th>
 <th>Default</th>

</tr> <tr>

 <td>id</td>
 <td>The cluster identifier.</td>
 <td>required</td>

</tr> <tr>

 <td><a href="host-tags.xtp#access-log">access-log</a></td>
 <td>An access-log shared for all virtual hosts.</td>
 <td></td>

</tr> <tr>

 <td><a href="#cache">cache</a></td>
 <td>Proxy cache for HTTP-cacheable results.</td>
 <td></td>

</tr> <tr>

 <td><a href="#connection-error-page">connection-error-page</a></td>
 <td>IIS error page to use when the isapi_srun to Resin connection fails</td>
 <td></td>

</tr> <tr>

 <td><a href="host-tags.xtp#ear-default">ear-default</a></td>
 <td>default values for deployed ear files</td>
 <td></td>

</tr> <tr>

 <td><a href="#error-page">error-page</a></td>
 <td>Custom error-page when virtual-hosts fail to match</td>
 <td></td>

</tr> <tr>

 <td>host</td>
 <td>Configures a virtual host</td>
 <td></td>

</tr> <tr>

 <td>host-default</td>
 <td>Configures defaults to apply to all virtual hosts</td>
 <td></td>

</tr> <tr>

 <td>host-deploy</td>
 <td>Automatic host deployment based on a deployment directory</td>
 <td></td>

</tr> <tr>

 <td>ignore-client-disconnect</td>
 <td>Ignores socket exceptions thrown because browser clients have prematurely disconnected</td>
 <td>false</td>

</tr> <tr>

 <td>invocation-cache-size</td>
 <td>Size of the system-wide URL to servlet invocation mapping cache</td>
 <td>16384</td>

</tr> <tr>

 <td>invocation-cache-max-url-length</td>
 <td>Maximum URL length saved in the invocation cache</td>
 <td>256</td>

</tr> <tr>

 <td>machine</td>
 <td>Configuration for grouping <server> onto physical machines</td>
 <td></td>

</tr> <tr>

 <td>persistent-store</td>
 <td>Configures the distributed/persistent store</td>
 <td></td>

</tr> <tr>

 <td>ping</td>
 <td>Periodic checking of server URLs to verify server activity</td>
 <td></td>

</tr> <tr>

 <td>redeploy-mode</td>
 <td>"automatic" or "manual"</td>
 <td>automatic</td>

</tr> <tr>

 <td><a href="env-tags.xtp#resin:choose">resin:choose</a></td>
 <td>Conditional configuration based on EL expressions</td>
 <td></td>

</tr> <tr>

 <td><a href="env-tags.xtp#resin:import">resin:import</a></td>
 <td>Imports a custom cluster.xml file for configuration management</td>
 <td></td>

</tr> <tr>

 <td><a href="env-tags.xtp#resin:if">resin:if</a></td>
 <td>Conditional configuration based on EL expressions</td>
 <td></td>

</tr> <tr>

 <td>rewrite-dispatch</td>
 <td>rewrites and dispatches URLs using regular expressions, similar to mod_rewrite</td>
 <td></td>

</tr> <tr>

 <td>root-directory</td>
 <td>The root filesystem directory for the cluster</td>
 <td>${resin.root}</td>

</tr> <tr>

 <td><a href="server.xtp">server</a></td>
 <td>Configures JVM instances (servers).  Each cluster needs at least one server</td>
 <td></td>

</tr> <tr>

 <td>server-default</td>
 <td>Configures defaults for all server instances</td>
 <td></td>

</tr> <tr>

 <td>server-header</td>
 <td>Configures the HTTP "Server: Resin/xxx" header</td>
 <td>Resin/Version</td>

</tr> <tr>

 <td>session-cookie</td>
 <td>Configures the servlet cookie name</td>
 <td>JSESSIONID</td>

</tr> <tr>

 <td>session-sticky-disable</td>
 <td>Disables sticky-sessions on the load balancer</td>
 <td>false</td>

</tr> <tr>

 <td>url-character-encoding</td>
 <td>Configures the character encoding for URLs</td>
 <td>utf-8</td>

</tr> <tr>

 <td>url-length-max</td>
 <td>Configures the maximum length of an allowed URL</td>
 <td>8192</td>

</tr> <tr>

 <td>web-app-default</td>
 <td>Configures defaults to apply to all web-apps in the cluster</td>
 <td></td>

</tr> </deftable>

<def title="<cluster> schema"> element cluster {

 attribute id { string }
 & <a href="env-tags.xtp">environment resources</a>
 & access-log?
 & cache?
 & connection-error-page?
 & ear-default*
 & error-page*
 & host*
 & host-default*
 & host-deploy*
 & ignore-client-disconnect?
 & invocation-cache-size?
 & invocation-cache-max-url-length?
 & machine*
 & persistent-store?
 & ping*
 & redeploy-mode?
 & resin:choose*
 & resin:import*
 & resin:if*
 & rewrite-dispatch?
 & root-directory?
 & server*
 & server-default*
 & server-header?
 & session-cookie?
 & session-sticky-disable?
 & url-character-encoding?
 & url-length-max?
 & web-app-default*

} </def>

<example title="Example: cluster-default"> <resin xmlns="http://caucho.com/ns/resin">

   <cluster id="web-tier">
       <server-default>
           <http port="8080"/>
       </server-default>
       <server id="a" address="192.168.0.10"/>
       <server id="b" address="192.168.0.11"/>
       <host host-name="www.foo.com">
         ...
       </host>
   </cluster>

</resin> </example>

<s2 title="rewrite-vary-as-private">

Because not all browsers understand the Vary header, Resin can rewrite Vary to a Cache-Control: private. This rewriting will cache the page with the Vary in Resin's proxy cache, and also cache the page in the browser. Any other proxy caches, however, will not be able to cache the page.

The underlying issue is a limitation of browsers such as IE. When IE sees a Vary header it doesn't understand, it marks the page as uncacheable. Since IE only understands "Vary: User-Agent", this would mean IE would refuse to cache gzipped pages or "Vary: Cookie" pages.

With the <rewrite-vary-as-private> tag, IE will cache the page since it's rewritten as "Cache-Control: private" with no Vary at all. Resin will continue to cache the page as normal.
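A sketch enabling the rewrite on the proxy cache configuration:

<example title="Example: rewrite-vary-as-private"> <resin xmlns="http://caucho.com/ns/resin">

   <cluster id="web-tier">
       <cache entries="16384" memory-size="64M"
              rewrite-vary-as-private="true"/>
       ...
   </cluster>

</resin> </example>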

</s2>

</defun>

<defun title="<cluster-default>" version="Resin 3.1"> <parents>resin</parents>

<cluster-default> defines default cluster configuration for all clusters in the <resin> server.

<example title="Example: cluster-default"> <resin xmlns="http://caucho.com/ns/resin">

   <cluster-default>
       <cache entries="16384" memory-size="64M"/>
   </cluster-default>
   <cluster id="web-tier">
       ...
   </cluster>
   <cluster id="app-tier">
       ...
   </cluster>

</resin> </example>

</defun>

<defun title="<connection-error-page>" version="Resin 3.1"> <parents>cluster</parents>

<connection-error-page> specifies an error page to be used by IIS when it can't contact an app-tier Resin. This directive only applies to IIS.

<def title="connection-error-page"> element connection-error-page {

 string

} </def>
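A sketch, assuming an error page deployed on the IIS side:

<example title="Example: connection-error-page"> <resin xmlns="http://caucho.com/ns/resin">

   <cluster id="app-tier">
       <!-- assumed page path, served by IIS when Resin is unreachable -->
       <connection-error-page>/missing_file.html</connection-error-page>
       ...
   </cluster>

</resin> </example>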

</defun>

<defun title="<development-mode-error-page>" version="Resin 3.2.0"> <parents>cluster</parents>

<development-mode-error-page> enables browser error reporting with extra debugging information. Because it can expose internal data, it is not generally recommended in production systems. (The information is generally copied to the log as well.)

</defun>

<defun title="<ear-default>" version="Resin 3.1"> <parents>cluster</parents>

<ear-default> configures defaults for .ear resources, i.e. enterprise applications.

</defun>

<defun title="<error-page>" version="Resin 3.1"> <parents>cluster</parents>

<error-page> defines a web page to be displayed when an error occurs outside of a virtual host or web-app. Note, this is not a default error-page, i.e. if an error occurs inside a <host> or <web-app>, the error-page for that host or web-app will be used instead.

See <a href="webapp.xtp#error-page">webapp: error-page</a>.

<def title="<error-page> schema"> element error-page {

 (error-code | exception-type)?
 & location?

} </def>
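A sketch displaying a custom page when no virtual host matches; the location is an assumption:

<example title="Example: cluster error-page"> <resin xmlns="http://caucho.com/ns/resin">

   <cluster id="app-tier">
       <error-page>
           <location>/no-such-host.html</location>
       </error-page>
       ...
   </cluster>

</resin> </example>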

</defun>

<defun title="<host>" version="Resin 3.0"> <parents>cluster</parents>

<host> configures a virtual host. Virtual hosts must be configured explicitly.

  • See <a href="host-tags.xtp">host tags</a> for configuration details.

<deftable-childtags title="<host> attributes"> <tr>

 <th>Attribute</th>
 <th>Description</th>
 <th>Default</th>

</tr> <tr><td>id</td>

   <td>primary host name</td>
   <td>none</td></tr>

<tr><td>regexp</td>

   <td>Regular expression based host matching</td>
   <td>none</td></tr>

<tr><td>host-name</td>

   <td>Canonical host name</td>
   <td>none</td></tr>

<tr><td>host-alias</td>

   <td>Aliases matching the same host</td>
   <td>none</td></tr>

<tr><td>secure-host-name</td>

   <td>Host to use for a redirect to SSL</td>
   <td>none</td></tr>

<tr><td>root-directory</td>

   <td>Root directory for host files</td>
   <td>parent directory</td></tr>

<tr><td>startup-mode</td>

   <td>'automatic', 'lazy', or 'manual', see <a href="resin-tags.xtp#startup-mode">Startup and Redeploy Mode</a></td>
   <td>automatic</td></tr>

</deftable-childtags>

<example title="Example: explicit host"> <host host-name="www.foo.com">

 <host-alias>foo.com</host-alias>
 <host-alias>web.foo.com</host-alias>
 <root-directory>/opt/www/www.foo.com</root-directory>
 <web-app id="/" document-directory="webapps/ROOT">
   
 </web-app>
 ...

</host> </example>

<example title="Example: regexp host"> <host regexp="([^.]+)\.foo\.com">

 <host-name>${host.regexp[1]}.foo.com</host-name>
 <root-directory>/var/www/hosts/www.${host.regexp[1]}.com</root-directory>
 ...

</host> </example>

It is recommended that any <host> using a regexp include a <host-name> to set the canonical name for the host.

</defun>

<defun title="<host-default>" version="Resin 3.0"> <parents>cluster</parents>

Defaults for a virtual host.

The host-default can contain any of the host configuration tags. It will be used as defaults for any virtual host.

</defun>

<defun title="<host-deploy>" version="Resin 3.0.4"> <parents>cluster</parents>

Configures a deploy directory for virtual host.

The host-deploy will add an EL variable ${name}, referring to the name of the host jar file.

<deftable-childtags title="<host-deploy> attributes"> <tr>

 <th>Attribute</th>
 <th>Description</th>
 <th>Default</th>

</tr> <tr><td>path</td>

   <td>path to the deploy directory</td>
   <td>required</td></tr>

<tr><td>expand-path</td>

   <td>path to the expansion directory</td>
   <td>path</td></tr>

<tr><td>host-default</td>

   <td>defaults for the expanded host</td>
   <td>none</td></tr>

<tr><td>host-name</td>

   <td>the host name to match</td>
   <td>${name}</td></tr>

</deftable-childtags>
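A sketch deploying virtual hosts from a hosts/ directory; the paths are assumptions:

<example title="Example: host-deploy"> <resin xmlns="http://caucho.com/ns/resin">

   <cluster id="app-tier">
       <host-deploy path="hosts">
           <host-default>
               <!-- defaults applied to each deployed host -->
               <access-log path="log/access.log"/>
           </host-default>
       </host-deploy>
   </cluster>

</resin> </example>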

</defun>

<defun title="<ignore-client-disconnect>" version="Resin 3.1"> <parents>cluster</parents> <default>true</default>

ignore-client-disconnect configures whether Resin should ignore disconnection exceptions from the client, or if it should send those exceptions to the application.

<def title="<ignore-client-disconnect> schema"> element ignore-client-disconnect {

 r_boolean-Type

} </def>
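A sketch passing client disconnect exceptions through to the application:

<example title="Example: ignore-client-disconnect"> <resin xmlns="http://caucho.com/ns/resin">

   <cluster id="app-tier">
       <ignore-client-disconnect>false</ignore-client-disconnect>
       ...
   </cluster>

</resin> </example>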

</defun>

<defun title="<invocation-cache-size>" version="Resin 3.1"> <parents>cluster</parents> <default>8192</default>

Configures the number of entries in the invocation cache. The invocation cache is used to store pre-calculated servlet and filter chains from the URLs. It's also used as the basis for proxy caching.

<def title="<invocation-cache-size> schema"> element invocation-cache-size {

 r_int-Type

} </def>

</defun>

<defun title="<invocation-cache-max-url-length>" version="Resin 3.1"> <parents>cluster</parents> <default>256</default>

Configures the longest entry cacheable in the invocation cache. It is used to avoid certain types of denial-of-service attacks.

<def title="<invocation-cache-max-url-length> schema"> element invocation-cache-max-url-length {

 r_int-Type

} </def>

</defun>

<defun title="<max-uri-length>" version="Resin 4.0.2"> <parents>cluster</parents> <default>1024</default>

Sets a limit on the longest URI that Resin will serve.

<def title="<max-uri-length> schema"> element max-uri-length {

 r_int-Type

} </def>

</defun>

<defun title="<persistent-store>" version="Resin 3.0.8"> <parents>cluster</parents>

Defines the cluster-aware persistent store used for sharing distributed sessions. The allowed types are "jdbc", "cluster" and "file". The "file" type is only recommended in single-server configurations.

The <persistent-store> configuration is at the <cluster> level because it needs to share update information across the active cluster. Sessions activate the persistent store with the <use-persistent-store> tag of the <session-config>.

See <a href="resin-clustering.xtp">Persistent sessions</a> for more details.

<deftable title="<persistent-store> Attributes"> <tr>

 <th>Attribute</th>
 <th>Description</th>
 <th>Default</th>

</tr> <tr>

 <td>init</td>
 <td>initialization parameters for the persistent-store</td>
 <td></td>

</tr> <tr>

 <td>type</td>
 <td>cluster, jdbc, or file</td>
 <td>required</td>

</tr> </deftable>

<def title="<persistent-store> schema"> element persistent-store {

 type
 & init?

} </def>

<s2 name="cluster-store" title="cluster store">

The cluster store shares copies of the sessions on multiple servers. The original server is used as the primary, and is always more efficient than the backup servers. In general, the cluster store is preferred because it is more scalable and, with the "triplicate" attribute, the most reliable.

<deftable title="cluster tags"> <tr>

 <th>Attribute</th>
 <th>Description</th>
 <th>Default</th>

</tr> <tr>

 <td>always-load</td>
 <td>Always load the value</td>
 <td>false</td>

</tr> <tr>

 <td>always-save</td>
 <td>Always save the value</td>
 <td>false</td>

</tr> <tr>

 <td>max-idle-time</td>
 <td>How long idle objects are stored (session-timeout will invalidate

items earlier)</td>

 <td>24h</td>

</tr> <tr>

 <td>path</td>
 <td>Directory to store the objects</td>
 <td>required</td>

</tr> <tr>

 <td>save-backup</td>
 <td>Saves backup copies of all distributed objects (3.2.0).</td>
 <td>true</td>

</tr> <tr>

 <td>triplicate</td>
 <td>Saves three copies of all distributed objects (3.2.0).</td>
 <td>false</td>

</tr> <tr>

 <td>wait-for-acknowledge</td>
 <td>Requires the sending server to wait for all acks.</td>
 <td>false</td>

</tr> </deftable>

<def title="cluster schema"> element persistent-store {

 type { "cluster "}
 element init {
   always-load?
   & always-save?
   & max-idle-time?
   & triplicate?
   & wait-for-acknowledge?
 }

} </def>

<example title="Example: cluster store"> <resin xmlns="http://caucho.com/ns/resin"> <cluster>

 <server id="a" address="192.168.0.1" port="6800"/>
 <server id="b" address="192.168.0.2" port="6800"/>
 <persistent-store type="cluster">
   <init>
     <triplicate>true</triplicate>
   </init>
 </persistent-store>
 <web-app-default>
   <session-config use-persistent-store="true"/>
 </web-app-default>

</cluster> </resin> </example>

</s2>

<s2 title="jdbc store">

The JDBC store saves sessions in a JDBC database. Often, this will be a dedicated database to avoid overloading the main database.

<deftable title="jdbc store Attributes"> <tr>

 <th>Attribute</th>
 <th>Description</th>
 <th>Default</th>

</tr> <tr>

 <td>always-load</td>
 <td>Always load the value</td>
 <td>false</td>

</tr> <tr>

 <td>always-save</td>
 <td>Always save the value</td>
 <td>false</td>

</tr> <tr>

 <td>blob-type</td>
 <td>Schema type to store values</td>
 <td>from JDBC meta info</td>

</tr> <tr>

 <td>data-source</td>
 <td>The JDBC data source</td>
 <td>required</td>

</tr> <tr>

 <td>table-name</td>
 <td>Database table</td>
 <td>persistent_session</td>

</tr> <tr>

 <td>max-idle-time</td>
 <td>How long idle objects are stored</td>
 <td>24h</td>

</tr> </deftable>

<example title="Example: jdbc-store"> <resin xmlns="http://caucho.com/ns/resin"> <cluster>

 <server id="a" address="192.168.0.1" port="6800"/>
 <server id="b" address="192.168.0.2" port="6800"/>
 <persistent-store type="jdbc">
   <init>
     <data-source>jdbc/session</data-source>
     <max-idle-time>24h</max-idle-time>
   </init>
 </persistent-store>
 <web-app-default>
   <session-config use-persistent-store="true"/>
 </web-app-default>

</cluster> </resin> </example>

</s2>

<s2 title="file store">

The file store is a persistent store for development, testing, or single servers. Since it is not aware of the cluster, it cannot implement true distributed objects.

<deftable title="file tags"> <tr>

 <th>Attribute</th>
 <th>Description</th>
 <th>Default</th>

</tr> <tr>

 <td>always-load</td>
 <td>Always load the value</td>
 <td>false</td>

</tr> <tr>

 <td>always-save</td>
 <td>Always save the value</td>
 <td>false</td>

</tr> <tr>

 <td>max-idle-time</td>
 <td>How long idle objects are stored</td>
 <td>24h</td>

</tr> <tr>

 <td>path</td>
 <td>Directory to store the sessions</td>
 <td>required</td>

</tr> </deftable>
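A sketch of a file-store configuration for a single development server; the path is an assumption:

<example title="Example: file store"> <resin xmlns="http://caucho.com/ns/resin"> <cluster>

 <server id="a" address="127.0.0.1" port="6800"/>
 <persistent-store type="file">
   <init>
     <!-- assumed directory for the session files -->
     <path>cluster/sessions</path>
   </init>
 </persistent-store>
 <web-app-default>
   <session-config use-persistent-store="true"/>
 </web-app-default>

</cluster> </resin> </example>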

</s2>

</defun>

<defun title="<ping>" occur="*" version="Resin 3.0"> <parents>cluster</parents>

Starts a thread that periodically makes a request to the server, and restarts Resin if it fails. This facility is used to increase server reliability - if there is a problem with the server (perhaps from a deadlock or an exhaustion of resources), the server is restarted.

A failure occurs if a request to the url returns an HTTP status that is not 200.

Since the local process is restarted, it only makes sense to specify a url that is serviced by the instance of Resin that has the ping configuration. Most configurations use URLs that specify 'localhost' as the host.

This pinging only catches some problems because it's running in the same process as Resin itself. If the entire JDK freezes, this thread will freeze as well. Assuming the JDK doesn't freeze, the PingThread will catch errors like deadlocks.

<deftable title="<ping> Attributes"> <tr>

 <th>Attribute</th>
 <th>Description</th>
 <th>Default</th>

</tr> <tr>

 <td>url</td>
 <td>A url to ping.</td>
 <td>required</td>

</tr> <tr>

 <td>sleep-time</td>
 <td>Time to wait between pings.  The first ping is always 15m after the server starts; sleep-time applies to subsequent pings.</td>
 <td>15m</td>

</tr> <tr>

 <td>try-count</td>
 <td>If a ping fails, number of times to retry before giving up and restarting</td>
 <td>required</td>

</tr> <tr>

 <td>retry-time</td>
 <td>time between retries</td>
 <td>1s</td>

</tr> <tr>

 <td>socket-timeout</td>
 <td>time to wait for server to start responding to the tcp connection before giving up</td>
 <td>10s</td>

</tr> </deftable>

<example title="Example: resin.xml - simple usage of server ping"> <resin xmlns="http://caucho.com/ns/resin"

      xmlns:resin="http://caucho.com/ns/resin/core">
   <cluster id="app-tier">
       <ping url="http://localhost/"/>
       ...
   </cluster>

</resin> </example>

<example title="Example: resin.xml - configured usage of server ping"> <resin xmlns="http://caucho.com/ns/resin"

      xmlns:resin="http://caucho.com/ns/resin/core">
   ...
   <cluster id="app-tier">
       <ping>
           <url>http://localhost:8080/index.jsp</url>
           <url>http://localhost:8080/webapp/index.jsp</url>
           <url>http://virtualhost/index.jsp</url>
           <url>http://localhost:443/index.jsp</url>
           <sleep-time>5m</sleep-time>
           <try-count>5</try-count>
   
           <!-- a very busy server -->
           <socket-timeout>30s</socket-timeout>
       </ping>
     ...
   </cluster>

</resin> </example>

The class that corresponds to <ping> is <a href="javadoc|com.caucho.server.admin.PingThread|">PingThread</a>.

<s2 title="Mail notification when ping fails">

A refinement of the ping facility sends an email when the server is restarted.

<example title="resin.xml - mail notification when ping fails"> <resin xmlns="http://caucho.com/ns/resin"

      xmlns:resin="http://caucho.com/ns/resin/core">
 ...
 <cluster id="web-tier">
   <ping resin:type="com.caucho.server.admin.PingMailer">
     <url>http://localhost:8080/index.jsp</url>
     <url>http://localhost:8080/webapp/index.jsp</url>
     <mail-to>fred@hogwarts.com</mail-to>
     <mail-from>resin@hogwarts.com</mail-from>
     <mail-subject>Resin ping has failed for server ${server.name}</mail-subject>
   </ping>
   ...
 </cluster>

</resin> </example>

The default behaviour for sending mail is to contact an SMTP server at host 127.0.0.1 (the localhost) on port 25. System properties are used to configure a different SMTP server.

<example title="resin.xml - smtp server configuration">

 <system-property mail.smtp.host="127.0.0.1"/>
 <system-property mail.smtp.port="25"/>

</example>

</s2>

</defun>

<defun title="Resource Tags" version="Resin 3.1"> <parents>cluster</parents>

All <a href="env-tags.xtp">Environment tags</a> are available to the <cluster>. For example, resources like <database>.

<example title="Example: cluster environment"> <resin xmlns="http://caucho.com/ns/resin">

   <cluster id="app-tier">
       <database jndi-name="jdbc/test">
           <driver type="org.postgresql.Driver">
               <url>jdbc:postgresql://localhost/test</url>
               <user>caucho</user>
           </driver>
       </database>
       <server id="a" ...>
         ...
       <host host-name="www.foo.com">
         ...
   </cluster>

</resin> </example>

</defun>

<defun title="<rewrite-dispatch>" version="Resin 3.1"> <parents>cluster</parents>

<rewrite-dispatch> defines a set of rewriting rules for dispatching and forwarding URLs. Applications can use these rules to redirect old URLs to their new replacements.

See <a href="rewrite-tags.xtp">rewrite-dispatch</a> for more details.

<example title="rewrite-dispatch"> <resin xmlns="http://caucho.com/ns/resin">

   <cluster id="web-tier">
       <rewrite-dispatch>
           <redirect regexp="^http://www.foo.com"
                     target="http://bar.com/foo"/>
       </rewrite-dispatch>
 
   </cluster>

</resin> </example>

</defun>

<defun title="<root-directory>" version="Resin 3.1"> <parents>cluster</parents> <default>The root-directory of the <resin> tag.</default>

<root-directory> configures the root directory for files within the cluster. All paths in the <cluster> will be relative to the root directory.

<def title="<root-directory> schema"> element root-directory {

 r_path-Type

} </def>

<example title="Example: cluster root-directory"> <resin xmlns="http://caucho.com/ns/resin">

   <cluster id="app-tier">
       <root-directory>/var/www/app-tier</root-directory>
       <server id="a" ...>
       <host host-name="www.foo.com">
   </cluster>

</resin> </example>

</defun>

<defun title="<server>" version="Resin 3.1"> <parents>cluster</parents>

The <server> tag configures a JVM instance in the cluster. Each <server> is uniquely identified by its id attribute. The id will match the -server-id command line argument.

See the full <a href="server-tags.xtp">server configuration</a> for more details of the <server> tag and its children.

The current server is managed with a <a href="javadoc|com.caucho.management.server.ServerMXBean">ServerMXBean</a>. The <g>ObjectName</g> is resin:type=Server.

Peer servers are managed with <a href="javadoc|com.caucho.management.server.ServerConnectorMXBean">ServerConnectorMXBean</a>. The ObjectName is resin:type=ServerConnector,name=server-id.

<deftable title="<server> Attributes"> <tr>

 <th>Attribute</th>
 <th>Description</th>
 <th>Default</th>

</tr> <tr>

 <td>address</td>
 <td>IP address of the cluster port</td>
 <td>127.0.0.1</td>

</tr> <tr>

 <td>bind-ports-after-start</td>
 <td>If true, listen to the ports only after all initialization has

completed, allowing load-balance failover.</td>

 <td>true</td>

</tr> <tr>

 <td>cluster-port</td>
 <td>Configures the cluster port in detail, allowing for customization

of timeouts, etc.</td>

 <td></td>

</tr> <tr>

 <td>group-name</td>
 <td>Used by the watchdog to switch setgid before starting the Resin

JVM instance for security.</td>

 <td></td>

</tr> <tr>

 <td>http</td>
 <td>Adds a HTTP port (see <a href="port-tags.xtp">port tags</a>)</td>
 <td></td>

</tr> <tr>

 <td>id</td>
 <td>Unique server identifier</td>
 <td>required</td>

</tr> <tr>

 <td>java-exe</td>
 <td>The specific Java executable for the watchdog

to launch the JVM</td>

 <td>java</td>

</tr> <tr>

 <td>jvm-arg</td>
 <td>Adds a JVM argument when the watchdog launches Resin.</td>
 <td></td>

</tr> <tr>

 <td>jvm-classpath</td>
 <td>Adds a JVM classpath when the watchdog launches Resin.</td>
 <td></td>

</tr> <tr>

 <td>keepalive-connection-time-max</td>
 <td>The total time a connection can be used for requests and keepalives</td>
 <td>10min</td>

</tr> <tr>

 <td>keepalive-max</td>
 <td>The maximum keepalives enabled at one time.</td>
 <td>128</td>

</tr> <tr>

 <td>keepalive-select-enable</td>
 <td>Enables epoll/select for keepalive requests to reduce threads (unix only)</td>
 <td>true</td>

</tr> <tr>

 <td>keepalive-timeout</td>
 <td>Timeout for a keepalive to wait for a new request</td>
 <td>15s</td>

</tr> <tr>

 <td>load-balance-connect-timeout</td>
 <td>How long the load-balancer should wait for a connection to this server</td>
 <td>5s</td>

</tr> <tr>

 <td>load-balance-idle-time</td>
 <td>How long the load balancer can keep an idle socket open to this server (see keepalive-timeout)</td>
 <td>keepalive-time - 2s</td>

</tr> <tr>

 <td>load-balance-recover-time</td>
 <td>How long the load balancer should treat this server as dead after a failure before retrying</td>
 <td>15s</td>

</tr> <tr>

 <td>load-balance-socket-timeout</td>
 <td>timeout for the load balancer reading/writing to this server</td>
 <td>65s</td>

</tr> <tr>

 <td>load-balance-warmup-time</td>
 <td>Warmup time for the load-balancer to throttle requests before sending the full load</td>
 <td>60s</td>

</tr> <tr>

 <td>load-balance-weight</td>
 <td>relative weight used by the load balancer to send traffic to this server</td>
 <td>100</td>

</tr> <tr>

 <td>memory-free-min</td>
 <td>minimum memory allowed for the JVM before Resin forces a restart</td>
 <td>1M</td>

</tr> <tr>

 <td>ping</td>
 <td>Configures a periodic ping of the server to force restarts when non-responsive</td>
 <td></td>

</tr> <tr>

 <td>port</td>
 <td>Configures the cluster port (shortcut for <cluster-port>)</td>
 <td>6800</td>

</tr> <tr>

 <td>protocol</td>
 <td>Adds a custom socket protocol, e.g. for IIOP or SNMP.</td>
 <td></td>

</tr> <tr>

 <td>shutdown-wait-max</td>
 <td>The maximum of time to wait for a graceful Resin shutdown before forcing a close</td>
 <td>60s</td>

</tr> <tr>

 <td>socket-timeout</td>
 <td>The read/write timeout for the socket</td>
 <td>65s</td>

</tr> <tr>

 <td>thread-max</td>
 <td>The maximum number of threads managed by Resin (JVM threads will be larger because of non-Resin threads)</td>
 <td>4096</td>

</tr> <tr>

 <td>thread-executor-task-max</td>
 <td>Limits the threads allocated to application ScheduledExecutors from Resin</td>
 <td></td>

</tr> <tr>

 <td>thread-idle-max</td>
 <td>Maximum number of idle threads in the thread pool</td>
 <td>10</td>

</tr> <tr>

 <td>thread-idle-min</td>
 <td>Minimum number of idle threads in the thread pool</td>
 <td>5</td>

</tr> <tr>

 <td>user-name</td>
 <td>The setuid user-name for the <a href="resin-watchdog.xtp">watchdog</a>

when launching Resin for Unix security.</td>

 <td></td>

</tr> <tr>

 <td>watchdog-jvm-arg</td>
 <td>Additional JVM arguments when launching the watchdog manager</td>
 <td></td>

</tr> <tr>

 <td>watchdog-port</td>
 <td>The port for the watchdog-manager to listen for start/stop/status

requests</td>

 <td>6700</td>

</tr> </deftable>

<def title="<server> schema"> element server {

 attribute id { string }
 & address?
 & bind-ports-after-start?
 & cluster-port*
 & group-name?
 & http*
 & java-exe?
 & jvm-arg?
 & jvm-classpath?
 & keepalive-connection-time-max?
 & keepalive-max?
 & keepalive-select-enable?
 & keepalive-timeout?
 & load-balance-connect-timeout?
 & load-balance-idle-time?
 & load-balance-recover-time?
 & load-balance-socket-timeout?
 & load-balance-warmup-time?
 & load-balance-weight?
 & memory-free-min?
 & ping?
 & port?
 & protocol?
 & shutdown-wait-max?
 & socket-timeout?
 & thread-max?
 & thread-executor-task-max?
 & thread-idle-max?
 & thread-idle-min?
 & user-name?
 & watchdog-jvm-arg*
 & watchdog-port?

} </def>

<example title="Example: server"> <resin xmlns="http://caucho.com/ns/resin">

   <cluster id="web-tier">
       <server id="a" address="192.168.0.10" port="6800">
         <http port="8080"/>
       </server>
       <server id="b" address="192.168.0.11" port="6800">
         <http port="8080"/>
       </server>
       <server id="c" address="192.168.0.12" port="6800">
         <http port="8080"/>
       </server>
       <host id="">
         ...
   </cluster>

</resin> </example>

</defun>

<defun title="<server-default>" version="Resin 3.1"> <parents>cluster</parents>

Defines default values for all <server> instances. See <a href="server-tags.xtp"><server> configuration</a> for more details.

<example title="Example: server"> <resin xmlns="http://caucho.com/ns/resin">

   <cluster id="web-tier">
       <server-default>
           <server-port>6800</server-port>
           <http port="8080"/>
       </server-default>
       <server id="a" address="192.168.0.10"/>
       <server id="b" address="192.168.0.11"/>
       <server id="c" address="192.168.0.12"/>
       <host id="">
         ...
   </cluster>

</resin> </example>

</defun>

<defun title="<server-header>" version="Resin 3.1"> <parents>cluster</parents> <default>Resin/3.1.x</default>

Configures the HTTP Server: header which Resin sends back to any HTTP client.

<def title="<server-header> schema"> element server-header {

 string

} </def>

<example title="server-header"> <resin xmlns="http://caucho.com/ns/resin">

   <cluster id="web-tier">
       <server-header>MyServer/1.0</server-header>
   </cluster>

</resin> </example>

</defun>

<defun title="<session-cookie>" version="Resin 3.1"> <parents>cluster</parents> <default>JSESSIONID</default>

Configures the cookie used for servlet sessions.

<def title="<session-cookie> schema"> element session-cookie {

 string

} </def>
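A sketch renaming the session cookie; the cookie name is an assumption:

<example title="Example: session-cookie"> <resin xmlns="http://caucho.com/ns/resin">

   <cluster id="app-tier">
       <session-cookie>MYAPPSESSIONID</session-cookie>
       ...
   </cluster>

</resin> </example>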

</defun>

<defun title="<session-sticky-disable>" version="Resin 3.1"> <parents>cluster</parents> <default>false</default>

Disables sticky sessions from the load balancer.

<def title="<session-sticky-disable> schema"> element session-sticky-disable {

 r_boolean-Type

} </def>

</defun>

<defun title="<session-url-prefix>" version="Resin 3.1"> <parents>cluster</parents> <default>;jsessionid=</default>

Configures the URL prefix used for session rewriting.

<note>Session rewriting is discouraged as a potential security issue.</note>

<def title="<session-cookie> schema"> element session-cookie {

 string

} </def>

</defun>

<defun title="<ssl-session-cookie>" version="Resin 3.1"> <parents>cluster</parents> <default>value of session-cookie</default>

Defines an alternative session cookie to be used for a SSL connection. Having two separate cookies increases security.

<def title="<session-cookie> schema"> element ssl-session-cookie {

 string

} </def>

<example title="Example: ssl-session-cookie"> <resin xmlns="http://caucho.com/ns/resin">

   <cluster id="web-tier">
       <ssl-session-cookie>SSLJSESSIONID</ssl-session-cookie>
       ...
   </cluster>

</resin> </example>

</defun>

<defun title="<url-character-encoding>" version="Resin 3.1"> <parents>cluster</parents> <default>UTF-8</default>

Defines the character encoding for decoding URLs.

The HTTP specification does not define the character-encoding for URLs, so the server must make assumptions about the encoding.

<def title="<url-character-encoding> schema"> element url-character-encoding {

 string

} </def>
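A sketch overriding the default encoding, e.g. for legacy clients:

<example title="Example: url-character-encoding"> <resin xmlns="http://caucho.com/ns/resin">

   <cluster id="app-tier">
       <url-character-encoding>iso-8859-1</url-character-encoding>
       ...
   </cluster>

</resin> </example>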

</defun>

<defun title="<web-app-default>" version="Resin 3.1"> <parents>cluster</parents>

<web-app-default> defines default values for any <g>web-app</g> in the cluster.

<example title="Example: web-app-default"> <resin xmlns="http://caucho.com/ns/resin">

   <cluster id="app-tier">
       <web-app-default>
           <servlet servlet-name="resin-php"
                    servlet-class="com.caucho.quercus.servlet.QuercusServlet"/>
           <servlet-mapping url-pattern="*.php"
                            servlet-name="resin-php"/>
       </web-app-default>
       <host id="">
         ...
   </cluster>

</resin> </example>

</defun>

</body> </document>
