<div><document><br />
<header><br />
<title>Resin Clustering</title><br />
<description><br />
<br />
<p>As traffic increases beyond a single server, Resin's clustering<br />
lets you add new machines to handle the load and simultaneously improves<br />
uptime and reliability by failing over requests from a downed or maintenance<br />
server to a backup transparently.<br />
</p><br />
<br />
</description><br />
</header><br />
<br />
<body><br />
<br />
<localtoc/><br />
<br />
<s1 title="Persistent Sessions"><br />
<br />
<p>A session needs to stay on the same JVM that started it.<br />
Otherwise, each JVM would only see every second or third request and<br />
get confused.</p><br />
<br />
<p>To make sure that sessions stay on the same JVM, Resin encodes the<br />
cookie with the host number. In a three-server app-tier, for example, the hosts would<br />
generate cookies like:</p><br />
<br />
<deftable><br />
<tr><br />
<th>index</th><br />
<th>cookie prefix</th><br />
</tr><br />
<tr><br />
<td>1</td><br />
<td><var>a</var>xxx</td><br />
</tr><br />
<tr><br />
<td>2</td><br />
<td><var>b</var>xxx</td><br />
</tr><br />
<tr><br />
<td>3</td><br />
<td><var>c</var>xxx</td><br />
</tr><br />
</deftable><br />
<br />
<p>On the web-tier, Resin will decode the cookie and send it<br />
to the appropriate host. So <var>bacX8ZwooOz</var> would go to app-b.</p><br />
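<p>The decoding step can be sketched in a few lines of Java. The helper below is purely illustrative (it is not Resin's internal code); it only shows how the first cookie character selects a backend index:</p>

```java
public class SessionDecoder {
    // The first character of the session cookie encodes the owning
    // server: 'a' -> server 1 (app-a), 'b' -> server 2 (app-b), etc.
    // Illustrative sketch only, not Resin's actual implementation.
    static int serverIndex(String sessionId) {
        return sessionId.charAt(0) - 'a' + 1;
    }

    public static void main(String[] args) {
        // 'b' maps to the second server, app-b
        System.out.println(serverIndex("bacX8ZwooOz")); // 2
    }
}
```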
<br />
<p>In the infrequent case that app-b fails, Resin will send the<br />
request to app-a. The user might lose the session but that's a minor<br />
problem compared to showing a connection failure error.</p><br />
<br />
<p>The following example is a typical configuration for a distributed<br />
server using an external hardware load-balancer, i.e. where each Resin<br />
instance acts as an HTTP server. Each server will be started<br />
as <var>-server a</var> or <var>-server b</var> to select its specific configuration.</p><br />
<br />
<p>In this example, sessions will only be stored when the server shuts down,<br />
either for maintenance or with a new version of the server. This is the most<br />
lightweight configuration, and doesn't affect performance significantly.<br />
If the hardware or the JVM crashes, however, the sessions will be lost.<br />
(If you want to save sessions for hardware or JVM crashes,<br />
remove the &lt;save-only-on-shutdown/&gt; flag.)</p><br />
<br />
<example title="resin.xml"><br />
&lt;resin xmlns="http://caucho.com/ns/resin"&gt;<br />
&lt;cluster id="app-tier"&gt;<br />
&lt;server-default><br />
&lt;http port='80'/&gt;<br />
&lt;/server-default><br />
<br />
&lt;server id='app-a' address='192.168.0.1'/&gt;<br />
&lt;server id='app-b' address='192.168.0.2'/&gt;<br />
&lt;server id='app-c' address='192.168.0.3'/&gt;<br />
<br />
&lt;web-app-default&gt;<br />
&lt;!-- enable tcp-store for all hosts/web-apps --&gt;<br />
&lt;session-config&gt;<br />
&lt;use-persistent-store/&gt;<br />
&lt;save-only-on-shutdown/&gt;<br />
&lt;/session-config&gt;<br />
&lt;/web-app-default&gt;<br />
<br />
...<br />
&lt;/cluster&gt;<br />
&lt;/resin&gt;<br />
</example><br />
<br />
<s2 title="Choosing a backend server"><br />
<p><br />
Requests can be made to specific servers in the app-tier. The web-tier uses<br />
the value of the jsessionid to maintain sticky sessions. You can include an<br />
explicit jsessionid to force the web-tier to use a particular server in the app-tier.<br />
</p><br />
<br />
<p><br />
Resin uses the first character of the jsessionid to identify the backend server<br />
to use, starting with 'a' as the first backend server. If www.example.com<br />
resolves to your web-tier, then you can use:<br />
</p><br />
<br />
<ol><br />
<li>http://www.example.com/proxooladmin;jsessionid=abc</li><br />
<li>http://www.example.com/proxooladmin;jsessionid=bcd</li><br />
<li>http://www.example.com/proxooladmin;jsessionid=cde</li><br />
<li>http://www.example.com/proxooladmin;jsessionid=def</li><br />
<li>http://www.example.com/proxooladmin;jsessionid=efg</li><br />
<li>etc.</li><br />
</ol><br />
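<p>The encoding direction can be sketched the same way. The helper below is hypothetical (not a Resin API); the characters after the first one are arbitrary, as in the list above:</p>

```java
public class StickyUrlBuilder {
    // Builds a URL whose jsessionid's first character selects the backend:
    // index 1 -> 'a', index 2 -> 'b', and so on. Hypothetical helper for
    // illustration; only the first character matters to the web-tier.
    static String urlForServer(String baseUrl, int serverIndex) {
        char prefix = (char) ('a' + serverIndex - 1);
        return baseUrl + ";jsessionid=" + prefix + "xyz";
    }

    public static void main(String[] args) {
        // Targets the second backend server (app-b) through the web-tier.
        System.out.println(urlForServer("http://www.example.com/proxooladmin", 2));
    }
}
```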
</s2><br />
<br />
<s2 title="File Based"><br />
<br />
<p>For single-server configurations, the "cluster" store saves session<br />
data on disk, allowing for recovery after system restart or during<br />
development.</p><br />
<br />
<p>Sessions are stored as files in the <var>resin-data</var><br />
directory. When the session changes, the updates will be written to<br />
the file. After Resin loads an Application, it will load the stored<br />
sessions.</p><br />
<br />
</s2><br />
<br />
<s2 title="Distributed Sessions"><br />
<br />
<p>Distributed sessions are intrinsically more complicated than single-server<br />
sessions. A single-server session can be implemented as a simple memory-based<br />
Hashtable. Distributed sessions must communicate between machines to ensure<br />
the session state remains consistent.</p><br />
<br />
<p>Load balancing with multiple machines uses either <var>sticky sessions</var> or<br />
<var>symmetrical sessions</var>. Sticky sessions put more intelligence on the<br />
load balancer, while symmetrical sessions put more intelligence on the JVMs.<br />
The choice of which to use depends on what kind of hardware you have,<br />
how many machines you're using and how you use sessions.</p><br />
<br />
<p>Distributed sessions can use a database as a backing store, or they can<br />
distribute the backup among all the servers using TCP.</p><br />
<br />
<s3 title="Symmetrical Sessions"><br />
<br />
<p>Symmetrical sessions happen with dumb load balancers like DNS<br />
round-robin. A single session may bounce from machine A<br />
to machine B and back to machine A. For JDBC sessions, the symmetrical<br />
session case needs the <var>always-load-session</var> attribute described below.<br />
Each request must load the most up-to-date version of the session.</p><br />
<br />
<p>Distributed sessions in a symmetrical environment are required to make<br />
sessions work at all. Otherwise the state will end up spread across the JVMs.<br />
However, because each request must update its session information, it is<br />
less efficient than sticky sessions.</p><br />
<br />
</s3><br />
<br />
<s3 title="Sticky Sessions"><br />
<br />
<p>Sticky sessions require more intelligence on the load-balancer, but<br />
are easier for the JVM. Once a session starts, the load-balancer will<br />
always send it to the same JVM. Resin's load balancing, for example, encodes<br />
the session id as 'aaaXXX' and 'baaXXX'. The 'aaa' session will always go<br />
to JVM-a and 'baa' will always go to JVM-b.</p><br />
<br />
<p>Distributed sessions with a sticky session environment add reliability.<br />
If JVM-a goes down, JVM-b can pick up the session without the user<br />
noticing any change. In addition, distributed sticky sessions are more<br />
efficient. The distributor only needs to update sessions when they change.<br />
So if you update the session once when the user logs in, the distributed<br />
sessions can be very efficient.</p><br />
<br />
</s3><br />
<br />
<s3 title="always-load-session"><br />
<br />
<p>Symmetrical sessions must use the 'always-load-session' flag to<br />
reload each session's data on each request. always-load-session is only<br />
needed for jdbc-store sessions. tcp-store sessions use a more sophisticated<br />
protocol that eliminates the need for always-load-session, so tcp-store<br />
ignores the always-load-session flag.</p><br />
<br />
<p>The <var>always-load-session</var> attribute forces sessions to check the store for<br />
each request. By default, sessions are only loaded from persistent<br />
store when they are created. In a configuration with multiple symmetric<br />
web servers, sessions can be loaded on each request to ensure consistency.</p><br />
<br />
</s3><br />
<br />
<s3 title="always-save-session"><br />
<br />
<p>By default, Resin only saves session data when you add new values<br />
to the session object, i.e. if the request calls <var>setAttribute</var>.<br />
This may be insufficient when storing large objects. For example, if you<br />
change an internal field of a large object, Resin will not automatically<br />
detect that change and will not save the session object.</p><br />
<br />
<p>With <var>always-save-session</var> Resin will always write the session<br />
to the store at the end of each request. Although this is less efficient,<br />
it guarantees that updates will get stored in the backup after each<br />
request.</p><br />
<br />
</s3><br />
<br />
</s2><br />
<!--<br />
<s2 title="Database Based"><br />
<br />
<p>Database backed sessions are the easiest to understand. Session data<br />
gets serialized and stored in a database. The data is loaded on the<br />
next request.</p><br />
<br />
<p>For efficiency, the owning JVM keeps a cache of the session value, so<br />
it only needs to query the database when the session changes. If another JVM<br />
stores a new session value, it will notify the owner of the change so<br />
the owner can update its cache. Because of this notification, the database<br />
store is cluster-aware.</p><br />
<br />
<p>In some cases, the database can become a bottleneck.<br />
By adding load to an already-loaded<br />
system, you may harm performance. One way around that bottleneck is to use<br />
a small, quick database like MySQL for your session store and save the "Big<br />
Iron" database like Oracle for your core database needs.</p><br />
<br />
<p>The database must be specified using a <var>&lt;database&gt;</var>.<br />
The database store will automatically create a <var>session</var> table.</p><br />
<br />
<p>The JDBC store needs to know about the other servers in the cluster<br />
in order to efficiently update them when changes occur to the server.</p><br />
<br />
<example title="JDBC store"><br />
&lt;resin xmlns="http://caucho.com/ns/resin"&gt;<br />
&lt;cluster id="app-tier"&gt;<br />
&lt;server-default><br />
&lt;http port="80"/><br />
&lt;/server-default><br />
<br />
&lt;server id="app-a" address="192.168.2.10" port="6800"/><br />
&lt;server id="app-b" address="192.168.2.11" port="6800"/><br />
<br />
&lt;database jndi-name="jdbc/session"&gt;<br />
...<br />
&lt;/database&gt;<br />
<br />
&lt;persistent-store type="jdbc"&gt;<br />
&lt;init&gt;<br />
&lt;data-source&gt;jdbc/session&lt;/data-source&gt;<br />
&lt;/init&gt;<br />
&lt;/persistent-store&gt;<br />
...<br />
<br />
&lt;web-app-default&gt;<br />
&lt;session-config&gt;<br />
&lt;use-persistent-store/&gt;<br />
&lt;/session-config&gt;<br />
&lt;/web-app-default&gt;<br />
...<br />
&lt;/cluster><br />
&lt;/resin><br />
</example><br />
<br />
<p><br />
Each web-app which needs distributed sessions must enable<br />
the persistent store with a<br />
<a href="../reference/webapp-tags.xtp#session-config">use-persistent-store</a><br />
tag in the session-config.</p><br />
<br />
<deftable><br />
<tr><br />
<td>data-source</td><br />
<td>data source name for the table</td><br />
</tr><br />
<tr><br />
<td>table-name</td><br />
<td>database table for the session data</td><br />
</tr><br />
<tr><br />
<td>blob-type</td><br />
<td>database type for a blob</td><br />
</tr><br />
<tr><br />
<td>max-idle-time</td><br />
<td>cleanup time</td><br />
</tr><br />
</deftable><br />
<br />
<example><br />
CREATE TABLE persistent_session (<br />
id VARCHAR(64) NOT NULL,<br />
data BLOB,<br />
access_time int(11),<br />
expire_interval int(11),<br />
PRIMARY KEY(id)<br />
)<br />
</example><br />
<br />
<p>The store is enabled with &lt;use-persistent-store&gt; in the session config.<br />
</p><br />
<br />
<example><br />
&lt;web-app xmlns="http://caucho.com/ns/resin"&gt;<br />
&lt;session-config&gt;<br />
&lt;use-persistent-store/&gt;<br />
&lt;always-save-session/&gt;<br />
&lt;/session-config&gt;<br />
&lt;/web-app&gt;<br />
</example><br />
<br />
</s2><br />
--> <!-- jdbc sessions --><br />
<br />
<s2 title="Cluster Sessions"><br />
<br />
<p>The distributed cluster stores the sessions across the<br />
cluster servers. In some configurations, the cluster store<br />
may be more efficient than the database store, in others the database<br />
store will be more efficient.</p><br />
<br />
<p>With cluster sessions, each session has an owning JVM and a backup JVM.<br />
The session is always stored in both the owning JVM and the backup JVM.</p><br />
<br />
<p>The cluster store is configured in the &lt;cluster&gt;.<br />
It uses the &lt;server&gt; hosts in the &lt;cluster&gt; to distribute<br />
the sessions. The session store is enabled in the &lt;session-config&gt;<br />
with the &lt;use-persistent-store&gt;.</p><br />
<br />
<example><br />
&lt;resin xmlns="http://caucho.com/ns/resin"&gt;<br />
...<br />
<br />
&lt;cluster id="app-tier"&gt;<br />
&lt;server id="app-a" host="192.168.0.1" port="6802"/><br />
&lt;server id="app-b" host="192.168.0.2" port="6802"/><br />
<br />
...<br />
&lt;/cluster><br />
&lt;/resin><br />
</example><br />
<br />
<p>The configuration is enabled in the <var>web-app</var>.</p><br />
<br />
<example><br />
&lt;web-app xmlns="http://caucho.com/ns/resin"&gt;<br />
&lt;session-config&gt;<br />
&lt;use-persistent-store/&gt;<br />
&lt;/session-config&gt;<br />
&lt;/web-app&gt;<br />
</example><br />
<br />
<p>The &lt;server&gt; hosts are treated as a cluster<br />
of servers. Each server uses the other servers as backups. When the session<br />
changes, the updates will be sent to the backup server. When the server<br />
starts, it looks up old sessions in the other servers to update its<br />
own version of the persistent store.<br />
</p><br />
<br />
<example title="Symmetric load-balanced servers"><br />
&lt;resin xmlns="http://caucho.com/ns/resin"&gt;<br />
&lt;cluster id="app-tier"&gt;<br />
<br />
&lt;server-default&gt;<br />
&lt;http port='80'/&gt;<br />
&lt;/server-default&gt;<br />
<br />
&lt;server id="app-a" address="192.168.2.10" port="6802"/><br />
&lt;server id="app-b" address="192.168.2.11" port="6803"/><br />
<br />
&lt;host id=''&gt;<br />
&lt;web-app id=''&gt;<br />
<br />
&lt;session-config&gt;<br />
&lt;use-persistent-store/&gt;<br />
&lt;/session-config&gt;<br />
<br />
&lt;/web-app&gt;<br />
&lt;/host&gt;<br />
&lt;/cluster&gt;<br />
&lt;/resin&gt;<br />
</example><br />
</s2><br />
<br />
<s2 title="Clustered Distributed Sessions"><br />
<p>Resin's cluster protocol for distributed sessions<br />
is an alternative to JDBC-based distributed sessions. In some<br />
configurations, the cluster-stored sessions will be more efficient<br />
than JDBC-based sessions.<br />
Because sessions are always duplicated on separate servers, cluster<br />
sessions do not have a single point of failure.<br />
As the number of<br />
servers increases, JDBC-based sessions can start overloading the<br />
backing database. With clustered sessions, each additional server<br />
shares the backup load, so the main scalability issue reduces to network<br />
bandwidth. Like the JDBC-based sessions, the cluster store sessions<br />
uses sticky-session caching to avoid unnecessary network traffic.</p><br />
</s2><br />
<br />
<s2 title="Configuration"><br />
<br />
<p>The cluster configuration must tell each host about the servers in the<br />
cluster,<br />
and it must enable the persistent store in the session configuration<br />
with <a href="../reference/session-tags.xtp#session-config">use-persistent-store</a>.<br />
Because session configuration is specific to a virtual host and a<br />
web-application, each web-app needs <var>use-persistent-store</var> enabled<br />
individually. The <a href="../reference/webapp-tags.xtp#web-app-default">web-app-default</a><br />
tag can be used to enable distributed sessions across an entire site.<br />
</p><br />
<br />
<example title="resin.xml fragment"><br />
&lt;resin xmlns="http://caucho.com/ns/resin"><br />
...<br />
<br />
&lt;cluster id="app-tier"&gt;<br />
<br />
&lt;server id="app-a" host="192.168.0.1"/><br />
&lt;server id="app-b" host="192.168.0.2"/><br />
&lt;server id="app-c" host="192.168.0.3"/><br />
&lt;server id="app-d" host="192.168.0.4"/><br />
<br />
...<br />
&lt;host id=""&gt;<br />
&lt;web-app id='myapp'&gt;<br />
...<br />
&lt;session-config&gt;<br />
&lt;use-persistent-store/&gt;<br />
&lt;/session-config&gt;<br />
&lt;/web-app&gt;<br />
&lt;/host&gt;<br />
&lt;/cluster&gt;<br />
&lt;/resin&gt;<br />
</example><br />
<br />
<p>Usually, hosts will share the same resin.xml. Each host will be<br />
started with a different <var>-server xx</var> to select the correct<br />
block. The startup will look like:</p><br />
<br />
<example title="Starting Server&#160;C"><br />
resin-4.0.x&gt; java -jar lib/resin.jar -conf conf/resin.xml -server c start<br />
</example><br />
<br />
<s3 title="always-save-session"><br />
<br />
<p>Resin's distributed sessions need to know when a session has<br />
changed in order to save the new session value. Although Resin can<br />
detect when an application calls <var>HttpSession.setAttribute</var>, it<br />
can't tell if an internal session value has changed. The following<br />
Counter class shows the issue:</p><br />
<br />
<example title="Counter.java"><br />
package test;<br />
<br />
public class Counter implements java.io.Serializable {<br />
private int _count;<br />
<br />
public int nextCount() { return _count++; }<br />
}<br />
</example><br />
<br />
<p>Assuming a copy of the Counter is saved as a session attribute,<br />
Resin doesn't know if the application has called <var>nextCount</var>. If it<br />
can't detect a change, Resin will not backup the new session, unless<br />
<var>always-save-session</var> is set. When <var>always-save-session</var> is<br />
true, Resin will back up the session on every request.</p><br />
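<p>The problem can be demonstrated without Resin at all. The sketch below (using plain java.io serialization for illustration; Resin itself uses Hessian) shows that the serialized form changes after <var>nextCount</var> even though <var>setAttribute</var> was never re-called, which is exactly the change Resin cannot observe:</p>

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.Arrays;

public class DirtyDetectionDemo {
    // The session store only observes setAttribute calls, not mutations
    // inside an already-stored object. Serializing before and after the
    // mutation shows the bytes differ even though setAttribute was never
    // re-called.
    static byte[] serialize(Serializable obj) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bos)) {
            out.writeObject(obj);
        }
        return bos.toByteArray();
    }

    // Counter from this document's example, inlined for a runnable sketch.
    static class Counter implements Serializable {
        private int _count;
        public int nextCount() { return _count++; }
    }

    public static void main(String[] args) throws IOException {
        Counter counter = new Counter();     // stored via setAttribute once
        byte[] before = serialize(counter);
        counter.nextCount();                 // internal mutation, invisible to Resin
        byte[] after = serialize(counter);
        System.out.println(Arrays.equals(before, after)); // false
    }
}
```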
<br />
<example><br />
...<br />
&lt;web-app id="/foo"&gt;<br />
...<br />
&lt;session-config&gt;<br />
&lt;use-persistent-store/&gt;<br />
&lt;always-save-session/&gt;<br />
&lt;/session-config&gt;<br />
...<br />
&lt;/web-app&gt;<br />
</example><br />
<br />
<!--<br />
<p>Like the JDBC-based sessions, Resin will ignore the<br />
<var>always-load-session</var> flag for cluster sessions. Because the<br />
cluster protocol notifies servers of changes, <var>always-load-session</var> is<br />
not needed.</p><br />
--><br />
<br />
</s3><br />
<br />
<s3 title="Serialization"><br />
<br />
<p>Resin's distributed sessions rely on Hessian serialization to save and<br />
restore sessions. Application objects must <var>implement<br />
java.io.Serializable</var> for distributed sessions to work.</p><br />
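<p>A minimal sketch of a serializable session attribute (a hypothetical class, not part of Resin; the round trip in <var>main</var> uses plain java.io serialization for illustration, while Resin itself uses Hessian):</p>

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Hypothetical session attribute class: any object placed in the session
// must implement java.io.Serializable so it can be replicated.
public class UserProfile implements Serializable {
    private static final long serialVersionUID = 1L;

    private final String userName;

    // transient fields are skipped during serialization, so non-serializable
    // resources (connections, caches) should be marked transient and rebuilt
    // after the session is restored.
    private transient StringBuilder scratch;

    public UserProfile(String userName) { this.userName = userName; }

    public String getUserName() { return userName; }

    public static void main(String[] args) throws Exception {
        // Round trip: serialize and restore, as a backup server would.
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bos)) {
            out.writeObject(new UserProfile("alice"));
        }
        UserProfile restored = (UserProfile) new ObjectInputStream(
            new ByteArrayInputStream(bos.toByteArray())).readObject();
        System.out.println(restored.getUserName()); // alice
    }
}
```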
<br />
</s3><br />
<br />
</s2> <!-- clustered sessions --><br />
<br />
<s2 title="Protocol Examples"><br />
<br />
<s3 title="Session Request"><br />
<br />
<p>To see how cluster sessions work, consider a case where<br />
the load balancer sends the request to a random host. Server&#160;C owns the<br />
session but the load balancer gives the request to Server&#160;A. In the<br />
following figure, the request modifies the session so it must be saved<br />
as well as loaded.</p><br />
<br />
<figure src="srunc.gif"/><br />
<br />
<p>The session id encodes the owning host. The example session<br />
id, <var>ca8MbyA</var>, decodes to a server index of 3, mapping<br />
to Server&#160;C. Resin determines the backup host from the cookie<br />
as well.<br />
Server&#160;A must know the owning host<br />
for every cookie so it can communicate with the owning srun.<br />
The example configuration defines all the sruns Server&#160;A needs to<br />
know about. If Server&#160;C is unavailable, Server&#160;A can use its<br />
configuration knowledge to use Server&#160;D as a backup<br />
for <var>ca8MbyA</var> instead.</p><br />
<br />
<p>When the request first accesses the session, Server&#160;A asks<br />
Server&#160;C for the serialized session data (<var>2:load</var>).<br />
Since Server&#160;A doesn't cache the session data, it must<br />
ask Server&#160;C for an update on each request. For requests that<br />
only read the session, this TCP load is the only extra overhead,<br />
i.e. they can skip <var>3-5</var>. The <var>always-save-session</var><br />
flag, in contrast, will always force a write.</p><br />
<br />
<p>At the end of the request, Server&#160;A writes any session<br />
updates to Server&#160;C (<var>3:store</var>). If always-save-session<br />
is false and the session doesn't change, this step can be skipped.<br />
Server&#160;A sends<br />
the new serialized session contents to Server&#160;C. Server&#160;C saves<br />
the session on its local disk (<var>4:save</var>) and saves a backup<br />
to Server&#160;D (<var>5:backup</var>).</p><br />
<br />
</s3><br />
<br />
<s3 title="Sticky Session Request"><br />
<br />
<p>Smart load balancers that implement sticky sessions can improve<br />
cluster performance. In the previous example, Resin's cluster<br />
sessions maintain consistency for dumb load balancers or twisted<br />
clients like the AOL browsers. The cost is the additional network<br />
traffic for <var>2:load</var> and <var>3:store</var>. Smart load-balancers<br />
can avoid the network traffic of <var>2</var> and <var>3</var>.</p><br />
<br />
<figure src="same_srun.gif"/><br />
<br />
<p>Server&#160;C decodes the session id, <var>caaMbyA</var>. Since it owns<br />
the session, Server&#160;C gives the session to the servlet with no work<br />
and no network traffic. For a read-only request, there's zero<br />
overhead for cluster sessions. So even a semi-intelligent load<br />
balancer will gain a performance advantage. Normal browsers will have<br />
zero overhead, and bogus AOL browsers will have the non-sticky<br />
session overhead.</p><br />
<br />
<p>A session write saves the new serialized session to disk<br />
(<var>2:save</var>) and to Server&#160;D (<var>3:backup</var>).<br />
<var>always-save-session</var> will determine if Resin can take advantage<br />
of read-only sessions or must save the session on each request.</p><br />
<br />
</s3><br />
<br />
<s3 title="Disk copy"><br />
<p>Resin stores a disk copy of the session information, in the location<br />
specified by the <var>path</var>. The disk copy serves two purposes. The first is<br />
that it allows Resin to keep session information for a large number of<br />
sessions. An efficient memory cache keeps the most active sessions in memory<br />
and the disk holds all of the sessions without requiring large amounts of<br />
memory. The second purpose of the disk copy is that the sessions are recovered<br />
from disk when the server is restarted.</p><br />
</s3><br />
<br />
<s3 title="Failover"><br />
<br />
<p>Since the session always has a current copy on two servers, the load<br />
balancer can direct requests to the next server in the ring. The<br />
backup server is always ready to take control. The failover will<br />
succeed even for dumb load balancers, as in the non-sticky-session<br />
case, because the srun hosts will use the backup as the new owning<br />
server.</p><br />
<br />
<p>In the example, either Server&#160;C or Server&#160;D can stop and<br />
the sessions will use the backup. Of course, the failover will work<br />
for scheduled downtime as well as server crashes. A site could<br />
upgrade one server at a time with no observable downtime.</p><br />
<br />
</s3><br />
<br />
<s3 title="Recovery"><br />
<br />
<p>When Server&#160;C restarts, possibly with an upgraded version of Resin,<br />
it needs to use the most up-to-date version of the session; its<br />
file-saved session will probably be obsolete. When a "new" session<br />
arrives, Server&#160;C loads the saved session from both the file and<br />
from Server&#160;D. It will use the newest session as the current<br />
value. Once it's loaded the "new" session, it will remain consistent<br />
as if the server had never stopped.</p><br />
<br />
</s3><br />
<br />
<s3 title="No Distributed Locking"><br />
<br />
<p>Resin's cluster sessions do not lock sessions. For browser-based<br />
sessions, only one request will execute at a time. Since browser<br />
sessions have no concurrency, there's no need for distributed<br />
locking. However, it's a good idea to be aware of the lack of<br />
distributed locking.</p><br />
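<p>The hazard to be aware of is the classic lost update. The sketch below simulates it with a plain map standing in for the session store (not Resin code): two concurrent requests each load the counter, increment a private copy, and store it back, so one increment is lost:</p>

```java
import java.util.HashMap;
import java.util.Map;

public class LostUpdateDemo {
    // Without locking, two requests that read-modify-write the same
    // session value can interleave so that the second write silently
    // overwrites the first. A HashMap stands in for the session store.
    public static void main(String[] args) {
        Map<String, Integer> store = new HashMap<>();
        store.put("count", 0);

        // Request 1 and request 2 both load the same value...
        int copy1 = store.get("count");
        int copy2 = store.get("count");

        // ...each increments its private copy and writes it back.
        store.put("count", copy1 + 1);
        store.put("count", copy2 + 1); // overwrites request 1's update

        System.out.println(store.get("count")); // 1, not 2
    }
}
```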
<br />
</s3><br />
<br />
</s2><br />
<br />
</s1> <!-- persistent sessions --><br />
<br />
</body><br />
</document><br />
<br />
<document><br />
<header><br />
<product>resin</product><br />
<title>Dynamic Servers</title><br />
<description><br />
<p>Resin includes the ability to add servers to clusters dynamically. These<br />
dynamic servers are able to use distributed sessions and the distributed<br />
object cache. The triad also updates these servers with applications<br />
that are deployed via the remote deployment server. The Resin load balancer<br />
is also able to dispatch requests to them as with any static server.<br />
</p><br />
</description><br />
</header><br />
<br />
<body><br />
<br />
<localtoc/><br />
<br />
<s1 title="Overview"><br />
<p><br />
Adding a dynamic server to a cluster is a simple two-step process:<br />
</p><br />
<ol><br />
<li>Register the dynamic server with a triad server via JMX.</li><br />
<li>Start the new dynamic server using the registration in the previous step.</li><br />
</ol><br />
</s1><br />
<br />
<s1 title="Preliminaries"><br />
<p><br />
Before adding a dynamic server, you must:<br />
</p><br />
<ul><br />
<li>Set up and start a cluster with a triad, e.g.<br />
<example title="Example: conf/resin.xml"><br />
&lt;resin xmlns="http://caucho.com/ns/resin"><br />
<br />
&lt;cluster id="app-tier"><br />
...<br />
&lt;server id="triad-a" address="234.56.78.90" port="6800"/><br />
&lt;server id="triad-b" address="34.56.78.90" port="6800"/><br />
&lt;server id="triad-c" address="45.67.89.12" port="6800"/><br />
</example><br />
</li><br />
<li>Install at least one admin password, usually in <br />
<var>admin-users.xml</var></li><br />
<li>Enable the RemoteAdminService for the cluster, e.g.<br />
<example><br />
&lt;resin xmlns="http://caucho.com/ns/resin"><br />
<br />
&lt;cluster id="app-tier"><br />
...<br />
&lt;admin:RemoteAdminService xmlns:admin="urn:java:com.caucho.admin"/><br />
...<br />
</example><br />
</li><br />
<li>Enable the dynamic servers for the cluster, e.g.<br />
<example><br />
&lt;resin xmlns="http://caucho.com/ns/resin"><br />
<br />
&lt;cluster id="app-tier"><br />
...<br />
&lt;dynamic-server-enable>true&lt;/dynamic-server-enable><br />
...<br />
</example></li><br />
</ul><br />
<p><br />
Check the main <a href="clustering.xtp">Clustering</a> section for more<br />
information on this topic.<br />
</p><br />
</s1><br />
<br />
<s1 title="Registering a dynamic server"><br />
<p><br />
For the first step of registration, you can use a JMX tool like jconsole or<br />
simply use the Resin administration web console. We'll show<br />
the latter method here. For registration, you'll specify three values:<br />
</p><br />
<br />
<deftable title="dynamic server registration values"><br />
<tr><br />
<th>Name</th><br />
<th>Description</th><br />
</tr><br />
<tr><br />
<td>Server id</td><br />
<td>Symbolic identifier of the new dynamic server. <br />
This is also specified when starting the new server.</td><br />
</tr><br />
<tr><br />
<td>IP</td><br />
<td>The IP address of the new dynamic server. It may also be a host name.</td><br />
</tr><br />
<tr><br />
<td>Port</td><br />
<td>The server port of the new dynamic server. Usually 6800.</td><br />
</tr><br />
</deftable><br />
<br />
<p><br />
With these three values, browse to the Resin administration application's<br />
"cluster" tab. If you have enabled dynamic servers for your cluster, you <br />
should see a form allowing you to register the server in the "Cluster Overview"<br />
table.<br />
</p><br />
<figure src="dynamic-server-add.png"/><br />
<p><br />
Once you have entered the values and added the server, it should show up <br />
in the table as a dead server because we haven't started it yet. The<br />
dynamic server's registration will be propagated to all the servers in the<br />
cluster.<br />
</p><br />
<figure src="dynamic-server-added.png"/><br />
</s1><br />
<br />
<s1 title="Starting a dynamic server"><br />
<p><br />
Now that we've registered the dynamic server, we can start it<br />
and have it join the cluster. In order for the new server to be<br />
recognized and accepted by the triad, it needs to start with the<br />
same resin.xml that the triad is using, the name of the cluster it is<br />
joining, and the values entered in the registration step. These can<br />
all be specified on the command line when starting the server:<br />
</p><br />
<example><br />
dynamic-server> java -jar $RESIN_HOME/lib/resin.jar -conf /etc/resin/resin.xml \<br />
-dynamic-server app-tier:123.45.67.89:6800 start<br />
</example><br />
<p><br />
Specifying the configuration file allows the new server to configure<br />
itself using the &lt;server-default> options, to find the triad servers<br />
of the cluster it is joining, and to authenticate using the administration<br />
logins. This command starts the server, which immediately contacts the<br />
triad to join the cluster. Once it has successfully joined, the "Cluster"<br />
tab of the administration application should look like this:<br />
</p><br />
<figure src="dynamic-server-started.png"/><br />
</s1><br />
<br />
</body><br />
</document></div>