Running the Endpoint Mediation samples with WSO2 Enterprise Service Bus (ESB)

Sample 50: POX to SOAP conversion

<definitions xmlns="http://ws.apache.org/ns/synapse">
    <!-- filtering of messages with XPath and regex matches -->
    <filter source="get-property('To')" regex=".*/StockQuote.*">
        <send>
            <endpoint>
                <address uri="http://localhost:9000/services/SimpleStockQuoteService" format="soap11"/>
            </endpoint>
        </send>
        <drop/>
    </filter>
    <send/>
</definitions> 

Objective: POX to SOAP conversion

Prerequisites:
Start the Synapse configuration numbered 50: i.e. wso2esb-samples -sn 50

Start the Axis2 server and deploy the SimpleStockQuoteService if not already done

Execute the 'ant stockquote' command, specifying that the request should be a REST request, as follows:

ant stockquote -Dtrpurl=http://localhost:8280/services/StockQuote -Drest=true

This example shows an HTTP REST request (shown below) being transformed into a SOAP request and forwarded to the stock quote service.

POST /services/StockQuote HTTP/1.1
Content-Type: application/xml; charset=UTF-8;action="urn:getQuote";
SOAPAction: urn:getQuote
User-Agent: Axis2
Host: 127.0.0.1
Transfer-Encoding: chunked

75
<m0:getQuote xmlns:m0="http://services.samples/xsd">
   <m0:request>
      <m0:symbol>IBM</m0:symbol>
   </m0:request>
</m0:getQuote>0
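
Because the endpoint is defined with format="soap11", the ESB wraps the POX payload in a SOAP 1.1 envelope before forwarding it to the SimpleStockQuoteService. The forwarded request would look roughly as follows (an illustrative sketch; the exact transport headers may differ):

POST /services/SimpleStockQuoteService HTTP/1.1
Content-Type: text/xml; charset=UTF-8
SOAPAction: urn:getQuote
Host: 127.0.0.1

<?xml version='1.0' encoding='UTF-8'?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
   <soapenv:Body>
      <m0:getQuote xmlns:m0="http://services.samples/xsd">
         <m0:request>
            <m0:symbol>IBM</m0:symbol>
         </m0:request>
      </m0:getQuote>
   </soapenv:Body>
</soapenv:Envelope>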

Sample 51: MTOM and SwA optimizations and request/response correlation

<definitions xmlns="http://ws.apache.org/ns/synapse">
    <in>
        <filter source="get-property('Action')" regex="urn:uploadFileUsingMTOM">
            <property name="example" value="mtom"/>
            <send>
                <endpoint>
                    <address uri="http://localhost:9000/services/MTOMSwASampleService" optimize="mtom"/>
                </endpoint>
            </send>
        </filter>
        <filter source="get-property('Action')" regex="urn:uploadFileUsingSwA">
            <property name="example" value="swa"/>
            <send>
                <endpoint>
                    <address uri="http://localhost:9000/services/MTOMSwASampleService" optimize="swa"/>
                </endpoint>
            </send>
        </filter>
    </in>
    <out>
        <filter source="get-property('example')" regex="mtom">
            <property name="enableMTOM" value="true" scope="axis2"/>
        </filter>
        <filter source="get-property('example')" regex="swa">
            <property name="enableSwA" value="true" scope="axis2"/>
        </filter>
        <send/>
    </out>
</definitions>

Objective: MTOM and SwA optimizations and request/response correlation

Prerequisites:
Start the Synapse configuration numbered 51: i.e. wso2esb-samples -sn 51
Start the Axis2 server and deploy the MTOMSwASampleService if not already done

Execute the 'ant optimizeclient' command, specifying MTOM optimization as follows:

ant optimizeclient -Dopt_mode=mtom

The configuration now sets a local message context property and forwards the message to 'http://localhost:9000/services/MTOMSwASampleService', optimizing binary content as MTOM. By sending this message through TCPMon you can see the actual message sent over the HTTP transport if required. During response processing, the ESB checks this local property to find out how the original request was optimized, and uses that knowledge to transform the response back to the client in the same format as the original request.
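
As mentioned above, TCPMon can be used to capture the forwarded message. For example, you could start a TCPMon listener on a free port (8081 is used here purely as an illustration) that forwards to localhost:9000, and point the MTOM endpoint at that listener:

<endpoint>
    <!-- TCPMon listens on port 8081 and forwards traffic to localhost:9000 -->
    <address uri="http://localhost:8081/services/MTOMSwASampleService" optimize="mtom"/>
</endpoint>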

When the client executes successfully, it uploads a file containing the ASF logo, receives the response, and saves it to a temporary file.

[java] Sending file : ./../../repository/samples/resources/mtom/asf-logo.gif as MTOM
[java] Saved response to file : ./../../work/temp/sampleClient/mtom-49258.gif

Next, try SwA as follows:

ant optimizeclient -Dopt_mode=swa
[java] Sending file : ./../../repository/samples/resources/mtom/asf-logo.gif as SwA
[java] Saved response to file : ./../../work/temp/sampleClient/swa-47549.gif

By sending the messages through TCPMon, one can verify that the requests and responses are indeed MTOM optimized or sent as HTTP attachments, as shown below:

POST http://localhost:9000/services/MTOMSwASampleService HTTP/1.1
Host: 127.0.0.1
SOAPAction: urn:uploadFileUsingMTOM
Content-Type: multipart/related; boundary=MIMEBoundaryurn_uuid_B94996494E1DD5F9B51177413845353; type="application/xop+xml";
start="<0.urn:uuid:B94996494E1DD5F9B51177413845354@apache.org>"; start-info="text/xml"; charset=UTF-8
Transfer-Encoding: chunked
Connection: Keep-Alive
User-Agent: Synapse-HttpComponents-NIO

--MIMEBoundaryurn_uuid_B94996494E1DD5F9B51177413845353241
Content-Type: application/xop+xml; charset=UTF-8; type="text/xml"
Content-Transfer-Encoding: binary
Content-ID:
   <0.urn:uuid:B94996494E1DD5F9B51177413845354@apache.org>221b1
      <?xml version='1.0' encoding='UTF-8'?>
         <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
            <soapenv:Body>
               <m0:uploadFileUsingMTOM xmlns:m0="http://www.apache-synapse.org/test">
                  <m0:request>
                     <m0:image>
                        <xop:Include href="cid:1.urn:uuid:78F94BC50B68D76FB41177413845003@apache.org" xmlns:xop="http://www.w3.org/2004/08/xop/include" />
                     </m0:image>
                  </m0:request>
               </m0:uploadFileUsingMTOM>
            </soapenv:Body>
         </soapenv:Envelope>
--MIMEBoundaryurn_uuid_B94996494E1DD5F9B51177413845353217
Content-Type: image/gif
Content-Transfer-Encoding: binary
Content-ID:
         <1.urn:uuid:78F94BC50B68D76FB41177413845003@apache.org>22800GIF89a... << binary content >>
POST http://localhost:9000/services/MTOMSwASampleService HTTP/1.1
Host: 127.0.0.1
SOAPAction: urn:uploadFileUsingSwA
Content-Type: multipart/related; boundary=MIMEBoundaryurn_uuid_B94996494E1DD5F9B51177414170491; type="text/xml";
start="<0.urn:uuid:B94996494E1DD5F9B51177414170492@apache.org>"; charset=UTF-8
Transfer-Encoding: chunked
Connection: Keep-Alive
User-Agent: Synapse-HttpComponents-NIO

--MIMEBoundaryurn_uuid_B94996494E1DD5F9B51177414170491225
Content-Type: text/xml; charset=UTF-8
Content-Transfer-Encoding: 8bit
Content-ID:
   <0.urn:uuid:B94996494E1DD5F9B51177414170492@apache.org>22159
      <?xml version='1.0' encoding='UTF-8'?>
         <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
            <soapenv:Body>
               <m0:uploadFileUsingSwA xmlns:m0="http://www.apache-synapse.org/test">
                  <m0:request>
                     <m0:imageId>urn:uuid:15FD2DA2584A32BF7C1177414169826</m0:imageId>
                  </m0:request>
               </m0:uploadFileUsingSwA>
            </soapenv:Body>
         </soapenv:Envelope>22--34MIMEBoundaryurn_uuid_B94996494E1DD5F9B511774141704912
17
Content-Type: image/gif
Content-Transfer-Encoding: binary
Content-ID:
         <urn:uuid:15FD2DA2584A32BF7C1177414169826>22800GIF89a... << binary content >>

Sample 52: Session-less load balancing between 3 endpoints

<definitions xmlns="http://ws.apache.org/ns/synapse">

    <sequence name="main" onError="errorHandler">
        <in>
            <send>
                <endpoint>
                    <loadbalance>
                        <endpoint>
                            <address uri="http://localhost:9001/services/LBService1">
                                <enableAddressing/>
                                <suspendDurationOnFailure>60</suspendDurationOnFailure>
                            </address>
                        </endpoint>
                        <endpoint>
                            <address uri="http://localhost:9002/services/LBService1">
                                <enableAddressing/>
                                <suspendDurationOnFailure>60</suspendDurationOnFailure>
                            </address>
                        </endpoint>
                        <endpoint>
                            <address uri="http://localhost:9003/services/LBService1">
                                <enableAddressing/>
                                <suspendDurationOnFailure>60</suspendDurationOnFailure>
                            </address>
                        </endpoint>
                    </loadbalance>
                </endpoint>
            </send><drop/>
        </in>

        <out>
            <!-- Send the messages where they have been sent (i.e. implicit To EPR) -->
            <send/>
        </out>
    </sequence>

    <sequence name="errorHandler">

        <makefault>
            <code value="tns:Receiver" xmlns:tns="http://www.w3.org/2003/05/soap-envelope"/>
            <reason value="COULDN'T SEND THE MESSAGE TO THE SERVER."/>
        </makefault>

        <header name="To" action="remove"/>
        <property name="RESPONSE" value="true"/>

        <send/>
    </sequence>

</definitions>

Objective: Demonstrate the simple load balancing among a set of endpoints

Prerequisites:

Start ESB with sample configuration 52. (i.e. wso2esb-samples -sn 52)

Deploy the LoadbalanceFailoverService by switching to the <ESB installation directory>/samples/axis2Server/src/LoadbalanceFailoverService directory and running ant.

Start three instances of the sample Axis2 server on HTTP ports 9001, 9002 and 9003, giving each server a unique name.

Example commands to run sample Axis2 servers from the <ESB installation directory>/samples/axis2Server directory in Linux are listed below:

./axis2server.sh -http 9001 -https 9005 -name MyServer1
./axis2server.sh -http 9002 -https 9006 -name MyServer2
./axis2server.sh -http 9003 -https 9007 -name MyServer3

The environment for the load balancing sample is now set up. Start the load balance and failover client using the following command:

ant loadbalancefailover -Di=100

This client sends 100 requests to the LoadbalanceFailoverService through the ESB. The ESB distributes the load among the three endpoints mentioned in the configuration in a round-robin manner. LoadbalanceFailoverService appends the name of the server to the response, so that the client can determine which server processed each message. If you examine the console output of the client, you can see that requests are processed by the three servers as follows:

[java] Request: 1 ==> Response from server: MyServer1
[java] Request: 2 ==> Response from server: MyServer2
[java] Request: 3 ==> Response from server: MyServer3
[java] Request: 4 ==> Response from server: MyServer1
[java] Request: 5 ==> Response from server: MyServer2
[java] Request: 6 ==> Response from server: MyServer3
[java] Request: 7 ==> Response from server: MyServer1
...

Now run the client without the -Di=100 parameter to send requests indefinitely. While the client is running, shut down the server named MyServer1. You can observe that requests are distributed only among MyServer2 and MyServer3 after MyServer1 is shut down. Console output before and after shutting down MyServer1 is listed below (MyServer1 was shut down after request 63):

...
[java] Request: 61 ==> Response from server: MyServer1
[java] Request: 62 ==> Response from server: MyServer2
[java] Request: 63 ==> Response from server: MyServer3
[java] Request: 64 ==> Response from server: MyServer2
[java] Request: 65 ==> Response from server: MyServer3
[java] Request: 66 ==> Response from server: MyServer2
[java] Request: 67 ==> Response from server: MyServer3
...

Now restart MyServer1. You can observe that after roughly 60 seconds requests are again sent to all three servers. This is because <suspendDurationOnFailure> is specified as 60 seconds in the configuration; the load balance endpoint suspends a failed child endpoint for only 60 seconds after detecting the failure.
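
The same suspension behaviour can also be expressed with the <suspendOnFailure> element used later in sample 59. A roughly equivalent child endpoint definition (a sketch, with durations given in milliseconds rather than seconds) would be:

<endpoint>
    <address uri="http://localhost:9001/services/LBService1">
        <enableAddressing/>
        <suspendOnFailure>
            <initialDuration>60000</initialDuration>
            <progressionFactor>1.0</progressionFactor>
        </suspendOnFailure>
    </address>
</endpoint>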

Sample 53: Failover sending among 3 endpoints

<definitions xmlns="http://ws.apache.org/ns/synapse">

    <sequence name="main" onError="errorHandler">
        <in>
            <send>
                <endpoint>
                    <failover>
                        <endpoint>
                            <address uri="http://localhost:9001/services/LBService1">
                                <enableAddressing/>
                                <suspendDurationOnFailure>60</suspendDurationOnFailure>
                            </address>
                        </endpoint>
                        <endpoint>
                            <address uri="http://localhost:9002/services/LBService1">
                                <enableAddressing/>
                                <suspendDurationOnFailure>60</suspendDurationOnFailure>
                            </address>
                        </endpoint>
                        <endpoint>
                            <address uri="http://localhost:9003/services/LBService1">
                                <enableAddressing/>
                                <suspendDurationOnFailure>60</suspendDurationOnFailure>
                            </address>
                        </endpoint>
                    </failover>
                </endpoint>
            </send><drop/>
        </in>

        <out>
            <!-- Send the messages where they have been sent (i.e. implicit To EPR) -->
            <send/>
        </out>
    </sequence>

    <sequence name="errorHandler">

        <makefault>
            <code value="tns:Receiver" xmlns:tns="http://www.w3.org/2003/05/soap-envelope"/>
            <reason value="COULDN'T SEND THE MESSAGE TO THE SERVER."/>
        </makefault>

        <header name="To" action="remove"/>
        <property name="RESPONSE" value="true"/>

        <send/>
    </sequence>

</definitions>

Objective: Demonstrate the failover sending

Prerequisites:

Start ESB with sample configuration 53 (i.e. wso2esb-samples -sn 53)

Deploy the LoadbalanceFailoverService and start three instances of the sample Axis2 server as described in sample 52.

The above configuration sends messages with failover behavior. Initially the server on port 9001 is treated as the primary and the other two are treated as backups. Messages are always directed only to the primary server. If the primary server fails, the next listed server is selected as the primary. Thus, messages are sent successfully as long as at least one server is active. To test this, run the loadbalancefailover client to send requests indefinitely as follows:

ant loadbalancefailover

You can see that all requests are processed by MyServer1. Now shut down MyServer1 and inspect the console output of the client. You will observe that all subsequent requests are processed by MyServer2.

The console output with MyServer1 shut down after request 127 is listed below:

...
[java] Request: 125 ==> Response from server: MyServer1
[java] Request: 126 ==> Response from server: MyServer1
[java] Request: 127 ==> Response from server: MyServer1
[java] Request: 128 ==> Response from server: MyServer2
[java] Request: 129 ==> Response from server: MyServer2
[java] Request: 130 ==> Response from server: MyServer2
...

You can keep shutting down servers like this. The client will receive responses until all listed servers are shut down. Once all servers are down, the error sequence is activated and a fault message is sent to the client as follows.

[java] COULDN'T SEND THE MESSAGE TO THE SERVER.
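
Under the hood, the errorHandler sequence uses the makefault mediator to build this fault; since the code value is qualified with the SOAP 1.2 envelope namespace, the fault sent back would look roughly like the following if serialized as SOAP 1.2 (an illustrative sketch, not a captured message):

<soapenv:Envelope xmlns:soapenv="http://www.w3.org/2003/05/soap-envelope">
   <soapenv:Body>
      <soapenv:Fault>
         <soapenv:Code>
            <soapenv:Value>soapenv:Receiver</soapenv:Value>
         </soapenv:Code>
         <soapenv:Reason>
            <soapenv:Text xml:lang="en">COULDN'T SEND THE MESSAGE TO THE SERVER.</soapenv:Text>
         </soapenv:Reason>
      </soapenv:Fault>
   </soapenv:Body>
</soapenv:Envelope>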

Once a server is detected as failed, it is added back to the active servers list after 60 seconds (as specified by <suspendDurationOnFailure> in the configuration). Therefore, if you restart any of the stopped servers and shut down all other servers, messages will be directed to the newly started server.

Sample 54: Session affinity load balancing between 3 endpoints

<definitions xmlns="http://ws.apache.org/ns/synapse">

    <sequence name="main" onError="errorHandler">
        <in>
            <send>
                <endpoint>
                    <!-- specify the session as the simple client session provided by Synapse for
                    testing purpose -->
                    <session type="simpleClientSession"/>

                    <loadbalance>
                        <endpoint>
                            <address uri="http://localhost:9001/services/LBService1">
                                <enableAddressing/>
                            </address>
                        </endpoint>
                        <endpoint>
                            <address uri="http://localhost:9002/services/LBService1">
                                <enableAddressing/>
                            </address>
                        </endpoint>
                        <endpoint>
                            <address uri="http://localhost:9003/services/LBService1">
                                <enableAddressing/>
                            </address>
                        </endpoint>
                    </loadbalance>
                </endpoint>
            </send><drop/>
        </in>

        <out>
            <!-- Send the messages where they have been sent (i.e. implicit To EPR) -->
            <send/>
        </out>
    </sequence>

    <sequence name="errorHandler">

        <makefault>
            <code value="tns:Receiver" xmlns:tns="http://www.w3.org/2003/05/soap-envelope"/>
            <reason value="COULDN'T SEND THE MESSAGE TO THE SERVER."/>
        </makefault>

        <header name="To" action="remove"/>
        <property name="RESPONSE" value="true"/>

        <send/>
    </sequence>

</definitions>

Objective: Demonstrate the load balancing with session affinity using client initiated sessions

Prerequisites:

Start ESB with sample configuration 54 (i.e. wso2esb-samples -sn 54).

Deploy the LoadbalanceFailoverService and start three instances of the sample Axis2 server as in sample 52.

The above configuration is the same as the load balancing configuration in sample 52, except that the session type is specified as "simpleClientSession". This is a client-initiated session, which means that the client generates the session identifier and sends it with each request. For this session type the client adds a SOAP header named ClientID containing the identifier of the client (see the header sketch at the end of this sample). The ESB binds this ID to a server on the first request and sends all successive requests containing that ID to the same server. Now switch to the samples/axis2Client directory and run the client using the following command to see this in action.

ant loadbalancefailover -Dmode=session

In session mode, the client continuously sends requests with three different client (session) IDs, selecting one of the three IDs at random for each request. The client then prints the session ID along with the server that responded to each request. Client output for the first 10 requests is shown below.

[java] Request: 1 Session number: 1 Response from server: MyServer3
[java] Request: 2 Session number: 2 Response from server: MyServer2
[java] Request: 3 Session number: 0 Response from server: MyServer1
[java] Request: 4 Session number: 2 Response from server: MyServer2
[java] Request: 5 Session number: 1 Response from server: MyServer3
[java] Request: 6 Session number: 2 Response from server: MyServer2
[java] Request: 7 Session number: 2 Response from server: MyServer2
[java] Request: 8 Session number: 1 Response from server: MyServer3
[java] Request: 9 Session number: 0 Response from server: MyServer1
[java] Request: 10 Session number: 0 Response from server: MyServer1
... 

You can see that session number 0 is always directed to the server named MyServer1. That means session number 0 is bound to MyServer1. Similarly, sessions 1 and 2 are bound to MyServer3 and MyServer2 respectively.
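
For reference, the session identifier mentioned above travels as a SOAP header added by the sample client. A request carrying it would look roughly like the following (the header name and namespace follow the Synapse simpleClientSession convention, but treat the exact layout as an illustration):

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
   <soapenv:Header>
      <syn:ClientID xmlns:syn="http://ws.apache.org/ns/synapse">client-session-0</syn:ClientID>
   </soapenv:Header>
   <soapenv:Body>
      <!-- request payload -->
   </soapenv:Body>
</soapenv:Envelope>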

Sample 55: Session affinity load balancing between fail over endpoints

<definitions xmlns="http://ws.apache.org/ns/synapse">

    <sequence name="main" onError="errorHandler">
        <in>
            <send>
                <endpoint>
                    <!-- specify the session as the simple client session provided by Synapse for
                    testing purpose -->
                    <session type="simpleClientSession"/>

                    <loadbalance>
                        <endpoint>
                            <failover>
                                <endpoint>
                                    <address uri="http://localhost:9001/services/LBService1">
                                        <enableAddressing/>
                                    </address>
                                </endpoint>
                                <endpoint>
                                    <address uri="http://localhost:9002/services/LBService1">
                                        <enableAddressing/>
                                    </address>
                                </endpoint>
                            </failover>
                        </endpoint>
                        <endpoint>
                            <failover>
                                <endpoint>
                                    <address uri="http://localhost:9003/services/LBService1">
                                        <enableAddressing/>
                                    </address>
                                </endpoint>
                                <endpoint>
                                    <address uri="http://localhost:9004/services/LBService1">
                                        <enableAddressing/>
                                    </address>
                                </endpoint>
                            </failover>
                        </endpoint>
                    </loadbalance>
                </endpoint>
            </send><drop/>
        </in>

        <out>
            <!-- Send the messages where they have been sent (i.e. implicit To EPR) -->
            <send/>
        </out>
    </sequence>

    <sequence name="errorHandler">

        <makefault>
            <code value="tns:Receiver" xmlns:tns="http://www.w3.org/2003/05/soap-envelope"/>
            <reason value="COULDN'T SEND THE MESSAGE TO THE SERVER."/>
        </makefault>

        <header name="To" action="remove"/>
        <property name="RESPONSE" value="true"/>

        <send/>
    </sequence>

</definitions>

Objective: Demonstrate the session affinity based load balancing with failover capability

Prerequisites:

Start ESB with sample configuration 55 (i.e. wso2esb-samples -sn 55).

Deploy the LoadbalanceFailoverService and start four sample Axis2 servers on HTTP ports 9001, 9002, 9003 and 9004 respectively (make sure to specify non-conflicting HTTPS ports).

This configuration also uses "simpleClientSession" to bind sessions, as in the previous sample, but here failover endpoints are specified as the child endpoints of the load balance endpoint. Therefore sessions are bound to the failover endpoints. Session information has to be replicated among the servers listed under each failover endpoint using some clustering mechanism. Thus, if the endpoint bound to a session fails, successive requests for that session are directed to the next endpoint in that failover group. Run the client using the following command to observe this behaviour.

ant loadbalancefailover -Dmode=session

You can see a client output as shown below.

...
[java] Request: 222 Session number: 0 Response from server: MyServer1
[java] Request: 223 Session number: 0 Response from server: MyServer1
[java] Request: 224 Session number: 1 Response from server: MyServer1
[java] Request: 225 Session number: 2 Response from server: MyServer3
[java] Request: 226 Session number: 0 Response from server: MyServer1
[java] Request: 227 Session number: 1 Response from server: MyServer1
[java] Request: 228 Session number: 2 Response from server: MyServer3
[java] Request: 229 Session number: 1 Response from server: MyServer1
[java] Request: 230 Session number: 1 Response from server: MyServer1
[java] Request: 231 Session number: 2 Response from server: MyServer3
...

Note that sessions 0 and 1 are always directed to MyServer1 and session 2 is directed to MyServer3. No requests are directed to MyServer2 and MyServer4, as they are kept as backups by the failover endpoints. Now shut down the server named MyServer1 while running the sample. You will observe that successive requests for session 0 are now directed to MyServer2, which is the backup server in MyServer1's group. This is shown below, where MyServer1 was shut down after request 534.

...
[java] Request: 529 Session number: 2 Response from server: MyServer3
[java] Request: 530 Session number: 1 Response from server: MyServer1
[java] Request: 531 Session number: 0 Response from server: MyServer1
[java] Request: 532 Session number: 1 Response from server: MyServer1
[java] Request: 533 Session number: 1 Response from server: MyServer1
[java] Request: 534 Session number: 1 Response from server: MyServer1
[java] Request: 535 Session number: 0 Response from server: MyServer2
[java] Request: 536 Session number: 0 Response from server: MyServer2
[java] Request: 537 Session number: 0 Response from server: MyServer2
[java] Request: 538 Session number: 2 Response from server: MyServer3
[java] Request: 539 Session number: 0 Response from server: MyServer2
...

Sample 56: WSDL endpoint

<definitions xmlns="http://ws.apache.org/ns/synapse">

    <sequence name="main">
        <in>
            <send>
                <!-- get epr from the given wsdl -->
                <endpoint>
                    <wsdl uri="file:repository/samples/resources/proxy/sample_proxy_1.wsdl" service="SimpleStockQuoteService" port="SimpleStockQuoteServiceSOAP11port_http"/>
                </endpoint>
            </send>
        </in>

        <out>
            <send/>
        </out>
    </sequence>

</definitions>

Objective: Demonstrate the use of WSDL endpoints

Prerequisites:

Start the Synapse configuration numbered 56 (i.e. wso2esb-samples -sn 56).

Deploy the SimpleStockQuoteService and start the sample Axis2 server.

This sample uses a WSDL endpoint inside the send mediator. A WSDL endpoint extracts the endpoint's address from the given WSDL document. As WSDL documents can contain many services, and many ports inside each service, the service and port of the required endpoint have to be specified. As with address endpoints, QoS parameters for the endpoint can be specified inline in the configuration (a sketch appears at the end of this sample). An excerpt from sample_proxy_1.wsdl containing the specified service and port is listed below.

<wsdl:service name="SimpleStockQuoteService">
    <wsdl:port name="SimpleStockQuoteServiceSOAP11port_http"
               binding="axis2:SimpleStockQuoteServiceSOAP11Binding">
        <soap:address location="http://localhost:9000/services/SimpleStockQuoteService"/>
    </wsdl:port>
    <wsdl:port name="SimpleStockQuoteServiceSOAP12port_http"
               binding="axis2:SimpleStockQuoteServiceSOAP12Binding">
        <soap12:address location="http://localhost:9000/services/SimpleStockQuoteService"/>
    </wsdl:port>
</wsdl:service>

According to the above WSDL, the specified service and port refer to the endpoint address "http://localhost:9000/services/SimpleStockQuoteService". Now run the client using the following command.

ant stockquote -Dsymbol=IBM -Dmode=quote -Daddurl=http://localhost:8280

The client will print the quote price for IBM received from the server running on port 9000. Observe the Axis2 console and the ESB console to verify this behavior.
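
As noted above, QoS parameters can be attached to a WSDL endpoint inline, just as for address endpoints. A minimal sketch, assuming WS-Addressing should be engaged on this endpoint, would be:

<endpoint>
    <wsdl uri="file:repository/samples/resources/proxy/sample_proxy_1.wsdl"
          service="SimpleStockQuoteService"
          port="SimpleStockQuoteServiceSOAP11port_http">
        <enableAddressing/>
    </wsdl>
</endpoint>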

Sample 57: Dynamic load balancing between 3 nodes

<definitions xmlns="http://ws.apache.org/ns/synapse">

    <sequence name="main" onError="errorHandler">
        <in>
            <send>
                <endpoint name="dynamicLB">
                    <dynamicLoadbalance failover="true"
                                           algorithm="org.apache.synapse.endpoints.algorithms.RoundRobin">
                        <membershipHandler
                                class="org.apache.synapse.core.axis2.Axis2LoadBalanceMembershipHandler">
                            <property name="applicationDomain" value="apache.axis2.application.domain"/>
                        </membershipHandler>
                    </dynamicLoadbalance>
                </endpoint>
            </send>
            <drop/>
        </in>

        <out>
            <send/>
        </out>
    </sequence>

    <sequence name="errorHandler">
        <makefault response="true">
            <code value="tns:Receiver" xmlns:tns="http://www.w3.org/2003/05/soap-envelope"/>
            <reason value="COULDN'T SEND THE MESSAGE TO THE SERVER."/>
        </makefault>
        <send/>
    </sequence>

</definitions>

Objective: Demonstrate the simple dynamic load balancing among a set of nodes

Prerequisites:

Enable clustering and group management in the <ESB installation directory>/repository/conf/axis2.xml file. This can be done by setting the "enable" attribute of the "cluster" and "groupManagement" elements to "true". Also provide the IP address of your machine as the value of the "mcastBindAddress" and "localMemberHost" parameters.
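
A sketch of the relevant part of axis2.xml is shown below. Only the "enable" attributes and the two parameter values need to change; keep the cluster class attribute and all other parameters as they ship in your distribution (the class name and IP address shown here are placeholders that vary by version and machine):

<cluster class="org.apache.axis2.clustering.tribes.TribesClusterManager" enable="true">
    <!-- keep the shipped class attribute and other parameters; only the values below change -->
    <parameter name="mcastBindAddress">192.168.1.10</parameter>
    <parameter name="localMemberHost">192.168.1.10</parameter>
    <!-- other shipped clustering parameters unchanged -->
    <groupManagement enable="true">
        <!-- shipped group management configuration unchanged -->
    </groupManagement>
</cluster>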

Start ESB with sample configuration 57. (i.e. wso2esb-samples -sn 57)

Deploy the LoadbalanceFailoverService by switching to the <Synapse installation directory>/samples/axis2Server/src/LoadbalanceFailoverService directory and running ant.

Enable clustering in the <Synapse installation directory>/samples/axis2Server/repository/conf/axis2.xml file. This can be done by setting the "enable" attribute of the "cluster" element to "true". Also provide the IP address of your machine as the value of the "mcastBindAddress" and "localMemberHost" parameters. Make sure that the "applicationDomain" of the membershipHandler is the same as the domain name specified in the axis2.xml files of the Axis2 servers. Then start three instances of the sample Axis2 server on HTTP ports 9001, 9002 and 9003, giving each server a unique name.

Example commands to run sample Axis2 servers from the <Synapse installation directory>/samples/axis2Server directory in Linux are listed below:

./axis2server.sh -http 9001 -https 9005 -name MyServer1
./axis2server.sh -http 9002 -https 9006 -name MyServer2
./axis2server.sh -http 9003 -https 9007 -name MyServer3

Now we are done with setting up the environment for load balance sample. Start the load balance and failover client using the following command:

ant loadbalancefailover -Di=100

This client sends 100 requests to the LoadbalanceFailoverService through the ESB. The ESB distributes the load in a round-robin manner among the three nodes that have joined the application domain. LoadbalanceFailoverService appends the name of the server to the response, so that the client can determine which server processed each message. If you examine the console output of the client, you can see that requests are processed by the three servers as follows:

[java] Request: 1 ==> Response from server: MyServer1
[java] Request: 2 ==> Response from server: MyServer2
[java] Request: 3 ==> Response from server: MyServer3
[java] Request: 4 ==> Response from server: MyServer1
[java] Request: 5 ==> Response from server: MyServer2
[java] Request: 6 ==> Response from server: MyServer3
[java] Request: 7 ==> Response from server: MyServer1
...

Now run the client without the -Di=100 parameter (i.e. ant loadbalancefailover) to send requests indefinitely. While the client is running, shut down the server named MyServer1. You can observe that requests are distributed only among MyServer2 and MyServer3 after MyServer1 is shut down. Console output before and after shutting down MyServer1 is listed below (MyServer1 was shut down after request 63):

...
[java] Request: 61 ==> Response from server: MyServer1
[java] Request: 62 ==> Response from server: MyServer2
[java] Request: 63 ==> Response from server: MyServer3
[java] Request: 64 ==> Response from server: MyServer2
[java] Request: 65 ==> Response from server: MyServer3
[java] Request: 66 ==> Response from server: MyServer2
[java] Request: 67 ==> Response from server: MyServer3
...

Now restart MyServer1. You can observe that requests are again sent to all three servers.

Sample 58: Static load balancing between 3 nodes

<definitions xmlns="http://ws.apache.org/ns/synapse">

    <sequence name="main" onError="errorHandler">
        <in>
            <send>
                <endpoint>
                    <loadbalance failover="true">
                        <member hostName="127.0.0.1" httpPort="9001" httpsPort="9005"/>
                        <member hostName="127.0.0.1" httpPort="9002" httpsPort="9006"/>
                        <member hostName="127.0.0.1" httpPort="9003" httpsPort="9007"/>
                    </loadbalance>
                </endpoint>
            </send>
            <drop/>
        </in>

        <out>
            <send/>
        </out>
    </sequence>

    <sequence name="errorHandler">
        <makefault response="true">
            <code value="tns:Receiver" xmlns:tns="http://www.w3.org/2003/05/soap-envelope"/>
            <reason value="COULDN'T SEND THE MESSAGE TO THE SERVER."/>
        </makefault>
        <send/>
    </sequence>

</definitions>

Objective: Demonstrate the simple static load balancing among a set of nodes

Prerequisites:

Start ESB with sample configuration 58. (i.e. wso2esb-samples -sn 58)

Deploy the LoadbalanceFailoverService by switching to the <Synapse installation directory>/samples/axis2Server/src/LoadbalanceFailoverService directory and running ant.

Start three instances of the sample Axis2 server on HTTP ports 9001, 9002 and 9003, giving each server a unique name.

Example commands to run sample Axis2 servers from the <Synapse installation directory>/samples/axis2Server directory in Linux are listed below:

./axis2server.sh -http 9001 -https 9005 -name MyServer1
./axis2server.sh -http 9002 -https 9006 -name MyServer2
./axis2server.sh -http 9003 -https 9007 -name MyServer3

The environment for the load balancing sample is now set up. Start the load balance and failover client using the following command:

ant loadbalancefailover -Di=100

This client sends 100 requests to the LoadbalanceFailoverService through the ESB. The ESB distributes the load among the three members listed in the configuration in a round-robin manner. LoadbalanceFailoverService appends the name of the server to the response, so that the client can determine which server processed each message. If you examine the console output of the client, you can see that requests are processed by the three servers as follows:

[java] Request: 1 ==> Response from server: MyServer1
[java] Request: 2 ==> Response from server: MyServer2
[java] Request: 3 ==> Response from server: MyServer3
[java] Request: 4 ==> Response from server: MyServer1
[java] Request: 5 ==> Response from server: MyServer2
[java] Request: 6 ==> Response from server: MyServer3
[java] Request: 7 ==> Response from server: MyServer1
...

Now run the client without the -Di=100 parameter (i.e. ant loadbalancefailover) to send requests indefinitely. While the client is running, shut down the server named MyServer1. You can observe that requests are distributed only among MyServer2 and MyServer3 after MyServer1 is shut down. Console output before and after shutting down MyServer1 is listed below (MyServer1 was shut down after request 63):

...
[java] Request: 61 ==> Response from server: MyServer1
[java] Request: 62 ==> Response from server: MyServer2
[java] Request: 63 ==> Response from server: MyServer3
[java] Request: 64 ==> Response from server: MyServer2
[java] Request: 65 ==> Response from server: MyServer3
[java] Request: 66 ==> Response from server: MyServer2
[java] Request: 67 ==> Response from server: MyServer3
...

Now restart MyServer1. You can observe that requests are again sent to all three servers.

Sample 59: Weighted load balancing between 3 endpoints

<?xml version="1.0" encoding="UTF-8"?>
<definitions xmlns="http://ws.apache.org/ns/synapse">
    <sequence name="main" onError="errorHandler">
        <in>
            <send>
                <endpoint>
                    <loadbalance
                            algorithm="org.apache.synapse.endpoints.algorithms.WeightedRoundRobin">
                        <endpoint>
                            <address uri="http://localhost:9001/services/LBService1">
                                <enableAddressing/>
                                <suspendOnFailure>
                                    <initialDuration>20000</initialDuration>
                                    <progressionFactor>1.0</progressionFactor>
                                </suspendOnFailure>
                            </address>
                            <property name="loadbalance.weight" value="1"/>
                        </endpoint>
                        <endpoint>
                            <address uri="http://localhost:9002/services/LBService1">
                                <enableAddressing/>
                                <suspendOnFailure>
                                    <initialDuration>20000</initialDuration>
                                    <progressionFactor>1.0</progressionFactor>
                                </suspendOnFailure>
                            </address>
                            <property name="loadbalance.weight" value="2"/>
                        </endpoint>
                        <endpoint>
                            <address uri="http://localhost:9003/services/LBService1">
                                <enableAddressing/>
                                <suspendOnFailure>
                                    <initialDuration>20000</initialDuration>
                                    <progressionFactor>1.0</progressionFactor>
                                </suspendOnFailure>
                            </address>
                            <property name="loadbalance.weight" value="3"/>
                        </endpoint>
                    </loadbalance>
                </endpoint>
            </send>
            <drop/>
        </in>
        <out>
            <send/>
        </out>
    </sequence>
    <sequence name="errorHandler">
        <makefault response="true">
            <code xmlns:tns="http://www.w3.org/2003/05/soap-envelope" value="tns:Receiver"/>
            <reason value="COULDN'T SEND THE MESSAGE TO THE SERVER."/>
        </makefault>
        <send/>
    </sequence>
</definitions>

Objective: Demonstrate the weighted load balancing among a set of endpoints

Prerequisites:

Start ESB with sample configuration 59. (i.e. wso2esb-samples -sn 59)

Deploy the LoadbalanceFailoverService and start three instances of the sample Axis2 server as described in sample 52.

The above configuration sends messages with weighted load balancing behaviour. The weight of each leaf address endpoint is defined by the integer value of the "loadbalance.weight" property associated with that endpoint. If the weight of an endpoint is x, x requests are sent to that endpoint before switching to the next active endpoint.
To test this, run the loadbalancefailover client to send 100 requests as follows:

ant loadbalancefailover -Di=100

This client sends 100 requests to the LoadbalanceFailoverService through the ESB. The ESB distributes the load among the three endpoints mentioned in the configuration in a weighted round-robin manner. LoadbalanceFailoverService appends the name of the server to the response, so that the client can determine which server processed each message. If you examine the console output of the client, you can see that requests are processed by the three servers as follows:

[java] Request: 1 ==> Response from server: MyServer1
[java] Request: 2 ==> Response from server: MyServer2
[java] Request: 3 ==> Response from server: MyServer2
[java] Request: 4 ==> Response from server: MyServer3
[java] Request: 5 ==> Response from server: MyServer3
[java] Request: 6 ==> Response from server: MyServer3
[java] Request: 7 ==> Response from server: MyServer1
[java] Request: 8 ==> Response from server: MyServer2
[java] Request: 9 ==> Response from server: MyServer2
[java] Request: 10 ==> Response from server: MyServer3
[java] Request: 11 ==> Response from server: MyServer3
[java] Request: 12 ==> Response from server: MyServer3
...

As the logs show, in each cycle the endpoint with weight 1 receives one request, the endpoint with weight 2 receives two requests, and the endpoint with weight 3 receives three requests.