WSO2 Business Process Server ships with an embedded H2 database as the BPEL engine's persistence storage. Embedded databases are not suitable as the BPEL engine persistence storage for production deployments. This document guides you through the steps to configure WSO2 Business Process Server with an external database server.
WSO2 Business Process Server uses Apache ODE as its BPEL engine, and Apache ODE can be configured to use an external database other than the embedded H2 database as its persistence storage.
There are two approaches to set up the database.
e.g., to load the MySQL database script into a database called "bps", you can use the following command:

mysql -u root -p bps < /opt/WSO2-BPS-3.1.0/dbscripts/bps/bpel/mysql.sql
Reusable data sources are defined in the datasources.properties file located in the 'repository/conf' directory under the BPS root directory. It is possible to configure any number of data sources, but one data source is enough for WSO2 Business Process Server.
WSO2 Business Process Server ships with a default datasources.properties file which contains the configuration for the embedded H2 database. To configure an external database you only need to change the following database-specific properties. The property values below are specific to a MySQL database.
synapse.datasources.bpsds.driverClassName=com.mysql.jdbc.Driver
synapse.datasources.bpsds.url=jdbc:mysql://localhost:3306/bps_212
synapse.datasources.bpsds.username=root
synapse.datasources.bpsds.password=root
Note: As external database JDBC libraries are not shipped with WSO2 Business Process Server, you have to copy them manually into WSO2-BPS-3.1.0/repository/components/lib.
You can fine-tune your data source configuration using various other properties. You can find descriptions of these configuration parameters in the Apache Commons DBCP documentation. WSO2 Business Process Server data sources support all the parameters supported by Apache DBCP, and you must follow the synapse.datasources.<data source name>.<parameter>=<parameter value> pattern when specifying parameters for the data source.
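For example, the following standard DBCP pool-tuning parameters could be appended to the same file to bound the connection pool (the values shown are illustrative; adjust them to your load profile):

```properties
# Illustrative DBCP pool-tuning values for the bpsds data source.
synapse.datasources.bpsds.maxActive=100
synapse.datasources.bpsds.maxIdle=20
synapse.datasources.bpsds.maxWait=10000
```

The same parameters appear in the clustered datasources.properties example later in this document.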
Note: For the embedded H2 database, if an absolute file URL is used for the datasource URL on a Windows environment, it needs to be configured as follows (i.e. the backslashes should be escaped):
synapse.datasources.bpsds.url=jdbc:h2:file:C:\\bps\\myDs
The datasources.properties file contains the following property, which needs to be changed when configuring multiple server instances to run:
synapse.datasources.providerPort=2199
The following two approaches take advantage of port offsetting to alter this property value automatically, without manually changing datasources.properties or bps.xml.
Method 1: Uncomment the following section in carbon.xml:

<Ports>
    <!-- Override datasources JNDIProviderPort defined in bps.xml and datasources.properties files -->
    <!--<JNDIProviderPort>2199</JNDIProviderPort>-->
</Ports>
Once the above is uncommented, JNDIProviderPort is overridden in carbon.xml. Therefore, it is not necessary to change the RMI ports in multiple places such as bps.xml and datasources.properties.
Method 2: Without enabling the property described in Method 1, simply change the port offset value in carbon.xml. This automatically alters the JNDI ports defined in bps.xml and datasources.properties.
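As a sketch of Method 2 (the effective port value assumes the default of 2199 quoted above):

```xml
<!-- carbon.xml: a port offset of 1 shifts every listed port by 1,
     so the default JNDI provider port 2199 becomes 2200. -->
<Offset>1</Offset>
```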
You must leave the Registry configuration in bps.xml as it is. If you want to configure the registry to use an external database, follow the registry configuration document.
WSO2 Business Process Server generates events to let you track what exactly is happening in the engine, producing detailed information about process executions. These events are persisted in BPS's database and can be queried using the Management API. The default behaviour of the engine is to always generate all events for every executed action. However, from a performance standpoint it is better to deactivate the events you're not interested in (or even all of them), as inserting all these events generates a non-negligible overhead.
The following table details each event possibly generated by ODE:
Event Name | Process/Scope | Description | Type |
---|---|---|---|
ActivityEnabledEvent | Scope | An activity is enabled (just before it's started) | activityLifecycle |
ActivityDisabledEvent | Scope | An activity is disabled (due to dead path elimination) | activityLifecycle |
ActivityExecStartEvent | Scope | An activity starts its execution | activityLifecycle |
ActivityExecEndEvent | Scope | An activity execution terminates | activityLifecycle |
ActivityFailureEvent | Scope | An activity failed | activityLifecycle |
CompensationHandlerRegistered | Scope | A compensation handler gets registered on a scope | scopeHandling |
CorrelationMatchEvent | Process | A matching correlation has been found upon reception of a message | correlation |
CorrelationNoMatchEvent | Process | No matching correlation has been found upon reception of a message | correlation |
CorrelationSetWriteEvent | Scope | A correlation set value has been initialized | dataHandling |
NewProcessInstanceEvent | Process | A new process instance is created | instanceLifecycle |
PartnerLinkModificationEvent | Scope | A partner link has been modified (a new value has been assigned to it) | dataHandling |
ProcessCompletionEvent | Process | A process instance completes | instanceLifecycle |
ProcessInstanceStartedEvent | Process | A process instance starts | instanceLifecycle |
ProcessInstanceStateChangeEvent | Process | The state of a process instance has changed | instanceLifecycle |
ProcessMessageExchangeEvent | Process | A process instance has received a message | instanceLifecycle |
ProcessTerminationEvent | Process | A process instance terminates | instanceLifecycle |
ScopeCompletionEvent | Scope | A scope completes | scopeHandling |
ScopeFaultEvent | Scope | A fault has been produced in a scope | scopeHandling |
ScopeStartEvent | Scope | A scope started | scopeHandling |
VariableModificationEvent | Scope | The value of a variable has been modified | dataHandling |
VariableReadEvent | Scope | The value of a variable has been read | dataHandling |
The second column specifies whether an event is associated with the process itself or with one of its scopes. The event type is used for filtering events.
Using the deployment descriptor, it is possible to tweak event generation to filter which events get created. First, events can be filtered at the process level using one of the following stanzas:
<dd:process-events generate="all"/> <!-- Default configuration -->

<dd:process-events generate="none"/>

<dd:process-events>
    <dd:enable-event>dataHandling</dd:enable-event>
    <dd:enable-event>activityLifecycle</dd:enable-event>
</dd:process-events>
The first form duplicates the default behaviour: when nothing is specified in the deployment descriptor, all events are generated. The second form disables event generation entirely. The third form lets you define which types of events are generated; the possible types are activityLifecycle, scopeHandling, correlation, dataHandling and instanceLifecycle (see the Type column in the table above).
It is also possible to define filtering for each scope of your process. This overrides the settings defined on the process. In order to define event filtering on a scope, the scope activity MUST have a name in your process definition. Scopes are referenced by name in the deployment descriptor:
<dd:deploy xmlns:dd="http://www.apache.org/ode/schemas/dd/2007/03">
    ...
    <dd:process-events generate="none">
        <dd:scope-events name="aScope">
            <dd:enable-event>dataHandling</dd:enable-event>
            <dd:enable-event>scopeHandling</dd:enable-event>
        </dd:scope-events>
        <dd:scope-events name="anotherScope">
            <dd:enable-event>activityLifecycle</dd:enable-event>
        </dd:scope-events>
    </dd:process-events>
    ...
</dd:deploy>
Note that it is useless to enable an event associated with the process itself when filtering events on scopes. The filter defined on a scope is automatically inherited by its inner scopes. So if no filter is defined on a scope, it will use the settings of its closest parent scope having event filters (up to the process). Note that what gets inherited is the full list of selected events, not each event definition individually.
WSO2 Business Process Server lets you register your own event listeners to analyse all produced events and do whatever you want to do with them. To create a listener, you just need to implement the org.apache.ode.bpel.iapi.BpelEventListener interface.
Then add your implementation in the server's classpath (BPS_HOME/repository/components/lib) and add a property in bps.xml giving your fully qualified implementation class name:
<tns:WSO2BPS xmlns:tns="http://wso2.org/bps/config">
    ...
    <tns:EventListeners>
        <tns:listener class="org.wso2.bps.samples.eventlistener.CustomEventListener"/>
    </tns:EventListeners>
    ...
</tns:WSO2BPS>
You can try the sample event listener shipped with WSO2 Business Process Server by adding the above configuration to bps.xml and restarting the server. You can find the source of the sample event listener implementation here.
The WSO2 Carbon platform supports unified endpoints (UEPs) to configure partner endpoints used in BPEL processes. In more general terms, UEPs provide a generalized way of configuring endpoints with quality of service taken into the picture. So a particular UEP configuration can be used across the Carbon platform, for example to configure security for a partner endpoint in a BPEL process, or to configure WS-Addressing in a WSO2 ESB endpoint.
The UEP configuration is engaged to a partner endpoint in the deploy.xml that declares the particular partner service, as follows.
<?xml version="1.0" encoding="UTF-8"?>
<deploy xmlns="http://www.apache.org/ode/schemas/dd/2007/03"
        xmlns:client="urn:ode-apache-org:example:async:client"
        xmlns:server="urn:ode-apache-org:example:async:server">
    <process name="server:Server">
        <active>true</active>
        <retired>false</retired>
        <process-events generate="all" />
        <provide partnerLink="client">
            <service name="server:ServerService" port="ServerPort" />
        </provide>
        <invoke partnerLink="client">
            <service name="client:ServerCallbackService" port="ServerCallbackPort">
                <endpoint xmlns="http://wso2.org/bps/bpel/endpoint/config"
                          endpointReference="uep.epr"/>
            </service>
        </invoke>
    </process>
</deploy>
Here, you can see that the endpointReference property points to the file path of the endpoint reference file. This EPR file contains the address to be used for the invoke. The following is one such sample EPR file (endpoint reference file).
<wsa:EndpointReference xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                       xsi:schemaLocation="http://www.w3schools.com uep_schema.xsd"
                       xmlns:wsa="http://www.w3.org/2005/08/addressing"
                       xmlns:wsdl11="http://schemas.xmlsoap.org/wsdl/">
    <wsa:Address>http://localhost:9973/services/ServerCallbackService</wsa:Address>
    <wsa:Metadata>
        <id>SInvokeEPR</id>
    </wsa:Metadata>
</wsa:EndpointReference>
The UEP can also be located outside the BPEL artifact, in the file system or in the registry. Maintaining a UEP outside the BPEL artifact becomes very useful when governing endpoints used by multiple BPEL processes across multiple WSO2 BPS instances. e.g., if the UEP is to be maintained in the file system, use the absolute path for the UEP in deploy.xml:
<endpoint xmlns="http://wso2.org/bps/bpel/endpoint/config" endpointReference="/opt/wso2/server/config/uep.epr"/>
e.g., if the UEP is to be maintained in the registry, use the registry-specific path for the UEP in deploy.xml:
<endpoint xmlns="http://wso2.org/bps/bpel/endpoint/config" endpointReference="conf:/uep.epr"/>
If you are interested in the other constructs supported by UEPs, please refer to the UEP XML schema, which documents the currently supported functionality such as setting the 'ReplyTo' header. Also take a look at the Async-Server sample, a BPEL process which uses a UEP to configure the target endpoint of a partner service.
You can find more details on BPEL extensions here.
You can specify an external or custom transaction factory class to take care of transactions. The following configuration in BPS_HOME/repository/conf/bps.xml can be used to set the transaction factory.
<tns:WSO2BPS xmlns:tns="http://wso2.org/bps/config">
    ...
    <tns:TransactionFactory class="class name"/>
    ...
</tns:WSO2BPS>
Message exchange (MEX) interceptors can be used to pull out data, as well as to manipulate it, by enabling interception of partner/server invocations. MEX interceptors can be used in four different situations.
To implement a custom message exchange interceptor, implement the MessageExchangeInterceptor interface and drop the resulting jar into the BPS_HOME/repository/components/lib/ directory. Then add the following configuration entry to bps.xml in the BPS_HOME/repository/conf directory.
<tns:WSO2BPS xmlns:tns="http://wso2.org/bps/config">
    ...
    <tns:MexInterceptors>
        <tns:interceptor class="class name"/>
    </tns:MexInterceptors>
    ...
</tns:WSO2BPS>
You can configure and fine-tune OpenJPA by specifying OpenJPA properties in the BPS_HOME/repository/conf/bps.xml file.
<tns:WSO2BPS xmlns:tns="http://wso2.org/bps/config">
    ...
    <tns:OpenJPAConfig>
        <tns:property name="openjpa.FlushBeforeQueries" value="true"/>
        <tns:property name="property name" value="value"/>
        ...
    </tns:OpenJPAConfig>
    ...
</tns:WSO2BPS>
The HTTP connection manager should be configured in line with the number of concurrent HTTP connections expected on the BPS server.
<tns:WSO2BPS xmlns:tns="http://wso2.org/bps/config">
    ...
    <tns:MultithreadedHttpConnectionManagerConfig>
        <tns:maxConnectionsPerHost value="20"/>
        <tns:maxTotalConnections value="200"/>
    </tns:MultithreadedHttpConnectionManagerConfig>
    ...
</tns:WSO2BPS>
Prerequisites - Start a JMS provider. ActiveMQ is used here, as the default configurations for it are already included in axis2.xml (see step 4 below).
<transportReceiver name="jms" class="org.apache.axis2.transport.jms.JMSListener"/>
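If the JMS transport sender is not already enabled in your axis2.xml, the standard Axis2 sender declaration looks like the following (verify against the axis2.xml shipped with your distribution):

```xml
<!-- Enables outgoing JMS messages; pairs with the JMSListener receiver above. -->
<transportSender name="jms" class="org.apache.axis2.transport.jms.JMSSender"/>
```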
If you have configured it correctly, the following logs will be shown during server start-up.
INFO - JMSSender - JMS Sender started
INFO - JMSSender - JMS Transport Sender initialized...
...
INFO - JMSListener - JMS Transport Receiver/Listener initialized...
...
INFO - JMSListener - JMS listener started
Note - More information can be found from this article Configuring JMS Transport in WSO2 Business Process Server (BPS).
The process instance cleanup feature in WSO2 Business Process Server allows you to configure periodic instance cleanup tasks, based on various process instance properties, to remove process instance data from the WSO2 Business Process Server persistence storage.
You can use the 'Schedules' section in bps.xml to configure instance cleanup. The 'Schedules' section can contain multiple 'Schedule' elements, each with multiple 'cleanup' elements. In each 'Schedule' element you can specify the 'when' attribute, which sets the time at which the instance cleanup task gets executed; the time is configured using cron expressions. Inside a 'cleanup' element you can use filter elements, in which you specify the instance properties used to select the instances to be deleted.
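As an illustrative sketch only - the nesting follows the description above, but the exact element and attribute names, and the filter syntax (elided here), should be verified against the commented sample in the bps.xml shipped with your distribution:

```xml
<!-- Sketch: run an instance cleanup task at 2 AM every Sunday. -->
<tns:Schedules>
    <tns:Schedule when="0 0 2 ? * SUN">
        <tns:cleanup>
            <tns:filter>...</tns:filter>
        </tns:cleanup>
    </tns:Schedule>
</tns:Schedules>
```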
One technique to reduce memory utilization of the BPS engine is process hydration and dehydration. You can configure the hydration/dehydration policy in bps.xml in the repository/conf directory, or define a custom hydration/dehydration policy programmatically.
In bps.xml, you can set the maximum age of a process before it is dehydrated via the MaxAge element, and the maximum number of deployed processes that can exist in memory at a given time via the maxCount attribute.
The example policy below enables dehydration, sets the maximum number of deployed processes that can exist in memory at a given time to 100, and sets the maximum age of a process before it is dehydrated to 5 minutes (300000 ms).
<tns:ProcessDehydration maxCount="100" value="true">
    <tns:MaxAge value="300000"/>
</tns:ProcessDehydration>
The usePeer2Peer property controls whether the BPEL engine uses internal communications for sending messages between BPEL processes that may be executing within the same engine. The usePeer2Peer property defaults to true.
<dd:invoke partnerLink="..." usePeer2Peer="true"> <dd:service name="..." port="..."/> </dd:invoke>
Disabling P2P Communication
<dd:invoke partnerLink="..." usePeer2Peer="false"> <dd:service name="..." port="..."/> </dd:invoke>
When the value of this attribute is false, the BPS engine sends messages to the other process through the integration layer. If you have deployed your BPEL process and its partner services on the same WSO2 Carbon instance, you can avoid network overhead by using usePeer2Peer="false" for the particular partner interaction in deploy.xml.
Clustering BPS has three different aspects.
Configuration sharing is done using the WSO2 Governance Registry. All the BPS nodes in the cluster point to one instance of WSO2 G-Reg. WSO2 G-Reg consists of three registry spaces.
For BPS clustering, a local registry is used per instance, the configuration registry of each BPS instance is mounted to the same configuration registry, and the governance registry is shared among G-Reg and all the BPS instances.
Modify the dbConfig element in GREG_HOME/repository/conf/registry.xml to have the database configuration shown below, and copy the MySQL JDBC driver library to the GREG_HOME/repository/components/lib directory.
<currentDBConfig>wso2registry</currentDBConfig>
<readOnly>false</readOnly>
<enableCache>true</enableCache>
<registryRoot>/</registryRoot>
<dbConfig name="wso2registry">
    <dataSource>jdbc/WSO2CarbonDB</dataSource>
</dbConfig>
Include the datasource details in GREG_HOME/repository/conf/datasources/master-datasources.xml as follows. Change the IP address, URL, username and password accordingly.
<datasource>
    <name>WSO2_CARBON_DB</name>
    <description>The datasource used for registry and user manager</description>
    <jndiConfig>
        <name>jdbc/WSO2CarbonDB</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:mysql://ip:3306/greg?autoReconnect=true</url>
            <username>root</username>
            <password>root123</password>
            <driverName>com.mysql.jdbc.Driver</driverName>
            <maxActive>50</maxActive>
            <maxWait>60000</maxWait>
            <minIdle>5</minIdle>
        </configuration>
    </definition>
</datasource>
Modify the dbConfig element in BPS_MASTER_HOME/repository/conf/registry.xml to have the database configuration shown below, and copy the MySQL JDBC driver library to the BPS_MASTER_HOME/repository/components/lib directory.
<currentDBConfig>wso2registry</currentDBConfig>
<readOnly>false</readOnly>
<registryRoot>/</registryRoot>
<dbConfig name="wso2registry">
    <dataSource>jdbc/WSO2CarbonDB</dataSource>
</dbConfig>
Include the datasource details in BPS_MASTER_HOME/repository/conf/datasources/master-datasources.xml as follows. Change the IP address, URL, username and password accordingly.
<datasource>
    <name>WSO2_CARBON_DB</name>
    <jndiConfig>
        <name>jdbc/WSO2CarbonDB</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:mysql://IP:3306/bpsMaster?autoReconnect=true</url>
            <username>root</username>
            <password>root123</password>
            <driverName>com.mysql.jdbc.Driver</driverName>
            <maxActive>50</maxActive>
            <maxWait>60000</maxWait>
            <minIdle>5</minIdle>
        </configuration>
    </definition>
</datasource>
Next, configure mounting by adding more parameters to registry.xml. Add the following database configurations to BPS_MASTER_HOME/repository/conf/registry.xml.
<dbConfig name="bpsMountRegistry">
    <dataSource>jdbc/WSO2MountRegistryDB</dataSource>
</dbConfig>
<remoteInstance url="https://<IP of G-Reg>:<HTTPS port of G-Reg>/registry">
    <id>Mount1</id>
    <dbConfig>bpsMountRegistry</dbConfig>
    <readOnly>false</readOnly>
    <registryRoot>/</registryRoot>
</remoteInstance>
<mount path="/_system/config" overwrite="true">
    <instanceId>Mount1</instanceId>
    <targetPath>/_system/bpsConfig</targetPath>
</mount>
<mount path="/_system/governance" overwrite="true">
    <instanceId>Mount1</instanceId>
    <targetPath>/_system/governance</targetPath>
</mount>
Update the remoteInstance URL according to the configuration of the machine running G-Reg. Note: the "instanceId", "id" and "dbConfig" elements should be mapped properly if you are using different names for them. Add the new datasource details in BPS_MASTER_HOME/repository/conf/datasources/master-datasources.xml as follows. Change the IP address, URL, username and password accordingly.
<datasource>
    <name>WSO2_REGISTRY_DB</name>
    <jndiConfig>
        <name>jdbc/WSO2MountRegistryDB</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:mysql://IP:3306/greg?autoReconnect=true</url>
            <username>root</username>
            <password>root123</password>
            <driverName>com.mysql.jdbc.Driver</driverName>
            <maxActive>50</maxActive>
            <maxWait>60000</maxWait>
            <minIdle>5</minIdle>
        </configuration>
    </definition>
</datasource>
Modify the dbConfig element in BPS_SLAVE_HOME/repository/conf/registry.xml to have the database configuration shown below, and copy the MySQL JDBC driver library to the BPS_SLAVE_HOME/repository/components/lib directory.
<currentDBConfig>wso2registry</currentDBConfig>
<readOnly>false</readOnly>
<registryRoot>/</registryRoot>
<dbConfig name="wso2registry">
    <dataSource>jdbc/WSO2CarbonDB</dataSource>
</dbConfig>
Include the datasource details in BPS_SLAVE_HOME/repository/conf/datasources/master-datasources.xml as follows. Change the IP address, URL, username and password accordingly.
<datasource>
    <name>WSO2_CARBON_DB</name>
    <jndiConfig>
        <name>jdbc/WSO2CarbonDB</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:mysql://IP:3306/bpsSlave?autoReconnect=true</url>
            <username>root</username>
            <password>root123</password>
            <driverName>com.mysql.jdbc.Driver</driverName>
            <maxActive>50</maxActive>
            <maxWait>60000</maxWait>
            <minIdle>5</minIdle>
        </configuration>
    </definition>
</datasource>
Next, configure mounting by adding more parameters to registry.xml. Add the following database configurations to BPS_SLAVE_HOME/repository/conf/registry.xml. Update the remoteInstance URL according to the configuration of the machine running G-Reg.
<dbConfig name="bpsMountRegistry">
    <dataSource>jdbc/WSO2MountRegistryDB</dataSource>
</dbConfig>
<remoteInstance url="https://<IP of G-Reg>:<HTTPS port of G-Reg>/registry">
    <id>Mount1</id>
    <dbConfig>bpsMountRegistry</dbConfig>
    <readOnly>true</readOnly>
    <registryRoot>/</registryRoot>
</remoteInstance>
<mount path="/_system/config" overwrite="true">
    <instanceId>Mount1</instanceId>
    <targetPath>/_system/bpsConfig</targetPath>
</mount>
<mount path="/_system/governance" overwrite="true">
    <instanceId>Mount1</instanceId>
    <targetPath>/_system/governance</targetPath>
</mount>
Note: the "instanceId", "id" and "dbConfig" elements should be mapped properly if you are using different names for them. Add the new datasource details in BPS_SLAVE_HOME/repository/conf/datasources/master-datasources.xml as follows. Change the IP address, URL, username and password accordingly.
<datasource>
    <name>WSO2_REGISTRY_DB</name>
    <jndiConfig>
        <name>jdbc/WSO2MountRegistryDB</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:mysql://IP:3306/greg?autoReconnect=true</url>
            <username>root</username>
            <password>root123</password>
            <driverName>com.mysql.jdbc.Driver</driverName>
            <maxActive>50</maxActive>
            <maxWait>60000</maxWait>
            <minIdle>5</minIdle>
        </configuration>
    </definition>
</datasource>
Note: Compared with the master node, only the local registry configuration changes, and the registry access mode of the mount is read-only; the local registry configuration itself should remain read-write, since it is specific to each node.
The same user store should be used for all the cluster nodes (G-Reg, BPS master and BPS slave). Change the database configuration in CARBON_HOME/repository/conf/user-mgt.xml by adding the line below to refer to the datasource defined in BPS_MASTER_HOME/repository/conf/datasources/master-datasources.xml. Change the IP address, URL, username and password accordingly.
<Property name="dataSource">jdbc/WSO2CarbonDB</Property>
To configure the user store for the cluster nodes, refer How to Configure an External LDAP User Store.
The following instructions should be followed only for the BPS cluster nodes.
The BPEL databases of both the master and slave nodes should point to the same database. By default, the BPEL database points to an embedded H2 database on each BPS cluster node. You need to configure a single database, preferably MySQL. Open the BPS_HOME/repository/conf/datasources.properties file of both BPS nodes and edit the configuration as follows; see Configuring a Data Source for instructions. IPs, ports, usernames and passwords should be updated to the real, appropriate values.
Note - JNDI port (synapse.datasources.providerPort) should be changed in each BPS node if the BPS cluster nodes are on the same host.
synapse.datasources=bpsds
synapse.datasources.icFactory=com.sun.jndi.rmi.registry.RegistryContextFactory
synapse.datasources.providerPort=2199
synapse.datasources.bpsds.registry=JNDI
synapse.datasources.bpsds.type=BasicDataSource
synapse.datasources.bpsds.driverClassName=com.mysql.jdbc.Driver
synapse.datasources.bpsds.url=jdbc:mysql://localhost:3306/bps210MySQL?autoReconnect=true
synapse.datasources.bpsds.username=root
synapse.datasources.bpsds.password=root123
synapse.datasources.bpsds.dsName=bpsds
synapse.datasources.bpsds.maxActive=100
synapse.datasources.bpsds.maxIdle=20
synapse.datasources.bpsds.maxWait=10000
Deployment happens using the deployment synchronizer, which synchronizes the configuration across a cluster of Carbon servers. The deployment synchronizer can be tuned to run periodically by defining a synchronization period.
It is possible to maintain all the nodes in the cluster in sync through the shared registry with the deployment synchronizer. One of the nodes can be designated as the master node and it can upload its local repository to the registry using the deployment synchronizer. Other nodes (slave nodes) can then download the same repository from the registry and deploy locally.
For that, the synchronizer has to be run in auto commit mode in the master node. When in auto commit mode, it will periodically upload the changed artifacts in the local repository to the registry. Similarly slave nodes should run the synchronizer in the auto checkout mode. If needed, registry eventing can be employed to run the checkout operations so that a checkout will be made only when some artifact has changed in the shared registry.
Configure master node to enable Auto Commit mode
Configure slave node to enable Auto Checkout mode
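A sketch of the corresponding carbon.xml sections (the DeploymentSynchronizer element is standard Carbon configuration, but verify the available child elements and defaults against your distribution's carbon.xml) - the master enables AutoCommit while the slaves enable AutoCheckout:

```xml
<!-- Master node: periodically push local repository changes to the registry. -->
<DeploymentSynchronizer>
    <Enabled>true</Enabled>
    <AutoCommit>true</AutoCommit>
    <AutoCheckout>false</AutoCheckout>
</DeploymentSynchronizer>

<!-- Slave nodes: periodically pull the repository from the registry. -->
<DeploymentSynchronizer>
    <Enabled>true</Enabled>
    <AutoCommit>false</AutoCommit>
    <AutoCheckout>true</AutoCheckout>
</DeploymentSynchronizer>
```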
Update the cache configuration for all the nodes in the cluster including the G-Reg node.
Open CARBON_HOME/repository/conf/etc/cache.xml file and change the clustering configuration as below.
Comment/remove the following
<configuration>
    <cacheMode>local</cacheMode>
</configuration>
Uncomment the following
<configuration>
    <clustering>
        <enabled>true</enabled>
        <clusterName>wso2carbon-cache</clusterName>
    </clustering>
    <cacheMode>replicated</cacheMode>
    <sync>true</sync>
</configuration>
If you are running multiple instances of the same or different WSO2 products, you need to configure the ports for each instance. You can do this in $CARBON_HOME/repository/conf/carbon.xml using the port Offset:
<Offset>0</Offset>
e.g., Offset=2 with a default HTTPS port of 9443 sets the effective HTTPS port to 9445.