Friday, February 17, 2012

Introducing ThreadLogic

Eric Gross and Sabha Parameswaran, from the Oracle FMW Architects Team (The A-Team), are happy to introduce ThreadLogic, an open source Thread Dump Analyzer tool, to the Middleware community.

Motivation behind ThreadLogic
 
The current set of TDA tools (Samurai/TDA) don't mine the thread dumps or provide a detailed view of what each thread is doing; they limit themselves to reporting the state (locked/waiting/running) or the lock information. They don't characterize the type of activity within a thread: should it be treated as normal or does it deserve a closer look? Can a pattern or anti-pattern be applied against it? Are there possible optimizations? Are there any hot spots? Can threads be classified based on their execution cycles?

We decided to create ThreadLogic to address these deficiencies. It is based on a fork of the TDA open source tool, adding the ability to analyze threads and provide advice based on a set of extensible advisories and thread groupings, while supporting thread dumps from all major JVMs. It also provides a thorough, in-depth analysis of WebLogic Server thread dumps. Both the thread grouping and the advisories are extensible: users can add new patterns to match, tag and group threads. Please check the documentation on the ThreadLogic project's web site for more details on the tool.

Download

The latest stable bits (version 0.9) can be downloaded from here.

Feedback welcome.

Thursday, February 9, 2012

Analyzing Thread Dumps in Middleware - Part 4

This posting is the fourth and final section in the series Analyzing Thread Dumps in Middleware


In this section, we will look at a new version of the TDA (Thread Dump Analyzer) tool developed by the author of this blog (Sabha Parameswaran, in collaboration with his colleague Eric Gross, also from the Oracle A-Team), its capabilities, and some real world samples of thread dump analysis before concluding the series.


TDA A-Team - an Enhanced TDA (version 3.0)

As mentioned earlier, the current TDA tools (Samurai/TDA) don't mine the data inside thread dumps or provide a detailed view of what each thread is doing; they limit themselves to reporting the state (locked/waiting/running) or the lock information. There is no mention of the type of activity within a thread: should it be treated as normal or does it deserve a closer look? Can a pattern or anti-pattern be applied against it? Are there possible optimizations? Are there any hot spots? Can threads be classified based on their execution cycles?

In order to fill some of the gaps in the existing TDA tools, as well as provide a detailed analysis of WebLogic Server specific thread dumps, the author of this blog (Sabha Parameswaran) decided to develop a custom TDA tool that can suggest a set of actions and recommendations based on predefined thread execution patterns.

In collaboration with Oracle A-Team colleague Eric Gross, the author decided to enhance the existing open source TDA version 2.2 instead of reinventing the wheel, leveraging the capabilities of the base TDA to handle parsing of the thread dump, the UI and reporting. Eric Gross enhanced TDA to fully support Sun Hotspot, JRockit (support for JRockit was partial in base TDA v2.2) and IBM JVM thread dumps, as well as adding support for custom categories. Sabha Parameswaran enhanced the tool's analytics capabilities: grouping of threads based on functionality, and tagging of threads with advisories using pre-defined rules and patterns that can be extended to handle additional patterns.

We wish to thank Ingo Rockel, Robert Whitehurst and the numerous others who contributed to the original TDA, which allowed us to build on their work in delivering a more powerful tool for the entire Java community.

Once a thread dump is parsed and the thread details are populated, each thread is analyzed against matching advisories and tagged appropriately. The threads are also associated with specific Thread Groups based on functionality or thread name.

Each advisory has a health level indicating the severity of the issue found, a pattern, a name, a keyword and related advice.

Samples of advisories:

Thread Advisory Name: Large # of WLS Muxer Threads
Health Level: WATCH
Keyword: WebLogicMuxerThreads
Description: Large number of WLS Muxer Threads
Advice: Reduce number of WLS Muxer Threads to under 4, use -Dweblogic.SocketReaders=NoOfThreads flag in command line
 
Thread Advisory Name: Stuck Thread
Health Level: FATAL
Keyword: STUCK
Description: Thread is stuck, request taking a very long time to finish
Advice: Check why the thread or call is taking so long. Is it blocked on an unavailable or bad resource, or contending for a lock? Can be ignored if it is doing repeat work in a loop (like adapter threads polling for events in an infinite loop).



Thread Advisory Name: Hot Spots
Health Level: WARNING
Keyword: HotCallPattern
Description: Multiple threads executing the same code path
Advice: Ensure there are no blocking locks or bottlenecks, sufficient resources are available, and the remote service being invoked is responsive and scaling well to handle increased load

Each advisory gets triggered based either on call execution patterns observed in the thread stack or on the presence of other conditions (a blocked thread, or multiple threads blocked on the same lock, can trigger the BlockedThreads advisory). Sometimes a thread might be tagged as IGNORE or NORMAL based on its execution logic, or might be tagged more specifically as a JMS send or receive client or a Servlet thread.
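
To make the idea concrete, here is a minimal, hypothetical Java sketch of how an advisory could be modeled and matched against a thread's stack frames. The class and field names are illustrative only and are not ThreadLogic's actual implementation.

// Illustrative sketch only - not ThreadLogic's actual code.
import java.util.List;
import java.util.regex.Pattern;

// Health levels in ascending order of severity (see the next paragraph).
enum HealthLevel { IGNORE, NORMAL, WATCH, WARNING, FATAL }

class ThreadAdvisory {
    final String name;         // e.g. "Stuck Thread"
    final String keyword;      // e.g. "STUCK"
    final HealthLevel health;  // severity assigned when the advisory matches
    final Pattern pattern;     // matched against each stack frame
    final String advice;

    ThreadAdvisory(String name, String keyword, HealthLevel health, String regex, String advice) {
        this.name = name;
        this.keyword = keyword;
        this.health = health;
        this.pattern = Pattern.compile(regex);
        this.advice = advice;
    }

    // An advisory applies if any frame in the thread's stack matches its pattern.
    boolean matches(List<String> stackFrames) {
        return stackFrames.stream().anyMatch(frame -> pattern.matcher(frame).find());
    }
}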

The health levels (in descending order of severity) are FATAL (meant for deadlocks, STUCK threads, blocked Finalizer etc.), WARNING, WATCH (worth watching), NORMAL and IGNORE. The highest criticality among the threads within a group gets promoted to the Thread Group's health level, and the same promotion is repeated at the thread dump level. There can be multiple advisories tagged to a Thread, Thread Group and Thread Dump.
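
A rough sketch of this promotion rule, assuming the illustrative HealthLevel enum above (declared in ascending order of severity), might look like this:

// Illustrative sketch only - the group's health is simply the worst health
// level among its members; the same rule rolls group health up to the dump level.
import java.util.Collection;

class HealthRollup {
    static HealthLevel promote(Collection<HealthLevel> memberLevels) {
        HealthLevel worst = HealthLevel.IGNORE;
        for (HealthLevel level : memberLevels) {
            if (level.compareTo(worst) > 0) {
                worst = level;
            }
        }
        return worst;
    }
}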

Snapshot of Advisory Map

Advisory Map
Snapshot of Threads tagged with advisories in the thread dump

Threads in a thread dump tagged with Advisories/Health Levels

Thread Groups Summary

The threads are associated with thread groups based on functionality or thread names. Additional patterns exist to tag other threads (like iWay Adapter, SAP, Tibco threads) and group them. The summary page reports the health level of each group, the total number of threads, the number of blocked threads, the critical advisories found, etc.

Thread Groups Summary

Critical Advisories per thread group
The critical advisories (at Warning/Fatal health levels) found in individual threads are then promoted to the parent thread group and reported in the thread group summary page.

Critical Advisories for Thread Group

Thread Groups

One can see the thread groups are divided into two buckets: WLS and non-WLS related threads. The JVM, Coherence, LDAP and other unknown custom threads go under the non-WLS bucket, while the WLS, Muxer, Oracle, SOA, JMS and Oracle Adapter threads go under the WLS bucket.



 Individual Thread tagging with Advisories

Clicking on the individual threads will display the advisories and thread stack.
Advisories and details at thread level
The details of the advisory pop up on mousing over the advisory links.
The advisories are color coded and details can be highlighted.

Colored advisories for individual threads

 
Sub-groups are also created within individual Thread Groups based on warning levels, hot call patterns (multiple threads executing the same code section), threads doing remote I/O (socket or DB reads), etc.

The following snapshot shows an example of a hot call pattern where multiple threads appear to be executing the same code path (all are attempting to update the MBean Server with a new MBean).

Hot Call Pattern - multiple threads exhibiting similar code execution
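
As an aside, the grouping behind a hot call pattern can be approximated with a simple sketch: bucket threads by their top few stack frames and flag any bucket that exceeds a threshold. This is illustrative only, not ThreadLogic's actual logic.

// Illustrative sketch: group threads whose top N stack frames are identical.
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class HotCallPatternFinder {
    static Map<String, Integer> findHotPatterns(List<List<String>> threadStacks,
                                                 int topFrames, int minThreads) {
        Map<String, Integer> countsByCodePath = new HashMap<>();
        for (List<String> stack : threadStacks) {
            // Key each thread by its top few frames - the code path it is executing.
            String key = String.join("|", stack.subList(0, Math.min(topFrames, stack.size())));
            countsByCodePath.merge(key, 1, Integer::sum);
        }
        // Keep only code paths shared by enough threads to be considered "hot".
        countsByCodePath.values().removeIf(count -> count < minThreads);
        return countsByCodePath;
    }
}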

Merging of threads across multiple thread dumps and reporting of progress in the thread state

The merge functionality has been enhanced to report on the progress of each thread across the thread dumps. Based on the order of the thread dumps, the thread's stack trace is compared across every pair of consecutive thread dumps.

Merged view showing progress information for individual threads
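
Conceptually, the progress check boils down to comparing a thread's stack trace in consecutive dumps; the following sketch (not the actual TDA/ThreadLogic code) shows the idea:

// Illustrative sketch: a thread whose stack is unchanged across consecutive dumps
// is likely stuck or waiting; a changed stack suggests the thread is progressing.
import java.util.List;

class ThreadProgress {
    static String assess(List<String> stackInDumpN, List<String> stackInDumpNPlus1) {
        return stackInDumpN.equals(stackInDumpNPlus1)
                ? "NO PROGRESS (same stack in consecutive dumps - worth a closer look)"
                : "PROGRESSING (stack changed between dumps)";
    }
}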

Merged reporting of individual thread stack traces (exists in base TDA version 2.2)

Merged Thread stack traces across thread dumps

Merging can also be done across multiple thread dump log files (as in the case of IBM JVMs, which create a new log file containing the thread dump every time a dump is requested).

Usability benefits of new TDA A-Team
 
Thanks to the advisories and health levels, it's easy for users to quickly understand the usage patterns, hot spots and thread groups, as well as spot the patterns or anti-patterns already captured in the advisory list.

An example of an anti-pattern: a Servlet thread should not be waiting for an event to occur, as this will translate into bad performance for the end user. Similarly, synchronous JMS consumers might be less performant than async consumers. Too many WLS Muxer threads are not advisable. If WLS Muxer or Finalizer threads are blocked on unrelated locks, that is a fatal condition. It is okay to ignore the STUCK warning issued by WLS Server for pollers like AQ Adapter threads, but not for other threads that are handling servlet requests.

The thread groups help in bunching together related threads: SOA Suite users can see how many BPEL Invoke and Engine threads are getting used, B2B users can see the number of JMS consumers/producers, WLS users can look at the condition and health of the Muxer threads, and similarly for the JVM/Coherence/LDAP/other thread groups.

The merged report lets the user see the critical threads at a glance and check whether they are progressing, instead of wading through numerous threads and associated thread dumps.

We hope this tool can really help both beginners and experts do their jobs more quickly and efficiently when it comes to thread dumps. 


Real World Sample of Thread Dump Analysis

A customer was planning a POC (Proof of Concept) of WebLogic Portal Server (WLP) on the new Oracle Exalogic platform to measure its performance relative to their existing configurations. The test configuration used a cluster of WLP servers communicating via multicast for cluster membership and other service announcements. As soon as the servers were brought up, they became unresponsive. The server instances could not serve any request - neither traffic to the Portal application nor WLS Admin Server requests for status from the server instances. The following errors were getting thrown in the server:


UnsyncCircularQueue$FullQueueException: Queue exceeded maximum capacity of: '65536' elements. 

There were also frequent log messages about the p13n cache getting updated and flushed repeatedly. The p13n cache is the Personalization cache used internally by Portal for handling portal personalization. Each of the servers in the cluster is kept informed of changes in the cache via multicast broadcasts. When the customer disabled the p13n cache, the servers regained normalcy and responsiveness. But the p13n cache is a requirement for portal application functionality, and it's not possible to keep it disabled.

The FullQueueException essentially indicated that WLS's internal request queue was getting overwhelmed and starting to ignore/drop new requests, as it already had 64K requests waiting to be executed. These requests can be internal (like transaction timeouts, schedulers, async processing) or externally generated (like client HTTP requests). The storm of requests, coupled with the p13n cache being central to the issue, indicated that the p13n cache was somehow triggering an internal flood of requests to the server. It was decided to take thread dumps to understand what the p13n cache was executing.

Analyzing the thread dumps threw light on a weird behavior.

    at java/lang/Thread.sleep(J)V(Native Method)
    at weblogic/cluster/MulticastFragmentSocket.sendThrottled(MulticastFragmentSocket.java:198)
    at weblogic/cluster/MulticastFragmentSocket.send(MulticastFragmentSocket.java:157)                                      
    ^-- Holding lock: weblogic/cluster/MulticastFragmentSocket@0x1d4afd4c8[thin lock]
    at weblogic/cluster/FragmentSocketWrapper.send(FragmentSocketWrapper.java:91)
    at weblogic/cluster/MulticastSender.fragmentAndSend(MulticastSender.java:395) 
    at weblogic/cluster/MulticastSender.send(MulticastSender.java:178)
    ^-- Holding lock: weblogic/cluster/MulticastSender@0x1c7ce3628[thin lock]
    at com/bea/p13n/cache/internal/system/SystemCacheManager.doFlushCluster(SystemCacheManager.java:222)
    at com/bea/p13n/cache/internal/system/SystemCacheManager.flushClusterKeys(SystemCacheManager.java:193)
    at com/bea/p13n/cache/CacheManager.flushKeys(CacheManager.java:117)

    at com/bea/p13n/cache/internal/system/SystemCacheManager.doFlushLocal(SystemCacheManager.java:135)
    at com/bea/p13n/cache/internal/system/SystemCacheManager.flushLocalKeys(SystemCacheManager.java:84)
    at com/bea/p13n/cache/internal/system/CacheClusterMessage.execute(CacheClusterMessage.java:80)
    at weblogic/cluster/MulticastReceiver$1.run(MulticastReceiver.java:112)
    at weblogic/work/ExecuteThread.execute(ExecuteThread.java:201)
    at weblogic/work/ExecuteThread.run(ExecuteThread.java:173)

The above thread stack indicates that the MulticastReceiver used internally by WLS to handle multicast announcements got a request/announcement that gets handled by the p13n cache. It should trigger a flush of the local keys in the cache; but instead it turns into a flush of the cluster keys, which in turn results in the MulticastSender sending out a cluster broadcast.

This essentially means that for every announcement that lands on a server instance, the instance will resend the announcement to the other members; this then gets repeated by every member across the cluster, leading to an infinite looping of the same message and a never-ending multicast packet storm.

Analyzing the product code against the stack trace, there appeared to be a mismatch in the CacheManager.flushKeys() implementation, as the method's code line numbers didn't match. So it was apparent that some variant form of CacheManager was getting picked up by the application while handling the cluster announcements.

    at java/lang/Thread.sleep(J)V(Native Method)
    at weblogic/cluster/MulticastFragmentSocket.sendThrottled(MulticastFragmentSocket.java:198)
    at weblogic/cluster/MulticastFragmentSocket.send(MulticastFragmentSocket.java:157)

       REPEAT MULTICAST TRAFFIC - CREATE INFINITE FLOOD

    ^-- Holding lock: weblogic/cluster/MulticastFragmentSocket@0x1d4afd4c8[thin lock]
    at weblogic/cluster/FragmentSocketWrapper.send(FragmentSocketWrapper.java:91)
    at weblogic/cluster/MulticastSender.fragmentAndSend(MulticastSender.java:395) 
    at weblogic/cluster/MulticastSender.send(MulticastSender.java:178)

    ^-- Holding lock: weblogic/cluster/MulticastSender@0x1c7ce3628[thin lock]

       LOGIC HIJACKED
    at com/bea/p13n/cache/internal/system/SystemCacheManager.doFlushCluster(SystemCacheManager.java:222)
    at com/bea/p13n/cache/internal/system/SystemCacheManager.flushClusterKeys(SystemCacheManager.java:193)
    at com/bea/p13n/cache/CacheManager.flushKeys(CacheManager.java:117)

       LOCAL FLUSH ONLY
    at com/bea/p13n/cache/internal/system/SystemCacheManager.doFlushLocal(SystemCacheManager.java:135)
    at com/bea/p13n/cache/internal/system/SystemCacheManager.flushLocalKeys(SystemCacheManager.java:84)
    at com/bea/p13n/cache/internal/system/CacheClusterMessage.execute(CacheClusterMessage.java:80)

       RECEIVE MULTICAST MESSAGE
    at weblogic/cluster/MulticastReceiver$1.run(MulticastReceiver.java:112)
    at weblogic/work/ExecuteThread.execute(ExecuteThread.java:201)
    at weblogic/work/ExecuteThread.run(ExecuteThread.java:173)



Using the WLS ClassLoader Analysis Tool (CAT), we were able to drill down to where this particular CacheManager class was getting loaded from. It was packaged inside a jar that had been added to the Portal EAR application libraries. Removing that particular jar containing the CacheManager resolved the problem. Later investigation revealed that the jar came from an older version of the Portal product and should not have been used with the latest version of Portal that was being tested on Exalogic. The analysis of a single thread stack trace helped pinpoint the root cause for all the hang conditions in this scenario.
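
Besides CAT, a quick programmatic check using standard JDK APIs can also reveal which jar a suspect class was loaded from. The snippet below is only an illustration (the default class name is simply the suspect class from this scenario); it could be dropped into a small test class or JSP running on the server:

// Illustrative: print where a class was loaded from and by which classloader.
public class WhereIsMyClass {
    public static void main(String[] args) throws Exception {
        String className = args.length > 0 ? args[0]
                : "com.bea.p13n.cache.CacheManager";   // the suspect class in this scenario
        Class<?> clazz = Class.forName(className);
        // Note: getCodeSource() can be null for classes loaded by the bootstrap classloader.
        System.out.println(className + " loaded from: "
                + clazz.getProtectionDomain().getCodeSource().getLocation());
        System.out.println("ClassLoader: " + clazz.getClassLoader());
    }
}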

Limitations of Thread Dumps

Although Thread dumps can throw light on multiple things, there are also some limitations.

  • It is a snapshot of a running system
    • The system should have some representative load or exhibit symptoms when capturing thread dumps
    • The dumps are of little value if neither of the above holds
  • It cannot show details on memory usage
  • It can show what pattern is being executed, but not how many times it was executed previously (unless multiple thread dumps are captured at periodic intervals).
  • It cannot show details of the request that is driving the code execution or which remote service is slow
    • Same set of framework code can get executed for SOA Suite BPEL Processes or OSB Proxies
    • We won't be able to determine which external service is blocking the response for a thread
    • Identifying slow services would require analysis of server logs/proxies/access logs and/or heap dumps, or other forms of monitoring (EM/OSB Console monitoring statistics), and then drilling down from there
Thread Dump takeaways

Even if thread dump does not give a direct solution, it can definitely provide good pointers as to where the problem lies and what are the hotspots.

  • Finalizer doing heavy work (too many finalizable objects to clean up) in each successive thread dump would indicate memory handling issues (and possibly overuse/misuse of finalize – lazy cleanup or recreating additional objects during finalize instead of active deleting/cleaning up)
  • Busy GC Threads would indicate memory thrashing
  • Threads stuck on socket reads would indicate slow reads or backends not responding in a timely fashion or socket closures due to firewall or other bad socket handling
  • Threads stuck in socket connect indicate that the backend service itself is unavailable or not listening at that address, that the connection details are wrong, or that connections are being recreated every time.
  • Threads stuck in database sql reads/executes can indicate db response issue or sql tuning issue
  • Threads blocked in opening files, jars or zips, or checking on file attributes, can indicate a problem in reading the application bits or files on remote storage or a network mount point, which will slow down server startup/deployment. One customer problem was related to a server instance accessing the WLS install and deployed application bits from a remote site that was about a hundred miles away.
  • Threads blocked on locks for synchronized code imply bottlenecks
    • Resolve the bottleneck by increasing the resources under contention
    • Avoid the synchronized calls entirely whenever possible (e.g. logging might be excessively synchronized; reduce logging levels or turn off logging)
    • Optimize the call execution to reduce lock time
  • Some of the issues might then require further investigation into environment (process, cpu, memory, io, network etc) or services/partners/actors/clients to resolve the issues
  • For cluster related or distributed system issues, thread dumps should be captured and analyzed from all related parties (cluster/services). 

Summary

Thread dump analysis is a really valuable and key weapon in every Java developer and architect's arsenal when it comes to understanding, identifying and resolving performance or other issues. I hope this series (along with the new enhanced TDA tool) makes it easier for people to troubleshoot, tune performance and get a better handle on complex Java/JEE systems.











Wednesday, February 8, 2012

Exalogic and Multicast

Exalogic is the complete Engineered System from Oracle, delivering hardware and software in one solution. The software is primarily WebLogic (or Coherence, Tuxedo or other Oracle Middleware products) with Linux or Solaris as the operating system. Customers will invariably cluster the WebLogic Server instances using multicast or unicast. There is one gotcha when it comes to using multicast on Exalogic. This posting delves into a bit of detail on how multicast gets used inside WLS and how to resolve the problem.

WLS & Multicast

Multicast is essentially a broadcast option for network packets where multiple recipients can all listen to the same broadcast over a designated IP address and port. The IP range for multicast is 224.0.0.1 to 239.255.255.255. It's a pub-sub model for IP packets and is excellent for communicating to a broad membership. For more details, check Wikipedia under Multicast.
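
For readers new to multicast, the pub-sub model can be illustrated with a tiny, self-contained Java example using the standard MulticastSocket API. The address and port below are arbitrary examples and have nothing to do with WLS internals:

// Illustrative only: join a multicast group, publish one message, and receive it back.
import java.net.DatagramPacket;
import java.net.InetAddress;
import java.net.MulticastSocket;

public class MulticastDemo {
    public static void main(String[] args) throws Exception {
        InetAddress group = InetAddress.getByName("229.111.112.12"); // must be in 224.0.0.0 - 239.255.255.255
        int port = 7001;

        try (MulticastSocket socket = new MulticastSocket(port)) {
            socket.joinGroup(group);      // subscribe to the broadcast (pub-sub membership)
            socket.setTimeToLive(1);      // TTL controls how many router hops the packet survives

            byte[] msg = "hello cluster".getBytes();
            socket.send(new DatagramPacket(msg, msg.length, group, port)); // publish

            byte[] buf = new byte[256];
            DatagramPacket received = new DatagramPacket(buf, buf.length);
            socket.receive(received);     // every member listening on group:port sees the packet
            System.out.println("Received: " + new String(received.getData(), 0, received.getLength()));
        }
    }
}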

WebLogic clustering uses multicast as one option (the other is unicast) to maintain membership among cluster members. A specific multicast address and port are designated for a given cluster as the cluster listen address, and all members within the cluster send and receive broadcasts on that address/port combination. Through periodic broadcasts, cluster members show their liveliness and retain their membership, while failure to send those broadcasts leads to removal (based on predefined intervals) from the rest of the cluster. The unicast option for cluster membership in WebLogic Server uses point-to-point communication (between members and group leaders, and amongst the group leaders themselves) to maintain membership information. For more details on WLS clustering, please refer to http://docs.oracle.com/cd/E11035_01/wls100/cluster/features.html

Routers by default are configured not to propagate multicast traffic; multicast can also contribute to a chatty network. Network admins avoid multicast for these reasons, and so most WLS users might prefer to opt for unicast instead of multicast for clustering.

Sometimes a customer might report a server instance not being able to join the cluster via multicast even if the cluster multicast address is valid. This can be verified by checking the cluster membership in the WLS Admin Console -> Cluster Monitoring page.



If the cluster is healthy, all the members should be part of the cluster, the Drop-out Frequency should be "Never", and the number of fragments sent and received should be close to equal (some members might join later or be up for longer durations than others, which might result in some differences in the fragment counts). If the cluster monitoring data is to the contrary, it implies the cluster membership is not healthy.

Multicast Troubleshooting on WLS

Most often, the problem might be due to the server instances not being in the same subnet or a router not forwarding the packets. The Multicast TTL setting controls how far a multicast packet can be propagated; it gets decremented on every hop across a router. Ensure the Multicast TTL is set to (number of hops between members + 1) in the cluster configuration.



The following picture shows the Multicast TTL configuration within the Cluster General Configuration -> Messaging page of the WLS Admin Console.

 

So, we have the TTL configured correctly and routers configured to allow multicast. But the servers are still not part of the cluster. What could be wrong?

On a multi-homed machine that carries multiple network interface cards (NICs), a specific interface might be designated the default interface, and all routing goes through it unless specific routing instructions (called routes) are added to do otherwise. If a server instance listens on a network interface that is different from the one over which the multicast packets are getting sent, there can be a disconnect, leading to cluster membership problems. How do we identify whether multicast is getting sent and received on the correct interface?
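
Before reaching for the WLS utility described next, a quick check with standard JDK APIs can list the interfaces and addresses available on a multi-homed machine (illustrative only):

// Illustrative: list the NICs, whether they are up, whether they support multicast,
// and the addresses bound to each - useful to confirm which interface to bind to.
import java.net.InetAddress;
import java.net.NetworkInterface;
import java.util.Collections;

public class ListInterfaces {
    public static void main(String[] args) throws Exception {
        for (NetworkInterface nic : Collections.list(NetworkInterface.getNetworkInterfaces())) {
            System.out.println(nic.getName() + "  up=" + nic.isUp()
                    + "  supportsMulticast=" + nic.supportsMulticast());
            for (InetAddress addr : Collections.list(nic.getInetAddresses())) {
                System.out.println("    " + addr.getHostAddress());
            }
        }
    }
}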

WLS provides a utility class called "utils.MulticastTest" (packaged within weblogic.jar) that can be used to send and receive test packets over a designated multicast address and port. Running this on two different machines using the same address and port will help confirm whether the parties are able to see each other. It also allows specifying a network interface as the designated channel for multicast instead of going with the default interface. Note: do not run this tool on the same multicast address/port combination as running WLS server instances.

Node1 starts sending broadcasts to a specific multicast:port combination over its InterfaceX


Node1: java -cp weblogic.jar utils.MulticastTest -N foo -A 229.111.112.12 -I 10.60.3.9
Sample output:
Using interface at 10.60.3.9 
Using multicast address 229.111.112.12:7001 
Will send messages under the name foo every 2 seconds 
Will print warning every 600 seconds if no messages are received
      I (foo) sent message num 1                 
      I (foo) sent message num 2 
   Received message 2 from foo            ---> This indicates multicast is working within the node 
                                               It can listen to itself


Node2 starts sending and listening to broadcasts at the same multicast:port combination as Node1, but over its InterfaceY



Node2:  java -cp weblogic.jar utils.MulticastTest -N bar -A 229.111.112.12 -I 10.60.3.19
Sample output:
Using interface at 10.60.3.19 
Using multicast address 229.111.112.12:7001 
Will send messages under the name bar every 2 seconds
Will print warning every 600 seconds if no messages are received
      I (bar) sent message num 1                 
      I (bar) sent message num 2 
   Received message 2 from bar    ---> This indicates basic multicast is working within the node

   Received message 29 from foo   ---> This indicates multicast is working as it received transmissions from Node1



The interfaces InterfaceX and InterfaceY should be in the same subnet or should be able to see each other via common network routes.

If the MulticastTest run succeeds, then the configuration is good and should be applied to the WLS cluster. The interface to be used should be specified as the Interface Address in the cluster configuration for each of the managed servers belonging to the cluster, and the managed servers need to be restarted.



 
These steps should fix WebLogic clustering issues on most hardware. But is there something special about Exalogic?

Multicast on Exalogic

Exalogic provides multiple network interfaces even in the default factory settings. There is the 10Gb Ethernet network interface (designated bond1 or EoIB) for talking to the outside world via external routers, a 1Gb Ethernet management network interface (Eth0 or Mgmt) for administration/management of the Exalogic hardware itself, and the Infiniband internal or private network interface (designated bond0 or IPoIB) for very fast (40Gb) communication within the Infiniband fabric. Refer to http://docs.oracle.com/cd/E18476_01/doc.220/e18478/intro.htm for Exalogic, and particularly http://docs.oracle.com/cd/E18476_01/doc.220/e18478/network.htm for more details on the Exalogic network interfaces. These are in addition to any new interfaces created using VLANs or partitions.

Exalogic is pre-configured to allow multicast on the Infiniband network interface. While running on Exalogic, if we go with the multicast option (over unicast) for cluster messaging, we want the WLS clustered instances to be running and communicating directly over Infiniband instead of switching to EoIB or other network interfaces.

I was involved in an Exalogic POC where we had to test the performance of a WLS cluster on Exalogic. The WLS instances were configured to listen on the Infiniband internal network interface and use multicast for clustering. When the servers came up, they were not able to see each other or join the cluster.

I decided to run the MulticastTest utility using the Infiniband private network interface for multicast communication. It failed to receive any multicast traffic. But if I didn't specify the interface while running the test, I was able to receive the multicast packets, albeit with a considerable time lag.

Debugging this with an Exalogic network engineer, we were able to decipher the reasons for the failure and the strange behavior. Exalogic nodes are all configured, in factory settings, to route all traffic over the 1Gb Ethernet management network by tagging it as the default gateway. As the Infiniband interfaces get added, network routes are added automatically to send packets destined for Infiniband-related IPs over that interface.

When we tried to send and receive the multicast packets over the private Infiniband network, although we had specified the Infiniband interface, the multicast traffic went over the Ethernet management network interface: there was no route defined for multicast, so it simply went with the default gateway, which was the Ethernet management interface. Once we added an explicit route to send multicast over the private bond0/Infiniband network, multicast broadcasts started working and the WLS server instances joined the cluster.

The route command to add multicast route is shown below:


route add -net 224.0.0.0 netmask 240.0.0.0 dev bond0 

The command denotes: add a network route for all traffic in the 224.0.0.0 range (multicast packets) over the bond0 or Infiniband private network. Use netstat -rn to check the network routes after the change.

> netstat -rn

Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
192.168.10.0    0.0.0.0         255.255.255.0   U         0 0          0 bond0
10.204.80.0     0.0.0.0         255.255.254.0   U         0 0          0 eth0
224.0.0.0       0.0.0.0         240.0.0.0       U         0 0          0 bond0
0.0.0.0         10.204.80.1     0.0.0.0         UG        0 0          0 eth0




To make the route changes persistent, create a file /etc/sysconfig/network-scripts/route-bond0 on the Exalogic nodes with the following content:



224.0.0.0/4 dev bond0 


Conclusion

This article should give readers a basic overview of multicast usage within WLS clustering, guidance on identifying and resolving multicast-related issues, and some tips on networking and multicast in general on the Exalogic platform.