Monday, February 16, 2009

Memory leaks are easy to find

Last time I talked about the dominator tree and how it can be used to find the biggest objects in your heap easily.

So what exactly is the dominator tree?

Dominators

An object A dominates an object B if every path from the GC roots to object B passes through object A.

Remember that the Garbage Collector reclaims all objects that are no longer reachable. If A dominates B and A were removed from memory, there would be no path left that leads to B. B would therefore be unreachable and would be reclaimed by the Garbage Collector.
One could also say that A is the single object that is responsible for B still being there!
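To make the definition concrete, here is a minimal sketch of the naive dominance check in Java (the Node class and method names are mine for illustration, not from any tool): A dominates B exactly when B is reachable from the root, but no longer reachable once we traverse the graph pretending A doesn't exist.

import java.util.*;

// A tiny object graph: each node knows its outgoing references.
class Node {
    final String name;
    final List<Node> refs = new ArrayList<>();
    Node(String name) { this.name = name; }
}

public class DominatorCheck {
    // Everything reachable from 'root', skipping 'excluded' (may be null).
    static Set<Node> reachable(Node root, Node excluded) {
        Set<Node> seen = new HashSet<>();
        Deque<Node> stack = new ArrayDeque<>();
        stack.push(root);
        while (!stack.isEmpty()) {
            Node n = stack.pop();
            if (n == excluded || !seen.add(n)) continue;
            for (Node ref : n.refs) stack.push(ref);
        }
        return seen;
    }

    // A dominates B iff B is reachable at all, but not when A is ignored.
    static boolean dominates(Node root, Node a, Node b) {
        return reachable(root, null).contains(b) && !reachable(root, a).contains(b);
    }

    public static void main(String[] args) {
        Node root = new Node("root"), a = new Node("A"), b = new Node("B");
        root.refs.add(a);
        a.refs.add(b); // the only path to B runs through A
        System.out.println(dominates(root, a, b)); // prints: true
    }
}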

The Dominator Tree

Using the "dominates" relationship we can build a dominator tree out of the graph of objects in memory. At each node of this tree we store the amount of memory that would be freed (= the retained size).
At the top of the tree we have a "virtual" root object, which we also use to represent objects that don't have a "real" single dominator.
Here's an example of an object graph (on the left) and the corresponding dominator tree (on the right):

[Figure: an object graph and its corresponding dominator tree]

  1. Note that A, B and C are dominated by the "virtual" root object.
  2. Note that the dominator relationship is transitive: C dominates E, which dominates G; therefore C also dominates G.

Because "dominates" is transitive, the retained size of a parent object within the dominator tree is always greater than the sum of the retained sizes of its children (the difference being the parent's own shallow size).

To get the biggest objects we just need to sort the second level of the dominator tree (the first level is the "virtual" root object) by retained size.
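A minimal sketch of that query (the DomNode type and its fields are illustrative, not the Eclipse Memory Analyzer's API), assuming the retained sizes have already been stored at each node:

import java.util.*;

// One node of the precomputed dominator tree (illustrative).
class DomNode {
    String label;
    long retainedSize;                       // bytes freed if this object disappeared
    final List<DomNode> children = new ArrayList<>();
}

public class BiggestObjects {
    // The biggest objects are the children of the virtual root,
    // sorted by retained size in descending order.
    static List<DomNode> biggest(DomNode virtualRoot, int limit) {
        List<DomNode> top = new ArrayList<>(virtualRoot.children);
        top.sort(Comparator.comparingLong((DomNode n) -> n.retainedSize).reversed());
        return top.subList(0, Math.min(limit, top.size()));
    }
}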

Now if you are looking to find a memory leak, and you have no a priori knowledge that could help you, the typical approach is to run a test that reproduces the leak, and then try to somehow figure out what is leaking.

Do we really need object allocation tracing?
In my experience people often believe that finding leaks requires recording object creations, sometimes also called "object allocation tracing", because you want to know where in the code objects are allocated but never released.
Tess Ferrandez, ASP.NET Escalation Engineer at Microsoft, has an example of how this method for finding leaks can be applied to .NET applications. Dear Microsoft, I encourage you to look at the Eclipse Memory Analyzer ;)


All you need is the dominator tree

With the dominator tree, finding this kind of leak is much easier, and you don't need the high overhead of allocation tracing, which is typically not acceptable in a production environment. Allocation tracing has its uses, but in general IMHO it is overrated.

It's almost guaranteed that a significant memory leak will show up at the very top of the dominator tree!
The Eclipse Memory Analyzer not only supports the dominator tree but can even find memory leaks automatically, based on some heuristics.

The dominator tree can also be used to find high memory usage in certain areas, which is a much harder problem. More on this in the next post.





Tuesday, February 03, 2009

How to really measure the memory usage of your objects

Measuring and analyzing memory consumption in Java and other OOP languages is not an easy task, because all the objects form a big, highly connected graph. Only counting the flat (also called "shallow") size of the objects is not always helpful:


[Figure: a typical memory histogram listing classes with their instance counts and shallow sizes]

This figure shows a typical memory histogram. In general this table does not help very much to figure out which objects really cause the high memory usage. Typically String and char[] will show up among the classes with the highest shallow size, simply because they are used so frequently.
There is probably a correlation between the high number of char[] instances and the high number of String instances (indicated by the yellow arrows), but we cannot really know that from the information in the table. We also don't know whether the Order objects could potentially be the root cause for all those objects.
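For illustration, the JDK's own java.lang.instrument API can show just how little the shallow size of a String covers; a sketch, assuming you package this class (the name is mine) as a -javaagent:

import java.lang.instrument.Instrumentation;

// Run via -javaagent; the agent jar's manifest must declare
// Premain-Class: ShallowSizeDemo
public class ShallowSizeDemo {
    public static void premain(String agentArgs, Instrumentation inst) {
        String order = "Order-4711";
        // getObjectSize reports only the object itself (header + fields).
        // For a String that excludes the character array it points to,
        // which is exactly why String and char[] show up as two separate
        // lines in the histogram.
        System.out.println("String shallow size: " + inst.getObjectSize(order));
        System.out.println("char[] shallow size: " + inst.getObjectSize(order.toCharArray()));
    }
}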

So how can we find out who is really responsible for the memory usage?

Let's take a step back and ask ourselves how Garbage Collection works in a language like Java (and also in other "modern" languages ;)).

Garbage Collection


The general idea is that the Garbage Collector will reclaim all objects that cannot be reached from a so-called "GC root object".

GC root objects are
  • Local variables on the stack
  • Parameters of functions on the stack
  • JNI native references
  • All classes loaded by the bootstrap class loader
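As a small illustration (class and field names are made up), here is how a stack local and a class's static field each keep an object alive:

public class GcRootsDemo {
    static byte[] cache;                  // held via a loaded class: a GC root path

    static void work() {
        byte[] buffer = new byte[1024];   // 'buffer' is a local variable on the stack,
                                          // i.e. a GC root while this frame is alive
        cache = buffer;                   // now also reachable through the static field
    }                                     // stack root gone; the static field remains

    public static void main(String[] args) {
        work();
        cache = null;                     // last path from a GC root removed
        System.gc();                      // the array is now eligible for collection
    }
}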
Now let's come back to our original problem and ask ourselves the interesting question:
"How much memory would be freed if we removed all those "Order" objects and afterwards a full GC ran?"
This gives us a measure of how much memory would be freed if those objects were not there.
We call this measure the "retained size". The set of objects that would be freed is called the "retained set".
With the "retained set" we can figure out how many Strings would be freed if those "Order" objects would not be there. We can therefore solve the problem to find the objects "responsible" for the memory usage.

Definition "Retained size/set"


Retained set of X is the set of objects that would be removed by the GC when X is garbage collected.
Retained size of X is the sum of shallow sizes of all objects in the retained set of X, i.e. memory kept alive by X.
Generally speaking, shallow size of an object is its "flat" size in the heap whereas the retained size of the same object is the amount of heap memory that will be freed when the object is garbage collected.
The retained set for a leading set of objects, such as all objects of a particular class or all objects of all classes loaded by a particular class loader or simply a bunch of arbitrary objects, is the set of objects that is released if all objects of that leading set become inaccessible. The retained set includes these objects as well as all other objects only accessible through these objects. The retained size is the total heap size of all objects contained in the retained set.
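A minimal sketch of this definition in Java (the Obj class and method names are illustrative; real tools don't compute it this way, as the next section explains): the retained set of X is everything reachable from the GC roots, minus everything still reachable when the objects in X are ignored.

import java.util.*;

class Obj {
    final String name;
    final long shallowSize;
    final List<Obj> refs = new ArrayList<>();
    Obj(String name, long shallowSize) { this.name = name; this.shallowSize = shallowSize; }
}

public class RetainedSet {
    // Everything reachable from the roots, pretending 'ignored' objects are gone.
    static Set<Obj> reachable(Collection<Obj> roots, Set<Obj> ignored) {
        Set<Obj> seen = new HashSet<>();
        Deque<Obj> stack = new ArrayDeque<>(roots);
        while (!stack.isEmpty()) {
            Obj o = stack.pop();
            if (ignored.contains(o) || !seen.add(o)) continue;
            for (Obj ref : o.refs) stack.push(ref);
        }
        return seen;
    }

    // Retained set of X = reachable(roots) minus reachable(roots, pretending X is gone).
    static Set<Obj> retainedSet(Collection<Obj> roots, Set<Obj> x) {
        Set<Obj> result = reachable(roots, Collections.emptySet());
        result.removeAll(reachable(roots, x));
        return result;
    }

    // Retained size of X = sum of shallow sizes over the retained set of X.
    static long retainedSize(Collection<Obj> roots, Set<Obj> x) {
        return retainedSet(roots, x).stream().mapToLong(o -> o.shallowSize).sum();
    }
}

Simulating a GC run like this for every single object is of course far too slow for real heaps, which is exactly the problem the next section is about.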

How to find the object with the biggest retained size?

The Eclipse Memory Analyzer was built from the beginning with support for computing the retained set/size of objects.
The retained size was pioneered, as far as I know, by the commercial YourKit profiler (great product!). When we started to develop the Eclipse Memory Analyzer (called SAP Memory Analyzer at that point in time), YourKit already had this very nice feature to find the single object with the biggest retained size. We still don't know how YourKit does it, but at that point in time they certainly used a different approach than we do now. The reason is that it took YourKit several minutes to get the first 10 (I think) biggest (in terms of retained size) objects in the heap. We knew that this was still much faster than the naive method, which would have to simulate a Garbage Collector run for each of the objects in the heap. With millions of objects on the heap this would have taken years!

So I went on to research what kind of graph algorithms could help us speed things up. After a lot of Google searches I finally found the dominator tree.

The dominator tree

The dominator tree is a data structure that allows you to answer the question about the biggest objects in almost no time. After further research I found that it could be built in (almost) O(n) time by a complicated algorithm invented by Lengauer and Tarjan. Since the plan of the project's architect was always to store precomputed information on disk, we could compute the dominator tree once and then store it on disk. Therefore opening a heap dump for the first time is a little bit slow (but still often faster than with any other tool), whereas opening it again is blazing fast (just a few seconds).
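Once the tree exists, the retained sizes we store at each node fall out of a single bottom-up pass; here's a sketch with illustrative types (a real implementation would iterate rather than recurse, to cope with millions of nodes):

import java.util.ArrayList;
import java.util.List;

// Illustrative dominator-tree node, not the Memory Analyzer's internal format.
class TreeNode {
    long shallowSize;
    long retainedSize;
    final List<TreeNode> dominated = new ArrayList<>();
}

public class RetainedSizes {
    // Post-order traversal: a node retains its own shallow size plus
    // everything retained by the nodes it dominates. This also shows why
    // a parent's retained size always exceeds the sum of its children's.
    static long compute(TreeNode node) {
        long total = node.shallowSize;
        for (TreeNode child : node.dominated) {
            total += compute(child);
        }
        node.retainedSize = total;
        return total;
    }
}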
Still, implementing the algorithm was a high risk, because at that point in time the relatively new paper "Finding Dominators in Practice" talked about graphs with at most a few thousand nodes, whereas we wanted to run it on a few million nodes on a 32-bit machine.

Still, the team got it done (I was always only in a "consultant" role), and it turned out that the dominator tree not only allowed us to speed up finding the biggest objects, but also to significantly simplify finding the root causes of high memory usage.

More on this in the next post.
Stay tuned!