How to Monitor Non-Heap Memory Using FusionReactor


After disabling segmentation, there are still several ways you can reduce the JIT’s memory consumption. These methods come at a small performance cost, however, so you should also monitor your application’s performance while applying them.

There are three ways to reduce how much memory the JIT uses:

Reducing the Number of Compilations

You can do this by reducing the rate at which compilation is done. Two JVM options affect this rate: -XX:CompileThreshold and -XX:OnStackReplacePercentage.

The CompileThreshold option sets the number of method invocations before a method is compiled. The OnStackReplacePercentage option sets, as a percentage of CompileThreshold, the number of backward branches (loop iterations) a method takes before it is compiled via on-stack replacement.

You need to increase the value of these options in order to reduce compilations. An ideal starting point is tripling the default values for your client JVM. For the server JVM, it may not be necessary to adjust CompileThreshold, since its default is fairly high.
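As a sketch of how you might inspect these defaults from inside the JVM (rather than from the command line), the HotSpot-specific HotSpotDiagnosticMXBean can read any -XX flag; the class name below is made up for illustration:

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import com.sun.management.VMOption;
import java.lang.management.ManagementFactory;

public class CompileThresholdDefaults {
    public static void main(String[] args) {
        // HotSpot-specific bean for reading -XX flag values at runtime.
        HotSpotDiagnosticMXBean diag =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);

        for (String flag : new String[] {"CompileThreshold", "OnStackReplacePercentage"}) {
            VMOption option = diag.getVMOption(flag);
            long current = Long.parseLong(option.getValue());
            // Tripling the default is the suggested starting point for a client JVM.
            System.out.println(flag + ": current=" + current + ", tripled=" + (current * 3));
        }
    }
}
```

The tripled figures printed here are only the starting values you would then pass on the command line, e.g. -XX:CompileThreshold=&lt;value&gt;, while watching performance.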

You can check the default values by using the command below:

java -XX:+PrintFlagsFinal -version

You can now gradually adjust the two options above while observing the code cache space from your FusionReactor dashboard (Resources > Memory Spaces > Code Cache). When you hover over the graph, you can see the Maximum, Allocated, and Used memory space.
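If you also want to read the same Used/Maximum figures programmatically, the standard MemoryPoolMXBean API exposes them. This is a minimal sketch; note that with segmentation disabled there is a single "CodeCache" pool, while JDK 9+ with segmentation reports several "CodeHeap ..." pools, so the filter below matches any pool whose name contains "Code":

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;

public class CodeCacheMonitor {
    public static void main(String[] args) {
        // Print used/max for every code-cache-related memory pool the JVM exposes.
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if (pool.getName().contains("Code")) {
                MemoryUsage usage = pool.getUsage();
                System.out.printf("%s: used=%d KB, max=%d KB%n",
                        pool.getName(), usage.getUsed() / 1024, usage.getMax() / 1024);
            }
        }
    }
}
```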

Reducing the Code Cache Size

Sometimes an application performs many compilations at startup but very few later on as the program executes. In such a case, you may find it useful to constrain the default code cache size (i.e. the maximum given to the JIT compiler).

The -XX:ReservedCodeCacheSize option is used to set this. The idea is to reduce this value (in MB) so that it forces the compiler to flush out methods that are no longer in use.

You tune -XX:ReservedCodeCacheSize by aiming to keep the used code cache close to the maximum code cache.

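One way to check how close you are is to compare the total used code cache against the reserved size, again using the HotSpot-specific diagnostic bean; this is a sketch (class name invented for illustration):

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class CodeCacheUtilization {
    public static void main(String[] args) {
        // Current value of -XX:ReservedCodeCacheSize, in bytes.
        long reserved = Long.parseLong(
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class)
                        .getVMOption("ReservedCodeCacheSize").getValue());

        // Sum usage across all code-cache-related pools (one pool when
        // segmentation is disabled, several "CodeHeap" pools otherwise).
        long used = ManagementFactory.getMemoryPoolMXBeans().stream()
                .filter(p -> p.getName().contains("Code"))
                .mapToLong(p -> p.getUsage().getUsed())
                .sum();

        // Utilization far below 100% suggests ReservedCodeCacheSize can be
        // lowered; near 100% means the code cache is under pressure.
        System.out.printf("Code cache: %d KB used of %d KB reserved (%.1f%%)%n",
                used / 1024, reserved / 1024, 100.0 * used / reserved);
    }
}
```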
Reducing the Size of Compiled Methods

This involves reducing the inlining that the compiler does when compiling methods. Inlining means copying the body of a called method into the compiled method that calls it. By default, the JVM uses heuristics to decide what to inline, with the general goal of optimizing performance.

You can trade some of this performance benefit for a smaller code cache. Below are some of the options that control method inlining:

-XX:InlineSmallCode
-XX:MaxInlineLevel
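Before changing either flag, it can help to see what your JVM is currently using. A minimal sketch, assuming a HotSpot JVM (the diagnostic bean and both flags are HotSpot-specific):

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import com.sun.management.VMOption;
import java.lang.management.ManagementFactory;

public class InliningDefaults {
    public static void main(String[] args) {
        HotSpotDiagnosticMXBean diag =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        // InlineSmallCode: max compiled size (bytes) a method may have and
        // still be inlined; MaxInlineLevel: max depth of nested inlining.
        for (String flag : new String[] {"InlineSmallCode", "MaxInlineLevel"}) {
            VMOption opt = diag.getVMOption(flag);
            System.out.println(flag + " = " + opt.getValue()
                    + " (origin: " + opt.getOrigin() + ")");
        }
    }
}
```

Lowering either value shrinks compiled methods (and thus code cache use) at some cost in peak performance, so re-check your FusionReactor response-time metrics after each change.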

You can read more about the available inlining options from this Oracle page.


