Friday, June 6, 2008

Steaming Java

By Roderick Flores

When Rich asked us to walk through a software development process, I immediately
thought back to a conversation I had with my friend Leif Wickland about building high-performance Java applications, so I emailed him asking for his best practices. We have both produced code that is as fast as, if not faster than, C compiled with optimization (in my case, using a 64-bit JRE on an x86_64 architecture with multiple cores).



That is not to say that the equivalent C code, given time spent optimizing it, could not be made to go faster. Rather, the main point is that Java is a viable HPC language. On a related note, Brian Goetz of Sun has a
very interesting discussion on IBM's DeveloperWorks, Urban performance legends, revisited, on how garbage collection allows faster raw allocation performance.



However, I digress… Here is a summary of what we both came up with (in no
particular order):


           
  1. It is vitally important to "measure, measure, measure" everything you do. We can offer any number of helpful hints, but the likelihood that all of them should be applied is extremely low.
  2. It is equally important to optimize only the areas of the program that are actual bottlenecks; working on anything else is a waste of development time for no real gain.
  3. One of the simplest and most overlooked things that can help your application is to overtly mark read-only method parameters with the final modifier. Not only can this help the compiler with optimization, it is also a good way of communicating your intentions to your teammates. One thing to be aware of is that not everything declared final behaves as expected (see Is that your final answer? for more detail).
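As a minimal sketch of the final-parameter style (the class and method names here are mine, not from the post):

```java
// Illustrative sketch: read-only parameters marked final.
public class Distance {
    // Neither parameter can be reassigned inside the method, which
    // documents the read-only intent to teammates (and the compiler).
    public static double squaredDistance(final double x, final double y) {
        // x = 0.0;  // would not compile: x is final
        return x * x + y * y;
    }

    public static void main(String[] args) {
        System.out.println(squaredDistance(3.0, 4.0)); // prints 25.0
    }
}
```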
  4. If you have state shared between threads, make whatever you can final so that the VM takes no steps to ensure consistency. This is not something we would have expected to make a difference, but it seems to help.
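A sketch of that shared-state idea, using a hypothetical configuration object published to a worker thread:

```java
// Illustrative: an object whose fields are all final can be shared between
// threads without extra synchronization once it is safely published,
// because the Java memory model guarantees visibility of final fields
// after construction completes.
public class SimulationConfig {
    private final int gridSize;
    private final double tolerance;

    public SimulationConfig(final int gridSize, final double tolerance) {
        this.gridSize = gridSize;
        this.tolerance = tolerance;
    }

    public int getGridSize() { return gridSize; }
    public double getTolerance() { return tolerance; }

    public static void main(String[] args) throws InterruptedException {
        final SimulationConfig config = new SimulationConfig(1024, 1.0e-6);
        Thread worker = new Thread(new Runnable() {
            public void run() {
                // Reads need no locking: the fields are final.
                System.out.println(config.getGridSize()); // prints 1024
            }
        });
        worker.start();
        worker.join();
    }
}
```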
  5. An equally ignored practice is using the finally clause. It is very important to clean up after the code in a try block; otherwise you could leave open streams, SQL statements, or other objects lying around taking up space.
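A minimal sketch of the finally idiom (the reader and method are illustrative):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;

// Illustrative: the finally clause closes the reader whether or not the
// try block throws, so no stream is left open.
public class CleanupExample {
    public static String readFirstLine(final String text) throws IOException {
        final BufferedReader reader = new BufferedReader(new StringReader(text));
        try {
            return reader.readLine();
        } finally {
            reader.close(); // runs on both the normal and exceptional paths
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(readFirstLine("first\nsecond")); // prints first
    }
}
```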
           
  6. Create your data structures and declare your variables early. A core goal is to avoid allocating short-lived variables. While it is true that the garbage collector may reserve memory for variables that are declared often, why make it guess your intentions? For example, if a loop is called repeatedly, there is no need to write for (int i = 0; … when you could have declared i earlier. Of course, you have to be careful not to reset counters from inside loops.
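The hoisted-declaration pattern described above might look like this (measure before adopting it; on modern JITs it is often a wash):

```java
// Illustrative: the counter i is declared once, before the repeatedly
// entered loop, rather than redeclared in each for statement.
public class HoistedCounter {
    public static long sumRepeatedly(final int passes, final int count) {
        long total = 0;
        int i; // declared early, reused on every pass
        for (int pass = 0; pass < passes; pass++) {
            for (i = 0; i < count; i++) { // i is reset here and nowhere else
                total += i;
            }
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(sumRepeatedly(2, 5)); // prints 20
    }
}
```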
           
  7. Use static final for values that are constants. This may seem obvious, but not everybody does it.
  8. For loops embedded within other loops:
     • Replace your outer loop with a fixed pool of threads. In the next release of Java, this will be even easier using the fork-join framework. This has become increasingly important as processors gain more cores.
     • Make sure that your innermost loop is the longest, even if that doesn't map directly onto the business goals. You shouldn't force the program to set up a new loop too often, as that wastes cycles.
     • Unroll your inner loops. This can save an enormous amount of time even if it isn't pretty; the quick test I just ran was 300% faster. If you haven't unrolled a loop before, it is pretty simple:
             

         unrollRemainder = count % LOOP_UNROLL_COUNT;

         for( n = 0; n < unrollRemainder; n++ ) {
             // do some stuff here.
         }

         for( n = unrollRemainder; n < count; n += LOOP_UNROLL_COUNT ) {
             // do stuff for n here
             // do stuff for n+1 here
             // do stuff for n+2 here
             …
             // do stuff for n + LOOP_UNROLL_COUNT - 1 here
         }

     Notice that both n and unrollRemainder were declared earlier, as recommended previously.
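The fixed-pool suggestion in the first sub-bullet can be sketched with java.util.concurrent (the class name and the toy workload, summing r * c, are mine; the real calculation would go inside the task):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

// Illustrative: the outer loop becomes tasks on a fixed pool sized to the
// machine's cores; the inner loop stays serial inside each task.
public class OuterLoopPool {
    public static long process(final int rows, final int cols) throws InterruptedException {
        final AtomicLong total = new AtomicLong();
        final ExecutorService pool =
                Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());
        for (int row = 0; row < rows; row++) {
            final int r = row; // final copy visible to the task
            pool.execute(new Runnable() {
                public void run() {
                    long local = 0;
                    for (int c = 0; c < cols; c++) {
                        local += (long) r * c; // stand-in for the real work
                    }
                    total.addAndGet(local);
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1L, TimeUnit.MINUTES);
        return total.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(process(4, 3)); // prints 18
    }
}
```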
  9. Preload all of your input data and then operate on it later. There is absolutely no reason to load data of any kind inside your main calculation code. If the data doesn't fit or belong on one machine, use a Map-Reduce approach to distribute it across the Grid.
  10. Use the factory pattern to create objects.
     • Data structures can be created ahead of time, with only the necessary pieces passed to the new object.
     • Any preloaded data can likewise be segmented so that only the necessary parts are passed to the new object.
     • You can avoid allocating short-lived variables by using constructors with the final keyword on their parameters.
     • The factory can perform some heuristic calculations to decide whether a particular object should even be created for future processing.
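A sketch combining those points; every name here is hypothetical:

```java
// Illustrative factory: it owns preloaded data, hands each new object only
// the slice it needs, and can decline to build objects that fail a cheap
// heuristic.
public class TraceFactory {
    private final float[] allSamples; // loaded once, up front

    public TraceFactory(final float[] allSamples) {
        this.allSamples = allSamples;
    }

    /** Returns a Trace over [from, to), or null if there is no work to do. */
    public Trace create(final int from, final int to) {
        if (to <= from) {
            return null; // heuristic: skip empty slices entirely
        }
        final float[] slice = new float[to - from];
        System.arraycopy(allSamples, from, slice, 0, slice.length);
        return new Trace(slice);
    }

    public static class Trace {
        private final float[] samples;
        Trace(final float[] samples) { this.samples = samples; }
        public int length() { return samples.length; }
    }

    public static void main(String[] args) {
        final TraceFactory factory = new TraceFactory(new float[] {1f, 2f, 3f, 4f});
        System.out.println(factory.create(1, 3).length()); // prints 2
    }
}
```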
  11. When doing calculations on a large number of floating-point values, use a byte array to store the data and a java.nio.ByteBuffer to convert it to floats. This should primarily be used for read-only (input) data; if you are writing floating-point values, do so with caution, as it may take more time than using a float array. One major advantage of this approach in Java is that you can switch between big- and little-endian data rather easily.
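A minimal sketch of that conversion (class and method names are mine):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Illustrative: reading floats out of a raw byte array through
// java.nio.ByteBuffer. Switching endianness is a one-argument change.
public class FloatReader {
    public static float firstFloat(final byte[] data, final ByteOrder order) {
        return ByteBuffer.wrap(data).order(order).asFloatBuffer().get(0);
    }

    public static void main(String[] args) {
        // 1.0f in big-endian IEEE 754 is 0x3F800000.
        final byte[] bigEndian = {(byte) 0x3F, (byte) 0x80, 0, 0};
        System.out.println(firstFloat(bigEndian, ByteOrder.BIG_ENDIAN)); // prints 1.0
    }
}
```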
  12. Pass fewer parameters to methods; this means less call overhead. If a value can be reached as a static field instead, that is one fewer parameter to pass.
  13. Use static methods where possible. For example, a fahrenheitToCelsius(float fahrenheit) method could easily be made static. The main advantage is that the compiler will likely inline the call.
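The conversion mentioned above, written as a static method (the body is a plausible implementation; the post gives only the signature):

```java
public class TemperatureUtil {
    // Static: no instance state is needed, and the JIT can easily inline it.
    public static float fahrenheitToCelsius(final float fahrenheit) {
        return (fahrenheit - 32f) * 5f / 9f;
    }

    public static void main(String[] args) {
        System.out.println(fahrenheitToCelsius(212f)); // prints 100.0
    }
}
```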
  14. There is some debate about whether you should make frequently called methods final. There is a strong argument against it, because the enhancement is small or nonexistent (see Urban Performance Legends or, once again, Is that your final answer?). However, my experience is that a small enhancement to a calculation that runs thousands of times can make a significant difference; both Leif and I have seen measurable differences here. The key is to benchmark your code to be certain.
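A rough timing sketch in the spirit of "measure, measure, measure": warm up first so the JIT compiles the hot path, then time many calls. The workload is a stand-in of my own choosing, and a real benchmark framework is preferable to hand-rolled timing.

```java
public class MiniBench {
    static double work(final double x) {
        return Math.sqrt(x) * Math.sin(x); // placeholder hot method
    }

    public static void main(String[] args) {
        double sink = 0;
        for (int i = 0; i < 100000; i++) { // warm-up pass for the JIT
            sink += work(i);
        }
        final long start = System.nanoTime();
        final int calls = 1000000;
        for (int i = 0; i < calls; i++) {
            sink += work(i);
        }
        final long elapsed = System.nanoTime() - start;
        System.out.println("ns per call: " + (double) elapsed / calls);
        if (sink == 12345.6789) System.out.println(sink); // keep sink live
    }
}
```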

2 comments:

  1. Benchmarking these sorts of things (such as manually unrolling loops, or whether final methods are faster) is extremely easy to do incorrectly. My advice is to avoid microbenchmarking entirely, leave that to the experts, and focus on macro-benchmarks -- measuring the performance of your entire app under realistic load.

    ReplyDelete
  2. I agree that you need to do "macro"-benchmarks so that you understand the true cost of the calculation you are performing. Once you know this, you will need to weigh the potential savings against the user need. If tuning can reduce your execution time significantly but the business sees no adverse impact from the current situation, nothing should be done. However, there are many situations where users prefer, if not demand, fast turnaround times in their SLAs. This is particularly important for the large jobs we typically see in the Grid world.
    For example, imagine that your business line routinely runs jobs which repeatedly perform a calculation on the order of 100,000 times. Typically these tasks are run on 20 nodes, and we know from our overall timings that each individual calculation takes an average of 2.8s. Thus, each job will take approximately four hours to complete. If the users are waiting on the results to start their interpretation, this is lost productivity. Consequently, if we are able to shorten that time to 1.9s, then we can reduce the user wait to approximately 2.5 hours.
    Clearly, there are a number of advantages to tuning such a time-consuming algorithm. As I mentioned previously, we have provided at least 1 hour of extra productivity to each business user per day. That said, it is also possible that we reduced the peak load requirement of the grid and can therefore save money on computer operational expenses. Finally, we have increased the size of the job that a user can run and get back in a reasonable amount of time.
    It is important to note that I didn't pull this example out of thin air; these timings come from an implementation on which I worked where the lead-scientist estimated that the time saved was the equivalent to one million dollars per year.
    The key thing to consider in these situations is how much productivity can be gained by performance tuning your Grid-enabled algorithms.

    ReplyDelete