Performance of Java versus C++
J.P. Lewis and Ulrich Neumann
Computer Graphics and Immersive Technology Lab, University of Southern California
www.idiom.com/~zilla
Jan. 2003; updated 2004
[Also see this FAQ]
This article surveys a number of benchmarks and finds that Java performance on numerical code is comparable to that of C++, with hints that Java's relative performance is continuing to improve. We then describe clear theoretical reasons why these benchmark results should be expected.
Benchmarks
The five composite benchmarks listed below show that modern Java has acceptable performance, being nearly equal to (and in many cases faster than) C/C++ across a number of benchmarks.
Numerical kernels

Benchmarking Java against C and Fortran for Scientific Applications, Mark Bull, Lorna Smith, Lindsay Pottage, Robin Freeman, EPCC, University of Edinburgh (2001). The authors test some real numerical codes (FFT, matrix factorization, SOR, fluid solver, N-body) on several architectures and compilers. On Intel they found that Java performance was very reasonable compared to C (e.g., 20% slower), and that Java was faster than at least one C compiler (the KAI compiler on Linux). The authors conclude, "On Intel Pentium hardware, especially with Linux, the performance gap is small enough to be of little or no concern to programmers."

More numerical methods: SciMark2 scores

R.F. Boisvert, J. Moreira, M. Philippsen, R. Pozo, Java and Numeric Computing, Computing in Science & Engineering, 3(2):18-24, Mar.-Apr. 2001. SciMark includes a number of numerical codes.
On a PIII/500, SciMark2 scores in MFLOPS (higher is better):

    IBM JDK 1.3.0                84.5
    Linux 2.2, gcc 2.9x -O6      87.1
Still more numerical methods

From the book Object-Oriented Implementations of Numerical Methods by Didier Besset (Morgan Kaufmann, 2001):

    Operation                            Units   C      Smalltalk   Java
    Polynomial, 10th degree              msec.   1.1    27.7        9.0
    Neville interpolation (20 points)    msec.   0.9    11.0        0.8
    LUP matrix inversion (100 x 100)     sec.    3.9    22.9        1.0
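To make the first row concrete, here is a minimal sketch of the kind of kernel such a benchmark times: evaluating a 10th-degree polynomial by Horner's rule. The class name and coefficient values are invented for illustration, not taken from Besset's book.

    // Illustrative only: evaluate a degree-10 polynomial with Horner's rule.
    public final class Horner {
        // Coefficients c[0] + c[1]*x + ... + c[10]*x^10 (arbitrary values).
        static final double[] C = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11};

        static double eval(double x) {
            double r = C[C.length - 1];
            for (int i = C.length - 2; i >= 0; i--) {
                r = r * x + C[i];   // one multiply-add per coefficient
            }
            return r;
        }

        public static void main(String[] args) {
            double sum = 0;
            for (int i = 0; i < 1000000; i++) {  // repeat so the timing is measurable
                sum += eval(i * 1e-6);
            }
            System.out.println(sum);  // use the result so the loop isn't dead code
        }
    }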
Microbenchmarks (cache effects considered)

Several years ago these benchmarks showed Java performance at the time to be somewhere in the middle of the range of C compiler performance: faster than the worst C compilers, slower than the best. These are "microbenchmarks", but they do have the advantage that they were run across a number of different problem sizes, and thus the results do not reflect a lucky cache interaction (see more details on this issue in the next section). These benchmarks were updated with a more recent Java (1.4) and gcc (3.2), using full optimization (gcc -O3 -mcpu=pentiumpro -fexpensive-optimizations -fschedule-insns2 ...). This time Java is faster than C on the majority of the tests, by a factor of more than 2 in some cases, suggesting that Java performance is catching up to or even pulling ahead of gcc, at least. These tests were mostly integer (except for an FFT).
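When running such microbenchmarks oneself, the JIT compiler adds a wrinkle: the first passes through a loop may be interpreted or spent compiling, so timings should discard a warmup phase. A minimal sketch of this pattern (the kernel and iteration counts are arbitrary placeholders, not the benchmarks referenced above):

    // Illustrative timing harness: warm up the JIT, then measure.
    public final class Bench {
        // Placeholder kernel; substitute the code under test.
        static long kernel(int n) {
            long acc = 0;
            for (int i = 0; i < n; i++) acc += i ^ (i << 3);
            return acc;
        }

        public static void main(String[] args) {
            long sink = 0;
            for (int i = 0; i < 10; i++) sink += kernel(1000000);   // warmup: let the JIT compile
            long t0 = System.currentTimeMillis();
            for (int i = 0; i < 100; i++) sink += kernel(1000000);  // timed runs
            long t1 = System.currentTimeMillis();
            System.out.println("elapsed ms: " + (t1 - t0) + " (" + sink + ")");
        }
    }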
Microbenchmarks (cache effects not considered)

In January 2004 OSNews.com posted an article, Nine Language Performance Round-up: Benchmarking Math & File I/O. These are simple numeric and file I/O loops, and no doubt suffer from the arbitrary cache interaction factor described below. They were, however, run under several different compilers, which helps. Again Java is competitive with (actually slightly faster than) several C compilers, including Visual C++, in the majority of the benchmarks. (One exceptional benchmark tested trigonometry library calls. Java numerical programmers are aware that these calls became slower in Java 1.4; recent benchmarks suggest this issue was fixed in Java 1.4.2. A small self-check sketch follows below.) Note that these benchmarks are on Intel architecture machines; Java compilers on some other processors are less developed at present.
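Returning to the trigonometry point: a minimal sketch for checking it on one's own JVM (the iteration counts and argument range are arbitrary; on the reportedly affected 1.4.0/1.4.1 JVMs this loop should run noticeably slower than on 1.3 or 1.4.2):

    // Illustrative: time Math.sin over a range of arguments.
    public final class TrigBench {
        public static void main(String[] args) {
            double sum = 0;
            for (int i = 0; i < 1000000; i++) sum += Math.sin(i * 1e-3);   // warmup
            long t0 = System.currentTimeMillis();
            for (int i = 0; i < 10000000; i++) sum += Math.sin(i * 1e-3);  // timed
            long t1 = System.currentTimeMillis();
            System.out.println("elapsed ms: " + (t1 - t0) + " (" + sum + ")");
        }
    }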
And in theory: maybe Java should be faster
Java proponents have stated that Java will soon be faster than C. Why? Several reasons (also see reference [1]):

1) Pointers make optimization hard.

This is one reason why C is generally a bit slower than Fortran. In C, consider the code

    x = y + 2 * (...)
    *p = ...
    arr[j] = ...
    z = x + ...
Because p could be pointing at x, a C compiler cannot keep x in a register and instead has to write it to cache and read it back, unless it can figure out where p is pointing at compile time. And because arrays act like pointers in C/C++, the same is true for assignment to array elements: arr[j] could also modify x.

This pointer problem in C resembles the array bounds checking issue in Java: in both cases, if the compiler can determine the array (or pointer) index at compile time, it can avoid the issue. In the loop below, for example, a Java compiler can trivially avoid testing the lower array bound because the loop counter is only incremented, never decremented. A single test before starting the loop handles the upper bound test if 'len' is not modified inside the loop (and Java has no pointers, so simply looking for an assignment is enough to determine this):

    for (int i = 0; i < len; i++) {
        a[i] = ...;
    }

2) Garbage collection: is it worse... or better?

Most programmers say garbage collection is or should be slow, with no given reason: it's assumed but never discussed. Some computer language researchers say otherwise. Consider what happens when you do a new/malloc: a) the allocator wanders through some lists looking for a slot of the right size, then returns you a pointer; b) this pointer is pointing to some pretty random place. With GC, a) the allocator does not need to look for memory, it knows where it is; b) the memory it returns is adjacent to the last bit of memory you requested. The wandering-around part happens not all the time but only at garbage collection. And then (depending on the GC algorithm) things get moved, of course, as well.

The cost of missing the cache

The big benefit of GC is memory locality. Because newly allocated memory is adjacent to the memory recently used, it is more likely to already be in the cache. How much of an effect is this? One rather dated (1993) example shows that missing the cache can be a big cost: changing an array size in a small C program from 1023 to 1024 resulted in a slowdown of 17 times (not 17%)! This is like switching from C to VB. This particular program stumbled across what was probably the worst possible cache interaction for that particular processor (MIPS); the effect is not that bad in general... but with processor speeds increasing faster than memory, missing the cache is probably an even bigger cost now than it was then.

(It's easy to find other research studies demonstrating this; here's one from Princeton: they found that (garbage-collected) ML programs translated from the SPEC92 benchmarks have lower cache miss rates than the equivalent C and Fortran programs.)

This is theory; what about practice? In a well-known paper [2], several widely used programs (including perl and ghostscript) were adapted to use several different allocators, including a garbage collector masquerading as malloc (with a dummy free()). The garbage collector was as fast as a typical malloc/free; perl was one of several programs that ran faster when converted to use a garbage collector. Another interesting fact is that the cost of malloc/free is significant: both perl and ghostscript spent roughly 25-30% of their time in these calls.

Besides the improved cache behavior, also note that automatic memory management allows escape analysis, which identifies local allocations that can be placed on the stack. (Stack allocations are clearly cheaper than heap allocation of either sort.)
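A hypothetical illustration of the kind of allocation escape analysis can catch: the temporary object below never escapes its method, so a JIT that performs escape analysis may allocate it on the stack or eliminate it entirely. The Vec2 class is invented for this example.

    // Illustrative: a short-lived object that never escapes its method.
    final class Vec2 {
        final double x, y;
        Vec2(double x, double y) { this.x = x; this.y = y; }
        double dot(Vec2 o) { return x * o.x + y * o.y; }
    }

    public final class EscapeDemo {
        static double lengthSquared(double x, double y) {
            Vec2 v = new Vec2(x, y);  // never stored in a field or returned:
            return v.dot(v);          // a candidate for stack allocation
        }

        public static void main(String[] args) {
            double s = 0;
            for (int i = 0; i < 1000000; i++) s += lengthSquared(i, i + 1);
            System.out.println(s);
        }
    }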
3) Run-time compilation

The JIT compiler knows more than a conventional "pre-compiler", and it may be able to do a better job given the extra information:

The compiler knows what processor it is running on, and can generate code specifically for that processor. It knows whether (for example) the processor is a PIII or P4, if SSE2 is present, and how big the caches are. A pre-compiler, on the other hand, has to target the least-common-denominator processor, at least in the case of commercial software.

Because the compiler knows which classes are actually loaded and being called, it knows which methods can be de-virtualized and inlined. (Remarkably, modern Java compilers also know how to "uncompile" inlined calls in the case where an overriding method is loaded after the JIT compilation happens.)

A dynamic compiler may also get the branch prediction hints right more often than a static compiler.

It might also be noted that Microsoft has similar comments regarding C# performance [5]: "Myth: JITed programs execute slower than precompiled programs." .NET still provides a traditional pre-compiler, ngen.exe, but "since the run-time only optimizations cannot be provided ... the code is usually not as good as that generated by a normal JIT."

Speed and benchmark issues

Benchmarks usually lead to extensive and heated discussion in popular web forums. From our point of view there are several reasons why such discussions are mostly "hot air".

What is slow? The notion of "slow" in popular discussions is often poorly calibrated. If you write a number of small benchmarks in several different types of programming language, the broad view of performance might be something like this:

    Language class                      Typical slowdown
    Assembler                           1
    Low-level compiled (Fortran, C)     1-2
    Byte-code (Python)                  25-50
    Interpreted strings (csh, Tcl?)     250x

Despite this big picture, performance differences of less than a factor of two are often upheld as evidence in speed debates. As we describe next, differences of 2x-4x or more are often just noise.

Don't characterize the speed of a language based on a single benchmark of a single program. We often see people drawing conclusions from a single benchmark. For example, an article posted on slashdot.org [3] claims to address the question "Which programming language provides the fastest tool for number crunching under Linux?", yet it discusses only one program. Why isn't one program good enough? For one, it's common sense: the compiler may happen to do particularly well or particularly poorly on the inner loop of the program, and this does not generalize. The fourth set of benchmarks above shows Java as being faster than C by a factor of two on an FFT of an array of a particular size. Should you now proclaim that Java is always twice as fast as C? No, it's just one program.

There is a more important issue than the code quality on the particular benchmark, however:

Cache/memory effects

Look at the FFT microbenchmark that we referenced above. [Figure: relative FFT performance versus input size; reproduced with permission.] On this single program, depending on the input size, the relative performance of 'IBM' (IBM's Java) varies from about twice as slow to twice as fast as 'max-C' (gcc with -O3 -lm -s -static -fomit-frame-pointer -mpentiumpro -march=pentiumpro -malign-functions=4 -funroll-all-loops -fexpensive-optimizations -malign-double -fschedule-insns2 -mwide-multiply -finline-functions -fstrict-aliasing). So what do we conclude from this benchmark? Java is twice as fast as C, or twice as slow, or ...
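This data-size sensitivity is easy to reproduce in any language. A minimal Java sketch of the idea (the sizes and access pattern are arbitrary choices for illustration; actual behavior depends entirely on the processor's cache geometry):

    // Illustrative: time the same column-order traversal at nearby matrix sizes.
    // Power-of-two dimensions can map successive accesses onto the same cache
    // sets, so n = 1024 may be much slower than n = 1023 or n = 1025.
    public final class CacheDemo {
        static long walk(double[] a, int n) {
            double sum = 0;
            for (int j = 0; j < n; j++)        // column by column:
                for (int i = 0; i < n; i++)    // stride-n access pattern
                    sum += a[i * n + j];
            return (long) sum;
        }

        public static void main(String[] args) {
            for (int n : new int[] {1023, 1024, 1025}) {
                double[] a = new double[n * n];
                walk(a, n);                    // warmup
                long t0 = System.currentTimeMillis();
                long s = walk(a, n);
                long t1 = System.currentTimeMillis();
                System.out.println("n=" + n + ": " + (t1 - t0) + " ms (" + s + ")");
            }
        }
    }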
This performance variation due to factors of data placement and size is universal. A more dramatic example of such cache effects is the link mentioned in the discussion of garbage collection above. The person who posted [3] demonstrated the fragility of his own benchmark in a follow-up post, writing that "Java now performs as well as gcc on many tests" after changing something (note that it was not the Java language that changed).

Conclusions: why is "Java is slow" so popular?

Java is now nearly equal to (or faster than) C++ on low-level and numeric benchmarks. This should not be surprising: Java is a compiled language (albeit JIT compiled). Nevertheless, the idea that "Java is slow" is widely believed. Why this is so is perhaps the most interesting aspect of this article. Let's look at several possible reasons:

Java circa 1995 was slow. The first incarnations of Java did not have a JIT compiler, and hence were bytecode interpreted (like Python, for example). JIT compilers appeared in JVMs from Microsoft, Symantec, and in Sun's Java 1.2. This explanation is implausible. Most "computer folk" are able to rattle off the exact speed in GHz of the latest processors, and they track this information as it changes each month (and have done so for years). Yet this explanation asks us to believe that they are not able to remember that a single and rather important language speed change occurred in 1996.

Java can still be slow. For example, programs written with the thread-safe Vector class are necessarily slower (on a single processor at least) than those written with the equivalent thread-unsafe ArrayList class. This explanation is equally unsatisfying, because C++ and other languages have similar "abstraction penalties". For example, the Kernighan and Pike book The Practice of Programming has a table with the following entries, describing the performance of several implementations of a text processing program:

    Version           400 MHz PII
    C                 0.30 sec
    C++/STL/deque     11.2 sec
    C++/STL/list      1.5 sec

Another evidently well-known problem in C++ is the overhead of returning an object from a function (several unnecessary object create/copy/destruct cycles are involved).

Java program startup is slow. As a Java program starts, it unzips the Java libraries and compiles parts of itself, so an interactive program can be sluggish for the first couple seconds of use. This approaches being a reasonable explanation for the speed myth. But while it might explain users' impressions, it does not explain why many programmers (who can easily understand the idea of an interpreted program being compiled) share the belief.

Two of the most interesting observations regarding this issue are that there is a similar "garbage collection is slow" myth that persists despite decades of evidence to the contrary, and that in web flame wars, people are happy to discuss their speed impressions for many pages without ever referring to actual data. Together these suggest that no amount of data will alter people's beliefs, and that in actuality these "speed beliefs" probably have little to do with Java, garbage collection, or the otherwise stated topic. Our answer probably lies somewhere in sociology or psychology. Programmers, despite their professed appreciation of logical thought, are not immune to a kind of mythology, though these particular "myths" are arbitrary and relatively harmless.

Acknowledgements

Ian Rogers and Curt Fischer clarified some points.

References