
Historically Java has traded long pause times for fast allocations, although I'm of the impression that it has recently found a way to have its cake and eat it.


Java has been tunable for a long time. Periodically, the recommended tuning changes, or new GC algorithms become available, etc. But it has long been possible to get short pause times with various combinations of choosing the right algorithm and writing your program the right way.
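For illustration, "choosing the right algorithm" mostly comes down to a flag or two at launch. These are real HotSpot options, but the heap sizes and pause target below are made-up examples, not recommendations:

```shell
# Select G1 and give it an explicit pause-time goal:
java -XX:+UseG1GC -XX:MaxGCPauseMillis=50 -Xms4g -Xmx4g -jar app.jar

# Or opt into a low-latency collector (ZGC is production-ready from JDK 15):
java -XX:+UseZGC -Xmx4g -jar app.jar
```

The pause goal is a hint, not a guarantee; G1 trades throughput to try to meet it.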

I think what really throws people off here is that getting good performance out of a Java application involves skills which are alien to C++ programmers, and vice versa. Drop an experienced C++ programmer into a Java codebase and they may have a very poor sense of what is expensive and what is cheap; likewise, experienced Java programmers don't do well in C++ either.

The result is that you have precious few people with any significant, real-world experience fixing performance issues in both languages.


Agreed, but usually tuning for short pause times involves trading off throughput or allocation performance. But at the end of the day, if you aren't allocating a bunch of garbage in the first place, then you don't need to be as concerned about the costs of allocating or cleaning up the garbage. I wish Go did more to make allocations explicit so they could be more easily recognized and avoided; I dislike Java's approach of making allocations even more implicit/idiomatic while trying to fix the cost problem in the runtime (although I admire the ambition).
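To make the "don't allocate garbage in the first place" point concrete, here's a minimal sketch (toy names and a toy XOR transform, purely illustrative) contrasting an allocate-per-call API with a caller-supplied scratch buffer:

```java
import java.util.Arrays;

public class BufferReuse {
    // Naive version: allocates a fresh array on every call,
    // producing garbage the collector later has to clean up.
    static byte[] encodeAllocating(byte[] input) {
        byte[] out = new byte[input.length];
        for (int i = 0; i < input.length; i++) {
            out[i] = (byte) (input[i] ^ 0x5A); // toy transform
        }
        return out;
    }

    // Reuse version: the caller supplies a scratch buffer, so a hot
    // loop can process millions of records with zero allocations.
    static void encodeInto(byte[] input, byte[] scratch) {
        for (int i = 0; i < input.length; i++) {
            scratch[i] = (byte) (input[i] ^ 0x5A);
        }
    }

    public static void main(String[] args) {
        byte[] data = "hello".getBytes();
        byte[] scratch = new byte[data.length]; // allocated once, reused
        for (int iter = 0; iter < 3; iter++) {
            encodeInto(data, scratch); // no per-iteration garbage
        }
        // Both paths produce the same bytes; only allocation behavior differs.
        System.out.println(Arrays.equals(encodeAllocating(data), scratch));
    }
}
```

In Java nothing in the signature tells you which style a library uses, which is the implicitness being complained about here.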


Parallel garbage collection in Java has been a thing for a long time. With the right tuning you might have very infrequent STW GCs, even in a game as allocation-heavy as Minecraft.


What do you consider a ‘long’ pause time?

I’ve had no issues with Java 17+ under heavy allocation/garbage collection (a data encryption pipeline I haven’t tuned to reuse buffers yet), and its pause times are on the order of a handful of milliseconds, without meaningful tuning. I think it’s collecting something like a GB/s of garbage.

And the jvm in question is doing a LOT more than just this, so it’s coping with millions of allocated objects at the same time.


I consider tens of milliseconds to be a long pause time (P99 should be <10ms), which is what I understand to be the ballpark for G1 (I haven't tested 17+). No doubt JVM is a fine piece of engineering, but I prefer Go's approach of just not allocating as much to begin with (Java's cheap allocations require a moving GC, which introduces challenges with respect to rewriting pointers and so on, which in turn introduces constraints for FFI and makes the whole system more complex). I wish it went further and made allocations more explicit.


If one cares about pause times, G1 isn't it; look instead at the pauseless collectors: ZGC, Azul's C4, or Shenandoah.

These are capable of handling TB-sized heaps with microsecond-scale pauses.


Yeah, I’m nominally familiar with these, but I can’t understand why one of these low latency collectors wouldn’t be the default GC unless they impose other significant tradeoffs.


They don’t change it from the default because G1 isn’t terrible and Java has a long history of ensuring backwards compatibility - not just in APIs, but also behavior, warts and all.

If they changed the default GC, a lot of folks would freak out, even if it was objectively better in nearly every situation. Because someone, somewhere is relying on some weird quirk of G1, or whatever, and now their software behaves differently, and it's a problem.

Give it a couple more major revs, and it might still happen. G1 became the default in what, Java 9?


Naturally there are tradeoffs; the infrastructure these collectors want is only justified when one is working at that scale. For example, ZGC wants large pages and takes advantage of NUMA if available.
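For reference, the knobs in question are ordinary HotSpot flags (real options; the heap size is just an example, and whether large pages or NUMA help depends entirely on the host):

```shell
# ZGC with large pages and NUMA-aware allocation enabled:
java -XX:+UseZGC -XX:+UseLargePages -XX:+UseNUMA -Xmx16g -jar app.jar
```

Large pages typically also require OS-level setup (e.g. reserving huge pages on Linux) before the JVM can use them.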


Sure, but isn't Shenandoah more "general purpose" or whatever?



