Why Java Benchmarks Lie and What We Could Do About It
This talk is devoted to Java enterprise benchmarks: from naive open-source benchmark suites to the mighty SPECjbb2015 — the main benchmark of our time.
But how and by whom was it developed? What influenced the SPECjbb2015 development process? And finally, what did it lead to?
The fact is that SPECjbb2015, despite being the major benchmark for hardware, produces irrelevant measurements, as do some other benchmarks in the industry. We will figure out how to fix that, and along the way discuss the key questions of benchmarking: latency, throughput, SLAs, and how to deal with all this stuff.
This talk is inspired by the many challenging issues encountered while benchmarking the Azul Zing JVM.