[armedbear-devel] Fw: [jvm-l] Re: Some Array vs GETFIELD access times
logicmoo at gmail.com
Mon Nov 2 05:44:24 UTC 2009
----- Original Message -----
From: "Charles Oliver Nutter" <headius at headius.com>
To: <jvm-languages at googlegroups.com>
Sent: Sunday, November 01, 2009 8:47 PM
Subject: [jvm-l] Re: Some Array vs GETFIELD access times
None of your results are too surprising to me. In JRuby, we moved
several structures from encapsulating an array to using
differently-sized "slotted" objects, and performance got much better
as a result. Some of that was due to eliminating the bounds checks on
the arrays, but another portion was probably due to the reduced
footprint of object + reference versus object + reference + array.
Some comments inline below.
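The array-to-slots move described above can be sketched roughly like
this (illustrative class names, not JRuby's actual code): the
array-backed holder pays a bounds check and carries an extra array
object, while the fixed-slot holder is a plain field access.

```java
// Array-backed holder: every access is bounds check + dereference,
// and the object drags a separate array object along with it.
class ArrayBackedScope {
    private final Object[] slots;
    ArrayBackedScope(int size) { slots = new Object[size]; }
    Object get(int i) { return slots[i]; }
    void set(int i, Object v) { slots[i] = v; }
}

// Fixed-slot holder: the values live directly in the object's fields,
// so an access is a plain getfield after inlining.
class TwoSlotScope {
    private Object slot0, slot1;
    Object get(int i) { return i == 0 ? slot0 : slot1; }
    void set(int i, Object v) { if (i == 0) slot0 = v; else slot1 = v; }
}
```

The slotted version also allocates one object instead of two, which
is the footprint difference mentioned above.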
On Sun, Nov 1, 2009 at 7:48 PM, <logicmoo at gmail.com> wrote:
> public class ArrayVsClass {
>
>     public static void main(String[] args) {
>         long lVectorStartTime = System.currentTimeMillis();
>         int iterations = Integer.MAX_VALUE;
>         while (iterations-- > 0) {
>             // int iterations2 = 10;
>             // while (iterations2-- > 0)
>             {
>                 // testArray();    // ARRAY
>                 // vs
>                 // testGETFIELD(); // GETFIELD
>                 // vs
>                 testIBean();       // INVOKE INTERFACE
>                 // vs
>                 // testBean();     // INVOKE VIRTUAL
>                 // vs
>                 // testABean();    // INVOKE VIRTUAL POSSIBLY THROW
>                 // vs
>                 // testSlots();    // INVOKE FOR AVALUE
>             }
>         }
>         long lVectorRunTime = System.currentTimeMillis() - lVectorStartTime;
>         System.out.println("Bench time: " + lVectorRunTime);
>     }
Because of optimization effects, you should try running them all
together in the same benchmark, in varying orders. Short benchmarks
like these can skew results because the JIT only has to consider a
small subset of the full code for optimization.
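One way to follow that advice is to run every test in the same JVM,
after a warmup pass, in a shuffled order. A rough sketch (the
testLoopA/testLoopB bodies are placeholders standing in for the real
test methods):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

class InterleavedBench {
    // Written to a static field so the JIT cannot eliminate the loops.
    static long sink;

    static void testLoopA() {
        long r = 0;
        for (int i = 0; i < 1_000_000; i++) r += i;
        sink = r;
    }

    static void testLoopB() {
        long r = 0;
        for (int i = 0; i < 1_000_000; i++) r += i * 2;
        sink = r;
    }

    public static void main(String[] args) {
        List<Runnable> tests = new ArrayList<>();
        tests.add(InterleavedBench::testLoopA);
        tests.add(InterleavedBench::testLoopB);
        for (Runnable t : tests) t.run();   // warmup pass in the same JVM
        Collections.shuffle(tests);         // vary ordering across runs
        for (Runnable t : tests) {
            long start = System.currentTimeMillis();
            t.run();
            System.out.println("time: " + (System.currentTimeMillis() - start));
        }
    }
}
```

Running all variants in one process, in different orders, exposes the
order-dependent compilation effects the paragraph above warns about.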
> // SLOTS time: 33157,33250,33156
> public static void testSlots() {
>     ClassWithSlots oneSlot = new ClassWithSlots(6);
...
> // Array time: 18438,18437,18422
> public static void testArray() {
>     final long[] accessArray = new long[] { 6 };
Not too surprising; you're paying the cost of the array plus the cost
of the virtual invocation. So even after inlining, you've got
something like bounds check + dereference + virtual call.
> // GETFIELD time: 14688,14531,14453
> public static void testGETFIELD() {
>     ClassWithOneSlot oneSlot = new ClassWithOneSlot(6);
...
> // INVOKE VIRTUAL time: 14750,14594,14719
> public static void testBean() {
>     ClassWithOneSlot oneSlot = new ClassWithOneSlot(6);
This is exactly the pattern we use in JRuby for heap-based scopes. We
have from ZeroVarDynamicScope up to FourVarDynamicScope and then it
falls over into an array-based version. Because we can statically tell
how many variable slots we'll need in most Ruby scopes, this ended up
being a big perf improvement for us.
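That pattern could be sketched like this (illustrative names, not the
real JRuby classes): a factory picks a fixed-slot scope when the
variable count is statically known, and falls back to an array-based
scope otherwise.

```java
abstract class DynamicScope {
    abstract Object getValue(int offset);
    abstract void setValue(int offset, Object value);

    // Choose a slotted implementation for small, statically known
    // variable counts; fall back to the array-backed version.
    static DynamicScope newScope(int varCount) {
        switch (varCount) {
            case 0:  return new ZeroVarScope();
            case 1:  return new OneVarScope();
            default: return new ManyVarsScope(varCount);
        }
    }
}

class ZeroVarScope extends DynamicScope {
    Object getValue(int offset) { throw new IndexOutOfBoundsException(); }
    void setValue(int offset, Object value) { throw new IndexOutOfBoundsException(); }
}

class OneVarScope extends DynamicScope {
    private Object var0;                 // plain field: no bounds check
    Object getValue(int offset) { return var0; }      // offset must be 0
    void setValue(int offset, Object value) { var0 = value; }
}

class ManyVarsScope extends DynamicScope {
    private final Object[] vars;         // array fallback for large scopes
    ManyVarsScope(int size) { vars = new Object[size]; }
    Object getValue(int offset) { return vars[offset]; }
    void setValue(int offset, Object value) { vars[offset] = value; }
}
```

The fixed-slot subclasses turn most variable accesses into field
accesses, which is where the win described above comes from.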
> // INVOKE INTERFACE time: 14469,14610,14859
> public static void testIBean() {
>     IBeanWithOneSlot oneSlot = new ClassWithOneSlot(6);
>
>     int iterations = Integer.MAX_VALUE;
>     long result = 0;
>     while (iterations-- > 0) {
>         result += oneSlot.getValue();
>     }
> }
invokeinterface ends up as fast as invokevirtual once it's been
inlined, so this is no surprise.
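The benchmarked classes themselves are elided from the quoted message;
a minimal reconstruction consistent with the calls shown might look
like this (an assumption, not the original poster's code):

```java
// Interface type used by testIBean (invokeinterface dispatch).
interface IBeanWithOneSlot {
    long getValue();
}

// Abstract type used by testABean (invokevirtual through an abstract
// supertype); the "possibly throw" variant presumably throws in some
// subclass, which is not shown here.
abstract class AClassWithOneSlot implements IBeanWithOneSlot {
    public abstract long getValue();
}

// Concrete class used by testGETFIELD and testBean: one "slot" field.
class ClassWithOneSlot extends AClassWithOneSlot {
    private final long value;
    ClassWithOneSlot(long value) { this.value = value; }
    public long getValue() { return value; }
}
```

With a single loaded implementation, the JIT can devirtualize and
inline getValue through any of the three static types, which is why
the timings converge.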
> // INVOKE VIRTUAL POSSIBLY THROW time: 14641,14594,14547
> public static void testABean() {
>     AClassWithOneSlot oneSlot = new ClassWithOneSlot(6);
Exception-handling paths that are never taken do not impact
performance, so this is also not surprising. Try having one out of
every N invocations trigger the exception and watch the performance
change drastically from then on.
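That experiment could be set up along these lines (a hypothetical
ThrowingSlot class, not from the original thread): make getValue throw
on every Nth call and time the loop again.

```java
// Throws on every Nth call so the exception path is actually taken,
// rather than merely present, during the benchmark loop.
class ThrowingSlot {
    private final long value;
    private final int throwEvery;
    private int calls;

    ThrowingSlot(long value, int throwEvery) {
        this.value = value;
        this.throwEvery = throwEvery;
    }

    long getValue() {
        if (++calls % throwEvery == 0)
            throw new IllegalStateException("taken exception path");
        return value;
    }
}
```

The benchmark loop would then wrap the call in a try/catch; once the
throw actually fires, the JIT can no longer treat the handler as dead
code, and constructing and unwinding the exception has a real cost.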
- Charlie