<div dir="auto"><div><div class="gmail_quote"><div dir="ltr" class="gmail_attr">Den tors 3 dec. 2020 20:58Pascal Costanza <<a href="mailto:pc@p-cos.net">pc@p-cos.net</a>> skrev:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><br>
We evaluated Go and Java for their concurrent, parallel GCs, and C++ for its reference counting. Interestingly, reference counting is often described as more efficient than GC, but in our case that’s not true: Because there is a huge object graph at some stage that needs to be deallocated, reference counting incurs more or less the same pause that a non-concurrent GC does. That’s why we don’t expect Rust to fare better here either.</blockquote></div></div>
<div dir="auto"><br></div>
<div dir="auto">I'm surprised that you seem to lend credence to the myth that refcounting is somehow more efficient than a good GC.</div>
<div dir="auto"><br></div>
<div dir="auto">Refcounting suffers from a worst-of-both-worlds situation in which both allocation and release are potentially slow. Allocation has to search the allocator's free lists (or tree) for a suitably sized block, and on release it has to search those structures again to decide where to record the freed block. And that's not even counting the reference-count updates on every pointer handoff, which typically have to be atomic in multithreaded code.</div>
<div dir="auto"><br></div>
<div dir="auto">A GC'd language, by contrast, can optimise allocation down to essentially a single instruction (a pointer bump in the nursery), spends zero instructions releasing an individual object, and imposes no overhead on pointer transfer.</div>
<div dir="auto"><br></div>
<div dir="auto">In practice, when you compare the cost of heap allocation between Java and C++, the former readily beats the latter. C++ code may still be faster overall because it can often avoid heap allocation altogether by keeping objects on the stack. C++ developers also go to great lengths to avoid reference counting.</div>
<div dir="auto"></div></div>