I think this paper overestimates the benefit of what I call isoheaps (partitioning allocations by type). I wrote the WebKit isoheap implementation so it’s something I care about a lot.
Isoheaps can mostly neutralize use-after-free bugs. But that's all they do. Moreover, they don't scale well. If you isoheap a select set of stuff, it's fine, but if you try to deploy isoheaps for every allocation, you get massive memory overhead (2x or more) and substantial time overhead too. I know because I tried that.
If an attacker finds a type confusion or heap buffer overflow, then isoheaps won't prevent the attacker from controlling heap layout. All it takes is being able to confuse an int with a ptr, and it's game over. If they can read ptr values as ints, then they can figure out how the heap is laid out (no matter how weirdly you laid it out). If they can also write ptr values as ints, then they control the whole heap. At that point it doesn't even matter if you have control flow integrity.
To defeat attackers you really need some kind of 100% solution where you can prove that the attacker can’t use a bug with one pointer access to control the whole heap.
Yes, having coded quite a few years in C++ (on the Firefox codebase) before migrating to Rust, I believe that many C and C++ developers mistakenly assume that
1/ memory safety is the unachievable holy grail of safety ;
2/ there is a magic bullet somewhere that can bring about the benefits of memory safety, without any of the costs (real or expected).
In practice, the first assumption is wrong because memory-safety is just where safety _starts_. Once you have memory-safety and type-safety, you can start building stuff. If you have already expended all your cognitive budget on reaching this point, you have lost.
As for the magic bullets, all those I've seen suggested are of the better-than-nothing variety rather than the it-just-works variety they're often touted as. Doesn't mean that there won't ever be a solution, but I'm not holding my breath.
And of course, I've seen people claim more than once that AI will solve code safety & security. So far, that's not quite what's written on the wall.
Well, GC is very close to that magic bullet (comparatively spatial safety via bound checking is easy). It does have some costs of course, especially in a language like C++ that is GC-hostile.
C++ isn't hostile toward garbage collection; it's more the programmers using C++ who are. C++ is the only language that can have an optional, totally pause-less, concurrent GC engine (SGCL). No other programming language, not even Java, offers such a collector.
This is false.
Lots of pauseless concurrent GCs have shipped for other languages. SGCL is not special in that regard. Worse, SGCL hasn’t been shown to actually avoid disruptions to program execution while the shipping concurrent GCs for Java and other languages have been shown to really avoid disruptions.
(I say disruptions, not pauses, because avoiding “pauses” where the GC “stops” your threads is only the first step. Once you tackle that you have to fix cases of the GC forcing the program to take bursts of slow paths on pointer access and allocation.)
SGCL is a toy by comparison to other concurrent GCs. For example, it has hilariously complex pointer access costs that serious concurrent GCs avoid.
There isn’t a single truly pause-less GC for Java — and I’ve already proven that to you before. If such a GC exists for any other language, name it.
And no, SGCL doesn’t introduce slow paths, because mutators never have to synchronize with the GC. Pointer access is completely normal — unlike in other languages that rely on mechanisms like read barriers.
Loosen up, seriously.
> There isn’t a single truly pause-less GC for Java — and I’ve already proven that to you before. If such a GC exists for any other language, name it.
You haven't proven that. If you define "pause" as "the world stops", then no, state of the art concurrent GCs for Java don't have that. If you define "pause" as "some thread might sometimes take a slow path due to memory management" then SGCL has those, as do most memory management implementations (including and especially malloc/free).
> And no, SGCL doesn’t introduce slow paths, because mutators never have to synchronize with the GC. Pointer access is completely normal — unlike in other languages that rely on mechanisms like read barriers.
The best concurrent GCs have no read barriers, only extremely cheap write barriers.
You have allocation slow paths, at the very least.
First, there are no Java GCs that completely eliminate stop-the-world pauses. ZGC and Shenandoah reduce them to very short, sub-millisecond windows — but they still exist. Even the most concurrent collectors require STW phases for things like root scanning, final marking, or safepoint synchronization. This is documented in OpenJDK sources, benchmarks, and even in Oracle’s own whitepapers. Claiming Java has truly pause-less GC is simply false.
Second, you’re suggesting there are moving GCs that don’t use read barriers and don’t stop mutator threads at all. That’s technically implausible. Moving collectors by definition relocate objects, and unless you stop the world or have some read barrier/hazard indirection, you can’t guarantee pointer correctness during concurrent access. You must synchronize with the mutator somehow — either via stop-the-world, read barriers, or epoch/hazard-based coordination. It’s not magic, it’s basic memory consistency.
SGCL works without moving anything. That’s why it doesn’t need synchronization, read barriers, or even slow-path allocation stalls. That’s not a limitation — that’s a design goal. You can dislike the model, but let’s keep the facts straight.
It is hostile in the sense that it allows hiding and masking pointers, so it is hard to have an exact moving GC.
SGCL, as impressive as it is, AFAIK requires pointers to be annotated, which is problematic for memory safety, and I'm not sure that it is a moving GC.
SGCL introduces the `tracked_ptr` smart pointer, which is used similarly to `shared_ptr`. The collector doesn't move data, which makes it highly efficient and — perhaps surprisingly — more cache-friendly than moving GCs.
Based on what data?
Folks who make claims about the cache friendliness of copying GCs have millions of lines of credible test code that they’ve used to demonstrate that claim.
Compaction doesn't necessarily guarantee cache friendliness. While it does ensure contiguity, object layout can still be arbitrary. True cache performance often depends on the locality of similar objects — for example, memory pools are known for their cache efficiency. It's worth noting that Go deliberately avoids compaction, which suggests there's a trade-off at play.
I'm not saying that compaction guarantees cache friendliness.
I'm saying you have no evidence to suggest that not compacting is better for cache friendliness. You haven't presented such evidence.
As I mentioned earlier, take a look at Golang. It's newer than Java, yet it uses a non-moving GC. Are you assuming its creators intentionally made the language slower?
I feel that the lack of GC is one of the key differentiators C++ has left. If a group of C++ developers were to adopt a GC, they'd be well on their way to abandoning C++.