If you read their published algorithm and are familiar with lock-free algorithms, it's clear that theirs is a transactional memory algorithm. Specifically, their LVB primitive. If this isn't obvious to you, I would recommend reading the seminal transactional memory papers from the 1970s and 1980s, including everything written by Hoare. Most of those are available from the ACM library. You particularly need to pay close attention to how wait-free algorithms are achieved.
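To make that concrete, here is a rough sketch of the shape of an LVB-style read barrier, just to show why every reference load behaves like a small transaction. This is my own illustration, not Azul's code: the helpers ref_is_stale and compute_current_ref are invented placeholders for the metadata and relocation checks their paper describes.

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stddef.h>

    typedef struct object object_t;

    /* Placeholder stubs so the sketch compiles; the real checks look at
     * metadata bits in the reference and the state of the page it points to. */
    static bool ref_is_stale(object_t *ref) { (void)ref; return false; }
    static object_t *compute_current_ref(object_t *ref) { return ref; }

    /* LVB-style read barrier, conceptually: every reference load is checked,
     * and a "bad" value (not yet marked through, or pointing into a page
     * being relocated) is corrected by the loading thread itself, which then
     * publishes the fix back to the slot it loaded from -- a tiny CAS
     * transaction. */
    static inline object_t *lvb_load(_Atomic(object_t *) *slot)
    {
        object_t *ref = atomic_load_explicit(slot, memory_order_acquire);
        if (ref != NULL && ref_is_stale(ref)) {
            object_t *fixed = compute_current_ref(ref);
            /* Self-heal: install the corrected reference so later loads from
             * this slot take the fast path. A failed CAS just means another
             * thread healed it first. */
            atomic_compare_exchange_strong(slot, &ref, fixed);
            ref = fixed;
        }
        return ref;
    }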
"Transactional memory" is not a marketing term, nor a synonym for a particular set of CPU instructions. It's a class of lock-free algorithms, especially lock-free, wait-free algorithms. And the C4 collector very clearly fits into that class of algorithms. It's use of page remapping and read/write page protections is precisely how you would emulate strong transactional memory primitives on x86.
I think this terse quotation (from their own research paper) sums up the relationship between the Vega hardware and the Linux software-based implementation:
"Azul has created commercial implementations of the C4 algorithm on three successive generations of its custom Vega hardware (custom processor instruction set, chip, system, and OS), as well on modern X86 hardware. While the algorithmic details are virtually identical between the platforms, the implementation of the LVB semantics varies significantly due to differences in available instruction sets, CPU features, and OS support capabilities." (http://www.azulsystems.com/sites/default/files/images/c4_pap...)
Regarding the performance of C4, the reason Azul doesn't publish TPC benchmarks is that there's no avoiding the immense costs of their page mapping hacks. From the paper above: "the garbage collector needs to sustain a page remapping at a rate that is 100x as high as the sustained object allocation rate for comfortable operation."
Page remapping is insanely expensive at the micro-granularity needed. They mitigate the cost by batching requests, but it's still significant. Furthermore, they must use atomic reads and writes for internal pointers. Those are cache killers.
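To make the batching idea concrete, here is a toy sketch using stock Linux mremap(). Azul's production collector leans on OS support well beyond this, and the queue structure and names (remap_request, queue_remap, flush_remaps) are mine, invented purely for illustration; the per-page remaps and the TLB invalidations they trigger are exactly the cost being discussed.

    #define _GNU_SOURCE
    #include <sys/mman.h>

    #define PAGE_SIZE 4096UL
    #define BATCH_MAX 256

    struct remap_request {
        void *from;   /* old virtual address of the page */
        void *to;     /* new virtual address it should appear at */
    };

    static struct remap_request batch[BATCH_MAX];
    static int batch_len;

    /* Flush all queued moves in one pass, amortizing per-call overhead
     * (the TLB shootdowns themselves are still paid). */
    static void flush_remaps(void)
    {
        for (int i = 0; i < batch_len; i++) {
            (void)mremap(batch[i].from, PAGE_SIZE, PAGE_SIZE,
                         MREMAP_MAYMOVE | MREMAP_FIXED, batch[i].to);
        }
        batch_len = 0;
    }

    /* Instead of remapping each page the moment its objects move,
     * queue the from->to move and flush the batch later. */
    static void queue_remap(void *from, void *to)
    {
        if (batch_len == BATCH_MAX)
            flush_remaps();
        batch[batch_len++] = (struct remap_request){ .from = from, .to = to };
    }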
I never said C4 can't be faster for particular workloads. Obviously for workloads sensitive to latency a pauseless collector can be faster overall. But as a general matter, those workloads are not in the majority. Ergo, for the majority of workloads C4 will not be faster, at least not on commodity hardware architectures.
You can continue to believe the hype, and believe that Azul possesses some sort of magical fairy dust, using techniques entirely beyond the comprehension of mere mortals. Or you can read about and learn how it _actually_ works. Their algorithm and implementations are all laudable and significant achievements. But there's nothing magical or secret about them.