Hacker News

The Fourth Edition does not appear to mention this pattern under that name. Indeed, it gives as an example burying the mutex inside the type to be protected, which has the same downside (the resulting object is bigger†) but not the upside: with Stroustrup's approach we can still forget to take the lock.

That Boost link says the feature is "experimental and subject to change in future versions", but I don't know whether Boost says that about everything or whether the warning particularly marks out this feature.

† In Rust, Mutex<T> is typically 8 bytes bigger than T. In C++, std::mutex alone is often 40 bytes.



I don't have my copy of the book with me, so I don't know what Stroustrup called it. It was part of a discussion of overloading operator->.

The Boost warning doesn't have much to it. Boost libraries don't even guarantee API stability across versions, and boost.synchronized has been available for a few years with no changes.

> In Rust Mutex<T> is 8 bytes bigger than T, typically. In C++ std::mutex is often 40 bytes.

That's because on libstdc++, std::mutex embeds a pthread_mutex_t, which is 40 bytes for ABI reasons. It's a bad early ABI decision that unfortunately can't be changed. std::mutex on MSVC is even worse. std::shared_mutex is much smaller on MSVC, but on libstdc++ it is even bigger than std::mutex.

A portable Mutex<T> of minimal size can be built on top of std::atomic::wait, though.


Maybe the standard should then take the opportunity to define such a thing, since it would be smaller and more useful than what they have today in practice.


Well, yes. Then again, the committee took 10 years to standardize std::mutex, 14 years for std::shared_mutex, and 17 for std::optional. We still don't have a good hash map.

We have to be realistic: the standard library will never be complete, and you'll always have to get basic components from third parties or write them yourself.


One thing that’s nice about C++ is that you get to reimplement stuff that doesn’t have a decade of battle testing behind it.

That way, such ‘bleeding edge’ features evolve and improve a lot before being set in stone.


Unfortunately, they did standardize the bad hash map.


To be fair, they standardized the hash map you'd probably have been taught 30, and maybe even 20, years ago in a CS class. It's possible that, if your professor is rather slow to catch on, they are still teaching new students bucketed hash tables like the one std::unordered_map requires.

I'd guess that while a modern class is probably taught some sort of open-addressed hash map, students aren't being taught anything as exotic as Swiss Tables or F14 (Google Abseil's and Facebook Folly's maps), but that's OK because standardising all the fine details of those maps would be a bad idea too.

On the other hand, the standard does not tell you to use a halfway decent hash function, and many standard implementations don't provide one, so in practice many programs don't use one. The "bad hash map" performs OK with a terrible hash function, whereas the modern ones require decent hashes or their performance is miserable.



