That's an odd assumption: that you have to choose between rabbit holes and development speed. Usually, languages with lots of rabbit holes require more work, not less.
Take C++ vs Go, for instance. C++ is an endless series of deep, complex rabbit holes where you get enough rope to hang not only yourself but your entire team. Serious C++ projects tend to require a style guide, either company-wide (most companies don't have that kind of discipline) or at least some consensus on what subset of the language to use and how to structure problems for a particular project. Look at the guides Google or the Joint Strike Fighter program use. They're massive. C++ is not a particularly fast language to develop quality code in unless you have a small team and/or are supremely well aligned and disciplined. And even on a disciplined team, it takes time.
Go has fewer rabbit holes. It gives you an easier path to concurrency. It has managed memory. It is sufficiently performant for a very wide range of projects. It has a very capable, practically oriented standard library that even includes HTTP, cryptography, support for some common formats, and a decent networking abstraction that actually gives you design flexibility. And importantly: it comes with a shared code style, enforced by tooling, so you don't have to waste time mediating between opinionated hothead programmers on your project.
Because not all code is yours. In a team, the time spent on “rabbit holes” adds up, increasing the risk of bugs. A “slower” but predictable language can lead to more consistent, maintainable code, which is often more valuable in the long run, and not least when running in production.
> In a team, the time spent on “rabbit holes” adds up, increasing the risk of bugs.
It adds up with the number of people on the team? Or with the number of people squared? Cubed? n log n? Because a lot of those options would still favor the former language.
And if it's happening particularly often, that means the rate will fall off drastically as mastery is achieved.
I see a risk when code does something different from expectations. I don't see any risk when code has some kind of novel syntax that requires looking it up. Or when you learn about a feature from the documentation or a blog post.
Being predictable is quite valuable, but predictability is different from memorizing every feature.
Let's say one language has me find a weird rabbit hole every 50 hours of use, and I spend half an hour learning about it.
Let's say another language has no rabbit holes, but I'm 5% slower at coding in it.
Why would I not prefer the first language? Half an hour per 50 hours is roughly a 1% overhead, against a constant 5% slowdown.
(And 5% is an intentional lowball. I'm confident I can find language pairs where my productivity differs by significantly more.)