I've read my share of cryptic JavaScript written by others, and in that sense I agree that in multi-person, long-term projects static typing will no doubt have its advantages.
My hunch, however, is that what is often overlooked in development with statically typed languages is that it takes considerable time and effort to come up with the right set of types. Many examples show how types almost magically make programs easier to understand. But what such an example doesn't state is how much effort it took to come up with just those types.
One way of thinking about it is that type definitions are really a "second program" you must write. It checks the primary program and validates it. But that means you must write that second program as well. It's like building an unsinkable ship with two hulls, one inside the other. The quality will be great, but it does cost more.
No matter what, you need a rigorous schema for your data. If you write a complex JS/Python program without doing the equivalent of "come up with the right set of types" then you will have a bad time. I'm sure in the OP here the skilled Python programmer did think carefully about the shapes of her data, she just didn't write it down.
To be sure, having to write down those data structure invariants in a rigorous way that fits into the type system of your programming language has a cost. But the hard part really is coming up with the invariants, and it's dangerous to think that dynamic languages obviate the need for that.
It's also hard to massage your invariants into a form that a type checker will accept, since you're now restricted to a weird, (usually) non-Turing-complete language.
A good example of this is matrix operations: there are plenty of invariants and contracts to check (e.g. multiplication must be between m x n and n x p matrices), but I don't believe there's yet a particularly convincing Haskell matrix library, in part because the range of relevant mathematical invariants doesn't cleanly fit into Haskell's type system.
For those cases, checking the invariants at runtime is your escape hatch to utilize the full expressive power of the language.
This particular example can be encoded in the Haskell type system, though. For example, there's a tensor library where all operations are (according to its description) checked for correct dimensions by the type system. It seems to require a lot of type-level magic, though, and that may disqualify it from "cleanly".
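As a rough sketch of the kind of encoding involved (this is not the library mentioned above; Matrix and matMul are made-up names for illustration), GHC's type-level naturals are enough to pin down the multiplication contract:

    {-# LANGUAGE DataKinds, KindSignatures #-}
    import Data.List (transpose)
    import GHC.TypeLits (Nat)

    -- A matrix carrying its dimensions as phantom type parameters.
    newtype Matrix (m :: Nat) (n :: Nat) = Matrix [[Double]]

    -- Only type-checks when the inner dimensions agree:
    -- (m x n) times (n x p) yields (m x p).
    matMul :: Matrix m n -> Matrix n p -> Matrix m p
    matMul (Matrix a) (Matrix b) =
      Matrix [ [ sum (zipWith (*) row col) | col <- transpose b ] | row <- a ]

The catch is that nothing here verifies that the wrapped list actually has m rows and n columns; a real library needs smart constructors or sized vectors for that, which is where the type-level magic starts to pile up.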
> But the hard part really is coming up with the invariants,
Surely. But if you have to write them down, it becomes hard to change them, because then you will have to rewrite them, and you may need to do that many times if your initial invariants are not the final correct ones. And the initial ones are likely not to be the final correct ones because, as you say, coming up with the invariants is ... the hard part.
What I'm getting at is that in a language that requires you to write the types down, they always have to be written down correctly. So if you have to change the types you use, or something about them, you may have a lot of work to do: not only do you have to rewrite the types, you also have to rewrite all the code that uses those types.
That does allow you to catch many errors, but it can also mean a lot of extra work. The constraint is that types and executable code must always agree.
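A toy example of that coupling (hypothetical names, Haskell just for concreteness): reshaping a record type forces every consumer to change before anything compiles again.

    -- Version 1 of the type, and one of its (possibly many) consumers.
    data User = User { name :: String }

    greet :: User -> String
    greet u = "Hello, " ++ name u

    -- If version 2 reshapes the type, say
    --   data User = User { firstName :: String, lastName :: String }
    -- then greet (and every other use site) must be rewritten before
    -- any part of the program compiles, even parts you aren't working on.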
Whereas in a dynamic language, some parts of your program might not even compile as such, if you ran them through a compiler, but you don't care, because you are currently focusing on another part of the program.
You want to test that part quickly, to get fast feedback, without having to make sure every part of your program complies with the current version of your types.
A metaphor: it's like furnishing a house and trying out different-colored curtains in one room. In a statically typed language you could not see how they look and feel until all the rooms have curtains of the same new color, until they all satisfy the same type constraints.
"that it takes considerable time and effort to come up with the right set of types. "
As I've written here before, this is one of the 'accidental advantages' of TypeScript: you let the compiler run 'loose' while you're hacking away, writing quickly, and then 'make it more strict' as you start to consolidate your classes.
I almost don't bother to type something until I have to. Once I see it sitting there for a while, and I know it's not going to change much ... I make it a type.
It's an oddly liberating thing that I don't think was ever part of the objectives of the language; moreover, I can't think of a similar situation in any other (at least mainstream) language.
You can do that in Haskell also. Just turn on the -fdefer-type-errors GHC option and leave out most of the type signatures. Any expression with a type error will be reported when/if the expression is evaluated at runtime. You'll probably still need a few type hints, to avoid ambiguity, but otherwise it's not that different from programming in a dynamically-typed language.
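A minimal demo (the module itself is made up, but -fdefer-type-errors is a real GHC flag):

    {-# OPTIONS_GHC -fdefer-type-errors #-}
    module Main where

    -- This binding is ill-typed, but with deferred type errors GHC
    -- compiles it anyway, downgrading the error to a warning.
    broken :: Int
    broken = "not an int"

    main :: IO ()
    main = putStrLn "Runs fine, because `broken` is never evaluated."
    -- Forcing `broken` (e.g. `print broken`) would instead raise the
    -- deferred type error at runtime.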