I don't think behaviours are all that interesting in themselves; after all, other programming languages have comparable abstractions.
Rather, what is interesting about the BEAM is that throwing an error is so graceful that it's not such a sin to just throw an error. In other words, a component that CAN error or get into a weird state can be shoved into a behaviour that CANNOT. And by default you are safe from certain operational errors becoming logic or business errors.
For example, you might have a defined "get" interface that doesn't return an error -- say it starts as an in-memory K/V store and returns an optional(value), which is NULL in the case that the key didn't exist.
But suppose you want two datastores behind the same interface, so you abstract it over a filesystem -- and now you can have a permission error. Returning "NULL" there is not actually "correct". You should throw, because that bubbles the error up to ops teams instead of swallowing it whole. A panic in this case is probably fine.
Now suppose the filesystem is over the network, the line to the datacenter got backhoe'd, and your SDN did a 10 millisecond failover -- returning "NULL" is really not correct, because consumers of your getter are liable to have a bad time managing real consistency cases that could cost $$$. And here a panic is not necessarily great either, because you bring down everything over a minor hiccup.
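A minimal sketch of that interface evolution (module names are hypothetical, just for illustration): the in-memory store can legitimately answer nil for "key absent", but the filesystem-backed one must distinguish "absent" from an operational failure, and raise on the latter so the failure crashes the calling process and surfaces to a supervisor instead of masquerading as a missing key.

```elixir
# Stage 1: in-memory K/V store -- nil is a legitimate "key absent" answer.
defmodule MemStore do
  def get(map, key), do: Map.get(map, key)   # nil means "not there"
end

# Stage 2: filesystem-backed store behind the same interface.
# A permission error is NOT "not there", so we raise rather than return nil;
# the process crashes, the supervisor restarts it, and ops sees the error.
defmodule FileStore do
  def get(dir, key) do
    case File.read(Path.join(dir, key)) do
      {:ok, value}      -> value
      {:error, :enoent} -> nil                  # genuinely absent
      {:error, reason}  -> raise "read failed: #{inspect(reason)}"
    end
  end
end
```

The point is that callers of `get/2` never changed: the operational error path was added without widening the return type into an error monad.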
The other power of throwing errors + behaviours is that trapping errors with contextual information reporting (e.g. a user-bound 500 error, with stack trace information sent somewhere ops can take a gander) becomes really easy and generically composable; that's not so for error monads or panics.
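That trap-and-report pattern can be sketched generically like this (a hedged sketch, not a real library: `Trap` and the `report` callback are made-up names, standing in for whatever ships context to your ops tooling):

```elixir
defmodule Trap do
  # Run `fun`; on success pass the value through, on error send the
  # message and stack trace to `report` and return a generic failure
  # the caller can render as, e.g., a 500 to the user.
  def with_report(fun, report \\ &IO.inspect/1) do
    try do
      {:ok, fun.()}
    rescue
      e ->
        report.(%{error: Exception.message(e), stacktrace: __STACKTRACE__})
        {:error, :internal_error}
    end
  end
end
```

Because the wrapper knows nothing about what `fun` does, it composes over any component that throws -- which is exactly what's hard to do generically with per-call-site error values.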
Anyway, it always struck me as strange that Erlang-inspired actor-system languages came out obsessing over "never having errors" as a principle (like Pony), because that's throwing out a big part of Erlang.