How would it make a difference if you split the statements up? A bad API is a bad API, and I’m pretty sure you can design one in any language (e.g. hide a while (true) inside an innocuous method). The way to track it down is the same in PHP as elsewhere: use a debugger. All that said, just don’t design shitty APIs, and give your methods useful names.
An advantage of being able to chain calls is fluent expressions that you can return immediately, for example from arrow functions or match expressions, which definitely makes for easier-to-read code.
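A minimal sketch of what that looks like, with a fluent chain returned straight from a match arm (Builder is a hypothetical class, not anything from the thread):

```php
<?php
// Hypothetical fluent builder, for illustration only.
class Builder {
    private array $parts = [];
    public function add(string $p): static {
        $this->parts[] = $p;
        return $this; // returning $this is what enables chaining
    }
    public function build(): string {
        return implode(' ', $this->parts);
    }
}

$style = 'loud';
// The whole chain sits in a match arm and is returned immediately.
$greeting = match ($style) {
    'loud'  => (new Builder())->add('HELLO')->add('WORLD')->build(),
    default => (new Builder())->add('hello')->build(),
};
// $greeting is 'HELLO WORLD'
```

PHP 8.4 also lets you drop the parentheses around `new`, i.e. `new Builder()->add(...)`, which is the syntax under discussion here.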
I can't think of a lot of examples where I'd want to call:
new API()->object->method()->subResult
particularly if the API might block. But I can think of a lot of reasons I wouldn't want to see that in my codebase. Usually starting with the fact that
try {
$api = new API();
}
should have a catch after it.
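Spelled out, that's roughly this shape (API and its failure mode are hypothetical stand-ins, assuming a constructor that can throw):

```php
<?php
// Hypothetical API class standing in for the real one; assume its
// constructor does something that can fail, like a network handshake.
class API {
    public function __construct() {
        // imagine a connection attempt here that may throw
    }
}

try {
    $api = new API();
} catch (RuntimeException $e) {
    // Surface the failure with a useful line number, instead of
    // burying it inside a one-liner chain.
    error_log('API unavailable: ' . $e->getMessage());
    exit(1);
}
```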
For local stuff, fine, if you want to write that way. I don't find it any more readable than seeing a variable created and then used. And I'd like to see the line number the error was thrown on if I'm reading error logs in production.
For what it's worth, blocking might not be an issue in the future. There's some discussion on the PHP mailing list about adding async capabilities to the language.
That would be a welcome language feature... my only question would be, how? Node accomplishes it by basically deferring things throughout the main loop. PHP has no main loop; it's not a webserver. The PHP paradigm is terribly ugly but very predictable in one way: You just rely on Apache or Nginx to spin up and kill a new PHP process with each call. In order for a call to last long enough to return something async, you can't rely on checking it the next cycle. There is no next cycle. So anything "async" basically blocks at the end, unless PHP sprouts its own endless loop the way Node does. And then you probably don't want more than one of them running. So it runs contrary to the paradigm.
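To make the "no next cycle" point concrete, here's a toy model of the deferred-work queue Node's event loop provides. A vanilla PHP request has nothing like this: once the script ends, there is no later tick in which a pending result could be checked.

```php
<?php
// Toy deferred-task queue, for illustration only.
$queue = [];
$defer = function (callable $cb) use (&$queue): void {
    $queue[] = $cb; // run later, not now
};

$log = [];
$defer(function () use (&$log) { $log[] = 'first'; });
$defer(function () use (&$log) { $log[] = 'second'; });

// The "endless loop" PHP would need to sprout: keep draining deferred
// tasks until nothing is left, instead of exiting after the response.
while ($queue) {
    $task = array_shift($queue);
    $task();
}
// $log is ['first', 'second']
```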
Many years ago (see my profile), I wrote a PHP socket server that worked a little like Node does. The server itself was a CLI daemon. The webserver sent all the API calls to a normal PHP script that acted as a dispatcher. If the dispatcher saw that the daemon wasn't running, it would call the endpoint itself, block, and return a result. If the daemon was running, the dispatcher would accept an external socket connection from the user and keep itself alive, then open an internal socket to the daemon and route the user's API calls through it, where they ended up in a queue held by the daemon. The daemon responded to the dispatcher over the internal socket, and the dispatcher sent the results to the user over the external socket as they came in. Thus it went from one PHP process per call to one PHP process per user, kept alive as long as the daemon was running. I actually think this was niftier in some ways than Node, because no one user could hang the whole server - only their own thread. This was a semi-solution for async calls in PHP.
And, if you want or need to spawn new PHP processes as workers that chew on lots of data, and wait for them to send messages back, you can already do that in a multitude of different ways... as long as the main process waits for them, or else aborts.
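One of those ways, sketched with proc_open: spawn a PHP worker process, feed it data, and block until it writes a result back (the string-reversal job is just a placeholder workload):

```php
<?php
// Spawn a worker PHP process whose whole job is to reverse one line.
$worker = proc_open(
    [PHP_BINARY, '-r', 'echo strrev(trim(fgets(STDIN)));'],
    [0 => ['pipe', 'r'], 1 => ['pipe', 'w']], // worker's stdin/stdout
    $pipes
);

fwrite($pipes[0], "hello\n");
fclose($pipes[0]);

$result = stream_get_contents($pipes[1]); // blocks until the worker replies
fclose($pipes[1]);
proc_close($worker); // the main process waits for the worker to exit

// $result is 'olleh'
```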
In any case, blocking API calls buried inside a single multi-part line of code are lazy and an invitation to disaster.