> In C#/TypeScript you can now use the friendly `await` keyword that makes async calls almost as easy as synchronous calls. Ironically, the `await` keyword makes it really easy to force HTTP requests to execute sequentially that could easily execute in parallel. This is what the pit of fail looks like!
I'm not specifically familiar with C#'s or TypeScript's async/await implementations, but this... shouldn't be true?
If you can't launch multiple requests and then gather their results, either by aggregating them into a single future or by awaiting them individually (which means you'll start acting on results somewhere between min(t1, t2, ..., tn) and max(t1, t2, ..., tn) time), while all of them proceed just as well as if you had used callbacks, then that's not a very good implementation of async/await...
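To illustrate: in TypeScript this works fine, and the sequential-by-accident trap is just a matter of where you put the `await`. A minimal sketch, where `fetchUser`/`fetchOrders` are invented stand-ins that simulate HTTP calls with timers:

```typescript
// Stand-in for an HTTP call: resolves with `value` after `ms` milliseconds.
function delay<T>(ms: number, value: T): Promise<T> {
  return new Promise((resolve) => setTimeout(() => resolve(value), ms));
}

const fetchUser = () => delay(50, { name: "alice" });
const fetchOrders = () => delay(50, [1, 2, 3]);

// Sequential trap: each await completes before the next request even
// starts, so total time is roughly the sum of both latencies (~100 ms).
async function sequential() {
  const user = await fetchUser();
  const orders = await fetchOrders();
  return { user, orders };
}

// Concurrent: both requests are already in flight before anything is
// awaited, so total time is roughly the max of the two (~50 ms).
async function concurrent() {
  const userPromise = fetchUser();     // request starts here
  const ordersPromise = fetchOrders(); // and here, concurrently
  const [user, orders] = await Promise.all([userPromise, ordersPromise]);
  return { user, orders };
}
```

The only difference is whether the second call is issued before or after the first `await`; the syntax makes both look equally innocent, which is presumably the complaint in the quoted excerpt.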
Agreed. But GP is correct that you can't simply make a C# async method synchronous by awaiting it. Some good discussion of workarounds here [0]. That does seem "unfortunate", as it means async will grow through the codebase (the SO answer refers to it as a "zombie virus"). If it's all code you're writing, then you have options. But if you're using a 3rd-party lib and the author decided it should be async, then your code becomes transitively async too, whether you want/need it or not.
That just doesn't seem like good language design. I totally get the need for concurrency, but the feature shouldn't have such an invasive impact on the code base. Go and Erlang manage to provide good concurrency support without the tax.
There is quite a bit of overcomplication in that answer, because it's trying to cover all possible task kinds. In C#, compared to Go at least (but I suspect Erlang as well), Tasks are much more complex than goroutines and can accomplish more things.
In particular, C# is not limited to a single scheduler for tasks, and not all task schedulers/runners are multithreaded. When you want to interact with the UI, for example, which is single-threaded, you can still use async/await to do so, but the tasks it produces will also run on the single-threaded UI executor. This is very useful for avoiding the need for synchronization, but it can easily introduce deadlocks if you're writing code as if tasks can run in parallel (they are only guaranteed to wait in parallel).
If, on the other hand, you're working with Tasks that are simply used for concurrency, as in Go (and Erlang?), then those run on a thread-pool executor and you can easily and safely use Task.Wait and get sync-like code, including exception propagation, across the sync/async boundary (though you do have to handle the AggregateException, since Task.Wait doesn't know whether it's waiting for a single task or a set of tasks).
In fact, this is actually easier than in Go, since the Task that you want to synchronously wait for doesn't have to cooperate with you in any way - Tasks are objects in C# and you can observe their state directly, unlike goroutines.
The main thing I dislike about this in C# is that they decided to have a single Task type regardless of the executor, so unfortunately the language won't help you avoid synchronous waits on the dangerous tasks.
Summary: Many round trips of small API requests can add up to a lot of latency. An often-used tactic is to bundle requests. Even better is to locally cache data when possible.
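The bundling tactic can be sketched in a few lines of TypeScript. This is a hedged illustration, not any particular library's API: `UserBatcher` and `fetchManyUsers` are invented names, and the pattern is essentially what libraries like DataLoader implement. Callers ask for individual records; requests issued in the same tick are coalesced into one round trip.

```typescript
type User = { id: number; name: string };

// Stand-in for one batched round trip, e.g. GET /users?ids=1,2,3.
async function fetchManyUsers(ids: number[]): Promise<User[]> {
  return ids.map((id) => ({ id, name: `user-${id}` }));
}

class UserBatcher {
  private pending: Array<[number, (u: User) => void]> = [];
  private scheduled = false;

  // Looks like a single-record fetch to the caller.
  load(id: number): Promise<User> {
    return new Promise((resolve) => {
      this.pending.push([id, resolve]);
      if (!this.scheduled) {
        this.scheduled = true;
        // Flush after the current microtask queue drains, so all loads
        // issued "together" share one request.
        queueMicrotask(() => this.flush());
      }
    });
  }

  private async flush() {
    const entries = this.pending;
    this.pending = [];
    this.scheduled = false;
    const users = await fetchManyUsers(entries.map(([id]) => id));
    for (const [id, resolve] of entries) {
      const user = users.find((u) => u.id === id);
      if (user) resolve(user);
    }
  }
}
```

A production version would also dedupe ids, cache results, and reject the pending promises when the batch request fails; this sketch omits all of that.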
The article overlooks the end-to-end overhead of HTTP/2, which unfortunately makes the trade-offs between batching API calls and bundling JavaScript very complex in practice. For instance, in Chrome the overhead of an additional request is still several milliseconds of local CPU time, which means in practice you'll trade off CPU time to benefit warm loads where the cache is populated.
I'm hopeful the overhead will someday be made negligible, but until then I'd suggest profiling the trade-offs.
Meanwhile data frameworks like Relay offer incremental flushing with much more flexibility all over a single request.
Fetching all resources in a single HTTP request (or all data in a single GraphQL query) avoids a lot of issues and workaround effort caused by non-deterministic completion order of multiple small requests.