These discussions tend to zoom in on just one quality attribute, in this case latency. The point this article tries to make is that there are multiple quality attributes and that you trade them off against each other. There is no right or wrong here, except letting these choices be imposed on you by engineers arguing for the things they care about rather than the things that are actually relevant to the business.
In the case of high-speed trading systems, every second they are late to market is money lost. The system might be the fastest thing ever once it's finished, but while it's not running, its slower competitors have the market to themselves. Worse, any month you are late is time your competitors can use to learn from your mistakes and beat you at your own game. High-speed trading is of course extremely competitive, so things change all the time. That means we are now talking about speed of development and maintainability in addition to raw performance and throughput. The impact of most improvements you make is in any case typically temporary, in the sense that they only provide an advantage until your competitors catch up. So getting bogged down in lengthy maintenance and bug-fixing cycles while your competitors copy everything you got right means you lose some or all of the market opportunity.
This article mentions what happens when C++ projects fail: basically, you are late to market with something that doesn't work as advertised (slow, unstable). That's not an inherent risk, of course, but it's a risk nonetheless. Performance often comes at the price of complexity, and complexity has its own penalties in the form of e.g. poor maintainability, bugs, or instability. C++ is frankly notorious for this. You compensate by having skilled engineers acting in a disciplined way, and sometimes that actually works as advertised. The project mentioned in the article sounds like it had issues across the board. Probably at least one of the root causes was that they were taking shortcuts to get it to market.
The JVM ecosystem has a lot going for it when making such trade-offs. That's why it's so entrenched in the industry. It's been around long enough that there are mature solutions for most problems you are likely to run into. And you always have the option to go native from Java; it's not an either/or kind of thing.
For the same reason, you see people opting for a not particularly fast language like Python in domains where fast GPUs get things done quickly (e.g. machine learning). Python gets the job done; everything on the critical path is basically delegated to lower-level native code running on a GPU (or custom hardware in some cases). I wouldn't be surprised to learn that some high-frequency traders have lots of Python code around in addition to their precious native code. It would be a pragmatic thing to do.
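That delegation pattern is visible even in Python's standard library. As a minimal sketch (my own illustration, not something from the article), the ctypes module can hand a call straight to the platform's C runtime; the interpreter only marshals arguments while compiled code does the actual work:

```python
import ctypes
import ctypes.util

# Locate and load the platform's C runtime (e.g. libc.so.6 on Linux).
libc = ctypes.CDLL(ctypes.util.find_library("c"))

# Declare the native signature so ctypes marshals arguments correctly.
libc.strlen.restype = ctypes.c_size_t
libc.strlen.argtypes = [ctypes.c_char_p]

# The actual byte counting happens in compiled C, not in the interpreter.
print(libc.strlen(b"low latency"))  # 11
```

Libraries like NumPy apply the same idea at scale: the slow interpreter only orchestrates, while the hot loops run in C, Fortran, or on a GPU.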
Understanding your domain and understanding the trade-offs is what differentiates successful companies and engineers here. There are lots of valid reasons for preferring C++ for some things. Personally, I rarely encounter such requirements, and I'd probably prefer Rust over C++ if I did. Since I don't, I also have limited experience with Rust, because beyond the intellectual challenge I don't see how I'd get much real-world advantage out of it. I'm pretty sure that holds true across large parts of our industry. It's why you see people actively not caring about performance: they choose to run on slow virtual hardware, on slow cloud networks, using slow runtime environments with slow interpreters and slow application frameworks, because they can get stuff done quickly that way and can trivially scale by throwing dollars at the problem. Slow is fast enough for most.