Hacker News | elisbce's comments

Totally agree. I found the results surprising because a bunch of languages are faster than C++. Then I looked closer. The requirements are self-conflicting: no SIMD, but must be production-ready. No one would run the unoptimized version in production. Also, looking at the C++ implementations, they are not optimized at all. That makes this benchmark literally pointless.


Judging from their website, all links eventually point to either the VPN extension download site or a signup link. I wouldn't be surprised if some nation-state-sponsored APT is behind this shit.


The more I see these languages that have neither power nor readability, the more I appreciate C.


I think almost everyone is with you on readability, but I think it would be hard to make the case that it lacks power.


Indeed. Read some of the Project Euler discussions (after solving a problem). The J answers tend to be very short and very fast.


When SpaceX came about, they said it was impossible for a rocket to come back from space and get reused. They said combining multiple thrusters into one big thruster would never be reliable enough. When Starlink was introduced, they said it was stupid because the bandwidth was too small to be useful. Where are we now? 10 years ago, AI couldn't even beat a high-ranked amateur Go player, let alone the best pros. Everyone hid behind the curse of dimensionality as an excuse. Now what?

People who only look at the past/present and conclude impossibility are never going to be the ones who invent the future. Even math and science evolve, let alone engineering. The problems described in this article don't even remotely feel like the kind of barriers we faced before Go was solved, before protein folding was predicted, and before LLMs were solving problems from a single prompt. If there is a strong NEED for datacenters to be up in space, there will eventually be datacenters in space.


1. In Silicon Valley, people are not bound by non-compete clauses and can come and go at will. So fungibility is a top priority for any tech company. The only way to achieve it is to make sure expertise is shared across the team and not monopolized by one or a few old-timers.

2. Eng teams made up mostly of old-timers tend to get stale and slow to change. That's bad for products that need rapid evolution or new ideas to break the status quo. New engineers have far more incentive to make changes to prove themselves and collect credit, while old-timers tend to play it safe and err on the side of stability.

3. Bad coders, not new coders, write bad code.


That's so stupid. Just because I posted a video on TikTok doesn't mean someone should be able to go to the city's public website, look me up in the yellow pages, and download my photo ID and fingerprints.


That’s not what the poster meant.

What treating this biometric info as public means is that it won’t be accepted as valid proof of identity. Just because you posted a video on TikTok shouldn’t mean that a scammer can take out a loan in your name.


Yeah, blame the victims for not protecting themselves enough. How familiar.


Yes, it is messy when you want your MySQL databases to be mission-critical in production, e.g. handling a large amount of customer data. Historically, MySQL's high-availability architecture has had a lot of design and implementation issues because it was an afterthought. Dealing with large amounts of critical data means you need it to be performant, reliable, and available all at once, which is hard and requires you to deal with caching, sharding, replication, network issues, zone/resource planning, failovers, leader elections, semi-sync bugs, corrupted logs, manually killing bad queries that brought down the database, data migrations, version upgrades, etc. There is a reason why big corps like Google/Meta have dedicated teams of experts (including people who actually wrote the HA features) to maintain their mission-critical MySQL deployments.


I wrote an app that tracks my location using the latest SwiftUI and SwiftData, and the performance is so bad that it starts to stall the UI after just a few hundred data points. Apparently the magic of SwiftData + SwiftUI is only useful for making demos, and anything beyond a few hundred data points doesn't work out of the box. Everything runs on the main thread, and offloading to a non-main thread creates huge headaches and breaks UI updates. It's almost as if the devs at Apple were just trying to hit their WWDC OKRs by releasing something this immature and useless for production. Even for just reading/writing local data, CoreData is two to three orders of magnitude faster than SwiftData in some cases. Their newly released Core Location APIs for getting location updates are also full of caveats and not useful for production at all. It seems the teams are just focused on making good-looking Swift code with fancy new syntactic sugar.
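For what it's worth, the usual workaround for the main-thread bottleneck is SwiftData's @ModelActor macro, which gives an actor its own ModelContext so inserts happen off the main thread. A minimal sketch, assuming a hypothetical LocationPoint model and LocationStore actor (names are illustrative, not from the app above):

```swift
import Foundation
import SwiftData

@Model
final class LocationPoint {
    var latitude: Double
    var longitude: Double
    var timestamp: Date

    init(latitude: Double, longitude: Double, timestamp: Date) {
        self.latitude = latitude
        self.longitude = longitude
        self.timestamp = timestamp
    }
}

// @ModelActor synthesizes an init(modelContainer:) and a private
// ModelContext bound to this actor, so writes stay off the main thread.
@ModelActor
actor LocationStore {
    func record(latitude: Double, longitude: Double) throws {
        modelContext.insert(
            LocationPoint(latitude: latitude,
                          longitude: longitude,
                          timestamp: .now))
        try modelContext.save()
    }
}

// Usage from async code, e.g. a location-updates loop:
//   let store = LocationStore(modelContainer: container)
//   try await store.record(latitude: lat, longitude: lon)
```

Whether this actually fixes the stalls depends on how the @Query-driven views refetch on each save, which is where I suspect the real cost is.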


So they have no problems with Microsoft, Meta, BlackRock, Berkshire, or Exxon/Chevron, but they do with Google.

