Hacker News | rwaksmunski's comments

Apple might just win the AI race without even running in it. It's all about the distribution.

Because someone managed to run an LLM on an iPhone at unusable speed, Apple won the AI race? Yeah, sure.

whoa, save some disbelief for later, don't show it all at once.

Apple is already one of the winners of the AI race. It's making a healthy profit (i.e. it isn't losing money) on AI through its App Store cut of ChatGPT, Claude, and Grok subscriptions (you would be surprised at how many incels pay to make AI-generated porn videos).

It’s only paying Google $1 billion a year for access to Gemini for Siri


Apple’s entire yearly capex is a fraction of the AI spend of the presumed AI winners.

Fantasy buildouts of hundreds of billions of dollars for gear with a three-year lifetime may be premature.

Put another way, there is no demonstrated first mover advantage in LLM-based AI so far and all of the companies involved are money furnaces.


Which is mostly insane amounts of debt, leveraged entirely on the moonshot bet that they will find a way to turn a profit on it within the next couple of years.

Apple’s bet is intelligent; the “presumed winners” are staking our economic stability on a miracle, like a shaking gambling addict at a horse race who just withdrew his rent money.


Plus all those pricey 512GB Mac Studios they are selling to YouTubers.

Most of the influencer content I saw demonstrating LLMs on multiple 512GB Mac Studios over Thunderbolt networking used Macs borrowed from Apple PR and returned afterwards; NetworkChuck, Jeff Geerling, et al. didn't actually buy the four or five 512GB Mac Studios used in their local-LLM videos.

The financial math on actually buying over $40k worth of Macs for one or two YouTube videos probably doesn't work out, even for the really big players.


They don't offer the 512GB RAM variant anymore. Outside of social media influencers and the occasional AI researcher, the market for $10K desktops is vanishingly small.

Huh, interesting. I wonder if there's a premium price right now for the one on my desk...

Pretty sure the M5 Ultra will be out after WWDC, so my M3 Ultra (while still completely capable of fulfilling my needs) is looking a bit long in the tooth. If I can get a good price for it now, I might be able to offset most of the cost of an M5 after WWDC...


My understanding is that the 512GB offering will likely return with the new M5 Ultra coming around WWDC in June. Fingers crossed anyway!

The best desktop you could get has hovered around $10k all the way back to the PDP-8e (which could fit on most desks!).

jemalloc 5.2.1 vs mimalloc v3.2.8 in Rust software processing hundreds of terabytes: I could not measure a meaningful performance difference, but mimalloc released freed memory to the OS a lot sooner and therefore looked nicer in top. That said, an older mimalloc from the default Rust crate would cause memory corruption with large allocations (>2 GB) in about 5% of cases. Sticking with battle-hardened jemalloc for now.

Mimalloc my beloved. jemalloc is a fiendishly complex allocator with a gazillion algorithms and approaches (and a huge binary), yet mimalloc (a simple allocator with one bitmap-tracked pool per allocation size, and one pool collection per thread) keeps pace with it. One of the bigger wins in software simplicity in recent memory.

AI seems to work a lot better once you acquire some AI equity, you go from not working at all to AI writing all the code. /s


Every Rust SIMD article should mention the .chunks_exact() auto vectorization trick by law.


Didn't know about this. Thanks!

Not related, but I often want to see the next or previous element when I'm iterating. When that happens, I always have to switch to an index-based loop. Is there a function that returns Iter<Item=(T, Option<T>)> where the second element is a lookahead?


You probably just want to use `.peekable()`: https://doc.rust-lang.org/stable/std/iter/trait.Iterator.htm...


Note that `peekable` will most likely break autovectorization


I use slice::windows for that.


Early in 1999 my first build was a Celeron 300A on an Asus P2B-LS, overclocked to 450 MHz. Later upgraded it to 1.4 GHz and 512MB of ECC RAM. Much later it ran FreeBSD as a home server, probably until 2015 when the power supply finally gave out; I wish I had kept it. It got me through the capacitor plague and was competitive with the Pentium 4 for a while. Absurdly stable, and quite snappy with a 10k RPM SCSI system drive. I would love to install Windows 98SE on it again and play some Unreal Tournament.


It's decent at explaining my code back to me, so I can make sure my intent is visible within code/comments/tracing messages. Not too bad at writing test cases either. I still write my code.


Are you saying that you literally write the features by yourself and that you only use LLMs to understand old code and write tests?

Or a more meta point that “LLMs are capable of a lot”?


I'm saying it's not good enough to write code yet, but it is good at explaining code. If it can't explain mine, that means I messed up and need to make the code clearer. Once its explanation and my meaning line up, the code is good enough and I move on. The meta point is that people are using AI for the wrong thing: it's great at consuming, not creating.


Been doing Rust Lambdas for 4 years now. Rust is absurdly fast, especially compared to non-compiled languages. If anything, Rust is even faster than those benchmarks suggest in real-world workloads.


I got over 20 years of sane, reliable and consistent computing from the FreeBSD Project, thank you.


FreeBSD on bare metal hooked up to a nice network.


Tier 3 is the maximum official support level.

