The landscape has changed significantly compared to 2010.
Back then you didn't have today's smartphones and tablets. On desktop you could support Windows only (today you probably have to support macOS as well). In the browser you didn't have to support Safari. You didn't have to worry much about screen DPI, aspect ratios, landscape vs portrait, or small screen sizes. Your layout didn't have to be adaptive. There wasn't much demand for SPAs. And there were far fewer people worldwide with access to the internet.
Opus 4.5 is not living in a vacuum. It's the most expensive of the coding models, and there is Gemini 3 Pro, with many discounts, and DeepSeek 3.2, which is 50x cheaper and not far behind.
> We’ll be able to do things like run fast models on the edge, run model pipelines on instantly-booting Workers, stream model inputs and outputs with WebRTC, etc.
The benefit to 3rd-party developers is reduced latency and a more robust AI pipeline. Instead of going back and forth with an HTTPS request at each stage to do inference, you could do it all in one request, e.g. realtime, pipelined STT, text translation, some backend logic, TTS, and back to the user's mobile device.
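As a toy model of the saving (all numbers are hypothetical, just to show the shape of the argument):

```python
# Toy latency model: a client calling four hosted inference stages one HTTPS
# round trip at a time, vs. one request to an edge pipeline where the stages
# are co-located and inter-stage hops are ~0. All numbers are made up.
client_rtt_ms = 80                 # client <-> service round trip
stages_ms = [120, 40, 10, 150]     # STT, translation, backend logic, TTS

# Naive: one client round trip per stage.
naive_ms = sum(stages_ms) + client_rtt_ms * len(stages_ms)

# Pipelined: one round trip total, compute time unchanged.
pipelined_ms = sum(stages_ms) + client_rtt_ms

print(naive_ms, pipelined_ms)  # 640 400
```

The compute time is identical in both cases; the whole win is eliminating the per-stage round trips.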
Depends on how much the latency matters to you and your customers. Most services realistically won't gain much at all. Even the latency of normal web requests is very rarely relevant. Only the business itself can answer that question though.
> "Even the latency of normal web requests is very rarely relevant."
Hard disagree. Performance is typically the most important feature for any website. User abandonment / bounce rate follows a predictable, steep, nonlinear curve based on latency.
I've changed the latency of actual services as well as core web vitals many times and... no. Turns out the curve is not that steep. For the range 200ms-1s, it's pretty much flat. Sure, you start seeing issues for multi-second requests, but that's terrible processing time. A change like eliminating intercontinental transfer latency is barely visible in e-commerce results.
There's this old meme about Amazon seeing a difference for every 100ms of latency, and I've never seen it actually reproduced in a controlled way. Even when CF advertises lower latency https://www.cloudflare.com/en-au/learning/performance/more/w... their data is from companies reducing it by whole seconds. "Walmart found that for every 1 second improvement in page load time, conversions increased by 2%" - that's not steep. When there's a claim about improvements per 100ms, it's still based on averaging multi-second data, like in https://auditzy.com/blog/impact-of-fast-load-times-on-user-e...
In short - if you have something extremely interactive, I'm sure it matters for the experience. For a typical website loading in under 1s, edge will barely matter. If you have data proving otherwise, I'd genuinely love to see it. For websites loading in over 1s, it's likely much easier to improve the core experience than to split things out to the edge.
Ok, I think we're actually in agreement -- given your all-important qualification "for the range 200ms-1s". Yes, ofc, that first part of the curve above the drop is quite flat; there's hardly time for the user to get impatient and bounce.
My point about the shape of the curve stands. 100ms can matter more on the steepest part of the slope than 2s does further to the right.
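To make the shape argument concrete, here's a toy model assuming a logistic-shaped abandonment curve (the midpoint and steepness parameters are invented purely for illustration):

```python
import math

def bounce_rate(latency_ms, midpoint_ms=3000, steepness=1 / 500):
    """Hypothetical logistic abandonment curve (illustrative only)."""
    return 1 / (1 + math.exp(-steepness * (latency_ms - midpoint_ms)))

# 100 ms of improvement on the steep part of the curve...
steep = bounce_rate(3100) - bounce_rate(3000)

# ...vs. a full 2 s of improvement far out on the flat tail.
flat = bounce_rate(10000) - bounce_rate(8000)

print(steep > flat)  # True under these assumed parameters
```

Under these made-up parameters, 100ms near the midpoint moves bounce rate by a few percentage points, while 2s out on the tail moves it by almost nothing - which is the "where on the curve you are" point.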
The author's headline starts with "LLMs are a failure"; it's hard to take the author seriously with such hyperbole, even if the second part of the headline ("A new AI winter is coming") might be right.
On top of that, Google is also a cloud infrastructure provider, unlike OpenAI, which needs someone like Azure to plug in those GPUs and host the servers.
Not sure why they attach an rpi to every mac mini; wouldn't it be cheaper to have one rpi and 9 mac minis connected to a 10-port switch? I've also wished to one day make a cluster out of Apple TVs - they are very cheap (~150 USD for the version with ethernet) and the upcoming version will most likely have more powerful A-series Apple silicon. I guess tvOS is just very restricted.
Each one is connected over a single USB-C cable. For many technical reasons you can't use a simple KVM that switches inputs. You'll need to continuously power all 9 minis somehow.
All nine USB-C cables will need a continuous, active connection.
To do this, you will need a smart controller that switches which port it’s talking to.
Or you can stick a relatively cheap device on every mini and connect it to the network.
Having a “controller” for every mini means you can swap single units in both hardware and software very easily. There’s a one-to-one relationship and you don’t have to deal with pairing.
Just spitballing here, but if your interface to these things is USB-C, you should be able to boot them off an image that has a standard SSH key and then you can get in and ID them from a serial number or MAC address. I don't see the identification part as being a huge part of what's gained with the 1:1 configuration.
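A sketch of that identification step, assuming every mini boots the same image, is reachable with the standard SSH key, and reports its MAC to a small registry (all names here are hypothetical):

```python
# Hypothetical inventory step: minis boot a shared image, management SSHes in
# with the standard key, and each one gets a stable unit id keyed on the MAC
# address it reports - no per-unit pairing or pre-configuration needed.
inventory: dict[str, str] = {}  # normalized MAC -> assigned unit id

def register(mac: str) -> str:
    """Assign a stable id the first time a MAC is seen; idempotent after."""
    mac = mac.lower()
    if mac not in inventory:
        inventory[mac] = f"mini-{len(inventory):03d}"
    return inventory[mac]

print(register("A4:83:E7:12:34:56"))  # mini-000
print(register("a4:83:e7:12:34:56"))  # same unit, same id
```

The point being that identity can live in a central table keyed on something the hardware already has, rather than in a 1:1 physical pairing.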
Is video forwarding part of the product offering, or simply considered required for management functions?
Either way, I agree that a raspi per unit probably makes sense at a scale of a few dozen racks, but it would be interesting to do the math on when it would be price-efficient to have a 1:n management node scheme. I don't imagine there are many USB-C hubs that support being display sinks on the downstream ports (if that's even possible at all), but perhaps you could use an FPGA to synthesize a small ARM core with a bunch of native USB-C interfaces capable of doing it?
> Not sure why they attach an rpi to every mac mini; wouldn't it be cheaper to have one rpi and 9 mac minis connected to a 10-port switch?
Simple... they're (likely) running something on the Raspberry Pis that sets them up as USB gadgets, i.e. the Mac Mini "sees" a virtual keyboard and mouse. That's enough to manage remote provisioning.
To replicate that they'd need a KVM switch that doesn't have some weird edge case in how exactly it does USB-C switching, and it needs to be remotely controllable. A Pi is cheaper, plus the failure modes of a Pi are better understood than the failure modes of some weird ass KVM switch someone cobbled together in China.
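For reference, a minimal sketch of the Linux configfs gadget setup a Pi could use to appear as a HID keyboard. This requires root, a USB device controller (e.g. the dwc2 overlay on a Pi) and the libcomposite module; the gadget name and IDs below are illustrative, not what this product actually uses:

```python
import os

# Sketch only: the configfs writes that define a minimal USB HID keyboard
# gadget. Paths follow the standard /sys/kernel/config/usb_gadget layout;
# the gadget name and vendor/product IDs are placeholder examples.
GADGET = "/sys/kernel/config/usb_gadget/mini-kvm"

def gadget_writes(gadget_dir: str = GADGET):
    """Return the (path, value) attribute writes for the gadget."""
    return [
        (f"{gadget_dir}/idVendor", "0x1d6b"),                  # example vendor
        (f"{gadget_dir}/idProduct", "0x0104"),                 # example product
        (f"{gadget_dir}/functions/hid.usb0/protocol", "1"),    # keyboard
        (f"{gadget_dir}/functions/hid.usb0/subclass", "1"),    # boot interface
        (f"{gadget_dir}/functions/hid.usb0/report_length", "8"),
    ]

def apply(writes):
    """Create the configfs directories and write each attribute."""
    for path, value in writes:
        os.makedirs(os.path.dirname(path), exist_ok=True)
        with open(path, "w") as f:
            f.write(value)
```

A real setup also needs a HID report descriptor, a configuration directory linking in the function, and binding the gadget to the UDC; this only sketches the shape of the interface.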
A simpler design... to say nothing of the constraints of USB and/or the other interfaces in use. Not to mention the 1:1 mapping of management port to system, where other solutions may be problematic.
macOS doesn't have gatekeeper status under the Digital Markets Act (DMA), so Apple doesn't need to provide it there. This shows that they only provide the SDK because of regulatory pressure, and maintain their vendor lock-in where possible.
Not necessarily. Since its 2015 launch, NAN has been vaporware outside Android; nobody else supports it. Windows does not support it today either [1].
On Linux, iw and the new cfg80211 NAN module support some hardware. Few chips in the desktop/laptop ecosystem have the feature, and it is hard to know which ones do today; it is more common not to have support than to have it.
AFAIK no major distro includes UI-based support that regular users can use. Most Chromebooks do not have the hardware to support it, and ChromeOS[2] did not have support out of the box, so even Google does not implement it for all their devices in the first place.
For Apple, implementing it is easier than for Microsoft or Google given their vertical control, but it's not simple even if they wanted to. They may still need a hardware update/change, and they typically roll out a few hardware generations before announcing support, so that most people have access to it. Given hardware refresh cycles this matters for the basic user experience, which is why people buy Apple. What is the point if you cannot share with most users because they don't have the latest hardware? The average user will try a couple of times and never use it again because it doesn't "work".
Sometimes competing standards / lack of compliance are a political play for control of the standards, not about vendor lock-in directly. Developers are the usual casualties in these wars, rather than end users directly. Web devs have been learning that since JScript in the mid '90s.
All this to say: as evidence goes, this is weak support for selective compliance due to regulatory pressure.
Look, you might be right. But you might be wrong. We don't know for sure.
One of my first jobs was in infosec, and there was a sign above one of the senior consultant's door quoting Hanlon's Razor: "Never attribute to malice that which is adequately explained by stupidity". That quote is right.
There's so much going on at any medium-to-large organisation, from engineering to politics and personalities, all of it multiplied across hundreds of thousands of people in thousands of teams. It's possible you're right: Apple might have provided an iOS-only SDK for Wi-Fi Aware because of regulatory pressure. It's also possible they want to provide it on all platforms, but just started with an iOS-only version because of who works on it, or which business unit they're part of, or politics, or because they think it's more useful on iOS than on macOS. We just don't know.
Whenever I've worked in large organisations, I'm always amazed how much nonsense goes on internally that is impossible to predict from the outside. Like, someone emails us about something important. It makes the rounds internally, but the person never gets emailed back. Why? Maybe because nobody inside the company thought it was their job to get back to them. Or Steve should really have replied, but he was away on paternity leave or something and forgot about it when he got back to work. Or Sally is just bad at writing emails. Or there's some policy that PR needs to read all emails to the public, and nobody could be bothered. And so on. From the outside you just can't know.
I don't know if you're right or wrong. Apple isn't all good or all bad. And the probability isn't 100% and it's not 0%. Take off the tin foil hat and have some uncertainty.
Your reply makes sense in a vacuum, but in reality we have the context of having seen Apple comply with regulation maliciously before, so we do know for sure that there's no macOS in the SDK because they weren't forced to by regulation.
> we do know for sure that there's no macOS in the sdk because they weren't forced to by regulation.
Unless you have insider knowledge, we don't know anything for sure here. Apple isn't a person. Apple doesn't have a single, consistent opinion when it comes to openness and EU regulation. (And even a person can change their mind.) All we know is that some teams at Apple responded in the past to some EU regulation with malicious compliance. That doesn't tell us for sure what Apple will do here.
Apple is 165 000 people. That's a lot of people. A lot more people than comment regularly on HN, and look at us! We don't agree about anything. I'm sure plenty of Apple's employees hate EU regulation. And plenty more would love to open-source everything Apple does.
That sort of inconsistency is exactly what we see across Apple's product line. The Swift programming language is open source. But SwiftUI is closed source. WebKit and FoundationDB are open source. But almost everything on iOS is closed source. Apple sometimes promotes open standards - like pushing FireWire, USB and more recently USB-C - which they helped to design. But they also push plenty of proprietary standards that they keep under lock and key. Like the old 30-pin iPod connector, which companies had to pay Apple to be allowed to use in 3rd-party products. Or AirDrop. Or iMessage. APFS (Apple File System) is closed source. But it's also incredibly well documented. My guess is the engineers responsible want to support 3rd-party implementations of APFS but for some reason they're prohibited from open-sourcing their own implementation.
We don't know anything for sure here. For my money, there are even odds that in a year or two this API quietly becomes available on macOS, watchOS and tvOS as well. If you "know for sure" that won't happen, let's make a bet at 100-1 odds. If you're sure, it's free money for you.
> Apple is 165 000 people. That's a lot of people. A lot more people than comment regularly on HN
How do you know the HN numbers? I’m not doubting you, I’m curious about the data.
> and look at us! We don't agree about anything.
At the same time, anyone can join HN. There’s no “culture fit” or anything like that. It is possible to have a larger difference of ideas in a smaller pool of people.
Again, what’s the source of the data? Anyone can throw around vague numbers. “A few million” and “a small fraction” provide no useful information for the context.
- instead of grams, I'd also like it to tell me roughly how many americano coffee cups that is, just to get a sense of scale.
- I wanna scale down - if the serving is for 6 people, I want to scale it down to two servings.