I got the same email regarding my hobby website "The Wheel of Lunch" (https://wheelof.com/lunch). The site makes no money and sends traffic to Yelp. The cheapest plan offered was $230 a month! I can't afford that.
I have stopped calling the Yelp API for local listings and put up a notice on the site. It was fun while it lasted!
I often ponder what the generator algorithm might be when I am solving "insane" 13x17 Kakuro puzzles [1], because there always seem to be just enough logical paths to solve each puzzle. Well done!
Generally this is very similar to the Vista-era Microsoft requirements for PC manufacturers, where it was a logo requirement to include a GPU that could be used to accelerate composition in the desktop window manager, such as transparency effects and blur. (Prior to that, low-end PCs had "graphics cards" with a fixed-function pipeline that weren't programmable / general-purpose.)
Now Microsoft is forcing PC manufacturers to do the same kind of thing, but instead of a GPU it's now an "NPU" that they have to include. This can be a CPU instruction set, a co-processor, or a GPU baseline capability. The requirement is 40 TOPS (tera-operations per second).
IMHO, 40 TOPS is way too low, and doesn't focus enough on memory bandwidth. Also, that 40 is total across CPU+GPU+NPU, which means in practice it'll require fiddly optimisation to get anywhere near that level of performance.
Windows Vista had the same issue, where many low-end laptops especially would struggle and spin up their GPU fans just from desktop workloads, let alone gaming...
"40 TOPS is way too low", I'm curious about this, from what I've read, Apple's M3 maxes out at 18 TOPS, and the newer M4 at 38 TOPS, so to me it sounds like even the entry-level Copilot+ PC is going to beat Apple's M3/M4 family. Am I misunderstanding.
It's possible that Microsoft's target is too low and Apple's performance is also too low. However, when you find yourself saying that the entire industry is wrong, you might want to stop and think.
An NVIDIA 4070 puts out 836 "AI TOPS" using 200 watts of power. So the 40 TOPS target is about 10 watts of power draw equivalent, assuming it uses a similar silicon logic tech to what NVIDIA used. With a more modern process, this is about 5 watts.
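A quick back-of-the-envelope sketch of that scaling, using the figures quoted above (836 TOPS at 200 W, plus an assumed roughly 2x perf/W gain from a newer process; illustrative numbers, not official specs):

    # Rough power-draw estimate for hitting 40 TOPS at discrete-GPU efficiency,
    # using the figures quoted above (not official specs).

    GPU_TOPS = 836        # quoted "AI TOPS" figure for an NVIDIA 4070
    GPU_WATTS = 200       # quoted power draw
    TARGET_TOPS = 40      # Copilot+ requirement

    tops_per_watt = GPU_TOPS / GPU_WATTS            # ~4.2 TOPS/W
    watts_for_target = TARGET_TOPS / tops_per_watt  # ~9.6 W

    # Assume ~2x better perf/W on a more modern process node (illustrative).
    watts_newer_process = watts_for_target / 2      # ~4.8 W

    print(f"{tops_per_watt:.1f} TOPS/W -> ~{watts_for_target:.0f} W for 40 TOPS, "
          f"or ~{watts_newer_process:.0f} W on a newer node")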
For an ultra-mobile tablet or laptop, this is... reasonable, I suppose.
For a desktop, it's quite a bit behind the current-gen tech, let alone "the future".
Like I was saying, this is aiming for where the puck was, not where the puck will be.
It's the same thing as Vista GPU requirements. Vendors will do the bare minimum that ticks the checkbox, but in practice it'll be useless garbage.
Modern AIs require > 1 TB/s memory bandwidth, > 128 GB memory capacity, and > 1,000 TOPS of compute to be really usable locally, not just technically capable of "running".
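To make the bandwidth point concrete, here's a rough sketch (my own illustrative numbers, not benchmarks) of why memory bandwidth dominates local LLM inference: at batch size 1, generating each token streams essentially the full weight set from memory, so tokens per second is roughly bandwidth divided by model size.

    # Rough decode-speed estimate for a local LLM, assuming generation is
    # memory-bandwidth-bound at batch size 1 (hypothetical numbers).

    MODEL_PARAMS = 70e9          # e.g. a 70B-parameter model (assumption)
    BYTES_PER_PARAM = 2          # fp16/bf16 weights
    BANDWIDTH_GBS = {            # hypothetical platform figures
        "typical laptop LPDDR5": 120,
        "high-end GPU (~1 TB/s)": 1000,
    }

    model_bytes = MODEL_PARAMS * BYTES_PER_PARAM
    for name, gbs in BANDWIDTH_GBS.items():
        tokens_per_sec = (gbs * 1e9) / model_bytes
        print(f"{name}: ~{tokens_per_sec:.1f} tokens/sec")

Under those assumptions the laptop-class figure comes out below one token per second, which is the sense in which such hardware is "technically capable of running" a large model without being really usable.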
The problem is that "AI" isn't a feature. It's not a product. It's a pretty broad category of technologies that can be used to build features and products.
"AI infused at every layer" is like saying "we build our software with the best Agile(tm) practices". The customer doesn't care how you built the software, the customer cares what your software will do for them, and all "AI" means is "we probably have a deep learning inference engine somewhere under the hood, but no promises".
No, it's more like saying "media" isn't a feature or a product. It's not. It's a very vague word that could mean any number of loosely related things that do form part of many different features and products.
I was already there when Windows 11 came along and blocked my perfectly functional 4 or 5 yr old HW from running it (even though the pre-releases ran just fine).
This "AI" bullshit is just another nail in the already nailed shut coffin.
Most likely more effective data harvesting. I don't think individual user-level AI is the end goal; it's just a nice byproduct. This just means they can embed more data collection and use your device to crunch the numbers for them.
This is the big thing I’m worried about with the AI revolution. If AI is being baked into your OS, where does usable training data end and your sensitive data start? Recall is pinkie promising that it won’t record sensitive data but at the same time it’s also saying “if your application doesn’t protect it well we mayyy see it and record it, whoops”. I can bet you 120% Recall is just a way for Microsoft to collect training data on millions of users every day. It is a privacy nightmare. But I don’t think the average consumer will care. Privacy died a long time ago.
Someone someday may actually ship code like this into production. Horrifying to think about. For some reason this reminds me of trying to grow plants with Brawndo https://www.youtube.com/watch?v=kAqIJZeeXEc
I agree with you. I love using Copilot (and similar tools) and I've found it is exceedingly good at predicting patterns in my own coding. It has saved me a lot of time, and I would hate for it to go away or be crippled because of lawsuits like this. I couldn't care less if my own code is used for training. My code isn't precious; it's what I do with it that is important.
Gonna mention Quiver (The Programmer's Notebook) as it hasn't been mentioned yet. I've been on Evernote for about 10 years and have a few thousand notes in it. This news is discomfiting and I look forward to checking out some of the contemporary alternatives. Lately, I've been putting more technical/nerdy stuff into Quiver, which understands code and does syntax highlighting and all that good stuff. I don't like it as much for the free-form stuff I do in Evernote, but for code snippets it's great.
Seeing a lot of posts of the "Observable vs Jupyter" variety. I really don't think this has to be a this versus that kind of discussion. There is plenty of room for both.