krazydad's comments | Hacker News

I got the same email regarding my hobby website "The Wheel of Lunch" (https://wheelof.com/lunch). The site makes no money and sends traffic to Yelp. The cheapest plan offered was $230 a month! I can't afford that.

I have stopped calling the Yelp API for local listings and put up a notice on the site. It was fun while it lasted!


Sorry to hear that. I remember your site from 15 years ago, when I was an intern trying to figure out where to eat.


This is very much the approach I use as well (I've been publishing Sudoku for a long time at krazydad).


I often ponder what the generator algorithm might be when I am solving "insane" 13x17 Kakuro puzzles [1], because there always seems to be just enough logical paths to solve each puzzle. Well done!

[1] - https://krazydad.com/kakuro/index.php?sv=13x17I_v1


“AI infused at every layer”

Okay… What does that even mean? And why does the company think we want that?

I love me some AI, but right now it’s getting put on everything like Nutella at a hipster eatery.


In practice it just means that the processor includes a matrix multiply unit somewhere.

For example, some of the latest Intel Xeon CPUs have 8x 1KB registers that can perform only one operation: matrix multiply (and accumulate results). See: https://en.wikipedia.org/wiki/Advanced_Matrix_Extensions

Generally this is very similar to the Vista-era Microsoft requirements for PC manufacturers, where it was a logo requirement to include a GPU that could be used to accelerate composition in the desktop window manager, such as transparency effects and blur. (Prior to that, low-end PCs had "graphics cards" that were fixed-pipeline and not programmable/general-purpose.)

Now Microsoft is forcing PC manufacturers to do the same kind of thing, but instead of a GPU it's now an "NPU" that they have to include. This can be a CPU instruction set, a co-processor, or a GPU baseline capability. The requirement is 40 teraoperations per second.

IMHO, 40 TOPS is way too low, and doesn't focus enough on memory bandwidth. Also, that 40 is total across CPU+GPU+NPU, which means in practice it'll require fiddly optimisation to get anywhere near that level of performance.

Windows Vista had the same issue, where many low-end laptops especially would struggle and spin up their GPU fans just from desktop workloads, let alone gaming...


> 40 TOPS is way too low

I'm curious about this. From what I've read, Apple's M3 maxes out at 18 TOPS and the newer M4 at 38 TOPS, so it sounds like even the entry-level Copilot+ PC is going to beat Apple's M3/M4 family. Am I misunderstanding?


It's possible that Microsoft's target is too low and Apple's performance is also too low. However, when you find yourself saying that the entire industry is wrong, you might want to stop and think.


An NVIDIA 4070 puts out 836 "AI TOPS" using 200 watts of power. So the 40 TOPS target is about 10 watts of power draw equivalent, assuming it uses a similar silicon logic tech to what NVIDIA used. With a more modern process, this is about 5 watts.

For an ultra-mobile tablet or laptop, this is... reasonable, I suppose.

For a desktop, it's quite a bit behind the current-gen tech, let alone "the future".

Like I was saying, this is aiming for where the puck was, not where the puck will be.

It's the same thing as Vista GPU requirements. Vendors will do the bare minimum that ticks the checkbox, but in practice it'll be useless garbage.

Modern AIs require > 1 TB/s memory bandwidth, > 128 GB memory capacity, and > 1,000 TOPS of compute to be really usable locally, not just technically capable of "running".


The problem is that "AI" isn't a feature. It's not a product. It's a pretty broad category of technologies that can be used to build features and products.

"AI infused at every layer" is like saying "we build our software with the best Agile(tm) practices". The customer doesn't care how you built the software, the customer cares what your software will do for them, and all "AI" means is "we probably have a deep learning inference engine somewhere under the hood, but no promises".


That's like saying audio playback isn't a feature or a product.


No, it's more like saying "media" isn't a feature or a product. It's not. It's a very vague word that could mean any number of loosely related things that do form part of many different features and products.


And that was totally a thing in the early 2000's

Companies were selling "Media PC's", and Windows ME stood for Millennium Edition or Media Edition depending on who you asked.

And people think the word "content" is overused now...


or, like saying "media ready"


I think it means many of us will not use Windows ever again.


I was already there when Windows 11 came along and blocked my perfectly functional 4 or 5 yr old HW from running it (even though the pre-releases ran just fine).

This "AI" bullshit is just another nail in the already nailed shut coffin.


Realistically, this is an announcement that Windows 12 will have a double click gguf runner and requisite backend as a feature.


ONNX, which seems to be AWQ


Most likely more effective data harvesting. Individual user level AI isn't the end goal I don't think. It's just a nice byproduct. This just means they can embed more data collection, and use your device to crunch the numbers for them.


This is the big thing I’m worried about with the AI revolution. If AI is being baked into your OS, where does usable training data end and your sensitive data start? Recall is pinkie promising that it won’t record sensitive data but at the same time it’s also saying “if your application doesn’t protect it well we mayyy see it and record it, whoops”. I can bet you 120% Recall is just a way for Microsoft to collect training data on millions of users every day. It is a privacy nightmare. But I don’t think the average consumer will care. Privacy died a long time ago.


We don’t want that. Investors want that.


it's a "fundamental transformation of the OS itself", you see


Probably true. The new win64 API will have just one entry point:

  LPSTR do_thing(LPSTR prompt);
E.g.:

  if (do_thing("get a mutex or whatever it's called") == do_thing("What's the happy return code for getting a mutex?")) { ... }


Someone someday may actually ship code like this into production. Horrifying to think about. For some reason this reminds me of trying to grow plants with Brawndo https://www.youtube.com/watch?v=kAqIJZeeXEc


The most troublesome part of this demo for me was the one tablespoon of mayo it wanted you to add to your vegetable omelet. Yikes.


I agree with you. I love using Copilot (and similar tools) and I've found it is exceedingly good at predicting patterns in my own coding. It has saved me a lot of time, and I would hate for it to go away or be crippled because of lawsuits like this. I couldn't care less if my own code is used for training. My code isn't precious; it's what I do with it that is important.


You're arguing that it should be allowed because you don't feel it harms you personally.

Literally no one objects to people like you donating your code to Microsoft.


del.icio.us was pretty great in its day


Yes! One of the other comments got to the crux of this: nobody ever figured out a sustainable business model for this. I think it's a shame.


I don't think any of the things I've liked that found a sustainable business model remained the way I liked them, either.


And if you used the -v option to set the voice to "whisper", it was super creepy:

  say -v whisper "Get out of the house"


Were you home schooled? ;-)

Saying that in a school lab wouldn't make much sense


  say -v cellos "droid"

Pretty sure this is how they generated the sound that was used for those commercials once upon a time.


Gonna mention Quiver (The Programmer's Notebook) as it hasn't been mentioned yet. I've been on Evernote for about 10 years and have a few thousand notes in it. This news is discomfiting and I look forward to checking out some of the contemporary alternatives. Lately, I've been putting more technical/nerdy stuff into Quiver, which understands code and does syntax highlighting and all that good stuff. I don't like it as much for the free-form stuff I do in Evernote, but for code snippets it's great.


For the lazy, here is where you find Quiver: http://happenapps.com/


Paid for it; A+ app. I use it all the time. No cloud subscription or anything like that.


Their work actually sounds pretty exciting to me.


> Their work actually sounds pretty exciting to me.

Why? They are nothing more than an off-the-shelf boring machine that was never used, plus a marketing department dedicated to fabricating hype.


The reason I'm excited is because at one point SpaceX was all hype too. :)


That's the joke.


Seeing a lot of posts of the "Observable vs Jupyter" variety. I really don't think this has to be a this versus that kind of discussion. There is plenty of room for both.


How about both rolled into one somehow!?!

