The 2.5kW figure is for a server running 10 HC1 chips:

> The first generation HC1 chip is implemented in the 6 nanometer N6 process from TSMC. ... Each HC1 chip has 53 billion transistors on the package, most of it very likely for ROM and SRAM memory. The HC1 card burns about 200 watts, says Bajic, and a two-socket X86 server with ten HC1 cards in it runs 2,500 watts.

https://www.nextplatform.com/2026/02/19/taalas-etches-ai-mod...


I’m confused then. They need 10 of these to run an 8B model?

Check out Sparrow-0. The demo shows an impressive ability to predict when the speaker has finished talking:

https://www.tavus.io/post/sparrow-0-advancing-conversational...


Thanks, I'll read it now.

There's some evidence for that if you try these two different prompts with GPT-5.2 Thinking:

I want to wash my car. The car wash is 50m away. Should I walk or drive to the car wash?

Answer: walk

Try this brainteaser: I want to wash my car. The car wash is 50m away. Should I walk or drive to the car wash?

Answer: drive


That's not evidence that the model is assuming anything, and this is not a brainteaser. A brainteaser would be exactly the opposite, a question about walking or driving somewhere where the answer is that the car is already there, or maybe different car identities (e.g. "my car was already at the car wash, I was asking about driving another car to go there and wash it!").

If the LLM were really basing its answer on a model of the world where the car is already at the car wash, and you asked it about walking or driving there, it would have to answer that there is no option, you have to walk there since you don't have a car at your origin point.


It might be assuming that more than one car exists in the world.

Thanks - I think this is the article that really helped me understand git when I first started using it back in the day. I tried to find it again and couldn't.


I think this is a great idea. I wonder if it's technically possible to only unlock the apps when you have a certain other app (i.e. the Quran app) open for a given amount of time. Right now it just starts a 5-minute timer, which means I can go and do something else but not actually spend time on the thing I want to practice.


So what was the programming error in the TPM?


Something breaking after 49.7 days is a classic. Someone counted milliseconds since start with a 32-bit unsigned int, and some code assumed it couldn't wrap.
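
For anyone curious about the arithmetic, here's a minimal TypeScript sketch of that class of bug (hypothetical code, not the actual TPM firmware): a millisecond counter reduced modulo 2^32 wraps at about 49.7 days, and a naive elapsed-time check breaks when it does.

```typescript
// Hypothetical sketch, not the TPM's real code: a 32-bit millisecond uptime
// counter and an elapsed-time check that assumes it never wraps.

const MS_PER_DAY = 1000 * 60 * 60 * 24;
console.log((2 ** 32 / MS_PER_DAY).toFixed(2)); // "49.71" days until a uint32 ms counter wraps

// ">>> 0" reduces a number modulo 2^32, i.e. what a uint32 register does.
const toUint32 = (ms: number) => ms >>> 0;

// Buggy check: assumes `nowMs` is always >= `startMs`.
function elapsedNaive(startMs: number, nowMs: number): number {
  return nowMs - startMs; // goes hugely negative once the counter wraps
}

// Wrap-safe check: do the subtraction in uint32 arithmetic too.
function elapsedWrapSafe(startMs: number, nowMs: number): number {
  return (nowMs - startMs) >>> 0;
}

const start = toUint32(2 ** 32 - 1000); // 1 second before the wrap
const now = toUint32(2 ** 32 + 4000);   // 4 seconds after the wrap -> 4000

console.log(elapsedNaive(start, now));    // -4294962296: "time went backwards"
console.log(elapsedWrapSafe(start, now)); // 5000: the real 5 seconds
```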


49 days is a bit under 2^32 milliseconds... So unsigned int overflow?



Not sure if it's just Firefox, but a lot of things seem to be rendering incorrectly and very slowly for me. The text for the descriptions is very small compared to the rest of the text, which makes it kind of hard to read. Also, on the Spectrum demo, the prism is displayed up and to the left of the light rays. After a few minutes the pages just grind to a halt, so I can't really explore the rest.


It's not just Firefox; a lot of things are broken. For example, clicking on either ball in "The Falls" moves it up and lets you drag it, but they snap into the same place. The text also reminds me of how ChatGPT writes. Was this made with an LLM?


This completely killed my OS and nearly took the PC with it. It started running OK, but as it filled the screen the FPS dropped, then my browser stopped responding, then the mouse started moving VERY slowly, and then the screen went black and my Bluetooth got disconnected. At that point, even long-pressing the power button did nothing and I had to switch off the PC at the wall...

I am going to put the blame on Firefox and Linux Mint but it's honestly impressive how a simple animated simulation can do this.


Oh no! I can run it on my phone, which is a few years old, so I figured it wouldn't lock anybody's computers up.

It's pretty CPU-heavy because it's constantly creating and updating SVG elements. I may attempt a rewrite sometime with WebGL and shaders :)


Check out Canvas 2D if you haven’t.
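
For comparison, here's a minimal sketch of the Canvas 2D approach (hypothetical names, not the site's actual code; it assumes a `<canvas id="sim">` element and a made-up `Particle` type): everything gets redrawn into one bitmap each frame instead of creating and mutating thousands of SVG nodes.

```typescript
// Minimal Canvas 2D render loop sketch (assumed names, not the site's code).
// One canvas is redrawn per frame rather than updating many SVG elements.
const canvas = document.querySelector<HTMLCanvasElement>("#sim")!;
const ctx = canvas.getContext("2d")!;

type Particle = { x: number; y: number; vx: number; vy: number };
const particles: Particle[] = Array.from({ length: 2000 }, () => ({
  x: Math.random() * canvas.width,
  y: Math.random() * canvas.height,
  vx: (Math.random() - 0.5) * 2,
  vy: (Math.random() - 0.5) * 2,
}));

function frame() {
  ctx.clearRect(0, 0, canvas.width, canvas.height); // wipe the single bitmap
  for (const p of particles) {
    p.x += p.vx;
    p.y += p.vy;
    ctx.fillRect(p.x, p.y, 2, 2); // cheap draw call, no DOM nodes created
  }
  requestAnimationFrame(frame); // let the browser schedule the next redraw
}
requestAnimationFrame(frame);
```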


I have never seen a long press fail to cut power, assuming it works at all. I've never heard of it being tied to CPU state.


You think Meta secretly wanted to remove 4.7m Australian users while saying:

> "We call on the Australian government to engage with industry constructively to find a better way forward, such as incentivising all of industry to raise the standard in providing safe, privacy-preserving, age-appropriate experiences online, instead of blanket bans,"

because ultimately they think it will attract more users to their platforms?

https://www.abc.net.au/news/2026-01-15/social-media-ban-data...

https://www.9news.com.au/national/australia-social-media-ban...


Why can't we just call it "play"? That is what we used to call doing things without a purpose.

I wish people would disclose when they used an LLM to write for them. This comes across as so clearly written by ChatGPT (I don't know if it is) that it seriously devalues any potential insights contained within. At least if the author was honest, I'd be able to judge their writing accordingly.


There was a very specific purpose here - to build a web-based accelerometer game. If I were to compare this with playing, I would say this is more akin to playing with a special kind of clay that shape-shifts itself based on your instructions.

As for the LLM-generated writing - I've updated the blog post with a 'meta' section explaining how LLMs generated the post itself. I've shared the link to the specific section as a response to other comments with the same criticism - I don't want to link to the blog again here and risk looking like a spam bot.


I'm just vibe playing nowadays. Normal playing doesn't cut it anymore.

