fho's comments | Hacker News

Can't confirm. We had university students (18-20-ish) who had not used a mouse prior to our courses. That was at least 3-4 years ago now, and not a single case since.

I think their "4-bit multiplier with a single transistor" bit is hinting at them using transistors in the sub-threshold regime.


So something that you can do with PDKs is add your own custom standard cells and tell the EDA tools to use them. This is actually pretty smart: this way you can use most of the foundry cells (which have been extensively validated) and focus your effort on things like this "magic multiplier", which you will have to validate manually. It also makes porting across tech nodes easier, since you only manage a handful of custom cells instead of a completely custom design.
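To make that concrete, here is a rough sketch of what "tell the EDA tools to use them" can look like in an open-source flow. The file names are made up, and this is only one way to do it (a cell like that multiplier would typically be instantiated explicitly in the RTL and carried through as a blackbox, while everything else maps to the foundry library):

    import subprocess

    # Hypothetical .lib files: the foundry's validated standard cells plus
    # one hand-characterized custom cell.
    yosys_script = """
    read_liberty -lib foundry_stdcells.lib
    read_liberty -lib magic_multiplier.lib   # custom cell, kept as a blackbox
    read_verilog top.v                       # RTL instantiates the custom cell directly
    synth -top top
    dfflibmap -liberty foundry_stdcells.lib
    abc -liberty foundry_stdcells.lib        # everything else maps to foundry cells
    write_verilog -noattr top_mapped.v
    """

    with open("synth.ys", "w") as f:
        f.write(yosys_script)
    subprocess.run(["yosys", "-s", "synth.ys"], check=True)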

(I have my guesses as to what that is, but I admittedly don't know enough about that particular part of the field to give anything but a guess).


My "only" experience here is designing neuromorphic ASICs. We used sub-threshold operation exclusively, for linearity and energy reduction. No standard cells for us.


Just wanted to say thanks one more time!

We have been running Ardour 9 for a while now during band rehearsals. Currently 12 channels that we record and monitor in realtime with some effects on top.


Then let me quickly say: thank you! I used that algorithm three times in different projects during my academic "career" :-)


You might be interested in RWKV: https://www.rwkv.com/

Not exactly "minimal viable", but a "what if RNNs were good for LLMs" case study.

-> insanely fast on CPUs
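For anyone wondering why that is: in an RNN-style model the whole "context" lives in a fixed-size state that is updated token by token, so there is no KV cache growing with the sequence. A toy sketch in plain numpy (this is not RWKV's actual time-mixing, just the shape of the argument):

    import numpy as np

    def step(state, x, W_h, W_x):
        # one recurrent update: constant work and memory per token
        return np.tanh(W_h @ state + W_x @ x)

    d = 64
    rng = np.random.default_rng(0)
    W_h = rng.normal(size=(d, d)) / np.sqrt(d)
    W_x = rng.normal(size=(d, d)) / np.sqrt(d)

    state = np.zeros(d)
    for x in rng.normal(size=(10_000, d)):  # 10k "tokens"
        state = step(state, x, W_h, W_x)
    # state is still just 64 floats; a transformer's KV cache would have
    # grown linearly with the sequence length instead.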


My personal idea revolves around: "Can I run it on a basic smartphone, with whatever the memory 'floor' is for basic smartphones under, let's say, $300?" (Let's pretend RAM prices are normal.)

Edit: The fact that this runs on a smartphone means it is highly relevant. My only question is: how do we give such a model an "unlimited" context window, so it can digest as much as it needs? I know some models know multiple languages; I wouldn't be surprised if sticking to English only would reduce the model size and hardware requirements, making it even smaller and tighter.


Started a comment to write basically what you said. I commuted like that for five years. Towards the end, I didn't even bother trying to do anything productive anymore.

Losing 2-3h per day to commuting is not something I am going to miss anytime soon.


Just nitpicking, but there is at least one ball next to his contraption in his video :-)

Doesn't make the whole thing less remarkable.


Doesn't GitHub have emoji reactions? I would assume those tie "PR" and "needs emojis" closely together.


> On the ground, we know that creating CLAUDE.md or cursorrules basically does nothing.

I don't agree with this. LLMs will go out of their way to follow any instruction they find in their context.

(E.g. I have "I love napkin math" in my Kagi agent context, and every LLM will try to shoehorn some kind of napkin math into every answer.)

When Cursor and co. do not follow these instructions, it is because the instructions either:

(a) never make it into the context in the first place, or (b) fall out of the context window.
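A crude sketch of what I mean (hypothetical names; any real frontend is more sophisticated, but the failure mode is the same): if the rules file sits at the front of the prompt and truncation happens from the front, it is the first thing to silently disappear once the conversation grows.

    MAX_TOKENS = 8_000  # assumed context budget

    def rough_tokens(text):
        return len(text.split())  # crude stand-in for a real tokenizer

    def build_prompt(rules, history, budget=MAX_TOKENS):
        parts = [rules] + list(history)  # rules file (CLAUDE.md etc.) goes first
        # naive front-truncation: once the conversation outgrows the budget,
        # the oldest parts, including the rules, fall out of the window
        while parts and sum(rough_tokens(p) for p in parts) > budget:
            parts.pop(0)
        return "\n\n".join(parts)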


I still remember when Google (and Facebook?) used XMPP for their chat functions. You could log into any XMPP client and chat with people using Google infrastructure.

Good times, I feel old now.


Yeah I used to use Pidgin to chat with people on Facebook! I miss those days.


Yeah, I had iChat logged into 4 different things, one of them being AIM.

