Hacker News | new | past | comments | ask | show | jobs | submit | codyb's comments

It's not the _cellphone_ that Stallman has an issue with lol

But if everyone acted like Stallman, then solutions that have gone away, such as public payphones, would come back due to the demand for them.

He doesn't give a crap if a random phone record of his appears in a random haystack, and that's kind of the point, isn't it? It's the aggregated, crawlable stores that are the threat.

There may be other issues with Stallman, but that behavior doesn't strike me as particularly inconsistent


Really? I'd think a human being would be more likely to recognize they'd crossed a boundary with another human, step back, and address the issue with some reflection.

If apologizing is more likely the response of an AI agent than of a human, that's... somewhat hopeful in one sense, and supremely disappointing in another.


A human is obviously capable of a turnaround. I just wouldn't expect it to happen right after. Of course, it's not like that couldn't happen either.

> I'd think a human being would be more likely to recognize they'd crossed a boundary with another human

Please. We're autistic software engineers here, we totally don't do stuff like "recognize they'd crossed a boundary".


I guess you could have two people per presentation, one person who confirms whether to slide in the generated slide or maybe regenerate. And then of course, eventually that's just an agent

Wow, I'm staggered, thanks for sharing

I was under the impression that oftentimes chips at the top of the line fail to be manufactured perfectly to spec, and those with, say, a core that's a bit under spec, or which are missing a core entirely, get downclocked or whatever and sold as the next chip in line.

Is that not a thing anymore? Or would a chip like this maybe be so specialized that you'd use, say, a generation-earlier transistor width and thus have more certainty of a successful fab?

Or does a chip this size just naturally ebb around 900,000 cores and that's not always the exact count?

20 kW! Wow! 900,000 cores. 125 teraflops of compute. Very neat.


Designing to tolerate the defects is well-trodden territory. You just expect some rate of defects and have a way of disabling failing blocks.

So you shoot for 10% more cores and disable failing cores?

More or less, yes. Of course, defects are not evenly distributed, so you get a lot of chips with different grades of brokenness. Normally the more broken chips get sold off as lower-tier products. A six-core CPU is probably an eight-core with two broken cores.

Though in this case, it seems [1] that Cerebras just has so many small cores they can expect a fairly consistent level of broken cores and route around them

[1]: https://www.cerebras.ai/blog/100x-defect-tolerance-how-cereb...


Well, it's more like they have 900,000 cores on a WSE and disable whichever ones don't work.

Seriously, that's literally just what they do.


In their blog post linked in the sibling comment it says the raw number is 970k and they enable 900k (table at the end).

IIRC, a lot of design went into making it so that you can disable parts of this chip selectively.
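A quick back-of-envelope sketch of why so many tiny cores make yield predictable. The 970k raw / 900k enabled counts come from the linked blog post; the per-core defect probability here is purely an assumed illustrative number, not anything Cerebras has published:

```python
import math

# Assumption: each core fails independently with probability p_defect,
# so the number of bad cores is roughly binomial across the wafer.
raw_cores = 970_000   # physical cores on the wafer (per the blog post)
enabled = 900_000     # cores actually enabled
p_defect = 0.05       # ASSUMED per-core defect probability, for illustration

mean_bad = raw_cores * p_defect                              # expected failures
std_bad = math.sqrt(raw_cores * p_defect * (1 - p_defect))   # spread of failures
spare = raw_cores - enabled                                  # spare-core budget

# With ~48,500 expected failures +/- ~215, a 70,000-core spare budget
# sits roughly 100 standard deviations above the mean, so essentially
# every wafer has enough good cores to route around the bad ones.
z = (spare - mean_bad) / std_bad
print(f"expected bad cores: {mean_bad:.0f} +/- {std_bad:.0f}, "
      f"spare budget: {spare} ({z:.0f} sigma of headroom)")
```

The same arithmetic with a few dozen big cores instead of a million tiny ones gives a much noisier failure count relative to the spare budget, which is why binning into lower-tier SKUs makes sense there but not here.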

Sounds like you should build a competitor if that's literally all it is...

I suspect there are quite a few other things you have to consider when you're managing trillions of dollars of transactions a year: fraud, settlement times, uptime, security, customer service, debt collection, interest rate calculation, reach, KYC, record keeping, legal inquiries.

But I'm sure we're just a couple grok comments away from a competitor


Don't forget stand-ins. Much of this discussion hasn't touched on the fact that credit card networks do a lot of "stand-ins" when the issuer is unreachable (bank goes down, latency too high, etc.). It's a bit unclear how things like Wero would operate when a network issue hits, as Wero and EU rails won't just assume the liability for the transaction and hope it clears later the way Visa/Mastercard do.

Very good examples. I'd add that trust and connections are also huge in payments. Even if your technology is perfect, you need to integrate with tons of different systems to get full coverage, and the people who run those systems don't sign contracts with just anyone.

I think it's a fair point to say that many people do not feel as if they're the ones responsible even if they're direct contributors.

A lot of folk justify it to themselves.

I've heard "well, you have to change things from the inside" before.

And a lot of people have been there for a while, it wasn't always... quite as bad even if a lot of the warning signs were absolutely there.

I was actually just thinking to myself this morning that I literally have no idea what these feeds look like at this point, but more and more people seem to be looking at me with envy when I say I don't have any lol. I'm kind of curious and might ask my friends if I can see what they're looking at day to day if they'll show me.


I think this is the crux of why, when used as an enhancement to solo productivity, you'll have a pretty strict upper bound on productivity gains given that it takes experienced engineers to review code that goes out at scale.

That being said, software quality seems to be decreasing, or maybe it's just because I use a lot of software in a somewhat locked-down state with adblockers and the rest.

Although, that wouldn't explain just how badly they've murdered the once lovely iTunes (now Apple Music) user interface. (And why does CMD-C not pick up anything 15% of the time I use it lately...)

Anyways, digressions aside... the complexity in software development is generally in the organizational side. You have actual users, and then you have people who talk to those users and try to see what they like and don't like in order to distill that into product requirements which then have to be architected, and coordinated (both huge time sinks) across several teams.

Even if you cut out 100% of the development time, you'd still be left with 80% of the timeline.

Over time though... you'll probably see people doing what I do all day, which is move around among many repositories (although I've yet to use the AI much; got my Cursor license recently and am gonna spin up some POCs that I want to see soon), enabled by their use of AI to quickly grasp what's happening in the repo and the appropriate places to make changes.

Enabling developers to complete features from tip to tail across deep, many-pronged service architectures could bring project time down drastically and cut project management and cross-team coordination costs tremendously.

Similarly, in big companies, the hand is often barely aware of the foot at best, and exploring the space is a serious challenge. Often folk know exactly one step away, and rely on well-established async communication channels which also only know one step further. Principal engineers seem to know large amounts about finite spaces yet are often in the dark just a few hops away from things like the internal tooling for the systems they're maintaining (and are often not particularly great at coming in to new spaces and thinking with the same perspective... no, we don't need individual microservices for every 12-requests-a-month admin API group we want to set up).

Once systems can take a feature proposal and lay out concrete plans that each little kingdom can give a thumbs up or thumbs down (or further modifications) to, you can again cut exploration, coordination, and architecture time.

Sadly, user experience design seems to be an often terribly neglected part of our profession. I love the memes about an engineer building the perfect interface, like a water pitcher, only for the person to position it weirdly in order to get a pour out of the fill hole or something. Lemme guess how many users you actually talked to (often zero), and how many layers of distillation occurred before you received a micro-picture feature request that ends up being built taking input from engineers with no macro understanding of a user's actual needs or day-to-day.

And who are often much more interested in perfecting some little algorithm than thinking about enabling others.

So my money is on money flowing to...

- People who can actually verify system integrity, and can fight fires and bugs (but a lot of bug fixing will eventually become prompting?)
- Multi-talented individuals who can, say, interact with users well enough to understand their needs, as well as do a decent job verifying system architecture and security

It's outside of coding where I haven't seen much... I guess people use it to more quickly scaffold up expense reports, or generate mocks. So, lots of white collar stuff. But... it's not like the experience of shopping at the supermarket has changed, or going to the movies, or much of anything else.


That's been my target for the last... wow... has it nearly been 18 years since I started futzing around with Bitcoin for the first time?

Been a swift drop this time. My buddy goes "if only I'd sold it all in October"... what, like the rest of the people participating in the Ponzi scheme who would also like to cash out at the top?

Looks like all those stupid bitcoin treasury companies are finally unwinding.

At least no one's sitting around pretending it's going to actually do anything useful anymore.


> At least no one's sitting around pretending it's going to actually do anything useful anymore

It enriches certain elected officials (and their friends). That's why the US Government holds a "Strategic Bitcoin Reserve".

Other than that, if you measured the utility of crypto assets versus AI, there's no question that AI (even though it's in a bubble) is still more valuable per MWh than crypto.


Oh ya, AI may be frothy and bubbly right now, but there certainly will be (and already are) tremendous real-world software and hardware products and tech flowing out of the space.

