
It's like pressing the "close door" button on an elevator.

The door-close button is supposed to cancel the door dwell time. But because of accessibility codes in some regions, all major manufacturers allow it to be disabled (as those codes require), i.e. the owners/managers/technicians can disable it.

His point is going to be some copium like "since the C compiler is not as optimized as GCC, it was not impressive."

You probably don’t know what you’re talking about.

Why wasn't the C compiler it made impressive to you?

Like everything genAI, it was amazing yet surprisingly crappy.

Yes, the bear is definitely dancing.

But a few feet away there's a world-class step dancer doing intricate rhythms they've perfected over twenty years of hard work.

The bear's kind of shuffling along to the beat like a stoner in a club.

It's amazing it can do it at all... but the resulting compiler is not actually good enough to be worth using.


>It's amazing it can do it at all... but the resulting compiler is not actually good enough to be worth using.

No one has made that assertion; however, the fact that it can create a functioning C compiler with minimal oversight is the impressive part, and it shows a path to autonomous GenAI use in software development.


OK, but don't you see where this is going? The trajectory that we're on?

I found this was the least impressive bit about it https://github.com/anthropics/claudes-c-compiler/issues/1

>I found this was the least impressive bit about it https://github.com/anthropics/claudes-c-compiler/issues/1

So, I just skimmed the discussion thread, but I am not seeing how this shows that CCC is not impressive. Is the point you're making that the person who opened the issue is not impressive?


It didn’t work without gcc and it was significantly worse than gcc with gcc optimizations disabled.

Two things:

1) Employees are not that easy to replace. These employees have already been onboarded, screened, and proven, at least to this point, to be the type of people the company wants. If an employee starts lagging behind solely because they are stubborn about adopting AI, yes, the company can fire them, but then it has to go through the entire hiring process again and risk bringing in someone new, when it could have simply improved performance by helping the existing employee use AI.

2) Companies themselves have performance metrics that are compared to those of other companies. If an employee is not using AI and has reduced output, then the company's overall output and its profits are affected. No investor cares if the reason is that other companies have a higher rate of AI adoption; investors care that the company was not able to get its employees to use AI effectively to increase profits.

Huh? Why wouldn’t developers (who probably have stock options in Claude) try to prevent becoming 'the Microsoft of AI'? That's probably what they are actively trying to do.

This take is overly cynical. Every major corporation has people with influence who care and fight for good outcomes. They win some fights, they lose others. The only evidence you need is to notice the needlessly good decisions that were made in the past.

Some greatest hits:

- CoreAudio, Mac OS memory management, kernel in general, and many other decisions

- Google's internal dev tooling, Go, and Chrome (at least, in its day)

- C#, .NET, and TypeScript (even Microsoft does good work)

One of the hallmarks of heroic engineering work is that everyone takes it for granted afterward. Open source browsers that work, audio that just works, successors to C/C++ with actual support and adoption, operating systems that respond gracefully under load, etc. ... none of these things were guaranteed, or directly aligned with short-term financial incentives. Now, we just assume they're a requirement.

Part of the "sensibility" I'm talking about is seeking to build things that are so boring and reliable that nobody notices them anymore.


Your incentive is to stay in the job so you can vest. Fighting the slide may just make enemies.

Yeah, I made a similar point about the tone of ChatGPT responses; I can't imagine why someone would want less information when working with and tuning an AI model. However, something tells me they have hard evidence that users respond better to less information, regardless of what the loud minority says online, and they're following that.

100%. Metrics don't lie. I've A/B tested this a lot. Attention is a rare commodity, and users will zone out and leave your product. I really dislike this fact.

> Metrics don't lie

Metrics definitely lie, but generally in a different way to users/others. It's important to not let the metric become the goal, which is what often happens in a metric-heavy environment (certainly Google & FB, not sure about the rest of big tech).


Is that a joke?

>Death of flash

>The library support for games[in Go] is quite poor, and though you can wrap C libs without much trouble, doing so adds a lot of busy work.

I can't see when this was written, but it has to be around 2015. So, about 10 years ago. I wonder what his opinion is today.


The first capture of the page on Internet Archive Wayback Machine is from January 9th, 2016. So it’s at least that old.

Also here is a snapshot of the main page of his website from that time, which has screenshots of his games and thereby provides context into what kind of games he had made and published when the blog post was written.

https://web.archive.org/web/20160110012902/http://jonathanwh...

This one looks like it’s 3d and has a pretty unique style:

https://web.archive.org/web/20160112060328/http://jonathanwh...


I think it would be appropriate if @dang added a (2016) to the title.

>make a computer that's safe for grandma to use

People also forget that it makes things safe for people who aren't grandmas. The reason you think it's just grandmas is that getting a virus or having your computer hacked now requires so many user gaffes, and it almost always involves typing in or telling someone your password when you shouldn't. I still remember that in the early 2000s there was some ad affiliate on the Cyanide and Happiness webcomic website that, if you let its ad load, instantly infected your computer with adware just from visiting the site. That's unheard of now because of increasingly protective/restrictive policies set by technology companies. It's one of those situations where, if a system is working correctly, you won't even know it's working at all.


PINs are easier to remember. Remember, 99% of the population does not care about defense against state actors, just about stopping nosy co-workers or family members from looking at their stuff. The next group (which I would include myself in) is concerned about theft (both physical and remote), where someone can get "unlimited" access to your machine and may be able to defeat a short PIN but is unlikely to beat a strong password. If you are in the realm of defending against state actors, then that is something you have to take multiple steps to ensure, and a single slip-up will tank your operation (like with this lady).
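As a rough sketch of why a short PIN falls to offline brute force while a strong password usually doesn't, compare the entropy of the two search spaces (the lengths and alphabet sizes below are illustrative assumptions, not from the comment):

```python
import math

def entropy_bits(alphabet_size: int, length: int) -> float:
    """Bits of entropy for a uniformly random secret of the given length."""
    return length * math.log2(alphabet_size)

# Assumed examples: a 6-digit numeric PIN vs. a 12-char alphanumeric password.
pin_bits = entropy_bits(10, 6)        # ~20 bits -> only 10^6 candidates
password_bits = entropy_bits(62, 12)  # ~71 bits -> ~3 * 10^21 candidates

print(f"6-digit PIN:      {pin_bits:.1f} bits")
print(f"12-char password: {password_bits:.1f} bits")
```

At, say, a billion guesses per second, the PIN space is exhausted in about a millisecond, while the password space takes on the order of 100,000 years; in practice, device PINs survive only because hardware rate-limiting caps the guess rate.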

I assume some sort of corporate espionage. This is high stakes, after all.
