The door close button is supposed to cancel the door dwell time. But because disability/accessibility codes in some regions require it, all major manufacturers allow the button to be disabled, i.e. owners/managers/technicians can disable it.
>It's amazing it can do it at all... but the resulting compiler is not actually good enough to be worth using.
No one has made that assertion; however, the fact that it can create a functioning C compiler with minimal oversight is the impressive part, and it shows a path to autonomous GenAI use in software development.
So, I just skimmed the discussion thread, but I am not seeing how this shows that CCC is not impressive. Is the point you're making that the person who opened the issue is not impressive?
Two things:

1) Employees are not that easy to replace. These employees have already been onboarded, screened, and proven, at least to this point, to be the type of people the company wants. If an employee starts lagging behind solely because they are stubborn about adopting AI, yes, the company can fire them, but then it has to go through the entire hiring process again and risk bringing in someone new, when it could have simply improved performance by helping the existing employee use AI.

2) Companies themselves have performance metrics that are compared to those of other companies. If an employee is not using AI and has reduced output, then the company's overall output and its profits are affected. No investor cares if the reason is that other companies have a higher rate of AI adoption; investors care that the company was not able to get its employees to use AI effectively to increase profits.
Huh? Why wouldn't developers (who probably have stock options in Anthropic) try to prevent it from becoming 'the Microsoft of AI'? That's probably what they are actively trying to do.
This take is overly cynical. Every major corporation has people with influence who care and fight for good outcomes. They win some fights, they lose others. The only evidence you need is to notice the needlessly good decisions that were made in the past.
Some greatest hits:
- CoreAudio, Mac OS memory management, kernel in general, and many other decisions
- Google's internal dev tooling, Go, and Chrome (at least, in its day)
- C#, .NET, and Typescript (even Microsoft does good work)
One of the hallmarks of heroic engineering work is that everyone takes it for granted afterward. Open source browsers that work, audio that just works, successors to C/C++ with actual support and adoption, operating systems that respond gracefully under load ... none of these things were guaranteed, or directly aligned with short-term financial incentives. Now, we just assume they're a requirement.
Part of the "sensibility" I'm talking about is seeking to build things that are so boring and reliable that nobody notices them anymore.
Yeah, I made a similar point about the tone of ChatGPT responses; I can't imagine why someone would want less information when working with and tuning an AI model. However, something tells me they actually have hard evidence that users respond better to less information, regardless of what the loud minority says online, and they're following it.
100%. Metrics don't lie. I've A/B tested this a lot. Attention is a scarce commodity, and users will zone out and leave your product. I really dislike this fact.
Metrics definitely lie, but generally in a different way than users/others do. It's important not to let the metric become the goal, which is what often happens in a metrics-heavy environment (certainly Google & FB; not sure about the rest of big tech).
The first capture of the page on the Internet Archive's Wayback Machine is from January 9th, 2016. So it's at least that old.
Also, here is a snapshot of the main page of his website from that time, which has screenshots of his games and thereby gives context on what kind of games he had made and published when the blog post was written.
People also forget that it makes things safe for people who aren't grandmas. The reason you think it's just grandmas is that, for you to get a virus or have your computer hacked now, it takes a whole series of user gaffes, and it almost always involves typing in or telling someone your password when you shouldn't. In the early 2000s, I still remember there was some ad affiliate on the Cyanide and Happiness webcomic website that, if you let its ad load, instantly infected your computer with adware just from visiting the site. That's unheard of now because of increasingly protective/restrictive policies set by technology companies. It's one of those situations where, if a system is working correctly, you won't even know it's working at all.
PINs are easier to remember. Remember, 99% of the population does not care about defense against state actors, just about stopping nosy co-workers or family members from looking at their stuff. The next group (which I would include myself in) is concerned about theft (both physical and remote), where someone gets "unlimited" access to your machine and may be able to defeat a short PIN but is unlikely to beat a strong password. If you are defending against state actors, that is something you have to take multiple steps to ensure, and a single slip-up will tank your operation (like with this lady).
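For a rough sense of the scale difference between those two cases, here is a back-of-envelope sketch in Python; the PIN length and character-set sizes are illustrative assumptions, not figures from this thread:

```python
import math

def keyspace_bits(alphabet_size: int, length: int) -> float:
    """Entropy in bits of a uniformly random secret."""
    return length * math.log2(alphabet_size)

# Assumed parameters: a 6-digit PIN vs. a 12-character password drawn
# from roughly 70 printable characters (letters, digits, common symbols).
pin_bits = keyspace_bits(10, 6)   # 10^6 = 1,000,000 possible PINs
pw_bits  = keyspace_bits(70, 12)  # 70^12 ≈ 1.4e22 possible passwords

print(f"6-digit PIN:      {pin_bits:.1f} bits ({10**6:,} guesses to exhaust)")
print(f"12-char password: {pw_bits:.1f} bits (~{70**12:.1e} guesses to exhaust)")
```

At even a million offline guesses per second, the PIN space is exhausted in about a second, while the password space would take hundreds of millions of years. Hardware rate-limiting (e.g. a secure enclave capping PIN attempts) is what makes short PINs survivable against anything short of that unlimited-access attacker.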