On the other hand, I consider V8 the most extremely optimized runtime in a weird way, in that there are like 100 people on the planet who understand how it works, while the rest of us are like "why my JS not fast"
and then there are the people saying "why JS" before going on an archaic rant and then leaving the interview concluding their rejection was age discrimination
This. OP, tools often install their own update mechanisms (e.g. `uv self update`), so this may not be as useful as you think. As an alternative (albeit one that adds potential hosting costs), consider running a small DB - can be as simple as SQLite - with hashes of scripts. You also need to handle legitimate updates from the script's author[s], though. If you can extract versioning from the URL, e.g. GitHub releases, you could include that in the schema.
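A minimal sketch of what I mean, assuming Python and a trust-on-first-use policy (table layout, names, and the pinning strategy are all my own invention, not OP's design):

```python
# Sketch only: pin install scripts by SHA-256 in a small SQLite table.
import hashlib
import sqlite3
import urllib.request

db = sqlite3.connect("script_hashes.db")
db.execute("""
    CREATE TABLE IF NOT EXISTS scripts (
        url     TEXT NOT NULL,
        version TEXT NOT NULL,  -- e.g. parsed from a GitHub release URL
        sha256  TEXT NOT NULL,
        PRIMARY KEY (url, version)
    )
""")

def fetch_verified(url: str, version: str) -> bytes:
    """Fetch a script and check it against the pinned hash, if any."""
    body = urllib.request.urlopen(url).read()
    digest = hashlib.sha256(body).hexdigest()
    row = db.execute(
        "SELECT sha256 FROM scripts WHERE url = ? AND version = ?",
        (url, version),
    ).fetchone()
    if row is None:
        # First sighting of this (url, version): trust on first use, then pin.
        db.execute("INSERT INTO scripts VALUES (?, ?, ?)", (url, version, digest))
        db.commit()
    elif row[0] != digest:
        raise RuntimeError(f"hash mismatch for {url} @ {version}")
    return body
```

Keying on (url, version) is what lets a legitimate author release a new version without tripping the mismatch check; an unversioned URL whose content silently changes still gets flagged.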
Obviously the article makes valid points. But a recent epiphany I had is: things are, by default, mediocre but functional. Of course the first shot at this problem is not going to be very good, much like how the first version of JavaScript was a shitshow and it has taken years to pay down the technical debt. Forcing a beautiful creation takes significant effort and willpower. So I'd say I'm not surprised at all; this is just how the world works, in most cases.
I think this is a cop-out. OpenAI literally published a better integration spec two years ago, served at `/.well-known/ai-plugin.json`. It just gave a summary of an OpenAPI spec, which ChatGPT could consume and then run your functions.
It was simple and elegant, the timing was just off. So the first shot at this problem actually looked quite good, and we're currently in a regression.
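From memory, the manifest looked roughly like this (field names and values reproduced as I recall them from the old ChatGPT plugin docs; the URLs and plugin details are made up, so check an archived copy of the spec before relying on any of this):

```json
{
  "schema_version": "v1",
  "name_for_human": "TODO Manager",
  "name_for_model": "todo_manager",
  "description_for_human": "Manage your TODO list.",
  "description_for_model": "Plugin for creating, listing, and deleting TODO items.",
  "auth": { "type": "none" },
  "api": {
    "type": "openapi",
    "url": "https://example.com/openapi.yaml"
  },
  "logo_url": "https://example.com/logo.png",
  "contact_email": "support@example.com",
  "legal_info_url": "https://example.com/legal"
}
```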
I agree, except I think 100 lines is definitely worth a method, whereas 15 lines obviously isn't in most cases, and yet we extract at that size all the time.
My principle has always been: "is this part an isolated and intuitive subroutine that I can clearly name, such that when other people see it they'll get it at first glance without pausing to think about what it does (not to mention reading through the implementation)?" I'm surprised this has not become common wisdom.
In recent years my general principle has been to introduce an abstraction (in this case split up a function) if it lowers local concepts to ~4 (presumably based on similar principles to the original post). I’ve taken to saying something along the lines of “abstractions motivated by reducing repetition or lines of code are often bad, whilst ones motivated by reducing cognitive load tend to be better”.
Good abstractions often reduce LOC, but I prefer to think of that as a happy byproduct rather than the goal.
>My principle has always been: "is this part an isolated and intuitive subroutine that I can clearly name, such that when other people see it they'll get it at first glance without pausing to think about what it does (not to mention reading through the implementation)?"
I hold this principle as well.
And I commonly produce one-liner subroutines following it. For me, 15 lines has become disturbingly long.
I tend toward John Carmack's view. He seemed annoyed at being pressed to give a maximum at all, and specified 7000 lines. I don't think I have ever gone that high. But really it is just a matter of what you are doing. We expect to reuse things far more often than we actually do. If you wrote out everything you need to do in order, and then applied the rule of three to extract a function for everything you did three times, it is quite possible you wouldn't extract anything. In which case I think it should just be the one function.
> We expect to reuse things way more often than we actually do.
This is about readability (which includes comprehensibility), not reuse. When I read code from others who take my view, I understand. When I read code from those who do not, I do not, until I refactor. I extract a piece that seems coherent, and guess its purpose, and then see what its surroundings look like, with that purpose written in place of the implementation. I repeat, and refine, and rename.
It is the same even if I never press a key in my editor. Understanding code within my mind is the same process, but relying on my memory to store the unwritten names. This is the nature of "cognitive load".
Yeah, I find extracting code into methods very useful for naming things that are 1) a digression from the core logic, and 2) enough code to make the core logic harder to comprehend. It’s basically like, “here’s this thing, you can dig into it if you want, but you don’t have to.” Or, the core logic is the top level summary and the methods it calls out to are sections or footnotes.
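As an illustration (the domain and every name here are invented for the example), the "footnote" pattern might look like:

```python
# Hypothetical example: a fiddly parsing digression is named and tucked away,
# so the core logic reads as a top-level summary.
def parse_order_line(raw: str) -> dict:
    """The digression: field splitting details live here, read only if needed."""
    sku, qty, price = raw.split(",")
    return {"sku": sku.strip(), "qty": int(qty), "price": float(price)}

def charge(order: dict) -> None:
    print(f"charging {order['qty']} x {order['sku']} at {order['price']}")

def ship(order: dict) -> None:
    print(f"shipping {order['sku']}")

def process_order(raw: str) -> None:
    # The core logic stays a three-line summary; the parsing is a footnote.
    order = parse_order_line(raw)
    charge(order)
    ship(order)

process_order("ABC-1, 2, 9.99")
```

The extraction isn't motivated by reuse or line count; `parse_order_line` may only ever have one caller. It exists so that `process_order` can be read without holding the parsing details in your head.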
Not necessarily. They still would have needed to have a majority of board seats on their side - I mean, Brockman was chairman of the board and he didn't find out about all this until the machinations were complete.
Read a good article about the history of the OpenAI board that argued this all went down due to the recent loss of 3 board members, bringing total board membership from 9 to 6 (including losses like Reid Hoffman, who never would have voted for something like this), and Altman wanted to increase board membership again. Likely the "Gang of Four" here saw this as their slim window to change the direction of OpenAI.
Possibly, but not certainly. The vote would have deadlocked if a board member had been replaced (4:2 -> 3:3), but it would still have passed if Microsoft had merely held an extra seat (4:2 -> 4:3), assuming the Microsoft representative on the board would have voted against removing Altman.
What I'm fairly sure of, though, is that if the board had been stocked with heavyweights rather than lightweights, this would have been handled in a procedurally correct way, with a much better chance of it sticking.
Engineers at small startups are generally founder-minded: like founders, they're passionate about the product and the problem. That's how they come to be involved in deciding what to work on.