Hacker News | norir's comments

You don't need a REPL for this workflow, and it can be easily implemented for any language: `ls *.MY_LANG | entr -c ./run.sh`. You get feedback whenever you save the file.

Personally, I find waiting more than 200ms unacceptable, and under 50ms is ideal. When the feedback delay is that small, it becomes practical to save the file on every keystroke and get nearly instantaneous results with every input character.


> You don't need a REPL for this workflow, and it can be easily implemented for any language: `ls *.MY_LANG | entr -c ./run.sh`. You get feedback whenever you save the file.

As in restarting the entire program and re-running every subsequent query and state-changing step that got you there in the first place, being careful not to accidentally run other parts of the program, setting everything up just so, and then rewriting parts if you want to try something somewhere else?

Perhaps I'm misunderstanding you, because that sounds horrible unless you're scripting something small with one path.

The whole point of the REPL is that you evaluate whatever functions you need to get into your desired state and work from there. The ad hoc nature of it is what makes you able to just dig into a part of the code base and get going, even if that project is massive.

> Personally, I find waiting more than 200ms unacceptable and really < 50ms is ideal.

Calling a function on a running program takes microseconds; you're not restarting anything.


It is easily possible to parse at over one million lines per second with a well-designed grammar and a handwritten parser. If I'm editing a file with 100k+ lines, I likely have much bigger problems than the need for incremental parsing.

It's not just speed: incremental parsing allows for better error recovery. In practice, this means your editor can highlight the code as you type, even when what you're typing has broken the parse tree (especially for the code after your edit point).

> I recognize that it is reminiscent of a few decades ago when old timers complained about the proliferation of high level programming languages and insisted they would lead to a generation of programmers lacking a proper understanding of how the system behaves beneath all that syntactic sugar and automatic garbage collection. They won’t have the foundational skills necessary to design and build quality software. And, for the most part, they turned out to be wrong.

What if the old timers were actually right? I tend to think they were.


The thing is, they were and they weren't.

It's absolutely necessary that there is a line of people somewhere who understand the path from garbage collection down to assembly instructions. We can't build on abstractions alone as long as we still run stuff on physical CPUs.

But it's also unequivocally true that once we have enough long-bearded old-timers and new-timers who understand how writing a Python expression somewhere ends up as a register write elsewhere, all the others simply don't have to.

In the old days, all you had was hardware, and to program you had to understand hardware. But those who did program, and did understand, were the few smart people who had access to hardware. Everyone else was left out. Now we have high-level languages, scripting languages, AI, and whatever comes next. As long as some people maintain the link to the hardware, the rest can build on that.


Yeah, people in the past were always more right. The people who built the first processors were much more aware of low-level details than those so-called low-level programmers with their fancy compilers.

And before them, the electrical and mechanical engineers; without them we wouldn't even have these processors. We are all ultimately dependent on them.

And you can go on and on like that.


> In the meantime, there's nothing stopping you from using the agent to write the code that is every bit as high quality as if you sat down and typed it in yourself.

You can only speak for yourself.


If you have well-defined boundaries, you can switch the stack to an arbitrarily large chunk of memory before the recursive call and restore the system stack upon completion.


And if you never do reach completion, you can just garbage collect that chunk. AKA "Cheney on the MTA": https://dl.acm.org/doi/10.1145/214448.214454


I have a personal aversion to defer as a language feature. Some of this is aesthetic: I prefer code to be linear, which is to say that instructions appear in the order in which they are evaluated. Further, the presence of defer almost always implies that there are resources that can leak silently.

I also dislike RAII because it often makes it difficult to reason about when destructors are run, and it admits accidental leaks just like defer does. Instead, what I would want is essentially a linear type system in the compiler: one that lets you annotate data structures that require cleanup and errors if any possible branch fails to execute the cleanup. This has the benefit of making cleanup explicit while also guaranteeing that it happens.


If you dislike things happening out of lexical order, I expect you must already dislike C because of one of its many notorious footguns: the evaluation order of function arguments is unspecified.

As for RAII, I find your viewpoint quite baffling. Destructors run at one extremely well-defined point in the code: `}`. That's not hard to reason about at all, especially compared to the often spaghetti-like cleanup tails. If you're lucky, the team doesn't have a policy against `goto`.


> I have a personal aversion to defer as a language feature.

Indeed, `defer` as a language feature is an anti-pattern.

It does not allow initialization/de-initialization routines to be abstracted and encapsulated within the resources themselves; instead it transfers the responsibility for manually performing the release or de-initialization to the users of the resource, for each and every use.

> I also dislike RAII because it often makes it difficult to reason about when destructors are run [..]

RAII is a way to abstract initialization; it says nothing about where a resource is initialized.

When combined with stack allocation, now you have something that gives you precise points of construction/destruction.

The same can be said about heap allocation in some sense, though this tends to be more manual and could also involve a dynamic component (e.g., a tracing collector).

> [..] and also admits accidental leaks just like defer does.

RAII is not memory management, it's an initialization discipline.

> [..] what I would want is essentially a linear type system in the compiler that allows one to annotate data structures that require cleanup and errors if any possible branches fail to execute the cleanup. This has the benefit of making cleanup explicit while also guaranteeing that it happens.

Why would you want to replicate the same cleanup procedure for a given resource throughout the codebase, instead of abstracting it in the resource itself?

Abstraction and explicitness can co-exist. One does not rule out the other.


Failure is inherently "non-linear" in this sense, unless there is exhaustive case analysis. That sounds a lot like "just never program a mistake."


I personally would prefer to hear more about what is uniquely good about Odin, semantically or syntactically, than more ad hominem attacks on the intelligence of the language's critics, which I have seen in multiple recent pieces by this author.


Agreed; this communication style by itself makes me less inclined to try out the language.


Communism is neither the opposite of laissez-faire capitalism nor the only alternative.


Model competition does nothing to address monopoly consolidation of compute. If you have control over compute, you can exert control over the masses. It doesn't matter how good my open source model is if I can't acquire the resources to run it. And I have no doubt that the big players will happily buy legislation to both entrench their compute monopoly/cartel and control what can be done using their compute (e.g. making it a criminal offence to build a competitor).


Model competition means that users have multiple options to choose from, so if it turns out one of the models has biases baked in, they can switch to another.

Which incentivizes the model vendors not to mess with the models in ways that might lose them customers.


I don't think anyone considers biases more important than, say, convenience. The model that only suggests Coca-Cola brands will win over the one that's ten times slower because it runs on your computer.


> I'm a programmer, and I use automatic programming. The code I generate in this way is mine. My code, my output, my production. I, and you, can be proud.

I disagree. The code you wrote is a collaboration with the model you used. To frame it this way is to take credit for the work the model did on your behalf. There is a difference between "I wrote this code entirely by myself" and "I wrote this code with a partner." For me, it is analogous to the composer of an opera's score taking credit for the libretto because they gave the librettist the rough narrative arc. If you didn't do it yourself, it isn't yours.

I generally prefer integrated works or at least ones that clearly acknowledge the collaboration and give proper credit.


The way I put it is: AI assistance in programming is a service, not a tool. It's like commissioning the code from an outside shop. A lot of companies do this with human programmers, but when you commission OpenAI or Anthropic, the code they provide was written by a machine.


Also, it's not only the work of "the model"; it's the work of the human beings the model was trained on, often illegally.


Copyright infringement is a tort. “Illegal” is almost always used to refer to breaking of criminal law.

This seems like intentionally conflating them to imply that appropriating code for model training is a criminal offense, when, even in the most anti-AI, pro-IP view, it is plainly not.


> “Illegal” is almost always used to refer to breaking of criminal law.

This is false, at least in general usage. It is very common to hear about civil offenses being referred to as illegal behavior.



> There are four essential elements to a charge of criminal copyright infringement. In order to sustain a conviction under section 506(a), the government must demonstrate: (1) that a valid copyright; (2) was infringed by the defendant; (3) willfully; and (4) for purposes of commercial advantage or private financial gain.

I think it's very much an open debate whether training a model on publicly available data counts as infringement or not.


I'm replying to your comment about infringement being a civil tort versus a crime; it can be both.


Or for another analogy, just substitute the LLM for an outsourced firm. Instead of hiring a firm to do the work, you're hiring an LLM.


How many JavaScript libraries does the average fortune 1000 developer invoke when programming?


That average Fortune 1000 developer is still expected to abide by the licensing terms of those libraries.

And in practice, tools like NPM make sure to output all of the libraries' licenses.


I was about to argue, and then I suddenly remembered some past situations where a project manager clearly considered the code I wrote to be his achievement and proudly accepted the company's thanks.


Prompting the AI is indeed “do[ing] it yourself”. There’s nobody else here, and this code is original and never existed before, and would not exist here and now if I hadn’t prompted this machine.


Sure. But the sentence "I am a programmer" doesn't fit with prompting, just as prompting for a drawing that resembles something doesn't make me a painter.


Exactly. He's acting as something closer to a technical manager (who can dip into the code if need be but mostly doesn't) than a programmer.


So, what's your take on Andy Warhol, or sampling in music?


The line gets blurrier the more auto-complete you use.

Agentic programming is, at the end of the day, a higher-level auto-complete with extremely fuzzy matching on English.

But when you write a block and let Copilot complete 3, 4, 5 statements, are you really writing the code?


Truth is the highest level of autocomplete

