Hacker News: digikata's comments

When writing tables in markdown files, align the text in each column so the table stays readable in a plain text editor.
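For instance, a sketch of an aligned table (column names are just illustrative); the pipes line up in a plain editor, and it renders identically either way:

```markdown
| Name    | Type   | Default |
|---------|--------|---------|
| timeout | number | 30      |
| retries | number | 3       |
```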

Large PRs could follow the practices of the Linux kernel dev lists. Sometimes large subsystem changes are carried separately by the submitter for a while, for testing and maintenance, before being accepted in principle, reviewed, and, if ready, merged.

While the large code changes were maintained, they were often split up into a set of semantically meaningful commits for purposes of review and maintenance.

With AI blowing up the line counts on PRs, it's a skill set that more developers need to mature. It's good for their own review to take the mass of changes, ask themselves how they would want to systematically review it in parts, and then split the PR up into meaningful commits: e.g. interfaces, docs, subsets of changed implementations, etc.
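A minimal sketch of that split, using a throwaway repo (all file paths and commit messages are hypothetical):

```shell
# Build a scratch repo so the example is self-contained.
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q && git config user.email dev@example.com && git config user.name dev
mkdir -p src docs
echo 'interface' > src/api.txt
echo 'docs'      > docs/api.md
echo 'impl'      > src/impl.txt
# Instead of one 14k-line commit, stage and commit by concern:
git add src/api.txt  && git commit -qm "api: add streaming interface"
git add docs/        && git commit -qm "docs: document streaming interface"
git add src/impl.txt && git commit -qm "impl: wire streaming into the reader"
git log --oneline   # three reviewable commits instead of one
```

For an existing mega-commit, `git reset --soft` against the base branch followed by `git add -p` gives the same result: related hunks picked interactively into semantically meaningful commits.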


Nobody wants to review AI-generated code (unless we are paid for doing so). Open source is fun, that's why people do it for free... adding AI to the mix is just insulting to some, and boring to others.

Like, why on earth would I spend hours reviewing your PR that you/Claude took 5 minutes to write? Even if it improves my open source codebase (best case scenario), I simply don't enjoy the imbalance.


> Like, why on earth would I spend hours reviewing your PR that you/Claude took 5 minutes to write?

If the PR does what it says it does, why does it actually matter if it took 2 weeks or 2 minutes to put together, given that it's the equivalent level of quality on review?


“It works” is the bare minimum. Software is maintained for decades and should have a higher bar of quality.

> given that it's the equivalent level of quality on review?

One reason: if it takes 2 minutes to put together a PR, then you'll get an avalanche of contributions that you have no time to review. Sure, I can put AI in front to do the review, but then what's the point of my having an open source project?

> but then what's the point of my having an open source project?

For some people, the point was precisely to improve the software available to the global commons through a thriving and active open source effort. "Too many people are giving me too many high-quality PRs to review" is hardly something to complain about, even if you have to just pick them randomly to fit them in the time you have without AI (or other committers) to help review.

If your idea of open source is just to share the code you wanted to work on and ignore contributions, you can do that too. SQLite does that, after all.


> If the PR does what it says it does, why does it actually matter if it took 2 weeks or 2 minutes to put together, given that it's the equivalent level of quality on review?

You're right that the issue isn't how many minutes it took. The issue is that it's slop. Reviewing thousands of lines of crappy code is unpleasant whether they were autogenerated or painstakingly handcrafted. (Of course, few humans have the patience, or the resistance to learning, to generate the amount of terrible code that AIs do routinely.)


I get the frustration but I think this take only holds if you assume AI generated code is inherently worse. If someone uses Claude to scaffold the boilerplate and then actually goes through it properly, the end result is the same code you would have written by hand, just faster. The real problem is when people submit 14k lines they clearly did not read through. But that is a review process problem, not an AI problem. Bad PRs existed long before AI.

I resonate with OP a lot, and in my opinion, it's not about the code quality. It's about the effort that was put in, like in each LOC. I can't quite put it in words, but, like, the art comparison works quite well. If someone generates a painting with Gemini, it makes it somewhat heartless. It may still be good and bring the project forward (in case of this PR), but it lost every emotional value.

I would probably never be able to review this kind of code in open source projects without any financial compensation, for that reason. Not because I don't like LLMs, don't use LLMs, or think their code is of bad quality. But while without LLMs I knew there was a person who sat down and wrote all this in painstaking work, now I know that he or she merely steered a robot that wrote it. It may still be good work, and the steering and prompting is still work and requires skill, but I would not feel any emotional value in this code, and it would make it A LOT harder to gather motivation to review it. Interestingly, when I think about it, I realize that I would inherently have motivation to find out how the developer prompted the agent.

Like, you know, when I see a wooden statue that I know was designed and carved by someone over months of work, I can appreciate every single edge of the wood much more than if there's a statue that was designed by someone but carved by some kind of wooden CNC machine. It may be the same statue, and the same or even better quality, and it was still skillful work, but I lose my connection to it.

Can't quite pinpoint it, but for me, it seems, the human aspect is really important here, at least when it's about passion and motivation.

Maybe that made some sense, idk. I just wrote out of my ass.


Yes and no. Previously when someone submitted a 14k line PR you could be assured that they'd at least put a significant amount of time and effort into it, and the result was usually a certain floor on the quality level. Now that's no longer true.

In theory because the code being added is introducing a feature so compelling that it is worth it. In practice, that’s rarely the case.

My personal approach to open source is more or less that when I need a piece of software to exist that does not and there is no good reason to keep it private, it becomes open source. I don’t do it for fun, I do it because I need it and might as well share it. If someone sends me a patch that enhances my use case, I will work with them to incorporate it. If they send me a patch that only benefits them it becomes a calculus of how much effort would it take for me to review it. If the effort is high, my advice is to fork the project or make it easier for me to review. Granted I don’t maintain huge or vital projects, but that’s precisely why: I don’t need yet another programming language or runtime to exist and I wouldn’t want to work on one for fun.


Why do you care how much effort it took the engineer to make it? If there was a huge amount of tedium that they used Claude Code for, then reviewed and cleaned up so that it’s indistinguishable from whatever you’d expect from a human; what’s it to you?

Not everyone has the same motivations. I’ve done open source for fun, I’ve done it to unblock something at work, I’ve done it to fix something that annoys me.

If your project is gaining useful functionality, that seems like a win.


Because sometimes programming is an art and we want people to do it as if it was something they cared about. I play chess and this is a bit like that. Why do I play against humans? Because I want to face another person like me and see what strategies they can come up with.

Of course any chess bot is going to play better, but that's not the point


What about the other times?

I don't think Node virtual filesystems are anything like chess.

Solving problems is not like chess? I want to use my brain, not sure why that's so complicated to understand

[flagged]


TIL that when I do anything that makes society label me as a "developer", I am not allowed to enjoy it, or feel about it in any way, as it's now a job, entirely neutral in nature, and I gotta do it, whether I hate or enjoy it - no attached emotions allowed.

Ignore the mercenaries. Here they are legion.

As for us (aspiring) craftsman, there are dozens of us! Dozens!


> Why do you care how much effort it took the engineer to make it?

Because they're implicitly asking me to put in effort as a reviewer. Pretending that they put more effort in than they have is extremely rude, and intentionally or not, generating a large volume of code amounts to misleading your potential reviewers.

> If there was a huge amount of tedium that they used Claude Code for, then reviewed and cleaned up so that it’s indistinguishable from whatever you’d expect from a human; what’s it to you?

They never do though. These kinds of imaginary good AI-based workflows are a "real communism has never been tried" thing.

> If your project is gaining useful functionality, that seems like a win.

Lines of code impose a maintenance cost, and that goes triple when the code quality is low (as is always the case for actually existing AI-generated code). The cost is probably higher than the benefit.


I hate being paid to review AI slop.

> With AI blowing up the line counts on PRs,

Well, the process you're describing is mature and intentionally slows things down. The LLM push has almost the opposite philosophy: everyone talks about going faster, and no one believes it is about higher quality.


Go slow to go fast. Breaking up the PR this way also allows later humans and AI alike to understand the codebase. Slowing down the PR process with standards lets the project move faster overall.

If some bug slips by review, having the PR broken down semantically allows quicker analysis and recovery later, for one. And even if you have AI reviewing new Node.js releases to decide whether to take in a new version, the commit log will be more analyzable by the AI with semantic commits.

Treating the code as throwaway is valid in a few small contexts, but that is not the case for PRs going into maintained projects like Node.js.


TBF, most of the AI code I've reviewed isn't significantly different from code I've seen from people... in fact, I've seen significantly worse from real people.

The fact is, it's useful as a tool, but you still should review what's going on/in. That isn't always easy though, and I get that. I've been working on a TS/JS driver for MS-SQL so I can use some features not in other libraries, mostly by bridging a Rust driver (first Tiberius, then mssql-client); the clean abstraction made the switch pretty quick. A fairly thorough test suite for Deno/Node/Bun kept the sanity in check: a Rust C-style library with FFI access in a TS/JS server environment.

My hardest part is actually having to set up a Windows Server to test the passwordless auth path (basically a connection string with integrated Windows auth). I've got about 80 hours of real time into this project so far, and I'll probably be doing 2 followups: one will be a generic ODBC adapter with a similar set of interfaces, and a final third adapter that will provide the same methods but use native SQLite underneath, smoothing over the differences.

I'm leveraging using/dispose (async) instead of explicit close/rollback patterns, similar to .NET, as well as Dapper-like methods for "typed" results, though with no actual type validation... I'd considered adapting Zod to check at least the first record, or all records, and may still add the option.

All said though, I wouldn't have been able to do so much with so relatively little time without the use of AI. You don't have to sacrifice quality to gain efficiency with AI, but you do need to take the time to do it.
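A minimal sketch of that using/dispose pattern; the class and method names here are hypothetical stand-ins, not the driver's actual API:

```typescript
// Connection object exposing an async disposal hook instead of an
// explicit close()/rollback() method (names illustrative).
class SqlConnection {
  closed = false;

  async query(sql: string): Promise<string[]> {
    return [`row for: ${sql}`]; // stand-in for the real FFI call
  }

  // Disposal hook: with TS 5.2+ / Node 20.4+, `await using conn = ...`
  // invokes this automatically at scope exit, mirroring .NET's
  // IAsyncDisposable, so callers can't forget cleanup.
  async [Symbol.asyncDispose](): Promise<void> {
    this.closed = true;
  }
}

async function demo(): Promise<boolean> {
  const conn = new SqlConnection();
  await conn.query("select 1");
  // Equivalent to what happens when an `await using` scope exits:
  await conn[Symbol.asyncDispose]();
  return conn.closed;
}
```

In a real call site this collapses to `await using conn = new SqlConnection(...)`, with disposal guaranteed even on thrown errors.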


> Everyone talks about going faster and no one believes it is about higher quality.

Move Fast and Break Things was considered a virtue in the JavaScript community long before LLMs became widely available.

Crash? The software, or physically? 200 Hz as a minimum control loop rate seems on the fast side as a general default, but it all depends on the control environment, and I may be biased as I've done a lot more bare-silicon control than ROS.

I'm guessing running a 200 Hz command rate on an e-series UR which uses 1 kHz internally will give you a protective stop?

Physically crash. When we would block the control loop at all (even down to 100 Hz), we would get errors, and then occasionally the arm would erratically experience massive acceleration spikes and crash into its nearby surroundings before e-stopping.

Re: other comment. Yes, this was with UR3e's, which by default have update rates of around 500 Hz.


A couple of historical notes that come to mind.

When washing machines were introduced, the number of hours spent on the chore of laundry did not necessarily decrease until about 40 years after their introduction.

When project management software was introduced, it made the task of managing project tasks easier. One could create an order of magnitude more detailed plans in the same amount of time; poorly used, this decreased the odds of project success by eating up everyone's time. And the software itself has not moved the needle on the success factors of completing projects within the planned budget, time, and resources.


Location: Portugal

Remote: Yes

Willing to relocate: No

Technologies: Rust, Python, C/C++, TypeScript, LLM APIs, Distributed Systems, Embedded Systems, DevOps, Linux Kernel

Resume: https://uplinklabs.com

Email: alan@uplinklabs.com

Hands-on builder, fractional CTO/Architect. 25+ years of US tech experience. Full stack, with data-intensive backend experience. Multi-domain expertise: 0 -> 1 startup stacks, AI prototype cleanup for production, cloud, storage, embedded, autonomous vehicles, regulated industries. Problem solver combining tech and team leadership skills. Open to fractional and contract opportunities. US B2B invoicing available.


The easiest approach is to add short info in comments and longer info in some sort of document, and reference the doc in the comments.

Lightweight ADRs are a good recommendation. I've put similar practices into place with teams I've worked with. Though I prefer to use the term "Technical Memo", of which some contain Architectural Decisions. Retroactive documentation is a little misaligned with the term ADR, in that it isn't really making any sort of decision. I've found the term ADR sometimes makes some team members hesitant to record the information because of that kind of misalignment.
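A lightweight "Technical Memo" skeleton along those lines might look like the following (the memo number, headings, and content are purely illustrative):

```markdown
# TM-0042: Retry policy for the ingest queue

- Status: recorded retroactively
- Date: 2024-05-01

## Context
Why the code is the way it is, as discovered via git log/blame and interviews.

## Decision (if any)
What was chosen, or simply what currently is.

## Consequences
What this makes easier or harder going forward.
```

Dropping the word "decision" from the document type, as above, sidesteps the hesitancy around recording information retroactively.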

As for retroactively discovering why, code archeology skills in the form of git blame and log, and general search skills are very helpful.
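A sketch of that archeology in action, using a scratch repo so the commands are runnable (file names and messages hypothetical):

```shell
# Build a one-commit scratch repo to run archeology commands against.
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q && git config user.email dev@example.com && git config user.name dev
echo 'retry = 3   # see TM-0042' > config.txt
git add config.txt && git commit -qm "ingest: cap retries at 3 (flaky upstream)"
# Who introduced this line? (-w ignores whitespace-only changes)
git blame -w -L1,1 config.txt
# Which commits added or removed this exact string? The commit
# message carries the "why" that never made it into a comment.
git log -S"retry = 3" --oneline
```

On a real codebase, `git log --follow <path>` is also worth knowing, since it traces a file's history across renames.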


A fun use of this kind of approach would be to see if conversational game NPCs could be generated that stick to the lore of the game and their character.


To borrow some definitions from systems engineering for verification and validation, this question is one of validation. Verification is performed by Lean and its spec syntax and logic enforcement. But validation is the question of whether the Lean spec encodes a true representation of the problem statement (was the right thing specced?). Validation at the highest levels is probably an irreplaceable human activity.
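A toy Lean sketch of that gap (the definitions are hypothetical): a spec can verify cleanly while failing validation, because it under-specifies the actual intent:

```lean
-- Intent: "sort returns its input in ascending order".
-- A too-weak spec that only demands the output length be preserved:
def badSpec (sort : List Nat → List Nat) : Prop :=
  ∀ xs, (sort xs).length = xs.length

-- The identity function satisfies the spec without sorting anything.
-- Lean verifies this proof; only a human notices the spec is wrong.
theorem id_meets_badSpec : badSpec id := by
  intro xs; rfl
```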

Also, on the verification side, there is a window of failure where Lean itself has a hidden bug. And with automated systems that seek correctness, the risk is slightly elevated that some missed crack of a bug gets exploited in the dev-check-dev loop run by the AI.


I would guess by now none have that internally. As a rule of thumb, every major flash density increase (SLC, TLC, QLC) also tended to double the internal page size. There were also internal transfer performance reasons for large sizes. Low-level 16k-64k flash "pages" are common, sometimes with even larger stripes of pages due to the internal firmware sw/hw design.


Also due to error correction. Flash is notoriously unreliable, so you get bit errors _all the time_ (correcting errors is absolutely routine), and you can make more efficient error-correcting codes if you use larger blocks. This is why HDDs went from 512- to 4096-byte blocks as well.
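A rough sketch of the intuition, with illustrative numbers (not any specific NAND or HDD part's parameters): for the same total correction capability, one large block fails far less often than several small ones, because errors average out over the larger block.

```typescript
// failProb(n, t, p): probability that MORE than t bit errors occur in an
// n-bit block with raw bit-error rate p (binomial upper tail), computed
// via the recurrence P[X=i] = P[X=i-1] * ((n-i+1)/i) * (p/(1-p)).
function failProb(n: number, t: number, p: number): number {
  let term = Math.pow(1 - p, n); // P[X = 0]
  let cdf = term;
  for (let i = 1; i <= t; i++) {
    term *= ((n - i + 1) / i) * (p / (1 - p));
    cdf += term;
  }
  return 1 - cdf;
}

// One 4096-bit block correcting up to 8 errors, versus eight 512-bit
// blocks each correcting 1 error (same total capability of 8):
const p = 1e-4; // illustrative raw bit-error rate
const big = failProb(4096, 8, p);
const small = 1 - Math.pow(1 - failProb(512, 1, p), 8);
// big is orders of magnitude below small: the large block wins.
```

The same averaging argument is part of why the 4096-byte Advanced Format on HDDs achieves better ECC efficiency than eight independent 512-byte sectors.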


Garage is really good for core S3; the only thing I ran into was that it didn't support object tagging. It could be considered a more esoteric corner of the S3 API, but MinIO does support it. If you're just mapping to a test API, object tagging is most likely an unneeded feature anyway.

It's a "Misc" endpoint in the Garage docs here: https://garagehq.deuxfleurs.fr/documentation/reference-manua...


"didn't support object tagging"

Thanks for pointing that out.

