I generally agree with your underlying point concerning attribution and intellectual property ownership, but your follow-up comment reframes your initial statement: LLMs generate recombinations of human-written code without giving credit.
Stack Overflow offers access to other people's work, and developers combine those snippets and patterns into their own projects. I suspect attribution is low.
GitHub, Bitbucket, GCE, AWS… all have licensing agreements for user contributions which the user flagged as “public,” so I’m not exactly clear on your point if you are holding SO up as a bastion of intellectual property rights, distinct from the other places LLM training sets were scraped from.
But that is rarely how it works. In the dozens of different projects across ten or twelve companies I’ve had insight into, “doing Agile” is synonymous with “we have a scrum master, hold stand-ups, and schedule iterations,” while the simple reality is “Agilefall.”
I have had many successful projects where we spent approximately zero time on estimates. The fact that a successful approach is culturally seen as illegitimate to even talk about is a great example of why I wrote that last paragraph.
On the contrary, constraints often mean you don't need formal estimates. (I'll come back to prioritization in a sec.)
Startups are a great example. When you raise your first chunk of money, the size of that raise isn't really driven by a carefully considered, detailed plan with engineering hours estimated per task. What you get is basically determined by what's currently fashionable among angels and small-end VCs, plus who's doing your fundraising. (If you're Jeffrey Katzenberg and Meg Whitman, you can raise $1bn. [1] https://en.wikipedia.org/wiki/Quibi But the rest of us have to make do with what we can get.)
So at that point you have a strong constraint (whatever you raised) and some relatively clear goal. As I said, cost isn't nearly as relevant as ROI, and nobody can put real numbers on the R in a startup. At that point you have two choices.
One is just to build to whatever the CEO (or some set of HiPPOs) wants. Then you launch and find out whether or not you're fucked. The other is to take something akin to the Lean Startup approach, where you iteratively chase your goal, testing product and marketing hypotheses by shipping early and often.
In that latter context, are people making intuitive ROI judgments? Absolutely. Everything you try has people doing what you could casually call estimating. But does that require an estimation practice, where engineers carefully examine the work and produce numbers? Not at all. Again, I've done it many times. Especially in a startup context, the effort required for estimation is much better put into maximizing learning per unit of spending.
And how do you do that? Relentless prioritization. I was once part of a team that was so good at it that they launched with no login system. Initial users just typed their names in text fields. They wanted proper auth, and eventually they built it, but for demonstrating traction and raising money there were higher priorities. It worked out for them; they built up to have millions of users and were eventually acquired for tens of millions. On very little investor money.
Being great at prioritization makes estimation way less necessary. The units of work get small enough that the law of large numbers is on your side. And the amount of learning gained from the things released changes both the R and I numbers frequently enough that formal estimates don't have a long shelf life.
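To put a toy number on that (my own illustration, with entirely made-up figures): if per-task errors are roughly independent, they mostly cancel in the aggregate.

    import random

    # Made-up numbers: 100 small tasks, each truly taking 1 day but
    # estimated with a uniform error of +/-50%.
    random.seed(0)
    trials = 10_000
    rel_errors = []
    for _ in range(trials):
        total_estimate = sum(1 + random.uniform(-0.5, 0.5) for _ in range(100))
        rel_errors.append(abs(total_estimate - 100) / 100)

    # Each task's estimate is off by ~25% on average, but the
    # 100-task total is off by only ~2%.
    print(sum(rel_errors) / trials)

Real task errors are correlated, of course, so the cancellation is weaker in practice; the point is just the direction of the effect.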
So I get what you're saying in theory, but I'm telling you in practice it's different.
If you're saying there's some sort of wisps-and-moonbeams notion of estimation in everything that we do, sure. I'm not going to argue with that. One world, brah.
What I am talking about here, though, is a practice of software estimation where programmers produce detailed numbers on the amount of time requested work will take. Which is certainly the common meaning of estimating around here, and also what the original article is about.
I'm not upset. But as far as I can tell you've wandered off into some sort of philosophical space when I'm speaking of practicalities, without apparent recognition of the transition. I thought that was a bit ridiculous, something from the "wow dude have you ever really looked at your hand" school of conversation, so I made a joke. Apparently the joke didn't land for you. Alas.
Research is subject to constraints of money, time, and resources, but is not normally estimated in the sense that software industry people would use the term.
Yes, yet estimates are still made. The author of the article didn't use some highly formal definition of estimation, didn't imply one, and seems to be focused on devops (not software development) as a practitioner.
Estimates are difficult, and in unhealthy environments are weaponized against developers. That doesn't mean they're unnecessary or impossible.
I think the replies I am getting demonstrate why developers have estimates used against them: people forget that they are estimates, and they forget that when new information comes to hand and invalidates an estimate, a completely new one may need to be created to take the new data into account.
If developers (or anyone giving estimates) discover that the initial estimate was based on faulty information, then they need to push that information back to whomever they report to (Team Lead, Product Owner, Manager, customer, angel investor...). The receiver of that information then needs to decide how to react to the changes.
Yes, agile is a reaction to spreadsheet-driven development and some very dumb ways of tracking progress towards completion and managing work in general.
In my experience, people don't forget they're estimates; they just want to force developers to meet whatever number is most convenient for management.
If you want to fight back against that, my experience has been that giving terrible estimates or refusing to give them at all will not result in more autonomy or authority.
> If you want to fight back against that, my experience has been that giving terrible estimates or refusing to give them at all will not result in more autonomy or authority.
In my experience giving terrible estimates or refusing to give them at all is the least bad course of action. It wastes less of your time than any realistic alternative, it does no noticeable damage to the business or your own position, and the people who want to paint you as just trying to avoid accountability were going to find a way to do that anyway.
Research is estimated. Sometimes those estimates are hilariously bad ("Computer vision is easy, a summer research project for a student should be enough"), but more often than not it's "We expect that this research will take someone doing a Ph.D. approximately 3-5 years."
The entire premise of a project is "look at this, with the intent to find X, and, if that's not possible, break it down so that we can create more projects to work toward that goal," which is an estimate, or a breakdown into subprojects that also come with estimates.
I, and others, don't agree with the blanket statement that "no estimates" is not a legitimate argument in any scenario. Can you expand on why you think there isn't a single case where estimates don't add value? Similarly, is there anything specifically in that post's claims that you think was incorrect, leading to their false conclusion?
Okay, a scenario where you're building a hobby project alone and you don't care if or when it gets finished would be one where estimates aren't needed.
There is no scenario where skipping estimates is appropriate when developing software professionally, or even as a side project where others are expecting you to complete work at some point.
One of the many misconceptions in the original comment in this thread is that "worthwhile software is usually novel", which is not the case without a very specific and novel definition of worthwhile that I don't believe was intended.
If software isn't novel, that means some other, existing software does the same thing just as well in the same way on the same platform. So, unless it's a hobby project you're building alone, why don't you just use the existing software?
I think that writing software that isn't novel fails to be worthwhile by a perfectly ordinary, mainstream definition of "worthwhile".
So you would consider a CRUD app with some basic business rules to be novel? Basically meaning that any software that requires any development effort is novel?
That's a completely valid definition of worthwhile software, but to claim it's impossible to create an estimate to complete said development is absurd.
You just keep saying things are absurd or obvious but not putting anything behind it.
I hope this isn't a semantics game where things like "1 - 6 months" counts as an estimate in this context.
The point way back up this thread was that accurate timelines for complicated, novel work have large error bars, but those error bars aren't as bad as the equivalent error bars on estimating whatever "return" the work is being pitted against.
I wouldn't consider something like "1-6 months" as a valid estimate, as that would indicate there is too much uncertainty and it needs to be broken down into subtasks that can be estimated with much less variance.
I've written what is probably several pages now in response to two individuals who are redefining terms in order to play the exact semantic games you mentioned, all to claim that no estimation of any sort needs to be done. We seem to be done talking past each other now that I've explicitly pointed out their usage of non-standard terms and my suspicions of why (having also, unfortunately, lived through software development managed by Gantt chart and other unpleasant experiences where someone who had no idea what they were managing was in control of a project), which is fine with me.
Feel free to describe your experience in practice when working in an organization where software developers answer to no one but themselves and are never asked for any justification for their progress or any projections of when they will be finished (both of which would require estimation to provide).
If you are able to tell stakeholders something like you'll be done in 1-6 months or provide no insight at all into when your tasking will be done, do no tracking of progress internally, and perform no collaboration around the completion of synchronous tasks within your team, I'll acknowledge no estimation is taking place during that process.
For novices, LLMs are infinitely patient rubber ducks. They unstick the stuck, helping people past the coding and system management hurdles that once required deep dives through Stack Overflow and esoteric blog posts. When an explanation doesn’t land, they’ll reframe until one does. And because they’re confidently wrong often enough, learning to spot their errors becomes part of the curriculum.
For experienced engineers, they’re tireless boilerplate generators, dynamic linters, and a fresh set of eyes at 2am when no one else is around to ask. They handle the mechanical work so you can focus on the interesting problems.
The caveat for both: intentionality matters. They reward users who know what they’re looking for and punish those who outsource judgment entirely.
> 1. The raw code with no empty space or comments. 2. Code with comments
I like the sound of this but what technique do you use to maintain consistency across both views? Do you have a post-modification script which will strip comments and extraneous empty space after code has been modified?
As I think more on how this could work, I’d treat the fully commented code as the source of truth (SOT).
1. Run the SOT through a processor to strip comments and extra spaces (a sketch of such a processor follows this list). Publish to a feature branch.
2. Point Claude at the feature branch. Prompt for whatever changes you need. This runs against the minimalist feature branch, and the new code gets committed with comments and readable spacing.
3. Verify code changes meet expectations.
4. Diff the changes from the minimal version, and merge only that code into the SOT.
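For the processor in step 1, here's a minimal sketch. It assumes a Python codebase and uses only the stdlib; any other language would need its own stripper, and the blank-line pass is naive about multi-line strings.

    import io
    import tokenize

    def strip_comments_and_blanks(source: str) -> str:
        """Drop comment tokens, then remove blank lines and trailing spaces."""
        tokens = [
            tok for tok in tokenize.generate_tokens(io.StringIO(source).readline)
            if tok.type != tokenize.COMMENT
        ]
        stripped = tokenize.untokenize(tokens)
        # Naive cleanup: this also removes blank lines inside triple-quoted strings.
        lines = [line.rstrip() for line in stripped.splitlines() if line.strip()]
        return "\n".join(lines) + "\n"

Since the commented version stays the source of truth, the stripped copy is really a build artifact: regenerate it on every push to the feature branch rather than maintaining it by hand.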
1. Run into a problem you and AI can't solve.
2. Drop all comments
3. Restart debug/design session
4. Solve it and save results
5. Revert the code to the commented version and port the fix in (see the git-based sketch below)
If that still doesn't work:
Step 2.5: drop all unrelated code from the context
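A rough sketch of how steps 2 and 5 could look with git as the safety net, reusing the strip_comments_and_blanks helper sketched earlier in the thread (the paths and workflow here are my assumptions):

    import pathlib
    import subprocess

    def minimize_for_debug(paths: list[str]) -> None:
        # Step 2: overwrite working copies with comment-free versions.
        # Nothing is committed, so git still holds the commented originals.
        for p in paths:
            f = pathlib.Path(p)
            f.write_text(strip_comments_and_blanks(f.read_text()))

    def restore_comments(paths: list[str]) -> None:
        # Step 5: restore the commented originals from git, then port the
        # fix over by hand (or via a diff against the minimized session).
        subprocess.run(["git", "checkout", "--", *paths], check=True)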
In this scenario the person who wants to be paid owns the output of the agent. So it’s closer to a contractor and subcontractor arrangement than employment.
1. They built the agent and it's somehow competitive. If so, they shouldn't just replace their own job with it; they should replace a lot more jobs and get a lot richer than one salary allows.
2. They rent the agent. If so, why would the renting company not rent directly to their boss, maybe even at a business premium?
I see no scenario where there's an "agent to do my work while I keep getting a paycheck."
The problem is that the organizing principle for our entire global society is competition.
This is the default, the law of the jungle or tribal warfare. But within families or corporations we do have cooperation, or a command structure.
The problem is that this principle inevitably leads to the tragedy of the unmanaged commons. This is why we are overfishing and polluting the Earth, why some people are free-riding and having 7 children with no contraception, etc. Why ecosystems — rainforests, kelp forests, coral reefs, and even insects — are being decimated. Why one third of arable farmland is desertified, just like in the US Dust Bowl. Back then it was a race to the bottom, and the US Govt had to step in and pay farmers NOT to plant.
We are racing to an AIpocalypse because what if China does it first?
In case you think the world doesn’t have real solutions… there have actually been a few examples of us cooperating to prevent catastrophe.
1. Banning CFCs via the Montreal Protocol, repairing the hole in the ozone layer
2. Nuclear non-proliferation treaty
3. Ban on chemical weapons
4. Ban on viral bioweapons research
So number 2 is what I would hope would happen with huge GPU farms. As a global community we know exactly what the supply chains are; heck, there is only one company in Europe doing the etching.
And I would also want a global ban on AGI development, or at least on leaking model weights. Otherwise it is almost exactly like giving everyone the means to make chemical weapons, designer viruses, etc. The probability that NO ONE does anything that gets out of hand will be infinitesimally small. The probability that we will be overrun by tons of destructive bot swarms and robots is practically 100%.
In short — this is the ultimate negative externality. The corporations and countries are in a race to outdo each other in AGI even if they destroy humanity doing it. All because, as a species, we are drawn to competition and don’t do the work to establish frameworks for cooperation the way we have done on local scales like cities.
PS: meanwhile, having limited tools and not AGI or ASI can be very helpful. Like protein folding or chess playing. But why, why have AGI proliferate?
It's the equivalent of outsourcing your job. People have done this before, to China, to India, etc. There are stories about the people that got caught, e.g. with China because of security concerns, and with India because they got greedy, were overemployed, and failed in their opsec.
This is no different, it's just a different mechanism of outsourcing your job.
And yes, if you can find a way to get AI to do 90% of your job for you, you should totally get 4 more jobs and 5x your earnings for a 50% reduction in hours spent working.
Maybe a few people managed to outsource their own job and sit in the middle for a bit. But that's not the common story, the common story is that your employer cut out the middle man and outsourced all the jobs. The same thing will happen here.
The trick is to register an LLC, and then get your employer to outsource the work to your consulting company. You get laid off, and then continue to work through your company.
Only mild sarcasm, as this is essentially what happens.
In my experience that “blink of an eye” has turned out to be a single moment when the LLM misses a key point or begins to fixate on an incorrect focus. After that, it’s nearly impossible to recover, and the model diverges noticeably from its prior behavior.
That single point is where the model commits fully to the previous misunderstanding. Once it crosses that line, subsequent responses compound the error.
From my viewpoint you are conflating software quality with ambition. All software develops iteratively. Tools now celebrated for quality and consistency (commercial and OSS alike) shipped from states where they were neither. Jerm-CAD existing gives it a shot at improvement. The alternative is it doesn’t exist.
You actually don’t. Technologists have more leverage than most workers. There’s no shortage of jobs that don’t require building surveillance states or engagement addiction engines.
At this point, the path from what these teams of people are building to dystopian outcomes is well-mapped. Whether it’s an explicit goal is irrelevant because if you can reasonably foresee the harm and proceed anyway, you’re making a conscious choice to enable it.
I don’t think it’s fair to call someone who used Stack Overflow to find a similar answer with samples of code to copy to their project an asshole.