GitLab could be the perfect case study on AI-powered efficiency improvements. I have never interacted with a piece of software where, for every single problem I found, there was an open issue, at least 4-7 years old, just being shuffled around by managers adding and removing random labels.
Surely with all of these ridiculous developer productivity gains enabled by AI, they should finally be able to fix all of these ancient issues quickly and clean up the backlog.
Nope, “workforce reduction” thanks to AI again. This charade is getting boring.
The reason for this is: the only way to show productivity gains enabled by AI is to lay off people and pretend you are doing the same amount of work (while in reality you are severely dropping quality and accumulating technical debt).
I think that in these cases, what they need more than engineering or AI productivity is good management. Close issues that get shuffled around too much with "yeah, this is too vague", or "nah, we can't fix this", or "you know what, fuck you, I'm not doing it".
Productivity gains can also be achieved by reducing scope. The coming problem is that, because of increased productivity (idea -> working code), software gets too bloated and does too much, because product managers can and will say "yes" to everything. Until it becomes unmanageable.
And that's not a new problem, it's what basically every programming adage / wisdom going back 70 years is about.
I don't think this is literally true, but as an example an employee might not justify their wage in a single quarter but have a huge impact over time. In the short term firing them makes the company more profitable, but that's definitely not a good way to run a business. Everything is gambling now :/
On the other hand, most issues rot due to process overhead, not because the ticket is hard.
For example, why are you working on a four-year-old issue, and a trivial one at that, when you're already behind schedule on the tasks assigned to you? Now someone else who has their own things to get done has to review it? And even trivial changes can be annoying to truly review beyond a blind LGTM.
Just one of the many ways that pressure builds against the utopia of burning through old tickets.
Aside, watch out for the double standard we have for AI on forums like this. AI is expected to be so good that it can magically overcome the forces that keep engineers from working on old tickets (which were never related to engineer productivity) and, when AI can't, well of course it couldn't because AI sucks.
And who knows, the fix to some of these issues might be a hell of a lot more work now that the bug has been baked in and the "real" fix has become herculean.
Yes, there are plenty of those tickets in GitLab's public tracker; the "Allow defining scheduled pipelines triggers as yaml" one [1] is a good example. But still, it's a 6-year-old ticket that has clearly been shuffled around with nobody taking full ownership of it, probably because it was never deemed important by higher-level product management.
And this is for a "feature" and it's the first that came into my mind for $REASON, but there are similar tickets for very annoying bugs.
That’s been my last couple months. “Yes this is a bug / not optimal, but all the imagined paths / solutions are not great or a mountain of new code / requirements …”
This is it, exactly. In any org, current management only cares about you working on their ideas. If they thought up the project, then they get the credit for it. Customer ideas, and old ones at that, don't get leaders credit.
Basically, every organization is constantly rotting due to the cliqueish behavior of leadership.
Dunno how it is these days, but that reads like Android roughly 2012-2020.
I once found a looooong bug report thread on their issue tracker, 7ish years old, with all the usual waves of promises that a fix might make the next release, then silence, then repeat, and the usual challenges to the bug's status every time a release happened. Community members correctly diagnosed the problem in the first couple of years. Then, by like year 5, there was a (small!) patch posted by a community member, with multiple posters confirming it was good and fixed the issue, that the author and others had been begging Google to apply and get into a release for a couple of years. There'd been no responses from Google folks for a while.
That might be the worst one I saw, but encountering something like that was a few-times-per-year thing in my android app dev years.
On a similar note, Firefox doesn't support <input type="month">, which I was surprised to see (chrome landed it in 2012). I checked their issue tracker and... as you describe. Browsers are complex, of course, but they do stand out as a really glacial corner of the software world.
I'm certain that if they started doing that without a proper QA strategy/workflow, it would be GitHub all over again. You'd be able to watch the decline in real time.
But that’s the issue the parent is highlighting, you can’t just throw AI at these problems because the bottleneck is decision making, it always is, and AI is bad at that.
So nothing really changes in terms of product development velocity, it’s just headcount reduction.
But that’s not what their own marketing strategy communicates.
I think what OP means is that these companies keep promising AI is exceptional for one thing but for some reason it's never used for that. The only visible outcome of AI in these companies is that they spend so much on it they end up laying off employees.
Have any of the companies who went all in on AI gotten better at their job because they went all in on AI?
I'm going to be honest with you, I never even considered that the pinnacle of enterprise software would have a public issue tracker (do they?). If something doesn't work the way I expect I just accept it and move on.
Because an enterprise customer might decide it’s a needed fix tomorrow. I’ve seen it happen - 20 year old bug on the backlog and suddenly it jumps to the front of the line.
Maybe the 'microservices' approach plus agentic coding (self-directed agents) with the agency to pick up old tickets and open merge requests, maybe with a human in the loop, will fix all that.
To be fair, any LLM project gets a lot of stupid tickets, by virtue of a) marketing to users who aren't really developers and b) bad developers being more likely to use LLMs. Both of these groups are more likely to write bogus or non-reproducible bug tickets, as well as feature requests that don't make any sense. My guess is 10% of those 10,000 open issues are actual bugs or sensible requests.
On the other hand, LLMs seem perfect for triage and finding duplicates, so it's still surprising that they've let it get this bad.
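The duplicate-finding part doesn't even strictly need an LLM; a minimal sketch using plain TF-style cosine similarity over issue titles (the titles and threshold here are made up for illustration) already catches near-duplicates:

```python
import math
import re
from collections import Counter

def vectorize(text: str) -> Counter:
    """Bag-of-words term frequencies for a lowercased, tokenized title."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def likely_duplicates(titles: list[str], threshold: float = 0.5) -> list[tuple[int, int]]:
    """Return index pairs of titles whose similarity meets the threshold."""
    vecs = [vectorize(t) for t in titles]
    return [
        (i, j)
        for i in range(len(vecs))
        for j in range(i + 1, len(vecs))
        if cosine(vecs[i], vecs[j]) >= threshold
    ]

# Hypothetical issue titles for a triage pass
titles = [
    "Scheduled pipeline triggers cannot be defined in YAML",
    "Allow defining scheduled pipeline triggers as YAML",
    "Dark mode breaks diff view colors",
]
print(likely_duplicates(titles))  # -> [(0, 1)]
```

An LLM (or embedding model) would handle paraphrases this word-overlap approach misses, but even this level of triage would dent a 10,000-issue backlog.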
My colleague had a problem with commit messages, so now they're all written by AI. I don't know what depth of hell he managed to get the prompt from, but they're all now in the format "Updated /path/to/file: fixed issue in thingamabob", which means they're all at least 200 characters long and half of it is the file path, an absolutely pointless thing to put in a commit message. The best part is that whenever you look at GitLab or GitHub, instead of seeing the commit message next to the file you just see the file name again, then the message is cut off.
There is simply no chance that LLMs are saving you 30 hours of work a week, especially if they're doing something where you'd have to do the research yourself. Either you're just simply wrong, or you went from understanding the code you were writing to skimming whatever the magic box spits out and either merging it outright or pawning off the effort of review on someone else.
That's why I gave a range. I didn't say it's saving me 30 hours every week, I said 10 to 30 hours a week. So 30 is the max of the range, and I'd say the distribution is heavily weighted toward the low end. It really depends on what I'm doing, but I do think there are weeks where it has saved me 75% of the time I would have otherwise spent. I think there are two kinds of weeks where this is the case:
1. A week where I would have otherwise actually spent the majority of my time writing out and doing a ton of refactoring of a lot of implementation code. This is very rare for me, but it does exist. I can remember how it could actually take me a whole week to just "code up" meaningfully sized prototypes or greenfield implementations of some unambiguous thing. Truly, now, for that kind of work, claude code can save me full days of mechanical work.
2. A week where there is something very subtle going on that I have to figure out, probably having to do with some component or system I'm not very familiar with yet. Having an AI tool as a rubber ducky, or like a supercharged stackoverflow, can save me days of reading, debugging, working on minimal repros, etc.
Again, I'm not saying this is the common case at all. And estimating this kind of thing is always wildly inaccurate, so sure, take it with a grain of salt. But I know that a few times now, doing estimates based on my past experience, I've said "that will take me a week" (in case #1) or "gosh, I dunno, that's a tricky one, that might take me a week to figure out" (in case #2), and instead it only took me a day.
But honestly I think people focus too much on the high end of this range. The more valuable thing to me is the large number of weeks where it saves me that 10 to 15 hours, where I can then use that time to research new things, try more ideas, say "yes" to more things, or just not spend that time working.
Are you ashamed of other people finding out you used Claude? I think the co-authored-by bit should not be a setting at all, AI-generated code should be clearly identified.
I use Claude at work. I've never instructed it to make a commit, and it's never attempted to make one. It would fail anyway because my commits are signed by Yubikey and it requires presence detection, so I have to tap it.
But I don't want it to make commits, and I don't want to review its code in the Claude Code TUI, either. I want to read its changes in my text editor, decide what to drop or revise or revert, and then stage individual hunks or regions into logical commits.
If anyone asks I'll tell them I used an LLM, idc. I often mention it in commit messages or PRs. But I don't want LLM agents to write commits at all.
Basically what you’re saying is that if AI does anything on your computer, anything the AI impacts you should lose control over. If the AI touched it at all in any way, big or small, you now lose ownership of the actions your computer takes (on open source tools, I might add).
In case you need reminding of common sense, I’m supposed to be allowed to decide what my commit messages are because it’s my fucking computer.
I prefer that my software is not a morality police.
It's mind-boggling that people are trying to hide this; it tells you all you need to know about our "profession." The presence of that hook, or the like, in a place of business should be a fireable offense.
Let AI autonomously produce code of a quality that I care about and I might consider giving it credit. I don't know how other people write code but I come up with an idea and use a multitude of LLMs to brainstorm a reasonably comprehensive spec that any reasonably competent person can read and produce a working program from, including a locally working Q2 quant of Qwen 3.6. Even Kimi is as good as Claude at most coding tasks, and I don't see why any single agent deserves any credit for my design.
Let artists and filmmakers start watermarking their output with the tools they use and I might reconsider my decision.
Do Adobe or Arri or Red get authorship credit for the work their hardware and software do on projects? After all, artists would not be able to produce a single pixel without them. In a similar vein, you could make the argument that modern farming is sitting on your ass in your modern tractor while software handles most of the work. Does John Deere get rights over a quarter/half your harvest?
I am stuck between the luddites and the "artisanal" coders on this one. LLMs are neither as smart/useful nor as dumb/useless as people think. Unless your job involves producing useless garbage every single day, good software requires a lot of thought before the first line of code is even written. For those with serious domain knowledge, the thinking time can be compressed into minutes/hours rather than the days/weeks it might otherwise take.
LLMs are a tool. You either pay for it or you use the freely available ones on your own hardware. As long as the output is directed by my thinking, the output belongs to me. If it were up to me, I would abolish IPR (and even permanent ownership of land) as a category altogether, but that is a different discussion.
I think the Linux kernel's standard of disclosure via the "Assisted-By" trailer is the right move.
Makes it clear you used a bullshit machine, without implying it's an author.
...assuming you think using them at all is a good move - I won't deny they have some utility (though I'd argue much lower than many seem to think), but I do presently believe they're a disaster for humanity.
The ruination of the Internet with slop, the massive propagation of propaganda, and the insanely easy-to-wield tools for abuse are in no way worth the ability to accrue tech debt at 10x velocity (though to be clear, accruing tech debt can absolutely be a useful strategy, if one I personally dislike).
I'm sorry but none of this sounds in any way exciting or like a breakthrough. There are ASML machines that hit microscopic tin particles with a laser 50,000 times per second, but it's somehow an achievement we've managed to create a ping pong paddle that's fast enough to hit a ball? Precision robotics have been used in manufacturing for decades.
Is this a joke? You proclaim your support for a party that proudly posts AI-generated pictures of Obama as a monkey, shits out vitriol-filled messages on literally every holiday, and sends the gestapo to execute American citizens in the streets, and then you demand civil discourse? I'm sorry but that ship has sailed, there is no reason why someone should maintain a civil discussion with you.
How so? Half the people here show LLM delusion in every thread; more than half of the things hitting the frontpage are AI. Just look at the hours when Americans are awake.
Fucking Americans. Only 4% of the world population, yet somehow disproportionately dominating the global news headlines that make their way here.
A bunch of AquaSec stuff has been getting compromised since the initial incident at the end of February. Apparently in the latest attack they managed to compromise their internal organisation: https://opensourcemalware.com/blog/teampcp-aquasec-com-githu...
Same for me, it simply never crashes in my day-to-day use. That doesn't mean there aren't idiosyncratic cases out there, but anecdata can easily paint any number of pictures.