maybe it means they were never really as smart as you thought?
Not meant to be snarky. It's been two decades now since my first wide-eyed entry into the workforce, moving for new opportunities, meeting new people. It's been great. There are a lot of smart people out there. I also realize that many people I saw as smart had access to more content than I did. I still appreciated their sharing; it was enlightening to me. But after 20 years, I think back and it's literally quoting things from smart YouTube videos and regurgitating the latest thought leaders.
We all do this, but like you, what's meaningful to me is the chewing, the dissection and synthesis, coming together to share different perspectives, and so on. I've had those friends too! It's just not 1:1.
You might be right, but they used to read much more and our arguments used to be deeper. The changes I'm seeing in them are highly correlated with their increased use of AI.
Maybe it's that AI allows them to indulge their shallowness/laziness while giving them the impression that they're not doing that.
Anyone claiming these systems are for the "democratization of software engineering (or any knowledge field)" is simply not grasping the reality at hand.
It's not like our corporate leadership is being subtle in any way about their openly stated end goals here. We are simply instructed to continuously burn our enormous piles of "thought tokens" for their machines. The same machines only made possible by the theft of humanity's collective works at unimaginable scales. We must hold up these statistical facsimiles of human work, now rendered by machine, as the inevitable future output of humanity. To do otherwise is the way of the luddites.
The gods of unbounded growth, efficiency and productivity have come to demand our sacrifice. Who are we to stand in their way when countless skilled laborers before us have fallen to automation? Our number has been called as it were, and to reject what they demand is to reject "progress" itself. The march continues as it has since before the dawn of industrialization, relentless and indifferent to any and all that are crushed beneath it.
I don't know when the last bastion of "inefficiency" will fall, but at some point humanity may collectively grow to regret some forms of automation. Time will tell.
There's a certain simplicity and beauty in honing and stewarding a craft toward mastery that automation doesn't provide. I feel this is an innately human desire. Many cultures in the past placed great emphasis on this very personal endeavor. Sadly, it seems each decade we slowly suppress these basic human needs, too tantalized by Western values of having ever "more" of everything, much to our detriment.
You could be the only one producing sane products in an insane world. But you won't get paid for it. The people mandating Claude Code are the people dictating large-scale money flows.
> (Claude Code got mandated at my work this week. Like literally engineers must use CC.)
As I haven't been a typical full-time employee in software development for some time, could you possibly just like leak the entire email where this was announced or something? (open invitation to others who could too, if parent cannot) I'm very curious to see how it was announced and what possible reasoning they could have for that.
Don't get me wrong, I use agents for lots of coding too, but forcing people to use tools they might not want to use doesn't feel like the right way. I was also allowed to use vim whenever I wanted for most of my career, something that feels more and more rare when speaking with people just starting their careers now.
The official declaration came about in a team meeting. We're a tiny startup, 3 full-time eng with the CTO co-founder driving the AI transformation. We have scheduled onboarding meetings to get the entire company finding automation opportunities with Claude specifically (CC, Cowork). For eng specifically, there is "acknowledgement" that we all may have different setups, but we should all be unifying our prompts, strategies, pipelines, agents, and so on... it's CC. I still use Cursor, so I'm the only one not on CC; my eyebrow was especially raised.
I don't want to be doxxed lol, so I'll share only in the sense that on one hand I am unsettled by the mandate; on the other hand, for a tiny startup it seems to be the state of the industry, less so company specific.
Startups (think they) are in a fight for their life, so the mandate comes from everyone contributing 10x or whatever. The expectation is that agentic coding should 3x/5x/10x your feature output, because that's how we're going to win.
I have many thoughts. But I'll focus on that last one: the mandate is literally more features, more code, because it gets us closer to winning. In my small engineering circles, surprisingly, this is like the de facto stance. All things considered, might as well ship more code!
tbh I have been struggling with the state of software in the agentic era. I'm pro LLM, there's undeniable leverage in its coding abilities. I do want to write this post! I'll start with hopefully this distillation:
In the agentic era, if shipping code is a commodity, then why ship more code? We can say this for the entire concept of "build": we've commoditized building anything software related, and I don't understand how this translates to therefore BUILD MORE.
So then the more nuanced conversation is that taste and judgment are the leverage. I agree with this. But it's hand-wavy in that we can't agree on what taste is. And accelerationists hold that it can all be encoded. More agents. I don't even disagree with this entirely.
What I'm missing is how AI-native software engineers are going to brute-force their way to PMF, to judgment, taste, enlightenment, consciousness.
But why is this a straight line? Just add more agents. Add a "designer", a "sales", and a "user researcher" agent. Just add more agents.
You don't know what you don't know, is my retort. It's surreal to me that we're living through the equivalent of the smartest software engineers effectively giddily prompting, with full conviction mind you, "add more pop!" to make their thing better.
Man, I completely agree with your thinking here. I've been trying to be more active in online communities, to try to discuss this exact idea.
LLM code can be leveraged, but pretending that tokens are just going to turn into money printers at some point is not productive. The primary source of software's value to an end user is the thought that was placed into it. Where does that go for the AI-natives? As you say, they are seemingly brute forcing software engineering, at least so far.
One thing I have been considering is how LLMs primarily change the "build vs buy" calculus for a fair number of software niches, particularly things like developer tooling and small libraries and packages: partially due to a projected increase in supply chain attacks, and partially due to the changing standards of engineers. There's no longer anything stopping someone from working with an ugly or clunky syntax, presuming it's a well-documented standard. So many "developer experience" tools are going to hit this; Tailwind primarily comes to mind.
It's a sort of "erosion" of niches in the current landscape - although to me this does not really work out for the worse in the long term, since again, the thinking in the process will need to just go somewhere else.
We did a similar Claude Code mandate a few weeks ago.
The motivation was people being so allergic to tests and automation that making them use superpowers not only produced better code, but also started adding a test pyramid.
The mandate was actually phrased as: you must produce industry-standard code, and if you struggle with that, you can use CC to bridge the gap.
Honestly I worry that this way devs will produce higher quality code, but will not understand why, how to measure the “quality” and steer towards it themselves.
At this point, though, the founders were pretty adamant about the code quality and lack of tests, so this seemed like a reasonable way forward for the company, and I am curious to see how such a mandate affects code, deliverables, and individuals' knowledge.
So far it seems to be working as intended, but it is early days.
> Motivation was people being so allergic to tests and automation
Frankly, if your devs weren't doing this stuff before, forcing them to do it with AI assistance is probably going to be counterproductive. If it is possible to produce good-quality code and tests with LLMs, it is not likely to happen by forcing LLMs on people who didn't care about code quality or tests before.
I sometimes catch myself writing too much manual code, so I re-prompt my work in another directory for Claude and let it do whatever the fuck it wants, because I know for a fact that my company is monitoring Claude usage as a metric.
> And we don't even know it, don't realize it? don't care?
People here are warning about this daily. And outside this tech bubble full of people trying to profit from it, higher percentages of people elsewhere do realize and do care.
That's a good point. But I also think about the unprecedented adoption of ChatGPT: global mass consumer adoption across all ages, and people giddy about offloading their jobs to ChatGPT without connecting the very simple straight line to "you think your boss is going to pay you to ask ChatGPT?"
Good point. I used to heavily criticize rote memorization for being distinct from intelligence because: school. It's painful to go through so many years of school where everyone is seemingly complicit in rote memorization being equal to "winning".
I grew up, and it still pains me, but it was better explained to me that memorizing things doesn't equal intelligence; rather, people who remember a lot have a much larger pool of information to draw upon. So whatever definition I use for intelligence, like synthesis, a large pool of information to draw from probably helps a whole lot.
I would also say that it matters how you remember things. It's possible to memorize a large quantity of facts without really meshing them together into a coherent whole, and this is a lot less useful than remembering things through the relationships between them.
Tech debt used here is likely a catch-all term, and you're disagreeing, reasonably so, with one definition.
I think human understanding of the surface area of a company is already very unwieldy. AI balloons the surface area. At some point, using more AI to solve AI is reasonable! But to whatever extent a human needs to interface with and manage this world, that's the accrued debt.
Nah I think it really is more nuanced than that. It is true that a non-technical person's vibe-coded side-hustle is completely different than how a professional developer may ship genAI code, but we're willfully glossing over the real problem that professionals are pushing out TONS of genAI code that's closer to vibes than it is to the pre-AI expectations on pushing to prod.
For CRUD apps though, the intern closing the ticket literally 30 minutes after it's created is really hard to battle against. Especially when those tickets were created by suits.
I generally agree that while I think vibe-coding is here to stay, it's different from designing useful products and systems, and I don't know how to convince colleagues that we should uhh be careful about all this code we're pushing. I fear all they see is the guy aging out.
I've read the thread and in my mind you're missing that LLMs increase the surface area of visibility of a thing. It's a probe. It adds known unknowns to your train of thought. It doesn't need to be "creative" about it. It doesn't need to be complete or even "right". You can validate the unknown unknown since it is now known. It doesn't need to have a measured opinion (even though it acts as it does), it's really just topography expansion. We're getting in the weeds of creativity and idea synthesis, but if something is net-new to you right now in your topography map, what's so bad about attributing relative synthesis to the AI?
Coordinating with people is hard and only gets harder as you live. And actually, finding someone that is earnestly receptive to hearing you pitch your half-baked startup ideas (just an example) and is in some capacity qualified to be at all helpful, is uhhh, not easy.
Really? Sometimes I think I'm not very social, then I read something like this. Don't you have any friends? Colleagues? Maybe that's the problem you need to solve rather than sitting in a room burning energy for endless token streams with LLMs that anyone has access to?
Ah, I couldn't help myself practice my creative writing in the other reply. This reply is more constructive:
Both LLM-based rubber-ducking and human discussions seem like a win-win. I see no reason to jump to labeling social connections unhealthy just for pairing with LLMs.
lol. nobody is proposing this "well if not friends, then...". Appreciate your concern. I am fine.
This is for Internet posterity: thought-partnering with AI does not in fact make you a sorry, socially inept loser who needs globular-toast to come in and help you dial that helpline.
Also: one's friends do not, in reality, want to thought-partner about work issues, esoteric hobbies, and that million-dollar idea.
Also: these friends, every and any one of them it seems, will not in fact speak the word of God into your ear as manifest insight for said work issue, million-dollar idea, and so forth.
But a poorly written prompt is not a good prompt. What are you really going to do with a shit prompt? It's meta: we need better writers all the way down.
Rather than dismissive, I see it as effort-intensive. The conclusions can be negative, but they've spawned so much discussion, which I think is great.