It seems AI has made it cheap to produce information, but now you have to spend more time parsing it. And it’s the less competent/useful people spending less time producing more information, while the more useful people spend more of their valuable time parsing that information. This is why I’m skeptical of LLMs ever becoming a net benefit in most organizations.
That is pretty much my existence at $MAJOR_TECH_COMPANY now. Inexperienced security engineers running bots against my codebase and sending me pages long tickets with their "findings". There might be a couple of interesting nuggets here and there but by and large the reports are just noise. This churn is actively taking away from my ability to actually respond to customer-impacting issues because "security is always our top priority".
Well, you can use LLMs to parse LLM-generated slop. They make nice summaries. I have taken this approach with people who send me obviously LLM-generated text; I simply run it through an LLM, paste the summary, ask them "Is this an accurate summary?", and then ask them for their original prompt.
They're getting paid to encode some inane prompt into paragraphs of text, and then they're getting paid again to summarize that back into something with even less value than the original prompt. And they're making money hand over fist because people are happier to play that game rather than just pushing back on the jerks sending them pages of generated garbage in the first place.
I would agree with you, except right now the walls of text come from people using the free or very cheap versions of ChatGPT, et al. So there's not even anyone making money off of it.
Ah yes, take my single sentence, blow it up to 3 paragraphs with LLMs, and then the person reading it can have an LLM summarize it in a single sentence.
Agreed that just step 1, or steps 1 and 2, would be depressingly pointless, but steps 3 and 4 make this the equivalent of sending someone a let-me-google-that-for-you kind of link, does it not?
Caught out like this, I imagine many people will get the point that you'd rather have their direct input.
Worse still, the person that is the most egregious about doing this seems to appreciate it and responds with "Yes, that's right!" and just ignores (or has no idea what I'm talking about) when I ask for the original prompt.
I simply ask for a positive affirmation of the summary so that I can act on that, instead of other things.
The thing is, eventually these products will be more integrated into business workflows and have access to all the context, so the three paragraph expansion probably will be a significant improvement upon the original input.
And either that person won't be employed anymore, or the thing they were asking for in the first place will be automated for them.
I've already got my agent building a dossier for everyone we interact with. I haven't started training it on their writing style so I can mirror back to them... yet.
My employer already records every scrap of communications, I'm running everything on corporate infrastructure, and they sent the information to me.
Giving the AI knowledge of the org chart, who works on what, how they prefer to communicate, what their goals/biases are, is no different than what every ape implicitly collects in their own head.
Oh I know. In the past month I’ve moved several thousand dollars in spending away from companies that turned their support into a useless understaffed AI program.
The disease has spread to six figure enterprise contracts hallucinating about their own APIs.
As these products improve, one person sending the output and not the prompt will remain useless. The prompt captures the intent and level of real consideration of the person sending it, the receiver can augment that with additional information if they want to.
Professional communication has a completely different goal than a student essay, and it's weird you conflate the two. A student paper is useless as an artifact, the actual value is for the student to learn how to write the paper. If a coworker sends me a long email for me to read it should provide some actual value.
I'm arguing against people who essentially say that running the LLM is useless; just send the prompt. Obviously that is true if the person adds zero additional value, but then that person probably sucked as a colleague before LLMs anyway. When you use an LLM agent correctly you are adding value beyond just the prompt, and those three additional paragraphs won't just be extra noise. Especially if the agent is automatically fed your personal context.
An essay states a hypothesis and then uses first- and second-party sources to validate it. I'm not conflating anything; it's just a good abstract example of this type of knowledge-synthesis work, which is why we make kids write them.
A business strategy proposal is nothing more than a specific type of essay where the research sources are internal research results, market trend analysis, etc.
A technical design doc is an essay about the best way to implement a feature.
An "executive summary" is just an abstract, and the MBR puts the latest research citations and raw results in bullet points.
> I've already got my agent building a dossier for everyone we interact with. I haven't started training it on their writing style so I can mirror back to them... yet.
have you asked these people how they feel about this? have you asked them for permission, for their consent to do this with their communications to you?
what you’re doing sounds incredibly creepy. like, meta/facebook kind of creepy. granted, it’s at a more limited scale, but it’s still creepy af dude.
fwiw, if i was your colleague and you asked me how i felt about you doing this with me, i’d be seeing about getting HR involved.
Um, I absolutely expect my colleagues to update their internal model of me every time we communicate, to a greater or lesser degree depending on how much that communication deviates from their expectations, or how much new information it contains. In fact, that is essentially the purpose of communication.
Do you think you are not constantly being "influenced" to do what people want from you?
What do you think happens during a peer review or promotion decision?
What do you think the pile of data in SharePoint / GDrive represents?
You think HR will care about someone taking prolific detailed notes at work?
I did phrase my comment in a glib way to draw out this type of reaction. But this type of stuff is what "intelligence augmentation" will include, and the corporate panopticon is already alive and well anyway.
their mental model. the human being’s mental model. the one in our private head. not some model on a corporate server, some secret “dossier” on every interaction you’ve ever had with them. you’re basically creating your own black book / surveillance tool on everyone you interact with dude.
just because the corporations do this to us doesn’t make it okay to do it to each other. just because your employer does it doesn’t mean it’s okay to do to your co-workers. like, there has to be a degree of trust between colleagues dude.
compiling a record of every single thing anyone has ever said to you, an individual human being who is not a corporation or a machine, all for the purposes of “it makes my emails better” is just plain fucking creepy.
i think you might need some time away from the screen. seriously.
> i did phrase my comment in a glib way to draw out this type of reaction.
maybe, just maybe, it would be a good idea to take a bit of time to seriously think about why being glib about this super creepy thing you’re doing is not a good thing.
bit of self-reflection. the thing us humans are supposedly still capable of doing and the machines are not.
does that make it morally okay to do with your colleagues?
like, jfc, these are fucking people we’re talking about building “dossiers” of. people the person works with, where a degree of trust and bonding is necessary. people they probably spend at least a quarter of their waking hours interacting with.
and your defence for it is “well, google does it”.
the best engineers know what not to build. they don’t build every single thing under the sun because they can.
also, don’t you have to explicitly agree to google’s terms for that stuff to use their services?
Nice article!
I wrote something similar this year too, after seeing 5 bullet points of information stretched out into homogeneous AI slop too many times.
> What I remember most about the 90s was the overwhelming optimism.
To me it felt we were slowly making the world better for all. Progress was happening and would continue to happen.
Now it feels like we are rapidly on the path to a dystopian, Elysium-like future. A dystopia for everyone but the sociopathic ultra-wealthy who want to rule over us. And they’re not even hiding their intent from us anymore.
This is where the media (or your media bubble) failed you. Trump was always this way. In his first term he significantly increased the amount of bombs dropped and number of countries bombed over previous presidents.
Democrats shouldn’t have wasted effort on trying to reduce student loans simply because the constituency (students) didn’t even give them recognition for it. They simply blamed Biden when SCOTUS blocked it.
But more generally we shouldn’t do one off things like this when we still haven’t fixed the cause of the problem. A better policy would be to start by making community college or first two years of college free or something like that.
I am onboard with free community college. Unsure kids can figure out their majors before they fork out beaucoup bucks for pricier institutions. They should also be able to default on loans they can’t repay.
Biden is guilty of pretense. He knew very well, or should have known, that this manoeuvre had a very small chance of being upheld. It’s akin to Dems or Reps in Congress supporting or opposing something while knowing the opposite of their stance is the foregone outcome, just to look good to their constituents.
Biden is guilty of being a Democrat in the middle of a Republican putsch. If Trump had tried forgiving student loans it would have gone off without a hitch. Congress would have fallen in line and SCOTUS would have favored him. Everyone complaining that Biden was practicing communism would be praising Trump instead.
The way AI is being used feels like it is proving that, in many orgs, what has always mattered has been the appearance of work, not results of work. Will we wake up in a few years and find out we’ve fired all the doers and are now overloaded with the fakers?
I find that to be a very defeatist take. It always mattered how much value you provide to the business. Writing pretty code or arguing about some implementation detail never really mattered. If you are good at coming up with solutions to problems AI is just one additional tool in your toolbox and personally it allows me to do much more than before.
There were fakers before, and there will be fakers after.
> Writing pretty code or arguing about some implementation detail never really mattered.
True, in the same sense that sharpening your tools if you're a tradesman doesn't matter to your customers: what matters is that the job you deliver is good.
Making sure you put all electrical wiring in conduits rather than buried in plaster is not what most customers care about, but it will mean easier repairs and quicker improvements in the future.
Writing good (not necessarily "pretty") code and arguing about implementation details means you will have an easier time delivering your work, both now and in the future. You have a better chance of delivering code that can be maintained and understood by yourself and others, including the people who come after you.
Furthermore, when done right, these discussions keep a trace for understanding bugs and for code archeology when in the future you're trying to understand how decisions were made and the tradeoffs considered, which could massively help refactors, rewrites and decisions to drop certain parts of the code base.
Of course, you can sharpen a tool too much or at the wrong angle, or you can make a mistake and fill up your conduits with plaster, but you stand a much better chance of ending with a better, cleaner, more maintainable and understandable product if you do practice those steps than if you skip them altogether.
Are you willing to wake up at 3 AM when that "valuable" AI-written code pages on-call?
I agree there is some value in AI tools, but implementation details do matter. People shouldn't be pushing unread code to prod. That's how you end up with security holes and other bugs. That's how you end up dropping millions of orders on Amazon.com.
I think the last ten-plus years have taught us that massive security breaches are more of an insurance-claim problem, plus some $4/mo credit-monitoring payouts.
And major corporations certainly don’t seem to care that much about leaving massive amounts of money on the table from jr level tech issues. I see it all the time. I mentioned a few from Walmart, Meta, and Amazon recently.
Everyone talks like these things matter, but the results say everyone is just playing pretend.
Excuse me? Amazon lost more money in one day than most companies have in revenue, from dropped orders. I would say that matters. Believe it or not, the systems we work on do things that matter in the real world.
Seems to be an instance of the prevention paradox: security (in general) is taken seriously enough that major incidents are rare enough that people think security does not matter that much.
The quality of our work is too subordinated to business leadership, who see the forms of technical insurance we build into software development processes as fat and are fundamentally opposed to doing things right. Besides solidarity, this is the major reason for tech workers to unionize. We won't, because we don't have any sense.
> It always mattered how much value you provide to the business.
My experience says the opposite: the value you provide to the business is irrelevant compared to the value you provide people in positions of power in said business. These are mutually exclusive things.
I've saved employers entire multipliers of value relative to my TC; that value was irrelevant compared to folks who gamed AI tool usage to look better on dashboards to those in power seeking to have loyalists under foot. I've reduced product build times exponentially and halved build costs, but that value was irrelevant to those whose power was dependent on higher costs and longer times. I have contributed substantially more value to businesses than I cost, yet I am first out the door because I deliver value, not blind fealty.
Business value is irrelevant compared to personal power.
Actually I think we will see a faker takeover and then a doer conquest. All those going now take the recipe with them and are capable of cooking it elsewhere. Elsewhere being a place without AI management.
There has been a shift toward software mass production over the last decade(s), and AI is now speeding that process up dramatically. Most software will be produced with AI and "cog coders", similar to a production line in manufacturing.
A few (the good ones) will find niches and "hand craft" software, similar to how today you can still buy hand-forged axes etc. Obviously the market for these products will be much smaller, but it will exist.
If you love programming, you should try to get into the second category. Be a master craftsman.
Imagine that you're given a business problem to solve. You represent the process of writing the code with a graph - each vertex is a git commit. We consider the space of all possible git commits, so the graph is infinite. All vertices are connected with directed edges, and each edge has a cost. If you are at commit A and you want to go to commit B, you have to pay the cost from A to B. Your goal is to find a relatively short path from the empty git commit to any vertex that contains code with some specific observable business properties.
You might notice that not everyone is equally smart, so when giving this task to real people, we'll associate "speed" with each person. The higher the speed, the lower the paid costs when traversing the graph. I'll leave the specifics vaguely undefined.
Since part of the task is to discover information about the graph, we also need to specify that every person has some kind of heuristic function that evaluates how likely a given node is to get you closer to some vertex that can be considered a goal. Obviously, smarter people have heuristic functions that are closer to ground truth, while stupid people are biased more towards random noise. This also models the fact that it takes knowledge to recognize what a correct solution is.
This model predicts what we intuitively think - smart specialists will quickly discover connections that take them towards the goal and pay low costs associated with them, while idiots will take the scenic route, but by and large will also eventually get to some vertex that satisfies the business requirements, even if it's a vertex that contains mostly low-quality code, because for idiots the cheap edges that seem good at first glance are the only edges they can realistically traverse.
Obviously, if you have a group of people working on the same task, you'll reach the business goal faster. Therefore, a group of people is equivalent to one person with higher speed, and some better heuristic.
This conclusion suddenly creates a well-known, but interesting situation - each smart specialist can be replaced by a group of idiots. Or, the way I heard it, "the theorem of interns - every senior can be replaced by a finite number of interns".
What AI does is it increases people's speed. Not the heuristic function, but the speed. Importantly, the better the heuristic function, the smaller the speed gains. Makes sense - an idiot who doesn't know shit and copy-pastes things from ChatGPT will have massive speed gains, while a specialist will only modestly benefit from AI.
From a business perspective though, by having more idiots write more slop with more AI we traverse the graph significantly faster. Sure, we still take the scenic route, and maybe even with AI we take the really fucking long scenic route, but because the speed is so high, it doesn't matter.
And because AI supercharges idiots more than smart specialists, we have a situation where the skill of working with idiots is more valuable on the job market than the skill of doing your job right. Your goal isn't to find the shortest path, or the prettiest code, your goal is to prompt AI as quickly as possible to get you to any vertex that satisfies the business requirements.
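For what it's worth, here is a minimal toy sketch of that model (all names and numbers are invented for illustration, not taken from the comment above): a greedy best-first search over a random weighted graph, where `speed` discounts every paid edge cost and `noise` corrupts the heuristic. It only illustrates the claim that a bad heuristic plus a big speed boost can still reach the goal, just via the scenic route.

```python
import heapq
import random

random.seed(0)

# Small random weighted digraph standing in for the (infinite) commit graph.
N_NODES, START, GOAL = 200, 0, 199
edges = {u: {} for u in range(N_NODES)}
for u in range(N_NODES):
    for v in random.sample(range(N_NODES), 6):
        if v != u:
            edges[u][v] = random.uniform(1.0, 10.0)
    if u < GOAL:
        # Guarantee the goal stays reachable via a "scenic" chain.
        edges[u][u + 1] = random.uniform(5.0, 15.0)


def true_distance(v):
    # Stand-in for ground truth: how far a vertex is from the goal.
    return GOAL - v


def explore(speed, noise):
    """Greedy best-first search with a noisy heuristic.

    speed -- divides every paid edge cost (higher speed = cheaper traversal)
    noise -- std-dev of the error added to the heuristic (0 = ground truth)
    Returns (total cost paid, number of vertices visited).
    """
    def heuristic(v):
        return true_distance(v) + random.gauss(0.0, noise)

    frontier = [(heuristic(START), 0.0, START)]  # (priority, edge cost to pay, vertex)
    visited, paid = set(), 0.0
    while frontier:
        _, edge_cost, u = heapq.heappop(frontier)
        if u in visited:
            continue
        visited.add(u)
        paid += edge_cost / speed
        if u == GOAL:
            return paid, len(visited)
        for v, cost in edges[u].items():
            if v not in visited:
                heapq.heappush(frontier, (heuristic(v), cost, v))
    return float("inf"), len(visited)


# Specialist: good heuristic, modest speed boost from AI.
# "Idiot + AI": nearly random heuristic, big speed boost from AI.
print("specialist :", explore(speed=1.5, noise=1.0))
print("idiot + AI :", explore(speed=6.0, noise=80.0))
```

The specialist visits far fewer vertices; the noisy searcher wanders, but with a big enough speed discount the total paid cost can still come out comparable, which is the whole point.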
Your graph model lacks the aspect of increasing complexity. As you traverse the graph, every available node gets increasingly more distant, in some areas of the graph less so than others; a good heuristic function not only identifies a single shortest path but also dense areas of possible value in the graph.
The question is whether blind speed scales faster than distances grow.
That's true, and I guess the reason why we're building so many datacenters is to answer the question how far exactly will blind speed take us, assuming that we fail to make substantial improvements to AI architecture.
And that's what makes it actionable defamation. If your doctor signs off on an AI summary that accuses you of being a drug-dependent sex worker, that's serious malpractice.
I rely on Medicare as a disabled person. I love it. The reduction in stress I experienced when I got to transition from my former employer plan to Medicare is pretty indescribable. I want every American to have at least this as a baseline.
Most of the complaints around Medicare come from those who get sold (conned) on taking Medicare “Advantage”, which is a privatized option for Medicare that denies a lot of coverage.
IIUC, the difference (for USG) of Medicare vs Medicare Advantage is that Medicare subsidizes the cost of a procedure done by a provider while Medicare Advantage (MA) pays a fixed rate per treatment to an insurer.
So if the MA rate is less than what the provider charges, the insurer is highly incentivized to deny you coverage. Under traditional Medicare you'd instead have a higher co-pay.
This also leads to scenarios where MA insurers upcode patients so that the treatment is reimbursed at a higher rate [1] (e.g., marking patients as recovering drug addicts when prescribing opioids, to get money for both the counseling and the opioid treatment).
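To make the incentive concrete, here is a tiny back-of-the-envelope sketch, with all numbers invented purely for illustration:

```python
# Hypothetical numbers, purely to illustrate the incentive described above.
provider_charge = 1200   # what the provider charges for the treatment
ma_fixed_rate   = 900    # the fixed rate the MA insurer is paid for it

margin_if_covered = ma_fixed_rate - provider_charge   # -300: covering loses money
margin_if_denied  = 0                                  # denying the claim avoids the loss

print(margin_if_covered, margin_if_denied)
# Whenever the fixed rate is below the provider's charge, denial is the
# profit-maximizing move; upcoding instead raises the fixed rate itself.
```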
If that were true, why is everyone so irritated by this? Just ignore it in that case. But those who may want to become subject to British jurisdiction or do other business there in the future will take requests from Ofcom seriously.
When DoorDash or whatever courier comes to a restaurant, they pick up an “order number”. That order number is, in essence, just a private IP. The courier translates it to an address (the public IP).
It follows that the restaurant writes the address on every delivery. Do they ID each recipient?
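To spell the analogy out, a made-up sketch (none of this reflects any real courier system): the courier acts like a NAT router, holding the only table that maps the restaurant-facing identifier to the real-world address.

```python
# Toy translation table held by the courier (values invented): the courier,
# like a NAT router, is the only party that can map the restaurant-facing
# identifier (the "private IP") to the real-world address (the "public IP").
courier_table = {
    "order #4821": "12 Rue de la Paix, Paris",
}

def translate(order_id: str) -> str:
    """Resolve the order number to the delivery address."""
    return courier_table[order_id]

print(translate("order #4821"))
```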
In the original example the Parisian bars sells and sends the alcohol.
You’ve modified that to introduce a proxy, DoorDash, that now sells and sends the alcohol. If DoorDash sells it they’re the ones in trouble in your example.