I've been getting the same thing at my company. Honestly no idea what is driving it other than hype. But it somehow feels different than the usual hype; so prescribed, as though coordinated by some unseen party. Almost like every out of touch business person had a meeting where they agreed they would all push AI for no reason. Can't put my finger on it.
Also I think this has been a long time dream of business types. They have always resented domain experts, because they need them for their businesses to be successful. They hate the leverage the domain experts have and they think these LLMs undermine that leverage.
I can sort of relate. If you hire an expert, you need to trust them. If you don't like what they say, you're inclined to want a second opinion. Now you need to pay two experts, which is often not reasonable financially, or problematic when it comes to corporate politics. And even if you have two experts, what if they disagree? Pay a third?
To manage this well, you need the courage to trust people, as well as the intelligence and patience to question them. Not everybody has that.
But that aside, I think business people generally like having (what they think are) strong experts. It means they can use their people skills and networks to create competitive advantage.
Happens in programming as well, often even by developers.
The "copilot" experience that finishes the next few lines can be useful and intuitive - an "agent" writing anything more than boilerplate is bound to create more work than it saves, in my experience.
Where I am having a blast with LLMs is learning new programming languages more deeply. I am trying to understand Rust better - and LLMs can produce nice reasoning about whether one should use "Vec<impl XYZ>" or "Vec<Box<dyn XYZ>>". I am sure this is trivial for any experienced Rust developer, though.
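For anyone curious, here's a minimal sketch of that distinction (the `Shape` trait and the `Circle`/`Square` types are made up for illustration): `impl Shape` in return position means every element of the `Vec` is one concrete type fixed at compile time, while `Box<dyn Shape>` lets you mix concrete types behind a vtable, at the cost of heap allocation and dynamic dispatch.

```rust
trait Shape {
    fn area(&self) -> f64;
}

struct Circle { r: f64 }
struct Square { s: f64 }

impl Shape for Circle {
    fn area(&self) -> f64 { std::f64::consts::PI * self.r * self.r }
}
impl Shape for Square {
    fn area(&self) -> f64 { self.s * self.s }
}

// `Vec<impl Shape>`: every element is the SAME concrete type (here Circle),
// chosen at compile time -- static dispatch, no extra heap indirection.
fn circles() -> Vec<impl Shape> {
    vec![Circle { r: 1.0 }, Circle { r: 2.0 }]
}

// `Vec<Box<dyn Shape>>`: elements may be DIFFERENT concrete types,
// each boxed on the heap and called through a vtable at runtime.
fn mixed() -> Vec<Box<dyn Shape>> {
    vec![Box::new(Circle { r: 1.0 }), Box::new(Square { s: 2.0 })]
}

fn main() {
    let homogeneous: f64 = circles().iter().map(|s| s.area()).sum();
    let heterogeneous: f64 = mixed().iter().map(|s| s.area()).sum();
    println!("{homogeneous:.2} {heterogeneous:.2}"); // prints "15.71 7.14"
}
```

The rule of thumb the LLM reasoning tends to land on: prefer the static version when one type suffices, and reach for `Box<dyn ...>` only when you genuinely need heterogeneous elements.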
>> I've been getting the same thing at my company. Honestly no idea what is driving it other than hype.
> It's because, unlike prior hype cycles, this one is super easy for an MBA to point at and sort of see a way to integrate it.
This particular hype is the easiest one thus far for an MBA to understand because employing it is the closest thing to a Ford assembly line[0] the software industry has made available yet.
Since the majority of management training centers on early 20th century manufacturing concepts, people trained that way believe "increasing production output" is a resource problem, not an understanding problem. Hence the allure of "generative AI can cut delivery times without increasing labor costs."
They’ve always resented those employees having leverage to negotiate better pay and status. Many techies looked at near-management compensation and thought that meant we were part of the elite clubhouse, but they never considered us members.
The thing with stereotypes is that, while they tend to be well enough based in fact for most people to recognize, they are no better than anything else at applying generalizations to large groups of people. Some will always be unfairly targeted by them. You personally might not have done anything to contribute to those things we are lashing out against (and if not, thank you!), but then again you personally were not targeted by these remarks. In the same way that you are possibly unfairly swept up in these assertions, it is, to a degree, unfair for you to use your wounds to deprive the rest of us of freely voicing our well-founded grievances. Problems must be recognized before they can be addressed, after all, and collectively so for anything so widely spread. It's never pleasant to be told to "just tough it out", but perfect solutions are rare when people are involved, just as how surgeons have to cut healthy flesh to remove the unhealthy.
An analogue to this would be "all cops are bastards". Sure, there are some good ones out there, but there are enough bad ones out there that the stereotype generally applies. The statement is a rallying cry for something to be done about it. The "guilty by association" bit that tends to follow is another thing entirely.
Rather than some conspiracy, my suspicion is that AI companies accidentally succeeded in building a machine capable of hacking (some) people's brains. Not because it's superhumanly intelligent, or even has any agenda at all, but simply because LLMs are specifically tuned to generate the kind of language that is convincing to the "average person".
Managers and politicians might be especially susceptible to this, but there's also enough in the tech crowd who seem to have been hypnotized into becoming mindless enthusiasts for AI.