I broadly agree. They package "copilot" in a way that constantly gets in your way.
The one time I thought it could be useful, in diagnosing why two Azure services seemingly couldn't talk to each other, it was completely useless.
I had more success describing the problem in vague terms to a different LLM than with an AI supposedly plugged into the Azure organisation and able to directly query its information.
My 2 cents: this is what happens when OKRs are executed without a vision, or when the vision is "AI everywhere" and, well, it sucks.
The goal is AI everywhere, so top-down everyone will implement it and be rewarded for doing so; there are incentives for each team to do it: money, promotions, budget.
100 teams? 100 AI integrations, or more. Not the 10 or so entry points it probably should be.
This means that for a year or more there will be AI everywhere, impossible to avoid, and usability will sink.
Now, if this was only done by Microsoft, I would not mind. The issue is that this behavior is getting widespread.
You would think they would care that their brand is being torched, but I guess they figure they're too big to need to care.
Their new philosophy is "the user is too stupid to even think for themselves, LOL." It's not just their rhetoric; every single choice they've made screams out their new priorities, in which user respect comes last and least.
I had that experience too. Working with Azure is already a nightmare, but the Copilot tool built into Azure is completely useless for troubleshooting. I just pasted log output into Claude and got actual answers. Microsoft's first-party stuff just seems so half-assed and poorly thought out.
Why is this, I wonder? Aren't the models trained on about the same blob of huggingface web scrapes anyway? Does one tool do a better job of pre-parsing the web data, or pre-parsing the prompts, or enhancing the prompts? Or a better sequence of self-repair in an agent-like conversation? Or maybe more precision in the weights and a more expensive model?
their products are just good enough to let them put a checkbox in a feature table, so it can be sold to someone who will then never have to use it
but not even a penny more will be spent than the absolute bare minimum to allow that
this explains Teams, Azure, and everything else they make that you can think of
How do you QA adding a weird prediction tool to, say, Outlook? I have to use Outlook at one of my clients and have switched to writing all emails in VS Code and then pasting them into Outlook, as the "autocomplete" is unbearable… Not sure QA is even possible with tools like these…
Part of QA used to be evaluating whether a change was actually helpful in doing the thing it was supposed to be doing.
... why, it's almost like eliminating the QA function removed the final check that kept developers (read: PMs) from implementing whatever ass-backwards feature occurs to them.
Just in time for 'AI all the things!' directives to come down from on high.
exactly!! though evaluating whether a change was actually helpful in doing the thing it was supposed to be doing is hard when no one knows what it is supposed to be doing :)
I had a WTF moment last week. I was writing SQL, and there was no autocomplete at all. Then a chunk of autocompleted code appeared that looked like an SQL injection attack, with some "drop table" mixed in. The code would not have worked, it was syntactically rubbish, but it still looked spooky; I should have taken a screenshot of it.
This is the most annoying thing, and it's even happened to JetBrains' Rider too.
Some stuff that used to work well with smart autocomplete / IntelliSense got worse with AI-based autocomplete, and there isn't always an easy way to switch back to the old heuristic-based stuff.
You can disable it entirely and get dumb autocomplete, or keep the "AI powered" rubbish, but they had a very successful heuristic / statistics-based approach that worked well without suggesting outright nonsense.
In .NET we've had IntelliSense for 25 years that would only suggest properties that could exist, and then a while ago I suddenly found that VS Code was auto-completing properties that don't exist.
It's maddening! The least they could have done is put in a Roslyn pass to filter out the impossible.
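For what it's worth, that kind of Roslyn pass is easy to sketch. Below is a minimal, hypothetical example, not how VS Code or IntelliCode actually wire it up, that filters suggested member names down to the ones that really exist on the receiver type ("Frobnicate" stands in for a hallucinated member):

    // Hypothetical sketch: validate AI completion suggestions against Roslyn's
    // symbol tables so members that don't exist are never shown.
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using Microsoft.CodeAnalysis;
    using Microsoft.CodeAnalysis.CSharp;

    class CompletionFilter
    {
        // Keep only the suggested member names that actually exist on the type.
        static IEnumerable<string> Filter(
            Compilation compilation, string typeName, IEnumerable<string> suggestions)
        {
            var type = compilation.GetTypeByMetadataName(typeName);
            if (type is null) return Enumerable.Empty<string>();

            var realMembers = new HashSet<string>(type.GetMembers().Select(m => m.Name));
            return suggestions.Where(realMembers.Contains);
        }

        static void Main()
        {
            // A throwaway compilation referencing only the core library.
            var compilation = CSharpCompilation.Create("probe",
                references: new[] { MetadataReference.CreateFromFile(
                    typeof(object).Assembly.Location) });

            // "Frobnicate" is a made-up member an LLM might suggest.
            var kept = Filter(compilation, "System.String",
                new[] { "Length", "Frobnicate", "Substring" });

            Console.WriteLine(string.Join(", ", kept)); // prints: Length, Substring
        }
    }

The real pipeline would work on syntax trees and semantic models rather than name strings, but even a crude check like this would have caught the non-existent properties.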
Loosely related: voice control on Android with Gemini is complete rubbish compared to the old Assistant. I used to be able to have texts read out and dictate replies whilst driving. Now it's all nondeterministic, which adds cognitive load and is unsafe in the same way touch screens in cars are worse than tactile controls.
I've been immensely frustrated by no longer being able to set reminders by voice. I got so used to saying "remind me in an hour to do x" and now that's just entirely not an option.
I'm a very forgetful person and easily distracted. This feature was incredibly valuable to me.
I got Gemini Pro (or whatever it's called) for free for a year on my new Pixel phone, but there's an option to keep Assistant, which I'm using.
Gotta love the enshittification: "new and better" means more CPU cycles burned for a worse experience.
I just have a shortcut to the Gemini webpage on my home screen for when I want to use it. For some reason I can't place a web shortcut directly (maybe it's my ancient launcher, which isn't even in the Play Store anymore), so I had to make a Tasker task that opens the webpage when run.
This is my biggest frustration. Why not check with the compiler so that the generated code would actually compile? I've had this happen with Go and .NET in the JetBrains IDE.
Had to turn ML auto-completion off. It was getting in the way.
There is no setting to revert to the very reliable, high-quality "AI" autocomplete that dependably did not recommend class methods that don't exist, and that figured out the pattern in the 20 lines I was writing without randomly suggesting 100 lines of new code that only disrupt my view of the code I'm trying to work on.
I even ticked the "Don't do multiline suggestions" checkbox because the above was so absurdly anti-productive, but it was ignored.
The most WTF moment for me was that recent Visual Studio versions hooked up the “add missing import” quick fix suggestion to AI. The AI would spin for 5s, then delete the entire file and only leave the new import statement.
I’m sure someone on the VS team got a pat on the back for increasing AI usage but it’s infuriating that they broke a feature that worked perfectly for a decade+ without AI. Luckily there was a switch buried in settings to disable the AI integration.
You can still use the older ML-model (and non-LLM-based!) IntelliCode completion suggestions; it's buried in the VS Installer as an optional feature, entirely separate from anything branded Copilot.
The last time I asked Gemini to assist me with some SQL I got (inside my postgres query form):
    This task cannot be accomplished USING standard SQL queries against the
    provided database schema. Replication slots managed through PostgreSQL
    system views AND functions, NOT through user-defined tables. Therefore,
    I must return
Gemini weirdly messes things up even though it seems to have the right information, something I've started noticing more often recently. I'd ask it to generate a curl command to call some API, and it would describe (correctly) how to do it and then generate the code/command, but the command would have obvious things missing: the 'https://' prefix in some cases, sometimes the API path, sometimes the auth header/token, even though it mentioned all of those things correctly in the text summary it gave above the code.
I feel like this problem was far less prevalent a few months/weeks ago (before gemini-3?).
Using it for research/learning purposes has been pretty amazing though, while Claude Code is still best for coding, in my experience.
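For what it's worth, the things it kept dropping are exactly the pieces an authenticated API call can't work without. A made-up sketch of the complete call (placeholder endpoint and token, written as C# rather than curl):

    // Hypothetical example: the three things the generated commands kept
    // omitting are all spelled out explicitly here.
    using System;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Threading.Tasks;

    class ApiCallExample
    {
        static async Task Main()
        {
            using var client = new HttpClient();

            // 1. the scheme and 2. the API path, both explicit (made-up endpoint)
            var url = "https://api.example.com/v1/things";

            // 3. the auth header/token (placeholder)
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", "YOUR_TOKEN");

            var response = await client.GetAsync(url);
            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }

If any of the three is missing the call fails in an obvious way, which makes it all the stranger that the summary text gets them right while the generated command drops them.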
This is a great post. Next time you see it, grab a screenshot, put it on GitHub Pages, and post it here on HN. It will generate lots of interesting discussion about rubbish suggestions from poor LLM models.
This seems like it should be a killer feature: Copilot having access to configuration and logs and being able to identify where a failure is coming from. That stuff is tedious to do manually, since I basically run through a checklist of where the failure could occur and there's no great way to automate that; plus sometimes there are subtle typo-type issues. Copilot can generate the checklist reasonably well but can't execute on it, even from Copilot within Azure. Why not??
I have had great luck with ChatGPT trying to figure out a complex AWS issue with:
"I am going to give you the problem I have. I want you to help me work backwards step by step and give me the AWS CLI commands to help you troubleshoot. I will give you the output of the command."
It’s a combination of advice that ChatGPT gives me and my own rubberducking.
That's what happens when everyone is under the guillotine and their lives depend on overselling this shit ASAP, instead of playing and experimenting to figure things out.