that's kinda what I was looking for tbh. I didn't know that was an option, and nothing in the thread (or article) seemed to imply it was.
I was mostly working off "well, I could ask Claude to look at my code for security problems, i.e. 'plz check for security holes kthx', but is that really going to be the best option?". If "yes", then it would kinda imply that all the customization and prompt-fiddling people do is useless, which seems rather unlikely. A premade tool is a reasonable starting point.
You can't copyright a work that is only generated by a machine: "In February 2022, the Copyright Office’s Review Board issued a final decision affirming the refusal to register a work claimed to be generated with no human involvement"
But human direction of machine processes can be copyrighted:
"A year later, the Office issued a registration for a comic book incorporating AI-generated material."
and
"In most cases, however, humans will be involved in the creation process, and the work will be copyrightable to the extent that their contributions qualify as authorship. It is axiomatic that ideas or facts themselves are not protectible by copyright law and the Supreme Court has made clear that originality is required, not just time and effort. In Feist Publications, Inc. v. Rural Telephone Service Co., the Court rejected the theory that “sweat of the brow” alone could be sufficient for copyright protection. “To be sure,” the Court further explained, “the requisite level of creativity is extremely low; even a slight amount will suffice."
I have no doubt that I was oversimplifying it. The court case that will determine whether code written by an LLM in response to various types of prompts is copyrightable has not yet been filed (AFAIK; if it has, it has not yet been decided).
We may have a bug in the clustering that shows too many dots at low zoom levels.
Panoramax coverage is mainly located in France as the project started there. It is a decentralized and federated effort and there is not yet a Panoramax server covering Australia.
The OpenStreetMap France Panoramax server accepts pictures from outside of France, but for testing only.
The TL;DR is that there is little measurable impact (and I'd personally add "yet").
To quote:
"We find no systematic increase in unemployment for highly exposed workers since late 2022, though we find suggestive evidence that hiring of younger workers has slowed in exposed occupations"
My belief based on personal experience is that in software engineering it wasn't until November/December 2025 that AI had enough impact to measurably accelerate delivery throughout the whole software development lifecycle.
I have doubts that this impact is measurable yet - there is a lag between hiring intention and impact on jobs, and outside Silicon Valley large scale hiring decisions are rarely made in a 3 month timeframe.
The most interesting part is the radar plot showing the lack of usage of AI in many industries where the capability is there!
> My belief based on personal experience is that in software engineering it wasn't until November/December 2025 that AI had enough impact to measurably accelerate delivery throughout the whole software development lifecycle.
Gemini 3 and Opus 4.6 were the "woah, they're actually useful now!" moment for me.
I keep saying to colleagues that it's like a rising tide. Initially the AIs were lapping around our ankles, now the level of capability is at waist height.
Many people have commented that 50% of developers think AI-generated code is "Great!" and 50% think it's trash. That's a sign that AI code quality is around that of the median developer. This will likely improve to 60%-40%, then 70%-30%, etc.
I don’t see definitive evidence that there is some kind of Moore’s law for model improvement though. Just because this year’s model performs better than last year’s model doesn’t mean next year’s model will be another leap. Most of the big improvements this year seem to be around tooling - I still see Opus 4.6 (which is my daily driver at work) making lots of mistakes.
> So, these speculators are like "oh no, more GPUs requires more RAM!", and then just start speculating on all RAM.
Are you claiming that these speculators are buying DDR5 RAM and warehousing it somewhere? Or what exactly is the mechanism you are proposing here?
To me it seems much simpler: AI companies want HBM. HBM and DDR5 share the same wafer production process and facilities, but the HBM process is much more fragile and takes three times the wafer production.
There isn't enough DDR5 RAM being produced, so prices go up.
> No, we are literally trying to find a use case where using a lower accuracy LLM makes sense for a vision task.
They're reconfigurable on the fly with little technical expertise and without training data, which is really useful. Personally, in projects I've done for people, I've found these models have fewer unusual edge cases than traditional models, are less sensitive to minor changes in input, and are easier to debug by asking them what they can see.
Seems like using a sledgehammer to hammer in screws, and it invites nondeterminism in important systems. Besides being way larger and more complex than what most specialized industrial processes need, they are also vulnerable to adversarial attacks.
> Seems like a way to use a sledgehammer to hammer in screws
The lazy analogy the other way is that developing a custom system to do these jobs is like hiring a team of experts to spend 2 years designing the perfect crosshead screwdriver that fits exactly one screw (and doesn't work if the screw starts slightly rotated) when you have a flathead one right next to you that'll work and it'll work right now.
> and inviting nondeterminism in important systems.
Traditional ML is just as non-deterministic.
> they are also vulnerable to adversarial attacks.
Typically not relevant in these kinds of cases but also this is easily a problem in many traditional ML algos.
A flathead screwdriver is not a valid analogy, because LLMs are big, complicated, and opaque machines. And while other ML methods are non-deterministic as well, Gaussian processes, decision trees, or even CNNs are easier to make sense of than these huge black boxes.
And I still haven't seen a single example of anyone actually using a fine-tuned Qwen in industrial inspection, which leads me to believe that nobody is actually using it for that, but some people want to use it because it's their new favorite toy. You don't need a VLM to count cells in microscopy images, or find scratches in painted parts, or estimate output from a log in a sawmill. I can see the use case for things like describing a scene from a surveillance camera, finding a car of a certain model and colour, or other tasks that demand more reasoning or description. But in those cases latency is not super important compared to getting the right output, which was the tradeoff discussed from the start of this thread.
The last thing I'd want to deal with is to have a computer say something like "You're absolutely right, it was wrong of me to classify the metal debris as food".
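For what it's worth, the "count cells" case really is a few lines of classical image processing. A minimal sketch with NumPy and SciPy (the synthetic frame, function name, and threshold are illustrative, not a real microscopy pipeline): threshold the image, then count connected components.

```python
import numpy as np
from scipy import ndimage

def count_blobs(image: np.ndarray, threshold: float = 0.5) -> int:
    """Count bright connected regions ("cells") in a grayscale image."""
    mask = image > threshold               # binarize: bright pixels only
    _, num_features = ndimage.label(mask)  # label connected components
    return num_features

# Synthetic frame: three bright spots on a dark background.
frame = np.zeros((64, 64))
frame[10:14, 10:14] = 1.0
frame[30:35, 40:45] = 1.0
frame[50:52, 5:8] = 1.0

print(count_blobs(frame))  # → 3
```

A real pipeline would add denoising and size filtering on top, but the basic approach is deterministic and runs in milliseconds per frame, with no model at all.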
I’ve used multimodal LLMs for this sort of task, and if a fine-tuned model got reasonable performance compared to frontier models I’d use that. Running things purely locally lets you massively simplify the overall architecture and data-transfer requirements of some of these tasks, if nothing else, and lower latency means you can report problems much faster (vs. transferring images off-device and batch processing).
> The last thing I'd want to deal with is to have a computer say something like "You're absolutely right, it was wrong of me to classify the metal debris as food".
The CNN will potentially do that more often, and it can be because it just hasn’t seen enough examples of the debris at that angle, or something else equally irrelevant to a human.
They definitely do. But in my experience they "accumulate".
Like, things all work pretty well at first. And then god only knows what happens as config and preference files get into weird states, temp files accumulate and never get deleted, and cache files get stuck with old info and refuse to update.
So people with relatively new installations have a pretty good time, while people who have migrated their data across three MacBooks over ten years are encountering problems left and right.
I reinstalled Sequoia fresh last year because some mystery process would slowly consume 50GB of disk space over the course of every two weeks; no disk utility could locate any file responsible, but restarting reset it. With the fresh reinstall, everything started working fine again. It's annoying. Then I upgraded to Tahoe with zero problems. But I'm sure they'll gradually start appearing over the next year or two.
Yes, things like small bugs and abnormal user experiences accumulate and over time the OS and other apps become inconsistent.
Heavy users, who by profession generally spend a lot more time with a Mac, tend to experience more issues, and things that used to work for decades start to crumble. It all works if you treat the machine like a fragile glass piece, but that’s not what computers are made for.
You’re supposed to use it extensively and get more efficient over time without a glassy UI and other broken systems pulling you down at every turn.
It’s not about using a system for 10 minutes to visit a website with Chrome, but about spending days programming things, having a normal life, and still having very simple file-discovery features work.
There’s no reason for a computer to be this choppy and slow (in things like context switching etc.) unless something else is going on in the background.
Which particular long list of bugs? I’m on my work laptop, personal laptop and phone all day no problem. I’ve seen all the ranting here about the interface, window corners and menu icons but in day to day use have not encountered a single “bug”. And after some initial skepticism I actually like the design direction of Tahoe.
In the article. It says (paraphrasing): Time Machine goes wrong (over time!), Spotlight doesn't index tags right (requires relaunching Finder), Finder sometimes hangs when using Spotlight (requires relaunching Finder), folders sometimes won't update to show new files (requires relaunching Finder), using Quick Look on a video makes airpods glitch, and switching by cmd+tab to a fullscreen window doesn't give it keyboard focus.
My biggest gripes are around Spotlight. Have you ever tried to search for a file or create a smart folder that works? These features are horrendous. And Cmd+Space [type app name] used to be a decent app launcher. These days I often find it sits and spins and I end up using the Finder and navigating to Applications -- and that's faster than Spotlight.
When I actually need to find a buried file, I drop into a terminal. Then I feel sorry for people who don't know how to do that.
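For anyone curious what "drop into a terminal" looks like in practice, here's a minimal sketch (the start directory and pattern are just examples): plain POSIX `find` works on any Unix, and on macOS `mdfind` queries the same Spotlight index directly, which is often faster and more reliable than the Spotlight UI.

```shell
#!/bin/sh
# Find files whose name contains a pattern, starting from a directory.
dir="${1:-$HOME}"
pattern="${2:-invoice}"

# Case-insensitive filename match, suppressing permission errors:
find "$dir" -iname "*$pattern*" 2>/dev/null

# macOS only: query the Spotlight index directly from the shell.
if command -v mdfind >/dev/null 2>&1; then
    mdfind -name "$pattern"
fi
```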
No, and indeed they have said they never do this at all.