I don't speak Mandarin, but is this not an issue of style rather than of the language itself? English can be courtly or poetic or abstruse, but that's a matter of the speaker making a bunch of choices. I can't help but think of "Yes Minister" and Humphrey Appleby working quite skillfully to communicate in a way that ensured he would not be understood.
Don't Mandarin speakers also have that same range of choices about whether to be clear?
Maybe it's a matter of code switching? I've read that some Japanese teams prefer English for practical reasons, since a shared second language prevents anyone from getting bogged down in formalities. That's not to say Japanese can't be formulated with just as much precision.
I think the other moat is access to non-public data. If you can train, measure, or make decisions based on specific data that the vibecoder trying to clone you can't get, you can stay ahead.
But isn't part of the point of this that you want people who are eager to learn about AI and how to use it responsibly? You probably shouldn't want employees who, in their rush to automate tasks or ship AI-powered features, will expose secrets, credentials, PII, etc. You want people who can use AI to be highly productive without being a liability risk.
And even if you're not in a position to hire all of those people, perhaps you can sell to some of them.
Honestly, it seems worse than web3. Yes, companies throw up their hands and say "well, yeah, the original inventors are probably right, our safety teams quit en masse or we fired them, the world's probably gonna go to shit, but hey, there's nothing we can do about it, and maybe it'll all turn out ok!" And then they hire the guy who vibecoded the clawdbot so people can download whatever trojan malware they like onto their computers.
I've seen Twitter threads where people literally celebrate that they can remove RLHF from models and then download arbitrary code and run it on their computers. I am not kidding when I say this is going to end up far worse than web3 rugpulls. At least there, you could only lose the magic crypto money you put in. Here, you don't even have to participate to be pwned by a swarm of bots. For example, it's trivially easy to do reputational destruction at scale, as an advanced persistent threat. Just choose your favorite politician and see how quickly they start trying to ban it. This is just one bot: https://www.reddit.com/r/technology/comments/1r39upr/an_ai_a...
If universities should take on that risk, should they also receive some share of the student's future earnings? Do you want all schools to work like the US military colleges, where, if admitted, you get free tuition but also agree to work for the school's parent organization for a period?
> but the author does advocate from a position of "class privilege": of having access to good lawyers, good doctors, and good schools already, and not feeling the urgency of tools that might extend those things to people who don't
I dunno, I think you can also take a really dim view of whether society as currently structured is set up to use AI to make any of those things more accessible, or better.
In education, certainly we've seen large tech companies give away AI to students, who then use it to do their work. Simultaneously, teachers are sold AI-detection products which are unreliable at best. Students learn less by, e.g., not actually doing the reading or writing, and teachers spend more of their time pointlessly trying to catch what has become a very common practice.
In medicine, in my most recent job search I talked to companies selling AI solutions both to insurers and to healthcare providers, to more quickly prepare filings to send to the other. I think the amount of paperwork per patient is just going to go up, with bots doing most of the actual form-filling, while the proportion of medical procedures that get denied stays mostly unchanged.
I am not especially familiar with the legal space, but given the adversarial structure of many situations, I'm inclined to expect that AI will let firms shower each other in paperwork, most of which will not be read by a human on either side. Clients may pay for a similar or higher number of billable hours.
Even if the technology _works_ in the sense of understanding the context and completing tasks autonomously, it may not work for _society_.
> You’d need to be high enough in the org chart; far enough up the pyramid; advanced enough along the career ladder.
> To be an AI optimist, I’m guessing you must not be worried about where your next job might come from, or whether you can even find one. The current dire state of the job market, I have to assume, doesn’t scare you. You must feel secure.
So I think even these people should not feel secure.
The perceived value of expertise is decreased by AI, which routinely claims PhD-level mastery of a lot of material. I think even for people with deep experience, in the current job market, many firms are reluctant to hire or pay in a way that's commensurate with that expertise.
If you're a leader whose clout in an organization is partly tied to how many people are under you in an org chart (it's dumb, but we've all seen it), maybe that headcount will begin to shrink quarter after quarter.
Unless you can make it genuinely obvious that a junior or mid-tier person could not write a prompt that causes a model to spew the knowledge or insight you have won through years or decades of work, your job may become vulnerable.
I think the class divide that is most relevant is more literal and old-school:
- Do you _own_ enough of businesses that that's how you get most of your income? If so, maybe AI will either cause your labor costs to decrease or your productivity per worker to increase, and either way you're probably happy.
- Can you invest specifically in the firms that are actively building AI, or applications thereof?
We're back to owners vs workers, with the added dynamic that if AI lets you partially replace labor with capital, then owners of course take a bigger share of value created going forward.
I think "embodied" is a word with decades of past associations in cognitive science and AI and if you're claiming that your model is embodied you should try to back that up with specific evidence.
This seems cool, but I don't think they live up to their self-description.
Ok, so obviously this is a security disaster. But also ... is there a hackable consumer EEG device that gets useful data and is as comfortable as a sleep mask (and presumably you're not slathering on electrode gel every time you put on your sleep mask)?
Cuz once the thing can't phone home, that sounds pretty cool.
> Now, if the boss said "hey, since you're all going to be so much more productive working in the factory, we'll give you all 10x raises" then perhaps you might be more excited about putting down your hammer.
... is now the moment to form worker cooperatives? The companies don't really have privileged access to these tools, and unlike many other things that drive increased productivity, there's not a huge up-front capital investment for the adopter. Why shouldn't ICs capture the value of their increased output?