These models cost millions of dollars to create. Companies have every incentive to keep them private when the technology can change entire industries, as AlphaFold did (which alone could have been a multimillion-dollar company).
Add to this mix the USA trying to prevent China from copying cutting-edge AI technology, and this was bound to happen sooner or later.
It's no longer research; it's near market viability.
The Linux kernel and other critical pieces of software infrastructure cost billions to create, yet they are available for free, allowing companies to build immense value for society on top of them.
The amount of cool stuff people have built on top of Stable Diffusion is also amazing, and it would never have happened had they not released the weights.
Like Blender integration to generate assets for your (graphic) models, and full-blown video sequences generated from a single prompt.
The resulting storm of new applications was definitely interesting to watch, though none of them were really market-viable with that version of the model.
> Now, I don't claim everyone should be forced to publish their AI models. No, if you spent lots of money on training your model it is yours. But you can't lock all your work behind closed doors and call yourself open. It doesn't work like this.
Both of you can be right. Just don't try to call yourself open.
It is all marketing. Apple patted themselves on the back for their “courage” (hahaha) in removing headphone jacks. I forget which company it was, but one gave some money to charity and then spent ten times more on ads highlighting that one good deed.
Claiming to be “open” while not actually being open is a feature, not a bug. Just business as usual.
What's wrong with the charity example? Presumably that money came out of their marketing budget, and if they hadn't spent it on ads highlighting the donation, it would have been spent on some other kind of ads instead.
> Add to this mix the USA trying to prevent China copying cutting edge AI technology
ML-tech from the past century seems to work just fine for killing each other (have a look at Ukraine). I would be rather more concerned about the foe having better batteries at this point.
Genuinely curious: what more weaponry do we need, especially the western superpowers? Do we not have enough to destroy this planet a million times over?
What more destructive power can AI provide beyond the already existing drones, biological weapons, and so on?
Now you can, but 120 years ago the British Empire ruled the waves with no reason to think battleships or cavalry would ever be obsolete.
The machine gun and trench warfare forced the development of the tank.
Once aircraft became militarily useful, battleships rapidly became vulnerable to them, and hence to aircraft carriers.
I can imagine replacing naval mines with normally-quiescent remote-triggered torpedoes that are scattered on the seafloor a decade in advance.
How effective are troops against 3D printed drones designed to mimic animals (including birds), but which have a short-range firearm and some computer vision? Or engineered mosquitoes with the bare minimum of remote control, perhaps similar to the "cyborg cockroach" kits that have been on sale for about 9 years now? (Or just weaponise those 'roaches…)
What happens if you can predict what a commanding officer will say, use an AI to fake their voice before they've spoken, and interfere with communications to give misleading orders in the heat of combat? It doesn't even need to be a big change to the orders to alter the outcome.
(Obviously everything I've thought of in 5 minutes, the actual military will have categorised into "haha no" and "let's wargame this scenario"; I assume mostly the former).
I'm redoing the final two chapters before I even look for a publisher. But I've been at it (in my spare time) for six years, so don't hold your breath.
(I'm not sure if your reply was a thumbs-up or a serious question, but that's a serious answer).
It’s not a question of destructive power. Destroying the earth has no strategic value because we need the earth. But there are a number of other problems that need to be solved, such as gathering and analyzing intelligence, precision strikes that harm the enemy while sparing friendlies and civilians, and so forth that have not been perfected.
> Do we not have enough weaponry to destroy this planet million times over?
You do understand that there are nuances here, yes? In other words, that there are military operations that lie somewhere between "do nothing" and "destroy the entire world"?
I guess it depends on how you define competitors. I think they protected themselves from people playing against their strengths (i.e., photographic film), but failed to prepare against disruption from left field.
> These models cost millions of dollars to create.
It seems reasonable that training on unlicensed, freely available data counts as fair use, as long as the resulting model is released. I don't know anything about the law or care much about the legal details, but that's my opinion on how it should work.