Security has always been a game of how much money your adversary is willing to commit. The conclusions drawn in many of these articles are already well-understood systems-design concepts, but for some reason people are acting as though they are novel, or as though LLMs have changed anything besides the price.
For example from this article:
> Karpathy: Classical software engineering would have you believe that dependencies are good (we’re building pyramids from bricks), but imo this has to be re-evaluated, and it’s why I’ve been so growingly averse to them, preferring to use LLMs to “yoink” functionality when it’s simple enough and possible.
Anyone who's heard of "leftpad" or is a Go programmer ("A little copying is better than a little dependency" is literally a "Go Proverb") knows this.
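To make the proverb concrete: the infamous "left-pad" incident involved an npm package of roughly a dozen lines whose removal broke builds across the ecosystem. A function that small is exactly the kind of thing the Go proverb says to copy (or "yoink") rather than depend on. A minimal sketch of what such a copied helper looks like:

```go
package main

import (
	"fmt"
	"strings"
)

// leftPad pads s on the left with padChar until it is at least
// width runes long -- roughly what the ~11-line npm "left-pad"
// package did. Small enough that copying beats depending.
func leftPad(s string, width int, padChar rune) string {
	if n := width - len([]rune(s)); n > 0 {
		return strings.Repeat(string(padChar), n) + s
	}
	return s
}

func main() {
	fmt.Println(leftPad("5", 3, '0'))     // 005
	fmt.Println(leftPad("hello", 3, ' ')) // hello (already wide enough)
}
```

The trade-off, of course, is that a copied snippet never gets upstream bug fixes, which is why the proverb says "a little copying" and not "all copying."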
Another recent set of posts to HN had a company closed-sourcing their code for security, but "security through obscurity" has been a well-understood fallacy in open source circles for decades.
> Another recent set of posts to HN had a company closed-sourcing their code for security, but "security through obscurity" has been a well-understood fallacy in open source circles for decades.
I dunno about that quoted bit; "Defense in depth" (Or defense via depth) is a good thing, and obscurity is just one of those layers.
"Security through obscurity" is indeed wrong if the obscurity is a large component of the security, but it helps if it is just another layer of defense in the stack.
IOW, harden your system as if it were completely transparent, and only then make it opaque.
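That principle is essentially Kerckhoffs's principle: assume the attacker knows everything about the design, and let security rest only on the secret key. A minimal sketch of what "harden as if transparent" looks like in practice (the message contents here are illustrative):

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"fmt"
)

// sign authenticates msg with HMAC-SHA256. The algorithm is fully
// public; only the key is secret. Publishing this code costs the
// defender nothing (Kerckhoffs's principle).
func sign(key, msg []byte) []byte {
	mac := hmac.New(sha256.New, key)
	mac.Write(msg)
	return mac.Sum(nil)
}

// verify uses hmac.Equal, a constant-time comparison, so even an
// attacker who has read this source learns nothing from timing.
func verify(key, msg, tag []byte) bool {
	return hmac.Equal(sign(key, msg), tag)
}

func main() {
	key := []byte("the-only-secret")
	tag := sign(key, []byte("hello"))
	fmt.Println(verify(key, []byte("hello"), tag))    // true
	fmt.Println(verify(key, []byte("tampered"), tag)) // false
}
```

Once the system is secure under full transparency, hiding the source on top of that is a free extra layer; it just can't be the layer you're counting on.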
> "security through obscurity" has been a well-understood fallacy in open source circles for decades
The times, as they say, are a-changin’.
Open software is not inherently more secure than closed software, and never has been.
Its relative security value was always derived from circumstantial factors, one of the most important being the combined incentive, ability, and willingness of others in the community to spend their time and attention finding and fixing bugs and potential exploits.
Now, that’s been the case for so long that we all implicitly take it for granted, and conclude that open software is generally more secure than closed, and that security through obscurity falls short in comparison.
But this may very well fundamentally change when the cost of navigating the search space of potential exploits, for both the attacker and the defender, is dramatically reduced along the axes of time and attention, and increased along the axis of monetary investment.
It then becomes a game of which side is more willing to pool monetary resources into OSS security analysis – the attackers or the defenders – and I wouldn’t feel comfortable betting on the defenders in that case.
Yes, there is nothing novel in "to harden a system we need to spend more tokens discovering exploits than attackers spend exploiting them." That's what security always looked like, physical security included (burglars, snipers, etc.) So when AI is available you have to throw more AI at securing your system than your adversaries do. What a surprise.
Maybe we could start with the prompts for the code generation models used by developers.