@pg on @sama: "you could parachute him into an island full of cannibals and come back in 5 years and he'd be the king."
In retrospect this quote comes across as way more foreboding given what we've learned about the scale of his ambitions and his willingness to lie and bend reality to gain power.
Dario on the other hand seems to have an integrity that's particularly rare in this era. I hope he remains strong in the face of the regime.
>Dario on the other hand seems to have an integrity that's particularly rare in this era.
Anthropic actually partnered up with Palantir. They are not the saints you think they are, either.
We should stop worshipping people and companies and stop putting them on pedestals.
Just because one party is at fault doesn't mean the other is automatically innocent. These are all for-profit companies at play here.
FWIW he gives his ethical reasoning on his website:
> Broadly, I am supportive of arming democracies with the tools needed to defeat autocracies in the age of AI—I simply don’t think there is any other way. But we cannot ignore the potential for abuse of these technologies by democratic governments themselves. Democracies normally have safeguards that prevent their military and intelligence apparatus from being turned inwards against their own population, but because AI tools require so few people to operate, there is potential for them to circumvent these safeguards and the norms that support them. It is also worth noting that some of these safeguards are already gradually eroding in some democracies. Thus, we should arm democracies with AI, but we should do so carefully and within limits: they are the immune system we need to fight autocracies, but like the immune system, there is some risk of them turning on us and becoming a threat themselves.
Basically, he's afraid that not arming the government with AI puts it at a disadvantage vs. other governments he trusts less. Plus, if Anthropic is in the loop that gives them the chance to steer the direction of things a bit (what they were kicked out for doing).
It's not the purest ethical argument, but I also would not say that there is a clearly correct answer.
Basically he's asking everyone to trust him that he won't cross the line himself. Whatever argument he makes for democracies applies to him as well, and he's not somehow above it. That's the flaw in his argument.
To be brutally honest, it just sounds to me like a very elaborate way of saying "trust me, bro".
I would agree if not for the fact that they just let a $200M contract slip through over it. You could argue it's "safety theater" in itself but that seems like a risky gambit especially with this administration. I definitely trust Anthropic more than OpenAI. In fact I'd go as far as to say it's probably pretty imperative that Anthropic stays a frontrunner in this race and doesn't leave the field exclusively to OAI (and maybe Google which is just as bad). That doesn't mean I'm exactly happy with Anthropic's comments like "mass surveillance bad but only for the US". But Anthropic at least regularly asks questions about the direction of AI development. I haven't seen the other frontier model companies do any such thing.
Regardless, I think if you are thinking purely from a ruthless business standpoint then standing up to the DoD was an incredibly ill-advised move. It's basically free financial and technological backing at the cost of ethics. Additionally, basically everyone with functioning eyeballs knows that the current US administration is incredibly vindictive, reckless and short-tempered. I would agree that in a more tame administration, you might do something like this as a publicity stunt. In the Trump administration, and while the AI arms race is still in full force, it feels like there has to be at least somewhat genuine sentiment behind it, otherwise it just doesn't really make sense. Like what do they accomplish from this? You'll get some users who will view you more favourably for it, but it probably won't make up for the lost revenue, and no matter how many people like you, if you are first to AGI in this industry you win. The prior sentiment basically won't matter at that point. In the most critical interpretation I guess you could say that if the bubble pops it might be more of a matter of sentiment. I don't know, in my mind the math just doesn't work for it to be a business move.
>Regardless, I think if you are thinking purely from a ruthless business standpoint then standing up to the DoD was an incredibly ill-advised move.
It wasn't, there's been non-stop talk here for days about how Anthropic is a step-above, better-than-the-rest, the "only good AI" company. Enough already. It is a marketing tactic they are taking in opposition to OpenAI.
If you look at his comments about Palantir and their proposed safeguards, it's clear it's a case of "if you are dining with the Devil, you'd better bring a very long spoon"
These comments were after the deal had soured. Not before. If it was a case of such morality, the partnership with Palantir would have never happened in the first place.
The contract was explicit - it was for defence purposes with a company known for spying activities. So, obviously spying is involved and they weren't just going to generate cat videos with it.
I've heard Palantir is essentially the only federal cloud vendor with this administration for secure services. By "partnered up with Palantir", do you mean they provided their models to the government? Or something more?
> Anthropic actually partnered up with Palantir. They are not the saints you think they are, either.
And now you've got people on here saying, well actually Palantir ain't so bad, you see! They're just one tool in the chain, basically just boring data integration, like IBM!
The mental gymnastics is difficult to keep up with.
pg's sama praise bewilders me. Is there some other Sam Altman he's talking about?
> Graham was immediately impressed by Altman, later recalling that meeting the 19-year-old felt like what it must have been like to talk to Bill Gates at the same age. He noted Altman's intense "force of will" from their early interactions.
Is there a Gates-like "presence" or a "force of will" displayed in his public interviews?
> pg's sama praise bewilders me. Is there some other Sam Altman he's talking about?
Paul Graham was a pudgy mediocrity clever enough to capitalize on nerds' obsession with Lisp, and leverage it into f-you money. Game recognized game in the shape of Sam Altman.
yeah man, what a mediocre loser: all he did was create the first cloud-based e-commerce platform, sell it to Yahoo for $49m in the 90s, then co-found the most successful early-stage VC firm of all time, which made THIS forum you are using to attack him
It's reasonable praise. A 19-year-old social outcast who grew up in the Midwest drops out of an Ivy League school, starts a company before smartphones exist that he sells for $43 million at age 27, then invests almost all the money into more startups, becomes a billionaire, and hijacks ChatGPT from the richest person in the world.
I think I'm a bit more of an iconoclast than the average HN reader, but when this community was fawning over him when he was head of YC, I always got the impression, without knowing the guy or much about him, that it was totally undeserved. Mainly because thoughtless fawning of any kind makes me immediately suspicious. Nobody deserves that kind of praise.
I read that quote and see no positive interpretation. It was always a negative description.
I think maybe this community could use a bit more natural skepticism of hierarchy.
Same. What Sam tried to do on his own, he failed at (not horribly, but he certainly wasn't in the same league as Zuckerberg or Page or Gates or Musk: he raised at least $30M, then was forced to sell a failure of a product for $40M).
His ascendancy only came when he was basically handed an ultra-powerful position by an ultra-powerful man.
Which of these two CEOs wants to have an unelected spot in the decision loop of our government?
Once I dug into this story, I realized that only one of these companies was attempting a real power grab. Maybe the EAs are doomed to try coup after coup and lose every time.
The SCR part is excessive, though, especially if it's interpreted broadly. Altman gets credit for sticking up for Anthropic on that point, but not much credit, because it's so obvious that it's overkill.
It's always hilarious watching online fights between tech industry billionaires, sort of like the geek version of UFC. The weirdest part is how regular people pick sides and defend their billionaire against the other guy.
> The weirdest part is how regular people pick sides and defend their billionaire
Someone told me in another comment that it's possibly bot activity. I suspect so too, because in a tech forum like HN, a top-voted comment can shift the entire focus/narrative of any given issue. I know there are a lot of mods on here to prevent this sort of thing, but given how good LLMs have gotten, I wonder if we are at a point where humans can even discern when online activity (such as commenting) is a mix of human and AI involvement.
It's not only single comments: if you surround people in a sea of opinion, they will definitely start swimming in your direction. Though that's probably more of a factor on reddit.
It's very easy to adopt a posture of above-it-all cynicism, and to think that anyone who sees an important distinction between two flawed powerful people is a sucker. But it's not particularly smart or sophisticated, and it's not helpful. In politics, the assumption that they're all equally corrupt and sociopathic is exactly what the worst of them want us to default to. In rich-guy PR wars, too, it's only going to work to the benefit of the ones with 0 principles, at the expense of the ones with some principles.
(Or, if the maximally cynical perspective is correct and 'principles' always actually means 'a company culture and public image that depends on the appearance of having principles, and which requires costly signals of principledness to maintain' -- well, why on earth shouldn't we favour the ones who have that property over the ones who are nakedly unprincipled, and the ones who have a paper-thin veneer that doesn't meaningfully affect their behaviour? It would be stupid to throw away the one bit of leverage we have to make powerful people behave better than they otherwise would.)
well yeah, but should Smith and Wesson just be like, man, the US Gov doesn't align with our corporate values so we are just gonna start selling belt-fed machine guns to civilians?
absurd yes but same principle. companies have to be subject to government especially in technologies that enable or manage violence. this is because the role of the government is to collectively manage and allocate violence in the manner the people desire
> companies have to be subject to government especially in technologies that enable or manage violence. this is because the role of the government is to collectively manage and allocate violence in the manner the people desire
I don’t know what you’re describing, but it’s not how the US works.
Companies aren’t extensions of the state; they’re private actors that have to follow the law. If Congress wants something prohibited, it passes a law. Otherwise firms are free to choose who they do business with.
Agencies create many regulations that affect companies, with no input from Congress. The ATF in 2018 banned 'bump stocks' by reclassifying them as machine guns, with zero input from Congress.
companies and the people who work for them are subject to the state via the law and regulations. if they violate the law, the state will use violence to enforce the law, with a government entity called law enforcement and law enforcement officers.
if new technologies are invented, like the internet, missiles, nuclear power, and so on, which represent an ability to manage and allocate violence, or remove the state ability to control violence, the government needs to reassert their monopoly on that violence and take control of it. without this monopoly, how will they collect taxes and enforce the law?
without the monopoly on violence the government is little more than an idea
You’re kind of losing the plot here. The original topic was whether Anthropic can set conditions on a contract with the government, which they obviously can. If Anthropic says “we won’t sell this unless you agree to X safeguards,” the DoD can either accept the terms or buy from someone else.
My point is that the government/DoD/DoW/state can set whatever conditions it wants, and do what it wants. The DoD can literally invoke the DPA with Trump and force Anthropic to sell them the product.
"On October 30, 2023, President Biden invoked the Defense Production Act to "require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government" when "developing any foundation model that poses a serious risk to national security, national economic security, or national public health."
Well, sometimes they only do stuff that should be illegal.
Superfast arbitrage, which is Jane Street's main thing as far as I understand, doesn't really produce anything of value to humanity. Yeah, I guess the contributions to OCaml are worth something, but maybe not 10 billion?
Jane Street is one of the slower market making firms and generates a significant share of revenue from being everywhere on everything (MUCH easier said than done). You want to trade some Canadian lumber ETF? Jane Street will be there. Some bond product with constituents that trade across 3 different trading sessions? Jane Street's active in that market. None of that is to say they don't have any presence in major products or don't have real short term alphas/edges of course.
They've never been at the forefront of latency games; Jane Street isn't the firm sweeping equity markets because it has the fastest radio network out of CME (of dubious value), or getting its quotes first in line every time.
They're not that good at arb. Real arb doesn't even really exist anymore. Even when it does, it's not JS that closes it. They market make, which is different.
well, running the financial system is overpaid work but it's the high-order bit in the current form of capitalism.
it's the operating system, some people might think it's a tax on everything, some people might think it provides the foundation to produce everything of value.
similarly, Google is the high-order-bit in the information or content economy, the creators get underpaid, the people who do ad optimization get overpaid.
no financial markets -> no IPOs -> no VC -> no Google and Silicon Valley as we know it.
the closer you are to the money and the transactions, and the high-order bit, the better the opportunities to redirect and organize to your advantage, and the more you get paid.
Well, Tether only needs to hold treasuries and collect quarterly interest payments. They don't need much staff, at least if they haven't looted the treasuries SBF style.
I would be happy to support a bailout of SVB in exchange for some reasonable concessions to American taxpayers.
1) Skin in the game - GPs of YC / a16z / Sequoia / Founders Fund / etc put in some equity in to a joint venture to acquire SVB's assets and make depositors whole. Government will backstop some portion of it.
2) YC / a16z / Sequoia / Founders Fund / etc agree to support ending the carried interest tax exemption
I know a bit about this industry and I have worked on some profitable systems. Honestly not a bad effort for someone working on their own with low-cost data. Don’t let the haters get you down. I would recommend you to pick up a more recent textbook on portfolio construction like Isichenko’s recent book.
Not quite. The market is pricing in a certain trajectory of rate hikes - basically another four 50bp increases with terminal rates around 2.5-3%. If inflation doesn’t start coming down that trajectory will change and the market will adjust accordingly.
my personal theory is that active management can still produce market-beating returns but the good active managers will rapidly grow their capital base to huge levels where they don't need much outside investment.
On the other hand - now we have a situation where most investment savings by individual investors are tracking passive indexes. But if everyone is indexing - what determines the relative weight of each stock in the index?
The answer is active investors. But with an increasingly smaller field of increasingly skilled investors (with more discretionary capital), we end up with the valuations of companies (and thus how societal time is allocated) competitively determined by a fierce prediction competition between the top active managers (Renaissance, DE Shaw, Citadel, etc).
I guess the big question is how much active management is needed. According to the article, for funds it is now roughly 50%. I believe even 10% is plenty for market efficiency.
I was under the impression the relative weights are (often) simply proportional to their market cap. Afaict that's how that works for the sp500 at least.
Exactly, the relative weights are often proportional to the market cap. So if I invest $1M in the S&P that doesn't actually change the relative weights of the stocks in the index (mostly), since the impact on all of the underlying stock market caps should be roughly even as more money flows in to the stocks with higher market cap and smaller sums flow in to the stocks with lower market cap.
So in the case that say Meta's metaverse initiatives suddenly start taking off with the general population, it will take active investors to invest more in FB to increase FB's marketcap relative to the other stocks in the index so that passive investors are "correctly" allocating to stocks in proportion to their earnings potential.
Obviously there are some caveats here. Passive investors still have to choose an index to invest in, and inflows in to one narrow index (i.e. QQQ) will affect the weights of particular stocks in broader indexes. But the point stands that the relative marketcap ranking between stocks in the index is not affected much by in/outflows in to a particular index, and in some sense indexes are outsourcing their stock picking to active investors that actually try to accurately value individual stocks on an absolute and relative basis.
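The mechanics described above can be checked with a toy example (all tickers and numbers here are made up for illustration): a pro-rata passive inflow grows every constituent's market cap by the same factor, so relative weights don't move, whereas an active bet on a single name does shift the weights.

```python
# Toy sketch of cap-weighted index mechanics. Numbers are hypothetical.

def weights(caps):
    """Relative index weight of each ticker, given market caps."""
    total = sum(caps.values())
    return {t: c / total for t, c in caps.items()}

caps = {"AAPL": 3000.0, "MSFT": 2800.0, "META": 1200.0}  # market caps in $B (made up)
before = weights(caps)

# A $70B passive inflow buys each stock pro rata to its current weight,
# so every market cap grows by the same factor and weights are unchanged.
inflow = 70.0
after = weights({t: c + inflow * before[t] for t, c in caps.items()})
assert all(abs(before[t] - after[t]) < 1e-12 for t in caps)

# An active investor putting the same $70B into META alone does
# change the relative weights.
caps_active = dict(caps)
caps_active["META"] += inflow
shifted = weights(caps_active)
assert shifted["META"] > before["META"]
```

This is the sense in which passive flows "outsource" price discovery: only the active trade moves the relative ranking.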
YC is not some magic fantasy organization. Like any other org, you should probably discuss things you're upset about internally with the relevant stakeholders before complaining on Twitter. That said, telling other people how to circumvent vaccine queues is clearly unethical, and someone high up should have loudly squashed that immediately.
> someone high up should have loudly squashed that immediately
Both the posts about this were promptly removed by YC staff. (And I've just tried searching for any posts with vaccination advice, and there are none whatsoever.)
At least one of the posts didn't seem to be unethical (no lies were being told or rules being broken – though I don't know for sure, I don't live in the US and I haven't read the post myself). But it was still removed by YC staff.
I posted this to see if it would get upvoted. I was a bit surprised to come back and find it doing well, but not surprised it got flagged.
Did you know karma here is < 1/2 of the votes up on a submission, but downvotes on a comment seem to count about 1 to 1? At least I got some karma from it!
As for it being flagged, it seems ethically questionable for the site to flag an article about itself. I've made other observations here before about self-interested censorship, and I guess the lesson is that those in control can do what they want, when they want.
I was listening to a podcast with Jason Sudeikis and he said that they specifically brought it up several times because they knew that later in the same episode they would be doing a riff on the Allen Iverson "we talking about practice" speech.