Not sure how many of you have WBD shares with its rather tumultuous past (spin-off from AT&T, the Bill Hwang mess), but if you've picked up shares on the cheap in the past few years, sub-$10, congratulations.
"Under the terms of the agreement, each WBD shareholder will receive $23.25 in cash and $4.501 in shares of Netflix common stock for each share of WBD common stock outstanding at the closing of the transaction. "
Note: this is after completion of the split of WBD that's currently underway; as you'd expect, Netflix wants the catalog and production, but they're not taking the sports and some other pieces. The leftover / newly revived Discovery Global will likely be a hollowed-out shell of less desirable properties saddled with a bunch of debt.
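Back-of-the-envelope, the quoted terms work out to roughly $27.75 per WBD share. A quick sketch (the $9 cost basis is just an illustrative sub-$10 example, not anything from the announcement):

    # Per-share consideration from the quoted terms
    cash_per_share = 23.25    # cash component, per the announcement excerpt
    stock_per_share = 4.501   # Netflix stock component, per the excerpt
    total = cash_per_share + stock_per_share          # ~27.751

    cost_basis = 9.00         # hypothetical sub-$10 purchase price
    print(f"Total consideration: ${total:.3f} per share")
    print(f"Gain on a ${cost_basis:.2f} basis: {total / cost_basis - 1:.0%}")  # ~208%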
My first thought was that someone had to clear some threshold and wouldn't close the deal unless they got another .001. Like maybe some bonus depended on hitting some target value.
This article doesn't mention it, but PIMCO is the lead lender:
"Pacific Investment Management Co. (PIMCO) is the anchor lender on the deal. The debt, which matures in 2049, is fully amortising and has been rated A+ by S&P. The bonds were priced at around 225 basis points over U.S. Treasuries."
Ugh. A year ago, I was (a) fairly confident that the AI bubble would burst, but (b) fairly confident that contagion would be limited; a bunch of startups would evaporate, and some VCs would be badly burned or fail, but the broader economy would largely shrug and carry on.
Based on this sort of thing, I'm not sure I still believe (b).
I am 99.0% sure that the AI bubble burst is coming. I was only 95% sure until very recently.
Does this mean that this investment spreads the bubble burst risk into people's pension funds? (those lucky enough to have such a thing) Or, not necessarily?
I would think so. I don't think pension funds are always exactly smart money. They're under a lot of pressure to make the numbers work, so they go after anything that sounds reasonably likely to make them work. And that can work until it doesn't.
Pension funds, banks, etc, are never smart money. They aren’t allowed to be.
They are required to pick based on some defined formula which the various stakeholders signed off on, and hence are prime juicy targets for people trying to game systems.
They also took the ‘08 mortgage crisis right in the shorts, for the same reasons.
This is why all defined benefit pension funds should be eliminated and replaced with defined contribution plans. Pensions are far too systemically risky for employees, employers, and taxpayers alike. The only ones who really benefit are the fund managers, who get to collect large fees.
Reading the replies, my next question is: how much of, say, CALPERS [0] is invested in "AGI by 2030" or similar bets? If it's far less than 1%, that seems like a fair bet. Does anyone have a good read on the real number?
I'm still mad, and I'm not even American. Even over here in Germany, it was a massive shock wave that went through society and I still remember the day it happened vividly. The effects in society are felt to this day.
He wanted to radicalize Muslims worldwide against the West and drain American resources through prolonged wars.
It's also interesting how few Americans know OBL's motivations for the 9/11 attacks. A big part of it was American support of Israel, and OBL's belief that this would lead to further oppression of Muslim people in Palestine.
He did terrible things but was pretty accurate in his predictions.
Not the parent, and I'm not saying it definitely wouldn't have happened (I have no idea), but it's at least possible. There was advance intelligence around the event that might have been treated differently by a different administration.
Even if it had happened, the response would also have been different.
In Richard Clarke’s book he details the intelligence community’s multiple warnings to the new Bush administration that spring and summer. They were ignored.
As Gore came from the Clinton admin, he and the people around him would have had a lot more experience dealing with, and familiarity with, the threats and actors, who were already known.
I have no idea; I'm just saying that with different people in charge, they wouldn't have reacted in exactly the same way.
If I had to guess, I'd say at least no Iraq war, if we consider that part of the response. Patriot Act probably would have looked different. I expect there still would have been military action in Afghanistan, but likely with differences as well.
Bush's team ignored Clinton's team's attempt to hand over what they knew about the threats (someone in these threads mentioned Richard Clarke's book; I remember reading a 2003 TIME article; you can probably also read the results of the congressional investigation).
If the Supreme Court hadn't done the shenanigans in Florida, Clinton's team would've been Gore's team, and who knows, maybe those hijackers would've been caught...
I feel like the first big disaster already happened. In the job market.
Anyway, where public safety risk is involved, I can't imagine a scenario in which AI will directly cause a catastrophe. If anything, I feel overall safety will increase.
However, I can't help but wonder what over-reliance would do to the general population over the next few decades. Would people essentially become dumber and less skilled?
I don’t think AI is meaningfully taking any jobs. Just like return to office mandates, it is just more cover to do layoffs in less than prosperous economic conditions.
It's not the same as "AI replacing humans", but it seems clear that the launch of ChatGPT led to dramatic cuts among NLP researchers. [1][2]
For instance, at HuggingFace:
> A few days after that, Thom Wolf, who was my manager at Hugging Face and also one of the co-founders, messages me, “Hey, can you get on a call with me ASAP?” He told me that they had fired people from the research team and that the rest would either be doing pre-training or post-training — which means that you are either building a foundation model or you’re taking a foundation model and making it an instruction-following model, similar to ChatGPT.
It’s only natural that a breakthrough new way of doing $thing would lead to dramatic cuts in people who are experts in the old way of doing $thing, right? Doesn’t matter what $thing is. (Tons of examples, automated phone switches replacing telephone operators, online booking sites replacing travel agents, etc etc.)
That giant shock to the NLP research community is more than two years old now. It would be interesting to hear from people affected by that today, I'd like to know what they ended up doing in response.
That's true. From the Quanta article it looks like many affected researchers went on to work on LLMs successfully.
I do wonder how people are adjusting to similar, ongoing shocks across other industries. For instance, the ArtStation protest against AI art a couple of years ago showed widespread hostility towards the technology. In the time since, AI art has only gotten better and been adopted more widely, but from articles like [1] it doesn't seem like concept artists have benefited much.
That's very odd. You'd think a breakthrough and a surge of funding would massively boost NLP research, unless those researchers weren't focused on LLMs and the company just wanted to move all its resources there.
There are other ways that AI can hurt the job market in major ways. One, already happening, is AI-generated resumes based on the listing itself, causing HR departments to waste a ton of time, potentially preventing real experienced candidates from even getting a callback.
I guess everyone on Hacker News should stop developing software then, since almost every piece of software I've ever worked on has allowed companies (and individuals) to do more with less.
Yep I agree. It does make senior developers more efficient at building new features but it's maybe like 20% more efficient in the grand scheme of things... But this efficiency saving is a joke considering that companies spend most of their development money creating and maintaining unnecessary complexity. People will just produce unnecessary code at a faster rate. I think the net benefit for a typical corporation will be negative and they will need to hire more people to maintain the increasingly large body of code, not fewer.
It's an intractable problem because even if some really astute senior engineer joins a company and notices the unnecessary complexity and knows how to fix it, there is no way they will do anything about it. Do you think thousands of existing engineers will support a plan to refactor and simplify the complexity? The same complexity which guarantees their job security? It's so easy to discredit the 'new dev' who is advocating for change. Existing devs will destroy the new dev with complicated-but-deeply-flawed arguments which management will not fully comprehend. At the end of the day, management will do what the majority of existing devs tell them. Devs will nod in unison for the argument which sounds more complex and benefits them the most.
Nobody ever listens to the new hire, even the new hire knows they are powerless to change the existing structure. The best they can do is add more complexity to integrate themselves into the scheme.
Ironically, the new hire can provide more value doing nothing but that will not provide them with job security. The winning strategy is to trade-off deep long-term value for superficial short-term value by implementing lots of small features loaded with technical debt.
It's like gears in a big machine: you could always add 3 gears to do the exact same job as 1 gear, but once those 3 gears are in place, remove any one of them and the machine stops working... So each one seems demonstrably essential to anyone who doesn't understand the tech. That's the principle. When pressed, a merchant of complexity can always remove a gear and say, "Look, the machine doesn't work without it, just like I told you."
> Existing devs will destroy the new dev with complicated-but-deeply-flawed arguments which management will not fully comprehend. At the end of the day, management will do
I sympathize with this position, but I think it's worth admitting that it's also often the case that the complexity increases all made sense at the time of implementation, and the big rewrite in the sky is often more expensive, more risky, and likely to still end up with more new complexity than anticipated at the beginning.
"Under the terms of the agreement, each WBD shareholder will receive $23.25 in cash and $4.501 in shares of Netflix common stock for each share of WBD common stock outstanding at the closing of the transaction. "
reply