> mag7 (minus) tesla are all relatively cheap when they dip
I asked ChatGPT for a list of Magnificent 7 stocks and their most recent price to earnings (PE) ratios.
Company Ticker P/E Ratio
Apple Inc. AAPL ~33
Microsoft Corporation MSFT ~25
Alphabet Inc. GOOGL ~29
Amazon.com Inc. AMZN ~30
NVIDIA Corporation NVDA ~38
Meta Platforms Inc. META ~28
Tesla Inc. TSLA ~378
In the last 50 years, I think the median P/E ratio for the S&P 500 index is about 15. Seven and below is considered rock bottom, and 30 and above is very high. These P/E ratios look pretty damn high to me.
How much do these names need to "dip" for you to consider them cheap?
There are a few things to consider if you are in the investment space:
- Growth rate: you can't compare them to average single-digit-growth companies or dividend-focused companies. Most of these tech companies' revenue is still growing at double digits, with good moats. P/E is a good measure, but it's not absolute. If you believe they can sustain their growth, then it's a good bet; and you can choose not to buy into their growth stories, too. At the end of the day, investment is about judgment calls.
- History benchmark: some of their P/Es are at historical lows, so they are actually cheaper than before.
- P/E (TTM) and forward P/E: what is their trailing-twelve-month P/E? What forward P/E are they projecting? If the forward P/E is significantly lower, the current analyst consensus is that earnings will grow.
- P/E is a number, but it's not everything. You need to consider multiple things to decide whether something is undervalued for you. It's highly subjective, and different interpretations are common.
- This post is about whether you want to play the GAAP game with private tech companies. My point is that there are still many public companies that are cheap at certain points. You just need to be patient and willing to research and wait. For example, Meta at around 500 was a buy for me; it has since rebounded, so it's still good, but not as undervalued as it was a few days ago.
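To make the TTM-vs-forward P/E point concrete, here's a minimal sketch. All prices and EPS figures below are made-up illustrations, not real data for any company:

```python
# Hypothetical illustration: trailing (TTM) vs. forward P/E.
# A forward P/E well below the trailing P/E means analysts
# expect earnings to grow.

def pe(price: float, eps: float) -> float:
    """Price-to-earnings ratio."""
    return price / eps

price = 600.0          # hypothetical share price
eps_ttm = 20.0         # trailing-twelve-month EPS (hypothetical)
eps_forward = 25.0     # consensus next-year EPS estimate (hypothetical)

trailing = pe(price, eps_ttm)      # 30.0
forward = pe(price, eps_forward)   # 24.0

implied_growth = eps_forward / eps_ttm - 1  # 25% expected EPS growth
print(f"trailing P/E {trailing:.1f}, forward P/E {forward:.1f}, "
      f"implied EPS growth {implied_growth:.0%}")
```

The same share price divided by a higher expected EPS gives the lower forward multiple; that gap is exactly the growth the market is pricing in.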
For other readers, I want to add some context here. NASDAQ is pondering whether or not to change its NASDAQ 100 index membership rules for IPOs. Currently, there is a three-month waiting rule for IPOs. They are proposing (I'm not sure whether it has been approved yet) to remove this waiting rule.
Real question: What is the real impact of this rule change? To me, it seems so minor. Three months is just a blip in time for any long term investor.
> which corruptly will force us all to buy into these companies
Why is this "corrupt"? That term makes no sense here.
Also, if you don't like the NASDAQ 100 rules, then you don't have to invest in securities that track it. You can trade the basket yourself minus the names that you don't like.
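For what "trade the basket yourself minus the names you don't like" would mean mechanically, here's a rough sketch: drop the excluded tickers and renormalize the remaining weights so they still sum to one. The weights below are made-up placeholders, not actual NASDAQ 100 weights:

```python
# Sketch: hold an index-like basket while excluding specific names.
# After excluding, the surviving weights are scaled back up to sum to 1.

def exclude_and_renormalize(weights: dict[str, float],
                            excluded: set[str]) -> dict[str, float]:
    kept = {t: w for t, w in weights.items() if t not in excluded}
    total = sum(kept.values())
    return {t: w / total for t, w in kept.items()}

# Hypothetical index weights (illustration only).
index_weights = {"AAPL": 0.12, "MSFT": 0.11, "NVDA": 0.10,
                 "TSLA": 0.04, "OTHER": 0.63}

custom = exclude_and_renormalize(index_weights, {"TSLA"})
assert abs(sum(custom.values()) - 1.0) < 1e-9
print(custom)
```

In practice you'd also have to rebalance whenever the index does, which is part of why most people (as the reply below this points out) never actually do this.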
Finally, I would say that the S&P 500 index is far more important than the NASDAQ 100. To join the S&P 500, a name must be profitable for the most recent year (four quarters). Recall that Uber IPO'd in 2019 but was not profitable until 2023. OpenAI probably will not be profitable when it goes public; thus, it will not join the S&P 500 immediately.
I think the bigger story is SpaceX. It will likely IPO very close to a 1T USD market cap (with a small float: ~10%). And, thanks to StarLink, I assume that SpaceX is now wildly profitable.
> Also, if you don't like the NASDAQ 100 rules, then you don't have to invest in securities that track it.
Isn't the idea with the indexes that they allow you to intentionally not take an activist position in the market? The exposure is not tied to any underlying market hypothesis. In other words, if we make people form a market hypothesis in order to decide whether or not to hold this index, it has failed in its purpose.
The "corruption" allegation is that for, yes, SpaceX, index funds will effectively be "forced" to buy in right away at their IPO price, rather than seeing where they settle before getting the money in. Given that most people have most of their money in index funds, it's sort-of an automatic buy and raises some hackles about a fixed game.
Saying "you can trade the basket yourself minus the names you don't like" is not a real counterargument. Most of us are not going to do that, I'm not going to do that and I'm writing this post right now. John Doe is certainly not doing that.
Scala could be one example? When I upgraded to a newer version of the standard library (the Scala 2.13 or Scala 3 collections library), there was a tool, Scalafix [1], that could update my source code to work with the new library. Don't think it was perfect (don't remember), but helpful.
Personally, I've heard that Odin [1] does a decent job with this, at least from what I've superficially learned about its stdlib and included modules as an "outsider" (not a regular user).
It appears to have things like support for e.g. image file formats built-in, and new things are somewhat liberally getting added to core if they prove practically useful, since there isn't a package manager in the traditional sense.
Here's a blog post by the language author literally titled "Package Managers are Evil" [2].
(Please do correct me if this is wrong, again, I don't have the experience myself.)
Normally, your posts are very coherent, but this one flies off the rails. (Half joking: did someone hack your account!?) I don't understand your rant here:
> With the amount of bullshit animations all OSes come with these days, enabled by default, and most applications being webapp with their own secondary layer of animations, and with the typical developer's near-zero familiarity with how floating point numbers behave
I use KDE/GNU/Linux, and I don't see a lot of unnecessary animations. Even at work where I use Win11, it seems fine. "[M]ost applications being webapp": This is a pretty wild claim. Again, I don't think any apps that I use on Linux are webapps, and most at work (on Win11) are not.
> Until recently, I was rather skeptical of agentic code. February 2026, however, has been a sort of inflection point even stubborn developers like myself can’t ignore.
"February 2026" is just way to specific. It feels like a PR/marketing team wrote it. It acts like a jump scare in the post for any normie programmer.
Opus 4.5 to 4.6 was pretty incremental, I didn't see much of a difference.
The big coding model moments in recent recollection, IMO, were something like:
- Sonnet 3.5 update in October 2024: ability to generate actually-working code using context from a codebase became genuinely feasible.
- Claude 4 release in May 2025: big tool calling improvements meant that agentic editors like Claude Code could operate on a noticeably longer leash without falling apart.
- Gemini 3 Pro, Claude 4.5, GPT 5.2 in Nov/Dec 2025: with some caveats these were a pretty major jump in the difficulty and scale of tasks that coding assistants are able to handle, working on much more complex projects over longer time scales without supervision, and testing their own work effectively.
Maybe they're like me, who didn't spend a lot of time investigating Claude until 4.6 launched and the hype was enough to be the tipping point to invest energy. I do know that I've been having good/great results with Opus 4.6 and the CLI, but after an hour or so, it'll suddenly forget that the codebase has tab-formatted files and burn up my quota trying to figure out how to read text files. And apparently this snafu has been around since at least late last year [0]. Again, I can't complain about the overall speed and quality for my relatively light projects, I'm just fascinated by people who say their agents can get through a whole weekend without supervision, when even 4.6 appears to randomly get tripped up in a very rookie way?
There's definitely a productivity curve element to getting it to behave effectively within a given codebase. Certainly in the codebases I work with most frequently I find Claude will forget certain key aspects (how to run the tests or something) after a while and need a reminder, otherwise it gets into a loop like that trying to figure out how to do it from first principles with slightly incorrect commands.
I think a lot of the noise about letting Claude run for very extended periods involves relatively greenfield projects where the AI is going to be using tools and patterns and choices that are heavily represented in training data (unless you tell it not to), which I think are more likely to result in a codebase that lends itself to ongoing AI work. People also just exaggerate and talk about the one time doing that actually worked vs the 37 times Claude required more handholding.
The bigger problem I see with the "leave it running for the weekend" type work is that, even if it doesn't get caught up on something trivial like tabs vs spaces (glad we're keeping that one alive in the AI era, lol), it will accumulate bad decisions about project structure/architecture/design that become really annoying to untie, and that amount to a flavor of technical debt that makes it harder for agents themselves to continue to make forward progress. Lots of insidious little things: creating giant files that eventually create context problems, duplicating important methods willy nilly and modifying them independently so their implementations drift apart, writing tests that are..."designed to pass" in a way that creates a false sense of confidence when they're passing, and "forest for the trees" kind of issues where the AI gets the logic right inside a crucial method so it looks good at a glance, but it misses some kind of bigger picture flaw in the way the rest of the code actually uses that method.
Yes, for me I think it was around Nov/Dec 2025, along with harness improvements and hearing about lots of successes with agentic programming: having the agent manage its own context and do the full software engineering loop of writing code, running it, and seeing if it works. That was already there before February 9th.
This is also supported by the Opus degradation tracker [1]. The dotted line is when they switched from Opus 4.5 to 4.6. There's no statistically significant difference on the tested benchmark.
Whatever I used Sonnet 4.6 for, including Claude Code and Claude Chat, it made so many mistakes and totally awkward assumptions that I can’t fathom what it’s supposed to be good at. The mistakes were so blatant. Plan mode, several passes, couple grand in API costs… just disappointing at every task in every session over the past few weeks. Opus 4.6 has been good, still quite a few unexpected, silly mistakes, a few subtle but critical mistakes, but produced workable increments and code reviews, vastly subpar to GPT-5.x in chat mode (with and without identical customization).
The blog post said that the Iran war costs the US at least 1 billion USD per day. The US is incredibly rich and can afford the cost. What I don't see being discussed: what if the US (and Israel) does not put troops on the ground in Iran, but continues relentless, daily aerial bombing... forever (1/2/3 years)? I am not saying that you can control a country from air superiority alone (this has been widely discussed by military strategists -- it cannot), but you can endlessly bomb their military assets. What would happen? Honestly, I don't know. I don't think it has been done in the last 50 years of war. (Please provide counterexamples if you know any.)
That's one way to make sure people living under aerial bombing firmly support a regime defending their sovereignty, hence legitimizing the Islamic Republic. Example: the Taliban didn't get any weaker in the end, even with boots on the ground against them.
"There are a lot of people who say that bombing can never win a war. Well, my answer to that is that it has never been tried yet, and we shall see." - Sir Arthur Harris
The response is as applicable now as it was then. Time will tell.
Many of their military assets are underground, out of reach of bombers. And you need somewhere to stage out of, which probably isn't the Gulf bases that are being hit by missiles and drones at the moment. The aircraft carriers have been having issues and are being pushed back out of missile range. So it becomes more difficult and expensive to keep the bombing up.
I mean, the answer to underground facilities is that you just keep bombing the entrances, which is exactly what they've done. Iran still has insane supply levels of ballistic missiles, so the US/Israel are eradicating their transporter-erector-launcher (TEL) fleet.
The US bombed basically all of the Iraqi military in 1991, yet the war didn't end and Iraq didn't leave Kuwait until troops on the ground went in. Air power alone cannot control territory or compel political change.
The second the first bomb hit, the Republican Guard went from a standing military force to a guerrilla army, similar in a lot of ways to what the US faced in Iraq, just vastly better-trained and better-equipped. The US couldn't subdue Iraq with hordes of troops on the ground for years, so why would anyone imagine an air-only campaign would have better results against a stronger and larger opponent?
I don't think we could see a bombing campaign like the one we've seen so far for anywhere near that length of time, partly for munitions reasons and partly for target reasons. There is only so much stuff to blow up, and only so many bombs to blow things up with. We can't produce them at anywhere near the rate that would be required to do this for years.
> In a different scenario there'd be no motivation for a country like Iraq or Jordan to help.
While unprovable, I think the sentiment is too strong for Jordan. They have pretty good relations with Israel, and have been using their own fighter jets to down some drones from Iran. If anything, it is good practice for their airforce.
First, hat tip on that Guardian article that you shared. The map of desalination plants around the Persian Gulf is excellent.
My first thought looking at it: why does Saudi Arabia have desal plants in Riyadh? It is hundreds of km away from the Persian Gulf! Maybe they want some far from the Gulf for security reasons? Otherwise, it looks weird. I imagine they need to pump sea (salty) water from the Gulf to Riyadh, desalinate it, then pump the waste water back. Quite a journey.
Some background for interested readers: Sophie Schmidt is the daughter of the former Google CEO Eric Schmidt. She accompanied him in Jan 2013 on a (state-sponsored? humanitarian?) visit to North Korea.
My favourite part of the blog post is when she visits The Kim Il Sung University e-Library, "or as I like to call it, the e-Potemkin Village".