The post fails to mention that SpaceX is not just a rocket company. Bundled with it is xAI, which is presumably losing money hand over fist. Package enough risk together and sell it at a higher price to retail investors. We’ve seen this play…
What I don't get is what happened to 'I'm keeping SpaceX private so we can actually get to Mars', with the understanding that that'd be almost impossible as a public company.
Unfortunately, shorting it means betting that space data centers won’t happen (I’d happily take that bet) and that some fuckery won’t extend the inflated valuation beyond my ability to hold the position.
> Unfortunately, shorting it means betting that space data centers won’t happen (I’d happily take that bet)
This seems like an easy bet to win, though I don't know what you were responding to now that it's flagged. It's hard to imagine 'space data centers' being a meaningful product or infra in the collective lifetimes of anyone posting on this site.
It was a silly comment saying that if you’re dumb enough to think data centers in space won’t be worth a trillion bucks, you can turn that into cash by shorting SpaceX.
Not doubting your experience, but I can't see a single ad using Safari on iPhone.
I was really impressed with my setup, but even after disabling content blockers (Firefox Focus) and turning off Mullvad's free DNS proxy service, still nothing!
Perhaps the author turned the ads off since you visited?
I’m in the exact same boat. I'm also toying with the idea of going back to Android for the Pixel 10 Pro. I do miss Android notifications and the keyboard. Are there any features keeping you from going back?
The whole family is on iPhone and we use Macs now; with the way everything integrates together, I just don't see myself going back. I spent way more on Android and Windows than I have on Apple products. My MacBook Pro with an M4 Pro chip cost half of what my Surface Book 2 cost, and I can do more on it, especially with AI locally.
Something about iOS and macOS just feels right. Any time I boot up my old Android phones they feel like a convoluted mess.
I want to believe, and I promise I'm not trying to be a luddite here. Has anyone with decent (5+ years) experience built a non-trivial new feature in a production codebase quicker by letting AI write it?
Agents are great at familiarizing me with a new codebase. They're great at debugging because even when they're wrong, they get me thinking about the problem differently so I ultimately get the right solution quicker. I love using it like a super-powered search tool and writing single functions or SQL queries about the size of a unit test. However, reviewing a junior's code ALWAYS takes more time than writing it myself, and I feel like AI quality is typically at the junior level. When it comes to authorship, either I'm prompting it wrong, or the emperor just isn't wearing clothes. How can I become a believer?
>Has anyone with decent (5+ years) experience built a non-trivial new feature in a production codebase quicker by letting AI write it?
Yes. Claude Code has turned quarter-long initiatives into a few afternoons of prompting for me, in the context of multiple different massive legacy enterprise codebases. It all comes down to reaching that "Jesus, take the wheel" level of trust in it. You have to be OK with letting it go off and potentially waste hundreds of dollars in tokens giving you nonsense, which it sometimes will. But when it doesn't, it's like magic, and that makes the times it does worth the cost. Obviously you'll still review every line before merging, but that takes an order of magnitude less time than wrestling with it in the first place. It has fundamentally changed what my team and I are able to accomplish.
>Obviously you'll still review every line before merging, but that takes an order of magnitude less time than wrestling with it in the first place.
Just speculating here, but I wouldn't be surprised if the truth of both parts of this sentence varies quite a bit amongst users of AI coding tools and their various applications; and, if so, whether that explains a lot of the discrepancy amongst reports of success/enthusiasm levels.
“Kinda”. I run Claude code on a parallel copy of our monorepo, while I use my primary copy.
I typically only give Claude the boring stuff. Refactors, tech debt cleanup, etc. But occasionally will give it a real feature if the urgency is low and the feature is extremely well defined.
That said, I still spend a considerable amount of time reviewing and massaging Claude’s code before it gets to PR. I haven’t timed myself or anything, but I suspect that when the task is suitable for an LLM, it’s maybe 20-40% faster. But when it’s not, it’s considerably slower and sometimes just fails completely.
> Has anyone with decent (5+ years) experience built a non-trivial new feature in a production codebase quicker by letting AI write it?
I would say yes. I have been blown away a couple of times. But find it is like playing a slot machine. Occasionally you win — most of the time you lose. As long as my employer is willing to continue to cover the bet, I may as well pull the handle. I think it would be pretty hard to convince myself to pay for it myself, though.
> and I feel like AI quality is typically at the junior level. When it comes to authorship, either I'm prompting it wrong, or the emperor just isn't wearing clothes. How can I become a believer?
The emperor is stark naked, but the hype is making people see clothes where there is only a hairy, shriveled old man.
Sure, I can produce "working" code with Claude, but I have never been able to produce good working code. Yes, it can write an okay-ish unit test (almost 100% identical to how I'd have written it), and on a well-structured codebase (not built with Claude), with some preparation, it can kind of produce a feature. However, on more interesting problems it's just slop, and you gotta keep trying and prodding until it produces something remotely reasonable.
It's addictive to watch it conjure up trash while you constantly try to steer it in the right direction, but I have never, ever been able to achieve the code quality level I'm comfortable with. Fast prototype? Sure. Code that can pass my code review? Nah.
What is also funny is how non-deterministic the quality of the output is. Sometimes it really does feel like you're almost taking off with it, and then, bam, garbage. It feels like roulette, and you gotta keep spinning the wheel to get your dopamine hit/reward.
All while wasting money and time, and still it ends up far, far worse than if you had just done it yourself in the first place. Hard pass.
Voluntary certification, please. Law is slower than technology. This is a good thing! EnergyStar is a great example of a voluntary program doing more good than DoE or FTC mandates. HIPAA is a good example of what happens when mandates can’t keep up with technology. When it comes to security, we can’t afford another HIPAA.
100% agree! This is a totally voluntary program that is explicitly based on EnergyStar.
I also worry that check-the-box compliance is one possible outcome. I'd love to see professionals comment on the record about where a checklist would and wouldn't be helpful. I'd also love commentary on whether and where liability for failure to meet stated commitments would be helpful.
Personally, I let VS Code do typechecking on open files & have a pre-commit git hook to typecheck changed files.
When it comes to starting up a development server & building the client, there's a huge cost to repeatedly typechecking the same 1000+ files. By cutting out typechecking & only transpiling, I can reduce the webpack client build from 40 seconds to 3 seconds (using sucrase).
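A minimal sketch of that split, assuming npm scripts (names and flags here are illustrative, not the commenter's actual setup): typechecking and building become separate commands, so the hot build path never invokes tsc.

```json
{
  "scripts": {
    "typecheck": "tsc --noEmit",
    "build": "webpack --mode production"
  }
}
```

Inside webpack, the `.ts` files would then go through a type-stripping transpiler such as sucrase (e.g. via `@sucrase/webpack-loader`), which removes type annotations without checking them; `tsc --noEmit` runs separately (in the editor, a hook, or CI) to catch type errors.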
I'm replying to you, but this is really for most of the sibling comments:
I maintain a fairly large TS codebase (Node.js backend). I think a full rebuild is about 1:30 on my relatively recent laptop (i7-8650U, lots of RAM). But in practice I always use tsc -w, and compilation is mostly instant after editing a file (I do have to wait whenever I launch the command after a boot, but after that, it's fast enough).
tsc now supports incremental compilation too, though I haven't played with it too much as I'm happy with watch mode.
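For reference, incremental mode is just a tsconfig flag; a minimal sketch (the build-info path is illustrative):

```json
{
  "compilerOptions": {
    "incremental": true,
    "tsBuildInfoFile": "./.tsbuildinfo"
  }
}
```

With this set, tsc writes dependency information to the `.tsbuildinfo` file and reuses it on the next run to skip re-checking files that haven't changed.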
Incremental compilation is _okay_, but it seems to perform full rebuilds if the previous build is _too_ old, which is weird: more builds than expected take the entire time to complete. Nonetheless I use it for all of my builds. On the other hand, this may just be because of the spiraling nature of type inference, where TS needs to re-examine a large amount of the program after a change that appears simple.
Personally I only have a small number of projects that take more than 10-20 seconds to compile, but those ones are painful. I should probably do the same with -w for those.
What if not everyone else on your team is paying attention to their editor's warnings? What if the LS's async response doesn't come fast enough? What if, god forbid, you have to change something on a different computer?
If it's not in your CI/CD scripts, it's not actually getting checked.
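As a sketch of what that looks like in practice (assuming GitHub Actions; the job and step layout here is illustrative), the typecheck becomes a required CI step rather than something each developer has to remember to run:

```yaml
# Hypothetical CI workflow: the typecheck runs on every push and PR,
# independent of anyone's editor configuration or local habits.
name: typecheck
on: [push, pull_request]
jobs:
  typecheck:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
      - run: npm ci
      - run: npx tsc --noEmit
```

Paired with branch protection requiring this job to pass, nothing unreviewed by the compiler lands on the main branch.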
> I can reduce the webpack client build... to 3 seconds
This is about 10x longer than any interaction with a computer should be.
Sorry, I don't understand the analogy. Yes, TS allows some bugs through, and that's a problem too. But that's also a far cry from "always rely on humans to run complex processes in a consistent way."
This is similar to what I do (although I use esbuild); however, like an idiot, I just run tsc manually, so obviously I forget sometimes and it takes 10 minutes to realise the build failed.
Not great... I'm ashamed but it's just me on this project atm.
As an app backend guy, it's always fun to dive into infrastructure stuff like this that I know little about. Naively, it looks like they pieced together some K8s alternatives. I saw they wrote about why they moved away from K8s here: https://www.koyeb.com/blog/the-koyeb-serverless-engine-from-....
Usually when blog posts like these come out, other K8s practitioners say their K8s was just misconfigured.
Would love to see more opinions from K8s folks on the assumptions & design decisions.
I'm a k8s practitioner and I don't think they were wrong. There are some limitations around maximum number of nodes in a cluster, multi-region clusters, and multi-tenancy that make it difficult for a platform provider intending to serve a global user base that can potentially get very, very big.

The simplest solution is to deploy many clusters, but if the solution they went with made it possible to deploy a single control plane, that is a simpler setup and I can't see any good reason not to go with that. At least one advantage of Kubernetes is it already exists, so you don't need to implement a whole bunch of orchestration and separation logic from scratch, but if you're a platform provider, you should have all the expertise to roll your own if pre-existing solutions don't meet your needs. I'd rather do that than tailor my product to the limitations of a toolchain I didn't create myself.
How about the nullish coalescing operator (??), which has literally zero use outside of encouraging an anti-pattern: failing to disambiguate null and undefined at the earliest possible opportunity?
How the fuck did that ever make it into the language?
Some of the upcoming proposals are absolutely ridiculous too. A lot of these are ideas that would be fine for niche use cases, but the proposals themselves have examples so stupid that they should be shot down immediately, because the person making the proposal clearly doesn't know what the fuck they're doing (e.g. the ridiculous JSX example in the 'do' expression proposal, where for some reason we now want to shoehorn a block of procedural code into a 50-line declarative expression instead of just using the basic one-line ternary operator that has been in the language for years).