There will always be many gaps in people's knowledge. You start with what you need to understand and typically dive deeper only when necessary. Where it starts to be a problem, in my mind, is when people have no curiosity about what's going on underneath, or even worse, get superstitious about avoiding holes in the abstraction without the willingness to dig a little and find out why.
I mean, you can always make things slower. There are lots of non-optimizing or lightly optimizing compilers that are _MUCH_ faster than this. TCC is probably the most famous example, but hardly the only alternative C compiler with performance somewhere between -O1 and -O2 in GCC. By comparison, as I understand it, CCC has performance worse than -O0, which honestly surprises me a bit, since -O0 should not be a hard target to hit. As I understand it, at -O0 a C compiler is basically just macro-expanding into assembly with a bit of order-of-operations handling thrown in. I don't believe it even does register allocation.
That would be true of code using a libc, but in a boot sector you only have the BIOS, so the atoi being referenced is the one defined in C near the beginning of the article.
The “fancy jump” is the branch instruction. As far as I know, all ISAs have them. Even RV32I, which is famously minimal, has several branch instructions in addition to two forms of unconditional jump. Branches are typically used to construct if / for / while, as well as && and || (because of short-circuiting) and the ternary operator (though some architectures have special instructions for that, such as conditional moves, which may or may not be faster than branches depending on the exact model). Without them you would have to use computed goto, with the destination address computed without conditional execution using constant-time techniques.
There's a synergy effect here - Tesla sells you a solar roof and car bundle, the roof comes without a battery (making it cheaper) and the car now gets a free recharge whenever you're home (making it cheaper in the long term).
Of course that didn't work out with this specific acquisition, but overall it's at least a somewhat reasonable idea.
In comparison to datacenters in space, yes. Solar roofs are already a profitable business, just not likely to be a high-growth one. Datacenters in space are unlikely to ever make financial sense, and even if they did, they are very unlikely to show high growth, given the ongoing high capital expenses inherent in the model.
I think a better critique of space-based data centres is not that they will never become high growth; it's that if they do, it implies an economy so radically different from the one we live in that all our current ideas about wealth, nations, ownership, morality, and crime and punishment will seem quaint and outdated.
The "put 500 to 1000 TW/year of AI satellites into deep space" for example, that's as far ahead of the entire planet Earth today as the entire planet Earth today is from specifically just Europe right after the fall of Rome. Multiplicatively, not additively.
There's no reason to expect any current business (or nation, or any given asset) to survive that kind of transition intact.
It's obviously a pretty weird thing for a car company to do, and is probably just a silly idea in general (it has little obvious benefit over normal solar panels, and is vastly more expensive and messier to install), but in principle it could at least work, for some value of "work". The space datacenter thing is a nonsensical fantasy.
Not physics defying, just economically questionable.
The main benefits to being in space are more reliable solar and no need to buy real estate or get permits.
Everything else is harder. Cooling is possible, but the radiators are heavy compared to the solar panels; the computer hardware will probably have a shorter lifetime in space, and it will be unserviceable. The launch cost would have to be very low, and the mean time between failures very high, before I think it would make any economic sense.
It would take a heck of a lot of launches to get a terrestrial datacenter's worth of compute, cooling, and solar into orbit, and even if you ship redundant parts, it would be hard to get equivalent lifetimes without service technicians doing maintenance.
For the bridge, I love how it added a bunch of electrical wires along the top. Imo that’s not very realistic, given there are tons of better places to run wires on a bridge, but somehow it does look substantially more realistic. Even though it seems to be trying to make everything look sad I honestly find the results more inviting because they look lived in.
I think the point is that that sounds like a potential problem for Turso, but it's not really a problem for everyone else unless some sort of vendor lock-in would prevent using open-source alternatives. And given the strong compatibility with the SQLite file format already implied, that just doesn't seem credible.
Yeah, I am really not speaking in terms of a risk for all end users, just those who may rely on Turso as a company. To assume a startup, no matter how well funded, will survive is naive. Historically there have been a lot of issues with this business model. This is not a certain failure, but we cannot ignore the challenges companies in a similar space have faced.
> 2) They have a paid cloud option to drive income from:
I’ve been confused by this for a while. What is it competing with? Surely not SQLite: being client-server defeats all the latency benefits. I assume it would be considered an alternative to cloud Postgres offerings, and it seems unlikely they could compete on features. Genuinely curious: is there any sensible use case for this product, or do they just catch people who read that SQLite was good on Hacker News but didn’t understand any of the why?
The thing that cooks my noodle: who are these insane people who want to beta test a new database? Yes, any database could have world-destroying data loss or corruption, but I have significantly more confidence in a player that has been on the market for many years.
The article talks about this. If you have a project that starts small and an in-process DB is fine, but you end up needing to scale up then you don't have to switch DBs.
After all, if you can tell in advance that you might hit the limits of SQLite, you'd simply start with PostgreSQL on day one, not with a new, unproven DB vendor whose product hasn't been through the trial by fire that existing DBs have.
I think it's more like you started with SQLite and now you need concurrent writes, replication, sharding, etc. etc. - all the stuff that the "big" databases like PostgreSQL provide.