Yeah it’s a silly line of reasoning. The transformations of TS -> JS are a lot smaller and simpler than C -> asm / machine code; it’s basically just removing type annotations. Now minification and optimization can make the output a lot more terse, but that can be done for JS too. And it’s not as complicated and detached from the source as an optimizing compiler is.
Let's not act like it's the same thing. I'm not strictly talking about just TypeScript; I'm saying that if you work with these technologies every day, it would be wise to go look at their Vite plugins to see how they transform your code and be sure to understand it. It's nice to have magic, but it's nicer to use the magic if we have demystified it first.
And I don't know about you, but I occasionally do open compiled ELF files in a hex editor and I certainly did at first when I was learning more. That's a good practice also.
> including a way of compiling and executing C that panics when a memory access bug is encountered.
WASM couldn’t do that because it doesn’t have a sense of the C memory model nor know what is and isn’t safe - that information has long been lost. That kind of protection is precisely what Fil-C is doing.
WASM is memory safe in that you can’t escape the runtime. It’s not memory safe in that you can still corrupt the memory of the program running within the sandbox, which you can’t do with a memory safe language like Rust or Fil-C.
You can do that, but you keep missing that it’s no longer a true microservice as originally defined and envisioned: one that you can deploy independently under local control.
Can you imagine if Google could only release a new API if all their customers simultaneously updated to that new API? You need loose coupling between services.
OP is correct that you are indeed now in a weird hybrid monolith application, where it’s deployed piecemeal but can’t really be deployed independently because of tightly coupled dependencies.
Be ready for a blog post in ten years about how they broke apart the monolith into loosely coupled components because it was too difficult to ship things with a large team and actually have it land in production without getting reverted due to an unrelated issue.
Internal and external have wildly different requirements. Google internally can't update a library unless the update is either backward-compatible for all current users or part of the same change that updates all those users, and that's enforced by the build/test harness. That was an explicit choice, and I think an excellent one, for that scenario: it's more important to be certain that you're done when you move forward, so that it's obvious when a feature no longer needs support, than it is to enable moving faster in "isolation" when you all work for the same company anyway.
But also, you're conflating code and services. There's a huge difference between libraries that are deployed as part of various binaries and those that are used as remote APIs. If you want to update a utility library that's used by importing code, then you don't need simultaneous deployment, but you would like to update everywhere to get it done with - that's only really possible with a monorepo. If you want to update a remote API without downtime, then you need a multi-phase rollout where you introduce a backward-compatibility mode... but that's true whether you store the code in one place or two.
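A hedged sketch of what that backward-compatibility mode can look like (the field names here are made up, not from the article):

    // Hypothetical sketch of a backward-compatibility mode during a
    // multi-phase rollout (field names are made up, not from the article).
    package api

    import "encoding/json"

    // CreateOrderRequest accepts both the old and the new field name while
    // callers are being migrated.
    type CreateOrderRequest struct {
        CustomerIDNew string `json:"customerId,omitempty"`
        CustomerIDOld string `json:"customer_id,omitempty"`
    }

    // CustomerID returns whichever field the caller actually sent,
    // preferring the new one.
    func (r CreateOrderRequest) CustomerID() string {
        if r.CustomerIDNew != "" {
            return r.CustomerIDNew
        }
        return r.CustomerIDOld
    }

    // Decode parses a request body in either shape.
    func Decode(body []byte) (CreateOrderRequest, error) {
        var req CreateOrderRequest
        err := json.Unmarshal(body, &req)
        return req, err
    }

Phase one ships the tolerant reader, phase two migrates callers to the new field, phase three drops the old one; none of those phases requires the caller and the service to deploy at the same moment, regardless of where the code lives.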
The whole premise of microservices is loose coupling - external just makes it plainly obvious that it’s a non-starter. If you’re not loosely coupled, you can call it microservices, but it isn’t really.
Yes I understand it’s a shared library but if updating that shared library automatically updates everyone and isn’t backward compatible you’re doing it wrong - that library should be published as a v2 or dependents should pin to a specific version. But having a shared library that has backward incompatible changes that is automatically vendored into all downstream dependencies is insane. You literally wouldn’t be able to keep track of your BOM in version control as it obtains a time component based on when you built the service and the version that was published in the registry.
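For illustration, a rough sketch of what I mean in Go module terms (the module paths here are made up, and the post's stack may well be different):

    // go.mod of one downstream service (module paths are made up)
    module example.com/orders-service

    go 1.22

    // Pin shared-foo to an exact tagged version: upgrading is an explicit,
    // reviewed change in this repo rather than something that happens
    // implicitly at build time.
    require example.com/shared-foo v1.4.2

A breaking change would instead ship under a new major version path (e.g. example.com/shared-foo/v2), so old and new can coexist while dependents migrate on their own schedule.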
> if updating that shared library automatically updates everyone and isn’t backward compatible you’re doing it wrong - that library should be published as a v2 or dependents should pin to a specific version
...but why? You're begging the question.
If you can automatically update everyone including running their tests and making any necessary changes to their code, then persisting two versions forever is a waste of time. If it's because you can't be certain from testing that it's actually a safe change, then fine, but note that that option is still available to you by copy/pasting to a v2/ or adding a feature flag. Going to a monorepo gives you strictly more options in how to deal with changes.
> You literally wouldn’t be able to keep track of your BOM in version control as it obtains a time component based on when you built the service
This is true regardless of deployment pattern. The artifact that you publish needs to have pointers back to all changes that went into it/what commit it was built at. Mono vs. multi-repo doesn't materially change that, although I would argue it's slightly easier with a monorepo since you can look at the single history of the repository, rather than having to go an extra hop to find out what version 1.0.837 of your dependency included.
> the version that was published in the registry
Maybe I'm misunderstanding what you're getting at, but monorepo dependencies typically don't have a registry - you just have the commit history. If a binary is built at commit X, then all commits before X across all dependencies are included. That's kind of the point.
> ...but why? You're begging the question.
> If you can automatically update everyone including running their tests and making any necessary changes to their code, then persisting two versions forever is a waste of time.
I'm not begging the question. I'm simply stating what loose coupling looks like, and the blog post describes precisely the problem of tight coupling. If you have multiple teams working on a tightly coupled system you're asking for trouble. This is why software projects inevitably decompose along team boundaries and you ship your org chart - communication and complexity are really hard to manage as the head count grows, which is where loose coupling helps.
But this article isn't about moving from federated codebases to a single monorepo as you propose. They used that as an intermediate step to enable making it a single service. But the point is that making a single giant service is a well-studied problem. I had this constantly at Apple when I worked on CoreLocation, where locationd was a single service responsible for so many things (GPS, time synchronization of Apple Watches, WiFi location, motion, etc.) that there was an entire team managing the process of getting everything to work correctly within a single service, and even then people constantly stepped on each other's toes accidentally and caused builds that were not suitable. It was a mess, and the team that should have identified it as a bottleneck in need of solving (i.e. splitting out separate loosely coupled services) instead just kept rearranging deck chairs.
> Maybe I'm misunderstanding what you're getting at, but monorepo dependencies typically don't have a registry - you just have the commit history
I'm not opposed to a monorepo, which I think may be where your confusion is coming from. I'm suggesting that slamming a bunch of microservices back together is a poorly thought out idea, because you'll still end up with a launch coordination bottleneck, and rolling back one team's work forces other teams to roll back as well. It's great the person in charge got to write a rah-rah blog post for their promo packet. Come talk to me in 3 years with actual on-the-ground engineers saying they are having no difficulty shipping a large, tightly coupled monolithic service, or that they haven't had to build out a team to help architect a service where all the different teams can safely and correctly coexist. My point about the registry is that they took one problem - a shared library that multiple services depend on at latest through a registry, causing deployment problems - and nuked it from orbit by using a monorepo (ok - this is fine and a good solution - I can be a fan of monorepos provided your infrastructure can make it work) and by making a monolithic service (probably not a good idea, one that only sounds good when you're looking for things to do).
> I'm not begging the question. I'm simply stating what loose coupling looks like, and the blog post describes precisely the problem of tight coupling.
But it is not! They were updating dependencies and deploying services separately, and this led to every one of 140 services using a different version of "shared-foo". This made it cumbersome, confusing and expensive to keep going (if you want a new feature from shared-foo, you have to take all the other features too, unless you fork and cherry-pick on top, which means it's not a shared-foo anymore).
The point is that the true microservice approach will always lead to exactly this situation: either a) you do not extract shared functions and live with duplicate implementations, b) you enforce keeping your shared dependencies always very close to latest (which you can do with different strategies; a monorepo is one that enables but does not require it), or c) you end up with a mess of versions being used by each individual service.
The most common middle ground is to insist on backwards compatibility in a shared-lib, but carrying that over 5+ years is... expensive. You can mix it with an "enforce update" approach ("no version older than 2 years can be used"), but all the problems are pretty evident and expected with any approach.
I'd always err on the side of having the capability to upgrade everything at once if needed, while keeping the ability to hold a single service on a pinned version. This is usually not too hard with any approach, though a monorepo makes the first part appear easier (you edit one file, or multiple dep files in a single repo). But unless you can guarantee all services get replaced in a deployment at exactly the same moment (which you rarely can) or can accept short-lived inconsistencies, deployment requires all services to be backwards compatible until they are all updated, with either approach.
I'd also say that this is still not a move to a monolith, but to a Service-Oriented Architecture that is not microservices (as microservices are also SOA): as usual, the middle ground is the sweet spot.
To reference my other comment: this thread is about the nuance of whether a dependency on a shared software repository means you are a microservice or not. I'm saying it's immaterial to the definition.
A dependency on an external software repository does not make a microservice no longer a microservice. It's the deployment configuration around said dependency that matters.
What everyone else is saying is that the core value proposition of microservices is that they are independently deployable (which I believe is what you are aiming for as well), which means that there is no tight coupling between them.
If one introduces tight coupling by having a shared library that gets updated in a backwards-incompatible way and needs to be updated simultaneously in each microservice, you move away from a microservices architecture, as your services are not independently deployable anymore.
So in the general case, it is immaterial, but in practice, it can be a mechanism which introduces tight coupling and negates the core value of the microservices architecture.
Here, it was done on purpose as a step to a more monolithic architecture (though it was still only a single service in a larger system, so I'd avoid the "monolith" term).
> Be ready for a blog post in ten years about how they broke apart the monolith into loosely coupled components because it was too difficult to ship things with a large team and actually have it land in production without getting reverted due to an unrelated issue.
Some of their "solutions" leave me wondering how they plan to resolve things, like the black-box "magic" queue service they subbed back in, or the fault tolerance problem.
That said, I do think if you have a monolith that just needs to scale (a single service that has to send to many places), they are possibly taking the correct approach. You can design your code/architecture so that you can deploy "services" separately, in a fault-tolerant manner, but out of a monorepo instead of many independent repos.
They don't have a monolith: they have a service that has a restricted domain of responsibility matched to the team that runs it.
There is nothing magic about their queue service, and it seems correctly tuned to the complexity that they've got to cover: yes, just like most queue implementations, it will get different types of messages (events). If anything, their previous implementation was too complex which caused lots of waste.
With hindsight, they should have evolved their original architecture into exactly what they pivoted to now: better fault tolerance in "processors" of different types.
I would hope that my general rule of "only solve exactly the problem you have in front of you" would have avoided the approach they took, but engineers love to abstract away things and introduce indirection layers and add accidental complexity that way. And ofc, "microservices great, me want microservices" too :)
Again, I am not saying this as a slight: I believe many of us have learned the limits of microservices by, well, living through them :) And now we tune our abstraction layers differently.
> They don't have a monolith: they have a service that has a restricted domain of responsibility matched to the team that runs it.
Except, for lack of a better definition, that is a monolith.
And there's nothing wrong with one, if that's what you need.
> I would hope that my general rule of "only solve exactly the problem you have in front of you"
True. That was the issue with everyone jumping on the microservice train: most of it was about solving problems nobody had.
When you really need an independent service, go build an independent service. Call them micro if you like (again, no good definition for what microservice or monolith actually mean).
> Can you imagine if Google could only release a new API if all their customers simultaneously updated to that new API? You need loose coupling between services.
Internal Google services: *sweating profusely*
(Mostly in jest, it's obviously a different ballgame internal to the monorepo on borg)
Go is not a memory safe language. Even in memory safe languages, memory safety vulnerabilities can exist. Such vulnerabilities can be used to hijack your process into running untrusted code. Or, as others point out, sibling processes could attack yours. The underlying principle is defense in depth - you add another layer of protection that has to be bypassed to achieve an exploit. All the layers combined raise the expense of hacking a system.
Respectfully, this has become a message board canard. Go is absolutely a memory safe language. The problem is that "memory safe", in its most common usage, is a term of art, meaning "resilient against memory corruption exploits stemming from bounds checking, pointer provenance, uninitialized variables, type confusion and memory lifecycle issues". To say that Go isn't memory safe under that definition is a "big if true" claim, as it implies that many other mainstream languages commonly regarded as memory safe aren't.
Since "safety" is an encompassing term, it's easy to find more rigorous definitions of the term that Go would flunk; for instance, it relies on explicit synchronization for shared memory variables. People aren't wrong for calling out that other languages have stronger correctness stories, especially regarding concurrency. But they are wrong for extending those claims to "Go isn't memory safe".
I’m not aware of any definition of memory safety that allows for segfaults - by definition those are an indication of not being memory safe.
It is true that Go is only memory unsafe in a specific scenario, but such things aren’t possible in true memory safe languages like C# or Java. That it only occurs in multithreaded scenarios matters little, especially since concurrency is a huge selling point of the language and baked in.
Java can have data races, but those data races cannot be directly exploited into memory safety issues the way they can in Go. I’m tired of Go fans treating memory safety as some continuum just because there are many specific classes of ways memory safety can be violated, as if Go protecting against most were somehow the same as protecting against all (which is what being a memory safe language means, whether you like it or not).
I’m not aware of any other major language claiming memory safety that is susceptible to segfaults.
Safety is a continuum. It's a simple fact. Feel free to use a term other than memory safety to describe what you're talking about, but so long as you use safety, there's going to be a continuum.
Also, by your definition, e.g. Rust is not memory safe. And "It is true that Rust is only memory unsafe in a specific scenario, but [...]". I hope you agree.
Another canard, unfortunately. "Segfault" is simply Go's reporting convention for things like nil pointer hits. "Segfaults" are not, in fact, part of the definition for memory safety or a threshold condition for it. All due respect to Ralf's Ramblings, but I'm going to rest my case with the Prossimo page on memorysafety.org that I just posted. This isn't a real debate.
The panic address is 42, a value being mutated, not a nil pointer. You could easily imagine that value pointing to a legal but unintended memory address, resulting in a read or write of unintended memory.
No, you can't, and the reason you know you can't is that it's never happened. That looks like a struct offset dereference from a nil pointer, for what it's worth.
> That looks like a struct offset dereference from a nil pointer, for what it's worth.
The 42 is an explicit value in the example code. From what I understand, the code repeatedly changes the value assigned to an interface variable from an object containing a pointer to an object containing an integer. Since interface variables store the type of the assigned value but do not update type and value atomically, a different thread can interpret whatever integer you put into it as a valid pointer. Putting a large enough value into the integer should avoid the protected memory page around 0 and allow for some old-fashioned memory corruption.
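If I've read the post right, the shape of it is something like this sketch (the names are mine, not the post's exact code):

    package main

    // Illustrative sketch, not the blog post's exact code: an interface
    // value is two words (type descriptor + data pointer), and the two are
    // not updated atomically, so a racing reader can pair one type's
    // method with the other type's data.

    type cell interface{ get() int }

    type intCell struct{ val int }  // val is a plain integer
    type ptrCell struct{ val *int } // val is a pointer

    func (c *intCell) get() int { return c.val }
    func (c *ptrCell) get() int { return *c.val } // dereferences val

    func main() {
        x := 0
        var shared cell = &intCell{val: 42}

        // Writer: keeps flipping the dynamic type stored in the interface.
        go func() {
            for {
                shared = &intCell{val: 42}
                shared = &ptrCell{val: &x}
            }
        }()

        // Reader: with unlucky timing it can observe ptrCell's method table
        // together with a data pointer that actually points at an intCell,
        // so (*ptrCell).get reads the integer 42 as a pointer and the
        // program accesses address 42.
        for {
            _ = shared.get()
        }
    }

Whether a given run actually faults at 42 or silently touches other memory depends on timing, but that's exactly the gadget being described.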
You’d be wrong. I recommend you reread the blog post and grok what’s happening in the example.
> When that happens, we will run the Ptr version of get, which will dereference the Int’s val field as a pointer – and hence the program accesses address 42, and crashes.
If you don’t see an exploit gadget there based on a violation of memory safety I don’t know how to have a productive conversation.
Rust is susceptible to segfaults when overflowing the stack. Is Rust not memory safe then?
Of course, Go allows more than that, with data races it's possible to reach use after free or other kinds of memory unsafety, but just segfaults don't mark a language memory unsafe.
Go is most emphatically NOT memory-safe. It's trivially easy to corrupt memory in Go when using goroutines. You don't even have to try hard.
This stems from the fact that Go uses fat pointers for interfaces, so they can't be atomically assigned. Built-in maps and slices are also not corruption-safe.
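For example, a sketch of the slice case (purely illustrative; a slice header is a pointer, a length, and a capacity, and those three words aren't assigned atomically):

    package main

    // Illustrative sketch: a racy slice assignment can tear the header,
    // pairing one slice's backing pointer with the other slice's length.

    func main() {
        small := make([]byte, 8)
        big := make([]byte, 1<<20)
        shared := small

        // Writer flips the shared slice between the two headers.
        go func() {
            for {
                shared = small
                shared = big
            }
        }()

        // Reader may observe small's backing pointer combined with big's
        // length, so this index can land far outside either allocation.
        // Whether it faults or silently reads foreign memory depends on
        // the heap layout of a particular run.
        for {
            s := shared
            if len(s) == len(big) {
                _ = s[len(s)-1]
            }
        }
    }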
In contrast, Java does provide this guarantee. You can mutate structures across threads, and you will NOT get data corruption. It can result in null pointer exceptions, infinite loops, but not in corruption.
This is just wrong. Not the claim that you can't blow up from a data race; you certainly can. What's wrong is the implication that any of these properties admits exploitable vulnerabilities, which is the point of the term as it is used today. When you expand the definition the way you are here, you impair the utility of the term.
Serious systems built in memory-unsafe languages yield continual streams of exploitable vulnerabilities; that remains true even when those systems are maintained by the best-resourced security teams in the world. Functionally no Go projects have this property. The empirics are hard to get around.
There were CVEs caused by concurrent map access. Definitely denials of service, and I'm pretty sure it can be used for exploitation.
> Serious systems built in memory-unsafe languages yield continual streams of exploitable vulnerabilities
I'm not saying that Go is as unsafe as C. But it definitely is NOT completely safe. I've seen memory corruptions from improper data sync in my own code.
Go ahead and demonstrate it. Obviously, I'm saying this because nobody has managed to do this in a real Go program. You can contrive vulnerabilities in any language.
It's not like this is a small track record. There is a lot of Go code, a fair bit of it important, and memory corruption exploits in non-FFI Go code are... not a thing. Like at all.
Go is rarely used in contexts where an attacker can groom the heap before doing the attack. The closest one is probably a breakout from an exposed container on a host with a Docker runtime.
One classic problem in all ML is ensuring the benchmark is representative and that the algorithm isn’t overfitting the benchmark.
This remains an open problem for LLMs - we don’t have true AGI benchmarks, and the LLMs are frequently learning the benchmark problems without necessarily getting that much better in the real world. Gemini 3 has been hailed precisely because it’s delivered huge gains across the board that aren’t overfitting to benchmarks.
This could be a solved problem. Come up with problems that aren't online and compare. Later, use LLMs to sort through your problems and classify them from easy to difficult.
Hard to do for an industry benchmark since doing the test in such a mode requires sending the question to the LLM which then basically puts it into a public training set.
This has been tried multiple times by multiple people and it ends up not doing so great over time in terms of retaining immunity to “cheating”.
I’m curious why the dict->frozendict operation is not an O(1) operation when there’s a refcnt of 1 on the dict - that resolves the spooky-action-at-a-distance problem raised and solves the performance concern for the most common usage pattern (build up the dict progressively and convert to a frozendict for concurrent reads).
You cannot return an immutable version. You can return it owned (in which case you can assign/reassign it to a mut variable at any point) or you can take a mut reference and return an immutable reference - but whoever is the owner can almost always access it mutably.
Arg, you’re right. Not sure what I was thinking there. I still think my point stands, because you get the benefits of immutability, but yeah, I didn’t explain it well.
> footprint of 330 × 290 µm2 using the GlobalFoundries 45SPCLO
That’s a 45nm process but the units for the chip size probably should have been 330um? However I’m not well versed enough in the details to parse it out.
I'm very familiar with this process as I use it regularly.
The area is massive. 330 µm × 290 µm are the X and Y dimensions. The area is roughly 0.1 mm². You can see the comparison in Table 1. This is roughly 50,000 times larger than an SRAM in a 45nm process.
This is the problem with photonic circuits. They are massive compared to electronics.
Unfortunately it isn't, as the physical size of the resonators needs to match a given wavelength. So for each wavelength you need a new circuit in parallel.
The size is a fundamental constraint of optical technologies, because it is related to the wavelength of light, which is much bigger than the sizes of semiconductor devices.
This is why modern semiconductor devices no longer use lithography with visible light or even with near ultraviolet, but they must use extreme ultraviolet.
The advantage of such optical devices is speed and low power consumption in the optical device itself (ignoring the power consumption of lasers, which might be shared by many devices).
Such memories have special purposes in various instruments, they are not suitable as computer memories.
To give a feeling: micro-ring resonators are anywhere from 10 to 40 micrometers in diameter. You also need a bunch of other waveguides. The process in the paper uses silicon waveguides, with a 400nm width if I'm not wrong. So optical feature sizes unfortunately aren't scaling down the way CMOS technology does.
Fun fact: photolithography has the same limitation. They use all kinds of tricks (different optical effects to shrink the features), but it is fundamentally limited by the wavelength used. This is why we are seeing a push to lower and lower wavelengths by ASML. That + multiple patterning helps to scale CMOS down.