Changing the links and doing nothing else would be a pretty dumb MITM. You could do a more complex variant that is not so easy to spot (targeting specific networks, injecting malware whilst also modifying the checksum).
The key property of SSL that is useful for tamper resistance is that it’s hard to do silently. A random ASN doing a hijack will cause an observable BGP event and is theoretically preventable via RPKI. If your ISP or similar does it, you can still detect it with CT logs.
Even the issuance is a little better, because LE will test from multiple vantage points. This doesn’t protect against an ISP interception, but it’s better than no protection.
I find it interesting that they bought Astro (https://blog.cloudflare.com/astro-joins-cloudflare/) a month ago, which from my definitely-not-a-frontend-person perspective seems to tackle a similar problem to Next.
If it is so cheap to make something that they recommend using (rather than a proof of concept), why buy Astro? Presumably the acquisition cost more than the token cost of this clone.
One conclusion is that, at the organisational level, it still makes sense to hire the “vision” behind the framework, rather than just clone it. Alternatively, maybe AI has improved that much in 1 month!
I'm happy to be patient with these AI-led porting projects, since they're revealed with a big engagement splash on social media. Could it be durable? Sure, but I doubt anyone is in that much of a rush to migrate to a project built in a week either.
I view it as a long-overdue exit ramp for maintainers of Next.js-based webapps to extricate themselves from its overly-opinionated and unnecessarily-tightly-coupled build tooling. Being stuck on webpack/rspack and unable to leverage vite has been a huge downside to Next.js. It's a symptom of Vercel's economic incentives. This project fixes it in one fell swoop. I predict it hurts Vercel but saves Next.js.
Astro is a different paradigm. Acquiring Astro gives Cloudflare influence over a very valuable class of website, in the same way Vercel has influence over a different class through its ownership of Next.js. Astro is a much better fit for Cloudflare. Next.js is very popular and god awful to run outside of Vercel; Cloudflare aren’t creating a better Next.js, they’re just trying to make it so their customers can move Next.js websites from Vercel to Cloudflare. Realistically, anyone moving their Next.js site to Cloudflare is going to end up migrating to Astro eventually.
Astro isn’t solving the same surface as Next. Astro is great for static sites with some dynamic behavior. The same could be said about Next, depending on how you write your code, but Next can also be used for highly dynamic websites. Using Astro for highly dynamic websites is like jamming a square peg into a round hole.
We use Astro for our internal dev documentation/design system and it’s awesome for that.
I think they just want to steer users/developers to CF products, but maybe not? It is interesting to see the two platforms side by side. I've moved to Svelte; never been a frontend person either, but I'm kind of enjoying it actually.
Astro has "server islands", which rely on a backend server running somewhere. If 90% of the page is static but you need some interactivity for the remaining 10%, then Astro is a good fit, and that's what makes it different from other purely static site generators. Unlike Next.js, it's also not tied to React but is framework-agnostic.
Anyways, that's why it's a good fit for Cloudflare: that backend needs to run somewhere, and Astro has a big enough userbase that Cloudflare can advertise its services to them. Think of it more as a targeted ad than an acquisition made because they're super interested in the technology itself. If that were the case, they could've just forked it instead of acquiring it.
From Astro's perspective, they're (presumably) getting more money than they ever did working on a completely open source tool with zero paywalls, so it's a win-win for both sides that Cloudflare couldn't get from their vibe-coded project nobody's using at the moment.
You can make it so employees don’t have ambient access to data, and require multi-party approval for all actions that require user data. An employee giving away a user password should be treated as a routine risk to design for.
I’m not saying that’s how it actually works, or that this process doesn’t have warts, but the ideal of individual employees not having direct access is not novel.
It doesn’t have to be a compile time constant. An alternative is to prove that when you are calling the function the index is always less than the size of the vector (a dynamic constraint). You may be able to assert this by having a separate function on the vector that returns a constrained value (eg. n < v.len()).
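A rough sketch of what I mean (names are made up for illustration, not from any particular library) - the bound is checked exactly once, and the type records that it was checked:

    // Hypothetical helper: the only way to obtain a ValidIndex is to pass the
    // length check, so the later access doesn't need its own bounds check.
    struct ValidIndex(usize);

    fn validate<T>(v: &[T], i: usize) -> Option<ValidIndex> {
        if i < v.len() { Some(ValidIndex(i)) } else { None }
    }

    fn get_checked<'a, T>(v: &'a [T], idx: &ValidIndex) -> &'a T {
        // Sound only if the same, unmodified vector is used; a real design
        // would tie the index to the vector via lifetimes or a branded type.
        &v[idx.0]
    }

    fn main() {
        let v = vec![10, 20, 30];
        if let Some(idx) = validate(&v, 1) {
            println!("{}", get_checked(&v, &idx)); // prints 20
        }
    }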
Management not having to listen to engineers is the structural problem. How do managers know which of the concerns that engineers bring up are actually relevant? How do engineers know which concerns have real-world consequences (without facing an incredibly high burden of proof)?
Having regulation or standardisation is a step toward producing a common language for expressing these problems and having them taken seriously.
Leadership gets a strong signal - ignoring engineers who surface regulated issues has large costs. The company might be sued, and executives are criminally liable (if discovered to have known about the violation).
Engineering gets the authority and liability to sign off on things - the equivalent of “chartership” in other engineering fields, with the same penalties. This gives them a strong personal reason to surface things.
It’s possible that this is harder for software engineering in its entirety, but there is definitely low-hanging fruit (password storage, security, etc.).
I think it depends on the scope and level of solution I accept as “good”. I agree that often the thinking for the “next step” is too easy architecturally. But I still enjoy thinking about the global optimum or a “perfect system”, even if it’s not immediately feasible, and can spend large amounts of time on this.
And then also there’s all the non-systems stuff - what is actually feasible, what’s most valuable etc. Less “fun”, but still lots of potential for thinking.
I guess my main point is there is still lots to think about even post-LLM, but the real challenge is making it as “fun” or as easily useful as it was pre-LLM.
I think local code architecture was a very accessible domain for “optimality” that is actually tractable, with the joy that comes with it. LLMs are harmful to that, but I don’t think there’s nothing to replace it with.
Team lead manages the overall direction of the team (and is possibly the expert on some portions), but for an individual subsystem a senior engineer might be the expert.
For work coming from outside the team, it’s sort of up to your management chain and team lead to prioritise. But for internally driven work (tech debt reduction, reliability/efficiency improvements etc.) often the senior engineer has a better idea of the priorities for their area of expertise.
Prioritisation between the two is often a bit more collaborative and as a senior engineer you have to justify why thing X is super critical (not just propose that thing X needs to be done).
I view the goal of managers + lead as more about balancing the various things the team could be doing (especially externally), and the goal of a senior engineer as being an input to that process for the specific systems they know most about.
I agree, but I think that input is limited to unopinionated information about the technical impact or user-facing impact of each task.
I don't think it can be said that senior engineers persuade their leaders to take one position or the other, because you can't really argue against a political or financial decision using technical or altruistic arguments, especially when you have no access to the political or financial context in which these decisions are made. In those conversations, "we need to do this for the good of the business" is an unbeatable move.
I guess this is also a matter of organisational policy and how much power individual teams/organisational units have.
I would imagine that mature organisations, without serious short/medium-term existential risk tied to product features, build some push-back mechanisms to defend against the inherent cost of maintaining the existing business (i.e. prioritising tech debt to avoid outages etc.).
In general, it is probably a mix of the two - even if there is a mandate from up high, things are typically arranged so that it can only occupy X% of a team’s capacity in normal operation, with at least some amount “protected” for things the team thinks are important. Of course, this is not the case everywhere and a specific demand might require “all hands on deck”, but to me that seems like a short-sighted decision without an extremely good reason.
In my 30 years in industry -- "we need to do this for the good of the business" has come up maybe a dozen times, tops. Things are generally much more open to debate from different perspectives, including things like feasibility. Once in a blue moon you'll get "GDPR is here... this MUST be done". But for 99% of the work there's a reasonable argument for a range of work to get prioritized.
When working as a senior engineer, I've never been given enough business context to confidently say, for example, "this stakeholder isn't important enough to justify such a tight deadline". Doesn't that leave the business side of things as a mysterious black box? You can't do much more than report "meeting that deadline would create ruinous amounts of technical debt", and then pray that your leader has kept some alternatives open.
It’s possible, but I think it’s typically used for ingress (i.e. same IP but multiple destinations; traffic follows BGP to the closest one).
I don’t think I’ve seen a similar case for anycast egress. Naively, it doesn’t seem like it would work well, because a lot of the internet (e.g. non-anycast geographic load balancing) relies on unique sources, and Cloudflare definitely break other traffic out of their anycast addresses (e.g. they don’t send outbound DNS requests from 1.1.1.1).
So, reading the article, you’re right: it’s technically anycast, but only at the /24 level, to work around BGP limitations. An individual /32 maps to a specific datacenter (so basically unicast). In a hypothetical world where BGP could route /32s, it wouldn’t be anycast.
I wasn’t precise, but what I meant was more akin to a single IP shared by multiple datacenters in different regions (from a BGP perspective), which I don’t think Cloudflare has. This is the general parallel of ingress anycast as well: a single IP that can be routed to multiple destinations (even if, at the BGP level, the entire aggregate is anycast).
It would also not explain the OP, because they are seeing the same source IP from many (presumably) different source locations, whereas with the Cloudflare scheme each location would have a different source IP.
To be clear, they definitely use ingress anycast (i.e. anycast for external traffic coming into Cloudflare). The main question was whether they (meaningfully) used egress anycast (multiple Cloudflare servers in different regions using the same IP to make requests out to the internet).
Since you mentioned DDoS, I’m assuming you are talking about ingress anycast?
It doesn't really matter if they're doing that for this purpose, though. Cloudflare (or any other AS) has no fine control of where your packets to their anycast IPs will actually go. A given server's response packets will only go to one of their PoPs. It's just that which one will depend on server location and network configuration (and could change at any time). Even if multiple of their PoPs tried to fetch forward from the same server, all but one would be unable to maintain a TCP connection without tunneling shenanigans.
Tunneling shenanigans are fine for ACKs, but they're inefficient, so it's pretty unlikely that they are doing this for ingress object traffic.
For POSIX: I leave Bash as the system shell and then shim into Fish only for interactive terminals (rough sketch below). This works surprisingly well, and any POSIX env initialisation will be inherited. I very rarely need to do something complicated enough in the interactive REPL for this to matter, and I can start a Bash subshell if needed.
Fish is by far nicer to script in, and you can keep Fish scripts isolated with shebang lines while still running Bash scripts (as long as they have a proper shebang line). The only thing that’s tricky is `source` and equivalents, but I don’t think I’ve ever needed that in my main shell rather than in a throwaway subshell.
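The shim is roughly the following (a sketch, assuming fish is on $PATH; it goes at the end of ~/.bashrc and the exact guards vary by setup):

    # Hand interactive Bash sessions over to Fish; scripts, `bash -c`, scp etc.
    # stay on plain Bash because those shells are not interactive.
    if [[ $- == *i* ]] && command -v fish >/dev/null 2>&1; then
        exec fish
    fi

One wrinkle: with `exec` in ~/.bashrc, typing `bash` from inside Fish bounces straight back to Fish, so use `bash --norc` (or add an extra guard) when you actually want a plain Bash subshell.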
I often write multi-line commands in my zsh shell, like while-loops. The nice thing is that I can readily put them in a script if needed.
I guess that somewhat breaks with fish: either you use bash -c '...' from the start, or you adopt the fish syntax, which means you need to convert again when you switch to a (bash) script.
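For a concrete (made-up) example of that conversion cost - the same throwaway loop in each syntax:

    # zsh/bash: can be pasted into a script as-is later
    for f in *.log; do
        gzip "$f"
    done

    # fish: same loop, different syntax, so it needs rewriting before it
    # can go into a bash script
    for f in *.log
        gzip $f
    end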
I guess my workflow for this is more fragmented. Either I’m prototyping a script (and edit and test it directly) or I just need a throwaway loop (in which case fish is nicer).
I also don’t trust myself not to screw up anything more complex than running a single command in Bash without the guard rails of something like shellcheck!
I used to do it this way, but then having to mentally switch from the one to the other became too much of a hassle. Since I realized I only had basic needs, zsh with incremental history search and the like was good enough.
I don't care for mile-long prompts displaying everything under the sun, so zsh is plenty fast.