Apps written in an exceptions-based language (Java, JavaScript, PHP, etc.) are really annoying to monitor, as everything off the happy path triggers an 'error'/'fatal' log/metric.
Yes, you can technically work around it with (near) Go-level error verbosity (try/catches everywhere on every call) but I've never seen a team actually do that.
Modern languages that don't throw exceptions for every error, like Rust, Go, and Zig, produce much saner telemetry reports in my experience.
On this note, a login failure is not an error, it's a warning because there is no action to take. It's an expected outcome. Errors should be actionable. WARN should be for things that in aggregate (like login failures) point to an issue.
Login failure is like the most important error you'll track. A login failure isn't necessarily actionable but a spike of thousands of them for sure is. No single system has been more responsible for causing outages in my career than auth. And I get that it's annoying when they appear in your Rollbar but sometimes Login Failed is the only signal you get that something is wrong.
Some 3rd party IdP saying "nope" can be innocuous when it's a few people but a huge problem when it's because they let their cert/application token expire.
And I can already hear the "it should be a metric with an alert" and you're absolutely right. Except that it requires that devs take the positive action of updating the metric on login failures vs doing nothing and letting the exception propagate up. And you just said login failures aren't errors and "bad password" obviously isn't an error so no need to update the metric on that and cause chatty alerts. Except of course that one time a dev accidentally changed the hashing algorithm. Everyone was really bad at typing their password that day for some reason.
> Login failure is like the most important error you'll track. A login failure isn't necessarily actionable but a spike of thousands of them for sure is.
Sounds like you agree with me. Re-read my comment. Errors are actionable individually. Warnings are actionable in aggregate.
You don't have to treat logs and metrics as separate, you can have rules on log counts without emitting a metric.
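To make the distinction concrete, here's a minimal sketch in Go (hypothetical names; `checkLogin` stands in for a real auth call) of logging a login failure at WARN with structured fields, so a log-count rule can alert on the aggregate without each event paging anyone:

```go
package main

import (
	"log/slog"
	"os"
)

// checkLogin is a hypothetical stand-in for a real authentication call.
func checkLogin(user, pass string) bool {
	return pass == "hunter2"
}

func main() {
	logger := slog.New(slog.NewJSONHandler(os.Stdout, nil))

	if !checkLogin("alice", "wrong") {
		// WARN, not ERROR: a single failure is an expected outcome and not
		// individually actionable, but the structured "event" field lets
		// an aggregation rule alert on a spike of thousands.
		logger.Warn("login failed", "event", "login_failure", "user", "alice")
	}
}
```

The point is that severity is chosen for the single event, while alerting happens on the aggregate count.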
Rather than login failures I would monitor login successes. A sharp decrease of successes likely points to some issue, but an increase in login failures might easily be someone trying tons of random credentials on your website (still not ideal, but much harder to act on)
I'm glad the Air now comes standard with 16GB of RAM and 512GB of disk space.
It's not that the M1 with 8/256GB was slow at all, but even browsing the web gets into 12GB of usage, and exhausting the 256GB is fairly easy if you back up your 256GB phone, try to edit a few videos, download enough Gradle/Go/Cargo/Node packages, or install enough 20GB office apps.
Any Apple Silicon machine with 16GB / 512GB of storage (even the M1 series) should have a much longer useful life and avoid wearing out its SSD as rapidly from the constant swapping.
Can I just lean on my cane for a second, and say that the first machine I connected to a network had 256KB RAM, and I considered myself lucky to have so much. My 150 baud modem downloaded text slower than I could read it.
I know how we got to these large numbers. Shit, I helped build the road. It still blows my brains out.
Let's be real, the fact that the Air is good for developers is... honestly, great.
But these devices are meant for home users.
Not a tremendous number of home users have huge Gradle/Go/Cargo/Node package caches, in my experience.
The backup problem is real, I'm surprised Apple doesn't come out with a new time capsule (edit: for phones/tablets)- but I guess they want that sweet iCloud services dollar.
Maybe I'm forgetting all the benefits of Time Capsule, but you can plug any old storage device into a Mac now and turn it into a "Time Machine." It's pretty turnkey at this point. What would a modern Time Capsule offer besides maybe remote backups?
Time "Machine" on macOS continues to work (though it's clearly not as important to Apple as it once was).
The issue is that backing up a phone takes space on your laptop, and the phone must be tethered to do the backup. This means that if you have a 1TiB phone, like I do, you need at least 1TiB of free local disk on your laptop to do a single backup if the phone is anywhere near full.
This is in contrast to how Time Machine works right now on macOS, whereby you have an SMB share (like a 100+TiB NAS) and your laptop just backs itself up when it can.
Such a feature would be pretty killer on iPhones/iPads, or having a "photo server" to offload your photos... idk, but Apple won't do it.
I'm excited about this. The previous generation base model 15" Air was good enough for our company to make it the default computer for everyone. Previously we were giving out base model MBP's. And they're $1000 cheaper.
Today, the MBP is just way too powerful for anything other than specific use cases that need it.
Out of curiosity, what are some good use cases for a MBP now with the MBAs being so powerful?
I can think of things like 4K video editing or 3D rendering but as a software engineer is there anything we really need to spend the extra money on an MBP for?
I'm currently on a M1 Max but am seriously considering switching to an MBA in the next year or two.
The Apple Silicon fanless MBAs are great until you end up in a workload that causes the machine to thermal throttle. I tried to use an M4 MBA as primary development machine for a few months.
A lot of software dev workflows require running some number of VMs and containers; if this is you, the chances of hitting that thermal throttle are not insignificant. When throttling under load occurs, it's like the machine suddenly halves in performance. I was working with a mess of microservices across 10-12 containers, and eventually it just got too frustrating.
I still think these MBAs are superb for most people. As much as I love a solid state fanless design, I will for now continue to buy Macs with active cooling for development work. It’s my default recommendation anytime friends or relatives ask me which computer to buy and I still have one for light personal use.
While I agree that the slowdown is very noticeable once the MBA gets hot to the touch, I joke that it's a feature, encouraging you to take a cooldown break every once in a while :-)
More seriously though I agree it depends on workload. If you've got a dev flow that hits the resources in spikes (like initial builds that then flatten off to incremental) it works pretty well with said occasional breaks but if your setup is just continuously hammering resources it would be less than ideal.
It's the things outside the CPU and GPU that made me choose a base model M5 MacBook Pro. I prefer the larger 14-inch screen for its 120Hz capability and much better brightness and colour. I adore that there are USB-C ports on both sides for charging. The battery's bigger. That's about it.
If nothing else, I’ve learned that for me personally, 14” is the sweet spot for a laptop. It’s just enough over the 13” to be good, without being obnoxiously large.
I also like NanoTexture way more than I thought I would, so there’s that.
> Out of curiosity, what are some good use cases for a MBP now with the MBAs being so powerful?
Local software development (node/TS). When opus-4.6-fast launched, it felt like some of the limiting factor in turnaround time moved from inference to the validation steps, i.e. execute tests, run linter, etc. Granted, that's with endpoint management slowing down I/O, and hopefully tsgo and some eslint replacement will speed things up significantly over there.
It's a personal thing how much you care, but the speakers on the MBPs are pretty amazing. The Air sounds fine, even good for a notebook, but the MBPs are the best laptop speakers I have ever heard.
Yes, 10-15 years ago the MBP felt more prosumer to me, but nowadays MBPs have such monstrous performance and price points, like true luxury items or enterprise devices, that I'm happy to see good base specs on the MBA. The base spec on that device matters a lot. Also, Apple will probably release a cheaper MacBook this week, and if the rumor holds, it'll be good enough for most consumers.
Have you used local AI models on a 32 GB MBP? I ask because I'm looking to finally upgrade my M1 Air, which I love, but which only has 16 GB RAM. I'm trying to figure out if I just want to bump to 32 GB with the M5 MBAir or make the jump all the way to 64 GB with the low-end M5 MBP. I love my M1 Air and I don't typically tax the CPU much, but I'm starting to look at running local models and for that I'd like faster and bigger. But that said, I don't want to overpay. Memory is my main issue right now. Anyway, if you have experience, I'd love to hear it. Which MBP, stats of the system, which AI model, how fast did it go, etc?
I’d like to do agentic coding first, but then chatbot and classification as lower priorities. I don’t really care about image gen.
Also, if you're only able to run 35B models in 32GB, it seems like I'd definitely want at least 64GB for the newer, larger models (Qwen has a 122B model, right?). My theory is that models are only getting larger, though perhaps also more efficient.
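As a rough back-of-the-envelope check (weights only; KV cache and runtime overhead add several more GB on top), the sizing math looks like this:

```go
package main

import "fmt"

// estimateGB approximates the RAM needed just to hold a model's weights:
// parameter count times bits per weight, converted to gigabytes. This
// deliberately ignores KV cache and runtime overhead.
func estimateGB(params, bitsPerWeight float64) float64 {
	return params * bitsPerWeight / 8 / 1e9
}

func main() {
	// A 122B model at 4-bit quantization is ~61 GB of weights alone,
	// already uncomfortable on a 64 GB machine once overhead and the
	// OS are counted.
	fmt.Printf("122B @ 4-bit: ~%.0f GB\n", estimateGB(122e9, 4))
	// A 35B model at 4-bit (~17.5 GB) fits in 32 GB with room to spare.
	fmt.Printf("35B  @ 4-bit: ~%.1f GB\n", estimateGB(35e9, 4))
}
```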
I have noticed something similar. Among the computer science undergrads and grad students I work with, the Air is much more common than among the premeds and med students, many of whom have MBPs (and whom I'm presuming do not need that much power).
I think it's because compsci people know what they need to a greater degree than other majors. It's easier to upsell a computer to someone who doesn't really know about computers.
It could also be possible that compsci kids have a powerful desktop at home, or are more savvy with university cloud computing, for any edge cases or computationally expensive tasks.
It’s possible that their departments give them computer recommendations that exceed what they actually need.
I’m not sure why this happens or who formulates these recommendations, but I’ve seen it before with students in fields that just don’t do much heavy duty computation or video editing being told to buy laptops with top-of-the-line specs.
I think there is a tendency to simply give in and buy bigger hardware if something doesn't work. With friends and family, I sometimes feel like I have to talk them down from pulling the trigger on really expensive (relative to the tasks they're doing) hardware, simply because performance is often abysmal due to the fact that they've trashed their OS with malware and bloatware and whatnot and can't understand any of that.
It's the same at work, to some degree. Our in-house ERP software performs like kicking a sack of rocks down a hill. I can't count how often I've had to show devs that the hardware is actually idle and that they're mostly derailing themselves with DB table locks, GC issues, and whatnot. If I weren't pushing back, we probably would have bought the biggest VMs just to let them sit idle.
This is also just the direction that AI is taking us, even for people who wouldn't describe themselves as traditional developers.
Setting aside on-device LLMs, one needs RAM and disk space just for the multiple isolated Claude Cowork etc. VMs that will increasingly become part of people's everyday lives.
And when it's easier than ever to create an Electron app, everything's going to have an Electron app, with all the RAM/disk overhead that entails. And of course, nobody's asking their agents "optimize the resource usage of the app I made last week" - they're moving on to the next feature or project.
I suppose the demoscene will always be there, for those of us who increasingly need a refuge from ram-flation.
Where does it stop? Of course having a bit more room does not hurt, but my view is that if 256GB was not enough for you, 512GB wouldn't be either.
To me it's mostly about learning to manage RAM and storage space on your machine. A lot of stuff does not need to be hoarded on the machine. Move infrequently accessed data to an external drive. Be ruthless about purging stuff you no longer really need. Refuse to run apps that consume tens of GBs of RAM on a whim (looking at you, Firefox; I've been impressed with how efficient and stable the Helium browser has been for me). If you are a developer, engineer for efficient use of RAM and storage.
Like I said, a 16GB RAM and 512GB storage minimum is nice, but if the fundamental issues that contribute to massive and wasteful use of resources on our machines are not addressed, nothing will be enough.
I don't know, but macOS is making it ever more difficult to manage storage, with lots of random things under "macOS" pushing ~40GB, and "System Data" collecting a crapload of unrelated things like podcast [1] downloads, with no easy way to purge.
[1] I spent too much time hunting down ~250GB of missing disk space, and it turned out to be the Podcasts app's cache, while the app itself reported no downloads. I fully expected this to be managed automatically, but was getting out-of-disk-space warnings. It's a mess.
I think 512GB is a fair minimum for a computer these days, but I agree with your "Where does it stop?" sentiment when it comes to RAM.
If browsing the web takes 12GB of RAM, at what point do we stop chasing after more RAM and instead start demanding better performance and resource usage out of the web?
My first Apple Silicon machine was an 8GB/512GB M1 MacBook Air. I rarely bumped up against the RAM, but I was pretty happy using between 300-400GB on the SSD, so I really think the 512GB was plenty. I have a 1TB machine now, and typically still use less than 512GB...but now and then I've found a good use for nearly all of that terabyte.
You're right, learning to manage storage space is important, but you need to have some storage space to manage first. 256GB is the bottom of the barrel.
From observing family members, 256GB is usually fine, but small enough that normal computer use can accidentally fill it up. 512GB provides plenty of headroom for them. 512GB is tight for more involved usage that’s not serious media creation, and 1TB is comfortable. 1TB seems like the realistic minimum for heavier media creation.
I agree with the sentiment, but in general it's not worth my time to try to purge. I used to do that back in 2005. Heck, in the 1990s, I'd buy a new hard drive every year. But these days, I find that a hard drive lasts me for 5 years if I plan well.
I have a 16GB/512GB Air M1 (2020) because I knew I would need the extra space but this really makes me happy. A new Air, higher headroom, M5, is awesome. It’s not a MBP but it’s good enough for 95% of the daily stuff. If you aren’t running local agents this would be amazing.
This is why I jumped from PHP to Go, then why I jumped from Go to Rust.
Go is the most batteries-included language I've ever used. Instant compile times mean I can bind tests to ctrl/cmd+s and run them every time I save the file. It's more performant (way less memory, similar CPU time) than C# or Java (and certainly all the scripting languages) and contains a massive stdlib for anything you could want to do. It's what scripting languages should have been. Anyone can read it, just like Python.
Rust takes the last 20% I couldn't get in a GC language and removes it. Sure, its syntax doesn't make sense to an outsider and you end up with 3rd-party packages for a lot of things, but you can't beat its performance and safety. It removes a whole lot of tests, since those situations just aren't possible.
If Rust scares you use Go. If Go scares you use Rust.
Sorry, but it's honestly just how a lot of our journeys went. We started on scripting languages like PHP/Ruby/Lua (self-taught) or Java/VB/C#/Python (college) and then slowly expanded to other languages as we realized we were being held back by our own tools. Each new language/relationship makes you kick yourself for putting up with things so long.
I understand that but there's a time and a place. Rust has nothing to do with this. 100% of the people on this site understand that this challenge can be done faster in C, or Rust, or whatever. This is a PHP challenge. Perhaps we could discuss the actual submission as opposed to immediately derailing it.
> I understand that but there's a time and a place.
Dude, this is a website where a bunch of developer nerds congregate and talk shop. They're fine, this is the same kind of shit that's been happening across these kinds of sites for decades.
I don't know about that... I like Rust a lot... but I also like a lot of things about C# or TS/JS... I'll still reach for TS first (Deno) for most things, including shell scripting.
Can't speak for go... but for the handful of languages I've thrown at Claude Code, I'd say it's doing the best job with Rust. Maybe the Rust examples in the wild are just better compared to say C#, but I've had a much smoother time of it with Rust than anything else. TS has been decent though.
I am not smart enough to use Rust, so take this with a grain of salt, but its syntax just makes me go crazy. Go, on the other hand, is a breath of fresh air. Unless you really need that additional 20% improvement that Rust provides, I think Go should be the default for most projects between the two.
I hear you, advanced generics (for complex unions and such) with TypeScript and Rust are honestly unreadable. It's code you spend a day getting right and then no one touches it.
I'm just glad modern languages stopped throwing and catching exceptions at random levels in their call chain. PHP, JavaScript and Java can (not always) have unreadable error handling paths not to mention hardly augmenting the error with any useful information and you're left relying on the stack trace to try to piece together what happened.
I really wish more people wanted screens that looked as good as their cellphone.
Bright, sharp text, great color. We've had the great Apple Studio Display for years now; it's about time others came along to fix some of its shortcomings, like the 27" size, 60Hz refresh, and lack of HDMI ports for use with other systems.
So many of us have to stare at a screen for hours every day and having one that reduces strain on my eyes is well worth $1-3k if they'd just make them.
The company I work at gives all new developers a pair of 1080p displays that could have come right out of 2010.
It amazes me, and it’s so sad. They have no idea what they’re missing. I’m sure high PPI would pay off fast in eye strain. And it’s not like monitors need replacement yearly. Tons of time to recoup that small cost.
I’m not arguing for $2k 37” monitors, just better than $200 ones.
Even $200 will already buy a 4K 27" (LG), which isn't even bad. I swear by HiDPI as well, but my work is the same: 1080p displays, and really bad contrast screens too. Definitely not TN (they're not that bad) and not VA (those tend to have way better contrast than IPS). Probably just bottom-barrel IPS.
Just about every company does something like this.
At one point in my career, I just started buying my own monitors and bringing them into work.
I remember when ~19 or 20" was the norm, and I bought a dell 30" 2560x1600 monitor. Best $1400 I ever spent, used it for years and years.
(I still have it although I retired it a few years back because it uses something called dual-link DVI which is not easily supported anymore)
I think if you are an engineer, you should dive headlong into what you are. Be proactive and get the tools you need. Don't wait for some management signoff that never comes while you suffer daily, and are worse at your job.
I work for a white-shoe law firm in Boston. I and most of my peers have total compensation approaching $500k.
And we have 1280x1024 monitors from the 00s, and we're not allowed to have anything better, even out of our own pockets, because "that's what we use here".
If you reach the point where you want to replace your in house IT with a company that will give you good tech and good tech support, let me know. I know a few people.
I don't think people care all that much about phones. It's just that phones are power-constrained, so manufacturers wanted to move to OLEDs to save on backlight; and because the displays are small, the tech was easier to roll out there than on 6k 32-inch monitors.
But premium displays exist. IPS displays on higher-end laptops, such as ThinkPads, are great - we're talking stuff like 14" 3840x2160, 100% Adobe RGB. The main problem is just that people want to buy truly gigantic panels on the cheap, and there are trade-offs that come with that. But do you really need 2x32" to code?
The other thing about phones is that you have your old phone with you when you buy a new one, so without even really meaning to you're probably doing a side by side direct comparison and improvements to display technology are a much bigger sales motivator.
This is the insight that sold a billion iPhones. They were obsessed with what happens when you’re at the store, and you don’t need a new phone, and you pick one up, and…
Outside Thinkpads IPS is basically the cheap/default option on laptops, with OLED being the premium choice. With Thinkpads TN without sRGB coverage is the cheap/default option, with IPS being the premium choice.
A fast color e-ink display would be possible, but development would be very expensive for an unknown market. It would be a perfect anti-eye-strain second monitor, though.
Is there a shortlist of top of the line utilitarian monitors that you can just buy, without researching or being some niche gamer?* Something similar to LG G-series TV's. Seems like Apple Studio, Dell UltraSharp are on that list. Any others?
*Struggling for words, but I'm looking more for the expedient solution rather than the "craft beer" or "audiophile" solution.
Keep in mind that normal OLEDs are quite bad for typical development tasks: lots of text with high contrast. Here is an example that would be unbearable for me: [1]. For text, IPS rules so far. For video and games, definitely OLED.
If you truly don't want to research, use rtings' "best monitor for X" articles, find your budget, and buy that one. If you feel the need to compare further, pop the model numbers into your favourite LLM.
I have 27" 5K monitors at home since I WFH. One reason I don't really want to RTO is because these monitors aren't standard yet even if they have been out for more than a decade now (and my FAANG employer won't spring for the good stuff). That and my mechanical keyboard would never work in an open office :P.
Isn't the main difference glossy vs matte? With glossy you usually get bright, great color, and that's what you get on cellphones and MacBooks as well. For some reason matte is still the preference when it comes to monitors, and you can't escape their muted color palette.
I have trouble making out details on my 45" UWQHD (3440x1440) display... so I don't see much point... maybe slightly easier-to-read typefaces... I am already zooming in 25% most of the time.
On the plus side, I can comfortably fit my editor on half the screen and my browser on the other half.
Or... it could be my general vision-loss issues, in that I can hardly see...
I can't make out a native pixel as it is. I understand that if it had 2x the PPI it might handle rendering better, and that may help some with visibility... I had a first-generation MBP with a Retina display, and it was amazing. But that's not my issue here. Not to mention the trouble of having my work laptop push effectively 4x the pixels, if such a mythical beast existed.
The size is so that I can actually work with a single screen, editor on one side, browser on the other... it's almost like 2x 3:2 displays in one. For a workflow it's pretty good... I don't game much, but it's nice for that and content viewing as well. I had considered using side by side displays, like 2x 27" in portrait mode... but settled on this, which is working surprisingly well for now.
Yeah PPD is more useful, although for ultrawide I’ve also heard it’s common to have it closer than regular viewing distance, so that you can glance at side screens / information
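Pixels per degree can be estimated from panel density and viewing distance; here's a small sketch (hypothetical numbers; ~60 PPD is a commonly cited "retina" threshold):

```go
package main

import (
	"fmt"
	"math"
)

// pixelsPerDegree estimates how many pixels span one degree of vision
// for a panel of the given density (PPI) viewed from the given distance
// (inches). One degree subtends 2*d*tan(0.5 deg) inches at distance d.
func pixelsPerDegree(ppi, distanceInches float64) float64 {
	return 2 * distanceInches * math.Tan(0.5*math.Pi/180) * ppi
}

func main() {
	// A 3440x1440 45" panel works out to roughly 83 PPI; at a typical
	// 28" viewing distance it sits well below the ~60 PPD threshold.
	fmt.Printf("83 PPI at 28\": %.0f PPD\n", pixelsPerDegree(83, 28))
	// Pulling an ultrawide closer, as people often do, lowers PPD further.
	fmt.Printf("83 PPI at 22\": %.0f PPD\n", pixelsPerDegree(83, 22))
}
```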
I have a Studio Display and would also love if it had a much higher refresh rate, but only because I play WoW on it. Why isn't 60hz enough for programming? I don't think I notice the refresh rate at all when not playing a video game or watching videos.
I'm personally not very sensitive to refresh rates, I only really notice it in video games and it wasn't enough to keep me from replacing my 120hz primary monitor with the Studio Display. I was just curious about why you prefer higher refresh rate for programming, thanks for answering!
It seems rather silly to assume all people universally have the same needs, desires, and expenses. We don't live in the world of The Giver. I can accept that firefighters need a truck much more advanced and expensive than I ever will. It would be odd to compare that expense to how many pizzas I order each year.
> So many of us have to stare at a screen for hours every day and having one that reduces strain on my eyes is well worth $1-3k if they'd just make them.
I'm 53 y/o and didn't have glasses until 52. And at 53 I only use them sporadically. For example atm I'm typing this without my glasses. I can still work at my computer without glasses.
And yet I spent 10 hours a day in front of computer screens since I was a kid nearly every day of my life (don't worry, I did my share of MX bike, skateboarding, bicycling, tennis, etc.).
You know the biggest eye-relief for me? Not using anti-aliased font. No matter the DPI. Crisp, sharp, pixel-perfect font only for me. Zero AA.
So a 110 / 120 ppi screen is perfect for me.
That doesn't hold if you do use anti-aliased fonts (and most people do); I understand the appeal of smaller pixels for more subtle AA.
But yup: pixel perfect programming font, no anti-aliasing.
A 38" ultra-wide, curved monitor. Same monitor since 2017, and it's my dream. My wife, OTOH, prefers a three-monitor setup.
So: people have different preferences and that is fine. To each his own bad tastes.
Anecdata, but I played games at 4K on a 4GHz Haswell (2013) + 1080 Ti (2017). Definitely faster at 2K, but 4K was serviceable. It's probably less true now that I'm 1+ years removed from that hardware, but 4K gameplay is surprisingly accessible on modest hardware IMO.
I currently have a 4k monitor (+nv4070-super) and it does handle some games fine at 4k but for others I need to use 2k w/ upscaling. Depends on the game.
So, good news: there are a fair number of monitors coming soon that are super high resolution and offer a "dual mode", a lower resolution with a higher refresh rate. They are pretty cool.
That's not really true. I tried out 5K, which you'd reasonably expect to be quite heavy, but honestly with DLSS it's super viable. If you get the gaming versions of these displays, they also have dual modes, and in that mode a 6K display is less heavy to run than a 4K one, and a 5K display becomes 1440p.
One of the things I wish more people talked about isn't just the language or the syntax, but the ecosystem. Programming isn't just typing, it's dealing with dependencies and trying to wire everything up so you can have tests, benchmarks, code-generation and build scripts all working together well.
When I use modern languages like Go or Rust, I don't have to deal with all the stuff that was bolted onto other languages over the past 20 years; things like Unicode, unit testing, linting, and concurrency are part of the language and standard tooling rather than afterthoughts.
I use Go where the team knows Java, Ruby or TypeScript but needs performance with low memory overhead. All the normal stuff is right there in the stdlib like JSON parsing, ECC / RSA encryption, or Image generation. You can write a working REST API with zero dependencies. Not to mention so far all Go programs I've ever seen still compile fine unlike those Python or Ruby projects where everything is broken because it's been 8mo.
However, I'd pick Rust when the team isn't scared of learning to program for real.
I don't like that for fairly basic things one has to quickly reach for crates. I suppose it allows the best implementation to emerge and not be concerned with a breaking change to the language itself.
I also don't like how difficult it is to cross-compile from Linux to macOS. zig cc exists, but quickly runs into a situation where a linker flag is unsupported. The rust-lang/libc also (apparently?) insists on adding a flag related to iconv for macOS even though it's apparently not even used?
But writing Rust is fun. You kind of don't need to worry so much about trivialities because the compiler is so strict and can focus on the interesting stuff.
Yeah, I've never seen an all-in-one language like Go before. Not just a huge stdlib where you don't have to vet the authors on github to see if you'll be okay using their package, but also a huge amount of utility built in like benchmarking, testing, multiple platforms, profiling, formatting, and race-detection to name a few. I'm sad they still allow null, but they got a lot right when it comes to the tools.
Everything is literally built-in. It's the perfect scripting language replacement with the fast compile time and tiny language spec (Java 900 pages vs Go 130 pages) making it easy to fully train C-family devs into it within a couple weeks.
Oh Go with Rust's result/none setup and maybe better consts like in the article would be great.
Too bad null/nil is here to stay, since there's no Go 2.
Or maybe they would? IIRC Go 1.22 technically had a breaking change to for-loop variable scoping. If only it were possible to have migration tooling. I guess it's too large a change.
Technically, with generics, you could get a Result that is almost as good as Rust, but it is unidiomatic and awkward to write:
    type Result[T, E any] struct {
        Val   T
        Err   E
        IsErr bool
    }

    type Payload string

    type ProgError struct {
        Prog   string
        Code   int
        Reason string
    }

    func DoStuff(x int) Result[Payload, ProgError] {
        if x > 8 {
            return Result[Payload, ProgError]{Err: ProgError{Prog: "ls", Code: 1, Reason: "no directory"}, IsErr: true}
        }
        return Result[Payload, ProgError]{Val: "hello"}
    }
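The awkwardness really shows at the call site: every caller has to remember to check `IsErr`, and nothing stops you from reading `Val` on the error path. A self-contained miniature (simpler types than above, same shape):

```go
package main

import "fmt"

type Result[T, E any] struct {
	Val   T
	Err   E
	IsErr bool
}

// Divide returns a Result instead of the idiomatic (int, error) pair.
func Divide(a, b int) Result[int, string] {
	if b == 0 {
		return Result[int, string]{Err: "division by zero", IsErr: true}
	}
	return Result[int, string]{Val: a / b}
}

func main() {
	// No `?` operator and no exhaustive match: the compiler won't
	// complain if you skip this check and read r.Val anyway.
	r := Divide(10, 0)
	if r.IsErr {
		fmt.Println("error:", r.Err)
		return
	}
	fmt.Println("result:", r.Val)
}
```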
This was not the case for a long time. It actually seems fairly recent that you get Native AOT and trimming to meaningfully reduce build sizes and build times. Otherwise, every binary ships with a giant runtime library.
Even back in .NET Core 3.1 days C# had more than competitive performance profile with Go, and _much_ better multi-core scaling at allocation-heavy workloads.
It is also disingenuous to say that whatever it ships with is huge.
The common misconception by the industry that AOT is optimal and desired in server workloads is unfortunate. The deployment model (single slim binary vs many files vs host-dependent) is completely unrelated to whether the application utilizes JIT or AOT. Even with carefully gathered profile, Go produces much worse compiler output for something as trivial as hashmap lookup in comparison to .NET (or JVM for that matter).
There’s cross-rs which simplifies things. But the main problem is less linker flags being unsupported and more cross compiling C dependencies somewhere in the dependency chain and that’s always a nightmare, not really anything to do with Rust (Go should have similar difficulties with cross compilation).
Buuut with Go one generally tends to reach for dependencies less, so you're less likely to run into this, and cgo is not Go ;) https://go-proverbs.github.io
For cross-compiling, I actually ended up filtering out the -liconv flag with a bash wrapper and compiling a custom zig cc with support for exported_symbols_list patched in; things appear to work.
Should look into cross-rs I suppose. Hope it's not one of those "download macos sdk from this unofficial source" setups that people seem to do. Apparently not allowed by Apple.
Cross compiling to Apple products from non Apple products is going to run into the same hurdle around SDK setup as any other. There exists documentation but it’s probably not the easiest task. This limitation though applies equally to any library that depends on system C headers and/or system libraries.
> Go should have similar difficulties with cross compilation
It doesn't. Go code can be cross compiled for any OS and any CPU arch from any supported system. And it comes out of the box that way. You don't have to go out of your way to install or configure anything extra to do it.
We’re not talking about Go here. This is true for Rust. The issue is building against C libraries and APIs for a different OS. Unless Go has done some magic I’m unaware of, it’s the same problem; cgo just isn’t super popular in the Go community.
The crates.io ecosystem for Rust... is like the amazing girlfriend that you go head over heels for, make her your wife, and then you meet the in-laws ... but it's too late now.
Unlimited access to a bunch of third party code is great as you're getting started.
Until it isn't and you're swimming in a fishing net full of code you didn't write and dependencies you do not want. Everything you touch eventually brings all of tokio along with it. And 3 or 4 different versions of random number generators or base64 utilities, etc. etc.
I can't speak for other Rust programmers but I can speak for myself.
I obviously enjoy programming Rust and I like many of the choices it made, but I am well aware of the tradeoffs Rust has made and I understand why other languages chose not to make them. Nor do I think Rust functions equally well in every single use case.
I imagine most Rust users think like this, but unfortunately there seems to be a vocal minority who hold very dogmatic views of programming who have shaped how most people view the Rust community.
This is such a big deal and I wish more people talked about it in these types of blog posts.
I used to be a Python programmer and there were two things that destroyed every project:
- managing Python dependencies
- inability to reason about, or enforce, the input and output types of functions; in Python any function can accept a value of any type and return a value of any type.
These issues are not too bad if it's a small project and you're the sole developer. But as projects get larger and require multiple developers, it turns into a mess quickly.
Go solved all these issues and makes deployment so much easier. In all the projects I've done, I estimate that more than half have zero dependencies outside of the standard library. And unlike Python, you don't have to "install" Go or its libraries on the server you plan to run your program on. A fully static, self-contained executable binary with zero external files needed is amazing, and the fact that you can cross compile for any OS + CPU arch out of the box on any supported system is a miracle.
The issues described by the original post seem like small potatoes compared to the benefits I've gotten by shifting from Python over to Go
Restrict data collection? It would kill all startups and firmly entrench a terrible provider monopoly who can comply.
Have the government own data collection? Yeah, I don't even know where to start with all the problems this would cause.
Ignore it and let companies keep abusing customers? Nope.
Stop letting class-action lawsuits slap the company's wrists and then give $0.16 payouts to everyone?
What exactly do we do without killing innovation, building moats around incumbents, giving all the power to politicians who will just do what the lobbyists ask (statistically), or accepting things as is?
Does it need to be hosted on your servers? Could you provide something to the customers where they host the data or their local doctors office does it?
Can you delete it after the shortest possible period of using it, potentially? Do you keep data after someone stops being a customer or stops actively using the tech?
Record retention is covered by a complex set of overlapping regulations and contracts. They are dependent on much more than date of service. M&A activity, interstate operations, subsequent changes in patient mental status, etc can all cause the horizon to change well after the last encounter.
As all the comments in this thread suggest, the cost of having an extra record, even an extra breached record, is low. The cost of failing to produce a required medical record is high.
Put this together with dropping storage prices, razor-thin margins, and IT estates made out of thousands of specialized point solutions cobbled together with every integration pattern ever invented, and you get a de facto retention of infinity paired with a de jure obligation of could-be-anything-tomorrow.
I'm not trying to be rude, but it's clear you have no idea what you're talking about. The medical world is heavily regulated and there are things we must do and things we can't do. If you go to your doctor with a problem, would you want your doctor to have the least amount of information possible, or your entire medical history? The average person has no business hosting their own sensitive data like banking and medical information. If you think fraud and hacks are bad now, what do you think would happen if your parents were forced to store their own data? Or if a doctor who can barely use an EMR was responsible for the security of your medical data? I would learn a lot more about the area before making suggestions.
Having seen this world up close, the absolute last place you ever want your medical data to be is on the Windows Server in the closet of your local doctors office. The public cloud account of a Silicon Valley type company that hires reasonably competent people is Fort Knox by comparison.
Yeah, but a local private practice is a fairly small target. No one is going to break into my house just to steal my medical records, for example.
This could also be drastically improved by the government spearheading a FOSS project for medical data management (archival, backup, etc). A single offering from the US federal government would have a massive return on investment in terms of impact per dollar spent.
Maybe the DOGE staff could finally be put to good use.
You seem to be confused about how this works. Attackers use automated scripts to locate vulnerable systems. Small local private practices are always targeted because everything is targeted. The notion of the US federal government offering an online data backup service is ludicrous, and wouldn't have even prevented the breach in this article.
> Attackers use automated scripts to locate vulnerable systems.
I'm aware. I thought we were talking about something a bit higher effort than that.
> online data backup service
That isn't what I said. I suggested federally backed FOSS tooling for the specific use case. If nothing else, that would ensure that low-effort scanners came up empty, by providing purpose-built software hardened against the expected attack vectors. Since it seems we're worrying about the potential for broader system misconfiguration, they could even provide a blessed OS image.
The breach in the article has nothing to do with what we're talking about. That was a case of shadow IT messing up. There's not much you can do about that.
I just registered CVEs in several platforms in a related industry, the founders of whom likely all asked themselves a similar question. And yet, it's the wrong question. The right one is, "Does this company need to exist?" I don't know you or your company. Maybe it's great. But many startups are born thinking there's a technological answer to a question that requires a social/political one. And instead of fixing the problem, the same founders use their newfound wealth to lobby to entrench the problem that justifies their company's existence, rather than resolves the need for it to exist in the first place. "How do you propose we service our customers without their medical data?" Fix your fucked healthcare system.
Otherwise it would suggest you think the problem is they didn't ask? When was the last time you saw a customer read a terms of service? Or better yet reject a product because of said terms once they hit that part of the customer journey?
The issue isn't about asking, it's that (take your pick of reasons) no one ever says no. The asking is thus pro forma and irrelevant.
We apply crippling fines on companies and executives that let these breaches happen.
Yes, some breaches (actual hack attacks) are unavoidable, so you don't slap a fine on every breach. But the vast majority of "breaches" are pure negligence.
> Restrict data collection? It would kill all startups and firmly entrench a terrible provider monopoly who can comply.
That's a terrible argument for allowing our data to be sprayed everywhere. How about regulations with teeth that prohibit "dragons" from hoarding data about us? I do not care what the impact is on the "economy". That ship sailed with the current government in the US.
Or, both more and less likely, cut us in on the revenue. That will at least help some of the time we have to waste doing a bunch of work every time some company "loses" our data.
I'm tired of subsidizing the wealth and capital class. Pay us for holding our data or make our data toxic.
Obviously my health provider and my bank need my data. But no one else does. And if my bank or health provider need to share my data with a third party it should be anonymized and tokenized.
None of this is hard, we simply lack will (and most consumers, like voters are pretty ignorant).
The solution is to anonymize all data at the source, i.e. use a unique randomized ID as the key instead of someone's name/SSN. Then the medical provider would store the UID->name mapping in a separate, easily secured (and ideally air-gapped) system, for the few times it was necessary to use.
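A minimal sketch of that scheme (the field names and the patient are made up): clinical records carry only a random UID, while the UID-to-identity mapping lives in a separate store.

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

// newUID returns a random 128-bit identifier, hex-encoded.
func newUID() string {
	b := make([]byte, 16)
	if _, err := rand.Read(b); err != nil {
		panic(err)
	}
	return hex.EncodeToString(b)
}

// Record is a clinical record keyed only by UID -- no name or SSN.
type Record struct {
	UID       string
	Diagnosis string
}

func main() {
	// The UID->identity mapping would live in a separate, isolated
	// system and be consulted only on the rare occasions it's needed.
	identities := map[string]string{}

	uid := newUID()
	identities[uid] = "Jane Doe" // hypothetical patient

	rec := Record{UID: uid, Diagnosis: "J45.909"}
	fmt.Println(rec.UID == uid, identities[rec.UID]) // true Jane Doe
}
```

Note this only helps if the mapping store is genuinely harder to reach than the records themselves, which is the operational hard part.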
33 bits is all that's required to individually identify any person on Earth.
If you'd like to extend that to the roughly 100 billion people who have ever lived, that extends to 37 bits, still a trivially small amount.
Every bit[1] of leaked data cuts that set in half, and simply anonymising IDs does virtually nothing by itself to obscure identity. Critical medical and billing data such as date of birth and postal code are by themselves sufficient to narrow things down remarkably, let alone a specific set of diagnoses, procedures, providers, and medications. Much as browser fingerprints are often unique or nearly so without any universal identifier, so are medical histories.
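The arithmetic behind those figures is easy to check (population numbers are rough estimates):

```go
package main

import (
	"fmt"
	"math"
)

// bitsToIdentify returns the number of bits needed to give every
// member of a population of size n a unique label: ceil(log2(n)).
func bitsToIdentify(n float64) int {
	return int(math.Ceil(math.Log2(n)))
}

func main() {
	fmt.Println(bitsToIdentify(8.2e9)) // ~8.2 billion alive today -> 33 bits

	// Each leaked bit of correlated data cuts the candidate set in half,
	// so 33 known bits narrow 8.2 billion people down to a single person.
	candidates := 8.2e9
	for i := 0; i < 33; i++ {
		candidates /= 2
	}
	fmt.Printf("%.2f\n", candidates) // below 1: fully identified
}
```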
I'm personally aware of diagnostic and procedure codes being used to identify "anonymised" patients across multiple datasets dating to the early 1990s, and of research into de-anonymisation in Australia as of the mid-to-late 1990s. Australia publishes anonymisation and privacy guidelines, e.g.:
"Data De‑identification in Australia: Essential Compliance Guide"
It's not sufficient merely to substitute an alternative primary key; you must also fuzz the data, including birthdates, addresses, diagnostic and procedure codes, treatment dates, etc., all of which both reduces the clinical value of the data and is difficult to do sufficiently well.
________________________________
Notes:
1. In the "binary digit" sense, not in the colloquial "small increment" sense.
What a silly idea. That would completely prevent federally mandated interoperability APIs from working. While privacy breaches are obviously a problem, most consumers don't want care quality and coordination harmed just for the sake of a minor security improvement.
100% state bot. I wouldn't even think it was just France, other state actors would love to see GrapheneOS go down as well. How dare citizens have technology we can't access.
I know they say that your programming language isn't the bottleneck, but I remember sitting there being frustrated as a young dev that I couldn't parse faster in the languages I was using when I learned about Go.
It took a few more years before I actually got around to learning it and I have to say I've never picked up a language so quickly. (Which makes sense, it's got the smallest language spec of any of them)
I'm sure there are plenty of reasons this is wrong, but it feels like Go gets me 80% of the way to Rust with 20% of the effort.
The nice thing about Go is that you can learn "all of it" in a reasonable amount of time: gotchas, concurrency stuff, everything. There is something very comforting about knowing the entire spec of a language.
I'm convinced no more than a handful of humans understand all of C# or C++, and inevitably you'll come across some obscure thing and have to context switch out of reading code to learn whatever the fuck a "partial method" or "generic delegate" means, and then keep reading that codebase if you still have momentum left.
> The nice thing about Go is that you can learn "all of it" in a reasonable amount of time
This always feels like one of those “taste” things that some programmers tend to like on a personal level but has almost no evidence that it leads to more real-world success vs any other language.
Like, people get real work done every day at scale with C# and C++. And Java, and Ruby, and Rust, and JavaScript. And every other language that programmers castigate as being huge and bloated.
I’m not saying it’s wrong to have a preference for smaller languages, I just haven’t seen anything in my career to indicate that smaller languages outperform when it comes to faster delivery or fewer bugs.
As an aside, I’d even go so far as to say that the main problem with C++ is not that it has so many features in number, but that its features interact with each other in unpredictable ways. Said another way, it’s not the number of nodes in the graph, but the number of edges and the manner of those edges.
Just an anecdote and not necessarily generalizable, but I can at least give one example:
I'm in academia doing ML research where, for all intents and purposes, we work exclusively in Python. We had a massive CSV dataset which required sorting, filtering, and other data transformations. Without getting into details, we had to rerun the entire process when new data came in roughly every week. Even using every trick to speed up the Python code, it took around 3 days.
I got so annoyed by it that I decided to rewrite it in a compiled language. Since it had been a few years since I'd written any C/C++ (only for a single class in undergrad, and I remember very little of it), I decided to give Go a try.
I was able to learn enough of the language and write up a simple program to do the data processing in less than a few hours, which reduced the time it took from 3+ days to less than 2 hours.
I unfortunately haven't had a chance or a need to write any more Go since then. I'm sure other compiled, GC languages (e.g., Nim) would've been just as productive or performant, but I know that C/C++ would've taken me much longer to figure out and would've been much harder to read/understand for the others that work with me who pretty much only know Python. I'm fairly certain that if any of them needed to add to the program, they'd be able to do so without wasting more than a day to do so.
Of course, but the dataset was mostly strings that needed to be cross-referenced with GIS data. Tried every library under the sun. The greatest speed up I got was using polars to process the mostly-string CSVs, but didn't help much. With that said, I think polars was also just released when we were working with that dataset and I'm sure there's been a lot of performance improvements since then.
These only help if you can move the hot loop into some compiled code in those libraries. There's a lot of cases where this isn't possible and at that point there's just no way to make python fast (basically, as soon as you have a for loop in python that runs over every point in your dataset, you've lost).
> I’m not saying it’s wrong to have a preference for smaller languages, I just haven’t seen anything in my career to indicate that smaller languages outperform when it comes to faster delivery or less bugs.
I can imagine myself grappling with a language feature unobvious to me and eventually getting distracted. Sure, there are a lot of things unobvious to me, but Go is not one of them, and that simplicity influenced the whole environment.
Or, when choosing the right language feature, I could end up weighing too many choices and still failing to get it right from the language-correctness perspective (making the code scalable, nice-looking, uniform, playing well with other features, etc.).
An example not related to Go: bash and rc [1]. Understanding 16 pages of Duff’s rc manual was enough for me to start writing scripts faster than I did in bash. It did push me to ease my concerns about program correctness, though, which I welcomed. The whole process became more enjoyable without bashisms getting in the way.
Maybe it’s hard to measure the exact benefit but it should exist.
I think Go is a great language when hiring. If you're hiring for C++, you'll be wary of someone who only knows JavaScript as they have a steep learning curve ahead. But learning Go is very quick when you already know another programming language.
I agree that empirical data in programming is difficult, but i’ve used many of those languages personally, so I can say for myself at least that I’m far more productive in Go than any of those other languages.
> As an aside, I’d even go so far as to say that the main problem with C++ is not that it has so many features in number, but that its features interact with each other in unpredictable ways. Said another way, it’s not the number of nodes in the graph, but the number of edges and the manner of those edges.
I think those problems are related. The more features you have, the more difficult it becomes to avoid strange, surprising interactions. It’s like a pharmacist working with a patient who is taking a whole cocktail of prescriptions; it becomes a combinatorial problem to avoid harmful reactions.
> Like, people get real work done every day at scale with C# and C++.
That would be me. I _like_ C#, but there are elements to that language that I _never_ work with on a daily basis, it's just way too large of a language.
I've been writing go professionally for about ten years, and with go I regularly find myself saying "this is pretty boring", followed by "but that's a good thing" because I'm pretty sure that I won't do anything in a go program that would cause the other team members much trouble if I were to get run over by a bus or die of boredom.
In contrast writing C++ feels like solving an endless series of puzzles, and there is a constant temptation to do Something Really Clever.
> I'm pretty sure that I won't do anything in a go program that would cause the other team members much trouble
Alas there are plenty of people who do[0] - for some reason Go takes architecture-astronaut brain and whacks it up to 11, and god help you if you have one or more of those on your team.
[0] flashbacks to the interface calling an interface calling an interface calling an interface I dealt with last year - NONE OF WHICH WERE NEEDED because it was a bloody hardcoded value in the end.
My cardinal rule in Go is just don't use interfaces unless you really, really need to and there's no other way. If you're using interfaces you're probably up to no good and writing Java-ish code in Go. (usually the right reason to use interfaces is exportability)
Yes, not even for testing. Use monkey-patching instead.
> My cardinal rule in Go is just don't use interfaces unless you really, really need to and there's no other way.
They do make some sense for swappable doodahs - like buffers / strings / filehandles you can write to - but those tend to be in the lower levels (libraries) rather than application code.
Go is okay. I don't hate it but I certainly don't love it.
The packaging story is better than c++ or python but that's not saying much, the way it handles private repos is a colossal pain, and the fact that originally you had to have everything under one particular blessed directory and modules were an afterthought sure speaks volumes about the critical thinking (or lack thereof) that went into the design.
When Go was new, having better package management than Python and C++ was saying a lot. I’m sure Go wasn’t the first, but there weren’t many mainstream languages that didn’t make you learn some imperative DSL just to add dependencies.
I picked up Go precisely in 2012 because $GOPATH (as bad as it was) was infinitely better than CMake, Gradle, Autotools, pip, etc. It was dead simple to do basic dependency management and get an executable binary out. In any other mainstream language on offer at the time, you had to learn an entire programming language just to script your meta build system before you could even begin writing code, and that build system programming language was often more complex than Go.
The fact that virtualenv exists at all should be viewed by the python community as a source of profound shame.
The idea that it's natural and accepted that we just have python v3.11, 3.12, 3.13 etc all coexisting, each with their own incompatible package ecosystems, and in use on an ad-hoc, per-directory basis just seems fundamentally insane to me.
It's still pretty mid and still missing basic things like sets.
But mid is not all that bad and Go has a compelling developer experience that's hard to beat. They just made some unfortunate choices at the beginning that will always hold it back.
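On the missing sets point, the usual workaround is a map with empty-struct values, which costs nothing per entry beyond the key:

```go
package main

import "fmt"

// toSet builds a string set: the keys are the members, and the
// empty-struct values occupy zero bytes each.
func toSet(items []string) map[string]struct{} {
	s := make(map[string]struct{}, len(items))
	for _, it := range items {
		s[it] = struct{}{}
	}
	return s
}

func main() {
	seen := toSet([]string{"go", "rust", "go", "zig"})
	_, ok := seen["rust"]      // membership test
	fmt.Println(len(seen), ok) // 3 true
}
```

It works fine, but having to hand-roll union/intersection every time is exactly the kind of thing people mean by "mid".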
The tradeoff with that language simplicity is that there's a whole lot of gotchas that come with Go. It makes things look simpler than they actually are.
For Rust vs C++, I'd say it'll be much easier to have a complete understanding of Rust. C++ is an immensely complex language, with a lot of feature interactions.
C# is actually fairly complex. I'm not sure if it's quite at the same level as Rust, but I wouldn't say it's that far behind in difficulty for complete understanding.
Rust managed to learn a lot from C++ and other languages' mistakes.
So while it has quite a bit of essential complexity (inherent in the design space it operates: zero overhead low-level language with memory safety), I believe it fares overall better.
Like no matter the design, a language wouldn't need 10 different kinds of initializer syntaxes, yet C++ has at least that many.
For Rust I'd expect the implementation to be the real beast, versus the language itself. But not sure how it compares to C++ implementation complexity.
There's a different question too, that I think is more important (for any language): how much of the language do you need to know in order to use it effectively? As another poster mentioned, the issue with C++ might not be the breadth of features, but rather how they interact in non-obvious ways.
ECMAScript is an order of magnitude more complicated than Go by virtually every measure - length of language spec, ease of parsing, number of context-sensitive keywords and operators, etc.
Sorry, hard disagree. Try to understand what `this` means in JS in its entirety and you'll agree it's by no stretch of the imagination a simple language. It's mind-bending, hence _The Good Parts_.
While I might not think that JS is a good language (for some definition of a good language), to me the provided spec does feel pretty small, considering that it's a language that has to be specified to the dot and that the spec contains the standard library as well.
It has some strange or weirdly specified features (ASI? HTML-like Comments?) and unusual features (prototype-based inheritance? a dynamically-bounded this?), but IMO it's a small language.
Shrugging it off as just being large because it contains the "standard library" ignores that many JS language features necessarily use native objects like symbols or promises, which can't be entirely implemented in just JavaScript alone, so they are intrinsic rather than being standard library components, akin to Go builtins rather than the standard library. In fact, in actual environments, the browser and/or Node.JS provide the actual standard library, including things like fetch, sockets, compression codecs, etc. Even ignoring almost all of those bits though, the spec is absolutely enormous, because JavaScript has:
- Regular expressions - not just in the "standard library" but in the syntax.
- An entire module system with granular imports and exports
- Three different ways to declare variables, two of which create temporal dead zones
- Classes with inheritance, including private properties
- Dynamic properties (getters and setters)
- Exception handling
- Two different types of closures/first class functions, with different binding rules
- Async/await
- Variable length "bigint" integers
- Template strings
- Tagged template literals
- Sparse arrays
- for in/for of/iterators
- for await/async iterators
- The with statement
- Runtime reflection
- Labeled statements
- A lot of operators, including bitwise operators and two sets of equality operators with different semantics
- Runtime code evaluation with eval/Function constructor
And honestly it's only scratching the surface, especially of modern ECMAScript.
A language spec is necessarily long. The JS language spec, though, is so catastrophically long that it is a bit hard to load on a low end machine or a mobile web browser. It's on another planet.
The Javascript world hides its complexity outside the core language, though. JS itself isn't so weird (though as always see the "Wat?" video), but the incantations required to type and read the actual code are pretty wild.
By the time you understand all of typescript, your templating environment of choice, and especially the increasingly arcane build complexity of the npm world, you've put in hours comparable to what you'd have spent learning C# or Java for sure (probably more). Still easier than C++ or Rust though.
I’ve been using Python since 2008, and I don’t feel like I understand very much of it at all, but after just a couple of years of using Go in a hobby capacity I felt I knew it very well.
Well that's good, since Go was specifically designed for juniors.
From Rob Pike himself: "It must be familiar, roughly C-like. Programmers working at Google are early in their careers and are most familiar with procedural languages, particularly from the C family. The need to get programmers productive quickly in a new language means that the language cannot be too radical."
However, the main design goal was to reduce build times at Google. This is why unused dependencies are a compile time error.
> There are two reasons for having no warnings. First, if it’s worth complaining about, it’s worth fixing in the code. (Conversely, if it’s not worth fixing, it’s not worth mentioning.) Second, having the compiler generate warnings encourages the implementation to warn about weak cases that can make compilation noisy, masking real errors that should be fixed.
I believe this was a mistake (one that sadly Zig also follows). In practice there are too many things that wouldn't make sense as compiler errors, so you need to run a linter anyway. When you need to comment out or remove some code temporarily, it won't even build, and then you have to remove a chain of unused vars/imports until it lets you; it's just annoying.
Meanwhile, unlinted Go programs are full of little bugs, e.g. unchecked errors or err-var misuse. If only there were warnings...
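The standard escape hatch while iterating is the blank identifier, which satisfies the compiler without deleting the code (`expensiveCall` is a made-up stand-in):

```go
package main

import (
	// Blank import: keeps the dependency compiling while all of its
	// uses are temporarily commented out below.
	_ "encoding/json"
	"fmt"
)

func expensiveCall() int { return 42 }

func main() {
	result := expensiveCall()
	// out, _ := json.Marshal(result) // temporarily disabled
	// fmt.Println(string(out))
	_ = result // blank assignment: silences "declared and not used"
	fmt.Println("still compiles")
}
```

Of course, littering `_ =` through the code is exactly the kind of busywork the no-warnings policy was supposed to avoid.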
Yeah, but just going back to warnings would be a regression.
I believe the correct approach is to offer two build modes: release and debug.
Debug compiles super fast and allows unused variables etc, but the resulting binary runs super slowly, maybe with extra safety checks too, like the race detector.
Release is the default, is strict and runs fast.
That way you can mess about in development all you want, but need to clean up before releasing. It would also take the pressure off having release builds compile fast, allowing for more optimisation passes.
> Debug compiles super fast and allows unused variables etc, but the resulting binary runs super slowly, maybe with extra safety checks too, like the race detector.
At least in the golang / unused-vars at Google case, allowing unused vars is explicitly one of the things that makes compilation slower.
In that case it's not "faster compilation as in less optimization". It's "faster compilation as in don't have to chase down and potentially compile more parts of a 5,000,000,000 line codebase because an unused var isn't bringing in a dependency that gets immediately dropped on the floor".
Accidentally pulling in an unused dependency during development is, if not a purely hypothetical scenario, at least an extreme edge case. During debugging, most of the time you already built those 5,000,000,000 lines while trying to reproduce a problem on the original version of the code. Since that didn't help, you now want to try commenting out one function call. Beep! Unused var.
Right, I meant that the binary should run slowly on purpose, so that people don't end up defaulting to just using the debug build. A nice way of doing so without just putting `sleep()`s everywhere would be to enable extra safety checks.
I feel like people always take the designed for juniors thing the wrong way by implying that beneficial (to general software engineering) features or ideas were left out as a trade off to make the language easier to learn at the cost of what the language could be to a senior. I don't think the go designers see these as opposing trade offs.
What's good for the junior can be good for the senior. I think PL design has leaned a little too hard toward valuing complexity and abstract 'purity', while Go was a break away from that which has proved successful but controversial.
But it also has the advantage that you can read a lot of code from other devs without twisting your eyes sideways, because nobody gets to have their own idiosyncratic style.
"This is Go. You write it this way. Not that way. Write it this way and everyone can understand it."
I wish I was better at writing Go, because I'm in the middle of writing a massive and complex project in Go with a lot of difficult network stuff. But you know what they say: if you want to eat a whole cow, you just have to pick an end and start eating.
Yep ... it's like people never read the main devs' motivations. The ability for people to read each other's code was a main point.
I don't know, but for me a lot of the attacks on Go come from non-Go developers, VERY often Rust devs. When I started Go, it was always Rust devs in /r/programming pushing their agenda of Rust as the next best thing, the whole "rewrite everything in Rust"...
About 10 years ago I learned Rust, and these days I can barely read the code anymore with the tons of new syntax that got added. It's like they forgot the lessons from C++...
> I don't know, but for me a lot of the attacks on Go come from non-Go developers, VERY often Rust devs.
I see it as a bit like Python and Perl. I used to use both but ended up mostly using Python. They're different languages, for sure, but they work in similar ways and have similar goals. One isn't "better" than the other. You hardly ever see Perl now, I guess in the same way there's a lot of technology that used to be everywhere but is now mostly gone.
I wanted to pick a not-C language to write a thing to deal with a complex but well-documented protocol (GD92, and we'll see how many people here know what that is) that only has proprietary software implementing it, and I asked if Go or Rust would be a good fit. Someone told me that Go is great for concurrent programming particularly to do with networks, and Rust is also great for concurrent processing and takes type safety very seriously. Well then, I guess I want to pick apart network packets where I need to play fast and loose with ints and strings a bit, so maybe I'll use Go and tread carefully. A year later, I have a functional prototype, maybe close to MVP, written in Go (and a bit of Lua, because why not).
The Go folks seem to be a lot more fun to be around than the Rust folks.
But at least they're nothing like the Ruby on Rails folks.
Just because it was a design goal doesn't mean it succeeded ;)
From Russ Cox this time: "Q. What language do you think Go is trying to displace? ... One of the surprises for me has been the variety of languages that new Go programmers used to use. When we launched, we were trying to explain Go to C++ programmers, but many of the programmers Go has attracted have come from more dynamic languages like Python or Ruby."
It's interesting that I've also heard the same from people involved in Rust. Expecting more interest from C++ programmers and being surprised by the numbers of Ruby/Python programmers interested.
I wonder if it's that Ruby/Python programmers were interested in using these kinds of languages but were being pushed away by C/C++.
The people writing C++ either don't need much convincing to switch because they see the value, or are unlikely to give it up anytime soon because they don't see anything Rust does as useful to them; there's very little middle ground. People from higher-level languages, on the other hand, see in Rust a way to break into a space they would otherwise not attempt because it would take too long to reach proficiency. The hard part of Rust is trying to simultaneously have hard-to-misuse APIs and no additional performance penalty (however small). If you relax either of those goals (is it really a problem if you call that method through a v-table?), then Rust becomes much easier to write. I think GC Rust would already be a nice language that I'd love to use, like a less convoluted Scala; it just wouldn't have fit in a free square that ensured a niche for it to exist and grow, and would likely have died on the vine.
I think on average C++ programmers are more interested in Rust than in Go. But C programmers are on average probably not interested in either. I do agree that the accessible nature of the two languages (or at least perception thereof) compared to C and C++ is probably why there's more people coming from higher-level languages interested in the benefits of static typing and better performance.
I write a lot of Go, a bit of Rust, and Zig is slowly creeping in.
To add to the above comment, a lot of what Go does encourages readability... Yes, it feels pedantic at moments (error handling), but those cultural and stylistic elements that seem painful to write make reading better.
Portable binaries are a blessing; fast compile times and the choices made around 3rd-party libraries and vendoring are all just icing on the cake.
That 80 percent feeling is more than just the language as written; it's all the things that come along with it...
Error handling is objectively terrible in Go, and the explicitness of the always-repeating pattern just makes humans pay less attention to potentially problematic lines and otherwise increases the noise-to-signal ratio.
Nail guns are great because they're instant and consistent. You point, you shoot, and you've unimpeachably bonded two bits of wood.
For non-trivial tasks, AI is neither of those. Anything you do with AI needs to be carefully reviewed to correct hallucinations and incorporate it into your mental model of the codebase. You point, you shoot, and that's just the first 10-20% of the effort you need to move past this piece of code. Some people like this tradeoff, and fair enough, but that's nothing like a nailgun.
For trivial tasks, AI is barely worth the effort of prompting. If I really hated typing `if err != nil { return nil, fmt.Errorf("doing x: %w", err) }` so much, I'd make it an editor snippet or macro.
> Nail guns are great because they're instant and consistent. You point, you shoot, and you've unimpeachably bonded two bits of wood.
You missed it.
If I give a random person off the street a nail gun, circular saw and a stack of wood are they going to do a better job building something than a carpenter with a hammer and hand saw?
> Anything you do with AI needs to be carefully reviewed
Yes, and so does a JR engineer, so do your peers, so do you. Are you not doing code reviews?
> If I give a random person off the street a nail gun, circular saw and a stack of wood
If this is meant to be an analogy for AI, it doesn't make sense. We've seen what happens when random people off the street try to vibe-code applications. They consistently get hacked.
> Yes, and so does a JR engineer
Any junior dev who consistently wrote code like an AI model and did not improve with feedback would get fired.
You are responsible for the AI code you check in. It's your reputation on the line. If people felt the need to assume that much responsibility for all code they review, they'd insist on writing it themselves instead.
> there is a large contingent of the Go community that has a rather strong reaction to AI/ML/LLM generated code at any level.
This Go community that you speak of isn't bothered by writing the boilerplate themselves in the first place, though. For everyone else the LLMs provide.
> Which makes sense, it's got the smallest language spec of any of them
I think Go is fairly small, too, but “size of spec” is not always a good measure for that. Some specs are very tight, others fairly loose, and tightness makes specs larger (example: Swift’s language reference doesn’t even claim to define the full language. https://docs.swift.org/swift-book/documentation/the-swift-pr...: “The grammar described here is intended to help you understand the language in more detail, rather than to allow you to directly implement a parser or compiler.”)
Thanks! I never considered that a 21st-century language designed for “power of two bits per word” hardware would keep that feature from the 1970s, so I never looked at that production.
Are there other modern languages that still have that?
I don't understand the framing you have here, of Rust being an asymptote of language capability. It isn't. It's its own set of tradeoffs. In 2025, it would not make much sense to write a browser in Go. But there are a lot of network services it doesn't really make sense to write in Rust: you give up a lot (colored functions, the borrow checker) to avoid GC and goroutines.
Rust is great. One of the stupidest things in modern programming practice is the slapfight between these two language communities.
Language can be a bottleneck if there's something huge missing from it that you need, like how many of them didn't have first-class support for cooperative multitasking, or maybe you need it to be compiled, or not compiled, or GC vs. no GC. Go started out with solid greenthreading, while AFAIK no major language/runtime had anything comparable at the time (Java now does, supposedly).
The thing people tend to overvalue is the little syntax differences, like how Scala wanted to be a nicer Java, or even ObjC vs Swift before the latter got async/await.
I'll be the one to nitpick, but Scala never intended to be a nicer Java. It was and still is an academic exercise in compiler and language theory. Also, judging by Kotlin's decent strides, "little syntax differences" get you a long way on a competent VM/runtime/stdlib.
Kotlin's important feature is the cooperative multitasking. Java code has been mangled all these years to work around not having that. I don't think many would justify the switch to Kotlin otherwise.
Similar story for me. I was looking for a language that just got out of the way. One that didn’t require me to learn a full impenetrable DSL just to add a few dependencies, and which could easily produce an artifact that I could share around without needing to make sure the target machine had all the right dependencies installed.
It really is a lovely language and ecosystem of tools. I think it does show its limitations fairly quickly when you want to build something a bit complex, though. Really wish they would have added sum types.
Funny thing is, that also makes it easier on LLMs/AI... Tried a project a while ago, creating the same thing in both Rust and Go. Go's version worked from the start, while Rust's version needed a lot of LLM interventions and fixes to get it to compile.
We shall not talk about compile time / resource usage differences ;)
I mean, Rust is nice, but compared to when I learned it like 10 years ago, these days it looks a lot more like it took too much of a cue from C++.
While Go syntax is still the same as it was 10 years ago, with barely anything new. That may anger some people, but even so...
The only thing I'd love to see is reduced executable sizes, because pushing large executables over a dinky upload line for remote testing is not fun.
> I'm sure there are plenty of reasons this is wrong, but it feels like Go gets me 80% of the way to Rust with 20% of the effort.
I don't see it. Can you say what 80% you feel like you're getting?
The type system doesn't feel anything alike. I guess the syntax is alike in the sense that Go is a semicolon language, and Rust, though actually basically an ML, deliberately dresses as a semicolon language, but otherwise not really. They're both relatively modern, so you get decent tooling out of the box.
But this feels a bit like somebody telling me that this new pizza restaurant does a cheese pizza that's 80% similar to the Duck Ho Fun from that little place near the extremely tacky student bar. It's not that Duck Ho Fun has nothing in common with cheese pizza (they're both best, in my opinion, cooked very quickly over high heat), but there's not a lot of commonality.
> I don't see it. Can you say what 80% you feel like you're getting?
I read it as “80% of the way to Rust levels of reliability and performance.” That doesn’t mean that the type system or syntax is at all similar, but that you get some of the same benefits.
I might say that, “C gets you 80% of the way to assembly with 20% of the effort.” From context, you could make a reasonable guess that I’m talking about performance.
Yes, for me I've always pushed the limits of what kinds of memory and cpu usage I can get out of languages. NLP, text conversion, video encoding, image rendering, etc...
Rust beats Go in performance... but nothing like how far behind Java, C#, or scripting languages (Python, Ruby, TypeScript, etc.) are, from all the work I've done with them. With Go I get most of the performance of Rust with very little effort, plus a fully contained stdlib, test suite, package manager, formatter, etc.
Rust is the most defect free language I have ever had the pleasure of working with. It's a language where you can almost be certain that if it compiles and if you wrote tests, you'll have no runtime bugs.
I can only think of two production bugs I've written in Rust this year. Minor bugs. And I write a lot of Rust.
The language has very intentional design around error handling: Result<T,E>, Option<T>, match, if let, functional predicates, mapping, `?`, etc.
Go, on the other hand, has nil and extremely exhausting boilerplate error checking.
Honestly, Go has been one of my worst languages outside of Python, Ruby, and JavaScript for error introduction. It's a total pain in the ass to handle errors and exceptional behavior. And this leads to making mistakes and stupid gotchas.
I'm so glad newer languages are picking up on and copying Rust's design choices from day one. It's a godsend to be done with null and exceptions.
I really want a fast, memory managed, statically typed scripting language somewhere between Rust and Go that's fast to compile like Go, but designed in a safe way like Rust. I need it for my smaller tasks and scripting. Swift is kind of nice, but it's too Apple centric and hard to use outside of Apple platforms.
I'm honestly totally content to keep using Rust in a wide variety of problem domains. It's an S-tier language.
Borgo could be that language for you. It compiles down to Go, and uses constructs like Option<T> instead of nil, Result<T,E> instead of multiple return values, etc. https://github.com/borgo-lang/borgo
> I really want a fast, memory managed, statically typed scripting language somewhere between Rust and Go that's fast to compile like Go, but designed in a safe way like Rust
OCaml is pretty much that, with a very direct relationship with Rust, so it will even feel familiar.
This actually isn't correct, because Go is the only language that makes you think about errors at every step. If you just ignore them and pass them up, like exceptions or Maybe, you're basically exchanging handling errors for treating the whole thing as pass/fail.
If you write actual error checking like Go's in Rust (or Java, or any other language), then Go is often less noisy.
It's just two very different approaches to error handling that the dev community is split on. Here's a pretty good explanation from a rust dev: https://www.youtube.com/watch?v=YZhwOWvoR3I
1) For one-off scripts, and 2) if you ignore memory.
You can make almost anything faster if you give it more memory to store data in more optimized formats. That doesn't make the language itself faster.
Part of the problem is that Java in the real world requires an unreasonable number of classes and 3rd party libraries. Even for basic stuff like JSON marshaling. The Java stdlib is just not very useful.
Between these two points, all my production Java systems easily use 8x more memory and still barely match the performance of my Go systems.
I genuinely can’t think of anything the Java standard library is missing, apart from a json parser which is being added.
It’s your preference to prefer one over the other. I prefer Java’s standard library because at least it has a generic Set data structure in it, and C#’s standard library does have a JSON parser.
I don’t think discussions about what is in the standard library really refutes anything about Go being within the same performance profile though.
Memory is the most common tradeoff engineers make for better performance. You can trivially do so yourself with java, feel free to cut down the heap size and Java's GC will happily chug along 10-100 times as often without a second thought, they are beasts. The important metric is that Java's GC will be able to keep up with most workloads, and it won't needlessly block user threads from doing their work.
Also, not running the GC as often makes Java use surprisingly small amounts of energy.
As for the stdlib, Go's is certainly impressive, but come on, I wouldn't even say that in the general case Java's standard library is smaller. It just so happens that Go was developed with the web in mind almost exclusively, while Java has a wider scope. Nonetheless, the Java standard library is certainly among the best in richness.
Java’s collectors vastly outperform Go’s. Look at the Debian binary tree benchmarks [0]. Go just uses less memory because it’s AOT compiled from the start and Java’s strategy up until recently is to never return memory to the OS. Java programs are typically on servers where it’s the only application running.
I guess the 80% would be a reasonably performant compiled binary with easily managed dependencies? And the extra 20% would be the additional performance and peace of mind provided by the strictness of the Rust compiler.
Single binary deployment was a big deal when Go was young; that might be worth a few percent. Also: automatically avoiding entire categories of potential vulnerabilities due to language-level design choices and features. Not compile times though ;)
Apps written in an exceptions language (Java, JavaScript, PHP, etc.) are really annoying to monitor, as everything that isn't the happy path triggers an 'error'/'fatal' log/metric.
Yes, you can technically work around it with (near) Go-level error verbosity (try/catches everywhere on every call), but I've never seen a team actually do that.
Modern languages that don't throw exceptions for every error, like Rust, Go, and Zig, produce much saner telemetry reports in my experience.
On this note, a login failure is not an error, it's a warning because there is no action to take. It's an expected outcome. Errors should be actionable. WARN should be for things that in aggregate (like login failures) point to an issue.
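A minimal sketch of that policy in Go, using the standard log/slog package; `recordLoginFailure` and the in-process counter are hypothetical stand-ins for a real metrics client:

```go
package main

import (
	"log/slog"
	"sync/atomic"
)

// loginFailures is a hypothetical in-process counter; in production
// this would be a metric exported to your monitoring backend.
var loginFailures atomic.Int64

// recordLoginFailure logs at WARN (an expected outcome, not actionable
// on its own) and bumps the counter so an aggregate alert can fire
// when failures spike.
func recordLoginFailure(user string) {
	loginFailures.Add(1)
	slog.Warn("login failed", "user", user)
}

func main() {
	recordLoginFailure("alice")
	recordLoginFailure("bob")
	slog.Info("login failures so far", "count", loginFailures.Load())
}
```

A single WARN line stays quiet in the logs, while the aggregate counter is what an alert watches for spikes.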