Thanks - I should have done an image search on the whole image. Instead, I clipped out the flag from the astronaut's shoulder and searched that, which is how I found out it was the Ohio flag. I just assumed it was an AI-generated image by the author and not a common meme template.
Take the C4 training dataset, for example. The uncompressed, uncleaned dataset is ~6TB and contains an exhaustive English-language scrape of the public internet from 2019. The cleaned (still uncompressed) dataset is significantly less than 1TB.
I could go on, but I think it's already pretty obvious that 1TB is more than enough storage to represent a significant portion of the internet.
Thanks for the feedback. Checking my notes, it looks like you didn't enter your information in our payments system, invoiced us via email 8 months after your contract started/ended (in violation of our contract agreement), lost your hourly log book, and had some discrepancies on work location, in addition to not completing the MVP for the project we aligned on. I have your last email pulled up and have replied to you to resolve this. I apologize; it shouldn't have taken an HN post for me to sort this out.
Interests: systems programming, compilers, 2D & 3D graphics, performance
--------
Hello, my name is Jesse.
I describe myself as a competent generalist and a lifelong learner. I'm frequently working on something I've never done before. I've worked at every level of the stack, from cycle-shaving hot loops to the frontend of large web applications. My most recent professional experience has been about a year at a semiconductor startup, where I wrote prototypes for an ISA, a compiler, a cycle simulator, and a visual debugger.
In terms of hard technical skills, I can quickly become productive in nearly any language, and at any level of the stack. I'm comfortable working in heavily multithreaded/async soft-realtime environments where performance is a key acceptance criterion. I have a good understanding of modern hardware architectures, including GPUs, from main memory to registers and instruction pipelines. I have working knowledge of interpreters, dynamic language runtimes, and garbage collectors. I'm convinced I can learn nearly anything (albeit some things more quickly than others), given an appropriate problem domain to apply it to.
In terms of business value, I can take hand-wavy visions of new products and turn them into working prototypes, quickly. I'm comfortable refining those raw materials and delivering real value to customers, internal or external. I have a track record of successfully identifying 80/20 solutions and love the feeling of making tools that make people's lives better.
In my personal life, I enjoy travelling, surfing, climbing, backcountry skiing, snowmobiling, pirates, and attending raves. I like the phrase "have strong opinions, weakly held".
If I was born to do one thing in this world, it's to program computers.
With the general lack of scientific rigour and accountability, and the totally borked incentive structure in academia, I'm really not sure if I'd trust whitepapers any more than I'd trust YouTube videos at this point.
> Our advice against using exceptions is not predicated on philosophical or moral grounds, but practical ones. ... Things would probably be different if we had to do it all over again from scratch.
They are clearly not against them per se. It simply wasn't practical for them to introduce exceptions into their codebase.
And I think a lot of the cons of exceptions are handled in languages like F#, etc. If f calls g, which calls h, and h throws an exception, the compiler will require you to deal with it somehow in g (either handle it or explicitly propagate it).
My issue with exceptions is also practical. If they didn't introduce significant stability issues, I'd have no problem. As it stands, it's impossible to write robust software that makes use of C++ exceptions.
> the compiler will require you to deal with it somehow in g
Yes, but that's not a dichotomy. Languages like Java have functions declare what exceptions they throw, and the caller must either catch them or also declare that it throws them. This gets cumbersome quickly, but I believe it's for the best to encode exceptions in the type system.
In low-level systems software, which is a primary use case for C++, exceptions can introduce nasty edge cases that are difficult to detect and reason about. The benefits are too small to justify the costs to reliability, robustness, and maintainability.
Exceptions in high-level languages avoid many of these issues by virtue of being much further away from the metal. It is a misfeature for a systems language. C++ was originally used for a lot of high-level application code where exceptions might make sense, but you would never use C++ for that kind of code today.
> In low-level systems software, which is a primary use case for C++
I don't think this is true. There is A LOT of C++ in GUI applications, video games, all kinds of utilities, scientific computing, and more. In fact, I find that the transition from native GUI toolkits in C/C++ to "modern" alternatives has led to a general regression in UI performance. Desktop programs performed better 20 years ago, when everything was written in Win32, Qt, GTK, and others, and people did not rely on bloated web toolkits for desktop development. Even today you can really feel how much snappier and more robust "old school" programs are relative to Electron and whatnot.
To clarify, you think that low-level systems software is only a secondary use case for C++? The part you quoted does not make claims about whether there are other primary use cases, just that low-level systems software is one of them, so it's not clear why it being useful elsewhere is a rebuttal of that.
I don't think I agree with that. To me, how low-level something is makes more sense as a spectrum than a binary, and there are certainly other things I'd consider to fit into the lower end besides OS kernels. Over the summer I contracted for a company making satellites, working on software that facilitated interactions between various software and hardware components and had to run a 20 ms loop for processing everything, with delays cascading through those other components and causing systemic issues. This was all in userland (the OS stuff was managed by another team), but a tracing garbage collector would have been pretty much a non-starter due to the potential to miss the expected timing of a loop iteration.
You could handwave this objection away by saying it's not "really" low level or that "nothing" was an exaggeration, but at that point it seems like we'd be back to the original question of why it's wrong to say that this is a primary use case for C++.
The model of communicating errors with exceptions is really nice. The implementation in C++ ABIs is not done as well as it could be, and that results in a large sad-path performance loss.
> That's true, except for languages that ensure you can't simply forget that something deep down the stack can throw an exception.
Sometimes it is not safe to unwind the stack. The language is not relevant. Not everything that touches your address space is your code or your process.
Exception handlers must have logic and infrastructure to detect these unsafe conditions and then rewrite the control flow to avoid the unsafety. This both adds overhead to the non-exceptional happy path and makes the code flow significantly uglier.
The underlying cause still exists when you don't use exceptions but the code for reasoning about it is highly localized and usually has no overhead because you already have the necessary context to deal with it cleanly.
If you forget to handle a C++ exception you get a clean crash. If you forget to handle a C error return you get undefined behavior and probably an exploit.
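To make that concrete, here's a minimal sketch (both parse functions are hypothetical). The C-style caller silently drops the error return and reads an indeterminate value; the C++ caller that forgets a try/catch gets std::terminate, which is loud and clean:

```cpp
#include <cstdio>
#include <cstdlib>
#include <stdexcept>

// Hypothetical C-style API: returns -1 on failure and leaves *out untouched.
int parse_port_c(const char* s, int* out) {
    int v = std::atoi(s);
    if (v <= 0 || v > 65535) return -1;
    *out = v;
    return 0;
}

// Hypothetical C++-style API: throws on bad input instead.
int parse_port_cpp(const char* s) {
    int v = std::atoi(s);
    if (v <= 0 || v > 65535) throw std::invalid_argument("bad port");
    return v;
}

int main() {
    int port;                        // uninitialized
    parse_port_c("bogus", &port);    // error return silently ignored...
    std::printf("%d\n", port);       // ...so this reads an indeterminate value: UB

    std::printf("%d\n", parse_port_cpp("bogus"));  // uncaught throw -> std::terminate:
                                                   // a clean, loud crash
}
```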
Rust is better here (by a lot), but you can still ignore the return value. It's just a warning to do so, and warnings are easily ignored / disabled. It also litters your code with branches, so not ideal for either I-cache or performance.
The ultimate ideal for rare errors is almost certainly some form of exception system, but I don't think any language has quite perfected it.
Only when you don't need the Ok value from the Result (in other words, only when you have Result<(), E>). You can't get any other Ok(T) out of thin air in the Err case. You must handle (exclude) the Err case in order to unwrap the T and proceed with it.
> It also litters your code with branches, so not ideal for either I-cache or performance.
Anyhow erases the type of the error, but still indicates the possibility of some error and forces you to handle it. Functionality-wise, it's very similar to `throws Exception` in Java. Read my post.
Poor man's checked exceptions. That's important. From the `?` you always see which functions can fail and cause an early return. You can confidently refactor and use local reasoning based on the function signature. The compiler catches your mistakes when you call a fallible function from a supposedly infallible function, and so on. Unchecked exceptions don't give you any of that. Java's checked exceptions get close and you can use `throws Exception` very similarly to `anyhow::Result`. But Java doesn't allow you to be generic over checked exceptions (as discussed in the post). This is a big hurdle that makes Result superior.
No, it's not quite the same. Checked exceptions force you to deal with them one way or another. When you use `?` and `anyhow`, you just mark a call of a fallible function as such (which is a plus, but it's the only plus), and don't think even for a second about handling it.
Checked exceptions don't force you to catch them on every level. You can mark the caller as `throws Exception` just like you can mark the caller as returning `anyhow::Result`. There is no difference in this regard.
If anything, `?` is better for actual "handling". It's explicit and can be questioned in a code review, while checked exceptions auto-propagate quietly; you don't see where it happens or where a local `catch` would be more appropriate. See the "Can you guess" section of the post; it discusses this.
C++ exceptions are fast on the happy path and ABI-locked on the sad path. They could be much faster than they are currently. Khalil Estell did a few talks and a bunch of work on the topic and saw great improvements. https://youtu.be/LorcxyJ9zr4
> "In low-level systems software, which is a primary use case for C++, exceptions can introduce nasty edge cases that are difficult to detect and reason about. The benefits are too small to justify the costs to reliability, robustness, and maintainability."
Interestingly, Microsoft's C/C++ compiler does support structured exception handling (SEH). It's used even in the NT kernel and drivers. I'm not saying it's the same thing as C++ exceptions, since it's designed primarily for handling hardware faults and is simplified, but it still shares some core principles (guarded regions, stack unwinding, etc.). So a limited version of exception handling can work fine even in something like an OS kernel.
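For anyone who hasn't seen it, here's a minimal SEH sketch (MSVC-only; it won't build with GCC/Clang in their default modes). A guarded region traps a hardware access violation through a filter expression:

```cpp
#include <windows.h>
#include <stdio.h>

int main() {
    int* volatile p = nullptr;     // volatile so the faulting store isn't optimized away
    __try {                        // guarded region
        *p = 42;                   // hardware fault: access violation
    }
    __except (GetExceptionCode() == EXCEPTION_ACCESS_VIOLATION
                  ? EXCEPTION_EXECUTE_HANDLER
                  : EXCEPTION_CONTINUE_SEARCH) {
        printf("caught access violation via SEH\n");
    }
    return 0;
}
```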
FWIW, I think it is possible to make exception-like error handling work. A lot of systems code has infrastructure that looks like an exception handling framework if you squint.
There are two main limitations. First, the compiler currently has no idea what can be safely unwound; you could likely annotate objects to provide this information. Second, there is currently no way to tell the compiler what to do when an object in the call stack may not be unwound safely.
A lot of error-handling code in C++ systems code essentially provides this, but C++ exceptions can't use any of this information, so it is applied manually.
Exceptions are actually a form of code compression. Past some break-even point they are a net benefit, even in embedded codebases. They're "bad" because the C++ implementation is garbage, but it turns out it's possible to hack it into a much better shape.
My memory of F# is very rusty, but IIRC, there are two types of error handling mechanisms. One of them is to be compatible with C#, and the other is fully checked.
It really depends on how reliable you want the code to be. Many business application developers prioritize development speed and don't want to think about errors; for them, checked exceptions may seem like a hassle. For developers who prioritize reliability, unchecked exceptions are a huge problem because they are not part of the contract and can change without notice.
Because Java is garbage-collected and doesn't have any of the problems of C++ exceptions, checked exceptions just become a nuisance of having to try/catch everything.
Most codebases that ban exceptions do it because they parrot Google.
Google’s reasons for banning exceptions are historical, not technical. Sadly, this decision got enshrined in Google C++ Style Guide. The guide is otherwise pretty decent and is used by a lot of projects, but this particular part is IMO a disservice to the larger C++ ecosystem.
I think reasonable people can disagree about whether C++ exceptions are "good" or not.
There are things you can't easily do in C++ without exceptions, like handling errors that happen in a constructor and handling the case where `new` cannot allocate memory. Plus, a lot of the standard library relies on exceptions. And of course there's the stylistic argument of clearly separating error handling from the happy-path logic.
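For instance, a constructor has no return value, so throwing is the only built-in way for it to report failure. A minimal sketch (the Config class is hypothetical):

```cpp
#include <cstdio>
#include <stdexcept>

class Config {
public:
    explicit Config(const char* path) {
        f_ = std::fopen(path, "r");
        // No return value to signal failure with, so throw:
        if (!f_) throw std::runtime_error("cannot open config");
    }
    ~Config() { if (f_) std::fclose(f_); }
private:
    std::FILE* f_ = nullptr;
};

int main() {
    try {
        Config cfg("/no/such/file");
    } catch (const std::exception& e) {
        std::printf("construction failed: %s\n", e.what());
    }
}
```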
I won't argue that it's popular to ban them, though. And often for good reasons.
For exception-less C++, you'd declare an operator new() that doesn't throw and just returns NULL on allocation failure, along with a simple constructor and a follow-up, explicitly-called init() method that does the real work that might fail and returns an error value on failure.
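A minimal sketch of that two-phase pattern, using the standard `new (std::nothrow)` form rather than a custom operator new (the Decoder class is hypothetical):

```cpp
#include <cstddef>   // std::size_t
#include <cstdio>
#include <new>       // std::nothrow

class Decoder {
public:
    Decoder() = default;  // trivial constructor: cannot fail
    // Two-phase init: all fallible work happens here, reported by error code.
    int init(std::size_t buf_size) {
        buf_ = new (std::nothrow) char[buf_size];  // nullptr on failure, never throws
        return buf_ ? 0 : -1;
    }
    ~Decoder() { delete[] buf_; }
private:
    char* buf_ = nullptr;
};

int main() {
    Decoder* d = new (std::nothrow) Decoder;  // nothrow form for the object itself, too
    if (!d || d->init(1 << 20) != 0) {
        std::fprintf(stderr, "allocation failed\n");
        delete d;  // deleting nullptr is a no-op, so this is safe
        return 1;
    }
    // ... use d ...
    delete d;
}
```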
If you're planning on shutting down, what's the fundamental difference between throwing an exception, vs simply complaining loudly and calling exit() ..?
Sometimes it’s useful to handle the exception somewhere near its origin so you can close related resources, lockfiles, etc. without needing a VB6 style “On Error GoTo X” global error handler that has to account for all different contexts under which the exceptional situation might have occurred.
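A minimal sketch of catching near the origin, cleaning up, and rethrowing (the lockfile helpers are hypothetical stand-ins; idiomatic C++ would usually wrap the lockfile in an RAII guard instead):

```cpp
#include <cstdio>
#include <stdexcept>

// Hypothetical stand-ins for the resources mentioned above.
static void remove_lockfile(const char* path) { std::remove(path); }
static void process(const char* data) {
    if (!data) throw std::runtime_error("bad input");
}

void run_job(const char* lock, const char* data) {
    try {
        process(data);
    } catch (...) {
        remove_lockfile(lock);  // clean up what only this scope knows about
        throw;                  // then rethrow: outer layers decide what to do
    }
    remove_lockfile(lock);
}

int main() {
    try { run_job("/tmp/job.lock", nullptr); }
    catch (const std::exception& e) { std::printf("job failed: %s\n", e.what()); }
}
```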
> a VB6 style “On Error GoTo X” global error handler that has to account for all different contexts under which the exceptional situation might have occurred
... That seems like a pretty accurate description of how exception handling mechanisms are implemented under the hood. :)
The code that's throwing an exception typically does not know that the exception catcher will shut anything down.
And - very often, you would _not_ shut down. Examples:
* Failure/error in an individual operation or action does not invalidate all others in the set of stuff to be done.
* Failure/error regarding the interaction with one user does not mean the interaction with other users also has to fail.
* Some things can be retried after failing, and may succeed later: I/O; things involving resource use, etc.
* Some actions have more than one way to perform them, with the calling code unable to know a priori whether all of them are appropriate. So it tries one of them, if that fails tries another, etc. (see the sketch below).
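A minimal sketch of that last fallback pattern (both transports are hypothetical):

```cpp
#include <cstdio>
#include <stdexcept>

// Hypothetical transports: each may fail in ways the caller can't predict.
static void send_via_fast_path(const char*) {
    throw std::runtime_error("fast path unavailable");
}
static void send_via_slow_path(const char* msg) {
    std::printf("sent via slow path: %s\n", msg);
}

void send(const char* msg) {
    try {
        send_via_fast_path(msg);   // try the preferred way first
    } catch (const std::exception&) {
        send_via_slow_path(msg);   // fall back instead of shutting down
    }
}

int main() { send("hello"); }
```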
> They're good for exceptional situations where fundamental, core assumptions are broken for some reason.
No, that's what assertions or contracts are for.
Most exceptions are supposed to be handled. The alternatives to exceptions in C++ are error codes and `std::expected`. They are used for errors that are expected to happen (even if they may be exceptional). You just shouldn't use exceptions for control flow. (I'm looking at you, Python :)
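A minimal `std::expected` sketch (C++23; parse_port is hypothetical). The error is an ordinary value the caller must inspect, not a control-flow jump:

```cpp
#include <cstdio>
#include <expected>   // C++23
#include <string>

// An error that is expected to happen sometimes, not exceptional control flow.
std::expected<int, std::string> parse_port(const std::string& s) {
    int v = 0;
    for (char c : s) {
        if (c < '0' || c > '9') return std::unexpected("not a number: " + s);
        v = v * 10 + (c - '0');
    }
    if (v == 0 || v > 65535) return std::unexpected("out of range: " + s);
    return v;
}

int main() {
    auto r = parse_port("8080x");
    if (r) std::printf("port = %d\n", *r);
    else   std::printf("error: %s\n", r.error().c_str());
}
```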
Yet, if you can only explain an exception using the word ‘exception’, you're not making any headway.
I like the idea of an exception as a way to blow out of the current context so that something else can catch it and handle it in a generic manner. I don’t like the idea of using an exception to hide errors or for conditional logic, because then you have to know everything that might be handling it. Much easier to handle it there and then, or use a type-safe equivalent (like a Maybe or Either monad), or just blow that shit up as soon as you can’t recover from the unexpected.
I agree this is the interesting part of the project. I was disappointed when I realized this art was AI-generated - I love isometric hand-drawn art and respect the craft. But after reading the creator's description of their thoughtful use of generative AI, I appreciated their result more.
Where’s the love here? There are artists who dedicate their lives to creating a single masterwork. This is someone spending a weekend on a “neat idea”.
You're inferring that time invested per project is directly proportional to love for the craft, which I disagree with strongly. Taking a weekend to explore a new medium is an act of interest/love in the craft. Is Robert Bateman any less dedicated to a life of art than Michelangelo was? Maybe, maybe not, but I think we can agree that both produced quick sketches and dedicated their lives to producing incredible art.
I expect artists will experiment with the new tools and produce incredibly creative works with them, far beyond the quality I can produce by typing in "a pelican riding a bicycle".
You are welcome to use whatever definition of "small/medium/large" you like. Like you, I've also worked on projects far larger than 1-2 weeks. I don't think that's particularly relevant to the point of my post.
The point that I'm trying to emphasize is that I've had success with it on projects of some scale, where you are implementing (e.g.) multiple related PRs in different services. I'm not just using it on very tightly scoped tasks like "implement this function".
The observation I was trying to make is that at the scope of one week, there's very little you actually get done, and it's likely mostly mechanical work. Given that, I suppose I'm unsurprised LLMs are proving useful. Seems like that's the type of thing they're excelling at.
That's not my experience. I agree that a project of any real size takes quite a bit longer than a week. But it's composed of lots of, well, week or two long subprojects. And if the AI coding tool is condensing week long projects into a day, that's a huge benefit.
Concretely speaking (well, as concretely as I feel like being without piercing pseudonymity), at my last job I worked on a multi-year rewrite of one of our core services. Within that rewrite were a ton of much smaller projects that were a few weeks to a month long: refactor this algorithm, improve the load balancing, add a new sharding strategy, etc. An AI tool would definitely not have sped up the whole process. It's not going to, say, speed up figuring out and handling intra-team dependencies or figuring out product design. But speeding up those smaller coding subprojects would have been a huge benefit.
I'm not making any strong claims in my post. I don't have the experience of AI projects allowing me to one shot large projects. But OP asked if anyone has concrete experience with AI coding tools speeding up development, and the answer is yes, I do.