With the number of people who go through Google's interview process, this might actually push the industry to think more carefully about how it interviews. If Google isn't making whiteboard interviews the norm, who will keep the practice going?
That introduces new problems. Hiring someone only to realize they are a bad fit, and then firing them, is bad for both the employer and the employee. Bad for the employer because they wasted resources on on-boarding and compensating the employee. Bad for the employee because they spent time working for a company that fired them instead of looking for a different job they could keep, and may have made other sacrifices for the job, such as relocating.
I want to like BPF, but so few of my problems are in the kernel. Performance problems are usually at the application layer, either in my code or my dependencies. Even in the rare case that it's something outside of my control (like the compiler, or the nature of the data), it's almost never the kernel. Lastly, when it is the kernel's fault, there's usually a sysctl or other knob to turn to fix it. Real kernel problems, such as some missing functionality, are usually better resolved by committing changes to the kernel itself, not with on-demand filters. BPF is a solution in search of problems.
Most performance problems are measured as some function of the hardware resources (CPU usage, throughput, disk iops, network latency, ...). How do you find out what application or what part of your application is causing problems or consuming resources? eBPF helps with observability from the kernel up through the application to connect what your machine is doing (disk throughput too slow, maybe?) with what your applications are doing (frequency of reads/writes, object sizes, prefetching, buffer cache usage?).
For some use cases, the fact that you can insert probes and eBPF code into a running system is a huge win. This is more obvious to kernel developers, who can't always recompile a kernel, deploy it, and recreate a particular state to debug a problem. Application developers may think they can just change the code and add a printf to get better observability, or maybe use gdb, but eBPF has its advantages.
For many of the kernel tracing tools, I'll add user stack traces as needed for the user context. TCP connections and latency _with_ the Java code paths responsible; ditto for disk I/O, memory growth, lock contention, etc. If you've ever had a network problem, a disk I/O problem, a memory problem, etc, BPF can give you new insights that are unavailable from user-space tooling.
But that's also why BPF doesn't seem to have a place in this world. Anything surfaced by a BPF program should probably be surfaced by a proper kernel module or syscall. As far as I can tell, the utility of BPF tracing is solely between the time a bug comes up, and a few weeks later when a kernel upgrade exposes this info anyways.
Neither of your suggestions really get at the point of eBPF. That is, to safely (goodbye kernel modules) and dynamically (goodbye syscalls) instrument the kernel.
I'm specifically suggesting that http/3 could have an implied value for this header if not supplied, so that omitting it was semantically equivalent (and a server could synthesize it).
That's not how HTTP works. HTTP is a transfer protocol, and is not so opinionated on the semantic meaning of the data it moves. HTTP, HTTP/2, and HTTP/3 were designed to be backwards compatible. It would break that compatibility if there were a semantic change in the absence of the UA string. Specifically, H/3 requests couldn't be downgraded to HTTP/1.1 requests by proxies. There are a lot of proxies.
> It would break that compatibility if there were a semantic change in the absence of the UA string. Specifically, H/3 requests couldn't be downgraded to HTTP/1.1 requests by proxies.
It would not reduce the semantic meaning of HTTP to say "in http/3, the default value of the user-agent header is XYZ (a non-specific modern browser), and if you want an empty header, you must specify the header with an empty value. Proxies and other software translating requests between http/3 and older versions of http should map an omitted header in http/3 to this value, and should map an empty string to a missing header."
It would certainly require change, and it isn't a change likely to happen at this point, but it wouldn't have been impossible, nor would it have reduced the fidelity of http. It simply would have changed the default assumption from "something too ancient to identify itself" to "something modern that doesn't want to identify itself".
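To make that concrete, here is a minimal Go sketch of the translation a downgrading proxy could apply under such a rule; the default value and function name are invented for illustration and are not part of any spec:

    package proxy

    import "net/http"

    // defaultUA stands in for whatever "non-specific modern browser" value
    // a rule like the one proposed above would define.
    const defaultUA = "Mozilla/5.0 (generic modern browser)"

    // mapUserAgent applies the hypothetical mapping when downgrading an
    // http/3 request: an omitted header gets the implied default, and an
    // explicitly empty header is forwarded as a missing one.
    func mapUserAgent(r *http.Request) {
        ua, present := r.Header["User-Agent"]
        switch {
        case !present:
            r.Header.Set("User-Agent", defaultUA)
        case len(ua) == 1 && ua[0] == "":
            r.Header.Del("User-Agent")
        }
    }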
There are wrong reasons to sniff the UA string, and less wrong reasons. If a browser has a bug that gets fixed in later versions, sites have to sniff the UA string to do the right thing. For example, a browser bug may cause people to see an error because they are using an old version of the browser. The right thing for them to do is to upgrade, but how do you tell? When you are subject to bugs in some other code, and those bugs are skewed across different versions of that software, the most reasonable way to work around them is by UA sniffing.
I would have bought this argument a decade ago, but at this stage the number of site users not doing some form of automatic updates on their browser should be in the minority. The problems user agents were designed to solve back in the Netscape Navigator days have since been standardized using much better signals, and the odd standards compliance bug here and there isn't really justification for something as weak and unreliable as a user agent string.
I say kill it. Fix it to some static value and require new sites moving forward to do proper feature detection if they really care to work around standards bugs or use experimental new features.
The core web features don't have such bugs anymore for all practical purposes, but newer APIs do. Content Security Policy in particular has a lot of bugs, especially around features in v2 and v3 of the spec; when they were first implemented in Chrome / Firefox, it often took a few major versions to get them right. I think I've seen similar hacks around use of newer crypto APIs. The alternative is to wait 2-3 years between the release of a new API by all major browsers and actually using it.
Sadly, feature detection is still impossible for many features.
For instance, the only way to detect `contentEditable` support is through user-agent sniffing. Many versions of Android Chrome and iOS Safari will happily report that they support `contentEditable` and then refuse to make the content editable.
I'm actually struggling with a similar issue right now: there's no way to detect an on-screen keyboard, so there's no way to focus a textbox only if it wouldn't cover up the screen with an on-screen keyboard (which is pretty important for chat apps). The best you can do involves a lot of hacks, including UA sniffing.
Ouch. Wasn't aware of those. But still not surprised there are edge cases out there.
If I look hard enough, I can always find places where browser behaviors differ. For instance, I once discovered that the maximum top value I could give an absolutely positioned div within a relatively positioned div was around 20 million px in Chrome but only 1.53 million in IE (if my memory serves). This was at least fixable by stacking multiple divs for every 1.5 million pixels I wanted to lay out.
But for every quirk like this that's possible to work around by coding to the lowest common denominator, there's another somewhere that you just can't. I recall doing another project which involved trying to pop open a mobile app to view content; it was supposed to switch to the store to prompt you to install if you didn't have the app. At the time this involved different hacks for iOS Safari and Android Chrome. Behaviors that differed included what happened when you navigated to a scheme with no handler (in Chrome, the previous page kept running), and whether the scheme could trigger the app from an iframe (which was blocked in Chrome but not Safari, iirc). And handing off state to the app during the install flow was simple on Android, but on iOS required another pile of hacks. The whole thing ended up an overcomplicated mess, but we ended up with pretty good UX for the intended flows. This was 2014, so the situation is probably better today - I think iOS Safari added some meta tags targeted at very similar use cases.
Probably the most common use case for such things is progressively-rendered/-loaded lists, where you know the number of items and the height of each item in the list, so that you can reserve all the space and provide a meaningful scrollbar, and have a limited number of children absolutely positioned on screen at any given time.
This gives a far better experience than infinite scrolling (which has no meaningful scrollbar, so you can’t jump to the end or an arbitrary point in the middle) or pagination (which is just generally painful once you want something not at the start).
We do this in Fastmail’s webmail, and have put in workarounds for overly-tall elements breaking browsers. https://github.com/fastmail/overture/blob/41cdf36f3e7c8f0dd1... lists them, including some values at which things break. (Actually, IE doesn’t allow containers anywhere near that tall, capping effective values much earlier, with the consequence that you can’t access things near the end of the list. But the problem was that once you get much larger still, it starts ignoring the values you specify altogether, which would break the entire technique.)
For us, the most common height of each email’s entry in the list is 51px; at that, 400,000 emails is enough to get to 20,400,000px high, which is enough to break both IE and Firefox (not sure about Chrome or Safari, I’ve never actually tested it; their failure mode may well just be different, limiting numbers instead of ignoring them).
400,000 emails in a mailbox isn’t all that common, but it does happen.
Oh, the other thing was that we were using an https URL override on Android and the custom scheme was primary on iOS. And then iOS 8 or 9 added the ability for apps to handle https URLs, which could be used to simplify our iOS nonsense. But not all of it.
That depends on your user base. For example [1] sees about 10-20% of Chrome and Firefox users on some notably "outdated" version. If your demographic skews towards enterprise users that seems very plausible, large companies regularly hold updates back until they are tested or use extended support releases.
For most CSS and JavaScript incompatibility issues, the way to check doesn't require UA sniffing at all. You merely check if the browser supports said feature by looking for whether the @supports query matches or whether the new method is available at all.
@supports is often unreliable for CSS features. At least in Blink, it acts more as a "valid syntax" check than a "supported feature" check, and for some reason Blink's CSS engine recognizes many new CSS features as valid syntax while not actually supporting them.
I for one would serve adapted css that is matched to the browser. Due to vendor prefixes there is unfortunately some mandatory bloat for a cross-browser stylesheet.
It would also skip polyfills when those aren't needed.
That's not what big O notation is about. It represents the growth rate. In the case of a hash table, the size of the thing you are hashing is not affected by the number of items present in the hash table. Putting a trillion-bit integer into a hash table of other integers is still O(1); it's a constant.
>In the case of a hash table, the size of the thing you are hashing is not affected by the number of items present in the hash table.
But it is, and that's precisely the reason why hash tables are not O(1) but rather O(log(n)). By the pigeonhole principle, it is impossible to store N items in a hash table using a key whose representation is less than log(N) bits without a collision. This means that there is a relationship between the size of the key being hashed and the number of elements being inserted in the hash map; specifically, the key size must grow as log(N).
> But it is, and that's precisely the reason why hash tables are not O(1) but rather O(log(n))
I'm sorry, but as a reader it is quite amusing that various posters are claiming (all without citing any sources) that hashtables are O(1), O(n), and O(log(n)).
Yes, they still need unlimited kernel threads (for liveness reasons). When I asked about this a few years back, go-nuts@ suggested implementing my own rate limiting in application space.
In my case, it was for running Go on a resource-constrained Raspberry Pi, where the kernel threads could easily live too long and use up all the memory. The threads were calling read(2) on a network-mounted FUSE fs and would last for 30s+.
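For anyone curious, here is a minimal sketch of that kind of application-space rate limiting, using a buffered channel as a semaphore so only a bounded number of goroutines (and therefore kernel threads) can be parked in the slow read at once; the limit and function name are made up:

    package fsio

    import "os"

    // ioSlots caps how many goroutines may be blocked in a slow read at the
    // same time, which in turn bounds the kernel threads the runtime spawns
    // for them. The limit of 8 is arbitrary.
    var ioSlots = make(chan struct{}, 8)

    // limitedRead acquires a slot before the potentially long-blocking read
    // and releases it afterwards.
    func limitedRead(f *os.File, buf []byte) (int, error) {
        ioSlots <- struct{}{}        // acquire
        defer func() { <-ioSlots }() // release
        return f.Read(buf)
    }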
Disney is in a tough spot since they also pay dividends. Companies typically don't ever reduce their dividends, which leaves Disney unable to spend that cash on their growth.
It's poignant to watch Go slowly realize why other languages have more powerful, more useful error types. The `error` interface is anemic to the point of uselessness. Making it unwrappable is a step forward, but all that does is progress it to the level of 1995 Java with its "cause" field. It still can't express suppressed errors (like those from `defer f.Close()`). There is no standard mechanism to include an optional stack trace. Worst of all, rolling your own error type and trying to use it everywhere is near impossible. Every method signature has to use `error` due to the lack of covariant return types, and heaven help you if you return a nil struct pointer, which will be auto-promoted to a non-nil interface. Perhaps in Go 3 they'll get it right.
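The nil-pointer-to-interface trap mentioned above bites like this; a minimal sketch, with MyError invented for the example:

    package main

    import "fmt"

    // MyError is a custom error type used to demonstrate the trap.
    type MyError struct{ msg string }

    func (e *MyError) Error() string { return e.msg }

    // doWork returns a typed nil pointer, which gets wrapped in a non-nil
    // error interface on the way out.
    func doWork() error {
        var err *MyError // nil pointer
        return err
    }

    func main() {
        err := doWork()
        fmt.Println(err == nil) // prints false, even though the pointer is nil
    }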
I honestly don't understand where you're coming from.
Rolling your own error library is trivial. We have a great one at work: it prints stack traces, interops with normal error handlers, and does a lot of custom work for translating errors into external customer-visible messages and internal developer messages. We use it at every layer of a 50-60 gRPC microservice ecosystem without much headache.
Catching deferred errors is trivial, not sure what you mean there:
    // assuming the enclosing function has a named error return `err`
    defer func() {
        if cerr := f.Close(); cerr != nil && err == nil {
            err = cerr
        }
    }()
Printing stack traces is like 4 lines, I was able to implement ours in 60 minutes of doc skimming. Now it'd be 5 minutes.
I don't understand your last two sentences. It doesn't reflect anything I've run into in the real world, I don't think.
I have a few gripes about golang, but the minimalist error interface is not one of them.
And right there is where you’re going to lose most people.
If it's trivial, why isn't it in the standard library? If everyone needs to do it, why not standardize? I love Go, but I have to agree with the grandparent poster.
> If it's trivial, why isn't it in the standard library
Getting a stack trace is available in the runtime/debug library.
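For example, a few lines around debug.Stack() are enough to attach a trace to an error; the wrapper type here is invented for the sketch:

    package errs

    import (
        "fmt"
        "runtime/debug"
    )

    // stackError pairs an error with the stack captured where it was wrapped.
    type stackError struct {
        err   error
        stack []byte
    }

    func (e *stackError) Error() string {
        return fmt.Sprintf("%v\n%s", e.err, e.stack)
    }

    // withStack records the current goroutine's stack alongside err.
    func withStack(err error) error {
        if err == nil {
            return nil
        }
        return &stackError{err: err, stack: debug.Stack()}
    }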
Perhaps stack traces aren't in normal errors since a program can throw 1,000 errors per second in a perfectly functioning application and that could get expensive.
I have heard that performance is the reason but I'm not able to confirm that.
github.com/pkg/errors is fairly ubiquitous and includes stack traces.
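For instance, wrapping with that package captures the stack at the wrap site, and %+v prints it; the path and messages below are just illustrative:

    package main

    import (
        "fmt"
        "os"

        "github.com/pkg/errors"
    )

    func loadConfig(path string) error {
        f, err := os.Open(path)
        if err != nil {
            // Wrap annotates the error and records the stack trace here.
            return errors.Wrap(err, "loading config")
        }
        return f.Close()
    }

    func main() {
        if err := loadConfig("/no/such/file"); err != nil {
            fmt.Printf("%+v\n", err) // the +v verb prints the stack as well
        }
    }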
> If everyone needs to do it, why not standardize
The only time I've needed to read the stack trace in Go is when there has been an unexpected panic. Otherwise my error messages are more than sufficient to find the root cause of the error. I only have to run my own code though; I imagine if I was debugging someone else's code the stack trace would be invaluable.
Go has purposely tried to do a few things differently. Date formatting comes to mind, as does the lack of exceptions, which I personally like.
Somehow I don't buy it. If you don't want a rich standard for error handling, nothing prevents you from returning just a string as an error. It's just a value, isn't it?
Error handling is generic by itself. It is at the heart of any existing application. It is a fundamental part of the design process and later on contributes greatly to troubleshooting. Making this solid should be, IMHO, one of the most important and most thought-through parts of any language. In Go it seems to be left to the developer's convenience. And even though you may say there is a huge debate on the subject, it always leads to nothing. Or almost nothing, like in this case. It's just disappointing.
Speak for yourself, Go is a bastion of formatting sanity for me because they chose (correctly) to use tabs. Tabs for indentation are the obvious choice to improve code readability and accessibility for people who want different indent sizes. I don't need it (usually), but I've met people who prefer 2, 4, or 8 spaces, and have heard of people wanting everything from 1 to 8. Tabs also handle far more sanely than spaces for people using proportional fonts instead of monospace, another common readability/accessibility tweak.
And gofmt existed before Go really had a large community. It's also a very different problem, because you can change formatting decisions from one version to another (and they do!), but APIs are difficult to change.
That issue is one of the reasons everything has to return 'error', not your own struct that might be more convenient to work with. I've run into it in the wild in dozens of codebases, so it's definitely a real issue.
Without generics and covariance, it's an uphill battle to create usable monadic error handling in Go. If all you've used before is C which has int returns and globals for error handling, I can see how go's error handling looks nice, but compared to most languages invented in the last 20 years, it feels far worse.
One way to work around that is to use interfaces (https://play.golang.org/p/aAEAi8GvDkv), but either way, you are "shadowing" the type, and can no longer use the .TraceID in the second function cause it's just a plain error at that point.
It can be in the std, or it can be available as a package in your language package manager. But Go doesn't shine in dependency management either.
A minimalist standard library doesn't go well with their philosophy around dependencies (“a little copying is better than a little dependency”). That's why they made so much stuff in std.
How do you convince every other library that you might sandwich on your call stack to use your error library so your error data gets passed through correctly?
The other library doesn't have to use your error library, it just needs to preserve your error when passing it through. To make this a common practice, this proposal was made.
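Concretely, with the wrapping machinery from that proposal (what eventually shipped as fmt.Errorf's %w verb plus errors.Is/As/Unwrap), an intermediate library only has to wrap your error for the richer type to survive the trip; TraceError here is invented for the sketch:

    package main

    import (
        "errors"
        "fmt"
    )

    // TraceError is a made-up rich error type carrying a trace ID.
    type TraceError struct {
        TraceID string
        Err     error
    }

    func (e *TraceError) Error() string { return e.TraceID + ": " + e.Err.Error() }
    func (e *TraceError) Unwrap() error { return e.Err }

    // someLibrary knows nothing about TraceError; wrapping with %w still
    // preserves it for callers further up the stack.
    func someLibrary(err error) error {
        return fmt.Errorf("library call failed: %w", err)
    }

    func main() {
        orig := &TraceError{TraceID: "abc123", Err: errors.New("boom")}
        err := someLibrary(orig)

        var te *TraceError
        if errors.As(err, &te) {
            fmt.Println("recovered trace ID:", te.TraceID) // abc123
        }
    }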
The idea of carrying around bloated errors everywhere is absurd to me, especially in a microservice ecosystem where that all has to cross the wire, potentially multiple times.
While Go has issues with a lack of functionality in error handling, I think the biggest win has been treating errors as values and returning errors via the multiple-return-values paradigm. It forces the programmer to be aware of the possibility of errors when using functions and to explicitly handle them.
When working in other programming languages, this is something I sorely miss; I have to resort to ugly and non-intuitive try/catch constructs which feel like "bolt-on magic" rather than a natural part of the program.
IMO this feature makes up for most of the stuff that's lacking.
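For readers who haven't used Go, the paradigm being described looks roughly like this; the function and file name are placeholders:

    package main

    import (
        "fmt"
        "os"
    )

    // readGreeting returns either a value or an error; a caller can't get at
    // the value without the error appearing right there in the signature.
    func readGreeting(path string) (string, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return "", fmt.Errorf("reading greeting: %w", err)
        }
        return string(data), nil
    }

    func main() {
        greeting, err := readGreeting("greeting.txt")
        if err != nil {
            fmt.Fprintln(os.Stderr, err) // handle the error explicitly
            os.Exit(1)
        }
        fmt.Println(greeting)
    }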
I'm not sure where you're going with the stack traces complaint. It is pretty trivial to get them in Go.
When you start thinking about every function having multiple return values, one of which is an optional error, treat every one of those functions as failable, and then build that functionality into the compiler - then you soon arrive at proper exceptions. Granted, forcing the error-machinery syntax upon the programmer makes it more explicit and requires less discipline, but it also gets old pretty fast.
Yes. Exceptions are generally a decent solution to error handling unless you're using RAII languages like C++/Rust, where unwinding the stack doesn't work well with destructors.
Except it doesn't force you to be aware of them or explicitly handle them. It relies on a programmer having the discipline to handle them, or even just the inclination to do so. Fine in a perfect world but most of us work in an environment with unreasonable deadlines, we are tired and eager to get home on Friday afternoon, mistakes happen.
> I think the biggest win has been treating error as value and returning error via multiple return values paradigm.
Treating errors as values is cool (no exceptions!) but using multiple return values not so much; the proper way to do it is with product types, but Go's type system doesn't have them, unfortunately…
It would be great if Go had sum types for error handling, but errors-as-values is great. And the conventions around error handling are pretty easy to work with—the biggest issue is still the anemic error interface (no standard for stack traces, just a string error message, etc).
I agree that error handling has been one of very few rough points for Go. I think the scorn people pour out onto Go is undeserved. It’s easy to say that they didn’t learn from other languages’ experience, but much of that “failing to learn” is a feature. To its great success, Go failed to learn inheritance, “objects” (as in everything is a fat pointer with data, locks, and vtables), exceptions (as control flow), convoluted build systems, gratuitous design patterns (abstract factory bean), etc. Yeah, Go isn’t perfect, but it’s one of the best languages on the market at the moment.