
Jon Bois is a genius, and this is the main example I return to when tempted to think that hypermedia is just a gimmick. It takes a special type of creativity to keep it from feeling "tacked on." Perhaps the light jazz and dated look of the Google Maps visuals just meld well with the nostalgia evoked by the content of the story. The text alone would probably work as a good sci-fi short, but the additional material makes it so much more dramatic.


Here's an idea for how to maintain Go moving forward: keep making the damn tool however you want. The thing never would have existed in the first place if they had started with an industry survey. It was created to address a perceived need, by the people with the need, for themselves. This model is perfectly fine moving forward. Some industrial user wants a new Go feature or bugfix? Great. If it's enough of a problem, they can fix it and upstream a patch. That's how open source software is always supposed to work. Telemetry does nothing to improve this situation. If the Go team at google no longer has any ideas for what to work on (as must be the case if they're wasting their time on dumb crap like forced google spyware in a compiler) then they should just stop. Maybe focus on accepting PRs from people with ideas and strong enough motivation to work on them. I mean, there are 5000+ issues and 330 open PRs on the go github right now, so that should be plenty to keep them occupied. On top of that, how are literally thousands of issues not a rich enough source of actionable usage information that they need to go looking for more? Do they plan on wrapping up all the open issues before looking at telemetry?


I really appreciate this point, and I think it applies very broadly. Good design comes from a coherent, individual (or group) vision. Citing examples will only incite discussions, because everyone has different needs and ideas, but the things I most appreciate in tech and art share this common aspect.

As I see it, relying on excessive telemetry, surveys, focus groups, etc. is a bit of an indication that nobody at the top has a strong idea of where to take the project.


Which is strange coming from ~Go~ Google. The community loudly demanded features for years which ~Go~ Google said were unnecessary (dependency management, monotonic time, etc). Weird to think that telemetry would do anything to change their opinions when the project has always done what it wants.

Edit: fighting the markup trying to get strikethrough to work


> keep making the damn tool however you want. The thing never would have existed in the first place if they had started with an industry survey. It was created to address a perceived need, by the people with the need, for themselves. This model is perfectly fine moving forward.

I disagree. Doing what you feel like is a great approach when you have no users: you're building a tool that solves your needs and basically saying "Try it, it works great for me, maybe it'll be good for you too!"

Once you have millions of users and billions of lines of code, you need to think about how your changes affect those users. If you get it wrong, there's a huge cost to that (cf. early forays into Go packaging and dependency management). This is why Go's compatibility promise is such a big deal, and a big part of why Go is a very popular language.


> If the Go team at google no longer has any ideas for what to work on (as must be the case if they're wasting their time on dumb crap like forced google spyware in a compiler) then they should just stop.

They did stop. "Go 1" is complete.

Since 2017, the project has renewed itself by moving to what is dubbed "Go 2"[1], built around community feedback. Decisions are made based on data collected from the community. Adding telemetry to increase the amount of data available is in line with the new project scope. Go 1.13 was the first "Go 2" release[2].

Indeed, "Go 1" is still there if you want to use the tool that was built for what was needed at Google and nothing more. You don't have to move into the "Go 2" ecosystem. But if others want to put their time into that, why not? That's their prerogative.

[1] https://go.dev/blog/toward-go2

[2] https://go.dev/blog/go2-next-steps


> there are 5000+ issues and 330 open PRs on the go github right now, so that should be plenty to keep them occupied

just for the sake of argument, wouldn't telemetry be useful to know which of these issues are most likely to benefit the majority of users?

whether that's worth the cost of telemetry is another issue.


You say this:

> keep making the damn tool however you want.

But then you follow up with this:

> If the Go team at google no longer has any ideas for what to work on (as must be the case if they're wasting their time on dumb crap like forced google spyware in a compiler) then they should just stop.

They want to add telemetry. So they should just add it. If you dislike it, then patch it out yourself. As you say:

>That's how open source software is always supposed to work.


> if they had started with an industry survey

They already do regular surveys.


> Some industrial user wants a new Go feature or bugfix? Great. If it's enough of a problem, they can fix it and upstream a patch.

Inb4 I go to a new job and find out that they are using an outdated, custom-patched Go compiler.

> I mean, there are 5000+ issues and 330 open PRs on the go github right now

How do they know which ones are affecting the most users?

> forced google spyware in a compiler

Go is open source, feel free to compile it yourself without the telemetry. Which is what distros will do if any major promises are broken.


There's a lot of 'just' handwaving here about compiling without telemetry. We need look no further than VSCode, which is riddled with unremovable telemetry, and the entire VSCodium project, which has to exist to provide telemetry-free versions and still cannot remove all of it. You're discounting the complete waste of human time and effort required to undo something that should simply not exist in the first place.

In terms of the open GH issues, people are pretty vocal about which ones they think are most important to fix, as is the case for most popular projects. It's simply not true that the Go team have no way of knowing which of the open issues are most important to the community.


As much as I'm looking forward to 4, has GDExtension been stabilized / documented yet? I know there's the existing C++ example [1], but I really, really don't want to wade through a SCons & C++ project simply to call a single native function in a DLL. You can do such a thing with GDNative, and I presume that it is possible in GDExtension as well, though it isn't obvious how to do so. This strikes me as a huge barrier to adoption, since GDNative is one of the big things that will be incompatible with 4.

[1] In the godot-cpp library: https://github.com/godotengine/godot-cpp


There's an ongoing discussion at [1, 2] which highlights an important difference between GDNative and GDExtension in terms of what the developers expect them to be used for. A lot of people were using GDNative as a way of writing game logic in other programming languages (I've seen this particularly with Rust), but GDExtension seems to be designed to allow people to write editor plugins (e.g. things like a voxel node type, or a custom GUI tool). Each plugin currently seems to provide a custom Node in its own right, not the ability to hook into existing nodes.

This is why, at the moment, you cannot live-reload a compiled GDExtension library: the assumption is that it's a product that is being provided to the user of the editor, not something which the game developer is directly creating.

I think there might be some mechanisms/hooks to allow this to work as wanted, but it has to work quite differently.

[1] https://github.com/godotengine/godot-proposals/issues/4437

[2] https://github.com/godotengine/godot/issues/66231


Faolan-Rad and I (fire) are working in that area: https://github.com/godotengine/godot/pull/72883 ("LibGodot is a system that allows Godot to be compiled as a library and connected to using GDExtensions, by giving a function pointer to the entry point of a GDExtension implementation").


Yes, I was in the first category of people: using GDNative to ultimately call a single function written in Go, not due to speed, but simply in order to leverage a huge amount of code I had already written in that language. Judging by the GDExtension headers, I can probably pull off something similar if I forgo all of the C++ bindings that I don't need. However, I am hesitant to dedicate any time to doing so at the moment.

Don't get me wrong, GDExtension seems awesome, but it's also true that it doesn't seem geared towards the case that I was using GDNative for.
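For concreteness, the Go side of this kind of bridge is tiny. Here's a minimal sketch, assuming the library is built with `go build -buildmode=c-shared`; the exported function name and its doubling logic are invented for illustration, and all of the GDNative registration glue on the C side is omitted:

    // bridge.go: build with `go build -buildmode=c-shared -o libbridge.so`,
    // which also generates a libbridge.h header describing exported symbols.
    package main

    import "C"

    //export DoTheWork
    func DoTheWork(input C.int) C.int {
        // In the real setup this would call into the existing Go codebase;
        // doubling the input is just a stand-in.
        return input * 2
    }

    // A c-shared library still needs a main package with a main function,
    // even though it is never run.
    func main() {}

The native library on the Godot side then only has to load the resulting .so/.dll and call DoTheWork through the generated header; none of the C++ bindings need to be involved.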


I've seen on their Discord that, before they release the next version, they call upon the community to help document all the new features, if I remember correctly. Maybe a good issue to bring up before it gets overlooked.


Boy, if only you saw how popular Beamer (a LaTeX presentation framework) was in certain academic circles. Spoiler: it leads to exactly the problem you describe, in addition to using some default styling that manages to be both hideous and crushingly bland at the same time. At least, based on the screenshots, this proposed framework doesn't have the second problem.

I guess the way I view this is as a "better Beamer" (which isn't saying much, IMO) rather than a better PowerPoint. Basically, it's a way for people who were going to make a text-heavy presentation anyway to produce something that looks OK, while avoiding a heavyweight tool like LaTeX.


One of the most fun (?) parts of academia is the unique blend of frustration and satisfaction that results when a shoddy paper somehow clears peer review only to get eviscerated when it lands on the desk of an actual expert.

"the detailed exploration of irrelevancies" LOL, if that isn't a "time-tested" method for making a paper sound academic, I don't know what is.


It hasn't cleared peer review; it's a preprint (which is pretty common in cryptography, where peer review usually happens when the results are already old news).


This is not always the case. SA and Sabine are the hyper-rare outliers in this regard. Fields like biology have no such folks, for a variety of reasons (I'm excluding some celebrities involved in pointing out actual misconduct rather than sloppy work).


Biology has so many exceptions and idiosyncrasies compared to physics that being broadly competent is difficult.

Physics found the "zoo" of diverse particles to be inelegant (https://en.wikipedia.org/wiki/Particle_zoo). In biology, actual zoos hold a small fraction of diversity at the organismal level. The diversity at the molecular level is insanely high and the vast majority of "rules" have exceptions. Even the Central Dogma of Molecular Biology (https://en.wikipedia.org/wiki/Central_dogma_of_molecular_bio...) is a bit messy, unless stated fairly carefully (e.g., https://pubmed.ncbi.nlm.nih.gov/24965874/).


Who even are the big science communicators for current research in bio? Nobody comes to mind, not even a non-professor.


Eric Topol


Akiko Iwasaki aka @virusesimmunity on Twitter is also a good follow for, well, viruses and immunity.

Ed Yong for excellent long form journalism.


> the detailed exploration of irrelevancies

Also signals a bad reviewer who wanted their work cited. Not in this case though.


I understand that key travel time is included in the latency measurements to facilitate the camera-based measurement, but wouldn't it make more sense to measure latency purely in terms of electrical signals? For example, measuring the time between the first connection of the circuit in the keyswitch and the time at which the USB packet including the keypress is sent across the wire? This seems like it would be equally possible to test with a second logic analyzer, without relying on a high-FPS camera. Many people who use "special" mechanical keyboards are well aware of the actuation points on their keyboards, and understand that there are tradeoffs between travel time, physical feedback, and so on.

Put another way, unless you think gently resting your fingertips on the top of a key should count as a "press", it doesn't make sense to include the key travel time in latency.
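
To make that concrete, here's a sketch of the post-processing this measurement would need, assuming timestamped edge events exported from two logic-analyzer channels (contact closure on one, first byte of the USB packet on the other). All timestamps below are invented for illustration:

    // Pair each switch closure with the next USB packet and report the
    // purely electrical latency between them, ignoring key travel entirely.
    package main

    import "fmt"

    func electricalLatencies(closures, packets []float64) []float64 {
        var deltas []float64
        j := 0
        for _, c := range closures {
            // Advance to the first USB packet at or after this closure.
            for j < len(packets) && packets[j] < c {
                j++
            }
            if j == len(packets) {
                break
            }
            deltas = append(deltas, packets[j]-c)
            j++
        }
        return deltas
    }

    func main() {
        // Timestamps in milliseconds, as exported from the two channels.
        closures := []float64{10.0, 55.2, 103.7}
        packets := []float64{12.1, 57.0, 106.5}
        for _, d := range electricalLatencies(closures, packets) {
            fmt.Printf("%.2f ms\n", d)
        }
    }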


Yeah, I think so. Relevant quote from the article below. How are the keys being pressed, manually? We could have skipped all this by just knowing that key travel time will dominate, based on his experiment. If you really want to know the fastest keyboards, look at what the winners of typing competitions use.

>A major source of latency is key travel time. It's not a coincidence that the quickest keyboard measured also has the shortest key travel distance by a large margin. The video setup I'm using to measure end-to-end latency is a 240 fps camera, which means that frames are 4ms apart. When videoing "normal" keypresses and typing, it takes 4-8 frames for a key to become fully depressed. Most switches will start firing before the key is fully depressed, but the key travel time is still significant and can easily add 10ms of delay (or more, depending on the switch mechanism). Contrast this to the Apple "magic" keyboard measured, where the key travel is so short that it can't be captured with a 240 fps camera, indicating that the key travel time is < 4ms.


Yeah, this is a pretty big issue that disproportionately affects mechanical keyboards: it fails to account for the fact that the "ready" position in the context of gaming on a mechanical keyboard likely involves having the key slightly depressed, just above the actuation point (think resting your fingers on WASD).


The author is concerned about perception of latency. You perceive the time from when you make the decision to press a button to when you see the result. From this perspective mechanical vs electrical is irrelevant.


That's assuming that the user always starts from a completely unpressed key. On a medium-weight mechanical keyboard, for latency-sensitive actions you'd likely be hovering the key just above its actuation point (one of the reasons I actually prefer tactile keys for gaming: the ideal keyboard holds the weight of my resting fingers just above the actuation point).


The article responds to this:

> A common response to this is that "real" gamers will preload keys so that they don't have to pay the key travel cost, but if you go around with a high speed camera and look at how people actually use their keyboards, the fraction of keypresses that are significantly preloaded is basically zero even when you look at gamers. It's possible you'd see something different if you look at high-level competitive gamers, but even then, just for example, people who use a standard wasd or esdf layout will typically not preload a key when going from back to forward. Also, the idea that it's fine that keys have a bunch of useless travel because you can pre-depress the key before really pressing the key is just absurd. That's like saying latency on modern computers is fine because some people build gaming boxes that, when run with unusually well optimized software, get 50ms response time. Normal, non-hardcore-gaming users simply aren't going to do this. Since that's the vast majority of the market, even if all "serious" gamers did this, that would still be a rounding error.


That's the best-case scenario, not the average case. It only regularly happens with the gun/mouse in shooters, since in games that have castable abilities you don't know exactly which one you might cast next in a given moment.


If accurate, it's unfortunate to hear the situation has worsened here. About 7 years ago, I sent what was essentially a single-line bugfix to Go, and it was reviewed and merged within 3 days.


That is often still the case. It really depends on the CL being proposed (which is not linked).


Same here, though it was a 3 line change to x/sys


At least in the original paper [1], the key idea is that instead of training a neural network to treat "images" as an array of pixels, you train a network to map a camera location and direction (in 3D space) to a distance and a color.

For example, if you tell a NeRF that you have a camera at location (x, y, z) and pointing in direction (g, h, j), then the network will output the distance at which the ray emitted from the camera is expected to "hit" something and the RGB color of whatever is hit.

Doing things in this way enables rendering images at arbitrary resolutions (though rendering can be slow), and is naturally conducive to producing rotated views of objects or exploring 3D space. Also, at least theoretically, it should allow for more "compact" network architectures, as it does not need to output, say, a 512x512x3 image.
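
To make this concrete, here is a rough sketch of the rendering loop the description above implies, with the trained network treated as a black box. The Field type and the toy "network" are stand-ins I've made up for illustration; the paper's actual MLP is more involved (it integrates density and color samples along each ray), but the one-query-per-ray shape is the same:

    package main

    import "fmt"

    type Vec3 struct{ X, Y, Z float64 }

    // Field plays the role of the trained network: ray origin and direction
    // in, hit distance and RGB color out, per the description above.
    type Field func(origin, dir Vec3) (dist float64, rgb [3]float64)

    // render queries the field once per pixel. Nothing ties the field to a
    // fixed image grid, which is why arbitrary output resolutions fall out
    // for free (at the cost of one network query per ray).
    func render(f Field, origin Vec3, w, h int) [][3]float64 {
        img := make([][3]float64, w*h)
        for y := 0; y < h; y++ {
            for x := 0; x < w; x++ {
                // Crude pinhole camera: map the pixel to a view direction.
                dir := Vec3{
                    X: float64(x)/float64(w) - 0.5,
                    Y: float64(y)/float64(h) - 0.5,
                    Z: 1.0,
                }
                _, rgb := f(origin, dir)
                img[y*w+x] = rgb
            }
        }
        return img
    }

    func main() {
        // A toy "network" that shades by ray direction, just to run the loop.
        toy := func(o, d Vec3) (float64, [3]float64) {
            return 1.0, [3]float64{d.X + 0.5, d.Y + 0.5, 0.5}
        }
        img := render(toy, Vec3{0, 0, 0}, 4, 4)
        fmt.Println(img[0]) // color of the top-left pixel
    }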

[1] https://arxiv.org/pdf/2003.08934.pdf


Fun side fact: that's a project from Yusuke Endoh, who also happens to be the most prolific winner of the obfuscated C contest (ioccc.org).


They figured out how to brick a Lego set.

