Python has a list of locations to look for projects (modules) in (sys.path) and by default includes the working directory as the first item in that list.
Go has a list of locations to look for projects (packages) in ($GOPATH) and by default does not include the working directory. Unlike Python's sys.path, $GOPATH normally only contains a single value.
Python's behavior is similar to $PATH on Windows while Go's behavior is similar to $PATH on Unix.
There are pros and cons to each default value but either language can be overridden to have the other's behavior if you desire.
I personally quite like having a single unified $GOPATH for all of my source.
> There are pros and cons to each default value but either language can be overridden to have the other's behavior if you desire.
How? With Python I can release a package with code like `from . import foo` in it. This will not work in Go; a GOPATH package must import things from subdirectories via the full GOPATH path. At least, this was the case the last time I checked.
There are many problems with running outside GOPATH. Among others, there's no
caching of built object files, so everything is much slower than it should be
(go build, go test, and so on). Vendoring is pretty low on the list of problems.
It may be that GOPATH should be reconsidered but if so it should be done as a
whole change, not just one piece at a time, like @bradfitz said above.
The solution for now is to use GOPATH.
> rsc closed this on 6 Dec 2016
Once the new dependency tool is done, it seems like GOPATH will be increasingly irrelevant with vendored dependencies and proper tooling.
You're absolutely correct—I just tested it, and I'll be damned. That's what I get for never playing around with it; I had (wrongly) assumed that this would have been one of the most basic features of vendoring.
Yeah, vendoring doesn't fix this problem as the sibling comment says. To be clear, I'm aware the team is aware of the inadequacies of the current system and is working on fixing it -- just saying that this doesn't work well now :)
Indeed. I have a couple of Go projects that are released in the classical manner as tarballs, which users can extract anywhere and compile with `make && make install`.
This used to work okay-ish until Go 1.5. Then Go 1.6 introduced vendoring support (i.e. you put the code for the libraries you need into your own repo) which is very nice if it works, so I wanted to have it. But `go build` stubbornly refuses to look at vendored code when you are not in the exact correct location inside the GOPATH.
To preserve the familiar build semantics, I set up the GOPATH inside the repo. Roughly like this:
$ git clone https://github.com/foo/bar && cd bar
$ mkdir -p .gopath/src/github.com/foo
$ ln -sTf ../../../.. .gopath/src/github.com/foo/bar
$ GOPATH=$PWD/.gopath go build github.com/foo/bar
You are trying to build Go in a way it wasn't designed for.
Go encourages absolute import paths (relative to $GOPATH) and solves a myriad of other language faults in this way. A good benefit is go-gettable projects, where all dependencies can be reached without a setup/configuration file (like setup.py, requirements.txt, and so on).
If you do not try to use Go like other languages, your build should be something like `go get github.com/foo/bar`.
> You are trying to build Go in a way it wasn't designed for.
Yeah, I think his issue is with Go's choice to not search the current directory.
The go tools should let you use the current directory, and the "best practice" should have been vendoring from the start, relative to the current directory. Then suddenly you see that no GOPATH would be necessary at all! For example:
% mkdir $proj
% go get github.com/$dep1
% go get github.com/$dep2
% vim main.go
% find .
./main.go
./github.com
./github.com/$dep1
./github.com/$dep1/somepkg
./github.com/$dep2
...
See, then main can import github.com/$dep1/somepkg, and it would be properly versioned and vendored! It falls out in a bunch of other ways of being nicer too IMO.
The current GOPATH design encourages untracked and complected dependencies; we're at 1.8 of the toolchain and we're still changing the packaging behavior of dependencies.
Note that my project is still go-gettable. I've structured it in such a way that it is familiar both to Go devs using `go get` and to users who know `tar xf && make && make install`.
It doesn't. Package imports are either relative to GOPATH, GOROOT, the vendor directory or to the current package path (e.g. `import "../sibling"`).
This doesn't restrict you to using a special path or a particular mode of organizing. Rather, it leaves much of that problem to the user, for better or for worse. It's perfectly fine to write programs with GOPATH unset.
I actually really like that about Go. Not the fact that it wasn't set by default; that was annoying. But the fact that Python modules can be anywhere, as long as they're in some path, can be confusing and make it hard to track down code. The Go model is very restricting, but it keeps the environment tidy.
Honestly, Python does too as soon as you start working on multiple projects or something larger. Sure, virtualenv in Python allows you to specify where you want the project to be, but so does Go: just redefine GOPATH.
Python has PYTHONPATH. I believe it's the exact same thing? The difference is that Python puts the current directory and the standard library in by default.
Everyone I know who has learned Go has encountered this issue. "What's this GOPATH env variable? What should it be set to?" When I personally learned Go, this (and how Go does/doesn't do vendoring) was the hardest thing to learn. It's strange that it took this long to get this feature out.
Now if they could figure out a way to get `go get` to work nicely with private repos out of the box without SSH config hacks, that'd be great...
Because it's supposed to be your actual development workspace. To be honest, I think this is one of the most asinine parts of the go development workflow (and there are a lot of contenders…). Having to organize all your go projects separately from everything else just because they happen to be developed in that particular language is maddening.
Yeah, this gets super annoying when you need a project written in other languages to contain Go code.
I worked on the YCM support for godef; we eventually had to fork the library and rewrite paths, since there was no way of pinning to a version within YCM. Libraries written with github paths for `go get` do not play nicely with living outside of GOPATH, and libraries written with relative paths don't work with `go get` (even though this is something Go could support).
I've got a couple of projects that contain Go code. The Go code does not need to be the top level of the project. I just either check it out into the gopath or symlink it in (usually the former since I've already set up CDPATH in bash to go to the directory the project is living in), and the import paths in my go code point at something like scm.localdomain.com/team/project/go, where the "project" directory is the top level directory for the project that has no Go code in it. This is even fully go getable if you "go get scm.localdomain.com/team/project/go/cmd/whatever" or something.
YCM can't require its users to do that, nor can it pollute the gopath which may not be set when it is installed, nor can it rely on that path always being the same or the installed files not being removed.
It's fine for a Go project, but this is not a Go project. It should not be forced to change how it works because it wants to contain some Go code.
>It should not be forced to change how it works because it wants to contain some Go code.
Go is not like Python; the end result is a binary, not interpreted source code. You can simply write a build script that sets a custom GOPATH inside a temporary folder, pulls all dependencies, builds, and pops out a binary ready for use; and you can either discard or save your custom GOPATH anywhere for caching purposes.
Note that this wasn't the only reason we had to fork. Go doesn't (or didn't) provide a way to pin to a dependency, so we would have had to fork anyway, and then rewrite all the paths anyway, so I went the way of using local paths in the fork.
I actually like it. Makes it really easy to share code between my own programs - really handy for "microservice" projects. And if I want to have a Go program in a different source tree, links allow me to do that.
Go is opinionated about how things are done, and it just happens that its opinion fits yours. The issue is: what happens when it doesn't?
With non-opinionated languages nothing is stopping you from structuring your code the same way.
Some people stated how hard Go makes it when you try to have a project that utilizes different languages. Another issue is when you try to use a fork of one of your libraries.
Yet another is that it is much more complex to generate a package (e.g. rpm) of a project that has many dependencies.
> With non-opinionated languages nothing is stopping you from structuring your code the same way.
With Go, I can do `go build $project` from anywhere and get a releasable binary in the working directory. I can do `import "$oldproject/$package"` in my new project and start using it immediately. I can use `godef` to jump around all of the Go code installed on my system and quickly drill down all the way to the Go builtins if I need to. I can use `guru` to see all references to a definition across all of the Go packages installed on my system. The list goes on. And I can do all of that from a favorite text editor (and not a huge slow IDE). I never felt so liberated in any other programming environment, and I've used quite a few.
> Yet another is that it is much more complex to generate a package (e.g. rpm) of a project that has many dependencies.
Go binaries have only the standard system libraries for dependencies - for all practical purposes they can be considered static. Or you mean something else?
> Some people stated how hard go makes it when you try to have project that utilizes different languages.
I've used links successfully, but maybe my scenarios were not complex enough?
> Another issue is when you try to use a fork of one of your libraries.
Vendor support since 1.6 made this a nonissue; it's now trivial to use forked versions of libraries.
> Another issue is when you try to use a fork of one of your libraries.
A fork of a library is either identical to the original, or a new library. And you could argue (or rather: I do), that it is a good thing that you need to explicitly specify which one you intend to use.
So let's say you use code with many dependencies, but you want to make a small change in one of the dependencies. In other languages you would just compile and be done with it, but with Go you now also need to modify potentially hundreds of places, unless you resort to somehow cheating the toolchain.
As I said, being opinionated is great if the opinion matches yours otherwise you might end up fighting it on every step.
But how do you distribute such a project? Do you need an install script with any OSS project that includes go code, which sets up a bunch of links in the user's GOPATH directory? I imagine this script would be bug-prone and difficult for most users to debug.
I distribute commercial Go source with a Dockerfile to build the app (and rpm package) and it has been received remarkably well - better than I had imagined. YMMV.
You can view GOPATH centered development as a sign of simplicity. Probably not the most elegant thing in the world but the code organization is easy to fix with:
I actually adopted the GOPATH structure for all my repos, with some shell-script tooling (e.g. `cg github.com/foo/bar` is equivalent to `cd $GOPATH/src/github.com/foo/bar`, but clones the repo on first use). While transitioning to it, I indeed found quite a few repos that were checked out multiple times below my $HOME because of inconsistent directory structures.
FWIW: I instead started organizing my workspace in the GOPATH layout (that is, I have GOPATH=~ and put a C project cloned from github.com/foo/bar at ~/src/github.com/foo/bar too). It also helps knowing where $random_repo was cloned from. YMMV
I don't see any problems with .git. Most go packages use git anyway, it would be pretty amazing if go tooling would break if you have a .git directory anywhere.
Re the other question: If you don't run a go command with a path name, it doesn't matter what the directory structure is. I have tons of non-go projects in my GOPATH and build them with the usual tools. No issues.
I also set GOPATH=$HOME, and all my source code (not only Go) lives in $HOME/src, and I do use git for both Go and non-Go projects (so they have a .git directory in $HOME/src/what/ever/.git), and there's no problem. Why would there be one?
"For those who object that dot files serve a purpose, I don't dispute that but counter that it's the files that serve the purpose, not the convention for their names."
Oh, that's rich. This from the guy who designed a language where the case of a symbol's name determines the symbol's visibility.
That comparison makes little sense, given that these are the actual arguments he has against dot-files:
> First, a bad precedent was set. A lot of other lazy programmers introduced bugs by making the same simplification. Actual files beginning with periods are often skipped when they should be counted.
> Second, and much worse, the idea of a "hidden" or "dot" file was created. As a consequence, more lazy programmers started dropping files into everyone's home directory.
> I don't have all that much stuff installed on the machine I'm using to type this, but my home directory has about a hundred dot files and I don't even know what most of them are or whether they're still needed. Every file name evaluation that goes through my home directory is slowed down by this accumulated sludge.
At least, I can't think of any similar problems that semantically significant casing in Go creates.
What a blockbuster release (candidate). Congrats golang team!
I can't wait to start adding in the mutex contention to our tests, remove our custom graceful http shutdown library, compile 15% faster, start using context within our queries, write custom comparators [I needed this just last week! We had a type that already implemented the sort interface, but I needed to sort it by a different key in this one-off] for sorting, and have a much better time debugging JSON unmarshalling.
And those were just the big pieces I noticed that will be an immediate improvement!
Yes, with brand new sorter types for each different comparison. Knowing that Go had first-class functions from the beginning, it is actually quite surprising that a higher-order sort function didn't already exist.
One thing that might snag web applications built with Go 1.8 is the change to the html/template library. If you ever need to include script templates in your HTML for usage by a Javascript template framework (in my case it was EJS), then you will need to be aware that html entities will be escaped in a way they were not in 1.7
I selected EJS specifically because I wanted a templating library that didn't conflict with html/template's handlebars syntax. If you're in the same boat you'll want to find a template engine with non-html entity delimiters.
I think your comment above isn't quite right. The problem you're seeing is that previously this was escaped as js (incorrect), and now it is escaped as text (correct). If you set your type as above to "text/javascript", it works in go 1.8:
the one that fails is if you set the type to text/template or similar, which makes sense I guess as it is not js but might be annoying if your library uses <> as delimiters. Should work with text/template though if you mark those snippets as template.HTML type before including in an html/template.
Thanks for looking! Yeah, I managed to typo the HN example - yes, my existing templates contain type="text/template", and I think that's how I filed the bug. It's possible for me to inject those templates with a variable instead of including them directly in the template, although that's a bit more invasive of a change than what I was doing before.
If that is true, please don't use text/template as a quick fix. A lot of security thinking went into html/template, and without it you will have XSS all over your code.
Edit: sure enough, it validates the type against known values, and I assume jsx is not one of them.
Then presumably it won't be escaped as js, but it will become js when it is run through the other templating system. sentiental, have you tried that approach?
There was talk of the go team deciding on a vendoring paradigm. I don't see any mention in the release notes. Anyone have ideas/updates on how much closer we're getting to an official vendoring decision?
It would be neat if there was a Go Playground running the latest beta/RC version of Go. (Or is there?)
I wanted to quickly test out the new struct type conversion behavior and obviously my code[1] doesn't compile on the normal playground.
Perhaps interesting to note, the `beta.golang.org`[2] homepage specifically adds an inline `display:none` style to the div containing the mini Go Playground which is normally on the `golang.org`[3] homepage - removing that style in the Chrome inspector, you can see the mini Playground, but it doesn't work. Upon closer inspection, the code to init the playground is still there, but the `playground.js` script isn't – the init code, however, checks for this so it doesn't throw any errors.
I like the attention to detail, at least – you don't want the homepage for a beta version to have the playground running the current stable release version; that would be pretty confusing. Still, I wonder why the HTML and JS initialization code for the playground is still there? If they went through the effort to add a `display:none` style to the HTML and a condition to check if the Playground script is available, why not just remove it entirely?
Just my random thoughts - obviously not a big deal at all, I just found it interesting!
Unlike f2f I do believe this should be possible without a lot of problems. NaCl isn't that special magical technology and everything that is special needs to be in an RC anyway. And even if you have any implementation bugs; the way NaCl works, this would, in the worst case, mean that the program gets killed because it contains invalid machine code or makes invalid syscalls (this confidence comes mainly from the fact that go uses the unmodified NaCl runtime for execution, which contains the appropriate check and is QA'd by the chrome team).
I assume that, so far, just no one did the work, they don't want to spend the additional resources on separate playgrounds or the go team doesn't consider it a good enough idea. I'd recommend either E-Mailing golang-nuts or directly opening an issue (after checking, that there is none so far).
> It would be neat if there was a Go Playground running the latest beta/RC version of Go. (Or is there?)
umm, no, not automagically. the playground is restricted. it would require some work to be put in to make it work with the latest release. that work isn't available for free.
Maybe I'm stupid and just always make stupid comments, but I think the attitude you displayed with your "umm, no" is way too common around here. Is it too hard for you to not be condescending when making a counter argument?
Edit: After re-reading both my comment and yours, I'm not sure if you were using the "umm, no" innocently in response to the `(Or is there?)` at the end of my sentence, or if you were trying to say it in a condescending way. I'll give you the benefit of the doubt and assume it was the former. I'm not trying to be one of those SJW people who gets offended at any little thing.
Regarding the rest of your comment:
> the playground is restricted
What exactly do you mean by that?
> automagically
And that?
I'm referring to large RCs like this. I think a lot of people want to test out new features/changes without pulling the latest version to their machine right away.
That work is going to have to be done anyway before the final release, right? And I can't really imagine it would be a lot of work, I would think _maybe_ a few things would break, but otherwise it should work?
And if you think I mean to have the normal `play.golang.org` start using the latest RC, that's not what I mean - I would imagine it would either be on a separate subdomain, like `play.beta.golang.org`. Or, have a dropdown on the main Playground, but that might be potentially confusing?
"What exactly do you mean by [the playground is restricted]?"
The Playground is a very special build of Go. I don't even think it's open sourced, because while you can't depend on security-by-obscurity, you still don't necessarily want to just hand people the source to attack on their own. It's written to prevent people from abusing the playground: for instance, the os package can't read real files, the time package returns constants for "time.Now()" rather than using the clock, and an arbitrary number of other changes are made to prevent abuse. How many changes there are I don't know exactly, but there's certainly a number of them. But, more importantly, the QA work for such a thing is quite substantial. Many of the changes would also preclude using the playground to investigate the new features; for instance, you can't test HTTP2 support on there since you can't open sockets or servers. Presumably updating it would require significant effort, and at the very least, it can't be allowed to block a release.
They both use docker for isolation, with a bunch of things to make it work. IIRC we haven't had any issues with this. Furthermore, the official playground used to be a third completely different bit of software until it was rewritten in Rust. It also used to use https://github.com/thestinger/playpen.
I'm skeptical that the problem will take that much effort since Rust has solved it (to some degree) twice.
At the same time, it's likely that Go's playpen does a lot more mitigation, e.g. the time.Now() probably futzes timing attacks. Though in that case they would need to mess with mutexes or scheduling as well (since you can use shared memory to build a timer), which complicates things.
(For timing attacks I still think the solution still lies in how you set up the container, specifically isolating its memory accesses -- instead of tweaking the compiler itself)
Docker isn't really meant as a security mechanism. Among other things, you still expose the full kernel API as an attack surface. I can't imagine that they only use Docker, at the very least they'll probably have some dedicated machines for the playgrounds. Even then, this is probably not a great idea.
The playground uses NaCl (and AFAIK that includes seccomp), restricting both the instructions that you can use and the set of system calls that you can make (basically just read/write on fds 0-3 and exit). It's what Chrome's sandbox is based upon.
There are a couple of implications from this (most are described in the blog post linked elsewhere), e.g. that time is faked and that playground-code runs single-threaded. A great consequence, though, is that playground code is 100% deterministic, meaning it can be cached (and it is cached pretty aggressively), reducing the load for popular snippets.
I'm pretty sure the source code is available somewhere, though usage isn't very well documented. There are periodic requests on golang-nuts about how to get your own version up and running.
Looks like the reason the time functionality is restricted is to avoid DOSes and server load, not to prevent against side channel attacks (which like I said are possible anyway with mutexes and channels).
The rest could be pretty easily implemented via docker or some other sandboxing method (the time stuff sort of can be handled too, but it's trickier). It might be more vulnerable to load, but it will be able to work with a pristine Go toolchain and provide options like switching between versions; much like the Rust playground.
Oh interesting, I didn't realize it was a very special build of Go itself.
I did know it was restricted, I guess I just underestimated how difficult that is to do. I didn't think the restrictions were actually implemented in the Go source, I thought it was more a locked down container plus a few restrictions implemented externally in the Playground's server code. It would be cool to know more about this, maybe I'll take a look at the source as @arjovr linked.
> Oh interesting, I didn't realize it was a very special build of Go itself.
It's not. It uses NaCl (Native Client), but that's just a special GOOS/GOARCH pair with a supporting runtime. NaCl is what the Chrome sandbox is based upon; it's cool technology :)
> It would be cool to know more about this, maybe I'll take a look at the source as @arjovr linked.
While I'm also often annoyed by global negativity on HN (which, to be fair, has greatly improved lately, way to go guys!), I didn't read parent comment as aggressive, but like if "ummmm no" meant "wait, let me think about it for a sec... no, I don't think so", like you mention in your edit.
But now, this user has been heavily downvoted, which is aggressive, without any doubt. I really wish downvote features were out of the web, this is just group bullying. Upvoting is plenty enough to make good contributions raise above the others.
thank you. it wasn't intended to be negative (or, to put it in a positive spin, it was a positive negative :), mostly "i'm not sure, but i don't think so".
Go 1.8 will be the last release to support Linux on ARMv5E and ARMv6 processors: Go 1.9 will likely require the ARMv6K (as found in the Raspberry Pi 1) or later.
Does this mean that we will not be able to produce ARMv5 binaries or this is only a prerequisite for the compiler?
More importantly, the cross platform router worm that was ddos-ing everyone a few months back was written in go and will not be able to upgrade to 1.8 :)
And IoT companies will have to put trust in another language. We deployed hundreds of ARM5 gateways across Africa with a golang stack.
This kind of decision doesn't work for serious IoT HW providers. Software is easy to upgrade; hardware is not. Now we have to bet on another language... welcome, Rust?
Of course you should try Rust. However, I do not think Rust does any better on this issue. Here is the list of Rust's supported platforms[1]; ARM5 seems to be a 3rd-tier platform. It might work if one does the builder work. What people are asking for is official support from the Go team, which Rust does not provide either.
To be clear about how the tiers work, we would like to provide better support for any platform, but that requires expertise and build machines. If anyone has an interest in getting a platform to have better support in Rust, please give us a shout, we'd love to talk.
Exactly right. I was just pointing out that piling on Go on this issue does not seem right when the discussion on this issue mentions that a lack of reliable builder machines is a big problem for maintaining official ARM5 support.
Sticking with Go 1.7 means no security updates. This gives hackers an open highway to perform large DDoS attacks, etc.
Go is amazing. In a few lines you have an SSL/HTTP server ready for production. But this is only possible if you are able to deploy security updates.
In the past they backported important security fixes (e.g. a fix in 1.7.4 was also in 1.6.4).
I'd first wait and see if there are security fixes that are not available for 1.7 and affect packages you use. It could very well be that this is not going to be the case for a long while.
As noted in the above comment, this isn't necessarily the case.
You might consider writing on the golang mailing list, explicitly asking them to backport security fixes to 1.7 in the future. I think this is a reasonable request, and one that is likely to be implemented if it garners even slight interest.
They already dropped support for older FreeBSD systems, and at some point discussed dropping FreeBSD as a first-class target. It's already hard to judge the risks for non-amd64-linux systems.
If you want to maximize your chance for long-term support of niche architectures, GCC is your best bet. One of the reasons GCC has endured is because it has such a huge, dedicated community surrounding it. And it has such a huge, dedicated community because so many hardware vendors directly or indirectly employ engineers to maintain GCC's extensive hardware support.[1] LLVM just doesn't have that community, and arguably it doesn't even have that kind of dedication, as it requires a significant expenditure in time and effort to maintain, and both clang and LLVM are very much fast moving targets.
Have you looked at gccgo?
Your second best bet would be sticking with a language, like C, with a large and mature field of compilers. Or at least a language that compiles to C (OCaml?) or otherwise built atop of C (Lua, which is implemented in 100% ISO standard C, and with a coroutine implementation that goroutines were intentionally or coincidentally patterned after).
[1] Which isn't to say that GCC doesn't deprecate architectures. But even NetBSD and OpenBSD, which have or are importing LLVM, are keeping GCC around for the architectures unsupported by LLVM. And GCC is happy to revive deprecated architectures when maintainers show-up.
Long-term support is just one aspect of the language decision, though. Security should be another big one for IoT vendors. I'm pleasantly surprised to hear an IoT vendor is using a more secure language than C. I'd be sad if lack of Go or LLVM support caused them to revert to C and likely (re)introduce buffer overflows.
Considering the recent bug discovered in Go's runtime, we can say definitively (not merely "likely") that moving away from Go 1.7 would resolve at least one invalid memory write bug.
The most complex protocol typically seen on IoT devices is HTTP. It's trivial to implement HTTP in C without any buffer overflows--use a parser generator. It's even easier with IoT because you don't need to support serializing and deserializing arbitrary headers, but rather only a narrow set. (You can discard unknown headers rather than reifying them as objects just so they can be ignored.) And this is how I'd implement HTTP in Rust, too--using a parser generator--just because it's an all-around cleaner approach in such a context.
Go and Rust are really cool languages and I hope they continue to see increased usage. But neither will ever be a serious contender to replace C as core infrastructure software unless and until there are multiple implementations with guaranteed interoperability. Diversity of implementations and diversity of tooling matter. They're some of the reasons Java has done so well--not because of the JVM, of which there are more versions than you can shake a stick at.
Or perhaps, conversely, that kind of diversity signals real uptake. In any event, without that diversity I wouldn't adopt any language across the board, but only for very specific applications like for particular daemons. For now only C and, to some extent, C++, have that kind of diversity. Java has come the closest, but everything else is beyond comparison.
> Considering the recent bug discovered in Go's runtime, we can say definitively (not merely "likely") that moving away from Go 1.7 would resolve at least one invalid memory write bug.
If you're making the point that using Go or Rust doesn't make you completely immune to security problems, I agree. But on balance, I think it makes you significantly better off.
As for this specific bug, I expect the fix to be backported to Go 1.7 if it hasn't been already, and ideally these devices would receive updates occasionally...although I know that isn't actually happening for many deployed devices...
(btw, I'd say "likely" rather than "definitively". That looks like a bad bug, but I can imagine a reasonable system that it doesn't apply to. They might not have a goroutine whose stack ever grows while selecting simultaneously on the same channel as another goroutine, for example.)
> The most complex protocol typically seen on IoT devices is HTTP. It's trivial to implement HTTP in C without any buffer overflows
You might be able to write a buffer overflow-free HTTP implementation, but "trivial" is a funny word. Would you still call it trivial if I pointed you at a list of buffer overflow bugs in C HTTP implementations? Does your calling it trivial fix all the buggy devices? If not, what relevance does your calling it trivial have to the security properties of widely deployed implementations?
These devices implement a variety of other protocols. I own some security cameras that implement at least: DNS, DHCP, UPnP, SNTP, RTSP, RTP, SNMP, SOAP, ONVIF, SMTP, SMB, SSH, and a few custom protocols based on HTTP or directly on TCP. Some other likely protocols for IoT devices include SIP, IPP, and OAuth. And those are just off the top of my head. Maybe you consider each simpler than HTTP (and I'm uninterested in debating the ranking of protocol complexity), but collectively they represent a large increase in attack surface, and I'd be shocked if any of the protocols on that list didn't have widespread implementations with buffer overflow bugs.
I hear your point about the lack of implementation diversity being a significant risk to using these languages, but I think you're underestimating the scope of the security problem.
What's a good version manager that allows me to jump versions without moving my whole workspace over? Currently using gvm which installs and runs different versions fine, but whenever you switch version you have to move your entire workspace over or download all your packages again.
I'm using gvm now, but updating your workspace is very tedious. Here's the related issue https://github.com/moovweb/gvm/issues/189, also I'm not convinced gvm is maintained anymore.
Finally. This is going to make it easier for newcomers to the Go ecosystem, since commands such as "go get" will now work out of the box.
[0] https://beta.golang.org/doc/go1.8#gopath