The author lists multiple reasons for this, but for me the biggest one is the first one: Go is good for almost everything.
I have extremely good productivity when using Go. Once your project exceeds 100 lines it is usually even better than Python. And yes, I am aware that Rustaceans did a survey where Rust was crowned as the most efficient language, but in my reality (which may differ from yours) Go is simply the best tool for most tasks.
Why is that? Well, for me there are 3 reasons:
1. The language is extremely simple. If you know 20% of C you are already a Go expert.
2. The core libraries are extremely well thought out.
3. Batteries are included. The toolchain and the core libraries alone can do 90% of what most people will ever need.
Go is the only language I've ever felt highly productive working in. Oftentimes in other stacks I find myself in analysis paralysis on meta things that don't matter:
- what design patterns/language features make sense to use
- what is the best lib to accomplish X
- how do you keep things up to date
With Go, the language is so simple that it's pretty difficult to over-engineer or write overly terse code. Everything you need is in the stdlib. The tooling makes dependency management and upgrades trivial because of strong backwards compatibility.
My Go is a little rusty by now, but I thought they supported some type of dynamic linking (although if I recall correctly it comes with a number of free footguns).
It has a very crude form of it: you're bound to exposing C ABI types, all shared objects have to be linked against the same runtime, and there are still issues making even this basic support work on Windows, the land of game developers.
Gamedev in Go will require cgo (and not just the easy parts), which ups the complexity quite a bit, unless you're already very familiar with C.
I think it's pretty viable nonetheless, but more for the experienced developer with specific goals outside of the nice parts of common engines, or for a hobbyist who knows the language and wants to tinker and learn.
Sorry, this comment is so incorrect that I have to ask, what are you basing it on?
You can create games today using Go without cgo, and there are numerous examples of shipped games of varying complexity and quality. I do this to ship the bgammon.org client to Windows, Linux and WebAssembly users, all compiled using a Linux system without any cgo.
> A library for calling C functions from Go without Cgo.
Like... I mean.... okaaaay, it's not cgo, but it's basically cgo? ...but it's not cgo so you can say 'no cgo' on your banner page?
If you're calling C functions, it's not pure Go.
If it calls some C library and doesn't work on any other platform, it's like 'pure Go, single platform'.
hmm.
Seems kind of like... this is maybe not the right hammer for gamedev; or, perhaps, maybe not quite mature yet...
Certainly for someone in the 'solo dev pick your tools carefully' team, like the OP, I don't think this would be a good pick for people; even if they were deeply familiar with go.
It was based on my own experience (with e.g. sdl2) and, clearly, some ignorance.
I didn't mean to imply that cgo was an insurmountable barrier. But apparently it was a big enough deal for the authors of this engine that they ported large parts of the major API surfaces to Go to avoid it. Impressive.
However, AFAICT avoiding cgo means using unsafe tricks and trusting that struct layout will stay compatible. Nevertheless, it's a proven solution and as you say used by many already.
Note that Go has very different GC behavior to what .NET GC and likely Unreal GC do. At low allocation rates the pauseless-like behavior might be desirable, but it scales poorly with allocation rate and core count, and as the object graph and allocation patterns become more complex it will start pausing and throttling allocations, producing worse performance[0].
It also has a weaker compiler that prevents, or makes difficult, the efficient implementation of performance-sensitive code paths in the way C# allows. It is unlikely game studios would be willing to compensate for that with custom Go ASM syntax.
Almost every game is also FFI-heavy by virtue of actively interacting with user input and calling out to graphics APIs. From the very beginning, .NET was designed for fast FFI and had runtime augmentations to make it work well with its type system and GC implementation. An FFI call that does not require marshalling (which is the norm for rendering calls, as you directly use C structs) costs ~0.5-2ns in .NET, sometimes as cheap as a direct C call plus a branch. In Go, the equivalent cgo call costs 50ns or more. This is a huge difference that will dominate a flamegraph for anything that takes, for example, 30ns to execute and is called in a loop.
It is also more difficult to do optimal threading with Go in an FFI context, as it has no mechanism to deal with runtime worker threads being blocked or spending a lot of time in FFI'd code. The .NET threadpool has a hill-climbing algorithm that scales the active worker thread count (from 1 to hundreds) and detects blocked workers, making this a non-issue.
It's also worth mentioning that .NET has a rich ecosystem of media API bindings: https://github.com/dotnet/Silk.NET and https://github.com/terrafx cover practically everything you would ever need to call in a game or a game engine (unless you are on macOS), and do so with great attention paid to making the bindings efficient and idiomatic.
For less intensive 2D games none of these are dealbreakers. It will work, but unless the implementation details change and Go evolves in these areas, it will remain a language that is poorly suited to implementing games or game engines.
I don't know about that. Every programmer's first Go program seems to like to go to channel city. Perhaps more accurately: Over-engineering your Go program is going to quickly lead to pain. It doesn't have the escape hatches that help you paper over bad design decisions like some other languages do.
Also: interface-itis. Someone saw "accept interfaces, return structs" somewhere and now EVERYTHING accepts an interface, whether it makes sense or not. Many (sometimes even all) of these interfaces have just one implementation.
A lot of the time you want to be able to cmd+click on something and actually see what the hell the code does, not get dead-ended at an interface declaration.
What are you using that can cmd+click to take you to a definition, but can't also take you to an interface implementation? I develop Go in Emacs with the built-in eglot + gopls, and M-. takes me to the definition, C-M-. takes me to the implementation(s). It's a native feature of gopls. Sure, it's one extra button, but hardly impossible.
The compiler certainly knows how to determine if there is only one implementation of an interface and remove the interface indirection when so. There is nothing really stopping the cmd+click tooling from doing the same.
Does the compiler do that? That sounds extremely unlikely, especially because an interface with only one implementation can store the nil type tag or a tagged pointer to an instance of that implementation.
The nil interface is another implementation. I mean, unless it is being used as the sole implementation, but I think we can assume that isn't the implementation being talked about given that it isn't a practical implementation. We're talking about where there is one implementation.
Right. Can you cite anything that says that the go compiler does this sort of whole-program analysis to try to prove that a certain argument to a function is always non-nil, so that it can change the signature of that function and the types of variables declared in other functions?
Uh. No. Why would I ever waste my time proving something I said? If I'm right, I'm right. If I'm wrong, you'll be sure to tell me. No reason for my involvement.
Nil is built-in. You just have to write the code to instantiate it and the compiler gives you one. The coder does not need to create an implementation, it's there for free.
I would not have called it a "second implementation" myself, but that's your claim to defend, not mine.
map is also built-in. Where do you find the hash map in the given program?
By your logic some nebulous package in a random GitHub repository that happens to satisfy an interface is also another implementation, but you would have to be completely out to lunch to think that fits with the topic of discussion.
> map is also built-in. Where do you find the hash map in the given program?
If you told me a type can be optimized because the compiler knows it can only have non-hash-map uses, but I could put that type into a hash map with a single line, I think I would be right to be skeptical.
> By your logic some nebulous package in a GitHub repository that happens to satisfy an interface is also another implementation, but you would have to be completely out to lunch to think that fits with the topic of discussion.
I expect the compiler to have a list of implementations somewhere. I don't know if I can expect it to track if nil is ever used with an interface. I could believe the optimization exists with the right analysis setup but you called the idea of finding a citation a "waste of time" so that's not very convincing.
> but you called the idea of finding a citation a "waste of time" so that's not very convincing.
Not only a waste of time, but straight up illogical. If one wants to have a discussion with someone else, they can go to that someone else. There is no logical reason for me to be a pointless middleman even if time were infinite.
Now, as fun as that tangent was, where is the nil implementation and hash map found in the given program?
You can head over to godbolt.org and see for yourself that changing the value to nil doesn't change the implementation of `bar`, though it does cause `main` to gain a body rather than returning immediately.
The implementation is preexisting. Even if it was directly used, there would not be an implementation in the snippet. So it not being implemented in the snippet proves nothing.
And what do you mean "someone else"? You're the one that said the compiler "certainly knows" how to do that.
> So it not being implemented in the snippet proves nothing.
It doesn't prove anything, but is what we've been talking about. Indeed, there is nothing to prove. Never was. What is it with this weird obsession you have with being convinced by something? Nobody was ever trying to convince you of anything, nor would there be any reason to ever try to. That would be a pointless endeavour.
What was the point of your question, if not to prove something?
If you were trying to imply that the implementation doesn't exist, that implication was fatally flawed.
If you were asking to waste time, then it worked.
If you had another motive, what was it?
Are we having a 5d chess game? I thought it was a normal conversation.
> He who wrote the "citation".
Nobody? Nobody wrote a citation.
Do you mean the person that asked for a citation? If so, you're wrong. Finding evidence for your own claims would not make you a middleman. They didn't want to have a discussion with someone else, they wanted a discussion with you, and for that discussion to have evidence. Citing evidence is not passing the buck to someone else, it's an important part of making claims.
> What was the point of your question, if not to prove something?
My enjoyment. For what other reason would you spend your free time doing something?
> If you were trying to imply that the implementation doesn't exist, that implication was fatally flawed.
And if I weren't trying?
> If you were asking to waste time, then it worked.
I ask nothing, but if you feel you wasted your time, why? Why would you do such a thing?
> If you had another motive, what was it?
As before, my enjoyment. Same as you, I'm sure. What other possible reason could there be?
> Nobody? Nobody wrote a citation.
There was a request for me to refer another party who was willing to talk about the subject that was at hand – one that you made reference to ('you called the idea of finding a citation a "waste of time"'). Short memory?
> Finding evidence for your own claims would not make you a middleman.
There wasn't a request for evidence, there was a request for a citation. Those are very different things. A citation might provide some kind of pointer to help find evidence, which is what I suspect you mean, but, again, if that's what you seek then you're back to wanting to talk to someone else. If you want to talk to someone else, just go talk to them. There is no reason for me to serve as the middleman.
> it's an important part of making claims.
Nonsense. If my claim does not hold merit on its own, it doesn't merit further discovery. It's just not valuable at all. It can be left at that, or, if still enjoyable, can be talked about to the extent that remains enjoyable.
Perhaps you are under the mistaken impression that we are writing an academic research paper here? I can assure you that is not the case.
It's great that in your reply upthread you actually understood that it was a request for any kind of evidence, including evidence you just created on the spot, but now you pretend not to understand that.
What ever do you mean? There was no change in understanding. You spoke to seeking a proof in addition to a citation, the parent did not originally speak to the proof bit, only to a citation. Entirely different contexts.
In fact, you would have noticed, if you read it, that the "upstream" comment doesn't even touch on the citation at all. It is focused entirely on the proof aspect. While the parent wanted to talk about citations exclusively, at least at the onset. Very different things, very different topics.
> Many (sometimes even all) of these interfaces have just one implementation.
They are missing that mocks are the second implementation. (It took me years to see this point.) I would say that in most of my code at work, 95+% of my interfaces only have a single implementation for the production code, but any/all of them can have a second implementation when mocking for unit tests.
The point of using a mockable interface, even if there's only one real implementation, is to test the behavior of the caller in isolation without also testing the behavior of the callee.
This can be overdone of course, not everything needs this level of separation, but if it makes testing one or both sides easier, then it's usually worth it. It's especially useful for testing nontrivial interactions with other people's code, such as libraries for services that connect to real infrastructure.
Did you miss "just one implementation"? A mock is literally defined by being another implementation. If the 'mock' is your sole implementation, we don't call it a mock, that's just a plain old regular implementation.
I think my comment was clear on the distinction between real and mock implementations. If the code was testable with no need for mocks then certainly remove the interface and devirtualize the method calls.
Your comment was clear about mocks, but not why mocks are relevant to the topic at hand. The original comment was equally clear that it was in reference to where there is only one implementation. In fact, just to make sure you didn't overlook that bit amid the other words, the author extracted that segment out into a secondary comment about that and that alone.
Mocks, by definition, are always a supplemental implementation – in other words, where there is two or more implementations. What you failed to make clear is why you would bring up mocks at all. Where is the relevance in a discussion about single implementations the other commenter has observed? I wondered if you had missed (twice!) the "one implementation" part, but it seems you deny that, making this ordeal even stranger.
It is easy to generate mock implementation code (GoMock has mockgen, testify has mockery, etc.) The lack of a hand-rolled mock implementation doesn't mean that much. For example, many people do not like to put generated code under source control. So, just because you don't see a mock implementation right away doesn't mean one isn't meant to be there. Also, the original author of the function that consumed the apparently unnecessary interface type may have intended to test it, but not had the time to write the tests or generate the mocks. If we are going to be this pedantic, I did say "mockable" interface, implying the usefulness and possibility, but not necessarily existence, of a mock implementation.
Since we are examining code we can't see, we can only speak about it in the abstract. That means the discussion may be broader than just what one person contributes to it. If this offends you or the OP, that was not the intent, but in the spirit of constructive discussion, if you find my response so unhelpful, it is better to disregard it and move along than to repeat the same point over and over again.
Not in any reusable way. Take a look at mockgen and testify: all they do is provide a mechanism to push the implementation into being defined at runtime by user code. So if they, or something like them, are in use, the implementation is still necessarily there for all to see.
> Also, the original author of the function that consumed the apparently unnecessary interface type may have intended to test it
Okay, sure, but this is exactly what the commenter replied to was talking about initially. What is a repetition of what he said meant to convey?
> That means the discussion may be broader than just what one person contributes to it.
Hence why we're asking where the relevance is. There very well may be something broader to consider here, but what that is remains unclear. Mocking in and of itself is not in any way interesting. Especially when you could say all the very same things about stubs, spies, fakes, etc. yet nobody is talking about those, and for good reason.
> If this offends you
For what logical reason would an internet comment offend?
Can't the Go compiler statically prove that such single-implementation interfaces are indeed that, and devirtualize the callsites referring to them?
Either way, the problem seems to happen in most languages of today, if they (or their community) ever happen to accidentally encourage passing an opaque type abstraction over a concrete one.
I think it actually does that, but in local contexts, where this analysis is somewhat easy.
I also believe you don't actually have to prove it statically: PGO can collect enough data to e.g. add a check that a certain type is usually X, and follow a slow path otherwise
I understand that it does so when the exact type is observed - a direct call on a concrete type. But I was wondering if it performs whole-program-view optimization for interface calls. E.g. given a simple AOT-compiled C# program:
    using System.Runtime.CompilerServices;

    var bar = new Bar();
    var number = CallFoo(bar);
    Console.WriteLine(number);

    // Do not inline to prevent observing exact type
    [MethodImpl(MethodImplOptions.NoInlining)]
    static int CallFoo(Foo foo) {
        return foo.Number();
    }

    interface Foo {
        int Number();
    }

    class Bar : Foo {
        public int Number() => 42;
    }
On x86_64, 'CallFoo' compiles to:
    CMP byte ptr [RDI],DIL  ;; null-check foo[0]
    MOV EAX,0x2a            ;; set 42 to return value register
    RET
There is no interface call. In the above case, the linker reasons that throughout whole program only `Bar` implements `Foo` therefore all calls on `Foo` can be replaced with direct calls on `Bar`, which are then subject to other optimizations like inlining.
In fact, if we add and reference a second implementation of `Foo` - `Baz` which returns 8, `CallFoo` becomes
    ;; calculate the addr. of Bar's methodtable pointer
    LEA RAX,[DevirtExample_Bar::vtable]
    MOV ECX,0x8             ;; set ECX to 8
    MOV EDX,0x2a            ;; set EDX to 42
    ;; compare methodtable pointer of foo instance with Bar's
    CMP qword ptr [RDI],RAX
    ;; set return register EAX to value of EDX, containing 42
    MOV EAX,EDX
    ;; if comparison is false, set EAX to value of ECX containing 8 instead
    CMOVNZ EAX,ECX
    RET
Which is effectively 'return foo is Bar ? 42 : 8;'.
Despite my criticism of Go's capabilities, I am interested in how its implementation is evolving. I know it has the feature to manually gather a static PGO profile and then apply it to compilation which will insert guarded devirtualization fast-paths on interface calls, like what OpenJDK's HotSpot and .NET's JIT do automatically. But I was wondering whether it was doing any whole-program view or inter-procedural optimizations that can be very effective with "frozen world single static module" which both Go and .NET AOT compilations are.
EDIT: To answer my own question, I verified the same for Go. Given simple Go program:
    package main

    import (
        "fmt"
    )

    func main() {
        bar := &Bar{}
        num1 := callFoo(bar)
        fmt.Println(num1)
    }

    //go:noinline
    func callFoo(foo Foo) int {
        return foo.Number()
    }

    type Foo interface {
        Number() int
    }

    type Bar struct{}

    func (b *Bar) Number() int {
        return 42
    }
It appears that no devirtualization of this kind takes place. Writing about this, it makes for an interesting thought experiment: what would it take to introduce a CIL back-end for Go (including proper export of types, and what about structurally matched interfaces?) and AOT-compile it with .NET?
[0]: VMs like OpenJDK and .NET make hardware exception-based null-checks. That is, a SIGSEGV handler is registered and then pointers that need to throw NRE or NPE either do so via induced loads from memory like above or just by virtue of dereferencing a field out of an object reference. If a pointer is null, this causes SIGSEGV, where then a handler looks if the address of the invalid pointer is within first, say, 64KiB of address space. If it is, the VM logic kicks in that recovers the execution state and performs managed exception handling such as running `finally` blocks and resuming the execution from the corresponding `catch` handler.
I do programming interviews, and I find candidates struggling a lot with making an HTTP request and parsing the response JSON in Go, while in Python it's a breeze. What makes it particularly hard: is it the lack of generics, or of a dict data type?
I think it depends on what kind of data you're dealing with. If you know the shape of your data, it's pretty trivial to create a struct with json tags and serialize/deserialize into that struct. But if you're dealing with data of an unknown shape it can be tricky to work with that. In Python because of dynamic typing and dicts it's a little easier to deserialize arbitrary data.
Go's net/http is also slightly lower level. You have to concern yourself directly with some of the plumbing and complexity of making an http request and how to handle failures that can occur. Whereas in Python you can use the requests lib and fire off a request and a lot of that extra plumbing just happens for free and you don't have to deal with any of the extra complexity if you don't want to.
I find Go to be bad for interviewing in a lot of cases because you get bogged down with minutiae instead of working directly towards solving the exact problem presented in the interview. But that minutiae is also what makes Go nice to work with on real projects because you're often forced into writing safer code
It comes down to how the standard library makes you do things. I don't think there's any reason why a more stringly-typed way of handling JSON (or, indeed, a more high-level way of using HTTP) is outside of the realm of possibility for Go. It's just that the standard library authors saw fit not to pursue that avenue.
This variability is honestly one of the reasons why I dislike interviews that require me to synthesize solutions to toy problems in a very tightly constrained window of time, particularly if the recruiter makes me commit at the outset to one language over another as part of the overall interview process. It's frustrating for me, and, having been on the other side, it's noisy for the interviewer.
(In fact, my favorite interview loop of all time required that I use gdb to dig into a diabolical system that was crashing, along with some serious code spelunking. The rationale was that, if I'm good with a debugger and adept at reading the source that's in front of me, the final third of synthesizing code solutions to problems and conforming to institutional practice can be dealt with once I'm in the door.)
My favourite tech interview (so far) was similar: "here's the FOSS code base we're working on. This issue looks like about the size we can tackle in the time we have. Let's work on this together and solve it".
I got to show how I could grok a code base and work out where the problem was quickly, and work out a solution to the problem, and how I understood how to contribute a PR. Way better than random Leetcode bullshit, and actually useful: the issue was actually solved and the PR accepted.
I like your story about debugging during an interview. I can say from experience, you always have one teammate that can just debug any problem. I am always impressed to watch and learn new techniques from them.
This has also been my experience, yeah. My interviewers were very interested in watching me rifle through that core dump. (:
Ultimately, it feels to me like selecting for people who both can navigate existing code and interrogate a running system (or, minimally, one that had gone off the rails and left clues as to why) is the right way to go. It has interesting knock-on effects throughout the organization (in, like, say, product support and quality assurance) that are direly understated.
In our case we give some high-level description beforehand (which mentions working with REST apis) and allow candidates to use any language of their choice.
Also in our case the API has typing in form of generated documentation and example responses. I even saw one Go-candidate copying a response into some web tool to generate Go code to parse that form of json.
I can also say that people who chose Java usually have even more problems, they start by creating 3-4 classes just to follow Spring patterns.
I think other languages cause folks to understand JSON responses as a big bag of keys and values, which have many convenient ways of being represented in those languages. When you get to Go and you want to parse a JSON response, it has to be a well-defined thing that you understand ahead of time, but I also think you adapt when doing this more than once in Go.
If I had one complaint, it's the use of 'tags' to configure how JSON is handled on a struct, such that it basically becomes part of the struct's type. It can lead to a fair bit of duplication of structs whose only difference is the JSON handling, or otherwise a lot of boilerplate code with custom marshal/unmarshal methods. In some cases the advice is even to parse the JSON into a map, do the conversion, and then serialize it again!
The case I ran into is where one API returned camelCase json but we wanted snake_case instead. Had to basically create another struct type with different json tags, rather than having something like decoders and encoders that can configure the output.
I like Go and a lot of the decisions it makes, but it has its fair share of pain points because of early decisions made in its design that results in repetitive and overly imperative code, and while that does help create code that is clear and comprehensible (mostly), it can distract attention away from the intended behaviour of the code.
I wasn't talking about getting the keys wrong, but rather the insane verbosity of GoLang - `myVariable := retrievedObject.(map[string]interface{})["firstLevelKey"].(map[string]interface{})["secondLevelKey"].(string)` vs. `myVariable = retrievedObject["firstLevelKey"]["secondLevelKey"]`
"Oh, but that's just how it is in strongly-typed languages" - that may well be true, but we're comparing "JS or python" with GoLang here.
> I do programming interviews and I found candidates struggling a lot in doing http request and parsing response json in Go while in Python its a breeze, what makes it particularly hard, is it lack of generics or dict data type?
Have you considered that your interview process is actually the problem? Focus on the candidate’s projects, or their past work experience, rather than forcing them to jump through arbitrary leet code challenges.
Making an HTTP request and dealing with JSON data is a weed-out question at best. Not sure if you are interpreting the grandparent comment as actually having them write a JSON parser, but I don't think that's what they meant.
I either had that come up in an interview recently myself, OR it wasn't clear to me that I was allowed to use encoding/json to parse the JSON and then deal with that. I happened to bomb that part of the interview spectacularly because I haven't written a complex structure parser in years, given every language I've used for such tasks ships with proper and optimized libraries to do that.
Well these are not arbitrary, we work with a number of json apis on a weekly basis, supporting the ones we have and integrating new ones as well. This is a basic skill we are looking for, and I don't see it as a "leet code challenge".
Candidates might have a great deal of experience debugging assembly code or generating 3D models, but we just don't have tasks like that.
There is a dict-equivalent data type in Go for JSON (it's `map[string]any`), it's just rather counter-intuitive.
However, as a Go developer, I'm one of the people who think that JSON support in Go should be burnt down and rebuilt from scratch. It's limited, annoying, full of nasty surprises, hard to debug, and slow.
There was a detailed proposal to introduce encoding/json/v2 last year but I don't know how far it's progressed since then (which you probably already know about but mentioning it here for others):
I've done, literally, hundreds and hundreds of coding interviews and an http api was a part of lots of them. Exported vs non-exported fields and json tags are about the only issues I've seen candidates hit in Go and I would just help in those kinds of cases. Python is marginally easier for many.
The problem was java devs. Out of dozens upon dozens of java devs asked to hit a json api concurrently and combine results, nearly ZERO java devs, including former google employees, could do this. Json handling and error handling especially confounded them.
Not a programmer, so this is every programmer's chance to hammer me on correctness.
No, Go doesn't have a type named Dict, or Hash (my Perl is leaking), or whatever.
It does have a map type[1], where you define your keys as one type and your values as another, and that pretty closely approximates dicts in other languages, I think.
They're a very common and useful idea from Computer Science so you will find them in pretty much any modern language, there are a lot of variations on this idea, but the core idea recurs everywhere.
I have a quibble here. A hash table, the basic CS data structure, is not a two-dimensional data structure like a map, it is a one-dimensional data structure like a list. You can use a hash table to implement a Map/Dictionary, and essentially everyone does that. Sets are also often implemented using a hash table.
The basic operations of a hash table are adding a new item to the hash table, and checking if an item is present (potentially removing it as well). A hash table doesn't naturally have a `V get(key K)` function, it only naturally has a `bool isPresent(K item)` function.
This is all relevant because not all maps use hash tables (e.g. Java has TreeMap as well, which uses a red-black tree to store the keys). And there are uses of hash tables besides maps, such as a HashSet.
Edit: the term "table" in the name refers to the internal two-dimensional structure: it stores a hash, and for each hash, a (list of) key(s) corresponding to that hash. Storing a value alongside the key is a third "dimension".
I think I'd want to try to decode into map[string]interface{} (offhand), since string keys can be coerced to that in any event (they're strings in the stream, quoted or otherwise), and a key can hold any valid json scalar, array, or object (another json sub-string).
That of course works, but the problem is then using this. Take a simple JSON like `{"list": [{"field": 8}]}`. To retrieve that value of 8, your Go code will look sort of like this:
var v map[string]any
json.Unmarshal(myjson, &v)
lst := v["list"].([]any)
firstItem := lst[0].(map[string]any)
field := firstItem["field"].(float64)
And this is without any error checking (this code will panic if myjson isn't a json byte array, if the keys and types don't match, or if the list is empty). If you want to add error checking to avoid panics, it gets much longer [0].
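For illustration, here is roughly what the checked version looks like, using the comma-ok form of each type assertion (the helper name `getField` is mine):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// getField digs out v["list"][0]["field"] with every step checked,
// returning an error instead of panicking on a mismatch.
func getField(myjson []byte) (float64, error) {
	var v map[string]any
	if err := json.Unmarshal(myjson, &v); err != nil {
		return 0, fmt.Errorf("invalid json: %w", err)
	}
	lst, ok := v["list"].([]any)
	if !ok {
		return 0, fmt.Errorf("\"list\" is missing or not an array")
	}
	if len(lst) == 0 {
		return 0, fmt.Errorf("\"list\" is empty")
	}
	first, ok := lst[0].(map[string]any)
	if !ok {
		return 0, fmt.Errorf("first item is not an object")
	}
	field, ok := first["field"].(float64)
	if !ok {
		return 0, fmt.Errorf("\"field\" is missing or not a number")
	}
	return field, nil
}

func main() {
	f, err := getField([]byte(`{"list": [{"field": 8}]}`))
	fmt.Println(f, err) // 8 <nil>
}
```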
Here is the equivalent Python with full error checking:
try:
    v = json.loads(myjson)
    field = v["list"][0]["field"]
except Exception as e:
    print(f"Failed parsing json: {e}")
Really, the mismatch is at the JSON side; arbitrary JSON is the opposite of strongly typed. How a language lets you handle the (easily fallible) process of "JSON -> arbitrarily typed -> the actual type you wanted" is what matters.
> arbitrary JSON is the opposite of strongly typed
On the surface, I agree. In practice, many big enterprise systems use highly dynamic JSON payloads where new fields are added and changed all the time.
Go maps have a defined type (like map[string]string), so you can only put values of that type in them. A JSON object with (e.g) numbers in it will fail if you try and parse that into a map of strings.
As others have said, the issue with Go parsing JSON is that Go doesn't handle unstructured data at all well, and most other languages consider JSON to be unstructured data. Go expects the JSON to be strongly typed and rigidly defined, mirroring a struct in the Go code that it can use as a receiver for the values.
There are techniques for handling this, but they're not obvious and usually learned by painful experience. This is not all Go's fault - there are too many endpoints out there that return wildly variable JSON depending on context.
> The pain of dealing with JSON in Go is one of the primary reasons I stick mostly with nodejs for my api servers.
Unless you're dealing with JSON input that has missing fields, or unexpected fields, there is no pain. Go can natively turn a JSON payload into a struct as long as the payload's fields recursively match the struct's fields!
If, in any language, you're consuming or generating JSON that doesn't match a specific predetermined structure, you're yolo'ing it and all bets are off. Go makes this particular footgun hard to fire, while JS, Python, etc. make it the default.
In $other_language you'll parse the JSON fine, but then smack into problems when the field you're expecting to be there isn't, or is in the wrong format, or the wrong type, etc.
In Go, as always, this is up front and explicit. You hit that problem when you parse the JSON, not later when you try to use the resulting data.
Go's JSON decoder only cares if the fields that match have the expected JSON type (as in, array, object, number, string, boolean, or null). Anything else is ignored, and you'll just get bizarre data when you work with it later.
For example, this will parse just fine [0]:
type myvalue struct {
First int `json:"first"`
}
type myobj struct {
List []myvalue `json:"list"`
}
js := `{"list": [{"second": "cde"}]}`
var obj myobj
err := json.Unmarshal([]byte(js), &obj)
if err != nil {
return fmt.Errorf("Error unmarshalling: %+v", err)
}
fmt.Printf("The expected value was %+v", obj) //prints {List:[{First:0}]}
This is arguably worse than what you'd get in Python if you tried to access the key "first".
It totally makes sense from a Go perspective: You created a struct, tried (but failed) to populate it with some json data, and ended up with a value initialised to its zero-value. This is fine :)
One of the techniques for dealing with JSON in Go is to not try to parse the entire JSON in one go, but to parse it using smaller structs that only partially match the JSON. e.g. if you endpoint returns either an int or a string, depending on the result, a single struct won't match. But two structs, one with an int and one with a string - that will parse the value and then you can work out which one it was.
> It totally makes sense from a Go perspective: You created a struct, tried (but failed) to populate it with some json data, and ended up with a value initialised to its zero-value. This is fine :)
To me it looks like a footgun: if the parsing failed then an error should have been signalled. In this case, there is no error and you silently get the wrong value.
> It totally makes sense from a Go perspective: You created a struct, tried (but failed) to populate it with some json data, and ended up with a value initialised to its zero-value. This is fine :)
I do agree that there are good reasons on why this behaves the way it does, but I don't think the reason you cite is good. The implementation detail of generating a 0 value is not a good reason for why you'd implement JSON decoding like this.
Instead, the reason this is not a completely inane choice is that it is sometimes useful to simply not include keys that are meant to have a default value. This is a common practice in web APIs, to avoid excessive verbosity; and it is explicitly encoded in standards like OpenAPI (where you can specify whether a field of an object is required or not).
On the implementation side, I can then get away with always decoding to a single struct, I don't have to define specific structs for each field or combination of fields.
Ideally, this would have been an optional feature, where you could specify in the struct definition whether a field is required or not (e.g. something like `json:"fieldName;required"` or `json:"fieldName;optional"`). Parsing would fail if any required field was not present in the JSON. However, this would have been more work for the Go team, and they generally prefer to implement something that works and be done with it, rather than covering every important case.
Separately, ignoring extra fields in the JSON that don't match any fields in the struct is pretty useful for maintaining backwards compatibility. Adding extra fields should not generally break backwards compatibility.
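Short of such a `required` tag existing, one common approximation (my sketch, not a stdlib feature) is to decode into pointer fields and treat nil as "key was absent":

```go
package main

import (
	"encoding/json"
	"fmt"
)

type payload struct {
	Name  *string `json:"name"`  // treated as required: nil means the key was absent
	Count *int    `json:"count"` // optional
}

// parse decodes and then enforces presence of the required field by hand.
func parse(data []byte) (payload, error) {
	var p payload
	if err := json.Unmarshal(data, &p); err != nil {
		return p, err
	}
	if p.Name == nil {
		return p, fmt.Errorf("missing required field \"name\"")
	}
	return p, nil
}

func main() {
	_, err := parse([]byte(`{"count": 3}`))
	fmt.Println(err) // missing required field "name"
}
```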
> One of the techniques for dealing with JSON in Go is to not try to parse the entire JSON in one go, but to parse it using smaller structs that only partially match the JSON. e.g. if you endpoint returns either an int or a string, depending on the result, a single struct won't match. But two structs, one with an int and one with a string - that will parse the value and then you can work out which one it was.
I have no idea what you mean here. json.Unmarshal() is an all-or-nothing operation. Are you saying it's common practice to use json.Decoder instead?
> I have no idea what you mean here. json.Unmarshal() is an all-or-nothing operation. Are you saying it's common practice to use json.Decoder instead?
No, I mean you create a struct that deals with only a part of the JSON, and do multiple calls to Unmarshal. Each struct gets either populated or left at its zero-value depending on what the json looks like. It's useful for parsing json data that has a variable schema depending on what the result was.
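A sketch of that technique, for a hypothetical endpoint that returns either a value object or an error object:

```go
package main

import (
	"encoding/json"
	"fmt"
)

type okResult struct {
	Value *int `json:"value"`
}

type errResult struct {
	Error *string `json:"error"`
}

// classify tries the partial structs in turn; pointer fields stay nil
// when the corresponding key is absent, which tells us which shape matched.
func classify(data []byte) (string, error) {
	var ok okResult
	if err := json.Unmarshal(data, &ok); err != nil {
		return "", err
	}
	if ok.Value != nil {
		return fmt.Sprintf("value=%d", *ok.Value), nil
	}
	var e errResult
	if err := json.Unmarshal(data, &e); err != nil {
		return "", err
	}
	if e.Error != nil {
		return "error=" + *e.Error, nil
	}
	return "", fmt.Errorf("unrecognized shape")
}

func main() {
	s, _ := classify([]byte(`{"value": 42}`))
	fmt.Println(s) // value=42
}
```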
You can, but then it's a lot of work to actually traverse that map, especially if you want error handling. Here is how it looks for a pretty basic JSON string: https://go.dev/play/p/xkspENB80JZ. It's ~50 lines of code to access a single key in a three-layer-deep JSON.
It's more like 30 lines of code without the prints. However, one generally should write generic helper functions for this. The k8s apimachinery module has helper functions which are useful for this sort of stuff. Ex: `NestedFieldNoCopy` and its wrapper functions.
Sure, in production you'd definitely want something like that, but the context was an interview exercise, I don't think you should go coding generic wrappers in that context.
Go has ruined all other languages for me. I really fell in love with Gleam recently and was trying to implement a fun side project in it. The problem is I really don’t have enough time to learn the intricacies of it, with a startup, two kids, etc. As soon as I have to look at some syntax and really _think_ about what it’s doing every time I look at it, I lose interest. I kept trying and eventually implemented it in Go much faster. And while doing it in Go I kept wishing I could just use actors and whatever to make it simpler but, is it really simpler?
I haven't looked too deeply into it but I came across https://github.com/ergo-services/ergo not too long ago and thought it could be pretty interesting to try using OTP in Golang
Packaging a Go service in Docker and dumping it into k8s is probably the easier/better understood path but also deploying Go services onto an Erlang node just sounds more fun
Yep.. say you wanted to make a simple http service that needs to
* request a json.gz file from another HTTP service
* decompress it
* deserialize the json, transform it a bit
That's net/http (and maybe crypto/tls), compress/gzip, and encoding/json. I need to make zero decisions to get the thing off the ground. Are they the best libraries in the world for those things? No, but they'll work just fine for almost every use case.
Not saying you shouldn't use Go for that problem, in a particular context, but it does drive home how much of programming is glue ... there is combinatorial amounts of glue, which is why JSON, HTTP, compression, etc. end up being part of so many problems
It has not been my experience that Go is good for almost everything. On the contrary, it seems good at a couple very specific (though very common) niches: network services and cli utilities. But for most of what I do right now - data heavy work - it has not turned out to be very good (IMO). It really is just not better in any way to have to constantly write manual loops to do anything.
I think Go is pretty OK as a language for building data pipelines (I’m assuming you meant statistical ones, but the same argument applies to more data transform-y ones). What it is not good for is doing exploratory analysis (which is where Python shines).
Manual loops are pretty annoying when the focus is on figuring out which loops to write (exploratory phase). However, they are pretty nice once you’ve figured it out and need write a durable bit of code where your prioritise readability over conciseness (productionisation).
Going from Python to <any language> between the exploratory phase and the productionised pipeline is going to be a pain, and I don't think Go is particularly worse than others. At that point it's all about the classic software tradeoffs (performance vs velocity vs maintainability), and I do think Go is a good choice in many situations.
Well I totally disagree that writing manual loops is ever "pretty nice", but I agree that it's not as big an issue in final-version code as it is in exploration.
And I'm also in strong agreement that making any language transition between exploration and implementation is problematic. I do think go is worse than most, because I just think it has a mostly cultural allergy to manipulating collections of data as collections rather than element-by-element, but I agree that this is mostly lost in the noise of doing any re-write into a new language.
But this is why Python is best in this space. It simply has the best promotion path from experimentation to production. It is better than other "real" languages like go, because it thrives in the exploratory phase, and it is better than purpose-specific languages, like R, because it is also a great general-purpose language.
The other contender I see is Julia, which comes more from the experimentation-focused side, while trying to become a good general purpose language, but unfortunately I think it still needs to mature a lot on that side, and it's not clear that it has the community to push it far enough fast enough in that direction (IMO).
Even very performance-critical use cases work with python, because the iteration process can follow experimentation -> productionization -> performance analysis -> fixing low-hanging bottlenecks by offloading to existing native extensions -> writing custom native extensions for the real bottlenecks.
Yeah, but Go is also worse (in my experience) than most, if not all, of the other general-purpose languages I've used, for this niche.
For instance, Rust is actually pretty great in this space, despite being very ... not-python, and Java also has decent libraries. Then C++ (and Fortran!) are good for a very different reason than python, "on the metal" performance, which Go also isn't a great fit for.
Go is not good for data science and ML. It doesn't even have a proper, maintained dataframe library for data science. R and Python beat it hands down. Rust also beats it now thanks to Polars. And mobile? gomobile is not maintained. Fyne is amateur level on mobile.
AFAIK Go has no maintained 3d game engines.
Go has its well-established niche for middleware services and CLI tools. And that's about it. If your domain is outside this, it's best to choose another language.
Is the reason for the absence of a well-maintained dataframe library lack of demand? It looks like Gota and Dataframe-go are abandoned, while Gonum isn't particularly active. Did these wither on the vine because no one used them?
There is no good reason to pursue a project like this (or if there is, it would be very surprising to me), so it would reek suspiciously of "not invented here".
Because more likely than not, they're going to screw it up, and the choice to use go was not made for sound engineering reasons (go is not the only language this would apply to, but because of the lack of good FFI, its more likely to happen). The exception would be if said person had a solid background in numerical computing and was up-to-date with the state of the field, but that's pretty easy to find out.
I think it's lack of demand, yeah, but I think that's downstream of a real culture clash. Exploratory data analysis is just really not a good fit, culturally, for go. You don't want to be explicit about everything and check every single error case, etc.
But then it's natural to evolve production systems out of exploratory analyses, rather than re-writing everything from scratch, unless there is a very compelling reason to do that. The compelling reason is usually to get more speed, but that's not go's strong suit either.
Also anything that requires interop/FFI, syscalls, and lower-level stuff. It's extremely hard to record your screen in Go, for example. In Rust this is much more doable and there are even crates for it.
I can totally agree that Go is good enough for most projects, to the extent it's a go-to choice for many. However, it's not always the best tool - frequently it wins just because it allows to prototype quicker. YMMV, but for me, it's not a love relationship, it's a love-hate relationship.
> The language is extremely simple.
Simplicity is a double-edged sword. It didn't even have basic generics for a long while, and it's painful to remember how bad it was without them. Still doesn't have a lot of stuff anyone would expect from a modern language.
> If you know 20% of C you are already a Go expert.
Strong disagree. Knowing $language means knowing the patterns and caveats and (sorry for a cliche term, I'm not fond of it, but I don't have a better term so I hope you get the meaning) "best practices". Those are drastically different between C and Go. Especially when it comes to concurrency and parallelism.
> The toolchain and the core libraries alone can do 90% of what most people will ever need.
This is provably false. Virtually every serious project out there pulls a ton of dependencies, like database drivers, configuration toolkits, tracing and logging libraries and so on. Heck, I think a lot of shops have project templates for this reason - to standardize on the layout and pull the company-preferred set of libraries for common stuff. Core libraries are slowly getting there but so far they don't have very basic stuff like MapSet[T] or JSON codecs for nullable types like sql.NullString, so you gotta pull third-party stuff.
There is obviously truth in this, but I think it is more often the case than many people think it is that it is more efficient to learn the better tool "good enough" than to use the worse "I already know it" tool.
For instance, I've seen lots of people not want to learn sql and instead write complicated imperative implementations of relational primitives in the "I already know it" programming languages that are clearly "good enough" (because they are turing complete after all). But sql is usually a much better way to do this, and isn't hard to learn "good enough" to do most things.
Well, there's a balance. I'm not saying "never learn anything new, assembly's good enough and I know how to use it". Learn newer and better tools.
At the same time, don't learn every newer and better tool. There are too many.
You don't have enough time, even if you never do anything but learn.
SQL is enough better to be worth learning. The web framework of the week? Not so much.
And what's "worth learning" depends on what you're trying to do. For a home project, I'll use what I know, unless the goal of the project is "learn how to use X". For work, the question is whether it brings enough to the table to be worth the learning time. Sometimes it is; often it isn't.
Yeah what I'd say is: Seek out and be open to advice. There's a "don't know what you don't know" problem here, as always. But this is also part of the point of reading sites like HN! People here are saying "actually there are tools that are net positive to learn a bit because they are much better choices for particular niches". That is advice! It's fine and all to say "nah, I'm good", but in many cases that's doing yourself a disservice. I really do see people writing tedious for loops in go because it is what they're comfortable with, when they would be much better served writing sql and using a language with dataframes.
Most of the time people aren't just on a kick about selling some hot new thing (and I'm old enough that go was the hot new thing for me at one point!), they actually have relevant experience and are giving useful advice.
Is there much boilerplate aside from err checks and JSON tags? Even then, your IDE / copilot should automatically insert those along with imports and package names.
Err checks are a big one. I don't want to worry about error handling when prototyping. There are little things like having to prefix methods with the struct name and type, and bigger things like no default arguments and by name parameters, which makes setting up test fixtures cumbersome. Also, functions don't compose well because there can be more than one return value, so you end up just writing more intermediate values.
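The usual workaround for the missing default arguments is the functional options pattern, which rather illustrates the point: it works, but it's yet more boilerplate. A sketch:

```go
package main

import "fmt"

type server struct {
	host string
	port int
}

type option func(*server)

func withPort(p int) option {
	return func(s *server) { s.port = p }
}

// newServer fills in defaults, then applies whatever options were passed,
// standing in for default/named arguments.
func newServer(opts ...option) server {
	s := server{host: "localhost", port: 8080}
	for _, o := range opts {
		o(&s)
	}
	return s
}

func main() {
	fmt.Println(newServer())               // {localhost 8080}
	fmt.Println(newServer(withPort(9000))) // {localhost 9000}
}
```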
The requirement to define at least one function by itself is boilerplate. Also, an IDE doesn't fully solve the inability to compile and run a partially written program (in fact, the Go compiler is even more pedantic than rustc in some respects), which comes up a lot when working in the strongest use cases of dynamically typed languages.
My personal Python threshold is 10k lines. After that I tend to lose track of what I am doing and I start to miss static typing and, nowadays, an IDE to navigate it. Maybe future Python IDEs can AI-scan the codebase and compensate.
Type annotations plus modern IDEs have (in my experience) made this mostly a non-issue. It does require a bit more setup to get passable static analysis set up, which comes "for free" in languages with good compilers (like go!), but it's at least possible now to get to a (IMO) good point.
(FWIW, I moved away from python and ruby over a decade ago because of exactly this frustration, but I'm finding modern python to be pretty pleasant.)
Agreed on all points except #3 re the core libraries. Coming from the Java ecosystem, it was a bit of a shock to see how small the standard libraries are. For example, the minuscule collections library, among others.
Go has very little story when it comes to desktop or mobile GUI apps, which is too bad because it would be a very productive language for that kind of thing.
> The author lists multiple reasons for this, but for me the biggest one is the first one: Go is good for almost everything.
I'd nuance that claim.
I haven't found Go to be _particularly good_ at any specific task I've undertaken, but Go was _good enough_ for many of these tasks. Which makes Go a reasonable general programming language.
This is true, but there are a number of _good enough_ languages, and personally I don't think go is top-tier at this use case of being the go-to swiss army knife. I do think it is top-tier at being a good choice for tools in its niche. But not as "default language I reach for when I don't want to waste time thinking about it".
Agreed. For my personal sensibilities, Python or TypeScript are better default languages. Of course, I'm a bit obsessive about quality/safety, so I'm probably going to use Rust for most tasks :)
Turning off the GC isn't the blocker. Plenty of GC languages manage.
I'm more interested in how Go handles graphics APIs and binding a render thread or working with GUI threads considering how goroutines are done. Does one need to write non-idiomatic go, avoiding goroutines or is there some trick?
For example, GTK isn't thread safe so you can't just use that library without considering that.
No, go is not for native gui apps. I recently made some rough go bindings to minifb and while easy to do, it wasn't productive at all. Errors are hard to follow; are they go or minifb? Callbacks work until I have too many calls then the app might freeze altogether.
Go is great for image/draw and things like passing pixels: (C.uint)buffer.Pix
It comes down to Google wanting Go to use the web as the interface, which in practice means not doing dynamic linking (except on Windows).
To your question, go gui apps will have 'runtime.LockOSThread()' near the start so it's all green/light/go threads and only 1 OS thread.
Offhand, I think the general pattern is to bind related external libraries (E.G. a gui stack) into one agent thread and use channels to send it messages.
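That agent-thread pattern might be sketched like this (no real GUI library here, just a stand-in for non-thread-safe calls):

```go
package main

import (
	"fmt"
	"runtime"
)

// uiCall stands in for work that must run on the one OS thread a
// (hypothetical) non-thread-safe GUI library was initialized on.
type uiCall func()

// runUILoop pins its goroutine to an OS thread and serves calls sent
// over a channel, so other goroutines never touch the GUI directly.
func runUILoop(calls <-chan uiCall, done chan<- struct{}) {
	runtime.LockOSThread()
	defer runtime.UnlockOSThread()
	for call := range calls {
		call()
	}
	done <- struct{}{}
}

func main() {
	calls := make(chan uiCall)
	done := make(chan struct{})
	go runUILoop(calls, done)

	// Any goroutine can request UI work by sending a closure.
	result := make(chan string)
	calls <- func() { result <- "drawn" }
	fmt.Println(<-result) // drawn

	close(calls)
	<-done
}
```

(Real GUI stacks usually also require this loop to run on the main thread, which takes a bit more ceremony than shown here.)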
Probably not Ebiten for 3d games. To be fair at this point when you are doing somewhat specialist things Go starts to lose its edge. I remember trying to replicate some Numpy code in Go and that was a pain. However that's just because Python is too good at scientific things.
4. Stability and backwards-compatibility. I have never seen a Go version upgrade break anything. Meanwhile I have colleagues who do a thousand-yard-stare if you so much as mention upgrading the version of Python we're using.
Python upgrades are fine, actually. The python library ecosystem is a bit of a mess in general, which does affect this, but the tools have actually improved to make this more manageable lately.
Yeah, this. It's just good enough for 95% of use-cases, while being very productive.
Personally, one of the biggest selling points for me is that imo modelling concurrency and asynchronicity via fibers (goroutines), rather than async/await, is just a ton easier and faster to work with. Again, there are use-cases for the alternative (mainly performance, or if you like to express everything in your type-system) but it's just great for 95% of use-cases.
I always find it odd when people refer to it as being "very productive". I find all the boilerplate very un-productive to work with. Every time I pick go back up I'm shocked how many manual loops and conditionals I have to write.
IME, there are two main differences between go and java:
1) go is more "batteries included". Modules, linting, testing, and much more are all part of the standard cli. Also, the go stdlib has a ton of stuff; in java, there is almost always a well-built third party library, but that requires you to find and learn more things instead of just reaching for stdlib every time.
2) golang is "newer" and "more refined". this is pretty subjective, but golang seems to have fewer features and the features are more well-planned. It's a more "compact" or "ergonomic" language. Whereas java has built up a lot of different features and not all of them are great. You can always ignore the java features you don't like ofc, but this is still a bit of cognitive overhead and increased learning time.
There are surprisingly few languages in this category, especially if you limit consideration to statically typed. Go, C#, Swift? Nim, Crystal? F#? Kotlin?
Go is not my favorite language, but it really is exceptional in terms of its effective utilitarian design.
Eh, imo the go libraries still aren't up to par with out of the box java libraries. Like there's still no Set class, nor the equivalent of Map.keys. yeah they're easy to write but that's still not an included battery.
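To be fair, the "easy to write" version is short now that generics exist. A sketch of the conventional map-backed set:

```go
package main

import "fmt"

// Set is the conventional map-backed set; struct{} values take no space.
type Set[T comparable] map[T]struct{}

func NewSet[T comparable](items ...T) Set[T] {
	s := make(Set[T], len(items))
	for _, it := range items {
		s[it] = struct{}{}
	}
	return s
}

func (s Set[T]) Add(item T)           { s[item] = struct{}{} }
func (s Set[T]) Contains(item T) bool { _, ok := s[item]; return ok }

func main() {
	s := NewSet(1, 2, 3)
	s.Add(4)
	fmt.Println(s.Contains(4), s.Contains(5)) // true false
}
```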
Also, while the cli to add stuff is useful, there's still nothing to the level of maven or gradle for dependency management, and I usually find myself doing some fun stuff with `find -execdir` for module management.
Different strokes for different folks though. Java (really, kotlin) still makes a ton of sense for backend to me given how the jvm is architecture independent and you don't have to make tradeoffs/switch to graal if it's a long lived service.
Golang is nice, love it, but it's still got a bit to grow. I'm just happy they added modules and generics. I don't think it's a matter of being 'well thought out' as much as it's a simple language that cares a lot about simplicity and backwards compatibility and has iterated ever since. For me the killer app isn't goroutines so much as that you can produce a binary that doesn't depend on shared/dynamically linked libraries on all platforms, which is awesome for portability independent of environment. No more gcc vs clang vs msvc headaches, no more incompatible shared libraries, no more wrong version of jvm or a bad python modules path etc.
Oh, also java had like a 15 year headstart on golang, and it wasn't until java 8 that many of my biggest complaints were addressed. And yeah stuff like apache commons +log4j+mockito/junit are pretty much required dependencies, and maven/gradle aren't language native.
The best stdlib is probably Python's imo, but even that doesn't support a proper heap/priority queue implementation. For data structures specifically I think Java/Kotlin has the best standard library. All of this ignoring .NET or Apple platforms.
I generally agree with what you are saying. Although I wouldn’t hold out Maven as a paragon. I used to make my living untangling pom.xml files. I don’t think anybody is feeding their family helping people with go.mod messes — although I wish somebody would do that for kubernetes.
Yup! Lots of good stuff in the golang.org/x libraries, happy maps.Keys is graduating.
Also totally agreed on maven not being an ideal, or even lovable dependency management system, but trust me that there are absolutely people spending too much time wrangling go.mod files, especially in open source where go.mod files cross repo boundaries (and thus consolidated test automation).
I typically do a bunch of git reset --hard's as I iterate on my find -exec to run the golang cli commands for module upgrades (for whenever rennovate fails). I like how it's much easier to experiment with over maven (maybe I just didn't know maven well enough), but it's still definitely a headache.
There is also https://github.com/hashicorp/go-set which includes HashSet and TreeSet implementations for types with custom hashing functions, and orderable data respectively.
FWIW, I think versions of java from the last 5ish years feel both "newer" and "more refined" than go.
But I do think go and java are very comparable languages. To me, go's advantage over java is more about use-case; go is the clear choice for little cli tools, because it's pretty far off the beaten path to coax java to start up quickly enough for this. This is the sweet spot for go, IMO.
As someone who isn't super proficient in Java I usually find Java daunting to get started with full of buckets of "meta" issues like in my other comment.
What JVM do I use? Does it matter?
Does it matter what version I install, what if I have to install/manage multiple versions?
If I want to write a web service can I use vanilla Java stdlib or do I have to use Spring or some framework? If I use Spring, do I have to get into the weeds of dependency injection and other complexity to actually get my app off the ground?
With Go, none of those questions exist. I install the latest Go, create a main.go file, I use net/http and I'm off to the races.
Besides what neonsunset points out for .NET world, where alongside C#, we get the pleasure to enjoy F#, VB and C++/CLI, it is relatively easy for Java.
When one doesn't know, just like with Go, one picks up the reference implementation => OpenJDK.
For basic stuff, the standard library also has you covered in the jdk.httpserver module.
By the way, where is the Swing equivalent on Go's standard library?
I agree with Go being simpler, but modern Java and Spring Boot is also fine. Backend programmers are spoiled with riches nowadays with all the completely workable options we have.
I have extremely good productivity when using Go. Once your project exceeds 100 lines it is usually even better than python. And yes, I am aware that Rustians did a survey where Rust was crowned as the most efficient language but in my reality (which may differ from yours) Go is simply the best tool for most tasks.
Why is that? Well, for me there are 3 reasons:
1. The language is extremely simple. If you know 20% of C you are already a Go expert.
2. The core libraries are extremely well thought out.
3. Batteries are included. The toolchain and the core libraries alone can do 90% of what most people will ever need.
When people argue about the validity of these claims, I simply point them to this talk https://go.dev/talks/2012/concurrency.slide#42