Go is everything I don’t want in a language for my personal projects. It’s verbose: every simple task feels like a lot to write. It’s not expressive: what would be a one-liner in Python takes three for loops in Go. And I constantly need to find workarounds for the lack of proper enums, the lack of sum types, the lack of null safety, etc.
I’m sure these are the exact reasons why Go is good for enterprise software, but for personal projects I get no fun out of using it.
You want to write a very clever library using elegant abstraction and generics to have a cool innovative interface to solve your problem? Tough luck, you can't. So instead, you will just have to write a bog standard implementation with for-loops and good old functions which you will have to copy and tweak as needed where you really need something more complicated. It will work perfectly fine and end up being very readable for you and for the other people reading your code.
Go is basically anti-Haskell. It forces you to be less clever and that's great.
Take a look at a JSON parser or ORM written in Go. It's god awful the things they have to do to work around Go's type system. The average developer won't see these things, they're typically just writing glue code between Go's great stdlib (which also contains wild things if you take a look) and other 3rd party dependencies.
> The average developer won't see these things, they're typically just writing glue code between Go's great stdlib (which also contains wild things if you take a look) and other 3rd party dependencies.
This is what most of us are doing every day, and exactly what Go excels at.
Sum types allow for more robust modeling of the API boundary in libraries, so in fact having a better type system is desirable even when "just gluing libraries", because it can make incorrect program states physically unrepresentable.
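For what it's worth, here's roughly what that costs in Go: the usual stand-in for a missing sum type is a "sealed" interface with an unexported marker method (all names below are made up for illustration), and even then the compiler can't check the type switch for exhaustiveness:

```go
package main

import "fmt"

// Result is a closed set of outcomes, emulated with an unexported
// marker method so no other package can add a case.
type Result interface{ isResult() }

type Ok struct{ Value int }
type Err struct{ Msg string }

func (Ok) isResult()  {}
func (Err) isResult() {}

func describe(r Result) string {
	switch v := r.(type) {
	case Ok:
		return fmt.Sprintf("ok: %d", v.Value)
	case Err:
		return fmt.Sprintf("err: %s", v.Msg)
	default:
		// Unlike with a real sum type, the compiler cannot prove
		// this branch is unreachable.
		return "unreachable?"
	}
}

func main() {
	fmt.Println(describe(Ok{Value: 42}))   // ok: 42
	fmt.Println(describe(Err{Msg: "boom"})) // err: boom
}
```

It works, but the "incorrect states" only stay unrepresentable by convention, not by the type checker.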
Go doesn't excel at easy problems. Go is fine at pretty much everything. Do you think Kubernetes is an easy problem?
The thing is just that Go is very opinionated in its feature set. That's why you see people here writing about complex projects using "wild" or even "god awful" things, and lamenting the inability to properly map API boundaries in the language.
The truth is obviously that none of this is particularly wild. It's just things that the commenter considers inelegant but is perfectly able to follow, which is Go's strength and why it's so good. Want it or not, you will have to write code that someone else can follow.
Don't get me wrong, I'm not going to pretend that Go is in any way perfect or has the correct feature set, insofar as such a thing exists. I probably enjoyed writing OCaml more. But in practice, for a large-scale project where collaboration is important, using Go is an awesome experience.
Go is "opinionated" because it's designed to be simple rather than complete.
- Why are lower cased symbols not exported? Because it would be too complicated to add private / public keywords.
- Why isn't there exception handling? It's too complicated. It's simpler to just have everyone manually handle error flow.
- Why isn't there an optional type? It's too complicated. Just use a nil pointer and/or default values.
- Why aren't there sets or other rich datatypes in the stdlib? It's too complicated. Now go and write it yourself or download a microlibrary.
- Why are there no nil pointer protections? It's too complicated.
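To make a couple of those bullets concrete, here's a sketch of the usual stand-ins (illustrative only, not taken from any particular codebase):

```go
package main

import "fmt"

func main() {
	// No set type in the stdlib, so the idiom is a map with
	// empty struct values.
	seen := map[string]struct{}{}
	seen["a"] = struct{}{}
	_, ok := seen["a"] // the comma-ok form doubles as a membership test
	fmt.Println(ok)    // true

	// No Optional[T]: a nil pointer is the conventional "absent" value,
	// and nothing forces you to check it before dereferencing.
	var maybe *int
	if maybe == nil {
		fmt.Println("absent")
	}
}
```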
It's very easy to buy into the Golang PR and say "well, it's just opinionated" as opposed to calling it "simplistic" or "incomplete". It's an okay language; I've written a lot of complicated stuff in it over the last six or so years, including a distributed KV database. Eventually you WILL hit the limits of "opinionated" design.
Kubernetes is an easy problem made hard by doing a bunch of things that don't need to be done. I've used small bash scripts to deploy software for most of my freelance career, and the few times I've been forced to use a containerization tool, it has been far more difficult, for no discernible benefit.
> The thing is just that Go is very opinionated in its feature set. That's why you see people here writing about complex projects using "wild" or even "god awful" things, and lamenting the inability to properly map API boundaries in the language.
The problem isn't that Go is opinionated--I often wish Python was more opinionated. The problem is that Go started off with the wrong opinion on generics, and took two iterations (first casts, then `go generate`) to arrive at generics, resulting in a system that isn't opinionated on this issue, because all three ways still work for backward compatibility. And this is a) a very important issue to be opinionated on, and b) extremely foreseeable given the languages that came before.
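A sketch of two of those eras side by side (`go generate` doesn't demo well inline; function names here are invented):

```go
package main

import "fmt"

// Pre-generics style: interface{} plus a runtime cast that can panic
// if the caller gets the type wrong.
func firstAny(xs []interface{}) interface{} {
	return xs[0]
}

// Post-1.18 style: a type parameter, checked at compile time.
func first[T any](xs []T) T {
	return xs[0]
}

func main() {
	n := firstAny([]interface{}{1, 2}).(int) // cast required; wrong type = panic
	fmt.Println(n)                            // 1
	fmt.Println(first([]string{"a", "b"}))    // "a", no cast needed
}
```

Both styles compile in the same program today, which is exactly the "isn't opinionated on this issue" problem.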
> The truth is obviously that none of this is particularly wild. It's just things that the commenter considers inelegant but is perfectly able to follow, which is Go's strength and why it's so good. Want it or not, you will have to write code that someone else can follow.
The lack of abstractions means it's easy to follow on the line by line level, but that falls apart as context grows. Lines in more powerful languages are harder to follow because they do more. If you want to do the same amount of work in Go, you have to write more lines of code. You're going to be implementing the same abstractions ultimately, but because you're writing it custom you're going to do it a little differently every time. As a result, any few lines of code in Go is easy to understand, but the big picture is much harder to understand, because you're caught up in minutia which is slightly different every time instead of using a standard abstraction.
EDIT: There's no way the first person who downvoted this had time to read it.
I've been coding full time in it on a large team for four years now. If Go is this difficult to comprehend, then maybe it's not the simple language it claims to be.
That is the joke. Go conflates simple with primitive, so you end up building things as if computer science had just emerged and it's the 50s, with some syntax sugar and, to be fair, decent concurrency.
> It will work perfectly fine and end up being very readable for you and for the other people reading your code.
And make all future work both three times simpler and take three times as long. I suppose there are situations in which this is great, but I’m partial to requiring more.
I’ll admit that hasn’t worked out very well for me unless I’m working by myself, though.
I'll take verbose and simple code over terse trickery every day.
People who disagree have never had to wake up at 3 in the morning to fix a critical production issue in someone else's code. And that someone else really loved "elegant and terse" code.
It's not fun to wrap your brain around weird language trickery when you're half-awake and in a hurry to fix stuff before the customers wake up.
I personally think that writing so-called "terse, clever" (a misnomer) code is not an issue with the language, but with the user. Do we really want to have worse tools, just because some people are writing bad code? Clearly it's an issue with the software engineering process rather than the language itself. A good language should allow a skilled user to write code as clear as day, while properly modelling the problem domain and making incorrect states logically unrepresentable. We have a tool for that: the type system and the compiler.
> Do we really want to have worse tools, just because some people are writing bad code?
People tend to write bad code. It's a fact of life. Tools forcing people who write bad code to write better code can't be worse tools by definition. They are better tools.
The fundamental issue is that humans, contrary to machines, will never know for sure whether whatever they write is in fact correct code. One can think they are writing good and readable code, but that doesn't mean anything if the code is incorrect. And if you write lots of boilerplate, that means more possible bugs. That's also why no one sane writes assembly (or, increasingly these days, C) unless they have to. We generally prefer more complex languages which put a constraint on the amount of possible bugs.
You don’t write terse tricky code. That’s just silly. But you can write beautiful Typescript constructs that are completely impenetrable but will disallow all wrongdoing.
"Simple" is a cop-out word. Things can be simple along a lot of vectors. The vector you've chosen seems to be "does less for you", which taken ad absurdum would have you using assembly. Go does have elegant abstractions, and they aren't the simplest along this vector, nor would anyone want them to be. Goroutines, for example, are actually quite conceptually complicated in some ways.
I prefer "understandable"--it appears this is what you're trying to get when you say "simple", but I think you're drastically overselling the understandability of go code. Sure, you understand any given line easily (what you described as "readable"), but you're not usually trying to understand one line of code. Since go's sparse feature set provides few effective tools for chunking[1] your mental model of the program, complex functionality ends up being in too large of chunks to be understood easily. This problem gets worse as programs grow in size.
Another poster mentioned that they start running into problems and wishing they had explicit types with Python programs over 10K LOC, which approximately matches my experience. But comparing to Go, you've got to realize that 10K LOC of Python does a whole lot more than 10K LOC of Go; you'd have to write a lot more Go to achieve the same functionality because of all the boilerplate. That's not necessarily a downside because that boilerplate is giving you benefits, and I don't think entering your code into the computer is the limiting factor in development speed. But it does mean that a fair comparison of equally-complex programs is going to be a lot more lines of Go than Python, i.e. a fair comparison might be 10K LOC of Python vs 50K LOC of Go. I say "might be" because I don't know what the numbers would be exactly.
How many people have written or worked on projects in Go of that complexity? How many people have written or worked on programs of equivalent complexity in other languages to compare? I'm seeing people discuss how easy it is to start a project in Go, but nobody is talking about how easy it is to maintain 50K LOC of Go.
I've worked on projects of >200K LOC in Python, and the possibly-equivalent >500K LOC in C#. I think the C# was easier to work with, but that's largely because the 200K lines of Python made heavy use of monkey patching, and I've worked in smaller C# codebases that made heavy use of dependency injection to similar detriment. I'm honestly not sure which feels more maintainable to me, given a certain level of discipline to not use certain misfeatures.
I haven't written as much Go, and I wouldn't, because the features of C# which make it viable for projects of this complexity simply aren't present, and unlike Python, Go doesn't provide good alternatives. I suspect the reason we don't have many people talking about this is that not many projects have grown to this complexity, and when they do these problems will become apparent.
The real weak point is Go's type system--it's genuinely terrible, because the features that came standard in other modern statically-typed languages decades before Go was invented were bolted onto Go after the fact. For a few years, Gophers claimed they didn't need generics at all. As a result you've got conflicting idioms developed before `go generate` (using casts), after `go generate` but before generics (using go generate), and after generics (using generics). It's telling that you seemingly reject generics ("clever library using elegant abstraction and generics") even though Go has them now.
Attacking Haskell is sort of a straw man--so far I haven't seen anyone in this thread propose Haskell as a go alternative. I think we agree Haskell is far too dogmatic about its abstractions when it's impractical to be used as a general-purpose language (because I don't think it's intended as a general-purpose language).
Fwiw, I have worked on multiple projects of many hundreds of thousands of lines of code in multiple languages, several of which we rewrote from Python, Perl, PHP, and Ruby to Go, where I first maintained the existing code and then worked on a rewrite. I've also walked into existing large Go projects, and worked in Elixir and some limited JS.
In each and every case except one (some contractors did something really odd in trying to write go like java or ruby, can't recall, but the code was terribad), the go version was both more performant and easier to maintain. This is measured by counting bugs, development velocity, and mean time to remediation.
Meaningful comparisons between programming languages are difficult.
I've done rewrites of Python programs in Python, and the rewrites were more performant and easier to maintain.
My point is, is it the language? Or is it the fact that when you rewrite something, you understand which parts of the program are difficult, you know the gotchas, and you eliminate all the misfeatures you thought you needed the first time but didn't. In short, I suspect the benefit of learning from your mistakes is probably far more valuable than switching languages in either direction.
Hands down, the language made the projects easier to maintain. I have also rewritten from php to python, python to python, and perl to perl, many greenfield projects in each, etc.
Why did the language matter? Largely, static typing, concurrency ergonomics, fast compilation, and easy to ship/run single binaries. The fact it also saved 10-20x in server costs was a great bonus.
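As a rough illustration of the concurrency-ergonomics point: fan-out/fan-in is a few lines of stock Go. This is a hypothetical sketch, not code from any of the projects mentioned:

```go
package main

import (
	"fmt"
	"sync"
)

// squareSum fans work out to one goroutine per input and collects the
// results over a channel -- the kind of thing that takes a few lines
// in Go and often a library elsewhere.
func squareSum(nums []int) int {
	results := make(chan int, len(nums))
	var wg sync.WaitGroup
	for _, n := range nums {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			results <- n * n
		}(n)
	}
	wg.Wait()
	close(results)
	sum := 0
	for r := range results {
		sum += r
	}
	return sum
}

func main() {
	fmt.Println(squareSum([]int{1, 2, 3})) // 14
}
```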
Better design can absolutely improve a project and make it easier to maintain and more performant. And bad code can be written in any language. I am more and more convinced that dynamically typed code doesn't have a place in medium to large organizations where a codebase no longer fits in one person's head.
> I think we agree Haskell is far too dogmatic about its abstractions when it's impractical to be used as a general-purpose language (because I don't think it's intended as a general-purpose language).
Originally Haskell was designed to be a language providing
> faster communication of new ideas, a stable foundation for real applications development, and a vehicle through which others would be encouraged to use functional languages
That doesn't necessarily imply general purpose of course, but today pretty much any language suitable for "real applications development" would be considered as "general purpose", I think. In any case, regardless of what Haskell was originally intended to be, I would say it is a general purpose language (and in fact the best general purpose language).
> Attacking Haskell is sort of a straw man--so far I haven't seen anyone in this thread propose Haskell as a go alternative. I think we agree Haskell is far too dogmatic about its abstractions when it's impractical to be used as a general-purpose language (because I don't think it's intended as a general-purpose language).
I'll be that guy. We like our stuff in Haskell. Watching the rest of the industry move forward is like a reverse trip around the Monopoly board of computer science progress.
When I joined up, everything was Java, which couldn't make a binary. Then the crowd jumped to JS, where we ditched integers and true parallelism. Python freed us from speed. Go came along, promising to remove generics and exceptions, and to finally give us back our boilerplate.
And whenever features progress in the forward direction again, there are two issues - firstly, they sometimes come out kind of crap. Secondly, arguments for or against this crapness tend to take up all the oxygen that could have facilitated discussions around goodness.
Exceptions or return values? Nope, monadic error handling, any day of the week.
Terse dynamic code, or bloated static code? Nope, terse code with full type inference.
Terse nulls or nulls & boilerplate Optionals? Nope, just terse Optionals.
First-order generics or no? Higher-kinded parametric polymorphism.
Multiprogramming via locking & shared memory or message passing? Hey how about I choose between shared-memory transactions or transactional message-passing instead?
There is little stuff happening outside of Haskell to be envious of. Java took a swing at the null problem with Optionals a decade ago. My IDE warns me not to use them. It's taking another swing with "Null-Restricted Value Class Types". I know your eyes glaze over when people rant about Haskell, but for two seconds, just picture yourself happily doing your day-to-day coding without the existence of nulls, and pretend you read a blog post about exciting new methods for detecting them.
The issue is not language semantic. The issue is readability. Having the best feature set in the world is useless if the code produced by others is a pain to decipher.
Haskell disqualified itself for general programming when its community decided that point-free style was desirable despite being impossible to read, and that custom operators were a good thing. I personally hate every Haskell codebase I have ever seen despite being relatively fluent in the language (an issue OCaml never had, amusingly, mostly because its community used to be very pragmatic).
The person you are responding to didn't say that, I did.
The abstractions I'm pointing at are cases where mutation or side effects are the desired result of execution. Ultimately this always runs up against having to grok a lot of different monads and that's simply never going to be as easy to understand as calling "print" or "break". Haskell works really well if the problems you're solving don't have a ton of weird edge cases, but often reality doesn't work like that.
The other thing is laziness which makes it hard to reason about performance. Note that I didn't say it's hard to reason about execution order--I think they did a good job of handling that.
Don't get me wrong, Haskell's dogmatic commitment to functional purity has led to the discovery of some powerful abstractions that have trickled out into other languages. That's extremely valuable work.
> The person you are responding to didn't say that, I did.
Ah, thanks, I got confused.
> Haskell works really well if the problems you're solving don't have a ton of weird edge cases, but often reality doesn't work like that.
In my experience it's completely the opposite, actually. I can only really write code that correctly handles a ton of weird edge cases in Haskell. It seems that many people think that Haskell is supposedly a language for "making easy code elegant". The benefit of Haskell is not elegance or style (although it can be elegant). The benefit is that it makes gnarly problems tractable! My experience trying to handle a ton of weird edge cases in Python is that it's really difficult, firstly because you can't model many edge cases properly at all because it doesn't have sum types and secondly because it doesn't have type checking. (As I understand it they have added both of these features since I last used Python, but I suspect they're not as ergonomic as in Haskell.)
> this always runs up against having to grok a lot of different monads and that's simply never going to be as easy to understand as calling "print" or "break"
Actually, I would say not really. The largest number of monads you "have to" learn is one, that is, the monad of the effect system you choose. Naturally, not every Haskell codebase uses an effect system, and those codebases can therefore be more complex in that regard, but that's not a problem with Haskell per se, it's an emergent property of how people use Haskell, and therefore doesn't say anything at all about whether Haskell is usable as a general purpose language. For example, consider the following Python code.
    def main():
        for i in range(1, 101):
            if i > 4:
                break
            print(i)
You can write it in Bluefin[1], my Haskell effect system as follows.
    main = runEff $ \ioe ->
      withJump $ \break -> do
        for_ [1..100] $ \i -> do
          when (i > 4) $ do
            jumpTo break
          effIO ioe (print i)
Granted, that is noisier than the Python, despite being a direct translation. However, the noise is a roughly O(1) cost so in larger code samples it would be less noticeable. The benefit of Haskell here over Python is
1. You don't get weird semantics around mutating the loop variable, and it remaining in scope after loop exit
2. You can "break" through any number of nested loops, not just to the nearest enclosing loop (which is actually more useful when dealing with weird edge cases, not less)
3. You can see exactly what effects are possible in any part of the program (which again is actually more useful when dealing with weird edge cases, not less)
> Granted, that is noisier than the Python, despite being a direct translation.
My complaint isn't the noise. My complaint is: can you explain what withJump does? Like, not the intention of it, but what it actually does? This is a rhetorical question--I know what it does--but if you work through the exercise of explaining it as if to a beginner, I think you'll quickly see that this isn't trivial.
> 1. You don't get weird semantics around mutating the loop variable, and it remaining in scope after loop exit
Is this an upside? It's certainly unintuitive, but I can't think of a case this has ever caused a problem for me in real code.
> 2. You can "break" through any number of nested loops, not just to the nearest enclosing loop (which is actually more useful when dealing with weird edge cases, not less)
Again, is this actually a problem? Any high school kid learning Python can figure out how to set a flag to exit a loop. It's not elegant or pretty, but does it actually cause any complexity? Is it actually hard to understand?
And lots of languages now have labeled breaks.
Arguably the Lua solution (gotos) is the cleaner solution here, but that's not popular. :)
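Go is one of those languages; a minimal sketch (helper name invented for illustration):

```go
package main

import "fmt"

// findPair scans a small grid and uses a labeled break to exit both
// loops at once, with no flag variable.
func findPair(limit int) int {
	found := -1
outer:
	for i := 1; i <= 3; i++ {
		for j := 1; j <= 3; j++ {
			if i*j >= limit {
				found = i*10 + j
				break outer // exits both loops, not just the inner one
			}
		}
	}
	return found
}

func main() {
	fmt.Println(findPair(6)) // 23, i.e. i=2, j=3
}
```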
> 3. You can see exactly what effects are possible in any part of the program (which again is actually more useful when dealing with weird edge cases, not less)
What does this even mean? In concrete terms, why do you think I can't see what effects are possible in Python, and what problems does that cause?
In all three of the cases that you mention, I can see a sort of aesthetic beauty to the Haskell solution, which I appreciate. But my clients don't look at my code, they look at the results of running my code.
The fact that you need a blog post to tell people how to resolve an issue exemplifies my point that this is not resolved. Nobody needs to be told how to turn off laziness in Python, because it's not turned on.
The fact is, Haskell does the wrong thing by default here, and even if you write your code to evaluate eagerly, you're going to end up interfacing with libraries where someone didn't do that. Laziness still gets advertised up front as being one of the awesome things about Haskell, and while experienced Haskell developers are usually disillusioned with laziness, many Haskell developers well into the intermediate level still write lazy code because they were told early on that it's great, and haven't yet experienced enough pain with it to see the problems.
Haskell has a long history of a small base library with a lot of essential functionality being provided as third party libraries, including mtl and transformers (monad transformers), vector (an array library). Even time (a time library) and text (Unicode strings) are third party, by some definitions (they aren't part of the base library but they are shipped with the compiler).
Some people think that's fine, some people think it's annoying. I personally think it's great because it allows for a great deal of separate evolution.
Thanks for your detailed reply! As a reminder, my whole purpose in this thread is to try to understand your comment that Haskell is
> impractical to be used as a general-purpose language (because I don't think it's intended as a general-purpose language)
From my point of view Haskell is a general purpose language, and an excellent one (and in fact, the best one!). I'm not actually sure whether you're saying that
1. Haskell is a general purpose language, but it's impractical
2. Haskell is a general purpose language, but it's too impractical to be used as one (for some (large?) subset of programmers)
3. Haskell is not a general purpose language because it's too impractical
(I agree with 1, with the caveat I don't think it's significantly less practical than other general purpose languages, including Python. It's just impractical in different ways!)
That out of the way, I'll address your points.
> can you explain what withJump does? Like, not the intention of it, but what it actually does? This is a rhetorical question--I know what it does--but if you work through the exercise of explaining it as if to a beginner, I think you'll quickly see that this isn't trivial.
Yes, I can explain what it does! `jumpTo break` throws an exception which returns execution to `withJump`, and the program continues from there. Do you think explaining this to a beginner is more difficult than explaining that `break` exits the loop and the program continues from there?
> It's certainly unintuitive, but I can't think of a case this has ever caused a problem for me in real code.
> Again, is this actually a problem? Any high school kid learning Python can figure out how to set a flag to exit a loop. It's not elegant or pretty, but does it actually cause any complexity? Is it actually hard to understand?
Yes, I would say that setting flags to exit loops causes additional complexity and difficulty in understanding.
> And lots of languages now have labeled breaks.
I'm finding this hard to reconcile with your comment above. Why do they have labelled breaks if it's good enough to set flags to exit loops?
> Arguably the Lua solution (gotos) is the cleaner solution here, but that's not popular. :)
Sure, if you like, but remember that my purpose is not to argue that Haskell is the best general purpose language (even though I think it is) only that it is a general purpose language. It has at least the general purpose features of other general purpose languages. That seems good enough for me.
> What does this even mean? In concrete terms, why do you think I can't see what effects are possible in Python, and what problems does that cause?
    def foo(x):
        bar(x + 1)
Does foo print anything to the terminal, wipe the database or launch the missiles? I don't know. I can't see what possible effects bar has.
    foo1 :: e :> es => IOE e -> Int -> Eff es ()
    foo1 ioe x = do
      bar1 ioe x

    foo2 :: e :> es => IOE e -> Int -> Eff es ()
    foo2 ioe x = do
      bar2 (x + 1)
I know that foo2 does not print anything to the terminal, wipe the database or launch the missiles! It doesn't give bar access to any effect handles, so it can't. foo1 might though! It does pass an I/O effect handle to bar1, so in principle it might do anything!
But again, although I think this makes Haskell a better language, that's just my personal opinion. I don't expect anyone else to agree, necessarily. But if someone else says Haskell is not general purpose I would like them to explain how it can not be, even though it has all these useful features.
> In all three of the cases that you mention, I can see a sort of aesthetic beauty to the Haskell solution, which I appreciate. But my clients don't look at my code, they look at the results of running my code.
Me too, and the results they see are better than if I wrote code in another language, because Haskell is the language that allows me to most clearly see what results will be produced by my code.
> The fact that you need a blog post to tell people how to resolve an issue exemplifies my point that this is not resolved. Nobody needs to be told how to turn off laziness in Python, because it's not turned on.
Hmm, do you use that line of reasoning for everything? For example, if there were a blogpost about namedtuple in Python[1] would you say "the fact that you need a blog post to tell people how to use namedtuple exemplifies that it is not a solved problem"? I really can't understand why explaining how to do something exemplifies that that thing is not solved. To my mind it's the exact opposite!
Indeed in Python laziness is not turned on, so instead if you want to be lazy you need blog posts to tell people how to turn it on! For example [2].
> The fact is, Haskell does the wrong thing by default here
I agree. My personal take is that data should be by default strict and functions should be by default lazy. I think that would have the best ergonomic properties. But there is no such language. Does that mean that every language is not general purpose?
> even if you write your code to evaluate eagerly, you're going to end up interfacing with libraries where someone didn't do that.
Ah, but that's the beauty of the solution. It doesn't matter whether others wrote "lazy code". If you define your data types correctly then your data types are free of space leaks. It doesn't matter what anyone else writes. Of course, other libraries may use laziness internally in a bad way. I've fixed my fair share of such issues, such as [3]. But other libraries can always be written in a bad way. In Python a badly written library may cause an exception and bring down your worker thread when you weren't expecting it, for example. That's a weakness of Python, but it doesn't mean it's not a general purpose language!
> Laziness still gets advertised up front as being one of the awesome things about Haskell
Hmm, maybe. "Pure and functional" is the main thing that people emphasize as awesome. You yourself earlier brought up SPJ saying that the next Haskell will be strict, so we both know that Haskellers know that laziness is a double-edged sword. I'm trying to point out that even though one edge of the sword of laziness points back at you, it's not actually too hard to manage, and having to manage it doesn't make Haskell not a general purpose language.
> and while experienced Haskell developers are usually disillusioned with laziness, many Haskell developers well into the intermediate level still write lazy code because they were told early on that it's great, and haven't yet experienced enough pain with it to see the problems.
Hmm, maybe. I don't think people deliberately write lazy (or strict) code. They just write code. The code will typically happen to have a lot of laziness, because Haskell is lazy by default. I think that we agree that that laziness is not the best default, but we disagree about how difficult it is to work around that issue.
I would be interested to hear whether you have more specific ideas you can share about why Haskell is not a general purpose language, in light of my responses.
> everything was Java, which couldn't make a binary. Then the crowd jumped to JS, where we ditched integers and true parallelism. Python freed us from speed. Go came along, promising to remove generics and exceptions, and to finally give us back our boilerplate.
That paragraph made me chuckle, thanks.
> picture yourself happily doing your day-to-day coding without the existence of nulls
I've seen it, with Elm and Rust, and now I hate go's "zero values" too because it makes everything a bit more like PHP aka failing forward.
> Exceptions or return values? Nope, monadic error handling, any day of the week.
Ehhhh...
The thing is, there are a lot of cases where I can look at the code and I know the error won't happen because I'm not calling it that way. Sure, sometimes I get it wrong, but the fact is that not every application needs a level of reliability that's worth the effort of having to reason around error handling semi-explicitly to persuade the compiler that the error is handled, when it really doesn't need to be.
> Terse dynamic code, or bloated static code? Nope, terse code with full type inference.
I think you're significantly overselling this. Type inference is great, but you can't pretend that you don't have to implicitly work around types sometimes, resulting in some structures that would be terser in a dynamic language. Type inference is extremely valuable and I really don't want to use static types without it, but there are some tradeoffs between dynamic types and static types with type inference that you're not acknowledging. I think for a lot of problems Haskell wins here, but a lot of problems it doesn't.
One area I'm exploring with the interpreter I'm writing is strong, dynamic typing. The hypothesis is that the strictness of the types matters more than when they are checked (compile time or runtime). Python and Ruby I think both had this idea, but didn't take it far enough in my opinion, making compromises where they didn't need to.
> Terse nulls or nulls & boilerplate Optionals? Nope, just terse Optionals.
100% with you on this.
> First-order generics or no? Higher-kinded parametric polymorphism.
Ehhh, I feel like this is getting overly excited about something that simply isn't all that useful. I'm sure that there's some problem out there where higher-kinded types matter, or maybe I just lack vision, but I'm just not coming across any problems in my career where this feels like the solution.
I feel like there's a caveat I want to add to this but I'm not able to put my finger on it at the moment, so bear with me if I revise this statement a bit later. :)
> Multiprogramming via locking & shared memory or message passing? Hey how about I choose between shared-memory transactions or transactional message-passing instead?
Ehh, the languages I like are all using transactional message-passing anyway, and I'm pretty sure Haskell didn't invent this.
> There is little stuff happening outside of Haskell to be envious of. Java took a swing at the null problem with Optionals a decade ago. My IDE warns me not to use them. It's taking another swing with "Null-Restricted Value Class Types". I know your eyes glaze over when people rant about Haskell, but for two seconds, just picture yourself happily doing your day-to-day coding without the existence of nulls, and pretend you read a blog post about exciting new methods for detecting them.
I mean sure, I'm 100% with you on Option types, as I said. But, imagine being able to insert `print(a)` into your program to see what's in the `a` variable at a specific time. Hey, I know that's not pure, but it's still damn useful.
> imagine being able to insert `print(a)` into your program to see what's in the `a` variable at a specific time. Hey, I know that's not pure, but it's still damn useful.
In Haskell that’s Debug.Trace.traceShow. You can use it in pure code too.
> Go is basically anti-Haskell. It forces you to be less clever and that's great.
This is a bit of a tangent, but I think it's worth pointing out that the value in Haskell (at least as far as I see it) is not that it allows you to write "clever" code, but that it allows you to define precise interfaces. I suspect some people like Haskell because they can be "clever", and I suspect that to be able to define precise interfaces you have to allow some degree of cleverness (because you need things like higher order functions and higher kinded types), but cleverness is not, in itself, the value of Haskell.
I would rather Go had real enums, and I would _prefer_ if there were sum types.
I agree it's more verbose, but I don't find that that verbosity really bothers me most of the time. Is
res = [x for x in foo if "banned" in x]
really actually more readable than
var result []string
for _, x := range foo {
    if strings.Contains(x, "banned") {
        result = append(result, x)
    }
}
? I know it's 6 lines vs 1, but in practice I look at that and it's just as readable.
I think go's attitude here (for the most part) of "there's likely only one dumb, obvious way to do this" is a benefit, and it makes me think more about the higher level rather than what's the most go-esque way to do this.
I agree that list comprehensions aren't any easier to read. A proper streaming interface, on the other hand, lets you easily follow how the data is transformed.
As an aside, Go conflating lists and views irks me, in part due to what weird semantics it gives to append (e.g. if you have two disjunct slices and append an element to one slice, that might modify the other slice).
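A minimal demonstration of that aliasing footgun (the values are chosen arbitrarily): two disjoint slices share one backing array, and an append that fits within the first slice's spare capacity silently overwrites the second.

```go
package main

import "fmt"

func main() {
	base := []int{1, 2, 3}
	a := base[:2] // elements 1, 2; capacity still reaches the end of base
	b := base[2:] // element 3; a and b are disjoint views of base

	// The append fits within a's spare capacity, so instead of
	// allocating a new array it writes into base[2], which b also sees.
	a = append(a, 99)
	fmt.Println(b) // [99]: appending to a clobbered b
}
```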
The problem with this is that people again get way too clever with it. It's not just stream -> filter -> collection; there will be a bunch of groupbys in there etc. If you have to debug or extend the functionality, it's a nightmare to understand what all the intermediate representations are.
Inspecting intermediate representations is trivial by just collecting them into a variable?
More complicated scenarios are exactly what streaming APIs excel at, by treating each step as a single transformation of data. Lack of a proper group by function is one of my classic examples for how Go forces you into an imperative style that's harder to understand at a glance.
You could write your own syntax sugar functions with signatures like...
func copyArrStringShallow(x []string) []string { return x }

// Strings are immutable in Go; for []byte etc. you would also need to
// deep-copy each element.
func copyArrStringDeep(x []string) []string {
    ret := make([]string, len(x)) // len, not just cap: copy only fills existing elements
    copy(ret, x)
    return ret
}
- The list comprehension is ever so slightly more readable. (Small Positive)
- It is a bit faster to write the code for the Python variant. (Small Positive)
So this would be a small positive when using Python.
Furthermore, I believe there is this "small positive" trade-off on nearly every aspect of Python, when compared to Go. It makes me wonder why someone might prefer Go to Python in almost any context.
Some common critiques of Python might be:
- Performance in number crunching
- Performance in concurrency
- Development issues in managing a large code base
I believe the ecosystem is sufficiently developed that Numpy/Numba JIT can address nearly all number crunching performance, Uvicorn w/ workers addresses concurrency in a web serving context, ThreadPool/ProcessPool addresses concurrency elsewhere, and type hints are 90% of what you need for type safety. So where does the perceived legitimacy of these critiques come from? I don't know.
> The list comprehension is ever slightly more readable.
I disagree - it's terse to the point of being hard to parse, particularly when you get smart ones like:
[x for x in t if x not in s]
> It is a bit faster to write the code for the Python variant.
Code should be written to be read. Saving a few keystrokes vs. time spent figuring out the `not in in not with` dance gives the edge to Golang here. It's "high context".
> - Performance in number crunching
> - Performance in concurrency
And "performance in all other areas". See the thread last week about massive speedups in function calls in python where it was still 5-10x slower than go.
> So where does the perceived legitimacy of these critiques come from? I don't know.
It's pretty hard to discuss it when you've declared that performance isn't a problem and that type annotations solve the scalability of development problem.
I still believe Python comprehensions have confusing structure, and in real code I've seen it's 10x worse, with 5+ expressions packed into a single line. I much prefer Nim's style of list comprehensions:
let a = collect:
  for word in wordList:
    if word notin bannedWords:
      word

let b = collect(for x in list: x)
It's still very terse but, more importantly, it's the same syntax as a regular `for` loop. It has structure, whereas complex Python comprehensions read like a keyword soup.
I think Rust's terseness shows here; C#'s approach is the best in my view. Also, if you don't use `.ToArray()`, you still have an IEnumerable, which is very usable.
Though I'm not sure I'm a fan of it eagerly finishing with a List. If you chained several operations you could accidentally be wasting a load of allocations (with the solution being to start with foo.asSequence() instead)
Of course. This was just to illustrate the point; whether to snapshot/collect a sequence or not is another matter entirely. It just goes to show that idiomatic and fast* iterator expressions are something that modern general-purpose PLs ought to have.
* I know little about performance characteristics of Kotlin but assume it is subject to behaviors similar to Java as run by OpenJDK/GraalVM. Perhaps similar caveats as with F#?
Unfortunately Kotlin fails very, very hard on the "iteration speed" side of things. The compilation speed is unbelievably slow, and it suffers very much from the "JVM startup time" problem.
If it were an order of magnitude faster to compile I'd consider it.
IMO the Python version provides more information about the final state of res than the Go version at a glance: It's a list, len(res) <= len(foo), every element is an element of foo and they appear in the same order.
The Go version may contain some typo or other that simply sets result to the empty list.
I'd argue that having idioms like list comprehension allows you to skim code faster, because you can skip over them (ah! we're simply shrinking foo a bit) instead of having to make sure that the loop doesn't do anything but append.
This even goes both ways: do-notation in Haskell can make code harder to skim because you have to consider what monad you're currently in as it reassigns the meaning of "<-" (I say this as a big Haskell fan).
At the same time I've seen too much Go code that does err != nil style checking and misses a break or return statement afterwards :(
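That pitfall is easy to demonstrate. In this contrived sketch (all names are my own, not from any real codebase), the error is checked and logged but the function forgets to return, so execution falls through with the zero value:

```go
package main

import (
	"errors"
	"fmt"
)

func load(ok bool) (string, error) {
	if !ok {
		return "", errors.New("load failed")
	}
	return "data", nil
}

// buggy checks the error but forgets to return, so it keeps going
// with the zero value "". The compiler happily accepts this.
func buggy() string {
	s, err := load(false)
	if err != nil {
		fmt.Println("error:", err) // missing `return ""` here
	}
	return "got: " + s // runs even on failure
}

func main() {
	fmt.Println(buggy()) // prints "got: " after logging the error
}
```

Nothing in the type system distinguishes the buggy version from the correct one; it's exactly the class of mistake a sum-type-based `Result` would rule out.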
”Is <python code> really actually more readable than <go code>?”
I mean, I mostly work in Python, but, yes absolutely.
There’s something to be said for locality of behavior. If I can see 6x as many lines at once that’s worth a lot in my experience.
This becomes blatantly apparent in other languages where we need 10 files open just to understand the code path of some inheritance hierarchy. And it’s not nearly that extreme in go, but the principle is the same.
But there is something to be said for the one way to do it, and not overthinking it.
Filtering a container by a predicate is 50-year-old technology and a very common thing. It's unbelievable that a "modern" language has no shorter or clearer idiom than that convoluted, boilerplate-filled Ministry of Silly Walks blob. Python had filter() and then got list comprehensions from Haskell. PowerShell has Where-Object, taking from C# LINQ .Where(), which takes from SQL's WHERE. Prolog has include/3, which goes back to sublist(Test, Longer, Shorter), and LISP-Machine LISP had sublist in the 1970s[1]. APL has a single character / compressing an array from the bitmask result of a Boolean test in the original APL/360 in 1968, and it was described in the book in 1962[2].
Brian Kernighan gave a talk on the readability of code and not getting too clever[3] "Elements of Programming Style" where he talks about languages having one way to write one thing so that you can spot mistakes easily. I am aware he's one of the Go designers and I will mention that again in a moment. In Python the non-list-comprehension version might be:
result = []
for x in foo:
    if "banned" in x:
        result.append(x)
Which is still clearer than the Go, simply by having less going on. I usually argue that "readable" merely means "familiar" and Python is familiar to me and Go isn't. Your Go code makes me wonder:
- "var" in C# does type inference. You declare []string but don't declare types for _ x or the tuple _,x what's up with the partial type inference? What is "var" adding to the code over classic "int x" style variable declarations?
- What is "range" doing? From _ I guess it does enumeration and that's a throwaway for the index (if so it has an unclear name). If you have to enumerate and throw away the index into _ because there isn't another way to iterate then why does keyword "range" need to exist? Conversely if there are other ways to iterate and the keyword "range" is optional, why do it this way with a variable taking up visual space only to be thrown away? (The code equivalent of saying "for some reason" and drawing the audience's attention to ... nothing). And why is range a keyword instead of a function call, e.g. Python's "for (idx, elem) in enumerate(foo):" ?
- Why is there assignment using both := and = ?
- Why strings.Contains() with a module name and capital letter, but append() with no module and all lowercase? Is there an unwritten import for "strings"?
- The order of arguments to Contains() and append(); Prof. Kernighan calls this out at 10m10s in that talk. C# has named arguments[3] or the object.Method() style haystack.Contains(needle) is clear, but the Go code has neither. It would be Bad Prolog(tm) to make a predicate Contains(String1, StringA) because it's not clear which way round it works, but "string_a in string_1" is clear in Python because it reads like English. AFAIK a compiler/type system can't help here as both arguments are strings, so it's more important that the style helps a reader notice if the arguments are accidentally the wrong way around, and this doesn't. We could ask the same about the _, x as well.
- "result =" looks like it's overwriting the variable each time through the loop (which would be a common beginner mistake in other languages). If append is not modifying in place and instead returning a new array, is that a terrible performance hit like it is in C#? Python list comprehensions are explicitly making a completely new list, but if the Go code said "result2 = append(result, x)" is it valid to keep variable "result" from before the append, or invalid, or a subtle bug? The reader has to think of it, and know the answer, the Python code avoids that completely.
- And of course the forever curly brace / indent question - are the closing } really ending the indented block that they look like they are ending judging from the dedent? I hear Go has mandatory formatting which might make that a non-issue, but this specific Python line has no block indentation at all so it's less than a non-issue.
- The Python has five symbols =[""] to mentally parse, pair and deal with, compared to twenty []_,:={.(,""){=(,)}} in the Go.
Step back from those details to ask "what is the code trying to achieve, and is this code achieving the goal?" in the Go the core test ".Contains()" is hiding in the middle of the six lines. I'm not going to say you need to be able to read a language without learning it, but in the long-form Python what is there even to wonder about? B. Kernighan calls that out about 12:50 in the talk "you get the sense the person who is writing the code doesn't really understand the language properly". You say code is meant to be read more than written, and I claim it's more likely that a reader won't understand details, than will. Which means code with fewer details and which "just works" the way it looks like (Pythonic) is more readable. As BWK said in the talk "It's not that you can't understand [the Go], it's that you have to work at it, and you shouldn't have to work at it for a task this simple".
You're probably thinking of value constraints? Or, perhaps, exhaustive case analysis? Go certainly lacks those features.
And, indeed, they sound like nice features, but, to be fair, they're not well supported in any popular programming language. At best we get some half-assery. Which raises the question: are the popular languages popular because of their lacking type systems?
This topic has been beaten to death, and being pedantic about the definition of an enum to say "actually go has them" isn't helpful. There are dozens of articles from the last decade which explain the problems. Those problems don't exist in plenty of programming languages.
No language is perfect, but Go's particular set of bugbears is a good tradeoff.
> being pedantic about the definition of an enum to say "actually go has them" isn't helpful.
Incorrect. The term "real enums", when used to imply that enums are something other than the basic element of the same name, encompasses a number of distinct features that are completely independent of each other. In order to meaningfully talk about "real enums", we need to break it down into the individual parts.
If you're just trolling in bad faith, sure, leave it at "real enums" to prevent any discussion from taking place, but the rules of the site make it pretty clear that is not the intent of Hacker News.
> Those problems don't exist in plenty of programming languages.
Plenty, but none popular. Of the popular programming languages, Typescript seems to try the hardest, but even then just barely shows some semblance of supporting those features – still only providing support in some very narrow cases. The problems these features intend to solve are still very much present in general.
Words can have more than one meaning. As far as I know, no one voted you to be arbiter of all terms and their One True Correct™ meaning. It's pretty clear what the previous poster intended to say.
Quite clear, in fact, which is why we are able – in theory – to have a discussion about all the distinct features at play. If it weren't clear, we wouldn't be able to go there.
I say in theory, because as demonstrated by the sibling comment, there apparently isn't much programming expertise around here. At least it was fascinating to see how a mind can go haywire when there isn't a canned response to give.
> And yes popular languages do have real type safe enums.
Right, as we already established, but that is only incredibly narrow support within the features in question. While you can find safety within the scope of enums and enums alone, it blows up as soon as you want the same safety applied to anything else. No popular language comes close to completing these features, doing it half-assed at most. We went over this several times now. Typescript goes a little further than most popular languages, but even it doesn't go very far, leaving all kinds of gaps where the type system does not provide the aforementioned safety.
You clearly know how to read, by your own admission, so why are you flailing around like one of those wacky blow up men at the used car lot? Are you just not familiar with programming?
I am very familiar with programming. The only things you've said so far have been attempts to redefine well-understood terms and now ad hominem and incoherent rambling.
There is no redefinition. You know that because you literally repeated what I said in your comment, so clearly you are accepting of what was said.
Seemingly the almost verbatim repetition, yet presented in a combative form is because you don't have familiarity with programming, and thus can only respond with canned responses that you picked up elsewhere. If you do have familiarity, let's apply it.
So, do you actually want to discuss the topic at hand, or are you here just for a silly fake internet argument?
I agree with this take. I find Rust a more exciting language from a personal project perspective—and it's what I go with even when it doesn't make 100% sense.
Go is fine, though, and works well in a team environment. It's just clunky, but clunky in a productive way.
Damn, everything new really is old again. Everyone said the same thing about Java. Yet it still works and gets the job done. Go does as well. I'd rather poke my eyes out with a nail than use Python.
> I'd rather poke my eyes out with a nail than use Python.
Glad I'm not the only one. Every time I'm forced to use Python I cringe. What version disaster am I going to run into today? The 2->3 transition happened a long time ago at this point and I still run into lingering effects. Also, for short 'script'-like things Python doesn't feel any quicker or easier to write. Go is my Goto (hehe) for short scripts.
And while I'm ranting, I'll also say that modern/latest version Java is also really nice.
I don't program as a hobby or for fun anymore. It's my job and I do it, then I go home and do non-programming there. I use Go and Java and occasionally Python if I have to. I do agree with the OP though.
I feel like autocomplete has reduced the problem of verbosity for all languages. And if I am writing something that is going to have to be supported a lot I want something that is very explicit and easy to read. For me that is the "verbose" languages.
This is exactly what I was going to say... ever since I started using Copilot, verbosity bothers me a lot less. It isn't painful when I don't have to type/copy/etc most of it.
> lack of proper enums, lack of sum types, no null safety etc
I miss the same for work.
I wonder how long it will take for Go to be the new PHP. Zealotry is through the roof. I read here people saying "lack of proper enums, lack of sum types, no null safety" is a good thing. I read here people saying "you can do anything with Go". It's simply not true, and very uninformed.
I wish everyone a good experience with their lang of choice.
I'm in the same boat.
I believe it's a matter of taste mostly.
We prefer expressiveness and power; others prefer simplicity of semantics and verbose code.
If I activate type-checking in VS Code this will highlight an error, although the python interpreter will indeed try to run it without compile time error
As I said, for my side projects this is enough for me to model my problems properly without having to resort to multiple hacks
And I took Python as an example; I also enjoy using OCaml and Rust.
> I’m sure these are the exact reasons why Go is good for enterprise software, but for personal projects, I get no fun out of using it