People always under-estimate the cost of properly learning a language. At any given time I tend to have a "main go-to language". I typically spend 2-4 years getting to the point where I can say I "know" a language. Then I try to stick to it long enough for the investment to pay off. Usually 8-10 years.
A surprising number of people think this is a very long time. It isn't. This is typically the time it takes to understand enough of the language, the compiler, the runtime, the standard library, and idiomatic ways to do things. It is the time it takes to get to where you can start to meaningfully contribute to evolving how the language is used and meaningfully coach architects, programmers and system designers. It is also what you need to absorb novices into the organization and train them fast.
Here is the thing: many software engineers don't need to learn a language "properly."
When you start a new job, there will almost always be an existing code base, and you'll have to make contributions to it. Pattern recognition will get you a long way before you need to dive deep into the language internals.
You are mixing two things. Learning a language and learning a codebase. Those are not the same thing. Besides, you are also just talking about what happens during the first N months. Most companies I've worked for allow for time to learn the language(s) being used. Some of them will fire you if you can't show sufficient progress over time.
Not so much. I know quite a few languages very well, more than the average programmer, and more often than not a senior programmer who doesn't know the language as well as you overrides your recommendation, or some other red tape gets in the way. What has never happened is getting an increase in pay for knowing a language well.
So I don't share the sentiment that learning a language well is valuable. It's not, and with the newer AI tools coming out there will soon be no reason to learn any language in depth.
I think you have that exactly backwards. LLM-based tools do best on shallow-knowledge tasks. It’s depth where they struggle. Show me the syntax for constructing a lazy sequence in your language? LLM. Reason about its performance characteristics and interactions with other language features? Human, for the foreseeable future.
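To make the shallow/deep split concrete: the "syntax" half of that question really is a lookup task. In Python (used here purely as an illustrative language, since the comment doesn't name one), a lazy sequence is just a generator:

```python
from itertools import islice

def naturals():
    """Lazily yield 0, 1, 2, ... -- nothing is computed until requested."""
    n = 0
    while True:
        yield n
        n += 1

# Only the first five elements are ever produced, despite the infinite loop.
first_five = list(islice(naturals(), 5))
print(first_five)  # [0, 1, 2, 3, 4]
```

Recalling that syntax is exactly the kind of task an LLM handles well; reasoning about how the generator's laziness interacts with, say, mutation of shared state between yields is the deep part.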
> It is the time it takes to get to where you can start to meaningfully contribute to evolving how the language is used and meaningfully coach architects, programmers and system designers. It is also what you need to absorb novices into the organization and train them fast.
I think these criteria in particular are much more than a lot of people mean when they say "learn a language", which would explain why their estimates are lower than yours. You're talking about expertise, whereas plenty of people are speaking of basic competency. This seems more like (natural) language semantics about what it means to "know" and the definition of "properly learn" than about the accuracy of people's estimates.
2014: Python is a pain in the butt to manage packages and dependencies, how the hell am I gonna deploy this?
It's still a mess 10 years later unless you live deep in the ecosystem and know which third-party solutions du jour to use to manage that complexity. At least we have Docker now.
Also let's not forget the Python 3 migration fiasco that lasted ~2008-2018 and I still find myself porting libraries to this day.
I haven't used Python in a personal project in 10 years because of these painful paper cuts.
At least for Python dependency management, I'd hardly call it a mess these days and yes it was a horrible 10 years ago. The de-facto standard of venv and pip (using a requirements.txt file) is generally painless these days. To the extent that moving between MacOS, Linux and Windows is feasible for most (of my) Python work.
And the Python 2/3 pain stopped being an issue a few years back. I haven't been forced because of library requirement to use Python 2 in years.
Your criticisms WERE valid 5 or 10 years ago. In my experience, the ecosystem has improved greatly since then. I wouldn't use Python for large and shared developer code base but for one-man experiments and explorations, Python is pretty compelling these days in my opinion.
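For reference, the venv-plus-pip workflow mentioned above really is just a handful of commands (the paths and the `requests` package below are illustrative, not prescriptive):

```shell
# Create and activate an isolated environment for the project
python3 -m venv .venv
. .venv/bin/activate

# Install your dependencies (requests is just an illustrative example):
#   pip install requests

# Pin exactly what ended up installed, so others can reproduce the environment
pip freeze > requirements.txt

# On another machine: make a fresh venv the same way, then restore the pins
pip install -r requirements.txt
```

The `requirements.txt` file is the piece you commit; the `.venv` directory itself stays out of version control.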
> The de-facto standard of venv and pip (using a requirements.txt file) is generally painless these days.
Except Python themselves now recommend the third-party pipenv. [0] Which means that there are multiple authoritative voices saying different things, and that produces confusion.
Now being from 2017 ;)
I suspect no-one has touched that file properly since then because of old scars :(
Part of the issue is there's not a lot of differentiation between most of the "project management" tools (apart from their age, which in some cases explains certain design choices), and the coupling of most of these to their PEP 517 backend doesn't help. There are backends where there is differentiation, such as scikit-build, meson-python/mesonpy and enscons for wiring into pre-existing general-purpose build systems (cmake, meson and scons respectively), maturin for Rust, and flit-core as a bootstrapper, but given setuptools can be used without a setup.py now, I'm not clear on what some of the others provide.
That's why I deliberately described the pip/venv solution as a "de-facto" standard.
Is Python perfect in this regard? Absolutely not but the situation is FAR less painful than it was 10 years ago. The original comment claimed nothing has changed in 10 years. I used Python 10 years ago and disliked it intensely but I've recently (in the last 3 years) used it again and it is simply not true that nothing has improved in 10 years.
I still use python for stuff that never leaves my computer, but in most cases if I know I need to run it on Someone Else's Machine (or even a server of mine) I'll reach for Go instead.
I have a Taskfile for local compilation and running (macOS) and when I need to release I can just say `task publish` and it'll cross-compile a Linux binary and copy it to my server.
Eh, poetry, pipenv, conda, setup.py, GCC and all dev tools to possibly compile some C shite you didn't know you needed? It's a complete shit show still...
I started using Dagster a few days ago, which runs on Python.
As far as I can tell, I have to install the packages with pip. Wait - don't forget venv first. Oh, but I'm not supposed to commit the venv, so how is anybody to know what I did? Okay, I have to set up a requirements.txt... but now what did I install in the venv? Okay, there's a way to dump the venv... but it's giving me all the dependencies, not the root package I installed?!
Okay, there's something called pipenv that seems smart... Oh wait, no, this Dagster has a pyproject.toml, maybe I should use poetry? Wait, poetry doesn't work, it seems like I'm supposed to use setup.py... so what's the point of pyproject.toml?
Okay, I put the package into the setup.py. Great, now I have to figure out the correct version dependencies by hand?! I don't get it, Python is a nightmare.
Compare to .NET:
dotnet add package
On build:
dotnet run # automatically restores packages to a project level scope
I've always wanted to like Python, but I got out of undergrad in 2006 and pretty much the entire first part of my career was the ridiculous Python 3 migration nonsense. Every time I wanted or needed to use Python there was some dependency management nightmare to deal with. For like a decade or so.
I think I'm like you, I have some sort of Python ptsd from those years and don't really care to venture into it again. It sounds like it's gotten a lot better recently, but also after years of using Go and TypeScript I can't imagine working on a codebase without static types.
I'm not so sure this is a desirable state to be in long-term.
While Docker (and other container solutions) does provide solutions to some of the challenges around environment consistency, build artefact management, development environments with linked services, etc., it's also one of numerous de-facto tools that in some ways ends up putting more effort on the end-user to adopt. Less effort than installing all dependencies yourself? Absolutely. More effort (and for no real reward) than a single binary built natively for the target platform? Yep.
This feels like a sort of 'shift right' approach to releasing an application in terms of developer effort. As a developer you can just bundle everything into a container image and now the end user needs Docker and the install is going to leave a mess of image layers on their system. You don't need all the dependencies installed system-wide, that's true, but why can't they just be bundled with the application?
It's also not a great cross-platform tool, as Docker on anything except Linux just hides the requisite virtualisation. It's mostly transparent, true, but is still plagued by platform-specific issues such as write performance on mounted volumes on macOS being ~10x as slow (there's a fairly old thread on the Docker forums that's still open with this as a reported issue, and it doesn't look like it's going away).
Additionally, while disk storage _is_ (comparably) cheap nowadays, the buildup of image layers that contain endless copies of binary dependencies doesn't feel great. I don't want or need hundreds of copies of libc et al. all with slightly different versions sitting around. Even the 'slim' Debian images are min. 120MB, let alone the 1GB full ones. As Python is on topic for this thread the Alpine images are smaller, but then Alpine image builds for Python don't work with the standard PyPI wheels (I'm not actually sure if this is still true, need to check).
As languages like Python ship as source releases, some tooling is definitely needed to make releasing Python tools (just big collections of scripts) more palatable, but being able to build and copy a single binary for a Go application feels great after spending a lot of time practically requiring Docker to release Python tools.
And all of this is really just getting us back to where we were years ago, where it was actually feasible to release a native binary for a target platform. The difference is that Go's tooling around related areas such as dependency management and cross-platform compilation is more standardised, and as a result the developer experience for this is much nicer than with other comparable languages that also compile to native code.
Arguably, a lot of the development decisions around how to manage and release tools come down to how good the experience is for developers. Docker ends up being favourable for Python because there's no 'standard' Python way to bundle applications with their dependencies, and Docker basically solves this problem, even if in a messy way.
Does Docker have its place? Definitely. Do I hope it sticks around in favour of improved cross-platform developer tooling? Absolutely not.
As opposed to writing everything from scratch? Python has a huge standard library. Making it bigger to avoid dependencies would add more bloat to the distribution.
I have a nice way of testing a language - I download a couple of projects written by beginner / mid-level developers and not having commercial dependencies and see how much effort it takes to get them running.
Python is one of the worst performers (and Java is shockingly bad too, although a lot of that is down to the way the JVM/language have been mismanaged). At least unless you're comparing it to very low-level languages (VHDL, C).
> and Java is shockingly bad too, although a lot of that is down to the way the JVM/language have been mismanaged
This is interesting and not what I would expect. Generally to get a java app running from github you'd install the correct JDK and that should be about it. Many projects will use maven, which will know how to obtain dependencies. Could you describe a typical issue?
Dependencies requiring you edit some random XML file somewhere on your machine to get them working.
That and the whole JVM fragmentation and not being able to touch Oracle because it's Oracle and they'll use it as an excuse to sue you (at least that's the impression I had, the entire lawsuits thing is why I left Java back in the day).
You can say the same about other package managers, but it's not very often that I have to add another source to apt unless I'm running in a locked-down environment.
> Dependencies requiring you edit some random XML file somewhere on your machine to get them working.
No it doesn't. That's only if you're dealing with private maven repos and authentication etc. You don't need to do that at all for the use case in this thread, running a project from github.
Like the C and C++ compiler fragmentation, each with their own flavour of ISO support, UB and language extensions?
Or the Go compiler fragmentation, where only the reference compiler supports everything, tinygo a subset, and gccgo left to die in Go pre-generics era.
No but you can just use ./mvnw instead of installing it. I have never had to install gradle as a package on my computer in my entire lifetime. I always used ./gradlew.
My usual test for choosing a new programming language, is to see how easy or difficult it is to read documentation without having to open a web browser.
For languages/tools/frameworks/whatever that I'm already using, the main way I compare between them is looking at the ratio of "problems I was able to solve from their official documentation" vs "problems where I needed to use random blog posts (or worse, Stack Overflow) to find the solution".
Makes it a lot easier for me to choose tech for my hobby projects, where I sometimes don't even touch projects for several years because stuff just runs smoothly.
Oracle requiring commercial licensing for their JVM has made it so radioactive that my workplace firewalls "oracle.com" to prevent anyone from accidentally using it.
The docs are surprisingly unspecific for such a mature project, enough that you often need to read source code to confirm your suspicions one way or another. The style has been redone a few different times now, and most of the old stuff is there for backwards compatibility so it's unclear what The Correct Way is to use it (once you step outside the obvious classes and methods).
The source code is like an Escher drawing, I'm sure it makes sense to the maintainer but its logic is incredibly abstract and scattered. From my notes on it ~1 year ago: "I took a dive into Celery source to better understand their native way for inspecting workers... holy shit. Celery is an overgrown jungle of abstractions."
It's honestly kind of surprising to me that Celery is still the undisputed #1 message queue abstraction library. I guess it's a simple enough problem that companies using Python at scale roll their own. I wonder what Instagram uses instead of Celery.
I found this [1] to be a useful comparison between queues in Python, if anybody has found a similarly-terse and exhaustive one that's more up to date for 2024 I'd love to see it.
dramatiq seems like a true gem. I didn't know it existed. I don't see why this can't just be an immediate Celery replacement for your typical Django project that already uses Celery - with a bit of time spent on refactoring into dramatiq.
It's powerful, but good luck reading the source. It's a bit of a tangled, over-engineered mess at this point, and the reason there are a number of "newer" libraries trying to replace it. I can't recall specifics since it has been maybe a decade since I last used it, but it works until it doesn't, and then it's incredibly hard to debug.
It's been a decade and maybe it's gotten better, but that sentiment is why a number of people do not want to use Celery. I would also add that it was created in a different mindset of delayed jobs, and I believe there are better patterns these days without as much complexity.
I only dabbled with Python occasionally in college, and the lack of types was my biggest obstacle. Even reading the official documentation of libraries, I still couldn't figure out what I should expect from whatever function I was calling.
I recall the Selenium library I was using as a particularly gnarly offender -- pretty much every function I called, I had to debug or `print(type(x))` to figure out what I'd gotten back.
Yeah... I've become far more disenchanted with loosely typed languages. It's amazing to be able to quickly script things together and focus on the happy path in one-off scripts and bash replacements, but in a huge system, when reasoning about what's going on, I find it far more rewarding to know exactly what the type of a thing is and what it can do. Here with Python I rely on the editor a ton, and I don't really like that.
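For what it's worth, modern Python lets you opt into some of that. Type hints don't change runtime behaviour, but they let the editor and checkers like mypy tell you exactly what a thing is (a small sketch; the function and names here are invented for illustration):

```python
def word_lengths(words: list[str]) -> dict[str, int]:
    """The annotations tell both readers and tooling what goes in and out."""
    return {w: len(w) for w in words}

lengths = word_lengths(["lazy", "sequence"])
print(lengths)  # {'lazy': 4, 'sequence': 8}
```

A type checker would flag `word_lengths(42)` before the code ever runs, which is exactly the feedback a statically typed language gives for free.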
And then you have the ever-changing ecosystem, which can take years to sort out on its own and must be constantly studied.
E.g., if you arrive in Python right now: numpy is on version 2 and polars is stable, uv is all the rage, pydantic gained so much perf it's not even funny, TOML parsing is part of the stdlib, and textual is looking very good. Type hints are much better than 2 years ago, htmx is used a lot in the web department, fasthtml is having a moment in the light and pyscript got a platform. Is any of this information worth acting on? Should you ignore it all for the sake of productivity? Should you carefully evaluate each proposition?
Now if you have to do that with rust and erlang as well, it's a lot of time you could spend on coding instead.
Instead of constantly trying to keep up with everything in an ecosystem (which is simply impossible), just keep up with the stuff you are using to get your product working. Keep a side eye on the rest of the ecosystem to see if something neat is developing that you might use, but no need to study every hyped package coming out. In the end they are all tools to get something that works.
We still use flask and it's been great all the way, and it still holds up really well, there's no reason to move to any of the other packages just because that's the next new shiny thing.
I would think the best option is to always be suspicious of hype, unless you understand why something is being hyped? I'd also argue its worth understanding where your stack falls down, so you know when you need to look for alternative stacks.
The other bit is it's worth understanding how your stack interacts with related stacks (to use your examples, does uv and pydantic using rust vs. being pure python cause issues), but your OS is also changing, are your tools still going to work?
First, you don't have to follow hype, but listening to it is how you know where your ecosystem is heading.
Second, there is way more to an ecosystem that hype. Api, compat, deprecation, new features...
If you never created an web api and have to do it now, you still have to make a choice, hype or not. This choice will have very different consequences if you know only about flask and drf, if you heard of fastapi, or if you get told about django-ninja.
It's the same for every single task.
And even without new things, just learning the old ones is work. In Python you can encounter -O or PYTHONBREAKPOINT, be put on a project with one of 10 packaging systems, have to deal with .pth files, etc.
And it's the same for all languages. It's a lot of work.
Sure, languages have depth, and there is at some level some need to keep up with the Joneses (though to mix metaphors, different ecosystems have their red queens running at very different paces, and is it wise to be in or try to join the fastest paced race?), but I feel there is a distinction between following the latest hype blindly (and using the latest tool because of hype), and evaluating tools based on some criteria (and choosing the latest tool because it actually solves the problem on hand best). The latter is a skill that can be gained and taught in one ecosystem, and applied in another (and the former causes issues far too often).
I think this skill is very similar to what highly productive academics have, they evaluate and examine ideas, techniques and concepts from a wide variety of fields, and combine them to build new ones.
Python has _remarkably little_ ecosystem churn compared to a lot of languages. New packages come and go all the time but the older packages are very well maintained and nobody's forcing you to switch to the new ones.
I'm sorry to say, if you can't google the difference between setup.py and pyproject.toml then it's a skills issue, because in 14 years of using Python that has never been an impediment to getting anything done in a timely manner.
IMHO, that's not part of learning the language. You don't really even need half of the packages you mentioned unless you don't know the language and, instead, rely upon third-party crutches.
Depends on what your goal is. If your primary purpose for learning python is to be able to use numpy and pandas, then you should probably start learning them from day 1. Learning how to inefficiently implement parts of pandas in pure python on an ad-hoc basis is a waste of time for most people compared to just spending that time learning how to efficiently use pandas.
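To illustrate that trade-off with a toy example (pure stdlib on one side; the pandas equivalent is shown only as a comment, since it's the one-liner you'd actually reach for; the data is made up):

```python
from collections import defaultdict

# Per-group mean in pure Python -- the kind of thing you end up
# hand-rolling if you haven't learned pandas.
rows = [("a", 1.0), ("a", 3.0), ("b", 10.0)]

groups: dict[str, list[float]] = defaultdict(list)
for key, value in rows:
    groups[key].append(value)
means = {key: sum(vs) / len(vs) for key, vs in groups.items()}
print(means)  # {'a': 2.0, 'b': 10.0}

# With pandas this whole block collapses to roughly:
#   df.groupby("key")["value"].mean()
```

Neither version is hard, but the hand-rolled one has to be re-derived (and re-debugged) for every new aggregation, which is the time sink being described.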
Agreed. Having a passing knowledge of many languages is of course easy (even being moderately productive with them, esp. with Copilot now), but knowing the ins and outs of a language and its ecosystem is a very nontrivial time investment.
Thus, I'm mostly an anti-fan of the idea of polyglot environments.
Naturally one can't know all the languages of the world.
Even so, I have done consulting since 1999, with a small two-year pause in research, and have several languages that I keep switching between every couple of months, when project reassignment takes place.
The usual T-shape kind of consultant, know some stuff really well, and others good enough to deliver.
It’s not just the time to understand those nuances, but to keep up with the pace of changes. There’s an aspect of navigating those in some languages (JS…) that can be time consuming depending on where it’s deployed.
I think JS is a degenerate case. The reason I stay as far away from JS/TS as possible is that everything seems sloppy and therefore people try to "fix" things by replacing them every N months.
Indeed, when you see so many commenters advocating not bothering to learn a language deeply, you kind of get a taste for how prevalent the sloppiness mindset is. JS seems to attract a very high proportion of unskilled programmers.
I would struggle to name a single project I have been involved in where the JS/TS based front-end hasn't been reimplemented multiple times, from what I can tell, because people get lost.
Not only is it the nuances and pace of changes, but there's also a temporal element of seeing cause and effect over time from within an ecosystem. Is there a way to just parachute in and learn how to judiciously use classes in JavaScript the way one would if they were out there assigning Thing.prototype.method = function () {}? Can Udemy give you the same a-ha moment learning Promises today that someone got when they were drowning in callback pyramids in their Express v3 app?
I totally agree with you. Only after a minimum of two years of fully coding a viable production solution can one learn most of the pitfalls of a given language. I would add frameworks, which bring another level of complexity; sometimes a framework is like a separate language.
Whatever time frames you apply, the only way to really learn a language is to use it. Reading a book, blog articles, and watching videos on YouTube won't give you the hands on experience necessary to internalize how everything works.
Exactly. There usually is some stuff about the deeper, darker corners of a language in the type of book I pick.
I usually pick the authoritative book / advanced book.
Think "Programming PHP", "The joy of clojure", "The C Programming Language", etc.
I don't necessarily need to know all about performance optimization, the async model or non-html templating.
Reading a technical book cover to cover feels wasteful imo
The way to become good at something is to do it for a long time and to be willing to think about what you are doing and continually improve.
There's much more to learning a language properly than just learning the language spec itself. Especially if we're talking about languages such as Java, where you have to know quite a bit about how the runtime works in order to be able to make good design decisions. And Go isn't really that simple either when we start to get into the weeds of performant concurrent systems that also need to be robust. It requires quite a bit of work to be able to make full use of what Go offers. It may look simple, but concurrency never is.
You're essentially arguing in favor of not being good at what you do, or at best being mediocre at lots of things. And I'm sure there are lots of jobs where that is okay. The thing is: why would you want to spend a lifetime being mediocre, to work on mediocre projects with other people who are mediocre?
One of the most important qualities of Go is that it is actually possible to fully understand the language -- in the sense that you never see a snippet of Go code and think "wtf, you can do that?" or "hang on, why does that work?"
Getting to that point takes many years, to be sure. But the language is simple enough, and changes slowly enough, that it is not an unrealistic goal -- the way it would be in C++, or Rust, or just about any other mainstream language.
That's an odd assumption: that you either have to choose between rabbit holes or development speed. Usually, languages with lots of rabbit holes tend to require a lot more work.
Take C++ vs Go for instance. C++ is an endless series of deep, complex rabbit holes where you get enough rope to hang not only yourself but your entire team. Serious C++ projects tend to require a style guide for either the company (most companies don't have that kind of discipline) or at least some consensus of what subset of the language to use and how to structure problems for a particular project. Look at the guides Google use or the Joint Strike Fighter. They're massive. C++ is not a particularly fast language to develop quality code in unless you have a small team and/or you are supremely well aligned and disciplined. And even when you work on a disciplined team, it takes time.
Go has fewer rabbit holes. It gives you an easier path to concurrency. It has managed memory. It is sufficiently performant for a very wide range of projects. It has a very capable and practically oriented standard library that even includes HTTP, cryptography, support for some common formats and a decent networking abstraction that actually allows you design flexibility. And importantly: it comes with a shared code style which is tooling enforced, so you don't have to waste time on mediating between various opinionated hothead programmers on your project.
Because not all code is yours. In a team, the time spent on "rabbit holes" adds up, increasing the risk of bugs. A "slower" but predictable language can lead to more consistent, maintainable code, which is often more valuable in the long run, not least when running in production.
> In a team, the time spent on “rabbit holes” adds up, increasing the risk of bugs.
It adds up with the number of people on the team? Or the number of people on the team squared? Cubed? nlogn? Because a lot of those options would still favor the former language.
And if it's happening particularly often, that means the rate will fall off drastically as mastery is achieved.
I see a risk when code does something different from expectations. I don't see any risk when code has some kind of novel syntax that requires looking it up. Or when you learn about a feature from the documentation or a blog post.
Being predictable is quite valuable, but predictability is different from memorizing every feature.
This is similar to the experience that I had with Erlang. After having spent time with it, I was hardly ever surprised looking at any code, and my brain could deal with the actual problems at hand without having to figure out how to apply what I knew about the language.
I've been doing Go for a long time now and I still occasionally come across snippets of code that I don't understand or understand the purpose of. So I think "never" is too strong a word. I'd say "rarely" rather than "never".
* Top-level variables being shared between all files in a package
* For-loop sharing (fixed in 1.22 [0])
* ldflags (this is more of build behavior, but it took me a while to figure out how some variables were being set [1]. Note: Go does embed some data by default, but an app I was working on introduced more metadata)
* Go build directives prevent IDEs/linters from analyzing a file at all. E.g. if you edit a linux-only file on macOS, you get _zero_ help.
I dunno, there are a lot of other weird, confusing things that Go does. It is less than most other languages, though.
You're misunderstanding what I'm saying. How the language works is relatively simple. Understanding what's going on when problems arise isn't.
* "how is my Cobra command being registered when I'm not calling any code?" -> "oh, there's an init function that registers the command when imported"
* "why is this variable not working?" -> "oh, it's that for loop thing again"
* "why does the result of my unit tests depend on the order of test execution?" -> "oh, we have 150 Cobra commands in one package with variables shared between _all_ of those commands"
* "how is this variable being set?" -> "oh, we have to call go build with a monstrous ldflags argument"
* "why is CI not passing for this oneliner when this looked fine locally?" -> "oh, it's a linux-only file so the IDE isn't even showing me simple syntax errors"
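For readers coming at this thread from Python: the "for loop thing" above has a close analogue there. Closures capture the loop variable itself, not its value at that iteration, which is the same trap pre-1.22 Go had with its shared loop variable (a sketch):

```python
# Closures capture the variable, not its value at creation time.
funcs = [lambda: i for i in range(3)]
print([f() for f in funcs])  # [2, 2, 2] -- every closure sees the final i

# The usual fix: bind the current value via a default argument.
fixed = [lambda i=i: i for i in range(3)]
print([f() for f in fixed])  # [0, 1, 2]
```

Go 1.22 made each loop iteration get its own variable, which removes the Go version of this surprise entirely.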
I like to read language references (not just Go's; for example, I have also fully read the ECMAScript standard as a part of my research in the past), and yet I will never say I can remember everything from those references. "Documented" isn't the same as "easy to remember or recall".
I looked around for something like CppQuiz or its Rust analogue, where you're presented puzzling "Why would anybody do that?" fragments and asked what happens -- something that's often extremely difficult. If I had found one of those for Golang I could maybe see whether there are people who just ace the quiz first time.
But what I found were the sort of softball "Is Go Awesome, Brilliant, Cool or all of the above?" type questions, or maybe some easy syntax that you'd see in day 1. So I have no evidence whether in fact many, some or even a few Go programmers have this deep comprehensive understanding.
This is actually how I feel about Ruby. Not full on Rails because there is a lot more to learn there which comes with the time investment (which is also worth it in many cases).
But for most scripting work especially, just raw Ruby and maybe the Sequel library for a database just makes my productivity soar.
Do you have any open source examples (either your own or others) of simple Ruby + Sequel that are particularly elegant + productive that you could point to? Would love to see this.
I agree Ruby is the most elegant and enjoyable language to develop in.
Agreed, there's such a wide knowledge gap between "I read about this language design choice online" and "I know this from first principles". Truly, thoroughly understanding the languages and tools you work with every day unlocks a deeper level of understanding that you can't substitute with anything else. It's what people often describe when they first learn Lisp, but the same actually holds for every language, tool, etc. There is tremendous value in depth. However, since there is considerable opportunity cost in choosing what you put your time and effort into, do be mindful of the tools you decide to acquire this amount of knowledge in.
It takes forever to "properly learn" a language in my opinion, because it would mean total immersion into the ecosystem and the most prominent libraries built around that language as well, and they will always be moving targets even though the language itself might be static. Go is no exception, and in fact there are only a handful of languages that can ever be "properly learned" if we are strict about that. The trick is not to properly learn, but to know how to learn enough of anything you need and make the best use of it.
I completely disagree. I have been successful in building a new project for my company with myself and the entire team being completely new to Go when we started. Did we have growing pains, and some sub-par decisions? Absolutely. Do we write better Go code today, 4 years later? Absolutely. Did our use of Go, compared to the C# or Java we knew before, significantly slow the project down after the first 1-2 months? No, not really. The ways in which Go fit our chosen domain better completely made up for that (and all this despite my continued belief that Go as a language is inferior to Java and C#).
Now, there are other languages where I'm sure we would have had significant issues if we had tried to adopt them. In particular, switching from the GC languages we were used to over to a manually memory-managed language like C++ or Rust or C would surely have been a much bigger problem. The same would probably have been true if we had switched to a dynamic language.
But generally, deep language proficiency has less value in my experience than general programming proficiency, and much less than relevant domain knowledge. I would hire a complete Go novice that had programmed in Python in our specific domain over someone who has worked in Go for a decade but has only passing domain knowledge any day.
I also think it is a good idea to learn a few very different languages when you are starting out so that you have a sense of the tradeoffs and the different ways things can be done. When I was in college they taught C and Lisp (I am dating myself) which seemed very different to me at the time but also useful in different ways. Close to the hardware, vs supporting objects, lambda functions, etc. Later I learned a number of other languages and now I try hard to avoid switching languages because of the lost productivity.
Now if I were starting out I think I would try to focus on two different languages, but more modern ones. Maybe Rust and Python?
I don't think it is that long. Think of learning any spoken language. We expect that it takes AT LEAST ten years to be able to master it, and typically it takes much longer to meaningfully add to the compendium of literature. Usually we are talking about decades. It isn't just about the tool (speaking, writing) but being so fluent that you can successfully express yourself in a way that has impact.
If you can do it in ten years with a software stack, I'd say that's pretty impressive.
ChatGPT accelerates this timeline significantly. As a senior programmer you can largely skip the "junior" phase of the learning, almost like pair programming with a very proficient junior programmer.
I disagree. ChatGPT is like pair programming with someone who is on drugs and tends to lie a lot. Co-Pilot is even worse as it constantly bombards you with poor suggestions and interrupts your flow when you should be left alone to think about the problem you are solving.
What separates a senior developer from a junior is that the language is no longer a barrier, so you spend most of your time thinking about the problem you are solving and how you balance requirements, take into account human elements, and try to think ahead without prematurely spending time on things you aren't going to need.
Being shown a bunch of possible solutions to choose from doesn't help you develop the ability to think about problems and come up with better solutions.
Yes, ChatGPT can occasionally help you when you need a nudge to pick a direction or you come across something you don't know anything about. But it is not a substitute for knowing stuff and it can't actually teach you how to think.
I am fully convinced that programming-language generation from tools like ChatGPT is an intermediate step in their evolution, just as early compiler adopters wouldn't trust compilers that didn't show the assembly output.
Eventually we will reach another milestone in what it means to program computers, where tools like ChatGPT directly produce executables or perform actions, and only produce something akin to source code for cross-checking when asked.
That's true. I agree. Python is the only language I ever really learned to a deep level and properly, and it makes a huge difference! (And I'm still learning of course.)
Using esoteric language constructs is a bad idea IMO. It makes software harder to read/port/maintain. It's useful for some hacks and high-perf stuff, but not for typical apps.
Agreed. But knowing a bit about how a language (and runtime) actually works can make a huge difference when you design stuff.
For instance, I've seen lots of examples of people implementing LRU caches in Java and then scratching their heads when performance drops, because they have no idea how the GC works. Or complicating things immensely because they realize memory is tricky, and ending up with some complex pooling scheme that constantly breaks, rather than realizing that more naive code exploiting the low cost of ultra short-lived local objects is often faster and simpler.
If you know what you are doing it becomes easier to choose solutions that are performant and understandable.
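As a sketch of the "naive but GC-friendly" point above: in Java, `LinkedHashMap` with access ordering gives you an LRU cache in a few lines, and the evicted entries are exactly the kind of short-lived garbage a generational collector reclaims cheaply (the capacity and keys here are arbitrary illustration, not a tuned design):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class Main {
    // Minimal LRU cache: accessOrder = true keeps entries in
    // least-recently-used order; removeEldestEntry handles eviction.
    static class LruCache<K, V> extends LinkedHashMap<K, V> {
        private final int capacity;

        LruCache(int capacity) {
            super(16, 0.75f, true); // third argument enables access order
            this.capacity = capacity;
        }

        @Override
        protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
            // Evicted entries simply become short-lived garbage, which a
            // generational GC handles cheaply -- no pooling scheme needed.
            return size() > capacity;
        }
    }

    public static void main(String[] args) {
        LruCache<String, Integer> cache = new LruCache<>(2);
        cache.put("a", 1);
        cache.put("b", 2);
        cache.get("a");    // touch "a": "b" is now least recently used
        cache.put("c", 3); // capacity exceeded: "b" is evicted
        System.out.println(cache.keySet()); // [a, c]
    }
}
```

The whole thing leans on the runtime instead of fighting it, which is the point: knowing roughly what the GC is good at lets you pick the simple solution with confidence.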
By now I know three major languages: Go, Python, and Typescript. I know tradeoffs at-a-glance, I deeply understand the syntax and its various forms, the full array of tooling and what they do, and lastly (but maybe most importantly) I can estimate more accurately because I can architect in my head.
I can work in a myriad of other languages. I may be able to do some of the things above in Java or Rust but not nearly to the degree to which I can in languages I know. I think the difference is I'm probably not going to be leading a Java project or producing anything really innovative at a code level.
To me, more important than picking a hammer is knowing a variety of hammers that are good at certain tasks. I don't focus on Rust or Java as much because, frankly, I can build most things that are pertinent to the constraints of my work environment with the languages I already know, and most people I encounter also know them. The other considerable factor is that most things I work on can be horizontally scaled, so my need for Rust is very niche. With respect to Java, I have a lot of workarounds that are cleanly abstracted enough before I need its dynamism and the subsequent mental overhead.
IME, engineers (and "the discourse" when we argue on the Internet) often conflate "ease of use" and "ease of learning": when we say "ease of use", we usually mean "ease of learning."
If you're a hobbyist, yeah, you should heavily value "ease of learning." If you're a professional, the learning curve is worth it if the tool's everyday leverage is very high once you're ramped up. Too many developers don't put those 3-4 months in, partly due to the over-emphasis on "ease of learning" in our discussions/evaluations of things.
I was a part of a very large go project (https://news.ycombinator.com/item?id=11282948) and go-based company infra generally some years ago, and go is emblematic of the classic tool that is amazing at ease of learning, and quite mediocre at "ease of use" as time goes on.
I personally end up resenting those tools because I feel tricked or condescended to. (This is a little silly, but emotions are silly.)
I'd wager this is also why Rust is a perennial "most loved" winner in the surveys: it gets better as your relationship with it deepens, and it keeps its promises. Developers highly value integrity over trickery, and hard-earned but deep value feels like integrity and wins in the long run. (other examples: VIM, *nix, git)