Maybe I should read the book again making notes like the author did. I finished it understanding how novel this would have been when it was released and impressed with how much worldbuilding was fit into a relatively short book, but ultimately pretty disappointed by the plot itself. Without giving away too much, I feel that there were a few segments that fell pretty flat for me (to be specific, with minor spoilers: the new recruit around the middle of the book and the hacking subplot towards the end).
As much as I understand that checked exceptions (the general term for the 'throws' feature) can lead to a bit of a maintenance nightmare and don't necessarily scale well with deeper call stacks, I have found Python documentation extremely lacking in this area too.
I think what I've come to understand is that you can treat errors you don't know about as non-recoverable, because that's most likely what they are anyway if they aren't readily documented. Let them bubble up, basically, and handle them at the top level if necessary to prevent crashes in prod, make sure they're logged correctly, and so on.
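A minimal sketch of that top-level boundary in Python (the `run_safely` name and shape are my own invention, not any particular library's API):

```python
import logging

logger = logging.getLogger(__name__)

def run_safely(job):
    """Top-level boundary: let unknown errors bubble up to here,
    log them with a full traceback, and report failure instead of crashing."""
    try:
        return True, job()
    except Exception:
        logger.exception("Unhandled error in %s", getattr(job, "__name__", job))
        return False, None
```

Inner code doesn't catch anything it can't actually recover from; only this outermost layer decides whether to log and carry on or shut down cleanly.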
I can just about tolerate this for brand names, as there's an additional set of requirements there, but for everything else, it's just gatekeeping and unnecessary obfuscation. If the purpose of something changes, fork it off into something new or just rename it if possible. Apache TinkerPop Gremlin might be fun for the creators, but not for anyone trying to understand what it does.
The article is kind of hyperbolic, but I feel like it echoes how I am beginning to feel about Python.
Just last week, I spent a whole afternoon getting a particular repository of Python code running on my laptop, even with the help of virtualenv and pyenv. requirements.txt doesn't tell me which version of Python the code was developed with, and several libraries are only available on certain versions, so I have to play the guessing game first of all. Then, some of the modules don't have binaries available for M1, and I can't build them from source because I don't have x, y, and z tools installed. Then there's always some issue with PYTHONPATH. I ended up having to build a whole Ubuntu Docker container and develop inside that.
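For what it's worth, the modern fix for the interpreter-version gap is the `requires-python` field in `pyproject.toml` (the project name and pins below are made up for illustration):

```toml
[project]
name = "example-app"                # hypothetical project name
requires-python = ">=3.10,<3.12"    # recorded next to the dependencies
dependencies = ["requests>=2.28"]
```

Tools like pip will refuse to install the project on an unsupported interpreter, which at least removes the guessing game — but only if the project's authors bothered to fill it in.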
I love Python, it's not always like this, and it's certainly not only Python, but that experience is something I dread anyway coming into every new Python project. It feels like DLL hell all over again. I have had a much better experience personally with C# and Rust, but admittedly I had much more solo control over those projects.
you write: "requirements.txt doesn't tell me which version of Python the code was developed with".
Do you expect to have less pain with any other language in this situation?
I'm not saying other languages don't have the same problems, but this just hasn't been as much of an issue in my experience when working with other languages.
C#, for example: I had some compatibility issues between versions 2 and 3.1 of .NET Core, but at least the .csproj tells me which framework version it's supposed to be built against, and the LangVersion property indicates the C# language version.
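For reference, this is roughly what that looks like in a .csproj (the exact values here are illustrative):

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <!-- Which runtime/framework to build against -->
    <TargetFramework>netcoreapp3.1</TargetFramework>
    <!-- Which version of the C# language the code uses -->
    <LangVersion>8.0</LangVersion>
  </PropertyGroup>
</Project>
```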
I agree thoroughly with this - it's fantastic for building something quickly. If I need a quick script to do one-off data processing there's nothing better. My biggest problems with it IMO with respect to maintainable software are:
- The syntax required to build libraries feels like messing with the internals of the language. Defining various methods with reserved names wrapped in double underscores (the "dunder" methods like __init__) doesn't really feel like something you are supposed to do. The code becomes harder to read and messy IMO.
- Runtime type checking is great for iterating quickly, but bad for stable software.
- Encapsulation is only enforced through external tools, so if you aren't using those religiously you end up with problems with tightly coupled modules.
- Dependency management is not a good experience. Understanding the different rules about where Python pulls modules from is hard. venv makes things a bit better, but even then it's still a bit opaque. It means that I often spend more time getting external dependencies aligned properly than writing any Python when working on a Python codebase locally.
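To illustrate the dunder-method point above, here's a tiny (made-up) class where all the behavior is wired up through those reserved names:

```python
class Interval:
    """A toy numeric interval; every behavior is a 'dunder' method."""

    def __init__(self, lo, hi):    # constructor
        self.lo, self.hi = lo, hi

    def __contains__(self, x):     # enables `x in interval`
        return self.lo <= x <= self.hi

    def __len__(self):             # enables `len(interval)` (must return an int)
        return int(self.hi - self.lo)

    def __repr__(self):            # what the REPL and debuggers display
        return f"Interval({self.lo}, {self.hi})"
```

Whether this reads as elegant protocol design or as poking at the language's internals is exactly the disagreement in this thread.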
I have to admit - I use C# for that nowadays, at least as long as I don't have to follow the standard coding guidelines (which are great for software with a mid/long life-cycle). Once you get over the learning curve, and as long as you don't have to apply the full set of good engineering practices (i.e. you can write code to a standard comparable to Python/Go norms), it's way more productive (time-to-working-solution), and dependency management is great. The best bit: a huge amount of effort is going into reducing boilerplate, so it's getting better and better with each release.
If I'm working with less experienced developers or people for whom software engineering is a side issue (researchers/academics, security experts, data-scientists) then it's Python all the way.
I like defining AI as a catch-all for higher-order solutions. Rather than defining a specific process for taking in input and producing the desired output, you define a process that takes in input and produces a process that takes in input and produces the desired output. That ends up including a lot of boring applications, like SAT solvers and Bayesian statistics engines, as well as the more hip deep learning stuff.
ML is the specific case where the inputs to both the higher level and base level processes are similar, and the goal is for the application to identify patterns to apply to specific cases.
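The "process that produces a process" framing can be sketched in a few lines of Python. This is a deliberately toy example (the threshold-between-class-means rule is just for illustration, not a real training algorithm):

```python
def learn_threshold(examples):
    """Higher-order step: take labeled data, return a classifier function."""
    pos = [x for x, label in examples if label]
    neg = [x for x, label in examples if not label]
    # Toy 'training' rule: cut halfway between the class means
    cut = (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

    def classify(x):
        # The produced base-level process: input in, desired output out
        return x >= cut

    return classify
```

The outer function is the "AI" part under this definition; the returned `classify` is the ordinary process you actually run in production.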
> AI 'an algorithm pushed to production by a programmer who doesn’t understand it'
This is precisely how I read it too. The microsecond I hear a person say “AI” or “ML”, I’m thinking “oh ok so it’s untraceable, non-deterministic bullshit I’m gonna be held responsible for if I approve this.”
Probably still net profitable to continue using this marketing language. Certainly does me a favor too.
Without Swift Concurrency, you'd write something like this (pseudocode):
GetWeather() { results in
// Do something with the results
UpdateUI(results)
}
The part in the curly braces is a closure/completion handler. The GetWeather call is not blocking, so it runs on another thread and then it calls your completion code when it's done.
This looks fine in a small example, but it quickly gets gnarly when you need to pipe those results to something else, or you end up nested deep inside many completion handlers.
With the new concurrency model it's just this:
let results = await GetWeather()
UpdateUI(results)
Now let's say your UpdateUI call was also non-blocking: you'd just await that line as well, instead of nesting another completion handler.
They're saying that since it supports async/await the developer doesn't need to deal with callbacks/whatever. That line is accurate ("just a few lines of code") but also marketing.
I figured that was just a convenient way to avoid all the security problems that come with exposing a shell online whilst also limiting the inputs, although I'm sure there's other ways to do that. Might be nice to have the option I guess.
This looks cool and all, but why is it called Icecream? I know naming abstract stuff is hard but it feels like this lends itself to a more descriptive name. "ic()" tells me nothing about what the function does.
At its core, it has always called inspect.currentframe(). I suspect the first iteration was a wrapper around inspect.currentframe(), abbreviated as ic(), which was then backronym'd into ice cream.
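A toy version of that idea, using inspect.currentframe() to peek at the caller's frame (this is my sketch of the mechanism, not icecream's actual implementation):

```python
import inspect

def ic(value):
    """Print the source line that called us alongside the value, then pass it through."""
    frame = inspect.currentframe().f_back          # the caller's frame
    info = inspect.getframeinfo(frame)             # filename, line number, source context
    ctx = info.code_context[0].strip() if info.code_context else "?"
    print(f"{ctx} -> {value!r}")
    return value                                   # transparent: usable inline
```

Returning the value unchanged is what makes it handy for debugging: you can wrap any expression in `ic(...)` without altering the program's behavior.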
Interesting that he thought it was tedious - I thought this one was actually the closest to being like a real-world software engineering task! No mathematical tricks or lateral thinking, just a gnarly problem that requires planning and breaking down into more manageable pieces. I actually really enjoyed it for that.