People tend to want some separation between what's theirs and what's others'. Other programs/scripts quite often put things into ~/.local/bin, so it's not actually yours, it's theirs.
I personally use both, each for different purposes.
I snapshot my entire home directory every hour (using btrfs+snapper), but I exclude ~/.local/ from the snapshots. So I use ~/.local/bin/ for third-party binaries, since there's no reason to back those up; and ~/bin/ for scripts that I wrote myself, since I definitely want to back those up.
This is a pretty idiosyncratic use though, so I'd be surprised if many other people treated both directories this way.
I prefer ~/bin/ for my scripts, links to specific commands, etc.
~/.local/bin is tedious to type when I want to see the directory's contents, and - most importantly - I treat the whole of ~/.local/ as volatile and automatically managed by other services.
Personally I use ~/opt/*/bin, where ~/opt is a ‘one stop shop’ containing various things, including a symlink to ~/local and directories or symlinks for things that don't play well with others (e.g. cargo, go), plus an ~/opt/prefer/bin that goes at the start of PATH containing symlinks to resolve naming conflicts.
(Anything that modifies standard behaviour is not in PATH, but instead a shell function present only in interactive shells, so as not to break scripts.)
Unix lore: early Unix had two-letter names for the most common commands to make them easy to type on crappy terminals, but no one-letter command names, because those were reserved for personal use.
I thought /opt was for mixing in externally provided systems like Homebrew, /usr/local is for machine- or org-level customizations, and ~ is for user-level customizations.
/opt showed up as a place for packaged software, where each package (directory) has its own bin/, lib/, man/, and so on, to keep it self-contained rather than installing its files in the main hierarchy. ~/opt is just a per-user equivalent, analogous to /usr/local vs ~/.local.
The advantage of /opt is that multi-file software stays together. The disadvantage is that PATHs get long.
I haven't tried C3 myself, but I happened to interact a lot with Christoffer Lernö, Ginger Bill, and multiple Zig maintainers before. It was great to learn that C3, Odin, and Zig weren't competing with each other but instead learned from each other and discussed the various trade-offs they made when designing their languages. Overall it was a very pleasant experience to learn from them how and why they implemented building differently, or what itch they were scratching when choosing or refusing to implement certain features.
Competing with each other would be trying to one-up each other feature-wise, whereas what I witnessed was things like discussing trade-offs made in different languages and juggling ideas about whether some feature from language A would make sense in language B too.
"Head over heels" is actually a corruption of "heels over head".
It's one of those corruptions which flips the meaning (ironically, in this case!) on its head, or just becomes meaningless over time as it's reinterpreted (like "the exception that proves the rule" or "begs the question").
This is such a mediocre article. It provides plenty of valid reasons to consider avoiding UUIDs in databases; however, it doesn't say what should be used if one wants primary keys that aren't easy to predict. The XOR alternative is too primitive, and, well, I get why I should consider avoiding UUIDs, but then what should I use instead?
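For context, here's a minimal sketch of the XOR-style obfuscation being dismissed (the names, the secret, and the exact scheme are my assumptions, not from the article):

```python
SECRET = 0x5DEECE66D  # assumed fixed secret; anyone who learns it undoes the mapping

def obfuscate(seq_id: int) -> int:
    """Hide a sequential primary key by XORing it with a fixed secret."""
    return seq_id ^ SECRET

def deobfuscate(public_id: int) -> int:
    """XOR is its own inverse, so the same operation recovers the real key."""
    return public_id ^ SECRET

# Why it's primitive: consecutive IDs still differ only in their low bits,
# so a handful of observed public IDs exposes the sequence.
for i in (1, 2, 3):
    print(hex(obfuscate(i)))
```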
I switched to Helix a year ago and I’m very happy about it. I used to spend way too much of my free time configuring my editor and now that I can’t do that I use my free time to actually write some code!
Because people broadly understand that service dogs are a legally protected thing but not the exact details of the relevant laws, and this leads to interesting consequences when combined with increasing awareness of mental health in recent years. Probably in another decade the controversy will have settled somewhat.
Video editing is not as portable as coding; there ain't no git for it. It doesn't surprise me that they have to do that: I imagine it's simply speedier and comfier to connect to a desktop that already has the work in progress in its latest state instead of ensuring everything is synced across the different devices one uses. I also imagine that beefy MBPs with an M3 and upwards could handle 4K (or maybe 8K) editing of Severance, and they'd edit on local machines if that were actually more convenient than connecting to a remote desktop. It's a bit shameful to admit, but it's still something we have to deal with despite such crazy advances in technology.
In principle a good editing tool could use Git for the edit operations (mere kilobytes!) and use multi-resolution video that can be streamed and cached locally on demand.
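As a toy illustration of the size argument (the cut-list format below is invented for the example, not any real editor's):

```python
import json

# An edit decision list references source clips by ID and timecode;
# the heavy video files never enter version control, only this text does.
edits = [
    {"clip": "cam_a_take3", "in": "00:00:12:04", "out": "00:00:18:22"},
    {"clip": "cam_b_take1", "in": "00:01:03:10", "out": "00:01:09:00"},
]
payload = json.dumps(edits, indent=2).encode()
print(f"{len(payload)} bytes")  # a whole project's cuts stay in the kilobytes
```

Plain-text cut lists like that would diff cleanly in git; it's the proprietary binary project files that fight version control.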
When I got into projection design I tried using git to keep track of my VFX workspace. After typing `git init` I heard a sharp knock at my apartment door. I opened it to find an exhausted man shaking his head. He said one word, “No.” and then walked away.
Undeterred by this ominous warning, I proceeded to create the git repo anyway and my computer immediately exploded. I have since learned that this was actually the best possible outcome of this reckless action.
All jokes aside, it's too big of a pain in the ass to have that stuff version controlled. Those file formats weren't meant to be version controlled. If there's persistent Ctrl-Z, that's good enough, and that's the only thing non-technical people expect to have. Software should be empathetic, and the most empathetic way to have the project available everywhere is to either give people a remote machine they can connect to or somehow share the same editor state across all machines without any extra steps.
Will it hold the GIL if you use a thread executor with asyncio for a native C/FFI extension? If the GIL can be released there, that would also add to the benefits of asyncio.
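For what it's worth, extensions that release the GIL around their native work do run in parallel under asyncio's default thread pool. A minimal sketch, using hashlib.pbkdf2_hmac as a stand-in for such a C extension (CPython runs it with the GIL released):

```python
import asyncio
import hashlib
import time

def native_work(data: bytes) -> bytes:
    # C code that releases the GIL while it runs, standing in for
    # any native/FFI extension that does the same.
    return hashlib.pbkdf2_hmac("sha256", data, b"salt", 500_000)

async def main() -> None:
    loop = asyncio.get_running_loop()
    start = time.perf_counter()
    # run_in_executor(None, ...) uses the default ThreadPoolExecutor;
    # because the GIL is released inside native_work, the jobs overlap.
    results = await asyncio.gather(
        *(loop.run_in_executor(None, native_work, f"job{i}".encode())
          for i in range(4))
    )
    print(f"{len(results)} jobs in {time.perf_counter() - start:.2f}s")

asyncio.run(main())
```

On a multi-core machine the four jobs finish in roughly the time of one; an extension that holds the GIL would serialize them instead.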
How did Waymo do this? As far as I know, replacing drivers with an AI was the goal of the ever-money-losing Uber and Lyft; however, Waymo managed to get there much sooner while having less funding.
Google/Waymo actually started way earlier, and had fistfuls of cash from the very start.
Google hired a bunch of people who'd done well in the 2005 DARPA Grand Challenge [1] - including Sebastian Thrun and Chris Urmson, who led the winning team.
Thrun is also behind Google Street View which in some regards [2] looks a lot like a self-driving-car sensor suite. So Google was having LIDAR-equipped, high-precision-GPS-equipped cars drive every street in every prosperous country, starting back in 2007. Uber wasn't even founded until 2009.
Other Google hires had a similar background - such as Anthony Levandowski who competed in the DARPA Grand Challenge with an autonomous motorcycle. He later gained fame after being caught stealing a bunch of LIDAR schematics and similar trade secrets while leaving Google for Uber.
We also know from court documents that Google was throwing around mountains of cash, even when the self-driving-car division had no revenue. Waymo was set up as an "internal startup" giving employees "equity", so Levandowski left not just with internal documents but also with over $100 million.
That's a stark contrast to a lot of other players who'd need to show investors a lot more to get a lot less. This endless money was undoubtedly helpful in giving them the confidence to design for L5 autonomy from the start, no need to design a lesser system to get the money coming in early. And of course if you can pay $100 million for one guy, you're not going to baulk at the cost of a few $10k LIDARs so long as the people making them claim the price will fall to $200 for automotive quantities.
The 2005 Grand Challenge simplified the driving problem a great deal - no pedestrians or moving vehicles to deal with, safe and driveable route guaranteed to exist - but it did a lot to focus development efforts.
I thought L4 was Tesla Autopilot style, someone in the driving seat, hands on the wheel, ready to take over with half a second's notice, and taking liability for insurance purposes? And the geofenced "freeway driving only" stuff some German carmakers are coming out with?
Whereas Waymo's taxis don't have anyone in the driving seat at all?
> I thought L4 was Tesla Autopilot style, someone in the driving seat, hands on the wheel, ready to take over with half a second's notice, and taking liability for insurance purposes?

I was under the same impression, but the parent poster is right: the main difference between L4 and L5 is universal applicability. Waymo is L4 as it can drive without a driver, but only in certain areas.
I've also not seen much drama coming out of Waymo in the past. I could be wrong but I guess this is the benefit of keeping your head down and focussing on the problem instead of growing too large.
I think they did something very smart, but I don't know if it was accidental or intentional.
Waymo existed for like a decade on the basis of "we have a pile of money and our founders think it's cool". They moved slow and broke nothing. They made slow, incremental progress exploring the self driving space for years with plenty of funding and no expectation of a product launch.
So when a bunch of other companies came along, years and years later, and were all "we're going to have self driving cars next year!!!", and Waymo had their "oh shit, we'd better actually make this into a product or why do we even exist" moment, they were already in a really good place. They weren't rushed; they probably could have pivoted to a product a year or three earlier. They'd already solved the problems everyone else was just handwaving away.
If it was accidental, they sure got lucky in the amount of time they were given with relatively little pressure, and in the timing of that competitive pressure at the end.
Started 15-20 years ago, whereas Uber and Tesla hopped on the bandwagon late and tried to play catchup. I remember talking about the self-driving project at Google in 2008 or something like that. (IIRC they were trying to use Haskell for something.)
Google spent a lot of time and money attracting people who want to work on general hard problems rather than experts in specific domains. Maybe tasking a group of people like that is more likely to succeed in inventing something than building a team around the domain itself, especially in a space that has a lot of unknowns.
Waymo (January 2009) is older than Uber (March 2009) and has always been part of Google/Alphabet. So to answer your question they did it with enormous amounts of time, money, and talent.
* Time - Waymo started work on this in 2009 and is now seeing faster adoption in 2024. Uber started in 2015 and sold its effort off by 2020. Time has given Waymo the advantage of slowly ramping up and starting with toe-dips.
* Funding - Waymo has spent more money to get where they are now (probably 2-3x what Uber spent in total). Google also has deeper pockets than Uber, which means there is less pressure to quickly ramp up go-to-market or to reach profitability immediately.
* Culture - Waymo was much more cautious (likely because of funding structure) which is now paying dividends in terms of regulatory approval and consumer trust ('sometimes you have to go slow to go fast').
Not sure about the "less funds" bit, but Google started funding self driving cars in 2009. The continuous investment in tech is just now starting to make the dream feasible - as far as I know self driving cars were never more than a side project for Uber and Lyft.
With Cruise gone now there's basically just Waymo, Zoox, and Tesla (bit controversial) that are the names thrown around when talking about self driving market share, and out of those only Waymo has a functional service.
I always wondered the same thing about any new breakthrough. I listened to the Eric Schmidt interview on Diary of a CEO regarding this and definitely agree with his reasoning. He basically says the reason new companies get ahead, while we wonder why the existing companies who should've been doing this stuff are so far behind, is that the old companies were also doing everything else. He was especially talking in terms of AI and how Google seems behind while OpenAI came out so quickly.
Do you think the future lies in depth analysis from 2D pictures rather than in lidar scanning? I know lidars are quite pricey, but I haven't heard of regular-camera breakthroughs in the domain.
If you can't do reliable depth analysis from multi-angled pictures then you are not seeing the real world. That's somewhat of a problem for a task that is 100% visual and designed for human drivers.
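For anyone unfamiliar with what "depth analysis from multi-angled pictures" involves, classic stereo matching looks roughly like this (a minimal OpenCV sketch on a synthetic image pair; the camera parameters are made-up assumptions):

```python
import cv2
import numpy as np

# Synthetic rectified stereo pair: a textured scene shifted 12 px
# between the two views, standing in for two real camera images.
rng = np.random.default_rng(0)
left = (rng.random((240, 320)) * 255).astype(np.uint8)
right = np.roll(left, -12, axis=1)

# Block matching: find how far each patch moved between the views;
# larger disparity means the object is closer.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point, /16

# Depth is inversely proportional to disparity: depth = f * baseline / d
focal_px, baseline_m = 700.0, 0.54  # made-up camera parameters
valid = disparity > 0
print(f"median disparity: {np.median(disparity[valid]):.1f} px")
print(f"median depth: {np.median(focal_px * baseline_m / disparity[valid]):.1f} m")
```

The hard part in the real world is everything this sketch assumes away: calibration, rectification, textureless surfaces, rain, glare.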
Waymo's safety data, gathered on carefully curated paths, with human assistance, and in the most car-centric cities in the world, is worthless. When these things hit the real world, with black ice, open manhole covers, potholes, etc., don't expect lidar to save you from disaster. Maybe the average safety case can be made, but that won't be acceptable if they regularly kill people by confusing cows with flying plastic bags.