Not with Linux, typically. If you don't have drivers included in the kernel, it requires a lot of effort to get things working. I've done it many times, so now I will generally only buy laptops that have decent Linux support. [1]
I've had the laptop for about two years now and it still runs just as well as the day I bought it. I'm very happy with it.
[1] No I will not stick with Windows. Please feel free to read through my comment history to see why, but TL;DR I just don't like it.
I've had Linux on every laptop I've owned for years, and I haven't really had a problem with any of them running it, except for DisplayPort support on a Dell XPS.
Aside from that one Dell laptop, though, I generally avoid HP and Dell entirely, so perhaps that's why.
Weird. I must have uncommonly good fortune, as I don't think I've had Wi-Fi or sound issues in longer than that. I remember having some sound issues when I first tried out sway, because I was also moving from PulseAudio to PipeWire at the time, but nothing from an out-of-the-box install of a decent distro.
^ this comment is more relevant than people might think. HP regularly ships broken BIOS updates that literally brick laptops. In 2023 it happened, I think, seven times, and once a second broken update followed just a week after the previous one. Our IT department got so fed up that they ditched HP laptops entirely because of it.
That advice doesn't hold up very well when in recent years we've had multiple instances of a BIOS update being necessary to deal with the problem of "the CPU gets fed too high a voltage and dies prematurely". That's happened to both Intel and AMD desktop CPUs.
It's a real problem that BIOS updates for consumer systems never come with a meaningful changelog, so evaluating whether a particular update is a good idea or not is basically impossible.
I would strongly advise against buying HP laptops if you want to install Linux. MX Linux worked well on my pre-owned HP, and so did Zorin OS, but somehow I could not install antiX at all, and HP's Secure Boot gave me too much trouble. I managed to install OpenBSD, but every time I restarted it would kernel panic and I would have to reinstall. Combined with a long holiday when I left it at home, my HP is now practically bricked. It won't start.
I built a tower several years ago and it had CPU temp issues from the start. I RMA’d the cooler, reapplied the thermal paste a couple times, reassembled the whole build, etc. It wasn’t my main machine, but every time I sat down to use it the CPU would run hot and thermal-throttle. It’s an i9 with P/E cores, so I just chalked it up to Linux power management woes. A couple months ago I was on the brink of selling it for parts, but updated the BIOS as a Hail Mary. Totally fixed it.
I guess I did “have a specific bug that needs fixed”; I just didn’t know it!
People don't have a choice about updating their BIOS, as updates like this are installed automatically by both Windows and the underlying Intel ME tools.
(And I'm trying to avoid talking about microcode updates, which is a whole other story of fuckups)
Regarding Thinkpad BIOS: I have a Raspberry Pi Zero and a self soldered RP2040 programmer [1] in my travel kit for a reason. When travelling, a lot of the Cellebrite rootkits rely on an OEM BIOS, so they typically reflash your BIOS in the "we gonna check your laptop" phase.
Something related to this article, but not related to AI:
As someone who loves coding pet projects but is not a software engineer by profession, I find the paradigm of maintaining all these config files and environment variables exhausting, and there seem to be more and more of them for any non-trivial projects.
Not only do I find it hard to remember which is which or to locate any specific setting, their mechanisms often feel mysterious too: I often have to manually test them to see if they actually work or how exactly. This is not the case for actual code, where I can understand the logic just by reading it, since it has a clearer flow.
And I just can’t make myself blindly copy other people's config/env files without knowing what each switch is doing. This makes building projects, and especially copying or imitating other people's projects, a frustrating experience.
How do you deal with this better, my fellow professionals?
Software folks love over-engineering things. If you look at the web coding craze of a few years ago, people started piling tooling on top of tooling (frameworks, build pipelines, linting, generators, etc.) for something that could also be zero-config: just a handful of files for simple projects.
I guess this happens when you're too deep in a topic and forget that eventually the overhead of maintaining the tooling outweighs the benefits. It's a curse of our profession: we build and automate things, so we naturally want to build and automate tooling for the things we do.
I don’t think those web tooling piles are over-engineered per se; they address huge challenges at Google and Facebook. But the profession is way too driven by hype and fashion, and the result is a lot of unquestioning cargo-culting of whatever the Big Dogs do. The wrong tooling for the job creates that bubble of overcomplicated app development.
Inventing GraphQL and React and writing your own PHP compiler are absolutely insane and obviously wrong decisions for everyone who isn’t Facebook. With Facebook's revenue and Facebook's army of resume-obsessed PHP monkeys, they strike me as elegant technological solutions to otherwise intractable organizational issues. Insane, but highly profitable and fast-moving. Outside of that context, using React should address clear pain points, not be a dogmatic default.
We’re seeing some active pushback on it online now, but so much damage has been done. Embracing the progressive complexity of web apps/sites would leave the majority barebones, with minimal if any JavaScript.
Facebook solutions for Facebook problems. Most of us can be deeply happy our 99 problems don’t include theirs, and live a simpler easier life.
Not sure why you lumped React in there. Hack is loopy, and GraphQL was overhyped but conditionally useful, but React was legitimately useful and a real improvement over other ways of doing things at the time. Compare React to contemporary stuff like jQuery, Backbone, Knockout, Angular 1.x, etc.
I agree with you very much, if what you are building actually benefits from that much client side interactivity. I think the counterpoint is that most products could be server rendered html templates with a tiny amount of plain js rather than complex frontend applications.
First of all, I read the documentation for the tools I'm trying to configure.
I know this is very 20th century, but it helps a lot to understand how everything fits together and to remember what each tool does in a complex stack.
Documentation is not always perfect or complete, but it makes it much easier to find parameters in config files and know which ones to tweak.
And when the documentation falls short, the old adage applies: "Use the source, Luke."
Don't fall for the "JS ecosystem" trap and use sane tools. If a floobergloob requires you to add a floobergloob.config.js to your project root that's a very good indicator floobergloob is not worth your time.
The only boilerplate files you need in a JS repo root are .gitignore, package.json, package-lock.json, and optionally tsconfig.json if you're using TS.
A Node.js project shouldn't require a build step, and most websites can get away with a single build.js that calls your bundler (esbuild) and copies some static files to dist/.
> As someone who loves coding pet projects but is not a software engineer by profession, I find the paradigm of maintaining all these config files and environment variables exhausting
Then don’t.
> How do you deal with this better, my fellow professionals?
By not doing it.
Look, it’s your project. Why are you frustrating yourself? What you do is you set up your environment, your configuration, what you need/understand/prefer and that’s it. You’ll find out what those are as you go along. If you need, document each line as you add it. Don’t complicate it.
You start with the cleanest most minimal config you can get away with, but over the years you keep adding small additions and tweaks until it becomes a massive behemoth that only you will ever understand the reasoning behind.
Part of doing it well is adding comments as you add options. When I used vim, every line or block in the config had an accompanying comment explaining what it did, except if the config’s name was so obvious that a comment would just repeat it.
That's a good call. It's a big problem for JSON configs given pure JSON's strict no-comments policy. I like tools that let you use .js or better yet .ts files for config.
I like this idea a lot, and pushed for json5 at a previous job, but I think there are a few snags:
- it's weird and unfamiliar, most people prefer plain JSON
- there are too many competing standards to choose from
- most existing tools just use plain JSON (sometimes with support for non-standard features, like tsconfig allowing trailing commas, but usually poorly documented and unreliable)
Much easier just to make the leap to .ts files, which are ergonomically better in almost every way anyway.
A lot of json parsers will permit comments even though it isn't meant to be valid. Worth trying it, see if a comment breaks the config, and if not then use comments and don't worry about it.
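For what it's worth, Node's built-in `JSON.parse` is one of the strict ones, so the "try it and see" step fails there. A quick sketch of the check, plus a naive comment-stripping fallback (the regex is my own illustration and would mangle `//` inside string values, so treat it as a sketch only):

```javascript
// A config with a //-comment, which standard JSON forbids.
const text = `{
  // how many workers to spawn
  "workers": 4
}`;

// Does the parser at hand tolerate comments? Node's JSON.parse does not.
let strict = true;
try {
  JSON.parse(text);
  strict = false; // parser accepted the comment
} catch (e) {
  strict = true; // standard behavior: comments are a syntax error
}

// Crude fallback: strip whole-line //-comments before parsing.
const stripped = text.replace(/^\s*\/\/.*$/gm, "");
const config = JSON.parse(stripped);
console.log(strict, config.workers); // true 4
```

If the first `JSON.parse` had succeeded, you'd know your tool's parser is one of the lenient ones and could keep the comments.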
Sorry, but this sounds more like a myth, or at least heavily exaggerated. Similar to how Japan often gets romanticized.
Organizing the entire chain geographically at the scale you described (inter-city) doesn't bring huge cost advantages by itself. In China labor has historically been cheap, so the transport cost between regions was never the dominant factor anyway.
Most industrial clusters in China formed organically over time, just like in the rest of the world. Aside from some exceptions like mining, there isn't some master plan laying out entire cities as linear supply chains to the ocean. It's not SimCity.
One thing you're right about is that there is less bureaucratic friction or 'lawyers' in the way when it comes to economic development. For the former, it's because economic growth is THE metric for the government, especially at the local level, so they do whatever it takes to make it happen. For the latter, it's because… well, in China no one sues the government, period. I'm not sure it's a good thing.
As a Chinese living in China, you must know the layout of the city does make logical sense. I've only been once, and I buy stuff from factories fairly often. When I went there I basically went to a mall district where all the furniture was sold, then I went to the tile district to review tiles, then to several other "districts" that were nothing but that single item.
I went to the window factory, which was directly beside more window factories, and directly beside that was the place that extruded the aluminum they use. The aluminum itself was produced up the road in what they called the metal district.
You even say "industrial clusters in China" yourself, so there is clearly some amount of planning involved. There are obvious benefits to having all the aluminum factories beside an aluminum producer, having the shipping/packaging warehouses by the docks, etc.
There is some amount of government work at play here, on either a small or a large scale, to give businesses a reason to all set up in one place.
I've also seen things that just are not possible in North America. Asked for samples of aluminum extrusions and had the die made and extrusion done in a day. Locally it would take months before a sample is at my door.
I've sent designs out for quotes and gotten quotes back in hours; half the time a factory in NA doesn't even reply, and even when it does, it's more of a "go away" than anything else.
I've seen live video of robotic factories building entire cabinets for housing.
There is some amount of rose coloured glasses in this thread. But we cannot deny that China wants business and can get stuff done fast and efficiently. That cannot be said for modern day factories in US or Canada. The work ethic and desire for business are just completely different.
You seem to assume that just because similar industries exist near each other in China, that it must have been government intervention. Which maybe it was, I don't know. But this same trend exists in the USA too.
You have areas with lots of Oil Refineries, Houston and Baton Rouge for example. You have areas with lots of steel mills, like in North West Indiana. These are examples I personally know of. Obviously a lot of big tech factories exist close to each other in Silicon Valley and in Austin Texas too.
There are "industrial clusters" in America too, simply put. It is natural for large chemical plants or industrial sites to build up near where their source is. Hence all the oil refineries around the gulf. This is not a uniquely China thing at all. Lots of major US cities are known for specific types of industries.
Is the labor cheap in China or are you comparing it US salaries?
Can a person working in a Chinese tech factory for a major US company afford a reasonable place to live a reasonable distance, food, some entertainment, and have savings?
I'm not comparing it to US anything, I'm comparing it to other cost components like raw materials and parts, whose prices are often global.
The point is that transportation within China isn't a dominant factor in industrial cost or efficiency. So the idea that major manufacturing cities are laid out like giant assembly lines isn't nearly as important as OP suggests.
China still has many advantages over the US in manufacturing. I just don't think this is a major one, even if there's a grain of truth to it.
For strategic industries, i.e. the five-year-plan ones, local governments will absolutely master-plan the complete industrial chain in excruciating detail. For less strategic industries, local government gets a few anchor industries to take root and the rest is organic. Inter-city proximity also brought huge advantages in transportation speed, especially in the 90s-00s. The other consideration is scale: a bumfuck tier-3 Chinese city specializing in xyz will have millions of people, which naturally enables greater levels and depths of industrial agglomeration, and that is what makes the PRC exceptional. Think of the old Detroit Motor City hub that dominated 90% of US car production. The PRC has hundreds of such cities for different industries. It's not myth or exaggeration; it's a consequence of PRC scale that industrial clusters which would be historically exceptional aberrations in other countries exist in the PRC by the hundreds, as the baseline template.
sorry to be pedantic but, do you mean perhaps oligopolies? and by that do you mean marketshare or technology share? i'm just curious what people are looking for in ladybird (just better tech or better or governance?)
To me, a project's "hype-ness" is the ratio of how much attention it gets over how useful it actually is to users.
As a browser, Ladybird's usefulness is currently quite limited, for obvious reasons. This is not meant to dismiss its achievements, nor to overlook the fact that building a truly useful browser for everyday users is something few open-source teams can accomplish without the backing of a billion-dollar company. Still, in its present state, its practical utility remains limited.
He encodes bits as signs of DCT coefficients. I do feel like this is not as optimal as it could be. A better approach IMO would be to just ignore the AC coefficients altogether and instead encode several bits per block into the DC. Not using the chrominance also feels like a waste.
This actually won't work against YouTube's compression. The DC coefficient is always quantized, rounded, scaled, and so on, which means those bits are pretty much guaranteed to be destroyed immediately. If that happens for every single block, the data is unrecoverable. Also, chrominance is not used on purpose: chrominance is compressed much more aggressively than luminance.
I meant choosing multiple values, e.g. 4 to represent 2 bits. Say, 0.25, 0.5, 0.75, and 1. Then when decoding you would pick the closest valid value, so for example for 0.20 it would be 0.25. Not using AC coefficients would mean that theoretically you would get more bitrate for the DC ones.
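A minimal sketch of that scheme (the four levels and the 2-bits-per-coefficient choice are my illustrative values, not anything either the article or YouTube actually uses):

```javascript
// Pack 2 bits into one DC value by choosing one of 4 evenly spaced levels,
// then decode by snapping the received value to the nearest level.
const LEVELS = [0.25, 0.5, 0.75, 1.0];

function encode2Bits(bits /* integer 0..3 */) {
  return LEVELS[bits];
}

function decode2Bits(value) {
  // Pick the index of the closest level, e.g. 0.20 decodes to level 0.25.
  let best = 0;
  for (let i = 1; i < LEVELS.length; i++) {
    if (Math.abs(value - LEVELS[i]) < Math.abs(value - LEVELS[best])) best = i;
  }
  return best;
}

console.log(decode2Bits(0.20)); // 0 (nearest level is 0.25)
console.log(decode2Bits(encode2Bits(3))); // 3 (round-trips cleanly)
```

The scheme survives only as long as compression perturbs each value by less than half the spacing between levels, which is exactly the assumption the reply below disputes.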
I’ve been told this many times in the comments, but again, it is not reliable. Simply put, compression doesn’t necessarily follow a pattern, so specifying “ranges” or rounding to a specific place will not work. Compression optimizes for the eye and doesn't do the same thing to every value: it will round some down, some more, others less. Giving a range is simply not enough.
I'm not well-versed in the terms, so I'm not sure which part is the so-called "audio aliasing."
To me, the original has very obvious background noise which the enhanced version removes. But as the author has said, the enhanced version sounds "muffled" (and, IMHO, not just a little), which probably makes most people (including me) feel it sounds worse.
Also, shouldn't most of the music be included in the game's official OST? I assume that version would not be limited by the technical limitations of the game's medium at the time, and should best represent the artistically intended version.
Edit: apparently in this very case, "Metroid: Zero Mission" doesn't seem to have any official OST release. Unfortunate.
What I don't get is how the author can't pin the year down to anything narrower than "between 1994 and 1997," especially considering he wrote the article in 2002: only a few years later.
I'm not at all implying the story was fake; just this particular thing feels weird.
That’s because the HTML is server-side rendered (SSR) with data-theme="dark" hardcoded on the <html> element, so on the initial page load the browser immediately renders with dark-mode styles applied. After roughly 500-600 ms, Nuxt’s JavaScript hydration kicks in (this is a Nuxt app, based on __NUXT__ at line 11,236), detects your macOS system preference via the prefers-color-scheme media query, and updates data-theme to "light".
It shouldn't need to do this. Nuxt has a @nuxtjs/color-mode module which ensures that the correct colour scheme is applied before the browser starts rendering the HTML.
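For reference, a minimal sketch of wiring that module up in nuxt.config; the module name is from the comment above, and the option values shown are illustrative defaults worth double-checking against the module's docs:

```javascript
// nuxt.config.js -- sketch, assuming @nuxtjs/color-mode is installed
export default {
  modules: ['@nuxtjs/color-mode'],
  colorMode: {
    preference: 'system', // follow the OS prefers-color-scheme setting
    fallback: 'dark',     // used when the system preference can't be read
  },
};
```

The module injects a small script before hydration, which is what avoids the dark-to-light flash described above.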
Not sure whether such "criticism" is welcome here, since it is ultimately subjective, but I will just be blunt and say: I disagree.
I like this style of writing as well, but I think this article overdoes it, to the point that it became somewhat irritating to read.
The part where I particularly feel this way is when the author spends two whole paragraphs discussing why YouTube (or its developers) chose to sample by "100" segments, to the extent that the author even asks, "If you work at YouTube and know the answer, please let me know. I am genuinely curious." Which, for lack of better words, I found ridiculous.
> Not sure whether such "criticism" is welcome here, since it is ultimately subjective, but I will just be blunt and say: I disagree.
If this was my post I'd certainly appreciate criticism.
> but I think this article overdoes it
Perhaps it's overdone in places; to your credit, the question about whether 100 was an arbitrary number was a bit much. But, as a counterpoint, I found the related pondering of "might it make sense to have variable-duration time windows" interesting. The interpolation YouTube ultimately selected is deceiving, and variable density could be a way to mitigate that.
There's definitely a healthy balance and perhaps the author teeters on the verbose end, but I mostly just wanted to voice that I was surprised about the type of article it was, but not in an unpleasant way.
You are likely right that I over-rotated on the "storytelling" aspect there. My curiosity about the "100 segments" stemmed from wondering if there was a deeper statistical reason for that specific granularity (e.g., optimal binning relative to average video length) versus it just being a "nice round number."
That said, I can see how dedicating two paragraphs to it felt like over-dramatizing a constant. I will try to tighten the pacing on the next one. Thanks for reading despite the irritation!