
Depends on what was missing.

If we used macOS throughout the org and we asked a SW dev team to build inventory tracking software without specifying the OS, I'd squarely put the blame on the SW team for building it for Linux or Windows.

(Yes, it should be a blameless culture, but if an obvious assumption like this is broken, most likely someone is intentionally messing with you.)

There exists an expected level of context knowledge that is frequently underspecified.


Humans ask each other silly questions all the time: a human confronted with a question like this would either blurt out a bad response like "walk" without thinking, only then realizing what they are suggesting, or pause and respond with "to get your car washed, you need to get it there, so you must drive".

Now, humans, beyond answering without thinking (which is really similar to how basic LLMs work), can easily fall victim to context too: if your boss, who never pranks you like this, asked you to take his car to a car wash, and asked if you'll walk or drive but told you to consider the environmental impact, you might get stumped and respond wrong too.

(and if it's flat or downhill, you might even push the car for 50m ;))


If I understood you correctly, this might be an issue if you have multiple strokes (so multiple mins and maxes that you need to stay within) on a row of pixels (think of all the strokes of an N).

What I'm suggesting is just a way to do less computation to get the same result as before; it doesn't change the correctness of the algorithm (if implemented correctly!). Instead of testing every curve segment at each (x, y) pixel location in the target bitmap, you only need to test those curve segments that overlap (or, more precisely, aren't known not to overlap) that y location, and what I described is a way to do that efficiently.
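For illustration, a minimal Python sketch of that bucketing idea (a hedged sketch: `y_range` and `hit_test` are hypothetical stand-ins, not from any particular rasterizer):

  from collections import defaultdict

  def bucket_segments(segments, height):
      # Each segment lands only in the rows its y-extent can touch.
      rows = defaultdict(list)
      for seg in segments:
          y0, y1 = seg.y_range()  # hypothetical: (min y, max y) of the segment
          for y in range(max(0, int(y0)), min(height, int(y1) + 1)):
              rows[y].append(seg)
      return rows

  def rasterize(segments, width, height, hit_test):
      rows = bucket_segments(segments, height)
      bitmap = [[0] * width for _ in range(height)]
      for y, candidates in rows.items():
          for x in range(width):
              # Only segments whose y-extent covers this row are tested.
              if any(hit_test(seg, x, y) for seg in candidates):
                  bitmap[y][x] = 1
      return bitmap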

Black text on white background with no backlight is easier to read. Think black text on paper.

When it comes to computer screens, usually set too bright to accommodate varying ambient lighting conditions throughout the day/year, it's not as simple, and I am not sure there is a study to confirm it.

And even if so, any individual's case might be different.


While you are right about the many misconfigured monitors, the right solution is to set an appropriate brightness and contrast, not to invert the text.

Overly bright ambient lighting is better handled with monitor shields, not by increasing the display brightness, especially when the screen is glossy.


Not disagreeing (my external screens have never been set higher than 30% brightness, but they've also always been matte, except for a couple of instances when I had to use Macs for work).

But I am sure none of this has been part of an actual study with screens.


I honestly recommend any introductory type design book for all the considerations that go into achieving optical balance.

It's wonderful to see someone dive into this so deeply. A simpler way to understand the complexity might be to try designing your own font.

Pick up a book on type and start up FontForge, and off you go.

Be careful though: make an early choice whether you are going with 3rd-order (cubic) or 2nd-order (quadratic) Bézier curves.
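To make the difference concrete, here is a plain de Casteljau evaluation of both orders in Python (a sketch with points as (x, y) tuples, not any font tool's API):

  def lerp(a, b, t):
      # Linear interpolation between two points.
      return tuple((1 - t) * ai + t * bi for ai, bi in zip(a, b))

  def quadratic_bezier(p0, p1, p2, t):
      # 2nd order (TrueType-style): one control point, two rounds of lerp.
      return lerp(lerp(p0, p1, t), lerp(p1, p2, t), t)

  def cubic_bezier(p0, p1, p2, p3, t):
      # 3rd order (PostScript/CFF-style): two control points, three rounds.
      a, b, c = lerp(p0, p1, t), lerp(p1, p2, t), lerp(p2, p3, t)
      return lerp(lerp(a, b, t), lerp(b, c, t), t)

Every quadratic curve can be represented exactly as a cubic (via degree elevation), but not the other way around, which is one reason the choice is hard to undo later.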

Going through The TeXbook and The METAFONTbook by DEK is also a brilliant way to learn about all this, with the note that they do have an explicit bitmap step in the pipeline.

One correction though:

  Without it, you wouldn't be reading this right now.
Computers started with bitmap fonts of different pixel sizes. Your console terminal in Linux is still using them, and nothing stops you from using them elsewhere too ("Fixed" has large Unicode coverage and is usually preinstalled).

So no, none of this tech is necessary for us to read text on computer screens.


If it is the terminal emulators running in a desktop system we are talking about, I doubt most people are using bitmap fonts these days. Most distros come with some modern font pre-configured for the terminal emulator.

I specifically said "console terminal in Linux" (so not an emulator), but among emulators in Linux, xterm at least uses bitmap fonts by default.

In general, some tools like mplayer or VLC allow playing an MP4 file without an "index", but may require certain CLI arguments.

There are tools that can fix that too (was it mconvert from mplayer, or ffmpeg? I'm not sure).
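As one concrete, hedged example: if ffmpeg can still parse the file at all, a stream-copy remux rewrites the container, including the index ("moov") atom; filenames here are hypothetical:

  import subprocess

  # Remux without re-encoding; "-movflags +faststart" writes the
  # moov (index) atom at the front of the output file.
  subprocess.run(
      ["ffmpeg", "-i", "broken.mp4",  # hypothetical input
       "-c", "copy",                  # copy streams as-is, no re-encode
       "-movflags", "+faststart",
       "fixed.mp4"],
      check=True,
  )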


Impossible is a strong word when what you probably mean is "impractical": do you really believe that there is actual unexplainable indeterminism in software programs, including concurrent ones?

I literally mean impossible from the perspective of customers and end users who don't have access to source code or developer tools. And some software failures caused by hardware faults are also non-deterministic. Those are individually rare, but for cloud-scale operations they happen all the time.

Thanks for the explanation: I disagree with both, though.

Yes, it is hard for customers to understand the determinism behind some software behaviour, but they can still do it. I've figured out a couple of problems with software I was using without source or tools (yes, some involved concurrency). And yes, it is impractical: I was helped by my 20+ years of experience building software.

Any hardware fault might be unexpected, but software behaviour is pretty deterministic: even bit flips are explained, and that's probably the closest to "impossible" that we've got.


I do not think that's how it worked out for GitHub: I'd rather say that Git (as complex as it was to use) succeeded due to becoming the basis of GitHub (with its simple, clean interface).

At the time, there were multiple code hosting platforms like SourceForge, FSF Savannah, and Canonical's Launchpad.net, and most development was still done in SVN, with Git, Bazaar, and Mercurial as the upstart "distributed" VCSes with similar penetration.


Yes, development was being done in SVN, but it was a huge pain. Continuous communication with the server was required (history lookups took ages, changing a file required a checkout, etc.), and that was just horribly inefficient for distributed teams, even within Europe, and much more so cross-continent.

A DVCS was definitely required. And I would say git won out due to Linus inventing and then backing it, not because of a platform that would serve it.


I was involved with bzr and Launchpad: anybody using pure Git hated it. GitHub, even with fewer features compared to LP, was pretty well regarded.

Yes, the kernel and Linus used it, but before that he used a proprietary VCS (BitKeeper) that did not go anywhere anyway, really.


> changing a file required a checkout

SVN didn't need checkouts to edit, as far as I recall? Perforce had that kind of model.


As did CVS. But you are right.

I am not sure; it seems I did misremember. Though it's possible I was actually working with needs-lock files. I can definitely see a certain coworker from that time putting this on all files :/


And even in P4, you could check out files only at the end, kind of like `git add`. Though this could cause some annoyance if someone had locked the file upstream.

Yes to all that. And GitLab the company was only founded in 2014 (OSS project started in 2011) and ran through YC in 2015, seven years after GitHub launched.

And most of those, except maybe GitLab, were clunky AF to use.

Well, if there is an emergency lane to the right... it actually happens quite a bit around here.
