
Unfortunately, people do not find out what it was meant for before they start writing about it


We're at the point where many newer engineers haven't had real hands-on DOM experience but are expected to deliver applications built in React. You need to know at least one level of abstraction below your current one to use the tool effectively. This all tracks with how the industry's changed, how we hire, how we train, and so on.


> You need to know at least one level of abstraction below your current one to use the tool effectively.

I'm not sure that's true in general.

For example, for some tools there's no single 'one level abstraction below'. Let's take regular expressions as a simple example. You can use them effectively, no matter whether your regular expression matcher uses NFAs or Brzozowski derivatives under the hood as the 'one level [of] abstraction below'.

(Just be careful, if your regular expression matcher uses backtracking, you might get pathological behaviour. Though memoisation makes that less likely to hit by accident in practice.)
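The pathological case is easy to trigger. A minimal sketch (Python's stdlib `re` is a backtracking engine): a nested quantifier plus a near-miss input makes the matcher try exponentially many splits before giving up.

```python
# (a+)+b on "aaa...ac" forces a backtracking engine to try every way
# of splitting the run of a's between the inner and outer '+', and
# every attempt fails only at the final 'c'. The work roughly doubles
# with each extra 'a'.
import re
import time

pattern = re.compile(r"(a+)+b")

for n in (10, 15, 20):
    s = "a" * n + "c"  # a near-miss: never actually matches
    t0 = time.perf_counter()
    result = pattern.fullmatch(s)
    elapsed = time.perf_counter() - t0
    print(f"n={n}: matched={result is not None}, {elapsed:.4f}s")
```

The same pattern is harmless on an automaton-based engine (RE2, Go's regexp, grep), which is the point about there being no single "one level below".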


>For example, for some tools there's no single 'one level abstraction below'. Let's take regular expressions as a simple example. You can use them effectively, no matter whether your regular expression matcher uses NFAs or Brzozowski derivatives under the hood as the 'one level [of] abstraction below'.

If you're OK with the occasional catastrophically slow regex (https://swtch.com/~rsc/regexp/regexp1.html), sure.

But you do need to understand the abstraction of strings, code points and so on, if you want to do regexes on Unicode text that doesn't stop at the ASCII level.

In general yes: it's not an absolute law that you need to "know one layer below". You can code in Python and never know about machine code.

But knowing the layers below sure does help making better decisions if you want e.g. to optimize this Python code. Knowing how the code will be executed, memory layouts, how computers work, access latencies of memory, disk, network, etc sure does help.
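To illustrate with Python itself: the layer immediately below your source is the bytecode the CPython interpreter actually executes, and the stdlib `dis` module will show it to you. A small sketch of peeking one level down:

```python
# Disassemble a function to see the instructions CPython actually
# runs. Seeing that loads and attribute lookups are real per-iteration
# instructions (not free) is the kind of layer-below knowledge that
# informs optimisations like hoisting lookups out of hot loops.
import dis
import io

def total(xs):
    s = 0
    for x in xs:
        s += x
    return s

buf = io.StringIO()
dis.dis(total, file=buf)
listing = buf.getvalue()
print(listing)  # one instruction per line; the loop shows up as FOR_ITER etc.
```

The exact opcode names vary between CPython versions, but the loop structure is always visible.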


> If you're OK with the occasional catastrophically slow regex (https://swtch.com/~rsc/regexp/regexp1.html), sure.

I already addressed that in my original comment. Approaches based on NFAs and Brzozowski derivatives don't have these flaws; but you don't need to know anything about how they work to use them.

You just need to read one blog post that tells you to avoid regular expression matchers that use backtracking, and you are good to go. You don't even need to understand why matching via backtracking is bad.

> But you do need to understand the abstraction of strings, code points and so on, if you want to do regexes on unicode that doesn't stop at the ASCII level.

Why?


>You just need to read one blog post that tells you to avoid regular expression matchers that use backtracking, and you are good to go. You don't even need to understand why matching via backtracking is bad.

Yeah, no.

You might not be able to avoid using your standard library's regex, or your project's chosen regex dependency, based on team/company policy. So it's not as simple as "use a regex engine that doesn't have this flaw".

Then if you want to avoid the cost, you need to know what backtracking is, to the level of understanding which kind of expressions can give you those performance issues.
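Concretely, the understanding being argued for here is the ability to spot a pattern like `(a+)+` and rewrite it. A sketch of that rewrite (assuming the simple nested-quantifier case; real patterns need the same analysis applied case by case):

```python
# (a+)+ matches exactly the same strings as a+, but gives a
# backtracking engine exponentially many ways to fail on a near-miss
# input. Rewriting to the unambiguous form removes the blowup without
# changing the language matched.
import re

slow = re.compile(r"(a+)+b")  # pathological on inputs like "aaa...ac"
fast = re.compile(r"a+b")     # same language, only one way to match

for s in ("b", "ab", "aaab", "aac", "c", ""):
    assert bool(slow.fullmatch(s)) == bool(fast.fullmatch(s))
print("patterns agree on all samples")
```

Python 3.11 also added possessive quantifiers and atomic groups for cases where an equivalent unambiguous pattern is harder to find.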

>Why?

Because there are tons of factors that can affect your regex experience with Unicode: normalization, different lower/upper-case treatment, composite characters that don't match even though it looks like you typed the same character in your query, handling of new Unicode characters (7/8-bit ASCII has been fixed for decades), and so on.
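The composite-character case in particular bites people who've never looked below the "string" abstraction. A small sketch using the stdlib:

```python
# "é" can be one code point (U+00E9) or two ("e" plus combining acute,
# U+0301). The two forms render identically but are different strings,
# so a regex written in one form silently misses the other unless both
# sides are normalised first (NFC here; NFD works too if used
# consistently on both pattern and input).
import re
import unicodedata

precomposed = "caf\u00e9"    # 'é' as a single code point
decomposed = "cafe\u0301"    # 'e' followed by combining acute accent

pattern = re.compile("caf\u00e9")
print(bool(pattern.fullmatch(precomposed)))  # True
print(bool(pattern.fullmatch(decomposed)))   # False: looks identical, isn't

nfc = unicodedata.normalize("NFC", decomposed)
print(bool(pattern.fullmatch(nfc)))          # True after normalisation
```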


> You might not be able to avoid using your standard library's regex, or your project's chosen regex dependency, based on team/company policy. So it's not as simple as "use a regex engine that doesn't have this flaw".

Well, yes, if someone forces you to use tools that have flaws, you need to learn about the flaws so you can work around them. Like when using a shoe as a hammer.

I'm not sure that proves anything about abstractions?

See also https://blog.codinghorror.com/the-php-singularity/

> Because there are tons of factors that can affect your regex experience with Unicode: normalization, different lower/upper-case treatment, composite characters that don't match even though it looks like you typed the same character in your query, handling of new Unicode characters (7/8-bit ASCII has been fixed for decades), and so on.

Thanks.


> (Just be careful, if your regular expression matcher uses backtracking, you might get pathological behaviour. Though memoisation makes that less likely to hit by accident in practice.)

Doesn’t that exactly demonstrate why using the tool effectively requires an understanding of the implementation (or possible implementations) behind the abstraction?


Understanding is perhaps sufficient, but not necessary.

You can just consult a list that tells you which regular expression matchers to avoid (e.g. Perl's) and which ones are good (e.g. grep's), and you are good to go. No need to understand anything.


It does. Their argument is a farce. By now they've had many chances and made many attempts to illustrate their point if they had one. They don't have one, but they somehow don't know it. What can ya do?


It does, and I had exactly that same reaction! Regexes are a leaky abstraction, like all of them.


Regular expressions are probably not a good example because you will eventually write a regex that has catastrophic backtracking behavior. I encountered one a few months ago, so it’s not at all uncommon or difficult to encounter. If you’re curious enough, you’d end up reading about how regexes work under the hood.

A better example might be that of a compiler, where you (very rarely) need to look at the asm output or encounter a case where the compiler generates incorrect code and you need to debug why.


> Regular expressions are probably not a good example because you will eventually write a regex that has catastrophic backtracking behavior.

Nope, that will never happen to me. No regular expression has that behaviour.

There are some bad implementations of regular expression matching that have these problems for some regular expressions. But you can avoid those bad implementations without understanding anything.

> I encountered one a few months ago, so it’s not at all uncommon or difficult to encounter.

That only happens, if you use a regular expression matcher that uses backtracking. Sane regular expression matchers take linear time on all input strings.

> If you’re curious enough, you’d end up reading about how regexes work under the hood.

There's no one single way regular expressions work under the hood. You had the misfortune of using a matcher that uses backtracking and is prone to catastrophic exponential runtimes.

There's multiple different ways to implement regular expression matchers. But a user doesn't have to care or understand anything (they just need to avoid the buggy ones).

Sure, if you read up on how regular expressions work under the hood, you can learn to avoid those bad matchers. (Or you can learn how to live with your backtracking implementation, if you are feeling masochistic.)

But that's entirely optional: you only need to read one blog post that tells you to use grep and avoid e.g. Perl. You don't need to understand why backtracking is bad for regular expression matching, as long as you avoid those bad matchers.
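For the curious, the derivative-based approach mentioned upthread really is backtracking-free. A toy sketch (concatenation, alternation and star over single characters only; real libraries add character classes, interning and smarter simplification): match by repeatedly taking the Brzozowski derivative, one pass over the input.

```python
# The derivative of a regex r w.r.t. a character c matches exactly the
# strings s such that r matches c + s. Matching is then: differentiate
# once per input character, and check whether the final regex accepts
# the empty string. No backtracking, one pass over the input.
from dataclasses import dataclass

class Re: pass

@dataclass(frozen=True)
class Empty(Re): pass          # matches nothing
@dataclass(frozen=True)
class Eps(Re): pass            # matches only the empty string
@dataclass(frozen=True)
class Chr(Re): c: str
@dataclass(frozen=True)
class Alt(Re): a: Re; b: Re
@dataclass(frozen=True)
class Seq(Re): a: Re; b: Re
@dataclass(frozen=True)
class Star(Re): a: Re

# Smart constructors simplify as we go, so derivatives stay small.
def alt(a, b):
    if isinstance(a, Empty): return b
    if isinstance(b, Empty): return a
    return Alt(a, b)

def seq(a, b):
    if isinstance(a, Empty) or isinstance(b, Empty): return Empty()
    if isinstance(a, Eps): return b
    if isinstance(b, Eps): return a
    return Seq(a, b)

def nullable(r):  # does r accept the empty string?
    if isinstance(r, (Eps, Star)): return True
    if isinstance(r, Alt): return nullable(r.a) or nullable(r.b)
    if isinstance(r, Seq): return nullable(r.a) and nullable(r.b)
    return False

def deriv(r, c):
    if isinstance(r, Chr): return Eps() if r.c == c else Empty()
    if isinstance(r, Alt): return alt(deriv(r.a, c), deriv(r.b, c))
    if isinstance(r, Seq):
        d = seq(deriv(r.a, c), r.b)
        return alt(d, deriv(r.b, c)) if nullable(r.a) else d
    if isinstance(r, Star): return seq(deriv(r.a, c), r)
    return Empty()  # Empty and Eps have no non-trivial derivative

def matches(r, s):
    for c in s:
        r = deriv(r, c)
    return nullable(r)

# (a+)+b is just aa*b; the "pathological" pattern is harmless here.
r = Seq(Seq(Chr("a"), Star(Chr("a"))), Chr("b"))
print(matches(r, "aaab"))          # True
print(matches(r, "a" * 30 + "c"))  # False, and still instant
```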


I would say that knowing there are different families of regex engine, and what their internal strategies and consequences are, so that you can avoid certain non-obvious problems that arise not from something you did wrong but from something a layer lower did wrong (you wrote a valid regex according to all the rules in the manual, and got a bad result), qualifies exactly as an example of needing some understanding of how the layers below work.

In fact it wasn't necessary for me to qualify that with "I would say". I do say, and it simply does. You've made exactly no argument this entire thread.

Maybe everyone doesn't need to know everything, but the skin of stuff anyone needs to know is thicker than 0, and has no absolute boundary layer either, you just generally need to know less the further from your own work. But that never goes all the way to 0. You have to rely on other people to have done their jobs, but you still at least have to understand what those jobs are, that they exist and how you ultimately interact with them and how they impact you.

You said so yourself several times which makes this all farcical.


I’m not sure if this response was simply for the sake of replying, with its claims of writing perfect code all the time, or that backtracking implementations are uncommon or insane (most languages' regex engines use backtracking).


Huh? I'm not making any claims of writing perfect code. If you have a sane regular expression matcher, there are no catastrophic cases to avoid.

Regular expression libraries that don't use backtracking are available for many languages. Yes, some languages have bad libraries that use backtracking, too. But bad libraries will always exist; they aren't an excuse when there are more good libraries around than ever before.


You have a point, but it seems clear to me that your parent commenter was referring to more pervasive abstractions like frameworks and probably programming languages.

Unless you're coding in Perl or sed or whatever, regular expressions aren't really a primary abstraction. And even when they are, I don't see how the implementation would be accessible as a lower level. They're not really a layered abstraction.


I am saying that regular expressions are an abstraction that has many different implementations. So there's no single underlying implementation you need to understand.

The only thing you need to watch out for is regular expression matchers that are prone to exponential blowup. But you don't need to understand anything; you can just consult a list of which ones are bad and which ones are good, and then avoid the bad ones; without any understanding of the mechanics necessary.

> You have a point, but it seems clear to me that your parent commenter was referring to more pervasive abstractions like frameworks and probably programming languages.

Probably. I was just looking for the simplest example that the maximum number of people would be familiar with.


I guess that illustrates the point: perfect abstractions vs. leaky abstractions.


People start using React as part of a larger framework or existing app and they associate all these unrelated things to React.

Also, the poster says this line:

> Finding a dev who actually groks the gap between useEffect and useMemo, error boundaries, hook based fetching or my (un)favourite, authentication, is difficult.

But truthfully the problems those constructs solve are problems in every paradigm. If you can’t grasp the difference between these things despite being a heavy user of a library, you may… want to spend some time reading the docs at some point.


Unfortunately, all the early advocacy focused on how the virtual DOM is "faster", so people are fully excused for not finding out "what it was meant for".

The "UI as a function of state" was not just sold as "more functional/simpler", but also as a necessity to have a DOM diffing algorithm to minimize updates and be "faster".



