While the article is a cute way to introduce the loop body extraction refactoring, I think it is a disservice to talk about not using loops without mentioning recursion.
I am quite often shocked at the number of programmers who, even after years of experience, are not able to use recursion properly. (Not saying this is true of the article's author - but they should have mentioned recursion)
Problem is, tail recursion isn't always particularly elegant, and requires so much extra effort pandering to the compiler (Exhibit A: the factorial example, naive recursive vs. tail recursive) that the result is usually harder to follow than one with an explicit stack or queue. Also, some traversals, like those over trees, are inherently stateful no matter how you transform them, so they can't be rewritten into a tail-recursive form at all; in that case I'm not sure whether it's clearer to manage the stack directly or to hope that the compiler will optimize it into the most efficient form.
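The contrast is easy to see with factorial. A sketch in JavaScript (engines mostly won't eliminate the tail call anyway; this is just about the shape of the code):

```javascript
// Naive recursion: reads like the math, but the multiplication happens
// on the way back up, so each call frame has to stick around.
function factNaive(n) {
  return n <= 1 ? 1 : n * factNaive(n - 1);
}

// Tail-recursive version: the recursive call is the last thing evaluated,
// but only because an accumulator argument threads the running product
// through -- the extra parameter is the "pandering" in question.
function factTail(n, acc = 1) {
  return n <= 1 ? acc : factTail(n - 1, acc * n);
}
```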
I never said people should always use recursion. It's just that it's a technique that's useful to have up your sleeve unless you are working in constrained domains (embedded comes to mind)
Never forget that cultural differences look odd in either direction. My favorite passage from "Structure and Interpretation of Computer Programs":
> One reason that the distinction between process and procedure may be confusing is that most implementations of common languages (including Ada, Pascal, and C) are designed in such a way that the interpretation of any recursive procedure consumes an amount of memory that grows with the number of procedure calls, even when the process described is, in principle, iterative. As a consequence, these languages can describe iterative processes only by resorting to special-purpose “looping constructs” such as do, repeat, until, for, and while. The implementation of Scheme we shall consider in Chapter 5 does not share this defect. It will execute an iterative process in constant space, even if the iterative process is described by a recursive procedure. An implementation with this property is called tail-recursive. With a tail-recursive implementation, iteration can be expressed using the ordinary procedure call mechanism, so that special iteration constructs are useful only as syntactic sugar.
From the other side, tail call recursion is often actually implemented as a loop by the interpreter. Effectively, making many recursive programs just a different way to write a loop.
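The translation is mechanical. A sketch of what a tail-call-eliminating implementation effectively does:

```javascript
// A tail-recursive sum of 1..n.
function sumTo(n, acc = 0) {
  return n === 0 ? acc : sumTo(n - 1, acc + n);
}

// The loop it effectively becomes: rebind the parameters and jump back
// to the top instead of pushing a new call frame.
function sumToLoop(n, acc = 0) {
  while (n !== 0) {
    acc = acc + n; // uses the current n...
    n = n - 1;     // ...then steps it, mirroring the call's new arguments
  }
  return acc;
}
```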
Is there an alternative to conditional jump instructions? What would such a computer look like? Maybe such a computer exists, but it's hard for us to see it, because we've been drawn into thinking along Von Neumann machine lines.
You're absolutely right. I'm usually careful about being blinded by current machines. And there are other ways like changing your return stack manually which could be used to loop.
I was trying to express that the various mathematical formalisms we think in are all getting translated to the same machine in the end.
Honestly at this point in my career I'd have to spend some time remembering how to use recursion properly. I'm also in the embedded realm where I've encountered C compilers that will throw an error if you try to use recursion.
I understand your main point, but not everyone is even allowed to use recursion.
My point is that people should at least know the basics of recursion. Mentioning "programming without loops" without even mentioning that recursion is an option in that specific case seems neglectful to me.
I wouldn't go that far. This post is basically newbie-fodder, seemingly from someone more towards that end of the spectrum than the other. Seems unfair to encumber them with expectations on what they thought was a helpful post for beginners grappling with "so, what exactly do I abstract?"
It's really just a small tutorial on DRY and finding repetition that you can extract. HN upvoted it because its provocative title makes HNers think of things like recursion and jmp which we can talk about in the comments.
Pretty easy to implement tail recursion though. Sometimes it’s hard to know if a language that automatically compiles to tail recursion will even always do it.
> Sometimes it’s hard to know if a language that automatically compiles to tail recursion will even always do it.
I wish more languages that do not always compile to tail recursive methods had something similar to Scala's `@tailrec` annotation (which forces the compilation to fail if the tail calls cannot be optimized).
It's not a very elegant solution, but at least it works.
I even convinced a C++ compiler (don't recall whether it was g++ or clang++; it might have been during a FreeBSD fling) to do TCO at one point, but I don't remember exactly how I did it and don't think the requirements are documented anywhere. It probably involved functions declared "static" (i.e. internal linkage) and with the same signature.
Recursion is either an abstract way to create a loop or a way to use the call stack as a stack data structure. There isn't really a good reason why it needs to be part of a programmer's toolbox.
I think the clever nature of recursion is what has made it into something that some programmers think is a better way to traverse a data structure, but I think it is often difficult to justify.
Even trees can be traversed with a stack of the current state. If something goes wrong, checking the traversal stack is much easier than a runaway call stack.
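For example, a pre-order walk with an explicit stack — the whole traversal state is one inspectable array (a sketch; the node shape with `value`/`left`/`right` is assumed):

```javascript
// Pre-order traversal with an explicit stack instead of the call stack.
function preorder(root) {
  const out = [];
  const stack = root ? [root] : [];
  while (stack.length > 0) {
    const node = stack.pop();
    out.push(node.value);
    // Push right first so the left subtree is visited first.
    if (node.right) stack.push(node.right);
    if (node.left) stack.push(node.left);
  }
  return out;
}
```

If something goes wrong, you can log or inspect `stack` directly at any point.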
I completely disagree. For handling recursive data structures, recursion will lead to shorter, simpler code than trying to hack something together with a loop and a custom stack object. While I agree that some of the more dogmatic FP people acting like recursion is the best thing ever might be a bit overstated, I think it's short-sighted to say that "There isn't really a good reason why it needs to be part of a programmer's toolbox".
Not to mention, without using recursion, there's no direct way of dealing with purely immutable data structures. You're going to be stuck creating some kind of mutable reference to serve as a counter to a loop, or to append to a list.
About two years ago, I had to (sort of accidentally) create a DSL out of JSON that emitted SQL. My first attempt was trying to hack something together with a bunch of arrays and hashmaps, but I realized that, since there's no real "limit" to how nested SQL can be, that approach wouldn't work. I scrapped what I was working on and started over, fully utilizing recursion, and it led to substantially shorter code that didn't perform measurably slower and let me handle the full SQL spec correctly.
Sadly, I moved off that project, and the person who inherited it didn't understand recursion very well, and rewrote it without recursion, and ended up making it not able to handle the full spec.
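A toy version of that idea (the JSON shape and operator names here are invented for illustration, not the actual DSL): the emitter just calls itself on sub-expressions, so arbitrary nesting falls out for free.

```javascript
// Emit a SQL-ish condition string from a nested JSON expression.
function emit(expr) {
  if (typeof expr !== "object" || expr === null) {
    return String(expr); // literal or column name
  }
  switch (expr.op) {
    case "and":
      return "(" + expr.args.map(emit).join(" AND ") + ")";
    case "or":
      return "(" + expr.args.map(emit).join(" OR ") + ")";
    case "eq":
      return emit(expr.left) + " = " + emit(expr.right);
    default:
      throw new Error("unknown op: " + expr.op);
  }
}
```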
Absolutely! As a thought exercise, what sort of non-trivial tree traversal is possible without recursion or looping?
I do use recursion quite a bit but always honoring the JPL coding rule (I can't find it) to put some form of safety fusing in recursive code to avoid inadvertent stack blowouts.
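A sketch of such a fuse — bail out loudly past a depth bound instead of blowing the stack (the limit here is an arbitrary illustration, not a number from the JPL rules):

```javascript
const MAX_DEPTH = 1000; // arbitrary safety bound

// Compute tree depth, but blow a "fuse" instead of the call stack if the
// structure is deeper than expected (e.g. accidentally cyclic).
function depthOf(node, depth = 0) {
  if (depth > MAX_DEPTH) {
    throw new Error("recursion fuse blown at depth " + depth);
  }
  if (!node) return 0;
  return 1 + Math.max(depthOf(node.left, depth + 1),
                      depthOf(node.right, depth + 1));
}
```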
I'd be even more specific than that: understanding the transformation to turn a looping function into a recursive one is the key gateway into being able to comprehend code which uses advanced call-stack manipulation patterns, such as continuations or coroutines. While I hesitate to call such methodologies _simpler_ code, such patterns still will certainly yield more maintainable, reliable code in a strict single-thread language such as Javascript.
Remember that not everyone writes code that is designed to run on a machine with lots of RAM. Loops vs Recursion can be a proper engineering trade off when resources are low.
Sure, and I'm not saying you should never write with the loop+stack/list option; right tool, right job.
I'm just saying that there are certain jobs that lend themselves to recursion, and one shouldn't discount it as some purely-academic bit of fun; it's a useful tool to have in your arsenal. If you have reason to worry about a stack overflow or something similar, you probably shouldn't be using recursion for that project; if you're working in an environment where you aren't going to realistically hit those restraints (like in my case), recursion is incredibly valuable sometimes.
That said, even though I'm somewhat of an FP zealot, I think whenever possible (about 9*% of the time), you should use a looping function like `map`, `filter`, `reduce`, or even the Java-style enhanced-for-loop instead of recursion.
But what is a 'recursive' data structure? Recursive describes execution, not data. Data in a tree is a hierarchy. Traversing a tree can be done with a stack, since that's what recursion is doing in a less direct way.
Immutable also describes execution and is really just a way of hiding what is really happening under the hood. There isn't anything that recursion enables.
I'm defining "recursive" data structure in basically the same way everyone else does. Here's an example in Haskell:
data List a = Nil | Cons a (List a)   -- the built-in [a] has the same shape
A list is either empty, or an element plus a pointer to another list. This is recursively defined, since the type refers to itself as part of its own definition.
I know that traversing a tree can be done with a stack/list; I mentioned that in my response, but the code will typically be bigger and less intuitive; if the data structure is recursive (based on my definition from above), it lends itself to a recursive algorithm.
Sure, under the hood immutable data might be dressing for something at runtime, but the compiler still needs help to use those features; if I want to loop over something and prepend to a list immutably, at least from a code perspective (not the runtime context), I need recursion to properly respect that.
Others have described simple recursive data structures, but a real-world example is in the MTOSI definition of network modeling. Equipment Holders (EH), eg slots, can hold Equipment (EQ), eg cards. EHs can also hold other EHs arbitrarily deep, and EQ can hold EQ as well as EHs. So they are mutually recursive. A standard way to walk such data structures is with mutually recursive functions.
In other words, a natural way to talk about recursive data structures is with recursive functions and this is made more obvious with data structures that are mutually recursive.
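A sketch of that shape (field names invented, not actual MTOSI): each walker calls the other for the kinds of children it can contain.

```javascript
// Mutually recursive walkers: holders (EH) contain equipment and holders;
// equipment (EQ) contains equipment and holders.
function walkHolder(eh, visit) {
  visit("EH", eh.name);
  (eh.holders || []).forEach(h => walkHolder(h, visit));
  (eh.equipment || []).forEach(q => walkEquipment(q, visit));
}

function walkEquipment(eq, visit) {
  visit("EQ", eq.name);
  (eq.equipment || []).forEach(q => walkEquipment(q, visit));
  (eq.holders || []).forEach(h => walkHolder(h, visit));
}
```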
Since you listed linked lists first, here's a fun little tidbit from my early days: I couldn't understand that until I first understood the recursive structure of binary trees. Only afterwards did I have the realization "ohhh, it's like a tree with only one child at each node" and suddenly understand the recursive structure in linked lists.
While it doesn't find its way into my everyday programming so often, I think recursion does serve a pedagogic purpose if nothing else.
Just like learning a bit of Lisp or Prolog, having the "click" moment when first learning about recursion can be really helpful in building intuitions about programming in general.
It's the opposite, actually - every loop can be trivially encoded as recursion, but loops can't encode many common recursive patterns without a lot of work.
Recursion also has the potential to be much clearer than loops in some cases.
Of course, there are also cases where one would do much better to not use recursion, but that's why I think people should know of it.
Recursion just gives you a stack data structure by letting you use the call stack. If you have a stack data structure to use explicitly, then recursion doesn't give you anything and takes away the simplicity of being able to see the entire state of the traversal easily since the call stack and traversal state are now intertwined.
You can just take the variables you are passing to each recursive call, put them in a struct, and make a stack of those structs.
There is nothing brittle about it, you push values to the stack instead of passing them to a function (which just pushes them to the call stack). It actually ends up being far less brittle in practice because it is much easier to deal with edge cases and errors. You can break out of a loop much easier than returning different values and having those bubble up.
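A sketch of that recipe: the values you'd pass to each recursive call become entries on an explicit stack, and abandoning the whole traversal is a single `return` from the loop rather than an error value bubbling up through every frame.

```javascript
// Flatten arbitrarily nested arrays of numbers without recursion.
function flattenNumbers(input) {
  const stack = [input]; // each entry is one "call's" argument
  const out = [];
  while (stack.length > 0) {
    const item = stack.pop();
    if (Array.isArray(item)) {
      // Push children in reverse so they come off the stack in order.
      for (let i = item.length - 1; i >= 0; i--) stack.push(item[i]);
    } else if (typeof item === "number") {
      out.push(item);
    } else {
      return null; // bail out of the entire traversal in one step
    }
  }
  return out;
}
```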
In Scheme, the only way to loop is via recursion. In Clojure, all high level functions and looping macros (for/doseq) are implemented via recursion as well (loop/recur).
Yes, it requires significant paradigm shift when you start to use these languages, but after a while, you start to appreciate high level functions (map, reduce) instead of language looping constructs (for, while) - they can be easily composed, parallelized, scaled over cluster, refactored... Even code looks much cleaner.
You like it even more if you are a mathematician, because things look more "natural" :D
Scheme has a "do" loop that doesn't require recursion. It's just that the people who generally teach Scheme are hell-bent on recursion rather than iteration.
Einstein summation for matrix multiplication isn't really a great example, because we have a perfectly good way to describe matrix multiplication without loops or suffixes or anything else:
C = AB
Matrix multiplication (and matrix-vector multiplication) is easy to denote. Tensor multiplication (well, really tensor contraction) is annoying to denote, and that is the point of summation notation.
Notation is easier when there is less to denote:
- Matrix multiplication: only one thing you can do (well, you can add too, but addition distributes)
- Tensor multiplication: all you can do is add, tensor product, and contract. First two are easy, last one you add extra notation for. This notation can also express permuting indices but this is the same as multiplication by a particular tensor and contraction so ok to do.
- map,reduce,fold,etc. These can do quite a lot but not everything (eg processing multiple and in particular different data structures is often annoying. Processing things out of the usual order is often annoying. Processing eg consecutive elements is often annoying). One gets convenience for common things by making it impossible/hard to express uncommon things. If you see something that looks like map(f,x), you know the size of the input is the same as the size of the output and (hopefully) f acts independently on each element.
- loops/tail recursion: these give you more power, so looking at a loop there are many things that could possibly be happening, and one must look carefully to find out which.
> Processing eg consecutive elements is often annoying
the only place i saw a higher-order function for this is the K language with its `eachpair` combinator (spelled `':` iirc). i wonder why it isn't more common!
(though in Haskell i'd just stick a `tails` somewhere to turn `[a,b,c]` into `[[a,b,c],[b,c],[c],[]]` and map over that)
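For plain JS arrays the same combinator is a few lines (a sketch; `eachPair` is not a standard function):

```javascript
// Apply f to each consecutive pair of elements.
function eachPair(f, xs) {
  const out = [];
  for (let i = 0; i + 1 < xs.length; i++) {
    out.push(f(xs[i], xs[i + 1]));
  }
  return out;
}
```

For example, `eachPair((a, b) => b - a, [1, 4, 9, 16])` gives the gaps `[3, 5, 7]`.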
That might be true in some way, but having a map/reduce abstraction could help map the operation on a computing cluster if necessary, or more generally abstractions allow to make use of more complex processing techniques (that take into account cache locality, NUMA, advanced matrix multiplication algorithms [1], etc.).
So, I would say that the thing you wrote is easier to understand if you look at the low-level code, but you have to look at it, while you could have the same overview at a high level, which provides a different perspective. In C++, that would be overloading the * operator to perform the loop in a method.
It boils down to top-bottom vs bottom-up, I think. And even though I often use the looping technique, I feel like a lot of my code is spent writing these "useless" loops, while caring for the index, boundaries, etc. I really like vectorization as used in Matlab/Octave to deal with that sort of things, and I feel like this is an answer of its own to the topic.
unless you actually start with these languages, then move to loops. Which is really easy. Map/reduce is fundamental to Scheme like the closure is fundamental to JavaScript.
I'm not sure how many colleges teach it 1st year, but Northeastern University uses Scheme as the intro language. The professor there wrote the blue book.
Actually I think you do know of more than zero universities, since you replied to a comment that named Northeastern. A sibling comment to yours named another. Here's another: https://cs.brown.edu/courses/info/csci0170/
Just close your eyes and when you learn about recursion, closures, serialized data, evaluation, ... in Javascript, then just pretend you are using Lisp with a fancy notation. Even the original developer of Javascript wanted to do some Lisp/Scheme, was hired to do so and ended up with a slightly different language...
What I like about the applicative/functional style is that it linearizes the thinking process.
- function arguments represent what imperative programming means as state
- each recursive call is a step in the walk toward convergence
- every step has to reduce[0] something in the arguments
- every step is clearly identified in the recursive call
I don't think it is "better" than mutable state + while () { }, but it helps my brain massively.
It may also be linked with old research (Turing and others) about expressing program space as a N-dim vector that has to reach a point in this N-dim space.
Some people might be able to make sense of complex imperative code, old array/string algorithms are a bit like that, maybe Knuth has some dedicated brain area I don't know of. But all in all, starting with functional may be a better pedagogical space.
[0] see htdp, sicp, friedman the <adj> <verb>er books
In the pure function pipeline data flow, no loop is required.
I wrote a 100k-line pure Clojure project; no loop is used, and tail recursion is used only once. Most of it uses higher-order functions for dataflow processing or data driving.
I read the documentation on the URL you provided, and the pure function pipeline data flow is essentially different from Rx:
1. The essential difference between programming methods lies in their inherent ideas and models. The idea and model of pure function pipeline data flow are highly consistent with integrated circuits.
2. Pure function pipeline data flow emphasizes the data flow; Rx emphasizes the asynchronous event flow, and does not mention or emphasize the data flow.
3. Pure function pipeline data flow is composed of only 5 components, so the model is much simpler than Rx's.
4. Pure function pipeline data flow emphasizes the sequential structure (a series of pipeline components), so maintenance (code reading, extension, debugging, etc.) is simpler.
5. The asynchronous event flow of pure function pipeline data flow is simpler than Rx's. I wrote an asynchronous event flow in my project; it is just queue processing, so there is no need to mention it specifically.
Thanks for summarizing the differences. I think anyone interested in function pipeline data flows will also be interested to know about Rx and RxClojure as they both allow programs to be structured to work over a stream of data in many other languages as is available in Rx.
So, I was sort of expecting this article to use recursion instead of a regular loop, but it was not to be, as the author went ahead and used a for-loop anyway and abstracted a bit of logic into a separate function. Silly premise overall.
And if one wants the (implicitly hidden) behaviour that highlighted products stay highlighted forever, just add the condition p.highlighted to the disjunction above.
The standard doesn’t specify whether or not forEach is a “loop”. One could implement it using tail recursion. FWIW I would claim that such recursion should also be called a loop, because the control flow goes in a loop.
You're splitting hairs. I think we can both agree that no implementation does it as such. And if you're going to go to that low of a level, there's no such thing as a loop beyond emergent behavior (they're all conditional branches/jumps).
I was hoping for a massive parallel solution! You just run the same function on a million cores at once, with different inputs.
But I think there's a big difference between a language that lets you expect that iteration b will happen after iteration a, vs a language that merely guarantees that the result is derived from application of the function onto the necessary inputs.
The latter can be correctly parallelised, but the former can only ever be executed in serial. `map f [1,2,3]` (in Haskell) can be solved in parallel or in serial. `[1,2,3].map(f)` (in modern JS) must be executed in serial, since f might be the result of `() => { var j = 0; return (i) => { j += i; return j }}`.
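Running that closure through map makes the order dependence concrete:

```javascript
// f closes over mutable state, so each application depends on the ones
// before it -- the elements must be visited left to right.
const makeF = () => { let j = 0; return (i) => { j += i; return j; }; };

const serial = [1, 2, 3].map(makeF()); // running sums: [1, 3, 6]
```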
`map` is strictly less "powerful" than a "loop", so I'd argue that's misleading.
Which I'd also argue is a good thing. Using the least powerful construct available both communicates to future readers of the code, and allows higher level optimization (eg, `map` is trivially parallelizable, an arbitrary loop is not)
In javascript, labels can only be used in the context of loops (break and continue to a label). But that would be the proper solution in most languages.
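For reference, a labeled break in JavaScript looks like this (a sketch):

```javascript
// Search a grid and escape both loops at once with a labeled break.
function findPair(grid, target) {
  let found = null;
  outer:
  for (let r = 0; r < grid.length; r++) {
    for (let c = 0; c < grid[r].length; c++) {
      if (grid[r][c] === target) {
        found = [r, c];
        break outer; // leaves both loops immediately
      }
    }
  }
  return found;
}
```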
Really its the same kind of thing as teaching people to "run the code in their heads" when trying to debug -- an extremely primitive yet necessary skill... but not a utility interesting enough to bother running by HN IMO
I agree. Honestly, I thought he was going to be using Scheme, and maybe derive a for loop by using recursion.
But honestly, I hate the misleading title of this post, because it completely misdirects what the post is about, which is just breaking logic up into functions. Typically, you'd teach this by explaining the concept of functions, rather than focusing on loops, which is completely irrelevant to the proper use of functions.
Everyone in this comment section was expecting some kind of programming language trickery, but that's not what this post is about. The author is illustrating a refactoring technique.
Half the people here are discussing the headline not the article because people don't read articles, they read the headline and jump to the comments.
In this case it drowns out discussion of the article because the article is terrible. It doesn't start with the code as it would look like with loops, but instead says "Let's pretend you don't have loops" and then refactors the logic which has nothing to do with loops, then puts the loop back in.
The same refactoring would be just as clear with loops, and in fact the version without the refactor might be even more clear.
A static function which is only called from a single place isn't always more clear than having the logic inside the loop, especially if all that is left from the original function is the loop.
I think calling the article terrible is a little harsh. It's clearly meant for people just starting with programming, and it utilizes a teaching technique that can be very powerful. It isn't meant to teach you how to do things in the best possible way, but to artificially constrain you so that you approach the techniques you know from many different angles to reach a better understanding.
>Half the people here are discussing the headline not the article
This often happens with articles that have a really snappy title. If the title were just "Using refactoring instead of loops" it would probably generate more on-topic discussion, but probably less interest overall. I wouldn't go as far as to call this particular title "clickbait", but it's at least clickbait-adjacent.
I was thinking it would be an essay on computational completeness and memory, or functionality without the ability to loop, which is as fundamental as moving the tape head left or right on a Turing machine. I suppose the title of the article invites readings it doesn't deliver on. The upvotes and comments here provide evidence that the article is useful, just not what everybody might be expecting.
I'd use functional programming, which is what I'd thought this post would be about. It seems the author is not familiar with the concepts of mapping, reducing and filtering. This is just a map() command.
yes, of course. And obviously I would do it with loops, so you don't have to. Ultimately a Turing machine needs to have loops somewhere. The best you can do is abstract them.
It's really hard to teach clean code to beginners.
Concepts like SOLID principles feel like meta-academic overkill, and get in the way of "getting things done".
Juniors don't click in to the value of well structured well modularized code until they handle the pain of refactoring code that isn't, and introducing unintentional bugs into it.
Your only hope, as a senior engineer who's been through it already, is that they have enough trust in you that when you leave code review feedback that says "It would be more clear and maintainable if you refactor that into a function/make the function return the value instead of mutating it as a side effect/etc etc etc", they follow it, even if they don't yet feel the real value of it.
I can't tell if this blog post helps or not. I need to ask one of my juniors.
> Concepts like SOLID principles feel like meta-academic overkill, and get in the way of "getting things done".
For small programs, certainly. But I've witnessed the effects of not applying any kind of SOLID principle at all to larger scale software and it was never pretty.
> Your only hope, as a senior engineer who's been through it already, is that they have enough trust in you that when you leave code review feedback that says "It would be more clear and maintainable if you refactor that into a function/make the function return the value instead of mutating it as a side effect/etc etc etc", they follow it, even if they don't yet feel the real value of it.
This sounds kinda pessimistic. Why would you not hope for the best, i.e. for a junior understanding what you're saying and, most importantly, why you're saying it, and why the refactored version is better? I mean, in the end that's what's supposed to happen. And even if that doesn't work out, I'd rather have them questioning my reasoning than just blindly trusting me. I'm much more satisfied with a junior showing critical thinking, even if they just don't understand the matter, than with blind trust.
In fact, I'd even say blind trust is the "mutate as side effect" of human understanding. It may work in small, isolated incidents, but apply it at scale and everything goes wrong.
A junior who learns engineering dogma from his seniors and repeats it without understanding it is a liability, since when his code goes wrong, he won't know why because he doesn't understand the principles at work. So it's the responsibility of the mentor to not settle for blind trust, but ensure his protege does actually understand the principles.
Not sure why you and stinos think I'm advocating for blind trust.
My point is simply that cleaner code is not intuitively self evidently better without experience. It feels intuitive to senior engineers but even side by side comparisons aren't necessarily enough for juniors.
The effort overhead required to achieve it (effectively zero once you internalize it, but far from zero when you are a beginner) does not seem like it offsets the hypothetical benefits.
So articles like OP's are quite fascinating to me because they take a totally unorthodox approach to advocating for it.
I think it's a bit of a misnomer to say functional programming is the answer because you can call a function instead of writing a loop, because all you're doing is shifting responsibility for the iteration to the function you're calling (which is probably implemented via tail recursion in an FP language).
Really the answer to avoiding loops via FP is to use tail recursion, but tail recursion is not always more natural than a loop.
I agree, but I think it's worth pointing out that this abstraction costs more in a language like python. Unless I'm mistaken, I don't believe python has any way to fuse these higher order operations into a single pass.
Action is the function you are applying. `for x in ...` will pull an iterator from a generator which could be doing a lot of interesting stuff. And `if x > 7` is the filter.
There are real-life situations where you must program without loops, and eBPF programs are an example. They also have a program-size limit and various other limitations (http://www.brendangregg.com/ebpf.html#bccprogramming).
When I was writing an ACME (Let's Encrypt) TLS-ALPN client that ran in-kernel with an eBPF tc program, it led to some pretty silly looking code: https://dpaste.de/82aZ
This is sort of what Microsoft's new/experimental Bosque language is supposed to be about—a paradigm shift to "Regularized Programming", similar to the advent of Structured Programming in the 1960's.
It ends up being a form of mapping. It's a good idea/exercise for developers to engage in occasionally in order to improve their ideas on how to organize their code.
Exactly, thank you! I believe it is a very good exercise for when you are teaching a programming class.
Another possible exercise is to not use ifs. The way to navigate around that would be to use double dispatch.
Not because of any performance gain with the branch predictor, like other comments mentioned, but because it adds another tool to your toolbelt.
At the risk of inspiring a violent reaction (virtually, of course): Sorry folks, this is silly. Go solve real problems.
Does a patient care if the ML code that discovered a malignant tumor used an explicitly written loop?
As a silly test: What would be the value in rewriting the Facebook, Google, Amazon, etc. codebases to avoid loops? What’s the ROI?
Kindly provide a few examples at scale (not academic oddities) where this might actually matter?
You can’t avoid loops. Any time you are doing the same thing repeatedly, you are using a loop, whether you explicitly wrote it or not.
Sure, sure, there are corner cases. I have unrolled loops in assembly in real time embedded code where every microsecond counted. That is far from the norm and, frankly, should be avoided with absolute passion.
I have also worked with APL for ten years. You can do amazing things with the language while seemingly not looping. In reality nearly everything you do with APL includes heavy looping behind the curtains, and you better be well aware of this if performance is important.
Don’t get me wrong, the academic question is interesting. However, if you live and work in the real world there are far more interesting and pressing issues to be concerned with.
In real world, factoring out a function that's only ever used once is usually not the smartest thing to do early. You'd do that if you eg. do TDD.
The article is a false "tip": the author explicitly decides not to extract the entire loop body into a function (why?) but only a part of the logic. They've chosen the factoring approach despite the no-loop constraint, not thanks to it.
Their intrinsic reasoning that led to deciding what to split out would have been more useful, and then we could argue if it was good design tip or not.
The goal seems to be to restrict yourself for the sake of restricting yourself, as a way to induce creativity. So, hiding the for-loop away in a library function does seem like cheating...
A strange emphasis. I imagined old-school GPGPU (only with loops with fixed bounds).
Programming without branches of any kind (loop branches or otherwise) is a fun diversion; I've enjoyed it sufficiently to have put my blog at branchfree.org - will be more on branch free programming when my superoptimizer is more mature.
I primarily program in JavaScript and between Lodash and the built in iterator functions I don't see any reason to use loops other than the while loop.
I often use recursion rather than while loops; however, I find that is often difficult to do in a way that is readable and that uses tail recursion.
What I really wish is that something like "forUntil", "mapUntil", and "reduceUntil" would be added to the core language. Maybe the forEach, map, and reduce could be made generic functions that operated on any iterable rather than being defined as methods of Array.
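One possible shape for such a `reduceUntil`, written over any iterable (the name and signature are this comment's invention, not a standard or proposed API):

```javascript
// Fold over any iterable, stopping early when `done` says so.
function reduceUntil(iterable, done, step, init) {
  let acc = init;
  for (const x of iterable) {
    if (done(acc, x)) break;
    acc = step(acc, x);
  }
  return acc;
}
```

E.g. summing until the total would pass 10: `reduceUntil([4, 5, 6, 7], (acc, x) => acc + x > 10, (acc, x) => acc + x, 0)` stops at 9.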
The title of this article reminded me of that one time when I found a small programming app for android with a very limited language. It had some basic variables, if statements and a drawing routine for solid colour circles. It contained no function calls and I don't think it allowed for loops.
What it did have was timers which took a piece of callback code and could be set to repeat that code with a given frequency.
It was all very simplistic and restricted, so I spent an evening making a breakout clone in it, just to see if I could. That was a lot of frustrating fun.
Just because the loop is put in a function, the loop is still there. I would be very disturbed if someone said, "Let's not use a loop!" and then used a normal map().
Now, if they had implemented their own map() that used tail recursion, then we'd have something.
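For concreteness, such a hand-rolled map might look like this (just a sketch; as noted elsewhere in the thread, most JS engines never shipped tail-call optimization, so the tail call still consumes stack):

```javascript
// A map() written in tail-recursive style. The recursive call is in
// tail position, but without TCO each call still adds a stack frame.
function mapRec(arr, fn, i = 0, acc = []) {
  if (i === arr.length) return acc
  acc.push(fn(arr[i]))
  return mapRec(arr, fn, i + 1, acc) // tail call
}

mapRec([1, 2, 3], x => x + 1) // [2, 3, 4]
```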
Well, I agree it’s still imperfect. The thing is, though, in JS the fact that map uses a “loop” is an implementation detail; Array.prototype.map is a native function.
Since the point of this experiment is to constrain the code to emphasize other details, I can see why it might make sense to restrict just the native JS loop constructs, though it is unclear that using map/filter/forEach/etc. would lead to better code than a simple for loop. (In this case it might be alright, but I do get annoyed when there's a mess of functional operations with complicated transformations that would've been really mundane with imperative control flow.)
I have a much harder time reasoning with loops than using map, reduce, recursion etc. I'll just say the whole truth, I can't program with loops, I get a headache and I'm forced to quit.
As an alternative, what if you didn't have (or didn't want to use) common structured programming techniques, and you wanted to implement a state machine? Like, you could use if-then, but not while, loop, do, etc? But goto was allowed (argh?).
About a decade ago I was in a discussion with someone on this, and his technique changed my mind on where, when, and how goto could be acceptable in a real code base; some of you may or may not agree - but I thought I would show you, warts (on my part) and all. Here's the thread:
...also, here's the state machine (as implemented in C); the original post of the image on the forum, I think, requires a login to see, thus this copy:
Read the thread to understand the arguments for and against, but it ultimately (I think) comes down to implementation readability and maintainability. This is a more critical thing in an embedded hardware system, of course, but I think similar techniques, or at least the mindset behind them, could be an advantage.
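JS has no goto, so the closest analogue I can sketch here (the state names and driver are my own invention, not the C version from the thread) is a table of state functions with a tiny dispatcher; each state returns the name of the next state:

```javascript
// A minimal table-driven state machine. Each state function mutates
// a shared context and returns the next state's name, or null to halt.
const states = {
  start(ctx) { ctx.log.push("start"); return ctx.n > 0 ? "count" : "done" },
  count(ctx) { ctx.log.push("count"); ctx.n -= 1; return ctx.n > 0 ? "count" : "done" },
  done(ctx)  { ctx.log.push("done"); return null },
}

function run(machine, initial, ctx) {
  let state = initial
  while (state !== null) state = machine[state](ctx)
  return ctx
}

run(states, "start", { n: 2, log: [] }).log
// ["start", "count", "count", "done"]
```

This is only a rough stand-in for the goto technique - the dispatcher replaces the jump, and readability arguments cut both ways, much as in the linked thread.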
It's kinda like how learning Golang, much later, forced me to rethink the idea of error handling. In Go, there's the concept of using "guard statements" at the top of functions/methods to essentially exit out as soon as possible if something untoward occurs or is passed as an argument incorrectly. Basically a bunch of simple if-then statements at the beginning, then your code proper. It was one of those things I recall as being something I railed against at first, but I quickly came to understand the reasoning behind doing it that way, rather than nested if-then statements or other constructs within the logic. It produced simpler, easier to follow code - but at first, it felt very "wrong" to me for some reason. It was one of many "golang-isms" I had to come to terms with, before I really appreciated the whys and whats of the language.
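The guard-clause idiom carries over to JS directly; a made-up example (the function and its fields are hypothetical, purely to show the shape):

```javascript
// Guard clauses up top, happy path un-nested below - the Go style
// described above, transplanted to JS.
function applyDiscount(order, code) {
  if (!order) throw new Error("no order")
  if (!code) return order            // guard: nothing to apply
  if (order.total <= 0) return order // guard: nothing to discount
  // happy path:
  return { ...order, total: order.total * 0.9 }
}
```

Compare that with the nested-if version, where the real logic ends up three levels deep and the error paths are scattered at the bottom.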
Which ultimately is what these techniques are all about - the author touches upon this: restricting your toolset can lead to fascinating solutions, and in many cases this simplification is reflected in the code as well; things can become easier to understand, since less is being used. It's also part of the reasoning behind "demo competition" contests such as the 256-byte challenge or 1K intros. The same goes for pixel art, especially tile art using only 8 colors (or even just black and white!) in an 8 x 8 grid-space. Some amazing art comes out of operating within such restrictions.
I've heard/read that great programmers are those who are able to remove more code than they add for a given change (aka - simplifying the code). These techniques and everything else may all be a part of that idea.
I'm going to expand on this - the author of the article doesn't mention a couple of things you can do if you don't use loops, or don't have them for some reason. That's understandable, because the author is working in javascript, not in something lower level.
However, that may change with WASM coming into its own; depending on whether anyone takes up the challenge of hand-coding WASM (that actually isn't an "if" - I am certain people are doing it as I type this).
Those two commonly known (well, used to be - less so today, maybe, outside of embedded programmers of 8 and 16 bit controllers - and the retro and demo scene) methods:
1. Self-modifying code
2. Overflows
Number 2 could be seen as a subset of number 1, depending on how an overflow is exploited.
It's been decades since I've played with either of them - and really only at a "toy" level, but they were common techniques for those well versed in assembler (for whatever processor or architecture), as they could be used to write both more efficient and smaller code for a given routine.
Especially in the old 8-bit realm and before, when memory was precious and expensive, and not very large, such techniques could allow for more functionality than what might seemingly fit into the memory space available. They were used for everything, particularly games on 8-bit machines, as well as more mundane software.
The downside is that the code created was often very "opaque" in the sense that you really had to understand how potentially the whole system was working in order to understand how the technique worked and was being used in context. This worked to the advantage of some developers - namely those writing old-school viruses/trojans/malware - as the ability to have self-modifying code meant they could easily hide themselves from a malware detector, and do other nefarious (and honest, sometimes quite interesting) techniques. It's less a thing today, but back in the day, there were some amazing bits of code floating around that scene.
The guy who wrote about Mel understood all this - or came to understand it - very well. Mel did it mainly because he didn't trust compilers to make fast code (which they probably didn't back in his day - even today, there are edge cases that fall thru the cracks - it's just code, it can't do everything - yet) - so he hand coded everything. He also had the advantage that he knew precisely how the system worked, especially the drum memory of that system, and by precisely timing things, and deft use (abuse?) of certain registers, he was able to do seemingly wizard-like tasks. I can't say he didn't use any loops - he did in fact use one; it looked like an endless loop with nothing inside it, which by rights should just cause the machine to hang. But the author of Mel's story found that the machine, stepping through the code, would pass right thru this seemingly endless loop. He explains how Mel did it, and it really is a thing of wonder, combining register manipulation, overflow techniques, and self-modifying code to the utmost.
Read the story to find out more (Mel - as far as can be determined, was likely a real person; there is a picture of him out there that was dug up. No one knows for certain whether that Mel is The Mel - but most believe the person depicted probably is).
Also google around on such techniques to learn how they work; you likely won't be able to apply them to your day-to-day (unless you work in assembler or something like that), but you might learn something useful for your toolbag.
Oh - one other thing - I kinda lied about how these tricks can only apply to a low-level; that's not completely true. There are certain techniques that can be done with higher level languages (and not using exploitable bugs). I won't go into detail here, though. Maybe you can work it out on your own?
I teach a JS coding class and I had the luxury of creating the curriculum. To the horror of my peers, I decided that coding with loops was not necessary for beginners and got rid of loops altogether from our curriculum. I thought OP was proposing the same thing, but I was wrong. The refactoring was good, but my students will probably end up with the following result (more in line with your no-loops title):
```
function highlightProducts(aListOfProducts, i = 0) {
  if (i === aListOfProducts.length) return null
  const product = aListOfProducts[i]
  if (productShouldBeHighlighted(product)) {
    product.highlighted = true
  }
  return highlightProducts(aListOfProducts, i + 1)
}
```
The added benefit of the above approach is that debugging is easier! If you want to debug the last 5 items, you can simply call the function with a starting index near the end: `highlightProducts(aListOfProducts, aListOfProducts.length - 5)`
Our first year coding lecturer did similar, but instead of teaching us Java (which is what most other classes used at the time) he insisted we use Gofer.
From the point of view of a student who was excited to finally learn "real" programming, having this toy language crammed down our throats instead, along with a side-serving of smugly academic FP-superiority, was enough to sour me on FP for a long time.
Hopefully you gave your students this more as "here's a neat trick to add to your toolbox" and not "listen up plebs, iteration is for losers."
This seems a bit worrying to me as a general principle, since Javascript's lack of tail call optimization (TCO) means that you can blow the stack pretty easily by manually recursing. Just my two cents :-)
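To make the risk concrete: a function that recurses once per element will throw on large inputs in engines without TCO (Node/V8 among them):

```javascript
// Recursion depth grows with input size; without tail-call
// optimization, each call adds a stack frame.
function count(arr, i = 0) {
  return i === arr.length ? i : count(arr, i + 1)
}

count(new Array(100))    // fine
// count(new Array(1e6)) // RangeError: Maximum call stack size exceeded
```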
(Out of curiosity, have you considered starting at raw recursion – like in your example – but then moving on to using utility functions like `map` from e.g. Ramda or lodash?)
Hey! you are the one from the blog comment :) It must be pretty fun to create those curriculums!!
I agree with you that the restriction of not using loops can also direct you to apply recursion.
However, do you believe recursion makes it easier to debug certain parts of the array than adjusting the starting and stopping points in the guard of a for loop?
Ultimately, everyone should use higher order functions. In your case (as other commenters have said), map or forEach should have been used.
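For comparison, the same highlighting logic with forEach (I'm passing the predicate in as a parameter so the sketch is self-contained; the original used a free-standing `productShouldBeHighlighted`):

```javascript
// Loop-free at the call site: iteration is delegated to forEach.
function highlightProducts(products, shouldHighlight) {
  products.forEach(product => {
    if (shouldHighlight(product)) product.highlighted = true
  })
  return products
}

highlightProducts(
  [{ price: 2 }, { price: 10 }],
  p => p.price > 5
) // only the second item gets highlighted: true
```

No manual index, no stack growth, and the "last 5 items" debugging case is just `products.slice(-5)` at the call site.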
I don't really want to make a statement that recursion makes debugging easier because I don't code like that at work. I use utility functions (map, reduce, filter, etc.)
Thought you were going to say you teach functional basics like map, filter, reduce, etc.
Converting between recursive/iterative is a good exercise, but teaching one and not the other seems unnecessary, especially considering you see more loops in the wild.
---
also fwiw can just
```
highlightProducts(aListOfProducts.slice(-5))
```