
Why do you think termination is easier to reason about in eager languages? I think it's quite the opposite: in lazy languages functions compose; in strict ones they don't necessarily. For example, you cannot compose a "take ten values" function with a "square all elements" function in a strict language if the argument you apply their composition to has infinite length (e.g. is cyclic).
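A minimal Haskell sketch of the composition described above (the names `nats` and `firstTenSquares` are illustrative):

```haskell
-- An infinite list of inputs.
nats :: [Integer]
nats = [1..]

-- Composing "square all elements" with "take ten values".
-- Under lazy evaluation only the first ten squares are ever computed;
-- a strict language would try to map over all of nats before taking.
firstTenSquares :: [Integer]
firstTenSquares = take 10 (map (^2) nats)

main :: IO ()
main = print firstTenSquares  -- [1,4,9,16,25,36,49,64,81,100]
```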


With a lazy language it's not always obvious whether a given expression needs to be evaluated now or not. In particular, I was writing some list comprehensions in Haskell, trying to solve some Project Euler problems, and when I introduced a bug I got an infinite loop. When I fixed the bug I got the correct answer thanks to lazy evaluation, but the buggy code and the correct code looked awfully similar. Maybe it is just my inexperience with lazy languages that caused the trouble (I came from C), and learning two radically new concepts, functional programming and lazy evaluation, was too steep a learning curve. Or perhaps list comprehensions aren't really supposed to be (ab)used like that. Unfortunately I don't have that code anymore; it would have made it clearer what I'm referring to...
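The original code is lost, but a hypothetical pair in the same spirit — two near-identical definitions where one terminates and one doesn't — might look like this:

```haskell
-- Terminates: takeWhile stops scanning at the first element that fails.
small :: [Int]
small = takeWhile (< 10) [1..]

-- Looks almost identical, but a comprehension guard is a filter:
-- after 9 it keeps scanning [1..] forever looking for more matches,
-- so forcing this whole list never terminates.
smallLoop :: [Int]
smallLoop = [x | x <- [1..], x < 10]

main :: IO ()
main = do
  print small               -- [1,2,3,4,5,6,7,8,9]
  print (take 9 smallLoop)  -- safe: laziness only demands nine elements
```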


Maybe it is because you didn't show an example, but that doesn't seem like something caused by lazy evaluation. It is pretty easy to get into infinite loops due to logic errors in Project Euler type problems even when using imperative loops.


Because I can walk through a strict program in my head (or on paper), and see every value, what is being calculated, and what is being stored. With a lazy language, I can't necessarily do that. The entire memory of my program can quickly fill up with thunks. The optimizer determines when things actually get calculated, and I need a much deeper understanding of my code to figure out the space constraints.
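The classic illustration of memory filling up with thunks is `foldl` versus `foldl'` (a sketch, with an arbitrarily chosen input size):

```haskell
import Data.List (foldl')

-- foldl builds a hundred-thousand-deep chain of suspended (+) thunks
-- before anything is forced; foldl' forces the accumulator at every
-- step and runs in constant space. Same result, very different memory.
lazySum, strictSum :: Int
lazySum   = foldl  (+) 0 [1 .. 100000]
strictSum = foldl' (+) 0 [1 .. 100000]

main :: IO ()
main = print (lazySum, strictSum)
```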

Just writing code? I guess I can agree that lazy evaluation makes it easier, but performance still needs to be considered.


The eager vs. lazy distinction is not relevant here. There are some programs that under eager application semantics will not terminate, but under lazy application semantics they will, and vice versa. What complicates reasoning about termination is the presence of mutation. In a lazy language, you'll never have mutation so you don't have to worry about those complications. You could have an eager language with no mutation (e.g., Elm[0]), but for the most part eager languages include some form of mutation and so you may have a harder time proving termination.

[0]: http://elm-lang.org


I don't think that's right. There is a basic theorem in (untyped) lambda calculus — Church–Rosser plus standardization — that says that if a term has a normal form, that normal form is unique, and normal-order (non-strict) reduction is guaranteed to reach it. Since non-strict evaluation can "skip" arguments that have no normal form, there are expressions in non-strict languages that terminate while their strict counterparts don't. The opposite scenario does not exist: if an argument actually has to be evaluated, it is evaluated under both the strict and non-strict strategies. (And in the simply typed lambda calculus, all reduction sequences terminate.)
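A tiny Haskell demonstration of non-strict evaluation "skipping" an argument with no normal form (`omega` and `result` are illustrative names):

```haskell
-- omega has no normal form: it reduces to itself forever.
omega :: Int
omega = omega

-- Non-strict evaluation never touches the unused argument, so this
-- reduces to 42; a strict language would evaluate omega first and loop.
result :: Int
result = const 42 omega

main :: IO ()
main = print result  -- 42
```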


For (actual) programming languages without mutation and with more than just lambdas and application for control flow, it is right:

    f x y = y
    f (raise Done) Ω
Under lazy application semantics this program will never terminate. Under eager application semantics it will.
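A hedged Haskell transcription of that counterexample, using `error` for `raise Done` and `seq` to simulate eager application (the lazy side can't be run to completion, since it genuinely never terminates):

```haskell
import Control.Exception (ErrorCall (..), evaluate, try)

-- The parent comment's f, transcribed: it ignores its first argument.
f :: a -> b -> b
f _ y = y

-- Omega: a computation that never produces a result.
omega :: Int
omega = omega

-- Under lazy application, f (error "Done") omega returns omega, and
-- forcing that loops forever, so the lazy program never terminates.
-- Eager application evaluates the first argument before the call, so
-- the exception fires and the program terminates without touching omega.
fEager :: a -> b -> b
fEager x y = x `seq` f x y

main :: IO ()
main = do
  r <- try (evaluate (fEager (error "Done") omega)) :: IO (Either ErrorCall Int)
  putStrLn (case r of
              Left (ErrorCall msg) -> "terminated with exception: " ++ msg
              Right v              -> show v)
```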


It doesn't necessarily have to be that way.

You can have both composable functions and non-wasteful semantics without turning the whole language into a lazy mess.

Note that this approach will also allow you to abstract over different data sources more easily.



