Yes. (We have spent the last 10,000 years recovering from the Younger Dryas and we are only just now getting back on our feet. Heck, most of us still think agriculture is a good idea when really it's about the dumbest way imaginable to relate to the soil. But I digress.)
> Most stuff around the world is poorly engineered and just gets the job done. Wooden bridges with rope and wire holding them together and no analysis whatsoever done on what load it can bear. For most of history those bridges made up the majority of the bridges in the world. And it worked just fine until modern transport put higher demands on bridges. Yet, you still find clunky wooden bridges all over the undeveloped world, and they continue to work.
Ah, but none of those bridges are built out of electrified math.
Software is electrified math and it can be perfect.
And it's self-referential: we can write perfect meta-code that emits only perfect code.
> Should we try to do better? Of course we should. But someone has to pay for it. It doesn't happen magically.
My point is not that we never try. My point is that the world contains many attempts, and most of them have been ignored by most working programmers.
> Regarding the Toyota Unintended Acceleration bug, that's gross negligence if the top priority coming from management wasn't quality and if someone can prove that, they should end up in jail. And I would not excuse the developers either. I would never ship code that I know might kill someone. I would rather quit and work as a cashier. Please re-read my original post because I never said quality isn't important. I only said that quality is not always a top priority, and that in some cases it should be a low priority. A website for a 2 week marketing campaign will never kill anyone. It would be a waste of resources to insist on anything other than just shipping it once it works.
Let's assume, for the sake of argument, that I'm wrong and correct software always costs more than incorrect software. In this scenario (which may well be the REAL scenario) you have put your finger on the important bit: we're talking about the location of the inflection point.
Allow me to reference Randall Munroe, "Is It Worth the Time?" https://xkcd.com/1205/ It's a handy chart that shows, "How long can you work on making a routine task more efficient before you're spending more time than you save? (Across five years)"
It's not precisely what we're talking about, but it's got the same flavor: how much do you expect to use the buggy software vs. the cost of correctness...
Now my point would be: the industry should have had a house-on-fire urgency around reducing the cost of correctness, shifting the inflection point downward so that all but the most trivial software can be made correct economically.
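To put toy numbers on that (every figure below is made up, just to show the shape of the tradeoff): the correctness-first approach wins once the bugs you would otherwise ship cost more than the extra up-front work. A quick back-of-the-envelope sketch in Haskell, with hypothetical numbers:

    -- Back-of-the-envelope tradeoff; every figure here is hypothetical.
    extraCostOfCorrectness :: Double
    extraCostOfCorrectness = 40   -- extra engineer-hours to do it "right"

    expectedBugCostPerMonth :: Double
    expectedBugCostPerMonth = 5   -- hours per month lost to shipped bugs otherwise

    breakEvenMonths :: Double
    breakEvenMonths = extraCostOfCorrectness / expectedBugCostPerMonth

    main :: IO ()
    main = putStrLn ("Correctness pays for itself after "
                     ++ show breakEvenMonths ++ " months")

Cheaper correctness tooling shrinks the numerator, which drags the break-even point toward zero, and suddenly correctness is worth it for far more of the software we write.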
We should have been doing that since forever (or at least since sometime after the Apollo 11 mission). Instead we generally ignore these sorts of things.
> If you're honest with yourself and look around, you do it all the time in your own life too. You draw a diagram on a piece of paper or a white board to explain a concept and then you throw it away or erase it. You don't carve it in stone just because it will last longer and someone a hundred years from now might find it useful. You put up a simple rope barrier to keep people from stepping on newly planted grass. You don't erect the wall of China. Temporary "low quality" solutions that will later be dissembled and thrown out (or save the rope for reuse at least) are often the best fit based on the requirements.
Have you been to Daiso? It's the Japanese dollar store. Pretty much any human problem that can be solved by ten ounces of plastic can be solved at Daiso for $1.50. I'm not generally into consumer culture, but I love Daiso.
You don't need 'temporary "low quality" solutions' if you have Daiso.
It's not that I don't use hacks, or don't respect them; it's that we're so far behind where we should be in terms of off-the-shelf solutions (to programming) and we don't seem to be quick on the uptake...
> I'm very skeptical that markets would completely ignore an opportunity to beat the competition if the costs were exactly the same but the results were higher quality. But I'm going to look into this. Thanks for the reference. I'm going to read James Martin's book and see if I learn something new that will help me write better code.
God bless you! (I collect powerful ideas and I cannot tell you how many times people have said, "If $FOO is so great, why doesn't everybody use it already?"... I don't know! I don't freakin know! It makes me sad. All of human history is a tire fire, indeed.)
- - - - - - - - - - - - -
This is my reply to your later comment on this same thread.
First, wow, I'm impressed. You are actually doing the homework and I tip my hat to you with great respect. Seriously, that's the nicest thing you could have done and I really appreciate it.
Second, yes, the language and presentation around these "HOS" ideas have apparently always been really bad, with the issues you describe. It also doesn't help that the necessary background knowledge and jargon weren't widespread at the time.
Third, yes, it was panned by Dijkstra and the Navy. I've read both of those reviews, and their objections are not without merit. But, and I say this as someone who has huge respect for Dijkstra, they were both wrong: they both missed the fundamental advantages, or "paradigm" if you will, of how HOS et al. works.
(Also, I have seen that Simon Peyton Jones interview. And no, we're not just talking about Functional Programming; that's kinda orthogonal. E.g., Haskell helps you write code with fewer bugs; HOS prevents them in the first place. Another way to differentiate them: if you're typing text in a text editor to make software, you're not doing HOS, regardless of the language.)
So, poor marketing, bad reviews, obscure principles and the general disinterest of industry led to this powerful technology languishing.
Yet, I insist there's something there. Let me try to convey my POV...
In modern terms I can describe the crucial insights of the HOS system concisely. Here goes:
Instead of typing text into a flat file of bytes and hoping it describes a correct program, the HOS method presents a tree of nodes that is essentially an Abstract Syntax Tree (except it's concrete: there is no source text, so there is no syntax driving it). The developer edits the tree using only operations that preserve the correctness of the tree.
This is like ParEdit[1] in emacs, or a little bit like some of what J. Edwards is attempting with Subtext[2], or the old "syntax-directed programming environment called Alice"[3] for Pascal. (Again, it's not that no one has ever tried anything like this; my whole point is that powerful techniques for writing software with fewer bugs have been around for a long time and we, in general, don't use them.)
The main difference from these is that HOS uses a very simple and restricted (but Turing complete) set of operations to modify the tree: Sequence, Branch, Loop, Parallel. (There are some "macros" built out of these operations for convenience, but underneath it's just these four.)
Starting with a high-level node that stands for the completed program, you gradually elaborate the tree to describe the structure of the program, and the editor/IDE enforces correctness at each step. You literally cannot create an incorrect program.
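To make the flavor concrete, here is a minimal sketch of that kind of program tree in Haskell. The node names and the example program below are my own invention, not HOS's actual notation; the point is only that the tree is the program, and every editing step takes you from one well-formed tree to another.

    -- A toy program tree built from only the four HOS-style control
    -- structures. The names here are mine, not HOS's actual notation.
    data Node
      = Todo String              -- an unelaborated node: "still to be specified"
      | Leaf String              -- a primitive action
      | Sequence Node Node       -- do the first, then the second
      | Branch String Node Node  -- test a condition, take one of two subtrees
      | Loop String Node         -- repeat the body while the condition holds
      | Parallel Node Node       -- two independent subtrees
      deriving Show

    -- The starting point: a single node standing for the whole program.
    root :: Node
    root = Todo "process the day's invoices"

    -- One possible elaboration. The editor only ever offers ways to split a
    -- Todo into the four structures (or finish it as a primitive action), so
    -- there is no way to end up with a malformed tree.
    elaborated :: Node
    elaborated =
      Sequence
        (Loop "more invoices?"
          (Branch "total > 1000?"
            (Leaf "route to a manager for approval")
            (Leaf "approve automatically")))
        (Leaf "print the end-of-day report")

    main :: IO ()
    main = print elaborated

Of course this toy only captures the structural shape; as I understand it, the actual HOS axioms also constrain how data flows between a parent node and its children, which is where most of the real guarantees come from.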
Apparently normal people, accountants and such, could sit down in front of the IDE and, with a little training and coaching, learn to describe their own work processes in it and essentially write programs to automate (parts of) their own work.
I've been working towards bringing this to market, on and off, for years now. In fact, my first programming job was the result of a talk I gave on a prototype IDE at a hacker convention about fifteen years ago. I have just finished implementing type inference and type checking for my latest vehicle: a dialect of the Joy programming language. It has been slow going (I lead a chaotic life) but I'm on the cusp of having something I think will be really great. If it works, it will revolutionize software development.
Quixotic, I know, but somebody's gotta tilt at those windmills...
Anyway, thank you again for taking the time to look into this. I can't tell you what that means to me personally. I know the "Provably Correct" book is terribly written, but I urge you to try to look beyond that. All I can really honestly tell you is that I'm convinced there's something really important and useful there.
[3] "In a syntax directed editor, you edit a program, not a piece of text. The editor works directly on the program as a tree -- matching the syntax trees by which the language is structured. The units you work with are not lines and chracters but terms, expressions, statements and blocks. " https://www.templetons.com/brad/alice.html
That's about all the time I have left to spend on this. If you truly have discovered a way to do what you're claiming and it has just been a victim of bad luck and poorly written books in the past, then there is a huge market opportunity and I wish you the best in bringing it to market.
Interesting discussion. Thanks. I will keep my eye on this space from time to time.