
I wonder whether maths really can be self-taught for more than about two generations. It has been known since the beginning of maths that it must be taught by teachers who understand the ideas to students who challenge the descriptions.


Yes, math can be self-taught. I am a living example. I went through school, up to 10th grade (junior high school), getting only minimum passing grades. I was bad at everything except History, the only subject I liked, and even there I was getting middling grades, not the highest marks.

Then, in the summer vacation between 10th and 11th grade, I got hit with a hobby: electronics. By accident I read an old book from the '60s in my father's collection about how to build a shortwave radio powered by a potato (the vegetable, not the sarcasm!), and I went down that rabbit hole. I started to learn about electronic components and what they do. That required Physics, which I knew nothing about, so I started to learn Physics as well. And have you ever encountered Physics that is applicable in practice without Math? So I started to learn Math all by myself. Between the ages of 16 and 18, while a senior in high school, I made up for the previous 10 years in which I had slacked off worse than a worm. That let me get into University, where I was hit by another hobby: programming. Both hobbies are still with me to this day.

So yeah, I would argue that not only is it possible to learn Math all by yourself, but that by doing so you actually learn it even better than when it is forced upon you, even if you're good at it.


This is a good point, but I think it is relevant to ask whether most people learning math in school have the experience you describe. I certainly did not. My math education consisted purely of recitation of things I didn't fully understand, and I got very good grades all the way through.


When you say self-taught do you mean even without using a really good text? Or do you mean that it actually can't be taught without a teacher? If you are saying that math can't be taught without a good teacher I don't really see how that is "known".


Even if it requires a good teacher, does society benefit from forcing it on individuals at a time they don't want to learn it? Or would we be better served by making good teachers available to those who want to learn it, at the time they want to learn it?

I'd -love- to go through some of my college-level math classes: both the ones I took and the ones I didn't need to take. They weren't especially relevant then. They're slightly more relevant now, but more than that, I'm -interested- now. But self-learning takes too much of an investment of time (when I run into something I don't know, I have to research -that-, and it becomes an endless dive down rabbit holes, rather than having someone who has gone before and can give me a sufficient answer to unblock my original question).


Newton famously taught himself calculus. As did Leibniz.


If anything, math is probably one of the easiest subjects to self-teach. The basics (which go all the way up to early university level) haven't really changed in hundreds of years, learning resources are readily available, and you don't need any special equipment.


That's mostly a romantic fantasy. The reality is poring over books and practicing by doing the exercises.


Do your analysis per capita, please. As a former gigging musician, I assure you that it doesn't matter how much money is being given to the Big Four by streaming services; it only matters how many gigging hours there are per week. And the number of gigging hours per week is limited, while the amount of music produced per gigging hour has not changed in centuries.

This is a bad, ignorant example. Not just wrong, but flagrantly, wildly, outrageously misinformed.


There's evidence against strong Sapir-Whorf in the logical-language subculture. They originally wanted people to learn logic by learning Loglan, but today the speakers of Lojban and Toaq are usually either fluent or logical but not both.

The weakest forms of Sapir-Whorf are obviously true, via Zipf's law; being able to shorten long phrases into short nonce words allows for faster communication, which allows for normalization of concepts, in a positive feedback loop. In English, for example, it's no accident that the two shortest words are "a" and "I" and that they are also the two most common ways to refer to things; "u" is on its way there, too!


Two days ago: https://news.ycombinator.com/item?id=24766148

Unsure how Aeon is doing this; some sort of URL trickery?


It is essential when we look at deep time, or any sufficiently high-dimensional and detailed time series, to remember that we are only looking at a tiny slice. We always like to talk of "the tree of life", which might mislead folks into thinking that we get a clean cross-section of every branch of some high-dimensional tree. But, in truth, what we get is more like a tiny wedge cut out from a beanstalk with many central vines; we have only small leaves and cuttings from a mighty thick overgrowth of life.


Very true! I was also fascinated to find that it wasn't uncommon in the fossil record to have evolutionary lineages of species branch and later recombine. See: https://theconversation.com/dna-dating-how-molecular-clocks-...

> DNA holds the story of our ancestry – how we’re related to the familiar faces at family reunions as well as more ancient affairs: how we’re related to our closest nonhuman relatives, chimpanzees; how Homo sapiens mated with Neanderthals; and how people migrated out of Africa, adapting to new environments and lifestyles along the way. And our DNA also holds clues about the timing of these key events in human evolution.


Also, some of the numbers don't seem that big in context. But this situation is 7 million years - plenty of time for evolution to make lots of changes! It just doesn't seem like that big of a gap when we talk about hundreds of millions of years.

The scale is astonishing.


In a much more recent, much smaller gap, horses originated in North America but died out there, only to be later reintroduced from the Old World.

https://en.wikipedia.org/wiki/Evolution_of_the_horse

> "The evolution of the horse, a mammal of the family Equidae, occurred over a geologic time scale of 50 million years, ... Much of this evolution took place in North America, where horses originated but became extinct about 10,000 years ago."


Yep. 7 million years is a little under 0.2% of the age of the earth.


Although the first 20% were useless in terms of life.


I'll put to you what I put to Bostrom in my analysis further downthread: What, exactly, do you think we should be doing which we aren't currently doing? Everything he implies that we should be working on, we are working on.


We are working on it, but with only so many resources dedicated to the effort (and this, right up there with climate change, is one of the most impactful problems we could ever solve, as far as I can fathom). That is secondary, however.

My __main__ concern is that these efforts are vulnerable to being rendered null by short-sighted, dogmatic legislation, a la the similar restrictions on things like CRISPR and stem-cell research. Judging from the responses I've seen in this thread and in the past, if it were a matter of a simple, single democratic vote on "Should we eliminate or drastically decrease the negative physical effects of aging?", I have serious doubts that the final tally would be in favor of that action; an overwhelming number of people seem to hold a Stockholm-syndrome-y view of death and aging. __THAT__ is the part that concerns me, and that is the part that I think concerns Bostrom, and what I believe the story is trying to address.


Yes: http://extremelearning.com.au/unreasonable-effectiveness-of-...

Sibling comments are completely correct. Graphics simulations often require quasirandom sequences; the particular sequence is not important, but any correlations in the sequence will be visible in the output as artifacts, so we want a decorrelated sequence.

If this is not enough of a real-world example for you, then Monte Carlo methods also show up in weather modeling, financial planning, and election forecasting. In those predictive systems, there is extreme uncertainty and dependence on initial conditions, which we model with quasirandom sequences and seed numbers respectively. By running many simulations with different random sequences and then examining all of the results statistically, we can get good estimates of future behavior.
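
To make that concrete, here is a minimal sketch (my own illustration, not from the thread; the two constants are assumed to match the R2 additive-recurrence sequence described in the linked article): estimate pi by checking what fraction of points in the unit square land inside the quarter circle, once with pseudorandom points and once with quasirandom ones.

    # Rough sketch: pi estimation by Monte Carlo, with pseudorandom vs.
    # quasirandom (R2 additive recurrence) points. The constants below
    # are assumed from the linked article, not taken from this thread.
    import random

    def estimate_pi(points):
        inside = sum(1 for x, y in points if x * x + y * y <= 1.0)
        return 4.0 * inside / len(points)

    N = 100000

    # Pseudorandom points in the unit square.
    pseudo = [(random.random(), random.random()) for _ in range(N)]

    # Quasirandom points: an additive recurrence modulo 1.
    a1, a2 = 0.7548776662466927, 0.5698402909980532
    quasi = [((0.5 + a1 * n) % 1.0, (0.5 + a2 * n) % 1.0) for n in range(N)]

    print("pseudorandom:", estimate_pi(pseudo))
    print("quasirandom: ", estimate_pi(quasi))

Both land near 3.14; the quasirandom points cover the square more evenly, which is exactly the decorrelation property that matters in the graphics case.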

Edit: Oh, you're not arguing in good faith. Okay, cool story. Best of luck with whatever imaginary notion of "unpredictability" you're trying to define, but you might want to return to actual maths at some point.


Quasirandom is unrelated to anything we're talking about. Quasirandom intentionally makes the output very different from random (avoiding clustering).

Everything else on the PCG website and in this thread attempts to be similar to random (with varying degrees of success). No one here is trying to be intentionally different from random.

> Sibling comments are completely correct.

No sibling comment says anything about quasirandom. The sibling comment mentioning graphics (by Karliss) actually sort of disagrees with you, saying that for graphics we want things to be as close to random as possible so that players don't see patterns. Of course there can be multiple kinds of graphics situations: some where quasirandom would be useful (the blog you linked), some where a real approximation of randomness would be useful (Karliss's situation).


We aren't completely starved yet of the bottom-up approach, but I agree that it's somewhat limited. We can explain choice and free will in bottom-up terms, which complements the article's explanations of memory and signaling.

In the classic video game "Link's Awakening" (chosen to fit the article's theme), there is a maze of signs. Each sign points towards another sign. Reading the signs in the order that they point to each other, following the chain of arrows, leads to a prize. The player is local and does not know where the prize is, but the signs encode global information about the maze. The player's memory is limited and can only remember one sign at a time, but that is sufficient. It seems to me that cells communicate and act using similar local/global distinctions.
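
To make the analogy concrete, here is a toy sketch (my own, with made-up sign names, not from the article or the game): the walker holds only the current sign in memory, yet the chain of signs collectively encodes the whole route.

    # Toy sign maze: each sign names the next sign to read.
    signs = {"start": "north", "north": "east", "east": "south", "south": "prize"}

    here = "start"               # local memory: only the current sign
    while here != "prize":
        here = signs[here]       # the global route is spread across the signs
    print("found the", here)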

> We reject a simplistic essentialism where humans have ‘real’ goals, and everything else has only metaphorical ‘as if’ goals.

This is the philosophical meat of the article, and the tough takeaway for the reader. The reader must admit that their own goals, since they are human, are not quite "real" in a way which somehow transcends the accidental success of blindly-evolving low-level components. Rather, humans do the same sort of predictive modeling, blind guessing, and lucky incidence that we see in "simpler" life forms.


Ha, hilariously naïve. I still have the "Black Dragon Fallacy" written down as something that deserves a full writeup, but in short: What's the missile made of, and how does Bostrom propose that we build it?

Bostrom brings us "fine phrases and hollow rhetoric," mostly. Sure, we should do something about aging, but what, exactly, are we failing to do as a society here? He seems to think that the problem is that we're treating aging and death as inevitable, but science already has marched past that position; instead, we now know that aging is part of a tradeoff involving cancer and is closely tied to maintenance of DNA as cells reproduce in multicellular organisms.

Further, the notion of agency is hopelessly confused by the design of the fable. In the story, humans are deliberately sending other humans to the dragon even while the missile is being readied, and Bostrom insists that we are supposed to regret this. But when we map the analogy back onto the real world, the way that humans send other humans to the dragon is via war. Will ending aging end war? How?

Anthropomorphizing psychopomps may have been a mistake, since it has led to Bostrom imagining that if we just collect all of the psychopomps into one really big mean dragon, and then kill the dragon, that we'll have defeated death. Easy peasy!


I think the main idea is to realize that aging is a disease we all have and it's lethal and we aren't doing enough to combat it. What's the budget for senescence research and why isn't it two or three orders of magnitude more?

Why isn't every ninety year old automatically enrolled in an experimental program to reduce senescence? Eighty? Seventy?

We should be desperate and taking desperate measures to fight aging tooth and nail. Instead, we seem to be casually studying it.

If we applied the world's productive and research efforts, and obliterated red tape, how much progress would we make?


> I think the main idea is to realize that aging is a disease we all have and it's lethal and we aren't doing enough to combat it.

But... it's not.

It's a label for the aggregate of the accumulated effects of many different conditions and traumas. It's a very loose multi-cause syndrome, not a disease.

> Why isn't every ninety year old automatically enrolled in an experimental program to reduce senescence?

Because consent, among other reasons.

> We should be desperate and taking desperate measures to fight aging tooth and nail.

On a social level I think this is wrong for the same reason it is often wrong to desperately scrap at extending life on an individual level: the expected return in terms of life extension does not warrant the expected cost in terms of immediate quality of life.


...Are you suggesting sacrificing the old in order to save the old?


Yes, of course. Given that they are going to die regardless, using the elderly for experimentation seems reasonable. We could be as humane and considerate as possible, providing anaesthesia and care, but it's a false kindness to let the dying die because trying to save them is risky and uncomfortable.

With twenty years of massive effort like this, and hard risk taking, would we have improvements against aging? Would the human-years saved be massively greater than the human-years we lost in experimentation? I think both answers are likely "Yes".


> ...Are you suggesting sacrificing the old in order to save the old?

Sacrificing the liberty and quality of life of the current old for the benefit of the future (probably just rich) old.


I don't know if death can really be considered a disease, since it's a fundamental part of evolution. If anything, lack of death could be detrimental to a species.


You're wrong. Death is not a fundamental part of evolution.

Natural selection does not imply that the only type of selective pressure is survival. Death is merely one factor that affects reproduction. Further, only premature death matters directly for reproduction. Most people die long after they reproduce (or have had a chance to reproduce).

That's not to say that curing aging won't change the trajectory of the human species' evolution, but to be frank I don't really care about the human species. I care about humans. If someone said: "we should continually kill off the weakest 50% of humans to make the species stronger", I'd say that person is a monster.


According to https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4635662/ the non-specific cause "senility" (R54) is insufficient as a cause of death and must be split into predicted causes when no positively identified disease or disorder caused the death.

Postponing death is a matter of treating diseases and disorders and preventing accidents and violence.

Evolution has non-death mechanisms too; bacteria exchange plasmids. With genetic modifications we're beyond the need for death anyway if we want species-level improvements.


> Evolution has non-death mechanisms too

Very importantly: different reproduction rates. This is already more important in human evolution than premature death.


"humans were far too heavy to fly and in any case lacked feathers"

Science has certainly not "marched past" the issue of life (or health) extension. Knowing that it is related to cancer or whatever does not end the story. That is like saying we now know humans don't have feathers.

Regarding agency, in the story the humans have no real choice in the matter, because the dragon will kill them all otherwise. Ending aging would be much more valuable than ending war: wars do not kill billions of people. Of course, it is possible to destroy all life on the planet in a war, but that is orthogonal to solving other problems.

"You are hilariously naive to think about solving one problem when there is another."


Please pay attention. I did not say "cancer or whatever". I said cancer [0], a specific family of diseases characterized by normal cells becoming cancerous [1], a state marked by unbounded growth and self-reinforcing DNA damage. Cancer and aging are intimately linked via telomeres [2], part of the structure of cellular DNA. Indeed, quoting the first sentence of [3]:

> Telomeres, the caps on the ends of eukaryotic chromosomes, play critical roles in cellular aging and cancer.

On war, we lose millions of people regularly [4]. We lose about a million people to genocide every year [5]. These are people that, in Bostrom's parlance, we have put on the train to go see the dragon; we sacrificed them for nothing at all. Nothing in Bostrom's tale suggests that, having defeated the dragon, we will stop killing millions of people.

On nuclear war or other disastrous climate change, something you only allude to, the Doomsday Clock [6] is currently at less than two minutes to midnight, and has never been closer. It is widely agreed that we are on the very edge of self-annihilation and that we expend a tremendous amount of political effort simply not destroying ourselves.

I can see that you're a relatively young and inexperienced account; I hope that you do some reading and improve your understanding of biology and history, rather than continuing to lean on mystic or mythic influences for your worldview.

[0] https://en.wikipedia.org/wiki/Cancer

[1] https://en.wikipedia.org/wiki/Carcinogenesis

[2] https://en.wikipedia.org/wiki/Telomere

[3] https://en.wikipedia.org/wiki/Telomeres_in_the_cell_cycle

[4] https://en.wikipedia.org/wiki/List_of_wars_by_death_toll

[5] https://en.wikipedia.org/wiki/Genocide

[6] https://en.wikipedia.org/wiki/Doomsday_Clock


You still need to explain why some whales live hundreds of years, and some trees live thousands.

If you believe that telomere shortening is the issue, we should be investigating therapies to encourage telomere lengthening.

We have access to powerful technologies (e.g. CRISPR), and we can develop technologies that are more powerful still.


Hi, to be gentle and brief: Lengthening telomeres can provoke cancerous behavior. Therefore we cannot simply lengthen telomeres by giving people more telomerase; we need a more holistic approach which understands the cancer/senescence tradeoff.

It is not a problem, in itself, for a creature to live a long time; the inevitability of cancer seems to be genetic, part of the human experience but not of all life. You mention whales, but lobsters are even more interesting: they do manufacture telomerase throughout their lives, and in old age they are killed not by cancer but by being unable to molt and keep growing. Trees are interesting too; they must always grow in order to keep living, but past a certain size the physics of water limits their ability to grow.

Indeed, if we want to understand trees and whales, my first guideline would be that, because they are so large, the rules for cellular homeostasis are different at that scale. The things which allow us or lobsters to live for long times are not the things which allow whales or trees to live for long times.


We are failing to sufficiently fund longevity research, and the basic science and engineering needed to enable it.

We are spending a ton of money on healthcare (analogous to the trains), and almost nothing on the root cause of most of these health problems.

The reason the fable is effective is that it frames aging as an adversary, which makes it much easier to understand the necessity of spending resources to defeat it.

Bostrom would say that your concern that aging and cancer are intertwined is akin to a scientist seeing that the scale is impenetrable to all known materials, and then giving up.

Somehow, whales live hundreds of years, and trees live thousands. What's different about their biology that accounts for this? Why aren't their bodies wracked with cancer?


Some of these have aged so perfectly that I only need substitute a few letters:

> Python —"the infantile disorder"—, by now nearly 30 years old, is hopelessly inadequate for whatever computer application you have in mind today: it is now too clumsy, too risky, and too expensive to use.

> It is practically impossible to teach good programming to students that have had a prior exposure to JS: as potential programmers they are mentally mutilated beyond hope of regeneration.

> The use of Java cripples the mind; its teaching should, therefore, be regarded as a criminal offence.

The quotes about languages were always controversial, weren't they? But it's clear now in retrospect what Dijkstra was complaining about. He found FORTRAN to trick people into thinking that programming was merely about specifying arithmetic operations in a certain order, considered BASIC to require mental models which rendered folks memetically blind to actual machine behaviors, and thought COBOL tried to be legible to management but ended up being confusing to everybody.

> Many companies that have made themselves dependent on AWS-equipment (and in doing so have sold their soul to the devil) will collapse under the sheer weight of the unmastered complexity of their data processing systems.

Yep.

> In the good old days physicists repeated each other's experiments, just to be sure. Today they stick to Python, so that they can share each other's programs, bugs included.

Reproducibility is a real problem, and sharing code is just the first step. It's an embarrassment to physics and mathematics that we don't have a single holistic repository of algorithms, but have to rebuild everything from scratch every time. (Perlis would tell Dijkstra that this is an inevitable facet of computing, and Dijkstra would reply that Perlis is too accepting of humanity's tendency to avoid effort.)

> You would rather that I had not disturbed you by sending you this.

Heh, yeah, let's see what the comment section is like.


What a great comment. I'm sure it'll get flagged into oblivion, though - thus directly proving Dijkstra's point.


What's with the hate for Python? I've noticed some of this hate IRL and it's strange. Python is one of the best languages out there right now in terms of ease of use and future maintainability (with types).


First let me state that I do not hate Python, but I would suppose two items are at the top of the list for people who do. The first, and the one that makes me dislike Python (not hate it), is that Python enshrined into the language syntax the rule that whitespace and structure count towards correctness: if the code is not properly indented according to the spec, it does not work. Some people see this as an arbitrary constraint on the developer and do not like it; I will grant that others see it as a way to keep code readable, and both arguments have their merits. Pretty much anything beyond noting how the two camps see it devolves into a holy war. Personally, being someone who sees the beauty of LISP-derived languages' lack of syntax, I fall into the first camp and don't see the value of it for me.
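
To illustrate the complaint concretely (a tiny sketch of my own, not from the parent comment): the indentation alone decides which statements belong to the loop, so getting the spacing wrong changes or breaks the program.

    total = 0
    for n in range(5):
        total += n
        print("running total:", total)   # indented: runs on every iteration
    print("final total:", total)         # dedented: runs once, after the loop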

I think the second issue is that it is fairly safe to say the path from Python 2 to 3 was not well thought out and has been a disaster. A lot of people were burned by it, and it left a bad taste in a lot of people's mouths.

That being said, Python enjoys a huge userbase, so I would not worry about the hate; it's just not the language of choice for some people, and that is fine.


I disagree with a lot of this. I feel these are minor, minor complaints. You hear this kind of stuff about Lisp too. For example, keeping track of parens is just as annoying as keeping track of indentation.

Also, the migration from Python 2 to 3 is actually the best migration ever: breaking changes in a language with a massive library ecosystem around it, and all that code was migrated along with the libraries. I haven't heard of any migration done as successfully as Python's. It required movement of the language and the entire community to make it happen.

Most migrations refrain from breaking changes; instead they take the less risky route and keep the language backwards compatible, at the cost of more bloat. See C++17 and C++20.

But despite this, yeah, these are complaints that I've heard, and they're totally unrelated to the hate I see. Parentheses in Lisp are something to complain about, but not something to hate Lisp for.

The hate, I believe, is more egotistical than anything. One interviewer told me I could code in any language other than Python, because Python makes things too simple. The hate exists because they believe Python programmers are stupider than normal programmers.

That being said my favorite language is not python. It's my least favorite language out of all the languages I know well but not a language I hate.


I've written Python for over a decade. There are two problems with Python.

First, the excellent readability leads directly into hard-to-read code structures. This might seem paradoxical but Dijkstra insisted that the same thing happened in FORTRAN, and I'm willing to defer to his instinct that there's something about the "shut up and calculate" approach that physicists have which causes a predilection for both FORTRAN and Python.

Second, Python 2 to Python 3 was horrible, and created political problems which hadn't existed before. Now, at the end of the transition, we can see how badly it was managed; Python 2 could have been retrofitted with nearly every breaking change and it would have been lower-friction. Instead, there's now millions of lines of rotting Python 2 code which will never be updated again. Curiously, this happened in the FORTRAN world too; I wasn't around for it, but FORTRAN 77 was so popular compared to future revisions and standardizations that it fractured the FORTRAN community.


>Second, Python 2 to Python 3 was horrible, and created political problems which hadn't existed before. Now, at the end of the transition, we can see how badly it was managed; Python 2 could have been retrofitted with nearly every breaking change and it would have been lower-friction.

This doesn't make any logical sense. You're saying take the breaking changes in Python 3 and put them into Python 2? That's just a version number. You could call it version 2.999999.8 and make all the changes there, and the outcome would be identical.

No. Every breaking change must have a downstream change in every library that uses that breaking change. That's the reality of breaking changes. No way around it.

Tell me of a migration as huge as Python 2->3 that was as successful. For sure there were huge problems along the way, and it took forever. But I have heard of very, very few migrations in the open-source world that ended up with an outcome as successful as Python's.

>First, the excellent readability leads directly into hard-to-read code structures.

I don't agree with this either. You refer to FORTRAN, but most programmers here haven't used it, so you'll have to provide an example for readers to see your point.


I'm not going to argue Python politics with you, but suffice it to say that only a few communities have had such a bad major version upgrade experience. Here are some off the top of my head for comparison, from roughest to smoothest:

* Perl 5 to Perl 6: So disastrous that they rolled back and Perl 6 is now known as Raku

* PHP5 to PHP7: Burn my eyes out, please! But of course PHP has unique user pressures, and a monoculture helps a lot

* Python 2.4 to Python 2.7: Done in several stages, including deprecation of syntax, rolling out of new keywords, introduction of backwards-compatible objects and classes, and improvements to various semantic corner cases

* Haskell 98 to Haskell 2010: GHC dominated the ecosystem and now Haskell 98 is only known for being associated with Hugs, which knows nothing newer

* C++03 and earlier to C++11: Failed to deprecate enough stuff, but did successfully establish a permanent 3yr release cadence

* C99 to C11: Aside from the whole Microsoft deal, this was perfect; unfortunately Microsoft's platforms are common in the wild

Now consider how many Python 3 features ended up backported to Python 2 [0] and how divisive the upgrade needed to be in the end.

On readability, you'll just have to trust me that when Python gets to millions of lines of code per application, the organization of modules into packages becomes obligatory; the module-to-module barrier isn't expressive enough to support all of the readable syntax that people want to use for composing objects. If you want a FORTRAN example, look at Cephes [1], a C library partially ported from FORTRAN. The readability is terrible, the factoring is terrible, and it cannot be improved because FORTRAN lacked the abstractive power necessary for higher-order factoring, and so does C. Compare and contrast with Numpy [2], a popular numeric library for Python which is implemented in (punchline!) FORTRAN and C.

[0] https://docs.python.org/2/whatsnew/2.7.html#python-3-1-featu...

[1] https://github.com/jeremybarnes/cephes

[2] https://github.com/numpy/numpy


"Failed to deprecate enough stuff"

Did you by any chance observe any of the folks involved officially saying that deprecating things was one of the goals? I thought keeping working code working has always been one of C++'s official goals.


I love this quote: "There are only two kinds of languages: the ones people complain about and the ones nobody uses". Bjarne Stroustrup.


There's a lot to complain about in Python. But I see genuine hate: people, groups, and companies who literally refuse to use it.

I had an interviewer tell me that I couldn't code up the solution in Python. I think it might be because Python is so easy that people look down on it.


There's a certain cultural subset which tries to bolster its self-assessed superiority by rejecting things which are popular. This is especially common among people of a certain academic bent who are constantly playing one-upmanship games, desperately trying to be the smartest person in the room.

Python annoys those people because it's both relatively easy to get started with and far, far more successful than whatever their current favorite language is, and this gets portrayed as other people not getting it, rather than prompting a more insightful discussion about whether those people might reasonably be making decisions based on different needs, backgrounds, and resources rather than stupidity.


Your comment rings true as I'm one of those people who reject things which are popular. Thanks for pointing it out!


What modern language would Dijkstra approve of?


This is a very difficult question. We know somewhat his preferences, because he worked on implementing ALGOL 60 [0][1], but unfortunately we are blocked by a bit of incommensurability; in that time, garbage collection was not something that could be taken for granted. As a result, what he might have built in our era is hard to imagine.

That said, he did have relatively nice things to say about Haskell [2] and preferred Haskell to Java:

> Finally, in the specific comparison of Haskell versus Java, Haskell, though not perfect, is of a quality that is several orders of magnitude higher than Java, which is a mess (and needed an extensive advertizing campaign and aggressive salesmanship for its commercial acceptance).

I imagine that he would have liked something structured, equational, declarative, and modular; he would have wanted to treat programs as mathematical objects, as he says in [2]. Beyond that, though, we'll never know. He left some predictions like [3] but they are vague.

[0] https://en.wikipedia.org/wiki/ALGOL_60

[1] https://www.cs.utexas.edu/users/EWD/MCReps/MR35.PDF

[2] https://www.cs.utexas.edu/users/EWD/transcriptions/OtherDocs...

[3] https://www.cs.utexas.edu/users/EWD/transcriptions/EWD12xx/E...


He would probably form a completely different opinion; the world is nothing like what could be anticipated 30 years ago.

The first languages and their compilers were strongly driven by hardware constraints, a kilobyte of memory costing an arm and a leg.

Imagine storing function names in memory to compile the program: it doesn't fit in 1 kB. Imagine storing the whole source code in memory for processing: it doesn't work when there is less than 1 MB of memory available.

It seems ridiculous today, but these are the real reasons why things were made global back then, and why C/Pascal split code between a header and a source file.


There are so many languages available today that I'm sure there are plenty he would have approved of. For example, I think he might have appreciated Zig. If you read his work it's pretty easy to see his top priority is managing complexity and limiting surprise.


> his top priority is managing complexity and limiting surprise

Zig doesn't do either of those things. There are a fair number of criticisms of the author's mental model that I've seen voiced - some involving security.

What's worse, the community surrounding Zig (in particular, the Discord community) operates more like a cult - any negative questioning gets you shunned.

I was personally a huge fan of Zig until a number of questionable design decisions and dismissed bug reports led me to believe it will forever remain a toy language. I can't imagine Dijkstra approving.


That’s too bad. I was judging based on the overview of Zig that was recently posted here. I gladly defer to your more informed opinion on the subject, but I’ll maintain Dijkstra would have liked the design goal of executing the procedure as written absent fancy obfuscated control structures.

The cult-like attitude that many programmers have about languages certainly supports Dijkstra’s claims about the immaturity of the field.


I suspect, none of them. If he did approve of one, it would probably be Haskell.

And, for all his complaining, I don't know of any language that he authored. He's sure good at telling everyone that they're doing it wrong, though...


His rant-y EWDs seem to be the only ones people know. But his others go into much more detail about what he thinks and why. And while he didn't (to the best of my knowledge) directly create any language, he did implement an Algol 60 compiler and was involved in language design efforts throughout his life.

He helped create the ideas of Structured Programming, which most of us now take for granted, since pretty much every language in popular use these days are based on these ideas.


Dijkstra co-created a rather influential language called Algol 60[1]. It and its immediate descendants are still used in scholarly CS work because it's so good for clearly describing algorithms. Past that, the influence of Algol 60 on virtually all modern programming languages is hard to overstate.

[1] https://en.wikipedia.org/wiki/ALGOL_60


Well, he co-authored a compiler. From what your link says, he wasn't part of the committee that created the language.


StandardML


Whichever one grandparent commenter likes best.


Your flame-bait revision is a perfect demonstration of what's wrong with the original post.

