Perl and CPAN definitely do not get enough flowers for their contribution to TDD.


You might want to go ahead and inform those investors who bought Amazon or Booking.com stock in the early 2000s that their orders-of-magnitude returns were really no such thing.


With which JavaScript version? At the time HOP was written, I remember JavaScript still doing all its for loops C-style, and there was certainly no arrow syntax to make anonymous JS functions manageable.

There were also no functional methods available, hence the restriction to C-style for loops. (Array iteration methods didn't appear in the spec until ES5 in 2009, so I don't see how you could have had map or select or anything else functional.)
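For anyone who didn't write JS in that era, a rough sketch of the progression (the array and variable names here are made up for illustration):

    // Pre-ES5: the C-style loop was essentially the only option.
    var xs = [1, 2, 3];
    var doubled = [];
    for (var i = 0; i < xs.length; i++) {
      doubled.push(xs[i] * 2);
    }

    // ES5 (2009): Array.prototype.map arrives, but anonymous functions
    // are still verbose.
    var doubled5 = xs.map(function (x) { return x * 2; });

    // ES2015: arrow syntax finally makes the functional style comfortable.
    var doubled6 = xs.map(x => x * 2);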


Thank you for taking the time to break this down!

Your description and summary blocks have started me wondering... has there been much work done on converting APL statements to more verbose explanations?

The sigil-to-meaning mapping is likely automatic for seasoned APL programmers, but it might make such one-liners more effective for evangelism purposes by making them somewhat more self-documenting.
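To gesture at what I mean, here's a toy glosser sketched in TypeScript (the glyph table and names are entirely hypothetical, and a real tool would need to actually parse, since a glyph's meaning depends on whether it's applied monadically or dyadically):

    // Map each APL glyph to an English gloss.
    const gloss: Record<string, string> = {
      "⍳": "index-generator",
      "⌽": "reverse",
      "+": "plus",
      "/": "reduce",
    };

    // Replace known glyphs with words, leaving everything else as-is.
    // (APL evaluates right to left, so a serious tool would also
    // reorder or annotate accordingly.)
    function verbalize(expr: string): string {
      return [...expr].map(ch => gloss[ch] ?? ch).join(" ");
    }

    console.log(verbalize("+/⍳9")); // "plus reduce index-generator 9"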


It indeed becomes much easier to understand after ~a year. This thread included some discussion about making APL-family languages easier to read for novices: https://news.ycombinator.com/item?id=20728715

I'm of the opinion that code should be documented: short description, arguments, return value, purpose, etc. However, rebinding primitives kind of defeats the purpose after a little familiarity. For example:

commute ← {⍵ ⍶ ⍺} ⍝ or commute ← { ⍺ ⍶⍨ ⍵ }

will get tedious quickly. It's better to lean into the terseness of the language rather than attempt to transform it into something it isn't. There has been work done on easing the learning curve, for example https://aplcart.info/ . Keep in mind APL Cart generally targets Dyalog; however, many of the idioms should work in alternate implementations, possibly with some tweaking.


I'd be very interested in reading more about the differentiating features of GNU APL. There is a great big announcement at the beginning of the GNU APL docs that talks of a decision needing to be made around a gap in the ISO specification. Is that the primary source of controversy or is there more discussion I could read somewhere else?

I've been looking for a good "diff" of the language differences between Dyalog APL and GNU APL, as much to understand the extent of the progress since 'APL2' (or whatever ISO standard name is considered equivalent to it) as for any specific list of what GNU APL can't do relative to its modern commercial compatriots.

(EDIT: Fixed final sentence to be a complete thought/sentence).


Dyalog has a few more operators, for example @ and ⍠. It lets you use statement separators in lambdas (in GNU APL you can simulate that with '⊣') and allows multiline lambdas. Dyalog uses ⍺⍺ and ⍵⍵ for operator operands, versus GNU APL's ⍶ and ⍹, and lets you use ∘ as a composition operator. Dyalog generally has better performance, which becomes noticeable with large arrays, and it has a lot more libraries available. GNU APL behaves more like a traditional Unix interpreter, and it has Erlang, Python, Lua, and C interfaces.


I'm awaiting delivery of a printed Mastering Dyalog APL book while reading this! I landed on kdb+ after seeking a reasonable alternative to the so-called "best in class" Elasticsearch/Kibana tooling, fell in love with K once I understood that the syntactic terseness is all for the sake of fitting the entire interpreter into L2 cache, and have now landed on the decision that learning Iverson's classic is the only way to satisfy my desire to live a life free of them dang stinking loops!


I don’t believe that the terseness of k is necessary to fit into the I$. I think you could have reasonably longer operators and do fine.

I think it's partly about style, partly about having a small number of operators (which compose well together), and partly about using simple data structures (it doesn't take much code to iterate an array).

I'm not even particularly convinced that fitting in the instruction cache is a trick that magically makes everything fast anyway. Most of your memory accesses in a typical data processing program (i.e. the kind of program one would write in k) will be to the data, and hopefully these will be nearly always linear and mostly sequential.


The original K design was laid out in the 1980s, when the constraints were even tighter than they are today. The use of very short operators means that not only does the interpreter easily fit into cache, but the custom function definitions you have written will as well.

When dealing with high-performance computing or real-time processing of high volumes of data, any fetch to RAM to load a function for dispatch is going to have _some_ impact in a tight loop. Add that up across all the libraries you have loaded for your application, versus a ground-up implementation in K... Does that whole thing live in L3, along with the VM or interpreter + dependencies underneath it? It's doubtful.

My experience was simply using Kx's free developer IDE and experiencing the performance differential on datasets myself. YMMV, but my (admittedly limited) experience leads me to believe that there is a serious case to be made for the performance advantages of having all your computational logic living as close to your computational cores as possible.

See also the PhD thesis by the author of the OP article, where he presents a language where:

"The entire source code to the compiler written in this method requires only 17 lines of simple code compared to roughly 1000 lines of equivalent code in the domain-specific compiler construction framework, Nanopass, and requires no domain specific techniques, libraries, or infrastructure support."

Linked from the article, available here: https://scholarworks.iu.edu/dspace/handle/2022/24749


Surely none of that is an argument for terse user-facing syntax? Anything the user types can trivially be converted into K's "bytecode", ahead-of-time.


No, it’s not.

The arguments for terse user-facing syntax are related, though:

The ability to see a whole program (17 lines vs 1000 lines) means you need much less human “working memory” or whatever the biological equivalent of cache is, to reason about the program.

It also means you can use your visual system’s pattern matching abilities because patterns are 5-10 characters rather than 5-10 pages as they often are in C++.

Totally different hardware, but it's still about the L1 and registers...


I agree with these points. Also, keeping a language small allows it all to fit in your brain (though of course there are all those idioms in APL), which isn't really true of larger languages.


True, but if I understand correctly (and I'd be happy to be corrected by someone with more K experience), the smaller size of the input should also have an impact on real-world parser performance (including the memory usage involved therein).


Any sane interpreter would parse things beforehand, so you shouldn't expect a significant fraction of performance to be parsing-dependent for anything except a trivially sized dataset. There are two reasons to care a lot about parsing performance:

1. Silly reasons mean that you need to block while you parse, and reducing this blocking time is important (e.g. JavaScript: one trick is to optimistically assume that the script won't do a document.write and try to process the HTML after it; another would be to only parse enough of a function definition to find out where it ends, and only parse it fully (or block on parsing it) if it gets called; a sketch of that second trick follows this list)

2. You get a large amount of source code that needs to be quickly parsed all at once to be interactive (e.g. JavaScript and bloated web pages)
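Here's a minimal sketch of the deferred-parsing trick from point 1, in TypeScript (all the names are made up for illustration; real engines do something far more sophisticated):

    // Scan from an opening brace to its matching closing brace without
    // building an AST. (A real scanner must also skip strings, comments,
    // and regex literals.)
    function skipBody(src: string, open: number): number {
      let depth = 0;
      for (let i = open; i < src.length; i++) {
        if (src[i] === "{") depth++;
        else if (src[i] === "}" && --depth === 0) return i;
      }
      throw new SyntaxError("unbalanced braces");
    }

    interface LazyFn {
      src: string;
      start: number;          // index just after the '{'
      end: number;            // index of the matching '}'
      compiled?: () => void;  // stands in for a real parsed body
    }

    // "Pre-parse": record only where the function body ends.
    function preParse(src: string, open: number): LazyFn {
      return { src, start: open + 1, end: skipBody(src, open) };
    }

    // The full parse is deferred until the first call.
    function call(fn: LazyFn): void {
      if (!fn.compiled) {
        const body = fn.src.slice(fn.start, fn.end);
        fn.compiled = () => console.log("parsed and ran:", body.trim());
      }
      fn.compiled();
    }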

I think in the case of k you only parse a small amount at a time in interactive use, and any non-interactive use would be for a big enough job (yet still generally a small enough program) that parsing time would not be significant.

If parsing efficiency really mattered, surely the language would use something easier to parse (e.g. something RPN-based to avoid parens, using single characters for each operator, and simply disallowing variable names from containing those characters).

In summary: I think parsing speed is basically irrelevant to whether k is fast or not; so long as the parser is reasonably fast, its efficiency shouldn't be a constraint on the language.


According to the recent 18.0 release presentation, it sounds like they are aware of how different an experience this is relative to other interpreted "scripting" languages. To that end, they have added support for a Run function that will execute with access to the invocation arguments.

EDIT: They also discuss how the upcoming convergence of .NET Framework and .NET Core is going to have a major impact on their cross-platform capabilities.


That is great to hear; perhaps some of the issues I have will be addressed when version 18.0 is released (which, despite the present tense in your comment, hasn't happened yet). I really hope so – as I said, there's a lot to like in APL.


That is categorically untrue. Verbal cultures in Australia have kept an accurate account of a coastline that existed over 10,000 years ago.

Meanwhile, the supposed collective knowledge in books is constantly second-guessed and taken with grains of salt ("Bronze Age army sizes couldn't possibly be the size they write that they are", "the winner writes the history", etc.).


Nitpick: "the winner writes the history" is an important part of studying history whether written or oral. People who write books or repeat old stories have biases, opions on how world works and open agendas. They leave out stuff they think dont matter or crosses taboos or does not make people look like as they are supposed to. Just as contemporary people lie, especially politicians and leadership, ancient civilizations discovered advantages of bending the truth or lying too.

Being aware of that, and of point of view, is an important part of historical work.


What are a few examples of when we've caught, instead of merely suspected, a lying ancient historian?


According to Plato, Socrates himself defended the "noble lie": a lie told "to maintain social harmony or to advance an agenda".

Besides, "ancient historians" were not historians in our sense of the word at all, so I don't understand why are you asking for historian specifically. They wrote a lot of obvious myths (falus appearing out of fire) and there is no expectation for those texts to be accurate.

Ancient texts were written for a purpose and with limited knowledge (just like any contemporary text), and it would be oddly naive to think anything else.


I'm asking if there are any instances where an ancient author is known to have lied about something that could be believed by a modern historian if they didn't know any better. For example maybe some Sumerian committed tax fraud on clay tablets, but we caught them thousands of years later. I'm wondering if any specific examples of that are out there.

Edit: to be clear, I'm asking for a specific example, not an argument that an example probably exists out there somewhere.


Have you ever read something in popular media about a topic that you understand well? (It is hard to get an unskewed picture even if the author has no agenda.)

What do you think happens to truth tellers and their work if it is not aligned with the current elites' version of the truth?

For example, who do you think has been worse for whistleblowers, Bush or Obama?


>Verbal cultures in Australia have kept an accurate account of a coastline that existed over 10,000 years ago.

I've heard this as well, but I never heard any convincing evidence that the natives' myth didn't simply happen to align with reality. Further, this sort of 10,000-year-old knowledge seems to be the exception rather than the rule.


100 years might be the limit of how long knowledge can last and still be useful enough, though. 100 years is probably six generations.

Imagine explaining a complex process, like how to change the ink cartridge in a printer, via a chain of Chinese whispers. Now put 20 years between each step of the chain, and remember that all the intermediate people can't practice or try the skill being described to them; they simply have to imagine it, since they have no fire to play with.

If the knowledge lasts that long, then when a forest fire does happen, people have only one chance to figure out how to keep it alive. Accidentally put wet wood on it just once, and it's dead again for another 100 years.


It was often not Chinese whispers though -- that's dismissive. Being "modern", we have a tendency to think that because they didn't have ink cartridges and internets, they were stupid and all knowledge was ephemeral. How about an alternative that fits with Aboriginal and African oral culture:

Imagine explaining a complex process, using language carefully structured to be memorable, to the next generation. Then imagine spending hours repeating, testing, and checking their recollection over the coming days and years to ensure their memory matches yours. Rather like the rote learning of times tables and other "modern" learning: 2x2=4 doesn't become something else that way.

Each generation gets a complex story they may not see the applicability of, but if it has evolved to be important in the culture to remember, maybe they figured out ways to remember it until it was useful again. Africans did, pre-medieval Europeans did, and, for the longest period known, Aborigines did. Why not these?


Agree that oral transmission was an important & practiced skill. But I do wonder about how well it let you transmit "technological" knowledge, in a manner that could be turned back into practice. Do we have any examples of people doing this?

My counter-example is various failures to reproduce early industrial-revolution processes... from memory, wasn't there a stage when the French were pushing to catch up in iron-making, and sent spies to England, from whose accounts they could not make the process work? Despite having not just words, but materials and examples of the result. (The solution, eventually, was to pay people who had the knack to move there.)


I think you underestimate how much better a pre-literate culture is at memorization, compared to a culture with the luxury of writing. It would be more useful to find an example of a similar pre-literate culture to make your point.


No, the point I was trying to make is that even with absolutely perfect transmission (which is the best they could hope for) it can be very difficult to translate words back into actions. It's also hard to learn golf or dancing from a book (again, perfect error-free memorisation) because there's a lot of knowledge which doesn't fit well into words. Muscle memory, once you've got it.

I presume this was also true of the making of stone tools, or pottery. And of the recognition of edible plants & mushrooms. All of these are skills which I'd be surprised to see transmitted over a long time-lapse. (Without being at all surprised by the memorisation of stories, at a level I could never match.)


The most recent example might be the Australian Aborigines' fire rituals. After this year's bushfires there have been many calls to use their ritual burnings once again. I gather this has been done in the Northern Territory for a few years, and it is far more successful than the advanced, technological, knowledge-filled approaches (Western arrogance that our way must be better) that pushed the traditional ways out for decades. They seem to have kept more than enough to be far better at it than those meant to know. How well it works in a significantly changed climate is another question, but it appears to work better.

Speculating wildly here: we don't know that the Neanderthals didn't ritualise the activity into a dance or an act to retain some of the process as well as the words, as we do with dancing, martial arts, even theatre or the early stages of ancient apprenticeships. That might transmit the muscle memory of golf or stone tool making -- without the practised skill. How far that remains applicable using a stick in place of a golf club, or a pine cone in place of a lump of flint, is impossible to guess, but it puts you closer than mere words.

I have to assume they wouldn't suffer the Wikipedia tendency of explaining the technical so technically perfectly (including all the obscure jargon) that it often borders on impossible for an intelligent outsider, deeply skilled in other technical fields, to follow. :)


What's the time period for the firebreaks? I mean, when were these skills last used, even if on a smallish scale?

OK, ritualising a "how to ride a bicycle dance" seems like it could be a way to pass more information than a perfectly repeated poem / book. (Perhaps thinking of oral tradition as meaning Homer not how to chip flint is a blind spot in how we think about such things?) Would still be extremely curious to know of any examples where this actually happened.


People passed down complex oral culture with no "practical use" – myths and stories – very reliably over dozens, maybe hundreds of generations. We became very good at using rhythm and rhyme as error correction mechanisms to ensure a low frequency of replication errors. But anyway, forest fires simply wouldn't have been that rare in the first place.


"People passed down complex oral culture with no "practical use"

You're not arguing with what you're responding to, are you? Knowledge with no practical use can accommodate a lot more imprecision. Reliability in a context where details aren't tested is a fuzzy concept.


Can you point to more information regarding the first bit?


There have been studies and research into this quite a few times. I'm sure there's lots more out on the web.

Here's a story I remember reading recently: https://www.abc.net.au/news/2015-10-16/research-gives-merit-...

The key takeaway from all I've seen is that it's not just telling stories; like some of the more recent pre-literacy Western oral traditions, it is more like a formal passing on, with testing and repeating to ensure the message is transmitted accurately.


I was just reading about a culture that passed on knowledge orally by listening to important people on their deathbed. This was unfortunate in the context of the 1918 flu epidemic.


I believe there was at least one Plains Indian tribe that banned any discussion of their historic stories unless there were at least a dozen members present.


Thank you for raising this very significant detail to the comment section!


It reads like a spoof, like if Paul McCartney wrote a manifesto for why he should be considered the cutting edge of rock.


Raku includes an incredible number of new tools for writing one-liners. Furthermore, you can convert your one-liner into a command-line app simply by wrapping the code in a `sub MAIN($pos-arg, :$named-arg) { ... }`.


Yes, but it has a VM startup penalty of more than 500 ms on my box, which is kind of disappointing.

