autopoiesis's comments

So the Android work has been merged? Does this mean anything for Hu Jianwei's builds (of Emacs and Termux) at [1]? Having them in an obscure SourceForge repo seems less than optimal; getting Obtainium to understand the repo structure was not super fun...

[1] https://sourceforge.net/projects/android-ports-for-gnu-emacs...


I believe that he (the person who ported Emacs to Android) will keep uploading nightly builds there. The stable Emacs 30.1 release will shortly be available on both the GNU FTP server and F-Droid.


Isn't the solution to have much shorter copyright terms? Software could be closed source at first, its implementation costs recouped, then opened by default when its copyright term lapses. New releases could still be closed, so income could continue. Set the term at 5-10 years, rather than >70.


This doesn't really work for projects that want to be closed source, as they can just not publish the source. After the 10 years, people can copy the binary, but that doesn't really give you a whole lot of benefit.

And if a project does want to be open source eventually, they can already license their code that way.


Couple it with a generalized right to repair: source code is what's needed in order to be able to repair the software that you use. Once the support period or the copyright term (whichever ends first) has lapsed, the materials needed to repair the product must be released.


No, you just make that a prerequisite for the software copyright. If you don't submit the code, you don't get the protection.

Same idea as for patents vs trade secrets.


But you'd also need some way to stop derivatives becoming copyrightable again. Currently the only way to achieve this is copyleft licences.


This is precisely the question answered by the OP. The answer is, "because there is a whole spectrum of things you might mean by 'diversity', of which 'number of distinct species' is only one extremum".


And also, I assume, because the concept of "species" isn't all that well defined?


It is well defined: a group of living organisms consisting of similar individuals capable of exchanging genes or interbreeding.


Ring species make your definition non-transitive. The same goes for species that can interbreed but exhibit hybrid breakdown.


I invite you to examine the notes of the International Ornithological Congress... The difference between species and subspecies is quite subtle, and subject to interpretation, because no one is really going to do the experiment to find out if two individuals of geographically distinct populations can actually still interbreed.


So if you have a few grams of soil and want to know how many species of microorganisms are in there, you're setting them up with dates to see which ones will end up breeding?


One of a number of definitions. It is one that allows lions and tigers to be the same species.


Does he provide an example of another definition of diversity that makes sense in a biological context?


Yes, the two extremes are captured by the common metrics of "species richness" which is the pure "how many unique species are there", and "species evenness", which depends on how evenly distributed the species are. A community in which 99% of individuals are species A and the remaining 1% are from species B-G is exactly as species rich as a community in which there are equal numbers of individuals of each species, but it is much less even (and therefore, under one extreme of diversity, less diverse). In different contexts and for different ecological questions, these two different versions of diversity can matter more or less, and there are metrics which take both into account, but this is a fully generalized solution which shows you relative diversity along the entire spectrum from "all I care about is richness" to "all I care about is evenness".

-edit- by the way, since it may not be obvious to everyone, the reason why an ecologist might care about evenness is that extremely rare species are often not very important to the wider community. From an ecological function perspective, there is very little difference between my above example of the 99%/1% community and a community that is 100% species A. So a community with two equally populous species might have more functional diversity than a community with one very abundant species and several more, very rare species.
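
To play with that spectrum numerically: Hill numbers interpolate between the two extremes, with q = 0 giving pure richness and larger q weighting evenness more heavily. A minimal Python sketch (the toy communities are mine, echoing the 99%/1% example above):

    import numpy as np

    def hill_number(counts, q):
        # Effective number of species of order q:
        #   q = 0  -> species richness (rare species count fully)
        #   q -> 1 -> exp(Shannon entropy)
        #   q = 2  -> inverse Simpson index (dominance-weighted)
        p = np.asarray(counts, dtype=float)
        p = p[p > 0]
        p = p / p.sum()
        if np.isclose(q, 1.0):
            return float(np.exp(-np.sum(p * np.log(p))))
        return float(np.sum(p ** q) ** (1.0 / (1.0 - q)))

    skewed = [0.99] + [0.01 / 6] * 6   # 99% species A, 1% spread over B-G
    even = [1.0 / 7] * 7               # same seven species, equal shares

    for q in (0, 1, 2):
        print(q, hill_number(skewed, q), hill_number(even, q))
    # q=0: both 7 (equal richness); q=1: ~1.08 vs 7; q=2: ~1.02 vs 7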


Metric spaces are enriched categories. They are enriched over the non-negative reals. The 'hom' between a pair of points is then simply a number: their distance.


And these non-negative real numbers, which are these homs, are “hom objects”, so regarded as objects in “the category with as objects the non-negative real numbers, and as morphisms, the ‘being greater than or equal to’ relation”? Is that right?

So, I guess, (\R_{>= 0}, >=, +, 0) is like, a monoidal category with + as the monoidal operation?

So like, for x,y,z in the metric space, the

well, from hom(x,y) and hom(y,z) I guess the idea is there is a designated composition morphism

from hom(x,y) monoidalProduct hom(y,z) to hom(x,z)

which is specifically,

hom(x,y)+hom(y,z) >= hom(x,z)

(I said designated, but there is only the one, which is just the fact above.)

I.e. d(x,y)+d(y,z) >= d(x,z)

(Note: I didn’t manage to “just guess” this. I’ve seen it before, and was thinking it through as part of remembering how the idea worked. I am commenting this to both check my understanding in case I’m wrong, and to (assuming I’m remembering the idea correctly) provide an elaboration on what you said for anyone who might want more detail.)
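
If anyone wants to see that bookkeeping executed, here is a throwaway Python sketch (my own encoding, nothing canonical) that treats hom(x,y) as the number d(x,y) and checks the two enriched-category laws, which turn out to be exactly d(x,x) = 0 and the triangle inequality:

    import itertools

    # A toy (Lawvere) metric space: hom(x, y) is just a non-negative number.
    points = ["a", "b", "c"]
    d = {
        ("a", "a"): 0.0, ("b", "b"): 0.0, ("c", "c"): 0.0,
        ("a", "b"): 1.0, ("b", "a"): 1.0,
        ("b", "c"): 2.0, ("c", "b"): 2.0,
        ("a", "c"): 2.5, ("c", "a"): 2.5,
    }

    # Unit law over ([0, inf), >=, +, 0): a morphism 0 -> hom(x, x),
    # i.e. 0 >= d(x, x), i.e. d(x, x) == 0.
    assert all(d[(x, x)] == 0.0 for x in points)

    # Composition law: a morphism hom(x,y) + hom(y,z) -> hom(x,z) exists
    # iff d(x,y) + d(y,z) >= d(x,z) -- the triangle inequality.
    for x, y, z in itertools.product(points, repeat=3):
        assert d[(x, y)] + d[(y, z)] >= d[(x, z)]
    print("unit and composition laws hold")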


> are “hom objects”, so regarded as objects in “the category with as objects the non-negative real numbers, and as morphisms, the ‘being greater than or equal to’ relation”?

This works, but it's not quite what you want in most cases. There's a lot of stuff that requires you to enrich over a closed category, so instead we define `Hom(a,b)` to be `max(b - a, 0)` (which you can very roughly think of as replacing the mere proposition `a < b` with its "witnesses"). See https://www.emis.de/journals/TAC/reprints/articles/1/tr1.pdf for more.
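
If it helps, the adjunction that makes this enrichment closed, Hom(a+b, c) = Hom(a, [b,c]), i.e. a + b >= c iff a >= max(c - b, 0), is easy to spot-check numerically. A quick sketch (mine, not from the paper):

    import itertools
    import numpy as np

    def internal_hom(b, c):
        return max(c - b, 0.0)   # truncated subtraction: the object [b, c]

    vals = np.linspace(0.0, 3.0, 13)   # multiples of 0.25, exact in binary
    for a, b, c in itertools.product(vals, repeat=3):
        # currying: a morphism a + b -> c exists iff a -> [b, c] does
        assert (a + b >= c) == (a >= internal_hom(b, c))
    print("Hom(a + b, c) = Hom(a, [b, c]) on this grid")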


Indeed they are. I'm saying it may not be the right context in this case.

At least what they seem to be doing has little to do with metrics, and a lot more to do with probability distributions.


It's not clear what you're seeking. Probabilities appear because the magnitude of a space is a way of 'measuring' it -- and thus magnitude is closely related to entropy. Of course, you can follow your nose and find your way beyond mere spaces, and this may lead you to the notion of 'magnitude homology' [1]. But it's not clear that this generalization is the best way to introduce the idea of magnitude to ecology.

[1] https://arxiv.org/abs/1711.00802
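
For anyone who wants to play with the 'measuring' intuition: a finite metric space has magnitude equal to the sum of the entries of the inverse of the similarity matrix Z_ij = exp(-d(x_i, x_j)), and it behaves like an 'effective number of points'. A toy numpy sketch (example data mine; see Leinster's work for the theory):

    import numpy as np

    def magnitude(points):
        # Z_ij = exp(-d(x_i, x_j)); magnitude = sum of entries of Z^-1
        d = np.abs(points[:, None] - points[None, :])
        return float(np.linalg.inv(np.exp(-d)).sum())

    # Four points on a line, in two tight pairs:
    x = np.array([0.0, 0.01, 10.0, 10.01])
    print(magnitude(x))         # ~2: each tight pair acts as one effective point
    print(magnitude(1000 * x))  # ~4: at a larger scale all four points count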


Sadly, it's mostly nonsense. It first tells you to travel in the wrong direction, then go back on yourself, then take a route that doesn't exist (Green Park to Westminster on Victoria, in 2 stops, when it would only take 1 on the Jubilee)...


Reads like it might be nearly ready to play Mornington Crescent, though.


Unfortunately AI was banned (in tournament play, anyway) in the year 1641.


Not, however, if the Duke of Gloucester Variant is in play, which is popular in the shires. Though, naturally, standard Sudbury-type openings are discouraged in that case, unless one is trying for a Wembley Roundabout.


That's true, I forgot because I've never played a lot of DoG myself, since I was raised in the Eastern Dialectic school of thought and never cared much for artichokes anyway.


Somewhat relieving; if it had been accurate, I was going to say it was probably the most impressive GPT result I've seen so far.


What are the inputs and byproducts of the manufacturing process for your proprietary sorbent, and what are their environmental impacts, if any?


The paper (arxiv:2103.04689) linked by eutropia above has some empirical evidence on the ML side, showing that the performance of predictive coding is not far off that of backprop. And there is no shortage of suggestions for how neural circuits might work around the strict requirements of backprop-like algorithms.

cs702's original comment above is excessively hyperbolic: the compositional structure of Bayesian inversion is well known and is known to coincide structurally with the backward/forward structure of automatic differentiation. And there have been many papers before this one showing how predictive coding approximates backprop in other cases, so it is no surprise that it can do so on graphs, too. I agree with the ICLR reviewers that this paper is borderline and not in itself a major contribution. But that does not mean that this whole endeavour, of trying to find explicit mathematical connections between biological and artificial learning, is ill motivated.
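
To give a feel for that empirical side without reading the paper: the sketch below (my own numpy toy in the general style of predictive-coding networks, a la Whittington & Bogacz; it is not code from the linked paper) relaxes the hidden activities of a two-layer net to an energy minimum with the target clamped, then checks that the purely local weight updates line up with the backprop gradients of the same squared-error loss:

    import numpy as np

    rng = np.random.default_rng(0)
    f = np.tanh
    fp = lambda x: 1.0 - np.tanh(x) ** 2

    n0, n1, n2 = 4, 8, 3
    W1 = rng.normal(0, 0.5, (n1, n0))
    W2 = rng.normal(0, 0.5, (n2, n1))
    x0 = rng.normal(size=(n0, 1))

    # --- backprop reference ---
    h1 = W1 @ f(x0)
    h2 = W2 @ f(h1)
    t = h2 + 0.1 * rng.normal(size=(n2, 1))  # nearby target -> small errors
    g2 = h2 - t                              # dL/dh2 for L = 0.5*||h2 - t||^2
    dW2_bp = g2 @ f(h1).T
    g1 = fp(h1) * (W2.T @ g2)
    dW1_bp = g1 @ f(x0).T

    # --- predictive coding: relax hidden x1 with x0 and the target clamped ---
    x1 = h1.copy()
    for _ in range(500):
        e1 = x1 - W1 @ f(x0)
        e2 = t - W2 @ f(x1)
        x1 -= 0.1 * (e1 - fp(x1) * (W2.T @ e2))  # gradient step on the energy

    e1 = x1 - W1 @ f(x0)
    e2 = t - W2 @ f(x1)
    dW1_pc = -e1 @ f(x0).T   # local update: error x presynaptic activity
    dW2_pc = -e2 @ f(x1).T

    for bp, pc in [(dW1_bp, dW1_pc), (dW2_bp, dW2_pc)]:
        cos = (bp * pc).sum() / (np.linalg.norm(bp) * np.linalg.norm(pc))
        print("cosine similarity with backprop:", round(cos, 4))  # ~0.99+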


>the compositional structure of Bayesian inversion is well known

/u/tsmithe's results on that are well known, now? I can scarcely find anyone to collaborate with who understands them!


It sounds like you would be interested in the book / course 'Seven Sketches in Compositionality' by David Spivak and Brendan Fong, which studies precisely those ideas categorically, with a particular focus on systems that you might broadly call 'computational': http://math.mit.edu/~dspivak/teaching/sp18/

It has been discussed on Hacker News at least a couple of times previously -- fairly recently, even. You might be interested to look at these discussions:

https://news.ycombinator.com/item?id=20376325

https://news.ycombinator.com/item?id=19701767

Edit to add:

You might also be interested to learn that categorical approaches to linguistics typically take as their starting point monoidal categories, in which there are notions of 'parallel' as well as 'sequential' composition. It turns out that the usual categorical semantics for linguistics shares a lot with the categorical semantics for quantum mechanics: roughly, meanings are vectors, like quantum states. You can read more about doing (finite-dimensional) quantum mechanics entirely using string diagrams (the formal diagrammatic calculus of monoidal categories) in the work of Bob Coecke, who also played a large part in originating these approaches to linguistics.

For example, on the quantum side, an excellent book is 'Picturing Quantum Processes' [0]. And on the linguistics side, the paper linked in the article is a good start: https://arxiv.org/abs/1003.4394

[0] Not freely available, but some slides are at https://www.cs.ox.ac.uk/ss2014/programme/Bob.pdf
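
To make 'parallel as well as sequential composition' concrete in the vector-space semantics: in finite-dimensional vector spaces, sequential composition is matrix multiplication, parallel composition is the Kronecker (tensor) product, and the two interact via the interchange law. A throwaway numpy sketch (mine, not from the books above):

    import numpy as np

    rng = np.random.default_rng(1)
    f1, f2, g1, g2 = (rng.normal(size=(2, 2)) for _ in range(4))

    seq = g1 @ f1            # sequential composition: g1 after f1
    par = np.kron(f1, f2)    # parallel composition: f1 tensor f2

    # Interchange law: (g1 . f1) (x) (g2 . f2) == (g1 (x) g2) . (f1 (x) f2)
    lhs = np.kron(g1 @ f1, g2 @ f2)
    rhs = np.kron(g1, g2) @ np.kron(f1, f2)
    assert np.allclose(lhs, rhs)
    print("interchange law holds")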

Edit, again:

There is also of course Bartosz Milewski's book / blog series 'Category Theory for Programmers', which introduces category theory from the perspective of Haskell and C++ programming: https://bartoszmilewski.com/2014/10/28/category-theory-for-p...

But the best introduction to category theory I have read is Leinster's book, 'Basic Category Theory': https://arxiv.org/abs/1612.09375

And as you might have guessed, I do agree with your statement!


> It turns out that the usual categorical semantics for linguistics shares a lot with the categorical semantics for quantum mechanics

so?


One other thing that has a lot in common with those two is algebraic data types. Products and sums crop up in all these areas. Maybe it's enough to say that with category theory, we feel like we are revealing the "elementary particles" (or rules) of all of these systems.
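
In code terms (a schematic sketch of mine): a product type carries an A *and* a B, a sum type carries an A *or* a B, and both come with the universal maps category theory predicts:

    from dataclasses import dataclass
    from typing import Union

    @dataclass
    class Pair:        # product: an int AND a str, with projections fst/snd
        fst: int
        snd: str

    @dataclass
    class Left:        # sum: an int OR a str, tagged by constructor
        value: int

    @dataclass
    class Right:
        value: str

    Either = Union[Left, Right]

    def case(e: Either, on_left, on_right):
        # the universal map out of a coproduct: one function per tag
        return on_left(e.value) if isinstance(e, Left) else on_right(e.value)

    p = Pair(3, "abc")
    print(p.fst, p.snd)                              # projections
    print(case(Left(3), lambda n: n + 1, len))       # 4
    print(case(Right("abc"), lambda n: n + 1, len))  # 3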


Type theory has also been applied to both linguistics and quantum mechanics.

What does it mean that both category theory and type theory have been applied to both linguistics and quantum mechanics?

"categorical semantics for linguistics" gives 0 hits in Google btw.

"categorical semantics for quantum mechanics" gives 5 hits all of which reference the same paper by Bob Coecke titled “Strongly Compact Closed Semantics”, which uses the phrase only once.


I'm showing 1M and 200K Google results for those phrases, respectively.


Those exact phrases? In quotation marks?


No, those result numbers do not include quotation marks. With the quotes I show the same results as you.


I see you've got Opus on your to-do list. I would really appreciate that! I find Opus (appropriately configured) to be audibly indistinguishable from CD audio, and it would really help with the bandwidth requirements.

I've always been really excited by the possibilities implied by PulseAudio's network capabilities, but disappointed by their latency and bandwidth requirements. Roc + Opus would be amazing.


Check out https://github.com/eugenehp/trx for Opus streaming inspiration; I've played around with their code and found it easy to work with. Opus would be great with ROC because in case of buffer over/under runs the codec provides features to mask dropouts based on previous content. This is critical when using Wi-Fi.
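
To sketch what that masking looks like at the receiver: in libopus, PLC is a decode call with a null payload, and in-band FEC is a decode of the next packet with the FEC flag set. A schematic Python receive loop (the decoder and jitter-buffer method names here are illustrative, not a real API):

    # Hypothetical interface; real bindings differ.
    def pull_frame(jitter_buffer, decoder, frame_size):
        packet = jitter_buffer.pop()          # None on a lost packet
        if packet is not None:
            return decoder.decode(packet, frame_size)

        nxt = jitter_buffer.peek_next()
        if nxt is not None:
            # In-band FEC: the next packet carries a coarse copy of the
            # lost frame; decode it with the FEC flag set.
            return decoder.decode(nxt, frame_size, fec=True)

        # PLC: no data at all, so let the codec extrapolate from
        # previous content instead of emitting silence.
        return decoder.decode(None, frame_size)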


Thanks.

> Opus would be great with ROC because in case of buffer over/under runs the codec provides features to mask dropouts based on previous content. This is critical when using Wi-Fi.

Are you talking about its PLC or FEC? I haven't tested them yet, and I'm interested in whether people are using both of them with music.

BTW it would also be interesting to combine our FECFRAME support with Opus.


Good to know. Yes, Opus will be one of the highest priorities for us after we make the very first (0.1) release.


Excellent; a few years ago I even started hacking on my own transport, very roughly as a PA module, but life got in the way and it never got very far. So I'm very pleased to see this great project. Thanks, and good luck!


Debian testing. An amazing compromise between stability and recency.

