I just read the abstract and some of the discussion. He is saying something interesting; however, the language is pretty opaque if you aren't doing philosophy of science.
As I understand the abstract (ruthlessly oversimplified):
1. Emergent behavior is interesting.
2. We don't really understand how to theoretically model emergent systems. Emergent properties are high level, the stuff we can measure is low level. Connecting them in a principled fashion is hard.
3. We don't know if the limits are in the categorization or acquisition of knowledge.
4. We are proposing a pragmatic bottom-up approach which, like Granger causality, bypasses some of the hard parts.
5. We tested the approach on an artificially hard problem and got good results.
6. We think this has use elsewhere
It helps if you understand he's working in theoretical biology, where they see emergent systems at all levels and the inability to model these systems in a principled fashion is a real drag.
> 2. We don't really understand how to theoretically model emergent systems. Emergent properties are high level, the stuff we can measure is low level. Connecting them in a principled fashion is hard.
I feel like this is just outright wrong because we have a history of being able to model it. Saying we don't understand it is to deny years of good theoretical modelling with real results.
I stopped reading the article after a few sentences, so thank you for the summary.
What's funny is that we've shown time and again that we have ways to model emergent complexity and even have generalized predictive ability. Turing's reaction-diffusion model is a perfect example of that.
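For anyone who hasn't played with this, here's a minimal Gray-Scott reaction-diffusion sketch in Python (my own toy example, not anything from the paper; grid size and the F/k parameters are just common illustrative defaults). Purely local update rules, and spots/stripes emerge at the macro scale:

```python
import numpy as np

# Gray-Scott reaction-diffusion: two chemical fields U and V on a grid.
N = 200
U = np.ones((N, N))
V = np.zeros((N, N))
# Seed a small square of V in the middle so patterns can nucleate.
U[N//2-5:N//2+5, N//2-5:N//2+5] = 0.50
V[N//2-5:N//2+5, N//2-5:N//2+5] = 0.25

Du, Dv, F, k, dt = 0.16, 0.08, 0.035, 0.065, 1.0

def laplacian(Z):
    # Five-point stencil with periodic boundaries.
    return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
            np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)

for _ in range(10000):
    uvv = U * V * V
    U += dt * (Du * laplacian(U) - uvv + F * (1 - U))
    V += dt * (Dv * laplacian(V) + uvv - (F + k) * V)

# U and V now contain spot/stripe patterns: macro-scale structure
# arising from nothing but local reaction and diffusion terms.
```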
In my experience, modelling with "agents" that interact, at absolutely enormous scale, often demonstrates emergent properties that greatly resemble those of biological systems, which isn't totally surprising, given how biological systems work.
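A tiny Vicsek-style flocking sketch shows the kind of thing I mean (again, a toy of my own, not from the article; all parameter values are arbitrary): each agent only looks at nearby neighbours, yet the whole population ends up moving in a common direction.

```python
import numpy as np

rng = np.random.default_rng(0)
N, L, r, v0, eta, steps = 500, 10.0, 1.0, 0.1, 0.3, 200

pos = rng.uniform(0, L, size=(N, 2))          # positions in a periodic box
theta = rng.uniform(-np.pi, np.pi, size=N)    # headings

for _ in range(steps):
    # Pairwise displacements with periodic (minimum-image) boundaries.
    d = pos[:, None, :] - pos[None, :, :]
    d -= L * np.round(d / L)
    neighbours = (d ** 2).sum(-1) < r ** 2
    # Each agent adopts the mean heading of its neighbours, plus angular noise.
    mean_sin = (neighbours * np.sin(theta)[None, :]).sum(1)
    mean_cos = (neighbours * np.cos(theta)[None, :]).sum(1)
    theta = np.arctan2(mean_sin, mean_cos) + eta * rng.uniform(-np.pi, np.pi, N)
    pos = (pos + v0 * np.column_stack((np.cos(theta), np.sin(theta)))) % L

# Global polarisation near 1 means the flock has self-organised into a
# common direction: a macro-level property none of the local rules mention.
order = np.hypot(np.cos(theta).mean(), np.sin(theta).mean())
print(f"polarisation = {order:.2f}")
```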
Consider what effects, that might conceivably have practical bearings, we conceive the object of our conception to have. Then, our conception of these effects is the whole of our conception of the object.
By that standard this work appears, shall we say, limited. “Emergence” is a conceptual dead end that’s pragmatically comparable to a just-so story.
Even though you're being downvoted, I think your question has some merit.
The paper is annoyingly opaque, and it would take me a few hours, at least, to validate whether it's interesting at all. This is typical of crackpot papers, so I'll simply reject it.
Also, if it were genius, it would probably have received more than 6 citations after five years.
I'm perfectly happy with the terminology, setup, and analysis. I studied physics in my bachelor's, complex systems in my master's, and have been ingesting philosophy at a leisurely rate all the while. I've been working with concepts like this long enough that I can see this paper fits comfortably into the existing body of work.
I don't know what your background is but I think it's a reasonable framework for studying the connections between micro- and macro-scale dynamics. I'm not speaking to the usefulness of the approach, but it's certainly a valid contribution to the science. I'm sure the vast majority of researchers at SFI would agree (I'm not affiliated with SFI).
If it helps, one of the papers[1] that cited this one has a bit more of a concrete application of this line of thinking. You will also note that TFA is published by NIH and [1] by APS. I'm sorry you feel it's a crackpot paper, but I'd call it hubris to dismiss something you clearly have little familiarity with out of hand.
You are most certainly correct that I have too little familiarity with the field.
Rereading my comment I see that I accuse the author of being a crackpot, which was not my intention. I merely wanted to point out that opaqueness is more often an indicator of being uninteresting rather than genius. Given the limited amount of time we have, I'd rather err on the side of missing out on a breakthrough.
What is more likely is that this paper is neither of those extremes, but my comment did not allow for that position. And even more likely, as you point out, is that I experience it as opaque because I lack the knowledge to understand most of it.
I still don't like the long and complex sentences though :)
From the title, I was hoping for something on how to get more complex behavior to emerge from machine learning systems. Or at least a discussion of why self-improving systems seem to max out after a while. It's not about that. Not even close.
A good rule of thumb, I've decided, is: if the language is sloppy, so is the thinking. Maybe this is just laziness on my part, but there you are. If you can't communicate it clearly, you don't understand it.