Just from the abstract I learned two concepts: Granger causality and downward causation. Do scientists actually know these concepts like the back of their hand and read that abstract with perfect clarity?
People working in emergence might be familiar with those terms, but the abstract is not very good.
The point of the abstract is to provide the paper's main points, methods, and findings, so that the uninitiated reader could understand whether the paper is relevant to them. We have too many papers in various databases to spend hours decoding the abstracts.
I would have rephrased the abstract like this:
"We study why it's difficult to model systems with emergent behaviors. Through logic and experiments, we tried to create such models, and found that the more constrained a system is, the more challenging it is to model. Our work helps resolve conflicting theories and aims to support further research."
Then, the more in-depth explanation can go into a "Technical Abstract" just below.
I used to write "obfuscated" abstracts as well when I was new to academia, and I'm not proud of them. At the time I worried a lot that people would perceive my work as shallow if I didn't write them that way. Over time, my abstracts became "We wanted to know X and tried Y. We discovered Z." And my peers liked them much better than the earlier ones.
There are many problems with "posturing" abstracts - they are opaque to most readers, they kill the paper's reach, and many people will simply dismiss them as posturing even when there is real substance behind them. But that's what happens when you obscure the substance well enough with jargon.
This is something peer review should be good for: curating the quality of abstracts so that knowledge becomes more direct, more easily categorized, and more understandable.
I had to look up both terms. Granger causality seems like an intuitive concept, and I suspect Granger's unique contribution was making it computationally rigorous. Wikipedia, at least, doesn't give a rigorous definition of downward causation - nothing sufficient to distinguish it from lots of other similar concepts.
And...that abstract is very, very difficult to read. My most charitable explanation is that the author is carefully using terms with very precise meanings in scientific philosophy, but I have my doubts.
Both terms are well defined within discourse in the philosophy of science (well defined in the analytic philosophy sense, not the mathematical sense).
Personally, I think the resistance many scientists show toward participating in philosophical discussions (or even using their terms) limits what could be very fertile ground for new work in both fields.
PS: whenever philosophy is brought up, use the Stanford Encyclopedia of Philosophy rather than Wikipedia. A lot of niche concepts get misrepresented on the latter.
Downward causation is certainly part of the core terminology when discussing the philosophy of emergence, like strong vs. weak emergence. At least some scientists are also familiar with it: https://www.preposterousuniverse.com/blog/2011/08/01/downwar...
I am not a scientist, but I had a few stats and econometrics courses - Granger causality was used fairly often. Someone who studied philosophy would probably say the same about downward causation, even though I hadn't heard of it before. I doubt scientists know all the jargon from all fields, but if it's from a field close to their own then they probably do.
Not a scientist, but I learned those two concepts from following the question of 'what the hell is causality?' and reading related articles. My tentative conclusion is that we still haven't figured out what causality is, and the main question marks are emergence and downward causation. Granger causality is just a cool technique to infer causality from correlation, directly contradicting the mantra 'correlation is not causation'. The authors are just saying they also developed a cool technique to infer causality from data.
We know what causality is. A fully formal definition that encompasses all human intuition for causality isn't fully realized yet, but we know in general what causation is and we have been doing science to establish causation for a while now.
Granger causality is not actually causality. A time delay is required for cause and effect, but a correlation between two events can have a time delay as well.
Therefore seeing a delay between two events does not mean the first event caused the second event. But establishing causation between two events does mean that there is a time delay between the first and second event.
Causative experiments are much more rigorous than relying on the existence of a time delay to establish causation. Medicine, for example, does not rely on Granger causality.
As a physicist, this is the first time I've encountered either term. That doesn't seem unusual in general, as every field will have terms that require very narrow definitions whose distinction is irrelevant outside the field (a.k.a. "jargon"). On investigation, I'd describe "Granger causality" as "correlation with a time offset" and "downward causation" as "causation where the proximate cause requires a higher level of abstraction than the effect".
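For the curious, here is roughly what that looks like in practice - a minimal sketch, assuming Python with numpy and statsmodels, on made-up synthetic data where y really is driven by lagged x:

```python
# Minimal Granger-causality sketch (synthetic data, illustrative only).
# grangercausalitytests checks whether past values of the SECOND column
# improve prediction of the FIRST column beyond the first column's own past.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.8 * x[t - 2] + 0.1 * rng.normal()  # y lags x by two steps

data = np.column_stack([y, x])          # columns: [effect, candidate cause]
grangercausalitytests(data, maxlag=4)   # small p-values: x "Granger-causes" y
```

Note that the test never touches mechanism; it only asks whether x's past improves forecasts of y, which is exactly the "correlation with a time offset" reading.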
Tangentially, if anybody could explain why "Granger causality" is named "causality", I'd appreciate it. From the descriptions I've found, it does not in any way require a causal relationship, and naming it a type of "causality" would only serve to conflate correlation and causation. I cannot fathom any reason why it should be named "causation", without assuming intent to confuse.
>Tangentially, if anybody could explain why "Granger causality" is named "causality", I'd appreciate it. From the descriptions I've found, it does not in any way require a causal relationship, and naming it a type of "causality" would only serve to conflate correlation and causation. I cannot fathom any reason why it should be named "causation", without assuming intent to confuse.
I'm not a scientist, but I think I know why. Think in terms of the field of epidemiology. That field is largely observational. All of its experiments can formally only come to correlational conclusions, but the intent and end goal of the field, in spirit, is to come to a causative conclusion.
Take, for example, the question of whether sexual intercourse causes people to be infected with HIV. I can't do a causative experiment here, as that would involve a double-blind test of actually infecting sample groups with HIV via sex. As an epidemiologist, the only tools available to me are correlative experiments where I observe things in the real world.
You see the disconnect here, right? Correlative answers are useless; we want to know what "causes" HIV, but the only thing I have as a scientist are correlative experiments. How would I come to a causative conclusion? The answer is I can't. Formally, I can only establish correlations, then comment on those correlations qualitatively and explain why I think there's an underlying causation underneath.
Well, let's say in my experimentation (the observational kind) I observe that people were randomly getting HIV both before sexual intercourse and after sexual intercourse. That eliminates causality altogether, because we know that cause MUST come before effect.
However, let's say in my experimentation everyone who got HIV consistently got it AFTER they had sex. That does not eliminate causation... but it doesn't establish it either. However, it does informally bring the results closer to causation. So I won't call it "causation" per se, just something similar: "Granger causation." Hence the term. That should answer your question.
Again, I'm not a scientist, but I can only guess as to why a physicist never encountered these terms. I think it's because you guys only do a special type of correlative experiment, where you check whether certain observational data fits a mathematical model. Experimental physics isn't asking causative questions. It's more about asking how well the data fits a particular model. It's purely correlative; there's no drive to answer anything causative here because it doesn't make sense.
When experimental data closely correlates with, say, Newton's laws of motion, does it make sense to say Newton's laws caused the data to come out that way? Not really; it sounds off. Also, how would you do a double-blind test here?
Ultimately, causative experiments involve the experimenter inserting himself into the experiment and using himself as the source of causation in order to establish causation itself. In medicine they do this by deliberately giving medicine to one group of people to see the effect, and not giving it to another group to also see the negative effect in the data. If you were to do this with physics, it would involve removing the laws of physics from the universe and then putting them back to see how that affects the experimental data... not possible, and also not fully aligned with the goal of physics experiments.
Oh, I'm not at all discounting the utility of it. As a statistical tool, being able to exclude either A->B or B->A based on the sign of the offset is really useful as a filter.
> However, it does informally bring the results closer to causation. So I won't call it "causation" per se, just something similar: "Granger causation."
I think this is the main point where my confusion comes from. Calling it "Granger causation" makes it sound like a special case of causation, causation with a stronger condition tacked on. For something stronger than generic correlation but weaker than causation, it seems like "Granger correlation" would be a better term.
> Experimental physics isn't asking causative questions. It's more about asking how well the data fits a particular model. It's purely correlative; there's no drive to answer anything causative here because it doesn't make sense.
I don't think this is accurate. When asking how well data fits a model, typically the model asserts some causation. For a model predicting "If A, then B", an experiment would set up condition A and observe whether B also occurs. The role of experimental design is to produce an environment in which nothing else could cause B, such that a correlation could only be produced by causation.
(Granted, from an epistemological viewpoint, Descartes could still doubt such a case and call it purely correlative, but that veers from physics to philosophy.)
> When experimental data closely correlates with, say, Newton's laws of motion, does it make sense to say Newton's laws caused the data to come out that way?
No, but it would make sense to say that Newton's laws state that a causal relationship exists. For the statement "An object at rest will stay at rest, unless acted upon by an outside force", in an environment with no outside force, the model predicts that the effect "an object stays at rest" is caused by the initial condition "an object is at rest".
>The role of experimental design is to produce an environment in which nothing else could cause B, such that a correlation could only be produced by causation.
Causation is not established by isolated correlation. If I completely isolate two atomic clocks but start them at the same time, that does not mean one atomic clock causes the ticking of the other, even though their ticks are in sync, they exhibit "Granger causation," and they can have no other form of influence on each other.
Causation is only established by having the experimenter's hand within the experiment itself. If A causes B, then I have to turn A on and off randomly and see if B responds as predicted. That is how causation is established. Isolation helps with this, but the critical factor is that experimental intervention is the thing that establishes causation. Remember: correlation is an observation; causation is an intervention, followed by observation of how the system reacted to the intervention.
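A toy sketch of that difference, with hypothetical variables and made-up numbers (Python with numpy), just to illustrate the intervention idea:

```python
# Toy illustration of intervention vs. passive observation (made-up model).
# A hidden confounder drives both A and B, so A and B correlate when we
# merely observe, but randomly setting A ourselves ("turning A on and off")
# reveals that A has no effect on B.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
confounder = rng.normal(size=n)

# Passive observation: A and B both follow the confounder.
A_obs = confounder + 0.1 * rng.normal(size=n)
B_obs = confounder + 0.1 * rng.normal(size=n)
print("observed corr(A, B):", np.corrcoef(A_obs, B_obs)[0, 1])  # high

# Intervention: the experimenter sets A at random, ignoring the confounder.
A_do = rng.choice([0.0, 1.0], size=n)
B_do = confounder + 0.1 * rng.normal(size=n)  # B doesn't depend on A at all
print("corr under intervention:", np.corrcoef(A_do, B_do)[0, 1])  # ~0
```

Only the second run, where the experimenter's hand sets A, separates "A and B move together" from "A does something to B".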
Physics experiments focus more on the observational side of things. The causation is more meta. You're not asking whether A causes B; you're asking whether the concept of "A causing B" even exists.
>No, but it would make sense to say that Newton's laws state that a causal relationship exists
This doesn't make sense. Newton's laws, or physics in general, define what causality means. Right? They define the rules for how one particle "influences" another particle... hence they define the nature of causality itself.
Do you see the difference here? You're not investigating whether or not A causes B. You're investigating the definition of "causes." Hence it's an observational experiment. It's much more meta... and as a result becomes purely correlative, since we can only observe physics; we can't change or intervene within the experiment itself to change physics.
I hadn't heard the latter philosophical term, but there are related concepts I'm familiar with. I'm more technically driven than many of the other neuroscientists I work with, so I'd imagine they don't all know Granger causality. They actually might be more likely to know downward causation, given some of the philosophical drive behind their current work.
Causation is fundamental to science, so I'm curious how many scientists know the details of science itself. Something like Granger causation is not something I know. But I'm not a scientist either.
I just read the abstract and some of the discussion. He is saying something interesting; however, the language is pretty opaque if you aren't doing philosophy of science.
As I understand the abstract (ruthlessly oversimplified):
1. Emergent behavior is interesting.
2. We don't really understand how to theoretically model emergent systems. Emergent properties are high level, the stuff we can measure is low level. Connecting them in a principled fashion is hard.
3. We don't know if the limits are in the categorization or acquisition of knowledge.
4. We are proposing a pragmatic bottom-up approach which, like Granger causality, bypasses some of the hard parts.
5. We tested the approach on an artificially hard problem and got good results.
6. We think this has use elsewhere.
It helps if you understand he's working in theoretical biology, where they see emergent systems at all levels and the inability to model these systems in a principled fashion is a real drag.
> 2. We don't really understand how to theoretically model emergent systems. Emergent properties are high level, the stuff we can measure is low level. Connecting them in a principled fashion is hard.
I feel like this is just outright wrong because we have a history of being able to model it. Saying we don't understand it is to deny years of good theoretical modelling with real results.
I stopped reading the article after a few sentences, so thank you for the summary.
What's funny is that we've shown time and again that we have ways to model emergent complexity and even have generalized predictive ability. Turing's reaction-diffusion is a perfect example of that.
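If anyone wants to see how little machinery that takes, here's a rough Gray-Scott reaction-diffusion sketch in Python/numpy (the parameter values are just commonly used "spots" settings, not taken from any particular paper):

```python
# Minimal Gray-Scott reaction-diffusion sketch: two "chemicals" U and V on a
# grid, with only local reaction and diffusion, yet spot/stripe patterns
# emerge globally - a classic Turing-style emergent structure.
import numpy as np

def laplacian(Z):
    # 5-point stencil with periodic boundaries
    return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
            np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)

n = 128
U = np.ones((n, n))
V = np.zeros((n, n))
U[n//2-5:n//2+5, n//2-5:n//2+5] = 0.5    # small perturbation seeds the pattern
V[n//2-5:n//2+5, n//2-5:n//2+5] = 0.25

Du, Dv, f, k = 0.16, 0.08, 0.035, 0.060  # typical "spots" parameters
for _ in range(10_000):
    uvv = U * V * V
    U += Du * laplacian(U) - uvv + f * (1 - U)
    V += Dv * laplacian(V) + uvv - (f + k) * V

# U now holds a Turing-style pattern; to look at it, e.g.:
# import matplotlib.pyplot as plt; plt.imshow(U, cmap="gray"); plt.show()
```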
In my experience, modelling with "agents" that interact, at absolutely enormous scale, often demonstrates emergent properties that greatly resemble those of biological systems - which isn't totally surprising, given how biological systems work. A toy version of the idea is sketched below.
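Here's a rough Schelling-style toy in Python/numpy (my own made-up parameters, not a model of anything specific): agents with only a mild local preference produce strong global clustering.

```python
# Rough Schelling-style sketch: agents of two types move if fewer than 30% of
# their neighbours are of their own type; large clusters emerge anyway.
import numpy as np

rng = np.random.default_rng(2)
n = 50
grid = rng.choice([0, 1, 2], size=(n, n), p=[0.1, 0.45, 0.45])  # 0 = empty

def unhappy(grid, i, j, threshold=0.3):
    me = grid[i, j]
    if me == 0:
        return False
    neigh = grid[max(i-1, 0):i+2, max(j-1, 0):j+2].ravel()
    same = np.sum(neigh == me) - 1       # exclude the agent itself
    occupied = np.sum(neigh != 0) - 1
    return occupied > 0 and same / occupied < threshold

for _ in range(100_000):
    i, j = rng.integers(n, size=2)
    if unhappy(grid, i, j):
        empties = np.argwhere(grid == 0)
        ei, ej = empties[rng.integers(len(empties))]
        grid[ei, ej], grid[i, j] = grid[i, j], 0   # move to a random empty cell

# grid now shows large same-type clusters despite only a mild local preference.
```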
Consider what effects, that might conceivably have practical bearings, we conceive the object of our conception to have. Then, our conception of these effects is the whole of our conception of the object.
By that standard this work appears, shall we say, limited. “Emergence” is a conceptual dead end that’s pragmatically comparable to a just-so story.
Even though you're being downvoted, I think your question has some merit.
The paper is annoyingly opaque, and it would take me a few hours, at least, to validate whether it's interesting at all. This is typical of crackpot papers, so I'll simply reject it.
Also, if it were genius, it would probably have received more than 6 citations after five years.
I'm perfectly happy with the terminology, setup, and analysis. I studied physics in my bachelor's and complex systems in my master's, and have been ingesting philosophy at a leisurely rate all the while. I've been working with concepts like this long enough to see that this paper fits comfortably into the existing body of work.
I don't know what your background is but I think it's a reasonable framework for studying the connections between micro- and macro-scale dynamics. I'm not speaking to the usefulness of the approach, but it's certainly a valid contribution to the science. I'm sure the vast majority of researchers at SFI would agree (I'm not affiliated with SFI).
If it helps, one of the papers[1] that cited this one has a bit more of a concrete application of this line of thinking. You will also note that TFA is published by NIH and [1] by APS. I'm sorry you feel it's a crackpot paper but I'd call it hubris to dismiss something you clearly have little familiarity with out of hand.
You are most certainly correct that I have too little familiarity with the field.
Rereading my comment, I see that I accuse the author of being a crackpot, which was not my intention. I merely wanted to point out that opaqueness is more often an indicator of uninteresting work than of genius. Given the limited amount of time we have, I'd rather err on the side of missing out on a breakthrough.
What is more likely is that this paper is neither of the extremes, but my comment did not allow for that position. And even more likely, as you point out, is that I experience the opaqueness because I lack the knowledge to understand most of it.
I still don't like the long and complex sentences though :)
From the title, I was hoping for something on how to get more complex behavior to emerge from machine learning systems. Or at least a discussion of why self-improving systems seem to max out after a while. It's not about that. Not even close.
A good rule of thumb, I've decided, is: if the language is sloppy, so is the thinking. Maybe this is just laziness on my part, but there you are. If you can't communicate it clearly, you don't understand it.