I'd even argue that the declining rate of scientific advancement is due to the academic track moving towards the same short-term thinking that plagues parts of the private sector. When the incentive structure rewards pumping out publications, there is far less breathing room for the patient development of good science and novel research. Plus, null results from excellent research are treated as useless, so the incentive is to chase obvious, positive results, especially for early-career scientists.
The net result of the current academic incentive structure is the frequent publication of safe, boring positive results, especially pre-tenure. Academic research needs to become LESS like the quarterly-return-driven private sphere, not MORE like it.
1. The incentive is to get grants. Papers, sure... but really grants. The problem is that grants are effectively smaller than they used to be due to inflation, so you have to hold multiple R01-level grants to fund the lab. Grants must be understandable and seem feasible to get funded, and the competition is incredibly high, so this limits attempted scope. To survive, you write grants that are simple and easy to achieve.
2. In general, the problems are harder today than they were 40 years ago. We are constantly delving into problems plagued by noise and heterogeneity. This makes progress much tougher.
I feel like there's some fundamental fallacy in the idea that "a declining rate of scientific advancement" is a sign that the field is somehow being corrupted or rotting out from the inside.
Science isn't like other commodities. In most of recorded history it is only ever produced, never destroyed [1], and the product is basically free to replicate [2]. The result is massive inflation: it might be hard to make a profit growing corn the same way we did 200 years ago, but doing a 200 year old science experiment is utterly pointless outside a classroom demonstration.
So making science that is worth paying for is just always going to get harder. And yet we equate science with other industries when we expect anything less than billion dollar experiments to yield fundamentally interesting results. This doesn't mean science is somehow getting worse, or that the practitioners are to blame, it just means it's evolving to attack much more difficult problems.
All this being said, there are plenty of possible reforms to keep the progress going: reproducibility is theoretically easier than ever, and yet many journals still aren't requiring open datasets or public code. We need to keep the pressure on to evolve in a positive way, not just throw up our hands because things are harder than they were when we knew less.
[1]: Ok, there are some examples where lots of information was destroyed, and there's a bias in what gets recorded.
[2]: I don't mean repeating the same experiment, just that the results from one experiment are trivially disseminated to millions of people.
Could we blame the "industrialisation" of PhDs on the fact that we should expect less impact from each individual researcher, and thus the obvious policy to keep interesting research happening is simply more researchers?
Pretty influential one: https://www.aeaweb.org/articles?id=10.1257/aer.20180338 "The number of researchers required today to achieve the famous doubling of computer chip density is more than 18 times larger than the number required in the early 1970s. More generally, everywhere we look we find that ideas, and the exponential growth they imply, are getting harder to find."
That one, though, is because we are running into physical limits: if we want to build things out of atoms, we can't make features that are half an atom thick. Even above that scale, physical effects that used to be ignorable, like quantum tunneling, no longer are.
From the late 70s through about 2005, scaling semiconductor generations was easy. MOSFET scaling followed rules formulated by Dennard, which provided a fairly easy method of scaling semiconductor designs from one generation to the next, keeping power density roughly constant and continually improving performance. The problem is that by around 2005, if you did it that way, your gates were no longer switches but dimmers, leakage power started to dominate, and chip architectures had to change radically to keep on scaling.
So, we can no longer just scale designs from one generation to the next, we have to come up with completely new approaches. That's much harder.
You could argue that those current researchers are doing a lot more than those in the 70s. It is difficult to quantify how much harder the doubling problem becomes with each iteration, and how much more effort it takes to solve. But consider that, after decades of yearly exponential improvement, costs have consistently grown only roughly linearly: an 18x increase in the cost of a doubling after roughly 40 iterations (and 2^40 is massive). I mean, that's phenomenal by any standard.
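To put those figures in perspective, here's a quick back-of-the-envelope sketch in Python (the ~40-doubling count is this comment's rough assumption, and the 18x ratio is the Bloom et al. figure quoted above):

    # Implied growth in research effort per chip-density doubling.
    doublings = 40          # assumed number of density doublings since the early 1970s
    researcher_ratio = 18   # ~18x more researchers needed today (Bloom et al.)
    per_doubling = researcher_ratio ** (1 / doublings)
    print(f"~{(per_doubling - 1) * 100:.1f}% more researchers per doubling")   # ~7.5%
    print(f"vs. ~{2 ** doublings:.1e}x more density over the same span")       # ~1.1e12x

Under those assumptions, research effort only had to grow ~7-8% per doubling while the output of each doubling compounded exponentially, which is the sense in which this is phenomenal.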
All of this is the result of scaling issues. For most of the history of science, it was an endeavor pursued by very few people. Then we started sending everyone to university, eviscerated our economies, and expanded the research workforce a thousandfold. Maybe more.
There is a dearth of rewarding research to pursue, even less grant money, and in such a crowded ingroup people become hyper-competitive at status-seeking activities. Now we have entire catalogs of journals that are pretty much just publication mills. There are entire continents whose papers can't be trusted to be anything but outright fabrications. No meaningful reform is possible.
There's tons of very interesting and rewarding research to pursue. It's hard to see the forest for the trees because so much of the research pursued currently is neither interesting nor rewarding. You have to be brave, creative, and independently minded in order to realize that this research is just around the corner. The current academic system doesn't select for people with these traits (rather, it selects for people who are good at taking tests and following rules).
Yes there are low-quality papers out there but I'd rather have 100 low-quality papers if it gives us 1 truly insightful piece of research. Any expert worth their salt can read a paper and judge its veracity very quickly, and it is those high-quality papers that get cited.
Even when one of those high-quality publications gets shown to be false, it moves the field forward. Real science is incremental and slow.
The term for parasitoids that attack other parasitoids is a "hyperparasitoid". I did my PhD on parasitoids that attack aphids, but I've never heard of a hyper-hyperparasitoid, do you have any reference to that example?
I was under the impression that it’s fairly common?
The caterpillar: Often a pest species like the tomato leafminer
Primary parasitoid: Cotesia glomerata
Secondary parasitoid: Lysibia nana and some species from the genus Gelis like agilis
Tertiary parasitoid: Certain species in the genus Trichogramma or the family Eulophidae.
I recently discovered the existence of hyperparasitoid wasps, much to the delight of my entomologist friend. That these things fly, have a working nervous system (apparently ditching the neuronal nuclei during metamorphosis?), have the ability to navigate, etc. continues to blow my mind. They are so tiny!
It's a pretty widely known thing that studying charismatic megafauna gets you lots of money. However, they're also generally WAY more of a pain to study. Fewer individuals, larger home ranges, expensive permits, etc. A good friend studies basking sharks and the shark research world is insanely competitive, full of crazy type A folks. Compared to the insect ecology world (where I come from), which is full of pretty chill stoners and weirdos.
That's interesting. I first read your comment, and I thought you were referring to the personalities of those studying an animal gradually changing as they unconsciously model their own behavior after that animal. This seems highly plausible to me.
But then I re-read it, and the second time it seemed to mean that the members of a group studying a certain animal would self select to favour those who are fascinated by and admire that animal. This also seems highly plausible to me.
Either way, it seems fitting that the world of shark study is full of "apex predator" researchers.
I think some of it is self-selection, but I also think some of it is a filtering effect based on the much more competitive and stressful atmosphere. Getting funding and permits and equipment to study sharks is a way more stressful process than walking around in a field collecting bugs. Not that collecting bugs is always easy, but the barrier to entry is way lower.
> They smell like the animal. Presumably this results in some correlation in their gut flora
Frankly, these claims are starting to be a little offensive.
Biologists take baths when needed, exactly the same as any other people. This is no different than claiming that people who breed pigs end up looking and smelling like pigs. It's not a productive way to steer this kind of conversation.
I started watching harp videos and they're all so calm and composed, like harp music lol. I really wonder if they're attracted to the harp because they're harp-like or if they became harp-like because of playing the harp.
The idea that you can change your personality just by choosing a field of study is for some reason very appealing to me.
> studying charismatic megafauna gets you lots of money
May I ask how so? Is it from producing popular documentaries or something? Or is there the research grant money from conservationist institutions or societies? (I can see how that wouldn't be there for parasites).
Donors. They want something that makes them feel good. Something to talk about at a cocktail party. LCM: large charismatic mammal; "I'm saving pandas" is better than the parasitic anderson wasp* that lives out its lifecycle in some gross fungus thing.
Your point is well taken that pandas are probably a quick pitch for the right crowd, but “lots of money” seems optimistic for wildlife/conservation biology funding. That said, raising money for diseases of __human__ inhabitants of the developing world is typically quite a challenge. Counter to the focus of this thread, obesity is where it’s at right now for abundant funding.
It comes down to a lot of factors, but a big one is the diversity of funding sources. If you study a species that doesn't have broad appeal, you're probably looking at NSF funding, maybe USDA. If you study a game animal you can tap into all sorts of funding sources, like Ducks Unlimited, etc. You might be able to get state or federal wildlife agency funding. You might get a weird, wealthy donor who likes sharks.
NSF grants are still the most important and impressive for most subfields within ecology, but the competition is fierce.
A parasitoid lays multiple eggs in a caterpillar host. The larvae eventually hatch out of the host's body, but do NOT kill it. They then need to pupate outside the host, which leaves them vulnerable to predation. Their former host, the caterpillar whose body they just violently erupted from, will then act as a BODYGUARD. It will body slam any insects that approach, knocking them away from the pupae. Truly the stuff of science fiction.
There's a nematomorph parasite that infects crickets, and part of its life cycle is aquatic. It will induce crickets to jump into water and drown themselves (there are some crazy videos of this on YouTube). This study found that the allochthonous input (land to water) coming from the crickets jumping into a Japanese stream was a large part of an endangered trout species' diet. In short, this trout was kept alive because of a parasite driving crickets to drown themselves.
> This study found that the allochthonous input (land to water) coming from the crickets jumping into a Japanese stream was a large part of an endangered trout species' diet. In short, this trout was kept alive because of a parasite driving crickets to drown themselves.
The summary doesn't seem to follow from the finding. The fact that you mostly just eat crickets that walk up and ask to be eaten doesn't immediately imply that, if the crickets stopped doing that, you'd starve to death. It should be easy to understand the choice to go with a low-effort option even if there's also a higher-effort option available.
There's an inherent cost to foraging, so a high-quality food item that requires little effort is a much greater net energy benefit. When we're talking about an endangered species whose margins are quite slim to begin with, this can be a big difference maker. A couple dead trout reduces the population size, increases inbreeding depression, things aren't looking so good. I certainly oversimplified the mechanisms here, but a change in 60% of an organism's diet is not easily dismissed.
What's weird is that it then... basically acts like a tongue? It doesn't seem to be massively detrimental to its host, but it's absolutely insane to see a fish's mouth open and then there's just like, a little guy hanging out in there.
Jesus fuck. I knew about these guys but did not know they perform this by hanging out in fish gills until they get a breeding pair, then one crawls from the gills to the tongue to clamp on and replace the damned thing, and later they reproduce and spread young from the gills!
Tangentially: assuming I had the drive + health to make the change, do you feel it would be a particularly challenging move for a software engineer to abandon their career to go back to school and study insects or other arthropods? I'm not a competitive person at all, and I saw your other comment above about the field, but I still imagine the money available for studying bugs is a tiny fraction of that for writing them.
That's an interesting question- I think it would be challenging in some respects, which would likely differ during and after school.
During a PhD program, you're not gonna make much money, but you will probably enjoy the classes and research, particularly if you have some money saved up from your current career to help smooth the bumps of living TA paycheck to TA paycheck. I really loved my PhD program, but I was also in my early 20s, so living like a poor grad student wasn't as big a deal.
As far as long-term career prospects go, I think things are a bit more challenging. There are opportunities to work for state or federal agencies, particularly if you focus on agricultural insect pests. Otherwise it's pretty much academia, and the job prospects there are pretty slim. Unlike other domains where there are lots of non-professor jobs, for entomology and related fields there are far fewer. Labs tend to be fairly small, so the total # of jobs nationally is also pretty small.
My advisor always used to say that he never knew anyone who didn't make it into a tenure-track position, if they were willing to hang on long enough. He also acknowledged that hanging on for a long time can suck! I wanted to start a family and live in a place where I had a community, so I left academia and work as a data scientist for a public transit agency near lots of friends and family.
What I'd say is that if you have a sense of the long-term career you want (agency scientist, ag researcher, tenure-track prof) and can go into grad school with a solid plan, you can make it happen. Entomology isn't a terribly expensive field, so you can do a lot without much funding, and that freedom can be really wonderful! But it's not going to be a particularly lucrative career, and competition for stable tenure track jobs is high, requiring a lot of geographic flexibility.
As you can see, I've got a lot of thoughts on this! If it's something you're seriously considering and want to talk more, let me know and I'd be happy to chat sometime.
Don't you think you're underplaying your hand here a little? The mechanism of this behavioral modification is in itself both beautiful and extremely spooky: https://en.m.wikipedia.org/wiki/Bracovirus
I genuinely would like to hear about a more important piece of software for businesses. Or more used. In those respects I'd have to agree with each of those statements.
I think "for businesses" is probably the caveat here. It feels like a trope to mention it at this point, but I think the world would suffer greater consequences if, say, the Linux kernel broke.
I'd also argue that Excel isn't really the most "powerful" per se, but the most accessible and convenient for sure.
SAP or Salesforce is often the critical piece of software in many an organization. My current assignment (I don't work with SAP though, thankfully) is an energy company with ~3 million customers; each and every one's data is managed in SAP, along with energy usage, billing/invoicing, the works. Small army of people managing and maintaining it, too.
But I do find this a fascinating question. What enables human flourishing more? Instant messaging, Word, Excel, CAD, Photoshop, databases? Or something even more esoteric. I remember someone saying that if macOS disappeared tomorrow, we'd adjust, but if older versions of Windows disappeared, the world stops.
First statement maybe yes. Hard to find any software used in so many places with so much money / importance etc.
Second statement, hard no. It is a good balance of ease of use, familiarity, and power, but for sure not even close to being the most powerful tool for crunching numbers at scale.
I've used this example when teaching students about models used for predictive power vs. models used to better understand mechanisms:
You have a model that predicts with great accuracy that rowdy teens will TP your house this Friday night, so you sit up late waiting to scare them off.
You have a model with less predictive power, but more discernible parameters. It tells you the parameter for whether or not houses have their front lights turned on has a high impact on likelihood of TP. You turn your front lights on and go to bed early.
Sometimes we want models that produce highly accurate predictions, sometimes we want models that provide mechanistic insights that allow for other types of action. They're different simplifications/abstractions of reality that have their time and place, and can lead you astray in their own ways.
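To make that concrete, here's a toy sketch in Python (the data and effect size are entirely made up for illustration): a plain logistic regression whose coefficient on a "front lights on" feature can be read off directly, which is the kind of mechanistic handle a black-box predictor wouldn't give you.

    # Toy data: houses with a "front lights on" flag plus a few irrelevant features.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    lights_on = rng.integers(0, 2, size=500)             # 1 = porch light left on
    other = rng.normal(size=(500, 3))                    # unrelated house features
    p_tp = 1 / (1 + np.exp(-(-1.0 - 2.0 * lights_on)))   # lights on -> much less likely to get TP'd
    got_tp = rng.random(500) < p_tp

    X = np.column_stack([lights_on, other])
    model = LogisticRegression().fit(X, got_tp)
    print("lights_on coefficient:", model.coef_[0][0])   # strongly negative: turn the lights on

In practice you'd fit this to real observations, of course; the point is just that the porch-light parameter is directly inspectable, even if a fancier model might predict Friday's raid more accurately.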
Can't you run the model in reverse? Brute force through various random parameters to the model to figure out which ones make a difference? Sure, it could have absurd dimensionality, but then it would be unlikely one could even grasp where to begin anyway. After all, AlphaGo couldn't write a book for humans about how to play Go as well as it can.
That's what model interpretability research is. You can train an interpretable model from the uninterpretable teacher, you can look at layer activations and how they correspond to certain features, or apply a hundred other domain-specific methods depending on your architecture. [0]
Sadly, insight is always lost. In a noisy world, even with the best regularization, some fitting to the noise, or to higher-order features that describe it, is inevitable when maximizing prediction accuracy, especially if you don't have quite the right tools to model it (like transformers adapting to lacking registers [1]) and yet have a lot of parameters within the chosen architecture.
What's worse, bad explanations are often much worse than none. If your loan is denied by a fully opaque black box, you may be offered recourse to get an actual human on the case. If they've trained an interpretable student [2], either by intentional manipulation or by pure luck, it may have obscured the effect of some meta-feature likely corresponding to something like race, thus whitewashing the stochastically racist black box. [3]
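For anyone curious, "training an interpretable student from the uninterpretable teacher" can be as simple as the following sketch (hypothetical data; a gradient-boosted model standing in for the opaque teacher, a shallow decision tree as the student):

    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

    teacher = GradientBoostingClassifier().fit(X, y)   # accurate but opaque
    student = DecisionTreeClassifier(max_depth=3)      # shallow, human-readable
    student.fit(X, teacher.predict(X))                 # fit to the teacher's predictions, not the labels

    print(export_text(student))                        # a handful of if/else rules approximating the teacher

Whatever the student can't express is exactly the insight that gets lost, or, worse, quietly papered over.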
This reminds me of another thing I use when teaching: a perfect model of the entire world would be just as inscrutable as the world itself.
I think having multiple layers of abstraction can be really useful, and I have done it myself for some agent-based models with high levels of complexity. In some sense, these approaches can also be thought of as "in silico experiments".
You have a model that is complex and relatively inscrutable, just like the real world, but unlike the real world, you can run lots of "experiments" quite cheaply!
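As a purely illustrative sketch of what "cheap in-silico experiments" can look like (this is not the model I worked on, just a minimal stand-in): agents wander on a line, and we sweep a single parameter across many runs for pennies of compute.

    import random

    def run_sim(move_prob, n_agents=100, steps=200, seed=0):
        # Each agent steps left/right with probability move_prob at each tick.
        random.seed(seed)
        positions = [0] * n_agents
        for _ in range(steps):
            for i in range(n_agents):
                if random.random() < move_prob:
                    positions[i] += random.choice([-1, 1])
        return sum(abs(p) for p in positions) / n_agents   # mean displacement

    # The "experiment": sweep the parameter and compare outcomes across runs.
    for p in (0.1, 0.5, 0.9):
        print(p, run_sim(p))

Real agent-based models are far richer, but the workflow is the same: perturb a parameter, rerun, and compare, something you can rarely do with the actual system.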
"Tuition reimbursement" is definitely an accounting gimmick, at least once you're done taking classes. After qualifying exams, the idea that a PhD candidate is receiving anything that you would pay tuition for is laughable. You're doing research and teaching, both of which are things a professor does, albeit at a smaller scale and with some oversight/guidance, but that is far more akin to having a manager than having a course instructor. I'm all for including tuition as a concept when you're still taking courses, but after that point it makes no sense.
I think the typical argument about tuition being an "accounting gimmick" is that it's the university paying itself for something -- does it actually come out of a budget, paid for by other students? (Unclear.) If there were no "tuition", would that money instead go to the student? (Unlikely.)
The question of whether senior doctoral students who aren't taking classes anymore should be paying tuition is a good one too! In the case described in the original article, though, the student was in a master's program and quite likely taking classes, so not quite this scenario.
Anecdotal data from the grad programs in my area is that, at least for PhD students, the supervisor pays their tuition from whatever funding source they use to pay the stipend.
Yes, and the supervisor also pays a fraction (often 50%+!) of any incoming grant money to “overhead” — to the institution, for lab space, staff, operations, etc.
Why some of this money is categorized as “tuition” and other money as “overhead” is at the root of the question I think.
Pretty skeptical of a floating "3.5%" figure without any additional context. NYT estimates that 15-26 million people participated in the protests following George Floyd's murder. That breaks the 3.5% threshold, and I don't think we have seen a whole lot of serious police reform in the US since then.