As a scientist, I've seen lots of people who are overconfident both in themselves and in science. IMO, a good PhD science program helps students master a discipline, but also teaches them to recognize the limits of the discipline.
For example, 23andMe thought they would revolutionize drug development with a huge genetics dataset. In practice, genetic information alone is not sufficient to treat the majority of diseases that affect individuals and society. There is too much environmental variation affecting human biology for purely genetic approaches.
Understanding the real limits of knowledge is vital to pushing knowledge forward where we can. As a biologist, one of the things I most appreciate about Sabine Hossenfelder (a physicist) is that she highlights the limits of knowledge in her (and adjacent) fields. She gets a lot of pushback (and is sometimes wrong), but having the discussion is vital to science.
Acknowledging the limits of science is not a negative attitude about science, but a positive one. A clear idea of the current limits of science (both theoretically and practically) is instrumental to pushing through them. For example, the scholarly papers highlighting the replication crisis (https://en.wikipedia.org/wiki/Replication_crisis) are actually very useful for maintaining the health of science as a human endeavor, not attacks on it. Scientists need a clear understanding of the scientific foundations that they are building on.
I find that colloquially we use the term "science" with several distinct meanings, two of which are:
1. Science as the body of knowledge
2. Science as a method / approach
I also find that mixing the meanings/perspectives/intents up in a single conversation is common, sometimes accidentally, sometimes intentionally.
In my ignorance (ComSci major, so not a real science :), I would describe myself as extremely positive / confident about "Science, the approach/method" and, through that approach, pragmatic about "Science, the current body of knowledge".
In other words:
Yes, there are very much limits to what we currently know, and some of what we think we know will turn out to be wrong, subtly or catastrophically. There are definitely huge limits and uncertainties to Science the body of knowledge!
But, acknowledging it is kinda the point, and the best way to figure it out that we are currently aware of is through the scientific approach. (I've just realized I might even have become the zealot you describe, because I can't even figure out what a plausible & feasible alternative method is, if your goal is to actually figure things out. To that point, I find humility and skepticism about the current scientific body of knowledge a crucial part of science the method, something which most other methods lack.)
I find the distinction crucial, especially in political and religious discussion frameworks. Otherwise, I never know if I agree or disagree with statements regarding the "limits of science" etc.
(This is all further muddled by the zillion daily popular articles where "Science says that [...]!!!" or "[...], scientists find", which... ugh, oversimplify at best and more likely deceive)
I think most people exposed to science#3 from the inside can agree that science#2 works – and indeed works surprisingly well – despite science#3, not because of it.
> I think there is a third definition of "science":
> 3. What is actually happening in academia
While we're enumerating, I think there's a fourth definition:
4. "Science" as a belief system rather than as a tool/technology. I think in this respect, there's often an unacknowledged (or denied) blurring between science and science fiction (the more traditional "spaceship books" kind, as well as overconfident speculation). There's also a tendency to claim the prestige and authority of science for one's own personal opinions and preferences.
According to Wiktionary, 'scientism' has these meanings:
1. The belief that the scientific method and the assumptions and research methods of the physical sciences are applicable to all other disciplines (such as the humanities and social sciences), or that those other disciplines are not as valuable.
2. The belief that all truth is exclusively discovered through science.
Maybe the second definition kind of fits if you stretch it. I think 'futurism', not in the sense of the artistic movement, is a closer fit; '2. The study and prediction of possible futures.'
I think scientism runs deeper than just a set of philosophical beliefs. It is more like a modern religion, or even an aesthetic.
A key feature of scientism is that science itself is never well defined or understood by its adherents, so science is a floating signifier that can mean whatever its proponents want it to mean. Typically, these are not people with firsthand experience doing science, but consumers of second and thirdhand science media and science culture (TED Talks, "I fucking love science", NASA t-shirt, "I believe in science" bumper sticker, etc.). Lack of scientific literacy results in science taking on a ritual status, where following the ritual (scientific method, peer review, etc.) produces truth, and failure to find truth is always the fault of the misled individual scientist. Because science is the ultimate source of truth, it is also the organizing principle for society, and those "anti-science" people who would question science are dangerous and stand in the way of progress--basically a religion.
> I think scientism runs deeper than just a set of philosophical beliefs. It is more like a modern religion, or even an aesthetic.
I agree.
> A key feature of scientism is that science itself is never well defined or understood by its adherents, so science is a floating signifier that can mean whatever its proponents want it to mean. Typically, these are not people with firsthand experience doing science, but consumers of second and thirdhand science media and science culture...
I don't think that's completely true. I think you have different levels of adherents of scientism, and what you say is definitely true of most of the lower levels ("consumers of second and thirdhand science media and science culture").
However the top tier consists of the creators of a decent chunk of that "science media and science culture." Many of those people are actual scientists, but ones who have sought out the public eye as "science popularizers" and are best known for their works for general audiences.
Believe it or not, I have had more than one person tell me with sincerity that observing the contents of a box is "doing science", I imagine because they believe that science is actually the only way to acquire knowledge.
Meanwhile, these people mock the religious [in their imagination] for "being" insular/fundamentalist.
I find personally that "scientism", unlike many other isms, is an external label. I.e., I don't think people call themselves that. It's a negative label ascribed to people one philosophically disagrees with. As such, I am skeptical of its value and use.
The word you're looking for is probably "anti-intellectualism". The tribal flavoring of the white supremacism-religious fundamentalism far-right spectrum in the US is against undisputed knowledge, history, learning, and facts in addition to STEM.
Or they could be referring to the opposite - the unquestioning adherence to all things Science, as long as those things fall within the progressive orthodoxy that has a stranglehold on academia.
This comment doesn’t make sense at all in this context. It seems like you’re rushing to put down the “white supremacism-religious fundamentalism far-right” before understanding if that’s relevant to this thread.
Agreed; I have a few friends who quit academia, and a few who stayed. Some of their experiences are hope-inspiring, some are depressing. Same for those in government employ.
But I don't think of academia as the only, or even necessarily the most important, place that science is happening in the world today.
I often trust science as a social process more than the scientific method.
The scientific method works best in fields such as physics and chemistry, where you have an established model of reality. The model has been extensively tested and validated, and you can use it to design experiments that will likely test what they are supposed to, taking all relevant factors into account.
Other fields, particularly those that are most affected by the replication crisis, study phenomena that are too complex for such comprehensive models. Instead of testing established mechanisms, such fields often use the scientific method to investigate black boxes. Designing experiments is harder, because it's not clear if you are measuring the right things in the right way, or which factors could plausibly affect the results. You may not even be sure if the mechanisms the experiments rely on actually exist and if they are properly understood.
I like to think that the replication crisis is the social process trying to deal with the issues resulting from overreliance on the scientific method. When you can't rely on an established body of knowledge, a focus on the method takes your attention away from questioning your assumptions and understanding them.
> I like to think that the replication crisis is the social process trying to deal with the issues resulting from overreliance on the scientific method. When you can't rely on an established body of knowledge, a focus on the method takes your attention away from questioning your assumptions and understanding them.
Like many ideologies it behaves a lot like a religion, and online discussions are chock full of artifacts.
In my case, I have a negative view of "science" because of this phenomenological aspect of it, which I consider dangerous because it results in irrational, tribal thinking (see: covid, climate change, etc)...and it ain't only the "deniers" who are guilty.
> Yes, there are very much limits to what we currently know, and some of what we think we know will turn out to be wrong, subtly or catastrophically. There are definitely huge limits and uncertainties to Science the body of knowledge!
> But, acknowledging it is kinda the point, and the best way to figure it out that we are currently aware of is through the scientific approach.
Pro-science people absolutely love this meme, I encounter it several times a day in the online spaces I frequent.
This goes back to the comments on definitions. Much of the misinformation on covid was largely due to the general public having a misunderstanding of what can NOT be called science. For example: when a research paper is peer reviewed by its author (e.g. Pfizer reviewing its own drug research), that's clearly not science. Conversely, when the heads of multiple top university epidemiology departments come together to speak out about it, that should be regarded as science.
What happened with COVID is that the media declared itself the authority on science, most of the public believed them, and most scientists were pushed aside or stuffed with a sock and labeled "deniers." This altogether framed science as the bad guy. That is, what you described as science in your comment is actually not describing science at all. It's describing the media.
Ironically you're presenting textbook mis-definitions of science, the exact problem being discussed in this thread.
> when a research paper is peer reviewed by its author .. that's clearly not science
Peer review is something academia evolved only relatively recently. Science long pre-dates peer review, and you can do science without peer review (or with useless peer review), just like you can write programs without code review. As recently as Einstein, peer review was seen as some offensive newfangled thing which he had no time for.
The goal of peer review is to try and ensure that claims that are presented as being scientific actually are. It frequently fails at that task but even when it works it's still just a safety check, not an actual required component of true science.
> when the heads of multiple top university epidemiology departments come together to speak out about it, that should be regarded as science
A bunch of academics making an announcement is definitely not science. The whole point of science is that it doesn't rely on People With Titles deciding by fiat what's true. That's what religion is!
> What happened with COVID is that ... most scientists were pushed aside
>A bunch of academics making an announcement is definitely not science.
A bunch of heads of Ivy League universities, who are the foremost experts on debunking junk epidemiology and who have all contributed significantly to the field, are more worth listening to than the media, which currently has joint ownership and board control of Pfizer. Even if only from a Bayesian perspective. Such conflicts of interest don't make for good science. But of course, as the general public doesn't know the difference, you can tell anyone whatever you want. Well, anyone except actual scientists.
You're right. I should have said, good science. You're welcome to dabble as deep into whatever mental hole you like without taking any criticism. But in and of itself, such a take on science is deserving of criticism, and has been criticized by scientific philosophers for centuries.
""
> I would describe myself as extremely positive / confident to "Science, the >approach/method
""
But what does that actually mean?
How do you manifest this in your daily life and in your decisions?
My grandmother told me. God told me. I heard it on the news. Well, those do not seem scientific, nor do they follow the tenets of science. But how can you evaluate information you receive that is called science and that, as far as you know, comes from a scientist?

Especially these days in the US, everything is incredibly politicized. There is no way to dig deep enough into every tidbit of knowledge we are exposed to. A lot of scientific fields these days are so complicated that you need a degree to even start understanding what is going on or to evaluate data.

If we are lucky, we know a few people in different fields whom we trust. We trust those people, so we trust what they say, since they are scientists. We all walk around and -believe- in various things we hear and elect not to believe in others. Then we claim that we believe in this or that "because of science" and because of the scientific method. But we don't know that for a fact, because we don't know, and probably could not understand, all the steps from beginning to end needed to ensure that the scientific method had been applied appropriately at all stages.

I don't think real science should state "THIS IS TRUTH BECAUSE THIS IS SCIENCE". It should be "This is our best understanding right now, and there are some other theories out there that may also be valid."
I think the parent comment specifically means coming up with a falsifiable hypothesis and testing it, as opposed to the "body of knowledge" part you are talking about.
I think your distinction between 1 and 2 is very important (but, as a computational biologist, I disagree about CompSci not being science :-).
Along these lines, I think the role of consensus in science has been overly dramatized by those with various policies to push. Max Planck's principle is famous in the short form "Science progresses one funeral at a time". One of the professors I worked with as a graduate student had a sign on his desk, "First They Ignore You, Then They Laugh at You, Then They Attack You, Then You Win".
These two quotes capture an important tension in the practice of science: consensus both retards the progress of science and consolidates it for others to build on.
Retarding the progress of science is sometimes (often?) a good thing.
Many people regard Einstein's later career as fruitless, but by attacking quantum mechanics, he improved its foundations as well as making it much more acceptable.
It is frequently used that way, and I suppose I didn't include it because I feel it's an incorrect usage... but I'd have to agree with you that it might even be the most common :-/
What’s important is not what you think. More important is what other people think, even though their observations are just as fallible as your own. What’s most important is how you measure compared to other people, since everything else is a biased, faulty guess.
Younger people are more at risk of getting this wrong because their scope of knowledge and experience is shallower. Introspection grows with age, but when introspection is not deliberate, older people are more catastrophically at risk of getting this wrong.
I'd agree with this although I understand where the idea comes from. It is difficult for people to understand what is and is not a science and it is easy to think that computer science is not a science even when you study it.
The way we learn and study topics is divorced from the original method of discovering those topics. The way people learn Computer Science is generally by absorbing the information, not by doing the experiments. So it is difficult for people to understand that the way we have this knowledge is through hypothesis forming and experimentation i.e. Science.
> As a scientist, I've seen lots of people who are overconfident in both themselves and overconfident in science.
I feel that I've never had the first problem, but have definitely had the second.
On one hand, there is the natural limitation of Science itself in terms of the type of questions (that are amenable to the scientific method) it can answer. On the other hand, it is still the best way of generating knowledge that we have.
My overconfidence was that scientists, as individuals and as a community, would always do the right thing, driven by, and honestly following, the scientific method. But in the past few years I've had to revisit this assumption several times and be reminded to always retain some healthy skepticism.
The most recent example is the climate scientist who just published in Nature and then went ahead and penned an op-ed [0] saying he misrepresented the actual factors in order to get published.
> My overconfidence was that scientists, as individuals and as a community, would always do the right thing...
This is a great point. I'm shocked by how often I end up working on a project with a colleague who is taking the path of least resistance. In my field, this usually results in using decades-old statistical methods that have been proven time and time again to be unsatisfactory. They just don't want to learn new methods, or their technical expertise isn't good enough to implement the new approaches. So they just coast. I'm not sure what to do in these situations other than try to set a positive example.
23&Me failed in drug development because they started from a mistaken premise- that the data they collected (very specifically, genotype arrays) would produce data that was correlated with human health closely enough to identify targets (proteins or pathways to disrupt/modify). They have a huge genetics dataset, they don't have a huge genomics dataset, and the underlying relationship between the genome and phenotypes (especially complex disease phenotypes) is a highly nonlinear function.
Environment is important but we could still have huge improvements in medical care using genomics. It's easy to obtain and still has a very strong relationship to disease, and is a problem best solved by deep learning. I've watched people tilt against this windmill for 25+ years and it's kind of funny just how bad our labelling of diseases is.
This is an inaccurate view of what 23andMe is doing.
First, we did not "fail" in drug development; our collaboration with GSK yielded substantially more programs than we had anticipated when we started. The verdict on whether the strategy is successful or not will not be clear for another 5-10 years, because being better at picking targets doesn't shorten the timeline for bringing a drug to market; it mostly means that you should have a modestly higher proportion of programs that end up being successful.
Second, we did not start with a premise that we could, or needed to, do a great job of predicting disease from genetics alone; that is not required to identify good targets. You're correct that we don't have huge genomics datasets; we largely use the same genomics datasets others use. Identifying targets with genetics in the simplest sense requires using genetics to identify an association at a genomic location, and then using genomics for functional interpretation of that association. And having the largest database of genetic associations enables the first step of that.
I'm not sure what you mean by "the underlying relationship between the genome and phenotypes... is a highly nonlinear function". Simple additive models account for most of the heritability of complex phenotypes.
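To make "additive" concrete, here's a toy simulation (my own illustrative numbers, nothing to do with 23andMe's actual data or pipeline): each standardized SNP contributes a small independent effect, and the share of phenotypic variance explained by the summed genetic values comes out near the assumed heritability.

    import numpy as np

    rng = np.random.default_rng(0)
    n_people, n_snps, h2 = 10_000, 1_000, 0.5  # h2: assumed narrow-sense heritability

    # Genotypes: 0/1/2 copies of the minor allele per SNP, then standardized.
    freqs = rng.uniform(0.05, 0.5, n_snps)
    G = rng.binomial(2, freqs, size=(n_people, n_snps)).astype(float)
    G = (G - G.mean(axis=0)) / G.std(axis=0)

    # Purely additive model: phenotype = sum of small per-SNP effects + noise.
    betas = rng.normal(0.0, np.sqrt(h2 / n_snps), n_snps)
    genetic = G @ betas
    phenotype = genetic + rng.normal(0.0, np.sqrt(1.0 - h2), n_people)

    # Variance explained by the additive component recovers roughly h2.
    print(np.var(genetic) / np.var(phenotype))  # ~0.5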
The only thing 23&Me has done really well in stat gen is IBD (which IIRC you worked on).
I was on a team from Google that evaluated 23&Me's data and technology many years ago for a business deal. We already talked about this with Anne, and she confirmed what I said above. It might have been before you were hired, but I'm pretty sure we talked about building a variant store and a transpose service? I stand by my statements (note: I work at a competitor of GSK and I know all about these deals). It's not correct that being better at picking targets doesn't shorten the timeline, either; at least in the opinion of the scientists at my company.
Additive models don't really account for most of the heritability of complex phenotypes; they're just what have worked best and been published so far. Complex phenotypes are nonlinear because the generative processes in biology have feedback, homeostasis, enormous numbers of individual elements, etc.
I did not work on IBD but have been doing stat gen at 23andMe for 13 years. I'd say that anything you evaluated many years ago is pretty irrelevant today as the 23andMe database (and our research effort) didn't reach an interesting size until maybe 2016 and has grown rapidly since then. Our research group has >100 peer reviewed publications, I think some of them are decent, and that's just what we publish. Most of those are genome-wide association studies so a lot might hinge on whether you find those interesting/valuable. I would not say these are methodologically innovative, but I think we do them well.
You may be correct that better targets may be faster to market. We would also hope so: better targets might be more straightforward to validate in the lab and may enable smaller trial sizes, for instance. Timelines are still long and this doesn't alter my statement that the success or failure of our target selection strategy won't be known for some years.
Do you have any evidence for additivity not accounting for most heritability? I'll give one cite (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2265475/). The fact that biology is very complex and non-linear is not inconsistent with most genetic effects being small and approximately additive.
We have never had a trait report on tongue curl, the 15-year-old blog post you linked to says it isn't a Mendelian trait, so I'm not sure where you were going with that. I can tell you that it is somewhat heritable but complex.
Excellent point. I work at a biotech startup where one major focus is "how do we fix the disease labels?". I call it the pyrite problem (your gold-standard data contains fool's gold), but it is known more prosaically as the mislabeling problem.
>As a biologist, one of the things I most appreciate about Sabine Hossenfelder (a physicist) is that she highlights the limits of knowledge in her (and adjacent) fields.
I've only heard her arguing that others are wrong because she's right. That's not a demonstration of the limits of knowledge.
The most overconfident people I know, both in themselves and in science, are PhDs. With egos and arrogance greater than the solar system.
People who go to industry early and see the “real world” tend to have a more tempered expectation of what they can achieve and what science can offer.
>one of the things I most appreciate about Sabine Hossenfelder (a physicist) is that she highlights the limits of knowledge in her (and adjacent) fields.
I suppose you missed her recent video about economics? Her content is getting more and more clickbaity by the day, with some really half-baked data creeping into her videos.
I don't follow Hossenfelder that closely, so I haven't seen any of her videos on economics. I have seen her walk back some of her misunderstanding around global warming, however.
Celebrating someone's achievements in a specific domain is not giving them carte blanche across the board. Personally, I think it more productive to celebrate someone's successes while lamenting their failures. If one thinks that mediocrity is the norm (which it is, by definition), then transcending it, even briefly, is something to celebrate and use as inspiration, even while acknowledging that all individuals have flaws (to some degree).
> Acknowledging the limits of science is not a negative attitude about science, but a positive one. A clear idea of the current limits of science (both theoretically and practically) is instrumental to pushing through them.
"The gods did not reveal, from the beginning, all things to us, but in the course of time through seeking we may learn and know things better. But as for certain truth, no man has known it, nor shall he know it, neither of the gods
Nor yet of all the things of which I speak. For even if by chance he were to utter the final truth, he would himself not know it: for all is but a woven web of guesses"
Ironically, you are taking this study at face value. This study reminds me of the "Republicans tend to be sociopaths more often" study. That ended up being completely refuted.
The study has an interesting approach to avoiding self-reporting level of confidence.
> We propose and use in the paper, an indirect measure of confidence in knowledge, defined as the ratio of incorrect to ‘don’t know’ answers in any knowledge questionnaire, as long as it had the format true/false/don’t know (or similar). The rationale is that an incorrect answer corresponds to an overestimation of one’s knowledge (more details in the main text).
On face value it seems like a good way to avoid the pitfalls of self-report surveys, perhaps also useful in affective modeling.
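In code, the measure is nearly a one-liner. A minimal sketch, assuming responses are coded as "correct" / "incorrect" / "dont_know" (the coding names are mine, not the paper's):

    from collections import Counter

    def confidence_ratio(answers):
        """Indirect confidence: ratio of incorrect to 'don't know' answers.
        An incorrect answer implies the respondent thought they knew;
        a 'don't know' implies acknowledged ignorance."""
        counts = Counter(answers)
        if counts["dont_know"] == 0:
            return float("inf")  # never admits ignorance: maximal expressed confidence
        return counts["incorrect"] / counts["dont_know"]

    # A respondent who often guesses wrong scores as overconfident:
    print(confidence_ratio(["incorrect"] * 6 + ["dont_know"] * 2 + ["correct"] * 4))  # 3.0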
How do they/you remove the confound of becoming more conservative vs less confident, or do they assume that overconfidence and being conservative are opposites?
I might be motivated to be more conservative because I have a reputation to protect, without actually being less confident. That may have just as much of an effect as my being more experienced.
I may also have just learned not to express my overconfidence interpersonally, while still exhibiting it in practice. Expressing appropriate levels of confidence is a skill, not just the absence of overconfidence.
Per my understanding of their method, the participants were just asked to do the survey without being told what was being measured. It's like you said: someone can manage their expression of confidence to a degree (or not, maybe), but privately they may not do that, i.e. when just doing some surveys for a study. One thing I remember from running my own studies like this: usually the participant is subjected to a series of such tests in succession (sometimes called a battery), so they're likely to get fatigued. I think that's a tougher effect that can confound studies, though I am not sure mental fatigue matters much for this study.
Edit: I'd share the article but have no idea where to host it.
If you still act on your confidence but have learned to understate it in conversation, it could be that you gave those same understated answers on the test reflexively, in which case their interpretation is probably flawed.
In this case, additional knowledge wouldn’t actually represent a diminishment of your confidence in a functional sense, but a learned compensation to social feedback.
Isn't participating in such research usually fairly anonymous? And I'm guessing the researchers didn't tell the participants how they were going to interpret "I don't know" vs wrong answers?
> Expressing appropriate levels of confidence is a skill,
That's interesting: confidence can be gamed, just like other traits in personality tests.
One issue though is that in realtime cognition under non-laboratory conditions, "I don't know" often isn't available, unlike when it is explicitly given as an answer.
The number of science fans I've met who sincerely proclaim they possess knowledge of the unknowable is scary.
I'm 100% guilty of this. I try not to be, but hey.
There's different levels:
1. The guy who reads the title of the journal article.
2. The guy who reads the abstract of the journal article.
3. The guy who reads the text of the article.
4. The guy who's read all the other significant research in the field and can put it in context alongside their own personal experience as a practitioner.
I'm #2, and I strive to be #3. It's really hard to be #4, especially for more than one domain.
I'm not a scientist, I'm in digital media and ads, but I've been doing this for almost 20 years and feel confident in my level of technical and business knowledge of the space.
I never realized how wrong papers could be until I read some from my industry and dug into their methodology, in which I'm sufficiently knowledgeable to spot issues. I don't recall specifics, as it was a while ago, but I do recall the distinct impression I was left with, which amounted to "holy crap, I wouldn't let this person pull reports on my ad account, let alone run campaigns" due to the buy-side 101 mistakes they made.
Since then I read all scientific papers with a huge heaping of salt.
There's a reason abstracts exist, and they usually provide enough information as to whether the paper discovered any significant findings, so whether it's worth reading at all.
If you are #1 or #2, and in #2 you include reading not just the paper's abstract but also those of the field's meta-analyses, that already puts you ahead of 95% of the general public.
Not even scientists themselves are full #3 people. It's just impossible, considering the amount of work that exists in the field, and considering that most studies just confirm existing findings from 10-20-50 years ago.
From my personal experience (so no statistical significance), it's not the amount of knowledge but the position. Teachers/professors are a group of people I interact with frequently (both my parents and parents of my friends are university professors/lecturers), and it just happens that the ones I know best are all overconfident.
Basically they think they know everything, to the point of educating the doctors about medicines when they are in the hospital (as patients). I don't know what got them into this but I found the arrogance distasteful.
Maybe it's in the culture, though. Back in the day teachers were respected, and the relationship between teachers and students was somewhat closer to father-children than customer-merchant. Times have changed, but old habits die hard.
What do you mean by educating the doctors about medicines? I've seen people without degrees advocate for themselves as patients and be more knowledgeable about certain medications and treatments than doctors or nurses were.
This is something that always kind of gets me, because when I have something wrong with me I will do a deep dive on it and read whatever studies, publications, and fellow patient accounts I can find.
This often puts me in a position where I feel pretty confident that I have fresher knowledge about it than the doctor before me, someone who in all likelihood has a mild familiarity with it and is filling in the blanks with rote medical intuition.
What I really wish is that I could get doctors to drop the veil and just openly admit what their depth of knowledge on the illness is. I would love it if they googled stuff right in front of me. I absolutely do not expect them to have a full medical encyclopedia in their head, and I am smart enough not to be put off by "I don't know / I'm not sure".
The flip side is imagine a project manager did this to a programmer or architect. The programmer would constantly need to explain why the blog post they read dated 2013 about how hadoop makes everything faster is wrong, or how that AI paper is bullshit and designed to get someone grant money. Or how someone else’s experience with Azure was great because they just shifted all their Windows stuff to it. So that could kind of stuff get frustrating for the expert.
It's interesting to imagine doctors understanding humans and their pathologies nearly as well as programmers understand the systems they work on. My guess is that'd amount to an incredible upgrade.
(Yes that level of understanding is often quite poor in absolute terms.)
But that's part of the issue, I'm not reading a blog post from 2013, I'm reading recent studies in top shelf medical journals with clear conclusions.
I may not understand the minutia of the study, but if the conclusions are "60% of people with X showed the underlying cause Y", and my doctor is saying "I have never heard of any possible association with Y, and Y has no possible connection anyway" then I feel pretty confident that the doctor is out of touch.
Rather, the doctor's take should be "Please send me the studies and I will review them with my actual medical knowledge", or really "I need to review the literature on this patient's illness so I can be up to date while treating them for it".
That’s the promise of AI in medical care, that it can synthesize all known data relevant to observed symptoms and, on average, make a more accurate diagnosis than human doctors.
Like "you don't know as much as I do, I Googled a page and it says..." type. No they definitely know very little about the stuffs. And I personally heard and saw they tried to persuade OTHER patients, of different symptoms to listen to their "theories".
For every 5 people of the “type” you mention, there's 1 person who really does spend the time doing their research and likely knows more about the drug than the doctor in some ways (typically they just lack the context of interactions and side effects). A good doctor will listen to what the patient says and then discuss what they think with an open mind. Good doctors are rare, especially nowadays, with all the arrogance of thinking they know everything, as you say.
Indeed, given the PE-driven profit motive of most (US) hospitals these days, people are increasingly less trusting of their diagnoses and prognoses, and more likely to take an active role in their own care and treatment.
I think with teachers and professors it can be a case of "habit".
They're used to being the smartest person in the room and telling the people around them what's what. This can sometimes rub off on interactions outside work. Same kind of thing can happen with other professions, of course.
I am 100% guilty of this. I try to consistently remind myself that having read mostly pop-sci and the occasional abstract of a scientific paper does not in any way qualify me as either a scientist or even a knowledgeable non-scientist.
I only glanced at the paper, but I wonder how much of this is explained by just random chance?
It looks like they used multiple choice quizzes to determine both knowledge in science and a propensity to respond "don't know" indicating confidence. Any "don't know" response was counted as an incorrect response, while a correct guess increased the participant's "science knowledge".
Thus, a willingness to guess something at random in the multiple choice test would both increase "science knowledge" as well as make the participant appear overconfident.
I mean, the data modeling assumed people guessing were doing so completely at random without eliminating any options (In the section "Simulation").
If I'm looking at the right document, one question was about which city out of Chicago, New York, and LA have the greatest annual temperature range (accompanied with a plot).
Almost all respondents said New York or Chicago, rather than LA or "All equal".
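To see the guessing confound numerically, here's a quick toy simulation (my own sketch, not the paper's actual model from its "Simulation" section): respondents who know nothing but guess among four options more often end up with both a higher "knowledge" score and a higher incorrect-to-don't-know ratio.

    import random

    random.seed(1)
    N_QUESTIONS, N_OPTIONS = 20, 4

    def simulate(p_guess, n_people=5_000):
        """Respondents know nothing; with probability p_guess they guess
        uniformly at random, otherwise they answer 'don't know'."""
        knowledge = ratios = 0.0
        for _ in range(n_people):
            correct = incorrect = dont_know = 0
            for _ in range(N_QUESTIONS):
                if random.random() < p_guess:
                    if random.random() < 1 / N_OPTIONS:
                        correct += 1
                    else:
                        incorrect += 1
                else:
                    dont_know += 1
            knowledge += correct
            ratios += incorrect / max(dont_know, 1)  # avoid div-by-zero for all-guessers
        return knowledge / n_people, ratios / n_people

    for p in (0.0, 0.5, 1.0):
        print(p, simulate(p))  # more guessing -> higher "knowledge" AND higher ratio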
I think lifespan plays an important part in this. I know a few aging scientists, some with extraordinary achievements, who, having accumulated a lifetime of wisdom, are finally entering the "We don't really know" phase about every question.
Once the activation energy can be maintained (basic skill and consistency), the learning curve is steepest at the beginning of any given field; the setbacks are still rare and surmountable (and mostly just add to the confidence). But the human mind is only able to suffer a finite amount of beating, and at some point your perceived competence level will collapse dramatically, overcorrecting in the opposite direction.
However, the reward structure of overconfidence can be used to maintain momentum, and once you hit the hard wall of insurmountable incompetence, you can congratulate yourself that you actually walked the path of knowing practically nothing.
It's the hardest thing to fully admit one's own ignorance, and the easiest to see it all around you. It's only when the giant rationalization machine buzzing inside is finally exhausted that, for a brief moment, the crushing vastness of the unknown pours in.
So in a way overconfidence is just unused fuel. Use it wisely.
I've met both overconfident PhD holders and intermediate-science-knowledge "flat-earthers" (conspiracy theory propagators). Both are really hard to talk to, and I couldn't convince either that they were wrong. I observed both being wrong. And meanwhile, some of the conspiracy theories became true...
It's really hard to be sure of anything these days.
I can imagine it's not just restricted to scientific knowledge.
And having met many of these types of people, I find they all seem to share one trait: definitiveness.
Everything is black/white and there is always a "right" answer or approach, whereas people with little knowledge and those who are experts tend to be nuanced and flexible.
Like I have a bachelors degree in engineering. Is that intermediate or advanced? I feel like saying my science knowledge is anything beyond intermediate is a bit of an overstatement personally.
I have one in physics, but I have also taken rather a lot (given the degree) in biology, chemistry, civil engineering, and electrical engineering, and went some places in math not usual for the degree either. And I've worked in IT since forever. There's a lot of stuff where I take a glance at it and realize I'm just looking at a single hull plate on a battleship.
What I am getting at is that if you have just intermediate knowledge in a bunch of places, I think it lends itself to sensing that you're just a paramecium stuck to the side of some N-dimensional construct. There's so much. I had a professor who was the expert in the second excited state of Helium-3. That was his thing. Just a single needle in the whale-sized blowfish of physics.
Each scientific field has specialised and become so dense with knowledge that even new graduates in that field would barely be classed as intermediate.
I didn't find out what standard they use, but personally I'd say intermediate sounds about right. I also have an engineering degree, and while I know more than the average bear in many scientific disciplines, I couldn't say that any of it is advanced.
I'm thinking of times where I went to the library to dig deeper on a topic and discovered a huge and complex topic just laying in wait.
[9]: Fernbach, P. M., Light, N., Scott, S. E., Inbar, Y. & Rozin, P. Extreme opponents of genetically modified foods know the least but think they know the most. Nat. Hum. Behav. 3, 251–256 (2019). https://www.nature.com/articles/s41562-018-0520-3
(useful to note all those surveys are 2019 pre-pandemic, before there was severe partisanization of the phrase "trust in science". I wonder how hard it would be to construct a neutral methodology post-pandemic, now that even the basic vocabulary itself is loaded with associations.)
Intermediate knowledge of what's being tested, everyone is the most overconfident on subjects they have intermediate knowledge about.
If you test high-school concepts, then being shaky about high-school concepts makes you the "intermediate" group. People who have no clue about the high-school material and people who understand it well will both be less overconfident on that test.
Or if you talk about college algorithms, then that is the basis for intermediate knowledge. An average comp sci grad will be the most overconfident; a person who never studied algorithms and a person who has taught algorithms for years will both be less overconfident on that test.
I think this is also a root of science denialism (e.g. anti-vaxxers). I have some antivax family, and the root of their opinion is that the scientists changed their advice as the pandemic developed!
My theory is that in primary and high schools, science is taught as a series of immutable "facts". You get tested on remembering them, and when you do an "experiment" you get marked down if your reading of the litmus paper isn't 4. No sense of doubt, and no exposure to the epistemic boundaries of controlled trials, measurements, etc.
If your takeaway is that scientists uncover and explain immutable facts, hearing them say one thing in January and another in June can in fact shake your beliefs in these scientists.
In high school I was lucky enough to join a program run by the CSIRO (Australia's government-funded scientific research body), where students would assist scientists on experiments they were conducting.
I was tasked with measuring the water resistance of a proposed environmentally friendly paper coating.
I asked my supervising scientist what numbers I should expect to get back from the experiment. I distinctly remember my surprise and thrill when he looked at me and said "I have no idea, that's why you're running the experiment".
It was such a new concept to me! A scientific experiment where the correct answer wasn't written in a teacher's lesson guide, or even known to anyone!
The science mouthpieces (aka media and government speaking on behalf of scientists) were insisting they knew the immutable facts and any questions or doubt in those immutable facts were heretical. Anyone who has an appreciation of the evolving nature of scientific research should immediately see the red flags in that.
"Antivax" is not a position of anti-science, it's a slanderous label for an alternative theory with its own compelling evidence.
Especially when dishonestly applied to so many people who had all their other shots and maybe some extras, but just weren't on board with this one brand new one.
This has been my experience, too, as well as people who understood The Science perfectly and that the authorities were just taking a best guess early on, which set them further against any kind of mandate. (Mostly older, conservative physicians with an "it'll be fine" attitude)
The Science didn't change that much (viz: masks in healthcare since forever), but the flip-flopping messages ("don't hoard masks; they won't save you!", "wait, no - everyone needs a mask!"), which were really less about The Science than about different concerns (mask effectiveness, availability), were just the "I told you so" they were looking for.
Interesting. I think I've seen research showing that when it came to covid, both the uneducated and the highly educated were making wrong decisions more often. The sweet spot was around a master's degree: those people knew enough to know that they didn't know enough to wing it themselves, and deferred to expert opinion. Both the under- and overeducated thought they knew better.
Maybe that's why all the major materials on climate change I find are sociological, focusing on people's perception of it and on agreement between experts rather than the actual subject - it's so I don't get intermediate knowledge and become skeptical, to my own dismay.
Well, Nature apparently won't publish complex articles on topics for reasons, thus causing these very issues of "intermediate" science knowledge. I wonder what was cut out to get this paper through.
I am not sure why we have all these comments about confidence/overconfidence of scientists and the limits of science.
The article has some very different findings. The crucial paragraph from the abstract is:
>We find a nonlinear relationship between knowledge and confidence, with overconfidence (the confidence gap) peaking at intermediate levels of actual scientific knowledge. These high-confidence/intermediate-knowledge groups also display the least positive attitudes towards science.
It seems to me that it is essentially a refinement of the Dunning-Kruger effect. It also matches a previous study I remember, which found that science scepticism and many conspiracy theories around science are most prevalent within certain engineering disciplines (I can't find the study right now, but will update with a link once I do).
It is interesting that someone mentioned Sabine Hossenfelder as a positive example, because I find much of her recent content peddles to exactly the crowd who has intermediate knowledge of and very negative attitude toward science. In particular she often comments on topics where she herself has very little understanding (I know because I have seen it for topics where I am an expert) and just pushes a scepticism opinion without much understanding, but acting like she is an authority.
> The article has some very different findings. The crucial paragraph from the abstract is:
It's kinda funny how only the abstract is referred to when trying to paraphrase the findings of the paper. I would say that the abstract by itself is more like an opinion. The real hard data backing up the abstract should be in the article. But it can't even be faulted, because this "science" is behind a pretty hefty paywall: $40 for the full PDF and $10 for a 48-hour rental. It's basically the same price as a full movie.
The abstract is not an opinion, it is a summary of the key methods and findings written by the authors. It is the perfect place to get a short quote to outline the main results of the paper.
Now if we wanted to investigate the methods and results in detail, we certainly would have to read the full paper, however the main findings will not differ from what is in the abstract. You might find that the methods (and hence results) are not valid, and can come up with a detailed rebuttal, for that you certainly would need to read the full paper. However, I assumed here that the findings are valid, because even though I have access to the paper, this is well outside my area of expertise so I am not well placed to investigate the claims in detail and rather refrain from that.
You are right, "opinion" was also the wrong word. I just felt that the shortness of the abstract carries some of the same dangers of details and context being lost as reading only a headline. For example, one wouldn't use the valuable abstract space to reiterate the shortcomings of one's method, even if they are listed in the paper.
This shall be henceforth known as "The Dunning-Kruger paper". ;)
Certainly, overconfidence and negative attitudes cloud public debate and popularization of conclusions. I've unfortunately come across narcissistic CEO/VC types who believe they have "all" of the answers, when instead they may have a corner of the Rosetta stone or a piece of lint.
STEM knowledge must be tempered with data and supported conclusions over non-experimental biases. The larger issues are behavioral pathologies when people appear to believe they have a superior special monopoly on knowledge, experience, or being. Another pathology is anti-intellectualism, which runs deep in America.
The article is paywalled but the research group has a 2021 arxiv publication on the same topic: "A little knowledge is a dangerous thing: excess confidence explains negative attitudes towards science"
It's a survey-based study but the obvious thing to consider is, what do they mean by 'negative attitudes towards science'? Are they talking about the scientific process itself, i.e. experiment and observation coupled to theoretical modeling as the basis of discovering how our universe functions? Or are they talking about negative attitudes towards the 'science-based' pronouncements of governmental and academic institutions?
Certainly the modern scientific process takes place at such a level of specialization that even working scientists in one field are usually unable to judge the quality of results obtained in another field without doing a lot of time-consuming research. But if a long record of failure of peer review exists (and it does), then it shouldn't be surprising when people lose faith in academic and governmental institutions. I'd guess they still believe that the scientific process itself is valid; it's just that the received wisdom of the white-robed anointed priesthood is no longer taken at face value.
There are many reasons for this. For example, while science may have been viewed by all as wonderful in the 1950s, many discoveries (environmental carcinogens, fossil-fueled global warming, etc.) have upset major economic and institutional powers, leading to coordinated attacks on the reliability of science in major media outlets. Then there's the long record of pharmaceutical skullduggery: the push to prescribe opiates for just about any condition, leading to an addiction epidemic, and the failure of a wide variety of science-approved medications to live up to claims and/or the production of negative side effects (Vioxx etc.). And as for the vaccine controversy, yes, it was a terrible idea to put organometallic preservatives in multi-use bottles of vaccines, and yes, it was done to cut costs, but the claim that it led to an epidemic of autism isn't well-supported; there are many more plausible industrial sources of heavy metals to blame for high childhood exposures, but those aren't as convenient for class-action lawsuits due to vaccination records, etc.
This doesn't mean that most science isn't fairly reliable, but the glaring failures are what make the headlines, and there have been quite a few of them.
If we really want to regain public trust in academic institutions, divorce proceedings aimed at kicking corporate interests out of the academic sphere will have to be initiated, meaning, for example, no more exclusive private rights to NIH-financed inventions and no more revolving doors between academic institutions and pharmaceutical executive boards. Don't hold your breath; we live in an era of systematic institutional corruption that Trofim Lysenko would have fit right into.
I do not know the reference. Is it that people claim the middle lanes are faster?
But I do know that a lot of people across a lot of topics overestimate their ability because they don't know what they don't know. Science as a topic isn't special in this regard.
Jr devs are great examples. Your code is garbage and I am brilliant, just let me change this thing here and OH MY THE SYSTEM IS BROKE PLEASE HELP ME! Intermediate knowledge and negative attitudes.
> We find a nonlinear relationship between knowledge and confidence, with overconfidence (the confidence gap) peaking at intermediate levels of actual scientific knowledge.
It is the same result, just that they added the "unskilled" category. People who never drove a car will rank themselves as the worst drivers, and people who are experts rank themselves highly, so the most overconfident are those who can drive but drive poorly, i.e. "intermediate skilled" relative to the whole population.
It's obvious from the Abstract that they are not referring to the Dunning-Kruger Effect as it is NOT mentioned in the Abstract, which is the highlight of the paper.
Actually who the hell knows when the paper is essentially behind a paywall. The Abstract gives nothing away about what they have found.
I blame Carl Sagan[1]. Carl Sagan was the Martin Luther of science communication. He showed us how you could figure out the size of the Earth just by measuring shadows at noon. He showed a vessel full of gases creating the precursors of life with just a little electricity. He reduced the history of the universe to a 1 hour PBS show. In short, he made science understandable to anyone.
But just as the Reformation led to faith-healing and megachurch preachers, Sagan's teachings made it seem as if anyone could be a scientist. As long as a chain of reasoning made sense to you, then it was true!
Take something like whether the earth goes around the sun. Sagan showed how Mars sometimes moves backwards in the sky, and he implies that this can only happen in a heliocentric model. In reality, of course, you need much more evidence to come up with the correct model. It wasn't until Newton could predict orbits from simple equations that there was no longer any doubt.
But watching Sagan you get the idea that if you can just come up with ONE piece of evidence for whatever you believe, then that's enough to prove it! This is why flat-earthers rely on meme-like "evidence" that takes effort to refute.
In the end, it's a trade-off. Do you want people to just trust authority or do you want people to think for themselves? If we want the latter (which I think we do) then we have to put up with those who are wrong.
-----
[1] I love Carl Sagan. He was a huge influence on me, and I think he benefited society, even if I think his teachings also (ironically) stoked some pseudoscience.
It makes me think of the It's Always Sunny in Philadelphia episode where Dennis hypocritically appeals to authority in an effort to disprove creationism. The valuable point the skit makes is that there is a sort of “cult of science” where people mistake faith for reason, and you have a bunch of science “believers” who don't realize it. Of course this is not what science really is.
Personally, I have always been irked by phrases like “Science says”, as if science itself is a centralized institution rather than a collection of people applying the scientific method.
In a similar vein, I sometimes see posts of some natural phenomenon (like ants changing colors after drinking dyed water) with captions like “Isn't science cool?” I'm still trying to figure out why this irks me, but it's weird to imply science somehow caused this.
And she remarked, "isn't evolution amazing! It figured out so many things"
When someone says "Isn't science cool?", they are saying lots of things
* I appreciate facts/truth
* Knowing things is fun
* Figuring things out is fun
* There is so much we don't know about the universe
I think you are putting too much semantic meaning into the grammar they are using to express their sentiments. Same goes for "science says". It isn't enjoyable to discuss things and at the same time qualify every single statement.
It's nonsensical, not minor. Electrostatics exists without science. The point is that science is a tool for learning the truth, not the cause, nor the phenomenon itself. It would make more sense to say “nature is cool.”
It's like looking at a painting and saying “vision is so cool.” Except worse, because at least in that example you are actually using the lens (vision) to look at the phenomenon.
> The valuable point the skit makes is that there is a sort of “cult of science” where people mistake faith for reason and you have a bunch of science “believers” who dont realize it. Of course this is not what science really is.
Whereas with religion, the shortcomings of its delusional followers are attributed to it.
Science has got to have the shrewdest marketing department in the history of religion.
People trust "authority" because they don't have time, resources, training to build the body of knowledge themselves. It is not feasable neither desirable for everyone to know about everything. Societies build chain of trust and that's why institutions are important. Think of authority as a function call in a big framework which provides an answer with reasoning for that answer. In my previous function calls, the authority provided answers properly. If I get a wrong answer, there is a "bug" somewhere in the system.
>> But just as the Reformation led to faith-healing and megachurch preachers
> Quite a leap. Break that one down...
Prior to the Reformation, you needed not just Truth but Authority as well. There was an actual hierarchy, and you respected that hierarchy.
Luther said that any man could connect to God, be a spiritual leader, assemble a congregation. And so the number of religious sects increased from 1 to much more than 1, including for example faith healers and megachurches.
There's no doubt that the Reformation resulted in a proliferation of Christian sects, but attributing megachurches and faith healing to it seems bizarre to somebody who is not a Christian. The Christian bible says that Jesus himself was a faith healer. Innumerable Catholic saints have been faith healers, and faith healing in the form of exorcism is still taught as real by the Roman Catholic Church to this day. And megachurches? No Christian church is more 'mega' than the RCC. They share all the defining characteristics: Huge gaudy megastructures? Yes, cathedrals. Money flowing up through the organization? The Vatican is decked in gold. Hierarchical with the preacher at the top living opulently? This describes the Roman Catholic Church to a T. And Jesus himself is said to have preached to crowds of hundreds or thousands.
The Reformation certainly led to a splintering of Christian sects, but it did not introduce faith healing and megachurches. These were already ancient, arguably originating, characteristics of Christianity.
Dude, the reference you provide does not support your claim that "There are fundamental, objective principles." Not sure if you are trying to be subversive or just lazy (or both).
This study basically describes 99% of people here, lol. Because you know how to write code, you're suddenly an expert not only at that but also at economics, medicine, international relations etc etc. Every field is ready for you to disrupt it. The hubris is astonishing.
One thing I've realized about this site is the insane number of software engineers who have a disdain for the medical field. It comes up all the time deeper into comment threads, where people say medical school entrance exams being so hard is a form of "gatekeeping" - which it is, on purpose, but they're using it as a pejorative to tell themselves "I could've been a doctor too if they didn't make it so purposefully hard."
The other very, very disturbing trend is the "makers" who denigrate medical devices as overly complex. One thread I'll never get over was on old pacemakers. The prevailing sentiment was that they're designed for failure because "evil medical industry profit", not that, you know, they wear out.
The other part felt like watching the theory of memetics demonstrate itself in real time. One person commented that the single small mechanical component is rated to actuate 5,000,000 times or something like that. Someone dismissively said "Well yeah but Adafruit keyboard switches are rated at 2-3 million presses, they mass-produce them, thus it can't be that hard" (Adafruit's data sheet says 1,000,000, btw). Very quickly, people picked up that line and repeated that the actuator in a pacemaker isn't "actually that complicated", citing the 2-3 million keycap example. I still see that keycap "argument" pop up all the time.
BTW: Pacemakers are actually rated for about 100,000,000 stimuli cycles.
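For scale, some back-of-envelope arithmetic on the ratings quoted above (a sketch assuming continuous pacing at a resting 60 bpm, which oversimplifies - real devices pace only when needed):

    # Rough arithmetic on the cycle ratings quoted above, assuming
    # continuous pacing at a resting 60 beats per minute (a deliberate
    # oversimplification; real pacemakers pace only a fraction of beats).
    beats_per_year = 60 * 60 * 24 * 365  # 31,536,000 beats/year at 60 bpm
    for rating in (1_000_000, 3_000_000, 100_000_000):
        years = rating / beats_per_year
        print(f"{rating:>11,} cycles ~ {years:5.2f} years of pacing")
    # A 1-3 million-cycle keyboard switch would be exhausted in a few
    # weeks; 100 million cycles is roughly three years of nonstop pacing.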
I wanted to write a very similar comment, except about cosmology instead of medicine. The number of people with high-school-level physics knowledge who think they know better than almost all the luminaries of the field combined, just because their gut feeling is telling them something about dark matter, is astonishing.
I mean, I barely know about pacemakers. Just that I have a congenital heart condition and will need one in a decade or so, and have taken a hobbyist's interest in learning about them and following the field. But the level of arrogance and the completely wrong "facts" astonished me. The bit that moves is a few strands of hair thick. In what world does "I built my own keyboard" translate to "so how different could a pacemaker be?"
The disdain extends to all other fields because they were told software was going to eat the world - they felt like special snowflakes entitled to call everyone else special snowflakes, and it went about as well as you'd expect.
Economics is generally regarded as a social science, although some critics of the field argue that economics falls short of the definition of a science for a number of reasons, including a lack of testable hypotheses, lack of consensus, and inherent political overtones. Despite these arguments, economics shares the combination of qualitative and quantitative elements common to all social sciences. [0]
Yes, economics is a science, and the people who don't understand that and love to point out how economists are never right in their predictions about the economy fundamentally have no idea what the field is even about.
Economics is not about predicting the stock market or economy as a whole, nor is it about coming up with excuses for free market capitalism and neoliberal agendas. It's about studying things like market failures, tax incidence, deadweight losses etc etc etc and coming up with solutions.
So basically creating a strawman toy model of the actual complex economy and solving it. That's what theoretical physicists do initially, but once the model is developed they get to the prediction stage. Just as the rational-agent axiom is false, I wonder what other badly formulated axioms economics harbours.
> So basically creating a strawman toy model of actual complex economy and solving it
Uh, no. Read up on what economists did to empirically study and fix the kidney market and how many lives that has saved. (before you start typing, no, it wasn't to sell kidneys to the highest bidder)
> rational agent axiom
If you knew anything about economics, you'd know that no economist subscribes to any "rational agent axiom". But you have made up a strawman in your head so enjoy poking sticks at it.
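For readers curious what "fixing the kidney market" looked like mechanically: the core of paired kidney exchange is a matching problem on a directed graph. A minimal sketch follows; the compatibility data is invented, and real programs (e.g., the market-design work of Alvin Roth and colleagues) use much richer models and longer chains:

    # Toy sketch of paired kidney exchange as cycle-finding in a digraph.
    # Each key is an incompatible patient-donor pair; an edge A -> B means
    # A's donor could donate to B's patient. A mutual edge is a 2-way swap.
    # Compatibility data here is invented for illustration.
    compatible = {
        "A": {"B"},
        "B": {"A", "C"},
        "C": {"B"},
    }

    def two_way_swaps(graph):
        """Greedily pair up mutually compatible pairs (2-cycles)."""
        matched, swaps = set(), []
        for a, targets in graph.items():
            if a in matched:
                continue
            for b in targets:
                if b not in matched and a in graph.get(b, set()):
                    swaps.append((a, b))  # each donor gives to the other patient
                    matched.update({a, b})
                    break
        return swaps

    print(two_way_swaps(compatible))  # [('A', 'B')]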
Not everyone who posts here is a coder, but I do agree with you that the narcissism and astroturfing on HN are high.
I suspect the high-confidence/narcissist combination is especially common here, this being an industry/entrepreneurial forum where being confidently wrong is treated as a positive trait.
The astroturfers appealing to the confidently wrong go unnoticed by the majority.
The abstract of the study states that the metric for overconfidence they developed is "the tendency to give incorrect answers rather than ‘don’t know’ responses to questions on scientific facts". I think that's true. I maintain that being confidently skeptical of confident claims and research is not the same thing.
> This study basically describes 99% of people here, lol. Because you know how to write code, you're suddenly an expert not only at that but also at economics, medicine, international relations etc etc. Every field is ready for you to disrupt it. The hubris is astonishing.
Aka "Engineer's disease." There's an overestimation of one's personal competence and the effectiveness of the tools and mental models you're most familiar with. Basically think asshole software engineer who tells everyone they're dumb and should just solve the problem like they're writing software.
Indeed. Intelligence is just I/O and gradient descent. The Universe is code. etc.
Coupled with an asocial, greed-driven, and essentially anti-humanist agenda, it's just the worst possible moment (severe environmental stresses across the planet) to hijack whatever potential digital tech offers...
Cross-field contamination is a fantastic product of our interconnected world, but it's more fun to shit on people when they fail. I'm sure some folks here would've loved to tell da Vinci to stay in his lane.