
Artur Nilsson's research blog

What is science anyway? On trust in science, critical thinking, and the Swedish covid response

Philosophy and meta-theory Posted on Wed, December 30, 2020 18:30:29

Trust in science is a central pillar of modern democracies. Reliance on the expertise of scientific authorities is a powerful heuristic, because it is impossible for one person to be an expert on everything. This heuristic works best if we trust the scientists who are the leading researchers on the topic of interest. Nevertheless, trust in science should never be unconditional. The notion of the scientist as an unassailable authority is antithetical to the very idea of science. Science is the best tool we have for understanding our world, but individual scientists are not flawless arbiters of the truth—they are often wrong and sometimes irrational (e.g., subject to groupthink, emotional conviction, and reasoning biases). A scientific attitude must therefore incorporate a readiness to think critically when there are grounds for doing so.

Critical thinking is an epistemic virtue as long as it is grounded in rational argumentation and analysis of evidence. Evidence-based critique should never be dismissed just because the critic is not an authority on the topic of interest—attacking the epistemic authority of the critic is the pseudo-scientist’s game. The perspectives of epistemic outsiders might contain insights that could inform even the expert’s understanding of his or her subject area. Whatever grains of truth the outsider’s critique might contain should be harvested. Even if the critique turns out to be wrong or ill-founded, responding to the critique might produce a more nuanced understanding of the topic in question. Critical exchange in this sense is part of the very fabric of science.

Trivial as it may seem, this point is often not well understood. High levels of trust in science are not always coupled with an understanding of the scientific attitude to the pursuit of knowledge. The Swedish covid response provides a good illustration of this.

The Swedish covid response and the case of face masks

Over the past year, the Swedish Public Health Authority has made numerous severe misjudgments concerning the spread of the coronavirus SARS-CoV-2 in Sweden and the efficacy of preventive measures. Scientists make mistakes; to err is human. But several things are notable. First, representatives of the Swedish Public Health Authority have several times casually dismissed critiques from a plethora of highly qualified virologists, epidemiologists, immunologists, mathematicians, and other academics, both nationally and internationally. Second, they have consistently erred on the side of underestimating the dangers of SARS-CoV-2 and the need for precautionary measures, and they have failed to learn from their mistakes. Third, some (but of course far from all) of the claims they have made have been highly questionable and based on fallacious arguments.

In spite of this, few critical questions have been asked by journalists and science reporters. Many journalists were initially more concerned with dismissing or ridiculing critics (e.g., calling them “hobby-epidemiologists”), and later on with which scientific authorities should be trusted, than with asking questions about evidence and exposing vacuous claims and blatantly fallacious arguments.

The claims about face masks made by the Swedish state epidemiologist Anders Tegnell, among others, are an interesting case in point. Even as scientific evidence for the efficacy of face masks in fighting the covid pandemic has grown, Tegnell has persisted in claiming that the evidence is in fact “weak” and that the studies that provide evidence for the efficacy of face masks have “problems”. It is difficult to know what exactly these vague assertions are supposed to mean, because no reporter has asked him, but I am guessing that he is referring to the fact that there is no positive evidence from a fully randomized, controlled, double-blinded trial with tens of thousands of participants yet. Tegnell has repeatedly claimed that the randomized controlled trial conducted by Bundgaard et al. (2020), which did not provide clear evidence for the efficacy of face masks in Denmark, is the best study on this topic so far. There are three reasons why the invocation of this study is grossly misleading (several other scientists have already commented on this, including the statistician Olle Häggström here):

  1. This study only investigated whether face masks protect the person who wears the mask from being infected, but research suggests that face masks reduce the transmission of viruses mainly by preventing those who wear them from infecting others (although some masks also protect the wearer)—that is, the central hypothesis was not tested.
  2. This study was conducted in the late spring of 2020, when the transmission rate had already declined a great deal, presumably because of the seasonality of coronaviruses. The potential for aerosol transmission over longer distances is much greater in the winter, and face masks therefore have greater potential utility at this time of year.
  3. This study only had the statistical power to detect extremely large effects, which were in turn wildly unlikely in the first place given the choice of outcome measure and the timing of the study (see the rough power calculation sketched after this list).
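
To make the third point concrete, here is a minimal sketch of a power calculation for a two-sided two-proportion z-test under a normal approximation. The inputs (roughly 3,000 participants per arm and an infection rate of about 2% in the control arm) are my own rough assumptions loosely matching the published trial, not figures taken from Bundgaard et al.'s analysis plan.

```python
# Minimal sketch (my own illustration, not the trial's analysis plan):
# power of a two-sided two-proportion z-test under a normal approximation.
from scipy.stats import norm

def two_prop_power(p_control, relative_reduction, n_per_arm, alpha=0.05):
    """Approximate power to detect a given relative risk reduction."""
    p_mask = p_control * (1 - relative_reduction)
    diff = p_control - p_mask
    p_bar = (p_control + p_mask) / 2
    # Standard errors under the null (pooled) and under the alternative
    se_null = (2 * p_bar * (1 - p_bar) / n_per_arm) ** 0.5
    se_alt = (p_control * (1 - p_control) / n_per_arm
              + p_mask * (1 - p_mask) / n_per_arm) ** 0.5
    z_crit = norm.ppf(1 - alpha / 2)
    return norm.cdf((diff - z_crit * se_null) / se_alt)

# Assumed inputs: ~2% infection rate in the control arm, ~3,000 per arm
for reduction in (0.5, 0.33, 0.2):
    power = two_prop_power(0.02, reduction, 3000)
    print(f"{reduction:.0%} relative reduction: power = {power:.2f}")
# Prints roughly 0.89, 0.51, and 0.21: decent power to detect a halving
# of risk, but very little power for smaller, more realistic effects.
```

A null result from a trial with this power profile tells us little about moderate protective effects, let alone about source control, which the trial did not measure at all.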

There are many other studies that have provided evidence that face masks reduce SARS-CoV-2 transmission. For instance, a German natural experiment by Mitze et al. (2020) suggested that the introduction of face mask regulations in different German regions produced a 45% reduction in the number of new infections.
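
To illustrate the logic of such a natural experiment, here is a toy difference-in-differences calculation: compare the change in infections in regions that introduced mask regulations with the change in otherwise similar regions that did not, using the latter as a proxy for the counterfactual trend. Note that Mitze et al. actually used a more sophisticated synthetic-control method; the numbers below are invented purely for illustration.

```python
# Toy difference-in-differences sketch. All numbers are invented;
# Mitze et al. (2020) used a synthetic-control method, not this.
daily_new_cases = {
    # region type:  (before mandate, after mandate)
    "mandate":      (100, 60),
    "no_mandate":   (100, 95),  # proxy for the counterfactual trend
}

treated_change = daily_new_cases["mandate"][1] - daily_new_cases["mandate"][0]
control_change = daily_new_cases["no_mandate"][1] - daily_new_cases["no_mandate"][0]

# The mandate's estimated effect is the excess change in treated regions
effect = treated_change - control_change
print(f"Estimated effect of the mandate: {effect} cases/day")  # -35
```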

It is true that we do not have conclusive evidence that face masks effectively reduce transmission. A completely ideal controlled experiment in natural settings with very high statistical power is yet to be reported. But we do not have this kind of evidence for the effect of smoking on lung cancer or the effect of hand washing on SARS-CoV-2 transmission either. We still have good reasons to believe that all these effects exist. For instance, tobacco smoke causes cellular changes that are known to be associated with lung cancer, soap is known to dissolve the lipid envelope of viruses, and face masks have been known to block around 85–95% of virus-containing droplets and aerosols since the beginning of the current pandemic.
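
A back-of-the-envelope sketch shows why even imperfect filtration can plausibly matter. Under the crude assumption that infection risk scales with the fraction of virus-laden particles that gets through, source control and wearer protection combine multiplicatively when both parties wear masks. The filtration fractions below are illustrative placeholders, not measured values.

```python
# Back-of-the-envelope sketch under a crude linear-dose assumption.
# The filtration fractions are illustrative placeholders, not data.
def relative_risk(source_block: float, wearer_block: float) -> float:
    """Transmission risk with masks relative to no masks at all,
    when both the infectious and the exposed person wear one."""
    return (1 - source_block) * (1 - wearer_block)

print(relative_risk(0.85, 0.30))  # good source control, modest wearer protection -> ~0.11
print(relative_risk(0.50, 0.50))  # even mediocre masks on both ends -> 0.25
```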

Some skepticism about new findings, and some epistemic conservatism, is understandable and can, to some extent, be warranted. But the same standards of evidence should be applied to new ideas regardless of whether they are consistent with your own preconceptions. Anders Tegnell has made numerous claims with little or no evidence to back them up. For instance, he has claimed that the Swedish recommendations were effective in reducing SARS-CoV-2 transmission in the spring of 2020. By his own standards of evidence (when discussing research on face masks), there is no evidence whatsoever for this claim—there is not even a control or reference point, and no attempt to rule out alternative explanations such as disease seasonality. He has also claimed that Sweden has done quite well in handling the pandemic based on anecdotal comparisons with other countries, again with no scientific grounds—for instance, without attempts to statistically account for differences between the countries. There is, as I mentioned, probably not even strong evidence for the efficacy of hand washing according to the evidentiary standards Tegnell applied to research on face masks, even though the Public Health Authority has recommended hand washing from the start while refusing to recommend the use of face masks.

Public opinion on face masks in Sweden has begun to shift recently, and the Public Health Authority has reluctantly begun to recommend (rather than merely “not forbid”) face mask usage during rush hour on public transportation and in hospitals. Yet in crowded malls, most people use hand sanitizer incessantly but wear no face mask, although current research suggests that widespread face mask usage would be far more effective in combating SARS-CoV-2, which is now known to be airborne. It is likely that the deficient scientific thinking on this and other issues among representatives of the Public Health Authority has caused great harm.

Final thoughts on trust in science

Science should have a very prominent position in a modern, secular, democratic society. Trust in science needs to be high for a society to thrive. But our trust in scientists should never be unconditional. Our allegiance should ultimately be to scientific argument, evidence, method, and genuine expertise rather than to the provincially sanctioned authorities of the day. Trust in science is not deference to authority or the worship of anything. New generations of students and science journalists should be taught to distinguish genuine science from its pale imitations, and to distinguish genuine evidence-based critiques of scientific ideas from fake news, conspiracy theories, crackpot ideas, and ideological fanaticism.



Meta-theoretical myths in psychological science

Philosophy and meta-theory Posted on Wed, November 28, 2018 02:05:00

There is a lot of talk of “meta science” in psychology these days. Meta science is essentially the scientific study of science itself—or, in other words, what has more traditionally been called “science studies”. The realization that psychological science (at least as indexed by articles published in high-prestige journals) is littered with questionable research practices, false positive results, and poorly justified conclusions has undoubtedly sparked an upsurge in this area.

The meta-scientific revolution in psychology is sorely needed. So far, however, it has really been a meta-methodological revolution. It has done little to rectify the lack of rigorous meta-theoretical work in psychology, which dates back all the way to the behaviorist expulsion of philosophy from the field (for example, see this paper by Toulmin & Leary, 1985). Psychology is today, as the philosopher of psychology André Kukla has remarked (in this book), perhaps more strongly empiricist than any scientific field has been at any point in history. Although many researchers have an extremely advanced knowledge of statistics and measurement, few have more than a superficial familiarity with contemporary philosophy of science, mind, language, and society. When psychologists discuss meta-theoretical issues, they usually do so without engaging with the relevant philosophical literature.

I will describe three meta-theoretical myths that I think are hurting theory and research in psychology. This is not a complete list. I might very well update it later.

1. Scientific explanation is equivalent to the identification of a causal mechanism

This is by all accounts an extremely common assumption in psychological science. In this respect, psychological theorizing is remarkably discordant with contemporary philosophical discussions of the nature of scientific explanation. While there can be little doubt that mechanistic explanation is a legitimate form of explanation, the notion that all scientific explanations fall (or should fall) into this category has not been a mainstream view among philosophers for several decades. Even some of the once most vocal proponents of explanatory reductionism abandoned this stance long ago. One of today’s leading philosophers of science, Godfrey-Smith (2001, p. 197), goes so far as to assert (in this book) that “It is a mistake to think there is one basic relation that is the explanatory relation . . . and it is also a mistake to think that there are some definite two or three such relations. The alternative view is to recognize that the idea of explanation operates differently within different parts of science—and differently within the same part of science at different times.”

Psychology is particularly diverse in terms of levels of explanation, ranging from instincts and neurobiology to intentionality and cultural embeddedness. For example, functional explanations (the existence or success of something is explained in terms of its function) are very popular in cognitive psychology. In my own field, personality and social psychology, a lot of the explanations are implicitly intentional (reason-based) explanations (a mental event or behavior is explained in terms of beliefs, desires, goals, intentions, emotions, and other intentional states of a rational agent). The reasoning is often that it would be rational for people to act in a particular way (people should be inclined to do this or that because they have this or that belief, goal, value, emotion, etc.) and that this explains why they de facto tend to act in this way. Even though researchers seldom recognize it themselves, this is not a mechanistic explanation. The cause of the action is described in intentional rather than mechanistic terms. Not all causal explanations are mechanistic explanations (a very famous essay by the philosopher Donald Davidson that first made this case can be found here).

It is of course possible to argue that these are not real scientific explanations—that the only real scientific explanations are mechanistic. The important thing to realize is that this is akin to saying that much, perhaps most, of psychological research really is not science. In fact, even the so-called causal mechanisms purportedly identified in psychological research are generally quite different from those identified in the natural sciences. Psychological research is usually predicated on a probabilistic, aggregate-level notion of causality (x causes y in the population if and only if x raises the probability of y in the population on average, ceteris paribus) and a notion of probabilistic, aggregate-level mediation as mechanistic explanation, while the natural sciences often employ a deterministic notion of causality.
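
For readers who prefer notation, the parenthetical definition can be rendered roughly as follows. The interventionist do-notation is a formalization choice on my part, one option among several; the prose definition above does not itself commit to it.

```latex
% Rough rendering of the aggregate-level definition above (one possible
% formalization; the do-notation is an added assumption, not the original):
% x causes y in a population iff, with background conditions Z held fixed,
P\big(y \mid \mathrm{do}(x),\, Z\big) \;>\; P\big(y \mid \mathrm{do}(\neg x),\, Z\big)
```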

2. Statistical techniques contain assumptions about ontology and causality

I do not know how widespread this myth really is, but I have personally encountered it many times. Certainly, statistical tests can be based on specific assumptions about the ontology (i.e., the nature of an entity or property) of the analyzed elements and the causal relations between them. But the idea that these assumptions would therefore be intrinsic to the statistical tests is fallacious. Statistical tests merely crunch numbers—that is all they do. They are predicated on statistical assumptions (e.g., regarding distributions, measurement levels, and covariation). Assumptions about ontology and causality stem wholly from the researcher who seeks to make inferences from statistical tests to theoretical claims. They are, ideally, based on theoretical reasoning and appropriate empirical evidence (or, less ideally, on taken-for-granted conventions and presuppositions).

One common version of this myth is the idea that techniques such as path analysis and structural equation modeling, which fit a structural model to the data, are based on the assumption that the predictor variables cause the outcome variables. This idea is also related to the notion that tests of mediation are inextricably bound up with the pursuit of mechanistic explanation from a reductionist perspective. These ideas are false. Structural models are merely complex models of the statistical relations between variables. Mediation analyses test whether there is an indirect statistical relation between two variables through their joint statistical relation to an intermediate variable. These tests yield valuable information about the change in one variable in light of changes in other variables, which is necessary but far from sufficient for making inferences about causality. The conflation of statistical techniques with “causal analysis” in the social sciences is based on historical contingencies (i.e., that this is what the techniques were initially used for) rather than rational considerations (for example, see this paper by Denis & Legerski, 2006).
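
The point can be made concrete with a small simulation: a textbook product-of-coefficients mediation analysis yields a clearly nonzero “indirect effect” even when the data are generated such that the intermediate variable mediates nothing. This is a minimal sketch with invented parameters, not a reanalysis of any real data.

```python
# Minimal sketch: a nonzero "indirect effect" from a standard
# product-of-coefficients mediation analysis, on simulated data in which
# m is an effect of y rather than a mediator. All parameters are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(size=n)             # true model: x -> y directly
m = 0.5 * x + 0.5 * y + rng.normal(size=n)   # m is downstream of y

def ols(design, target):
    """Least-squares regression coefficients."""
    return np.linalg.lstsq(design, target, rcond=None)[0]

ones = np.ones(n)
a = ols(np.column_stack([ones, x]), m)[1]     # path x -> m
b = ols(np.column_stack([ones, x, m]), y)[2]  # path m -> y, controlling x
print(f"indirect 'effect' a*b = {a * b:.3f}")  # ~0.30, clearly nonzero
# The arithmetic is identical whether or not m actually mediates anything;
# the causal reading is supplied by the researcher, not by the test.
```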

Yet another related idea is that statistical tests are based on presuppositions regarding the reality of the variables that are analyzed. It is true in a trivial sense that there is little point in performing a statistical test unless you assume that the analyzed variables have at least some reference to something out there in the world—or, in other words, that something is causing variation in scores on the variable. But the critical assumption is just that something is measured (much like science in general presupposes that there is something there to be studied). Assumptions about the ontology of what is measured are up to the researcher. For example, statistical analyses of “Big Five” trait data are consistent with a wide variety of assumptions regarding the ontology of the Big Five (e.g., that they are internal causal properties, behavioral regularities, abstract statistical patterns, instrumentalist fictions, socially constructed personae). Furthermore, the finding that scores on an instrument have (or do not have) desirable statistical properties does not tell us whether the constructs it purportedly measures are in some sense real or not. A simple realistic ontology is not necessary; nor is it usually reasonable, which brings us to the third myth.

3. Psychological constructs have a simple realistic ontology

At least some versions of this myth appear to be very common in psychological science. In its extreme form, it amounts to the idea that even abstract psychological constructs correspond to real internal properties under the skin, like organs, cells, or synapses, that carve nature at its joints in a determinate way. There are several fundamental problems here.

First, scientific descriptions in general are replete with indeterminacy. There are often multiple equally valid descriptions that are useful for different purposes. In biology, for example, there are several different notions of ‘species’ (morphological, genetic, phylogenetic, allopatric), with somewhat different extensions, that are used in different branches of the field. In chemistry, even the periodic table of elements—the paradigmatic example of a scientific taxonomy—may be less determinately “carved at the joints of nature” than popular opinion would suggest (see this paper by the philosopher of science John Dupré). In psychology, the indeterminacy is much greater still. The empirical bodies of data are often difficult to survey and assess, both the phenomena themselves and the process of measurement may be complicated, and intentional descriptions in particular have messy properties. Debates over whether, for example, personality traits, political proclivities, or emotions “really” are one-, two-, or n-dimensional are therefore, from a philosophical perspective, misguided (and, by the way, another common mistake is to confuse conceptual representations such as these, which can have referents but not truth values, with theories, which have truth values!). What matters is whether the models are useful. Sometimes multiple models may have legitimate uses, for example by describing a phenomenon with different levels of granularity and bandwidth. There are practical benefits in having the scientific community unite around a common model, but this is often not motivated by the genuine superiority of one model over its competitors.

Second, psychological constructs are commonly identified in terms of individual differences between persons. They are, in this sense, statistical idealizations or convenient fictions (“the average person”) that are useful for describing between-person variation in a group. The differences exist between persons rather than within any particular person (as James Lamiell in particular has argued for decades, for example in this paper). It is of course possible to study psychological attributes that we have good reasons for ascribing to individuals in terms of between-person constructs. But the opposite chain of reasoning is fallacious; it is not possible to directly infer the existence or structure of an attribute at the level of the individual from models or constructs that usefully represent between-person variation at the level of the group aggregate (see, for example, this recent paper by Fisher, Medaglia, & Jeronimus, 2018). For example, it is misleading to describe personality traits such as the “Big Five” as internal causal properties, as has often been done (see also this interesting paper by Simon Boag). This does not (contrary to what some critics have argued) necessarily imply that such between-person constructs are useless for describing the psychology of individuals, but only that a naïve realistic ontology of the phenomena that they identify is precluded.

Third, at least insofar as we employ intentional descriptions (and possibly other descriptions as well), portraying persons as basically rational agents who harbor beliefs, desires, emotions, intentions, and other intentional states, we are faced with an additional problem. On this level of description, a person’s ontology is not just causally impacted by the external world; it is in part constituted by his or her relation to the world (this is often called the ‘externalism’ of the mental). This is because intentional states derive a part of their content from those aspects of the world they represent. The world affords both the raw materials that can be represented and acted upon and frameworks for how to represent and organize these raw materials. It is, in this sense, necessary for making different kinds of intentional thought and action possible. Therefore, at least some psychological attributes exist in the person’s embedment in the world—fully understanding them requires an understanding of both the person’s internal psychological properties and his or her world, including both personal circumstances of life and the collective systems of meaning that actions (both behavioral and mental) are embedded within (see, for example, this classical paper by Fay & Moon, 1977).

On top of this, we have the problem most thoroughly explicated by the philosopher of science Ian Hacking (in this book): many psychological attributes are moving targets with an interactive ontology. This means that the labels we place on the attributes (e.g., that certain sexual orientations have been viewed as pathological, immoral, or forbidden) elicit reactions in those who have the attributes, and responses from the surrounding social environment, that in turn change the attributes themselves.