
Artur Nilsson's research blog

New research on bullshit receptivity

Comments on new research Posted on Mon, July 08, 2019 01:40:23

The notions of "alternative facts" and fake news have rapidly gone viral. Although research on receptivity to falsehoods is useful, there is also a problem here. These notions are often used for ideological rather than scientific purposes—the real facts of the ingroup tribe are pitted against the lies of the other tribes. We need more research that focuses not on which facts people subscribe to but on how they engage with evidence and arguments, and on how to promote a more scientific (as opposed to ideological or tribalist) attitude among the public.

One interesting new line of research focuses on the notion of receptivity to bullshit, which the philosopher Harry Frankfurt famously defined (in his book "On Bullshit") as a statement produced for a purpose other than conveying truth (e.g., persuading or impressing others).

One type of bullshit emerges when someone does not really know the answer to a question but tries to say something that sounds convincing anyway in order to come across as competent. An example is the student who bullshits on an exam, writing something that sounds good in the hope of fooling the teacher into a passing grade. This is a type of bullshit focused on self-promotion. It has been addressed in a recent paper by Petrocelli (2018).

Another type is political bullshit. This is the bullshit that results when a person says whatever he or she can to place his or her own party or ideology in the best possible light and to persuade others or recruit them to the cause. It is the type of bullshit that often makes political debates and opinion journalism so predictable and boring—facts are tortured and twisted to fit into an ideological "box", and the whole thing is more a game of trying to "score" against the opposing team and get cheered on by your own team than a serious engagement in rational debate in which you are open to pursuing the truth and learning something new. This type of bullshit is focused on promoting an ingroup cause or ideology rather than the self.

It is, however, pseudo-profound bullshit that has been the main focus of recent research.


Receptivity to pseudo-profound bullshit

Pseudo-profound bullshit consists of sentences that are designed to sound intellectually profound through the use of buzzwords and jargon but are actually vacuous. This type of bullshit has a long history in intellectual (or pseudo-intellectual) circles. There has even been a culture of bullshitting in some academic circles, particularly in some quarters of continental and postmodern philosophy. For instance, see this funny YouTube clip, in which the philosopher John Searle recounts a conversation in which the famous postmodernist Michel Foucault says that in Paris at least 10% of your writing needs to be incomprehensible for you to be considered a serious and profound thinker. The postmodern movement was also the target of the infamous hoax perpetrated by the physicist Alan Sokal, who managed to publish an article crammed with bullshit in a leading postmodern journal. This is how Sokal described the article when he made the hoax public:

“I intentionally wrote the article so that any competent physicist or mathematician (or undergraduate physics or math major) would realize that it is a spoof … I assemble a pastiche — Derrida and general relativity, Lacan and topology, Irigaray and quantum gravity — held together by vague rhetoric about ‘nonlinearity’, ‘flux’ and ‘interconnectedness.’ Finally, I jump (again without argument) to the assertion that ‘postmodern science’ has abolished the concept of objective reality. Nowhere in all of this is there anything resembling a logical sequence of thought; one finds only citations of authority, plays on words, strained analogies, and bald assertions.”

Another prominent source of pseudo-profound bullshit is New Age literature, particularly the alliance between pseudo-science and spirituality that has come to be symbolized by the well-known New Age guru Deepak Chopra. A Swedish book called "Life through the eyes of quantum physics", which recently hit the best-seller lists, provides an almost parodic illustration of this sort of pseudo-profound bullshit. The book is full of vague Chopraesque claims about quantum consciousness and its "scientifically proven" power to shape reality, including preventing serious illnesses such as cancer, promoting success in life, altering the magnetic field of the earth, and causing miracles. The authors not only lacked knowledge of the basics of quantum physics, they had no interest in it either (as interviews have made apparent)—their interest was in selling New Age spirituality with the help of pop-bullshitting about quantum physics and superficial narratives about Eastern spiritual wisdom.

The reason that pseudo-profound bullshit is so pernicious is in part, I suspect, that it plays on the human yearning for a deep sense of mystery and understanding of the cosmos. Our existential predicament is mind-boggling and anxiety-provoking, and it is comforting to believe that there are gurus or other authorities out there with a deeper sense of the truth, and therefore to attribute our inability to understand what they say to our own ignorance.

Recent findings

How do you study bullshit receptivity scientifically? First, you need a sample of bullshit sentences. Fortunately, there is a very simple, algorithmic way of constructing such sentences: you let a computer randomly string together impressive-sounding buzzwords into a syntactically correct sequence. A number of such bullshit generators are available online, including the Postmodernism Generator and Wisdom of Chopra. These sentences are by definition bullshit, since they are constructed without any concern for truth.
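To make this template-and-buzzword recipe concrete, here is a minimal toy sketch in Python (my own illustration; the word lists and the single sentence template are invented and have nothing to do with the actual generators linked above):

```python
import random

# Toy pseudo-profound bullshit generator: random buzzwords are slotted into a
# fixed grammatical template, so every sentence is syntactically well-formed
# but is constructed with no concern whatsoever for truth.
NOUNS = ["wholeness", "consciousness", "potentiality", "the cosmos", "intention"]
VERBS = ["quiets", "transforms", "unfolds into", "is a reflection of", "gives rise to"]
ADJECTIVES = ["infinite", "hidden", "self-aware", "unbounded", "timeless"]

def pseudo_profound_sentence() -> str:
    """Return one randomly assembled, vacuous but grammatical sentence."""
    return (f"{random.choice(NOUNS).capitalize()} {random.choice(VERBS)} "
            f"{random.choice(ADJECTIVES)} {random.choice(NOUNS)}.")

if __name__ == "__main__":
    for _ in range(3):
        print(pseudo_profound_sentence())
```

Participants in the studies described below are then asked to rate how profound sentences of this kind seem to them.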

In a pioneering paper that won them the Ig Nobel Prize, Pennycook, Cheyne, Barr, Koehler, and Fugelsang (2015) constructed a set of bullshit sentences (e.g., “Wholeness quiets infinite phenomena”) through this method, with a focus on New Age jargon, and then let people rate how profound they thought these sentences were. They found that receptivity to the bullshit sentences was associated with an intuitive cognitive style, a lack of reflectiveness, supernatural beliefs, and other related constructs. Pennycook and Rand (2019) later found that this sort of receptivity to pseudo-profound bullshit also plays a role in receptivity to fake news.

My colleagues and I constructed a Swedish measure based on the Pennycook et al. (2015) paradigm. We have used this measure to address, among other things, the debates in political psychology over whether there are ideological asymmetries in epistemic orientations (Nilsson, Erlandsson, & Västfjäll, 2019). We found, in essence, that social conservatism (and particularly moral intuitions about ingroup loyalty, respect for authority, and purity) is robustly associated with receptivity to pseudo-profound bullshit, consistent with the classical notion of a “rigidity of the right”. Interestingly, we also found particularly high bullshit receptivity among people who vote for the Green Party in Sweden, and very low bullshit receptivity among right-of-center social liberals.

What are the mechanisms driving these differences? Part of the explanation appears to be a failure to critically engage with information. Like Pennycook and colleagues, we have found that bullshit receptivity is robustly associated with low cognitive reflection, and we have also found it to be negatively associated with numeracy and positively associated with confirmation bias.

But this cannot be the whole story. For example, the Greens were close to the average in cognitive reflectiveness in our study. We speculated that their high bullshit receptivity is instead due to a strong openness to ideas that is not always tempered by critical thinking. Interestingly, two papers suggesting that this is indeed a mechanism underlying bullshit receptivity appeared right after our paper was accepted for publication. Bainbridge, Quinlan, Mar, and Smillie (2019) found that receptivity to pseudo-profound bullshit is associated with the personality construct “apophenia”—the tendency to see patterns where none exist—which is a form of trait openness. Walker, Turpin, Stolz, Fugelsang, and Koehler (2019) measured illusory pattern perception through a series of cognitive tests rather than personality questionnaires but came to a similar conclusion—bullshit-receptive persons tend to endorse patterns where none exist.

There may of course also be other mechanisms that contribute to receptivity to pseudo-profound bullshit. For example, Pennycook and colleagues have suggested that perceptual fluency contributes to receptivity to fake news. It is possible that persons who are commonly exposed to a specific type of pseudo-profound jargon are more likely to be receptive to this kind of bullshit.

Another great addition to this growing body of research is a paper by Čavojová, Secară, Jurkovič, and Šrol (2019), which presents conceptual replications of many of the key findings on receptivity to pseudo-profound bullshit in Slovakia and Romania. I often lament that psychology fails to take the problem of WEIRD samples and studies seriously, but these studies certainly do. By demonstrating that the research paradigm I have discussed here is meaningful and useful outside of the U.S. and Western Europe, they put this new, fascinating field on firmer ground.

Key papers

___________________________________________________________________________

Bainbridge, T. F., Quinlan, J. A., Mar, R. A., & Smillie, L. D. (2019). Openness/Intellect and susceptibility to pseudo-profound bullshit: A replication and extension. European Journal of Personality, 33(1), 72-88. https://doi.org/10.1002/per.2176

Čavojová, V., Secară, E.-C., Jurkovič, M., & Šrol, J. (2019). Reception and willingness to share pseudo-profound bullshit and their relation to other epistemically suspect beliefs and cognitive ability in Slovakia and Romania. Applied Cognitive Psychology, 33(2), 299-311. https://doi.org/10.1002/acp.3486

Nilsson, A., Erlandsson, A., & Västfjäll, D. (2019). The complex relation between receptivity to pseudo-profound bullshit and political ideology. Personality and Social Psychology Bulletin. https://doi.org/10.1177/0146167219830415

Pennycook, G., Cheyne, J. A., Barr, N., Koehler, D. J., & Fugelsang, J. A. (2015). On the reception and detection of pseudo-profound bullshit. Judgment and Decision Making, 10(6), 549-563. http://journal.sjdm.org/15/15923a/jdm15923a.pdf

Pennycook, G., & Rand, D. G. (2019). Who falls for fake news? The roles of bullshit receptivity, overclaiming, familiarity, and analytic thinking. Journal of Personality. https://doi.org/10.1111/jopy.12476

Petrocelli, J. V. (2018). Antecedents of bullshitting. Journal of Experimental Social Psychology, 76, 249-258. https://doi.org/10.1016/j.jesp.2018.03.004

Walker, A. C., Turpin, M. H., Stolz, J. A., Fugelsang, J. A., & Koehler, D. J. (2019). Finding meaning in the clouds: Illusory pattern perception predicts receptivity to pseudo-profound bullshit. Judgment and Decision Making, 14(2), 109-119. http://journal.sjdm.org/18/181212a/jdm181212a.html
____________________________________________________________________________



Meta-theoretical myths in psychological science

Philosophy and meta-theory Posted on Wed, November 28, 2018 02:05:00

There is a lot of
talk of “meta science” in psychology these days. Meta science is essentially the scientific study of science itself—or, in other words, what has more traditionally been called “science studies”. The realization that psychological science (at least as indexed by articles published in high-prestige journals) is littered with questionable research practices, false positive results, and poorly justified conclusions has undoubtedly sparked an upsurge in this area.

The meta-scientific revolution in psychology is sorely needed. So far, however, it has really been a meta-methodological revolution. It has done little to rectify the lack of rigorous meta-theoretical work in psychology, which dates back all the way to the
behaviorist expulsion of philosophy from the field (for example, see this paper by Toulmin & Leary, 1985). Psychology is today, as philosopher
of psychology André Kukla has remarked (in this book), perhaps more strongly
empiricist than any scientific field has been at any point in history. Although
many researchers have an extremely advanced knowledge of statistics and
measurement, few have more than a superficial familiarity with contemporary
philosophy of science, mind, language, and society. When psychologists discuss
meta-theoretical issues, they usually do it without engaging with the relevant
philosophical literature.

I will describe three
meta-theoretical myths that I think are hurting theory and research in
psychology. This is not a complete list. I might very well update it later.

1. Scientific explanation is equivalent to the identification of a causal mechanism

This is by all accounts an extremely common assumption in psychological science. In this respect, psychological theorizing is
remarkably discordant with contemporary philosophical discussions of the nature
of scientific explanation. While there can be little doubt that mechanistic
explanation is a legitimate form of explanation, the notion that all scientific explanations fall (or
should fall) in this category has not been a mainstream view among philosophers
for several decades. Even some of the once most vocal proponents of explanatory
reductionism abandoned this stance long ago. One of today’s leading philosophers of science, Godfrey-Smith (2001, p. 197)
goes as far as to assert (in this book) that “It is a mistake
to think there is one basic relation that is the explanatory relation . . . and it is also a mistake to think
that there are some definite two or three such relations. The alternative view
is to recognize that the idea of explanation operates differently within
different parts of science—and differently within the same part of science at
different times.”

Psychology is particularly diverse in terms of levels of explanation, ranging from instincts and neurobiology to intentionality and cultural embeddedness. For example, functional explanations (the existence or success of something is explained in terms of its function) are very popular in cognitive psychology. In my own
field, personality and social psychology, a lot of the explanations are implicitly intentional (reason-based)
explanations (a mental event or behavior is explained in terms of beliefs,
desires, goals, intentions, emotions, and other intentional states of a
rational agent). The reasoning is often that it would be rational for people to act in a particular way (people should be inclined to do this or that
because they have this or that belief, goal, value, emotion, etc.) and that
this explains why they de facto tend to act in this way. Even though the
researchers seldom recognize it themselves, this is not a mechanistic explanation. The cause of the action is described
in intentional rather than mechanistic terms. Not all causal explanations are
mechanistic explanations (a very famous essay by the philosopher Donald Davidson that first made this case can be found here).

It is of course
possible to argue that these are not real scientific explanations—that the only
real scientific explanations are
mechanistic. The important thing to realize is that this is akin to saying that
much, perhaps most, of psychological research really is not science. In fact,
even the so-called causal mechanisms purportedly identified in psychological
research are generally quite different from those identified in the natural
sciences. Psychological research is usually predicated on a probabilistic,
aggregate-level notion of causality (x causes y in the population if and only if x raises
the probability of y in the population on average ceteris paribus) and a notion of probabilistic, aggregate-level
mediation as mechanistic explanation, while the natural sciences often employ a
deterministic notion of causality.
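A minimal formal rendering of this aggregate-level notion (my own paraphrase of the parenthetical definition above, not a formula taken from any of the cited works) might look like this:

```latex
% Probabilistic, aggregate-level causality, as typically presupposed in psychological research:
% x causes y in population P iff, holding background conditions z fixed (ceteris paribus),
\Pr(y \mid x, z) \;>\; \Pr(y \mid \neg x, z) \quad \text{on average over } z .
% Contrast: a deterministic notion would require that, given x and z, y follows without exception.
```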

2. Statistical techniques contain assumptions about ontology and causality

I do not know how
widespread this myth really is, but I have personally encountered it many times. Certainly, statistical tests can be based on specific assumptions
about the ontology (i.e., the nature of an entity or property) of the analyzed
elements and the causal relations between them. But the idea that these
assumptions would therefore be intrinsic to the statistical tests is fallacious. Statistical tests merely crunch numbers—that is all they do. They
are predicated on statistical
assumptions (e.g., regarding distributions, measurement levels, and
covariation). Assumptions about ontology and causality stem wholly from the researcher who seeks to make inferences from statistical tests to theoretical
claims. They are, ideally, based on theoretical reasoning and appropriate
empirical evidence (or, less ideally, on taken-for-granted conventions and presuppositions).

One common version of
this myth is the idea that techniques such as path analysis and structural
equation modeling, which fit a structural model to the data, are based on the
assumption that the predictor variables cause the outcome variables. This idea
is also related to the notion that tests of mediation are inextricably bound up
with the pursuit of mechanistic explanation from a reductionist perspective. These
ideas are false. Structural models are merely complex models of the statistical relations between variables. Mediation analyses test whether there is an indirect statistical relation between two variables through their joint statistical relation to an intermediate variable. These tests yield valuable information about how one variable changes in light of changes in other variables, which is necessary but far from sufficient for making inferences about causality. The conflation of statistical techniques with “causal analysis” in the social sciences is based on historical contingencies (i.e., that is what they were initially used for) rather than rational considerations (for example, see this paper by Denis & Legerski, 2006).
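As a minimal sketch of the point that mediation analysis only quantifies statistical relations, here is a toy simulation in Python (the data and variable names are invented for illustration; this is not a reanalysis of anything discussed above):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500

# Simulated predictor x, "mediator" m, and outcome y. The numbers alone cannot
# tell us which causal story (if any) generated them.
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)
y = 0.4 * m + 0.2 * x + rng.normal(size=n)

a = sm.OLS(m, sm.add_constant(x)).fit().params[1]                 # x -> m path
fit_y = sm.OLS(y, sm.add_constant(np.column_stack([x, m]))).fit()
c_prime, b = fit_y.params[1], fit_y.params[2]                     # direct path, m -> y path

print(f"indirect effect a*b = {a * b:.2f}, direct effect c' = {c_prime:.2f}")
```

Statistically equivalent models with the arrows drawn differently would fit these numbers just as well, which is why an "indirect effect" by itself licenses no inference about mechanism.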

Yet another related
idea is that statistical tests are based on presuppositions regarding the reality of the variables that are analyzed. It is true in a trivial
sense that there is little point in performing a statistical test unless you
assume that the analyzed variables have at least some reference to something
out there in the world—or, in other words, that something is causing variation
in scores on the variable. But the critical assumption is just that something is measured (much like science
in general presupposes that there is something there to be studied).
Assumptions about the ontology of what is measured are up to the researcher.
For example, statistical analyses of “Big Five” trait data are consistent with
a wide variety of assumptions regarding the ontology of the Big Five (e.g., that they are internal causal properties, behavioral regularities, abstract statistical
patterns, instrumentalist fictions, socially constructed personae). Furthermore,
the finding that scores on an instrument have (or do not have) desirable
statistical properties does not tell us whether the constructs it purportedly
measures are in some sense real or not. A simple realistic ontology is not
necessary; nor is it usually reasonable, which brings us to the third myth.

3. Psychological constructs have a simple realistic ontology

At least some versions
of this myth appear to be very common in psychological science. In its extreme
form, it amounts to the idea that even abstract psychological constructs
correspond to real internal properties under the skin, like organs, cells, or
synapses, that are cut into the joints of nature in a determinate way. There are several fundamental
problems here.

First, scientific descriptions in general are replete with indeterminacy. There are often multiple equally valid descriptions that are useful for different purposes. In biology, for example, there are several different notions of ‘species’ (morphological, genetic, phylogenetic, allopatric), with somewhat different extensions, that are used in different branches of the field. In chemistry, even the periodic table of elements—the paradigmatic example of a scientific taxonomy—may be less determinately “cut into the joints of nature” than popular opinion would suggest (see this paper by the philosopher of science John Dupré). In psychology, the indeterminacy is much greater still. The empirical bodies of data are often difficult to survey and assess, both the phenomena themselves and the process of measurement may be complicated, and intentional descriptions in particular have messy properties. Debates over
whether, for example, personality traits, political proclivities, or emotions “really”
are one-, two-, or n-dimensional are
therefore, from a philosophical perspective, misguided (and, by the way,
another common mistake is to confuse conceptual representations such as these
ones, which can have referents but not truth values, with theories, which have truth values!) What
matters is whether the models are useful. Sometimes it may be the case that
multiple models have legitimate uses, for example by describing a phenomenon with
different levels of granularity and bandwidth. There are practical benefits in
having the scientific community unite around a common model, but this is often not motivated by the genuine superiority of one model over the competitors.

Second, psychological constructs are commonly identified in terms of individual differences between persons. They are, in this sense, statistical idealizations or convenient fictions (“the average person”) that are useful for describing between-person variation in a group. The differences exist between persons rather than within any particular person (as James Lamiell in particular has argued for decades, for example in this paper). It is of course possible to study psychological attributes that we have good reasons for ascribing to individuals in terms of between-person constructs. But the opposite chain of reasoning is fallacious; it is not possible to directly infer the existence or structure of an attribute at the level of the individual from models or constructs that usefully represent between-person variation at the level of the group aggregate (see, for example, this recent paper by Fisher, Medaglia, & Jeronimus, 2018). For example, it is misleading to describe personality traits such as the “Big Five” as internal causal properties, as has often been the case (see also this interesting paper by Simon Boag). This does not (contrary to what some critics have argued) necessarily imply that such between-person constructs are useless for describing the psychology of individuals, but only that a naïve realistic ontology of the phenomena that they identify is precluded.
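A toy simulation makes the fallacy concrete. The sketch below (my own illustration with made-up data, not taken from any of the cited papers) builds a data set in which two variables are strongly positively related between persons while being negatively related within every single person:

```python
import numpy as np

rng = np.random.default_rng(0)
n_persons, n_occasions = 200, 50

# Person-level means: people who are higher on x on average are also higher on y,
# which produces a positive between-person correlation.
mean_x = rng.normal(0, 1, n_persons)
mean_y = 0.8 * mean_x + rng.normal(0, 0.6, n_persons)

# Within each person, occasion-to-occasion deviations in x and y are negatively coupled.
x_dev = rng.normal(0, 1, (n_persons, n_occasions))
x = mean_x[:, None] + x_dev
y = mean_y[:, None] - 0.8 * x_dev + rng.normal(0, 0.6, (n_persons, n_occasions))

between_r = np.corrcoef(x.mean(axis=1), y.mean(axis=1))[0, 1]
within_r = np.mean([np.corrcoef(x[i], y[i])[0, 1] for i in range(n_persons)])

print(f"between-person r ≈ {between_r:.2f}")        # clearly positive
print(f"average within-person r ≈ {within_r:.2f}")  # clearly negative
```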

Third, at least insofar
as we employ intentional descriptions (and possibly other descriptions as well),
portraying persons as basically rational agents that harbor beliefs, desires,
emotions, intentions, and other intentional states, we are faced with an additional
problem. On this level of description, a person’s ontology is not just causally impacted by the external world;
it is in part constituted by his or
her relation to the world (this is often called the ‘externalism’ of the
mental). This is because intentional states derive a part of their content from
those aspects of the world they represent. The world affords both the raw
materials that can be represented and acted upon and frameworks for how to represent
and organize these raw materials. It is, in this sense, necessary
for making different kinds of intentional thought and action possible. Therefore, at least some psychological attributes
exist in the person’s embedment in the world—fully understanding them requires
an understanding of both the person’s internal psychological properties
and his or her world, including both personal circumstances of life and the collective systems of meaning that actions (both behavioral and mental) are embedded within (see, for example, this classical paper by Fay & Moon, 1977).

On top of this, we have the problem most thoroughly explicated by the philosopher of science Ian Hacking (in this book) that many psychological attributes are moving targets with an interactive ontology. This means that the labels we place on the attributes (e.g., that certain sexual orientations have been viewed as pathological, immoral, or forbidden) elicit reactions in those who have the attributes and responses from the surrounding social environment that, in turn, change the attributes.



Psychology is still WEIRD

Comments on new research Posted on Wed, November 14, 2018 15:20:26

Psychological science is fraught with problems. One of
these problems that has recently attracted widespread attention is the
proliferation of false positives, which is rooted in a combination of QRPs (questionable
research practices), including “p-hacking” (choosing analytical options on the
basis of whether they render significant results) and “HARKing” (hypothesizing
after the results are known), and very low statistical power (i.e., too few
participants). Overall, psychology has responded vigorously to this problem,
although much remains to be done. Numerous reforms have been put in place to
encourage open science practices and quality in research.
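To illustrate why such practices inflate false positives, here is a small simulation sketch (my own toy example with arbitrary parameters; it mimics only one simple form of p-hacking, namely measuring several outcomes and reporting whichever happens to come out significant):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_sims, n_per_group, n_outcomes, alpha = 5000, 20, 5, 0.05

false_positives = 0
for _ in range(n_sims):
    # No true effect: both groups are drawn from the same distribution
    # on every one of the five outcome measures.
    group_a = rng.normal(size=(n_per_group, n_outcomes))
    group_b = rng.normal(size=(n_per_group, n_outcomes))
    pvals = [stats.ttest_ind(group_a[:, j], group_b[:, j]).pvalue
             for j in range(n_outcomes)]
    # "p-hacking": count the study as a positive result if any outcome is significant.
    false_positives += min(pvals) < alpha

print(f"false-positive rate ≈ {false_positives / n_sims:.2f}")  # roughly 0.23, not 0.05
```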

Another problem that has become widely recognized
recently is that psychological research often makes inferences about human
beings in general based on studies of a thin slice of humanity. As Henrich,
Heine, & Norenzayan (2010) noted in a landmark paper, participants in
psychological research are usually drawn from populations that are WEIRD (Western, Educated, Industrialized, Rich, Democratic), which
are far from representative of mankind—in fact, they frequently turn out to be rather eccentric, even when it comes to basic cognitive, evolutionary, and social phenomena such as cooperation, reasoning styles, and visual perception (see also this interesting preprint by Schulz, Bahrami-Rad, Beauchamp, & Henrich that very thoroughly discusses the historical origins of WEIRD psychology).

The paper by Henrich and colleagues has
racked up almost 5000 Google Scholar citations. Yet a recent paper by Rad,
Martingano, and Ginges (2018) suggests that the impact of the Henrich et al.
paper on actual research practices in psychology has been minimal, at least as
indexed by research published in the high-prestige journal Psychological Science. Rad et al. find that researchers persist in
relying on WEIRD samples and show little awareness of the WEIRD problem: “Perhaps the most disturbing
aspect of our analysis was the lack of information given about the WEIRDness of
samples, and the lack of consideration given to issues of cultural diversity in
bounding the conclusions” (p. 11402).

Explaining the persistence of the WEIRD problem

How can it be that psychology has responded so
vigorously to the problem with false positives, yet so inadequately to the
WEIRD problem? Surely both problems are equally serious, are they not? I can
think of at least three possible explanations.

1. First and foremost, the WEIRD problem is a
manifestation of a much broader problem. It is a manifestation of the lasting
influence of the marriage between logical positivism and behaviorism that
shaped psychology for almost half a century. Psychological research was
supposed to yield universal facts, just like physics, by employing “neutral”,
culture-free materials and methods, a quantitative methodology, and hard-core
empiricism. Given the vast historical impact of this ideal, it is no mystery
that psychology remains both WEIRD and theoretically unsophisticated. This is
simply the implicit paradigm under which psychology has operated for more than a
century. While the false-positives problem signals a crisis within this paradigm, the WEIRD problem is a meta-problem with the paradigm itself.

2. Second, it is possible that researchers do not realize
the severity of the WEIRD problem because they are immersed in a homogeneous
community of like-minded individuals with similar concerns, and their exposure
to other intellectual cultures is limited. Here it is important to note that
the WEIRD problem is not limited to participant selection. It is a problem of
testing WEIRD theories on WEIRD samples with WEIRD methods. I personally often
find psychological theories and concepts US-centric (e.g., the reification of
“liberals” and “conservatives” in political psychology or the pre-occupation
with the self and neglect of other aspects of the person’s worldview in
personality psychology)—which is not surprising given that most of the leading
researchers in psychology are from the United States—and I say this as someone who lives within the broader Western cultural sphere.

3. A third possible explanation for the persistence of
the WEIRD problem is that there are many practical difficulties involved in conducting
research in non-WEIRD contexts. A lot of things could go wrong. You need high-quality
translations of research materials. You also need to obtain a reasonable degree
of measurement invariance across languages and populations to be able to make
meaningful comparisons between them. Even so, the results may not be at all
what you expected. Perhaps the theories and instruments do not perform as they are supposed to. Of course, on purely scientific grounds such findings would
be extremely important. But perhaps researchers still find it is easier to just
stick to studying well-known populations under well-known conditions in order
to more easily find support for their hypotheses and publish their work.

Moving forward

The WEIRD problem needs to attain the same status as the false-positives problem in psychology. As Rad, Martingano, and Ginges (2018) suggest, authors need to do a much better job of reporting sample characteristics, explicitly tying findings to populations, justifying the sampled population, discussing the generalizability of the findings, and investigating existing diversity in their samples. Journals and funders need to start encouraging these practices. Given all the work involved in conducting non-WEIRD research and the fierce competition over research funding and space in high-impact journals, we are unlikely to see any real change unless the inclusion of non-WEIRD research earns researchers extra points.

When it comes to the problem with WEIRD perspectives, psychology
might need to become more open to scholarship born out of non-WEIRD (particularly
non-US) contexts. An increased openness to philosophical, meta-theoretical, historical,
and anthropological scholarship in general, which is for the most part
completely ignored in psychological science today, would be particularly
helpful. That would help us both to address the WEIRD-problem and to make
psychology a more theoretically sophisticated science.



The evolutionary foundations of worldviews

Comments on new research Posted on Wed, November 07, 2018 15:27:16

When taking a graduate course on evolutionary psychology a few years ago, I thought a bit about the potential evolutionary bases of worldviews. I was specifically interested in the opposition between humanistic and normativistic perspectives posited by Silvan Tomkins' polarity theory (more information here), which is encapsulated in the following quotation: “Is man the measure, an end in himself, an active, creative, thinking, desiring, loving force in nature? Or must man realize himself, attain his full stature only through struggle toward, participation in, conformity to a norm, a measure, an ideal essence basically prior to and independent of man?” (Tomkins, 1963).

Evolutionary bases of normativism and humanism

Drawing on Tomkins’ (1987) notion that “the major dynamic of ideological differentiation and stratification arises from perceived scarcity and the reliance upon violence to reduce such scarcity”, I suggested (in my term paper) that conditions of resource scarcity should have fostered a tough-minded climate in which the strong and hostile could prove their worth by contributing to resource provision, while those who were weak or vulnerable were met with anger, contempt, and disgust. I suggested that humanism, by contrast, is to a greater extent rooted in the problem of forming stable alliances with other persons and groups, which requires interpersonal trust and empathy.


Because psychological traits co-evolve as entire “packages”
in response to particular adaptive contexts, it is reasonable to predict that
humanism and normativism co-vary with other psychological and physiological
traits that also help to solve the respective adaptive problems. Normativism may
have co-evolved with other traits that helped to solve the problem of resource
acquisition, such as aggressiveness, physical strength and formidability,
risk-taking, conscientiousness, persistence, and diligence—this should be true
at least among men, who are thought to have been the primary resource providers in an evolutionary context. Humanism may instead have co-evolved with traits such as
empathy, altruism, agreeableness, and concern for the welfare of individuals, which are crucial for social bonding.

Egalitarianism and upper-body strength

Interestingly, a portion of the aforementioned hypotheses has subsequently been tested. The results of twelve studies conducted in various countries are reported in a recent paper by Michael Bang Petersen and Lasse Laustsen titled Upper-body strength and political egalitarianism: Twelve conceptual replications. Drawing on models of animal conflict behavior, Petersen and Laustsen suggest that attitudes related to resource conflict (i.e., egalitarianism) should be related to upper-body strength among males, because such strength was crucial for the resolution of resource conflicts in our evolutionary past. They argue that “formidable individuals and their allies would be more likely to prevail in resource conflicts and needed to rely less on norms that enforced sharing and equality within or between groups in order to prosper”.

The measures of upper-body strength employed include
both self-report measures and objective measures of formidability. The one
major limitation of these studies—and this is a major limitation—is that there was, as far as I understand it, no control
for significant environmental factors such as time spent in the gym, physical exercise
background, occupation, or use of performance enhancing drugs (although other
more indirectly relevant variables such as socioeconomic status and
unemployment experiences were taken into consideration). Nevertheless, it is
interesting to note that the authors find a clear relationship among men (but not
women) between physical formidability and social dominance orientation (which encompasses
egalitarianism) but not between formidability and right-wing authoritarianism.

Toward an evolutionary understanding of worldviews?

In order to establish that there is genetic covariation (not just
covariation in general) between formidability and worldviews, future research
needs to do a better job controlling for crucial environmental influences
(recent studies have apparently started to do this). Behavioral genetics methods
can also be employed to more directly assess genetic covariation. In addition to
this, a broader range of worldview dimensions (e.g., normativism and humanism, which are correlated with authoritarianism and social dominance) and physiological predispositions
could easily be taken into consideration. Let us hope that this is indeed what will happen over the next few years.



The “happiness pie”, genetic and environmental determinism, and free will

Comments on new research Posted on Fri, September 21, 2018 01:25:46

Nick Brown and Julia Rohrer recently posted a new preprint titled Re-slicing the “Happiness Pie”: A Re-examination of the Determinants of Well-being, which comments on an influential paper by Lyubomirsky, Sheldon, and Schkade (2005) on the determinants of well-being. Nick Brown is the amateur who debunked the mathematics of happiness (together with the legendary Alan Sokal of the “Sokal hoax”) and has made a name for himself exposing shoddy work in positive psychology. This is another addition to that genre. What is particularly mind-blowing about this one is not just the sheer lack of intellectual sophistication of the criticized paper, but the fact that it has garnered a whopping 3000 Google Scholar citations.

The central claim of the Lyubomirsky et al. paper is
that roughly 50% of the variance in well-being can be explained in terms of
genetic predispositions and that roughly 10% of it can be explained in terms of
life circumstances, leaving up to 40% to be explained in terms of intentional
activity. This decomposition of the determinants of well-being, which has come
to be known as the happiness pie, has become a cornerstone of the self-help and
coaching movements, as it appears to suggest that all persons have considerable
control over their own well-being.

Problems pointed out by Brown and Rohrer

Brown and Rohrer meticulously pick the happiness pie apart.
Here are some of the errors:

1. An additive model that divides the determinants of
well-being into three disparate portions (or pieces of a pie) is only
meaningful if all the portions are independent of each other. But there
is plenty of evidence of interactions between genes, environment, and
volitional activity.

2. No evidence is presented that the “leftover” variance
after taking genetics and the environment into consideration can be attributed
to volitional activity.

3. Measurement error, which attenuates estimates, has not been taken into consideration. If we adjust for it, there will be less “leftover” variance (a toy illustration of this point follows after this list).

4. When the sources for the numbers 50% and 10% are
re-examined, these numbers appear to be arbitrary, and how they were derived is
not transparent.

5. The 10% estimate in particular appears to be based on sloppy reasoning. Countless environmental factors are not measured in the survey studies that this estimate is based on (in utero influences are a dramatic example). Environmental factors are frequently operationalized in terms of demographic variables, which have both genetic and environmental determinants and certainly do not exhaust the full range of relevant life circumstances.

6. Even if the 50% and 10% figures and the subtraction
logic by which 40% of the variance is leftover for intentional activity were
correct, this would still just be a population average. It would not imply that
each individual has substantial control over his or her well-being.
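To make point 3 concrete, here is a toy decomposition with made-up numbers (the 20% error share is purely illustrative, not an estimate from either paper):

```latex
% Observed variance in well-being, decomposed additively (the happiness-pie logic):
\mathrm{Var}(W) \;=\;
  \underbrace{V_{\text{genes}}}_{50\%} +
  \underbrace{V_{\text{circumstances}}}_{10\%} +
  \underbrace{V_{\text{activity}}}_{?} +
  \underbrace{V_{\text{error}}}_{\approx 20\%\ \text{(assumed)}}
% If measurement error accounts for roughly 20% of the observed variance, the "leftover"
% available for intentional activity shrinks from 40% to at most 100 - 50 - 10 - 20 = 20%,
% and even that figure assumes the components are additive and independent (point 1).
```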

Moving beyond the Brown and Rohrer paper

The great irony of all this is that there is nothing here—neither in the Lyubomirsky et al. paper nor in the Brown and Rohrer paper—showing that people cannot
change their well-being through intentional activity either. The deeper problem
is that claims about the effects of will-power (and to an even greater degree
claims about free will) do not have anything at all to do with heritability and
environmental influences per se. The proper way to scientifically investigate the
extent to which people can intentionally change their well-being is to:
(a) recruit a large group of persons who are highly motivated to do what it takes to increase their well-being,
(b) make sure that they have all relevant resources (or at least measure whether they do), including the best forms of therapy or training programs, time, money, etc., and make sure that they actually do what they are supposed to do,
(c) measure changes in their levels of well-being compared to a control group of persons who do not engage in deliberate efforts to change their well-being but who are otherwise comparable to the group of persons who do.

Obviously, this is not easy to do (e.g., how do we make sure
that the experimental and control groups are comparable?), and even if done well, all
that we could say is that this is what we can achieve with our current state of
knowledge. It is possible that there are effective methods for increasing
well-being that have not yet been discovered or are not widely known.

Heritability coefficients and estimates of
environmental correlates in a general population are in themselves irrelevant
to this question because we do not know whether the persons who participated in
these studies did engage in persistent intentional efforts to change their
well-being or whether they had access to the best strategies for doing this. These pieces of information could possibly have some relevance if we knew which individuals had the motivation and strategies necessary for deliberate improvement of well-being and which did not. Even so, intentional orientations do not emerge randomly out of thin air—they may be related to traits such as conscientiousness or openness to change, which have a sizable genetic component—so it might be tricky to find twins who differ enough in this regard.

At any rate, a case can be made that people do at least have the capacity for
intentionally changing their well-being if they are motivated to make enough
changes to their lives, without resorting to any of the dubious arguments presented by
Lyubomirsky et al.



Our book is finally out

My new work Posted on Tue, May 22, 2018 01:23:07

The book on philosophy of science and methodology for psychology that I have been working on together with Lars-Gunnar Lundh is finally out. Unfortunately, it is only published in Swedish so far, but I hope that we will soon be able to publish at least parts of it in English as well. If you happen to speak Swedish, you can access it from the publisher Studentlitteratur.

The reason that we wrote this book is that we felt that there was no other book that connects philosophy of science with psychological science in a sufficiently systematic and non-polemical way (the best books in this genre tend to focus on the natural sciences). Although our initial plan was to write a standard textbook, the book grew quite a bit over time. In the end, it covered traditional philosophy of science (ontology, epistemology, positivism, Popper, Kuhn…), alternative philosophies (hermeneutics and phenomenology), critiques of science (e.g., postmodernism, feminism, science studies, and meta-research), philosophical issues in psychology (e.g., the mind-body problem, levels of explanation, and research practices), and the basics of both quantitative and qualitative methods.

It’s nice, although a bit strange, to finally see it in print after more than four years of working on it. I have come to realize that writing this book (rather than focusing on publishing papers) before having a tenured position might not have been the best career move. But on the other hand, this book will probably be a lot more useful to a lot more people than a pile of highly specialized research papers would be. We were also fortunate enough to receive the Course Literature Honor's Prize from Studentlitteratur. Pictures from the award ceremony at Berns in Stockholm below: