Essentialism and Traditionalism in Academic Research

September 19, 2022

Originally published at Economics from the Top Down

Ryan Kyger1 and Blair Fix

Philosophy of science is about as useful to scientists as ornithology is to birds.

— attributed to Richard Feynman2

Most scientists don’t worry much about philosophy; they just get on with doing ‘science’. They run experiments, analyze data, and report results. And by so doing, they fall repeatedly into known philosophical pitfalls.

This essay is about two such pitfalls: essentialism and traditionalism.

‘Essentialism’ is the view that behind real-world objects lie ‘essences’ — a type of eternal category that you cannot observe directly but is nonetheless there. Racial categories are a common type of ‘essence’. To be racist is to attribute to different groups universal qualities that define them as people.3

Given the long history of racism, it’s clear that humans need little impetus to impose categories onto the world. Still, our instinct to categorize is not always bad. In fact, it’s a key part of science. Looking for patterns is how Dmitri Mendeleev created the periodic table. It’s how John Snow discovered that cholera was water-borne. And it’s how Johannes Kepler discovered the laws of planetary motion.

So if categorizing patterns can be helpful, what makes essentialism bad? To be essentialist, in our view, is to reify a category (or theory) into a ‘higher truth’. By so doing, you don’t use evidence to inform a theory. You use theory to interpret evidence … and you don’t consider that you could be wrong.

Like most human activities, ‘essentialism’ is a social affair. Yes, you can do it by yourself, but you’ll be called a ‘crank’. To be essentialist with prestige, you must be part of a tradition. You must think x because your teacher thought x. And because your teacher was prestigious, so are you. When we combine essentialism with traditionalism, we get a powerful recipe for killing science. We interpret the world through our preferred lens, and then reward ourselves for doing it.

In this essay, we’ll look at examples of essentialism and traditionalism in economics, biology, and statistics. But we’ll start with some Greek philosophy.

Plato’s spell

Plato of Athens was perhaps the first scholar to combine essentialism and traditionalism. Unsurprisingly, his goal was political.

Plato was born during the Athenian experiment with democracy — a period marked by turmoil, war, and famine. Perhaps because of this instability (and also because his family claimed royal blood), Plato disliked democracy. Instead, he was staunchly in favor of traditional hierarchical rule:

The greatest principle of all is that nobody … should be without a leader. … [Man] should teach his soul, by long habit, never to dream of acting independently, and to become utterly incapable of it.

— Plato of Athens, quoted by Karl Popper [3]

Given his reactionary politics, Plato was disturbed by the events that surrounded him. During his life, the aristocratic order was in flux. And so Plato searched for something constant to cling to. He found this constant in his philosophy. Social change, Plato would theorize, was like entropy. Change was never progressive, but was instead an incessant force for corruption, decay and degeneration. Or at least, that’s Karl Popper’s reading of Plato.

In his book The Open Society and its Enemies, Popper lambastes Plato for both his politics and his philosophy [3]. As Popper sees it, Plato resolved his angst about social change by imposing onto the messy real world a higher plane of eternal truth. Plato called this higher plane the ‘Forms’ — a hidden realm in which real-world objects have a perfect and unchanging representation. Behind every real-world triangle, for instance, is the perfect ‘form’ of a triangle.

When applied to mathematics, this idea seems reasonable. An ideal triangle, we all know, is a three-sided shape whose internal angles sum to 180°. Although no real-world triangle has this property exactly, we can imagine one that does. This, Plato would say, is the ‘essence’ of a triangle.

Actually, it follows from definitions. You see, in mathematics, we start with definitions and explore their consequences. An example: Euclid postulated that parallel lines never intersect. From this (and other definitions), he derived the rules of Euclidean geometry. From a definition came consequences. If you think like Plato, you’d say that Euclid discovered a higher plane of truth. But that’s a scientific fallacy. The problem is that there’s no guarantee that definitions (and their consequences) have anything to do with the real world.

Case in point: on the curved surface of the Earth, Euclidean geometry is flat out wrong (pun intended). On Earth, lines that start out parallel can intersect: lines of longitude all cross the equator at right angles, yet they meet at the poles. Euclid’s ‘higher plane of truth’ is therefore invalid. Draw a triangle big enough and you’ll find that the angles sum to more than 180°.
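
The amount by which a big triangle’s angles exceed 180° isn’t mysterious. A standard result (Girard’s theorem) gives it exactly: for a triangle whose sides are arcs of great circles on a sphere of radius $R$ and whose area is $A$, the angles (in radians) satisfy

$$
\alpha + \beta + \gamma = \pi + \frac{A}{R^2}
$$

For example, a triangle with one vertex at the North Pole and two vertices on the equator, 90° of longitude apart, covers one eighth of the sphere ($A = \pi R^2 / 2$), so its angles sum to $\pi + \pi/2$: three right angles, or 270°.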

Back to Plato. Karl Popper was unimpressed by Plato’s essentialism because he realized that it was a recipe for pseudoscience. According to Plato, ‘essences’ were accessible only through ‘intuition’. Popper ridiculed this idea, but was not the first to do so. Plato’s spell was broken during the Enlightenment, when thinkers like John Locke, David Hume, and Immanuel Kant highlighted the importance of empirical knowledge [4–6]. To do science, they argued, you cannot simply impose ideas and definitions onto the world. Instead, you must use real-world observations to hold ideas in check.4

Of course, like essences, scientific models are idealizations of the real world. The key difference, though, is the attitude surrounding the idea. When science is done well, hypotheses are treated as provisional and incomplete. When new evidence comes along, a good scientist must remain open to revising or discarding their model. With Plato’s essences, it is the reverse. The essence is eternally true … the unquestionable insight of a great mind. Evidence is to be interpreted in light of the insight, never the reverse.

‘Essentialism’, as we see it, is the reification of a theory — a transformation from ‘provisional explanation’ to ‘timeless truth’. The change rarely happens overnight, but is instead reinforced by repetition. Over time, scientists tend to fall in love with their theories, buttressing them from contradictory evidence. Eventually a pet theory becomes a school of thought, passed down from teacher to student. If the tradition becomes ubiquitous, the theory becomes a ‘timeless truth’. Or so it appears to those under the spell.

In this essay, we look at three academic disciplines that are (at least in part) under the spell of essentialism and traditionalism. Obsessed with the idealized free market, the discipline of economics is the worst offender. But it is not the only one. Population biologists often interpret the world through the lens of an equilibrium model that bears little resemblance to reality. And in their haste for ‘rigor’, statisticians have enshrined subjective beliefs as received wisdom.

What unifies these practices, we argue, is the merger of essentialism and traditionalism. It is a potent combination for creating and perpetuating an ideology.

Essentialism and traditionalism in economics

Mainstream economics makes so many false claims that we could write a book debunking them. (Which is why Steve Keen did just that [7].) Although economics presents itself as a hard science, under the hood it is essentialist dogma, held in place by tradition.

Take, as an example, economists’ appeal to ‘naturalness’. According to neoclassical economics, there is a ‘natural’ rate of unemployment, and there are ‘natural’ monopolies. Even the distribution of income is supposedly ‘controlled by a natural law’ [8].

Now there is nothing wrong with appealing to natural law. Indeed, scientists do it all the time. But outside economics, the term has an unambiguous meaning: a ‘natural law’ is an empirical regularity with no known exception. The laws of thermodynamics are a prime example. Left alone, objects ‘naturally’ converge to thermodynamic equilibrium. Leave a hot coffee on the table and it will soon cool to room temperature. The outcome is the same today as it was yesterday. It is the same for you as it is for me. It is a ‘natural law’.
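
To give this regularity a concrete form, a standard approximation (Newton’s law of cooling) says the coffee’s temperature decays exponentially toward room temperature:

$$
T(t) = T_{\text{room}} + \left( T_0 - T_{\text{room}} \right) e^{-kt}
$$

where $T_0$ is the starting temperature and $k$ is a positive constant set by the cup and its surroundings. The value of $k$ varies from cup to cup; the convergence to room temperature does not.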

By documenting and explaining this empirical regularity, we are following the recipe laid out by Locke, Hume, and Kant. Observe the real world and try to explain consistent patterns. When economists appeal to ‘natural law’, however, they are doing something different. Take the so-called ‘natural’ rate of unemployment. If this rate were like the laws of thermodynamics, unemployment would gravitate towards a single value. Try as you might, it would be impossible to change unemployment from this ‘natural’ rate.

Needless to say, unemployment does not work this way. Instead, it fluctuates greatly, both in the short term, and over the long term. So when economists refer to the ‘natural’ rate of unemployment, they don’t mean an empirical regularity. They mean an essence. As Milton Friedman defined it, the ‘natural’ rate of unemployment is that which is “consistent with equilibrium in the structure of real wages” [9]. So whenever (and wherever) the labor market is in ‘equilibrium’, unemployment is at its ‘natural’ rate.

So how do you tell when the market is in ‘equilibrium’? Good question … nobody knows. That’s because market ‘equilibrium’ is not something economists observe. It is something economists imagine and then project onto the world. It is an essence.

To convince yourself that this is true, pick any economics textbook and search for the part where the authors measure market ‘equilibrium’. Find the section where they construct the ‘laws’ of supply and demand from empirical observations. Look for where they measure demand curves, supply curves, marginal utility curves, and marginal cost curves. Seriously, look for these measurements. You will not find them.5

You won’t find them because they are unobservable. These concepts are essences. The equilibrium-seeking free market is an idea that economists project onto the world, and then use to interpret events. Anything that fits the vision is ‘proof’ of the essence. Anything that seems contradictory is dismissed as a ‘distortion’.

And that brings us to economics education. The core content in Econ 101 has changed little over the last half century (if not longer). And that’s not because the ‘knowledge’ is secure. It’s because the content of Econ 101 is a tradition. The point of Econ 101 is to indoctrinate the next generation in the ‘essence’ of economics. This powerful combination of essentialism and traditionalism has made economics a “highly paid pseudoscience” [10].

Essentialism and traditionalism in population biology

Compared to economics, the foundations of evolutionary biology are on sound footing. Still, we feel that elements of biology appeal to essentialism. We’ll use population biology as an example.

Population biologists study gene frequency (or more properly, genotype frequency) within a group of organisms. Among humans, for instance, most people have brown(ish) eyes, while about 10% of people have blue eyes. The goal of population biology is to explain this proportion and to understand how and why it changes with time.

The foundational hypothesis in evolutionary biology is that organisms evolve. Curiously, then, many population biologists use a model of genotype frequency that excludes evolution. The model, known as the Hardy-Weinberg principle, outlines the conditions under which the genotype frequencies of a population stay constant from one generation to the next, in which case the population is said to be in ‘genetic equilibrium’. To satisfy the model, a population must reproduce sexually, be infinitely large, mate randomly, produce the same number of offspring per parent, not mutate, not migrate, and not be subject to natural selection [11,12]. In other words, the modeled population must look nothing like what we find in the real world.6
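
In its simplest form, for a single gene with two alleles at frequencies $p$ and $q = 1 - p$, the model says that one generation of random mating fixes the genotype frequencies at

$$
f(AA) = p^2, \qquad f(Aa) = 2pq, \qquad f(aa) = q^2, \qquad p^2 + 2pq + q^2 = 1
$$

and that, under the assumptions above, these frequencies then stay put generation after generation.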

Now, this unreality is not necessarily a problem if we are clear that we are doing a mathematical thought experiment. The problem, though, is that population biologists often use the Hardy-Weinberg model to interpret reality. They’ll run statistical tests to see if the Hardy-Weinberg model provides a ‘good fit’ to empirical data. If it does, they conclude that the population is in genetic equilibrium.
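
To make the procedure concrete, here is a minimal sketch of the usual chi-square goodness-of-fit check against Hardy-Weinberg expectations, written in Python with SciPy. The genotype counts are hypothetical, chosen purely for illustration.

```python
# Sketch: chi-square goodness-of-fit test against Hardy-Weinberg expectations.
# The genotype counts below are hypothetical.
from scipy.stats import chi2

n_AA, n_Aa, n_aa = 370, 460, 170        # observed genotype counts (made up)
N = n_AA + n_Aa + n_aa                  # total individuals

p = (2 * n_AA + n_Aa) / (2 * N)         # estimated frequency of allele A
q = 1 - p

observed = [n_AA, n_Aa, n_aa]
expected = [N * p**2, N * 2 * p * q, N * q**2]   # Hardy-Weinberg expectations

chi_sq = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# One degree of freedom: 3 genotype classes, minus 1, minus 1 estimated parameter (p).
p_value = chi2.sf(chi_sq, 1)

print(f"chi-square = {chi_sq:.2f}, p-value = {p_value:.3f}")
# Convention: a small p-value is read as 'genetic disequilibrium',
# a large one as 'the population is in Hardy-Weinberg equilibrium'.
```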

Can you spot the problem? Like the equilibrium-seeking market, the Hardy-Weinberg principle is an ‘essence’. You cannot directly observe that a population is in genetic equilibrium any more than you can observe that a market is in equilibrium. And like the neoclassical model of the market, the assumptions behind the Hardy-Weinberg principle are systematically violated in the real world. So on the face of it, the model ought never to be used.

Here, though, population biologists take a cue from economist Milton Friedman. In his essay ‘The Methodology of Positive Economics’, Friedman argues that you cannot test a theory by comparing its assumptions to reality [13]. Instead, you must judge the assumptions by the predictions they give. If the predictions are sound, says Friedman, so too are the assumptions. (For why this is a bad idea, see George Blackford’s essay ‘On the Pseudo-Scientific Nature of Friedman’s as if Methodology’ [14].)

Here’s how population biologists apply the Friedman trick. They take a model whose assumptions are known to be false in the real world and subject it to a statistical test. The test has two possible outcomes. If the test statistic falls below a critical value (equivalently, if the p-value is large), the population is said to be in ‘genetic equilibrium’. If the statistic exceeds the critical value, the population is said to be in ‘genetic disequilibrium’. Population biologists treat either outcome as informative. But doing so ignores the false assumptions of the model: whichever way the test comes out, the model itself is never questioned.

To reduce this reasoning to its most absurd form, imagine a hypothesis that claims: ‘if average human height is greater than 0, height is in disequilibrium’. Now, suppose we employ a ‘special procedure’ to test this hypothesis. The procedure yields two possible results:

  1. Null result: the average human height is equal to 0. Conclusion: height is in equilibrium.
  2. Alternative result: the average human height is greater than 0. Conclusion: height is in disequilibrium.

With either result, we ‘demonstrate’ something about human height. Or rather, by applying a model that we know to be false, we fool ourselves into thinking so.

Like economists’ faith in the equilibrium-seeking free market, many population biologists have come to treat the Hardy-Weinberg principle as an essential truth. This belief then gets passed from teacher to student as a matter of ‘tradition’. Why use the model? Because your prestigious teacher did.

To be fair to population biologists, the appeal to tradition has never been as great as in economics. And today, a growing number of biologists use non-equilibrium models that do not depend on any of the Hardy-Weinberg assumptions [15]. That said, the enduring appeal of the Hardy-Weinberg model speaks to the ideological potency of essentialism and traditionalism.

Essentialism and traditionalism in statistics

There is an old saying that mathematics brought rigor to economics, but also mortis.7 We might say the same thing of statistics.

Now, on the face of it, this accusation seems unfair, since statistics is mathematics. But the fact is that statistics was developed as a mathematical tool for scientists — a tool to help judge a hypothesis. Above all else, scientists want to know if their hypothesis is ‘correct’. The problem, though, is that making this judgment is inherently subjective. The evidence for (or against) a hypothesis is always contingent and incomplete. And so scientists must make a judgment call.

The purpose of statistics is to put numbers to this judgment call by quantifying uncertainty. It’s a useful exercise, but not one that removes subjectivity. Suppose that I find there is a 90% probability that a hypothesis is correct. I still need to make a judgment call about how to proceed. In other words, statistics can aid decisions, but cannot make them for us.

Unfortunately, in many corners of science, statistical tools have become reified as the thing they were never designed to be: a decision-making algorithm. Scientists apply the tools of (standard) statistics as though they were an essential truth, a ritualistic algorithm for judging a hypothesis.

Let’s have a look at the ritual. Suppose we have a coin, and we want to know if it is ‘balanced’, meaning the ‘innate’ probability of heads or tails is 50:50. We’ve put scare quotes around the word ‘innate’ here because it is an ‘essence’. You cannot observe the probability of heads or tails for a single toss. It is a mathematical abstraction that is forever beyond our grasp. In the real world, all that we can see is the long-run behavior of the coin.

In standard (‘frequentist’) statistics, we assume that the long-run frequency of the coin gives insight into its ‘innate’ probability of heads or tails. Here’s how the algorithm works. First, assume that the coin is ‘balanced’. Second, calculate the probability that a balanced coin would give the observed frequency of heads (or greater). Third, compare this probability (the ‘p-value’) to a chosen critical threshold; if it falls below the threshold, reject your hypothesis that the coin is balanced.
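
Here is the ritual as a minimal sketch in Python (using SciPy). The numbers are hypothetical, chosen to roughly match the scenario discussed just below: a few thousand tosses with slightly more heads than tails.

```python
# Sketch of the frequentist ritual for a coin, with hypothetical numbers:
# 2,000 tosses, 1,052 heads (slightly more heads than tails).
from scipy.stats import binom

n_tosses, n_heads = 2000, 1052

# Step 1: assume the coin is balanced, so P(heads) = 0.5 (the null hypothesis).
# Step 2: probability that a balanced coin gives this many heads or more.
p_value = binom.sf(n_heads - 1, n_tosses, 0.5)   # P(X >= n_heads), roughly 1 in 100

# Step 3: compare the p-value to an arbitrary 'critical' threshold.
alpha = 0.05
print(f"p-value = {p_value:.4f}")
print("reject 'balanced'" if p_value < alpha else "fail to reject 'balanced'")
```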

If you’ve ever taken an introductory stats course, you know this algorithm by heart. Now here’s the problem. The algorithm seems to remove the subjective element from evaluating a hypothesis. And yet it does not. Notice that after we compute our statistic (the p-value), we are still left with uncertainty. Suppose, for instance, that we tossed a coin several thousand times and got slightly more heads than tails. Suppose that the probability of a balanced coin giving this proportion of heads (or greater) is about 1 in 100. Given this information, we must still decide between two scenarios:

  1. The coin is balanced, but we have witnessed an improbable outcome.
  2. The coin is unbalanced, and we have witnessed a probable outcome.

The choice is a subjective one. To be fair, most statisticians will admit as much, noting that the choice of a ‘critical threshold’ (below which we reject the null hypothesis) is arbitrary. They will also admit that in many real-world scenarios, the assumptions behind the calculation of p-values are violated, in which case the computed p-value is meaningless.

Unfortunately, when null-hypothesis tests get applied by scientists (especially social scientists) these problems are forgotten. Instead, scientists appeal to tradition. Never mind that the threshold for ‘statistical significance’ is arbitrary. The traditional value is 5%. If your p-value is less than this magic value, your results are ‘significant’.

With this tradition in hand, we get a seemingly objective procedure for discovering the ‘truth’. But in reality, it is an essence — a series of “ad hoc algorithms that maintain the facade of scientific objectivity” [17]. These algorithms have had a devastating effect on science, since they basically give a recipe for how to get a ‘significant’ result:

  1. Play with your data until you get a p-value less than 5%.
  2. Publish.
  3. Get cited.
  4. Get tenure.
  5. (Never check if your results are valid.)

Fortunately, there is a growing movement to reform hypothesis testing. One option is to pre-register experiments to remove researchers’ ability to game statistics. Another option is to lower the ‘traditional’ level of statistical significance.

While we welcome both changes, we note that they do not solve the fundamental problem, which is that judging a hypothesis is always subjective. For that reason, we favor a transition to Bayesian statistics. We won’t dive into the details here, but in short, Bayesian statistics is up front about the subjective element of judging a hypothesis. In fact, when you use the Bayesian method, this subjectivity gets baked into the calculations (in what Bayesians call a ‘prior probability’).
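
For a flavor of what this looks like in practice, here is a minimal Bayesian sketch for the same hypothetical coin data used above. The prior over the coin’s bias is a Beta distribution (a subjective choice, stated explicitly); the posterior is again a Beta distribution, from which we can read off the probability that the coin favors heads.

```python
# Sketch of a Bayesian treatment of the same hypothetical coin data.
from scipy.stats import beta

n_tosses, n_heads = 2000, 1052
n_tails = n_tosses - n_heads

# Subjective prior over the coin's bias theta = P(heads).
# Beta(1, 1) is flat; a sceptic might instead concentrate the prior around 0.5.
a_prior, b_prior = 1, 1

# With a Beta prior and binomial data, the posterior is Beta(a + heads, b + tails).
a_post = a_prior + n_heads
b_post = b_prior + n_tails

# Posterior probability that the coin favors heads (theta > 0.5).
p_favors_heads = beta.sf(0.5, a_post, b_post)
print(f"P(theta > 0.5 | data) = {p_favors_heads:.3f}")
```

The subjectivity has not vanished; it now lives, visibly, in the choice of prior.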

Despite what we see as the clear advantages of Bayesian statistics, most students are still taught the ‘frequentist’ approach. Why? Because that is the tradition, which is taught as an essential truth.

Totems of Essentialism

In his satirical essay ‘Life Among the Econ’, Axel Leijonhufvud describes economics as if it were a ‘primitive’ society. The ‘Econ’, he observes, are a curious tribe with odd rituals and mysterious ‘totems’ that they worship. The most important ‘totem’ consists of “two carved sticks joined together in the middle somewhat in the form of a pair of scissors” [18]. Leijonhufvud is, of course, describing the intersecting supply and demand curves with ‘market equilibrium’ at their center.

We think that describing this model as a ‘totem’ is appropriate. The totem is not meant to describe reality. It is meant to define it. The ‘Econ’ do not test the totem of the equilibrium-seeking free market. They use it as a ritual to justify social behavior. When ‘reality’ gets in the way, Leijonhufvud observes, one of two things may happen:

Either he [an Econ] will accuse the member performing the ceremony of having failed to follow ritual in some detail or other, or else defend the man’s claim that the gold is there by arguing that the digging for it has not gone deep enough.

— Axel Leijonhufvud [18]

In other words, the totem is ‘true’, the evidence be damned.

While the economic model of the free market is the most brazen totem, there are many others in modern science — models that have been elevated to a ‘higher truth’. We’ve highlighted two such models here: the Hardy-Weinberg model of genetic equilibrium, and the frequentist method of statistical hypothesis testing.

Curiously, these three totems have a similar appearance, as shown below. All three are pleasingly symmetric, drawing the eye to the center.

Figure 1: Essentialist totems in economics, biology and statistics. Clockwise from top left: the neoclassical model of the equilibrium-seeking market, the Hardy-Weinberg model of genetic equilibrium, and the normal distribution — the ‘equilibrium’ behavior of infinitely many random samples.

On the face of it, this resemblance is odd, since the three models deal with unrelated topics. To the Platonist, it may seem that scientists have unearthed an ‘essential form’ to reality. What seems more likely to us, however, is that we have unearthed an aesthetic preference.

Many scientists believe that a good theory ought to be ‘beautiful’. But why should the laws of nature respect human aesthetics? If they always did, it would be astonishing. But as physicist Sabine Hossenfelder shows in her book Lost in Math, the appeal to aesthetics leads scientists astray more often than it leads them to the truth [19].

Appealing to aesthetics, then, is a bad way to do science. But it is a great way to ingrain an idea in the human psyche. Each year, millions of students are shown the central totem of economics — “two carved sticks joined together in the middle somewhat in the form of a pair of scissors”. The totem is pleasingly simple and symmetric — so much so that few students will ever forget it.

And that is the point.

Never mind that the totem of the equilibrium-seeking free market has no contact with reality. What matters is that the totem be memorable … easily imprinted on the psyche, easily imposed onto the world, and easily passed down to the next generation: an eternal truth to be promulgated without question.

When it comes to essentialist totems, economics is the low-hanging fruit. But as we’ve tried to demonstrate, essentialism and traditionalism thrive elsewhere in science. It is an old problem that cuts to the core of how humans think. As Plato’s enduring spell shows, all too often we find ideas more seductive than facts.



Notes

[Cover image: Raphael’s depiction of Plato, surrounded by a Lissajous curve.]

  1. Ryan Kyger is an independent researcher who cares deeply about the integrity of scientific research.↩
  2. Like so many famous quotes, there’s no evidence that Richard Feynman uttered these words. In 1991, Willis Harman attributed the ornithology phrase to an anonymous scientist. But the sentiment seems to have been borrowed from an earlier (1974) remark about aesthetics:

    … aesthetics is to artists what ornithology is to birds.

    ↩

  3. On the link between essentialism and racism Laurie Wastell writes:

    Sadly, millions of years of evolution have made humans very good at being tribal; we have an in-group and an out-group, and we are all too adept at convincing ourselves that the out-group is inherently evil, dangerous or other. Essentialist thinking facilitates this hugely because it encourages us to see people as representing an abstract idea associated with their group rather than as an individual human …

    On that front, researchers have found that essentialist attitudes about race correlate strongly with explicit prejudice [1,2]. Our guess is that essentialism goes beyond explicit racism, and is actually a key part of all hierarchical class systems. When one class rules another, the ruling class is inevitably endowed with an imaginary set of superior traits. And the lower class is endowed with an imaginary set of inferior traits.↩

  4. Summarizing this Enlightenment skepticism of ‘pure reason’, Karl Popper writes:

    … pure speculation or reason, whenever it ventures into a field in which it cannot possibly be checked by experience, is liable to get involved in contradictions or ‘antinomies’ and to produce what [Kant] unambiguously described as ‘mere fancies’; ‘nonsense’; ‘illusions’; ‘a sterile dogmatism’; and ‘a superficial pretension to the knowledge of everything’. [3]

    ↩

  5. For an accessible introduction to the problems with the neoclassical theory of free markets, we recommend Jonathan Nitzan’s video Neoclassical Political Economy: Skating on Thin Ice.↩
  6. It’s worth noting the question that led to the Hardy-Weinberg model. Simple intuition suggests that dominant alleles (like those for brown eyes) should drive recessive alleles (like those for blue eyes) to extinction. If this intuition were correct, recessive alleles should never persist in a population. And yet in the real world they do. Why? The Hardy-Weinberg model demonstrates that under certain circumstances, this intuition is incorrect: recessive alleles can persist indefinitely.↩
  7. The rigor-mortis quotation is attributed, at various times, to both the ecological economist Kenneth Boulding and to the economic historian Robert Heilbroner. We can find no written record for Boulding’s statement. Heilbroner, on the other hand, said the following in 1979: “the prestige accorded to mathematics in economics has given it rigor, but, alas, also mortis” [16].↩

References

[1] Mandalaywala TM, Amodio DM, Rhodes M. Essentialism promotes racial prejudice by increasing endorsement of social hierarchies. Social Psychological and Personality Science 2018; 9: 461–9.

[2] Chen JM, Ratliff KA. Psychological essentialism predicts intergroup bias. Social Cognition 2018; 36: 301–23.

[3] Popper KR. The open society and its enemies. vol. 119. Princeton University Press; 2020.

[4] Locke J. An essay concerning human understanding. Kay & Troutman; 1847.

[5] Hume D. An enquiry concerning human understanding. Routledge; 2016.

[6] Kant I. Critique of pure reason (1781). In: Modern classical philosophers. Cambridge, MA: Houghton Mifflin; 1908, pp. 370–456.

[7] Keen S. Debunking economics: The naked emperor of the social sciences. New York: Zed Books; 2001.

[8] Clark JB. The distribution of wealth. New York: Macmillan; 1899.

[9] Friedman M. The role of monetary policy. In: Essential readings in economics. Springer; 1995, pp. 215–31.

[10] Levinovitz AJ. The new astrology. Aeon Magazine 2016.

[11] Edwards A. G. H. Hardy (1908) and Hardy–Weinberg equilibrium. Genetics 2008; 179: 1143–50.

[12] Abramovs N, Brass A, Tassabehji M. Hardy-Weinberg equilibrium in the large scale genomic sequencing era. Frontiers in Genetics 2020; 11: 210.

[13] Friedman M. Essays in positive economics. Chicago: University of Chicago Press; 1953.

[14] Blackford G. On the pseudo-scientific nature of Friedman’s as if methodology. Real-World Economics 2016.

[15] Brandvain Y, Wright SI. The limits of natural selection in a nonequilibrium world. Trends in Genetics 2016; 32: 201–10.

[16] Heilbroner RL. Modern economics as a chapter in the history of economic thought. History of Political Economy 1979; 11: 192–8.

[17] Diamond GA, Kaul S. Prior convictions: Bayesian approaches to the analysis and interpretation of clinical megatrials. Journal of the American College of Cardiology 2004; 43: 1929–39.

[18] Leijonhufvud A. Life among the econ. Economic Inquiry 1973; 11: 327–37.

[19] Hossenfelder S. Lost in math: How beauty leads physics astray. Hachette UK; 2018.