Forum Replies Created
March 12, 2023 at 12:14 pm in reply to: Population decline and the disappearance of nationhood #249475
Timely that you bring this issue up, Scott. I’m currently reading Richerson and Boyd’s book on cultural evolution, ‘Not By Genes Alone’. Towards the end of the book, they bring up the idea of cultural traits that are ‘maladaptations’ in a Darwinian sense. In other words, traits that lead to a reduction in offspring. In those terms, Western liberalism constitutes a ‘maladaptation’. Obviously, if the trend continued indefinitely, humans would go extinct.
The irony is that the trend comes as the human species literally dominates the Earth, with populations and rates of material consumption that simply have no precedent. I, for one, think low fertility is the least of our worries. If we manage to reduce our population to a sustainable level, I’m sure forms of culture will emerge that keep reproduction stable. Anyway, the fertility decline is exactly what we need right now.
Yes, we had a bot attack yesterday. I deleted the account, but not before the spammer created about 150 new topics. What a pain in the butt.
Hi Scott,
Thanks for the comments. Glad to know people are interested in this research. Some thoughts.
Christianity is not an ideology, it is a religion. One might say: religion > culture > ideology.
I’ve never been one to worry much about definitions. But I would probably put those words in a different order, at least in my understanding of human behavior.
Start with culture. It is everything that we do … the entire corpus of ideas and behaviors that constitute a society. Cultures always have an ideological component (or many ideological components). Historically, religion (i.e. belief in a supernatural God) was the most important ideology. Today, the dominant ideology is secular.
To your main point, yes, counting words is a very crude way of capturing an ideology. Counting words is not a substitute for actually reading and engaging with ideas. That said, we can do a far more expansive analysis with word frequency than would be possible by reading the actual literature. I mean something like 10 orders of magnitude more data. I think there’s something to be said for that breadth.
Am I overstating or misreading if I seem to observe the thesis weighs the words in question equally?
Yes and no. To define jargon, I weight words by their relative frequency in the text corpus. To count the frequency of this jargon in mainstream English, I just sum the relative frequencies of these jargon words. Different jargon words contribute differently to the sum, based on their own frequency.
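Here’s a minimal sketch in R of how the sum works (the words and frequencies are hypothetical, not my actual data):

# hypothetical jargon words drawn from an economics corpus
jargon_words <- c("equilibrium", "utility", "marginal")

# hypothetical relative frequencies of those words in mainstream English
mainstream_freq <- c(equilibrium = 2e-6, utility = 5e-6, marginal = 3e-6)

# jargon frequency in mainstream English: the sum of the jargon words'
# relative frequencies, so more common jargon words contribute more
sum(mainstream_freq[jargon_words])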
Note that I also use a different measure called the language similarity index. This measures the similarity of the entire corpus. So it absolutely does not weight each word equally.
Is it possible for language to be mere data? If I survey the contents of my room and notice many books and one wife must I conclude the books are more valuable?
I’m not sure I understand the question. Anything can be ‘mere data’. It’s what we do with the data that gives it meaning. The point with word frequency is that it reveals what we’re talking about. From there, we can infer values … but with obvious difficulties. For example, I might talk a great deal about wars because I abhor them, not because I like them.
The Church (and hence Christianity) has had much more to say than simply Biblical texts on a wide range of topics, including economics. I wonder why you limited your sample source to Biblical texts.
Very true. Likewise, economists have much to say other than what is in undergrad textbooks. But we have to start somewhere. I don’t think anyone would disagree that the Bible is the core of the Christian canon, or that econ textbooks are the core of the economics canon. If we want to do more nuanced analysis, we can select different parts of the canon.
On that front, something I’ve been meaning to do is digitize the Real-World Economics Review catalogue and see how the frequency of heterodox jargon has changed over time.
Very interesting research, James. Here are some possible ways to measure hype that would work with negative values:
Nomenclature:
prediction = predicted EPS
actual = actual EPS
abs = absolute value

Hype as the log of the absolute value of the prediction error:
hype = log(abs(prediction - actual))

Same thing, but as a fraction of the absolute value of the actual EPS:
hype = log(abs(prediction - actual) / abs(actual))
With these definitions, ‘hype’ becomes a general measure of prediction error, which is not necessarily what you want. But as Scott notes, prior to taking the absolute values, you could record the sign of (prediction – actual), and then add it back after taking the log. That way, you distinguish between pessimism and hype and also constrain the spread in the measurement.
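For what it’s worth, here is a minimal sketch in R of one way to implement the signed version (my own variant, not code from James’s analysis). I use log1p so the logged magnitude is never negative, which keeps the restored sign meaningful:

# signed hype: log the magnitude of the relative prediction error,
# then restore the sign of (prediction - actual)
hype <- function(prediction, actual) {
  error <- (prediction - actual) / abs(actual)
  sign(error) * log1p(abs(error))
}

# hypothetical EPS values: the first forecast overshoots (hype),
# the second undershoots (pessimism)
hype(prediction = c(6, 1.5), actual = c(2, 2))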
To reiterate, I think the language of ‘vectors’ is unnecessary. Debreu is just doing basic accounting:
z = an itemized list of the things I’ve sold
p = a price list of these items

The value of my sale is the sum of the item quantities times the item prices. Really basic stuff dressed up in pompous verbiage to appear profound.
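In fact, the whole ‘theorem’ fits in three lines of R (hypothetical quantities and prices):

z <- c(apples = 3, bread = 2, milk = 1)        # quantities sold
p <- c(apples = 0.5, bread = 2.0, milk = 1.5)  # corresponding prices
sum(z * p)                                     # the value of the sale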
January 14, 2023 at 12:52 pm in reply to: CasP RG v. 1.00: Paul Feyerabend’s Against Method (Open Jan 01, 2023). #248840

Thanks for getting the ball rolling, Chris.
I read Feyerabend’s book with great interest. In the end, I find it both enlightening and infuriating.
Let’s start with the enlightening part. I found Feyerabend’s discussion of the theory behind empirical evidence to be fascinating. For example, I’d never considered the worldview that comes with perceiving the earth as a flat, unmoving plane. In this cosmology, there is a universal ‘up’, and objects fall in a straight line. All of the everyday evidence seems to support this view.
It was only when people thought hard about the motions of the planets that they had to rethink this view. In particular, it was the retrograde motion of planets that made little sense if the Earth was at rest. But if the Earth orbited the sun, the ‘common sense’ facts of daily life had to be re-interpreted using the law of inertia (an object in motion stays in motion unless acted on by an external force).
In Feyerabend’s opinion, Galileo simply invented the law of inertia to rescue his heliocentric view. Interpreted through the lens of Karl Popper, this kind of ‘ad hoc’ theorizing is bad — anti-scientific, even.
Here’s where I start to have problems. Feyerabend clearly revels in controversy, which is fine. But does he accurately represent the ideas he’s criticizing? I’m not so sure.
For Popper, there was no ‘method’ for the creation of ideas, hypotheses and theories. They could come from anywhere. That said, once the idea has been formulated, it must be ‘consistent with the evidence’, otherwise it is ‘falsified’.
(I use scare quotes here because Feyerabend is correct to question what it means to be ‘consistent with the evidence’ and how we know a theory has been ‘falsified’. More on that later.)
So in a Popperian sense, a more charitable (to Popper) interpretation of Galileo’s actions was that he was engaging in theory building. He had a hypothesis that the Earth orbited the sun. To make that hypothesis consistent with the observations of daily life (objects fall down), he needed to add a secondary hypothesis: objects in motion tend to stay in motion … meaning unless there’s a difference in relative velocity, you can’t tell if something’s moving.
Now Feyerabend is correct to point out that when two theories have very different world views, there is much controversy about how to test each theory, and what counts as ‘conflicting’ evidence.
On this front, gravity continues to be a center of controversy. You’ll frequently hear, for example, that there is no evidence that contradicts Einstein’s theory of gravity (general relativity). But this claim is patently untrue. The fact is that everywhere we look in the universe, general relativity fails. When we look inside galaxies, the visible mass is moving far faster than general relativity allows. And yet very few people think this is evidence ‘against’ general relativity. Why? Because they assume that Einstein was correct. As such, the data gets interpreted as evidence for missing mass: dark matter.
The astronomer and astrophysicist Stacy McGaugh writes consistently great material on this controversy. I highly recommend that you read his blog, Triton Station.
In short, McGaugh notes that the whole dark-matter odyssey amounts to an a priori assumption that general relativity is correct. A better interpretation of the evidence, however, is that there is an ‘acceleration discrepancy’ between what general relativity predicts and what is observed. This discrepancy can mean one of two things:
- The discrepancy is caused by missing mass
- General relativity is wrong
Largely for sociological reasons, most scientists take the first option.
Anything goes
My other big qualm with Against Method is Feyerabend’s catchphrase ‘anything goes’.
Let’s review how he gets there. Feyerabend looks at our theories of science and concludes that at various times, famous scientists (particularly Galileo) have disobeyed all of the supposed ‘rules’ of the scientific method. Since these rules don’t hold, anything goes.
This reasoning is superficially convincing. But closer investigation reveals that it is a complete non sequitur. (I read quite a few reviews of Feyerabend, and this is the most common criticism.)
We can more easily see the problem by applying the same reasoning to theories of physics. It is a well-known fact — acknowledged by virtually all physicists — that there is no coherent theory that explains everything about nature. At present, all of our theories are contradicted by some form of evidence. So when it comes to doing physics, anything goes.
Seems silly, right?
It is equally silly when it comes to the ‘scientific method’. As far as I can tell, most philosophers agree that we have no theory that explains everything that scientists do. (It would be astonishing if we did … right up there with psychohistory.)
But so what? Nothing in science is about absolutes. There are always gray areas. The point, though, is to find patterns that tend to hold. Clearly some approaches to science work better than others. Does Feyerabend really believe that testing a drug through prayer is as good as testing it through a double-blind clinical trial? (I don’t think he does, but his penchant for controversy leaves it an open question.)
Now, I take Feyerabend’s point that science is healthy when there is a plurality of ideas. On that front, the worst thing we can do is kill off an idea because it contradicts our idea of the ‘scientific method’. But are there any examples of this actually happening?
My reading of Popper, for example, was that he was trying to provide a way to kill off long-lived ideas that refused to go away. Religion, for example, is usually framed so that it is compatible with all evidence (that’s the convenient thing about an all-powerful God). In response to these long-lived ideologies, Popper wanted a basic criterion for a ‘scientific’ theory. In simple terms, the theory had to put forward criteria by which it could be wrong. The best option was to make a clear, a priori prediction that could be ‘falsified’.
I, for one, am convinced that this is a basic criterion for a good theory. But it’s just the start. In general, Popper was naive about how difficult it is to convincingly ‘falsify’ a theory. Sometimes it takes centuries and the work of thousands of scientists.
To wrap up, I think Feyerabend could have written the same book, but with the following catchline: “All of our theories of the scientific method are wrong; some are probably more useful than others.”
December 23, 2022 at 9:14 am in reply to: Effective Discount Rate or the Reciprocal of the Trailing P/E #248781

The effective discount rate is, indeed, the reciprocal of the trailing P/E ratio. There is nothing particularly inventive or insightful about it. For that reason alone, I question its value to CasP even pedagogically.
I’m not sure I follow this logic. Yes, what I call the ‘effective discount rate’ is the inverse of the trailing P/E ratio. But I don’t follow how this means there is nothing insightful about it. What is interesting is that capitalists agree to capitalize by relating prices to earnings. There is no objective reason they must do so. It is an ideological choice.
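The arithmetic itself is trivial. Here it is in R (hypothetical numbers):

price    <- 150    # share price
earnings <- 10     # trailing earnings per share
price / earnings   # trailing P/E ratio: 15
earnings / price   # effective discount rate: 1 / (P/E), here about 0.067

The interesting question is not the arithmetic, but why capitalists organize their behavior around it.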
Now, educated capitalists know that they should care about price-earnings ratios. So my analysis doesn’t tell them anything new. But that’s not the point. As I see it, what CasP does is similar to what anthropologists do when they study foreign cultures. These cultures clearly know what they believe — for example that a king’s power stems from god — but the anthropologist looks at this ideology from a structural perspective, rather than simply accepting it as ‘the way things are’.
The effective discount rate has no value to CasP analytically because it cannot be correlated to the expected rates of return and, by definition, they must be identical. If they could be correlated, they would be correlated, but they have not been, and the data you adduce in 1. and 2. don’t contradict that conclusion (which is why I find them to be a non-responsive deflection).
I think the main argument, in CasP, is that capitalists claim to look to the future but, in practice, look to the past. So there is herd behavior where capitalists agree — with varying degrees of uniformity — that future earnings will be similar to the recent past.
Of course, this belief system is a rich topic for inquiry that needs a lot more research. For example, I’d like to see sociologists get involved and do qualitative analysis of capitalists’ actual expectations. It would be interesting to see how these beliefs relate to things like the systemic fear index.
My reading of your comments is that there is much more research to be done. If a capitalist claims to be using capitalization formula x, then we should take them at their word and study how this formula is applied.
To wrap things up, I think what Jonathan is getting at is that capital as power research starts with a very general hypothesis (capitalization is an act of power), but then typically asks very specific empirical questions. Speaking for myself, I find your comments quite interesting. But perhaps you could simplify them into specific research questions that we might answer with data.
Thanks, Rowan, for bringing my attention to this paper. I read it with interest.
First off, I enjoyed the quips from Feynman about adding temperature. Unfortunately, I don’t agree with the author’s criticism of ‘price vectors’. But I’m getting ahead of myself.
Let me start by being clear that I think General Equilibrium Theory is garbage, for many, many reasons. That said, this paper focuses on a very specific criticism, which is that GET proposes that prices can be treated as ‘vectors’, when the author claims this is untrue.
On that front, I’ve always found economists’ use of the word ‘vector’ to be utterly pretentious. A price vector is nothing but a list of prices. But clearly economists want it to sound fancy, so they bring in the language of vector physics.
Now to the main claim in the article, which is that price lists do not satisfy the property of ‘vectors’ because they cannot be added. George writes:
Now it is meaningless to add the price of a good on Monday to the price of the good on Tuesday.
I think this is wrong. When I calculate my monthly living expenses, do I not add the prices of commodities purchased at different times? And doesn’t everyone consider the resulting total valid?
In other words, George’s comparison to temperature is incorrect. No, we cannot add temperatures. Yes, we can add prices … we do it all the time.
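If you want the point in code, here it is (hypothetical prices):

# prices paid at different times
monday_bread  <- 2.50
tuesday_bread <- 2.75
rent          <- 900.00

# a perfectly meaningful sum of prices
monday_bread + tuesday_bread + rent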
None of this is to say that general equilibrium theory has any merit. It does not. But I see no problem in treating prices as an n-dimensional list and then doing mathematics with it. The problem with general equilibrium lies in the name itself: the supposition that you can explain prices in terms of market equilibrium. None of the steps work, as Jonathan Nitzan elegantly shows in this video.
I’m game.
David Graeber and David Wengrow’s The Dawn of Everything is on the top of my list.
Thanks, James, for digging into the details of Villarreal’s model. I can’t comment on this model specifically, as I haven’t put in the time to understand it. (Thanks for doing that, by the way.)
However, I will comment on models in general. First, when someone publishes a model, you have to expect that the results will be ‘good’, meaning it reproduces what they designed it to reproduce. If the model didn’t do that, the person in question wouldn’t publish their results. So start with the assumption that all published models will have fairly small error.
Now, if you believe Milton Friedman, you should believe these models precisely because the error is small. Never mind the assumptions, he said, they don’t matter. Only ‘predictions’ matter.
Good scientists know this is bullshit. Everything is in the assumptions. So when you see that a model gives ‘good’ results, you have to tear apart its assumptions. If those are bad, then the model is bad.
Also, you need to distinguish between ‘tuning’ and ‘testing’. Models have parameters that we can tune to fit data. That’s fine. But it’s not a test of the model. To test the model, you take these tuned parameters and apply them to another dataset.
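Here’s a minimal sketch in R of the distinction, using simulated data (so the 'true' model is known):

set.seed(1)

# simulated training and test data from the same process
x_train <- runif(100); y_train <- 2 * x_train + rnorm(100, sd = 0.1)
x_test  <- runif(100); y_test  <- 2 * x_test  + rnorm(100, sd = 0.1)

# 'tuning': fit the model's parameters to the training data
fit <- lm(y_train ~ x_train)

# 'testing': apply the tuned parameters to data the model has never seen
pred <- predict(fit, newdata = data.frame(x_train = x_test))
mean((pred - y_test)^2)   # out-of-sample error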
Anyway, I agree with your conclusion. Building and explaining models so that other people can understand and use them takes an excruciating amount of work. Which is why I’m not commenting on the details of Villarreal’s model … I’m busy doing modeling of my own!
I think the qualities of a good researcher are simply curiosity and a willingness to learn and explore (and to be confused). If you have those qualities, the other skills (mostly learning how to manipulate data, find sources, etc.) can be learned.
As for what makes for success, it depends on how you define it. If you can clearly communicate your research, people will be interested in what you do. That does not mean, though, that you will be a successful academic.
The unfortunate reality, especially in political economy, is that academic success has largely to do with the people you manage to impress and the ideas you espouse. If you go against the orthodoxy, your career options are limited.
Hi Adam,
I think the use of public firms to categorize dominant capital is mostly a matter of convenience. The capitalization of these firms is well known and captured by various databases. The capitalization of private firms is much harder to obtain, especially en masse.
Hi Rowan,
It seems as though the epidemiology experts mostly accept your criticism of R0. Here’s what the CDC has to say about it:
In the hands of experts, R0 can be a valuable concept. However, the process of defining, calculating, interpreting, and applying R0 is far from straightforward. The simplicity of an R0 value and its corresponding interpretation in relation to infectious disease dynamics masks the complicated nature of this metric. Although R0 is a biological reality, this value is usually estimated with complex mathematical models developed using various sets of assumptions. The interpretation of R0 estimates derived from different models requires an understanding of the models’ structures, inputs, and interactions.
So basically, R0 is a model parameter, the usefulness of which depends on how much you trust the underlying model. Same goes for basically any parameter in observational science.
The concept itself seems to me to be sound. Some viruses are more infectious than others. We ought to be able to measure this difference. The trouble is that doing so directly — by setting up an experiment with controlled conditions in which you try to infect people — is unethical and will never be done. So we’re left estimating infectiousness from the observed spread of a disease, which is affected by many, many factors that scientists must model.
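To see what ‘model-based’ means in practice, here is a minimal sketch of the textbook SIR model in R (hypothetical parameter values). In this model, R0 is simply beta / gamma, so any estimate of R0 inherits the model’s assumptions:

beta  <- 0.3    # transmission rate (hypothetical)
gamma <- 0.1    # recovery rate (hypothetical)
beta / gamma    # R0 = 3 in this model

# crude daily-time-step simulation of the epidemic
S <- 0.999; I <- 0.001; R <- 0
for (day in 1:100) {
  new_infections <- beta * S * I
  new_recoveries <- gamma * I
  S <- S - new_infections
  I <- I + new_infections - new_recoveries
  R <- R + new_recoveries
}
c(S = S, I = I, R = R)   # final shares susceptible, infected, recovered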
In other words, any model-based estimate of R0 comes with a host of caveats, which non-experts rarely discuss. Still, I’m not sure I understand your bigger point. At this point, COVID is far too widespread to ever be eliminated. So we are stuck living with it.
On that front, as a scientist I am interested to see how the evolution of the virus plays out. From the theory of multilevel selection, we get a clear prediction. What is best for an individual virus is to replicate as fast as possible. However, that isn’t good for the virus as a group, since the faster you replicate, the faster you kill your host. And when the host is dead, you die.
So the thinking is that diseases should evolve towards being more infectious — spreading between hosts easily — but less deadly. The ‘equilibrium’ point, though, depends on the mode of transmission. Waterborne diseases like cholera can spread long after the host is dead, so there is little pressure to reduce deadliness. But with airborne diseases, transmission requires close contact with a living host. So these diseases should become more infectious and less deadly with time. That seems to be exactly what has happened with COVID (regardless of how accurately we know R0).
If the theory holds, then long term, we expect COVID to fade into a background virus like the common cold or the flu. (I’m told that some cold viruses are actually coronaviruses.) I guess we shall see what happens.
There are accounting judgment calls and measurement issues lurking behind this (I’d be delighted to share what I know of those), so it’s not some perfect picture handed down from Mt. Horeb.
Please share, Steve! Also, I know that Troy often remarks on the importance of studying accounting if you want to understand capitalism. Curious that it’s rarely taught in economics. It would be great to have an accessible resource that outlines all the different ways of defining/measuring income.
About ‘unearned’ versus ‘earned’ income. If the goal is to spark a political movement, then I think that language is appropriate. Certainly many people on the left will agree that capital gains on property are ‘unearned’. Hell, even mainstream economists think capital gains are ‘unproductive’, which is why they keep them out of the national accounts.
However, for the scientific study of capitalism, I’m hesitant to use that language, largely because it is so easy to criticize.
Actually, I’m not even learning R generally. I’m focusing on the tidyverse.
That’s a good point, Troy. Programming languages are just platforms for sharing code in a common syntax. So coding in R or Python or C++ can mean many different things, depending on the libraries you are using. That’s something I appreciate more and more.
When analyzing data in R, I spend most of my time using data.table. And of course, like you, I’m grateful that Hadley Wickham has contributed so much to the R ecosystem.
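For anyone curious, here’s the flavor of a typical data.table operation (hypothetical data):

library(data.table)

dt <- data.table(
  sector = c("energy", "energy", "tech", "tech"),
  profit = c(10, 12, 30, 25)
)

# fast grouped aggregation: mean profit by sector
dt[, .(mean_profit = mean(profit)), by = sector]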
When I code in C++, I’ve found the Armadillo library to be extremely helpful.
Funny that you mention the tech billionaires. Armadillo is a heavy-duty linear algebra library, and I’m told that big tech uses it extensively. But it was written for free by … academics.