Evidence produced within quantitative disciplines like economics and finance carries an aura of gospel. The numbers, models, and forecasts we see in economic reports and market analyses seem certain, authoritative, and unarguable.
Built on large data sets analyzed with widely accepted theories and tools, economic and financial evidence has become hugely influential in governance and business, so much so that more qualitative approaches have been sidelined.
Even political economy, the original economics, has been pushed aside in favor of what’s now called ‘evidence-based decision-making’. The presumption is that numerical data is the only solid information, and that the analytical tools used in economic and market analysis are reliable.
Of course, as we know now, this faith in economic evidence can be dangerous. As markets crashed around the world during the global financial crisis of 2007–2008, confidence in all kinds of quantitative modeling crashed with them. It became evident that society’s shepherds were not to be found in the financial industry. Economics and finance need to be more skeptical about their evidence if they are to serve society well. What can be done?
Evidence-Based Decision Making?
The problem is not only human greed. Rather, as Nassim Nicholas Taleb points out in his book Fooled by Randomness, few people understand the limits of the statistical models that they create. When people place unwarranted faith in their models, they can end up making worse decisions than if they had used no model at all. For example, according to Taleb, many models do not factor in unlikely events, preferring to generate averages and trends that exclude the real-world effects of anomalous or outlying events. When these “black swans” do eventually occur, financial predictions fail and people lose money that they are not prepared to lose.
For example, many Dutch pensioners saw their expected retirement funds plummet as a result of the global financial crisis of 2008. This created a public outcry, since consumers had been led to believe that their funds were stable and reliable. Since then, Dutch pension legislation requires companies to be much more open with customers about investment risks. However, because the pension providers themselves seem not to have understood or accounted for this risk, the problem cannot be solved through disclosure to consumers alone. After all, the crisis caught financial professionals and economists around the world by surprise.
When it comes to modeling, the road to crisis is paved with good intentions as well as bad ones. No matter how complete and accurate your data, you cannot turn it into sound evidence unless you understand how reliability, risk, biases, and other contingencies (even unknown ones) affect a model’s accuracy.
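To make this concrete, here is a minimal, purely illustrative sketch (my own, not drawn from Taleb or any other work cited here) of how a thin-tailed model can hide tail risk. The distributions and numbers are hypothetical: an analyst fits a normal distribution to returns that are in fact fat-tailed, and the model then understates the probability of an extreme loss by orders of magnitude.

```python
# Purely illustrative sketch: hypothetical returns, not real market data
# or any cited author's model.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)

# Hypothetical "true" daily returns: fat-tailed (Student's t, 3 degrees of
# freedom), a common stand-in for markets that occasionally move violently.
true_returns = rng.standard_t(df=3, size=100_000) * 0.01

# An analyst who assumes normality keeps only the mean and standard
# deviation, discarding all information about the tails.
mu, sigma = true_returns.mean(), true_returns.std()

# How likely is a 5-sigma daily loss under each view of the world?
threshold = mu - 5 * sigma
empirical_prob = (true_returns < threshold).mean()     # what actually happens
model_prob = norm.cdf(threshold, loc=mu, scale=sigma)  # what the model predicts

print(f"Observed frequency of a 5-sigma loss:  {empirical_prob:.1e}")
print(f"Normal-model probability of that loss: {model_prob:.1e}")
# The thin-tailed model typically understates the chance of such a loss by
# several orders of magnitude: the "black swan" is simply invisible to it.
```

The specific numbers are beside the point; what matters is that the modeling assumption itself decides which risks are visible at all.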
If one problem is a shallow understanding of the origins, qualities, and limits of evidence itself, another problem is that the amount of evidence-based decision making actually taking place is grossly overestimated. Professionals working in economics and finance are just as human as the rest of us. Most business and government decisions, no matter how data-driven the sector, are determined by multiple factors: data, politics, intuition, practicality, and bias, to name a few.
Several ethnographers have documented the complex influences on behavior and decision making in finance. Caitlin Zaloom, in Out of the Pits: Traders and Technology from Chicago to London (2006), describes the psychological and demographic features of traders that made them successful on the old-style, frenetic trading floors. Success was far less about crunching the numbers and far more dependent on a trader understanding market shifts intuitively, being willing to take risks, and performing a competitive masculinity. The “evidence” they depended on was embodied and subjective.
At the company level, Karen Ho, in her book Liquidated: An Ethnography of Wall Street (2009), discusses at length the attitudes and language Wall Street bankers use to describe their craft. Far from presenting a sober and balanced view of their place in the world, bankers’ attitudes are competitive at best, and can verge on full-blown megalomania. As one banker told Ho: “Basically, nothing gets done these days on a large-scale basis without Wall Street approving it. You can’t build a plant in China. You can’t build a highway in China. You can’t build a highway in Brazil” (investment banker, cited in Ho 2005:71).
Ho’s work suggests that much of what makes “Wall Street” (which no longer exists as such) competitive is not its greater computing power or modeling, but its reputation and ability to maintain dominance. The attitudes of Wall Street professionals, especially their belief in their superiority, compel them to stay ahead of the pack.
Notably, these same gung-ho attitudes were part of what brought markets down in the financial crisis. In its aftermath, journalist and anthropologist Gillian Tett (2009) investigated how attitudes drove the development of disastrous financial products. She contends that the inventors of the problematic financial tools believed they were making something that would help society by diffusing risk. Their enthusiasm, whether driven by doing good or by making a lot of money, blinded them to the real risks involved, even though several people had sounded the alarm on the basis of solid evidence.
All of this could be taken as an argument for more quantitative evidence, not less. If people are so biased, so prone to make mistakes, shouldn’t we take humans out of the equation as much as possible and let machines do the job?
Plural Sources of Evidence
Some people place high hopes in the ability of so-called artificial intelligence to make up for humanity’s deficiencies. They imagine AI will produce analyses and forecasts that are far more robust—and more socially friendly—than humans could ever make without it. There is good reason to doubt that this will prove possible.
In the meantime, there is no such thing as a decision made by a machine alone, or on the basis of purely quantitative data. The idea that quants pay no attention to context is somewhat overstated. Academics working in finance and business schools, for example, rely on ready-made data sets to do their analysis, but generally pay attention to the human activities that the data represent in order to make their analyses as reliable as possible.
Still, in their recent EPIC article, Tye Rattenbury and Dawn Nafus contend that collaborations between data scientists and ethnographers would produce far better results than either profession working alone.
The same could be said of economists and ethnographers. Even central banks—traditionally the home of conservative economics—depend on real-life observations. In his book Economy of Words: Communicative Imperatives in Central Banks (2013), Douglas Holmes writes about how monetary policy committees sometimes have to make decisions quickly, such as to avert a financial crisis. Lags in quantitative data collection mean that they cannot entirely depend upon it to guide their decisions. Instead, they consult with a spectrum of people in the economy, including shop owners, merchants, and distributors.
Central banks use this information to judge what direction the economy is moving in and make decisions accordingly. However, they refer to it as “anecdotal evidence,” downplaying its validity as data and its importance in decision-making. One could imagine that central banks’ evidence-gathering would improve if they reframed it as “qualitative data collection” and invested in professionals who are experts in rigorous qualitative and mixed methods.
This shift in attitudes is perhaps most visible in universities. Students are leading the charge to change how economics is taught, especially in Europe. Movements like Rethinking Economics contend that classic economic theory does not prepare them to work in the real world. They are demanding that more social science, behavioural economics, and alternative economics be incorporated into their courses. They understand that they need a broad familiarity with different ways of thinking about humans and economics if they want to produce analyses that make sense. If we are to create what Keith Hart terms a “human economy,” then we need models that account for human behaviour.
Good Evidence Requires Good Attitudes
What must we do to create better evidence? First and foremost we need to change our attitudes and educate ourselves in a broader range of disciplines.
Both quals and quants need to recognize that our ways of making evidence are limited, and our biases run deep. We tend to promote misconceptions about what our disciplines actually do and how society values them.
We need to take a vocational approach to our work, which means constantly challenging the frameworks we use to produce evidence and actively asking ourselves, “Is there another way to interpret this data? If I were looking at this data from a different perspective, what kind of evidence would I see?”
EPIC is just one of many communities where we can challenge our own and other people’s attitudes with respect to what evidence we will accept. I look forward to seeing what challenges emerge from these discussions.
References
Ho, Karen. 2009. Liquidated: An Ethnography of Wall Street. Duke University Press.
Holmes, Douglas R. 2013. Economy of Words: Communicative Imperatives in Central Banks. The University of Chicago Press.
Rattenbury, Tye and Dawn Nafus. 2018. Data Science and Ethnography: What’s Our Common Ground, and Why Does It Matter? EPIC, 7 March, https://www.epicpeople.org/data-science-and-ethnography/
Rethinking Economics, http://www.rethinkeconomics.org/
Taleb, Nassim Nicholas. 2001. Fooled by Randomness. Random House.
Taleb, Nassim Nicholas. 2007. The Black Swan: The Impact of the Highly Improbable. Random House.
Tett, Gillian. 2009. Fool’s Gold: How the Bold Dream of a Small Tribe at JP Morgan was Corrupted by Wall Street Greed and Unleashed a Catastrophe. Simon and Schuster.
Zaloom, Caitlin. 2006. Out of the Pits: Traders and Technology from Chicago to London. University of Chicago Press.
Erin B. Taylor is an economic anthropologist specializing in research on financial behaviour. She is the author of Materializing Poverty: How the Poor Transform Their Lives (2013, AltaMira). She’s also co-author of the Consumer Finance Research Methods Toolkit and many book chapters, journal articles, and working papers. Erin holds the positions of Principal Consultant at Canela Consulting and Senior Researcher at Holland FinTech.