Link to the original article: https://www.man.com/insights/ri-podcast-Andrew-King
Listen to Jason Mitchell discuss with Professor Andrew King, Boston University, about how sustainable investing is facing its own replication crisis.
Is sustainable investing facing its own replication crisis? Listen to Jason Mitchell discuss with Professor Andrew King, Boston University, about what the replication crisis represents for sustainable finance; how to think about the incentive problems affecting academic research; and why academic journals and the academic-practitioner community need to be more open to the replication and challenge of existing studies.
Recording date: 13 December 2024
Professor Andrew King
Professor Andrew King is the Questrom Professor in Management and Professor of Strategy and Innovation at Boston University’s Questrom School of Business. Professor King is a leading authority on environmental performance and innovation. His research established whether and when firms can find ways to profitably reduce their impact on the environment. His empirical tests of the efficacy of industry self-regulation helped change both private and public policy. His current research explores open-source innovation and knowledge sharing. Professor King has been a Marvin Bower Fellow at the Harvard Business School, an Aspen Institute Faculty Pioneer, and an Academy of Management Journal Best Paper Award winner.
Episode Transcript
Note: This transcription was generated using a combination of speech recognition software and human transcribers and may contain errors. As a part of this process, this transcript has also been edited for clarity.
Jason Mitchell:
I’m Jason Mitchell, head of Responsible Investment Research at Man Group. You’re listening to A Sustainable Future, a podcast about what we’re doing today to build a more sustainable world tomorrow.
Hi, everyone. Welcome back to the podcast, and I hope everyone is staying well. So, is sustainable finance facing its own replication crisis? That’s the big question of this episode. And for context, the ability to replicate research findings is generally regarded as the cornerstone of science. For example, if a medical paper publishes certain findings, you’d probably want to know that those claims could be replicated given what could be potentially at stake.
But what if you can’t replicate a paper’s findings? Look, I’ll be honest, this episode is a bit of a hornet’s nest. On one hand, the replication crisis feels like a small and myopic feature in the world of academia. On the other hand, it points to a larger crisis of confidence and even theory.
In other words, how do we know what we know if we can’t replicate it? Now, it’s important to highlight that the replication crisis isn’t unique to sustainable finance. Many other fields have already gone through their own replication crises, like medicine, economics, and behavioural psychology. Cam Harvey at Duke University is quoted as saying that half of published research findings in empirical finance are suspected to be false.
So imagine you’re an academic and you can’t replicate the findings of one of the most seminal papers in sustainable investing over the last decade. And the results you get from randomised data are as good as or better than the paper’s, and academic journals rebuff your replication work because, after all, they want to publish impactful papers with significant positive findings, not negative ones.
So what do you do? All of this is to say it’s great to have Professor Andrew King, one of the most prominent academics looking at replication in sustainable investing on the podcast. We discuss what the replication crisis represents in sustainable finance, how to think about the incentive problems impacting academic research, and why academic journals and the academic practitioner community need to be more open to the replication and challenge of existing studies.
Andy is professor of Strategy & Innovation at Boston University’s Questrom School of Business. Andy is a leading authority on environmental performance and innovation. His research established whether and when firms can find ways to profitably reduce their impact on the environment. His empirical tests of the efficacy of industry self-regulation helped change both private and public policy.
His current research explores open source innovation and knowledge sharing. Andy has been a Marvin Bower Fellow at the Harvard Business School, an Aspen Institute Faculty Pioneer, and an Academy of Management Journal Best Paper Award winner. Welcome to the podcast, Professor Andrew King. It’s great to have you here, and thank you for taking the time.
Andrew King:
Jason, great to be here.
Jason Mitchell:
Excellent. So Andy, we’ve definitely got a lot to talk about, but I want to start out with a little bit of reflection. In 2022, you co-authored an article with Ken Pucker titled ESG & Alpha: Sales or Substance, where you lead off with this statement, quote, “Managers of ESG investments create false hope, oversell outperformance, and contribute to the delay of long past due regulatory action,” end quote.
Now, a lot has certainly changed over the last three years. Since you wrote that, academics have called ESG dead. Some of the biggest institutional investors have disavowed it. Conservatives have politicised it. State attorneys general have tried to censor it. And ESG fund performance, generally speaking, has disappointed. And almost no one seems happy with the state of sustainable finance regulation. So what are you thinking now? Does your view still hold?
Andrew King:
So great question. So when Ken Pucker, who was the former COO of Timberland, the shoe company, and I wrote that article, ESG was riding high. It’s come down somewhat since then, both because of the attacks that you describe and because of the lower returns.
And so I think you’re right. It’s less perhaps impactful right now, but I would say it’s going to be back either in its current form or in a modified form. Over the last three decades, we’ve seen this happen multiple times with these ideas of sort of voluntary solutions to sustainability problems. They go away, and then they come back a little later. This one may come back, I think, as something called impact accounting, which is not just measuring the impact on the stockholder but on the planet as well. So we’ll have to watch that.
Jason Mitchell:
Okay. Got it. So let’s talk about the replication crisis. You’ve identified problems with some of the most seminal papers in the area of sustainability. So first, how prevalent is this? And second, how would you characterise the root causes of this? And by that, I mean: is it largely unintentional mistakes in what is, I guess, a best-efforts process, which frankly can always happen? Are they the result of a kind of willful blindness or self-delusion by researchers who passionately believe in the value of sustainability? Or is it a more problematic issue driven instead by research motivated to deliver a specific outcome, perhaps because of commercial interests or more subtle conflicts of interest?
Andrew King:
Well, I can answer some of those questions. Those are very, very important questions that I wish I had answers to all of them. But let me start from what I know best and then proceed.
So years ago in the strategy literature, not just in the ESG literature, a colleague and I, Brent Goldfarb, tried to estimate the number of false positives. And we think that in probably 20% of the papers, the coefficients are really not different from zero.
So 20% are wrong. And then the problem is also the other coefficients are probably inflated. So that gives you a scale. I think that’s an undercount because there are lots of ways that papers can have problems. The measures aren’t quite right or the empirical strategy doesn’t work or whatever.
So I think this is a big problem. I sometimes say we’re in the steroids era of social science. And I have been a doper because I have engaged in questionable research practises when I was a university scholar where you look for results so hard, you get them even though they may not really be there.
So then, you asked me about sort of motivations. Is this blindness, delusion, ignorance, desire to find an answer that will be useful or appealing? Probably all of the above in my own experience, certainly desire and then also ignorance weighed in. And so yes, this is a big, big problem.
Jason Mitchell:
So can you put all of this into context? One of your latest papers, titled Do Sustainable Companies Have Better Financial Performance? Revisiting a Seminal Study, examines the claims of outperformance from an earlier academic paper published in 2014 that’s widely seen as an important foundation that helped promote sustainable investing strategies. Can you lift the text off your paper and summarise your thesis?
Andrew King:
Yeah. So that paper was influential for a variety of reasons. It was written by some academics at Harvard. It appeared in a very good journal. It used methods that were unique or new to the space, seemed to provide evidence of causality rather than a correlation, had a very big result. Looks like high sustainability firms outperformed. And they outperformed not only on stock return but also on accounting performance.
It linked that to sustainability practises, and it appeared at a time when that kind of evidence, I think, was particularly valuable both to scholars but also to people on Wall Street who were marketing funds. And when Ken Pucker, whom I mentioned earlier, and I did some interviews with people on Wall Street, that was one of the papers they brought up.
So it was a big paper. And when I replicated it, what I discovered which is common is it’s very hard to exactly follow the path of the original authors and what they did.
So I ended up running thousands of different ways they could have done their analysis. And what I’m unable to do with respect to their finding on stock return is to even replicate the sign and significance of their results. So I can’t find support for it, even in a single case.
For accounting, I can find cases. But when you do science, you’re always comparing against, well, could this just be random? And in all cases, randomness does very well. It often beats the actual data. So we have no basis, or I have no basis, for claiming there is a connection between sustainability and financial performance.
And then the other thing is in doing that, I discovered other problems with the papers, with measures, with methods, with the creation of the data that I think add additional concern about the original findings.
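Note: the comparison against randomness described above can be made concrete with a small placebo simulation. This is an illustrative sketch only, not the procedure used by the original authors or by King; the firm counts, return distribution, and effect definition are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: abnormal returns for 180 firms, 90 of which carry a
# "high sustainability" label. No true effect is built into the data.
n_firms, n_high = 180, 90
returns = rng.normal(0.0, 0.10, size=n_firms)
labels = np.zeros(n_firms, dtype=bool)
labels[:n_high] = True

def effect(returns, labels):
    """Difference in mean return between labelled and unlabelled firms."""
    return returns[labels].mean() - returns[~labels].mean()

observed = effect(returns, labels)

# Placebo test: reassign the label at random many times and record the
# effects that arise purely from chance.
placebo = np.array([effect(returns, rng.permutation(labels))
                    for _ in range(10_000)])

# If random labellings match or beat the observed effect this often, the
# data give little basis for claiming the label matters.
share_beating = (np.abs(placebo) >= abs(observed)).mean()
print(f"observed effect: {observed:.4f}")
print(f"share of random labellings at least as large: {share_beating:.3f}")
```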
Jason Mitchell:
Andy, can you peel us back a little bit more? Is it a replication issue or is it a question of robustness? In other words, does the paper fall short on robustness checks because, I guess, minor perturbations alter the results, or is it a replication issue where you really can’t achieve similar results?
Andrew King:
Well, if what you’re asking is can I reproduce their result, the answer is no. But that’s partially because they’re not very clear about what they did. Now, that happens in all papers.
In this case, I reached out to them multiple times to get greater detail about what they did, and they never responded. So I then ran thousands of attempts to see, well, maybe this is what they did.
And in a few cases, I can replicate some of the results. But as I said earlier, randomness generally beats the data. So given that, I think there are two things here. There are elements of this paper that cannot be reproduced. I believe that their actual matching method would occur only very rarely by chance. I estimate one in ten to the 67th, which is vanishingly small. That would be like having to sample all the stars in the universe to be able to make it work.
It seems pretty unlikely. It’s possible, but not likely. So I think it can’t be reproduced. And I think it doesn’t have enough information in it. The data do not have enough information in them to say what they claim and say.
Jason Mitchell:
And just to be clear, what’s the claim? It’s effectively if you invest in high sustainability stocks, [inaudible 00:12:08]
Andrew King:
Yes. If I can riff off that, they’re claiming that high sustainability firms outperform in terms of both stock market and accounting performance. And one of the interesting things I discovered as I went through the paper is if you actually dig into their own claims, their own claims come apart.
So they basically do six tests in this study, two on the stock market returns and four on accounting returns. On the two on the stock market, in one of them, they make a mistake. So they actually do not have what’s called a significant effect. And on the other four on accounting, they never give you a significance test. So basically, the study, strangely, doesn’t really have, even as written, the evidence to support its conclusion.
Jason Mitchell:
I want to back up just a little bit because we’re talking about a replication crisis, which feels like a small and frankly myopic feature in the world of academia. But to what degree does all this represent a larger crisis of theory, of confidence, essentially an epistemological crisis for the world of sustainable investing?
Andrew King:
Well, I’m glad that you’ve separated those two. So I got back into this space. I had been involved in sustainability since the ’70s actually. I wrote the Rhode Island Energy Management manual or part of it in 1979, ’80, and then had gone off and was doing other work on epistemology.
How do we know what we know? And became increasingly concerned that we couldn’t always adjudicate whether what we were reading in scientific reports was trustworthy. And then started working on methods to improve that. All that’s to say is this problem that we’re discussing is a bigger problem.
And empirical research has become so complicated. Computers have become so powerful. You can now run so many models. You can do it in so many different ways that you can often generate results from noise. And so that is the bigger concern I have. ESG is an area where I was trying to apply some of the possible solutions, and I think ESG has the problem like everyone else. Could ESG be worse? And I think possibly. There was a lot of money, tens of trillions of dollars of public money invested.
Funds were contributing or partnering with academics to do the research. We want to believe that win-win works. That it’s in the interest of companies to do what’s good for the planet. So I think it’s possible that the problem is worse in the ESG space, but there’s no evidence of that. It’s just a hunch.
Jason Mitchell:
We’ve talked about the pressure in the academic world to demonstrate novel and statistically significant results in order to get published, I guess, along the idea of the publish-or-perish maxim in academia. It’s also interesting to me that Cam Harvey at Duke University has pointed out, at least in his research, that flawed methodologies likely mean that at least half of the published papers in empirical finance are wrong.
One consequence is that there’s arguably less incentive for the replication and challenge of existing studies. And look, maybe it’s even worse. Maybe there’s a disincentive in all of this. You yourself have experienced pushback from journal editors in your attempts to get some of your replication work published. So talk about the state of replication in finance. Is it becoming more accepted as having value?
Andrew King:
I think it is becoming more accepted, but it’s becoming more accepted from a very low base. I was about to say zero, but that’s not quite right. But replication has been seen as a sort of secondary activity. In a survey of accountants who do this kind of finance and accounting work, it was seen as a career killer.
As you say, it was disliked by journals. I had a journal editor tell me we don’t like it when people replicate another paper. So it’s improving off of that, but it’s got a long way to go. I am pleased, though, because I have heard from some junior scholars around the world who are very happy that I’m doing what I’m doing and coming up with other things that they think should be replicated. So I’m hopeful that we’ll get there, but we’re starting from a very low place.
Jason Mitchell:
So what do we need to do to promote replication in a more constructive way?
Andrew King:
I think first thing we need to do is change the incentives and the reward. In a paper by Rob Bloomfield and co-authors, they interviewed a bunch of people. And here are a couple of quotes. One said, “Once a paper is published, it’s more harmful to one’s career to point out the fraud than the one committing it.” And then another person wrote, “Replication studies don’t get cited, and journals don’t publish them, nor do people get promoted for replication studies.”
So one thing is to change that, so that replication is rewarded. I would love to see a world where every PhD student had to replicate a paper as part of their PhD process.
Jason Mitchell:
I guess, technically speaking, where do you see these problems hiding in academic papers? Is this a causality problem? Are researchers not adjusting for common risk factors or is it about P-hacking and over-fitting models, i.e., the idea that researchers either consciously or unconsciously twist the data to draw out a compelling but potentially false relationship between variables?
Andrew King:
How dare you try to limit our ability to create results? We can create them in all kinds of ways. We can create them by P-hacking. P-hacking is you run 500 models, and then you pick the result that you like from the model that gives you the best result. And I’ve done that.
P-hacking can also be you have a treatment effect which really probably means A, but you say it means B. We can also do things by deciding to eliminate some data points from the sample. We can do it by having different kinds of measures and then picking the measure that works best. We can throw out results that don’t confirm what we want to show. We have unlimited ability to be creative in this process. So the problems appear everywhere.
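Note: the mechanics of P-hacking described above, run many models and keep only the best one, can be demonstrated on pure noise. The sketch below is purely simulated and illustrative; the numbers of observations, specifications, and "projects" are arbitrary assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_obs, n_specs, n_projects = 200, 500, 100

significant_after_selection = 0
for _ in range(n_projects):
    # Outcome and every candidate predictor are independent noise,
    # so the true effect in every specification is exactly zero.
    y = rng.normal(size=n_obs)
    p_values = [stats.linregress(rng.normal(size=n_obs), y).pvalue
                for _ in range(n_specs)]
    # Report only the "best" model, as a P-hacker would.
    if min(p_values) < 0.05:
        significant_after_selection += 1

print(f"projects reporting p < 0.05 after picking the best of {n_specs} "
      f"models: {significant_after_selection / n_projects:.0%}")
```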
Jason Mitchell:
So for practitioners concerned about all this, what advice would you give to watch out for? What would a practitioner identify? What should they look for? What guidance would you give?
Andrew King:
I think it’s very hard to find errors or problems in papers reading them. In the papers that I’ve replicated, I didn’t know that there were going to be problems. The very first one that I did that was sort of controversial, I did because we promised reviewers on a previous paper we would do it. We had no reason to believe there was any issue.
And the problems are often really buried. It’s very hard to find them. But I think there is good advice that we should give, which is: don’t trust any single paper. That advice, whether or not the paper is true, should follow, because when we do statistics and we report a few coefficients, you’re getting one draw from a population.
So, yeah, it might be true. But if we’re really going to fill out what the coefficient is and how much it varies, we have to do lots of studies. So I think when you read a paper, you should say, “Huh, interesting.” I’ll put that in the ‘interesting idea, possibly true’ area of my brain. And if you see it 10 times, then you say, “Okay. Probably true.” But I think we’ve got to be much more careful with single papers.
Jason Mitchell:
And just to be clear, is the issue about replication in general or replication out of sample? Is it a problem where sustainability researchers are using complex opaque methodologies and models to achieve statistical significance? And if it is, how do we find ways to encourage a shift towards more parsimonious models?
Andrew King:
Well, I think you asked two questions here. One is you asked, is this an internal or external validity problem? I’m translating it into the wonky academic language. Is this about problems within the sample, within the data that they have, or is this about taking the information you have from the sample and extrapolating it to the broader population?
I think it’s both. The extrapolation to the broader population is often more dicey. And then your second question you asked I think was if this is about complicated models and the ability to use different complicated forms, parameterizations and stuff to get a result, do we need to drive people back towards simpler kinds of work?
And my colleague here down the hall, Tim Simcoe, a professor in the Questrom School of Business, has a proposal for an etiquette, which is you start with the basic stuff. You start with the simplest models and graphs. And then you proceed to the more detailed stuff. And I think that’s a good approach.
Myself and some others also propose that you generate hundreds, sometimes thousands, of results. And you report them all. It’s a little bit like this: some people describe the empirical process as a walk through a garden of forking paths. You come in, and you can turn one way or another. And then depending on each choice you make, as you walk through the garden, you exit into a different spot. And so you don’t really learn much about the garden from one walk through. So what we really should do is walk through thousands of times, maybe every path, map the whole garden, and then report that.
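Note: the "map the whole garden" idea, enumerating every reasonable combination of analysis choices and reporting all of the resulting estimates rather than one preferred path, is sketched below on simulated data. The forks (control sets, winsorisation, subsamples) and the data itself are invented for illustration and are not the choices from any real study.

```python
import itertools
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(7)

# Simulated firm data with no true sustainability effect on ROA.
n = 1_000
df = pd.DataFrame({
    "sustainability": rng.normal(size=n),
    "size": rng.normal(size=n),
    "leverage": rng.normal(size=n),
})
df["roa"] = 0.02 * df["size"] + rng.normal(size=n)

# The forks in the garden: which controls to include, whether to winsorise
# the outcome, and which subsample to keep.
control_sets = [[], ["size"], ["leverage"], ["size", "leverage"]]
winsorise_options = [False, True]
subsamples = ["all", "large_only"]

estimates = []
for controls, winsorise, sub in itertools.product(
        control_sets, winsorise_options, subsamples):
    data = df.copy()
    if winsorise:
        lo, hi = data["roa"].quantile([0.01, 0.99])
        data["roa"] = data["roa"].clip(lo, hi)
    if sub == "large_only":
        data = data[data["size"] > 0]
    X = sm.add_constant(data[["sustainability"] + controls])
    fit = sm.OLS(data["roa"], X).fit()
    estimates.append(fit.params["sustainability"])

# Report the whole map, not a single favoured path.
estimates = np.array(estimates)
print(f"{len(estimates)} specifications; median estimate "
      f"{np.median(estimates):.3f}, range [{estimates.min():.3f}, "
      f"{estimates.max():.3f}]")
```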
Jason Mitchell:
I want to go back to something you mentioned earlier. The question is: is sustainable investing more vulnerable to P-hacking? If it is, is it because it’s a more nascent industry, or is it because the stakes are higher? I say higher stakes because writing about, say, diversity or the impact of climate on portfolio returns carries certain political and policy implications, which seem very, very different from an academic in quantitative finance writing about something like the factor zoo.
Andrew King:
It’s a great question, and I don’t know the answer. I can speculate a little bit. First of all, I think ESG goes back more than a decade. Under a different name, it goes back even to the ’90s, maybe even further, where it was studied as CSR or pollution prevention, or things like that.
Second of all, I think this is an area where there’s a great deal of hope that influences scholars. We want to believe that the profit motive will bring about sustainable outcomes. And the people who are involved in that effort all know each other. And we don’t want to be too critical of each other because we’re all kind of involved in an attempt to make the world a better place and respect each other’s morals and efforts.
And then third, there’s money. The funding, over the last 10 years, a lot of it has come from green investors. And they’re interested in a particular kind of outcome. Some colleagues and I proposed to get some money from one of these groups, and they got wind that our results would be sceptical and declined. And I think that kind of problem is influencing it as well. So hope, comradeship, money, all those things I think are influencing it.
Jason Mitchell:
You’ve alluded to this a little bit, but talk more about what the feedback of your work has been like for you personally. You clearly take sustainability issues like climate change and biodiversity loss very seriously. And I don’t think anyone would include you in the tribe of deniers. Yet you’ve come under attack from what might be considered your natural tribe of the sustainability concerned within academia and outside academia.
It seems really strange to me, especially when replication is generally regarded as the cornerstone of science. So would you change anything? What do you think you’ve learned about the personal cost of academic scepticism? And what do we need to do to enable constructive and robust criticism in polarised times like these?
Andrew King:
Yeah. Well, again, two questions. So the first question is sort of what is the response to replication in general, and how has that affected me? And the second question is how does this fit in the sort of ESG community? Is that right?
Jason Mitchell:
Absolutely.
Andrew King:
Okay. Well, replication has been, until recently, very hard to do. And if you think about it, journals don’t want to admit that they goofed. They don’t want to admit that a paper that they accepted has a flaw in it, maybe even a glaring flaw, particularly when it becomes a famous paper.
There is a famous paper related to sustainability where two coefficients are compared. One coefficient is 1.6. And the other one is 0.8. And the authors conclude that 0.8 is larger than 1.6. Why? Because 0.8 is statistically significant, and 1.6 is not. That does not follow.
And yet that paper is the most cited paper in that journal, again another very well respected journal, since it was published. So journals don’t want to fix these problems, in my opinion. Now, there are exceptions. There are some courageous editors that are changing this. And I think it will evolve over time. But you have to beat the door down, and that’s what I’m having to do. Usually, when I do a replication, I send it to the original journal, and I get rejected. And then I have to send it to a secondary journal, and I have to keep the pressure on to try to get the journals to correct the record, which I think is important.
People reading the documents don’t know. They see this. They know it’s cited. It’s famous. They learned about it. Why should they think there’s any problem with it? So we got to do this. We got to make this change.
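Note: the coefficient comparison King criticises above illustrates a well-known statistical point: whether each estimate clears its own significance threshold says nothing about whether the two estimates differ from each other. In the worked example below, only the two coefficients, 1.6 and 0.8, come from the discussion; the standard errors are assumed purely for illustration, and the difference test assumes independent estimates.

```latex
\[
\hat{\beta}_1 = 1.6,\ \mathrm{SE}_1 = 1.0
\;\Rightarrow\; t_1 = \tfrac{1.6}{1.0} = 1.6 \quad (\text{not significant at } 5\%),
\]
\[
\hat{\beta}_2 = 0.8,\ \mathrm{SE}_2 = 0.3
\;\Rightarrow\; t_2 = \tfrac{0.8}{0.3} \approx 2.7 \quad (\text{significant at } 5\%).
\]
\[
\text{Test of the difference:}\quad
z = \frac{\hat{\beta}_1 - \hat{\beta}_2}{\sqrt{\mathrm{SE}_1^2 + \mathrm{SE}_2^2}}
  = \frac{1.6 - 0.8}{\sqrt{1.0^2 + 0.3^2}} \approx 0.77.
\]
```

Under these illustrative numbers the difference is far from significant, so the data would provide no basis for ranking the two effects, let alone for concluding that 0.8 exceeds 1.6.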
Jason Mitchell:
So the big question: do you think your career has benefited or has it suffered? Look, I’ll be completely honest. I feel like I can’t help but ask you to frame the conversation we’re having now against something you recently wrote on LinkedIn. You wrote, quote, “This week, I heard that colleagues at another university, not Boston University, they worry that I am damaging my career. And they asked a co-author of mine why I would take the risk and what I was trying to achieve,” end quote.
Tell me if I’m wrong, but it feels like there’s a clear unmistakable air of academic suicide in that statement. What are you trying to achieve? It really feels like replication has come to represent almost a radioactive stigmatised term in ESG academia.
Andrew King:
At this point, I’m not that worried about my career, which then allows me to do the kinds of things I’m doing now. I’m a chair at a prestigious university. It has had effects on friendships and things like that. I have colleagues that I like and some former protégés who won’t read what I’m writing because they don’t want to have a conflict.
I can’t write with some former junior colleagues because I’m worried about the impact [inaudible 00:29:31]. People won’t return my emails. Those are all meaningful problems. But this is the great thing about being a chair: there’s not a lot people can do to me.
So the real risk is pretty small, and I’m hoping to demonstrate that, yeah, there are risks and there are problems, but they’re not so great, and to encourage others to do some similar kind of work.
And as I’ve said, I’ve been surprised by, really, the number of junior people who have reached out to me and said, “Yeah, I know. But I still want to do it.” So that’s great. And I want to also build bridges to the journals as well. I know it’s hard because I both want to push on the journals and get them to change, but I also want them to understand that I get it. Adjudicating the quality of these papers when they first come through is very difficult, and some of these papers that I am now finding problematic, I would’ve accepted. So we need a solution to this that isn’t either denial or extremely rigorous first-round review.
Jason Mitchell:
It’s interesting. Since you posted your papers criticising the studies, they’ve continued to be widely cited and promoted, including in some cases by the authors themselves. In a similar circumstance, going back to Alex Edmans, he’s pointed out that the McKinsey study on diversity continues to be widely cited, circulated, and promoted despite some fairly profound problems with the claims in that paper.
What responsibility do you think academic and practitioner authors of research have once problems with their research have been highlighted? Do you think the responsibilities are different for academics versus practitioners?
Andrew King:
I’m going to stay with academics. I have a lot of respect for Alex Edmans, and I trust what he might have to say about this, about practitioners. A good friend of mine, Brent Goldfarb, is saying that we maybe should all have to sign a Socratic oath or something like that.
I have two answers. That’s why I’m struggling here. I think it’s your responsibility if you think that you did something wrong to say, “I did something wrong.” And on one of my LinkedIn pages, I blogged about a paper that I wrote where I said, “We made this mistake.” Now I haven’t done that for every single paper I’ve done because every paper has something wrong with it. So at what point do you want to say, “Oh, yes, that standard error here is off.”
There’s a problem in that table. I’m fumbling this because I don’t really have a great answer. So the reason I’m struggling here a little bit is I don’t really know how to think about that and what the right answer is. And I’m not entirely sure even whether it’s my place to tell people what the moral thing to do is.
I would say that I personally feel we have some responsibility to make sure that at least our part of the record is correct. Correct is a funny word. At least justified, like the claims we made follow from the data. Online, I’ve admitted to some problems with my own papers where there are missing empirical requirements for identifying causality and things like that.
There are other papers where I’ve said online: I ran too many models. And so the estimates, while they follow from the data, may come from my having run so many models that I selected a model that worked. This is very hard, this problem of doing empirical research and knowing whether what you’re saying is true.
I say to PhD students, “Getting a PhD, writing a paper is a moral struggle.” You’re constantly saying to yourself, “Am I lying?” because I’m looking for signal in the data. But I can also look so hard to find signal in nothing. And you’ve got to constantly be saying that.
So how you decide whether that’s true or not is beyond me. I think that’s something that every person’s got to do. But I think there’s no loss, when someone replicates your research or questions it, in saying, “Yeah, we should do that,” because all papers are, as I said earlier, a draw from the data of the world. And even if that draw, given how it was done, is right, the general inference that comes from it may not follow.
So I’ve supported people doing research and doing replications of my own research. I’m doing it right now with a group. And they have found that I’ve been wrong. I may have gotten the effect of the Responsible Care programme right on pollution and wrong on accidents. That happens. So I can’t say about the moral responsibility, but I think that it is practically and strategically a good idea for us to just admit when we have problems and help science move along.
Jason Mitchell:
I want to talk about the reaction of academic journals to your critiques a little bit more. If I understand correctly, some journals rejected your paper, but they also came back with what sounds like far-fetched explanations. In other words, in one case that the authors of the paper you tried to replicate made a typo around statistical significance. What’s the backstory to this?
Andrew King:
First of all, getting into a journal is very difficult. So in the journal process, editors are often saying, “Nope, I’m going to reject you.” On a replication, I think you have a little bit of an extra hurdle because that paper that’s published, particularly if it’s an influential one, is part of their journal.
So they want to be careful in making corrections. But the broader point you bring up is correct. I do feel, and this is my emotional response, that journals are sometimes coming up with excuses to reject the paper. At one journal, as I said earlier, we were told, “Well, we don’t like it when people replicate other papers.”
At another journal, we were told, “Well, you’re guessing at what the original authors did. So since you don’t exactly know, I don’t see any reason to publish the replication.” Well, the reason that we were guessing is that the original authors wouldn’t talk to us.
So basically then, you’re giving any author a way out, which is to just refuse to work with anyone who’s replicating their study. And then I’ve been told that they didn’t like our tone. I’ve never in my life been rejected for having a bad tone in a paper, but apparently I had a bad tone. I have a bad tone right now when I am speaking. So that seemed weird to me. In terms of the specific example you give: I pointed out in a paper that what the authors said was a statistically significant result was not statistically significant. You could rerun the analysis and it wasn’t.
And the journal reached out to them, and the authors wrote back and said, “Yes, we meant to write ‘not statistically significant.’” It may not be clear to your audience, but that is such a big deal that the missing ‘not’ seems unlikely. It’s a little bit like saying at your wedding, “I didn’t mean to say I do. I meant to say I don’t.” That’s kind of the hub of the whole thing. And if you forgot to put in the ‘not’, well, that seems either really sloppy or problematic.
Jason Mitchell:
Understand. What are the options available to you to escalate these issues? If some journals simply don’t want to address them, are there avenues?
Andrew King:
None. But LinkedIn, online blogging. I have tried. At one journal, I appealed to the associate editor. I appealed to the then editor. I’ve now appealed to a new editor who seems a little more open, and we’ll see.
Now, you mentioned the journals. Journals are a broad category, and journals are changing. One of the journals in the business school area, the Strategic Management Journal, is, I think, more open to replications. And I’ve had success there. And I’m in the middle of another one. And there are new journals coming up that are really trying to be about replication.
But the problem for them is that they’re still less prestigious. And so even if you publish there, it may have no impact on the original paper. You brought this up earlier: the evidence seems to be that even if you replicate a paper and find that it can’t be replicated or it’s got problems, it continues to be cited at the same rate as before. So it’s almost like we’ve gotten to a point where science isn’t self-correcting, social science anyway.
Jason Mitchell:
It’s interesting. It seems the reputations of the authors of the papers you critique are largely untainted despite having multiple papers debunked. Moreover, they remain widely viewed as leaders, titans really, in sustainability, advising companies and giving high-visibility talks.
What are the parallels? How do you reconcile this with the fate of someone else like, I don’t know, Francesca Gino, whose work has also been debunked? I don’t know if she’s an appropriate parallel or not. But why is there such inconsistent treatment? Francesca Gino endured investigations, but no similar investigations have been done here. This not only means that people are potentially misled. But I guess it also provides a perverse incentive potentially to future researchers.
Andrew King:
It’s a great question. And you could add into that Amy Cuddy from years ago, about power poses, which is a little more of a parallel. So let me just quickly explain: Francesca Gino is accused of changing her data, like actually going in and changing the numbers in an Excel spreadsheet.
Amy Cuddy, who argued that if you pose in a particular way before you do an interview or something, you will do better and it’ll change your hormones, is, I don’t want to say accused, because some of it came from her co-author, but it seems like perhaps they tried a bunch of models and got a result that they liked, the more classic P-hacking. And Amy Cuddy was drummed out of academia. And Francesca Gino has been put on leave, as I understand it.
So then why is it that if someone’s paper is found not to replicate, they’re not punished similarly? Well, I think in those two original cases, there seems to be evidence almost of a hand doing something. I’m not a lawyer here. In the other one, there is more ambiguity about how it came to pass that the results appeared.
And so it could be deceitful or it could just be error. And I’m not accusing these people of deceit; it may be error. And as I said, I’ve made errors myself. So if everybody who made a mistake in science and published something that had an error in it was drummed out, we wouldn’t have anyone in science.
So when do we decide it’s a violation of the code? And when do we say, “No, you’re within the code, but there’s a mistake or something. The sample is unusual”? I don’t have a great answer to that. If I find, and this may happen, that something appears impossible or fabricated, and I’m interested in feedback on this even, I think it’s probably my responsibility to report it.
Jason Mitchell:
Let’s end on a final question and a more uplifting one. Where is the silver lining in all of this if one even exists? Is it more rigorous research, greater scrutiny among academics and practitioners? What else?
Andrew King:
Okay. So I think there is a silver lining. First of all, there are more and more scholars interested in this problem of how do we learn from the published literature and how do we make it work better. I’m embarrassed to say I was 60 before I really began to think about this question, which is that almost everything we learn or know, we learned because someone told it to us or we read it.
But on what justification should you use that? I think about our current world. Should we trust everything we hear? No. So we have to somehow adjudicate what to trust and what not to trust. And so that process of how we determine what to trust is super critical for academics, and journals, and so forth. And increasingly, there are people working on that problem. And that’s, I think, very exciting. So silver lining there.
The second one is the sustainability movement is morphing. A lot of the people who are interested in sustainability are coming around and saying, “Well, golly. Maybe, we can’t just trust firms are going to do this voluntarily. We have to have new regulation, new practises of different types that will bring about the effects that we want.”
I think that’s very healthy. And many of those people are people like Ken Pucker, the former COO of Timberland, who’ve been in this space for a very long period of time. Auden Schendler, of the Aspen Skiing Company, is another one. So I think that’s very promising.
And then finally, there are now getting to be more people interested in this replication improvement of our empirical methods, new forms of journals, improved reporting. So I think there’s hope, but it’s hope at the kind of green seed level. So what we hope is that it will grow into a sapling and then a mighty oak. Whether it will or not, I don’t know, but I think there is hope.
Jason Mitchell:
Good. That’s a great way to end. So it’s been fascinating to talk about what the replication crisis represents for sustainable investing, how we can prevent incentive problems from impacting academic research, and why academic journals and the academic-practitioner community need to be more open to the replication and challenge of existing studies. So I’d really like to thank you for your time and insights.
I’m Jason Mitchell, head of Responsible Investment Research at Man Group here today with Andy King, professor of Strategy & Innovation at Boston University’s Questrom School of Business. Andy, thank you so much for this.
Andrew King:
Thank you very, very much for having me.
Jason Mitchell:
I’m Jason Mitchell. Thanks for joining us. Special thanks to our guests and, of course, everyone that helped produce this show. To check out more episodes of this podcast, please visit us at man.com/ri-podcast.
This information herein is being provided by GAMA Investimentos (“Distributor”), as the distributor of the website. The content of this document contains proprietary information about Man Investments AG (“Man”). No part of this document nor the proprietary information of Man herein may be (i) copied, photocopied or duplicated in any way by any means or (ii) distributed without Man’s prior written consent. Important disclosures are included throughout this document and should be used for analysis. This document is not intended to be comprehensive or to contain all the information that the recipient may wish when analyzing Man and/or their respective managed or future managed products. This material cannot be used as the basis for any investment decision. The recipient must rely exclusively on the constitutive documents of any product and its own independent analysis. Although Gama and their affiliates believe that all information contained herein is accurate, neither makes any representations or guarantees as to the conclusion or needs of this information.
This information may contain forecast statements that involve risks and uncertainties; actual results may differ materially from any expectations, projections or forecasts made or inferred in such forecast statements. Therefore, recipients are cautioned not to place undue reliance on these forecast statements. Projections and/or future values of unrealized investments will depend, among other factors, on future operating results, the value of assets and market conditions at the time of disposal, legal and contractual restrictions on transfer that may limit liquidity, any transaction costs and the timing and form of sale, which may differ from the assumptions and circumstances on which current perspectives are based, and many of which are difficult to predict. Past performance is not indicative of future results.