George Mason professor Bryan Caplan agrees that the dominant ‘misinformation’ narrative is wrong. We both think misinformation is widespread – essentially the natural state of the world. Caplan, however, thinks the whole idea is a sideshow. He argues:
(1) people are not even rational enough for misinformation to make a difference. Whether they get good info or bad info doesn’t matter. They will believe wacky things regardless.
(2) there’s misinformation on all sides, so it roughly balances out. There’s no reason to believe misinformation causes contagion in one direction but not the other.
(3) if social media firms don’t cancel nearly everyone they don’t like, then cancellation can’t be a big deal.
I believe we’re dealing with a much broader phenomenon. I think the reaction to the perceived dilemma – the War on Misinformation – is itself a real and serious problem.
Caplan thinks the problem is not an absence of information or the presence of misinformation but irrationality – the thesis of his good book The Myth of the Rational Voter. It’s not merely that people don’t know much; they don’t, he argues. The problem is that, whether they know anything or not, they can’t or don’t even attempt to make heads or tails of it. They are irrational.
Updating his thesis for ‘misinformation,’ he argues:
The story focuses exclusively on the flaws of speakers, without acknowledging the flaws of the listeners. Misinformation won’t work unless the listeners are themselves naive, dogmatic, emotional, or otherwise intellectually defective. In economic jargon, the problem is that the story mistakes an information problem for a rationality problem.
The motivation for this crucial omission is fairly obvious. Blaming listeners for their epistemic vices sounds bad. It makes the accuser sound elitist, if not arrogant. Blaming a few high-status liars for the world’s problems is a lot more compatible with Social Desirability Bias than blaming billions of low-status fools who fail to choose to exercise their common sense.
But what if the fools are the high-status experts? What if Caplan’s “naive, dogmatic, emotional” listeners are the pseudo-elites running institutions in media, government, and business?
Or, let’s give the experts more credit and assume they are smart and rational. What if censorship and other mechanisms that suppress the free flow of information leave decision-makers ignorant? Then even rational experts can make big mistakes.
I’ve argued the War on Misinformation was a chief cause of the calamitous policy and medical responses to Covid-19. Social media, however, was just one tool of info-suppression. Legacy media, government agencies, medical societies, health systems, and universities also suppressed alternate views through censorship and intimidation. Together, they advanced singular and simplistic views of the problem and potential solutions.
It struck me during Covid that many smart people – physicians, epidemiologists, CEOs, policy experts of various sorts – are not especially good at thinking about complex systems: big, sprawling, multi-faceted sets of challenges involving, in this case, virology, epidemiology, social psychology, economics, and much more. In fact, very few people are good at integrating multiple dynamic systems (even setting aside massive uncertainty). It’s the nature of the beast. So I agree with Caplan that irrationality is part of the problem. Information deficits, however, can amplify irrationality, and hearing better arguments can help correct rationality deficits.
The tragedy is that some of our best thinkers – those physicians, epidemiologists, and analysts who correctly identified the complex challenges and offered better solutions – were silenced. Indeed, the need for more information grows with the complexity of the puzzle.
Had these alternate voices broken through the walls of censorship, more thoughtful experts would have been exposed to better explanations of the disease and more effective treatment strategies. They would have better understood the costs (not merely the supposed benefits) of various public health measures. The likelihood of achieving a cascade of good sense would have gone up.
I think a little censorship can lead to a lot of self-censorship, and that during Covid self-censorship produced a serious case of ‘knowledge falsification’ – the failure of experts to share important insights that could have improved the state of the world.
In recent weeks, Bill Gates, CDC director Dr. Rochelle Walensky, and other health experts have admitted that, until now, they didn’t know crucial facts about the Covid vaccines. Yet physicians, scientists, and journalists were thrown off Twitter, Facebook, and YouTube – and fired from hospitals and medical schools – for saying, a year ago, the things Gates and Walensky now lament are true.
That, in large part, is an information problem – a growing disorder I attempted to explain here: Dysinformation: How the exaflood caused an information sickness.
Social Media Malpractice
Despite its glaring Covid failures, however, the medical field is doubling down on certitude and censorship.
Writing in the New England Journal of Medicine (NEJM), Drs. Richard J. Baron and Yul D. Ejnes outline a new process for policing the speech of physicians. Baron is the CEO of the American Board of Internal Medicine (ABIM), which certifies more than 200,000 physicians.
“Medicine has a truth problem,” they write.
In the era of social media and heavily politicized science, “truth” is increasingly crowdsourced: if enough people like, share, or choose to believe something, others will accept it as true. This way of determining “truth” doesn’t involve scientific methods; it relies instead on “the wisdom of crowds,” which has particular power in a democratic society in which leaders and policies are chosen by the will of the group. Such choices anchor concepts like freedom and liberty. But they may not be helpful in determining whether a building will collapse, whether your brakes will stop your car — or whether a medication or vaccine works.
Obviously, medical associations and state boards of licensure can play important roles in promoting best practices and the quality of personnel.1 Self-regulation of professional speech and behavior is often preferable to government regulation.
But what if these authorities are wrong on big questions? And what if, beyond making mistaken assertions, they stifle discussion of crucial evidence and alternate hypotheses? Then their mistakes will remain undiscovered and lead to worse public health outcomes.
These are legitimate and difficult questions. My argument is that overly confident authorities, without introspection, are shifting the balance too far away from open, decentralized debate among good-faith citizens and toward centralized certitude.
The doctors conclude:
There aren’t always right answers, but some answers are clearly wrong.
Sounds reasonable. But has ABIM admitted when it was “clearly wrong”? The failure of the medical field to acknowledge and grapple with its serial Covid failures undermines the credibility of new efforts, like that of ABIM, to expand and formalize the suppression of potentially life-saving insights.
1. Licensure can, of course, also restrict the supply of services in ways that are harmful to would-be providers, consumers, and the state of public knowledge.