Abstract
Experts are sharply divided concerning the prevalence and influence of misinformation. Some have emphasized the severe epistemic and political threats posed by misinformation and have argued that some such threats have been realized in the real world. Others have argued that such concerns overstate the prevalence of misinformation and the gullibility of ordinary persons. Rather than taking a stand on this issue, I consider what would follow from the supposition that this latter perspective is correct. I argue that, if the prevalence and influence of misinformation are indeed overstated, then many reports as to the prevalence and influence of misinformation constitute a kind of higher-order misinformation. I argue that higher-order misinformation presents its own challenges. In particular, higher-order misinformation, ironically, would lend credibility to the very misinformation whose influence it exaggerates. Additionally, higher-order misinformation would lead to underestimations of the reasons favoring opposing views. In short, higher-order misinformation constitutes misleading higher-order evidence concerning the quality of the evidence on which individuals form their beliefs.
1 Introduction
A recent survey of experts conducted by the World Economic Forum found that misinformation and disinformation are perceived to pose the single greatest global risk in the short term, beating out such other factors as extreme weather events and interstate armed conflicts (World Economic Forum, 2024). Scholars of misinformation and other commentators were quick to criticize such perceptions of risk, alleging in part that these perceptions reflect an exaggerated and oversimplified understanding of the effects of misinformation and disinformation (Williams, 2024). This incident reflects a broader divide concerning the prevalence and influence of misinformation and disinformation. Especially since 2016, many have suggested that misinformation and disinformation pose an extreme epistemic threat and have argued that these phenomena have driven or exacerbated a wide range of challenges. Others have argued that the influence of misinformation and disinformation is far more limited than surface appearances might suggest.
Here, I do not attempt to adjudicate between the treatment of misinformation and disinformation as a crisis, and the backlash to this crisis narrative. Instead, I will assume for the sake of argument that the backlash is largely correct. I then argue that, if the backlash is indeed correct, we nonetheless face an underrecognized challenge of misinformation—a challenge of higher-order misinformation. Higher-order misinformation, as I understand it, is misinformation about the prevalence and influence of misinformation and disinformation. I argue that such higher-order misinformation poses underappreciated and serious epistemic and political challenges.
2 Misinformation and disinformation
This paper focuses in large part on misinformation and disinformation, both of which are contested concepts. It will thus be important to clarify our targets by defining these terms. It should be stated from the outset that we do not seek definitions that capture clear pre-theoretic concepts of misinformation and disinformation. While conceptual analysis of this sort is a familiar project in analytic philosophy, the scope of its application is, at best, limited. Misinformation and disinformation are technical terms used in diverse contexts by diverse parties for diverse purposes and whose meanings are, in part, matters of stipulation. Thus, while I attempt to offer definitions of these concepts that largely align with ordinary understandings and usages, the definitions on which I settle are also motivated in part by their usefulness. I do not expect these definitions to be suitable to all purposes.
Some standard approaches define misinformation in terms of false claims (De Ridder, 2021). This definition leaves it open that misinformation might be generated and spread with no ill-intent through, for example, good-faith mistakes. Disinformation might then be defined as false claims spread with the intention of deceiving a target.
These definitions are, however, somewhat too narrow for present purposes. First, not all misinformation and disinformation can be properly described in terms of “claims.” Manipulated photos, audio recordings, and video footage can, for example, constitute misinformation or disinformation. Second, although misinformation and disinformation are often false—or, in the case of photos, audio recordings, and video content—inaccurate in a broader sense, they need not be. A true but misleading claim or statistic, or a decontextualized photo might predictably distort an audience’s thinking. Thus, for present purposes, I take such things as candidates for misinformation and disinformation.
Thus, rather than understanding misinformation and disinformation in terms of false claims, I understand these as counterfeit or fake counterparts of legitimate forms of information (Fallis & Mathiesen, 2019; Harris, 2022, 2024). Whereas some claims, statistics, photos, videos and so on may be understood as legitimate forms of information, false claims, fraudulent or misleading statistics, photoshopped pictures, and deepfake videos may be understood as misinformation or disinformation. What identifies these as misinformation is their proneness to distorting the thinking of consumers, which may involve the formation or entrenchment of false beliefs, the development or strengthening of spurious mental associations, and so on. The psychological effects of an item may depend on contextual factors and thus whether something constitutes misinformation or disinformation may depend on the context in which it is encountered.
Thus far, I have lumped together misinformation and disinformation. Is there a distinction worth highlighting? As I will understand things, both misinformation and disinformation are defined in terms of the effects described above. However, unlike mere misinformation, disinformation is intended to produce these effects (Fallis, 2014; Floridi, 2011; Harris, 2023; Jaster & Lanius, 2021). Thus, whether something constitutes disinformation or mere misinformation will depend on its history. Notably, I do not assume that whether something constitutes disinformation depends on the intentions of the person passing it on at a particular time. Rather, what matters is whether the production of the sorts of ill-effects described above by the sharing of misleading information is part of a plan to intentionally produce such effects. Consider two examples. First, we might imagine that misleading information about a foreign conflict is deliberately generated by a party aiming to cause widespread deception. Even when that misleading information is subsequently shared on social media by credulous parties, it remains disinformation. Similarly, we might imagine an authoritarian government generating fake documents in support of a conspiracy theory. Even if these documents are then credulously reported by journalists in that country or abroad, they remain disinformation. Such situations are well-described as involving “useful idiots” who play a role in enacting a plan to produce certain ill-effects, but who can in principle do so with no deceptive intent or ill intent more generally.
For present purposes, the key points concerning the distinction between misinformation and disinformation are as follows. First, disinformation is a kind of misinformation—that which is intended to produce the sorts of ill-effects described above. Second, whether something is disinformation or non-disinformative misinformation depends on whether it is spread as part of a plan to produce such ill-effects. Thus, whether something is disinformation or mere misinformation is not strictly a matter of its content or its tendency to produce certain effects. For these reasons, I will in what follows discuss the prevalence and influence of misinformation and, ultimately, the challenge of higher-order misinformation. However, it should be understood that the category of misinformation includes the category of disinformation.
3 The crisis narrative and the backlash
The recent history of the study of misinformation can be boiled down, very roughly, to a struggle between two camps. One camp emphasizes the dangers of fake news, conspiracy theories, deepfakes, and other forms of misinformation, while the other camp argues that the dangers their counterparts associate with misinformation are hyperbolic. I refer to the narrative promoted by the first camp as the crisis narrative, and I refer to the dissenting position as the backlash. It is worth reiterating that characterizing the recent study of misinformation in terms of these two camps is an oversimplification. For one thing, some contributors to the debate may think that the threats posed by certain forms of misinformation are underappreciated, while the threats posed by others are overstated. For another, some may hold that certain fears concerning misinformation are ill-founded, while others are entirely legitimate. In short, it is both in principle possible, and not altogether uncommon, for scholars to straddle both camps. Still, distinguishing between these two camps will allow for a clearer discussion of the underappreciated threat of higher-order misinformation.
At least in the western world, the recent wave of academic and media interest in misinformation is largely due to two factors. The first is a series of events, including Brexit, the 2016 election of Donald Trump to the US Presidency, the COVID-19 pandemic, and the storming of the US Capitol building in early 2021. These events are widely thought to have been caused or—in the case of the pandemic—exacerbated by misinformation. Brexit and the election of Donald Trump, for instance, are often thought to have been promoted by misinformation spread in part by Russian social media trolls (Jamieson, 2020) and by opportunistic fake news publishers (Hughes & Waismel-Manor, 2021). In the case of the COVID-19 pandemic, it has often been suggested that the pandemic was accompanied by an “infodemic” (Balakrishnan et al., 2022; World Health Organization, 2020). In particular, it has often been claimed that conspiracy theories and other forms of misinformation drove failures to comply with pandemic restrictions, reduced willingness to be vaccinated, and promoted bogus cures and preventatives, thereby causing avoidable deaths and other complications. In a particularly dramatic example, some empirical evidence indicates that a significant number of individuals attempted to combat COVID-19 by drinking bleach (Gharpure et al., 2020). In a similar fashion, the storming of the US Capitol is often thought to have been closely linked to the QAnon conspiracy theory and, more generally, to conspiracy theories about the integrity of the 2020 US Presidential election. The rise of QAnon is sometimes thought to be part of an emerging “golden age of conspiracy theories” (Willingham, 2020). This suggestion is bolstered by polling illustrating a high degree of agreement with various outlandish conspiratorial claims, including those central to QAnon (PRRI Staff, 2022). More generally, survey data and events like those described in this paragraph are sometimes taken as indications that we have reached a stage of “post-truth” (McIntyre, 2018).
The second major factor driving recent western concerns about misinformation is the rise of generative artificial intelligence—especially deepfakes. In the broadest sense, deepfakes are items of media content generated through models trained on large bodies of existing media. This technology allows, at least in principle, for the rapid generation of fake but lifelike photos, audio content, and video footage. While deepfakes have to this point mostly taken the form of fake, nonconsensual pornography (Cox, 2019; Steele, 2023), the technology lends itself to the creation of misinformation. For example, in January 2024, a robocall using the faked voice of Joe Biden discouraged voters from participating in the 2024 Democratic primary (Swenson & Weissert, 2024). Commentators have suggested that deepfakes pose a severe threat to knowledge of the world and to democracy. Thus, it has been suggested that deepfakes threaten to bring about the “information apocalypse” (Warzel, 2018), an “epistemic maelstrom” (Rini, 2020), and indeed the “collapse of reality” (Foer, 2018).
Nearly every point discussed thus far in this section has been contested as part of what I am here calling the backlash, proponents of which sometimes regard the crisis narrative as a kind of “moral panic” (Carlson, 2020; Nyhan, 2020). Starting with Brexit and the election of Donald Trump, it has been argued that the influence of misinformation on these events is far more limited than the crisis narrative would suggest. For one thing, empirical studies have suggested that Russian trolls had limited interaction with voters and that the voters who experienced such interactions were, by and large, already committed partisans (Eady et al., 2023).
In addition to direct empirical evidence of the limited influence of misinformation, the backlash has also been characterized by a proliferation of theoretical grounds for questioning the scope of that influence. It has been argued, for example, that the model according to which misinformation causes changes in mental states and thereby inspires counter-normative behavior is overly simple. Some have argued, for example, that endorsement of anti-vaccine misinformation and vaccine hesitancy stem from the same cause, rather than the latter being caused by the former. Mercier (2020), for instance, suggests that vaccine hesitancy is a natural tendency in light of the counterintuitive nature of vaccination. More generally, Mercier suggests that panics about the influence of misinformation reflect an inflated estimation of the gullibility of ordinary persons. Rather than being easily duped, ordinary persons are strategic in their consumption and dissemination of information. In a similar vein, Williams (2022, 2023a) suggests that the proliferation of misinformation is due in large part to the motivations that individuals have to hold certain self-serving beliefs. Because belief is involuntary, individuals cannot simply choose to believe what they would like. Instead, they require epistemic support for their beliefs. Together, the desire to hold certain (often false) beliefs and the inability to do so without evidential support create the conditions for the emergence of a “marketplace of rationalizations.” In this model, misinformation is not a simple cause of false beliefs. Rather, the desire to hold certain false beliefs creates the conditions for the proliferation of misinformation.
The crisis narrative is driven in part by anecdotal and survey evidence suggesting the commonality of belief in conspiracy theories and other outlandish falsehoods. Such evidence has been called into doubt on several grounds. For example, although it is often thought that the popularity of conspiracy theories has exploded in recent years, and especially during the COVID-19 pandemic, some empirical evidence suggests otherwise. Despite widespread media reporting suggesting the increased popularity of the QAnon conspiracy theory at the height of the COVID-19 pandemic, Adam Enders and colleagues found that explicit support for QAnon was modest and stable during this time (Enders et al., 2022). Additionally, although survey evidence often suggests a high degree of agreement with conspiracy theories and other outlandish claims, scholars have questioned whether such evidence indicates genuine belief. For example, some have argued that apparent agreement with obvious falsehoods is often indicative of “expressive responding” or “partisan cheerleading,” rather than sincere belief (Hannon & de Ridder, 2021; Levy, 2022a, Chap. 1; Schaffner & Luks, 2018). That such agreement does not reflect sincere belief is suggested by the fact that, when respondents are offered incentives for accurate responses, rates of false responses drop significantly (Bullock et al., 2015). Similarly, outside of the survey context, and especially on social media, it has been suggested that the endorsement and sharing of misinformation likewise serves to signal individuals’ political loyalties (Ganapini, 2023). Additionally, Levy (2022b) argues that individuals sometimes sincerely but falsely report belief in conspiracy theories and other falsehoods because such individuals have effectively become absorbed in a state of play within which they struggle to determine what they themselves believe. Relatedly, Ganapini (2022) has argued that such falsehoods are often imagined, sometimes in an absorbing way that obscures what the subject believes, rather than believed. One further complication is that agreement with outlandish conspiracy theories in survey contexts may be due to mere trolling. For example, while some subjects claim to have attempted to combat COVID-19 by drinking bleach, subsequent data indicates that the same subjects who claim to have done so also claim to have died of a fatal heart attack (Altay et al., 2023; Litman et al., 2023).
What of the threat of generative artificial intelligence? Dire warnings about the misinformative potential of these technologies have so far gone largely unrealized. Indeed, even the faked Biden robocall described above was reportedly commissioned by a Democratic political operative who claimed to have aimed to bring attention to the political dangers of artificial intelligence (Seitz-Wald, 2024). What is more, the apocalyptic narrative surrounding deepfakes has been questioned, in part, on the grounds that this narrative exaggerates the novelty of such technologies. Britt Paris and Joan Donovan (2019), for example, locate deepfakes within a long lineage of techniques for manipulating media content. Habgood-Coote (2023) notes that “faking” has been part of the practice of photography since its inception, and thus is hardly unique to deepfakes.
I have also previously argued that dramatic concerns about the impacts of deepfakes tend to oversimplify the evidential force of media content. This force is derived, in part, from its surrounding social context (Harris, 2021). Thus, even if there exists a highly realistic deepfake, it will have limited evidential weight if it is not shared through a trusted channel of information. Similarly, even if there are many deepfakes in the social epistemic environment, these will not by themselves undercut the evidential weight of media content that is shared through trusted channels.
In this section, I have recounted the basis for the crisis narrative as well as the ways in which proponents of the backlash have responded to elements of that narrative. In what follows, I will assume that the substance of the backlash is largely correct. This should not be taken as an endorsement of every aspect of the backlash. In addition to the complications raised above, specific aspects of the backlash might be questioned on various grounds. My aim is thus the relatively modest one of considering what would be true if the substance of the backlash were largely correct.
4 Higher-order misinformation
Let us suppose, then, that the general thrust of the backlash to the crisis narrative is correct. We are supposing that the prevalence and influence of misinformation are far more limited than discussions in academia and the popular press would suggest. In short, we are supposing that, as Sacha Altay and colleagues (2023) have suggested, there is widespread “misinformation on misinformation” [1]. For reasons that will emerge in what follows, I prefer to refer to this phenomenon as higher-order misinformation, that is, misinformation about the prevalence or influence of misinformation.
It is worth emphasizing here that Altay and colleagues’ category of misinformation on misinformation includes both misinformation about how much misinformation there is and misinformation about how effective misinformation is in influencing attitudes. A failure to appreciate that informational content makes up only a small share of the content with which ordinary persons interact might contribute to the first form of misinformation. An exaggerated conception of the gullibility of ordinary persons might contribute to the second (Altay & Acerbi, 2023; Mercier, 2017, 2020).
Notably, some elements of the backlash suggest that, if there is such misinformation on misinformation, it is driven in part by the behaviors of the very individuals who are suspected of being susceptible to first-order misinformation. Consider, for example, those respondents who, in survey contexts, affirm outlandish falsehoods, including elements of the QAnon conspiracy theory. Suppose that, as some contributors to the backlash suggest, many such affirmations are not reports of sincere beliefs, but instead serve to express disapproval of Democrats, mainstream media figures, and so on. If this is indeed what is happening in such contexts, then many survey respondents are themselves passing on a sort of higher-order misinformation. By falsely reporting belief in outlandish falsehoods, they provide misleading evidence for the influence of misinformation on their own mental states.
Much misinformation of this sort concerns the prevalence of false beliefs and the susceptibility of beliefs to the malign influence of misinformation. One concern about such misinformation is that it encourages false beliefs about the prevalence of a certain kind of false beliefs—that is, it encourages false higher-order beliefs. Because it neatly captures this feature, I prefer the term “higher-order misinformation” to “misinformation on misinformation.” Misinformation of this sort might lead one, for example, to overestimate how many others believe in QAnon and other outlandish falsehoods. In the next sections, I consider some underappreciated implications of this point.
5 Higher-order misinformation and credibility judgments
In this section, I argue that certain forms of higher-order misinformation, especially those that exaggerate the prevalence of misinformed belief, can be expected to have the character of a self-fulfilling prophecy, tending to promote the very beliefs whose commonality they, at least initially, overestimate. In this way, higher-order misinformation can be expected, somewhat ironically, to amplify the deceptive influence of misinformation. A key premise in this line of argument is that the beliefs of others serve as evidence of the truth of the propositions believed. Given this premise, exaggerated reports of the commonality of beliefs in certain outlandish propositions can be expected to promote belief in those very propositions.
This key premise is widely accepted by epistemologists, albeit often implicitly, in the context of discussions of epistemic dependence on others. Such dependence is most commonly discussed in the context of the epistemology of testimony, within which it is often remarked that a great many of our beliefs are based on the testimony of others. However, epistemic dependence ought not be construed narrowly, as occurring only in contexts of testimony-based belief. Even when we directly form our beliefs based on others’ testimony, we often aim, in so doing, to form our beliefs based on others’ beliefs. Thus, if a source asserts that p, but one independently discovers that the source does not really believe that p, one will in typical cases cease to regard the source’s assertion as a good reason to believe that p. There are perhaps exceptions to this general tendency. One of these is suggested by Jennifer Lackey’s (2008, p. 48) widely-cited Creationist Teacher example. In this example, a biology teacher who privately believes in intelligent design nonetheless asserts in class the reality of evolution by natural selection, motivated by the aim of providing students with the best possible information. A student possessed of all this information might regard the teacher’s assertion as a good reason to believe in the reality of evolution by natural selection, even though the teacher does not have the corresponding belief. Such cases are atypical, however. More to the point, that a reasonably competent person believes that p is some evidence that p, even if this evidence is typically accessible only indirectly through that person’s assertions. Indeed, even in Creationist Teacher, it seems plausible enough that a student possessed of the information described above would regard the teacher’s private beliefs as some reason to doubt the reality of evolution by natural selection, even if this reason is outweighed by the teacher’s assertion.
The Creationist Teacher case is an unusual one, in which there is some reason to expect the teacher’s outward assertions to be more reliable indicators of the truth than the person’s private beliefs. Other cases of this sort might be imagined. A thoughtful agent who recognizes her susceptibility to certain biases might work to correct for these biases in her outward assertions, thereby consistently producing testimony that is better aligned with reality than her own private beliefs. In practice, however, testimony is typically more susceptible to distortions than private beliefs, for the simple reason that testimony can be deliberately shaped by the testifier’s deceptive intentions. Thus, at least in some cases, it would be preferable to form one’s beliefs based directly on others’ beliefs, rather than on their testimony.
Thus far in this section, I have sought to show that, although epistemic dependence is typically discussed in terms of dependence on others’ testimony, this need not be taken to indicate that another’s testimony is, strictly speaking, a better indicator of the truth than that person’s private beliefs. As the evident desirability of a hypothetical truth serum suggests, we regard others’ beliefs as highly epistemically valuable, even if information about these beliefs is typically only accessible by way of others’ testimony.
With this point in mind, let us return to the issue of higher-order misinformation. Supposing again that the backlash is substantively correct, some such misinformation takes the form of inflated reports as to the commonality of belief in various falsehoods. Consider a hypothetical but realistic case. Suppose that a report appears in a major newspaper indicating that roughly 45% of Americans (falsely) believe that the results of the 2020 US Presidential election were significantly distorted by the occurrence of widespread fraud, including ballot fabrication and the hacking of voting machines. Suppose further that, at the time of the study, only a small fraction of those reporting this belief genuinely held it. Others, who privately doubted the occurrence of widespread fraud, falsely reported this belief to signal their partisan loyalties, to express their disdain for the newspaper conducting the survey, or for some other reason beyond sincere belief. Thus, the reported survey results constitute a highly inflated estimate of the commonality of belief in widespread fraud.
What is a reader to make of such survey results? The answer will no doubt depend on the reader’s prior beliefs. Some readers, especially those who supported the winning candidate, are likely to treat the results as indicative of how unreasonable opposing partisans are. I discuss this point further in Sect. 6. Others, especially those familiar with elements of the backlash, may think that the results should not be interpreted literally.
But what of those who supported the losing candidate? Those who sincerely believed in the occurrence of widespread fraud will naturally take such results as some vindication of their conspiratorial suspicions. The results may thus entrench false beliefs they already had. The more interesting group is the group of supporters of the losing candidate who did not initially believe the allegations of widespread fraud, although they might have pretended to hold this belief in certain contexts. For such persons, these survey results—especially when compounded with broader reporting on the popularity of the fraud narrative—will naturally be taken as some evidence that there really was widespread voter fraud. After all, if a substantial portion of the population—including a majority of one’s co-partisans [2]—believes something, then one will naturally treat such beliefs as some evidence for the truth of the thing believed. In this way, inflated reports of the popularity of certain misinformed beliefs threaten, ironically, to support just such beliefs.
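The evidential mechanism at work here can be made vivid with a toy Bayesian model. (The numbers that follow are illustrative assumptions of my own, chosen only to exhibit the mechanism, not estimates of anyone’s actual credences.) Let f be the proposition that widespread fraud occurred, and let R be the report that roughly 45% of Americans believe f. Suppose an initially skeptical supporter assigns the prior credence P(f) = 0.1 and, treating believers as at least weakly reliable, regards the report as modestly more likely if fraud in fact occurred: say, P(R | f) = 0.3 and P(R | ¬f) = 0.1. Bayes’ theorem then yields

$$P(f \mid R) = \frac{P(R \mid f)\,P(f)}{P(R \mid f)\,P(f) + P(R \mid \neg f)\,P(\neg f)} = \frac{0.3 \times 0.1}{0.3 \times 0.1 + 0.1 \times 0.9} = 0.25.$$

On these modest assumptions, the misleading report alone more than doubles the supporter’s credence in the fraud narrative, without any direct exposure to first-order misinformation about the election.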
The preceding remarks suggest that a state in which large numbers of individuals are falsely believed to hold certain false beliefs is an unstable state. Insofar as perceptions of others’ beliefs are treated as reasons to believe, false beliefs to the effect that certain propositions are widely believed can be expected to encourage false beliefs in those propositions. But it might be suggested that, insofar as individuals are aware of their own tendencies to falsify their beliefs, they will not regard professions of beliefs by others as strong evidence concerning their actual beliefs [3]. Thus, inflated reports as to the prevalence of false beliefs will not lead to the development of actual false beliefs.
This reasoning is overly optimistic. First, there is little reason to assume that individuals think that their own reasons for professing certain beliefs will generally be the same as the reasons had by others. Indeed, insofar as individuals falsely profess certain beliefs as a way of ingratiating themselves with a certain group, or to signal loyalty to that group, there is, at least in some cases, reason to think that they regard members of that group as genuinely holding those beliefs. Admittedly, there are almost certain to be some cases in which the falsity of professed beliefs is a matter of common knowledge. This is plausibly the case, for example, with many of the outlandish ascriptions of extraordinary abilities directed toward some authoritarian leaders. Mercier (2020, Chap. 12) argues convincingly that such ascriptions serve as credible signals of loyalty, especially insofar as they indicate the speakers’ willingness to limit their opportunities to be accepted into competing groups (see also Williams, 2022). However, not all falsehoods are equally outlandish to all parties, and we ought not assume that endorsements of misinformation are always transparent to the parties involved. To illustrate, consider again the allegations of widespread electoral fraud discussed above. From a certain perspective, such allegations will be transparently absurd. But we ought not assume that all parties occupy this perspective. In contrast to claims to the effect that authoritarian leaders are possessed of supernatural abilities (the ascription of teleportation to Kim Jong-Il (Mercier, 2020, p. 190), for example), the claim that an election was subject to widespread fraud is comparatively grounded. History offers no examples of leaders who could teleport, but it does offer examples of fraudulent elections. Thus, where others express the belief that a given election was fraudulent, such expressions are likely to be regarded, in some cases, as sincere and credible.
Second, consider what would have to be the case if those falsifying their beliefs recognized their co-partisans as doing the same. This would require remarkable sophistication on the part of the falsifiers, effectively requiring them to recognize their role in a large-scale collective pretense that is not obvious to outsiders. The degree to which professions of belief in outlandish falsehoods are a matter of partisan cheerleading is a matter of controversy among academics. It would be surprising if this were a matter of common knowledge to insiders.
We thus have reason to think that even those who would be inclined to insincerely endorse false claims would not regard professions of those same false beliefs by others as insincere. Combined with the assumption that individuals will generally treat apparent beliefs of others, and especially trusted others, as reasons to believe the same, it follows that inflated reports as to the commonality of misinformed beliefs are likely to cause misinformed beliefs. In this way, although inflated reports as to the influence of misinformation begin as false, there is reason to expect them, through a dynamic of self-fulfilling prophecy, to become more accurate over time.
It may be worth considering this last point in terms of higher-order evidence. In general, higher-order evidence is evidence about the quality of evidence. Assuming that the people in question meet some threshold of reliability, the fact that a certain group of people believe a given proposition is some reason to think that there is good evidence for that proposition. What is more, if information about the particular evidential basis for a certain belief is available, this may constitute higher-order evidence for the quality of that particular evidence. To illustrate, let us return to the example above. Suppose that there is a particular body of (what is in fact misleading) evidence on the basis of which it is thought, erroneously, that roughly 45% of Americans believe that the results of the 2020 US Presidential election were substantially affected by fraud. If one believes that such a significant number of people hold the belief in question on the basis of that evidence, one will naturally treat this as some reason not only to accept the belief in question, but also to regard the relevant evidence as of somewhat high quality. In this way, higher-order misinformation that presents an inflated picture of the influence of misinformation threatens to lead to misperceptions about both what is true and the quality of the available evidence bearing on what is true.
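The same structure can be rendered schematically one level up. (This is again only a toy formalization of the informal point, on my own simplifying assumptions.) Let q be the proposition that a given body of evidence E is of high quality, and let D be the datum that many people believe on the basis of E. For any agent who regards widespread uptake of E as more likely when E is in fact good evidence, that is, for whom P(D | q) > P(D | ¬q), and whose prior in q is neither 0 nor 1, Bayes’ theorem guarantees that

$$P(q \mid D) = \frac{P(D \mid q)\,P(q)}{P(D \mid q)\,P(q) + P(D \mid \neg q)\,P(\neg q)} > P(q).$$

Inflated reports of uptake thus raise such an agent’s credence not only in the first-order proposition but also in the quality of the evidence offered for it.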
To conclude this section, it is worth considering an objection. One might worry that higher-order misinformation cannot constitute the sort of self-fulfilling prophecy that I have suggested because it does not provide the right sort of novel evidence to the relevant parties [4]. To grasp the objection, let us return to the example above. We might suppose that supporters of the losing candidate claim to believe that the election was fraudulent because they know that this is what supporters of that candidate generally do. In this case, reporting indicating the large numbers of people who profess to believe the election was fraudulent will not provide new information to supporters—they already knew that was what they claimed.
This line of objection oversimplifies the reasons for which supporters of the losing candidate might profess the belief that the election was fraudulent. Some supporters might do so because they expect other supporters to do the same. But others might do so for other reasons. For one example, some supporters might do so because they have seen the losing candidate make this claim and they wish to express their support for that candidate. For another, some supporters might do so to express their disdain toward the opposing side or, indeed, toward those conducting the survey. More generally, insincere professions of belief in falsehood need not be based on the expectation that others will do the same [5], and thus inflated reports of belief in falsehood can provide the needed sort of novel (misleading) evidence to allow higher-order misinformation to act as a self-fulfilling prophecy.
6 Higher-order misinformation and (mis)perceptions of disagreement
One effect commonly associated with misinformation is the feeling of disorientation, a pervasive uncertainty as to what is true and what is false (Benkler et al., 2018, p. 24). Given an epistemic environment populated with a mixture of legitimate information and its misinformational counterparts, individuals may despair of the ability to distinguish between the two. It has been argued that producing this feeling of disorientation is a chief aim of certain forms of disinformation and propaganda (Pomerantsev, 2014).
In this section, I argue for the somewhat surprising conclusion that, in addition to causing disorientation, misinformation sometimes functions to produce an unwarranted confidence in the correctness of one’s own positions. The problem I highlight here is the potential for higher-order misinformation to generate misconceptions about the bases of others’ beliefs. In particular, insofar as the influence of misinformation in shaping beliefs is overstated, individuals are likely to underestimate the reasonableness of others and the availability of legitimate support for their views. In this way, higher-order misinformation may confer an unwarranted sense of certainty on those who accept it.
Let us illustrate the foregoing abstract suggestions with an example. Suppose that a dangerous and contagious virus is spreading through a community, although the precise dangerousness and contagiousness of the virus are unknown. Some members of the community propose that the best course of action is to enforce strict restrictions on face-to-face interactions, where these restrictions include the closing of certain businesses and schools. They believe p, that the proposal should be implemented. Others are strongly opposed to any such restrictions, and believe ~p, that the proposal should not be implemented. Opponents of the proposal offer various grounds for their opposition. Some offer relatively grounded opposition, citing economic costs and consequences for the education and socialization of young people in the community. Others offer more outlandish bases for their opposition, insinuating that the proposal is part of a broader elite-led conspiratorial plot to dominate the community. Because of their sensational nature, these conspiratorial allegations attract greater attention, including by members of the press, than the more grounded bases for opposition. Because they recognize the need to maintain a broad coalition, those opposed to the proposal for grounded reasons sometimes intermingle with those that raise conspiratorial allegations, and even suggest that such suspicions should be taken seriously. As a consequence, those that endorse the proposal overestimate, albeit rationally in light of the prominence of conspiracy theories and their promoters, the degree to which opposition is based on such conspiratorial suspicions.
Under these circumstances, it would be natural, and indeed rational, for those that believe p to discount the epistemic significance of opposition to that proposal. Although disagreement by one’s peers plausibly offers some reason to doubt one’s own beliefs, disagreement by those that are not recognized to be peers ought not reduce one’s confidence in one’s beliefs, at least not to a similar degree. If you and I arrive at different results when we calculate what we each owe after deciding to split the bill, and I recognize you as equally reliable in simple mathematical calculations, then the fact that you reached a different result is at least some reason for me to reduce my confidence in my own calculation (Christensen, 2007). However, if, in a similar situation, we arrive at different results but I recognize you to be much worse at simple mathematical calculations than myself, there will be little pressure on me to reduce my confidence in my own result.
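The differential pressure exerted by peer and non-peer disagreement can be given a simple formal gloss. On one common toy model of conciliation (a deliberate simplification of my own choosing, not a formulation drawn from Christensen), an agent revises her credence in p by taking a weighted average of her own credence and that of the disagreeing party, with the weight w reflecting the party’s perceived reliability relative to her own:

$$c_{\text{new}}(p) = (1 - w)\,c_{\text{self}}(p) + w\,c_{\text{other}}(p).$$

If I am 0.9 confident in my calculation and treat you as a full peer (w = 0.5), your credence of 0.2 pulls me down to 0.55; if I regard you as far less reliable (w = 0.1), I move only to 0.83. Judgments of others’ reliability, of precisely the sort that higher-order misinformation distorts, thus translate directly into how much their disagreement rationally moves one.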
Let us return to our central case. It is worth acknowledging from the outset that, although this is a recognizable case of disagreement, it is a far messier case than the examples of disagreement often considered in the epistemological literature on the topic. Disagreement concerning p involves many individuals who, we may stipulate, have little information about the competence and evidential situation of those who agree with them and those who disagree with them. Thus, I think, it would be futile to attempt to specify precisely how parties to the disagreement ought to update their beliefs in light of the judgments of others. Still, we can at least say the following about this case. To the extent that those who believe p justifiedly believe that their counterparts’ beliefs that ~p are based on outlandish conspiratorial suspicions, rather than relatively plausible if perhaps misguided concerns, their willingness to conciliate should be, and likely will be, mitigated.
The more general lesson of this section is that higher-order misinformation that exaggerates the influence of a body of misinformation threatens to produce underestimations of the legitimacy of other considerations favorable to beliefs and policies similar to the ones supported by that body of misinformation. In short, higher-order misinformation sometimes functions to make strawmen of the opposition (cf. Mercier, 2017, p. 115). The example provided is, it will be noticed, closely based on real controversies that arose during the height of the COVID-19 pandemic. But the potential of higher-order misinformation to distort perceptions of the reasons bearing on a particular belief or policy may be realized in a wide range of other cases. Recall that part of the backlash described in Sect. 3 is the claim that conspiracy theories are far less widely believed than a great deal of reporting in the popular press would suggest. But, while the degree to which conspiracy theories enjoy sincere belief has been contested, it is not generally contested that, across a wide range of topics, outlandish conspiracy theories have arisen. Consider just a few examples. Proposals for 15-minute cities have been met with outlandish conspiracy theories alleging that such proposals are intended to introduce new regimes of surveillance and control (Silva, 2023). Content moderation policies on social media have been opposed on the grounds that these are part of a plot by big tech to suppress free speech and enforce orthodoxies favorable to elites (de Keulenaar et al., 2021; Thompson, 2022). Some opponents of COVID-19 vaccine mandates have claimed that these vaccines contain microchips that are part of a plot by Bill Gates to surveil ordinary civilians (Goodman & Carmichael, 2020).
Readers may be of different minds concerning the cases against some or all of the policies described above, and against corresponding beliefs concerning what ought to be done. I expect it will be allowed, however, that the conspiracy theories cited are not the best available reasons against the policies in question. Thus, insofar as higher-order misinformation exaggerates the degree to which opposition to these policies is due to belief in such conspiracy theories, higher-order misinformation underestimates the force of certain considerations that, even if ultimately misguided, are at least more plausible than the corresponding conspiracy theories. In this way, those that accept higher-order misinformation concerning the influence of conspiracy theories and other forms of misinformation on their opponents’ beliefs are at risk of dismissing such beliefs, and the grounds for them, too readily.
In the cases described thus far, many readers are likely to think that, even if higher-order misinformation leads to too-quick dismissals of the beliefs of the opposition, dismissing these beliefs is ultimately appropriate. It is thus worth highlighting that nothing in the mechanism described here would prevent higher-order misinformation from leading to the too-quick dismissal of even entirely correct beliefs. Consider a relatively complex matter concerning which a given layperson has difficulty assessing the first-order evidence, and thus looks to social evidence concerning what others believe and why as a cue to the quality of that first-order evidence. Suppose that the relevant question is whether to believe q. The layperson in question is aware of several considerations for and against q, but is not sure how to assess certain considerations against q. While the layperson sees some such considerations as plausible, he regards others as highly outlandish. Higher-order misinformation, derived from survey data and interviews with those that (outwardly) reject q, suggests that the vast majority of those who reject q do so based on considerations that the layperson deems highly outlandish. In fact, the more grounded reasons for rejecting q are weighty, and indeed q is false. However, because the layperson reasons that, if these reasons for rejecting q were powerful, most people who reject q would do so on these bases, the layperson concludes that these reasons against q must be weak. In other words, the higher-order misinformation in this case furnishes misleading higher-order evidence to the effect that certain evidence against q is weak. Even though q is in fact false, this higher-order evidence might lead a reasonable person to wrongly dismiss quality evidence and thus accept q. More generally, higher-order misinformation might lead not only to excessive certainty about the truth, but ultimately to false beliefs on target matters.
Thus far I have said little about the effects of generative artificial intelligence, and exaggerations of its impacts, on the challenges highlighted here. Put simply, deepfakes and other forms of generative artificial intelligence threaten to supercharge the present challenge by providing simple and convenient grounds for dismissing opposing views. As Rini (2020) has emphasized, deepfakes in particular provide a basis for readily dismissing evidence that might otherwise be taken to support opposing views. Whereas video footage might once have been powerful evidence against one’s existing beliefs, the serious possibility that any given video footage is fake severely compromises such evidence. Suppose, however, that deepfakes pose far less of a threat than is commonly supposed, and thus that rumors as to the prevalence and influence of deepfakes are a sort of higher-order misinformation. Such reports might lead one to overestimate the degree to which one’s opponents’ beliefs are based on deepfakes and related forms of misinformation. In this case, despite the limited direct influence of deepfakes, the appeal to the possibility of deepfakes might nonetheless be used to dismiss opposition to one’s own views. We need not puzzle as to why others disagree with us or treat this disagreement as a reason to consider our own views more carefully, if we have at hand a convenient explanation as to how our opponents can be consistently wrong.
7 Concluding remarks
I have introduced the concept of higher-order misinformation and I have argued that such misinformation is likely to lead to two types of challenges. First, higher-order misinformation, ironically, offers misleading higher-order evidence as to the credibility of the ground-level misinformation whose prevalence and influence it concerns. Second, higher-order misinformation presents an oversimplified picture of the force of reasons bearing on certain beliefs and policies, and for this reason threatens to encourage the overly-hasty, or indeed entirely misguided, dismissal of opposing views.
On the face of things, there seems to be tension between these two points. On the one hand, higher-order misinformation threatens to problematically inflate the force of target bodies of evidence. On the other hand, higher-order misinformation threatens to problematically deflate the force of target bodies of evidence. Despite appearances, there is no tension here. Whether higher-order misinformation leads to the overestimation or underestimation of a body of evidence will depend on the background beliefs of the person that consumes it. For some, especially those already inclined to favor a certain belief or policy, and to respect those who endorse it, higher-order misinformation concerning the body of evidence bearing on that belief or policy is likely to lead to overestimation of the quality of that evidence. For others, especially those disinclined toward that belief or policy, and who have little respect for those who endorse it, higher-order misinformation concerning the body of evidence bearing on that belief or policy is likely to lead to underestimation of the quality of that evidence. In this way, higher-order misinformation can be expected to exacerbate existing polarization.
The challenge of polarization is a familiar and vexing one. Considering the issue through the lens of higher-order misinformation helps to shed light on a potential, partial remedy. I have argued that polarization may be fed by varying perceptions of the degree to which support for or opposition to a given belief or policy is based on misinformation. The challenge of polarization may thus be to some extent mitigated through efforts by opposing groups to disavow the sort of misinformation that, while congenial to their own positions, leads to underestimations of the grounds for those positions.
To conclude, it is worth reiterating that the ascription of these consequences to higher-order misinformation is contingent on the supposition that key elements of the backlash are correct and, in particular, that the influence of misinformation has been substantially overestimated. Whether or to what degree this supposition is accurate remains a matter of contention. This paper has identified some of the problems that would be caused by higher-order misinformation that overestimates the influence of misinformation. But it should also be emphasized that there are dangers associated with underestimating the influence of misinformation. For one thing, if we mistakenly fail to attribute beliefs to others that they themselves endorse, we arguably commit a kind of epistemic injustice (Fricker, 2007). For another, if we fail to appreciate the extent to which beliefs and preferences are based on misinformation and disinformation, we may consequently take such beliefs and preferences too seriously, thus depriving more deserving ideas of attention. It is thus important to further study the actual prevalence and influence of misinformation in general and higher-order misinformation in particular.
Notes
1. In a similar vein, Mercier (2020, pp. 262–265) discusses “gullibility about gullibility” and Dan Williams argues that “the current panic about a misinformation epidemic is itself rooted in fake news” (2023b).
2. As others have noted, individuals have good reason to expect their co-partisans to be relatively competent and benevolent, and thus to be relatively reliable (Rini, 2017). For this reason, it is to be expected that individuals will often place heightened epistemic weight on what are reported to be the beliefs of their co-partisans.
4. Thanks to an anonymous referee for raising this objection.
5. For example, in Brian F. Schaffner and Samantha Luks’ (2018) study of expressive responding concerning the relative sizes of Obama and Trump’s inauguration crowds, it is not suggested that Trump supporters provide false responses because they expect others to do the same. Rather, the authors suggest that it is the desire to show support for Trump that drives expressive responding.
References
Altay, S., & Acerbi, A. (2023). People believe misinformation is a threat because they assume others are gullible. New Media & Society. Advance online publication. https://doi.org/10.1177/14614448231153379
Altay, S., Berriche, M., & Acerbi, A. (2023). Misinformation on misinformation: Conceptual and methodological challenges. Social Media + Society, 9(1). https://doi.org/10.1177/20563051221150412
Balakrishnan, V., Ng, W. Z., Soo, M. C., Han, G. J., & Lee, C. J. (2022). Infodemic and fake news – a comprehensive overview of its global magnitude during the COVID-19 pandemic in 2021: A scoping review. International Journal of Disaster Risk Reduction, 78, 103144. https://doi.org/10.1016/j.ijdrr.2022.103144
Benkler, Y., Faris, R., & Roberts, H. (2018). Network propaganda: Manipulation, disinformation, and radicalization in American politics (1st ed.). Oxford University Press. https://doi.org/10.1093/oso/9780190923624.001.0001
Bjerring, J. C., Hansen, J. U., & Pedersen, N. J. L. L. (2014). On the rationality of pluralistic ignorance. Synthese, 191(11), 2445–2470. https://doi.org/10.1007/s11229-014-0434-1
Bullock, J. G., Gerber, A. S., Hill, S. J., & Huber, G. A. (2015). Partisan bias in factual beliefs about politics. Quarterly Journal of Political Science, 10(4), 519–578. https://doi.org/10.1561/100.00014074
Carlson, M. (2020). Fake news as an informational moral panic: The symbolic deviancy of social media during the 2016 US presidential election. Information Communication & Society, 23(3), 374–388. https://doi.org/10.1080/1369118X.2018.1505934
Christensen, D. (2007). Epistemology of disagreement: The good news. Philosophical Review, 116(2), 187–217. https://doi.org/10.1215/00318108-2006-035
Cox, J. (2019, October 7). Most deepfakes are used for creating non-consensual porn, not fake news. Vice. https://www.vice.com/en/article/7x57v9/most-deepfakes-are-porn-harassment-not-fake-news
de Keulenaar, E., Burton, A. G., & Kisjes, I. (2021). Deplatforming, demotion and folk theories of Big Tech persecution. Fronteiras - Estudos Midiáticos, 23(2), 118–139. https://doi.org/10.4013/fem.2021.232.09
De Ridder, J. (2021). What’s so bad about misinformation? Inquiry, 1–23. https://doi.org/10.1080/0020174X.2021.2002187
Eady, G., Paskhalis, T., Zilinsky, J., Bonneau, R., Nagler, J., & Tucker, J. A. (2023). Exposure to the Russian Internet Research Agency foreign influence campaign on Twitter in the 2016 US election and its relationship to attitudes and voting behavior. Nature Communications, 14(1), 62. https://doi.org/10.1038/s41467-022-35576-9
Enders, A. M., Uscinski, J. E., Klofstad, C. A., Wuchty, S., Seelig, M. I., Funchion, J. R., Murthi, M. N., Premaratne, K., & Stoler, J. (2022). Who supports QAnon? A case study in political extremism. The Journal of Politics, 84(3), 1844–1849. https://doi.org/10.1086/717850
Fallis, D. (2014). The varieties of disinformation. In L. Floridi & P. Illari (Eds.), The philosophy of information quality (Vol. 358, pp. 135–161). Springer International Publishing. https://doi.org/10.1007/978-3-319-07121-3_8
Fallis, D., & Mathiesen, K. (2019). Fake news is counterfeit news. Inquiry, 1–20. https://doi.org/10.1080/0020174X.2019.1688179
Floridi, L. (2011). The philosophy of information. Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199232383.001.0001
Foer, F. (2018, April 8). The era of fake video begins. The Atlantic. https://www.theatlantic.com/magazine/archive/2018/05/realitys-end/556877/
Fricker, M. (2007). Epistemic injustice: Power and the ethics of knowing. Oxford University Press.
Ganapini, M. B. (2022). Absurd stories, ideologies & motivated cognition. Philosophical Topics, 50(2), 21–40.
Ganapini, M. B. (2023). The signaling function of sharing fake stories. Mind & Language, 38(1), 64–80. https://doi.org/10.1111/mila.12373
Gharpure, R., Hunter, C. M., Schnall, A. H., Barrett, C. E., Kirby, A. E., Kunz, J., Berling, K., Mercante, J. W., Murphy, J. L., & Garcia-Williams, A. G. (2020). Knowledge and practices regarding safe household cleaning and disinfection for COVID-19 prevention—United States, May 2020. MMWR Morbidity and Mortality Weekly Report, 69(23), 705–709. https://doi.org/10.15585/mmwr.mm6923e2
Goodman, J., & Carmichael, F. (2020, May 29). Coronavirus: Bill Gates ‘microchip’ conspiracy theory and other vaccine claims fact-checked. BBC News. https://www.bbc.com/news/52847648
Grosz, D. (2020). The irrationality of pluralistic ignorance. Episteme, 17(2), 195–208. https://doi.org/10.1017/epi.2018.35
Habgood-Coote, J. (2023). Deepfakes and the epistemic apocalypse. Synthese, 201(3), 103. https://doi.org/10.1007/s11229-023-04097-3
Hannon, M., & de Ridder, J. (2021). The point of political belief. In M. Hannon & J. de Ridder (Eds.), The Routledge handbook of political epistemology (pp. 156–166). Routledge.
Harris, K. R. (2021). Video on demand: What deepfakes do and how they harm. Synthese, 199(5–6), 13373–13391. https://doi.org/10.1007/s11229-021-03379-y
Harris, K. R. (2022). Real fakes: The epistemology of online misinformation. Philosophy & Technology, 35(3), 83. https://doi.org/10.1007/s13347-022-00581-9
Harris, K. R. (2023). Beyond belief: On disinformation and manipulation. Erkenntnis. https://doi.org/10.1007/s10670-023-00710-6
Harris, K. R. (2024). Misinformation, content moderation, and epistemology: Protecting knowledge (1st ed.). Routledge. https://doi.org/10.4324/9781032636900
Hughes, H. C., & Waismel-Manor, I. (2021). The Macedonian fake news industry and the 2016 US election. PS: Political Science & Politics, 54(1), 19–23. https://doi.org/10.1017/S1049096520000992
Jamieson, K. H. (2020). Cyberwar: How Russian hackers and trolls helped elect a president: What we don’t, can’t, and do know. Oxford University Press. https://doi.org/10.1093/oso/9780190058838.001.0001
Jaster, R., & Lanius, D. (2021). Speaking of fake news: Definitions and dimensions. In S. Bernecker, A. K. Flowerree, & T. Grundmann (Eds.), The epistemology of fake news (pp. 19–45). Oxford University Press.
Lackey, J. (2008). Learning from words: Testimony as a source of knowledge. Oxford University Press.
Levy, N. (2022a). Bad beliefs: Why they happen to good people (1st ed.). Oxford University Press.
Levy, N. (2022b). Conspiracy theories as serious play. Philosophical Topics, 50(2), 1–19. https://doi.org/10.5840/philtopics202250214
Litman, L., Rosen, Z., Hartman, R., Rosenzweig, C., Weinberger-Litman, S. L., Moss, A. J., & Robinson, J. (2023). Did people really drink bleach to prevent COVID-19? A guide for protecting survey data against problematic respondents. PLOS ONE, 18(7), e0287837. https://doi.org/10.1371/journal.pone.0287837
McIntyre, L. C. (2018). Post-truth. MIT Press.
Mercier, H. (2017). How gullible are we? A review of the evidence from psychology and social science. Review of General Psychology, 21(2), 103–122. https://doi.org/10.1037/gpr0000111
Mercier, H. (2020). Not born yesterday: The science of who we trust and what we believe. Princeton University Press. https://doi.org/10.1515/9780691198842
Nyhan, B. (2020). Facts and myths about misperceptions. Journal of Economic Perspectives, 34(3), 220–236. https://doi.org/10.1257/jep.34.3.220
Paris, B., & Donovan, J. (2019, September 18). Deepfakes and Cheap Fakes. Data & Society Research Institute. https://datasociety.net/library/deepfakes-and-cheap-fakes/
Pomerantsev, P. (2014, September 9). How Vladimir Putin Is Revolutionizing Information Warfare. The Atlantic. https://www.theatlantic.com/international/archive/2014/09/russia-putin-revolutionizing-information-warfare/379880/
PRRI Staff (2022, February 24). The persistence of QAnon in the post-Trump era: An analysis of who believes the conspiracies. PRRI. https://www.prri.org/research/the-persistence-of-qanon-in-the-post-trump-era-an-analysis-of-who-believes-the-conspiracies/
Rini, R. (2017). Fake news and partisan epistemology. Kennedy Institute of Ethics Journal, 27(S2), 43–64.
Rini, R. (2020). Deepfakes and the epistemic backstop. Philosophers’ Imprint, 20(24), 1–16.
Schaffner, B. F., & Luks, S. (2018). Misinformation or expressive responding? Public Opinion Quarterly, 82(1), 135–147. https://doi.org/10.1093/poq/nfx042
Seitz-Wald, A. (2024, February 26). Democratic operative admits to commissioning fake Biden robocall that used AI. NBC News. https://www.nbcnews.com/politics/2024-election/democratic-operative-admits-commissioning-fake-biden-robocall-used-ai-rcna140402
Silva, M. (2023, October 3). 15-minute cities: How they got caught in conspiracy theories. BBC News. https://www.bbc.com/news/uk-politics-66990302
Steele, C. (2023, October 18). The Internet Is Full of Deepfakes, and Most of Them Are Porn. PCMag UK. https://uk.pcmag.com/the-why-axis-serie/149211/the-internet-is-full-of-deepfakes-and-most-of-them-are-porn
Swenson, A., & Weissert, W. (2024). New Hampshire investigating fake Biden robocall meant to discourage voters ahead of primary. Associated Press. https://modules.wearehearken.com/associated-press/embed/11386/share
Thompson, S. A. (2022, February 23). Fed Up With Google, Conspiracy Theorists Turn to DuckDuckGo. The New York Times. https://www.nytimes.com/2022/02/23/technology/duckduckgo-conspiracy-theories.html
Warzel, C. (2018, February 12). Believable: The Terrifying Future of Fake News. BuzzFeed News. https://www.buzzfeednews.com/article/charliewarzel/the-terrifying-future-of-fake-news
Williams, D. (2022). Identity-defining beliefs on social media. Philosophical Topics, 50(2), 41–64. https://doi.org/10.5840/philtopics202250216
Williams, D. (2023a). The marketplace of rationalizations. Economics and Philosophy, 39(1), 99–123. https://doi.org/10.1017/S0266267121000389
Williams, D. (2023b, June 7). The Fake News about Fake News. Boston Review. https://www.bostonreview.net/articles/the-fake-news-about-fake-news/
Williams, D. (2024). Misinformation and disinformation are not the top global threats over the next two years. Conspicuous Cognition. https://www.conspicuouscognition.com/p/misinformation-and-disinformation
Willingham, A. J. (2020, October 3). How the pandemic and politics gave us a golden age of conspiracy theories. CNN. https://www.cnn.com/2020/10/03/us/conspiracy-theories-why-origins-pandemic-politics-trnd/index.html
World Economic Forum (2024). The Global Risks Report 2024, 19th edition. https://www3.weforum.org/docs/WEF_The_Global_Risks_Report_2024.pdf
World Health Organization (2020). Managing the COVID-19 infodemic: Promoting healthy behaviours and mitigating the harm from misinformation and disinformation. https://www.who.int/news/item/23-09-2020-managing-the-covid-19-infodemic-promoting-healthy-behaviours-and-mitigating-the-harm-from-misinformation-and-disinformation
Acknowledgements
I would like to thank two anonymous referees for their feedback on earlier drafts of this paper. This research was funded in part by the Austrian Science Fund (FWF) [https://doi.org/10.55776/COE3]. For open access purposes, the author has applied a CC BY public copyright license to any author-accepted manuscript version arising from this submission. The sole responsibility for the content of this publication lies with the author.
Funding
Open access funding provided by University of Vienna.
Ethics declarations
Competing interests
The author has no competing interests to report.
Additional information
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Harris, K.R. Higher-order misinformation. Synthese 204, 127 (2024). https://doi.org/10.1007/s11229-024-04763-0