DOI: https://doi.org/10.62229/rrfaxvi-1/3
Abstract: This text critically examines whether psychometric scales represent a robust measurement choice when studying conspiracy theories: a key philosophical and methodological gap in the literature on conspiracy theories. I call into question whether such scales have content validity and predictive validity, and whether studies employing these instruments manifest external validity. These issues manifest differently across the two types of scales examined. The adequate development of applied scales is unfeasible because it is impossible to objectively define an ideal combination of items that fully captures the conspiratorial themes they aim to measure. Applied scales will, then, always have limited content validity, which will not only impair our ability to understand whether they really measure the construct in question but will also prevent us from using them in a standardized way. While generic scales may seem superior to applied scales in that they allow for standardized measurement, they seem to suffer from the same problem due to the theoretically limitless number of dimensions needed to fully capture conspiratorial ideation. Consequently, the degree to which predictions made on the basis of these scales are valid (i.e. predictive validity) and the extent to which they can be generalized (i.e. external validity) become unclear. In this text, I argue that the employment of psychometric scales does not represent a robust method of measuring conspiracy theories. This situation raises concerns regarding the current state of the literature, since these instruments are widely used in this research area. Given the discussed shortcomings, I propose a novel approach to measurement, one that involves the indirect assessment of conspiracy theories. Moreover, a better alternative to existing measures is considered, namely discourse analysis.
Keywords: conspiracy theories, psychometric scales, measurement validity, content validity, methodological critique
Daniel-Radu Iordache[1]
(Re)Theorizing the measurement of conspiracy theories
1. (Re)Theorizing the Measurement of Conspiracy Theories
Conspiracy theories have a long history, and yet they started to pique researchers’ interest mostly in the last two decades (Douglas et al., 2017). Some evidence suggests that belief in, and dissemination of, conspiracy theories were frequent as far back as Antiquity, particularly in ancient Rome. When the Great Fire of Rome broke out, many Christians entertained the theory according to which Nero had asked his subordinates to burn the city in order to rebuild it according to his own ideals. In retaliation, Nero initiated his own conspiratorial account of the event, which ultimately led to the severe punishment of many Christians (van Prooijen & Douglas, 2017, p. 326). Despite this long history, the first attempts at a thorough review of the literature on conspiracy theories did not appear until after 2015 (e.g. Douglas et al., 2017; Douglas & Sutton, 2018; Douglas et al., 2019).
The current text aims to advance the state of this research field by addressing a methodological and philosophical gap, namely whether the usage of psychometric scales constitutes a suitable way of measuring conspiratorial beliefs[2]. I will argue that it does not, since these scales could never fully account for the theoretically infinite number of possible conspiratorial narratives that can be advanced for a certain event (Enders et al., 2021). In turn, this impacts the accuracy of predictions made on the basis of the results, rendering their generalizability uncertain. Consequently, I end by proposing a novel approach to measuring conspiracy theories.
2. What is a conspiracy theory?
Most conspiracy theories are narratives[3] in which a malicious actor works in secret towards fulfilling some nefarious goal at the expense of society at large (Douglas et al., 2019). Conspiracy theories concern patterns that (non-factively) explain how people, events and objects are correlated, resulting in the belief in an imminent threat (van Prooijen & van Vugt, 2018).
One famous example of a conspiracy theory, which has no fewer than 175 versions, calls into question the apparently mysterious death of Princess Diana (Griffin, 2022). According to these theories, what happened on the tragic night of the car crash in 1997 was not an accident but rather was orchestrated by somebody who wanted to murder Diana. Who? The agents of the British state who could not bear the fact that she may have been pregnant; the driver of the car, who was not in fact drunk, contrary to the official records; the paparazzi who may have created an environment in which the murder could look like an accident; the negligent doctors who cared for Diana before her death; the driver of another car who also presumably killed Diana’s lover beforehand; and the list goes on and on. The theories were so popular at the time of the accident that the police launched a huge investigation to assess whether the claims had any merit. Even though the vast majority of the conspiratorial accounts have been debunked, suspicions still resurface in the wider public, even after so many years.
2.1 What do conspiracy theories have in common with fake news?
Some researchers place conspiracy theories under the larger umbrella of fake news (e.g. Research Guides: Fake News and Information Literacy: What Is Fake News?, n.d.; IONOS editorial team, 2020). However, it must be noted that the two concepts only partially overlap. In this section, I will focus on one important point of convergence: both of them involve misinformation or disinformation. This distinction has implications for understanding conspiracy theories, since the reasons why people believe in them and the reasons why people distribute them tend to be conflated in the literature (Douglas et al., 2019).
In some instances, the belief in, and the dissemination of, fake news or conspiracy theories are driven by a genuine, poorly informed concern over a potentially true report of an event (i.e. misinformation) (Buchanan & Kempley, 2021). The study of conspiracy theories reveals a close association between conspiracies and misinformation (e.g. Buchanan, 2020; Lobato et al., 2020; Pennycook et al., 2020; Wong et al., 2021, etc.). Previous research has shown that individuals who believe in a given conspiracy theory also tend to disseminate that idea further in order to ensure that it is represented in the general informational landscape (Bessi et al., 2015). Understanding conspiracy theories as misinformation implies that they have something to do with risk aversion and trust.
To be risk averse while conspiracy theorizing involves not letting your guard down in case the danger you are afraid of actually materializes, even if you are not always sure that danger exists. Treating potential perils as real constituted an adaptive advantage throughout evolutionary history, and this partly explains how conspiracy theories may have helped our ancestors survive (van Prooijen & van Vugt, 2018). According to the authors, conspiracy theories made it possible for us to detect and avoid potentially malevolent coalitions that could harm us by triggering awareness and action (becoming more cautious, fleeing, or launching preemptive counterattacks) when certain cues were perceived in the environment. If, for instance, tribe A has suffered for a long time from a shortage of food, whereas tribe B is known to be abundant in resources, and B knows of A’s situation, B has reasons to believe A could plan an attack. Were B not to become suspicious and vigilant towards A’s behavior through conspiracy theorizing, B could be exterminated. B’s reasoning is arguably conspiratorial in this scenario, because its people speculate about A’s alleged bad intentions, create a broader narrative as to why A is dangerous, search for clues indicating a secret attack, and so on. One possible cue that could trigger group B’s skepticism concerning group A is the absence of prior interactions between the two groups, which makes A’s behavior unpredictable in the eyes of group B.
Van Prooijen and van Vugt (2018) claim that B’s behavior could occur only if human cognition developed a separate conspiracy thinking system, whose activation was prompted by our interaction with the environment. This system would allow us to assess, manage and act upon risks even if they were not real, by generating belief in, and communication of, conspiracy theories whose role was to enhance our vigilance. Unsurprisingly, under certain conditions, the system predisposes us even to this day to fall prey to conspiracy theories that alert us to the malicious intent of actors that presumably want to threaten not just us as individuals, but the group as a whole. Conspiracy theories allow us to protect our own group from dangerous out-groups (Douglas et al., 2017) and to scapegoat potential intruders (Jolley et al., 2018), just as they may have allowed B’s people to unite against A’s people. From the perspective of risk aversion, conspiracy theorizing that takes the form of misinformation runs as follows: I endorse a conspiratorial perspective, I believe in its truthfulness and disseminate it to my peers so that all of us become vigilant against the unseen enemy.
What about the relationship between trust and conspiracy theories? According to Pierre (2020), people are not attracted to conspiracy theories themselves, but rather to narratives that reject what gets to count as official records, which are deemed untrustworthy. The tendency is fueled by a chronic lack of trust in official epistemic authorities from inside a state (e.g. doctors, politicians, rich people, policemen) or outside of it (e.g. the European Union), that supposedly control the flow of information. As a consequence, the more trust-shattering experiences and interactions an individual has with an epistemic authority, the stronger the inclination to go down the rabbit hole in search of biased alternative “truths”. For instance, consider the case of the recent pandemic crisis, in which people’s deficit of trust proved key: individuals found themselves alone, with no support, surrounded by blame-games, suspicions, lack of compliance, all of which arguably increased individuals’ openness to conspiracy theories (Jakovljevic et al., 2020).
On occasion, fake news and conspiracy theories are disseminated by people wanting to gain certain benefits, like political status or money (i.e. disinformation) (Ahmed et al., 2020; Buchanan & Kempley, 2021). Such “conspiracy entrepreneurs” (Campion-Vincent, 2015) need not endorse the fictions they disseminate; therefore, some people disinform, whereas others get misinformed. It is difficult to assess the magnitude of this phenomenon, but we have reasons to believe it is significant. Consider, for instance, the huge communities created by the likes of Donald Trump, a character known for his tendency to exploit conspiratorial accounts, even fabricated ones, for his own gain (Douglas et al., 2019, p. 23). By “constant but careful deployment of conspiracy theories” (Bergmann & Butter, 2020, p. 338), Trump addresses both those who deem his conspiratorial accounts true and those who do not. Moreover, “by using the ‘safety net’ of hearsay, Trump ensures that he can always deny allegations that he is spreading conspiracy theories” (Bergmann & Butter, 2020, p. 339). Judging by his following on certain social media channels (e.g. at the time of this writing, Trump gathers an astounding 87.2M followers on X), it can be stated that he managed to create a community in which a lot of people get misinformed through deliberate disinformation.
2.2 How do conspiracy theories differ from fake news?
A semantic difference can be observed: we refer to fake news as being news, as opposed to conspiracy theories, which are referred to as theories. Unlike fake news, conspiracy theories are full-blown perspectives that can minutely describe what is happening, who is responsible and, most importantly, why it is happening; they are subversive and oppositionist in nature, which is why their believers sometimes self-isolate from their peers, or get excluded from their previous groups (Douglas et al., 2017; Douglas et al., 2019; Pierre, 2020).
The fact that conspiracy theories are theories matters if we are to understand the polarization between individuals who embrace conspiratorial perspectives and their critics, as the term “theory” may refer to at least three different things: an established account, a hypothesis, or a hunch (Duetz, 2023, p. 441). Different perspectives as to what represents a theory may generate different perspectives as to what constitutes relevant evidence for that theory; these differences may be “so far apart that bridging the divide between their respective positions seems impossible” (Duetz, 2023, p. 447). Thus, it can be argued that it is not only the content of conspiracy theories that generates conflicts, but also their very nature as theories, that can be supported or dismissed by appeal to evidence, which individuals construe differently.
What is it about the content of conspiracy theories that sparks such controversy? The answer lies in identifying another crucial difference between fake news and conspiracy theories: unlike fake news, which is factually untrue (Here’s How You Can Spot Fake News Online, 2022), conspiratorial accounts “are close enough to verifiability to be plausible and are at the same time unfalsifiable enough to be unverifiable” (Albarracín, 2021, p. 376). For example, you can theoretically verify whether airlines spray chemicals into the air, which makes the claim at least somewhat plausible. However, at the same time, the narrative is too vague to be verified or proven false. Therefore, although the arguments advanced by conspiracy theories seem to be testable in principle (i.e. they can be supported or not by evidence), most of the time they do not get definitively disproven, in contrast to fake news. This happens because a conspiracy theory always makes room for mistakes on the part of its theorizer by calling upon uncertainty and speculative plots maneuvered by nefarious minds that are actively trying to cover up their tracks. So, if a conspiracy actually turns out to be true, it does not matter for the believer that most of the initial premises supporting the theory were wrong or inaccurate; what really matters is that there was indeed a conspiracy waiting to be found.
As such, some may argue that believing in conspiracy theories is epistemologically unwarranted or unreasonable, and that their believers are gullible. For instance, Napolitano (2021, as cited in Duetz, 2022) suggests that the endorsement of conspiracy theories represents an irrational stance that persists in spite of counterarguments or evidence that undermine the theories. In other cases, such behavior may be classified as an epistemic malfunction, one that leads the believer to act in accordance with the conspiracy theory in spite of undefeated and easily available evidence that makes the probability of a conspiracy very low (e.g. Simion, 2023).
In reality, the line between what is rational and irrational when believing in conspiracy theories is blurrier than it might seem. Not only is conspiracy theorizing a universal phenomenon (van Prooijen & van Vugt, 2018), but also “conspiracy beliefs are common […], so everyone is to some extent likely to believe in conspiracy theories” (Douglas & Sutton, 2018, p. 259). Therefore, the rationality of conspiratorial narratives should not be considered only in relation to their content, but also in relation to what makes them appealing to particular people. For instance, Machiavellians, who have a paranoid and cynical outlook on life (e.g. Paulhus, 2014), may be attracted to conspiracy beliefs in part due to their suspicious nature (Brotherton & Eser, 2015; Kay, 2021), while people with a precarious financial situation may use conspiracy theories to blame the ones responsible for their predicament (Jolley et al., 2018). At other times, the existence of conspiracy beliefs may actually encourage governments to be more transparent or help uncover discrepancies in official accounts (Douglas et al., 2019). Finally, it is important to note that some conspiracies have actually turned out to be real (e.g. the Watergate scandal; Zapata, 2024a). Therefore, a fair understanding of conspiracy theorizing needs to take such facts into consideration.
2.3 Psychological mechanisms underlying belief formation and maintenance
Up to this point, we explored several ways in which fake news and conspiracy theories overlap but also differ from each other. On the one hand, the two concepts overlap insofar as both can manifest either as misinformation or disinformation. Understanding conspiracy theorizing as a form of misinformation reveals its close association with lack of trust and risk aversion. When conspiracy theories are used to disinform, their content may be fictional. On the other hand, fake news and conspiracy theories differ to the extent that the former consist of simple true or false statements, whereas the latter represent fully-fledged unfalsifiable interpretations, which can explain in great detail how, why and who may want to harm us from the shadows. Due to their occasionally far-fetched explanations, some wonder if belief in conspiracy theories is rational at all (Napolitano, 2021, as cited in Duetz, 2022), but the answer is not as clear-cut as it might seem. In what follows, I will focus on the psychological mechanisms underlying the formation of conspiratorial beliefs, as well as their maintenance.
One specific moment in which conspiracy theories seem to thrive and flourish is at the onset of a crisis (e.g. Buturoiu et al., 2021; Zeng, 2021). A crisis often triggers accelerated change in a society, whose management requires distinct power structures, rules, norms, and behaviors (van Prooijen & Douglas, 2017). Crises thwart the fulfillment of our epistemic, existential, and social needs, consequently predisposing us to endorse conspiracy beliefs (the deficit model; Douglas et al., 2017). According to the deficit model, this is due to people’s desire to understand their environment (i.e. epistemic needs), with conspiracy theories providing quick, apparently coherent, and satisfactory explanations as to who is guilty and why the course of events is as it is, making sense of the situation. Understanding what is happening is a prerequisite for having the capacity to act upon the environment, highlighting the fact that we have an existential need to feel in control of external entities because it gives us predictability. Conspiracy theories not only restore the predictability of the environment by showing us who to be wary of and what might happen next, but they also place us in a position to reject the official narrative. Finally, conspiratorial perspectives may also help us fulfill certain social needs, as they allow us to protect the image of our own group, to denigrate intruders and outsiders, and to feel special because we know something that others do not. Given that conspiracy theories’ popularity is dynamic during crises (e.g. Bruns et al., 2020), it is likely that the needs generating these theories change throughout crises as well.
The deficit model (Douglas et al., 2017) seems to be particularly useful in explaining the conspiracy theorizing that takes place during crises. Yet, some conspiracy theories do not seem to be related in any way to a crisis. For instance, to this day, some people still believe that Sir Paul McCartney is in fact dead, and that he was murdered by the other Beatles following an argument in 1966; to cover up their tracks, the Beatles hired a look-alike (Pappas & Radford, 2023). Even if such conspiracy theories are not generated by a crisis per se, some authors argue that they might emerge from the subjective perceptions of a nation in crisis (van Prooijen & Douglas, 2017). If we consider that conspiracy theories are universal and they never seem to go away (Douglas et al., 2017), these premises would imply that humanity is in constant crisis, which seems unlikely because the very idea of stability – be it political, economic, or social – would not be conceivable in a never-ending crisis. The fact that some countries are more stable than others further disproves the never-ending crisis scenario (e.g. Political Stability by Country 2024, n.d.).
Therefore, we have to agree that, to some extent, conspiracy theories are not necessarily related to a crisis period. One possible explanation as to why they survive outside these moments is that conspiracy theories slowly turn into other forms of narratives following the onset of the crisis that generated them, morphing over time into coherent stories that eventually replace the historically official account of the events (van Prooijen & Douglas, 2017). That is, people begin to think that the conspiracy theory is the real historical explanation of the event, and then they pass it on from generation to generation as if it were established fact. For instance, van Prooijen & Douglas (2017) note that there are still some Americans for whom the existence of a hidden plot that resulted in the death of J. F. Kennedy (JFK) constitutes historical truth, as opposed to the lone-gunman scenario (Zapata, 2024b).
Another possible answer might be that conspiracy theories peak during crises but are then actively supported by those for whom experience constitutes a good enough reason to view future life events in conspiratorial terms. While we seem to have an innate tendency towards conspiracy thinking (van Prooijen & van Vugt, 2018), not everyone feels the need to go down the rabbit hole of conspiracy theorizing. And those who do so are more likely to adopt conspiratorial beliefs after experiencing a loss of trust through negative, repeated interactions with others (Pierre, 2020). Each of these interactions may reinforce a mindset of caution and suspicion, slowly favoring an increasingly conspiratorial perspective. When the lack of trust goes beyond a critical threshold, its target is automatically perceived as dangerous and antagonized through conspiracy theories, even absent relevant evidence.
When conspiracy theories become entrenched in people’s minds, people may start favoring the conspiratorial narrative over the official one in future, unrelated contexts. This further reinforces the content of their beliefs, which is why some authors argue that conspiracy theories form a monological belief system, in which there is a functional interdependence between its elements (Converse, 1964, as cited in Enders et al., 2021). As Enders et al. (2021) put it, “the more conspiracy beliefs one holds, the more likely they are to express belief in other conspiracy theories” (p. 256). Acquiring more and more conspiracy beliefs increases the probability of an individual becoming radicalized, to the point where they may rely exclusively on conspiracy theories to construct their understanding of reality (Miller, 2020b; Pierre, 2020). If this process is supported by a social network that validates and rewards commitment to such beliefs, theorists may actually act upon them, inflicting potentially major costs (Ahmed et al., 2020; Kruglanski et al., 2022).
As such, conspiracy theorizing progresses over time. It can be likened to a virus, slowly infecting and taking over the cognitive system. However, its onset and development differ. While everybody starts from an initial t0 in which conspiracy thinking only represents an evolutionary predisposition (van Prooijen and van Vugt, 2018), it can be argued that each one of us is on a different path toward potential radicalization. For this development to be set in motion, a critical moment seems necessary (i.e. a crisis). Even though some general-purpose mechanisms are at play (e.g. trust-shattering experiences with epistemic authorities, deficits caused by the critical moment, or reinforcement from peers), the variables involved in the process differ from person to person. Precisely for this reason, each person’s conspiracy ideation is unique.
3. Psychometric scales in the measurement of conspiracy theories
3.1 Fundamentals of psychometric scales
A psychometric scale is usually a self-report psychological instrument that can be used to measure a variety of mental attributes, such as attitudes or personality traits (APA Dictionary of Psychology, n.d.-a; Robinson, 2018). Typically, scales employ a Likert format, allowing respondents to indicate their degree of agreement or disagreement with specific, pre-determined items by selecting from a range of closed response options that are afterwards summed into total indices (Robinson, 2018). In order to be considered psychometrically viable, scales must be reliable and valid (Paola, 2020; Psychological Testing | Definition, Types, Examples, Importance, & Facts, 2022). For the purposes of this paper, we will focus on validity, which refers to the degree to which an instrument actually measures what it intends to measure (e.g. McCrae et al., 2011). Validity is an essential characteristic of any psychometric measurement, mainly because psychology usually studies intangible concepts that are not directly observable; therefore, the study of latent variables implies observing them indirectly (Paola, 2020). This is precisely why, at a finer-grained level, validity is broken down into several facets of psychological measurement.
Firstly, we must assess whether an instrument adequately covers all of the relevant dimensions of the measured construct. That is, the instrument must possess content validity (Robinson, 2018). As such, identifying the best combination of items to be included is crucial for the development of adequate scales. Needless to say, if the theoretical foundations underlying the targeted concept are not well understood, the selection of items becomes arbitrary. Without relevant items, one cannot hope to draw meaningful inferences or to predict real-world outcomes from the results collected while applying the scale. Put differently, content validity improves a scale’s predictive validity, meaning the degree to which the scale can predict external criteria that are known to be correlated with the measured construct (e.g. Newson et al., 2000). However, being able to predict external criteria in a controlled study environment is not enough, because researchers’ findings are of no use if they cannot be generalized to broader, real-world contexts. Thus, a psychometrically viable scale increases the external validity of the study (Findley et al., 2021).
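To make these notions concrete, the sketch below illustrates, on simulated data, how Likert responses are typically summed into a total index and how predictive validity is often operationalized as the correlation between that index and an external criterion. It is a minimal illustration with hypothetical numbers, not a reproduction of any instrument discussed in this text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated 5-point Likert answers (1-5) from 200 respondents to a
# hypothetical 4-item scale; both the sample and the items are illustrative.
n_respondents, n_items = 200, 4
responses = rng.integers(1, 6, size=(n_respondents, n_items))

# Scoring: each respondent's answers are summed into a total index.
total_scores = responses.sum(axis=1)

# A simulated external criterion assumed to correlate with the construct
# (e.g. a behavioral measure collected separately).
criterion = 0.5 * total_scores + rng.normal(0, 3, size=n_respondents)

# Predictive validity is commonly reported as the correlation between the
# scale total and the external criterion.
r = np.corrcoef(total_scores, criterion)[0, 1]
print(f"Scale total vs. external criterion: r = {r:.2f}")
```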
In addition to validity, the standardization of measurement is another crucial aspect of psychometrically sound assessments that cannot be overlooked. Whereas validity represents a characteristic of the instrument itself, standardization refers to the manner in which it is applied. To be standardized, an instrument must be used uniformly during the administration, scoring, and interpretation of the evaluation (Fischer et al., 2010). Such procedures are essential to ensure that “all participants take the same test under the same conditions and are scored by the same criteria” (APA Dictionary of Psychology, n.d.-b). Needless to say, standardization represents one of the most important steps towards achieving a high level of validity (Cicchetti, 1994). Among other things, it minimizes the risk of human error when interpreting results, creates a more controlled environment, reduces the influence of confounding factors, and establishes baseline conditions for comparing not only individuals, but also different groups. This last point is particularly important, because a result alone cannot convey any meaningful information without an established way of connecting it to other results, which is exactly what standardization provides.
Finally, due to their self-report nature, answers to psychometric scales are not inherently right or wrong; they just reflect a person’s predisposition towards one side of a spectrum (e.g. Schwarz, 1999). For instance, the question of whether one likes talking with strangers – a common item in extroversion assessments – does not have an inherently correct or incorrect answer, because it refers to a subjective evaluation of one’s personality. This item differs from the ones typically found in intelligence tests, which often aim to compare a participant’s results against a predetermined performance standard. Even though a respondent may have an objective inclination towards introversion or extroversion, it could differ from what they report about themselves. This is problematic precisely because it is not clear to what extent people are good self-reporters (e.g. Devaux & Sassi, 2016).
3.2 How are psychometric scales used in the literature on conspiracy theories?
Two types of psychometric scales have been developed in the literature on conspiracy theories (Goreis & Voracek, 2019; Swami et al., 2017). On the one hand, there are generic scales, which are used to evaluate the overall tendency of the respondent to perceive and understand the environment through conspiratorial explanations. Even though the person may have conspiracy beliefs related to particular events, this paradigm focuses on the extent to which the respondent generally believes that the world functions according to conspiratorial motives and is the product of all sorts of conspiracies (i.e. a conspiratorial mindset). The underlying assumption of these scales is that a person with a developed conspiratorial mindset will be more prone to adopt conspiracy theories related to particular events, so it is futile to measure belief in thematic conspiracy theories. As we have seen, repeated negative experiences with an actor can lead someone to perceive that actor as harmful and to antagonize it through conspiracy theories, even in the absence of proof (Pierre, 2020). As a result, it would be pointless to ask respondents about particular events involving that actor, since it is likely to be perceived as suspicious regardless of circumstances. That is precisely why in generic scales we find items along the lines of “important matters are voluntarily kept away from our knowledge” (Conspiracy Mentality Questionnaire – CMQ; Bruder et al., 2013), which are general, non-specific, and not related in any way to concrete events. In contrast, an item such as “Xi Jinping voluntarily kept away important matters from our knowledge throughout the pandemic” would not be considered generic, since it refers to the COVID-19 crisis.
Applied scales are the exact opposite of generic scales, because they test the endorsement of conspiratorial explanations referring to particular contexts or crises. For instance, during the pandemic period, we heard a lot of conspiracy theories specifically linked with COVID-19, explaining the why’s, the how’s, the what’s and the who’s of the sanitary crisis (for a systematic review, see van Mulukom et al., 2022). These instruments measure belief in particular conspiracy theories, with subjects ranging from the assassination of JFK to the harmful substances emitted by smoke detectors (Imhoff & Lamberty, 2017), as opposed to generic scales, which measure a person’s conspiratorial mindset. In these instruments, we find items such as “the assassination of J. F. Kennedy was not committed by the lone gunman […], but was rather a detailed, organized conspiracy to kill the president” (Belief in Conspiracy Theories Inventory; Swami et al., 2010, p. 753). This item highlights a specific crisis, namely the death of the American president, rather than a general state of affairs dominated by conspiracies.
Given that a propensity for conspiratorial thinking often correlates with belief in specific conspiracy theories (Enders et al., 2021), one might reasonably question the value of using applied scales. Still, it must be emphasized that the exact conspiracy theory one believes in can translate into different behaviors and consequences. For example, climate change conspiracy theories may thwart authorities’ efforts to combat the environmental crisis (Douglas & Sutton, 2015), but that may not be the case for COVID-19 conspiracy theories. It has also been shown that COVID-19 conspiracy theories uniquely predict hoarding behaviors (van Mulukom et al., 2022), but that is not the case for climate change conspiracy theories. In other words, the content of psychometric scales predicts different outcomes (Oleksy et al., 2021). This is precisely why it is crucial to understand the nuances of respondents’ beliefs, as individuals with a prominent conspiratorial mindset do not necessarily believe in all existing conspiracy theories. Conspiratorial mindsets, if any existed, might perhaps result in conspiratorial webs of belief (Quine & Ullian, 1978), not limited to specific conspiracy theories but spanning the varying topics one could take attitudes towards.
4. The Current Criticism
In what follows, I will develop a critique of psychometric scales that disputes the idea that the usage of these scales represents a suitable way of measuring conspiratorial beliefs.
4.1 Applied Scales
I will start by discussing applied scales’ most evident flaw: so far, nobody has developed a guideline of objective standards to be considered when creating or using applied scales in the literature on conspiracy theories (Enders et al., 2021, p. 4). The creation and usage of these scales refer to different facets of the problem. While the creation of applied scales concerns their content, their usage pertains to how they are administered and interpreted in research contexts.
A growing body of evidence shows how much the content of the instrument really matters in predicting different criteria (e.g. Imhoff & Lamberty, 2020; Oleksy et al., 2021; van Mulukom et al., 2022). Each time an applied scale is created, researchers have to arbitrarily choose the items to be included (e.g. Gligorić et al., 2021; Imhoff & Lamberty, 2020; Jolley et al., 2019; Miller, 2020a; Oleksy et al., 2021; Stoica & Umbreș, 2021), subsequently influencing the outcomes with which the content gets to be correlated. This point is illustrated by two studies claiming to have studied COVID-19 conspiracy theories, in the same culture and with the same population (Romanians), which reported strikingly different results: Stoica and Umbreș (2021) observed that education correlates positively with COVID-19 conspiratorial beliefs, whereas Buturoiu et al. (2021) claim that the correlation is negative. Without clear standards as to how many items to use, how broadly or narrowly to formulate them and even how many response options to allow in Likert scales (for a discussion on this topic, see Sutton & Douglas, 2022), cases such as this one strongly suggest that the scale used in at least one of the studies (or perhaps both) lacks content validity. However, considering the vast number of variations that a particular conspiracy theory could have, it is no wonder that researchers have a hard time finding the best combination of items to correctly assess them. Moreover, some conspiracy theories do not even fit into one theme, so the task of choosing items becomes even more complicated. Lack of content validity undermines not only the inferences we draw on the basis of results (i.e. predictive validity), but also their generalizability to other conditions (i.e. external validity). While these types of validity refer to somewhat different aspects of measurement accuracy, the failure to have representative content impairs both the predictions we draw on the basis of the results and their applicability in different contexts.
It seems, therefore, that researchers have no explicit guidelines for using applied scales in a standardized manner. Standardization is essential to ensure that “all participants take the same test under the same conditions and are scored by the same criteria” (APA Dictionary of Psychology, n.d.-b). If assessment items are changed each time the evaluation is conducted, there is no foundation on which to establish consistent baseline conditions for evaluating all participants. That is, the results of different studies claiming to measure variations of the same conspiracy theory become incomparable. Considering that a result alone cannot convey meaningful information without an established way of connecting it to other results, the usage of applied scales seems questionable.
The main reason why the elaboration of standards might not even be possible is that many conspiratorial statements pertain to their believer’s identity, political orientation or group membership. Consequently, we can generate an astounding number of different conspiratorial explanations for particular situations (Enders et al., 2021), because no two people have the exact same set of beliefs. This means that applied scales should be constantly adapted and updated to keep up with the wide and ever-changing variability of conspiratorial content.
Conspiracy theories also develop on a social level. They gain momentum and reach different peaks of popularity during crises (Buturoiu et al., 2021; Zeng, 2021). Therefore, they can quickly become “outdated”. As the media continuously reports new information and misinformation about current events, popular narratives may change. In addition to the influence of the media, consider how prominent public figures may be motivated to deliberately spread conspiracy theories – even fictional ones – in pursuit of personal gain (Dale, 2020; Douglas et al., 2019, p. 23), effectively contributing to sudden and swift changes in mainstream conspiratorial narratives. While one may argue that conspiracy theorists steer towards a critical mass of thematic conspiracy theories, the rapid development and increasing complexity of the dominant conspiratorial discourse make it hopeless to develop lasting standards for creating and using applied scales.
A possible counterargument might envisage developing applied scales once conspiracy theories have reached their climax during a crisis. In reply, note that it may be impossible to predict the timing of such a moment, since each crisis is unique[4]. Besides, it is plausible to think that dominant thematic conspiracy theories may have gained a multidimensional nature before reaching full maturity (Swami et al., 2017). Consequently, the crisis may have already occurred and passed by the time researchers can understand and accurately incorporate these dimensions into applied scales, undermining efforts at measuring belief in specific (thematic) conspiracies.
Let us illustrate the process of creating and using applied scales. In this endeavor, I will only consider COVID-19 applied scales, for ease of understanding. Typically, researchers have employed at least one conspiratorial proposition about the origin of SARS-CoV-2 while creating these instruments (van Mulukom et al., 2022). For instance, Lobato et al. (2020, p. 3) asked respondents about the claim that “COVID-19 was created in a lab as a bioweapon”. Similarly, Achimescu et al. (2021, p. 305) used this statement: “the virus was created by some powerful individuals to make money”. Additionally, Miller (2020a, p. 2) inquired whether people agreed with the idea that the “virus is a biological weapon intentionally released by China”, while Chan et al. (2021, p. 3) asked about the notion that “the novel coronavirus was stolen by Chinese spies from a laboratory in Canada”. Naturally, all of these can be considered conspiracy theories, because they fulfill the prerequisite conditions to be qualified as such. Moreover, all of these items pertain to the same issue – the origin of the virus – but involve distinct nuances. Even though we can see a somewhat recurrent theme, namely the artificial creation of the virus, each item portrays a different layer of this conspiracy theory. For instance, Miller’s (2020a) item accuses China of creating the virus, whereas Chan et al.’s (2021) places its origin in a Canadian laboratory from which Chinese spies allegedly stole it.
Therefore, we have to admit that participants would likely respond differently to each item, despite dealing with the same underlying topic. A participant might agree that the virus was artificially created even if they do not believe that it was stolen by Chinese spies. And yet the scale might force them to choose: should they rate this item favorably because they believe the virus was artificially fabricated, or disagree with it because they do not fully accept its proposed explanation? A neutral response would fail to accurately reflect their true preference. Consequently, scores regarding belief in COVID-19 conspiracy theories might vary due to measurement inconsistencies. In the following lines, I examine three possible ways of addressing this situation and find none of them satisfactory.
Firstly, consider the scenario in which all four variants are used when creating the scale. Although this increases the likelihood of obtaining an exhaustive scale, it would likely result in an excessively lengthy questionnaire that could induce respondent fatigue. For instance, it has been shown that longer survey completion times are associated with higher rates of distorted response patterns, such as straight-lining (Herzog & Bachman, 1981) or rushed, shortened response behaviors towards the end of the assessment (Galesic & Bosnjak, 2009). Besides, increasing the number of items does not guarantee a psychometrically sound scale on its own, as multiplying the items may in fact result in the erroneous inclusion of irrelevant aspects, thus decreasing the scale’s validity (e.g. Robinson, 2018). As such, a trade-off between the number of items and their content is essential when developing accurate psychological measurements. Moreover, increasing the number of items may result in the inclusion of contradictory items. While some sources claim that some respondents do endorse contradictory conspiracy theories simultaneously (e.g. Miller, 2020b), it is not at all clear whether this pattern is real or merely reflects expressive responding (Schaffner & Luks, 2018). That is, participants may have evaluated the items in a favorable manner either because they were not sure which version of the same theory to believe in more, or out of a desire for emphasis.
Secondly, a researcher can use the most general statement when creating the instrument. However, not only does the mainstream narrative develop constantly, but what seems relevant in terms of conspiracy theorizing today may become obsolete by tomorrow. Therefore, choosing the most general statement unnecessarily restricts the ever-growing variety of conspiracy theories we face. In addition, some conspiracy theories may concomitantly tap into multiple themes. As such, choosing the most general variant of the theory may not be an easy endeavor.
Finally, we may choose to use a specific item, rather than the most general alternative from the four options. In fact, this is how researchers solved the dilemma when creating and using applied scales: by choosing conspiratorial accounts that seem to be very popular and combining them (Gligorić et al., 2021; Imhoff & Lamberty, 2020; Jolley et al., 2019; Miller, 2020a; Oleksy et al., 2021; Stoica & Umbreș, 2021). However, we can see how this may allow for too much subjectivity on the part of the researcher conducting the study. Also, there is a high chance of choosing items unrepresentative of the conspiracy theories under scrutiny.
None of the ways mentioned to address the situation seem satisfactory, suggesting that applied scales are not a psychometrically viable solution for measuring conspiracy beliefs. Besides, these instruments have additional limitations. For example: the inclusion of contextual and cultural biases (e.g. the relevance of JFK conspiracy theories may be limited to the US), their incapacity to gauge personal experiences (where direct interaction with a crisis may lead to a different type of conspiracy belief compared to beliefs obtained only from secondary sources), and the risk of reinforcing the very beliefs the scales purport to measure (by providing participants with another opportunity to engage with the theories, the scales may effectively lend credibility to them; Buchanan, 2020). Additionally, researchers tend to focus only on high-profile events while creating applied scales (e.g. the assassination of celebrities, pandemics, etc.), potentially overlooking obscure conspiracy theories that may be more helpful in understanding the broader picture of the factors underlying belief in conspiracy theories. For instance, conspiracy theories tackle topics as mundane as the harmful effects of smoke detectors (Imhoff & Lamberty, 2017), but it is hard to believe researchers have solid reasons to include such content in applied scales. Overall, these issues severely constrain the utility of applied scales in the research of conspiracy theories.
4.2 Generic Scales
Generic scales are the exact opposite of applied scales: they are made up of general items that do not refer to particular real-world contexts, but rather to the elements of a conspiratorial view of the world. The underlying assumption of this approach is that a conspiratorial mentality exists, and that it makes its holder more prone to use conspiratorial terms to explain real situations (Swami et al., 2017). For instance, if I tend to perceive the world as being controlled by nefarious forces, I may be inclined to believe that the same forces may have also played a role in the assassination of JFK, hence the futility of using an applied scale to measure this belief. Unlike applied scales, whose content varies based on the specific theme or topic, generic scales maintain a consistent set of items. This actually allows for the ideal of standardized measurement to be achieved when using generic scales.
Despite the obvious advantages of generic measurements over applied ones, my contention is that not even generic scales are psychometrically adequate to accurately measure conspiracy beliefs. Just as was the case with applied scales, the idea that we can generate an abundance of conspiratorial explanations for particular situations (Enders et al., 2021) implies that a person’s conspiratorial worldview can also be constituted along a large number of coordinates (i.e. dimensions to be measured through generic scales; Swami et al., 2017). That is, a person may use conspiracy theories to antagonize whatever actor they want while forming their conspiratorial ideation: we can have generic conspiratorial perspectives about doctors, researchers, governments, the Illuminati, Jews, Poles, Chinese, Americans, Russians, Ukrainians, Muslims, corporations, rich people, white people, black people, and the list can go on and on. Since it is unclear which factors are relevant and which must be excluded, generic scales also lack content validity.
Earlier I mentioned that conspiracy theorizing is not static, but rather develops over time (Bruns et al., 2020). In the same vein, conspiratorial ideation is unique from person to person. Therefore, a generic scale should be able to tap into all possible dimensions of a conspiratorial worldview that an individual could possess, while also accounting for the fact that the conspiratorial mindset of each person may be at a different developmental stage.
More than that, conspiratorial ideation is culturally specific. Recall the evolutionary origins of the conspiracy theories used by our ancestors to protect themselves from hostile groups (van Prooijen & van Vugt, 2018). This seems to suggest that a nation’s contemporary conspiracy theorizing reflects its unique history. As individuals interacted with different environments throughout their history, some conspiracy-related cues may have been more prevalent in certain settings, leading each nation to emphasize particular conspiratorial elements in its ideation. This last idea helps show that, while generic scales more accurately reflect the ideal of standardized measurement, their external validity may be restricted to particular cultures.
To support this position, let us consider the most commonly used generic scales, as per Swami et al. (2017): the Belief in Conspiracy Theories Inventory (BCTI; Swami et al., 2010), the Conspiracy Mentality Questionnaire (CMQ; Bruder et al., 2013), and the Generic Conspiracist Beliefs Scale (GCBS; Brotherton et al., 2013). Take, for instance, one item according to which “the government agencies closely monitor all citizens” (CMQ). Such a statement may be more salient in cultures like the Romanian one, in which the state actually strictly surveilled the activity of its citizens throughout much of the communist period. In the same vein, conspiracy theories pertaining to terrorist activity (e.g. “the government permits or perpetrates acts of terrorism on its own soil, disguising its involvement” – GCBS) could be more appealing to Americans and less so to Romanians. Arguably, the US has historically experienced a higher level of terrorist activity than Romania, as is shown by the Terrorism Index scores of each country: in 2021, Romania’s reported Terrorism Index was 1.06, whereas the American one was 4.96 (Institute for Economics and Peace, n.d.). These examples convey the idea that the content of widely used generic scales may not be relevant beyond the countries in which they were developed. So, developing a universal, one-size-fits-all generic scale appears unrealizable.
The current state of generic instruments seems to further support this conclusion. In a first-of-its-kind study, Swami et al. (2017) meta-analyzed each of the above instruments (i.e. BCTI, CMQ, and GCBS) in relation to their multidimensional nature and discovered an alarming situation: the generic scales currently in use suffer from significant problems. While BCTI manifested factorial validity, the degree to which it really taps into conspiratorial ideation is unknown. In other words, it is not clear whether BCTI actually measures belief in generic conspiracy theories. This is not even surprising, since its items seem to be extracted from an applied scale (e.g. “Princess Diana’s death was not an accident […]”, “The assassination of JFK was not committed by the lone gunman […]”, etc.). The same analysis revealed that CMQ had poor factorial validity, which suggests that some items may not in fact reflect a tendency toward conspiratorial ideation. Finally, GCBS did not seem to pass the psychometric assessment either, with Swami et al. (2017) expressing concerns over the use of this measure. In their own words, “the GCBS […] may tap multiple dimensions that do not cohere very well” (Swami et al., 2017, p. 23).
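To give a rough sense of what a dimensionality check involves, the sketch below applies a simple eigenvalue (Kaiser) rule to the inter-item correlations of simulated responses. This is only a toy exploratory illustration on fabricated data; it is not the confirmatory, meta-analytic procedure actually used by Swami et al. (2017).

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated responses to a hypothetical 10-item generic scale: two latent
# dimensions each driving five items, plus noise. Purely illustrative data,
# not the datasets analyzed by Swami et al. (2017).
n = 500
factor_a = rng.normal(size=n)
factor_b = rng.normal(size=n)
loadings = np.zeros((10, 2))
loadings[:5, 0] = 0.7   # items 1-5 load on dimension A
loadings[5:, 1] = 0.7   # items 6-10 load on dimension B
items = np.column_stack([factor_a, factor_b]) @ loadings.T
items += rng.normal(scale=0.7, size=(n, 10))

# A crude dimensionality check: eigenvalues of the inter-item correlation
# matrix; the Kaiser rule retains components with eigenvalues above 1.
corr = np.corrcoef(items, rowvar=False)
eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]
retained = int((eigenvalues > 1).sum())
print("Eigenvalues:", np.round(eigenvalues, 2))
print("Dimensions retained by the Kaiser rule:", retained)
```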
In short, the most commonly used generic scales seem to suffer from the same problem as applied scales, namely a lack of content validity. This limitation impacts not only the predictions we can make from the data collected with these instruments (i.e. predictive validity), but also the degree to which we can generalize the findings (i.e. external validity). Generic scales fare better than applied scales with regard to their standardized application, but they are not psychometrically adequate to accurately measure the construct they claim to measure: conspiratorial ideation.
5. Implications and future directions
Since none of the scales discussed meet content validity requirements, I tentatively conclude that psychometric scales do not represent an adequate method of measuring conspiratorial beliefs. This issue influences not only the predictions we make on the basis of these scales, but also their generalizability to real-world contexts. Given that objective standards for these instruments are not forthcoming, I conjecture that psychometric scales will most likely face these issues in the future, as well.
Even though it can be argued that this domain is still in its infancy (Douglas et al., 2017), the fact that psychometric scales represent the main method of measuring the phenomenon (Douglas et al., 2019) raises serious concerns when it comes to the validity of what is generally known regarding conspiracy theories. Lack of standardization in applied scales makes results reported using these instruments virtually incomparable. Recall the striking contradiction between the two studies claiming to have studied COVID-19 conspiracy theories, in the same culture and with the same population (Romanians) (Buturoiu et al., 2021; Stoica & Umbreș, 2021). If we are to assume that both of them measured belief in COVID-19 conspiracy theories, then the natural course of action would be to conduct further research to test the relationship. However, the current criticism suggests that this assumption may be unwarranted, and that what one study found was actually a correlation between something and higher levels of education, while the other identified a relationship between something else and lower levels of education. While the degree to which this phenomenon is representative of the applied scales literature is uncertain, its existence represents a tremendous problem and helps us better understand why social psychology is facing a replication crisis (Trafimow, 2018; Yaffe, 2019). In the same vein, generic scales often face dimensionality issues, despite their standardized application (Swami et al., 2017). So, the same concerns could also be raised about the literature that has employed these instruments to measure conspiratorial perspectives.
If we are to make real progress in this area, a good alternative may be discourse analysis, a method already used to some extent (Douglas et al., 2019). The superiority of this approach lies in its flexibility – researchers are not constrained to present respondents with a predefined set of items to agree or disagree with, as is the case with applied and generic scales. Instead, discourse analysis allows researchers to study conspiracy theories as they are naturally communicated in people’s everyday lives. Given a sufficiently large sample, discourse analysis may help us understand what the most relevant elements of a culture’s conspiratorial ideation are. By analyzing a person’s conspiratorial discourse, one should be able to identify the frequency with which certain themes occur. One may check, for instance, how many times somebody invokes an unfalsifiable explanation of an event, us-vs.-them rhetoric, or clues that they have had trust-shattering experiences with epistemic authorities. One drawback of this method might be that we would have to clearly understand how to separate conspiratorial discourses from other, similar ones (e.g. populist discourses; Pirro & Taggart, 2023).
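The sketch below only gestures at the quantitative side of this idea: counting how often a few pre-coded theme markers appear in a transcript. The theme labels and markers are hypothetical, and a genuine discourse analysis would rely on qualitative coding by trained raters rather than keyword matching.

```python
from collections import Counter
import re

# A toy coding scheme: each hypothetical theme is keyed to a few surface
# markers. This only illustrates the idea of counting theme frequency.
THEMES = {
    "unfalsifiable_explanation": ["cover-up", "no way to prove", "hiding the truth"],
    "us_vs_them": ["they want", "people like us", "the elites"],
    "distrust_of_authorities": ["can't trust the government", "doctors lie", "official story"],
}

def count_themes(transcript: str) -> Counter:
    """Count how often each theme's markers appear in a transcript."""
    text = transcript.lower()
    counts = Counter()
    for theme, markers in THEMES.items():
        counts[theme] = sum(len(re.findall(re.escape(m), text)) for m in markers)
    return counts

sample = ("The elites are hiding the truth about the vaccines. "
          "You can't trust the government, and the official story "
          "is just a cover-up. They want people like us to stay quiet.")
print(count_themes(sample))
```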
Another potentially fruitful route may be the creation of a new type of scale in the literature on conspiracy theories. To be a better contender, it should address the limitations of existing instruments and be firmly grounded in the current understanding of conspiracy theories. However, my discussion above implies that such a scale is only conceivable if, paradoxically, it does not directly measure conspiratorial beliefs. Thus, instead of measuring conspiracy theories, maybe we should focus on what is known so far to generate them: the unfulfillment of epistemic, existential and social needs (Douglas et al., 2017). By measuring the extent to which people experience these needs, we may indirectly assess the probability of a person endorsing conspiracy theories. Below, I attach an attempt to create a scale along the lines of the above suggestions.
Figure 1. An attempt at a scale indirectly measuring conspiracy theories, inspired by the deficit model (Douglas et al., 2017)
There are several things to be noted in regard to this novel proposal. Firstly, while I have tried to include a similar number of items across the three needs, a content-valid approach may imply having a disproportionate number of items for each need specified by the deficit model (Douglas et al., 2017). Secondly, consider the fact that conspiracy theories may develop in peaks throughout crises (e.g. Bruns et al., 2020), and the epistemic, existential and social needs of people endorsing these narratives will likely change during these peaks. That is, applying the scale at different moments in time could yield stark differences for the same individual. As such, given that conspiracy theorizing develops on an individual level as well, the scale should probably be used only in longitudinal study designs (APA Dictionary of Psychology, n.d.-c). Thirdly, while the deficit model (Douglas et al., 2017) is indeed a compelling explanation as to why people believe in conspiracy theories, let us not forget that certain conspiracy theories are not related to crises (Pappas & Radford, 2023), which are the fertile ground for the appearance and development of epistemic, existential and social needs. Therefore, situations may arise in which the scale either fails to detect a conspiratorial mindset when applied outside the times of a crisis (since the person will not have the respective needs at that moment), or erroneously detects a conspiratorial mindset when applied during a crisis to a person who does not necessarily hold conspiratorial views, but whose epistemic, existential and social needs arose because of the crisis. All of these presumptions require further study. Last but not least, there seems to be no way to test the utility of this scale other than by comparing it with the scales currently in use. In this endeavor, my recommendation would be to assess this scale by reference to generic instruments only, as the underlying assumptions of the two approaches appear to be similar.
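As a purely illustrative sketch of how such an indirect instrument might be scored, the snippet below sums hypothetical Likert answers into three subscale totals – one per need of the deficit model – and compares two administrations of the kind a longitudinal design would require. The item-to-subscale mapping and the numbers are invented for illustration; they are not the items of Figure 1.

```python
import numpy as np

# Hypothetical mapping of item positions to the three needs of the deficit
# model (Douglas et al., 2017); the column indices are purely illustrative.
SUBSCALES = {
    "epistemic": [0, 1, 2],
    "existential": [3, 4, 5],
    "social": [6, 7, 8],
}

def score_needs(responses: np.ndarray) -> dict:
    """Return one subscale total per need from a respondent's Likert answers."""
    return {need: int(responses[idx].sum()) for need, idx in SUBSCALES.items()}

# Two administrations of the same (hypothetical) nine-item scale to one
# respondent, e.g. before and during a crisis, as a longitudinal design
# would require.
t0 = np.array([2, 3, 2, 2, 2, 3, 1, 2, 2])
t1 = np.array([4, 5, 4, 5, 4, 4, 3, 4, 3])

print("t0:", score_needs(t0))
print("t1:", score_needs(t1))
# Rising subscale totals between administrations would be read as an increase
# in the unmet needs that the deficit model links to conspiracy beliefs, not
# as a direct measure of belief itself.
```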
6. Conclusions
This text advances the current literature on conspiracy theories by evaluating whether psychometric scales are an appropriate method for measuring conspiracy theories. My answer is negative, owing to these scales’ problems with respect to three critical requirements of accurate assessment: content validity, predictive validity and external validity. The inability to objectively define the best combination of items to include in applied scales raises serious doubts about the degree to which their items can be considered representative of the construct they purport to measure. In turn, this restricts their standardization, leading to a situation in which independent results cannot be compared with one another. As for generic scales, the theoretically limitless number of conspiratorial actors that people could theorize about, together with the fact that the importance of each of these elements may vary from person to person, suggests that it is difficult to construct a scale complex enough to capture all of this variance in conspiratorial beliefs. Unsurprisingly, this situation is reflected in the generic scales currently in use (Swami et al., 2017).
All of the above considerations convey an alarming message about the current state of the literature on conspiracy theories, since psychometric scales remain prominent in this research area (Douglas et al., 2019). I have therefore proposed a change of paradigm in measurement, one that involves an indirect assessment of such narratives. Other methods, such as discourse analysis, may also prove more useful than applied and generic scales in characterizing conspiratorial ideation.
[1] Daniel-Radu Iordache is a graduate of the “Mind the Brain” master’s program in cognitive science within the Faculty of Philosophy at the University of Bucharest.
[2] I will adopt Douglas et al.’s (2019) definition of “conspiracy belief” as being “a belief in a specific conspiracy theory, or a set of conspiracy theories” (p. 4).
[3] I will use terms such as “narratives”, “explanations”, “accounts”, “perspectives”, “statements”, “claims” and “stories” interchangeably, to refer to the broader concept of conspiracy theories. However, I acknowledge that each of these words may highlight a different aspect of conspiracy theories (thanks to an anonymous reviewer for pointing this out). For instance, referring to them as “stories” may imply that they are fictitious, which may in turn allude to the irrationality of their believers.
[4] While crises may generally be divided into categories (e.g. social, humanitarian or economic crises), here we are referring to the particular features of these contexts, features that arguably differ from situation to situation.
References
Achimescu, V., Sultănescu, D., & Sultănescu, D. C. (2021). The path from distrusting Western actors to conspiracy beliefs and noncompliance with public health guidance during the COVID-19 crisis. Journal of Elections, Public Opinion and Parties, 31(sup1), 299–310. https://doi.org/10.1080/17457289.2021.1924746
Ahmed, W., Vidal-Alaball, J., Downing, J., & López Seguí, F. (2020). COVID-19 and the 5G Conspiracy Theory: Social Network Analysis of Twitter Data. Journal of Medical Internet Research, 22(5), e19458. https://doi.org/10.2196/19458
APA Dictionary of Psychology. (n.d.-a). https://dictionary.apa.org/psychological-scale
APA Dictionary of Psychology. (n.d.-b). https://dictionary.apa.org/standardization
APA Dictionary of Psychology. (n.d.-c). https://dictionary.apa.org/longitudinal-design
Bergmann, E., & Butter, M. (2020). Conspiracy Theory and Populism. In M. Butter & P. Knight (Eds.), Routledge Handbook of Conspiracy Theories (1st ed., pp. 330–343). Routledge. https://doi.org/10.4324/9780429452734-3_6
Bessi, A., Coletto, M., Davidescu, G. A., Scala, A., Caldarelli, G., & Quattrociocchi, W. (2015). Science vs Conspiracy: Collective Narratives in the Age of Misinformation. PLOS ONE, 10(2), e0118093. https://doi.org/10.1371/journal.pone.0118093
Brotherton, R., & Eser, S. (2015). Bored to fears: Boredom proneness, paranoia, and conspiracy theories. Personality and Individual Differences, 80, 1–5.
Brotherton, R., French, C. C., & Pickering, A. D. (2013). Measuring Belief in Conspiracy Theories: The Generic Conspiracist Beliefs Scale. Frontiers in Psychology, 4. https://doi.org/10.3389/fpsyg.2013.00279
Bruder, M., Haffke, P., Neave, N., Nouripanah, N., & Imhoff, R. (2013). Measuring Individual Differences in Generic Beliefs in Conspiracy Theories Across Cultures: Conspiracy Mentality Questionnaire. Frontiers in Psychology, 4. https://doi.org/10.3389/fpsyg.2013.00225
Bruns, A., Harrington, S., & Hurcombe, E. (2020). ‘Corona? 5G? or both?’: The dynamics of COVID-19/5G conspiracy theories on Facebook. Media International Australia, 18. https://doi.org/10.1177/1329878X2094611
Buchanan, T. (2020). Why do people spread false information online? The effects of message and viewer characteristics on self-reported likelihood of sharing social media disinformation. PLOS ONE, 33. https://doi.org/10.1371/journal.pone.0239666
Buchanan, T., & Kempley, J. (2021). Individual differences in sharing false political information on social media: Direct and indirect effects of cognitive-perceptual schizotypy and psychopathy. Personality and Individual Differences, 11. https://doi.org/10.1016/j.paid.2021.111071
Buturoiu, R., Udrea, G., Oprea, D.-A., & Corbu, N. (2021). Who Believes in Conspiracy Theories about the COVID-19 Pandemic in Romania? An Analysis of Conspiracy Theories Believers’ Profiles. Societies, 11(4), 138. https://doi.org/10.3390/soc11040138
Campion-Vincent, V. (2015). Remarks on conspiracy theory entrepreneurs. Diogenes, 62(3–4), 64–70. https://doi.org/10.1177/0392192120945606
Chan, H.-W., Chiu, C. P.-Y., Zuo, S., Wang, X., Liu, L., & Hong, Y. (2021). Not-so-straightforward links between believing in COVID-19-related conspiracy theories and engaging in disease-preventive behaviours. Humanities and Social Sciences Communications, 8(1), 104. https://doi.org/10.1057/s41599-021-00781-2
Cicchetti, D. V. (1994). Guidelines, Criteria, and Rules of Thumb for Evaluating Normed and Standardized Assessment Instruments in Psychology. Psychological Assessment, 6(4), 284–290. https://doi.org/10.1037/1040-3590.6.4.284
Dale, D. (2020, September 2). Fact check: A guide to 9 conspiracy theories Trump is currently pushing | CNN politics. CNN. https://edition.cnn.com/2020/09/02/politics/fact-check-trump-conspiracy-theories-biden-covid-thugs-plane/index.html
Devaux, M., & Sassi, F. (2016). Social disparities in hazardous alcohol use: Self-report bias may lead to incorrect estimates. The European Journal of Public Health, 26(1), 129–134. https://doi.org/10.1093/eurpub/ckv190
Douglas, K. M., & Sutton, R. M. (2015). Climate change: Why the conspiracy theories are dangerous. Bulletin of the Atomic Scientists, 71(2), 98–106. https://doi.org/10.1177/0096340215571908
Douglas, K. M., Sutton, R. M., & Cichocka, A. (2017). The Psychology of Conspiracy Theories. Current Directions in Psychological Science, 26(6), 538–542. https://doi.org/10.1177/0963721417718261
Douglas, K. M., & Sutton, R. M. (2018). Why conspiracy theories matter: A social psychological analysis. European Review of Social Psychology, 29(1), 256–298. https://doi.org/10.1080/10463283.2018.1537428
Douglas, K. M., Uscinski, J. E., Sutton, R. M., Cichocka, A., Nefes, T., Ang, C. S., & Deravi, F. (2019). Understanding Conspiracy Theories. Political Psychology, 40(S1), 3–35. https://doi.org/10.1111/pops.12568
Duetz, J. C. M. (2022). Conspiracy Theories are Not Beliefs. Erkenntnis. https://doi.org/10.1007/s10670-022-00620-z
Duetz, J. C. M. (2023). What Does It Mean for a Conspiracy Theory to be a “Theory”? Social Epistemology, 37(4), 438–453. https://doi.org/10.1080/738552930
Enders, A. M., Uscinski, J. E., Klofstad, C. A., Seelig, M. I., Wuchty, S., Murthi, M. N., Premaratne, K., & Funchion, J. R. (2021). Do conspiracy beliefs form a belief system? Examining the structure and organization of conspiracy beliefs. Journal of Social and Political Psychology, 9(1), 255–271. https://doi.org/10.5964/jspp.5649
Findley, M. G., Kikuta, K., & Denly, M. (2021). External Validity. Annual Review of Political Science, 24, 365–393. https://doi.org/10.1146/annurev-polisci-041719-102556
Galesic, M., & Bosnjak, M. (2009). Effects of Questionnaire Length on Participation and Indicators of Response Quality in a Web Survey. Public Opinion Quarterly, 73(2), 349–360. https://doi.org/10.1093/poq/nfp031
Gligorić, V., Silva, M. M., Eker, S., Hoek, N., Nieuwenhuijzen, E., Popova, U., & Zeighami, G. (2021). The usual suspects: How psychological motives and thinking styles predict the endorsement of well‐known and COVID ‐19 conspiracy beliefs. Applied Cognitive Psychology, 35(5), 1171–1181. https://doi.org/10.1002/acp.3844
Goreis, A., & Voracek, M. (2019). A Systematic Review and Meta-Analysis of Psychological Research on Conspiracy Beliefs: Field Characteristics, Measurement Instruments, and Associations With Personality Traits. Frontiers in Psychology, 10, 205. https://doi.org/10.3389/fpsyg.2019.00205
Griffin, A. (2022, December 20). Princess Diana conspiracy theories: Eight reasons people believe her death in Paris wasn’t all it seems. The Independent. https://www.independent.co.uk/life-style/royal-family/princess-diana-death-conspiracy-theories-b2248362.html
Here’s how you can spot fake news online. (2022, May 20). World Economic Forum. https://www.weforum.org/agenda/2017/12/heres-how-you-can-spot-fake-news-online/
Herzog, A. R., & Bachman, J. G. (1981). Effects of Questionnaire Length on Response Quality. Public Opinion Quarterly, 45(4), 549. https://doi.org/10.1086/268687
Imhoff, R., & Lamberty, P. K. (2017). Too special to be duped: Need for uniqueness motivates conspiracy beliefs. European Journal of Social Psychology, 47(6), 724–734. https://doi.org/10.1002/ejsp.2265
Imhoff, R., & Lamberty, P. (2020). A Bioweapon or a Hoax? The Link Between Distinct Conspiracy Beliefs About the Coronavirus Disease (COVID-19) Outbreak and Pandemic Behavior. Social Psychological and Personality Science, 11(8), 1110–1118. https://doi.org/10.1177/1948550620934692
IONOS editorial team. (2020, July 27). What is fake news? Definition, types, and how to detect them. IONOS Digital Guide. https://www.ionos.com/digitalguide/online-marketing/social-media/what-is-fake-news/
Jolley, D., Douglas, K. M., Leite, A. C., & Schrader, T. (2019). Belief in conspiracy theories and intentions to engage in everyday crime. British Journal of Social Psychology, 58(3), 534–549. https://doi.org/10.1111/bjso.12311
Jolley, D., Douglas, K. M., & Sutton, R. M. (2018). Blaming a Few Bad Apples to Save a Threatened Barrel: The System-Justifying Function of Conspiracy Theories. Political Psychology, 39(2), 465–478. http://www.jstor.org/stable/45094751
Kay, C. S. (2021). Actors of the most fiendish character: Explaining the associations between the Dark Tetrad and conspiracist ideation. Personality and Individual Differences, 171, 110543. https://doi.org/10.1016/j.paid.2020.110543
Kruglanski, A. W., Molinario, E., Ellenberg, M., & Di Cicco, G. (2022). Terrorism and conspiracy theories: A view from the 3N model of radicalization. Current Opinion in Psychology, 47, 101396. https://doi.org/10.1016/j.copsyc.2022.101396
Lobato, E. J. C., Powell, M., Padilla, L. M. K., & Holbrook, C. (2020). Factors Predicting Willingness to Share COVID-19 Misinformation. Frontiers in Psychology, 11, 566108. https://doi.org/10.3389/fpsyg.2020.566108
McCrae, R. R., Kurtz, J. E., Yamagata, S., & Terracciano, A. (2011). Internal Consistency, Retest Reliability, and Their Implications for Personality Scale Validity. Personality and Social Psychology Review, 15(1), 28–50. https://doi.org/10.1177/1088868310366253
Miller, J. M. (2020a). Do COVID-19 Conspiracy Theory Beliefs Form a Monological Belief System? Canadian Journal of Political Science, 53(2), 319–326. https://doi.org/10.1017/S0008423920000517
Miller, J. (2020b). Psychological, Political, and Situational Factors Combine to Boost COVID-19 Conspiracy Theory Beliefs. Canadian Journal of Political Science, 53(2), 327–334. https://doi.org/10.1017/S000842392000058X
Newsome, S., Day, A. L., & Catano, V. M. (2000). Assessing the predictive validity of emotional intelligence. Personality and Individual Differences, 29(6), 1005–1016. https://doi.org/10.1016/S0191-8869(99)00250-0
Oleksy, T., Wnuk, A., Maison, D., & Łyś, A. (2021). Content matters. Different predictors and social consequences of general and government-related conspiracy theories on COVID-19. Personality and Individual Differences, 168, 110289. https://doi.org/10.1016/j.paid.2020.110289
Paola, G. (2020). The Importance of Using Valid and Reliable Measures in Psychology and Psychiatry.
Pappas, S., & Radford, B. (2023, July 11). 20 of the best conspiracy theories. livescience.com. https://www.livescience.com/11375-top-ten-conspiracy-theories.html
Paulhus, D. L. (2014). Toward a taxonomy of dark personalities. Current Directions in Psychological Science, 23(6), 421–426. https://doi.org/10.1177/0963721414547737
Pennycook, G., McPhetres, J., Zhang, Y., Lu, J. G., & Rand, D. G. (2020). Fighting COVID-19 Misinformation on Social Media: Experimental Evidence for a Scalable Accuracy-Nudge Intervention. Psychological science, 31(7), 770–780. https://doi.org/10.1177/0956797620939054
Pierre, J. M. (2020). Mistrust and misinformation: A two-component, socio-epistemic model of belief in conspiracy theories. Journal of Social and Political Psychology, 8(2), 617–641. https://doi.org/10.5964/jspp.v8i2.1362
Pirro, A. L., & Taggart, P. (2023). Populists in power and conspiracy theories. Party Politics, 29(3), 413–423. https://doi.org/10.1177/13540688221077071
Political Stability by country 2024. (n.d.). https://worldpopulationreview.com/country-rankings/political-stability-by-country
Psychological testing | Definition, Types, Examples, Importance, & Facts. (2022). Encyclopedia Britannica. Retrieved June 12, 2024, from https://www.britannica.com/science/psychological-testing/Other-characteristics
Quine, W.V.O. & Ullian, J.S. (1978/2021). The Web of Belief. McGraw-Hill Education. Translated into Romanian as Ţesătura opiniilor, transl. M. Dumitru. Iaşi: Polirom.
Research Guides: Fake News and Information Literacy: What is Fake News? (n.d.). https://researchguides.uoregon.edu/fakenews/issues/defining
Robinson, M. A. (2018). Using multi-item psychometric scales for research and practice in human resource management. Human Resource Management, 57, 739–750. https://doi.org/10.1002/hrm.21852
Schaffner, B. F., & Luks, S. (2018). Misinformation or Expressive Responding? What an Inauguration Crowd Can Tell Us about the Source of Political Misinformation in Surveys. Public Opinion Quarterly, 82(1), 135–147. https://doi.org/10.1093/poq/nfx042
Schwarz, N. (1999). Self-reports: How the questions shape the answers. American Psychologist, 54(2), 93–105.
Simion, M. (2023). Resistance to evidence and the duty to believe. Philosophy and Phenomenological Research, 108(1), 203–216. https://doi.org/10.1111/phpr.12964
Stoica, C. A., & Umbreș, R. (2021). Suspicious minds in times of crisis: Determinants of Romanians’ beliefs in COVID-19 conspiracy theories. European Societies, 23(sup1), S246–S261. https://doi.org/10.1080/14616696.2020.1823450
Sutton, R. M., & Douglas, K. M. (2022). Agreeing to disagree: Reports of the popularity of Covid-19 conspiracy theories are greatly exaggerated. Psychological Medicine, 52(4), 791–793. https://doi.org/10.1017/S0033291720002780
Swami, V., Chamorro‐Premuzic, T., & Furnham, A. (2010). Unanswered questions: A preliminary investigation of personality and individual difference predictors of 9/11 conspiracist beliefs. Applied Cognitive Psychology, 24(6), 749–761. https://doi.org/10.1002/acp.1583
Swami, V., Barron, D., Weis, L., Voracek, M., Stieger, S., & Furnham, A. (2017). An examination of the factorial and convergent validity of four measures of conspiracist ideation, with recommendations for researchers. PLOS ONE, 12(2), e0172617. https://doi.org/10.1371/journal.pone.0172617
Trafimow, D. (2018). An a priori solution to the replication crisis. Philosophical Psychology, 31(8), 1188–1214. https://doi.org/10.1080/09515089.2018.1490707
van Mulukom, V., Pummerer, L. J., Alper, S., Bai, H., Čavojová, V., Farias, J., Kay, C. S., Lazarevic, L. B., Lobato, E. J. C., Marinthe, G., Pavela Banai, I., Šrol, J., & Žeželj, I. (2022). Antecedents and consequences of COVID-19 conspiracy beliefs: A systematic review. Social Science & Medicine, 301, 114912. https://doi.org/10.1016/j.socscimed.2022.114912
van Prooijen, J.-W., & Douglas, K. M. (2017). Conspiracy theories as part of history: The role of societal crisis situations. Memory Studies, 10(3), 323–333. https://doi.org/10.1177/1750698017701615
van Prooijen, J.-W., & van Vugt, M. (2018). Conspiracy Theories: Evolved Functions and Psychological Mechanisms. Perspectives on Psychological Science, 13(6), 770–788. https://doi.org/10.1177/1745691618774270
Wong, A., Ho, S., Olusanya, O., Antonini, M. V., & Lyness, D. (2021). The use of social media and online communications in times of pandemic COVID-19. Journal of the Intensive Care Society, 6.
Yaffe, J. (2019). From the Editor—Do We Have a Replication Crisis in Social Work Research? Journal of Social Work Education, 55(1), 1–4. https://doi.org/10.1080/10437797.2019.1594399
Zapata, C. (2024a, April 16). The Watergate Scandal – Timeline, summary & Deep throat | HISTORY. https://www.history.com/topics/1970s/watergate
Zapata, C. (2024b, April 16). Assassination of John F. Kennedy – Facts, investigation, photos | HISTORY. HISTORY. https://www.history.com/topics/us-presidents/jfk-assassination
Zeng, J. (2021). Theoretical typology of deceptive content (Conspiracy Theories). DOCA – Database of Variables for Content Analysis. https://doi.org/10.34778/5g