Acta Psychologica

Volume 179, September 2017, Pages 114-123

The citation effect: In-text citations moderately increase belief in trivia claims

https://doi.org/10.1016/j.actpsy.2017.07.010

Highlights

  • True and false trivia claims were presented with and without in-text citations.

  • Six experiments varied the difficulty of the claims and subject instructions.

  • Citations sometimes led to higher truth ratings and sometimes did not.

  • A meta-analysis of six experiments showed that citations led to higher truth ratings.

  • Citations make statements slightly more believable.

Abstract

Authors use in-text citations to provide support for their claims and to acknowledge work done by others. How much do such citations increase the believability of an author's claims? It is possible that readers (especially novices) might ignore citations as they read. Alternatively, citations ostensibly serve as evidence for a claim, which justifies using them as a basis for a judgment of truth. In six experiments, subjects saw true and false trivia claims of varying difficulty presented with and without in-text citations (e.g., The cat is the only pet not mentioned in the Bible) and rated the likelihood that each statement was true. A mini meta-analysis summarizing the results of all six experiments indicated that citations had a small but reliable effect on judgments of truth (d = 0.13, 95% CI [0.06, 0.20]), suggesting that subjects were more likely to believe claims presented with citations than without. We discuss this citation effect and how it is similar to and different from related research suggesting that nonprobative photos can increase judgments of truth.

Introduction

One foundation of good critical thinking is the ability to evaluate the credibility of claims. "Can sharks swim backwards?" "Was President Obama born in the United States?" "Can vaccines cause autism?" In today's society we are inundated with facts, stories, and claims from myriad online sources that vary widely in credibility. After a tumultuous presidential election, concerns about truth and credibility have become a national issue in 2017 – Time magazine even featured a cover story with the title "Is Truth Dead?" (Scherer, 2017). Fortunately, a goal for many educators is to help their students become better critical thinkers; education should help students learn to critically read texts, skillfully evaluate evidence, and develop habits of skepticism. Such education is important, because research suggests that people accept statements or claims as true unless they are prompted in some way to look deeper into a claim's evidence, believability, or importance (see Gilbert, Krull, & Malone, 1990 for evidence; Gilbert, 1991 for an overview).

In academic writing, one signal of evidence is the use of in-text or parenthetical citations (e.g., the Gilbert references above). Authors use references to acknowledge the work of others and to provide support for their claims. The current experiments were designed to examine how much parenthetical citations affect the believability of trivia claims. On one hand, in-text citations provide useful information. They show that authors have done their research and guide interested readers to external sources that will provide evidence. Thus, in-text citations are a probative source of information; it is rational to use them when forming a judgment of truth. On the other hand, it is unclear how much readers attend to in-text citations. Research suggests that non-experts (aka students) vary widely in the degree to which they look at citations while reading and how they use the information in a citation to draw inferences from the text (Sparks & Rapp, 2011; Strømsø et al., 2013).

Complicating matters further, several lines of research have demonstrated that truth judgments can be affected by a variety of factors, many of which are illogical or nonprobative (meaning they do not actually provide any additional diagnostic information). Newman, Garry, Bernstein, Kantner, and Lindsay (2012) use the term “truthiness” (borrowed from the comedian Stephen Colbert) to describe subjective feelings of truth. For example, in one widely read study, McCabe and Castel (2008) had students read science articles that included either pictures of a brain scan, a bar graph, or no accompanying image. The students who saw the brain images while reading rated the passage as having better scientific reasoning compared to students in the other conditions, even though the passages were identical. McCabe and Castel argued that the brain images were persuasive because they provided a physical representation of an abstract cognitive idea. Although this brain image finding has been difficult to replicate (see Michael, Newman, Vuorre, Cumming, & Garry, 2013 for a meta-analysis), it does colorfully demonstrate how irrelevant information might affect someone's judgment.

A second line of research concerns what is called the truth effect (or sometimes the illusory-truth effect) – the finding that people are more likely to think that a statement is true if they have seen it before than if they are seeing it for the first time (Begg et al., 1992; Hasher et al., 1977; for a meta-analysis see Dechêne, Stahl, Hansen, & Wänke, 2010). In other words, simply repeating a fact multiple times makes people more likely to believe it. Begg et al. (1992) and others (e.g., Nadarevic & Erdfelder, 2014; Unkelbach, 2007) have suggested that repeated statements are more familiar and that familiarity translates into more fluent processing of the item. The increased fluency is then mistakenly interpreted as a signal of truth. Begg et al. (1992) argued that it is illogical to use repetition in forming a judgment of truth (Unkelbach & Stahl, 2009 cite Wittgenstein as suggesting that using repetition to determine truth is like buying a second copy of a newspaper to see if the first is correct). In contrast, Unkelbach (2007; see also Reber & Unkelbach, 2010) has argued that repetition may be a valid basis for making a truth judgment; hearing a statement a second time – especially if it comes from a new source – provides converging evidence that the statement is true. Regardless of whether using repetition as a basis for truth is valid, it is clear that simply repeating a statement can increase the degree to which it is seen as true.

Moreover, fluency has been shown to influence truth judgments in a variety of ways, not just through increased familiarity. For example, people are more likely to believe that a trivia statement is true if it is presented in an easy-to-read format (a dark blue font against a white background) than in a difficult-to-read format (a yellow font against a white background; Reber & Schwarz, 1999). Schwarz (2015) has argued that when people are deciding whether a claim is true they evaluate it against a set of five criteria: is the belief shared by others; is the belief supported by evidence; is the belief compatible with other things that one believes; does the belief have internal coherence; and is the source of the claim credible? Importantly, while people will evaluate different types of information for each of those criteria, fluency can affect the conclusions drawn from all of them. Information that is presented in an easy-to-process manner can inflate truth ratings via any of the above mechanisms.

One final striking example of how fluency can inflate truth ratings comes from a recent line of research showing that presenting a photo along with a trivia claim makes subjects more likely to believe the statement, even when the photo does not provide any diagnostic information about the veracity of the claim (Cardwell et al., 2016; Fenn et al., 2013; Newman et al., 2012; Newman et al., 2015). In one study (Newman et al., 2012, Experiment 3), for example, subjects saw a series of true and false trivia claims presented with or without an accompanying photo and were asked to judge whether the statements were true or false. Critically, all the photos were nonprobative – they were topically related to the claims, but did not provide any additional evidence about the truth of the claim. For example, the claim "Macadamia nuts are in the same evolutionary family as peaches" would appear with a picture of macadamia nuts. Despite not providing additional useful information, the photos led to a truth bias – subjects were more likely to accept a statement as true if it was presented with a photo than without. Newman et al. suggested that the photos helped people create "pseudoevidence" – subjects interpreted the fluent processing of the photo as an indicator of truth, or used ambiguous information in the photo to confirm a hypothesis. Additional studies have shown that this truth bias persists over time (Fenn et al., 2013), that the photo has to be topically related to the trivia claim (e.g., the picture can't be completely unrelated; Newman et al., 2015), and that the presence of photos can even lead people to falsely remember past experiences (Cardwell et al., 2016).

In sum, truth judgments can be influenced by many factors, including ones that are nonprobative or irrational. Despite the research described above, no studies (to our knowledge) have examined whether in-text citations increase the perceived truthfulness of statements. Our interest in this question was partially inspired by an anonymous reviewer from a different paper (Putnam, Sungkhasettee, & Roediger, 2016) who suggested that including more references in a review on effective study strategies would make students more likely to believe the claims we made in our paper. We were skeptical that undergraduates would be persuaded by additional in-text citations and decided to investigate the question ourselves.

In the current experiments subjects saw true and false trivia statements presented with or without parenthetical citations and judged the truth of each statement. Across the experiments we used materials of varying difficulty, provided different instructions that sometimes emphasized what an in-text citation was, and manipulated the presence of citations both within and between subjects. Finally, we combined the evidence from each experiment in a mini meta-analysis to provide a more precise estimate of the effects of citations on truth judgments. Overall, we had two competing predictions. In contrast to the nonprobative photos used by Newman et al. (2012), citations are probative–they provide evidence or support for a claim. If nonprobative information can increase truth ratings, then probative information should as well. Therefore, our first hypothesis was that presenting in-text citations would increase the perceived truthfulness of the statements. Alternatively, parenthetical citations lack the visual appeal of photos and readers might ignore citations unless prompted to examine them (e.g., Gilbert, 1991). Thus, our second hypothesis was that subjects would provide similar truth ratings for statements presented with and without a citation.

Section snippets

Experiment 1A

Experiments 1A and 1B were identical, except that the variable citations was manipulated between-subjects in 1A and within-subjects in 1B. We expected that highlighting the difference between a statement with a citation and a statement without a citation (i.e., using a within-subjects design) would be more likely to show that citations affected truth ratings, whereas the between-subjects design would provide a more conservative test of the same hypothesis. Experiments 1A and 1B were preregistered on the

Experiment 1B

Experiment 1B was run at the same time as Experiment 1A and was identical, except that citations was manipulated within-subjects, rather than between; we expected that in-text citations would be more likely to increase truth ratings in a within-subjects design when individuals were exposed to both conditions. Otherwise, the materials and procedure were identical.

Experiment 2A

Experiments 1A and 1B suggested that citations did not increase the believability of trivia claims. One possibility is that subjects might have already known the truth status of some of the trivia claims. Indeed, other research suggests that the illusory-truth effect disappears when the actual truth status of a claim is known (Dechêne et al., 2010). If subjects have prior knowledge about a claim, then they will rely on their own knowledge, rather than the presence of an in-text citation, to shape

Experiment 2B

Experiment 2B was identical to Experiment 2A except that citations was manipulated within-subjects, rather than between-subjects. As in Experiment 1, we expected that the within-subjects design would be more likely to show an effect of citations.

Experiment 3A

Four experiments showed that numerically, statements with citations were rated as more likely to be true than statements without citations, but none of these comparisons were statistically significant. The SDT analysis in Experiment 2B, however, hinted that citations might inflate truth ratings in some situations. In Experiment 3A our goal was to conduct a high-powered experiment. First, although power analyses and examining previous research (e.g., Newman et al., 2012) indicated that our
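The SDT (signal detection theory) analysis mentioned above separates discrimination from response bias: in this paradigm, a "hit" is calling a true statement true and a "false alarm" is calling a false statement true, so a manipulation like citations could shift the response criterion (an overall bias toward responding "true") without changing sensitivity. A minimal sketch of the standard computation follows; the function name and the example rates are ours and purely illustrative, not the paper's data.

```python
from statistics import NormalDist

def sdt_measures(hit_rate, fa_rate):
    """Signal detection measures for a truth-rating task.

    hit_rate: P(respond "true" | statement is true)
    fa_rate:  P(respond "true" | statement is false)
    Returns (d_prime, c): d' indexes discrimination of true from false
    statements; criterion c indexes response bias (more negative c =
    more liberal, i.e., a general tendency to respond "true").
    """
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    c = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, c

# Made-up rates illustrating the pattern of interest: citations might
# lower c (more "true" responses overall) while leaving d' unchanged.
no_citation = sdt_measures(0.65, 0.35)
with_citation = sdt_measures(0.70, 0.40)
```

On this framing, a pure "citation effect" on believability would show up as a criterion shift rather than a change in d'.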

Experiment 3B

Experiment 3B was a direct replication of Experiment 3A.

Meta-analysis of all experiments

Given the similar designs and mixed results of the experiments reported here, we conducted a mini meta-analysis to provide a quantitative estimate of the effect of citation presence on truth ratings (readers who are interested in the details of conducting a mini meta-analysis should consult Cumming, 2012). Cumming (2012) argues that meta-analysis, even when conducted on a small scale as in the current study, can lead to large increases in the precision of measured effects and can help clarify
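As a rough illustration of the inverse-variance logic behind such a mini meta-analysis, the sketch below combines per-experiment Cohen's d values into a single fixed-effect estimate with a 95% CI. The function name, the textbook variance approximation for d, and all input values are our own illustrative placeholders, not the paper's per-experiment data.

```python
import math

def meta_fixed_effect(ds, ns):
    """Fixed-effect mini meta-analysis of Cohen's d values.

    ds: per-experiment standardized mean differences
    ns: per-experiment group sizes (two equal groups of n assumed)
    Returns (combined_d, (ci_low, ci_high)), weighting each
    experiment by the inverse of its sampling variance.
    """
    weights, weighted = [], []
    for d, n in zip(ds, ns):
        # Approximate sampling variance of d for two groups of size n:
        # var(d) = (n1 + n2)/(n1 * n2) + d^2 / (2 * (n1 + n2))
        var = (2 * n) / (n * n) + (d * d) / (2 * (2 * n))
        w = 1.0 / var
        weights.append(w)
        weighted.append(w * d)
    combined = sum(weighted) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))  # SE of the combined estimate
    return combined, (combined - 1.96 * se, combined + 1.96 * se)

# Hypothetical effect sizes and sample sizes for six experiments
# (placeholders, NOT the values reported in this paper).
d, (lo, hi) = meta_fixed_effect(
    [0.05, 0.10, 0.08, 0.20, 0.15, 0.18],
    [50, 50, 80, 80, 200, 200])
print(f"combined d = {d:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

Because larger experiments get proportionally larger weights, the combined estimate is pulled toward the best-powered studies, and its CI is narrower than any single experiment's.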

General discussion

The goal of these experiments was to determine how much in-text citations affect the perceived “truthiness” of trivia claims. Despite some variability in results across the experiments, the mini meta-analysis (see Fig. 7) supports the hypothesis that there is a small citation effect: subjects were more likely to rate a statement as true if it was presented with a citation than without a citation. This is the first time, to our knowledge, that citations have been shown to lead to higher

Acknowledgments

We thank K. Andrew DeSoto and Julia Strand for providing feedback on this manuscript, Jae Eun Lee and Lucia Ray for their assistance, and Carleton College for supporting this research.

References (35)

  • Cardwell, B.A., et al. (2016). Nonprobative photos rapidly lead people to believe claims about their own (and other people's) pasts. Memory and Cognition.

  • Colgrove, J., et al. (2005). Could it happen here? Vaccine risk controversies and the specter of derailment. Health Affairs.

  • Connelly, S.L., et al. (1991). Age and reading: The impact of distraction. Psychology and Aging.

  • Cumming, G. (2012). Understanding the new statistics: Effect sizes, confidence intervals, and meta-analysis.

  • Dechêne, A., et al. (2010). The truth about the truth: A meta-analytic review of the truth effect. Personality and Social Psychology Review.

  • Gilbert, D.T. (1991). How mental systems believe. American Psychologist.

  • Gilbert, D.T., et al. (1990). Unbelieving the unbelievable: Some problems in the rejection of false information. Journal of Personality and Social Psychology.