Notes

Interest, Information Sources, and Involvement

1 People become involved with S&T through many kinds of nonclassroom activities beyond attending informal science institutions. Examples include participating in government policy processes, going to movies that feature S&T, attending talks or lectures, bird watching, and building computers. Citizen science refers to participation in the research process by people without specific science training, through activities such as observation, measurement, or computation. Nationally representative data on this sort of involvement with S&T are unavailable.

Public Knowledge about S&T

1 Survey items that test factual knowledge sometimes use easily comprehensible language at the cost of scientific precision. This may prompt some highly knowledgeable respondents to believe that the items blur or neglect important distinctions and, in a few cases, may lead respondents to answer questions incorrectly. In addition, the items do not reflect the ways that established scientific knowledge evolves as scientists accumulate new evidence. Although the text of the factual knowledge questions may suggest a fixed body of knowledge, it is more accurate to see scientists as making continual, and often subtle, modifications in how they understand existing data in light of new evidence.

2 Earlier NSF surveys used for the Indicators report employed additional questions to measure understanding of probability. Bann and Schwerin (2004) identified a smaller number of questions that could be administered to develop a comparable indicator. Starting in 2004, the NSF surveys used these questions for the trend factual knowledge scale. This scale does not include the questions aimed at studying scientific reasoning and understanding (e.g., questions about probability or the design of an experiment).

3 Declines such as those seen in 2012 need to be regarded with caution. In that case, the percentage of Americans who correctly answered the initial multiple-choice question about how to conduct a pharmaceutical trial remained stable between 2010 and 2012; only the follow-up question, which asked respondents to explain in their own words why a control group is used, showed a decline. For this question, interviewers record the response, and trained coders then use a standard set of rules to judge whether it is correct. Although the instructions and training have remained the same across years, small changes in survey administration practices can sometimes substantially affect such estimates.

4 Because astrology is based on systematic observation of the planets and stars, some respondents might believe that this makes it “sort of scientific.” The fact that those with more formal education and higher factual science knowledge scores are consistently more likely to fully reject astrology suggests that this nuance has only a limited effect on results. Another problem is that some respondents may confuse astrology with astronomy, and such confusion seems most likely to occur in some of the same groups (i.e., those with relatively lower education and factual knowledge) that might be predicted to get the question wrong. This could artificially inflate the number of wrong responses. However, the question comes immediately after one that asks respondents whether they ever “read a horoscope or personal astrology report,” which offers a hint that astrology is not astronomy. Also noteworthy is that a Pew Forum on Religion & Public Life study (2009) using a different question found that 25% of Americans believe in “astrology, or that the position of the stars and planets can affect people’s lives.” Gallup found the same result with the same question in 2005 (Lyons 2005). In contrast, the 2010 GSS found results similar to those in 2014: 6% of respondents saw astrology as “very scientific,” and 28% saw it as “sort of scientific” (34% total). The Pew Research Center found that 73% could distinguish between astrology and astronomy and that there were few demographic differences beyond education (Funk and Goo 2015).

Public Attitudes about S&T in General

1 Methodological issues make fine-grained comparisons of data from different survey years particularly difficult for this question. For example, although the question content and interviewer instructions were identical in 2004 and 2006, the percentage of respondents who volunteered “about equal” (an answer not among the choices given) was substantially different. This difference may have been produced by the change from telephone interviews in 2004 to in-person interviews in 2006 (although telephone interviews in 2001 produced results similar to those in 2006). More likely, customary interviewing practices at the three organizations that administered the surveys affected their interviewers’ willingness to accept responses other than those specifically offered on the interview form, including “don’t know” responses.

Public Attitudes about Specific S&T-Related Issues

1 There is some evidence from a large-scale experimental study that the wording used in such questions (“global warming” versus “climate change”) can affect reported beliefs about global climate change (Schuldt, Konrath, and Schwarz 2011). Other studies, however, have suggested that such wording differences have limited effect (Dunlap 2014; European Commission 2008; Villar and Krosnick 2010).