Karl Popper (1902–1994) was a philosopher of science who sought to describe how the process of science ought to work in a logical sense. As Popper saw it,[1] science was concerned with developing testable models, works of imagination that attempted to describe aspects of the behavior of the physical world. The key work of the scientist, Popper argued, was to try to falsify a model through experimentation. The more a model was tested, and survived those tests, the more scientific credence it accumulated. But even if a model survived a hundred, or a thousand, or ten thousand different tests, it was never assured of passing the next one, and, accordingly, was still at risk of falsification.
This is known as “the problem of induction”. Induction refers to the process of observing specific data and then extrapolating from there to a general rule or law. The problem is that repeated observations, no matter how many, can never guarantee what will be observed in the future. The example Popper used to explain induction was the color of swans. Before Europeans explored Australia, every swan that had been observed was white. By induction it was argued that, as a general rule, all swans are white. Later, black swans were observed in Australia, which invalidated the generalization that all swans are white.
Let’s go through that process step by step. A scientist notes that all the swans they have seen are white. They go to the library and search for documents about swans; all of the documents describe white swans. From the available data the scientist concludes, by the process of induction, that all swans are white. Subsequently the scientist becomes aware of reports from Australia that some swans are black. At this point the scientist must either modify or discard the original hypothesis, which has been invalidated. Clearly the original induction was based on incomplete data, but this is almost always the case in science; we never have all possible data.
In Popper’s view, arguing from specific experimental data to general models of cause and effect always involves some doubt, because we may not yet have carried out the particular experiment that would show our general model to be wrong. We can never know whether some experiment carried out in the future will show that we were incorrect. For this reason, we can never say, from a strictly logical viewpoint, that a scientific theory is “true”.
Popper was thinking about science in a rigidly logical sense. In practice, however, hypotheses that have withstood a great deal of testing are regarded as very firmly established. Indeed, some of these well-supported, generalizable relationships are recognized as scientific laws. It is unquestionable, for example, that technology has made significant progress based on scientific theories; mobile phones are evidence of that. But Popper was averse to the idea of induction and sought another logical basis for science, although arguably he ended up having to smuggle the same concept in by other means.
Popper also sought to answer the demarcation problem, namely how to distinguish science from non-science. He suggested that a theory which cannot be falsified, even in principle, is not science. Popper’s idea has been very influential, and indeed the notion that unfalsifiable theories are not science has become a common belief.[2]
For example, a scientist might contend that many different universes exist, entirely independent of and separate from our own universe. This proposition, commonly known as the multiverse, cannot be tested experimentally and therefore cannot be falsified; according to Popper’s criterion, then, such a theory is not scientific.
Popper’s falsifiability criterion is cited so commonly in the origins debate that laypeople may not realize it was simply the musing of a twentieth-century philosopher. The reason for its popularity with secular humanist sympathizers is readily apparent: Popper’s falsifiability test is used to subtly support the contention that the biblical account, which includes God’s creative acts, is of no value in the study of origins. As an example, consider the National Academies’ statement in Science, Evolution, and Creationism.[3]
Because … appeals to the supernatural are not testable [which includes falsifiability] using the rules and processes of scientific inquiry, they cannot be a part of science.
We wholeheartedly agree with this statement when it is applied to scientific evidence. However, if the statement is extended to imply that, because the biblical account is not part of science, it has no value in the origins debate, then we strongly disagree.
This type of implication derives from logical positivism, a now-defunct philosophical school that was very influential in the early decades of the twentieth century. Logical positivism held that only ideas verifiable through observation or logic had informational value, a proposition Popper strongly criticized for its reliance on induction. Logical positivism was eventually discarded because it is self-refuting: its central thesis could not itself be verified by observation or logic. Nonetheless, logical positivism continues to influence the origins debate in practice, through statements from the secular humanist camp that the non-falsifiable concepts of the Bible cannot be a part of science and consequently are of no value in the origins debate.
In contrast with Popper’s ideas, the next philosophy of science we examine, from Richard Feynman, is studiously ignored in the origins debate.
[1] Popper, 2002.
[2] See, for instance, “Falsifiability in Medicine”, an article explaining the use of Popper’s falsifiability criterion in evaluating evidence relating to COVID-19 treatments (Taran, Adhikari, & Fan, 2021).
[3] National Academy of Sciences and Institute of Medicine, 2008, p. 39.
