The Demarcation Problem
The problem of demarcating science from pseudoscience is a deceptively difficult one. Far from a mere philosophical game, the definition of science has historically had great impact on education, political policy, and the public funding of research – not to mention the many cases of imprisonment and execution of supposed pseudoscientists by the Catholic Church and others (26, Lakatos, C&C). Conventional wisdom holds that science is an objective discipline, concerned with making observations about the world that lead to universal truths, defined by natural laws (805, C&C). Throughout this essay, we will instead see that science is not so easily defined, and that some of the most powerful thinkers on the definition of science contradict this conventional view even while attempting to confirm it.
In order to look at some potential solutions for defining science, we will first need to define a few relevant terms to focus our discussion. To begin, I will describe why scientists are interested in matters of justification rather than discovery; then, I will explain why they are specifically interested in justification through induction rather than deduction; and finally, we will see why the study of science should be approached through empiricism.
After our argument space is formed, I turn to the logical positivists – the founders of modern philosophy of science – to garner some help in defining science. Their first attempt to demarcate science from non-science (or nonsense) was to use the concept of verifiability. As this theory has a number of shortcomings, I turn to Karl Popper's falsifiability and the ways it solves some of the problems with verifiability. However, falsifiability has its own unresolvable issues in attempting to demarcate science from pseudoscience. Faced with these mounting problems, Thomas Kuhn all but concedes defeat in his relativist attempts at defining science. However, I will show that we can define science by focusing on the consistent results it has given us toward constructing useful technology and reliable medicine.
Science is interested in providing stable justifications and predictions of phenomena in the real world. This pursuit can be seen in contrast to the act of discovery, in which a scientist arrives at new claims about the world that alter our previous assumptions. Unlike justification, discovery has no method to it. It sometimes arises not from new observations at all, but from simple thought experiments. For example, Ptolemy and Copernicus could both explain the same observations of the heavens using a geocentric or heliocentric model, respectively. Copernicus' discovery simply provided a less complex model of the heavens, one that eventually provided a foundation for more productive scientific pursuits, like Newtonian physics. Because it is difficult to define any consistent method of discovery, science must be solely concerned with studying the justifications to be defended after a discovery is made.
There are only two forms of justification: deduction and induction. It is crucial to understand the distinction between these two methods in order to appreciate why science rules out one of them; fortunately, it is a small but important distinction. In deductive statements, one comes upon a conclusion based solely on a list of given premises: e.g. if all humans are mortal and Socrates' mother is a human, then Socrates' mother is mortal. Unfortunately, deductive justification is unusable in science, since the conclusions it reaches are simply rewordings of the given premises. Science uses induction because with it we can draw new conclusions that are not already laid out by the given premises: e.g. if all humans are mortal and Socrates' mother is a human, then Socrates is mortal. This conclusion is not deductive because it does not follow necessarily from the given premises. Instead, induction allows us to assume implicit premises, in this case that all mothers bear human children. Implicit premises can become complicated and tentative, and can therefore cause us trouble in claiming accuracy for our conclusions. However, in order to make claims about the world that extend beyond specific evidence, we must use inductive justification.
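The contrast can be made explicit in standard predicate-logic notation. (This formalization is my own gloss on the examples above, not drawn from the cited texts; m abbreviates "Socrates' mother" and s abbreviates "Socrates".)

```latex
% Deduction: the conclusion follows from the stated premises alone.
\forall x\,(\mathrm{Human}(x) \rightarrow \mathrm{Mortal}(x)),\quad
\mathrm{Human}(m)
\;\;\vdash\;\; \mathrm{Mortal}(m)

% Induction: the stated premises alone do NOT entail the conclusion.
\forall x\,(\mathrm{Human}(x) \rightarrow \mathrm{Mortal}(x)),\quad
\mathrm{Human}(m)
\;\;\nvdash\;\; \mathrm{Mortal}(s)

% The inference becomes deductively valid only once the implicit
% premises are spelled out: s is a child of m, and human mothers
% bear human children.
\mathrm{ChildOf}(s, m),\quad
\forall x\,\forall y\,(\mathrm{Human}(x) \wedge \mathrm{ChildOf}(y, x)
  \rightarrow \mathrm{Human}(y))
```

Read this way, the trouble with implicit premises becomes clear: each one is itself a general claim about the world that could fail, so an inductive conclusion is only as secure as premises we never explicitly stated.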
Philosophers often use the term "synthetic" to describe inductive justification of real-world claims. In fact, science could be said to be a synthetic discipline: it relies on implicit premises in all of its applications. While such premises are necessary, philosophers of science aim to reduce the synthetic nature of science as much as possible. Indeed, the first modern approach to demarcating science was the empirical method, which deals solely with the explicit premises of inductive justifications. It defines the observations and experiments that scientists can 'count' as proper science (4, Popper, C&C). While implicit premises would still be necessary alongside these explicit premises in order to make scientific claims about the world, the empirical method gave early philosophers of science a specific tool with which to logically demarcate pseudoscientific practices from science. These philosophers, who called themselves logical positivists, argued that to produce a meaningful claim, one must always return to the tangible observations that result from that claim. This means that one could make synthetic claims about the universe, as long as there is a 'logical' deconstruction back to specific observations.
This deconstruction is most clearly demonstrated through the verifiability of a claim with tangible observations. As Galileo might have said: "Don't believe Saturn has rings? Borrow my telescope!" The ability to verify synthetic justifications through empiricism has consequently become a keystone practice of modern science. It provides science with a kind of empirical dogmatism, in which detractors are unable to argue with the evidence since it is necessarily clear to their eyes as well. Furthermore, once unverifiable statements are relegated to non-science (or nonsense), scientists may ignore those questions in religion or ethics that cannot possibly be verified through empirical observation. For example, the Kantian imperative that "one ought to treat people as ends rather than means" cannot be science, since empirical evidence cannot verify it (40, Ruse, C&C). By ignoring such disciplines, we need not argue that they cannot come to their own truths about the world, but only that they are not science.
However, verification is not a flawless tool for demarcating science from pseudoscience. If we allow any verification to count as science, we would be required to include theories such as Freud's psychoanalysis, Marx's theory of history, or any number of sociological theories that find verification in every conceivable observation. Ask a feminist what historical fact could be revealed that would not verify her belief that there should be equality between the sexes, and she could not give you one. Present a Freudian psychologist with two situations that necessarily contradict each other, and he will argue that both verify his theories (6, Popper, C&C). While these theories clearly can be verified, their verifications seem empty, because there is no possibility of refutation. They are simply systems of assumptions, frames with which to look at any possible world. Science should be concerned with this world, however, so any theory of science must be able to make predictions of specific events rather than merely interpreting events after the fact.
Karl Popper argues that scientific predictions should be risky: that is to say, conventional wisdom would assume that they could not happen (7, Popper, C&C). When Einstein successfully predicted that light from distant stars passing near the sun would appear shifted away from it when viewed during a solar eclipse, he was making a risky prediction (7, Popper, C&C). It was risky because, without his justification for the phenomenon, no one had any reason to believe that the apparent positions of those stars would differ from their positions in the night sky. Popper expands on this idea by arguing that "the criterion of scientific status of a theory is its falsifiability, or refutability, or testability" (7, Popper, C&C). He therefore requires not only that a scientific theory be testable for confirming evidence (verifiability), but also that it necessarily "forbids certain things to happen" (7, Popper, C&C). By requiring scientific theories to be falsifiable through prohibition, Popper strengthens scientific pedigree by removing the aforementioned psychological and sociological theories, which cannot be falsified by any conceivable evidence.
Popper's falsifiability also attempts to prune acceptable verifications by requiring that they result not only from "risky predictions," but also from an attempt to falsify a theory (7, Popper, C&C). This stems from his belief that science should be a discipline of integrity (41, Popper, C&C). Since "it is easy to obtain confirmations, or verifications, for nearly every theory" (7, Popper, C&C), Popper argues that we must focus our efforts on attempting to falsify our theories. If we attempt to falsify a theory and the theory is instead verified, we can be impressed; otherwise, the verifying observations should be discounted. Conversely, if a theory is falsified, Popper argues, scientists may revise the theory to fit the new anomaly so long as they concede some of the theory's "scientific status" (7, Popper, C&C). The history of science is filled with such ignored refutations; Thomas Kuhn argues that the ability of scientists to ignore a small number of anomalies is beneficial to scientific progress.
The ideal of falsifiability is to bring rigor to science. It forces scientists to make claims that can be repeatedly tested and thereby attempts to remove dogma from the discipline. In this way, Popper's arguments fit the conventional objective approach to science. However, Popper's claims of integrity and his simultaneous lenience toward anomalies necessarily undermine falsifiability. Arguing that a theory need only concede some scientific status when it accommodates anomalies is a sneaky way of permitting the common scientific practices that all but ignore his refutation criterion. Sufficiently powerful scientific theories often act as dogma and are therefore treated with a reverence equal to religious or sociological theories. A scientist cannot claim integrity merely for attempting to falsify her theories if she then acts like a Freudian, twisting observations into confirming evidence or ignoring anomalies outright. By not directly addressing this contradiction, Popper markedly weakens his own position. His claims to objectivity are further weakened by his appeal to the integrity of the scientist: valuing evidence according to circumstance necessarily introduces subjectivity into science.
Thomas Kuhn saw that even the most rigorous definitions of science seem to allow subjective posturing, and therefore took a more relativistic view of science. While Kuhn sees day-to-day scientific practice as necessary and genuinely progressive, he undermines science as a whole by arguing that what counts as science changes throughout history in such a way that there is no objective way (outside of time or place) to demarcate a scientific belief from a pseudoscientific one. Science, Kuhn argues, is like politics: institutions believe that certain ways are better than others at different points throughout history, yet it is impossible to be more or less certain of our basic assumptions about the world. Within a democracy (a specific political paradigm) there can be progress: an economy can grow, schools can be built, people can be given healthcare. However, if a revolution occurs and the country becomes socialist, the government is not inherently better or worse than before; it simply begins to follow a different set of assumptions.
Kuhn sees science in the same light: scientific paradigms come and go, each necessarily contradicting its predecessor, and each takes such a drastically different view of the world that it is impossible to argue for any grand accumulation of knowledge (89, Kuhn, C&C). Finally, since all theories are based on a finite body of evidence, we can never be sure that some new observation will not force us to completely rethink our assumptions. While this relativistic approach does seem true to the historical practice of science, it seems clear that science has some salvageable objective quality that can place its practice outside of time and place.
I contend that science can be seen as an efficient discipline for developing useful tools and medicines. While there is certainly a theoretical aspect to science, science is at its strongest when scientists experiment with and engineer new phenomena in the world. Just as the logical positivists believed that theories should dismantle into tangible observations, it may be necessary to demarcate science based on the technology it can produce. Ian Hacking argues that this technology-focused phenomenology is the only way to argue for deep truths in science; I believe it is also the only way to separate science from pseudoscience. While much of science begins with intangible theories, our ability to put these theories into action – like sending a man to the moon – is a testament to their validity and value. I believe it is wrong to be completely relativistic when many older paradigms could not account for the possibility of spaceships, or radio, or computers, or computer games based on EEG helmets.
It is true that when scientific theories first arise, they often lack technological applications. Focusing on technology will therefore make it hard to demarcate theoretical physics from intelligent design, psychology from astrology, and so on. In these cases, we may fall back on falsifiability and risky predictions, which, though imperfect, give us some ability to prune away those theories that we would like to keep out of science. If we are less worried about making hard demarcations between specific theories, we can simply argue that the scientific worth, value, or meaning of a theory rests on its application or applicability; theories can then have more or less meaning based on their potential to be applied in technology. I believe that this pursuit of technological utility as a marker of scientific status allows us to prune away just enough pseudoscience to remove those disciplines that provide no useful or novel services, without removing scientific theories that could still make novel contributions. While we might traditionally have argued that good theories mean certain things will or will not happen in nature, the last century of technological progress has allowed us to focus instead on our own ability to bring about new phenomena. Since science is necessarily a human pursuit, it seems natural for it to rely on our ability to alter our environment and ourselves based on our theories. The problem of demarcating science from pseudoscience would seem to be alleviated, at least partially, if we focus on the physical applications of our theories.
Curd, Martin, and J. A. Cover, eds. Philosophy of Science: The Central Issues. New York: W. W. Norton & Company, 1998.