Ammar Cephas Plumber

What is scientific realism?

Apr 13, 2019

Introduction

Scientific realism is the view that the aim of science is to describe real (though unobservable) entities or phenomena and that scientific theory, as it is further tested and refined, increasingly approximates the truth. Critics of this position may deny that such truth is knowable, or may allege that posited unobservables serve only pragmatic functions in generating sound predictions, theoretical cohesion, and explanation, but cannot be assessed as truth-like until they can be observed. In this essay, I assess the strength of two such arguments against scientific realism: the pessimistic meta-induction and underdetermination. Regarding the first, I argue that the pessimistic meta-induction casts significant doubt on the truth-likeness of contemporary scientific theories. I then defend the underdetermination thesis, using relevant examples to demonstrate its pragmatic significance.

Pessimistic Meta-Induction

The pessimistic meta-induction is an argument put forth by Larry Laudan suggesting that the historical failures of theories previously believed to be true create an inductive basis for rejecting the truth-likeness of contemporary scientific theories as well. There are a few ways realists have responded to the pessimistic meta-induction. One is to restrict realism to particular kinds of theories—namely, those that are mature and temporally novel or use-novel—in an attempt to minimize the evidentiary grounds for Laudan’s inductive claim. What might make a mature theory more truth-like is that it coheres with the principles of theories in other domains, has well-entrenched domain- and method-restricting implications of its own, and poses a limit on what other theories may be proposed. While it seems intuitive that these criteria can distinguish some theories from others, it is less clear that maturity is an epistemic criterion or that it can be objectively identified. Regarding the epistemic significance of maturity, it is apparent that mature theories have been overturned in the past, and, therefore, maturity alone cannot guarantee a theory’s truth. Psillos provides two examples of mature theories of this kind: the caloric theory of heat and nineteenth-century optical ether theories. Both were highly integrated with the theories of other scientific fields and significantly influenced experimental methods, yet both were eventually replaced by theories subsequently assessed as more empirically adequate. For this reason, maturity is not a guarantor of truth, so some other basis is needed for assessing contemporary theories as truth-like.

In addition to mature theories having been overturned, it is not obvious why, logically, the attributes that render a theory more mature would also render it more truth-like. Even if a theory coheres with principles in other scientific domains, the network of theories with which it coheres could itself be false. Consider the following:

  1. All scientific theories in vogue at time x, except theory k, depend on f as a real entity or phenomenon
  2. Theory k, which does not depend on f, coheres with all other theories in vogue at time x
  3. f is shown to be non-existent, which reveals all theories except for k to be false

In this case, because all other theories in vogue at time x were proven false, k’s coherence with them does not render k any more truth-like. Thus, it seems that coherence with other theories does not hold any intrinsic epistemic weight. Entrenchment, if taken to mean degree of confirmation, is perhaps a more defensible and epistemically meaningful facet of maturity than coherence. However, as previously noted, even theories that appeared well confirmed have subsequently been overturned. Thus, entrenchment, while perhaps making truth-likeness more probable, does not guarantee it. In the absence of a guarantee, it is not clear why any theory ought to be believed or regarded as true, as opposed to something milder, such as confidence in the theory’s empirical adequacy and usefulness.

Two other criteria that realists invoke to restrict the induction’s evidentiary base are temporal novelty and use novelty. Temporal novelty is the successful prediction of phenomena as yet unobserved at the time of the theory’s formulation. Use novelty is the assimilation of preexisting observations that were not built into the theory. Leplin proposes two conditions for assessing a result as use-novel:

  1. Independence condition: there exists a minimally adequate reconstruction of the reasoning leading to the theory that does not cite the type of observation in question
  2. Uniqueness condition: there is some type of observation that the theory predicts and that no alternative theory does

While it is clear why novelty may add to the credibility of a theory, it is not obvious that this additional credibility amounts to a guarantee of truth-likeness. After all, theories that were novel in the past were later replaced by more empirically adequate successors. The Ptolemaic system successfully predicted novel astronomical phenomena for centuries before it was eventually replaced by the Copernican system, which, at the time of its inception, explained and predicted fewer phenomena than the Ptolemaic system. It is not clear why a theory’s being the most novel now should render it more truth-like, merely in light of its recency, than a theory that was the most use-novel in Ptolemy’s time. For these reasons, novelty, like coherence with other theories, cannot guarantee truth-likeness.

A realist might hold that the two criteria can together—but not independently—guarantee truth-likeness. However, some further reason is necessary for making such a claim. If a coherent theory is not guaranteed to be truth-like, and a theory that predicts novel phenomena is not guaranteed to be truth-like, then a theory that is both is also not guaranteed to be truth-like; it is merely more likely to be so. Whether the theory is in fact truth-like is unknowable, as the relevant criteria are the same ones that might lead a realist to think something is truth-like, and I have shown that these do not guarantee truth-likeness. Thus, I argue that there is reason only for assessing a theory as empirically adequate, and no rational basis for regarding it as truth-like.

Another defense of realism against the pessimistic meta-induction is entity realism—the claim that truth can be identified in the continuity of the referential terms that have made past theories successful. Psillos refers to this argument as the divide et impera move. The claim rests on two premises. First, the realist must show that there is, in fact, continuity of terms between theories. Second, it must be shown that the terms carried over are those most conducive to empirical adequacy, which is allegedly an indication of truth-likeness. It is important to note, before proceeding, that the conceptual terms of interest are unobservables, as observable entities are the data points to be explained by theories. Accordingly, unobservable posits are what realists controversially claim to be true in light of the empirical success of theories.

The first premise—the continuity of theoretical terms—is not obvious, for a few reasons. Paradigm shifts have consistently yielded theoretical frameworks that are entirely unrecognizable relative to their predecessors. For instance, can it be said that a Ptolemaic orbit refers to the same unobservable as a Copernican orbit merely because the term is the same? The former involves Earth-centric retrograde and prograde motion, while the latter involves a variety of different features, such as annual revolution, uniformity, and daily rotation. To demonstrate referential continuity, a realist must first establish that terms in different theoretical frameworks do, in fact, refer to the same unobservable entities, no matter how differently they are described.

The second premise of the divide et impera move, as mentioned, is that the terms carried over are those responsible for the empirical adequacy of past theories (to the extent that they were, in fact, empirically adequate). This premise may appear more defensible than the first, but it, too, fails. The reason one might think it reasonable is as follows: a scientist would not opt to carry over a posited entity if he or she did not assess it as important to the empirical success of the new theory. What renders the premise problematic is that such an assessment has no logical basis. As the Duhem-Quine thesis points out, auxiliary and background assumptions are needed to derive predictions from a hypothesis. Experiments between rival theories therefore cannot logically establish which constituent hypothesis is false, only that one theoretical system as a whole is more empirically adequate (a point stated schematically at the end of this section). Duhem argues that scientists can use their “good sense” to identify which auxiliary or background assumptions are the source of a theory’s empirical inadequacy, but he provides no account of what principles this good sense makes use of. Without reliably operative principles ensuring that scientists discard only false assumptions, there is little reason for thinking that it is the truth-like terms that persist in science. After all, scientists’ “good sense” is fallible. For these reasons, I find the pessimistic meta-induction to be a compelling argument against scientific realism.
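To make the logical structure of the Duhem-Quine point explicit, it can be stated schematically: a hypothesis H yields a prediction P only in conjunction with auxiliary assumptions A1, …, An, so a failed prediction refutes the conjunction as a whole rather than any particular conjunct:

$$(H \land A_1 \land \cdots \land A_n) \rightarrow P, \qquad \neg P \;\therefore\; \neg H \lor \neg A_1 \lor \cdots \lor \neg A_n$$

Nothing in this deduction indicates which conjunct to reject; that selection is precisely the step Duhem assigns to “good sense.”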

Underdetermination of Theories by Evidence

Underdetermination arguments proceed roughly as follows: multiple theories are consistent with the body of observed data, such that none can be assessed as more empirically adequate than the others; therefore, an assessment of relative truth-likeness is impossible. There are multiple versions of this argument. The first is referred to as “weak underdetermination”. This argument holds that for each theory, there exists some hypothetical alternative theory that is equally compatible with the available evidence, so there is no basis for assessing one theory as true and the other as false. Interestingly, a version of this problem can be seen in the endeavor of curve-fitting: through any finite set of data points, there pass infinitely many distinct curves. One obvious solution to this problem is gathering additional data until one theory emerges as more empirically adequate. However, this does not solve the problem: the emergent theory will still have a competitor that matches its fit even on the enlarged data set, as the sketch below illustrates.
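The curve-fitting version can be made concrete with a minimal sketch in Python (the data points here are hypothetical, chosen only for illustration). One “theory” is the interpolating polynomial through the observed points; a rival is built by adding a term that vanishes at every observation. Both fit the evidence exactly, yet they disagree about points not yet observed:

```python
import numpy as np

# Hypothetical "observations": five data points standing in for a body of evidence.
xs = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
ys = np.array([1.0, 2.7, 7.4, 20.1, 54.6])

# Theory 1: the unique degree-4 polynomial through all five points.
p = np.polyfit(xs, ys, deg=4)

# Theory 2: theory 1 plus a multiple of the monic polynomial whose
# roots are exactly the observed xs, so the added term vanishes there.
q = np.polyadd(p, 3.0 * np.poly(xs))

# Both theories fit every observation perfectly...
assert np.allclose(np.polyval(p, xs), ys)
assert np.allclose(np.polyval(q, xs), ys)

# ...yet they make noticeably different predictions at unobserved points.
x_new = 2.5
print(np.polyval(p, x_new), np.polyval(q, x_new))
```

Gathering a sixth data point would eliminate this particular rival, but the same construction immediately yields another: add a multiple of the monic polynomial whose roots are the six observed points. At no stage does the evidence single out a unique curve.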

A possible response is to deny that weak empirical equivalence provides commensurate grounds for believing competing theories. One theory might contain ad hoc specifications or other features that render the theory less belief-worthy on inductive grounds. The reason for such an assessment may be that theories with ad hoc specifications have previously proven less empirically adequate than those without such features. But, unless there are inductive grounds for thinking that one theory is less likely to be true than another, it is unclear on what basis one can discriminate between two empirically equivalent theories.

Strong underdetermination obtains when multiple theories are not only equally consistent with the observed data but also yield the same predictions about data yet to be amassed. An example is Descartes’s suggestion that one cannot know whether or not one is dreaming, as all possible observational data are consistent with both hypotheses. It is possible to set aside possibilities such as this one—or Creationism, determinism, etc.—on the grounds that they are not relevant considerations: they are significantly divorced from observable reality and have no actionable implications. However, there could possibly (if perhaps implausibly) exist two strongly empirically equivalent theories that are both relevant. Even in this case, one response is that the two theories have no meaningful difference: being empirically equivalent, they are indistinguishable by any means of verification and therefore have the same meaning. This argument is quite detrimental to the realist position, which must hold that the content of a theory (the real entities, phenomena, and explanatory details it asserts) matters in addition to its verifiable predictions. While the content of a theory is certainly important, I consider it important for reasons different from those emphasized by realists. As van Fraassen suggests, the content of a theory matters for pragmatic (as opposed to epistemic) reasons: its simplicity, explanatory insight, and coherence with other theories. These attributes offer value to the theory’s user, and that value is what makes one empirically equivalent theory preferable to another.

In deciding whether concerns about underdetermination hold weight, it is useful to consider the argument in light of real examples. At present, much string theory research is under way, and a multitude of posited causal mechanisms have emerged. Several versions of string theory have thus far proven empirically equivalent, and the choice of which one a scientist ought to study further is a pragmatic consideration, as no epistemic criterion has decisively singled out one as more likely to be true than the others. During scientific revolutions, multiple theories are seen as empirically adequate, and, while one may emerge as more empirically adequate in time, the possibility of as-yet-undiscovered empirically equivalent theories renders talk of truth unhelpful and perhaps misguided.

Conclusion

For the aforementioned reasons, I find both the underdetermination thesis and the pessimistic meta-induction to be persuasive criticisms of scientific realism. It is unclear on what grounds the scientific realist asserts a theory to be truth-like besides empirical adequacy, which alone does not guarantee truth. Alongside epistemic considerations, pragmatic ones, too, seem to have guided much of scientific research and to have dictated which conceptual entities and explanations persisted across successive theoretical frameworks. For these reasons, both entity realism and scientific realism more generally seem unnecessary to account for scientific progress.

