The Competitiveness of Nations
in a Global Knowledge-Based Economy
December 2002
Scott Gordon
The history and philosophy of social science
Chapter 18: The foundations of science
Routledge, pp. 589-668
Introduction
1. The rise and fall of positivism
(1) Observations are concept-laden.
(2) Observations are hypothesis-laden.
(3) Observations are value-laden.
(4) Observations are interest-laden.
(5) Observations are laden with culture-specific ontologies.
2. Current epistemological theories
(a) Predictive instrumentalism
(b) Conventionalism
(c) Rhetorical analysis
(d) Phenomenology
(e) Evolutionary epistemology
(f) Kuhn’s paradigm model
(g) Lakatos’s methodology of scientific research
programmes
(h) The ‘strong programme’ in the sociology of
science
3. Cognitive instrumentalism
(a) Science, intelligibility, and public
knowledge
(b) Theories, facts, and empirical adequacy
(c) The problem orientation of science
(d) Science and non-science
B. THE STUDY OF SOCIAL PHENOMENA
1. Social science and natural science
2. Mentation, individualism, and holism
3. The problem of objectivity
A famous remark by Immanuel Kant about the
complementarity of ‘concepts’ and ‘percepts’ has been paraphrased by Imre
Lakatos to contend that ‘philosophy of science without history of science is
empty; history of science without philosophy of science is blind’.
In the preceding chapters of this book
I have tried to follow Kant’s advice that philosophy and intellectual history
should be blended by discussing philosophical questions as occasion has
offered within the framework of a (more or less) chronological account of the
development of the social sciences. This
procedure, convenient for the writer, has, I hope, also served the needs of
the reader; but we have not yet confronted directly the central issues that
are addressed by the philosophy of science in general and the particular
philosophical problems that are encountered in attempting to apply ‘scientific
methods’ to the study of social phenomena. These
matters have received a great deal of attention, especially during the past
half-century or so, from professional philosophers and social scientists.
This literature, however, has settled
few, if any, of the epistemic problems of natural or social science.
On the contrary, we live in an era in
which, while scientists claim to be making progress at a faster pace than ever
before, philosophers have thrown a cloud of doubt upon their enterprise by
raising fundamental issues concerning the basic foundations of knowledge
which, though largely disregarded by practising scientists, cannot be ignored
if one is to avoid the blindness that Kant spoke of.
In this chapter I will sketch and
appraise the recent developments in the philosophy of science that have raised
these doubts, discuss the main suggestions that have been advanced by those
who contend that some radical new approach to the understanding of the
scientist’s beliefs about the world is required, and discuss the special
problems that are encountered when the object of the scientific enterprise is
to advance our knowledge of human society.
The reader of the preceding pages will know
already that I have a high regard for science and for its contributions to
Western civilization. Criticism of the
logical foundations of science, and warranted concern about the effects of
some of its applications, do not negate the fact that science has furnished us
with reliable knowledge about the world we inhabit and has enabled us to
conquer many of the ills that, until just
yesterday on the time-scale of man’s existence, ubiquitously beset the human
condition. In saying this I am
referring not only to the progress of pure science in revealing the structure
and organization of nature, nor only to technological progress in the form of
such things as eyeglasses, electric motors, antibiotics, and hybrid corn.
Equally, or more, significant is the
role that science has played in emancipating us from certain metaphysical
beliefs that made the social lives of our ancestors fearful, servile, and
miserable. We no longer throw women,
bound hand and foot, into a pond to ascertain whether or not they are witches,
not because scientists have devised a better test, but because the scientific
way of thinking has undermined belief in occult powers.
The four primary forces that
physicists tell us are the bases of our universe are incomprehensible to the
layman, but they are quite unlike the forces that mystics of old invoked to
bully, maim, and murder the powerless members of their communities.
In Chapter 8 above we examined, in the context
of political theory and social philosophy, the notions of ‘progress’ and
‘perfection’. We found there that,
while some social philosophers have been content with the assurance that man
can improve his social life, others will settle for no less than a perfect
social order. For the latter, any flaw
in the social order is sufficient to condemn it altogether.
The literature of the philosophy of
science is punctuated by a similar opposition.
Some regard the philosophy of science as undertaking to explain how
our knowledge of the world has been able to grow more reliable and more
extensive; others view it as an exercise in apodictics — the search for
principles that guarantee the absolute certainty of knowledge.
Just as utopian social philosophers
are unable to find any functioning society that meets their demand for
perfection, apodictic philosophers of science find that the practices of
working scientists must be denounced, because they cannot guarantee certainty.
In section A of this chapter I will
begin by examining the historical background of the demand for certainty and
its modern embodiment in the philosophy of ‘positivism’.
Then I will discuss various
philosophies that have sought to occupy the domain that became vacant when it
was finally realized that certainty is impossible.
Finally, I shall present a brief
account of an ‘instrumentalist’ philosophy of science, which takes the stance
that objectivity and progress in our search for knowledge are possible, even
though certainty is not.
1. The rise and fall of positivism
The philosophy we shall be examining here is the theory of the foundations of knowledge promulgated in the 1920’s by the philosophers of the Vienna Circle. The term ‘positivism’ itself was coined by Auguste Comte but, as we noted above in Chapter 12 D, there is little affinity between the positivism that Comte and Saint-Simon and their disciples espoused and the epistemological doctrine that, following the work of the Vienna Circle, was widely accepted by philosophers of science and by most practising scientists who explicitly considered the epistemic foundations of their craft.
Rudolf Carnap, one of the members of
the Circle, suggested the term ‘logical empiricism’ in order to avoid the
association with the ideas of Comte that ‘positivism’ conveyed.
There was, however, one important
point on which their views were the same. Comte
had adopted the term to signify that science can furnish knowledge of which
one can say that one is not the least bit doubtful.
Comte invented the term, but not the idea.
As a mathematician he was heir to a
tradition that went back to the development, in ancient Greece, of knowledge
derived by logical deduction from propositions that were construed to be
self-evident ‘axioms’ and, therefore, indubitably true.
The corpus of Euclidian geometry,
which contained many propositions concerning the properties of space that were
not self-evident in themselves, was viewed as beyond dispute because it was
derived from axioms. In the era
that we call the ‘scientific revolution’, Euclidian geometry was widely
regarded as the ideal which all seekers of truth should aspire to emulate.
Descartes, in his Discourse on
Method (1637), undertook to deduce, from a single indubitable axiom, not
only new mathematical propositions, but the orbits of the planets, the
existence of God, and the location of the human soul.
None the less, philosophers of science did not
abandon the quest for certainty. Biologists,
geologists, and even physicists might have had to regard
their explanations of natural phenomena as
tentative, subject to modification, but the methodology of scientific
investigation need not itself be construed as unavoidably contingent.
Before we embark on an examination of how
positivism undertook to realize its epistemic goals, we must note another
trend in thought which, during the nineteenth century especially, claimed to
have discovered a method of cognitive certainty.
This was romanticism, and its method
was intuition. According to the
romantics, man’s capacity for obtaining knowledge by intuition is not
restricted to the propositions about space that provide the foundational
axioms of Euclidian geometry. The
power of intuition can enable us (or, at least, some of us) to apprehend
infallibly the real nature of the world and its fundamental properties, its
metaphysics that lies beneath its physics, the transcendental entities and
forces that are more fundamental than the immediate appearances of things and
events. This line of thought, a
revival of Platonism, had more influence in the arts than in the sciences,
but, especially through Hegel, it had a considerable impact upon European
philosophy. In stating their
principles of epistemology, the positivists aimed to destroy the metaphysical
pretensions of romanticism. In this
they were successful, but they went too far, claiming that science has no need
of any metaphysical assumptions about the world and that the presence of such
assumptions in a theory is sufficient warrant to reject it as pseudo-science.
But we are getting ahead of the story.
Let us turn now to examine the
principles that the positivists sought to establish as the proper philosophy
of science.
In dealing with the ideas of any group of people
in general terms, one unavoidably does less than justice to the individual
members. The philosophy of the Vienna Circle stemmed from the efforts of various philosophers, following Kant, to treat his novel notion that there are concepts, such as space and time, that are both a priori and ‘synthetic’ (i.e. empirical) as posing a semantic problem, and from the insistence on close examination of the language in which thought is expressed by philosophers such as G. E. Moore.
Social scientists paid little attention to the
Vienna Circle philosophers, but we should keep in mind, as we consider their
doctrines, that the members of the Circle, and most of their successors,
regarded positivist principles as applying, without amendment, to the social
sciences. These principles were viewed
as mandatory normative rules for the investigation of all phenomena.
The Vienna Circle philosophers,
despite holding the view that physics is the archetypical science, did not
undertake merely to describe the methods that physicists and other
successful scientists employ; their aim was canonical, to prescribe
methodological maxims for all rational procedures of inquiry.
Euclidian geometry, as we have seen, undertook
to establish indubitable propositions about reality by logical analysis, using
premises that were considered as factually true by ‘self-evidence’.
The positivists had no objection to
the use of deductive logic but they were wary of the notion of self-evidently
true factual propositions. In their
view, the only reliable source of factual information about the real world is
the empirical data we obtain by our senses. Euclidian
geometry claimed that the world cannot be otherwise, a contention that
had been cast down by the construction of non-Euclidian geometries.
The positivists took the stance that
the task of science is to tell us how the world is and, in this
enterprise, a priori axioms, or metaphysical assumptions, or any
other notions that do not represent observable entities are not permissible.
The positivists were ultra-empiricist
in insisting that the concepts of science must refer only to sensory-world
things and events and that the language of scientific discourse must be
strictly representational. They were
greatly influenced in this by Ludwig Wittgenstein’s Tractatus
Logico-Philosophicus (1921). This
advanced the view (which Wittgenstein later abandoned) that a language of
communication consists of terms that directly
correspond to sensory-world entities. One
may, as an individual, have thoughts that do not consist of ‘pictures’ of the
real world outside one’s mind, but such thoughts cannot be expressed in
language, for language cannot be private; it is a social phenomenon.
The positivists took the same view
and, going further than Wittgenstein, declared that statements that do not
represent observable entities are simply meaningless noises or unintelligible
marks on paper, and applied this severe judgement not only to professional
scientific discourse but to all domains of human communication.
According to the initial positivist view, the
task of the scientist is to describe the world, not to explain it.
Any purported explanation of a
phenomenon, the why of its occurrence, is an effort to delineate its
causes, and causation is not a legitimate concept.
In this the positivists followed David
Hume’s view that causation is not an observable property.
We may observe that one event
regularly precedes another, for example, but we are not justified in calling
one the cause and the other the effect. Our
senses inform us only that they are empirically associated; causal connection
is a theoretical inference that neither factual observation nor deductive
logic can support. The later ‘logical
empiricists’ did not take such an abstemious stance.
The ‘covering law’ model of science advanced by Carl Hempel, as we have seen in our examination of ‘The Methodology of History’ (Chapter 14), holds that the central task of human inquiry is to explain phenomena, and indeed, that non-observable
entities - causal connections - play an essential role in explanation.
Hempel and other ‘logical empiricists’
viewed science as proceeding by making theoretical ‘hypotheses’ which need not
necessarily refer to observable entities as long as inferences can logically
be deduced from them that are verifiable by direct observation.
This revision of positivism, though more defensible than the epistemological stance of the original Vienna Circle, did not escape criticism. We might note first that the rules prescribed by the positivists cannot themselves satisfy the criteria of meaning and verifiability that their philosophy prescribes for scientific inquiry. It might be replied that the philosophy of science is not itself a science, and so is not bound by the rules it prescribes for scientific inquiry.
This contention is defensible but,
nevertheless, the test of ‘self-reference’ (that no epistemic proposition may
demand criteria of validity that it itself cannot meet) would seem to be
legitimate, if not crucial. Recently a
number of writers have argued that the philosophy of science must itself be an
empirical science, using as its primary data the history of science and the
practices of contemporary scientists. This
extension of positivist descriptivism is prominent in the work of Thomas Kuhn,
Imre Lakatos, and a number of writers on the ‘sociology of science’.
These approaches are of special
interest to the social scientist because they emphasize the point that
knowledge is a social fact and that scientific investigation is a social
phenomenon. We will examine these
views anon.
One of the main objectives of the positivists was to demonstrate that any attempt to describe the nature or even to
assert the existence of something lying beyond the reach of empirical
observation must consist in the enunciation of pseudo-propositions, a
pseudo-proposition being a series of words that may seem to have the structure
of a sentence but is in fact meaningless.
The positivists may have intended to attack the
notion, still prevalent in the modern world, that there are invisible spirits,
occult forces, and divine powers, beyond the reach of human cognition, that
exercise influence on worldly events. In
doing so, however, they denied not only scientific status but even
unsophisticated intelligibility, or ‘meaning’, to a large domain of human
thought: poetry and the other fine arts, ethics and other disciplines engaged
in the study of values, and all forms of religious belief.
It is one thing to point out that
there is a difference between beliefs that are supported by empirical science
and those that are not; it is quite another to claim that the latter are
necessarily nonsensical. According to
the canonical demands of positivism, the social science disciplines that
employ non-observational concepts such as ‘motives’, ‘preferences’, and other
states of mind, even though they make use of empirical data, would have to be
reconstructed so as to eliminate such concepts if they were not to be
dismissed as worthless.
Quite apart from their failure to apply the
canons of positivism to their philosophy of science, the early positivists did
not rigorously adhere to them in their own scientific work.
The most striking example was Otto
Neurath.
Neurath’s views have some special interest for
us because he was a professional sociologist.
It is noteworthy that the early
positivists, while insisting that a meaningful language must not employ
valuational and emotive terms, did not forgo the use of such terms in
advancing the hegemonic claims of their philosophy.
The linguistic orientation of the Vienna Circle
led to a dead end, not because of failure to abide by its own canons of
meaningful language, but because the positivist programme shifted the focus of
concern from the methods of scientific inquiry to the verbal statements used
in scientific discourse. Epistemology
was collapsed into the linguistic study of syntax and semantics.
The linguistic analysts who were
inspired by positivism made significant contributions, but statements about
real-world entities are not the entities in themselves.
In pursuing the linguistic
implications of their doctrines the positivists abandoned their empiricism,
and positivist philosophy degenerated into attenuated scholastic discourses on
how scientists should talk about what they do.
Neurath and Carnap even rejected the
view that linguistic scientific propositions are verifiable by experience,
contending that a complex of such propositions is self-verifying if the
members of the complex support one another. The
‘truth’ of a single proposition is, according to this view, simply its
‘meaning’ in the complex. Such a
stance, in effect, makes the verbal coherence of linguistic discourse the
dominant epistemic criterion of science, asserts the primacy of
definitions, and demotes sense data to, at best, a minor role.
One of the most serious weaknesses of early
positivism was that it appeared to reject the use of any criteria to enable
one to establish the domain of a scientific investigation by demarcating
relevant from irrelevant factors.
Without using a causal theory, how can one decide, say, that it is not
necessary to take the density of Mars into account when investigating the
shape of the DNA molecule? Astrology,
which the positivists derided, employs concepts that refer to observable
phenomena. How can its claims be
dismissed without using an a priori metaphysical conception of reality
that allows one to regard the positions of the planets as irrelevant to human
events?
Recognition of the necessary role of theory in
scientific investigation led to the reformulation of positivism as an
epistemic doctrine that focuses upon the explanation of a delimited class of
phenomena by means of procedures in which empirical evidence is used to test
the validity of theoretical propositions concerning causal linkages.
As we noted in considering the INUS
model of causation (Chapter 3 A 3 above) no real-world phenomenon can be
explained by reference to a single causal factor, since all phenomena result
from a set of factors. Lightning
may be called the cause of a forest fire in an abbreviated
account but a full statement would have to list
the other factors that are necessary, such as dryness, the presence of
combustible material, etc. In a famous
paper published in 1948 (‘Studies in the Logic of Explanation’, Philosophy of Science)
Carl Hempel and Paul Oppenheim argued that a full account of such a
phenomenon would also have to include a statement of the relevant ‘governing
laws’, such as, for example, that when the temperature of dry wood is raised
beyond 400°C it commences to oxidize rapidly.
Universal statements or ‘laws’ are necessary components of causal
explanation, even of singular phenomena such as a particular forest fire.
But how do we come by such general
governing laws? They are not
generalizations derived from immediate observation.
They are theoretical hypotheses which,
together with other postulated conditions, enable one to deduce certain
conclusions that refer to observable phenomena.
So, the argument goes, in this way the
laws can be verified by sense data. Thus,
for example, the occurrence of a forest fire, and many other singular events,
including ones produced in laboratory experiments, certify the truth of the
general law that wood begins to oxidize rapidly when its temperature is raised
above 400°C. A reformulation of
positivism that became widely accepted construed scientific explanation to be
a form of argument using general covering laws which, though ‘hypothetical’,
are legitimate because they have been verified, indirectly, by empirical
experience.
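Stated schematically (this is a standard textbook rendering of the Hempel-Oppenheim schema, not a quotation from their paper), a deductive-nomological explanation is a valid deduction whose premises combine general laws with particular conditions:

```latex
% Deductive-nomological schema: the explanans (general laws together
% with antecedent conditions) logically entails the explanandum E.
\begin{array}{ll}
L_1, L_2, \ldots, L_k & \text{(general laws)} \\
C_1, C_2, \ldots, C_r & \text{(antecedent conditions)} \\
\hline
\therefore\; E & \text{(the phenomenon to be explained)}
\end{array}
```

In the forest fire example, the L’s include the law that dry wood oxidizes rapidly above 400°C, the C’s include the lightning strike, the dryness, and the presence of combustible material, and E is the occurrence of the fire.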
This philosophy of science was not new.
Its essentials had been stated a
century earlier by (among others) John Stuart Mill in his System of Logic
(1843). Moreover, many practising
scientists explicitly stated equivalent epistemic doctrines or were implicitly
guided by them. This takes nothing
away from the importance of Hempel’s argument.
The basic form of the deductive-nomological
model is equivalent to that of the Aristotelian syllogism, which we examined
above in Chapter 3 A 2. It has three
parts: (1) a proposition that is asserted to be universally true of a class of
phenomena, i.e. a general law that covers all members of the class; (2) a
proposition asserting that a particular phenomenon is a member of this class;
(3) a proposition that is derived from (1) and (2) as a matter of logical
deduction. If, for example, we say
that (1) all swans are white; that (2) a particular entity is a swan; then it
follows (3) that the entity is white. The
formal logic of this procedure is impeccable, but the empirical truth
of (3) rests upon
the empirical truth of (1) and (2).
Both these premises are problematic.
Particular entities do not naturally
arrange themselves neatly into classes; a classification system is a human
artefact that is imposed upon the observation data.
So, therefore, propositions such as (2) are not purely empirical; they contain a ‘theoretical’ component,
or, as some philosophers say, empirical observations are ‘theory-laden’.
It will be convenient if we defer
discussion of this problem until a later point, and focus here upon
propositions such as (1) above which assert the existence of universal laws.
In order to maintain the empirical certainty of
inferences obtained by the deductive-nomological procedure, the universal law
premise must be empirically certain. To
say that ‘many’ or even ‘most’ swans are white will not serve.
It is not even formally sufficient to
note that all swans that have ever been observed have been white, since there
are, and have been, many unobserved swans in the world, and of course, future
swans are not observable. In fact,
this particular universal proposition had to be abandoned when black swans
were found in Australia.
In developing his own philosophy of science
Popper seized upon the limitation of the modus ponens mode of logic
that we noted when discussing it above (Chapter 3 A 2).
If the premises of a syllogism are
true, the conclusion must also be true. But
this theorem is not reversible, that is, it does not permit one to say that if
the conclusion is true the premises must be true.
Such an assertion would commit the
logical fallacy of ‘affirming the consequent’.
True conclusions can be logically derived from false premises.
For example, the propositions that (1)
all professional physicists are Marxists, and (2) Otto Neurath was a
professional physicist, lead logically to the conclusion that (3) Neurath was
a Marxist. If (1) is a theoretical
hypothesis, then (3) is empirical evidence that helps to confirm it, since (3)
is true. But (1) is not true.
In order to avoid arguments that allow
true empirical evidence to confirm false theories, Popper contended that
scientific reasoning must use the modus tollens mode of deduction,
which draws inferences about the premises from the observed falsity of the
conclusion. The empirical truth of a
conclusion tells us nothing for
certain about the premises from which it is
logically derived; but the empirical falsity of a conclusion is a certain
indicator that at least one of the premises must be false.
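The asymmetry between the two modes of inference can be checked mechanically. The following sketch (an illustration of the logic, not anything in the original text) enumerates every assignment of truth values and confirms that modus tollens is a valid form while ‘affirming the consequent’ is not:

```python
# A mechanical check of the two argument forms discussed above. An
# argument form is valid iff no assignment of truth values makes all
# premises true while the conclusion is false.
from itertools import product

def implies(p, q):
    return (not p) or q

def valid(premises, conclusion):
    return all(conclusion(p, q)
               for p, q in product([True, False], repeat=2)
               if all(prem(p, q) for prem in premises))

# Modus tollens: from (P -> Q) and not-Q, infer not-P.
print(valid([lambda p, q: implies(p, q), lambda p, q: not q],
            lambda p, q: not p))   # True: the form is valid

# Affirming the consequent: from (P -> Q) and Q, infer P.
print(valid([lambda p, q: implies(p, q), lambda p, q: q],
            lambda p, q: p))       # False: a true conclusion can
                                   # follow from a false premise
```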
The famous Michelson-Morley
experiment, for example, was conducted in order to test the proposition that
there is a medium, called ‘ether’, through which light travels.
The procedure was to deduce certain
observable consequences that must logically follow if this proposition were
true. The experiment was set up to
test one of these consequences by means of a measuring apparatus.
The data did not conform to the
predicted value, thereby falsifying the currently accepted theory of light and
casting doubt upon the concept of an ether. This
‘negative experiment’ played a significant role in subsequent work in
theoretical physics which, according to some historians, led to Einstein’s
theory of relativity. Popper took this
procedure as an archetypical exemplification of scientific method.
Scientific knowledge, he maintained,
is acquired by means of successive Conjectures and Refutations (the
title of one of his books). Theories
are tentative ‘conjectures’. They
cannot be verified by empirical evidence, but they can be refuted.
We build up our knowledge of the world
by ascertaining what is not true.
This ingenious ‘solution’ to the problem of
induction appeared to place the enterprise of science on a solid epistemic
footing. Popper’s central thesis had
been, apparently unbeknownst to him, clearly stated previously by William
Stanley Jevons (Principles of Science, 1874), whom we encountered in
Chapter 17 as one of the founders of ‘marginal utility’ theory in economics.
But the context of Popper’s statement was that, at the time that it was made, the philosophy of the Vienna Circle was at the height of its influence.
Popper’s thesis that science proceeds by
falsifying theories proved, however, to be as flawed as the claim that it
proceeds by setting up empirical tests that can verify them.
Again, the fact that a causal analysis
involves attributing a phenomenal observation to a set of conditions is
the heart of the problem. The
universal law that wood burns when its temperature rises above 400°C is a
necessary element in such a set, but it is not logically sufficient, in
itself, to predict a forest fire. If a
lightning strike, or a discarded match, or an unattended
camp fire, or even the deliberate action of an
arsonist, fails to start a forest fire, it does not demonstrate conclusively
that the law must be wrong, since the failure may be due to the absence of
other necessary factors. This point
had been made, a generation before Popper’s Logik, by Pierre Duhem, in
1906, and was restated by Willard Van Orman Quine in 1951.
The ‘Duhem-Quine’ thesis, as it is now
called, does not say that falsifying observations are worthless in evaluating
a theory, but it is a compelling argument against the contention that such
observations are unambiguous evidence that the theory is wrong.
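The Duhem-Quine point lends itself to a schematic illustration. In the sketch below (the condition names are invented), the fire prediction follows from the ‘law’ only in conjunction with auxiliary conditions, so a failed prediction leaves several equally eligible culprits:

```python
# Sketch of the Duhem-Quine point (condition names invented for
# illustration): the fire prediction follows from the 'law' only in
# conjunction with auxiliary conditions.
def fire_predicted(law_holds, ignition, dry, fuel_present):
    return law_holds and ignition and dry and fuel_present

observation = False  # the lightning strike failed to start a fire

# Each of these condition-sets is equally compatible with the failed
# prediction: the false member may be the law or either auxiliary.
candidates = [
    dict(law_holds=False, ignition=True, dry=True,  fuel_present=True),
    dict(law_holds=True,  ignition=True, dry=False, fuel_present=True),
    dict(law_holds=True,  ignition=True, dry=True,  fuel_present=False),
]
for c in candidates:
    assert fire_predicted(**c) == observation
print('the failed prediction indicts only the conjunction')
```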
In his Logik Popper rejected
this thesis, but later he admitted that empirical evidence can only test a set
of propositions and modified his falsification argument, most significantly by
asserting that a theory cannot be rejected unless another theory is available
that is better, according to certain criteria which he tried to establish.
This was an important concession, since it, in effect, involved the
notion that scientific knowledge grows by means of a contest between
alternative theories, not simply through a confrontation between theory and
empirical evidence.
So far we have considered only the logic
of scientific explanation and confirmation. Another
attack came from a different angle, questioning the reliability of sense data
themselves. No one would argue that
empirical observations are completely free of error.
Science can contend with that, by
better instrumentation, multiple observations, refined methods of statistical
collection, etc. But what if the
observations, however made, are guided by an a priori theory?
In such a circumstance the theory can
be neither verified nor falsified by the factual data, because so-called
‘facts’ are commingled with the theory that is to be tested.
Some philosophers, most prominently
Norwood Russell Hanson (Patterns of Discovery, 1958), contended that
this problem is ubiquitous, and insurmountable.
No factual data are free of theory,
and none can be made free, since a theory of some sort is necessary in order
to make any factual observation. The
notion that theories can be tested by independent empirical evidence
must be abandoned. This argument,
which appeared to be supported by psychological findings as well as
philosophical considerations, gave the coup de grace to all versions of
positivist epistemology, including Popper’s, and indeed called into doubt the
very possibility of constructing an objective body of scientific knowledge.
This problem would appear to be serious enough
when one construes the enterprise of science as the construction of theories
that are verified by, or at least not falsified by, empirical tests.
It becomes more serious still if one
takes the view that the role of empirical evidence is not to test a single
theory, but to enable one to choose among alternative theories.
Louis Althusser, for example, contends
that one cannot choose between the economic theories of David Ricardo and Karl
Marx because they are incommensurable, each having its own standards of
validity (Reading Capital, 1970). According
to this view, treating Ricardian and Marxian value theory as both having been
falsified by the same empirical evidence (that the capital-labour ratio is not
uniform across
industries - see Chapters 9 A and 13 D 1 above)
represents a failure to understand the nature of scientific inquiry.
W. V. O. Quine formulated this problem more concretely in terms of standard epistemology, without resort to the
notion that observations are theory-laden, as the ‘underdetermination thesis’.
Stated briefly, this maintains that if
more than one set of causal factors is sufficient to account for a phenomenon,
then the empirical observation of it cannot tell us which set is operative,
even if the observation is totally objective and not theory-laden.
Let us consider for example a problem in medical
diagnostics. According to
physiological theory, a painful swelling in the ankle joint might be due to
(a) an injury, (b) a bacterial or viral infection, (c) an auto-immune disease
such as arthritis, or (d) blood cancer (leukemia).
These are quite different biological
processes. The observation data (the
swelling) are insufficient to determine which of them is the cause of the
swelling. Modern medicine is not
stumped by this sort of ambiguity, for other observations can be made to
narrow the possibilities and, in many cases, reduce them to one.
But Quine’s point is that the central
problem is not an empirical one but epistemic, since it is always possible to
postulate additional theories that may account for the phenomenon.
With a little theoretical
inventiveness we may add to the above list such things as (e) environmental
contamination, (f) childhood sexual trauma, (g) the conjunction of the
planets, and (h) witchcraft. How do we
then choose between the contending theories? Some
theories, for example ones like (f) and (g), might be rejected on the
grounds that they rest upon unacceptable metaphysical presumptions.
However much one might be persuaded
that this was so, it could not be proved; but even the adoption of a severely
constrained mechanistic ontology would not do away with the problem of
under-determination, since an unlimited number of mechanistic explanations can
be postulated. Popper tried to solve
the problem of theory choice by establishing criteria that would compare
competing theories in terms of their ‘truth-value.’
The attempt failed, and it now seems
clear that other types of criteria must be employed.
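The diagnostic example given above can be put in miniature. In the following sketch (the symptom sets attributed to each hypothesis are invented for illustration), a single observation is entailed by every rival hypothesis and so cannot select among them; further observations narrow the field, but, as Quine insists, the list of rivals can always be extended:

```python
# Sketch of the underdetermination thesis, using the diagnostic example
# above; the symptom sets attributed to each hypothesis are invented.
hypotheses = {
    'injury':     {'swollen ankle', 'bruising'},
    'infection':  {'swollen ankle', 'fever'},
    'arthritis':  {'swollen ankle', 'other joints affected'},
    'leukemia':   {'swollen ankle', 'abnormal blood count'},
    # theoretical inventiveness can always extend the list:
    'witchcraft': {'swollen ankle'},
}

def consistent(observed):
    # a hypothesis survives if it accounts for everything observed
    return [h for h, predicts in hypotheses.items() if observed <= predicts]

print(consistent({'swollen ankle'}))           # every rival survives
print(consistent({'swollen ankle', 'fever'}))  # more data narrow the field
```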
A criterion of theory choice that has a long
lineage in the philosophy of science, going back at least to the heretical
William of Ockham in the fourteenth century, says that, among equally
explanatory theories, the simplest is the best.
But we have no warrant for believing that the world is simple. The case for simplicity rests, rather, on the view that theories are instruments that serve to render reality intelligible to the human mind. Given our limited
intellectual powers, simple theories are better on pragmatic grounds than
equally explanatory complex ones. Indeed,
a perfect representational model, if it could be constructed, would
necessarily be as incomprehensible as reality itself.
Some modern macroeconomic models,
consisting of hundreds of equations, while still far from capturing the
complexity of the economy, seem already to have reached the limit of
intelligibility. The computer prints
out the solutions to the equations but its masters have difficulty explaining
the why of these results in economic (as opposed to mathematical)
terms. The criterion of simplicity,
which accepts with equanimity that theories will be ‘unrealistic’, is based on
the notion that theories are human creations designed to serve utilitarian
purposes. We shall return to this
point below.
So far we have focused on the flaws in the
ultra-empiricist epistemology put forward by the Vienna Circle philosophers,
in its reformulation by Hempel and others into the ‘deductive-nomological’
model of scientific explanation, and in Popper’s thesis that a body of secure
knowledge can be progressively developed by using the information provided by
the empirical refutation of conjectural hypotheses.
But the presence of a flaw in an
epistemic thesis is not fatal, unless one takes the perfectionist view that
the beliefs one holds about the world constitute scientific knowledge only if
there are objective empirical grounds for regarding them as altogether beyond
doubt. For the non-perfectionist the
issue is: how important are these epistemic flaws for the enterprise of
science? In considering this question
I shall concentrate upon the ‘problem of induction’ and the notion that all
observations are ‘theory-laden’.
So far as scientists themselves are concerned,
it seems that the problem of induction is not recognized even as a caution,
much less as an impassable barrier to progress.
When necessary, a scientist will,
without a qualm, use ‘Avogadro’s number’, which, though it has been computed
from a limited set of specific cases, asserts that all gases, at equal
temperature and pressure, contain 6.023 × 10²³
molecules per gram molecular weight. In
the Handbook of Chemistry and Physics there are literally hundreds of
thousands of such universal numerical statements for particular elements and
compounds: boiling points, melting points, solubilities, densities, X-ray
diffraction angles, etc., most of which are not even given with ± qualifiers.
Biologists have studied intensively
the genetics of only a small number of organic species, yet they make
universal statements about the general laws of genetic transmission with only
slightly less confidence than physicists do when referring to all copper as
having the same thermal conductivity. For
the working scientist, the problem of induction is, clearly, not perceived as
a problem. Are scientists wrong to
behave in this way? A moment’s
reflection is sufficient to tell us that if scientists were to heed the
injunction against universal empirical statements, the work of scientific
investigation would not be improved, but would come to a halt altogether.
If a philosopher were to tell a
scientist that he had no warrant for asserting that the melting point of gold
was 1,064.43°C because he had not melted all the gold in
the universe, the scientist would be well
justified in curtly bidding him to be gone.
It is not reason, but the abuse of reason, to
insist that no universal statement should be made about a class of phenomena
unless all members of the class have been examined.
The most that the philosophical
empiricist can reasonably demand is that we regard such statements as
inferences drawn from limited experience that may be generalized as
probably true universally, and recognize that different general statements
may be embraced with different degrees of confidence, excluding only the
probability extremes of 0 and 1. This
was recognized more than a century ago by W. S. Jevons, who declared that ‘the
theory of probability is an essential part of logical method’ because ‘no
inductive conclusions are more than probable’ (Principles of Science,
1874, p. vi) and, implicitly, by J. S. Mill in contending that all general
laws, such as those used in economics, are statements of ‘tendency’ (‘On the
Definition of Political Economy and on the Method of Investigation Proper to
It’, Essays on some Unsettled Questions in Political Economy, 1844).
Carl Hempel extended his covering law
model of scientific explanation to include explanations based upon
law-statements that are statistical (‘The Logic of Functional Analysis’, in
Llewellyn Gross, ed., Symposium on Sociological Theory, 1959),
thus greatly reducing the weight of the ‘problem of induction’.
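Jevons’s dictum can be given a simple Bayesian illustration (my own sketch, not a calculation found in the text; the prior and the likelihoods are invented for the purpose). Confirming instances drive the probability of a generalization toward 1 without ever reaching it, and a single counter-instance destroys it:

```python
# A Bayesian sketch (mine, not Jevons's) of inductive confirmation: two
# rival hypotheses about swans, H1 'all swans are white' and H2 'only
# 90 per cent are white'. The prior and the 0.9 figure are illustrative.
def update(p_h1, p_h2, white):
    l1 = 1.0 if white else 0.0     # likelihood of the sighting under H1
    l2 = 0.9 if white else 0.1     # likelihood under H2
    z = p_h1 * l1 + p_h2 * l2
    return p_h1 * l1 / z, p_h2 * l2 / z

p_h1, p_h2 = 0.5, 0.5
for _ in range(50):                # fifty white swans in a row
    p_h1, p_h2 = update(p_h1, p_h2, white=True)
print(round(p_h1, 4))              # approaches 1 but never reaches it

p_h1, p_h2 = update(p_h1, p_h2, white=False)
print(p_h1)                        # 0.0: one black swan refutes H1
```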
In a widely used textbook on
scientific method Ronald N. Giere says, concerning Galileo’s law of the
pendulum, ‘the generalization, “All real pendulums satisfy Galileo’s law,” is
surely false. But the hypothesis that
most real pendulums approximately satisfy the law might be true.
This is really all that science
requires.’ This view, which replaces
the utopian demand for certainty with the utilitarian one of explanatory
adequacy, has been advanced by philosophers such as Abraham Kaplan and Bas C.
van Fraassen. It raises some special
problems for any science whose findings are used as a guide to action, since
probability theory, as such, does not tell us how much risk we should be
willing to take of accepting a false theory or rejecting a true one (this
point will be discussed further in section B 3 below).
But so far as the celebrated problem
of induction is concerned, working scientists are right to be unconcerned, and
not to worry much over whether theoretical hypotheses should be verified or
falsified. Neither can furnish certain
knowledge, but imperfect confirming and falsifying procedures can both supply
empirical evidence that may be used in building up our cognition of the world.
The notion that observations are ‘theory-laden’ is a more serious and more far-reaching attack on scientific method because it
says, in effect, that we cannot rely upon the information supplied by sense
data. David Hume initiated the long
debate over induction by pointing out that observation of particular entities
does not warrant the making of universal statements about all members of the
class to which they belong; Russell Hanson and others say that we cannot even
claim that the particular observations are valid, because observations are
necessarily controlled by prior theories. Empirical
data are subject not only to
the randomly distributed errors that arise from
imperfect precision in measurement, but to unavoidable systematic bias.
Upon examination, however, this
problem too diminishes greatly in significance.
(For trenchant critiques of the
Hansonian thesis see Israel Scheffler, Science and Subjectivity, 1982,
especially chapter 2, and Ian Hacking, Representing and Intervening,
1983, chapter 10.) The nub of the
issue is that the word ‘theory’ in the phrase ‘theory-laden’ is used
imprecisely, failing to differentiate between a number of quite different
types of controls that may impose themselves upon factual observations.
In the discussion of this issue that
has taken place in recent years five distinct contentions have been advanced,
though often confounded.
(1) Observations are concept-laden.
In order to make an empirical
observation we must make use of generic concepts that enable us to order the
sensations we receive. As I look about
me at this moment I see such things as a computer, books, files, windows; I
hear the furnace fan and a car passing by; I smell coffee; and so on.
The sensations are classified by means
of concepts such as ‘furnace fan’ and ‘window’ that I have learned to apply.
In scientific research we also use
such ordering concepts. A chemist can
observe ‘benzene rings’, an economist ‘imports’ and ‘exports’, and a
sociologist ‘crime’ only because each already knows how to identify what he
observes. In science, such concepts
are ‘theoretical’ because they are derived from a theory about the world.
Thus, for example, the concept of
‘phlogiston’ was part of an explanatory theory about the mechanism of
combustion. It is no longer used;
instead scientists speak of ‘oxidation’, which derives from a different
theory. But the concepts used by
an explanatory theory are not the same as the theory.
Concepts are like the nouns in a
sentence; they assert nothing in themselves. Theoretical
sentences assert something about how the world works.
That observations are concept-laden
cannot be denied, but it does not mean that explanatory theories cannot be
subjected to empirical test. On the
contrary, without such concepts scientific tests, as well as ordinary life,
would be impossible. In so far as the
claim that observations are ‘theory-laden’ refers to the fact that
observations are concept-laden it is true but, in itself, this does not cast
doubt upon the possibility of using empirical evidence to evaluate a theory.
The crucial contention is the one we
examine next.
(2) Observations are hypothesis-laden.
Empiricism demands that
theoretical hypotheses be subject to test by observational data.
If the observations are so controlled
by the hypothesis itself that contradictory observations are not possible,
then indeed this demand cannot be met. But
a procedure in which a control of this sort is exercised is simply bad
science; it is not an inherent characteristic of science, as Hanson and others
have claimed. The point can be shown
by an illustration. In the
Statistical Abstract of the United States we find, for example, data on
U.S. ‘interest rates’ and the ‘trade balance’, the latter computed by
subtracting ‘imports’ from ‘exports’. To
compile these data, theoretical concepts must be employed.
Now let us take a theoretical
hypothesis such as, say, that the level of interest rates acts as an
important causal factor in
determining the trade balance.
The data are clearly independent of
this hypothesis and can therefore serve, by the use of appropriate econometric
techniques, as an objective test of it. Economists,
like other scientists, are perfectly aware of the fact that data can be
massaged to support a theoretical hypothesis.
This is a practical problem in maintaining the honesty of scientific
work. It is not a fundamental
epistemic difficulty, as Hanson claimed.
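A minimal version of such an econometric test can be sketched as follows (the series are synthetic stand-ins for the Statistical Abstract data, and the numbers are invented for illustration). Nothing in the compilation of the two series presupposes the hypothesis, so the estimate can count for or against it:

```python
# Sketch of an objective test of the hypothesis above. The two series
# are synthetic stand-ins for the Statistical Abstract data; the
# coefficients are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
interest_rate = rng.uniform(2.0, 10.0, size=40)               # per cent
trade_balance = 5.0 - 1.2 * interest_rate + rng.normal(0, 1.0, 40)

# Fit trade_balance = a + b * interest_rate by ordinary least squares.
X = np.column_stack([np.ones_like(interest_rate), interest_rate])
(a, b), *_ = np.linalg.lstsq(X, trade_balance, rcond=None)
print(f'estimated effect of interest rates: {b:.2f}')
# The compilation of neither series presupposed the hypothesis, so the
# estimate can count for or against it.
```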
(3) Observations are value-laden. This is the contention that aesthetic, moral, religious, political, or ideological values contaminate the empirical process. That they may do so and in fact sometimes do is incontrovertible but, as with (2) above, the claim that this presents an insurmountable epistemic difficulty is incorrect. In the social sciences, and indeed in all scientific work that has social policy implications, the contamination of empirical evidence by value judgements is a danger that one must guard against. It is not so deeply embedded in the methodology of scientific investigation as Hanson and others have claimed, but it raises an issue of special importance for the social sciences, since they are more oriented to social problems and social policy than are the natural sciences. We shall return to this matter below in section B 3.
(4) Observations are interest-laden.
This is the notion that scientists
have personal interests or interests that derive from their membership of a
social or economic class, or a national group, etc.
This thesis, which has been especially
prominent in the radical literature of the social sciences, can be disposed of
by simply repeating the arguments advanced under (2) and (3) above.
But one additional point is worth
making: the thesis fails the test of self-reference.
When Joseph Stalin declared that
Mendelian genetics was ‘bourgeois’, reflecting the class interests of Western
biologists, did he not expose himself to the parallel contention that his
acceptance of Lysenko’s views on genetics reflected the interests of the
ruling class of a communist state? Fortunately,
such a game of epistemic tit-for-tat is not all that can be done to contradict
such claims. Lysenkoism was undermined
by its inability to serve as the foundation of a successful empirical research
programme in biology and by its failure to produce the predicted practical
results when applied to Soviet agriculture.
(5) Observations are laden with
culture-specific ontologies. This
is a more general contention than the other four.
It recognizes that every mature human
is the product of an enculturation process, and that cultures may differ from
one another in their fundamental conceptions of the nature of the world.
The individual who is raised from infancy to maturity in a
twentieth-century Western society is programmed, so to say, to view the world
in a different way from one who is enculturated into a Buddhist society, or
one brought up in a social environment where belief in magical powers is part
of the pervading culture.
According to this view, what we call ‘scientific knowledge’ reflects the
metaphysical beliefs of only a part of humankind, and perhaps indeed the
smaller part. The empirical
observations made by scientists are laden with the particular ontological
outlook of their culture. Science
is therefore culture-relative, not objective in any general sense.
That humans are the products of enculturation,
and that cultures differ, cannot be denied. Indeed,
I have stressed these points repeatedly in this book.
But this does not force one to the
conclusion that the findings of science are so culture-bound that no claim to
objective validity can be certified. Let
us take, for example, the view that rain can be caused to fall by the
performance of certain prescribed ceremonies such as, say, a ritual dance.
This view is held in some societies
and not in others, reflecting different ontological conceptions.
That such different views are held is
clear, but it does not mean that a rain-dance does indeed cause rain to fall
when it is performed by believers. If
this were so the world would be even stranger than physicists tell us it is;
it would be whatever one believed it to be. According
to such a view, matter is the creation of mind, and by an act of mentation one
could create any kind of world one wished, not only different for different
cultures but, in principle, different for every individual.
The world is perceived
differently by different cultures and even by different individuals, but this
does not mean that in fact there are many worlds.
The aim of science is to transcend the subjectivity of individual
perceptions and the control of cultural conceptions, and come to know a world
that is external to ourselves. We have
ample evidence, if from nothing else than the practical success of science,
that this aim is not incapable of realization.
This is perhaps more difficult for the social sciences, since in those
disciplines we are trying to transcend the control of culturally embedded
conceptions in the study of culture itself. But
there is no warrant for the view that the social sciences are irredeemably
subjective, or culture-relative to a degree that prevents them from arriving
at reasonably objective inferences about social phenomena.
Where do we emerge, then, from this examination
of the ‘problem of induction’ and the contention that empirical observations
are ‘theory-laden’? If these and
allied criticisms of the methodology of science had to be taken seriously the
consequences would be profound. As
Israel Scheffler puts it:
The overall tendency of such criticism has been
to call into question the very conception of scientific thought as a
responsible enterprise of reasonable men. The
extreme alternative that threatens is the view that theory is not controlled
by data, but that data are manufactured by theory; that rival hypotheses cannot
be rationally evaluated, there being no neutral court of observational appeal
nor any shared stock of meanings; that scientific change is a product not of
evidential appraisal and logical judgment, but of intuition, persuasion, and
conversion; that reality does not constrain the thought of the scientist but
is rather itself a projection of that thought. (Science and Subjectivity,
1982, p. xi)
However, as Scheffler recognizes, we are not
forced to this conclusion. The
criticisms of the positivist epistemic programme did not succeed in
demonstrating that it, and all other claims that science can furnish objective
knowledge, are fatally flawed. Like
the positivists themselves, their critics went too far, claiming in effect
that if scientific theories cannot be certain they cannot
be objective, and that objectivity must
therefore be abandoned, even as an ideal. During
the past twenty years or so the literature of the philosophy of science has
been punctuated by the contention that positivism has been utterly
discredited, root and branch, and that some radically different approach to
the philosophy of science is required. We
go on now to review this literature or, at least, those parts of it that are
of interest for the philosophy of social science.