The Competitiveness of Nations
in a Global Knowledge-Based Economy
December 2002
Web 3/4
Scott Gordon
The history and philosophy of social science
Chapter 18: The foundations of science
Routledge, pp. 589-668
Introduction
A. THE PHILOSOPHY OF SCIENCE
1. The rise and fall of positivism
(1) Observations are concept-laden
(2) Observations are hypothesis-laden
(3) Observations are value-laden
(4) Observations are interest-laden
(5) Observations are laden with culture-specific ontologies
2. Current epistemological theories
(a) Predictive instrumentalism
(b) Conventionalism
(c) Rhetorical analysis
(d) Phenomenology
(e) Evolutionary epistemology
(f) Kuhn’s paradigm model
(g) Lakatos’s methodology of scientific research
programmes
(h) The ‘strong programme’ in the sociology of
science
3. Cognitive instrumentalism
(a) Science, intelligibility, and public
knowledge
(b) Theories, facts, and empirical adequacy
(c) The problem orientation of science
B. THE STUDY OF SOCIAL PHENOMENA
1. Social science and natural science
2. Mentation, individualism, and holism
3. The problem of objectivity
We often speak of ‘scientific knowledge’ in ways
that imply that it is different from other kinds of knowledge.
This is a useful and justifiable locution, but it can also be
misleading. Science is best viewed,
not as a body of knowledge, but as an activity - the search for truth, not the
possession of it. If apodictic truth
were discovered, science would come to an end.
Cognitive instrumentalism takes the view that the task of the
philosopher of science is to examine the nature of this search activity with
the object of explaining its capacity to yield reliable (but not certain)
knowledge of the world.
(a) Science, intelligibility, and public
knowledge
We have two basic tools at our command in
investigating the world: logic, and factual data.
A theory concerning a real-world
phenomenon is particularized logic. Instead
of saying, for example, ‘If all A is B, and if X is an A, then X is B,’ we
replace these letters, which stand for anything and everything, by particular
terms that refer to the phenomena of current interest such as: ‘If all swans
are white, and if the large birds on the river are swans, then those birds are
white.’
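The inference pattern just instantiated is ordinary first-order logic; written symbolically (with S for ‘is a swan’, W for ‘is white’, and b for the birds on the river):

```latex
\forall x\,\bigl(S(x) \rightarrow W(x)\bigr),\quad S(b)\ \vdash\ W(b)
```

The scientific content lies entirely in the substitution of particular empirical terms for the schematic letters; the logical form itself is topic-neutral.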
One of the legacies of positivism that has been
especially difficult to shake off is that we have knowledge of the world when
we have constructed a literal picture-model of it.
This has been especially tenacious
because physics, the archetypical science, seems to construct such models.
Upon reflection, however, this is
clearly not the case. Even Bohr’s
model of the atom, or the Newtonian model of the planetary system, depicts
only certain aspects of the phenomena it addresses; and modern particle
physics can hardly be described in terms of picture-models at all.
When we say that Bohr’s model enables
one to ‘see’ how the atom is structured, or to ‘grasp’ its structure, we are
speaking metaphorically. What we mean
is that the model renders this aspect of the real world rationally
intelligible. There are many
non-picture models in science, such as, for example, Darwin’s theory of
organic species, the economist’s market model of price determination, and the
political scientist’s checks-and-balances model of constitutional
organization. These too enable us to
‘see’ or ‘grasp’ certain aspects of reality in the sense of rational
intelligibility. Science is an
activity that uses logic and factual data to understand the way of the world
in rational terms. The understanding
so obtained is ‘public knowledge’ because it can be communicated to others
with minimal ambiguity, and shared by an indefinite number of people without
any depreciation of cognitive value.
The positivistic picture-model of knowledge led
philosophers of science to neglect the fact that scientists not only try to
depict the world but they actively engage in manipulating it.
Ian Hacking points this out in
noting the role that experimentation has played in the search for knowledge
since the ‘scientific revolution’ of the seventeenth century: science consists
of both ‘representing’ the world and ‘intervening’ in its processes
(Representing and Intervening, 1983). But,
long before the rise of experimental science, men were manipulating the world
in their everyday practical activities of agriculture, metallurgy, cooking,
etc., which did not simply accept the world as it was, but modified it for
utilitarian purposes. Science did not
begin in the seventeenth century. Its
roots lie with those of our far-off ancestors who viewed man’s capacity to
manipulate the world as intelligible in logical and empirical terms, and tried
to communicate their understanding to others.
The seventeenth century was
revolutionary, not in creating something
entirely new, but in greatly extending the domain of rational intelligibility.
‘Science,’ says Gellner, is ‘a type of
cognition which has radically, qualitatively transformed man’s relation to
things: nature has ceased to be a datum and become eligible for genuine
comprehension and manipulation’ (Relativism and the Social Sciences, p.
120).
Practical arts such as agriculture and
metallurgy can be pursued simply on the basis of observed sequences of
phenomena, relying upon the experienced but uncomprehended stability of
nature. Such ‘recipe’ procedures work,
but they are not science. Failure to
appreciate this point is the basic error of ‘predictive instrumentalism’ as a
theory of scientific epistemology. Science
undertakes to explain why such procedures work by explicating the causal
connections between phenomena. What is
communicated to others and is added to the accumulating corpus of public
knowledge as ‘science’ are not the recipes for practical action, however
successful they may be, but the rational understanding of nature that renders
the why of their working intelligible.
‘Cognitive instrumentalism’ is an epistemological theory that views
the logical constructs called ‘theories’ and the sense data called ‘facts’ as
instruments that are used in the process of cognition.
We cannot obtain immediate and
irrefragable knowledge of the way of the world, but we can make it
intelligible by the use of such tools. The
concepts that science uses are much more complex than the artless ones
employed in simple propositions about the whiteness of swans.
As one philosopher puts it:
the concepts of science are the working tools of
scientific thought. They are the ways
in which the scientist has learned to understand complex phenomena, to
realize their relations to each other, and to represent these in communicable
form. Among the most wonderful of
those things we consider inventions of science are the concepts of science.
They are, in effect, the sophisticated
instrumentation, the high technology of scientific thought and discourse.
(Marx W. Wartofsky, Conceptual Foundations of Scientific Thought, 1968,
pp. 4 f.)
Empirical data too are far removed from the
brute facts that our unaided senses can supply.
The biologist grinds up some organic
material, whirls it about in a centrifuge, and then places it in a
spectrophotometer, which delivers electrical signals to a computer that prints
out a graph of the light absorbance of the specimen at different wavelengths.
Then he records as ‘data’ that the
material he started with contains a certain type of chlorophyll.
Like theories, such data should also
be regarded as instruments employed in a cognitive enterprise.
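The chain of instruments in this example can be caricatured in a few lines of code. This is only a toy sketch: the absorption maxima used (roughly 430 nm and 662 nm for chlorophyll a) and the naive peak-finding rule are illustrative assumptions, not the biologist’s actual procedure.

```python
# Toy illustration: raw absorbance readings become a 'fact'
# ('the specimen contains chlorophyll a') only after several
# layers of instrumental and conceptual processing.

def find_peaks(spectrum):
    """Return wavelengths whose absorbance exceeds both neighbours."""
    wls = sorted(spectrum)
    return [w for i, w in enumerate(wls[1:-1], 1)
            if spectrum[w] > spectrum[wls[i - 1]]
            and spectrum[w] > spectrum[wls[i + 1]]]

def classify(peaks, tolerance=10):
    """Label the specimen if peaks sit near chlorophyll a's
    (assumed) absorption maxima at ~430 nm and ~662 nm."""
    targets = (430, 662)
    if all(any(abs(p - t) <= tolerance for p in peaks) for t in targets):
        return "chlorophyll a"
    return "unidentified"

# Simulated spectrophotometer output: wavelength (nm) -> absorbance
readings = {400: 0.30, 430: 0.95, 460: 0.25, 550: 0.05,
            620: 0.20, 662: 0.80, 700: 0.10}
print(find_peaks(readings), classify(find_peaks(readings)))
```

The point of the caricature is epistemological: the ‘datum’ that reaches the scientific record is already theory-shaped, constructed by instruments and classificatory concepts.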
(b) Theories, facts, and empirical adequacy
Karl Popper forcefully argued that theories are
‘conjectures’ about the world, which can be accepted as having scientific
status only if they are so framed that it is possible for empirical data to
falsify them. Popper’s objective was
to
establish a criterion that enables one to
distinguish between ‘scientific’ and ‘non-scientific’ propositions.
Anyone may make conjectures about the
world; the scientist is obligated to make falsifiable ones.
Popper’s falsification criterion has
not proved to be sustainable; nor has the positivist criterion that theories
should be capable of empirical verification. None
the less it seems reasonable to demand that theories should, somehow, be
submitted to empirical test. Ernest
Nagel phrases the matter very broadly:
It is the desire for explanations which are at
once systematic and controllable by factual evidence that generates science…
[It is] the deliberate policy of science to expose its cognitive claims to the
repeated challenge of critically probative observational data... (The
Structure of Science, 1961, pp. 4, 12)
Cognitive instrumentalism takes the view that
this places the wrong construction on the relation between theories and
observation facts. It treats
scientific inquiry as if there is a court of nature, so to speak, where the
theorist advocate pleads a case and the empiricist jury renders a verdict,
with the philosopher of science acting as a presiding judge who sees to it
that proper rules of scientific procedure are obeyed.
The instrumentalist view of science is
quite different. It is more like a
workshop, where theories and factual data are used as complementary tools
employed in a co-operative process of cognition.
Or, to modify one of Alfred Marshall’s
metaphors, theories and facts are like the two jaws of a pair of pliers which
‘grasp’ some part of reality between them. A
theory, by itself, can do no cognitive work. But
neither can data alone. Contrary to
Nagel’s notion, facts do not control theories any more than theories control
facts. They work together.
The empirical quality of a theory,
therefore, is a matter not of the testability of one by the other, but of
their functional articulation as instruments of inquiry aimed at a specific
scientific problem. Scientific
progress takes place when new theories are developed that articulate with a
wider range of known facts, and when new facts are obtained that articulate
well with existing theories.
In the philosophic literature a contrast is
sometimes drawn between ‘instrumentalism’ and ‘realism’ on the ground that
realists construe theories as representations of reality, while
instrumentalists do not. This is
overdrawn. Instrumentalists can accept
a picture-model of an aspect of reality, such as Bohr’s model of the atom, as
an effective instrument of investigation if it proves to have cognitive value
in practice. That such models are
representational is beside the point. In
fact, only a few branches of science employ models that have literal
representational qualities, and even in these domains the models are often
highly unrealistic. Edmond Halley used
a planetary model consisting of only two bodies; yet he was able to calculate
the date of return of the comet that bears his name with impressive accuracy.
The ‘ideal gas laws’ describe a model
that applies only to non-existent gases whose molecules have no volume; the
theories of levers and pendulums apply to no real levers or pendulums; yet no
‘realist’ rejects these models as unscientific, or even as wrong.
Moreover,
there are some branches of science where
alternative incompatible theoretical conceptions of the same subject matter
are employed. Physicists sometimes
treat light as a wave and sometimes as a stream of particles; chemists
sometimes regard a liquid as composed of discrete particles and sometimes as a
continuous medium; economists sometimes apply the ‘cartel model’ to an
organization of producers attempting to exercise market control and sometimes
the ‘price-leadership model’, or the ‘basing-point model’.
Such diverse conceptions cannot all be
‘true’, but each is usable as a cognitive device in appropriate circumstances.
A well-developed science has a rich
repertoire of such devices, which gives it versatility and scope.
Instrumentalism is sometimes rejected by realist
philosophers as an epistemology that has no concern for truth.
This is incorrect; what divides these
two philosophies of science are different views about the relation between
theories and facts in inquiries that are aimed at finding out what is true.
In the instrumentalist view, it is not
disembodied ‘science’ that explains phenomena; human scientists do, using the
cognitive instruments of logical theory construction and observation.
One could say, for example, that the
Mendelian laws of genetics explain the incidence of sickle-cell anaemia, but
the instrumentalist would insist that this should be interpreted as meaning
that biologists use the Mendelian laws to explain the phenomenon.
Scientific explanation is a human
activity.
The main difficulty with the notion that
theories must be empirically ‘true’ is that it leaves no middle ground between
‘true’ and ‘false’; they are treated as logical contradictories, ‘false’ being
construed as ‘not true’ and vice versa. May
Brodbeck asserts that ‘knowledge is the body of true belief; we cannot know
that which is false’ (Readings in the Philosophy of the Social Sciences,
1968, p. 81). If this were so we
would have very little knowledge at all, since most current beliefs, including
scientific ones, are in the absolute sense false, and we do not know which
ones will be discarded tomorrow and which may last for a century.
The philosophy of science must provide
room for beliefs that do not meet such a hard truth criterion.
Bas C. van Fraassen, as part of an
extended defence of instrumentalism (The Scientific Image, 1980),
advances the weaker criterion of ‘empirical adequacy’.
Whereas the realist insists that a
theory must be a literally true description of the subject domain in all its
details, and the concepts of a theory must refer to entities that actually
exist, van Fraassen’s ‘constructive empiricism’ demands only that a theory
should be adequate to deal with the specific problem that the scientist
undertakes to solve. We are not called
upon to believe that a scientific theory is empirically true; only that it is
empirically adequate. In deciding
between competing theories, the operative criterion is not their relative
degrees of truth-likeness, but their comparative usefulness as instruments of
investigation. Scientific explanation
is essentially an exercise in pragmatics. Some
problems cannot be successfully tackled because we lack a theory that is
adequate to the task, or because adequate factual data are not available, but
there are many others that can be investigated
with, admittedly imperfect, theories and data.
As Peter Medawar puts it, ‘science is
the art of the soluble’ (Pluto’s Republic, 1982).
As science progresses, more and more
of the world can be rendered intelligible, but completeness and perfection
remain beyond reach.
Except for the empiricist approaches to
epistemology such as those of Thomas Kuhn, Imre Lakatos, and the Edinburgh
school, and Paul Feyerabend’s philosophical anarchism, which declares that
‘anything goes’, the philosophy of science undertakes to prescribe normative
rules for the conduct of science. The
early positivists demanded that all theoretical concepts should refer to
observable entities. Karl Popper
demanded that theoretical propositions should be empirically falsifiable, at
least in principle. Carl Hempel
demanded that all explanations of specific phenomena should show them to be
instances of empirically true general ‘covering laws’.
Does cognitive instrumentalism, which
regards theories and observation facts as instruments of inquiry, advance
prescriptive rules? It seems to me
that, implicitly, it makes two kinds of demands.
First, theories should be coherent,
and not offend against any of the standard rules of formal logic.
The fallacy of ignoratio elenchi
is perhaps one that the instrumentalist would be especially anxious to
warn against. This is the fallacy of
contending that one has answered one question when one has in fact answered a
different one. Secondly, empirical
data should be derived and used according to principles of sound practice.
These range all the way from the rule
that data should not be manufactured to serve the scientist’s personal
interests or beliefs to the insistence that data should be processed according
to the best available techniques of mathematical statistics.
These are the criteria that scientists
themselves employ in reviewing one another’s work.
As a prescriber of good scientific
conduct the philosopher is unlikely to be able to go further.
(c) The problem orientation of science
When speaking in general terms one might say
that science consists of the investigation of the way of the world.
But no science takes ‘the world’ as
its province. Even the most
comprehensive of them focus upon much more restricted and specific domains.
A particular science can be defined in
terms of the problems that it addresses but, except in broad terms, it is not
possible to give a perdurable and timeless statement of them because the
interests of scientists change and the boundaries between the sciences shift.
Defining a science therefore consists of stating its problems at a
particular time. The solutions to
these problems do not constitute eternal truths; they are explanations that
scientists, for the time, consider to be serviceable in making some
limited aspect of reality intelligible. Even
with respect to a given problem, a theory that is subsequently discarded as
untrue may, in its time, render such service.
The Ptolemaic model of the planetary motions is now regarded as false,
but before the Copernican model was developed by Kepler and Newton it
provided a rational account of the universe that
articulated with empirical observations. If
theories and facts are regarded as cognitive instruments, it is easy to
understand why the Ptolemaic model solved a certain scientific problem and why
Newtonian mechanics solved it better. It
would not be rational, today, to view the heavens in Ptolemaic terms, but it
was the most rational way of doing so a few centuries ago.
When a science undertakes to address a new
problem, the theoretical and empirical instruments appropriate to the task may
be different from those applied to the older problem.
In Chapter 17 above we saw that the
development of neoclassical microeconomics not only replaced the classical
theory of value with a better one but shifted the focus of attention from the
problem of economic development to the static efficiency of resource
allocation. Keynesian theory
undertook to replace classical (and neoclassical) monetary theory, but it also
involved a shift of focus, from both economic development and allocative
efficiency to the problem of the general underutilization of a society’s
productive resources. In some respects
classical, neoclassical, and Keynesian theories offered alternative
explanations; in other respects they were complementary, addressing different
problems. If one insists that an
economic theory must be a true representation of ‘the economy’ we must choose
between them. But if theories are
regarded as instruments for tackling particular problems, all of them can be
comfortably included in the economist’s repertoire.
On the plane of pure science the choice between
competing theories rests upon their instrumental usefulness in providing
rational explanations of phenomena. On
the applied plane an additional criterion must be adduced: the concepts of a
theory must be translatable into terms that permit one to modify the world.
In choosing between two theories, one
may be superior in its explanatory capacities, but the other may offer better
opportunities for application. In
economics, for example, the theory of general equilibrium is superior to all
others as a rational explanation of how a market economy functions, but it is
of very little use in tackling practical problems.
This tension between the pure and the
applied may be present in all sciences, but it is especially important in the
social sciences. The difference
between predictive instrumentalism and cognitive instrumentalism as social
science epistemologies is that the former says that we need only be able to
predict events, while the latter says that we need to understand their causes
or, rather, we need to understand them sufficiently to act rationally, and in
terms of concepts that enable one to engage in such action.
The reason why Keynesian theory made
such a dramatic impact upon the economists of the 1930s is that it explained
the phenomenon of mass unemployment in terms that supported the desire to
combat it by means of practical public policy devices.
One often encounters comments expressing a
general appraisal of the comparative worth of the various sciences; for
example that physics is the premier science or that economics is superior to
sociology, or that all the social
sciences are inferior to all the natural
sciences. Such appraisals are based on
a failure to recognize the problem orientation of science and the epistemic
implications of this with regard to comparative evaluation.
Physicists and chemists are very good
at addressing problems that belong to their professional domains.
Their record in analysing social
problems is negligible (see, for example, the various writings of Frederick
Soddy, Nobel Prize-winning chemist, on monetary theory and other economic
issues). An implication of
instrumentalist epistemology is that scientific procedures can be
comparatively appraised only with reference to the same, or similar, problems.
To compare the effectiveness of
physics in respect to physical problems with the effectiveness of economics in
respect to economic problems is to commit an ignoratio elenchi of a
gross sort. It follows also that there
is no warrant for believing that the social sciences could necessarily be
improved by adopting specific models and concepts that have been successful in
the natural sciences. Numerous
attempts have been made to model social phenomena as analogous to Newtonian
celestial mechanics or evolutionary biology, or to apply concepts such as
entropy or metabolism, but these have been more noteworthy as displays of
scholastic ingenuity than as contributions to our understanding of social
processes. Cognitive instrumentalism
requires that theoretical models should be applicable to the problem one
wishes to solve. That a model or
concept is useful in one domain provides no assurance that it will have
cognitive value in another.
So much for comparisons of disciplines that are
called ‘sciences’. What about the more
general distinction between ‘scientific’ and ‘non-scientific’ modes of
cognition? Demarcating them one from
another was a main objective of the positivists.
Some philosophers regard the establishment of a
criterion that distinguishes scientific propositions from non-scientific ones
as a matter of the highest importance. For
Karl Popper a satisfactory criterion of demarcation is essential to protect
the edifice of modern Western thought from the attacks of relativists and
sceptics who question the possibility of objective knowledge, and refuse to
grant science a cognitive status different from that of religious revelation,
political ideology, or personal intuition. Israel
Scheffler speaks of ‘the moral import of science’ as springing from its
insistence on ‘responsible belief’, that is, beliefs justified by logic
and evidence, in contradistinction to beliefs that are not, in this sense,
‘responsibly’ held (Science and Subjectivity, 1982, passim).
Ernest Gellner says that
‘epistemological principles are basically normative and ethical: they are
prescriptions for the conduct of cognitive life’ (Relativism and the
Social Sciences, 1985, p. 34), the kind of life he regards as
morally worthy. If such grave issues
hinged on the establishment of a demarcation criterion, Western civilization
would be in a parlous state, since no defensible criterion has so far been
defined. Nevertheless, even without it
science has been a powerful force in our cognitive life and scientists have
effectively challenged those who claim to have come, by non-scientific means,
into possession of knowledge about the world.
Even ‘creationists’ now feel obliged to show that their rejection of
Darwinian theory in favour of the Book of Genesis is founded upon ‘scientific’
considerations.
In viewing this matter, social scientists have
more reason for concern than natural scientists.
With the prominent exception of the
theory of organic evolution, few of the propositions of natural science are
now attacked on theological or ideological grounds.
The day is long past when Galileo had
to submit to the superior authority of the Church on matters of nature.
There is, however, a continuous open
season on the propositions of the social sciences, which, for various reasons,
cannot as readily be defended as having ‘scientific’ status.
Moreover, social scientists often have
to ward off attacks from natural scientists, sometimes as naive and prejudiced
as ones derived from strong political ideologies and other idealist fancies.
More often than not, the natural
scientist who becomes interested in a social question will rush into print
without consulting the literature on it and, moreover, without bringing to the
subject the same constraints of logic and empiricism that he regards as
obligatory in his own domain of expertise (see, for example, Gary Werskey’s
history of the ‘science and society movement’ in England during the 1930s,
The Visible College, 1978).
As a philosophy of science, cognitive
instrumentalism cannot supply the cleanly defined criterion of demarcation
between science and non-science that some regard as essential.
But in certain respects it can do
better than other philosophies. Positivism
and its successors tried to establish the notion that a scientific proposition
is one that can be tested empirically. A
non-scientific proposition is not testable. According
to this criterion, if a fourteenth-century dervish had declared in a trance
that the sun is stationary and the earth is a revolving sphere, it would be a
scientific proposition because it could be tested empirically.
Yet something seems amiss here.
It cannot be that the theory was
advanced by a dervish, for, according to positivist canons, the scientific
quality of a proposition depends on what the proposition states, not its
source. Cognitive instrumentalism
agrees that the source is irrelevant, but it rejects the view that a
proposition can be scientific or non-scientific in itself.
If scientific concepts and theories
are construed as tools of cognition, then the central issue is whether the
dervish’s statement was usable in this fashion.
In the fourteenth century the notion
that the sun is stationary and the earth revolves was incapable of employment
as a cognitive instrument. If a
carpenter, living in a
remote place without electricity, comes into possession of a power saw, it
would not be, for him, a tool of carpentry. So
also with the tools of scientific inquiry.
This epistemological view explains why some
notions which are worthless speculations in one era achieve scientific status
in a later one. When Democritus (fifth
century BC) asserted that solid matter really consists of very small
particles in motion, it was not a scientific proposition.
It could not, then, be ‘responsibly’
held, as Scheffler would say. Today’s
physicists universally accept it. Cognitive
instrumentalism is a kind of relativism, to be sure, but not the sort that
Popper and others decry as denying the possibility of objective knowledge.
A concept or a theory is objectively
tested by its heuristic capacity, the assistance it renders to the work of
scientific inquiry in a particular field and in the context of the existing
state of knowledge. The innovator
in science is ‘ahead
of his time’, but if he is very much ahead his ideas are worthless.
In anticipation of an issue that will engage our
attention in the next section we might note at this point that, from the
instrumentalist standpoint, concepts referring to human mental entities such
as motives, preferences, and beliefs are not inherently non-scientific.
That they are properties of
consciousness rather than material things does not mean that they lack
explanatory capacity. On the contrary,
in dealing with social phenomena, which result from the behaviour of
individual persons, they can be, and have been, effectively employed by the
social sciences.
Where do we stand, then, on the issue of
demarcation? Assuredly there is a
difference between astronomy and astrology, between Darwinian theory and
creationism, between macroeconomic theory and the notion that changes in the
pace of economic activity reflect the operation of transcendental ‘cycles’ or
supposed ‘natural rhythms’. But what
is the difference
between scientific and non-scientific modes of thought?
To come to grips with this, let us
examine a specific case that philosophers and scientists (Feyerabend excepted)
would assign to the non-scientific category, a case of witchcraft and ‘demonic
possession’. (The following
illustration is taken, with some changes, from my Social Science and Modern
Man, 1970, pp. 7 f.)
In his The Devils of Loudun
(1952) Aldous Huxley
gives an account of the trial of one Urbain Grandier, who was burnt at the
stake for witchcraft in the early seventeenth century.
The events that led to this execution took
place in the small French town of Loudun, where a convent of nuns exhibited
signs of demonic possession: in their tormented writhings they performed feats of
extraordinary strength and endurance, and so on.
In modern courts of law less empirical
evidence would be regarded as sufficient to show that a crime had been
committed. But no Western court would
consider a charge of witchcraft, no matter how much empirical evidence of
demonic possession was adduced. Nor
would any philosopher or scientist give such a proposition even hypothetical
status. We simply do not believe
in demons or witches.
With this illustration before us we can see that any epistemic demarcation between science and non-science is extremely difficult, perhaps indeed impossible. The Church authorities at Loudun had a well-formed theory to work with; they insisted on logical argument; they demanded empirical evidence. Neither positivist nor instrumentalist epistemology can produce a criterion of demarcation that will permit one to consign the notion of witchcraft to ‘non-science’. The difference between the modern scientist and the seventeenth-century theologian is essentially metaphysical; it is based upon different ontological conceptions of reality. The ‘revolution’ in science that was under way at the time of Grandier’s trial generated new views concerning the methodology of scientific investigation, but its more significant impact upon the culture of the West, which even the great Newton failed to appreciate, was in creating a metaphysical outlook that rejects preternatural forces. Demons have been cast out of our world, not by burning witches and performing rituals of ‘exorcism’, but by the success of science as a cognitive and pragmatic enterprise. The metaphysical presumption of science may be wrong but, so far, the burden of secure evidence indicates that it is rational to believe otherwise.