Cambridge change

A non-genuine change. If I turn pale, I am changing, whereas your turning pale is only a Cambridge change in me. When I acquire the property of being such that you are pale, I do not change. In general, an object’s acquiring a new property is not a sufficient condition for that object to change (although some other object may genuinely change). Thus also, my being such that you are pale counts only as a Cambridge property of me, a property such that my gaining or losing it is only a Cambridge change. Cambridge properties are a proper subclass of extrinsic properties: being south of Chicago is considered an extrinsic property of me, but since my moving to Canada would be a genuine change, being south of Chicago cannot, for me, be a Cambridge property.


The concept of a Cambridge change reflects a way of thinking entrenched in common sense, but it is difficult to clarify, and its philosophical value is controversial. Neither science nor formal semantics, for example, supports this way of thinking. Perhaps Cambridge changes and properties are, for better or worse, inseparable from a vague, intuitive metaphysics.

Cambridge Platonists

A group of seventeenth-century philosopher-theologians at the University of Cambridge, principally including Benjamin Whichcote (1609-83), often designated the father of the Cambridge Platonists; Henry More; Ralph Cudworth (1617-88); and John Smith (1616-52). Whichcote, Cudworth, and Smith received their university education in or were at some time fellows of Emmanuel College, a stronghold of the Calvinism in which they were nurtured and against which they rebelled under mainly Erasmian, Arminian, and Neoplatonic influences. Other Cambridge men who shared their ideas and attitudes to varying degrees were Nathanael Culverwel (1618?-51), Peter Sterry (1613-72), George Rust (d.1670), John Worthington (1618-71), and Simon Patrick (1625-1707).


As a generic label, ‘Cambridge Platonists’ is a handy umbrella term rather than a dependable signal of doctrinal unity or affiliation. The Cambridge Platonists were not a self-constituted group articled to an explicit manifesto; no two of them shared quite the same set of doctrines or values. Their Platonism was not exclusively the pristine teaching of Plato, but was formed rather from Platonic ideas supposedly prefigured in Hermes Trismegistus, in the Chaldean Oracles, and in Pythagoras, and which they found in Origen and other church fathers, in the Neoplatonism of Plotinus and Proclus, and in the Florentine Neoplatonism of Ficino. They took contrasting and changing positions on the important belief (originating in Florence with Giovanni Pico della Mirandola) that Pythagoras and Plato derived their wisdom ultimately from Moses and the cabala. They were not equally committed to philosophical pursuits, nor were they equally versed in the new philosophies and scientific advances of the time.


The Cambridge Platonists’ concerns were ultimately religious and theological rather than primarily philosophical. They philosophized as theologians, making eclectic use of philosophical doctrines (whether Platonic or not) for apologetic purposes. They wanted to defend “true religion,” namely, their latitudinarian vision of Anglican Christianity, against a variety of enemies: the Calvinist doctrine of predestination; sectarianism; religious enthusiasm; fanaticism; the “hide-bound, strait-laced spirit” of Interregnum Puritanism; the “narrow, persecuting spirit” that followed the Restoration; atheism; and the impieties incipient in certain trends in contemporary science and philosophy. Notable among the latter were the doctrines of the mechanical philosophers, especially the materialism and mechanical determinism of Hobbes and the mechanistic pretensions of the Cartesians.


The existence of God, the existence, immortality, and dignity of the human soul, the existence of spirit activating the natural world, human free will, and the primacy of reason are among the principal teachings of the Cambridge Platonists. They emphasized the positive role of reason in all aspects of philosophy, religion, and ethics, insisting in particular that it is irrationality that endangers the Christian life. Human reason and understanding was “the Candle of the Lord” (Whichcote’s phrase), perhaps their most cherished image. In Whichcote’s words, “To go against Reason, is to go against God … Reason is the Divine Governor of Man’s Life; it is the very Voice of God.” Accordingly, “there is no real clashing at all betwixt any genuine point of Christianity and what true Philosophy and right Reason does determine or allow” (More). Reason directs us to the self-evidence of first principles, which “must be seen in their own light, and are perceived by an inward power of nature.” Yet in keeping with the Plotinian mystical tenor of their thought, they found within the human soul the “Divine Sagacity” (More’s term), which is the prime cause of human reason and therefore superior to it. Denying the Calvinist doctrine that revelation is the only source of spiritual light, they taught that the “natural light” enables us to know God and interpret the Scriptures.


Cambridge Platonism was uncompromisingly innatist. Human reason has inherited immutable intellectual, moral, and religious notions, “anticipations of the soul,” which negate the claims of empiricism. The Cambridge Platonists were skeptical with regard to certain kinds of knowledge, and recognized the role of skepticism as a critical instrument in epistemology. But they were dismissive of the suggestion that Pyrrhonism should be taken seriously in the practical affairs of the philosopher at work, and especially of the Christian soul in its quest for divine knowledge and understanding. Truth is not compromised by our inability to devise apodictic demonstrations. Indeed, Whichcote passed moral censure on those who pretend “the doubtfulness and uncertainty of reason.”


Innatism and the natural light of reason shaped the Cambridge Platonists’ moral philosophy. The unchangeable and eternal ideas of good and evil in the divine mind are the exemplars of ethical axioms or noemata that enable the human mind to make moral judgments. More argued for a “boniform faculty,” a faculty higher than reason by which the soul rejoices in reason’s judgment of the good.


The most philosophically committed and systematic of the group were More, Cudworth, and Culverwel. Smith, perhaps the most intellectually gifted and certainly the most promising (note his dates), defended Whichcote’s Christian teaching, insisting that theology is more “a Divine Life than a Divine Science.” More exclusively theological in their leanings were Whichcote, who wrote little of solid philosophical interest, Rust, who followed Cudworth’s moral philosophy, and Sterry. Only Patrick, More, and Cudworth (all fellows of the Royal Society) were sufficiently attracted to the new science (especially the work of Descartes) to discuss it in any detail or to turn it to philosophical and theological advantage. Though often described as a Platonist, Culverwel was really a neo-Aristotelian with Platonic embellishments and, like Sterry, a Calvinist. He denied innate ideas and supported the tabula rasa doctrine, commending “the Platonists … that they lookt upon the spirit of a man as the Candle of the Lord, though they were deceived in the time when ’twas lighted.”


The Cambridge Platonists were influential as latitudinarians, as advocates of rational theology, as severe critics of unbridled mechanism and materialism, and as the initiators, in England, of the intuitionist ethical tradition. In the England of Locke they are a striking instance of innatism and non-empirical philosophy.

camera obscura

A darkened enclosure that focuses light from an external object by a pinpoint hole instead of a lens, creating an inverted, reversed image on the opposite wall. The adoption of the camera obscura as a model for the eye revolutionized the study of visual perception by rendering obsolete previous speculative philosophical theories, in particular the emanation theory, which explained perception as due to emanated copy-images of objects entering the eye, and theories that located the image of perception in the lens rather than the retina. By shifting the location of sensation to a projection on the retina, the camera obscura doctrine helped support the distinction of primary and secondary sense qualities, undermining the medieval realist view of perception and moving toward the idea that consciousness is radically split off from the world.

Campanella, Tommaso


(1568 - 1639)

Italian theologian, philosopher, and poet. He joined the Dominican order in 1582. Most of the years between 1592 and 1634 he spent in prison for heresy and for conspiring to replace Spanish rule in southern Italy with a utopian republic. He fled to France in 1634 and spent his last years in freedom. Some of his best poetry was written while he was chained in a dungeon; and during less rigorous confinement he managed to write over a hundred books, not all of which survive. His best-known work, The City of the Sun (1602; published 1623), describes a community governed in accordance with astrological principles, with a priest as head of state. In later political writings, Campanella attacked Machiavelli and called for either a universal Spanish monarchy with the pope as spiritual head or a universal theocracy with the pope as both spiritual and temporal leader. His first publication was Philosophy Demonstrated by the Senses (1591), which supported the theories of Telesio and initiated his lifelong attack on Aristotelianism. He hoped to found a new Christian philosophy based on the two books of nature and Scripture, both of which are manifestations of God. While he appealed to sense experience, he was not a straightforward empiricist, for he saw the natural world as alive and sentient, and he thought of magic as a tool for utilizing natural processes. In this he was strongly influenced by Ficino. Despite his own difficulties with Rome, he wrote in support of Galileo.

Campbell, Norman Robert


(1880 - 1949)

British physicist and philosopher of science. A successful experimental physicist, Campbell (with A. Wood) discovered the radioactivity of potassium. His analysis of science depended on a sharp distinction between experimental laws and theories. Experimental laws are generalizations established by observations. A theory has the following structure. First, it requires a (largely arbitrary) hypothesis, which in itself is untestable. To render it testable, the theory requires a “dictionary” of propositions linking the hypothesis to scientific laws, which can be established experimentally. But theories are not merely logical relations between hypotheses and experimental laws; they also require concrete analogies or models. Indeed, the models suggest the nature of the propositions in the dictionary. The analogies are essential components of the theory, and, for Campbell, are nearly always mechanical. His theory of science greatly influenced Nagel’s The Structure of Science (1961).

Camus, Albert


(1913 - 1960)

French philosophical novelist and essayist who was also a prose poet and the conscience of his times. He was born and raised in Algeria, and his experiences as a fatherless, tubercular youth, as a young playwright and journalist in Algiers, and later in the anti-German resistance in Paris during World War II informed everything he wrote. His best-known writings are not overtly political; his most famous works, the novel The Stranger (written in 1940, published in 1942) and his book-length essay The Myth of Sisyphus (written in 1941, published in 1943) explore the notion of “the absurd,” which Camus alternatively describes as the human condition and as “a widespread sensitivity of our times.” The absurd, briefly defined, is the confrontation between ourselves—with our demands for rationality and justice—and an “indifferent universe.” Sisyphus, who was condemned by the gods to the endless, futile task of rolling a rock up a mountain (whence it would roll back down of its own weight), thus becomes an exemplar of the human condition, struggling hopelessly and pointlessly to achieve something. The odd antihero of The Stranger, on the other hand, unconsciously accepts the absurdity of life. He makes no judgments, accepts the most repulsive characters as his friends and neighbors, and remains unmoved by the death of his mother and his own killing of a man. Facing execution for his crime, he “opens his heart to the benign indifference of the universe.”


But such stoic acceptance is not the message of Camus’s philosophy. Sisyphus thrives (he is even “happy”) by virtue of his scorn and defiance of the gods, and by virtue of a “rebellion” that refuses to give in to despair. This same theme motivates Camus’s later novel, The Plague (1947), and his long essay The Rebel (1951). In his last work, however, a novel called The Fall published in 1956, the year before he won the Nobel prize for literature, Camus presents an unforgettably perverse character named Jean-Baptiste Clamence, who exemplifies all the bitterness and despair rejected by his previous characters and in his earlier essays. Clamence, like the character in The Stranger, refuses to judge people, but whereas Meursault (the “stranger”) is incapable of judgment, Clamence (who was once a lawyer) makes it a matter of philosophical principle, “for who among us is innocent?” It is unclear where Camus’s thinking was heading when he was killed in an automobile accident (with his publisher, Gallimard, who survived).

Canguilhem, Georges


(1904 - 1996)

French historian and philosopher of science. Canguilhem succeeded Gaston Bachelard as director of the Institut d’Histoire des Sciences et des Techniques at the University of Paris. He developed and sometimes revised Bachelard’s view of science, extending it to issues in the biological and medical sciences, where he focused particularly on the concepts of the normal and the pathological (The Normal and the Pathological, 1966). On his account norms are not objective in the sense of being derived from value-neutral scientific inquiry, but are rooted in the biological reality of the organisms that they regulate.


Canguilhem also introduced an important methodological distinction between concepts and theories. Rejecting the common view that scientific concepts are simply functions of the theories in which they are embedded, he argued that the use of concepts to interpret data is quite distinct from the use of theories to explain the data. Consequently, the same concepts may occur in very different theoretical contexts. Canguilhem made particularly effective use of this distinction in tracing the origin of the concept of reflex action.

Cantor, Georg


(1845 - 1918)

German mathematician, one of a number of late nineteenth-century mathematicians and philosophers (including Frege, Dedekind, Peano, Russell, and Hilbert) who transformed both mathematics and the study of its philosophical foundations. The philosophical import of Cantor’s work is threefold. First, it was primarily Cantor who turned arbitrary collections into objects of mathematical study, sets. Second, he created a coherent mathematical theory of the infinite, in particular a theory of transfinite numbers. Third, linking these, he was the first to indicate that it might be possible to present mathematics as nothing but the theory of sets, thus making set theory foundational for mathematics. This contributed to the view that the foundations of mathematics should itself become an object of mathematical study. Cantor also held to a form of principle of plenitude, the belief that all the infinities given in his theory of transfinite numbers are represented not just in mathematical (or “immanent” reality), but also in the “transient” reality of God’s created world.


Cantor’s main, direct achievement is his theory of transfinite numbers and infinity. He characterized (as did Frege) sameness of size in terms of one-to-one correspondence, thus accepting the paradoxical results known to Galileo and others, e.g., that the collection of all natural numbers has the same cardinality or size as that of all even numbers. He added to these surprising results by showing (1874) that there is the same number of algebraic (and thus rational) numbers as there are natural numbers, but that there are more points on a continuous line than there are natural (or rational or algebraic) numbers, thus revealing that there are at least two different kinds of infinity present in ordinary mathematics, and consequently demonstrating the need for a mathematical treatment of these infinities. This latter result is often expressed by saying that the continuum is uncountable. Cantor’s theorem of 1892 is a generalization of part of this, for it says that the set of all subsets (the power-set) of a given set must be cardinally greater than that set, thus giving rise to the possibility of indefinitely many different infinities. (The collection of all real numbers has the same size as the power-set of natural numbers.) Cantor’s theory of transfinite numbers (1880-97) was his developed mathematical theory of infinity, with the infinite cardinal numbers (the ℵ-, or aleph-, numbers) based on the infinite ordinal numbers that he introduced in 1880 and 1883. The ℵ-numbers are in effect the cardinalities of infinite well-ordered sets. The theory thus generates two famous questions, whether all sets (in particular the continuum) can be well ordered, and if so which of the ℵ-numbers represents the cardinality of the continuum. The former question was answered positively by Zermelo in 1904, though at the expense of postulating one of the most controversial principles in the history of mathematics, the axiom of choice.
The latter question is the celebrated continuum problem. Cantor’s famous continuum hypothesis (CH) is his conjecture that the cardinality of the continuum is represented by ℵ₁, the second aleph. CH was shown to be independent of the usual assumptions of set theory by Gödel (1938) and Cohen (1963). Extensions of Cohen’s methods show that it is consistent to assume that the cardinality of the continuum is given by almost any of the vast array of ℵ-numbers. The continuum problem is now widely considered insoluble.
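The diagonal construction behind Cantor’s theorem can be made concrete for a finite set, where the very same argument shows that no map from a set to its power set is onto. This is a minimal sketch, not Cantor’s own notation; the function names are illustrative:

```python
# Cantor's theorem: no map f from a set S to its power set P(S) is onto.
# The "diagonal" set D = {x in S : x not in f(x)} always escapes the image
# of f: if D = f(y) for some y, then y is in D iff y is not in f(y) = D,
# a contradiction.
def diagonal_witness(S, f):
    return frozenset(x for x in S if x not in f(x))

S = {0, 1, 2}
# an arbitrary attempt at mapping S into its power set
f = {0: frozenset({0}), 1: frozenset({1, 2}), 2: frozenset()}

D = diagonal_witness(S, f.__getitem__)
print(D)                          # frozenset({2})
assert all(D != f[x] for x in S)  # D is not a value of f, so f is not onto
```

Since the argument never uses finiteness, it applies to infinite sets as well, which is what opens up Cantor’s endless hierarchy of infinities.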


Cantor’s conception of set is often taken to admit the whole universe of sets as a set, thus engendering contradiction, in particular in the form of Cantor’s paradox. For Cantor’s theorem would say that the power-set of the universe must be bigger than it, while, since this power-set is a set of sets, it must be contained in the universal set, and thus can be no bigger. However, it follows from Cantor’s early (1883) considerations of what he called the “absolute infinite” that none of the collections discovered later to be at the base of the paradoxes can be proper sets. Moreover, correspondence with Hilbert in 1897 and Dedekind in 1899 (see Cantor, Gesammelte Abhandlungen mathematischen und philosophischen Inhalts, 1932) shows clearly that Cantor was well aware that contradictions will arise if such collections are treated as ordinary sets.

cardinal virtues

Prudence (practical wisdom), courage, temperance, and justice. Medievals deemed them cardinal (from Latin cardo, ‘hinge’) because of their important or pivotal role in human flourishing. In Plato’s Republic, Socrates explains them through a doctrine of the three parts of the soul, suggesting that a person is prudent when knowledge of how to live (wisdom) informs her reason, courageous when informed reason governs her capacity for wrath, temperate when it also governs her appetites, and just when each part performs its proper task with informed reason in control. Development of thought on the cardinal virtues was closely tied to the doctrine of the unity of the virtues, i.e., that a person possessing one virtue will have them all.

Carlyle, Thomas


(1795 - 1881)


Scottish-born essayist, historian, and social critic, one of the most popular writers and lecturers in nineteenth-century Britain. His works include literary criticism, history, and cultural criticism. With respect to philosophy, his views on the theory of history are his most significant contributions. According to Carlyle, great personages are the most important causal factor in history. On Heroes, Hero-Worship and the Heroic in History (1841) asserts, “Universal History, the history of what man has accomplished in this world, is at bottom the History of the Great Men who have worked here. They were the leaders of men, these great ones; the modellers, patterns, and in a wide sense creators, of whatsoever the general mass of men contrived to do or to attain; all things that we see standing accomplished in the world are properly the outer material result, the practical realisation and embodiment, of Thoughts that dwelt in the Great Men sent into the world: the soul of the whole world’s history, it may justly be considered, were the history of these.”


Carlyle’s doctrine has been challenged from many different directions. Hegelian and Marxist philosophers maintain that the so-called great men of history are not really the engine of history, but merely reflections of deeper forces, such as economic ones, while contemporary historians emphasize the priority of “history from below”—the social history of everyday people—as far more representative of the historical process.

Carnap, Rudolf


(1891 - 1970)

German-born American philosopher, one of the leaders of the Vienna Circle, a movement loosely called logical positivism or logical empiricism. He made fundamental contributions to semantics and the philosophy of science, as well as to the foundations of probability and inductive logic. He was a staunch advocate of, and active in, the unity of science movement.


Carnap received his Ph.D. in philosophy from the University of Jena in 1921. His first major work was Der logische Aufbau der Welt (1928), in which he sought to apply the new logic recently developed by Frege and by Russell and Whitehead to problems in the philosophy of science. Although influential, it was not translated until 1967, when it appeared as The Logical Structure of the World. It was important as one of the first clear and unambiguous statements that the important work of philosophy concerned logical structure: that language and its logic were to be the focus of attention. In 1935 Carnap left his native Germany for the United States, where he taught at the University of Chicago and then at UCLA.


Logische Syntax der Sprache (1934) was rapidly translated into English, appearing as The Logical Syntax of Language (1937). This was followed in 1941 by Introduction to Semantics, and in 1942 by The Formalization of Logic. In 1947 Meaning and Necessity appeared; it provided the groundwork for a modal logic that would mirror the meticulous semantic development of first-order logic in the first two volumes. One of the most important concepts introduced in these volumes was that of a state description. A state description is the linguistic counterpart of a possible world: in a given language, the most complete description of the world that can be given.


Carnap then turned to one of the most pervasive and important problems to arise in both the philosophy of science and the theory of meaning. To say that the meaning of a sentence is given by the conditions under which it would be verified (as the early positivists did) or that a scientific theory is verified by predictions that turn out to be true, is clearly to speak loosely. Absolute verification does not occur. To carry out the program of scientific philosophy in a realistic way, we must be able to speak of the support given by inconclusive evidence, either in providing epistemological justification for scientific knowledge, or in characterizing the meanings of many of the terms of our scientific language. This calls for an understanding of probability, or as Carnap preferred to call it, degree of confirmation. We must distinguish between two senses of probability: what he called probability₁, corresponding to credibility, and probability₂, corresponding to the frequency or empirical conception of probability defended by Reichenbach and von Mises. ‘Degree of confirmation’ was to be the formal concept corresponding to credibility.


The first book on this subject, written from the same point of view as the works on semantics, was The Logical Foundations of Probability (1950). The goal was a logical definition of ‘c(h,e)’: the degree of confirmation of a hypothesis h, relative to a body of evidence e, or the degree of rational belief that one whose total evidence was e should commit to h. Of course we must first settle on a formal language in which to express the hypothesis and the evidence; for this Carnap chooses a first-order language based on a finite number of one-place predicates, and a countable number of individual constants. Against this background, we perform the following reductions: ‘c(h,e)’ represents a conditional probability; thus it can be represented as the ratio of the absolute probability of h & e to the absolute probability of e. Absolute probabilities are represented by the value of a measure function m, defined for sentences of the language. The problem is to define m. But every sentence in Carnap’s languages is equivalent to a disjunction of state descriptions; the measure to be assigned to it must, according to the probability calculus, be the sum of the measures assigned to its constituent state descriptions. Now the problem is to define m for state descriptions. (Recall that state descriptions were part of the machinery Carnap developed earlier.) The function c† is a confirmation function based on the assignment of equal measures to each state description. It is inadequate, because if h is not entailed by e, c†(h,e) = m†(h), the a priori measure assigned to h. We cannot “learn from experience.” A measure that does not have that drawback is m*, which is based on the assignment of equal measures to each structure description. A structure description is a set of state descriptions; two state descriptions belong to the same structure description just in case one can be obtained from the other by a permutation of individual constants. 
Within the structure description, equal values are assigned to each state description.
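These measures can be computed directly for the smallest nontrivial case: a language with one predicate F and two individuals a and b. The following sketch (the names are illustrative, not Carnap’s notation) shows why c†, built on the uniform measure m†, cannot learn from experience, while c*, built on m*, can:

```python
from itertools import product
from fractions import Fraction

# Tiny monadic language: one predicate F, two individuals a, b.
# A state description assigns True/False ("F holds") to each individual.
individuals = ["a", "b"]
states = list(product([True, False], repeat=len(individuals)))  # 4 state descriptions

# m-dagger: equal measure on every state description.
m_dagger = {s: Fraction(1, len(states)) for s in states}

# m-star: equal measure on every structure description (class of state
# descriptions identical up to permuting individuals), split equally
# among the state descriptions inside each class.
def structure(s):
    return tuple(sorted(s))  # the multiset of truth-values identifies the class

classes = {}
for s in states:
    classes.setdefault(structure(s), []).append(s)
m_star = {s: Fraction(1, len(classes)) / len(members)
          for members in classes.values() for s in members}

def prob(m, event):  # absolute probability of a set of state descriptions
    return sum(m[s] for s in event)

def c(m, h, e):      # degree of confirmation: c(h, e) = m(h & e) / m(e)
    return prob(m, h & e) / prob(m, e)

Fa = {s for s in states if s[0]}  # state descriptions where F(a)
Fb = {s for s in states if s[1]}  # state descriptions where F(b)

print(c(m_dagger, Fb, Fa))  # 1/2 -- observing F(a) teaches us nothing about b
print(c(m_star, Fb, Fa))    # 2/3 -- observing F(a) raises confidence in F(b)
```

Because m* weights homogeneous worlds more heavily, the evidence F(a) raises the probability of F(b) above its a priori value of 1/2, which is precisely the “learning from experience” that c† lacks.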


In the next book, The Continuum of Inductive Methods, Carnap takes the rate at which we learn from experience to be a fundamental parameter of his assignments of probability. Like measures on state descriptions, the values of the probability of the singular predictive inference determine all other probabilities. The “singular predictive inference” is the inference from the observation that individual 1 has one set of properties, individual 2 has another set of properties, etc., to the conclusion: individual j will have property k.


Finally, in the last works (Studies in Inductive Logic and Probability, vols. I [1971] and II [1980], edited with Richard Jeffrey) Carnap offered two long articles constituting his Basic System of Inductive Logic. This system is built around a language having families of attributes (e.g., color or sound) that can be captured by predicates. The basic structure is still monadic, and the logic still lacks identity, but there are more parameters. There is a parameter λ that reflects the “rate of learning from experience”; a parameter η that reflects an inductive relation between values of attributes belonging to families. With the introduction of arbitrary parameters, Carnap was edging toward a subjective or personalistic view of probability. How far he was willing to go down the subjectivist garden path is open to question; that he discovered more to be relevant to inductive logic than the “language” of science seems clear.


Carnap’s work on probability measures on formal languages is destined to live for a long time. So too is his work on formal semantics. He was a staunch advocate of the fruitfulness of formal studies in philosophy, of being clear and explicit, and of offering concrete examples. Beyond the particular philosophical doctrines he advocated, these commitments characterize his contribution to philosophy.

Carroll, Lewis (pen name of Charles Lutwidge Dodgson)


(1832 - 1898)

English writer and mathematician. The eldest son of a large clerical family, he was educated at Rugby and Christ Church, Oxford, where he remained for the rest of his uneventful life, as mathematical lecturer (until 1881) and curator of the senior common room. His mathematical writings (under his own name) are more numerous than important. He was, however, the only Oxonian of his day to contribute to symbolic logic, and is remembered for his syllogistic diagrams, for his methods for constructing and solving elaborate sorites problems, for his early interest in logical paradoxes, and for the many amusing examples that continue to reappear in modern textbooks. Fame descended upon him almost by accident, as the author of Alice’s Adventures in Wonderland (1865), Through the Looking Glass (1872), The Hunting of the Snark (1876), and Sylvie and Bruno (1889-93); saving the last, these are the only children’s books to bring no blush of embarrassment to an adult reader’s cheek.


Dodgson took deacon’s orders in 1861, and though pastorally inactive, was in many ways an archetype of the prim Victorian clergyman. His religious opinions were carefully thought out, but not of great philosophic interest. The Oxford movement passed him by; he worried about sin (though rejecting the doctrine of eternal punishment), abhorred profanity, and fussed over Sunday observance, but was oddly tolerant of theatergoing, a lifelong habit of his own. Apart from the sentimental messages later inserted in them, the Alice books and Snark are blessedly devoid of religious or moral concern. Full of rudeness, aggression, and quarrelsome, if fallacious, argument, they have, on the other hand, a natural attraction for philosophers, who pillage them freely for illustrations. Humpty-Dumpty, the various Kings and Queens, the Mad Hatter, the Caterpillar, the White Rabbit, the Cheshire Cat, the Unicorn, the Tweedle brothers, the Bellman, the Baker, and the Snark make fleeting appearances in the pages of Russell, Moore, Broad, Quine, Nagel, Austin, Ayer, Ryle, Blanshard, and even Wittgenstein (an unlikely admirer of the Mock Turtle). The first such allusion (to the March Hare) is in Venn’s Symbolic Logic (1881). The usual reasons for quotation are to make some point about meaning, stipulative definition, the logic of negation, time reversal, dream consciousness, the reification of fictions and nonentities, or the absurdities that arise from taking “ordinary language” too literally. (For exponents of word processing, the effect of running Jabberwocky through a spell-checker is to extinguish all hope for the future of Artificial Intelligence.)


Though himself no philosopher, Carroll’s unique sense of philosophic humor keeps him (and his illustrator, Sir John Tenniel) effortlessly alive in the modern age. Alice has been translated into seventy-five languages; new editions and critical studies appear every year; imitations, parodies, cartoons, quotations, and ephemera proliferate beyond number; and Carroll societies flourish in several countries, notably Britain and the United States. P.He.

Cārvāka

Indian materialism. Its varieties share the view that the mind is simply the body and its capacities, but differ as to whether every mental property is simply a physical property under some psychological description (reductive materialism) or there are emergent irreducibly mental properties that are caused by physical properties and themselves have no causal impact (epiphenomenalism). Some Cārvāka epistemologists, at least according to their critics, accept only perception as a reliable source of knowledge, but in its most sophisticated form Cārvāka, not unlike logical positivism, allows inference at least to conclusions that concern perceptually accessible states of affairs.

Cassirer, Ernst


(1874 - 1945)

German philosopher and intellectual historian. He was born in the German city of Breslau (now Wroclaw, Poland) and educated at various German universities. He completed his studies in 1899 at Marburg under Hermann Cohen, founder of the Marburg School of neo-Kantianism. Cassirer lectured at the University of Berlin from 1906 to 1919, then accepted a professorship at the newly founded University of Hamburg. With the rise of Nazism he left Germany in 1933, going first to a visiting appointment at All Souls College, Oxford (1933-35) and then to a professorship at the University of Göteborg, Sweden (1935-41). In 1941 he went to the United States; he taught first at Yale (1941-44) and then at Columbia (1944-45).


Cassirer’s works may be divided into those in the history of philosophy and culture and those that present his own systematic thought. The former include major editions of Leibniz and Kant; his four-volume study The Problem of Knowledge (vols. 1-3, 1906-20; vol. 4, 1950), which traces the subject from Nicholas of Cusa to the twentieth century; and individual works on Descartes, Leibniz, Kant, Rousseau, Goethe, the Renaissance, the Enlightenment, and English Platonism. The latter include his multivolume The Philosophy of Symbolic Forms (1923-29), which presents a philosophy of human culture based on types of symbolism found in myth, language, and mathematical science; and individual works concerned with problems in such fields as logic, psychology, aesthetics, linguistics, and concept formation in the humanities. Two of his best-known works are An Essay on Man (1944) and The Myth of the State (1946).


Cassirer did not consider his systematic philosophy and his historical studies as separate endeavors; each grounded the other. Because of his involvement with the Marburg School, his philosophical position is frequently but mistakenly typed as neo-Kantian. Kant is an important influence on him, but so are Hegel, Herder, Wilhelm von Humboldt, Goethe, Leibniz, and Vico. Cassirer derives his principal philosophical concept, symbolic form, most directly from Heinrich Hertz’s conception of notation in mechanics and the conception of the symbol in art of the Hegelian aesthetician, Friedrich Theodor Vischer. In a wider sense his conception of symbolic form is a transformation of “idea” and “form” within the whole tradition of philosophical idealism. Cassirer’s conception of symbolic form is not based on a distinction between the symbolic and the literal. In his view all human knowledge depends on the power to form experience through some type of symbolism. The forms of human knowledge are coextensive with forms of human culture. Those he most often analyzes are myth and religion, art, language, history, and science. These forms of symbolism constitute a total system of human knowledge and culture that is the subject matter of philosophy.


Cassirer’s influence is most evident in the aesthetics of Susanne Langer (1895-1985), but his conception of the symbol has entered into theoretical anthropology, psychology, structural linguistics, literary criticism, myth theory, aesthetics, and phenomenology. His studies of the Renaissance and the Enlightenment still stand as groundbreaking works in intellectual history.

Castañeda, Hector-Neri


(1924 - 1991)

American analytical philosopher. Heavily influenced by his own critical reaction to Quine, Chisholm, and his teacher Wilfrid Sellars, Castañeda published four books and more than 175 essays. His work combines originality, rigor, and penetration, together with an unusual comprehensiveness—his network of theory and criticism reaches into nearly every area of philosophy, including action theory; deontic logic and practical reason; ethics; history of philosophy; metaphysics and ontology; philosophical methodology; philosophy of language, mind, and perception; and the theory of knowledge. His principal contributions are to metaphysics and ontology, indexical reference, and deontic logic and practical reasoning.


In metaphysics and ontology, Castañeda’s chief work is guise theory, first articulated in a 1974 essay, a complex and global account of language, mind, ontology, and predication. By holding that ordinary concrete individuals, properties, and propositions all break down or separate into their various aspects or guises, he theorizes that thinking and reference are directed toward the latter. Each guise is a genuine item in the ontological inventory, having properties internally and externally. In addition, guises are related by standing in various sameness relations, only one of which is the familiar relation of strict identity. Since every guise enjoys bona fide ontological standing, whereas only some of these actually exist, Castañeda’s ontology and semantics are Meinongian. With its intricate account of predication, guise theory affords a unified treatment of a wide range of philosophical problems concerning reference to nonexistents, negative existentials, intentional identity, referential opacity, and other matters.


Castañeda also played a pivotal role in emphasizing the significance of indexical reference. If, e.g., Paul assertively utters ‘I prefer Chardonnay’, it would obviously be incorrect for Bob to report ‘Paul says that I prefer Chardonnay’, since the last statement expresses (Bob’s) speaker’s reference, not Paul’s. At the same time, Castañeda contends, it is likewise incorrect for Bob to report Paul’s saying as either ‘Paul says that Paul prefers Chardonnay’ or ‘Paul says that Al’s luncheon guest prefers Chardonnay’ (when Paul is Al’s only luncheon guest), since each of these fails to represent the essentially indexical element of Paul’s assertion. Instead, Bob may correctly report ‘Paul says that he himself prefers Chardonnay’, where ‘he himself’ is a quasi-indicator, serving to depict Paul’s reference to himself qua self. For Castañeda (and others), quasi-indicators are a person’s irreducible, essential means for describing the thoughts and experiences of others. A complete account of his view of indexicals, together with a full articulation of guise theory and his unorthodox theories of definite descriptions and proper names, is contained in Thinking, Language, and Experience (1989).


Castañeda’s main views on practical reason and deontic logic turn on his fundamental practition-proposition distinction. A number of valuable essays on these views, together with his important replies, are collected in James E. Tomberlin, ed., Agent, Language, and the Structure of the World (1983), and Tomberlin, ed., Hector-Neri Castañeda (1986). The latter also includes Castañeda’s revealing intellectual autobiography.

casuistry

The case-analysis approach to the interpretation of general moral rules. Casuistry starts with paradigm cases of how and when a given general moral rule should be applied, and then reasons by analogy to cases in which the proper application of the rule is less obvious—e.g., a case in which lying is the only way for a priest not to betray a secret revealed in confession. The point of considering the series of cases is to ascertain the morally relevant similarities and differences between cases. Casuistry’s heyday was the first half of the seventeenth century. Reacting against casuistry’s popularity with the Jesuits and against its tendency to qualify general moral rules, Pascal penned a polemic against casuistry from which the term never recovered (see his Provincial Letters, 1656). But the kind of reasoning to which the term refers is flourishing in contemporary practical ethics. B.W.H.

categorical theory

A theory all of whose models are isomorphic. Because of its weak expressive power, in first-order logic with identity only theories with a finite model can be categorical; without identity no theories are categorical. A more interesting property, therefore, is being categorical in power: a theory is categorical in power α when the theory has, up to isomorphism, only one model with a domain of cardinality α. Categoricity in power shows the capacity to characterize a structure completely, limited only by cardinality. For example, the first-order theory of dense order without endpoints is categorical in power ω, the cardinality of the natural numbers. The first-order theory of simple discrete orderings with initial element, the ordering of the natural numbers, is not categorical in power ω. There are countable discrete orders, not isomorphic to the natural numbers, that are elementarily equivalent to it, i.e., have the same elementary, first-order theory. In first-order logic categorical theories are complete. This is not necessarily true for extensions of first-order logic for which no completeness theorem holds. In such a logic a set of axioms may be categorical without providing an informative characterization of the theory of its unique model. The term ‘elementary equivalence’ was introduced around 1936 by Tarski for the property of being indistinguishable by elementary means. According to Oswald Veblen, who first used the term ‘categorical’ in 1904, in a discussion of the foundations of geometry, that term was suggested to him by the American pragmatist John Dewey.
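The categoricity in power ω of dense orders without endpoints rests on Cantor’s back-and-forth construction. The following sketch (Python; the finite enumerations and the particular two orders, the rationals and the rationals in the open interval (0, 1), are illustrative choices, not part of the entry) builds a finite stage of the isomorphism by alternately matching the next unmatched point of each order:

```python
from fractions import Fraction

def fresh(lo, hi, lower=None, upper=None):
    """A rational strictly between lo and hi (None = no matched point on
    that side; lower/upper are the open bounds of the order, if any)."""
    lo = lower if lo is None else lo
    hi = upper if hi is None else hi
    if lo is None and hi is None:
        return Fraction(0)
    if lo is None:
        return hi - 1
    if hi is None:
        return lo + 1
    return (lo + hi) / 2

def back_and_forth(enum_a, enum_b, bounds_a, bounds_b, steps):
    """Alternately match the next unmatched point of A, then of B,
    always preserving order; returns a finite partial isomorphism."""
    pairs = []  # matched (a, b) pairs
    for i in range(steps):
        side = i % 2                                  # 0: forth, 1: back
        enum = enum_a if side == 0 else enum_b
        bounds = bounds_b if side == 0 else bounds_a  # witness lives in the other order
        x = next(v for v in enum if all(p[side] != v for p in pairs))
        lows = [p[1 - side] for p in pairs if p[side] < x]
        highs = [p[1 - side] for p in pairs if p[side] > x]
        y = fresh(max(lows) if lows else None,
                  min(highs) if highs else None, *bounds)
        pairs.append((x, y) if side == 0 else (y, x))
    return sorted(pairs)

# A = the rationals; B = the rationals in (0, 1); finite enumerations only.
A = [Fraction(s) for s in ("0", "1", "-1", "1/2", "-1/2", "2", "-2", "3/2")]
B = [Fraction(s) for s in ("1/2", "1/4", "3/4", "1/3", "2/3", "1/5", "2/5", "4/5")]
pairs = back_and_forth(A, B, (None, None), (Fraction(0), Fraction(1)), 8)
b_vals = [b for _, b in pairs]
assert b_vals == sorted(set(b_vals))            # strictly order-preserving
assert all(0 < b < 1 for b in b_vals)           # images land inside (0, 1)
```

Density and the absence of endpoints guarantee that a fresh witness always exists, so iterating the construction through complete enumerations of both orders yields a full isomorphism; this is why any two countable models of the theory are isomorphic.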

categoricity

The semantic property belonging to a set of sentences, a “postulate set,” that implicitly defines (completely describes, or characterizes up to isomorphism) the structure of its intended interpretation or standard model. The best-known categorical set of sentences is the postulate set for number theory attributed to Peano, which completely characterizes the structure of an arithmetic progression. This structure is exemplified by the system of natural numbers with zero as distinguished element and successor (addition of one) as distinguished function. Other exemplifications of this structure are obtained by taking as distinguished element an arbitrary integer, taking as distinguished function the process of adding an arbitrary positive or negative integer and taking as universe of discourse (or domain) the result of repeated application of the distinguished function to the distinguished element. (See, e.g., Russell’s Introduction to Mathematical Philosophy, 1919.)


More precisely, a postulate set is defined to be categorical if every two of its models (satisfying interpretations or realizations) are isomorphic (to each other), where, of course, two interpretations are isomorphic if between their respective universes of discourse there exists a one-to-one correspondence by which the distinguished elements, functions, relations, etc., of the one are mapped exactly onto those of the other. The importance of the analytic geometry of Descartes rests on the fact that the system of points of a geometrical line with the “left-of relation” distinguished is isomorphic to the system of real numbers with the “less-than” relation distinguished. Categoricity, the ideal limit of success for the axiomatic method considered as a method for characterizing subject matter rather than for reorganizing a science, is known to be impossible with respect to certain subject matters using certain formal languages. The concept of categoricity can be traced back at least as far as Dedekind; the word is due to Dewey.
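An isomorphism between two exemplifications of the arithmetic-progression structure is easy to exhibit. As a small illustrative check in Python (a finite spot-check, not a proof, since the models are infinite; the model of even numbers with addition of two is one of the exemplifications described above), the map n ↦ 2n preserves the distinguished element and commutes with the distinguished function:

```python
def succ(n):      # distinguished function of the standard model: add one
    return n + 1

def succ2(n):     # distinguished function of the even-number model: add two
    return n + 2

def iso(n):       # candidate isomorphism: n maps to 2n
    return 2 * n

# Homomorphism conditions: zero goes to zero, and iso commutes with successor.
assert iso(0) == 0
assert all(iso(succ(n)) == succ2(iso(n)) for n in range(10_000))
```

The map is also one-to-one and onto the even numbers, so the two models, though different, are structurally indistinguishable.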

category

An ultimate class. Categories are the highest genera of entities in the world. They may contain species but are not themselves species of any higher genera. Aristotle, the first philosopher to discuss categories systematically, listed ten, including substance, quality, quantity, relation, place, and time. If a set of categories is complete, then each entity in the world will belong to a category and no entity will belong to more than one category. A prominent example of a set of categories is Descartes’s dualistic classification of mind and matter. This example brings out clearly another feature of categories: an attribute that can belong to entities in one category cannot be an attribute of entities in any other category. Thus, entities in the category of matter have extension and color while no entity in the category of mind can have extension or color.

category mistake

The placing of an entity in the wrong category. In one of Ryle’s examples, to place the activity of exhibiting team spirit in the same class with the activities of pitching, batting, and catching is to make a category mistake; exhibiting team spirit is not a special function like pitching or batting but instead a way those special functions are performed. A second use of ‘category mistake’ is to refer to the attribution to an entity of a property which that entity cannot have (not merely does not happen to have), as in ‘This memory is violet’ or, to use an example from Carnap, ‘Caesar is a prime number’. These two kinds of category mistake may seem different, but both involve misunderstandings of the natures of the things being talked about. It is thought that they go beyond simple error or ordinary mistakes, as when one attributes a property to a thing which that thing could have but does not have, since category mistakes involve attributions of properties (e.g., being a special function) to things (e.g., team spirit) that those things cannot have. According to Ryle, the test for category differences depends on whether replacement of one expression for another in the same sentence results in a type of unintelligibility that he calls “absurdity.”

category theory

A mathematical theory that studies the universal properties of structures via their relationships with one another. A category C consists of two collections, Ob_C and Mor_C, the objects and the morphisms of C, satisfying the following conditions: (i) for each pair (a, b) of objects there is associated a collection Mor_C(a, b) of morphisms such that each member of Mor_C belongs to exactly one of these collections; (ii) for each object a of Ob_C, there is a morphism id_a, called the identity on a; (iii) there is a composition law associating with each morphism f: a → b and each morphism g: b → c a morphism gf: a → c, called the composite of f and g; (iv) for morphisms f: a → b, g: b → c, and h: c → d, the equation h(gf) = (hg)f holds; (v) for any morphism f: a → b, we have id_b f = f and f id_a = f. Sets with specific structures together with a collection of mappings preserving these structures are categories. Examples: (1) sets with functions between them; (2) groups with group homomorphisms; (3) topological spaces with continuous functions; (4) sets with surjections instead of arbitrary maps constitute a different category. But a category need not be composed of sets and set-theoretical maps. Examples: (5) a collection of propositions linked by the relation of logical entailment is a category, and so is any preordered set; (6) a monoid, taken as the unique object with its elements as the morphisms, is a category. The properties of an object of a category are determined by the morphisms coming out of and going into that object. Objects with a universal property occupy a key position. Thus, a terminal object a is characterized by the following universal property: for any object b there is a unique morphism from b to a. A singleton set is a terminal object in the category of sets. The Cartesian product of sets, the product of groups, and the conjunction of propositions are all products in the appropriate categories, each characterized by a similar universal property.
Thus category theory unifies concepts and sheds new light on the notion of universality.
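The axioms can be checked mechanically for the category of finite sets. The sketch below (Python; representing a morphism by the graph of its function is just one convenient encoding, not part of the theory) builds identities and composites, verifies conditions (iv) and (v), and checks the universal property of a singleton as terminal object:

```python
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class Morphism:
    dom: frozenset
    cod: frozenset
    graph: tuple        # pairs (x, f(x)), listed in sorted order of x

def morphism(dom, cod, f):
    return Morphism(frozenset(dom), frozenset(cod),
                    tuple((x, f(x)) for x in sorted(dom)))

def identity(a):
    return morphism(a, a, lambda x: x)

def compose(g, f):
    """Composite gf: first f, then g (defined only when cod f = dom g)."""
    assert f.cod == g.dom
    gmap = dict(g.graph)
    return Morphism(f.dom, g.cod, tuple((x, gmap[y]) for x, y in f.graph))

a, b, c, d = {1, 2}, {3, 4, 5}, {6, 7}, {8}
f = morphism(a, b, lambda x: x + 2)
g = morphism(b, c, lambda x: 6 if x < 5 else 7)
h = morphism(c, d, lambda x: 8)

# (iv) associativity and (v) identity laws:
assert compose(h, compose(g, f)) == compose(compose(h, g), f)
assert compose(identity(b), f) == f and compose(f, identity(a)) == f

# A singleton is terminal: from any object, exactly one morphism into it.
term = frozenset({0})
all_maps = {morphism(a, term, dict(zip(sorted(a), vals)).__getitem__)
            for vals in product(term, repeat=len(a))}
assert len(all_maps) == 1
```

Composition here is ordinary function composition, so associativity and the identity laws hold automatically; the assertions simply confirm the general conditions on one concrete instance.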

causal law

A statement describing a regular and invariant connection between types of events or states, where the connections involved are causal in some sense. When one speaks of causal laws as distinguished from laws that are not causal, the intended distinction may vary. Sometimes, a law is said to be causal if it relates events or states occurring at successive times, also called a law of succession: e.g., ‘Ingestion of strychnine leads to death.’ A causal law in this sense contrasts with a law of coexistence, which connects events or states occurring at the same time (e.g., the Wiedemann-Franz law relating thermal and electric conductivity in metals).


One important kind of causal law is the deterministic law. Causal laws of this kind state exceptionless connections between events, while probabilistic or statistical laws specify probability relationships between events. For any system governed by a set of deterministic laws, given the state of a system at a time, as characterized by a set of state variables, these laws will yield a unique state of the system for any later time (or, perhaps, at any time, earlier or later). Probabilistic laws will yield, for a given antecedent state of a system, only a probability value for the occurrence of a certain state at a later time. The laws of classical mechanics are often thought to be paradigmatic examples of causal laws in this sense, whereas the laws of quantum mechanics are claimed to be essentially probabilistic.


Causal laws are sometimes taken to be laws that explicitly specify certain events as causes of certain other events. Simple laws of this kind will have the form ‘Events of kind F cause events of kind G’; e.g., ‘Heating causes metals to expand’. A weaker related concept is this: a causal law is one that states a regularity between events which in fact are related as cause to effect, although the statement of the law itself does not say so (laws of motion expressed by differential equations are perhaps causal laws in this sense). These senses of ‘causal law’ presuppose a prior concept of causation.


Finally, causal laws may be contrasted with teleological laws, laws that supposedly describe how certain systems, in particular biological organisms, behave so as to achieve certain “goals” or “end states.” Such laws are sometimes claimed to embody the idea that a future state that does not as yet exist can exert an influence on the present behavior of a system. Just what form such laws take and exactly how they differ from ordinary laws have not been made wholly clear, however.

causal theory of proper names

The view that proper names designate what they name by virtue of a kind of causal connection to it. This view is a special case, and in some instances an unwarranted interpretation, of a direct reference view of names. On this approach, proper names, e.g., ‘Machiavelli’, are, as J.S. Mill wrote, “purely denotative…. they denote the individuals who are called by them; but they do not indicate or imply any attributes as belonging to those individuals” (A System of Logic, 1879). Proper names may suggest certain properties to many competent speakers, but any such associated information is no part of the definition of the name. Names, on this view, have no definitions. What connects a name to what it names is not the latter’s satisfying some condition specified in the name’s definition. Names, instead, are simply attached to things, applied as labels, as it were. A proper name, once attached, becomes a socially available device for making the relevant name bearer a subject of discourse.


On the other leading view, the descriptivist view, a proper name is associated with something like a definition. ‘Aristotle’, on this view, applies by definition to whoever satisfies the relevant properties, e.g., to whoever is ‘the teacher of Alexander the Great, who wrote the Nicomachean Ethics’. Russell, e.g., maintained that ordinary proper names (which he contrasted with logically proper or genuine names) have definitions, that they are abbreviated definite descriptions. Frege held that names have sense, a view whose proper interpretation remains in dispute, but is often supposed to be closely related to Russell’s approach. Others, most notably Searle, have defended descendants of the descriptivist view. An important variant, sometimes attributed to Frege, denies that names have articulable definitions, but nevertheless associates them with senses. And the bearer will still be, by definition (as it were), the unique thing to satisfy the relevant mode of presentation.


The direct reference approach is sometimes misleadingly called the causal theory of names. But the key idea need have nothing to do with causation: a proper name functions as a tag or label for its bearer, not as a surrogate for a descriptive expression. Whence the (allegedly) misleading term ‘causal theory of names’? Contemporary defenders of Mill’s conception like Keith Donnellan and Kripke felt the need to expand upon Mill’s brief remarks. What connects a present use of a name with a referent? Here Donnellan and Kripke introduce the notion of a “historical chain of communication.” As Kripke tells the story, a baby is baptized with a proper name. The name is used, first by those present at the baptism, subsequently by those who pick up the name in conversation, reading, and so on. The name is thus propagated, spread by usage “from link to link as if by a chain” (Naming and Necessity, 1980). There emerges a historical chain of uses of the name that, according to Donnellan and Kripke, bridges the gap between a present use of the name and the individual so named.


This “historical chain of communication” is occasionally referred to as a “causal chain of communication.” The idea is that one’s use of the name can be thought of as a causal factor in one’s listener’s ability to use the name to refer to the same individual. However, although Kripke in Naming and Necessity does occasionally refer to the chain of communication as causal, he more often simply speaks of the chain of communication, or of the fact that the name has been passed “by tradition from link to link” (p. 106). The causal aspect is not one that Kripke underscores. In more recent writings on the topic, as well as in lectures, Kripke never mentions causation in this connection, and Donnellan questions whether the chain of communication should be thought of as a causal chain.


This is not to suggest that there is no view properly called a “causal theory of names.” There is such a view, but it is not the view of Kripke and Donnellan. The causal theory of names is a view propounded by physicalistically minded philosophers who desire to “reduce” the notion of “reference” to something more physicalistically acceptable, such as the notion of a causal chain running from “baptism” to later use. This is a view whose motivation is explicitly rejected by Kripke, and should be sharply distinguished from the more popular anti-Fregean approach sketched above.

causation

The relation between cause and effect, or the act of bringing about an effect, which may be an event, a state, or an object (say, a statue). The concept of causation has long been recognized as one of fundamental philosophical importance. Hume called it “the cement of the universe”: causation is the relation that connects events and objects of this world in significant relationships. The concept of causation seems pervasively present in human discourse. It is expressed not only by ‘cause’ and its cognates but also by many other terms, such as ‘produce’, ‘bring about’, ‘issue’, ‘generate’, ‘result’, ‘effect’, ‘determine’, and countless others. Moreover, many common transitive verbs (“causatives”), such as ‘kill’, ‘break’, and ‘move’, tacitly contain causal relations (e.g., killing involves causing to die). The concept of action, or doing, involves the idea that the agent (intentionally) causes a change in some object or other; similarly, the concept of perception involves the idea that the object perceived causes in the perceiver an appropriate perceptual experience. The physical concept of force, too, appears to involve causation as an essential ingredient: force is the causal agent of changes in motion. Further, causation is intimately related to explanation: to ask for an explanation of an event is, often, to ask for its cause. It is sometimes thought that our ability to make predictions, and inductive inference in general, depends on our knowledge of causal connections (or the assumption that such connections are present): the knowledge that water quenches thirst warrants the predictive inference from ‘X is swallowing water’ to ‘X’s thirst will be quenched’. More generally, the identification and systematic description of causal relations that hold in the natural world have been claimed to be the preeminent aim of science. Finally, causal concepts play a crucial role in moral and legal reasoning, e.g., in the assessment of responsibilities and liabilities.


Event causation is the causation of one event by another. A sequence of causally connected events is called a causal chain. Agent causation refers to the act of an agent (person, object) in bringing about a change; thus, my opening the window (i.e., my causing the window to open) is an instance of agent causation. There is a controversy as to whether agent causation is reducible to event causation. My opening the window seems reducible to event causation since in reality a certain motion of my arms, an event, causes the window to open. Some philosophers, however, have claimed that not all cases of agent causation are so reducible. Substantival causation is the creation of a genuinely new substance, or object, rather than causing changes in preexisting substances, or merely rearranging them. The possibility of substantival causation, at least in the natural world, has been disputed by some philosophers. Event causation, however, has been the primary focus of philosophical discussion in the modern and contemporary period.


The analysis of event causation has been controversial. The following four approaches have been prominent: the regularity analysis, the counterfactual analysis, the manipulation analysis, and the probabilistic analysis. The heart of the regularity (or nomological) analysis, associated with Hume and J.S. Mill, is the idea that causally connected events must instantiate a general regularity between like kinds of events. More precisely: if c is a cause of e, there must be types or kinds of events, F and G, such that c is of kind F, e is of kind G, and events of kind F are regularly followed by events of kind G. Some take the regularity involved to be merely de facto “constant conjunction” of the two event types involved; a more popular view is that the regularity must hold as a matter of “nomological necessity” — i.e., it must be a “law.” An even stronger view is that the regularity must represent a causal law. A law that does this job of subsuming causally connected events is called a “covering” or “subsumptive” law, and versions of the regularity analysis that call for such laws are often referred to as the “covering-law” or “nomic-subsumptive” model of causality.


The regularity analysis appears to give a satisfactory account of some aspects of our causal concepts: for example, causal claims are often tested by re-creating the event or situation claimed to be a cause and then observing whether a similar effect occurs. In other respects, however, the regularity account does not seem to fare so well: e.g., it has difficulty explaining the apparent fact that we can have knowledge of causal relations without knowledge of general laws. It seems possible to know, for instance, that someone’s contraction of the flu was caused by her exposure to a patient with the disease, although we know of no regularity between such exposures and contraction of the disease (it may well be that only a very small fraction of persons who have been exposed to flu patients contract the disease). Do I need to know general regularities about itchings and scratchings to know that the itchy sensation on my left elbow caused me to scratch it? Further, not all regularities seem to represent causal connections (e.g., Reid’s example of the succession of day and night; two successive symptoms of a disease). Distinguishing causal from non-causal regularities is one of the main problems confronting the regularity theorist.


According to the counterfactual analysis, what makes an event a cause of another is the fact that if the cause event had not occurred the effect event would not have. This accords with the idea that cause is a condition that is sine qua non for the occurrence of the effect. The view that a cause is a necessary condition for the effect is based on a similar idea. The precise form of the counterfactual account depends on how counterfactuals are understood (e.g., if counterfactuals are explained in terms of laws, the counterfactual analysis may turn into a form of the regularity analysis).


The counterfactual approach, too, seems to encounter various difficulties. It is true that on the basis of the fact that if Larry had watered my plants, as he had promised, my plants would not have died, I could claim that Larry’s not watering my plants caused them to die. But it is also true that if George Bush had watered my plants, they would not have died; but does that license the claim that Bush’s not watering my plants caused them to die? Also, there appear to be many cases of dependencies expressed by counterfactuals that, however, are not cases of causal dependence: e.g., if Socrates had not died, Xanthippe would not have become a widow; if I had not raised my hand, I would not have signaled. The question, then, is whether these non-causal counterfactuals can be distinguished from causal counterfactuals without the use of causal concepts. There are also questions about how we could verify counterfactuals—in particular, whether our knowledge of causal counterfactuals is ultimately dependent on knowledge of causal laws and regularities.


Some have attempted to explain causation in terms of action, and this is the manipulation analysis: the cause is an event or state that we can produce at will, or otherwise manipulate, to produce a certain other event as an effect. Thus, an event is a cause of another provided that by bringing about the first event we can bring about the second. This account exploits the close connection noted earlier between the concepts of action and cause, and highlights the important role that knowledge of causal connections plays in our control of natural events. However, as an analysis of the concept of cause, it may well have things backward: the concept of action seems to be a richer and more complex concept that presupposes the concept of cause, and an analysis of cause in terms of action could be accused of circularity.


The reason we think that someone’s exposure to a flu patient was the cause of her catching the disease, notwithstanding the absence of an appropriate regularity (even one of high probability), may be this: exposure to flu patients increases the probability of contracting the disease. Thus, an event, X, may be said to be a probabilistic cause of an event, Y, provided that the probability of the occurrence of Y, given that X has occurred, is greater than the antecedent probability of Y. To meet certain obvious difficulties, this rough definition must be further elaborated (e.g., to eliminate the possibility that X and Y are collateral effects of a common cause). There is also the question whether probabilistic causation is to be taken as an analysis of the general concept of causation, or as a special kind of causal relation, or perhaps only as evidence indicating the presence of a causal relationship. Probabilistic causation has of late been receiving increasing attention from philosophers.


When an effect is brought about by two independent causes either of which alone would have sufficed, one speaks of causal overdetermination. Thus, a house fire might have been caused by both a short circuit and a simultaneous lightning strike; either event alone would have caused the fire, and the fire, therefore, was causally overdetermined. Whether there are actual instances of overdetermination has been questioned; one could argue that the fire that would have been caused by the short circuit alone would not have been the same fire, and similarly for the fire that would have been caused by the lightning alone.


The steady buildup of pressure in a boiler would have caused it to explode but for the fact that a bomb was detonated seconds before, leading to a similar effect. In such a case, one speaks of preemptive, or superseding, cause. We are apt to speak of causes in regard to changes; however, “unchanges,” e.g., this table’s standing here through some period of time, can also have causes: the table continues to stand here because it is supported by a rigid floor. The presence of the floor, therefore, can be called a sustaining cause of the table’s continuing to stand.


A cause is usually thought to precede its effect in time; however, some have argued that we must allow for the possibility of a cause that is temporally posterior to its effect—backward causation (sometimes called retrocausation). And there is no universal agreement as to whether a cause can be simultaneous with its effect—concurrent causation. Nor is there a general agreement as to whether cause and effect must, as a matter of conceptual necessity, be “contiguous” in time and space, either directly or through a causal chain of contiguous events—contiguous causation.


The attempt to “analyze” causation seems to have reached an impasse; the proposals on hand seem so widely divergent that one wonders whether they are all analyses of one and the same concept. But each of them seems to address some important aspect of the variegated notion that we express by the term ‘cause’, and it may be doubted whether there is a unitary concept of causation that can be captured in an enlightening philosophical analysis. On the other hand, the centrality of the concept, both to ordinary practical discourse and to the scientific description of the world, is difficult to deny. This has encouraged some philosophers to view causation as a primitive, one that cannot be further analyzed. There are others who advocate the extreme view (causal nihilism) that causal concepts play no role whatever in the advanced sciences, such as fundamental physical theories of space-time and matter, and that the very notion of cause is an anthropocentric projection deriving from our confused ideas of action and power.

causa sui (Latin, ‘cause of itself’)

An expression applied to God to mean in part that God owes his existence to nothing other than himself. It does not mean that God somehow brought himself into existence. The idea is that the very nature of God logically requires that he exists. What accounts for the existence of a being that is causa sui is its own nature.

Cavell, Stanley Louis


(1926 - )

American philosopher whose work has explored skepticism and its consequences. He was Walter M. Cabot Professor of Aesthetics and General Value Theory at Harvard from 1963 until 1997. Central to Cavell’s thought is the view that skepticism is not a theoretical position to be refuted by philosophical theory or dismissed as a mere misuse of ordinary language; it is a reflection of the fundamental limits of human knowledge of the self, of others, and of the external world, limits that must be accepted—in his term, “acknowledged”—because the refusal to do so results in illusion and risks tragedy.


Cavell’s work defends J.L. Austin from both positivism and deconstructionism (Must We Mean What We Say?, 1969, and The Pitch of Philosophy, 1994), but not because Cavell is an “ordinary language” philosopher. Rather, his defense of Austin has combined with his response to skepticism to make him a philosopher of the ordinary: he explores the conditions of the possibility and limits of ordinary language, ordinary knowledge, ordinary action, and ordinary human relationships. He uses both the resources of ordinary language and the discourse of philosophers, such as Wittgenstein, Heidegger, Thoreau, and Emerson, and of the arts. Cavell has explored the ineliminability of skepticism in Must We Mean What We Say?, notably in its essay on King Lear, and has developed his analysis in his 1979 magnum opus, The Claim of Reason. He has examined the benefits of acknowledging the limits of human self-understanding, and the costs of refusing to do so, in a broad range of contexts from film (The World Viewed, 1971; Pursuits of Happiness, 1981; and Contesting Tears, 1996) to American philosophy (The Senses of Walden, 1972; and the chapters on Emerson in This New Yet Unapproachable America, 1989, and Conditions Handsome and Unhandsome, 1990).


A central argument in The Claim of Reason develops Cavell’s approach by looking at Wittgenstein’s notion of criteria. Criteria are not rules for the use of our words that can guarantee the correctness of the claims we make by them; rather, criteria bring out what we claim by using the words we do. More generally, in making claims to knowledge, undertaking actions, and forming interpersonal relationships, we always risk failure, but it is also precisely in that room for risk that we find the possibility of freedom. This argument is indebted not only to Wittgenstein but also to Kant, especially in the Critique of Judgment.


Cavell has used his view as a key to understanding classics of the theater and film. Regarding such tragic figures as Lear, he argues that their tragedies result from their refusal to accept the limits of human knowledge and human love, and their insistence on an illusory absolute and pure love. The World Viewed argues for a realistic approach to film, meaning that we should acknowledge that our cognitive and emotional responses to films are responses to the realities of the human condition portrayed in them. This “ontology of film” prepared the way for Cavell’s treatment of the genre of comedies of remarriage in Pursuits of Happiness. It also grounds his treatment of melodrama in Contesting Tears, which argues that human beings must remain tragically unknown to each other if the limits to our knowledge of each other are not acknowledged.


In The Claim of Reason and later works Cavell has also contributed to moral philosophy by his defense—against Rawls’s critique of “moral perfectionism”—of “Emersonian perfectionism”: the view that no general principles of conduct, no matter how well established, can ever be employed in practice without the ongoing but never completed perfection of knowledge of oneself and of the others on and with whom one acts. Cavell’s Emersonian perfectionism is thus another application of his Wittgensteinian and Kantian recognition that rules must always be supplemented by the capacity for judgment.

Cavendish, Margaret

Duchess of Newcastle (1623-1673), English author of some dozen works in a variety of forms. Her central philosophical interests were the developments in natural science of her day. Her earliest works endorsed a kind of atomism, but her settled view, in Philosophical Letters (1664), Observations upon Experimental Philosophy (1666), and Grounds of Natural Philosophy (1668), was a kind of organic materialism. Cavendish argues for a hierarchy of increasingly fine matter, capable of self-motion. Philosophical Letters, among other matters, raises problems for the notion of inert matter found in Descartes, and Observations upon Experimental Philosophy criticizes microscopists such as Hooke for committing a double error, first of preferring the distortions introduced by instruments to unaided vision and second of preferring sense to reason.

Celsus


(late second century A.D.?)

Anti-Christian writer known only as the author of a work called The True Doctrine (Alethēs Logos), which is quoted extensively by Origen of Alexandria in his response, Against Celsus (written in the late 240s). The True Doctrine is mainly important because it is the first anti-Christian polemic of which we have significant knowledge. Origen considers Celsus to be an Epicurean, but he is uncertain about this. There are no traces of Epicureanism in Origen’s quotations from Celsus, which indicate instead that he is an eclectic Middle Platonist of no great originality, a polytheist whose conception of the “unnameable” first deity transcending being and knowable only by “synthesis, analysis, or analogy” is based on Plato’s description of the Good in Republic VI. In accordance with the Timaeus, Celsus believes that God created “immortal things” and turned the creation of “mortal things” over to them. According to him, the universe has a providential organization in which humans hold no special place, and its history is one of eternally repeating sequences of events separated by catastrophes.

certainty

The property of being certain, which is either a psychological property of persons or an epistemic feature of proposition-like objects (e.g., beliefs, utterances, statements). We can say that a person, S, is psychologically certain that p (where ‘p’ stands for a proposition) provided S has no doubt whatsoever that p is true. Thus, a person can be certain regardless of the degree of epistemic warrant for a proposition. In general, philosophers have not found this an interesting property to explore. The exception is Peter Unger, who argued for skepticism, claiming that (1) psychological certainty is required for knowledge and (2) no person is ever certain of anything, or of hardly anything. As applied to propositions, ‘certain’ has no univocal use. For example, some authors (e.g., Chisholm) may hold that a proposition is epistemically certain provided no proposition is more warranted than it. Given that account, it is possible that a proposition is certain, yet there are legitimate reasons for doubting it just as long as there are equally good grounds for doubting every equally warranted proposition. Other philosophers have adopted a Cartesian account of certainty in which a proposition is epistemically certain provided it is warranted and there are no legitimate grounds whatsoever for doubting it.


Both Chisholm’s and the Cartesian characterizations of epistemic certainty can be employed to provide a basis for skepticism. If knowledge entails certainty, then it can be argued that very little, if anything, is known. For, the argument continues, only tautologies or propositions like ‘I exist’ or ‘I have beliefs’ are such that either nothing is more warranted or there are absolutely no grounds for doubt. Thus, hardly anything is known. Most philosophers have responded either by denying that ‘certainty’ is an absolute term, i.e., admitting of no degrees, or by denying that knowledge requires certainty (Dewey, Chisholm, Wittgenstein, and Lehrer). Others have agreed that knowledge does entail absolute certainty, but have argued that absolute certainty is possible (e.g., Moore).


Sometimes ‘certain’ is modified by other expressions, as in ‘morally certain’ or ‘metaphysically certain’ or ‘logically certain’. Once again, there is no universally accepted account of these terms. Typically, however, they are used to indicate degrees of warrant for a proposition, and often that degree of warrant is taken to be a function of the type of proposition under consideration. For example, the proposition that smoking causes cancer is morally certain provided its warrant is sufficient to justify acting as though it were true. The evidence for such a proposition may, of necessity, depend upon recognizing particular features of the world. On the other hand, in order for a proposition, say that every event has a cause, to be metaphysically certain, the evidence for it must not depend upon recognizing particular features of the world but rather upon recognizing what must be true in order for our world to be the kind of world it is—i.e., one having causal connections. Finally, a proposition, say that every effect has a cause, may be logically certain if it is derivable from “truths of logic” that do not depend in any way upon recognizing anything about our world. Since other taxonomies for these terms are employed by philosophers, it is crucial to examine the use of the terms in their contexts.

Chang Hsüeh-ch’eng


(1738 - 1801)

Chinese historian and philosopher who devised a dialectical theory of civilization in which beliefs, practices, institutions, and arts developed in response to natural necessities. This process reached its zenith several centuries before Confucius, who is unique in being the sage destined to record this moment. Chang’s teaching, “the Six Classics are all history,” means the classics are not theoretical statements about the tao (Way) but traces of it in operation. In the ideal age, a unity of chih (government) and chiao (teaching) prevailed; there were no private disciplines or schools of learning and all writing was anonymous, being tied to some official function. Later history has meandered around this ideal, dominated by successive ages of philosophy, philology, and literature. P.J.I.

Chang Tsai


(1020 - 1077)

Chinese philosopher, a major Neo-Confucian figure whose Hsi-ming (“Western Inscription”) provided much of the metaphysical basis for Neo-Confucian ethics. It argues that the cosmos arose from a single source, the t’ai chi (Supreme Ultimate), as undifferentiated ch’i (ether) took shape out of an inchoate, primordial state, t’ai-hsü (the supremely tenuous). Thus the universe is fundamentally one. The sage “realizes his oneness with the universe” but, appreciating his particular place and role in the greater scheme, expresses his love for it in a graded fashion. Impure endowments of ch’i prevent most people from seeing the true nature of the world. They act “selfishly” but through ritual practice and learning can overcome this and achieve sagehood. P.J.I.

character

The comprehensive set of ethical and intellectual dispositions of a person. Intellectual virtues—like carefulness in the evaluation of evidence—promote, for one, the practice of seeking truth. Moral or ethical virtues—including traits like courage and generosity—dispose persons not only to choices and actions but also to attitudes and emotions. Such dispositions are generally considered relatively stable and responsive to reasons.


Appraisal of character transcends direct evaluation of particular actions in favor of examination of some set of virtues or the admirable human life as a whole. On some views this admirable life grounds the goodness of particular actions. This suggests seeking guidance from role models, and their practices, rather than relying exclusively on rules. Role models will, at times, simply perceive the salient features of a situation and act accordingly. Being guided by role models requires some recognition of just who should be a role model. One may act out of character, since dispositions do not automatically produce particular actions in specific cases. One may also have a conflicted character if the virtues one’s character comprises contain internal tensions (between, say, tendencies to impartiality and to friendship). The importance of formative education to the building of character means that some good fortune enters into the acquisition of character. One can have a good character with a disagreeable personality or have a fine personality with a bad character because personality is not typically a normative notion, whereas character is.

Charron, Pierre


(1541 - 1603)

French Catholic theologian who became the principal expositor of Montaigne’s ideas, presenting them in didactic form. His first work, The Three Truths (1595), presented a negative argument for Catholicism by offering a skeptical challenge to atheism, non-Christian religions, and Calvinism. He argued that we cannot know or understand God because of His infinitude and the weakness of our faculties. We can have no good reasons for rejecting Christianity or Catholicism. Therefore, we should accept it on faith alone. His second work, On Wisdom (1603), is a systematic presentation of Pyrrhonian skepticism coupled with a fideistic defense of Catholicism. The skepticism of Montaigne and the Greek skeptics is used to show that we cannot know anything unless God reveals it to us. This is followed by offering an ethics to live by, an undogmatic version of Stoicism. This is the first modern presentation of a morality apart from any religious considerations. Charron’s On Wisdom was extremely popular in France and England. It was read and used by many philosophers and theologians during the seventeenth century. Some claimed that his skepticism opened his defense of Catholicism to question, and suggested that he was insincere in his fideism. He was defended by important figures in the French Catholic church.

cheapest-cost avoider

In the economic analysis of law, the party in a dispute that could have prevented the dispute, or minimized the losses arising from it, with the lowest loss to itself. The term encompasses several types of behavior. As the lowest-cost accident avoider, it is the party that could have prevented the accident at the lowest cost. As the lowest-cost insurer, it is the party that could best have insured against the losses arising from the dispute. This could be the party that could have purchased insurance at the lowest cost or self-insured, or the party best able to appraise the expected losses and the probability of the occurrence. As the lowest-cost briber, it is the party least subject to transaction costs. This party is the one best able to correct any legal errors in the assignment of the entitlement by purchasing the entitlement from the other party. As the lowest-cost information gatherer, it is the party best able to make an informed judgment as to the likely benefits and costs of an action.

Ch’en Hsien-chang


(1428 - 1500)

Chinese poet-philosopher. In the early Ming dynasty Chu Hsi’s li-hsüeh (learning of principles) had been firmly established as the orthodoxy and became somewhat fossilized. Ch’en opposed this trend and emphasized “self-attained learning” by digging deep into the self to find meaning in life. He did not care for book learning and conceptualization, and chose to express his ideas and feelings through poems. Primarily a Confucian, he also drew from Buddhism and Taoism. He was credited with being the first to realize the depth and subtlety of hsin-hsüeh (learning of the mind), later developed into a comprehensive philosophy by Wang Yang-ming.

ch’eng

Chinese term meaning ‘sincerity’. It means much more than just a psychological attitude. Mencius barely touched upon the subject; it was in the Confucian Doctrine of the Mean that the idea was greatly elaborated. The ultimate metaphysical principle is characterized by ch’eng, as it is true, real, totally beyond illusion and delusion. According to the classic, sincerity is the Way of Heaven; to think how to be sincere is the Way of man; and only those who can be absolutely sincere can fully develop their nature, after which they can assist in the transforming and nourishing process of Heaven and Earth.

Ch’eng Hao (1032 - 1085), Ch’eng Yi (1033 - 1107)

Chinese philosophers, brothers who established mature Neo-Confucianism. They elevated the notion of li (pattern) to preeminence and systematically linked their metaphysics to central ethical notions, e.g. hsing (nature) and hsin (heart/mind).


Ch’eng Hao was more mystical and a stronger intuitionist. He emphasized a universal, creative spirit of life, jen (benevolence), which permeates all things, just as ch’i (ether/vital force) permeates one’s body, and likened an “unfeeling” (i.e., unbenevolent) person to an “unfeeling” (i.e., paralyzed) person. Both fail to realize a unifying “oneness.”


Ch’eng Yi presented a more detailed and developed philosophical system in which the li (pattern) in the mind was awakened by perceiving the li in the world, particularly as revealed in the classics, and by t’ui (extending/inferring) their interconnections. If one studies with ching (reverential attentiveness), one can gain both cognitively accurate and affectively appropriate “real knowledge,” which Ch’eng Yi illustrates with an allegory about those who “know” (i.e., have heard that) tigers are dangerous and those who “know” because they have been mauled.


The two brothers differ most in their views on self-cultivation. For Ch’eng Hao, it is more an inner affair: setting oneself right by bringing into full play one’s moral intuition. For Ch’eng Yi, self-cultivation was more external: chih chih (extending knowledge) through ko wu (investigating things). Here lie the beginnings of the major schools of Neo-Confucianism: the Lu-Wang and Ch’eng-Chu schools.

cheng ming also called Rectification of Names

A Confucian program of language reform advocating a return to traditional language. There is a brief reference to cheng ming in Analects 13:3, but Hsün Tzu presents the most detailed discussion of it. While admitting that new words (ming) will sometimes have to be created, Hsün Tzu fears the proliferation of words, dialects, and idiolects will endanger effective communication. He is also concerned that new ways of speaking may lend themselves to sophistry or fail to serve such purposes as accurately distinguishing the noble from the base.

ch’i

Chinese term for ether, air, corporeal vital energy, and the “atmosphere” of a season, person, event, or work. Ch’i can be dense/impure or limpid/pure, warm/rising/active or cool/settling/still. The brave brim with ch’i; a coward lacks it. Ch’i rises with excitement or health and sinks with depression or illness. Ch’i became a concept coordinate with li (pattern), being the medium in which li is embedded and through which it can be experienced. Ch’i serves a role akin to ‘matter’ in Western thought, but being “lively” and “flowing,” it generated a distinct and different set of questions. P.J.I.

Chiao Hung


(1540? - 1620)

Chinese historian and philosopher affiliated with the T’ai-chou school, often referred to as the left wing of Wang Yang-ming’s hsin-hsüeh (learning of the mind). However, he did not repudiate book learning; he was very erudite, and became a forerunner of evidential research. He believed in the unity of the teachings of Confucianism, Buddhism, and Taoism. In opposition to Chu Hsi’s orthodoxy he made use of insights of Ch’an (Zen) Buddhism to give new interpretations to the classics. Learning for him is primarily and ultimately a process of realization in consciousness of one’s innate moral nature.

Chia Yi


(200 - 168 B.C.)

Chinese scholar who attempted to synthesize Legalist, Confucian, and Taoist ideas. The Ch’in dynasty (221-206 B.C.) used the Legalist practice to unify China, but unlimited use of cruel punishment also caused its quick downfall; hence the Confucian system of li (propriety) had to be established, and the emperor had to delegate his power to able ministers to take care of the welfare of the people. The ultimate Way for Chia Yi is hsü (emptiness), a Taoist idea, but he interpreted it in such a way that it is totally compatible with the practice of li and the development of culture.

ch’ien, k’un

In traditional Chinese cosmology, the names of the two most important trigrams in the system of the I-Ching (the Book of Changes). Ch’ien (☰) is composed of three undivided lines, the symbol of yang, and k’un (☷) of three divided lines, the symbol of yin. Ch’ien means Heaven, the father, creativity; k’un means Earth, the mother, endurance. The two are complementary; they work together to form the whole cosmic order. In the system of the I-Ching there are eight trigrams; doubling up two trigrams forms a hexagram, and there are a total of sixty-four hexagrams. The first two hexagrams are also named ch’ien (䷀) and k’un (䷁).

Ch’ien-fu Lun

Chinese title of Comments of a Recluse (second century A.D.), a Confucian political and cosmological work by Wang Fu. Divided into thirty-six essays, it gives a vivid picture of the sociopolitical world of later Han China and prescribes practical measures to overcome corruption and other problems confronting the state. There are discussions on cosmology affirming the belief that the world is constituted by vital energy (ch’i). The pivotal role of human beings in shaping the world is emphasized. A person may be favorably endowed, but education remains crucial. Several essays address the perceived excesses in religious practices. Above all, the author targets for criticism the system of official appointment that privileges family background and reputation at the expense of moral worth and ability. Largely Confucian in outlook, the work reflects strong utilitarian interest reminiscent of Hsün Tzu.

Ch’ien Mu


(1895 - 1990)

Chinese historian, a leading contemporary New Confucian scholar and cofounder (with T’ang Chün-i) of New Asia College in Hong Kong (1949). Early in his career he was respected for his effort to date the ancient Chinese philosophers and for his study of Confucian thought in the Han dynasty (206 B.C.-A.D. 220). During World War II he wrote the Outline of Chinese History, in which he developed a nationalist historical viewpoint stressing the vitality of traditional Chinese culture. Late in his career he published his monumental study of Chu Hsi (1130-1200). He firmly believed the spirit of Confucius and Chu Hsi should be revived today.

chih

Chinese term roughly corresponding to ‘knowledge’. A concise explanation is found in the Hsün Tzu: “That in man by which he knows is called chih; the chih that accords with actuality is called wisdom (chih).” This definition suggests a distinction between intelligence or the ability to know and its achievement or wisdom, often indicated by its homophone. The later Mohists provide more technical definitions, stressing especially the connection between names and objects. Confucians for the most part are interested in the ethical significance of chih. Thus chih, in the Analects of Confucius, is often used as a verb in the sense ‘to realize’, conveying understanding and appreciation of ethical learning, in addition to the use of chih in the sense of acquiring information. And one of the basic problems in Confucian ethics pertains to chih-hsing ho-i (the unity of knowledge and action).

chih 2

Chinese term often translated as ‘will’. It refers to general goals in life as well as to more specific aims and intentions. Chih is supposed to pertain to the heart/mind (hsin) and to be something that can be set up and attained. It is sometimes compared in Chinese philosophical texts to aiming in archery, and is explained by some commentators as “directions of the heart/mind.” Confucians emphasize the need to set up the proper chih to guide one’s behavior and way of life generally, while Taoists advocate letting oneself respond spontaneously to situations one is confronted with, free from direction by chih.

chih-hsing ho-i

Chinese term for the Confucian doctrine, propounded by Wang Yang-ming, of the unity of knowledge and action. The doctrine is sometimes expressed in terms of the unity of moral learning and action. A recent interpretation focuses on the non-contingent connection between prospective and retrospective moral knowledge or achievement. Noteworthy is the role of desire, intention, will, and motive in the mediation of knowledge and action as informed by practical reasonableness in reflection that responds to changing circumstances. Wang’s doctrine is best construed as an attempt to articulate the concrete significance of jen, the Neo-Confucian ideal of the universe as a moral community. A.S.C.

Chinese Legalism

The collective views of the Chinese “school of laws” theorists, so called in recognition of the importance given to strict application of laws in the work of Shang Yang (390-338 B.C.) and his most prominent successor, Han Fei Tzu (d. 223 B.C.). The Legalists were political realists who believed that success in the context of Warring States China (403-221 B.C.) depended on organizing the state into a military camp, and that failure meant nothing less than political extinction. Although they challenged the viability of the Confucian model of ritually constituted community with their call to law and order, they sidestepped the need to dispute the ritual-versus-law positions by claiming that different periods had different problems, and different problems required new and innovative solutions.


Shang Yang believed that the fundamental and complementary occupations of the state, agriculture and warfare, could be prosecuted most successfully by insisting on adherence to clearly articulated laws and by enforcing strict punishments for even minor violations. There was an assumed antagonism between the interests of the individual and the interests of the state. By manipulating rewards and punishments and controlling the “handles of life and death,” the ruler could subjugate his people and bring them into compliance with the national purpose. Law would replace morality and function as the exclusive standard of good. Fastidious application of the law, with severe punishments for infractions, was believed to be a policy that would arrest criminality and quickly make punishment unnecessary.


Given that the law served the state as an objective and impartial standard, the goal was to minimize any reliance upon subjective interpretation. The Legalists thus conceived of the machinery of state as operating automatically on the basis of self-regulating and self-perpetuating “systems.” They advocated techniques of statecraft (shu) such as “accountability” (hsing-ming), the demand for absolute congruency between stipulated duties and actual performance in office, and “doing nothing” (wu-wei), the ruler residing beyond the laws of the state to reformulate them when necessary, but to resist reinterpreting them to accommodate particular cases.


Han Fei Tzu, the last and most influential spokesperson of Legalism, adapted the military precept of strategic advantage (shih) to the rule of government. The ruler, without the prestige and influence of his position, was most often a rather ordinary person. He had a choice: he could rely on his personal attributes and pit his character against the collective strength of his people, or he could tap the collective strength of the empire by using his position and his exclusive power over life and death as a fulcrum to ensure that his will was carried out. What was strategic advantage in warfare became political purchase in the government of the state. Only the ruler with the astuteness and the resolve to hoard and maximize all of the advantages available to him could guarantee continuation in power. Han Fei believed that the closer one was to the seat of power, the greater threat one posed to the ruler. Hence, all nobler virtues and sentiments — benevolence, trust, honor, mercy — were repudiated as means for conspiring ministers and would-be usurpers to undermine the absolute authority of the throne. Survival was dependent upon total and unflagging distrust.

Chinese philosophy


Its history may be divided into six periods:


(1) Pre-Ch’in, before 221 B.C.
Spring and Autumn, 722-481 B.C.
Warring States, 403-222 B.C.
(2) Han, 206 B.C.-A.D. 220
Western (Former) Han, 206 B.C.-A.D. 8
Hsin, A.D. 9-23
Eastern (Later) Han, A.D. 25-220
(3) Wei-Chin, 220-420
Wei, 220-65
Western Chin, 265-317
Eastern Chin, 317-420
(4) Sui-T’ang, 581-907
Sui, 581-618
T’ang, 618-907
Five Dynasties, 907-60
(5) Sung-(Yüan)-Ming, 960-1644
Northern Sung, 960-1126
Southern Sung, 1127-1279
Yüan (Mongol), 1271-1368
Ming, 1368-1644
(6) Ch’ing (Manchu), 1644-1912


In the late Chou dynasty (1111-249 B.C.), before Ch’in (221 — 206 B.C.) unified the country, China entered the so-called Spring and Autumn period and the Warring States period, and Chou culture was in decline. The so-called hundred schools of thought were contending with one another; among them six were philosophically significant:



(a) Ju-chia (Confucianism) represented by Confucius (551 — 479 B.C.), Mencius (371 — 289 B.C.?), and Hsün Tzu (fl. 298 — 238 B.C.)
(b) Tao-chia (Taoism) represented by Lao Tzu (sixth or fourth century B.C.) and Chuang Tzu (between 399 and 295 B.C.)
(c) Mo-chia (Mohism) represented by Mo Tzu (fl. 479 — 438 B.C.)
(d) Ming-chia (Logicians) represented by Hui Shih (380 — 305 B.C.), Kung-sun Lung (b.380 B.C.?)
(e) Yin-yang-chia (Yin-yang school) represented by Tsou Yen (305 — 240 B.C.?)
(f) Fa-chia (Legalism) represented by Han Fei (d. 233 B.C.)


Thus, China enjoyed her first golden period of philosophy in the Pre-Ch’in period. Since most Chinese philosophies of the time arose as responses to pressing existential problems, it is no wonder that Chinese philosophy had a predominantly practical character. It has never developed the purely theoretical attitude characteristic of Greek philosophy.


During the Han dynasty, in 136 B.C., Confucianism was established as the state ideology. But it was blended with ideas of Taoism, Legalism, and the Yin-yang school. An organic view of the universe was developed; creative thinking was replaced by study of the so-called Five Classics: Book of Poetry, Book of History, Book of Changes, Book of Rites, and Spring and Autumn Annals. As the First Emperor of Ch’in had burned the Classics except for the I-Ching, in the early Han scholars were asked to write down the texts they had memorized in modern script. Later some texts in ancient script were discovered, but they were rejected as spurious by modern-script supporters. Hence there were constant disputes between the modern-script school and the ancient-script school.


Wei-Chin scholars were fed up with studies of the Classics in trivial detail. They also showed a tendency to step over the bounds of rites. Their interest turned to something more metaphysical; the Lao Tzu, the Chuang Tzu, and the I-Ching were their favorite readings. Especially influential were Hsiang Hsiu’s (fl. A.D. 250) and Kuo Hsiang’s (d. A.D. 312) Commentaries on the Chuang Tzu, and Wang Pi’s (226 — 49) Commentaries on the Lao Tzu and I-Ching. Although Wang’s perspective was predominantly Taoist, he was the first to brush aside the hsiang-shu (forms and numbers) approach to the study of the I-Ching and concentrate on i-li (meanings and principles) alone. Sung philosophers continued the i-li approach, but they reinterpreted the Classics from a Confucian perspective.


Although Buddhism was imported into China in the late Han period, it took several hundred years for the Chinese to absorb Buddhist insights and ways of thinking. First the Chinese had to rely on ko-i (matching the concepts), using Taoist ideas to transmit Buddhist messages. After the Chinese had learned a great deal from Buddhism by translating Buddhist texts into Chinese, they attempted to develop Chinese versions of Buddhism in the Sui-Tang period. On the whole they favored Mahayana over Hinayana (Theravada) Buddhism, and they developed a much more life-affirming attitude through Hua-yen and T’ien-t’ai Buddhism, which they believed to represent Buddha’s mature thought. Ch’an went even further, seeking sudden enlightenment instead of scripture studies. Ch’an, exported to Japan, has become Zen, a better-known term in the West.


In response to the Buddhist challenge, the Neo-Confucian thinkers gave a totally new interpretation of Confucian philosophy by going back to insights implicit in Confucius’s so-called Four Books: the Analects, the Mencius, The Great Learning, and the Doctrine of the Mean (the latter two were chapters taken from the Book of Rites). They were also fascinated by the I-Ching. They borrowed ideas from Buddhism and Taoism to develop a new Confucian cosmology and moral metaphysics. Sung-Ming Neo-Confucianism brought Chinese philosophy to a new height; some consider the period the Chinese Renaissance. The movement started with Chou Tun-i (1017 — 73), but the real founders of Neo-Confucianism were the Ch’eng brothers: Ch’eng Hao (1032 — 85) and Ch’eng Yi (1033 — 1107). Then came Chu Hsi (1130 — 1200), a great synthesizer often compared with Thomas Aquinas or Kant in the West, who further developed Ch’eng Yi’s ideas into a systematic philosophy and originated the so-called Ch’eng-Chu school. But he was opposed by his younger contemporary Lu Hsiang-shan (1139 — 93). During the Ming dynasty, Wang Yang-ming (1472 — 1529) reacted against Chu Hsi by reviving the insight of Lu Hsiang-shan, hence the so-called Lu-Wang school.


During the Ch’ing dynasty, under the rule of the Manchus, scholars turned to historical scholarship and showed little interest in philosophical speculation. In the late Ch’ing, K’ang Yu-wei (1858 — 1927) revived the modern-script school, pushed for radical reform, but failed miserably in his attempt.



Contemporary Chinese philosophy


Three important trends can be discerned, intertwined with one another: the importation of Western philosophy, the dominance of Marxism on Mainland China, and the development of contemporary New Confucian philosophy. During the early twentieth century China awoke to the fact that traditional Chinese culture could not provide all the means for China to enter into the modern era in competition with the Western powers. Hence the first urgent task was to learn from the West.


Almost all philosophical movements had their exponents, but they were soon totally eclipsed by Marxism, which was established as the official ideology in China after the Communist takeover in 1949. Mao Tse-tung (1893 — 1976) was regarded as continuing the line of Marx, Engels, Lenin, and Stalin. The Communist regime was intolerant of all opposing views. The Cultural Revolution was launched in 1966, and for a whole decade China closed her doors to the outside world. Almost all the intellectuals inside or outside of the Communist party were purged or suppressed. After the Cultural Revolution was over, universities were reopened in 1978. From 1979 to 1989, intellectuals enjoyed unprecedented freedom. One editorial in the People’s Daily said that Marx’s ideas were the product of the nineteenth century and did not provide all the answers for problems of the present time, and that it was therefore desirable to develop Marxism further. This message was interpreted by scholars in different ways. Although the thoughts set forth by scholars lacked depth, the lively atmosphere could be compared to that of the May Fourth New Culture Movement of 1919. Unfortunately, however, the violent suppression of demonstrators in Peking’s Tiananmen Square in 1989 put a stop to all this. Control of ideology became much stricter for the time being, although the doors to the outside world were not completely closed.


As for the Nationalist government, which had fled to Taiwan in 1949, control of ideology under its jurisdiction was never total; liberalism has been strong among the intellectuals on the island. Analytic philosophy, existentialism, and hermeneutics all have their followers; today even radicalism has its attraction for certain young scholars.


Even though mainstream Chinese thought in the twentieth century has condemned the Chinese tradition altogether, that tradition has never completely died out. In fact the most creative talents were found in the contemporary New Confucian movement, which sought to bring about a synthesis between East and West. Among those who stayed on the mainland, Fung Yu-lan (1895 — 1990) and Ho Lin (1902 — 92) changed their earlier views after the Communist takeover, but Liang Sou-ming (1893 — 1988) and Hsiung Shih-li (1885 — 1968) kept some of their beliefs. Ch’ien Mu (1895 — 1990) and Tang Chün-i (1909 — 78) moved to Hong Kong and Thomé H. Fang (1899 — 1976), Hsü Fu-kuan (1903 — 82), and Mou Tsung-san (1909 — 95) moved to Taiwan, where they exerted profound influence on younger scholars. Today contemporary New Confucianism is still a vital intellectual movement in Hong Kong, Taiwan, and overseas; it is even studied in Mainland China. The New Confucians urge a revival of the traditional spirit of jen (humanity) and sheng (creativity); at the same time they turn to the West, arguing for the incorporation of modern science and democracy into Chinese culture.


The New Confucian philosophical movement in the narrower sense derived inspiration from Hsiung Shih-li. Among his disciples the most original thinker is Mou Tsung-san, who has developed his own system of philosophy. He maintains that the three major Chinese traditions — Confucian, Taoist, and Buddhist — agree in asserting that humans have the endowment for intellectual intuition, meaning personal participation in tao (the Way). But the so-called third generation has a much broader scope; it includes scholars with varied backgrounds such as Yu Ying-shih (b. 1930), Liu Shu-hsien (b. 1934), and Tu Wei-ming (b. 1940), whose ideas have impact on intellectuals at large and whose selected writings have recently been allowed to be published on the mainland. The future of Chinese philosophy will still depend on the interactions of imported Western thought, Chinese Marxism, and New Confucianism.

ching

Chinese term meaning ‘reverence’, ‘seriousness’, ‘attentiveness’, ‘composure’. In early texts, ching is the appropriate attitude toward spirits, one’s parents, and the ruler; it was originally interchangeable with another term, kung (respect). Among Neo-Confucians, the terms are distinguished: ching is reserved for the inner state of mind, kung for its outer manifestations. This distinction was part of the Neo-Confucian response to the quietistic goal of meditative calm advocated by many Taoists and Buddhists. Neo-Confucians sought to maintain an imperturbable state of “reverential attentiveness” not only in meditation but throughout all activity. This sense of ching is best understood as a Neo-Confucian appropriation of the Ch’an (Zen) ideal of yi-hsing san-mei (universal samādhi), prominent in texts such as the Platform Sutra. P.J.I.

ch’ing

Chinese term meaning (1) ‘essence’, ‘essential’; (2) ‘emotion’, ‘passions’. Originally, the ch’ing of x was the properties without which x would cease to be the kind of thing that it is. In this sense it contrasts with the nature (hsing) of x: the properties x has if it is a flourishing instance of its kind. By the time of Hsün Tzu, though, ch’ing comes to refer to human emotions or passions. A list of “the six emotions” (liu ch’ing) soon became fairly standard: fondness (hao), dislike (wu), delight (hsi), anger (nu), sadness (ai), and joy (le). B.W.V.N.

Chisholm, Roderick Milton


(1916 - 99)

Influential American philosopher whose publications spanned the field, including ethics and the history of philosophy. He is mainly known as an epistemologist, metaphysician, and philosopher of mind. In early opposition to powerful forms of reductionism, such as phenomenalism, extensionalism, and physicalism, Chisholm developed an original philosophy of his own. Educated at Brown and Harvard (Ph.D., 1942), he spent nearly his entire career at Brown.


He is known chiefly for the following contributions. (a) Together with his teacher and later his colleague at Brown, C.J. Ducasse, he developed and long defended an adverbial account of sensory experience, set against the sense-datum act-object account then dominant. (b) On the basis of a deeply probing analysis of the problem of free will, he defended a libertarian position, again in opposition to the compatibilism long orthodox in analytic circles. His libertarianism involved, moreover, an unusual account of agency, based on distinguishing transeunt (event) causation from immanent (agent) causation. (c) In opposition to the celebrated linguistic turn of linguistic philosophy, he defended the primacy of intentionality, a defense made famous not only through important papers, but also through his extensive and eventually published correspondence with Wilfrid Sellars. (d) Quick to recognize the importance and distinctiveness of the de se, he welcomed it as a basis for much de re thought. (e) His realist ontology is developed through an intentional concept of “entailment,” used to define key concepts of his system, and to provide criteria of identity for occupants of fundamental categories. (f) In epistemology, he famously defended forms of foundationalism and internalism, and offered a delicately argued (dis)solution of the ancient problem of the criterion.


The principles of Chisholm’s epistemology and metaphysics are not laid down antecedently as hard-and-fast axioms. Lacking any inviolable antecedent privilege, they must pass muster in the light of their consequences and by comparison with whatever else we may find plausible. In this regard he sharply contrasts with such epistemologists as Popper, with the skepticism of justification attendant on his deductivism, and Quine, whose thoroughgoing naturalism drives so much of his radical epistemology and metaphysics. By contrast, Chisholm has no antecedently set epistemic or metaphysical principles. His philosophical views develop rather dialectically, with sensitivity to whatever considerations, examples, or counterexamples reflection may reveal as relevant. This makes for a demanding complexity of elaboration, relieved, however, by a powerful drive for ontological and conceptual economy.

choice sequence

A variety of infinite sequence introduced by L.E.J. Brouwer to express the non-classical properties of the continuum (the set of real numbers) within intuitionism. A choice sequence is determined by a finite initial segment together with a “rule” for continuing the sequence. The rule, however, may allow some freedom in choosing each subsequent element. Thus the sequence might start with the rational numbers 0 and then ½, and the rule might require the n + 1st element to be some rational number within (½)^n of the nth choice, without any further restriction. The sequence of rationals thus generated must converge to a real number, r. But r’s definition leaves open its exact location in the continuum. Speaking intuitionistically, r violates the classical law of trichotomy: given any pair of real numbers (e.g., r and ½), the first is either less than, equal to, or greater than the second.
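The convergence constraint in this construction can be simulated directly; the following Python sketch uses random choices merely to stand in for the free choices of the idealized mathematician (the function name and sampling scheme are illustrative, not Brouwer's):

```python
from fractions import Fraction
import random

def choice_sequence(steps, rng=random.Random(0)):
    """One possible choice sequence obeying the rule from the text:
    start with 0 and 1/2, then each new element must lie within
    (1/2)^n of the nth choice (1-based positions)."""
    seq = [Fraction(0), Fraction(1, 2)]
    for k in range(2, steps):
        # previous element sits at 1-based position k, so the bound is (1/2)^k
        bound = Fraction(1, 2 ** k)
        # a "freely chosen" rational within the bound (simulated randomly)
        delta = Fraction(rng.randint(-100, 100), 100) * bound
        seq.append(seq[-1] + delta)
    return seq

seq = choice_sequence(20)
# successive gaps shrink geometrically, so the sequence is Cauchy and
# converges to some real r -- whose exact location remains open
for k in range(2, len(seq)):
    assert abs(seq[k] - seq[k - 1]) <= Fraction(1, 2 ** k)
```

Because the bounds sum to a finite tail, any two ways of continuing the choices stay close together, which is why the sequence determines a real number without determining its exact position.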


From the 1940s Brouwer got this non-classical effect without appealing to the apparently nonmathematical notion of free choice. Instead he used sequences generated by the activity of an idealized mathematician (the creating subject), together with propositions that he took to be undecided. Given such a proposition, P — e.g., Fermat’s last theorem (that for n > 2 there are no positive integers x, y, and z such that x^n + y^n = z^n) or Goldbach’s conjecture (that every even number greater than 2 is the sum of two prime numbers) — we can modify the definition of r: the n + 1st element is ½ if at the nth stage of research P remains undecided. That element and all its successors are ½ + (½)^n if by that stage P is proved; they are ½ − (½)^n if P is refuted. Since he held that there is an endless supply of such propositions, Brouwer believed that we can always use this method to refute classical laws.
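The creating-subject construction can likewise be sketched. Here `status` is a hypothetical oracle reporting the state of research on P at a given stage; the function below is a simplified rendering of the definition above, not Brouwer's own formalism:

```python
from fractions import Fraction

def element(n, status):
    """The n+1st element of the modified sequence: 1/2 while P is still
    undecided, and permanently shifted by (1/2)^m once P is settled at
    stage m, as in the definition above."""
    half = Fraction(1, 2)
    # find the earliest stage m <= n (if any) at which P was settled
    for m in range(1, n + 1):
        s = status(m)
        if s != "open":
            shift = Fraction(1, 2 ** m)
            return half + shift if s == "proved" else half - shift
    return half

# while P stays undecided, every element equals 1/2 exactly ...
always_open = lambda n: "open"
undecided = [element(n, always_open) for n in range(1, 10)]

# ... but if P were proved at stage 5, all later elements would sit at 1/2 + 1/32
proved_at_5 = lambda n: "proved" if n >= 5 else "open"
decided = [element(n, proved_at_5) for n in range(1, 10)]
```

So long as P is genuinely open, one can assert neither r = ½ nor r ≠ ½, which is exactly how the construction defeats classical trichotomy.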


In the early 1960s Stephen Kleene and Richard Vesley reproduced some main parts of Brouwer’s theory of the continuum in a formal system based on Kleene’s earlier recursion-theoretic interpretation of intuitionism and of choice sequences. At about the same time — but in a different and occasionally incompatible vein — Saul Kripke formally captured the power of Brouwer’s counterexamples without recourse to recursive functions and without invoking either the creating subject or the notion of free choice. Subsequently Georg Kreisel, A.N. Troelstra, Dirk Van Dalen, and others produced formal systems that analyze Brouwer’s basic assumptions about open-futured objects like choice sequences.

Chomsky, Noam


(1928 - )

Preeminent American linguist, philosopher, and political activist who has spent his professional career at the Massachusetts Institute of Technology. Chomsky’s best-known scientific achievement is the establishment of a rigorous and philosophically compelling foundation for the scientific study of the grammar of natural language. With the use of tools from the study of formal languages, he gave a far more precise and explanatory account of natural language grammar than had previously been given (Syntactic Structures, 1957). He has since developed a number of highly influential frameworks for the study of natural language grammar (e.g., Aspects of the Theory of Syntax, 1965; Lectures on Government and Binding, 1981; The Minimalist Program, 1995). Though there are significant differences in detail, there are also common themes that underlie these approaches. Perhaps the most central is that there is an innate set of linguistic principles shared by all humans, and the purpose of linguistic inquiry is to describe the initial state of the language learner, and account for linguistic variation via the most general possible mechanisms.


On Chomsky’s conception of linguistics, languages are structures in the brains of individual speakers, described at a certain level of abstraction within the theory. These structures occur within the language faculty, a hypothesized module of the human brain. Universal Grammar is the set of principles hard-wired into the language faculty that determine the class of possible human languages. This conception of linguistics involves several influential and controversial theses. First, the hypothesis of a Universal Grammar entails the existence of innate linguistic principles. Secondly, the hypothesis of a language faculty entails that our linguistic abilities, at least so far as grammar is concerned, are not a product of general reasoning processes. Finally, and perhaps most controversially, since having one of these structures is an intrinsic property of a speaker, properties of languages so conceived are determined solely by states of the speaker. On this individualistic conception of language, there is no room in scientific linguistics for the social entities determined by linguistic communities that are languages according to previous anthropological conceptions of the discipline.


Many of Chomsky’s most significant contributions to philosophy, such as his influential rejection of behaviorism (“Review of Skinner’s Verbal Behavior,” Language, 1959), stem from his elaborations and defenses of the above consequences (cf. also Cartesian Linguistics, 1966; Reflections on Language, 1975; Rules and Representations, 1980; Knowledge of Language, 1986). Chomsky’s philosophical writings are characterized by an adherence to methodological naturalism, the view that the mind should be studied like any other natural phenomenon. In recent years, he has also argued that reference, in the sense in which it is used in the philosophy of language, plays no role in a scientific theory of language (“Language and Nature,” Mind, 1995).

Chou Tun-yi


(1017 - 1073)

Chinese Neo-Confucian philosopher. His most important work, the T’aichi t’u-shuo (“Explanations of the Diagram of the Supreme Ultimate”), consists of a chart, depicting the constituents, structure, and evolutionary process of the cosmos, along with an explanatory commentary. This work, together with his T’ungshu (“Penetrating the I-Ching”), introduced many of the fundamental ideas of Neo-Confucian metaphysics. Consequently, heated debates arose concerning Chou’s diagram, some claiming it described the universe as arising out of wu (non-being) and thus was inspired by and supported Taoism. Chou’s primary interest was always cosmological; he never systematically related his metaphysics to ethical concerns.

ch’üan

Chinese term for a key Confucian concept that may be rendered as ‘weighing of circumstances’, ‘exigency’, or ‘moral discretion’. A metaphorical extension of the basic sense of a steelyard for measuring weight, ch’üan essentially pertains to assessment of the importance of moral considerations to a current matter of concern. Alternatively, the exercise of ch’üan consists in a judgment of the comparative importance of competing options answering to a current problematic situation. The judgment must accord with li (principle, reason), i.e., be a principled or reasoned judgment. In the sense of exigency, ch’üan is a hard case, i.e., one falling outside the normal scope of the operation of standards of conduct. In the sense of ‘moral discretion’, ch’üan must conform to the requirement of i (rightness).

Chuang Tzu also called Chuang Chou (4th century B.C.)

Chinese Taoist philosopher. According to many scholars, ideas in the inner chapters (chapters 1 to 7) of the text Chuang Tzu may be ascribed to the person Chuang Tzu, while the other chapters contain ideas related to his thought and later developments of his ideas. The inner chapters contain dialogues, stories, verses, sayings, and brief essays geared toward inducing an altered perspective on life. A realization that there is no neutral ground for adjudicating between opposing judgments made from different perspectives is supposed to lead to a relaxation of the importance one attaches to such judgments and to such distinctions as those between right and wrong, life and death, and self and others. The way of life advocated is subject to different interpretations. Parts of the text seem to advocate a way of life not radically different from the conventional one, though with a lessened emotional involvement. Other parts seem to advocate a more radical change; one is supposed to react spontaneously to situations one is confronted with, with no preconceived goals or preconceptions of what is right or proper, and to view all occurrences, including changes in oneself, as part of the transformation process of the natural order.

Chu Hsi


(1130 - 1200)

Neo-Confucian scholar of the Sung dynasty (960-1279), commonly regarded as the greatest Chinese philosopher after Confucius and Mencius. His mentor was Ch’eng Yi (1033-1107), hence the so-called Ch’eng-Chu School. Chu Hsi developed Ch’eng Yi’s ideas into a comprehensive metaphysics of li (principle) and ch’i (material force). Li is incorporeal, one, eternal, and unchanging, always good; ch’i is physical, many, transitory, and changeable, involving both good and evil. They are not to be mixed or separated. Things are composed of both li and ch’i. Chu identifies hsing (human nature) as li, ch’ing (feelings and emotions) as ch’i, and hsin (mind/heart) as ch’i of the subtlest kind, comprising principles. He interprets ko-wu in the Great Learning to mean the investigation of principles inherent in things, and chih-chih to mean the extension of knowledge. He was opposed by Lu Hsiang-shan (1139-93) and Wang Yang-ming (1472-1529), who argued that mind is principle. Mou Tsung-san thinks that Lu’s and Wang’s position was closer to Mencius’s philosophy, which was honored as orthodoxy. But Ch’eng and Chu’s commentaries on the Four Books were used as the basis for civil service examinations from 1313 until the system was abolished in 1905.

chung, shu

Chinese philosophical terms important in Confucianism, meaning ‘loyalty’ or ‘commitment’, and ‘consideration’ or ‘reciprocity’, respectively. In the Analects, Confucius observes that there is one thread running through his way of life, and a disciple describes the one thread as constituted by chung and shu. Shu is explained in the text as not doing to another what one would not have wished done to oneself, but chung is not explicitly explained. Scholars interpret chung variously as a commitment to having one’s behavior guided by shu, as a commitment to observing the norms of li (rites) (to be supplemented by shu, which humanizes and adds a flexibility to the observance of such norms), or as a strictness in observing one’s duties toward superiors or equals (to be supplemented by shu, which involves considerateness toward inferiors or equals, thereby humanizing and adding a flexibility to the application of rules governing one’s treatment of them). The pair of terms continued to be used by later Confucians to refer to supplementary aspects of the ethical ideal or self-cultivation process; e.g., some used chung to refer to a full manifestation of one’s originally good heart/mind (hsin), and shu to refer to the extension of that heart/mind to others.

Chung-yung

A portion of the Chinese Confucian classic Book of Rites. The standard English title of the Chung-yung (composed in the third or second century B.C.) is The Doctrine of the Mean, but Centrality and Commonality is more accurate. Although frequently treated as an independent classic from quite early in its history, it did not receive canonical status until Chu Hsi made it one of the Four Books. The text is a collection of aphorisms and short essays unified by common themes. Portions of the text outline a virtue ethic, stressing flexible response to changing contexts, and identifying human flourishing with complete development of the capacities present in one’s nature (hsing), which is given by Heaven (t’ien). As is typical of Confucianism, virtue in the family parallels political virtue.

chün-tzu

Chinese term meaning ‘gentleman’, ‘superior man’, ‘noble person’, or ‘exemplary individual’. Chün-tzu is Confucius’s practically attainable ideal of ethical excellence. A chün-tzu, unlike a sheng (sage), is one who exemplifies in his life and conduct a concern for jen (humanity), li (propriety), and i (rightness/righteousness). Jen pertains to affectionate regard to the well-being of one’s fellows in the community; li to ritual propriety conformable to traditional rules of proper behavior; and i to one’s sense of rightness, especially in dealing with changing circumstances. A chün-tzu is marked by a catholic and neutral attitude toward preconceived moral opinions and established moral practices, a concern with harmony of words and deeds. These salient features enable the chün-tzu to cope with novel and exigent circumstances, while at the same time heeding the importance of moral tradition as a guide to conduct. A.S.C.

Church, Alonzo


(1903 - 1995)

American logician, mathematician, and philosopher, known in pure logic for his discovery and application of the Church lambda operator, one of the central ideas of the Church lambda calculus, and for his rigorous formalizations of the theory of types, a higher-order underlying logic originally formulated in a flawed form by Whitehead and Russell. The lambda operator enables direct, unambiguous, symbolic representation of a range of philosophically and mathematically important expressions previously representable only ambiguously or after elaborate paraphrasing. In philosophy, Church advocated rigorous analytic methods based on symbolic logic. His philosophy was characterized by his own version of logicism, the view that mathematics is reducible to logic, and by his unhesitating acceptance of higher-order logics. Higher-order logics, including second-order, are ontologically rich systems that involve quantification of higher-order variables, variables that range over properties, relations, and so on. Higher-order logics were routinely used in foundational work by Frege, Peano, Hilbert, Gödel, Tarski, and others until around World War II, when they suddenly lost favor. In regard to both his logicism and his acceptance of higher-order logics, Church countered trends, increasingly dominant in the third quarter of the twentieth century, against reduction of mathematics to logic and against the so-called “ontological excesses” of higher-order logic. In the 1970s, although admired for his high standards of rigor and for his achievements, Church was regarded as conservative or perhaps even reactionary. Opinions have softened in recent years.
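Church’s lambda notation survives directly in modern programming languages; as a rough illustration (Python’s `lambda` keyword is named for it), λx. x·x and a higher-order application can be written:

```python
# λx. x*x -- a function denoted directly and unambiguously, without naming it
square = lambda x: x * x

# λf. λx. f(f(x)) -- the operator applies just as well to functions,
# the kind of higher-order use that Church's theory of types regiments
twice = lambda f: lambda x: f(f(x))

assert square(3) == 9
assert twice(square)(3) == 81
```

The point of the notation is precisely this: an expression like `twice(square)` denotes a definite function without any paraphrase such as “the function obtained by applying squaring twice.”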


On the computational and epistemological sides of logic Church made two major contributions. He was the first to articulate the now widely accepted principle known as Church’s thesis, that every effectively calculable arithmetic function is recursive. At first highly controversial, this principle connects intuitive, epistemic, extrinsic, and operational aspects of arithmetic with its formal, ontic, intrinsic, and abstract aspects. Church’s thesis sets a purely arithmetic outer limit on what is computationally achievable. Church’s further work on Hilbert’s “decision problem” led to the discovery and proof of Church’s theorem — basically that there is no computational procedure for determining, of a finite-premised first-order argument, whether it is valid or invalid. This result contrasts sharply with the previously known result that the computational truth-table method suffices to determine the validity of a finite-premised truth-functional argument. Church’s theorem at once highlights the vast difference between propositional logic and first-order logic and sets an outer limit on what is achievable by “automated reasoning.”
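The contrast drawn here can be made concrete: the truth-table method really is a computational procedure for truth-functional validity, since it need only check finitely many rows. A minimal sketch (the function name and example argument forms are illustrative):

```python
from itertools import product

def tt_valid(premises, conclusion, atoms):
    """Truth-table test: an argument is valid iff every assignment of
    truth-values making all premises true also makes the conclusion true.
    Formulas are represented as functions from an assignment dict to bool."""
    for values in product([False, True], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False  # found a counterexample row
    return True

# modus ponens: P, P -> Q, therefore Q  (valid)
valid = tt_valid(
    premises=[lambda v: v["P"], lambda v: (not v["P"]) or v["Q"]],
    conclusion=lambda v: v["Q"],
    atoms=["P", "Q"],
)

# affirming the consequent: Q, P -> Q, therefore P  (invalid)
invalid = tt_valid(
    premises=[lambda v: v["Q"], lambda v: (not v["P"]) or v["Q"]],
    conclusion=lambda v: v["P"],
    atoms=["P", "Q"],
)
```

Church’s theorem says that no analogous terminating procedure exists for first-order arguments: once quantifiers over an infinite domain enter, there is no finite set of “rows” to survey.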


Church’s mathematical and philosophical writings are influenced by Frege, especially by Frege’s semantic distinction between sense and reference, his emphasis on purely syntactical treatment of proof, and his doctrine that sentences denote (are names of) their truth-values.

Churchland, Patricia Smith


(1943 - )

Canadian-born American philosopher and advocate of neurophilosophy. She received her B.Phil. from Oxford in 1969 and held positions at the University of Manitoba and the Institute for Advanced Studies at Princeton, settling at the University of California, San Diego, with appointments in philosophy and the Institute for Neural Computation.


Skeptical of philosophy’s a priori specification of mental categories and dissatisfied with computational psychology’s purely top-down approach to their function, Churchland began studying the brain at the University of Manitoba medical school. The result was a unique merger of science and philosophy, a “neurophilosophy” that challenged the prevailing methodology of mind. Thus, in a series of articles that includes “Fodor on Language Learning” (1978) and “A Perspective on Mind-Brain Research” (1980), she outlines a new neurobiologically based paradigm. It subsumes simple non-linguistic structures and organisms, since the brain is an evolved organ; but it preserves functionalism, since a cognitive system’s mental states are explained via high-level neurofunctional theories. It is a strategy of cooperation between psychology and neuroscience, a “co-evolutionary” process eloquently described in Neurophilosophy (1986) with the prediction that genuine cognitive phenomena will be reduced, some as conceptualized within the commonsense framework, others as transformed through the sciences.


The same intellectual confluence is displayed through Churchland’s various collaborations: with psychologist and computational neurobiologist Terrence Sejnowski in The Computational Brain (1992); with neuroscientist Rodolfo Llinas in The Mind-Brain Continuum (1996); and with philosopher and husband Paul Churchland in On the Contrary (1998) (she and Paul Churchland are jointly appraised in R. McCauley, The Churchlands and Their Critics, 1996). From the viewpoint of neurophilosophy, interdisciplinary cooperation is essential for advancing knowledge, for the truth lies in the intertheoretic details.

Churchland, Paul M.


(1942 - )

Canadian-born American philosopher, leading proponent of eliminative materialism. He received his Ph.D. from the University of Pittsburgh in 1969 and held positions at the Universities of Toronto, Manitoba, and the Institute for Advanced Studies at Princeton. He is professor of philosophy and member of the Institute for Neural Computation at the University of California, San Diego.


Churchland’s literary corpus constitutes a lucidly written, scientifically informed narrative where his neurocomputational philosophy unfolds. Scientific Realism and the Plasticity of Mind (1979) maintains that, though science is best construed realistically, perception is conceptually driven, with no observational given, while language is holistic, with meaning fixed by networks of associated usage. Moreover, regarding the structure of science, higher-level theories should be reduced by, incorporated into, or eliminated in favor of more basic theories from natural science, and, in the specific case, commonsense psychology is a largely false empirical theory, to be replaced by a non-sentential, neuroscientific framework. This skepticism regarding “sentential” approaches is a common thread, present in earlier papers, and taken up again in “Eliminative Materialism and the Propositional Attitudes” (1981).


When fully developed, the non-sentential, neuroscientific framework takes the form of connectionist network or parallel distributed processing models. Thus, with essays in A Neurocomputational Perspective (1989), Churchland adds that genuine psychological processes are sequences of activation patterns over neuronal networks. Scientific theories, likewise, are learned vectors in the space of possible activation patterns, with scientific explanation being prototypical activation of a preferred vector. Classical epistemology, too, should be neurocomputationally naturalized. Indeed, Churchland suggests a semantic view whereby synonymy, or the sharing of concepts, is a similarity between patterns in neuronal state-space. Even moral knowledge is analyzed as stored prototypes of social reality that are elicited when an individual navigates through other neurocomputational systems. The entire picture is expressed in The Engine of Reason, the Seat of the Soul (1996) and, with his wife Patricia Churchland, by the essays in On the Contrary (1998). What has emerged is a neurocomputational embodiment of the naturalist program, a panphilosophy that promises to capture science, epistemology, language, and morals in one broad sweep of its connectionist net.

Church’s thesis

The thesis, proposed by Alonzo Church at a meeting of the American Mathematical Society in April 1935, “that the notion of an effectively calculable function of positive integers should be identified with that of a recursive function….” This proposal has been called Church’s thesis ever since Kleene used that name in his Introduction to Metamathematics (1952). The informal notion of an effectively calculable function (effective procedure, or algorithm) had been used in mathematics and logic to indicate that a class of problems is solvable in a “mechanical fashion” by following fixed elementary rules. Underlying epistemological concerns came to the fore when modern logic moved in the late nineteenth century from axiomatic to formal presentations of theories. Hilbert suggested in 1904 that such formally presented theories be taken as objects of mathematical study, and metamathematics has been pursued vigorously and systematically since the 1920s. In its pursuit, concrete issues arose that required for their resolution a delimitation of the class of effective procedures. Hilbert’s important Entscheidungsproblem, the decision problem for predicate logic, was one such issue. It was solved negatively by Church and Turing—relative to the precise notion of recursiveness; the result was obtained independently by Church and Turing, but is usually called Church’s theorem. A second significant issue was the general formulation of the incompleteness theorems as applying to all formal theories (satisfying the usual representability and derivability conditions), not just to specific formal systems like that of Principia Mathematica.


According to Kleene, Church proposed in 1933 the identification of effective calculability with λ-definability. That proposal was not published at the time, but in 1934 Church mentioned it in conversation to Gödel, who judged it to be “thoroughly unsatisfactory.” In his Princeton Lectures of 1934, Gödel defined the concept of a recursive function, but he was not convinced that all effectively calculable functions would fall under it. The proof of the equivalence between λ-definability and recursiveness (by Church and Kleene) led to Church’s first published formulation of the thesis as quoted above. The thesis was reiterated in Church’s “An Unsolvable Problem of Elementary Number Theory” (1936). Turing introduced, in “On Computable Numbers, with an Application to the Entscheidungsproblem” (1936), a notion of computability by machines and maintained that it captures effective calculability exactly. Post’s paper “Finite Combinatory Processes, Formulation 1” (1936) contains a model of computation that is strikingly similar to Turing’s. However, Post did not provide any analysis; he suggested considering the identification of effective calculability with his concept as a working hypothesis that should be verified by investigating ever wider formulations and reducing them to his basic formulation. (The classic papers of Gödel, Church, Turing, Post, and Kleene are all reprinted in Davis, ed., The Undecidable, 1965.)


In his 1936 paper Church gave one central reason for the proposed identification, namely that other plausible explications of the informal notion lead to mathematical concepts weaker than or equivalent to recursiveness. Two paradigmatic explications, calculability of a function via algorithms or in a logic, were considered by Church. In either case, the steps taken in determining function values have to be effective; and if the effectiveness of steps is, as Church put it, interpreted to mean recursiveness, then the function is recursive. The fundamental interpretative difficulty in Church’s “step-by-step argument” (which was turned into one of the “recursiveness conditions” Hilbert and Bernays used in their 1939 characterization of functions that can be evaluated according to rules) was bypassed by Turing. Analyzing human mechanical computations, Turing was led to finiteness conditions that are motivated by the human computer’s sensory limitations, but are ultimately based on memory limitations. Then he showed that any function calculable by a human computer satisfying these conditions is also computable by one of his machines. Both Church and Gödel found Turing’s analysis convincing; indeed, Church wrote in a 1937 review of Turing’s paper that Turing’s notion makes “the identification with effectiveness in the ordinary (not explicitly defined) sense evident immediately.”


This reflective work of partly philosophical and partly mathematical character provides one of the fundamental notions in mathematical logic. Indeed, its proper understanding is crucial for (judging) the philosophical significance of central metamathematical results—like Gödel’s incompleteness theorems or Church’s theorem. The work is also crucial for computer science, artificial intelligence, and cognitive psychology, providing in these fields a basic theoretical notion. For example, Church’s thesis is the cornerstone for Newell and Simon’s delimitation of the class of physical symbol systems, i.e. universal machines with a particular architecture; see Newell’s Physical Symbol Systems (1980). Newell views the delimitation “as the most fundamental contribution of artificial intelligence and computer science to the joint enterprise of cognitive science.” In a turn that had been taken by Turing in “Intelligent Machinery” (1948) and “Computing Machinery and Intelligence” (1950), Newell points out the basic role physical symbol systems take on in the study of the human mind: “the hypothesis is that humans are instances of physical symbol systems, and, by virtue of this, mind enters into the physical universe…. this hypothesis sets the terms on which we search for a scientific theory of mind.”

Cicero, Marcus Tullius


(106 - 43 B.C.)

Roman statesman, orator, essayist, and letter writer. He was important not so much for formulating individual philosophical arguments as for expositions of the doctrines of the major schools of Hellenistic philosophy, and for, as he put it, “teaching philosophy to speak Latin.” The significance of the latter can hardly be overestimated. Cicero’s coinages helped shape the philosophical vocabulary of the Latin-speaking West well into the early modern period.


The most characteristic feature of Cicero’s thought is his attempt to unify philosophy and rhetoric. His first major trilogy, On the Orator, On the Republic, and On the Laws, presents a vision of wise statesmen-philosophers whose greatest achievement is guiding political affairs through rhetorical persuasion rather than violence. Philosophy, Cicero argues, needs rhetoric to effect its most important practical goals, while rhetoric is useless without the psychological, moral, and logical justification provided by philosophy. This combination of eloquence and philosophy constitutes what he calls humanitas—a coinage whose enduring influence is attested in later revivals of humanism—and it alone provides the foundation for constitutional governments; it is acquired, moreover, only through broad training in those subjects worthy of free citizens (artes liberales). In philosophy of education, this Ciceronian conception of a humane education encompassing poetry, rhetoric, history, morals, and politics endured as an ideal, especially for those convinced that instruction in the liberal disciplines is essential for citizens if their rational autonomy is to be expressed in ways that are culturally and politically beneficial.


A major aim of Cicero’s earlier works is to appropriate for Roman high culture one of Greece’s most distinctive products, philosophical theory, and to demonstrate Roman superiority. He thus insists that Rome’s laws and political institutions successfully embody the best in Greek political theory, whereas the Greeks themselves were inadequate to the crucial task of putting their theories into practice. Taking over the Stoic conception of the universe as a rational whole, governed by divine reason, he argues that human societies must be grounded in natural law. For Cicero, nature’s law possesses the characteristics of a legal code; in particular, it is formulable in a comparatively extended set of rules against which existing societal institutions can be measured. Indeed, since they so closely mirror the requirements of nature, Roman laws and institutions furnish a nearly perfect paradigm for human societies. Cicero’s overall theory, if not its particular details, established a lasting framework for anti-positivist theories of law and morality, including those of Aquinas, Grotius, Suárez, and Locke.


The final two years of his life saw the creation of a series of dialogue-treatises that provide an encyclopedic survey of Hellenistic philosophy. Cicero himself follows the moderate fallibilism of Philo of Larissa and the New Academy. Holding that philosophy is a method and not a set of dogmas, he endorses an attitude of systematic doubt. However, unlike Cartesian doubt, Cicero’s does not extend to the real world behind phenomena, since he does not envision the possibility of strict phenomenalism. Nor does he believe that systematic doubt leads to radical skepticism about knowledge. Although no infallible criterion for distinguishing true from false impressions is available, some impressions, he argues, are more “persuasive” (probabile) and can be relied on to guide action.


In Academics he offers detailed accounts of Hellenistic epistemological debates, steering a middle course between dogmatism and radical skepticism. A similar strategy governs the rest of his later writings. Cicero presents the views of the major schools, submits them to criticism, and tentatively supports any positions he finds “persuasive.” Three connected works, On Divination, On Fate, and On the Nature of the Gods, survey Epicurean, Stoic, and Academic arguments about theology and natural philosophy. Much of the treatment of religious thought and practice is cool, witty, and skeptically detached—much in the manner of eighteenth-century philosophes who, along with Hume, found much in Cicero to emulate. However, he concedes that Stoic arguments for providence are “persuasive.” So too in ethics, he criticizes Epicurean, Stoic, and Peripatetic doctrines in On Ends (45) and their views on death, pain, irrational emotions, and happiness in Tusculan Disputations (45). Yet, a final work, On Duties, offers a practical ethical system based on Stoic principles. Although sometimes dismissed as the eclecticism of an amateur, Cicero’s method of selectively choosing from what had become authoritative professional systems often displays considerable reflectiveness and originality.

circular reasoning

Reasoning that, when traced backward from its conclusion, returns to that starting point, as one returns to a starting point when tracing a circle. The discussion of this topic by Richard Whately (1787-1863) in his Logic (1826) sets a high standard of clarity and penetration. Logic textbooks often quote the following example from Whately:


To allow every man an unbounded freedom of speech must always be, on the whole, advantageous to the State; for it is highly conducive to the interests of the Community, that each individual should enjoy a liberty perfectly unlimited, of expressing his sentiments.


This passage illustrates how circular reasoning is less obvious in a language, such as English, that, in Whately’s words, is “abounding in synonymous expressions, which have no resemblance in sound, and no connection in etymology.” The premise and conclusion do not consist of just the same words in the same order, nor can logical or grammatical principles transform one into the other. Rather, they have the same propositional content: they say the same thing in different words. That is why appealing to one of them to provide reason for believing the other amounts to giving something as a reason for itself.


Circular reasoning is often said to beg the question. ‘Begging the question’ and petitio principii are translations of a phrase in Aristotle connected with a game of formal disputation played in antiquity but not in recent times. The meanings of ‘question’ and ‘begging’ do not in any clear way determine the meaning of ‘question begging’.


There is no simple argument form that all and only circular arguments have. It is not logic, in Whatley’s example above, that determines the identity of content between the premise and the conclusion. Some theorists propose rather more complicated formal or syntactic accounts of circularity. Others believe that any account of circular reasoning must refer to the beliefs of those who reason. Whether or not the following argument about articles in this dictionary is circular depends on why the first premise should be accepted:



1. The article on inference contains no split infinitives.
2. The other articles contain no split infinitives. Therefore,
3. No article contains split infinitives.


Consider two cases. Case I: Although (2) supports (1) inductively, both (1) and (2) have solid outside support independent of any prior acceptance of (3). This reasoning is not circular. Case II: Someone who advances the argument accepts (1) or (2) or both, only because he believes (3). Such reasoning is circular, even though neither premise expresses just the same proposition as the conclusion. The question remains controversial whether, in explaining circularity, we should refer to the beliefs of individual reasoners or only to the surrounding circumstances.
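The difference between the two cases can be pictured, on the belief-based account, as a difference in the shape of a "support graph" whose edges run from each claim to the claims accepted as reasons for it. The sketch below (illustrative only; the graph encoding and function name are chosen here, not drawn from the entry) shows Case II as a cycle reachable from the conclusion, and Case I as cycle-free:

```python
def has_support_cycle(graph, start):
    """Depth-first search for a cycle reachable from `start`.
    `graph` maps each claim to the list of claims offered as its reasons."""
    visited, path = set(), set()

    def visit(node):
        if node in path:       # reached a claim already on the current path
            return True
        if node in visited:
            return False
        visited.add(node)
        path.add(node)
        if any(visit(reason) for reason in graph.get(node, [])):
            return True
        path.remove(node)
        return False

    return visit(start)


# Case I: premises (1) and (2) rest on outside evidence -- no circle.
case_one = {"3": ["1", "2"], "1": ["outside"], "2": ["outside"]}
# Case II: (1) is accepted only because (3) is -- the support circles back.
case_two = {"3": ["1", "2"], "1": ["3"], "2": ["outside"]}

assert not has_support_cycle(case_one, "3")
assert has_support_cycle(case_two, "3")
```

The sketch also makes plain why circularity, so understood, is a property of a reasoner's belief structure rather than of any argument form: the same three sentences appear in both graphs, and only the direction of support distinguishes them.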


One purpose of reasoning is to increase the degree of reasonable confidence that one has in the truth of a conclusion. Presuming the truth of a conclusion in support of a premise thwarts this purpose, because the initial degree of reasonable confidence in the premise cannot then exceed the initial degree of reasonable confidence in the conclusion.

citta-mātra

The Yogācāra Buddhist doctrine that there are no extramental entities, given classical expression by Vasubandhu in the fourth or fifth century A.D. The classical form of this doctrine is a variety of idealism that claims (1) that a coherent explanation of the facts of experience can be provided without appeal to anything extramental; (2) that no coherent account of what extramental entities are like is possible; and (3) that therefore the doctrine that there is nothing but mind is to be preferred to its realistic competitors. The claim and the argument were and are controversial among Buddhist metaphysicians.

civil disobedience

A deliberate violation of the law, committed in order to draw attention to or rectify perceived injustices in the law or policies of a state. Illustrative questions raised by the topic include: how are such acts justified, how should the legal system respond to such acts when justified, and must such acts be done publicly, nonviolently, and/or with a willingness to accept attendant legal sanctions?

Clarke, Samuel


(1675 - 1729)

English philosopher, preacher, and theologian. Born in Norwich, he was educated at Cambridge, where he came under the influence of Newton. Upon graduation Clarke entered the established church, serving for a time as chaplain to Queen Anne. He spent the last twenty years of his life as rector of St. James, Westminster.


Clarke wrote extensively on controversial theological and philosophical issues — the nature of space and time, proofs of the existence of God, the doctrine of the Trinity, the incorporeality and natural immortality of the soul, freedom of the will, the nature of morality, etc. His most philosophical works are his Boyle lectures of 1704 and 1705, in which he developed a forceful version of the cosmological argument for the existence and nature of God and attacked the views of Hobbes, Spinoza, and some proponents of deism; his correspondence with Leibniz (1715-16), in which he defended Newton’s views of space and time and charged Leibniz with holding views inconsistent with free will; and his writings against Anthony Collins, in which he defended a libertarian view of the agent as the undetermined cause of free actions and attacked Collins’s arguments for a materialistic view of the mind. In these works Clarke maintains a position of extreme rationalism, contending that the existence and nature of God can be conclusively demonstrated, that the basic principles of morality are necessarily true and immediately knowable, and that the existence of a future state of rewards and punishments is assured by our knowledge that God will reward the morally just and punish the morally wicked.

class

Term sometimes used as a synonym for ‘set’. When the two are distinguished, a class is understood as a collection in the logical sense, i.e., as the extension of a concept (e.g. the class of red objects). By contrast, sets, i.e., collections in the mathematical sense, are understood as occurring in stages, where each stage consists of the sets that can be formed from the non-sets and the sets already formed at previous stages. When a set is formed at a given stage, only the non-sets and the previously formed sets are even candidates for membership, but absolutely anything can gain membership in a class simply by falling under the appropriate concept. Thus, it is classes, not sets, that figure in the inconsistent principle of unlimited comprehension. In set theory, proper classes are collections of sets that are never formed at any stage, e.g., the class of all sets (since new sets are formed at each stage, there is no stage at which all sets are available to be collected into a set).

classical republicanism (also known as civic humanism)

A political outlook developed by Machiavelli in Renaissance Italy and by James Harrington (1611-77) in seventeenth-century England, modified by eighteenth-century British and Continental writers and important for the thought of the American founding fathers.


Drawing on Roman historians, Machiavelli argued that a state could hope for security from the blows of fortune only if its (male) citizens were devoted to its well-being. They should take turns ruling and being ruled, be always prepared to fight for the republic, and limit their private possessions. Such men would possess a wholly secular virtù appropriate to political beings. Corruption, in the form of excessive attachment to private interest, would then be the most serious threat to the republic. Harrington’s utopian Oceana (1656) portrayed England governed under such a system. Opposing the authoritarian views of Hobbes, it described a system in which the well-to-do male citizens would elect some of their number to govern for limited terms. Those governing would propose state policies; the others would vote on the acceptability of the proposals. Agriculture was the basis of the economy, but the size of estates was to be strictly controlled. Harringtonianism helped form the views of the political party opposing the dominance of the king and court. Montesquieu in France drew on classical sources in discussing the importance of civic virtue and devotion to the republic.


All these views were well known to Jefferson, Adams, and other American colonial and revolutionary thinkers; and some contemporary communitarian critics of American culture return to classical republican ideas.

Clement of Alexandria


( A.D. c. 150 - c. 215)

Formative teacher in the early Christian church who, as a “Christian gnostic,” combined enthusiasm for Greek philosophy with a defense of the church’s faith. He espoused spiritual and intellectual ascent toward that complete but hidden knowledge or gnosis reserved for the truly enlightened. Clement’s school did not practice strict fidelity to the authorities, and possibly the teachings, of the institutional church, drawing upon the Hellenistic traditions of Alexandria, including Philo and Middle Platonism. As with the law among the Jews, so, for Clement, philosophy among the pagans was a pedagogical preparation for Christ, in whom logos, reason, had become enfleshed. Philosophers now should rise above their inferior understanding to the perfect knowledge revealed in Christ. Though hostile to gnosticism and its speculations, Clement was thoroughly Hellenized in outlook and sometimes guilty of Docetism, not least in his reluctance to concede the utter humanness of Jesus.

Clifford, W(illiam) K(ingdon)


(1845 - 1879)

British mathematician and philosopher. Educated at King’s College, London, and Trinity College, Cambridge, he began giving public lectures in 1868, when he was appointed a fellow of Trinity, and in 1870 became professor of applied mathematics at University College, London. His academic career ended prematurely when he died of tuberculosis. Clifford is best known for his rigorous view on the relation between belief and evidence, which, in “The Ethics of Belief,” he summarized thus: “It is wrong always, everywhere, and for anyone, to believe anything on insufficient evidence.” He gives this example. Imagine a shipowner who sends to sea an emigrant ship, although the evidence raises strong suspicions as to the vessel’s seaworthiness. Ignoring this evidence, he convinces himself that the ship’s condition is good enough and, after it sinks and all the passengers die, collects his insurance money without a trace of guilt. Clifford maintains that the owner had no right to believe in the soundness of the ship. “He had acquired his belief not by honestly earning it in patient investigation, but by stifling his doubts.” The right Clifford is alluding to is moral, for what one believes is not a private but a public affair and may have grave consequences for others. He regards us as morally obliged to investigate the evidence thoroughly on any occasion, and to withhold belief if evidential support is lacking. This obligation must be fulfilled however trivial and insignificant a belief may seem, for a violation of it may “leave its stamp upon our character forever.” Clifford thus rejected Catholicism, to which he had subscribed originally, and became an agnostic. James’s famous essay “The Will to Believe” criticizes Clifford’s view. According to James, insufficient evidence need not stand in the way of religious belief, for we have a right to hold beliefs that go beyond the evidence provided they serve the pursuit of a legitimate goal.

closure

A set of objects, O, is said to exhibit closure or to be closed under a given operation, R, provided that for every object, x, if x is a member of O and x is R-related to any object, y, then y is a member of O. For example, the set of propositions is closed under deduction, for if p is a proposition and p entails q, i.e., q is deducible from p, then q is a proposition (simply because only propositions can be entailed by propositions). In addition, many subsets of the set of propositions are also closed under deduction. For example, the set of true propositions is closed under deduction or entailment. Others are not. Under most accounts of belief, we may fail to believe what is entailed by what we do, in fact, believe. Thus, if knowledge is some form of true, justified belief, knowledge is not closed under deduction, for we may fail to believe a proposition entailed by a known proposition. Nevertheless, there is a related issue that has been the subject of much debate, namely: Is the set of justified propositions closed under deduction? Aside from the obvious importance of the answer to that question in developing an account of justification, there are two important issues in epistemology that also depend on the answer.
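For the special case of a binary operation, the definition above can be sketched directly (an illustrative sketch only; the function name and the example sets are chosen here, not drawn from the entry): a set is closed under an operation just in case applying the operation to members never yields a non-member.

```python
def is_closed(objects, operation):
    """True iff applying `operation` to any pair of members of
    `objects` always yields another member of `objects`."""
    return all(operation(x, y) in objects
               for x in objects for y in objects)


# The integers mod 4 are closed under addition mod 4 ...
mod4 = {0, 1, 2, 3}
assert is_closed(mod4, lambda x, y: (x + y) % 4)

# ... but {1, 2, 3} is not closed under ordinary addition: 2 + 3 = 5.
assert not is_closed({1, 2, 3}, lambda x, y: x + y)
```

The epistemological cases discussed in this entry concern infinite sets (e.g., the set of justified propositions under entailment), where closure is a claim to be argued for rather than a condition to be checked by enumeration; the finite sketch serves only to fix the definition.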


Subtleties aside, the so-called Gettier problem depends in large part upon an affirmative answer to that question. For, assuming that a proposition can be justified and false, it is possible to construct cases in which a proposition, say p, is justified, false, but believed. Now, consider a true proposition, q, which is believed and entailed by p. If justification is closed under deduction, then q is justified, true, and believed. But if the only basis for believing q is p, it is clear that q is not known. Thus, true, justified belief is not sufficient for knowledge. What response is appropriate to this problem has been a central issue in epistemology since E. Gettier’s publication of “Is Justified True Belief Knowledge?” (Analysis, 1963).


Whether justification is closed under deduction is also crucial when evaluating a common, traditional argument for skepticism. Consider any person, S, and let p be any proposition ordinarily thought to be knowable, e.g., that there is a table before S. The argument for skepticism goes like this:



1. If p is justified for S, then, since p entails q, where q is ‘there is no evil genius making S falsely believe that p’, q is justified for S.
2. S is not justified in believing q.



Therefore, S is not justified in believing p.


The first premise depends upon justification being closed under deduction.


See also epistemic logic, epistemology, justification, skepticism.

Coase theorem

A non-formal insight by Ronald Coase (Nobel Prize in Economics, 1991): assuming that there are no (transaction) costs involved in exchanging rights for money, then no matter how rights are initially distributed, rational agents will buy and sell them so as to maximize individual returns. In jurisprudence this proposition has been the basis for a claim about how rights should be distributed even when (as is usual) transaction costs are high: the law should confer rights on those who would purchase them were they for sale on markets without transaction costs; e.g., the right to an indivisible, unsharable resource should be conferred on the agent willing to pay the highest price for it.
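The insight admits a toy rendering (illustrative only; the agents, valuations, and function name are invented for this sketch): with zero transaction costs, the current holder of a right sells it whenever another agent values it more, so the right ends with the highest-valuer regardless of the initial legal assignment.

```python
def final_holder(initial_holder, valuations):
    """Where the right ends up under costless exchange.
    `valuations` maps each agent to what the right is worth to that agent."""
    top_valuer = max(valuations, key=valuations.get)
    # The holder sells iff someone else values the right strictly more.
    if valuations[top_valuer] > valuations[initial_holder]:
        return top_valuer
    return initial_holder


valuations = {"farmer": 50, "rancher": 80}

# The same final allocation results whichever initial assignment the law makes:
assert final_holder("farmer", valuations) == "rancher"
assert final_holder("rancher", valuations) == "rancher"
```

The jurisprudential claim mentioned above then amounts to choosing the initial assignment that this costless market would reach, precisely because real transaction costs may prevent the trade from occurring.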

Cockburn, Catherine (Trotter)


(1679 - 1749)

English philosopher and playwright who made a significant contribution to the debates on ethical rationalism sparked by Clarke’s Boyle lectures (1704-05). The major theme of her writings is the nature of moral obligation. Cockburn displays a consistent, non-doctrinaire philosophical position, arguing that moral duty is to be rationally deduced from the “nature and fitness of things” (Remarks, 1747) and is not founded primarily in externally imposed sanctions. Her writings, published anonymously, take the form of philosophical debates with others, including Samuel Rutherforth, William Warburton, Isaac Watts, Francis Hutcheson, and Lord Shaftesbury. Her best-known intervention in contemporary philosophical debate was her able defense of Locke’s Essay in 1702. S.H.

Cogito ergo sum (Latin, ‘I think, therefore I am’)

The starting point of Descartes’s system of knowledge. In his Discourse on the Method (1637), he observes that the proposition ‘I am thinking, therefore I exist’ (je pense, donc je suis) is “so firm and sure that the most extravagant suppositions of the skeptics were incapable of shaking it.” The celebrated phrase, in its better-known Latin version, also occurs in the Principles of Philosophy (1644), but is not to be found in the Meditations (1641), though the latter contains the fullest statement of the reasoning behind Descartes’s certainty of his own existence.

cognitive dissonance

Mental discomfort arising from conflicting beliefs or attitudes held simultaneously. Leon Festinger, who originated the theory of cognitive dissonance in a book of that title (1957), suggested that cognitive dissonance has motivational characteristics. Suppose a person is contemplating moving to a new city. She is considering both Birmingham and Boston. She cannot move to both, so she must choose. Dissonance is experienced by the person if in choosing, say, Birmingham, she acquires knowledge of bad or unwelcome features of Birmingham and of good or welcome aspects of Boston. The amount of dissonance depends on the relative intensities of dissonant elements. Hence, if the only dissonant factor is her learning that Boston is cooler than Birmingham, and she does not regard climate as important, she will experience little dissonance. Dissonance may occur in several sorts of psychological states or processes, although the bulk of research in cognitive dissonance theory has been on dissonance in choice and on the justification and psychological aftereffects of choice. Cognitive dissonance may be involved in two phenomena of interest to philosophers, namely, self-deception and weakness of will. Why do self-deceivers try to get themselves to believe something that, in some sense, they know to be false? One may resort to self-deception when knowledge causes dissonance. Why do the weak-willed perform actions they know to be wrong? One may become weak-willed when dissonance arises from the expected consequences of doing the right thing. G.A.G.

cognitive psychotherapy

An expression introduced by Brandt in A Theory of the Good and the Right (1979) to refer to a process of assessing and adjusting one’s desires, aversions, or pleasures (henceforth, “attitudes”). This process is central to Brandt’s analysis of rationality, and ultimately, to his view on the justification of morality.


Cognitive psychotherapy consists of the agent’s criticizing his attitudes by repeatedly representing to himself, in an ideally vivid way and at appropriate times, all relevant available information. Brandt characterizes the key definiens as follows: (1) available information is “propositions accepted by the science of the agent’s day, plus factual propositions justified by publicly accessible evidence (including testimony of others about themselves) and the principles of logic”; (2) information is relevant provided, if the agent were to reflect repeatedly on it, “it would make a difference,” i.e., would affect the attitude in question, and the effect would be a function of its content, not an accidental byproduct; (3) relevant information is represented in an ideally vivid way when the agent focuses on it with maximal clarity and detail and with no hesitation or doubt about its truth; and (4) repeatedly and at appropriate times refer, respectively, to the frequency and occasions that would result in the information’s having the maximal attitudinal impact. Suppose Mary’s desire to smoke were extinguished by her bringing to the focus of her attention, whenever she was about to inhale smoke, some justified beliefs, say that smoking is hazardous to one’s health and may cause lung cancer; Mary’s desire would have been removed by cognitive psychotherapy.


According to Brandt, an attitude is rational for a person provided it is one that would survive, or be produced by, cognitive psychotherapy; otherwise it is irrational. Rational attitudes, in this sense, provide a basis for moral norms. Roughly, the correct moral norms are those of a moral code that persons would opt for if (i) they were motivated by attitudes that survive the process of cognitive psychotherapy; and (ii) at the time of opting for a moral code, they were fully aware of, and vividly attentive to, all available information relevant to choosing a moral code (for a society in which they are to live for the rest of their lives). In this way, Brandt seeks a value-free justification for moral norms—one that avoids the problems of other theories such as those that make an appeal to intuitions.

cognitive science

An interdisciplinary research cluster that seeks to account for intelligent activity, whether exhibited by living organisms (especially adult humans) or machines. Hence, cognitive psychology and artificial intelligence constitute its core. A number of other disciplines, including neuroscience, linguistics, anthropology, and philosophy, as well as other fields of psychology (e.g., developmental psychology), are more peripheral contributors. The quintessential cognitive scientist is someone who employs computer modeling techniques (developing computer programs for the purpose of simulating particular human cognitive activities), but the broad range of disciplines that are at least peripherally constitutive of cognitive science have lent a variety of research strategies to the enterprise. While there are a few common institutions that seek to unify cognitive science (e.g., departments, journals, and societies), the problems investigated and the methods of investigation often are limited to a single contributing discipline. Thus, it is more appropriate to view cognitive science as a cross-disciplinary enterprise than as itself a new discipline.


While interest in cognitive phenomena has historically played a central role in the various disciplines contributing to cognitive science, the term properly applies to cross-disciplinary activities that emerged in the 1970s. During the preceding two decades each of the disciplines that became part of cognitive science gradually broke free of positivistic and behavioristic proscriptions that barred systematic inquiry into the operation of the mind. One of the primary factors that catalyzed new investigations of cognitive activities was Chomsky’s generative grammar, which he advanced not only as an abstract theory of the structure of language, but also as an account of language users’ mental knowledge of language (their linguistic competence). A more fundamental factor was the development of approaches for theorizing about information in an abstract manner, and the introduction of machines (computers) that could manipulate information. This gave rise to the idea that one might program a computer to process information so as to exhibit behavior that would, if performed by a human, require intelligence.


If one tried to formulate a unifying question guiding cognitive science research, it would probably be: How does the cognitive system work? But even this common question is interpreted quite differently in different disciplines. We can appreciate these differences by looking just at language. While psycholinguists (generally psychologists) seek to identify the processing activities in the mind that underlie language use, most linguists focus on the products of this internal processing, seeking to articulate the abstract structure of language. A frequent goal of computer scientists, in contrast, has been to develop computer programs to parse natural language input and produce appropriate syntactic and semantic representations.


These differences in objectives among the cognitive science disciplines correlate with different methodologies. The following represent some of the major methodological approaches of the contributing disciplines and some of the problems each encounters.



Artificial intelligence


If the human cognitive system is viewed as computational, a natural goal is to simulate its performance. This typically requires formats for representing information as well as procedures for searching and manipulating it. Some of the earliest AI programs drew heavily on the resources of first-order predicate calculus, representing information in propositional formats and manipulating it according to logical principles. For many modeling endeavors, however, it proved important to represent information in larger-scale structures, such as frames (Marvin Minsky), schemata (David Rumelhart), or scripts (Roger Schank), in which different pieces of information associated with an object or activity would be stored together. Such structures generally employed default values for specific slots (specifying, e.g., that deer live in forests) that would be part of the representation unless overridden by new information (e.g., that a particular deer lives in the San Diego Zoo). A very influential alternative approach, developed by Allen Newell, replaces declarative representations of information with procedural representations, known as productions. These productions take the form of conditionals that specify actions to be performed (e.g., copying an expression into working memory) if certain conditions are satisfied (e.g., the expression matches another expression).
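The default-value behavior of frames described above can be sketched as a small data structure. This is a toy illustration only, not any actual implementation by Minsky or others; the class and slot names are invented:

```python
# A minimal sketch of a frame: slots carry default values that hold
# unless overridden by new, more specific information.

class Frame:
    def __init__(self, defaults):
        self.defaults = dict(defaults)   # slot -> default value
        self.overrides = {}              # slot -> specific value

    def set(self, slot, value):
        # New information overrides the default for this slot.
        self.overrides[slot] = value

    def get(self, slot):
        # Specific information takes precedence over the default.
        return self.overrides.get(slot, self.defaults.get(slot))

deer = Frame({"habitat": "forest", "legs": 4})
print(deer.get("habitat"))        # default applies: "forest"
deer.set("habitat", "San Diego Zoo")
print(deer.get("habitat"))        # default overridden: "San Diego Zoo"
```

The point of the sketch is the lookup order in `get`: a frame supplies plausible defaults for free, while remaining revisable in the light of particular facts.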



Psychology


While some psychologists develop computer simulations, a more characteristic activity is to acquire detailed data from human subjects that can reveal the cognitive system’s actual operation. This is a challenging endeavor. While cognitive activities transpire within us, they frequently do so in such a smooth and rapid fashion that we are unaware of them. For example, we have little awareness of what occurs when we recognize an object as a chair or remember the name of a client. Some cognitive functions, though, seem to be transparent to consciousness. For example, we might approach a logic problem systematically, enumerating possible solutions and evaluating them serially. Allen Newell and Herbert Simon have refined methods for exploiting verbal protocols obtained from subjects as they solve such problems. These methods have been quite fruitful, but their limitations must be respected. In many cases in which we think we know how we performed a cognitive task, Richard Nisbett and Timothy Wilson have argued that we are misled, relying on folk theories to describe how our minds work rather than reporting directly on their operation. In most cases cognitive psychologists cannot rely on conscious awareness of cognitive processes, but must proceed as do physiologists trying to understand metabolism: they must devise experiments that reveal the underlying processes operative in cognition. One approach is to seek clues in the errors to which the cognitive system is prone. Such errors might be more easily accounted for by one kind of underlying process than by another. Speech errors, such as substituting ‘bat cad’ for ‘bad cat’, may be diagnostic of the mechanisms used to construct speech. This approach is often combined with strategies that seek to overload or disrupt the system’s normal operation. A common technique is to have a subject perform two tasks at once — e.g., read a passage while watching for a colored spot. 
Cognitive psychologists may also rely on the ability to dissociate two phenomena (e.g., obliterate one while maintaining the other) to establish their independence. Other types of data widely used to make inferences about the cognitive system include patterns of reaction times, error rates, and priming effects (in which activation of one item facilitates access to related items). Finally, developmental psychologists have brought a variety of kinds of data to bear on cognitive science issues. For example, patterns of acquisition times have been used in a manner similar to reaction time patterns, and accounts of the origin and development of systems constrain and elucidate mature systems.



Linguistics


Since linguists focus on a product of cognition rather than the processes that produce the product, they tend to test their analyses directly against our shared knowledge of that product. Generative linguists in the tradition of Chomsky, for instance, develop grammars that they test by probing whether they generate the sentences of the language and no others. While grammars are certainly germane to developing processing models, they do not directly determine the structure of processing models. Hence, the central task of linguistics is not central to cognitive science. However, Chomsky has augmented his work on grammatical description with a number of controversial claims that are psycholinguistic in nature (e.g., his nativism and his notion of linguistic competence). Further, an alternative approach to incorporating psycholinguistic concerns, the cognitive linguistics of Lakoff and Langacker, has achieved prominence as a contributor to cognitive science.



Neuroscience


Cognitive scientists have generally assumed that the processes they study are carried out, in humans, by the brain. Until recently, however, neuroscience has been relatively peripheral to cognitive science. In part this is because neuroscientists have been chiefly concerned with the implementation of processes, rather than the processes themselves, and in part because the techniques available to neuroscientists (such as single-cell recording) have been most suitable for studying the neural implementation of lower-order processes such as sensation. Prominent exceptions were the classical studies of brain lesions initiated by Broca and Wernicke, which seemed to show that the location of lesions correlated with deficits in production versus comprehension of speech. (More recent data suggest that lesions in Broca’s area impair certain kinds of syntactic processing.) However, other developments in neuroscience promise to make its data more relevant to cognitive modeling in the future. These include studies of simple nervous systems, such as that of the aplysia (a genus of marine mollusk) by Eric Kandel, and the development of a variety of techniques for determining the brain activities involved in the performance of cognitive tasks (e.g., recording of evoked response potentials over larger brain structures, and imaging techniques such as positron emission tomography). While in the future neuroscience is likely to offer much richer information that will guide the development and constrain the character of cognitive models, neuroscience will probably not become central to cognitive science. It is itself a rich, multidisciplinary research cluster whose contributing disciplines employ a host of complicated research tools. Moreover, the focus of cognitive science can be expected to remain on cognition, not on its implementation.


So far cognitive science has been characterized in terms of its modes of inquiry. One can also focus on the domains of cognitive phenomena that have been explored. Language represents one such domain. Syntax was one of the first domains to attract wide attention in cognitive science. For example, shortly after Chomsky introduced his transformational grammar, psychologists such as George Miller sought evidence that transformations figured directly in human language processing. From this beginning, a more complex but enduring relationship among linguists, psychologists, and computer scientists has formed a leading edge for much cognitive science research. Psycholinguistics has matured; sophisticated computer models of natural language processing have been developed; and cognitive linguists have offered a particular synthesis that emphasizes semantics, pragmatics, and cognitive foundations of language.



Thinking and reasoning


These constitute an important domain of cognitive science that is closely linked to philosophical interests. Problem solving, such as that which figures in solving puzzles, playing games, or serving as an expert in a domain, has provided a prototype for thinking. Newell and Simon’s influential work construed problem solving as a search through a problem space and introduced the idea of heuristics — generally reliable but fallible simplifying devices to facilitate the search. One arena for problem solving, scientific reasoning and discovery, has particularly interested philosophers. Artificial intelligence researchers such as Simon and Patrick Langley, as well as philosophers such as Paul Thagard and Lindley Darden, have developed computer programs that can utilize the same data as that available to historical scientists to develop and evaluate theories and plan future experiments. Cognitive scientists have also sought to study the cognitive processes underlying the sorts of logical reasoning (both deductive and inductive) whose normative dimensions have been a concern of philosophers. Philip Johnson-Laird, for example, has sought to account for human performance in dealing with syllogistic reasoning by describing a processing of constructing and manipulating mental models. Finally, the process of constructing and using analogies is another aspect of reasoning that has been extensively studied by traditional philosophers as well as cognitive scientists.
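Newell and Simon’s picture of problem solving as heuristic search through a problem space can be conveyed by a small sketch. Everything here is illustrative: the puzzle (reach a target number using two operators) and the distance heuristic are invented for the example, not drawn from their work:

```python
# Heuristic (best-first) search through a problem space: states are numbers,
# operators generate successor states, and a fallible heuristic ranks which
# state to expand next.

import heapq

def heuristic(state, goal):
    # Generally reliable but fallible estimate of remaining distance.
    return abs(goal - state)

def search(start, goal, operators):
    frontier = [(heuristic(start, goal), start, [start])]
    seen = {start}
    while frontier:
        _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        for op in operators:
            nxt = op(state)
            if nxt not in seen and 0 <= nxt <= 100:   # keep the space finite
                seen.add(nxt)
                heapq.heappush(
                    frontier, (heuristic(nxt, goal), nxt, path + [nxt]))
    return None

# Operators: add 3, or double. Find a route from 2 to 25.
path = search(2, 25, [lambda s: s + 3, lambda s: s * 2])
print(path)
```

The heuristic does not guarantee the shortest route; it merely biases the search toward promising states, which is exactly the trade-off (speed for guaranteed optimality) that makes heuristics psychologically interesting.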



Memory, attention, and learning


Cognitive scientists have differentiated a variety of types of memory. The distinction between long- and short-term memory was very influential in the information-processing models of the 1970s. Short-term memory was characterized by limited capacity, such as that exhibited by the ability to retain a seven-digit telephone number for a short period. In much cognitive science work, the notion of working memory has superseded short-term memory, but many theorists are reluctant to construe this as a separate memory system (as opposed to a part of long-term memory that is activated at a given time). Endel Tulving introduced a distinction between semantic memory (general knowledge that is not specific to a time or place) and episodic memory (memory for particular episodes or occurrences). More recently, Daniel Schacter proposed a related distinction that emphasizes consciousness: implicit memory (access without awareness) versus explicit memory (which does involve awareness and is similar to episodic memory). One of the interesting results of cognitive research is the dissociation between different kinds of memory: a person might have severely impaired memory of recent events while having largely unimpaired implicit memory. More generally, memory research has shown that human memory does not simply store away information as in a file cabinet. Rather, information is organized according to preexisting structures such as scripts, and can be influenced by events subsequent to the initial storage. Exactly what gets stored and retrieved is partly determined by attention, and psychologists in the information-processing tradition have sought to construct general cognitive models that emphasize memory and attention. Finally, the topic of learning has once again become prominent. Extensively studied by the behaviorists of the precognitive era, learning was superseded by memory and attention as a research focus in the 1970s. 
In the 1980s, artificial intelligence researchers developed a growing interest in designing systems that can learn; machine learning is now a major problem area in AI. During the same period, connectionism arose to offer an alternative kind of learning model.



Perception and motor control


Perceptual and motor systems provide the inputs and outputs to cognitive systems. An important aspect of perception is the recognition of something as a particular kind of object or event; this requires accessing knowledge of objects and events. One of the central issues concerning perception is the extent to which perceptual processes are influenced by higher-level cognitive information (top-down processing) versus driven purely by incoming sensory information (bottom-up processing). A related issue concerns the claim that visual imagery is a distinct cognitive process and is closely related to visual perception, perhaps relying on the same brain processes. A number of cognitive science inquiries (e.g., by Roger Shepard and Stephen Kosslyn) have focused on how people use images in problem solving and have sought evidence that people solve problems by rotating images or scanning them. This research has been extremely controversial, as other investigators have argued against the use of images and have tried to account for the performance data that have been generated in terms of the use of propositionally represented information. Finally, a distinction has recently been proposed between the What and Where systems. All of the foregoing issues concern the What system (which recognizes and represents objects as exemplars of categories). The Where system, in contrast, concerns objects in their environment, and is particularly adapted to the dynamics of movement. Gibson’s ecological psychology is a long-standing inquiry into this aspect of perception, and work on the neural substrates is now attracting the interest of cognitive scientists as well.



Recent developments


The breadth of cognitive science has been expanding in recent years. In the 1970s, cognitive science inquiries tended to focus on processing activities of adult humans or on computer models of intelligent performance; the best work often combined these approaches. Subsequently, investigators examined in much greater detail how cognitive systems develop, and developmental psychologists have increasingly contributed to cognitive science. One of the surprising findings has been that, contrary to the claims of William James, infants do not seem to confront the world as a “blooming, buzzing confusion,” but rather recognize objects and events quite early in life. Cognitive science has also expanded along a different dimension. Until recently many cognitive studies focused on what humans could accomplish in laboratory settings in which they performed tasks isolated from real-life contexts. The motivation for this was the assumption that cognitive processes were generic and not limited to specific contexts. However, a variety of influences, including Gibsonian ecological psychology (especially as interpreted and developed by Ulric Neisser) and Soviet activity theory, have advanced the view that cognition is much more dynamic and situated in real-world tasks and environmental contexts; hence, it is necessary to study cognitive activities in an ecologically valid manner.


Another form of expansion has resulted from a challenge to what has been the dominant architecture for modeling cognition. An architecture defines the basic processing capacities of the cognitive system. The dominant cognitive architecture has assumed that the mind possesses a capacity for storing and manipulating symbols. These symbols can be composed into larger structures according to syntactic rules that can then be operated upon by formal rules that recognize that structure. Jerry Fodor has referred to this view of the cognitive system as the “language of thought hypothesis” and clearly construes it as a modern heir of rationalism. One of the basic arguments for it, due to Fodor and Zenon Pylyshyn, is that thoughts, like language, exhibit productivity (the unlimited capacity to generate new thoughts) and systematicity (exhibited by the inherent relation between thoughts such as ‘Joan loves the florist’ and ‘The florist loves Joan’). They argue that only if the architecture of cognition has languagelike compositional structure would productivity and systematicity be generic properties and hence not require special case-by-case accounts. The challenge to this architecture has arisen with the development of an alternative architecture, known as connectionism, parallel distributed processing, or neural network modeling, which proposes that the cognitive system consists of vast numbers of neuronlike units that excite or inhibit each other. Knowledge is stored in these systems by the adjustment of connection strengths between processing units; consequently, connectionism is a modern descendant of associationism. Connectionist networks provide a natural account of certain cognitive phenomena that have proven challenging for the symbolic architecture, including pattern recognition, reasoning with soft constraints, and learning. Whether they also can account for productivity and systematicity has been the subject of debate.
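The connectionist idea that knowledge is stored by adjusting connection strengths can be made concrete with a single trainable unit. This is a toy example, not a model from the connectionist literature; the learning rule shown is the simple delta rule, and the task (learning logical OR) is chosen only because it is a small, linearly separable pattern-recognition problem:

```python
# A single neuronlike unit: activation is a weighted sum of inputs passed
# through a threshold, and learning adjusts the connection strengths.

def activate(weights, inputs):
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1.0 if s > 0 else 0.0

def train(patterns, rate=0.1, epochs=50):
    weights = [0.0, 0.0, 0.0]            # two inputs plus a bias connection
    for _ in range(epochs):
        for inputs, target in patterns:
            x = inputs + (1.0,)          # append constant bias input
            err = target - activate(weights, x)
            # Adjust each connection strength in proportion to its input
            # and to the error: this is where the "knowledge" is stored.
            weights = [w + rate * err * xi for w, xi in zip(weights, x)]
    return weights

# Learn logical OR from examples rather than from explicit rules.
data = [((0.0, 0.0), 0.0), ((0.0, 1.0), 1.0),
        ((1.0, 0.0), 1.0), ((1.0, 1.0), 1.0)]
w = train(data)
print([activate(w, inp + (1.0,)) for inp, _ in data])  # [0.0, 1.0, 1.0, 1.0]
```

Nothing in the trained weights looks like a symbolic rule for OR; the behavior emerges from the pattern of connection strengths, which is the contrast with the symbolic architecture that the debate turns on.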


Philosophical theorizing about the mind has often provided a starting point for the modeling and empirical investigations of modern cognitive science. The ascent of cognitive science has not meant that philosophers have ceased to play a role in examining cognition. Indeed, a number of philosophers have pursued their inquiries as contributors to cognitive science, focusing on such issues as the possible reduction of cognitive theories to those of neuroscience, the status of folk psychology relative to emerging scientific theories of mind, the merits of rationalism versus empiricism, and strategies for accounting for the intentionality of mental states. The interaction between philosophers and other cognitive scientists, however, is bidirectional, and a number of developments in cognitive science promise to challenge or modify traditional philosophical views of cognition. For example, studies by cognitive and social psychologists have challenged the assumption that human thinking tends to accord with the norms of logic and decision theory. On a variety of tasks humans seem to follow procedures (heuristics) that violate normative canons, raising questions about how philosophers should characterize rationality. Another area of empirical study that has challenged philosophical assumptions has been the study of concepts and categorization. Philosophers since Plato have widely assumed that concepts of ordinary language, such as red, bird, and justice, should be definable by necessary and sufficient conditions. But celebrated studies by Eleanor Rosch and her colleagues indicated that many ordinary-language concepts had a prototype structure instead. On this view, the categories employed in human thinking are characterized by prototypes (the clearest exemplars) and a metric that grades exemplars according to their degree of typicality. 
Recent investigations have also pointed to significant instability in conceptual structure and to the role of theoretical beliefs in organizing categories. This alternative conception of concepts has profound implications for philosophical methodologies that portray philosophy’s task to be the analysis of concepts.
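The prototype view of concepts contrasted above with classical definition can be sketched as a graded similarity metric. The features and numbers below are invented for illustration and do not come from Rosch’s studies:

```python
# Prototype categorization: membership is graded by similarity to the
# clearest exemplar (the prototype), not settled by necessary and
# sufficient conditions.

def typicality(exemplar, prototype):
    # Proportion of prototype features the exemplar shares (0.0 to 1.0).
    shared = sum(1 for f, v in prototype.items() if exemplar.get(f) == v)
    return shared / len(prototype)

bird_prototype = {"flies": True, "sings": True,
                  "small": True, "feathers": True}
robin   = {"flies": True,  "sings": True,  "small": True,  "feathers": True}
penguin = {"flies": False, "sings": False, "small": False, "feathers": True}

print(typicality(robin, bird_prototype))    # 1.0: a clear exemplar
print(typicality(penguin, bird_prototype))  # 0.25: an atypical bird
```

On the classical view both robin and penguin would simply satisfy (or fail) a definition of bird; on the prototype view penguin is a bird, but a less typical one, which matches the graded judgments Rosch’s subjects produced.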

Cohen, Hermann


(1842-1918)


German Jewish philosopher who originated and led, with Paul Natorp (1854-1924), the Marburg School of neo-Kantianism. He taught at Marburg from 1876 to 1912. Cohen wrote commentaries on Kant’s Critiques prior to publishing System der Philosophie (1902-12), which consisted of parts on logic, ethics, and aesthetics. He developed a Kantian idealism of the natural sciences, arguing that a transcendental analysis of these sciences shows that “pure thought” (his system of Kantian a priori principles) “constructs” their “reality.” He also developed Kant’s ethics as a democratic socialist ethics. He ended his career at a rabbinical seminary in Berlin, writing his influential Religion der Vernunft aus den Quellen des Judentums (“Religion of Reason out of the Sources of Judaism,” 1919), which explicated Judaism on the basis of his own Kantian ethical idealism. Cohen’s ethical-political views were adopted by Kurt Eisner (1867-1919), leader of the Munich revolution of 1918, and also had an impact on the revisionism (of orthodox Marxism) of the German Social Democratic Party, while his philosophical writings greatly influenced Cassirer.

coherence theory of truth

The view that either the nature of truth or the sole criterion for determining truth is constituted by a relation of coherence between the belief (or judgment) being assessed and other beliefs (or judgments).


As a view of the nature of truth, the coherence theory represents an alternative to the correspondence theory of truth. Whereas the correspondence theory holds that a belief is true provided it corresponds to independent reality, the coherence theory holds that it is true provided it stands in a suitably strong relation of coherence to other beliefs, so that the believer’s total system of beliefs forms a highly or perhaps perfectly coherent system. Since, on such a characterization, truth depends entirely on the internal relations within the system of beliefs, such a conception of truth seems to lead at once to idealism as regards the nature of reality, and its main advocates have been proponents of absolute idealism (mainly Bradley, Bosanquet, and Brand Blanshard). A less explicitly metaphysical version of the coherence theory was also held by certain members of the school of logical positivism (mainly Otto Neurath and Carl Hempel).


The nature of the intended relation of coherence, often characterized metaphorically in terms of the beliefs in question fitting together or dovetailing with each other, has been and continues to be a matter of uncertainty and controversy. Despite occasional misconceptions to the contrary, it is clear that coherence is intended to be a substantially more demanding relation than mere consistency, involving such things as inferential and explanatory relations within the system of beliefs. Perfect or ideal coherence is sometimes described as requiring that every belief in the system of beliefs entails all the others (though it must be remembered that those offering such a characterization do not restrict entailments to those that are formal or analytic in character). Since actual human systems of belief seem inevitably to fall short of perfect coherence, however that is understood, their truth is usually held to be only approximate at best, thus leading to the absolute idealist view that truth admits of degrees.


As a view of the criterion of truth, the coherence theory of truth holds that the sole criterion or standard for determining whether a belief is true is its coherence with other beliefs or judgments, with the degree of justification varying with the degree of coherence. Such a view amounts to a coherence theory of epistemic justification. It was held by most of the proponents of the coherence theory of the nature of truth, though usually without distinguishing the two views very clearly.


For philosophers who hold both of these views, the thesis that coherence is the sole criterion of truth is usually logically prior, and the coherence theory of the nature of truth is adopted as a consequence, the clearest argument being that only the view that perfect or ideal coherence is the nature of truth can make sense of the appeal to degrees of coherence as a criterion of truth.

coherentism

In epistemology, a theory of the structure of knowledge or justified beliefs according to which all beliefs representing knowledge are known or justified in virtue of their relations to other beliefs, specifically, in virtue of belonging to a coherent system of beliefs. Assuming that the orthodox account of knowledge is correct at least in maintaining that justified true belief is necessary for knowledge, we can identify two kinds of coherence theories of knowledge: those that are coherentist merely in virtue of incorporating a coherence theory of justification, and those that are doubly coherentist because they account for both justification and truth in terms of coherence. What follows will focus on coherence theories of justification.


Historically, coherentism is the most significant alternative to foundationalism. The latter holds that some beliefs, basic or foundational beliefs, are justified apart from their relations to other beliefs, while all other beliefs derive their justification from that of foundational beliefs. Foundationalism portrays justification as having a structure like that of a building, with certain beliefs serving as the foundations and all other beliefs supported by them. Coherentism rejects this image and pictures justification as having the structure of a raft. Justified beliefs, like the planks that make up a raft, mutually support one another. This picture of the coherence theory is due to the positivist Otto Neurath. Among the positivists, Hempel shared Neurath’s sympathy for coherentism. Other defenders of coherentism from the late nineteenth and early twentieth centuries were idealists, e.g., Bradley, Bosanquet, and Brand Blanshard. (Idealists often held the sort of double coherence theory mentioned above.)


The contrast between foundationalism and coherentism is commonly developed in terms of the regress argument. If we are asked what justifies one of our beliefs, we characteristically answer by citing some other belief that supports it, e.g., logically or probabilistically. If we are asked about this second belief, we are likely to cite a third belief, and so on. There are three shapes such an evidential chain might have: it could go on forever, it could eventually end in some belief, or it could loop back upon itself, i.e., eventually contain again a belief that had occurred “higher up” on the chain. Assuming that infinite chains are not really possible, we are left with a choice between chains that end and circular chains. According to foundationalists, evidential chains must eventually end with a foundational belief that is justified, if the belief at the beginning of the chain is to be justified. Coherentists are then portrayed as holding that circular chains can yield justified beliefs.


This portrayal is, in a way, correct. But it is also misleading since it suggests that the disagreement between coherentism and foundationalism is best understood as concerning only the structure of evidential chains. Talk of evidential chains in which beliefs that are further down on the chain are responsible for beliefs that are higher up naturally suggests the idea that just as real chains transfer forces, evidential chains transfer justification. Foundationalism then sounds like a real possibility. Foundational beliefs already have justification, and evidential chains serve to pass the justification along to other beliefs. But coherentism seems to be a nonstarter, for if no belief in the chain is justified to begin with, there is nothing to pass along. Altering the metaphor, we might say that coherentism seems about as likely to succeed as a bucket brigade that does not end at a well, but simply moves around in a circle.


The coherentist seeks to dispel this appearance by pointing out that the primary function of evidential chains is not to transfer epistemic status, such as justification, from belief to belief. Indeed, beliefs are not the primary locus of justification. Rather, it is whole systems of belief that are justified or not in the primary sense; individual beliefs are justified in virtue of their membership in an appropriately structured system of beliefs. Accordingly, what the coherentist claims is that the appropriate sorts of evidential chains, which will be circular — indeed, will likely contain numerous circles — constitute justified systems of belief. The individual beliefs within such a system are themselves justified in virtue of their place in the entire system and not because this status is passed on to them from beliefs further down some evidential chain in which they figure. One can, therefore, view coherentism with considerable accuracy as a version of foundationalism that holds all beliefs to be foundational. From this perspective, the difference between coherentism and traditional foundationalism has to do with what accounts for the epistemic status of foundational beliefs, with traditional foundationalism holding that such beliefs can be justified in various ways, e.g., by perception or reason, while coherentism insists that the only way such beliefs can be justified is by being a member of an appropriately structured system of beliefs.


One outstanding problem the coherentist faces is to specify exactly what constitutes a coherent system of beliefs. Coherence clearly must involve much more than mere absence of mutually contradictory beliefs. One way in which beliefs can be logically consistent is by concerning completely unrelated matters, but such a consistent system of beliefs would not embody the sort of mutual support that constitutes the core idea of coherentism. Moreover, one might question whether logical consistency is even necessary for coherence, e.g., on the basis of the preface paradox. Similar points can be made regarding efforts to begin an account of coherence with the idea that beliefs and degrees of belief must correspond to the probability calculus. So although it is difficult to avoid thinking that such formal features as logical and probabilistic consistency are significantly involved in coherence, it is not clear exactly how they are involved. An account of coherence can be drawn more directly from the following intuitive idea: a coherent system of belief is one in which each belief is epistemically supported by the others, where various types of epistemic support are recognized, e.g., deductive or inductive arguments, or inferences to the best explanation. There are, however, at least two problems this suggestion does not address. First, since very small sets of beliefs can be mutually supporting, the coherentist needs to say something about the scope a system of beliefs must have to exhibit the sort of coherence required for justification. Second, given the possibility of small sets of mutually supportive beliefs, it is apparently possible to build a system of very broad scope out of such small sets of mutually supportive beliefs by mere conjunction, i.e., without forging any significant support relations among them. 
Yet, since the interrelatedness of all truths does not seem discoverable by analyzing the concept of justification, the coherentist cannot rule out epistemically isolated subsystems of belief entirely. So the coherentist must say what sorts of isolated subsystems of belief are compatible with coherence.


The difficulties involved in specifying a more precise concept of coherence should not be pressed too vigorously against the coherentist. For one thing, most foundationalists have been forced to grant coherence a significant role within their accounts of justification, so no dialectical advantage can be gained by pressing them. Moreover, only a little reflection is needed to see that nearly all the difficulties involved in specifying coherence are manifestations within a specific context of quite general philosophical problems concerning such matters as induction, explanation, theory choice, the nature of epistemic support, etc. They are, then, problems that are faced by logicians, philosophers of science, and epistemologists quite generally, regardless of whether they are sympathetic to coherentism.


Coherentism faces a number of serious objections. Since according to coherentism justification is determined solely by the relations among beliefs, it does not seem to be capable of taking us outside the circle of our beliefs. This fact gives rise to complaints that coherentism cannot allow for any input from external reality, e.g., via perception, and that it can neither guarantee nor even claim that it is likely that coherent systems of belief will make contact with such reality or contain true beliefs. And while it is widely granted that justified false beliefs are possible, it is just as widely accepted that there is an important connection between justification and truth, a connection that rules out accounts according to which justification is not truth-conducive. These abstractly formulated complaints can be made more vivid, in the case of the former, by imagining a person with a coherent system of beliefs that becomes frozen, and fails to change in the face of ongoing sensory experience; and in the case of the latter, by pointing out that, barring an unexpected account of coherence, it seems that a wide variety of coherent systems of belief are possible, systems that are largely disjoint or even incompatible.

Collier, Arthur


(1680 - 1732)

English philosopher, a Wiltshire parish priest whose Clavis Universalis (1713) defends a version of immaterialism closely akin to Berkeley’s. Matter, Collier contends, “exists in, or in dependence on mind.” He emphatically affirms the existence of bodies, and, like Berkeley, defends immaterialism as the only alternative to skepticism. Collier grants that bodies seem to be external, but their “quasi-externeity” is only the effect of God’s will. In Part I of the Clavis Collier argues (as Berkeley had in his New Theory of Vision, 1709) that the visible world is not external. In Part II he argues (as Berkeley had in the Principles, 1710, and Three Dialogues, 1713) that the external world “is a being utterly impossible.” Two of Collier’s arguments for the “intrinsic repugnancy” of the external world resemble Kant’s first and second antinomies. Collier argues, e.g., that the material world is both finite and infinite; the contradiction can be avoided, he suggests, only by denying its external existence.


Some scholars suspect that Collier deliberately concealed his debt to Berkeley; most accept his report that he arrived at his views ten years before he published them. Collier first refers to Berkeley in letters written in 1714-15. In A Specimen of True Philosophy (1730), where he offers an immaterialist interpretation of the opening verse of Genesis, Collier writes that “except a single passage or two” in Berkeley’s Dialogues, there is no other book “which I ever heard of” on the same subject as the Clavis. This is a puzzling remark on several counts, one being that in the Preface to the Dialogues, Berkeley describes his earlier books. Collier’s biographer reports seeing among his papers (now lost) an outline, dated 1708, on “the question of the visible world being without us or not,” but he says no more about it. The biographer concludes that Collier’s independence cannot reasonably be doubted; perhaps the outline would, if unearthed, establish this.

Collingwood, R(obin) G(eorge)


(1889 - 1943)

English philosopher and historian. His father, W.G. Collingwood, John Ruskin’s friend, secretary, and biographer, at first educated him at home in Coniston and later sent him to Rugby School and then Oxford. Immediately upon graduating in 1912, he was elected to a fellowship at Pembroke College; except for service with admiralty intelligence during World War I, he remained at Oxford until 1941, when illness compelled him to retire. Although his Autobiography expresses strong disapproval of the lines on which, during his lifetime, philosophy at Oxford developed, he was a university “insider.” In 1934 he was elected to the Waynflete Professorship, the first to become vacant after he had done enough work to be a serious candidate. He was also a leading archaeologist of Roman Britain.


Although as a student Collingwood was deeply influenced by the “realist” teaching of John Cook Wilson, he studied not only the British idealists, but also Hegel and the contemporary Italian post-Hegelians. At twenty-three, he published a translation of Croce’s book on Vico’s philosophy. Religion and Philosophy (1916), the first of his attempts to present orthodox Christianity as philosophically acceptable, has both idealist and Cook Wilsonian elements. Thereafter the Cook Wilsonian element steadily diminished. In Speculum Mentis (1924), he investigated the nature and ultimate unity of the four special ‘forms of experience’ — art, religion, natural science, and history — and their relation to a fifth comprehensive form — philosophy. While all four, he contended, are necessary to a full human life now, each is a form of error that is corrected by its less erroneous successor. Philosophy is error-free but has no content of its own: “The truth is not some perfect system of philosophy: it is simply the way in which all systems, however perfect, collapse into nothingness on the discovery that they are only systems.” Some critics dismissed this enterprise as idealist (a description Collingwood accepted when he wrote), but even those who favored it were disturbed by the apparent skepticism of its result. A year later, he amplified his views about art in Outlines of a Philosophy of Art.


Since much of what Collingwood went on to write about philosophy has never been published, and some of it has been negligently destroyed, his thought after Speculum Mentis is hard to trace. It will not be definitively established until the more than 3,000 pages of his surviving unpublished manuscripts (deposited in the Bodleian Library in 1978) have been thoroughly studied. They were not available to the scholars who published studies of his philosophy as a whole up to 1990.


Three trends in how his philosophy developed, however, are discernible. The first is that as he continued to investigate the four special forms of experience, he came to consider each valid in its own right, and not a form of error. As early as 1928, he abandoned the conception of the historical past in Speculum Mentis as simply a spectacle, alien to the historian’s mind; he now proposed a theory of it as thoughts explaining past actions that, although occurring in the past, can be rethought in the present. Not only can the identical thought “enacted” at a definite time in the past be “reenacted” any number of times after, but it can be known to be so reenacted if physical evidence survives that can be shown to be incompatible with other proposed reenactments. In 1933-34 he wrote a series of lectures (posthumously published as The Idea of Nature) in which he renounced his skepticism about whether the quantitative material world can be known, and inquired why the three constructive periods he recognized in European scientific thought, the Greek, the Renaissance, and the modern, could each advance our knowledge of it as they did. Finally, in 1937, returning to the philosophy of art and taking full account of Croce’s later work, he showed that imagination expresses emotion and becomes false when it counterfeits emotion that is not felt; thus he transformed his earlier theory of art as purely imaginative. His later theories of art and of history remain alive; and his theory of nature, although corrected by research since his death, was an advance when published.


The second trend was that his conception of philosophy changed as his treatment of the special forms of experience became less skeptical. In his beautifully written Essay on Philosophical Method (1933), he argued that philosophy has an object — the ens realissimum as the one, the true, and the good — of which the objects of the special forms of experience are appearances; but that implies what he had ceased to believe, that the special forms of experience are forms of error. In his Principles of Art (1938) and New Leviathan (1942) he denounced the idealist principle of Speculum Mentis that to abstract is to falsify. Then, in his Essay on Metaphysics (1940), he denied that metaphysics is the science of being qua being, and identified it with the investigation of the “absolute presuppositions” of the special forms of experience at definite historical periods.


A third trend, which came to dominate his thought as World War II approached, was to see serious philosophy as practical, and so as having political implications. He had been, like Ruskin, a radical Tory, opposed less to liberal or even some socialist measures than to the bourgeois ethos from which they sprang. Recognizing European fascism as the barbarism it was, and detesting anti-Semitism, he advocated an antifascist foreign policy and intervention in the Spanish civil war in support of the republic. His last major publication, The New Leviathan, impressively defends what he called civilization against what he called barbarism; and although it was neglected by political theorists after the war was won, the collapse of Communism and the rise of Islamic states are winning it new readers.

combinatory logic

A branch of formal logic that deals with formal systems designed for the study of certain basic operations for constructing and manipulating functions as rules, i.e. as rules of calculation expressed by definitions.


The notion of a function was fundamental in the development of modern formal (or mathematical) logic that was initiated by Frege, Peano, Russell, Hilbert, and others. Frege was the first to introduce a generalization of the mathematical notion of a function to include propositional functions, and he used the general notion for formally representing logical notions such as those of a concept, object, relation, generality, and judgment. Frege’s proposal to replace the traditional logical notions of subject and predicate by argument and function, and thus to conceive predication as functional application, marks a turning point in the history of formal logic. In most modern logical systems, the notation used to express functions, including propositional functions, is essentially that used in ordinary mathematics. As in ordinary mathematics, certain basic notions are taken for granted, such as the use of variables to indicate processes of substitution.


Like the original systems for modern formal logic, the systems of combinatory logic were designed to give a foundation for mathematics. But combinatory logic arose as an effort to carry the foundational aims further and deeper. It undertook an analysis of notions taken for granted in the original systems, in particular of the notions of substitution and of the use of variables. In this respect combinatory logic was conceived by one of its founders, H.B. Curry, to be concerned with the ultimate foundations and with notions that constitute a “prelogic.” It was hoped that an analysis of this prelogic would disclose the true source of the difficulties connected with the logical paradoxes.


The operation of applying a function to one of its arguments, called application, is a primitive operation in all systems of combinatory logic. If f is a function and x a possible argument, then the result of the application operation is denoted (fx). In mathematics this is usually written f(x), but the notation (fx) is more convenient in combinatory logic. The German logician M. Schönfinkel, who started combinatory logic in 1924, observed that it is not necessary to introduce functions of more than one variable, provided that the idea of a function is enlarged so that functions can be arguments as well as values of other functions. A function F(x,y) is represented with the function f, which when applied to the argument x has, as a value, the function (fx), which, when applied to y, yields F(x,y), i.e. ((fx)y) = F(x,y). It is therefore convenient to omit parentheses with association to the left so that fx1…xn is used for ((…((fx1)x2)…)xn). Schönfinkel’s main result was to show how to make the class of functions studied closed under explicit definition by introducing two specific primitive functions, the combinators S and K, with the rules Kxy = x and Sxyz = xz(yz). (To illustrate the effect of S in ordinary mathematical notation, let f and g be functions of two and one arguments, respectively; then Sfg is the function such that Sfgx = f(x,g(x)).) Generally, if a(x1,…,xn) is an expression built up from constants and the variables shown by means of the application operation, then there is a function F constructed out of constants (including the combinators S and K), such that Fx1…xn = a(x1,…,xn). This is essentially the meaning of the combinatory completeness of the theory of combinators in the terminology of H. B. Curry and R. Feys, Combinatory Logic (1958); and H.B. Curry, J.R. Hindley, and J.P. Seldin, Combinatory Logic, vol. II (1972).
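The rules for K and S above can be sketched as curried functions. The following Python fragment is an added illustration, not part of the entry; the variable names and sample functions are arbitrary assumptions:

```python
# Hedged sketch: the combinators K and S as curried Python functions.

K = lambda x: lambda y: x                      # rule: Kxy = x
S = lambda x: lambda y: lambda z: x(z)(y(z))   # rule: Sxyz = xz(yz)

assert K(1)(2) == 1

# The identity function is definable as SKK, since SKKx = Kx(Kx) = x:
I = S(K)(K)
assert I(42) == 42

# The effect of S in ordinary notation, as in the entry: for a curried
# two-argument f and a one-argument g, Sfg is the map x |-> f(x, g(x)).
f = lambda x: lambda y: x + y
g = lambda x: x * x
assert S(f)(g)(3) == 3 + 3 * 3   # f(3, g(3))
```

The definition I = SKK is the standard first exercise in the calculus: every appeal to an identity function can be discharged in favor of S and K alone.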


The system of combinatory logic with S and K as the only primitive functions is the simplest equation calculus that is essentially undecidable. It is a type-free theory that allows the formation of the term ff, i.e. self-application, which has given rise to problems of interpretation. There are also type theories based on combinatory logic. The systems obtained by extending the theory of combinators with functions representing more familiar logical notions such as negation, implication, and generality, or by adding a device for expressing inclusion in logical categories, are studied in illative combinatory logic.
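To see what the type-free formation of ff makes possible, consider the following sketch in Python (an added illustration with assumed names; `Z` is the call-by-value variant of Church's fixed-point combinator, not something defined in the entry):

```python
# Self-application f(f) is expressible once functions can be arguments
# of functions. From two self-applications one can build a fixed-point
# combinator, which turns a non-recursive "step" function into a
# recursive one:
Z = lambda g: (lambda x: g(lambda v: x(x)(v)))(
              lambda x: g(lambda v: x(x)(v)))

fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
assert fact(5) == 120
```

The availability of such fixed points for arbitrary terms is closely tied both to the expressive power of the type-free theory and to its undecidability.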


The theory of combinators exists in another, equivalent form, namely as the type-free λ-calculus created by Church in 1932. Like the theory of combinators, it was designed as a formalism for representing functions as rules of calculation, and it was originally part of a more general system of functions intended as a foundation for mathematics. The λ-calculus has application as a primitive operation, but instead of building up new functions from some primitive ones by application, new functions are here obtained by functional abstraction. If a(x) is an expression built up by means of application from constants and the variable x, then a(x) is considered to define a function denoted λx.a(x), whose value for the argument b is a(b), i.e. (λx.a(x))b = a(b). The function λx.a(x) is obtained from a(x) by functional abstraction. The property of combinatory completeness or closure under explicit definition is postulated in the form of functional abstraction. The combinators can be defined using functional abstraction (i.e., K = λx.λy.x and S = λx.λy.λz.xz(yz)), and conversely, in the theory of combinators, functional abstraction can be defined. A detailed presentation of the λ-calculus is found in H. Barendregt, The Lambda Calculus, Its Syntax and Semantics (1981).
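The converse direction, defining functional abstraction inside the theory of combinators, is the classical bracket-abstraction algorithm: given a variable x and a term m, it produces a combinator term F, free of x, with Fx = m. A minimal sketch in Python follows; the term representation (strings for variables and constants, pairs for application), the function names, and the toy example are all assumptions for illustration:

```python
# Terms: a string is a variable or constant; a pair (f, a) is the
# application (fa).

def occurs(x, m):
    """Does variable x occur in term m?"""
    if isinstance(m, tuple):
        return occurs(x, m[0]) or occurs(x, m[1])
    return m == x

def abstract(x, m):
    """Return a term F with no free x such that Fx reduces to m."""
    if m == x:
        return (("S", "K"), "K")         # I = SKK, and Ix = x
    if not occurs(x, m):
        return ("K", m)                  # (Km)x = m
    f, a = m                             # m is an application (fa)
    return (("S", abstract(x, f)), abstract(x, a))   # S([x]f)([x]a)x = ([x]f)x(([x]a)x)

def interp(t, env):
    """Read a term as a Python function, interpreting S and K."""
    if isinstance(t, tuple):
        return interp(t[0], env)(interp(t[1], env))
    if t == "S":
        return lambda x: lambda y: lambda z: x(z)(y(z))
    if t == "K":
        return lambda x: lambda y: x
    return env[t]

# The abstraction of x from (fx), applied to 10, behaves like f at 10:
F = abstract("x", ("f", "x"))
assert interp(F, {"f": lambda v: v + 1})(10) == 11
```

The three clauses mirror the usual textbook presentation; more refined algorithms add optimizations to keep the resulting combinator terms small.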


It is possible to represent the series of natural numbers by a sequence of closed terms in the λ-calculus. Certain expressions in the λ-calculus will then represent functions on the natural numbers, and these λ-definable functions are exactly the general recursive functions or the Turing computable functions. The equivalence of λ-definability and general recursiveness was one of the arguments used by Church for what is known as Church’s thesis, i.e., the identification of the effectively computable functions and the recursive functions. The first problem about recursive undecidability was expressed by Church as a problem about expressions in the λ-calculus.
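The representation in question is by Church numerals: the number n is the closed term that applies its first argument n times to its second. A brief sketch in Python lambdas (an added illustration; the decoding helper `to_int` and the names are assumptions):

```python
# Church numerals: n is the function applying f to x exactly n times.
zero = lambda f: lambda x: x                     # f applied 0 times
succ = lambda n: lambda f: lambda x: f(n(f)(x))  # one more application
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
mul  = lambda m: lambda n: lambda f: m(n(f))     # m-fold n-fold application

def to_int(n):
    """Decode a Church numeral by counting applications."""
    return n(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)
assert to_int(add(two)(three)) == 5
assert to_int(mul(two)(three)) == 6
```

Arithmetic operations such as addition and multiplication are then themselves λ-terms, which is the starting point for showing that all general recursive functions are λ-definable.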


The λ-calculus thus played a historically important role in the original development of recursion theory. Due to the emphasis in combinatory logic on the computational aspect of functions, it is natural that its method has been found useful in proof theory and in the development of systems of constructive mathematics. For the same reason it has found several applications in computer science in the construction and analysis of programming languages. The techniques of combinatory logic have also been applied in theoretical linguistics, e.g. in so-called Montague grammar.


In recent decades combinatory logic, like other domains of mathematical logic, has developed into a specialized branch of mathematics, in which the original philosophical and foundational aims and motives are of little and often no importance. One reason for this is the discovery of the new technical applications, which were not intended originally, and which have turned the interest toward several new mathematical problems. Thus, the original motives are often felt to be less urgent and only of historical significance. Another reason for the decline of the original philosophical and foundational aims may be a growing awareness in the philosophy of mathematics of the limitations of formal and mathematical methods as tools for conceptual clarification, as tools for reaching “ultimate foundations.”

commentaries on Aristotle

The term commonly used for the Greek commentaries on Aristotle that take up about 15,000 pages in the Berlin Commentaria in Aristotelem Graeca (1882—1909), still the basic edition of them. Only in the 1980s did a project begin, under the editorship of Richard Sorabji, of King’s College, London, to translate at least the most significant portions of them into English. They had remained the largest corpus of Greek philosophy not translated into any modern language.


Most of these works, especially the later, Neoplatonic ones, are much more than simple commentaries on Aristotle. They are also a mode of doing philosophy, the favored one at this stage of intellectual history. They are therefore important not only for the understanding of Aristotle, but also for both the study of the pre-Socratics and the Hellenistic philosophers, particularly the Stoics, of whom they preserve many fragments, and lastly for the study of Neoplatonism itself — and, in the case of John Philoponus, for studying the innovations he introduces in the process of trying to reconcile Platonism with Christianity.


The commentaries may be divided into three main groups.


(1) The first group of commentaries are those by Peripatetic scholars of the second to fourth centuries A.D., most notably Alexander of Aphrodisias (fl. c.200), but also the paraphraser Themistius (fl. c.360). We must not omit, however, to note Alexander’s predecessor Aspasius, author of the earliest surviving commentary, one on the Nicomachean Ethics — a work not commented on again until the late Byzantine period. Commentaries by Alexander survive on the Prior Analytics, Topics, Metaphysics I — V, On the Senses, and Meteorologics, and his now lost ones on the Categories, On the Soul, and Physics had enormous influence in later times, particularly on Simplicius.


(2) By far the largest group is that of the Neoplatonists up to the sixth century A.D. Most important of the earlier commentators is Porphyry (232—c.309), of whom only a short commentary on the Categories survives, together with an introduction (Isagoge) to Aristotle’s logical works, which provoked many commentaries itself, and proved most influential in both the East and (through Boethius) in the Latin West. The reconciling of Plato and Aristotle is largely his work. His big commentary on the Categories was of great importance in later times, and many fragments are preserved in that of Simplicius. His follower Iamblichus was also influential, but his commentaries are likewise lost. The Athenian School of Syrianus (c.375—437) and Proclus (410—85) also commented on Aristotle, but all that survives is a commentary of Syrianus on Books III, IV, XIII, and XIV of the Metaphysics.


It is the early sixth century, however, that produces the bulk of our surviving commentaries, originating from the Alexandrian school of Ammonius, son of Hermeias (c.435 — 520), but composed both in Alexandria, by the Christian John Philoponus (c.490 — 575), and in (or at least from) Athens by Simplicius (writing after 532). Main commentaries of Philoponus are on Categories, Prior Analytics, Posterior Analytics, On Generation and Corruption, On the Soul I — II, and Physics; of Simplicius on Categories, Physics, On the Heavens, and (perhaps) On the Soul.


The tradition is carried on in Alexandria by Olympiodorus (c.495 — 565) and the Christians Elias (fl. c.540) and David (an Armenian, nicknamed the Invincible, fl. c.575), and finally by Stephanus, who was brought by the emperor to take the chair of philosophy in Constantinople in about 610. These scholars comment chiefly on the Categories and other introductory material, but Olympiodorus produced a commentary on the Meteorologics.


Characteristic of the Neoplatonists is a desire to reconcile Aristotle with Platonism (arguing, e.g., that Aristotle was not dismissing the Platonic theory of Forms), and to systematize his thought, thus reconciling him with himself. They are responding to a long tradition of criticism, during which difficulties were raised about incoherences and contradictions in Aristotle’s thought, and they are concerned to solve these, drawing on their comprehensive knowledge of his writings. Only Philoponus, as a Christian, dares to criticize him, in particular on the eternity of the world, but also on the concept of infinity (on which he produces an ingenious argument, picked up, via the Arabs, by Bonaventure in the thirteenth century). The Categories proves a particularly fruitful battleground, and much of the later debate between realism and nominalism stems from arguments about the proper subject matter of that work.


The format of these commentaries is mostly that adopted by scholars ever since, that of taking one passage, or lemma, after another of the source work and discussing it from every angle, but there are variations. Sometimes the general subject matter is discussed first, and then details of the text are examined; alternatively, the lemma is taken in subdivisions without any such distinction. The commentary can also proceed explicitly by answering problems, or aporiai, which have been raised by previous authorities. Some commentaries, such as the short one of Porphyry on the Categories, and that of Iamblichus’s pupil Dexippus on the same work, have a “catechetical” form, proceeding by question and answer. In some cases (as with Wittgenstein in modern times) the commentaries are simply transcriptions by pupils of the lectures of a teacher. This is the case, for example, with the surviving “commentaries” of Ammonius. One may also indulge in simple paraphrase, as does Themistius on the Posterior Analytics, Physics, On the Soul, and On the Heavens, but even here a good deal of interpretation is involved, and his works remain interesting.


An important offshoot of all this activity in the Latin West is the figure of Boethius (c.480 — 524). It is he who first transmitted a knowledge of Aristotelian logic to the West, to become an integral part of medieval Scholasticism. He translated Porphyry’s Isagoge, and the whole of Aristotle’s logical works. He wrote a double commentary on the Isagoge, and commentaries on the Categories and On Interpretation. He is dependent ultimately on Porphyry, but more immediately, it would seem, on a source in the school of Proclus.


(3) The third major group of commentaries dates from the late Byzantine period, and seems mainly to emanate from a circle of scholars grouped around the princess Anna Comnena in the twelfth century. The most important figures here are Eustratius (c.1050 — 1120) and Michael of Ephesus (originally dated c.1040, but now fixed at c.1130). Michael in particular seems concerned to comment on areas of Aristotle’s works that had hitherto escaped commentary. He therefore comments widely, for example, on the biological works, but also on the Sophistical Refutations. He and Eustratius, and perhaps others, seem to have cooperated also on a composite commentary on the Nicomachean Ethics, neglected since Aspasius. There is also evidence of lost commentaries on the Politics and the Rhetoric.


The composite commentary on the Ethics was translated into Latin in the next century, in England, by Robert Grosseteste, but earlier than this translations of the various logical commentaries had been made by James of Venice (fl. c.1130), who may have even made the acquaintance of Michael of Ephesus in Constantinople. Later in that century other commentaries were being translated from Arabic versions by Gerard of Cremona (d.1187). The influence of the Greek commentary tradition in the West thus resumed after the long break since Boethius in the sixth century, but only now, it seems fair to say, is the full significance of this enormous body of work becoming properly appreciated.

commentaries on Plato

A term designating the works in the tradition of commentary (hypomnema) on Plato that may go back to the Old Academy (Crantor is attested by Proclus to have been the first to have “commented” on the Timaeus). More probably, the tradition arises in the first century B.C. in Alexandria, where we find Eudorus commenting, again, on the Timaeus, but possibly also (if the scholars who attribute to him the Anonymous Theaetetus Commentary are correct) on the Theaetetus. It seems also as if the Stoic Posidonius composed a commentary of some sort on the Timaeus. The commentary form (such as we can observe in the biblical commentaries of Philo of Alexandria) owes much to the Stoic tradition of commentary on Homer, as practiced by the second-century B.C. School of Pergamum. It was normal to select (usually consecutive) portions of text (lemmata) for general, and then detailed, comment, raising and answering “problems” (aporiai), refuting one’s predecessors, and dealing with points of both doctrine and philology.


By the second century A.D. the tradition of Platonic commentary was firmly established. We have evidence of commentaries by the Middle Platonists Gaius, Albinus, Atticus, Numenius, and Cronius, mainly on the Timaeus, but also on at least parts of the Republic, as well as a work by Atticus’s pupil Harpocration of Argos, in twenty-four books, on Plato’s work as a whole. These works are all lost, but in the surviving works of Plutarch we find exegesis of parts of Plato’s works, such as the creation of the soul in the Timaeus (35a — 36d). The Latin commentary of Calcidius (fourth century A.D.) is also basically Middle Platonic.


In the Neoplatonic period (after Plotinus, who did not indulge in formal commentary, though many of his essays are in fact informal commentaries), we have evidence of much more comprehensive exegetic activity. Porphyry initiated the tradition with commentaries on the Phaedo, Cratylus, Sophist, Philebus, Parmenides (of which the surviving anonymous fragment of commentary is probably a part), and the Timaeus. He also commented on the myth of Er in the Republic. It seems to have been Porphyry who is responsible for introducing the allegorical interpretation of the introductory portions of the dialogues, though it was only his follower Iamblichus (who also commented on all the above dialogues, as well as the Alcibiades and the Phaedrus) who introduced the principle that each dialogue should have only one central theme, or skopos. The tradition was carried on in the Athenian School by Syrianus and his pupils Hermeias (on the Phaedrus — surviving) and Proclus (Alcibiades, Cratylus, Timaeus, Parmenides — all surviving, at least in part), and continued in later times by Damascius (Phaedo, Philebus, Parmenides) and Olympiodorus (Alcibiades, Phaedo, Gorgias — also surviving, though sometimes only in the form of pupils’ notes).


These commentaries are not now to be valued primarily as expositions of Plato’s thought (though they do contain useful insights, and much valuable information); they are best regarded as original philosophical treatises presented in the mode of commentary, as is so much of later Greek philosophy, where it is not originality but rather faithfulness to an inspired master and a great tradition that is being striven for.

common good

A normative standard in Thomistic and Neo-Thomistic ethics for evaluating the justice of social, legal, and political arrangements, referring to those arrangements that promote the full flourishing of everyone in the community. Every good can be regarded as both a goal to be sought and, when achieved, a source of human fulfillment. A common good is any good sought by and/or enjoyed by two or more persons (as friendship is a good common to the friends); the common good is the good of a “perfect” (i.e., complete and politically organized) human community — a good that is the common goal of all who promote the justice of that community, as well as the common source of fulfillment of all who share in those just arrangements.


‘Common’ is an analogical term referring to kinds and degrees of sharing ranging from mere similarity to a deep ontological communion. Thus, any good that is a genuine perfection of our common human nature is a common good, as opposed to merely idiosyncratic or illusory goods. But goods are common in a deeper sense when the degree of sharing is more than merely coincidental: two children engaged in parallel play enjoy a good in common, but they realize a common good more fully by engaging each other in one game; similarly, if each in a group watches the same good movie alone at home, they have enjoyed a good in common but they realize this good at a deeper level when they watch the movie together in a theater and discuss it afterward. In short, common good includes aggregates of private, individual goods but transcends these aggregates by the unique fulfillment afforded by mutuality, shared activity, and communion of persons.


As to the sources in Thomistic ethics for this emphasis on what is deeply shared over what merely coincides, the first is Aristotle’s understanding of us as social and political animals: many aspects of human perfection, on this view, can be achieved only through shared activities in communities, especially the political community. The second is Christian Trinitarian theology, in which the single Godhead involves the mysterious communion of three divine “persons,” the very exemplar of a common good; human personhood, by analogy, is similarly perfected only in a relationship of social communion.


The achievement of such intimately shared goods requires very complex and delicate arrangements of coordination to prevent the exploitation and injustice that plague shared endeavors. The establishment and maintenance of these social, legal, and political arrangements is “the” common good of a political society, because the enjoyment of all goods is so dependent upon the quality and the justice of those arrangements. The common good of the political community includes, but is not limited to, public goods: goods characterized by non-rivalry and non-excludability and which, therefore, must generally be provided by public institutions. By the principle of subsidiarity, the common good is best promoted by, in addition to the state, many lower-level non-public societies, associations, and individuals. Thus, religiously affiliated schools educating non-religious minority children might promote the common good without being public goods.

compactness theorem

A theorem for first-order logic: if every finite subset of a given infinite theory T is consistent, then the whole theory is consistent. The result is an immediate consequence of the completeness theorem, for if the theory were not consistent, a contradiction, say ‘P and not-P’, would be provable from it. But the proof, being a finitary object, would use only finitely many axioms from T, so this finite subset of T would be inconsistent.


This proof of the compactness theorem is very general, showing that any language that has a sound and complete system of inference, where each rule allows only finitely many premises, satisfies the theorem. This is important because the theorem immediately implies that many familiar mathematical notions are not expressible in the language in question, notions like those of a finite set or a well-ordering relation.
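The finite-set example can be made precise by a standard argument (sketched here in our own notation, not the entry's):

```latex
% Suppose some first-order sentence \varphi were true in exactly the
% finite models. Let \lambda_n say "there are at least n elements":
\lambda_n \;:=\; \exists x_1 \cdots \exists x_n
    \bigwedge_{1 \le i < j \le n} x_i \neq x_j .
% Every finite subset of the theory
T \;=\; \{\varphi\} \cup \{\lambda_n : n \ge 1\}
% mentions only finitely many \lambda_n, so a sufficiently large finite
% model satisfies it. By compactness T has a model; that model satisfies
% every \lambda_n and hence is infinite, yet it also satisfies \varphi,
% contradicting the assumption that \varphi holds only in finite models.
```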


The compactness theorem is important for other reasons as well. It is the most frequently applied result in the study of first-order model theory and has inspired interesting developments within set theory and its foundations by generating a search for infinitary languages that obey some analog of the theorem.

complementary class

The class of all things not in a given class. For example, if C is the class of all red things, then its complementary class is the class containing everything that is not red. This latter class includes even non-colored things, like numbers and the class C itself. Often, the context will determine a less inclusive complementary class. If B ⊆ A, then the complement of B with respect to A is A − B. For example, if A is the class of physical objects, and B is the class of red physical objects, then the complement of B with respect to A is the class of non-red physical objects.
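The relative complement is just set difference, as a toy sketch shows (the "universe" and its members here are invented purely for illustration):

```python
# Toy universe A of "physical objects" and its subclass B of red ones.
A = {"apple", "fire-truck", "sky", "grass"}
B = {"apple", "fire-truck"}        # the red physical objects

# Complement of B with respect to A: set difference A − B.
complement_of_B_in_A = A - B       # the non-red physical objects
```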

complexe significabile (plural: complexe significabilia)

Also called complexum significabile, in medieval philosophy, what is signified only by a complexum (a statement or declarative sentence), by a that-clause, or by a dictum (an accusative + infinitive construction, as in: ‘I want him to go’). It is analogous to the modern proposition. The doctrine seems to have originated with Adam de Wodeham in the early fourteenth century, but is usually associated with Gregory of Rimini slightly later. Complexe significabilia do not fall under any of the Aristotelian categories, and so do not “exist” in the ordinary way. Still, they are somehow real. For before creation nothing existed except God, but even then God knew that the world was going to exist. The object of this knowledge cannot have been God himself (since God is necessary, but the world’s existence is contingent), and yet did not “exist” before creation. Nevertheless, it was real enough to be an object of knowledge. Some authors who maintained such a view held that these entities were not only signifiable in a complex way by a statement, but were themselves complex in their inner structure; the term ‘complexum significabile’ is unique to their theories. The theory of complexe significabilia was vehemently criticized by late medieval nominalists.

compossible

Capable of existing or occurring together. E.g., two individuals are compossible provided the existence of one of them is compatible with the existence of the other. In terms of possible worlds, things are compossible provided there is some possible world to which all of them belong; otherwise they are incompossible. Not all possibilities are compossible. E.g., the extinction of life on earth by the year 3000 is possible; so is its continuation until the year 10,000; but since it is impossible that both of these things should happen, they are not compossible. Leibniz held that any non-actualized possibility must be incompossible with what is actual.

comprehension

As applied to a term, the set of attributes implied by a term. The comprehension of ‘square’, e.g., includes being four-sided, having equal sides, and being a plane figure, among other attributes. The comprehension of a term is contrasted with its extension, which is the set of individuals to which the term applies. The distinction between the extension and the comprehension of a term was introduced in the Port-Royal Logic by Arnauld and Pierre Nicole in 1662. Current practice is to use the expression ‘intension’ rather than ‘comprehension’. Both expressions, however, are inherently somewhat vague.

compresence

An unanalyzable relation in terms of which Russell, in his later writings (especially in Human Knowledge: Its Scope and Limits, 1948), took concrete particular objects to be analyzable. Concrete particular objects are analyzable in terms of complexes of qualities all of whose members are compresent. Although this relation can be defined only ostensively, Russell states that it appears in psychology as “simultaneity in one experience” and in physics as “overlapping in space-time.” Complete complexes of compresence are complexes of qualities having the following two properties: (1) all members of the complex are compresent; (2) given anything not a member of the complex, there is at least one member of the complex with which it is not compresent. He argues that there is strong empirical evidence that no two complete complexes have all their qualities in common. Finally, space-time point-instants are analyzed as complete complexes of compresence. Concrete particulars, on the other hand, are analyzed as series of incomplete complexes of compresence related by certain causal laws.

computability

Roughly, the possibility of computation on a Turing machine. The first convincing general definition, A.M. Turing’s (1936), has been proved equivalent to the known plausible alternatives, so that the concept of computability is generally recognized as an absolute one. Turing’s definition referred to computations by imaginary tape-processing machines that we now know to be capable of computing the same functions (whether simple sums and products or highly complex, esoteric functions) that modern digital computing machines could compute if provided with sufficient storage capacity. In the form ‘Any function that is computable at all is computable on a Turing machine’, this absoluteness claim is called Turing’s thesis. A comparable claim for Alonzo Church’s (1935) concept of λ-computability is called Church’s thesis. Similar theses are enunciated for Markov algorithms, for S.C. Kleene’s notion of general recursiveness, etc. It has been proved that the same functions are computable in all of these ways. There is no hope of proving any of those theses, for such a proof would require a definition of ‘computable’ — a definition that would simply be a further item in the list, the subject of a further thesis. But since computations of new kinds might be recognizable as genuine in particular cases, Turing’s thesis and its equivalents, if false, might be decisively refuted by discovery of a particular function, a way of computing it, and a proof that no Turing machine can compute it.


The halting problem for (say) Turing machines is the problem of devising a Turing machine that computes the function h(m, n) = 1 or 0 depending on whether or not Turing machine number m ever halts, once started with the number n on its tape. This problem is unsolvable, for a machine that computed h could be modified to compute a function g(n), which is undefined (the machine goes into an endless loop) when h(n, n) = 1, and otherwise agrees with h(n, n). But this modified machine — Turing machine number k, say — would have contradictory properties: started with k on its tape, it would eventually halt if and only if it does not. Turing proved unsolvability of the decision problem for logic (the problem of devising a Turing machine that, applied to argument number n in logical notation, correctly classifies it as valid or invalid) by reducing the halting problem to the decision problem, i.e., showing how any solution to the latter could be used to solve the former problem, which we know to be unsolvable.
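The diagonal construction in this paragraph can be sketched directly (the decider `halts` is hypothetical — no correct one can exist — and the point of the sketch is that any candidate is refuted on some input):

```python
def make_diagonal(halts):
    """Given a claimed halting decider halts(prog, arg) -> bool,
    build the self-defeating program g of the diagonal argument."""
    def g(n):
        if halts(g, n):        # if the decider says g halts on n...
            while True:        # ...loop forever,
                pass
        return 0               # ...otherwise halt immediately.
    return g

# One (wrong) candidate decider: it claims nothing ever halts.
def always_no(prog, arg):
    return False

g = make_diagonal(always_no)
verdict = always_no(g, 0)      # decider's claim: g does not halt on 0
actually = g(0)                # but g halts and returns 0, refuting it
```

Whatever verdict a candidate decider gives about g, g does the opposite, so no candidate can be correct on every input.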

computer theory

The theory of the design, uses, powers, and limits of modern electronic digital computers. It has important bearings on philosophy, as may be seen from the many philosophical references herein.


Modern computers are a radically new kind of machine, for they are active physical realizations of formal languages of logic and arithmetic. Computers employ sophisticated languages, and they have reasoning powers many orders of magnitude greater than those of any prior machines. Because they are far superior to humans in many important tasks, they have produced a revolution in society that is as profound as the industrial revolution and is advancing much more rapidly. Furthermore, computers themselves are evolving rapidly.


When a computer is augmented with devices for sensing and acting, it becomes a powerful control system, or a robot. To understand the implications of computers for philosophy, one should imagine a robot that has basic goals and volitions built into it, including conflicting goals and competing desires. This concept first appeared in Karel Čapek’s play Rossum’s Universal Robots (1920), where the word ‘robot’ originated.


A computer has two aspects, hardware and programming languages. The theory of each is relevant to philosophy.


The software and hardware aspects of a computer are somewhat analogous to the human mind and body. This analogy is especially strong if we follow Peirce and consider all information processing in nature and in human organisms, not just the conscious use of language. Evolution has produced a succession of levels of sign usage and information processing: self-copying chemicals, self-reproducing cells, genetic programs directing the production of organic forms, chemical and neuronal signals in organisms, unconscious human information processing, ordinary languages, and technical languages. But each level evolved gradually from its predecessors, so that the line between body and mind is vague.


The hardware of a computer is typically organized into three general blocks: memory, processor (arithmetic unit and control), and various input-output devices for communication between machine and environment. The memory stores the data to be processed as well as the program that directs the processing. The processor has an arithmetic-logic unit for transforming data, and a control for executing the program. Memory, processor, and input-output communicate with one another through a fast switching system.


The memory and processor are constructed from registers, adders, switches, cables, and various other building blocks. These in turn are composed of electronic components: transistors, resistors, and wires. The input and output devices employ mechanical and electromechanical technologies as well as electronics. Some input-output devices also serve as auxiliary memories; floppy disks and magnetic tapes are examples. For theoretical purposes it is useful to imagine that the computer has an indefinitely expandable storage tape. So imagined, a computer is a physical realization of a Turing machine. The idea of an indefinitely expandable memory is similar to the logician’s concept of an axiomatic formal language that has an unlimited number of proofs and theorems.


The software of a modern electronic computer is written in a hierarchy of programming languages. The higher-level languages are designed for use by human programmers, operators, and maintenance personnel. The “machine language” is the basic hardware language, interpreted and executed by the control. Its words are sequences of binary digits or bits. Programs written in intermediate-level languages are used by the computer to translate the languages employed by human users into the machine language for execution.


A programming language has instructional means for carrying out three kinds of operations: data operations and transfers, transfers of control from one part of the program to another, and program self-modification. Von Neumann designed the first modern programming language.


A programming language is general purpose, and an electronic computer that executes it can in principle carry out any algorithm or effective procedure, including the simulation of any other computer. Thus the modern electronic computer is a practical realization of the abstract concept of a universal Turing machine. What can actually be computed in practice depends, of course, on the state of computer technology and its resources.


It is common for computers at many different spatial locations to be interconnected into complex networks by telephone, radio, and satellite communication systems. Insofar as users in one part of the network can control other parts, either legitimately or illegitimately (e.g., by means of a “computer virus”), a global network of computers is really a global computer. Such vast computers greatly increase societal interdependence, a fact of importance for social philosophy.


The theory of computers has two branches, corresponding to the hardware and software aspects of computers.


The fundamental concept of hardware theory is that of a finite automaton, which may be expressed either as an idealized logical network of simple computer primitives, or as the corresponding temporal system of input, output, and internal states.


A finite automaton may be specified as a logical net of truth-functional switches and simple memory elements, connected to one another by idealized wires. These elements function synchronously, each wire being in a binary state (0 or 1) at each moment of time t = 0, 1, 2, …. Each switching element (or “gate”) executes a simple truth-functional operation (not, or, and, nor, not-and, etc.) and is imagined to operate instantaneously (compare the notions of sentential connective and truth table). A memory element (flip-flop, binary counter, unit delay line) preserves its input bit for one or more time-steps.


A well-formed net of switches and memory elements may not have cycles through switches only, but it typically has feedback cycles through memory elements. The wires of a logical net are of three kinds: input, internal, and output. Correspondingly, at each moment of time a logical net has an input state, an internal state, and an output state. A logical net or automaton need not have any input wires, in which case it is a closed system.


The complete history of a logical net is described by a deterministic law: at each moment of time t, the input and internal states of the net determine its output state and its next internal state. This leads to the second definition of ‘finite automaton’: it is a deterministic finite-state system characterized by two tables. The transition table gives the next internal state produced by each pair of input and internal states. The output table gives the output state produced by each input state and internal state.
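The two-table definition can be made concrete in a short sketch (the machine, a one-bit parity automaton, and all names below are our own illustration, not the entry's):

```python
# A finite automaton specified by its two tables.
# Internal states: 'even', 'odd'; inputs and outputs: 0 or 1.
# The machine outputs the running parity of a stream of bits.
transition = {('even', 0): 'even', ('even', 1): 'odd',   # next internal state
              ('odd', 0): 'odd', ('odd', 1): 'even'}
output = {('even', 0): 0, ('even', 1): 1,                # output state
          ('odd', 0): 1, ('odd', 1): 0}

def run(inputs, state='even'):
    """Deterministic law: at each step the (input, internal) pair fixes
    the output state and the next internal state."""
    outs = []
    for bit in inputs:
        outs.append(output[(state, bit)])
        state = transition[(state, bit)]
    return outs, state

outs, final = run([1, 1, 0, 1])   # outs records the parity after each bit
```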


The state analysis approach to computer hardware is of practical value only for systems with a few elements (e.g., a binary-coded decimal counter), because the number of states increases exponentially with the number of elements. Such a rapid rate of increase of complexity with size is called the combinatorial explosion, and it applies to many discrete systems. However, the state approach to finite automata does yield abstract models of law-governed systems that are of interest to logic and philosophy. A correctly operating digital computer is a finite automaton. Alan Turing defined the finite part of what we now call a Turing machine in terms of states. It seems doubtful that a human organism has more computing power than a finite automaton.


A closed finite automaton illustrates Nietzsche’s law of eternal return. Since a finite automaton has a finite number of internal states, at least one of its internal states must occur infinitely many times in any infinite state history. And since a closed finite automaton is deterministic and has no inputs, a repeated state must be followed by the same sequence of states each time it occurs. Hence the history of a closed finite automaton is periodic, as in the law of eternal return.
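The argument can be watched in miniature (the seven-state next-state map below is arbitrary, invented only for illustration):

```python
# A closed (input-free) deterministic automaton: each state has exactly
# one successor, so any infinite history must eventually cycle.
next_state = {0: 3, 1: 4, 2: 5, 3: 1, 4: 2, 5: 3, 6: 0}

def history(start, steps):
    """Run the closed automaton for a number of steps from a start state."""
    s, h = start, []
    for _ in range(steps):
        h.append(s)
        s = next_state[s]
    return h

h = history(6, 12)
# After a short transient (6, 0) the history settles into the repeating
# cycle 3, 1, 4, 2, 5 — the "eternal return" of the same states.
```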


Idealized neurons are sometimes used as the primitive elements of logical nets, and it is plausible that for any brain and central nervous system there is a logical network that behaves the same and performs the same functions. This shows the close relation of finite automata to the brain and central nervous system. The switches and memory elements of a finite automaton may be made probabilistic, yielding a probabilistic automaton. These automata are models of indeterministic systems.


Von Neumann showed how to extend deterministic logical nets to systems that contain self-reproducing automata. This is a very basic logical design relevant to the nature of life.


The part of computer programming theory most relevant to philosophy contains the answer to Leibniz’s conjecture concerning his characteristica universalis and calculus ratiocinator. He held that “all our reasoning is nothing but the joining and substitution of characters, whether these characters be words or symbols or pictures.” He thought therefore that one could construct a universal, arithmetic language with two properties of great philosophical importance. First, every atomic concept would be represented by a prime number. Second, the truth-value of any logically true-or-false statement expressed in the characteristica universalis could be calculated arithmetically, and so any rational dispute could be resolved by calculation. Leibniz expected to do the computation by hand with the help of a calculating machine; today we would do it on an electronic computer. However, we know now that Leibniz’s proposed language cannot exist, for no computer (or computer program) can calculate the truth-value of every logically true-or-false statement given to it. This fact follows from a logical theorem about the limits of what computer programs can do. Let E be a modern electronic computer with an indefinitely expandable memory, so that E has the power of a universal Turing machine. And let L be any formal language in which every arithmetic statement can be expressed, and which is consistent. Leibniz’s proposed characteristica universalis would be such a language. Now a computer that is operating correctly is an active formal language, carrying out the instructions of its program deductively. Accordingly, Gödel’s incompleteness theorems for formal arithmetic apply to computer E. It follows from these theorems that no program can enable computer E to decide of an arbitrary statement of L whether or not that statement is true. More strongly, there cannot even be a program that will enable E to enumerate the truths of language L one after another. 
Therefore Leibniz’s characteristica universalis cannot exist.
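Leibniz's arithmetic scheme can be illustrated in miniature (the concepts, prime assignments, and helper function below are our invented example, not Leibniz's own assignments):

```python
# Each atomic concept gets a prime; a composite concept is the product
# of its atoms. Concept A "contains" atom B just when B's number
# divides A's number.
atoms = {'rational': 2, 'animal': 3, 'mortal': 5}
human = atoms['rational'] * atoms['animal']   # composite concept: 2 * 3 = 6

def contains(composite, atom_number):
    """Arithmetic test for conceptual containment: divisibility."""
    return composite % atom_number == 0
```

The fatal gap, as the entry explains, is the second step: no such arithmetic can decide the truth-value of every logically true-or-false statement.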


Electronic computers are the first active or “live” mathematical systems. They are the latest addition to a long historical series of mathematical tools for inquiry: geometry, algebra, calculus and differential equations, probability and statistics, and modern mathematics.


The most effective use of computer programs is to instruct computers in tasks for which they are superior to humans. Computers are being designed and programmed to cooperate with humans so that the calculation, storage, and judgment capabilities of the two are synthesized. The powers of such human-computer combines will increase at an exponential rate as computers continue to become faster, more powerful, and easier to use, while at the same time becoming smaller and cheaper. The social implications of this are very important.


The modern electronic computer is a new tool for the logic of discovery (Peirce’s abduction). An inquirer (or inquirers) operating a computer interactively can use it as a universal simulator, dynamically modeling systems that are too complex to study by traditional mathematical methods, including non-linear systems. Simulation is used to explain known empirical results, and also to develop new hypotheses to be tested by observation. Computer models and simulations are unique in several ways: complexity, dynamism, controllability, and visual presentability. These properties make them important new tools for modeling and thereby relevant to some important philosophical problems.


A human-computer combine is especially suited for the study of complex holistic and hierarchical systems with feedback (cf. cybernetics), including adaptive goal-directed systems. A hierarchical-feedback system is a dynamic structure organized into several levels, with the compounds of one level being the atoms or building blocks of the next higher level, and with cyclic paths of influence operating both on and between levels. For example, a complex human institution has several levels, and the people in it are themselves hierarchical organizations of self-copying chemicals, cells, organs, and such systems as the pulmonary and the central nervous system.


The behaviors of these systems are in general much more complex than, e.g., the behaviors of traditional systems of mechanics. Contrast an organism, society, or ecology with our planetary system as characterized by Kepler and Newton. Simple formulas (ellipses) describe the orbits of the planets. More basically, the planetary system is stable in the sense that a small perturbation of it produces a relatively small variation in its subsequent history. In contrast, a small change in the state of a holistic hierarchical feedback system often amplifies into a very large difference in behavior, a concern of chaos theory. For this reason it is helpful to model such systems on a computer and run sample histories. The operator searches for representative cases, interesting phenomena, and general principles of operation.


The human-computer method of inquiry should be a useful tool for the study of biological evolution, the actual historical development of complex adaptive goal-directed systems. Evolution is a logical and communication process as well as a physical and chemical process. But evolution is statistical rather than deterministic, because a single temporal state of the system results in a probabilistic distribution of histories, rather than in a single history. The genetic operators of mutation and crossover, e.g., are probabilistic operators. But though it is stochastic, evolution cannot be understood in terms of limiting relative frequencies, for the important developments are the repeated emergence of new phenomena, and there may be no evolutionary convergence toward a final state or limit. Rather, to understand evolution the investigator must simulate the statistical spectra of histories covering critical stages of the process.


Many important evolutionary phenomena should be studied by using simulation along with observation and experiment. Evolution has produced a succession of levels of organization: self-copying chemicals, self-reproducing cells, communities of cells, simple organisms, haploid sexual reproduction, diploid sexuality with genetic dominance and recessiveness, organisms composed of organs, societies of organisms, humans, and societies of humans. Most of these systems are complex hierarchical feedback systems, and it is of interest to understand how they emerged from earlier systems. Also, the interaction of competition and cooperation at all stages of evolution is an important subject, of relevance to social philosophy and ethics.


Some basic epistemological and metaphysical concepts enter into computer modeling. A model is a well-developed concept of its object, representing characteristics like structure and function. A model is similar to its object in important respects, but simpler; in mathematical terminology, a model is homomorphic to its object but not isomorphic to it. However, it is often useful to think of a model as isomorphic to an embedded subsystem of the system it models. For example, a gas is a complicated system of microstates of particles, but these microstates can be grouped into macrostates, each with a pressure, volume, and temperature satisfying the gas law PV = KT. The derivation of this law from the detailed mechanics of the gas is a reduction of the embedded subsystem to the underlying system. In many cases it is adequate to work with the simpler embedded subsystem, but in other cases one must work with the more complex but complete underlying system.


The law of an embedded subsystem may be different in kind from the law of the underlying system. Consider, e.g., a machine tossing a coin randomly. The sequence of tosses obeys a simple probability law, while the complex underlying mechanical system is deterministic. The random sequence of tosses is a probabilistic system embedded in a deterministic system, and a mathematical account of this embedding relation constitutes a reduction of the probabilistic system to a deterministic system. Compare the compatibilist’s claim that free choice can be embedded in a deterministic system. Compare also a pseudorandom sequence, which is a deterministic sequence with adequate randomness for a given (finite) simulation. Note finally that the probabilistic system of quantum mechanics underlies the deterministic system of mechanics.
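A pseudorandom sequence of the kind just mentioned is easy to exhibit (the generator below is a linear congruential rule with common textbook constants; it is one illustration, not a canonical choice):

```python
def lcg_coins(seed, n, a=1664525, c=1013904223, m=2**32):
    """A fully deterministic rule whose output bits look like coin
    tosses for the purposes of a finite simulation."""
    x, coins = seed, []
    for _ in range(n):
        x = (a * x + c) % m
        coins.append(x >> 31)   # take the top bit as heads(1)/tails(0)
    return coins

flips = lcg_coins(seed=42, n=10)
```

The sequence is embedded in a deterministic system: the same seed always reproduces the same "random" tosses.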


The ways in which models are used by goal-directed systems to solve problems and adapt to their environments are currently being modeled by human-computer combines. Since computer software can be converted into hardware, successful simulations of adaptive uses of models could be incorporated into the design of a robot. Human intentionality involves the use of a model of oneself in relation to others and the environment. A problem-solving robot using such a model would constitute an important step toward a robot with full human powers.


These considerations lead to the central thesis of the philosophy of logical mechanism: a finite deterministic automaton can perform all human functions. This seems plausible in principle (and is treated in detail in Merrilee Salmon, ed., The Philosophy of Logical Mechanism: Essays in Honor of Arthur W. Burks, 1990). A digital computer has reasoning and memory powers. Robots have sensory inputs for collecting information from the environment, and they have moving and acting devices. To obtain a robot with human powers, one would need to put these abilities under the direction of a system of desires, purposes, and goals. Logical mechanism is a form of mechanism or materialism, but differs from traditional forms of these doctrines in its reliance on the logical powers of computers and the logical nature of evolution and its products. The modern computer is a kind of complex hierarchical physical system, a system with memory, processor, and control that employs a hierarchy of programming languages. Humans are complex hierarchical systems designed by evolution — with structural levels of chemicals, cells, organs, and systems (e.g., circulatory, neural, immune) and linguistic levels of genes, enzymes, neural signals, and immune recognition. Traditional materialists did not have this model of a computer nor the contemporary understanding of evolution, and never gave an adequate account of logic and reasoning and such phenomena as goal-directedness and self-modeling.

Comte, Auguste


(1798 - 1857)

French philosopher and sociologist, the founder of positivism. He was educated in Paris at l’École Polytechnique, where he briefly taught mathematics. He suffered from a mental illness that occasionally interrupted his work.


In conformity with empiricism, Comte held that knowledge of the world arises from observation. He went beyond many empiricists, however, in denying the possibility of knowledge of unobservable physical objects. He conceived of positivism as a method of study based on observation and restricted to the observable. He applied positivism chiefly to science. He claimed that the goal of science is prediction, to be accomplished using laws of succession. Explanation insofar as attainable has the same structure as prediction. It subsumes events under laws of succession; it is not causal. Influenced by Kant, he held that the causes of phenomena and the nature of things-in-themselves are not knowable. He criticized metaphysics for ungrounded speculation about such matters; he accused it of not keeping imagination subordinate to observation. He advanced positivism for all the sciences but held that each science has additional special methods, and has laws not derivable by human intelligence from laws of other sciences. He corresponded extensively with J.S. Mill, who encouraged his work and discussed it in Auguste Comte and Positivism (1865). Twentieth-century logical positivism was inspired by Comte’s ideas.


Comte was a founder of sociology, which he also called social physics. He divided the science into two branches, statics and dynamics, dealing respectively with social organization and social development. He advocated a historical method of study for both branches. As a law of social development, he proposed that all societies pass through three intellectual stages, first interpreting phenomena theologically, then metaphysically, and finally positivistically. The general idea that societies develop according to laws of nature was adopted by Marx.


Comte’s most important work is his six-volume Cours de philosophie positive (Course in Positive Philosophy, 1830-42). It is an encyclopedic treatment of the sciences that expounds positivism and culminates in the introduction of sociology.

conceivability

Capability of being conceived or imagined. Thus, golden mountains are conceivable; round squares, inconceivable. As Descartes pointed out, the sort of imaginability required is not the ability to form mental images. Chiliagons, Cartesian minds, and God are all conceivable, though none of these can be pictured “in the mind’s eye.” Historical references include Anselm’s definition of God as “a being than which none greater can be conceived” and Descartes’s argument for dualism from the conceivability of disembodied existence. Several of Hume’s arguments rest upon the maxim that whatever is conceivable is possible. He argued, e.g., that an event can occur without a cause, since this is conceivable, and his critique of induction relies on the inference from the conceivability of a change in the course of nature to its possibility. In response, Reid maintained that to conceive is merely to understand the meaning of a proposition. Reid argued that impossibilities are conceivable, since we must be able to understand falsehoods. Many simply equate conceivability with possibility, so that to say something is conceivable (or inconceivable) just is to say that it is possible (or impossible). Such usage is controversial, since conceivability is broadly an epistemological notion concerning what can be thought, whereas possibility is a metaphysical notion concerning how things can be.


The same controversy can arise regarding the compossible, or co-possible, where two states of affairs are compossible provided it is possible that they both obtain, and two propositions are compossible provided their conjunction is possible. Alternatively, two things are compossible if and only if there is a possible world containing both. Leibniz held that two things are compossible provided they can be ascribed to the same possible world without contradiction. “There are many possible universes, each collection of compossibles making one of them.” Others have argued that non-contradiction is sufficient for neither possibility nor compossibility.


The claim that something is inconceivable is usually meant to suggest more than merely an inability to conceive. It is to say that trying to conceive results in a phenomenally distinctive mental repugnance, e.g. when one attempts to conceive of an object that is red and green all over at once. On this usage the inconceivable might be equated with what one can “just see” to be impossible. There are two related usages of ‘conceivable’: (1) not inconceivable in the sense just described; and (2) such that one can “just see” that the thing in question is possible. Goldbach’s conjecture would seem a clear example of something conceivable in the first sense, but not the second.

conceptualism

The view that there are no universals and that the supposed classificatory function of universals is actually served by particular concepts in the mind. A universal is a property that can be instantiated by more than one individual thing (or particular) at the same time; e.g., the shape of this page, if identical with the shape of the next page, will be one property instantiated by two distinct individual things at the same time. If viewed as located where the pages are, then it would be immanent. If viewed as not having spatiotemporal location itself, but only bearing a connection, usually called instantiation or exemplification, to things that have such location, then the shape of this page would be transcendent and presumably would exist even if exemplified by nothing, as Plato seems to have held. The conceptualist rejects both views by holding that universals are merely concepts. Most generally, a concept may be understood as a principle of classification, something that can guide us in determining whether an entity belongs in a given class or does not. Of course, properties understood as universals satisfy, trivially, this definition and thus may be called concepts, as indeed they were by Frege. But the conceptualistic substantive views of concepts are that concepts are (1) mental representations, often called ideas, serving their classificatory function presumably by resembling the entities to be classified; or (2) brain states that serve the same function but presumably not by resemblance; or (3) general words (adjectives, common nouns, verbs) or uses of such words, an entity’s belonging to a certain class being determined by the applicability to the entity of the appropriate word; or (4) abilities to classify correctly, whether or not with the aid of an item belonging under (1), (2), or (3). The traditional conceptualist holds (1). Defenders of (3) would be more properly called nominalists. 
In whichever way concepts are understood, and regardless of whether conceptualism is true, they are obviously essential to our understanding and knowledge of anything, even at the most basic level of cognition, namely, recognition. The classic work on the topic is Thinking and Experience (1954) by H.H. Price, who held (4).

concursus dei

God’s concurrence. The notion derives from a theory in medieval philosophical theology, according to which any case of causation involving created substances requires both the exercise of genuine causal powers inherent in creatures and the exercise of God’s causal activity. In particular, a person’s actions are the result of the person’s causal powers (often including the powers of deliberation and choice) and God’s causal endorsement. Divine concurrence maintains that the nature of God’s activity is more determinate than simply conserving the created world in existence. Although divine concurrence agrees with occasionalism in holding God’s power to be necessary for any event to occur, it diverges from occasionalism insofar as it regards creatures as causally active.

Condillac, Étienne Bonnot de


(1714 - 80)

French philosopher, an empiricist who was considered the great analytical mind of his generation. Close to Rousseau and Diderot, he stayed within the church. He is closely (perhaps excessively) identified with the image of the statue that, in the Traité des sensations (Treatise on Sense Perception, 1754), he endows with the five senses to explain how perceptions are assimilated and produce understanding (cf. also his Treatise on the Origins of Human Knowledge, 1746). He maintains a critical distance from precursors: he adopts Locke’s tabula rasa but from his first work to Logique (Logic, 1780) insists on the creative role of the mind as it analyzes and compares sense impressions. His Traité des animaux (Treatise on Animals, 1755), which includes a proof of the existence of God, considers sensate creatures rather than Descartes’s animaux machines and sees God only as a final cause. He reshapes Leibniz’s monads in the Monadologie (Monadology, 1748, rediscovered in 1980). In the Langue des calculs (Language of Numbers, 1798) he proposes mathematics as a model of clear analysis.


The origin of language and creation of symbols eventually became his major concern. His break with metaphysics in the Traité des systèmes (Treatise on Systems, 1749) has been overemphasized, but Condillac does replace rational constructs with sense experience and reflection. His empiricism has been mistaken for materialism, his clear analysis for simplicity. The “ideologues,” Destutt de Tracy and Laromiguière, found Locke in his writings. Jefferson admired him. Maine de Biran, while critical, was indebted to him for concepts of perception and the self; Cousin disliked him; Saussure saw him as a forerunner in the study of the origins of language.

condition

A state of affairs or “way things are,” most commonly referred to in relation to something that implies or is implied by it. Let p, q, and r be schematic letters for declarative sentences; and let P, Q, and R be corresponding nominalizations; e.g., if p is ‘snow is white’, then P would be ‘snow’s being white’. P can be a necessary or sufficient condition of Q in any of several senses. In the weakest sense P is a sufficient condition of Q iff (if and only if): if p then q (or if P is actual then Q is actual) — where the conditional is to be read as “material,” as amounting merely to not-(p & not-q). At the same time Q is a necessary condition of P iff: if not-q then not-p. It follows that P is a sufficient condition of Q iff Q is a necessary condition of P. Stronger senses of sufficiency and of necessity are definable, in terms of this basic sense, as follows: P is nomologically sufficient (necessary) for Q iff it follows from the laws of nature, but not without them, that if p then q (that if q then p). P is alethically or metaphysically sufficient (necessary) for Q iff it is alethically or metaphysically necessary that if p then q (that if q then p). However, it is perhaps most common of all to interpret conditions in terms of subjunctive conditionals, in such a way that P is a sufficient condition of Q iff P would not occur unless Q occurred, or: if P should occur, Q would; and P is a necessary condition of Q iff Q would not occur unless P occurred, or: if Q should occur, P would.
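In the basic material sense, the equivalence between P’s sufficiency for Q and Q’s necessity for P can be checked by brute truth-table enumeration. The following Python sketch (the language is incidental to the entry) does so:

```python
from itertools import product

def implies(a, b):
    # Material conditional: 'if a then b' is false only when a is true
    # and b is false, i.e., it amounts to not-(a and not-b).
    return (not a) or b

# P is sufficient for Q iff 'if p then q'; Q is necessary for P iff
# 'if not-q then not-p'. The two readings coincide for every assignment.
for p, q in product([True, False], repeat=2):
    assert implies(p, q) == implies(not q, not p)
print("P sufficient for Q iff Q necessary for P")
```

The check covers only the weakest, truth-functional sense; the nomological, metaphysical, and subjunctive senses discussed above are not truth-functional and cannot be verified this way.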

conditional

A compound sentence, such as ‘if Abe calls, then Ben answers,’ in which one sentence, the antecedent, is connected to a second, the consequent, by the connective ‘if … then’. Propositions (statements, etc.) expressed by conditionals are called conditional propositions (statements, etc.) and, by ellipsis, simply conditionals. The ambiguity of the expression ‘if … then’ gives rise to a semantic classification of conditionals into material conditionals, causal conditionals, counterfactual conditionals, and so on. In traditional logic, conditionals are called hypotheticals, and in some areas of mathematical logic conditionals are called implications. Faithful analysis of the meanings of conditionals continues to be investigated and intensely disputed.

conditional proof

(1) The argument form ‘B follows from A; therefore, if A then B’ and arguments of this form.
(2) The rule of inference that permits one to infer a conditional given a derivation of its consequent from its antecedent. This is also known as the rule of conditional proof or ⊃-introduction. G.F.S.

conditioning

A form of associative learning that occurs when changes in thought or behavior are produced by temporal relations among events. It is common to distinguish two types of conditioning: one, classical or Pavlovian, in which behavior change results from events that occur before behavior; the other, operant or instrumental, in which behavior change occurs because of events after behavior. Roughly, classically and operantly conditioned behavior correspond to the everyday, folk-psychological distinction between involuntary and voluntary or goal-directed behavior. In classical conditioning, stimuli or events elicit a response (e.g., salivation); neutral stimuli (e.g., a dinner bell) gain control over behavior when paired with stimuli that already elicit behavior (e.g., the appearance of dinner). The behavior is involuntary. In operant conditioning, stimuli or events reinforce behavior after behavior occurs; neutral stimuli gain power to reinforce by being paired with actual reinforcers. Here, occasions on which behavior is reinforced serve as discriminative stimuli evoking behavior. Operant behavior is goal-directed, if not consciously or deliberately, then through the bond between behavior and reinforcement. Thus, the arrangement of condiments at dinner may serve as the discriminative stimulus evoking the request “Please pass the salt,” whereas saying “Thank you” may reinforce the behavior of passing the salt.


It is not easy to integrate conditioning phenomena into a unified theory of conditioning. Some theorists contend that operant conditioning is really classical conditioning veiled by subtle temporal relations among events. Other theorists contend that operant conditioning requires mental representations of reinforcers and discriminative stimuli. B.F. Skinner (1904-90) argued in Walden Two (1948) that astute, benevolent behavioral engineers can and should use conditioning to create a social utopia.

conditio sine qua non (Latin, ‘a condition without which not’)

A necessary condition; something without which something else could not be or could not occur. For example, being a plane figure is a conditio sine qua non for being a triangle. Sometimes the phrase is used emphatically as a synonym for an unconditioned presupposition, be it for an action to start or an argument to get going. I.Bo.

Condorcet, Marquis de, title of Marie-Jean-Antoine-Nicolas de Caritat


(1743 - 94)


French philosopher and political theorist who contributed to the Encyclopedia and pioneered the mathematical analysis of social institutions.

Although prominent in the Revolutionary government, he was denounced for his political views and died in prison.


Condorcet discovered the voting paradox, which shows that majoritarian voting can produce cyclical group preferences. Suppose, for instance, that voters A, B, and C rank proposals x, y, and z as follows: A: xyz, B: yzx, and C: zxy. Then in majoritarian voting x beats y and y beats z, but z in turn beats x. So the resulting group preferences are cyclical. The discovery of this problem helped initiate social choice theory, which evaluates voting systems. Condorcet argued that any satisfactory voting system must guarantee selection of a proposal that beats all rivals in majoritarian competition. Such a proposal is called a Condorcet winner. His jury theorem says that if voters register their opinions about some matter, such as whether a defendant is guilty, and the probabilities that individual voters are right are greater than 1/2, equal, and independent, then the majority vote is more likely to be correct than any individual’s or minority’s vote.
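The cycle in the example can be exhibited mechanically. In the Python sketch below, the encoding of the three rankings is an illustrative choice, not part of Condorcet’s text:

```python
# The three rankings from the example: A: xyz, B: yzx, C: zxy.
rankings = [["x", "y", "z"], ["y", "z", "x"], ["z", "x", "y"]]

def majority_prefers(a, b):
    """True if a strict majority of voters rank a above b."""
    wins = sum(r.index(a) < r.index(b) for r in rankings)
    return wins > len(rankings) / 2

# Each pairwise contest is won 2-1, so the group preference cycles
# (x beats y, y beats z, z beats x) and no Condorcet winner exists.
for a, b in [("x", "y"), ("y", "z"), ("z", "x")]:
    print(a, "beats", b, ":", majority_prefers(a, b))  # all three print True
```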


Condorcet’s main works are Essai sur l’application de l’analyse à la probabilité des décisions rendues à la pluralité des voix (Essay on the Application of Analysis to the Probability of Decisions Reached by a Majority of Votes, 1785); and a posthumous treatise on social issues, Esquisse d’un tableau historique des progrès de l’esprit humain (Sketch for a Historical Picture of the Progress of the Human Mind, 1795).

confirmation

An evidential relation between evidence and any statement (especially a scientific hypothesis) that this evidence supports. It is essential to distinguish two distinct, and fundamentally different, meanings of the term: (1) the incremental sense, in which a piece of evidence contributes at least some degree of support to the hypothesis in question — e.g., finding a fingerprint of the suspect at the scene of the crime lends some weight to the hypothesis that the suspect is guilty; and (2) the absolute sense, in which a body of evidence provides strong support for the hypothesis in question — e.g., a case presented by a prosecutor making it practically certain that the suspect is guilty. If one thinks of confirmation in terms of probability, then evidence that increases the probability of a hypothesis confirms it incrementally, whereas evidence that renders a hypothesis highly probable confirms it absolutely.
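On the probabilistic reading, both senses can be made concrete with Bayes’s theorem. In the Python sketch below, the prior and likelihoods are hypothetical numbers chosen only to mimic the fingerprint example:

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes's theorem: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|not-H)P(not-H)]."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

prior = 0.10                     # P(suspect guilty) before the fingerprint
p = posterior(prior, 0.9, 0.05)  # the evidence is far likelier if guilty
print(round(p, 3))  # 0.667: the posterior exceeds the prior, so the evidence
                    # confirms incrementally, but not absolutely (the posterior
                    # is not yet high enough to make guilt practically certain)
```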


In each of the two foregoing senses one can distinguish three types of confirmation: (i) qualitative, (ii) quantitative, and (iii) comparative. (i) Both examples in the preceding paragraph illustrate qualitative confirmation, for no numerical values of the degree of confirmation were mentioned. (ii) If a gambler, upon learning that an opponent holds a certain card, asserts that her chance of winning has increased from 2/3 to 3/4, the claim is an instance of quantitative incremental confirmation. If a physician states that on the basis of an X-ray, the probability that the patient has tuberculosis is .95, that claim exemplifies quantitative absolute confirmation. In the incremental sense, any case of quantitative confirmation involves a difference between two probability values; in the absolute sense, any case of quantitative confirmation involves only one probability value. (iii) Comparative confirmation in the incremental sense would be illustrated if an investigator said that possession of the murder weapon weighs more heavily against the suspect than does the fingerprint found at the scene of the crime. Comparative confirmation in the absolute sense would occur if a prosecutor claimed to have strong cases against two suspects thought to be involved in a crime, but that the case against one is stronger than that against the other.


Even given recognition of the foregoing six varieties of confirmation, there is still considerable controversy regarding its analysis. Some authors claim that quantitative confirmation does not exist; only qualitative and/or comparative confirmation are possible. Some authors maintain that confirmation has nothing to do with probability, whereas others — known as Bayesians — analyze confirmation explicitly in terms of Bayes’s theorem in the mathematical calculus of probability. Among those who offer probabilistic analyses there are differences as to which interpretation of probability is suitable in this context. Popper advocates a concept of corroboration that differs fundamentally from confirmation.


Many (real or apparent) paradoxes of confirmation have been posed; the most famous is the paradox of the ravens. It is plausible to suppose that ‘All ravens are black’ can be incrementally confirmed by the observation of one of its instances, namely, a black raven. However, ‘All ravens are black’ is logically equivalent to ‘All non-black things are non-ravens.’ By parity of reasoning, an instance of this statement, namely, any non-black non-raven (e.g., a white shoe), should incrementally confirm it. Moreover, the equivalence condition — whatever confirms a hypothesis must equally confirm any statement logically equivalent to it — seems eminently reasonable. The result appears to facilitate indoor ornithology, for the observation of a white shoe would seem to confirm incrementally the hypothesis that all ravens are black. Many attempted resolutions of this paradox can be found in the literature.

Confucianism

A Chinese school of thought and set of moral, ethical, and political teachings usually considered to be founded by Confucius. Before the time of Confucius (sixth-fifth century B.C.), a social group, the Ju (literally, ‘weaklings’ or ‘foundlings’), existed whose members were ritualists and sometimes also teachers by profession. Confucius belonged to this group; but although he retained the interest in rituals, he was also concerned with the then chaotic social and political situation and with the search for remedies, which he believed to lie in the restoration and maintenance of certain traditional values and norms. Later thinkers who professed to be followers of Confucius shared such concern and belief and, although they interpreted and developed Confucius’s teachings in different ways, they are often regarded as belonging to the same school of thought, traditionally referred to by Chinese scholars as Ju-chia, or the school of the Ju. The term ‘Confucianism’ is used to refer to some or all of the range of phenomena including the way of life of the Ju as a group of ritualists, the school of thought referred to as Ju-chia, the ethical, social, and political ideals advocated by this school of thought (which include but go well beyond the practice of rituals), and the influence of such ideals on the actual social and political order and the life of the Chinese.


As a school of thought, Confucianism is characterized by a common ethical ideal which includes an affective concern for all living things, varying in degree and nature depending on how such things relate to oneself; a reverential attitude toward others manifested in the observance of formal rules of conduct such as the way to receive guests; an ability to determine the proper course of conduct, whether this calls for observance of traditional norms or departure from such norms; and a firm commitment to proper conduct so that one is not swayed by adverse circumstances such as poverty or death. Everyone is supposed to have the ability to attain this ideal, and people are urged to exercise constant vigilance over their character so that they can transform themselves to embody this ideal fully. In the political realm, a ruler who embodies the ideal will care about and provide for the people, who will be attracted to him; the moral example he sets will have a transforming effect on the people.


Different Confucian thinkers have different conceptions of the way the ethical ideal may be justified and attained. Mencius (fourth century B.C.) regarded the ideal as a full realization of certain incipient moral inclinations shared by human beings, and emphasized the need to reflect on and fully develop such inclinations. Hsün Tzu (third century B.C.) regarded it as a way of optimizing the satisfaction of presocial human desires, and emphasized the need to learn the norms governing social distinctions and let them transform and regulate the pursuit of satisfaction of such desires. Different kinds of Confucian thought continued to evolve, yielding such major thinkers as Tung Chung-shu (second century B.C.) and Han Yü (A.D. 768-824). Han Yü regarded Mencius as the true transmitter of Confucius’s teachings, and this view became generally accepted, largely through the efforts of Chu Hsi (1130-1200). The Mencian form of Confucian thought continued to be developed in different ways by such major thinkers as Chu Hsi, Wang Yang-ming (1472-1529), and Tai Chen (1723-77), who differed concerning the way to attain the Confucian ideal and the metaphysics undergirding it. Despite these divergent developments, Confucius continued to be revered within this tradition of thought as its first and most important thinker, and the Confucian school of thought continued to exert great influence on Chinese life and on the social and political order down to the present century.

Confucius also known as K’ung Ch’iu, K’ung Tzu, Kung Fu-tzu (sixth-fifth century B.C.)

Chinese thinker usually regarded as founder of the Confucian school of thought. His teachings are recorded in the Lun Yü or Analects, a collection of sayings by him and by disciples, and of conversations between him and his disciples. His highest ethical ideal is jen (humanity, goodness), which includes an affective concern for the wellbeing of others, desirable attributes (e.g. filial piety) within familial, social, and political institutions, and other desirable attributes such as yung (courage, bravery). An important part of the ideal is the general observance of li (rites), the traditional norms governing conduct between people related by their different social positions, along with a critical reflection on such norms and a preparedness to adapt them to present circumstances. Human conduct should not be dictated by fixed rules, but should be sensitive to relevant considerations and should accord with yi (rightness, duty). Other important concepts include shu (consideration, reciprocity), which involves not doing to another what one would not have wished done to oneself, and chung (loyalty, commitment), interpreted variously as a commitment to the exercise of shu, to the norms of li, or to one’s duties toward superiors and equals. The ideal of jen is within the reach of all, and one should constantly reflect on one’s character and correct one’s deficiencies. Jen has transformative powers that should ideally be the basis of government; a ruler with jen will care about and provide for the people, who will be attracted to him, and the moral example he sets will inspire people to reform themselves.

conjunction

The logical operation on a pair of propositions that is typically indicated by the coordinating conjunction ‘and’. The truth table for conjunction is


P  Q  P-and-Q
T  T  T
T  F  F
F  T  F
F  F  F


Besides ‘and’, other coordinating conjunctions, including ‘but’, ‘however’, ‘moreover’, and ‘although’, can indicate logical conjunction, as can the semicolon ‘;’ and the comma ‘,’.
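The table can also be generated mechanically; here is a brief check in Python (any language with a Boolean ‘and’ would serve equally well):

```python
from itertools import product

# Enumerate the four rows of the truth table for conjunction.
rows = [(p, q, p and q) for p, q in product([True, False], repeat=2)]
for p, q, conj in rows:
    print(p, q, conj)  # only the True, True row yields True
```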

conjunction elimination

(1) The argument form ‘A and B; therefore, A (or B)’ and arguments of this form.
(2) The rule of inference that permits one to infer either conjunct from a conjunction. This is also known as the rule of simplification or ∧-elimination.

conjunction introduction

(1) The argument form ‘A,B; therefore, A and B’ and arguments of this form.
(2) The rule of inference that permits one to infer a conjunction from its two conjuncts. This is also known as the rule of conjunction introduction, ∧-introduction, or adjunction.

connected

Said of a relation R where, for any two distinct elements x and y of the domain, either xRy or yRx. R is said to be strongly connected if, for any two elements x and y, either xRy or yRx, even if x and y are identical. Given the domain of positive integers, for instance, the relation < is connected, since for any two distinct numbers a and b, either a < b or b < a. < is not strongly connected, however, since if a = b we do not have either a < b or b < a. The relation ≤, however, is strongly connected, since either a ≤ b or b ≤ a for any two numbers, including the case where a = b. An example of a relation that is not connected is the subset relation ⊆, since it is not true that for any two sets A and B, either A ⊆ B or B ⊆ A.
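Over a finite domain these definitions can be checked directly. The Python sketch below uses a small initial segment of the positive integers as a stand-in for the full (infinite) domain of the example:

```python
from itertools import product

def connected(rel, domain):
    # Connected: any two *distinct* elements are comparable.
    return all(rel(x, y) or rel(y, x)
               for x, y in product(domain, repeat=2) if x != y)

def strongly_connected(rel, domain):
    # Strongly connected: any two elements, identical or not, are comparable.
    return all(rel(x, y) or rel(y, x) for x, y in product(domain, repeat=2))

domain = range(1, 6)  # finite stand-in for the positive integers

print(connected(lambda a, b: a < b, domain))            # True
print(strongly_connected(lambda a, b: a < b, domain))   # False: fails when a = b
print(strongly_connected(lambda a, b: a <= b, domain))  # True
```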

connectionism

An approach to modeling cognitive systems which utilizes networks of simple processing units that are inspired by the basic structure of the nervous system. Other names for this approach are neural network modeling and parallel distributed processing. Connectionism was pioneered in the period 1940-65 by researchers such as Frank Rosenblatt and Oliver Selfridge. Interest in using such networks diminished during the 1970s because of limitations encountered by existing networks and the growing attractiveness of the computer model of the mind (according to which the mind stores symbols in memory and registers and performs computations upon them). Connectionist models enjoyed a renaissance in the 1980s, partly as the result of the discovery of means of overcoming earlier limitations (e.g., development of the back-propagation learning algorithm by David Rumelhart, Geoffrey Hinton, and Ronald Williams, and of the Boltzmann-machine learning algorithm by David Ackley, Geoffrey Hinton, and Terrence Sejnowski), and partly as limitations encountered with the computer model rekindled interest in alternatives. Researchers employing connectionist-type nets are found in a variety of disciplines including psychology, artificial intelligence, neuroscience, and physics. There are often major differences in the endeavors of these researchers: psychologists and artificial intelligence researchers are interested in using these nets to model cognitive behavior, whereas neuroscientists often use them to model processing in particular neural systems.


A connectionist system consists of a set of processing units that can take on activation values. These units are connected so that particular units can excite or inhibit others. The activation of any particular unit will be determined by one or more of the following: inputs from outside the system, the excitations or inhibitions supplied by other units, and the previous activation of the unit. There are a variety of different architectures invoked in connectionist systems. In feedforward nets units are clustered into layers and connections pass activations in a unidirectional manner from a layer of input units to a layer of output units, possibly passing through one or more layers of hidden units along the way. In these systems processing requires one pass of processing through the network. Interactive nets exhibit no directionality of processing: a given unit may excite or inhibit another unit, and it, or another unit influenced by it, might excite or inhibit the first unit. A number of processing cycles will ensue after an input has been given to some or all of the units until eventually the network settles into one state, or cycles through a small set of such states.


One of the most attractive features of connectionist networks is their ability to learn. This is accomplished by adjusting the weights connecting the various units of the system, thereby altering the manner in which the network responds to inputs. To illustrate the basic process of connectionist learning, consider a feedforward network with just two layers of units and one layer of connections. One learning procedure (commonly referred to as the delta rule) first requires the network to respond, using current weights, to an input. The activations on the units of the second layer are then compared to a set of target activations, and detected differences are used to adjust the weights coming from active input units. Such a procedure gradually reduces the difference between the actual response and the target response.
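The delta-rule procedure just described can be sketched in a few lines of Python. The linear output units, learning rate, and single training pattern are illustrative assumptions, not details from the entry:

```python
import random

# A two-layer feedforward net: one layer of weights runs from 3 input
# units to 2 (linear) output units.
random.seed(0)
n_in, n_out = 3, 2
weights = [[random.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_out)]

def respond(x):
    # Each output unit's activation is a weighted sum of the inputs.
    return [sum(w * xi for w, xi in zip(row, x)) for row in weights]

def delta_rule_step(x, target, rate=0.1):
    out = respond(x)
    for j in range(n_out):
        error = target[j] - out[j]                # detected difference
        for i in range(n_in):
            weights[j][i] += rate * error * x[i]  # adjust weights from active inputs

x, target = [1.0, 0.0, 1.0], [1.0, 0.0]
before = sum((t - o) ** 2 for t, o in zip(target, respond(x)))
for _ in range(50):
    delta_rule_step(x, target)
after = sum((t - o) ** 2 for t, o in zip(target, respond(x)))
print(after < before)  # True: the response gradually approaches the target
```

Each step shrinks the remaining error by a constant factor here, so repeated application gradually reduces the difference between the actual and target responses, as the text describes.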


In order to construe such networks as cognitive models it is necessary to interpret the input and output units. Localist interpretations treat individual input and output units as representing concepts such as those found in natural language. Distributed interpretations correlate only patterns of activation of a number of units with ordinary language concepts. Sometimes (but not always) distributed models will interpret individual units as corresponding to microfeatures. In one interesting variation on distributed representation, known as coarse coding, each symbol will be assigned to a different subset of the units of the system, and the symbol will be viewed as active only if a predefined number of the assigned units are active.


A number of features of connectionist nets make them particularly attractive for modeling cognitive phenomena in addition to their ability to learn from experience. They are extremely efficient at pattern-recognition tasks and often generalize very well from training inputs to similar test inputs. They can often recover complete patterns from partial inputs, making them good models for content-addressable memory. Interactive networks are particularly useful in modeling cognitive tasks in which multiple constraints must be satisfied simultaneously, or in which the goal is to satisfy competing constraints as well as possible. In a natural manner they can override some constraints on a problem when it is not possible to satisfy all, thus treating the constraints as soft. While the cognitive connectionist models are not intended to model actual neural processing, they suggest how cognitive processes can be realized in neural hardware. They also exhibit a feature demonstrated by the brain but difficult to achieve in symbolic systems: their performance degrades gracefully as units or connections are disabled or the capacity of the network is exceeded, rather than crashing.


Serious challenges have been raised to the usefulness of connectionism as a tool for modeling cognition. Many of these challenges have come from theorists who have focused on the complexities of language, especially the systematicity exhibited in language. Jerry Fodor and Zenon Pylyshyn, for example, have emphasized the manner in which the meaning of complex sentences is built up compositionally from the meaning of components, and argue both that compositionality applies to thought generally and that it requires a symbolic system. Therefore, they maintain, while cognitive systems might be implemented in connectionist nets, these nets do not characterize the architecture of the cognitive system itself, which must have capacities for symbol storage and manipulation. Connectionists have developed a variety of responses to these objections, including emphasizing the importance of cognitive functions such as pattern recognition, which have not been as successfully modeled by symbolic systems; challenging the need for symbol processing in accounting for linguistic behavior; and designing more complex connectionist architectures, such as recurrent networks, capable of responding to or producing systematic structures.

connotation

(1) The ideas and associations brought to mind by an expression (used in contrast with ‘denotation’ and ‘meaning’).
(2) In a technical use, the properties jointly necessary and sufficient for the correct application of the expression in question.

consequentialism

The doctrine that the moral rightness of an act is determined solely by the goodness of the act’s consequences. Prominent consequentialists include J. S. Mill, Moore, and Sidgwick. Maximizing versions of consequentialism — the most common sort — hold that an act is morally right if and only if it produces the best consequences of those acts available to the agent. Satisficing consequentialism holds that an act is morally right if and only if it produces enough good consequences on balance. Consequentialist theories are often contrasted with deontological ones, such as Kant’s, which hold that the rightness of an act is determined at least in part by something other than the goodness of the act’s consequences.


A few versions of consequentialism are agent-relative: that is, they give each agent different aims, so that different agents’ aims may conflict. For instance, egoistic consequentialism holds that the moral rightness of an act for an agent depends solely on the goodness of its consequences for him or her. However, the vast majority of consequentialist theories have been agent-neutral (and consequentialism is often defined in a more restrictive way so that agent-relative versions do not count as consequentialist). A doctrine is agent-neutral when it gives to each agent the same ultimate aims, so that different agents’ aims cannot conflict. For instance, utilitarianism holds that an act is morally right if and only if it produces more happiness for the sentient beings it affects than any other act available to the agent. This gives each agent the same ultimate aim, and so is agent-neutral.


Consequentialist theories differ over what features of acts they hold to determine their goodness. Utilitarian versions hold that the only consequences of an act relevant to its goodness are its effects on the happiness of sentient beings. But some consequentialists hold that the promotion of other things matters too — achievement, autonomy, knowledge, or fairness, for instance. Thus utilitarianism, as a maximizing, agent-neutral, happiness-based view is only one of a broad range of consequentialist theories.
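The contrast between maximizing and satisficing versions is, at bottom, a difference between two decision rules over outcomes, so it can be sketched in a few lines of toy code. The acts, their numerical "goodness" scores, and the threshold below are all hypothetical illustrations, not anything from the entry itself:

```python
# Toy model: each available act is scored by the total goodness of its
# consequences (flattened, artificially, to a single number).
acts = {'keep promise': 10, 'break promise': 4, 'do nothing': 7}

def maximizing_right(act, acts):
    """Maximizing consequentialism: right iff no available act does better."""
    return acts[act] == max(acts.values())

def satisficing_right(act, acts, threshold):
    """Satisficing consequentialism: right iff the act is good enough on balance."""
    return acts[act] >= threshold

assert maximizing_right('keep promise', acts)
assert not maximizing_right('do nothing', acts)
# With a (hypothetical) threshold of 6, 'do nothing' counts as right
# on the satisficing view, though not on the maximizing view.
assert satisficing_right('do nothing', acts, threshold=6)
```

The sketch also makes vivid why the two views can disagree: any act whose score clears the threshold without being maximal is right by the satisficing rule but wrong by the maximizing rule.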

consequentia mirabilis

The logical principle that if a statement follows from its own negation it must be true. Strict consequentia mirabilis is the principle that if a statement follows logically from its own negation it is logically true. The principle is often connected with the paradoxes of strict implication, according to which any statement follows from a contradiction. Since the negation of a tautology is a contradiction, every tautology follows from its own negation. However, if every expression of the form ‘if p then q’ implies ‘not-p or q’ (they need not be equivalent), then from ‘if not-p then p’ we can derive ‘not-not-p or p’ and (by the principles of double negation and repetition) derive p. Since all of these rules are unexceptionable the principle of consequentia mirabilis is also unexceptionable. It is, however, somewhat counterintuitive, hence the name (‘the astonishing implication’), which goes back to its medieval discoverers (or rediscoverers).
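For classical propositional logic, the principle and the derivation given above can be checked mechanically by a brute-force truth-table sweep. The helper name `implies` is ours, not the entry's; this is only a sketch for the two-valued case:

```python
def implies(p, q):
    """Material conditional: 'if p then q' is false only when p is true and q is false."""
    return (not p) or q

# Consequentia mirabilis, schematically: ((not-p -> p) -> p) holds under
# every valuation of p.
assert all(implies(implies(not p, p), p) for p in (True, False))

# The derivation in the text: 'if not-p then p' matches 'not-not-p or p',
# which reduces to p by double negation and repetition.
for p in (True, False):
    assert implies(not p, p) == ((not (not p)) or p) == p
```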

consistency

In traditional Aristotelian logic, a semantic notion: two or more statements are called consistent if they are simultaneously true under some interpretation (cf., e.g., W.S. Jevons, Elementary Lessons in Logic, 1870). In modern logic there is a syntactic definition that also fits complex (e.g., mathematical) theories developed since Frege’s Begriffsschrift (1879): a set of statements is called consistent with respect to a certain logical calculus, if no formula ‘P & -P’ is derivable from those statements by the rules of the calculus; i.e., the theory is free from contradictions. If these definitions are equivalent for a logic, we have a significant fact, as the equivalence amounts to the completeness of its system of rules. The first such completeness theorem was obtained for sentential or propositional logic by Paul Bernays in 1918 (in his Habilitationsschrift that was partially published as Axiomatische Untersuchung des Aussagen-Kalküls der “Principia Mathematica,” 1926) and, independently, by Emil Post (in Introduction to a General Theory of Elementary Propositions, 1921); the completeness of predicate logic was proved by Gödel (in Die Vollständigkeit der Axiome des logischen Funktionenkalküls, 1930). The crucial step in such proofs shows that syntactic consistency implies semantic consistency.
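The semantic notion of consistency defined above — simultaneous truth under some interpretation — is directly checkable for small propositional cases by enumerating all assignments. The function name and the encoding of formulas as Python callables are our own illustrative choices:

```python
from itertools import product

def consistent(formulas, variables):
    """Semantic consistency: some interpretation makes every formula true at once."""
    for values in product([True, False], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(f(env) for f in formulas):
            return True
    return False

# {p, if p then q} is consistent: take p = q = True.
assert consistent([lambda e: e['p'], lambda e: (not e['p']) or e['q']], ['p', 'q'])
# {p, not-p} is inconsistent: no interpretation makes both true.
assert not consistent([lambda e: e['p'], lambda e: not e['p']], ['p'])
```

Syntactic consistency, by contrast, is a property of what is derivable in a calculus; the completeness theorems mentioned below are exactly what licenses moving between the two notions.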


Cantor applied the notion of consistency to sets. In a well-known letter to Dedekind (1899) he distinguished between an inconsistent and a consistent multiplicity; the former is such “that the assumption that all of its elements ‘are together’ leads to a contradiction,” whereas the elements of the latter “can be thought of without contradiction as ‘being together.’ ” Cantor had conveyed these distinctions and their motivation by letter to Hilbert in 1897 (see W. Purkert and H.J. Ilgauds, Georg Cantor, 1987). Hilbert pointed out explicitly in 1904 that Cantor had not given a rigorous criterion for distinguishing between consistent and inconsistent multiplicities. Already in his Über den Zahlbegriff (1899) Hilbert had suggested a remedy by giving consistency proofs for suitable axiomatic systems; e.g., to give the proof of the “existence of the totality of real numbers or — in the terminology of G. Cantor — the proof of the fact that the system of real numbers is a consistent (complete) set” by establishing the consistency of an axiomatic characterization of the reals — in modern terminology, of the theory of complete, ordered fields. And he claimed, somewhat indeterminately, that this could be done “by a suitable modification of familiar methods.”


After 1904, Hilbert pursued a new way of giving consistency proofs. This novel way of proceeding, still aiming for the same goal, was to make use of the formalization of the theory at hand. However, in the formulation of Hilbert’s Program during the 1920s the point of consistency proofs was no longer to guarantee the existence of suitable sets, but rather to establish the instrumental usefulness of strong mathematical theories T, like axiomatic set theory, relative to finitist mathematics. That focus rested on the observation that the statement formulating the syntactic consistency of T is equivalent to the reflection principle Pr(a, ‘s’) → s; here Pr is the finitist proof predicate for T, s is a finitistically meaningful statement, and ‘s’ its translation into the language of T. If one could establish finitistically the consistency of T, one could be sure — on finitist grounds — that T is a reliable instrument for the proof of finitist statements.


There are many examples of significant relative consistency proofs: (i) non-Euclidean geometry relative to Euclidean, Euclidean geometry relative to analysis; (ii) set theory with the axiom of choice relative to set theory (without the axiom of choice), set theory with the negation of the axiom of choice relative to set theory; (iii) classical arithmetic relative to intuitionistic arithmetic, subsystems of classical analysis relative to intuitionistic theories of constructive ordinals. The mathematical significance of relative consistency proofs is often brought out by sharpening them to establish conservative extension results; the latter may then ensure, e.g., that the theories have the same class of provably total functions. The initial motivation for such arguments is, however, frequently philosophical: one wants to guarantee the coherence of the original theory on an epistemologically distinguished basis.

Constant, Benjamin, in full Henri-Benjamin Constant de Rebecque


(1767-1830)

Swiss-born defender of liberalism and passionate analyst of French and European politics. He welcomed the French Revolution but not the Reign of Terror, the violence of which he avoided by accepting a lowly diplomatic post in Braunschweig (1787-94). In 1795 he returned to Paris with Madame de Staël and intervened in parliamentary debates. His pamphlets opposed both extremes, the Jacobin and the Bonapartist. Impressed by Rousseau’s Social Contract, he came to fear that like Napoleon’s dictatorship, the “general will” could threaten civil rights. He had first welcomed Napoleon, but turned against his autocracy. He favored parliamentary democracy, separation of church and state, and a bill of rights. The high point of his political career came with membership in the Tribunat (1800-02), a consultative chamber appointed by the Senate.


His centrist position is evident in the Principes de politique (1806-10). Had not republican terror been as destructive as the Empire? In chapters 16-17, Constant contrasts the liberty of the ancients with that of the moderns. He assumes that the Greek world was given to war, and therefore strengthened “political liberty” that favors the state over the individual (the liberty of the ancients). Fundamentally optimistic, he believed that war was a thing of the past, and that the modern world needs to protect “civil liberty,” i.e., the liberty of the individual (the liberty of the moderns). The great merit of Constant’s comparison is the analysis of historical forces, the theory that governments must support current needs and do not depend on deterministic factors such as the size of the state, its form of government, geography, climate, and race. Here he contradicts Montesquieu.


The opposition between ancient and modern liberty expresses a radical liberalism that did not seem to fit French politics. However, it was the beginning of the liberal tradition, contrasting political liberty in the service of the state with the civil liberty of the citizen (cf. Mill’s On Liberty, 1859, and Berlin’s Two Concepts of Liberty, 1958). Principes remained in manuscript until 1861; the scholarly editions of Étienne Hofmann (1980) are far more recent. Hofmann calls Principes the essential text between Montesquieu and Tocqueville. It was translated into English as Constant, Political Writings (ed. Biancamaria Fontana, 1988 and 1997).


Forced into retirement by Napoleon, Constant wrote his literary masterpieces, Adolphe and the diaries. He completed the Principes, then turned to De la religion (6 vols.), which he considered his supreme achievement.

constitution

A relation between concrete particulars (including objects and events) and their parts, according to which at some time t, a concrete particular is said to be constituted by the sum of its parts without necessarily being identical with that sum. For instance, at some specific time t, Mt. Everest is constituted by the various chunks of rock and other matter that form Everest at t, though at t Everest would still have been Everest even if, contrary to fact, some particular rock that is part of the sum had been absent. Hence, although Mt. Everest is not identical to the sum of its material parts at t, it is constituted by them. The relation of constitution figures importantly in recent attempts to articulate and defend metaphysical physicalism (naturalism). To capture the idea that all that exists is ultimately physical, we may say that at the lowest level of reality, there are only microphysical phenomena, governed by the laws of microphysics, and that all other objects and events are ultimately constituted by objects and events at the microphysical level.

contextualism

The view that inferential justification always takes place against a background of beliefs that are themselves in no way evidentially supported. The view has not often been defended by name, but Dewey, Popper, Austin, and Wittgenstein are arguably among its notable exponents. As this list perhaps suggests, contextualism is closely related to the “relevant alternatives” conception of justification, according to which claims to knowledge are justified not by ruling out any and every logically possible way in which what is asserted might be false or inadequately grounded, but by excluding certain especially relevant alternatives or epistemic shortcomings, these varying from one context of inquiry to another.


Formally, contextualism resembles foundationalism. But it differs from traditional, or substantive, foundationalism in two crucial respects. First, foundationalism insists that basic beliefs be self-justifying or intrinsically credible. True, for contemporary foundationalists, this intrinsic credibility need not amount to incorrigibility, as earlier theorists tended to suppose: but some degree of intrinsic credibility is indispensable for basic beliefs. Second, substantive foundational theories confine intrinsic credibility, hence the status of being epistemologically basic, to beliefs of some fairly narrowly specified kind(s). By contrast, contextualists reject all forms of the doctrine of intrinsic credibility, and in consequence place no restrictions on the kinds of beliefs that can, in appropriate circumstances, function as contextually basic. They regard this as a strength of their position, since explaining and defending attributions of intrinsic credibility has always been the foundationalist’s main problem.


Contextualism is also distinct from the coherence theory of justification, foundationalism’s traditional rival. Coherence theorists are as suspicious as contextualists of the foundationalist’s specified kinds of basic beliefs. But coherentists react by proposing a radically holistic model of inferential justification, according to which a belief becomes justified through incorporation into a suitably coherent overall system of beliefs or “total view.” There are many well-known problems with this approach: the criteria of coherence have never been very clearly articulated; it is not clear what satisfying such criteria has to do with making our beliefs likely to be true; and since it is doubtful whether anyone has a very clear picture of his system of beliefs as a whole, to insist that justification involves comparing the merits of competing total views seems to subject ordinary justificatory practices to severe idealization. Contextualism, in virtue of its formal affinity with foundationalism, claims to avoid all such problems.


Foundationalists and coherentists are apt to respond that contextualism reaps these benefits by failing to show how genuinely epistemic justification is possible. Contextualism, they charge, is finally indistinguishable from the skeptical view that “justification” depends on unwarranted assumptions. Even if, in context, these are pragmatically acceptable, epistemically speaking they are still just assumptions.


This objection raises the question whether contextualists mean to answer the same questions as more traditional theorists, or answer them in the same way. Traditional theories of justification are framed so as to respond to highly general skeptical questions — e.g., are we justified in any of our beliefs about the external world? It may be that contextualist theories are (or should be) advanced, not as direct answers to skepticism, but in conjunction with attempts to diagnose or dissolve traditional skeptical problems. Contextualists need to show how and why traditional demands for “global” justification misfire, if they do. If traditional skeptical problems are taken at face value, it is doubtful whether contextualism can answer them.

Continental philosophy

The gradually changing spectrum of philosophical views that in the twentieth century developed in Continental Europe and that are notably different from the various forms of analytic philosophy that during the same period flourished in the Anglo-American world. Immediately after World War II the expression was more or less synonymous with ‘phenomenology’. The latter term, already used earlier in German idealism, received a completely new meaning in the work of Husserl. Later on the term was also applied, often with substantial changes in meaning, to the thought of a great number of other Continental philosophers such as Scheler, Alexander Pfander, Hedwig Conrad-Martius, Nicolai Hartmann, and most philosophers mentioned below. For Husserl the aim of philosophy is to prepare humankind for a genuinely philosophical form of life, in and through which each human being gives him- or herself a rule through reason. Since the Renaissance, many philosophers have tried in vain to materialize this aim. In Husserl’s view, the reason was that philosophers failed to use the proper philosophical method. Husserl’s phenomenology was meant to provide philosophy with the method needed.


Among those deeply influenced by Husserl’s ideas the so-called existentialists must be mentioned first. If ‘existentialism’ is construed strictly, it refers mainly to the philosophy of Sartre and Beauvoir. In a very broad sense it refers to the ideas of an entire group of thinkers influenced methodologically by Husserl and in content by Marcel, Heidegger, Sartre, or Merleau-Ponty. In this case one often speaks of existential phenomenology.


When Heidegger’s philosophy became better known in the Anglo-American world, ‘Continental philosophy’ received again a new meaning. From Heidegger’s first publication, Being and Time (1927), it was clear that his conception of phenomenology differs from that of Husserl in several important respects. That is why he qualified the term and spoke of hermeneutic phenomenology and clarified the expression by examining the “original” meaning of the Greek words from which the term was formed. In his view phenomenology must try “to let that which shows itself be seen from itself in the very way in which it shows itself from itself.” Heidegger applied the method first to the mode of being of man with the aim of approaching the question concerning the meaning of being itself through this phenomenological interpretation. Of those who took their point of departure from Heidegger, but also tried to go beyond him, Gadamer and Ricoeur must be mentioned.


The structuralist movement in France added another connotation to ‘Continental philosophy’. The term structuralism above all refers to an activity, a way of knowing, speaking, and acting that extends over a number of distinct domains of human activity: linguistics, aesthetics, anthropology, psychology, psychoanalysis, mathematics, philosophy of science, and philosophy itself. Structuralism, which became a fashion in Paris and later in Western Europe generally, reached its high point on the Continent between 1950 and 1970. It was inspired by ideas first formulated by Russian formalism (1916-26) and Czech structuralism (1926-40), but also by ideas derived from the works of Marx and Freud. In France Foucault, Barthes, Althusser, and Derrida were the leading figures. Structuralism is not a new philosophical movement; it must be characterized by structuralist activity, which is meant to evoke ever new objects. This can be done in a constructive and a reconstructive manner, but these two ways of evoking objects can never be separated. One finds the constructive aspect primarily in structuralist aesthetics and linguistics, whereas the reconstructive aspect is more apparent in philosophical reflections upon the structuralist activity. Influenced by Nietzschean ideas, structuralism later developed in a number of directions, including poststructuralism; in this context the works of Gilles Deleuze, Lyotard, Irigaray, and Kristeva must be mentioned.


After 1970 ‘Continental philosophy’ received again a new connotation: deconstruction. At first deconstruction presented itself as a reaction against philosophical hermeneutics, even though both deconstruction and hermeneutics claim their origin in Heidegger’s reinterpretation of Husserl’s phenomenology. The leading philosopher of the movement is Derrida, who at first tried to think along phenomenological and structuralist lines. Derrida formulated his “final” view in a linguistic form that is both complex and suggestive. It is not easy in a few sentences to state what deconstruction is. Generally speaking one can say that what is being deconstructed is texts; they are deconstructed to show that there are conflicting conceptions of meaning and implication in every text so that it is never possible definitively to show what a text really means. Derrida’s own deconstructive work is concerned mainly with philosophical texts, whereas others apply the “method” predominantly to literary texts. What according to Derrida distinguishes philosophy is its reluctance to face the fact that it, too, is a product of linguistic and rhetorical figures. Deconstruction is here that process of close reading that focuses on those elements where philosophers in their work try to erase all knowledge of its own linguistic and rhetorical dimensions. It has been said that if construction typifies modern thinking, then deconstruction is the mode of thinking that radically tries to overcome modernity. Yet this view is simplistic, since one also deconstructs Plato and many other thinkers and philosophers of the premodern age.


People concerned with social and political philosophy who have sought affiliation with Continental philosophy often appeal to the so-called critical theory of the Frankfurt School in general and to Habermas’s theory of communicative action in particular. Habermas’s view, like the position of the Frankfurt School in general, is philosophically eclectic. It tries to bring into harmony ideas derived from Kant, German idealism, and Marx, as well as ideas from the sociology of knowledge and the social sciences. Habermas believes that his theory makes it possible to develop a communication community without alienation that is guided by reason in such a way that the community can stand freely in regard to the objectively given reality. Critics have pointed out that in order to make this theory work Habermas must substantiate a number of assumptions that until now he has not been able to justify.

contingent

Neither impossible nor necessary; i.e., both possible and non-necessary. The modal property of being contingent is attributable to a proposition, state of affairs, event, or — more debatably — an object. Muddles about the relationship between this and other modal properties have abounded ever since Aristotle, who initially conflated contingency with possibility but later realized that something that is possible may also be necessary, whereas something that is contingent cannot be necessary. Even today many philosophers are not clear about the “opposition” between contingency and necessity, mistakenly supposing them to be contradictory notions (probably because within the domain of true propositions the contingent and the necessary are indeed both exclusive and exhaustive of one another). But the contradictory of ‘necessary’ is ‘non-necessary’; that of ‘contingent’ is ‘non-contingent’, as the following extended modal square of opposition shows:

[Extended modal square of opposition diagram not reproduced.]

These logicosyntactical relationships are preserved through various semantical interpretations, such as those involving: (a) the logical modalities (proposition P is logically contingent just when P is neither a logical truth nor a logical falsehood); (b) the causal or physical modalities (state of affairs or event E is physically contingent just when E is neither physically necessary nor physically impossible); and (c) the deontic modalities (act A is morally indeterminate just when A is neither morally obligatory nor morally forbidden).
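For interpretation (a), the logical modalities, these definitions can be checked concretely with truth tables: a formula is logically contingent just when it is neither a tautology nor a contradiction. The function names and the encoding of formulas as callables are our own illustrative choices:

```python
from itertools import product

def truth_values(formula, variables):
    """All truth values of the formula across every valuation of its variables."""
    return [formula(dict(zip(variables, vals)))
            for vals in product([True, False], repeat=len(variables))]

def contingent(formula, variables):
    """Logically contingent: true under some valuations, false under others."""
    vals = truth_values(formula, variables)
    return any(vals) and not all(vals)

def non_contingent(formula, variables):
    """The contradictory of 'contingent': either necessary or impossible."""
    return not contingent(formula, variables)

# 'p' is contingent; 'p or not-p' is necessary; 'p and not-p' is impossible.
assert contingent(lambda e: e['p'], ['p'])
assert non_contingent(lambda e: e['p'] or not e['p'], ['p'])   # necessary
assert non_contingent(lambda e: e['p'] and not e['p'], ['p'])  # impossible
```

Note that the last two assertions illustrate the point in the entry: ‘non-contingent’ covers both the necessary and the impossible, so it is the contradictory of ‘contingent’, not a synonym for ‘necessary’.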


In none of these cases does ‘contingent’ mean ‘dependent,’ as in the phrase ‘is contingent upon’. Yet just such a notion of contingency seems to feature prominently in certain formulations of the cosmological argument, all created objects being said to be contingent beings and God alone to be a necessary or non-contingent being. Conceptual clarity is not furthered by assimilating this sense of ‘contingent’ to the others.

continuum problem

An open question that arose in Cantor’s theory of infinite cardinal numbers. By definition, two sets have the same cardinal number if there is a one-to-one correspondence between them. For example, the function that sends 0 to 0, 1 to 2, 2 to 4, etc., shows that the set of even natural numbers has the same cardinal number as the set of all natural numbers, namely ℵ0. That ℵ0 is not the only infinite cardinal follows from Cantor’s theorem: the power set of any set (i.e., the set of all its subsets) has a greater cardinality than the set itself. So, e.g., the power set of the natural numbers, i.e., the set of all sets of natural numbers, has a cardinal number greater than ℵ0. The first infinite number greater than ℵ0 is ℵ1; the next after that is ℵ2, and so on.
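Both ideas in this paragraph can be made concrete in a few lines: the pairing n ↦ 2n exhibits the one-to-one correspondence between the naturals and the evens, and the diagonal construction behind Cantor’s theorem shows, for any sample mapping, a subset that the mapping misses. The sample set and mapping below are our own small illustrations:

```python
def f(n):
    """The pairing n -> 2n from the naturals onto the evens."""
    return 2 * n

def g(m):
    """Its inverse, showing the correspondence is one-to-one."""
    return m // 2

assert all(g(f(n)) == n for n in range(1000))

# Cantor's diagonal construction, in miniature: given any f from a set S to
# its power set, the set D = {x in S : x not in f(x)} differs from f(x) at x,
# so f cannot be onto the power set.
def diagonal(S, f):
    return {x for x in S if x not in f(x)}

S = {0, 1, 2}
f_map = {0: {0}, 1: set(), 2: {0, 1, 2}}.__getitem__
D = diagonal(S, f_map)
assert all(D != f_map(x) for x in S)  # D is missed by this particular f
```

Of course, for infinite sets no finite check settles anything; the point of the sketch is only to display the diagonal set whose existence drives the general theorem.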


When arithmetical operations are extended into the infinite, the cardinal number of the power set of the natural numbers turns out to be 2^ℵ0. By Cantor’s theorem, 2^ℵ0 must be greater than ℵ0; the conjecture that it is equal to ℵ1 is Cantor’s continuum hypothesis (in symbols, CH or 2^ℵ0 = ℵ1). Since 2^ℵ0 is also the cardinality of the set of points on a continuous line, CH can also be stated in this form: any infinite set of points on a line can be brought into one-to-one correspondence either with the set of natural numbers or with the set of all points on the line.


Cantor and others attempted to prove CH, without success. It later became clear, due to the work of Gödel and Cohen, that their failure was inevitable: the continuum hypothesis can neither be proved nor disproved from the axioms of set theory (ZFC). The question of its truth or falsehood — the continuum problem — remains open.

contractarianism

A family of moral and political theories that make use of the idea of a social contract. Traditionally philosophers (such as Hobbes and Locke) used the social contract idea to justify certain conceptions of the state. In the twentieth century philosophers such as John Rawls have used the social contract notion to define and defend moral conceptions (both conceptions of political justice and individual morality), often (but not always) doing so in addition to developing social contract theories of the state. The term ‘contractarian’ most often applies to this second type of theory.


There are two kinds of moral argument that the contract image has spawned, the first rooted in Hobbes and the second rooted in Kant. Hobbesians start by insisting that what is valuable is what a person desires or prefers, not what he ought to desire or prefer (for no such prescriptively powerful object exists); and rational action is action that achieves or maximizes the satisfaction of desires or preferences. They go on to insist that moral action is rational for a person to perform if and only if such action advances the satisfaction of his desires or preferences. And they argue that because moral action leads to peaceful and harmonious living conducive to the satisfaction of almost everyone’s desires or preferences, moral actions are rational for almost everyone and thus “mutually agreeable.” But Hobbesians believe that, to ensure that no cooperative person becomes the prey of immoral aggressors, moral actions must be the conventional norms in a community, so that each person can expect that if she behaves cooperatively, others will do so too. These conventions constitute the institution of morality in a society.


So the Hobbesian moral theory is committed to the idea that morality is a human-made institution, which is justified only to the extent that it effectively furthers human interests. Hobbesians explain the existence of morality in society by appealing to the convention-creating activities of human beings, while arguing that the justification of morality in any human society depends upon how well its moral conventions serve individuals’ desires or preferences. By considering “what we could agree to” if we reappraised and redid the cooperative conventions in our society, we can determine the extent to which our present conventions are “mutually agreeable” and so rational for us to accept and act on. Thus, Hobbesians invoke both actual agreements (or rather, conventions) and hypothetical agreements (which involve considering what conventions would be “mutually agreeable”) at different points in their theory; the former are what they believe our moral life consists in; the latter are what they believe our moral life should consist in — i.e., what our actual moral life should model. So the notion of the contract does not do justificational work by itself in the Hobbesian moral theory: this term is used only metaphorically. What we “could agree to” has moral force for the Hobbesians not because make-believe promises in hypothetical worlds have any binding force but because this sort of agreement is a device that (merely) reveals how the agreed-upon outcome is rational for all of us. In particular, thinking about “what we could all agree to” allows us to construct a deduction of practical reason to determine what policies are mutually advantageous.


The second kind of contractarian theory is derived from the moral theorizing of Kant. In his later writings Kant proposed that the “idea” of the “Original Contract” could be used to determine what policies for a society would be just. When Kant asks “What could people agree to?,” he is not trying to justify actions or policies by invoking, in any literal sense, the consent of the people. Only the consent of real people can be legitimating, and Kant talks about hypothetical agreements made by hypothetical people. But he does believe these make-believe agreements have moral force for us because the process by which these people reach agreement is morally revealing.


Kant’s contracting process has been further developed by subsequent philosophers, such as Rawls, who concentrates on defining the hypothetical people who are supposed to make this agreement so that their reasoning will not be tarnished by immorality, injustice, or prejudice, thus ensuring that the outcome of their joint deliberations will be morally sound. Those contractarians who disagree with Rawls define the contracting parties in different ways, thereby getting different results. The Kantians’ social contract is therefore a device used in their theorizing to reveal what is just or what is moral. So like Hobbesians, their contract talk is really just a way of reasoning that allows us to work out conceptual answers to moral problems. But whereas the Hobbesians’ use of contract language expresses the fact that, on their view, morality is a human invention which (if it is well invented) ought to be mutually advantageous, the Kantians’ use of the contract language is meant to show that moral principles and conceptions are provable theorems derived from a morally revealing and authoritative reasoning process or “moral proof procedure” that makes use of the social contract idea.


Both kinds of contractarian theory are individualistic, in the sense that they assume that moral and political policies must be justified with respect to, and answer the needs of, individuals. Accordingly, these theories have been criticized by communitarian philosophers, who argue that moral and political policies can and should be decided on the basis of what is best for a community. They are also attacked by utilitarian theorists, whose criterion of morality is the maximization of the utility of the community, and not the mutual satisfaction of the needs or preferences of individuals. Contractarians respond that whereas utilitarianism fails to take seriously the distinction between persons, contractarian theories make moral and political policies answerable to the legitimate interests and needs of individuals, which, contra the communitarians, they take to be the starting point of moral theorizing.

contraposition

The immediate logical operation on any categorical proposition that is accomplished by first forming the complements of both the subject term and the predicate term of that proposition and then interchanging these complemented terms. Thus, contraposition applied to the categorical proposition ‘All cats are felines’ yields ‘All non-felines are non-cats’, where ‘non-feline’ and ‘non-cat’ are, respectively, the complements (or complementary terms) of ‘feline’ and ‘cat’. The result of applying contraposition to a categorical proposition is said to be the contrapositive of that proposition.
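On a fixed finite universe, the equivalence of an A-proposition with its contrapositive can be verified directly with set operations. The universe, the predicates, and the helper names below are all hypothetical illustrations:

```python
universe = set(range(20))
cats = {2, 3, 5}          # the term S
felines = {2, 3, 5, 7}    # the term P (here every cat is a feline)

def all_are(S, P):
    """'All S are P' on the modern reading: S is a subset of P."""
    return S <= P

def complement(X):
    """The complementary term: everything in the universe outside X."""
    return universe - X

# 'All cats are felines' and 'All non-felines are non-cats' agree in truth value:
assert all_are(cats, felines) == all_are(complement(felines), complement(cats))

# And they agree when the original is false as well:
dogs = {1, 7}
assert all_are(dogs, cats) == all_are(complement(cats), complement(dogs)) == False
```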

contraries

Any pair of propositions that cannot both be true but can both be false; derivatively, any pair of properties that cannot both apply to a thing but that can both fail to apply to a thing. Thus the propositions ‘This object is red all over’ and ‘This object is green all over’ are contraries, as are the properties of being red all over and being green all over. Traditionally, it was considered that the categorical A -proposition ‘All S’s are P’s’ and the categorical E-proposition ‘No S’s are P’s’ were contraries; but according to De Morgan and most subsequent logicians, these two propositions are both true when there are no S’s at all, so that modern logicians do not usually regard the categorical A- and E-propositions as being true contraries.
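The two-part test for contraries, and the modern point about empty subject classes, can both be checked on small finite models. The toy "situations," predicates, and function names are our own:

```python
def contraries(p, q, situations):
    """Contraries: in no situation both true, but in some situation both false."""
    return (not any(p(s) and q(s) for s in situations)
            and any(not p(s) and not q(s) for s in situations))

# Toy situations: each is just the colour of a single object.
situations = ['red', 'green', 'blue']
red = lambda s: s == 'red'
green = lambda s: s == 'green'
assert contraries(red, green, situations)  # never both true; both false at 'blue'

# On the modern reading, the A- and E-forms are both (vacuously) true when
# there are no S's, so they fail the test for contraries:
def A(S, P):
    return all(x in P for x in S)      # 'All S are P'

def E(S, P):
    return all(x not in P for x in S)  # 'No S are P'

unicorns, horned = set(), {1}
assert A(unicorns, horned) and E(unicorns, horned)
```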

contravalid

Designating a proposition P in a logical system such that every proposition in the system is a consequence of P. In most of the typical and familiar logical systems, contravalidity coincides with self-contradictoriness.
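In classical propositional logic the self-contradictory formulas are exactly the contravalid ones, and this is easy to see by truth tables: a premise that is true under no valuation vacuously entails every formula. The encoding below is our own sketch:

```python
from itertools import product

def entails(premise, conclusion, variables):
    """Classical consequence: every valuation making the premise true
    also makes the conclusion true."""
    return all(
        conclusion(env)
        for vals in product([True, False], repeat=len(variables))
        for env in [dict(zip(variables, vals))]
        if premise(env)
    )

contradiction = lambda e: e['p'] and not e['p']

# The contradiction entails q, and equally entails not-q: everything follows.
assert entails(contradiction, lambda e: e['q'], ['p', 'q'])
assert entails(contradiction, lambda e: not e['q'], ['p', 'q'])

# A satisfiable premise is not contravalid: p does not entail q.
assert not entails(lambda e: e['p'], lambda e: e['q'], ['p', 'q'])
```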

control

An apparently causal phenomenon closely akin to power and important for such topics as intentional action, freedom, and moral responsibility. Depending upon the control you had over the event, your finding a friend’s stolen car may or may not be an intentional action, a free action, or an action for which you deserve moral credit. Control seems to be a causal phenomenon. Try to imagine controlling a car, say, without causing anything. If you cause nothing, you have no effect on the car, and one does not control a thing on which one has no effect. But control need not be causally deterministic. Even if a genuine randomizer in your car’s steering mechanism gives you only a 99 percent chance of making turns you try to make, you still have considerable control in that sphere. Some philosophers claim that we have no control over anything if causal determinism is true. That claim is false. When you drive your car, you normally are in control of its speed and direction, even if our world happens to be deterministic.

conventionalism

The philosophical doctrine that logical truth and mathematical truth are created by our choices, not dictated or imposed on us by the world. The doctrine is a more specific version of the linguistic theory of logical and mathematical truth, according to which the statements of logic and mathematics are true because of the way people use language. Of course, any statement owes its truth to some extent to facts about linguistic usage. For example, ‘Snow is white’ is true (in English) because of the facts that (1) ‘snow’ denotes snow, (2) ‘is white’ is true of white things, and (3) snow is white. What the linguistic theory asserts is that statements of logic and mathematics owe their truth entirely to the way people use language. Extralinguistic facts such as (3) are not relevant to the truth of such statements. Which aspects of linguistic usage produce logical truth and mathematical truth? The conventionalist answer is: certain linguistic conventions. These conventions are said to include rules of inference, axioms, and definitions.


The idea that geometrical truth is truth we create by adopting certain conventions received support from the discovery of non-Euclidean geometries. Prior to this discovery, Euclidean geometry had been seen as a paradigm of a priori knowledge. The further discovery that these alternative systems are consistent made Euclidean geometry seem rejectable without violating rationality. Whether we adopt the Euclidean system or a non-Euclidean system seems to be a matter of our choice based on such pragmatic considerations as simplicity and convenience.


Moving to number theory, conventionalism received a prima facie setback by the discovery that arithmetic is incomplete if consistent. For let S be an undecidable sentence, i.e., a sentence for which there is neither proof nor disproof. Suppose S is true. In what conventions does its truth consist? Not axioms, rules of inference, and definitions. For if its truth consisted in these items it would be provable. Suppose S is not true. Then its negation must be true. In what conventions does its truth consist? Again, no answer. It appears that if S is true or its negation is true and if neither S nor its negation is provable, then not all arithmetic truth is truth by convention. A response the conventionalist could give is that neither S nor its negation is true if S is undecidable. That is, the conventionalist could claim that arithmetic has truth-value gaps.


As to logic, all truths of classical logic are provable and, unlike the case of number theory and geometry, axioms are dispensable. Rules of inference suffice. As with geometry, there are alternatives to classical logic. The intuitionist, e.g., does not accept the rule ‘From not-not-A infer A’. Even detachment — ‘From A, if A then B, infer B’ — is rejected in some multivalued systems of logic. These facts support the conventionalist doctrine that adopting any set of rules of inference is a matter of our choice based on pragmatic considerations. But (the anti-conventionalist might respond) consider a simple logical truth such as ‘If Tom is tall, then Tom is tall’. Granted that this is provable by rules of inference from the empty set of premises, why does it follow that its truth is not imposed on us by extralinguistic facts about Tom? If Tom is tall the sentence is true because its consequent is true. If Tom is not tall the sentence is true because its antecedent is false. In either case the sentence owes its truth to facts about Tom.
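The anti-conventionalist’s case analysis is just the two-row truth table of p → p; a quick check in Python:

```python
# 'If Tom is tall, then Tom is tall' as a material conditional p -> p:
implies = lambda p, q: (not p) or q

table = {tall: implies(tall, tall) for tall in (True, False)}
print(table)  # true in either case, whatever the facts about Tom
```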

convention T

A criterion of material adequacy (of proposed truth definitions) discovered, formally articulated, adopted, and so named by Tarski in connection with his 1929 definition of the concept of truth in a formalized language. Convention T is one of the most important of several independent proposals Tarski made concerning philosophically sound and logically precise treatment of the concept of truth. Various of these proposals have been criticized, but convention T has remained virtually unchallenged and is regarded almost as an axiom of analytic philosophy. To say that a proposed definition of an established concept is materially adequate is to say that it is “neither too broad nor too narrow,” i.e., that the concept it characterizes is coextensive with the established concept. Since, as Tarski emphasized, for many formalized languages there are no criteria of truth, it would seem that there can be no general criterion of material adequacy of truth definitions. But Tarski brilliantly finessed this obstacle by discovering a specification that is fulfilled by the established correspondence concept of truth and that has the further property that any two concepts fulfilling it are necessarily coextensive. Basically, convention T requires that to be materially adequate a proposed truth definition must imply all of the infinitely many relevant Tarskian biconditionals; e.g., the sentence ‘Some perfect number is odd’ is true if and only if some perfect number is odd. Loosely speaking, a Tarskian biconditional for English is a sentence obtained from the form ‘The sentence ———— is true if and only if ————’ by filling the right blank with a sentence and filling the left blank with a name of the sentence. Tarski called these biconditionals “equivalences of the form T” and referred to the form as a “scheme.” Later writers also refer to the form as “scheme T.”
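Scheme T can be mimicked mechanically: fill the right blank with a sentence and the left blank with a quotation-name of that same sentence. A disquotational sketch (the function name is ours):

```python
def tarskian_biconditional(sentence: str) -> str:
    # Left blank: a name (here, a quotation) of the sentence;
    # right blank: the sentence itself.
    return f"The sentence '{sentence}' is true if and only if {sentence}."

t_sentence = tarskian_biconditional("some perfect number is odd")
print(t_sentence)
```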

converse

(1) Narrowly, the result of the immediate logical operation called conversion on any categorical proposition, accomplished by interchanging the subject term and the predicate term of that proposition. Thus, the converse of the categorical proposition ‘All cats are felines’ is ‘All felines are cats’.
(2) More broadly, the proposition obtained from a given ‘if … then …’ (conditional) proposition by interchanging the antecedent and the consequent clauses, i.e., the propositions following the ‘if’ and the ‘then’, respectively; also, the argument obtained from an argument of the form ‘P; therefore Q’ by interchanging the premise and the conclusion.

converse, outer and inner

Respectively, the result of “converting” the two “terms” or the relation verb of a relational sentence. The outer converse of ‘Abe helps Ben’ is ‘Ben helps Abe’ and the inner converse is ‘Abe is helped by Ben’. In simple, or atomic, sentences the outer and inner converses express logically equivalent propositions, and thus in these cases no informational ambiguity arises from the adjunction of ‘and conversely’ or ‘but not conversely’, despite the fact that such adjunction does not indicate which, if either, of the two converses is meant. However, in complex, or quantified, relational sentences such as ‘Every integer precedes some integer’ genuine informational ambiguity is produced. Under normal interpretations of the respective sentences, the outer converse expresses the false proposition that some integer precedes every integer, while the inner converse expresses the true proposition that every integer is preceded by some integer. More complicated considerations apply in cases of quantified doubly relational sentences such as ‘Every integer precedes every integer exceeding it’. The concept of scope explains such structural ambiguity: in the sentence ‘Every integer precedes some integer and conversely’, ‘conversely’ taken in the outer sense has wide scope, whereas taken in the inner sense it has narrow scope.
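The three readings can be checked on a finite stand-in for the integers. Here R(x, y) says that y is the immediate successor of x modulo 5, so that, as with ‘precedes’ over the integers, every element bears R to something but nothing bears R to everything (the cyclic domain is our stand-in, not part of the original example):

```python
D = range(5)
R = lambda x, y: y == (x + 1) % 5   # 'x immediately precedes y' (mod 5)

original       = all(any(R(x, y) for y in D) for x in D)  # every x R's some y
outer_converse = any(all(R(y, x) for x in D) for y in D)  # some y R's every x
inner_converse = all(any(R(y, x) for y in D) for x in D)  # every x is R'd by some y

print(original, outer_converse, inner_converse)  # True False True
```

The differing truth-values of the outer and inner converses exhibit the genuine informational ambiguity the entry describes.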

Conway, Anne


(c. 1630 - 1679)

English philosopher whose Principia philosophiae antiquissimae et recentissimae (1690; English translation, The Principles of the Most Ancient and Modern Philosophy, 1692) proposes a monistic ontology in which all created things are modes of one spiritual substance emanating from God. This substance is made up of an infinite number of hierarchically arranged spirits, which she calls monads. Matter is congealed spirit. Motion is conceived not dynamically but vitally. Lady Conway’s scheme entails a moral explanation of pain and the possibility of universal salvation. She repudiates the dualism of both Descartes and her teacher, Henry More, as well as the materialism of Hobbes and Spinoza. The work shows the influence of cabalism and affinities with the thought of the mentor of her last years, Francis Mercurius van Helmont, through whom her philosophy became known to Leibniz. S.H.

copula

In logic, a form of the verb ‘to be’ that joins subject and predicate in singular and categorical propositions. In ‘George is wealthy’ and ‘Swans are beautiful’, e.g., ‘is’ and ‘are’, respectively, are copulas. Not all occurrences of forms of ‘be’ count as copulas. In sentences such as ‘There are 51 states’, ‘are’ is not a copula, since it does not join a subject and a predicate, but occurs simply as a part of the quantifier term ‘there are’.

Cordemoy, Géraud de


(1626 - 1684)

French philosopher and member of the Cartesian school. His most important work is his Le discernement du corps et de l’âme en six discours, published in 1666 and reprinted (under slightly different titles) a number of times thereafter. Also important are the Discours physique de la parole (1668), a Cartesian theory of language and communication; and Une lettre écrite à un sçavant religieux (1668), a defense of Descartes’s orthodoxy on certain questions in natural philosophy. Cordemoy also wrote a history of France, left incomplete at his death.


Like Descartes, Cordemoy advocated a mechanistic physics explaining physical phenomena in terms of size, shape, and local motion, and held that minds are incorporeal thinking substances. Like most Cartesians, Cordemoy also advocated a version of occasionalism. But unlike other Cartesians, he argued for atomism and admitted the void. These innovations were not welcomed by other members of the Cartesian school. But Cordemoy is often cited by later thinkers, such as Leibniz, as an important seventeenth-century advocate of atomism.

corners also called corner quotes

Quasi-quotes, a notational device (⌜ ⌝) introduced by Quine (Mathematical Logic, 1940) to provide a conveniently brief way of speaking generally about unspecified expressions of such and such kind. For example, a logician might want a conveniently brief way of saying in the metalanguage that the result of writing a wedge ‘∨’ (the dyadic logical connective for a truth-functional use of ‘or’) between any two well-formed formulas (wffs) in the object language is itself a wff. Supposing the Greek letters ‘Φ’ and ‘Ψ’ available in the metalanguage as variables ranging over wffs in the object language, it is tempting to think that the formation rule stated above can be succinctly expressed simply by saying that if Φ and Ψ are wffs, then ‘Φ ∨ Ψ’ is a wff. But this will not do, for ‘Φ ∨ Ψ’ is not a wff. Rather, it is a hybrid expression of two variables of the metalanguage and a dyadic logical connective of the object language. The problem is that putting quotation marks around the Greek letters merely results in designating those letters themselves, not, as desired, in designating the context of the unspecified wffs. Quine’s device of corners allows one to transcend this limitation of straight quotation since quasi-quotation, e.g., ⌜Φ ∨ Ψ⌝, amounts to quoting the constant contextual background, ‘# ∨ #’, and imagining the unspecified expressions Φ and Ψ written in the blanks.
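The contrast between straight quotation and quasi-quotation can be mimicked in Python’s string formatting, with program variables standing in for the Greek letters (the particular wffs are illustrative):

```python
phi, psi = "(p & q)", "r"     # metalanguage variables ranging over wffs

straight_quotation = "phi ∨ psi"       # names the letters themselves
quasi_quotation    = f"{phi} ∨ {psi}"  # the constant context '# ∨ #' with the
                                       # unspecified wffs written into the blanks
print(straight_quotation)  # phi ∨ psi
print(quasi_quotation)     # (p & q) ∨ r
```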

corresponding conditional (of a given argument)

Any conditional whose antecedent is a (logical) conjunction of all of the premises of the argument and whose consequent is the conclusion. The two conditionals, ‘if Abe is Ben and Ben is wise, then Abe is wise’ and ‘if Ben is wise and Abe is Ben, then Abe is wise’, are the two corresponding conditionals of the argument whose premises are ‘Abe is Ben’ and ‘Ben is wise’ and whose conclusion is ‘Abe is wise’. For a one-premise argument, the corresponding conditional is the conditional whose antecedent is the premise and whose consequent is the conclusion. The limiting cases of the empty and infinite premise sets are treated in different ways by different logicians; one simple treatment considers such arguments as lacking corresponding conditionals.


The principle of corresponding conditionals is that in order for an argument to be valid it is necessary and sufficient for all its corresponding conditionals to be tautological. The commonly used expression ‘the corresponding conditional of an argument’ is also used when two further stipulations are in force: first, that an argument is construed as having an (ordered) sequence of premises rather than an (unordered) set of premises; second, that conjunction is construed as a polyadic operation that produces in a unique way a single premise from a sequence of premises rather than as a dyadic operation that combines premises two by two. Under these stipulations the principle of the corresponding conditional is that in order for an argument to be valid it is necessary and sufficient for its corresponding conditional to be valid. These principles are closely related to modus ponens, to conditional proof, and to the so-called deduction theorem.
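For a propositional argument the principle can be verified by exhausting valuations. A sketch for the argument p, p → q / ∴ q (modus ponens):

```python
from itertools import product

imp = lambda a, b: (not a) or b
vals = [dict(zip('pq', bits)) for bits in product([True, False], repeat=2)]

# The argument is valid: no valuation makes both premises true and q false.
valid = all(v['q'] for v in vals if v['p'] and imp(v['p'], v['q']))

# Its corresponding conditional (p & (p -> q)) -> q is tautological.
tautologous = all(imp(v['p'] and imp(v['p'], v['q']), v['q']) for v in vals)

print(valid, tautologous)  # True True
```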

counterfactuals also called contrary-to-fact conditionals

Subjunctive conditionals that presuppose the falsity of their antecedents, such as ‘If Hitler had invaded England, Germany would have won’ and ‘If I were you, I’d run’.


Conditionals (or hypothetical statements) are compound statements of the form ‘If p, (then) q’, or equivalently ‘q if p’. Component p is described as the antecedent (protasis) and q as the consequent (apodosis). A conditional like ‘If Oswald did not kill Kennedy, then someone else did’ is called indicative, because both the antecedent and consequent are in the indicative mood. One like ‘If Oswald had not killed Kennedy, then someone else would have’ is subjunctive. Many subjunctive and all indicative conditionals are open, presupposing nothing about the antecedent. Unlike ‘If Bob had won, he’d be rich’, neither ‘If Bob should have won, he would be rich’ nor ‘If Bob won, he is rich’ implies that Bob did not win. Counterfactuals presuppose, rather than assert, the falsity of their antecedents. ‘If Reagan had been president, he would have been famous’ seems inappropriate and out of place, but not false, given that Reagan was president. The difference between counterfactual and open subjunctives is less important logically than that between subjunctives and indicatives. Whereas the indicative conditional about Kennedy is true, the subjunctive is probably false. Replace ‘someone’ with ‘no one’ and the truth-values reverse.


The most interesting logical feature of counterfactuals is that they are not truth-functional. A truth-functional compound is one whose truth-value is completely determined in every possible case by the truth-values of its components. For example, the falsity of ‘The President is a grandmother’ and ‘The President is childless’ logically entails the falsity of ‘The President is a grandmother and childless’: all conjunctions with false conjuncts are false. But whereas ‘If the President were a grandmother, the President would be childless’ is false, other counterfactuals with equally false components are true, such as ‘If the President were a grandmother, the President would be a mother’. The truth-value of a counterfactual is determined in part by the specific content of its components. This property is shared by indicative and subjunctive conditionals generally, as can be seen by varying the wording of the example. In marked contrast, the material conditional, p ⊃ q, of modern logic, defined as meaning that either p is false or q is true, is completely truth-functional. ‘The President is a grandmother ⊃ The President is childless’ is just as true as ‘The President is a grandmother ⊃ The President is a mother’. While stronger than the material conditional, the counterfactual is weaker than the strict conditional, p ⥽ q, of modern modal logic, which says that p ⊃ q is necessarily true. ‘If the switch had been flipped, the light would be on’ may in fact be true even though it is possible for the switch to have been flipped without the light’s being on because the bulb could have burned out.
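Truth-functionality of the material conditional is easy to exhibit: with a false antecedent, p ⊃ q comes out true regardless of the content of the consequent:

```python
material = lambda p, q: (not p) or q

grandmother = False   # 'The President is a grandmother'
childless = False     # 'The President is childless'
mother = True         # 'The President is a mother'

c1 = material(grandmother, childless)
c2 = material(grandmother, mother)
print(c1, c2)  # True True -- both true, because the antecedent is false
```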


The fact that counterfactuals are neither strict nor material conditionals generated the problem of counterfactual conditionals (raised by Chisholm and Goodman): What are the truth conditions of a counterfactual, and how are they determined by its components? According to the “metalinguistic” approach, which resembles the deductive-nomological model of explanation, a counterfactual is true when its antecedent conjoined with laws of nature and statements of background conditions logically entails its consequent. On this account, ‘If the switch had been flipped the light would be on’ is true because the statement that the switch was flipped, plus the laws of electricity and statements describing the condition and arrangement of the circuitry, entail that the light is on. The main problem is to specify which facts are “fixed” for any given counterfactual and context. The background conditions cannot include the denials of the antecedent or the consequent, even though they are true, nor anything else that would not be true if the antecedent were. Counteridenticals, whose antecedents assert identities, highlight the difficulty: the background for ‘If I were you, I’d run’ must include facts about my character and your situation, but not vice versa. Counterlegals like ‘Newton’s laws would fail if planets had rectangular orbits’, whose antecedents deny laws of nature, show that even the set of laws cannot be all-inclusive.


Another leading approach (pioneered by Robert C. Stalnaker and David K. Lewis) extends the possible worlds semantics developed for modal logic, saying that a counterfactual is true when its consequent is true in the nearest possible world in which the antecedent is true. The counterfactual about the switch is true on this account provided a world in which the switch was flipped and the light is on is closer to the actual world than one in which the switch was flipped but the light is not on. The main problem is to specify which world is nearest for any given counterfactual and context. The difference between indicative and subjunctive conditionals can be accounted for in terms of either a different set of background conditions or a different measure of nearness.
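A toy nearest-world evaluation of ‘If the switch had been flipped, the light would be on’ can make the approach concrete. Worlds are modeled as fact-sets, and nearness counts departures from the actual background conditions (the state of the bulb); this crude metric is our assumption for illustration, not Lewis’s or Stalnaker’s own account of similarity:

```python
actual = {'flipped': False, 'light_on': False, 'bulb_ok': True}
worlds = [
    {'flipped': True,  'light_on': True,  'bulb_ok': True},   # circuit works
    {'flipped': True,  'light_on': False, 'bulb_ok': False},  # bulb burned out
]

# Nearness: how many background conditions a world changes (an assumption).
background = ('bulb_ok',)
distance = lambda w: sum(w[k] != actual[k] for k in background)

def counterfactual(antecedent, consequent):
    # True iff the consequent holds in the nearest antecedent-world.
    antecedent_worlds = [w for w in worlds if w[antecedent]]
    return min(antecedent_worlds, key=distance)[consequent]

result = counterfactual('flipped', 'light_on')
print(result)  # True: the nearest flipped-switch world keeps the bulb intact
```

The main problem the entry mentions, specifying which world is nearest, corresponds here to the choice of `background` and `distance`.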


Counterfactuals turn up in a variety of philosophical contexts. To distinguish laws like ‘All copper conducts’ from equally true generalizations like ‘Everything in my pocket conducts’, some have observed that while anything would conduct if it were copper, not everything would conduct if it were in my pocket. And to have a disposition like solubility, it does not suffice to be either dissolving or not in water: it must in addition be true that the object would dissolve if it were in water. It has similarly been suggested that one event is the cause of another only if the latter would not have occurred if the former had not; that an action is free only if the agent could or would have done otherwise if he had wanted to; that a person is in a particular mental state only if he would behave in certain ways given certain stimuli; and that an action is right only if a completely rational and fully informed agent would choose it.

counterinstance also called counterexample

(1) A particular instance of an argument form that has all true premises but a false conclusion, thereby showing that the form is not universally valid. The argument form ‘p ∨ q, ~ p / ∴ ~q’, for example, is shown to be invalid by the counterinstance ‘Grass is either red or green; Grass is not red; Therefore, grass is not green’. (2) A particular false instance of a statement form, which demonstrates that the form is not a logical truth. A counterinstance to the form ‘(p ∨ q) ⊃ p’, for example, would be the statement ‘If grass is either red or green, then grass is red’. (3) A particular example that demonstrates that a universal generalization is false. The universal statement ‘All large cities in the United States are east of the Mississippi’ is shown to be false by the counterinstance of San Francisco, which is a large city in the United States that is not east of the Mississippi. V.K.
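Sense (1) amounts to a search over valuations: a counterinstance to a form is an assignment making every premise true and the conclusion false. A sketch for the invalid form p ∨ q, ~p / ∴ ~q:

```python
from itertools import product

# Search for a counterinstance: both premises true, conclusion ~q false.
counterinstances = [(p, q) for p, q in product([True, False], repeat=2)
                    if (p or q) and (not p) and q]   # '~q' is false iff q is true
print(counterinstances)  # [(False, True)]
```

The single hit, p false and q true, matches the grass example: ‘red’ false, ‘green’ true.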

counterpart theory

A theory that analyzes statements about what is possible and impossible for individuals (statements of de re modality) in terms of what holds of counterparts of those individuals in other possible worlds, a thing’s counterparts being individuals that resemble it without being identical with it. (The name ‘counterpart theory’ was coined by David Lewis, the theory’s principal exponent.) Whereas some theories analyze ‘Mrs. Simpson might have been queen of England’ as ‘In some possible world, Mrs. Simpson is queen of England’, counterpart theory analyzes it as ‘In some possible world, a counterpart of Mrs. Simpson is queen of (a counterpart of) England’. The chief motivation for counterpart theory is a combination of two views: (a) de re modality should be given a possible worlds analysis, and (b) each actual individual exists only in the actual world, and hence cannot exist with different properties in other possible worlds. Counterpart theory provides an analysis that allows ‘Mrs. Simpson might have been queen’ to be true compatibly with (a) and (b). For Mrs. Simpson’s counterparts in other possible worlds, in those worlds where she herself does not exist, may have regal properties that the actual Mrs. Simpson lacks. Counterpart theory is perhaps prefigured in Leibniz’s theory of possibility.

count noun

A noun that can occur syntactically (a) with quantifiers ‘each’, ‘every’, ‘many’, ‘few’, ‘several’, and numerals; (b) with the indefinite article, ‘a(n)’; and (c) in the plural form. The following are examples of count nouns (CNs) paired with semantically similar mass nouns (MNs): ‘each dollar / silver’, ‘one composition / music’, ‘a bed / furniture’, ‘instructions / advice’. MNs but not CNs can occur with the quantifiers ‘much’ and ‘little’: ‘much poetry / poem(s)’, ‘little bread / loaf’. Both CNs and MNs may occur with ‘all’, ‘most’, and ‘some’. Semantically, CNs but not MNs refer distributively, providing a counting criterion. It makes sense to ask how many CNs?: ‘How many coins / gold?’ MNs but not CNs refer collectively. It makes sense to ask how much MN?: ‘How much gold / coins?’


One problem is that these syntactic and semantic criteria yield different classifications; another problem is to provide logical forms and truth conditions for sentences containing mass nouns.

Cournot, Antoine-Augustin


(1801 - 1877)

French mathematician and economist. A critical realist in scientific and philosophical matters, he was a conservative in religion and politics. His Researches into the Mathematical Principles of the Theory of Wealth (1838), though a fiasco at the time, pioneered mathematical economics. Cournot upheld a position midway between science and metaphysics. His philosophy rests on three basic concepts: order, chance, and probability. The Exposition of the Theory of Chances and Probabilities (1843) focuses on the calculus of probability, unfolds a theory of chance occurrences, and distinguishes among objective, subjective, and philosophical probability. The Essay on the Foundations of Knowledge (1861) defines science as logically organized knowledge. Cournot developed a probabilist epistemology, showed the relevance of probabilism to the scientific study of human acts, and further assumed the existence of a providential and complex order undergirding the universe. Materialism, Vitalism, Rationalism (1875) acknowledges transrationalism and makes room for finality, purpose, and God. J.L.S.

Cousin, Victor


(1792 - 1867)

French philosopher who set out to merge the French psychological tradition with the pragmatism of Locke and Condillac and the inspiration of the Scottish (Reid, Stewart) and German idealists (Kant, Hegel). His early courses at the Sorbonne (1815-18), on “absolute” values that might overcome materialism and skepticism, aroused immense enthusiasm. The course of 1818, Du Vrai, du Beau et du Bien (Of the True, the Beautiful, and the Good), is preserved in the Adolphe Garnier edition of student notes (1836); other early texts appeared in the Fragments philosophiques (Philosophical Fragments, 1826). Dismissed from his teaching post as a liberal (1820), arrested in Germany at the request of the French police and detained in Berlin, he was released after Hegel intervened (1824); he was not reinstated until 1828. Under Louis-Philippe, he rose to highest honors, became minister of education, and introduced philosophy into the curriculum. His eclecticism, transformed into a spiritualism and cult of the “juste milieu”, became the official philosophy. Cousin rewrote his work accordingly and even succeeded in having Du Vrai (third edition, 1853) removed from the papal index. In 1848 he was forced to retire. He is noted for his educational reforms, as a historian of philosophy, and for his translations (Proclus, Plato), editions (Descartes), and portraits of ladies of seventeenth-century society. O.A.H.

Couturat, Louis


(1868 - 1914)

French philosopher and logician who wrote on the history of philosophy, logic, philosophy of mathematics, and the possibility of a universal language. Couturat refuted Renouvier’s finitism and advocated an actual infinite in The Mathematical Infinite (1896). He argued that the assumption of infinite numbers was indispensable to maintain the continuity of magnitudes. He saw a precursor of modern logistic in Leibniz, basing his interpretation of Leibniz on the Discourse on Metaphysics and Leibniz’s correspondence with Arnauld. His epoch-making Leibniz’s Logic (1901) describes Leibniz’s metaphysics as panlogism. Couturat published a study on Kant’s mathematical philosophy (Revue de Métaphysique, 1904), and defended Peano’s logic, Whitehead’s algebra, and Russell’s logistic in The Algebra of Logic (1905). He also contributed to André Lalande’s Vocabulaire technique et critique de la philosophie (1926). J.-L.S.

covering law model

The view of scientific explanation as a deductive argument which contains non-vacuously at least one universal law among its premises. The names of this view include ‘Hempel’s model’, ‘Hempel-Oppenheim (HO) model’, ‘Popper-Hempel model’, ‘deductive-nomological (D-N) model’, and the ‘subsumption theory’ of explanation. The term ‘covering law model of explanation’ was proposed by William Dray.


The theory of scientific explanation was first developed by Aristotle. He suggested that science proceeds from mere knowing that to deeper knowing why by giving understanding of different things by the four types of causes. Answers to why-questions are given by scientific syllogisms, i.e., by deductive arguments with premises that are necessarily true and causes of their consequences. Typical examples are the “subsumptive” arguments that can be expressed by the Barbara syllogism:


All ravens are black. Jack is a raven. Therefore, Jack is black.
Plants containing chlorophyll are green. Grass contains chlorophyll. Therefore, grass is green.
In modern logical notation:


∀x(Rx → Bx), Rj ∴ Bj
∀x(Cx → Gx), Cg ∴ Gg

(Rx = ‘x is a raven’, Bx = ‘x is black’, j = Jack; Cx = ‘x contains chlorophyll’, Gx = ‘x is green’, g = grass.)
An explanatory argument was later called in Greek synthesis, in Latin compositio or demonstratio propter quid. After the seventeenth century, the terms ‘explication’ and ‘explanation’ became commonly used.


The nineteenth-century empiricists accepted Hume’s criticism of Aristotelian essences and necessities: a law of nature is an extensional statement that expresses a uniformity, i.e., a constant conjunction between properties (‘All swans are white’) or types of events (‘Lightning is always followed by thunder’). Still, they accepted the subsumption theory of explanation: “An individual fact is said to be explained by pointing out its cause, that is, by stating the law or laws of causation, of which its production is an instance,” and “a law or uniformity in nature is said to be explained when another law or laws are pointed out, of which that law itself is but a case, and from which it could be deduced” (J.S. Mill). A general model of probabilistic explanation, with deductive explanation as a specific case, was given by Peirce in 1883.


A modern formulation of the subsumption theory was given by Hempel and Paul Oppenheim in 1948 by the following schema of D-N explanation:


L1, L2, …, Lr    (general laws)
C1, C2, …, Ck    (antecedent conditions)
∴ E              (explanandum)
Explanandum E is here a sentence that describes a known particular event or fact (singular explanation) or uniformity (explanation of laws). Explanation is an argument that answers an explanation-seeking why-question ‘Why E?’ by showing that E is nomically expectable on the basis of general laws (r≥1) and antecedent conditions. The relation between the explanans and the explanandum is logical deduction. Explanation is distinguished from other kinds of scientific systematization (prediction, postdiction) that share its logical characteristics (a view often called the symmetry thesis regarding explanation and prediction) by the presupposition that the phenomenon E is already known. This also separates explanations from reason-seeking arguments that answer questions of the form ‘What reasons are there for believing that E?’ Hempel and Oppenheim required that the explanans have empirical content, i.e., be testable by experiment or observation, and that it be true. If the strong condition of truth is dropped, we speak of potential explanation.


Dispositional explanations, for non-probabilistic dispositions, can be formulated in the D-N model. For example, let Hx = ‘x is hit by hammer’, Bx = ‘x breaks’, and Dx = ‘x is fragile’. Then the explanation why a piece of glass a was broken may refer to its fragility and its being hit:


∀x((Hx ∧ Dx) → Bx)
Ha ∧ Da
∴ Ba
It is easy to find examples of HO explanations that are not satisfactory: self-explanations (‘Grass is green, because grass is green’), explanations with too weak premises (‘John died, because he had a heart attack or his plane crashed’), and explanations with irrelevant information (‘This stuff dissolves in water, because it is sugar produced in Finland’). Attempts at finding necessary and sufficient conditions in syntactic and semantic terms for acceptable explanations have not led to any agreement. The HO model also needs the additional Aristotelian condition that causal explanation is directed from causes to effects. This is shown by Sylvain Bromberger’s flagpole example: the length of a flagpole explains the length of its shadow, but not vice versa. Michael Scriven has argued against Hempel that explanations of particular events should be given by singular causal statements ‘E because C’. However, a regularity theory (Humean or stronger than Humean) of causality implies that the truth of such a singular causal statement presupposes a universal law of the form ‘Events of type C are universally followed by events of type E’.


The HO version of the covering law model can be generalized in several directions. The explanans may contain probabilistic or statistical laws. The explanans-explanandum relation may be inductive (in this case the explanation itself is inductive). This gives us four types of explanations: deductive-universal (i.e., D-N), deductive-probabilistic, inductive-universal, and inductive-probabilistic (I-P). Hempel’s 1962 model for I-P explanation contains a probabilistic covering law P(G/F) = r, where r is the statistical probability of G given F, and r in brackets is the inductive probability of the explanandum given the explanans:


P(G/F) = r
Fa
—————— [r]
Ga
The explanation-seeking question may be weakened from ‘Why necessarily E?’ to ‘How possibly E?’. In a corrective explanation, the explanatory answer points out that the explanandum sentence E is not strictly true. This is the case in approximate explanation (e.g., Newton’s theory entails a corrected form of Galileo’s and Kepler’s laws).

Craig’s interpolation theorem

A theorem for first-order logic: if a sentence Ψ of first-order logic entails a sentence θ, there is an “interpolant,” a sentence Φ in the vocabulary common to Ψ and θ, that entails θ and is entailed by Ψ. Originally, William Craig proved his theorem in 1957 as a lemma, to give a simpler proof of Beth’s definability theorem, but the result now stands on its own. In abstract model theory, logics for which an interpolation theorem holds are said to have the Craig interpolation property. Craig’s interpolation theorem shows that first-order logic is closed under implicit definability, so that the concepts embodied in first-order logic are all given explicitly.


In the philosophy of science literature ‘Craig’s theorem’ usually refers to another result of Craig’s: that any recursively enumerable set of sentences of first-order logic can be recursively axiomatized. This has been used to argue that theoretical terms are in principle eliminable from empirical theories. Assuming that an empirical theory can be axiomatized in first-order logic, i.e., that there is a recursive set of first-order sentences from which all theorems of the theory can be proven, it follows that the set of consequences of the axioms in an “observational” sublanguage is a recursively enumerable set. Thus, by Craig’s theorem, there is a recursive set of axioms for this subtheory, the Craig-reduct, that contains only observation terms. Interestingly, the Craig-reduct theory may be semantically weaker, in the sense that it may have models that cannot be extended to a model of the full theory. The existence of such a model would prove that the theoretical terms cannot all be defined on the basis of the observational vocabulary only, a result related to Beth’s definability theorem.
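The proof idea behind this axiomatizability result, often informally called Craig’s trick, is simple: replace the n-th sentence of an effective enumeration by an n-fold conjunction of it with itself, so that membership in the new, logically equivalent axiom set becomes decidable. The following Python sketch illustrates the idea on sentences represented as plain strings (the representation and function names are invented for illustration, and the string handling is deliberately naive — it assumes no sentence itself contains the conjunction separator):

```python
def craig_axioms(sentences):
    """Replace the n-th enumerated sentence by an n-fold self-conjunction,
    a logically equivalent 'padded' axiom."""
    for n, phi in enumerate(sentences, start=1):
        yield " & ".join([f"({phi})"] * n)

def is_craig_axiom(candidate, sentences):
    """Decide membership in the padded set: a string of n identical
    parenthesized conjuncts is an axiom iff its conjunct is the n-th
    sentence enumerated.  Only the first n enumeration steps are run,
    which is what makes the padded set decidable."""
    parts = candidate.split(" & ")
    if len(set(parts)) != 1 or not (parts[0].startswith("(") and parts[0].endswith(")")):
        return False
    n, phi = len(parts), parts[0][1:-1]
    for i, psi in enumerate(sentences, start=1):
        if i == n:
            return psi == phi
    return False
```

Deciding whether a candidate belongs to the padded set requires running the enumeration only as far as the candidate’s own length, even though the original set was merely recursively enumerable.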

creation ex nihilo

The act of bringing something into existence from nothing. According to traditional Christian theology, God created the world ex nihilo. To say that the world was created from nothing does not mean that there was a prior non-existent substance out of which it was fashioned, but rather that there was not anything out of which God brought it into being. However, some of the patristics influenced by Plotinus, such as Gregory of Nyssa, apparently understood creation ex nihilo to be an emanation from God according to which what is created comes, not from nothing, but from God himself. Not everything that God makes need be created ex nihilo; for if, as in Genesis 2:7, 19, God made a human being and animals from the ground, a previously existing material, God did not create them from nothing. Regardless of how bodies are made, orthodox theology holds that human souls are created ex nihilo; the opposing view, traducianism, holds that souls are propagated along with bodies.

creationism

Acceptance of the early chapters of Genesis taken literally. Genesis claims that the universe and all of its living creatures including humans were created by God in the space of six days. The need to find some way of reconciling this story with the claims of science intensified in the nineteenth century, with the publication of Darwin’s Origin of Species (1859). In the Southern states of the United States, the indigenous form of evangelical Protestant Christianity declared total opposition to evolutionism, refusing any attempt at reconciliation, and affirming total commitment to a literal “creationist” reading of the Bible. Because of this, certain states passed laws banning the teaching of evolutionism. More recently, literalists have argued that the Bible can be given full scientific backing, and they have therefore argued that “Creation science” may properly be taught in state-supported schools in the United States without violation of the constitutional separation of church and state. This claim was challenged in the state of Arkansas in 1981, and ultimately rejected by the U.S. Supreme Court.


The creationism dispute has raised some issues of philosophical interest and importance. Most obviously, there is the question of what constitutes a genuine science. Is there an adequate criterion of demarcation between science and non-science, and will it put evolutionism on the one side and creationism on the other? Some philosophers, arguing in the spirit of Karl Popper, think that such a criterion can be found. Others are not so sure; and yet others think that some such criterion can be found, but shows creationism to be genuine science, albeit already proven false.


Philosophers of education have also taken an interest in creationism and what it represents. If one grants that even the most orthodox science may contain a value component, reflecting and influencing its practitioners’ culture, then teaching a subject like biology almost certainly is not a normatively neutral enterprise. In that case, without necessarily conceding to the creationist anything about the true nature of science or values, perhaps one must agree that science with its teaching is not something that can and should be set apart from the rest of society, as an entirely distinct phenomenon.

Crescas, Hasdai


(d. 1412)

Spanish Jewish philosopher, theologian, and statesman. He was a well-known representative of the Jewish community in both Barcelona and Saragossa. Following the death of his son in the anti-Jewish riots of 1391, he wrote a chronicle of the massacres (published as an appendix to Ibn Verga, Shevet Yehudah, ed. M. Wiener, 1855). Crescas’s devotion to protecting Spanish Jewry in a time when conversion was encouraged is documented in one extant work, the Refutation of Christian Dogmas (1397-98), found in the 1451 Hebrew translation of Joseph ibn Shem Tov (Bittul ’Iqqarey ha-Noṣrim). His major philosophical work, Or Adonai (The Light of the Lord), was intended as the first of a two-part project that was to include his own more extensive systematization of halakha (Jewish law) as well as a critique of Maimonides’ work. But this second part, “Lamp of the Divine Commandment,” was never written.


Or Adonai is a philosophico-dogmatic response to and attack on the Aristotelian doctrines that Crescas saw as a threat to the Jewish faith, doctrines concerning the nature of God, space, time, place, free will, and infinity. For theological reasons he attempts to refute basic tenets in Aristotelian physics. He offers, e.g., a critique of Aristotle’s arguments against the existence of a vacuum. The Aristotelian view of time is rejected as well. Time, like space, is thought by Crescas to be infinite. Furthermore, it is not an accident of motion, but rather exists only in the soul. In defending the fundamental doctrines of the Torah, Crescas must address the question discussed by his predecessors Maimonides and Gersonides, namely that of reconciling divine foreknowledge with human freedom. Unlike these two thinkers, Crescas adopts a form of determinism, arguing that God knows both the possible and what will necessarily take place. An act is contingent with respect to itself, and necessary with respect to its causes and God’s knowledge. To be willed freely, then, is not for an act to be absolutely contingent, but rather for it to be “willed internally” as opposed to “willed externally.”


Reactions to Crescas’s doctrines were mixed. Isaac Abrabanel, despite his respect for Crescas’s piety, rejected his views as either “unintelligible” or “simple-minded.” On the other hand, Giovanni Pico della Mirandola appeals to Crescas’s critique of Aristotelian physics; Judah Abrabanel’s Dialogues of Love may be seen as accommodating Crescas’s metaphysical views; and Spinoza’s notions of necessity, freedom, and extension may well be influenced by the doctrines of Or Adonai.

criterion

Broadly, a sufficient condition for the presence of a certain property or for the truth of a certain proposition. Generally, a criterion need be sufficient merely in normal circumstances rather than absolutely sufficient. Typically, a criterion is salient in some way, often by virtue of being a necessary condition as well as a sufficient one. The plural form, ‘criteria’, is commonly used for a set of singly necessary and jointly sufficient conditions. A set of truth conditions is said to be criterial for the truth of propositions of a certain form. A conceptual analysis of a philosophically important concept may take the form of a proposed set of truth conditions for paradigmatic propositions containing the concept in question. Philosophers have proposed criteria for such notions as meaningfulness, intentionality, knowledge, justification, justice, rightness, and identity (including personal identity and event identity), among many others.


There is a special use of the term in connection with Wittgenstein’s well-known remark that “an ‘inner process’ stands in need of outward criteria,” e.g., moans and groans for aches and pains. The suggestion is that a criteriological connection is needed to forge a conceptual link between items of a sort that are intelligible and knowable to items of a sort that, but for the connection, would not be intelligible or knowable. A mere symptom cannot provide such a connection, for establishing a correlation between a symptom and that for which it is a symptom presupposes that the latter is intelligible and knowable. One objection to a criteriological view, whether about aches or quarks, is that it clashes with realism about entities of the sort in question and lapses into, as the case may be, behaviorism or instrumentalism. For it seems that to posit a criteriological connection is to suppose that the nature and existence of entities of a given sort can depend on the conditions for their intelligibility or knowability, and that is to put the epistemological cart before the ontological horse.

critical legal studies

A loose assemblage of legal writings and thinkers in the United States and Great Britain since the mid-1970s that aspires to a jurisprudence and a political ideology. Like the American legal realists of the 1920s and 1930s, the jurisprudential program is largely negative, consisting in the discovery of supposed contradictions within both the law as a whole and areas of law such as contracts and criminal law. The jurisprudential implication derived from such supposed contradictions within the law is that any decision in any case can be defended as following logically from some authoritative propositions of law, leaving the law entirely without guidance in particular cases. Also like the American legal realists, the political ideology of critical legal studies is vaguely leftist, embracing the communitarian critique of liberalism. Communitarians fault liberalism for its alleged overemphasis on individual rights and individual welfare at the expense of the intrinsic value of certain collective goods. Given the cognitive relativism of many of its practitioners, critical legal studies tends not to aspire to anything that could be called a theory of either law or politics.

Critical Realism

A philosophy that at the highest level of generality purports to integrate the positive insights of both New Realism and idealism. New Realism was the first wave of realistic reaction to the dominant idealism of the nineteenth century. It was a version of immediate and direct realism. In its attempt to avoid any representationalism that would lead to idealism, this tradition identified the immediate data of consciousness with objects in the physical world. There is no intermediary between the knower and the known. This heroic tour de force foundered on the phenomena of error, illusion, and perceptual variation, and gave rise to a successor realism — Critical Realism — that acknowledged the mediation of “the mental” in our cognitive grasp of the physical world.


‘Critical Realism’ was the title of a work in epistemology by Roy Wood Sellars (1916), but its more general use to designate the broader movement derives from the 1920 cooperative volume, Essays in Critical Realism: A Cooperative Study of the Problem of Knowledge, containing position papers by Durant Drake, A.O. Lovejoy, J.B. Pratt, A.K. Rogers, C.A. Strong, George Santayana, and Roy Wood Sellars. With New Realism, Critical Realism maintains that the primary object of knowledge is the independent physical world, and that what is immediately present to consciousness is not the physical object as such, but some corresponding mental state broadly construed. Whereas both New Realism and idealism grew out of the conviction that any such mediated account of knowledge is untenable, the Critical Realists felt that only if knowledge of the external world is explained in terms of a process of mental mediation, can error, illusion, and perceptual variation be accommodated. One could fashion an account of mental mediation that did not involve the pitfalls of Lockean representationalism by carefully distinguishing between the object known and the mental state through which it is known.


The Critical Realists differed among themselves both epistemologically and metaphysically. The mediating elements in cognition were variously construed as essences, ideas, or sense-data, and the precise role of these items in cognition was again variously construed. Metaphysically, some were dualists who saw knowledge as unexplainable in terms of physical processes, whereas others (principally Santayana and Sellars) were materialists who saw cognition as simply a function of conscious biological systems. The position of most lasting influence was probably that of Sellars because that torch was taken up by his son, Wilfrid, whose very sophisticated development of it was quite influential.

critical theory

Any social theory that is at the same time explanatory, normative, practical, and self-reflexive. The term was first developed by Horkheimer as a self-description of the Frankfurt School and its revision of Marxism. It now has a wider significance to include any critical, theoretical approach, including feminism and liberation philosophy. When they make claims to be scientific, such approaches attempt to give rigorous explanations of the causes of oppression, such as ideological beliefs or economic dependence; these explanations must in turn be verified by empirical evidence and employ the best available social and economic theories. Such explanations are also normative and critical, since they imply negative evaluations of current social practices. The explanations are also practical, in that they provide a better self-understanding for agents who may want to improve the social conditions that the theory negatively evaluates. Such change generally aims at “emancipation,” and theoretical insight empowers agents to remove limits to human freedom and the causes of human suffering. Finally, these theories must also be self-reflexive: they must account for their own conditions of possibility and for their potentially transformative effects. These requirements contradict the standard account of scientific theories and explanations, particularly positivism and its separation of fact and value. For this reason, the methodological writings of critical theorists often attack positivism and empiricism and attempt to construct alternative epistemologies. Critical theorists also reject relativism, since the cultural relativity of norms would undermine the basis of critical evaluation of social practices and emancipatory change.


The difference between critical and non-critical theories can be illustrated by contrasting the Marxian and Mannheimian theories of ideology. Whereas Mannheim’s theory merely describes relations between ideas and social conditions, Marx’s theory tries to show how certain social practices require false beliefs about them by their participants. Marx’s theory not only explains why this is so, it also negatively evaluates those practices; it is practical in that by disillusioning participants, it makes them capable of transformative action. It is also self-reflexive, since it shows why some practices require illusions and others do not, and also why social crises and conflicts will lead agents to change their circumstances. It is scientific, in that it appeals to historical evidence and can be revised in light of better theories of social action, language, and rationality. Marx also claimed that his theory was superior for its special “dialectical method,” but this is now disputed by most critical theorists, who incorporate many different theories and methods. This broader definition of critical theory, however, leaves a gap between theory and practice and places an extra burden on critics to justify their critical theories without appeal to such notions as inevitable historical progress. This problem has made critical theories more philosophical and concerned with questions of justification.

Croce, Benedetto


(1866 - 1952)


Italian philosopher. He was born at Pescasseroli, in the Abruzzi, and after 1886 lived in Naples. He briefly attended the University of Rome and was led to study Herbart’s philosophy. In 1904 he founded the influential journal La critica. In 1910 he was made life member of the Italian senate. Early in his career he befriended Giovanni Gentile, but this friendship was breached by Gentile’s Fascism. During the Fascist period and World War II Croce lived in isolation as the chief anti-fascist thinker in Italy. He later became a leader of the Liberal party and at the age of eighty founded the Institute for Historical Studies.


Croce was a literary and historical scholar who joined his great interest in these fields to philosophy. His best-known work in the English-speaking world is Aesthetic as Science of Expression and General Linguistic (1902). This was the first part of his “Philosophy of Spirit”; the second was his Logic (1905), the third his theory of the Practical (1909), and the fourth his Historiography (1917). Croce was influenced by Hegel and the Hegelian aesthetician Francesco De Sanctis (1817-83) and by Vico’s conceptions of knowledge, history, and society. He wrote The Philosophy of Giambattista Vico (1911) and a famous commentary on Hegel, What Is Living and What Is Dead in the Philosophy of Hegel (1907), in which he advanced his conception of the “dialectic of distincts” as more fundamental than the Hegelian dialectic of opposites.


Croce held that philosophy always springs from the occasion, a view perhaps rooted in his concrete studies of history. He accepted the general Hegelian identification of philosophy with the history of philosophy. His philosophy originates from his conception of aesthetics. Central to his aesthetics is his view of intuition, which evolved through various stages during his career. He regards aesthetic experience as a primitive type of cognition. Intuition involves an awareness of a particular image, which constitutes a non-conceptual form of knowledge. Art is the expression of emotion but not simply for its own sake. The expression of emotion can produce cognitive awareness in the sense that the particular intuited as an image can have a cosmic aspect, so that in it the universal human spirit is perceived. Such perception is present especially in the masterpieces of world literature. Croce’s conception of aesthetic has connections with Kant’s “intuition” (Anschauung) and to an extent with Vico’s conception of a primordial form of thought based in imagination (fantasia).


Croce’s philosophical idealism includes fully developed conceptions of logic, science, law, history, politics, and ethics. His influence to date has been largely in the field of aesthetics and in historicist conceptions of knowledge and culture. His revival of Vico has inspired a whole school of Vico scholarship. Croce’s conception of a “Philosophy of Spirit” showed it was possible to develop a post-Hegelian philosophy that, with Hegel, takes “the true to be the whole” but which does not simply imitate Hegel.

crucial experiment

A means of deciding between rival theories that, providing parallel explanations of large classes of phenomena, come to be placed at issue by a single fact. For example, the Newtonian emission theory predicts that light travels faster in water than in air; according to the wave theory, light travels slower in water than in air. Dominique François Arago proposed a crucial experiment comparing the respective velocities. Léon Foucault then devised an apparatus to measure the speed of light in various media and found a lower velocity in water than in air. Arago and Foucault concluded for the wave theory, believing that the experiment refuted the emission theory. Other examples include Galileo’s discovery of the phases of Venus (Ptolemaic versus Copernican astronomy), Pascal’s Puy-de-Dôme experiment with the barometer (vacuists versus plenists), Fresnel’s prediction of a spot of light in circular shadows (particle versus wave optics), and Eddington’s measurement of the gravitational bending of light rays during a solar eclipse (Newtonian versus Einsteinian gravitation). At issue in crucial experiments is usually a novel prediction.


The notion seems to derive from Francis Bacon, whose New Organon (1620) discusses the “Instance of the Fingerpost (Instantia — later experimentum — crucis),” a term borrowed from the post set up at crossroads to indicate several directions. Crucial experiments were emphasized in early nineteenth-century scientific methodology — e.g., in John F. Herschel’s A Preliminary Discourse on the Study of Natural Philosophy (1830). Duhem argued that crucial experiments resemble false dilemmas: hypotheses in physics do not come in pairs, so that crucial experiments cannot transform one of the two into a demonstrated truth. Discussing Foucault’s experiment, Duhem asks whether we dare assert that no other hypothesis is imaginable and suggests that instead of light being either a simple particle or wave, light might be something else, perhaps a disturbance propagated within a dielectric medium, as theorized by Maxwell. In the twentieth century, crucial experiments and novel predictions figured prominently in the work of Imre Lakatos (1922-74). Agreeing that crucial experiments are unable to overthrow theories, Lakatos accepted them as retroactive indications of the fertility or progress of research programs.

Crusius, Christian August


(1715 - 1775)

German philosopher, theologian, and a devout Lutheran pastor who believed that religion was endangered by the rationalist views especially of Wolff. He devoted his considerable philosophical powers to working out acute and often deep criticisms of Wolff and developing a comprehensive alternative to the Wolffian system. His main philosophical works were published in the 1740s. In his understanding of epistemology and logic Crusius broke with many of the assumptions that allowed Wolff to argue from how we think of things to how things are. For instance, Crusius tried to show that the necessity in causal connection is not the same as logical necessity. He rejected the Leibnizian view that this world is probably the best possible world, and he criticized the Wolffian view of freedom of the will as merely a concealed spiritual mechanism.


His ethics stressed our dependence on God and his commands, as did the natural law theory of Pufendorf, but he developed the view in some strikingly original ways. Rejecting voluntarism, Crusius held that God’s commands take the form of innate principles of the will (not the understanding). Everyone alike can know what they are, so (contra Wolff) there is no need for moral experts. And they carry their own motivational force with them, so there is no need for external sanctions. We have obligations of prudence to do what will forward our own ends; but true obligation, the obligation of virtue, arises only when we act simply to comply with God’s law, regardless of any ends of our own. In this distinction between two kinds of obligation, as in many of his other views, Crusius plainly anticipated much that Kant came to think. Kant when young read and admired his work, and it is mainly for this reason that Crusius is now remembered.

Cudworth, Damaris

Lady Masham (1659-1708), English philosopher and author of two treatises on religion, A Discourse Concerning the Love of God (1690) and Occasional Thoughts in Reference to a Virtuous Christian Life (1705). The first argues against the views of the English Malebranchian John Norris; the second, ostensibly about the importance of education for women, argues for the need to establish natural religion on rational principles and explores the place of revealed religion within a rational framework. Cudworth’s reputation is founded on her long friendship with John Locke. Her correspondence with him is almost entirely personal; she also entered into a brief but philosophically interesting exchange of letters with Leibniz.

Cumberland, Richard


(1631 - 1718)

English philosopher and bishop. He wrote a Latin Treatise of the Laws of Nature (1672), translated twice into English and once into French. Admiring Grotius, Cumberland hoped to refute Hobbes in the interests of defending Christian morality and religion. He refused to appeal to innate ideas and a priori arguments because he thought Hobbes must be attacked on his own ground. Hence he offered a reductive and naturalistic account of natural law. The one basic moral law of nature is that the pursuit of the good of all rational beings is the best path to the agent’s own good. This is true because God made nature so that actions aiding others are followed by beneficial consequences to the agent, while those harmful to others harm the agent. Since the natural consequences of actions provide sanctions that, once we know them, will make us act for the good of others, we can conclude that there is a divine law by which we are obligated to act for the common good. And all the other laws of nature follow from the basic law. Cumberland refused to discuss free will, thereby suggesting a view of human action as fully determined by natural causes. If on his theory it is a blessing that God made nature (including humans) to work as it does, the religious reader must wonder if there is any role left for God concerning morality. Cumberland is generally viewed as a major forerunner of utilitarianism.

curve-fitting problem

The problem of making predictions from past observations by fitting curves to the data. Curve fitting has two steps: first, select a family of curves; then, find the best-fitting curve by some statistical criterion such as the method of least squares (e.g., choose the curve that has the least sum of squared deviations between the curve and data). The method was first proposed by Adrien-Marie Legendre (1752-1833) and Carl Friedrich Gauss (1777-1855) in the early nineteenth century as a way of inferring planetary trajectories from noisy data.


More generally, curve fitting may be used to construct low-level empirical generalizations. For example, suppose that the ideal gas law, P = nkT, is chosen as the form of the law governing the dependence of the pressure P on the equilibrium temperature T of a fixed volume of gas, where n is the molecular number per unit volume and k is Boltzmann’s constant (a universal constant equal to 1.3804 × 10⁻¹⁶ erg °C⁻¹). When the parameter nk is adjustable, the law specifies a family of curves — one for each numerical value of the parameter. Curve fitting may be used to determine the best-fitting member of the family, thereby effecting a measurement of the theoretical parameter, nk.
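The measurement step just described can be made concrete in a short sketch. For the one-parameter family P = nkT, the least-squares estimate has a closed form, nk = Σ TᵢPᵢ / Σ Tᵢ²; the data values below are invented for illustration:

```python
def fit_nk(temps, pressures):
    """Closed-form least-squares slope for a line through the origin:
    nk = sum(T_i * P_i) / sum(T_i ** 2), the value minimizing the sum
    of squared deviations between curve and data."""
    num = sum(t * p for t, p in zip(temps, pressures))
    den = sum(t * t for t in temps)
    return num / den

# Hypothetical noisy pressure readings at four temperatures:
T = [100.0, 200.0, 300.0, 400.0]
P = [13.9, 27.4, 41.8, 55.1]
nk = fit_nk(T, P)   # best-fitting member of the family, approx. 0.138
```

Fitting the curve to the data thus doubles as a measurement of the theoretical parameter nk.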



The philosophically vexing problem is how to justify the initial choice of the form of the law. On the one hand, one might choose a very large, complex family of curves, which would ensure excellent fit with any data set. The problem with this option is that the best-fitting curve may overfit the data. If too much attention is paid to the random elements of the data, then the predictively useful trends and regularities will be missed. If it looks too good to be true, it probably is. On the other hand, simpler families run a greater risk of making grossly false assumptions about the true form of the law. Intuitively, the solution is to choose a simple family of curves that maintains a reasonable degree of fit. The simplicity of a family of curves is measured by the paucity of parameters. The problem is to say how and why such a trade-off between simplicity and goodness of fit should be made.


When a theory can accommodate recalcitrant data only by the ad hoc (i.e., improperly motivated) addition of new terms and parameters, students of science have long felt that the subsequent increase in the degree of fit should not count in the theory’s favor, and such additions are sometimes called ad hoc hypotheses. The best-known example of this sort of ad hoc hypothesizing is the addition of epicycles upon epicycles in the planetary astronomies of Ptolemy and Copernicus. This is an example in which a gain in fit need not compensate for the loss of simplicity.


Contemporary philosophers sometimes formulate the curve-fitting problem differently. They often assume that there is no noise in the data, and speak of the problem of choosing among different curves that fit the data exactly. Then the problem is to choose the simplest curve from among all those curves that pass through every data point. The problem is that there is no universally accepted way of defining the simplicity of single curves.
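The exact-fit formulation makes the embarrassment of riches vivid: through any n data points with distinct x-values there passes a polynomial of degree at most n - 1 (by Lagrange interpolation), along with infinitely many more complex curves. A minimal sketch, with invented data:

```python
def lagrange(points, x):
    """Evaluate at x the unique polynomial of degree < len(points)
    passing exactly through the given (x, y) points."""
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        term = yi
        for j, (xj, _) in enumerate(points):
            if i != j:
                term *= (x - xj) / (xi - xj)   # Lagrange basis factor
        total += term
    return total

pts = [(0.0, 1.0), (1.0, 3.0), (2.0, 9.0)]   # invented data points
# The interpolant reproduces every data point exactly, yet so do
# infinitely many higher-degree curves: exact fit alone cannot decide.
```

Since exact fit rules out nothing beyond the data points themselves, some further criterion of simplicity is needed to choose among the candidates.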


No matter how the problem is formulated, it is widely agreed that simplicity should play some role in theory choice. Rationalists have championed the curve-fitting problem as exemplifying the underdetermination of theory by data and the need to make a priori assumptions about the simplicity of nature. Those philosophers who think that we have no such a priori knowledge still need to account for the relevance of simplicity to science.


Whewell described curve fitting as the colligation of facts in the quantitative sciences, and the agreement in the measured parameters (coefficients) obtained by different colligations of facts as the consilience of inductions. Different colligations of facts (say, on the same gas at different volumes, or for other gases) may yield good agreement among independently measured values of parameters (like the molecular density of the gas and Boltzmann’s constant). By identifying different parameters found to agree, we constrain the form of the law without appealing to a priori knowledge (good news for empiricism). But the accompanying increase in unification also worsens the overall degree of fit. Thus, there is also the problem of how and why we should trade off unification with total degree of fit.


Statisticians often refer to a family of hypotheses as a model. A rapidly growing literature in statistics on model selection has not yet produced any universally accepted formula for trading off simplicity with degree of fit. However, there is wide agreement among statisticians that the paucity of parameters is the appropriate way of measuring simplicity.
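One prominent proposal in that model-selection literature (offered here only as an illustration, not as a settled answer) is Akaike’s information criterion, which charges a family’s goodness of fit with a penalty proportional to its number of adjustable parameters. A minimal sketch with invented numbers:

```python
import math

def aic(rss, n, k):
    """AIC score for a least-squares fit of n data points with k
    adjustable parameters: n * ln(RSS / n) + 2k, up to an additive
    constant.  Lower scores are better."""
    return n * math.log(rss / n) + 2 * k

# Invented numbers: a 10-parameter family that fits slightly better can
# still score worse than a 2-parameter family once simplicity is charged.
simple = aic(rss=10.0, n=50, k=2)
complex_fit = aic(rss=9.5, n=50, k=10)
# simple < complex_fit, so the simpler family is preferred here.
```

The 2k term is one concrete way of cashing out “paucity of parameters” as a measure of simplicity, though, as the entry notes, no such formula is universally accepted.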

cut-elimination theorem

A theorem stating that a certain type of inference rule (including a rule that corresponds to modus ponens) is not needed in classical logic. The idea was anticipated by J. Herbrand; the theorem was proved by G. Gentzen and generalized by S. Kleene. Gentzen formulated a sequent calculus—i.e., a deductive system with rules for statements about derivability. It includes a rule that we here express as ‘From (C ⊢ D,M) and (M,C ⊢ D), infer (C ⊢ D)’ or ‘Given that C yields D or M, and that C plus M yields D, we may infer that C yields D’. This is called the cut rule because it cuts out the middle formula M. Gentzen showed that his sequent calculus is an adequate formalization of predicate logic, and that the cut rule can be eliminated; anything provable with it can be proved without it. One important consequence of this is that, if a formula F is provable, then there is a proof of F that consists solely of subformulas of F. This fact simplifies the study of provability. Gentzen’s methodology applies directly to classical logic but can be adapted to many nonclassical logics, including some intuitionistic logics. It has led to some important theorems about consistency, and has illuminated the role of auxiliary assumptions in the derivation of consequences from a theory.

cybernetics (coined by Norbert Wiener in 1947 from Greek kubernētēs, ‘helmsman’)

The study of the communication and manipulation of information in service of the control and guidance of biological, physical, or chemical energy systems. Historically, cybernetics has been intertwined with mathematical theories of information (communication) and computation. To describe the cybernetic properties of systems or processes requires ways to describe and measure information (reduce uncertainty) about events within the system and its environment. Feedback and feedforward, the basic ingredients of cybernetic processes, involve information—as what is fed forward or backward — and are basic to processes such as homeostasis in biological systems, automation in industry, and guidance systems. Of course, their most comprehensive application is to the purposive behavior (thought) of cognitively goal-directed systems such as ourselves.


Feedback occurs in closed-loop, as opposed to open-loop, systems. Actually, ‘open-loop’ is a misnomer (such systems involve no loop at all), but it has become entrenched. The standard example of an open-loop system is that of placing a heater with constant output in a closed room and leaving it switched on. Room temperature may accidentally reach, but may also dramatically exceed, the temperature desired by the occupants. Such a heating system has no means of controlling itself to adapt to required conditions.


In contrast, the standard closed-loop system incorporates a feedback component. At the heart of cybernetics is the concept of control. A controlled process is one in which an end state that is reached depends essentially on the behavior of the controlling system and not merely on its external environment. That is, control involves partial independence for the system. A control system may be pictured as having both an inner and outer environment. The inner environment consists of the internal events that make up the system; the outer environment consists of events that causally impinge on the system, threatening disruption and loss of system integrity and stability. For a system to maintain its independence and identity in the face of fluctuations in its external environment, it must be able to detect information about those changes in the external environment. Information must pass through the interface between inner and outer environments, and the system must be able to compensate for fluctuations of the outer environment by adjusting its own inner environmental variables. Otherwise, disturbances in the outer environment will overcome the system, bringing its inner states into equilibrium with the outer states, so that it loses its identity as a distinct, independent system. This is nowhere more certain than with the homeostatic systems of the body (for temperature or blood sugar levels).


Control in the attainment of goals is accomplished by minimizing error. Negative feedback, or information about error, is the difference between the activity a system actually performs (output) and the activity it is its goal to perform (input). The standard example of control incorporating negative feedback is the thermostatically controlled heating system. The actual room temperature (system output) carries information to the thermostat that can be compared (via goal-state comparator) to the desired temperature for the room (input) as embodied in the set-point on the thermostat; a correction can then be made to minimize the difference (error)—the furnace turns on or off.
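The thermostat example can be sketched as a simple simulation. Everything here (the hysteresis band, heating and cooling rates, the 10-degree exterior) is an illustrative assumption, not a model of any real furnace; the point is only that the correction always opposes the error:

```python
def thermostat_step(room_temp, set_point, heater_on, hysteresis=0.5):
    """One control step: negative feedback compares the output (room_temp)
    with the goal state (set_point) and corrects so as to shrink the error."""
    error = set_point - room_temp          # information about error
    if error > hysteresis:                 # too cold: turn the furnace on
        heater_on = True
    elif error < -hysteresis:              # too warm: turn the furnace off
        heater_on = False
    return heater_on

# Toy simulation: the room leaks heat toward a 10-degree exterior and
# gains heat while the furnace runs.
temp, heater = 12.0, False
history = []
for _ in range(200):
    heater = thermostat_step(temp, set_point=20.0, heater_on=heater)
    temp += (1.0 if heater else 0.0) - 0.05 * (temp - 10.0)
    history.append(temp)
```

Run long enough, the room temperature settles into a narrow oscillation around the set-point: the loop is closed, and disturbances are continually corrected rather than accumulated.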


Positive feedback tends to amplify the value of the output of a system (or of a system disturbance) by adding the value of the output to the system input quantity. Thus, the system accentuates disturbances and, if unchecked, will eventually pass the brink of instability. Suppose that as room temperature rises it causes the thermostatic set-point to rise in direct proportion to the rise in temperature. This would cause the furnace to continue to output heat (possibly with disastrous consequences). Many biological maladies have just this characteristic. For example, severe loss of blood causes inability of the heart to pump effectively, which causes loss of arterial pressure, which, in turn, causes reduced flow of blood to the heart, reducing pumping efficiency.
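For contrast, here is a minimal sketch of the runaway set-point just described, with hypothetical constants: because the “goal” itself rises with the output, the error never changes sign and the furnace never shuts off.

```python
def runaway_setpoint(initial_temp=20.0, steps=50):
    """Positive feedback: the set-point tracks the output upward, so the
    system amplifies the disturbance instead of cancelling it."""
    temp, set_point = initial_temp, initial_temp + 1.0
    for _ in range(steps):
        if temp < set_point:        # furnace on whenever below set-point
            temp += 1.0             # heat added each step
        set_point = temp + 1.0      # the disturbance raises the goal itself
    return temp

final = runaway_setpoint()          # climbs without bound as steps increase
```

Unlike the negative-feedback loop, there is no equilibrium here: each step moves the goal out of reach again, mirroring the blood-loss example in the text.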


Cognitively goal-directed systems are also cybernetic systems. Purposive attainment of a goal by a goal-directed system requires (at least): (1) an internal representation of the goal state of the system (a detector for whether the desired state is actual); (2) a feedback loop by which information about the present state of the system can be compared with the goal state as internally represented and by means of which an error correction can be made to minimize any difference; and (3) a causal dependency of system output upon the error-correction process of condition (2) (to distinguish goal success from fortuitous goal satisfaction).

Cynics

A classical Greek philosophical school characterized by asceticism and emphasis on the sufficiency of virtue for happiness (eudaimonia), boldness in speech, and shamelessness in action. The Cynics were strongly influenced by Socrates and were themselves an important influence on Stoic ethics.


An ancient tradition links the Cynics to Antisthenes (c.445-c.360 B.C.), an Athenian. He fought bravely in the battle of Tanagra and claimed that he would not have been so courageous if he had been born of two Athenians instead of an Athenian and a Thracian slave. He studied with Gorgias, but later became a close companion of Socrates and was present at Socrates’ death. Antisthenes was proudest of his wealth, although he had no money, because he was satisfied with what he had and he could live in whatever circumstances he found himself. Here he follows Socrates in three respects. First, Socrates himself lived with a disregard for pleasure and pain—e.g., walking barefoot in snow. Second, Socrates thinks that in every circumstance a virtuous person is better off than a non-virtuous one; Antisthenes anticipates the Stoic development of this to the view that virtue is sufficient for happiness, because the virtuous person uses properly whatever is present. Third, both Socrates and Antisthenes stress that the soul is more important than the body, and neglect the body for the soul. Unlike the later Cynics, however, both Socrates and Antisthenes do accept pleasure when it is available. Antisthenes also does not focus exclusively on ethics; he wrote on other topics, including logic. (He supposedly told Plato that he could see a horse but not horseness, to which Plato replied that he had not acquired the means to see horseness.)


Diogenes of Sinope (c.400-c.325 B.C.) continued the emphasis on self-sufficiency and on the soul, but took the disregard for pleasure to asceticism. (According to one story, Plato called Diogenes “Socrates gone mad.”) He came to Athens after being exiled from Sinope, perhaps because the coinage was defaced, either by himself or by others, under his father’s direction. He took ‘deface the coinage!’ as a motto, meaning that the current standards were corrupt and should be marked as corrupt by being defaced; his refusal to live by them was his defacing them. For example, he lived in a wine cask, ate whatever scraps he came across, and wrote approvingly of cannibalism and incest. One story reports that he carried a lighted lamp in broad daylight looking for an honest human, probably intending to suggest that the people he did see were so corrupted that they were no longer really people. He apparently wanted to replace the debased standards of custom with the genuine standards of nature — but nature in the sense of what was minimally required for human life, which an individual human could achieve, without society. Because of this, he was called a Cynic, from the Greek word kuon (dog), because he was as shameless as a dog.


Diogenes’ most famous successor was Crates (fl. c.328-325 B.C.). He was a Boeotian, from Thebes, and renounced his wealth to become a Cynic. He seems to have been more pleasant than Diogenes; according to some reports, every Athenian house was open to him, and he was even regarded by them as a household god. Perhaps the most famous incident involving Crates is his marriage to Hipparchia, who took up the Cynic way of life despite her family’s opposition and insisted that educating herself was preferable to working a loom. Like Diogenes, Crates emphasized that happiness is self-sufficiency, and claimed that asceticism is required for self-sufficiency; e.g., he advises us not to prefer oysters to lentils. He argues that no one is happy if happiness is measured by the balance of pleasure and pain, since in each period of our lives there is more pain than pleasure.


Cynicism continued to be active through the third century B.C., and returned to prominence in the second century A.D. after an apparent decline.

Cyrenaics

A classical Greek philosophical school that began shortly after Socrates and lasted for several centuries, noted especially for hedonism. Ancient writers trace the Cyrenaics back to Aristippus of Cyrene (fifth-fourth century B.C.), an associate of Socrates. Aristippus came to Athens because of Socrates’ fame and later greatly enjoyed the luxury of court life in Sicily. (Some people ascribe the founding of the school to his grandchild Aristippus, because of an ancient report that the elder Aristippus said nothing clear about the human end.) The Cyrenaics include Aristippus’s child Arete, her child Aristippus (taught by Arete), Hegesius, Anniceris, and Theodorus. The school seems to have been superseded by the Epicureans. No Cyrenaic writings survive, and the reports we do have are sketchy.


The Cyrenaics avoid mathematics and natural philosophy, preferring ethics because of its utility. (According to them, not only will studying nature not make us virtuous, it also won’t make us stronger or richer.) Some reports claim that they also avoid logic and epistemology. But this is not true of all the Cyrenaics: according to other reports, they think logic and epistemology are useful, consider arguments (and also causes) as topics to be covered in ethics, and have an epistemology. Their epistemology is skeptical. We can know only how we are affected; we can know, e.g., that we are whitening, but not that whatever is causing this sensation is itself white. This differs from Protagoras’s theory; unlike Protagoras, the Cyrenaics draw no inferences about the things that affect us, claiming only that external things have a nature that we cannot know. But, like Protagoras, the Cyrenaics base their theory on the problem of conflicting appearances. Given their epistemology, if humans ought to aim at something that is not a way of being affected (i.e., at something not immediately perceived, according to them), we can never know anything about it. Unsurprisingly, then, they claim that the end is a way of being affected; in particular, they are hedonists. The end of good actions is particular pleasures (smooth changes), and the end of bad actions is particular pains (rough changes). There is also an intermediate class, which aims at neither pleasure nor pain. Mere absence of pain is in this intermediate class, since the absence of pain may be merely a static state. Pleasure for Aristippus seems to be the sensation of pleasure, not including related psychic states. We should aim at pleasure (although not everyone does), as is clear from our naturally seeking it as children, before we consciously choose to.
Happiness, which is the sum of the particular pleasures someone experiences, is choiceworthy only for the particular pleasures that constitute it, while particular pleasures are choiceworthy for themselves. The Cyrenaics, then, are not concerned with maximizing total pleasure over a lifetime, but only with particular pleasures, and so they should not choose to give up particular pleasures on the chance of increasing the total.


Later Cyrenaics diverge in important respects from the original Cyrenaic hedonism, perhaps in response to the development of Epicurus’s views. Hegesias claims that happiness is impossible because of the pains associated with the body, and so thinks of happiness as total pleasure minus total pain. He emphasizes that wise people act for themselves, and denies that people actually act for someone else. Anniceris, on the other hand, claims that wise people are happy even if they have few pleasures, and so seems to think of happiness as the sum of pleasures, and not as the excess of pleasures over pains. Anniceris also begins considering psychic pleasures: he insists that friends should be valued not only for their utility, but also for our feelings toward them. We should even accept losing pleasure because of a friend, even though pleasure is the end. Theodorus goes a step beyond Anniceris. He claims that the end of good actions is joy and that of bad actions is grief. (Surprisingly, he denies that friendship is reasonable, since fools have friends only for utility and wiser people need no friends.) He even regards pleasure as intermediate between practical wisdom and its opposite. This seems to involve regarding happiness as the end, not particular pleasures, and may involve losing particular pleasures for long-term happiness.

Czolbe, Heinrich


(1819-73)

German philosopher. He was born in Danzig and trained in theology and medicine. His main works are Neue Darstellung des Sensualismus (“New Exposition of Sensualism,” 1855), Entstehung des Selbstbewusstseins (“Origin of Self-Consciousness,” 1856), Die Grenzen und der Ursprung der menschlichen Erkenntnis (“The Limits and Origin of Human Knowledge,” 1865), and a posthumously published study, Grundzüge der extensionalen Erkenntnistheorie (“Fundamentals of an Extensional Theory of Knowledge,” 1875).


Czolbe proposed a sensualistic theory of knowledge: knowledge is a copy of the actual, and spatial extension is ascribed even to ideas. Space is the support of all attributes. His later work defended a non-reductive materialism. Czolbe made the rejection of the supersensuous a central principle and defended a radical “sensationalism.” Despite this, he did not present a dogmatic materialism, but cast his philosophy in hypothetical form.


In his study of the origin of self-consciousness Czolbe held that dissatisfaction with the actual world generates supersensuous ideas and branded this attitude as “immoral.” He excluded supernatural phenomena on the basis not of physiological or scientific studies but of a “moral feeling of duty towards the natural world-order and contentment with it.” The same valuation led him to postulate the eternality of terrestrial life. Nietzsche was familiar with Czolbe’s works and incorporated some of his themes into his philosophy.