Law of thought

The laws of thought are fundamental axiomatic rules upon which rational discourse itself is often considered to be based. The formulation and clarification of such rules have a long tradition in the history of philosophy and logic. Generally they are taken as laws that guide and underlie everyone's thinking, thoughts, expressions, discussions, etc. However, such classical ideas are often questioned or rejected in more recent developments, such as intuitionistic logic, dialetheism and fuzzy logic.

According to the 1999 Cambridge Dictionary of Philosophy,[1] laws of thought are laws by which or in accordance with which valid thought proceeds, or that justify valid inference, or to which all valid deduction is reducible. Laws of thought are rules that apply without exception to any subject matter of thought, etc.; sometimes they are said to be the object of logic. The term, rarely used in exactly the same sense by different authors, has long been associated with three equally ambiguous expressions: the law of identity (ID), the law of contradiction (or non-contradiction; NC), and the law of excluded middle (EM). Sometimes, these three expressions are taken as propositions of formal ontology having the widest possible subject matter, propositions that apply to entities as such: (ID) everything is (i.e., is identical to) itself; (NC) no thing having a given quality also has the negative of that quality (e.g., no even number is non-even); (EM) every thing either has a given quality or has the negative of that quality (e.g., every number is either even or non-even). Equally common in older works is the use of these expressions for principles of metalogic about propositions: (ID) every proposition implies itself; (NC) no proposition is both true and false; (EM) every proposition is either true or false.

Beginning in the middle to late 1800s, these expressions have been used to denote propositions of Boolean algebra about classes: (ID) every class includes itself; (NC) every class is such that its intersection ("product") with its own complement is the null class; (EM) every class is such that its union ("sum") with its own complement is the universal class. More recently, the last two of the three expressions have been used in connection with the classical propositional logic and with the so-called protothetic or quantified propositional logic; in both cases the law of non-contradiction involves the negation of the conjunction ("and") of something with its own negation, ¬(A∧¬A), and the law of excluded middle involves the disjunction ("or") of something with its own negation, A∨¬A. In the case of propositional logic, the "something" is a schematic letter serving as a place-holder, whereas in the case of protothetic logic the "something" is a genuine variable. The expressions "law of non-contradiction" and "law of excluded middle" are also used for semantic principles of model theory concerning sentences and interpretations: (NC) under no interpretation is a given sentence both true and false, (EM) under any interpretation, a given sentence is either true or false.

The expressions mentioned above all have been used in many other ways. Many other propositions have also been mentioned as laws of thought, including the dictum de omni et nullo attributed to Aristotle, the substitutivity of identicals (or equals) attributed to Euclid, the so-called identity of indiscernibles attributed to Gottfried Wilhelm Leibniz, and other "logical truths".

The expression "laws of thought" gained added prominence through its use by Boole (1815–64) to denote theorems of his "algebra of logic"; in fact, he named his second logic book An Investigation of the Laws of Thought on Which are Founded the Mathematical Theories of Logic and Probabilities (1854). Modern logicians, in almost unanimous disagreement with Boole, take this expression to be a misnomer; none of the above propositions classed under "laws of thought" are explicitly about thought per se, a mental phenomenon studied by psychology, nor do they involve explicit reference to a thinker or knower as would be the case in pragmatics or in epistemology. The distinction between psychology (as a study of mental phenomena) and logic (as a study of valid inference) is widely accepted.

The three traditional laws

History

Hamilton offers a history of the three traditional laws that begins with Plato, proceeds through Aristotle, and ends with the schoolmen of the Middle Ages; in addition he offers a fourth law (see entry below, under Hamilton):

"The principles of Contradiction and Excluded Middle can be traced back to Plato: The principles of Contradiction and of Excluded Middle can both be traced back to Plato, by whom they were enounced and frequently applied; though it was not till long after, that either of them obtained a distinctive appellation. To take the principle of Contradiction first. This law Plato frequently employs, but the most remarkable passages are found in the Phœdo, in the Sophista, and in the fourth and seventh books of the Republic. [Hamilton LECT. V. LOGIC. 62]
Law of Excluded Middle: The law of Excluded Middle between two contradictories remounts, as I have said, also to Plato, though the Second Alcibiades, the dialogue in which it is most clearly expressed, must be admitted to be spurious. It is also in the fragments of Pseudo-Archytas, to be found in Stobæus. [Hamilton LECT. V. LOGIC. 65]
Hamilton further observes that "It is explicitly and emphatically enounced by Aristotle in many passages both of his Metaphysics (l. iii. (iv.) c.7.) and of his Analytics, both Prior (l. i. c. 2) and Posterior (l. i. c. 4). In the first of these, he says: "It is impossible that there should exist any medium between contradictory opposites, but it is necessary either to affirm or to deny everything of everything." [Hamilton LECT. V. LOGIC. 65]
"Law of Identity. [Hamilton also calls this "The principle of all logical affirmation and definition"] Antonius Andreas: The law of Identity, I stated, was not explicated as a coordinate principle till a comparatively recent period. The earliest author in whom I have found this done, is Antonius Andreas, a scholar of Scotus, who flourished at the end of the thirteenth and beginning of the fourteenth century. The schoolman, in the fourth book of his Commentary of Aristotle's Metaphysics – a commentary which is full of the most ingenious and original views, – not only asserts to the law of Identity a coordinate dignity with the law of Contradiction, but, against Aristotle, he maintains that the principle of Identity, and not the principle of Contradiction, is the one absolutely first. The formula in which Andreas expressed it was Ens est ens. Subsequently to this author, the question concerning the relative priority of the two laws of Identity and of Contradiction became one much agitated in the schools; though there were also found some who asserted to the law of Excluded Middle this supreme rank." [From Hamilton LECT. V. LOGIC. 65–66]

Three traditional laws: identity, non-contradiction, excluded middle

The following states the three traditional "laws" in the words of Bertrand Russell (1912):

The law of identity

The law of identity: 'Whatever is, is.'[2]

For all a: a = a.

Regarding this law, Aristotle wrote:

First then this at least is obviously true, that the word "be" or "not be" has a definite meaning, so that not everything will be "so and not so". Again, if "man" has one meaning, let this be "two-footed animal"; by having one meaning I understand this:—if "man" means "X", then if A is a man "X" will be what "being a man" means for him. (It makes no difference even if one were to say a word has several meanings, if only they are limited in number; for to each definition there might be assigned a different word. For instance, we might say that "man" has not one meaning but several, one of which would have one definition, viz. "two-footed animal", while there might be also several other definitions if only they were limited in number; for a peculiar name might be assigned to each of the definitions. If, however, they were not limited but one were to say that the word has an infinite number of meanings, obviously reasoning would be impossible; for not to have one meaning is to have no meaning, and if words have no meaning our reasoning with one another, and indeed with ourselves, has been annihilated; for it is impossible to think of anything if we do not think of one thing; but if this is possible, one name might be assigned to this thing.)

— Aristotle, Metaphysics, Book IV, Part 4 (translated by W.D. Ross)[3]

More than two millennia later, George Boole alluded to the very same principle as did Aristotle when he made the following observation with respect to the nature of language and the principles that must naturally inhere within it:

There exist, indeed, certain general principles founded in the very nature of language, by which the use of symbols, which are but the elements of scientific language, is determined. To a certain extent these elements are arbitrary. Their interpretation is purely conventional: we are permitted to employ them in whatever sense we please. But this permission is limited by two indispensable conditions, first, that from the sense once conventionally established we never, in the same process of reasoning, depart; secondly, that the laws by which the process is conducted be founded exclusively upon the above fixed sense or meaning of the symbols employed.

The law of non-contradiction

The law of non-contradiction (alternately the 'law of contradiction'[4]): 'Nothing can both be and not be.'[2]

In other words: "two or more contradictory statements cannot both be true in the same sense at the same time": ¬(A∧¬A).

In the words of Aristotle, that "one cannot say of something that it is and that it is not in the same respect and at the same time". As an illustration of this law, he wrote:

It is impossible, then, that "being a man" should mean precisely not being a man, if "man" not only signifies something about one subject but also has one significance ... And it will not be possible to be and not to be the same thing, except in virtue of ambiguity, just as if one whom we call "man", and others were to call "not-man"; but the point in question is not this, whether the same thing can at the same time be and not be a man in name, but whether it can be in fact.

— Aristotle, Metaphysics, Book IV, Part 4 (translated by W.D. Ross)[3]

The law of excluded middle

The law of excluded middle: 'Everything must either be or not be.'[2]

In accordance with the law of excluded middle or excluded third, for every proposition, either its positive or negative form is true: A∨¬A.

Regarding the law of excluded middle, Aristotle wrote:

But on the other hand there cannot be an intermediate between contradictories, but of one subject we must either affirm or deny any one predicate. This is clear, in the first place, if we define what the true and the false are. To say of what is that it is not, or of what is not that it is, is false, while to say of what is that it is, and of what is not that it is not, is true; so that he who says of anything that it is, or that it is not, will say either what is true or what is false.

— Aristotle, Metaphysics, Book IV, Part 7 (translated by W.D. Ross)[3]
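
Read over the two classical truth values, the formulas given above can be checked mechanically: ¬(A∧¬A) and A∨¬A are tautologies, and the law of identity can be rendered propositionally as "A implies A". A minimal Python sketch (an editorial illustration, not drawn from the quoted sources):

  def implies(p, q):
      """Material implication: 'p implies q' read as NOT-p OR q."""
      return (not p) or q

  # The three traditional laws read as propositional schemata.
  laws = {
      "identity:          A implies A": lambda a: implies(a, a),
      "non-contradiction: not(A and not-A)": lambda a: not (a and not a),
      "excluded middle:   A or not-A": lambda a: a or not a,
  }

  for name, law in laws.items():
      # A tautology is true under every assignment of truth values.
      assert all(law(a) for a in (False, True))
      print(name, "-> tautology")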

Rationale

As the quotations from Hamilton above indicate, in particular the "law of identity" entry, the rationale for and expression of the "laws of thought" have been fertile ground for philosophic debate since Plato. Today the debate—about how we "come to know" the world of things and our thoughts—continues; for examples of rationales see the entries, below.

Plato

In one of Plato's Socratic dialogues, Socrates described three principles derived from introspection:

First, that nothing can become greater or less, either in number or magnitude, while remaining equal to itself ... Secondly, that without addition or subtraction there is no increase or diminution of anything, but only equality ... Thirdly, that what was not before cannot be afterwards, without becoming and having become.

— Plato, Theaetetus, 155[5]

Indian logic

The law of non-contradiction is found in ancient Indian logic as a meta-rule in the Shrauta Sutras, the grammar of Pāṇini,[6] and the Brahma Sutras attributed to Vyasa. It was later elaborated on by medieval commentators such as Madhvacharya.[7]

Locke

John Locke claimed that the principles of identity and contradiction (i.e. the law of identity and the law of non-contradiction) were general ideas and only occurred to people after considerable abstract, philosophical thought. He characterized the principle of identity as "Whatsoever is, is." He stated the principle of contradiction as "It is impossible for the same thing to be and not to be." To Locke, these were not innate or a priori principles.[8]

Leibniz

Gottfried Leibniz formulated two additional principles, either or both of which may sometimes be counted as a law of thought:

  • Principle of sufficient reason
  • Identity of indiscernibles

In Leibniz's thought, as well as generally in the approach of rationalism, these two principles are regarded as clear and incontestable axioms. They were widely recognized in European thought of the 17th, 18th, and 19th centuries, although they were subject to greater debate in the 19th century. As turned out to be the case with the law of continuity, these two laws involve matters which, in contemporary terms, are subject to much debate and analysis (respectively on determinism and extensionality). Leibniz's principles were particularly influential in German thought. In France, the Port-Royal Logic was less swayed by them. Hegel quarrelled with the identity of indiscernibles in his Science of Logic (1812–1816).

Schopenhauer

Four laws

"The primary laws of thought, or the conditions of the thinkable, are four: – 1. The law of identity [A is A]. 2. The law of contradiction. 3. The law of exclusion; or excluded middle. 4. The law of sufficient reason." (Thomas Hughes, The Ideal Theory of Berkeley and the Real World, Part II, Section XV, Footnote, p. 38)

Arthur Schopenhauer discussed the laws of thought and tried to demonstrate that they are the basis of reason. He listed them in the following way in his On the Fourfold Root of the Principle of Sufficient Reason, §33:

  1. A subject is equal to the sum of its predicates, or a = a.
  2. No predicate can be simultaneously attributed and denied to a subject, or a ≠ ~a.
  3. Of every two contradictorily opposite predicates one must belong to every subject.
  4. Truth is the reference of a judgment to something outside it as its sufficient reason or ground.

Also:

The laws of thought can be most intelligibly expressed thus:

  1. Everything that is, exists.
  2. Nothing can simultaneously be and not be.
  3. Each and every thing either is or is not.
  4. Of everything that is, it can be found why it is.

There would then have to be added only the fact that once for all in logic the question is about what is thought and hence about concepts and not about real things.

— Schopenhauer, Manuscript Remains, Vol. 4, "Pandectae II", §163

To show that they are the foundation of reason, he gave the following explanation:

Through a reflection, which I might call a self-examination of the faculty of reason, we know that these judgments are the expression of the conditions of all thought and therefore have these as their ground. Thus by making vain attempts to think in opposition to these laws, the faculty of reason recognizes them as the conditions of the possibility of all thought. We then find that it is just as impossible to think in opposition to them as it is to move our limbs in a direction contrary to their joints. If the subject could know itself, we should know those laws immediately, and not first through experiments on objects, that is, representations (mental images).

Schopenhauer's four laws can be schematically presented in the following manner:

  1. A is A.
  2. A is not not-A.
  3. X is either A or not-A.
  4. If A then B (A implies B).

Two laws

Later, in 1844, Schopenhauer claimed that the four laws of thought could be reduced to two. In the ninth chapter of the second volume of The World as Will and Representation, he wrote:

It seems to me that the doctrine of the laws of thought could be simplified if we were to set up only two, the law of excluded middle and that of sufficient reason. The former thus: "Every predicate can be either confirmed or denied of every subject." Here it is already contained in the "either, or" that both cannot occur simultaneously, and consequently just what is expressed by the laws of identity and contradiction. Thus these would be added as corollaries of that principle which really says that every two concept-spheres must be thought either as united or as separated, but never as both at once; and therefore, even although words are joined together which express the latter, these words assert a process of thought which cannot be carried out. The consciousness of this infeasibility is the feeling of contradiction. The second law of thought, the principle of sufficient reason, would affirm that the above attributing or refuting must be determined by something different from the judgment itself, which may be a (pure or empirical) perception, or merely another judgment. This other and different thing is then called the ground or reason of the judgment. So far as a judgment satisfies the first law of thought, it is thinkable; so far as it satisfies the second, it is true, or at least in the case in which the ground of a judgment is only another judgment it is logically or formally true.[9]

Boole (1854): From his "laws of the mind" Boole derives Aristotle's "Law of contradiction"

The title of George Boole's 1854 treatise on logic, An Investigation of the Laws of Thought, indicates an alternate path. The laws are now incorporated into an algebraic representation of his "laws of the mind", honed over the years into modern Boolean algebra.

Rationale: How the "laws of the mind" are to be distinguished

Boole begins his chapter I "Nature and design of this Work" with a discussion of what characteristic distinguishes, generally, "laws of the mind" from "laws of nature":

"The general laws of Nature are not, for the most part, immediate objects of perception. They are either inductive inferences from a large body of facts, the common truth in which they express, or, in their origin at least, physical hypotheses of a causal nature. ... They are in all cases, and in the strictest sense of the term, probable conclusions, approaching, indeed, ever and ever nearer to certainty, as they receive more and more of the confirmation of experience. ..."

Contrasted with this are what he calls "laws of the mind": Boole asserts these are known in their first instance, without need of repetition:

"On the other hand, the knowledge of the laws of the mind does not require as its basis any extensive collection of observations. The general truth is seen in the particular instance, and it is not confirmed by the repetition of instances. ... we not only see in the particular example the general truth, but we see it also as a certain truth – a truth, our confidence in which will not continue to increase with increasing experience of its practical verification." (Boole 1854:4)

Boole's signs and their laws

Boole begins with the notion of "signs" representing "classes", "operations" and "identity":

"All the signs of Language, as an instrument of reasoning may be conducted by a system of signs composed of the following elements
"1st Literal symbols as x, y, etc representing things as subjects of our conceptions,
"2nd Signs of operation, as +, −, x standing for those operations of the mind by which conceptions of things are combined or resolved so as to form new conceptions involving the same elements,
"3rd The sign of identity, =.
And these symbols of Logic are in their use subject to definite laws, partly agreeing with and partly differing from the laws of the corresponding symbols in the science of Algebra. (Boole 1854:27)

Boole then clarifies what a "literal symbol" e.g. x, y, z,... represents—a name applied to a collection of instances into "classes". For example, "bird" represents the entire class of feathered winged warm-blooded creatures. For his purposes he extends the notion of class to represent membership of "one", or "nothing", or "the universe" i.e. totality of all individuals:

"Let us then agree to represent the class of individuals to which a particular name or description is applicable, by a single letter, as z. ... By a class is usually meant a collection of individuals, to each of which a particular name or description may be applied; but in this work the meaning of the term will be extended so as to include the case in which but a single individual exists, answering to the required name or description, as well as the cases denoted by the terms "nothing" and "universe," which as "classes" should be understood to comprise respectively 'no beings,' 'all beings.'" (Boole 1854:28)

He then defines what the string of symbols e.g. xy means [modern logical &, conjunction]:

"Let it further be agreed, that by the combination xy shall be represented that class of things to which the names or descriptions represented by x and y are simultaneously, applicable. Thus, if x alone stands for "white things," and y for "sheep," let xy stand for 'white Sheep;'" (Boole 1854:28)

Given these definitions he now lists his laws with their justification plus examples (derived from Boole):

  • (1) xy = yx [commutative law]
"x represents 'estuaries,' and y 'rivers,' the expressions xy and yx will indifferently represent" 'rivers that are estuaries,' or 'estuaries that are rivers,'"
  • (2) xx = x, alternately x² = x [Absolute identity of meaning, Boole's "fundamental law of thought" cf page 49]
"Thus 'good, good' men, is equivalent to 'good' men".

Logical OR: Boole defines the "collecting of parts into a whole or separate a whole into its parts" (Boole 1854:32). Here the connective "and" is used disjunctively, as is "or"; he presents a commutative law (3) and a distributive law (4) for the notion of "collecting". The notion of separating a part from the whole he symbolizes with the "-" operation; he defines a commutative (5) and distributive law (6) for this notion:

  • (3) y + x = x + y [commutative law]
"Thus the expression 'men and women' is ... equivalent with the expression" women and men. Let x represent 'men,' y, 'women' and let + stand for 'and' and 'or' ..."
  • (4) z(x + y) = zx + zy [distributive law]
z = "European", x = "men", y = "women": European men and women = European men and European women
  • (5) x − y = −y + x [commutation law: separating a part from the whole]
"All men (x) except Asiatics (y)" is represented by x − y. "All states (x) except monarchical states (y)" is represented by x − y
  • (6) z(x − y) = zx − zy [distributive law]

Lastly there is a notion of "identity" symbolized by "=". This allows for two axioms: (axiom 1) equals added to equals result in equals; (axiom 2) equals subtracted from equals result in equals.

  • (7) Identity ("is", "are") e.g. x = y + z, "stars" = "suns" and "the planets"

Nothing "0" and Universe "1": He observes that the only two numbers that satisfy xx = x are 0 and 1. He then observes that 0 represents "Nothing" while "1" represents the "Universe" (of discourse).

The logical NOT: Boole defines the contrary (logical NOT) as follows (his Proposition III):

"If x represent any class of objects, then will 1 − x represent the contrary or supplementary class of objects, i.e. the class including all objects which are not comprehended in the class x" (Boole 1854:48)
If x = "men" then "1 − x" represents the "universe" less "men", i.e. "not-men".

The notion of a particular as opposed to a universal: To represent the notion of "some men", Boole writes the small letter "v" before the predicate-symbol: "vx" represents "some men".

Exclusive- and inclusive-OR: Boole does not use these modern names, but he defines these as follows: x(1 − y) + y(1 − x) and x + y(1 − x), respectively; these agree with the formulas derived by means of the modern Boolean algebra.[10]
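
In Boole's 0/1 arithmetic these two expressions agree with the modern exclusive and inclusive OR, which can be checked by enumeration (an illustration in Python, not Boole's own notation):

  from itertools import product

  # 1 stands for the Universe / "true", 0 for Nothing / "false".
  for x, y in product((0, 1), repeat=2):
      exclusive = x * (1 - y) + y * (1 - x)    # Boole's expression for exclusive-OR
      inclusive = x + y * (1 - x)              # Boole's expression for inclusive-OR
      assert exclusive == (x ^ y)              # matches modern XOR
      assert inclusive == (x | y)              # matches modern inclusive OR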

Boole derives the law of contradiction

Armed with his "system" he derives the "principle of [non]contradiction" starting with his law of identity: x2 = x. He subtracts x from both sides (his axiom 2), yielding x2 − x = 0. He then factors out the x: x(x − 1) = 0. For example, if x = "men" then 1 − x represents NOT-men. So we have an example of the "Law of Contradiction":

"Hence: x(1 − x) will represent the class whose members are at once "men," and" not men," and the equation [x(1 − x)=0] thus express the principle, that a class whose members are at the same time men and not men does not exist. In other words, that it is impossible for the same individual to be at the same time a man and not a man. ... this is identically that "principle of contradiction" which Aristotle has described as the fundamental axiom of all philosophy. ... what has been commonly regarded as the fundamental axiom of metaphysics is but the consequence of a law of thought, mathematical in its form." (with more explanation about this "dichotomy" comes about cf Boole 1854:49ff)

Boole defines the notion "domain (universe) of discourse"

This notion is found throughout Boole's "Laws of Thought" e.g. 1854:28, where the symbol "1" (the integer 1) is used to represent "Universe" and "0" to represent "Nothing", and in far more detail later (pages 42ff):

" Now, whatever may be the extent of the field within which all the objects of our discourse are found, that field may properly be termed the universe of discourse. ... Furthermore, this universe of discourse is in the strictest sense the ultimate subject of the discourse."

In his chapter "The Predicate Calculus" Kleene observes that the specification of the "domain" of discourse is "not a trivial assumption, since it is not always clearly satisfied in ordinary discourse ... in mathematics likewise, logic can become pretty slippery when no D [domain] has been specified explicitly or implicitly, or the specification of a D [domain] is too vague" (Kleene 1967:84).

Hamilton (1837–38 lectures on Logic, published 1860): a 4th "Law of Reason and Consequent"

As noted above, Hamilton specifies four laws—the three traditional plus the fourth "Law of Reason and Consequent"—as follows:

"XIII. The Fundamental Laws of Thought, or the conditions of the thinkable, as commonly received, are four: – 1. The Law of Identity; 2. The Law of Contradiction; 3. The Law of Exclusion or of Excluded Middle; and, 4. The Law of Reason and Consequent, or of Sufficient Reason."[11]

Rationale: "Logic is the science of the Laws of Thought as Thought"

Hamilton opines that thought comes in two forms: "necessary" and "contingent" (Hamilton 1860:17). With regard to the "necessary" form he defines its study as "logic": "Logic is the science of the necessary forms of thought" (Hamilton 1860:17). To define "necessary" he asserts that it implies the following four "qualities":[12]

(1) "determined or necessitated by the nature of the thinking subject itself ... it is subjectively, not objectively, determined;
(2) "original and not acquired;
(3) "universal; that is, it cannot be that it necessitates on some occasions, and does not necessitate on others.
(4) "it must be a law; for a law is that which applies to all cases without exception, and from which a deviation is ever, and everywhere, impossible, or, at least, unallowed. ... This last condition, likewise, enables us to give the most explicit enunciation of the object-matter of Logic, in saying that Logic is the science of the Laws of Thought as Thought, or the science of the Formal Laws of Thought, or the science of the Laws of the Form of Thought; for all these are merely various expressions of the same thing."

Hamilton's 4th law: "Infer nothing without ground or reason"

Here is Hamilton's fourth law, from his LECT. V. LOGIC. 60–61:

"I now go on to the fourth law.
"Par. XVII. Law of Sufficient Reason, or of Reason and Consequent:
"XVII. The thinking of an object, as actually characterized by positive or by negative attributes, is not left to the caprice of Understanding – the faculty of thought; but that faculty must be necessitated to this or that determinate act of thinking by a knowledge of something different from, and independent of; the process of thinking itself. This condition of our understanding is expressed by the law, as it is called, of Sufficient Reason (principium Rationis Sufficientis); but it is more properly denominated the law of Reason and Consequent (principium Rationis et Consecutionis). That knowledge by which the mind is necessitated to affirm or posit something else, is called the logical reason ground, or antecedent; that something else which the mind is necessitated to affirm or posit, is called the logical consequent; and the relation between the reason and consequent, is called the logical connection or consequence. This law is expressed in the formula – Infer nothing without a ground or reason.1
Relations between Reason and Consequent: The relations between Reason and Consequent, when comprehended in a pure thought, are the following:
1. When a reason is explicitly or implicitly given, then there must exist a consequent; and, vice versa, when a consequent is given, there must also exist a reason.
2. Where there is no reason there can be no consequent; and, vice versa, where there is no consequent (either implicitly or explicitly) there can be no reason. That is, the concepts of reason and of consequent, as reciprocally relative, involve and suppose each other.
The logical significance of this law: The logical significance of the law of Reason and Consequent lies in this, – That in virtue of it, thought is constituted into a series of acts all indissolubly connected; each necessarily inferring the other. Thus it is that the distinction and opposition of possible, actual and necessary matter, which has been introduced into Logic, is a doctrine wholly extraneous to this science.
1 See Schulze, Logik, §19, and Krug, Logik, §20. – ED.

Welton

In the 19th century, the Aristotelian laws of thought, as well as sometimes the Leibnizian laws of thought, were standard material in logic textbooks, and J. Welton described them in this way:

The Laws of Thought, Regulative Principles of Thought, or Postulates of Knowledge, are those fundamental, necessary, formal and a priori mental laws in agreement with which all valid thought must be carried on. They are a priori, that is, they result directly from the processes of reason exercised upon the facts of the real world. They are formal; for as the necessary laws of all thinking, they cannot, at the same time, ascertain the definite properties of any particular class of things, for it is optional whether we think of that class of things or not. They are necessary, for no one ever does, or can, conceive them reversed, or really violate them, because no one ever accepts a contradiction which presents itself to his mind as such.

— Welton, A Manual of Logic, 1891, Vol. I, p. 30.

Russell (1903–1927)

The sequel to Bertrand Russell's 1903 "The Principles of Mathematics" became the three-volume work named Principia Mathematica (hereafter PM), written jointly with Alfred North Whitehead. Immediately after he and Whitehead published PM he wrote his 1912 "The Problems of Philosophy". His "Problems" reflects "the central ideas of Russell's logic".[13]

The Principles of Mathematics (1903)

In his 1903 "Principles" Russell defines Symbolic or Formal Logic (he uses the terms synonymously) as "the study of the various general types of deduction" (Russell 1903:11). He asserts that "Symbolic Logic is essentially concerned with inference in general" (Russell 1903:12) and with a footnote indicates that he does not distinguish between inference and deduction; moreover he considers induction "to be either disguised deduction or a mere method of making plausible guesses" (Russell 1903:11). This opinion will change by 1912, when he deems his "principle of induction" to be par with the various "logical principles" that include the "Laws of Thought".

In his Part I "The Indefinables of Mathematics" Chapter II "Symbolic Logic" Part A "The Propositional Calculus" Russell reduces deduction ("propositional calculus") to 2 "indefinables" and 10 axioms:

"17. We require, then, in the propositional calculus, no indefinable except the two kinds of implication [simple aka "material"[14] and "formal"]-- remembering, however, that formal implication is a complex notion, whose analysis remains to be undertaken. As regards our two indefinables, we require certain indemonstrable propositions, which hitherto I have not succeeded in reducing to less ten (Russell 1903:15).

From these he claims to be able to derive the law of excluded middle and the law of contradiction but does not exhibit his derivations (Russell 1903:17). Subsequently, he and Whitehead honed these "primitive principles" and axioms into the nine found in PM, and here Russell actually exhibits these two derivations at ❋2.11 and ❋3.24, respectively.

The Problems of Philosophy (1912)

By 1912 Russell in his "Problems" pays close attention to "induction" (inductive reasoning) as well as "deduction" (inference), both of which represent just two examples of "self-evident logical principles" that include the "Laws of Thought."[4]

Induction principle: Russell devotes a chapter to his "induction principle". He describes it as coming in two parts: firstly, as a repeated collection of evidence (with no failures of association known) and therefore increasing probability that whenever A happens B follows; secondly, in a fresh instance when indeed A happens, B will indeed follow: i.e. "a sufficient number of cases of association will make the probability of a fresh association nearly a certainty, and will make it approach certainty without limit."[15]

He then collects all the cases (instances) of the induction principle (e.g. case 1: A1 = "the rising sun", B1 = "the eastern sky"; case 2: A2 = "the setting sun", B2 = "the western sky"; case 3: etc.) into a "general" law of induction which he expresses as follows:

"(a) The greater the number of cases in which a thing of the sort A has been found associated with a thing of the sort B, the more probable it is (if cases of failure of association are known) that A is always associated with B;
"(b) Under the same circumstances, a sufficient number of cases of the association of A with B will make it nearly certain that A is always associated with B, and will make this general law approach certainty without limit."[16]

He makes an argument that this induction principle can neither be disproved nor proved by experience,[17] the failure of disproof occurring because the law deals with the probability of success rather than certainty; the failure of proof occurring because of unexamined cases that are yet to be experienced, i.e. they will occur (or not) in the future. "Thus we must either accept the inductive principle on the ground of its intrinsic evidence, or forgo all justification of our expectations about the future".[18]

In his next chapter ("On Our Knowledge of General Principles") Russell offers other principles that have this similar property: "which cannot be proved or disproved by experience, but are used in arguments which start from what is experienced." He asserts that these "have even greater evidence than the principle of induction ... the knowledge of them has the same degree of certainty as the knowledge of the existence of sense-data. They constitute the means of drawing inferences from what is given in sensation".[19]

Inference principle: Russell then offers an example that he calls a "logical" principle. Twice previously he has asserted this principle, first as the 4th axiom in his 1903[20] and then as his first "primitive proposition" of PM: "❋1.1 Anything implied by a true elementary proposition is true".[21] Now he repeats it in his 1912 in a refined form: "Thus our principle states that if this implies that, and this is true, then that is true. In other words, 'anything implied by a true proposition is true', or 'whatever follows from a true proposition is true'.[22] This principle he places great stress upon, stating that "this principle is really involved – at least, concrete instances of it are involved – in all demonstrations".[4]

He does not call his inference principle modus ponens, but his formal, symbolic expression of it in PM (2nd edition 1927) is that of modus ponens; modern logic calls this a "rule" as opposed to a "law".[23] In the quotation that follows, the symbol "⊦" is the "assertion-sign" (cf PM:92); "⊦" means "it is true that", therefore "⊦p" where "p" is "the sun is rising" means "it is true that the sun is rising", alternately "The statement 'The sun is rising' is true". The "implication" symbol "⊃" is commonly read "if p then q", or "p implies q" (cf PM:7). Embedded in this notion of "implication" are two "primitive ideas", "the Contradictory Function" (symbolized by NOT, "~") and "the Logical Sum or Disjunction" (symbolized by OR, "⋁"); these appear as "primitive propositions" ❋1.7 and ❋1.71 in PM (PM:97). With these two "primitive propositions" Russell defines "p ⊃ q" to have the formal logical equivalence "NOT-p OR q" symbolized by "~p ⋁ q":

"Inference. The process of inference is as follows: a proposition "p" is asserted, and a proposition "p implies q" is asserted, and then as a sequel the proposition "q" is asserted. The trust in inference is the belief that if the two former assertions are not in error, the final assertion is not in error. Accordingly, whenever, in symbols, where p and q have of course special determination
" "⊦p" and "⊦(p ⊃ q)"
" have occurred, then "⊦q" will occur if it is desired to put it on record. The process of the inference cannot be reduced to symbols. Its sole record is the occurrence of "⊦q". ... An inference is the dropping of a true premiss; it is the dissolution of an implication".[24]

In other words, in a long "string" of inferences, after each inference we can detach the "consequent" "⊦q" from the symbol string "⊦p, ⊦(p⊃q)" and not carry these symbols forward in an ever-lengthening string of symbols.
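
Read semantically, the detachment step never leads from true premisses to a false conclusion; with "p ⊃ q" defined as NOT-p OR q this can be checked by enumeration (a minimal Python sketch, not Russell's notation):

  from itertools import product

  def implies(p, q):
      """Material implication: 'p ⊃ q' read as NOT-p OR q."""
      return (not p) or q

  # Under no assignment are "p" and "p ⊃ q" both true while "q" is false.
  for p, q in product((False, True), repeat=2):
      assert implies(p and implies(p, q), q)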

The three traditional "laws" (principles) of thought: Russell goes on to assert other principles, of which the above logical principle is "only one". He asserts that "some of these must be granted before any argument or proof becomes possible. When some of them have been granted, others can be proved." Of these various "laws" he asserts that "for no very good reason, three of these principles have been singled out by tradition under the name of 'Laws of Thought'.[4] And these he lists as follows:

"(1) The law of identity: 'Whatever is, is.'
(2) The law of contradiction: 'Nothing can both be and not be.'
(3) The law of excluded middle: 'Everything must either be or not be.'"[4]

Rationale: Russell opines that "the name 'laws of thought' is ... misleading, for what is important is not the fact that we think in accordance with these laws, but the fact that things behave in accordance with them; in other words, the fact that when we think in accordance with them we think truly."[25] But he rates this a "large question" and expands it in two following chapters where he begins with an investigation of the notion of "a priori" (innate, built-in) knowledge, and ultimately arrives at his acceptance of the Platonic "world of universals". In his investigation he comes back now and then to the three traditional laws of thought, singling out the law of contradiction in particular: "The conclusion that the law of contradiction is a law of thought is nevertheless erroneous ... [rather], the law of contradiction is about things, and not merely about thoughts ... a fact concerning the things in the world."[26]

His argument begins with the statement that the three traditional laws of thought are "samples of self-evident principles". For Russell the matter of "self-evident"[27] merely introduces the larger question of how we derive our knowledge of the world. He cites the "historic controversy ... between the two schools called respectively 'empiricists' [ Locke, Berkeley, and Hume ] and 'rationalists' [ Descartes and Leibniz]" (these philosophers are his examples).[28] Russell asserts that the rationalists "maintained that, in addition to what we know by experience, there are certain 'innate ideas' and 'innate principles', which we know independently of experience";[28] to eliminate the possibility of babies having innate knowledge of the "laws of thought", Russell renames this sort of knowledge a priori. And while Russell agrees with the empiricists that "Nothing can be known to exist except by the help of experience",[29] he also agrees with the rationalists that some knowledge is a priori, specifically "the propositions of logic and pure mathematics, as well as the fundamental propositions of ethics".[30]

This question of how such a priori knowledge can exist directs Russell to an investigation into the philosophy of Immanuel Kant, which after careful consideration he rejects as follows:

"... there is one main objection which seems fatal to any attempt to deal with the problem of a priori knowledge by his method. The thing to be accounted for is our certainty that the facts must always conform to logic and arithmetic. ... Thus Kant's solution unduly limits the scope of a priori propositions, in addition to failing in the attempt at explaining their certainty".[31]

His objections to Kant then lead Russell to accept the 'theory of ideas' of Plato, "in my opinion ... one of the most successful attempts hitherto made";[32] he asserts that "... we must examine our knowledge of universals ... where we shall find that [this consideration] solves the problem of a priori knowledge".[32]

Principia Mathematica (Part I: 1910 first edition, 1927 2nd edition)

Unfortunately, Russell's "Problems" does not offer an example of a "minimum set" of principles that would apply to human reasoning, both inductive and deductive. But PM does at least provide an example set (but not the minimum; see Post below) that is sufficient for deductive reasoning by means of the propositional calculus (as opposed to reasoning by means of the more-complicated predicate calculus)—a total of 8 principles at the start of "Part I: Mathematical Logic". Each of the formulas ❋1.2 to ❋1.6 is a tautology (true no matter what the truth-value of p, q, r ... is). What is missing in PM's treatment is a formal rule of substitution;[33] in his 1921 PhD thesis Emil Post fixes this deficiency (see Post below). In what follows the formulas are written in a more modern format than that used in PM; the names are as given in PM.

❋1.1 Anything implied by a true elementary proposition is true.
❋1.2 Principle of Tautology: (p ⋁ p) ⊃ p
❋1.3 Principle of [logical] Addition: q ⊃ (p ⋁ q)
❋1.4 Principle of Permutation: (p ⋁ q) ⊃ (q ⋁ p)
❋1.5 Associative Principle: p ⋁ (q ⋁ r) ⊃ q ⋁ (p ⋁ r) [redundant]
❋1.6 Principle of [logical] Summation: (q ⊃ r) ⊃ ((p ⋁ q) ⊃ (p ⋁ r))
❋1.7 [logical NOT]: If p is an elementary proposition, ~p is an elementary proposition.
❋1.71 [logical inclusive OR]: If p and q are elementary propositions, (p ⋁ q) is an elementary proposition.

Russell sums up these principles with "This completes the list of primitive propositions required for the theory of deduction as applied to elementary propositions" (PM:97).

Starting from these eight primitive propositions and a tacit use of the "rule" of substitution, PM then derives over a hundred different formulas, among which are the Law of Excluded Middle ❋2.11 and the Law of Contradiction ❋3.24 (the latter requiring a definition of logical AND symbolized by the modern ⋀: (p ⋀ q) =def ~(~p ⋁ ~q); PM uses the "dot" symbol for logical AND).
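
That the five formulas ❋1.2–❋1.6 are tautologies, and that the derived laws ❋2.11 and ❋3.24 are as well, can be verified by brute force over the truth values (a Python check of the semantic claim, not PM's formal derivations; ⋀ is defined from ~ and ⋁ as in the text):

  from itertools import product

  def neg(p): return not p
  def disj(p, q): return p or q
  def impl(p, q): return disj(neg(p), q)            # p ⊃ q  is  ~p ⋁ q
  def conj(p, q): return neg(disj(neg(p), neg(q)))  # p ⋀ q  is  ~(~p ⋁ ~q)

  formulas = {
      "❋1.2 Taut":  lambda p, q, r: impl(disj(p, p), p),
      "❋1.3 Add":   lambda p, q, r: impl(q, disj(p, q)),
      "❋1.4 Perm":  lambda p, q, r: impl(disj(p, q), disj(q, p)),
      "❋1.5 Assoc": lambda p, q, r: impl(disj(p, disj(q, r)), disj(q, disj(p, r))),
      "❋1.6 Sum":   lambda p, q, r: impl(impl(q, r), impl(disj(p, q), disj(p, r))),
      "❋2.11 excluded middle":   lambda p, q, r: disj(p, neg(p)),
      "❋3.24 non-contradiction": lambda p, q, r: neg(conj(p, neg(p))),
  }

  for name, formula in formulas.items():
      assert all(formula(p, q, r) for p, q, r in product((False, True), repeat=3))
      print(name, "is a tautology")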

Ladd-Franklin (1914): "principle of exclusion" and the "principle of exhaustion"

At about the same time (1912) that Russell and Whitehead were finishing the last volume of their Principia Mathematica and Russell was publishing his "The Problems of Philosophy", at least two logicians (Louis Couturat, Christine Ladd-Franklin) were asserting that the two "laws" (principles) of contradiction and excluded middle are necessary to specify "contradictories"; Ladd-Franklin renamed these the principles of exclusion and exhaustion. The following appears as a footnote on page 23 of Couturat 1914:

"As Mrs. LADD·FRANKLlN has truly remarked (BALDWIN, Dictionary of Philosophy and Psychology, article "Laws of Thought"), the principle of contradiction is not sufficient to define contradictories; the principle of excluded middle must be added which equally deserves the name of principle of contradiction. This is why Mrs. LADD-FRANKLIN proposes to call them respectively the principle of exclusion and the principle of exhaustion, inasmuch as, according to the first, two contradictory terms are exclusive (the one of the other); and, according to the second, they are exhaustive (of the universe of discourse)."

In other words, the creation of "contradictories" represents a dichotomy, i.e. the "splitting" of a universe of discourse into two classes (collections) that have the following two properties: they are (i) mutually exclusive and (ii) (collectively) exhaustive.[34] In other words, no one thing (drawn from the universe of discourse) can simultaneously be a member of both classes (law of non-contradiction), but [and] every single thing (in the universe of discourse) must be a member of one class or the other (law of excluded middle).
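
In modern set terms the dichotomy is immediate; a small sketch (the universe and the classes are invented for illustration):

  universe = {"sparrow", "bat", "bee", "ostrich", "whale"}
  mammals = {"bat", "whale"}
  non_mammals = universe - mammals             # the contradictory class

  assert mammals & non_mammals == set()        # principle of exclusion (non-contradiction)
  assert mammals | non_mammals == universe     # principle of exhaustion (excluded middle)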

Post (1921): The propositional calculus is consistent and complete

As part of his PhD thesis "Introduction to a general theory of elementary propositions" Emil Post proved "the system of elementary propositions of Principia [PM]" i.e. its "propositional calculus"[35] described by PM's first 8 "primitive propositions" to be consistent. The definition of "consistent" is this: that by means of the deductive "system" at hand (its stated axioms, laws, rules) it is impossible to derive (display) both a formula S and its contradictory ~S (i.e. its logical negation) (Nagel and Newman 1958:50). To demonstrate this formally, Post had to add a primitive proposition to the 8 primitive propositions of PM, a "rule" that specified the notion of "substitution" that was missing in the original PM of 1910.[36]

Given PM's tiny set of "primitive propositions" and the proof of their consistency, Post then proves that this system ("propositional calculus" of PM) is complete, meaning every possible truth table can be generated in the "system":

"...every truth system has a representation in the system of Principia while every complete system, that is one having all possible truth tables, is equivalent to it. ... We thus see that complete systems are equivalent to the system of Principia not only in the truth table development but also postulationally. As other systems are in a sense degenerate forms of complete systems we can conclude that no new logical systems are introduced."[37]

A minimum set of axioms? The matter of their independence

Then there is the matter of "independence" of the axioms. In his commentary before Post 1921, van Heijenoort states that Paul Bernays solved the matter in 1918 (but published in 1926) – the formula ❋1.5 Associative Principle: p ⋁ (q ⋁ r) ⊃ q ⋁ (p ⋁ r) can be proved with the other four. As to what system of "primitive-propositions" is the minimum, van Heijenoort states that the matter was "investigated by Zylinski (1925), Post himself (1941), and Wernick (1942)" but van Heijenoort does not answer the question.[38]

Model theory versus proof theory: Post's proof

Kleene (1967:33) observes that "logic" can be "founded" in two ways, first as a "model theory", or second by a formal "proof" or "axiomatic theory"; "the two formulations, that of model theory and that of proof theory, give equivalent results" (Kleene 1967:33). This foundational choice, and their equivalence, also applies to predicate logic (Kleene 1967:318).

In his introduction to Post 1921, van Heijenoort observes that both the "truth-table and the axiomatic approaches are clearly presented".[39] This matter of a proof of consistency both ways (by a model theory, by axiomatic proof theory) comes up in the more-congenial version of Post's consistency proof that can be found in Nagel and Newman 1958 in their chapter V "An Example of a Successful Absolute Proof of Consistency". In the main body of the text they use a model to achieve their consistency proof (they also state that the system is complete but do not offer a proof) (Nagel & Newman 1958:45–56). But their text promises the reader a proof that is axiomatic rather than relying on a model, and in the Appendix they deliver this proof based on the notions of a division of formulas into two classes K1 and K2 that are mutually exclusive and exhaustive (Nagel & Newman 1958:109–113).

Gödel (1930): The first-order predicate calculus is complete

The (restricted) "first-order predicate calculus" is the "system of logic" that adds to the propositional logic (cf Post, above) the notion of "subject-predicate" i.e. the subject x is drawn from a domain (universe) of discourse and the predicate is a logical function f(x): x as subject and f(x) as predicate (Kleene 1967:74). Although Gödel's proof involves the same notion of "completeness" as does the proof of Post, Gödel's proof is far more difficult; what follows is a discussion of the axiom set.

Completeness

Kurt Gödel in his 1930 doctoral dissertation "The completeness of the axioms of the functional calculus of logic" proved that in this "calculus" (i.e. restricted predicate logic with or without equality) every formula is "either refutable or satisfiable"[40] or, what amounts to the same thing, every valid formula is provable and therefore the logic is complete. Here is Gödel's definition of whether or not the "restricted functional calculus" is "complete":

"... whether it actually suffices for the derivation of every logico-mathematical proposition, or where, perhaps, it is conceivable that there are true propositions (which may be provable by means of other principles) that cannot be derived in the system under consideration."[41]

The first-order predicate calculus

This particular predicate calculus is "restricted to the first order". To the propositional calculus it adds two special symbols that symbolize the generalizations "for all" and "there exists (at least one)" that extend over the domain of discourse. The calculus requires only the first notion "for all", but typically includes both: (1) the notion "for all x" or "for every x" is symbolized in the literature variously as (x), ∀x, Πx etc., and (2) the notion of "there exists (at least one x)" variously symbolized as Ex, ∃x.

The restriction is that the generalization "for all" applies only to the variables (objects x, y, z etc. drawn from the domain of discourse) and not to functions, in other words the calculus will permit ∀xf(x) ("for all creatures x, x is a bird") but not ∀f∀x(f(x)) [but if "equality" is added to the calculus it will permit ∀f:f(x); see below under Tarski]. Example:

Let the predicate "function" f(x) be "x is a mammal", and the subject-domain (or universe of discourse) (cf Kleene 1967:84) be the category "bats":
The formula ∀xf(x) yields the truth value "truth" (read: "For all instances x of objects 'bats', 'x is a mammal'" is a truth, i.e. "All bats are mammals");
But if the instances of x are drawn from a domain "winged creatures" then ∀xf(x) yields the truth value "false" (i.e. "For all instances x of 'winged creatures', 'x is a mammal'" has a truth value of "falsity"; "Flying insects are mammals" is false);
However over the broad domain of discourse "all winged creatures" (e.g. "birds" + "flying insects" + "flying squirrels" + "bats") we can assert ∃xf(x) (read: "There exists at least one winged creature that is a mammal"); it yields a truth value of "truth" because the objects x can come from the category "bats" and perhaps "flying squirrels" (depending on how we define "winged"). But the formula yields "falsity" when the domain of discourse is restricted to "flying insects" or "birds" or both "insects" and "birds".
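
Over a finite domain these evaluations can be carried out mechanically; a Python sketch using the article's bats/winged-creatures example (the particular membership lists are illustrative only):

  def is_mammal(x):
      return x in {"bat", "flying squirrel", "whale", "dog"}

  bats = {"bat"}
  winged_creatures = {"sparrow", "bee", "bat", "flying squirrel"}

  # ∀x f(x): every member of the domain satisfies f
  assert all(is_mammal(x) for x in bats)                    # "All bats are mammals" is true
  assert not all(is_mammal(x) for x in winged_creatures)    # false over the wider domain

  # ∃x f(x): at least one member of the domain satisfies f
  assert any(is_mammal(x) for x in winged_creatures)        # true (the bat, the flying squirrel)
  assert not any(is_mammal(x) for x in {"sparrow", "bee"})  # false over birds and insects alone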

Kleene remarks that "the predicate calculus (without or with equality) fully accomplishes (for first order theories) what has been conceived to be the role of logic" (Kleene 1967:322).

A new axiom: Aristotle's dictum – "the maxim of all and none"

The first half of this axiom – "the maxim of all" – will appear as the first of two additional axioms in Gödel's axiom set. The "dictum of Aristotle" (dictum de omni et nullo) is sometimes called "the maxim of all and none" but is really two "maxims" that assert: "What is true of all (members of the domain) is true of some (members of the domain)", and "What is not true of all (members of the domain) is true of none (of the members of the domain)".

The "dictum" appears in Boole 1854 a couple places:

"It may be a question whether that formula of reasoning, which is called the dictum of Aristotle, de Omni et nullo, expresses a primary law of human reasoning or not; but it is no question that it expresses a general truth in Logic" (1854:4)

But later he seems to argue against it:[42]

"[Some principles of] general principle of an axiomatic nature, such as the "dictum of Aristotle:" Whatsoever is affirmed or denied of the genus may in the same sense be affirmed or denied of any species included under that genus. ... either state directly, but in an abstract form, the argument which they are supposed to elucidate, and, so stating that argument, affirm its validity; or involve in their expression technical terms which, after definition, conduct us again to the same point, viz. the abstract statement of the supposed allowable forms of inference."

But the first half of this "dictum" (dictum de omni) is taken up by Russell and Whitehead in PM, and by Hilbert in his version (1927) of the "first order predicate logic"; his system includes a principle that Hilbert calls "Aristotle's dictum":[43]

(x)f(x) → f(y)

This axiom also appears in the modern axiom set offered by Kleene (Kleene 1967:387), as his "∀-schema", one of two axioms (he calls them "postulates") required for the predicate calculus; the other is the "∃-schema" f(y) ⊃ ∃xf(x), which reasons from the particular f(y) to the existence of at least one subject x that satisfies the predicate f(x); both of these require adherence to a defined domain (universe) of discourse.

Gödel's restricted predicate calculus

To supplement the four (down from five; see Post) axioms of the propositional calculus, Gödel 1930 adds the dictum de omni as the first of two additional axioms. Both this "dictum" and the second axiom, he claims in a footnote, derive from Principia Mathematica. Indeed, PM includes both as

❋10.1 ⊦ ∀xf(x) ⊃ f(y) ["I.e. what is true in all cases is true in any one case"[44] ("Aristotle's dictum", rewritten in more-modern symbols)]
❋10.2 ⊦∀x(p ⋁ f(x)) ⊃ (p ⋁ ∀xf(x)) [rewritten in more-modern symbols]

The latter asserts that the logical sum (i.e. ⋁, OR) of a simple proposition p and a predicate ∀xf(x) implies the logical sum of each separately. But PM derives both of these from six primitive propositions of ❋9, which in the second edition of PM is discarded and replaced with four new "Pp" (primitive principles) of ❋8 (see in particular ❋8.2), and Hilbert derives the first from his "logical ε-axiom" in his 1927 and does not mention the second. How Hilbert and Gödel came to adopt these two as axioms is unclear.
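
Semantically, both ❋10.1 and ❋10.2 hold in every interpretation, which can be checked by brute force over a small finite domain (an illustrative Python sketch; the three-element domain is arbitrary):

  from itertools import product

  def implies(p, q):
      return (not p) or q

  domain = ["a", "b", "c"]

  # Every predicate f on the domain is a choice of truth value for each element.
  for truth_values in product((False, True), repeat=len(domain)):
      f = dict(zip(domain, truth_values))
      forall_f = all(f[x] for x in domain)

      # ❋10.1: what is true in all cases is true in any one case
      assert all(implies(forall_f, f[y]) for y in domain)

      # ❋10.2: ∀x(p ⋁ f(x)) implies p ⋁ ∀x f(x)
      for p in (False, True):
          assert implies(all(p or f[x] for x in domain), p or forall_f)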

Also required are two more "rules" of detachment ("modus ponens") applicable to predicates.

Tarski (1946): Leibniz's law

Alfred Tarski in his 1946 (2nd edition) "Introduction to Logic and to the Methodology of the Deductive Sciences" cites a number of what he deems "universal laws" of the sentential calculus, three "rules" of inference, and one fundamental law of identity (from which he derives four more laws). The traditional "laws of thought" are included in his long listing of "laws" and "rules". His treatment is, as the title of his book suggests, limited to the "Methodology of the Deductive Sciences".

Rationale: In his introduction (2nd edition) he observes that what began with an application of logic to mathematics has been widened to "the whole of human knowledge":

"[I want to present] a clear idea of that powerful trend of contemporary thought which is concentrated about modern logic. This trend arose originally from the somewhat limited task of stabilizing the foundations of mathematics. In its present phase, however, it has much wider aims. For it seeks to create a unified conceptual apparatus which would supply a common basis for the whole of human knowledge.".[45]

Law of identity (Leibniz's law, equality)

To add the notion of "equality" to the "propositional calculus" (this new notion is not to be confused with logical equivalence, symbolized by ↔, ⇄, "if and only if (iff)", "biconditional", etc.), Tarski (cf. pp. 54–57) symbolizes what he calls "Leibniz's law" with the symbol "=". This extends the domain (universe) of discourse and the types of functions to numbers and mathematical formulas (Kleene 1967:148ff, Tarski 1946:54ff).

In a nutshell: given that "x has every property that y has", we can write "x = y", and this formula will have a truth value of "truth" or "falsity". Tarski states Leibniz's law as follows:

  • I. Leibniz' Law: x = y, if, and only if, x has every property which y has, and y has every property which x has.

He then derives some other "laws" from this law:

  • II. Law of Reflexivity: Everything is equal to itself: x = x. [Proven at PM ❋13.15]
  • III. Law of Symmetry: If x = y, then y = x. [Proven at PM ❋13.16]
  • IV. Law of Transitivity: If x = y and y = z, then x = z. [Proven at PM ❋13.17]
  • V. If x = z and y = z, then x = y. [Proven at PM ❋13.172]

Principia Mathematica defines the notion of equality as follows (in modern symbols); note that the generalization "for all" extends over predicate-functions f( ):

❋13.01. x = y =def ∀f:(f(x) → f(y)) ("This definition states that x and y are to be called identical when every predicate function satisfied by x is satisfied by y"[46])

Hilbert 1927:467 adds only two axioms of equality: the first is x = x; the second is (x = y) → (f(x) → f(y)); the "for all f" is missing (or implied). Gödel 1930 defines equality similarly to PM ❋13.01. Kleene 1967 adopts the two from Hilbert 1927 plus two more (Kleene 1967:387).
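
As an informal illustration (not drawn from Tarski, Hilbert, or PM), a "Leibniz equality" can be modelled over a small, arbitrarily chosen domain and a finite stock of predicates: two elements count as equal when they satisfy exactly the same predicates. Laws II–V above then hold for the resulting relation:

  # Illustrative only: "Leibniz equality" over an arbitrary finite domain and
  # a finite stock of predicates (indiscernibility relative to those predicates).
  domain = [0, 1, 2, 2.0]
  predicates = [lambda x: x > 1, lambda x: x == 0, lambda x: int(x) % 2 == 0]

  def leibniz_equal(x, y) -> bool:
      return all(f(x) == f(y) for f in predicates)

  for x in domain:
      assert leibniz_equal(x, x)                         # II. reflexivity
      for y in domain:
          if leibniz_equal(x, y):
              assert leibniz_equal(y, x)                 # III. symmetry
          for z in domain:
              if leibniz_equal(x, y) and leibniz_equal(y, z):
                  assert leibniz_equal(x, z)             # IV. transitivity
              if leibniz_equal(x, z) and leibniz_equal(y, z):
                  assert leibniz_equal(x, y)             # V.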

George Spencer-Brown (1969): Laws of Form

George Spencer-Brown in his 1969 "Laws of Form" (LoF) begins by taking as given that "we cannot make an indication without drawing a distinction"; this presupposes the law of excluded middle. He then goes on to define two axioms, which describe how distinctions (a "boundary") and indications (a "call") work:

  • Axiom 1. The law of calling: The value of a call made again is the value of the call.
  • Axiom 2. The law of crossing: The value of a (boundary) crossing made again is not the value of the crossing.

These axioms bear a resemblance to the "law of identity" and the "law of non-contradiction", respectively. However, the law of identity is proven as a theorem (Theorem 4.5 in "Laws of Form") within the framework of LoF. In general, LoF can be reinterpreted as first-order logic, propositional logic, or second-order logic by assigning specific interpretations to the symbols and values of LoF.
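
As an informal illustration (not drawn from Spencer-Brown's text), the two axioms can be read as rewrite rules on bracket expressions, with "()" standing for the mark (a crossed boundary) and the empty string for the unmarked state:

  # Illustrative only: the two axioms of LoF's "primary arithmetic" rendered
  # as string-rewrite rules; "()" is the mark, "" is the unmarked state.
  def simplify(expr: str) -> str:
      while True:
          # Law of calling: the value of a call made again is the value of the call
          step = expr.replace("()()", "()")
          # Law of crossing: the value of a crossing made again is not the value of the crossing
          step = step.replace("(())", "")
          if step == expr:
              return expr
          expr = step

  assert simplify("()()") == "()"    # calling: two adjacent marks condense to one
  assert simplify("(())") == ""      # crossing: a mark within a mark cancels to the void
  assert simplify("((()))") == "()"  # a nested example reduces to a single mark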

Contemporary developments

All of the above "systems of logic" are considered to be "classical", meaning that propositions and predicate expressions are two-valued, having either the truth value "truth" or "falsity" but not both (Kleene 1967:8 and 83). While intuitionistic logic falls into the "classical" category, it objects to extending the "for all" operator to the Law of Excluded Middle; it allows instances of the "Law", but not its generalization to an infinite domain of discourse.
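
As a minimal illustration of the two-valued setting (not drawn from Kleene), both the law of non-contradiction and the law of excluded middle can be verified exhaustively over the two truth values:

  # Illustrative only: in two-valued semantics, ¬(A ∧ ¬A) and A ∨ ¬A hold
  # under every assignment of truth values to A.
  for A in (True, False):
      assert not (A and not A)   # law of non-contradiction
      assert A or not A          # law of excluded middle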

Intuitionistic logic

'Intuitionistic logic', sometimes more generally called constructive logic, refers to systems of symbolic logic that differ from the systems used for classical logic by more closely mirroring the notion of constructive proof. In particular, systems of intuitionistic logic do not assume the law of the excluded middle and double negation elimination, which are fundamental inference rules in classical logic.

Paraconsistent logic

'Paraconsistent logic' refers to so-called contradiction-tolerant logical systems in which a contradiction does not necessarily result in trivialism. In other words, the principle of explosion is not valid in such logics. Some (namely the dialetheists) argue that the law of non-contradiction is denied by dialetheic logic. They are motivated by certain paradoxes which seem to imply a limit of the law of non-contradiction, namely the liar paradox. In order to avoid a trivial logical system and still allow certain contradictions to be true, dialetheists will employ a paraconsistent logic of some kind.
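
As an informal illustration (a minimal sketch in the style of the "logic of paradox", one well-known paraconsistent system; the particular assignment below is an arbitrary choice), a third truth value for "both true and false" lets a contradiction be accepted without every sentence following from it:

  # Illustrative only: an LP-style three-valued assignment in which explosion fails.
  # Values: 'T', 'B' ("both"), 'F'; 'T' and 'B' count as designated (acceptable).
  ORDER = {'F': 0, 'B': 1, 'T': 2}
  DESIGNATED = {'T', 'B'}

  def neg(a):      # negation swaps T and F, leaves B fixed
      return {'T': 'F', 'B': 'B', 'F': 'T'}[a]

  def conj(a, b):  # conjunction takes the "lower" of the two values
      return a if ORDER[a] <= ORDER[b] else b

  A, B = 'B', 'F'  # A is a dialetheia (both true and false); B is plainly false
  print(conj(A, neg(A)) in DESIGNATED)  # True  -- the contradiction A ∧ ¬A is acceptable
  print(B in DESIGNATED)                # False -- yet an arbitrary B does not follow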

Three-valued logic

This section remains to be developed; cf. the article on three-valued logic and, for example, "A Ternary Arithmetic and Logic".[47]

(cf Kleene 1967:49): These "calculi" include the symbols □A, meaning "A is necessary", and ◊A, meaning "A is possible". Kleene states that:

"These notions enter in domains of thinking where there are understood to be two different kinds of "truth", one more universal or compelling than the other ... A zoologist might declare that it is impossible that salamanders or any other living creatures can survive fire; but possible (though untrue) that unicorns exist, and possible (though improbable) that abominable snowmen exist."

Fuzzy logic

'Fuzzy logic' is a form of many-valued logic; it deals with reasoning that is approximate rather than fixed and exact.
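
As an informal illustration (a minimal sketch using the common min/max/complement operators; the value 0.7 is an arbitrary choice), truth values in fuzzy logic range over the interval [0, 1], so the classical laws hold only to a degree:

  # Illustrative only: min/max/complement ("Zadeh") operators of fuzzy logic.
  def f_not(a): return 1.0 - a
  def f_and(a, b): return min(a, b)
  def f_or(a, b): return max(a, b)

  a = 0.7                     # a proposition that is "mostly true"
  print(f_or(a, f_not(a)))    # 0.7 -- excluded middle no longer yields 1
  print(f_and(a, f_not(a)))   # 0.3 -- non-contradiction no longer yields 0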

See also

References

  1. ^ "Laws of thought". The Cambridge Dictionary of Philosophy. Robert Audi, Editor, Cambridge: Cambridge UP. p. 489.
  2. ^ a b c Russell 1912:72,1997 edition.
  3. ^ a b c "Aristotle - Metaphysics - Book 4".
  4. ^ a b c d e Russell 1912:72, 1997 edition.
  5. ^ "Theaetetus, by Plato". The University of Adelaide Library. November 10, 2012. Archived from the original on 16 January 2014. Retrieved 14 January 2014.
  6. ^ Frits Staal (1988), Universals: Studies in Indian Logic and Linguistics, Chicago, pp. 109–28 (cf. Bull, Malcolm (1999), Seeing Things Hidden, Verso, p. 53, ISBN 1-85984-263-1)
  7. ^ Dasgupta, Surendranath (1991), A History of Indian Philosophy, Motilal Banarsidass, p. 110, ISBN 81-208-0415-5
  8. ^ "An Essay concerning Human Understanding". Retrieved January 14, 2014.
  9. ^ "The Project Gutenberg EBook of The World As Will And Idea (Vol. 2 of 3) by Arthur Schopenhauer". Project Gutenberg. June 27, 2012. Retrieved January 14, 2014.
  10. ^ cf Boole 1842:55–57. The modern definition of logical OR(x, y) in terms of logical AND &, and logical NOT ~ is: ~(~x & ~y). In Boolean algebra this is represented by: 1-((1-x)*(1-y)) = 1 – (1 – 1*x – y*1 + x*y) = x + y – x*y = x + y*(1-x), which is Boole's expression. The exclusive-OR can be checked in a similar manner.
  11. ^ William Hamilton, (Henry L. Mansel and John Veitch, ed.), 1860 Lectures on Metaphysics and Logic, in Two Volumes. Vol. II. Logic, Boston: Gould and Lincoln. Hamilton died in 1856, so this is an effort of his editors Mansel and Veitch. Most of the footnotes are additions and emendations by Mansel and Veitch – see the preface for background information.
  12. ^ Lecture II LOGIC-I. ITS DEFINITION -HISTORICAL NOTICES OF OPINIONS REGARDING ITS OBJECT AND DOMAIN-II. ITS UTILITY Hamilton 1860:17–18
  13. ^ Commentary by John Perry in Russell 1912, 1997 edition page ix
  14. ^ The "simple" type of implication, aka material implication, is the logical connective commonly symbolized by → or ⊃, e.g. p ⊃ q. As a connective it yields the truth value of "falsity" only when the truth value of statement p is "truth" when the truth value of statement q is "falsity"; in 1903 Russell is claiming that "A definition of implication is quite impossible" (Russell 1903:14). He will overcome this problem in PM with the simple definition of (p ⊃ q) =def (NOT-p OR q).
  15. ^ Russell 1912:66, 1997 edition
  16. ^ Russell 1912:67, 1997 edition
  17. ^ Russell 1912:70, 1997 edition
  18. ^ Russell 1912:69, 1997 edition
  19. ^ Russell 1912:70, 1997 edition
  20. ^ "(4) A true hypothesis in an implication may be dropped, and the consequent asserted. This is a principle incapable of formal symbolic statement ..." (Russell 1903:16)
  21. ^ Principia Mathematica 1962 edition:94
  22. ^ Russell 1912:71, 1997 edition
  23. ^ For example, Alfred Tarski (Tarski 1946:47) distinguishes modus ponens as one of three "rules of inference" or "rules of proof", and he asserts that these "must not be mistaken for logical laws". The two other such "rules" are that of "definition" and "substitution"; see the entry under Tarski.
  24. ^ Principia Mathematica 2nd edition (1927), pages 8 and 9.
  25. ^ Russell 1997:73 reprint of Russell 1912
  26. ^ Russell 1997:88–89 reprint of Russell 1912
  27. ^ Russell asserts they are "self-evident" a couple times, at Russell 1912, 1967:72
  28. ^ a b Russell 1912, 1967:73
  29. ^ "That is to say, if we wish to prove that something of which we have no direct experience exists, we must have among our premises the existence of one or more things of which we have direct experience"; Russell 1912, 1967:75
  30. ^ Russell 1912, 1967:80–81
  31. ^ Russell 1912, 1967:87,88
  32. ^ a b Russell 1912, 1967:93
  33. ^ In his 1944 Russell's mathematical logic, Gödel observes that "What is missing, above all, is a precise statement of the syntax of the formalism. Syntactical considerations are omitted even in cases where they are necessary for the cogency of the proofs ... The matter is especially doubtful for the rule of substitution and of replacing defined symbols by their definiens ... it is chiefly the rule of substitution which would have to be proved" (Gödel 1944:124)
  34. ^ Cf Nagel and Newman 1958:110; in their treatment they apply this dichotomy to the collection of "sentences" (formulas) generated by a logical system such as that used by Kurt Gödel in his paper "On Formally Undecidable Propositions of Principia Mathematica and Related Systems". They call the two classes K1 and K2 and define logical contradiction ~S as follows: "A formula having the form ~S is placed in [class] K2, if S is in K1; otherwise, it is placed in K1."
  35. ^ In the introductory comments to Post 1921 written by van Heijenoort page 264, van H observes that "The propositional calculus, carved out of the system of Principia Mathematica, is systematically studied in itself, as a well-defined fragment of logic".
  36. ^ In a footnote he stated: "This operation is not explicitly stated in Principia but is pointed out to be necessary by Russell (1919, p. 151)." Indeed, Russell 1919:151 states: "The legitimacy of substitutions of this kind has to be insured by means of a non-formal principle of inference", with a footnote that reads: "No such principle is enunciated in Principia Mathematica or in M. Nicod's article mentioned above. But this would seem to be an omission." (cf Russell 1919:151, referenced by Post 1921 in van Heijenoort 1967:267)
  37. ^ Post 1921 in van Heijenoort 1967:267
  38. ^ van Heijenoort's commentary before Post 1921 in van Heijenoort:264–265
  39. ^ van Heijenoort:264
  40. ^ cf introduction to Gödel 1930 by van Heijenoort 1967:582
  41. ^ Gödel 1930 in van Heijenoort 1967:582
  42. ^ cf Boole 1854:226, Chapter XV, "The Aristotelian Logic and Its Modern Extensions, Examined by the Method of this Treatise"
  43. ^ He derives this and a "principle of the excluded middle" ~((x)f(x))→(Ex)~f(x) from his "ε-axiom" cf Hilbert 1927 "The Foundations of Mathematics", cf van Heijenoort 1967:466
  44. ^ 1962 edition of PM 2nd edition 1927:139
  45. ^ Tarski 1946:ix, 1995 edition
  46. ^ cf PM ❋13 IDENTITY, "Summary of ❋13" PM 1927 edition 1962:168
  47. ^ http://www.iaeng.org/publication/WCE2010/WCE2010_pp193-196.pdf [bare URL PDF]
  • Emil Post, 1921, Introduction to a general theory of elementary propositions with commentary by van Heijenoort, pages 264ff
  • David Hilbert, 1927, The foundations of mathematics with commentary by van Heijenoort, pages 464ff
  • Kurt Gödel, 1930a, The completeness of the axioms of the functional calculus of logic with commentary by van Heijenoort, pages 592ff.
  • Alfred North Whitehead, Bertrand Russell. Principia Mathematica, 3 vols, Cambridge University Press, 1910, 1912, and 1913. Second edition, 1925 (Vol. 1), 1927 (Vols 2, 3). Abridged as Principia Mathematica to *56 (2nd edition), Cambridge University Press, 1962, no LCCN or ISBN