Deductive Logic/Quick Reference

This quick reference summarizes several symbolic and deductive logic rules.

Truth Operators
Throughout these notes, T indicates "True" and F indicates "False".

Negation: ¬ p ("not")

Conjunction: p•q ("and", "intersection") – also p ∧ q (T only when p=T and q=T)

Disjunction: p ∨ q ("or", "union") – (F only when p=F and q=F)

Conditional: p → q ("if p then q") – Also p ⊃ q (antecedent → consequent)

Biconditional: p ↔ q ("equivalence", "p if and only if q") – also p ≡ q (T only when p and q have the same value)

These operators are precisely defined by the following truth table:

p  q | ¬p  p•q  p∨q  p→q  p↔q
T  T |  F   T    T    T    T
T  F |  F   F    T    F    F
F  T |  T   F    T    T    F
F  F |  T   F    F    T    T
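The operator definitions above can be checked mechanically. A minimal Python sketch that enumerates all truth assignments and prints the table (operator names and layout are illustrative, not from the notes):

```python
from itertools import product

# Each operator expressed as a function of Boolean inputs.
OPERATORS = {
    "¬p":    lambda p, q: not p,
    "p•q":   lambda p, q: p and q,
    "p∨q":   lambda p, q: p or q,
    "p→q":   lambda p, q: (not p) or q,  # material conditional
    "p↔q":   lambda p, q: p == q,
}

def truth_table():
    """Print a T/F row for every combination of p and q."""
    print("p  q  " + "  ".join(OPERATORS))
    for p, q in product([True, False], repeat=2):
        cells = [p, q] + [f(p, q) for f in OPERATORS.values()]
        print("  ".join("T" if v else "F" for v in cells))

truth_table()
```

Note that the conditional is encoded as ¬p ∨ q, which matches the Implication equivalence given later in these notes.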

Inference Rules:
Modus Ponens (mp): ‘p → q’ and ‘p’ imply ‘q’ (Example: If the day is Saturday, then we wash the car. Today is Saturday. Therefore we wash the car. Common fallacy: affirming the consequent. In this example: We washed the car, therefore today is Saturday. This is a fallacy because we might wash the car on other days.)

Modus Tollens (mt): ‘p → q’ and ‘¬ q’ imply ‘¬ p’ (Example: If the day is Saturday, we wash the car. We did not wash the car today. Therefore the day is not Saturday. Common fallacy: denying the antecedent. In this example: It is not Saturday therefore we did not wash the car. This is a fallacy because we might wash the car on other days. )

Conjunction (conj): ‘p’ and ‘q’ imply ‘p•q’

Addition (add): ‘p’ implies ‘p∨q’

Simplification (simp): ‘p•q’ implies ‘p’

Elimination (elim): ‘p∨q’ and ‘¬ p’ imply ‘q’

Transitivity (trans): ‘p → q’ and ‘q → r’ imply ‘p → r’

Constructive dilemma (cd): ‘p → q’, ‘r → s’, and ‘p∨r’ imply ‘q∨s’

Destructive dilemma (dd): ‘p → q’, ‘r → s’, and ‘¬ q ∨ ¬ s’ imply ‘¬ p ∨¬ r’
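Each inference rule can be verified by brute force: a rule is valid when its conclusion is true in every truth assignment that makes all of its premises true. A small Python sketch (the helper names are my own):

```python
from itertools import product

def implies(p, q):
    # Material conditional: false only when p is true and q is false.
    return (not p) or q

def valid(premises, conclusion, n_vars):
    """True iff the conclusion holds in every row where all premises hold."""
    for values in product([True, False], repeat=n_vars):
        if all(prem(*values) for prem in premises) and not conclusion(*values):
            return False
    return True

# Modus ponens: p → q, p  imply  q
assert valid([lambda p, q: implies(p, q), lambda p, q: p],
             lambda p, q: q, 2)

# Affirming the consequent is NOT valid: p → q, q  do not imply  p
assert not valid([lambda p, q: implies(p, q), lambda p, q: q],
                 lambda p, q: p, 2)

# Destructive dilemma: p → q, r → s, ¬q ∨ ¬s  imply  ¬p ∨ ¬r
assert valid([lambda p, q, r, s: implies(p, q),
              lambda p, q, r, s: implies(r, s),
              lambda p, q, r, s: (not q) or (not s)],
             lambda p, q, r, s: (not p) or (not r), 4)
```

The same `valid` helper can check any of the rules above by encoding the premises and conclusion as lambdas.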

Equivalent Schemata
Double Negative: ‘p’ ≡ ‘¬ (¬ p)’

Association: ‘p∨(q∨r)’ ≡ ‘(p∨q)∨r’, also: ‘p•(q•r)’ ≡ ‘(p•q)•r’

Commutation: ‘p∨q’ ≡ ‘q∨p’, also: ‘p•q’ ≡ ‘q•p’

De Morgan’s laws: ‘¬ (p∨q)’ ≡ ‘¬ p • ¬ q’, also: ‘¬ (p•q)’ ≡ ‘¬ p ∨¬ q’

Distribution: ‘p•(q∨r)’ ≡ ‘(p•q) ∨ (p•r)’, also: ‘p ∨ (q•r)’ ≡ ‘(p∨q) • (p∨r)’

Transposition (contrapositive): ‘p → q’ ≡ ‘¬ q → ¬ p’ ("twist it around and not it and it will hold")

Implication: ‘p → q’ ≡ ‘¬ p∨q’, also: ‘p → q’ ≡ ‘¬(p• ¬ q)’

Idempotence: ‘p’ ≡ ‘p•p’, also: ‘p’ ≡ ‘p∨p’

Exportation: ‘(p•q) → r’ ≡ ‘p → (q → r)’

Biconditional: ‘p≡q’ ≡ ‘(p → q) • (q → p)’, also: ‘p≡q’ ≡ ‘(p • q) ∨(¬ p • ¬ q)’
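Two schemata are equivalent exactly when they agree on every truth assignment, so each equivalence above can be confirmed by enumeration. A short Python sketch (the `equivalent` helper is illustrative):

```python
from itertools import product

def equivalent(f, g, n_vars=2):
    """True iff f and g agree on every assignment of truth values."""
    return all(f(*v) == g(*v)
               for v in product([True, False], repeat=n_vars))

# De Morgan: ¬(p ∨ q) ≡ ¬p • ¬q
assert equivalent(lambda p, q: not (p or q),
                  lambda p, q: (not p) and (not q))

# Implication: ¬p ∨ q ≡ ¬(p • ¬q)
assert equivalent(lambda p, q: (not p) or q,
                  lambda p, q: not (p and not q))

# Exportation: (p • q) → r ≡ p → (q → r)
assert equivalent(lambda p, q, r: (not (p and q)) or r,
                  lambda p, q, r: (not p) or ((not q) or r),
                  n_vars=3)
```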

Tautologies:
Law of the excluded middle: ‘p ∨¬ p’

Categorical Sentence Schemata
These categorical sentence schemata are used to analyze syllogisms formed by the major premise, the minor premise, and the conclusion such as:

Major premise: All men are mortal
Minor premise: Socrates is a man
Conclusion: Socrates is mortal

Moods:
The premises and conclusion of a syllogism can be any of the four following types, traditionally known by the letters AEIO:

A – All S are P (All – Universal Affirmative, "All women are mortal")
E – No S are P (Exclusion – Universal Negative, "No women are immortal")
I – Some S are P (Inclusion – Particular Affirmative, "Some women are philosophers")
O – Some S are not P (Other? – Particular Negative, "Some women are not philosophers")

Figures:
These syllogisms can be arranged in any of these four figures, where M is the middle term, S is the subject of the conclusion (the minor term), and P is the predicate of the conclusion (the major term):

Figure 1: M–P, S–M ∴ S–P
Figure 2: P–M, S–M ∴ S–P
Figure 3: M–P, M–S ∴ S–P
Figure 4: P–M, M–S ∴ S–P

Valid Categorical Schemata
All the valid categorical schemata, combining the four moods and figures, are listed here; all other forms are fallacies. The fifteen unconditionally valid forms (with their traditional mnemonic names) are:

Figure 1: AAA (Barbara), EAE (Celarent), AII (Darii), EIO (Ferio)
Figure 2: EAE (Cesare), AEE (Camestres), EIO (Festino), AOO (Baroco)
Figure 3: IAI (Disamis), AII (Datisi), OAO (Bocardo), EIO (Ferison)
Figure 4: AEE (Camenes), IAI (Dimaris), EIO (Fresison)

Payoff Matrix:
You are free to choose between alternative actions A1 or A2. These may represent accepting a job offer or rejecting the job offer, for example. The future state is unknown, but it is either S1 or S2. This may represent your liking the job or your not liking the job. The payoff (i.e., result, utility) of choosing alternative A1 if future state S1 occurs is indicated in the corresponding cell (+100 in this case). The matrix can be of any size. You have a preference for the alternative outcomes and can rate your preferences, for example: 1) the best, 2) good, 3) regretful, and 4) you really hate it. These ratings are indicated in parentheses in each cell:

        S1          S2
A1   +100 (1)   –250 (4)
A2   –200 (3)    +50 (2)

Decision Options:
Maximax gain (the most optimistic) – Choose the alternative that allows the largest maximum possible gain. This is alternative A1 in this case, hoping for the payoff of +100.

Maximin gain – Choose the alternative that allows the largest minimum possible gain. Choose A2, because its minimum possible payoff (–200) is larger than A1's minimum (–250).

Minimin loss: – Choose the alternative that allows the smallest minimum possible loss. Choose A2 because the loss of –200 is less than the loss of –250.

Minimax loss (the most pessimistic) – Choose the alternative that allows the smallest maximum possible loss. Choose A2 because the worst that can happen is a loss of –200.

Minimax Regret: Choose the alternative that minimizes the maximum possible regret, where the regret of a choice under a given state is the difference between the best payoff attainable under that state and the payoff actually received. Here, under S1 the best payoff is +100 (A1 regrets 0, A2 regrets 300), and under S2 the best payoff is +50 (A1 regrets 300, A2 regrets 0). The maximum regret is 300 for both alternatives, so this rule is indifferent between them.

Hurwicz’s rule: Choose the alternative that has the maximum optimism-weighted value. Let’s say you are 60% sure of an optimistic outcome. A1 = .6(100) + .4(–250) = –40. A2 = .6(50) + .4(–200) = –50. So choose A1.

Laplace Utility Rule: Choose the alternative that has the maximum Laplace utility (the same as the next rule with equally likely outcomes assumed). Treating each outcome as equally likely, the Laplace utility for A1 is (100 – 250) / 2 = –75. For A2 it is (–200 + 50) / 2 = –75, so it is a tie.

Expected Utility Rule: Choose the alternative that has the maximum expected utility. Assume you believe S1 will occur with probability 60%. The expected utility of A1 is .6(100) + .4(–250) = –40. For A2 it is .6(–200) + .4(50) = –100, so choose A1.
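These decision rules are straightforward to compute. A minimal Python sketch, assuming the payoff matrix implied by the worked numbers above (A1: +100 under S1, –250 under S2; A2: –200 under S1, +50 under S2):

```python
# Rows are alternatives; columns are the payoffs under states S1 and S2.
payoffs = {"A1": [100, -250], "A2": [-200, 50]}

def maximax(p):
    # Most optimistic: pick the alternative with the largest best case.
    return max(p, key=lambda a: max(p[a]))

def maximin(p):
    # Most cautious: pick the alternative with the largest worst case.
    return max(p, key=lambda a: min(p[a]))

def hurwicz(p, alpha):
    # alpha weights the best outcome, (1 - alpha) the worst.
    return max(p, key=lambda a: alpha * max(p[a]) + (1 - alpha) * min(p[a]))

def expected_utility(p, probs):
    # probs[i] is the believed probability of state i.
    return max(p, key=lambda a: sum(x * pr for x, pr in zip(p[a], probs)))

print(maximax(payoffs))                       # A1 (hoping for +100)
print(maximin(payoffs))                       # A2 (worst case -200 beats -250)
print(hurwicz(payoffs, 0.6))                  # A1 (-40 vs -50)
print(expected_utility(payoffs, [0.6, 0.4]))  # A1 (-40 vs -100)
```

Each rule reduces to a one-line `max` over the alternatives once the scoring function is chosen, which makes the differences between the rules easy to compare.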

Mill's canons of induction
John Stuart Mill (May 20, 1806 – May 8, 1873), an English philosopher and political economist, proposed these tests for causality:

a must cause Z, because:


 * 1) whenever I see Z, I also find a (the method of agreement);
 * 2) if I remove a, Z goes away (the method of difference);
 * 3) whether present or absent, a always accompanies Z (the joint method of agreement and difference);
 * 4) if I change a, Z changes correspondingly (the method of concomitant variations);
 * 5) if I remove the dominating effect of b on Z, the residual Z variations correlate with a (the method of residues).

In these notes the symbol ∴ is used to mean "therefore".

Method of Agreement
If several different experiments yield the same result and these experiments have only one factor (antecedent) in common, then that factor is the cause of the observed result.

Symbolically: abc → Z   cde → Z    cfg → Z ∴ c → Z

or: abc → ZYX cde → ZW   cfg → ZVUT ∴ c → Z
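The method of agreement amounts to intersecting the antecedent sets of all experiments that produced Z. A minimal Python sketch using the schema above:

```python
# Antecedent factors of three experiments, each of which produced effect Z.
experiments = [{"a", "b", "c"}, {"c", "d", "e"}, {"c", "f", "g"}]

# The method of agreement singles out whatever factor all experiments share.
common = set.intersection(*experiments)
print(common)  # {'c'}: by the method of agreement, c is the candidate cause
```

The weaknesses discussed below show up directly in this formulation: if the experiments coincidentally share a second factor, or if different factors independently cause Z, the intersection misleads.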

The method of agreement is theoretically valid but pragmatically very weak, for two reasons:


 * almost never can we be certain that the various experiments share only one common factor. We can increase confidence in the technique by making the experiments as different as possible (except of course for the common antecedent), thereby minimizing the risk of an unidentified common variable; and
 * some effects can result from two independent causes, yet this method assumes that only one cause is operant. If two or more independent causes produce the same experimental result, the method of agreement will incorrectly attribute the cause to any antecedent that coincidentally is present in both experiments. Sometimes the effect must be defined more specifically and exclusively, so that different causes cannot produce the same effect.

Method of Difference
If a result is obtained when a certain factor is present but not when it is absent, then that factor is causal.

Symbolically: abc → Z   ab → ¬ Z ∴ c → Z

or: abc → ZYXW ab → YXW ∴ c → Z

The method of difference is scientifically superior to the method of agreement: it is much more feasible to make two experiments as similar as possible (except for one variable) than to make them as different as possible (except for one variable).

The method of difference has a crucial pitfall: no two experiments can ever be identical in all respects except for the one under investigation. Thus one risks attributing the effect to the wrong factor. Consequently, almost never is the method of difference viable with only two experiments; instead one should do many replicate measurements.

The method of difference is the basis of a powerful experimental technique: the controlled experiment. In a controlled experiment, one repeats an experiment many times, randomly including or excluding the possibly causal variable ‘c ’. Results are then separated into two groups – experiment and control, or c-variable present and c-variable absent – and statistically compared. A statistically significant difference between the two groups establishes that the variable c does affect the results, unless:


 * the randomization was not truly random, permitting some other variable to exert an influence; or
 * some other variable causes both c and the result.
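The statistical comparison at the heart of a controlled experiment can be sketched with a permutation test: if c has no effect, relabeling the measurements at random should produce differences as large as the observed one fairly often. A Python sketch with hypothetical measurements (none of the numbers come from these notes):

```python
import random

def permutation_test(treated, control, n_perm=10_000, seed=0):
    """One-sided p-value: how often does a random relabeling produce a
    mean difference at least as large as the observed one?"""
    rng = random.Random(seed)
    observed = sum(treated) / len(treated) - sum(control) / len(control)
    pooled = list(treated) + list(control)
    k = len(treated)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = sum(pooled[:k]) / k - sum(pooled[k:]) / (len(pooled) - k)
        if diff >= observed:
            hits += 1
    return hits / n_perm

# Hypothetical replicate measurements with variable c present vs absent.
with_c    = [5.1, 5.4, 5.0, 5.6, 5.3]
without_c = [4.2, 4.5, 4.1, 4.4, 4.3]
p = permutation_test(with_c, without_c)
print(p)  # a small p-value suggests c does affect the result
```

This directly implements the logic of the controlled experiment: the randomization step is what licenses attributing a significant difference to c rather than to some unnoticed confounding variable, subject to the two caveats above.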