Propositional Entailment

In the previous lesson we learned about the semantic concept of argument validity.

In this lesson we will study the syntactic concept of proof, which is the syntactic counterpart of validity.

Syntax is, to repeat myself a little, the study of a purely formal and almost "thoughtless" system of calculation. It is like a computer computing operations on 0s and 1s, with no sense of the interpretation or significance of the operations.

If our syntax is designed well, then the syntax will actually deliver meaningful results, once we (the readers) interpret them. By analogy, imagine that this is like a movie playing on your computer. The computer only "thinks" that it is computing 0s and 1s, but you the viewer get to see the result. And you are able to interpret what you are seeing as a meaningful representation.

Here we will try to design purely syntactic rules of proof, so that when we interpret the results, a proof can only result in valid arguments.

An Example, Conjunction Elimination
A proof will end up being a premise set, followed by a sequence of sentences, such that every sentence in the sequence is either a premise (i.e. some sentence which we agree to accept without question, similar to an axiom) or is "justified" to be in the sequence because it follows from an "inference rule".

Consider for example
 * $$\{ P\land Q\}\quad \langle P\rangle$$

In this example, the set of premises is $$\{P\land Q\}$$. The proof is the sequence $$\langle P\rangle$$, which in this simple example consists of just a single sentence.

What this proof shows is the validity of the argument $$\{P\land Q\}\vDash P$$. The reader is free to check, using the methods of the previous lesson, that this argument is valid.
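The check from the previous lesson can also be automated. Below is a minimal Python sketch (not part of the course's formal apparatus) that enumerates every truth assignment and confirms that whenever the premise $$P\land Q$$ is true, the conclusion $$P$$ is true as well.

```python
from itertools import product

# Check the argument {P ∧ Q} ⊨ P by brute force: in every row of the
# truth table where the premise holds, the conclusion must hold too.
valid = all(
    P  # the conclusion
    for P, Q in product([True, False], repeat=2)
    if P and Q  # keep only rows where the premise P ∧ Q is true
)
print(valid)  # True: the argument is valid
```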

In general, if $$\alpha$$ and $$\beta$$ are sentences, and $$\alpha\land\beta$$ occurs anywhere in a proof, then one is justified in writing $$\alpha$$ later in the proof.

To summarize this rule, we write


 * $$\alpha\land\beta\therefore \alpha$$

This rule is called "conjunction elimination".

Here is a longer example proof.


 * $$\{(P\land Q)\land R\} \quad \langle(P\land Q)\land R, P\land Q, Q, R, Q\land R\rangle$$

The last step of the proof uses "conjunction introduction", which is the rule


 * $$\alpha,\beta\therefore\alpha\land\beta$$

Because the premise set of this proof is $$\{ (P\land Q)\land R\}$$, and the conclusion is $$Q\land R$$, then this is meant to be a proof of the validity of $$\{(P\land Q)\land R\}\vDash Q\land R$$.

Pay special attention to the fact that the sentence $$R$$ follows from $$(P\land Q)\land R$$, by conjunction elimination. This is true even though they are separated by several other sentences in the proof.

When we apply conjunction elimination to $$(P\land Q)\land R$$, we will call this sentence the "reference".

What the proof above demonstrates, is that an application of an inference rule does not need to be immediately next to its reference.

Give a proof from the premise set $$\text{Premise}=\{(P\land Q)\land R\}$$ to the conclusion $$\text{Conclusion} = Q$$.

Good Rules
Notice that $$\alpha\land\beta \therefore \alpha$$ is a good rule of inference, because it "preserves truth". That is to say,


 * $$ \alpha\land\beta\therefore\alpha$$ is a good rule, because $$\alpha\land\beta\vDash \alpha$$.

Also $$\alpha\land\beta\therefore \beta$$ is another good rule, for the same basic reason.

However, $$\alpha\lor\beta\therefore \alpha$$ would be a bad rule. We could program it in a computer, but then the computer would tell us that the argument $$\{ \alpha\lor\beta\}\vDash\alpha$$ is valid, when it is not.

Therefore whenever we use an inference rule, we must check that it is good.
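Such a check can be mechanized. The sketch below is purely illustrative (the function name `is_good_rule` is ours, not standard notation): it tests whether a rule preserves truth by brute force over all truth assignments.

```python
from itertools import product

def is_good_rule(premises, conclusion, n_atoms):
    """A rule is good iff every truth assignment satisfying all the
    premises also satisfies the conclusion (illustrative helper)."""
    return all(
        conclusion(*row)
        for row in product([True, False], repeat=n_atoms)
        if all(p(*row) for p in premises)
    )

# Conjunction elimination, α ∧ β ∴ α, is good.
print(is_good_rule([lambda a, b: a and b], lambda a, b: a, 2))  # True

# α ∨ β ∴ α is bad: a = False, b = True satisfies the premise
# but not the conclusion.
print(is_good_rule([lambda a, b: a or b], lambda a, b: a, 2))  # False
```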

Decide which of the following are good rules.

1. $$\alpha\to \beta\therefore \alpha$$

2. $$\alpha\to\beta\therefore\beta$$

3. $$\neg(\neg \alpha)\therefore \alpha$$

4. $$\neg \alpha\therefore \alpha$$

5. $$(\alpha\to\beta) \land \alpha\therefore\beta$$

(Hint: Precisely 2 are good.)

Proofs with Multiple References
For the inference rule $$\alpha\land\beta\therefore\alpha$$, we only need to refer to one conjunction, somewhere earlier in the proof, in order to justify writing $$\alpha$$.

We will call $$\alpha\land\beta$$ the "reference" of the conjunction elimination rule.

Some rules will require multiple references. For example, consider the rule,
 * "If $$\alpha\lor\beta$$ and $$\neg \beta$$ therefore $$\alpha$$."

Symbolically, this is the rule
 * $$\alpha\lor\beta,\neg\beta\therefore \alpha$$

This rule has two references, separated by a comma, before the $$\therefore$$ symbol.

Notice that this is a "good rule", because $$\{\alpha\lor\beta,\neg\beta\}\vDash\alpha$$.
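This entailment can be confirmed by searching for counterexamples, as in the following brief Python sketch.

```python
from itertools import product

# Look for an assignment making both premises α ∨ β and ¬β true
# while the conclusion α is false.  Finding none shows the rule
# preserves truth.
counterexamples = [
    (a, b)
    for a, b in product([True, False], repeat=2)
    if (a or b) and (not b) and not a
]
print(counterexamples)  # []: no counterexample exists
```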

Below is a listing of all of the proof rules that we will use in this course, which use a fixed number of references.

In essence, every connective has an elimination rule and an introduction rule.

Provide a proof which demonstrates the truth of each of the following claims.

1. $$\{(P\land Q)\lor \neg R, R\}\vdash P$$

2. $$\{P\leftrightarrow Q, P\}\vdash Q$$

3. $$\{\neg (P\lor \neg Q)\lor R, P\}\vdash R$$

Proof by Cases
We now consider "proof by cases". This inference rule will permit not just one or two references, but any number of them.

A classic mathematical example of proof by cases shows that "If x is any real number, then $$x^2$$ is nonnegative."


 * Since x is a real number, it is either positive, negative, or zero.
 * Case 1: Suppose x is positive. Then $$x^2 = x\cdot x$$ is a product of two positive real numbers.
 * Since the product of positive numbers is positive, then in this case, $$x^2>0$$, so $$x^2$$ is nonnegative.
 * Case 2: Suppose next that x is negative. Then $$x^2 = x\cdot x$$ is a product of two negative real numbers.
 * Since the product of negative numbers is positive, then again, $$x^2 > 0$$, so $$ x^2$$ is nonnegative.
 * Case 3: Suppose finally that x is zero. Then $$x^2 = 0\cdot 0=0$$.
 * This shows that $$ x^2\ge 0$$, and again $$x^2$$ is nonnegative.
 * Since this exhausts the three cases mentioned above, $$x^2$$ is nonnegative.

An abstraction of this argument style is
 * $$\alpha_1\lor\alpha_2\lor\dots\lor\alpha_n$$
 * $$\alpha_1\to\beta$$
 * $$\alpha_2\to\beta$$
 * $$\vdots$$
 * $$\alpha_n\to\beta$$
 * Therefore $$\beta$$.
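That this pattern preserves truth can be checked exhaustively for a fixed number of cases. The following Python sketch does so for n = 2, i.e. it verifies $$\{\alpha\lor\beta,\ \alpha\to\gamma,\ \beta\to\gamma\}\vDash\gamma$$.

```python
from itertools import product

# Truth function for the material conditional.
implies = lambda p, q: (not p) or q

# In every row where α ∨ β, α → γ, and β → γ all hold,
# γ must hold as well.
valid = all(
    g
    for a, b, g in product([True, False], repeat=3)
    if (a or b) and implies(a, g) and implies(b, g)
)
print(valid)  # True: proof by cases preserves truth for n = 2
```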

Demonstrate
 * $$\{P\lor Q, P\to (R\land S), Q\to (R\land S)\}\vdash S$$

Conditional Proof
Suppose that we would like to prove "If p is an integer divisible by 4, then p is divisible by 2." A proof might look like the following.


 * Assume (for conditional proof) that p is an integer divisible by 4.
 * Then $$p=4k$$ by definition of divisibility.
 * Then $$p=2(2k)$$ by algebra.
 * Then p is divisible by 2, by definition of divisibility.
 * Therefore the conditional proof is finished, and we are justified in saying that "If p is divisible by 4, then it is divisible by 2."

By a very rough abstraction, this has the form,


 * Assume $$\alpha$$.
 * Then $$\beta_1$$.
 * Then $$\beta_2$$.
 * Therefore $$\gamma$$.
 * So, by conditional proof, $$\alpha\to\gamma$$.

We will model a similar sort of proof technique in order to help us prove conditional sentences.

In particular, to infer the conditional $$\alpha\to\gamma$$, we may give a proof with premise set $$\{\alpha\}$$ and conclusion $$\gamma$$.

Here is a demonstration, using "conditional proof", to prove $$\{P\land Q\}\vDash R \to (P\land R)$$.


 * $$\{P\land Q\}\quad \langle P, \{R\} \langle P\land R\rangle, R\to (P\land R)\rangle $$

Because long proofs can get harder to read with all these symbols, it helps to write the proof in a table. Subproofs are indented from the main proof.

We annotate the inference rule for conditional proof as, for any nonnegative integer n,


 * $$\{\alpha\}\langle \beta_1,\beta_2,\dots,\beta_n,\gamma\rangle\therefore \alpha\to\gamma$$

This indicates that the existence of a proof with premise $$\alpha$$ and conclusion $$\gamma$$ justifies the inference to the conditional sentence, $$\alpha\to\gamma$$.
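The semantic claims involved in the demonstration above can be double-checked by brute force. This illustrative Python sketch verifies both the subproof's entailment, $$\{P\land Q, R\}\vDash P\land R$$, and the conditional it licenses, $$\{P\land Q\}\vDash R\to(P\land R)$$.

```python
from itertools import product

implies = lambda p, q: (not p) or q

rows = list(product([True, False], repeat=3))  # assignments to P, Q, R

# The subproof shows {P ∧ Q, R} ⊨ P ∧ R.
subproof = all((p and r) for p, q, r in rows if (p and q) and r)

# Conditional proof then licenses {P ∧ Q} ⊨ R → (P ∧ R).
conditional = all(implies(r, p and r) for p, q, r in rows if (p and q))

print(subproof, conditional)  # True True
```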

Demonstrate the truth of each of the following statements.

1. $$\{P\lor Q, P\to R, Q\to S\}\vdash R\lor S$$

2. $$\{P\}\vdash Q\to(P\to Q)$$

3. $$\{((P\lor Q)\to R)\to S, P\to R, Q\to R\}\vdash S$$

Subproofs for Contradiction
We have already seen one proof by contradiction, which shows that $$\sqrt 2$$ is irrational.

A proof by contradiction has a rough abstraction as


 * Suppose $$\alpha$$ for contradiction.
 * Then $$\beta_1$$.
 * Then $$\beta_2$$.
 * Therefore $$\gamma\land\neg\gamma$$, which is a contradiction.
 * Because the assumption $$\alpha$$ led to a contradiction, therefore $$\neg\alpha$$.

We will therefore adopt the following inference rule, which models this argument pattern.


 * $$\{\alpha\}\langle \beta_1,\beta_2,\dots,\beta_n,\gamma\land\neg\gamma\rangle\therefore \neg\alpha$$

Below is a demonstration of how to use this. In the demonstration we will prove that $$\{P\to Q\}\vDash (\neg Q)\to(\neg P)$$.

Note that this proof involves multiple embedded subproofs.

Notice that subproofs get their own "sub-numbering". Every time the indentation level increases, so does the number of decimals which indicate a sub-numbering.
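The entailment behind this demonstration, $$\{P\to Q\}\vDash(\neg Q)\to(\neg P)$$, can itself be checked semantically, as in this short Python sketch.

```python
from itertools import product

implies = lambda a, b: (not a) or b

# In every row of the truth table where P → Q holds,
# the contrapositive (¬Q) → (¬P) must hold too.
valid = all(
    implies(not q, not p)
    for p, q in product([True, False], repeat=2)
    if implies(p, q)
)
print(valid)  # True
```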

Demonstrate the truth of each of the following statements.

1. $$ \{P\}\vdash \neg P\to P$$

2. $$\{P,\neg P\} \vdash Q$$

Proofs without Premises
Note that, because of our subproof rules, it is now possible to produce proofs which have no premises. Take for example,

This proof contained no premises, although its subproofs did.

Because it had no premises, it is therefore a demonstration of the entailment
 * $$\emptyset\vDash (P\land Q)\to(P\to Q)$$
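Since the premise set is empty, the conclusion must be true under every assignment. A quick exhaustive check in Python (illustrative only) confirms this.

```python
from itertools import product

implies = lambda a, b: (not a) or b

# (P ∧ Q) → (P → Q) should be true in all four rows of the
# truth table, i.e. it is a tautology.
tautology = all(
    implies(p and q, implies(p, q))
    for p, q in product([True, False], repeat=2)
)
print(tautology)  # True
```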

Demonstrate that each of the following is a tautology, specifically by providing a proof of it from no premises.

1. $$P\lor \neg P$$

2. $$P\to P$$

3. $$\neg(\neg P)\leftrightarrow P$$

4. $$\neg(P\land Q)\leftrightarrow (\neg P)\lor(\neg Q)$$

Restricted Subproof Rules
Let's now see how things can go wrong if we use these subproofs in an unrestricted way.

First, the reader should check that the entailment
 * $$\emptyset\vDash P$$

is not correct.
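A one-line check makes the failure explicit:

```python
# ∅ ⊨ P would require P to come out true under every truth assignment.
# The assignment P = False is a counterexample, so the entailment fails.
is_tautology = all(v for v in [True, False])
print(is_tautology)  # False
```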

Next, we use the subproof rules to give a proof, with no premises, and with conclusion P.

The problem with this proof is the fact that, at line (2.), we made a reference to line (1.1.) which was not in the main proof.

The whole point of a subproof is to make assumptions (premises) which apply throughout the subproof, but not beyond that. Therefore when we "export" a line from a subproof, to a line outside that subproof, we risk making an invalid inference.

Therefore we need a restricted version of our inference rules. The restriction is simply that a sentence in a subproof cannot be a reference for any inference rule, after the subproof has concluded.

For each proof table below, decide if it displays a proof or not.

If it does, write what it proves. That is to say, identify the premise set and conclusion, in the notation $$\Gamma\vdash C$$.

If it does not, explain why.

1.

2.

3.