Outline
- Inferences and Proofs
- Resolution
- Horn Clauses and Definite Clauses
Introduction
- On Monday, we were continuing our discussion of Logical Agents.
- We were focusing initially on propositional logic as a framework for representing our knowledge base as this
is about the simplest logic in town.
- We were starting to talk about algorithms by which a logical agent could infer things about its environment
(which in turn would hopefully inform the choice of its actions).
- We looked at examples of how we could see if `KB models alpha` holds in the Wumpus World setting using
model checking.
- Checking all models of a collection of propositional logic statements involves checking all truth assignments
to the variables. If one has `n` variables, there are `2^n` assignments to check, meaning this algorithm is slow.
- So we said we wanted to determine entailment via theorem proving -- applying rules of inference directly to the sentences in our knowledge base to construct a proof of the desired sentence without consulting models.
- To do this we need three notions: logical equivalence, validity, and satisfiability. We discussed the first two last day (`a equiv b` iff `a models b` and `b models a`; a statement is valid if it is true in all models. Validity is related to entailment by the deduction theorem.)
- The next slide gives some examples of known logical equivalences for propositional logic.
- After looking at it for a second, we proceed to discuss satisfiability.
Common Logical Equivalences
- `(alpha ^^ beta) equiv (beta ^^ alpha)` commutativity of `^^`
- `(alpha vv beta) equiv (beta vv alpha)` commutativity of `vv`
- `((alpha ^^ beta) ^^ gamma) equiv (alpha ^^ (beta ^^ gamma))` associativity of `^^`
- `((alpha vv beta) vv gamma) equiv (alpha vv (beta vv gamma))` associativity of `vv`
- `neg (neg alpha) equiv alpha` double-negation elimination
- `alpha => beta equiv neg beta => neg alpha` contraposition
- `alpha => beta equiv neg alpha vv beta` implication elimination
- `alpha <=> beta equiv (alpha => beta) ^^ (beta => alpha)` biconditional elimination
- `neg (alpha ^^ beta) equiv (neg alpha vv neg beta)` De Morgan
- `neg (alpha vv beta) equiv (neg alpha ^^ neg beta)` De Morgan
- `alpha ^^ (beta vv gamma) equiv (alpha ^^ beta) vv (alpha ^^ gamma)` distributivity of `^^` over `vv`
- `alpha vv (beta ^^ gamma) equiv (alpha vv beta) ^^ (alpha vv gamma)` distributivity of `vv` over `^^`
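- As an aside, each of these equivalences can be verified mechanically by enumerating truth assignments. A minimal Python sketch (the helper `equivalent` is our own name, not from the text), checking one De Morgan law:

```python
from itertools import product

def equivalent(f, g, num_vars):
    """True iff f and g agree on every truth assignment."""
    return all(f(*vals) == g(*vals)
               for vals in product([False, True], repeat=num_vars))

# De Morgan: neg(alpha ^^ beta) equiv (neg alpha vv neg beta)
lhs = lambda a, b: not (a and b)
rhs = lambda a, b: (not a) or (not b)
print(equivalent(lhs, rhs, 2))  # True
```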
Satisfiability
- A sentence is satisfiable if it is true in some model.
- For example, `(A ^^ B)` is satisfiable in the model `{A = true, B = true}`.
- For propositional logic, the problem of determining whether a given formula is satisfiable is often written as SAT.
It is the canonical example of an NP-complete problem.
- By the deduction theorem, we had `alpha models beta` iff `alpha => beta` is valid (aka, a tautology).
- The latter is equivalent to `neg alpha vv beta` and, by De Morgan's rule, to `neg (alpha ^^ neg beta)`.
- So we can prove `alpha => beta` by checking the unsatisfiability of `alpha ^^ neg beta`. This process is sometimes called
reductio ad absurdum.
- It is also called proof by refutation or proof by contradiction.
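- To make the reduction concrete, here is a brute-force sketch (exponential, as noted above; the function names are our own) that decides entailment by testing `alpha ^^ neg beta` for unsatisfiability:

```python
from itertools import product

def satisfiable(f, num_vars):
    """Brute-force SAT check: try all 2^n truth assignments."""
    return any(f(*vals) for vals in product([False, True], repeat=num_vars))

def entails(alpha, beta, num_vars):
    """alpha models beta iff alpha ^^ neg beta is unsatisfiable."""
    return not satisfiable(lambda *v: alpha(*v) and not beta(*v), num_vars)

# Example: (A ^^ B) models A.
print(entails(lambda a, b: a and b, lambda a, b: a, 2))  # True
```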
Inferences and Proofs
- We now look at inference rules that can be applied to derive a proof -- a chain of conclusions that leads to the establishment
of some statement following from the knowledge base.
- The best known such rule is called Modus Ponens and it is written as:
`frac(alpha, quad alpha => beta)(beta)`
It states that if we have already derived the two statements `alpha` and `alpha => beta`, then we are allowed to derive `beta`.
- For example, if we know (WumpusAhead `^^` WumpusAlive) and (WumpusAhead `^^` WumpusAlive) `=>` Shoot, then we are allowed to infer Shoot.
- Another useful rule is And-Elimination:
`frac(alpha ^^ beta)(alpha)`
- By considering all the possible truth values for `alpha` and `beta`, one can show both Modus Ponens and And-Elimination are sound.
- One can also make any of our logical equivalences into a rule of inference. For example, biconditional elimination yields the following two inferences:
`frac(alpha <=> beta)((alpha => beta) ^^ (beta => alpha))`
and
`frac((alpha => beta) ^^ (beta => alpha))(alpha <=> beta)`
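As an illustration, one could represent sentences as nested tuples and implement these two rules as functions from a KB to the set of newly derivable sentences. A toy sketch (the tuple encoding is our own choice, not from the text):

```python
def modus_ponens(kb):
    """From alpha and ('=>', alpha, beta) both in kb, derive beta."""
    return {s[2] for s in kb
            if isinstance(s, tuple) and s[0] == '=>' and s[1] in kb}

def and_elimination(kb):
    """From ('^^', alpha, beta) in kb, derive alpha and beta."""
    derived = set()
    for s in kb:
        if isinstance(s, tuple) and s[0] == '^^':
            derived.update([s[1], s[2]])
    return derived

kb = {('^^', 'WumpusAhead', 'WumpusAlive'),
      ('=>', ('^^', 'WumpusAhead', 'WumpusAlive'), 'Shoot')}
print(modus_ponens(kb))     # {'Shoot'}
print(and_elimination(kb))  # {'WumpusAhead', 'WumpusAlive'}
```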
An Example
- Recall from last day we derived `neg P_(1,2)` from the following knowledge base statements `R_i` using model checking (a brute-force version of that check is sketched at the end of this list).
- There is no pit in [1,1]:
`R_1 : neg P_(1,1)`
- A square is breezy iff there is a pit in a neighboring square. This has to be stated for each square; for now we include just the relevant squares:
`R_2 : B_(1,1) <=> (P_(1,2) vv P_(2,1))`
`R_3 : B_(2,1) <=> (P_(1,1) vv P_(2,2) vv P_(3,1))`
- The preceding sentences are true in all wumpus worlds. Now we include the breeze percept for the first two squares visited in the specific world the agent is in:
`R_4: neg B_(1,1)`
`R_5: B_(2,1)`
- Alternatively, we can come up with a (propositional) proof of `neg P_(1,2)` from the above knowledge base using rules of inference.
- A proof consists of a sequence of lines `R_1, ..., R_m` where each `R_i` is either from the knowledge base or follows from some earlier `R_j` and `R_k` (with
`j < i` and `k < i`) by some rule of (propositional) inference.
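- Here is the promised model-checking sketch (our own encoding): it enumerates all `2^7` assignments to the seven symbols and confirms that `neg P_(1,2)` holds in every model of `R_1 ^^ ... ^^ R_5`:

```python
from itertools import product

def kb(p11, p12, p21, p22, p31, b11, b21):
    """Conjunction of R_1 through R_5."""
    return (not p11 and                       # R_1
            (b11 == (p12 or p21)) and         # R_2
            (b21 == (p11 or p22 or p31)) and  # R_3
            not b11 and                       # R_4
            b21)                              # R_5

# KB models neg P_(1,2): P_(1,2) is false in every model of the KB.
print(all(not vals[1]   # vals[1] is p12
          for vals in product([False, True], repeat=7) if kb(*vals)))
```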
The proof of `neg P_(1,2)`
- Applying Biconditional-elimination to `R_2` gives:
`R_6: (B_(1,1) => (P_(1,2) vv P_(2,1))) ^^ ((P_(1,2) vv P_(2,1)) => B_(1,1))`
- Applying And-Elimination to `R_6` gives:
`R_7: ((P_(1,2) vv P_(2,1)) => B_(1,1))`
- Applying contraposition to `R_7` gives:
`R_8: (neg B_(1,1) => neg (P_(1,2) vv P_(2,1)))`
- Applying Modus Ponens to `R_8` and `R_4` gives:
`R_9: neg (P_(1,2) vv P_(2,1))`
- Applying De Morgan's rule to `R_9` yields:
`R_10: neg P_(1,2) ^^ neg P_(2,1)`
- The desired statement follows from `R_10` by And-Elimination.
Search and Proofs
- Each line in the proof we just gave can be checked mechanically (say by a program) to determine that it is either from the KB or
follows from preceding lines by one of our rules of inference.
- We found this proof by looking in our textbook, but we could imagine framing proof search as a search problem and using one of
our algorithms from earlier in the semester to find proofs (a toy version is sketched at the end of this section):
- Initial State: the initial knowledge base
- Actions: The set of actions consists of all inference rules applied to all the sentences that match the top half of an inference rule
- Result: The result of an action is to add the bottom half of the inference rule to our KB
- Goal: The goal is a KB that contains the statement we are trying to prove
- This gives an alternative to model checking which is often faster, as the proof can ignore irrelevant propositions no matter how many there are.
- Notice this algorithm and propositional logic illustrate a property of knowledge bases known as monotonicity: the set of entailed sentences can only increase as information is added to the knowledge base.
- There are actually logics known as nonmonotonic logics (extensively studied in AI as connected with human reasoning) for which this property fails.
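Here is the toy proof-search sketch promised above (reusing the tuple encoding from the earlier Modus Ponens sketch). It applies both rules everywhere until the goal appears or the KB stops growing; since these two rules only ever derive subformulas of existing sentences, the loop terminates:

```python
def modus_ponens(kb):
    return {s[2] for s in kb
            if isinstance(s, tuple) and s[0] == '=>' and s[1] in kb}

def and_elimination(kb):
    derived = set()
    for s in kb:
        if isinstance(s, tuple) and s[0] == '^^':
            derived.update([s[1], s[2]])
    return derived

def prove(kb, goal, rules=(modus_ponens, and_elimination)):
    """Forward search: apply every rule everywhere until the goal is
    derived or no rule adds anything new (a fixpoint)."""
    kb = set(kb)
    while goal not in kb:
        new = set.union(*(rule(kb) for rule in rules)) - kb
        if not new:
            return False  # fixpoint reached without deriving the goal
        kb |= new
    return True

kb0 = {('^^', 'WumpusAhead', 'WumpusAlive'),
       ('=>', ('^^', 'WumpusAhead', 'WumpusAlive'), 'Shoot')}
print(prove(kb0, 'Shoot'))  # True
```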
Proof by Resolution
- So far we have looked at inference algorithms that are sound; we have not considered their completeness.
- We next consider a proof system that has a single inference rule, resolution, which is known to be sound and complete.
- In its simplest form, the rule says that if we know `(A vv B)` and we know `neg B`, then we can conclude `A`.
- We can generalize this to:
`frac(L_1 vv ... vv L_m, quad neg L_m vv L_1' vv ... vv L_n')(L_1 vv ... vv L_(m-1) vv L_1' vv ... vv L_n')`
In the above, `L_i` and `L_j'` are literals.
- The bottom line of this inference is called the resolvent.
- If there are no `L_j'`'s (so the second premise is a single literal), it is called unit resolution. (There are algorithms that work better for this case.)
- You often see clauses written as sets `{L_1, ..., L_m}` rather than as disjunctions using `vv`.
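A sketch of the general rule on the set representation (literals are strings, with a leading `-` marking negation; both conventions are our own encoding, not from the text):

```python
def negate(lit):
    """Map 'P' to '-P' and '-P' back to 'P'."""
    return lit[1:] if lit.startswith('-') else '-' + lit

def resolvents(c1, c2):
    """All clauses obtained by resolving c1 and c2 on one literal.
    Clauses are frozensets of string literals."""
    results = set()
    for lit in c1:
        if negate(lit) in c2:
            results.add(frozenset((c1 - {lit}) | (c2 - {negate(lit)})))
    return results

# Unit resolution: resolving {A, B} with {-B} leaves {A}.
print(resolvents(frozenset({'A', 'B'}), frozenset({'-B'})))
```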
Conjunctive Normal Form
- Notice resolution works on an AND of ORs of literals.
- Any propositional formula can be rewritten into an equivalent one that is an AND of ORs of literals. This equivalent form is called Conjunctive Normal Form (CNF).
- There are a couple of different ways to see one can do this rewriting: one is to apply biconditional and implication elimination, push negations inward with De Morgan's laws, and then distribute `vv` over `^^`.
- Another way to see this is to look at the false rows of your formula's truth table. For each false row, form the OR of literals that is false exactly on that row (`neg P` if `P` is true in the row, `P` if it is false). We then AND these disjuncts together.
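The false-row construction is easy to mechanize. A sketch (formulas as Python functions, clauses as frozensets of string literals; both are our own encoding):

```python
from itertools import product

def cnf_from_truth_table(f, names):
    """CNF by the false-row method: one clause per falsifying row,
    built so the clause is false exactly on that row."""
    clauses = []
    for vals in product([False, True], repeat=len(names)):
        if not f(*vals):
            clauses.append(frozenset(('-' + n) if v else n
                                     for n, v in zip(names, vals)))
    return clauses

# alpha <=> beta is false on rows (T, F) and (F, T):
print(cnf_from_truth_table(lambda a, b: a == b, ['A', 'B']))
# -> [frozenset({'A', '-B'}), frozenset({'-A', 'B'})]
```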
An Algorithm For Resolution
- Inference procedures based on resolution work by trying to produce a refutation.
- We take `KB ^^ neg alpha` and first convert it to a CNF formula.
- We then cycle over pairs of clauses, and for each pair containing complementary literals we resolve the clauses.
- We keep repeating until no new clauses can be derived.
- If we ever resolve and get the empty clause (equivalent to false), then we know `KB` entails `alpha`.
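Putting the pieces together, a naive version of this refutation loop (same clause encoding as the earlier sketches; real implementations use far better bookkeeping):

```python
def negate(lit):
    return lit[1:] if lit.startswith('-') else '-' + lit

def resolvents(c1, c2):
    return {frozenset((c1 - {l}) | (c2 - {negate(l)}))
            for l in c1 if negate(l) in c2}

def resolution_entails(clauses):
    """clauses: CNF of KB ^^ neg alpha, as a set of frozensets.
    Returns True iff the empty clause is derivable."""
    clauses = set(clauses)
    while True:
        new = set()
        for c1 in clauses:
            for c2 in clauses:
                for r in resolvents(c1, c2):
                    if not r:        # the empty clause: contradiction
                        return True
                    new.add(r)
        if new <= clauses:           # fixpoint: nothing new derivable
            return False
        clauses |= new

# KB = {P, P => Q}; alpha = Q; so refute {P}, {-P, Q}, {-Q}.
print(resolution_entails({frozenset({'P'}),
                          frozenset({'-P', 'Q'}),
                          frozenset({'-Q'})}))  # True
```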
Completeness
- To show resolution is complete we need to show that if a set of clauses is unsatisfiable then the resolution closure contains the empty clause.
- To prove this we actually show the contrapositive: If the resolution closure (RC) does not contain the empty clause then there is a satisfying assignment.
- Suppose the clauses mention the variables `P_1, ..., P_k`. The following procedure gives us the satisfying assignment:
For i from 1 to k:
- If some clause in RC(S) contains the literal `neg P_i` and all its other literals are false under the assignment chosen for `P_1, ..., P_(i-1)`, then assign false to `P_i`.
- Otherwise, assign true to `P_i`
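A sketch of this model-extraction procedure on the set-of-clauses encoding used above (again our own representation):

```python
def extract_model(rc, variables):
    """rc: a resolution closure without the empty clause, as a set of
    frozensets of string literals. Returns a satisfying assignment,
    processing variables in the given order P_1, ..., P_k."""
    model = {}
    def is_false(lit):
        """Is this literal already false under the partial assignment?"""
        name = lit.lstrip('-')
        if name not in model:
            return False
        return model[name] if lit.startswith('-') else not model[name]
    for p in variables:
        neg_p = '-' + p
        # Assign false only when some clause contains neg P_i and all
        # of its other literals are already false.
        forced = any(neg_p in c and
                     all(is_false(l) for l in c if l != neg_p)
                     for c in rc)
        model[p] = not forced
    return model

rc = {frozenset({'P', 'Q'}), frozenset({'-P', 'Q'}), frozenset({'Q'})}
print(extract_model(rc, ['P', 'Q']))  # {'P': True, 'Q': True}
```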