Introduction:
My main goal in writing this was simply to experiment with machine learning through inductive logic programming. One of the most expressive and human-readable representations for learned hypotheses is a set of production rules (if-then rules). Rules can be derived from other representations (e.g., decision trees) or learned directly; here we focus on the direct method. An important property of rule-based learning algorithms is that they can learn sets of first-order rules, which have far greater representational power than the propositional rules derivable from decision trees [1]. Learning first-order rules can also be viewed as automatically inducing Prolog programs from examples.
Suppose we want a computer to learn to determine whether a molecule is "sticky" or not (stickiness is a property I invented) based on its atomic structure. One approach would be to create a training set of molecules represented as first-order ground facts.
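To make this concrete, here is a minimal sketch of how such a training set might be represented in Python. The predicate names (`atom`, `bond`, `sticky`) and the molecule contents are invented for illustration, not taken from a real dataset:

```python
# A hypothetical training set: each molecule is described by ground facts,
# stored as tuples of (predicate, arguments...). All names are illustrative.
facts = {
    ("atom", "m1", "a1", "carbon"),
    ("atom", "m1", "a2", "oxygen"),
    ("bond", "m1", "a1", "a2"),
    ("atom", "m2", "a1", "hydrogen"),
    ("atom", "m2", "a2", "hydrogen"),
    ("bond", "m2", "a1", "a2"),
}

# Target predicate: positive and negative examples of the invented
# "sticky" property.
positive = {("sticky", "m1")}
negative = {("sticky", "m2")}
```

The learner's task is then to find rules over these predicates that entail every fact in `positive` and none in `negative`.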
Idea: iteratively build rules that each cover some positive examples but no negative ones. Once a rule has been found, remove the positive examples it covers and continue.

To build a rule: add literals to the body until no negative examples are covered; if a literal introduces new variables, extend the example tuples with all possible constant bindings.
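The covering loop above can be sketched in Python. This is a simplified, propositional version (literals are attribute=value tests rather than first-order literals with variables), and it assumes the positives and negatives are separable by conjunctions of such tests; all function and attribute names are my own:

```python
def covers(rule, example):
    """A rule is a set of (attribute, value) literals; it covers an
    example when every literal matches the example's attributes."""
    return all(example.get(attr) == val for attr, val in rule)

def learn_rule(positives, negatives, attributes):
    """Greedily add literals until the rule covers no negative example."""
    rule = set()
    uncovered_neg = list(negatives)
    while uncovered_neg:
        candidates = [
            (a, v)
            for a in attributes
            for v in {e[a] for e in positives}
            if (a, v) not in rule
        ]
        if not candidates:
            break  # no literal left to add
        # Pick the literal keeping the most positives and fewest negatives.
        best = max(
            candidates,
            key=lambda lit: sum(covers(rule | {lit}, e) for e in positives)
            - sum(covers(rule | {lit}, e) for e in uncovered_neg),
        )
        rule.add(best)
        uncovered_neg = [e for e in uncovered_neg if covers(rule, e)]
    return rule

def sequential_covering(positives, negatives, attributes):
    """Learn rules one at a time, removing the positives each rule covers."""
    rules = []
    remaining = list(positives)
    while remaining:
        rule = learn_rule(remaining, negatives, attributes)
        covered = [e for e in remaining if covers(rule, e)]
        if not covered:
            break  # safeguard: stop if the new rule makes no progress
        rules.append(rule)
        remaining = [e for e in remaining if e not in covered]
    return rules
```

A full first-order learner would additionally allow literals with variables and, as noted above, extend the example tuples with all possible constant bindings whenever a new variable is introduced.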
Propositional versus First-Order Logic: Propositional logic does not include variables and therefore cannot express general relations between attribute values.
Example 1: in Propositional logic, you can write:
IF (Father1=Bob) ^ (Name2=Bob) ^ (Female1=True) THEN Daughter1,2=True.
This rule applies only to a specific family!
Example 2: In First-Order logic, you can write:
IF Father(y,x) ^ Female(y) THEN Daughter(x,y)
This rule (which you cannot write in Propositional Logic) applies to any family!
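To see why the variables matter, the first-order rule can be checked directly against ground facts from any family. A minimal Python sketch (the family members' names are invented for illustration):

```python
# Ground facts for two unrelated families; the same variable-based rule
# applies to both. Names (bob, mary, tom, alice) are made up.
father = {("mary", "bob"), ("alice", "tom")}  # Father(y, x) pairs
female = {"mary", "alice"}                    # Female(y)

def daughter(x, y):
    # Daughter(x, y) <- Father(y, x) ^ Female(y)
    return (y, x) in father and y in female
```

A propositional rule like the one in Example 1 would instead need to be rewritten for every new family, since it mentions the specific constant Bob rather than variables.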