
Multi-rule-set Decision-making System for a Genetic Algorithm Learning Environment Based on Statistical Composition of the Training Data

B. Gerges, C. F. Eick and F.G. Attia

University of Houston

Abstract

The present study is based on a machine learning system for classification tasks called DELVAUX [KE95]. The system learns PROSPECTOR-style Bayesian rules from sets of examples using a genetic algorithm approach. The population consists of rule-sets whose offspring are generated through the exchange of rules; fitter rule-sets produce offspring with higher probability. The fitness of a rule-set is measured by the percentage of training examples it classifies correctly.

This paper describes the development of a Smart Rule Generation (SRG) mechanism, which generates rules based on the statistical properties of training data sets, making the rule-generation process less random. The SRG was developed as a stand-alone system that can be added to any learning environment that handles PROSPECTOR-style rules. The SRG mechanism analyzes the data to extract possible relationships between the classes and attributes of the given data set. The SRG was used as DELVAUX's rule-generation vehicle: the initial populations, as well as the rules needed for the system's mutation operator, are generated using the SRG mechanism. As a result, DELVAUX's learning performance showed significant improvement, since the rules were no longer generated randomly.

1. Introduction

Genetic algorithms have emerged as one of the most promising approaches in machine learning, and classifier systems are the most common genetic-based machine learning systems. This study is based on the DELVAUX classifier system, which uses a multilevel credit assignment mechanism that rewards/punishes rule-sets as well as individual rules. Two fitness evaluation algorithms evaluate the rule-sets, assigning a fitness value to each rule-set, while a Reward and Punishment Mechanism (RPM) algorithm assigns a "score" to each rule. The basic DELVAUX system was built by Daw Jong [Jon93] in his M.S. thesis using the Pittsburgh approach. In the second version, Emma Toto [Tot93] added the rule-level reward/punishment mechanism (RPM), which employs a bucket-brigade-style credit assignment algorithm for Bayesian rules [ET94], to the original DELVAUX system. The original DELVAUX search process generates new rules randomly. The focus of this paper is a Smart Rule Generation (SRG) mechanism [Ger96] that creates new rules more "intelligently" based on statistical properties of the training data. As part of the DELVAUX system, the SRG mechanism is used to generate the initial population more intelligently, helping the system start the search process in areas of relatively higher fitness, speeding up the search, and improving the overall outcome. In addition to smart initial population generation, the new mechanism is integrated into the mutation operator to make new rule generation smarter and less random. We will also demonstrate that the SRG mechanism is useful for other purposes, namely for the debugging of knowledge-based systems that use Bayesian rules.

The paper is organized as follows. Section 2…

2. The DELVAUX System

DELVAUX is an inductive learning system for classification tasks that learns PROSPECTOR-style Bayesian classification rules [DHN76] from sets of examples. A genetic algorithm approach is used for learning Bayesian rule-sets, in which a population consists of rule-sets that generate offspring through the exchange of rules, allowing fitter rule-sets to produce offspring with a higher probability. Additionally, new rule-sets are generated by applying a mutation operator that replaces a rule in a rule-set with a randomly generated rule.

Bayesian Rules and Odds Multipliers

The PROSPECTOR-style rules are of the form:

If E then H (to degree S, N)

S and N are odds multipliers that measure the sufficiency and necessity of the evidence E for the hypothesis H. PROSPECTOR rules work with odds instead of probabilities, using the following conversion from probabilities to odds [DHN76]:

O(H) = P(H) / (1 – P(H))

The DELVAUX learning environment supports two kinds of rules: is-high and close-to rules. The 'is-high' rule provides positive/negative evidence for a certain decision based on how high an attribute value is relative to the values of the other examples, while the 'close-to' rule provides positive/negative evidence for a certain decision based on the closeness of an attribute value to a certain constant.
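To make the role of the odds multipliers concrete, the following minimal Python sketch (our own illustration; the function names are not part of DELVAUX) updates the odds of a hypothesis H by S when the evidence E is present and by N when it is absent, using the conversion above:

def prob_to_odds(p):
    # O(H) = P(H) / (1 - P(H))
    return p / (1.0 - p)

def odds_to_prob(o):
    # Inverse conversion: P(H) = O(H) / (1 + O(H))
    return o / (1.0 + o)

def apply_rule(prior_prob, s, n, evidence_present):
    # Multiply the prior odds of H by S when the evidence E is observed
    # and by N when it is known to be absent, then convert back.
    odds = prob_to_odds(prior_prob)
    odds *= s if evidence_present else n
    return odds_to_prob(odds)

# Example: evidence with S = 4.0 observed for a hypothesis with prior
# probability 0.3 raises its probability to about 0.63.
print(apply_rule(0.3, s=4.0, n=0.2, evidence_present=True))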

The Basic Learning System

Assuming that we have a significant number of examples, typically provided by a domain expert, we want to discover interesting relationships between the various attributes in the example space. The set of available examples is partitioned into two sets: the training set and the test set. To evaluate the quality of the rules learned by our machine learning techniques, the rules are applied to the examples of the test set.

Conversion

If A1, A2, …, An are the attributes involved in a certain classification task, each example is represented as a vector (a1, a2, …, an, d), where a1 through an denote real values for the attributes A1 through An, respectively, and d is the class to which the object characterized by (a1, a2, …, an) belongs. The following transformation φi converts the domain of the i-th attribute into values in the interval [0, 1] [Jon93]:

Let ai be the value of attribute Ai in a given example. Then we define

φi(ai) = (number of examples whose value for Ai is less than ai) / (number of examples whose value for Ai is different from ai)

for i = 1, 2, …, n.

Let tr = (a1, a2, …, an, d) be an example of the training set. Then we define the following transformation Θ over the space of training examples:

Θ(tr) = (φ1(a1), φ2(a2), …, φn(an), d)

Let Tr = {tr1, tr2, …, trm} be the training set and Ts = {ts1, ts2, …, tsn} be the test set for a certain application. Then the normalized training example set is obtained as follows:

Θ(Tr) = {Θ(tr1), …, Θ(trm)}

The normalized set shows how high a particular value for an attribute is with respect to the other values of the same attribute in the training set.
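For illustration (our own sketch, not DELVAUX code), the φ transformation for a single attribute can be computed from the training values as follows:

def normalize_attribute(column):
    # column: list of real values for attribute Ai over all training examples.
    # Returns phi_i(a) for each value a, i.e. the fraction of examples whose
    # value for Ai is smaller than a, among those with a different value.
    normalized = []
    for a in column:
        less = sum(1 for x in column if x < a)
        different = sum(1 for x in column if x != a)
        normalized.append(less / different if different > 0 else 0.0)
    return normalized

# Example: the smallest value maps to 0.0 and the largest to 1.0.
print(normalize_attribute([2.0, 5.0, 3.5, 9.0]))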

Genetic Algorithms for Learning Bayesian Rules

In the basic DELVAUX system, the initial population is created randomly: candidate rule-sets are generated at random, evaluated with respect to the training set, and the best three of them are inserted into the initial population. The whole procedure is repeated until the population is complete.

The next generation is produced using one-point crossover and mutation operators, which create new rule-sets by exchanging rules between rule-sets in the population or by replacing single rules in a rule-set with randomly generated rules. The algorithm for generating the next generation is shown in Figure 1:

Next-generation(t):
    INSERT best-members-of(G(t)) into G(t+1);
    DO {
        Select two members ru1 and ru2 with the roulette wheel method;
        PERFORM {
            If should-make-crossover
                Then crossover(ru1, ru2, ru'1, ru'2)
                Else { ru'1 := ru1; ru'2 := ru2 }   "ru1 and ru2 are copied"
            If should-have-mutation then ru'1 := mutate(ru'1);
            If should-have-mutation then ru'2 := mutate(ru'2);
            If should-have-inversion then ru'1 := invert(ru'1);
            If should-have-inversion then ru'2 := invert(ru'2);
            Insert ru'1 into G(t+1);
            Insert ru'2 into G(t+1);
        }
    } UNTIL population of G(t+1) is complete;

Figure 1: Algorithm for generating the next generation

A Fitness Function for the Rule-Sets

Our approach uses a fitness function h’ that mainly takes into consideration the performance of a rule-set for the training set, but also applies small penalties depending on the cardinality of the rule-set, penalizing very large rule-sets that do not show a significantly better performance than smaller rule-sets.

Let R-Set be the rule-set to be evaluated, r be the number of rules in it, p be the number of attributes in the training set, c be the number of classes of the classification task, and h(R-Set) be the percentage of training examples classified correctly by R-Set. The fitness of a rule-set R-Set is then measured by the function h', defined as:

h’(R-Set) = h(R-Set) - size-penalty(r, p*c)

p*c is used to approximate the complexity of a classification problem, assuming that larger rule-sets are needed for more complex problems, thus decreasing the size penalty for more complex classification problems.
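A minimal Python sketch of this kind of size-penalized fitness follows; the exact penalty term used by DELVAUX is not reproduced above, so the r / (p * c) form below is our assumption of its general shape, not the system's actual formula:

def rule_set_fitness(correct_percentage, r, p, c):
    # correct_percentage: h(R-Set), percentage of training examples the
    # rule-set classifies correctly.
    # r: number of rules, p: number of attributes, c: number of classes.
    # Assumed penalty: grows with the rule-set size r and shrinks as the
    # approximate problem complexity p*c grows.
    size_penalty = r / (p * c)
    return correct_percentage - size_penalty

# Example: 40 rules on a 10-attribute, 4-class problem with 92% accuracy.
print(rule_set_fitness(92.0, r=40, p=10, c=4))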

Genetic Operators

The basic system uses three genetic operators: crossover, mutation, and inversion. Each population is a set of rule-sets, and each rule-set's chromosomal representation is an ordered sequence of rules r1, …, rn.

In order to generate the offspring for the next generation, parents are selected based on their fitness values h', using a selection method such as the roulette wheel. The offspring are generated by exchanging rules between rule-sets. The inversion operator changes the order of rules in the chromosomal representation of a rule-set, affecting the composition of future generations. The mutation operator selects a rule in a rule-set and replaces it with a newly generated rule r'.
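Roulette wheel selection, which is referred to but not spelled out above, can be sketched in Python as follows (an illustrative implementation, not DELVAUX's own):

import random

def roulette_wheel_select(population, fitnesses):
    # Select one member with probability proportional to its fitness h'.
    # Assumes non-negative fitness values.
    total = sum(fitnesses)
    pick = random.uniform(0.0, total)
    running = 0.0
    for member, fitness in zip(population, fitnesses):
        running += fitness
        if running >= pick:
            return member
    return population[-1]  # guard against floating-point round-off

# Example: rule-set "B" is selected roughly twice as often as "A".
print(roulette_wheel_select(["A", "B"], [1.0, 2.0]))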

Initialization and Termination

Our learning system creates new generations in a loop, keeping track of the maximum fitness and the average fitness of the current (t-th) generation, as follows:

Best-fit*(t) = h’(best-member-of(G(t)))

Average-fit*(t) = ( Σ set∈G(t) h’(set) ) / |G(t)|

If neither Best-fit* nor Average-fit* has increased by more than 0.01% in 100 generations, the learning process terminates. The rule-set with the highest fitness value in the last generation is considered the learned set of rules and is used for testing.
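A minimal sketch of this stopping rule (our illustration; the 100-generation window and 0.01% threshold are taken from the text above, and interpreting the threshold as an absolute change on a [0, 1] fitness scale is our assumption):

def should_terminate(best_history, avg_history, window=100, threshold=0.0001):
    # best_history / avg_history: Best-fit*(t) and Average-fit*(t) per generation.
    # Stop when neither statistic improved by more than the threshold over the
    # last `window` generations.
    if len(best_history) <= window:
        return False
    best_gain = best_history[-1] - best_history[-1 - window]
    avg_gain = avg_history[-1] - avg_history[-1 - window]
    return best_gain <= threshold and avg_gain <= threshold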

The Reward and Punishment Mechanism

The reward and punishment mechanism evaluates individual rules inside a rule-set, assigning a "score" to each of them depending on its contribution to the rule-set's decision making [EiTo95]. The mechanism consists of two steps:

1. Based on its past performance, the goodness of a rule is measured by treating "good rule" as a fuzzy set with the membership function:

Good rule = (no. of times the rule provides right evidence)/(no. of times it is fired)

2. Pay-off functions are the mechanisms that distribute the feedback of the environment to the classifier system, namely a rule-set.

In the enhanced version of DELVAUX, this information is used by the mutation operator to give better rules a lower chance of being selected for replacement.
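As an illustration of step 1 (our own sketch, not the RPM implementation), the goodness score of a rule can be tracked with two counters:

class RuleScore:
    # Tracks how often a rule fired and how often it provided correct evidence,
    # giving the fuzzy membership "good rule" defined above.
    def __init__(self):
        self.fired = 0
        self.right = 0

    def record(self, provided_right_evidence):
        self.fired += 1
        if provided_right_evidence:
            self.right += 1

    def goodness(self):
        return self.right / self.fired if self.fired else 0.0

# Example: a rule fired 4 times and was right 3 times -> goodness 0.75.
score = RuleScore()
for outcome in [True, True, False, True]:
    score.record(outcome)
print(score.goodness())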

3. Statistical Analysis of Data

According to the general architecture of the proposed DELVAUX system (Figure 2), the training data set, and accordingly the problem search space, is first quantized. The result is a set of cells, each representing a region of the search space. The cells are evaluated to assess their potential for generating good rules; the evaluation relies on the number of examples represented by each cell compared with its neighbors. As a result of the evaluation process, a fitness value is assigned to each cell. Based on their fitness, the cells then compete against each other using the roulette wheel method, and the winning cell is used to generate the required new rule, which lies within that cell's territory. Rule-generation heuristics, a by-product of the cell evaluation process, are applied to the selected cell to produce the final rule.

Figure 2: The Proposed System Architecture

Here, each cell corresponds to a unique combination of a class, an attribute, and a range of that attribute's possible values. We first define some statistical parameters needed for the search-space quantization and the cell evaluation process:

NA: total number of attributes in the given data set.

NC: total number of classes in the given data set.

NR: total number of ranges into which each attribute's values are divided.

NX: total number of examples in the given data set.

Each cell is denoted by C(i, j, k), corresponding to one attribute Ai, one class Cj, and one range Rk. The total number of cells is therefore NA * NC * NR.
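To make the quantization concrete, here is a minimal Python sketch (our own illustration; DELVAUX's actual cell fitness measure is only described informally above) that maps normalized examples onto cells C(i, j, k) and counts how many examples each occupied cell covers:

def build_cells(examples, num_ranges):
    # examples: list of (normalized_attribute_values, class_index) pairs,
    # with attribute values already mapped into [0, 1] by the phi transformation.
    # Returns a dictionary mapping (i, j, k) -> number of covered examples,
    # where i is the attribute, j the class, and k the value range; the full
    # grid would contain NA * NC * NR cells, most of which may stay empty.
    counts = {}
    for values, class_index in examples:
        for i, value in enumerate(values):
            k = min(int(value * num_ranges), num_ranges - 1)
            key = (i, class_index, k)
            counts[key] = counts.get(key, 0) + 1
    return counts

# Example: two attributes, two classes, four ranges per attribute.
examples = [([0.9, 0.1], 0), ([0.8, 0.2], 0), ([0.1, 0.7], 1)]
print(build_cells(examples, num_ranges=4))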

If k is 0 or NR-1 (i.e., the cell covers one of the extreme value ranges), the potential rule generated for the cell is of the form:

if is-high (Ai) then Cj (to degree S, N)

otherwise, if 0 < k < NR-1, a close-to rule is generated for the selected cell.
................

In order to avoid copyright disputes, this page is only a partial summary.

Google Online Preview   Download