Multi-rule-set Decision-making System for a Genetic Algorithm Learning Environment Based on Statistical Composition of the Training Data

B. Gerges, C. F. Eick and F. G. Attia
University of Houston

Abstract

The present study is based on DELVAUX [KE95], a machine learning system for classification tasks. The system learns PROSPECTOR-style, Bayesian rules from sets of examples using a genetic algorithm approach. The population consists of rule-sets whose offspring are generated through the exchange of rules; fitter rule-sets produce offspring with higher probability. The fitness of a rule-set is measured by the percentage of training examples it classifies correctly. This paper describes the development of a Smart Rule Generation (SRG) mechanism, which generates rules based on the statistical properties of training data sets, making the rule-generation process less random. The SRG was developed as a stand-alone system that can be added to any learning environment that handles PROSPECTOR-style rules. The SRG mechanism analyzes the data, trying to extract possible relationships between the classes and attributes of the given data set. The SRG was used as DELVAUX's rule-generation vehicle: the initial populations, as well as the rules needed for the mutation operator of the system, were generated using the SRG mechanism. As a result of using the SRG mechanism, DELVAUX's learning performance showed significant improvement, since rules were no longer generated randomly.

1. Introduction

Genetic algorithms have emerged as one of the most promising approaches in machine learning. Classifier systems are the most common genetic-based machine learning systems.
This study is based on the DELVAUX classifier system, which uses a multilevel credit assignment mechanism that rewards/punishes rule-sets as well as individual rules. Two fitness evaluation algorithms evaluate the rule-sets, assigning a fitness value to each rule-set, while a Reward and Punishment Mechanism (RPM) algorithm assigns a score to each rule. The basic DELVAUX system was built by Daw Jong [Jon93] in his M.S. thesis using the Pittsburgh approach. In the second version, Emma Toto [Tot93] added the rule-level reward/punishment mechanism (RPM), which employs a bucket-brigade-style credit assignment algorithm for Bayesian rules [ET94], to the original DELVAUX system. The original DELVAUX search process generates new rules randomly. The focus of this paper is a Smart Rule Generation (SRG) mechanism [Ger96] that creates new rules more intelligently, based on statistical properties of the training data. As a part of the DELVAUX system, the SRG mechanism is used to generate the initial population more intelligently, helping the system start the search process in areas of relatively higher fitness, speeding up system operation, and improving the overall outcome. In addition to smart initial-population generation, the new mechanism is integrated into the mutation operator to make the rule-generation process smarter and less random. We will also demonstrate that the SRG mechanism is useful for other purposes, namely the debugging of knowledge-based systems that use Bayesian rules. The paper is organized as follows: Section 2 describes the DELVAUX system, Section 3 presents the statistical analysis of the training data, Section 4 describes how rules are generated from these statistics, and Section 5 concludes the paper.

2. The DELVAUX System

DELVAUX is basically an inductive learning system for classification tasks that learns PROSPECTOR-style, Bayesian classification rules [DHN76] from sets of examples. A genetic algorithm approach is used for learning Bayesian rule-sets, in which a population consists of rule-sets that generate offspring through the exchange of rules, allowing fitter rule-sets to produce offspring with a higher probability.
Additionally, new rule-sets are generated through a mutation operator that replaces a rule in a rule-set by a randomly generated rule.

Bayesian Rules and Odds-Multipliers

The PROSPECTOR-style rules are of the form:

If E then H (to degree S, N)

S and N are odds-multipliers which measure the sufficiency and necessity of E for H. PROSPECTOR rules work with odds instead of probabilities, using the following conversion from probabilities to odds [DHN76]:

O(H) = P(H) / (1 - P(H))

The DELVAUX learning environment supports two kinds of rules: is-high and close-to rules. The is-high rule provides positive/negative evidence for a certain decision based on the relative highness of an attribute value compared with those of other test cases, while the close-to rule provides positive/negative evidence for a certain decision based on the closeness of an attribute value to a certain constant.

The Basic Learning System

Assuming that we have a significant number of examples, we want to discover interesting relationships between the various attributes in the example space. In general, a domain expert provides these examples. The set of available examples is partitioned into two sets: the training set and the test set. To evaluate the quality of the rules learnt by our machine learning techniques, they are applied to the examples of the test set.

Conversion

If A1, A2, ..., An are the attributes involved in a certain classification task, each example is conceived as a vector (a1, a2, ..., an, d), where a1 through an denote real values for the attributes A1 through An, respectively, and d is the class to which the object characterized by (a1, a2, ..., an) belongs. The following transformation Φi converts the domain of the i-th attribute into values in the interval [0, 1] [Jon93]. Let ai be the value of the attribute in a given example; then we define

Φi(ai) = (number of examples whose value for Ai is less than ai) / (number of examples whose value for Ai is different from ai)

for i = 1, 2, ..., n.
Let tr = (a1, a2, ..., an, d) be an example of the training set. Then we define the following transformation Φ over the space of training examples:

Φ(tr) = (Φ1(a1), Φ2(a2), ..., Φn(an), d)

Let Tr = {tr1, tr2, ..., trm} be the training set and Ts = {ts1, ts2, ..., tsn} be the test set for a certain application; then the normalized training example set is obtained as follows:

Φ(Tr) = (Φ(tr1), ..., Φ(trm))

The normalized set shows how high a particular value for an attribute is with respect to the other values of the same attribute in the training set.

Genetic Algorithms for Learning Bayesian Rules

In the basic DELVAUX system, the initial population is created completely randomly: randomly generated rule-sets are evaluated with respect to the training set, the best three are inserted into the initial population, and the whole procedure is repeated if the population is not complete. The next generation is generated using one-point crossover and mutation operators, which create new rule-sets by exchanging rules between rule-sets in the population or by replacing single rules in a rule-set with randomly generated rules.
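The rank-based normalization described above can be sketched as follows; the function name `phi` and the handling of ties (values equal to ai are excluded from both counts, matching the "different from ai" denominator) are illustrative assumptions.

```c
/* Sketch of the attribute normalization Phi_i: the fraction of training
 * values for attribute Ai that are smaller than `a`, among the values
 * that differ from `a`. Maps any attribute domain into [0, 1]. */
double phi(double a, const double *values, int n) {
    int less = 0, different = 0;
    for (int i = 0; i < n; i++) {
        if (values[i] < a) less++;
        if (values[i] != a) different++;
    }
    /* if every value equals `a`, there is no rank information */
    return different > 0 ? (double)less / different : 0.0;
}
```

Applying `phi` to each attribute of each training example yields the normalized set Φ(Tr) described above.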
The algorithm for generating the next generation is shown below:

Next-generation(t):=
  INSERT best-members-of(G(t)) into G(t+1);
  DO {
    Select two members ru1 and ru2 with the roulette wheel method;
    PERFORM {
      If should-make-crossover
        Then crossover(ru1, ru2, ru1', ru2')
        Else { ru1' := ru1; ru2' := ru2 }   (ru1 and ru2 are copied)
      If should-have-mutation then ru1' := mutate(ru1');
      If should-have-mutation then ru2' := mutate(ru2');
      If should-have-mutation then ru1' := invert(ru1');
      If should-have-mutation then ru2' := invert(ru2');
      Insert ru1' into G(t+1);
      Insert ru2' into G(t+1);
    }
  } UNTIL population of G(t+1) is complete;

Figure 1: Algorithm for generating the next generation

A Fitness Function for the Rule-sets

Our approach uses a fitness function h' that mainly takes into consideration the performance of a rule-set on the training set, but also applies small penalties depending on the cardinality of the rule-set, penalizing very large rule-sets that do not show a significantly better performance than smaller rule-sets. Let R-Set be the rule-set to be evaluated, r the number of rules in it, p the number of attributes in the training set, c the number of classes of the classification, and h(R-Set) the percentage of the example-cases that were classified correctly by the rule-set R-Set. The function h', used to measure the fitness of a rule-set R-Set, is defined as:

h'(R-Set) = h(R-Set) - r / (p * c * 50)

Here p*c is used to approximate the complexity of a classification problem, assuming that larger rule-sets are needed for more complex problems, thus decreasing the size penalty for more complex classification problems.

Genetic Operators

The basic system uses three genetic operators: crossover, mutation, and inversion. Each population is a set of rule-sets, and the rule-sets are represented in their chromosomal representation as ordered sequences of rules r1, ..., rn.
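The size-penalized fitness described above can be sketched as a one-line function; the penalty term r/(p*c*50) is taken from the formula as given, and the argument names are illustrative.

```c
/* Minimal sketch of the size-penalized fitness h'(R-Set).
 * h: percentage (as a fraction) of training examples classified correctly
 * r: number of rules in the rule-set
 * p: number of attributes, c: number of classes */
double penalized_fitness(double h, int r, int p, int c) {
    /* more complex problems (larger p*c) incur a smaller size penalty */
    return h - (double)r / ((double)p * c * 50.0);
}
```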
In order to generate the offspring for the next generation, parents are selected based on their fitness values h', using a selection method such as the roulette wheel. The offspring are generated by exchanging rules between rule-sets. The inversion operator is used to change the order of rules in the chromosomal representation of a rule-set; it affects the composition of the generations to be produced in the future. The mutation operator selects a rule in a rule-set and replaces it with a newly generated rule.

Initialization and Termination

Our learning system creates new generations in a loop, keeping track of the maximum fitness and of the average fitness of the current (t-th) generation, as follows:

Best-fit*(t) = h(best-member-of(G(t)))
Average-fit*(t) = ( Σ set∈G(t) h(set) ) / |G(t)|

If neither Best-fit* nor Average-fit* has increased by more than 0.01% in 100 generations, the learning process terminates. The rule-set with the highest fitness value in the last generation is considered the learned set of rules and is used for testing.

The Reward and Punishment Mechanism

The Reward and Punishment Mechanism evaluates individual rules inside a rule-set, assigning a score to each of them depending on their contribution to the rule-set's decision making [EiTo95]. The mechanism consists of two steps. First, based on its past performance, the goodness of a rule is measured mathematically by considering it a fuzzy set with membership function:

Good-rule = (no. of times the rule provides right evidence) / (no. of times it is fired)

Second, pay-off functions distribute the feedback of the environment to the classifier system, namely a rule-set. In the enhanced version of DELVAUX, this information is used by the mutation operator to give better rules a higher chance of not being selected for replacement.

3.
Statistical Analysis of Data

According to the general architecture of the proposed DELVAUX system, the training data set, and accordingly the problem search space, is first quantized. The result is a set of cells, each representing a region in the search space. The cells are evaluated to assess their potential for generating good rules. The evaluation mechanism relies on the number of examples represented by each cell, compared with its neighbors. As a result of the evaluation process, a fitness value is assigned to each cell. Then, based on their fitness, the cells compete against each other using the roulette wheel method. The winning cell is used to generate the required new rule, which lies within that cell's territory. Rule generation heuristics are then applied to the selected cell to finally generate the required good rule. These rule selection heuristics are a by-product of the cell evaluation process.

Figure 2: The Proposed System Architecture

Here, each cell corresponds to a unique combination of a class, an attribute, and a range of that attribute's possible values. We need to define some statistical parameters needed for the search space quantization and the cell evaluation process:

NA - Total number of attributes in the given data set.
NC - Total number of classes in the given data set.
NR - Total number of ranges an attribute value will be divided into.
NX - Total number of examples in the given data set.

Each cell is denoted by C(i, j, k), which corresponds to one attribute Ai, one class Cj, and one range Rk. The total number of cells is therefore NA * NC * NR. The type of rule a cell can generate depends on its range index k: if k is 0 or NR-1, the cell generates an is-high rule of the form

if is-high(Ai) then Cj (to degree S, N)

otherwise (0 < k < NR-1), it generates a close-to rule whose close-to constant lies within the range Rk. For the evaluation of the cells, the following statistics are computed, where Count[i, j, k] is the number of examples falling in cell C(i, j, k), AR-count[i, k] is the number of examples whose value for attribute Ai falls in range Rk, and Class-count[j] is the number of examples belonging to class Cj:

PEAR[i, j, k] = Count[i, j, k] / AR-count[i, k] * 100    (if AR-count[i, k] > 0)
PEAR[i, j, k] = Class-count[j] / NX * 100 = TEC[j]       (if AR-count[i, k] = 0)
PEC[i, j, k]  = Count[i, j, k] / Class-count[j] * 100
TEAR[i, k]    = AR-count[i, k] / NX * 100
TEC[j]        = Class-count[j] / NX * 100
PEAR and PEC measure how highly populated a particular cell is, compared with the other cells in the same row and the same column, respectively. TEC indicates how strongly a particular class is represented in the data set, while TEAR measures how strongly a particular attribute range is represented in the data set, compared with the other ranges for the same attribute. Intuitively, a highly populated cell is probably a good candidate to generate a positive-evidence rule, but the problem lies in assessing the goodness of cells for producing good negative-evidence rules. This can be analyzed by evaluating the cell's population relative to its neighbors, or in other words relative to its row and column totals. Therefore, a cell whose value of PEAR is much higher than that of the corresponding TEC is a good candidate to produce a positive-evidence rule, because its elements (attribute, class, and range) are highly positively correlated, whereas a cell for which the value of PEAR is not far from that of the corresponding TEC is not an interesting candidate to generate a new rule. To translate the information we have for each cell into a single number that reflects the cell's relative goodness as a potential rule generator, five fitness functions were implemented. The first two rely on the simple line of reasoning that the higher a cell's population is (PEAR and PEC), the higher the score assigned to it. The other three functions follow a second line of reasoning, always comparing the value of PEAR against that of TEC, and the value of PEC against that of TEAR, favoring cells whose population deviates significantly from the average population.
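The four statistics above can be computed per cell from three raw counts gathered in a single pass over the training set; the struct and parameter names in this sketch are illustrative assumptions.

```c
/* Sketch of the cell statistics PEAR, PEC, TEAR, TEC for one cell C(i,j,k). */
typedef struct { double pear, pec, tear, tec; } CellStats;

CellStats cell_stats(int count_ijk,      /* examples in cell C(i,j,k)              */
                     int ar_count_ik,    /* examples with attribute Ai in range Rk */
                     int class_count_j,  /* examples belonging to class Cj         */
                     int nx)             /* total number of examples               */
{
    CellStats s;
    s.tec  = 100.0 * class_count_j / nx;
    s.tear = 100.0 * ar_count_ik / nx;
    /* an empty attribute range falls back to TEC, as defined above */
    s.pear = (ar_count_ik > 0) ? 100.0 * count_ijk / ar_count_ik : s.tec;
    s.pec  = 100.0 * count_ijk / class_count_j;
    return s;
}
```

For example, a cell holding 10 of the 20 examples in its attribute range, where the class covers 40 of 100 examples, yields PEAR = 50, PEC = 25, TEAR = 20, TEC = 40: PEAR exceeds TEC, so the cell is a candidate for a positive-evidence rule.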
F1[i, j, k] = PEAR[i, j, k] + 0.5 * PEC[i, j, k]
F2[i, j, k] = PEAR[i, j, k] * PEC[i, j, k]
F3[i, j, k] = [ (PEAR[i, j, k] - TEC[j]) + (PEC[i, j, k] - TEAR[i, k]) ]^2
F4[i, j, k] = | ln(PEAR[i, j, k] / TEC[j]) + ln(PEC[i, j, k] / TEAR[i, k]) |
F5[i, j, k] = | (PEAR[i, j, k] - TEC[j]) + (PEC[i, j, k] - TEAR[i, k]) | / | TEC[j] + TEAR[i, k] |

Functions F3 and F4 are very similar. Function F3 measures the deviation of PEAR and PEC from the corresponding TEC and TEAR, respectively. It sums the results and then squares the sum, both to account for negative deviations (negative correlation) and to give a higher reward to cells with larger deviations. Function F4 accommodates negative deviations by taking the absolute value of the result. F3 gives equal scores to cells with equal negative and positive deviations, while F4 does not reward positive and negative deviations evenly: in general, according to F4, the larger the deviation, the higher the reward received by a cell with a negative deviation in comparison with that received by a cell with an equal but positive deviation. In order to make the two deviations counteract, rather than reinforce, each other, we add them first and square (or take the absolute value of) the sum afterwards.

Figure 3: Comparison between F3's and F4's behavior with respect to PEAR

In function F5, the deviations are taken as a percentage of their reference values (TEAR and TEC). The specific characteristics of the particular application domain should dictate the choice of the right fitness function. The cells are evaluated by applying the selected fitness function to each cell in turn, giving each cell a score (fitness). In the next step, the roulette wheel mechanism is used to select a cell. In this process, a roulette wheel with a predefined number of slots is divided into parts that are equal in number to the competing cells.
Each part is associated with one cell and contains a number of slots proportional to that cell's fitness. A cell is selected by selecting a slot from the spinning wheel; the probability of a cell being selected is thus proportional to the number of slots assigned to that cell and, correspondingly, to its fitness. The selected cell is then used to generate the required rule. The number of slots assigned to each part is computed according to the formula:

Slots[i, j, k] = F[i, j, k] / ( Σ x=0..NA-1  Σ y=0..NC-1  Σ z=0..NR-1  F[x, y, z] )

where F[i, j, k] is the value of the selected fitness function evaluated for the cell (i, j, k).

4. Generating Rules Based on Statistical Properties of the Training Set

After the cells are evaluated, one cell is selected to generate the required rule. We now need a way to select one particular rule from those corresponding to the selected cell. The elements that define a rule are: attribute, class, sufficiency (S), necessity (N), the rule type (whether it is a close-to rule or an is-high rule), and, in the case of a close-to rule, the close-to value. The selection of a cell defines the rule's attribute and class; moreover, it implicitly defines the rule type and a range for the close-to value (when applicable). So the problem boils down, basically, to the selection of the proper S and N.

The system was coded in the C language under the Unix operating system. The Unix command that invokes the system is:

RuleGen <data file> <# of rules> <fitness function> [<MRE file>]

The first parameter points to the file where the training data of the problem at hand is stored. The format of this file is as follows: the first line contains three numbers, the first two of which indicate the number of attributes and classes of the problem domain.
The third number indicates the number of examples in that file. After the first line, each subsequent line represents one example. The second parameter tells the system how many rules to generate. The system assumes that all the rules it generates at one time belong to the same rule-set; hence it tries as much as possible to prevent duplicate rules from being generated in the same run, comparing each newly generated rule with those generated earlier. The third parameter, the fitness function, is a number between 1 and 5 indicating which of the five presented fitness functions the system should use to generate the requested rules. The fourth parameter is optional; it provides the system with the name of the MRE file to use. The MRE file contains the output generated by the MRE mechanism, which is a part of the enhanced DELVAUX system. The output of the system is a text file; an example is shown below for a run with 10 rules, 6 attributes, and 5 classes:

Rule #  Type  Attr.  Cls-to Val.  Class  S        N
0       0     3      0.30         2      0.0972   10.2883
1       1     5      0.00         4      0.0168   59.5051
2       0     1      0.70         1      0.0162   61.5848
3       1     0      0.00         4      15.8489  0.0631
4       1     3      0.00         3      0.0470   21.2682
5       0     4      0.30         1      8.5496   0.1170
6       1     5      0.00         3      0.3255   3.0718
7       1     6      0.00         1      0.0247   40.4910
8       0     4      0.50         1      0.0583   17.1573
9       1     1      0.00         5      88.3527  0.0113

Figure 4: Output of the system

Here type 0 refers to a close-to rule, and type 1 refers to an is-high rule.

Smart Initial Population

In the enhanced version of DELVAUX, initial population generation was kept completely random; in an attempt to reduce this randomness, we used the SRG mechanism. Comparing the results produced by the system equipped with each of the five fitness functions with those produced by the enhanced system, which generates rules randomly, we notice that all five functions produced a better initial-population fitness than the enhanced system.
The final results of the modified system were also generally better than those achieved by the enhanced system. The two most significant results are the maximum and average fitness of the first generation, whose improvement is due solely to the SRG system.

Smart Mutation Operator

The mutation operator consists of two parts: selecting the rule to be replaced and generating a new rule. The idea behind improving the mutation operator was to mix random and deterministic mechanisms in a predefined proportion to reach the best possible behavior of the system. The first two steps towards determinism, the RPM mechanism and the MRE mechanism described below, were incorporated in the enhanced version.

The Missing Rule Evidence (MRE) Mechanism

The MRE is based on the idea of tracking the missing negative/positive evidence for a particular decision in a rule-set, which results in incorrect classification of the given examples. A function MRE(R-Set, Di) was developed to measure the lack of evidence for the decision Di in rule-set R-Set. When generating new rules as part of the mutation operator, the classes which lack more evidence in the rule-set should be chosen with higher probability. Consequently, the higher the value of the MRE function for a particular decision, the higher the probability that this decision will be chosen to form the new rule. The new rule should provide positive or negative evidence depending on the kind of evidence that is missing the most for the class. Thus we have established a mechanism to select a decision for the new, to-be-generated rule.
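The MRE-driven decision selection described above amounts to a roulette wheel over the per-class MRE values; a minimal sketch follows, in which the `mre[]` values and the use of `rand()` are illustrative assumptions.

```c
#include <stdlib.h>

/* Sketch of selecting a decision class with probability proportional
 * to its MRE value: classes lacking more evidence are chosen more often. */
int select_decision(const double *mre, int nc) {
    double total = 0.0;
    for (int j = 0; j < nc; j++) total += mre[j];
    double spin = ((double)rand() / RAND_MAX) * total;  /* one wheel spin */
    double acc = 0.0;
    for (int j = 0; j < nc; j++) {
        acc += mre[j];           /* class j owns a slice of size mre[j] */
        if (spin < acc) return j;
    }
    return nc - 1;               /* guard against floating-point rounding */
}
```

The same cumulative-slice pattern serves for the cell-selection roulette wheel used in the "separate fitness" approach.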
Yet Another Mutation Operator

This operator blends the MRE and the statistical analysis method to come up with a new, more deterministic mechanism that still keeps a flavor of randomness in the process. Two approaches are considered for blending these two pieces together.

Combined fitness. In this approach we still use the cell fitness as the basis for selecting the components of the new rules. This is done by defining a modified fitness function

fit'(Ai, Dj, Rk) = fit(Ai, Dj, Rk) * MRE(Dj)

where fit(Ai, Dj, Rk) is the fitness of the cell C(Ai, Dj, Rk) as defined by one of the five fitness functions introduced earlier, fit'(Ai, Dj, Rk) is the modified fitness, and MRE(Dj) is the value of the MRE function corresponding to that cell's decision. The cells are then selected using the roulette wheel method, and the rule generation process continues. This approach could not be fully implemented due to time limitations.

Separate fitness. Here, a roulette wheel is first run to select a decision Dk based on the corresponding values of the MRE function; attention then turns to the cells, discarding all cells that correspond to decisions other than Dk. Another roulette wheel is run to select a cell from those that are left, and the rule generation process is followed.

Empirical Results

After adding the smart mutation operator, implemented with separate fitness, to the system, we ran several tests. The result for one of the functions is shown below:

Data Set  Gen.  Max.    Avg.    End Test
GL-TR1    1199  0.6839  0.4779  0.6081
GL-TR2     999  0.6486  0.4361  0.6090
GL-TR3    1046  0.7065  0.4822  0.6037
Avg.      1081  0.6797  0.4654  0.6069

Figure 5: Results of running the system with the smart mutation operator on the Glass data, using fitness function F4

Comparing these results with those achieved by the enhanced system, we see improvement in all aspects, from speed of convergence to the maximum, average, and final test fitnesses.
5. Conclusion

The environment in which this research was performed is an inductive learning system for classification tasks called DELVAUX. Both versions of DELVAUX, the original by Daw Jong [Jon93] and the enhanced one by Emma Toto [Tot93], did not make use of the statistical information that can be obtained by analyzing the given data set. That was the motivation for this research, in which we tried to make use of this statistical information to improve the rule generation operation. The goal was to devise a new system that can generate rules less randomly and more intelligently by statistically analyzing the training data set. We introduced a methodology for the statistical analysis of the training data set as follows: first, the search space was quantized by dividing it into discrete cells, to limit the search effort; then fitness functions were defined to measure the potential of each cell of the search space to generate good rules. Several fitness functions were implemented and tested for that purpose. A rule is then generated by selecting a cell at random, giving cells with higher fitness a greater chance of being selected. Once a cell is selected, most of the new rule's parameters, except for the multipliers, are automatically defined. The rule multipliers (S and N) are likewise derived from the statistical results; they are defined to be functions of the cell's fitness. The system was implemented in the C language in a SunOS/Unix environment.
An SRG run typically takes a fraction of a second. However, an average run of the SRG DELVAUX system (of up to 200 generations per run) on a fast, unloaded machine (Sparc20) lasted about 1.5 hours. A few modifications to the system, as well as some shell scripts, were necessary to automate the testing process and cut down the required testing time. This research also opened several new avenues for further research and improvement, one of them being to combine the SRG with the RPM mechanism to form a complete, independent learning environment in which SRG would be the rule generation tool and RPM the rule deletion tool. Another suggested area of research is to experiment with a wider range of fitness functions and to identify the relationship between the application and the best fitness function to use for it.

Bibliography

[KE95] Y-J. Kim and C. F. Eick. Multi-rule-set decision-making schemes for a genetic algorithm learning environment for classification tasks. In Proc. 4th Annual Conference on Evolutionary Programming, San Diego, March 1995.

[EJ93] Christoph F. Eick and Daw Jong. Learning Bayesian classification rules through genetic algorithms. In Bharat Bhargava, Timothy Finin, and Yelena Yesha, editors, Proceedings of the Second International Conference on Information and Knowledge Management. ACM, 1993.

[ET94] Christoph F. Eick and Emma Toto. Evaluation and enhancement of Bayesian rule-sets in a genetic algorithm learning environment for classification tasks. In Z. Ras and M. Zemankova, editors, Proc. 8th Int. Symposium on Methodologies for Intelligent Systems, pp. 366-375, Charlotte, 1994. Springer Verlag.

[Jon93] Daw Shuye Jong. Learning Bayesian classification rules with genetic algorithms. Master's thesis, Department of Computer Science, University of Houston, 1993.

[Sec93] Nicola Secomandi. Analysis and improvement of a genetic-based learning environment for classification tasks.
Master's thesis, Department of Computer Science, University of Houston, 1993.

[Tot93] Emma Toto. Combining rule debugging techniques with genetic algorithms: a hybrid approach to learn rules for classification tasks. Master's thesis, Department of Computer Science, University of Houston, 1993.

[Ger96] Bassem Gerges. Generating rules for classification systems based on statistical composition of training data. Master's thesis, Department of Computer Science, University of Houston, 1996.

[Gia89] David E. Giarratano. Expert Systems: Principles and Programming. PWS-Kent, Boston, 1989.

[Hol75] John H. Holland. Adaptation in Natural and Artificial Systems. The University of Michigan Press, 1975.

[Jac89] Peter Jackson. Introduction to Expert Systems. Addison-Wesley, Wokingham, England, 1989.

[Sim83] Herbert A. Simon. Why should machines learn? In R. S. Michalski, J. G. Carbonell, and T. M. Mitchell, editors, Machine Learning: An Artificial Intelligence Approach (Vol. 1). Tioga Press, Palo Alto, California, 1983.

[BGH90] L. B. Booker, D. E. Goldberg, and J. H. Holland. Classifier systems and genetic algorithms. In Jude W. Shavlik and Thomas G. Dietterich, editors, Readings in Machine Learning. San Mateo, California, 1990.

[DHN76] R. Duda, P. Hart, and N. Nilsson. Subjective Bayesian methods for rule-based inference systems. In Proceedings of the National Computer Conference, pages 1075-1082, 1976.

[Gre90] John J. Grefenstette. Credit assignment in rule discovery systems based on genetic algorithms. In Jude W. Shavlik and Thomas G. Dietterich, editors, Readings in Machine Learning. San Mateo, California, 1990.
Z2 2 3 2%3 ZB 3 S Dj j /ZB 4 S D+ZB 5 S Dj 0ZB 6 S Dj ,` I C  Ix!.&8  TB M C DJ zK 6/Z2 S 3  S9%#  TB \ C D( (,ZB ]B S D%(ZB g S D"" 'TB l C D)TB n C Dz}{ZB p S D""TB r C D8 9T8TB v C DF8TB w C D 6 ;TB y C D 8N z   H  ZB  S DZZZB  S DZ"ZZB  S D :8 :ZB  S Dh::ZB  S DH::TB  C D  TB B C Dh% h% TB  C D$z$ ZB  S D" " B S  ?_P%'............////Q/T/U/V/X/Y/Z/[/^/`/DDDE E>E@EAEFEITTTTTTTTTU!X~$tiDtjt?yt7X.t>X,Yt t8|}tutt;<xtx>tsLt^tnt^jjtK~t4t(h|t|8t~t7-8t|&t{Hxt&8't}Ht(t nn> tr  t8::t'^;t8tttKt%^$t(tn..tV-mt!t``tPPt tt   tttt"EEt#t_`acy{NPy#{#j4k455Z5[5t5u5p6q6666666#7$76777T7U7~7777>>>>>>>>>>>> ? ???9?:?I?J?X?Y?e?g?o?p???????????????5@8@IIIICJHJ~JJJJTTTT=U>UUUEXFXjjw wx#x'x+xxxxx;yAyKyNyOyTy{{{<B׀_`y{NP`j>E%'SU {!|!!!A$$%%&&&&&&((**R0000151m1t12!2H2Q22222<3B333N4T455T5Z555k6p666G7S7y7~777>>>>??h?n???@ @JJAJCJ_J`J|J~JKKMMTTTTmZvZZZdMdPdTdddiijjx xyyz_z`zzzz:{p{{||||||}-}L}~~׀33333333333333333333333333333333333333333333333333333333333333333333333333333rF G H H m m @KK ^?^^_k{l}llll w w xByր׀tech144eC:\Documents and Settings\tech144\Application Data\Microsoft\Word\AutoRecovery save of reviseabs3.asdtech144eC:\Documents and Settings\tech144\Application Data\Microsoft\Word\AutoRecovery save of reviseabs3.asdtech144eC:\Documents and Settings\tech144\Application Data\Microsoft\Word\AutoRecovery save of reviseabs3.asdceickD:\KDD\Bassem-Mod.docceick[C:\WINNT\Profiles\ceick\Application Data\Microsoft\Word\AutoRecovery save of Bassem-Mod.asdceick[C:\WINNT\Profiles\ceick\Application Data\Microsoft\Word\AutoRecovery save of Bassem-Mod.asdceick[C:\WINNT\Profiles\ceick\Application Data\Microsoft\Word\AutoRecovery save of Bassem-Mod.asdceick[C:\WINNT\Profiles\ceick\Application Data\Microsoft\Word\AutoRecovery save of Bassem-Mod.asdceick[C:\WINNT\Profiles\ceick\Application Data\Microsoft\Word\AutoRecovery save of Bassem-Mod.asdceick[C:\WINNT\Profiles\ceick\Application Data\Microsoft\Word\AutoRecovery save of Bassem-Mod.asd 1Z} ML* E7 
+Vhh^h`o(.hh^h`o(.hh^h`o(.hh^h`o(.hh^h`o(1Z}ML*E7+Vijghhhhhhhhhhhhhhhhhhhhiiiii i'i(i/i4i;iBiIiJiOiTi[ibiiiji׀@BBSBB(JKP@PNP@UnknownG: Times New Roman5Symbol3& : Arial9Garamond;& zHELVETICA3zTIMES"Aho$YFZjSD&c