Direct Kernel Least-Squares Support Vector Machines with Heuristic Regularization

Mark J. Embrechts
Department of Decision Sciences and Engineering Systems
Rensselaer Polytechnic Institute, Troy, NY 12180
E-mail: embrem@rpi.edu

Abstract
This paper introduces least-squares support vector machines as a direct kernel method, where the kernel is considered as a data pre-processing step. A heuristic formula for the regularization parameter is proposed based on preliminary scaling experiments.

I. INTRODUCTION

A. One-Layered Neural Networks for Regression

A standard (predictive) data mining problem is defined as a regression problem for predicting the response from descriptive features. In order to do so, we first build a predictive model based on training data, evaluate the performance of this predictive model based on validation data, and finally use this predictive model to make actual predictions on test data for which we generally do not know (or pretend not to know) the response value. It is customary to denote the data matrix as $X_{nm}$ and the response vector as $\vec{y}_n$. In this case, there are n data points and m descriptive features in the dataset. We would like to infer $\vec{y}_n$ from $X_{nm}$ by induction, denoted as $X_{nm} \Rightarrow \vec{y}_n$, in such a way that our inference model works not only for the training data, but also does a good job on the out-of-sample data (i.e., validation data and test data). In other words, we aim to build a linear predictive model of the type:

  $\hat{y}_n = X_{nm}\vec{w}_m$    (1)

The hat symbol indicates that we are making predictions that are not perfect (especially for the validation and test data). Equation (1) is the answer to the question "wouldn't it be nice if we could apply wisdom to the data, and pop comes out the answer?" The vector $\vec{w}_m$ is that wisdom vector and is usually called the weight vector in machine learning. There are many different ways to build such predictive regression models. Just to mention a few possibilities here, the regression model could be a linear statistical model, a Neural Network based model (NN), or a Support Vector Machine (SVM) based model.[1-3] Examples of linear statistical models are Principal Component Regression models (PCR) and Partial Least-Squares models (PLS). Popular examples of neural network based models include feedforward neural networks (trained with one of the many popular learning methods), Self-Organizing Maps (SOMs), and Radial Basis Function Networks (RBFN). Examples of Support Vector Machine algorithms include the perceptron-like support vector machines (SVMs) and Least-Squares Support Vector Machines (LS-SVM), also known as kernel ridge regression. A straightforward way to estimate the weights is outlined in Equation (2):

  $X_{mn}^T X_{nm}\vec{w}_m = X_{mn}^T\vec{y}_n$
  $\left(X_{mn}^T X_{nm}\right)^{-1}\left(X_{mn}^T X_{nm}\right)\vec{w}_m = \left(X_{mn}^T X_{nm}\right)^{-1} X_{mn}^T\vec{y}_n$
  $\vec{w}_m = \left(X_{mn}^T X_{nm}\right)^{-1} X_{mn}^T\vec{y}_n$    (2)

Predictions for the training set can now be made for $\hat{y}_n$ by substituting (2) in (1):

  $\hat{y}_n = X_{nm}\left(X_{mn}^T X_{nm}\right)^{-1} X_{mn}^T\vec{y}_n$    (3)

Before applying this formula for a general prediction, proper data pre-processing is required. A common procedure in data mining is to center all the descriptors and to bring them to unit variance. The same process is then applied to the response. This procedure of centering and variance normalization is known as Mahalanobis scaling.
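To make the preceding concrete, the following is a minimal NumPy sketch of Mahalanobis scaling followed by the least-squares weights of Eq. (2) and the training predictions of Eq. (3). The toy data and all variable names are illustrative only and are not part of the original text.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))                       # data matrix X_nm: n=100 data, m=5 descriptors
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.normal(size=100)

# Mahalanobis scaling: center every descriptor and the response, scale to unit variance
X_mean, X_std = X.mean(axis=0), X.std(axis=0)
y_mean, y_std = y.mean(), y.std()
Xs, ys = (X - X_mean) / X_std, (y - y_mean) / y_std

# Eq. (2): w_m = (X^T X)^{-1} X^T y, solved without forming the explicit inverse
w = np.linalg.solve(Xs.T @ Xs, Xs.T @ ys)

# Eq. (3): predictions for the training set, mapped back to the original units
y_hat = (Xs @ w) * y_std + y_mean
print(np.corrcoef(y, y_hat)[0, 1])
```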
While Mahalanobis scaling is not the only way to pre-process the data, it is probably the most general and the most robust way to do pre-processing that applies well across the board. If we represent a feature vector as $\vec{z}$, Mahalanobis scaling will result in a rescaled feature vector $\vec{z}\,'$ and can be summarized as:

  $\vec{z}\,' = \dfrac{\vec{z} - \bar{z}}{\mathrm{std}(\vec{z})}$    (4)

where $\bar{z}$ represents the average value and $\mathrm{std}(\vec{z})$ represents the standard deviation for attribute $\vec{z}$. Making a test model proceeds in a very similar way as for training: the wisdom vector or weight vector will now be applied to the test data to make predictions according to:

  $\hat{y}_k^{test} = X_{km}^{test}\vec{w}_m$    (5)

In the above expression it was assumed that there are k test data, and the superscript "test" is used to indicate explicitly that the weight vector will be applied to a set of k test data with m attributes or descriptors. If one considers testing for one sample data point at a time, Eq. (5) can be represented as a simple neural network with an input layer and just a single neuron, as shown in Fig. 1. The neuron produces the weighted sum of the input features. Note that the transfer function, commonly found in neural networks, is not present here. Note also that the number of weights for this one-layer neural network equals the number of input descriptors or attributes.

Fig. 1. Neural network representation for regression

B. The Machine Learning Dilemma

Equations (2) and (3) contain the inverse of the feature kernel, $K_F$, defined as:

  $K_F = X_{mn}^T X_{nm}$    (9)

The feature kernel is an $m \times m$ symmetric matrix where each entry represents a similarity between features. Obviously, if there were two features that were completely redundant, the feature matrix would contain two columns and two rows that are (exactly) identical, and the inverse would not exist. One can argue that all is still well, and that in order to make the simple regression method work one would just make sure that the same descriptor or attribute is not included twice. By the same argument, highly correlated descriptors (i.e., "cousin features" in data mining lingo) should be eliminated as well. While this argument sounds plausible, the truth of the matter is more subtle. Let us repeat Eq. (2) and go just one step further, as shown below:

  $X_{mn}^T X_{nm}\vec{w}_m = X_{mn}^T\vec{y}_n$
  $\left(X_{mn}^T X_{nm}\right)^{-1}\left(X_{mn}^T X_{nm}\right)\vec{w}_m = \left(X_{mn}^T X_{nm}\right)^{-1} X_{mn}^T\vec{y}_n$
  $\vec{w}_m = \left(X_{mn}^T X_{nm}\right)^{-1} X_{mn}^T\vec{y}_n$
  $\vec{w}_m = X_{mn}^T\left(X_{nm} X_{mn}^T\right)^{-1}\vec{y}_n$    (10)

Eq. (10) is the derivation of an equivalent linear formulation to Eq. (2), based on the so-called right-hand pseudo-inverse or Penrose inverse, rather than the more common left-hand pseudo-inverse. It was not shown here how the last line follows from the previous equation, but the proof is straightforward and left as an exercise to the reader. Note that the inverse is now needed for a different matrix, which has an $n \times n$ dimensionality and is called the data kernel, $K_D$, defined by:

  $K_D = X_{nm} X_{mn}^T$    (11)

The right-hand pseudo-inverse formulation is less frequently cited in the literature, because it can only be non-rank-deficient when there are more descriptive attributes than data points, which is not the usual case for data mining problems (except for data strip mining[17] cases). The data kernel matrix is an $n \times n$ symmetric matrix whose entries represent similarities between data points.
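As a brief illustration, the following NumPy sketch applies the right-hand (Penrose) pseudo-inverse of Eqs. (10) and (11), assuming a wide data matrix (m > n) so that the data kernel is full rank; the data and names are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 50))            # n=20 data points, m=50 descriptors (m > n)
y = rng.normal(size=20)

K_D = X @ X.T                            # Eq. (11): data kernel, n x n similarities between data points
w = X.T @ np.linalg.solve(K_D, y)        # Eq. (10): w = X^T (X X^T)^{-1} y, the right-hand pseudo-inverse

print(np.allclose(X @ w, y))             # reproduces the training responses when K_D is full rank
```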
The solution to this problem seems to be straightforward. We will first explain here what seems to be an obvious solution, and then show why it will not work. Looking at Eqs. (10) and (11) it can be concluded that, except for rare cases where there are as many data records as there are features, either the feature kernel is rank deficient (in case $m > n$, i.e., there are more attributes than data), or the data kernel is rank deficient (in case $n > m$, i.e., there are more data than attributes). It can now be argued that for the $m < n$ case one can proceed with the usual left-hand pseudo-inverse method of Eq. (2), and that for the $m > n$ case one should proceed with the right-hand pseudo-inverse, or Penrose inverse, following Eq. (10).

While the approach just proposed seems reasonable, it will not work well in practice. Learning occurs by discovering patterns in data through redundancies present in the data. Data redundancies imply that there are data present that seem to be very similar to each other (and that have similar values for the response as well). An extreme example of data redundancy would be a dataset that contains the same data point twice. Obviously, in that case, the data matrix is ill-conditioned and the inverse does not exist. This type of redundancy, where data repeat themselves, will be called a hard redundancy here. However, for any dataset that one can possibly learn from, there have to be many soft redundancies as well. While these soft redundancies will not necessarily make the data matrix ill-conditioned, in the sense that the inverse does not exist because the determinant of the data kernel is zero, in practice this determinant will be very small. In other words, regardless of whether one proceeds with a left-hand or a right-hand inverse, if the data contain information that can be learned from, there have to be soft or hard redundancies in the data. Unfortunately, Eqs. (2) and (10) cannot then be solved for the weight vector, because the kernel will either be rank deficient (i.e., ill-conditioned) or poorly conditioned, i.e., calculating the inverse will be numerically unstable. We call this phenomenon the machine learning dilemma: (i) machine learning from data can only occur when the data contain redundancies; (ii) but, in that case, the kernel inverse in Eq. (2) or Eq. (10) is either not defined or numerically unstable because of poor conditioning. Taking the inverse of a poorly conditioned matrix is possible, but the inverse is not sharply defined and most numerical methods, with the exception of methods based on singular value decomposition (SVD), will run into numerical instabilities. The data mining dilemma seems to have some similarity with the uncertainty principle in physics, but we will not try to draw that parallel too far.

Statisticians have been aware of the data mining dilemma for a long time, and have devised various methods around this paradox. In the next sections, we will propose several methods to deal with the data mining dilemma, and obtain efficient and robust prediction models in the process.
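The dilemma is easy to demonstrate numerically. In the illustrative NumPy snippet below (names and data are hypothetical), duplicating a single data point (a hard redundancy) makes the data kernel of Eq. (11) singular, while a near-duplicate (a soft redundancy) already makes it badly conditioned.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(10, 40))                                   # 10 data points, 40 descriptors

X_hard = np.vstack([X, X[0]])                                   # hard redundancy: same point twice
X_soft = np.vstack([X, X[0] + 1e-6 * rng.normal(size=40)])      # soft redundancy: near-duplicate

for name, Xr in [("original", X), ("hard", X_hard), ("soft", X_soft)]:
    K_D = Xr @ Xr.T                                             # data kernel of Eq. (11)
    print(name, "cond(K_D) = %.3g" % np.linalg.cond(K_D))
```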
C. Regression Models Based on the Data Kernel

Reconsider the data kernel formulation of Eq. (10) for predictive modeling. There are several well-known methods for dealing with the data mining dilemma by using techniques that ensure that the kernel matrix is no longer rank deficient. Two well-known methods are principal component regression and ridge regression.[5] In order to keep the mathematical diversions to a bare minimum, only ridge regression will be discussed.

Ridge regression is a very straightforward way to ensure that the kernel matrix is positive definite (or well-conditioned) before inverting the data kernel. In ridge regression, a small positive value, λ, is added to each element on the main diagonal of the data kernel matrix. Usually the same value of λ is used for each entry. Obviously, we are no longer solving exactly the same problem. In order not to deviate too much from the original problem, the value of λ will be kept as small as we can reasonably tolerate. A good choice for λ is a small value that will make the newly defined data kernel matrix barely positive definite, so that the inverse exists and is mathematically stable. In data kernel space, the solution for the weight vector that will be used in the ridge regression prediction model now becomes:

  $\vec{w}_m = X_{mn}^T\left(X_{nm} X_{mn}^T + \lambda I\right)^{-1}\vec{y}_n$    (12)

and predictions for $\hat{y}_n$ can now be made according to:

  $\hat{y}_n = X_{nm} X_{mn}^T\left(X_{nm} X_{mn}^T + \lambda I\right)^{-1}\vec{y}_n = K_D\left(K_D + \lambda I\right)^{-1}\vec{y}_n = K_D\vec{w}_n$    (13)

where a very different weight vector, $\vec{w}_n = \left(K_D + \lambda I\right)^{-1}\vec{y}_n$, was introduced. This weight vector is applied directly to the data kernel matrix (rather than the training data matrix) and has the same dimensionality as the number of training data. To make a prediction on the test set, one proceeds in a similar way, but applies the weight vector to the data kernel for the test data, which is generally a rectangular matrix, and projects the test data on the training data according to:

  $K_D^{test} = X_{km}^{test}\left(X_{nm}^{train}\right)^T$    (14)

where it is assumed that there are $k$ data points in the test set.

II. THE KERNEL TRANSFORMATION

The kernel transformation is an elegant way to make a regression model nonlinear. The kernel transformation goes back at least to the early 1900s, when Hilbert addressed kernels in the mathematical literature. A kernel is a matrix containing similarity measures for a dataset: either between the data of the dataset itself, or with other data (e.g., support vectors[1,3]). A classical use of a kernel is the correlation matrix used for determining the principal components in principal component analysis, where the feature kernel contains linear similarity measures between (centered) attributes. In support vector machines, the kernel entries are similarity measures between data rather than between features, and these similarity measures are usually nonlinear, unlike the dot product similarity measure that we used before to define a kernel. There are many possible nonlinear similarity measures, but in order to be mathematically tractable the kernel has to satisfy certain conditions, the so-called Mercer conditions:[1]

  $K_{nn} = \begin{bmatrix} k_{11} & k_{12} & \cdots & k_{1n} \\ k_{21} & k_{22} & \cdots & k_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ k_{n1} & k_{n2} & \cdots & k_{nn} \end{bmatrix}$    (15)

The expression above introduces the general structure for the data kernel matrix, $K_{nn}$, for $n$ data. The kernel matrix is a symmetric matrix in which each entry contains a (linear or nonlinear) similarity between two data vectors. There are many different possibilities for defining similarity metrics, such as the dot product, which is a linear similarity measure, and the Radial Basis Function kernel or RBF kernel, which is a nonlinear similarity measure. The RBF kernel is the most widely used nonlinear kernel, and its entries are defined by:

  $k_{ij} \equiv e^{-\dfrac{\left\|\vec{x}_i - \vec{x}_j\right\|_2^2}{2\sigma^2}}$    (16)

Note that in the kernel definition above, the kernel entry contains the square of the Euclidean distance (or two-norm) between data points, which is a dissimilarity measure (rather than a similarity), in a negative exponential. The negative exponential also contains a free parameter, σ, which is the Parzen window width for the RBF kernel.
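A minimal NumPy sketch of the RBF kernel of Eq. (16) follows; the function name, the toy data, and the choice of σ are illustrative only.

```python
import numpy as np

def rbf_kernel(A, B, sigma):
    """RBF kernel of Eq. (16): k_ij = exp(-||a_i - b_j||^2 / (2 sigma^2))."""
    # squared Euclidean distances between every row of A and every row of B
    d2 = (np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :]
          - 2.0 * A @ B.T)
    return np.exp(-d2 / (2.0 * sigma**2))

rng = np.random.default_rng(3)
X_train = rng.normal(size=(100, 8))
X_test = rng.normal(size=(25, 8))

K_train = rbf_kernel(X_train, X_train, sigma=2.0)   # square n x n kernel, as in Eq. (15)
K_test = rbf_kernel(X_test, X_train, sigma=2.0)     # rectangular k x n test kernel, as in Eq. (14)
print(K_train.shape, K_test.shape)
```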
The proper choice for the Parzen window is usually determined by an additional tuning, also called hyper-tuning, on an external validation set. The precise choice of σ is not crucial; there usually is a relatively broad range of values for σ over which the model quality is stable.

Different learning methods distinguish themselves in the way the weights are determined. Obviously, the model in Eqs. (12)-(14) used to produce estimates or predictions for $\hat{y}$ is linear. Such a linear model has a handicap in the sense that it cannot capture inherent nonlinearities in the data. This handicap can easily be overcome by applying the kernel transformation directly as a data transformation. We will therefore not operate directly on the data, but on a nonlinear transform of the data, in this case the nonlinear data kernel. This is very similar to what is done in principal component analysis, where the data are substituted by their principal components before building a model. A similar procedure will be applied here, but rather than substituting data by their principal components, the data will be substituted by their kernel transform (either linear or nonlinear) before building a predictive model.

The kernel transformation is applied here as a data transformation in a separate pre-processing stage. We actually replace the data by a nonlinear data kernel and apply a traditional linear predictive model. Methods where a traditional linear algorithm is used on a nonlinear kernel transform of the data are introduced here as direct kernel methods. The elegance and advantage of such a direct kernel method is that the nonlinear aspects of the problem are captured entirely in the kernel and are transparent to the applied algorithm. If a linear algorithm was used before introducing the kernel transformation, the required mathematical operations remain linear. It is now clear how linear methods such as principal component regression, ridge regression, and partial least squares can be turned into nonlinear direct kernel methods by using exactly the same algorithm and code: only the data are different, and we operate on the kernel transformation of the data rather than on the data themselves. In order to make out-of-sample predictions on true test data, a similar kernel transformation needs to be applied to the test data, as shown in Eq. (14). The idea of direct kernel methods is illustrated in Fig. 2, which shows how any regression model can be applied to kernel-transformed data. One could also represent the kernel transformation in a neural network type of flow diagram: the first hidden layer would now yield the kernel-transformed data, and the weights in the first layer would be just the descriptors of the training data. The second layer contains the weights that can be calculated with a hard computing method, such as kernel ridge regression. When a radial basis function kernel is used, this type of neural network would look very similar to a radial basis function neural network, except that the weights in the second layer are calculated differently.

Fig. 2. Direct kernels as a data pre-processing step
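To illustrate the direct kernel idea, the sketch below (assuming the scikit-learn library is available; all names and data are illustrative) feeds kernel-transformed data to an off-the-shelf linear ridge regressor, which never sees the original descriptors and is left completely unchanged.

```python
import numpy as np
from sklearn.linear_model import Ridge

def rbf_kernel(A, B, sigma=2.0):
    d2 = (np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :]
          - 2.0 * A @ B.T)
    return np.exp(-d2 / (2.0 * sigma**2))

rng = np.random.default_rng(4)
X_train = rng.normal(size=(200, 6))
y_train = np.sin(X_train[:, 0]) + 0.1 * rng.normal(size=200)   # nonlinear response
X_test = rng.normal(size=(50, 6))

# Direct kernel method: the kernel transform is a pre-processing step (Fig. 2);
# the linear algorithm operates on kernel "features" instead of the raw descriptors.
K_train = rbf_kernel(X_train, X_train)          # n x n training kernel
K_test = rbf_kernel(X_test, X_train)            # k x n test kernel, as in Eq. (14)

model = Ridge(alpha=1.0).fit(K_train, y_train)  # unchanged linear algorithm
y_pred = model.predict(K_test)
print(y_pred[:5])
```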
A. Dealing with Bias: Centering the Kernel

There is still one important detail that has been overlooked so far, and that is necessary to make direct kernel methods work. Looking at the prediction equations in which the weight vector is applied to the data, as in Eq. (1), there is no constant offset term or bias. It turns out that for data that are centered this offset term is always zero and does not have to be included explicitly. In machine learning lingo the proper name for this offset term is the bias, and rather than applying Eq. (1), a more general predictive model that includes this bias can be written as:

  $\hat{y}_n = X_{nm}\vec{w}_m + b$    (17)

where $b$ is the bias term. Because we made it a practice in data mining to center the data first by Mahalanobis scaling, this bias term is zero and can be ignored.

When dealing with kernels, the situation is more complex, as they need some type of bias as well. We will give only a recipe here that works well in practice, and refer the reader to the literature for a more detailed explanation.[3, 6] Even when the data were Mahalanobis-scaled before applying the kernel transform, the kernel still needs some type of centering to be able to omit the bias term in the prediction model. A straightforward way of kernel centering is to subtract the average from each column of the training data kernel, and to store these averages for later recall, when centering the test kernel. The second step for centering the kernel is to go through the newly obtained vertically centered kernel again, this time row by row, and to subtract the row average from each horizontal row.

The kernel of the test data needs to be centered in a consistent way, following a similar procedure. In this case, the stored column centers from the kernel of the training data will be used for the vertical centering of the kernel of the test data. This vertically centered test kernel is then centered horizontally, i.e., for each row, the average of the vertically centered test kernel is calculated, and each horizontal entry of the vertically centered test kernel is substituted by that entry minus the row average. Mathematical formulations for centering square kernels are explained in the literature.[3, 6] The advantage of the kernel-centering algorithm introduced in this section (and described above in words) is that it also applies to rectangular data kernels. The flow chart for pre-processing the data, applying a kernel transform to these data, and centering the kernel for the training data, validation data, and test data is shown in Fig. 3.

Fig. 3. Data pre-processing with kernel centering
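The centering recipe described above translates directly into a few lines of NumPy. The functions below are a sketch of that recipe with illustrative names; they apply to rectangular test kernels as well as to the square training kernel.

```python
import numpy as np

def center_train_kernel(K):
    """Center a square training kernel: first column-wise, then row-wise.
    Returns the centered kernel and the column means stored for the test kernel."""
    col_means = K.mean(axis=0)
    Kc = K - col_means[None, :]              # vertical (column) centering
    Kc = Kc - Kc.mean(axis=1)[:, None]       # horizontal (row) centering
    return Kc, col_means

def center_test_kernel(K_test, col_means):
    """Center a rectangular test kernel consistently with the training kernel."""
    Kc = K_test - col_means[None, :]         # reuse the stored training column means
    return Kc - Kc.mean(axis=1)[:, None]     # then subtract each row's own average

# usage sketch with random stand-ins for the kernels of Eqs. (15) and (14)
rng = np.random.default_rng(5)
K_train = rng.normal(size=(6, 6)); K_train = K_train @ K_train.T
K_test = rng.normal(size=(3, 6))
Kc_train, mu = center_train_kernel(K_train)
Kc_test = center_test_kernel(K_test, mu)
print(np.allclose(Kc_train.mean(axis=1), 0), np.allclose(Kc_test.mean(axis=1), 0))
```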
B. Direct Kernel Ridge Regression

So far, the argument has been made that by applying the kernel transformation in Eqs. (13) and (14), many traditional linear regression models can be transformed into a nonlinear direct kernel method. The kernel transformation and kernel centering proceed as data pre-processing steps (Fig. 2). In order to make the predictive model inherently nonlinear, the radial basis function kernel will be applied, rather than the (linear) dot product kernel used in Eqs. (2) and (10). There are actually several alternate choices for the kernel,[1-3] but the RBF kernel is the most widely applied kernel. In order to overcome the machine learning dilemma, a ridge can be applied to the main diagonal of the data kernel matrix. Since the kernel transformation is applied directly to the data, before applying ridge regression, this method is called direct kernel ridge regression.

Kernel ridge regression and (direct) kernel ridge regression are not new. The roots of ridge regression can be traced back to the statistics literature.[5] Methods equivalent to kernel ridge regression were recently introduced under different names in the machine learning literature (e.g., proximal SVMs were introduced by Mangasarian et al.,[7] kernel ridge regression was introduced by Poggio et al.,[8] and Least-Squares Support Vector Machines were introduced by Suykens et al.[9-10]). In these works, kernel ridge regression is usually introduced as a regularization method that solves a convex optimization problem in a Lagrangian formulation for the dual problem that is very similar to traditional SVMs. The equivalence with ridge regression techniques then appears after a series of mathematical manipulations. By contrast, we introduced kernel ridge regression with few mathematical diversions in the context of the machine learning dilemma and direct kernel methods. For all practical purposes, kernel ridge regression is similar to support vector machines, works in the same feature space as support vector machines, and was therefore named least-squares support vector machines by Suykens et al.

Note that kernel ridge regression still requires the computation of the inverse of an $n \times n$ matrix, which can be quite large. This task is computationally demanding for large datasets, as is the case in a typical data mining problem. Since the kernel matrix now scales with the number of data squared, this method can also become prohibitive from a practical computer implementation point of view, because both memory and processing requirements can be very demanding. Krylov space-based methods[11] and conjugate gradient methods[1, 10] are relatively efficient ways to speed up the matrix inverse transformation of large matrices, where the computation time now scales as n^2, rather than n^3. The Analyze/Stripminer code[12] developed by the author applies Møller's scaled conjugate gradient method to calculate the matrix inverse.[13]

The issue of dealing with large datasets is even more profound. There are several potential solutions that will not be discussed in detail. One approach would be to use a rectangular kernel, where not all the data are used as bases to calculate the kernel, but a good subset of support vectors is estimated by chunking[1] or other techniques such as sensitivity analysis. More efficient ways of inverting large matrices are based on piece-wise inversion. Alternatively, the matrix inversion may be avoided altogether by adhering to the support vector machine formulation of kernel ridge regression, solving the dual Lagrangian optimization problem, and applying sequential minimal optimization (SMO).[16]
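Putting the pieces together, the following is a minimal sketch of direct kernel ridge regression as described in this section (RBF kernel, kernel centering, ridge on the main diagonal of the data kernel). All names are illustrative, a plain linear solve stands in for the scaled conjugate gradient solver mentioned above, and the descriptors are assumed to be Mahalanobis scaled beforehand.

```python
import numpy as np

def rbf_kernel(A, B, sigma):
    d2 = (np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :]
          - 2.0 * A @ B.T)
    return np.exp(-d2 / (2.0 * sigma**2))

def center_kernel(K, col_means=None):
    if col_means is None:
        col_means = K.mean(axis=0)                 # store training column means
    Kc = K - col_means[None, :]                    # vertical centering
    return Kc - Kc.mean(axis=1)[:, None], col_means  # then horizontal centering

rng = np.random.default_rng(7)
X_train = rng.normal(size=(150, 5))
y_train = np.sin(X_train[:, 0]) + X_train[:, 1] ** 2 + 0.1 * rng.normal(size=150)
X_test = rng.normal(size=(40, 5))

sigma, lam = 2.0, 0.05                             # kernel width and ridge (see Section III)
K_train, mu = center_kernel(rbf_kernel(X_train, X_train, sigma))
K_test, _ = center_kernel(rbf_kernel(X_test, X_train, sigma), mu)

y_mean = y_train.mean()                            # response centering, as with Mahalanobis scaling
w_n = np.linalg.solve(K_train + lam * np.eye(len(y_train)), y_train - y_mean)  # Eq. (13)
y_hat_test = K_test @ w_n + y_mean                 # prediction with the rectangular test kernel
print(y_hat_test[:5])
```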
III. HEURISTIC REGULARIZATION FOR λ

It has been shown that kernel ridge regression can be expressed as an optimization method,[10-15] where rather than minimizing the residual error on the training set, according to:

  $\min \displaystyle\sum_{i=1}^{n_{train}} \left\|\hat{y}_i - y_i\right\|_2^2$    (18)

we now minimize:

  $\min \displaystyle\sum_{i=1}^{n_{train}} \left\|\hat{y}_i - y_i\right\|_2^2 + \lambda\left\|\vec{w}\right\|_2^2$    (19)

The above equation is a form of Tikhonov regularization[14] that has been explained in detail by Cherkassky and Mulier[4] in the context of empirical versus structural risk minimization. Minimizing the norm of the weight vector is, in a sense, similar to an error penalization for prediction models with a large number of free parameters. An obvious question in this context relates to the proper choice of the regularization parameter or ridge parameter λ. In machine learning, it is common to tune the hyper-parameter λ using a tuning/validation set. This tuning procedure can be quite time consuming for large datasets, especially considering that a simultaneous tuning for the RBF kernel width must proceed in a similar manner. We therefore propose a heuristic formula for the proper choice of the ridge parameter, which has proven to be close to optimal in numerous practical cases.[3, 6] If the data were originally Mahalanobis scaled, it was found by scaling experiments that a near-optimal choice for λ is:

  $\lambda = \min\left\{1;\; 0.05\left(\dfrac{n}{200}\right)^{3/2}\right\}$    (20)

where n is the number of data in the training set. Note that in order to apply the above heuristic, the data have to be Mahalanobis scaled first. Eq. (20) was validated on a variety of standard benchmark datasets from the UCI data repository, and provided results that are nearly identical to those obtained with an optimally tuned λ on a tuning/validation set. In any case, the heuristic formula for λ should be an excellent starting choice for the tuning process for λ. The above formula also proved to be useful for the initial choice of the regularization parameter C of SVMs, where C is now taken as 1/λ.
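In code, the heuristic of Eq. (20) is a one-line function; the illustrative sketch below also prints the corresponding starting value C = 1/λ for an SVM.

```python
def heuristic_lambda(n_train):
    """Eq. (20): near-optimal ridge parameter for Mahalanobis-scaled data."""
    return min(1.0, 0.05 * (n_train / 200.0) ** 1.5)

for n in (50, 200, 1000, 10000):
    lam = heuristic_lambda(n)
    print(n, lam, 1.0 / lam)   # lambda and the suggested SVM starting value C = 1/lambda
```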
ACKNOWLEDGEMENT

The author acknowledges the National Science Foundation support of this work (IIS-9979860). The discussions with Robert Bress, Kristin Bennett, Karsten Sternickel, Boleslaw Szymanski and Seppo Ovaska were extremely helpful in preparing this paper.

REFERENCES

[1] Nello Cristianini and John Shawe-Taylor [2000] Support Vector Machines and Other Kernel-Based Learning Methods, Cambridge University Press.
[2] Vladimir Vapnik [1998] Statistical Learning Theory, John Wiley & Sons.
[3] Bernhard Schölkopf and Alexander J. Smola [2002] Learning with Kernels, MIT Press.
[4] Vladimir Cherkassky and Filip Mulier [1998] Learning from Data: Concepts, Theory, and Methods, John Wiley & Sons, Inc.
[5] A. E. Hoerl and R. W. Kennard [1970] "Ridge Regression: Biased Estimation for Non-Orthogonal Problems," Technometrics, Vol. 12, pp. 69-82.
[6] B. Schölkopf, A. Smola, and K.-R. Müller [1998] "Nonlinear Component Analysis as a Kernel Eigenvalue Problem," Neural Computation, Vol. 10, pp. 1299-1319.
[7] Glenn Fung and Olvi L. Mangasarian [2001] "Proximal Support Vector Machine Classifiers," in Proceedings KDD 2001, San Francisco, CA.
[8] Evgeniou, T., Pontil, M., and Poggio, T. [2000] "Statistical Learning Theory: A Primer," International Journal of Computer Vision, Vol. 38(1), pp. 9-13.
[9] Suykens, J. A. K. and Vandewalle, J. [1999] "Least-Squares Support Vector Machine Classifiers," Neural Processing Letters, Vol. 9(3), pp. 293-300.
[10] Suykens, J. A. K., Van Gestel, T., De Brabanter, J., De Moor, B., and Vandewalle, J. [2003] Least Squares Support Vector Machines, World Scientific Pub. Co., Singapore.
[11] Ilse C. F. Ipsen and Carl D. Meyer [1998] "The Idea behind Krylov Methods," American Mathematical Monthly, Vol. 105, pp. 889-899.
[12] The Analyze/StripMiner code is available on request for academic use, or can be downloaded from www.drugmining.com.
[13] Møller, M. F. [1993] "A Scaled Conjugate Gradient Algorithm for Fast Supervised Learning," Neural Networks, Vol. 6, pp. 525-534.
[14] A. N. Tikhonov and V. Y. Arsenin [1977] Solutions of Ill-Posed Problems, W. H. Winston, Washington, D.C.
[15] Bennett, K. P. and Embrechts, M. J. [2003] "An Optimization Perspective on Kernel Partial Least Squares Regression," Chapter 11 in Advances in Learning Theory: Methods, Models and Applications, Suykens, J. A. K. et al., Eds., NATO-ASI Series in Computer and System Sciences, IOS Press, Amsterdam, The Netherlands.
[16] Keerthi, S. S. and Shevade, S. K. [2003] "SMO Algorithm for Least Squares SVM Formulations," Neural Computation, Vol. 15, pp. 487-507.
[17] Robert H. Kewley and Mark J. Embrechts [2000] "Data Strip Mining for the Virtual Design of Pharmaceuticals with Neural Networks," IEEE Transactions on Neural Networks, Vol. 11(3), pp. 668-679.