COLLECTED UNPUBLISHED PAPERS

Thomas R. Knapp

2014

In this document I have tried to put together a number of unpublished papers I wrote during the last several years. They range in length from 2 pages to 90 pages, and in complexity from easy to fairly technical. The papers are in rough chronological order, from 2004 until 2014. I think there's something in here for everybody. Feel free to download and/or print anything that you find to be of interest. Enjoy!

Table of Contents

Investigating the relationship between two variables ..... 3
Minus vs. divided by ..... 19
Percentages: The most useful statistics ever invented ..... 26
Significance test, confidence interval, both, or neither? ..... 116
Random ..... 125
The all-purpose Kolmogorov-Smirnov test for two independent samples ..... 174
n ..... 180
Seven: A commentary regarding Cronbach's Coefficient Alpha ..... 206
Why is the one-group pretest-posttest design still used? ..... 212
N (or n) vs. N-1 (or n-1) revisited ..... 215
The independence of observations ..... 221
Standard errors ..... 227
Medians for ordinal scales should be letters, not numbers ..... 230
To pool or not to pool ..... 236
Rating, ranking, or both? ..... 242
Three ..... 248
Change ..... 251
Should we give up on causality? ..... 260
Learning statistics through finite populations and sampling without replacement ..... 267
Two-by-two tables ..... 276
Validity? Reliability? Different terminology altogether? ..... 285
p, n, and t: Ten things you need to know ..... 291
Assessing the validity and reliability of Likert scales and Visual Analog scales ..... 295
In support of null hypothesis significance testing ..... 306
The unit justifies the mean ..... 314
Alpha beta soup ..... 319
Using Pearson correlations to teach or learn statistics ..... 324
Dichotomization: How bad is it? ..... 343
Learning descriptive statistics through baseball ..... 347
p-values ..... 367
Separate variables vs. composites ..... 370
Cutoff points ..... 377
Womb mates ..... 383
Should we give up on experiments? ..... 387
Statistics without the normal distribution ..... 391

INVESTIGATING THE RELATIONSHIP BETWEEN TWO VARIABLES

Abstract

"What is the relationship between X and Y?", where X is one variable, e.g., height, and Y is another variable, e.g., weight, is one of the most common research questions in all of the sciences. But what do we mean by "the relationship between two variables"? Why do we investigate such relationships? How do we investigate them? How do we display the data? How do we summarize the data? And how do we interpret the results? In this paper I discuss various approaches that have been taken, including some of the strengths and weaknesses of each.

The ubiquitous research question "What is the relationship between X and Y?" is, and always has been, a question of paramount interest to virtually all researchers. X and Y might be different forms of a measuring instrument. X might be a demographic variable such as sex or age, and Y might be a socioeconomic variable such as education or income. X might be an experimentally manipulable variable such as drug dosage and Y might be an outcome variable such as survival. The list goes on and on.

But why are researchers interested in that question? There are at least three principal reasons:

1. Substitution. If there is a strong relationship between X and Y, X might be substituted for Y, particularly if X is less expensive in terms of money, time, etc. The first example in the preceding paragraph is a good illustration of this reason; X might be a measurement of height taken with a tape measure and Y might be a measurement of height taken with an electronic stadiometer.

2. Prediction. If there is a strong relationship between X and Y, X might be used to predict Y. An equation for predicting income (Y) from age (X) might be helpful in understanding the trajectory in personal income across the age span.

3. Causation. If there is a strong relationship between X and Y, and other variables are directly or statistically controlled, there might be a solid basis for claiming, for example, that an increase in drug dosage causes an increase in life expectancy.

What does it mean?

In a recent internet posting, Donald Macnaughton (2002) summarized the discussion that he had with Jan deLeeuw, Herman Rubin, and Robert Frick regarding seven definitions of the term "relationship between variables".
The seven definitions differed in various technical respects. My personal preference is for their #6: There is a relationship between the variables X and Y if, for at least one pair of values X' and X" of X, E(Y|X') ~= E(Y|X"), where E is the expected-value operator, the vertical line means "given", and ~= means "is not equal to". (It indicates that X varies, Y varies, and all of the X's are not associated with the same Y.) Research design In order to address research questions of the "What is the relationship between X and Y?" type, a study must be designed in a way that will be appropriate for providing the desired information. For relationship questions of a causal nature a double-blind true experimental design, with simple random sampling of a population and simple random assignment to treatment conditions, might be optimal. For questions concerned solely with prediction, a survey based upon a stratified random sampling design is often employed. And if the objective is to investigate the extent to which X might be substituted for Y, X must be "parallel" to Y (a priori comparably valid, with measurements on the same scale so that degree of agreement as well as degree of association can be determined). Displaying the data For small samples the raw data can be listed in their entirety in three columns: one for some sort of identifier; one for the obtained values for X; and one for the corresponding obtained values for Y. If X and Y are both continuous variables, a scatterplot of Y against X should be used in addition to or instead of that three-column list. [An interesting alternative to the scatterplot is the "pair-link" diagram used by Stanley (1964) and by Campbell and Kenny (1999) to connect corresponding X and Y scores.] If X is a categorical independent variable, e.g., type of treatment to which randomly assigned in a true experiment, and Y is a continuous dependent variable, a scatterplot is also appropriate, with values of X on the horizontal axis and with values of Y on the vertical axis. For large samples a list of the raw data would usually be unmanageable, and the scatterplot might be difficult to display with even the most sophisticated statistical software because of coincident or approximately coincident data points. (See, for example, Cleveland, 1995; Wilkinson, 2001.) If X and Y are both naturally continuous and the sample is large, some precision might have to be sacrificed by displaying the data according to intervals of X and Y in a two-way frequency contingency table (cross-tabulation). Such tables are also the method of choice for categorical variables for large samples. How small is small and how large is large? That decision must be made by each individual researcher. If a list of the raw data gets to be too cumbersome, if the scatterplot gets too cluttered, or if cost considerations such as the amount of space that can be devoted to displaying the data come into play, the sample can be considered large. Summarizing the data For continuous variables it is conventional to compute the means and standard deviations of X and Y separately, the Pearson product-moment correlation coefficient between X and Y, and the corresponding regression equation(s), if the objective is to determine the direction and the magnitude of the degree of linear relationship between the two variables. 
Other statistics such as the medians and the ranges of X and Y, the residuals (the differences between the actual values of Y and the values of Y on the regression line for the various values of X), and the like, might also be of interest. If a curvilinear relationship is of equal or greater concern, the fitting of a quadratic or exponential function might be considered.

[Note: There are several ways to calculate Pearson's r, all of which are mathematically equivalent. Rodgers & Nicewander (1988) provided thirteen of them. There are actually more than thirteen. I derived a rather strange-looking one several years prior to that (Knapp, 1979) in an article on estimating covariances using the incidence sampling technique developed by Sirotnik & Wellington (1974).]

For categorical variables there is a wide variety of choices. If X and Y are both ordinal variables with a small number of categories (e.g., for Likert-type scales), Goodman and Kruskal's (1979) gamma is an appropriate statistic. If the data are already in the form of ranks or easily convertible into ranks, one or more rank-correlation coefficients, e.g., Spearman's rho or Kendall's tau, might be preferable for summarizing the direction and the strength of the relationship between the two variables. If X and Y are both nominal variables, indexes such as the phi coefficient (which is mathematically equivalent to Pearson's r for dichotomous variables) or Goodman and Kruskal's (1979) lambda might be equally defensible alternatives. For more on displaying data in contingency tables and for the summarization of such data, see Simon (1978) and Knapp (1999).

Interpreting the data

Determining whether or not a relationship is strong or weak, statistically significant or not, etc. is part art and part science. If the data are for a full population or for a "convenience" sample, no matter what size it may be, the interpretation should be restricted to an "eyeballing" of the scatterplot or contingency table, and the descriptive (summary) statistics. For a probability sample, e.g., a simple random sample or a stratified random sample, statistical significance tests and/or confidence intervals are usually required for proper interpretation of the findings, as far as any inference from sample to population is concerned. But sample size must be seriously taken into account for those procedures or anomalous results could arise, such as a statistically significant relationship that is substantively inconsequential. (Careful attention to choice of sample size in the design phase of the study should alleviate most if not all of such problems.)

An example

The following example has been analyzed and scrutinized by many researchers. It is due to Brad Efron and his colleagues (see, for example, Diaconis & Efron, 1983). [LSAT = Law School Aptitude Test; GPA = Grade Point Average]

Data display(s)

Law School    Average LSAT score    Average Undergraduate GPA
     1              576                     3.39
     2              635                     3.30
     3              558                     2.81
     4              578                     3.03
     5              666                     3.44
     6              580                     3.07
     7              555                     3.00
     8              661                     3.43
     9              651                     3.36
    10              605                     3.13
    11              653                     3.12
    12              575                     2.74
    13              545                     2.76
    14              572                     2.88
    15              594                     2.96

[Scatterplot of average LSAT (vertical axis, from about 560 to 680) against average GPA (horizontal axis, from 2.70 to 3.45), with a "2" plotted where two points nearly coincide.]

[The 2 indicates there are two data points (for law schools #5 and #8) that are very close to one another in the (X,Y) space. It doesn't clutter up the scatterplot very much, however. Note: Efron and his colleagues always plotted GPA against LSAT. I have chosen to plot LSAT against GPA. Although they were interested only in correlation and not regression, if you cared about predicting one from the other it would make more sense to have X = GPA and Y = LSAT, wouldn't it?]

Summary statistics

        N     MEAN     STDEV
lsat   15    600.3     41.8
gpa    15    3.095     0.244

Correlation between lsat and gpa = 0.776

The regression equation is: lsat = 188 + 133 gpa (standard error of estimate = 27.34)

Unusual Observations
Obs.    gpa     lsat      Fit    Stdev.Fit   Residual   St.Resid
  1    3.39    576.00   639.62     11.33      -63.62     -2.56R
R denotes an obs. with a large st. resid.

Interpretation

The scatterplot looks linear and the correlation is rather high (it would be even higher without the outlier). Prediction of average LSAT from average GPA should be generally good, but could be off by about 50 points or so (approximately two standard errors of estimate). If this sample of 15 law schools were to be "regarded" as a simple random sample of all law schools, a statistical inference might be warranted. The correlation coefficient of .776 for n = 15 is statistically significant at the .05 level, using Fisher's r-to-z transformation; and the 95% confidence interval for the population correlation extends from .437 to .922 on the r scale (see Knapp, Noblitt, & Viragoontavan, 2000), so we can be reasonably assured that in the population of law schools there is a non-zero linear relationship between average LSAT and average GPA.

Complications

Although that example appears to be simple and straightforward, it is actually rather complicated, as are many other two-variable examples. Here are some of the complications and some of the ways to cope with them:

1. Scaling. It could be argued that neither LSAT nor GPA is a continuous, interval-level variable. The LSAT score on the 200-800 scale is usually determined by means of a non-linear normalized transformation of a raw score that might have been corrected for guessing, using the formula "number of right answers minus some fraction of the number of wrong answers". GPA is a weighted heterogeneous amalgam of course grades and credit hours where an A is arbitrarily given 4 points, a B is given 3 points, etc. It might be advisable, therefore, to rank-order both variables and determine the rank correlation between the corresponding rankings. Spearman's rho for the ranks is .796 (a bit higher than the Pearson correlation between the scores).

2. Weighting. Each of the 15 law schools is given a weight of 1 in the data and in the scatterplot. It might be preferable to assign differential weights to the schools in accordance with the number of observations that contribute to their averages, thus giving greater weight to the larger schools. Korn and Graubard (1998) discuss some very creative ways to display weighted observations in a scatterplot.

3. Unit of analysis. The sample is a sample of schools, not students. The relationship between two variables such as LSAT and GPA that is usually of principal interest is the relationship that would hold for individual persons, not aggregates of persons, and even there one might have to choose whether to investigate the relationship within school or across schools. The unit-of-analysis problem has been studied for many years (see, for example, Robinson, 1950 and Knapp, 1977), and has been the subject of several books and articles, more recently under the heading "hierarchical linear modeling" rather than "unit of analysis" (see, for example, Raudenbush & Bryk, 2002 and Osborne, 2000).
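For readers who would like to verify those numbers, here is a minimal sketch in plain Python (the language and the layout are my choices, not part of the original analysis). It recomputes the Pearson correlation, the regression of average LSAT on average GPA, and the Fisher r-to-z confidence interval from the fifteen pairs listed above:

import math

lsat = [576, 635, 558, 578, 666, 580, 555, 661, 651, 605, 653, 575, 545, 572, 594]
gpa = [3.39, 3.30, 2.81, 3.03, 3.44, 3.07, 3.00, 3.43, 3.36, 3.13, 3.12, 2.74, 2.76, 2.88, 2.96]

n = len(lsat)
mean_x = sum(gpa) / n
mean_y = sum(lsat) / n
sxx = sum((x - mean_x) ** 2 for x in gpa)
syy = sum((y - mean_y) ** 2 for y in lsat)
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(gpa, lsat))

r = sxy / math.sqrt(sxx * syy)        # Pearson correlation between the two averages
slope = sxy / sxx                     # regression of LSAT (Y) on GPA (X)
intercept = mean_y - slope * mean_x

# Fisher r-to-z 95% confidence interval for the population correlation
z = math.atanh(r)
se = 1 / math.sqrt(n - 3)
lower = math.tanh(z - 1.96 * se)
upper = math.tanh(z + 1.96 * se)

print(round(r, 3), round(intercept), round(slope), round(lower, 3), round(upper, 3))
# Should agree, to rounding, with the r of .776, the equation lsat = 188 + 133 gpa,
# and the interval of approximately .437 to .922 reported above.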
4. Statistical assumptions. There is no indication that those 15 schools were drawn at random from the population of all law schools, and even if they were, a finite population correction should be applied to the formulas for the standard errors used in hypothesis testing or interval estimation, since the population at the time (the data were gathered in 1973) consisted of only 82 schools, and 15 schools takes too much of a "bite" out of the 82. Fisher's r-to-z transformation only "works" for a bivariate normal population distribution. Although the scatterplot for the 15 sampled schools looks approximately bivariate normal, that may not be the case in the population, so a conservative approach to the inference problem would involve a choice of one or more of the following approaches:

a. A test of statistical significance and/or an interval estimate for the rank correlation. Like the correlation of .776 between the scores, the rank correlation of .796 is also statistically significant at the .05 level, but the confidence interval for the population rank correlation is shifted to the right and is slightly tighter.

b. Application of the jackknife to the 15 bivariate observations. Knapp et al. (2000) did that for the "leave one out" jackknife and estimated the 95% confidence interval to be from approximately .50 to approximately .99.

c. Application of the bootstrap to those observations. Knapp et al. (2000) did that also [as many other researchers, including Diaconis & Efron, 1983, had done], and they found that the middle 95% of the bootstrapped correlations ranged from approximately .25 to approximately .99.

d. A Monte Carlo simulation study. Various population distributions could be sampled, the resulting estimates of the sampling error for samples of size 15 from those populations could be determined, and the corresponding significance tests and/or confidence intervals carried out. One population distribution that might be considered is the bivariate exponential.

5. Attenuation. The correlation coefficient of .776 is the correlation between obtained average LSAT score and obtained average GPA at those 15 schools. Should the relationship of interest be an estimate of the correlation between the corresponding true scores rather than the correlation between the obtained scores? It follows from classical measurement theory that the mean true score is equal to the mean obtained score, so this should not be a problem with the given data, but if the data were disaggregated to the individual level a correction for attenuation (unreliability) may be called for. (See, for example, Muchinsky, 1996 and Raju & Brand, 2003; the latter article provides a significance test for attenuation-corrected correlations.) It would be relatively straightforward for LSAT scores, since the developers of that test must have some evidence regarding the reliability of the instrument. But GPA is a different story. Has anyone ever investigated the reliability of GPA? What kind of reliability coefficient would be appropriate? Wouldn't it be necessary to know something about the reliability of the classroom tests and the subsequent grades that "fed into" the GPA?

6. Restriction of range. The mere fact that the data are average scores presents a restriction-of-range problem, since average scores vary less from one another than individual test scores do. There is also undoubtedly an additional restriction because students who apply to law schools and get admitted have (or should have) higher LSAT scores and higher GPAs than students in general.
A correction for restriction of range to the correlation of .776 might be warranted (the end result of which should be an even higher correlation), and a significance test is also available for range-corrected correlations (Raju & Brand, 2003).

7. Association vs. agreement. Reference was made above to the matter of association and agreement for parallel forms of measuring instruments. X and Y could be perfectly correlated (for example, X = 1,2,3,4,5, and Y = 10,20,30,40,50, respectively) but not agree very well in any absolute sense. That is irrelevant for the law school example, since LSAT and GPA are not on the same scale, but for many variables it is the matter of agreement, in addition to the matter of association, that is of principal concern (see, for example, Robinson, 1957 and Engstrom, 1988).

8. Interclass vs. intraclass. If X and Y are on the same scale, Fisher's (1958) intraclass correlation coefficient may be more appropriate than Pearson's product-moment correlation coefficient (which Fisher called an interclass correlation). Again this is not relevant for the law school example, but for some applications, e.g., an investigation of the relationship between the heights of twin-pairs, Pearson's r would actually be indeterminate because we wouldn't know which height to put in which column for a given twin-pair.

9. Precision. How many significant digits or decimal places are warranted when relationship statistics such as Pearson r's are reported? Likewise for the p-values or confidence coefficients that are associated with statistical inferences regarding the corresponding population parameters. For the law school example I reported an r of .776, a p of (less than) .05, and a 95% confidence interval. Should I have been more precise and said that r = .7764 or less precise and said that r = .78? The p that "goes with" an r of .776 is actually closer to .01 than to .05. And would anybody care about a confidence coefficient of, say, 91.3%?

10. Covariance vs. correlation. Previous reference was made to the tradition of calculating Pearson's r for two continuous variables whose linear relationship is of concern. In certain situations it might be preferable to calculate the scale-bound covariance between X and Y rather than, or in addition to, the scale-free correlation. In structural equation modeling it is the covariances, not the correlations, that get analyzed. And in hierarchical linear modeling the between-aggregate and within-aggregate covariances sum to the total covariance, but the between-aggregate and within-aggregate correlations do not (see Knapp, 1977).

Another example

The following table (from Agresti, 1990) summarizes responses of 91 married couples to a questionnaire item. This example has also been analyzed and scrutinized by many people [perhaps because of their prurient interest?]. The item: "Sex is fun for me and my partner": (a) never or occasionally, (b) fairly often, (c) very often, (d) almost always.

                                 Wife's Rating
Husband's         Never     Fairly    Very      Almost
Rating            fun       often     often     always    Total
Never fun           7         7         2         3         19
Fairly often        2         8         3         7         20
Very often          1         5         4         9         19
Almost always       2         8         9        14         33
Total              12        28        18        33         91

What is the relationship between Wife's Rating (X) and Husband's Rating (Y)? There are many ways of analyzing these data in order to provide an answer to that question. In decreasing order of my preference, they are:

1. Percent agreement (strict).
The number of agreements is equal to 7+8+4+14 = 33 (the sum of the frequencies in the principal diagonal) which, when divided by 91 and multiplied by 100, gives a percent agreement of 36.3%. If this sample of 91 couples were a simple random sample from some large population of couples, a confidence interval for the corresponding population percentage could be constructed. (Using the normal approximation to the binomial, the 95% confidence interval for percent agreement in the population would extend from 26.4% to 46.2%.) In any event, the relationship does not appear to be very strong. 2. Percent partial agreement (lenient). If "agreement" were to be defined as "not off by more than one category", the percent agreement is those 33 + the sums of the frequencies in the adjacent parallel diagonals, i.e., 7+3+9 = 19 and 2+5+9 = 16, for a total of 68 "agreements" out of 91 possibilities, or 74.7%. 3. Goodman and Kruskal's (1979) gamma. The two variables are both ordinal (percent agreement does not take advantage of that ordinality, but it is otherwise very simple and very attractive) and the number of categories is small (4), so by applying any one of the mathematically-equivalent formulas for gamma, we have gamma = .047. 4. Goodman and Kruskal's (1979) lambda. Not as good a choice as Goodman's gamma, because it does not reflect the ordinality of the two variables. For these data lambda = .159. 5. Somers' (1962) D. Somers' D is to be preferred to gamma if the two variables take on independent and dependent roles (for example, if we would like to predict husband's rating from wife's rating, or wife's rating from husband's rating). That does not appear to be the case here, but Somers' D for these data is .005. 6. Cohen's (1960) kappa. This is one of my least favorite statistics, since it incorporates a "correction" to percent agreement for chance agreements and I don't believe that people ever make chance ratings. But it is extremely popular in certain disciplines (e.g., psychology) and some people would argue that it would be appropriate for the wife/husband data, for which it is .129 (according to the graphpad.com website calculator and Michael Friendly's website). ["Weighted kappa", a statistic that reflects partial agreement, also "corrected for chance", is .237.] The sampling distribution of Cohen's kappa is a mess (see, for example, Fleiss, Cohen, & Everitt, 1969), but the graphpad.com calculator yielded a 95% confidence interval of -.006 to .264 for the population unweighted kappa. 7. Krippendorff's (1980) alpha . This statistic is alleged to be an improvement over Cohen's kappa, since it also "corrects" for chance agreements and is a function of both agreements and disagreements. For these data it is .130. (See the webpage entitled "Computing Krippendorff's Alpha-Reliability".) "Alpha" is a bad choice for the name of this statistic, since it can be easily confused with Cronbach's (1951) alpha. 8. Contingency coefficient. Not recommended; it also does not reflect ordinality and its range is not from the usually-desirable -1 to +1 or 0 to +1. 9. Rank correlation. Not recommended; there are too many "ties". 10. Pearson's r. Also not recommended; it would treat the two variables as interval-level, which they definitely are not. [I would guess, however, that over half of you would have done just that!] 
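For anyone who wants to check some of those figures, here is a small sketch in plain Python (my own illustration, not part of the original paper). It recomputes the strict percent agreement, the lenient "not off by more than one category" percentage, and Cohen's unweighted kappa directly from the 4 x 4 table:

table = [
    [7, 7, 2, 3],    # husband: never fun
    [2, 8, 3, 7],    # husband: fairly often
    [1, 5, 4, 9],    # husband: very often
    [2, 8, 9, 14],   # husband: almost always
]
n = sum(sum(row) for row in table)                      # 91 couples

strict = sum(table[i][i] for i in range(4))             # frequencies on the main diagonal
lenient = sum(table[i][j] for i in range(4) for j in range(4) if abs(i - j) <= 1)

row_totals = [sum(row) for row in table]
col_totals = [sum(table[i][j] for i in range(4)) for j in range(4)]
po = strict / n                                                       # observed agreement
pe = sum(row_totals[k] * col_totals[k] for k in range(4)) / n ** 2    # chance-expected agreement
kappa = (po - pe) / (1 - pe)

print(round(100 * strict / n, 1), round(100 * lenient / n, 1), round(kappa, 3))
# expected output: 36.3 74.7 0.129

The "correction for chance" in kappa uses only the row and column totals to get the expected agreement, which is why kappa comes out so much lower than the lenient percentage.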
A third (and final) example

One of the most common research contexts is a true experiment in which each subject is randomly assigned to an experimental group or to a control group and the two groups are compared on some continuous variable in order to determine whether or not, or the extent to which, the treatment as operationalized in the experimental condition has had an effect. Here X is a dichotomous (1, 0) nominal independent variable and Y is a continuous dependent variable. An example of such a context was provided by Dretzke (2001) in her book on the use of Excel for statistical analyses:

"A researcher wanted to find out if dreaming increased as a result of taking three milligrams of Melatonin before going to sleep each night. Nineteen people were randomly assigned to one of two treatment conditions: Melatonin (n = 10) and Placebo (n = 9). Each morning the number of dreams recalled were reported and tallied over a one-week period." (p. 152)

Here are the data:

Melatonin (X = 1)    Placebo (X = 0)
       21                  12
       18                  14
       14                  10
       20                   8
       11                  16
       19                   5
        8                   3
       12                   9
       13                  11
       15

1. Displaying the data. The listing of the 19 "scores" on the dependent variable in two columns (without identifiers), with 10 of the scores under the "Melatonin" column (experimental group) and 9 of the scores under the "Placebo" column (control group) seems at least necessary if not sufficient. It might be advisable to re-order the scores in each column from high to low or from low to high, however, and/or display the data graphically, as follows:

[Dotplot of number of dreams recalled (vertical axis, from about 3 to 21) against type of treatment (horizontal axis, with Placebo = 0.00 and Melatonin = 1.00).]

2. Descriptive statistics. Most data analysts would be interested in the mean and standard deviation of the dependent variable for each group, and the difference between the two means. (If there were an outlier or two, the medians and the ranges might be preferred instead of, or in addition to, the means and standard deviations.) And since the relationship between the two variables (X = type of treatment and Y = number of dreams recalled) is of primary concern, the point-biserial correlation between X and Y (another special case of Pearson's r) should also be calculated. For the given data those summary statistics are:

Melatonin: mean = 15.1; standard deviation = 4.3 (median = 14.5; range = 13)
Placebo: mean = 9.8; standard deviation = 4.1 (median = 10.0; range = 13)
Difference between the means = 15.1 - 9.8 = 5.3
Correlation between X (type of treatment) and Y (number of dreams recalled) = .56 (with the Melatonin group coded 1 and the Placebo group coded 0)

3. Inferential statistics. Almost everyone would carry out a two independent samples one-tailed t test. That would be inadvisable, however, for a number of reasons. First of all, although the subjects were randomly assigned to treatments there is no indication that they were randomly sampled. [See the opposing views of Levin, 1993 and Shaver, 1993 regarding this distinction. Random sampling, not random assignment, is one of the assumptions underlying the t test.] Secondly, the t test assumes that in the populations from which the observations were drawn the distributions are normal and homoscedastic (equal spread). Since there is apparently only one population that has been sampled (and that not randomly sampled) and its distribution is of unknown shape, that is another strike against the t test. (The sample observations actually look like they've been sampled from rectangular, i.e., uniform, distributions and the two samples have very similar variability, but that doesn't really matter; it's what's going on in the population that counts.)

The appropriate inferential analysis is a randomization test (sometimes called a permutation test)--see, for example, Edgington (1995)--where the way the scores (number of dreams recalled) happened to fall into the two groups subsequent to the particular randomization employed is compared to all of the possible ways that they could have fallen, under the null hypothesis that the treatments are equally effective and the Melatonin group would always consist of 10 people and the Placebo group would always consist of 9 people. [If the null hypothesis were perfectly true, each person would recall the same number of dreams no matter which treatment he/she were assigned to.] The number of possible ways is equal to the number of combinations of 19 things taken 10 at a time (for the number of different allocations to the experimental group; the other 9 would automatically comprise the control group), which is equal to 92378, a very large number indeed. As an example, one of the possible ways would result in the same data as above but with the 21 and the 3 "switched". For that case the Melatonin mean would be 13.3 and the Placebo mean would be 11.8, with a corresponding point biserial correlation of .16 between type of treatment and number of dreams recalled.

I asked John Pezzullo [be sure to visit his website some time, particularly the Interactive Stats section] to run the one-tailed randomization test for me. He very graciously did so and obtained a p-value of .008 (the difference between the two means is statistically significant at the .01 level) and so it looks like the Melatonin was effective (if it's good to be able to recall more dreams than fewer!).
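Because the number of possible allocations (92,378) is small by modern computing standards, the randomization test can be carried out exactly. Here is a sketch in plain Python (mine, not John Pezzullo's program) that enumerates every allocation of 10 of the 19 counts to the Melatonin group and computes the one-tailed p-value, which should come out close to the reported .008:

from itertools import combinations

melatonin = [21, 18, 14, 20, 11, 19, 8, 12, 13, 15]
placebo = [12, 14, 10, 8, 16, 5, 3, 9, 11]
scores = melatonin + placebo
total = sum(scores)

observed_diff = sum(melatonin) / 10 - sum(placebo) / 9   # about 5.3

at_least_as_large = 0
arrangements = 0
for treatment_group in combinations(scores, 10):   # each of the 19 people is a distinct slot
    arrangements += 1
    treat_sum = sum(treatment_group)
    diff = treat_sum / 10 - (total - treat_sum) / 9
    if diff >= observed_diff - 1e-9:                # one-tailed: at least as extreme as observed
        at_least_as_large += 1

print(arrangements, at_least_as_large / arrangements)
# 92378 arrangements; the proportion is the exact one-tailed p-value.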
Difficult cases

The previous discussion makes no mention of situations where, say, X is a multi-categoried ordinal variable and Y is a ratio-level variable. My advice: Try if at all possible to avoid such situations, but if you are unable to do so consult your favorite statistician.

The bottom line(s)

If you are seriously interested in investigating the relationship between two variables, you should attend to the following matters, in the order in which they are listed:

1. Phrase the research question in as clear and concise a manner as possible. Example: "What is the relationship between height and weight?" reads better than "What is the relationship between how tall you are and how much you weigh?"

2. Always start with design, then instrumentation, then analysis. For the height/weight research question, some sort of survey design is called for, with valid and reliable measurement of both variables, and employing one or more of the statistical analyses discussed above. A stratified random sampling design (stratifying on sex, because sex is a moderator of the relationship between height and weight), using an electronic stadiometer to measure height and an electronic balance beam scale to measure weight, and carrying out conventional linear regression analyses within sex would appear to be optimal.

3. Interpret the results accordingly.

References

Agresti, A. (1990). Categorical data analysis. New York: Wiley. Campbell, D.T., & Kenny, D.A. (1999). A primer on regression artifacts. New York: Guilford. Cohen, J. (1960). A coefficient of agreement for nominal scales.
Educational and Psychological Measurement, 20, 37-46. Cleveland, W.S. (1995). Visualizing data. Summit, NJ: Hobart. Cronbach, L.J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16, 297-334. Diaconis, P., & Efron, B. (1983). Computer-intensive methods in statistics. Scientific American, 248 (5), 116-130. Dretzke, B.J. (2001). Statistics with Microsoft Excel (2nd ed.). Saddle River, NJ: Prentice-Hall. Edgington, E.S. (1995). Randomization tests (3rd. ed.). New York: Dekker. Engstrom, J.L. (1988). Assessment of the reliability of physical measures. Research in Nursing & Health, 11, 383-389. Fisher, R.A. (1958). Statistical methods for research workers (13th ed.). New York: Hafner. Fleiss, J.L., Cohen, J., & Everitt, B.S. (1969). Large sample standard errors of kappa and weighted kappa. Psychological Bulletin, 72, 323-327. Goodman, L. A. & Kruskal, W. H. (1979). Measures of association for cross classifications. New York: Springer-Verlag. Knapp, T.R. (1977). The unit-of-analysis problem in applications of simple correlation analysis to educational research. Journal of Educational Statistics, 2, 171-196. Knapp, T.R. (1979). Using incidence sampling to estimate covariances. Journal of Educational Statistics, 4, 41-58. Knapp, T.R. (1999). The analysis of the data for two-way contingency tables. Research in Nursing & Health, 22, 263-268. Knapp, T.R., Noblitt, G.L., & Viragoontavan, S. (2000). Traditional vs. "resampling" approaches to statistical inferences regarding correlation coefficients. Mid-Western Educational Researcher, 13 (2), 34-36. Korn, E.L, & Graubard, B.I. (1998). Scatterplots with survey data. The American Statistician, 52 (1), 58-69. Krippendorff, K. (1980). Content analysis: An introduction to its methodology. Thousand Oaks, CA: Sage. Levin, J.R. (1993). Statistical significance testing from three perspectives. Journal of Experimental Education, 61, 378-382. Macnaughton, D. (January 28, 2002). Definition of "Relationship Between Variables". Internet posting. Muchinsky, P.M. (1996). The correction for attenuation. Educational and Psychological Measurement, 56, 63-75. Osborne, J. W. (2000). Advantages of hierarchical linear modeling. Practical Assessment, Research, & Evaluation, 7 (1). Available online. Raju, N.S., & Brand, P.A. (2003). Determining the significance of correlations corrected for unreliability and range restriction. Applied Psychological Measurement, 27 (1), 52-71. Raudenbush, S. W., & Bryk, A.S. (2002). Hierarchical linear models: Applications and data analysis methods. (2nd. ed.) Newbury Park, CA: Sage. Robinson, W.S. (1950). Ecological correlations and the behavior of individuals. American Sociological Review, 15, 351-357. Robinson, W.S. (1957). The statistical measurement of agreement. American Sociological Review, 22, 17-25. Rodgers, J.L., & Nicewander, W.A. (1988). Thirteen ways to look at the correlation coefficient. The American Statistician, 42 (1), 59-66. Shaver, J.P. (1993). What statistical significance is, and what it is not. Journal of Experimental Education, 61, 293-316. Simon, G.A. (1978). Efficacies of measures of association for ordinal contingency tables. Journal of the American Statistical Association, 73 (363), 545-551. Sirotnik, K.A., & Wellington, R. (1974). Incidence sampling: An integrated theory for "matrix sampling". Journal of Educational Measurement, 14, 343-399. Somers, R.H. (1962). A new asymmetric measure of association for ordinal variables. American Sociological Review, 27, 799-811. Stanley, J.C. 
(1964). Measurement in today's schools (4th ed.). New York: Prentice-Hall. Wilkinson, L. (2001). Presentation graphics. In N.J. Smelser & P.B. Baltes (Eds.), International encyclopedia of the social and behavioral sciences. Amsterdam: Elsevier.

MINUS VS. DIVIDED BY

You would like to compare two quantities A and B. Do you find the difference between the quantities or their quotient? If their difference, which gets subtracted from which? If their quotient, which quantity goes in the numerator and which goes in the denominator? The research literature is somewhat silent regarding all of those questions. (The exception is the fine article by Finney, 2007.) What follows is an attempt to at least partially rectify that situation by providing some considerations regarding when to focus on A-B, B-A, A/B, or B/A.

Examples

1. You are interested in the heights of John Doe (70 inches) and his son, Joe Doe (35 inches). Is it the positive difference 70 - 35 = 35, the negative difference 35 - 70 = -35, the quotient 70/35 = 2, or the quotient 35/70 = 1/2 = .5 that is of primary concern?

2. You are interested in the percentage of smokers in a particular population who got lung cancer (10%) and the percentage of non-smokers in that population who got lung cancer (2%). Is it the attributable risk 10% - 2% = 8%, the corresponding "attributable risk" 2% - 10% = -8%, the relative risk 10%/2% = 5, or the corresponding relative risk 2%/10% = 1/5 = .2 that you should care about?

3. You are interested in the probability of drawing a spade from an ordinary deck of cards and the probability of not drawing a spade. Is it 13/52 - 39/52 = -26/52 = -1/2 = -.5, 39/52 - 13/52 = 26/52 = 1/2 = .5, (13/52)/(39/52) = 1/3, or (39/52)/(13/52) = 3 that best operationalizes a comparison between those two probabilities?

4. You are interested in the change from pretest to posttest of an experimental group that had a mean of 20 on the pretest and a mean of 30 on the posttest, as opposed to a control group that had a mean of 20 on the pretest and a mean of 10 on the posttest. Which numbers should you compare, and how should you compare them?

Considerations for those examples

1. The negative difference isn't very useful, other than as an indication of how much "catching up" Joe needs to do. As far as the other three alternatives are concerned, it all depends upon what you want to say after you make the comparison. Do you want to say something like "John is 35 inches taller than Joe"? "John is twice as tall as Joe"? "Joe is half as tall as John"?

2. Again, the negative attributable risk is not very useful. The positive attributable risk is most natural ("Is there a difference in the prevalence of lung cancer between smokers and non-smokers?"). The relative risk (or an approximation to the relative risk called an "odds ratio") is the overwhelming choice of epidemiologists. They also favor the reporting of relative risks that are greater than 1 ("Smokers are five times as likely to get lung cancer") rather than those that are less than 1 ("Non-smokers are one-fifth as likely to get lung cancer"). One difficulty with relative risks is that if the quantity that goes in the denominator is zero you have a serious problem, since you can't divide by zero. (A common but unsatisfactory solution to that problem is to call such a quotient "infinity".) Another difficulty with relative risks is that no distinction is made between a relative risk for small risks such as 2% and 1%, and for large risks such as 60% and 30%.
3. Both of the difference comparisons would be inappropriate, since it is a bit strange to subtract two things that are actually the complements of one another (the probability of something plus the probability of not-that-something is always equal to 1). So it comes down to whether you want to talk about the "odds in favor of" getting a spade ("1 to 3") or the "odds against" getting a spade ("3 to 1"). The latter is much more natural.

4. This very common comparison can get complicated. You probably don't want to calculate the pretest-to-posttest quotient or the posttest-to-pretest quotient for each of the two groups, for two reasons: (1) as indicated above, one or more of those averages might be equal to zero (because of how the "test" is scored); and (2) the scores often do not arise from a ratio scale. That leaves differences. But what differences? It would seem best to subtract the mean pretest score from the mean posttest score for each group (30 - 20 = 10 for the experimental group and 10 - 20 = -10 for the control group) and then to subtract those two differences from one another (10 - [-10] = 20, i.e., a "swing" of 20 points), and that is what is usually done.

What some of the literature has to say

I mentioned above that the research literature is "somewhat silent" regarding the choice between differences and quotients. But there are a few very good sources, in addition to Finney (2007), regarding the advantages and disadvantages of each. The earliest reference I could find is an article in Volume 1, Number 1 of the Peabody Journal of Education by Sherrod (1923). In that article he summarized a number of quotients that had just been developed, including the familiar mental age divided by chronological age, and made a couple of brief comments regarding differences, but did not provide any arguments concerning preferences for one vs. the other. One of the best pieces (in my opinion) is an article that appeared recently on the American College of Physicians' website. The author pointed out that although differences and quotients of percentages are calculated from the same data, differences often "feel" smaller than quotients. Another relevant source is the article that H.P. Tam and I wrote a few years ago (Knapp & Tam, 1997) concerning proportions, differences between proportions, and quotients of proportions. (A proportion is just like a percentage, with the decimal point moved two places to the left.)

There are also a few good substantive studies in which choices were made, and the investigators defended such choices. For example, Kruger and Nesse (2004) preferred the male-to-female mortality ratio (quotient) to the difference between male and female mortality numbers. That ratio is methodologically similar to sex ratio at birth. It is reasonably well known that male births are more common than female births in just about all cultures. (In the United States the sex ratio at birth is about 1.05, i.e., there are approximately five percent more male births than female births, on the average.) The Global Youth Tobacco Survey Collaborating Group (2003) also chose the male-to-female ratio for comparing the tobacco use of boys and girls in the 13-15 years of age range. In an interesting "twist", Baron, Neiderhiser, and Gandy (1997) asked samples of Blacks and samples of Whites to estimate what the Black-to-White ratio was for deaths from various causes, and compared those estimates to the actual ratios as provided by the Centers for Disease Control (CDC).
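Before moving on to some general considerations, here is a tiny sketch in plain Python of the arithmetic for Example 2 above. The odds ratio is not computed in the paper; it is included only to illustrate the "approximation to the relative risk" mentioned in the considerations:

p_smokers = 0.10        # risk of lung cancer among smokers
p_nonsmokers = 0.02     # risk among non-smokers

risk_difference = p_smokers - p_nonsmokers    # the attributable risk, 8 percentage points
relative_risk = p_smokers / p_nonsmokers      # 5
odds_ratio = (p_smokers / (1 - p_smokers)) / (p_nonsmokers / (1 - p_nonsmokers))

print(round(risk_difference, 2), round(relative_risk, 1), round(odds_ratio, 2))
# 0.08 5.0 5.44 -- the odds ratio is close to, but not the same as, the relative risk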
Some general considerations

It all depends upon what the two quantities to be compared are.

1. Let's first consider situations such as that of Example #1 above, where we want to compare a single measurement on a variable with another single measurement on that variable. In that case, the reliability and validity with which the variable can be measured are crucial. You should compare the errors for the difference between two measurements with the errors for the quotient of two measurements. The relevant chapters in the college freshman physics laboratory manual (of all places) written by Simanek (2005) are especially good for a discussion of such errors. It turns out that the error associated with a difference A-B is the sum of the errors for A and B, whereas the error associated with a quotient A/B is the difference between the relative errors for A and for B. (The relative error for A is the error in A divided by A, and the relative error for B is the error for B divided by B.)

2. The most common comparison is for two percentages. If the two percentages are independent, i.e., they are not for the same observations or matched pairs of observations, the difference between the two is usually to be preferred; but if the percentages are based upon huge numbers of observations in epidemiological investigations the quotient of the two is the better choice, and usually with the larger percentage in the numerator and the smaller percentage in the denominator. If the percentages are not independent, e.g., the percentage of people who hold a particular attitude at Time 1 compared to the percentage of those same people who hold that attitude at Time 2, the difference (usually the Time 2 percentage minus the Time 1 percentage, i.e., the change, even if that is negative) is almost always to be preferred. Quotients of non-independent percentages are very difficult to handle statistically.

3. Quotients of probabilities are usually preferred to their differences.

4. On the other hand, comparisons of means that are not percentages (did you know that percentages are special kinds of means, with the only possible "scores" 0 and 100?) rarely involve quotients. As I pointed out in Example #4 above, there are several differences that might be of interest. For randomized experiments for which there is no pretest, subtracting the mean posttest score for the control group from the mean posttest score for the experimental group is most natural and most conventional. For pretest/posttest designs the "difference between the differences" or the difference between "adjusted" posttest means (via the analysis of covariance, for example) is the comparison of choice.

5. There are all sorts of change measures to be found in the literature, e.g., the difference between the mean score at Time 2 and the mean score at Time 1 divided by the mean score at Time 1 (which would provide an indication of the percent "improvement"). Many of those measures have sparked a considerable amount of controversy in the methodological literature, and the choice between expressing change as a difference or as a quotient is largely idiosyncratic.

The absolute value of differences

It is fairly common for people to concentrate on the absolute value of a difference, in addition to, or instead of, the "raw" difference. The absolute value of the difference between A and B, usually denoted as |A-B|, which is the same as |B-A|, is especially relevant when the discrepancy between the two is of interest, irrespective of which is greater.
Statistical inference

The foregoing discussion assumed that the data in hand are for a full population (even if the "N" is very small). If the data are for a random sample of a population, the preference between a difference statistic and a quotient statistic often depends upon the existence and/or complexity of the sampling distributions for such statistics. For example, the sampling distribution for a difference between two independent percentages is well known and straightforward (either the normal distribution or the chi-square distribution can be used) whereas the sampling distribution for the odds ratio is a real mess.

A controversial example

It is very common during a presidential election campaign to hear on TV something like this: "In the most recent opinion poll, Smith is leading Jones by seven points." What is meant by a point? Is that information important? If so, can the difference be tested for statistical significance and/or can a confidence interval be constructed around it?

The answer to the first question is easy. A point is a percentage. For example, 46% of those polled might have favored Smith and 39% might have favored Jones, a difference of seven points or seven percent. Since those two numbers don't add to 100, there might be other candidates in the race, some of those polled had no preferences, or both. [I've never heard anybody refer to the ratio of the 46 to the 39. Have you?]

It is the second question that has sparked considerable controversy. Some people (like me) don't think the difference is important; what matters is the actual % support for each of the candidates. (Furthermore, the two percentages are not independent, since their sum plus the sum of the percentages for other candidates plus the percentage of people who expressed no preferences must add to 100.) Other people (like my friend Milo Schield) think it is very important, not only for opinion polls but also for things like the difference between the percentage of people in a sample who have blue eyes and the percentage of people in that same sample who have green eyes (see Simon, 2004), and other contexts. Alas (for me), differences between percentages calculated on the same scale for the same sample can be tested for statistical significance and confidence intervals for such differences can be determined. See Kish (1965) and Scott and Seber (1983).

Financial example: "The Rule of 72"

[I would like to thank my former colleague and good friend at The Ohio State University, Dick Shumway, for referring me to this rule that his father, a banker, first brought to his attention.]

How many years does it take for your money to double if it is invested at a compound interest rate of r? It obviously depends upon what r is, and whether the compounding is daily, weekly, monthly, annually, or continuously. I will consider here only the "compounded annually" case. The Rule of 72 postulates that a good approximation to the answer to the money-doubling question can be obtained by dividing the % interest rate into 72. For interest rates of 6% vs. 9%, for instance, the rule would claim that your money would double in 72/6 = 12 years and 72/9 = 8 years, respectively. But how good is that rule?
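For readers who want to check the table that follows, here is a short sketch in plain Python of the underlying arithmetic (annual compounding assumed): the exact doubling time satisfies (1 + r)^t = 2, so t = ln(2)/ln(1 + r), and the Rule of 72 approximates that by 72 divided by the interest rate in percent.

import math

for rate_percent in [3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 18]:
    r = rate_percent / 100
    exact = math.log(2) / math.log(1 + r)   # years needed to double, compounded annually
    approx = 72 / rate_percent              # the Rule of 72 approximation
    print(rate_percent, round(exact, 2), round(approx, 2))
# At 6%, for example, the exact answer is about 11.90 years and the rule gives 12.00.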
The mathematics for the "exact" answer with which to compare the approximation as indicated by the Rule of 72 is a bit complicated, but consider the following table for various reasonable interest rates (both the exact answers and the approximations were obtained by using the calculator that is accessible at that marvelous website, www.moneychimp.com, which also provides the underlying mathematics):

r (%)    Exact    Approximation
  3      23.45       24
  4      17.67       18
  5      14.21       14.40
  6      11.90       12
  7      10.24       10.29
  8       9.01        9
  9       8.04        8
 10       7.27        7.20
 11       6.64        6.55
 12       6.12        6
 ...
 18       4.19        4

How good is the rule? In evaluating its "goodness" should we take the difference between exact and approximation (by subtracting which from which?) or should we divide one by the other (with which in the numerator and with which in the denominator?)? Those are both very difficult questions to answer, because the approximation is an over-estimate for interest rates of 3% to 7% (by decreasingly small discrepancies) and is an under-estimate for interest rates of 8% and above (by increasingly large discrepancies).

Additional reading

If you would like to pursue other sources for discussions of differences and quotients (and their sampling distributions), especially if you're interested in the comparison of percentages, the epidemiological literature is your best bet, e.g., the Rothman and Greenland (1998) text. For an interesting discussion of differences vs. quotients in the context of learning disabilities, see Kavale (2003). I mentioned reliability above (in conjunction with a comparison between two single measurements on the same scale). If you would like to see how that plays a role in the interpretation of various statistics, please visit my website (www.tomswebpage.net) and download any or all of my book, The reliability of measuring instruments (free of charge).

References

Baron, J., Neiderhiser, B., & Gandy, O.H., Jr. (1997). Perceptions and attributions of race differences in health risks. (On Jonathan Baron's website.)
Finney, D.J. (2007). On comparing two numbers. Teaching Statistics, 29 (1), 17-20.
Global Youth Tobacco Survey Collaborating Group. (2003). Differences in worldwide tobacco use by gender: Findings from the Global Youth Tobacco Survey. Journal of School Health, 73 (6), 207-215.
Kavale, K. (2003). Discrepancy models in the identification of learning disability. Paper presented at the Learning Disabilities Summit organized by the Department of Education in Washington, DC.
Kish, L. (1965). Survey sampling. New York: Wiley.
Knapp, T.R., & Tam, H.P. (1997). Some cautions concerning inferences about proportions, differences between proportions, and quotients of proportions. Mid-Western Educational Researcher, 10 (4), 11-13.
Kruger, D.J., & Nesse, R.M. (2004). Sexual selection and the male:female mortality ratio. Evolutionary Psychology, 2, 66-85.
Rothman, K.J., & Greenland, S. (1998). Modern epidemiology (2nd ed.). Philadelphia: Lippincott, Williams, & Wilkins.
Scott, A.J., & Seber, G.A.F. (1983). Difference of proportions from the same survey. The American Statistician, 37 (4), Part 1, 319-320.
Sherrod, C.C. (1923). The development of the idea of quotients in education. Peabody Journal of Education, 1 (1), 44-49.
Simanek, D. (2005). A laboratory manual for introductory physics. Retrievable in its entirety from: http://www.lhup.edu/~dsimanek/scenario/contents.htm
Simon, S. (November 9, 2004). Testing multinomial proportions. StATS Website.
PERCENTAGES: THE MOST USEFUL STATISTICS EVER INVENTED

"Eighty percent of success is showing up." - Woody Allen

"Baseball is ninety percent mental and the other half is physical." - Yogi Berra

"Genius is one percent inspiration and ninety-nine percent perspiration." - Thomas Edison

Preface

You know what a percentage is. 2 out of 4 is 50%. 3 is 25% of 12. Etc. But do you know enough about percentages? Is a percentage the same thing as a fraction or a proportion? Should we take the difference between two percentages or their ratio? If their ratio, which percentage goes in the numerator and which goes in the denominator? Does it matter? What do we mean by something being statistically significant at the 5% level? What is a 95% confidence interval? Those questions, and much more, are what this monograph is all about.

In his fine article regarding nominal and ordinal bivariate statistics, Buchanan (1974) provided several criteria for a good statistic, and concluded: "The percentage is the most useful statistic ever invented" (p. 629). I agree, and thus my choice for the title of this work. In the ten chapters that follow, I hope to convince you of the defensibility of that claim.

The first chapter is on basic concepts (what a percentage is, how it differs from a fraction and a proportion, what sorts of percentage calculations are useful in statistics, etc.). If you're pretty sure you already understand such things, you might want to skip that chapter (but be prepared to return to it if you get stuck later on!). In the second chapter I talk about the interpretation of percentages, differences between percentages, and ratios of percentages, including some common mis-interpretations and pitfalls in the use of percentages. Chapter 3 is devoted to probability and its explanation in terms of percentages. I also include in that chapter a discussion of the concept of odds (both in favor of, and against, something). Probability and odds, though related, are not the same thing (but you wouldn't know that from reading much of the scientific and lay literature). Chapter 4 is concerned with a percentage in a sample vis-à-vis the percentage in the population from which the sample has been drawn. In my opinion, that is the most elementary notion in inferential statistics, as well as the most important. Point estimation, interval estimation (confidence intervals), and hypothesis testing (significance testing) are all considered. The following chapter goes one step further by discussing inferential statistical procedures for examining the difference between two percentages and the ratio of two percentages, with special attention to applications in epidemiology.

The next four chapters are devoted to special topics involving percentages. Chapter 6 treats graphical procedures for displaying and interpreting percentages. It is followed by a chapter that deals with the use of percentages to determine the extent to which two frequency distributions overlap. Chapter 8 discusses the pros and cons of dichotomizing a continuous variable and using percentages with the resulting dichotomy. Applications to the reliability of measuring instruments (my second most favorite statistical concept--see Knapp, 2009) are explored in Chapter 9. The final chapter attempts to summarize things and tie up loose ends.

There is an extensive list of references, all of which are cited in the text proper. You may regard some of them as old (they actually range from 1919 to 2009).
I like old references, especially those that are classics and/or are particularly apt for clarifying certain points. [And I'm old too.]

Table of Contents

Chapter 1: The basics
Chapter 2: Interpreting percentages
Chapter 3: Percentages and probability
Chapter 4: Sample percentages vs. population percentages
Chapter 5: Statistical inferences for differences between percentages and ratios of percentages
Chapter 6: Graphing percentages
Chapter 7: Percentage overlap of two frequency distributions
Chapter 8: Dichotomizing continuous variables: Good idea or bad idea?
Chapter 9: Percentages and reliability
Chapter 10: Wrap-up
References

Chapter 1: The basics

What is a percentage?

A percentage is a part of a whole. It can take on values between 0 (none of the whole) and 100 (all of the whole). The whole is called the base. The base must ALWAYS be reported whenever a percentage is determined. Example: There are 20 students in a classroom, 12 of whom are males and 8 of whom are females. The percentage of males is 12 out of 20, or 60%. The percentage of females is 8 out of 20, or 40%. (20 is the base.)

To how many decimal places should a percentage be reported?

One place to the right of the decimal point is usually sufficient, and you should almost never report more than two. For example, 2 out of 3 is 66 2/3 %, which rounds to 66.67% or 66.7%. [To refresh your memory, you round down if the fractional part of a mixed number is less than 1/2 or if the next digit is 0, 1, 2, 3, or 4; you round up if the fractional part is greater than or equal to 1/2 or if the next digit is 5, 6, 7, 8, or 9.] Computer programs can report numbers to ten or more decimal places, but that doesn't mean that you have to. I believe that people who report percentages to several decimal places are trying to impress the reader (consciously or unconsciously). Lang and Secic (2006) provide the following rather rigid rule: "When the sample size is greater than 100, report percentages to no more than one decimal place. When sample size is less than 100, report percentages in whole numbers. When sample size is less than, say, 20, consider reporting the actual numbers rather than percentages." (p. 5) [Their rule is just as appropriate for full populations as it is for samples. And they don't say it, perhaps because it is obvious, but if the size of the group is equal to 100, be it sample or population, the percentages are the same as the numerators themselves, with a % sign tacked on.]

How does a percentage differ from a fraction and a proportion?

Fractions and proportions are also parts of wholes, but both take on values between 0 (none of the whole) and 1 (all of the whole), rather than between 0 and 100. To convert from a fraction or a proportion to a percentage you multiply by 100 and add a % sign. To convert from a percentage to a proportion you delete the % sign and divide by 100. That can in turn be converted to a fraction. For example, 1/4 multiplied by 100 is 25%. .25 multiplied by 100 is also 25%. 25% divided by 100 is .25, which can be expressed as a fraction in a variety of ways, such as 25/100 or, in lowest terms, 1/4. (See the excellent On-Line Math Learning Center website for examples of how to convert from any of these part/whole statistics to any of the others.) But, surprisingly (to me, anyhow), people tend to react differently to statements given in percentage terms vs. fractional terms, even when the statements are mathematically equivalent. (See the October 29, 2007 post by Roger Dooley on the Neuromarketing website.
Fascinating.)

Most authors of statistics books, and most researchers, prefer to work with proportions. I prefer percentages [obviously, or I wouldn't have written this monograph!], as does my friend Milo Schield, Professor of Business Administration and Director of the W. M. Keck Statistical Literacy Project at Augsburg College in Minneapolis, Minnesota. (See, for example, Schield, 2008). One of the reasons I don't like to talk about proportions is that they have another meaning in mathematics in general: "a is in the same proportion to b as c is to d". People studying statistics could easily be confused about those two different meanings of the term "proportion".

One well-known author (Gerd Gigerenzer) prefers fractions to both percentages and proportions. In his book (Gigerenzer, 2002) and in a subsequent article he co-authored with several colleagues (Gigerenzer, et al., 2007), he advocates an approach that he calls "the method of natural frequencies" for dealing with percentages. For example, instead of saying something like "10% of smokers get lung cancer", he would say "100 out of every 1000 smokers get lung cancer" [He actually uses breast cancer to illustrate his method]. Heynen (2009) agrees. But more about that in Chapter 3, in conjunction with positive diagnoses of diseases.

Is there any difference between a percentage and a percent?

The two terms are often used interchangeably (as I do in this monograph), but percentage is sometimes regarded as the more general term and percent as the more specific term. The AMA Manual of Style, the BioMedical Editor website, the Grammar Girl website, and Milo Schield have more to say regarding that distinction. The Grammar Girl (Mignon Fogarty) also explains whether percentage takes a singular or plural verb, whether to use words or numbers before the % sign, whether to have a leading 0 before a decimal number that can't be greater than 1, and all sorts of other interesting things.

Do percentages have to add to 100?

A resounding YES, if the percentages are all taken on the same base for the same variable, if only one response is permitted, and if there are no missing data. For a group of people consisting of both males and females, the % male plus the % female must be equal to 100, as indicated in the above example (60+40=100). If the variable consists of more than two categories (a two-categoried variable is called a dichotomy), the total might not add to 100 because of rounding. As a hypothetical example, consider what might happen if the variable is something like Religious Affiliation and you have percentages reported to the nearest tenth for a group of 153 people of 17 different religions. If those percentages add exactly to 100 I would be terribly surprised. Several years ago, Mosteller, Youtz, and Zahn (1967) determined that the probability (see Chapter 3) of rounded percentages adding exactly to 100 is perfect for two categories, approximately 3/4 for three categories, approximately 2/3 for four categories, and approximately √(6/(cπ)) for c ≥ 5, where c is the number of categories and π is the well-known ratio of the circumference of a circle to its diameter (approximately 3.14). Amazing! [For an interesting follow-up article, see Diaconis & Freedman (1979). Warning: It has some pretty heavy mathematics!]
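That √(6/(cπ)) approximation is easy to check empirically. Here is a minimal simulation sketch (mine, not from Mosteller, Youtz, and Zahn), assuming that the underlying percentages are uniformly distributed over all possible ways of splitting up the whole; it estimates how often c percentages, rounded to whole numbers, still add to exactly 100, and prints the formula's value alongside:

import math, random

def chance_rounded_percentages_sum_to_100(c, trials=100_000):
    # Split 100% into c random pieces, round each piece to a whole number,
    # and count how often the rounded pieces still add to exactly 100.
    hits = 0
    for _ in range(trials):
        cuts = sorted(random.random() for _ in range(c - 1))
        parts = [b - a for a, b in zip([0.0] + cuts, cuts + [1.0])]
        rounded = [math.floor(100 * p + 0.5) for p in parts]
        hits += (sum(rounded) == 100)
    return hits / trials

for c in (3, 4, 5, 8):
    simulated = chance_rounded_percentages_sum_to_100(c)
    formula = math.sqrt(6 / (c * math.pi))  # the large-c approximation
    print(f"c = {c}: simulated {simulated:.2f}, formula {formula:.2f}")

For three and four categories the simulation comes out close to the exact 3/4 and 2/3 quoted above; for larger c the simulated values and the formula agree closely.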
Here's a real-data example of the percentages of the various possible blood types for the U.S.:

O Positive 38.4%
A Positive 32.3%
B Positive 9.4%
O Negative 7.7%
A Negative 6.5%
AB Positive 3.2%
B Negative 1.7%
AB Negative .7%

[Source: American Red Cross website] Those add to 99.9%. The probability that they would add exactly to 100%, by the Mosteller, et al. formula, is approximately .49.

Can't a percentage be greater than 100?

I said above that percentages can only take on values between 0 and 100. There is nothing less than none of a whole, and there is nothing greater than all of a whole. But occasionally [too often, in my opinion, but Milo Schield disagrees with me] you will see a statistic such as "Her salary went up by 200%" or "John is 300% taller than Mary". Those examples refer to a comparison in terms of a percentage, not an actual percentage. I will have a great deal to say about such comparisons in the next chapter and in Chapter 5.

Why are percentages ubiquitous?

People in general, and researchers in particular, have always been interested in the % of things that are of a particular type, and they always will be. What % of voters voted for Barack Obama in the most recent presidential election? What % of smokers get lung cancer? What % of the questions on a test do I have to answer correctly in order to pass? An exceptionally readable source about opinion polling is the article in the Public Opinion Quarterly by Wilks (1940a), which was written just before the entrance of the U.S. into World War II, a time when opinions regarding that war were diverse and passionate. I highly recommend that article to those of you who want to know how opinion polls SHOULD work. S.S. Wilks was an exceptional statistician.

What is a rate?

A rate is a special kind of percentage, and is most often referred to in economics, demography, and epidemiology. An interest rate of 10%, for example, means that for every dollar there is a corresponding $1.10 that needs to be taken into consideration (whether that works to your advantage or to your disadvantage). There is something called "The Rule of 72" regarding interest rates. If you want to determine how many years it would take for your money to double if it were invested at a particular interest rate, compounded annually, divide the interest rate into 72 and you'll have a close approximation. To take a somewhat optimistic example, if the rate is 18% it would take four years (72 divided by 18 is 4) to double your money. [You would actually have only 1.93877 times as much after four years, but that's close enough to 2 for government work! Those of you who already know something about compound interest might want to check that.]

Birth rates and death rates are of particular concern in the analysis of population growth or decline. In order to avoid small numbers, they are usually reported per thousand rather than per hundred (which is what a simple percent is). For example, if in the year 2010 there were to be six million births in the United States out of a population of 300 million, the (crude) birth rate would be 6/300, or 2%, or 20 per thousand. If there were three million deaths in that same year, the (also crude) death rate would be 3/300, or 1%, or 10 per thousand. One of the most interesting rates is the response rate for surveys. It is the percentage of people who agree to participate in a survey.
For some surveys, especially those that deal with sensitive matters such as religious beliefs and sexual behavior, the response rate is discouragingly low (and often not even reported), so that the results must be taken with more than the usual grain of salt. Some rates are phrased in even different terms, e.g., parts per 100,000 or parts per million (the latter often used to express the concentration of a particular pollutant).

What kinds of calculations can be made with percentages?

The most common kinds of calculations involve subtraction and division. If you have two percentages, e.g., the percentage of smokers who get lung cancer and the percentage of non-smokers who get lung cancer, you might want to subtract one from the other or you might want to divide one by the other. Which is it better to do? That matter has been debated for years. If 10% of smokers get lung cancer and 2% of non-smokers get lung cancer (the two percentages are actually lower than that for the U.S.), the difference is 8% and the ratio is 5-to-1 (or 1-to-5, if you invert that ratio). I will have much more to say about differences between percentages and ratios of percentages in subsequent chapters. (And see the brief, but excellent, discussion of differences vs. ratios of percentages at the American College of Physicians website.)

Percentages can also be added and multiplied, although such calculations are less common than the subtraction or division of percentages. I've already said that percentages must add to 100, whenever they're taken on the same base for the same variable. And sometimes we're interested in the percentage of a percentage, in which case two percentages are multiplied. For example, if 10% of smokers get lung cancer and 60% of them (the smokers who get lung cancer) are men, the percentage of smokers who get cancer and are male is 60% of 10%, or 6%. (By subtraction, the other 4% are female.)

You also have to be careful about averaging percentages. If 10% of smokers get lung cancer and 2% of non-smokers get lung cancer, you can't just split the difference between those two numbers, i.e., add them together and divide by two (to obtain 6%), to get the % of people in general who get lung cancer. The number of non-smokers far exceeds the number of smokers (at least in 2009), so the percentages have to be weighted before averaging. Without knowing how many smokers and non-smokers there are, all you know is that the average lung cancer % is somewhere between 2% and 10%, but closer to the 2%. [Do you follow that?]

What is inverse percentaging?

You're reading the report of a study in which there is some missing data (see the following chapter), with one of the percentages based upon an n of 153 and another based upon an n of 147. [153 is one of my favorite numbers. Do you know why? I'll tell you at the end of this monograph.] You are particularly interested in a variable for which the percentage is given as 69.8, but the author didn't explicitly provide the n for that percentage (much less the numerator that got divided by that n). Can you find out what n is, without writing to the author? The answer is a qualified yes, if you're good at inverse percentaging. There are two ways of going about it. The first is by brute force. You take out your trusty calculator and try several combinations of numerators with denominators of 153 and 147 and see which, if any, of them yield 69.8% (rounded to the nearest tenth of a percent); a small sketch of that brute-force search appears below.
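Here is that brute-force search as a minimal sketch (mine, not part of the original report); it checks every possible numerator for each of the two candidate denominators and reports the one whose percentage comes closest to the reported 69.8:

target = 69.8  # the percentage reported in the study

for n in (153, 147):  # the two candidate denominators
    # Try every numerator from 0 to n and keep the one closest to the target.
    best = min(range(n + 1), key=lambda k: abs(100 * k / n - target))
    print(f"n = {n}: closest is {best}/{n} = {100 * best / n:.2f}%")

Neither denominator reproduces 69.8% exactly, which is precisely the sort of situation the table look-up described next helps to sort out.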
OR, you can use a book of tables, e.g., the book by Stone (1958), and see what kinds of percentages you get for what kinds of n's. Stone's book provides percentages for all parts from 1 to n of n's from 1 to 399. You turn to the page for an n of 153 and find that 107 is 69.9% of 153. (That is the closest % to 69.8.) You then turn to the page for 147 and find that 102 is 69.4% of 147 and 103 is 70.1% of 147. What is your best guess for the n and for the numerator that you care about? Since the 69.9% for 107 out of 153 is very close to the reported 69.8% (perhaps the author rounded incorrectly or it was a typo?), since the 69.4% for 102 out of 147 is not nearly as close, and the 70.1% is also not as close (and is an unlikely typo), your best guess is 107 out of 153. But you of course could be wrong.

What about the unit of analysis and the independence of observations?

In my opinion, more methodological mistakes are made regarding the unit of analysis and the independence of observations than in any other aspect of a research study. The unit of analysis is the entity (person, classroom, school, whatever) upon which any percentage is taken. The observations are the numbers that are used in the calculation, and they must be independent of one another. If, for example, you are determining the percentage male within a group of 20 people, and there are 12 males and 8 females in the group (as above), the percentage of male persons is 12/20 or 60%. But that calculation assumes that each person is counted only once, there are no twins in the group, etc. If the 20 persons are in two different classrooms, with one classroom containing all 12 of the males and the other classroom containing all 8 of the females, then the percentage of male classrooms is 1/2 or 50%, provided the two classrooms are independent. They could be dependent if, to take an admittedly extreme case, there were 8 male/female twin-pairs who were deliberately assigned to different classrooms, with 4 other males joining the 8 males in the male classroom. [Gets tricky, doesn't it?]

One of the first researchers to raise serious concerns about the appropriate unit of analysis and the possibility of non-independent observations was Robinson (1950) in his investigation of the relationship between race and literacy. He found (among other things) that for a set of data in the 1930 U.S. Census the correlation between a White/Black dichotomy and a Literate/Illiterate dichotomy was only .203 with individual person as the unit of analysis (n = 97,272) but was .946 with major geographical region as the unit of analysis (n = 9), the latter being a so-called "ecological correlation" between % Black and % Illiterate. His article created all sorts of reactions, from disbelief to demands for re-analyses of data for which something other than the individual person was used as the unit of analysis. It (his article) was recently reprinted in the International Journal of Epidemiology, along with several commentaries by Subramanian, et al. (2009a, 2009b), Oakes (2009), Firebaugh (2009), and Wakefield (2009). I have also written a piece about the same problem (Knapp, 1977a).

What is a percentile?

A percentile is a point on a scale below which some percentage of things fall. For example, "John scored at the 75th percentile on the SAT" means that 75% of the takers scored lower than he did and 25% scored higher. We don't even know, and often don't care, what his actual score was on the test.
The only sense in which a percentile refers to a part of a whole is as a part of all of the people, not a part of all of the items on the test.

Chapter 2: Interpreting percentages

Since a percentage is simple to calculate (much simpler than, say, a standard deviation, the formula for which has ten symbols!), you would think that it is also simple to interpret. Not so, as this chapter will now show.

Small base

It is fairly common to read a claim such as "66 2/3 % of doctors are sued for malpractice". The information that the claimant doesn't provide is that only three doctors were included in the report and two of them were sued. In the first chapter I pointed out that the base upon which a percentage is determined must be provided. There is (or should be) little interest in a study of just three persons, unless those three persons are very special indeed. There is an interesting article by Buescher (2008) that discusses some of the problems with using rates that have small numbers in the numerator, even if the base itself is large. And in his commentary concerning an article in the journal JACC Cardiovascular Imaging, Camici (2009) advises caution in the use of any ratios that refer to percentages.

Missing data

The bane of every researcher's existence is the problem of missing data. You go to great lengths in designing a study, preparing the measuring instruments, etc., only to find out that some people, for whatever reason, don't have a measurement on every variable. This situation is very common for a survey in which questions are posed regarding religious beliefs and/or sexual behavior. Some people don't like to be asked such questions, and they refuse to answer them. What is the researcher to do? Entire books have been written about the problem of missing data (e.g., Little & Rubin, 2002). Consider what happens when there is a question in a survey such as "Do you believe in God?", the only two response categories are "yes" and "no", and you get 30 yeses, 10 nos, and 10 missing responses in a sample of 50 people. Is the % yes 30 out of 50 (= 60%) or 30 out of 40 (= 75%)? And is the % no 10 out of 50 (= 20%) or 10 out of 40 (= 25%)? If it's out of 50, the percentages (60 and 20) don't add to 100. If it's out of 40, the base is 40, not the actual sample size of 50 (that's the better way to deal with the problem: "no response" becomes a third category).

Overlapping categories

Suppose you're interested in the percentages of people who have various diseases. For a particular population the percentage having AIDS plus the percentage having lung cancer plus the percentage having hypertension might very well add to more than 100 because some people might suffer from more than one of those diseases. I used this example in my little book entitled Learning statistics through playing cards (Knapp, 1996, p. 24). The three categories (AIDS, lung cancer, and hypertension) could overlap. In the technical jargon of statistics, they are not mutually exclusive.

Percent change

Whenever there are missing data (see above) the base changes. But when you're specifically interested in percent change the base also does not stay the same, and strange things can happen. Consider the example in Darrell Huff's delightful book, How to lie with statistics (1954), of a man whose salary was $100 per week and who had to take a 50% pay cut to $50 per week because of difficult economic times. [(100-50)/100 = .50 or 50%.] Times suddenly improved and the person was subsequently given a 50% raise. Was his salary back to the original $100? No.
The base has shifted from 100 to 50. $50 plus 50% of $50 is $75, not $100. There are several other examples in the research literature and on the internet regarding the problem of % decrease followed by % increase, as well as % increase followed by % decrease, % decrease followed by another % decrease, and % increase followed by another % increase. (See, for example, the definition of a percentage at the wordIQ.com website; the Pitfalls of Percentages webpage at the Hypography website; the discussion of percentages at George Mason University's STATS website; and the articles by Chen and Rao, 2007, and by Finney, 2007.)

A recent instance of a problem in interpreting percent change is to be found in the research literature on the effects of smoking bans. Several authors (e.g., Lightwood & Glantz, 2009; Meyers, 2009) claim that smoking bans cause decreases in acute myocardial infarctions (AMI). They base their claims upon meta-analyses of a small number of studies that found a variety of changes in the percent of AMIs, e.g., Sargent, Shepard, and Glantz (2004), who investigated the numbers of AMIs in Helena, MT before a smoking ban, during the time the ban was in effect, and after the ban had been lifted. There are several problems with such claims, however:

1. Causation is very difficult to determine. There is a well-known dictum in research methodology that "correlation is not necessarily causation". As Sargent, et al. (2004) themselves acknowledged: "This is a before and after study that relies on historical controls (before and after the period that the law was in effect), not a randomised controlled trial. Because this study simply observed a change in the number of admissions for acute myocardial infarction, there is always the chance that the change we observed was due to some unobserved confounding variable or systematic bias." (p. 979)

2. Sargent, et al. found a grand total of 24 AMIs in the city of Helena during the six-month ban in the year 2002, as opposed to an average of 40 AMIs in other six-month periods just before and just after the ban. Those are very small numbers, even though the difference of 16 is "statistically significant" (see Chapters 4 and 5). They also compared that difference of 16 AMIs to a difference of 5.6 AMIs between 18 and 12.4 before and after for a "not Helena" area (just outside of Helena). The difference between those two differences of 16 and 5.6 was also found to be small but "statistically significant". But having a "not Helena" sample is not the same as having a randomly comparable group in a controlled experiment in Helena itself.

3. But to the point of this section, the drop from 40 to 24 within Helena is a 40% change (16 "out of" 40); the "rebound" from 24 to 40 is a 66 2/3% change (16 "out of" the new base of 24). To their credit, Sargent et al. did not emphasize the latter, even though it is clear they believe it was the ban and its subsequent rescission that were the causes of the decrease followed by the increase.

[Note: The StateMaster.com website cites the Helena study as an example of a "natural experiment". I disagree. In my opinion, "natural experiment" is an oxymoron. There is nothing natural about an experiment, which is admittedly artificial (the researcher intervenes), but usually necessary for the determination of causation. Sargent, et al. did not intervene. They just collected existing data.]
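The shifting-base arithmetic in Huff's salary example and in the Helena AMI counts is easy to make concrete. A minimal sketch (the function name is mine):

def percent_change(old, new):
    # Percent change always uses the starting value as the base.
    return 100 * (new - old) / old

# Huff's example: a 50% cut from $100 is not undone by a 50% raise.
print(percent_change(100, 50))          # -50.0
print(50 * 1.50)                        # 75.0, not 100

# The Helena AMI counts: the same 16 cases are a 40% drop but a 66.7% rebound,
# because the base shifts from 40 down to 24.
print(percent_change(40, 24))           # -40.0
print(round(percent_change(24, 40), 1)) # 66.7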
I recently encountered several examples of the inappropriate calculation and/or interpretation of percent change in a table in a newspaper (that shall remain nameless) on % increase or decrease in real estate sales prices. The people who prepared the table used [implicitly] a formula for % change of the form (Time 2 price minus Time 1 price)/Time 1 price. One of the comparisons involved a median price at Time 1 of $0 and a median price at Time 2 of $72,500 that was claimed to yield a 0% increase, since the calculation of ($72,500 - 0)/0 was said to be equal to 0. Not so. You can't divide by 0, so the percent increase was actually indeterminate.

Percent difference

Percent change is a special case of percent difference. (It's change if it's for the same things, usually people, across time.) Both percent difference (see the following section) and the difference between two percentages (see Chapter 5) come up all of the time [but they're not the same thing, so be careful!].

The percent difference between two continuous quantities

Cole (2000) suggests that it is better to use logarithms when interpreting the percent difference between two continuous quantities. He gives the example of a comparison between the average height of British adult men (177.3 centimeters, which is approximately 69.8 inches, or slightly under 5'10") and the average height of British adult women (163.6 centimeters, which is approximately 64.4 inches). The usual formula for finding the percent difference between two quantities x1 and x2 is 100(x2 - x1)/x1. But which do you call x1 and which do you call x2? Working the formula one way (with x1 = the average height of the women and x2 = the average height of the men), you find that the men are 8.4% taller than the women. Working the formula the other way (with x1 = the average height of the men and x2 = the average height of the women) you find that the women are 7.7% shorter than the men (the numerator is negative). Cole doesn't like that asymmetry. He suggests that the formula be changed to 100(loge x2 - loge x1), where e = 2.71828... is the base of the natural logarithm system. If you like logarithms and you're comfortable working with them, you'll love Cole's article!

Comparisons between percentages that must add to 100

One annoying (to me, anyhow) tendency these days is to compare, by subtraction, the percentage of support for one candidate for political office with the percentage of support for another candidate when they are the only two candidates for that office. For example: "Smith is leading Jones by 80% to 20%, a difference of 60 points." Of course. If Smith has 80%, Jones must have 20% (unless there is a "no preference" option and/or there are any missing data), the difference must be 60%, and why use the word "points"?! Milo Schield and I have one final disagreement, and it is in regard to this matter. He likes reporting the difference and using the word "points". In Chapter 5 I will begrudgingly return to a variation of the problem, in which the testing of the significance of the difference between two percentages is of interest, where the two percentages together with a third percentage add to 100.

Ratios vs. differences of percentages that don't have to add to 100

Consider an example (unlike the previous example) where it is reasonable to calculate the ratio of two percentages. Suppose one-half of one percent of males in a population of 100 million males have IQ scores of over 200 and two percent of females in a population of 100 million females have IQ scores of over 200.
(There are approximately 100 million adult males and approximately 100 million adult females in the United States.) Should we take the ratio of the 2% to the .5% (a ratio of 4 to 1) and claim that the females are four times as smart? No. There are at least two problems with such a claim. First of all, having a number less than 1 in the denominator and a number greater than 1 in the numerator can produce an artificially large quotient. (If the denominator were 0, the ratio couldn't even be calculated, since you can't divide by 0.) Secondly, does it really matter how large such a ratio is, given that both the numerator and the denominator are small? Surely it is the difference between those two percentages that is important, not their ratio.

Although in general there are fewer problems in interpreting differences between percentages than there are in interpreting ratios of percentages, when subgroup comparisons are made in addition to an overall comparison, things can get very complicated. The classic case is something called Simpson's Paradox (Simpson, 1951), in which the differences between two overall percentages can be in the opposite direction from differences between their corresponding subgroup percentages. The well-known mathematician John Allen Paulos (2001) provided the following hypothetical (but based upon an actual lawsuit) example:

To keep things simple, let's suppose there were only two departments in the graduate school, economics and psychology. Making up numbers, let's further assume that 70 of 100 men (70 percent) who applied to the economics department were admitted and that 15 of 20 women (75 percent) were. Assume also that five out of 20 men (25 percent) who applied to the psychology department were admitted and 35 of 100 women (35 percent) were. Note that in each department a higher percentage of women was admitted. If we amalgamate the numbers, however, we see what prompted the lawsuit: 75 of the 120 male applicants (62.5 percent) were admitted to the graduate school as a whole whereas only 50 of the 120 female applicants (41.7 percent) were.

How can that be? The paradox arises from the fact that there are unequal numbers of men and women contributing to the percentages (100 men and 20 women for economics; 20 men and 100 women for psychology). The percentages need to be weighted before they are combined into overall figures. For additional discussions of Simpson's Paradox, see Baker and Kramer (2001), Malinas (2001), and Ameringer, Serlin, and Ward (2009).

Reporting of ranges in percentages across studies

In his editorial a few years ago, Cowell (1998) referred to an author's citing of the results of a successful surgical procedure as ranging from 43% to 100%, without mentioning that the 100% was for one successful procedure performed on one patient! He (Cowell) argued, as I have, that the base must always be given along with the percentage.

Other misunderstandings and errors in interpreting percentages

Milo Schield (2000) discussed a number of problems that people have when it comes to percentages in various contexts, especially rates. One example he cites is the claim made by some people that if X% of A are B, then X% of B are A. No. If you don't believe Schield or me, try various numbers or draw a Venn diagram for two overlapping circles, A and B. He has written other interesting articles regarding statistical literacy (or lack of same), one of which (Schield, 2005) deals largely with misunderstandings of percentages.
He also put on the internet a test of statistical literacy (Schield, 2002). You can get to it by googling "statistical literacy inventory" and clicking on the first entry. Here are three of the questions on that test (as cited in The Washington Post on February 6, 2009):

1. True or False. If a stock decreases 50 percent and then increases by 50 percent, it will be back to its original value.
2. True or False. If a stock drops from $300 to zero, that is a 300 percent decrease.
3. A company has a 30 percent market share in the Western US and a 10 percent market share in the Eastern US. What is the company's overall market share in the entire US?

Do you know the answers?

The interesting book, Mathsemantics, by Edward MacNeal (1994), includes a chapter on percentages in which the author discusses a number of errors that he discovered when a group of 196 applicants for positions with his consulting firm were tested. Some of the errors, and some of the reasons that people gave for having made them, are pretty bizarre. For example, when asked to express .814 as a percentage to the nearest whole percent, one person gave as the answer 1/8 %. One of my favorite examples in that chapter concerns a person (fortunately nameless) who claimed that Richie Ashburn, a former baseball player with the Philadelphia Phillies, hit "three hundred and fifty percent" in one of his major league seasons. [I'm a baseball nut.] In case you're having trouble figuring out what's wrong with that, I'll help you out. First of all, as you now know (if you didn't already), percentages must be between 0 and 100. Secondly, baseball batting averages are per thousand rather than per hundred. Ashburn actually hit safely about 35% of the time, which, in baseball jargon, is "three fifty", i.e., a proportion of .350.

Speaking of my love for baseball, and going back to Simpson's Paradox, I once wrote an article (Knapp, 1985) in which I provided real data that showed Player A had a higher batting average than Player B against both right-handed and left-handed pitching but had a lower overall batting average. I later discovered additional instances of batting averages that constituted evidence for a transitive case of Simpson's Paradox, i.e., one for which A > B > C against both right-handed and left-handed pitching, but for which A < B < C overall. (The symbol > means "greater than"; < means "less than".)

Chapter 3: Percentages and probability

What do we mean by the probability of something?

There are several approaches to the definition of probability. The first one that usually comes to mind is the so-called "a priori" definition that is a favorite of teachers of statistics who use coins, dice, and the like to explain probability. In the a priori approach the probability of something is the number of ways that something can take place divided by the total number of equally likely outcomes. For example, in a single toss of a coin, the probability of heads is the number of ways that can happen (1) divided by the total number of equally likely outcomes (2: heads or tails), which is 1/2, .50, or 50%, depending upon whether you want to use fractions, proportions, or percentages to express the probability. Similarly, the probability of a 4 in a single roll of a die is 1 (there is only one side of a die that has four spots) divided by 6 (the total number of sides), which is equal to 1/6, .167, or 16.7% (to three significant figures). But there are problems with that definition.
In the first place, it only works for symmetric situations such as the tossing of fair coins and the rolling of unloaded dice. Secondly, it is actually circular, since it defines probability in terms of "equally likely", which is itself a probabilistic concept. Such concerns have led to a different definition, the long-run empirical definition, in which the probability of something is the number of ways that something did happen (note the use of "did" rather than "can") divided by the total number of things that happened. This definition works for things like thumbtack tosses (what is the probability of landing with its point up?) as well as for coins, dice, and many other probabilistic contexts. The price one pays, however, is the cost of actually carrying out the empirical demonstration of tossing a thumbtack (or tossing a coin or rolling a die) a large number of times. And how large is large??

There is a third (and somewhat controversial) subjective definition of probability that is used in conjunction with Bayesian statistics (an approach to statistics associated with a famous equation derived by the clergyman/mathematician Rev. Thomas Bayes, who lived in the 18th century). Probability is defined as a number between 0 and 1 (for fractions and proportions) or between 0 and 100 (for percentages) that is indicative of a person's strength of conviction that something will take place (note the use of "will" rather than either "can" or "did"). An example that illustrates various definitions of probability, especially the subjective definition, is the question of the meaning of the probability of rain. There recently appeared an article in The Journal of the American Meteorological Society, written by Joslyn, Nadav-Greenberg, and Nichols (2009), that was devoted entirely to that problem. (Weather predictions in terms of percentages and probabilities have been around for about a century---see, for example, Hallenbeck, 1920.)

I've already said that I favor the reporting of parts out of wholes in terms of percentages rather than in terms of fractions or proportions. I also favor the use of playing cards rather than coins or dice to explain probabilities. That should come as no surprise to you, since in the previous chapter I referred to my Learning statistics through playing cards book (Knapp, 1996), which, by the way, also concentrates primarily on percentages.

The probability of not-something

If P is the probability that something will take place, in percentage terms, then 100 - P is the probability that it will not take place. For example, if you draw one card from an ordinary deck of cards, the probability P that it's a spade is 13/52, or 1/4, or .25, or 25%. The probability that it's not a spade is 100 - 25 = 75%, which can also be written as 39/52, or 3/4, or .75.

Probabilities vs. odds

People are always confusing probabilities and odds. If P is the probability of something, in percentage terms, then the odds in favor of that something are P divided by (100 - P); and the odds against it are (100 - P) divided by P. The latter is usually of greater interest, especially for very small probabilities. For example, if you draw one card from an ordinary deck of cards, the probability P that it's a spade, from above, is 13/52, or 1/4, or .25, or 25%. The odds in favor of getting a spade are 25 divided by (100 - 25), or 1 to 3; the odds against it are (100 - 25) divided by 25, or 3 to 1.
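Since the probability-versus-odds confusion comes up constantly, here is a minimal sketch (the function names are mine) that turns a probability expressed as a percentage into the odds for and against, using the spade example:

def odds_in_favor(p):
    # Odds for: P / (100 - P), with P a probability in percent.
    return p / (100 - p)

def odds_against(p):
    # Odds against: (100 - P) / P.
    return (100 - p) / p

p_spade = 25  # the probability, in percent, of drawing a spade
print(odds_in_favor(p_spade))   # 0.333..., i.e., 1 to 3
print(odds_against(p_spade))    # 3.0, i.e., 3 to 1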
[In his book, Probabilities and life, the French mathematician Emile Borel (1962) claims that we act as though events with very small probabilities never occur. He calls that "the single law of chance".] There are actually two mistakes that are often made. The first is the belief that probabilities and odds are the same thing (so some people would say that the odds of getting a spade are 1/4, or 25%). The second is the belief that the odds against something are merely the reciprocal of its probability (so they would say that the odds against getting a spade are 4 to 1).

Complex probabilities

The examples just provided were for simple situations such as tossing a coin once, rolling a die once, or drawing one card from a deck of cards, for which you are interested in a simple outcome. Most applications of probability involve more complicated matters. If there are two events, A and B, with which you are concerned, the probability that either of them will take place is the sum of their respective probabilities, if the events are mutually exclusive, and the probability that both of them will take place is the product of their respective probabilities, if the events are independent. Those are both mouthfuls, so let's take lots of examples (again using playing cards):

1. If you draw one card from a deck of cards, what is the probability that it is either a spade or a heart? Since getting a spade and getting a heart are mutually exclusive (a card cannot be a spade and a heart), the probability of either a spade or a heart is the probability of a spade plus the probability of a heart, which is equal to 13/52 plus 13/52 = 26/52 = 1/2, or 50%. [It's generally easier to carry out the calculations using fractions, but to report the answer in terms of percentages.]

2. If two cards are drawn from a deck of cards, what is the probability that they are both spades? This problem is a bit more difficult. We must first specify whether or not the first card is replaced in the deck before the second card is drawn. If the two events, "spade on first card" and "spade on second card", are to be independent (i.e., that the outcome of the second event does not depend upon the outcome of the first event) the first card must be replaced. If so, the desired probability is 1/4 for the first card times 1/4 for the second card, which is equal to 1/16 or 6.25%. If the first card is not replaced, the probability is 13/52 times 12/51 = 1/4 times 4/17 = 1/17 or 5.88%.

3. If two cards are drawn from a deck of cards, what is the probability that either of them is a spade? This is indeed a complex problem. First of all, we need to know if the cards are to be drawn with replacement (the first card is returned to the deck before the second card is drawn) or without replacement (it isn't). Secondly, we need to specify whether "either" means "one but not both" or "one or both". Let us consider just one of the four combinations. (I'll leave the other three as exercises for the curious reader!) If the drawing is with replacement and "either" means "one but not both", the possibilities that are favorable to getting a spade are "spade on first draw, no spade on second draw" and "no spade on first draw, spade on second draw". Those probabilities are, using the "or" rule in conjunction with the "and" rule, (1/4 times 3/4) plus (3/4 times 1/4), i.e., 3/16 + 3/16, or 6/16, or 3/8, or 37.5%.

4.
In his other delightful book, How to take a chance, Darrell Huff (1959) discusses the probability of having two boys out of four children, if the probability of a boy and the probability of a girl are equally likely and independent of one another. Many people think the answer is 2/4 = 1/2 = .50 = 50%. Huff not only shows that the correct answer is 6/16 = 3/8 = .375 = 37.5%, but he (actually Irving Geis) illustrates each of the permutations. You can look it up (as Casey Stengel used to say). This is a conceptually different probability problem than the previous one. It just happens to have the same answer. The birthday problem There is a famous probability problem called The Birthday Problem, which asks: If n people are gathered at random in a room, what is the probability that at least two of them have the same birthday (same month and day, but not necessarily same year)? It turns out that for an n of 23 the probability is actually (and non-intuitively) greater than 50%, and for an n of 70 or so it is a virtual certainty! See, for example, the website www.physics.harvard.edu/academics/undergrad/probweek/sol46 and my favorite mathematics book, Introduction to finite mathematics (Kemeny, Snell, and Thompson, 1956). The best way to carry out the calculation is to determine the probability that NO TWO PEOPLE will have the same birthday (using the generalization of the and rule---see above), and subtract that from 100 (see the probability of not-something). Risks A risk is a special kind of percentage, and a special kind of probability, which is of particular interest in epidemiology. The risk of something, e.g., getting lung cancer, can be calculated as the number of people who get something divided by the total number of people who could get that something. (The risk of lung cancer, actually the crude risk of lung cancer, is actually rather low in the United States, despite all of the frightening articles about its prevalence and its admittedly tragic consequences.) There is also an attributable risk (AR), the difference between the percentage of people in one group who get something and the percentage of people in another group who get that something. [N.B. "Attributable" doesn't necessarily mean causal.] In Chapter 1 I gave a hypothetical example of the percentage of smokers who get lung cancer minus the percentage of non-smokers who get lung cancer, a difference of 10% - 2% = 8%. And then there is a relative risk (RR), the ratio of the percentage of people in one group who get something to the percentage of people in another group who get that something. Referring back again to smoking and lung cancer, my hypothetical example produced a ratio of 10%/ 2%, or 5 to 1. Risks need not only refer to undesirable outcomes. The risk of making a million dollars by investing a thousand dollars, for example, is a desirable outcome (at least for the winner). The methodological literature is replete with discussions of the minimum value of relative risk that is worthy of serious consideration. The most common value is "2.00 or more", especially when applying relative risks to individual court cases. Those same sources often have corresponding discussions of a related concept, the probability of causality [causation], PC, which is defined as 1 - 1/RR. If the RR threshold is 2.00, then the PC threshold is .50, or 50%. See Parascandola (1998); Robins (2004); Scheines (2008); and Swaen and vanAmelsvoort (2009) for various points of view regarding both of those thresholds. 
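The three risk quantities just described fit in a few lines of arithmetic. A minimal sketch (the function names are mine), using the hypothetical smoking figures from Chapter 1:

def attributable_risk(p_exposed, p_unexposed):
    # AR: the difference between the two percentages (here, 10% - 2% = 8%).
    return p_exposed - p_unexposed

def relative_risk(p_exposed, p_unexposed):
    # RR: the ratio of the two percentages (here, 10% / 2% = 5).
    return p_exposed / p_unexposed

def probability_of_causation(rr):
    # PC = 1 - 1/RR; it equals .50 when RR = 2.00.
    return 1 - 1 / rr

rr = relative_risk(10, 2)
print(attributable_risk(10, 2))       # 8
print(rr)                             # 5.0
print(probability_of_causation(rr))   # 0.8
print(probability_of_causation(2.0))  # 0.5, the PC threshold that goes with RR = 2.00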
Like probabilities in general, risks can be expressed in fractions, proportions, or percentages. Biehl and Halpern-Felsher (2001) recommend using percentages. (Yay!)

Sensitivity and specificity

In medical diagnostic testing there are two kinds of probability that are of interest:

1. The probability that the test will yield a positive result (a finding that the person being tested has the disease) if the person indeed has the disease. Such a probability is referred to as the sensitivity of the test.

2. The probability that the test will yield a negative result (a finding that the person being tested does not have the disease) if the person indeed does not have the disease. Such a probability is referred to as the specificity of the test.

We would like both of those probabilities to be 100% (a perfect test). Alas, that is not possible. No matter how much time and effort go into devising diagnostic tests there will always be false positives (people who don't have the disease but are said to have it) and false negatives (people who do have the disease but are said to not have it). [Worse yet, as you try to improve the test by cutting down on the number of false positives you increase the number of false negatives, and vice versa.] Its sensitivity is the probability of a true positive; its specificity is the probability of a true negative.

There is something called Youden's Index (Youden, 1950), which combines sensitivity and specificity. Its formula can be written in a variety of ways, the simplest being J = sensitivity + specificity - 100. Theoretically it can range from -100 (no sensitivity and no specificity) to 100 (perfect test), but is typically around 80 (e.g., when both sensitivity and specificity are around 90%). [A more interesting re-formulation of Youden's Index can be written as J = sensitivity - (100 - specificity), i.e., the difference between the true positive rate and the false positive rate.]

For example (an example given by Gigerenzer, 2002, and pursued further in Gigerenzer et al., 2007 with slightly changed numbers), a particular mammography screening test might have a sensitivity of 90% and a specificity of 91% (those are both high probabilities, but not 100%). Suppose that the probability of getting breast cancer is 1% (10 chances in 1000). For every group of 1000 women tested, 10 of whom have breast cancer and 990 of whom do not, 9 of those who have it will be correctly identified (since the test's sensitivity is 90%, and 90% of 10 is 9). For the 990 who do not have breast cancer, 901 will be correctly identified (since the test's specificity is 91%, and 91% of 990 is approximately 901). Therefore there will be 9 true positives, 901 true negatives, 89 false positives, and 1 false negative. Gigerenzer goes on to point out the surprising conclusion that for every positive finding only about 1 in 11 (9 out of the 98 positives), or approximately 9%, is correct. He argues that if a woman were to test positive she needn't be overly concerned, since the probability that she actually has breast cancer is only 9%, with the corresponding odds of 89 to 9 (almost 10 to 1) against it. A further implication is that it might not be cost-effective to use diagnostic tests with sensitivities and specificities as low as those.

In his delightful book entitled Innumeracy (note the similarity to the word "illiteracy"), Paulos (1988) provides a similar example (p. 66) that illustrates how small the probability typically is of having a disease, given a positive diagnosis.
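Gigerenzer's natural-frequency bookkeeping is easy to reproduce. Here is a minimal sketch (the function name and the cohort size of 1000 are my choices) that takes sensitivity, specificity, and the base rate, all as percentages, and returns the percentage of positive test results that are true positives:

def percent_of_positives_that_are_true(sensitivity, specificity, base_rate, group=1000):
    # Natural-frequency bookkeeping for a cohort of 'group' people.
    with_disease = group * base_rate / 100
    without_disease = group - with_disease
    true_positives = with_disease * sensitivity / 100
    false_positives = without_disease * (100 - specificity) / 100
    return 100 * true_positives / (true_positives + false_positives)

# The mammography example: sensitivity 90%, specificity 91%, base rate 1%.
print(round(percent_of_positives_that_are_true(90, 91, 1), 1))  # about 9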
For another (negative) commentary regarding cancer screening, see the JNCI editorial by Woloshin and Schwartz (2009).

Probabilistic words and their quantification in terms of percentages

The English language is loaded with words such as "always", "never", "sometimes", "seldom", etc. [Is "sometimes" more often than "seldom"; or is it the other way round?] There is a vast literature on the extent to which people ascribe various percentages of the time to such words. The key reference is an article that appeared in the journal Statistical Science, written by Mosteller and Youtz (1990; see also the several comments regarding that article in the same journal and the rejoinder by Mosteller and Youtz). They found, for example, that across 20 different studies the word "possible" received associated percentages throughout the entire scale (0% to 100%), with a median of 38.5%. (Some people didn't even ascribe 0% to "never" and 100% to "always". In an earlier article in the nursing research literature, Damrosch and Soeken (1983) reported a mean of 45.21% for "possible", a mean of 13.71% for "never" and a mean of 91.35% for "always".) Mosteller and Youtz quote former president Gerald R. Ford as having said that there was "a very real possibility" of a swine flu epidemic in 1976-77. In a previous article, Mosteller (1976) estimated the public meaning of "a very real possibility" to be approximately 29%, and Boffey (1976) had claimed the experts put the probability of a swine flu epidemic in 1976-77 somewhat lower than that. Shades of concerns about swine flu in 2009!

There is a related matter in weather forecasting. Some meteorologists (e.g., Jeff Haby) have suggested that words be used instead of percentages. In a piece entitled "Using percentages in forecasts" on the weatherprediction.com website, he argues that probabilistic expressions such as "there is a 70% chance of a thunderstorm" should be replaced by verbal expressions such as "thunderstorms will be numerous". (See also the articles by Hallenbeck, 1920, and by Joslyn, et al., 2009, referred to above.) Believe it or not, there is an online program for doing so, put together by Burnham and Schield in 2005. You can get to it at the www.StatLit.org website.

There is also an interesting controversy in the philosophical literature regarding the use of probabilistic words in the analysis of syllogisms, rather than the more usual absolute words such as "All men are mortal; Socrates is a man; therefore, Socrates is mortal". It started with an article in the Notre Dame Journal of Formal Logic by Peterson (1979), followed by an article by Thompson (1982), followed by an unpublished paper by Peterson and Carnes, followed by another article by Thompson (1986), and ending (I think) with a scathing article by Carnes and Peterson (1991). The controversy revolves around the use of words like "few", "many", and "most" in syllogisms. An example given in Thompson's second article (1986) is: "Almost 27% of M are not P. Many more than 73% of M are S. Therefore, some S are not P." Is that a valid argument? (You decide.)

Chance success

I take an 80-item true-false test and I answer 40 of them correctly. Should I be happy about that? Not really. I could get around 40 (= 50%) without reading the questions, if the number of items for which "true" is the right answer is approximately equal to the number of items for which "false" is the right answer, no matter what sort of guessing strategy I might employ (all true, all false, every other one true, etc.).
The scoring directions for many objective tests (true-false, multiple-choice, etc.) often recommend that every score on such tests be corrected for chance success. The formula is R - W/(k-1), where R is the number of right answers, W is the number of wrong answers, and k is the number of choices. For the example just given, R = 40, W = 40, k = 2, so that my score would be 40 - 40/(2-1) = 40 - 40 = 0, which is what I deserve! For more on chance success and the correction for guessing, see Diamond and Evans (1973).

Percentages and probability in the courtroom

As you might expect, probability (in terms of either percentages or fractions) plays an important role in jury trials. One of the most notorious cases was that of the famous professional football player and movie star, O.J. Simpson, who was accused in 1994 of murdering his former wife, Nicole, and her friend, Ronald Goldman. There was a great deal of evidence regarding probabilities that was introduced in that trial, e.g., the probability that an individual chosen at random would wear a size 12 shoe AND have blood spots on the left side of his body. (Simpson wears size 12; the police found size 12 footprints nearby, with blood spots to the left of the footprints. Simpson claimed he cut his finger at home.) For more on this, see the article by Merz and Caulkins (1995); the commentary by John Allen Paulos (1995---yes, that Paulos), who called it a case of "statisticide"; and the letters by defense attorney Alan Dershowitz (1995, 1999). [Simpson was acquitted.]

Several years prior to the Simpson case (in 1964), a Mrs. Juanita Brooks was robbed in Los Angeles by a person whom witnesses identified as a white blonde female with a ponytail, who escaped in a yellow car driven by a black male with a mustache and a beard. Janet and Malcolm Collins, an inter-racial couple who fit those descriptions, were arrested and convicted of the crime, on the basis of estimates of the following probabilities for persons drawn at random:

P(yellow car) = 1/10 = 10%
P(male with mustache) = 1/4 = 25%
P(female with hair in ponytail) = 1/10 = 10%
P(female with blonde hair) = 1/3 = 33 1/3 %
P(black male with beard) = 1/10 = 10%
P(inter-racial couple in car) = 1/1000 = .1%

Product of those probabilities = 1/12,000,000 = .00000833%

[The convictions were overturned because there was no empirical evidence provided for those probabilities and their independence. Oy.]

There was another interesting case, Castenada v. Partida, involving the use of percentages in the courtroom, which was cited in an article by Gastwirth (2005). It concerned whether or not Mexican-Americans were discriminated against in the jury-selection process. (They constituted only 39% of the jurors, although they constituted 79.1% of the relevant population and 65% of the adults in that population who had some schooling.)

My favorite percentages and probability example

Let me end this chapter by citing my favorite example of misunderstanding of probabilities, also taken from Paulos (1988): "Later that evening we were watching the news, and the TV weather forecaster announced that there was a 50 percent chance of rain for Saturday and a 50 percent chance for Sunday, and concluded that there was therefore a 100 percent chance of rain that weekend." (p. 3) I think that says it all.

Chapter 4: Sample percentages vs. population percentages

Almost all research studies that are concerned with percentages are carried out on samples (hopefully random) taken from populations, not on entire populations.
It follows that the percentage in the sample might not be the same as the percentage in the population from which the sample is drawn. For example, you might find that in a sample of 50 army recruits 20 of them, or 40%, are Catholics. What percentage of all army recruits is Catholic? 40%? Perhaps, if the sample mirrors the population. But it is very difficult for a sample to be perfectly representative of the population from which it is drawn, even if it is randomly drawn. The matter of sampling error, wherein a sample statistic (such as a sample percentage) may not be equal to the corresponding population parameter (such as a population percentage), is the basic problem to which statistical inference is addressed. If the two are close, the inference from sample to population is strong; if they're not, it's weak. How do you make such inferences? Read on.

Point estimation

A single-best estimate of a population percentage is the sample percentage, if the sample has been drawn at random, because the sample percentage has been shown to have some nice statistical properties, the most important of which is that it is unbiased. "Unbiased" means that the average of the percentages for a large number of repeated samples of the same size is equal to the population percentage, and therefore it is a long-run property. It does NOT mean that you'll hit the population percentage on the button each time. But you're just as likely to be off on the high side as you are to be off on the low side. How much are you likely to be off? That brings us to the concept of a standard error.

Standard error

A standard error of a statistic is a measure of how far off you're likely to be when you use a sample statistic as an estimate of a population parameter. Mathematical statisticians have determined that the standard error of a sample percentage P is equal to the square root of the product of the population percentage and 100 minus the population percentage, divided by the sample size n, if the sample size is large. But you almost never know the population percentage (you're trying to estimate it!). Fortunately, the same mathematical statisticians have shown that the standard error of a sample percentage is APPROXIMATELY equal to the square root of the product of the sample percentage and 100 minus the sample percentage, divided by the sample size n; i.e., S.E. ≈ √[P(100 - P)/n], where the symbol ≈ means "is approximately equal to". For example, if you have a sample percentage of 40 for a sample size of 50, the standard error is √[40(60)/50], which is equal to 6.93 to two decimal places, but let's call it 7. So you would likely be off by about 7% (plus or minus) if you estimate the population percentage to be 40%.

Edgerton (1927) constructed a clever abac (mathematical nomogram) for reading off a standard error of a proportion (easily convertible to a percentage), given the sample proportion and the sample size. [Yes, that was 1927, 82 years ago!] It's very nice. There are several other nomograms that are useful in working with statistical inferences for percentages (see, for example, Rosenbaum, 1959). And you can even get a business-card-size chart of the standard errors for various sample sizes at the www.gallup-robinson.com website.
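The approximate standard error, and the two-standard-error "margin of error" discussed next, take one line each to compute. A minimal sketch (the function name is mine), using the army-recruit numbers:

import math

def standard_error_of_percentage(p, n):
    # Approximate standard error of a sample percentage p (on the 0-100 scale)
    # based on a random sample of n cases.
    return math.sqrt(p * (100 - p) / n)

se = standard_error_of_percentage(40, 50)
print(round(se, 2))      # 6.93, i.e., about 7 percentage points
print(round(2 * se, 2))  # 13.86, i.e., about 14: the "margin of error"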
Interval estimation (confidence intervals)

Since it is a bit presumptuous to use just one number as an estimate of a population percentage, particularly if the sample size is small (and 50 is a small sample size for a survey), it is recommended that you provide two numbers within which you believe the population percentage to lie. If you are willing to make a few assumptions, such as that sample percentages are normally distributed around population percentages, you should lay off two (it's actually 1.96, but call it two) standard errors to the right and left of the sample percentage to get a 95% confidence interval for the population percentage, i.e., an interval that you are 95% confident will capture the unknown population percentage. (Survey researchers usually call two standard errors the "margin of error.") For our example, since the standard error is 7%, two standard errors are 14%, so 40% ± 14%, an interval extending from 26% to 54%, constitutes the 95% confidence interval for the population percentage. 40% is still your single best estimate, but you're willing to entertain the possibility that the population percentage could be as low as 26% and as high as 54%. It could of course be less than 26% or greater than 54%, but you would be pretty confident that it is not. Since two standard errors = 2√[P(100 − P)/n], and P(100 − P) is close to 2500 for values of P near 50, a reasonably good approximation to the margin of error is 100/√n.

I said above that the formula for the standard error of a percentage is a function of the population percentage, but since that is usually unknown (that's what you're trying to estimate) you use the sample percentage instead to get an approximate standard error. That's OK for large samples, and for sample and population percentages that are close to 50. A situation where it very much does matter whether you use the sample percentage or the population percentage in the formula for the standard error is in the safety of clinical trials for which the number of adverse events is very small. For example, suppose no adverse events occurred in a safety trial for a sample of 30 patients. The sample P = 0/30 = 0%. Use of the above formula for the standard error would produce a standard error of 0, i.e., no sampling error! Clearly something is wrong there. You can't use the sample percentage, and the population percentage is unknown, so what can you do? It turns out that you have to ask what is the worst that could happen, given no adverse events in the sample. The answer comes from "The rule of 3" (sort of like "The rule of 72" for interest rates; see Chapter 1). Mathematical statisticians have shown that the upper 95% confidence bound for zero events in a sample is approximately 3/n in terms of a proportion, or approximately 300/n in terms of a percentage. (See Jovanovic & Levy, 1997, and van Belle, 2002, regarding this intriguing result. The latter source contains all sorts of "rules of thumb", some of which are very nice, but some of the things that are called rules of thumb really aren't, and there are lots of typos.) The lower 95% confidence bound is, of course, 0. So for our example you could be 95% confident that the interval from 0% to 10% (300/30 = 10) captures the percentage of adverse events in the population from which the sample has been drawn. The lower 95% confidence bound if all events in a sample are "adverse" is 100 − 300/n and the upper bound is 100. For that case you could be 95% confident that the interval from 90% to 100% "captures" the percentage of adverse events in the population from which the sample has been drawn.
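Here is a companion sketch, again purely illustrative, that produces the two-standard-error ("margin of error") interval for the army-recruit example and the rule-of-3 upper bound for the zero-adverse-events example:

    import math

    def ci95_percentage(p, n):
        # approximate 95% confidence interval: sample percentage plus or minus two standard errors
        se = math.sqrt(p * (100 - p) / n)
        return p - 2 * se, p + 2 * se

    def rule_of_three_upper(n):
        # approximate upper 95% bound (as a percentage) when zero events are observed in n trials
        return 300 / n

    low, high = ci95_percentage(40, 50)
    print(round(low, 1), round(high, 1))   # about 26.1 and 53.9, i.e., the 26%-to-54% interval
    print(rule_of_three_upper(30))         # 10.0, i.e., the 0%-to-10% interval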
There is nothing special about a 95% confidence interval, other than the fact that it is conventional. If you want to have greater confidence than 95% for a given sample size, you have to have a wider interval. If you want to have a narrower confidence interval, you can either settle for less confidence or take a larger sample size. [Do you follow that?] But the only way you can be 100% confident of your inference is to have an interval that goes from 0 to 100, i.e., the entire scale! One reason why many researchers prefer to work with proportions rather than percentages is that when the statistic of interest is itself a percentage it is a bit awkward to talk about a 95% confidence interval for a %. But I don't mind doing that. Do you?

In Chapter 1 I cited an article in the Public Opinion Quarterly by S.S. Wilks (1940a) regarding opinion polling. In a supplementary article in that same issue he provided a clear exposition of confidence intervals for single percentages and for differences between two percentages (see the following chapter for the latter matter). An article two years later by Mosteller and McCarthy (1942) in that journal shed further light on the estimation of population percentages. [I had the personal privilege of TAing for both Professor Mosteller and Professor McCarthy when I was doing my doctoral study at Harvard in the late 1950s. Frederick Mosteller was also an exceptional statistician.] For a very comprehensive article concerning confidence intervals for proportions, see Newcombe (1998a). He actually compared SEVEN different methods for getting confidence intervals for proportions, all of which are equally appropriate for percentages.

Hypothesis testing (significance testing)

Another approach to statistical inference (and until recently far and away the most common approach) is the use of hypothesis testing. In this approach you start out by making a guess about a parameter, collect data for a sample, calculate the appropriate statistic, and then determine whether or not your guess was a good one. Sounds complicated, doesn't it? It is, so let's take an example. Going back to the army recruits, suppose that before you carried out the survey you had a hunch that about 23% of the recruits would be Catholic. (You read somewhere that 23% of adults in the United States are Catholic, and you expect to find the same % for army recruits.) You therefore hypothesize that the population percentage is equal to 23. Having collected the data for a sample of 50 recruits, you find that the percentage Catholic in the sample is 40. Is the 40 close enough to the 23 so that you would not feel comfortable in rejecting your hypothesis? Or are the two so discrepant that you can no longer stick with your hypothesis? How do you decide? Given that the margin of error for a percentage is two standard errors, and for your data two standard errors is approximately 14%, you can see that the difference of 17% between the hypothesized 23% and the obtained 40% is greater than the margin of error, so your best bet is to reject your hypothesis (it doesn't reconcile with the sample data). Does that mean that you have made the correct decision? Not necessarily.
There is still some (admittedly small) chance that you could get 40% Catholics in a sample of 50 recruits when there are actually only 23% Catholics in the total population of army recruits.

We've actually cheated a little in the previous paragraph. Since the population percentage is hypothesized to be 23, the 23 should be used to calculate the standard error rather than the 40. But for most situations it shouldn't matter much whether you use the sample percentage or the hypothesized population percentage to get the standard error. [√[40(60)/50] = 6.93 is fairly close to √[23(77)/50] = 5.95, for example.]

The jargon of hypothesis testing

There are several technical terms associated with hypothesis testing, similar to those associated with diagnostic testing (see the previous chapter): The hypothesis that is tested is often called a null hypothesis. (Some people think that a null hypothesis has to have zero as the hypothesized value for a parameter. They're just wrong.) There is sometimes a second hypothesis that is pitted against the null hypothesis (but not for our example). It is called, naturally enough, an alternative hypothesis. If the null hypothesis is true (you'll not know if it is or not) and you reject it, you are said to have made a Type I error. If the null hypothesis is false (you'll not know that either) and you fail to reject it, you are said to have made a Type II error. The probability of making a Type I error is called the level of significance and is given the Greek symbol α. The probability of making a Type II error doesn't usually have a name, but it is given the Greek symbol β. 1 − β is called the power of the hypothesis test.

Back to our example

Null hypothesis: Population percentage = 23. If the null hypothesis is rejected, the sample finding is said to be "statistically significant." If the null hypothesis is not rejected, the sample finding is said to be "not statistically significant." Suppose you reject that hypothesis, since the corresponding statistic was 40, but it (the null hypothesis) is actually true. Then you have made a Type I error (rejecting a true null hypothesis). If you do not reject the null hypothesis and it's false (and should have been rejected), then you would have made a Type II error (not rejecting a false null hypothesis). The level of significance, α, should be chosen before the data are collected, since it is the "risk" that one is willing to run of making a Type I error. Sometimes it is not stated beforehand. If the null hypothesis is rejected, the researcher merely reports the probability of getting a sample result that is even more discrepant from the null hypothesis than the one actually obtained if the null hypothesis is true. That probability is called a p value, and is typically reported as p < .05 (i.e., 5%), p < .01 (i.e., 1%), or p < .001 (i.e., .1%) to indicate how unlikely the sample result would be if the null hypothesis is true. β and/or power (= 1 − β) should also be stated beforehand, but they depend upon the alternative hypothesis, which is often not postulated. [In order to draw the "right" sample size to test a null hypothesis against an alternative hypothesis, the alternative hypothesis must be explicitly stated. Tables and formulas are available (see, for example, Cohen, 1988) for determining the optimal sample size for a desired power.]
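The test just described can also be expressed as a z ratio (the difference between the sample and hypothesized percentages divided by the standard error), which amounts to the same thing as asking whether the difference exceeds the margin of error. A small sketch, with the choice of standard error left to the user (the function name is mine):

    import math

    def z_for_percentage(p_obs, p_null, n, use_null_se=True):
        # z ratio for testing a hypothesized population percentage;
        # the standard error may be based on either the hypothesized or the sample percentage
        p_se = p_null if use_null_se else p_obs
        se = math.sqrt(p_se * (100 - p_se) / n)
        return (p_obs - p_null) / se

    print(round(z_for_percentage(40, 23, 50), 2))                     # about 2.86, using the hypothesized 23
    print(round(z_for_percentage(40, 23, 50, use_null_se=False), 2))  # about 2.45, using the sample 40

Either way the z exceeds the conventional cutoff of about 2, consistent with rejecting the hypothesis of 23%.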
The connection between interval estimation and hypothesis testing

You might have already figured out that you can do hypothesis testing for a percentage as a special case of interval estimation. It goes like this: 1. Get a confidence interval around the sample percentage. 2. If the hypothesized value for the population percentage is outside that interval, reject it; if it's inside the interval, don't reject it. [Strictly speaking, you should use the sample percentage to get the standard error in interval estimation and you should use the hypothesized population percentage to get the standard error in hypothesis testing--see above--but let's not worry about that here.] Neat, huh?

Let's consider the army recruits example again. The sample percentage is 40. The 95% confidence interval goes from 26 to 54. The hypothesized value of 23 falls outside that interval. Therefore, reject it. (That doesn't mean it's necessarily false. Remember Type I error!) It's all a matter of compatibility. The sample percentage of 40 is a piece of real empirical data. You know you got that. What you don't know, but you wish you did, is the population percentage. Percentages of 26 to 54 are compatible with the 40, as indicated by the 95% confidence you have that the interval from 26 to 54 captures the population percentage. 23 is just too far away from 40 to be defensible.

van Belle (2002) takes an idiosyncratic approach to interval estimation vs. hypothesis testing. He claims that you should use the hypothesis-testing approach in order to determine an appropriate sample size, before the study is carried out; but you should use interval estimation to report the results after the study has been carried out. I disagree. There are the same sorts of sources for the determination of sample size in the context of interval estimation as there are for the determination of sample size in the context of hypothesis testing. (See the reference to Walker & Lev, 1953 in the following section.) In my opinion, if you have a hypothesis to test (especially if you have both a null and an alternative hypothesis), you should use hypothesis-testing procedures for the determination of sample size. If you don't, go the interval estimation route all the way. Caution: Using interval estimation to do hypothesis testing can be more complicated than doing hypothesis testing directly. I will provide an example of such a situation in Chapter 5 in conjunction with statistical inferences for relative risks.

Sample size

In all of the foregoing it was tacitly assumed that the size of the sample was fixed and the statistical inference was to be based upon the sample size that you were stuck with. But suppose that you were interested in using a sample size that would be optimal for carrying out the inference from a sample percentage to the percentage in the population from which the sample had been drawn. There are rather straightforward procedures for so doing. All you need do is decide beforehand how much confidence you want to have when you get the inferred interval, how much error you can tolerate in making the inference, have a very rough approximation of what the population percentage might be, and use the appropriate formula, table, or internet routine for determining what size sample would satisfy those specifications.

Let's take an example. Suppose you were interested in getting a 95% confidence interval (95% is conventional), you don't want to be off by more than 5%, and you think the population percentage is around 50 (that's when the standard error is largest, so that's the most conservative estimate). The formula for the minimum optimal sample size is

n ≈ 4z²P(100 − P)/W² [see, for example, Walker and Lev (1953, p. 70)],
where P is your best guess, W is the width of the confidence interval (the width is twice the margin of error), and z is the number of standard errors you need to "lay off" to the right and to the left of the sample P (z comes from the normal, bell-shaped sampling distribution). Substituting the appropriate values in that formula (z is approximately equal to 2 for 95% confidence), you find that n is equal to 4(2)²(50)(100 − 50)/10² = 400. If you draw a sample of less than 400 you will have less than 95% confidence when you get the sample P and construct the interval. If you want more confidence than 95% you'll need to lay off more standard errors and/or have a larger n (for three standard errors you'll need an n of about 900). If you want to stay with 95% confidence but you can tolerate more error (say 10% rather than 5%, so that W = 20), then you could get away with an n of about 100.

The Dimension Research, Inc. website actually does all of the calculations for you. Just google "dimension research calculator", click on the first entry that comes up, and click on "sample size for proportion" on the left-hand-side menu. Then select a confidence interval, enter your best-guess corresponding percentage P and your tolerable W, click the Calculate button, and Shazam! You've got n. van Belle (2002) claims that you should have a sample size of at least 12 when you construct a confidence interval. He provides a diagram that indicates the precision of an interval is very poor up to an n of 12 but starts to level off thereafter.
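Those sample-size figures are easy to reproduce. Here is a sketch of the Walker and Lev formula, rounding up to the next whole person (the function name is mine):

    import math

    def n_for_percentage_ci(p_guess=50, width=10, z=2):
        # minimum n for a confidence interval of total width W (in percentage points): n = 4 z^2 P(100-P) / W^2
        return math.ceil(4 * z**2 * p_guess * (100 - p_guess) / width**2)

    print(n_for_percentage_ci(50, 10))      # 400  (margin of error 5%)
    print(n_for_percentage_ci(50, 10, 3))   # 900  (three standard errors)
    print(n_for_percentage_ci(50, 20))      # 100  (margin of error 10%)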
Percentage transformations

One of the problems when carrying out statistical inferences for percentages is the fact that percentages are necessarily boxed in between 0 and 100, and often have rather strange distributions across the aggregates for which they have been computed. There can't be less than 0% and there can't be more than 100%, so if most of the observations are at the high end of the scale (large percentages) or at the low end of the scale (small percentages) it is almost impossible to satisfy the linearity and normal distribution assumptions that are required for many inferential tests. Consider the following example taken from the Ecstathy website: You have data regarding % Postgraduate Education and % Belief in Biblical Literalism for members of 13 religious denominations (Unitarian-Universalist, Episcopal Church, United Presbyterian, United Church of Christ, United Methodist, Evangelical Lutheran Church, Roman Catholic, Southern Baptist, Seventh Day Adventist, Church of Nazarene, Assemblies of God, Jehovah's Witness, and Church of God in Christ), and you're interested in the relationship between those two variables. You plot the data as depicted in the following scatter diagram (which includes the best-fitting line and the regression statistics):

[Scatter diagram from the Ecstathy blog: % Postgraduate Education vs. % Belief in Biblical Literalism, with denomination labels.]

Here is the plot without the names of the religions superimposed (and with proportions rather than percentages, but that doesn't matter):

[The same scatter diagram without the denomination labels.]

You would like to use Pearson's product-moment correlation coefficient to summarize the relationship and to make an inference regarding the relationship in the population of religious denominations from which those 13 have been drawn (assume that the sample is a simple random sample, which it undoubtedly was not!). But you observe that the plot without the names is not linear (it is curvilinear) and the assumption of bivariate normality in the population is also not likely to be satisfied. What are you to do? The recommendation made by the bloggers at the website is to transform both sets of percentages into logits (which are special types of logarithmic transformations), plot the logits, and carry out the analysis in terms of the logits of the percentages rather than in terms of the percentages themselves. It works; here's the plot (this one looks pretty linear to me):

[Scatter diagram of the logit-transformed data.]

There are transformations of percentages other than logits that have been recommended in the methodological literature--see, for example, the articles by Zubin (1935), by Finney (1947; 1975), and by Osborne (2002). Zubin even provided a handy-dandy table for converting a percentage into something he called t or T (not the t of the well-known t test, and not the T of T scores). Nice.
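For what it's worth, the logit transformation itself is a one-liner. A sketch (percentages of exactly 0 or 100 would have to be adjusted first, since their logits are undefined):

    import math

    def logit(pct):
        # logit of a percentage that lies strictly between 0 and 100
        p = pct / 100
        return math.log(p / (1 - p))

    print(round(logit(20), 3))   # -1.386
    print(round(logit(50), 3))   #  0.0
    print(round(logit(95), 3))   #  2.944

One would transform both variables this way and then compute the Pearson correlation on the logits.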
The classic case of inferences regarding single percentages

You manufacture widgets to be sold to customers. You worry that some of the widgets might be defective, i.e., you are concerned about quality control. What should you do? If the widgets are very small objects (such as thumbtacks) that are made by the thousands in an assembly-line process, the one thing you can't afford to do is inspect each and every one of them before shipping them out. But you can use a technique that's called acceptance sampling, whereby you take a random sample of, say, 120 out of 2000 of them, inspect all of the widgets in the sample, determine the percentage of defectives in the sample, and make a judgment regarding whether or not that percentage is acceptable. For example, suppose you claim (hope?) that your customers won't complain if there are 2% (= 40) or fewer defectives in the lot of 2000 widgets that they buy. You find there are 3 defectives (1.67%) in the sample. Should you automatically accept the lot (the population) from which the sample has been drawn? Not necessarily. There is some probability that the lot of 2000 has more than 2% defectives even though the sample has only 1.67%. This is the same problem that was discussed in a different context (see above) regarding the percentage of army recruits that is Catholic. Once again, you have three choices: (1) get a point estimate and use it (1.67%) as your single best estimate; (2) establish a confidence interval around that estimate and see whether or not that interval captures the tolerable 2%; or (3) directly test the 2% as a null hypothesis. There is an excellent summary of acceptance sampling available at myphliputil.pearsoncmg.com/student/bp_heizer...7/ct02.pdf. For the problem just considered, it turns out that the probability of acceptance is approximately .80 (i.e., an 80% probability). I used the same numbers they do; their "widgets" are batteries, and they take into account the risk to the customer (consumer) as well as the risk to the manufacturer (producer).
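The quoted .80 can be checked with the binomial distribution, if we assume (as I believe their example does) that the plan is to accept the lot whenever the sample of 120 contains 3 or fewer defectives and that the true defective rate is 2%:

    from math import comb

    def prob_accept(n, c, p):
        # probability of finding c or fewer defectives in a sample of n when the true defective rate is p
        return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

    print(round(prob_accept(120, 3, 0.02), 2))   # about 0.78, close to the .80 cited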
A great website for inferences regarding percentages in general

The West Chester University website has an excellent collection of discussions of statistical topics. Although that website is intended primarily for students who are taking college courses online, any interested parties can download any of the various sections. Section 7_3 is concerned with the finite population correction that should be used for inferences regarding percentages for samples drawn from small, i.e., finite, populations. See also Krejcie & Morgan, 1970; Buonaccorsi, 1987; and Berry, Mielke, and Helmericks, 1988 for such inferences. (van Belle (2002) argues that the correction can usually be ignored.) The website's name is: http://courses.wcupa.edu/rbove/Berenson/CD-ROM%20Topics/Section 7_3

Chapter 5: Statistical inferences for differences between percentages and ratios of percentages

In the previous chapter I talked about statistical inferences for a single percentage. Such inferences are fairly common for survey research but not for other kinds of research, e.g., experimental research in which two or more treatments are compared with one another. The inference of greatest interest in experimental research is for the difference between two statistics or the ratio of two statistics, e.g., the percentage of people in the experimental group who do (or get) something and the percentage of people in the control group who do (or get) something. The "something" is typically a desirable outcome such as "passed the course" or an undesirable outcome such as "died."

Differences between percentages

Just as for a single percentage, we have our choice of point estimation, interval estimation, or hypothesis testing. The relevant point estimate is the difference between the two sample percentages. Since they are percentages, their difference is a percentage. If one of the percentages is the % in an experimental group that survived (they got the pill, for example), and the other percentage is the % in a control group that survived (they didn't get the pill), then the difference between the two percentages gives you an estimate of the absolute effect of the experimental condition. If 40% of experimental subjects survive and 30% of control subjects survive, the estimate of the experimental effect is 10%. But just as for a single percentage, it is better to report two numbers rather than one number for an estimate, i.e., the endpoints of a confidence interval around the difference between the two percentages. That necessitates the calculation of the standard error of the difference between two percentages, which is more complicated than the standard error of a single percentage. The formula for two independent samples (unmatched) and the formula for two dependent samples (matched by virtue of being the same people or matched pairs of people) are different. The independent-samples case is more common. The formula for the standard error of the difference between two independent percentages is:

S.E. ≈ √[P1(100 − P1)/n1 + P2(100 − P2)/n2]

where the P's are the two percentages and the n's are the two sample sizes. It often helps to display the relevant data in a 2-by-2 table:

           Sample 1      Sample 2
Success    P1            P2
Failure    100 − P1      100 − P2

where "Success" and "Failure" are the two categories of the variable upon which the percentages are taken, and need not be pejorative. Edgerton's (1927) abac can be used to read off the standard error of the difference between two independent percentages, as well as the standard error of a single sample percentage. Later, Hart (1949) and Lawshe and Baker (1950) presented quick ways to test the significance of the difference between two independent percentages. Stuart (1963) provided a very nice set of tables of standard errors of percentages for differences between two independent samples for various sample sizes, which can also be used for the single-sample case. And Fleiss, Levin, and Paik (2003) provide all of the formulas you'll ever need for inferences regarding the difference between percentages. They even have a set of tables (pp. 660-683) for determining the appropriate sample sizes for testing the significance of the difference between two percentages. (See also Hopkins & Chappell, 1994.)

The formula for dependent samples is a real mess, involving not only the sample percentages and the sample sizes but also the correlation between the two sets of data (since the percentages are for the same people or for matched people). However, McNemar (1947) provided a rather simple formula that is a reasonable approximation to the more complicated one:

S.E. ≈ (100/n)√(b + c)

where n (= n1 = n2, since the people are paired with themselves or with their "partners"), b is the number of pairs for which the person in Sample 1 was a "success" and the partner in Sample 2 was a "failure", and c is the number of pairs for which the person in Sample 1 was a "failure" and the partner in Sample 2 was a "success."

                            Sample 2
                     Success      Failure
Sample 1  Success      [a]          [b]        P1 = (a+b)/n
Sample 1  Failure      [c]          [d]
                  P2 = (a+c)/n

a is the number of pairs for which both members were "successes", and d is the number of pairs for which both members were "failures"; but, rather surprisingly, neither a nor d contributes to the standard error. If a researcher is concerned with change in a given sample from Time 1 to Time 2, that also calls for the dependent-samples formula.

An example to illustrate both the independent and the dependent cases

You are interested in the percentage of people who pass examinations in epidemiology. Suppose there are two independent samples of 50 students each (50 males and 50 females) drawn from the same population of graduate students, where both samples take an epidemiology examination. The number of males who pass the examination is 40 and the number of females who pass is 45. Displaying the data as suggested above, we have:

           Males            Females
Passed     40/50 = 80%      45/50 = 90%
Failed     10/50 = 20%       5/50 = 10%
H" " 80(20)/50 + 90(10)/50 = 7.07 (rounded to two places) On the other hand, suppose that these students consist of 50 married couples who take the same course, have studied together (within pairs, not between pairs), and take the same epidemiology examination. Those samples would be dependent. If in 74% of the couples both husband and wife passed, in 6% of the couples wife passed but husband failed, in 16% of the couples husband passed but wife failed, and in 2 couples both spouses failed, we would have the following 2 by 2 table: Husband Wife Passed Failed Passed 37 [a] 3 [b] 40 (= 80%) Failed 8 [c] 2 [d] 10 45 (= 90%) S.E. H" 100/50" (3 + 8) = 6.63 In both cases 80% of the males passed and 90% of the females passed, but the standard error is smaller for matched pairs since the data for husbands and wives are positively correlated and the sampling error is smaller. If the correlation between paired outcomes is not very high, say less than .50 (van Belle, 2002) the pairing of the data is not very sensitive. If the correlation should happen to be NEGATIVE, the sampling error could actually be WORSE for dependent samples than for independent samples! Would you believe that there is also a procedure for estimating the standard error of the difference between two partially independent, partially dependent percentages? In the husbands and wives example, for instance, suppose there are some couples for which you have only husband data and there are some couples for which you have only wife data. Choi and Stablein (1982) and Thompson (1995) explain how to carry out statistical inferences for such situations. Interval estimation for the difference between two independent percentages As far as the interval estimation of the difference between two independent population percentages is concerned, we proceed just as we did for a single population percentage, viz., laying off two S.E.s to the right and to the left of the difference between the two sample percentages in order to get a 95% confidence interval for the difference between the two corresponding population percentages. The sample difference is 90% - 80 % = 10% for our example. The standard error for the independent case is 7.07%. Two standard errors would be 14.14%. The 95% confidence interval for the population difference would therefore extend from 10% - 14.14% to 10% + 14.14%, i.e., from -4.14% to 24.14%. You would be 95% confident that the interval would capture the difference between the two population percentages. [Note that the -4.14% is a difference, not an actual %.] Since Sample 2 is the wives sample and Sample 1 is the husbands sample, and we subtracted the husband % from the wife %, we are willing to believe that in the respective populations the difference could be anywhere between 4.14% in favor of the husbands and 24.14% in favor of the wives. I refer you to an article by Wilks (1940b) for one of the very best discussions of confidence intervals for the difference between two independent percentages. And in the previous chapter I mentioned an article by Newcombe (1998a) in which he compared seven methods for determining a confidence interval for a single proportion. He followed that article with another article (Newcombe, 1998b) in which he compared ELEVEN methods for determining a confidence interval for the difference between two independent proportions! 
Hypothesis testing for the difference between two independent percentages

In the previous chapter I pointed out that, except for a couple of technical details, interval estimation subsumes hypothesis testing, i.e., the confidence interval consists of all of the hypothesized values of a parameter that are not rejectable. For our example any hypothesis concerning a population difference of -4.14 through 24.14 would not be rejected (and would be regarded as not statistically significant at the 5% level). Any hypothesis concerning a population difference that is outside of that range would be rejected (and would be regarded as statistically significant at the 5% level).

In a very interesting article concerning the statistical significance of the difference between two independent percentages (he uses proportions), the late and ever-controversial Alvan R. Feinstein (1990) proposed the use of a "unit fragility index" in conjunction with the significance test. This index provides an indication of the effect of a switch of an observation from one category of the dependent variable to the other category (his illustrative example had to do with a comparison of cephaloridine with ampicillin in a randomized clinical trial). That index is especially helpful in interpreting the results of a trial in which the sample is small. (See also the commentary by Walter, 1991, regarding Feinstein's index.) Feinstein was well known for his invention of methodological terminology. My favorite of his terms is "trohoc" [that's "cohort" spelled backwards] instead of "case-control study." He didn't like case-control studies, in which cases who have a disease are retrospectively compared with controls who don't, in an observational non-experiment.

There is an advantage of interval estimation over hypothesis testing that I've never seen discussed in the methodological literature. Researchers often find it difficult to hypothesize the actual magnitude of a difference that they claim to be true in the population (and is not null). The theory underlying their work is often not far enough advanced to suggest what the effect might be. They are nevertheless eager to know its approximate magnitude. Therefore, instead of pitting their research (alternative) hypothesis against a null hypothesis and using power analysis for determining the appropriate sample size for testing the effect, all they need to do is specify the magnitude of a tolerable width of a confidence interval (for a margin of error of, say, 3%), use that as the basis for the determination of sample size (see the appropriate formula in Fleiss, et al., 2003), carry out the study, and report the confidence interval. Nice; straightforward; no need to provide granting agencies with weak theories; and no embarrassment that often accompanies hurriedly postulated effects that far exceed those actually obtained.

Two examples of lots of differences between percentages

Peterson, et al. (2009) were interested in testing the effectiveness of a particular intervention designed to help teenagers to stop smoking, using a rather elaborate design that had a tricky unit-of-analysis problem (schools containing teenage smokers, rather than individual students, were randomly assigned to the experimental treatment and to the control treatment). Their article is loaded with both confidence intervals for, and significance tests of, the difference between two percentages. Sarna, et al. (2009) were also interested in stopping smoking, but for nurses rather than teenagers.
Like Peterson, et al., their article contains several tables of confidence intervals and significance tests for the differences between percentages. But it is about a survey, not an experiment, in which nurses who quit smoking were compared to nurses who did not quit smoking, even though all of them registered at the Nurses QuitNet website for help in trying to do so. If you're interested in smoking cessation, please read both of those articles and let me know (tknapp5@juno.com) what you think of them.

The difference between two percentages that have to add to 100

In Chapter 2 I said that I don't care much for the practice of taking differences between two percentages that have been calculated on the same base for the same variable, e.g., the difference in support for Candidate A and Candidate B for the same political office, when they are the only two candidates. I am even more opposed to making any statistical inferences for such differences. In that same section of Chapter 2 I referred to a difference between two percentages that didn't have to add to 100, but their sum plus the sum of one or more other percentages did have to add to 100. Let us consider the difference in support for Candidate A and Candidate B when there is a third candidate, Candidate C, for the same political office. (Remember Ralph Nader?) The following example is taken from the guide that accompanies the statistical software StatPac (Walonick, 1996): In a random sample of 107 people, 38 said they planned to vote for Candidate A, 24 planned to vote for Candidate B, and 45 favored Candidate C for a particular public office. The researcher was interested only in the difference between the support for Candidate A (35.5%) and the support for Candidate B (22.4%). Is that difference statistically significant? Several statisticians have tackled that problem. (See, for example, Kish, 1965; Scott & Seger, 1983.) In order to test the significance of the difference between the 35.5% for Candidate A and the 22.4% for Candidate B, you need to use the z ratio of the difference (35.5 − 22.4 = 13.1) to the standard error of that difference. The formula for the approximate standard error, with the P's expressed as proportions, is the square root of the expression [(P1 + P2) − (P1 − P2)²]/n, and the sampling distribution is normal. (Walonick uses t; he's wrong.) For this example P1 = .355, P2 = .224, and n = 107, yielding a standard error of about .072 and a z of about 1.8, which is not statistically significant at the conventional .05 level.
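Expressed in proportions, the calculation looks like this (a sketch, with my own function name):

    import math

    def z_same_base(p1, p2, n):
        # z for the difference between two proportions computed on the same base (same multinomial sample)
        se = math.sqrt((p1 + p2 - (p1 - p2) ** 2) / n)
        return (p1 - p2) / se

    print(round(z_same_base(0.355, 0.224, 107), 2))   # about 1.81, short of the 1.96 needed at the .05 level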
Ratios of percentages

Now for the "biggie" in epidemiological research. We've already discussed the difference between absolute risk, as represented by the difference between two percentages, and relative risk, as represented by the ratio of two percentages. Relative risk tends to be of greater importance in epidemiology, since the emphasis is on risks for large populations of people having one characteristic compared to risks for equally large populations of people having a contrasting characteristic. The classic example is smokers vs. non-smokers and the relative risk of getting lung cancer. But let's take as a simpler example the relationship between maternal age and birthweight. Fleiss, et al. (2003) provide a set of hypothetical data for that problem. Here are the data:

                         Birthweight
Maternal age      ≤ 2500 grams    > 2500 grams
≤ 20 years             10              40
> 20 years             15             135

The ratio of interest is the percentage of younger women whose baby is of low birthweight (10/50, or 20%) divided by the percentage of older women whose baby is of low birthweight (15/150, or 10%). The relative risk of low birthweight is therefore 20%/10%, or 2.00. If these data are for a random sample of 200 women, what is the 95% confidence interval for the relative risk in the population from which the sample has been drawn? Is the relative risk of 2.00 statistically significant at the 5% level? Although the first question is concerned with interval estimation and the second question is concerned with hypothesis testing, the two questions are essentially the same, as we have already seen several times. I shall give only a brief outline of the procedures for answering those questions. The determination of an estimate of the standard error of the ratio of two percentages is a bit complicated, but here it is (Fleiss, et al., 2003, p. 132):

S.E. ≈ r √[n12/(n11 n1.) + n22/(n21 n2.)]

where r is the relative risk, n11 is the number in the upper-left corner of the table (10 in the example), n12 is the number in the upper-right corner (40), n21 is the number in the lower-left corner (15), n22 is the number in the lower-right corner (135), n1. is the total for the first row (50), and n2. is the total for the second row (150). Substituting those numbers in the formula for the standard error, we get S.E. = .75 (to two decimal places). Two standard errors would be approximately 1.50, so the 95% confidence interval for the population ratio would be from .50 to 3.50. Since that interval includes 1 (a relative risk of 1 is the same risk for both groups), the obtained sample ratio of 2.00 is not statistically significant at the 5% level.

Fleiss, et al. (2003) actually recommend that the above formula for estimating the standard error not be used to get a confidence interval for a ratio of two percentages. They suggest instead that the researcher use the odds ratio instead of the relative risk (the odds ratio for those data is 2.25), take the logarithm of the odds ratio, and report the confidence interval in terms of log odds. [Here we go with logarithms again!] I don't think that is necessary, since everything is approximate anyhow. If those data were real, the principal finding is that younger mothers do not have too much greater risk for having babies of low birthweight than do older mothers. Fleiss et al. arrive at the same conclusion by using the logarithmic approach.
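Here is a sketch that reproduces the relative risk, its standard error by the Fleiss et al. formula quoted above, and the rough two-standard-error interval:

    import math

    def relative_risk_ci(n11, n12, n21, n22):
        # relative risk for a 2x2 table (rows = groups) and an approximate 95% confidence interval
        n1_, n2_ = n11 + n12, n21 + n22
        rr = (n11 / n1_) / (n21 / n2_)
        se = rr * math.sqrt(n12 / (n11 * n1_) + n22 / (n21 * n2_))
        return rr, rr - 2 * se, rr + 2 * se

    rr, low, high = relative_risk_ci(10, 40, 15, 135)
    print(round(rr, 2), round(low, 2), round(high, 2))   # 2.0 0.5 3.5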
Another hybrid inferential problem

Earlier in this chapter I referred to procedures derived by Choi and Stablein (1982) and by Thompson (1995) for estimating the standard error of the difference between two percentages where the samples were partially independent and partially dependent, due to missing data. There is another interesting situation that comes up occasionally where you would like to test the difference between two independent percentage gains, i.e., where each gain is the difference between two dependent percentages. (A loss is treated as a negative gain.) Building upon the work of Marascuilo and Serlin (1979) [see also Levin & Serlin, 2000], Howell (2008) discussed a hypothetical example where a change from fall (42/70 = 60%) to spring (45/70 = 64.3%) for an intervention group is compared with a change from fall (38/70 = 54.3%) to spring (39/70 = 55.7%) for a control group. The difference between the 4.3% gain for the intervention group and the 1.4% gain for the control group was not statistically significant, which is not surprising since the "swing" is only about 3%. Those of you who are familiar with the classic monograph on experimental design by Campbell and Stanley (1966) might recognize Howell's example as a special case of Campbell and Stanley's True Experimental Design #4, i.e., the Pretest/Posttest Control Group Design. (See also the article by Vickers, 2001, in which he discusses four different ways for analyzing the data for such a design.)

Sample size

In the previous chapter I talked about a handy-dandy internet calculator that determined the optimal sample size for a confidence interval for a single percentage. The situation for determining the optimal sample sizes for confidence intervals for the difference between two percentages or the ratio of two percentages (for either independent samples or dependent samples) is much more complicated. (See Fleiss, et al., 2003, for all of the gory details. The PASS2008 software is particularly good for carrying out all of the calculations for you; it is available for a 7-day free trial.)

Non-random samples and full populations

Suppose you have a non-random sample of boys and a non-random sample of girls from a particular school and you want to compare the percentage of boys in the boy sample who think that President Obama is doing a good job with the percentage of girls in the girl sample who think that President Obama is doing a good job. Would a confidence interval or a significance test of the difference between, or the ratio of, the two percentages be appropriate? Suppose you have percentages for the entire population of boys and the entire population of girls? Would a confidence interval or a significance test be appropriate there? You can't imagine how controversial both of those matters are! The opinions range from very conservative to very liberal. The very conservative people argue that statistical inferences are appropriate only for probability samples, of which simple random samples are the most common type (everybody has an equal and independent chance of being drawn into the sample), and not for either non-random samples or entire populations. Period. End of discussion. The very liberal people argue that they are appropriate for both non-random samples and for entire populations, since they provide an objective basis for determining whether or not, or to what extent, to get excited about a finding. The people in between (who, from a cursory glance at the scientific literature, are the majority) argue that for a non-random sample it is appropriate to use statistical inferential procedures in order to generalize from the non-random sample to a hypothetical population of people "like these"; and/or it might be appropriate to use statistical inferential procedures for an entire population in order to generalize from a finding now to findings for that population at other times. As one of those very conservative people (we meet in a telephone booth every year), those last two arguments blow my mind. I don't care about hypothetical populations (do you?), and hardly anybody studies populations by randomly sampling them across time. In his article, Desbiens (2007) did a review of the literature and found that many authors of research reports in medical education journals use statistical inferences for entire populations. He claims that they shouldn't. I agree.
More than two percentages

The previous discussion was concerned with procedures for statistical inferences when comparing the difference between, or the ratio of, two percentages. It is natural to ask if these procedures generalize to three or more percentages. The answer is "sort of." If you're interested in testing the significance of the difference AMONG several percentages (e.g., the percentage of Catholics who voted for Obama, the percentage of Protestants who voted for Obama, and the percentage of Jews who voted for Obama), there are comparable (and more complicated) formulas for so doing (see Fleiss, et al., 2003). Confidence intervals for the more-than-two case, however, are much more awkward to handle, primarily because there are three differences (A-B, A-C, B-C) to take into consideration. [There might also be those same three differences to take into consideration when carrying out the significance testing, if you care about pairwise differences as well as the overall difference. It's just like the problem of an overall F test vs. post hoc comparisons in the analysis of variance, if that means anything to you!] The situation for ratios is even worse. There is no appropriate statistic for handling A/B/C, for example, either via significance testing or confidence intervals.

Chapter 6: Graphing percentages

I've never cared much for statistical graphics, except for scatter diagrams that facilitate the understanding of the form and the degree of the relationship between two variables. (See the scatter diagrams that I used in Chapter 4 to illustrate data transformations for percentages.) I also don't always agree with the often-stated claim that a picture is worth a thousand words. (I like words.) But I realize that there are some people who prefer graphs to words and tables, even when it comes to percentages. I therefore decided to include in this monograph a brief chapter on how to display percentages properly when graphical techniques are used. You may want to adjust your zoom view for some of these graphs, in order to get a better idea of the information contained therein.

Pie charts

Far and away the most common way to show percentages is the use of pie charts, with or without colors. For example, if one of the findings of a survey is that 60% of cigarette smokers are males and 40% of cigarette smokers are females, that result could be displayed by using a pie (circle) divided into two slices, a blue slice constituting 60% of the pie (216 of the 360 degrees in the circle) labeled MALES, and a red slice constituting the other 40% of the pie (the other 144 degrees) labeled FEMALES. There is absolutely nothing wrong with such charts, but I think they're unnecessary for summarizing two numbers (60 and 40)---actually only one number (60 or 40)---since the other follows automatically. If the variable has more than two categories, pie charts are somewhat more defensible for displaying percentages, but if the number of categories is too large it is difficult to see where one slice ends and another slice begins. The software EXCEL that is part of Microsoft Office has the capability of constructing pie charts (as well as many other kinds of charts and graphs), and it is fairly easy to copy and paste pie charts into other documents. Here's one for the 60% male, 40% female example.

[Pie chart for the 60% male, 40% female example.]

Here's another, and more complicated, pie chart that illustrates one way to handle small slices. The data are for the year 2000.
INCLUDEPICTURE "http://upload.wikimedia.org/wikipedia/commons/e/ee/Percentages_of_the_us_population_by_race_-_2000.png" \* MERGEFORMATINET  Some people are adamantly opposed to the use of pie charts for displaying percentages (van Belle, 2002, p. 160, for example, says "Never use a pie chart"), but Spence and Lewandowsky (1991) supported their use. They even provided data from experiments that showed that pie charts arent nearly as bad as the critics claim. Bar graphs Bar graphs are probably the second most common way to display percentages. (But van Belle, 2002, doesn't like them either.) The categories of the variable are usually indicated on the horizontal (X) axis and the percentage scale usually constitutes the vertical (Y) axis, with bars above each of the categories on the X axis extending to a height corresponding to the relevant percentage on the Y axis. The categories need not be in any particular order on the X axis, if the variable is a nominal variable such as Religious Affiliation. But if the variable is an ordinal variable such as Socio-economic Status, the categories should be ordered from left to right on the X axis in increasing order of magnitude. Heres the 60%, 40% example as a bar graph:  Heres a bar graph for more than two categories. The data are percentages of responses by pregnant mothers to the question "Reading the [Preparing to Parent] newsletters helped convince me to...". Note that the bars are horizontal rather than vertical and the percentages do not add to 100 because more than one response is permitted.  Heres a more complicated (but readable) bar graph for the breakdown of responses of two groups of pregnant women (those at risk for complications and those not at risk) in that same study:  For other examples of the use of bar graphs, see Keppel et al. (2008). One of the least helpful percentage bar graphs I've ever seen can be downloaded from the StateMaster.com website. It is concerned with the percent of current smokers in each of 49 states, as of the year 2004. It lists those percents in decreasing order (from 27.5% for Kentucky to 10.4% for Utah; it also lists the District of Columbia, Puerto Rico, and the U.S. Virgin Islands, but not my home state of Hawaii!). Each percent is rounded to one place to the right of the decimal point, and there is a bar of corresponding horizontal length right next to each of those percents. It is unhelpful because (a) the bars aren't really needed (the list of percents is sufficient); and (b) rounding the percents to one decimal place resulted unnecessarily in several ties (since the number of current smokers in each of the states and the population of each state are known or easily estimable, all of those ties could have been broken by carrying out the calculations to two decimal places rather than one). A research example that used both a pie chart and a bar graph On its website, the IntelTechnology Initiative provides the following example of the use of a pie chart and a bar graph for displaying counts and percentages obtained in a survey regarding attitudes toward biodiversity.  EMBED AcroExch.Document.7  In their article, Spence and Lewandowsky (1991) reported results that indicated that bar graphs and pie charts were equally effective in displaying the key features of percentage data. They provide various bar graphs for displaying four percentages (A = 10%; B = 20%; C = 40%; D = 30%). Nice. In Chapter 2 I referred to Milo Schield and the W.M. Keck Statistical Literacy Project at Augsburg College. 
In Chapter 2 I referred to Milo Schield and the W.M. Keck Statistical Literacy Project at Augsburg College. In a presentation he gave to the Section on Statistical Education of the American Statistical Association, Schield (2006) gave a critique of pie and bar percentage graphs that appeared in the newspaper USA Today. I've seen many of those graphs; some are really bad.

Other graphical techniques for displaying percentages

Kastellec and Leoni (2007) provided several arguments and a great deal of evidence supporting the use of graphs to improve the presentation of findings in political science research. In their article they include real-data examples for which they have converted tables into graphs. Some of those examples deal with percentages or proportions presented through the use of mosaic plots, dot plots, advanced dot plots, or violin plots (those are their actual names!). Rather than trying to explain those techniques here, I suggest that you read the Kastellec and Leoni article and see for yourself. (They're not just applicable to political science.) Their article also has an extensive set of references pro and con the use of graphs. If you skipped Chapter 1 and you're having difficulty distinguishing among percentages, proportions, and fractions, I suggest that you take a look at the British website www.active-maths.co.uk/fractions/whiteboard/fracdec_index.html, which lays out nicely how each relates to the others. And here's another example of the graphing of percentages (taken from the Political Calculations website):

[Graph from the Political Calculations website: distribution of income by age group, 2005.]

That graph contains a lot of interesting information (e.g., that the percentage of people aged 65-74 who have very high incomes is almost as high as the percentage of people aged 25-34--read along the right-hand edge of the graph), but I personally find it to be too busy, and it looks like Jaws!

Chapter 7: Percentage overlap of two frequency distributions

One of the things that has concerned me most about statistical analysis over the years is the failure by some researchers to distinguish between random sampling and random assignment when analyzing data for the difference between two groups. Whether they are comparing a randomly sampled group of men with a randomly sampled group of women, or a randomly assigned sample of experimental subjects with a randomly assigned sample of control subjects (or, worse yet, two groups that have been neither randomly sampled nor randomly assigned), they invariably carry out a t-test of the statistical significance of the difference between the means for the two groups and/or construct a confidence interval for the corresponding "effect size". I am of course not the first person to be bothered by this. The problem has been brought to the attention of readers of the methodological literature for many years. [See, for example, Levin's (1993) comments regarding Shaver (1993); Lunneborg (2000); Levin (2006); and Edgington & Onghena (2007).] As I mentioned in an earlier chapter of this book, some researchers "regard" their non-randomly-sampled subjects as having been drawn from hypothetical populations of subjects "like these". Some have never heard of randomization (permutation) tests for analyzing the data for the situation where you have random assignment but not random sampling.
Others have various arguments for using the t-test (e.g., that the t-test is often a good approximation to the randomization test); and still others don't seem to care. It occurred to me that there might be a way to create some sort of relatively simple "all-purpose" statistic that could be used to compare two independent groups no matter how they were sampled or assigned (or just stumbled upon). I have been drawn to two primary sources: 1. The age-old concept of a percentage. 2. Darlington's (1973) article in Psychological Bulletin on "ordinal dominance" (of one group over another). [The matter of ordinal dominance was treated by Bamber (1975) in greater mathematical detail and in conjunction with the notion of receiver operating characteristic (ROC) curves, which are currently popular in epidemiological research.]

My recommendation

Why not do as Darlington suggested and plot the data for Group 1 on the horizontal axis of a rectangular array, plot the data for Group 2 on the vertical axis, see how many times each of the observations in one of the groups (say Group 1) exceeds each of the observations in the other group, convert that to a percentage (he actually did everything in terms of proportions), and then do with that percentage whatever is warranted? (Report it and quit; test it against a hypothesized percentage; put a confidence interval around it; whatever.)

Darlington's example [data taken from Siegel (1956)]

The data for Group 1: 0, 5, 8, 8, 14, 15, 17, 19, 25 (horizontal axis)
The data for Group 2: 3, 6, 10, 10, 11, 12, 13, 13, 16 (vertical axis)

The layout (an x marks each cell in which the Group 1 observation for that column exceeds the Group 2 observation for that row):

16                             x   x   x
13                     x   x   x   x   x
13                     x   x   x   x   x
12                     x   x   x   x   x
11                     x   x   x   x   x
10                     x   x   x   x   x
10                     x   x   x   x   x
 6             x   x   x   x   x   x   x
 3         x   x   x   x   x   x   x   x
       0   5   8   8  14  15  17  19  25

The number of times that an observation in Group 1 exceeded an observation in Group 2 was 48 (count the x's). The percentage of times was 48/81, or .593, or 59.3%. Let's call that Pe, for "percentage exceeding". [Darlington calculated that proportion (percentage) but didn't pursue it further. He recommended the construction of an ordinal dominance curve through the layout, which is a type of cumulative frequency distribution similar to the cumulative frequency distribution used as the basis for the Kolmogorov-Smirnov test.]
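The counting is easy to automate. A sketch (my own function name) that reproduces the 59.3%:

    def percentage_exceeding(group1, group2):
        # Pe: percentage of (Group 1, Group 2) pairs in which the Group 1 observation exceeds the Group 2 observation
        count = sum(1 for x in group1 for y in group2 if x > y)
        return 100 * count / (len(group1) * len(group2))

    group1 = [0, 5, 8, 8, 14, 15, 17, 19, 25]
    group2 = [3, 6, 10, 10, 11, 12, 13, 13, 16]
    print(round(percentage_exceeding(group1, group2), 1))   # 59.3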
How does this differ from other suggestions?

Comparing two independent groups by considering the degree of overlapping of their respective distributions appears to have originated with the work of Truman Kelley (1919), the well-known expert in educational measurement and statistics at the time, who was interested in the percentage of one normal distribution that was above the median of a second normal distribution. [His paper on the topic was typographically botched by the Journal of Educational Psychology and was later (1920) reprinted in that journal in corrected form.] The notion of distributional overlap was subsequently picked up by Symonds (1930), who advocated the use of biserial r as an alternative to Kelley's measure, but he was taken to task by Tilton (1937), who argued for a different definition of percentage overlap that more clearly reflected the actual amount of overlap. [Kelley had also suggested a method for correcting percentage overlap for unreliability.] Percentage overlap was subsequently further explored by Levy (1967), by Alf and Abrahams (1968), and by Elster and Dunnette (1971).

In their more recent discussions of percentage overlap, Huberty and his colleagues (Huberty & Holmes, 1983; Huberty & Lowman, 2000; Hess, Olejnik, & Huberty, 2001; Huberty, 2002) extended the concept to that of "hit rate corrected for chance" [a statistic similar to Cohen's (1960) kappa], in which discriminant analysis or logistic regression analysis is employed in determining the success of "postdicting" original group membership. (See also Preese, 1983; Campbell, 2005; and Natesan & Thompson, 2007.)

There is also the "binomial effect size display" (BESD) advocated by Rosenthal and Rubin (1982) and the "probability of superior outcome" approach due to Grissom (1994). BESD has been criticized because it involves the dichotomization of continuous variables (see the following chapter). Grissom's statistic is likely to be particularly attractive to experimenters and meta-analysts, and in his article he includes a table that provides the probabilistic superiority equivalent to Cohen's (1988) d for values of d between .00 and 3.99 by intervals of .01.

Most closely associated with the procedure proposed here (the use of Pe) is the work represented by a sequence of articles beginning with McGraw and Wong (1992) and extending through Cliff (1993), Vargha and Delaney (2000), Delaney and Vargha (2002), Feng and Cliff (2004), and Feng (2006). [Amazingly--to me, anyhow--the only citation to Darlington (1973) in any of those articles is by Delaney and Vargha in their 2002 article!] McGraw and Wong were concerned with a "common language effect size" for comparing one group with another for continuous, normally distributed variables, and they provided a technique for so doing. Cliff argued that many variables in the social sciences are not continuous, much less normal, and he advocated an ordinal measure d (for sample dominance; δ for population dominance). [This is not to be confused with Cohen's effect size d, which is appropriate for interval-scaled variables only.] He (Cliff) defined d as the difference between the probability that an observation in Group 1 exceeds an observation in Group 2 and the probability that an observation in Group 2 exceeds an observation in Group 1. In their two articles Vargha and Delaney sharpened the approach taken by McGraw and Wong, in the process of which they suggested a statistic, A, which is equal to my Pe if there are no ties between observations in Group 1 and observations in Group 2, but they didn't pursue it as a percentage that could be treated much like any other percentage. Feng and Cliff, and Feng, reinforced Cliff's earlier arguments for preferring δ and d, which range from -1 to +1. Vargha and Delaney's A ranges from 0 to 1 (as do all proportions) and is algebraically equal to (1 + d)/2, i.e., it is a simple linear transformation of Cliff's measure. The principal difference between Vargha and Delaney's A and Cliff's d, other than the range of values they can take on, is that A explicitly takes ties into account.
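To make those relationships concrete, here is a small sketch (Python; the function name is mine, not from any of the cited articles) computing Cliff's d and Vargha and Delaney's A for the Siegel/Darlington data:

def pairwise_proportions(group1, group2):
    # Proportions of the n1 x n2 pairs in which the Group 1 value is higher, lower, or tied.
    n_pairs = len(group1) * len(group2)
    higher = sum(1 for x in group1 for y in group2 if x > y) / n_pairs
    lower = sum(1 for x in group1 for y in group2 if x < y) / n_pairs
    return higher, lower, 1.0 - higher - lower

group1 = [0, 5, 8, 8, 14, 15, 17, 19, 25]
group2 = [3, 6, 10, 10, 11, 12, 13, 13, 16]
higher, lower, tied = pairwise_proportions(group1, group2)

cliff_d = higher - lower      # about .185 here; ranges from -1 to +1
vd_A = higher + 0.5 * tied    # about .593 here; ranges from 0 to 1
print(cliff_d, vd_A, (1 + cliff_d) / 2)   # the last two are always equal; A also equals Pe/100 when there are no ties, as here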
Dichotomous outcomes

The ordinal-dominance-based "percentage exceeding" measure also works for dichotomous dependent variables. For the latter all one needs to do is dummy-code (0,1) the outcome variable, string out the 0's followed by the 1's for Group 1 on the horizontal axis, string out the 0's followed by the 1's for Group 2 on the vertical axis, count how many times a 1 for Group 1 appears in the body of the layout with a 0 for Group 2, and divide that count by n1 times n2, where n1 is the number of observations in Group 1 and n2 is the number of observations in Group 2. Here is a simple hypothetical example:

The data for Group 1: 0, 1, 1, 1
The data for Group 2: 0, 0, 0, 1, 1

The layout:

1 |
1 |
0 |        x   x   x
0 |        x   x   x
0 |        x   x   x
  +-----------------
       0   1   1   1

There are 9 instances of a 1 for Group 1 paired with a 0 for Group 2, out of 4 x 5 = 20 total comparisons, yielding a "percentage exceeding" value of 9/20, or .45, or 45%.

Statistical inference

For the Siegel/Darlington example, if the two groups had been simply randomly sampled from their respective populations, the inference of principal concern might be the establishment of a confidence interval around the sample Pe. [You get tests of hypotheses "for free" with confidence intervals for percentages, as I pointed out in Chapter 4.] But there is a problem regarding the "n" for Pe. In that example the sample percentage, 59.3, was obtained with n1 x n2 = 9 x 9 = 81 in the denominator. 81 is not the sample size (the sum of the sample sizes for the two groups is only 9 + 9 = 18).

This problem had been recognized many years ago in research on the probability that Y is less than X, where Y and X are vectors of length n and m, respectively. In articles beginning with Birnbaum and McCarty (1958) and extending through Owen, Craswell, and Hanson (1964), Ury (1972), and others, a procedure for making inferences from the sample probabilities to the corresponding population probabilities was derived. The Owen et al. and Ury articles are particularly helpful in that they include tables for constructing confidence intervals around a sample Pe. For the Siegel/Darlington data, the confidence intervals are not very informative, since the 90% interval extends from 0 (complete overlap in the population) to 100 (no overlap), because of the small sample size.

If the two groups had been randomly assigned to experimental treatments, but had not been randomly sampled, a randomization test is called for, with a "percentage exceeding" calculated for each re-randomization, and a determination made of where the observed Pe falls among all of the possible Pe's that could have been obtained under the (null) hypothesis that each observation would be the same no matter to which group the associated object (usually a person) happened to be assigned.

For the small hypothetical example of 0's and 1's the same inferential choices are available, i.e., tests of hypotheses or confidence intervals for random sampling, and randomization tests for random assignment. [There are confidence intervals associated with randomization tests, but they are very complicated. See, for example, Garthwaite (1996).] If those data were for a true experiment based upon a non-random sample, there are "9 choose 4" (the number of combinations of 9 things taken 4 at a time) = 126 randomizations, which yield Pe's ranging from .00 (all four 0's in Group 1) to .80 (four 1's in Group 1 and only one 1 in Group 2). The observed value of 45% is not among the 10% least likely to have been obtained by chance, so there would not be a statistically significant treatment effect at the 10% level. (Again the sample size is very small.)
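A small enumeration sketch (Python; mine, not the chapter's) that reproduces both the 45% figure and, under the assumption that the nine observations are simply re-divided into groups of 4 and 5, the randomization distribution reported next:

from itertools import combinations
from collections import Counter

def percentage_exceeding(group1, group2):
    exceed = sum(1 for x in group1 for y in group2 if x > y)
    return 100.0 * exceed / (len(group1) * len(group2))

pooled = [0, 1, 1, 1] + [0, 0, 0, 1, 1]              # the nine observations from the example above
print(percentage_exceeding(pooled[:4], pooled[4:]))   # 45.0

dist = Counter()                                      # tally Pe over all "9 choose 4" = 126 re-randomizations
for chosen in combinations(range(9), 4):
    g1 = [pooled[i] for i in chosen]
    g2 = [pooled[i] for i in range(9) if i not in chosen]
    dist[percentage_exceeding(g1, g2)] += 1
print(sorted(dist.items()))   # [(0.0, 1), (5.0, 20), (20.0, 60), (45.0, 40), (80.0, 5)]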
The distribution is as follows:

  Pe     frequency
  .00         1
  .05        20
  .20        60
  .45        40
  .80         5
             ___
             126

To illustrate the use of an arguably defensible approach to inference for the overlap of two groups that have been neither randomly sampled nor randomly assigned, I turn now to a set of data originally gathered by Ruback and Juieng (1997). They were concerned with the problem of how much time drivers take to leave parking spaces after they return to their cars, especially if drivers of other cars are waiting to pull into those spaces. They had data for 100 instances when other cars were waiting and 100 instances when other cars were not waiting. On his statistical home page, Howell (2007) has excerpted from that data set 20 instances of "someone waiting" and 20 instances of no one waiting, in order to keep things manageable for the point he was trying to make about statistical inferences for two independent groups. Here are the data (in seconds):

Someone waiting (Group 1)
49.48 43.30 85.97 46.92 49.18 79.30 47.35 46.52 59.68 42.89
49.29 68.69 41.61 46.81 43.75 46.55 42.33 71.48 78.95 42.06

No one waiting (Group 2)
36.30 42.07 39.97 39.33 33.76 33.91 39.65 84.92 40.70 39.65
39.48 35.38 75.07 36.46 38.73 33.88 34.39 60.52 53.63 50.62

[The 20 x 20 dominance layout, with the times rounded to the nearest tenth of a second, is not reproduced here: each of the 20 Group 2 times defines a row, with an x marked for every Group 1 time that exceeds it.]

For these data Pe is equal to 318/400 = 79.5%. Referring to Table 1 in Ury (1972), a 90% confidence interval for the population value of Pe is found to extend from 79.5 - 36.0 to 79.5 + 36.0, i.e., from 43.5 to 100. A "null hypothesis" of 50% overlap in the population could not be rejected.

Howell actually carried out a randomization test for the time measures, assuming something like a natural experiment having taken place (without the random assignment, which would have been logistically difficult if not impossible to carry out). Based upon a random sample of 5000 of the 1.3785 x 10^11 possible re-randomizations, he found that there was a statistically significant difference at the 5% level (one-tailed test) between the two groups, with longer times taken when there was someone waiting. He was bothered by the effect that one or two outliers had on the results, however, and he discussed alternative analyses that might minimize their influence.
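A Monte Carlo randomization sketch in the spirit of Howell's analysis, but using Pe as the test statistic (Python; the 5000-sample figure follows Howell, everything else, including the choice of statistic, is mine):

import random

waiting = [49.48, 43.30, 85.97, 46.92, 49.18, 79.30, 47.35, 46.52, 59.68, 42.89,
           49.29, 68.69, 41.61, 46.81, 43.75, 46.55, 42.33, 71.48, 78.95, 42.06]
no_wait = [36.30, 42.07, 39.97, 39.33, 33.76, 33.91, 39.65, 84.92, 40.70, 39.65,
           39.48, 35.38, 75.07, 36.46, 38.73, 33.88, 34.39, 60.52, 53.63, 50.62]

def percentage_exceeding(group1, group2):
    exceed = sum(1 for x in group1 for y in group2 if x > y)
    return 100.0 * exceed / (len(group1) * len(group2))

observed = percentage_exceeding(waiting, no_wait)
pooled = waiting + no_wait
extreme = 0
for _ in range(5000):                       # 5000 of the roughly 1.4 x 10^11 possible re-randomizations
    random.shuffle(pooled)
    if percentage_exceeding(pooled[:20], pooled[20:]) >= observed:
        extreme += 1
print(observed, extreme / 5000)             # the observed Pe and its one-tailed Monte Carlo p-value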
Disadvantages of the "percentage exceeding" approach

The foregoing discussion was concerned with the postulation of Pe as a possibly useful measure of the overlap of the frequency distributions for two independent groups. But every such measure has weaknesses. The principal disadvantage of Pe is that it ignores the actual magnitudes of the n1 x n2 pairwise differences, and any statistical inferences based upon it for continuous distributions are therefore likely to suffer from lower power and less precise confidence intervals.

A second disadvantage is that there is presently no computer program available for calculating Pe. [I'm not very good at writing computer programs, but I think that somebody more familiar with Excel than I am would have no trouble dashing one off. The layouts used in the two examples in this paper were actually prepared in Excel and "pasted" into a Word document.]

Another disadvantage is that it is not (at least not yet) generalizable to two dependent groups, more than two groups, or multiple dependent variables.

A final note

Throughout this chapter I have referred to the 10% significance level and the 90% confidence coefficient. The choice of significance level or confidence coefficient is of course entirely up to the researcher and should reflect his/her degree of willingness to be wrong when making sample-to-population inferences. I kinda like the 10% level and 90% confidence, for a variety of reasons. First of all, I think you might want to give up a little on Type I error in order to pick up a little extra power (and give up a little precision) that way. Secondly, as illustrated above, more stringent confidence coefficients often lead to intervals that don't cut down very much on the entire scale space. And then there is my favorite reason, which may have occurred to others. When checking my credit card monthly statement (usually by hand, since I like the mental exercise), if I get the units (cents) digit to agree I often assume that the totals will agree. If they agree, Visa's "null hypothesis" doesn't get rejected when perhaps it should be rejected. If they don't agree, if I reject Visa's total, and if it turns out that Visa is right, I have a 10% chance of having made a Type I error, and I waste time needlessly re-calculating. Does that make sense?

Chapter 8: Dichotomizing continuous variables: Good idea or bad idea?

A very bad idea, or at least so say Cohen (1983); Hunter and Schmidt (1990); MacCallum, Zhang, Preacher, and Rucker (2002); Streiner (2002); Owen and Froman (2005); Royston, Altman, and Sauerbrei (2006); Altman and Royston (2006); Taylor, West, and Aiken (2006); and others. [2006 was a good year for anti-dichotomization articles!] But it's done all the time. Is there no good defense for it? In what follows I'll try to point out some of its (admittedly few) advantages and its (unfortunately many) disadvantages.

Here are a few advantages:

Simplicity of description

When it comes to investigating the relationship between two variables X and Y, nothing is simpler than dichotomizing both variables at their medians and talking about what % were above the median on X and Y, what % were below the median on X and Y, what % were above on X but below on Y, and what % were below on X but above on Y. Having to plot the continuous data, trying to figure out whether or not the plot is linear enough to use Pearson r, worrying about outliers, etc., is a pain.

Simplicity of inference

Percent of agreement, i.e., the % for both above plus the % for both below, can be treated just like a simple percentage (see Chapter 4). The single best point estimate of the population percentage of agreement is the sample percentage of agreement, the confidence interval for the population percentage is straightforward, and so is the hypothesis test.

Applicability to crazy distributions

There are some frequency distributions of continuous or "near-continuous" variables that are so unusual that dichotomization is often used in order to make any sense out of the data.
In the following sections I would like to consider two of them.

Number of cigarettes smoked per day

When people are asked whether or not they smoke cigarettes and, if so, approximately how many they smoke each day, the frequency distribution has a big spike at 0, lesser spikes at 20 (the one-pack-a-day people), 40 (two packs), and 60 (three packs), but also some small spikes at 10 (half a pack), 30 (a pack and a half), etc. Some people smoke (or say they smoke) just one cigarette per day, but hardly anyone reports 3, 7, 11, or other non-divisors of 20.

In Table 1, below, I have provided a frequency distribution for 5209 participants in the well-known Framingham Heart Study at Time 7 (which is Year 14--around 1960--of that study). I have also included some descriptive statistics for that distribution, in order to summarize its central tendency, variability, skewness, and kurtosis. (The distribution and all calculations based upon it were carried out in Excel, to far too many decimal places!)

Note some of the interesting features. The distribution exhibits the "crazy" pattern indicated in the previous paragraph, with several holes (particularly at the high end) and with heaping at observations ending in 0 and 5. It has a mode and a median of 0; a mean of about 9 1/2; a standard deviation of about 13; skewness between 1 and 2; and kurtosis of approximately that same magnitude. [At first I thought that the 90 (4 1/2 packs per day!) was an error and should have been 9, but there was more than one such observation in the full data set.] I am of course not the first person to study the frequency distribution of number of cigarettes smoked per day (see, for example, Klesges, Debon, & Ray, 1995, and the references they cite).

Table 1: Data for Year 14 of the Framingham Heart Study

[Frequency distribution of number of cigarettes smoked per day, 0 through 90, with Excel summary statistics: mean 9.48, median 0, mode 0, standard deviation 13.19, variance 173.86, skewness 1.33, kurtosis 1.29, minimum 0, maximum 90.]

So what?

That distribution fairly cries out to be dichotomized. But where to cut? The obvious place is between 0 and 1, so that all of the people who have "scores" of 0 can be called "non-smokers" and all of the people who have "scores" from 1 to 90 can be called "smokers". For the data in Table 1 there were 2087 non-smokers out of 2727 non-missing-data persons, or 76.5%, which means there were 640 smokers, or 23.5%. [As usual, "missing" causes serious problems. I don't know why there were so many participants who didn't respond to the question. Can you speculate why?]
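As a quick sketch of the "simplicity of inference" advantage applied to this dichotomy (Python; entirely mine, using the usual normal-approximation interval for a percentage of the sort discussed in Chapter 4, and pretending for the moment that the 2727 respondents were a random sample):

import math

non_smokers, n = 2087, 2727
p = 100.0 * non_smokers / n              # about 76.5% non-smokers, hence about 23.5% smokers
se = math.sqrt(p * (100.0 - p) / n)      # standard error of a percentage, about 0.81
print(round(p - 1.645 * se, 1), round(p + 1.645 * se, 1))   # a 90% confidence interval, roughly 75.2 to 77.9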
Klondike

Just about every computer that has Microsoft Windows as an operating system includes, as part of a free software package, the solitaire game of Klondike. It has a number of versions, but the one that is most interesting (to me) is the "turn-one, single pass through the pack" version. The object is to play as many cards as possible on the foundation piles of ace through king of each of the four suits. The possible "scores" (number of cards played to those piles) range from 0 to 52. Of considerable interest (again, to me, anyhow) is the frequency distribution of those scores.

One would hope that the distribution could be derived mathematically, but since there is a deterministic aspect (skill) to the game (in addition to the stochastic aspect) and things can get very complicated very quickly, all such efforts appear to have been unsuccessful. As the authors of a recent paper on solitaire (Yan et al., 2005) put it: "It is one of the embarrassments of applied mathematics that we cannot determine the odds of winning the common game of solitaire" (p. 1554). Some probabilities have been mathematically derived for some versions of Klondike, e.g., the probability of being unable to play a single card in the "turn three, unlimited number of passes" version (see Latif, 2004).

I recently completed 1000 games of "turn-one, single-pass" Klondike [we retired professors have lots of time on our hands!], and the distribution of my scores is displayed in Table 2, below (summary descriptive statistics have been added, all from Excel). Note the long tail to the right, with small frequencies between 19 and 31, a big hole between 31 and 52, and heaping on 52. (Once you're able to play approximately half of the deck on the foundation piles you can usually figure out a way to play the entire deck.) I won (got a score of 52) 36 times out of 1000 tries, for a success rate of 3.6%. [Note also the paucity of scores of 0. I got only two of them in 1000 tries. It's very unusual to not be able to play at least one card on the foundation piles. And it's positively reinforcing each time you play a card there. B.F. Skinner would be pleased!]

Table 2: Results of 1000 games of Klondike

[Frequency distribution of the 1000 scores, 0 through 52, with Excel summary statistics: mean 9.69, median 7, mode 5, standard deviation 9.50, variance 90.16, skewness 3.24, kurtosis 11.59, minimum 0, maximum 52, sum 9687, count 1000.]

Again, so what?

This distribution also cries out to be dichotomized, but where? If all you care about is winning (being able to play all 52 cards on the foundation piles), the obvious place to cut is just below 52: call the winners (36 of them) 1's and the losers 0's, and talk about the percentage of winners (or, alternatively, the probability of winning), which is approximately 4%. Another reasonable possibility is to dichotomize at the median (of 7), with half of the resulting scores below that number and the other half above it. Klondike is occasionally played competitively, so if you are able to play 7 or more cards you have approximately a 50% chance of beating your opponent. [I just finished another 1000 games, with essentially the same results: 41 wins (4.1%), a mean of about 10, etc.]

Although he is generally opposed to dichotomizing, Streiner (2002) referred to situations where it might be OK, e.g., for highly skewed distributions such as the above or for non-linearly-related variables. [I love the title of his article!]

Now for a few of the disadvantages:

Loss of information

The first thing that's wrong with dichotomization is a loss of information. For the original variable, number of cigarettes smoked per day, we have a pretty good idea of the extent to which various people smoke, despite its crazy distribution. For the dichotomy, all we know is whether or not they smoke.

Inappropriate pooling of people

For the smoker vs. non-smoker dichotomy there is no distinction made between someone who smokes one cigarette per day and someone who smokes four or more packs per day.
Or, switching examples from smoking to age (above or below age 21, say), height (above or below 5'7"), or weight (above or below 130#), the problem could be even worse.

Decreased precision or power

The principal objective of interval estimation is to construct a rather tight interval around the sample statistic so that the inference from statistic to corresponding parameter is strong. Confidence intervals for percentages derived from dichotomization are generally less precise than their counterparts for continuous variables. The situation for hypothesis testing is similar. If the null hypothesis is false you would like to have a high probability of rejecting it in favor of the alternative hypothesis, i.e., high power. The power for dichotomies is generally lower than the power for continuous variables. (But see Owen & Froman, 2005, for a counter-example.)

You will find discussions of additional disadvantages of dichotomization in the references cited at the beginning of this chapter.

So what's a researcher to do? There is no substitute for common sense applied to the situation at hand. A good rule to keep in mind is: when tempted to dichotomize, don't, UNLESS you have one or more crazy continuous distributions to contend with.

Chapter 9: Percentages and reliability

Reliability and validity are the Rosencrantz and Guildenstern of scientific measurement. In Shakespeare's Hamlet people couldn't say one name without saying the other, and the two of them were always being confused with one another. Similarly, in discussing the properties of good measuring instruments, reliability and validity often come out as a single word; and some people confuse the two.

What is the difference between reliability and validity? Simply put, reliability has to do with consistency; validity has to do with relevance. An instrument might yield consistent results from measure to re-measure, yet not be measuring what you want it to measure. In this chapter I shall concentrate on reliability, in which I am deeply interested. Validity, though more important (what good is it to have a consistent instrument if it doesn't measure the right thing?), ultimately comes down to a matter of expert judgment, in my opinion, despite all of the various types of validity that you read about.

How do percentages get into the picture? In the previous chapter I referred to a couple of advantages of dichotomies, viz., their simplicity for description and for inference. Consider the typical classroom spelling test for which 65% is passing, i.e., in order to pass the test a student must be able to spell at least 65% of the words correctly. (We shall ignore for the moment why 65%, whether the words are dictated or whether the correct spelling is to be selected from among common misspellings, and the like. Those matters are more important for validity.)

Mary takes a test consisting of 200 words and she gets 63% right (126 out of the 200). You're concerned that those particular 200 words might contain too many sticklers and she really deserved to get 65% or more (at least 130 out of the 200; she only missed the cutoff by four words). Suppose that the 200 words on the test had been randomly drawn from an unabridged dictionary. You decide to randomly draw another set of words from that same dictionary and give Mary that parallel form. This time she gets 61% right. You now tell her that she has failed the test, since she got less than 65% on both forms.

Types of reliability

The example just presented referred to parallel forms.
That is one type of reliability. In order to investigate the reliability of a measuring instrument we construct two parallel forms of the instrument, administer both forms to a group of people, and determine the percentage of people who pass both forms plus the percentage of people who fail both forms: our old friend, percent agreement. Percent agreement is an indicator of how consistently the instrument divides people into passers and failers.

But suppose that you have only one form of the test, not two. You can administer that form twice to the same people and again determine the % who pass both times plus the % who fail both times. This test/re-test approach is not quite as good as parallel forms, since the people might parrot back at Time 2 what they said at Time 1, thereby endowing the instrument with artificially high reliability.

Or suppose that you're interested in the reliability of rating essays. You administer the essay test just once, but you ask the teacher to rate the students' essays twice (so-called intra-rater reliability) or ask two different teachers to rate the students' essays once each (inter-rater reliability). Percent agreement is again a good way to determine the extent to which the two sets of ratings agree. Robinson (1957) discussed the advantages and disadvantages of percent agreement vs. traditional Pearson correlations for measuring intra-rater or inter-rater reliability. Got the idea?

Kappa

There is a strange (again, in my opinion) statistic called kappa (Cohen, 1960), which is percent agreement corrected for chance. Its formula is:

κ = (P - Pc)/(100 - Pc)

where P is the actual percent agreement and Pc is the percent agreement that is expected "by chance". So if two raters of essays have 80% agreement using a four-point rating scale, and if they were both susceptible to occasional random ratings (without reading the essay itself?), they could have (1/4)(1/4) = 1/16 = 6.25% agreement "by chance". That would be Pc. Therefore, κ would be (80 - 6.25)/(100 - 6.25) = 78.67%.

There are two reasons why I think kappa is strange. First of all, I don't think raters rate by chance. Secondly, even if they do, a researcher need only demand that the percent agreement be higher in order to compensate for same. [Hutchinson (1993) presented an argument for the use of tetrachoric correlation rather than kappa.] Landis and Koch (1977) claim that a kappa of 61% to 80%, for example, is indicative of "substantial" agreement. Why not up those numbers by 10% and define percent agreement of 71% to 90% as substantial? But kappa is VERY commonly used; see Fleiss et al. (2003) and some of the references that they cite.

One very interesting non-reliability use of kappa is in the detection of possible cheating on an examination (Sotaridona, 2006). Now there's a context in which there is indeed liable to be a great deal of chance going on!
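A tiny sketch of that computation (Python; it simply encodes the formula above and reproduces the 78.67% example):

def kappa(p_agree, p_chance):
    # Percent agreement corrected for chance, with everything on a 0-100 scale.
    return 100.0 * (p_agree - p_chance) / (100.0 - p_chance)

# Two raters with 80% observed agreement; the "occasional random ratings" argument
# puts chance agreement on a four-point scale at (1/4)(1/4) = 6.25%.
print(round(kappa(80.0, 6.25), 2))   # 78.67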
Criterion-referenced vs. norm-referenced measurement

The previous section described various ways for determining the reliability of an instrument where there is some sort of cutoff point above which there is success and below which there is failure. Such instruments are called criterion-referenced. On the other hand, instruments such as the SAT or the GRE do not have cutoff points; they are not passed or failed. Scores on those tests are interpreted relative to one another rather than relative to a cutoff point. They're called norm-referenced. [Be careful not to confuse norms with standards. Norms are what are; standards are what should be.]

There are several other contributions in the criterion-referenced measurement literature regarding the use of percentages as indicators of the reliability of such instruments. For example, in building upon the work of Hambleton and Novick (1973), Subkoviak (1976), and others, Smith (2003) and, later, Walker (2005) advocated the use of the standard error of a percentage in the estimation of the reliability of a classroom test (a potentially different reliability for each student). The formula for the standard error becomes √[P(100 - P)/k], where P is the % of items answered correctly and k is the number of items (the "item sample size", analogous to n, the traditional "people sample size"). For example, if John answered correctly 16 out of 20 items, his P is 80% and his standard error is √[80(100 - 80)/20], which is about 9%. If Mary answered correctly 32 out of 40 items (not necessarily items on the same test), her P is also 80% but her standard error is √[80(100 - 80)/40], which is about 6 1/3%. Therefore the evidence is more reliable for Mary than for John.

The problem, however, is that the traditional formula for the standard error of a percentage assumes that the observations that contribute to the percentage (people, items, whatever) are independent of one another. That is much more defensible when people are sampled than when items are sampled. Chase (1996) went one step further by discussing a method for estimating the reliability of a criterion-referenced test before it's ever administered!
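A sketch of that standard-error comparison (Python; it just applies the formula above to John and Mary):

import math

def percentage_and_se(correct, k):
    # P and its standard error when the k items are regarded as the "sample".
    p = 100.0 * correct / k
    return p, math.sqrt(p * (100.0 - p) / k)

print(percentage_and_se(16, 20))   # (80.0, 8.94...): John, a standard error of about 9%
print(percentage_and_se(32, 40))   # (80.0, 6.32...): Mary, a standard error of about 6 1/3%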
Miscellany

There have been a number of other contributions in the literature regarding the uses of percentages in conjunction with the estimation of the reliability of a measuring instrument. Here are a few examples:

Barnette's (2005) Excel program for computing confidence intervals for various reliability coefficients includes the case of percentages.

Feldt (1996) provides formulas for confidence intervals around a proportion of mastery.

Guttman (1946) discussed a method for determining a lower bound for the reliability of an instrument that produced qualitative (nominal or ordinal) data.

I (Knapp, 1977b) proposed a technique for determining the reliability of a single test item that has been dichotomously scored. Much later (Knapp, 2014) I put together a whole book on reliability, some of which is concerned with the use of percentages as indicators of the reliability of a measuring instrument.

Chapter 10: Wrap-up

In this book I have tried to explain why I think that percentages are the most useful statistics ever invented. I hope you agree. But even if you don't, I hope you now know a lot more about percentages than you did when you started reading the book.

I also said I would tell you why 153 is one of my favorite numbers. It comes from the New Testament, in a passage that refers to a miracle that Jesus performed when he made it possible for his apostles to catch a boatload of fish after they had caught nothing all day long. The evangelists claim that the catch consisted of 153 large fish. Who counted them? Was it exactly 153 fish?

I would like to close with a brief annotated bibliography of references that I did not get an opportunity to cite in the previous nine chapters. Here it is (the full bibliographical information can be found in the References section that follows the conclusion of this chapter):

Aiken, et al. (2003). This article in the Journal of the American Medical Association about the relationship between nurse educational level and patient mortality has tons of percentages in its various tables. (Hospital was the unit of analysis; n = 168 of them.) There were several letters to the editor of that journal in early 2004 regarding the article. I suggest that you read the article, the letters, and the rejoinder by Aiken et al., and make your own judgment. As they say on the Fox News Channel, "I report, you decide."

Azar (2004, 2007, 2008) has written several papers on percentage thinking. Economists claim that many people behave irrationally when making shopping decisions by focusing on percentage saving rather than absolute saving. He cites the classic example (Thaler, 1980; Darke and Freedman, 1993) of a person who exerts more effort to save $5 on a $25 radio than on a $500 TV. It's the same $5. (See also Chen & Rao, 2007, for comparable examples.) Fascinating stuff.

Freedman, Pisani, & Purves (2007). This is far and away the best statistics textbook ever written (in my opinion), the illustrations are almost as hilarious as those in Darrell Huff's books, and there is some great stuff on percentages. (My favorite illustration is a cartoon on page 376 in which a prospective voter says to a politician "I'm behind you 100 percent, plus or minus 3 percent or so".) Check it out!

Gonick and Smith (1993). If you want to learn statistics on your own, and have a lot of laughs in the process, this book is for you. Through a combination of words, formulas, and cartoons (mostly cartoons, by Gonick) the authors summarize nicely most of the important concepts in statistics, both descriptive and inferential. My favorite cartoon in the book is the one on page 2 picturing a statistician dining with his date. He says to her: "I'm 95% confident that tonight's soup has probability between 73% and 77% of being really delicious!" They even discuss the probability of a disease given a positive diagnosis (pp. 46-50) and the estimation of confidence intervals for percentages--actually proportions (pp. 114-127)--that we talked about in Chapters 3 and 5, respectively, in this book (but without the great illustrations that Gonick provides).

Paulos (2008). In this companion to his Innumeracy book (he really has a way with words!), Paulos claims that the arguments for the existence of God don't add up, and he closes the book with the tongue-in-cheek claim that 96.39 per cent of us want to have a world that is closer to a heaven on earth than it is now. Amen.

Resis (1978). In what must be one of the most important applications of percentages known to mankind, Resis described a meeting in 1944 in which Winston Churchill suggested to Josef Stalin a way of dividing up European spheres of influence between Britain and Russia. On page 368 he cited Churchill's actual words. For some additional interesting information regarding this matter, just google "percentages agreement" [not to be confused with "percent agreement", which is a way of determining reliability].

Robbins & Robbins (2003a and 2003b). This pair of articles represents one of the strangest, yet most interesting, applications of percentages I have ever seen. The authors have collected data for estimating the percentage of people (both men and women) who have hair of various lengths! Read both articles. You'll like them.

Thibadeau (2000).
It's hard to know whether Thibadeau is serious or not when he presents his arguments for doing away with all taxes and replacing all paper money and coins with electronic currency. But this is a delightful read (free, on the internet) and he has several interesting comments regarding percentages. My favorite one is in the section on sales taxes, where he says: "sales tax is almost always a strange percentage like 6% or 7%. If something costs $1, we have to take the time to figure out whether the guy is giving the proper change on $1.07 for the five dollar bill. Most people don't check." (p. 20)

Some great websites that I haven't previously mentioned:

1. RobertNiles.com was developed by Robert Niles and is intended primarily for journalists who need to know more about mathematics and statistics. He has a particularly nice discussion of percentages.

2. Dr. Ray L. Winstead's website has a "Percentage metric time clock" that tells you at any time of any day what percentage of the day (to four decimal places!) has transpired. How about that?!

3. The website for the physics department at Bellevue College (its name is scidiv.bellevuecollege.edu/Physics/.../F-Uncert-Percent.html) calculates for you both the absolute percentage certainty and the relative percentage certainty of any obtained measurement. All you need do is input the measurement and its margin of error. Nice.

4. The Healthy People 2010 website has all sorts of percentages among its goals for the year 2010. For example, it claims that 65% of us are presently exposed to second-hand smoke [I think that is too high]; its goal is to reduce that to 45%.

5. The CartoonStock website has some great percentage cartoons.

6. There is a downloadable file called "Baker's Percentage" (just google those words) that provides the ingredients for various recipes as percentages of the weight of the principal ingredient (usually flour). Unfortunately (in my opinion) all of the weights of the ingredients are initially given in grams rather than in ounces.

7. www.StatPages.org is John Pezzullo's marvelous website, which will refer you to sources for calculating just about any descriptive statistic you might be interested in, as well as carry out a variety of inferential procedures.

References

Aiken, L.H., Clarke, S.P., Cheung, R.B., Sloane, D.M., & Silber, J.H. (2003). Education levels of hospital nurses and surgical patient mortality. Journal of the American Medical Association, 290 (12), 1617-1623.

Alf, E., & Abrahams, N.M. (1968). Relationship between per cent overlap and measures of correlation. Educational and Psychological Measurement, 28, 779-792.

Altman, D.G., & Royston, P. (2006). The cost of dichotomising continuous variables. British Medical Journal (BMJ), 332, 1080.

Ameringer, S., Serlin, R.C., & Ward, S. (2009). Simpson's Paradox and experimental research. Nursing Research, 58 (2), 123-127.

Azar, O.H. (2004). Do people think about dollar or percentage differences? Experiments, pricing implications, and market evidence. Working paper, Northwestern University.

Azar, O.H. (2007). Relative thinking theory. Journal of Socio-Economics, 36 (1), 1-14.

Azar, O.H. (2008). The effect of relative thinking on firm strategy and market outcomes: A location differentiation model with endogenous transportation costs. Journal of Economic Psychology, 29, 684-697.

Baker, S.G., & Kramer, B.S. (2001). Good for women, good for men, bad for people: Simpson's Paradox and the importance of sex-specific analysis in observational studies.
Journal of Women's Health & Gender-based Medicine, 10 (9), 867-872.

Bamber, D. (1975). The area above the ordinal dominance graph and the area below the receiver operating characteristic graph. Journal of Mathematical Psychology, 12, 387-415.

Barnette, J.J. (2005). ScoreRel CI: An Excel program for computing confidence intervals for commonly used score reliability coefficients. Educational and Psychological Measurement, 65 (6), 980-983.

Berry, K.J., Mielke, P.W., Jr., & Helmericks, S.G. (1988). Exact confidence limits for proportions. Educational and Psychological Measurement, 48, 713-716.

Biehl, M., & Halpern-Felsher, B.L. (2001). Adolescents' and adults' understanding of probability expressions. Journal of Adolescent Health, 28, 30-35.

Birnbaum, Z.W., & McCarty, R.C. (1958). A distribution-free upper confidence bound for P{Y < X}.

1,1,1->1; 1,1,2->1; 1,1,3->2; 1,2,2->2; 1,2,3->2; 1,3,3->2; 2,2,2->2; 2,2,3->2; 2,3,3->3; 3,3,3->3. Do you agree? 2 is the modal decision (6 out of the 10), and my understanding is that that is what usually happens in practice (very few manuscripts are accepted forthwith and very few are rejected outright).

Three reviewers should be sufficient. If there are two and their respective recommendations are 1 and 3 (the worst case), the editor should "break the tie" and give it a 2. If there is just one reviewer, that's too much power for one individual to have. If there are more than three, all the better for reconciling differences of opinion, but the extra work involved might not be worth it. The March 1991 issue of Behavioral and Brain Sciences has lots of good stuff about the number of reviewers and related matters. Kaplan, Lacetera, and Kaplan (2008) actually base the required number of reviewers on a fancy statistical formula! I'll stick with three.

Alternate section (if one of the previous five is no good)

Question: What is the maximum number of journals you should try before you give up hope for getting a manuscript published?

Answer: Three.

Why? If you are successful in getting your manuscript published by the first journal to which you submit it (with or without any revisions), count your blessings. If you strike out at the first journal, perhaps because your manuscript is not deemed to be relevant for that journal's readership or because the journal has a very high rejection rate, you certainly should try a second one. But if you get rejected again, try a third; and if you get rejected there as well, you should "get the message" and concentrate your publication efforts on a different topic.

One thing you should never do is submit the same manuscript to two different journals simultaneously. It is both unethical and wasteful of the time of busy reviewers. I do know of one person who submitted two manuscripts, call them A and B, to two different journals, call them X and Y, respectively, at approximately the same time. Both manuscripts were rejected. Without making any revisions he submitted Manuscript A to Journal Y and Manuscript B to Journal X. Both were accepted. Manuscript review is very subjective, so that sort of thing, though amusing, is not terribly surprising. For all its warts, however, nothing seems to work better than peer review.

References

Aaronson, L.S. (1994). Milking data or meeting commitments: How many papers from one study? Nursing Research, 43, 60-62.

American Meteorological Society (October, 2012). AMS Journals Authors Guide.

Assmann, S., Pocock, S.J., Enos, L.E., & Kasten, L.E. (2000). Subgroup analysis and other (mis)uses of baseline data in clinical trials.
The Lancet, 355 (9209), 1064-1069.

Behavioral and Brain Sciences (March, 1991). Open Peer Commentary following upon an article by D.V. Cicchetti. 14, 119-186.

Blancett, S.S., Flanagin, A., & Young, R.K. (1995). Duplicate publication in the nursing literature. Image, 27, 51-56.

Bland, J.M., & Altman, D.G. (2010). Statistical methods for assessing agreement between two methods of clinical measurement. International Journal of Nursing Studies, 47, 931-936.

Cliff, N. (1988). The eigenvalues-greater-than-one rule and the reliability of components. Psychological Bulletin, 103, 276-279.

Cohen, J. (1983). The cost of dichotomization. Applied Psychological Measurement, 7, 249-253.

Cohen, J. (1992). A power primer. Psychological Bulletin, 112 (1), 155-159.

Dimitroulis, G. (2011). Getting published in peer-reviewed journals. International Journal of Oral and Maxillofacial Surgery, 40, 1342-1345.

Dotsch, R. (n.d.). Degrees of Freedom Tutorial. Accessible on the internet.

Ebel, R.L. (1969). Expected reliability as a function of choices per item. Educational and Psychological Measurement, 29, 565-570.

Ebel, R.L. (1972). Why a longer test is usually more reliable. Educational and Psychological Measurement, 32, 249-253.

Erlen, J.A., Siminoff, L.A., Sereika, S.M., & Sutton, L.B. (1997). Multiple authorship: Issues and recommendations. Journal of Professional Nursing, 13 (4), 262-270.

Freidlin, B., Korn, E.L., Gray, T., et al. (2008). Multi-arm clinical trials of new agents: Some design considerations. Clinical Cancer Research, 14, 4368-4371.

Green, S., Liu, P-Y, & O'Sullivan, J. (2002). Factorial design considerations. Journal of Clinical Oncology, 20, 3424-3430.

Hewes, D.E. (2003). Methods as tools. Human Communication Research, 29 (3), 448-454.

Kaiser, H.F. (1960). The application of electronic computers to factor analysis. Educational and Psychological Measurement, 20, 141-151.

Kaplan, D., Lacetera, N., & Kaplan, C. (2008). Sample size and precision in NIH peer review. PLoS ONE, 3 (7), e2761.

Kelley, K., Maxwell, S.E., & Rausch, J.R. (2003). Obtaining power or obtaining precision: Delineating methods of sample-size planning. Evaluation & the Health Professions, 26, 258-287.

Kelley, T.L. (1942). The reliability coefficient. Psychometrika, 7 (2), 75-83.

Killip, S., Mahfoud, Z., & Pearce, K. (2004). What is an intracluster correlation coefficient? Crucial concepts for primary care researchers. Annals of Family Medicine, 2, 204-208.

King, J.T., Jr. (2000). How many neurosurgeons does it take to write a research article? Authorship proliferation in neurological research. Neurosurgery, 47 (2), 435-440.

Knapp, T.R. (1979). Using incidence sampling to estimate covariances. Journal of Educational Statistics, 4, 41-58.

Knapp, T.R. (2007a). Effective sample size: A crucial concept. In S.S. Sawilowsky (Ed.), Real data analysis (Chapter 2, pp. 21-29). Charlotte, NC: Information Age Publishing.

Knapp, T.R. (2007b). Bimodality revisited. Journal of Modern Applied Statistical Methods, 6 (1), 8-20.

Knapp, T.R., & Campbell-Heider, N. (1989). Numbers of observations and variables in multivariate analyses. Western Journal of Nursing Research, 11, 634-641.

Kratochwill, T.R., & Levin, J.R. (2010). Enhancing the scientific credibility of single-case intervention research: Randomization to the rescue. Psychological Methods, 15 (2), 124-144.

LeBreton, J.M., & Senter, J.L. (2008). Answers to 20 questions about interrater reliability and interrater agreement. Organizational Research Methods, 11, 815-852.
Matell, M.S., & Jacoby, J. (1971). Is there an optimal number of alternatives for Likert Scale items? Educational and Psychological Measurement, 31, 657-674.

Marcus-Roberts, H.M., & Roberts, F.S. (1987). Meaningless statistics. Journal of Educational Statistics, 12, 383-394.

NEJM Author Center (n.d.). Frequently Asked Questions.mht.

O'Keefe, D.J. (2003). Against familywise alpha adjustment. Human Communication Research, 29 (3), 431-447.

Owen, S.V., & Froman, R.D. (2005). Why carve up your continuous data? Research in Nursing & Health, 28, 496-503.

Pollard, R.Q., Jr. (2005). From dissertation to journal article: A useful method for planning and writing any manuscript. The Internet Journal of Mental Health, 2 (2), doi:10.5580/29b3.

Rodgers, J.L., & Nicewander, W.A. (1988). Thirteen ways to look at the correlation coefficient. The American Statistician, 42 (1), 59-66.

Senn, S. (1994). Testing for baseline balance in clinical trials. Statistics in Medicine, 13, 1715-1726.

Statistics S 1.1. (n.d.). Working with data. Accessible on the internet.

Stemler, S.E. (2004). A comparison of consensus, consistency, and measurement approaches to estimating interrater reliability. Practical Assessment, Research & Evaluation, 9 (4).

Walker, H.W. (1940). Degrees of freedom. Journal of Educational Psychology, 31 (4), 253-269.

SEVEN: A COMMENTARY REGARDING CRONBACH'S COEFFICIENT ALPHA

A population of seven people took a seven-item test, for which each item is scored on a seven-point scale. Here are the raw data:

ID  item1  item2  item3  item4  item5  item6  item7  total
 1    1      1      1      1      1      1      1      7
 2    2      2      2      2      2      3      3     16
 3    3      4      6      7      7      4      5     36
 4    4      7      5      3      5      7      6     37
 5    5      6      4      6      4      5      2     32
 6    6      5      7      5      3      2      7     35
 7    7      3      3      4      6      6      4     33

Here are the inter-item correlations and the correlations between each of the items and the total score:

        item1  item2  item3  item4  item5  item6  item7
item2   0.500
item3   0.500  0.714
item4   0.500  0.536  0.750
item5   0.500  0.464  0.536  0.714
item6   0.500  0.643  0.214  0.286  0.714
item7   0.500  0.571  0.857  0.393  0.464  0.286
total   0.739  0.818  0.845  0.772  0.812  0.673  0.752

The mean of each of the items is 4 and the standard deviation is 2 (with division by N, not N-1; these are data for a population of people as well as a population of items). The inter-item correlations range from .214 to .857, with a mean of .531. [The largest eigenvalue is 4.207. The next largest is 1.086.] The range of the item-to-total correlations is from .673 to .845. Cronbach's alpha is .888.

Great test (at least as far as internal consistency is concerned)? Perhaps; but there is at least one problem. See if you can guess what that is before you read on.

While you're contemplating, let me call your attention to seven interesting sources that discuss Cronbach's alpha (see References for complete citations):

1. Cronbach's (1951) original article (naturally).
2. Knapp (1991).
3. Cortina (1993).
4. Cronbach (2004).
5. Tan (2009).
6. Sijtsma (2009).
7. Gadermann, Guhn, and Zumbo (2012).

OK. Now back to our data set. You might have already suspected that the data are artificial (all of the items having exactly the same means and standard deviations, and all of items 2-7 correlating .500 with item 1). You're right; they are; but that's not what I had in mind. You might also be concerned about the seven-point scales (ordinal rather than interval?). Since the data are artificial, those scales can be anything we want them to be. If they are Likert-type scales they are ordinal.
But they could be something like the number of days per week that something happened, in which case they are interval. In any event, that's also not what I had in mind. You might be bothered by the negative skewness of the total score distribution. I don't think that should matter. And you might not like the smallness (and the seven-ness? I like sevens; thus the title of this paper) of the number of observations. Don't be. Once the correlation matrix has been determined, the N is not of direct relevance. (The software doesn't know or care what N is at that point.) Had this been a sample data set, however, and had we been interested in the statistical inference from a sample Cronbach's alpha to the Cronbach's alpha in the population from which the sample had been drawn, the N would be of great importance.

What concerns me is the following: The formula for Cronbach's alpha is k·r_avg/[1 + (k-1)·r_avg], where k is the number of items and r_avg is the average (mean) inter-item correlation, when all of the items have equal variances (which they do in this case); it is often a good approximation to Cronbach's alpha even when they don't. (More about this later.) Those r's are Pearson r's, which are measures of the direction and magnitude of the LINEAR relationship between variables. Are the relationships linear? I have plotted the data for each of the items against the other items. There are 21 plots (the number of combinations of seven things taken two at a time). Here is the first one.

[Plot of item2 (vertical axis) against item1 (horizontal axis) for the artificial data: the seven points rise from (1,1) to a peak at (4,7) and then fall back to (7,3).]

I don't know about you, but that plot looks non-linear, almost parabolic, to me, even though the linear Pearson r is .500. Is it because of the artificiality of the data, you might ask. I don't think so. Here is a set of real data (item scores that I have excerpted from my daughter Katie's dissertation; Knapp, 2010). [They are the responses by seven female chaplains in the Army Reserves to the first seven items of a 20-item test of empathy.]

ID  item1  item2  item3  item4  item5  item6  item7  total
 1    5      7      6      6      6      6      6     42
 2    1      7      7      5      7      7      7     41
 3    6      7      6      6      6      6      6     43
 4    7      7      7      6      7      7      6     47
 5    2      6      6      6      7      6      5     38
 6    1      1      3      4      5      6      5     25
 7    2      5      3      6      7      6      6     35

Here are the inter-item correlations and the correlation of each item with the total score:

        item1  item2  item3  item4  item5  item6  item7
item2   0.566
item3   0.492  0.826
item4   0.616  0.779  0.405
item5   0.060  0.656  0.458  0.615
item6   0.156  0.397  0.625 -0.062  0.496
item7   0.138  0.623  0.482  0.175  0.439  0.636
total   0.744  0.954  0.855  0.746  0.590  0.506  0.566

Except for the -.062 these correlations look a lot like the correlations for the artificial data. The inter-item correlations range from that -.062 to .826, with a mean of .456. [The largest eigenvalue is 3.835 and the next-largest eigenvalue is 1.479.] The item-to-total correlations range from .506 to .954. Cronbach's alpha is .854. Another great test? But how about linearity? Here is the plot of item2 against item1 for the real data.

[Plot of item2 (vertical axis) against item1 (horizontal axis) for the real data: four points at item2 = 7 spread across item1 = 1, 5, 6, and 7, two points at (2,6) and (2,5), and one point at (1,1).]

That's a worse, non-linear plot than the plot for the artificial data, even though the linear Pearson r is a respectable .566.

Going back to the formula for Cronbach's alpha that is expressed in terms of the inter-item correlations, it is not the most general formula. Nor is it the one that Cronbach generalized from the Kuder-Richardson Formula #20 (Kuder & Richardson, 1937) for dichotomously-scored items. The formula that always "works" is:

α = [k/(k-1)][1 - (Σσi²/σ²)]

where k is the number of items, σi² is the variance of item i (for i = 1, 2, ..., k), and σ² is the variance of the total scores. For the artificial data, that formula yields the same value for Cronbach's alpha as before, i.e., .888, but for the real data it yields a value of .748, which is lower than the .854 previously obtained. That happens because the item variances are not equal, ranging from a low of .204 (for item #6) to a high of 5.387 (for item #1). The item variances for the artificial data were all equal to 4.
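A small sketch (Python; mine, not the chapter's) of the two ways of computing alpha just described; for the artificial data, where the item variances are all equal, both versions should reproduce the .888 reported above:

def alpha_from_mean_r(k, r_avg):
    # Correlation-based version: k items with average inter-item correlation r_avg.
    return k * r_avg / (1 + (k - 1) * r_avg)

def alpha_from_variances(items):
    # General version: items is a list of k equal-length score lists for the same people.
    # Population variances (division by N), as in the chapter.
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    return (k / (k - 1)) * (1 - sum(var(item) for item in items) / var(totals))

artificial = [[1, 2, 3, 4, 5, 6, 7],   # item1 scores for the seven people
              [1, 2, 4, 7, 6, 5, 3],   # item2
              [1, 2, 6, 5, 4, 7, 3],   # item3
              [1, 2, 7, 3, 6, 5, 4],   # item4
              [1, 2, 7, 5, 4, 3, 6],   # item5
              [1, 3, 4, 7, 5, 2, 6],   # item6
              [1, 3, 5, 6, 2, 7, 4]]   # item7
print(round(alpha_from_mean_r(7, 0.531), 3))      # about .888
print(round(alpha_from_variances(artificial), 3))  # about .888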
So what? Although the most general formula was derived in terms of inter-item covariances rather than inter-item correlations, there is still the (hidden?) assumption of linearity. The moral of the story is the usual advice given to people who use Pearson r's: ALWAYS PLOT THE DATA FIRST. If the inter-item plots don't look linear, you might want to forgo Cronbach's alpha in favor of some other measure, e.g., the ordinal reliability coefficient advocated by Gadermann et al. (2012). There are tests of linearity for sample data, but this paper is concerned solely with the internal consistency of a measuring instrument when data are available for an entire population of people and an entire population of items (however rare that situation might be).

References

Cortina, J.M. (1993). What is coefficient alpha? An examination of theory and applications. Journal of Applied Psychology, 78, 98-104.

Cronbach, L.J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16, 297-334.

Cronbach, L.J. (2004). My current thoughts on coefficient alpha and successor procedures. Educational and Psychological Measurement, 64, 391-418. [This article was published after Lee Cronbach's death, with extensive editorial assistance provided by Richard Shavelson.]

Gadermann, A.M., Guhn, M., & Zumbo, B.D. (2012). Estimating ordinal reliability for Likert-type and ordinal item response data: A conceptual, empirical, and practical guide. Practical Assessment, Research, & Evaluation, 17 (3), 1-13.

Knapp, K. (2010). The metamorphosis of the military chaplaincy: From hierarchy of minister-officers to shared religious ministry profession. Unpublished D.Min. thesis, Barry University, Miami Shores, FL.

Knapp, T.R. (1991). Coefficient alpha: Conceptualizations and anomalies. Research in Nursing & Health, 14, 457-460. [See also Errata, op. cit., 1992, 15, 321.]

Kuder, G.F., & Richardson, M.W. (1937). The theory of the estimation of test reliability. Psychometrika, 2, 151-160.

Sijtsma, K. (2009). On the use, the misuse, and the very limited usefulness of Cronbach's alpha. Psychometrika, 74, 107-120.

Tan, S. (2009). Misuses of KR-20 and Cronbach's Alpha reliability coefficients. Education and Science, 34 (152), 101-112.

WHY IS THE ONE-GROUP PRETEST-POSTTEST DESIGN STILL USED?

Introduction

Approximately 50 years ago, Donald Campbell and Julian Stanley (1963) carefully explained why the one-group pretest-posttest design (Y1 X Y2) was a very poor choice for testing the effect of an independent variable X on a dependent variable Y. The reasons ranged from relatively obvious matters such as the absence of a control group to somewhat sophisticated considerations such as regression toward the mean. Yet that design continues to be used in nursing research (see, e.g., Quinn, 2011) and elsewhere. Why?
In this paper I will try to conjecture some reasons for its survival in the research literature. But first I would like to add one other weakness to the Campbell & Stanley (hereinafter referred to as C&S) list of what's wrong with it: There is no basis for any sort of helpful inference from it, statistical or scientific, even if the sample used in the study has been randomly drawn (which is rarely the case). For suppose there is a big difference (change) between the pretest and posttest results. What can you say? You can't say that there is a statistically significant effect of X on Y, because there is no chance assignment to experimental and control groups (there is no control group) to talk about. The difference is what it is, and that's that. You could wrap a standard error or two around it, but that won't help in inferring anything about the effect of X on Y. Lack of understanding Although the C&S monograph has been cited extensively, it did originate in education and psychology, so is it possible that researchers in other disciplines such as nursing might not have been exposed to its cautions? I personally doubt it, for three reasons: (1) I am familiar enough with graduate curricula in nursing to know that C&S has indeed been used in courses in research design in many schools and colleges of nursing; (2) adaptations (sometimes dangerously close to plagiarisms) of the C&S discussions appear in several nursing research textbooks; and (3) the Google prompt "campbell stanley designs nursing" [but without the quotes] returns hundreds of hits. Necessity or willingness to settle for less This is probably the most likely reason. Researchers are sometimes subject to pressures from colleagues and superiors to "give the treatment to everybody". [The Sinclair Lewis novel Arrowsmith (1925) provides a good example of that.] The researcher who might otherwise argue for a better design might not be willing to spend the political capital necessary to overturn an original decision to go with the Y1 X Y2 approach. Perhaps the researcher her(him)self would like to conserve some personal effort by using the one-group design. Having a control group to contend with is much more work. Perhaps the researcher doesn't care whether or not the difference is attributable to X; all she(he) might care about is whether things got better or worse between pretest and posttest, not why. Perhaps the researcher uses the design in a "negative" way. If X is hoped to produce an increase in Y from pretest to posttest, and if in fact a decrease is observed, any hypothesis regarding a positive change would not be supported by the data, no matter how big or how small that decrease is. Perhaps the researcher considers the use of this design as a pilot effort (for a main study that might or might not follow). Perhaps the researcher feels that the time between pretest and posttest is so short (a quick measure of Y; a quick shot of X; and another quick measure of Y?) that if there's any change in Y it must be X that did it. Or perhaps the researcher not only doesn't care about causality but also is interested primarily in individual changes (John lost five points, Mary gained ten points, etc.) even if the gains and the losses cancel each other out. The raw data for a Y1 X Y2 design show that nicely. Can it be salvaged? Apparently so. There have been several suggestions for improving upon the design in order to make it more defensible as a serious approach to experimentation. 
One suggestion (Glass, 1965) is to use a complicated design that is capable of separating maturation and testing effects from the treatment effect. Another approach (Johnson, 1986) is to randomly assign subjects to the various measurement occasions surrounding the treatment (e.g., pretest, posttest, follow-up) and compare the findings for those subgroups within the one-group context. A third variation is to incorporate a "double pretest" before implementing the treatment. If the difference between either pretest and the posttest is much greater than the difference between the two pretests, additional support is provided for the effect of X. [Marin, et al. (1990) actually used such a design in their study of the effects of an anti-smoking campaign for Hispanics.] What can be done to minimize its use? It's all well and good to complain about the misuse or overuse of the one-group pretest posttest design. It's much more difficult to try to fix the problem. I have only the following three relatively mild recommendations: 1. Every graduate program (master's and doctoral) in nursing should include a required course in the design of experiments in which C&S (or a reasonable facsimile thereof) is the adopted textbook, with particular emphasis placed upon their excellent section on the one-group pretest posttest design. (They use the notation O1 X O2 rather than Y1 X Y2 , where the O stands for "observation" on the dependent variable Y. I find Y1 X Y2 to be much more straightforward.) 2. Thesis and dissertation committees should take a much stronger stance against the one-group design. The best people to insist upon that are those who serve as statistical consultants in colleges and schools of nursing. 3. Editors of, and reviewers for, nursing research journals should automatically reject a manuscript in which this design plays the principal role. A historical note regarding the Campbell & Stanley work As indicated in the References section that follows, "Experimental and quasi-experimental designs for research on teaching" first appeared as a chapter in a compilation of papers devoted to educational research. It received such acclaim that it was adapted (essentially forthwith) as a paperback book published by Rand McNally in 1966, but without the words "on teaching" [undoubtedly in the hope of attracting a larger market, which it indeed did]. It has gone in and out of print many times. I cherish my personal copy of the original chapter, which I regret I never had signed by either Campbell or Stanley, both of whom are deceased. I met Campbell once. I knew Stanley much better; he and I were graduates of the same doctoral program in educational measurement and statistics at Harvard. References Campbell, D.T., & Stanley, J.C. (1963). Experimental and quasi-experimental designs for research on teaching. In N.L. Gage (ed.), Handbook of research on teaching. Chicago: Rand McNally. Glass, G.V (1965). Evaluating testing, maturation, and treatment effects in a pretest-posttest quasi-experimental design. American Educational Research Journal, 2 (2), 83-87. Johnson, C.W. (1986). A more rigorous quasi-experimental alternative to the one-group pretest-posttest design. Educational and Psychological Measurement, 46, 585-591. Lewis, S. (1925). Arrowsmith. New York: Harcourt, Brace. Marin, B.V., Marin, G., Perez-Stable, E.J., Otero-Sabogal, R., & Sabogal, F. (1990). Cultural differences in attitudes toward smoking: Developing messages using the theory of reasoned action. 
Journal of Applied Social Psychology, 20 (6), 478-493. Quinn, M. (2011). Introduction of active video gaming into the middle school curriculum as a school-based childhood obesity intervention. Journal of Pediatric Health Care, 1-10. N (or n) vs. N - 1 (or n - 1) REVISITED Prologue Over 40 years ago I (Knapp,1970) wrote an article regarding when you should use N and when you should use N - 1 in the denominators of various formulas for the variance, the standard deviation, and the Pearson product-moment correlation coefficient. I ended my "pro N" article with this sentence: "Nobody ever gets an average by dividing by one less than the number of observations." (page 626). There immediately followed three other comments (Landrum, 1971; Games, 1971; Hubert, 1972) concerning the matter of N vs. N - 1. Things were relatively quiet for the next few years, but the controversy has erupted several times since, culminating in a recent clever piece by Speed (2012) who offered a cash prize [not yet awarded] to the person who could determine the very first time that a discussion was held on the topic. The problem Imagine that you are teaching an introductory ("non-calculus") course in statistics. [That shouldn't be too hard. Some of you who are reading this might be doing that or have done that.] You would like to provide your students with their first formulas for the variance and for the standard deviation. Do you put N, N -1, n, or n - 1 in the denominators? Why? Some considerations 1. Will your first example (I hope you'll give them an example!) be a set of data (real or artificial) for a population (no matter what its size)? I hope so. N is fine, and is really the only defensible choice of the four possibilities. You never subtract 1 from the number of observations in a population unless you want to calculate the standard error of some statistic using the finite population correction (fpc). And nobody uses n to denote the population size. 2. Will that first example be for a sample? N would be OK, if you use N for sample size and use something like Npop for population size. [Yes, I have seen Npop.] N -1 would be OK for the sample variance, if you always use N for sample size, you have a random sample, and you would like to get an unbiased estimate of the population variance; but it's not OK for the sample standard deviation. (The square root of an unbiased estimate of a parameter is not an unbiased estimate of the square root of the parameter. Do you follow that?) n would be OK for both the sample variance and the sample standard deviation, and is my own personal preference. n - 1 would be OK for the sample variance, if you always use n for sample size, you have a random sample, and you would like to get an unbiased estimate of the population variance; but it's not OK for the sample standard deviation (for the same reason indicated for N - 1). 3. What do most people do? I haven't carried out an extensive survey, but my impression is that many authors of statistics textbooks and many people who have websites for the teaching of statistics use a sample for a first example, don't say whether or not the sample is a random sample, and use n - 1 in the denominator of the formula for the variance and in the denominator of the formula for the standard deviation. The massive compendium (1886 pages) on statistical inference by Sheskin (2011) is an interesting exception. 
On pages 12-13 of his book he provides all of the possible definitional formulas for standard deviation and variance (for population or sample, N or n, N-1 or n-1, biased or unbiased estimator, σ or s). He makes one mistake, however. On page 12 he claims that the formula for the sample standard deviation with n-1 in the denominator yields an unbiased estimator of the population standard deviation. As indicated above, it does not. (He later corrects the error in a footnote on page 119 with the comment: "Strictly speaking, s~ [his notation] is not an unbiased estimate of σ, although it is usually employed as such." That's a bit tortured [how does he know that?], but I think you get the idea.) Another commendable exception is Richard Lowry's VassarStats website. For his "Basic Sample Stats" routine he gives the user the choice of n or n-1. Nice. 4. Does it really matter? From a practical standpoint, if the number of observations is very large, no. But from a conceptual standpoint, you bet it does, no matter what the size of N or n. In the remainder of this paper I will try to explain why; identify the principal culprits; and recommend what we should all do about it. Why it matters conceptually A variance is a measure of the amount of spread around the arithmetic mean of a frequency distribution, albeit in the wrong units. My favorite example is a distribution of the number of eggs sold by a supermarket in a given month. No matter whether you have a population or a sample, or whether you use in the denominator the number of observations or one less than the number of observations, the answer comes out in "squared eggs". In order to get back to the original units (eggs) you must "unsquare" by taking the square root of the variance, which is equal to the standard deviation. A variance is a special kind of mean. It is the mean of the squared differences (deviations) from the mean. A standard deviation is the square root of the mean of the squared differences from the mean, and is sometimes called "the root mean square". You want to get an "average" measure of differences from the mean, so you want to choose something that is in fact an "average". You might even prefer finding the mean of the absolute values of the differences from the mean to finding the standard deviation. It's a much more intuitive approach than squaring the differences, finding their average, and then unsquaring at the end. The culprits In my opinion, there are two sets of culprits. The first set consists of some textbook authors and some people who have websites for the teaching of statistics who favor N - 1 (or n - 1) for various reasons (perhaps they want their students to get accustomed to n - 1 right away because they'll be using that in their calculations to get unbiased estimates of the population variance, e.g., in ANOVA) or they just don't think things through. The second set consists of two subsets. Subset A comprises the people who write the software and the manuals for handheld calculators. I have an old TI-60 calculator that has two keys for calculating a standard deviation. One of the keys is labelled σn and the other is labelled σn-1. The guidebook calls the first "the population deviation"; it calls the second "the sample deviation" (page 5-6). It's nice that the user has the choice, but the notation is not appropriate [and the word "standard" before "deviation" should not be omitted]. Greek letters are almost always reserved for population parameters, and as indicated above you don't calculate a population standard deviation by having in the denominator one less than the number of observations. 
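As it happens, at least one piece of software does make the choice explicit: Python's standard statistics module has separate "population" (divide by n) and "sample" (divide by n - 1) functions for both the variance and the standard deviation. A minimal sketch, with artificial data:

import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]   # artificial data, n = 8, mean = 5

# "population" versions: divide by n
print(statistics.pvariance(data), statistics.pstdev(data))   # 4.0  2.0

# "sample" versions: divide by n - 1
print(statistics.variance(data), statistics.stdev(data))     # about 4.571 and 2.138
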
Subset B comprises the people who write the software and the manuals for computer packages such as Excel, Minitab, SPSS, and SAS. All four of those use n - 1 as the default. [Good luck in trying to get the calculation using n.] n + 1 [not the magazine] Believe it or not, there are a few people who recommend using n + 1 in the denominator, because that produces the minimum mean squared error in estimating a population variance. See, for example, Hubert (1972), Yatracos (2005), and Biau and Yatracos (2012). It all depends upon what you want to maximize or minimize. Degrees of freedom Is it really necessary to get into degrees of freedom when first introducing the variance and the standard deviation? I don't think so. It's a strange concept (as Walker, 1940, pointed out many years ago) that students always have trouble with, no matter how you explain it. The number of unconstrained pieces of data? Something you need to know in order to use certain tables in the backs of statistics textbooks? Whatever. Pearson r For people who use n in the denominator for the sample variance and sample standard deviation, the transition to the Pearson product-moment correlation coefficient is easy. Although there are at least 13 different formulas for the Pearson r (Rodgers & Nicewander, 1988; I've added another one), the simplest to understand is Σzxzy/n, where the z's are the standard scores for the two variables X and Y that are to be correlated. The people who favor n - 1 for the standard deviation, and use that standard deviation for the calculation of the z scores, need to follow through with n - 1 in the denominator of the formula for Pearson r. But that ruins "the average cross-product of standard scores" interpretation. If they don't follow through with n - 1, they're just plain wrong. Proportions and the t sampling distribution It is well known that a proportion is a special kind of arithmetic mean. It is also well known that if the population standard deviation is unknown the t sampling distribution for n - 1 degrees of freedom should be used rather than the normal sampling distribution when making statistical inferences regarding a population mean. But it turns out that the t sampling distribution should not be used for making statistical inferences regarding a population proportion. Why is that? One of the reasons is simple to state: If you are testing a hypothesis about a population proportion you always know the population standard deviation, because the population standard deviation is equal to the square root of the product of the population proportion and 1 minus the population proportion. In this case, if you don't want to use the binomial sampling distribution to test the hypothesis, and you'll settle for a large-sample approximation, you use the normal sampling distribution. All of this has nothing to do with t. 
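Here is a minimal Python sketch of that large-sample test (the counts and the hypothesized proportion are hypothetical); because the standard deviation is known under the null hypothesis, the reference distribution is the normal, not t.

import math

def proportion_z_test(successes, n, p0):
    """Large-sample z test of H0: population proportion = p0.
    Under H0 the standard deviation is known, sqrt(p0*(1-p0)),
    so the standard error uses p0 and the reference distribution is normal, not t."""
    p_hat = successes / n
    se = math.sqrt(p0 * (1 - p0) / n)   # known under H0; no n-1, no t
    z = (p_hat - p0) / se
    # two-sided p-value from the standard normal distribution
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_hat, z, p_value

# hypothetical example: 130 "successes" in a sample of 200, H0: p = .60
print(proportion_z_test(130, 200, .60))
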
Problems start to arise when you want to get a confidence interval for the population proportion. You don't know what the actual population proportion is (that's why you're trying to estimate it!), so you have to settle for the sample proportion when getting an interval estimate for the population proportion. What do you do? You calculate p hat (the sample proportion) plus or minus some number c of standard errors (where the standard error is equal to the square root of the quantity p hat times (1 - p hat) divided by the sample size n). How does t get into the act (for the interval estimation of p)? It doesn't. Some people argue that you should use n - 1 rather than n in the formula for the standard error and use the t sampling distribution for n - 1 degrees of freedom in order to make the inference. Not so, argued Goodall (1995), who explained why (it involves the definition of t as a normal divided by the square root of a chi-square). Bottom line: For proportions there is no n-1 and no t. It's n and normal. [Incidentally, using the sample p hat to get a confidence interval for a population p creates another problem. If p hat is very small (no matter what p happens to be), and n is small, that confidence interval will usually be too tight. In the extreme, if p hat is equal to 0 (i.e., there are no successes) the standard error is also equal to 0, indicating no sampling error whatsoever, which doesn't make sense. There is something called the Rule of Three that is used to get the upper bound of 3/n for a confidence interval for p when there are no successes in a sample of size n. See, for example, Jovanovic and Levy, 1997.] A call to action If you happen to be asked to serve as a reviewer of a manuscript for possible publication as an introductory statistics textbook, please insist that the authors provide a careful explanation for whatever they choose to use in the denominators for their formulas for the variance, the standard deviation, and the Pearson r, and how they handle inferences concerning proportions. If you have any influence over the people who write the software and the manuals for computer packages that calculate those expressions, please ask them to do the same. I have no such influence. I tried very hard a few years ago to get the people at SPSS to take out the useless concept of "observed power" from some of its ANOVA routines. They refused to do so. References Biau, G., & Yatracos, Y.G. (2012). On the shrinkage estimation of variance and Pitman closeness criterion. Journal de la Société Française de Statistique, 153, 5-21. [Don't worry; it's in English.] Games, P.A. (1971). Further comments on "N vs. N - 1". American Educational Research Journal, 8, 582-584. Goodall, G. (1995). Don't get t out of proportion! Teaching Statistics, 17 (2), 50-51. Hubert, L. (1972). A further comment on N versus N-1. American Educational Research Journal, 9 (2), 323-325. Jovanovic, B.D., & Levy, P.S. (1997). A look at the Rule of Three. The American Statistician, 51 (2), 137-139. Knapp, T.R. (1970). N vs. N - 1. American Educational Research Journal, 7, 625-626. Landrum, W.L. (1971). A second comment on N vs. N - 1. American Educational Research Journal, 8, 581. Rodgers, J.L., & Nicewander, W.A. (1988). Thirteen ways to look at the correlation coefficient. The American Statistician, 42, 59-66. Sheskin, D.J. (2011). Handbook of parametric and nonparametric statistical procedures (5th ed.). Boca Raton, FL: CRC Press. Speed, T. (December 19, 2012). Terence's stuff: n vs. n - 1. IMS Bulletin Online. Walker, H.M. (1940). Degrees of freedom. Journal of Educational Psychology, 31, 253-269. Yatracos, Y.G. (2005). Artificially augmented samples, shrinkage, and mean squared error reduction. Journal of the American Statistical Association, 100 (472), 1168-1175. 
THE INDEPENDENCE OF OBSERVATIONS What this paper is NOT about It is not about observation as the term is used in psychology, e.g., when the behavior of children at play is observed through one-way windows in a laboratory setting. It is also not about observational research as that term is used in epidemiology, i.e., as a type of research different from true experimental research, in that no variables are manipulated by the researchers themselves. And it is not about independent variables or independent samples, except tangentially. What this paper IS about It is concerned with the term "observation" defined as a measurement taken on an entity (usually a person). It might be a univariate observation, e.g., my height (71); a bivariate observation, e.g., my height and my weight (71, 145); or a multivariate observation, e.g., my sex, age, height, and weight (M, 83, 71, 145). If I were a twin (I'm not, but the name Thomas does mean "twin" in Aramaic), I could be interested in analyzing a data set that includes the bivariate observation for me and the bivariate observation for my twin, which might be something like (70, 150). What is meant by the term "independence of observations" Two or more observations are said to be independent if knowing one of them provides no information regarding the others. Using the same example of my height and weight, and my twin's height and weight, if you knew mine, you knew I had a twin, but you didn't know his/her (we could be "identical" or "fraternal") height and weight, you would suspect (and rightly so) that those two observations would not be independent. Why it is important For virtually every statistical analysis, whether it be for an experiment or a non-experiment, for an entire population or for a sample drawn from a population, the observations must be independent in order for the analysis to be defensible. "Independence of observations" is an assumption that must be satisfied, even in situations where the usual parametric assumptions of normality, homogeneity of variance, homogeneity of regression, and the like might be relaxed. So what is the problem? The problem is that it is often difficult to determine whether the observations obtained in a particular study are or are not independent. In what follows I will try to explain the extent of the problem, with examples; provide at least one way to actually measure the degree of independence of a set of observations; and mention some ways of coping with non-independence. Some examples 1. In an article I wrote about thirty years ago (Knapp, 1984), I gave the following example of a small hypothetical data set:

Name    Height   Weight
Sue     5'6"     125#
Ginny   5'3"     135#
Ginny   5'3"     135#
Sally   5'8"     150#

Those four observations are not independent, because Ginny is in the data twice. (That might have happened because of a clerical error; but you'd be surprised how often people are counted in data more than once. See below.) To calculate their mean height or their mean weight, or the correlation between their heights and their weights, with n = 4, would be inappropriate. The obvious solution would be to discard the duplicate observation for Ginny and use n = 3. All three of those observations would then be independent. 
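A minimal Python sketch of that check and fix, using the hypothetical data above (heights converted to inches for the arithmetic): count how many times each name appears, drop the duplicates, and only then compute the summary statistics.

from collections import Counter

rows = [("Sue", 66, 125), ("Ginny", 63, 135), ("Ginny", 63, 135), ("Sally", 68, 150)]

counts = Counter(name for name, _, _ in rows)
print({name: c for name, c in counts.items() if c > 1})   # {'Ginny': 2}

deduped = list(dict.fromkeys(rows))   # keeps the first occurrence of each record
n = len(deduped)                      # 3, not 4
mean_height = sum(h for _, h, _ in deduped) / n
mean_weight = sum(w for _, _, w in deduped) / n
print(n, mean_height, mean_weight)    # 3, about 65.7 inches, about 136.7 pounds
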
2. Later in that same article I provided some real data for seven pairs of 16-year-old, Black, female identical twins (you can tell I like heights, weights, and twins):

Pair     Heights (in inches)    Weights (in pounds)
1 (Aa)   A: 68   a: 67          A: 148   a: 137
2 (Bb)   B: 65   b: 67          B: 124   b: 126
3 (Cc)   C: 63   c: 63          C: 118   c: 126
4 (Dd)   D: 66   d: 64          D: 131   d: 120
5 (Ee)   E: 66   e: 65          E: 119   e: 124
6 (Ff)   F: 62   f: 63          F: 119   f: 130
7 (Gg)   G: 66   g: 66          G: 114   g: 104

Are those observations independent? Hmmm. Nobody is in the data more than once, but as forewarned above there is something bothersome here. You might want to calculate the mean height of these women, for example, but how would you do it? Add up all of the heights and divide by 14? No; that would ignore the fact that a is a twin of A, b is a twin of B, etc. How about averaging the heights within each pair and finding the average of those seven averages? No; that would throw away some interesting within-pair data. How about just finding the average height for the capital-letter twins? No; that would REALLY be wasteful of data. And things are just as bad for the weights. The plot thickens if you were to be interested in the relationship between height and weight for the seven twin-pairs. You could start out all right by plotting Twin A's weight against Twin A's height, i.e., Y = 148, X = 68. When it comes to Twin a you could plot 137 against 67, but how would you indicate the twinship? (In that same article I suggested using colored data points, a different color for each twin-pair.) Likewise for B and b, C and c, etc. That plot would soon get out of hand, however, even before any correlation between height and weight were to be calculated. Bottom line: These fourteen observations are not independent of one another. 3. In a recent study, Russak, et al. (2010) compared the relative effectiveness of two different sunscreens (SPF 85 and SPF 50) for preventing sunburn. Each of the 56 participants in the study applied SPF 85 to one side of the face and SPF 50 to the other side (which side got which sunscreen was randomly determined). They presented the results in the following table:

Sun Protection Factor    Sunburned    Not Sunburned
85                       1            55
50                       8            48

Sainani (2010) included that table in her article as an example of non-independent observations, because it implied there were 56 participants who used SPF 85 and 56 other participants who used SPF 50, whereas in reality 56 participants used both. She (Sainani) claimed that the following table displayed the data correctly:

                               SPF-50 Side
                               Sunburned    Not Sunburned
SPF-85 Side   Sunburned        1            0
              Not sunburned    7            48

The observations in this second table are independent. The best way to spot non-independent observations in such tables is to calculate row totals, column totals, and the grand total. If the grand total is greater than the number of participants there is a non-independence problem. 
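That check is easy to automate. A minimal Python sketch, using the two tables above:

def flags_nonindependence(table, n_participants):
    """table is a list of rows of counts. Returns True if the grand total
    exceeds the number of participants (the check described above)."""
    grand_total = sum(sum(row) for row in table)
    return grand_total > n_participants

wrong_table = [[1, 55], [8, 48]]    # one row per sunscreen, as originally reported
right_table = [[1, 0], [7, 48]]     # the paired, within-person layout

print(flags_nonindependence(wrong_table, 56))   # True: 112 "observations" from 56 people
print(flags_nonindependence(right_table, 56))   # False: 56 observations from 56 people
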
What to do about possibly non-independent observations The best thing to do is to try to avoid the problem entirely, e.g., by not doing twin research and by not having people serve as their own controls. But that might be too much of a cop-out. Twin research is important; and the advantages of having people serve as their own controls could outweigh the disadvantages (see Knapp, 1982 regarding the latter matter, where I actually come down on the side of not doing so). One thing that should always be tried is to get a measure of the degree of independence. In a conference presentation many years ago, Glendening (1976) suggested a very creative approach, and I summarized it in Knapp (1984). For the case of k aggregates, with n observations per aggregate, a measure of independence, I, is found by taking an F-type ratio of the variance of the aggregate means to one-nth of the variance of the within-aggregate observations. If that ratio is equal to 1, the observations are perfectly independent. If that ratio is greater than 1, the observations are not independent. For a simple hypothetical example, consider a case of k = 2 and n = 3 where the observations for the two aggregates are (1,7,13) and (5,11,17), respectively. The variance of the aggregate means is equal to 4; the within-aggregate variance is equal to 12 (all variances are calculated by dividing by the number of things, not one less than the number of things); one-third of the within-aggregate variance is also equal to 4; ergo I = 1 and the observations are independent. (They even look independent, don't they?) For another hypothetical example with the same dimensions, consider (1,2,3) and (4,5,6). For those observations I is equal to 81/8 or 10.125, indicating very non-independent observations. (They look independent, but they're not. It all has to do with the similarity between the aggregate observations and observations you might expect to get when you draw random samples from the same population. See Walker, 1928, regarding this matter for correlations between averages vs. correlations between individual measurements.) For the heights of the seven pairs of twins (each pair is an aggregate), with k = 7 and n = 2, I is equal to 13.60. For the weights, I is equal to 8.61. The height observations and the weight observations are therefore both non-independent, with the former "more non-independent" than the latter. (Some prior averaging is necessary, since the within-aggregate variances aren't exactly the same.) That is intuitively satisfying, since height is more hereditary than weight in general, and for twins in particular. If a more sophisticated approach is desired, non-independence of observations can be handled by the use of intraclass correlations, hierarchical linear analysis, generalized estimating equations, or analysis of mixed effects (fixed and random). Those approaches go beyond the scope of the present paper, but the interested reader is referred to the articles by Calhoun, et al. (2008) and by McCoach and Adelson (2010). 
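Returning to Glendening's index for a moment, here is a minimal Python sketch of the computation as just described, under the assumption that every variance is computed with the number of observations in the denominator and that the within-aggregate variance is pooled across aggregates; it reproduces the 81/8 = 10.125 for the second hypothetical example and, to within rounding, the 13.60 reported for the twins' heights.

def independence_index(aggregates):
    """Glendening-style index: variance of the aggregate means divided by
    one-nth of the pooled within-aggregate variance, with every variance
    computed by dividing by the number of observations (not one less)."""
    n = len(aggregates[0])                      # observations per aggregate (assumed equal)
    means = [sum(a) / n for a in aggregates]
    grand = sum(means) / len(means)
    var_means = sum((m - grand) ** 2 for m in means) / len(means)
    within_devs = [x - sum(a) / n for a in aggregates for x in a]
    var_within = sum(d ** 2 for d in within_devs) / len(within_devs)
    return var_means / (var_within / n)

print(independence_index([(1, 2, 3), (4, 5, 6)]))   # 10.125
twin_heights = [(68, 67), (65, 67), (63, 63), (66, 64), (66, 65), (62, 63), (66, 66)]
print(round(independence_index(twin_heights), 2))   # 13.61, essentially the 13.60 reported above
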
What some other methodological critics say about independence of observations One thing that bothers me is that most authors of statistics textbooks have so very little to say about the independence of observations, other than listing it as one of the assumptions that must be satisfied in a statistical analysis. Bland and Altman (1994) are particularly hard on textbook authors regarding this. (In my opinion, Walker & Lev, 1953 is the only textbook with which I am familiar that says everything right, but even they devote only a couple of pages to the topic.) Some critics have carried out extensive reviews of the research literature in their various fields and found that treating non-independent observations as though they were independent is very common. My favorite articles are by Sauerland, et al. (2003), by Bryant, et al. (2006), and by Calhoun, et al. (2008). Sauerland, et al. chastise some researchers for the way they handle (or fail to handle) fingers nested within hands nested in turn within patients who are undergoing hand surgery. Bryant, et al. are concerned about limbs nested within patients in orthopedic research. Calhoun, et al. discuss the problem of patient nested within practice in medical research in general. References Bland, J.M., & Altman, D.G. (1994). Correlation, regression, and repeated data. BMJ, 308, 896. Bryant, D., Havey, T.C., Roberts, R., & Guyatt, G. (2006). How many patients? How many limbs? Analysis of patients or limbs in the orthopaedic literature: A systematic review. Journal of Bone and Joint Surgery (American Volume), 88, 41-45. Calhoun, A.W., Guyatt, G.H., Cabana, M.D., Lu, D., Turner, D.A., Valentine, S., & Randolph, A.G. (2008). Addressing the unit of analysis in medical care studies: A systematic review. Medical Care, 46 (6), 635-643. Glendening, L. (April, 1976). The effects of correlated units of analysis: Choosing the appropriate unit. Paper presented at the annual meeting of the American Educational Research Association, San Francisco. Knapp, T.R. (1982). A case against the single-sample repeated-measures experiment. Educational Psychologist, 17, 61-65. Knapp, T.R. (1984). The unit of analysis and the independence of observations. Undergraduate Mathematics and its Applications Project (UMAP) Journal, 5, 363-388. McCoach, D.B., & Adelson, J.L. (2010). Dealing with dependence (Part I): Understanding the effects of clustered data. Gifted Child Quarterly, 54 (2), 152-155. Russak, J.E., Chen, T., Appa, Y., & Rigel, D.S. (2010). A comparison of sunburn protection of high-sun protection factor (SPF) sunscreens: SPF 85 sunscreen is significantly more protective than SPF 50. Journal of the American Academy of Dermatology, 62, 348-349. Sainani, K. (2010). The importance of accounting for correlated observations. Physical Medicine and Rehabilitation, 2, 858-861. Sauerland, S., Lefering, R., Bayer-Sandow, T., Bruser, P., & Neugebauer, E.A.M. (2003). Fingers, hands or patients? The concept of independent observations. Journal of Hand Surgery, 28, 102-105. Walker, H.M. (1928). A note on the correlation of averages. Journal of Educational Psychology, 19, 636-642. Walker, H.M., & Lev, J. (1953). Statistical inference. New York: Holt. STANDARD ERRORS Introduction What is an error? It is a difference between a "truth" and an "approximation". What is a standard error? It is a standard deviation of a sampling distribution. What is a sampling distribution? 
It is a frequency distribution of a statistic for an infinite number of samples of the same size drawn at random from the same population. How many different kinds of standard errors are there? Aye, there's the rub. Read on. The standard error of measurement The standard error of measurement is the standard deviation of a frequency distribution of an object's obtained measurements around its "true score" (what it "should have gotten"). The obtained measurements are those that were actually obtained or could have been obtained by applying a measuring instrument an infinite (or at least a very large) number of times. For example, if a person's true height (only God knows that) is 69 inches and we were to measure his(her) height a very large number of times, the obtained measurements might be something like the following: 68.25, 70.00, 69.50, 70.00, 68.75, 68.25, 69.00, 68.75, 69.75, 69.25,... The standard error of measurement provides an indication of the reliability (consistency) of the measuring instrument. The formula for the standard error of measurement is σ√(1 - ρ), where σ is the standard deviation of the obtained measurements and ρ is the reliability of the measuring instrument. The standard error of prediction (aka the standard error of estimate) The standard error of prediction is the standard deviation of a frequency distribution of measurements on a variable Y around a value of Y that has been predicted from another variable X. If Y is a "gold standard" of some sort, then the standard error of prediction provides an indication of the instrument's validity (relevance). The formula for the standard error of prediction is σy√(1 - ρxy²), where σy is the standard deviation for the Y variable and ρxy is the correlation between X and Y. The standard error of the mean The standard error of the mean is the standard deviation of a frequency distribution of sample means around a population mean for a very large number of samples all of the same size. The standard error of the mean provides an indication of the goodness of using a sample mean to estimate a population mean. The formula for calculating the standard error of the mean is σ/√n, where σ is the standard deviation of the population and n is the sample size. Since we usually don't know the standard deviation of the population, we often use the sample standard deviation to estimate it. The standard errors of other statistics Every statistic has a sampling distribution. We can talk about the standard error of a proportion (a proportion is actually a special kind of mean), the standard error of a median, the standard error of the difference between two means, the standard error of a standard deviation (how's that for a tongue twister?), etc. But the above three kinds come up most often. What can we do with them? We can estimate, or test a hypothesis about, an individual person's "true score" on an achievement test, for example. If he(she) has an obtained score of 75 and the standard error of measurement is 5, and if we can assume that obtained scores are normally distributed around true scores, we can "lay off" two standard errors to the left and two standard errors to the right of the 75 and say that we are 95% confident that his(her) true score is "covered" by the interval from 65 to 85. We can also use that interval to test the hypothesis that his(her) true score is 90. Since 90 is not in that interval, it would be rejected at the .05 level. 
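Here is a minimal Python sketch of the three formulas above, together with the true-score interval just described (the obtained score of 75 and the standard error of measurement of 5 are the hypothetical numbers from the example):

import math

def se_measurement(sd_obtained, reliability):
    return sd_obtained * math.sqrt(1 - reliability)

def se_prediction(sd_y, r_xy):
    return sd_y * math.sqrt(1 - r_xy ** 2)

def se_mean(sd_population, n):
    return sd_population / math.sqrt(n)

# the achievement-test example above: obtained score 75, standard error of measurement 5
obtained, sem = 75, 5
interval = (obtained - 2 * sem, obtained + 2 * sem)
print(interval)                                  # (65, 85): roughly 95% confidence for the true score
print(90 < interval[0] or 90 > interval[1])      # True: a hypothesized true score of 90 is rejected
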
The standard error of prediction works the same way. Lay it off a couple of times around the Y that is predicted from X, using the regression of Y on X to get the predicted Y, and carry out either interval estimation or hypothesis testing. The standard error of the mean also works the same way. Lay it off a couple of times around the sample mean and make some inference regarding the mean of the population from which the sample has been randomly drawn. So what is the problem? The principal problem is that people are always confusing standard errors with "ordinary" standard deviations, and standard errors of one kind with standard errors of another kind. Here are examples of some of the confusions: 1. When reporting summary descriptive statistics for a sample, some people report the mean plus or minus the standard error of the mean rather than the mean plus or minus the standard deviation. Wrong. The standard error of a mean is not a descriptive statistic. 2. Some people think that the concept of a standard error refers only to the mean. Also wrong. 3. Some of those same people think a standard error is a statistic. No, it is a parameter, which admittedly is usually estimated by a statistic, but that doesn't make it a statistic. 4. The worst offenders of lumping standard errors under descriptive statistics are the authors of many textbooks and the developers of statistical "packages" for computers, such as Excel, Minitab, SPSS, and SAS. For all of those, and for some other packages, if you input a set of data and ask for basic descriptive statistics you get, among the appropriate statistics, the standard error of the mean. 5. [A variation of #1] If the sample mean plus or minus the sample standard deviation is specified in a research report, readers of the report are likely to confuse that with a confidence interval around the sample mean, since confidence intervals often take the form of a ± b, where a is the statistic and b is its standard error or some multiple of its standard error. So what should we do about this? We should ask authors, reviewers, and editors of manuscripts submitted for publication in scientific journals to be more careful about their uses of the term "standard error". We should also write to officials at Excel, Minitab, SPSS, SAS, and other organizations that have statistical routines for different kinds of standard errors, and ask them to get things right. While we're at it, it would be a good idea to ask them to default to n rather than n-1 when calculating a variance or a standard deviation. (But that's a topic for another paper, which I have already written.) MEDIANS FOR ORDINAL SCALES SHOULD BE LETTERS, NOT NUMBERS Introduction In their under-appreciated article entitled "Meaningless statistics", Marcus-Roberts and Roberts (1987, page 347) gave an example of a five-point ordinal scale for which School 1 had a lower mean than School 2, but for a perfectly defensible monotonic transformation of that scale School 1 had the higher mean. The authors claimed that we shouldn't compare means that have been calculated for ordinal scales. I wholeheartedly agree. We should compare medians. The matter of the appropriateness of means, standard deviations, and Pearson r's for ordinal scales has been debated for many years, starting with S.S. Stevens' (1946) proscription. I even got myself embroiled in the controversy, twice (Knapp, 1990, 1993). 
What this paper is not about I am not concerned with the situation where the "ordinal scale" consists merely of the rank-ordering of observations, i.e., the data are ranks from 1 to n, where n is the number of things being ranked. I am concerned with ordinal ratings, not rankings. (Ratings and rankings aren't the same thing.) The purpose of the present paper In this paper I make an even stronger argument than Marcus-Roberts and Roberts made: If you have an ordinal scale, you should always report the median as one of the ordered categories, using a letter and not a number. Two examples 1. You have a five-categoried grading scale with scale points A, B, C, D, and E (the traditional scale used in many schools). You have data for a particular student who took seven courses and obtained the following grades, from lowest to highest: D,C,C,B,B,B,A (there were no E's). The median grade is the fourth lowest (which is also the fourth highest), namely B. You don't need any numbers for the categories, do you? 2. You have a five-categoried Likert-type scale with scale points a (strongly disagree), b (disagree), c (undecided), d (agree), and e (strongly agree). First dataset: You have data for a group of seven people who gave the responses a,b,b,b,c,d,e. The median is b (it's also the mode). No need for numbers. Second dataset: You have data for a different group of seven people. Their responses were a,b,c,d,d,d,d (there were no e's). The median is d. Still no need for numbers. Third dataset: You have data for a group of ten people who gave the following responses: a,a,b,b,b,c,c,c,d,d (still no e's). What is the median? I claim there is no median for this dataset; i.e., it is indeterminate. Fourth dataset: You have data for a group of ten people who gave the following responses: a,a,a,a,a,e,e,e,e,e. There is no median for that dataset either. Fifth dataset: You have the following data for a group of sixteen people who gave the following responses: a,b,b,b,b,c,c,c,c,c,c,d,d,d,d,e. That's a very pretty distribution (frequencies of 1, 4, 6, 4, and 1); it's as close to a normal distribution as you can get for sixteen observations on that five-point scale (the frequencies are the binomial coefficients for n = 4). But normality is not necessary. The median is c (a letter, not a number). What do most people do? I haven't carried out an extensive survey, but I would venture to say that for those examples most people would assign numbers to the various categories, get the data, put the obtained numerical scores in order, and pick out the one in the middle. For the letter grades they would probably assign the number 4 to an A, the number 3 to a B, the number 2 to a C, the number 1 to a D, and the number 0 to an E. The data would then be 1,2,2,3,3,3,4 for the person and the median would be 3. They might even calculate a "grade-point average" (GPA) for that student by adding up all of those numbers and dividing by 7. For the five datasets for the Likert-type scale they would do the same thing, letting strongly disagree = 1, disagree = 2, undecided = 3, agree = 4, and strongly agree = 5. The data for the third dataset would be 1,1,2,2,2,3,3,3,4,4, with a median of 2.5 (they would "split the difference" between the middle two numbers, a 2 and a 3, i.e., they would add the 2 and the 3 to get 5 and divide by 2 to get 2.5). The data for the fourth dataset would be 1,1,1,1,1,5,5,5,5,5, with a median of 3, again by adding the two middle numbers, 1 and 5, to get 6 and dividing by 2 to get 3. What's wrong with that? Lots of things. 
First of all, you don't need to convert the letters into numbers; the letters work just fine by themselves. Secondly, the numbers 1,2,3,4,and 5 for the letter grades and for the Likert-type scale points are completely arbitrary; any other set of five increasing numbers would work equally well. Finally, there is no justification for the splitting of the difference between the middle two numbers of the third dataset or the fourth dataset. You can't add numbers for such scales; there is no unit of measurement and the response categories are not equally spaced. For instance, the "difference" between a 1 and a 2 is much smaller than the "difference" between a 2 and a 3. That is, the distinction between strongly disagree and disagree is minor (both are disagreements) compared to the distinction between disagree and undecided. Furthermore, the median of 2.5 for the third dataset doesn't make sense; it's not one of the possible scale values. The median of 3 for the fourth dataset is one of the scale values, but although that is necessary it is not sufficient (you can't add and divide by 2 to get it). [I won't even begin to get into what's wrong with calculating grade-point averages. See Chansky (1964) if you care. His article contains a couple of minor errors, e.g., his insistence that scores on interval scales have to be normally distributed, but his arguments against the usual way to calculate a GPA are very sound.] But, but,... I know. People have been doing for years what Marcus-Roberts and Roberts, and I, and others, say they shouldn't. How can we compare medians with means and modes without having any numbers for the scale points? Good question. For interval and ratio scales go right ahead, but not for ordinal scales; means for ordinal scales are a no-no (modes are OK). How about computer packages such as Excel, Minitab, SPSS, and SAS? Can they spit out medians as letters rather than numbers? Excel won't calculate the median of a set of letters, but it will order them for you (using the Sort function on the Data menu), and it is a simple matter to read the sorted list and pick out the median. My understanding is the other packages can't do it (my friend Matt Hayat confirms that both SPSS and SAS insist on numbers). Not being a computer programmer I don't know why, but I'll bet that it would be no harder to sort letters (there are only 26 of them) than numbers (there are lots of them!) and perhaps even easier than however they do it to get medians now. How can I defend my claim about the median for the third and fourth datasets for the Likert-type scale example? Having an even number of observations is admittedly one of the most difficult situations to cope with in getting a median. But we are able to handle the case of multiple modes (usually by saying there is no mode) so we ought to be able to handle the case of not being able to determine a median (by saying there is no median). How about between-group comparisons? All of the previous examples were for one person on one scale (the seven grades) or for one group of persons on the same scale (the various responses for the Likert-type scale). Can we use medians to compare the responses for the group of seven people whose responses were a,b,b,b,c,d,e (median = b) with the group of seven people whose responses were a,b,c,d,d,d,d (median = d), both descriptively and inferentially? That is the 64-dollar question (to borrow a phrase from an old radio program). But let's see how we might proceed. The two medians are obviously not the same. 
The first median of b represents an over-all level of disagreement; the second median of d represents an over-all level of agreement. Should we subtract the two (d - b) to get c? No, that would be awful. Addition and subtraction are not defensible for ordinal scales, and even if they were, a resolution of c (undecided) wouldn't make any sense. If the two groups were random samples, putting a confidence interval around that difference would be even worse. Testing the significance of the "difference" between the two medians, but not by subtracting, is tempting. How might we do that? If the two groups were random samples from their respective populations, we would like to test the hypothesis that they were drawn from populations that have the same median. We don't know what that median-in-common is (call it x, which would have to be a, b, c, d, or e), but we could try to determine the probability of getting, by chance, a median of b for one random sample and a median of d for another random sample, when the median in both populations is equal to x, for all x = a, b, c, d, and e. Sound doable? Perhaps, but I'm sure it would be hard. Let me give it a whirl. If and when I run out of expertise I'll quit and leave the rest as an "exercise for the reader" (you). OK. Suppose x = a. How many ways could I get a median of b in a random sample of seven observations? Does a have to be one of the observations? Hmmm; let's start by assuming yes, there has to be at least one a. Here's a partial list of possibilities:

a,b,b,b,c,c,c
a,b,b,b,c,c,d
a,b,b,b,c,c,e
a,b,b,b,c,d,d
a,b,b,b,c,d,e (the data we actually got for the first sample)
a,b,b,b,c,e,e
a,a,b,b,c,c,c
a,a,b,b,c,c,d
a,a,b,b,c,c,e
a,a,b,b,c,d,d
...

I haven't run out of expertise yet, but I am running out of patience. Do you get the idea? But there's a real problem. How do we know that each of the possibilities is equally likely? It would intuitively seem (to me, anyhow) that a sample of observations with two a's would be more likely than a sample of observations with only one a, if the population median is a, wouldn't it? One more thing I thought it might be instructive to include a discussion of a sampling distribution for medians (a topic not to be found in most statistics books). Consider the following population distribution of the seven spectrum colors for a hypothetical situation (colors of pencils for a "lot" in a pencil factory?):

Color        Frequency
Red (R)        1
Orange (O)     6
Yellow (Y)    15
Green (G)     20
Blue (B)      15
Indigo (I)     6
Violet (V)     1

That's a nice, almost perfectly normal, distribution (the frequencies are the binomial coefficients for n = 6). The median is G. [Did your science teacher ever tell you how to remember the names of the seven colors in the spectrum? Think of the name Roy G. Biv.] Suppose we take 100 random samples of size five each from that population, sampling without replacement within sample and with replacement among samples. I did that; here's what Excel and I got for the empirical sampling distribution of the 100 medians: [Excel made me use numbers rather than letters for the medians, but that was OK; I transformed back to letters after I got the results.]

Median   Frequency
O           1
Y          25
G          51
B          22
I           1

You can see that there were more medians of G than anything else. That's reasonable because there are more G's in the population than anything else. There was only one O and only one I. There couldn't be any R's or V's; do you know why? 
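Here is a minimal Python sketch of that simulation, using the 64-pencil population described above; the exact frequencies of the 100 sample medians will of course differ from run to run.

import random
from collections import Counter

# the hypothetical pencil population: 64 pencils in 7 ordered colors
colors = ["R", "O", "Y", "G", "B", "I", "V"]
population = ["R"]*1 + ["O"]*6 + ["Y"]*15 + ["G"]*20 + ["B"]*15 + ["I"]*6 + ["V"]*1

def median_category(sample):
    """Median of an odd-sized sample of ordered categories, reported as a letter."""
    ordered = sorted(sample, key=colors.index)
    return ordered[len(ordered) // 2]

medians = []
for _ in range(100):                        # 100 samples: with replacement among samples
    sample = random.sample(population, 5)   # size 5: without replacement within a sample
    medians.append(median_category(sample))

print(Counter(medians))   # e.g., something like Counter({'G': 51, 'Y': 25, 'B': 22, 'O': 1, 'I': 1})
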
Summary In this paper I have tried, hopefully at least partially successfully, to create an argument for never assigning numbers to the categories of an ordinal scale and to always report one of the actual categories as the median for such a scale. References Chansky, N. (1964). A note on the grade point average in research. Educational and Psychological Measurement, 24, 95-99. Knapp, T.R. (1990). Treating ordinal scales as interval scales: An attempt to resolve the controversy. Nursing Research, 39 (2), 121-123. Knapp, T.R. (1993). Treating ordinal scales as ordinal scales. Nursing Research, 42 (3), 184-186. Marcus-Roberts, H.M., & Roberts, F.S. (1987). Meaningless statistics. Journal of Educational Statistics, 12, 383-394. Stevens, S.S. (1946). On the theory of scales of measurement. Science, 103, 677-680. To pool or not to pool: That is the confusion Prologue Isn't the English language strange? Consider the word "pool". I go swimming in a pool. I shoot pool at the local billiards parlor. I obtain the services of someone in the secretarial pool to type a manuscript for me. I participate in a pool to try to predict the winners of football games. I join a car pool to save on gasoline. You and I pool our resources. And now here I am talking about whether or not to pool data?! With 26 letters in our alphabet I wouldn't think we'd need to use the word "pool" in so many different ways. (The Hawaiian alphabet has only 12 letters...the five vowels and seven consonants H, K, L, M, N, P, and W; they just string lots of the same letters together to make new words.) What is the meaning of the term "pooling data"? There are several contexts in which the term "pooling data" arises. Here are most of them: 1. Pooling variances Let's start with the most familiar context for pooling data (at least to students in introductory courses in statistics), viz., the pooling of sample variances in a t test of the significance of the difference between two independent sample means. The null hypothesis to be tested is that the means of two populations are equal (the populations from which the respective samples have been randomly sampled). We almost never know what the population variances are (if we did we'd undoubtedly also know what the population means are, and there would be no need to test the hypothesis), but we often assume that they are equal, so we need to have some way of estimating from the sample data the variance that the two populations have in common. I won't bore you with the formula (you can look it up in almost any statistics textbook), but it involves, not surprisingly, the two sample variances and the two sample sizes. You should also test the "poolability" of the sample variances before doing the pooling, by using Bartlett's test or Levene's test, but almost nobody does; neither test has much power. [Note: There is another t test for which you don't assume the population variances to be equal, and there's no pooling. It's variously called the Welch-Satterthwaite test or the Behrens-Fisher test. It is the default t test in Minitab. If you want the pooled test you have to explicitly request it.] 
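For the record, here is a minimal Python sketch of the usual pooled-variance estimate and the resulting t statistic for two independent samples (the data are hypothetical):

import math, statistics

def pooled_t(sample1, sample2):
    """Two-independent-samples t test with a pooled variance estimate,
    assuming the two population variances are equal."""
    n1, n2 = len(sample1), len(sample2)
    v1, v2 = statistics.variance(sample1), statistics.variance(sample2)   # n-1 denominators
    pooled_var = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
    se = math.sqrt(pooled_var * (1 / n1 + 1 / n2))
    t = (statistics.mean(sample1) - statistics.mean(sample2)) / se
    return t, n1 + n2 - 2   # the t statistic and its degrees of freedom

print(pooled_t([23, 25, 28, 30, 31], [20, 22, 24, 25, 27]))   # hypothetical data
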
2. Pooling within-group regression slopes One of the assumptions for the appropriate use of the analysis of covariance (ANCOVA) for two independent samples is that the regression of Y (the dependent variable) on X (the covariate) is the same in the two populations that have been sampled. If a test of the significance of the difference between the two within-group slopes is "passed" (the null hypothesis of equality of slopes is not rejected), those sample slopes can be pooled together for the adjustment of the means on the dependent variable. If that test is "failed" (the null hypothesis of equality of slopes is rejected) the traditional ANCOVA is not appropriate and the Johnson-Neyman technique (Johnson & Neyman, 1936) must be used in its place. 3. Pooling raw data across two (or more) subgroups This is the kind of pooling people often do without thinking through the ramifications. For example, suppose you were interested in the relationship between height and weight for adults, and you had a random sample of 50 males and a random sample of 50 females. Should you pool the data for the two sexes and calculate one correlation coefficient, or should you get two correlation coefficients (one for the males and one for the females)? Does it matter? The answer to the first question is a resounding "no" to the pooling. The answer to the second question is a resounding "yes". Here's why. In almost every population of adults the males are both taller and heavier than the females, on the average. If you pool the data and create a scatter plot, it will be longer and skinnier than the scatter plots for the two sexes treated separately, thereby producing a spuriously high correlation between height and weight. Try it. You'll see what I mean. And read the section in David Howell's (2007) statistics textbook (page 265) regarding this problem. He provides an example of real data for a sample of 92 college students (57 males, 35 females) in which the correlation between height and weight is .60 for the males, .49 for the females, and .78 for the two sexes pooled together. 4. Pooling raw data across research sites This is the kind of pooling that goes on all the time (often unnoticed) in randomized clinical trials. The typical researcher often runs into practical difficulties in obtaining a sufficient number of subjects at a single site and "pads" the sample size by gathering data from two or more sites. In the analysis he(she) almost never tests the treatment-by-site interaction, which might "be there" and would constrain the generalizability of the findings. 5. Pooling data across time There is a subtle version of this kind of pooling and a not-so-subtle version. Researchers often want to combine data for various years or minutes or whatever, for each unit of analysis (a person, a school, a hospital, etc.), usually by averaging, in order to get a better indicator of a "typical" measurement. They (the researchers) usually explain why and how they do that, so that's the not-so-subtle version. The subtle version is less common but more dangerous. Here the mistake is occasionally made of treating the Time 2 data for the same people as though they were different people from the Time 1 people. The sample size accordingly looks to be larger than it is, and the "correlatedness" of the data at the two points in time is ignored, often resulting in a less sensitive analysis. (Compare, for example, data that should be treated using McNemar's test for correlated samples with data that are appropriately handled by the traditional chi-square test of the independence of two categorical variables.) 6. Pooling data across scale categories This is commonly known as "collapsing" and is frequently done with Likert-type scales. 
Instead of distinguishing those who say "strongly agree" from those who say "agree", the data for those two scale points are combined into one over-all "agree" designation. Likewise for "strongly disagree" and "disagree". This can result in a loss of information, so it should be used as a last resort. 7. Pooling "scores" on different variables There are two different ways that data can be pooled across variables. The first way is straightforward and easy. Suppose you were interested in the trend of average (mean) monthly temperatures for a particular year in a particular city. For some months you have temperatures in degrees Fahrenheit and for other months you have temperatures in degrees Celsius. (Why that might have happened is not relevant here.) No problem. You can convert the Celsius temperatures to Fahrenheit by the formula F = (9/5)C + 32; or you can convert the Fahrenheit temperatures to Celsius by using the formula C = (5/9)(F - 32). The second way is complicated and not easy. Suppose you were interested in determining the relationship between mathematical aptitude and mathematical achievement for the students in your particular secondary school, but some of the students had taken the Smith Aptitude Test and other students had taken the Jones Aptitude Test. The problem is to estimate what score on the Smith test is equivalent to what score on the Jones test. This problem can be at least approximately solved if there is a normative group of students who have taken both the Smith test and the Jones test, you have access to such data, and you have for each test the percentile equivalent to each raw score on each test. For each student in your school who took Smith you use this "equipercentile method" to estimate what he(she) "might have gotten" on Jones. Assign to him(her) the Jones raw score equivalent to the percentile rank that such persons obtained on Smith. Got it? Whew! 8. Pooling data from the individual level to the group level This is usually referred to as "data aggregation". Suppose you were interested in the relationship between secondary school teachers' numbers of years of experience and the mathematical achievement of their students. You can't use the individual student as the unit of analysis, because each student doesn't have a different teacher (except in certain tutoring or home-school situations). But you can, and should, pool the mathematical achievement scores across students in their respective classrooms in order to get the correlation between teacher years of experience and student mathematical achievement. 
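Here is a minimal Python sketch of that kind of aggregation, with entirely hypothetical classroom data: average the students' scores within each teacher's classroom, then correlate the classroom means with the teachers' years of experience (statistics.correlation requires Python 3.10 or later).

import statistics

# hypothetical data: each teacher's years of experience and his/her students' achievement scores
classrooms = {
    "Teacher A": (3,  [61, 67, 70, 64]),
    "Teacher B": (12, [75, 80, 72, 78, 74]),
    "Teacher C": (7,  [68, 71, 66, 73]),
    "Teacher D": (20, [82, 79, 85]),
}

experience = [years for years, scores in classrooms.values()]
class_means = [statistics.mean(scores) for years, scores in classrooms.values()]

# the correlation at the classroom (aggregated) level: one pair of values per teacher
print(statistics.correlation(experience, class_means))
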
There is a more complicated approach called a cross-sectional-sequential design, whereby random samples are taken from two or more cohorts at various time points. Here is an example (see Table 1, below) taken from an article that Chris Kovach and I wrote several years ago (Kovach & Knapp, 1989, p. 26). You get data for five different ages (60, 62, 64, 66, and 68) for a three-year study (1988, 1990, 1992). Nice, huh?

[Table 1, reproduced from Kovach and Knapp (1989), appears here.]

10. Pooling findings across similar studies

This very popular approach is technically called "meta-analysis" (the term is due to Glass, 1976), but it should be called "meta-synthesis" (some people do use that term), because it involves the combining of results, not the breaking-down of results. I facetiously refer to it occasionally as "a statistical review of related literature", because it has come to replace almost all narrative reviews in certain disciplines. I avoid it like the plague; it's much too hard to cope with the problems involved. For example, what studies (published only? published and unpublished?) do you include? How do you determine their "poolability"? What statistical analysis(es) do you employ in combining the results?

Summary

So, should you pool or not? Or, putting it somewhat differently, when should you pool and when should you not? The answer depends upon the following considerations, in approximately decreasing order of importance:

1. The research question(s). Some things are obvious. For example, if you are concerned with the question "What is the relationship between height and weight for adult females?" you wouldn't want to toss in any height & weight data for adult males. But you might want to pool the data for Black adult females with the data for White adult females, or the data for older adult females with the data for younger adult females. It would be best to test the poolability before you do so, but if your sample is a simple random sample drawn from a well-defined population of adult females you might not know or care who's Black and who's White. On the other hand, you might have to pool if you don't have an adequate number of both Blacks and Whites to warrant a separate analysis for each.

2. Sample size. Reference was made in the previous paragraph to the situation where there is an inadequate number of observations in each of two (or more) subgroups, which would usually necessitate pooling (hopefully of poolable entities).

3. Convenience, common sense, necessity. In order to carry out an independent samples t test when you assume equal population variances, you must pool. If you want to pool across subgroups, be careful; you probably don't want to do so, as the height and weight example (see above) illustrates. When collapsing Likert-type scale categories you might not have enough raw frequencies (like none?) for each scale point, which would prompt you to want to pool. For data aggregation you pool data at a lower level to produce data at a higher level. And for meta-analysis you must pool; that's what meta-analysis is all about.

A final caution

Just as "acceptance" of a null hypothesis does not mean it is necessarily true, "acceptance" in a poolability test does not mean that poolability is necessarily justified.

References

Glass, G. V. (1976). Primary, secondary, and meta-analysis of research. Educational Researcher, 5, 3-8.

Howell, D.C.
(2007). Statistical methods for psychology (6th ed.). Belmont, CA: Johnson, P. O., & Neyman, J. (1936). Tests of certain linear hypotheses and their applications to some educational problems. Statistical Research Memoirs, 1, 57-93. Kovach, C.R., & Knapp, T.R. (1989). Age, cohort, and time-period confounds in research on aging. Journal of Gerontological Nursing, 15 (3), 11-15. RATING, RANKING, OR BOTH? Suppose you wanted to make your own personal evaluations of three different flavors of ice cream: chocolate, vanilla, and strawberry. How would you go about doing that? Would you rate each of them on a scale, say from 1 to 9 (where 1 = awful and 9 = wonderful)? Or would you assign rank 1 to the flavor you like best, rank 2 to the next best, and rank 3 to the third? Or would you do both? Does it matter? What follows is a discussion of the general problem of ratings vs. rankings, when you might use one rather than the other, and when you might want to use both. Terminology and notation Rating n things on a scale from 1 to w, where w is some convenient positive integer, is sometimes called "interactive" measurement. Ranking n things from 1 to n is often referred to as "ipsative" measurement. (See Cattell, 1944 or Knapp, 1966 for explanations of those terms.) The number of people doing the rating or the ranking is usually denoted by k. Advantages and disadvantages of each Let's go back to the ice cream example, with n = 3, w = 9, and have k = 2 (A and B, where you are A?). You would like to compare A's evaluations with B's evaluations. Sound simple? Maybe; but here are some considerations to keep in mind: 1. Suppose A gives ratings of 1, 5, and 9 to chocolate, vanilla, and strawberry, respectively; and B gives ratings of 5, 5, and 5, again respectively. Do they agree? Yes and no. A's average (mean) rating is the same as B's, but A's ratings vary considerably more than B's. There is also the controversial matter of whether or not arithmetic means are even relevant for scales such as this 9-point Likert-type ordinal scale, but let's save that for another paper. (I have actually written two papers on the topic...Knapp,1990 and Knapp, 1993; but the article by Marcus-Roberts & Roberts, 1987, is by far the best, in my opinion.) 2. Suppose A gives chocolate rank 1, vanilla rank 2, and strawberry rank 3. Suppose that B does also. Do they agree? Again, yes and no. The three flavors are in exactly the same rank order, but A might like all of them a lot and was forced to discriminate among them; whereas B might not like any of them, but designated chocolate as the "least bad", with vanilla in the middle, and with strawberry the worst. 3. Reference was made above to the relevance of arithmetic means. If an analysis that is more complicated than merely comparing two means is contemplated, the situation can get quickly out of hand. For example, suppose that n = 31 (Baskin-Robbins' large number of flavors), w is still 9, but k is now 3 (you want to compare A's, B's, and C's evaluations). Having A, B, and C rate each of 31 things on a 9-point scale is doable, albeit tedious. Asking them to rank 31 things from 1 to 31 is an almost impossible task. (Where would they even start? How could they keep everything straight?) And comparing three evaluators is at least 1.5 times harder than comparing two. Matters are even worse if sampling is involved. 
Suppose that you choose a random sample of 7 of the Baskin-Robbins 31 flavors and ask a random sample of 3 students out of a class of 50 students to do the rating or ranking, with the ultimate objective of generalizing to the population of flavors for the population of students. What descriptive statistics would you use to summarize the sample data? What inferential statistics would you use? Help! A real example: Evaluating the presidents Historians are always studying the accomplishments of the people who have served as presidents of the United States, starting with George Washington in 1789 and continuing up through whoever is presently in office. [At this writing, in 2013, Barack Obama is now serving his second four-year term.] It is also a popular pastime for non-historians to make similar evaluations. Some prototypes of ratings and/or rankings of the various presidents by historical scholars are the works of the Schlesingers (1948, 1962, 1997), Lindgren (2000), Davis (2012), and Merry (2012). [The Wikipedia website cites and summarizes several others.] For the purpose of this example I have chosen the evaluations obtained by Lindgren for presidents from George Washington to Bill Clinton. Table 1 contains all of the essential information in his study. [It is also his Table 1.] For this table, n (the number of presidents) is 39, w (the number of scale points for the ratings) is 5 (HIGHLY SUPERIOR=5, ABOVE AVERAGE=4, AVERAGE=3, BELOW AVERAGE=2, WELL BELOW AVERAGE=1), and k (the number of raters) is 1 (actually averaged across the ratings provided by 78 scholars; the ratings given by each of the scholars were not provided). The most interesting feature of the table is that it provides both ratings and rankings, with double ratings arising from the original scale and the subsequent tiers of "greatness". [Those presidents were first rated on the 5-point scale, then ranked from 1 to 39, then ascribed further ratings by the author on a 6-point scale of greatness (GREAT, NEAR GREAT, ABOVE AVERAGE, AVERAGE, BELOW AVERAGE, AND FAILURE. Three presidents, Washington, Lincoln, and Franklin Roosevelt are almost always said to be in the "GREAT" category.] Some presidents, e.g., William Henry Harrison and James Garfield, were not included in Lindgren's study because they served such a short time in office. Table 1 Ranking of Presidents by Mean Score Data Source: October 2000 Survey of Scholars in History, Politics, and Law Co-Sponsors: Federalist Society & Wall Street Journal Mean Median Std. Dev. 
Great
 1   George Washington     4.92   5     0.27
 2   Abraham Lincoln       4.87   5     0.60
 3   Franklin Roosevelt    4.67   5     0.75
Near Great
 4   Thomas Jefferson      4.25   4     0.71
 5   Theodore Roosevelt    4.22   4     0.71
 6   Andrew Jackson        3.99   4     0.79
 7   Harry Truman          3.95   4     0.75
 8   Ronald Reagan         3.81   4     1.08
 9   Dwight Eisenhower     3.71   4     0.60
10   James Polk            3.70   4     0.80
11   Woodrow Wilson        3.68   4     1.09
Above Average
12   Grover Cleveland      3.36   3     0.63
13   John Adams            3.36   3     0.80
14   William McKinley      3.33   3     0.62
15   James Madison         3.29   3     0.71
16   James Monroe          3.27   3     0.60
17   Lyndon Johnson        3.21   3.5   1.04
18   John Kennedy          3.17   3     0.73
Average
19   William Taft          3.00   3     0.66
20   John Quincy Adams     2.93   3     0.76
21   George Bush           2.92   3     0.68
22   Rutherford Hayes      2.79   3     0.55
23   Martin Van Buren      2.77   3     0.61
24   William Clinton       2.77   3     1.11
25   Calvin Coolidge       2.71   3     0.97
26   Chester Arthur        2.71   3     0.56
Below Average
27   Benjamin Harrison     2.62   3     0.54
28   Gerald Ford           2.59   3     0.61
29   Herbert Hoover        2.53   3     0.87
30   Jimmy Carter          2.47   2     0.75
31   Zachary Taylor        2.40   2     0.68
32   Ulysses Grant         2.28   2     0.89
33   Richard Nixon         2.22   2     1.07
34   John Tyler            2.03   2     0.72
35   Millard Fillmore      1.91   2     0.74
Failure
36   Andrew Johnson        1.65   1     0.81
37T  Franklin Pierce       1.58   1     0.68
37T  Warren Harding        1.58   1     0.77
39   James Buchanan        1.33   1     0.62

One vs. both

From a purely practical perspective, ratings are usually easier to obtain and are often sufficient. The conversion to rankings is essentially automatic by putting the ratings in order. (See above regarding ranking large numbers of things "from scratch", without the benefit of prior ratings.) But there is always the bothersome matter of "ties". (Note the tie in Table 1 between Pierce and Harding for 37th place but, curiously, not between Van Buren and Clinton, or between Coolidge and Arthur.) Ties are equally problematic, however, when rankings are used without ratings.

Rankings are to be preferred when getting the correlation (not the difference) between two variables, e.g., A's rankings and B's rankings, whether the rankings are the only data or whether the rankings have been determined by ordering the ratings. That is because from a statistical standpoint the use of the Spearman rank correlation coefficient is almost always more defensible than the use of the Pearson product-moment correlation coefficient for ordinal data and for non-linear interval data.

It is very unusual to see both ratings and rankings used for the same raw data, as was the case in the Lindgren study. It is rather nice, however, to have both "relative" (ranking) and "absolute" (rating) information for things being evaluated.

Other recommended reading

If you're interested in finding out more about rating vs. ranking, I suggest that in addition to the already-cited sources you read the article by Alwin and Krosnick (1985) and the measurement chapter in Richard Lowry's online statistics text.

A final remark

Although ratings are almost always made on an ordinal scale with no zero point, researchers should always try to see if it would be possible to use an interval scale or a ratio scale instead. For the ice cream example, rather than ask people to rate the flavors on a 9-point scale it might be better to ask how much they'd be willing to pay for a chocolate ice cream cone, a vanilla ice cream cone, and a strawberry ice cream cone. Economists often argue for the use of such "utils" when gathering consumer preference data. [Economics is usually called the study of supply and demand. "The study of the maximization of utility, subject to budget constraints" is more indicative of what it's all about.]
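If you would like to play with ratings and rankings yourself, here is a small sketch in Python; the two sets of ratings are invented for six hypothetical flavors, the ranks are obtained simply by ordering the ratings (tied ratings get the average of the ranks they occupy), and Spearman's coefficient is then just the Pearson correlation between the two sets of ranks.

from statistics import mean

def midranks(scores):
    # Rank each score among the sorted scores; tied scores share the mean of
    # the ranks they would occupy ("mid-ranks").
    s = sorted(scores)
    return [(s.index(x) + 1 + s.index(x) + s.count(x)) / 2 for x in scores]

def pearson(x, y):
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

rater_A = [1, 5, 9, 6, 6, 2]   # hypothetical 1-to-9 ratings of six flavors by A
rater_B = [5, 5, 5, 7, 4, 3]   # hypothetical ratings of the same six flavors by B

rho = pearson(midranks(rater_A), midranks(rater_B))
print(round(rho, 3))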
References Alwin, D.F., & Krosnik, J.A. (1985). The measurement of values in surveys: A comparison of ratings and rankings. Public Opinion Quarterly, 49 (4), 535-552. Cattell, R.B. (1944). Psychological measurement: ipsative, normative, and interactive. Psychological Review, 51, 292-303. Davis, K.C. (2012). Don't know much about the American Presidents. New York: Hyperion. Knapp, T.R. (1966). Interactive versus ipsative measurement of career interest. Personnel and Guidance Journal, 44, 482-486. Knapp, T.R. (1990). Treating ordinal scales as interval scales: An attempt to resolve the controversy. Nursing Research, 39, 121-123. Knapp, T.R. (1993). Treating ordinal scales as ordinal scales. Nursing Research, 42, 184-186. Lindgren, J. (November 16, 2000). Rating the Presidents of the United States, 1789-2000. The Federalist Society and The Wall Street Journal. Lowry, R. (n.d.) Concepts & applications of inferential statistics. Accessed on January 11, 2013 at http://vassarstats.net/textbook/. Marcus-Roberts, H., & Roberts, F. (1987). Meaningless statistics. Journal of Educational Statistics, 12, 383-394. Merry, R. (2012). Where they stand. New York: Simon and Schuster. Schlesinger, A.M. (November 1,1948). Historians rate the U.S. Presidents. Life Magazine, 65-66, 68, 73-74. Schlesinger, A.M. (July, 1962). Our Presidents: A rating by 75 historians. New York Times Magazine, 12-13, 40-41, 43. Schlesinger, A.M., Jr. (1997). Rating the Presidents: Washington to Clinton. Political Science Quarterly, 11 (2), 179-190. Wikipedia (n.d.) Historical rankings of Presidents of the United States. Accessed on January 10, 2013. THREE I have always been fascinated by both words and numbers. (I don't like graphs, except for frequency distributions, scatter diagrams, and interrupted time-series designs). The word "TWO" and the number "2" come up a lot in statistics (the difference between two means, the correlation between 2 variables, etc.). I thought I'd see if it would be possible to write a paper about "THREE" and "3". [I wrote one recently about "SEVEN" and "7"---regarding Cronbach's Alpha, not the alcoholic drink.] What follows is my attempt to do so. I have tried to concentrate on ten situations where "threeness" is of interest. 1. Many years ago I wrote a paper regarding the sampling distribution of the mode for samples of size three from a Bernoulli (two-point) population. Students are always confusing sampling distributions with population distributions and sample distributions, so I chose this particular simple statistic to illustrate the concept. [Nobody wanted to publish that paper.] Here is the result for Pr(0) = p0 and Pr(1) = p1: Possible Data Mode Relative Frequency 0,0,0 0 p0 3 0,0,1 0 p0 2 p1 0,1,0 0 " 1,0,0 0 " 1,1,0 1 p1 2 p0 1,0,1 1 " 0,1,1 1 " 1,1,1 1 p1 3 Therefore, the sampling distribution is: Mode Relative Frequency 0 p0 3 + 3p0 2 p1 1 p1 3 + 3p1 2 p0 For example, if p0 = .7 and p1 = .3: Mode Relative Frequency 0 .343 + 3 (.147) = .343 + .441 = .784 1 .027 + 3 (.063) = .027 + .189 = .216 2. I wrote another paper (that somebody did want to publish) in which I gave an example of seven observations on three variables for which the correlation matrix for the data was the identity matrix of order three. Here are the data: Observation X1 X2 X3 A 1 2 6 B 2 3 1 C 3 5 5 D 4 7 2 E 5 6 7 F 6 4 3 G 7 1 4 That might be a nice example to illustrate what can happen in a regression analysis or a factor analysis where everything correlates zero with everything else. 3. 
My friend and fellow statistician Matt Hayat reminded me that there are three kinds of t tests for means: one for a single sample, one for two independent samples, and one for two dependent (correlated) samples.

4. There is something called "the rule of three" for the situation where there have been no observed events in a sample of n binomial trials and the researcher would like to estimate the rate of occurrence in the population from which the sample has been drawn. Using the traditional formula for a 95% confidence interval for a proportion won't work, because the sample proportion ps is equal to 0, 1 - ps is equal to 1, and their product is equal to 0, implying that there is no sampling error! The rule of three says that you should use [0, 3/n] as the 95% confidence interval, where n is the sample size.

5. Advocates of a "three-point assay" argue for having observations on X (the independent, predictor variable in a regression analysis) at the lowest, middle, and highest value, with one-third of them at each of those three points.

6. Some epidemiologists like to report a "three-number summary" of their data, especially for diagnostic testing: sensitivity, specificity, and prevalence.

7. And then there is the standardized third moment about the mean (the mean of the cubed deviation scores divided by the cube of the standard deviation), which is Karl Pearson's measure of the skewness of a frequency distribution, and is sometimes symbolized by √b1. [Its square, b1, is generally more useful in mathematical statistics.] Pearson's measure of the kurtosis of a frequency distribution is b2, the standardized fourth moment about the mean (the mean of the fourth powers of the deviation scores divided by the standard deviation raised to the fourth power), which for the normal distribution just happens to be equal to 3.

8. If you have a sample Pearson product-moment correlation coefficient r, and you want to estimate the population Pearson product-moment correlation coefficient ρ, the procedure involves the Fisher r-to-z transformation, putting a confidence interval around the z with a standard error of 1/√(n-3), and then transforming the endpoints of the interval back to the r scale by using the inverse z-to-r transformation. Chalk up another 3.

9. Years ago when I was studying plane geometry in high school, the common way to test our knowledge of that subject was to present us with a series of k declarative statements and ask us to indicate for each statement whether it was always true, sometimes true, or never true. [Each of those test items actually constituted a three-point Likert-type scale.]

10. Although it is traditional in a randomized controlled trial (a true experiment) to test the effect of one experimental treatment against one control treatment, it is sometimes more fruitful to test the relative effects of three treatments (one experimental treatment, one control treatment, and no treatment at all). For example, when testing a "new" method for teaching reading to first-graders against an "old" method, it might be nice to randomly assign one-third of the pupils to "new", one-third to "old", and one-third to "none". It's possible that the pupils in the third group who don't actually get taught how to read might do as well as those who do. Isn't it?

CHANGE

Introduction

Mary correctly spelled 3 words out of 6 on Monday and 5 words out of 6 on Wednesday. How should we measure the change in her performance?
Several years ago Cronbach and Furby (1970) argued that we shouldn't; i.e., we don't even need the concept of change. An extreme position? Of course, but read their article sometime and see what you think about it. Why not just subtract the 3 from the 5 and get a change of two words? That's what most people would do. Or how about subtracting the percentage equivalents, 50% from 83.3%, and get a change of 33.3%? But...might it not be better to divide the 5 by the 3 and get 1.67, i.e., a change of 67%? [Something that starts out simple can get complicated very fast.] Does the context matter? What went on between Monday and Wednesday? Was she part of a study in which some experimental treatment designed to improve spelling ability was administered? Or did she just get two days older? Would it matter if the 3 were her attitude toward spelling on Monday and the 5 were her attitude toward spelling on Wednesday, both on a five-point Likert-type scale, where 1=hate, 2=dislike, 3=no opinion, 4=like, and 5=love? Would it matter if it were only one word, e.g., antidisestablishmentarianism, and she spelled it incorrectly on Monday but spelled it correctly on Wednesday? These problems regarding change are illustrative of what now follows. A little history Interest in the concept of change and its measurement dates back at least as long ago as Davies (1900). But it wasn't until much later, with the publication of the book edited by Harris (1963), that researchers in the social sciences started to debate the advantages and the disadvantages of various ways of measuring change. Thereafter hundreds of articles were written on the topic, including many of the sources cited in this paper. "Gain scores" The above example of Mary's difference of two words is what educators and psychologists call a "gain score", with the Time 1 score subtracted from the Time 2 score. [If the difference is negative it's a loss, rather than a gain, but I've never heard the term "loss scores".] Such scores have been at the heart of one of the most heated controversies in the measurement literature. Why? 1. The two scores might not be on exactly the same scale. It is possible that her score of 3 out of 6 was on Form A of the spelling test and her score of 5 out of 6 was on Form B of the spelling test, with Form B consisting of different words, and the two forms were not perfectly comparable (equivalent, "parallel"). It might even have been desirable to use different forms on the two occasions, in order to reduce practice effect or mere "parroting back" at Time 2 of the spellings (correct or incorrect) at Time 1. 2. Mary herself and/or some other characteristics of the spelling test might have changed between Monday and Wednesday, especially if there were some sort of intervention between the two days. In order to get a "pure" measure of the change in her performance we need to assume that both of the testing conditions were the same. In a randomized experiment all bets regarding the direct relevance of classical test theory should be off if there is a pretest and a posttest to serve as indicators of a treatment effect, because the experimental treatment could affect the posttest mean AND the posttest variance AND the posttest reliability AND the correlation between pretest and posttest. 3. 
Gain scores are said by some measurement experts (e.g., O'Connor, 1972; Linn & Slinde, 1977; Humphreys, 1996) to be very unreliable, and by other measurement experts (e.g., Zimmerman & Williams, 1982; Williams & Zimmerman, 1996; Collins, 1996) to not be. Like the debate concerning the use of traditional interval-level statistics for ordinal scales, this controversy is unlikely ever to be resolved. I got myself embroiled in it many years ago (see Knapp, 1980; Williams & Zimmerman, 1984; Knapp, 1984). [I also got myself involved in the ordinal vs. interval controversy (Knapp, 1990, 1993).] The problem is that if the instrument used to measure spelling ability (Were the words dictated? Was it a multiple-choice test of the discrimination between the correct spelling and one or more incorrect spellings?) is unreliable, Mary's "true score" on both Monday and Wednesday might have been 4 (she "deserved" a 4 both times), and the 3 and the 5 were both measurement errors attributable to "chance", and the difference of two words was not a true gain at all. Some other attempts at measuring change Given that gain scores might not be the best way to measure change, there have been numerous suggestions for improving things. In the Introduction (see above) I already mentioned the possibility of dividing the second score by the first score rather than subtracting the first score from the second score. This has never caught on, for some good reasons and some not-so-good reasons. The strongest arguments against dividing instead of subtracting are: (1) it only makes sense for ratio scales (a 5 for "love" divided by a 3 for "no opinion" is bizarre, for instance); and (2) if the score in the denominator is zero, the quotient is undefined. [If you are unfamiliar with the distinctions among nominal, ordinal, interval, and ratio scales, read the classic article by Stevens (1946).] The strongest argument in favor of the use of quotients rather than differences is that the measurement error could be smaller. See, for example, the manual by Bell(1999) regarding measurement uncertainty and how the uncertainty "propagates" via subtraction and division. It is available free of charge on the internet. Other methodologists have advocated the use of "modified" change scores (raw change divided by possible change) or "residualized" change (the actual score at Time 2 minus the Time 2 score that is predicted from the Time 1 score in the regression of Time 2 score on Time 1 score). Both of these, and other variations on simple change, are beyond the scope of the present paper, but I have summarized some of their features in my reliability book (Knapp, 2014). The measurement of change in the physical sciences vs. the social sciences Some physical scientists wonder what the fuss is all about. If you're interested in John's weight of 250 pounds in January of one year and his weight of 200 pounds in January of the following year, for example, nothing other than subtracting the 250 from the 200 to get a loss of 50 pounds makes any sense, does it? Well, yes and no. You could still have the problem of scale difference (the scale in the doctor's office at Time 1 and the scale in John's home at Time 2?) and the problem of whether the raw change (the 50 pounds) is the best way to operationalize the change. Losing 50 pounds from 250 to 200 in a year is one thing, and might actually be beneficial. Losing 50 pounds from 150 to 100 in a year is something else, and might be disastrous. 
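To make those distinctions concrete, here is a tiny sketch in Python (my addition, with the numbers taken from the examples above) that computes three of the possibilities mentioned so far: the raw difference, the ratio of the Time 2 score to the Time 1 score, and the "modified" change score (raw change divided by possible change, which requires knowing the maximum possible score).

def change_measures(time1, time2, maximum=None):
    # Raw difference: the usual "gain score" (negative values are losses).
    raw = time2 - time1
    # Ratio of Time 2 to Time 1; undefined if the Time 1 score is zero.
    ratio = time2 / time1 if time1 != 0 else None
    # "Modified" change: raw change divided by the change that was possible.
    modified = raw / (maximum - time1) if maximum is not None else None
    return raw, ratio, modified

print(change_measures(3, 5, maximum=6))   # Mary: gain of 2 words, ratio 5/3, 2/3 of the possible gain
print(change_measures(250, 200))          # John: a 50-pound loss, ratio 0.8
print(change_measures(150, 100))          # the same 50-pound loss, but a ratio of only 2/3

The last two lines make the point of the weight example numerically: the raw change is identical, but the relative change is not.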
[I recently lost ten pounds from 150 to 140 and I was very concerned. (I am 5'11" tall.) I have since gained back five of those pounds, but am still not at my desired "fighting weight", so to speak.] Measuring change using ordinal scales I pointed out above that it wouldn't make sense to get the ratio of a second ordinal measure to a first ordinal measure in order to measure change from Time 1 to Time 2. It's equally wrong to take the difference, but people do it all the time. Wakita, Ueshima, & Noguchi (2012) even wrote a long article devoted to the matter of the influence of the number of scale categories on the psychological distances between the categories of a Likert-type scale. In their article concerned with the comparison of the arithmetic means of two groups using an ordinal scale, Marcus-Roberts and Roberts (1987) showed that Group I's mean could be higher than Group II's mean on the original version of an ordinal scale, but Group II's mean could be higher than Group I's mean on a perfectly defensible transformation of the scale points from the original version to another version. (They used as an example a grading scale of 1, 2, 3, 4, and 5 vs. agrading scale of 30, 40, 65, 75, and 100.) The matter of subtraction is meaningless for ordinal measurement. Measuring change using dichotomies Dichotomies such as male & female, yes & no, and right & wrong play a special role in science in general and statistics in particular. The numbers 1 and 0 are most often used to denote the two categories of a dichotomy. Variables treated that way are called "dummy" variables. For example, we might "code" male=1 and female =0 (not male); yes=1 and no=0 (not yes); and right=1 and wrong=0 (not right). As far as change is considered, the only permutations of 1 and 0 on two measuring occasions are (1,1), e.g., right both times; (1,0), e g., right at Time 1 and wrong at Time 2; (0,1), e.g., wrong at Time 1 and right at Time 2; and (0,0), e.g., wrong both times. The same permutations are also the only possibilities for a yes,no dichotomy. There are even fewer possibilities for the male, female variable, but sex change is well beyond the scope of this paper! Covariance F vs. gain score t For a pretest & posttest randomized experiment, Cronbach and Furby (1970) suggested the use of the analysis of covariance rather than a t test of the mean gain in the experimental group vs. the mean gain in the control group as one way of avoiding the concept of change. The research question becomes "What is the effect of the treatment on the posttest over and above what is predictable from the pretest?" as opposed to "What is the effect of the treatment on the change from pretest to posttest?" In our recent paper, Bill Schafer and I (Knapp & Schafer, 2009) actually provided a way to convert from one analysis to the other. Measurement error In the foregoing sections I have made occasional references to measurement error that might produce an obtained score that is different from the true score. Are measurement errors inevitable? If so, how are they best handled? In an interesting article (his presidential address to the National Council on Measurement in Education), Kane (2011) pointed out that in everyday situations such as sports results (e.g., a golfer shooting a 72 on one day and a 69 on the next day; a baseball team losing one day and winning the next day), we don't worry about measurement error. (Did the golfer deserve a 70 on both occasions? 
Did the baseball team possibly deserve to win the first time and lose the second time?). Perhaps we ought to.

What we should do

That brings me to share with you what I think we should do about measuring change:

1. Start by setting up two columns. Column A is headed Time 1 and Column B is headed Time 2. [Sounds like a Chinese menu.]

2. Enter the data of concern in the appropriate columns, with the maximum possible score (not the maximum obtained score) on both occasions at the top and the rest of the scores listed in lockstep order beneath. For Mary's spelling test scores, the 3 would go in Column A and the 5 would go in Column B. For n people who attempted to spell antidisestablishmentarianism on two occasions, all of the 1's would be entered first, followed by all of the 0's, in the respective columns.

3. Draw lines connecting the score in Column A with the corresponding score in Column B for each person. There would be only one (diagonal) line for Mary's 3 and her 5. For the n people trying to spell antidisestablishmentarianism, there would be n lines, some (perhaps all; perhaps none) horizontal, some (perhaps all; perhaps none) diagonal. If all of the lines are horizontal, there is no change for anyone. If all of the lines are diagonal and crossed, there is a lot of change going on. See Figure 1 for a hypothetical example of change from pretest to posttest for 18 people, almost all of whom changed from Time 1 to Time 2 (only one of the lines is horizontal). I am grateful to Dave Kenny for permission to reprint that diagram, which is Figure 1.7 in the book co-authored by Campbell and Kenny (1999). [A similar figure, Figure 3-11 in Stanley (1964), antedated the figure in Campbell & Kenny. He (Stanley) was interested in the relative relationship between two variables, and not in change per se. He referred to parallel lines, whether horizontal or not, as indicative of perfect correlation.]

[Figure 1, reprinted from Campbell & Kenny (1999), appears here.]
Figure 1: Some data for 18 hypothetical people.

Ties are always a problem (there are several ties in Figure 1, some at Time 1 and some at Time 2), especially when connecting a dichotomous observation (1 or 0) at Time 1 with a dichotomous observation at Time 2 and there are lots of ties. The best way to cope with this is to impose some sort of arbitrary (but not capricious) ordering of the tied observations, e.g., by I.D. number. In Figure 1, for instance, there is no particular reason for the two people tied at a score of 18 at Time 1 to have the line going to the score of 17 at Time 2 be above the line going to the score of 15 at Time 2. [It doesn't really matter in this case, because they both changed, one "losing" one point and the other "losing" two points.]

4. Either quit right there and interpret the results accordingly (Figure 1 is actually an excellent "descriptive statistic" for summarizing the change from pretest to posttest for those 18 people) or proceed to the next step.

5. Calculate an over-all measure of change. What measure? Aye, there's the rub. Intuitively it should be a function of the number of horizontal lines and the extent to which the lines cross. For ordinal and interval measurements the slant of the diagonal lines might also be of interest (with lines slanting upward indicative of "gain" and with lines slanting downward indicative of "loss"). But what function?
Let me take a stab at it, using the data in Figure 1: The percentage of horizontal lines (no change) in that figure is equal to 1 out of18, or 5.6%. [Unless your eyes are better than mine, it's a bit hard to find the horizontal line for the 15th person, who "went" from 13 to 13, but there it is.] The percentage of upward slanting lines (gains), if I've counted correctly, is equal to 6 out of 18, or 33.3%. The percentage of downward slanting lines (losses) is equal to 11 out of 18, or 61.1%. A person who cares about over-all change for this dataset, and for most such datasets, is likely to be interested in one or more of those percentages. [I love percentages (see Knapp, 2010).] Statistical inference from sample to population Up to now I've said nothing about sampling (people, items, etc.). You have to have a defensible statistic before you can determine its sampling distribution and, in turn, talk about significance tests or confidence intervals. If the statistic is a percentage, its sampling distribution (binomial) is well known, as is its approximation (normal) for large samples and for sample percentages that are not close to either 0 or 100. The formulas for testing hypotheses about population percentages and for getting confidence intervals for population percentages are usually expressed in terms of proportions rather than percentages, but the conversion from percentage to proportion is easy (drop the % sign and move the decimal point two places to the left). Caution: concentrate on only one percentage. For the Campbell and Kenny data, for instance, don't test hypotheses for all of the 5.6%, the 33.3%, and the 61.1%, since that would be redundant (they are not independent; they add to 100). If you wanted to go a little further, you could carry out McNemar's (1947) test of the statistical significance of dichotomous change, which involves setting up a 2x2 contingency table and concentrating on the frequencies in the "off-diagonal"(1,0) and (0,1) cells, where, for example, (1,0) indicates a change from yes to no, and (0,1) indicates a change from no to yes. But I wouldn't bother. Any significance test or any confidence interval assumes that the sample has been drawn at random, and you know how rare that is! Some closing remarks, and a few more references I'm with Cronbach and Furby. Forget about the various methods for measuring change that have been suggested by various people. But if you would like to find out more about what some experts say about the measurement of change, I recommend the article by Rogosa, Brandt, and Zimowski (1982), which reads very well [if you avoid some of the complicated mathematics]; and the book by Hedeker and Gibbons (2006). That book was cited in an interesting May 10, 2007 post on the Daily Kos website entitled "Statistics 101: Measuring change". Most of the research on the measurement of change has been devoted to the determination of whether or not, or to what extent, change has taken place. There are a few researchers, however, who turn the problem around by claiming in certain situations that change HAS taken place and the problem is to determine if a particular measuring instrument is "sensitive", or "responsive", or has the capacity to detect such change. If you care about that (I don't), you might want to read the letter to the editor of Physical Therapy by Fritz (1999), the response to that letter, and/or some of the articles cited in the exchange. References Bell, S. (1999). A beginner's guide to uncertainty of measurement. 
NationalPhysical Laboratory, Teddington, Middlesex, United Kingdom, TW11 0LW. Campbell, D.T., & Kenny, D.A. (1999). A primer on regression artifacts. New York: Guilford. Collins, L.M. (1996). Is reliability obsolete? A commentary on "Are simple gain scores obsolete?". Applied Psychological Measurement, 20, 289-292. Cronbach, L.J., & Furby, L. (1970). How we should measure "change"...Or should we? Psychological Bulletin, 74, 68-80. Davies, A.E. (1900). The concept of change. The Philosophical Review, 9, 502-517. Fritz, J.M. (1999). Sensitivity to change. Physical Therapy, 79, 420-422. Harris, C. W. (Ed.) (1963). Problems in measuring change. Madison, WI: University of Wisconsin Press. Hedeker, D., & Gibbons, R.D. (2006) Longitudinal data analysis. Hoboken, NJ: Wiley. Humphreys, L. (1996). Linear dependence of gain scores on their components imposes constraints on their use and interpretation: A commentary on "Are simple gain scores obsolete?". Applied Psychological Measurement, 20, 293-294. Kane, M. (2011). The errors of our ways. Journal of Educational Measurement, 48, 12-30. Knapp, T.R. (1980). The (un)reliability of change scores in counseling research. Measurement and Evaluation in Guidance, 11, 149-157. Knapp, T.R. (1984). A response to Williams and Zimmerman. Measurement and Evaluation in Guidance, 16, 183-184. Knapp, T.R. (1990). Treating ordinal scales as interval scales. Nursing Research, 39, 121-123. Knapp, T.R. (1993). Treating ordinal scales as ordinal scales. Nursing Research, 42, 184-186. Knapp, T.R. (2010). Percentages: The most useful statistics ever invented. Included in the present work (see pages 26-115). Knapp, T.R. (2014). The reliability of measuring instruments. Unpublished monograph. Available free of charge at www.tomswebpage.net. Knapp, T.R., & Schafer, W.D. (2009). From gain score t to ANCOVA F (and vice versa). Practical Assessment, Research, and Evaluation (PARE), 14 (6). Linn, R.L., & Slinde, J.A. (1977). The determination of the significance of change between pretesting and posttesting periods. Review of Educational Research,47, 121-150. Marcus-Roberts, H., & Roberts, F. (1987). Meaningless statistics. Journal of Educational Statistics, 12, 383-394. McNemar, Q. (1947). Note on the sampling error of the difference between correlated proportions or percentages. Psychometrika, 12, 153-157. O'Connor, E.F., Jr. (1972). Extending classical test theory to the measurement of change. Review of Educational Research, 42, 73-97. Rogosa, D.R., Brandt, D., & Zimowski, M. (1982). A growth curve approach to the measurement of change. Psychological Bulletin, 90, 726-748. Stanley, J.C. (1964). Measurement in today's schools (4th ed.). Englewood Cliffs, NJ: Prentice-Hall. Stevens, S.S. (1946). On the theory of scales of measurement. Science, 103, 677-680. Wakita,T., Ueshima,N., & Noguchi, H. (2012). Psychological distance between categories in the Likert Scale: Comparing different numbers of options. Educational and Psychological Measurement, 72, 533-546. Williams, R.H., & Zimmerman, D.W. (1984). A critique of Knapp's "The (un)reliability of change scores in counseling research". Measurement and Evaluation in Guidance, 16, 179-182. Williams, R.H., & Zimmerman, D.W. (1996) Are simple gain scores obsolete? Applied Psychological Measurement, 20, 59-69. Zimmerman, D.W., & Williams, R.H. (1982). Gain scores can be highly reliable. Journal of Educational Measurement, 19, 149-154. SHOULD WE GIVE UP ON CAUSALITY? 
Introduction

Researcher A randomly assigns forty members of a convenience sample of hospitalized patients to one of five different daily doses of aspirin (eight patients per dose), determines the length of hospital stay for each person, and carries out a test of the significance of the difference among the five mean stays. Researcher B has access to hospital records for a random sample of forty patients, determines the daily dose of aspirin given to, and the length of hospital stay for, each person, and calculates the correlation (Pearson product-moment) between dose of aspirin and length of stay. Researcher A's study has a stronger basis for causality ("internal validity"). Researcher B's study has a stronger basis for generalizability ("external validity"). Which of the two studies contributes more to the advancement of knowledge? Oh, do you need to see the data before you answer the question? The raw data are the same for both studies. Here they are:

ID   Dose (mg)   LOS (days)      ID   Dose (mg)   LOS (days)
 1       75          5           21      175         25
 2       75         10           22      175         25
 3       75         10           23      175         25
 4       75         10           24      175         30
 5       75         15           25      225         20
 6       75         15           26      225         25
 7       75         15           27      225         25
 8       75         20           28      225         25
 9      125         10           29      225         30
10      125         15           30      225         30
11      125         15           31      225         30
12      125         15           32      225         35
13      125         20           33      275         25
14      125         20           34      275         30
15      125         20           35      275         30
16      125         25           36      275         30
17      175         15           37      275         35
18      175         20           38      275         35
19      175         20           39      275         35
20      175         20           40      275         40

And here are the results for the two analyses (courtesy of Excel and Minitab):

SUMMARY
Groups    Count   Sum    Mean   Variance
75 mg       8     100    12.5    21.43
125 mg      8     140    17.5    21.43
175 mg      8     180    22.5    21.43
225 mg      8     220    27.5    21.43
275 mg      8     260    32.5    21.43

ANOVA
Source of Variation      SS     df     MS       F
Between Groups         2000      4    500     23.33
Within Groups           750     35     21.43
Total                  2750     39

Correlation of dose and los = 0.853
The regression equation is: los = 5.00 + 0.10 dose

Predictor   Coef   Standard error   t-ratio
Constant    5.00        1.88          2.67
dose        0.10        0.0099       10.07

s = 4.44   R-sq = 72.7%   R-sq(adj) = 72.0%

Analysis of Variance
SOURCE        DF     SS      MS
Regression     1   2000    2000.0
Error         38    750      19.7
Total         39   2750

The results are virtually identical. (For those of you familiar with "the general linear model" that is not surprising.) There is only that tricky difference in the df's associated with the fact that dose is discrete in the ANOVA (its magnitude never even enters the analysis) and continuous in the correlation and regression. But what about the assumptions? Here is the over-all frequency distribution for LOS:

Midpoint   Count
   5         1   *
  10         4   ****
  15         7   *******
  20         8   ********
  25         8   ********
  30         7   *******
  35         4   ****
  40         1   *

Looks pretty normal to me. And here is the LOS frequency distribution for each of the five treatment groups (this is relevant for homogeneity of variance in the ANOVA and for homoscedasticity in the regression):

Histogram of los, treat = 75, N = 8
Midpoint   Count
   5         1   *
  10         3   ***
  15         3   ***
  20         1   *

Histogram of los, treat = 125, N = 8
Midpoint   Count
  10         1   *
  15         3   ***
  20         3   ***
  25         1   *

Histogram of los, treat = 175, N = 8
Midpoint   Count
  15         1   *
  20         3   ***
  25         3   ***
  30         1   *

Histogram of los, treat = 225, N = 8
Midpoint   Count
  20         1   *
  25         3   ***
  30         3   ***
  35         1   *

Histogram of los, treat = 275, N = 8
Midpoint   Count
  25         1   *
  30         3   ***
  35         3   ***
  40         1   *

Those distributions are as normal as they can be for eight observations per treatment condition. (They're actually the binomial coefficients for n = 3.)

So what?

The "So what?" is that the statistical conclusion is essentially the same for the two studies; i.e., there is a strong linear association between dose and stay. The regression equation for Researcher B's study can be used to predict stay from dose quite well for the population from which his (her) sample was randomly drawn. You're only likely to be off by 5-10 days in length of stay, since the standard error of estimate, s, is 4.44.
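For readers who want to verify the equivalence for themselves, here is a short sketch in plain Python (my addition, not the Excel or Minitab runs above) that recomputes both analyses from the raw data. The between-groups sum of squares in the ANOVA and the regression sum of squares both come to 2000 here, because the five dose-group means happen to fall exactly on a straight line in dose.

from statistics import mean

los_by_dose = {75:  [5, 10, 10, 10, 15, 15, 15, 20],
               125: [10, 15, 15, 15, 20, 20, 20, 25],
               175: [15, 20, 20, 20, 25, 25, 25, 30],
               225: [20, 25, 25, 25, 30, 30, 30, 35],
               275: [25, 30, 30, 30, 35, 35, 35, 40]}

doses = [d for d, ys in los_by_dose.items() for _ in ys]
los   = [y for ys in los_by_dose.values() for y in ys]
grand = mean(los)

# One-way ANOVA sums of squares (4 and 35 degrees of freedom).
ss_between = sum(len(ys) * (mean(ys) - grand) ** 2 for ys in los_by_dose.values())
ss_within  = sum((y - mean(ys)) ** 2 for ys in los_by_dose.values() for y in ys)
f_ratio = (ss_between / 4) / (ss_within / 35)

# Simple linear regression of los on dose.
sxx = sum((d - mean(doses)) ** 2 for d in doses)
sxy = sum((d - mean(doses)) * (y - grand) for d, y in zip(doses, los))
slope = sxy / sxx
intercept = grand - slope * mean(doses)
ss_regression = slope * sxy
r = sxy / (sxx * sum((y - grand) ** 2 for y in los)) ** 0.5

print(ss_between, ss_within, round(f_ratio, 2))            # 2000.0 750.0 23.33
print(round(intercept, 2), round(slope, 2), round(r, 3))   # 5.0 0.1 0.853
print(round(ss_regression, 2))                             # 2000.0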
Why do we need the causal interpretation provided by Researcher A's study? Isn't the greater generalizability of Researcher B's study more important than whether or not the "effect" of dose on stay is causal for the non-random sample? You're probably thinking "Yeah; big deal, for this one example of artificial data." Of course the data are artificial (for illustrative purposes). Real data are never that clean, but they could be. Read on. What do other people have to say about causation, correlation, and prediction? The sources cited most often for distinctions among causation (I use the terms "causality" and "causation" interchangeably), correlation, and prediction are usually classics written by philosophers such as Mill (1884) and Popper (1959); textbook authors such as Pearl (2000); and journal articles such as Bradford Hill (1965) and Holland (1986). I would like to cite a few other lesser known people who have had something to say for or against the position I have just taken. I happily exclude those who say only that "correlation is not causation" and who let it go at that. Schield (1995): My friend Milo Schield is very big on emphasizing the matter of causation in the teaching of statistics. Although he included in his conference presentation the mantra "correlation is not causality", he carefully points out that students might mistakenly think that correlation can never be causal. He goes on to argue for the need to make other important distinctions among causality, explanation, determination, prediction, and other terms that are often confused with one another. Nice piece. Frakt (2009): In an unusual twist, Austin Frakt argues that you can have causation without correlation. (The usual minimum three criteria for a claim that X causes Y are strong correlation, temporal precedence, and non-spuriousness.) He gives an example for which the true relationship between X and Y is mediated by a third variable W, where the correlation between X and Y is equal to zero. White (2010): John Myles White decries the endless repetiton of "correlation is not causation". He argues that most of our knowledge is correlational knowledge; causal knowledge is only necessary when we want to control things; causation is a slippery concept; and correlation and causation go hand-in-hand more often than some people think. His take-home message is that it's much better to know X and Y are related than it is to know nothing at all. Anonymous (2012): Anonymous starts out his (her) two-part article with this: "The ultimate goal of social science is causal explanation. The actual goal of most academic research is to discover significant relationships between variables." Ouch! But true? He (she) contends that we can detect a statistically significant effect of X on Y but still not know why and when Y occurs. That looks like three (Schield, Frakt, and Anonymous) against two (White and me), so I lose? Perhaps. How about a compromise? In the spirit of White's distinction between correlational knowledge and causal knowledge, can we agree that we should concentrate our research efforts on two non-overlapping strategies: true experiments (randomized clinical trials) carried out on admittedly handy non-random samples, with replications wherever possible; and non-experimental correlational studies carried out on random samples, also with replications? A closing note What about the effect of smoking (firsthand, secondhand, thirdhand...whatever) on lung cancer? 
Would you believe that we might have to give up on causality there? There are problems regarding the difficulty of establishing a causal connection between the two even for firsthand smoking. You can look it up (in Spirtes, Glymour, & Scheines, 2000, pp.239-240). You might also want to read the commentary by Lyketsos and Chisolm (2009), the letter by Luchins (2009) regarding that commentary, and the reply by Lyketsos and Chisolm (2009) concerning why it is sometimes not reported that smoking was responsible for the death of a smoker who had lung cancer (whereas stress as a cause for suicide almost always is). References Anonymous (2012). Explanation and the quest for 'significant' relationships. Parts 1 and 2. Downloaded from the Rules of Reason website on the internet. Bradford Hill, A. (1965). The environment and disease: Association or causation. Proceedings of the Royal Society of Medicine, 58, 295-300. Frakt, A. (2009). Causation without correlation is possible. Downloaded from The Incidental Economist website on the internet. Holland, P.W. (1986). Statistics and causal inference. Journal of the American Statistical Association, 81 (396), 945-970. [Includes comments by D.B. Rubin, D.R. Cox, C.Glymour, and C.Granger, and a rejoinder by Holland.] Luchins, D.J. (2009). Meaningful explanations vs. scientific causality. JAMA, 302 (21), 2320. Lyketsos, C.G., & Chisolm, M.S. (2009). The trap of meaning: A public health tragedy. JAMA, 302 (4), 432-433. Lyketsos, C.G., & Chisolm, M.S. (2009). In reply. JAMA, 302 (21), 2320-2321. Mill, J. S. (1884). A system of logic, ratiocinative and Inductive. London: Longmans, Green, and Co. Pearl, J. (2000). Causality. New York: Cambridge University Press. Popper, K. (1959). The logic of scientific discovery. London: Routledge. Schield, M. (1995). Correlation, determination, and causality in introductory statistics. Conference presentation, Annual Meeting of the American Statistical Association. Spirtes, P., Glymour, C., & Scheines, R. (2000). Causation, prediction, and search. (2nd. ed.) Cambridge, MA: The MIT Press. White, J.M. (2010). Three-quarter truths: correlation is not causation. Downloaded from his website on the internet. LEARNING STATISTICS THROUGH FINITE POPULATIONS AND SAMPLING WITHOUT REPLACEMENT Introduction Just about every statistics course and just about every statistics textbook concentrates on infinite populations and sampling with replacement, with particular emphasis on the normal distribution. This paper is concerned solely with finite populations and sampling without replacement, with only a passing reference to the normal distribution. Are populations infinite or finite? Real world populations are all finite, no matter how small or how large. How do we draw samples? With replacement or without replacement? Real world samples are all drawn without replacement. It would be silly to draw some observations once and some more than once. Ergo, let's talk about finite populations and sampling from them without replacement. Two examples 1. On his website, statistician Robert W. Hayden gives the artificial example of a population of six observations from which all possible samples of size three are to be drawn. He claims that many basic statistical concepts can be learned from discussing this example. I agree. Consider one of the cases Hayden talks about: A population consisting of the observations 3,6,6,9,12, and 15. a. It has a frequency distribution. Here it is Observation frequency 3 1 6 2 9 1 12 1 15 1 b. 
It has a mean of (3+6+6+9+12+15)/6 = 51/6 = 8.50.

c. It has a median of 7.50 (if we "split the difference" between the middle two values, which is fine if the scale is interval or ratio).

d. It has a mode of 6 (there are more 6's than anything else).

e. It has a range of 15 - 3 = 12.

f. It has a variance of [(3-8.5)² + 2(6-8.5)² + (9-8.5)² + (12-8.5)² + (15-8.5)²]/6 = 97.50/6 = 16.25. Curiously, Hayden divides the sum of the squared differences from the mean by 5...one less than the number of observations, rather than the number of observations. Many people divide the sum of the squared differences of the observations in a sample from the sample mean by one less than the number of observations (for a couple of complicated reasons), but Hayden is the only one I know of who calculates a population variance the way he does.

g. It has a standard deviation of √16.25 = 4.03.

It has some other interesting summary measures, but those should suffice for now. As indicated above, Hayden considers taking all possible samples of size three from a population of six observations, without replacing an observation once it is drawn. For the 3,6,6,9,12,15 population they are:

3,6,6 (both 6's; they are for different things: people, rats, hospitals...whatever)
3,6,9 (for one of the 6's)
3,6,9 (for the other 6)
3,6,12 (for one of the 6's)
3,6,12 (for the other 6)
3,6,15 (for one of the 6's)
3,6,15 (for the other 6)
3,9,12
3,9,15
3,12,15
6,6,9 (both 6's)
6,6,12 (both 6's)
6,6,15 (both 6's)
6,9,12 (for one of the 6's)
6,9,12 (for the other 6)
6,9,15 (for one of the 6's)
6,9,15 (for the other 6)
6,12,15 (for one of the 6's)
6,12,15 (for the other 6)
9,12,15

Therefore there are 20 such samples. Suppose you would like to estimate the mean of that population by using one of those samples. The population mean (see above) is 8.50. The first sample (3,6,6) would produce a sample mean of 5 (an under-estimate). The second and third samples (3,6,9) would produce a sample mean of 6 (also an under-estimate). The fourth and fifth samples (3,6,12) would produce a sample mean of 7 (still an under-estimate). The sixth and seventh samples (3,6,15) would produce a sample mean of 8 (a slight under-estimate). The eighth sample (3,9,12) would also produce a sample mean of 8 (an under-estimate). The ninth sample (3,9,15) would produce a sample mean of 9 (an over-estimate). The tenth sample (3,12,15) would produce a sample mean of 10 (another over-estimate). The eleventh sample (6,6,9) would produce a sample mean of 7 (an under-estimate). The twelfth sample (6,6,12) would produce a sample mean of 8 (an under-estimate). The thirteenth sample (6,6,15) would produce a sample mean of 9 (an over-estimate). The fourteenth and fifteenth samples (6,9,12) would produce a sample mean of 9 (an over-estimate). The sixteenth and seventeenth samples (6,9,15) would produce a sample mean of 10 (an over-estimate). The eighteenth and nineteenth samples (6,12,15) would produce a sample mean of 11 (an over-estimate). The twentieth sample (9,12,15) would produce a sample mean of 12 (an over-estimate).

The possible sample means are 5,6,6,7,7,7,8,8,8,8,9,9,9,9,10,10,10,11,11, and 12. The frequency distribution of those means is the sampling distribution for samples of size three taken from the 3,6,6,9,12,15 population. Here it is:

Sample mean   Frequency
     5            1
     6            2
     7            3
     8            4
     9            4
    10            3
    11            2
    12            1

Ten of them are under-estimates, by various amounts; ten of them are over-estimates, also by various amounts. But the mean of those means (do you follow that?) is 8.50 (the population mean).
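If you would rather let a computer do the counting, here is a short check in Python (my addition, not Hayden's) that enumerates the same 20 samples, reproduces the sampling distribution of the mean, and confirms that the mean of the sample means is the population mean.

from itertools import combinations
from collections import Counter

population = [3, 6, 6, 9, 12, 15]
samples = list(combinations(population, 3))        # 20 samples; the two 6's count separately
sample_means = [sum(s) / 3 for s in samples]

print(len(samples))                                # 20
print(sorted(Counter(sample_means).items()))       # the sampling distribution of the mean
print(sum(sample_means) / len(sample_means))       # 8.5, the population mean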
Nice, huh? But the problem is that in real life if you have just one of those samples (the usual case) you could be lucky and come close to the population mean or you could be 'way off. That's what sampling is all about. I could say a great deal more about this example but I'm eager to move on to another example.

2. One of the most interesting (to me, anyhow) populations is the 50 states of the United States. [I have several examples of the use of this population in my Learning statistics through playing cards book (Knapp, 1996).] Here are some data for that population:

state   admrank   nhabrank   arearank
DE         1         46         49
PA         2          6         32
NJ         3          9         46
GA         4         10         21
CT         5         30         48
MA         6         13         45
MD         7         19         42
SC         8         26         40
NH         9         42         44
VA        10         12         37
NY        11          3         30
NC        12         11         29
RI        13         44         50
VT        14         49         43
KY        15         25         36
TN        16         16         34
OH        17          7         35
LA        18         22         33
IN        19         14         38
MS        20         32         31
IL        21          5         24
AL        22         23         28
ME        23         41         39
MO        24         17         18
AR        25         34         27
MI        26          8         22
FL        27          4         26
TX        28          2          2
IA        29         31         23
WI        30         18         25
CA        31          1          3
MN        32         21         14
OR        33         29         10
KS        34         33         13
WV        35         38         41
NV        36         36          7
NE        37         39         15
CO        38         24          8
ND        39         48         17
SD        40         47         16
MT        41         45          4
WA        42         15         20
ID        43         40         11
WY        44         50          9
UT        45         35         12
OK        46         28         19
NM        47         37          5
AZ        48         20          6
AK        49         49          1
HI        50         43         47

where:

1. state is the two-letter abbreviation for each of the 50 states.
2. admrank is the rank-order of their admission to the union (Delaware was first, Pennsylvania was second,...,Hawaii was fiftieth).
3. nhabrank is the rank-order of number of inhabitants, according to the 2000 census (California was first, Texas was second,...,Wyoming was fiftieth).
4. arearank is the rank-order of land area (Alaska is first, Texas is second,..., Rhode Island is fiftieth).

Of considerable interest (at least to me) is the relationship between pairs of those variables (admission to the union and number of inhabitants; admission to the union and land area; and number of inhabitants and land area). I (and I hope you) do not care about the means, variances, or standard deviations of those variables. (Hint: If you do care about such things for this example, you will find that they're the same for all three variables.) The relationships (something called Spearman's rank correlations) for the entire population are as follows (the correlation can go from -1 through 0 to +1, where the negative correlations are indicative of inverse relationships and the positive correlations are indicative of direct relationships):

+.394 for admrank and nhabrank
-.720 for admrank and arearank
-.013 for nhabrank and arearank

The relationship between the rank-order of admission to the union and the rank-order of number of inhabitants is direct but modest; the relationship between the rank-order of admission to the union and the rank-order of land area is inverse and rather strong; and the relationship between the rank-order of number of inhabitants and the rank-order of land area is essentially zero. Those all make sense, if you think about it and if you call upon your knowledge of American history!

But what happens if you take samples from this population? I won't go through all possible samples of all possible sizes, but let's see what happens if you take, say, ten samples of ten observations each. And let's choose those samples randomly. I got on the internet and used something called the Research Randomizer.
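You don't need the Research Randomizer to do this; here is a minimal sketch in Python that draws ten such samples. The seed is arbitrary, so the particular states it selects will not match the ones I actually drew.

import random

random.seed(2013)   # any seed will do; this one is arbitrary
# Each call to random.sample draws 10 of the 50 state numbers without
# replacement; the ten sets are drawn independently of one another, so the
# same state can show up in more than one set.
sets = [sorted(random.sample(range(1, 51), 10)) for _ in range(10)]
for i, numbers in enumerate(sets, start=1):
    print("Set", i, numbers)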
The numbers of the states that I drew for each of those sets of samples were as follows (sampling within sample was without replacement, but sampling between samples was with replacement; otherwise I would run out of states to sample after taking five samples!):
[The table listing the ten sets of ten drawn state numbers is omitted here; the first set is given below.]
For the first set (DE, CT, NH, OH, IL, ME, AR, IA, OR, and AZ) the sample data are (the numbers in parentheses are the ranks for the ranks; you need those because all ranks must go from 1 to the number of things being ranked, in this case 10):
state   admrank    nhabrank   arearank
DE       1 (1)      46 (10)    49 (10)
CT       5 (2)      30 (5)     48 (9)
NH       9 (3)      42 (9)     44 (8)
OH      17 (4)       7 (2)     35 (7)
IL      21 (5)       5 (1)     24 (4)
ME      23 (6)      41 (8)     39 (6)
AR      25 (7)      34 (7)     27 (5)
IA      29 (8)      31 (6)     23 (3)
OR      33 (9)      29 (4)     10 (2)
AZ      48 (10)     20 (3)      6 (1)
The rank correlations are (using the ranks of the ranks):
-.382 for admrank and nhabrank
-.939 for admrank and arearank
+.612 for nhabrank and arearank
The -.382 is a poor estimate of the corresponding population rank correlation (and is actually of opposite sign). The -.939 isn't too bad (both are negative and high). The +.612 is terrible (the population rank correlation is approximately zero). If you have easy access to a statistics package that includes the calculation of Spearman's rank correlation, why don't you take a crack at one of the other sets and see what you get. Other samples of other sizes would produce other rank correlations; and don't be surprised if the sample correlations are quite different from their population counterparts. Small samples take small "bites" out of a population of size 50.
But where are the formulas?
In the typical introductory course in statistics you are bombarded by all sorts of complicated formulas. Case in point: The formula for the population standard deviation is σ = √[Σ(X - μ)²/N]; and the formula for the standard error of the mean is σ/√n (for sampling with replacement). I could present similar formulas for finite populations (and for sampling without replacement), but I won't. They are similar to those (some are actually more complicated) and all involve the so-called "finite population correction" (fpc), about which I'll have more to say in the next section.
The difference between two percentages
I would like to close this paper with a discussion of one of the most common problems in statistics in general, viz., the estimation of, or the testing of a hypothesis about, the difference between two percentages. Some examples are: What is the difference between the percentage of men who are Chief Executive Officers (CEOs) and the percentage of women who are Chief Executive Officers? What is the difference in the percentage of smokers who get lung cancer and the percentage of non-smokers who get lung cancer? What is the difference in the recovery percentage of patients who are given a particular drug and the recovery percentage of patients who are given a placebo?
Krishnamoorthy and Thomson (2002) give the artificial but realistic quality control example of the percentages of non-acceptable cans produced by two canning machines. (They actually do everything in terms of proportions, but I prefer percentages. They are easily convertible from one to the other: multiply a proportion by 100; divide a percentage by 100.) "Non-acceptable" is defined as containing less than 95% of the purported weight on the can.
Each machine produces 250 cans. One machine is expected to have an approximate 6% non-acceptable rate and the other machine is expected to have an approximate 2% non-acceptable rate. A sample of cans is to be drawn from each machine. The authors provide tables for determining the appropriate sample size for each machine, depending upon the tolerance for Type I errors (rejecting a true hypothesis) and Type II errors (not rejecting a false hypothesis). For their specifications the appropriate sample size was 136 cans from each machine for what they called "the Z test", which was one of three tests discussed in their article and for which the normal sampling distribution is relevant. (The mathematics in their article is not for the faint of heart, so read around it.) Their formulas involved the finite population correction, which subtracts one from both the population size and the sample size, as well as subtracts the sample size from the population size. That strikes me as backwards. Shouldn't we always start with finite populations and sampling without replacement, and then make corrections for (actually extensions to) infinite populations and sampling with replacement?
References
Knapp, T.R. (1996). Learning statistics through playing cards. Thousand Oaks, CA: Sage.
Krishnamoorthy, K., & Thomson, J. (2002). Hypothesis testing about proportions in two finite populations. The American Statistician, 56 (3), 215-222.
TWO-BY-TWO TABLES
Introduction
Two-by-two tables, hereinafter referred to as 2x2 tables, are very useful devices for displaying verbal or numerical information. In what follows I will try to explain what they are and why they are so useful.
Notation and jargon
Every 2x2 table is of the form
a   b
c   d
where a, b, c, and d are pieces of information. They might be individual words or word phrases, individual numbers, or sets of numbers. The places where the information lies are called "cells". a and b constitute the first row (horizontal dimension) of the table, c and d constitute the second row; a and c constitute the first column (vertical dimension), b and d the second column; a and d form the "major (principal) diagonal", and b and c form the "minor diagonal". Surrounding the basic 2x2 format there is often auxiliary information such as headings or sums (across the top and down the side). Such things are called "marginals".
Some examples
1. My favorite example of a 2x2 table that conveys important verbal information is the table that is found in just about every statistics textbook in one form or another. Here is one version (Ho is the "null" hypothesis, i.e., the hypothesis that is directly tested):
                                        Truth
                             Ho is true        Ho is false
Decision  Reject Ho          Type I error      No error
          Do not reject Ho   No error          Type II error
In that table, a = Type I error, b = No error, c = No error, and d = Type II error. The other words, albeit essential, are headings and marginals to the table itself.
2. Matrices and determinants play an important role in mathematics in general and in advanced statistics in particular. One of the simplest matrices is the 2x2 matrix consisting of the four entities a, b, c, and d. The determinant of that matrix is the difference between the product of the two entities in the major diagonal and the product of the two entities in the minor diagonal, i.e., ad - bc.
3. Perhaps the most commonly-encountered 2x2 table is the 2x2 contingency table (also called a cross-tabulation or "cross-tab") that displays frequencies used to determine the relationship between two nominal variables.
Here's an interesting one:
                                 Sex
                           Male      Female
Political     Democrat       a         b
affiliation   Republican     c         d
Here a is the number of Male Democrats, b is the number of Female Democrats, c is the number of Male Republicans, and d is the number of Female Republicans in a population or in a sample drawn from a population. Various combinations of those numbers, e.g., the difference between a/(a+c) and b/(b+d), provide some indication of the relationship between sex and political affiliation.
4. The 2x2 table that is arguably of most concern to researchers in fields such as psychology, education, and nursing is the table that displays four sets of numbers arising from a randomized trial in which the rows are groups of subjects, e.g., right-handed people and left-handed people, the columns are experimental treatments to which they have been assigned, e.g., two ways of teaching handwriting, and interest is in the testing of the "main effect" of handedness, the "main effect" of treatment, and the handedness-by-treatment "interaction effect" on a dependent variable such as handwriting legibility.
Kinds of 2x2 tables
As suggested by the above examples, there are four different kinds of 2x2 tables. The first kind consists of words in each of the cells that convey some important information about a particular concept, such as the difference between a Type I error and a Type II error. The second kind consists of any four numbers, one in each cell, that are subjected to subsequent calculations such as evaluating a determinant. The third kind consists of a set of frequencies for investigating the relationship between variables. And the fourth kind consists of multiple numbers per cell that provide the basis for even more complicated calculations, such as those arising in the Analysis of Variance (ANOVA). Let me take them one-at-a-time.
Verbal 2x2 tables
What makes these tables useful is that many concepts involve distinctions between "sub-concepts" for which 2x2 tables are ideal in laying out those distinctions. Scientific theories are particularly concerned with exemplars of such distinctions and with procedures for testing them. Here is an example of a 2x2 table (plus marginals), downloaded from an internet article entitled "How to evaluate a 2 by 2 table", that is used as a basis for defining and determining the sensitivity and the specificity of a diagnostic testing procedure, where a = TP denotes "true positive", b = FP denotes "false positive", c = FN denotes "false negative", and d = TN denotes "true negative". [This table is actually a re-working of the table given above for summarizing the difference between a Type I error and a Type II error. Can you figure out which cells in this table correspond to which cells in the previous table?]
                   Disease present        Disease absent
Test positive            TP                     FP              Total positive
Test negative            FN                     TN              Total negative
                   Total with disease    Total without disease  Grand total
Verbal 2x2 tables are also good for summarizing research designs such as the "counter-balanced (crossover)" design, where the row headings are the two groups, the column headings are the two time points, and the symbols for the experimental (E) and control (C) conditions in the body of the table are a = E, b = C, c = C, and d = E.
Single-number 2x2 tables
These are useful for summarizing features of research designs, e.g., the number of observations in each cell for a 2x2 factorial ANOVA. They provide an indication of the orthogonality or non-orthogonality of such a design.
If the cell frequencies a, b, c, and d are not proportional, the design is not orthogonal, and the main effects and the interaction effect cannot be assessed independently of one another.
But far and away the most useful are those 2x2 "cross-tabs" for which the individual numbers a, b, c, and d are frequencies for the categories of two nominal variables. On the University of Denver Medical School website there is an article entitled "Understanding Biostatistics: The 2x2 Table". The first sentence of that article reads: "Most of biostatistics can be understood in the context of the 2x2 table." That is actually putting it mildly. Here is a real-data example and a partial list of the descriptive statistics or inferential statistics that can be calculated for such tables:
THE "LOWLY" 2 X 2 TABLE
Susan Carol Losh
Department of Educational Psychology and Learning Systems
Florida State University, Spring, 2009
How Gender Influences Answers to the Question: Does the Sun go around the Earth or does the Earth go around the Sun?
                                                      Male    Female    Total
Answer to Question:
  Sun goes around Earth (WRONG, and other
  responses except earth around sun)                   146      305      451
  Earth goes around Sun (RIGHT)                        652      715     1367
Total (at the bottom of each column are SEPARATE
totals for women and men, then a total for
everyone combined)                                     798     1020     1818
Source: NSF Surveys of Public Understanding of Science and Technology, 2006, Director, General Social Survey (NORC). n = 1818
[Dr. Losh prefers the term "gender" to the term "sex". I don't. Gender is primarily a grammatical term. In foreign languages such as German, French, and Spanish, some nouns are of the masculine gender and some nouns are of the feminine gender. In Latin there is also a neuter gender.]
---------------------------------------------------------------------------------------------------------
Here are the statistics. Not all would be appropriate for this particular example.
Proportions (or percentages), taken in all sorts of different directions
Relative risks
Odds ratios (which are often taken as approximations to relative risks)
Goodman's lambda (for measuring the relationship between the two variables)
Phi coefficient (special case of Pearson's r for dichotomies)
Contingency coefficient
Fisher's exact test
Chi-square test of the difference between two independent proportions (or percentages)
Chi-square test of the difference between two dependent (correlated) proportions (or percentages), e.g., McNemar's test
Various measures such as Cochran's Q and Somers' d
--------------------------------------------------------------------------------------------------------
Isn't it amazing that just four numbers can generate all of those?! Here is the result that would be of principal interest for the earth & sun example:
How Gender Influences Answers to the Question: Does the Sun go around the Earth or does the Earth go around the Sun?
OBSERVED PERCENTAGES                                  Male     Female    Total Sample
Answer to Question:
  Sun goes around Earth (and other responses...)
  (WRONG)                                             18.3%     29.9%     24.8%
  Earth goes around Sun (RIGHT)                       81.7      70.1      75.2
                                                     100.0%    100.0%    100.0%
Case bases                                            798      1020      1818
The usual cautions regarding the calculations of the above quantities should be followed. Some of them can result in zeroes in the denominators and thus produce indeterminate statistics.
Multiple-numbers 2x2 tables
Let me bring this paper to a close by devoting several pages to factorial designs for which an ANOVA is usually employed (if the "scores" on the dependent variable are of interval or ratio level of measurement).
But contrary to most discussions of 2x2 ANOVAs where the context is a true experiment (randomized control trial) I would like to use as a motivating example an observational study for which people have been randomly sampled but not randomly assigned. [For an observational study there is nothing to assign them to!] The principal reason for my preference is that one of the assumptions for the analysis of variance is random sampling of one or more populations. If you have random assignment but not random sampling, the appropriate analysis is a randomization (permutation) test. An ANOVA provides only an approximation to the results of such tests.
The following data were downloaded from an internet article written by R. C. Gardner. Dr. Gardner doesn't say what the context is, but let us assume that we have four random samples of five persons each, cross-classified by political affiliation (the row variable) and sex (the column variable), and the dependent variable is score obtained on a measure of attitudes toward the federal government (higher score indicates more favorable attitudes). A "1" for IV1 (political affiliation) is a Democrat; a "2" for IV1 is a Republican; a "1" for IV2 (sex) is a male, and a "2" for IV2 is a female. [The raw data listing (ID#, IV1, IV2, DV) for the 20 cases is omitted here.]
Those data produce the following 2x2 table:
                                Sex
                           Male    Female
              Democrat      11       16
                            14       12
                            15       17
                            13       18
                            17       18
Political
affiliation
              Republican    17       13
                            16       10
                            15       16
                            15       14
                            17       12
Here are the cell means (a single-number 2x2 table):
14.0    16.2
16.0    13.0
Since there is an equal number of observations (5) in each cell, the row and column means can be easily determined as:
First row (Democrats): 15.1
Second row (Republicans): 14.5
First column (Males): 15.0
Second column (Females): 14.6
Therefore, "on the average", the Democrats had more favorable attitudes toward the federal government than the Republicans; and the males had more favorable attitudes toward the federal government than the females. But are those "main effects" statistically significant? And how about the "interaction effect"? Female Democrats had more favorable attitudes than male Democrats, yet female Republicans had less favorable attitudes than male Republicans. Looking at it in another way, male Republicans had more favorable attitudes than male Democrats [these are obviously fictitious data!], yet female Republicans had less favorable attitudes than female Democrats. Is the interaction effect statistically significant?
The answers to all of those questions are provided by the ANOVA summary table, which is:
Source of variation      SS        df      MS       F-ratio    p-value
Political Affiliation     1.800     1       1.800     .419       .527
Sex                        .800     1        .800     .186       .672
Interaction              33.800     1      33.800    7.860       .013
Within subjects          68.800    16       4.300
Total                   105.200    19
If a level of significance of .05 had been chosen a priori, neither main effect would be statistically significant, but the interaction effect would be. Interpretation: The effect of political affiliation depends upon what one's sex is. Or putting it another way, the effect of sex depends upon what one's political affiliation is.
Cautions: The interaction effect might or might not be causal. Our interpretation might be wrong. We did not reject the hypothesis of no main effect of political affiliation. It might be false, and we've made a Type II error. We also did not reject the hypothesis of no main effect of sex. It also might be false, and we've made another Type II error. We did reject the no-interaction hypothesis. It might be true, and we've made a Type I error. Gets complicated, doesn't it?
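For readers who want to check the arithmetic, here is a minimal sketch in Python (my own, not Gardner's) that reproduces the sums of squares and F-ratios in the table above from the four cells of five scores each; the p-values would require the F distribution (e.g., from scipy) and are omitted.

# A minimal sketch that reproduces the 2x2 ANOVA summary table above.
from statistics import mean

cells = {
    ("Democrat", "Male"):     [11, 14, 15, 13, 17],
    ("Democrat", "Female"):   [16, 12, 17, 18, 18],
    ("Republican", "Male"):   [17, 16, 15, 15, 17],
    ("Republican", "Female"): [13, 10, 16, 14, 12],
}
n_per_cell = 5

grand = mean(x for scores in cells.values() for x in scores)                     # 14.8
row_means = {r: mean(cells[(r, "Male")] + cells[(r, "Female")])
             for r in ("Democrat", "Republican")}                                # 15.1, 14.5
col_means = {c: mean(cells[("Democrat", c)] + cells[("Republican", c)])
             for c in ("Male", "Female")}                                        # 15.0, 14.6

ss_rows  = sum(2 * n_per_cell * (m - grand) ** 2 for m in row_means.values())    # 1.8
ss_cols  = sum(2 * n_per_cell * (m - grand) ** 2 for m in col_means.values())    # 0.8
ss_cells = sum(n_per_cell * (mean(s) - grand) ** 2 for s in cells.values())
ss_inter = ss_cells - ss_rows - ss_cols                                          # 33.8
ss_within = sum((x - mean(s)) ** 2 for s in cells.values() for x in s)           # 68.8

ms_within = ss_within / 16                                                       # 4.3
for label, ss in (("Political Affiliation", ss_rows), ("Sex", ss_cols),
                  ("Interaction", ss_inter)):
    print(label, round(ss, 3), round(ss / ms_within, 3))                         # F-ratios .419, .186, 7.86

The within-cells degrees of freedom are 4 per cell times 4 cells = 16, which is where the divisor for the within mean square comes from.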
Epilogue
When I was in the army many years ago right after the end of the Korean War, I had a fellow soldier friend who claimed to be a "polytheistic atheist". He claimed that there were lots of gods and he didn't believe in any of them. But he worried that he might be wrong. His dilemma can be summarized by the following 2x2 table:
                               Truth
                  There is at least one god    There are no gods
Belief  God(s)           No error                   Error
        No god(s)        Error                      No error
I think that says it all.
Validity? Reliability? Different terminology altogether?
Several years ago I wrote an article entitled Validity, reliability, and neither (Knapp, 1985) in which I discussed some researchers' identifications of investigations as validity studies or reliability studies but which were actually neither. In what follows I pursue the matter of confusion regarding the terms validity and reliability and suggest the possibility of alternative terms for referring to the characteristics of measuring instruments. I am not the first person to recommend this. As long ago as 1936, Goodenough suggested that the term reliability be done away with entirely. Concerns about both reliability and validity have been expressed by Stallings & Gillmore (1971), Feinstein (1985, 1987), Suen (1988), Brown (1989), and many others.
The problems
The principal problem, as expressed so succinctly by Ennis (1999), is that the word reliability as used in ordinary parlance is what measurement experts subsume under validity. (See also Feldt & Brennan, 1989.) For example, if a custodian falls asleep on the job every night, most laypeople would say that he(she) is unreliable, i.e., a poor custodian; whereas psychometricians would say that he(she) is perfectly reliable, i.e., a consistent poor custodian.
But there's more. Even within the measurement community there are all kinds of disagreements regarding the meaning of validity. For example, some contend that the consequences of misuses of a measuring instrument should be taken into account when evaluating its validity; others disagree. (Pro: Messick, 1995, and others; Anti: Lees-Haley, 1996, and others.) And there is the associated problem of the awful (in my opinion) terms internal validity and external validity that have little or nothing to do with the concept of validity in the measurement sense, since they apply to the characteristics of a study or its design and not to the properties of the instrument(s) used in the study. [Internal validity is synonymous with causality and external validity is synonymous with generalizability. 'nuff said.]
The situation is even worse with respect to reliability. In addition to matters such as the (un?)reliable custodian, there are the competing definitions of the term reliability within the field of statistics in general (a sample statistic is reliable if it has a tight sampling distribution with respect to its counterpart population parameter) and within engineering (a piece of equipment is reliable if there is a small probability of its breaking down while in use). Some people have even talked about the reliability of a study. For example, an article I recently came across on the internet claimed that a study of the reliability (in the engineering sense) of various laptop computers was unreliable, and so was its report!
Some changes in, or retentions of, terminology and the reasons for same
There have been many thoughtful and some not so thoughtful recommendations regarding change in terminology. Here are a few of the thoughtful ones:
1. I've already mentioned Goodenough (1936).
She was bothered by the fact that the test-retest reliability of examinations (same form or parallel forms) administered a day or two apart is almost always lower than the split-halves reliability of those forms when stepped up by the Spearman-Brown formula, despite the fact that both approaches are concerned with estimating the reliability of the instruments. She suggested that the use of the term reliability be relegated to "the limbo of outworn concepts" (p. 107) and that results of psychometric investigations be expressed in terms of whatever procedures were used in estimating the properties of the instruments in question.
2. Adams (1936). In that same year he tried to sort out the distinctions among the usages of the terms validity, reliability, and objectivity in the measurement literature of the time. [Objectivity is usually regarded as a special kind of reliability: inter-rater reliability if more than one person is making the judgments; intra-rater reliability for a single judge.] He found the situation to be chaotic and argued that validity, reliability, and objectivity are qualities of measuring instruments (which he called scales). He suggested that accuracy should be added as a term to refer to the quantitative aspects of test scores.
3. Thorndike (1951), Stanley (1971), Feldt and Brennan (1989), and Haertel (2006). They are the authors of the chapter on reliability in the various editions of the Educational Measurement compendium. Although they all commented upon various terminological problems, they were apparently content to keep the term reliability as is [judging from the retention of the single word Reliability in the chapter title in each of the four editions of the book].
4. Cureton (1951), Cronbach (1971), Messick (1989), and Kane (2006). They were the authors of the corresponding chapters on validity in Educational Measurement. They too were concerned about some of the terminological confusion regarding validity [and the chapter titles went from Validity to Test Validation back to Validity and thence to Validation, in that chronological order], but the emphasis changed from various types of validity in the first two editions to an amalgam under the heading of Construct Validity in the last two.
5. Ennis (1999). I've already referred to his clear perception of the principal problem with the term reliability. He suggested the replacement of reliability with consistency. He was also concerned about the terms true score and error of measurement. [More about those later.]
6. AERA and APA Standards (in press). Although the final wordings of the sections on validity and reliability in the latest version of the Standards had not yet been determined as of the time of my writing of this paper, the titles of the two sections will apparently be "Validity" and "Errors of Measurement and Reliability/Precision", respectively. [I submitted a review of the draft versions of those sections.] Like the authors of the chapters in the various editions of Educational Measurement, the authors of the sections on validity express some concerns about confusions in terminology, but they appear to want to stick with validity, whereas the authors of the section on reliability prefer to expand the term reliability. [In the previous (1999) version of the Standards the title was "Reliability and Errors of Measurement".]
Some incorrect uses of the terms reliability and/or validity in research methods textbooks or articles
Adams (1936) cited several examples of misuses of reliability and validity in measurement articles and textbooks that were popular at the time of his writing. Here are a few recent examples. (Some are relatively minor matters; others are more serious offences.)
My personal recommendations
1. I prefer relevance to validity, especially given my opposition to the terms internal validity and external validity. I realize that relevance is a word that is over-used in the English language, but what could be a better measuring instrument than one that is completely relevant to the purpose at hand? Examples: a road test for measuring the ability to drive a car; a stadiometer for measuring height; and a test of arithmetic items all of the form a + b = ___ for measuring the ability to add.
2. I'm with Ennis (1999) regarding changing reliability to consistency, even though in my unpublished book on the reliability of measuring instruments (Knapp, 2014) I came down in favor of keeping it reliability. [Ennis had nothing to say one way or the other about changing validity to something else.]
3. I don't like to lump techniques such as Cronbach's alpha under either reliability or consistency. For those I prefer the term homogeneity, as did Kelley (1942); see Traub (1997). I suggest that time must pass (even if just a few minutes; see Horst, 1954) between the measure and the re-measure.
4. I also don't like to subsume objectivity under reliability (either inter-rater or intra-rater). Keep it as objectivity.
5. Two terms I recommend for Goodenough's limbo are accuracy and precision, at least as far as measurement is concerned. The former term is too ambiguous. [How can you ever determine whether or not something is accurate?] The latter term should be confined to the width of a confidence interval and the number of digits that are defensible to report when making a measurement.
True score and error of measurement
As I indicated above, Ennis (1999) doesn't like the terms true score and error of measurement. Both terms are used in the context of reliability. The former refers to (1) the score that would be obtained if there were no unreliability; and (2) the average (arithmetic mean) of all of the possible obtained scores for an individual. The latter is the difference between an obtained score and the corresponding true score. What bothers Ennis is that the term true score would seem to indicate the score that was actually deserved in a perfectly valid test, whereas the term is associated only with reliability.
I don't mind keeping both true score and error of measurement under consistency, as long as there is no implication that the measuring instrument is also necessarily relevant. The instrument chosen to provide an operationalization of a particular attribute such as height or the ability to add or to drive a car might be a lousy one (that's primarily a judgment call), but it always needs to produce a tight distribution of errors of measurement for any given individual.
References
Adams, H.F. (1936). Validity, reliability, and objectivity. Psychological Monographs, 47, 329-350.
American Educational Research Association (AERA), American Psychological Association (APA), and National Council on Measurement in Education (NCME). (1999). Standards for educational and psychological testing. Washington, DC: American Educational Research Association.
American Educational Research Association (AERA), American Psychological Association (APA), and National Council on Measurement in Education (NCME). (in press). Standards for educational and psychological testing. Washington, DC: American Educational Research Association.
Brown, G.W. (1989). Praise for useful words. American Journal of Diseases of Children, 143, 770.
Cronbach, L.J. (1971). Test validation. In R.L. Thorndike (Ed.), Educational measurement (2nd ed., pp. 443-507). Washington, DC: American Council on Education.
Cureton, E.E. (1951). Validity. In E.F. Lindquist (Ed.), Educational measurement (1st ed., pp. 621-694). Washington, DC: American Council on Education.
Ennis, R.H. (1999). Test reliability: A practical exemplification of ordinary language philosophy. Yearbook of the Philosophy of Education Society.
Feinstein, A.R. (1985). Clinical epidemiology: The architecture of clinical research. Philadelphia: Saunders.
Feinstein, A.R. (1987). Clinimetrics. New Haven, CT: Yale University Press.
Feldt, L.S., & Brennan, R.L. (1989). Reliability. In R.L. Linn (Ed.), Educational measurement (3rd ed., pp. 105-146). New York: Macmillan.
Goodenough, F.L. (1936). A critical note on the use of the term "reliability" in mental measurement. Journal of Educational Psychology, 27, 173-178.
Haertel, E.H. (2006). Reliability. In R.L. Brennan (Ed.), Educational measurement (4th ed., pp. 65-110). Westport, CT: American Council on Education/Praeger.
Horst, P. (1954). The estimation of immediate retest reliability. Educational and Psychological Measurement, 14, 705-708.
Kane, M.T. (2006). Validation. In R.L. Brennan (Ed.), Educational measurement (4th ed., pp. 17-64). Westport, CT: American Council on Education/Praeger.
Kelley, T.L. (1942). The reliability coefficient. Psychometrika, 7, 75-83.
Knapp, T.R. (1985). Validity, reliability, and neither. Nursing Research, 34, 189-192.
Lees-Haley, P.R. (1996). Alice in validityland, or the dangerous consequences of consequential validity. American Psychologist, 51 (9), 981-983.
Messick, S. (1989). Validity. In R.L. Linn (Ed.), Educational measurement (3rd ed., pp. 13-103). Washington, DC: American Council on Education.
Messick, S. (1995). Validation of inferences from persons' responses and performances as scientific inquiry into score meaning. American Psychologist, 50 (9), 741-749.
Stallings, W.M., & Gillmore, G.M. (1971). A note on accuracy and precision. Journal of Educational Measurement, 8, 127-129.
Stanley, J.C. (1971). Reliability. In R.L. Thorndike (Ed.), Educational measurement (2nd ed., pp. 356-442). Washington, DC: American Council on Education.
Suen, H.K. (1988). Agreement, reliability, accuracy, and validity: Toward a clarification. Behavioral Assessment, 10, 343-366.
Thorndike, R.L. (1951). Reliability. In E.F. Lindquist (Ed.), Educational measurement (1st ed., pp. 560-620). Washington, DC: American Council on Education.
Traub, R.E. (1997). Classical test theory in historical perspective. Educational Measurement: Issues and Practice, 16 (4), 8-14.
p, n, AND t: Ten things you need to know
Introduction
You want to test a hypothesis or construct a confidence interval for a proportion, or for the difference between, or for the ratio of, two proportions. Should you use n or n-1 in the denominators for the formulas for the appropriate standard errors? Should you use the t sampling distribution or the normal sampling distribution? Answers to these and associated questions will be provided in what is to follow.
An example
In the guide that accompanies the StatPac Statistics Calculator, Walonick (1996-2010) gives an example of the proportions (he uses percentages, but that doesn't matter) of people who have expressed their plans to vote for Candidate A or Candidate B for a particular public office. The sample size was 107; the two proportions were .355 for Candidate A and .224 for Candidate B. (The other people in the sample planned to vote for other candidates.) How should various statistical inferences for this example be handled?
1. Single sample p and n
Let p be the sample proportion, e.g., the .355 for Candidate A, for a sample size n of 107. If you want to test a hypothesis about the corresponding population proportion π, you should use the binomial sampling distribution to do so. But since tables and computer routines for the binomial sampling distribution for (relatively) large sample sizes such as 107 are not readily available, most people choose to use approximations to the binomial. It is well known that for large samples p is normally distributed around π with standard error equal to the square root of π(1-π)/n, just as long as π is not too close to 0 or to 1. Some people use n-1 in the denominator rather than n, and the t sampling distribution rather than the normal sampling distribution. They're wrong. (See, for example, Goodall, 1995.) The situation is similar for a confidence interval for π, but since π is unknown the sample proportion p must be used in its stead. Again, n and normal; there's no n-1 and no t.
2. The difference between two independent p's for their two n's
Let me change the example for the purpose of this section by considering the .355 for Candidate A in Survey #1 conducted by one organization vs. a proportion of .298 (I just made that up) for Candidate A in Survey #2 conducted by a different organization, so that those p's can be considered to be independent. For the usual null hypothesis of no difference between the two corresponding π's, the difference between p1 and p2 is approximately normally distributed around 0 with a standard error that is a function of the two p's and their respective n's. Again, no n-1's and no t. Likewise for getting a confidence interval for the difference between the two π's.
3. The difference between two non-independent p's and their common n
Once again modifying the original example, consider the p of .355 for Candidate A at Time 1 vs. a p of .298 for Candidate A at Time 2 for the same people. This is a case for the use of McNemar's test (McNemar, 1947). The chi-square sampling distribution is most commonly employed for either testing a hypothesis about the difference between the corresponding π's or constructing a confidence interval for that difference, but there is an equivalent normal sampling distribution procedure. Both use n and there's no t.
4. The ratio of two independent p's
This doesn't usually come up in research in the social sciences, but it is very common in epidemiological research in the analysis of relative risks and odds ratios. As you might expect, things get very messy, mainly because ratios almost always have more complicated sampling distributions than differences have. If you want to test the ratio of the p of .355 for Survey #1 to the p of .298 for Survey #2 (see above) against 1 or construct a confidence interval for the ratio of the two corresponding π's, see the compendium by Fleiss, Levin, and Paik (2003) for all of the gory details. You will find that there are no n-1's and no t's.
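As an illustration of comment 1 above, here is a minimal Python sketch of the normal approximation for a single proportion, using Walonick's .355 and n = 107; the hypothesized value of .50 is mine, chosen only for illustration.

# A minimal sketch of the single-sample normal approximation described in
# comment 1: n (not n-1) in the standard error, and z (not t) as the
# reference distribution.
from math import sqrt

p, n = 0.355, 107          # Walonick's sample proportion and sample size
z_crit = 1.96              # two-sided 95%

se_ci = sqrt(p * (1 - p) / n)                       # SE for the confidence interval, using p
ci = (p - z_crit * se_ci, p + z_crit * se_ci)
print(round(se_ci, 4), [round(x, 3) for x in ci])   # about 0.0463 and (0.264, 0.446)

pi0 = 0.50                                          # hypothesized pi (illustrative only)
z = (p - pi0) / sqrt(pi0 * (1 - pi0) / n)           # SE under H0 uses pi0
print(round(z, 2))                                  # about -3.0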
5. The difference between two p's for the same scale
I've saved the inference for the original Walonick example for last, because it is the most controversial. Let us consider the significance test only, since that was the inference in which Walonick was interested. In order to test the significance of the difference between the .355 for Candidate A and the .224 for Candidate B, you need to use the ratio of the difference (.355 - .224 = .131) to the standard error of that difference. The formula for the approximate standard error (see Kish, 1965; Scott & Seber, 1983; and Franklin, 2007) is the square root of the expression [(p1 + p2) - (p1 - p2)²]/n, where n = 107 for this example. The relevant sampling distribution is normal, not t.
Why is this controversial? First of all, it doesn't make sense to some people (especially me). It's like testing the significance of the difference between the proportion of people who respond "strongly agree" and the proportion of people who respond "agree" to a question on an opinion poll. Or testing the significance of the difference between the proportion of people who are 5'7" tall and the proportion of people who are 5'10" tall. The frequency distribution for the various scale points should be sufficient. Does anybody really care if the difference between the proportions for any two of them is statistically significant? And what significance level should be chosen? Are both Type I errors and Type II errors relevant?
Secondly, those two proportions in the Walonick example are bothersomely [is there such a word?] non-independent, especially if both are very close to .5. Apparently the multinomial sampling theory takes care of that, but I'm still skeptical.
Thirdly, if it makes sense to you to carry out the test, be sure to use the correct standard error (indicated above). Most people don't, according to Franklin. I'm afraid that Walonick used the wrong formula. He also used t. I can't get his ratio of 1.808 to come out no matter what I use for the standard error, whether for n or n-1, or for "pooled" p's or "unpooled" p's.
In the remainder of this paper I would like to close with five additional comments (6 through 10) regarding p, n, and t.
6. It perhaps goes without saying, but I'd better say it anyhow: The p I'm using here is a sample proportion, not a "p-value" for significance testing. And the π is a population proportion, not the ratio of the circumference of a circle to its diameter.
7. I have a "thing" about the over-use of n-1 rather than n. The authors of many statistics textbooks first define the sample variance and the sample standard deviation with n-1 in the denominator, usually because they want their readers to get used to that when carrying out a t test or an ANOVA. But a variance should be an average (an arithmetic mean), and nobody gets an average by dividing by one less than the number of entities that contribute to it. And some of those same authors make the mistake of claiming that the standard deviation with n-1 in the denominator provides an unbiased estimate of the population standard deviation. That's true for the variance but not for the standard deviation. For more on this see my N vs. N-1 article (Knapp, 1970).
8. I also have a thing about people appealing to the use of the t sampling distribution rather than the normal sampling distribution for small samples. It is the absence of knowledge of the population variance, not the size of the sample, that warrants the use of t rather than normal.
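Here is a minimal Python sketch of the standard error given in comment 5 above for the difference between two proportions from the same scale; the proportions and sample size in it are made up for illustration, not Walonick's.

# A minimal sketch of the same-survey standard error discussed in comment 5:
# SE = square root of [(p1 + p2) - (p1 - p2)^2] / n, with z (not t) as the
# reference distribution.  The numbers below are hypothetical.
from math import sqrt

p1, p2, n = 0.40, 0.25, 150        # made-up proportions for two scale points

se = sqrt((p1 + p2 - (p1 - p2) ** 2) / n)
z = (p1 - p2) / se
print(round(se, 4), round(z, 2))   # about 0.0647 and 2.32

Whether such a test should be carried out at all is, of course, the question raised in comment 5.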
9. I favor explicit inferences to finite populations rather than inferences for finite populations that use traditional infinite population procedures with a finite population correction involving n (the sample size) and N (the population size). I realize that my preference gets me into all sorts of difficult formulas, but I guess I'm willing to pay that price. All real populations that are of interest in scientific research are finite, no matter how large or how small.
10. I prefer percentages to proportions (see Knapp, 2010), even though I've concentrated on proportions here. It is admittedly awkward to talk about, for example, a 95 percent confidence interval for a population percent (I use percentage and percent interchangeably), but percentages are much more understandable to students, particularly those who use the word proportion in contexts such as "a is in the same proportion to b as c is to d".
References
Fleiss, J.L., Levin, B., & Paik, M.C. (2003). Statistical methods for rates and proportions (3rd ed.). New York: Wiley.
Franklin, C.H. (2007). The margin of error for differences in polls. Unpublished document, Political Science department, University of Wisconsin, Madison, WI.
Goodall, G. (1995). Don't get t out of proportion! Teaching Statistics, 17 (2), 50-51.
Kish, L. (1965). Survey sampling. New York: Wiley.
Knapp, T.R. (1970). N vs. N-1. American Educational Research Journal, 7, 625-626.
Knapp, T.R. (2010). Percentages: The most useful statistics ever invented. Included in the present work (see pages 26-115).
McNemar, Q. (1947). Note on the sampling error of the difference between correlated proportions or percentages. Psychometrika, 12, 153-157.
Scott, A.J., & Seber, G.A.F. (1983). Difference of proportions from the same survey. The American Statistician, 37, 319-320.
Walonick, D.S. (1996-2010). Statistics calculator. StatPac Inc.
Assessing the Validity and Reliability of Likert scales and Visual Analog Scales
Introduction
Consider the following scales for measuring pain:
It hurts:
Strongly disagree   Disagree   Can't tell   Agree   Strongly agree
       (1)             (2)        (3)         (4)          (5)
How bad is the pain?:
______________________________________
no pain                                excruciating
How much would you be willing to pay in order to alleviate the pain? ______
The first two examples, or slight variations thereof, are used a lot in research on pain. The third is not. In what follows I would like to discuss how one might go about assessing (testing, determining) the validity and the reliability of measuring instruments of the first kind (a traditional Likert Scale [LS]) and measuring instruments of the second kind (a traditional Visual Analog Scale [VAS]) for measuring the presence or severity of pain and for measuring some other constructs. I will close the paper with a few brief remarks regarding the third example and how its validity and reliability might be assessed.
The sequence of steps
1. Although you might not agree, I think you should start out by addressing content validity (expert judgment, if you will) as you contemplate how you would like to measure pain (or attitude toward legalizing marijuana, or whatever the construct of interest might be). If a Likert-type scale seems to make sense to you, do the pain experts also think so? If they do, how many scale points should you have? Five, as in the above example, and as was the case for the original scale developed by Rensis Likert (1932)? Why an odd number such as five? In order to provide a "neutral", or "no opinion" choice?
Might not too many respondents cop out by selecting that choice? Shouldn't you have an even number of scale points (how about just two?) so that respondents have to take a stand one way or the other? The same sorts of considerations hold for the "more continuous" VAS, originally developed by Freyd (1923). (He called it a Graphic Rating Scale. Unlike Likert, his name was not attached to it by subsequent users. Sad.) How long should it be? (100 millimeters is conventional.) How should the endpoints read? Should there be intermediate descriptors underneath the scale between the two endpoints? Should it be presented to the respondents horizontally (as above) or vertically? Why might that matter? 2. After you are reasonably satisfied with your choice of scale type (LS or VAS) and its specific properties, you should carry out some sort of pilot study in which you gather evidence regarding feasibility (how willing and capable are subjects to respond?), "face" validity (does it appear to them to be measuring pain, attitude toward marijuana, or whatever?), and tentative reliability (administer it twice to the same sample of people, with a small amount of time in-between administrations, say 5 minutes or thereabouts). This step is crucial in order to "get the bugs out" of the instrument before its further use. But the actual results, e.g., whether the pilot subjects express high pain or low pain, favorable attitudes or unfavorable attitudes, etc., should be of little or no interest, and certainly do not warrant publication. 3. If and when any revisions are made on the basis of the pilot study, the next step is the most difficult. It entails getting hard data regarding the reliability and/or the validity of the LS or the VAS. For a random sample drawn from the same population from which a sample will be drawn in the main study, a formal test-retest assessment should be carried out (again with a short interval between test and retest), and if there exists an instrument that serves as a "gold standard" it should also be administered and the results compared with the scale that is under consideration. Likert Scales As far as the reliability of a LS is concerned, you might be interested in evidence for either or both of the scale's "relative reliability" and its "absolute reliability". The former is more conventional; just get the correlation between score at Time 1 and score at Time 2. Ah, but what particular correlation? The Pearson product-moment correlation coefficient? Probably not; it is appropriate only for interval-level scales. (The LS is an ordinal scale.) You could construct a cxc contingency table, where c is the number of categories (scale points) and see if most of the frequencies lie in the upper-right and lower-left portions of the table. That would require a large number of respondents if c is more than 3 or so, in order to "fill up" the c2 cells; otherwise the table would look rather anemic. If further summary of the results is thought to be necessary, either Guttman's (1946) reliability coefficient or Goodman & Kruskal's (1979) gamma (sometimes called the index of order association) would be good choices for such a table, and would serve as the reliability coefficient (for that sample on that occasion). If the number of observations is fairly small and c is fairly large, you could calculate the Spearman rank correlation between score at Time 1 and score at Time 2, since you shouldn't have too many ties, which can often wreak havoc. 
[Exercise for the reader: When using the Spearman rank correlation in determining the relationship between two ordinal variables X and Y, we get the difference between the rank on X and the rank on Y for each observation. For ordinal variables in general, subtraction is a "no-no". (You can't subtract a "strongly agree" from an "undecided", for example.) Shouldn't a rank-difference also be a "no-no"? I think it should, but people do it all the time, especially when they're concerned about whether or not a particular variable is continuous enough, linear enough, or normal enough in order for the Pearson r to be defensible.] The matter of absolute reliability is easier to assess. Just calculate the % agreement between score at Time 1 and score at Time 2. If there is a gold standard to which you would like to compare the scale under consideration, the (relative) correlation between scale and standard (a validity coefficient) needs to be calculated. The choice of type of validity coefficient, like the choice of type of reliability coefficient, is difficult. It all depends upon the scale type of the standard. If it is also ordinal, with d scale points, a cxd table would display the data nicely, and Goodman & Kruskal's gamma could serve as the validity coefficient (again, for that sample on that occasion). (N.B.: If a gold standard does exist, serious thought should be given to forgoing the new instrument entirely, unless the LS or VAS under consideration would be briefer but equally reliable and content valid.) Visual Analog Scales The process for the assessment of the reliability and validity of a VAS is essentially the same as that for a LS. As indicated above, the principal difference between the two is that a VAS is "more continuous" than a LS, but neither possesses a meaningful unit of measurement. For a VAS there is a surrogate unit of measurement (usually the millimeter), but it wouldn't make any sense to say that a particular patient has X millimeters of pain. (Would it?) For a LS you can't even say 1 what or 2 what,..., since there isn't a surrogate unit. Having to treat a VAS as an ordinal scale is admittedly disappointing, particularly if it necessitates slicing up the scale into two or more (but not 101) pieces and losing some potentially important information. But let's face it. Most respondents will probably concentrate on the verbal descriptors along the bottom of the scale anyhow, so why not help them along? (If there are no descriptors except for the endpoints, you might consider collapsing the scale into those two categories.) Statistical inference For the sample selected for the LS or VAS reliability and validity study, should you carry out a significance test for the reliability coefficient and the validity coefficient? Certainly not a traditional test of the null hypothesis of a zero relationship. Whether or not a reliability or a validity coefficient is significantly greater than zero is not the point (they darn well better be). You might want to test a "null" hypothesis of a specific non-zero relationship (e.g., one that has been found for some relevant norm group), but the better analysis strategy would be to put a confidence interval around the sample reliability coefficient and the sample validity coefficient. (If you have a non-random sample it should be treated just like a population, i.e., descriptive statistics only.) 
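To make the preceding suggestions concrete, here is a minimal Python sketch, using made-up test-retest responses for ten people on a 5-point Likert item, of two of the summaries mentioned above: Goodman & Kruskal's gamma as a relative reliability coefficient and percent agreement as an absolute one.

# A minimal sketch with hypothetical data: gamma as a "relative" reliability
# coefficient and percent exact agreement as an "absolute" one.
from itertools import combinations

time1 = [4, 5, 3, 2, 4, 1, 5, 3, 2, 4]   # made-up Time-1 responses (1-5)
time2 = [4, 4, 3, 2, 5, 1, 5, 2, 2, 4]   # made-up Time-2 responses (1-5)

concordant = discordant = 0
for (x1, y1), (x2, y2) in combinations(zip(time1, time2), 2):
    product = (x1 - x2) * (y1 - y2)
    if product > 0:
        concordant += 1
    elif product < 0:
        discordant += 1
    # pairs tied on either occasion are ignored in gamma

gamma = (concordant - discordant) / (concordant + discordant)
agreement = 100 * sum(a == b for a, b in zip(time1, time2)) / len(time1)

print(round(gamma, 2))    # the reliability coefficient for this sample on this occasion
print(agreement)          # percent exact agreement: 70.0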
The article by Kraemer (1975) explains how to test a hypothesis about, and how to construct a confidence interval for, the Spearman rank correlation coefficient, rho. A similar article by Woods (2007; corrected in 2008) treats estimation for both Spearman's rho and Goodman & Kruskal's gamma. That would take care of Likert Scales nicely. If the raw data for Visual Analog Scales are converted into either ranks or ordered categories, inferences regarding their reliability and validity coefficients could be handled in the same manner. Combining scores on Likert Scales and Visual Analog Scales The preceding discussion was concerned with a single-item LS or VAS. Many researchers are interested in combining scores on two or more of such scales in order to get a "total score". (Some people argue that it is also important to distinguish between a Likert item and a Likert scale, with the latter consisting of a composite of two or more of the former. I disagree; a single Likert item is itself a scale; so is a single VAS.) The problems involved in assessing the validity and reliability of such scores are several magnitudes more difficult than for assessing the validity and reliability of a single LS or a single VAS. Consider first the case of two Likert-type items, e.g., the following: The use of marijuana for non-medicinal purposes is widespread. Strongly Disagree Disagree Undecided Agree Strongly Agree (1) (2) (3) (4) (5) The use of marijuana for non-medicinal purposes should be legalized. Strongly Disagree Disagree Undecided Agree Strongly Agree (1) (2) (3) (4) (5) All combinations of responses are possible and undoubtedly likely. A respondent could disagree, for example, that such use is widespread, but agree that it should be legalized. Another respondent might agree that such use is widespread, but disagree that is should be legalized. How to combine the responses to those two items in order to get a total score? See next paragraph. (Note: Some people, e.g., some "conservative" statisticians, would argue that scores on those two items should never be combined; they should always be analyzed as two separate items.) The usual way the scores are combined is to merely add the score on Item 1 to the score on Item 2, and in the process of so doing to "reverse score", if and when necessary, so that "high" total scores are indicative of an over-all favorable attitude and "low" total scores are indicative of an over-all unfavorable attitude. The respondent who chose "2" (disagree) for Item 1 and "4" (agree) for Item 2 would get a total score of 4 (i.e., a "reversed" 2) + 4 (i.e., a "regular" 4) = 8, since he(she) appears to hold a generally favorable attitude toward marijuana use. But would you like to treat that respondent the same as a respondent who chose "5" for the first item and "3" for the second item? They both would get a total score of 8. See how complicated this is? Hold on; it gets even worse! Suppose you now have total scores for all respondents. How do you summarize the data? The usual way is to start by making a frequency distribution of those total scores. That should be fairly straightforward. Scores can range from 2 to 10, whether or not there is any reverse-scoring (do you see why?), so an "ungrouped" frequency distribution should give you a pretty good idea of what's going on. But if you want to summarize the data even further, e.g., by getting measures of central tendency, variability, skewness, and kurtosis, you have some tough choices to make. 
For example, is it the mean, the median, or the mode that is the most appropriate measure of central tendency for such data? The mean is the most conventional, but should be reserved for interval scales and for scales that have an actual unit of measurement. (Individual Likert scales and combinations of Likert scales are neither: Ordinal in, ordinal out.) The median should therefore be fine, although with an even number of respondents that can get tricky (for example, would you really like to report a median of something like 6.5 for this marijuana example?). Getting an indication of the variability of those total scores is unbelievably technically complicated. Both variance and standard deviation should be ruled out because of non-intervality. (If you insist on one or both of those, what do you use in the denominator of the formula... n or n-1?) How about the range (the actual range, not the possible range)? No, because of the same non-intervality property. All other measures of variability that involve subtraction are also ruled out. That leaves "eyeballing" the frequency distribution for variability, which is not a bad idea, come to think of it. I won't even get into problems involved in assessing skewness and kurtosis, which should probably be restricted to interval-level variables in any event. (You can "eyeball" the frequency distribution for those characteristics just like you can for variability, which also isn't a bad idea.) The disadvantages of combining scores on two VASs are the same as those for combining scores on two LSs. And for three or more items things don't get any better. What some others have to say about the validity and the reliability of a LS or VAS The foregoing (do you know the difference between "forgoing" and "foregoing"?) discussion consists largely of my own personal opinions. (You probably already have me pegged, correctly, as a "conservative" statistician.) Before I turn to my most controversial suggestion of replacing almost all Likert Scales and almost all Visual Analog Scales with interval scales, I would like to call your attention to authors who have written about how to assess the reliability and/or the validity of a LS or a VAS, or who have reported their reliabilities or validities in substantive investigations. Some of their views are similar to mine. Others are diametrically opposed. 1. Aitken (1969) According to Google, this "old" article has been cited 1196 times! It's that good, and has a brief but excellent section on the reliability and validity of a VAS. (But it is very hard to get a hold of. Thank God for helpful librarians like Shirley Ricker at the University of Rochester.) 2. Price, et al. (1983). As the title of their article indicates, Price, et al. claim that in their study they have found the VAS to be not only valid for measuring pain but also a ratio-level variable. (I don't agree. But read the article and see what you think.) 3. Wewers and Lowe (1990) This is a very nice summary of just about everything you might want to know concerning the VAS, written by two of my former colleagues at Ohio State (Mary Ellen Wewers and Nancy Lowe). There are fine sections on assessing the reliability and the validity of a VAS. They don't care much for the test-retest approach to the assessment of the reliability of a VAS, but I think that is really the only option. The parallel forms approach is not viable (what constitutes a parallel item to a given single-item VAS?) 
and things like Cronbach's alpha are no good because they require multiple items that are gathered together in a composite. It comes down to a matter of the amount of time between test and retest. It must be short enough so that the construct being measured hasn't changed, but it must be long enough so that the respondents don't merely "parrot back" at Time 2 whatever they indicated at Time 1; i.e., it must be a "Goldilocks" interval.
4. Von Korff, et al. (1993) These authors developed what they call a "Quadruple Visual Analog Scale" for measuring pain. It consists of four items, each having "No pain" and "worst possible pain" as the two endpoints, with the numbers 0 through 10 equally spaced beneath each item. The respondents are asked to indicate the amount of pain (1) now, (2) typical, (3) best, and (4) worst; and then to add across the four items. Interesting, but wrong (in my opinion).
5. Bijur, Silver, and Gallagher (2001) This article was a report of an actual test-retest (and re-retest...) reliability study of the VAS for measuring acute pain. Respondents were asked to record their pain levels in pairs one minute apart thirty times in a two-hour period. The authors found the VAS to be highly reliable. (Not surprising. If I were asked 60 times in two hours to indicate how much pain I had, I would pick a spot on the VAS and keep repeating it, just to get rid of the researchers!)
6. Owen and Froman (2005) Although the main purpose of their article was to dissuade researchers from unnecessarily collapsing a continuous scale (especially age) into two or more discrete categories, the authors made some interesting comments regarding Likert Scales. Here are a couple of them:
"...equal appearing interval measurements (e.g., Likert-type scales...)" (p. 496)
"There is little improvement to be gained from trying to increase the response format from seven or nine options to, say, 100. Individual items usually lack adequate reliability, and widening the response format gives an appearance of greater precision, but in truth does not boost the item's reliability... However, when individual items are aggregated to a total (sum or mean) scale score, the continuous score that results usually delivers far greater precision." (p. 499)
A Likert scale might be an "equal appearing interval measurement", but it's not interval-level. And I agree with the first part of the second quote (it sounds like a dig at Visual Analog Scales), but not with the second part. Adding across ordinal items does not result in a defensible continuous score. As the old adage goes, "you can't make a silk purse out of a sow's ear".
7. Davey, et al. (2007) There is a misconception in the measurement literature that a single item is necessarily unreliable and invalid. Not so, as Davey, et al. found in their use of a one-item LS and a one-item VAS to measure anxiety. Both were found to be reliable and valid. (Nice study.)
8. Hawker, et al. (2011) This article is a general review of pain scales. The first part of the article is devoted to the VAS (which the authors call "a continuous scale"; ouch!). They have this to say about its reliability and validity:
"Reliability. Test-retest reliability has been shown to be good, but higher among literate (r = 0.94, P < 0.001) than illiterate patients (r = 0.71, P < 0.001) before and after attending a rheumatology outpatient clinic [citation]. Validity. In the absence of a gold standard for pain, criterion validity cannot be evaluated.
For construct validity, in patients with a variety of rheumatic diseases, the pain VAS has been shown to be highly correlated with a 5-point verbal descriptive scale (nil, mild, moderate, severe, and very severe) and a numeric rating scale (with response options from no pain to unbearable pain), with correlations ranging from 0.71-0.78 and 0.62-0.91, respectively [citation]. The correlation between vertical and horizontal orientations of the VAS is 0.99 [citation]." (page S241)

That's a lot of information packed into two short paragraphs. One study doesn't make for a thorough evaluation of the reliability of a VAS; and as I have indicated above, those significance tests aren't appropriate. The claim about the absence of a gold standard is probably warranted. But I find a correlation of .99 between a vertical VAS and a horizontal VAS hard to believe. (Same people at the same sitting? You can look up the reference if you care.)

9. Vautier (2011). Although it starts out with some fine comments about basic considerations for the use of the VAS, Vautier's article is a very technical discussion of multiple Visual Analog Scales used for the determination of reliability and construct validity in the measurement of change. The references that are cited are excellent.

10. Franchignoni, Salaffi, and Tesio (2012). This recent article is a very negative critique of the VAS. Example: "The VAS appears to be a very simple metric ruler, but in fact it's not a true linear ruler from either a pragmatic or a theoretical standpoint." (page 798). (Right on!) In a couple of indirect references to validity, the authors go on to argue that most people can't discriminate among the 101 possible points for a VAS. They cite Miller's (1956) famous "7 plus or minus 2" rule, and they compare the VAS unfavorably with a 7-point Likert scale.

Are Likert Scales and Visual Analog Scales really different from one another?

In the previous paragraph I referred to 101 points for a VAS and 7 points for an LS. The two approaches differ methodologically only in the number of points (choices, categories) from which a respondent makes a selection. There are Visual Analog Scales that aren't really visual, and there are Likert Scales that are very visual. An example of the former is the second scale at the beginning of this paper. The only thing "visual" about that is the 100-millimeter line. As examples of the latter, consider the pictorial Oucher (Beyer, et al., 2005) and the pictorial Defense and Veterans Pain Rating Scale (Pain Management Task Force, 2010), which consist of photographs of faces of children (Beyer) or drawings of soldiers (Pain Management Task Force) expressing varying degrees of pain. The Oucher has six scale points (pictures) and the DVPRS has six pictures superimposed upon 11 scale points, with the zero picture indicating "no pain", the next two pictures associated with mild pain, the fourth associated with moderate pain, and the last two associated with severe pain. Both instruments are actually amalgams of Likert-type scales and Visual Analog Scales. I once had the pleasant experience of co-authoring an article about the Oucher with Judy Beyer. (Our article is cited in theirs.) The instrument now exists in parallel forms for each of four ethnic groups.

Back to the third item at the beginning of this paper

I am not an economist.
I took only the introductory course in college, but I was fortunate to have held a bridging fellowship to the program in Public Policy at the University of Rochester when I was a faculty member there, and I find the way economists look at measurement and statistics problems to be fascinating. (Economics is actually not the study of supply and demand. It is the study of the optimization of utility, subject to budget constraints.)

What has all of that to do with Item #3? Plenty. If you are serious about measuring amount of pain, strength of an attitude, or any other such construct, try to do it in a financial context. The dollar is a great unit of measurement. And how would you assess the reliability and validity? Easy; use Pearson r for both. You might have to make a transformation if the scatter plot between test scores and retest scores, or between scores on the scale and scores on the gold standard, is non-linear, but that's a small price to pay for a higher level of measurement.

Afterthought

Oh, I forgot three other sources. If you're seriously interested in understanding levels of measurement you must start with the classic article by Stevens (1946). Next, you need to read Marcus-Roberts and Roberts (1987) regarding why traditional statistics are inappropriate for ordinal scales. Finally, turn to Agresti (2010). This fine book contains all you'll ever need to know about handling ordinal scales. Agresti says little or nothing about validity and reliability per se, but since most measures of those characteristics involve correlation coefficients of some sort, his suggestions for determining relationships between two ordinal variables should be followed.

References

Agresti, A. (2010). Analysis of ordinal categorical data (2nd ed.). New York: Wiley.
Aitken, R.C.B. (1969). Measurement of feeling using visual analogue scales. Proceedings of the Royal Society of Medicine, 62, 989-993.
Beyer, J.E., Turner, S.B., Jones, L., Young, L., Onikul, R., & Bohaty, B. (2005). The alternate forms reliability of the Oucher pain scale. Pain Management Nursing, 6 (1), 10-17.
Bijur, P.E., Silver, W., & Gallagher, E.J. (2001). Reliability of the Visual Analog Scale for measurement of acute pain. Academic Emergency Medicine, 8 (12), 1153-1157.
Davey, H.M., Barratt, A.L., Butow, P.N., & Deeks, J.J. (2007). A one-item question with a Likert or Visual Analog Scale adequately measured current anxiety. Journal of Clinical Epidemiology, 60, 356-360.
Franchignoni, F., Salaffi, F., & Tesio, L. (2012). How should we use the visual analogue scale (VAS) in rehabilitation outcomes? I: How much of what? The seductive VAS numbers are not true measures. Journal of Rehabilitation Medicine, 44, 798-799.
Freyd, M. (1923). The graphic rating scale. Journal of Educational Psychology, 14, 83-102.
Goodman, L.A., & Kruskal, W.H. (1979). Measures of association for cross classifications. New York: Springer-Verlag.
Guttman, L. (1946). The test-retest reliability of qualitative data. Psychometrika, 11 (2), 81-95.
Hawker, G.A., Mian, S., Kendzerska, T., & French, M. (2011). Measures of adult pain. Arthritis Care & Research, 63 (S11), S240-S252.
Kraemer, H.C. (1975). On estimation and hypothesis testing problems for correlation coefficients. Psychometrika, 40 (4), 473-485.
Likert, R. (1932). A technique for the measurement of attitudes. Archives of Psychology, 22, 5-55.
Marcus-Roberts, H.M., & Roberts, F.S. (1987). Meaningless statistics. Journal of Educational Statistics, 12, 383-394.
Miller, G.A. (1956). The magical number seven, plus or minus two: Limits on our capacity for processing information. Psychological Review, 63, 81-97.
Owen, S.V., & Froman, R.D. (2005). Why carve up your continuous data? Research in Nursing & Health, 28, 496-503.
Pain Management Task Force (2010). Providing a Standardized DoD and VHA Vision and Approach to Pain Management to Optimize the Care for Warriors and their Families. Office of the Army Surgeon General.
Price, D.D., McGrath, P.A., Rafii, I.A., & Buckingham, B. (1983). The validation of Visual Analogue Scales as ratio scale measures for chronic and experimental pain. Pain, 17, 45-56.
Stevens, S.S. (1946). On the theory of scales of measurement. Science, 103, 677-680.
Vautier, S. (2011). Measuring change with multiple Visual Analogue Scales: Application to tense arousal. European Journal of Psychological Assessment, 27, 111-120.
Von Korff, M., Deyo, R.A., Cherkin, D., & Barlow, S.F. (1993). Back pain in primary care: Outcomes at 1 year. Spine, 18, 855-862.
Wewers, M.E., & Lowe, N.K. (1990). A critical review of visual analogue scales in the measurement of clinical phenomena. Research in Nursing & Health, 13, 227-236.
Woods, C.M. (2007; 2008). Confidence intervals for gamma-family measures of ordinal association. Psychological Methods, 12 (2), 185-204.

IN SUPPORT OF NULL HYPOTHESIS SIGNIFICANCE TESTING

Introduction

For the last several years it has been fashionable to deride the use of null hypothesis significance testing (NHST) in scientific research, especially the testing of "nil" hypotheses for randomized trials in which the hypothesis to be tested is that there is zero difference between the means of two experimental populations. The literature is full of claims such as these:

"The null hypothesis, taken literally (and that's the only way you can take it in formal hypothesis testing), is always false in the real world." (Cohen, 1990, p. 1308)

"It is foolish to ask 'Are the effects of A and B different'? They are always different...for some decimal place." (Tukey, 1991, p. 100)

"Given the many attacks on it, null hypothesis testing should be dead." (Rindskopf, 1997, p. 319)

"[NHST is] surely the most bone-headedly misguided procedure ever institutionalized in the rote training of science students." (Rozeboom, 1997, p. 335)

"Logically and conceptually, the use of statistical significance testing in the analysis of research data has been thoroughly discredited." (Schmidt & Hunter, 1997, p. 37)

[1997 was a good year for attacks against NHST. In the same year there appeared an entire book entitled "What if there were no significance tests?" (edited by Harlow, Mulaik, & Steiger, 1997). It consisted of several chapters, some of which were pro NHST and some of which were con (mostly con). That book followed upon an earlier book entitled "The significance test controversy" (edited by Morrison & Henkel, 1970).]

In what follows in this paper, which has a title similar to that of Hagen (1997) and is in the spirit of Abelson (1997a, 1997b), I would like to resurrect some of the arguments in favor of NHST, but starting from a different perspective, viz., that of legal and medical decision-making.

Two very important null hypotheses (and their alternatives)

The nulls: 1. The defendant is innocent. 2. The patient is well.

The alternatives: 1. The defendant is guilty. 2. The patient is ill.

Let us consider first "The defendant is innocent".
Unlike most scientific null hypotheses that we would like to reject, this hypothesis is an example of a hypothesis that we would like to be able to "accept", or at least "not reject", if, of course, the defendant is in fact innocent. How do we proceed?

1. We (actually the prosecuting attorneys) gather evidence that bears upon the defendant's guilt or innocence.

2. The "sample size" (amount of evidence) ranges from the testimony of one or two witnesses to multi-year investigations, depending upon the seriousness of the crime that the defendant is alleged to have committed.

3. The evidence is tested in court, with the attorneys for the defense (often court-appointed) arguing for the truth of the null hypothesis and with the prosecuting attorneys arguing for the truth of the alternative hypothesis.

4. An inference (verdict) is rendered regarding the hypotheses. If the null hypothesis is rejected, the defendant becomes subject to some sort of penalty ranging from a fine or community service to life imprisonment or death. If the null hypothesis is not rejected, the defendant is set free.

5. No matter what inference is made, we acknowledge that a mistake could be made. We might have made a "Type I error" by rejecting a true null hypothesis, in which case an innocent person will have been punished unjustly. We would like to keep the probability of such a decision small. Or we might have made a "Type II error" by not rejecting a false null hypothesis, in which case a guilty person will have been set free and might commit the same crime again, or perhaps an even worse crime. We would like to keep the probabilities of either of those eventualities very small.

Fortunately, we cannot commit both of those errors simultaneously, but the probabilities work at cross-purposes. As we try to decrease the probability of making a Type I error we increase the probability of making a Type II error, and vice versa. The only way to keep both probabilities small is to increase the sample size, i.e., to obtain more evidence. [In his very balanced and very long summary of the NHST controversy, Nickerson (2000) uses this same analogy to what happens in a U.S. court of law.]

Now for "The patient is well". We all prefer being well to being ill, so we often seek out medical advice, usually from doctors and/or nurses, to help us in deciding which we are at any given time. How do we proceed?

1. We (ourselves, the doctors, and the nurses) gather evidence that bears upon our wellness or our illness.

2. The "sample size" (amount of evidence) ranges from seeing what happens when we say "aaahhh" to the carrying out of various procedures such as MRIs and biopsies.

3. The evidence is tested in the clinic, with everybody hoping that the null hypothesis is true and the alternative hypothesis is false.

4. An inference (decision) is made regarding the hypotheses. If the null hypothesis is rejected, we are told that we are ill and some treatment is recommended. If the null hypothesis is not rejected, we are relieved to hear that, and we are free to leave the clinic.

5. No matter what inference is made, we acknowledge that a mistake could be made. There might have been a "Type I error" in the rejection of a true null hypothesis, in which case we would be treated for an ailment or a disease that we don't have. We would like to keep the probability of such a decision small.
Or there might have been a "Type II error" in the failure to reject a false null hypothesis, in which case we would go untreated for an ailment or a disease that we had. We would like to keep the probabilities of either of those eventualities very small.

Fortunately, both of those errors cannot occur simultaneously, but the probabilities work at cross-purposes. As we try to decrease the probability of making a Type I error we increase the probability of making a Type II error, and vice versa. The only way to keep both probabilities small is to increase the sample size, i.e., to obtain more evidence (have more tests taken).

Those two situations are very similar. The principal differences are (1) the parties in the legal example are "rooting for" different outcomes, whereas the parties in the medical example are "rooting for" the same outcome; and (2) the consequences of errors in the legal example are usually more severe than the consequences of errors in the medical example. Once a defendant has been incarcerated for a crime (s)he didn't commit, it is very difficult to undo the damage done to that person's life. Once a defendant has been set free after a crime that (s)he did commit, (s)he remains a threat to society. But if we are not treated for our ailment or disease it is possible to seek such treatment at a later date. Likewise for being treated for an ailment or a disease that we don't have, the principal consequence of which is unnecessary worry and anxiety, unless the treatment is something radical that is worse than the ailment or the disease itself.

A null hypothesis that is always true

One of the arguments against NHST is the claim that the null hypothesis is always false. I would like to give an example of a null hypothesis that is always true: The percentage of black cards in a new deck of cards is equal to 50. What is "null" about that? It is null because it is directly testable. There is no zero in it, so it is not "nil", but it can be tested by taking a random sample of cards from the deck, determining what percentage of the sampled cards is black, and making an inference regarding the percentage of black cards in the entire deck (the population). The difference between this example and both the legal example and the medical example is that either no error is made (a true null hypothesis is not rejected) or a Type I error has been made (a true null hypothesis is rejected). There is no way to make a Type II error. (Do you see why? A small sketch of such a test appears just after this discussion.)

Now for null hypothesis significance testing in research. First of all it is well to distinguish between a hypothesis imbedded in a theory and a hypothesis arising from a theory. For example, consider the theoretical non-null hypothesis that boys are better in mathematics than girls are. One operational null hypothesis arising from the theory is the hypothesis that the mean score on a particular mathematics achievement test for some population of boys is equal to the mean score on that same test for some population of girls. Is that hypothesis ever true? Some people would argue that those two means will always differ by some amount, however small, so the hypothesis is not worth testing. I respectfully disagree. To put it in a spiritual context, only God knows whether or not it is true. We mere mortals can only conjecture and test.
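An aside: the deck-of-cards null hypothesis above is easy to test exactly, because the hypothesis completely specifies the population (26 black cards out of 52). Here is a minimal Python sketch of such a test for a random sample drawn without replacement from the deck. It is offered only as an illustration of the logic; the sample size of 13 is an arbitrary choice of mine, not anything from the paper.

# Exact test of the null hypothesis "the percentage of black cards in the deck is 50"
# (i.e., 26 black out of 52), for a sample drawn without replacement. Illustrative sketch only.
from math import comb

DECK, BLACK = 52, 26

def prob_k_black(k, n):
    # Hypergeometric probability of getting k black cards in a sample of n cards.
    return comb(BLACK, k) * comb(DECK - BLACK, n - k) / comb(DECK, n)

def two_sided_p(k_observed, n):
    # Sum the probabilities of all outcomes no more likely than the one observed.
    p_obs = prob_k_black(k_observed, n)
    return sum(prob_k_black(k, n) for k in range(n + 1) if prob_k_black(k, n) <= p_obs + 1e-12)

n = 13  # arbitrary sample size, for illustration
for k in range(n + 1):
    print(k, "black cards: p =", round(two_sided_p(k, n), 4))

Because the deck really does contain 26 black cards, the only possible mistake here is a Type I error (rejecting this true null when an unusual sample happens to be drawn), which is the answer to the "Do you see why?" question above.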
And that brings me to the point that opponents of null-hypothesis-testing always make, viz., we should use interval estimation rather than hypothesis testing if we are seriously interested in the extent to which boys and girls differ on that achievement test. I have nothing against interval estimation (the use of confidence intervals). As a matter of fact, I generally prefer them to hypothesis tests. But,

1. I have said elsewhere (Knapp, 2002): "If you have hypotheses to test (a null hypothesis you may or may not believe a priori and/or two hypotheses pitted against one another), use a significance test to test them. If you don't, confidence intervals are fine." (p. 241)

2. Although interval estimation often subsumes hypothesis testing (if the hypothesized parameter is in the interval, you can't reject it; if it isn't, you can and must), there are certain situations where it either does not or it has additional problems that are not associated with the corresponding hypothesis test. For example, if you want to make an inference regarding a population percentage (or proportion), the hypothesis-testing procedure is straightforward, with the standard error of the percentage being a simple function of the hypothesized percentage. On the other hand, in order to get a standard error to be employed in a confidence interval for a percentage you have to use the sample percentage, since you don't know the population percentage (you're trying to estimate it). When the number of "successes" in a sample is very small, the sample percentage can be a serious under-estimate of the population percentage. This is especially true if the number of "successes" in the sample is equal to zero, in which case the use of the sample percentage in the formula for the standard error would imply that there is no sampling error at all! The "Rule of three" (Jovanovic & Levy, 1997) is an attempt to cope with such an eventuality, but provides only a shaky estimate. (A small numerical illustration of this zero-successes problem appears immediately after this list.)

3. One of the arguments for preferring confidence intervals over hypothesis tests is that they go rather naturally with "effect sizes" that are in the original units of the dependent variable (i.e., are not standardized), but as Parker (1995) pointed out in his comments regarding Cohen's 1994 article, the actual difference between two means is often not very informative. (See his "number of chili peppers" example.)

4. Interval estimation is not the panacea that it is occasionally acclaimed to be. For every user of hypothesis testing who says "the probability is less than .05 that the null hypothesis is true" there is a user of interval estimation who says "I am .95 confident that the difference between the two population means varies between a and b". The p value is not an indication of the truth of the null hypothesis, and it is the difference between the sample means that varies, not the difference between the population means.

5. We could do both, i.e., use the techniques of NHST power analysis to draw a random sample of a size that is "optimal" to test our particular null hypothesis against our particular alternative hypothesis (for alpha = .05, for example) but use interval estimation with a confidence interval of the corresponding level (.95, for example) for reporting the results. There are sample-size procedures for interval estimation directly; however, they are generally more complicated and not as readily available as those for NHST.
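Here is the numerical illustration promised in item 2 above, as a minimal Python sketch. The sample size of 30 and the hypothesized value of .20 are arbitrary choices of mine for the illustration; the point is simply that with zero "successes" the usual (Wald) interval collapses to zero width, whereas the "Rule of three" and the hypothesis test behave sensibly.

# Confidence interval vs. hypothesis test for a proportion when there are zero "successes".
# Illustrative sketch only; n = 30 and p0 = .20 are arbitrary choices.
import math

n = 30
successes = 0
p_hat = successes / n  # sample proportion = 0

# The usual (Wald) 95% interval uses the sample proportion in the standard error:
se_wald = math.sqrt(p_hat * (1 - p_hat) / n)
print("Wald interval:", (p_hat - 1.96 * se_wald, p_hat + 1.96 * se_wald))  # (0.0, 0.0): "no sampling error"!

# "Rule of three": with 0 successes in n trials, an approximate 95% upper bound is 3/n.
print("Rule-of-three upper bound:", 3 / n)  # 0.10, i.e., 10%

# Hypothesis test of p = .20: the standard error uses the hypothesized value, not the sample value.
p0 = 0.20
se_test = math.sqrt(p0 * (1 - p0) / n)
print("z for H0: p = .20:", round((p_hat - p0) / se_test, 2))  # about -2.74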
But one thing we should not do (although you wouldn't know it by perusing the recent research literature) is to report both the upper and lower limits of the confidence interval AND the actual magnitude of the p-value that is found for the hypothesis test. If we care about 1 - α confidence we should only care about whether p is greater than or less than α.

A compromise

Jones & Tukey (2000) suggested that if we're interested in the difference between the means of two populations, A and B, we should investigate the difference between the corresponding sample means and then make one of the following inferences:

1. The mean of Population A minus the mean of Population B is greater than 0.
2. The mean of Population A minus the mean of Population B is less than 0.
3. The sign of the difference is yet to be determined.

Read their article. You'll like it.

Some recent references

You might have gotten the impression that the problem has gone away by now, given that the latest citation so far is to the year 2002. I assure you that it has not. The controversy regarding the use, abuse, misuse, etc. of NHST is just as hot in 2014 as it was in the heyday year of 1997. Here are a few examples:

1. LeMire (2010) recommends a different framework as the context for NHST. He calls it NHSTAF (the A and the F are for Argument and Framework) and it is based upon the work of Toulmin (1958). It's different, but interesting in its defense of NHST.

2. Lambdin (2012) claims that psychologists know about the weaknesses of NHST but many of them go ahead and use it anyhow. He calls this psychology's "dirty little secret". He goes on to blast significance tests in general and p-values in particular (he lists 12 misconceptions about them). His article is very well written and has lots of good references.

3. White (2012), in the first of several promised blogs about NHST, tries to pull together most of the arguments for and against NHST, and claims that it is important to distinguish between the problems faced by individual researchers and the problems faced by the community of researchers. That blog includes several interesting comments made by readers of the blog, along with White's replies to most of those comments.

4. Wood (2013) is an even better blog, accompanied by lots of good comments (with Wood's replies), several important references, and great pictures of R.A. Fisher, Jerzy Neyman, and Egon Pearson!

5. Cumming (2013) is a relentless advocate of interval estimation, with the use of confidence intervals around sample "effect sizes" and with a heavy reliance on meta-analysis. He calls his approach (presumptuously) "The New Statistics".

A final note

Some of the writers who have contributed to the NHST controversy remind me of the radical left and the radical right in American politics; i.e., people who are convinced they are correct and those "across the aisle" are not. A little humility, coupled with the sort of compromise suggested by Jones and Tukey (2000), could go a long way toward a solution of this vexing problem.

References

Abelson, R.P. (1997a). On the surprising longevity of flogged horses: Why there is a case for the significance test. Psychological Science, 8 (1), 12-15.
Abelson, R.P. (1997b). A retrospective on the significance test ban of 1999 (if there were no significance tests they would be invented). In L.L. Harlow, S.A. Mulaik, & J.H. Steiger (Eds.), What if there were no significance tests? (pp. 117-141). Hillsdale, NJ: Erlbaum.
Cohen, J. (1990). Things I have learned (so far). American Psychologist, 45, 1304-1312.
Cohen, J. (1994). The earth is round (p < .05). American Psychologist, 49, 997-1003.
Cumming, G. (2013). The New Statistics: Why and How. Psychological Science, 20 (10), 1-23.
Hagen, R.L. (1997). In praise of the null hypothesis statistical test. American Psychologist, 52, 15-24.
Harlow, L.L., Mulaik, S.A., & Steiger, J.H. (Eds.). (1997). What if there were no significance tests? Hillsdale, NJ: Erlbaum.
Jones, L.V., & Tukey, J.W. (2000). A sensible formulation of the significance test. Psychological Methods, 5 (4), 411-414.
Jovanovic, B.D., & Levy, P.S. (1997). A look at the rule of three. The American Statistician, 51 (2), 137-139.
Knapp, T.R. (2002). Some reflections on significance testing. Journal of Modern Applied Statistical Methods, 1 (2), 240-242.
Lambdin, C. (2012). Significance tests as sorcery: Science is empirical--significance tests are not. Theory & Psychology, 22 (1), 67-90.
LeMire, S. (2010). An argument framework for the application of null hypothesis statistical testing in support of research. Journal of Statistics Education, 18 (2), 1-23.
Morrison, D.E., & Henkel, R.E. (Eds.) (1970). The significance test controversy: A reader. Chicago: Aldine.
Nickerson, R.S. (2000). Null hypothesis significance testing: A review of an old and continuing controversy. Psychological Methods, 5 (2), 241-301.
Parker, S. (1995). The "difference of means" might not be the "effect size". American Psychologist, 1101-1102.
Rindskopf, D.M. (1997). Testing "small", not null, hypotheses: Classical and Bayesian approaches. In L.L. Harlow, S.A. Mulaik, & J.H. Steiger (Eds.), What if there were no significance tests? (pp. 319-332). Hillsdale, NJ: Erlbaum.
Rozeboom, W.W. (1997). Good science is abductive, not hypothetico-deductive. In L.L. Harlow, S.A. Mulaik, & J.H. Steiger (Eds.), What if there were no significance tests? (pp. 335-391). Hillsdale, NJ: Erlbaum.
Schmidt, F.L., & Hunter, J.E. (1997). Eight common but false objections to the discontinuance of significance tests in the analysis of research data. In L.L. Harlow, S.A. Mulaik, & J.H. Steiger (Eds.), What if there were no significance tests? (pp. 37-64). Hillsdale, NJ: Erlbaum.
Toulmin, S.E. (1958). The uses of argument. Cambridge, UK: Cambridge University Press.
Tukey, J.W. (1991). The philosophy of multiple comparisons. Statistical Science, 6, 100-116.
White, J.M. (May 10, 2012). Criticism 1 of NHST: Good tools for individual researchers are not good tools for research communities. Downloaded from the internet.
Wood, J. (May 5, 2013). Let's abandon significance tests. Downloaded from the internet.

The unit justifies the mean

Introduction

How should we think about the mean? Let me count the ways:

1. It is the sum of the measurements divided by the number of measurements.
2. It is the amount that would be allotted to each observation if the measurements were re-distributed equally.
3. It is the fulcrum (the point at which the measurements would balance).
4. It is the point for which the sum of the deviations around it is equal to zero.
5. It is the point for which the sum of the squared deviations around it is a minimum.
6. It need not be one of the actual measurements.
7. It is not necessarily in or near the center of a frequency distribution.
8. It is easy to calculate (often easier than the median, even for computers).
9. It is the first moment around the origin.
10. It requires a unit of measurement; i.e., you have to be able to say the mean "what".

I would like to take as a point of departure the first and the last of these matters and proceed from there.

Definition

Everybody knows what a mean is. You've been calculating them all of your lives. What do you do? You add up all of the measurements and divide by the number of measurements. You probably called that "the average", but if you've taken a statistics course you discovered that there are different kinds of averages. There are even different kinds of means (arithmetic, geometric, harmonic), but it is only the arithmetic mean that will be of concern in this paper, since it is so often referred to as "the mean".

The mean what

The mean always comes out in the same units that are used in the scale that produced the measurements in the first place. If the measurements are in inches, the mean is in inches; if the measurements are in pounds, the mean is in pounds; if the measurements are in dollars, the mean is in dollars; etc. Therefore, the mean is "meaningful" for interval-level and ratio-level variables, but it is "meaningless" for ordinal variables, as Marcus-Roberts and Roberts (1987) so carefully pointed out. Consider the typical Likert-type scale for measuring attitudes. It usually consists of five categories: strongly disagree, disagree, no opinion, agree, and strongly agree (or similar verbal equivalents). Those five categories are most frequently assigned the numbers 1, 2, 3, 4, and 5, respectively. But you can't say 1 what, 2 what, 3 what, 4 what, or 5 what.

The other eight "meanings of the mean" all flow from its definition and the requirement of a unit of measurement. Let me take them in turn.

Re-distribution

This property is what Watier, Lamontagne, and Chartier (2011) call (humorously but accurately) "The Socialist Conceptualization". The simplest context is financial. If the mean income of all of the employees of a particular company is equal to x dollars, x is the salary each would receive if the total amount of money paid out in salaries were distributed equally to the employees. (That is unlikely to ever happen.) A mean height of x inches is more difficult to conceptualize, because we rarely think about a total number of inches that could be re-distributed, but x would be the height of everybody in the group, be it sample or population, if they were all of the same height. A mean weight of x pounds is easier to think of than a mean height of x inches, since pounds accumulate faster than inches do (as anyone on a diet will attest).

Fulcrum (or center of gravity)

Watier, et al. (2011) call this property, naturally enough, "The Fulcrum Conceptualization". Think of a see-saw on a playground. (I used to call them teeter-totters.) If children of various weights were to sit on one side or the other of the see-saw board, the mean weight would be the weight where the see-saw would balance (the board would be parallel to the ground).

The sum of the positive and negative deviations is equal to zero

This is actually an alternative conceptualization to the previous one. If you subtract the mean weight from the weight of each child and add up those differences ("deviations") you always get zero, again an indication of a balancing point.

The sum of the squared deviations is a minimum

This is a non-intuitive (to most of us) property of the mean, but it's correct.
If you take any measurement in a set of measurements other than the mean and calculate the sum of the squared deviations from it, you always get a larger number. (Watier, et al., 2011, call this "The Least Squares Conceptualization".) Try it sometime, with a small set of numbers such as 1, 2, 3, and 4.

It doesn't have to be one of the actual measurements

This is obvious for the case of a seriously bimodal frequency distribution, where only two different measurements have been obtained, say a and b. If there are the same numbers of a's as b's then the mean is equal to (a+b)/2. But even if there are not the same numbers of a's as b's the mean is not equal to either of them.

It doesn't have to be near the center of the distribution

This property follows from the previous one, or vice versa. The mean is often called an indicator of "the central tendency" of a frequency distribution, but that is often a misnomer. The median, by definition, must be in the center, but the mean need only be greater than the smallest measurement and less than the largest measurement.

It is easy to calculate

Compare what it is that you need to do in order to get a mean with what you need to do in order to get a median. If you have very few measurements the amount of labor involved is approximately the same: Add (n-1 times) and divide (once); or sort and pick out. But if you have many measurements it is a pain in the neck to calculate a median, even for a computer (do they have necks?). Think about it. Suppose you had to write a computer program that would calculate a median. The measurements are stored somewhere and have to be compared with one another in order to put them in order of magnitude. And there's that annoying matter of an odd number vs. an even number of measurements. To get a mean you accumulate everything and carry out one division. Nice.

The first moment

Karl Pearson, the famous British statistician, developed a very useful taxonomy of properties of a frequency distribution. They are as follows:

The first moment (around the origin). This is what you get when you add up all of the measurements and divide by the number of them. It is the (arithmetic) mean. The term "moment" comes from physics and has to do with a force around a certain point.

The first moment around the mean. This is what you get when you subtract the mean from each of the measurements, add up those "deviations", and divide by the number of them. It is always equal to zero, as explained above.

The second moment around the mean. This is what you get when you take those deviations, square them, add up the squared deviations, and divide by the number of them. It is called the variance, and it is an indicator of the "spread" of the measurements around their mean, in squared units. Its square root is the standard deviation, which is in the original units.

The third moment around the mean. This is what you get when you take the deviations, cube them (i.e., raise them to the third power), add them up, divide by the number of deviations, and divide that by the cube of the standard deviation. It provides an indicator of the degree of symmetry or asymmetry ("skewness") of a distribution.

The fourth moment around the mean. This is what you get when you take the deviations, raise them to the fourth power, add them up, divide by the number of them, and divide that by the fourth power of the standard deviation. It provides an indicator of the extent of the kurtosis ("peakedness") of a distribution.
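Since the reader is invited above to "try it sometime" with the numbers 1, 2, 3, and 4, here is a small Python sketch that does exactly that: it checks that the deviations from the mean sum to zero, that the sum of squared deviations is smallest around the mean, and it computes the moment-based quantities just described. It is only an illustration of the definitions given in this section.

# The properties of the mean, checked on the small data set 1, 2, 3, 4 (illustration only).
x = [1, 2, 3, 4]
n = len(x)
mean = sum(x) / n  # first moment around the origin: 2.5

deviations = [xi - mean for xi in x]
print(sum(deviations))  # 0.0: the deviations around the mean sum to zero

def ss_around(c):
    # Sum of squared deviations around an arbitrary point c.
    return sum((xi - c) ** 2 for xi in x)
print(ss_around(mean), ss_around(2), ss_around(4))  # 5.0 is smaller than 6.0 and 14.0 (least squares)

variance = sum(d ** 2 for d in deviations) / n            # second moment around the mean: 1.25
sd = variance ** 0.5                                      # standard deviation, in the original units
skewness = sum(d ** 3 for d in deviations) / n / sd ** 3  # third moment, standardized: 0.0 (symmetric)
kurtosis = sum(d ** 4 for d in deviations) / n / sd ** 4  # fourth moment, standardized: 1.64
print(mean, variance, skewness, kurtosis)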
What about nominal variables in general and dichotomies in particular?

I hope you are now convinced that the mean is OK for interval variables and ratio variables, but not OK for ordinal variables. In 1946 the psychologist S.S. Stevens claimed that there were four kinds of variables, not three. The fourth kind is nominal, i.e., a variable that is amenable to categorization but not very much else. Surely if the mean is inappropriate for ordinal variables it must be inappropriate for nominal variables? Well, yes and no.

Let's take the "yes" part first. If you are concerned with a variable such as blood type, there is no defensible unit of measurement like an inch, a pound, or a dollar. There are eight different blood types (A+, A-, B+, B-, AB+, AB-, O+, and O-). No matter how many of each you have, you can't determine the mean blood type. Likewise for a variable such as religious affiliation. There are lots of categories (Catholic, Protestant, Jewish, Islamic, ..., None), but it wouldn't make any sense to assign the numbers 1, 2, 3, 4, ..., k to the various categories, calculate the mean, and report it as something like 2.97.

Now for the "no" part. For a dichotomous nominal variable such as sex (male, female) or treatment (experimental, control), it is perfectly appropriate (alas) to CALCULATE a mean, but you have to be careful about how you INTERPRET it. The key is the concept of a "dummy" variable. Consider, for example, the sex variable. You can call all of the males "1" (they are male) and all of the females "0" (they are not). Suppose you have a small study in which there are five males and ten females. The "mean sex" (sounds strange, doesn't it?) is equal to the sum of all of the measurements (5) divided by the number of measurements (15), or .333. That's not .333 "anythings", so there is still no unit of measurement, but the .333 can be interpreted as the PROPORTION of participants who are male (the 1's). It can be converted into a percentage by multiplying by 100 and affixing a % sign, but that wouldn't provide a unit of measurement either. There is an old saying that "there is an exception to every rule". This is one of them.

References

Marcus-Roberts, H.M., & Roberts, F.S. (1987). Meaningless statistics. Journal of Educational Statistics, 12, 383-394.
Stevens, S.S. (1946). On the theory of scales of measurement. Science, 103, 677-680.
Watier, N.N., Lamontagne, C., & Chartier, S. (2011). What does the mean mean? Journal of Statistics Education, 19 (2), 1-20.

ALPHA BETA SOUP

Introduction

About twenty years ago I wrote a little statistics book (Knapp, 1996) in which there were no formulas and only two symbols (X and Y). It seemed to me at the time (and still does) that the concepts in descriptive statistics and inferential statistics are difficult enough without an extra layer of symbols and formulas to exacerbate the problem of learning statistics. I have seen so many of the same symbols used for entirely different concepts that I decided I would like to try to point out some of the confusions and make a recommendation regarding what we should do about them. I have entitled this paper "Alpha beta soup" to indicate the "soup" many people find themselves in when trying to cope with multiple uses of the Greek letters alpha and beta, and with similar multiple uses of other Greek letters and their Roman counterparts.

A little history

I'm not the only person who has been bothered by multiple uses of the same symbols in statistical notation.
In 1965, Halperin, Hartley, and Hoel tried to rescue the situation by proposing a standard set of symbols to be used for various statistical concepts. Among their recommendations was alpha for the probability associated with certain sampling distributions and beta for the partial regression coefficients in population regression equations. A few years later, Sanders and Pugh (1972) listed several of the recommendations made by Halperin, et al., and pointed out that authors of some statistics books used some of those symbols for very different purposes.

Alpha

As many of you already know, in addition to the use of alpha as a probability (of a Type I error in hypothesis testing, i.e., "the level of significance"), the word alpha and the symbol α are encountered in the following contexts:

1. The Y-intercept in a population regression analysis for Y on X. (The three H's recommended beta with a zero subscript for that; see below.)

2. Cronbach's (1951) coefficient alpha, which is the very popular indicator of the degree of internal consistency reliability of a measuring instrument.

3. Some non-statistical contexts such as "the alpha male".

Beta

The situation regarding beta is even worse. In addition to the use of betas as the (unstandardized) partial regression coefficients, the word beta and the symbol β are encountered in the following contexts:

1. The probability of making a Type II error in hypothesis testing.

2. The standardized partial regression coefficients, especially in the social science research literature, in which they're called "beta weights". (This is one of the most perplexing and annoying contexts.)

3. A generic name for a family of statistical distributions.

4. Some non-statistical contexts such as "the beta version" of a statistical computer program or the "beta blockers" drugs.

Other Greek letters

1. The confusion between upper-case sigma (Σ) and lower-case sigma (σ). The former is used to indicate summation (adding up) and the latter is used as a symbol for the population standard deviation. The upper-case sigma is also used to denote the population variance-covariance matrix in multivariate analysis.

2. The failure of many textbook writers and applied researchers to use the Greek nu (ν) for the number of degrees of freedom associated with certain sampling distributions, despite the fact that almost all mathematical statisticians use it for that. [Maybe it looks too much like a v?]

3. The use of rho (ρ) for the population Pearson product-moment correlation coefficient and for Spearman's rank correlation, either in the population or in a sample (almost never stipulated).

4. It is very common to see π used for a population proportion, thereby causing all sorts of confusion with the constant π = 3.14159... The upper-case pi (Π) is used in statistics and in mathematics in general to indicate the product of several numbers (just as the upper-case sigma is used to indicate the sum of several numbers).

5. The Greek letter gamma (γ) is used to denote a certain family of statistical distributions and was used by Goodman and Kruskal (1979) as the symbol for their measure of the relationship between two ordinal variables. There is also a possible confusion with the non-statistical but important scientific concept of "gamma rays".
6. The Greek letter lambda (λ) is used as the symbol for the (only) parameter of a Poisson distribution, was used by Goodman and Kruskal as the symbol for their measure of the relationship between two nominal variables, and was adopted by Wilks for his multivariate statistic (the so-called "Wilks' lambda"), which has an F sampling distribution.

7. The Greek delta (δ) was Cohen's (1988 and elsewhere) choice for the hypothesized population "effect size", which is the difference between two population means divided by their common standard deviation. The principal problems with the Greek delta are that it is used to indicate "a very small amount" in calculus, and its capitalized version (Δ) is often used to denote change. (Cohen's delta usually has nothing to do with change, because its main use is for randomized trials where the experimental and control groups are measured concurrently at the end of an experiment.) Some non-statistical contexts in which the word delta appears are: Delta airlines; delta force; and the geographic concept of a delta.

8. Cohen (1960) had earlier chosen the Greek letter kappa (κ) to denote his measure of inter-rater reliability that has been corrected for chance agreement. This letter should be one of the few that don't cause problems. [The only confusion I can think of is in the non-statistical context where the kappa is preceded by phi and beta.]

9. Speaking of phi (pronounced "fee" by some people and "fy" by others), it is used to denote a measure of the relationship between two dichotomous nominal variables (the so-called "phi coefficient") in statistics. But it is used to denote "the golden ratio" of 1.618 and as one of many symbols in mathematics in general to denote angles.

10. Lastly (you hope) there is epsilon (ε), which is used to denote "error" (most often sampling error) in statistics, but, like delta, is used to indicate "a very small amount" in calculus.

Some of their Roman counterparts

H, H, and H (1965) and Sanders and Pugh (1972) agreed that population parameters should be denoted by Greek letters and sample statistics should be denoted by Roman letters. They also supported upper-case Roman letters for parameters, if Greek letters were not used, and lower-case Roman letters for statistics. There are still occasional violators of those suggestions, however. Here are two of them:

1. The use of s for the sample standard deviation is very common, but there are two s's, one whose formula has the sample size n in the denominator and the other whose formula has one less than the sample size in the denominator, so they have to be differentiated from one another notationally. I have written extensively about the problem of n vs. n-1 (Knapp, 1970; Knapp, 2013).

2. Most people prefer α for the population intercept and a for the sample intercept, respectively, rather than β0 and b0.

The use of bars and hats

Here things start to get tricky. The almost universal convention for symbolizing a sample mean for a variable X is to use an x with a horizontal "bar" (overscore?) above it. Some people don't like that, perhaps because it might take two lines of type rather than one. But I found a blog on the internet that explains how to do it without inserting an extra line. Here's the sample mean "x bar" on the same line: x̄. Nice, huh?
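(For anyone who typesets in LaTeX rather than in a word processor, the same symbols take one command each. This is merely a side note of mine, not something from the paper; the "hat" notation is taken up in the next paragraph.)

% Minimal LaTeX for the sample mean "x bar" and the estimate "mu hat" (illustration only).
\documentclass{article}
\begin{document}
The sample mean is $\bar{x}$, and a sample estimate of the population mean is $\hat{\mu}$.
\end{document}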
As far as hats (technically called carets or circumflexes) are concerned, the "rule" in mathematical statistics is easy to state but hard to enforce: When referring to a sample statistic as an estimate of a population parameter, use a lower-case Greek letter with a hat over it. For example, a sample estimate of a population mean would be "mu hat" (μ̂). [I also learned how to do that from a blog.]

What should we do about statistical notation?

As a resident of Hawaii [tough life] I am tempted to suggest using an entirely different alphabet, such as the Hawaiian alphabet that has all five of the Roman vowels (a, e, i, o, u) and only seven of the Roman consonants (h, k, l, m, n, p, w), but that might make matters worse since you'd have to string so many symbols together. (The Hawaiians have been doing that for many years. Consider, for example, the name of the state fish: Humuhumunukunukuapuaa.)

How about this: Don't use any Greek letters. (See Elena C. Papanastasiou's 2003 very informative yet humorous article about Greek letters. As her name suggests, she is Greek.) And don't use capital letters for parameters and small letters for statistics. Just use lower-case Roman letters WITHOUT "hats" for parameters and lower-case Roman letters WITH "hats" for statistics. Does that make sense?

References

Cohen, J. (1960). A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20, 37-46.
Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Erlbaum.
Cronbach, L.J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16, 297-334.
Goodman, L.A., & Kruskal, W.H. (1979). Measures of association for cross classifications. New York: Springer-Verlag.
Halperin, M., Hartley, H.O., & Hoel, P.G. (1965). Recommended standards for statistical symbols and notation. The American Statistician, 19 (3), 12-14.
Knapp, T.R. (1970). N vs. N-1. American Educational Research Journal, 7, 625-626.
Knapp, T.R. (1996). Learning statistics through playing cards. Thousand Oaks, CA: Sage.
Knapp, T.R. (2013). N (or n) vs. N - 1 (or n - 1) re-visited. Included in the present work (see pages 215-220).
Papanastasiou, E.C. (2003). Greek letters in measurement and statistics: Is it all Greek to you? ASA STATS, 36, 17-18.
Sanders, J.R., & Pugh, R.C. (1972). Recommendation for a standard set of statistical symbols and notations. Educational Researcher, 1 (11), 15-16.

USING PEARSON CORRELATIONS* TO TEACH OR LEARN STATISTICS

Preface

This paper summarizes one person's approach (mine) to the teaching and the learning of some basic concepts in statistics. One of the most common research questions in science is: "What is the relationship between X and Y?", where X is an independent (predictor) variable and Y is a dependent (criterion) variable. So why spend a lot of time on means, standard deviations, ANOVAs, etc. when you can get right to the heart of the problem by plotting the (X,Y) data, calculating the corresponding Pearson r, and (for sample data) making whatever inference from sample to population is justified?
[I know that means, standard deviations, ANOVAs, as well as Pearson r's, are all part of the so-called "general linear model" (I hate the word "model") and many formulas for it** directly involve means and standard deviations, but I claim that the direction and magnitude of a relationship outweigh any concerns regarding location or dispersion.] I have chosen for illustrative purposes to use a set of data collected a few years ago consisting of the ages, heights, weights, and positions (pitcher, catcher, first base, etc.) of the players on each of the 30 Major League Baseball teams (14 in the American League and 16 in the National League). This choice is not only based on (bad pun) my personal interest in the sport but also on people's general interest in height and weight. The total number of observations is 1034, although there is one missing weight (for a pitcher on the Cincinnati Reds). The data are available free of charge on the internet, but I have prepared Minitab, SPSS, and Excel versions that I will be happy to send to anyone who might be interested. (Some of you might have already gotten a copy of the data.) My personal preference is Minitab, and all of the analyses in what follows have been carried out by using that computer program (more specifically a 1986 version, with which I am perfectly satisfied). _______________________________________________________________ *The full name of a Pearson Correlation is "Pearson Product-Moment Correlation Coefficient", but product-moment is a concept in physics that most people don't understand, and it's really not a coefficient since it doesn't necessarily multiply anything. Getting started Suppose you were interested in the question: "What is the relationship between height and weight?" for Major League Baseball Players. If you have handy the data I will be referring to throughout this paper, there are several prior questions that you need to ask yourself before you do any plotting, calculating, or inferring. Here are ten of them. (The questions are addressed to the reader, whether you are a teacher or a student.) 1. Do you care about all of the players (the entire population of 1034 players) or just some of them? If all of them, you can plot and you can calculate, but there is no statistical inference to be made. Whatever you might choose to calculate are parameters for populations, not statistics for samples. 2. If all of them, do you care about what teams they're on, what positions they play, or their ages; or are you only interested in the "over-all" relationship between height and weight, regardless of team membership, position, or age? 3. If you do care about any of those matters, when it comes to plotting the data, how are you going to indicate same? For example, since there are 30 teams, will you use some sorts of symbols to indicate which of the data points correspond with which of the teams? How can you do that without cluttering things up? [Even if you don't care, trying to plot 1034 points (or 1033 points...remember that one weight is missing) so you can see what is going on is no easy task. Sure, Minitab or SPSS or Excel will do it for you (Minitab did it for me), but the picture might not be very pretty. More about that later.] 4. If just some of the players, which ones? The questions already raised are equally applicable to "sub-populations", where a sub-population consists of one of the 30 teams. (The number of observations for each of the teams varies from a low of 28 to a high of 38, with most around 35.) 
Sub-populations can and should be treated just like entire populations.

5. If you care about two of the sub-populations, say Team 10 and Team 27, how are you going to handle the "nesting" problem? (Player is nested within team.) Are you interested in the relationship between height and weight with the data for the two teams "pooled" together or in that relationship within each of the teams? This turns out to be one of the most important considerations in correlational research and is also the one that is often botched in the research literature. The "pooled" correlation can be vastly different from the correlation within each team. [Think about what would happen if you plot weight against height for a group of people that consists of half males and half females.] Therefore, even if you don't care about the within-team correlation, you should plot the data for each team separately and calculate the within-team correlations separately in order to determine whether or not the two sets of data are "poolable".

6. If you're interested in the entire population but rather than study it in full you want to take a sample from the population (the usual case) and generalize from sample to population (the usual objective), how do you do the sampling? Randomly? (Ideally.) What size sample, and why? With or without replacement? If with replacement, what will you do if you sample the same person more than once (unlikely, but possible)?

7. Suppose you choose to draw a simple random sample of size 30 without replacement. How will you do that? Use a table of random numbers found in the back of a statistics textbook? (The players are numbered from 1 to 1034, with the American League players listed first.) Use the random-sampling routine that is included in your favorite statistical "package"? (All three of Minitab, SPSS, and Excel have them, and they are of varying degrees of user-friendliness.) Or use one of those that are available on the internet, free of charge? (There is a nice one at the random.org website.)

8. Do you expect the random sample of size 30 to consist of one player from each of the 30 teams? If so, you're dreaming. (It's possible but extremely unlikely.) Under what circumstances, if any, would you feel unsatisfied with the sample you draw and decide to draw a different one? (Please don't do that.)

9. Suppose you choose to draw a "stratified" random sample rather than a simple one. What variable (team, position, age) would you stratify on? Why? If age (a continuous variable carried out to two decimal places!), how would you do the stratification? How do you propose to "put the pieces (the sub-samples) back together again" for analysis purposes?

10. Whether you stratify or not, will you be comfortable with the routines that you'll use to plot the data, calculate the Pearson r, and carry out the inference from sample to population? Do you favor significance tests or confidence intervals? (Do you know how the two are related?)

Plotting

Let's consider, in turn, three of the examples alluded to above:

1. Entire population
2. Two sub-populations (Team 10 and Team 27)
3. Simple random sample from entire population

I asked Minitab to plot weight (Y) against height (X) for all 1034 players.
Here it is [N* = 1 indicates that one point is missing; the symbols in the heart of the plot indicate how many points are in the various regions of the X,Y space... the * indicates a single point and the + indicates more than 9]:

[Minitab character plot: weight (pounds, roughly 150 to 300) vs. height (inches, roughly 69.0 to 81.0) for all 1034 players; N* = 1.]

Notice the very-close-to-elliptical shape of the plot. Does that suggest to you that the relationship is linear but not terribly strong? [It does to me.] That's nice, because Pearson r is a measure of the direction and the magnitude of linear relationship between two variables. The famous British statistician Karl Pearson invented it over 100 years ago. It can take on any values between -1 and +1, where -1 is indicative of a perfect inverse (negative) linear relationship and +1 is indicative of a perfect direct (positive) linear relationship. Notice also how difficult it would be to label each of the data points according to team membership. The plot is already jam-packed with indicators of how many points there are in the various regions. Take a guess as to the value of the corresponding Pearson r. We'll come back to that in the "Calculating" section (below).

I then asked Minitab to make three weight-vs.-height plots for me, one for Team 10, one for Team 27, and one for the two teams pooled together. Here they are:

First, Team 10 (number of players = 33):

[Minitab character plot: weight vs. height for Team 10; heights roughly 70.5 to 78.0 inches, weights roughly 150 to 225 pounds.]

Second, Team 27 (number of players = 36):

[Minitab character plot: weight vs. height for Team 27; heights roughly 70.0 to 82.5 inches, weights roughly 140 to 245 pounds.]

Third, both teams pooled together (number of players = 69):

[Minitab character plot: weight vs. height for Teams 10 and 27 combined; heights roughly 70.0 to 82.5 inches, weights roughly 140 to 245 pounds.]

There are several things to notice regarding those three plots. [I'll bet you've "eye-balled" some of the features already.] The first thing that caught my eye was the difference in the plots for the two teams taken separately. The Team 10 plot, although reasonably linear, is rather "fat". The Team 27 plot is very strange-looking, does not appear to be linear, and has that extreme "outlier" with height over 82.5 inches and weight over 245 pounds. (He's actually 83 inches tall and weighs 260 pounds... a very big guy.) The third plot, for the pooled data, looks more linear but is dominated by that outlier. Once again, try to guess what the three correlations will be, before carrying out the actual calculations (see the following section). What do you think should be done with the outlier? Delete it? Why or why not? Are the data for the two teams poolable, so that combining them is OK? Why or why not?

Lastly, I asked Minitab to draw a simple random sample of 30 players from the entire population of 1034 players, and to plot their weights against their heights. (A rough Python equivalent of these steps is sketched below, for readers who don't have Minitab.)
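Here is that sketch. The file name baseball.csv and the column names team, height, and weight are my assumptions about how the data happen to be stored, not part of the original data set, and the team labels 10 and 27 are likewise assumed.

# A rough Python equivalent of the Minitab steps above (a sketch, not the author's code).
# Assumed: a file "baseball.csv" with columns "team", "height" (inches), and "weight" (pounds).
import pandas as pd
import matplotlib.pyplot as plt

players = pd.read_csv("baseball.csv")  # hypothetical file name
full = players.dropna(subset=["height", "weight"])  # drop the one missing weight

# Scatterplot and Pearson r for the entire population.
full.plot.scatter(x="height", y="weight", alpha=0.3, title="All players")
print("Entire population r:", full["height"].corr(full["weight"]))  # Pearson by default

# Within-team and pooled correlations for two teams (team labels assumed).
team10 = full[full["team"] == 10]
team27 = full[full["team"] == 27]
pooled = pd.concat([team10, team27])
for label, grp in [("Team 10", team10), ("Team 27", team27), ("Pooled", pooled)]:
    print(label, "r:", grp["height"].corr(grp["weight"]))

# A simple random sample of 30 players, drawn without replacement, with its plot and its r.
sample = full.sample(n=30, replace=False, random_state=1)
sample.plot.scatter(x="height", y="weight", title="Random sample of 30")
print("Sample r:", sample["height"].corr(sample["weight"]))
plt.show()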
[The Minitab command is simply "sample 30 C1 C2", where C1 is the column containing a list of all 1034 ID numbers and C2 is where I wanted to put the ID numbers of the 30 sampled players. How would you (did you) do it?] Here's the plot:

[Minitab character plot of weight against height for the 30 sampled players; height runs from about 72.0 to 78.0 inches.]

What do you think about that plot? Is it "linear enough"? (See the section on testing linearity and normality, below.) Guess what the Pearson r is. If you used your software to draw a simple random sample of the same size, does your plot look like mine?

Calculating

I asked Minitab to calculate all five Pearson r's for the above examples (one for the entire population; three for Teams 10 and 27, separately and combined; and one for the random sample). Here are the results:

Entire population: r = .532. I'd call that a moderate, positive relationship.

Team 10: r = .280. Low, positive relationship?

Team 27: r = .723 (including the outlier). Strong, positive relationship, but partially attributable to the outlier. The correlation is .667 without the outlier.

Two teams combined: r = .587 (including the outlier). Moderate, positive, even with the outlier. The correlation is .510 without the outlier. Pooling was questionable, but turned out to be not as bad as I thought it would be.

Random sample: r = .439. Low-to-moderate, positive relationship. It's an under-estimate of the correlation for the entire population. It could just as easily have been an over-estimate. That's what chance is all about.

Inferring

As I indicated earlier in this paper, for the entire-population example and for the two-team sub-population example, there is no statistical inference to be made. The correlation is what it is. But for the random-sample example you might want to carry out one or more statistical inferences from the sample data that you know and have, to the population data that you (in real life) would not know and wish you had. Let's see what the various possibilities are.

First, point estimation. If someone put a gun to your head and demanded that you give one number that is your best guess for the correlation between height and weight in the population of 1034 baseball players, what would you say? After plotting my random sample data and calculating the corresponding Pearson r for that sample, I'd say .439, i.e., the sample correlation itself. I would not be very comfortable in so doing, however, for three reasons: (1) my sample of 30 is pretty small; (2) I probably should make a "finite population correction", because the population is not of infinite size and the sample takes a bite (albeit small) out of that population; and (3) I happen to know (and you probably don't) from the mathematical statistics literature that the sample correlation is not necessarily "the best" single estimate of the population correlation. [It all has to do with unbiased estimation, maximum likelihood estimation, Bayesian inference, and other such esoteric matters, so let's not worry about it, since we can see that point estimation is risky business.]

Second, interval estimation. Rather than provide one number that is our best single guess, how about a range of numbers that might "capture" the population correlation? [We could always proclaim that the population correlation is between -1 and +1, but that's the entire range of numbers that any Pearson r can take on, so that would not be very informative.]
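All of the inferences that follow start from that sample r of .439. If you wanted to reproduce the sampling-and-correlating step outside Minitab, here is a minimal Python sketch; the file name and column names are my own stand-ins, since the roster file itself is not reproduced in this paper:

    import pandas as pd

    # Stand-in file and column names for the roster of 1034 players
    players = pd.read_csv("mlb_players.csv")

    # Simple random sample of 30 players, drawn without replacement
    sample = players.sample(n=30, replace=False, random_state=2014)

    # Pearson r between height (inches) and weight (pounds) in the sample
    print(round(sample["height"].corr(sample["weight"]), 3))

Your sample r will of course differ from mine, which is the point of the "chance" remark above.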
It turns out that interval estimation, via so-called confidence intervals, is the usually preferred approach to statistical inference. Here's how it works.

You must first decide how confident you would like to be when you make your inference. This is always "researcher's choice", but is conventionally taken to be 95% or 99%, with the former percentage the more common. (The only way you can be 100% confident is to estimate that the correlation is between -1 and +1, but as already indicated that doesn't narrow things down at all.) Next you must determine the amount of sampling error that is associated with a Pearson r when drawing a simple random sample of a certain size from a population. There are all sorts of formulas for the sampling error, but if you can assume that there is a normal bivariate (elliptical) distribution of X and Y in the population from which the sample has been drawn [more about this later], and you want your computer to do the work for you, all you need to do is tell your software program what your sample correlation and your sample size are and it will give you the confidence interval automatically.

My favorite source is Richard Lowry's VassarStats website. I gave his interval estimation routine 95% confidence, my .439 correlation, and my "n" of 30, and it returned the numbers .094 and .690. I can therefore be approximately 95% confident that the interval from .094 to .690 captures the population correlation. Since I have the full population data (but wouldn't in real life) I see that my interval does capture the population correlation (of .532). But it might not have. All I now know is that in the long run 95% of intervals constructed in this way will do so and 5% will not.

Third, there is hypothesis testing, which (alas) is the most common type of statistical inference. On the basis of theory and/or previous research I could hypothesize (guess) that the correlation between height and weight in the population is some number, call it ρ (the Greek rho), which is the population counterpart to the sample Roman letter r. Suppose I claim that there is absolutely no (zero) linear relationship between the heights and the weights of Major League baseball players, because I theorize that there should be just as many short and heavy players, and tall and light players, as there are short and light and tall and heavy ones. I would therefore like to test the hypothesis that ρ is equal to zero (the so-called "null" hypothesis). Or, suppose that Smith and Jones had carried out a study of the heights and the weights of adults in general and found that the Pearson correlation between height and weight was .650. I might claim that the relationship should be the same for baseball players as it is for adults in general, so I would like to test the hypothesis that ρ is equal to .650. Let's see how both tests would work:

Hypothesis 1: ρ = 0, given that r = .439 for n = 30

The test depends upon the probability that I would get a sample r of .439 or more when the population ρ = 0. If the probability is high, I can't reject Hypothesis 1; if it's low (say, .05 or thereabouts...the usual conventional choice), I can. Just as in the interval estimation approach (see above) I need to know what the sampling error is in order to determine that probability. Once again there are all sorts of formulas and associated tables for calculating the sampling error and, subsequently, the desired probability, but fortunately we can rely on Richard Lowry and others who have done the necessary work for us.
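If you would rather carry out those two calculations yourself, here is a minimal Python sketch. It assumes (as such routines presumably do, though the paper doesn't say which formulas Lowry uses) that the confidence interval is based on Fisher's r-to-z transformation and that the test of ρ = 0 uses the standard t statistic with n - 2 degrees of freedom; the numbers it produces are close to the ones quoted in this section.

    import math
    from scipy.stats import norm, t as t_dist

    r, n = 0.439, 30

    # 95% confidence interval for rho via Fisher's r-to-z transformation
    z = math.atanh(r)                     # transform the sample r to the z scale
    se = 1.0 / math.sqrt(n - 3)           # approximate standard error of z
    crit = norm.ppf(0.975)                # about 1.96 for 95% confidence
    lo, hi = math.tanh(z - crit * se), math.tanh(z + crit * se)
    print(round(lo, 3), round(hi, 3))     # about .094 and .690

    # Test of H0: rho = 0, using t = r*sqrt(n-2)/sqrt(1-r^2) with n-2 df
    t_stat = r * math.sqrt(n - 2) / math.sqrt(1 - r ** 2)
    p = 2 * t_dist.sf(abs(t_stat), df=n - 2)
    print(round(t_stat, 2), round(p, 3))  # about t = 2.59, two-tailed p = .015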
I gave Lowry's software my r and my n, and a probability (p) of .014 was returned. Since that probability is less than .05 I can reject the hypothesis that ρ = 0. (I might be wrong, however. If so, I'm said to have made what the statisticians call a Type I Error: rejecting a true hypothesis.)

[Important aside: There is an interesting connection between interval estimation and hypothesis testing that is especially relevant for Pearson r's. If you determine the 95% confidence interval for ρ and that interval does not include 0, then you are justified in rejecting the hypothesis that ρ = 0. For the example just completed, the 95% confidence interval around .439 was found to go from .094 to .690, and that interval does not include 0, so once again I can reject Hypothesis 1, indirectly. (I can also reject any other values that are not in that interval.) The .439 is said to be "significant at the 5% level" (5% is the complement of 95%), and I got a "p-value" less than .05.]

Hypothesis 2: ρ = .650, given that r = .439 for n = 30

The logic here is similar. If the probability is high of getting a sample r of .439 (or anything more discrepant) when the population ρ = .650, I can't reject Hypothesis 2; if the probability is low, I can. But the mathematics gets a little heavier here, so I will appeal to the preceding "Important aside" section and claim that I cannot reject Hypothesis 2 because .650 is within my 95% confidence interval. (I might be wrong again, for the opposite reason, and would be said to have made a Type II Error: not rejecting a false hypothesis.)

Since I know that ρ is .532...but wouldn't know that in real life (I can't stress that too often)...I "should have" rejected both Hypothesis 1 and Hypothesis 2.

Rank correlations and non-parametric inference

In the Inferring section (see above) I said that certain things (e.g., the use of Richard Lowry's routine for determining confidence intervals) follow if you can assume that the bivariate distribution of X and Y in the population is normal. What if you know it isn't or you are unwilling to assume that it is? In that case you can rank-order the X's, rank-order the corresponding Y's, and get the correlation between the ranks rather than the actual measures. There are several kinds of rank correlations, but the most common one is the Spearman rank correlation, call it rS (pronounced "r-sub-S"), where the S stands for Charles Spearman, the British psychologist who derived it. It turns out that Spearman's rS is identical to Pearson's r for the ranked data.

Consider as an example the height (X) and weight (Y) data for Team 27.
Here are the actual heights, the actual weights, the ranks for the heights, and the ranks for the weights (if two or more people have the same height or the same weight, they are assigned the mean of the ranks for which they are tied):

ID  X   Y    Xrank  Yrank
1   73  196  12.0   17.0
2   73  180  12.0    9.0
3   76  230  31.0   31.5
4   75  224  24.0   30.0
5   70  160   1.5    2.0
6   73  178  12.0    7.0
7   72  205   6.5   22.5
8   73  185  12.0   11.5
9   75  210  24.0   26.0
10  74  180  17.5    9.0
11  73  190  12.0   15.0
12  73  200  12.0   19.5
13  76  257  31.0   35.0
14  73  190  12.0   15.0
15  75  220  24.0   29.0
16  70  165   1.5    3.0
17  77  205  34.5   22.5
18  72  200   6.5   19.5
19  77  208  34.5   24.0
20  74  185  17.5   11.5
21  75  215  24.0   28.0
22  75  170  24.0    5.0
23  75  235  24.0   33.0
24  75  210  24.0   26.0
25  72  170   6.5    5.0
26  74  180  17.5    9.0
27  71  170   3.5    5.0
28  76  190  31.0   15.0
29  71  150   3.5    1.0
30  75  230  24.0   31.5
31  76  203  31.0   21.0
32  83  260  36.0   36.0   [the "outlier"]
33  75  246  24.0   34.0
34  74  186  17.5   13.0
35  76  210  31.0   26.0
36  72  198   6.5   18.0

As indicated above, the Pearson r for the actual heights and weights is .723. The Pearson r for the ranked heights and weights (the Spearman rS) is .707. [Thank you, Minitab, for doing the ranking and the correlating for me.]

Had this been a random sample (which it is not...it is a sub-population) you might have wanted to make a statistical inference from the sample to the population from which the sample had been randomly drawn. The procedures for so doing are similar to the procedures for ordinary Pearson r and are referred to as "non-parametric". The word "non-parametric" derives from the root word "parameter" that always refers to a population. If you cannot assume that there is a normal bivariate distribution of X and Y in the population whose principal parameter is the population correlation, "non-parametric" inference is called for.

Testing for linearity and normality

If you're really compulsive and can't judge the linearity and normality of the relationship between X and Y by visual inspection of the X,Y plot, you might want to carry out formal tests for both. Such tests are available in the literature, but it would take us too far afield to go into them. And if your data "fails" either or both of the tests, there are data transformations that make the relationship "more linear" or "more normal".

Regression

Closely related to the Pearson correlation between two variables is the regression of one of the variables on the other, for predictive purposes. Some people can't say "correlation" without saying "regression". [Some of those same people can't say "reliability" without saying "validity".] In this section I want to point out the connection between correlation and regression, using as an example the data for my simple random sample of 30 players drawn from the entire population of 1034 players.

Suppose that you were interested not only in estimating the direction and the degree of linear relationship between their heights (X) and their weights (Y), but were also interested in using their data to predict weight from height for other players. You would start out just like we already have, namely plotting the data, calculating the Pearson r, and making a statistical inference, in the form of an estimation or a hypothesis test, from the sample r to the population ρ. But then the focus switches to the determination of the line that "best fits" the sample plot. This line is called the Y-on-X regression line, with Y as the dependent variable (the predictand) and X as the independent variable (the predictor).
[There is also the X-on-Y regression line, but we're not interested in predicting height from weight.] The reason for the word "regression" will be explained soon.

I gave Minitab the command "regr c4 1 c3", asking for a regression analysis with the data for the dependent variable (Y) in column 4 and with the corresponding data for the one independent variable (X) in column 3. Here is what it (Minitab) gave me:

The regression equation is
weight = - 111 + 4.39 height

Predictor   Coef     Stdev   t-ratio
Constant    -111.2   126.5   -0.88
height      4.393    1.698   2.59

s = 19.61    R-sq = 19.3%    R-sq(adj) = 16.4%

Analysis of Variance
SOURCE      DF   SS        MS
Regression  1    2576.6    2576.6
Error       28   10772.1   384.7
Total       29   13348.7

Unusual Observations
Obs.  height  weight  Fit     Stdev.Fit  Residual  St.Resid
6     79.0    190.00  235.87  8.44       -45.87    -2.59R
27    75.0    270.00  218.30  3.68       51.70     2.68R

Let's take that output one piece at a time. The first and most important finding is the equation of the best-fitting regression line. It is of the form Y' = a + bX, where a is the Y intercept and b is the slope. [You might remember that from high school algebra. Y' is the "fitted Y", not the actual Y.] If you want to predict a player's weight from his height, just plug in his height in inches and you'll get his predicted weight in pounds. For example, consider a player who is 6 feet 2 inches (= 74 inches) tall. His predicted weight is -111 + 4.39 (74) = 214 pounds. Do you know that will be his exact weight? Of course not; it is merely approximately equal to the average of the weights for the six players in the sample who are approximately 74 inches tall. (See the plot, above.) But it is a heck of a lot better than not knowing his height.

The next section of the output provides some clarifying information (e.g., that the intercept is actually -111.2 and the slope is 4.393) and the first collection of inferential statistics. The intercept is not statistically significantly different from zero (trust me that a t-ratio of -0.88 produces a p-value greater than .05), but the slope is (big t, small p; trust me again). The intermediate column (Stdev) is the so-called "standard error" for the intercept and the slope, respectively. When combined with the coefficient itself, it produces the t-ratio, which in turn produces the [unindicated, but less than .05] p-value. Got that?

The third section provides a different kind of standard error (see below)...the s of 19.61; the squared correlation in percentage form (check that .193 is the square of .439); and the "adjusted" squared correlation of 16.4%. The squared correlation needs to be adjusted because you are trying to fit a regression line for two variables and only 30 points. [Think what would happen if you had just two points. Two points determine a straight line, so the line would fit perfectly but unsatisfactorily.] The square root of the adjusted squared correlation, which is equal to .405, might very well provide "the best" point estimate of the population correlation (see above).

The "Analysis of Variance" section reinforces (actually duplicates) the information in the previous two sections and would take us too far afield with unnecessary jargon, so let's ignore that for the time being. [Notice, however, that if you divide the MS for Regression by the MS for Error, you get the square of the t-ratio for the slope. That is not accidental. What is accidental is the similarity of the correlation of .439 to the slope of 4.393.
The slope is equal to the correlation multiplied by the ratio of the standard deviation of Y to the standard deviation of X (neither is indicated, but trust me!), which just happens to be about 10.]

The last section is very interesting. Minitab has identified two points that are pretty far off the best-fitting regression line, one above the line (Obs. 27) and one below the line (Obs. 6), in case you think they should be deleted. [I don't.]

Going back to predicting weight from height for a player who is 74 inches tall: The predicted weight was 214 pounds, but that prediction is not perfect. How good is it? If you take the 214 pounds and lay off the standard error (the second one) of 19.61 a couple of times on the high side and a couple of times on the low side you get a 95% confidence interval (yes, a different one) that ranges from 214 - 2(19.61) to 214 + 2(19.61), i.e., from 175 pounds to 253 pounds. That doesn't narrow things down very much (the range of weights in the entire population is from 150 pounds to 290 pounds), but the prediction has been based on a very small sample.

Why the term "regression"? The prediction just carried out illustrates the reason. That player's height of 74 inches is below the mean for all 30 players (you can see that from the plot). His predicted weight of 214 pounds is also below the mean weight (you can see that also), but it is closer to the mean weight than his height is to the mean height (how's that for a mouthful?), so his predicted weight is "regressed" toward the mean weight.

The matter of "regression to the mean" comes up a lot in research in general. For example, in using a pre-experimental design involving a single group of people (no control group) measured twice, once before and once after the intervention, if that group's performance on the pretest is very low compared to the mean of a larger group of which it is a part, its performance on the posttest will usually be closer to the posttest mean than it was to the pretest mean. [It essentially has nowhere to go but up, and that is likely to be mis-interpreted as a treatment effect, whereas it is almost entirely attributable to the shape of the plot, which is a function of the less-than-perfect correlation between pretest and posttest. Think about it!]

Sample size

For my random-sample example I chose to take a sample of 30 players. Why 30? Why indeed? What size sample should I have taken? Believe it or not, sample size is arguably the most important consideration in all of inferential statistics (but you wouldn't know it from actual practice, where many researchers decide on sample sizes willy-nilly, and often not random sample sizes at that).

Briefly put, the appropriate sample size depends upon how far wrong you're willing to be when you make a statistical inference from a sample to the population from which the sample has been drawn. If you can afford to be wrong by a lot, a small sample will suffice. If you insist on never being wrong you must sample the entire population. The problem becomes one of determining what size sample is tolerably small but not so small that you might not learn very much from it, yet not so large that it might approximate the size of the entire population. [Think of the appropriate sample size as a "Goldilocks" sample size.] So, what should you do? It depends upon whether you want to use interval estimation or hypothesis testing.
For interval estimation there are formulas and tables available for determining the appropriate sample size for a given tolerable width for the confidence interval. For hypothesis testing there are similar formulas and tables for determining the appropriate sample size for tolerable probabilities of making Type I Errors and Type II Errors. You can look them up, as Casey Stengel used to say. [If you don't know who Casey Stengel is, you can look that up also!]

Multiple regression

I haven't said very much about age. The correlation between height and weight for the entire population is .532. Could that correlation be improved if we took into account the players' ages? (It really can't be worse even if age doesn't correlate very highly with weight; its actual correlation with weight for these data is only .158, and its correlation with height is -.074...a negative, though even smaller, correlation.) I went back to the full data and gave Minitab the command "regr c4 2 c3 c5", where the weights are in Column 4, the heights are in Column 3, and the ages are in Column 5. Here is what it returned (in part):

The regression equation is
weight = - 193 + 0.965 age + 4.97 height

1033 cases used; 1 case contains missing values

Predictor   Coef      Stdev   t-ratio
Constant    -192.66   17.89   -10.77
age         .9647     .1249   7.72
height      4.9746    .2341   21.25

s = 17.30    R-sq = 32.2%    R-sq(adj) = 32.1%

We can ignore everything but the regression equation (which, for those of you who are mathematically inclined, is the equation of a plane, not a line), the s, and the R-sq, because we have full-population data. Taking the square root of the R-sq of .322 we get an R of .567, which is higher than the r of .532 that we got for height alone, but not much higher. [It turns out that R is the Pearson r correlation between Y and Y'. Nice, huh?] We can also use the new regression equation to predict weight from height and age, with a slightly smaller standard error of 17.30, but let's not. I think you get the idea. The reason it's called multiple regression is because there is more than one independent variable.

Summary

That's about it (for now). I've tried to point out some important statistical concepts that can be squeezed out of the baseball data. Please let me know (my e-mail address is tknapp5@juno.com) if you think of others. And by all means (another bad pun) please feel free to ask me any questions about this "module" and/or tell me about all of the things I said that are wrong.

Oh, one more thing: It occurred to me that I never told you how to calculate a Pearson r. In this modern technological age most of us just feed the data into our friendly computer program and ask it to do the calculations. But it's possible that you could find yourself on a desert island some time without your computer and have a desire to calculate a Pearson r. The appendix that follows should help. (And you might learn a few other things in the process.)

APPENDIX

[My thanks to Joe Rodgers for the great article he co-authored with Alan Nicewander, entitled "Thirteen ways to look at the correlation coefficient", and published in The American Statistician, 1988, volume 42, number 1, pages 59-66. I have included some of those ways in this appendix.]

Here are several mathematically equivalent formulas for the Pearson r (actually ρ, since these formulas are for population data):

1. ρ = Σ(zX zY) / n

This is the best way to "think about" Pearson r. It is the average (mean) product of standardized variable X and standardized variable Y.
A standardized variable z, e.g., zX, is equal to the raw variable minus the mean of the variable, divided by the standard deviation of the variable, i.e., zX = (X - MX)/sX. This formula for r also reflects the product-moment feature (the product is of the z's; a moment is a mean). Since X and Y are usually not on the same scale, what we care about is the relative relationship between X and Y, not the absolute relationship. It is not a very computationally efficient way of calculating r, however, since it involves all of those intermediate calculations that can lead to round-off errors.

2. ρ = 1 - (1/2)[s² of (zY - zX)]

This is a variation of the previous formula, involving the difference between "scores" on the standardized variables rather than their product. If there are small differences, the variance of those differences is small and the r is close to +1. If the differences are large (with many even being of opposite sign), the variance is large and the r is close to -1.

3. ρ = [nΣXY - (ΣX)(ΣY)] / √{[nΣX² - (ΣX)²][nΣY² - (ΣY)²]}

(N.B.: the square root is taken of the product of the bracketed terms in the denominator)

This formula looks much more complicated (and it is, in a way), but involves only the number of observations, the actual X and Y data, and their squares. In "the good old days" before computers, I remember well entering the X's in the left end of the keyboard of a Monroe or Marchant calculator, entering the Y's in the right end, pushing a couple of buttons, and getting ΣX, ΣY, ΣX², ΣY², and 2ΣXY in the output register all in one fell swoop! [That was quite an accomplishment then.]

4. ρ = cosine(θ), where θ is the angle between a vector for the X variable and a vector for the Y variable in the n-dimensional "person space" rather than the two-dimensional "variable space". [If you don't know anything about trigonometry or multi-dimensional space, that will mean absolutely nothing to you.]

5. If you really want a mathematical challenge, try using the formula in the following excerpt from an article I wrote about 25 years ago (in the Journal of Educational Statistics, 1979, volume 4, number 1, pages 41-58). You'll probably have to read a lot of that article in order to figure out what a gsm is, what all of those crazy symbols are, etc.; but as I said, it's a challenge!

Dichotomization: How bad is it?

I love percentages (Knapp, 2010). I love them so much I'm tempted to turn every statistical problem into an analysis of percentages. But is that wise, especially if you have to dichotomize continuous variables in order to do it? Probably not (see, for example, Cohen, 1983; Streiner, 2002; MacCallum, et al., 2002; Owen & Froman, 2005). But the more important question is: How much do you lose by so doing? What follows is an attempt to compare "undichotomized" variables with dichotomized variables, with special attention given to situations where the relationship between two variables is of primary concern.

An example

In conjunction with the high school Advanced Placement Program in Statistics, Bullard (n.d.) gathered data on 866 major league baseball players, including their heights (x) and their weights (y). For this "population" of 866 persons the "ordinary" Pearson r correlation is .609. I asked Minitab to dichotomize the two variables at their medians and calculate the Pearson correlation between the dichotomized height and the dichotomized weight (this is sometimes called a phi coefficient). The result was a correlation of .455.
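The dichotomize-at-the-median step in that example is easy to reproduce outside Minitab. A minimal Python sketch, assuming Bullard's file has been saved locally with columns named height and weight (the file name, the column names, and the "at or above the median" cut are my assumptions, not details given in the paper):

    import pandas as pd

    # Assumed local copy of Bullard's roster; file and column names are illustrative.
    players = pd.read_csv("bullard_mlb.csv")

    r_original = players["height"].corr(players["weight"])   # about .609 in the text

    # Median splits: code 1 for at-or-above the median, 0 for below it
    tall = (players["height"] >= players["height"].median()).astype(int)
    heavy = (players["weight"] >= players["weight"].median()).astype(int)

    # The Pearson r between two 0/1 variables is the phi coefficient
    phi = tall.corr(heavy)                                    # about .455 in the text
    print(round(r_original, 3), round(phi, 3))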
As you can see, the correlation between the original heights and weights is greater than the correlation between the dichotomized heights and weights. Intuitively that is to be expected, because you're throwing away potentially useful information by dichotomizing.

I then carried out what I like to call "a poor man's Monte Carlo simulation". I asked Minitab to draw 30 random samples each of size 30 from the population of the 866 original heights and weights, and 30 other random samples from the population of the 866 dichotomized heights and weights (sampling was "without replacement" within sample and "with replacement" between samples). For each of those 60 samples I also asked Minitab to calculate the correlation between height and weight, and to summarize the data. Here's what I got:

                     original   dichotomy
Size of sample       30         30
Number of samples    30         30
Mean r               .6275      .4374
Median r             .6580      .4440
SD (std. error)      .0960      .1973
Minimum r            .4240      .0000
Maximum r            .7570      .7360

Not only are the correlations lower for the dichotomized variables, but the standard error is higher, meaning that the dichotomization would produce wider confidence intervals and lower power for significance tests regarding the correlation between height and weight.

Therefore, case closed? Don't ever dichotomize? Well, not quite. First of all, the above demonstration is not a proof. Maybe the dichotomized variables don't "work" as well as the original variables for this dataset only. (Irwin & McClelland, 2003, do provide a proof of the decrease in predictability for a special case.) Secondly, although it's nice to have high correlations between variables in order to be able to predict one from the other, the primary objective of research is to seek out truth, and not necessarily to maximize predictability. (The dichotomized version of a variable might even be the more valid way to measure the construct of interest!) Finally, there are some known situations where the reverse is true, and there are some frequency distributions of continuous variables that are so strange they cry out for dichotomization. Read on.

Some counter-examples

In their critique of dichotomization, Owen and Froman cite a study by Fraley and Spieker (2003) in which the correlation between dichotomized variables was higher than the correlation between the original continuous variables. Maxwell and Delaney (1993) showed that the interaction effect of two dichotomized variables in an ANOVA could be greater than the effect of their continuous counterparts in a multiple regression analysis. And while I was playing around with the baseball data I had Minitab take a few random samples of size 30 each and calculate the correlation between height and weight for the undichotomized and the dichotomized variables for the same sample. For one of them I got a correlation of .495 for continuous heights and weights and a correlation of .535 for their associated dichotomies. A few outliers "destroyed" the correlation for the original variables (.609 in the population), while all of those 1,1 and 0,0 combinations "enhanced" the correlation for the dichotomized variables (.455 in the population). It can happen. That's one of the vagaries of sampling.

Frequency distributions

In their article, MacCallum, et al. (2002) acknowledged that dichotomization might be justified for the frequency distribution of number of cigarettes smoked per day, with spikes at 0, 10, 20, and 40, and lots of holes in-between multiples of 10. I displayed an actual such distribution in my percentages book (Knapp, 2010).
I also displayed a similarly strange distribution for the number of cards successfully played in the solitaire game of Klondike. Both of those distributions were strong candidates for dichotomization.

Age

If there ever is a variable that is subject to dichotomization more than age is, I don't know what that variable might be. When people are interested in a research question such as "What is the relationship between age and political affiliation?", more often than not they choose to either break up the age range into groupings such as "Generation X", "Baby Boomers", and the like, or dichotomize it completely into "young" and "old" by cutting somewhere. Chen, Cohen, and Chen (2006) have shown that not only do you lose information by dichotomizing age when it is an independent variable (when else?!), but one of the statistics of greatest use in a field such as epidemiology, the odds ratio, turns out to be biased: The further the cutpoint is from the median of the continuous distribution, the greater the bias, with the net effect that the odds ratio is artificially larger. As indicated above, although it's nice to get high correlations between variables, including high odds ratios, the quest should be for truth, not necessarily for predictability.

So does that mean that age should never be dichotomized? Again, not quite. Chen, Cohen, and Chen admit that there are some situations where age dichotomization is defensible, e.g., if subjects are intentionally recruited in age groups that are hypothesized to differ on some dependent variable.

Those terrible Likert-type scales

I don't know about you, but I hate the 4, 5, 6, or 7-point ordinal scales that permeate research in the social sciences. If they can't be avoided in the first place (by using interval scales rather than ordinal scales or by using approaches that are tailor-made for ordinal variables...see, for example, Agresti, 2010), then they certainly should be dichotomized. Do we really need the extra sensitivity provided by, say, the typical "Strongly agree", "Agree", "Undecided", "Disagree", "Strongly Disagree" scales for measuring opinions? Isn't a simple "Agree" vs. "Disagree" sufficient? For a Likert-type scale with an even number of scale points I suggest dichotomizing into "low" (0) and "high" (1) groups by slicing in the center of the distribution. If it has an odd number of scale points I suggest dichotomizing by slicing through the middle category, randomly allocating half of the observations in that category to "low" and the other half to "high". There; isn't that easy?

A final comment

I can't resist ending this paper with a quotation from a blogger who was seeking statistical help (name of blogger and site to remain anonymous) and asked the following question: "Can anyone tell me how to dichotomize a variable into thirds?" Oy.

References

Agresti, A. (2010). The analysis of ordinal categorical data (2nd ed.). New York: Wiley.

Bullard, F. (n.d.). Excel file of data for 866 Major League Baseball players. http://apcentral.collegeboard.com/apc/public/repository/bullard_MLB_list.xls

Chen, H., Cohen, P., & Chen, S. (2006). Biased odds ratios from dichotomization of age. Statistics in Medicine, 26, 3487-3497.

Cohen, J. (1983). The cost of dichotomization. Applied Psychological Measurement, 7, 249-253.

Fraley, R. C., & Spieker, S. J. (2003). Are infant attachment patterns continuously or categorically distributed? A taxometric analysis of strange situation behavior. Developmental Psychology, 39, 387-404.
Irwin, J. R., & McClelland, G. H. (2003). Negative consequences of dichotomizing continuous predictor variables. Journal of Marketing Research, 40, 366-371.

Knapp, T. R. (2010). Percentages: The most useful statistics ever invented. Included in the present work (see pages 26-115).

MacCallum, R. C., Zhang, S., Preacher, K. J., & Rucker, D. D. (2002). On the practice of dichotomization of quantitative variables. Psychological Methods, 7 (1), 19-40.

Maxwell, S. E., & Delaney, H. D. (1993). Bivariate median splits and spurious statistical significance. Psychological Bulletin, 113 (1), 181-190.

Owen, S. V., & Froman, R. D. (2005). Why carve up your continuous data? Research in Nursing & Health, 28, 496-503.

Streiner, D. L. (2002). Breaking up is hard to do: The heartbreak of dichotomizing continuous data. Canadian Journal of Psychiatry, 47, 262-266.

LEARNING DESCRIPTIVE STATISTICS THROUGH BASEBALL

Introduction

About 17 years ago I wrote a little book entitled Learning statistics through playing cards (Knapp, 1996), in which I tried to explain the fundamental concepts of statistics (both descriptive and inferential) by using an ordinary deck of playing cards for generating the numbers. In 2003 Jim Albert wrote Teaching statistics through baseball. What follows can be thought of as a possible sequel to both books, with its emphasis on descriptive statistics and the tabletop dice game "Big League Baseball".

The reason I am restricting this presentation to descriptive statistics is that there is no random sampling in baseball (more about this later), and random sampling is the principal justification for generalizing from a sample (a part) to a population (the whole). But it has been said, by the well-known statistician John Tukey and others, that there has been too much emphasis on inferential statistics anyhow. See his classic 1977 book, Exploratory data analysis (EDA), and/or any of the popular computer packages that have implemented EDA.

The price you might have to pay for reading this monograph is not in money (it's free) but in the time and effort necessary to understand the game of baseball. (Comedian Bob Newhart's satirical routine about the complications of baseball is hilarious!) In the next couple of sections I will provide the basics. If you think you need to know more, watch a few games at your local Little League field or a few Major League games on TV (especially if Los Angeles Dodgers broadcaster Vin Scully is the announcer). Enjoy!

How the game is played

As many of you already know, there are nine players on each team and the teams take turns batting and fielding. The nine players are:

1. The pitcher (who throws the ball that the batter tries to hit)
2. The catcher (who catches any ball the batter doesn't hit and some others)
3. The first baseman (who is "the guardian" of the base that the batter must first run to after hitting the ball)
4. The second baseman (the "guardian" of the next base)
5. The shortstop (who helps to guard second base, among other things)
6. The third baseman (the "guardian" of that base)
7. The left fielder (who stands about 100 feet behind third base and hopes to catch any balls that are hit nearby)
8. The center fielder (who is positioned similarly behind second base)
9. The right fielder (likewise, but behind first base).

The object of the game as far as the batters are concerned is to run counter-clockwise around the bases (from first to second to third, and back to "home plate" where the ball is batted and where the catcher is the "guardian").
The object of the game as far as the fielders are concerned is to prevent the batters from doing that.

Some specifics:

1. There are nine "innings" during which a game is played. An inning consists of each team having the opportunity to bat until there are three "outs", i.e., three unsuccessful opportunities for the runners to reach the bases before the ball (thrown by the fielders) gets there. If the runner reaches the base before the ball does, he is credited with a "hit".

2. Each batter can choose to swing the bat or to not swing the bat at the ball thrown by the pitcher. If the batter chooses to swing, he has three opportunities to try to bat the ball; failure on all three opportunities results in three "strikes" and an "out". If the batter chooses to not swing and the ball has been thrown where he should have been able to bat it, that also constitutes a strike. But if he chooses to not swing and the pitcher has not thrown the ball well, the result is called a "ball". He (the batter) is awarded first base if he refrains from swinging at four poor pitches. (Check out what Bob Newhart has to say about why there are three strikes and four balls!) In reality there are often many combinations of swings and non-swings that result in successes or failures. For example, it is quite common for a batter to swing and miss at two good pitches, to not swing at two bad pitches, and to eventually swing, bat the ball, run toward first base, and get there either before or after the ball is caught and thrown to the first baseman.

3. The team that scores the more "runs" (encirclings of the bases by the batters) after the nine innings are played is the winner.

There are several other technical matters that I will discuss when necessary.

What does this have to do with statistics?

My favorite statistic is a percentage (see Knapp, 2013), and percentages abound in baseball. For example, one matter of great concern to a batter is the percentage of time that he bats a ball and arrives safely at first base (or even beyond) before the ball gets there. If a batter gets one successful hit every four times at bat, he is said to have a "batting average" of 1/4 or 25% or .250. (In baseball such averages are always carried out to three decimal places.) That's not very good, but is fairly typical. The average batting average of the regular players in the Major Leagues (there are two of them, the American League and the National League, with 15 teams in each league) has remained very close to .260 for many, many years. (See the late paleontologist Stephen Jay Gould's 1996 book, Full house, and his 2004 book, Triumph and tragedy in Mudville.) Similarly, a matter of great concern to the pitcher is that same percentage of the time that a batter is successful against him. (One of the neat things about baseball is the fact that across all of the games played, "batting average of" for the batters must be equal to "batting average against" for the pitchers.) Batters who bat successfully much higher than .250 and pitchers who hold batters to averages much lower than .250 are usually the best players.

Other important percentages are those for the fielders. If they are successful in "throwing out" the runners 95 percent or more of the time (fielding averages of .950 or better) they are doing their jobs very well.

Some other statistical aspects of baseball

1. In the previous section I pointed out that the average batting average has remained around .260.
The average standard deviation (the most frequently used measure of variability...see below) has decreased steadily over the years. It's now approximately .030. (See Gould, 1996 and 2004, about that also.)

2. One of the most important concepts in statistics is the correlation between two variables such as height and weight, age and pulse rate, etc. Instead of, or in addition to, "batting average against", baseball people often look at a pitcher's "earned run average", which is calculated by multiplying by nine the number of earned runs given up and dividing by the number of innings pitched. (See Charles M. Schulz's 2004 book, Who's on first, Charlie Brown?, page 106, for a cute illustration of the concept.) Those two variables, "batting average against" and "earned run average", correlate very highly with one another, not surprisingly, since batters who don't bat very well against a pitcher are unlikely to score very many runs against him.

3. The matter of "weighting" certain data is very common in statistics and especially common in baseball. For example, if a player has a batting average of .250 against left-handed pitchers and a batting average of .350 against right-handed pitchers, it doesn't necessarily follow that his overall batting average is .300 (the simple average of the two averages), since he might not have batted against left-handed pitchers the same number of times as he did against right-handed pitchers. This is particularly important in trying to understand something called Simpson's Paradox (see below).

4. "Unit of analysis" is a very important concept in statistics. In baseball the unit of analysis is sometimes the individual player, sometimes the team, sometimes the league itself. Whenever measurements of various aspects of baseball are taken, they should be independent of one another. For example, if the team is the unit of analysis and we find that there is a strong correlation between the number of runs scored and the number of hits made, the correlation between those same two variables might be higher and might be lower (it's usually lower) if the individual player is taken as the unit of analysis, and the number of "observations" (pieces of data) might not be independent in the latter case, since player is "nested" within team.

5. "Errors" arise in statistics (measurement errors, sampling errors, etc.) and, alas, are unfortunately also fairly common in baseball. For example, when a ball is batted to a fielder and he doesn't catch it, or he catches it and then throws wildly to the baseman, thereby permitting the batter to reach base, that fielder is charged with an error, which can sometimes be an important determinant of a win or a loss.

A "simulated" game

In his 2003 book, Jim Albert displays the following tables for simulating a game of baseball, pitch by pitch, using a set of three dice (one red die and two white dice). This approach, called [tabletop] Big League Baseball, was marketed by Sycamore Games in the 1960s.

Result of rolling the red die in Big League Baseball:

Red die   Pitch result
1, 6      Ball in play
2, 3      Ball
4, 5      Strike

Result of rolling the two white dice in Big League Baseball:
            Second die
First die   1        2       3       4      5       6
1           Single   Out     Out     Out    Out     Error
2           Out      Double  Single  Out    Single  Out
3           Out      Single  Triple  Out    Out     Out
4           Out      Out     Out     Out    Out     Out
5           Out      Single  Out     Out    Out     Single
6           Error    Out     Out     Out    Single  Home run

In the Appendix I have inserted an Excel file that lists the results of over 200 throws I made of those dice in order to generate the findings for a hypothetical game between two teams, call them Team A and Team B. [We retirees have all kinds of time on our hands to do such things, but I "cheated" a little by using RANDOM.ORG's virtual dice roller rather than actually throwing one red die and two white dice.] Those findings will be used throughout the rest of this monograph to illustrate percentages, correlations, and other statistical concepts that are frequently encountered in real-life research. "Big League Baseball" does not provide an ideal simulation, as Albert himself has acknowledged (e.g., it regards balls and strikes as equally likely, which they are not), but as I like to say, "it's close enough for government work". You might want to print out the raw data in the Appendix in order to trace where all of the numbers in the next several sections come from.

Some technical matters regarding the simulated game:

I previously mentioned that balls and strikes are treated as equally likely, although they are not. Similarly, the probabilities associated with white die 1 and white die 2 do not quite agree with what actually happens in baseball, but once again they're close enough.

You might have noticed that Batter A1 followed Batter A9 after each of the latter's appearances. B1 likewise followed B9, etc. throughout the game. But it is quite often the case that players are replaced during a game for various reasons (injury, inept play, etc.). Once a player is replaced he is not permitted to re-enter the game (unlike in basketball and football).

There were a couple of occasions where a batter swung at a pitch when he already had three straight balls. That is unusual. Most batters would prefer to not swing at that fourth pitch, hoping it might also be a ball.

There was at least one occasion where a runner advanced one base when the following batter got a single. Sometimes runners can advance more than one base in such situations.

The various permutations in the second table do not allow for somewhat unusual events such as a batter actually getting struck by a pitch or a batter hitting into a double play in which both he and a runner already on base are out.

Most of the time a ball in play (a roll of a 1 or a 6 with the red die) resulted in an out, which is in fact usually the case. We don't know, however, how those outs were made. For example, was the ball hit in the air and caught by the left fielder? Was the ball hit on the ground to the second baseman who then threw the ball to the first baseman for the out? Etc. As far as the score goes, it doesn't really matter. But it does matter to the players and to the manager (every team has a manager who decides who is chosen to play, who bats in what order, and the like).

We also don't know who made the errors. As far as the score goes, that doesn't really matter either. But it does matter to the individual players who made the errors, since it affects their fielding averages.

A word needs to be said about the determination of balls and strikes. In the game under consideration, if a strike was the result of a pitch thrown by the pitcher we don't know if the batter swung and missed (which would automatically be a strike) or if he took a pitch at which he should have swung.
It is the umpire of the game (every game has at least one umpire) who determines whether or not a pitch was good enough for a batter to hit.

Although it didn't happen in this game, sometimes the teams are tied after nine innings have been played. If so, one or more innings must be played until one of the teams gets ahead and stays ahead. Again, unlike in basketball and football, there is no time limit in baseball.

If the team that bats second is ahead after 8 innings there is no reason for them to bat in the last half of the 9th inning, since they have already won the game. That also didn't happen in this particular game, but it is very common.

Basic Descriptive Statistics

1. Frequency distributions

A frequency distribution is the most important concept in descriptive statistics. It provides a count of the number of times that each of several events took place. For example, in the simulated data in the Appendix we can determine the following frequency distribution of the number of hits made by the players on Team A in their game against Team B:

Number of hits   Frequency
0                3
1                2
2                4
3 or more        0

Do you see how I got those numbers? Players A5, A6, and A7 had no hits; Players A4 and A9 had one each (A4 had a double in the seventh inning; A9 had a single in the second inning); and Players A1, A2, A3, and A8 had two each (A1 had a double in the first inning and a home run in the third inning; A2 had a single in the third inning and a single in the seventh inning; A3 had a single in the seventh inning and a single in the ninth inning; and A8 had a single in the second inning and a single in the seventh inning). Check those by reading line-by-line for each of those players in the Appendix.

2. Measures of "central tendency"

The arithmetic mean (or, simply, the mean, i.e., the traditional "average"). There was a total of 10 hits made by the 9 players, yielding a mean of 10/9 = 1.111 hits.

The median. Putting the number of hits in rank order we have 0, 0, 0, 1, 1, 2, 2, 2, and 2. The middle number in that set of nine numbers is 1, so the median is 1 hit.

The mode. The most frequently occurring number of hits is 2 (there are four of them), so the mode is 2 hits.

Others: There is something called the geometric mean. It is calculated by finding the "nth" root of the product of the "n" events, where n is the total number of events (which are called "observations" in statistical lingo). There is also the harmonic mean: the reciprocal of the mean of the reciprocals of the n observations. Neither of those comes up very often, especially in baseball.

The mean is usually to be preferred when the actual magnitude of each observation is relevant and important, especially when the frequency distribution is symmetric (see below). The median is usually to be preferred when all of the actual magnitudes are less important and the frequency distribution is skewed. The mode is usually not reported because there is often more than one mode (in which case it can be said that there is no mode), but the case of two modes is of some interest. (See Darrell Huff's delightful 1954 book, How to lie with statistics, for some hilarious examples where the mean, the median, or the mode is to be preferred; and see my article about bimodality, Knapp, 2007.)

3. Measures of variability

The range. The fewest number of hits is 0, and the greatest number of hits is 2, so the range is 2 - 0 = 2 hits.

The variance. This will be complicated, so hang on to your ballcaps.
The variance is defined as the mean of the squared differences ("deviations") from the mean. [How's that for a difficult sentence to parse!] The three players who had no hits have a 0 - 1.111 = -1.111 difference from the mean. The square of -1.111 is 1.234 [trust me or work it out yourself]. Since there are three of those squared differences, their "contribution" to the variance is 3 x 1.234 = 3.702 squared hits. (More about "squared hits" in the next section.) The two players who had one hit each have a 1 - 1.111 = -.111 difference, which when squared is .012, and when subsequently multiplied by 2 is .024 squared hits (their contribution to the variance). And the four players who had two hits each have a difference of 2 - 1.111 = .889, which when squared is .790 and when multiplied by 4 is 3.160 squared hits. Adding up all of those squared differences we have 3.702 + .024 + 3.160 = 6.886 squared hits. Dividing that by 9 (the number of players on Team A), we get a mean squared difference of .765 squared hits. That's the variance for these data. Whew!

The standard deviation. As you can see, the variance comes out in the wrong units (squared hits), so to get back to the original units we have to "unsquare" the .765, i.e., take its square root, which is .875 hits. That's the standard deviation. It provides an indication of the "typical" difference between a measurement and the mean of all of the measurements. [Would you believe that some people divide the sum of the squared deviations from the mean by one less than the number of observations rather than the number of observations, when calculating the variance and the standard deviation? The reason for that is very complicated, alas, but need not concern us, since it has nothing to do with descriptive statistics.]

The mean [absolute] deviation. Rather than going through all of that squaring and unsquaring it is sometimes better to take the absolute value of each of the differences and find the mean of those. Doing so here, we would have 3 x 1.111 + 2 x .111 + 4 x .889 = 3.333 + .222 + 3.556 = 7.111, divided by 9, which is .790 hits. This statistic doesn't come up very often, but it should.

4. Skewness and kurtosis statistics

There are a couple of other descriptive statistics that come up occasionally. One is an indicator of the extent to which a frequency distribution is symmetric (balanced), and is called a measure of the skewness of the distribution. ("Outliers", i.e., unusual events, are a particularly bothersome source of skewness.) Another descriptive statistic is an indicator of the extent to which a frequency distribution has most of the events piled up at a particular place (usually around the middle of the distribution), and is called a measure of the kurtosis of the distribution. The procedures for calculating such measures are complicated (even more so than for the variance and the standard deviation). Suffice it to say that the above distribution is slightly skewed and not heavily concentrated in one place.

5. A measure of relationship: Pearson product-moment correlation coefficient

Suppose we were interested in the question: What is the relationship between the number of pitches thrown to a batter and the number of hits he gets? The data for Team A are the following:

Player   Number of pitches (X)   Number of hits (Y)
A1       9                       2
A2       8                       2
A3       16                      2
A4       11                      1
A5       7                       0
A6       11                      0
A7       16                      0
A8       17                      2
A9       7                       1

The best way to describe and summarize such data is to construct a "scatter diagram", i.e., to plot Y against X.
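Before turning to the plot, the hand calculations above are easy to check with a few lines of code. A minimal Python sketch using the Team A hit counts (dividing by n, not n - 1, to match the definitions used in this section):

    import statistics as st

    hits = [2, 2, 2, 1, 0, 0, 0, 2, 1]    # hits for players A1 through A9
    n = len(hits)

    mean = sum(hits) / n                                 # 1.111
    variance = sum((h - mean) ** 2 for h in hits) / n    # .765 "squared hits"
    sd = variance ** 0.5                                 # .875 hits
    mad = sum(abs(h - mean) for h in hits) / n           # mean absolute deviation: .790 hits

    print(round(mean, 3), st.median(hits), st.mode(hits))    # 1.111, 1, 2
    print(round(variance, 3), round(sd, 3), round(mad, 3))   # 0.765, 0.875, 0.79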
I "asked" my favorite statistical software, Minitab, to do this (I have an old but very nice version.) Here's what I got: Y - - * * * - 1.80+ - - - - 1.20+ - - * * - - 0.60+ - - - - -0.00+ * * * * --------+---------+---------+---------+---------+--------X 8.0 10.0 12.0 14.0 16.0 [Each asterisk is a data point.] I then asked Minitab to get the correlation between X and Y. It replied: Correlation of X and Y = -0.223 Next, and last (at least for now), I asked Minitab to "regress" Y on X in order to get an equation for predicting Number of hits from Number of pitches. It replied with this (among other things): The regression equation is Y = 1.47 - 0.0513 X s = 0.9671 R-sq = 5.0% R-sq(adj) = 0.0% Interpretation: From the plot you can see that there is not a very strong relationship between the two variables X and Y. (The points are all over the place.) The correlation (specifically, the Pearson product-moment correlation coefficient) is actually negative (and small), indicating a slight inverse relationship, i.e., as X increased Y decreased and as X decreased Y increased. The "Pearson r" is a measure of the degree of linear relationship between two variables (how close do the points come to falling on a straight line?) and ranges between -1 and +1, with the negative values indicative of an inverse relationship and the positive values indicative of a direct relationship. The R-sq of 5.0% is the square, in percentage terms, of the Pearson r. (.223 multiplied by itself is approximately .05.) It suggests that about 5% of the variance of Y is associated with X (but not necessarily causally...see next paragraph). The R-sq (adj) of 0% is an attempt to adjust the R-sq because you are trying to "fit" a line to only nine data points. (If there were just two data points you'd have to get a perfect fit, since two points determine a line.) For all intents and purposes, the fit in this case is bad. Be particularly careful about interpreting any sort of relationship as causal. Even if the Pearson r had been 1.000 and the prediction of Y from X had been perfect, it would not necessarily follow that X caused Y, i.e., that having a certain number of pitches thrown to him would "make" a batter get a certain number of hits. The old adage that "correlation is not causation" is true. There are lots of other measures of the relationship between two variables , e.g., Spearmans rank correlation and Goodman & Kruskals gamma and lambda, but they too come up only occasionally. Some descriptive statistics for Team B: Frequency distribution of hits: Number of hits Frequency 5 4 2 or more 0 Players B1, B3, B4, B5, and B6 had no hits; B2 had a double in the first inning; B7 had a double in the seventh inning, B8 had a triple in the fourth inning, and B9 had a single in the third inning. Their mean number of hits was 4/9 = .444. The standard deviation was .497 So Team A had more hits (their mean was 1.111 and their standard deviation was .765) than Team B, which might be the reason why they won the game. But is the difference in the two means statistically significant? See the next section. No inferential statistics for these data My liberal colleagues would carry out a t test of the significance of the difference between the 1.111 for Team A and the .444 for Team B. I wouldnt. Heres why: 1. Although the data for the two teams constitute samples of their baseball prowess (assuming that they play more than one game against one another), there is nothing to indicate that the data we have come from random samples. 
Random sampling is a requisite for inferential statistics in general, and for the t test in particular. Because the example is a hypothetical one, the RANDOM.ORG program was used to generate the data, but it does not follow that every baseball game played by a team is a random sample of its typical performance. [Do you understand that?] This is a very important matter. All we can say is that Team A had an average number of hits that was greater than the average number of hits obtained by Team B on this particular occasion.

Simpson's Paradox

It is well known in mathematical statistics that a percentage A can be greater than a percentage B for one category of a dichotomy [a dichotomy is a variable having just two categories], a percentage C can be greater than a percentage D for the other category of the dichotomy, yet the pooled percentage of A and C can be less than the pooled percentage of B and D. It all depends upon the numbers that contribute to the various percentages. In an article I wrote several years ago (Knapp, 1985) I gave an actual example of a batter X who had a higher batting average than another batter Y against left-handed pitching, had a higher batting average against right-handed pitching also, but had an overall lower batting average. [I wasn't the first person to point that out; many others have done so.] A few years later I sent a copy of that article to the Cooperstown NY Baseball Hall of Fame, along with a transitive example where X > Y > Z against both left- and right-handed pitchers but X < Z overall.
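To make the arithmetic of the reversal concrete, here is a minimal sketch with made-up numbers of hits and at-bats (they are not the figures from the 1985 article, just an illustration of how pooling can flip a comparison):

    # Hypothetical (made-up) hits and at-bats, chosen so that batter X out-hits
    # batter Y against both kinds of pitching but not overall.
    x_lhp = (4, 10)     # X vs. left-handers:  4 hits in 10 at-bats  -> .400
    y_lhp = (35, 100)   # Y vs. left-handers:  35 hits in 100 at-bats -> .350
    x_rhp = (30, 100)   # X vs. right-handers: 30 hits in 100 at-bats -> .300
    y_rhp = (2, 10)     # Y vs. right-handers: 2 hits in 10 at-bats  -> .200

    def average(hits, at_bats):
        return hits / at_bats

    # X is better in each category...
    assert average(*x_lhp) > average(*y_lhp)   # .400 > .350
    assert average(*x_rhp) > average(*y_rhp)   # .300 > .200

    # ...but Y is better when the two categories are pooled.
    x_overall = average(x_lhp[0] + x_rhp[0], x_lhp[1] + x_rhp[1])   # 34/110 = .309
    y_overall = average(y_lhp[0] + y_rhp[0], y_lhp[1] + y_rhp[1])   # 37/110 = .336
    assert x_overall < y_overall

The unequal numbers of at-bats do all the work: X's overall average is dominated by his 100 at-bats against right-handers (where he hit .300), while Y's is dominated by his 100 at-bats against left-handers (where he hit .350).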
?&?*?=?H?Q?h?q?u?x?z?????????0@B'BOB]BcBeBoBpBBBBBBBBBBB⸪}hVh1CJOJPJQJhVhHCJOJPJQJhVh['CJOJPJQJhVh_>*OJPJQJhVh_OJQJ\hVh7k5CJOJPJQJhVh8<CJOJPJQJhVh_CJOJPJQJhVh,KCJOJPJQJ1BBB(C*C@CCCCC D DEE]G^GlGmGEHFH1K2KLLOOJQKQRRgd_B^C_CcChCsCtCCCC DEE1F2F3F]GkGJHQH.I2I6K?KLLLMBNCNdOkOmOqOOOQQRSS9TTUUVlWwWWWWWWW[ZhVhpCJOJPJQJhVh3CJOJPJQJhVh7k5CJOJPJQJhVh|CJOJPJQJhVh_>*CJOJPJQJhVh['CJOJPJQJhVh_CJOJPJQJ6RSSTTUUgWhW\\^^``bbkeleggghhaiiigd_dd[$\$]gd_gd_gd_[Z]ZZZZZ\0\^^{`|```aabbqeegggghhhhaijkk;n^nxoopppptqxqqrrssuukvnvsv⸭␸hVh_>*OJQJhVh_CJOJQJhVh_OJPJQJ\^JhVh]dOJQJhVh_OJQJhVh|CJOJPJQJhVh_>*CJOJPJQJhVh_CJOJPJQJhVh'CJOJPJQJ3iiiij@jhjjjjjkk6n7nsotoptqrruuwwxx`xaxgd_gd_gd_svuvww!x1xfxqx)yEy }} }/}}}}}gh}):~ 6Gׁ'7DEFGSXej˂͂hVhCJOJPJQJhVh@@'CJOJPJQJhVh".CJOJPJQJhVh_>*CJOJPJQJhVh_CJOJPJQJhVhh2CJOJPJQJPQՆֆigd_EJab'(EM+<(JNaʗޗߗ'՘N_ҙat(P⥕ssssssssshVh_>*OJQJhVh_OJQJhVhpOJQJhVh' >*CJOJPJQJhVh' CJOJPJQJhVh_>*CJOJPJQJhVhh2CJOJPJQJhVhHCJOJPJQJhVh_CJOJPJQJhVh7k5CJOJPJQJ-ij+,<=NObcefޗߗ>?gd_gd_?jkFGp,-} gd_ `^``gd_gd_gd_Pߛ0ab,u|}<Yӟ$IW)3yvd"hVh_>*CJOJQJ]^JhVh_CJOJQJ^JhVh_>*CJOJQJhVh_CJOJQJhVh>*CJOJPJQJhVh_>*CJOJPJQJhVh_CJOJPJQJhVh_OJQJ]hVh_>*OJQJ]hVh_>*OJQJhVh_OJQJ"fgҟӟVW34¡y KLΦϦgdpgd_gd_Gly ^~У^{ؤh@æçڧۧܧ;wl`h|h<5OJQJhVhK@OJQJhVhMtOJQJhVhp>*OJQJhVhp>*OJQJ]hVhpOJQJhVh_>*OJQJhVh_OJQJhVh_CJOJQJ^JhVh_>*CJOJQJ^J"hVh_>*CJOJQJ]^JhVh_CJOJQJ]^J!Ϧ,-ۧ!"+,56ìĬZ[}~״شgd<gd_Nz| ",46Z|Po A%?7w{|KLݻݻݻݳݻݻݻݻݨݻݻݻݻݻݝݝݝݝ݌ hVh<>*B*OJQJphhVh$oOJQJhVh] OJQJh|OJQJhVh<>*OJQJhVh+bOJQJhVhM!OJQJhVh<OJQJhVhOJQJh|h5OJQJ2DFgh>?gd<KL BC%&>?RS89dgd<]pyir./3V·嚉we#hVh<>*CJOJPJQJaJ#hVh.->*CJOJPJQJaJ hVh.-CJOJPJQJaJ hVh<CJOJPJQJaJhVh>*OJQJhVhOJQJhVhM!OJQJhVh.->*OJQJhVh<>*OJQJhVh<OJQJhVh<B*OJQJph'#1BPaprsxgd<xy^_/cd`ajkJKgd.lgdB `^``gd<gd<VZ[c#>]HIJKӸӦӎӃxl_RhVh.R0J OJQJhVh.lOJQJ^Jh|h.l5OJQJhVhM!OJQJhVhBOJQJhVhZUOJQJhVh<0JOJQJ#jhVh<OJQJUjhVh<OJQJUhVh<>*OJQJhVh<OJQJ hVh.-CJOJPJQJaJ hVh<CJOJPJQJaJ[hiefYZgda|h^hgd.lgd.l ihjnp>T GOtf  ͿͲymmmbShVha|OJPJQJ^JhVh[OJQJhVha|>*OJQJhVhH+OJQJhVh*OJQJhVhOJQJhVha|OJQJhVhOJQJ^JhVha|OJQJ^JhVh.l>*OJQJ^JhVh.lOJQJ^JhVh.l0J OJQJ\hVh.lOJQJhVh.l0J OJQJ ZFG12XY-.PQgda|<=>TU/0 IJ:<Z[g v   gda| 0 q z  pr(!3 ""h%%,,--./44;;<=====R@U@@@CCCDHH'JPJKKL!LAOPO|OOh|OJQJh<OJQJhVh^]OJQJhVhCOJQJhVhH+OJQJhVha|OJQJ^JhVh[OJQJhVha|OJQJhVha|>*OJQJ> 2 3 JL F'(!"23gda| 7$8$H$gda|gda|3op  '#(#$$g%%%((H+I+----00114444o6p6gda|p699====????@@@@BBCCDDHHHHJKKKOgda|OOOOOOFRGRHRS3Y^]p]^^``````````aaaa@aBaiakaaabbbbbbbbbbccbgg7n0pKprhVha|<OJQJ^JhVha|OJQJ^JhVha|H*OJQJhVha|>*OJQJhVha|OJQJ^JhVh<OJQJha|OJQJhVhM!OJQJh<OJQJh|OJQJhVha|OJQJ4OOGRHRSSRSSSS)TbTTTWW3Y4YZZ]]^]q]r]^^^^c 7$8$H$gda|gda|ccccee`gbgggiik7n0pYºJL;Ͳ͑͜hVhPOJQJ+hVha|0J6>*B*OJQJ^JphhVha|H*OJQJhVha|B*CJaJphhVha|OJQJhVha|>*OJQJhVha|OJQJ^JhVha|>*OJQJ^J2htitttwwSxxVyoy}};<ghihi23deR -DM gda|-DM ^gda|gda|R35ΏϏ{|GH@Aefgda|/0>?fg"#=>Z[gda|OP}~KLqr 7$8$H$gda|gda|LMqrsfpsz(* cJKab쮢~~shVhiOJQJhVha|OJQJ^JhVhY OJQJhVha|OJQJ]hVha|>*OJQJhVha|OJQJhVha|0JOJQJ,j]hVha|B*OJQJUphhVha|B*OJQJph&jhVha|B*OJQJUph-bc%Lyz>B'(gda| +,  fggda|^gda|KL;S"`Q R   Z \ j l      &&$&&&((((((ս}}}}}hVha|H*OJQJ^JhVha|>*OJQJ^JhVhiOJQJhVha|OJQJ^JhVha|>*OJQJhVha|OJQJhVha|0JOJQJ&jhVha|B*OJQJUph,j:hVha|B*OJQJUph1"!" 
^`gda|Q R       ` b     L N tu9:ggda|WX^_"#    ##F%G%&&**h,a-b-}-~-/gda|(b-|-s2t2x3y3z3{3}344 5 5 5 55889999;T;">A>??AѮ£яѣ{nahVh %OJQJ^JhVhuU4OJQJ^J&jPhVha|B*OJQJUphU&j fhVha|B*OJQJUphUhVha|OJQJ&j7hVha|B*OJQJUphUhVha|B*OJQJphU&jhVha|B*OJQJUphUhVha|>*OJQJ^JhVha|OJQJ^J//00q2r2|3}34558899;;S;T;>>{A|A5C7CwCxCFFFgda|AAA7CvCFFVFWFXFFFFIIMMCNMN}NNPP(P*P0P8P@PBPRPTPZP\PQQQQQQ̨̾~o̾̾̾̾aaaaaa̾̾ahVha|H*OJQJ^JhVha|0JOJQJ^J0jhVha|B*OJQJU^Jph!hVha|B*OJQJ^Jph*jhVha|B*OJQJU^JphhVha|>*OJQJ^JhVha|OJQJ^JhVha|0J6OJQJ^J+hVha|0J6>*B*OJQJ^Jph&FFGGGIIIILLNNtOuO^P`P|Q~QQQQQQQQQR ^`gda|gda|QQQ R RRRSTBXDXNXPXhZrZuZ~ZZZ[[[[m\\c^h^j^k^r^````,c3c7c*OJQJhVh15WOJQJhVha|>*OJQJhVha|H*OJQJhVha|>*OJQJ^JhVha|OJQJhVha|H*OJQJ^JhVha|OJQJ^J8RRRRVV~WW&X(XdZiZ~ZZZZZZZZ[U[[[l\m\\\,^-^gda|-^^^_^s^t^^^^^^__``+c,c@ "&*26>BFJ.202Z\("F c̰̾̾̾̾̾̾̾̾̾̾̾̾̾̾̾̾̾̾hVha|H*OJQJ^JhVha|H*OJQJ^JhVha|OJQJ^JhVha|>*OJQJhVha|OJQJhVha|B*OJQJphC ҇Ӈ֋؋<>tvx# p^p`gda|gda|#$^`*,./XYΔϔ&)!"GHgda|H  cd  ԯկ֯gda| ǷȷNO۸ܸݸ޸%/+,GHwl[ jxO)N hVha|OJQJUhVhPOJQJhVha|OJQJ^JjNhVha|OJQJUjD@hVha|OJQJUj1:hVha|OJQJUjhVha|OJQJUjhVha|OJQJUjhVha|OJQJUhVha|>*OJQJhVha|OJQJhVh[OJQJķŷƷǷɷʷN߸$%01*+-./FGIJgda|bc|}NO`abEF}~gda|b'({|OP!*M!01ֹwbwSFhVha|OJQJaJhVha|OJQJ^JaJ)jhVha|B* OJQJU\phK hVha|B* OJQJ\phK)jhVha|B* OJQJU\phKhVha|0JOJQJ^JhVha|OJQJ^J!jhVha|OJQJU^JhVha|>*OJQJhVha|OJQJjhVha|OJQJUj_hVha|OJQJU~+,LM !$%&'()*,.01Ff׎ $Ifgda|gda|145678:<>@BCFGHIJLNPRTUXYZ[\FfFf- $Ifgda|1BCTUfgxy`j21F   OP#$*+45dehhVha|H*OJQJhVha|>*OJQJhVha|OJQJhVha|OJQJaJhVha|OJQJ^JaJL\^`bdfgjklmnprtvxy|}~Ff/Ff٘ $Ifgda|Ff1Ffۢ $Ifgda|FfFfݬFf $Ifgda|12jkfg12GHgda|Ff3 $Ifgda|Mkdt$$Iflrq1 &&&&&644 la]p2yta| $Ifgda|   Mkd/$$Iflrq1 &&&&&644 la]p2yta| $Ifgda|VMMMMM $Ifgda|kd$$Iflrq1 &&&&&644 la]p2yta|!#VMMMMM $Ifgda|kd$$Iflrq1 &&&&&644 la]p2yta|#$%&'(*VMMMMM $Ifgda|kd`$$Iflrq1 &&&&&644 la]p2yta|*+,.024VMMMMM $Ifgda|kd$$Iflrq1 &&&&&644 la]p2yta|456_VQQQQQQQQgda|kdֶ$$Iflrq1 &&&&&644 la]p2yta| _`JK st3RSXZ\^ $Ifgda|gda|hist13S 67de  89fg ;<Z[z{´襘hVha|OJQJaJhVha|OJQJ^JaJhVha|>*OJQJ\hVha|OJQJ\hVha|>*H*OJQJhVha|>*OJQJhVha|OJQJhVha|H*OJQJ;^`bdfhjlnprtvxz|~Fft $Ifgda|Ff $Ifgda|Ff $Ifgda|  "$&(*,.02467<>@BFfD $Ifgda|FfBDFHJLNPRTVXZ\^`bdejlnprtvxz|Ffx $Ifgda||~Ff $Ifgda|Ff $Ifgda|  FfH $Ifgda|Ff "$&(*,.024689>@BDFHJLNPFf| $Ifgda|PRTVXZ\^`bdfglmnpqrtuvwxyz{|}Ff $Ifgda|}~Ff $Ifgda|FfL $Ifgda|Ff    !#%Ff $Ifgda|%')+-/13579;<ABCEFGIJKLMNPQRSFf $Ifgda|STUWYZ[`abdefhijlmnpqrstuwyzFf" $Ifgda|z{FfP/ $Ifgda|Ff)Ff5 $Ifgda| b d R W   }~ygda|Ff; $Ifgda|  W    ~      ~xv#############$$$$7$8$Q$R$b$c$v$w$$$$$$$$ꪝhVha|OJQJaJhVha|OJQJ^JaJhVha|>*OJQJhVha|H*OJQJhVha|OJQJ^JhVha|H*OJQJhVha|OJQJhVhLOJQJ> wxT U ~""<#=#u#v### $Ifgda|gda|########MAA $$Ifa$gda|kd @$$Iflrf &'''''644 la]p2yta| $Ifgda|#####5kdA$$Iflֈz f &''''''644 la]p<yta| $Ifgda|####### $Ifgda| $$Ifa$gda|#####>22) $Ifgda| $$Ifa$gda|kdB$$Iflֈz f &''''''644 la]p<yta|#####)kd4C$$Iflֈz f &''''''644 la]p<yta| $$Ifa$gda| $Ifgda|####### $Ifgda| $$Ifa$gda|#####>22) $Ifgda| $$Ifa$gda|kdMD$$Iflֈz f &''''''644 la]p<yta|#####)kdfE$$Iflֈz f &''''''644 la]p<yta| $$Ifa$gda| $Ifgda|######$ $Ifgda| $$Ifa$gda|$$$ $ $>22) $Ifgda| $$Ifa$gda|kdF$$Iflֈz f &''''''644 la]p<yta| $ $$$$)kdG$$Iflֈz f &''''''644 la]p<yta| $$Ifa$gda| $Ifgda|$ $#$$$%$.$7$ $Ifgda| $$Ifa$gda|7$8$:$=$>$>22) $Ifgda| $$Ifa$gda|kdH$$Iflֈz f &''''''644 la]p<yta|>$?$H$Q$R$)kdI$$Iflֈz f &''''''644 la]p<yta| $$Ifa$gda| $Ifgda|R$T$W$X$Y$_$b$ $Ifgda| $$Ifa$gda|b$c$f$j$k$>22) $Ifgda| $$Ifa$gda|kdJ$$Iflֈz f &''''''644 la]p<yta|k$l$t$v$w$)kdK$$Iflֈz f &''''''644 la]p<yta| $$Ifa$gda| $Ifgda|w$z$}$~$$$$ $Ifgda| $$Ifa$gda|$$$$$>22) $Ifgda| 
$$Ifa$gda|kdM$$Iflֈz f &''''''644 la]p<yta|$$$$$)kd.N$$Iflֈz f &''''''644 la]p<yta| $$Ifa$gda| $Ifgda|$$$$$$$ $Ifgda| $$Ifa$gda|$$$$$>22) $Ifgda| $$Ifa$gda|kdGO$$Iflֈz f &''''''644 la]p<yta|$$$$$)kd`P$$Iflֈz f &''''''644 la]p<yta| $$Ifa$gda| $Ifgda|$$$$$$$$$$$$%%%%%%&%'%0%1%;%<%E%F%O%P%Y%Z%c%d%o%p%y%z%%%%%%%%%%%%%%%%%%%%%%%%%%%& &&&&&&&'&0&1&;&<&E&F&O&P&Y&Z&c&d&m&n&w&x&&&&&&hVha|OJQJ^JaJhVha|OJQJaJZ$$$$$$$ $Ifgda| $$Ifa$gda|$$$$$>22) $Ifgda| $$Ifa$gda|kdyQ$$Iflֈz f &''''''644 la]p<yta|$$$$$5kdR$$Iflֈz f &''''''644 la]p<yta| $Ifgda|$$$$$$$ $Ifgda| $$Ifa$gda|$$$$$>22) $Ifgda| $$Ifa$gda|kdS$$Iflֈz f &''''''644 la]p<yta|$$$$$5kdT$$Iflֈz f &''''''644 la]p<yta| $Ifgda|$$$$$$$ $Ifgda| $$Ifa$gda|$$$%%>22) $Ifgda| $$Ifa$gda|kdU$$Iflֈz f &''''''644 la]p<yta|%%%%%5kdV$$Iflֈz f &''''''644 la]p<yta| $Ifgda|% % %%%%% $Ifgda| $$Ifa$gda|%%%%%>22) $Ifgda| $$Ifa$gda|kdX$$Iflֈz f &''''''644 la]p<yta|%%%%%5kd(Y$$Iflֈz f &''''''644 la]p<yta| $Ifgda|% %"%#%$%%%&% $Ifgda| $$Ifa$gda|&%'%*%,%-%>22) $Ifgda| $$Ifa$gda|kdAZ$$Iflֈz f &''''''644 la]p<yta|-%.%/%0%1%5kdZ[$$Iflֈz f &''''''644 la]p<yta| $Ifgda|1%4%7%8%9%:%;% $Ifgda| $$Ifa$gda|;%<%?%A%B%>22) $Ifgda| $$Ifa$gda|kds\$$Iflֈz f &''''''644 la]p<yta|B%C%D%E%F%5kd]$$Iflֈz f &''''''644 la]p<yta| $Ifgda|F%I%K%L%M%N%O% $Ifgda| $$Ifa$gda|O%P%S%U%V%>22) $Ifgda| $$Ifa$gda|kd^$$Iflֈz f &''''''644 la]p<yta|V%W%X%Y%Z%5kd_$$Iflֈz f &''''''644 la]p<yta| $Ifgda|Z%]%_%`%a%b%c% $Ifgda| $$Ifa$gda|c%d%g%k%l%>22) $Ifgda| $$Ifa$gda|kd`$$Iflֈz f &''''''644 la]p<yta|l%m%n%o%p%5kda$$Iflֈz f &''''''644 la]p<yta| $Ifgda|p%s%u%v%w%x%y% $Ifgda| $$Ifa$gda|y%z%}%%%>22) $Ifgda| $$Ifa$gda|kd c$$Iflֈz f &''''''644 la]p<yta|%%%%%5kd"d$$Iflֈz f &''''''644 la]p<yta| $Ifgda|%%%%%%% $Ifgda| $$Ifa$gda|%%%%%>22) $Ifgda| $$Ifa$gda|kd;e$$Iflֈz f &''''''644 la]p<yta|%%%%%5kdTf$$Iflֈz f &''''''644 la]p<yta| $Ifgda|%%%%%%% $Ifgda| $$Ifa$gda|%%%%%>22) $Ifgda| $$Ifa$gda|kdmg$$Iflֈz f &''''''644 la]p<yta|%%%%%5kdh$$Iflֈz f &''''''644 la]p<yta| $Ifgda|%%%%%%% $Ifgda| $$Ifa$gda|%%%%%>22) $Ifgda| $$Ifa$gda|kdi$$Iflֈz f &''''''644 la]p<yta|%%%%%5kdj$$Iflֈz f &''''''644 la]p<yta| $Ifgda|%%%%%%% $Ifgda| $$Ifa$gda|%%%%%>22) $Ifgda| $$Ifa$gda|kdk$$Iflֈz f &''''''644 la]p<yta|%%%%%5kdl$$Iflֈz f &''''''644 la]p<yta| $Ifgda|%%%%%%% $Ifgda| $$Ifa$gda|%%%%%>22) $Ifgda| $$Ifa$gda|kdn$$Iflֈz f &''''''644 la]p<yta|%%%%%5kdo$$Iflֈz f &''''''644 la]p<yta| $Ifgda|%%%%%%% $Ifgda| $$Ifa$gda|%%%%%>22) $Ifgda| $$Ifa$gda|kd5p$$Iflֈz f &''''''644 la]p<yta|%%%%%5kdNq$$Iflֈz f &''''''644 la]p<yta| $Ifgda|%&&&&&& $Ifgda| $$Ifa$gda|& & &&&>22) $Ifgda| $$Ifa$gda|kdgr$$Iflֈz f &''''''644 la]p<yta|&&&&&5kds$$Iflֈz f &''''''644 la]p<yta| $Ifgda|&&&&&&& $Ifgda| $$Ifa$gda|&& &"&#&>22) $Ifgda| $$Ifa$gda|kdt$$Iflֈz f &''''''644 la]p<yta|#&$&%&&&'&5kdu$$Iflֈz f &''''''644 la]p<yta| $Ifgda|'&*&,&-&.&/&0& $Ifgda| $$Ifa$gda|0&1&4&7&8&>22) $Ifgda| $$Ifa$gda|kdv$$Iflֈz f &''''''644 la]p<yta|8&9&:&;&<&5kdw$$Iflֈz f &''''''644 la]p<yta| $Ifgda|<&?&A&B&C&D&E& $Ifgda| $$Ifa$gda|E&F&I&K&L&>22) $Ifgda| $$Ifa$gda|kdx$$Iflֈz f &''''''644 la]p<yta|L&M&N&O&P&5kdz$$Iflֈz f &''''''644 la]p<yta| $Ifgda|P&S&U&V&W&X&Y& $Ifgda| $$Ifa$gda|Y&Z&]&_&`&>22) $Ifgda| $$Ifa$gda|kd/{$$Iflֈz f &''''''644 la]p<yta|`&a&b&c&d&5kdH|$$Iflֈz f &''''''644 la]p<yta| $Ifgda|d&g&i&j&k&l&m& $Ifgda| $$Ifa$gda|m&n&q&s&t&>22) $Ifgda| $$Ifa$gda|kda}$$Iflֈz f &''''''644 la]p<yta|t&u&v&w&x&5kdz~$$Iflֈz f &''''''644 la]p<yta| $Ifgda|x&{&}&~&&&& $Ifgda| $$Ifa$gda|&&&&&>22) $Ifgda| $$Ifa$gda|kd$$Iflֈz f &''''''644 la]p<yta|&&&&&5kd$$Iflֈz f &''''''644 la]p<yta| $Ifgda|&&&&&&& $Ifgda| 
$$Ifa$gda|&&&&&>22) $Ifgda| $$Ifa$gda|kdŁ$$Iflֈz f &''''''644 la]p<yta|&&&&&&&&&&&&&&&&&&&&&&''''''"'#','-'6'7'@'A'J'K'T'U'^'_'h'i'r's'|'}'''''''''''''''''**222222222ٽhVha|OJQJ]^JaJhVha|>*OJQJhVha|OJQJhVha|OJQJ^JaJhVha|OJQJaJJ&&&&&5kdނ$$Iflֈz f &''''''644 la]p<yta| $Ifgda|&&&&&&& $Ifgda| $$Ifa$gda|&&&&&>22) $Ifgda| $$Ifa$gda|kd$$Iflֈz f &''''''644 la]p<yta|&&&&&5kd$$Iflֈz f &''''''644 la]p<yta| $Ifgda|&&&&&&& $Ifgda| $$Ifa$gda|&&&&&>22) $Ifgda| $$Ifa$gda|kd)$$Iflֈz f &''''''644 la]p<yta|&&&&&5kdB$$Iflֈz f &''''''644 la]p<yta| $Ifgda|&&&&&&& $Ifgda| $$Ifa$gda|&&&&&>22) $Ifgda| $$Ifa$gda|kd[$$Iflֈz f &''''''644 la]p<yta|&&&&&5kdt$$Iflֈz f &''''''644 la]p<yta| $Ifgda|&&&&&&& $Ifgda| $$Ifa$gda|&&&&&>22) $Ifgda| $$Ifa$gda|kd$$Iflֈz f &''''''644 la]p<yta|&&&&&5kd$$Iflֈz f &''''''644 la]p<yta| $Ifgda|&&&&&&& $Ifgda| $$Ifa$gda|&&&''>22) $Ifgda| $$Ifa$gda|kd$$Iflֈz f &''''''644 la]p<yta|'''''5kd؍$$Iflֈz f &''''''644 la]p<yta| $Ifgda|'' ' ' ' '' $Ifgda| $$Ifa$gda|'''''>22) $Ifgda| $$Ifa$gda|kd$$Iflֈz f &''''''644 la]p<yta|'''''5kd $$Iflֈz f &''''''644 la]p<yta| $Ifgda|'''' '!'"' $Ifgda| $$Ifa$gda|"'#'&'(')'>22) $Ifgda| $$Ifa$gda|kd#$$Iflֈz f &''''''644 la]p<yta|)'*'+','-'5kd<$$Iflֈz f &''''''644 la]p<yta| $Ifgda|-'0'2'3'4'5'6' $Ifgda| $$Ifa$gda|6'7':'<'='>22) $Ifgda| $$Ifa$gda|kdU$$Iflֈz f &''''''644 la]p<yta|='>'?'@'A'5kdn$$Iflֈz f &''''''644 la]p<yta| $Ifgda|A'D'F'G'H'I'J' $Ifgda| $$Ifa$gda|J'K'N'P'Q'>22) $Ifgda| $$Ifa$gda|kd$$Iflֈz f &''''''644 la]p<yta|Q'R'S'T'U'5kd$$Iflֈz f &''''''644 la]p<yta| $Ifgda|U'X'Z'['\']'^' $Ifgda| $$Ifa$gda|^'_'b'd'e'>22) $Ifgda| $$Ifa$gda|kd$$Iflֈz f &''''''644 la]p<yta|e'f'g'h'i'5kdҘ$$Iflֈz f &''''''644 la]p<yta| $Ifgda|i'l'n'o'p'q'r' $Ifgda| $$Ifa$gda|r's'v'x'y'>22) $Ifgda| $$Ifa$gda|kd$$Iflֈz f &''''''644 la]p<yta|y'z'{'|'}'5kd$$Iflֈz f &''''''644 la]p<yta| $Ifgda|}''''''' $Ifgda| $$Ifa$gda|'''''>22) $Ifgda| $$Ifa$gda|kd$$Iflֈz f &''''''644 la]p<yta|'''''5kd6$$Iflֈz f &''''''644 la]p<yta| $Ifgda|''''''' $Ifgda| $$Ifa$gda|'''''>22) $Ifgda| $$Ifa$gda|kdO$$Iflֈz f &''''''644 la]p<yta|'''''5kdh$$Iflֈz f &''''''644 la]p<yta| $Ifgda|''''''' $Ifgda| $$Ifa$gda|'''''>22) $Ifgda| $$Ifa$gda|kd$$Iflֈz f &''''''644 la]p<yta|'''''5kd$$Iflֈz f &''''''644 la]p<yta| $Ifgda|''''''' $Ifgda| $$Ifa$gda|'''''>22) $Ifgda| $$Ifa$gda|kd$$Iflֈz f &''''''644 la]p<yta|''''''50gda|kḍ$$Iflֈz f &''''''644 la]p<yta| $Ifgda|'****6/7/222222222 $Ifgda| $$Ifa$gda|gda|22222222VJJAAAA $Ifgda| $$Ifa$gda|kd$$Iflr w''''' 644 la]p2yta|22222>22) $Ifgda| $$Ifa$gda|kd2$$Iflֈ w;''''''644 la]p<yta|22222)kd=$$Iflֈ w;''''''644 la]p<yta| $$Ifa$gda| $Ifgda|2222333 $Ifgda| $$Ifa$gda|23333"3#3B3C3\3]3v3w333333333333333444444%4&4/40494:4C4D4M4N4W4X4a4b4k4l4u4v44444444444444444444444444455 5 5555 5)5*53545=5>5hVha|OJQJaJhVha|OJQJ^JaJZ3333 3>22) $Ifgda| $$Ifa$gda|kdH$$Iflֈ w;''''''644 la]p<yta| 3 3333)kdS$$Iflֈ w;''''''644 la]p<yta| $$Ifa$gda| $Ifgda|33333 3"3 $Ifgda| $$Ifa$gda|"3#3%3)3*3>22) $Ifgda| $$Ifa$gda|kd^$$Iflֈ w;''''''644 la]p<yta|*3+393B3C3)kdi$$Iflֈ w;''''''644 la]p<yta| $$Ifa$gda| $Ifgda|C3E3H3I3J3S3\3 $Ifgda| $$Ifa$gda|\3]3_3c3d3>22) $Ifgda| $$Ifa$gda|kdt$$Iflֈ w;''''''644 la]p<yta|d3e3n3v3w3)kd$$Iflֈ w;''''''644 la]p<yta| $$Ifa$gda| $Ifgda|w3y3|3}3~333 $Ifgda| $$Ifa$gda|33333>22) $Ifgda| $$Ifa$gda|kd$$Iflֈ w;''''''644 la]p<yta|33333)kd$$Iflֈ w;''''''644 la]p<yta| $$Ifa$gda| $Ifgda|3333333 $Ifgda| $$Ifa$gda|33333>22) $Ifgda| $$Ifa$gda|kd$$Iflֈ w;''''''644 la]p<yta|33333)kd$$Iflֈ w;''''''644 la]p<yta| $$Ifa$gda| $Ifgda|3333333 $Ifgda| $$Ifa$gda|33333>22) $Ifgda| $$Ifa$gda|kd$$Iflֈ w;''''''644 
la]p<yta|33333)kd$$Iflֈ w;''''''644 la]p<yta| $$Ifa$gda| $Ifgda|3333333 $Ifgda| $$Ifa$gda|33344>22) $Ifgda| $$Ifa$gda|kd$$Iflֈ w;''''''644 la]p<yta|444445kd$$Iflֈ w;''''''644 la]p<yta| $Ifgda|44 4 4 444 $Ifgda| $$Ifa$gda|44444>22) $Ifgda| $$Ifa$gda|kd$$Iflֈ w;''''''644 la]p<yta|444445kd $$Iflֈ w;''''''644 la]p<yta| $Ifgda|44!4"4#4$4%4 $Ifgda| $$Ifa$gda|%4&4)4+4,4>22) $Ifgda| $$Ifa$gda|kd$$Iflֈ w;''''''644 la]p<yta|,4-4.4/4045kd$$Iflֈ w;''''''644 la]p<yta| $Ifgda|04345464748494 $Ifgda| $$Ifa$gda|94:4=4?4@4>22) $Ifgda| $$Ifa$gda|kd*$$Iflֈ w;''''''644 la]p<yta|@4A4B4C4D45kd5$$Iflֈ w;''''''644 la]p<yta| $Ifgda|D4G4I4J4K4L4M4 $Ifgda| $$Ifa$gda|M4N4Q4S4T4>22) $Ifgda| $$Ifa$gda|kd@$$Iflֈ w;''''''644 la]p<yta|T4U4V4W4X45kdK$$Iflֈ w;''''''644 la]p<yta| $Ifgda|X4[4]4^4_4`4a4 $Ifgda| $$Ifa$gda|a4b4e4g4h4>22) $Ifgda| $$Ifa$gda|kdV$$Iflֈ w;''''''644 la]p<yta|h4i4j4k4l45kda$$Iflֈ w;''''''644 la]p<yta| $Ifgda|l4o4q4r4s4t4u4 $Ifgda| $$Ifa$gda|u4v4y4{4|4>22) $Ifgda| $$Ifa$gda|kdl$$Iflֈ w;''''''644 la]p<yta||4}4~4445kdw$$Iflֈ w;''''''644 la]p<yta| $Ifgda|4444444 $Ifgda| $$Ifa$gda|44444>22) $Ifgda| $$Ifa$gda|kd$$Iflֈ w;''''''644 la]p<yta|444445kd$$Iflֈ w;''''''644 la]p<yta| $Ifgda|4444444 $Ifgda| $$Ifa$gda|44444>22) $Ifgda| $$Ifa$gda|kd$$Iflֈ w;''''''644 la]p<yta|444445kd$$Iflֈ w;''''''644 la]p<yta| $Ifgda|4444444 $Ifgda| $$Ifa$gda|44444>22) $Ifgda| $$Ifa$gda|kd$$Iflֈ w;''''''644 la]p<yta|444445kd$$Iflֈ w;''''''644 la]p<yta| $Ifgda|4444444 $Ifgda| $$Ifa$gda|44444>22) $Ifgda| $$Ifa$gda|kd$$Iflֈ w;''''''644 la]p<yta|444445kd$$Iflֈ w;''''''644 la]p<yta| $Ifgda|4444444 $Ifgda| $$Ifa$gda|44444>22) $Ifgda| $$Ifa$gda|kd$$Iflֈ w;''''''644 la]p<yta|444445kd$$Iflֈ w;''''''644 la]p<yta| $Ifgda|4444444 $Ifgda| $$Ifa$gda|44444>22) $Ifgda| $$Ifa$gda|kd$$Iflֈ w;''''''644 la]p<yta|444445kd$$Iflֈ w;''''''644 la]p<yta| $Ifgda|4444455 $Ifgda| $$Ifa$gda|55555>22) $Ifgda| $$Ifa$gda|kd$$Iflֈ w;''''''644 la]p<yta|5 5 5 5 55kd$$Iflֈ w;''''''644 la]p<yta| $Ifgda| 5555555 $Ifgda| $$Ifa$gda|55555>22) $Ifgda| $$Ifa$gda|kd$$Iflֈ w;''''''644 la]p<yta|5555 55kd'$$Iflֈ w;''''''644 la]p<yta| $Ifgda| 5#5%5&5'5(5)5 $Ifgda| $$Ifa$gda|)5*5-5/505>22) $Ifgda| $$Ifa$gda|kd2$$Iflֈ w;''''''644 la]p<yta|05152535455kd=$$Iflֈ w;''''''644 la]p<yta| $Ifgda|457595:5;5<5=5 $Ifgda| $$Ifa$gda|=5>5A5C5D5>22) $Ifgda| $$Ifa$gda|kdH$$Iflֈ w;''''''644 la]p<yta|D5E5F5G5H55kdS$$Iflֈ w;''''''644 la]p<yta| $Ifgda|>5G5H5Q5R5[5\5e5f5o5p5z5{599;6;<<??hAnABBDEIIOO*PRPzP|PPPPPPQzS|SSShWW*\,\B^H^__aab#b eeKeMeg+giiiiٴٴٴٴ٩٩hVhsOJQJhVha|H*OJQJhVha|OJQJ^JhVha|>*OJQJhVha|OJQJhVha|OJQJaJhVha|OJQJ^JaJBH5K5M5N5O5P5Q5 $Ifgda| $$Ifa$gda|Q5R5U5W5X5>22) $Ifgda| $$Ifa$gda|kd^$$Iflֈ w;''''''644 la]p<yta|X5Y5Z5[5\55kdi$$Iflֈ w;''''''644 la]p<yta| $Ifgda|\5_5a5b5c5d5e5 $Ifgda| $$Ifa$gda|e5f5i5k5l5>22) $Ifgda| $$Ifa$gda|kdt$$Iflֈ w;''''''644 la]p<yta|l5m5n5o5p55kd$$Iflֈ w;''''''644 la]p<yta| $Ifgda|p5s5v5w5x5y5z5 $Ifgda| $$Ifa$gda|z5{5|5;8<888>99999gda|kd$$Iflֈ w;''''''644 la]p<yta|8999999;;7;8;<<<<M?N?????:@@@@@BBBBgda|BDD E!E1G2GIIIIKKNMOMOOOOOOxPzPPPSSVVfWgda|fWhWWWZZtauabb$b%bbbccccsdtdddeeeffIhJhcigda|cidikkambmoolrmrssttuuuu w wxxyyyyy 7$8$H$^gda| 7$8$H$gda|gda|il`mrrtuyy{{}}01WXYjk?@J@|e|X|MhVh.-OJQJhVha|0JOJQJ,jhVha|B*OJQJUph&jhVha|B*OJQJUph(jhVha|OJQJUmHnHu(hVha|0J6B*OJQJ^JphhVha|B*OJQJphjhVha|OJQJUhVha|>*OJQJ!hVha|B*OJQJ^JphhVha|OJQJyzzN{O{{||}}}}+,>?@KL"#ÁāHÎ΂ڃgda|̂΂ʃ΃ӄ["IU4T^#|ԉ1ͿyyyyjhVha|B*OJQJphhVha|>*OJQJ^JhVha|OJQJ^JhVha|OJQJ\^J hVha|>*OJQJ\^J hVha|OJQJ^J hVha|>*OJQJ^JhVha|OJQJ^JhVha|>*OJQJ\]hVha|>*OJQJhVha|OJQJ#ڃۃTU!"‡Ç]^!#ӉԉjkPQʋˋ:gda| 7$8$H$gda|gda|1ZÊ 
!E/:;eHiЍk9Ǐ0K̐,.LĸĸĸĸĸĸĸĸĪ{j hVha|B*H*OJQJph hVha|B*H*OJQJph#hVha|>*B*OJQJ]phhVha|OJQJ]hVha|>*OJQJ]hVha|>*OJQJhVha|OJQJ hVha|B*OJQJ]phhVha|B*OJQJph hVha|>*B*OJQJph):;xyDE֏׏Z[אؐ78+,p 7$8$H$gda|gda|gda|~Fg ^x(@#36,==btv,ٙs<c(drʝ༮ՠՠՕhVhR|0>*OJQJhVhR|0OJQJhVha|>*OJQJ]hVha|>*OJQJ^J hVha|OJQJ^J hVha|>*OJQJhVha|OJQJhVha|B*OJQJph hVha|>*B*OJQJph6pqOPΕϕ56=>RS gda| 7$8$H$gda|gda| gda|gda|IJ9:ם؝vwPQ֠נ89šơgdR|0gda|ʝ؝BkԞ%/џɠ!78GSȥ PSv/k.ר.ݩꯠhVha|>*OJQJ]hVha|0JOJQJ^J(hVha|0J>*B*OJQJ^JphhVha|>*OJQJ\hVha|OJQJ\hVha|>*OJQJhVha|OJQJhVhR|0OJQJ7ơ()STʣˣäĤ@AץإST/078èĨDE 7$8$H$gda|gda|ݩߩ`Ҫ^y6[ ڮk!.f԰ 1PϿ袑wlhVhsOJQJhwrOJQJ#hVha|>*CJOJPJQJaJ hVha|CJOJPJQJaJhVha|CJOJQJaJhVha|OJQJ]^J hVha|>*OJQJ]^J hVha|OJQJ^J hVha|>*OJQJhVha|OJQJhVha|OJQJ](Efg5./gda| `^``gda|gda| 7$8$H$gda|/reftude45vwgda| `^``gda|PcZhjJ\ ,24YiŷȷʷG #FSǹչo",]7&ľm. @$ .8>hVha|0J6>*OJQJhVha|H*OJQJhVha|>*OJQJhVha|OJQJ hVha|>*B*OJQJphIw/0ij12opͼμFG12gda|gda|;<FH56TUuvPOPr 7$8$H$gda|gda|>?t@Ac7)RZf~:|жЙЋ~phVha|OJQJ\^JhVha|OJQJ^J hVha|>*OJQJ^J hVha|OJQJ^JhVha|>*OJQJ\^JhVha|OJQJ\^JhVha|>*OJQJhVha|OJQJ(hVha|0J6B*OJQJ^JphhVha|B*OJQJph%rs@A]^fgF >ef|}$dd[$\$a$gda| 7$8$H$gda|gda|0  @ABbcd$JKWr׶q^QD4hVha|>*OJQJ]^J hVha|OJQJ^J hVha|OJQJ^J$hVha|>*B*OJQJ^JphhVha|B*OJQJphhVha|0JOJQJ/jthVha|>*B*OJQJUph hVha|>*B*OJQJph)jhVha|>*B*OJQJUphhVha|>*OJQJhVha|OJQJhVha|OJQJ\^JhVha|>*OJQJ\^J}LMO  mnQRQR 7$8$H$gda|gda|rtSnkBVEQ  ً٘~pbhVha|>*OJQJ^JhVha|>*OJQJ^JhVha|OJQJ^JhVha|OJQJ^JhVha|OJQJ^J hVha|OJQJ^JhVha|>*OJQJ^JhVha|OJQJ^JhVha|>*OJQJhVha|OJQJhVha|OJQJ^J hVha|>*OJQJ^J !lm Z[RSFGgda| 7$8$H$gda| Y=a%EXYD;`~˽沦}󦲦hVha|OJQJ^JhVhROJQJ hVha|>*B*OJQJphhVha|>*OJQJhVha|OJQJhVha|>*OJQJ^JhVha|>*OJQJ^JhVha|OJQJ^JhVha|OJQJ^JhVha|OJQJ^J1FG;<XYyzgdf d[$gda|gda| 7$8$H$gda|!6_,;<~OWxS[:;\*KuX¶†{o†††hVh.->*OJQJhVhI OJQJhVhf>*OJQJhVhf;OJQJhVhOd;OJQJhwrhOd5OJQJhwrhf5OJQJhVhfOJQJhVha|>*OJQJ^J hVha|OJQJ^J hVha|OJQJhVha|>*OJQJ(:;]^JKvwrsWXqrD 7$8$H$gdfgdfPRIKtv    / z-LNuٶr[,hVhf0J"6B*CJOJQJaJph/hVhf0J"6>*B*CJOJQJaJphhVhfCJOJQJaJhVh!)CJOJQJaJhVhf>*OJQJ^JhVhfOJQJ^JhVhI OJQJhVhfOJQJhVhf>*OJQJhVhfOJQJ]^JhVhfOJQJ^J!        +,gdf 7$8$H$gdfsuST@AWX> ?   
!-!gdfgdf `^``gdf 7$8$H$gdf ![$\$gdf/F@,-?BX 7      #!-!.!o!!!!"!"#"/""ʾʾʾʰՓʓʓʰthVhf@OJQJ#hVhf>*CJOJPJQJaJ hVhfCJOJPJQJaJhVhfOJQJ]hVhf>*OJQJ]hVhf>*OJQJhVhfOJQJhVhfOJQJ^JhVhf>*OJQJ^JhVhf>*OJQJ]^J*-!.!z!!!."/""":#;###.$/$$T%U%%%Y&Z&&&?'@'' 7$8$H$gdf `^``gdfgdf""#+#;#l##### $$%B%l%%%7&J&Z&&&&&V'p'' (( (44ǹ׫ם׫ׂththhh]hVh.lOJQJhVhf>*OJQJhVhf>*OJQJ]hVhfOJQJhVhf>*OJQJ\^JhVhfOJQJ\^JhVhf>*OJQJ^JhVhfOJQJ]^JhVhf>*OJQJ]^JhVhfOJQJ^JhVhf@OJQJhVhf>*@OJQJ''](^(r+s+F,G,,,,,,, -!-g-h-----V.W.../// 7$8$H$gdf//E0F00000F1G111829222u3v333344444477gd.lgda|gdf4444444;;;;@@MANAqAAAAAAA:FIFxFFFGHIIJLʼwh\ʱʱʱʱʱhVh.l56CJjhVh.lOJQJUhVh!)0J56CJhVh.l0J56CJhVh.lmHnHu hVh.ljhVh.lUhVh.lOJQJhVh.l>*OJQJaJhVh.l>*OJQJhVh.lOJQJaJhwrhOd5OJQJaJhwrh.l5OJQJaJ 799::;;;;;;#<Z<<< =6=v==>d>>>K??@A@@@gd.lgd.l@MANAAAAAAAEEFFGGJJSKTKKKLLMM[N\NOOgd.lgd.lgd.lLLLLhPqP+Q.QVSWSSSSSS TTTT=UxU]]HlKlvvvv2wFwzz эxϏŦve hVh.l>*B*OJQJphhVh.lOJQJ^JhVhOJQJhVh.l56CJhVh.l>*OJQJhVh.l0JOJQJ#jhVh.lOJQJUjhVh.lOJQJUhVh)/:OJQJhVhI$OJQJhVh.lOJQJhVhKHOJQJ'ORRT T=UxUyUzUWW&Z'Zz]{]a_b_M`"a#aaabb=d>dddVegd.lgd.lVeWegg5i6ijjjjllnnoo*s+svvrysyzzzz||~gd.lgd.l~~pqNO_`  эҍ9:;͔ΔWX՝gd.lgd.lϏՏ9Җٖ՝ҰݺHNO./BCIJ¿ȿɿ12@AGH*輨hVh.lOJQJ^JhVh.l0JOJQJ&jhVh.lB*OJQJUph333hVh.lB*OJQJph333 hVh.l>*B*OJQJphhVh.l56CJhVh.lOJQJhVh.l>*OJQJ745jk®4523ABklHIgd.lgd.lGH/0"#$ghgd.lgd.l ,-GU0FboVn"jq/78:vwlhVh%wOJQJhVh.l>*OJQJ^JhVh.lOJQJ^JhVh.l>*OJQJhVh.l56CJhVhJ"!OJQJ hVh.l>*B*OJQJphhVh.l0JOJQJ#jhVh.lOJQJUhVh.lOJQJjhVh.lOJQJU*CD-.]^@AefHIgd.lQR -./gd.lgd.l-"+3DS e |wx{|)E !""00/99R??AAAAcFF"I,ICIKI%PPWS{SWWh`q``aaaYmZmhVhOJQJhVh!OJQJhVhBOJQJhVhBH*OJQJ hVh.l>*B*OJQJphhVh.l>*OJQJhVh.l56CJhVh.lOJQJ=  9 : AB   #$  !gd.lgd.l!""#"$" & &&&''C(D(((5-6-".#.0000H2I2H4I4e5f57gd.lgd.l77p8q8/9999;;==R???? B B/E0EcFFFFIILLNgd.lgd.lNOOO%PPPPWS{S|S}SiTjT?V@VXX\\__`aaabbdgd.lgd.lddffiinnnoooorrRsSsttvvcydy}}igd.lgd.lZmnuuwwxx?ycy+L <a{\ʁ:a?IjuIW ޅ1Æφ %/gy ctƈE{ǺۯۯۯۯۯۯۯۯhVhQOJQJhVh.l56>*CJhVh.l56CJh+OJQJhVh.l>*OJQJhVh.lOJQJhVh.l0J56CJEij|}67$&vw23Іц01gd.l1ևׇ$%؉ىJK  wx"#pq7gd.l3I3ы 6Vp!Ra׎6kuw@U[xƐKcɑ͑Ɠܓ"8H:nAsז!Xd}ŗiԘ"hVh=OJQJhVh.lOJQJhVhQOJQJhVh.l>*OJQJS78  yznoΑϑGHIJ͔̔\]gd.l]efƗǗ=>әԙIJњҚۛܛ%&opgd.lgd.l"ƙҙԙ2EFHIĚКڛ%Ntxŝѝԝ՝$מ3?NOƟϰϝρsffshVh=CJOJQJhVh.l>*CJOJQJhVh.lCJOJQJhVh=CJOJQJaJ%hVhB*CJOJPJQJphhVh.l>*CJOJPJQJhVhCJOJPJQJhVh.lCJOJPJQJhVh=OJQJhVh.l>*OJQJhVh.lOJQJ(pҝ՝GHOPџҟ>?|}ޢߢbc01gd.lƟAp1=lFk{̢ܢݢ6QGm¤LХ 'BD$^کW~-}5ѣѣhVh=OJQJ^JhVh.l>*OJQJ^JhVh.l0JOJQJ^J!jhVh.lOJQJU^JhVh.lOJQJ^JhVh=OJQJhVh.l>*OJQJhVh.lOJQJ;WXۥݥCD  ^`xz./QSެ߬]^\gd.l5FP¬ӬݬY\y(>[ۮ'135Zn'c )`.<=Q̴j޵G_c{{Ct˸Ը3`hVh.l0J>*OJQJhVh.lOJQJhVh=OJQJhVh.l>*OJQJN\]BC  1267;<STƵǵgd.lǵ|}mnders^_gdyDgd])gd.lderκϺ23\] 12uv`r+7O: ZlnehVh.->*OJQJhVh:QOJQJhVhyDB*OJQJphhVhyDOJQJ^JhVhyDOJQJ]hVh,OJQJhVhyDH*OJQJhVhyD>*OJQJhwrhyD5OJQJhVhyDOJQJ59:}~3^KLt45gdyD5@B*+675601KL01NOPQ 7$8$H$gdyDgdyD;< JKm+,67wx6gh 7$8$H$gdyDhlmde@AAB- `^``gdyD 7$8$H$gdyD%6B"-BCEF&2ck<s hVhyDB*OJQJ\ph$hVhyDB*OJQJ\aJphhVhyD0JOJQJ]hVhyD0J#6OJQJ$jhVhyD0J#6OJQJU#hVhyD>*CJOJPJQJaJ hVhyDCJOJPJQJaJhVhyD>*OJQJhVhyDOJQJ(-Okl'(pqKPQYZ]_xgd]) 7$8$H$gdyDgdyD~CLOPQY'+xz      %1#& " "((.翳zobbhVh])OJQJ^JhVhaOJQJhVh])>*OJQJhVh]dOJQJhVh:QOJQJhVh])>*OJQJhVh])OJQJhwrh])5OJQJhwrhOd5OJQJhVhyD>*OJQJhVhyD0J#6>*OJQJhVhyDOJQJhVhyD>*OJQJ]&d`&'ij{gd]){      12"#(gd])()qrCD!!!! 
" """$$&& ((gd])((n(o(((((++,,---8.9.....0011#3435333gd])..33>;A;Q@T@1D4DGGGHEKHKPPTTZZ6_N_``bbeeYi\ioovv{{{-|A|+}:}}}6~?~o~~~~^y*j賥hVh])OJQJ^JhVh])>*OJQJ^J!hVh])B*OJQJ^JphhVh.-OJQJhVh])OJQJ\^JhVhqOJQJhVh])OJQJhVh])>*OJQJ<33333{5|588:: ; ;=;>;C;D;???:@;@P@Q@V@W@XAYACCgd])CCC D!D0D1D6D7DFFF]G^GGGGGIIJJKKDKEKJKKKLLgd])LNNQQRRT*T+TyTzTTTTTWWFYGYUZgZhZZZZZZZ]gd])]]^^5_6___``aa4b5bnbobbbbbddeeeeeee 7$8$H$gd])gd])eepgqgii:i;iXiYi^i_illxnynEoGoXoYoooooooZr[rrrgd])rruu&vdvevvvvvvv y!y{{{{{K|L|||N}O}}}H~I~gd])I~~9:̀̀ !ʂ͂qrJKׄڄRTgd])Hdpqă;̄ׄل:ESTޅFVd͇vj_hVhfOJQJhVha>*OJQJ hVh])OJPJQJ^JaJ#hVha>*OJPJQJ^JaJ&hVha>*OJPJQJ]^JaJ hVhaOJPJQJ^JaJhVhaOJQJ!hVh])B*OJQJaJphhVh])OJQJ^JaJhVh])>*OJQJhVh])OJQJ$Tcḋ͇CDef ƊNJnogd])gd])gda 7$8$H$gda:D)NOYeԉ=>^׺ř׺~rdTDhVh])>*OJQJ]^JhVh])OJQJ]^JaJhVh])OJQJ\^JhVh])>*OJQJhVh])>*OJPJQJhVh])OJPJQJ#hVh >*CJOJPJQJaJhVh])CJOJQJaJhVh])OJQJ#hVh])>*CJOJPJQJaJ hVh])CJOJPJQJaJhVhfOJQJhVhf>*OJQJ^`mn؋[f|'&NSfʎ:T/Z]cÐĐŐʾʾʾʾʡʾʾoʾdWhVhY$;CJOJQJhVhY$;OJQJhVh])>*OJQJ]^JhVh])OJQJ]mH sH "hVh])>*OJQJ]mH sH hVh])OJQJmH sH hVh])>*OJQJ]hVh])>*OJQJhVh])OJQJhVh])OJQJ^JaJhVh])OJQJ^JhVh])>*OJQJ^J njȌ67fgabcdĐۑݑFgdY$;gd])gd]) 7$8$H$gd])Ő̟ϟ bfjȱʱ ()x۸Uǹǹ䞌|m|m|]hVhY$;OJQJ\^J aJhVhY$;OJQJ^JaJhVhY$;OJQJ]^JaJ"hVhY$;>*OJQJ]^JaJhVhY$;OJQJaJhVhY$;>*CJOJQJhVhY$;CJH*OJQJhVhY$;CJH*OJQJhVhY$;CJOJQJ^JhVhY$;CJOJQJhwrhY$;5CJOJQJ#F~N!$o\əݙ=>gdY$;>qrpq&2Zep&gr}ţUWgdY$;WYqsݦGYZ.Q}9:FgdY$;FQ\tĭϭڭ  m12QRgdY$;()ab~z{|}gd])dhgdY$; `^``gdY$; gdY$; 7$8$H$gdY$;gdY$;UbxͺcuPvgUEghVhY$;OJQJ]^JaJ"hVhY$;>*OJQJ]^JaJhVhY$;OJQJ^JaJ hVhY$;CJOJPJQJaJhVhY$;>*CJOJPJQJhVhY$;CJOJPJQJhVhY$;CJOJQJhVhY$;>*OJQJaJhVhY$;OJQJaJhVhY$;OJQJ\^J aJ$hVhY$;OJQJ\^J aJ"hVhY$;>*OJQJ\^J aJPi{|~ǼORVWBLwy28E\+9:@CȼȥșȍȍșșșșȁșȍȍvșșșȍȍșhVh"FOJQJhVh]d>*OJQJhVh8lH*OJQJhVh8l>*OJQJhVhOdOJQJhwrhp>5OJQJhwrh8l5OJQJhVh8lOJQJhVhY$;CJOJQJhVhY$;OJQJ^JaJhVhY$;>*OJQJ^JaJ.}ǼȼоѾDE[\*+EFnogd8lqr@Aabcno/0gd8lgd8leflm@acoVWw~ó}mhwrh)n5OJQJ\^Jhwrh#5OJQJ\^JhVh#OJQJhVh8l>*OJQJ^J!hVh8lOJQJ^J!hVh8l>*OJQJ\^JhVh8lOJQJ\^JhVh'>*OJQJhVh8lH*OJQJhVh8l>*OJQJhVh8lOJQJ'5ABC6bc 7$8$H$gd#p7$8$H$^p`gd#gd#gd]) 7$8$H$gd8lgd8lPQcweLYǺtggZhVhMOJQJ^J"hVh#OJQJ^JhVh8lH*OJQJ^J"aJhVhRu8H*OJQJ^J"hVhRu8OJQJ^J"hVhVHOJQJ^J"hVh#>*OJQJ^J"hVh#OJQJ^J"hwrh#5OJQJh-e5OJQJ\^Jhwrh)n5OJQJ\^Jhwrh&5OJQJ\^JcwxjMNz{A-gh*v [ 7$8$H$gd#[F67,demM!q 7$8$H$gd#qKLYZB_J GH 7$8$H$gd#TZG[ XY^t*V o,oNk0zպպպխ՟՟պՑhVh#H*OJQJ^J"hVh#H*OJQJ^J"hVh#OJQJ^JhVhRu8OJQJ^J"hVh#>*OJQJ^J"hVh#OJQJ^J"hVh#OJQJ^J"aJhVh'H*OJQJ^J"8H[\M a8=)*VW:;cd  7$8$H$gd#  y z   ^HS78z{ 7$8$H$gd#7~<=gdrn 7$8$H$gd8lgd# 7$8$H$gd#PP s  !!!!!!>"?"$ $#$6$8$M$ %!%,0,˿˴s'hVhrn>*B*OJQJ\^J$ph# hVhUlOJQJhVh'OJQJhVhrn>*OJQJhVh8lOJQJhVhrnOJQJh-eh8l5OJQJh-ehrn5OJQJh-eh8l5OJQJ^J#hVh#>*OJQJ^J#hVh#OJQJ^J#&PQ  r s   !!!!(!:!L!^!_!p#q#$$L$gdrnL$M$t$$$$%9%_%`%((U*V***,,L,M,n,o,,,----. 
7$8$H$gdrngdrn0,2,<,>,K,L,---..}//25292:2~;;J????AA2D>DDDDSEEELF\FBGCGȩ|n|ahVhrnOJQJ^J&hVhrn>*OJQJ\hVhrnOJQJ\hVh-=)OJQJhVhMOJQJhVhrnOJQJhVhrn>*OJQJ$hVhrn>*B*OJQJ^J%ph# !hVhrnB*OJQJ^J%ph# 'hVhrn>*B*OJQJ\^J$ph# $hVhrnB*OJQJ\^J$ph# $...1.2.^._.|/}///Y1Z17799~;;;;I?J???AA1D=Dgdrn 7$8$H$gdrn=D>DDDEEkFlFBGCGGGGH]HfHgHII JJJJYKZKKK `^``gdrn 7$8$H$gdrngdrnCGYGGGGGGHZHfHHHIJJ_JJ3KNKZKKKKKLLL%L&L3L޿޿th]QhVho(>*OJQJhVhOdOJQJh-eho(5OJQJhVhoOJQJhVhrn>*OJQJ^J'hVhrnOJQJ^J'hVhrn>*OJQJhVhrnOJQJhVhrn>*OJQJ^J&hVhrnOJQJ^J&#hVhrn>*CJOJPJQJaJ hVhrnCJOJPJQJaJ hVhrnCJOJQJ^J&aJKL%L&L3L4LLLLLMMMMNNQQRSSSUUVV"W$WXgdo( 7$8$H$gdL 3LMNRRRR R"RRRSSVVVVVV,V0VVVV"WZZZZZZ[\s]]aaggh5h|j}jjjjmmnooovv{{]kIn<hVh>*OJQJh-eh5OJQJhVhOJQJhVho(H*OJQJ^JhVho(H*OJQJ^JhVho(OJQJ^JhVho(>*OJQJhVho(OJQJ>XX[[\\r]s]]]``a aaaaabbccNdOde effhhgdo(h5h6h|jjjjjllmmmmnnooooooqq6rrrs_tgdgdo(_tatuuvvvvEyFy{{{{]^kl҂ӂHInogdoЏޏ*8FTbpt;<=PQfgd<BFO՗ݗИۘ"OÙי)= #$%RST]K  YZ5UVyyjjhVhAOJQJUhVhAOJQJhVhVOJQJhVhI0JAOJQJhVhIOJQJhVhI>*OJQJh-ehOd5;OJQJh-ehI5;OJQJh-eh5;OJQJhVho(OJQJhVhOJQJhVh>*OJQJ)f~ؓɖזԗ՗ݗޗϘИۘܘXYgdYMNȚɚ"#T]ĜK 7a*]%6gdI5gdI3gdI2gdI/gdI.gdI-gdI,gdI+gdI*gdI)gdI(gdI$gdIgd%`Qml#No FTU%gdAIgdIIgd.^HgdIGgdIFgdIEgdIBgdA?gdA2gdI>gdI=gdI;gdI3gdI:gdI9gdI8gdI7gdIV!"#$% @AXheg̳̳̳th]OhVhI0JX>*OJQJhVh.^OJQJhVh8/>*OJQJhVhVOJQJhVhI0JTOJQJhVhI0JROJQJhVhI0JPOJQJhVhV0JOOJQJhVhI0JOOJQJhVhI>*OJQJhVhIOJQJjզhVhAOJQJUjhVhAOJQJUhVhAOJQJ%QXh|+,ghgd44[gdIDgd.^3gdI3gd.^VgdIUgdISgdIQgdINgdIMgdILgdIKgdI:gdITtu()*+h !.٨wkwkw`wkwSwhVh44OJQJaJhVh]_OJQJhVh44>*OJQJhVh44OJQJhVhOd;OJQJhVh44;OJQJh-eh445;OJQJhVhMOJQJhVhI0JZOJQJhVhI0JZ>*OJQJhVh.^OJQJhVhIOJQJhVhI0JXOJQJhVhI0JX>*OJQJh !34-6Z 7$8$H$gd44gd44.6Z3f (Id"B^%If%&/L^`hﱡﱡhVh sROJQJ^JaJhVh44OJQJ\^JaJhVh44OJQJ^JaJhVh44OJQJ\^JaJhVh44OJQJ^JaJhVh44OJQJ\^JaJhVh44OJQJ\^JaJ 31&b ["c,e;gd44 7$8$H$gd44!%&-.>Xcjkqr :>CDJK[yhVh44OJQJ\^JaJhVh44OJQJ\^JaJ hVh sROJQJ\^JaJ hVh sROJQJ^JaJhVh44OJQJ^JaJhVh sROJQJ\^JaJ?;}X/q>DEM CQRgd44 7$8$H$gd44!"2LW\]eft$()01A]hmntu*/067hVh sROJQJ\^JaJhVh sROJQJ^JaJhVh44OJQJ\^JaJhVh44OJQJ\^JaJ hVh sROJQJ\^JaJ hVh44OJQJ^JaJ@7DEMPklmyz ()56CDQmx tE񗋗z hVh44CJOJPJQJaJhVh44>*OJQJhVh44OJQJhVh sROJQJ^JaJhVh sROJQJ\^JaJhVh44OJQJ\^JaJhVh44OJQJ\^JaJ hVh sROJQJ\^JaJ hVh44OJQJ^JaJ)lxyErsLYZgd44 `^``gd44gd44Egrs:N~0v~h Z\]^qsvwyѺuuuhVhH*OJQJ^JhVhH*OJQJ^JhVhOJQJ^Jh-ehOd5;OJQJh-eh5;OJQJhVhOJQJhVh44>*OJQJhVh44OJQJ hVh44CJOJPJQJaJ#hVh44>*CJOJPJQJaJ.Zuv01 rsFG^ygdgd44gd44gd44 `^``gd44 9:Nbc#-gd>@AFHKLNRTUZ\_`btu EO? @ S V # (*"$Sּ֤֤֤֤֯֯hVhuOJQJhVhH*OJQJhVhOJQJhVhOJQJ^JhVhbOJQJ^JhVh]_OJQJ^JhVhOJQJ^JhVhH*OJQJ^JhVhH*OJQJ^J7-7AB" #     yzR[h.,gd!gd!gd!(gd!gdCgdugdSTY[h> ? %!'!(!*#+#-#.#%%%%&&*+*,,//E0G0J0000h2i2O4u488 <+<>>AAAиииІ{hVhOJQJhVhL>OJQJhVh0JPOJQJhVh!0JPOJQJhVh0JOOJQJhVh!0JOOJQJhVh=5sOJQJhVh!OJQJhVh!>*OJQJh-ehC5OJQJh-eh!5OJQJ0. 
%!+#%1(*+*~.J00O4u48gdJgdJgd!?gdJgdJgd!gdgd!gd!gd=5sgd!gd!(gd!0gd!gd!gd!gd!88 <+<>>AAAnBODHNIIIUL7MOQQUDgd!gd gd!gd!gd!gd!gd!gd!Kgd!gd!Igd!gd!gd!gd!egd!gd!AAAmBnBpBqBNDODQDRDLGHHJIKILIMINII{Q}QQQWWXXIYJYyZΥuii^ShVh OJQJhVh=5sOJQJhVh!>*OJQJhVh8/OJQJhVh]_OJQJjhVhOJQJUhVhOJQJjhVhOJQJUhVh!0JTOJQJhVh!0JPOJQJhVh!OJQJhVh!0JROJQJhVh0JOOJQJhVh!0JOOJQJUWWYR\]\\J]]S^^^U__``kaa9bbgd gd!gd!gd gd!gd!gd!2gd gd!gd gd!gd!gd gd!gd=5sgd gd!gd yZzZ[[[R\]\n\\]4]9]:]]]0^J^^^^^ _'_5_6_y__]`c``````5a6avh[vhVh0JZOJQJhVh0JZ>*OJQJhVh!0J>*OJQJhVh OJQJhVh!0J>*OJQJhVhOJQJhVh!0JX>*OJQJhVh!0JXOJQJhVh!>*OJQJhVh!0JZOJQJhVh!0JZ>*OJQJhVh!OJQJhVhcOJQJ#6a`aaab.bxbbbbbbbbbbbbbbbbc cc%c'c*c潰}rjr_r_ThVhL>OJQJhVh8/OJQJh+OJQJhVh"FOJQJhVh OJQJh+h!>*OJQJh+h!0JX>*OJQJh+h 0JX>*OJQJhVh8/0JXOJQJhVh 0JXOJQJhVh 0JX>*OJQJhVh!0J>*OJQJhVh!OJQJhVh!0JX>*OJQJbcc0ddMee^ffOggph$iij@jAjBj\ 1$7$8$H$^\ gd6 d1$7$8$H$gd6gd6gdgd!gd!gd gd!gd!gd!gd!gd!gd gd!gd!Jgd *cRcccc+ddddeBeee3fUfffgggggggg8hehhimioiiiijjj?j@j软轢试芯芯och-eh65OJQJh-eh65OJQJ\^JhVh6OJQJhVhOJQJhVh!0JXOJQJhVh!0JOJQJhVh!0JX>*OJQJhVh!0J>*OJQJhVh!0J>*OJQJhVh!B*OJQJphhVh!OJQJhVh!0JOJQJ&@jBjNjPjkkmm n nnnnn(n)n+n,n7n8nDnFnGnHnJnKnLnMnOnPnSnTnVnXnYnZn\n]n_n`nbncnfngninknlnmnonpnrnsnunvnynzn|n~nnnnnnnnnnnnnnnnnnnnnnnnhVh6OJQJRHa^JhVh6OJQJ^JhVh6>*OJQJ^JhVh6OJQJPBjOjPjkmm n nnn)ns`$1$7$8$H$If^gd% $$1$7$8$H$If]a$gd% $1$7$8$H$Ifgd% d:1$7$8$H$gd6Pd1$5$7$8$H$]Pgd6 d1$7$8$H$gd6d1$5$7$8$H$gd61$5$7$8$H$]gd6 d1$7$8$H$gd6 1$7$8$H$gd6 )n,n8nEnFn4kd?$$If<ֈpPT|04|ayt% $$1$7$8$H$Ifa$gd% $$1$7$8$H$If]a$gd% @$1$7$8$H$If^@gd% FnHnKnMnPnTnWn~$l$1$7$8$H$If]la$gd% $$1$7$8$H$If]a$gd% @$1$7$8$H$If^@gd% $$1$7$8$H$If]a$gd% $D$1$7$8$H$If]Da$gd% $x$1$7$8$H$If]xa$gd% WnXnZn]n`noV=$$d$1$7$8$H$If]a$gd% $Dd$1$7$8$H$If]Da$gd% $xd$1$7$8$H$If]xa$gd% kd@$$IfֈpPT|04|ayt% `ncngnjnkn'kdQA$$IfֈpPT|04|ayt% $d$1$7$8$H$If]a$gd% $d$1$7$8$H$If]a$gd% Td$1$7$8$H$If^Tgd% knmnpnsnvnzn}n~$$1$7$8$H$If]a$gd% $$1$7$8$H$If]a$gd% T$1$7$8$H$If^Tgd% $$1$7$8$H$If]a$gd% $D$1$7$8$H$If]Da$gd% $x$1$7$8$H$If]xa$gd% }n~nnnnoYC-$$1$7$8$H$If]a$gd% $D$1$7$8$H$If]Da$gd% $x$1$7$8$H$If]xa$gd% kdB$$IfֈpPT|04|ayt% nnnnn0kdB$$IfֈpPT|04|ayt% $$1$7$8$H$If]a$gd% $$1$7$8$H$If]a$gd% T$1$7$8$H$If^Tgd% nnnnnnn~$$1$7$8$H$If]a$gd% $$1$7$8$H$If]a$gd% T$1$7$8$H$If^Tgd% $$1$7$8$H$If]a$gd% $D$1$7$8$H$If]Da$gd% $x$1$7$8$H$If]xa$gd% nnnnnoYC-$$1$7$8$H$If]a$gd% $D$1$7$8$H$If]Da$gd% $x$1$7$8$H$If]xa$gd% kdC$$IfֈpPT|04|ayt% nnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnooooo o o ooooooooooo!o"o$o%o'o(o+o,o.o0o2ohVh6OJQJRHa^JhVh6OJQJ^JhVh6OJQJVnnnnn0kdD$$IfֈpPT|04|ayt% $$1$7$8$H$If]a$gd% $$1$7$8$H$If]a$gd% T$1$7$8$H$If^Tgd% nnnnnnn~$$1$7$8$H$If]a$gd% $$1$7$8$H$If]a$gd% T$1$7$8$H$If^Tgd% $$1$7$8$H$If]a$gd% $D$1$7$8$H$If]Da$gd% $x$1$7$8$H$If]xa$gd% nnnnnoYC-$$1$7$8$H$If]a$gd% $D$1$7$8$H$If]Da$gd% $x$1$7$8$H$If]xa$gd% kdRE$$IfֈpPT|04|ayt% nnnnn0kdF$$IfֈpPT|04|ayt% $$1$7$8$H$If]a$gd% $$1$7$8$H$If]a$gd% T$1$7$8$H$If^Tgd% nnnnnnn~$$1$7$8$H$If]a$gd% $$1$7$8$H$If]a$gd% T$1$7$8$H$If^Tgd% $$1$7$8$H$If]a$gd% $D$1$7$8$H$If]Da$gd% $x$1$7$8$H$If]xa$gd% nnnnnoYC-$$1$7$8$H$If]a$gd% $D$1$7$8$H$If]Da$gd% $x$1$7$8$H$If]xa$gd% kdF$$IfֈpPT|04|ayt% nnooo0kdG$$IfֈpPT|04|ayt% $$1$7$8$H$If]a$gd% $$1$7$8$H$If]a$gd% T$1$7$8$H$If^Tgd% o o ooooo~$$1$7$8$H$If]a$gd% $$1$7$8$H$If]a$gd% T$1$7$8$H$If^Tgd% $$1$7$8$H$If]a$gd% $D$1$7$8$H$If]Da$gd% $x$1$7$8$H$If]xa$gd% ooo"o%ooYC-$$1$7$8$H$If]a$gd% $D$1$7$8$H$If]Da$gd% $x$1$7$8$H$If]xa$gd% kdH$$IfֈpPT|04|ayt% %o(o,o/o0o0kdSI$$IfֈpPT|04|ayt% $$1$7$8$H$If]a$gd% $$1$7$8$H$If]a$gd% T$1$7$8$H$If^Tgd% 0o3o7o:o=oAoDo~$$1$7$8$H$If]a$gd% $$1$7$8$H$If]a$gd% 
T$1$7$8$H$If^Tgd% $$1$7$8$H$If]a$gd% $D$1$7$8$H$If]Da$gd% $x$1$7$8$H$If]xa$gd% 2o3o6o7o9o:op?p@pJAAAAAA $Ifgd% kdP$$Ifl,֞ n!'''''''644 lalyt% @pApBpCpDpLp.kdLQ$$Ifl,ִu n!''''''''6    44 lalyt% $Ifgd% LpMpNpOpPpQpRpSpAkd&R$$Ifl;֞ n!'''''''644 lalyt% $Ifgd% SpZp`pdpiprpsptp $Ifgd% $$Ifa$gd% tpupvp}p7.. $Ifgd% kdR$$Ifl,ִu n!''''''''6    44 lalyt% }ppppppp $Ifgd% $$Ifa$gd% pppp7.. $Ifgd% kd T$$Ifl,ִu n!''''''''6    44 lalyt% ppppppp $Ifgd% $$Ifa$gd% pppp7.. $Ifgd% kdT$$Ifl,ִu n!''''''''6    44 lalyt% ppppppp $Ifgd% $$Ifa$gd% pppp7.. $Ifgd% kdU$$Ifl,ִu n!''''''''6    44 lalyt% ppppppp $Ifgd% $$Ifa$gd% pppp7.. $Ifgd% kdV$$Ifl,ִu n!''''''''6    44 lalyt% ppppppq $Ifgd% $$Ifa$gd% qqqqqq7.... $Ifgd% kdrW$$Ifl;ִu n!''''''''6    44 lalyt% qqqq q q.kdX$$Ifl,ִu n!''''''''6    44 lalyt% $Ifgd%  q q q qqqqqq $Ifgd% qqqqqq7.... $Ifgd% kdlY$$Ifl,ִu n!''''''''6    44 lalyt% qqqq q!q.kdFZ$$Ifl;ִu n!''''''''6    44 lalyt% $Ifgd% !q"q6q9qr$$1$7$8$H$Ifa$gd% $$1$7$8$H$If]a$gd% $$1$7$8$H$If]a$gd% $1$7$8$H$Ifgd% d1$7$8$H$gd68xdz1$5$7$8$H$]8^xgd6 d1$7$8$H$gd6qrr!r"r&r6r=r?rGrHrLrMrQrRrVrXr\r]rarbrhrirnrqryr{rrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrss sHtJtitkttthVh6OJQJ^JjFqhVh6OJQJUhVh6OJQJ^J(aJhVh6OJQJhVh6OJQJ^J(aJF>r?rHrMrRrWrpZH$$1$7$8$H$Ifa$gd% $$1$7$8$H$If]a$gd% $$1$7$8$H$If]a$gd% $1$7$8$H$Ifgd% jkdxk$$If\xd@ Ttaxyt% WrXr]rbrirorpZH$$1$7$8$H$Ifa$gd% $$1$7$8$H$If]a$gd% $$1$7$8$H$If]a$gd% $1$7$8$H$Ifgd% jkd"l$$If\xd@ Ttaxyt% orprqrzr{rrrxxbP$$1$7$8$H$Ifa$gd% $$1$7$8$H$If]a$gd% $1$7$8$H$Ifgd% d1$7$8$H$gd6jkdl$$IfT\xd@ Ttaxyt% rrrrrrBDkd n$$If0xL$ axyt% $1$7$8$H$Ifgd% jkdvm$$IfT\xd$ axyt% rrrrrrrrrWH$1$7$8$H$Ifgd% jkdn$$IfS\xd$ axyt% $$1$7$8$H$If]a$gd% $$1$7$8$H$If]a$gd% $$1$7$8$H$Ifa$gd% rrrrrrt^H$$1$7$8$H$If]a$gd% $$1$7$8$H$If]a$gd% $$1$7$8$H$Ifa$gd% $1$7$8$H$Ifgd% jkdHo$$If\xd$ axyt% rrrrrrt^$$1$7$8$H$If]a$gd% $$1$7$8$H$Ifa$gd% $1$7$8$H$Ifgd% jkdo$$If\xd$ axyt% rssItJtjtkttxj_Q_ d1$7$8$H$gd6 1$7$8$H$gd6 d#1$7$8$H$gd6 1$7$8$H$gd6 d1$7$8$H$gd6jkdp$$IfT\xd$ axyt% tttttttttttttttttttttttttttttttttttttttttttuuuu u u u u uu)u+uuuuuuvvvv hVh6OJQJRH`^J(aJh=OJQJ^J(aJhVh6OJQJ^JhVh6OJQJ^J(aJhVh6OJQJ hVh6OJQJRHa^J(aJ?tttttttsZDxd$1$7$8$H$If^xgd% $d$1$7$8$H$If]a$gd% $d$1$7$8$H$If]a$gd% Dkd+|$$If0T tTayt% d!$1$7$8$H$If^gd% $Pd!$1$7$8$H$If]Pa$gd% ttttt|ix$1$7$8$H$If^xgd% $$1$7$8$H$If]a$gd% $$1$7$8$H$If]a$gd% Wkd|$$IfFTp t8    ayt% ttttt|ix$1$7$8$H$If^xgd% $$1$7$8$H$If]a$gd% $$1$7$8$H$If]a$gd% Wkd=}$$IfFTp t8    ayt% ttttt|ix$1$7$8$H$If^xgd% $$1$7$8$H$If]a$gd% $$1$7$8$H$If]a$gd% Wkd}$$IfFTp t8    ayt% ttttt|ix$1$7$8$H$If^xgd% $$1$7$8$H$If]a$gd% $$1$7$8$H$If]a$gd% Wkde~$$IfFTp t8    ayt% tttttv`xd$1$7$8$H$If^xgd% $d$1$7$8$H$If]a$gd% $d$1$7$8$H$If]a$gd% Wkd~$$IfFTp t8    ayt% tttuu|ix$1$7$8$H$If^xgd% $$1$7$8$H$If]a$gd% $$1$7$8$H$If]a$gd% Wkd$$IfFTp t8    ayt% uu u uu|ix$1$7$8$H$If^xgd% $$1$7$8$H$If]a$gd% $$1$7$8$H$If]a$gd% Wkd!$$IfFTp t8    ayt% uu*u+uuuuv|jjTd$1$7$8$H$If^gd% d$1$7$8$H$Ifgd% <d1$5$7$8$H$]<gd6 d1$7$8$H$gd6 1$7$8$H$gd6Wkd$$IfFTp t8    ayt% vv v vvvvvvpZGx$1$7$8$H$If^xgd% $$1$7$8$H$If]a$gd% jkdI$$If\xpLXl axyt% $1$7$8$H$Ifgd% d$1$7$8$H$If^gd% v v vvvvvvvvvvv v!v"v$v%v&v(v)v*v+v.v/v0v1v2v4v5v6v7v:v;vv@vAvBvCvDvEvJvKvLvMvPv`vavkvlvmvnvovwvxv}v~vvvvvvvvvvvvh=OJQJ^J(aJ hVh6OJQJRH`^J(aJhVh6OJQJaJhVh6OJQJ^J(aJhVh6OJQJhVh6OJQJaJEvv v"v#v$v%vgN???$1$7$8$H$Ifgd% $d$1$7$8$H$If]a$gd% $Pd$1$7$8$H$If]Pa$gd% ~kd$$If4rxpLXlt` axyt% %v&v)v+v/v0vjTA2$1$7$8$H$Ifgd% x$1$7$8$H$If^xgd% $$1$7$8$H$If]a$gd% $P$1$7$8$H$If]Pa$gd% ~kd$$If4rxpLXlt  axyt% 0v1v2v5v7v;vs]G4x$1$7$8$H$If^xgd% $$1$7$8$H$If]a$gd% $P$1$7$8$H$If]Pa$gd% }kd$$IfrxpLXlt 
axyt% $1$7$8$H$Ifgd% ;vvAvCvs]G$$1$7$8$H$If]a$gd% $P$1$7$8$H$If]Pa$gd% }kdC$$IfrxpLXlt axyt% $1$7$8$H$Ifgd% CvEvKvLvMvNvOvPvavK~kd$$If4rxpLXlt` axyt% $1$7$8$H$Ifgd% $1$7$8$H$If^gd% x$1$7$8$H$If^xgd% avlvmvnvovxv~vq_F$d$1$7$8$H$If]a$gd% d$1$7$8$H$Ifgd% kkd˅$$If4\xpLXl  axyt% $1$7$8$H$Ifgd% $1$7$8$H$If^gd% ~vvvvvv^H$P$1$7$8$H$If]Pa$gd% ~kd}$$If4rxpLXlt` axyt% $1$7$8$H$Ifgd% x$1$7$8$H$If^xgd% vvvvvvvv[E$P$1$7$8$H$If]Pa$gd% ~kdE$$If4rxpLXlt  axyt% $1$7$8$H$Ifgd% $$1$7$8$H$If]a$gd% vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvwwwwww w w w w hVh6OJQJRH`^J(aJhVh6OJQJaJhVh6OJQJaJhVh6OJQJ^J(aJhVh6OJQJJvvvvvv`J$P$1$7$8$H$If]Pa$gd% }kd $$IfrxpLXlt axyt% $1$7$8$H$Ifgd% x$1$7$8$H$If^xgd% vvvvvvJ}kd͈$$IfrxpLXlt axyt% $1$7$8$H$Ifgd% x$1$7$8$H$If^xgd% $$1$7$8$H$If]a$gd% vvvvvv$1$7$8$H$Ifgd% $1$7$8$H$If^gd% x$1$7$8$H$If^xgd% $$1$7$8$H$If]a$gd% $P$1$7$8$H$If]Pa$gd% vvvvvvnXII$1$7$8$H$Ifgd% d$1$7$8$H$If^gd% d$1$7$8$H$Ifgd% ~kd$$If4rxpLXlt` axyt% vvvvvvvn[x$1$7$8$H$If^xgd% $$1$7$8$H$If]a$gd% $1$7$8$H$Ifgd% kkdU$$If4\xpLXl  axyt% vvvvvvvjTEEE$1$7$8$H$Ifgd% $$1$7$8$H$If]a$gd% $P$1$7$8$H$If]Pa$gd% ~kd$$If4rxpLXlt` axyt% vvvvvvjTA2$1$7$8$H$Ifgd% x$1$7$8$H$If^xgd% $$1$7$8$H$If]a$gd% $P$1$7$8$H$If]Pa$gd% ~kdϋ$$If4rxpLXlt  axyt% vvvvwws]G4x$1$7$8$H$If^xgd% $$1$7$8$H$If]a$gd% $P$1$7$8$H$If]Pa$gd% }kd$$IfrxpLXlt axyt% $1$7$8$H$Ifgd% wwww w wsZA$d$1$7$8$H$If]a$gd% $Pd$1$7$8$H$If]Pa$gd% }kdW$$IfrxpLXlt axyt% $1$7$8$H$Ifgd%  w wwwwww&w'w1w2w3w4w5w=w>wCwDwEwFwGwHwIwKwLwMwNwPwQwRwTwUwVwWwZw[w\w]w^w`wawbwcwfwgwhwiwjwlwmwnwowpwqwvwwwxwywwwwwwwwwwwwwwwwwwhVh6OJQJaJhVh6OJQJaJ hVh6OJQJRH`^J(aJhVh6OJQJhVh6OJQJ^J(aJJ wwwww'w2wH5$1$7$8$H$If^gd% ~kd$$If4rxpLXlt` axyt% $1$7$8$H$Ifgd% $1$7$8$H$If^gd% xd$1$7$8$H$If^xgd% 2w3w4w5w>wDwFwGwHwn[x$1$7$8$H$If^xgd% $$1$7$8$H$If]a$gd% kkdߎ$$If4\xpLXl  axyt% $1$7$8$H$Ifgd% HwIwLwNwOwPwQwjTEEE$1$7$8$H$Ifgd% $$1$7$8$H$If]a$gd% $P$1$7$8$H$If]Pa$gd% ~kd$$If4rxpLXlt` axyt% QwRwUwWw[w\wjTA2$1$7$8$H$Ifgd% x$1$7$8$H$If^xgd% $$1$7$8$H$If]a$gd% $P$1$7$8$H$If]Pa$gd% ~kdY$$If4rxpLXlt  axyt% \w]w^wawcwsZA$d$1$7$8$H$If]a$gd% $Pd$1$7$8$H$If]Pa$gd% }kd!$$IfrxpLXlt axyt% $1$7$8$H$Ifgd% cwgwhwiwjwmw]G$P$1$7$8$H$If]Pa$gd% }kd$$IfrxpLXlt axyt% $1$7$8$H$Ifgd% xd$1$7$8$H$If^xgd% mwowqwwwxwyww5~kd$$If4rxpLXlt` axyt% $1$7$8$H$Ifgd% $1$7$8$H$If^gd% x$1$7$8$H$If^xgd% $$1$7$8$H$If]a$gd% wwwwwwwwwq[Hx$1$7$8$H$If^xgd% $$1$7$8$H$If]a$gd% kkdi$$If4\xpLXl  axyt% $1$7$8$H$Ifgd% $1$7$8$H$If^gd% wwwwwwwwq[E$$1$7$8$H$If]a$gd% $P$1$7$8$H$If]Pa$gd% ~kd$$If4rxpLXlt` axyt% $1$7$8$H$Ifgd% wwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwpxrxzx|x{{p{q{{{{{<|>|}~~~~wx  lmЃ҃µµhVh6>*OJQJ^JhVh6OJQJ^Jh=OJQJ^JhVh6OJQJaJhVh6OJQJaJhVh6OJQJ^J(aJhVh6OJQJBwwwwwgN8xd$1$7$8$H$If^xgd% $d$1$7$8$H$If]a$gd% $Pd$1$7$8$H$If]Pa$gd% ~kd$$If4rxpLXlt  axyt% wwwwwws]G$$1$7$8$H$If]a$gd% $P$1$7$8$H$If]Pa$gd% }kd$$IfrxpLXlt axyt% $1$7$8$H$Ifgd% wwwwww`J$P$1$7$8$H$If]Pa$gd% }kdk$$IfrxpLXlt axyt% $1$7$8$H$Ifgd% x$1$7$8$H$If^xgd% wwwwwwwJ7d1$5$7$8$H$]gd6}kd+$$IfTrxpLXlt axyt% $1$7$8$H$Ifgd% x$1$7$8$H$If^xgd% $$1$7$8$H$If]a$gd% wqxrx{x|x{{q{{{{{=|>|~~d1$5$7$8$H$gd6hd1$5$7$8$H$]hgd6 d1$7$8$H$gd6<d1$5$7$8$H$]<gd6 d1$7$8$H$gd6 1$7$8$H$gd6 d:1$7$8$H$gd6d1$5$7$8$H$]gd6~~~~~  mу҃y d 1$7$8$H$gd6d1$5$7$8$H$gd6 d:1$7$8$H$gd6hd1$5$7$8$H$]hgd= d1$7$8$H$gd6d1$5$7$8$H$]gd6 d1$7$8$H$gd6 1$7$8$H$gd6 d1$7$8$H$gd6҃OPrsKLWX d81$7$8$H$gd6d1$5$7$8$H$]gd6 d1$7$8$H$gd6d1$5$7$8$H$]gd6 d1$7$8$H$gd6<d1$5$7$8$H$]<gd6 d1$7$8$H$gd6 1$7$8$H$gd6NPqsJLVXCs}6MNi܌ތ&*,/;=ލ=EF^`r{؎ˏhVh`OJQJ!hVh6B*OJQJ^Jph777hVh6OJQJ^JaJhVh6>*OJQJ^JhVh6OJQJ^JhVh6OJQJE~݌ތ<=rd1$5$7$8$H$]gd6d1$5$7$8$H$]gd6 
d1$7$8$H$gd6 1$7$8$H$gd6 1$5$7$8$H$gd6 d?1$7$8$H$gd6d1$5$7$8$H$]gd6 d:1$7$8$H$gd6d1$5$7$8$H$]gd6 F_`ߐx 1$7$8$H$gd`Ld1$5$7$8$H$]Lgd6 d?1$7$8$H$gd6d1$5$7$8$H$]gd6 d'1$7$8$H$gd6Pd1$5$7$8$H$]Pgd6 d1$7$8$H$gd6 d1$7$8$H$gd6 1$7$8$H$gd6 ݐސߐՓjkvw;ا%);=G˾}rjrjrhqOJQJh 3ShMOJQJh=OJQJhVh2OJQJhVhS1JH*OJQJhVh(`OJQJhVhS1J>*OJQJhVhS1JOJQJhVhS1JOJQJ^JhVhS1J>*OJQJ^Jh-ehS1J5OJQJ^Jh-eh25OJQJh-ehS1J5OJQJ'ߐIJԓՓkl͕  UV$&gdS1J&GHst"$Țʚ:;Ŝ/7>FYk}םgdS1J45 mϟ+Ԡ0ӡ(XĤˤҤ٤gdS1J٤   GI\ƨ-`ǩ.bɪgdMgdMgdS1JGN[hilo|}~ϨШԨ֨ب٨ڨ    ":<?@ANQUklmorsth 3SCJOJQJh 3Sh-eCJOJQJh 3Sh 3SCJOJQJh 3ShKs(CJOJQJhqCJOJQJh 3ShMCJOJQJHԩթ֩٩ک۩   #:;>@ABCPUWmnortuwxժ֪٪۪ݪ  $;<?ACPWnh 3Sh-eCJOJQJh 3ShKs(CJOJQJh 3Sh 3SCJOJQJhqCJOJQJh 3ShMCJOJQJL/bȫ.cȬ+_ǭ,_$W SgdMnortvԫ֫٫۫ݫ  #:<?ACDQXopsuwԬ׬٬۬    79<>@ANTkmpruhf6CJOJQJh 3ShMCJOJQJhqCJOJQJh 3Sh-eCJOJQJSӭխحڭݭ  !8<>@ANSTknpsͮѮӮ֮ 0469FKLcgily~h=CJOJQJh 3Sh?9LCJOJQJhf6CJOJQJh 3Sh-eCJOJQJh 3ShKs(CJOJQJh 3ShMCJOJQJHȯ̯ϯү߯,025BGH_behtyðưȰ˰ذݰް߰ '(+-0:;=AX[^amsıDZӱٱ hf6CJOJQJh 3ShKs(CJOJQJh 3ShMCJOJQJVL~ c @A%&pgdS1JgdMgdMopqruɻʻ >?]^|}ټڼ굨{heOJQJhVhS1JH*OJQJhVhS1JOJQJ^JhVhS1J>*OJQJhVhS1JOJQJaJhVhS1JOJQJ^JaJh=OJQJhVh=OJQJhS1JOJQJhVhbCOJQJhVhS1JOJQJhVhMOJQJ0pqr{ûƻɻʻ̻Ffb $$Ifa$gd J%Ff $Ifgd J%gdS1J̻ϻѻӻֻػڻݻ  FfFf $$Ifa$gd J% #&),/258;>?BEHKNQTWZ]FfҩFfv $$Ifa$gd J%]^adgjmpsvy|}FfFf $$Ifa$gd J%Ff.ļǼʼͼмӼּټڼۼܼȽ4gdS1JgdS1JFfFfB $$Ifa$gd J%4X{*NOʿ̿gdS1Jgh7AB=LMgd:NgdS1J$pq89  HI*JLMѺhVh:NOJQJaJheOJQJhVh:NH*OJQJhVh:N>*OJQJhVh:NOJQJhKs(hOd5OJQJhKs(h:N5OJQJhVhLOJQJhVhS1JOJQJhVhS1J>*OJQJ7%&RST gd:N !" $Ifgd% gd:Nuu $$Ifa$gd% $Ifgd% tkd߼$$If\ 634ayt%  uu $$Ifa$gd% $Ifgd% tkd$$If\ 634ayt%   ";H $Ifgd% tkd$$If\ 634ayt% HIKJ2QRxxxxxx 0^`0gd:Ngd:Ntkd$$If\ 634ayt% 2BC  25"#&),-012OPSTWX\]^UW_÷éééééh:NOJQJhVh:NOJQJ\^JhVh:N<OJQJhVh:NOJQJaJhVh:NOJQJ^JhVh:NOJQJ\^JaJhVh:NOJQJheOJQJ?R34$a$gd:NUkd5$$IfTa!  634` ap yt% T $Ifgd% gd:N OFFFF $Ifgd% kdƿ$$IfT\"2Ba!  (634` ap(yt% T#)-1OFFFF $Ifgd% kd$$IfT\"2Ba! (634` ap(yt% T12PTX]OFFFF $Ifgd% kd$$IfT\"2Ba! (634` ap(yt% T]^OFFFF $Ifgd% kd$$IfT\"2Ba! (634` ap(yt% TOFFFF $Ifgd% kd$$IfT\"2Ba! (634` ap(yt% Ta@OJJJJEEgd:Ngd:Nkd$$IfT\"2Ba! (634` ap(yt% T/0nomn  $Ifgd% gd:N !*56:;ABNOPcjk  겤ФФݤФФhVh:N>*OJQJhVh:NOJQJ\^J#hVh:N>*B* OJQJ\phhVh:N<OJQJhVh:NOJQJaJhVh:NOJQJ^JhVh:NOJQJhVheOJQJ< 6;BO $Ifgd% $a$gd:NZkd$$IfTa!  634` ap yt% TOPdfhjOC::: $Ifgd% $$Ifa$gd% kd$$IfT\ Q (h634` ap(yt% TjkOCCCC $$Ifa$gd% kd $$IfT\ Q (h634` ap(yt% TOCCCC $$Ifa$gd% kd$$IfT\ Q (h634` ap(yt% TOF::: $$Ifa$gd% $Ifgd% kd$$IfT\ Q (h634` ap(yt% T OCCCC $$Ifa$gd% kd$$IfT\ Q (h634` ap(yt% TSTOJJJJJJJgd:Nkd$$IfT\ Q (h634` ap(yt% TT./0L5m  gd:N    %     ]         &" =" # $ % 0% ' ' + + 0 ˦~shVhiROJQJhKs(>*OJQJhVh0/OJQJhVhZ`>*OJQJhKs(OJQJhKs(hZ`5;OJQJhVhZ`OJQJhVh:N>*OJQJheOJQJjhVh:NOJQJUhVh:NOJQJhVheOJQJh:NOJQJ*$%    A       I J u v   * + $a$gd:N !gd:Ngd:N+          % & W X Y t u v } ~    gdZ`gd:N  $ %   1 2    \ ]     0 1   ! " G # # $ $ % % gdZ`% % /% 0% 1' 2' l( m( ) ) #* $* + + + + ;. <. 
0 0 0 *0 +0 0 0 1 1 2 2  gdZ`gdZ`0 0 *0 j0 0 31 f1 H2 {2 2 3 c3 {3 |3 3 4 4 4 R4 S4 ^4 _4 4 4 4 4 5 O5 5 5 5 5 6 6 6 6 !6 "6 6 6 6 6 +7 B7 7 7 7 7 ޷ިި޷޷hVhZ`OJQJ^JhVhZ`5>*CJaJhVhZ`5CJaJhVhZ`OJQJmH sH hVhiROJQJhVhiR>*OJQJ]hVhZ`>*OJQJ]hVhZ`OJQJhVhZ`>*OJQJhVh;OJQJ/2 3 3 p3 3 3 3 4 S4 ^4 _4 4 4 j5 k5 5 5 6 R6 S6 6 6 77 }7 7 7 7 `gdZ`gdZ` #^#`gdZ`gdZ`7 7 7 7 8 L8 c8 8 8 ;9 O9 \9 ]9 9 9 9 : 9: P: ; 5; ; ; < %< &< < < 6= N= = > > > &> )> +> I> Q> R> S> ܱܽܽ~qhVh.OJQJ^JhKs(h.5;OJQJ^JhKs(h.5;OJQJhe5OJQJhKs(h.5OJQJhKs(hn t5OJQJ!hVhZ`B*OJQJaJphhVhZ`>*OJQJ]hVhZ`OJQJhVhZ`>*OJQJhVhiR>*OJQJ(7 8 8 X8 8 8 8 8 9 \9 ]9 9 9 : W: : : : 3; D; E; ; ; < d< q< r< < gdZ` #^#`gdZ`< < < C= = = = > R> S> `> a> ? @ @ @ 0B 2B fB hB H H .J 0J J J O O gd.gdZ` #^#`gdZ`S> `> @ @ 0B 2B :B fB 8J J M M N N O zP KT kT W $X mZ nZ qZ rZ yZ zZ ~Z Z Z Z i i k k k k l l l l l m m m 䯤|hVh.>*OJQJ^J"hVh.OJQJ^J"hVh.OJQJ^J hVh.OJQJhVh.>*OJQJhVh;OJQJ^JhVh.H*OJQJ^JhVh.H*OJQJ^JhVh.OJQJ^JhVh.>*OJQJ^J+O zP |P FT GT lT mT W W %X &X X X Z ] ] ^ ^ h` j` l` ^a `a dc fc f f g i i gd.i ~k k k k l l l l m m Am Bm m m n n n &o 'o jo o o o o o gd~ dd[$\$gd.gd. 7$8$H$gd.gd.m -m 0m ?m Bm Xm dm m m m m m m m m m n n n n n ƵttcRcA hVh;CJOJPJQJaJ hVh#,CJOJPJQJaJ hVhiRCJOJPJQJaJhVh;CJOJQJaJhVh.CJOJQJaJ#hVh.>*CJOJPJQJaJ hVh.CJOJPJQJaJ hVh.CJOJQJ^J aJhVh.OJQJ^J aJhVh.OJQJ^J hVh.OJQJ]^J hVh.>*OJQJ]^J n n n n n n o o &o Eo Zo jo ko uo yo o o o o o s s ~ ~ ! 7 L b - . ! ξ~r~r~r~f~r~r~r~Zr~hVh#,>*OJQJhVh~H*OJQJhVh~>*OJQJhVh~OJQJhKs(hcK5;OJQJhKs(h~5;OJQJhVhcKOJQJhVh.>*OJQJ^J hVh.>*OJQJ]^J hVh.OJQJ^J hVh.>*OJQJhVh.>*OJQJ]hVh.OJQJ#o o o Pp p p p q q Rq Sq Tq s s s s w w y y X| Y| ~ ~ ~ ~   gd~   ! " 7 8 \ ] K L b c i j 2 g j $ gd~$ % ] ^   b c - . ! 4 5 X Y t u g h A 7$8$H$gd~gd~! 5 X Y I % 3  \ m  q f g ijijuuiii^hVh;OJQJhVh~>*OJQJhVh~OJQJ^JhVh~OJQJ^J/$hVh~B*OJQJ]^J.ph# !hVh~B*OJQJ^J-ph# !hVh~B*OJQJ^J,ph# $hVh~B*OJQJ\^J+ph# !hVh~B*OJQJ^J*ph# hVh~OJQJhVh~OJQJ^J)$A B ` a   E F   8 9 H I ) z {   $ % A B 7$8$H$gd~gd~ % ] ^ , - A B   p q s t 7$8$H$gd~gd~ e f g r s V W  \ ] K L 7$8$H$gd~gd~g q  G I K V [  T z . / >  9  / Ϳ讠xͿkhVh~OJQJ^J0hVh~>*OJQJ^JhVh~OJQJ^JhVh~OJQJ^J1hVh~>*OJQJ^J1!hVh~B*OJQJ^J1phhVh~>*OJQJ^J)hVh~>*OJQJ^J0hVh~OJQJ^J)hVh~OJQJhVh~>*OJQJ*L 7 8 M N F G G H w x  8 9 R gd& 7$8$H$gd~gd~ , D g 3 ; < F L l   7 8 9 E R S Ҿ豠uh\Q\hVh&OJQJhVh&>*OJQJhVh&OJQJ^JhKs(h;5OJQJhKs(h&5OJQJ$hVh~>*B*OJQJ^Jph)@J!hVh~B*OJQJ^Jph)@JhVh~OJQJ^J3'hVh~B*OJQJ\]^J2ph*hVh~>*B*OJQJ\]^J2phhVh~OJQJhVh~>*OJQJR S u v   f g  V W w x ) A B t gd&S w % K         S j r       i    ; T     b |   ^ v    : j y   hVh&>*OJPJQJhVh&OJPJQJ!hVh&B*OJQJaJphhVh;OJQJhVh&OJQJ^JhVh0SOJQJ^JaJhVh&OJQJ^JaJhVh&>*OJQJhVh&OJQJ1t u   f g e f $ % K L gd&L 5 6        v w      V 7$8$H$gd&gd&V W     , - R S j k           q r       gd&   a b             y z   %! &! ! ! 3" gd& 7$8$H$gdegd& ! ! ! " %" " " # W# # $ $ '% Q% h% |% % % % ' ' ' (' )' *' 7' * + - - 1 亪䟓ymaVaVaVhVh{OJQJhVh{>*OJQJhVhcK;OJQJhVh{;OJQJhEh{5;OJQJhEhcK5OJQJhVhcKOJQJhVh&>*OJQJ]aJhVh&OJQJaJ!hVh&B*OJQJaJphhVh&>*OJQJhVh&OJQJ hVh&>*B*OJQJph!3" 4" W# /$ 0$ P% Q% % % & & & & ' (' )' *' 7' 8' t' u' ' ' 4( 5( ( ( ( ( gd{gd&( .) /) c) d) ) ) * * 2* 3* * * * * + + - - - - J. K. u0 v0 1 1 1 1 f4 gd{1 1 f4 4 5 ?6 27 a7 8 9 ^: : ; < ? ? E E N $N nN N N N KO nO yO {O O O O R R U U V V X X Z D\ \ ] ] ` ` u v z z z z z ƺƺƺƭƺƭƭhVh0SH*OJQJ^JhVh0S>*OJQJ^JhVh0SOJQJaJ$hVh0SOJQJ^JhVh0S>*OJQJhVh0SOJQJhEh0S5OJQJhVh 2OJQJhVh{OJQJhVh{>*OJQJ4f4 g4 4 4 5 5 ?6 @6 17 27 a7 b7 8 8 9 9 \: ]: : : ; ; < < > ? ? ? ? ? gd{? ? 
Q@ @ @ A A B B MD ND E E E E E G G J J M M N N N $N %N N N N gd{N N zO O O O O O R R R R U U U U W W X X X X X X X ZZ \Z Z Z gd0Sgd{Z E\ F\ \ \ ] ] ] ] 8` 9` Za [a Zc \c e e h h Nk Tk p p r r t u u v gd0Sv v w w y y z { { 6~ 8~ z ~ ځ ܁ } ~   v w a gd0S 2 gd0Sgd0Sz { /} 0} ~ ~ ~ ~    ~ ځ ̈́ ΄  5 ~ Ά   Q T V w ȇ ه  ȭȭȐȼȼȁp_ȼ hVh0SB*H*OJQJph hVh0S>*B*OJQJphhVh0SB*OJQJphhVh0SOJQJ\^J4aJhVh0S0JOJQJjhVh0SOJQJUhVh0S>*OJQJhVh0SOJQJ!jhVh0SOJQJU^JhVh0SOJQJ^JhVh0S>*OJQJ^J% 1 2 A ӈ  G R i Ɖ   ! $ 3 5 n w K Z 跧ui]u]u]uhVhwP>*OJQJhEhwP5OJQJhVhwPOJQJhVh=OJQJ\^JhEOJQJ\^JhVhYDQOJQJ\^Jh+h0S>*OJQJ\^JhVh0SOJQJ\^J#hVh0S>*CJOJPJQJaJ hVh0SCJOJPJQJaJhVh0SOJQJhVh0S>*OJQJ a b R S " # 4 n o w x ' ( L M J [ \   E gdwPgd0Sgd0SE F 4 5   ӟ ٣ ڣ   | }  / _ ` gdwPgdwP  ? ê = w  E u Ȭ Ӭ  Z \ l m n o d gdwPgdwP m ^ c ʺ @ D O  $ 7 e w n p   $ B r t B  詐hVhwP>*OJQJ^JhVhxnOJQJhVhwPOJQJ^JhVhy>*OJQJhVhwPCJOJQJaJhVhyOJQJhVhwP>*OJQJhVhwPOJQJhVhwPCJOJQJ7d e  G  < t ³ ߳   $ n gdwPgdwPn . 9 \  ɵ  B ]   c gdwPgdwP 0 ; ^ ׸ ) X s ع  ! 4 ? ʺ ˺ ̺ 5 6 i j gdwPgdwP ' 2 b m x 3 \  0 R = ? @ C D gdwPgdwPD P Q ^ _ 7 8    % & T U t v gdwP " ~   @ B   + - N p  : [ | gdwPgdwP      [ c w x | 1 2 9 : s t U V Z [ ~  k m 8 ]  #   K N           * עעעעhVhptCJOJQJhVhwPOJQJaJhVhwP>*OJQJ^JhVhxnCJOJQJhVhwPCJOJQJhVhwPOJQJ^JhVhwPH*OJQJ^J@|  > ^  @ b  ' W w gdwP 7 8 \ ]   $ %          ; l n       gdwPgdwP* 0 2 4 : > M N b h k q v z       " + ; < ] _ h       O P " " $ #% %% :% <% % % % % % % % % % % % % 潰hVh=OJQJ^JhVhwP>*OJQJ^JhVh=>*OJQJ^JhVhwPOJQJ^JhVhwPOJQJ^J(hVhwPCJOJQJaJhVhxnCJOJQJhVhwPCJOJQJhVhptCJOJQJ5 T     " f g h   M N     w x        gdwPgdwP O P   " " " " $ $ $ $ $ % % H% x% % % % & & ( ( ( ( ( ( gdptgdwPgdwP% % % % % % % % ( ( ( ( ( >, A, . . . . 0 0 0 0 0 0 0 0 d1 l1 2 2 2 2 2 2 2 2 b6 d6 r6 v6 6 6 6 6 P: T: \: `: [> \> ̱̣̣̣̣̣̣̣̣̾̾̾̆̕̕̕̕̕̕̕̕jIhVhwPOJQJUhVhwPH*OJQJ^JhVhwPH*OJQJ^JhVh=OJQJ^JhVhwP>*OJQJ^JhVhwPOJQJ^JhVhwPCJOJQJhVhptCJOJQJhVhxnCJOJQJ2( k* l* >, ?, @, A, l- m- x. z. . . . . 2 2 2 2 5 5 5 T6 6 6 P7 R7 &; (; < gdwP< < W> X> Y> Z> [> ]> _> `> > > > @ @ @ @ AA 1$7$8$H$gdD d#1$7$8$H$gdD 1$5$7$8$H$gdpt d1$7$8$H$gdD d1$7$8$H$gdD gdD gd{gd0SgdwP\> ]> ^> _> `> > > > ? ? 
@ @ @ @ @A AA A A B B B B B B C C D D #F %F 3F 4F *OJQJ^JhVhptOJQJ^JhVhD OJQJ^JhEhD 5;OJQJ!hEhD 5;OJQJ\^JhVh{OJQJhVh0SOJQJhVhD OJQJhVhwPOJQJ#AA B B C C D D $F %F 4F =F v`$$1$7$8$H$If]a$gd,+$1$7$8$H$Ifgd,+ d-1$7$8$H$gdD d1$5$7$8$H$gdD d1$7$8$H$gdD (d1$5$7$8$H$](gdD d1$7$8$H$gdD 1$5$7$8$H$]gdD 1$7$8$H$gdD d1$5$7$8$H$]gd4 =F GF HF IF JF MF PF QF r\F$|$1$7$8$H$If]|a$gd,+$$1$7$8$H$If]a$gd,+kkd@ $$If4\4 H\`4(ayt,+$1$7$8$H$Ifgd,+$$1$7$8$H$Ifa$gd,+GF HF IF JF LF MF OF PF QF RF cF dF fF gF iF jF kF lF rF sF xF yF ~F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F 3H 5H H H J J RK TK 迲hVhptOJQJ^JhVhD OJQJ^JhVhD OJQJ^J(aJhVhD OJQJaJhVhD OJQJhVhD OJQJaJAQF RF dF gF jF kF nX$|$1$7$8$H$If]|a$gd,+$$1$7$8$H$If]a$gd,+$1$7$8$H$Ifgd,+kkd $$If4\4 H\ 4(ayt,+kF lF sF yF F F pZ$|$1$7$8$H$If]|a$gd,+$$1$7$8$H$If]a$gd,+$1$7$8$H$Ifgd,+jkd $$If\4 H\4(ayt,+F F F F F F pZ$|$1$7$8$H$If]|a$gd,+$$1$7$8$H$If]a$gd,+$1$7$8$H$Ifgd,+jkd3 $$If\4 H\4(ayt,+F F F F F F pZ$|$1$7$8$H$If]|a$gd,+$$1$7$8$H$If]a$gd,+$1$7$8$H$Ifgd,+jkd $$If\4 H\4(ayt,+F F F F F F jQB$1$7$8$H$Ifgd,+$|d$1$7$8$H$If]|a$gd,+$d$1$7$8$H$If]a$gd,+d$1$7$8$H$Ifgd,+jkdu $$If\4 H\4(ayt,+F F F F F F pZ$|$1$7$8$H$If]|a$gd,+$$1$7$8$H$If]a$gd,+$1$7$8$H$Ifgd,+jkd $$If\4 H\4(ayt,+F F F 4H 5H SK TK jK tfZLA 1$7$8$H$gdD d 1$7$8$H$gdD 1$5$7$8$H$gdpt d1$7$8$H$gdD d1$5$7$8$H$]gdD d1$7$8$H$gdD jkd $$IfT\4 H\4(ayt,+TK iK kK K K L L N N N O O O O Q Q Q Q Q Q Q S S V V LV MV (W *W KW MW Y Y Y Y 1Z JZ %[ '[ 6[ 8[ /\ 0\ :\ ;\ =\ Q\ \ \ 沣沣ٖ|hVh4>*OJQJhVh4>*OJQJ^JhVh4OJQJ^JhVh4OJQJ^JaJhVhD OJQJ^JaJhVhptOJQJhVhfOJQJ^JhVhD OJQJ^JhVhD OJQJhVhD >*OJQJ^J0jK kK K L N N O O O O Q Q Q zl d*1$7$8$H$gdptxd1$5$7$8$H$]xgdD (1$5$7$8$H$](gdD d 1$7$8$H$gdD d1$5$7$8$H$gdD d%1$7$8$H$gdD d1$5$7$8$H$]gdD 1$7$8$H$gd4 1$7$8$H$gdD d1$7$8$H$gdD Q Q S S V V MV )W *W LW MW Y Y tdd1$5$7$8$H$]dgd4 d'1$7$8$H$gdD Td1$5$7$8$H$]TgdD 1$7$8$H$gdD d1$7$8$H$gdD d1$5$7$8$H$gdD d1$7$8$H$gdD Pd1$5$7$8$H$]PgdD d1$7$8$H$gdD Y &[ '[ 7[ 8[ /\ 0\ ;\ <\ =\ \ \ \ 2] 3]  d:1$7$8$H$gdD d1$5$7$8$H$]gdD d1$7$8$H$gdD d81$7$8$H$gdD d1$5$7$8$H$gd4 d1$7$8$H$gd4 1$7$8$H$gdD 1$7$8$H$gd4@1$5$7$8$H$]@gd4\ \ \ \ \ 1] 3] ] ] ] ] ] ] ] ^ ^ ^ ^ ^ ^ ^ N_ l_ m_ o_ y_ {_ _ _ _ _ _ _ _ _ y` ` ` ` a a .a 0a ua va a a a b 'b 1b 3b db eb 躰hVhZOJQJhEhZ5OJQJhVh=OJQJ^JhEOJQJ^JhVhptOJQJ^JhVhD >*OJQJ^J$hVhD >*B*OJQJ^JphhVhD OJQJ^JhVhD OJQJ53] ] ] ^ ^ ^ ^ m_ z_ {_ _ t^$|d1$5$7$8$H$]|a$gdD d1$7$8$H$gdD 1$7$8$H$gdD 1$5$7$8$H$]gdD d'1$7$8$H$gdD $d1$5$7$8$H$a$gdD d:1$7$8$H$gdD `d1$5$7$8$H$]`gdD d?1$7$8$H$gdD d1$5$7$8$H$]gdD _ _ ` ` /a 0a va a a 2b 3b db eb rb sb ~~~~gdZd1$5$7$8$H$]gdD d1$7$8$H$gdD 1$7$8$H$gdD d:1$7$8$H$gdD @d1$5$7$8$H$]@gdD d'1$7$8$H$gdD d1$5$7$8$H$]gdD d+1$7$8$H$gdD eb rb yb {b b b c c (f Bf h h h s s w w &w Ew } #} 3 F † Æ φ І     - . 7 W X | } ܇ ݇    ) * ơ hVhZ>*OJQJhVhZOJQJaJhVhZOJQJ\hVh2>*OJQJhVh2OJQJhVhZOJQJhVhZ>*OJQJ@sb md nd f f h h h h h h 9i :i }i i >j wj j j fk k k k 4m 5m Em Fm n n fq gdZfq gq r r fs gs s s s s Hw Iw y y y y z z T{ U{ H~ I~ 2 3 F G gdZG T g{kdX $$IfZ07 0634ZabytZ $$Ifa$gdZ dd[$\$gdZgdZ † xx $$Ifa$gdZ{kd $$IfZ07 0634ZabytZ† Æ Ȇ φ xx $$Ifa$gdZ{kd $$IfZ07 0634ZabytZφ І ц     sjj^ $$Ifa$gdZ $IfgdZ dd[$\$gdZgdZ{kdh $$IfZ07 0634ZabytZ  ! # % ' ) + - offZZZZZZ $$Ifa$gdZ $IfgdZkd $$IfZF,06    34Zab ytZ - . 
The first way is to choose some cutoff that seems to be reasonable, e.g., the 65% for the New York State Regents Examinations. (It's sort of a Goldilocks criterion: not too strict, not too lenient; just right.)

The second way is to base the cutoff determination upon the avoidance of Type I errors and Type II errors as defined in inferential statistics, or their corresponding epidemiological indicators, namely sensitivity and specificity. I would now like to delve into that approach here, without being over-technical. Some researchers, e.g., Lowery, et al. (2014), use receiver operating characteristic curves (ROCs) to augment the determination of cutoffs, but that is not necessary, in my opinion.

Consider the following 2-by-2 table (somebody, I don't remember who, once said that the whole world is a 2-by-2 table):

                             Above Cutoff    Below Cutoff
Has the disease                    a               b
Doesn't have the disease           c               d

For this approach you need to have a "gold standard" that is accepted as the true state of affairs.
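As a small computational aside (not part of the original paper), here is a minimal sketch of the three quantities discussed in the next few paragraphs, namely sensitivity, specificity, and a simple percentage measure of mis-classification, expressed in terms of the cell frequencies a, b, c, and d of the table above. The example frequencies are made up.

    # a = true positives  (above cutoff, has the disease)
    # b = false negatives (below cutoff, has the disease)     -- Type II errors
    # c = false positives (above cutoff, doesn't have it)     -- Type I errors
    # d = true negatives  (below cutoff, doesn't have it)
    def cutoff_summary(a, b, c, d):
        sensitivity = a / (a + b)                   # proportion of true cases caught
        specificity = d / (c + d)                   # proportion of non-cases cleared
        misclassified_pct = 100 * (b + c) / (a + b + c + d)
        return sensitivity, specificity, misclassified_pct

    # Illustrative (made-up) frequencies:
    print(cutoff_summary(a=40, b=10, c=20, d=130))  # sensitivity .80, specificity .87, 15% mis-classified

Moving the cutoff shifts observations between b and c, which is why sensitivity and specificity trade off against one another, as noted below.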
And "has the disease" can be a phenomenon in the usual negative sense of the term, or it can be something positive such as "is well" or "deserves to pass." The a, b, c, and d in the table are observed frequencies for the various row and column combinations for some actual data.

The "good" cells are a and d. Those in cell a are true positives, who are above the cutoff and really have the disease, so the cutoff is sensitive for them. Those in cell d are true negatives, who are below the cutoff and really don't have the disease, so the cutoff is specific for them. The "bad" cells are b and c. Those in cell b are false negatives (Type II errors), who are below the cutoff and are said to not have the disease but they do. Those in cell c are false positives (Type I errors), who are above the cutoff and are said to have the disease but they don't.

The problem is to strike a balance between sensitivity and specificity, because as one goes up the other goes down, all other things being equal. (The only way you can have your cake and eat it too is to collect lots and lots of data before setting the cutoff points, i.e., to increase the sample size.) The obvious drawback to this approach is the possible non-existence of the gold standard. (If it does exist, why not use it rather than whatever procedure has been carried out to establish the cutoff in the first place, unless the gold standard is too expensive to employ?)

Mis-classification errors

Once the cutoffs have been determined it is incumbent upon the user to determine some measure of possible errors of classification. A simple measure for the 2-by-2 table shown above might be the sum of the frequencies in the bad cells (b and c), divided by the sum of all four cells, and multiplied by 100.
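To make the arithmetic concrete, here is a minimal sketch in Python; the counts a, b, c, and d are hypothetical, not taken from any of the studies discussed in this paper. It computes sensitivity, specificity, and the simple mis-classification percentage just described:

    # Hypothetical frequencies for the 2-by-2 table above
    a, b = 80, 20   # have the disease: above cutoff (true positives), below cutoff (false negatives)
    c, d = 30, 70   # don't have the disease: above cutoff (false positives), below cutoff (true negatives)

    sensitivity = a / (a + b)                 # proportion above the cutoff among those who have the disease
    specificity = d / (c + d)                 # proportion below the cutoff among those who don't
    misclassification_pct = 100 * (b + c) / (a + b + c + d)

    print(sensitivity, specificity, misclassification_pct)   # 0.8 0.7 25.0

Raising the cutoff moves people from the "above" column to the "below" column, which typically lowers a and raises d, so sensitivity falls as specificity rises; re-running the sketch with different counts shows that tradeoff directly.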
Back to BMI

In a long and somewhat technical article, Durazo-Arvizu, et al. (1998) investigated what BMI cutoffs would be optimal for predicting mortality. They used the NHANES I dataset to arrive at their findings, which were (alas): No set of cutoffs (they used BMI quintiles) was found to be optimal for all demographic groups, but the lowest mortality risks were all near the mean BMI for the respective groups.

How much does it matter? An example

A few years ago, Freedman, et al. (2006) investigated the prediction of mortality from both obesity and cigarette-smoking history, using data from the U.S. Radiologic Technologists (USRT) Study. I was able to gain access to the raw data for a random sample of 200 of the males in that study. Here is what I found for age at death as the dependent variable:

Regression of deathage on bmi: r-square = 0.1%
Regression of deathage on bmi levels: r-square = 0.0% [the levels were less than 25; 25 to 29.99; 30-39.99; 40 and above]
Regression of deathage on pack-years: r-square = 6.4%
Regression of deathage on pack-years levels: r-square = 5.2% [the levels were less than or equal to 20 and 21 or more]

For these data it doesn't seem to matter much, but why bother with the cutoffs?

Another example: APGAR scores

One of the most common, and simplest, measurements is the total score on the APGAR test of the viability of a newborn baby at one minute and five minutes after birth. Here are the attributes that are scored and the number of points for each (courtesy of MDCalc):

Activity/Muscle Tone (A): Active +2; Some Extremity Flexion +1; Limp 0
Pulse (P): > or = 100 BPM +2; < 100 BPM +1; Absent 0
Grimace (G): Sneeze/Cough +2; Grimace +1; None 0
Appearance/Color (A): All Pink +2; Blue Extremities, Pink Body +1; Blue/Pale 0
Respirations (R): Good/Crying +2; Irregular/Slow +1; Absent 0

High APGAR scores are good. The following cutoff scores are often considered: A score of 7, 8, or 9 is normal. Any score lower than 7 is a sign that the baby needs medical attention.

There are at least two problems with using a cutoff score of 7. First of all, it isn't clear (to me, anyhow) why 7. Secondly, different infants could get a score of 7 based upon a variety of combinations of scores obtained for the five respective attributes. Yet a decision apparently must be made to either recommend medical attention or to not do so, no matter what attributes contributed to a score of 7.
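As a small illustration of that second problem, here is a minimal Python sketch; the two newborns' attribute scores are hypothetical, not actual data. Both babies total 7 and therefore fall on the same side of the cutoff despite very different profiles:

    # Each newborn gets 0, 1, or 2 points on each of the five APGAR attributes
    baby_1 = {"Activity": 2, "Pulse": 2, "Grimace": 2, "Appearance": 1, "Respirations": 0}
    baby_2 = {"Activity": 1, "Pulse": 1, "Grimace": 1, "Appearance": 2, "Respirations": 2}

    for name, scores in [("baby_1", baby_1), ("baby_2", baby_2)]:
        total = sum(scores.values())
        needs_attention = total < 7              # the commonly used cutoff discussed above
        print(name, total, "needs attention" if needs_attention else "normal")
    # Both babies print a total of 7 and "normal", even though one has absent respirations.

The totals are identical, yet one baby is not breathing and the other is crying vigorously; the cutoff by itself cannot distinguish them.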
A final note

My colleague Jean Brown and I said in our recent article in Research in Nursing & Health: "Thou Shalt Not Dichotomize or Otherwise Categorize Continuous Variables Without a Very Compelling Reason for Doing So" (Knapp & Brown, 2014, p. 350). Don't do it.

Acknowledgment

I would like to thank Suzy Milliard, Freedom of Information/Privacy Coordinator, for giving me access to the data for the USRT study.

References

Durazo-Arvizu, R.A., McGee, D.L., Cooper, R.S., Liao, Y., & Luke, A. (1998). Mortality and optimal body mass index in a sample of the US population. American Journal of Epidemiology, 147 (8), 739-749.

Forchheimer, M.B., Richards, J.S., Chiodo, A.E., Bryce, T.N., & Dyson-Hudson, T.A. (2011). Cut point determination in the measurement of pain and its relationship to psychosocial and functional measures after traumatic spinal cord injury: A retrospective model spinal cord injury system analysis. Archives of Physical Medicine and Rehabilitation, 92, 419-424.

Freedman, D.M., Sigurdson, A.J., Rajaraman, P., Doody, M.M., Linet, M.S., & Ron, E. (2006). The mortality risk of smoking and obesity combined. American Journal of Preventive Medicine, 31 (5), 355-362.

Knapp, T.R., & Brown, J.K. (2014). Ten statistics commandments that almost never should be broken. Research in Nursing & Health, 37, 347-351.

Lowery, A.E., et al. (2014). Impact of symptom burden in post-surgical non-small cell lung cancer survivors. Support Care Cancer, 22, 173-180.

MacCallum, R.C., Zhang, S., Preacher, K.J., & Rucker, D.D. (2002). On the practice of dichotomization of quantitative variables. Psychological Methods, 7, 19-40.

Owen, S.V., & Froman, R.D. (2005). Why carve up your continuous data? Research in Nursing & Health, 28, 496-503.

Streiner, D.L. (2002). Breaking up is hard to do: The heartbreak of dichotomizing continuous data. Canadian Journal of Psychiatry, 47, 262-266.

Vickers, A.J. (September 3, 2010). Cutoffs in medicine: Why use them? Medscape Business of Medicine > Stats for the Health Professional. Downloaded from the internet on August 17, 2014.

[Photo: the Bryan twins (http://s.ngm.com/2012/01/twins/img/twins-bryans-160.jpg)]

WOMB MATES

I've always been fascinated by twins ("womb mates"; I stole that term from a 2004 article in The Economist). As far as I know, I am not one (my mother and father never told me so, anyhow), but my name, Thomas, does mean "twin".

I am particularly concerned about the frequency of twin births and about the non-independence of observations in studies in which some or all of the participants are twins. This paper will address both matters.

Frequency

According to various sources on the internet (see, for example, CDC, 2013; Fierro, 2014):

1. Approximately 3.31% of all births are twin births, either monozygotic ("identical") or dizygotic ("fraternal"). Monozygotic births are necessarily same-sex; dizygotic births can be either same-sex or opposite-sex.
2. The rates are considerably lower for Hispanic mothers (approximately 2.26%).
3. The rates are much higher for older mothers (approximately 11% for mothers over 50 years of age).
4. The rate for a monozygotic twin birth (approximately 1/2%) is less than that for a dizygotic twin birth.

An interesting twin dataset

I recently obtained access to a large dataset consisting of adult male radiologic technicians. 187 of them were twins, but not of one another (at least there was no indication of same). It was tempting to see if any of their characteristics differed "significantly" from adult male twins in general, but that was not justifiable because although those twins represented a subset of a 50% random sample of the adult male radiologic technicians, they were not a random sample of US twins. Nevertheless, here are a few findings for those 187 people:

1. The correlation (Pearson product-moment) between their heights and their weights was approximately .43 for 175 of the 187. (There were some missing data.) That's fairly typical. [You can tell that I like to investigate the relationship between height and weight.]
2. For a very small subset (N = 17) of those twins who had died during the course of the study, the correlation between height and weight was approximately .50, which again is fairly typical.
3. For that same small sample, the correlation between height and age at death was approximately -.14 (the taller ones had slightly shorter lives) and the correlation between weight and age at death was approximately -.42 (the heavier persons also had shorter lives). Neither finding is surprising. Big dogs have shorter life expectancies, on the average (see, for example, the pets.ca website); so do big people.

Another interesting set of twin data

In his book, Twins: Black and White, Osborne (1980) provided some data for the heights and weights of Black twin-pairs. In one of my previous articles (Knapp, 1984) I discussed some of the problems involved in the determination of the relationship between height and weight for twins. (I used a small sample of seven pairs of Osborne's 16-year-old Black female identical twins.) The problems ranged from plotting the data (how can you show who is the twin of whom?) to either non-independence of the observations if you treat "N" as 14 or the loss of important information if you sample one member of each pair for the analysis. 'Tis a difficult situation to cope with methodologically. Here are the data. How would you proceed, dear reader (as Ann Landers used to say)?

Pair     Heights (X) in inches     Weights (Y) in pounds
1 (Aa)   A: 68   a: 67             A: 148   a: 137
2 (Bb)   B: 65   b: 67             B: 124   b: 126
3 (Cc)   C: 63   c: 63             C: 118   c: 126
4 (Dd)   D: 66   d: 64             D: 131   d: 120
5 (Ee)   E: 66   e: 65             E: 123   e: 124
6 (Ff)   F: 62   f: 63             F: 119   f: 130
7 (Gg)   G: 66   g: 66             G: 114   g: 104
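One way to proceed is sketched minimally in Python below. This is only an illustration of the three options just mentioned (it is not the analysis carried out in Knapp, 1984); the data are the seven pairs in the table above:

    import numpy as np

    # Osborne's seven pairs of 16-year-old Black female identical twins (from the table above):
    # first member of each pair, then the co-twin
    height_1 = np.array([68, 65, 63, 66, 66, 62, 66])
    weight_1 = np.array([148, 124, 118, 131, 123, 119, 114])
    height_2 = np.array([67, 67, 63, 64, 65, 63, 66])
    weight_2 = np.array([137, 126, 126, 120, 124, 130, 104])

    def r(x, y):
        return np.corrcoef(x, y)[0, 1]   # Pearson product-moment correlation

    # 1. Ignore the pairing and treat "N" as 14 (the observations are not independent)
    r_all = r(np.concatenate([height_1, height_2]), np.concatenate([weight_1, weight_2]))

    # 2. Use the pair as the unit of analysis: correlate the pair means (N = 7)
    r_pair_means = r((height_1 + height_2) / 2, (weight_1 + weight_2) / 2)

    # 3. Keep one member per pair (N = 7), discarding the co-twins' information
    r_one_per_pair = r(height_1, weight_1)

    print(r_all, r_pair_means, r_one_per_pair)

The first option inflates the apparent sample size, the second changes the unit of analysis to the pair, and the third throws away half of the data; none of the three is fully satisfactory, which is the point.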
Other good sources for research on twins and about twins in general

1. Kenny (2008). In his discussion of dyads and the analysis of dyadic data, David Kenny treats the case of twins as well as other dyads (supervisor-supervisee pairs, father-daughter pairs, etc.). The dyad should be the unit of analysis (individual is "nested" within dyad); otherwise (and all too frequently) the observations are not independent and the analysis can produce very misleading results.

2. Kenny (2010). In this later discussion of the unit-of-analysis problem, Kenny does not have a separate section on twins but he does have an example of children nested within classrooms and classrooms nested within schools, which is analogous to persons nested within twin-pairs and twin-pairs nested within families.

3. Rushton & Osborne (1995). In a follow-up article to Osborne's 1980 book, Rushton and Osborne used the same dataset for a sample of 236 twin-pairs (some male, some female; some Black, some White; some identical, some fraternal; all ranged in age from 12 to 18 years) to investigate the prediction of cranial capacity.

4. Segal (2011). In this piece Dr. Nancy Segal excoriates the author of a previous article for his misunderstandings of the results of twin research.

5. Twinsburg, Ohio. There is a Twins Festival held every August in this small town. Just google Twinsburg and you can get a lot of interesting information, pictures, etc. about twins and other multiples who attend those festivals.

Note: The picture at the beginning of this paper is of the Bryan twins. To quote from the Wikipedia article about them: "The Bryan brothers are identical twin brothers Robert Charles "Bob" Bryan and Michael Carl "Mike" Bryan, American professional doubles tennis players. They were born on April 29, 1978, with Mike being the elder by two minutes. The Bryans have won multiple Olympic medals, including the gold in 2012 and have won more professional games, matches, tournaments and Grand Slams than any other pairing. They have held the World No. 1 doubles ranking jointly for 380 weeks (as of September 8, 2014), which is longer than anyone else in doubles history."

References

Centers for Disease Control and Prevention (CDC) (December 30, 2013). Births: Final data for 2012. National Vital Statistics Reports, 62 (9), 1-87.

Fierro, P.P. (2014). What are the odds? What are my chances of having twins? Downloaded from the About Health website. (Pamela Prindle Fierro is an expert on twins and other multiple births, but like so many other people she equates probabilities and odds. They are not the same thing.)

Kenny, D.A. (January 9, 2008). Dyadic analysis. Downloaded from David Kenny's website.

Kenny, D.A. (November 9, 2010). Unit of analysis. Downloaded from David Kenny's website.

Knapp, T.R. (1984). The unit of analysis and the independence of observations. Undergraduate Mathematics and its Applications (UMAP) Journal, 5 (3), 107-128.

Osborne, R.T. (1980). Twins: Black and White. Athens, GA: Foundation for Human Understanding.

Rushton, J.P., & Osborne, R.T. (1995). Genetic and environmental contributions to cranial capacity in Black and White adolescents. Intelligence, 20, 1-13.

Segal, N.L. (2011). Twin research: Misperceptions. Downloaded from the Twofold website.
Should We Give Up On Experiments?

In an earlier paper (Knapp, 2013) I presented several arguments pro and con giving up on causality. In this sequel I would like to extend the considerations to the broader matter of giving up on true experiments (randomized controlled trials) in general. I will touch on ten arguments for doing so. But first...

What is an experiment?

Although different researchers use the term in different ways (e.g., some equate "experimental" with "empirical" and some others equate an "experiment" with a "demonstration"), the most common definition of an experiment is a type of study in which the researcher "manipulates" the independent variable(s) in order to determine its (their) effect(s) on one or more dependent variables (often called "outcome" variables). That is, the researcher assigns the "units" (usually people) to the various categories of the independent variable(s). [The most common categories are "experimental" and "control".] This is the sense in which the term will be used throughout the present paper.

What is a "true" experiment?

A true experiment is one in which the units are randomly assigned by the researcher to the categories of the independent variable(s). The most popular type of true experiment is a randomized clinical trial.

What are some of the arguments against experiments?

1. They are artificial. Experiments are necessarily artificial. Human beings don't live their lives by being assigned (whether randomly or not) to one kind of "treatment" or another. They might choose to take this pill or that pill, for example, but they usually don't want somebody else to make the choice for them.

2. They have to be "blinded" (either single or double); i.e., the participants must not know which treatment they're getting and/or the experimenters must not know which treatment each participant is getting. If it's "or", the blinding is single; if it's "and", the blinding is double. Both types of blinding are very difficult to carry out.

3. Experimenters must be well-trained to carry out their duties in the implementation of the experiments. That is irrelevant when the subjects make their own choices of treatments (or choose no treatment at all).

4. The researcher needs to make the choice of a "per protocol" or an "intent(ion) to treat" analysis of the resulting data. The former "counts" each unit in the treatment it actually receives; the latter "counts" each unit in the treatment to which it initially has been assigned, no matter if it "ends up" in a different treatment or in no treatment. I prefer the former; most members of the scientific community, especially biostatisticians and epidemiologists, prefer the latter.

5. The persons who end up in a treatment that turns out to be inferior might be denied the opportunity for better health and a better quality of life.

6. Researchers who conduct randomized clinical trials either must trust probability to achieve approximate equality at baseline or carry out some sorts of tests of pre-experimental equivalence and act accordingly, by adjusting for the possible influence of confounding variables that might have led to a lack of comparability. The former approach is far better. That is precisely what a statistical significance test of the difference on the "posttest" variable(s) is for: Is the difference greater than the "chance" criterion indicates (usually a two-tailed alpha level)? To carry out baseline significance tests is just bad science. (See, for example, the first "commandment" in Knapp & Brown, 2014.)
7. Researchers should use a randomization (permutation) test for analyzing the data, especially if the study sample has not been randomly drawn. Most people don't; they prefer t-tests or ANOVAs, with all of their hard-to-satisfy assumptions.

8. Is the causality that is justified for true experiments really so important? Most research questions in scientific research are not concerned with experiments, much less causality (see, for example, White, 2010).

9. If there were no experiments we wouldn't have to distinguish between whether we're searching for "causes of effects" or "effects of causes". (That is a very difficult distinction to grasp, and one I don't think is terribly important, but if you care about it see Dawid, Faigman, & Fienberg, 2014, the comments regarding that article, and their response.)

10. In experiments the participants are often regarded at best as random representatives of their respective populations rather than as individual persons.

As is the case for good debaters, I would now like to present some counter-arguments to the above.

In defense of experiments

1. The artificiality can be at least partially reduced by having the experimenters explain to the participants how important it is that chance, not personal preference, be the basis for determining which people comprise the treatment groups. They should also inform the participants that whatever the results of the experiment are, the findings are most useful to society in general and not necessarily to the participants themselves.

2. There are some situations for which blinding is only partially necessary. For example, if the experiment is a counter-balanced design concerned with two different teaching methods, each person is given each treatment, albeit in randomized order, so every participant can (often must) know which treatment he (she) is getting on which occasion. The experimenters can (and almost always must) also know, in order to be able to teach the relevant method at the relevant time. [The main problem with a counter-balanced design is that a main effect could actually be a complicated treatment-by-time interaction.]

3. The training required for implementing an experiment is often no more extensive than that required for carrying out a survey or a correlational study.

4. Per protocol vs. intention-to-treat is a very controversial and methodologically complicated matter. Good "trialists" need only follow the recommendations of experts in their respective disciplines.

5. See the second part of the counter-argument to #1, above.

6. Researchers should just trust random assignment to provide approximate pre-experimental equivalence of the treatment groups. Period. For extremely small group sizes, e.g., two per treatment, the whole experiment should be treated just like a series of case studies in which a "story" is told about each participant and what the effect was of the treatment that he (she) got.

7. A t-test is often a good approximation to a randomization test, for evidence regarding causality but not for generalizability from sample to population, unless the design has incorporated both random sampling and random assignment. (A small illustrative sketch of a randomization test appears after this list.)
8. In Knapp (2013) I cite several philosophers and statisticians who strongly believe that the determination of whether X caused Y, Y caused X, or both were caused by W is at the heart of science. Who am I to argue with them? I don't know the answer to that question. I do know that I often take positions opposite to those of experts, whether my positions are grounded in expertise of my own or are merely contrarian.

9. If you are convinced that the determination of causality is essential, and furthermore that it is necessary to distinguish between those situations where the emphasis is placed on the causes of effects as opposed to the effects of causes, go for it, but be prepared to have to do a lot of hard work. (Maybe I'm just lazy.)

10. Researchers who conduct non-experiments are sometimes just as crass in their concern (lack of concern?) about individual participants. For example, does an investigator who collects survey data from available online people even know, much less care, who is who?
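Here is the small sketch referred to above: a minimal randomization (permutation) test in Python for the difference between two group means. The "posttest" scores are hypothetical, not taken from any actual experiment:

    import itertools
    import numpy as np

    # Hypothetical posttest scores for two very small treatment groups
    group_a = [23, 31, 28, 35]
    group_b = [20, 27, 22, 25]
    observed_diff = np.mean(group_a) - np.mean(group_b)

    # Under the null hypothesis the treatment labels are arbitrary, so consider
    # every way of re-assigning the eight scores to two groups of four
    scores = np.array(group_a + group_b)
    n_a = len(group_a)
    diffs = []
    for idx in itertools.combinations(range(len(scores)), n_a):
        mask = np.zeros(len(scores), dtype=bool)
        mask[list(idx)] = True
        diffs.append(scores[mask].mean() - scores[~mask].mean())

    # Two-tailed p-value: proportion of re-assignments at least as extreme as the observed difference
    p_value = np.mean(np.abs(diffs) >= abs(observed_diff))
    print(observed_diff, p_value)

With only four scores per group all 70 possible re-assignments can be enumerated exactly; for larger experiments one would ordinarily sample re-assignments at random rather than enumerate them all.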
References

Dawid, A.P., Faigman, D.L., & Fienberg, S.V. (2014). Fitting science into legal contexts: Assessing effects of causes or causes of effects. Sociological Methods & Research, 43 (3), 359-390.

Knapp, T.R. (2013). Should we give up on causality? Included in the present work (see pages 260-266).

Knapp, T.R., & Brown, J.K. (2014). Ten statistics commandments that almost never should be broken. Research in Nursing & Health, 37, 347-351.

White, J.M. (2010). Three-quarter truths: Correlation is not causation. Downloaded from his website on the internet.

Statistics without the Normal distribution: A fable

Once upon a time a statistician suggested that we would be better off if DeMoivre, Gauss, et al. had never invented the "normal", "bell-shaped" distribution. He made the following outrageous claims:

1. Nothing in the real world is normally distributed (see, for example, the article entitled "The unicorn, the normal curve, and other improbable creatures", written by Theodore Micceri in Psychological Bulletin, 1989, 105 (1), 156-166). And in the theoretical statistical world there are actually very few things that need to be normally distributed, the most important of which are the residuals in regression analysis (see Petr Keil's online post of February 18, 2013). Advocates of normal distributions reluctantly agree that real-world distributions are not normal, but they claim that the normal distribution is necessary for many "model-based" statistical inferences. The word "model" does not need to be used when discussing statistics.

2. Normal distributions have nothing to do with the word "normal" as synonymous with "typical" or as used as a value judgment in ordinary human parlance. That word should be saved for clinical situations such as "your blood pressure is normal (i.e., OK) for your age".

3. Many non-parametric statistics, e.g., the Mann-Whitney test, have power that is only slightly less than their parametric counterparts if the underlying population distribution(s) is (are) normal, and often have greater power when the underlying population distribution(s) is (are) not. It is better to have fewer assumptions rather than more, unless the extra assumptions "buy" you more than they cost in terms of technical difficulties. The assumption of underlying normality is often not warranted and, if violated when warranted, can lead to serious errors in inference.

4. The time spent on teaching "the empirical rule" (68, 95, 99.7) could be spent on better explanations of the always-confusing but crucial concept of a sampling distribution (there are lots of non-normal ones). Knowing that if you go one standard deviation to the left and to the right of the mean of a normal distribution you capture approximately 68% of the observations, if you go two you capture 95%, and if you go three you capture about 99.7% is no big deal.

5. You could forget about "the central limit theorem", which is one of the principal justifications for incorporating the normal distribution in the statistical armamentarium, but is also one of the most over-used justifications and often mis-interpreted. It isn't necessary to appeal to the central limit theorem for an approximation to the sampling distribution of a particular statistic, e.g., the difference between two independent sample means, when the sampling distribution of the same or a slightly different statistic, e.g., the difference between two independent sample medians, can be generated with modern computer techniques such as the jackknife and the bootstrap. (A small sketch of the bootstrap approach follows this list.)

6. Without the normal distribution, and its associated t sampling distribution, people might finally begin to use the more defensible randomization tests when analyzing the data for experiments. t is only good for approximating what you would get if you used a randomization test for such situations, and then only for causality and not generalizability, since experiments are almost never carried out on random samples.

7. Descriptive statistics would be more appropriately emphasized when dealing with non-random samples from non-normal populations, which is the case for most research studies. It is much more important to know what the obtained "effect size" was than to know that it is, or is not, statistically significant, or even what its "confidence limits" are.

8. Teachers wouldn't be able to assign ("curve") grades based upon a normal distribution when the scores on their tests are not even close to being normally distributed. (See the online piece by Prof. S.A. Miller of Hamilton College. The distribution of the scores in his example is fairly close to normal, but the distribution of the corresponding grades is not. Interesting. It's usually the other way 'round.)

9. There would be no such thing as "the normal approximation" to this or that distribution (e.g., the binomial sampling distribution) for which present-day computers can provide direct ("exact") solutions.

10. The use of rank-correlations rather than distribution-bound Pearson r's would gain in prominence. Correlation coefficients are indicators of the relative relationship between two variables, and nothing is better than ranks to reflect relative agreement.
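Here is the small sketch referred to in claim 5: a minimal bootstrap, in Python, of the sampling distribution of the difference between two independent sample medians. The two samples are hypothetical, not taken from any study discussed in this collection:

    import numpy as np

    rng = np.random.default_rng(0)

    # Two hypothetical independent samples
    sample_1 = np.array([12, 15, 14, 10, 18, 22, 13, 16])
    sample_2 = np.array([11, 9, 14, 12, 10, 13, 8, 15])

    # Bootstrap: resample each group with replacement and recompute the difference between medians
    boot_diffs = []
    for _ in range(10000):
        resample_1 = rng.choice(sample_1, size=sample_1.size, replace=True)
        resample_2 = rng.choice(sample_2, size=sample_2.size, replace=True)
        boot_diffs.append(np.median(resample_1) - np.median(resample_2))

    # The empirical distribution of boot_diffs approximates the sampling distribution of the
    # difference between the two sample medians; e.g., a 95% percentile interval:
    print(np.percentile(boot_diffs, [2.5, 97.5]))

No appeal to the central limit theorem, and no normal distribution, is needed anywhere in the sketch.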
That statistician's arguments were relegated to mythological status and he was quietly confined to a home for the audacious, where he lived unhappily ever after.
Svt/qzŻ$t=8$4#"ŖOQ}C6W]~եWc޽Ӎ7zQ'T!HՁ"C+q]~a2F%7bEa 48V> @߾7|J"ARSiO5*Nub)6Tja)#XLC`Y @ >5n2:$>S)a->sJI*O$+fY^ѨaLʻgJ֍ѓHV$)$۔*-gb*^lf?@}@f`pCИ*زiF<ڽ놱ˮ;[?/c[Fm_mF1e S7*'9^{ٷog-m7mx1+t[nA\YW7n~"<\2]ґ\Ii(٫S&/..Wmwի'QA9ʇ^xF*HPR~7JkCR"nME/[iYgߣԭl]l9HL;Ԩa7Jf1>U911ŋԩUsQ J4;\3ކw%2.$jϘ>4&Nd*&P"v!ŞfLQjoF* Np&-uרYW]o#3/boՙ5I 3X #"mt߆[E VRیaNSM3gL,n`U֯oZ^}YF~Æ;2.O2öݶȑ9owܱyj\>3zdצA.,$bf3Cq_ڀ9s= 5bCnUk9۳_5KxT!GR ۵o AEI.<^^z5W.w5ӟ荷\u͕~E[o{_@/־C'{"Ml^õi =86v5?챿ѳGSybo'0ujS7(VXРavK6M%bT..xꙧ(g ǝp҆G|.T7e73ceT!K1^TTBJTpn57fLӹs')loGY+D"IfD=`߮Uԫa~15f8yB`dPaI_yidl9eSN>k9iڴqګi;Ium}oR\D,\& Q![}:yMĨlĴU1"qQ N{~w;vǾGyqDyo \R_G]v%-[57nHmþrؑ_{b[N63Թ2?YIӦM'V̳b|_b0)h(X|kѽg'LWz`.PEjgpSf}q\lN:ŢKXJf~߶XU,8-[6={*aCvi3SI;r!=Zuj7HG?"6rIsM>EHGۮt0)jH 1E3Qk-ZDIصӾjom5l(|tM|4*OpƟ¶Zߞ{ӯY\ÆYԨ!/Բ^}NhԨb+W ۅפI_xokŢ)4<48yAۖ7mTyzuk7t=C?Ͷ sޓ_AĿ_riO<%˿H0U@$8Ėj|Ǩܾ]'|Eudƭ,EV#FHYP 2;oƒ2(ebzjev/=o.CQ}뭷}K|8-׬.kެɌY3]ıAo)sKRXV\lM nU0B ,.:={I͛O EIphҔ9SʱW)UGaaQFb}qΝ:] Ѝ-\(zz~G;{)βowd.[2ԨN,Lf<RG؈X)c~4niKN;=֛oͬ3|U2[f/NvګYC zJG3%4).I:F_aD֬m޲e]7*ڦ*!˖.h\W٫^Bv)LTr>E:eWoa?􉧜񏾻Xye}u\9.NN؅ƙcF}7)Fta;ʱǜKŦ{˖vL|[e ƚ)N(*m:~¨o[V^Q]Y:ن_Q̔QA֠Cmnotfi8lۮ l iZi>$B|81[Ok]y/ް6+7~vbwu;֣"XI"gA hýO:xIy9Tʑb=tٮv ̞>}£AC[T_t!Fdc^rۑdT~fkect)hi9b0$.V<}x!_; ʒY?eڬFyr .0:D*PN{ݵV+& K[r_N6| T7U2M<`c!>}9{9uVI%Dn?w7 _b40P"֭_~I_>ʡs6a!)/-U_̝es%D^Qcy|c˒衄{ʴ_!7/Zq?:X+_$e6 C;fSPdxaxFQ&D;Fn9C·3togܾ4@x5^>AzJBٵn{mw{#f]>88ձL| !PVolk~qSO=$qR-"  Ǎ4 גͤ(3(\NT:|0ktLҧ~[j-9aBkq`|>2XADo]̫s/]y7\'{ݻ` 06eKUia}eP3R5+iԉF8J6.iڨYs&mWY\P]sq&A~z_>>襗q۠^7Ji*4L,_SZ͙kz/Nf͛_pw#STp7 k"ga:c䮘 Dl[)[\#o:MGQR/npy{Fz_J-fV,VڠAꅰ)ո>ꨣE*Vmۻ., ]ߜc$$V=gCd(,L0wDyf͏=\A}3/pbوCHQ83ߪRY74{δ˯ȶ'7 IK.ˮyU1T+mլiʘYA:Cm.cɔxD7ϛ;ނEYP8`֖{mf gNx>x%/Z`kNa |[Ɇ4ld1(d%RϾB?MQ!6I1 >;t`iC^2xڂ_uB84?ɈmB! +Qӝch xГ`Q$Twګ|ԥȏF搿!P*v=Yº? 3EWÐ *._(h.(r)Nb~F\} [ֳg9Jx}{7nYmFNA\zxY\:øbwiorg f[o ~mN"fNd.Hats'bal' +Xrl%R'TB0C2vsnqa~2kA2X͊;8Ȟ5{%Aŗ\4z֐t=6ݞ36#2F-k$pqayM8M8`z*dqC/1$m:P |(S#iNcK=0LOzDVv\#9+`|M@ ˑrVܩ+#F| ;H-3@r'eAKvzJ!On .zF vaXz, 0u?$b\v~}x-  F4")Lʞ%=0b;t>jݺP>}v8=% +mCݎ;L2 xMqc'u C)A2fr,މuBbcx⡇V\$o0uJTm&fF1`ڴ)x̀%{S QC=cs=fwT=d<~óbdI_nHi4bkE-u?75Cgl$'pYB|0+,pÂf2?FQ'Ǎ$.oÞ=?`OvQaCoH=`xޛQݗr"E-]⋥F1vKBeMv>hvN K4oҚ`7ڶiӮnۆn*0}۵g.yݰ-Z "{a+$77LSFp%|Iz TUD y% -7ޚzqH\zfKG`PȒ3'c9a9n (PmhTd yLnă!^ DŽ P8f[o\0l;5P#.u4k֜̃ # /㌇šŜ$uf#`.'п#pO=sZ=S>atd-)*Z9m&?zlAPlؠ)5)q,;Z"u5L3MT&v[)Ѥ+{Uu0q*@$]䖎Ă|0b$Bī >p>xO ~!-Ea}ƙgNWaE:>[jo+ohklFU~p{V!|MRب2^{Cf;]w{"hvӦ-:tP6 ǘ9mi7DjGߩ%CմY?>p3ԫ{~{g?}I8SڻGvk_͙;W8㤩S%PzIv)ƛyUGmԲO=†j lfy>{ֲvv^/nTԼYᄋ4FUFgmOBk^xME}mf dʋe; xVyymn?V@ QD ͜:{Ыo#CBO Ňອ kd T5-,hGs$kKz[3r)%Ǎ/]]{о^7pԥ|8x&*Wm2T^> hѬ7*W_$vXb ={vaC ыi7CXZ|ŲejKh˼ѣǀ:UoڨQ#2x}.Ue>p//[~ʆ j=p;֮UmmVmÚ;t͕WoK.v6];*\ްW._YWXbwAXf]-7}'MXk.ذnsƏ١]kW͙5c޽ ֽKR/竰²%eE+DaW:k1̭YQxW"v,a[_50V+IUIk۴iܧW:<_{uHZVmC^ҥs35*j΁7k^lkU[e'#bXFg2d 5Šd~vJH^ig}3NVXV3oq kKSk!~;nN«чgb;֔]RbTER{`ԨqlzG Fd}Y7Ieh:5XL6 __7kyƷQ;o?͞3&ţnjl֬7J<[oK4l,lՓ/7g ZϞѱC;TNN3 mܼƍopG#wڱuM֯]f NQFTj$c ^KV=)<{.b`_lX~2qCmO{PGz={YdWp~z=oӯ|%ִ91TH{yjfϟٯ %/[6%FNX82Kٶ[IqGVkƛ~SRD=nҲm۷Nm/C衇"jԇzsy:vH[%چo~ny#ʓy2eJs~׽W/?b܆3\̂x7{/dE /9b׾TvPg^I)3;u_ҥe?;/$DΘ1Fbx󿸲Ѷ3qF8j4TX }b%2haT^mBZC2(ħ`¬-3;@#.rk rM3H"v%+7K*Fckyxy > =axʹ(-U7&[o RbPR P;DL*X/q,.ofq?хVqb%nmЄU*:K+ ,1[O3!,(Wvt%\yb3^D->S:=73O;<͌T|Ex+;I$+iɥNP>`cG ]Jq7=I`RtMMB- [B H f-SP?_?v-}FXn⺱~kp) l1£ω9D F?Ƃ҈=iE)oe.ia R"@&NHƔ>Wcܸ141N:Ho!4d7RTݻugC}*gMUޕMREV4z.G+N"[׾Ӧ;n"JKpx76`܈"2D~UXoGl;g:rW~Y GO S!_gRF&r-5O>n;ݶl9s\NEd% FqgV) 6'\3g\/ E+O'DMI}ƺoym3+`/Nu2 !^ (8Q^Q/eDZ<8CкuKÏʘ1U;3"s f `6bX0 ʖE[<}cvM>m]6g࠷+tϣ><+d>N+W؂fl*@cO{iч@;ڠAC~q>QqƓ% 2yVsvObխK'awB9z_NBI#T |LQ[eT-m'7D*SfC #@/F]4!G|&P{N/7pH}f%vrU.Cn2kԪUv+~Iq]ѵ Ԛ>@pn%Z zZ,DXK{{BN1 a2e8~` 8=K1O"dM̑G0"ˏ}M|:)_ZMns="2Ru$+ŏRgײ+6nؓO{WQm t\+i)d-P"V>b3vǿԕL2fQZf>פ,PT '_sq4Y6 
SEYE#$$ρ)>`"¤֠\K@UmÎ;U)TK3<եKѸD>w"狰1sf8wJ~7Y6)JYLY%'kʆD|= ^_$4Y*fU@VCo߽1GڷCp0Zcm|^G;0ot3g3{I'Bg[4o\η]|Z$U|~RX/)(XN{{) –! "bT`=.dh}%J2%"'%!r)L8vZ<\JmUXO;4b~}=fXvݯڵ;Mq?Ɛ4t:ڭsISRk 9kA:Ѓtk wƃfMYfΚ٪e!ܽB' PZ7*8=B’rKB_~I`[:,%%ܡsb  ߎYC;0{0"&rW=xF.e[$;7Ө,U qÏ]OĤPWl9ÏjP9MdF/eKתP1!qld˛lR_; <[HP ܪh_i9!o I"nT/"܏58p;5t%ظ]Ę-ŏxo.X܊wupɋZ\dK/¯vjM!HTZ(Ga9{wǟ+5lҟM!ץ7l0FlG61SXjuԛ;Aȥ@H dU+DU,S{iYY48 L2tыC.mm?i_o$ED{6r 1lT:yg]v:SIc V@gw&wy֬T7vނ=תSs PvtFӨUZu,^gJYԀL#S֒Ԫުu S FL#E:mRil%a7kjެmA vȎPK3@BjL+YIPCMN'g2_ GSŚiڝV:˩J_1@;6Zx=Xz,JF*Uޓ2v"><%pwy3})0U@YE0`6V/i\,f[4oevġ, 3h,Rys5)9^dlO4'cE]CsIjԯW_mH,ZS9FU> 5bIg|A{>CX9. k@fP8H6YSअZnw@ g93HHog`E78[ 5ի;,+_w*\ȮU)ӭ%׿\#DFUi2=3(j"!بQ߂l@6m;AjԨ^gn]ѩdz6Qpaq50v*tb?6G' G&!U4=vÐ\mp!m3k pβlH4kZ S]NȓX˻`Q+D7S7E4eOS)l]pWNym,ogNK7xoh+J9j̘E?cgq}u! ~{ke5LH{~XK=r.*S1켛mb[^_2v#Zy6nLe)ҫ~>Q Ɩ*rY]Bɼ)#wbuK2U9q4H_*h<%/Z$SR1_oRxY َPvHyƠxJ,(D5++@)SfJVKRR5l@3A) oɣä)B̨rFGmJmmB[MNȢ8{&{ܦb$?pH54+;,f{%2,mN 4vl g>GFK9̻@ܸ߰ w p=/CJ0Y˯}h䩼^0tZ՗.SZ3 %r,rH0wU'&/wsܸa޾f }db6UW*v;5jJ3f<~A$-c# [#~5 qw$ ZAw,_@ 9Ih-dڱ!N5X >o` _ؤJ S"F!& EEw]$q.vM)6gL.~ϯUϫ\cw YfJ(I   ɗW|pShۦcv3 {<5tذkw-7oа}?l?'pK~ gs^Bc?kaSSws4K`F[9Ua Ȭ>)x@mLI,Tk@YEW$p.3m 6 u[/I#GXWz !! q<9QGd3SI=T̶Q!;!˜Z? )+W65\x'!,fzbC45Sˏ?L&@a)ζ;M1A8@PƒT wy|sx90ܻ'b|&Uп9星8 5I0ۮfMMO'MHza9BFAICvO$5xC9x[Ȕ߮gD-Φ^BZܹƞ!g(’P-JE2teSjC5CTA xO gjN1-vc]]s&RmareS¼ bBQ IDATBλ;!jLtѲ("RUSbUwʓRHV( ]r%Žظ~0t,w=Tou=ڑm+OxFPmp<_#Yd'g9y@X8'% f {mSb?on9vAI΃.G85XRm#^uN8أÈZ^z)9PfkVjQgb'^BQ$:06YŝN#}Ho[a 4,`dlaQKq)5 vX*>[yǦ8Ʀ,G@Q*q}Sp:_T3cUrЌv9Q"޾)İ!1A%1HKv !^h"r%9b|@x]isQGˑA} "}ew=0^Ggy䁲3G9 B]tr~dž[N:ŌzDSwx9cQHEcTG%9 4ꤌKlsJ N^5[jڃ T|zQ=*=omtňQa'Y."\*BhP,P"*Ɯ@yx} َ۰)' u F J+^!O&m9XWH] G A;cX'aa}4;zKbH(*q ?V_$+GE#D_+"m5ͅC/ -%8FN_bn5"Cԁ/{fīD<_ehꘀ#R=B!%r] {5|T~rh᠚ [-DBHD@Z:!0(_ BۈڀyaScSC\W O$D|P>:'d(xl~9r,|t O=WrA|S&6$ĈcA bu #&aH :э*!#*'87s1@:HS$[J3g?79 CA 0AgG: *V-8HX]q+DFfm>MAHL;HL{,jO@դ5"U¥y6TʱeBM ineImWpmEvCζJf=n ;3kͳzb`"z{TOUP3:EW3ץz/} H-!maDXpBVhܨ cTA'4peݓū UGcSH*!z{=O&ĩCu V|!&NԘk Ju+%7aa KbQnbzk4 p:$ׁjqN/ d8ƽh0#f 3(ን0j{ץharv^#87@{|PRA}v%d/\4NVq-ҎsɕW^)-ϳ?yph9GNmZI'2SU ZNh9iҼ{\w9I8)b-8;\Ǧ?O=Žُ\nEa,؉qƂ3pf^b܁APjܹŋ (w(h qłQ9$ODq$.04kf]W!FGDN VX Sa . hw]o2w&eA\ ]09gFsE@8CbtwAbR@~ꭁOWRi̙XYEtr$Wzu8+7B,PLZe*f5`+Lk G/z679FHs0`A MxOK&\%3kL Bݠ>/6/&X/<ϧKkAMC 3k-7 (VE u02EC@"2DC>K|{s2q=amN=zi(iE{54aݣZRR Y%+1>;C=7*!{H2RL(l@#9n͉vbj3L8~"0 .d ! {tN˃YKWΰ(X3[H) IB_Btg>姰)-8є_5L ̙:'ERfOx\\9QXdjC&V FckΪoh 'v1=:F }uE w)@]Ck1a܋QlTW鳫q'(̕ #H2.nᆠ+XHq!ڀHkۛ+쇁BPbP/l Ʋ~_7,".kMOz%F}[3g ~Eb!a]QJˏ{QYK@C.kyƿo=PŒc1a.;y] 03Ah##Fr.O@RBSnz_j8T &E!P1`#ƕZ暼MOnH11{]X7dRCF@BPB0JAxN2%D)a)m}Su4DFzpL:D 1E74wW_}5> S () J6;d_N "#4*4![Ռ LshDq%OcXi@͹+ 8 ,@:$Mbd$3qѳ(6]qi9- pg`N4'akbQd_H<x}ՁP :@swBTrPԅG9t}'orb¤z@+׸К`@%0Ĉƪ8nNrEdB`d!D 8f0o3T0qA4O!iX5ՊuU ?H"-2-r{!D; Pq*rq@j | ? n:ph ڭ=(&WNLNh5_`4DĚØM\XJG;͒ڂ Osn$(\\VΔ0#K3{0k$ջ"Z.*4<V?AF<@s,jt&P"uqfsjx s,`(ޘ[3Pi8 e ySnQC j"<#!@Bivh'/9991=ZLT:鳗H-1箇_2xG"\0mZfHc6cyd}a=CLoVݺe a(0"9 XKoȽH"Dx /GptPCzљ8w1&P qnvΓ?RJb yE 'Z˕DM;TaЍ\(r,n;=kAS8`\mO[arʏי u[,e,=̾0&XvI3P!L{LXK}<1)T<;EfdA"Yo~6 ;M߰_fVz-h;w?$|qcb7;q5o(CpZ6Hxl[Wݦ)M| / CVQ놦B ; JS~@!ǦmH)͡{XxXHДn- sjNs6x~eMlq^d,pϽ*'E. HY)"NAJ7n" @s!x4b mzF>DX? fAw] a>:.f/ _ 0N||!J8s ) ɧvIg,E C}Z] ~pQUj:ǝ-dX`wef%+peNQ."o`<7"Qd-^ExiAXZOaK+ _B֝% Dbx$B-|KNpһc}6?WmMS2!X *(Kual8zg,R-X )C2 *jFU`Z !D ;O"uPA4]פDHy+f˩r 9)&ݢMZs: uǠܒ`M!4q|Œ5iW"(*ֳN(|R8,9h2\|dؚI sPjptVs{`ì3Xh;\qp exBsv5`D.M64l v#ʪ*QRQ|Ƃ:, HD0ds[sUo"F#ɘ# jȚeu+Cg+s$|4. 
E9~qsft93nnzOG d:$Cx\Gq>@LxfpZaAyrK$؋ua݃A"zrUd3'H ^WC]DJh Ӏzn&g3QtW{A" J\u l=oo=d ؅9@$$QsinSp@G\Kt,V"}Z*B v0wy N(PPP@DAb(Dz{Pk!t~ qϲuh~D({!8 *!8o@H!}mTh`)h 9Pbb><.Rw ̹90]@nfO Aen`"9; T2Ǽt`,fIXTp; 5Zc8x;F6E"+0^7GЮ:f2!E c:"6,V9!Q!ę#7 $vrTVB\):yeZ x|bϑ@Æ#J"2 3,p nL7|%pR.a _ $zU+ĬI`XNʣ:u>pBd|{i8.2E \Hd08GMӪ##󄖑MefР6KE^ŢKbsG&8t7P#,5\H!@%378 4Rv L00VF!!+ݦ?iXs;#a"#넌>8\ɹ f8GxRZP(P+[byBpbZ*zIЎ0a7ɆS< ' ,ո]~U ?c̏UpC!cL p 2@6f-?l 5 `q[LY@J<ŧ~ AM Yꨚ?g=NkըB}_)3P2Un3P[ VRf 1*eZg 1@bTʴV5@bl+XJ*Ĩijtk*W2UQ)Z>U`U+eRѭ}ck_W T!FLkU[ T!־UBJ֪FB}_)3P2Un3P[ VRf 1*eZg 1@bTʴV5@bl+XJ*Ĩijtk*W2UQ)Z>U`U+eRѭ}ck_W T!FLkU[ T!־UBJ֪FB}_)3P2Un3P[ VRf 1*eZg 1@bTʴV5@bl+XJ*Ĩijtk*W2UQ)Z>U`U+eRѭ}ck_W T!FLkU[ ?smOIENDB`n =TCbPNG  IHDRzsRGB pHYs.#.#x?vIDATx^ummYB@11[lEP[D10;~y|/of~f̜scϚu+7?s=mnsxxg~fȋֽ,7tӳ>l''J꩞ʷ?s7=CO_ii}OO OtO^UܨD=*|ۿ=S?__>s<zX޷NUZۧ|ʧjɞT!(bI=kt7P ࠜni&+DAˍ=oЊ| hO.&@ B+<Wۿup :,/jyypnϽ \ `ծnJh[)[:P@KBr=LLCO|ETGTbSr G )GӰ`A>oLo͕uyEF +++˿/r/tͯʯ˾U^;z29xXDvݸ)pO/$u+~! ƿ|a 4 ׉?<?<2 b@JKm{{<5Ō"LQs?sKKW?__DKɟ>AgcF݀q7 \6?zN=F4Pl0R D0H=?ӯjv( y1U^ ܜ>^/G:M'I\ {Q OG䡾i$ PI}U_^jPDzvUnPs/S9Pz͜\Ei뻾 ĔzRM~| ۾ Z__(iMի5Rj`Ч0sj&64X,d(թ纗+S* tggZM#M bU^L*q(p Qf^Oyw(U"&y^~_eF[UD(st;`h{}ɟIP+S x:a']P_O6,ҨfZںӝdBnw[}OzЏ؏A"Jlνu/3p[05 Lu^C|uOD`|9x3.ULw%ݹ0|@~YmU֧z Y$Fpfoo6biΧJ>hE_EMG>mo{[O䦞nY_Yupը{ )cw]9\ͥ׍/!n#N9)Hje.>/PWxB3Jb+Q!p!Z掵\V^yWu^WBAk JAo5 Cg:W@s*n ӊ& FivgrRH&wI5:ۻX |qڏr^v0Acݛz> .ĿB^>/T^xyF!98.L}P'8(U˿5 ^^O) ~N (G= BPW'\%hQarT0?z'Je=yϟ0̖z goXɻjՉ֓CC7&*𠡜oSUhB_U^"X1E`gįn>ce<=*iMiDt^|z@^Qz;2PRI>{n@iQo)Ι>%'w799KVP>8>1=qhT?l&N(u966c+)%D9æq͑qsƿD?`Jq@ t/ 8bCXX$<7eh*-_:-'y*w׍=xAjr4 y>t:܋K.(9p׳JZ9uQ%~*}pMjU_UyL$X_ЄoQuAW[*|Sp:hg5 8!G mگy]m/2/+>W{<%>QIkPls&M 5>ᗴ2F8SYXo*`W~Z׮xPB֌ڧ?n2² 7 GJfyQzTiI@ qa0A1NVQǧG=ᎉ4J!ȏȏѽČ??WW8ʼnY"Ur;Aa>OˍΕk]W]+rm3ڼv%GP! Q̘8NS_@W Pli%Ay1ŏD#_U F(]R!U+qmԺn{J̛̹O Mxebl*/BI&^5.xA%:3S\"uF4PUgyW~Fw#aW|),(M>$ 0[W=ED>Y|<_|%h/j@XFIa:ENJh>IJ.PdP/(BCxɗ|IA >.29M Pgļd(4__{!.iԓVFvvE]:9d:z\#r>>iڮMܢ)IM]p0l m.l܉W-"d㘘S, :0Zanc5a}b $[=+SWQŨҺNU e)U/vTNGPӿ=4 W1Ψ+Qi5A(MjSlI ׄuDhmm+)-DЬkJnj1kJt9lImA$Km}UW5HiepAj >j K OFun tBʣd 5[gv|+-3eӧڑL%z4R ֛ԁ͝5[5S}#6;5͙m88t| hfG/VƢZ7l}N_6/u*ڬ|ՋX ɣVWK m21p\>C,'VxWn,\>d~~HE fcO>)NmBC2݋u}'sˋ^錝քu@x?E>$[ðn<[Z7Nkkt zk}f\*g_{t`0< W43m@^-ݪ^ReQ-eMݏP.66Z/ (U-^zi0*_[إɾa]UT*E]j}w;!9 ҇Ac7KSPY{`HZ,VpQG+ֻigT+}MVXRKϯ.)p7!fV$Ig-zX;'#ߝ@qLTZ[ISFSo$kc0QWBs\:uFKPX,͚;W$@=sLtUUҵj(`D^imڴ2[ժ_S-oOCĽM"I3#ڼW7 VBpdJ Ė]z~]jaPW7w&/L0ƖLJ%Pbe@Zy(½X&-{{>x&K TL\{NZ-Ҩe` Z{uY9Oj&F }5٫!1ML:Y`jXSP?@U^IML--n`2 ?zjH$==:vꥀױ'< hrD-`,Edf-xjzh(9&ʞDgp^ OQb4_[c)l ]Yt:4Y %I:UyW=__CA''4/C2t{|U{m]?<[~Z}h\As ?|#Cy6qi\fzЃ⌈~ueC†8 cQIkW6W:EIZ\)FAkk  L_'©Cxn|ULV +c 3?Hšs$D*ubĄ/̆0ٽj2 @4S1pTu&G[Eyjm= 碖#;ֵ(29+'p]({+5xE}ZHhϭ{7B&`}D?|NiZ%=i)<\$UhŐ( l)ߏ;n|FI}kBܷM3p3a=L 6phJG-vYtn^L`]d%E lc![E..BiC*WE{,>)Um2W$+jȵsR@pB3n|sBI[ݥ3bYHJӠ C^Q-hWFhVl^dYA"]UZ)g!zC6|Dm&KbH~ Zs VZW5+9#T%Z'eptrvYS ste |^ȶTѡ݃јijɿ׵NV>v5#1t?wT=7牠 |O?*Ii2JP:>kQIyT+ě.JW]4.ӑӁ+B c|~ MQ:5̥o+Ԯ@Dy^菪ԭTmG1{5ڲ ~蛼ɛ0H9}pm55pEae~]: Kq Xô`ԶMH?vS8F܍L+{n{*ȗlQ3YY3, D#hA!Ṋ4AoVF5y|Ő9;ew]zz^=OFW WEU7 ʜ$uP`A' / b1ԠO+7ċTm:z}QXHȬ*92Pd"[?+*L,tV؆_y(CO7yF.- N^Kb(_{[RF5 ~Y;f^8-ƵcTZdo> TGӁz̙x@pwz s cCfh>= .\ ٳX-k(G$5UXpӻrn L rh'xc!)xG=p ƿV~%YebǀW@jځ%k ɦaޜ' ,n CN9qŠY+_G ]e|*@I!WB#"w&yルA+q ePt3*=$xP@a) 5W\n+Q<8Ge&n5FO)mCh%O`8mIW*\sSl 7;21&7;ɝ9ct~;vsjcRye .҈/3 >?<$d#S,Ĵ6OD`>yCʳ|ڧ}geDMyMC'߼WYUg d_Пʔsd35Wo*v$T9 )~\xOAVŀBy\Y40'{]@).-Z?|WRK* gUb1f+fݐ}5id)񤼒O?72̬WrS'ZXd=.Nr&Nra/-+-MM7[ }4D((I#?#MW yXuN̜LY;WaܴI)5v5ei͜._IʂFhŤF~hh3nЪ_(`ACiGV2;n6CK\KjWVR:Y䓓O5)*ɢooJ~xדñIO@qpc: BގWbS1CZH6[S]=|fL Ԡߩ۴=I;&؇x7x7nȲ_EU+4-zQX2QAOIzLjpV++rkIzj]TbǮSVflD{C;ۻ? 
mPh hNݣc6п3M;dOݘ]fnvwy.6[Hfqt;| B)_:@wJWť񶭪5I>ɰmo{[·~e=@P'7-B7biMyH4\pa@yBG)mvaG?;E)E[0m뾉B+jfT`_q\CPsmg"}e=O5̹7(ƿBgH*z|ꆁcOӽ c͍VE[{bq<5S9V} ]`ȷpYI)iOu:wŮPv ZHs)bsa`EeN*0:Ʒ B:%ulx[=) 7,,*0S{4:1WC:uCR?mXG Z(V 5J{ll½#Ѣ>%yX бC2@6(VʠXaM-*F^lx@F⑅e@zӇUtY*r:>KXfiĒԱ C ~V%U*}~{{j vsv5FfOT0IQ~4%\e6$ޱ~ZBm,assvp{1q.QQ]iELZ4|/!,0B"Ukbα5mnU%y yd@`%e;ntU=zRaAor*f.5G)FamR` %YbLxRO|cT(֏U!H ___e! JގTut p_Ià!K0gO/ \OR6d 6Ĵ p >gaD+,eC.ˍ=V + ϩ2Z t_79.,ؙ.06ﶸ(,{wuR`J*9u3UÊI, x,q߾'f p[ bb/+HQ< ot @9,j}!L\},= )!w#K78,1Ύ2lb[ =>ˍ=*VVވJTT'ԉu""Rr>O @^H]m.0 Ym6dI}]9\jeHEp:yId<(p(P?Fr- xLgx8fvsu%^LJȯkCUm ryV4/RϷyBy|:MGD,ȭ LP&TH7?P/IU mh,t6qUZm0YUṕ4ݸ+kXղ覫mUlQܞf'hE# C0ލ`V1?2ri_rb\ӷNhdn5tm,0o!!@EċY\ /rdr$,^g~MwNIPJz/7>+Pa anS&)ndYz{꫐]1") ȴ#iLf_ck33̵rd(%^/kxJ?}SĴZjؘ(´qV$j`(P3IIo6oŽ (Om+>C\lRსRoȥҌ^ ʠ!$]ŧ|I=oi(wmi ۆخouT.DЯFQa\+xѭƞ# 8#X^6N1 /ġXEҦDD(e+&et@p F>7%i+Na'^sYl FЍFr/q C,tae+u~X-g-:G=}a/xwy\lIm ېÞ:pj"Y2d񷆶1Qd? :LpgOeR5S"[]#\ʽPY,XrUDpA݀e&` ۃ5Q(8bfoTioiPo‚~#y ڸoVoo^!3$GCM0](u{Q4B7pe:w|N! 4,nѥĘM &̳ys?sa\[VFZ++;uì>yw)@=o}~`2zYO""<s gK<ڽķ}6dTܔ/Bƭ~b{{6nw -o)n#$ݗ;hmryݠS7l1|<&h bhwA %\ &s%`?JA7~!xD6D&ZbIΒj=ηW4]1P_ {'d1KܺA eObt iyz܈ʡhq"x{A&|;c 7)YY;}M gF?ybT(&C,X,//7+zaJvu3Z'3w`)syb1վ}: $RmuQY-6!p6Lo]fusuBDxN.t[4IXn G:=õ(m%ڱĸ7 pr7^Cm/XC=}ɗ|c1ַZ]dícafuۅ\i&>;J7`@ʕ+APYɳ %$ ]O]#RV;FO+*D2>N?!OV;_k&0B#@po(q%9`k({'nCF-*5 86 Wk\V3E:RoSϦ6tΌ-Dj.kvU>s]Bt9)3ӭG„_[u$l@,+ n0tީN6mTtco= t>d^Ϯ{~UL G I{^G?=oC}B+`saB7i27 ޙnJ'Cya3?xlRmg.&*^4q)er[5073JS(AZ`h7y=Z ҆dS~r;Ӥ kpz5|..EVԽ \1.eNqBWIV޸2)ʼn84SQ"_-89 AԨbEQʔ||ͱxޮY_N+\Ky:I-GqaF@;gA~.;Hbd??_=dDDYǤ{ :PAk4 iY0n&"L#`c\~@lDDTOq @tؕԝ/a(.tXx@fتkW^ކUzۂ)ɐTO^y(]"gyVY$|ExY@ s6#o \ ߠ\ZU>zTKzօc;*o"̒iĉX\-AkwX~vwZt*9\ӢsE0,]l_lr7jPeoKd))7J>M,'<̡ˋ^ ^+ˡphii66kDFUPU] xxMsI<J"UKYFt.t7}aTc3 H'/L->`szT;Yv0Ђxs)DJI|.. )* beA$ $$w}1W-#uPk5d48 LNCu)ՀUHض#}5 EYHG{7 vWʭK^ٲnntVODُܡ(Ɠq2]Wޡ[q+Lwz^+E ˿61cI;' _)*qOd%(p,Ҿyd;6?ڎ"Wя~Jh-pU;;VVAf3>3 I:%I3Q yյnd'9ZKaq37Xh[ gcemr'ScfZ;"ސ]I'vpI4NF%rixs@|~Ecx2npbͷw앹j07GvPӈYڷgP:f󝵥^J R$3V;$bbijYM"\+ԉ%<##d'( )R k:fVzZB=VRO<ğ2B Eni7Jr"%ceYVQ~r^C ~Zy^*$yFmu@LtlJ-DrD(@*\W~>!1-p`5`eO(jQ3kg0m$d6[a" Қɫep"[Ne\AI(ńD &s2k 饥M55 .8 ǖ=ߖvE6yrPMʭ{!)SV5FnTB|LN+I]NѦDX 1J N2l^(׺u^,$#飀l<%]0BOa,C&x7!Tt 6&9!Lqo:5׬AspH+Y XiKH:$aL2(^o3QbKbϮz /]nPn"a3K8nM΃6HZY`Q7 & M7U~jWBҤN[7` 3OGŨYxԴ#Z,9e$=ay2 - E='dɖ&t $)Āwr* c&lkJ-!aFTOjW#%,-mCy]lщw`F\]۞Ciin&T91 {ٓ&vEX\;&+3~&zA .N81 ?EB-f\( Ť-Y%DF = vXEXY^El cU40!+Md_"#z+)J@UH}۝JTHrH l!f,m1usv"Kh.6.QV1ɆRMP髐"Zk=dQt3x֦iWVH q fdՋLm)rLQ;MEo[]2z řx + SuB;u,Hy`\ e]P`rgKJJkN+^gTN8mJn(+!|+8aӚD泯'տHBD}ٰ[& 7'<i2:r $,FEO=#lbB'iF76B(Ow)[BBl([pSi:CwȎ88 $ɞoo6{W_ X.-Bu gzon(:/r^$cWX[JDh#|X/CGmn#]$]2SXb-XjwjB)(v&[ %5T>Ih;IEe)*MabȳׁK?+"ߤX&OGzBڴ˴ (!u6f9F,Kq;L/RM9=7L&..nП S9/e2 2K/`ܕ<`f1_1`@j6/J7ؤFg pǹ k+rk\#6Z`Z`m,ݶ\`7E}`&&</04_ o5qkCŹE$EF \y!jiP\ o8{nOg$-2JkC%›дeDXyN7w:-YEɷՀ2$da}'+0 6U*ۉBZr!eu2o,&P}@g BvP85g\ʃhCʐ7I_FS>9V,׾NU">GgIdC[SAsDA6T($Gҳ]k@pF0~8t)nK{h8^J~É=N¨lC9KCq 5_kl3b-O{{%/'ɺ c[ 4g$YEYcl5 ~.u:!){ kuS,G yIʙ{s<^+p._0.eD ~^Q?C'-,hE{9YJ{bCB{f:O12ȣ9lYW/r/U\xa; ^i!78,F};=̇CL;X*]Djh X+ݶrfΚ,kXEb~+!a'VxF0Y!=Ϙ'6$I7%}ͧ0V{TCBkKˢ۩~ >6:#I!!*_Af@p9/A_`egIOV3ौ> >ʳٝҡpRcԴ/sp+dlbpRsȍwF}H˕%|~TAL) &][c)B0$Ll0iҪ]W Q]sC}[7?"sIۘ_7%JMogI_XfśwV*YW9D! ZO2Cc!@L(]#e'Z>dulTFqpQB6XNQѪR$` /݊J`Ak3J/k֩ER?w,a &VVK!]ѳgnyt`SNKR[ ҹI_dl̀OȍM aG.&$lH9sZKP: *rmѢP T%g:I]9:Z#8Kf $d'[ nwK 5`V|iy(1P,ǮK*]JmԼwWaTfam<ʿhuB+&6wΩMtR( 'rbˆ" -)V«Jq,RnHiѲ|}Yŭײ/c,݁8U[b!ʴh+KTp`aY҆4v̾XKI\|1E"}3`K5a{R+a~ C)znj`Q4+ֻԹeZ "o #7?`ΥsĮˋo?fȍ'hXXSF;`WY[\Œhkvayg…NmU y;ҵ2$R +)45XqkY,/˼of7$x.I*˪fY5b eyg_+h%-رɹH6Ul2Udb\skpqvN1ƞG؜\gvŠFBv[ޥ3zHDZ⸩+26rvE KnqYm!n-È e[Đ3,)XLi9hLTs8b<3@? 
^IMJWVAy M~6Ec7Dwv!]yEf@s?&MA KeGwy%L-: W QimnoGV[D{ ˈ\ޚKA0Xc1sީFEbpY~W8^` | `4D:x2L>g}̡ؔ6m iv%@UOTb#K0G\2bPF xX;#Q)-mnw"ɿ4#:֯-+FA;0sZ|yoܙ~TߚPӱ6Q$BԴ[~C" ш:|x8qҾjtȆ*ʘpheH'7 Jϯ*Y K|g9Po|~ڑ{7f:5hE) 92Gmncq2>J#؊Ă2@VQ1\\KjBEhQ},OGDj`ƾ-18PW~'?U1:o;[\j5pЌeW_"()4N/IU8#센@E† 2-"MT,:e2zlq3Ǹ2]K .vAxR3G@QԬ<{raYh2N vhw%k)(c5?= X;>[6ŋ܆#gU[L:yģ+c<6')X N^dy^I^շCtNqi OJ*x/#U;j"Ae!i._;DIE`X(XƠY o*4olͅKaM{Yؐ!vXJ{ȝpGI fY52.BRHhKV rg369lzJL9V͙keRo!ZG D .O"57ĬQ?YZd"$-[`T WOJ$Y-0'䈿|@J DrjgkRRu'זqt W:OC\2 5`:pS4ƌIޡ#r2D1߰N7hF++57lUBJ?' ]?P-VWy?k}3-)),W޷SvkQPHgNߦnʱʓ,^3z09Fg7jL\_F7 27_4%D%B(=H A]uUC*fκiGNoE;.yW-kQl ` NcŲyX1eH tnJE2AE\*XkumEˈu]A3'bew]e0. [#7`+CM*5%e_'k îtHQ[5UOj o%$]Jʙ*BԁacH-Qv%h(ui9QB&΄eד̭'@A%Թ]qốL>p.$P lo%i>85Z0 ^>+?>[lŀɪ 8dX[ֹj`k=ZERŞX`;"?X=_'m2G.g׾y [@C:#9 BQ}%ُ E&sB[XIU Չeax:-zPaG\bA{[ kTQa?X,uF<,R0JTX*@ʋ&3vQUA]6 \W*%/j;h R*q̋ dNb!쏈"Q5Źi.s< ˯F 6B 2Le_*7Y#Qyxzb< ԕ0rDKv~f/AؾshǛ ~ mrk^RHdnrF=O,B@woKS 4Ʀi0mYbInH)lەmPR!6ڡJpeN, rp.c8*"ȕj-^n@VWiȽ2񾋶 wyW qsԳPTy4^Db1ۢ6:3ޫ򞴶 D9 \r yQ%m~ΒC,qUK DDY'NhMIl2N)š- Os6=UC^,lCAzÀPE.g\!Qlw@jR(ߖzgzE()>ж_o~גeD,y)c!6>Z#q"K41,|?k]CݖS'RUdWVY XI#XR~(1kI͐5 ]'^mPlfp?⓴_{+_|?eYbAd2㦹l"Z]M*g;pͤLOArǓ)Ev qyX~ ":ߺdO߹PL s++r%gh6_=] Q#q 6gZ_s\%nĺA"#ƃ~C֯  Y37~csXqh+oDq|R_' L[JJrENN ~)_3 Spj *)L.Em%x,fc(@Q'O4ҫeP&Ngm8;ѿKL%Г>4 [%> 'Z_V"i5 Xp3J G|B%$4q&_C?ޔm1VO+^Te87$=md2##Q@(9mW 4|:MXA.縨 R\m'U{b=&kǖq:bkN937ö䰸HG X(a e" j{414)f]K^ѐ_j6\ tz>NVe]{^ X0+[ "S9zZE% 2$LQPWʍms1X^JQ5!2Ro5*(&:Acg{ A?9%'g~`"ʼYj72 )CMOu 7ytsytrp=(=|\bKt+H\f[lC*S vvM+SVXٕ" $Bz695DMAMp謉Bs" yZN@ٙG*a6oƑ5@8=eGsLFIXY]Pg/"B(_3եYb!@gm?uKW̋ AyozPc̚{3 l +i:=ar!OݣhqpYR W*H)+l)+G̗SxvYlI?vX,Vi&.2,mOPE T@K Ex@:YvLO\iӡ,ujDw#n-Hh$ȫ}L䝘:}7̨T=^^Mn^3MA WV-U[-ro[~\Qy^UմisWmh-*89FIsa# V!ckrg$}-ƪ-o5=n93܋#$f޽^&qWʰC ^\b!͖ͨ"j y\ߩjay.HBz4Xmy^oݷB76]񨟮2$n@!{Ŷ"J.:~<e6tE $\D7RE 00ch7)@r+WT;iغ: zl'?b IBQ4&ɵ볷\^e1}e ʹ#<#mk7ݮVV1T]ɬr0WO Oӎ4Y=q|j#b sג+=)rVf:} nYz%_J?KzJeMf9+>5y} O|kĔ"3szXeb0i$<L|e.TTfH,]z{3t+ɬCJ:};*)WK*(ɇ\Xe$#[JgUkW]Co-"e Q?sRRjUD 5Ët8%9'C)Lѷ\u87$FoM&ucshه,b#cCd^j3ԧxeK96WB b^++~(CmQ/8ߤYx͘2d`Bz3)H4yo1Bs}0 >uJ H/)0/#uÊ;_bsKkL_t\(L2|Aq85;N¤E +2M(ߤXߥq;GKk\>m*^ay.v*$GɸSb⬽^ȷ*ssālvhtik>!E=D-o5) J.1q pa'c *ޖĐgfd\QyZqvК}M)vv+8H;A> S9@LMl0ԯi&/jv3kaaD31Hx ,v،= XsT Aʀ$ ȴB`8ѢӚ>d\ J`׫9 |&L{PE|ڞbav'즅^nZ+OrsՀm:=VcYGM)62q ۗm]z#Un0!)l/q{)ӶAꃳz$"= ~@-"ZKCfo(53v^7G@sm"UO=ެ떥@Uͩ.!!|L*+\b4t .&{yp)^-KzS>u:xz 9L|&n| B0Ĕ%~oն#񚏢82~Knwgs/ ʩ;u,/'p?aj-ŲBe ̿`Y}u hy6 p܀$4@UryPZ m 4)rs,ˆ24 N[e՜`t ^z6q|#~tT[_cgՒNX ̌ c6+OU]Y\0W gFRZKƳg*iQ >S4VUܠQ[kKg:Ied;RШ1xE)#V]=,d2G+ZK-sCt:EzJ\&-!ԣD`K yXy;SU "G&>xnMo\JuR˜M񘔅_1v,A'69:{:ϷbcXUp˭ 8ϋ O󄽙NaʤW&AYڢ}k4-SnS{m&rl4L(p kk䕴oVA7 sD9j5`a $]2J 8@- a{Kٷޒ?A!IStGgZEenNjB:Hܓ֙CMYv"jsψל{0~F2FyˍU LD ΄p3?l+,~<w-;ډޘ`-֙i w<,OFydHom~ Td\U8Lhٜ܋ܓpgjhK3NjgpXBwғ1lFHh0W/600Sqy$7lKO] Ms8p.^[u2Wtxko!0E}Lͷs9S2g8~lKQ+|_sJ0|tNb V_)ILuR[R %l!\,wڈ46WMfѕܡ:ò 䀪L/"}K&Ά)OFnUPîHB%O!E_=v>@oۢXPNR\k,](KǑn`ޒ3ؾ+Rz֦_)$@-Eo-,)蹋^#)$I!,aӕ ^|(MpĀfvnEbq*N-h7{UeJpO|y+ڕ֩G tu!x^c4e`:vC _:/2x^I5ؙnJ+ɯO2U[H[ĥ&KceI>]Gdmi#qL=yn iB[]n`[-LrM_BUr%tOڥCs-2Vϕh*iJACpfNa>u)PӇiCU ! 5˰Ni`QXZTbTX9CzpDV|3̒ZtZKfL`DŽ?Ae4aT@ ȏZQ`3ԾeIՌO%TQ4`"Wl蟑,Zl-` '9/. 
zj\"JՎ+X8(aE@l -  @头O$C6d$ZL}|GJʬ zgQfآϨI=J!U#`F0(`!5>O[燋 5 &OGX2vsȇL4 \s;HyV3kq|^!Xd3'TLe$"hư$’{u)ѼEI<+õI: FaV\ƀ`=a*r&bDGc7'uvkQaCtEΕ,4@x=JCG8uMUj'%xmSsopB,( RڐQÌް0Ɖ[>i;B%J\sEܱV4={ [a/pAirHB?%/Nq-#DɄ{$F=r6dZ=7jRN^\Ay(px)qBoPCtY+F$.sCϿx)DꄐsacМb&y8KDZ%}woM޶0wQaZC]bi9eH[w.PBC6pJLAl0rgȀ1vh^FrWqYMI'zĉORWLexka#P !)i/C&[EP)PveK B ZcgdNRuVH uU4p!h^Q|kPHYU˟@s^jb TkvrwtCQx$ j:^A=6<Z몇5Gܔ/5./ E(>)72AEF^D/.CD<ahfdܘ; dLɴW!c4|qMK SNqH+ݶ$uqZZD1؇,E@ #q~7iiX`po ~FK')CBI`/ehd xx1'_yRl1ifϩ7-LT j]H hETIE=bR@g8 b $򕘀V%N|Ae#i}lYow5U"J Hxvu }_W l?$aT}l%h '+ azEK)-/j Uƨ[:y@GoZ".FzEY,as%QhJID㊡[Gΐ^$T."Mm>NH+[a&q}9U'4ٙlY_ĄwUs(<xGpi ks|Ŋ0o4\W#څN^c}ob %L*(\S/b,^baE+2e`h)'O,ɕ7aCKŒg&xj&f)!5luhr⫵"$t B*FOYby^uEd2A 2#odKnePEI"to>x6N  xt՛+*L-DKz,O$BфBYH b [YT+JT[dBa'P#bE=VdyN'WHZZL i7$VA5} Lk"rZ^˳=C#- a !oJeKE@YV2IIvmv+1߲fZ eeWj.`D|!ȋ6RА-d`n(~)ih,a/B $)-̝_(YY7<.w"h?+&^LNN"8>cFJ.BH$zS⫦N+A0G#m(4Ā"Ck8 tJI7Z34v=x 0\Tm _W'RO`0"cw~\tId Q9 /!y=/i!4,4Li ǍQ үX2Rf%!=}%w12y+L%M,y$:< lN#"HWA/fWG1|1ӇV2B[F\LZt( 8 ёE*\mHO#AT*cd$B$2[L5`$'a߉XX ar:8AtnKI _D8-*I1Z/dV0T)g,$@Ht@DSŜp#I kh,: :`,uW #>##A1c ej晃H$ID shw~IZ<- \ "o @+$G<%PRhbD(.uH5 .n)J߈VKD IX4'!7uⰇ.NUBBydI"Ʌ(u %g|krR$^D pPJa"DExj +hsx;}×˅yN&\)Ie4Ii,5+"=_ 7C2F!jޘ6I%+D!: PC߸${>GTa*aYq ,' Mֺ~Pp?Lq?sjg(ӾȫU"Hufгj1))! s!R*Oޢ(iTCt i,-y'3\]it=!5D7 ԙӳ R ʅy /}-cǔ4jODQV7 ;% Er ]VSxy$\e3SL-T[v Z#ƴȮ D0a1U`DIl(%Jb` 1$tgD\*''ԛް~GU[WU_ LuBkJ YX$2WL.K~QjuԒ rOt`mA;4(#>pjDp#$U>5@g+2d]!X$4dX{""B${/"B9IA.EMH _KM0nδ@!: J 8Zs@J}Ѝ@Âƌu~b9Q bC5H EԴ+W`ZS?ȶq ף{PQ^%.3M/KmC*r5FC6i(AuN6[DŽ_+ЊP/?tIs(R:m C?J`|uX{B9js1rq8+|9Wީ%aOcm."A@ ʒS?M,'I"4PX}^AY5T$UVgz? 5$ #kH2qqt\+N&H 2_fę5$ÿȷ p998}=PƗAmM  wϕ!NE։8Ex_EU)cV\e3k:1k-$XBLB Pmkcb;Ky6߸i.OD,6}@774J!B+V"̩0ՆXb,Nag=l܏ zN5!NB._1DФTjS3`Y!bpKnonP!3O&hN.meHuL=) %'CRWkp{@Rr\G]sAJ9S4x| -RW5}aE%&/!i(SnȀ' "O)̾LA Z Mn5Q r>oQDH\EM4&BE쳓'Tf*Ykb6zÚЍ'-.: (T3nV3jˋzҎ;֗8A DtɲhlrP' J3Y =$^DYB&;7Ԃ,&/3/u26K h˾MJ }| K1˜as]m3aV/t"^Ќ'x-*cx) y=Խa ]#kMi*Rc8ICKќUx( 憮Vh)Bci;T_^ad-ec&iZB= Qi{KKdBp&hMZ B+jf@R╭"xҽB$S&z[ Kc[k|"h@[O4_ܘjJ!*IEj; eZҾ"/Z0hο5eƬH8| ^UG8+ TUg/Vʓ*.N+.zH<2#]h!o.5D0|k\4 UByY f̥)-NP,":o>Q o35K b~%>эvP|m0)H*bb [:a`n Y9#=ƛ;h"pCAcԞ+:,Ɂдk2~\E/3SL22gh Unq7pQĎmߋeFKTh!OGZgM=*' 9džEn4mhf7jrlI"I:+E#Lh4Ⳑ{pkzHZS=MV`Y[,P+31iHAi]U-jL8wz[ "E8 \ Vx#)0Z-DCHr'"N.HrY$6w:UB? @?`j1]MԹc!$ldLv c$ 7PjG:I9BR.F3Q8r ՠ{ɰ{W]p*F8GDn*% wPVp,#- $e&[vv@IԩsɳCsLhumRҘK]x *@>yM ũ&ʠMkͪ2,JR AX4{dxlZj ~MAHR* M~*G:@J 'JemE\ Sl\whtovW &%ĝ樐 %xvSrY,w|IA `WlC:c-,QB^Ϸ$b C(_eZe4qO6!HHxq҅޼8FȮ8*1-CB9\*8e!" jaDPGZطn64a"檐Cd  V=Z4SNC4s4UFe8M;b!kaDm/ rAjc|:cߜ)x$Rn@(R(s#i_ a@y}nsfX*YTgyZwn|[[.S=y*均Q=Śb*;Q.evZNކa] Z{5E 6GiOx8<(3<XfP,NyPk(>1jڿHFD,,On.6\'ripDEi8Gnx*`%t_Z3Ba%6^@[NyR>a-PtFtq $G)/34* Kl`eW׽_*5rgAEL(Qê!8JڄcL%x&A%?_pw %h|bmdĩImtQʁFy@d,k`ExRFbE:\6QTDsdJ  p 3cP|L `eNaL!T^f^ {#7ͨ+_Ђ67ؤ9;UNZq9!SCӇf:Ms2GF.-+ Ջ>Ө9ֻ"ǡI wakK4po&͗!=yM@7j G[]+uCT8R'ugq1:]c-()Zʘ"M2DZ@J ADa|%WvT u&Kc~`PſPCԘp-Um, j߄!^W"-77Rs Fd=P`[Mhn]R^H;f* ,KR$/y[\z߂u$ ,nU[L]u{%6 }䦕"){Q ByQD(#D DH0+@^Y~>~s,@s:n.ۨa(JNhJ)] Z QhɈ$<"Ȣ0ԥK9Q[Y朤B4\ZmIwig"Gw=DU#ǡ$i9U Ycc*& 1›VĕPg\o!/*>4 EU x.L(*仡Og ~)cِhPmXѮ(@MW4J.Y:LgTNuӢM`hZW[9 7|,_*԰N BU8C+xʉ0jd4ֈK Ԙd[^$)&ڰB\Bv}ccJ@A\|M~P Xv\yt0 1 I7P7h5rSoZ\!J>=5Ej>Ss;&6ը@FY(dosܯ~At}`cMhd pS A70cmz(y{=$UoY?4*R ax؀ĨSkL;7 $#x]Q^DL*aW8WS`=!6h>%@M-/Dn$"R$LHk;jܫ0a3c2!BP! 
С_-V: XuUI>Ith¢z"yAYoo?4g?A_ݣ@iHT`iHnuB+R91jC֓rs%AKwiS`_"~z% y{-zB@_O^6Y 2 Ca: A.V Ԗ!>r-+`*:^!Ā"ZCk EpPmM2oY;jDCĉd2vDPR(gtt9OU`nxad[=4赡uRP ΁:(/If>Ap$:Xy+GB4 )Q fRB&@}ՒHOWۖRa-0XvSEK( Y72Hr%9aUHË{JpNDPLG^;(!2|ER([G4Skw\MJ0"dNhc*$r4/=A"E (v,5Vg3ٹ&vੱ#`: # =9al@C#MX0$64+z1ޅYtDЇ]A=|V~s"|[#yD1noԠ܌zqFdVȋ8c'i`KigpǨƃ):($0d| wy%X7+`A`pg>Ƙ?3hrn"#d  GY[& uɍn_` -}=WU3 bS`˅yErD[O xsTlS`9)RpEI c)u@nt#1N$X+BմŻf-צ_OTkǼK=.\,8vC͚hp0+Rg\Q4p:7ʻ${5:)cZl*\*}*mC$K]yl4PaˎTeȮ$HawC61eYBcȊB##*oèޑr m"Q/t*^jy tީy|Ftd Ko D!.-'^YtG8EU"!!-p\RH0V3\nd ăԀS p$/9J.a3"-hrLnNܽNrbͰz5s(Ϊ6˒ve4bI*j]Jը)0qK[RR0~QxBtxГta. QMp(C0] ;9 :~ kNpjMP}xvx+z 93UA.y:`EdIBHء ^1(E0`J> 5 - `TAaDT_?gNj."ZPOhZӎߥY@y vDmP>(m-ZScoQ!!ff[dNZ=G%- p7u75U1vuih(`vB:9\}ϟ"+0KJb "[+zZQ6ސAV~><4(I[Q _ v0AˈỲO~%:chG$HPsׁک( &Q^C"@NkK +&(ef9&QlpJ>}2^' !e)9*>iڱ_]伳[UX*Tw'B?+^O`" !}{XƎ y{8{p^>Z4"-GDKJ"+h})?Aؑ=y< N ^pM2bT"(MșOh!^i48/L@F)k2`UM6A& ;C(ӡT7ג-M&)nMMˋ{tZ1Zd]zUI vTJ*JIȈP%DAʳWb"iAA! IbneNWYԣ6,ޒGP^UeTXaq)|7v!9BֿP 23:,;Y>CtSRS1e$Ĵ5J%UmmQmQocTQ`nؑ x?oC+ځκݮosPe2r(z#W.-ZQ* RaNs,; JԈʅ~»|D ($G' CVk!"+;d3Bʥ #$%, e@Li<C=x㔙%I_g0UFGg= z˦KJ"NL%/]S`t&'At"1l>FBph~g1yR],mujV`G1Iz\ Ԁd3Rl9g NntV;6!B1YyqR$H%<14jI\^ -bP[?_Ny|vDBef rxefY^5_Ƭ{E S rɼj P7HlB if3T@hVRd$Q@Ј(chz+ IB@CsH^}3KPr;q%ngir\e( e15e,/ bPI?̘eWg0^YCL2.7^d9g qNjh1m, pnQ`ƕ'@ 67rl=AJXiB9@ Ur|zn~p ȥ G'Up 깇}UoRv1(m!*p ,Xȕ#9~ 9z˕୘3FuSÞ8D\ꭶ~*vh2,>hEKϚjEWKS#]ŧ XsD &GEJ" FV7X{_dEWϩ+gE `˕` /&U'2><-SҒ q"'9iBC47zKtR0Ő%!a;t`N,`h$d5-/DH\b"W੯cA!9ztxm7E#jʷutzr߮B=y13p@@*/jXАw,-(Ukc`(һk";ae1.O4µ钫B#y:^a4c)l,~ aaMMDc5-VSژE tاB47_oa[ʬŴ)4AV҄FuFUM^&HrѹW@%P%ߞ,_1!y^>)+9)BJm13ڄ#d^(+silYln}caARubO6Ay=?+JJSJ`aL?@9BM/-˘*EBCJxXcϖ5yE}B w`Hx@[4DJڈa2P#-"FD'&)**F9WcHUZ =4~h  #j8eLԟ+5h_D~@$鿛Ri$lj6ԯ4[!Ȁ!J1j~s4p ]Z:`-"!c+\a76^.RiT֋~qYt`MO]nTObK[%{H ɟuohѿXyZE5˲KW~FjShyI8wXkW=z؍kۘ) 0'qt֌P9.<e,_#}aRZY ^IxNl; cB^K˿tИZ-P cRV:!o"w F(^%" Gb@!#X$0|4"XH #8(+f{OJ(Pq%yP}^iW1˽s24Y$]E"5e}xO>/ѩ{ƅVzݴ/߼]7gOEܦsЖz I|˚UDŽ,pO+bL1_E$ ,sQr!yע'ʑc>nQE^0t:#1@㥖 _K)o58h̕X)񢇼k|{0V 55s1M\:ٯr ,~.b6RY-vo PʓY-Il'n*9^x7ݓ }jhu>ROz (;Ql--PhL&:l0T ؊9qg 'ʡ %'H$2~9V.' 7d3TLQ=:2uRIe]żҍzPCOV͞g -P^h 6Dn< mm&g>WoQ"ܿ՞gP m^!8-|" FpHJ gА 4-FSC<4Zv}¾RLq0n!!l5cr(@^uU*|96 @XC2[n#R7" f 3r-Wf6.nQ*5ppM/jF][=rB'[u( / Q"@ AwK2ȚnVq/o2U]pZ8-F%[J' ]`,Iu ): .g{_565C-ʇi>բAƳ-B^j Fl'`j'3,su8uHaKyh#~{(Y(e ),t:d1CVځNSƽafE!c KaA_:17$b{kI') 5=l>x8jA>Wl"r0B $C)S<,W&Mucp]8nӤ$*PH[q4/{Kt@1FaD$s&ܴxͻ Tm(skkMgP o#֛0 Lde,}-,GP22Qh'?%}C}C2Ui @+fhg&kU`y0IfTB;<L>&CY"1 b|_I`zEhIO cuD fzD IdEV؇ԪD5fL̞3%-҇'佂aNBIwaL'ZcGFwjH5Q̞g׼癥at$79N|fJ}<)ex"3`"td/g6)2&Y.\$KErqZfqЊ$XVS&"Ä㕢V _%Z#mYFah@ HݷL .xyoZáZ5,nRI'(ǼhwnY ,%=֫v8Rs4ΐP2VxlQVZ/|'DA2&؏^/D2=4M:w֞`"0R!!a.O|.Bt`W!z2I %`)t'@QbLYWix9OLQ{HnxUd6OT-뼓HEIz ou:͑BTB%MG ÷Ҷ6! ZhO|7; !=OQWX@OlG "SPr NyҮUAUkYSr9Q Lo4$% a6o]-)F<y˔ ۥZ3(+ZEIoex>M2M ФR=inAf YL+Ը4^Uj}T$]&eֻ~rJ*Ӕԩ9]:+#e 3}< L". 
}Q.f)~ oL8$*)%bE v>5Ӥ*֘4m^čIR22cErhNKT>c7xCd1RKk FѼHR8 Ciե\9OMňB$D60 ޤoWX#2*cP^lkY}]t_UG%l7BUO |.V!ngUKltȚI&`we <_I@dK"`xZ&JPCT2R^ SEg&۞(Ee0Q,uw#!ؤ_bȦ-׆&H [K1%}@xȜ [05؀|(0|"(7L?[S0DċA%H'sAS`#5?bϹ res 1=a^..38y뼡qo V+1VO)]ꌉ~#'Ԓ?9#1ԀLwq Q+W˅8yR䞴s\*mn#A`;By"Vȉ)>͞Jo窢ɏK 0SEQ$tkiqN4gp"`CF\s `0kb';IP^zC hen:"&\+SdDKB3]ua_ D͢$$w.V'9U_N\P\Kml\.b-q]Z97-ڬ2hbH &$ }*GH[}D<Т0 |VɒXYJJ{La 3HW> uYAG%U0rKZkdKEt,'K3̸)*׏uhQ%'QxEik/'.&B_^ .eH6ҕ*2MSB4.bCPY5'8 N6RXԢAx+nyY2jq[OWM Z8K>ɕC3Pܤ9$^촓EWY<2_l|Tָ<ԁP7JWj'!>Y,7YBhގxITgQا͐Гo  D8S۾*50SFKBLc@gprqȷlPQRE^)"Ci y7 y;KX nP$RW+ &ЍK 7&P?(- u[R?X]×%J!1sIHa< i;8b-'@/ @gm:3-t iU,3 @j&|FI31JKTA(u5]@5.N3DS?̌uO͜ r@ N01+Vk@NM:Rέ{U~MyCIxuCHP[a* ;YC3" Dm8ĠKؑԄ'[\d:;@ H:GArD(CN B$ chjZ,K @\C1(g!te;FLy TWIDa "mdR߲fĂBy%#E q[oKt( )d[{ ڥ<$ټ$\0t>)ˀ5\0SRh.JeBĤw3R!!r 1[W P͜[_ X e7FW}YP'_M:ҡCE BX:Gha1RF2\z!:'!<, cB^xD9#+\ʹ.`-j(1L6"i ;롚2XGiJ&d!iYjd@aֺyS 4 7eDGǽ_Xu#򩶱p Ƃΐg@=p:{\ʎ`#TE$XS:3OHa/WtڔZ95fҨC$21XVE<)h'IuLbd𦐾O JJT0!7my (Li8thᬛCE VmӇ.Pk/%׸1j(-^ a8Jd:/Na`8m_q[a^CX@E nC|K𵍔JII-O602c)-OΠ\%7ۖD( E d!y yܿF73,(J]L](T,j8/mO[q("x_->{+zp+O8_3ގ4ҁ2$U{ɴ\_k(`_O`%,k2R]M b*ә k ():sN[4آZiXXUQf}EOrIjNԊ@!'~5ˋ58]@MAd Xmԏ&PHqpP<׃N2,Hp* rNp>9,D~V(DVX_s*F–G 8Dd=Ƞ+3`gDS7X+%uՀqݒ5?UFmD%u M&z(ć[(7c)V@pM7RWq~r{;Q%qRSA_RR}if b]⫫!*lY!1MPgn.NzHj+O $eP<2l,'HPN=/ƙ3npZN[tGBEjtaA6.9Vv2Wj\ѹ?ܤ_l dsQ=E0 BrNͬo :ثcXIǟqhz(^L6$R+*Y1R\W~%v]gBm@b)uC=T.`zF[b /M /Ka؍eIEjb6ø)V/_*}GZLI $4ݬ Qr#2q}Tp3[B⎙%%Fh l5#IƌImm< *)E+Ɖs>W39Ycch{ bEVZ[ݨEr,xq'hau#ed hP!ٮҨJ䎕8.=3}9P)tJ(xy(:bmI~XE$AȬ DO\81&*_^ )W^,| N,_gTxu<:RS+2_Y@(nD^V!Q)dclU, 49BXtu9 zZP:SQD<+纊@_ߒZb= ?-A!MM=GT [<<)6 S)Uq!tV:H Zb+Y՜GCq%EKiyz@74'(,+ˀ'ļbb1 C  )#!/'1+4q@ 0<>CRe,R&vr4:@ WD54d;5jٝ z֐,eW|U=^nE WdqvE[gbx՜D|[R6AYfHO9T]͑ 3qa@~p-4 K+܃61&j]kw˨Z߯i衰ARaS3s][UunEA DJR$R@ l}w>^癙;/e5g~wk޿gTJ10+6~޷EƄXZfB Aw C[]d"<*tGSV׮f 8&;l;cŧm'a|(Hz%RxaRWn@ۅT\c>e>3T.uvhJF8ώb16WW갻WybhN77-C␓:#rMTh8Vş!=9O|k7vֶD`*tY,gW[B3{RLo-o$'2DmY=q Pcޅ$tKwtmQ qQHnH>[eΑ`m`"DGJܠ#A@QE^VZo-fԽ:{mZȔo}~+1QNrf%7JI{#Pe:u#pD9h/_UGY42Ayh8(2v{թz[}|+Vd{"[ZF%cNfDS,ĤӠOiH! [\Y$nWXQ G)qrݶ#v~oUR_ TÅp趩{ f1Vg=dGU]ádzFLCM=rt|JO\vܨC Ur# Xo?5K+|ȮV5\CqjDy-ټVbj+uڸi:e6ڬɖx@pPK=YqjԳ"zgf -;6'ZT$ 6" A\Xv*c x2,ZG9^N77S?7;u ܴ?R ;m:_`)W\`j\cmkȫ`5|;j;PyUbO5{[56B?cr6ú:exQ*71M)[đIR 챟 ::ԋlEF{q&_S, rY%:aIlyњkk {~4@`JwJ>1Y}@6x=*$X4~Ky=}|~e^:ֹ~W}(snEzH4- b9_8+Sspr lQǽRItW%5@5(xm9Piiaf9ݴ%*^t{\R,5e^)Ck&D 9e:೗N7W\O( 13lm'< *foTੵڵӴ͵5&ouDa;V<tQa@I1OSr)|]s 7n2Mt'q4qls-D;,־nPTȤkU@m4# }fi*oZ$ -#^Ο9nG8EMc5ZF' . 4fz80(S4uy1 FK>*9e&Y3E!.f+Лpؕ6|]``e5*_EOa]u*|?:zҲI3ާئd8X o?M&/6l:1`^4\ZNtx 8p9YfirbSN JԾRMI/7nhs}9SQX%J'KGWϊ3<:ڟhǫ!)0QoPB9+#%)^DϢWtyyc\Ӎycc<)9{aS^ܬ-BhվDiAն `4YR޾!T~N? 
k AH?ɂVE^bwB}=Ey<@JKO6" vN&889[~׍B.10t÷ΎǾ"nq`q 'agmL7DKfye{{ &89 ޛmCAYnĉ}/E=αg9ճM[fK{>^֯x3!\;9dC6,{3oB16ώSYĮ~Xa/m#(|ܟ#]?7p.b:Ǽҡ*-Hmj؎՗\I'Jع6lWyN>]Ɲӽ;r::^&zfucht$:JIʛ㑳 gvBZ9X6]h{&ٮ`WMA ~θTn.ڛDC?lgif\'4oqe;nʶp0l֢_x?ٳl/o~ݬT??h}mD:MOgx6K g;nmx0)(Ve8eE\9LOؤu{ey6+{?QF|@XmJ=tb@eB7¦2Df'#uv8 z~S"@5$!qBmAZYqЬn&J$\@L/(ҽQO p|0E'bMoz4cZ':{5 8n D?">v@'834eW!-W<,78 `Tg:0/MfV *ր5e`œe9lywE+mP~6[RX⦅ .m3)fMe;l*5jiP W -emMoYr ffl32ʱ wǗQ SòwhOHީ9w%k7+4st%|*J8'y=7A67l~@Z f}۠ :gF_K[P[X9:jm ݚp$0O/=׹e8;:[ +*/ghV::D1Mdk1K^D Kj'J6m93-/Q:1 aSb(E%DKjj.\2z r#Y!u'Xf҂-K9Kf)c ɝE5cf7EzY`{GWg8SzL~p%SԸx2]׎dw[6)޼7s_װMME`aX(DQ{s@+ao]֏?Avt͜{Y=FI[D )q(NS_ʭ4)ްM2G.ZM){U!A\J,3t)YVɶ}ylLBdآ<δscԧw~ ه4lML'ϴe2HB52Efv;sWv\Fs= L}91?ӥY:Iq:l-ɮJIHj1Lrsy!'%6XqgA=Vh;u_]bl\]w4ȼխnE_UpFxR$~# @/)ѯx+\j( e jmB$HU (mtFue"hCf-a]df`C , PePzhi'iR (2o--s"!hN3bxBϑki&_3 v!31P*0Qؕͬq\lmFK^IDAT(5x;ށ9mxQ%BQn[ӑ׀(wKt](1,si9z˥C=sif$͑~D ͮNiXW= ̕Pm|jͷ GEU.=$ǿB\gb|0r)GLSn7o6`¿nMuw6j^3c.u{ݷ|:9La M(c054}Nm;Qw] 쓄w]O ox!LC˅M%99M]7#)n H((k D0:m%)A1wMZ#ƒ9*Wܦm7( 3?)OiRSv׻x cx@\qi?1Q3e$&^I\wDҚmXF\fSQֱ̖Vu,eIr?Rg` A$G azjSPA|G%O7-XyOC攏-jV(l^p`:ř2}S~Ir{3*)< S:QJI9բجÊEOBp WO&kԼJo'D+mcˏ1_}'.},gV59E/zѝdPjtd! g c&ܫ'=I,pk^sݍwwK^ xġnnߝޓA{Ug>^t:ӛtK_:xOC[տP F7Q{]ʩQK84is4ZrC#01{0!tI" P &pk;\D8{<B%p hl)=dɿ+>R ׿18)XInt(P;"_Sa>k&P^:'PˀR&]:vg;;\Hw3ˋ`igDQ,S)p˲\6P3 =.2QqiIWEvɰ\*'VVEMC~ %U ,[|nzp0Wєިϰ .[T͡JeWO :_#?M",f,-iO"GguGp"oQ:Ym \!N ֧t]te밈 tg DP Y }Go| xU\l%`G=Q|928x@KWU{K_ ypP9wL@ Lr$t)/&b9zn% te/{AٹuY} gDBVBQ#Q9 3 |V o2 N28>H +8}{ޓ世."H#Rl4 X.%:A?]W F(3qCrUNMMIzvdp& &-M _,X>2)0On,.Ixu1¢R{ʃp!5i1X*"mep~ܼcڨQȠ5hr: ͟C,v3ڼϗTp5f ɽ£!:; I{xM /\AL76=GﰅAgڤsfكWCX`%y P\ lfd!d%ӆ#^GGY]Vاhb- `f[ Li(H SBq}mxX T%![X#/F)xJ-i.hӐO) =9rzE HO}e6np܌%e\ 8V {ZA"ydÑdh.~p0tlC+$}N5rI&) '&c8}I2-K (pYPԉj:M{^!0gˆKM !H𖷼%,bW _RdYK9JGm0lz@jRЅ.D'a5 tf{;(lgAyzի7RM*#(1\K榝(y0%+͏2 P\9y ttc%@( FQW@Èqӆ 3 L;҈$SNjMaN\] . sTCpJ4V ϒ ߃3c`ɕ< !A:@c8VY; 4|zk@u;#8zQA њby%*3Q>s\[4u7O]Yke -SM%(슇Ȇ{XI͑`X%hyGoƉ`B*e `+a'|@ $ӳVP/x6eFVz aYGUpgkkc=RU%Z\ K0sw sGY][l lw]`l'1$)9dF<``pnYw:pkF9`93t\ذBL}Q;02ekRL705Wh␂G>5 ?ѷJv9|.lY臈%!3L,d[(Og%re;tȫܜaMh鸡 {y"\5lbvò\0t ؈¦ 0 (K97(^5h!U,6VtPܼ(h䣱%U.k-XDg**se!:Z3jN`; #6̜% Zu e: N5 1hEf<Wz,1d&8k 0!2BV!S&I'KžkM+0 [.j~5+\,n `Ěw8ǧ% Zc<|>' oU6kh]| ykZ˩Y3cvr8@ P0[=ͺu,~PPtv3̥ -M=WG}C7tp^dgF5 -mYvZ=m203P"5jpnoZm@rt$mS<|&E-Ҳ􇣠[GXjxhJ`iu|ԥL\ ryO B47įzq̲Eh6'+40,Ka̧yV@!WbZ`1 62ޔ+(fJu]ww(@43x=@?KօDhs#Ŧbe2mhî{&%]]xANd-m `ѐ5 ,AL9e4MjI1gD)q j@C&khkH$4 A9A\1ΎD1e%RIicQ&-3#K#ʑ4S5yUkpCB:D4z' !F`\6Xsxhro!ЀJȔBO(K[.Z5@j>vb譠s,6GLP S'LjWr(6I:7"'&I^ w&եe MeoTq&INDY o (R n!khec&\VX!,^SlaBX2XR2 :z2m2Ig C(Wa= =DH9X[b:b6n: \0\kYTYRC b Hg:Dw@%|"6Y~C K2w@@k dvF*h] .l#sk8[AqRe@&g4PBG'ho|"mtbjf{Y# 2{*L r+e߬qĘ tFVX(F+ q$8Wt 4AAPTWTXFB l%ܶoxO3 w纶~U#rmQ UBZ) CcŘ4?`ߪqqR#&eqMJĺ⢟ 9ifN.֪+bB$kr]J+vE˪"}oD0+rI0ySFSP¾ZyLfJ5ıN3q.0}ZK1`>%e1 yjC}R*K Q;wZo7+G[{%Cґജ#Zb܂S"vK"=3´2{ȫ!Θs vl: NvP:fad@ V;ʯY +"?)l[To٦ ?{m) 5&G$a,,!kNbu.(n 2j/c pH\ aj 'qdwT"F=|l+# L阕ZP1ٵщp9t{| Quz gL(v]aՎْ>5P;FBkgEJX cw\,֐2ޱvKM4*uLOQf>pWxaD:JÖ#ܦIjT%߹X#) gu^.?Bv"|jqD[9X/NJ:'f9J aФE) ܔ%5 WϾ)+؇[HW80BZzн1u" 24Ydv>bi;%JZ-:L^6@5Pϖ\{z+g\.1 ])Զ`_j4[Epw=ʾ!M$K ]|}&8z*!ҷ# %2eL&m NïTvC%*5Wޚ IxPʰRP@.!m֘SW EB9 DftZ4  P'̡BN 娐WN{6XKu=uP^8/d_0kQ>/ِ^T\[GTиDjxX,3"Y ;xr25$M̜` Fi qԥyAk2m!Ny_kr A輆L&&ָct0qfch+-ZYC 1Vr\֕6;A^ PZHi$EBʴ:OY; Xv%%ZdZ@© :&8{,sʔ='\41 6")oԴtHLnQߞёǥvW:S5:lٲ/g`JT (GpX2%L *{RdbFaTq֬}Ԁcԡ,)/ŖrJr#O4;Rc2)u`g.1ܕX1.Y{ItIVk I٤\,mh,ZK+&,K<8< SiK:E.2;꽆(1<%Kqoӊ:ɽEcF@n8C{3O3vZXw{Gێl XҤ"RӼcw ;dBbY+\?DPzSnL\cVsO`QBE QoC3 _y7=T6!w$ܡL1ĩ[ 40t0c7 ݟy[fK0$_Eu*XZW;.,EGJQzYAC!% cqTcW岕+O%IoPѶ"6JtH 06st>#wb dI[ 5E/@JaTV3䤡`{d# `A14 [ D-3j^l1\'DBI0}PMI9`(=#dsth 9s-.zAE2nEaNAvZpRϹ+ņ`7q"3  *w.\:֡ğĭ! 
hаZi_~&\5ÒwH?ݸOzEqI:cR$1%a **'g(AIn Mz&kQ@Lu:)fDb GYPtV\O3NF-S3A 2O"W >Td厄F9ݫ+GtGBZo,YQ kp,7mMn+p\ `T`R3ScemP;8XC6-q۶- yQADܰ{#ҼH:8ø*iڠQ;`"DY nJ7L69[\NF$(%KLc:4ۦ:R3 b& oiI7w!zD, *ۋsT.9>L MeOfDlg#r41w%RԖ9m)[dxmu˔M!  U4?*9|@,yLE K=%%g;%wx`'cV*\'AȵL8`"eM=#̞-uZ`]$ef\meDf7vI8lMV:tmJtR[V#CS[mʌ\t#Qf׋ҍ^`ԉf%vuU٫<)Db:k/(X TtJQ5Z]1@lkntQ'6:J#-˒ iZI=z@Ȑ_PZ7Ih3u#RV C3u*n-2:2LE+,P)яōJ+ÁR6#_%{OP\uRAu3Svrz(gF; O m,VMZ*G-z^dSP}(CՃqTÎeOKG[`0% (ᒣtX0cyܷk1qBs%(J%MÉQW}Z}N&DJsX`bPOZͺ$[acLl/1%)Pŷ< '+v S!U+, „B5>1PF1D"8IikW-{XG"=O_*s-ԭibVMA9\?mQQ$gΝLj@ͩ+gȈ0gn8ce,7Y D:HրUy"y&̙DOCHVdwCeGuQcClHq+ #0K]G ]7 %hpD%ٚ1'-37[ȭ&T jVfP4sAqV 댌AQ!yG!@DwLq9tin( bh "X[ :iQV,S [͢èDXDUӶX)e'ؔH e((!vK"TG)p>Nα9rUFE: tR#k=ꕮs L͈? Mh3dgVWX5ʖxVNWEM ML敀$Kb7Oh:c\eDvTu;2oJP1s2 A3;rm$SzpuE늲NK(GRh!ZL\9M-㐃ɕRsQ'j [A`7c(BIӏk׸0Q؉m+P7g㠡.A0m;hyj$ቱ6|ҹEȁ/mK=#d}⡚؈65kا0ҭdmEDx@Y9w}3 4 67ɂ{gJS"k2b}/&7"-ya mEZYO9VS<%l;;N'͔9IjhK<俻iMJC[éi, :q᪮pKH9\ <"ͭCvsPfH>o,G1 X{vT + 䔙*imt8Ц[4Il&}n:{FZW~Hvz624Ą\qWi3\|}8lz}@ʔ"Ce1| bL++%t.mHi؛:ea" 8VfPֲ!B`2(g|KhjIgWGs@M>N v7qIE7H0_XN:'k!B]&h'd:Ig\TQ/Cg)޿R qpÕ*9r?Qؑ^= iX&sC'QVEj!ˍ?cZ ԩhȤWM2cn* ŲU 1%.)opS8_iFŘ6&Hv`F ΊByaV D#85`iwIKA,äF]ݦ%%9eRBf3V`q=Dqӑ7\M@J=o[7 Ngi^ETJ^R!ܟ Jro⏨/Q&2[:ڠXy fKl͸:![Ȑ-P65˦{,Fa搅nEx Hnup MmTX7n)2wS尩ܨ+TĤh1U*KeSUv!;ofۈ@0unQ.W}[t8`i.%5JK%J rnqwSSWK 픋x~65YϣfNJJ Yn4iS7tG",Jei[!BO˃wF%Ѫ]9Pe+p2\0b%IzH?jU9eSbΑ+4nO]k?A4Sf-NvrkEyF܅T/,G4vӴj@ f"ëNyy,N%pX>hxĞ=5NV"-fm;qG4 7Y#(C4e6hdk̡s~V&vd4oY~1@%\`l]a}axT? sa8d%2#t{͹oS&{(S~r8M\aJՀKy^rOV b!rQt+)Ldi/ѐG 𡐧NXuF6L&649*w,ÍNNGoGP#!g(w(H5%.l[aŘe6a$RlIpٴͶ* ,U0Bg5fϚ7 j!@.%SMWxMrnfwx!i;`\W!uv^I#bltI;*tirԺ1eCG@әL$ਫNw.脞gy(}3;$Y\ Q:Bm0dx%&b {DйKNT@o]Dm^=&=VpGR`m IǮ՝~F˺ȥlt-*5`qu&,8DY%q%୺T#kQI% YxÖYt%7zH 7q3sR<vML!,ڃ8=x*/] I#YJS衳QM0WŽi̛9j •?[Y<~uIkU'6@oFF Ao9V%`_ȥwPCmʰraQr_!"JL]˕#T\;DɲgvE{g=6KǾ2~+m;Ŀ52` {䥹, wD38j}|t;)j,7Cd 7#7 Ezz(։y'&݆'d$ͱce+ {2d EFP6Ġ&t" i \}9o+- `aLNNv/ o3|1{Lre5>F*IM, # %M=\n9nit'Ig"5~f.-{(!W52\f:!W7hȵ4,\RO`¦ԧVDK ^ʁo6`NkI#mP'Fʣ0 -"I{=vqy:(ԡ5Jӡt:Gp~Y9L,`/YMcˣ9dIX3V?QJ`mI҉~褣TDrV|Xsummttn77AܷnzHrSҽMy$u>ӈ*T5>(vЍ6+$s_GSp?Õt&Ȩfc P끃(pun,SFXFkU>=q%@p͏M"t[ 2Esp"Aba 캘$`8 m%ʰOͶzzcnM: ᙬ*2f}_2kŸ977>U?}?(OԿ>//kۿ?7?3?__77Iu?S˿Ss˿iWa½??RMAoo7LϪ7NuSС()Gy?c[5jܚ'z^ t*?~GvVV W*f~gRQO|#q^Whmڧ ?zv''M΍Q4F2҃V* UQ2n4ʵrc__BU+4P4PT^__O9Oߺ{]7FY? (uHxׯگ??V%%4zs]ڥG_]'8zQ>y7.D S8*B$O `Y=[:\{Cd:0\i'kޢMI7sLcP__:1)UJBjhf>#rOZ C׷0ȷzʧR0 knupDm M`@n4iiq"ʮSDiW?s?DsN"@OLk@ХsE4kz3D6>ݘ6X.!?z8PS5w4ı0=M#}o_WyWs ]+Âb u+^!)< r> ?@+PKK zr#V>k }$䘷*voMH=u2Z|"8.NN]q`r^xH.Q%-<7Qԣys`%Iw]d| ˿ռֵh ?꫾J{nrF>3eK4I4(:khYcP3BpPT5~7|ww7SЇ>_şٟmC+]J|7̱k_׼3i'< )8Ϯ|h3 S\!{Ȼ5AFyc.yK%{ߞZKp; FW~W"CDgSSu_u?C?İv])H/?MEp`^7'ܾX!&ٵ}}!)F@'ߧ?v{ޓ4LXsK]MO|@I+STl\s,7hVzs][hkjsՀ ?qw"ZRD "6zֳī_TZ|_OH/~7dO ~K}~g}NY??0 i֍ʬPYzVe_e[z Qk߶Vm[f5T!ߪVYovQuPK8~heFes"RKKܛE^kk!D0&&׿>o>PyEVg`Ÿlo?3?"R?W:y뫆pi+-zp//PC^ac󈃛>CFoG=Ak=7(pCNrnc~n_O{KЮ795o2DծvpI}.YPZB@dd0G:fNuAl-{ }[%u-0i88~`ndf ܿoJ/'^4BI8dL?HT&Y$'WB%.AI4ߤH}}A;  ,>vÛ) P -JJ)E/(smʹ`4Nzs%P໾ܼ/ԃ96M:z5Pʏ];̉=O]Y1AS!9qéȟBda^AsΈ^ ҼDpX?c?g\,Paʷĺ#..ΏhA3 \q</ =$6G~GL*y)QX!ã?ʿ٠ ~'ҟs|}?'~n n' ?? *yW|# :9` ޷~뷶`v׾MмΐheR]EW U.K~RSla)J*~Czşd4jMVE E Ie >d>7}be\78-rl1%BFJ)%2_UPxCUh |q߸Z\mn)'ӽ򕯌3.sO'Mz Qۚ~&W,|vs;߹q{T6.LE5G mp`#K+_k fmPJcP8s jÅ63tB!pzڽI -WH3ykXfe|g2N;=1Fs뤾By{ӒOF* ٳ <\Y6BtPm@؇jS"L9&.7^Z>K`aob"g\ro{]SLMT0Pl24"t'ҟ&|"W >C4jQ|X:TI k8ky">Ȏ3 I)! 
s~N΢n# p_W7qsiR7%1}+WTQI5$N S')6gkyX--,'y},h)_򔧤g+lU}EûVWabk\u(a()O2|aBX暃#M pZKY/.aiTr6͜R&6DQ0^9n~ D$\B+Y ;D*4w>B z6D prAm:m5b6&Y=K嘸k[ ّ5C٬@IV^җt6,u&^5B٫&޲R0 RB{œt[j%ț6Vw ~du>Vi5 ˒>ŪIحP?6~٨@`/`tWS;#3A `TM~Wxğ d\{A[b6 ذ&1o &9snн.?qELݺQ uyE4K 14ʁ JZT.=2 ;]=Ÿ^ϭ.:v#P\V u|UT3Y.m:mY 'q+\ P@ @uBBfzqjweW*BvN@\JUI'`јv6/Aˢ@^ s wHύ +mo+}P9)f,PK*l g7޷pJyC:3|lfgPڙC3j$ܫȎNJe=+u1HYW\>;m1Rk#R),.f"Ytwp}=Ap_IG5I,usR{z*A9{\&(b܀T6+=T=/;Ɂ 9b#QL )@R T ! ԢJC;Z]4[Q.Ujʇ/'zN{Q\f_LcKor˭gI3ZgG'A@ʨooS QBdtq=lc?]a ->)ܔaxayO%v9`Pڡ!#B(UF7T:_>(@Ug-D0P!CmeB :6%Ч  և:e$'qP3mͷ 00C0+wiP/"c`1xdh[e͂aj.!.'9$c6m(nꜬwf Dܦm2tޖ|%5<A5D;< LY[兊NwRwc"ƺE.4kV ZN=ay$ m$0(̀i>ZY:,V((]3@I}uUfhE'D4Pb 2Ц74;sAL!zAaIz&f c.w@4CqD\:8ѦՉLǩ 1+\to}b#lbop,wfP'oW"L*m Ѱ^VYIĨytѦp3+yi:{+&M`G@ {)Ř-VF &nD{<Ԍd)(Q;Ɋ;s'W!" R+hg4ZTM`]¤eɢ="MVR6&^CDcIjJ:a6<N&k@}3''X#i'(]ҟʳx&ZH群{S/bVuq{tpXivHjt ,g 0 (BiF᳒v45ȑ1$[Pa{{AU[rh6 ) ÜOG X dlP9UA8(K؟!p(>?Ϸ> E;f* `; uJ#|w Cz<' 'N:|&Qy @l7ޏ׸F/`1JG% 1XC1sUH 8RKiRZp]Og#E9(zBUk .cAjԢ,ʱ ey@ܣg_ tڒߝP"ab&)rA|5'H#:&igi)4?1İKmw=_Í30pZPpCi'b%SSB%xºY zaN0*m _tۓFJ0$r]'aE[@f%=hOWP QuT5d5+dasTtx *KÒq Hn:P,,PgdLMTg>!"S)Ey CVcfxE"16"sR)JV  J8Z3]Qܨ8JJ_]p}=VX:N g/mb4`0Veb`B!+rV'K-w6ȥaTG[#rX::3r/DgysEŠsJ%Z"+9[>W$Ostb8ps81)-yL!-NlX--ygIgΎ?U@s=ev Yf[FKȩ'z+9yhUzgf>:6$;V="rǬxҰ Ey QFjWJwj38,e,2ǸiyJձ._uf"|Wy׺ⴽ^XO(Ji x2W&ܯkV8]P2qnTZ C9yD7h$mtؙx4`,P-AC/>TBPҔZYρS^2vF]M,^<MuSlz%%$cLZXAzbMG"ZJ 1ޗp)v42>62!;qO I,)Tn^NlK!ML !OiS /#1\[(1| wHSvHB[}1/<4>K֜vC=8R* *>ɂ[Mڱk MIw?U?"K,DᴂG qM?Щ>Q=G7j} +/1wHEr(љ#\W=5g N `ujn;'մ6zn"a[[)Ą{)Zo)IYu2o{roh(;['פz}h(VNY&|`xgvrT*WG8SvQ.7.Z.GK lJ.lndpTtOd1_<WwLkN %,H5j,0E`I&})&2NSYQU+ncDn:knquR:Гٚ׻ȱo)Ϝ7KZN8ȝ3G<ǖ:3U6Â8X3Kb dDE'alTy` {XZZm>0(ҐAБ L!m0+ؼ!-+ `|3/{F5 a/Jf-V?skgP|g-27;M\ JN׭}:1 @*Sf=4. v-M3XcJ>=Gް@sɔ f 7Iå֠yR$*M#;5QlwzTMX' Skk&q-j`sdTgo1JsID- ɽU*h;BQi䣑fiٖu  %Q O7-zc )O6kGcv5=Ji1﨎%%@1#N'*ZmhĐ 6wo u#O:olQǦ;[hjc˧moU!̐/ذ2n:򵢀^ #,E[y 5*ZْfgzkW3l#,z>Se0P<̣-*c͎AT 1um<À8PNDЦJBj5[޲(Ik NB \ͽ濭C"CLFIp9Oe7:7\nRS@5(Lޓ]stZE1Mk&(zvUnT~Z~R|2k7`ڭg' +` h yiE>Ɂ1Us1e|*(O֛0uOtpJB(J{j\pThƥOF1BNõaj4#C"K91T3鹷MĥSjRaSD#79[?g;:ssoqpo'O 3mݳ4|DahpEF&xm=?<z]BfLSF+tM󎌖jiQ&'̈|HG^H]y5c4+;k9l:Ӄ6aUtGp_~s7L־N. `MƑ)x#fr,kͤ[jR} Ski}I: I'vmLn+ ɞOٓ^/n2Q޹D ʚmtM2y)o@^uuBUW)J"P*T8%9& ;Ɂ)2K(JIw-yS+ᯓJWG2aQ{sI #wVI;nX:q*P@s[m FUGyBGY EϝC)8``gWʷG1HPnujAx.>CɂC%N>8 %v(9ZΎ85 #BG'A'ٗ0([jkuhި?L? 2TƐS@%/D):-}C&2lO9z#н"J/o\s*WKf\tʿ_S`]*<+6 eR-2@ܽKaK' BS" ` pe. AJf3#S؏LM碰<)x=.V%DY?2|j%tL95H?c`A "@n`zϴb }U?NɹלRBcI8V댐zĀD3N1lr[:}º;0V6#l],()VBFEC{tYЉ]0DRԨTc (uW)a72 o'zG)tui8hΌMOi1ıPH+1Ki<#nâɞyaV@63՜MH6g$ŽD+|zYO;ٻViN7J(F ,B1uNGxVm^UD")aWa+浂GN _/Qb*󘨥IG :xՓiBQFV0E [pR/:'O+Us' ]h IZN"q{ĔG@k;UUvsJ#h3(`%h|l,yrgpG+rmfs4"L_evJ<&TXL 4VkyIL9a I, %wl[cN^ΞduqܜW£SczJ$VT8tLT!&(;"d2M-_/0;'kAReٹ?1;zϜհD*SOD o5<@@%?D)Ȓpcd\h2Mb K- CX!>s.s:Ĝ`E/L2;KN>^=o]r'@y2/KWٕ^$ho8vW䥞 l݇ʈS8W.%)5qS:[5}[2~b^S,Qp#-s6Qg^HÎ8PN/cF_O7[thΦo{3:O*Nk8#(8b92p\*aJ:~ʺmE׍o[Zɳ\.{3h+yg/Cq/.ô~A2Z8"e—p9a]YGG~2xPBu;tPw6ˮ6=Y#ā"n~t^4Ip rv\pַ+T'ӡ-O@5P.5/ޖ6r߲ĥzRE1ԛ){5Amӓz6gsZYQvWV?S*$ֶD6ؕhk9 {s R%^be\;3BMW@* ̌y z ( Xٳ~rrh,κZQep1fQvrL^o9+{L7L_'cٟğFɆ;&+ZפW$# Nd]>>P-m;AΆ;U>@C[ 6BX0jx)%FRA+^ [T1aK3hߧ*dWI?7L(],]䨗:ҽ=dJj,%k2y=u!N˄Mę@I\±ގ[ xFm\ҵk -1`SrZoUYMg8=+~WjaRXY5 ۆcjNPc5 ]*n*}_+kMuR? C}ij\ʸfȎo+Y<*l&JS6t:7S7jZB\3S3x6[|Ö uqQqxXyڔ\V'K*nr@.ͤi uz.8њ1Uyu& nHdM)d::_y@A3 lGKg[dPoMHd`R63)[Ċ4?Ǔm젣*R-U(:cjC:oaCJ2ވ_!H*O2 \FM,>iRԼhn0 qnlDT6O k>InU. 
,kģ^-H̚~iuGj!r)G2s !LNJ]`wa/i} nWB }*d.maIAJBt:'iX=ȕP0ɖ6rJ!Y47֕-eװVoD^o07xKeg;!čD0 ì,SC_9Wv !qDRevή (ύEHM4YN|8Stœ5b ~ #O4oE k:d?ſu\C);A0 K`i.a=nfp:YpA;F!y]^2`̑wu3NdNZD` HY(1 %z^miD*Z,fW9/4{W 5ve:u"kۙ&r۳R|IMCI.|oXub "&~Jo)Y ᙈqqB3"rmXAxluJ6 mHoqv0 áGn_]y2K3P*[Nqɍ?)^PR)UT}1p?ô^7|'{80EQ d cQ#4mN&I$7IcA3779dK(; ZQ)![ j,F+_uj/up)-zf+3L.'H'ZtWr:%8`{g&@ 5&-H-bgǂznY ԇ-efv"zZPHS *F ^S6jpA)`ԽTta8i&ڬm9-RŢFI^Lv2![K^m'b{@CWau g(qoU?ϰͥ,y䕗Nk{CC9&\(b *Wlί 1Ϥ]0aNՊ qŨsl@}Rq% 0tFd$z-(]$oKBOfs5:4◦rh(~)2UU"h#QX'T2٧Br$ϜMn0'>\lj'?eI*cOErNqRyyҶp:o M:Q~O4=;kQf'h<S-tݖ /#3  `I-:  ,,,:0Z.,,Z:88p`8N-BN:uY\X+`sI5^9k[N:`tʁX{j88p9뤳| 88W,+VŁŁ΁X'kŁŁrKw[IENDB`]DyK 4http://www.lhup.edu/~dsimanek/scenario/contents.htmyK http://www.lhup.edu/~dsimanek/scenario/contents.htmyX;H,]ą'cDyK www.StatLit.orgyK Hhttp://www.statlit.org/yX;H,]ą'cDyK www.gallup-robinson.comyK Xhttp://www.gallup-robinson.com/yX;H,]ą'cbDd p  s AJBiblitOrigBLOGGER_PHOTO_ID_5205595310194016482bamC#~B&a{>namC#~B&PNG  IHDRgr/,sBIT|d IDATxyչ^{z}A. 5=*j\㒸ŀ .(0 pf_*zE_HPPP ]QO0L2$#N6d NR&-F rV6!gg۱#IaBUU0#H$8N23!Ll=GӠX,&7#SQUTUviii|Z)**"bوU23=l6ySeMPX,43Hr@g YEB1O4as ] AaŬQA}}=.ˬ>޷ Bs64&́,I[lᥗ^b˖-0AxGBݍ~4(uGkhC_LP>__cJˑ!Df M8 4ÎH(jg'jq1.YA$ v3x`EAP$>`0ȻKVVկD"ttt0`Z ="i:'R=4iA>xcƍq';K2pgUJ^^^DLxW+S7"HuÇsܹKS19=FPHD"dHh@66 YYnp:_^?{>8p v{4MG%TU%H()8oت:DIPEQx10E"yBܹ!999\y啜|hFGGHDKK ,Z@ /Ks$INN'۝j3^g֭[͜9?/mms== FYV&pttv!t\l>MzmZۂ "TVgL:D"#<ȑPUe]6p\q 2ٳgss2v身F'SD"IggDet]tM~{q0H'|)S0m(ԣv殻旿%#F`ѢE̘1RvEyy9p8Pصb ICT l6 d8`G E胜{ƙddd7릛ȑHu(v;H۱ H"DWUJnn.p͆0`ss3?3@u!hTh&t]x\h&v)JJJC=$ZB,]D+qԩbΜ9B!Z[[{FE(ZN&^|E!.^=xUUFzm={袋iL$4!Dj&D$u^,BWO>%N?}~B@ bF"D".ⱘ5j;ʕ.V^-/_.6m$񸈧v{:tXn"hty^C"h$'kr)BQ__/[D2)D,hvD"yH'HH$i"uM=u_dR[V:dx5kĺu.Ə/E2>_y~ dRp &HD"mV}%Ə/֮]+v%4-)"PQ,^7X,.hmk/ 6oө?1c2na7tR;04h>ACSkj8utxwoh! ٵ>2&Lۑe/Br 8k6g#hUϲeyE= :m|73g6o̢7`!6 Bpp: &Mh>.k8w(%%@nԩS$ φ 8C`TUU1pf:']IleyTUUSXXiF /@GɈ#ٹ;ybA,Uՙ8ѣECK</3f 0s[[[gҤI)c׮]tww@mm-$qG|f)fCV2ʁc9 <]iin4 ;N6">[EE|ߟ{.^oC 裏6M0ILDCx !Llkk+pNqI U+W BxE"lݺ?<]?̝^zi!6>x뮻ky7?~<˖-m۶ʿ/._4z*<,\iS`<6mD2/3ӟhg\~ܹ8c=ƃ>h`pzz2)Ȑc(a;CEE9O<8\s7Vq23|J9z֬YŦMy&LRSS%"0*"$)eN9zx^e̙,_իWs9m[-o9|lܸD"… ٴi3>_W^q'ր$xbf6`Pe̹,^qcDzvZ{,g_r }>>9\N6nYgŋ/H"ٜ8ua<hf~E<>O3F^ũ3fg ,Ç7aWnoW\+xu\Btvu!CK !y;I]_M' /KW[n'O|g)Ν+9lq-!RӧO{B{G\zY1]ubi3ğ_\fZuv!>}hllFu?_/DB`0(cǏgӦM^{?~XlxőGN0Y 1rQ[]466Jd뺨&L+VnB_"E{[8aزeYg%yqUWwB!͛/&M /W QVV*;-gu…\D#DRp%Z$IQ]]-L"j*3%KfTf| z u],]T )lj f;E&7p#Y̙s_,裏<B &gy]E+{%\"t]]whzjĠA[o#>Gq~M7F\qB!8pBD[[4MTWW#G4MR@:uXf1xgf1!~3L&ŃQva>'&L mO9YZqM7ogqOA^jk+UUx<;T Gurrra֬9PUXc|JEENL& Ì1ٳgD~N,ʫYgņ {채B!dY&(2h?g}&HxSOfi&935k3gOJ$l{{;DVds{70#Lc(5_z&&L,%גMee%xǃlٲG~~>ix馛hjj`0 #I,Hytuy$RƍG[[s6 \y@! 
0n!B,m5pM\x$:---a8 tM̟?dYMB86mH `Ҥ# CL|4| p{v~; j'SL3S 0H%IBSUT]'@g0K(8x_0qD/^$I7;3d2I bذ=㏳cN=\>3Ncv 83Oʥ^ʉ'H0dZ6P(Bwwѣeڴ1=222p= Y5Mר,EaĈtwwsHlB~QcD Іۉ$2{Oo,X'X}h34M#++d2rr8dN칂Ѽ_|_]v9gd:}9#GvU_OEEPP(ēO>i4srr={9/veСC{ձBR._xh&$СCnD23d&~8/[bߜWUEPEq\v<N<gРAU8\x$?~d#8;*{=.2ΝKEEׯ緿-W]uo6#F H`0`KI2\r3.?&dwҭg޿Mj{ꫯ>p8lp8#F`ر>MxG93iooTV⨣O?eРAf{677˕I[[@ʍ۴:t#sܙHRj"f 4!C2qDx vo~;L2uT^{5N/|a5MC$;--TV0jDQQ1B$q:SXկغy󞠹n|F:z55lذp8ɓ2d.pٯ/yToVēr:oll$HAVV]-Lyy>Ϗyfz)ںYuw&x<B%%%\}TWWzjp@]ȸ^M<7̧zS>,v7؊TWW3|pF>Ӆ,w I2$;;aÆ/Laa-B''|3K?#<x"^}Ul6\} P;zk60dpꩧR]]Ol O>ɇ~(\tEv~_c=9S?+V`ƌӘ`{=>C~_s9 I{/;v`ĉ455C,###vb8|7_+`Ŋx㍌=SO<ŋ9Ù7o8N=T4iL: /3w\.b(@.N?t"b6 IlZMaa!s 5558N;4$Ib&NHcc=8:eeǟn<2Сr衃^fʔ[ tttP\\Nss3KNv63gdŊ,X΢EL6~/`0Ȝ9My睔r}xbf͚w܁n'ڊ)Sp!C(++c¿d8Ǝ0n8Z[[\pv$[o1fhzVb% ##NuVNy:D:u*v%x0+F@f0BS$ kK?L'r'vJ?ꨣ3~^>!ڄ^=NCKK+t=$nP([m[z} և N<=HꫵmDQBرCtoxy ¢k azk =7=/]OhO8\4ưwI' [8CF;x\#@LFhF"@u&RVVF"{xl6f uU%8}>lDfS" BYF Mqi*dj}#tIp NBvDY# t=W=Ԓ%D"b1hכ2eB~?DsT~!(h(Jp@ D~D"tuuQQQau{{;PXXhw[X֟0Ȼn],#HMlb1siZFo_}1V&Qp^0(}+.in\qP((TVVrbv $#\1hd& î:K A"CZ3Ng$'As@0@O&@t!p@jQ$vb>^ҷYKԫM(ߟX,$fa8dY6qxי[ZUE*͆MQuwD(*.FŐ2SLYд]z96[J7]]ǘ8iGBBّ$ʌiCX%]JEU$qgjϷ UMdz7*v슒R*\E/}t ١zڂS>ourr?~ }aD~0z{{ɔh4jΦN')P(鱗#G2l0u@ `X\]n F"$`8zn,&rʦQhBb(XPkFFy+xkPݞ~^\.$ItvvS20`WPgg^_ȿ. (ߟ#BGq~H#JϠs(%ܻz3ң+B` ۝cI[Fߧ=:Kϋ$Ip4`cͶ{:f{ʙnŖ ~k(d{Ƙ"AW'?&zx juƹpp8Lqq>5LjԺ [?i1{Ӿ{ $$B?۲7SyR2N+߾KW^)ؾ}}_m7־l4Fz@qG4~b7t-8ְ[ڲq9GtY~4vtJ?ְj7Ґ$)оQ{6}ݟoqo%i}!zLIF2=v"^4cӯIUZ~xs܅4HMOA$>SL8i8'W3!,,,#I7k}>r?&d]ۄׁ{+- }li?s- oh`4Ա5g ;2В FKKKϩ%-,,,P"L1v/m $'wǑGA*>==ꇎfYȩWvf<%~?)\toaaaaq@Dc_Յ6_ǔR-ӆfz<2}t:,V\ ^4i$F'|zW(:[ȐTSZsŁ4^unv:(8k/^̛oIQQ6 UUzx<4;CYЅ@Yf򋶰?B^ ҎEaQ@qq饗2dƏɓ3gdҚ-,,,,L8[⻣i~l5,,,,ljjr˖laaaq1deee~5 o{Irؓ `-,,,,w9[XXX8?`1e]]ݿ[2kXXXX|7 YYXX_i9[XXXdff5[XXX|?Jbaaa·-,,,~\4g 6775,,,,,-o ~l9[XXX|?b~[?ߦZkkXXXX?@`J9[XXXD Y 0`k  Aְ-,,,6m-Z+|gˬaaaa0decc*  ߿yk==f  $0^ B~|zb*$b5g축Xf oCB B:(v{[I,!$ ! ׹1laaa~ЅQzVJRFkC;$4cC,laaa0e]]5D-^fLΩʧ~:Vk!h4m.Hiq؍M(;cOgp2wyoEEE̛7Ihnn60#5Ml߃ gM-(<7+褤N=Tl6huiEO,#t:{٨-,,,,O2~ ]2d0@f/.QXX"N$/SZZJ{{ "cѰäOBㄔ-˻|ߚ ɷtf; kW2" 99ڊd۶: %Hdr>\̠L 2};dx$%-,,aK&YwPVVNAA'x ]wmmms1$q*+KW8jpuY (Cc5(׋NFl$ UUQJ?.tuq\ߧLM---7n 's¡7| ΝKaa!~:\pNg=@uԤPHB(cKnIOø]illWM#}{cːN MOkeg ecUu]HR'P@ M4d  / 65u@: 1p@v; rH&(f%SVVF(" @Qz tM5v{୭dddL&)--EU5E|\.$H$hiia{MKLNNS`歵N4YQvdYNL[ŏcVUUqc)P'  2z0$ I&)xf!AsB !]]] 4!@2@vY "p8Lii)B233),,D"7X04 I.P(d HMLtii)$$ 4lmmus뻤3Z\\4.IND""( QPP@$!LG^^pxF_$' cFʲLVV.h4*'$ sqZ̮Zo|I?@GG',ʕ+I$ql6!,_UkPQQARUq:TVVH/LCCY)B B3=eVEaɒ%ܹEQXl˖-# L7l^C$ Bu.CJ@.H1N&uV>ll6***|7lڴH$B"ث9LJ( ,Y%i*@@$^+Cl_EJpsb|>h_|.rrr ѣI&+vTU%;$Ibر 0R ,S^^,˦&x())X,Fww7>7oF͛Yn-%%h݋,˸N  ziii1۷o7wttb M!m|vZ,Y$d2ɮ]D"HƍYf EEEĠŏo=R'AnseM1d.!}E&#(-- Ubw9{tq¸1cFl6\.ż[̟?H$駟a /7oCe…tttp2j(. JXz-<|֯D0Ʈ][u3fp7( YYY{fE}}{_ TPbD%blJ+1*hPEEBT)Wvo3?e Py=wwΜ9s^O~@uu5vٳeٲwFC̝ +V,72a\.=;`!C?r.2V+:`8NZl6L4 ͛X,X={`4yXx1[lᬳ΢[n~5fh /~'NVJh`IYс\܍b1/'R[YI(&-=:0l۶nUVQZZi\ĉٶm}gϞ|y< :-[bGFjjj4hׯgڵb1HOOnSRRBMM $5{q6ne.*lBmm-`P(DUUv#;;!C0{lCmm-/OW_k.#z=z4gqV?;vP\\̞={뮻袋eK.~; IDAT Zů544~>(S|he}f (t'DNo@f3bv9>5ٌdp`2Ջp8L  JzUUUXy,IMMW^\.B ՘fDQrg"2}OPRR¸qp8j|7 <˖-bf1eM,\3p@\.%%%q}p8tt:6mͣ#GbXhjj"qI'a6IOO'==={pmq9pgHMM% b444~ףGTŢ'DDD! K^׿046R{7ax^z= jB$! "eee޽3n8wfپ};X λÞ=1c} PWW z3f {. h4j*Xv-w罹rX,TUUa[y wz+|󍺯x<ΦMؼy3sa`|g̞=-[m6ؼy3Wf{lڴ 6rV3JYy;<5(Y?`uD:0-PYWB'}Ƶ gyFs8b1  xXScrݤFbz ,i"ȲչD"9Ǒ$z{H޵Wڞ\LMjhE2z@,G R). 
F"j0>^Nh4UmKՕH>&iYC蠫'bVuM "Pܜ,@~~NFQ233UO6!IR+O=&WI0,a0$d>oj-oAy$IxKB/pdCf&몪*,˘e@-1nol?_[YZZСC;\C龭=łjm6V))-t.K]G1u)l@1JC (4~˘V+i*Kݎ,˭'1E?vƱrmw=|(aksT0H8')uzX&*(rmP&#i| )iYCاaJu^}/:y<|$@0-PG544~Yte[~ҰFygu!)Pu!khrQ:x?tB@^,#2?0ľZO]CCbta1Gx`CnObT?wi :jZF $7JAк@WO!SGCC\%%%j6|]ز4Xsf'+ݞ%ދѢ %'SI}CM^Vٞ^K"q9 V!8&%/nu4dB,2UVnZ |'REYhS$УC*#`Y1z((DL=H$B4hbkR (f;YprL<&Ao "B!$Ib ÈʥFQu>O 2%u6LVlGfqqQTTW_MMMZɨe^zF, v]-Fc&l6a6F ^m+O& Á`l6p|ѧbp_ǨQ8ٻw7%%%hkQ_/QQA[Gcc=vl63vXvŊ+X,\wuX,,YBnn.gqjjQ}jhTWWw d?9p7l)2 /rwOpBƌٻw/{f}:[nwᤓN"==m۶Dyy9٭;>ȣwop:{CxV=>|f3C̔)SX`=z`֬Yٳ,{=xʥ^ڪ f54 ʵUPPpTPn"(ILb8.fqM7q7sI'QWWedd v;-"//;3/8䓩 Iؽp8NUWWDX`xG}Fzzz+Bjkk\f}UAA:& ~Ν;q:D"~?>OLf*++)**;w2p@nwsQ4q ;'7%C@ٮ(b0 ̎;DL:nV^ ƠAbϞ=l߾ '/y6ldgg3qDy;IKK㮻"лw/dGCCZ !Df2S'ʵ$Ƀ e=ݱyyyXVF#ݺuc„ L& 1c())! 2a @~!Sgϟ'Xlݺu\.z=YYY\y׏nݺQTTb[ndffRQQQ),,$//^zKvv6#F@eb]wÆ N == Ν;9묳n3h 222 ;Ȳ2U#}t?<*֮]˰a$:OHz R,4&1tbl2kve@hc=Ƹq4hpEy qxء,jWރނk va6% J9ZYCȠtZr%Fj{LsV\/v&(BUU. ͦH B6OJaF<`0puשՂFNG4l67D1 FFf3>AlD"^B`X>Jb(u:j8Z@[RJ4%c2l7 tX,}mTYCqQlemmHH$BSSYYY(%'!"PC4j狐Ђ%33G2J'`0֫eYjHdjP1Ȳ^gϞz@KMQlQlTێ=NGQ!y444$գGӦJ1m;Bf5+ŋYF숢H<TڰvUrwk{^1NZTPr3j.4CSD:sbZ[l6p? *++l8fX7`2&N~驩aƌ OSNQs|eYF@@FFehfϞݬ_Qbj6Yq?SN;z6=!ږ $ 'O()f& yꩧ裏0ĺBsC $ {bڴi|駬\{M6]c4444f XpbAm:DLL4k6xL-˹K;ٷo'On)j,".W&\;|F<:j>lb񠡡ET}[8plݺm۶\!BrTm$"h@jX,OUH=rYUeodƍ|l60:hT#Oeٲe _=uLGz: zI8'_ʕ+ٱc'/2}1޽+EnF(⮻G /{ջ7.PSiO3}ځiӦ!26%%,0Q$'4oO544];7Β$z*֭ٳgMUU=#G$''5kvN466W_;QtdYT׿OaF__Ipl߾s9]b,?n3 B"5GYBHZ~ƱOG! ݘy֯_C=fkHUSEbUcVYV!S ff8,dYb`X԰ k(=7ɲm%XSkZ:$, bKK)GN(Gǡگd8J}Sv NKt,_%\B(ݻLV֬Zň#:mq-ԩS9#j5~$-- DEEvIMM T5o]YO++CHRDSjhvd(5~& Gwa a^!36v9x"PH5ʥ "DrI$ ҋ)%ze˖zj.y݋B$b,7)|Q%JR>"KRnz@8& xEѨvH1 ͊pǣG"zuCJF@,LC㔗7X,ZVL$EeltEBXzz=x&H$NEE%: HRBb@iϥ%&; ?'d(8'BFLfNNvHVV*`$ ⼦N#r⣊A Dnodڵ|w\.~EZZZb`9Y[1XK~zx 233III!//M6zj֭[ HdQtL&^y>s<͛˹ 0B;\hf۪V}}=zɤ̙3ߏ` *I:N5gǎ=,[_zh4RۖIRZ7F]bbNLL&w1$XC㇢& ;i3fHHy&|vJʆh79AuUאكk #%!Nvv>= '̇~ȩJ>}HIIQ/h4jr81s}]ݳPS,s=EcN bsSRxoj{JJ3s惄B!Ǟ={et:7x#ݻd賨n7 fRRRx뭷ٓKfff=q ] Z@LO\ DhŠʈB;+?pp8غu+ݺu/'`ΝdddNUUOLieQOF1yO+f#  F"-bܹرz a(۷oO/b _e]E]`fƌ<@II g׮]Ȳ̩رcڵT IDAT~g{' b픕gɒ%p p ׏^xH܌͛Gyy97r5Wv'//Aoذar GFF:ߡf#].Ct"N@jE)h"}Œ.FhwLGeҤI\r%̚59CEDAwL;ZSvSZ—˖Qs'׋jb0h ֮YCnn.h׋!" O]zK.~zPWW9!Iz=76m=z".RƍG 0gy,?x磨Y)id+x9lVzņ/l6W\pPWWGvv6 t:?>V/yQTTD$vpFUX]htphY"){ԅuVRD#C;q-o{IK/Dcc##G$==ӧs}!6Obb1$Ib1wf {a ŨX,< 7[9 hj?{.˖-h4)--j~~?vm8N^/ׯ?s.Ft|r=\D d2fY?dY&55P(j Ʋe=z4k֬l6s}1gmFuu5s%??_+ecƤ'$7﹇o|JKK XV طoiiilݺVw؁N#77DSS(((264~V(G%?9bBٿo?A I6NL|=ꪫp8ldggoKNNuuufdY#IFtyHJoC]:]A'uok˯7Sn  Ҽy\pL4 Q8p =瓕o:t(Gg%--e&55ayg'NbܸqXd0e p8ˀ̙3,&O '{Ν;7n:X,Ɩ-[뮻! RSSøqث[wr1eyL&^{ml#FàAFwys璑A<gذaXVN14q#>f=9DD֮[ǰ0\ -D &a~#\4$I0~xn х%;;m۶db9&/D%ɍj@ԅF%tT"ӈR1[taIOOoUٖ&RRRԿ=N2jPDZRfBUa IJ4AD#?{e賸C--mIn;꒓tD44~JU9rdfQJW^n_ "qlqf |b& 1͘fn7fK.aĈB!jjj֭ۡC6 .tBBT Rt]B8NOݎ%##OnXd2LICj6-meY& !"&I5eee 33ۍlh4vhb[q ԂZ86{=z #5$wW~s몪*uY*&׭[ǰaÎq^v-Æ hl+% s7jvU/0xx衇{Q˒cCJ;Ĝ, dYh9p8L$iUhk1!'˕;<b(6]x3t^R X,<ݛ3gUiKӱP›w(wD]"3Ql-4,MaT]555#a뇚u0A%wWJVg#hI # Bu*ۈjxR|N4ìs+戅5Cyy9ݺuKJJw;v'JpM&rJZ>ʕ+/L(Jt>h4VsDD"( nǑSS]dݼ$ w9=˲Y#:~?MMMjUn+t4NxѢE\zm-tcg]<( $2rrp74F!"7ofРAmQ Pv;՜}~~a>~d{(xRSS[")$CB|h4]A\H6(*$LjDh=8M u+$hNLbQ'2[mSm8X5-dNNNO8'@xD;ձqF ^}DTo8y{({⢋.B`0NL)rǒqV!9ƮLAPIv9 JLGB@$ brbǠ i-IVPfeYOdҥ5>O>+VxbN:xL,$==(\pz*gV=2:Hw-p)rB[}}XP 44:+@9wp8pvSYYkV{}ܹS5Jh$==@s9~fϞ_Naa!@et{^R%K{_Wǧq,Lwaxp[׫ff1k,***xǸ{X` `XSz5V$***Zx^.BFΚ5kp\>QqaX:dkYÇs|>EF/hhh3Q]]͉'H$aɒ%\yx<L&fDޭN\.u,S__jj7LEEYYY24c7|wqX'|H$BYY˖-`0ԎFQTC,!r 7oQƏbw㈅5 U&rrr0 |>222TY^/6M WXV57WNUq8dff:u*`ŋA1<ƹ<Ԇ0G"vލ\\XXbL(@RiVH8$a6ٻw/v>^{MՓؿ?x\3?)GFϹ :%"׏ٳgFٸq#nZ0v frWoNLBjO>D"nfpB.V A uhhh4tek~TUR/RULZ|> NK:N\W(oHOGŐAϒرcFu( qPB6l`Ȑ! 
HEidžý%1)x W sU W }:toPϾ=u2g=>G` u̞CN>q@`z>nP8۽bɾ8>=[I\t|<1D<{!b {A{Js/qyЧE~ _|;A)c_1$v!>G~!~wmMO}}ЧO~ W-~7>{~c9'C}?S1ǻ~sUs Ob=~Чs<XG9'C3 )נ55nW. _s@r?N{}=w$SLkSuM t;OP7<{[S 8|ߞcM885ܾ=)fSިT_''Ƅ/9s3ox_udwNoe+s%$=}/yZ xq}9y=<| ups.Gǽ8uOnn7]ՇZԍwuLjzƭ1i?cMvd?WUomınՍN#L.ċ~//Q>wkء%e*ݮ˸5U1kźwMˑ{|孱T_Sj[UgL{z /ƈڭu긯t/rS^p֞^a) _XI{&-[RY^;׳m:)U'WYRbYE۸uH(>2^Twv2=>Vmtn{wOMI߷畼We]}ߣ3_]ϡmk1_~ʽcmMP ]wES-?U;<}7m}mcx\yk݄隟rں\5ccU,fFLp&ns{z}qSǭ~t=Ov­H/Tܔqu!^m=)7vojݼ^$ǁX'u/nxceQ=֍x &{^/uWnc_};{ULܿ_:sO#~tŋ--']3;y{ݍB͗֡|÷ uט ^B)@zXc~@y췿U^},pO/➪FڳnӿXGnuJy]^{݋ELjcńaߩyӹ82yӿ"x]:_攽ԝuz7tza{͉OpE['I|9y:ouzFuϸs~Jj:u7kS\&kg>Kb[[n׵~L>(9oOTovq(طx\6an]~ʝs-{qi\cL;|/_;NuJ~sY ]ǯu|sy$qS٩u^}9O8OH;Wg$?Oyk+]͗:umk̝䕧1>C~{2q}8Y~}oVV_CzS08rC7u5{IӺä5)ר̶yi>97ε-?SbxR~9BڹS;G\)7E~ͮtПwoDŽm*ϴ Ayٷ9)ۋ1mXy\X~J]y$#&LΏI,?E?[DŽJᯎ7a.| ))9k>C~s+׳UzƬnusOtr[)r{~ʫqۆ>@~f;* ?el7kG+f'yiT!uos ¾PoS3[wvy'XScOOI|SH[+׳UzƬcOϕ^k~˻J{p=Oi}aۘ[~ʜ8t&NCY~JH~J~yO9Cy֞sR~ʟ:U;gH]nm䞸jl췗ĸ~8Oz,q א/OPӾo餸11Bj)7a\>wSn3RW6ۊ);SV|ǡS ?Řj<+?Śͺm}CyO7;pOɫSs!?%'OzMx[$e;t1jx4>1>'=㾿g1C)֘5&~r)i:wH~>W~1=ҧ SO+wNq;|+({姼77Oɬߗ~I).?:hkydS+Ter~ʄwݾ{<&~yB[I]ң>C?+@~5!fuSײS4ǘRwOi-Z5Oɛ+wN }zռq~s~)?EHͷrE:G\_0n&w1M2 ?OQw`/ǡ: ?\:+tyF]ׯcOsw]α[[s=cNk6U۪7>I.㽴9Nwvݫ)OS]~:67sƍ: >1aJSs8Zb <6纯Oy;4v&?w~Or_=9@ߜ'UgccJ^txFSYO ֏98f&?%+7e!^Yr޺ݾ*Ͽ-wDm{#^WPw!kS[*ͳ|ն^W&}zi/&_ë&mػǞS^N1O_[xJMZS^䧨_u1<ĉu|uNqni~,?E~Jźyx)=xR~JqܫĮ?u̎--ӿ7)b7/IXLQIoS-'tKz^nǓoT^B)wIE}s)@mz ֐S6 bN8=}o@:7/%VmV:gV&s6OW&N7+XS]D)-)/T?Nq{(bI)*Bvcuܿ"4յԼN`~&<_u\9s}5.{˷LVA>[}XqgY|%Gi@Ƙqѻ?Sr7 VRTr2O:ٔ? ?ef)?w,>FLOvy.5q-?3/uŒ}Tx]9zbObj{MwYQw]OcwSs_ku›|5U' +^myn8 ^cN#姼dvw>2y&{NKu:y7yMdЧ;޷|^tD:>&\}5= 1;V,N kHnkͩ?/:]g=>z8T~JN_YnSC~ucS~O$:گnq7#/b#&v#lq}yS֐Rָ^=﻾=Nwy };q„uʺR]7rv*qw&S@?/ /QSu{4Ozդyz)Ę]ؚGǙ0NOLH䏏^̽Sؗjޑ6V2?x'{~oNT/6:KTo[Q'oZOy*nl6MKcq`nYYul"?'i )yIJ>u'ŷ.SޗIc~BHyn~IFc$帘;vǼ[ǟ۷/Q}*%ylyz{ЦF݋Ijk'zk۔? ?E괆!؝ i`Oޫ秤ncSNcRnʩwħŬR1N78w9ƪ:&%uu}szu=$oM9Wqܿ9O3ΐJIߏOO;t?۱]$tQy/&󫼜rzoZ;ON#E/P&˯~S=2=1I_Jrk+pm l)]jzkA=?Ixy `Ǿ{%:ӛeҵ>lWr?&|7{Bʾ݉ow9}F$Iȥ{/>ԵiV?}=A]{k%9@8WqCq1ؓS  "?7!f$wx]&^<ߌ|7u\;}'iiվlEfoJLZK:e\=in>iaw;PQW{@"޿ @U:qqfM?^{DcC~J |c~Jڞlc+I%V/q8]YRmuO1Ŝ.7rS^x ߏ]ݮ{'|ү!=Ӷv=oHOObO @G{,3@x"^OASU?#?ElYY**g{YNN^3l~fjv-?gnC9LN3Ta/kC{fdʞ7 :[7Hn|;E?isO-c{.^7ܼѾN+?E?vr? 
[qs3T9`/)Ye 7eބAdW}n~S+,?E B~ ߒ rTw_돮{] ?9=˘#cosY_}l9=˸0Ջ=]n~r֑ϔy;UGƮuó8ܹ=xb_x{o|n{g|~|[D ]Y)Gecj~J3o{ [$L=s&3k0W x ^~gjgScT|܍ױ>EE;NiI `^.@߹=x675WU~1o<{ܿ9׭?vKy]q}C%)ܨ/[s=qL㺌X5;qܓe$?oob U)o~Xr : KvK+ܾvXQoyؓk_G~ʮgMzџ.Ӟ߯ä;]3KRu-kn[12o]E?|:FE|ބѱ; m|+޸NuL;&@=X?n>^ë}{p{/rßl;~ӿY_&姼-=WNjM\{Koˋc.G.W[dr9j5~y6@Eo`F; >‰g\N~./?IxE%߮蛺֗.)P{oO(oTc6s-s{)Li]~E:ճw}bͻrP.Ye3 }aGU扽|r,}')G^]]yQr S緉)m"),>*%SnC~J2>oWtxQmޤ/::owŷYcO>ros^S\ܔ-ŧy~})LS)[Sϊ;Z3N?9]Sn:ձO@=$i{*J&~{ɦ6}:VYw6+9qz+qwlo`O)\Oeua'tnc0SUt&M}|^/ӔwWAO?/*1l nc.ŧx}/b_:!!nǴ~^8ɹ)S}vJ۽q{j=S ?enRw){֞O\nn9*.u|m>cv昧˚_OxC{Y\o$޹=jYҭkmv *<.㡤p;t;˫sX$Nje7u'?GX_>CҹOx35՗͚ou|ДvGt7UիIl}1qz G?$3kŷ:bw(~rν]My_\~Jeq̭Iޡ09c'_~ʞ`$v(m{SU]mw뫛X~Nv鯷[;=>eλ'N(MOy{N)SK)S&ɿǮZx6JyoSF|D΅?/nV=|/ cHoz\}nf|sʫsl~YurBw延S{Kzn:˜}!>O_O2m4=jY~Ry7/qjl/|:f0NnW=噽x'w:#T}a[nڷ=&[I9|Q^_t>~V~ʌpuMylڸ`{̪ܺs=OuGw|LVGȵegdsrza2츷$O~\n6~z]3'YmwGotɹ#7{=7uYU=pv:1WuĤ+W=ש?/SzM{,?E~Jk:w_tWd~Nu~ﭭ/Sqs{uA)cC~Ա;;{|;[~"6I]Yߟ uz[8i떛RsorIIIczZr ; $c[S܄Su1["=ߎ!O9GC;&lكu֗/L,IiH~J8N~ ]㥩kc֤=ꭲ {=Mz~Jyvċg> /߫~2|vrSnT'>6m{7U;HOh[5{z{|QQN~ϐҹګ3;OU9-?%i.`|4'>kZ䧜IO%))Tα}Ҹ,}Dzw;7r_(~ρ r ݷ 9f՘9ᙧA_^|czn?u3uk:)q"=fJogb~JŹ˿jVcټ^t섘^]S[yVѾl}|1z1FH@&8a8eS̗csnˤS生k}>$?λY[g>Oًtoh3Ҟߧ9S;7N}^ɱ㜾˻ϷYjIkl)k| &7]{tkOuWoqnl"?8c9e)?%\h!?xs't7'鰧xJs&O:KY}V}ɍ6Ûe1Ge^orpk-gw :GV]mu\k祏_|^ה~΄=^qtrxS~G~JuέWl;scO2)>;_Α4J~.){cS# 生k}.?^~JBF\hsJߠR32sT2ޛ6+ƺǒ'9yn-76ƍr:ꘟr7utiS;S2O5>Cےb}\nL>MCS#]NN?V~)S&{rK=|^~)sK)ocQ!?e^5=&m'?f-ű_~O}'>gږ:gH~?>ލ{m=v1 q~LX;wl)gNd/?eokzSQ}|.yO~:9yncO1>61:_|V^ɺOgҹ7?r}Vݵ 쓿=N 7ӱ_ПÞޖx{>pyjKee $?3V*-c)mOuOKSݜ=uMyy(m)GC>Sfi}k(U}oU]b2VfSN;K8NqSY=>!{Q'(?E;͡=l)C:'?E~ʦ:!wmU~7ߊ::\/ʯǗޝ|ljnesںU'^)? w+?%{i{)gR{q)doN֥uk~J>= uOwɩےR'mQv$5v\Rd姐8L)7ƍ7F^Oї1yh縹;Sz)ϫ7:˸7T8|ﮑs{=/C~ga4}m$?E~Jbꞟ-]g>-wԮk~JRUln{:ImNck]3S ?g]&Lٳ^Sr&I1Rnމk8A~Jswyտ&nOѿ%.:!\X~:y.MQ\[Tѯ/lݚ5:$?U{ksWͫ'{N|_aOۗKo]m:<`Ϙ/}.1vw ̳cቱkܛ2qzz$?%l*Ő쪛7xޝP6S||ڛF996u9~c;eC~'7}^ή委1-?E~JZOCm8;6|3x{{We; Krn)s)rry̖=cM㫊W8[Oܔ| )No*ElӖlc^m^VcMOٶgf_n9rS;Sv;iN~ʾq|bɚ'׷m^"?%AXHQR_FSy5)k.m:5?ek^orW8g<;9.)3癕}v]T^sn)mCܔuV~Juc1)U[}7n F~J8uk=9~9֕R?x>SJj:_~ݝ>{s>yĚ1 u{sy1MN['ï~W=s5:M.땾 Մu,)ƹ]Q~1q=u-8}/؆'Tmm{s7jfunT'?^c@}]NOXf\1SnNMu}BJn2~wT]j[Wǒgp+Ns~^q7`Vy))s>)B,oC~6X;Q1Vk\SKwǧ /c}=!?t Эncmo+i;6ǥͽO8}ױT%6OHkikrSzO>_֤g.?}O)Ǯ直1S|)uS<O7OSLwO9n?߮+W]= {?1K[cKN¤xھ)7UrY/ߏ^{bn~> ݧŎHߩ{qO0ysM)u E)I~>xA~JFLȵԾY)W~JH۝;ծӎN~;+{Zëysܸu,=`7_ܜW'e]ދԱp3\W*ˍ=q~_:ݫǵ@|ۚ1y )Q~J{bazح+7*}ߝ@ڮ1WUtOgy뵵sk M\O<7{%{Yt(ylSwpNh]}u_;H/ k~7q]KðaT86`X;7Pv91mZA3R[db@bŇWsVݝ6Zo^$7<%'1Hy}{ ߇x1י6O}:Uz7N:'XS2oO?W4Ὁ9*)LE]c_c['ȮS{;q=οߑnNV똯{$W{}So$O:ic߲t񋶸176Z.߇7wpmqMv}YS:$kR~JbH{]ݓ?>މt{u}1nu-S~˻ӻ~ߟnS_{IwF·u|ں[zܔ:9?uv)[;ǯc±)#>R_@\a[~ ԟkJ~:ݑoOE]Szֻ瞾_ԾĽ07:Ng7eVcۭ:eMUw|n۩RbLzckqNv~R 8LNJ|S3!?q\>s֪7SRǺ8ؗv=O?$5)iXen>Om9bo$t5r<5;}wzG_XgU'A;?87y0{̗r瑟.eKPUSwhsؗ7M+vr:Kd~3Oܣ:=?TeV׻۶6P7PN:{Rv0V:P3ywoOy=n_͛'OOZ;%ݪcwWɞ919q*9n?Z!3n?+ L}b$@l{/s~{|m ̗vo ϱ;#\s9*+Sf~o B@z8u\)wǓRlzfMech3牺:Gp?Mb@NRޝ6kQE?\t#Hb <BYGԶYױC2PyfZu j}u[N=Gy3w@$M>=`|ʕ9Iؽֱfc3dLf'3UojzeOty-W5_ugva]}%v75g=_M͛:g]ZǚU/{338gL#Dvֲz=ϵ.Nuf?6Fu|١-Zzrb%LYOc5>q>` 9֙g`/jzgqdsz3 u3ӿ9dQKT{{?'=L3ϨOF]ZZOVՑZ=ANuf?qF[wv?[SVy5Zg{׽| 8g輦P2:鿛`ϫwf)p^{ob9{ K}=ogk)Nsىp:uoz>Nw!Н{8~dgwն5;sN6Ž\[;'Os)ofzg:Pce.=tȣuj.ENOTy&g"psʕȨieDF<+$)Y#Q -"U)*![)!)[>EF+vϧ|?tGTS2׸T,W3O~&1ͬ|?QSی |JV)F"O_>2(oǔqd]r)]ƳM`>T| Pez˳`޶b 2*@ɹ.}Y5SobS6km໪}O']v=~>zͺ໊'uoy IwkTIfa+eӶ沲VdT 2/tgYnpմwۻsz-UeE6cB>*uk]3:N-ruQcͮo`Gt}5Q}^Tkv=W`PVԴ~WU\ei=Ea-:fqm˱u_d3Tެt2yɓru8&fy:eX+D/"wϿbmŤ98S2e}ղ-u;DOo۾SɧoC>E>eRNxvNG2ȧ_ {ID>em'ݏ.2^+ֲ{EOVO58|6dSS&딺? 
|9\' OYۆ|J~>ڜ\SkREڃGcOɧmC>E>%Ml aR>^ɕ>{Clڽ)g֍|JlM.K:y't%jn;2*p|x_uRvl {5SGMڜ\vb> ˓=DΞ'O>W7qɧ~\ =.7 L>|8SF>%^oڝOY%:r*8|X1\wO٣^mO̘g,])O:؝|X12Wu*2v;S&!8|X5dTSzR/0ۊ}ߣ;O>T>׺Lǎ)+@.Dl|J?vSrߌ|ʛDqSOZؕ|ʵwgǵϪ6SV==joVyp)zztkc>c=d>g<ʧnNw?^ƚt:vUL1{ee/`gy= Y"qR~kcŜ?yvk>]^4d+IOr}]֧˜efV2QͬO˘:S)ϿESFDECϛ>g =D?b|qu{Nw u%5>G =;O3MY5U㭜רgx")|G!|{>ekx+_ӱ^V3˨Nʧ| ݊| v :Vאz\ٿ̕C`G)^s(!Ny gȥL>^ɧpzW+d| <% |ʧDKg 2edD*)w~&JSL<+2s")s~@)o~*Y? nw?S] @H|~WDߞOy{|3UT=#g^u{韶gsUPNdϪäO:~t@LɧvV}t}\ͥȩ!q5"OeiDF_U9ʨ-X՞ @@i_PY݆ @|Jȱk>EF :;<3TjɧZ)1=!Pk| @ydr"17KNH>~U t$ͪ5Q:QΦ|oreTnS1] ߤoٔ|_>F>YǑZ)oڔOv#+&d:dS~Ǫq^YSe8v[/<)oJ>UFa3G*s)gn/;zrٔtS`.;bt:Vz}ne\@{W5'+vsiBv>姶&ɧw=Gdea:S83 U|OΙ7}8/Շ}]}\^=J2Ȧ\k3jWQfV[dS8I׮{j+eH)vOÝ^vQ ;=}SdZ6%q9NCE՛| 'Oy>_f%Sڍ˛ۤ>ܙce:}O-*'"IxOה>D| #|"&'RO|nKt[)k5 SbΞI*u{~|J:OCV>%*;adT*7׬:\+;-ҡ/xEl uͧTyZSч|J~@G)yg2)t=.'S:|WNd2,͉IV^3wΧt~>MgD[>SD|\q_.2 Wɧ̞>Tf\G8r_>"O6N}:r].2S"Sߞ K`Ykl;SdB2UyCw:}g]vdT8Qٔ5)kY?"_kOV_p ZZ)5%@W)v w^TS:krݲs)L5ow^~7;S&ɔ5/c.D"Sv8Z|Jy츖s˧Н}sO:bEdOmv~p&X*b]mdsS:kq,׏ROZMdb6~wlcU!k 1{^VSގmȧܫNu1\yN*)_ȧO١WS]Ȧ\)ߏeS:\36Sjusfʧt*S2%rZ>e>Dc˦:ScF%/s!61b΢,>DαvCV]ˮ yW}PЦS{~GEȦC I=)mtE͐IsES|Oʺ:Z캝(O)g*khLrg_{ED>S\G+ڑ ;+"4wz:ɧgȧ}_"0| odDdSSOO v[N| `/*r")O{8eDgS 0| oɧVFNl| KO/r))V)vb\͛cb=કȢ{T;Ylj5'tmDα2)kF>O)W|ڮ|ʳSQj|J|J6[)2y:;S&S1%2%{>%Ojȧi*ٵ1avʧLZjɧi">Wn)@->mD;2WPQҮ|9@->m>cO)Nj>5u=1}x)wH>N˧\爵\)gu2_]>\OP e=-q1t;~ʧU7^f>N;oYOjȑɣDޓ.Rsم缼7}t$['{S !}DzٔOF|JfYO@'?;%:;9u::O-<^Jt{sr'zbS:}D랙QOy?czʓd"󘫞@{ɻe8WD᜚F_ IA=xJጚV ` j tPSw {4b+;YD᜚+AᬺDe8 i@j 'y UWޮ[g޳p)5δzѺdڪ M)pC=kD7uNM&Y=$ˠ1޲֯zAAMWݻc3ڟ0u+d=&ˠo{ʱT^FַJ jo1Ru>ys32SzvkD+7TX>f?Y5OxD $)T&pF=tT^k?Y5ͪC͸VOj./Y5=ϑq-$ j9/Ҝ)k3恺,z޹Oêgx{ͧ@kb}kO2iO?CE>>)bM.Y?pnGӸgTǠj>fQZqw s2čI|Ѥ3kǮسMҵ_kuT<ʘSpY}~~P=Ņ<=Cͳg|„lg't+~Df !p_)jWs?g|Bu69=20Us`^dTMI>;ߺ~#ϒ1N+r>ewyGAMt?ykstF[>Lɧ|wIũ|cEAM'մ>gtľ.|6'>ލ-b- j1Y_MΧdLauUugeط{MWb/xB>ZԽxƕ9GSO2.5:) )x}hceF%bND%smCAM4W |ʵ6'!2'r:ݾY8,fuJM۞+M>Z1̹8aĻ|}2ifM<h)ڜn3Z9Wdؗ,NitWDO6}oTϑзncf` {tsq"c~=W |ʵ6'1gDݷs|' gu u̦y_dT,)ڜƩE?ٿLY+؟,vkdWkȧ\ksB)je{YOAM5{7:{@kmNhcJ#jmʧR2iƸibo7Z |ʵ6'!2oO<#2iu@kVO6Szʧȧg Z5vUΧ|`5kmNhC>)gV 7YNכ@'1N)| @oQ{wXM>6'!2S`.j)?9 sIϳw9|*)_Xij>% mLɧDAf&"VM{4 ZUS~nsBޫY1rXxΧ|`eZmt͓Tr:αU2WL!oɧt9ڈ:g|nL{];'v6TGsO!bC>}xMoįcȧJAzْ7d~gţswf}x{AlPw"zحj?9&xޝV>ɵYs"?QlOoe `7{<%23~])Yu>{L:NC ۈ8w=٧Φ/ސOw[ | @+CDJ|"P(M| @\ʧ>%Ћ$iSxC>7SxC>7SxC>7Sx HeD>ιq%"LϧȨ/:w2 9}8GfsN&Y/Y3 2)y9~u RmdΩ,yz' 4ZL ꩮS gԴjnePSuސeؿ燚eػsD]2@' T] g ,95UW ,}Z>^YuyG {1QNaoTMZfؙ,95]]Wsx*{] `,{ӽ;Σ'DX;v=uw7ߵc̓cIfsk^jg3f\˱v2 2oS]߬^@Ok8֍]t2ȨԴ}#b|'TSS9gCMgxۊ~c@ف]عU5ߛW e3 jW]PO[ڍ5#`+ t]%oP17kSOQӬt˧Tcn'plZ~| 92m;pΆf3TSX:"`gd[=U> ?RSDm8gm42*)Ve\͛W=ΕU}ʬITٵ˚.XkasO8|q~;wMAD=ϊ{Uye]Γ>eAd+jS>eVe8)PYϷ^+=*L1zsʱϬIugs\mjePY=duͧWoઝMXvFe]Ƿ8G%-VezF\J~ww;ooIW>MOȦd#O^{:oXr9)_乢TU]3ߣ'OMN:Ly^/9XPecM9uJ>ӻtoN2s)ca;^e߉jj5=+OO)~Cʧ]LȧL[5U"ɴ|Jw+)B{GzSgkStR];'7? 
D9씫c>S@6vϧD"+?46StR]+6MS{Mxӗbm5nIk>eҞ.52u*{S{M2PijO~7oc؅|ʾ5xJ{Wl PM>%;) QcT]>|k.*:Sie];~ɤUSxz~nfNȧDuwODJ~Ⱥ0)jZW)u}{o{$)m̨d vȧDi䳬owg\X )j:5]WmB>%)\gid<.N\v @j5)wڴL!ƛ͖is-;3ܙYI5 5}Wn5r'KOmcd9̵sh2ߙRO{mxV,'SN.lʛL F T,"dsi/ S)lLkf>˝k:OicʧijvVׯ:dxt @#82*{OOi#)_.͓9ϑ06ٺd:qZMU}֞"٤SSٌ}sv| pNYliE!r@g)ΒD>{YvOy{#[9jZϬvu]Lg4_'MOʧ~#ksb纞:6כ@7׍Sͧtkaܫ {Aw&׆|JL[椵|ʪ{R{0S|EijwS7b\1[yI rTS^ PyʻV1-GQmDeOٻ6@ \ PA>% 6NɧLȒtd+= Fu>ٞP|7=%سX./c,I\I9 V|{dysFJzƔ3>;ι7Zџyw_y)oi6;cY:?C&b#ƪK*)Qs=#66@5|F T=ce6%jgXF|G ﶳ37xMwE>/|^w%"E5"7W 34vΧUdٍNZxBm|g?e_7]3Ca}LVuvwéLUMp;Sɧ@w&M?p{Gg)Nz7vvʟ{fS"OM8{?fS*m8'{p;߈G:I6% fS"NZs>M`w&E&׷?e0>o1)琝}ψl3ʨպ< PC>M{vWdu2UeSV[Fɳt8K8n|vW%ij6_=.%yV>GƵoe;VD>e*o/TO3TV_F̑L?CE>L>%k P/qH>e3?7?=˛[uinb~^UݟEC21_ߴS;-I~֩m˧o;g*s+Ys'Au>%{s)s&Mϛ{>Z96Y}xR?|JVOj)O}Zk=#F,Ye>% 1ͨ}J2LΦd1osRV-^wt&?5G @iٔ#͚gԇ;]y;s:Oأ ۔|Jv]E>%vN>SrI;:5N҇;\y{@$I*t<:lOѯ)}_Ś7t6=Roka ٔig\}Nڇ|J7t&7On'ٔOxNuڇOUa佻SdT?'OzR}ʧt'd9q>=IeYuZ#t>d޳,I>%wz6Γ-;S:uFS[`=.2r 1aP>`|ʛ>}Dv)ṧqg|wCo<[:1ߧmG`95)O2!S;aJ{\+ާ|~&tͦ|۔6NΦvwۼ+:U}iU鹔LP{җ;4OFb>E6%ުzff6}ߜӬ^[} 2巓vx^wFqy.wI)]krݲs)L)km\S\tvĎIseJ # :Oۿvu0Ū5IٔOFX ^|EX'Mm8dN&_+VF$L6SF׳waĽS`.L&W25X}Ȫa ws&OMxmȧp=ESSϧЇsӼM)_)Sr_gl)_]ʔLʩd)2簢/puul Oy**G92-NC֜jtȧ|tdzT29'!s.myAs)<>d<ώ}ȞGf/r:xEsNȧi're?&sM<׎Φ_nkhtrg_{E:egYC`+{ۑt'?{6}pt&TR>^+Q`*ΉTq wg;@EL\ ȗBDsIXO>7*SC>7Sw)""ޝ~Ċ̋ȦяXQ,+"7"Bd\ɍ乓KM?yIYQAK?]ͥȩ9(2*g{McE>o +) 3ʔ̺pޒQ~co!w)/<;?'FGOpuhgl~EsGsAZzoEF3A`՚ºOsEsF@`0]sF9@Ϥ +狟]1qC}H\=^|s_w5xwͱ#X:<`rggN\/u>0kni3*2-䳘M|Mc޷g>YW7"ʳ8+%WyLW쫷z\qjbEwo[~1ΘVʨdͻxs&=G~> sU{+CF9AdE2:S2c g f>Y]˚+˧3m{ķ6_WWK_vf쑏ʧL۝r*;$gR컎_u&v/aRvuUߣ26ƕsT;cjM nΚʺQme=d^}nu{Jn!=#nS):T^B~z_wT,Q|ھKxΜN v:):>SjF}"P{ t[m0!R/(2'|lJX u.P]?!OV;Z6%W+Sj}%~j~7gǏ *՝cv/Sɓ)S(:vy̹zf**^1}}e{ej/su*z/@f+퉉txOÓYL_U)gdj]S2ǗvTn1LLRN~cv8\گz=Mfw,Od{gڈzO۾S~>+WZ?]u5:g:ǾNZV~adw];)󕪿!`cT=k_ ]Y1Rm=$9zٔYU/1>ODZuUS䉹,R}~8L!RsXa$2/wا߭6>a@u|+隧E͍ګ߽WSVȣ|9VױGS^W˚AcWgyjSdS)= +|Uߏ+@5ZOϧ<}9:e??ΧTǎ~Uw;NGS"MkHur;wfywJrM2ܕ!xbtȧXGZ>Y}::J=ku= 3eEnQZǏ7!ݿNnYeO9cx7C?^+ie^)L>%o@u|2ZߗOwϵE'}[>euE>eV;G}I> ɝ|JuP|JŲVG߷5L6%w@9|ʾ˓sڗE>OO9/Qc~^>#拳;|g ӽ m\Sz\r=jϮ)Xewϥ+wM^܄|J~OSveu%ڹ|A>睔OgR_Է'+wRɧܫk&g~y9܇OY&m@˧Կvr }li=eO9:wʺO?ߒOY[VS2')us.OYSW%dvnq?y,ʧ{Rn{YYڹf}Χt)8v R{#sڭƣ|:8T6jߟO|O{f7眐O1:|-|/v^=!rwzq9'׾oiFުc˧_wD׽pS; ʧs){1wx|NZ}ogOs]O9ɵ{U{ɧN>.SͮBk|O>Ѭ̅|9rז^ :OO(|J(ҳ9e>|J6>ei?u.Rw$[D[]~x28A)}ձv՘|N}mɧ̸dS;O.2sΐ\ͧ<)_OqiYO741oɧt_d# gMH>E>%)u }g|҆#лoks%`Vg]@)= )Guq9s rʧk|1VnJ{zmlW黪c|)3ԸO=-SΧ.::3ņbR>N|OCvNOzPWֆ'Cr<o{9SjΕ"~2)9wf- T5)R+>פ>4eqS9Z]7r }lE}_P>evS"l☽3x{li*?'fSVlJݵ k͊r;|:<;'QmR)FDɧ}[mcK;n/^R>e_]ȧ\ɧĔuSs)+Y>Zfh͈kN @eUg웨G/;CBnv[Xʧ;r ӿGbv_M[u\Oʧ2}Ӯ9}ckbT]oge}uk}øt]^cK;|>ow2ŝcWO:׌(ck~{@uumkUYgT,_rWPU96)66]WuB?zSyP6]˔Qٔ'W:0߮;A2=):^lVq]wu k{]Th o{;vm^-+խuzBn=:5U>]Lʵc|TȧwXMɧxAz]9]et/h]{XE̗ͯ3kuѱ |J^܋$3<.O-\)Н׮sj`׳2Ekw~yVk7;|ﺐOɽ~ku d"Txv~u9qlMJ8yv֧On!;tuS︆VQ\kp~j_sFζ˼he>Wu Z/l|J>5mnkS k5_6ѯ=8yw;N]~gyEncRn!sǵpg cc(9\NϧdN\Gq TXE\jae|J6^\?*9eYʊP_UݯWJL\˧~tX@X1̵ij0%}ǻێ|ߏ߫iwFzO];W>;w2U컫+eI}n)}TF[N/r 9/>}cȝwݣF_>:OO3]ύ ӫ퟿SU,WZv:_,?W9?r^g}zOam{wJ,TZE7ȧ?\Uѽv;Vew?m5WS)ew}̿3+{ݟB=-c6*}7C>9\S%#s珼oYn}O>ŸwsU h}@%]qJ4.CSԺpoNYo2kӭtyqm] d)dqߊq<Թp}>Y} w)]}e{A`\xE&ܷroozڸv}\K>L})W 2'޹VWJ6&.'\~>SN)̋h@ƕ~=mL\~>Sv׫o>~8]dOaz6&>mU)|Q]| ڭn]/}g+Ps-uܧ>Wȴx|gDͻ&?ONkT'k fO>qk|\X{Jtmn4VjA CSȧԪs}~]@+[Wy|N]>}j>n R=)3(qZ>j]=ɧȧT/:dSvS>~%"ҡ\ j~w9SK]@M)2*Yj}+Sr@IٔeO/)w)o]-R]%YeSWPO}O3)o_12 VW|JF]@وUYwMŲUkȨp.ߣgAwȧhCkQSv|ާ;q,T'r>:ÊcOO^1NoY`7{8oϽdgvىu;ɧܫ+Q )Yoe"r.SVOqYV}U)[نnj.ӊzTW|\o+gVvV8`)o)o޵l&*USg"ճ $T_u:dJ1>up yސO \);TSɧVnjJ SxS>C~D2*]p*SW1R.JyoeO+;r%"P[F6ySz̨\=l @/2*o"Qf ߪL ٮTvw9Qz?C#<Ճ PCFjJ.cg|V [ Vʩ:'dOyr ^Q:fSV﨏.-#|ٌ^̌:[;%οsD-[)k?SgO2E;~;׎|Jw( B>e1;CE>,*۱3G*)٧gW@U)?WkCۑv]Wc'#W@){>WkCn^o8M)}6k,v;?g@ϩ23 ǎ:f>U-*woEU]&3 SbJ>9*[>|J2w+4$SO_`^S)}| 
3)1u<[8=3)\Y`']-MlʷrV<)N.)}tJ^ {>s%:OYS)SEǔl E>%~~ϕL>e9;S^)l]=9v-w#T)!"kU()Uxg;S֞)3lt[Vk;)SS֞C6E>[o;O>c>esς8|sȧS/]SR6ȖOys ٔ#DQ˧oS7}R6"2OoOVoO>oT>g+ϵw'ɧ\c7{V>wP>'7pkuy fOʧţd|r;ϋ8|ʵ<~|)B.\ Is竖Myp"ku1>;}}JW.W@Jse~HH>Z])S;}739dR.*SiG2SyiSŪJOSz݊L'ͧx SŪQO2hwzXiot)˧ȧt;˻;w1S<#`:_]w&SV{>s Ff>ϿɦS?Mb>bUeߵX95zN)&S*OOgWZFdSs߭bTx9'O׿>vt~~ϾyǎΓO~moyVt;IQJ{g+˷m>{,y<- v=o|Shj>^.})xLD}/cWdg-5)O׿{(2ۨJEǬMY]U{๊όfJ>gmӱO+u|rWVD?g]qE9`;-sV}K>o']7'=C0`UQSP!Yޫ>?ّO9 S)3˧V 7w8v+2&)}0яY8Ǖgw;s%0| y}r(y~D)Y8OF.S;N}s=xgB>%}ޑO;);=L_)"#Os|{:~Y2Mͧ<9t,"s")3 |O%:+"DOyz|{>vYdES`?-m\_7jOyY"̙"@<(Ƿ֟s՘ 5kZN,vut-2ZN,ɑLaj7S>i "#Sbׯ_i.EN}?S{/ dSt "_+):wX)Yu>)%)%)_>%ʧȨ)ސO S>eGFeB>At)??1QU mۜlħLf.QٟdW>2*;~l:U6O jfKvSv]4 25ҙQWO`ՙrW+(O:/+s)+-|[OV̀ȧ8i{8`'ޙ#9a Ů|wXuo@gSGD>RoO@*+%M=ߞ?=Uo|?wgIV_|ڷkWuge;{BU֡b){<ȧCE>To'gcGwBMCy9.Ԏ|zuS>-_>Tə7uLlӿ_FUux_cNSxZ7lgeju|{iuݫ“1M̧t׭/6&gSvTU:sQ]oۼXߌ M>%]]e>;J*M˦T<w9O5S&O)L2%RUsy{̕ToںXO{W6R %;k'ЕOܷewY)밳w te@+wwY`KSjzj);}n)7 lOu8Ǵ~Zm,iR)떞O+{IS:yZ겣Uz7S2ӭn3$V*-g;Py}Y9zU~ߨʋ$~=j&K )}|w!pC§Sڹy cW;UzSwߟ"Hʧt2mߝC~+%*U9zz「MG9glSdBNCR&a,߬|o'|0,Ab6zM*lN+I|wUt]y{o&ڕ'Lۄ2nϥT6LN+]UF]幭<7{D;ѧџOMimU{ִlӿ>V>|Jx)J>4Oa%|Wե`|J9'{H^$>SH黬]>\{߷`qj>eXҏ; t?Vֱ"Sz7%S]SHhO`~n|SȓLʦT2U2N +%"S͔L:Ў탳X!&x+Mީec$ԡw3dȷ+bNmM-C>Ws||ȧOy{lؐM)USUku)uNvSʔLɩܘOّçc-dS`|Tr$*qr2UTSf9);*Sѹnw*UiU*ҷe<_o @(*OsڗzȦ޼[~(q?sTFB." y0cG>opZ[Ówڻ)ghL{J˧x@Ok|:TgES۽QVnt)wIS{wEF`6ިΉ|MOC>7)!)2vix3zy{{*")T%VUfD6q ԪȍȦP9n:Fެyl07O<>o2)+0< ;Ȗ@_~ Z'dmusE9?zUmo:7<=uU;_g8Vw_U:9W1ޮZU0%򴌄~o?ץ.9򦎩mӽw;TueUEŸ̾T/y'Ȧ|^NՓ0+ 79RQv9L6>|éɧM,{繤H8|Cg ڛ=S*}>;lwT{gSj=y~ɦdUڷ Kb>3Sq~u>)=el=أ##i6E>SgrA>e_lO8Oim<WW> P[ }WeV$%S]&!٘O;P>VU^$)S}&!غ+˨Q4"=O)coSj%S{B)E>%o}S| @ݹlOPO{\7̎fS~ۄ2SޏB>%=)= /R_k[{>)86)09!c՚Iٔ;C>%c8N"O2-S'qr>%}NP}7'y)&""ROqo_y+Mʨhe8|CWM{ۜyBۺ>>xgS:p)'&v}Yҏg))Ļ}U1eZaZ~ZbOʩLͧ7kjw>֝fT*ٞO|Tf O"};>ӽBcE!!b '훲'̝''y&;:)Ӻvg'ǘUͣ~MȒB>*Sx~UyJM72dRfZm=з Pr;y)2*TgES")ёQ8| oTD>ɦȧ!O_O]O]']U9μN\sn};Vp5ֶkgXsn];VpO֛Yw@5|Kd cM&e?fG`>~@: m Dn[L 3Ld7D2 LfB[!3ސoLhk7d&52 m ̄xCfB[!3ސoLhk7d&f~RQS?:Nky"npBfbWVۓ{/e %ajF28:N;'ۮ@kߕ}&x90qs)uڮo2Wi39*0W+N]}'f5tg&nO|ʪV;2WrT݇?6MOH]sug|ʪuo5qv|>ugAi]~n\1yT5!c}` v=+3秞gG_Uf&I\ѷ*{J|ӿHw8^orW:COol0n_xr[<)N70eu'5v~c)vI&T]knZbP5icSeɬ1?%Kz㢺-v/emk;)Ns.M,IhsUy:;Sծae}O|w'sLj 77>ˮy?qzOck{j`g_o;]N?x;fH_C|jVg8L8ĞgEN}^v\rS\OznŽ61݇swU2OSۓξ\6']*Qq}3n?;cpr6e=cVj[˧9ִas󘓯뷬O>>H_ O)y&f Nmw&wsuT}rVzِKw+:pBk :?Lj9*cw2y|ܐOy~}D m| N_s<%[,oW|[Vg[{u;)!Yl~gǪwSc!>+i}Gt+2%R-H鱞eg_W30pҏ'5}YSдS{bxϽn{W>{R|*2Z>HUSI"TSVn}n;ܩzWiֽ}RN{!\ջٔsOYYq=}k1~Nȧ{g~W'M.O>1}O;E4Ztהg4))}Kw^nkk&]UaO9oS#n7Sn|ؗ,#s\>%,~6Isiʵxe67KY]|)c~oɧ̹ʧNقrVQ6Oɜ76OOI,W>f|ȧ}Nɧ̾>XO{n)rSο/O/=*ʓOxN>%|&OYn:u?=b}yf;N:xS2Jmcg)9ɧ7oO.?,OIL)w]; ȧ>z(ܔmڤ|v>&rgcZӲT3^cn|c{k|ʼ{O˔OɧtfS k]ߵS<7O9Y| ߈~7YKD>%k^'|ʉweHiGS17Ɛ{OOOݏ|Jw.eܐO>xwSǮ|ga)@);S>%k}۾c||]9&OZkjU9w~{Z=wSONչkٔ)s#Ք})w{ntsi{|\ORݞOITƓIdJ=?5"-oʧw;yZw5:zZ>E>cʧܫx/sg|Yz]rO9{Rɧy"%|=7*1?Of嘔O?Oyw踟1˓O!23J>e37]vy_M10)g~OuM<HuS>eGSm&OO&wS>=Թ!`=I.ϚЉ_SFɧ0ynȨFT_keiЪ'ȧ-)k}Z>eXO|?G7m~zcU`u4sY|JNy)eRu:Zsw27SS|J_W}NMKo=W8oݘOYYwsLLY(1sۿ_yIk,w]';\x˺㣧Ɯ|ʝ(_oSZ<O/ϵ'Ϻ>ҟSsw&qG9:%d ѕM۟kǜOOۧ瓿_91N>&םOQO&zJ|JkLk+v77zG~c_N{ޑ޿'ur>>O֙-uNOi+ M}^tޞ4>nG;3K>{|ʔ8'S}Lˍ:jV~Νg3< ESӄvH~(R;O?]i'U3*G}7묜z߼'H縯3)iS?W?N^:imׯiy;\k;#SgOYf)׷s9iȧdG)3eȧ|Wwcm,SOnw6NnZ}= U$eS);巐VdKL~dOyWot^N!}M|ʼx~)5"SڹrS<ߪʐO1R׍v%eN kk{Ԯ|}S7mhbfm}>7xS{탛kȧ;uc|ʧ}Y>e5/vΉN:r'v|Úqҷ C=ZiwSw_wE.]{Uz}O֞Wi##SS֞G열wOƚ|J_>r})c_{Үy)xROl9yдWI C=Z}27#N\_[ךw.j{3Os4gʽg 8{ȧ?^|w}H]*x¹cwoZ ں~Lo߉{:)'d;ƌo˜xo\?=}Ҳ3uk>eu>ev4v `O9o*~1Z[i\wJOko6;sMLleTzcG}IZrޟ0_u>A8^7dQ:Kc^w$y,~u0`NOdVDnRkafd Yϒ7U{g&}S6gkSnԙi[celW'w?^ٳe2>ad{M?d*T? 
鬁72-Ҭۢ69>ŷSrukjps ==]uvyQN6l+]ؔy}.}~:z_;!_o[|V gI+Ue(uԇ)ߺt^O=nIwi5-N-ۓRҼ ~>?ߊ@gy,OϓVSd*3UBiJXw)g_U,i{p6=&󠔳Ɣ-h基A| owŧ|f 17F%ԙo'Sܙ];ϔۣ$ħ-ɿfiݜpڦr/Puf~TϘ[IzJ{ln'XG;!zo뷽8V}>h}ԛS78G:H< 9iv:ڍgHS]=I~\煾^? 1c, 8ߨg@o3=o,oR}bF9m uƜg S-u_e]o@:o# 6`)塺zoI|ܫw(s hC|ƞԥ9wO/ ,{Mw@،iurꢺ D{RħP1&3.P7^0wߤ9)TɌ槵Y: xNJq>yoy4OػCR\+q50ξ ;c=$^FR).w~3Os,w:齇2PWoP. cvM*s'R7࿻>F&c5FC< pJ~C{'o lrekY >Glb}1*FkVNǔQvOޟS^װ;YgJTF8ѧOħݟ5?POfIRoL">I<Ϩ7sumRwLұ~bHe{QG: ڥP#9e~ӇLLoq)P7,z}RRN}džy @ƛ?xm @͘{euUnĢxqq0@rl/Ή@>J?@|ܕ)pg| @ޘz|/΋~ŧc(yujS1zy/ФXg77@__9uxuMG`~1wO7ysֽU1egLOO!60>C]cCP@`V1oGdS6'ܣc=wg&/SOj׾iӣ;`nߣ2_6kcLܣ|O +cIݿO;*{ImۉR]?_GJb?tRo n3@n1D}SQFos{/HΖ~҉#U럸q0ڿ)' ޭz};yȳ}`EҎV?ks>'o) OʆyIBvD3WsN՗K~u߼/yJ}INB?[ц\f|JR=ꪫ@1_g:z:(CvkMOI?IͻQYW6䍵7g9fq)/]u;yQ?Vĝ|i)1O9cTO ΟHzRlJΫv:<)oFs޾']7Z*LM7c&aŧib3_HIs s蛓~LeTGG)OeqAZK?bbv؃5+Ӻ+OR+%n*CigT7}/I}r!}L՟%5S&:!>eBwު(}kߍ Ӿ!?)"LM觿Ź)K:mG2H>SSBx}sTM<` ,)}~-FzM܍l>3:TΘn/S {*EtmU?S7܊OGGV}S/aܵp\"16%=oٟP>S12wy^ҵSK ky}Wܘn/M`,]*ڂ[*ΏЯ9-H|6; }c;gFwlҫfS{OS'9fz9r3?79)uybE7e?[pf7gGYI<`㻮QnckUcnTߚX+>eNr۟Mm'b}K)V^[OW.fJ;GuUǧݹ oJnH4V{Oҷ?^we}n\v:(=cغy>֟Tԟ ϛ6?TۺӝfSr˳ᯏ%ħԌg})Yk!O: .>|||خ_Ƥr`S"}?ĸM7iN8߻O3%W~3:zO&>eN:L6py{}=}\zcgom9ooҧ[;y/>6ϯ4>`<)[{U:FSmbo8VҶ|sZvs8-.ߞZiy|ߎOINjs)ŧ7cUyϺLLwQ9]ej=jkUz>wĽݾ5g{SzH*S0/pbI}ŧ(33Tw\t\|^SzwmR6-_ŧWŧO_G{^:]x2.>e:uOZOuwֿjӯ#Rے}s-rTQ61%ژ2ND3vWb[0KL=,v{/>EY>RSj5ze=YSCtk{=i.O )L|}xon~w' i:a9*Oұ0Ͻ5obZ}~qՇyTQ)M獯}~zWU߾!^+sQħmnZAbHE\AzM^U^:R'))2%-7Wom/<<9'1muH|ʎ!ρ& ԩdJ| ħeFxn>o2~KN_o7o2RQ'[|­}'r>rWݿ|g7W~׮&|.k6IXSf|K|J:ӆ3)8$զK!=oodž|ΊO9>.]W?}M?Ӯj;wk /~0f>^^NZ^s^~\˘s֘j?ebV|2a·ŧϗz)>3m{}>} &gξ__?x|noONm^>n7:+i~MO&O]}vv6m.7emlr|ʋm緟|,/ NQ/s˩k&>Wj61r{MG9ds6ǧtwϱܶe if}A|JO^WԝSau|[>oѩ/)S6SZSmVŻǧ`|4%O'~Ǣ*(_[|J)ޯټr(wQ| 05.1?2wd31ǘP7dc[)܉O3=P&1߈O7Fm>7|;ix{SȄ^?n2,>exMcUeܼXrֺꞰ5[qEor!72 '\[|ܺ&>|#?gz|&>3%ϩ[߼W)e06ħ(W^SħTmϘ)Yk֥5>%nkmMϥϳ'Ữn]#pw;L+{|™cy}\vxjj+>%܌ݔ/1%,O鯏K3ħW>O/[9U6Sޜcy'7'>%'zĴ+m7SSvħOhŧOI*Suߍ2pf,wlyzKyN.GSK){{RKg܇+SSܴ"> N8W|j{XYo]ŧ(߶/*O7WOVŧl+[[Ԏsyػ8?wu+OW4?H,KIkiY~c~lOIWSMkGW;e~!>%Z;ߔY|ʮ5u7C|>8ŧ~&ie̐N|ʾ~rYgc4S,7ձ~Z|ʞk<'>_uZYM_߮cSv͇'rn&՟uSNmNֲwͩjRyعf^U녰{csu/ħ)}eR|qL|r&o*gӾ">3ØֶSS$>#6eDRojl/O9{I@/̃-ʛ͞>v=|xz|ʴ0OL)i)ﮪO<׆u?)&j g͑\ )9eE|JoB;5ֳo)oSo%?ϴ}x2oagϯ/S>glXVF\ħQ)Ƨq'Ɋ-9!ɛ[[fz$ΖlzWZ&'+xi9yuݪ./s۝#{t)O-۞kH|JNY[ħxcz{#>ewn<ֳ~)>3_TۆX(S$<ʧ|N[NZ<z)=&ħTK7+x)mVs?^3tsyloFNVS_WOߟGr>F]\2hs>_<#>T&m?l&>ה4LOɾJROCl}♶c9 Sv{ħYgl+d\&>eb׏/%ϻS1!.@|JVy\fnco$/gٷI=i=l [m=}9zZ[v{}:o6tNTOˣıDrw ,9 !>~S)>%omzu>k9]g2)gLoħ7Nmϩ+ʧzwħϑħHǎ<0XW`:.1 qcsuSk;˧$[۰uGvOn]׶Ƨ/C\\|ʝ6M|Jf}./r\0L&22mus[۰m]'M79'eZ qָz;O9["olGzyٶƑX@`Xsیħ6~ェOׇԁIu~gy]1P|bK|Jelk(ۿ~3>nWsR>20޺qIߵiO>w5$nتsͬMOo*)>;[{ts`!og=WN+>y^ħd߳j-]LO;O)McTħX2X|J}T{ŵqظ+GgZ&>OJ/S =396UU̻?vך8!YRI;8yaBO>ύ-.]ձ LO{)}}Ϥo=m_&_{U;31ħO057"7'7Qnħcܭbw{yLȿ7tzn{(SvT>kل1twM& _yGƌ:~SI_*k گSrʊ߲)>ew7acOO1NNʃ)IϴNzBlCOι疴yqu}9Wϡ6Kֽo?9&;h˾[e2܌{l_;uݛϼlL9żu)e0qNu=3[&x9Fųŧ}63ڲĽ<߮<ӗV07lO7G6Og\W\>'2V(iFJ:@|ʞ5Sm5Y,!Nbq'$a}wo}yiZЦż6Hp{=|Ծ]|J߽ħdu>S̷|;{}~ykv6;ф=ogOjr4vt~:MYGl;~JۋzYkؐ9|>oҷ:MsMŧK|Jv^WqԲLevfS`/+MKO3Ϫ6{~4)v_q_21&m1|*O]c+qdem}|vcߧk %>E=%@!>%1ħ|WV켞-)k6夣/1>XŹŧt1ysyv/=:z?6f_L3u&ϟ&;ٷn+Uc!^_ڦAGҟoûS8_ߘr㺉'>%;N35mn)׼Ok6lzԮ' m e\itzgc[?&a}tz1?.7"q^u >%_Mxqdbޡ]`-.};&7Z1%@Q{.hsU21oħC|ʽ<Բ柾ƍr:7./e|bzpFuϮQ>')/G-P7=+oVÐ6@{u;koc*ǿ뤧sJ|ʉ}irJҝmˋ{ͺy~N)? ,֏+6~ `WOgzxIW:aͺka!hs6;gԵQ9>MM[lv;NbJ Le=HϦ9Mke1~L?y9lWuQT>1ܨ⾠=uohaKm@@w'qkjY,2e/sJ'6=UmewO.CUS1^@~k|*SRyy?s;[c[mt3-zڗ~d?j3%REEݺgMV<fQܚo_쁿Yn)2Ɲ*W ~^(i LGsd|bN2=аkJpܸGsڷ%gbl'quM)`K2Ao9uB}DCj{zz$xN_Oy9VYF1l^ox!9}~yִ{]i sS2/WWO |2$$lVaǫ9a. 
[bؕ!i5Mo{ 0/߿|VFQ oL]IgIlAlJ^_>>%:˰SoB{=;YD1uZ|z+3yTvǧ|N>% Nzq7oBr])njπk s巏; ^Ow{yΑO%'B|ʌ4ؔy ;^~-'|,'o :LGOOQO\06҇t?K굻j:1M=?Ko)`rߕ#10uni?e'\X]:Hf~9O+ YKmi/>+G_`RǪ'W>1sygL ̄n*prmzL}Z[ ~[X`|6c%]CUYr">/`[_iRhdsTeJP/Ty2} >&EI&S }Ri|^켩#no'L+&`oߗ~ջfPyTO;L)38wy{@[ʧ[yoQ^nUPV>/>{-@;˫++*,>>{@;n;~`9߾8>ixC| 0̘ ^j+?^c c1e;{}2@X ) oO/[fM8*S%>E| ^p)">[jOdOO2$+O?ϕch$zyl@ۯ|0O}/@.ħi ϧ>s Ŧm ϧ> }>|9ro;b~oheN/Ƨ~Oٙ^<7:!M`ze2vx瓶;o92Q| k/vcZ@ꛬv]~iӝv)3~6;{xF)Cg|Fےgw[{WW;TRm)e>->%=8f12]m^eNwP0{lx@0 6睘')ߔ)7+!O*CIki{n)㉔xqsw/g`LfC^W%N4ʳ.OI{S~=:OIGGL7^ܱ߮cY=sOIdx)F^-MNS刺%cTn5a\mOGWMOs'RxW>^So`"c;XP_}՞ 1!]{A_7b?{޸Ƅzq]Ki;P^r?]$Mڹ uo/y!>`G?1݋WcXW8|'qsͽ ;fB۲}÷{tOS1LgO5F}ΤsĽcc9k4ӽ4v5^kxϿߓ,{I?q|( ]{ǧ^]exz{4&g$nv~MK1?̈)52-msҳ42 [/0w<ҸxuޮjK[ޗTVnCoz0!>erPQS5ډ {&pZ;-ڱ{O=mI|6S2L/mX| U8sxSڨ)%t]4>}߄oN|/a}E}H·6OQ'TqzIORQ )kk!>埧vG|{L()Ϸ<ܤq{%wq+jKtg4^՛S7{\ISmkҖ)+~1ؔmzmB%FhSgAu~~&`lkOcb[={Onwoi#>|ZmiħLg;v};9ڏ{ƺ|9՟~*=okjOq;?v/i\7'>>Λ0I'ħL(I)mӆK{Wmh_=7|*SԾkoqF~})Rwe>"srS|9K>C^ҿݝX.'$էi1zoP'}{Y!K͵Ri\5J{Ic )s"-6%}^N(߷GVz|JG4sϻ{ŧK/I0CJLYH/o~[xXnK??o4FM=uZF;ur|Jw+GI;r51UiTqeOi㽍ר?)ڧg[ן/S611ħnǺ͹_y->=)S6Shn>ܽRZ6_&ѹv ST:5vw+7ui)yMT >埶ՔGG|߮=}~xcܤ{9NO(SfjgSOϸ{lSvM&g9ߜ_$>>ŧP٦LۧoЕu/}g{~~~[ڿUUϪүkdsԪ{Tvߛ3%Sk i9u|V:1e2u]cj['/>o,?S2&zvĴ82vB|co)my_w|K;!]n߻j-n0'v~{^/Ukco=Wxg_hlslʷ'_)Yi2\榤{M){skS=iS)/Ŷ/َRیi}m{%mħO#)ٞæ{/+h~=5V'Ƨ ҭc9!ᥱĔÄ8)$>W긹rߵTO;<Ui7bLOInOI[8;影Բ:e;a~Sz&ħyud[vsO?&z=uIH}F)]v}Wnϧ)|knܪ{ eRNllKS'qݗdMnO73^Tۿ SʹbvvosO+SS/)8ӌ75i3yϻc׾7e]~J1U2L)wyʸA2/z-or4}O}zTgG Ѷq}7b^شJ|oO8[lsڵe8Skb}qSfoSHn??f>M,3IOoC9s'_c{TϤ-⤽I Lxל>}i\;qHN|JVyz6Ն6W^Oْb'4NxPSħԮN]C?09|E?[6S[R|2y/Q/) kk/}uo7$}]߼ۻo{k2S&rc>{zN5SLiSvw$cZݖ.S0imoS+SvLni^Z<1FX|kZCSfO.ڧkcٚ]|6g[nO?SR|ܜY|s'\)w5!FE3ۻo)TSb -Cu&dsʤ&6m;)z&t4}}^3eL|ʾqtJ|9'>eNĴ4xn)i!>j ccx %A|~\_?.VuSו4[79vgǜbS|ʋNT|Ⱥ{Omo?ܳ=)㺩?]gfu[72q?+cq^y乺;)yefzOYVO;R݆wao/ħWs^LYorm֕{ŧ^Z{V%>e\3>klWoZa ŧ;)cLwYlJ\uZAWuyc){L֗þ9e:}9뒯7/ŧhŧ2)}='b;%-ħO\ocT0}J|ې='>S;ڃyobO3Ƙt7QO?)y¼Q>'vnOٻn91m&;1}uye9*t(Ydӹ,Sr4ߴMOX6guN2m&lܛ(>%c~#wS%>%3-R)}ʡ5=Aw\91&ǧ|Z>Om<ǫuX|ϗo_{ӱ9OwO9͕;z橰wL66v-6LOOyo֙./>卶'ŧ*cH|JϽ^+~\|O|=o^M?o(s=)ۜSkm+=ħiħdL.>% 0'?g>gb䴙z57)srWsa~ҞhkcS2(ԧц}i9>OF{1Z^7a]6O~۞qe=gk/oB嚪)I|vFeΑۖJ|O|J:k'EݜNnKĔW怛&9NݟNߩ^4o_[7vVOf]FN$=kSfOoŧK|߶u\|76Rs)S+h&NEŧOҋĴzDzYTn>g87ߒ6OykB,S%m\o9m_K)SSN?[ĵMi귉O}͹_==M8^|J2M_xoK}V6'ŦX7ϳwoݸ{ש{xkT϶56%]{&&9+&ħL9O4J%ձ))b f%m[I#J;?u,Zkܼ9eNS1Mo)~ۦqe=7ΝsI5ħTۜ'fީ} >hZ ͦ1>}SFL׆7Ʋ%} wieB<0%!1mNoVTCL}rm)`#e,-]Kڶ6}I1uub=>=~Sk]LgZ=1ZxS eOkj{d'{ڔ-s|۴8)/N:bwfikMIoHU\z~2R)Zc,ԮwOYv}ez~F)s{OSfص~S*/ǕS[SOG){nŧ+6ħ)'ʴ9u)@n/O%/>eYރojwnJ׮HM{׫|U|Jbh*>G5BR[Vu uĭ}1)*>eO|5gfy)SF|JM^^mnyyu>U|J^tIc)rhz+c)us<&>6=6+5>eu}ض[4:wucawDZ$ov; q2 $)^'2~.s~Sz9SӾ+-N+!!ħ,7!>E|JUl~Vb^OY܏LOIك6q>G|J#>ezMeK|߯\SjC}Svŧ$+0uٗNL+WiSI)㮔Uc-yL?qmζ}}[ʽq޴I~VY_{?&V&\{Άt2߼/SjzJP6ħ.>c/GUUI);SRK4ے{&Ωq/^&쭮();gxWqmoO~0)>dmc?ؖL'5SNM|JNݗzԵO(aqN=ħ3zDxn6i̘V|JsO뮇ůK@SkҞ}b|3)zD^|:n56qop+}0y_3>sS~|; Sc ħK)eg>>);?ՆTQDsħLJ{\)5R)3 )9U.GMW+>f㮼v^~hg&̤G+RxMec~m97I׽&m}~t?2+ykkڼN|[v^?7t {iʹtI/[SNc8}WļuyN=SVj\oMw^Z>Wun뚓6o{BTn&՟۱CO.JI>u(?V̫ħ;">ŧyv/[_Q2?]ħ!O9{rqk%)OQj̺5OW3OԶmÚaW|mWS+)U;?WU֡|:lhP91O{NƛeU|JOݼ3TfW)w?M)֡ħܹ歺`SYڿҸ=i%5> eXN^ocno9lB s `IElrI9ְ;LUy'isu0ZO+OɧM) h[O.}_LOƐDzu%u/o[B)7[=[.VܯiԦŦo8鼄 R8#/OKkWعd 1>|{ )sI_0Kn{O} /M|֪i ϵ;+钼6a{MϊO@[{2FoSUw3wO=1|OF嬫xGXfZ_<Ħ朷_`H׋O)1Sh9-A?u#?O5}'/c~_[KR_h~Z\k BsN'uYUo7i/%0mN+T[ơe4Z`Z3eZ[}>q;1>7כ0?Λgv–9xʻܴG?)=ci:v~g~-]NǛ ձSpwU/?vƵ9y14yPgSb n!7c)zOߴ|(w߽}-Ne۱44tO/ԑؑT=9]:~232^>Q0eoh+`0m_}M)6N1}Jm0mgTkWwMBOOIJm[}+Bb{Vu ' `}^?_> gy,I1b&ͣ~AͿ,)=5)q$%]RGHKe}zN侻;A =Q1wO~*㚷~smK tljV*ir߿q߲y-ec@f:n){]mc[cէǦTkT\wӧJ3/ŤT=^lA=P'`ri+x'980w :֨|O->|vv^7۶vU[`r->0ߟ-10オ1CJ'34i cwO: jاi`f  bI^kGVq ?+;Syyoi8Wc{ݡN{ `^~yr:_үi`vv`c{ `^hyr2_7yjnhseLo\T9^ `^rLo D}Y: e<ﰻ`? 
d1v< ڃܼ6_umgn3ζ<sBBwskoX֜м{kmGhxc.h^Ov@F_-iOhu_b`v?qhV%Vf>4vQo]N+-;jwm<gi?վkA8Ki۵0f=3t5̯>i3\[ 4_`Vnffd7?OMq)s??o1*|J Ĩ"FL {lxtS|3xt[|wX蛇7~'j E[ g@6w'-:|'~[~I|ٷ=3OyK;ܟ|;&$Ύr7wNOφo=|J:sʉDVƫOIʿX}^w[ef߾PWfgpS> ߖgboj>){vf=6wZ.tB?tSi5-6zG ;Huoo=ki~)ߖ_+5> Jʷie}µoUe9W7ےؔq”q㖹Hg,VBlʶk<{|N>w&Ce~DڙS-?[2OKXo֡Iv<=I;>/cĦgu```~=SbN|)>{-7Q_5I1O7KRl'+#9oL*ھ)gLmWcJ=sSOW;R];m/[G{܌Ө,_]efr?ӽ;ujRCTF&]ws|Jl_}EK{`\Ko;YjZLHw9'۝ | }3+*#=6Xlb]~o];i}m񏘷z(>eoz{^3>O|77Ӟ3]N\V>u|])2R"=>%[eN$fL|^֪_3u{lKe[S?[2omc?cIQn|O_{PfQOKuu i+6ɴmc{٦5.6;-sy{O1~ں~i﹀s^=Y1ݴMI{?miy}:[TY:a9!)qړR=>86_V$ݣO3)gWT^w>bXp6셉(SmS2ħK2fSίCPo Uħݣr>}/O+)y`<1 )m;=ߣHsnm̗k{tyߐ/2ڎz35DMwL\|̴H^{xeֿ|ڬ6+ҷ٦RL*7ucN7y~W|^|y3:o@g&}25{+uFAG&KJ|혈OqKK:5ϔo'՟ߚ )y0Nq]727HMIkħ;kB|ˤ">eM(kcħG[S*K1a{ 'S1">%m=QciO{m.km{rkok)λp_'ֹOUjT}kbqSf^nNORhS1$D)}iW1eO:c<Ŷ>`orbvԙe|9cr|ʶXeV˺}\BڈO=r'=ʃl-cu=7VOOT&\t8`/{S&UGzi=!&1V7rܠr a,>%+]ħ)VOy?{mVFRHKI[)"()oSvħOyi09ŧ|&Q?Ob|ɺ*>E-s(5Yu+l>+g& [:cHmicIL]LΤObKxmMpJL k~SK|JvZBO|J{w=)*SjֹǴ2)6erR4--RSR3OҘXLx/K1?'U|ʎ5?}܎|>/n+^ݶ6>{S--*c*& rK=:W?ҙw}:LZv5k9w#k :i^Y]ħXظ./>%9&!>eG=-)[>M?﫸|v=@|ʮ6$ħFgTf7SRL_w3M'ħdY?S2ƘSuI^7Ǟ|b 3=N3o=Guuw+ɤkΟi1pM)~mHǩ*>S2Jyc1)z~6OȔoXMjҾiaM~+~bv?)s)19>%i]09bri^*SqZ'>%clt޴n<}}G> s8UQq#;Z'UNwS~CZ{zR}sQ|ʌz~xܬS뮳_k)lYH;3DjZOٿv&*>E|Ķ_YμJ|JRL7'^|g{q8|S| 7oo%=M76>ھNXwKN6O}/nG>nON&'<gL=k&MOz׹ :]ħ/>x"m̞N*g&YKc"6uM^|Jf܇OזTW;mxxI{6SB|5}?1fOc8zc)&SN-ŧ6 K̇}my-Ҥ7oܿOYP7;c&Ύk_\cX&r,>e?yOOɯSz ŧOIR[lMIM )<[ŧܿ1Fx֯;v55br!>E|kO5LնF׭A K7)#]⍼zu'> ~SI:25{|nNSj򴘈kr^ILSvSħ$gԶ帔i(tEy›׎ħOIH}(>~?rO9{-)0)}k /qS^S`w4imk按s|' uZ[ڐ7SםRSNӓNձ {I&_|Jwbߞ>^r_)зZ|,6MɫǧtEj|KGȕ`lUTM裦|m>hZkb)U߿Կ$ٶwu||i|y?0Npm)YsgMܿO3G<rŧdq7v|ʷ_O?N>椱FߙK!>?_^6ۖ5OןM_OMϿi3m]oZ?~|sM)uP|ԳSc)&[)Sn_[|'m,z3=kCSOɹϋc.)ǂS.SSv~Iy"_]'g]i;ms se۸grp?-ħdlcSŧt))E3Էש+ϻL̷W/SІv+f"9`Kž딺j_U}y?\քLS_u$>r6?oŧX*Snk/g6nr6ħyZ,>e^6[SY_8YV^?)}ˤ_ٞ=ޞ5S:ܯ{ _S+n:u|Hk[s}ib,EgyMLڧ1u5em3uU븓oKWx-6 M_WS&}#d7L~=ʓ=3?)~{?)#|1WB{ZAA|ʬ?-&bӸ z1_Wnj6|M|~ߺ։uI{&s-<^ZlJm;_Qؐ/کrc>WfN[CHk#s>4y|}Pzf{gn9^'>iy)c}햱)HkuS߭^{j8s:~skxƕ9RyOZħ]ħm^ZLLONg2#-^OI'ۗccR߷|Ǝ;i4uoдc)&MXKܷ>dҼrjAb=~)oL `s +{ڥ/_޾?r~*k_|7SY[oen}LiIRSfםi'><2UY"|oIv ʧxOp~{5o˧/'q3דx2?i!`Ro~iNU}/)y)IHO-cIi}[ħ9/?7+ӆr;i)>lYxm\!㼴Ys4sؚ|8ל2y]#u)>7ƤWտԏ+oå(Lɋ1'8=!6>?I^֧qӉcgkkIs$^u2ȝ[5Z?+O„gxO|ʻej>ߩI/[cnƧ||ڢ c4aW9щOٽVrwIubҳOoC5_ħԍŧC|JM{;_Dۓ7}.y53M:Ruv=7xƟ-38/I&XYK޵;R[!IcD)S^OI¤yƓ{[S6SU|ʽfNq/ߍ'j_ݬs6mi+'|kyӸpҚpZ2'>eZ촵I֧lieG\B5A<I nzPˢ9<$(ӪVO7Su?ĤgM;co\yH{}O*g }gt<__KS~ߦ6מ3*?eǜ1yb~ sWUתKg 7{+GgR̜}`>D5ɐvRls5UU'ԧіYWo˷ۿ;xeϿ_+)uF$Ɯ)9jZ~J ?%clܲO0]ݞmx9a6O ):=r}¼ }MYonJ-g=*86(O@\ܔ9=W˖\\oW=O6L~u )Sy}F)y(Mο;ng'S&U)}=/x3fo-{PQ.BIuwQt9lW;a<L~bxYY]}3&zM=D)m+K&:dù̩)/[[UYIs>{=sحغLfB~Jμ$%c۽SAW׮Z~D u,ҎrQ?K'1RNaqObdgUwRs*8~}S7uqޟwP@x#&;mȏgSOIߛ~ xZ~ʉϙ^79 <cymSb|u''}rު\6AΎoO:G6ԱĖ9GYشwAξ99(curnija8Y/bf|Ts$nsM;O\wFS'VTHCOIIs3J4]u>+=-vlgPRqugj9nM8=+lB[{{!fϙ]lʑؼv̙/0FO_X!?qyԧާ'"|UL+-uG³o~B>`Y̧HxOc{~DŽr:X=Mo ey3 $iP0W1B~XP=>wUg0P~Nb:}E7˻;> ziY6r~Jy>qzNbߢݬﵕyAEp u 1ڼ/-n(?>A<0k|IO>w+t?w;^nމ.[g}toɿ;]opo:ߓtֿj-%?eމ~w"{cp.vN_3Wx{ 0>OT m_QƉ3e7!uS?Pn|-sΘמ^ծV)9)@WC,~?']8iQQᬘL 0#^(L|sg1= i)Ur݊Iv(?:1볎\1Cy)U{WcE۱.'_P<{M-{ys/={|Riz1W!{\r8}NmcZ*>`]<qp^Lxm/RoOn} \H~7Dz@}?sck ?xs\`޸dwup:6[䌡+88!&Nc6ig;}q.v_̙+Mxw\ T] Iߜssu 1t[q%?@͚*squa[`6MC18yi^<~+7vmĥDmt_\>Ywvs{1o׏]]gu'kƮu/p3:T↵/p\3@TC.68@Z֤֥pVg5bך~o@Rp^'VEI4Ft$q 5( nx ^SLywgvOx~s?:7&#u3b6@W\0u+m!Ϻ\ q!%%W1@Ą?]O,!';O-5 ڔw.s6lq޿DUkAcx O9𤳰~MuWrRgkUUeؿo&<*?'gL)>5!b50'&Tq oe}F|W'p ' dBuU}ḿ/j&1nۄHSo=0gNq{|u!w5@E,txwa0WYgm͹)9yaUgNVǩqֆxl*9{icK}^NbW.qo8Oؑ^zܺ)C*Izѵ@~qx#OcNULKo]g77OuHN3'5nߎ!s1rw"%^β>}?SE kDb5Ivw5{;ecQ[b~b\Ҧ7o+?ekGe槼x7%}s S03:X =>ogW^#u8!WHSjWL),c.65l\~;}D~[q]~ ι1k6q:{%m}R~?sVkbnJ;&T)~:UŔS9yvh۾?xN{w~7)usou>y} EX@:њ vknV_w{L8 ^lI9)%য়ɵWpb; 
y[sS3'mk8^FS\{Os1?cuom`hFUJϓae.so9_VS.~ζT&>ýVrFضH&y`? g6{1}uS ~_ORϟ_{ &o9'vյ΍|~c}ל^+|n#қ뺺M\O{_}لs&*yF۱ݿ6ސ2m/?L_gƊkWO{G7;OyMO ZǓ٩Ϟ'dmZC=[#^'㐵95g\{cgGU9MbO=o>j`RnAYKى9Sz=mn;+?Թvz)};w:~Ӿ7tRyڋz>?{.nF?RLw{Mo> \~$cuw LRGbrg~&*.OlǧGwj;SIJ~zc[۔F:cnBst;OZOϷM}>{O7\Rzm<$ި}l)EkL{S+̻/)\0OQ3o_#~'YO;LKzs⹶hW[kN|wYߣbo%i"'.9w}KImqcJ~ʆ3SKϮ\OٷYbO_y}`S3tKo,}̹iwnNM읗uwR9gfNMwZ2yC>7$3VM|m^;꯳_~¦}}H{o嘹=7哲IN 9*Q mNݫxL0ateMg_\O[SLoo/fhw&?%Mmt&a8pY"?uh>U7S]C4޾>Rў;)k k6eTL賀1~;4R70P"V*)}e}׳iiyI뜴ܔr2HQN9%?3 &//LiS,I{]yB~J<|r?`wvp䧼vg7ɿ$拳1ٴ/5^/˧ޣ59)7wn,}?^'b橱Fw:&`&zSpV_O7߈⎵jl[~J~9LQO?W3W??ig=LOйo!FWxa>q^f]Ort:?%Xa?לz))sg} _-si,5mחGڸ[wuz-u.ŐqM~ p{c.y!?eκT~Jw~NƆ̟'_ڝ>row6_ܭ1Sf]Rگs>sC~JZU[{V:۷=={L7S0ڭyc{k+?)sv|F[IL'wՉω<Ͻ&{fھqmgŷ秜Sމ='3ޥڥ$y{yא"?`˜Yq;tfxoK~~gLwڱY4i xtz-[~ϧ^2;C~ SbgEW_1瞵Oo7wcUZܟt|cX^γS~8xA3us遷Ɵ}yKzw)|#/w?7l(}e=?_/_S\LWiY\ϰoO7wؒӖZ[Mt$ɞC&7ܷ^O~JsRyW~JsX)yƠC˞^R\v?IOΉ:~?\͓H_\OO>ʮÉ:S~/ޘOrc=2a 91h{~5R{ qyƋsp)s蔽QUF!?Ev8O~{0O͟?;7L;qIUߟn+c}B7 W9zixsOUYGL}%?%slOOhI}87gU m>z~JלgrL},.O_bq8ϗ."?E~JOOn}Φq_~JמBٶ/vXkpY>]~_]ϫ6=)gxr6v/ u۔z2tR=22q0;ΉSؿqŹ~6ixz5ؙ}S~~?g:fyMc6o_upn;&? +H?֋};y_i6t?9;L]O^'1>{uU:ۺ2s\Mj9ߡ6/uU`KjgC;֧b9D~ʹ5i}5ivb^OS[_+ySyߙֵ1$P9XSN+k{3MD緻 e0im$6槜s}<)3zIK<9M; loK_֝JWωnsSGMƌW>SϬNOSS殥Sngo}WO#HCLZ䦜|Nv3Z=ϩj=û99䴽!)wSco3ܗƜӦKj~ʧ4'/)fJ{/Hwl~ǁƜkLOIMsyr~ʉnY7=hZ.aN8 O)cr:no6`o~p]s{_N>3.?nDw\79[qSRX91/S{8}3OyӮvua uǬ_dLzYB&*Sl\w暓g\;y'Iպ=-ZUqd~^~Ľ9v}+jnsRbLRl?/ǁx&ʤuw#)AO*>-fxO{)U[㴾_t3'> 3.;y\v?5fBI³pB~Jl47N.q~َߵQoκX9Chs 8n7Ǟ}[wSމ}b>nձv+Hϓbo=ޓޝï3S'J6;!a-=})?哿m l< )q<+>Ϙ_M|=S>}W`sƓs-n!|ROʑٶHO{ug׺<17elk7kUyYuN&?E~6idr]OOı)c>!v=_~3P~NW$L_'vgڳsyNzu<^Cly?|8'uH˧H͕Qj?X/P7Һ/9n)X)3Oɪ jܐS=؏o?Hsy֘;e<5GVۺlvV˧S27snOS=Mi̗\I9~[cקu|<:;v٥o7c֌=0Bo /*XAF5i*ױS uq:c|;T^i-du KOq_rz;;|D; |VlMIxoko{K:æwTkϦmߙ)t@}?dWPOyߧ~ooZnʟ˄kTh',}2/6gv\p O6vr>1ݎO'p\؛tR}z/̣[].BݼOy b gwo -G%|oAvno5+~]ާKAr;` M1Xkr{t3'5/Zq:o[b3@/b,p~Rݿ_k kgߣ2)<ߵq]Lrwk))f6 `_X%1?mk koQu)˻vnHlkΏ`_ -- M=^nXY_ocq?m @F/ Tg-?[c~SoT @fz; TEs^{Sg(o\cs]&ފn/ >͡3?/#]"\@[X8ۗM"e 5z]]4&n.Xo gQ?E},?%.~Kos]<46pn.!X8׏_3w>_]( /&姈&ãq uOhw~Jw|?nlC9q`2)uGSu.& =ke|7q"?%.hR~hl`RӤ 0y MO,M} 7,̕yw;yo Sv5#w9Of]mN;g`B< ঊ}WN_k),ƀnw>O;1Ӯ9O ?ur}K='CeX>3ζ++norJJP[?<\:߶ފѝ)Ie@m][g!Wvx-h;+S')̫'IoL~_)?egɹ-Ss?SuU Իu=sҕS?}|6.]˽TCe^ VuguOEW`uxo7ߧF}~p/We9TǍ>s0%ayёsƸ ט?K9깱sCR3Wˍ򘖛kTuGVu'nWkL@;[g<}Z]tǗq u~?廧ܔw,}K|DE?J9i:1Loc-pVyPjgn[߶늳Ƶ+ϣo>KvW~FW}|z/TkL)2GOp1FNXu{|1?ežC5RYn PaYʟJI]yokПv417֎?S{zO}O[)qO%wr.&1bj{OxW#o[rvLQ'?eF=mk'X>)Lx٫>_[ncY٪v'bI}b,|o$UJ}mxO9;{a~dW.Sל>Oo9Od&+[S*]ڙ[ʍgO8Yqֵw~겨97F4M諟\ül]ÓcZ#{IRUnCg;&ט6O[)kr}>l[}MYv|1 LRO8dt]]oI1̩z9Jnʞl9c>=bַs^߿bn&?eϜz^~ʞZb!ϩ^jog3b^R9.nV1!^wgN1vugS֌>s\{pZ~ʹx^]iC0wWL*yEr9O'c[:[רgһϟu'I#H}R'UqvKS?ImKܮL)'s]zl;s S=-ݵP}-PǐIvShO[b]wb~u1;i~g> &Ϛn)gOle2n|nzG^)KW|uQۿ8H ܔxuM|9'b;&v2"?%OKqe:S bn&?e|z˼]~ʾ;-acL1bo+?HY۝̄6i]rm!w7(?%NnSneJ_.&}~B\OyO{U%ӫPIxu 0ަܸo?ys)b~Sv4{l U{SvWb_SfOc<5ܻ7tuܯS 9TRKFjI^s^GO[Gm" ڭ ݦۨc|j~ 30sXgjkSr1dT'ku佣4R~Sw2e- ٷV9u )ssi}V:vZ[PR1-t1~0;Ѯ䦜Ti5CgV;bM^WKɈUX{\)=wJnSiϜ2g+?%S67܈Ƙؿo+3^S or&Xe8em2%_:4szm^\߽01/V'R>kۚ{4&gO^$WS:ɟ92tm\rNGyAB}Oط4o<'w{ G~JP~JF|L͹4g5f)SԎۯ?mt>>3g 龪 pzڞ{o]'a_= Чrzyufڞ~Q6?;lSko=9󾌳eZoaiV~䧘*S^)拿W'S76kO#1nOFڳ2;njO SNq_(msoɷjO{ON_/!7/n^cޓw[XVIϟ4hcc𖲑uo;O&n.3yMgBʞk691{_o}U{k o56OS}+v!?}{c34Rk$?zo/cX?9?e^3\צ{6/2$=[aP=}ȤqͿu,i'e+?eWlSv)֒Sf%_sR.vu{&0!M/)3B~J4edZt2ynii$M){-Sy)}ȝO^)3ޣ܇oU36]ܚ>+zMӞǞ9eO.s7>;ϖ"?e[?8UYm5~ww ?k>d?9 1#r_SWT/9YS돎9=fw7aO#9vO;oŰhW1)nO^wv>qxb>FRuVj.)Ϯ;L j{YH;&{SfY_n~u c}ǻ^rT^9z69:+t_SmQ~}S/)zuuco,{o!=~O/7Lh~o-T^< Pf4._9Geэ'K=k/ngv/;א%ݭ{~v^Kjl~7MO6_^3޶aήc)/ݞ;lxĤ߸`l57J}oS^ӘC잷LXO?7fmSsq9 ޯ[;w-l_nmjz]N9&?Ц ͘r}g봉{ Sv/y~vݛ*OxtWI cܔqx= ΏwC^wq>L' ۜӘҦ.c^1y~(?eG?O~ߡjiv=7qrvM?﷟i)k:چy/vcoxS]S~ʌX_;_OImiksF~䧼ֱnO?o\>J|qŹķmpR{܎yv"??W ws 
v2wxuos^6!vON<_"|=}iD5 cL)S)ksSթL~1mlN^Pf_| &Sjc߆k~_OF7!LhX)omWLy9?5ߊSfq<ǭ\N7ps^Ư3FgnQ}[y SfyYOh_UMg׺:-O^ܩO1tm߽4/ )Mǿ LS /eO ־~:kM~̵FF~ sc}:(9nwmD)qo17<'4}:1/zbņw~F|":G2)oU1nM O6emW/"?s"__rO^So/7IO1wzH_?}F~ʼFVIOyk?lK2sTNǖ~?1Nlƌ9sF{T&vNkHMI?ugUۿlOG|ĭ5b ?Ŝ䧘]yM|QJI ikZ3O1glO3;S&>Ym0进8!?㿋M}|j/p3=?s>5׸t;h^C}&ׯ ^Hy?y7w4㦵mek|ne.?^ڤ-Q"?Ŝ:;6T_RvFb^rrs-9;cτkM.BkP락묩אL]$T{q'<{^F_Oۇt6'i97nM,)޵ұ|;&%:́oWN֎;Ϫ8gދi-rO1W䖵볗cy׬8!Ge޿>9*Cu:R~J}cnnjO]iJ~ʽ [P^D:ṷ+mͭiUץ&7l8#v7__|0?q_ΪxٹΆ=0)kNU/?~bN}zSo.Q.)>IʜO~JZ8fưȘo|JY1yCSMŹ5e\M[ h8U}c͙a-8yOrz,=ayr63){.SK)Բ5׬/ grb}rj~ʭ%ϭ䧘S߾~;LYM]+O1ǜ{f[1Q~ʬgzӽg)WsSvݨӴ9A؝>?ר\J>3ks3m}W7T߲=kb-=͔8{z^aC6Oݗǭn^>O!-^O=)Է'?lrIOA~}zK?*?9ǫ7הoڜ%u$)Dž6'-gQ_7ϱw|s?9swwWjw@jL8ǯoq|NYzgǷ様 }=乕=/O-==c )^k8s0ւaS=}olz޵"vu}uԼg6[S]_α*ΝkM6\6D:K;bLw[l^S=oO1@n0){\䧰56LlS'>G_G0um037)pK^QgվˉL]0v 5o+no-}^XNy*M Cœ6HDo䧼^^{fS_K~5kmņG~w=9zj~^R\y]MZoOhU vWgOS%m؞mo39oˉHia !ϷyHw_o)9XOe^+)ٱE~J}.KmLx=6t]_.S=lv^/g}k)g޵Ykms~MIN'',5Jqa~id;:y`ژVV.9Kk);SĆuN6y9tyWC]g^mc]gγ沓ҟ?Oor ߣy[S3K: wv>٬8O}C~u0]sʵkg&ݬgIu~Ko۴6xZ~uk޻h)ro@{Z{%?֙m&ޟ7m|JY&Zd[jJ+)ߕ= wW]sBg=|v~ʉT1nlqn|6 ;0}XUg?Lu]R.єS6=C]uqOw:Ϧ~F9=m^ PޛN`-?]s R~ʝ:Ix>{s熓Siϥ[*|B>?8w}~*꿪mOloJ³ce?Qk;5h_/#wot=0O]s 9WT5R1R13RrGǷg'w[8F">T}m/v6v3/Z=TՑT^zyDF>> {c9,O!e:ǟ/d]Ug)_8[]o|Axs8vVk;L<$6AM{!65!]6V8ӟ&'?&=Ź`j~ N>Sr?z1y7O>O\?Rٮ[9UW5Cb?۱@ 2Ϭ|V+8Ͽ_oaB_0vߋ5oxFw8Oe.9ZuJNskg쏇c/Jڲ&2<y}8K (Ysߤ7Ό=e5'էgm ̍ b01q{2:~Ҟ 8W^FW՝vb @=(wl+~˦w$jNƔuzMROeu13N~ ԗس9]uzxe{R~ b>2mm2w'?e޼FBn,> .b՝憠 B\=c~Ro &Dswn}Sk/쏥,.srs\ܘXc~:_9Iъ{zbSc%bYӿ/dю>(by`yx ͏?9?w0Wp=1 [ms_bhWټzm}L'~v_n7~zb O0w~W({)5䧈,OQ0G~>=W6Oo?~zn@,:; ۙ!+~v?1@ޜ Oe 目gl@`8b "0wW)]~ l)b`snOe 뛮gl@`8br᝾ m:ȗ"?p}pk,UD]Mh/\eZGKΙ {`c[8)Mh1P;_.2fu=ބc{e 0u >Ono[9ԳM[?wgrq4^k{n݇}n4F@xr\7ځ6NݱU&?]Ac?@ǸrqYWI=k {u+^n5G~ S@c??@ms:FN2O؀@oO؀Icͷ$pn|@>H[~2v\;ߴw|-;cٶ8o xgFB?oJ`wV3JFIFC    $.' ",#(7),01444'9=82<.342C  2!!22222222222222222222222222222222222222222222222222?" }!1AQa"q2#BR$3br %&'()*456789:CDEFGHIJSTUVWXYZcdefghijstuvwxyz w!1AQaq"2B #3Rbr $4%&'()*56789:CDEFGHIJSTUVWXYZcdefghijstuvwxyz ?+ޟދ1I΃j~]rz`xρo wTW&Ʒ% hFȱ( B`7XWE}G.5tɥdM:4TW+\?`$F2ZI93r9orsOظ-}BAa <TW+o[HuKď)V8ȩ[q?7{R3zz-!kwo4J3a,BKXɹ U ?Q\vV~ť4\]U q/m(qI|.dݻj ]Oi7t%/CA,K5Ik"X5dYUBNF$#@q8n=:)d~5 #O<,\"3hR+J]hldעvxAx +<[>ywkauތo}؍.|浗+ \YjƜ<!Dbx@PsEqzg%>4T0 Y90FFrAemL6SYXZ7b+i<̀B0V-$˩\ޗqBN$pY2$}5(QۤAs~Oqnpe|``s Ep#-4n7l{p PTgN__Кu+ DZ-֦[Y 3zފMwUlH-yI+"nO?Rc?7z((((((((((((((LKEQEWS:ydaߌܤg&|)6d' 5$* >WcjR^: }k1#VY$rXoaZkkmjV.ܰ)M{j޵;,+n$R@1?\L |P1?>Gfi m-b!;O`EBft% ޶?1r?5{ƨM~˭sP;5@<]75s%Zg7"Lf:Ӛ.Ӌ]k^Noi6Ϧv6ZGT֬,[Fv'MѦ"!+FA9&cVs4e䱋THD%屳Kto)d.Y~e8qV4OiR_}K{2,Ih[(< RIcOIEdm?-Gkh`AuEu +_K `xh… #lr睵Y,'+y-yV9|yZVތsP'ZGq t.p_ԤgoR8 yλ)PQ %߂-n/Wz\zm;X>g=zV[ۈ?8w7J^ɩj7֚M"-̅r(S#u[-zI'{USt *y'wݼ3^TxOYlzKiG oՖ ooH^Cˍfh;Cc';zO-KP3j&)G0 =08 h=l"|E@n}/H\eRֵ1d- H>%m5N:v׷\=G#ʎa棃cqX>#ȗgk[ v'X'uQ@Q@H$;\34{ n*8V#Psڟ"Uy谙(2fcyɭZGa5M>泟U6is,JۆY`rI4>)x2XJ#KȦ| tRGI7li%~[S9Mya7zo(fh 1 wi5/Ǟ o]FK[k+BŠD6`AlG#iJaϫ.-Ȕ[ka##k_>yvۻ8r60o^sSSM6Ha$o26#fsx98PEo5~!' !a򑙕v}GggogvwpkjY*Ψ6ḍĿ xU]/HMax'@iZKxVךCiqqnVkxFDEP@7oƛwIYiYT!  :-!K@Q@ q{u#}R(@QEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQE! 
کKiPRRKښu8jմW[ E"`A29)>ZijYE~Ap F~akzVigkm1TΨHYIJq*fIb$H4AѴdZn0IE@6" +j izFHF"0?2‚!XE5 i#ҝN ( )i_– ( =h[ih`QE ( ( ( ( ( ( ( ( ( ( ּEhDڅQI;"&mpH-8լ_!,,$)#ʌz:hM3Fԅkmt#ڧY<"c#yp3+Zo_V,^}gJlm,ntv$vqfppNUIm1PȖ8(Fa 6vz^/[.[+"ȑշb B:v+|%㶂 P̌ۊVlciA8=QEQEQEG e}t$OEpnrZ!U'khRJm^͢^^QoIQZUÃ$"TV^7(YCc}Z(NK@g(B3i4ҖER(((( y-ah$ROPAjޔ[joYQ!cvq_|Cè˫%lW-r9' <ÉCb4̟1]Id&.(Y|FeKzZj٭vsE+J3oͳ;1jyt W~tK5 n 9W@4QEQEQEVcIMr=Bݵ7ʮ $18j#5X- CAlě2AX.dscR?:8%,CNL70mߙ7kF4oo[9u{gݕ Ń 3FiJin8FG @3ԚG5ƏoDdgsU?ji8Ff1޽Gv>ni;yk9ۢ(((.`h%/$0UEIjZ$ivne4s U 7 pj߾t2[xoKYnN#e9 $t˝{HbӮuK(/*"I7.''' z\'8Rml/5=+֗v嘸ߵmXRPdg{mOk7ɤ7bc0|zi>Ǫed5.1fnLr2>L?NkF?; n4@ĵ_c$.U>@Q@Q@ ih ( ( ( ( ( ( ( ( (E(g? {ًKiSMԭukl/*U 5n ( ( ( F{3KE4/j]㎕x-2b+1@f9!ӀqM8cxCnr I;˸dn|oVݡ3#}"mbte{0@:( ( ( ( );ouWZ_. 1!UK d3ӢKq7/cg4O$Lj|:FK & 9]I*q`Q@Q@Q@)؀oojHYa1d3}hZxQivjZ3c Ndq, hw2x1&S"@⡨xDLk[vml~MmJڪj;k I.4(w]H@b.H.i( E@ IӚ3@ H( !90#cKMqa܊wJ66]_M(Me3Q^cGOG[Y몶*o^'b;$\)ق&"/$|QjN,^@ӻ= 5n)7qsQPݺ8F0碖?5 $ 5 d*ΗmR8iqB'Y_I4Ed?;t>qL9a ~lyQ6#o +m>&уz},TNEPOhk%ĺ-.k[Y rWH#N23dg5i*A;DX?*/޶ܓ6rH+u3VON ja@ZoZC.zT͸,U-kա/*N!?EPu TxCTE`u?CRw?EPAEsxCTutOEPAEs~|U/*xwïgXԡ,)o%(tH&`S!-]xT{+Q\KtGY^J7[>(8C0GX8>(ҿ%O Z⟁,IBF|IhO!vtW~0"_&o I@G'q?u=$?`wW `;"]`+"{AS{CI\unhw},\dF_vJqyu;|B@x--\czђlF0jyCI\N?s\.+"_|EwW{AsS[wp5oE0= $59i' `M =6~?x"4ܲj?o`dEanՙ5ńˀIw",0SyOxk_/9MuR<~Żk-<3E"n8ef hi=\_]5>T{aSs`mEI$h7\?#f/;@Ey< ߈zChZyok H .QEQE#YzLՈu8 |%?ć4jWxUmA973TS`]j,Q.#仅0/ q$qk[?~&oˏҏ9so/x$ҥN5Р]1uW+IR0pxqi_b[FI9B{483X?RQOү]W??0L?j%)3դS?cSGb,]ČܪlVn0r09 ?Pްğ Yc>m_&msƋke,izObBp><Yof.#k6DkYmq 4cĬڇ?(NJӖ+K[yi6Q2 2ONz|edPw@۩<Q$\@t.ݤ΢842\&ZH. t vr¶p"Sluٌ\78jOcA u |NHٔ~a%/v*@<#Ao>zZ֜\*ĖA8ȢǶvzfjɧ1,PI%1V.>f2HH~>V>hO"?g̺}nWEFe bx/U;/# +R]=/TnZ[8ǔiWq};N:3iM +*HFIL5/le 3ŵΘm3I}!j/eܟ" duZp|B[{GMipպ2ȑ8,k$0g5c-b[+=/Rn䲒DJM0'vqvn f9E]1WoҦoZ {iu{ՙ3m2G*AY[ et>{?N7ˋpal'ؐ?z?)o “|XogoRofiWKlEتKiEf X<Uv&=Ɖ(ۨ\"ʋ'oI px';?G)???].H5 vY gvYTRYJWܵđZ<]GZll2~_)Oˏ9I S ;qŧ%:\iKx/,{vFUlѐjW=^)`KvHU<CJ S?]))ʨ|qI M=n%n@ RnHT T$c'[^=]/6yAi$O$GY2Q `2 ')O/V."Ea vאYZjK9!ȂBbUoɒ\ Ml*Լ"!dVФ6K2W ! 񑃁@) b,E#l[x1"'t-0"?iFO4>۹?rl'B|PcpTW[c+:TmC *qF''I')D҂eO?^(0l­CG*"(UE @>((4Rw oi3fDVp U[Xt]'M-'a$9ؼ+c=lJ(6"%cjwkxgFc; fUdmxN Ƈ; s@," O _OKWK!r?apy5A|ZZCaw0-Jch)>B0 #e|Àv /hkCOX%[yO3` Q/ubT3mV+G#/ _jZ۽êcBۢZ\6&9#h=c_zsDdf7gA(`̼1P@8<؞KBc0;iKxb\y8iD@F;punbH綸ͼr DW;0lzWiEp_F@^`cxV[}y>VͲD#Ya 7 -`[ÿ[I[ xbUvG[hry{_< |' Qhl3s<1sQ@Oxj-V1IvUP.q3->:W3: +RFby7v֊hiYqrYfX%k]~=Kod҄C <&Pwnxc0t M!t` Ly;P.&|G6-R_=':t5@}}[Jė]%HJvRJ?*i[ۄE2}Ҩܧq&FTp8'=IML$iv3<}Jhir!?ײ^BSͰ*UO*-Fk9k_wMq%:c*}ۜ67m؄'U6qk,Om>=Cqmkh4ar bHkEqr:m?6X-4 D!fBQ@!FG\Ö#{nU 8"hq<l]M&yC;H+ +\UHԃI4s\5/G3]'(_XrG{w_ ܖkpv0,@i~u/ /&qJ'w$s[P: Ңd1Y`oreݳd&M}:}A6;aYG$}G dڢ0#I{ 4REw"k+%G]3D6q彸˼hD?3s@͡iЍKRX|5;sU|)jws :I,iu*E36sQ+jgam1yfcYš9M0.LBa9d[TP-oY[dyu*T+1:4@cDQST)0ѐp++[[]O Q4q;"8=>TQu6̝öI]S ąb'b!A;9oZ,W1,f>ZؕQU 7]=2^tLd YI$2kV̲[ȝQ䑤s+++;;YI'U[{#9g!Lr'q1=/ti.hb-[{IH(˸PnK!cb 8Zth2/h\Q)7yC?ɦG}/7Ѭ̀Id8p>^+Z`QEQEQEQEQEQE(ŽQ@QE ((E1-PQH(Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@PA((((((((1UDd!  c \A, 2013-Knapp-Change10x1img1R]T 99FVhCA`9T>F1T 99FVhCA`JFIFC    $.' 
",#(7),01444'9=82<.342C  2!!22222222222222222222222222222222222222222222222222@" }!1AQa"q2#BR$3br %&'()*456789:CDEFGHIJSTUVWXYZcdefghijstuvwxyz w!1AQaq"2B #3Rbr $4%&'()*56789:CDEFGHIJSTUVWXYZcdefghijstuvwxyz ?( ( ( ( ( ( ( ( ++Wէ4NlZab`~֙Bmj|T\aU,|2mNIjtQE!Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@ek]}!^ibG Xv>O;yt4 &mqpt=E7[i?0QV*;=( ((((((((((((((((.]xdhE| AupJ#<nMg:u㺸Yd3X8IE8Wt&«XYvrYAwk&7V|dx'GuuCEu]q^+ޡSMcd-3!`)\6'O֣Vl-dta,Bwyֵ( ֖Ow` pj]7Zfӵ k: fCêc^-9qT {@Y6^Ӵv{MˁWyIb$?ݦյᮋRrwfA@I6]}nm½괃ݖA5[:Gfމdg+F>(?ER4-+;88gWd=yU((((((((((Ce%?$;|,nW?xO²tQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEU{ynlbxLxՊ(?ۼf}>%P4R6,dLO<}yFgkic)vF#w]'O=KqO!?jG$sDDʲ=AVvi:M*}ɊDu@4VdL-5K2"MٌB'ֈ\q4#^#kU#9 @tV])t.l X捿ₜ54X{q;YFp_Dм{׺No}]<|O OGR@j:LPO)G֜u o,r(.[Fhr/ BN -֑AariQ@W!V3-ծl"]m v u?B8OQt^cmvrh>= Ve w֡C)mGVQs_-ӫm# Z+"P2Iqkt[8>Hk6rKpqI.-e0=C?4~eql/mdd@Q@Q@Q@Q@Q@mw?mВ?aʳ&u2EcXoyj2["6ȏ̏퉃eu # x!̱}2ߧY! E>V6GO׳*'ڵ( ӯqXyuvPHvV$m)Z"FVR2NAVf#Veu* /EiYWT6zK0rG Ϸ.#V;rIdRX["zs2\Ku/io-$yaVR}KVT5Ly=9 4\۽Acs+UfckytF-ѺDcq٧NdL'30$V%}IJu_>i=폱r%s6񁓞sڻ*C! VO" *武A}ZZt-eFGZҴq;;1+gXn#=E^(((((((((((((((((x4n 򋋅V(HԶN뤏I? {w-52mfj:]n 0w(qʟPz) EdͤEkZ^sj`qHd>c,&%@KyrDom3hVik\\{!c[%ѻ}VQEQEQEQEQEbx)I#qG\}2Yc'Y8gsu$ @E[kRO芟֛i]C*Ŭ?t/U$XUb}x}Kf '?#퍿G/$;ctBoL^X#1?hw< ?2; M^ /R$[$ ZwZ ej0?hZ[A $q UQ0eZƕb0na 1?5rv)n- yib]$ǿZeWRR0AXWF6v[9G?"0VA !ٵؗQGm) ޡ[*Eµ Td\iȻO6v,`w۳`}k6`1r25h˝v9mQlyrqV,5; V xd8}(t=*5ok%|-dW/<%YLj@F ÉeIiIeq qo\9? WUPS@WPҢ5[W BxR=~Ż?(˶\G^د,W e\nIy]68(R4P0’9χQF.EYv;7v5ESwwVV (Š(((((((((((((((((((_@<㷶fdcn,$kJ{=b"t$a \zf-~M̈́05t2BD}1č%kQ@TY-k* g%?*M46DWFeaG<=gam4:\pa~&V/#}ѯ.(ZZZ`k>ڼ@ { J~6OoVq򈢷Yw\c:P]EbzmatWQcH\޾eϢV>-RIݥbB@1mH.ұA^k7v1ai35)@1/f `_z("Gbok;d*Ac lWйu5j((((#<2Y{jVogouuxR Kd}<$HcoWolvU Pǣ0Ϯ9}iuQG%ݐb/\ !O^M-ť;fj1H\G^Em]]X/{:GY}Z6VZF6Q UQP j~"WC[&S;/N8c^\9? WUPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPUb亙'x@9@X8ҬQ@6&.uVP8%߈;-yu3]"c5fNԡDBOEC.g!'m#HPx({Pv#Яz֝pHn0A"[Ԝk>}|jĐhZV:7t+qrtn *ruwW)s\'z%ޗbɽ #=Iu'֩hZ5%[*bէ{"X01ǠMk-m@>T5KtB/TPeb}`7BͺJsp<\<ү'ϗV?嗀>RmaR[hr\j+y)쥽?"S◓bgխQM.QEIIExTv&8& c YS^74=2XWTpߑր4,&K,5*! =@n5RG6zWSxh7?Ŗw~=kfI?zR+R{ uծ{LRjwXƜZa씀O1M6=ͧD!,碍Mu[YKd0lGqH=zUoK%mM8JLa/:k,x^I\*'Y ޤg3e$H}7gμi_ḏO "zv{cxn-.!쐸u?B8 2rkζftұ*]͟N9dW7q{yzm31F ?k^ `Z8`Kg1g'h3@`:M|lx1Z4P?xB1ZՎ zѩP1n{<)[j_c:l#ʲ}Jc޻J(Ӟ.fռKnц`ommL[ySͻWAmtwfosbToB*oyR(Vv kmg\)#>׎}J{od?!?Ex?qh5!C:$+ǫy̟|V.bm ݓn?'4k.}/_*u}qo+JMXZךugn#ɼd=b.ZWwޓ(Q*ՍQ?ƹEH)0#4>OܢǷiZ%jCٵs )n|;^^-֍tFq#4xH5 >d#pHŕr:*T|$Tj]WC@Q@V%׌|5a}qc}wVHnR&ada#4\,mY:w ^f_\l2nRV dε(((((((((((((ZkZ3l8aidNDO\Ǎ!Ť=Vwi,`M%L(gP~SI'dTU!,J2T#ۊZ)&?kd:"#ҭ&{++8;eP rHS׭Y= _VԬ^}S/&Lr?pn$C ,@VRnmvK*Hފ *hKej:n$f 8NÏbM[M ;xeequ?UY7Wmoj 5̰1>rR[EO7R$o1SMNMQyd?D1Oc[jQx@:Xw} !=UԼO$K&?TjC*"*Xfa%QF}ǖdlsZRkgiw0Z1ɠ VzN$KJv6qDF?^ (o% _f—a̧1s,{ѸA9#}m:k5KqɓR\^Vcmi-X ;]$|@$U dW"(, \jkv}E̚mτ{ٌib?͜W@iւSr!Q.mQǤGokwtQ)uMc\">!KxV5Mi+,7|k@ӣҙ؆@;=rl?j٢Ҭ(d[iQݝ#2Ir1nԹi:\d_V=zdB EeOϦ'1g`T>o)-z$i 1e«#aջ-BRϰ8 ]sX5(m2""ocX*N{K°[ĄBG>E`gК_NO%G"6nm B{a=r6HS|SoI{^I9ٌ0AS/~eG"Wnk!{<hHI_^J6+Mh{K8`$d@qYZxm[QVkK}m{=8Z_&E(U Z-=~x@z䮮k[Nj+[ʘy3÷4߈-w}RyRZ;'p:$ vU{+MJ.n,FQEQEQEݝ^CsnLRǭ/O-Xr}|,bJ(]?~(?k c}$$.HD;E[Jqmۋ,3H=W8}Y4 n^DN5.M :Eo4v5'4MO5\47h}|$in;XGvZQR\jo]-u,,rB=.[+aus[ F.YX ;IxM]96vS~⹅fT'WF EIYh:M ojZG.l4hmt ̒+`5H?qYƖpĺ8U 9WK@}hd [)-\ c1ZCZot~-Ps^/ڝ?甖?ԢkO^_7: +*G2/_+QL~p-^n!K RcZ&:b\GJ\Fz<1lŨ?d++"WnT#HEFw'p*ۖ;gh('T_xP$h,f4|p;jks!`Q_]Ol;'}bѼY!uKO.b",Xv~brkz{ DK&L’QpLv *OCL(QEQEQEQEQEQEQEQEQEQEQEQEC>el .WxA 8 y+ŶL}ֱq} +$fY^G xst^}xȩmY[.u 7S.49)7Gq ȣ[)O u&ռ3jW wg ,` N2}kMw\jݝRvV"]Hиr@uo i0[uBڅT HGQ뚥}oӢ(QEU-OI-j+nR222Кe֝up?u_q׉uˁyC~x$.6 2C! 
A;y$$P4Ro布OmUdIw'Lַ-/clWw=LӃ0?ZMcm%h92?4x(QgHk+rJ,3=D s"^jPq$W,%Obv7X׆/V'OdB'X.caP`e& (>ϙ YlI@_m5Kz}Χ|Ui,?S\ŵWU0\Y:Pz{C3׃Vru6nblȸ/ OG }֌HL&# *xYE>zi k(Z;ڨߥXTG+ [kh6Uaz̖k{iUGl{n9"dx$Σ?DG(ڵo&y?Zi}QӦ=?ZtPkiU {5 ?ҵg,ϊ5gS[4P)nD&9n&8؅o@ۏ i_f1蚭@( <.~$2I[w\J`jH|%g"-5n wa? *x8lz=6yC-;\t?SWӞ+_BRdJu; UڥǥnVK7p`aJqq@h:W:pJ9>BzS"pu9y|zPDž[!otNZ{IGy*X{ARX\=s{H0mb(R_Y+隮5}n;VS]Vui[Й@4?%1d[3}-1 }&(>-qk΀Z|)uz}3_S[x#GҢNuI*[PkyQ8ܞ_<1گC)Gz<2ټ2l7"q Ml,쯴>goNo?d++K_©˩ɴI +=ֻ(+o xSTont]*KgW)ߓa\㜝{/Jz{klLL#V%U'H Ԃ;W7kJm+;{LRΨ 9bSoC W{y.R2BAaR@f=h:z*8 sw>^Ig"OG$*/#9PO!JbβZG sjkDZͩ=X?jZ×SȄ)>/ YFYe{!(r8ao$ Lo1䰖$@? ]UŴvos sC "V Qɖiշa` mVU4XjqӴM_]oByKxX롱)bkh.7뒶'eA]c 4Xm+bFHճ(~?c[\iRaP+sw9{zP"o*Gp9ǬQ9Q}ޗ "vxliYpt<*]_Κ2^[J06 LBnM0A-,;;!tFqpilsB >(OiLi~Y䱒gvɒ!Fqkv6 پ}hhzpxS'Eoj8Yo.'6)U+i{+X4+&^cT@ P]ށgzmIs/hi6Ki7$y=ilw#C~#]ڶik{Gwkg$iNC6Pb02Q4d(kOuqmy WMbѯ ӣk0􀬋`]rzpywqoOk)}14)X0GUC zQ@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@PtM/\u[/a_5"7*x<1P*Z{xO4<4kfM-#XaME(ZEQEQL8'O$րEeml>c81"wIViçϣ/U<EPEPEPEPT.t{+#knar득8Pl궞\rK [|Uq𠫶xUIlլ/$%H$Q9 BqVCsim{) $dt8=o-~[» 7Ǯ`I>yaKT72V?EV4>yhriw+iFͷˑڸ/\-aIkqyQ*tUvG XZH|r^@Uf4ɵ!ʽF@=#ڥ.5"q6j2J8;S]/Qӭ$M/Syũ:$I=J;-NE]\h-mqn_@p? }鯨6f{!`ܦ0;8PO{׿=mF,IVX-x;OMIhQ^S 8[[ӤE?aSqp={v բ4շHp]ngPϟ'"+^Vv l((?d++<'aY? (+oYϪjWzPŲ$x+13º*)[[6_KӡSgk40[BhK©#n8.sQE1Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@U{m,)d\b |8.ʼu䎞P  chPPz$1A튓C5-KITi3>bBTu}7MxPiHXy AɪySU̺1ˋbT:9mwRZ\KXUw2z }% _*iT\jlmOsstQP9y۽sF-`&lm ar2#<2A{xb:ߕ 3&#W弦$dF{`tPNy0YY셒HMY4PEemJ Y"> V'_-ĉ m:0C޳M4ip X5@V^\gߎcl/u->q88l9R٭#7Eg?فHϭaK?>R G#;;;獉uԆU=I1Q@u.Vjpmٔ#>u0zա8҄ɗ?RM8eO5[ԁ,z;][%ͬO),NX{ f[O2̓4n_c*FAoA#Nv{Ă؉Fc?dwZ(+k{t)nI"pk90Ȇm^M-6;z'[;&USmJ/7=8}s9? WUW]//[~ChI#Oo] ?sn;((((((((((((((((((((}{g%;ƘȂH"ǯa^bOni$j+py<'$s^miV~. ksO/4Q ;UQIY+(mo*‡$ԲzUՅcɒΘs=c%ɽs7 ߝ6RF4ȠT5CZPez͜~EōYG17|n4!i@nO* 7VMhEVL`5 ;{0QޭQ@ڇD/ɹT{mfW;Y*yoo`:駥g/.дVry7^"ke?´ KP5 {BI9h9줼i--8i/{P?ꯥiQ@pve$[?=%Y`vmc֧пH <ɸ7f|:SEiv5mo4f1eϾj M n5}SM V}P[K֑m#!eEo2FpN;\ޗ˨U|y>mOPm#`ƾsH:fO FmHgӮCW1@OC)wTQ\Or<=j2Z@mOډoJ]%uzUךEijrHt Td{Ӷn(0((((((((((((((((OFVRѵKk)[i͑R!R #w Ek+-M$IO(EPEPEPEPEPEPEPEPEPEPEPEPEPEPEP?xO²tCEu]`_zj7Fw>s(A$dT 0uAjZUnXk CldpTPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPU,;9, 1"6#*x8 ¬Q@ cHB""EQEQEQEQEQEQEQEQEQEQEQEQEQEQEr!z w}gKٻ󎙮sH(((((((((((((((((((YEYk3Α2`q0ӥX K1ąʢܻ8$rriQ@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@lw$o^߲]yٻ?ō8⺊C! VO" ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( (zepZZǍO $< @((((((((((((((((C! 
VO"sHygB_]k8^y)cA'QEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEW&I_M3K`V2z@gE)+fU{伒E.6I<&T^Fr3Cמbb+إvqOt3H!1#rq,q>tQ@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@w>}->]Bi+ea)ca0Gr(NiZGm(((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((($$If!vh55|50545|5#v#v|#v0#v4#v|#v:V <,55|50545|5/ 4yt% $$If!vh55|50545|5#v#v|#v0#v4#v|#v:V ,55|50545|5/ 4yt% $$If!vh55|50545|5#v#v|#v0#v4#v|#v:V ,55|50545|5/ 4yt% $$If!vh55|50545|5#v#v|#v0#v4#v|#v:V ,55|50545|5/ 4yt% $$If!vh55|50545|5#v#v|#v0#v4#v|#v:V ,55|50545|5/ 4yt% $$If!vh55|50545|5#v#v|#v0#v4#v|#v:V ,55|50545|5/ 4yt% $$If!vh55|50545|5#v#v|#v0#v4#v|#v:V ,55|50545|5/ 4yt% $$If!vh55|50545|5#v#v|#v0#v4#v|#v:V ,55|50545|5/ 4yt% $$If!vh55|50545|5#v#v|#v0#v4#v|#v:V ,55|50545|5/ 4yt% $$If!vh55|50545|5#v#v|#v0#v4#v|#v:V ,55|50545|5/ 4yt% $$If!vh55|50545|5#v#v|#v0#v4#v|#v:V ,55|50545|5/ 4yt% $$If!vh55|50545|5#v#v|#v0#v4#v|#v:V ,55|50545|5/ 4yt% $$If!vh55|50545|5#v#v|#v0#v4#v|#v:V ,55|50545|5/ 4yt% $$If!vh55|50545|5#v#v|#v0#v4#v|#v:V ,55|50545|5/ 4yt% $$If!vh55|50545|5#v#v|#v0#v4#v|#v:V ,55|50545|5/ 4yt% $$If!vh55|50545|5#v#v|#v0#v4#v|#v:V ,55|50545|5/ 4yt% $$If!vh55|50545|5#v#v|#v0#v4#v|#v:V ,55|50545|5/ 4yt% $$If!vh55|50545|5#v#v|#v0#v4#v|#v:V ,55|50545|5/ 4yt% $$If!vh55|50545|5#v#v|#v0#v4#v|#v:V ,55|50545|5/ 4yt% $$If!vh55|50545|5#v#v|#v0#v4#v|#v:V ,55|50545|5/ 4yt% $$If!vh55|50545|5#v#v|#v0#v4#v|#v:V <,55|50545|5/ 4yt% $$Ifl!vh55 55555#v#v #v#v#v:V l,6,5559/ alyt% $$Ifl!vh555Y55555#v#v#vY#v#v#v:V l,6,55559/ alyt% $$Ifl!vh55 55555#v#v #v#v#v:V l;6,5559/ alyt% $$Ifl!vh555Y55555#v#v#vY#v#v#v:V l,6,55559/ / / / / / alyt% $$Ifl!vh555Y55555#v#v#vY#v#v#v:V l,6,55559/ alyt% $$Ifl!vh555Y55555#v#v#vY#v#v#v:V l,6,55559/ alyt% $$Ifl!vh555Y55555#v#v#vY#v#v#v:V l,6,55559/ alyt% $$Ifl!vh555Y55555#v#v#vY#v#v#v:V l,6,55559/ alyt% $$Ifl!vh555Y55555#v#v#vY#v#v#v:V l;6,55559/ / / / / / alyt% $$Ifl!vh555Y55555#v#v#vY#v#v#v:V l,6,55559/ alyt% $$Ifl!vh555Y55555#v#v#vY#v#v#v:V l,6,55559/ alyt% $$Ifl!vh555Y55555#v#v#vY#v#v#v:V l;6,55559/ alyt% $$Ifl!vh555Y55555#v#v#vY#v#v#v:V l,6,55559/ / /  / alyt% $$Ifl!vh555Y55555#v#v#vY#v#v#v:V l,6,55559/ alyt% $$Ifl!vh555Y55555#v#v#vY#v#v#v:V l,6,55559/ alyt% $$Ifl!vh555Y55555#v#v#vY#v#v#v:V l,6,55559/ alyt% $$Ifl!vh555Y55555#v#v#vY#v#v#v:V l;6,55559/ /  / alyt% $$Ifl!vh555Y55555#v#v#vY#v#v#v:V l,6,55559/ alyt% $$Ifl!vh555Y55555#v#v#vY#v#v#v:V l,6,55559/ alyt%  Dd I5t$ll>  # A"" 5)YR-xǨf a>@=^ 5)YR-xǨdG=5+jZ, x\kl>ؙ{ )Ե-mQ"7b @*M+GmJ)j@*H@V$MQǑbEDL*-UͦEQHմݺ܇̎g&֎tuowϙ3U`;)2LK^j٣%֯n`dL ٮr!B vH8Ĺ[3R[K8ʒ5n'],e(F{m, j=337Mh#$ܙzc9Eb|}W0J"ߢ~nw\ ~:T 諭l\<8Pӻyxυb%WEu5e@ťݜ \e&ro. 
nOW6u9ɸ+8UhW|%ƅDFĒ'B|ɵU)vőoWvc8oŸ(> b}6EV:kzƎ<^ ύ$q|f 5>>11o4MU8ouϼǤ"H\LER,C J30Oz}tIvF8o}J}{dv] e\c2.DZkXĵq,Z8q-rq9E\xp-?_?<]ؗǾ4DZ/iؗuǾu:VqBc9R~F吝[[ھϖ$N1]g2=l+qnT=1B ט^8iQ3VKBl:Z^SI4!4Zy|q)7fh>khlh\Hx/N'8a I5f[{i=j5464.F|.LK&Ӹ̓ƀ9f0ҺLƊ,b9'圊u>W.*u:sjLc5XXV:nUqkՆţ1IAsP9ϬZfaN:9qqhEh4 UjيՙuH^;YiYbrN=jǭ^{u:sjLc5X~{wNg64.IrjĜE鵫Ff{lӛsF@LۺxJO֧4jq圝9pj9g39戳u:sjLc5Xq1͎8!Z3V$95bA"8ci}Ĺ:9qqh'4> ۴cmyZ6-d979y3U˞:NgAIb5bF)wZ.蓴Ng64.IrjĜEΙi=ڴ6I朆š KEeQVףN{IZiҘ[$D=wүV2^{^Meʕn#B8WK[ :ÏI{B)iN2v^𽲻xBn^;iEs%k89ҹ/'?g4gVqœV![fWs󖼟S`K3ȻboZa^y{1E,{60/b=30/bqޣ1Zx*}P "ǻ6qޜ +Ixk)|3IqJK~߾ݢ9K쿳09x7$ o3s0{"610o0a~G奌]4^7y}%ND|?R4u?,w?~6Ki1Ͻԯai}ڇvߢ&'d}+|ҞUVf;b uaY)L i]XMRW^|l?[gvHoAVCNxA`;7~[)|]>PD8!R~If+5Gᳬ9GiwuLB/}W}KK9sQs ˑ/HOHוG]u4٢6Mlwf7wQ*]5U'lp%r]z,7%Lv&?o8ͨ$$Ifx!vh555T5t#v#v#vT#vt:V ,555T5t/ 4axyt% $$Ifx!vh555T5t#v#v#vT#vt:V ,555T5t/ 4axyt% $$Ifx!vh555T5t#v#v#vT#vt:V T,555T5t/ 4axyt% $$Ifx!vh5d555$ #vd#v#v#v$ :V T,5d555$ / 4axyt% |$$Ifx!vh5L5$ #vL#v$ :V ,5L5$ / 4axyt% $$Ifx!vh5d555$ #vd#v#v#v$ :V S,5d555$ / 4axyt% $$Ifx!vh5d555$ #vd#v#v#v$ :V ,5d555$ / 4axyt% $$Ifx!vh5d555$ #vd#v#v#v$ :V ,5d555$ / 4axyt% $$Ifx!vh5d555$ #vd#v#v#v$ :V T,5d555$ / 4axyt% Dd I5t$NN>  # A""S @=' d= {]qXO&墈~SY>y0p|m5Q'?\s]^!cu.̺e~\_}wߝՠW,_O)}c}+'#Ŷp0o庴,O88FۣNAC%c{bkR87oнA[ŽJ>Ne;ĺ*ggg Z96{x6ι mga]V=[;`:>#h%b]ꡑA8JqԈ 6D.pf-03P lo.C=jPs* 8QPs*wB0qNGB9Ω7^(9c|P9oiP?s%2fu}.[:ɵă>G;^I髰_$X}@bSJی1viʜr2'I*9(siʜr2'I*1&mc22}Q-_6 8rSlmTv}T|N9\]1v.@j ޛ$ҷ]:el/z 7矵r+z.rh3,~ 6vU;mvl< [Sb hH;.]IzDcc%98 [%9NZ;iK^?iU9V9V#>Es7%s>jvͤQ䘲՜Fל=V椬j"e5^͚*rXE2VS֤5`M[)kjhj G9Q֜5`Xf:l6{}tkN_sjH )h=p55gל~{~{jh&V#*roCo6۽>Z~89kr5^Їl;vnͩr\a8qimȞ ZןѢ1ㄫ9[^sv5' Q36>59U䈱ef}hj G9Q֜sFiѭ9U#_< *3UrZ~89kr5'Tܣj}^֜*cSs/.9xy}K{eEnmWz!J؍ɣz-gغaпR[gR,gCôl.ƴsu7&1?SS)e2ww89eq{Kn59 epSs{b\- ǽ㢬}S9Kj1.WsGŸ(+=@n(apF|q3#u~%wN-ƔXٺT3x/W/j׏Dc|(ƗGj9ZW/eScR—aIؐ,ƗrJ]Yu g|Yg[>C 5w /L-Kܻk?dFݷ#طFt~z?y:J[n/t-,Ž>~{҃߿za?Gno'D_Ud=8?1#|d8/`oDx+N/ 3 ?W@suys ¾qq< yH}#w{nkj$ OqY<'!blugA-=oLJ>/}ad;߫󾤇8kJ> @r-=| ϐg})1~ @=}L\U?߾.i8vb'-d8|xϏeǕW*Θek?``ba5[fC*-cЍf3n*UI,LbUdQ Y,177}xR޺Dޟo=q9sʕ+ە?z}u{+Wʕ+W7?r {OwëoW_^?uKvٕ6_\{vW};W^\r}sn{ek׺n}\y[:n9~Ymڏj9>>Nϟ?w Y;G[?YOΪwTͯ=|Ӵ7>bˇ7}q}G;JuQΝͷ?z+se0b8ŇT;_~FTJmϗy?ڞ]sGuNS#:ey'ӡA~<|&⫴~ <~G6_|u?$?Wi jcg^S?9::*7|Wu6~,ⷿݑqj|xyNXrC2Nwᣴ#GKTg30XS 5d>ԟKC}ߏzƩDI {`<ݐgd\ѱq,>Ao0.\LP}tg l~Iw@@'%Asɔd?}iG?;Nn FF''Ss}_S[>SAw0./cUGϦħudy\ sɏ$S>_~h\лx}hO]uN-]cjE?_/;xvu)E~hlZ%In`'ͲA^Mkȏ\ZP(V b_hmgw8mۿub/C~ /ZG~4s0A${jA4w/ބkq )]W]V}܆ފO#)=gx/H!~1>_>8ɎGc:[!6bA~6s!93cu=3y"|jPݴ'?SZ0ykn7PZNjT|.H>'ŐdNŗJב%$o9Xq0|%l͕9zs,8>_u<ݴ_OMv?j)+!_<[ςϋ7ټq'w2=odO0nx8Lb4lZ97h{tr ;g_=)ZAB~-?ό?9֣=nyl|j僚YSx:dO G's-;Slp_Ȼ9'>--P-eBrA§#}\d+Y:J'/k ~Lmi7kw箍\s\ FK >gW:b^W(V1*"OMcľmQZE(o)-ke_0]Xܸ%ݕ12zK'uq)S§ A^bE7?"ch'S)_pElX1ĺI XZK|S93_IdT1-M]G6ٍ֚;n7HO&t<$"Tÿ ZI&)~u[?4NjLw#Ԧ8-Ss?s\w)a61I~`@HA_y5% C-Ϋr41s1PKZ > qD^A~mq(ۭOLZ1dG8wXnUܖA\GעFvԮ'нxpVj?jXJv\앬iL?tw3l_lItWʟ9jS]LNPюls#{ 0^tc~]h?{5*mSJKY}(\?U]A"_<~>vl>tS }|k|I%"܄Q_1XPS_%}9r'Mk^;l?ē:CƲM%q'χ'-z${x>f_ױV|dž:ޜSλ>>kGn8UXk&?~B5_(,6胥w"*縗뺦ߌRo'=k%Oʵ}4_)eC~Du;)uC1=۹G~8OsqW"p >g Ebʍ2JZwq)>.h}nSmu_O|qky76 !~t7|gSw~N=;XTUȋۙD&ΣIgy}ΪSO #XŶ?{ZEO9F/a}Fs[?UiqŸ2N~,M'e(.x\eSXϞ` ARsSݏc8G[,~:G\']6>5 |gi~Ԗ=U4[<_|1%#GVC/> eS}!Ç;Ǵ?zKI̭לs8tZ_|.1)[y/'c§dc,dQ_@ V]*'3=|/ʹ!w~~?ϑ)#~dJɞ1>g1 bl?;Q 1.BדMDrK|J} :s5Z+ǂ<*\eS'n;>2&>5ʃUl] 7#uo}2{|?ڼd>:ǖȩ%T#c ;ý3Y"9C|:Go_ M9߃hw.`T}aI~,\|&:ːwaOtGO vٿT|B%2QlptP/ >휃P\JEs;y;sOUP/ ';Ȑϡ9gv|?8nw:ȏ8Mqh?96{N ɊVb6\) zei%"b_* \ZogO#:$󴏁YLYn1􃼶)Cnk6IdT _z|_|}!?j|!qG>uBѯt 1C ?~ܧne+OC1ih?=YUN\lݖov>Ǫp͛fgZ§۾)n>85wtys:q<&~O=Գ|8?>[iKq!SO7?x|%Dq؊o-O=Zo}x|gr| |ROŔ%}0A_'|B9 13_E]-5v@܋ę=p}}mpm&fQRuNuڎ_?=Fp%_(_SGG?I\1TF7 _( _\ AfH~0U9 ǎ\??t۝Zgh 
[Figure: Twins Bryans (twins-bryans-160)]
q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q0q0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0q0q0q0q0q0q0q0q 0q 0q 0q 0q 0q 0q 0q 0q 0q 0 q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q 0q 0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q0q 0q0q0q 0q00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000@0ȑ00{@0ȑ00{@0ȑ00{@0ȑ00{@0!@0@0@0@0@0ȑ00| {!>Thi  RS*+PVɻʻۻܻ#$67JKUVqr34ab56cd  89fg!"78 +,ABKL\]kl$%78LMXYbcmnxy  )*34=>GHQR]^ghqr{| '(12;<EFOPYZcdmnwx !"+,56?@IJSTcdlm();<OPabuv$%./89BCLMVW`ajktu~@¾þD8 FGTU?@=>op   '(GHVWCDwx()WWX^_]^45{>?56PQ>?WfgIJjk^_  " # n o     m n   , -   n o   UVWRS7778t=12^LLMOPzP{PPP^QQ4R5RR[S^STTTTTThUkUUU6VVVVVVWW]XYYZZW[X[\\\]]<^=^^^__=`tu![ vwhiBCd  [\\] 5 6 ^ _ !!!!\&?'@'J(K(((f)g)**q,,.__"aybbccighhkkmpprrsvvxx{~ jqyO^KsƑq}5ړm#$ŖƖߖIJopbcޛߛ&';^ݪ+7?~5DGîRۯeAPϱW$[,gʽֽ׽пѿXYHHvD!]9TZ `2Zy~<Z"~ }  f7y    !"" #h##E$$_% &|&''(~(()S**t/u/////////////// 0 00 04050I0J0^0_0s0t000000000000011h1i1q1r1111111111122/2028292A2B2O2P2r2s2222222222222m3n33333333333 4 444.4/455555555 6666*6+65666=6>68797L7M7T7U7`7a7l7m7{7|7777777777777888888)8*85868D8E8c8d8w8x888888888888888888899RRRRxSySUUUU+V,VVVVVVVVW&W:WNWOWWWXXSXUXvXwXZZZZ*[+[\\V\r\\\\\\\] ]]1]C][]p]]]]]]]]^^^3___F```HaaaRbbcxcycedfd|ddddddddddsftfffggg hh"hWhhhh&iYiiii(j[jjjj(k[kkkk)l\llll%mYmmmm%nVnnnnoOoooopJp}pppqDqwqqqqqq)rrHsss{u|uvvvvwwxxyy7{A{G{M{S{Y{_{e{k{q{x{y{{{}{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{|||| ||||||| |#|$|'|*|-|0|3|6|9|<|?|B|C|F|I|L|O|R|U|X|[|^|a|b|e|h|k|n|q|t|w|z|}||||||||||||||||}}}}~A~d~~~~~OPpqUVFGcdф҄z{JTUТҢӢ !5N[\deFƩͩөԩ6<@DEcgkpq123INUbcwy{}~ƱDZ !&'fg  e   ’ #  # y H G : ; w     i j p ` a g { |     e f  7 8 1# 2# & G' H' ' ' l( a 3   L M _ ` j k w x       $ % F G b c     N N }S ~S n o ;o ?o ܃ Z [ F G z { I J } = > ° . 
/ D f g 8 U  - < a y  R [ h i  n J  y # T U     l     !   J K + , ? ? -? ʑ00 ʑ00 ʑ00 ʑ00 ʑ00 ʑ00 ʑ00 ʑ00 ʑ00ʑ0 0ʑ0 0ʑ0 0ʑ0 0ʑ0 0ʑ0 0ʑ0 0ʑ0 0ʑ0 0ʑ0 0ʑ0 0ʑ0 000ʑ0 0ʑ00 0000ʑ00 ʑ0!0"ʑ030ʑ0#0$ ʑ030ʑ0%0&D ʑ030ʑ0'0(| ʑ030ʑ0)0* ʑ030ʑ0+0, ʑ030ʑ0-0.$ ʑ030ʑ0/00\ ʑ030ʑ0102 ʑ030ʑ0304 ʑ030ʑ0506 ʑ030ʑ0708< ʑ030ʑ090:t ʑ030ʑ0;0< ʑ030ʑ0=0> ʑ030ʑ0?0@ ʑ030ʑ0A0BT ʑ030ʑ0C0D ʑ030ʑ0E0F ʑ030ʑ0G0H ʑ030ʑ0I0J4 ʑ030ʑ0K0Ll ʑ030ʑ0M0N ʑ030ʑ0O0Pʑ030ʑ0Q0RTʑ030ʑ0S0Tʑ030ʑ0U0Vʑ030ʑ0W0Xʑ030ʑ0Y0Z4ʑ030ʑ0[0\lʑ030ʑ0]0^ʑ030ʑ0_0`ʑ030ʑ0a0bʑ030ʑ0c0dLʑ030ʑ0e0fʑ030ʑ0g0hʑ030ʑ0i0jʑ030ʑ0k0l,ʑ030ʑ0m0ndʑ030ʑ0o0pʑ030ʑ0q0rʑ030ʑ0s0t ʑ030ʑ0u0vDʑ030ʑ0w0x|ʑ030ʑ0y0zʑ030ʑ0{0|ʑ030ʑ0}0~$ʑ030ʑ00\ʑ030ʑ00ʑ030ʑ00ʑ030ʑ00ʑ030ʑ00<ʑ030ʑ00tʑ030ʑ00ʑ030ʑ00ʑ030ʑ00ʑ030ʑ00Tʑ030ʑ00ʑ030ʑ00ʑ030ʑ00ʑ030ʑ004ʑ030ʑ00lʑ030ʑ00ʑ030ʑ00lʑ030ʑ00ʑ030ʑ00ʑ030ʑ00ʑ030ʑ00Lʑ030ʑ00ʑ030ʑ00ʑ030ʑ00ʑ030ʑ00,ʑ030ʑ00dʑ030ʑ00ʑ030ʑ00ʑ030ʑ00 ʑ030ʑ00Dʑ030ʑ00|ʑ030ʑ00ʑ030ʑ00ʑ030ʑ00$ʑ030ʑ00\ʑ030ʑ00ʑ030ʑ00ʑ030ʑ00ʑ030ʑ00<ʑ030ʑ00tʑ030ʑ00ʑ030ʑ00ʑ030ʑ00ʑ030ʑ00Tʑ030ʑ00ʑ030ʑ00ʑ030ʑ00ʑ030ʑ004 ʑ030ʑ00l ʑ030ʑ00 ʑ030ʑ00 ʑ030ʑ00!ʑ030ʑ00L!ʑ030ʑ00!ʑ030ʑ00!ʑ030ʑ00!ʑ030ʑ00,"ʑ030ʑ00d"ʑ030ʑ00"ʑ030ʑ00"ʑ030ʑ00 #ʑ030ʑ00D#ʑ030ʑ00|#ʑ030ʑ00#ʑ030ʑ00#ʑ030ʑ01$$ʑ030ʑ01P&ʑ030ʑ01&ʑ030ʑ01&ʑ030ʑ0 1 &ʑ030ʑ0 1 0'ʑ030ʑ0 1h'ʑ030ʑ01'ʑ030ʑ01'ʑ030ʑ01(ʑ030ʑ01H(ʑ030ʑ01(ʑ030ʑ01(ʑ030ʑ01(ʑ030ʑ01()ʑ030ʑ01 `)ʑ030ʑ0!1")ʑ030ʑ0#1$)ʑ030ʑ0%1&*ʑ030ʑ0'1(@*ʑ030ʑ0)1*x*ʑ030ʑ0+1,*ʑ030ʑ0-1.*ʑ030ʑ0/10 +ʑ030ʑ0112X+ʑ030ʑ0314+ʑ030ʑ0516+ʑ030ʑ0718,ʑ030ʑ091:8,ʑ030ʑ0;1<p,ʑ030ʑ0=1>,ʑ030ʑ0?1@,ʑ030ʑ0A1B-ʑ030ʑ0C1DP-ʑ030ʑ0E1F-ʑ030ʑ0G1H-ʑ030ʑ0I1J-ʑ030ʑ0K1L0.ʑ030ʑ0M1Nh.ʑ030ʑ0O1P.ʑ030ʑ0Q1R.ʑ030ʑ0S1T/ʑ030ʑ0U1VH/ʑ030ʑ0W1X/ʑ030ʑ0Y1Z/ʑ030ʑ0[1\/ʑ030ʑ0]1^(0ʑ030ʑ0_1``0ʑ030ʑ0a1b0ʑ030ʑ0c1d0ʑ030ʑ0e1f1ʑ030ʑ0g1h@1ʑ030ʑ0i1jx1ʑ030ʑ0k1l1ʑ030ʑ0m1n1ʑ030ʑ0o1p 2ʑ030ʑ0q1rX2ʑ030ʑ0s1t2ʑ030ʑ0u1v2ʑ030ʑ0w1x3ʑ030ʑ0y1z83ʑ030ʑ0{1|p3ʑ030ʑ0}1~3ʑ030ʑ013ʑ030ʑ016ʑ030ʑ016ʑ030ʑ016ʑ030ʑ0147ʑ030ʑ01l7ʑ030ʑ017ʑ030ʑ017ʑ030ʑ018ʑ030ʑ01L8ʑ030ʑ018ʑ030ʑ0304, ʑ030ʑ030ʑ030ʑ030ʑ030ʑ030ʑ03000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000K00K00K00 @0 @0K00K00K00K00K00K00K00 K00K00K0 0 K0 0K0 0K00K00K00K00WK00K00K00ʑ00* K00K00K00K00K00K00ʑ00&ʑ00%K00ʑ00K00K00K00ʑ01K00K00K00K00K00K00 @0K00 K00 K00K00K00K00K00K00K00K00 @0K00@0K0(0 @0K0*0+K0*0K00K00K00K00K00K00K00ʑ02dʑ02ʑ02ʑ02ʑ02ʑ02ʑ02ʑ02ʑ02ʑ02ʑ02ʑ02ʑ02ʑ02ʑ02ʑ02ʑ02dʑ02ʑ02ʑ02ʑ02ʑ02ʑ02ʑ02ʑ02ʑ02ʑ02ʑ02ʑ02ʑ02ʑ02ʑ02ʑ02dʑ02ʑ02ʑ02ʑ02ʑ02ʑ02ʑ02ʑ02ʑ02ʑ02ʑ02ʑ02ʑ02ʑ02dʑ02ʑ02ʑ02ʑ02ʑ02ʑ02ʑ02ʑ02ʑ02ʑ02ʑ02ʑ02dʑ0200000000000000J0000000000000000J0000000000000000000000000000000000000000000000J00J00J0 0J0 0J0 00 00 00 000@0@0@0@0@0@0@0@0@0@0@0@0@0@0@0@0@0@0@0@0@0@0@0@0@0@0@0@0@0@0@0@0@0@0@0@0@0@0@0@0@0@0@0@0@0@0@0@0@0@0@0@0@0@0@0@0@0@0@0@0@0@0@0@0@0@0@0@0@0@0@0@0@0@0@0@0@0@0@0@0@0@0@0@0@0@0@0@0K0^0K0_0K0_0`K0_0K0_0K0b0cK00@0@0@0K00@0ʑ03ʑ03ʑ03!030000000000000000000000000000000000000000000000000000000000000000ʑ0300000000 00 00 00 00 00 00 00 0000000000000000000000000000000000000000000000000000000000000@ 0000@ 0000@ 00H00@ 00 00@ 0 0 T00@ 0 0 00@ 0000@ 0000@ 0000@ 0000@ 00H00@ 0000@ 0000@ 0000@ 00x00@ 0 0!00@ 0"0#00@ 0$0%A!00@ 0&0'0B!00@ 0(0)hB!00@ 0*0+B!00@ 0,0-B!00@ 0.0/`I!00@ 0001I!00@ 0203I!00@ 0405J!00@ 0607@J!00@ 0809l!00@ 0:0;!00@ 0<0=!00@ 0>0?!00@ 0@0AL!00@ 0B0C!00@ 0D0E!00@ 0F0G\F!00@ 0H0IF!00@ 0J0KF!00@ 0L0MG!00@ 0N0O00@ 00\>00@ 00>00@ 00>00@ 00?00@ 00B[ZsvPV OrL(AQ1h$&2>5i1ʝݩP>r 
"4LϏZm"Ɵ5.^ŐUP0,CG3L<V.7ESAyZ6a*c@jn2ooqtvv wwGn0 7 S> m n ! g S 1 z   * % \> GF TK \ eb F *  7 ;/ F Kjv$'(*+,./1468:<?BDFILNQUX]`dfhlpsv(mz|~  +6Ybmv    # $ 2 5 8 ; w y | ~ O hM.#6<BRiaxAi?ϦxZ 3p6OchtR/FR-^h#H~1\#*4_^B|P}%Sz#########$ $$7$>$R$b$k$w$$$$$$$$$$$$$$%%%%%&%-%1%;%B%F%O%V%Z%c%l%p%y%%%%%%%%%%%%%%%%%%%%%&&&&#&'&0&8&<&E&L&P&Y&`&d&m&t&x&&&&&&&&&&&&&&&&&&&&'''''"')'-'6'='A'J'Q'U'^'e'i'r'y'}'''''''''''''22223 33"3*3C3\3d3w3333333333344444%4,40494@4D4M4T4X4a4h4l4u4|4444444444444444444455 555 5)50545=5D5H5Q5X5\5e5l5p5z58BfWciyڃ:pơE/wr}-!'/7@OVe~!7Ndi17]p\ǵ5h-{((3CL]erI~TF>WF}c[qH L$.=DKXh_tofY%%h;Z-.8UbBj)nFnWn`nkn}nnnnnnnnnnnoo%o0oDoOoZonoyoooooooo9p@pLpSptp}pppppppppqq qqq!qCqTqgqwqqqqqqqqq>rWrorrrrrrtttttttuuvv%v0v;vCvav~vvvvvvvvvvw w2wHwQw\wcwmwwwwwwww~~҃ߐ&٤p̻]4  HR1]OjT+  % 2 7 < O i o $ A L R t L V  3" ( f4 ? N Z v a E d n D |   ( < AA =F QF kF F F F F F jK Q Y 3] _ sb fq G † φ  -  q >  I 8 p  o - :  n% 0 > | xN29Jv%)-023579;=>@ACEGHJKMOPRSTVWYZ[\^_abcegijkmnoqrtuwxyz{|}~      !"#$%&')*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklnopqrstuvwxy{}    !"#$%&'()*,-./012345789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXZ[\]^_`acdefghijklnopqrstuwxyz{|}~                       ! " % & ' ( ) * + , - . / 0 1 3 4 6 7 9 : t u v x z { }  u&H9I7e}{~  #$$x112ՙbd%BD0֭z|YYYȪ&8K-$.\.՘8@GəЙ9HOǚΚ:Lhyy z y { H V   ! # -# . Z x | h j Q       v   7 C ,? XXXCCCXC:XCXX XX4X4X4X4X4XX4XCC11XXXCXXXX  (!!/Xb$P]q}OL:P Kb$ =TCb(@((  J  # A"<  # AB S  ?dXeX,?  XTzT! Lunneborg marascuilo79Ruback _Toc319605239 _Toc319605240 _Toc319605241 _Toc319605242 _Toc319605243 _Toc319605244 _Toc319605245 _Toc319605246 _Toc319605247 _Toc319605248 _Toc319605249 _Toc319605250 _Toc319605251 _Toc319605252 _Toc319605253 _Toc319605254 _Toc319605255 _Toc319605256 _Toc319605257 _Hlt35174833 _Hlt35174834 OLE_LINK1page1page2page3page4page6page5page7mcnemar kN/6Uzj]x@Q&E)6Tyy79{K, -?  @@ /kUjxFm&g)7Tyy9{K> -? ?l|J0@l > H H J J J J J J Q Q W W x\ \ b b p q sr ~r } }   ! & ( - E I m o  ə љ   U Z     ֬ ٬  ) .  @ B N P     e m < D L U l r B H d k q  e m % , G M M P , 6 n w J R Z _     . . 3 3 76 A6 j7 r7 Z8 b8 8 8 (A +A D D H H T T V W W W q q Mw Uw x x iy ny y y !| &| X ]      1 5 ; A X ^ h l } ܃  > B ` g ĉ ʉ  ' - 0 6 ӑ ۑ s y " , ^ f  g j ĩ Щ ԩ  K N * 2 i q u x ٭ b e    " 7 < o w ű / 3 B I Z a S Z ƴ x ; C T Y ĸ ˸   I O @ B  * K O C L r z   L Q S V    " -   Z c k t z  ; D U ] [ c < E - / R T + 2   d j     K R / G ' , . 5 7" D" " " G% W% ) ) ) ) , , - - . . . . 6 6 9 ,9 = = ? ? ? ? ? ? ? ? ? ? ? ? *? -?  i S|}* M Y Z d i 9 > s H J ?iLM)*Xj ##'';)G),,//4455D6J67 7P:T:y:|:::::A;E;J;M;>>@@ B'BJJKLLLMMQYRaWiW[[]]^_````Na_aeellmmknrnttuuBzFzCNˌPTҐ! ^n%3-@H}We(WfؘޘLwy™ۙy!.$/+֝`dBQ]ğܟv{68"UV"4XYwox  kv~KTkx{ip(;=4>F Q =L!!%%((((C)E))),,G>I>7C>CIIJ#JLJOJJJJJ+K.K`KjKwVV[[ff ii>l?lp p~pp7}<}}}!+ޑV&lxZ`^hz2IL35TVprAMZd?l-. 
" P S {    tu-3 1$1_2e2=3k377::;;;<>>??mAsAAAlCqC{CCCCfEvEyG}GMJRJfJjJlJrJQQZZ[[\I]PiQijj)k.kllll?mpmttttttttttu$uPuWuuuuuuuVvZv\v`v2y7y {{9@Ê&'y{ǢܧǯóESAI-6r{cg  cqVZ{03ot= R V #$^+j+k.s.....0 0i0k012289888[:^:x;{;z<<sAuACCFFDGOGGGG HwI{IIILMoo&pypppppq?q[qcqhqqqqqPrUrrsssssttttAuNu0v7vavevvvvv>wDwwwwwx!xtxxxx+y9y|yyyyyyUzbz}{{{{`|f|| }/}P}X}a}}}}}R~x~~~sƀـ<>́Ё+PRKS[ã08<$2!0o0y}~LjψՈ&)‰6<ɊՊ/: ߌ!t}k9IP[Đ̐ڐA`hs#1OT“ʓړkaeݕ DL] :SUt|Η>bjpܘ1<֚bj̛ٛߜJS:;ןrwhqy'*7Wdlpɦ*Lͧ  }ة P|֫$Ҭ֭ܭ" Folx *Qft(3òݲN[ٳ'Դݴm  ֶݶUZxyι׹߹9AGlbh$,Acq۽Taþ߾D~LT D}7:muB/Ub*<Djkv^ HL{}RXKdh)=Q_ (W`x|;J0;V[6ACR>{ }   o$r$6&:&--X/[/0"0<2G25!58899'B+BtBxBJJMM8RURRRN]Q]]]aaefg5g7gAgCgg{{^}}} ~dhPSĞ֥TsFeIN\k]_ŶζȸջۻBM#005$$//V;`;HIeN OPQQU"UWUUUUjVVVVVW'W-WWW2XLXXXXXXX)Y6YdYmYuYYYY ZZZ4ZDZIZxZZZZ*[0[[[5\:\z\\\\3]7]]]]]'^0^^^^^ __v____`*`,`3`8`V`^`k`a+aaa b%bub{bbbblccc d&d^djdrdwdddee%eMeUebeee fffCf{ffffff{gggggg_hlhiiiiiiiiLjYjojjjjjj3kvkkkkk]lkllllmgmmmmmm"n*n7nnn>oDooo"p2pJpXpppqqr0r8rFrirkrrros{sss5tOtWt]tttttRu[uuuyvvvv5wFwYwtwww~xxxxPy[ycyiy1z>zzzzzY{_{{|F|K| }}}}~~#~)~T~`~h~t~'-1ORTÁˁ3>FMӂقH\]x frz /7D !>@Glj!-;|0;&TXɏяdo 3AEhtіՖ $GKkoBH@Hǡˡpx(,Y]x̴մ(k{ɻֻEKǼҼ =R]SZ!,lw 5@{-4z}'*_f_:J" &*KVzMrt  +V E|~  !!')([+`+--4.<.0!0_0n044;;<<==j??@@>CBC?EFEuLLLLLLVM_MmMxMNNN OOOOO)P/PJPxPPPPPnQxQQQQQGRMRRRSST(T0T6^hr4:ʼnƉ̉ĊȓדUZptRWVi'9?F'6qȧΧק="+3@_l{`e'2w{ǬˬԮޮ$&sxǯ\`Ҳղ!ouGJ [_:D׶߶]aAGӽ߽!$pxEG|fk38)3|oy$(Hs8@4:s|dn5@')*E"*wfr 7U`^e|%1;.1UX|\ay}  24  7Eguwi-1dn4<iEI^bagad!!f"i"$$4$6$++//d4w4K5W5V6X666y7777;8M89.9@@U@a@@@EENNfQoQTT T"T.T0TTJTLTTTTTTTTTTTzU}UWWWWWWWW XXX'X6X?X[\:]E]M]Q]]]X^c^^^^^@_B_ b.bi i r#rttzz+~.~[~_~\`<E}Ò"$5YZړn˜ʘpqLOXj g&9WU[]IRZbKu@]]`,0=Aw8: F P   BCd  /  !!!!!!!!-"0"""I#g#####D$Y$$$$$$$$ %^%_%%%%%% &3&4&K&$'a'''''>(E((((())|******5-8-/32314V444y55>6X67%7|779h9B9CFFKKKKKKKLL,MDMdMM N#NTNUNjNlNNNOO O=OlOOOOPP0P1PvPwPPPPPQ#Q%QQQT)TVVVVOWUWWWXXUX[XXXZZ[[\\W\[\s\w\\\] ] ]]]]]]]]^^m^^^^^R_W___` `Y`^``` aa]abaaabbzbbbb:c@cfdqdeegqqqq-r4rrrLsXs4{9{}}bolo Uu}# Z^Γ ߖ~<>Pc9?)ոٸDf9`wxTX]or-r8< @Y!)7CQ\;COLOQ[',)fp#.L~$JX9D\jkv&1Nl*+4IW-8w-Va2ds{  S u h k     " '     0 3     2 8 f    , 0 1 A h l    + ! ! U! b! &" ." " " " " ]# _# ( P( ) ) * + . . 0 0 2 :2 9 9 : _; < < = %= E> J> (A wA A A H H U U \ \ a b c ,c *g -g g g } R o H V d i l h j m N p  1  ! z |       E H Z d o# # $ $ b% p% % % W& b& & & & & U' g' ' ' ( ( :( k( ( ( ( ) ) ) * * 5 5 8 8 ? ? ? #? A? ]? _? ? 8A ;A G G 1H :H H H *I .I aI eI I I I I J J @J DJ sJ wJ J J J J K K O P +S 4S -V 6V r^ {^ Z` ^` c c c c d Rd d d e e &h >h k k fl l r r U{ i{ { {   r   & ! - m q 1 7  ͡ ֡ m     n q ɹ   c m  A B E P  z Q Z * - ( * @  2 m * ^ n G m e      ] F  %   ' B( ( ( D) M) W* `* *- - . #. . . 3 3 5 6 7 7 8 8 %A (A FD LD }E E G G L L 1M 4M S S T T T T U 'U U U U U V V V V #W 1W W W X X qZ sZ \ \ ] ] ^ ^ ^ ^ ` ` ma sa a b c 0n @o Ho p p s s w w w w w w w w x x Xx gx } } ̂ ΂  r z ʃ ڃ ܃   Ά ц E o ވ  n t   5 = ? ̨ Ԩ / 7 ? I ̲ ϲ h n Y ״ ߴ ` p e s ķ ʷ  - e   ] j Ȼ . ʾ @ B R H N U  M Q < ? : >   . 8 W ^ m u   d ? G M J 9 ?  # ' U l [ S Z x            D Y _  q {  5 <       L S : D   # # $ $ * * * * * + + + ^+ + , 1 1 8 8 G< S< > > > > ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? &? *? -? 
33333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333ii9:" $ ` a 7 8 o p D E deghGHJK$%'(141477|7799BBggolol|lllllllllllllllmmmmm3m6m8m;mHmNmfmimlmom{mmmmmmmmmmmmmmmnnnnn1n4n6n9nFnKnbnengnjnwn{n|n}nnnnnnnnnnnnnnnnnnnnn oo)o-o/o2o?oDo[o_oaodoqovooooooooooooooooop p%p(p+p.p:p?pXp[p^papmprpppppppppppppppppqqq!q$q'q3q9qQqTqWqZqfqlqqqqqqqqqqqqqqq5{6{7{8{Γϓ\]_`55DEUVٸڸTT9< ? ? ? ? ? ? ? ? ? ? ? ? ? &? *? -? ii9:" $ ` a 7 8 o p D E deghGHJK$%'(141477|7799BBggolol|lllllllllllllllmmmmm3m6m8m;mHmNmfmimlmom{mmmmmmmmmmmmmmmnnnnn1n4n6n9nFnKnbnengnjnwn{n|n}nnnnnnnnnnnnnnnnnnnnn oo)o-o/o2o?oDo[o_oaodoqovooooooooooooooooop p%p(p+p.p:p?pXp[p^papmprpppppppppppppppppqqq!q$q'q3q9qQqTqWqZqfqlqqqqqqqqqqqqqqq5{6{7{8{Γϓ\]_`55DEUVٸڸTT9< ? ? ? ? ? ? ? ? ? ? ? ? ? &? *? -? !zY8G& W3%,tk`; rs  ^ `o( ^`hH. pLp^p`LhH. @ @ ^@ `hH. ^`hH. L^`LhH. ^`hH. ^`hH. PLP^P`LhH.^`o(. ^`hH. pLp^p`LhH. @ @ ^@ `hH. ^`hH. L^`LhH. ^`hH. ^`hH. PLP^P`LhH.^`o(. ^`hH. pLp^p`LhH. @ @ ^@ `hH. ^`hH. L^`LhH. ^`hH. ^`hH. PLP^P`LhH.^`OJ?PJQJ?^Jo(-^`OJ(QJ(^J(o(hHopp^p`OJ@QJ@o(hH@ @ ^@ `OJQJo(hH^`OJ(QJ(^J(o(hHo^`OJ@QJ@o(hH^`OJQJo(hH^`OJ(QJ(^J(o(hHoPP^P`OJ@QJ@o(hH^`o(. ^`hH. pLp^p`LhH. @ @ ^@ `hH. ^`hH. L^`LhH. ^`hH. ^`hH. PLP^P`LhH.^`o(. ^`hH. pLp^p`LhH. @ @ ^@ `hH. ^`hH. L^`LhH. ^`hH. ^`hH. PLP^P`LhH.8G&tkW3 r!0                          ?                          
%%aO}uh2c:iC0Sv}EM!PC|] ,P L % (* t] I g ' t BC*s<J|$0SDVwr/9X;RWh[o#"."p Z`iNr 2FJRa !J"!hp!N"###E#t#E$ %>% J%'@@'['o(Ks(!)-=)])H+_+#,(,A,[Z,j,$-.-g.9.w.]J/R|0%1J1 2252H24444uU45 5@57k5TQ7S7h8H84W8Ru8G!9]9M:)/:I:lM:; ;Y$;T;l;8<>L>b>?{@HA CbCtsDyDEFWFW4HKHVH.zHrIS1JAKcK?9LZOyPQYDQ.RiR sR 3SDT.UKV15WmoX=YDZWZ[\[Yx[H\\2\s<\`F\4^^.^_^-_]_=)`c 8duBd]d-efgk.lUl^l"um)nhSndnrnxn o$o1qq"qPr=5sn tMtptuYv'v(nv'qv%wyxd{.{a|'=~D fEN@h>iLz&&Md9ri)94V#hlMHl*OqRM;|< -8/+b@+B[MSVB^!&_Es "kc{ l)y-b1Vbo,+w6~f.Z0^quf,HWLDWZ8Aa^]wC&3_q8x6=f}Bl!=,KZl<hFcOXlaO2`m+K@X;E"FlZ(`a;EI$OdP(aJyo_8lF:BZUi$}&=FN`P:NmWFgwP'*_Gyf6]dQqY p>eQC8#B>]0/'W@8>Q:QE]a.fXsCs|Ou:>`t Q/af.o)Il0ZbuLbq)ûŻǻɻʻͻλϻлѻӻջ׻ٻۻܻ߻   !#$&'(*,.024679:<>@BDFHJKLMNOPQRSTUVW\^`behknqr*Lr~E !#%&(*,.02349;=?ACEGIKMOQSUWY[]_abgikmoqsuwy{}    !#$%'()*+,-./0123456;=?ACEGIKMOQSUWY[]_acdikmoqsuwy{}    "$&(*,.024689>@BDFHJLNPRTVXZ\^`bdfglnprtvxz|~     !"#$%&'()*+,-./0123456789>CHMRW\afkpuz' "'()*+,.1238ABDGHIJKLNQRSZ\]_bcdiklnqrs$%(*+,278;=>?GLMPTUVWXY\^_`abcfijklmnqtuvwxy|~   "%&'()*-/0123479:;<=>ACDEFGHKMNOPQRUYZ[\]^acdefghkmnopqruwxyz{|   #$%&'(+-./0125789:;<?ABCDEFIKLMNOPSUVWXYZ]_`abcdgijklmnqstuvwx{}~      !"%'()*+,/1234569;<=>?@CEFGHIJMOPQRSTU "PV`abcdfhijklmorsty%(),/019;<?BCDLOPSVWX\abehijpuvy|}~    !"#$%(*+,-./2456789<>?@ABCFHIJKLMPRSTUVWZ\]^_`adfghijknpqrstuxz{|}~_x2?2@2A2B2C2I2J2K2L2M2N2O2P2Q2e2h2k2n2p2q2r2s2t222222222222222222222222222222222222222222222222222222G3Q3V3e3m3n3w3|33333333333333333333333334 4 444444%4(4-4.4/4555555555555555566 666666 6"6*6+6.606566696;6=6>67&717778797B7H7J7K7L7M7O7Q7R7S7T7U7X7Z7^7_7`7a7d7f7j7k7l7m7p7r7t7z7{7|77777777777777777777777777777777777777788 888888888888!8#8'8(8)8*8-8/83848586898;8=8C8D8E8V8a8b8c8d8m8s8u8v8w8x8{8}8~8888888888888888888888888888888888888888888888888888999997{A{G{M{S{Y{_{e{k{q{x{y{{{}{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{|||| ||||||| |#|$|'|*|-|0|3|6|9|<|?|B|C|F|I|L|O|R|U|X|[|^|a|b|e|h|k|n|q|t|w|z|}||||||||||||||ТҢӢ !5N[\GƩͩөԩ6<@DEcgkpq123INUbcwy{}~ƱDZ !&' 8!       ! ' ( ) 2 8 > ? @ P V \ ] ^ h n t u v  7 ? L M R _ ` e j k p w x y                               $ % & ( , 3 : > B F G H J N R V Z ^ b c d f j q u y }            C OF :   -? !W!W)5U!W)5)5)5@7p  --tt:;mAmByGyHlJlKlolplqtu./089:R}R~rrԹԺYY^^ //Z[             b b             J J J u v w x y { | } ~    ,? P@PP,@PP@PP@PPP@P P P@PP$@PP,@P&PP@PPP@PXP@P`P@PdP@PPP@PP(@PP@PP@PP@PPPRP@P\P^P@PPPP @PPT @PP @P P @PPd @PP @PP @PP @PP@PRP@PVP@PZP@PP @PP(@PP8@PP@PB PD PH PJ PN PP P@PV P@P` P@P P @PV P@PZ P@P^ P@Pb Ph Pj P@Pn Pp P@Pt P@Pz P@P~ P P@P P P@P P@P P P P P P@P P@P P@P. 
P`@P2 P4 P6 Pp@P: Px@P P@P$@UnknownAG:Ax Times New Roman5Symbol3& :Cx ArialG5  jMS Mincho-3 fgI& ??Arial Unicode MSmTimesNewRomanPSMTArial Unicode MSUNew-Baskerville-RomanAqBHLBHI+TimesNewRomanTimes New Roman;AdvEPSTIMA RealpagePAL2K RealpagePAL2-Bold; CodeCodeALucidaBrightOLucidaBright-ItalicCUnivers-BlackCTimesNewRoman[New-Baskerville-SemiBoldA_ Times-RomanTimes New RomanCAdvTTaa6ae907GAdvTT4af3d8cd.I=AdvPED12869AdvP0068=AdvPED1283=AdvPED1282IJansonText-RomanKJansonText-ItalicQTimesNewRoman,ItalicU TimesNewRomanPS-BoldMT=AdvP4DF60EYTimesNewRomanPS-ItalicMTGGaramond-Italic9GaramondCGaramond-Bold; AdvPS9B2BC  ArialMTArialA& Trebuchet MSG AvantGarde-BoldG AvantGarde-BookCBerkeley-Book71 Courier?5 :Cx Courier New/ F68=AdvP41153C?Melior-Bold5Melior[Universal-GreekwithMathPiCMelior-Italic;" Helvetica/ F69;AdvPSA88AM Arial-BoldItalicMT/ F17M StoneSans-Semibold;TeX-cmr12=TeX-cmti129 AdvPSGODC AdvTT82c4f4c4G AdvTT7b6c0d50.B=Times-BoldGAdvTTb8864ccf.B9 AdvP159C9 AdvP15B25& >[`)Tahoma7Georgia;Wingdings"1 h)'Tk A9k A9!4d8 8 2qHX ?2**FIRST DRAFT**TomTom$      Oh+'0l  ( 4 @LT\d**FIRST DRAFT**TomNormalTom84Microsoft Office Word@Z@PD@vnik ՜.+,D՜.+,D hp   Microsoft9A8 ' **FIRST DRAFT** Title 8@ _PID_HLINKSA8)dIhttp://en.wikipedia.org/wiki/List_of_ATP_number_1_ranked_doubles_players ma5http://en.wikipedia.org/wiki/Grand_Slam_%28tennis%29"U^(http://en.wikipedia.org/wiki/Mike_Bryanj['http://en.wikipedia.org/wiki/Bob_BryanF U../../../index/list_6881_0A R../../../businessmedicine`:O#http://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=3&ved=0CDIQFjAC&url=http%3A%2F%2Fwww.foxnews.com%2Fstory%2F2006%2F01%2F28%2Fhumuhumunukunukuapuaa-ousted-in-hawaii&ei=GXXtUo3dCpfZoASxjYDgBA&usg=AFQjCNEZgtWQoVvkadEt2jXy93qjE1AU5w&sig2=7TRbgivq3eBWdYf1dgLIeQ&bvm=bv.60444564,d.cGUwqBHhttp://everything2.com/title/Journal+of+the+Italian+Actuarial+Institite?5http://www-math.mit.edu/phase2/UJM/vol1/RMONTE-F.PDFGR<http://www.manythings.org/rs/O^9Rhttp://www.google.com/search?hl=en&lr=&ie=UTF-8&oe=UTF-8&q=%22ZELMA+WHITESIDES%22v86Ohttp://www.google.com/search?hl=en&lr=&ie=UTF-8&oe=UTF-8&q=%22JESSIE+BOLTEN%22z93Ohttp://www.google.com/search?hl=en&lr=&ie=UTF-8&oe=UTF-8&q=%22KATY+PATTILLO%22ae0Thttp://www.google.com/search?hl=en&lr=&ie=UTF-8&oe=UTF-8&q=%22PENELOPE+KORNREICH%22Z-Ihttp://www.google.com/search?hl=en&lr=&ie=UTF-8&oe=UTF-8&q=%22+MINAYA%22pn*8http://www.fortunecity.com/emachines/e11/86/random.htmlq$$(http://www.augsburg.edu/ppages/~schieldU!http://www.statpages.org//EFhttp://www.active-maths.co.uk/fractions/whiteboard/fracdec_index.html[:Bhttp://courses.wcupa.edu/rbove/Berenson/CD-ROM Topics/Section 7_3}m http://www.gallup-robinson.com/e)http://www.statlit.org/b'4http://www.lhup.edu/~dsimanek/scenario/contents.htm  !"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~      !"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~      !"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~      !"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~      !"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~      !"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~      !"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~      !"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~      
!"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~                           ! " # $ % & ' ( ) * + , - . / 0 1 2 3 4 5 6 7 8 9 : ; < = > ? @ A B C D E F G H I J K L M N O P Q R S T U V W X Y Z [ \ ] ^ _ ` a b c d e f g h i j k l m n o p q r s t u v w x y z { | } ~                            ! " # $ % & ' ( ) * + , - . / 0 1 2 3 4 5 6 7 8 9 : ; < = > ? @ A B C D E F G H I J K L M N O P Q R S T U V W X Y Z [ \ ] ^ _ ` a b c d e f g h i j k l m n o p q r s t u v w x y z { | } ~                            ! " # $ % & ' ( ) * + , - . / 0 1 2 3 4 5 6 7 8 9 : ; < = > ? @ A B C D E F G H I J K L M N O P Q R S T U V X Y Z [ \ ] ^ _ ` a b c d e f g h i j k l m n o p q r s t u v w x y z { | } ~                            ! " # $ % & ' ( ) * + , - . / 0 1 2 3 4 5 6 7 8 9 : ; < = > ? @ A B C D E F G H I J K L M N O P Q R S T U V W X Y Z [ \ ] ^ _ ` a b c d e f g h i j k l m n o p q r s t u v w x y z { | } ~                            ! " # $ % & ' ( ) * + , - . / 0 1 2 3 4 5 6 7 8 9 : ; < = > ? @ A B C D E F G H I J K L M N O P Q R S T U V W X Y Z [ \ ] ^ _ ` a b c d e f g h i j k l m n o p q r s t u v w x y z { | } ~        !"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~      !"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~      !"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{}~      !"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~      !"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~      !"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~(Root Entry F@ i*Data W H 1Table|YWordDocumentSummaryInformation(DocumentSummaryInformation8CompObjq  FMicrosoft Office Word Document MSWordDocWord.Document.89q