Pr{d(X) ≥ d(x); H0}: the p-value of the result. If it is very small, infer evidence against the null (or evidence of a genuine discrepancy).

The probability distribution of d(X) is its sampling distribution. It lets us calculate the probability of inferring evidence for H erroneously--an error probability.

As a result, aspects of how the data and hypotheses were generated may have to be taken account of: they may alter the error probabilities and thereby the probativeness of the test. This introduces complications, but it is the key to controlling and assessing error probabilities. (In my own revision of error statistical methods, I insist on an assessment that is relative to the actual outcome, as opposed to standard predesignated error probabilities, but existing methods can serve this role.)

C. Simplicity and Freedom (vs. control of error probabilities)

That error-probabilistic properties may alter the construal of results gets a formal rendering: we violate the (strong) likelihood principle, the LP (likelihoods aren't enough). Among aspects of the data generation that could alter error probabilities are stopping rules. By contrast:

"The irrelevance of stopping rules to statistical inference restores a simplicity and freedom to experimental design that had been lost by classical emphasis on significance levels (in the sense of Neyman and Pearson)." (Savage Forum 1963, 239)

We are prepared to exchange simplicity and freedom for controlling error probabilities.

One way to illustrate the violation of the LP in error statistics is via the Optional Stopping Effect. We have a random sample from a Normal distribution with mean μ and standard deviation σ, i.e., Xi ~ N(μ, σ), and we test H0: μ = 0 vs. H1: μ ≠ 0.

Stopping rule: keep sampling until H0 is rejected at the .05 level (i.e., keep sampling until |x̄| ≥ 1.96 σ/√n).

The rule is guaranteed to stop, so it is assured of rejecting the null even if the null is true. More generally, the actual significance level differs from, and will be greater than, .05. This violates the weak repeated sampling principle (Cox and Hinkley).

There are many equivalent ways to get this kind of violation of error probabilities (hunting for significance, selection effects). It need not have anything to do with stopping rules; it can result from data-dependent selection of hypotheses for testing, or from rejecting a null so long as any better-fitting alternative exists.

It is sometimes said that in requiring the actual type 1 error probability to be small (i.e., requiring very small p-values before the null is rejected in favor of the alternative), we are appealing to the simpler hypothesis (the null). In a sense it is simpler, but the actual rationale is this: if moderate p-values are taken as evidence of a genuine discrepancy from the null, then it becomes too easy to erroneously infer a real effect.

We cannot even assess whether an observed agreement (between data and a hypothesis) really is big or small without the sampling distribution.

"If we accept the criterion suggested by the method of likelihood it is still necessary to determine its sampling distribution in order to control the error involved in rejecting a true hypothesis, because a knowledge of L [the likelihood ratio] alone is not adequate to insure control of this error." (Pearson and Neyman 1967, 106)
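The optional stopping effect can be made concrete with a short Monte Carlo sketch, assuming a true null (μ = 0), a known σ, and an arbitrary cap n_max on how long the try-and-try-again rule is allowed to run; all numbers are illustrative only.

```python
# A Monte Carlo sketch (not from the paper) of the optional stopping effect.
# Assumptions: the null is TRUE (X_i ~ N(0, 1)), sigma is known, and sampling
# stops as soon as |x-bar| >= 1.96*sigma/sqrt(n), up to an arbitrary cap n_max.
import numpy as np

rng = np.random.default_rng(1)
sigma, n_max, n_trials = 1.0, 1000, 2000

def rejects_under_optional_stopping(rng):
    total, n = 0.0, 0
    while n < n_max:
        total += rng.normal(0.0, sigma)
        n += 1
        if abs(total / n) >= 1.96 * sigma / np.sqrt(n):
            return True   # nominally "significant at the .05 level": stop and reject
    return False          # gave up at the cap without rejecting

rate = np.mean([rejects_under_optional_stopping(rng) for _ in range(n_trials)])
print(f"Actual type I error rate under try-and-try-again stopping: {rate:.2f}")
# Well above the nominal .05; as n_max grows the rate tends to 1, which is the
# sense in which the rule is guaranteed to reject a true null eventually.
```

Testing after every observation is what inflates the error probability; fixing n in advance, or adjusting the rejection boundaries as in sequential designs, restores the error control that the try-and-try-again rule destroys.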
D. Should we trust our intuitions in simple cases?

It is often noted that if the test is restricted to a comparative test, limited to simple or point-against-point hypotheses, then there is an upper error bound (so the problem with optional stopping is avoided). (Savage switches to such cases, Savage Forum 1962.) But that is a very different, very artificial example. So, to the question "should we trust our intuitions about general principles from simple cases?", the answer is no (we should look for exceptions). It was the case of the complex (i.e., compound) alternative that led the statistician George Barnard to reject the (strong) LP (surprising Savage).

I turn now to more familiar appeals to simplicity (not for methods or principles, but for inference to hypotheses, models, theories).

Simplicity, Severity, Error Correction

Underdetermination and Simplicity

Clearly a big rationale for the appeal to simplicity is the supposition that we are otherwise stuck with terrible underdetermination:

"But since there will always be an infinite number of theories which yield the same data with the same degree of inductive probability---but which make different predictions ... without the criterion of simplicity we can make no step beyond the observable data. Without this all-important a priori criterion, we would be utterly lost." (Swinburne, Simplicity as Evidence of Truth 1997, 15)

The problem might be seen as: what more do we need (to add to "x agrees with or fits H") to avoid underdetermination? Candidates: explanatory power, novelty, simplicity, well-testedness, severity.

Popper

Mere fits are too cheap to be worth having (Popper):

"In opposition to [the] inductivist attitude, I assert that C(H,x) must not be interpreted as the degree of corroboration of H by x, unless x reports the results of our sincere efforts to overthrow H. The requirement of sincerity cannot be formalized--no more than the inductivist requirement that e must represent our total observational knowledge." (Popper 1959, p. 418)

"Observations or experiments can be accepted as supporting a theory (or a hypothesis, or a scientific assertion) only if these observations or experiments are severe tests of the theory---or in other words, only if they result from serious attempts to refute the theory." (Popper 1994, p. 89)

It is no wonder Popper is often compared to error statisticians (Fisher, and/or Neyman and Pearson). True, Popper was never able to show, qua pure deductivist, "[that] we should expect the theory to fail if it is not true" (Grünbaum 198, 130). The best tested so far need not be well tested; his methods gave no way to assess error probabilities.

Complexity and Inseverity

The primary Popperian goal was always severity and the avoidance of ad hoc stratagems that would lower the testability of hypotheses:

"From my point of view, a system must be described as complex in the highest degree if ... one holds fast to it as a system established forever which one is determined to rescue, whenever it is in danger, by the introduction of auxiliary hypotheses. For the degree of falsifiability of a system thus protected is equal to zero." (Popper, LSD, 331)

This is characteristic of pseudoscience. Note, it is the system that is complex (I would say the procedure or method). Popper typically, misleadingly, suggests it is the hypothesis or theory that should be testable or simple.
The fact that he was unable to implement the idea using logical probability does not stop us from using contemporary statistical tools to do so.

Severity Principle

The Popperian intuition is right-headed: if a procedure had little or no ability to find flaws in H, then finding none scarcely counts in H's favor. We can put this in terms of having evidence:

Severity Principle (Weak): Data x provide poor evidence for H if they result from a method or procedure that has little or no ability of finding flaws in H, even if H is false.

As weak as this is, it is stronger than a mere falsificationist requirement: it may be logically possible to falsify a hypothesis, while the procedure may make it virtually impossible for such falsifying evidence to be obtained. Although one can get considerable mileage even stopping with this negative conception (as perhaps Popperians would), I hold the further, positive conception:

Severity Principle (Full): Data x provide a good indication of or evidence for hypothesis H (just) to the extent that test T severely passes H with x.

I talk about SEV a lot elsewhere, and cannot get into qualifications here. Severity has three arguments: a test, an outcome or result, and an inference or a claim. The severity with which H passes test T with outcome x may be abbreviated by:

SEV(Test T, outcome x, claim H).

That H is severely tested will always abbreviate that H has passed the severe or stringent probe, not, for example, merely that H was subjected to one (corroboration). This contrasts with a common tendency to speak of "a severe test" divorced from the specific inference, which leads to fallacies we need to avoid. A test may be made so sensitive (or powerful) that discrepancies from a hypothesis H are inferred too readily (the fallacy of rejection). However, the severity associated with such an inference is decreased, the more sensitive the test (not the reverse). One analogously avoids fallacies of acceptance; I will illustrate with a fanciful example.

My weight

If no change in weight registers on any of a series of well-calibrated and stable scales, both before leaving and upon my return from London, even though, say, they easily detect a difference when I lift a .1-pound potato, then we argue that the data warrant inferring that my weight gain is negligible within the limits of the sensitivity of the scales.

H: my weight gain is no greater than δ, where δ > 0 is an amount easily detected by these scales.

H, we would say, has passed a severe test: were I to have gained δ pounds or more (i.e., were H false), then this method would almost certainly have detected this.

No Rigging!

Perhaps underdeterminationists would say I could insist all the scales are wrong--they work fine with weighing vegetables, etc. (a Cartesian demon of scales).

Rigged alternative H*: H is false, but all data will be as if it is true. All experiments systematically mask the falsity of H (a Gellerized hypothesis).

Are H and H* empirically equivalent? If so, they are not testably equivalent on the severity account. For any hypothesis H, one can always adduce a rigged H* (even if H is true and has passed highly severe tests!). Were we to deny that x0 is evidence for H because of the possibility of rigging, we would be prevented from correctly finding out about weight or whatever. If the scales work reliably on test objects with known weight, what sort of extraordinary circumstance could cause them all to go astray just when we do not know the weight of the test object? Can the scales read my mind? (C.S. Peirce)
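To indicate how the severity of the weight inference might be computed, here is a minimal sketch in which the averaged scale readings are modeled as Normal; the precision, the number of weighings, the observed reading, and δ are all assumed for illustration, not taken from the example.

```python
# A sketch (my own numbers) of a severity calculation for the weight example,
# modeling the average of n scale readings as x-bar ~ N(gain, sigma^2/n).
# Claim H: my weight gain is no greater than delta.
# SEV(H) = Pr(X-bar > observed x-bar; gain = delta): were the gain really as
# large as delta, how probably would the scales have registered more than they did?
from math import sqrt
from statistics import NormalDist

sigma = 0.1       # assumed precision of one reading (it detects a .1-lb potato)
n = 5             # hypothetical number of independent weighings
x_bar_obs = 0.02  # hypothetical observed average "gain" in pounds
delta = 0.3       # the discrepancy that claim H rules out

se = sigma / sqrt(n)
sev = 1 - NormalDist(mu=delta, sigma=se).cdf(x_bar_obs)
print(f"SEV(gain <= {delta} lb) = {sev:.6f}")
# SEV is essentially 1: a gain of delta or more would almost surely have produced
# larger readings, so H passes the test severely with this outcome.
```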
It is simpler to assume the scales that work on my potato also work with unknown weights (in the intended range), but that is not why it is warranted. It is the learning goal that precludes such rigging, conspiracies, gellerization--a highly unreliable strategy.

Granted, this is a special case where there is knowledge of the probative capacities of the instrument, and this figures importantly in this account for justifying inductive (evidence-transcending) inferences. A central strategy for checking assumptions using known procedures is to ensure that errors ramify: if we were wrong, we would find systematic departures from the known weight (likewise with the use of known probability models, e.g., the Bernoulli model with p = .5 and coin tossing), and we move from highly inaccurate measurements to far more accurate ones. This is not an appeal to the uniformity of nature, but "that the supernal powers withhold their hands and let me alone, and that no mysterious uniformity interferes with the action of chance". The associated warrant for ampliative inference is beyond today's paper.

Sometimes it feels as if simplicity is appealed to in order to save some (flawed) accounts of inference from themselves--an account that regards H and H* as empirically (predictively) equivalent? By contrast, two pieces of data that equally well fit a hypothesis may differ greatly in their evidential value, due to differences in the probativeness of the tests from which they arose.

Empirical learning is complex in the sense of being piecemeal. One way to distinguish the pieces is by considering what error of inference is of concern. Formal error statistical tests provide tools to ensure errors will be correctly detected (i.e., signaled) with high probabilities--but in scientific contexts, their role will be not to assure low long-run error rates (behavioristic) but to learn about the source of the given data set. Within the piece, we are not distinguishing a hypothesis or theory from its rivals; the experiment sets out to distinguish and rule out a specific erroneous interpretation of the data from this experiment.

An error can concern any aspect of a model or hypothesis in the series of models, i.e., any mistaken understanding of an aspect of the phenomenon in question: errors about real vs. spurious effects, causes, parameters, model assumptions, links from statistical to substantive, classification errors, etc. I do not distinguish theoretical/observational. There is a corresponding localization of what one is entitled to infer severely: "H is false" refers to the specific error that the hypothesis H is denying. For example, we still need to distinguish the inference from rejecting a null of 0 effect from theories to explain the effect (they are on different levels). Much less does evidence for a real effect warrant realism (entity realism, or other).

Discussions of error-correcting or self-correcting methods often confuse two interpretations of the long-run metaphor:

(a) Asymptotic error-correction (as n → ∞): I have a sample of 100 and I consider accumulating more and more data; as n increases, the inference or estimate about μ approaches the true value of μ.

(b) Error probabilities of a test: I have a sample of 100 and I consider hypothetical replications of the experiment, each with samples of 100 (the relative frequency with which a sample mean differs by more than 2 standard deviations from the true mean is .05). So I can use the observed mean to estimate how far off the correct value is. The error probability tells me about the procedure underlying the actual 100-fold sample, e.g., that there is good evidence it was not merely fortuitous or due to chance.
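A short simulation sketch of sense (b), with an assumed Normal model and made-up values of μ and σ: the n = 100 experiment is hypothetically replicated many times, and the error probability is read off as the relative frequency with which the sample mean strays more than 1.96 standard errors from the true mean.

```python
# A simulation sketch (assumed Normal model, made-up mu and sigma) of sense (b):
# hypothetical replications of the same n = 100 experiment. The error probability
# is a property of the procedure: the relative frequency, across replications,
# with which x-bar lands more than 1.96 standard errors from the true mean.
import numpy as np

rng = np.random.default_rng(0)
mu_true, sigma, n, reps = 10.0, 2.0, 100, 50_000
se = sigma / np.sqrt(n)

x_bars = rng.normal(mu_true, sigma, size=(reps, n)).mean(axis=1)
freq = np.mean(np.abs(x_bars - mu_true) > 1.96 * se)
print(f"Frequency of |x-bar - mu| > 1.96 SE across replications: {freq:.3f}")
# About .05 -- which is what licenses using the one observed mean, plus or minus
# about 2 standard errors, to gauge how far off the correct value may be.
```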
The sampling distribution supplies the counterfactual needed: the value of employing a sampling distribution is to represent statistically what it would be like were one or another assumption of the data generating mechanism violated. In one-sided Normal sampling with known σ, for example, H: μ < x̄0 + 1.96σ_x̄ (i.e., μ < CIu) passes severely, because were this inference false, and the true mean μ > CIu, then with high probability a larger observed mean than the actual one would have occurred.

Simplicity and Economy

The idea that a central aim of statistical method is to speed things up in this way is at the heart of the rationale of error statistical methods. The concern, we might say, is with making good on the long-run claims in the short run, within the usual amount of time for a given research project. "It changes a fortuitous event which may take weeks or may take many decennia into an operation governed by intelligence, which will be finished within a month." (Peirce 7.78)

Giving good leave: an important consideration Peirce gives under economy is that a hypothesis may "give good leave," as the billiard-players say. If it fails to fit the facts, the test may be instructive about the next hypothesis. Even if we wanted to know whether a quadratic equation holds between quantities, we would do well to test a linear model first, because the residuals will be more readily interpretable. The residuals--differences between observed and predicted values--may teach more about the next hypothesis to try. Studying the residuals, we can probe the statistical adequacy of a model: the residuals should be like white noise (see the sketch below). I am allowing a loose talk of "fit" as others do; a satisfactory notion, as my colleague Aris Spanos always stresses, requires adequately capturing systematic statistical information (residuals are white noise).

Again, one senses that simplicity is appealed to in order to save an inadequate account from itself.

Error-fixing gambits in model validation. Example: a statistically significant difference from a null that asserts independence in a linear regression model might be taken as warranting one of many alternatives that could explain the non-independence (Aris Spanos):

H1: the errors are correlated with their past, expressed as a lag between trials.

H1 now fits the data all right, but since this is just one of many ways to account for the lack of independence, the alternative H1 passes with low severity. This method has little if any chance of discerning other hypotheses that could also explain the violation of independence. It is one thing to arrive at such an alternative based on the observed discrepancy, with the requirement that it be subjected to further tests.

Severity and informativeness (be stringent but learn something)

Recalling the simplicity slogan: if a simpler (method, model, theory) will suffice, then go with it? We cannot ignore different requirements and goals.
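Here is the residual-check sketch referred to above under "giving good leave": data simulated from a quadratic relation are fit with the simpler linear model, and a crude lag-1 autocorrelation of the ordered residuals stands in for a fuller battery of misspecification (white-noise) tests; the data-generating numbers are invented for illustration.

```python
# The residual-check sketch referred to above (simulated data, not the paper's):
# the true relation is quadratic, the simpler linear model is fit first, and the
# residuals are checked for systematic (non-white-noise) structure via a crude
# lag-1 autocorrelation of the residuals ordered by x.
import numpy as np

rng = np.random.default_rng(42)
x = np.linspace(0, 10, 50)
y = 1.0 + 0.5 * x + 0.3 * x**2 + rng.normal(0, 1.0, x.size)   # quadratic truth

slope, intercept = np.polyfit(x, y, deg=1)    # least-squares linear fit
residuals = y - (intercept + slope * x)

r1 = np.corrcoef(residuals[:-1], residuals[1:])[0, 1]
print(f"Lag-1 autocorrelation of the linear fit's residuals: {r1:.2f}")
# A value near 0 would look like white noise; the large positive value here
# reflects the bow left by the missing x^2 term -- the linear model "gives good
# leave" by pointing to the next hypothesis to try.
```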
Unlike many here, I know little of machine learning, but it strikes me as a very special case (where "straight rules" might suffice). As a philosopher of science, I am always looking for a very general account of learning, of finding things out. Scientific understanding goes beyond predicting events. To one who thinks fitting the facts and predictive accuracy is what is mainly wanted in inference, it must seem mystifying that scientists are not especially satisfied with that alone; they always want to push the boundaries, to learn something new, to rock the boat.

Is there a tension between simplicity and breaking out of paradigms? Wouldn't it be simpler not to challenge adequately predicting theories? Maybe, but scientists rock the boat to find out more, because they want understanding that goes beyond predicting. One may be entirely agnostic on realism; models are approximate and idealized, but that does not prevent getting a correct understanding by using them.

Why did researchers deliberately construct rivals if GTR was predicting adequately (maybe it had a high posterior)? Some say severity is too tough to satisfy, but they overlook the value of recognizing inseverity. Our severe tester sets about exploring just why we are not allowed to say that GTR is severely probed as a whole--why it has inseverely passed based on the given tests. How could it be a mistake to regard the existing evidence as good evidence for GTR (even in the regions probed by solar system tests)?

Parameterized Post-Newtonian (PPN) framework: a list of parameters that allows a systematic articulation of violations of, or alternatives to, what GTR says about specific gravity effects (they want to avoid being biased toward GTR). These are set up largely as straw men with which to set firmer constraints on the parameters and to check which portions of GTR have and have not been well tested (Earman 1992).

Each PPN parameter is set as a null hypothesis of a test. For example, λ, the deflection of light parameter, measures "spatial curvature". The GTR value for the PPN parameter under test serves as the null hypothesis from which discrepancies are sought (usually set at 1):

H0: λ = λGTR.

By identifying the null with the prediction from GTR, any discrepancies are given a very good chance to be detected, so if no significant departure is found, this constitutes evidence for the GTR prediction with respect to the effect under test, i.e., λ. The tests rule out GTR violations exceeding the bounds for which the test had very high probative ability (infer upper bounds to possible violations). This could equivalently be viewed as inferring a confidence interval estimate, λ = L ± ε.

Simplicity and Parameter Adjustment

(Our conference organizer is keen for me to touch on this, and GTR offers a good case.) Deliberately constructing viable rival theories did not preclude "fixing arbitrary parameters" to ensure rivals yield correct predictions with regard to the severely affirmed effects. For example, the addition of a scalar field in Brans-Dicke theory depended on an adjustable constant ω: the smaller its value, the larger the effect of the scalar field and thus the bigger the difference with GTR, but as ω gets larger the two become indistinguishable. (An interesting difference would have been with a small ω like 40; its latest lower bound is pushing 20,000!)
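To see how a bound like "ω pushing 20,000" arises from an interval estimate for λ, here is a minimal sketch. It assumes the standard Brans-Dicke relation λ = (1 + ω)/(2 + ω) between the spatial-curvature (light-deflection) parameter and ω (the parameter is often written γ); the estimate and the interval half-width are hypothetical.

```python
# A sketch (hypothetical numbers) of how an interval estimate for lambda
# constrains the Brans-Dicke constant omega, assuming the standard relation
# lambda = (1 + omega) / (2 + omega): larger omega pushes the rival toward
# the GTR value lambda = 1.
lam_hat = 1.0      # hypothetical point estimate, agreeing with the GTR value
eps = 5e-5         # hypothetical half-width of the interval lambda = L +/- eps

lam_lower = lam_hat - eps                            # smallest lambda not ruled out
omega_lower = (2 * lam_lower - 1) / (1 - lam_lower)  # invert lambda = (1+w)/(2+w)
print(f"A viable Brans-Dicke rival needs omega >= {omega_lower:,.0f}")
# With eps near 5e-5 this is on the order of 20,000: what passes severely is a
# lower bound on omega, not the claim that GTR itself is true.
```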
The value of λ is fixed in GTR, but constraining a rival like the Brans-Dicke theory to fit the GTR prediction involves adjusting the parameter ω.

Several Bayesians (e.g., Berger, Rosenkrantz) maintain that a theory that is free of adjustable parameters is "simpler" and therefore enjoys a higher prior probability (Jefferys and J. Berger 1992, 72; "Ockham's razor and Bayesian analysis"). Here they are explicitly referring to adjustments of gravity theories. Others maintain the opposite:

"On the Bayesian analysis, this countenancing of parameter fixing is not surprising, since it is not at all clear that GTR deserves a higher prior than the constrained Brans and Dicke theory." (Earman 1992, p. 115) "Why should the prior likelihood of the evidence depend upon whether it was used in constructing T?" (Earman 1992, p. 116)

As I have argued elsewhere, there are many cases where data are used to arrive at and support parameters, with the result that the fitted claim passes with high severity. To correctly diagnose the differential merit, the severe testing approach instructs us to consider the particular inference and the ways it can be in error, in relation to the corresponding test procedure. In adjusting ω, thereby constraining Brans-Dicke theory to fit the estimated λ, what is being learned regarding the Brans-Dicke theory is how large ω would need to be to agree with the estimated λ. In this second case, inferences that pass with high severity are of the form "ω must be at least 500" (~a confidence interval estimate). The questions, hence the possible errors, hence the severity, differ.

But the data-dependent GTR alternatives play a second role: namely, to show that GTR has not passed severely as a whole--that were a rival account of the mechanism of gravity correct, the existing tests would not have detected this. This was the major contribution provided by the rivals articulated within the PPN framework (of viable rivals to GTR). The constrained GTR rivals successfully show that the existing tests did not rule out, with severity, the alternative explanations for the λ effect given in the viable rivals. Some view their role as estimating cosmological constants, thus estimating violations that would be expected in strong gravity domains.

Discovering new things: the Nordtvedt effect η

But what I really want to emphasize is the kind of strategy that enables finding a new effect. Discovering new things is creative, but it is not the miracle Popper makes it out to be. In the 1960s Nordtvedt discovered that Brans-Dicke theory would conflict with GTR by predicting a violation of the Strong Equivalence Principle (basically the Weak Equivalence Principle for massive self-gravitating bodies, e.g., stars and planets, black holes). A new parameter to describe this effect, the Nordtvedt effect, was introduced into the PPN framework, i.e., η. η would be 0 for GTR, so the null hypothesis tested is H0: η = 0, as against non-0 for rivals.

Measurements of the round-trip travel times between the earth and moon (between 1969 and 1975) enabled the existence of such an anomaly for GTR to be probed severely (actually, the measurements continue today). Because the tests are highly sensitive, these measurements provided evidence that the Nordtvedt effect is absent, and they set upper bounds to the possible violations. I talk about experimental GTR elsewhere.
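A minimal sketch of the logic of the η test, summarizing the lunar-ranging analysis as an estimate with a known standard error (both values hypothetical): the GTR null H0: η = 0 is assessed with a two-sided p-value, and a rough bound on violations not yet ruled out is read off in the spirit of the severity construal above.

```python
# A sketch (hypothetical estimate and standard error) of the logic of the eta
# test: H0: eta = 0 (the GTR value) vs. eta != 0, with the lunar-ranging
# analysis summarized as eta_hat ~ N(eta, se^2).
from statistics import NormalDist

eta_hat = 0.0003   # hypothetical estimate of the Nordtvedt parameter
se = 0.0005        # hypothetical standard error

z = eta_hat / se
p = 2 * (1 - NormalDist().cdf(abs(z)))    # two-sided p-value for H0: eta = 0
bound = abs(eta_hat) + 1.96 * se          # rough bound on violations not ruled out
print(f"p-value = {p:.2f}; |eta| > {bound:.4f} is ruled out with severity")
# A non-significant result is informative here because the measurements had high
# capability of detecting an eta of this size: the absence of the Nordtvedt
# effect is probed, and upper bounds on possible violations are inferred.
```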
Unification May Grow Out of Testing Constraints

Many of the parameters are functions of the others--an extremely valuable source for cross-checking and fortifying inferences (e.g., λ measures the same thing as the so-called time delay, and the Nordtvedt parameter η gives estimates of several others). We may arrive at a unification, but note that the impetus was simultaneously (if not mainly) getting more constrained tests in order to learn more. Combined interval estimates constrain the values of the parameters, enabling entire chunks of theories to be ruled out at a time (i.e., all theories that predict values of the parameters outside the interval estimates).

Concluding remarks

In the first third of this paper I considered appeals to simplicity in appraising inductive-statistical accounts and principles, and denied that it was a good guide for avoiding too-easy confirmations and fits. In the remaining two-thirds, I considered how a desire for simplicity grows out of the desire for constraints against too-easy inductive inferences.

A general simplicity slogan goes something like this: if a simpler (method, model, theory) will suffice, then go with it? (Usually there is an addition, "all else being equal"--but this enables certain simplicity positions to be retained: any example I might point to where something else is really operative could be dismissed by saying it violates the all-things-are-equal requirement. But then of course the simplicity position is itself maximally complicated, using Popper's notion.)

That appraisals of inferences are altered by the overall error-probing capacities of tests complicates the account, but in so doing it enables the account to avoid having to resort to the familiar appeals to simplicity of other accounts. It is an appeal to well-testedness, which gets at what is really at issue, or so I argue; severity provides a general desideratum for when selection effects need to be taken account of.

The severity intuition: we have good evidence that we are correct about a claim or hypothesis just to the extent that we have ruled out the ways we can be wrong in taking the claim or hypothesis to be true. Far from wishing to justify enumerative induction, from "all observed As have been Bs" to the inference that all or most As are Bs in a given population, such a rule would license inferences that had not passed severe tests--a highly unreliable rule. An induction following this pattern is warranted only when the inference has passed a severe test.

The goal of correct understanding, and of learning more, is not simple (it will not appeal to neatniks). But the piecemeal account enjoys the benefits of the applicability of ready-to-wear and easy-to-check methods; the goal of attaining a more comprehensive understanding of phenomena; and the exploitation of multiple linkages to constrain, cross-check, and subtract out errors--higher severity. It is more difficult to explain things away within these interconnected checks, and they enable the capacity to discover a new effect, entity, or anomaly. If a theory says nothing about a phenomenon, its tests generally have no chance of discerning how it may be wrong regarding that phenomenon (e.g., the central dogma of molecular biology did not speak of prions).

Simplicity criteria correlate with what is actually doing the work in promoting knowledge--but the correlation is quite imperfect, and when it holds, it is only indirectly getting at the problem. It is always second-hand to what is really responsible: the goals of ensuring and appraising how well tested claims are, how well methods control and avoid error.
Notes

Swinburne claims that Sober holds essentially Popper's view: the simpler theory is the one that answers more of your questions. But even if Popper identified "simpler" with "more falsifiable" or "more informative", that is utterly irrelevant to the question of what it is warranted to infer.

Swinburne's conception of simplicity is comparative: one theory is simpler than another if the simplest formulation of the first is simpler than that of the second. For Swinburne, do not postulate theoretical entities if you can get a theory that yields the data without them. These accounts tend to reduce science to predicting events rather than to understanding theoretical processes, so they are very limited in their relevance for most science. This may fit with current machine learning, where the goal is merely classification of given known objects, but it comes up short for all of scientific induction.

Spanos: It is implicitly assumed that curves with the same high goodness-of-fit capture the regularities in the data equally well. It is argued that what renders a curve fittest is the non-systematic (in a probabilistic sense) nature of its residuals, not how small they are; fitted curves with excellent fit and predictive accuracy can be shown to be statistically inadequate--they do not account for the regularities in the data. The approximating function g_m(x_k; α) = Σ_{i=0}^{m} α_i φ_i(x_k) is chosen to be as elaborate as necessary to ensure statistical adequacy, but no more elaborate. This guards against overfitting, because unnecessarily elaborate structure gives rise to systematic residuals. The predictive accuracy of a fitted curve (statistical model) is then no longer just a matter of small prediction errors (which could be accidental), but of non-systematic ones. The prevailing view is called into question by demonstrating that the Ptolemaic model yields excellent goodness-of-fit but does not account for the regularities in the data; it is shown to be statistically inadequate, whereas Kepler's model is shown to be statistically adequate. Despite being relatively small, the standardized residuals of the Ptolemaic model show, on a closer look, crucial departures from the white-noise assumptions; its predictive accuracy is very weak since it underpredicts systematically--a symptom of statistical inadequacy. The most crucial weakness of the Akaike model selection procedure is that it ignores statistical adequacy. It is argued that the Akaike procedure is often unreliable because it constitutes a form of Neyman-Pearson (N-P) hypothesis testing with unknown error probabilities.