Hypothesis Testing: What's the Idea?
Scott Stevens

Hypothesis testing is closely related to the sampling and estimation work we have already done. The work isn't hard, but the conclusions that we draw may not seem natural to you at first, so take a moment to get the idea clear in your mind.

Hypothesis testing begins with a hypothesis about a population. We then examine a random sample from this population and evaluate the hypothesis in light of what we find. This evaluation is based on probabilities. Now, what you'd almost certainly want to be able to say is something like this: "Based on this sample, I'm 90% sure this hypothesis is true." Unfortunately, you can NEVER draw such a conclusion from a hypothesis test. Hypothesis tests never tell you how likely (or unlikely) it is that the hypothesis is true. There are good reasons for this, but we'll talk about them a little later. Hypothesis test conclusions instead talk about how consistent the sample is with the given hypothesis.

If you flip a coin one time and it comes up heads, you wouldn't suspect the coin of being unfair. A head will come up 50% of the time when you flip a fair coin. The flip gives you no reason to reject the "null hypothesis" that the coin is fair. But if you flip the coin 10 times and it comes up heads all 10 times, you're going to have serious doubts about the coin's fairness. And you should. 10 heads in 10 flips will occur only about 1 time in 1000 tries. So if the coin is fair, you just witnessed a 1-in-1000 event. It could happen, but the odds are strongly against it, by 1000 to 1. You are left with believing one of two things: either you just happened to see a 1-in-1000 event, or the coin isn't actually fair. Almost anyone would conclude the latter, and judge the coin to be rigged.

Now let's back up a bit and change the preceding situation in one way. Let's imagine that I told you that I had a "trick coin" that flips heads 95% of the time. I flip it once, and it comes up heads, just as before. Again, you have no cause to reject the null hypothesis of an "almost always heads" coin. If I flip it 10 times and it comes up heads 10 times in a row, you still don't reject my claim. The result is completely consistent with the claim. This doesn't mean that the coin really does flip heads 95% of the time; it just means you've seen nothing that would make you disbelieve me.

That's how hypothesis testing always works. You have a null hypothesis, an assumed statement about the population. You proceed on the assumption that it's right. You then look at a sample and determine how unusual the sample is in light of that hypothesis. If it's weird enough, then you're forced to revisit your original hypothesis: you reject it. If the sample is a "common enough" result under the situation described by your null hypothesis, then you "fail to reject" the null hypothesis. We're not saying it's true; we're just saying we don't have evidence to say that it's false. (This is much like our legal system, in which people are found "not guilty". It doesn't mean that they're innocent. It just means that there wasn't enough evidence to conclude that they were guilty.)
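The "1 time in 1000" figure is just a probability calculation. Here is a minimal Python sketch (plain arithmetic, nothing course-specific) that computes the chance of 10 heads in 10 flips under each of the two coins described above.

p_fair = 0.5 ** 10    # fair coin: probability of 10 straight heads
p_trick = 0.95 ** 10  # "95% heads" trick coin: probability of 10 straight heads

print(f"Fair coin:  {p_fair:.5f}  (about 1 in {1/p_fair:.0f})")
print(f"Trick coin: {p_trick:.2f}")

Under the fair-coin hypothesis, 10 straight heads is roughly a 1-in-1024 event, so the sample casts serious doubt on that hypothesis. Under the 95%-heads hypothesis, the same result happens about 60% of the time, so there is nothing to reject.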
In principle, the null hypothesis could be almost any statement about a population's characteristics, but for COB 191, we can be more specific. The null hypothesis in Chapter 9 will always be in one of these forms. (In each case, the number 8 could be replaced with any other number, and 0.6 could be replaced with any other number between 0 and 1.)

Two tailed tests    Upper tailed tests    Lower tailed tests
H0: μ = 8           H0: μ ≤ 8             H0: μ ≥ 8
H0: π = 0.6         H0: π ≤ 0.6           H0: π ≥ 0.6

What to notice:
• The null hypothesis is always about a population parameter, not a sample statistic. In 191, that means it involves a Greek letter.
• The null hypothesis may use = (which gives a two tailed test) or ≤ or ≥ (which gives a one tailed test), but the symbol in the null hypothesis will always include the equality part. That is, the null will not use > or <.

We'll come back and talk more about why these are called one tailed and two tailed tests later. So, summing up: when you draw your conclusion from a hypothesis test, you are going to conclude one of the following:
i) The sample is too "weird" to believe that the null hypothesis is true, so we reject the null hypothesis, or
ii) The sample is sufficiently consistent with the null hypothesis that we cannot reject it.

Saying this compactly isn't easy in normal English, so I'm going to define a term that I can use to get it across. The term is "weirdness". You won't find it in a stat book, but it carries the right sense. When we do a hypothesis test, we reject a hypothesis if the sample is "too weird". If a realtor tells you that the average house price in a city is $50,000 or less, and your sample of 300 house prices gives an average price of $250,000, then you're going to disbelieve the realtor. The sample is too weird for the null hypothesis to be believable. If, on the other hand, your sample gave an average house price of $43,000, this isn't weird at all. It's completely consistent with the claim. In fact, even if your sample has a mean house value of $50,050, you probably wouldn't call the realtor a liar. True, the sample has a mean that is $50 more than the figure that the realtor stated, but that $50 could reasonably be attributed to sampling error; you just happened to pick houses in your sample that were a little more pricey than the average.

Because we shouldn't try to do mathematics with terms that are only vaguely defined, let me lock down "weirdness" now.

The "Weirdness" of a Sample for a Hypothesis Test

We reject the null hypothesis of a hypothesis test only if the sample is "too weird".
• If the null hypothesis is that μ = a certain number, then the farther the sample mean is from that number, below or above, the weirder the sample. So getting a sample mean that is 10 above μ is just as weird as getting a sample mean that is 10 below μ.
• If the null hypothesis is that μ ≥ a certain number, then the farther the sample mean is below that number, the weirder the sample. (Samples with means above that number are fine.)
• If the null hypothesis is that μ ≤ a certain number, then the farther the sample mean is above that number, the weirder the sample. (Samples with means below that number are fine.)
• If the null hypothesis is that π = a certain number, then the farther the sample proportion is from that number, below or above, the weirder the sample. So getting a sample proportion that is 0.10 above π is just as weird as getting a sample proportion that is 0.10 below π.
• If the null hypothesis is that π ≥ a certain number, then the farther the sample proportion is below that number, the weirder the sample. (Samples with proportions above that number are fine.)
• If the null hypothesis is that π ≤ a certain number, then the farther the sample proportion is above that number, the weirder the sample. (Samples with proportions below that number are fine.)
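To make the realtor example concrete, here is a minimal Python sketch of the "μ ≤ a certain number" case. The sample standard deviation below is invented purely for illustration (the example in the text doesn't give one); the point is only that weirdness is measured by how far the sample mean sits above the claimed ceiling.

import math

# Realtor's claim (null hypothesis): mu <= $50,000.
# For a "<=" null, only sample means ABOVE $50,000 count as weird,
# and the farther above, the weirder.

mu0 = 50_000              # claimed ceiling on the mean house price
s = 20_000                # hypothetical sample standard deviation (assumed)
n = 300                   # sample size from the example
se = s / math.sqrt(n)     # standard error of the sample mean

for xbar in (250_000, 43_000, 50_050):
    z = (xbar - mu0) / se    # positive z: above the claim (the weird direction)
    print(f"sample mean ${xbar:>7,}: z = {z:7.2f}")

A sample mean of $250,000 is enormously weird under the claim, $43,000 is below the claimed ceiling and so not weird at all, and $50,050 is only a sliver above it, which is exactly the sampling-error situation described above.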
This approach to defining weirdness applies to any hypothesis test for which the sampling distribution can be assumed to be normal. That means that it applies to everything in this chapter, and to most topics in your book. When it doesn't, we'll talk about it. Make sure that the box above makes sense to you, because I'm going to use the word "weirdness" in all that follows, and I'm going to assume that you understand what I mean.

The definition of weirdness is just common sense, really. Think about the kind of data that would contradict a claim of the null hypothesis. If, for example, I said that at least 70% of JMU students were male, then my claim would not be called into question if my sample of 100 JMU students were 90% male. It would be called into question, though, if my sample of 100 JMU students were only 20% male. In a population that is at least 70% male, a sample that's only 20% male is very weird.

Note that this idea of weirdness is predicated on the working assumption that the null hypothesis is true. 20% males is only "very weird" if you're working under the assumption that the population is 70% male.

Now, stick with this assumption that the null hypothesis is true, which is what we do until the very end of any hypothesis test. Imagine all possible samples from the hypothesized population. Some of them would be weirder than the one that we got, and some would be less weird than the one we got. If the null hypothesis is that at least 70% of the students are male, then our sample of only 20% males was very weird. But a sample with only 4% males would be even weirder. The question is this: assuming that the null hypothesis is true, what fraction of all samples are as weird or weirder than the one that we got? This is called the P-value of the sample.

Begin with the assumption that the null hypothesis is true. The fraction of all samples from this population that would be as weird or weirder than your sample is called the P-value of your sample.

So the smaller the P-value, the weirder the sample. If the P-value is 0.01, it means that (again, assuming the null hypothesis is true) only 1% of all samples would give results as weird or weirder than the one that you found in your sample.
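Here is a minimal Python sketch of the P-value for the JMU example, using the normal approximation to the sampling distribution (which is what this chapter assumes). The null hypothesis is π ≥ 0.70, so weirdness lies in the lower tail: the P-value is the chance, computed as if π really were 0.70, of getting a sample proportion at least as far below 0.70 as the one observed.

import math

def normal_cdf(z):
    # P(Z <= z) for a standard normal variable; erfc keeps precision in the far tail
    return 0.5 * math.erfc(-z / math.sqrt(2.0))

pi0 = 0.70                           # hypothesized (minimum) proportion of males
n = 100                              # sample size from the example
se = math.sqrt(pi0 * (1 - pi0) / n)  # standard error of the sample proportion

for p_hat in (0.20, 0.90):
    z = (p_hat - pi0) / se
    p_value = normal_cdf(z)          # lower tail only, because the null is ">="
    print(f"sample proportion {p_hat:.2f}: z = {z:6.2f}, P-value = {p_value:.2e}")

The 20%-male sample has a P-value that is essentially zero (almost every sample from the hypothesized population would be less weird), while the 90%-male sample has a P-value near 1: it is entirely consistent with the claim.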
Every hypothesis test has the same form. First, we are given the null hypothesis, which is kind of a "straw man". Our job is to see if there's enough evidence to conclude that this hypothesis is false. We're also given a level of significance, which we symbolize α. This is, if you like, the weirdness threshold.

If α = 0.05, or 5%, this means that I'll reject the null hypothesis only if the sample is weirder than 95% of the samples I could have gotten from the hypothesized population. We then look at our sample to see if it's too weird, in the sense defined above. If its P-value is smaller than α, it's weirder than our weirdness cutoff, and we say, "No, that hypothesis isn't true." (We could be wrong, of course, and α tells us how likely it is that we're wrong when we say "no".) If the P-value is greater than or equal to α, then our sample isn't sufficiently outrageous for us to commit ourselves by rejecting the null hypothesis. If your sample had a P-value of 0.10, then it's still a pretty strange sample (90% of all samples from the hypothesized population would be less strange), but things that happen 1 time in 10 aren't all that rare. If we're using α = 0.05, we won't commit ourselves to rejecting the null hypothesis unless our sample is stranger than 95% of the samples we'd expect to see from the hypothesized population.

You're probably going to need to reread this during and after the first couple of homework problems. The mechanics of how to conduct these tests can be found in the other website document, Techniques of Hypothesis Testing. Read that next!
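As a closing illustration, here is a minimal Python sketch of the decision rule described above. The two P-values fed to it (0.01 and 0.10) are the ones used in the discussion; everything reduces to a single comparison of the P-value with α.

def decide(p_value, alpha=0.05):
    # Reject only when the sample is weirder than the weirdness threshold.
    if p_value < alpha:
        return "reject the null hypothesis"
    return "fail to reject the null hypothesis"

for p in (0.01, 0.10):
    print(f"P-value = {p}: {decide(p, alpha=0.05)}")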