Chapter 12: Sampling Distributions, Sampling Distribution of the Mean, and the Normal Deviate (z) Test

I. Introduction

At the heart of hypothesis testing is the ability to answer the question: What is the probability of getting the obtained result, or results even more extreme, if chance alone is responsible for the differences between the experimental and control scores?

Answering this question involves two steps:
1. calculating the appropriate statistic, and
2. evaluating the statistic based on its sampling distribution.

II. Sampling Distributions

What is a sampling distribution?

Definition (p. 268): The sampling distribution of a statistic gives (1) all the values that the statistic can take and (2) the probability of getting each value under the assumption that it resulted from chance alone.

A sampling distribution of a statistic is a theoretical frequency distribution of the values of a statistic, such as the mean. Any statistic that can be computed for a sample has a sampling distribution. A sampling distribution is the distribution of statistics that would be produced in repeated random sampling (with replacement) from the same population. It is all possible values of a statistic and their probabilities of occurring for a sample of a particular size (Vogt, 1999).

If the probability of getting the obtained value of the statistic, or any value more extreme, is equal to or less than the alpha level, we reject H0 and accept H1. If not, we retain H0. If we reject H0 and it is true, we have made a Type I error. If we retain H0 and it is false, we have made a Type II error.

This process applies to all experiments involving hypothesis testing. What changes from experiment to experiment is the statistic that is used and its accompanying sampling distribution.

A. Generating Sampling Distributions

A sampling distribution has been defined as a probability distribution of all the possible values of a statistic under the assumption that chance alone is operating. One way of deriving sampling distributions is from basic probability considerations.

Sampling distributions can also be derived from an empirical sampling approach. Here we have an actual or theoretical set of population scores that exists if the independent variable has no effect. We derive the sampling distribution of the statistic by:
1. determining all the possible different samples of size N that can be formed from the population of scores (that is, all unique samples of the same size),
2. calculating the statistic for each of the samples, and
3. calculating the probability of getting each value of the statistic if chance alone is operating.

Definition (p. 269): The null-hypothesis population is an actual or theoretical set of population scores that would result if the experiment were done on the entire population and the independent variable had no effect. It is called the null-hypothesis population because it is used to test the validity of the null hypothesis.

Definition (p. 272): A sampling distribution gives all the values a statistic can take, along with the probability of getting each value if sampling is random from the null-hypothesis population.

A sampling distribution is constructed by assuming that an infinite number of samples of a given size have been drawn from a particular population and that their distributions have been recorded. Then the statistic, such as the mean, is computed for the scores of each of these hypothetical samples, and this infinite number of statistics is arranged in a distribution in order to arrive at the sampling distribution. The sampling distribution is compared with the actual sample statistic to determine whether that statistic is or is not likely to be the way it is due to chance (Vogt, 1999).
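To make the three-step procedure above concrete, here is a minimal computational sketch (not from the text). The four-score population, the sample size of N = 2, and sampling with replacement are illustrative assumptions only.

```python
from itertools import product
from collections import Counter
from fractions import Fraction

population = [2, 3, 5, 6]   # hypothetical null-hypothesis population scores
N = 2                       # sample size

# Step 1: form every possible sample of size N (sampling with replacement)
samples = list(product(population, repeat=N))

# Step 2: calculate the statistic (here, the mean) for each sample
means = [sum(s) / N for s in samples]

# Step 3: probability of each value of the statistic if chance alone is operating
counts = Counter(means)
sampling_distribution = {m: Fraction(c, len(samples)) for m, c in sorted(counts.items())}

for mean_value, probability in sampling_distribution.items():
    print(f"mean = {mean_value:4}  P = {probability}")
```

For this toy population, for example, a mean of 4.0 arises from the samples (2, 6), (6, 2), (3, 5), and (5, 3), so P(X̄ = 4.0) = 4/16.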
III. The Normal Deviate (z) Test

The normal deviate (z) test is used when we know the parameters of the null-hypothesis population, that is, when we know the population mean (μ) and standard deviation (σ). The z test uses the mean of the sample (X̄_obt) as its basic statistic.

A. Sampling Distribution of the Mean

Definition (p. 273): The sampling distribution of the mean gives all the values the mean can take, along with the probability of getting each value if sampling is random from the null-hypothesis population.

The sampling distribution of the mean can be determined empirically or theoretically, the latter through the use of the Central Limit Theorem. The Central Limit Theorem tells us that, regardless of the shape of the population of raw scores, the sampling distribution of the mean approaches a normal distribution as sample size N increases.

Empirically, we can determine the sampling distribution of the mean by taking a specific population of raw scores having mean μ and standard deviation σ and:
1. drawing all possible different samples of a fixed size N,
2. calculating the mean of each sample, and
3. calculating the probability of getting each mean value if chance alone were operating.

The sampling distribution of the mean has the following general characteristics (for samples of any size N):

1. The sampling distribution of the mean is made up of sample means. As such, it, too, has a mean and a standard deviation. The mean of the distribution is symbolized μ_X̄ (the mean of the sampling distribution of the mean). The standard deviation of the distribution is symbolized σ_X̄ (the standard deviation of the sampling distribution of the mean, also called the standard error of the mean, because each sample mean can be considered an estimate of the mean of the raw-score population). Variability between sample means thus occurs because of errors in estimation, hence the phrase "standard error of the mean" for σ_X̄.

2. The mean of the sampling distribution of the mean equals the mean of the raw-score population: μ_X̄ = μ. Recall that each sample mean is an estimate of the mean of the raw-score population, differing from it only by chance.

3. The standard deviation of the sampling distribution of the mean equals the standard deviation of the raw-score population divided by the square root of N: σ_X̄ = σ/√N. Thus σ_X̄ varies directly with the standard deviation of the raw-score population and inversely with √N. If the scores in the population are more variable, σ goes up and so does the variability between the means based on those scores.

4. The sampling distribution of the mean is normally shaped, depending on the shape of the raw-score population and on sample size. If the population of raw scores is normally distributed, the sampling distribution of the mean will also be normally distributed, regardless of sample size. However, if the population of raw scores is not normally distributed, the shape of the sampling distribution of the mean depends on the sample size. If N is sufficiently large, the sampling distribution of the mean is approximately normal. If N > 30, it is usually assumed that the sampling distribution of the mean will be normally shaped. If N > 300, the shape of the population of raw scores is no longer a consideration: with an N this large, regardless of the shape of the raw-score population, the sampling distribution of the mean will deviate so little from normality that, for statistical purposes, we can consider it normally distributed.

In short, two factors determine the shape of the sampling distribution of the mean:
1. the shape of the population of raw scores (if the population of raw scores is normally distributed, the sampling distribution of the mean will also be normally distributed; if it is not, the shape of the sampling distribution depends on the sample size), and
2. the sample size, N (if N is sufficiently large, the sampling distribution of the mean is approximately normal; the further the raw scores deviate from normality, the larger the sample size must be for the sampling distribution of the mean to be normally shaped).
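These characteristics (μ_X̄ = μ, σ_X̄ = σ/√N, and approximate normality for large N) can be checked with a short simulation. The sketch below is not from the text; the skewed exponential population with μ = σ = 10, the sample size of N = 36, and the 20,000 samples are illustrative assumptions.

```python
import math
import random
import statistics

random.seed(1)

# Made-up, strongly skewed raw-score population: exponential scores with mean 10.
# For an exponential population the mean equals the standard deviation, so mu = sigma = 10.
mu, sigma = 10.0, 10.0
N = 36                 # sample size
num_samples = 20_000   # number of random samples drawn

sample_means = []
for _ in range(num_samples):
    sample = [random.expovariate(1 / mu) for _ in range(N)]
    sample_means.append(sum(sample) / N)

mean_of_means = statistics.mean(sample_means)   # should be close to mu
se_of_means = statistics.stdev(sample_means)    # should be close to sigma / sqrt(N)

print(f"mean of the sample means: {mean_of_means:.2f}   (mu = {mu})")
print(f"sd of the sample means:   {se_of_means:.2f}   (sigma/sqrt(N) = {sigma / math.sqrt(N):.2f})")
```

With this many samples, the simulated values come out close to μ = 10 and σ/√N ≈ 1.67, and a histogram of the sample means would look roughly normal even though the raw-score population is badly skewed.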
The formula for the normal deviate (z) test (the equation for z_obt) is very similar to the z equation for raw scores, but instead of a raw score it uses a sample mean.

z equation: z_obt = (X̄_obt − μ_X̄) / σ_X̄, and since μ_X̄ = μ, the equation simplifies to z_obt = (X̄_obt − μ) / σ_X̄, where σ_X̄ = σ/√N.

B. Alternative Solution Using z_obt and the Critical Region for Rejection of H0

Definition (p. 281): The critical region for rejection of the null hypothesis is the area under the curve that contains all the values of the statistic that allow rejection of the null hypothesis.

Definition (p. 281): The critical value of a statistic is the value of the statistic that bounds the critical region.

To analyze the data using this alternative method, all we need to do is calculate z_obt, determine the critical value of z (z_crit), and assess whether z_obt falls within the critical region for rejection of H0.

Based on the data from our study, we calculate z_obt. We find z_crit for the region of rejection by using the area under the normal curve table. The critical region for rejection of H0 is determined by the alpha level, and the critical values depend on whether we are testing a one-tailed (directional) hypothesis or a two-tailed (non-directional) hypothesis.

To reject H0, the obtained sample mean (X̄_obt) must have a z-transformed value (z_obt) that falls within the critical region of rejection, that is, a value that falls in the tail:

If |z_obt or t_obt| < |z_crit or t_crit| → retain the null hypothesis.
If |z_obt or t_obt| > |z_crit or t_crit| → reject the null hypothesis.

Review Practice Problem 12.1 on pages 284-285 (two-tailed, non-directional) and Practice Problem 12.2 on pages 285-286 (one-tailed, directional) for examples of hypothesis testing. Refer to the class handout (Hypothesis Testing and Type I and Type II Error) for further discussion.

E. Conditions Under Which the z Test Is Appropriate

The z test (z statistic) is appropriate when the experiment involves a single sample mean (X̄_obt) and the parameters of the null-hypothesis population are known (i.e., when μ and σ are known). To use this test, the sampling distribution of the mean should be normally distributed; this is the mathematical assumption underlying the z test. It requires that N > 30 or that the null-hypothesis population itself be normally distributed.
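Before turning to power, here is a minimal sketch of the full decision procedure for a two-tailed test at α = .05. The numbers (μ = 100, σ = 15, N = 36, X̄_obt = 105.5) are made up for illustration; they are not Practice Problem 12.1 or 12.2.

```python
import math
from statistics import NormalDist

# Hypothetical data, for illustration only
mu = 100.0         # null-hypothesis population mean
sigma = 15.0       # null-hypothesis population standard deviation
N = 36             # sample size
x_bar_obt = 105.5  # obtained sample mean
alpha = 0.05       # two-tailed alpha level

sigma_x_bar = sigma / math.sqrt(N)       # standard error of the mean, sigma / sqrt(N)
z_obt = (x_bar_obt - mu) / sigma_x_bar   # z_obt = (X-bar_obt - mu) / sigma_X-bar

# Two-tailed critical value: alpha/2 of the area in each tail of the normal curve
z_crit = NormalDist().inv_cdf(1 - alpha / 2)

decision = "reject H0" if abs(z_obt) > abs(z_crit) else "retain H0"
print(f"z_obt = {z_obt:.2f}, z_crit = +/-{z_crit:.2f} -> {decision}")
```

With these numbers, z_obt = 2.20 exceeds z_crit = 1.96, so H0 would be rejected.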
F. Power and the z Test

Conceptually, power is the sensitivity of the experiment to detect a real effect of the independent variable, if there is one. Mathematically, power is the probability that the experiment will result in rejecting the null hypothesis if the independent variable has a real effect.

Power + beta (β) = 1.00; thus, power varies inversely with beta (power = 1 − β).

Power varies directly with N: increasing N increases power.

Power varies directly with the size of the real effect of the independent variable: the power of an experiment is greater for large effects than for small effects.

Power varies directly with the alpha level (α): if alpha is made more stringent (more conservative, e.g., from 0.05 to 0.01), power decreases.

The power of a test, broadly, is the ability of a technique, such as a statistical test, to detect relationships or differences. Specifically, it is the probability of rejecting a null hypothesis when it is false and therefore should be rejected (i.e., a correct decision). The power of a test is calculated by subtracting the probability of a Type II error (β) from 1.0. The maximum power a test can have is 1.0 and the minimum is zero; .80 is often considered an acceptable level for a particular test in a particular study. Power is also called statistical power (Vogt, 1999).
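As a rough illustration of how power behaves, the sketch below computes the power of a two-tailed z test for several sample sizes. The numbers (a null-hypothesis mean of 100, σ = 15, a real effect of +5, and α = .05) are hypothetical, not taken from the text or the class handout.

```python
import math
from statistics import NormalDist

def power_two_tailed_z(mu_null, mu_real, sigma, N, alpha):
    """Power of a two-tailed z test: P(reject H0 | the real population mean is mu_real)."""
    se = sigma / math.sqrt(N)                     # standard error of the mean
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)  # two-tailed critical value
    shift = (mu_real - mu_null) / se              # size of the real effect in standard-error units
    z = NormalDist()
    # probability that z_obt lands in either tail of the critical region
    return (1 - z.cdf(z_crit - shift)) + z.cdf(-z_crit - shift)

# Hypothetical numbers: null mean 100, sigma 15, real effect of +5, alpha .05
for N in (16, 36, 64):
    power = power_two_tailed_z(mu_null=100.0, mu_real=105.0, sigma=15.0, N=N, alpha=0.05)
    print(f"N = {N:>2}: power = {power:.2f}, beta = {1 - power:.2f}")
```

With these numbers, power rises from roughly .27 at N = 16 to roughly .76 at N = 64, illustrating that power varies directly with N while beta shrinks accordingly.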
References

Pagano, R. R. (2007). Understanding Statistics in the Behavioral Sciences (8th ed.). Belmont, CA: Wadsworth.

Vogt, W. P. (1999). Dictionary of Statistics & Methodology: A Nontechnical Guide for the Social Sciences (2nd ed.). Thousand Oaks, CA: Sage Publications.