A template for computer-aided diagnostic analyses of test outcome data

Dylan Wiliam
Department of Education and Professional Studies
King's College London
Franklin-Wilkins Building
150 Stamford Street
London SE1 9NN
United Kingdom
dylan.wiliam@kcl.ac.uk
Tel: +44 20 7848 3153
Fax: +44 20 7848 3182

Running head: Computer-aided analysis of test data

Abstract

This paper reports on the development and implementation of a computer package to assist senior and middle managers in schools, and individual classroom teachers, to carry out diagnostic and other analyses of test-outcome data. An initial development phase included the use of a questionnaire sent to a random sample of schools, supplemented by focus groups of principals and other key users in both primary and secondary schools. From the needs identified in this initial phase, a specification for the software was produced, which included a variety of aggregate analyses, as well as differential-item-functioning (DIF) analyses, and diagnostic plots based on Guttman scaling and Sato's Student-Problem (S-P) technique. The software was then developed and piloted in a selection of schools using the outcomes from the national tests in mathematics and science taken by all 14-year-olds in England and Wales. Only one quarter of the schools selected actually chose to try out the software, but almost all of those that did so found the analyses useful, and planned to use the software in subsequent years. The software was made available to all schools in the country in the following year.

Background

From 1992 to 1997 the School Curriculum and Assessment Authority (SCAA) was the government agency responsible for the development of national curriculum tests for 7, 11 and 14-year-olds (the end of each of the first three key stages of education) in England. In January 1995, SCAA commissioned the School of Education at King's College London to investigate ways in which the data from these national tests, and the accompanying assessments made by teachers, could be presented in ways that would be useful to school principals, senior managers and teachers. More specifically, the purpose of the work was to "investigate ways of obtaining and presenting information from the national curriculum tests at key stages 1-3 [...] which will be of use to schools". Because of the open-ended nature of the investigation, it was decided to begin the work with focus-group meetings, rather than interviews or questionnaires, since focus groups are becoming increasingly accepted within educational research as a useful tool in the investigation of ill-structured problems (Vaughn, Schumm and Sinagub, 1996). Because of the very substantial differences in structure, organisation and expertise, separate focus groups were held for practitioners working in primary schools and in secondary schools.

In developing this work, we were aware of a strong tradition of error analyses that have frequently been conducted on the results of particular tests or examinations. These range from the informal Chief Examiner's report, which presents the overall impressions of those responsible for the administration of the assessment, at one extreme, to sophisticated item-response analyses at the other. These analyses can be viewed as combining the richest data on items with the coarsest data on students.
However, these kinds of analyses must be re-designed for each assessment, and are probably beyond the scope of most practitioners to carry out for themselves. We were also aware of work at the other extreme in the tradition of school effectiveness (in effect using coarse data on items combined with rich data on individuals) which provided highly specific and contextualised information about the performance of the school, but which was relatively coarse-grained. Furthermore, the kinds of factors relevant to most value-added measures are factors over which schools have relatively little control. Therefore, in order to bridge the gap between the two extremes, we decided that emphasis should be given to those analyses that were: a) relevant to the particular circumstances of the school, and b) related to factors that were within the school's capacity to influence. The selection of these as priorities was strongly endorsed at the meetings with focus groups of teachers. Of course, the requirement that results are relevant to the particular circumstances of the school does not rule out the use of national normative data; indeed, many results are only meaningful when they are compared with national or other normative data. At first sight, it may appear that there are results that appear to require no normative information for their interpretation. For example, if we discover that a particular student has consistently failed to demonstrate a particular skill in the tests, then we do not need to know the results of her peers to make sense of this data. But of course, this data is only interesting if it is something that we might have expected the student to be able to do: "lurking behind the criterion-referenced evaluation, perhaps even responsible for it, is the norm-referenced evaluation" (Angoff, 1974 p4). In many of the discussions held with teachers, it was clear that teachers were not comfortable with the word "comparison" as a description, and so, both in the questionnaires and the samples of possible analyses, the word has sometimes been avoided. This should not blind us to the fact that, ultimately, all meaningful information has a comparative or normative element. However, in order to keep the logistics of the preparation of the analysis software to a reasonable level of complexity, the only external normative data considered for this project were results from the national cohort (rather than data from the school district, or data from similar schools derived as a result of some matching process).

The development of the analyses

As part of the general preparatory work, a literature search generated many possible approaches to the analysis and presentation of test results, and these were developed and discussed in informal meetings with principals and other senior staff in schools. During these informal discussions, a great deal of concern was expressed by teachers that the kinds of information that the schools would find most useful would also be the most sensitive information. It became very clear that schools would rather forgo information, no matter how potentially useful it might be, if the school did not have total control over who had access to such data. For this reason, the possibility of schools sending data to a bureau of some kind for analysis was considered much less attractive than a software package that would be provided for the school to make use of in whichever way it chose.
Because of the strength of feeling that was evident on this matter, it was decided at an early stage in the development that the "bureau" option was not viable. One immediate benefit of this decision was that the nature of the project became much simpler to describe to teachers. Almost all the concerns and negative reactions that had been elicited in earlier meetings with teachers were obviated when it was made clear that the project brief was, essentially, to devise a specification for software that would be supplied to schools, for the schools to use (or not use!) as they saw fit. Because of the very different data-collection needs of whole-subject score analyses and item-level analyses, it was decided to distinguish clearly between the two kinds of analyses in the development.

Whole-subject analyses

It was clear from informal visits to schools as part of this project that many schools had been developing their own ways of displaying the results of national curriculum tests, principally through the use of spreadsheet packages such as Microsoft's Excel. The most popular methods of presenting results were in the form of bar-charts, and while these have considerable visual impact, drawing inferences from them can be difficult. For example, consider the barchart shown in figure 1, which shows the levels of achievement of 14-year-old students in English, reported on the eight-point scale used for reporting national curriculum test results.

Figure 1: Barchart of English results over three years

This barchart shows a clear improvement over time at levels 7 and 3, a slight decrease at level 6, and a mixed trend at level 4. The proportion of students awarded level 2 has been decreasing steadily, but this is presumably due to increasing numbers of students achieving higher levels. Seeing any consistent trends in data presented in this form is very difficult, and the solution is, of course, to use cumulative frequency graphs. However, traditional cumulative frequency graphs, which show the proportion of students achieving up to a given level, have the disadvantage that a cumulative frequency polygon that represents better overall performance will lie below one showing worse performance. Since almost everyone has a natural tendency to interpret graphs with "higher" meaning "better", such a graph would be misleading, and working out why the lower of two cumulative frequency graphs represents the better performance appears to be conceptually quite difficult. An alternative approach that we developed for the purposes of this project was therefore to draw a reverse cumulative frequency graph, which instead of displaying the proportion achieving no more than a given level (going from 0% to 100%), begins at 100% and discumulates by looking at the proportion achieving at least a given level. Figure 2 displays the same data shown in figure 1 but as a reverse cumulative frequency graph. From figure 2 it is clear that performance at the school has been increasing consistently at all levels, apart from at level 4, where there has been no change.

Figure 2: Reverse cumulative frequency graph of English results over three years

Reverse cumulative frequency graphs therefore give a less misleading display of results than conventional cumulative frequency graphs and bar-charts. However, in view of the widespread existing use of barcharts, it was decided that any software that was eventually produced should be capable of presenting data in both forms (ie reverse cumulative frequency graphs and bar-charts).
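To make the construction concrete, the minimal sketch below (in Python) computes the reverse cumulative percentages that such a graph would plot. The function name and the cohort of levels are invented for illustration; this is not the project software.

```python
# A minimal sketch of the "at least this level" percentages behind a reverse
# cumulative frequency graph. The level data below are invented.
from collections import Counter

def reverse_cumulative_percentages(levels, scale=range(1, 9)):
    """For each level on the reporting scale, return the percentage of
    students achieving at least that level (starts near 100% and falls)."""
    counts = Counter(levels)
    total = len(levels)
    return {level: 100.0 * sum(counts[l] for l in scale if l >= level) / total
            for level in scale}

# Illustrative cohort of levels awarded on the eight-point scale
cohort = [2, 3, 3, 4, 4, 4, 5, 5, 5, 5, 6, 6, 6, 7, 7]
print(reverse_cumulative_percentages(cohort))
```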
In view of their potential to mislead, traditional cumulative frequency graphs should not be supported. The kinds of comparisons of overall results that were identified as of interest were: comparisons of the attainment of a school cohort with the national cohort in a particular subject; comparison of the attainment of a school cohort in a particular subject with that of previous years; comparison of the attainment of the school cohort in different tested subjects; and comparison of the attainment of boys and girls in the school cohort in each tested subject. Comparison of the attainment of boys and girls in particular was frequently mentioned by staff in schools as a priority, but these raw test results are often difficult to interpret. For example, the fact that boys' results in mathematics in a particular school are better than those of girls is not necessarily evidence of bias, but could be caused by differences in the initial achievements of boys and girls going to that school. However, more meaningful comparisons of males and females are possible if the assessment system combines both internal (ie school-derived) and external (eg test-based) components, because we can then focus instead on the differences between external and internal results for females and males. There may not be any reason for us to assume that the males' mathematics results in a school should be the same as those of females, nor to assume that the internal component score should be the same as that on the external component, especially because these two components are generally not measuring exactly the same constructs. However, if there are marked dissimilarities between the external-internal differences for males and females, then this would be a cause for concern. In the same way, we might expect the external-internal difference to be the same across ethnic minorities, across students with and without special needs, and across different teachers, and any marked dissimilarities might suggest fruitful avenues for further investigation by the school. The difficulty with such analyses is that the grade-scales typically used in school settings are quite coarse; it is comparatively rare to find a scale as fine as the eight-level scale used in England. However, these scales are typically the result of the discretisation of an underlying continuous distribution; it is very rare that levels or grades are assumed to relate to qualitative differences between kinds of performance. External components are typically marked on a continuous score scale, with cut-scores being set to determine grades or levels, and where it is possible to treat internal and external components as approximately continuous, more interesting analyses become available. Figure 3 displays the result of a comparison of external and internal components scored on a continuous scale by means of a dot plot. Such displays give a clear idea of the range in the data, and were popular with users because of their immediate impact, but the actual extent of the dispersion, and central tendency (eg mean, median) are more difficult to discern. A better method of displaying the data is the box plot (Tukey, 1977) shown in figure 4. In a boxplot, the box itself covers those data between the 25th and 75th percentiles and the line in the middle of the box represents the median. The whiskers are designed to include about 99% of the data (where the data is normally distributed), and those values beyond the whiskers are shown as individual outliers.
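As an illustration of the statistics behind such a display, the short sketch below computes the median, quartiles and Tukey whisker limits for internal-minus-external differences in two groups. The data are invented, and the helper function is an assumption made for the example, not part of the project software.

```python
# A minimal sketch of the summary statistics behind a Tukey box plot of
# internal - external score differences for females and males (invented data).
import statistics

def boxplot_summary(values):
    """Median, quartiles, whisker limits (1.5 x IQR beyond the box, which for
    normally distributed data covers roughly 99% of values) and outliers."""
    q1, median, q3 = statistics.quantiles(values, n=4)
    iqr = q3 - q1
    low_fence, high_fence = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    whiskers = (min(v for v in values if v >= low_fence),
                max(v for v in values if v <= high_fence))
    outliers = [v for v in values if v < low_fence or v > high_fence]
    return {"median": median, "q1": q1, "q3": q3,
            "whiskers": whiskers, "outliers": outliers}

# internal - external differences (hypothetical)
females = [-4, -2, -1, 0, 1, 1, 2, 3, 3, 5, 12]
males   = [-6, -3, -2, -1, 0, 0, 1, 2, 2, 4]
for label, diffs in (("females", females), ("males", males)):
    print(label, boxplot_summary(diffs))
```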
Although many people find them difficult to interpret at first, once the conventions of the representation are understood, they provide a wealth of information rapidly.

Figure 3: Dotplot of the difference between internal score and external score (internal-external) in mathematics for females and males

Figure 4: Boxplot of the difference between internal score and external score (internal-external) in mathematics for females and males

Side-by-side dotplots and boxplots such as those shown in figures 3 and 4 are not, of course, restricted to two categories, and there is considerable scope for schools to devise their own analyses.

Comparisons between subjects and classes

One of the analyses requested by senior managers was the ability to compare results between subjects, despite the difficulties in interpreting such comparisons (see, for example, Wood, 1976/1987). For example, in 1994 at key stage 1, the average English level across the country was slightly higher than the average mathematics level, while at key stage 3, the average English level was slightly lower than the average mathematics level. Nevertheless, we would expect the differences between (say) the mathematics level and (say) the English level to be comparable across different classes. At first sight this would appear to offer a potentially useful management tool for evaluating the relative effectiveness of teachers, at least in secondary schools. If the levels in mathematics were, on average, across the year-group, 0.2 of a level higher than those for English, but for one particular mathematics class were (say) 0.5 higher, this would suggest that the mathematics teacher had been particularly effective. In order to investigate the feasibility of such analyses, a variety of simulations were conducted, in which a simulated school cohort of 120 14-year-old students with true levels in English, mathematics and science confounded with teaching-group effects was generated. The mathematics and science groups were assumed to be taught in homogeneous ability classes (sets) while the English classes were taught in mixed-ability classes (the prevailing model in English schools at present). The resulting pattern of set allocation, with most students taught in the same set for science as for mathematics, is consistent with other research on setting in schools (see, for example, Abraham, 1989). The data was modelled so as to give distributions of attainment as found in the 1994 national tests for 14-year-olds (Department for Education, 1994), and inter-subject correlations as found in the 1991 national pilot of national curriculum assessments (rEM = 0.67, rES = 0.71, rMS = 0.78). The students were allocated to teaching groups for mathematics and science based on a test with a reliability of 0.8 and were assumed to be taught in mixed-ability classes for English. Teacher effects were then built in, equating to effect sizes (Hunter & Schmidt, 1991) of up to 0.7 of a standard deviation (these are extremely large effects, and much larger than would be expected in any real teaching situation). Despite the unrealistically large teacher effects used in the simulations, it was impossible to recover statistically significant absolute (as opposed to relative) teacher effects, due largely to the extent of the overlap in set allocation between science and mathematics. Given the apparently insuperable technical difficulties of such analyses, combined with their political sensitivity, no further work was done on these types of comparisons.
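The original simulation code is not reproduced here; the sketch below is merely an illustrative re-creation of the kind of confounding described, under assumed choices (random seed, four sets, a 0.7 SD effect added to one arbitrarily chosen set).

```python
# An illustrative sketch (not the original simulation): correlated true scores,
# set allocation from a test of reliability 0.8, and a deliberately large
# teacher effect added to one mathematics set.
import numpy as np

rng = np.random.default_rng(1)
n = 120

# Inter-subject correlations as reported for the 1991 national pilot
corr = np.array([[1.00, 0.67, 0.71],   # English
                 [0.67, 1.00, 0.78],   # mathematics
                 [0.71, 0.78, 1.00]])  # science
true = rng.multivariate_normal([0.0, 0.0, 0.0], corr, size=n)

# Setting test with reliability 0.8: observed = sqrt(0.8)*true + sqrt(0.2)*error
setting = np.sqrt(0.8) * true[:, 1] + np.sqrt(0.2) * rng.standard_normal(n)
sets = np.digitize(setting, np.quantile(setting, [0.25, 0.5, 0.75]))

# A 0.7 SD teacher effect for the (assumed) mathematics teacher of set 2
outcome = true[:, 1] + 0.7 * (sets == 2)

# The between-set differences created by setting dwarf the teacher effect,
# which illustrates why absolute teacher effects could not be recovered
for s in range(4):
    print("set", s, "mean mathematics outcome:", round(outcome[sets == s].mean(), 2))
```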
Analyses involving sub-domain scores

All the analyses discussed so far have used whole-subject scores. Finer distinctions in the test levels essentially involve collecting data on individual items, although these items can often be grouped to form sub-scales, relating to sub-domains of the original domains (in the way that mathematics can be divided into arithmetic, algebra, geometry, etc., or English can be sub-divided into speaking, listening, writing, and reading). Distributions of raw attainment target levels for different classes in the school can provide some useful information, but if widespread use is made of ability grouping (as is the case in England and Wales, particularly in mathematics and science), comparisons between classes can be difficult to interpret. If, however, the differences between the overall domain score and the score in each sub-domain (normalised, if necessary, to be on the same scale) are examined, then potentially much more revealing information is provided. For example, figure 5 below shows a situation in which, for sub-domain 5 (in this particular case, statistics and probability), the levels achieved in each teaching group are at (in the case of group 1) or below the overall domain score. If this were an internal component, this could mean that the requirements for the sub-domain are being interpreted too harshly, or it could mean that the performance of students in this sub-domain is below what might be expected, given their performance in the other domains. For this particular school, therefore, it might be profitable to investigate how (if at all) their teaching of and standards in this topic differ from the others. A complementary display of the same data (figure 6) presents the results for different sub-domains together, for each teaching group, which draws attention to the fact that (for example) attainment in sub-domain 3 is relatively high in group 1 and relatively low in group 4, but that the reverse is true for sub-domain 4. Such data may indicate where in their schools teachers might look for advice about teaching certain topics.

Summary results for individual students

A concern voiced particularly strongly by primary school teachers was that some form of adjustment for the age of a student should be possible. Many instances were cited of children who achieved a standard below that of their class but who were a year or two younger than the rest of the class. Teachers felt that just to report this, without acknowledging that it was a satisfactory achievement given the student's age, could create a misleading impression. However, if, as is increasingly the case around the world, an assessment system is intended to support criterion-referenced inferences, it would be inappropriate to adjust individual grades, since the grades are presumably meant to describe achievement rather than potential. An alternative is to use grades or levels for reporting achievement, but to report, alongside the level achieved, an age-equivalent score. Conceptually, this is quite straightforward, and there exists a well-developed technology for the production of age norms for standardising tests. However, as Wiliam (1992) showed, tests vary quite markedly in the extent to which the spread of achievement within the cohort increases with age, and this variation can be marked even for different tests of the same subject.
For example, in the Suffolk Wide-Span Reading Test, the standard deviation of achievement is about three years for students from the ages of 7 to 12, whereas in other tests, the standard deviation increased reasonably steadily with age. In some tests, the standard deviation of the attainment age was one-fifth of the chronological age, and in others it was more than one-third of the chronological age (Wiliam, 1992).

Figure 5: Analysis of sub-domain score with domain score by sub-domain and by teaching group

Figure 6: Analysis of sub-domain score with domain score by teaching group and by sub-domain

The sensitivity of these kinds of analyses is confirmed in the evidence from a study undertaken by NFER (1995), which investigated standardised scores at KS1 for the levels 2-3 mathematics test, the levels 2-3 spelling test and the level 3 reading comprehension test. In the level 3 reading comprehension test, the standard deviation corresponds to about two years' development for the average child, while in the levels 2-3 mathematics test, it was approximately 1.5 years, and in the spelling test, only one year. Furthermore, as Schulz and Nicewander (1997) have shown, even the presence of increasing-variance effects can be the result of the metrics used. The volatility of these results suggests that if standardised scores are to be provided for open tests, they will need to be derived anew for each new test.

Item analyses

None of the analyses discussed so far require candidates' scores on individual items to be entered into a computer. All the foregoing analyses can therefore be conducted with very little extra effort, but have little diagnostic value, which was identified as a priority by many teachers during informal consultations. If individual item scores are available for each individual, then a considerable range of analyses with formative and diagnostic value are possible. Where items are marked as either right or wrong (dichotomous items), the most common item analysis has been a straightforward facility index for each item: an indication of the proportion of students getting the item correct. Where facility indices are available both for the school (or class) and the national cohort, teachers can compare their classes' performance across different items with those of the national cohort. Unfortunately, facilities become very difficult to interpret where there is a selective entry to a test. If the facility in the national cohort of a particular item is 50% and a particular school finds that its cohort has a facility for that same item of only 30%, then one cannot necessarily conclude that the school needs to pay more attention to that particular topic. It could merely signify that the school was more lenient in its entry policy, so that the students taking the test in that school were from a broader range of achievement than in the national cohort. This variability of sampling is, of course, also why pass rates are almost meaningless as measures of academic achievement, telling us much more about entry policies than the candidates' abilities. There are many solutions to this difficulty. One is to make assumptions about the rate at which students not entered would have answered items correctly if they had taken them. This process of imputing an individual's performance can be done on the basis of a theoretical model, such as item-response modelling, or by the use of empirical techniques which use anchor items from the overlapping levels.
Whatever approach is used, there will be difficulties in the interpretation of the data, and therefore, it appears more prudent to use a relatively straightforward method, and to urge caution in the interpretation of its results. While in the majority of tests, items are marked dichotomously, many constructed-response tests are scored polytomously, with anything from 2 marks (ie 0, 1 or 2) to twenty or thirty marks being awarded. Analyses that are appropriate only for dichotomously scored items can generally be used with such items only by setting a threshold score for a polytomous item, and then regarding students achieving this threshold as getting the item "right", and those who do not as getting it "wrong". However, it is important to note that when such dichotomising procedures are used, the proportion of correct answers does not relate in any simple way to the traditional notion of facility, and the actual patterns derived would depend on how the threshold score was set. Where the number of marks per item is small (no more than 5), this approach can yield quite meaningful data, but as the mark-scale becomes longer (and thus the number of items becomes smaller) this approach is less and less satisfactory.

Differential item functioning analyses

As noted earlier, the fact that males have higher marks than females in a particular school is not necessarily evidence of bias. For the same reason, if the facility index for a particular item in a test for the males in a particular school is higher than that for females, it does not necessarily mean that the item is biased against females. It could be that the boys know more about that domain. To distinguish the two cases, it is conventional to use the term "differential item functioning" to describe a situation where the facility indices for two groups are different on a particular item, and to reserve the term "bias" for those situations where groups of students who are equally good at the skill that the question is meant to measure get different marks (see Holland & Wainer, 1993, for a definitive treatment). Consider an item from a science test which is answered correctly by 120 out of the sample of 158, and on which the overall performance of the students on the test was graded from level 3 to level 6. Of the 158 candidates, 53 were awarded level 6, 36 were awarded level 5, 40 were awarded level 4 and 29 were awarded level 3. We can go further and classify those getting the answer wrong and right at each level according to whether they are male or female. This gives the 2×2×4 contingency table shown in figure 7. As can be seen, there is an unsurprising tendency for students awarded the higher levels to be more likely to answer the item correctly. However, there is also a marked tendency at each level for girls to do less well than boys. Because we are looking at this data level by level, we can (largely) discount explanations such as a predominance of higher-achieving boys in the sample. Moreover, a statistical test of this pattern (Mantel-Haenszel, see Holland & Wainer, op cit) shows the difference between boys and girls to be highly significant (p<.001).

Figure 7: cross-tabulation of females' and males' responses to an item on a science test

Of course, by themselves, such findings are of more interest to item developers than to schools.
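For readers who wish to see the kind of computation involved, the sketch below implements the standard Mantel-Haenszel chi-square for a set of level-by-level 2×2 tables. The counts are invented for illustration and do not reproduce the figure 7 data.

```python
# A minimal sketch of the Mantel-Haenszel statistic used for DIF screening.
# Each stratum is one awarded level, holding a 2x2 table of
# (reference group, focal group) x (correct, incorrect). Counts are invented.

def mantel_haenszel_chi2(strata):
    """strata: list of [[a, b], [c, d]] tables, where row 0 = reference group
    (eg boys), row 1 = focal group (eg girls), column 0 = correct."""
    sum_a, sum_e, sum_v = 0.0, 0.0, 0.0
    for (a, b), (c, d) in strata:
        t = a + b + c + d
        row1, row2 = a + b, c + d
        col1, col2 = a + c, b + d
        sum_a += a
        sum_e += row1 * col1 / t                               # expected a
        sum_v += row1 * row2 * col1 * col2 / (t * t * (t - 1))  # variance of a
    # continuity-corrected chi-square with 1 degree of freedom
    return (abs(sum_a - sum_e) - 0.5) ** 2 / sum_v

tables = [
    [[10,  5], [ 4, 10]],   # level 3 (invented counts)
    [[15,  6], [ 9, 10]],   # level 4
    [[14,  3], [12,  7]],   # level 5
    [[25,  2], [22,  4]],   # level 6
]
print(round(mantel_haenszel_chi2(tables), 2))
```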
However, if the extent of differential item functioning within the national cohort were also given, schools would be able to investigate whether the patterns in their school were representative of the national trends or not. Even where the patterns are consistent with the national picture, schools may still find it unacceptable that females or particular ethnic minorities do systematically less well on certain items, and want to investigate possible courses of action. However, given the sensitivity and technical difficulties of producing national normative data for ethnic minorities, it seems prudent that national norms provided for comparative purposes should be restricted to proportions of males and females answering each item correctly in the national cohort, stratified by overall level awarded.

Diagnostic analysis for individual students

The item analyses discussed above look at performance on individual items by cohorts of students, and the whole-subject-score analyses described earlier can be thought of as the performance of individual students on cohorts of items. Much richer analyses are possible if we examine individual item-responses for individual students. The detailed pattern of item-responses for each student creates what might be regarded as an agenda for future action by the teacher, but the strength of this data (that it is related to an individual student) is also its weakness. If the response patterns of all students are the same, then we just end up with a copy of the facility indices, and if the response patterns of the students are different, then this suggests a different course of action needs to be taken by the teacher for each student, which is clearly impractical in most teaching situations. Fortunately, it is rarely the case that every student's response pattern is different from every other response pattern: teaching is challenging because students are so different, but it is only possible because they are so similar! One of the most important regularities in students' responses is that on well-designed tests, their responses can often be said to scale. The essence of scaling is that we expect that students who answer (say) exactly three items correctly on a six-item test would get the easiest three items correct. Figure 8 gives a hypothetical set of item responses by 20 candidates to a six-item test. The columns (items) have been arranged in increasing order of item difficulty (as found by the students), and the rows (students) have been arranged in decreasing order of total score (this is quite easily achieved with a spreadsheet package such as Excel).

Figure 8: hypothetical pattern of responses by 20 students on a six-item test

Guttman (1944) defined an ideal response pattern for items as one in which, apart from in the marginal entries, all ones have ones above them and to their left, and all zeroes have zeroes below them and to their right. In other words, it would be possible to draw a (not necessarily straight) contour-line across the matrix so that all the entries above and to the left of the line were ones, and all the entries below and to the right of the line were zeroes. The example in figure 8 has many departures from perfect scalability (termed "errors" by Guttman). A natural approach to analysing the degree of response consistency is to establish the proportion of errors. However, this is problematic, because there are different ways of defining, and thus rectifying, the errors.
For example, student 6 above can be regarded as having two errors (the responses to items 4 and 5) or just one (the response to item 6). Alternatively, the errors can be counted by assuming that the student's total is correct, so that student six, with a total score of four, should have answered only the easiest four items correctly (this was the approach adopted by Guttman). Broadly, the approaches to defining the degree of scalability are of three types: those that analyse scalability in terms of item behaviour (ie looking vertically); those that analyse scalability in terms of student behaviour (ie looking horizontally); and those that analyse scalability both in terms of item behaviour and in terms of student behaviour. Taking Guttman's approach first, we can see that students 1, 2, 3, 9, 12, 13, 15, 16, 17, 18, 19 and 20 in figure 8 would all be regarded as responding consistently, answering questions correctly "from the bottom up", while students 4, 5, 6, 7, 8, 10, 11 and 14 each contribute two errors. Guttman defined a response matrix as being reproducible to the extent that the patterns of responses could be determined from students' scores or the items' difficulties. The coefficient of reproducibility is the proportion of error-free responses in the matrix, so that

REP = 1 - e/(nk)     (1)

where e is the total number of errors in the table, n is the number of students and k is the number of items. Here the value of the coefficient of reproducibility is 0.87 (ie 104/120). However, while this coefficient of reproducibility reaches unity when there are no errors, the lower bound is not zero. In fact an upper bound for the number of errors is given by summing over all items the smaller of the number of zeroes and the number of ones in the column for that item. In our example, the maximum number of errors is therefore 5 + 9 + 10 + 9 + 6 + 2 = 41, so that REP cannot be less than 0.66 (ie 79/120). Menzel (1953) defined the coefficient of scalability as the ratio of actual improvement over minimum scalability, compared to the maximum possible improvement. Using the definitions of REP and Min[REP], this simplifies to

coefficient of scalability = (REP - Min[REP]) / (1 - Min[REP])     (2)

so that the table above has a coefficient of scalability of (41 - 16)/41 = 0.61. Sirotnik (1987 p51) suggests as guidelines that values of at least 0.9 for the coefficient of reproducibility and 0.6 for the coefficient of scalability are required before one can infer reasonable scalability. The vertical and horizontal perspectives described above are combined in the Student-Problem or S-P technique developed by Sato and others in Japan (McArthur, 1987). The essence of the S-P technique is that two curves are superimposed on the ordered student-by-item matrix, the S-curve corresponding to the expected ideal scaled pattern of responses based on the students' scores, and the P-curve corresponding to the expected ideal scaled pattern of responses based on the item scores or facilities. The two curves, superimposed on the matrix from figure 8, are shown in figure 9. Intuitively, the departure from perfect scalability is represented by the area between the two curves in the S-P technique, while the Guttman approach is simply to total the number of zeroes above and ones below the S-curve. A variety of indices have been proposed for evaluating the extent of the departure from scalability, and these are discussed by Harnisch and Linn (1981), but most of the proposed indices are rather complex, and are likely to be of more interest to item developers than teachers.
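Both coefficients can be computed directly from a dichotomous student-by-item matrix. The sketch below follows the definitions in equations (1) and (2), counting errors in Guttman's way against each student's total score; the small matrix is invented rather than the figure 8 data.

```python
# A minimal sketch of the coefficients of reproducibility and scalability for
# a 0/1 student-by-item matrix whose columns run from easiest to hardest.
# The matrix below is invented for illustration.

def guttman_errors(row):
    """Errors for one student: given the student's total t, the ideal pattern
    is 1s on the t easiest items and 0s elsewhere; count the mismatches."""
    t = sum(row)
    ideal = [1] * t + [0] * (len(row) - t)
    return sum(1 for actual, expected in zip(row, ideal) if actual != expected)

def scalability(matrix):
    n, k = len(matrix), len(matrix[0])
    e = sum(guttman_errors(row) for row in matrix)
    rep = 1 - e / (n * k)                                    # equation (1)
    e_max = sum(min(col.count(0), col.count(1)) for col in zip(*matrix))
    min_rep = 1 - e_max / (n * k)
    coeff = (rep - min_rep) / (1 - min_rep)                  # equation (2)
    return rep, min_rep, coeff

matrix = [
    [1, 1, 1, 1, 0, 0],
    [1, 1, 1, 0, 1, 0],
    [1, 1, 0, 1, 0, 0],
    [1, 0, 1, 0, 0, 1],
    [1, 1, 0, 0, 0, 0],
    [0, 1, 0, 0, 0, 0],
]
print(scalability(matrix))
```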
As Haladyna (1994 p169) notes, a simple count of zeroes to the left of the S curve (or, equivalently, of ones to the right) provides an adequate means for identifying students with unusual response patterns. This technique can be illustrated by applying it to sample data collected in the development of the 1995 national curriculum science test for 14-year-olds.

Figure 9: hypothetical pattern of responses showing S and P curves

The data-set consists of the scores of 159 students on 19 constructed-response questions, some of which are polytomously scored. In order to get a matrix of dichotomous scores, students were regarded as having achieved the item if they attained a threshold score that had been determined by subject experts as indicating a minimal acceptable response to the item (in practice this was, in most cases, between one-third and one-half of the available marks). The rows of the matrix were arranged in decreasing order of student total score, and the columns were arranged in increasing order of item difficulty. The coefficient of reproducibility was found to be 0.92 and the coefficient of scalability was 0.64, indicating a satisfactory degree of scaling. Out of the total of 159 students, only 11 had the idealised response pattern, 70 had one non-scaling response and 63 had two. Just 13 students had three non-scaling responses and only two students had four. For this dataset there is an obvious choice of a dividing line between those whose responses can be considered to scale and those whose responses do not: those with three or more non-scaling responses can be regarded as unusual and those with two or fewer would not. However, in general, it is not possible to set a pre-defined proportion as defining students whose performance deserves particular attention. This will, in part, depend on how well the test scales, and how the marks of the candidates are distributed (candidates with extremely high or low marks will tend to have fewer non-scaling responses for obvious reasons). However, it will also depend on how much time the teacher is prepared to devote to investigating and to remedying specific student difficulties. A study of junior school teachers in Germany found that just knowing about students' strengths and weaknesses was not associated with higher achievement unless the teacher had a range of teaching strategies with which to act on this information (Helmke & Schrader, 1987). In other words, there is no point generating data unless you can do something with it. For this reason, it is important that any software first tallies the frequencies of the numbers of non-scaling responses, and presents this to the user, before asking for what proportion of students the diagnostic data should be produced. The important point about such analyses is that they focus attention on students whose responses are unusual. The students whose response patterns scale have learnt skills and knowledge in the same order as the majority of the rest of the class or the school. For them, the priority is to move on to the more difficult material. However, the students with non-scaling response patterns are getting wrong items which peers of similar overall attainment are getting right. This may be for a variety of reasons (eg because they have idiosyncratic preferred learning styles), but their response patterns indicate clearly that it is unwise to treat them like the other members of the cohort, and special attention and remediation is likely to be needed.
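A minimal sketch of such a tally is given below, using an invented response matrix: for each student the number of unexpected correct responses is counted, and the frequency distribution of these counts is what the software would present before asking for how many students diagnostic output should be produced.

```python
# A minimal sketch of the non-scaling-response tally described above.
# Items are assumed to be ordered from easiest to hardest; data are invented.
from collections import Counter

def non_scaling_count(row):
    """Unexpected correct answers: 1s among the items ranked harder than the
    student's total score (equivalently, unexpected wrong answers below it)."""
    t = sum(row)
    return sum(row[t:])

def tally(matrix):
    """Frequency table of non-scaling counts across the cohort."""
    return Counter(non_scaling_count(row) for row in matrix)

matrix = [
    [1, 1, 1, 1, 0, 0],   # scales perfectly: 0 non-scaling responses
    [1, 1, 0, 1, 0, 1],   # 1 non-scaling response
    [1, 0, 0, 1, 1, 0],   # 2 non-scaling responses
    [0, 1, 0, 0, 1, 1],   # 2 non-scaling responses
]
print(sorted(tally(matrix).items()))
```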
Once the teacher has decided the number of students for whom diagnostic information is to be produced, there are a variety of formats in which it could be produced. The simplest format would be for the items with aberrant responses to be printed out:

The following students had four aberrant responses:

Pupil             Unexpected wrong answers    Unexpected right answers
Colin MacDonald   3, 5, 6, 12                 13, 14, 18, 19
Jane Smith        7, 8, 9, 11                 12, 16, 17, 18

However, this display does not provide information about how easy the wrongly answered items were, nor how difficult the unexpectedly correctly-answered items were. Providing this is the idea behind the plots developed by Masters & Evans (1986), where a vertical axis represents increasing difficulty, and incorrect items are marked on one side and correct items marked on the other. While giving information about the extent to which a student has correctly answered difficult questions, and failed to answer correctly easy questions, such a plot does not provide detailed diagnostic information, particularly as to the nature of any specific weaknesses. This can be partly addressed if we can make use of any sub-domains that can be identified. If we assume for the purposes of illustration that questions 7, 9, 10, 15, 16, 17, 18 relate to sub-domain 1, questions 1, 2, and 3 relate to sub-domain 2 and questions 3, 4, 5, 6, 8, 11, 12, 13, 14, 19 relate to sub-domain 3, then we can introduce a second dimension to the plot developed by Masters and Evans, which we might term a two-way diagnostic plot, an example of which, for one of the two students discussed above (Colin MacDonald), is shown in figure 10. In this plot the vertical axis represents increasing difficulty. This could be measured using item response parameters, but because the emphasis is on students whose responses are different from the rest of the class or year-group, a local measure of difficulty would be more appropriate. For this reason, the simple facility for the cohort within the school is used as a measure of difficulty. The S-P technique is quite robust for small numbers of students, and since statistical significance is not required, there should be no difficulty in identifying unusual response patterns in groups as small as ten or fifteen students. Item 10 (which we are assuming was related to sub-domain 1) was correctly answered by 97% of the sample, and is therefore marked almost at the bottom of the plot, in the column for sub-domain 1. Item 6 (sub-domain 3) was slightly harder, being answered correctly by 90% of the sample, and so it is shown one-tenth of the way up the difficulty axis. All 19 items are plotted in this way (this part of the plot will be the same for all candidates in the school), and, for Colin's diagnostic plot, the horizontal line is drawn above the 14 easiest items, because Colin got 14 items correct in total. A perfectly-scaling response would entail answering all items below the line correctly, and all items above the line incorrectly. Because we are interested here in departures from perfect scaling, we place a box around any items below the line answered incorrectly, and any items above the line answered correctly (there will, of course, be an equal number of each). As can be seen, Colin's responses show that he has answered incorrectly three items from sub-domain 3 that his peers found relatively easy, while he answered correctly three items from the same sub-domain that his peers found difficult.
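The sketch below assembles the data that such a plot needs, using invented facilities, sub-domain assignments and responses rather than Colin's actual data: each item's difficulty is taken as one minus its within-school facility, the cut line is set by the student's total, and departures from perfect scaling are flagged for boxing.

```python
# A minimal sketch of the data behind a two-way diagnostic plot.
# All facilities, sub-domain assignments and responses below are invented.

def diagnostic_plot_data(facilities, subdomains, responses):
    """Return one record per item: sub-domain column, difficulty (vertical
    position), whether the student answered it correctly, and whether it
    should be boxed as a departure from perfect scaling."""
    order = sorted(facilities, key=lambda i: -facilities[i])  # easiest first
    total = sum(responses.values())
    below_line = set(order[:total])       # items this student "should" get right
    return [{"item": i,
             "subdomain": subdomains[i],
             "difficulty": 1 - facilities[i],
             "correct": bool(responses[i]),
             "boxed": (i in below_line) != bool(responses[i])}
            for i in order]

# Illustrative six-item test and one student (total score 4)
facilities = {1: 0.97, 2: 0.90, 3: 0.75, 4: 0.60, 5: 0.40, 6: 0.20}
subdomains = {1: 1, 2: 3, 3: 1, 4: 2, 5: 3, 6: 3}
responses  = {1: 1, 2: 0, 3: 1, 4: 1, 5: 0, 6: 1}
for record in diagnostic_plot_data(facilities, subdomains, responses):
    print(record)
```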
If the teacher of this class has any time to deal with any students on an individual basis, this analysis suggests that Colin should be a priority, and that sub-domain 3 would be a good place to start.

Figure 10: two-way diagnostic plot for Colin MacDonald

The diagnostic analyses developed above are applicable only to dichotomously scored items, although, as the example above shows, by setting a threshold score, polytomous data can be made to fit, provided the number of marks per item is relatively small and the number of items relatively large. Such an approach is not, however, applicable to subjects (like English) where a paper might consist of a small number of polytomous items. With such small numbers of items, departures from expected patterns of performance have to be very large to show up. For example, a pilot version of a national curriculum test of English Literature for 14-year-olds was taken by 107 students, with each student's attempt resulting in 3 scores, one on a scale of 0-25, one on a scale of 0-10 and one on a scale of 0-5, with the average for each scale being around 10, 4 and 3 respectively. If we assume that whatever total score a student achieved, the marks should remain in this ratio, then we can establish an expected profile given the total score, much as we did for the dichotomous data in the science test above. An analysis of the student profiles shows that of the 107 students, 14 had a profile which differed significantly from the expected profile on at least one component, two differed on two components and only one differed on all three. However, since these analyses are at best indicative, rather than statistically rigorous, it seems that, by analogy with the diagnostic plots for dichotomous data, it would be appropriate to ask the user to specify the numbers of unexpected or non-scaling profiles to be printed out. Rather than a diagnostic plot, the printout in this case would take the form of an expected profile and an actual profile for the selected students.

The development and administration of the questionnaire

Having developed a range of potential forms of analyses, and a structure for their presentation, a preliminary questionnaire was developed. As work proceeded, it became clear that much of the questionnaire would be taken up with explaining what the analyses could do, just so that the reactions of respondents could be sought. This made the questionnaire a very long document, and somewhat forbidding, which could be expected to reduce the response rate. The decision was therefore taken to split the questionnaire into a questionnaire of 4 pages, which made reference to a much larger (28-page) booklet of samples of possible printouts. Care was taken to produce the questionnaire and the sample booklet to a relatively complete stage, since incomplete samples can often elicit adverse reactions. Separate questionnaires and sample booklets were produced for primary and secondary schools, not because the analyses proposed were different, but because it was felt to be important that the samples that respondents encountered in the sample booklets were seen to be relevant to their needs. The major purpose of the questionnaire was to determine which of the possible forms of analysis schools would find useful. Experience with questionnaires suggests that if respondents are asked "Which of the following would you find useful?", all possible responses are likely to be ticked.
For this reason, respondents were asked to express their preferences by ranking the available options, even though such ranked data is less straightforward to analyse. The questionnaires and sample booklets were field-tested at two consultation meetings (one for primary schools and one for secondary schools). A total of 15 key users of assessment information (headteachers, deputy headteachers and senior teachers) were invited to each meeting. Since part of the function of the meeting was to discuss how the results of national curriculum assessments might be used in the future, it was not felt appropriate to attempt to gather a representative sample of users, but instead to focus on the "leading edge". Participants were therefore invited on personal recommendations from local authority inspectors and advisers as being people who were aware of current developments in national curriculum assessment, and who had already given thought to some of the issues involved. Due to last-minute difficulties, only 8 of those invited were able to attend the secondary meeting, although 13 attended the primary school meeting. Participants at these meetings were sent copies of the draft questionnaire and sample booklet beforehand, and the meetings allowed their reactions to the different kinds of analyses to be probed. As a result of the consultation meetings, several changes were made to both the sample booklets and the questionnaires, which were then mailed out at the end of June to a random sample of 300 primary and 200 secondary schools. In all, 131 questionnaires were received by the cut-off date: 68 from the 300 sent out to primary schools (23%) and 61 from the 200 sent to secondary schools (31%), and although the response rate was disappointing, the responses were well-balanced in terms of size and type of school. The full details of the responses to the questionnaires can be found in Wiliam (1995); here only a summary of the results is given. Two of the respondents (one secondary and one primary) indicated that they found no value in the exercise at all, leaving all responses blank, and stating, in effect, that they saw no value in any analyses of national curriculum tests, since the tests had no validity. All the other schools who replied made some positive responses, although many schools saw little value in some of the analyses offered. Furthermore, the generally low response rate for the whole questionnaire (and the significantly lower response rate from primary schools) makes generalisations to the whole population somewhat speculative. One interpretation of the poor response rate is that only a quarter of schools are interested in obtaining such information from national curriculum tests, tasks and teacher assessment. However, even on this pessimistic assumption, one thousand secondary schools, and five thousand primary schools, would be prepared to make some use of such software were it to be produced, suggesting that substantial investment in the development of such software was warranted. In both primary and secondary schools, diagnostic information was regarded as the most important kind of information. Looking at the performance of individuals within a class, comparing one subject's results with another, and comparing results from different years were next in priority (with slightly higher emphasis given by primaries than secondaries in all cases).
The least popular kind of information was that which compared the performance of different classes, which is perhaps not surprising in primary schools, where there is typically only one class per cohort, but is a potentially significant finding for secondary schools. In comparisons of different subjects, respondents were evenly split between those who preferred scores to be adjusted to take into account national variations in performance, and those who preferred raw scores, suggesting that both should be made available. In comparisons of classes, tabular representations were more popular than boxplots, which were, in turn, more popular than dotplots. In fact, dotplots gained so little support that serious consideration was given to omitting them from the software specification. The adjustment of individuals' scores produced, not surprisingly, clear differences between secondary and primary schools. All schools emphasised the need to be able to control for prior attainment, with age also considered important. After this, however, primary schools emphasised the length of schooling, while secondary schools emphasised sex. There was little support for controlling for season of birth or ethnicity. Of the item-level analyses, the diagnostic plots were significantly favoured over the differential-item-functioning (DIF) analyses in secondary schools, with opinion in primary schools evenly divided. Differences between the school and the national cohort were preferred, with sex coming a close second in secondary schools, and a rather more distant second in primary schools. There was also some support for differences between classes, but little interest in investigating differential effects for special needs, ethnicity, or bilingualism. Finally, and perhaps surprisingly, over one-fifth of the schools (admittedly only one-twentieth of those mailed), and over a quarter of the primary schools who responded, would be prepared to key in the item-level data themselves, in order to produce the diagnostic and DIF analyses. Taken as a whole, the responses to the questionnaire gave clear support to the idea of developing software to analyse and present the results of national curriculum assessments, and identified which options would be useful to schools and which would not. The major outstanding issue was how the data was to be transferred into a machine-readable form. If the analysis is restricted to whole-subject scores, then the data entry is straightforward, but the kinds of analysis possible are quite limited. However, the clear message from the questionnaire responses was that the most valued function that analyses of national test data could fulfil was the provision of diagnostic information on individual students, which requires item-level data, and it was apparent that only a small minority of schools would be prepared to enter this data themselves. Where the tests are multiple-choice tests, then capturing the necessary item-level data is straightforward, but where constructed-response questions predominate (as is the case in national curriculum testing in England and Wales), the cost of entering this data will be significant, whether it is done by schools or by external markers, irrespective of the method used. However, the focus groups and the questionnaire did suggest a clear specification for what such a software package should be able to do. A summary of the kinds of analyses is given in the appendix.
Piloting the software

Early in 1996, a contract was awarded to the commercial software developer SIMS to develop and pilot a version of the specified software for use with the mathematics and science tests for 14-year-olds. At the same time, King's College London's School of Education was asked to undertake an evaluation of the software pilot. Although most of the features of the software were the same as in the specifications described above and in the appendix, there were some minor differences. Details can be found in Wiliam (1996). The sample of schools was drawn by identifying five local education authorities (LEAs) for each subject and then asking advisers in each LEA to nominate schools that would be able to participate in the project. The planned sample was for 80 schools, 40 for mathematics and 40 for science. In addition, it was planned that half the schools would receive optical-mark reader (OMR) forms onto which their students' item responses had already been coded, while the other half would have to arrange for the OMR forms to be coded themselves (in most cases this was done by the students themselves). The final sample was 81 schools: 40 in mathematics and 41 in science. The software, together with an associated user manual, was dispatched to the schools, timed in order to arrive at the same time as the students' scripts were returned towards the end of June. In addition, a Statistics and Data Presentation Guide (Wiliam, 1996), prepared for the project by King's College London, was included in the mailing to schools, in order to assist in the interpretation of the various kinds of analysis available within the software. In the week beginning 22 July 1996, approximately one quarter of the schools (20 schools in all) were contacted by telephone in order to determine the extent of their progress. At this stage none of the schools contacted had progressed as far as using the software to analyse their own results, although many schools indicated that they were intending to work on the software over the summer break. During October and November 1996, each of the 81 schools was contacted by telephone in order to ascertain progress. In the end, telephone contact was made with the project co-ordinator in 67 of the 81 schools (83%) in the pilot. As a result of these contacts it became clear that only a small number of schools had made any significant progress on using the software. Many schools indicated that they had experienced technical problems with getting the software to run, or intended to work on the software either over the half-term holiday or during the second half of the term. For this reason, the draft questionnaire, which had been developed during the summer, was substantially revised in order to focus in more detail on the areas of difficulty experienced by schools, and the mailing of the questionnaire was postponed for four weeks. A draft version of the questionnaire was sent to all the pilot schools in one LEA in November, prior to a meeting held there, and detailed discussions were held with representatives from two schools regarding the draft questionnaire. As a result, small changes were made to the questionnaire, and the final version of the questionnaire was mailed out to all 81 schools in the sample in late November, with a requested return date of 16 December.
The questionnaire contained 56 questions, approximately half of which required only a yes or no answer, and was intended to take no more than 30 minutes to complete (although for the majority of schools the completion time was far shorter than this). The remainder of the questionnaire consisted of 5 multiple-choice and 24 open-response questions. By the nominated return date, a total of 23 questionnaires had been received, but following intensive telephone follow-up, a further 26 questionnaires were received by the end of January. This sample of 49 is referred to in this paper as the "questionnaire sample". However, the extensive notes derived from telephone contacts with project co-ordinators, particularly during the months of October and November, provide information on most of the material covered by the closed-response items in the questionnaire for a further 26 schools, giving information on a total of 75 schools in all (the "augmented sample").

Questionnaire results

There were no significant differences between the schools that did and did not return the questionnaires, either in terms of who marked the OMR forms (ie school or test markers) or the subject piloted, although there were some (but not significant) differences between LEAs. Of the 49 respondents, 7 said that they had not received the materials (although in some cases this appeared to be due to changes in responsibility within the schools; the materials had been delivered to the school). Of the 42 who had received the materials, a further 6 did not attempt to install the software for a variety of reasons, ranging from lack of time to a concern that the software might interfere with other important software running on the same system. However, all but two of the schools who tried to install the software were successful. Table 1 below reports the quantitative data for the questionnaire sample and, where relevant, the augmented sample.

                                     Question   Questionnaire sample        Augmented sample
Question                             number     Response  Positive  %       Response  Positive  %
Software received?                   6             49        42     86%        75        67     89%
Installation attempted?              7             42        36     86%        67        49     73%
Installation successful?             9             36        33     92%        49        45     92%
Item responses coded by school?      21            21        14     67%        25        16     64%
Code-time difference?                24            12         8     67%        14         9     64%
Scan successful?                     29            33        10     30%        45        13     29%
Insuperable problems?                31            26        15     58%        35        24     69%
Demonstration data used?             32            23        10     43%        32        12     38%
Software manual used?                33            27        26     96%
Software manual useful?              34            26        24     92%
Statistics guide used?               36            25        19     76%
Statistics guide useful?             37            20        19     95%
Lists/box- or leaf-plots useful?     39            21        15     71%
Frequency graphs useful?             42            19        13     68%
Reverse CF graphs useful?            44            18         5     28%
Sub-domain reports useful?           46            17         7     41%
Facility indices useful?             48            19        13     68%
Significant items reports useful?    50            16         5     31%
Unexpected responses useful?         52            19         7     37%
Diagnostic charts useful?            54            17         6     35%

Table 1: summary of responses to dichotomous items in the questionnaire

Of the 33 schools that had installed the software successfully, 17 had received OMR forms that had been coded with the students' item responses by the test markers. Of the 16 schools receiving blank OMR forms, all but 4 coded their students' item responses onto the forms. In the majority of cases (59%), the forms were coded by the students themselves. In two schools, the forms were coded by other students, and in the remaining two schools by the teachers themselves. The median time taken to code the forms was 15 minutes, although this figure should be interpreted with caution, since 3 of the 14 schools failed to indicate the time taken to code the forms, and those in the augmented sample did report longer times.
Furthermore, although one of the schools where the forms were marked by teachers did not indicate the time taken to code each form, the other gave a figure of only 2 minutes per form, and the two schools where older students were used to code the forms gave a time of only five minutes per form. Both the median and the mean time taken to code each form, where this was done by the students themselves, were around 17.5 minutes, although in 7 out of the 10 schools where forms were coded by the students themselves there were significant differences in the time taken by the slowest and fastest students. The quickest 20% of students averaged around 10 minutes to code their forms, while the slowest 20% averaged around 30 minutes, and in one school took as long as 50 minutes.

A total of 35 schools got as far as having coded forms, but three of these schools participated no further, principally because of the time taken to get this far. Of the 32 schools that attempted to scan the forms, only 9 managed to do so without significant problems. Of the remaining 23, 15 encountered such great difficulties as to prevent them making any further progress with analysing their students' data. In total, therefore, only 17 schools actually managed to use the software in the way originally intended, although a further 7 schools did undertake analyses using the demonstration data supplied with the software. These 24 schools are referred to collectively as user schools. The data from the phone contacts made in October and November suggest that only 2 schools that had made significant progress did not return the questionnaire. Of the 17 schools who analysed their own data, 10 had their OMR forms coded by the KS3 test markers, and 7 marked their own OMR forms; there is therefore no evidence that the requirement for the schools to mark the forms themselves acted as a disincentive. However, an analysis of these 17 schools by subject reveals that 10 of them were piloting mathematics and only 7 were piloting science, and further that there were only two science schools that marked their own OMR forms. Furthermore, 4 of the 11 user schools piloting mathematics were in a single LEA.

Only 20 of the 24 user schools completed the section of the questionnaire which dealt with the analyses available within the Diagnostic Software, and their responses are summarised in Table 3 below. In the table, the rows (ie schools) have been ordered from most positive to least positive, and the columns (ie questions) have been ordered from most popular to least popular; a sketch of this ordering is given after the table. The entries in the table indicate whether the school found each of the analyses useful (1) or not (0). The significance of the heavy and light lines drawn on Table 3 is explained in note 3.

School*  Q39  Q42  Q48  Q46  Q52  Q54  Q44  Q50  Total
   1      1    1    1    1    1    1    0    1     7
   2      1    1    1    0    1    1    1    1     7
   3      1    1    1    1    1    0    0    1     6
   4      1    1    0    1    1    1    0    1     6
   5      1    1    1    1    0    0    1    0     5
   6      1    1    1    0    1    1    0    0     5
   7      1    1    1    1    0    0    1    0     5
   8      1    0    1    0    0    1    1    0     4
   9      1    1    1    0    0    1    0    0     4
  10      0    1    0    1    0    0    1    0     3
  11      0    1    1    1    0    0    0    0     3
  12      1    1    1    0    0    0    0    0     3
  13      1    0    1    0    1    0    0    0     3
  14      0    0    0    0    1    0    0    1     2
  15      1    1    0    0    0    0    0    0     2
  16      0    1    1    0    0    0    0    0     2
  17      1    0    1    0    0    0    0    0     2
  18      1    0    0    0    0    0    0    0     1
  19      1    0    0    0    0    0    0    0     1
  20      0    0    0    0    0    0    0    0     0
Total    15   13   13    7    7    6    5    5

*Schools shown in italics used only the demonstration data
Key: Q39: Lists/box- or leaf-plots; Q42: Frequency graphs; Q44: Reverse CF graphs; Q46: Sub-domain reports; Q48: Facility indices; Q50: Significant items reports; Q52: Unexpected responses reports; Q54: Diagnostic charts.

Table 3: preferences of the 24 user schools for the available analyses
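The row and column ordering used in Table 3, and the Guttman-scaling check discussed below, can be reproduced with a short script. The following Python sketch is illustrative only: the variable names and the error-counting rule are assumptions rather than part of the published software, and only the first four schools from Table 3 are entered. It sorts the school-by-analysis matrix by marginal totals and computes a coefficient of reproducibility, ie the proportion of cells that agree with a perfect scalogram in which each school endorses only its k most popular analyses.

# Minimal sketch (assumed approach, not the pilot software itself).
analyses = ["Q39", "Q42", "Q48", "Q46", "Q52", "Q54", "Q44", "Q50"]
responses = {                      # 1 = analysis judged useful, 0 = not
    1: [1, 1, 1, 1, 1, 1, 0, 1],   # first four schools of Table 3 only
    2: [1, 1, 1, 0, 1, 1, 1, 1],
    3: [1, 1, 1, 1, 1, 0, 0, 1],
    4: [1, 1, 0, 1, 1, 1, 0, 1],
}

# Column totals give the popularity of each analysis; row totals give how
# positive each school was. Sorting on these totals reproduces the ordering
# used in Table 3.
col_totals = [sum(row[i] for row in responses.values()) for i in range(len(analyses))]
col_order = sorted(range(len(analyses)), key=lambda i: -col_totals[i])
row_order = sorted(responses, key=lambda s: -sum(responses[s]))

# Guttman coefficient of reproducibility: compare each school's observed
# pattern with the perfect scalogram implied by its total (endorse the k most
# popular analyses only) and count the cells that disagree.
errors = 0
for school in row_order:
    pattern = [responses[school][i] for i in col_order]
    k = sum(pattern)
    ideal = [1] * k + [0] * (len(pattern) - k)
    errors += sum(observed != expected for observed, expected in zip(pattern, ideal))

reproducibility = 1 - errors / (len(responses) * len(analyses))
print("analyses, most to least popular:", [analyses[i] for i in col_order])
print("coefficient of reproducibility: %.2f" % reproducibility)

Applied to the full 20-school matrix, a value close to 1 would indicate that a single order of popularity among the analyses, together with differing levels of interest across schools, accounts for most of the responses.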
The three most popular analyses, found useful by around two-thirds of the 20 user schools responding, were the subject reports (which included lists, boxplots and stem-and-leaf plots), the frequency graphs and the facility indices. The other five analyses were far less popular, each found useful by slightly less than half as many schools as the more popular analyses. Applying the techniques of Guttman scaling used in the item analysis to these data, it is clear that most of the responses are accounted for by a natural order of popularity amongst the analyses, combined with differing levels of interest in the analyses between the schools.

Overall evaluation

The schools in the pilot sample were not intended to be in any way representative of the whole population of schools. They were deliberately chosen as schools both willing, and likely to be able, to participate in a project that required a reasonably high level of familiarity with computer-aided administration systems. Therefore the relatively low proportion (23%) of schools that actually used the software in the way intended must be considered rather disappointing. However, it is worth noting that a large proportion of the problems arose directly from inadequacies in the equipment available, and such difficulties can be expected to diminish as computer provision in schools improves. Another substantial source of difficulty was attributable to schools' use of early versions of system software, which can again be expected to diminish as users migrate towards Windows-based environments. The biggest difficulty, and one to which these hardware and software problems contributed, is of course time, and for many schools the investment of time required to use the software effectively is unlikely ever to be justified by the benefits. However, a significant number of schools (over a quarter of those who actually received the software) managed to install and use the software, and there was no evidence that those who had their OMR forms coded for them by the test markers were more likely to participate or succeed than those who marked their own forms.

Two schools used the software successfully, but did not feel that the time invested was justified by the usefulness of the results, although one of these conceded that this was largely because they already had an intensive testing system in place. The remainder of the schools who used the software to analyse the data were very positive, and almost all the schools who used the software for the analysis of their own students' results indicated that they would use the software again in 1997, even if the forms had to be coded by the school. One perhaps surprising result is the speed and accuracy with which students coded their own forms. Most schools found that the time taken to code the form fitted well within a single lesson, even for lower-achieving students, and the accuracy of their coding appears to be little different from that achieved by the test markers. Apart from using the software for presentational purposes, there is no doubt that the major use made of the software was for investigating item facilities, and many of the science departments particularly valued the opportunity to compare facilities for individual items with national norms (national norms were not available for the mathematics test).
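Since these facility comparisons were the most heavily used feature, a minimal sketch of the underlying calculation may be helpful. The item names, marks and national figures below are hypothetical, and the calculation shown (the proportion of a class answering each item correctly, set against a national facility value) is an assumption about the general approach rather than the pilot software's actual implementation.

# Hypothetical data: 1 = item answered correctly, 0 = not, for one class.
item_marks = {
    "item_01": [1, 1, 0, 1, 1, 1, 0, 1],
    "item_02": [0, 1, 0, 0, 1, 0, 0, 1],
    "item_03": [1, 1, 1, 1, 1, 1, 1, 0],
    "item_04": [0, 0, 1, 0, 0, 0, 1, 0],
}
national_facility = {"item_01": 0.70, "item_02": 0.45, "item_03": 0.80, "item_04": 0.35}

for item, marks in item_marks.items():
    facility = sum(marks) / len(marks)            # proportion of the class correct
    diff = facility - national_facility[item]     # positive: class above the national figure
    print(f"{item}: school {facility:.2f}  national {national_facility[item]:.2f}  difference {diff:+.2f}")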
For this purpose, many of the schools found the item-by-item reports of facilities rather too detailed, and would have welcomed summaries for related sets of items. Despite the limited use made of the software by the pilot schools, there appears to be little doubt that the software offers a valuable resource for those schools that are already using information technology to aid administration, and usage can be expected to increase steadily as experience with the software (both within schools and in LEAs) grows. Since there was some support for each of the analyses provided in the 1996 pilot software, the specification for the software made available to all schools in England and Wales in 1997 was broadly similar to that on which the 1996 software was based, although a number of small improvements were made.

Schools in one particular LEA had made extensive use of the software. One school, in preparing for a school inspection, had used the facility indices obtained from the software to identify key topics that it was felt the department had not taught well. On the basis of these data, the department had drawn up a short-term plan, involving paying greater attention to specific topics, and a long-term plan, which involved revisions to the department's schemes of work in order to address the difficulties. The software therefore has significant potential to assist schools in identifying aspects of their work in which they are relatively successful and aspects which might benefit from further development, as well as identifying particular aspects of successful practice within schools (eg which teachers teach which topics well). Since the utility of the software can be expected to improve as schools gain more experience in its use, the evidence collected here suggests considerable long-term benefits for this kind of software.

References

Abraham, J. (1989). Testing Hargreaves' and Lacey's differentiation-polarisation theory in a setted comprehensive. British Journal of Sociology, 40(1), 46-81.
Angoff, W. H. (1974). Criterion-referencing, norm-referencing and the SAT. College Board Review, 92(Summer), 2-5, 21.
Department for Education (1994). Testing 14 year olds in 1994: results of the National Curriculum assessments in England. London, UK: Department for Education.
Guttman, L. (1944). A basis for scaling qualitative data. American Sociological Review, 9, 139-150.
Haladyna, T. M. (1994). Developing and validating multiple-choice test items. Hillsdale, NJ: Lawrence Erlbaum Associates.
Harnisch, D. L. & Linn, R. L. (1981). Analysis of item response patterns: questionable test data and dissimilar curriculum practices. Journal of Educational Measurement, 18(3), 133-146.
Helmke, A. & Schrader, F. W. (1987). Interactional effects of instructional quality and teacher judgement accuracy on achievement. Teaching and Teacher Education, 3(2), 91-98.
Holland, P. & Wainer, H. (Eds.). (1993). Differential item functioning: theory and practice. Hillsdale, NJ: Lawrence Erlbaum Associates.
Hunter, J. E. & Schmidt, F. L. (1991). Meta-analysis. In R. K. Hambleton & J. N. Zaal (Eds.), Advances in educational and psychological testing (pp. 157-183). Boston, MA: Kluwer Academic Publishers.
Masters, G. N. & Evans, J. (1986). A sense of direction in criterion referenced assessment. Studies in Educational Evaluation, 12, 257-265.
McArthur, D. L. (1987). Analysis of patterns: the S-P technique. In D. L. McArthur (Ed.), Alternative approaches to the assessment of achievement (pp. 79-98). Norwell, MA: Kluwer.
Menzel, H. (1953). A new coefficient for scalogram analysis. Public Opinion Quarterly, 17, 268-280.
National Foundation for Educational Research (1995, May). Additional information from key stage 1 tests. Report prepared for the School Curriculum and Assessment Authority. Slough, UK: National Foundation for Educational Research.
Schulz, E. M. & Nicewander, W. A. (1997). Grade equivalent and IRT representations of growth. Journal of Educational Measurement, 34(4), 315-331.
Sirotnik, K. A. (1987). Towards a more sensible achievement measurement: a retrospective. In D. L. McArthur (Ed.), Alternative approaches to the assessment of achievement (pp. 21-78). Norwell, MA: Kluwer.
Tukey, J. W. (1977). Exploratory data analysis. Reading, MA: Addison-Wesley.
Vaughn, S., Schumm, J. S. & Sinagub, J. M. (1996). Focus group interviews in education and psychology. Thousand Oaks, CA: Sage.
Wiliam, D. (1992). Special needs and the distribution of attainment in the national curriculum. British Journal of Educational Psychology, 62, 397-403.
Wiliam, D. (1995). Deriving more useful information from national curriculum tests. Report prepared for the School Curriculum and Assessment Authority. London, UK: King's College London School of Education.
Wiliam, D. (1996). SCAA key stage 3 diagnostic software: statistics and data presentation guide. London, UK: School Curriculum and Assessment Authority.
Wood, R. (1976/1987). Your chemistry equals my French. In R. Wood (Ed.), Measurement and assessment in education and psychology (pp. 40-44). London, UK: Falmer.

Notes

An earlier version of this paper was presented to the 25th Annual Conference of the International Association for Educational Assessment held at Bled, Slovenia, May 1999.
One school in the augmented sample reported on the telephone that one class took nearly 100 minutes to code their own forms.
Most of the schools who found the subject reports useful regarded the plain lists as the most useful, with boxplots next most useful and stem-and-leaf plots least useful. In fact only one school who found the subject reports useful did not follow this order of preference.