Automatic Question Answering: Beyond the Factoid

Radu Soricut
Information Sciences Institute
University of Southern California
4676 Admiralty Way
Marina del Rey, CA 90292, USA
radu@isi.edu

Eric Brill
Microsoft Research
One Microsoft Way
Redmond, WA 98052, USA
brill@microsoft.com

Abstract

In this paper we describe and evaluate a Question Answering system that goes beyond answering factoid questions. We focus on FAQ-like questions and answers, and build our system around a noisy-channel architecture which exploits both a language model for answers and a transformation model for answer/question terms, trained on a corpus of 1 million question/answer pairs collected from the Web.

1 Introduction

The Question Answering (QA) task has received a great deal of attention from the Computational Linguistics research community in the last few years (e.g., Text REtrieval Conference TREC 2001-2003). The definition of the task, however, is generally restricted to answering factoid questions: questions for which a complete answer can be given in 50 bytes or less, which is roughly a few words. Even with this limitation in place, factoid question answering is by no means an easy task. The challenges posed by answering factoid questions have been addressed using a large variety of techniques, such as question parsing (Hovy et al., 2001; Moldovan et al., 2002), question-type determination (Brill et al., 2001; Ittycheriah and Roukos, 2002; Hovy et al., 2001; Moldovan et al., 2002), WordNet exploitation (Hovy et al., 2001; Pasca and Harabagiu, 2001; Prager et al., 2001), Web exploitation (Brill et al., 2001; Kwok et al., 2001), noisy-channel transformations (Echihabi and Marcu, 2003), semantic analysis (Xu et al., 2002; Hovy et al., 2001; Moldovan et al., 2002), and inferencing (Moldovan et al., 2002).

The obvious limitation of any factoid QA system is that many questions that people want answers for are not factoid questions. It is also frequently the case that non-factoid questions are the ones for which answers cannot as readily be found by simply using a good search engine. It follows that there is a good economic incentive in moving the QA task to a more general level: it is likely that a system able to answer complex questions of the type people generally and/or frequently ask has greater potential impact than one restricted to answering only factoid questions. A natural move is to recast the question answering task to handling questions people frequently ask or want answers for, as seen in Frequently Asked Questions (FAQ) lists. These questions are sometimes factoid questions (such as, "What is Scotland's national costume?"), but in general are more complex questions (such as, "How does a film qualify for an Academy Award?", which requires an answer along the following lines: "A feature film must screen in a Los Angeles County theater in 35 or 70mm or in a 24-frame progressive scan digital format suitable for exhibiting in existing commercial digital cinema sites for paid admission for seven consecutive days. The seven day run must begin before midnight, December 31, of the qualifying year. [...]").

In this paper, we make a first attempt towards solving a QA problem more generic than factoid QA, for which there are no restrictions on the type of questions that are handled, and there is no assumption that the answers to be provided are factoids.
In our solution to this problem we employ learning mechanisms for question-answer transformations (Agichtein et al., 2001; Radev et al., 2001), and also exploit large document collections such as the Web for finding answers (Brill et al., 2001; Kwok et al., 2001). We build our QA system around a noisy-channel architecture which exploits both a language model for answers and a transformation model for answer/question terms, trained on a corpus of 1 million question/answer pairs collected from the Web. Our evaluations show that our system achieves reasonable performance in terms of answer accuracy for a large variety of complex, non-factoid questions.

2 Beyond Factoid Question Answering

One of the first challenges to be faced in automatic question answering is the lexical and stylistic gap between the question string and the answer string. For factoid questions, these gaps are usually bridged by question reformulations, from simple rewrites (Brill et al., 2001), to more sophisticated paraphrases (Hermjakob et al., 2001), to question-to-answer translations (Radev et al., 2001). We ran several preliminary trials using various question reformulation techniques. We found that, in general, when complex questions are involved, reformulating the question (using either simple rewrites or question-answer term translations) more often hurts performance than improves it.

Another widely used technique in factoid QA is sentence parsing, along with question-type determination. As mentioned by Hovy et al. (2001), their hierarchical QA typology contains 79 nodes, which in many cases can be even further differentiated. While we acknowledge that QA typologies and hierarchical question types have the potential to be extremely useful beyond factoid QA, the volume of work involved is likely to exceed by orders of magnitude that involved in existing factoid QA typologies. We postpone such work for future endeavors.

The techniques we propose for handling our extended QA task are less linguistically motivated and more statistically driven. In order to have access to the right statistics, we first build a question-answer pair training corpus by mining FAQ pages from the Web, as described in Section 3. Instead of sentence parsing, we devise a statistical chunker that is used to transform a question into a phrase-based query (see Section 4). After a search engine uses the formulated query to return the N most relevant documents from the Web, an answer to the given question is found by computing an answer language model probability (indicating how similar the proposed answer is to answers seen in the training corpus), and an answer/question translation model probability (indicating how similar the proposed answer/question pair is to pairs seen in the training corpus). In Section 5 we describe the evaluations we performed in order to assess our system's performance, while in Section 6 we analyze some of the issues that negatively affected our system's performance.

3 A Question-Answer Corpus for FAQs

In order to employ the learning mechanisms described in the previous section, we first need to build a large training corpus consisting of question-answer pairs of broad lexical coverage. Previous work using FAQs as a source for finding an appropriate answer (Burke et al., 1996) or for learning lexical correlations (Berger et al., 2000) focused on using the publicly available Usenet FAQ collection and other non-public FAQ collections, and reportedly worked with on the order of thousands of question-answer pairs.
Our approach to question/answer pair collection takes a different path. If one poses the simple query "FAQ" to an existing search engine, one can observe that roughly 85% of the returned URL strings corresponding to genuine FAQ pages contain the substring "faq", while virtually all of the URLs that contain the substring "faq" are genuine FAQ pages. It follows that, if one has access to a large collection of the Web's existing URLs, simple pattern-matching for "faq" on these URLs will have a recall close to 85% and a precision close to 100% on returning FAQ URLs from those available in the collection. Our URL collection contains approximately 1 billion URLs, and using this technique we extracted roughly 2.7 million URLs containing the (uncased) string "faq", which amounts to roughly 2.3 million FAQ URLs to be used for collecting question/answer pairs.

The collected FAQ pages displayed a variety of formats and presentations. It seems that the variety of ways questions and answers are usually listed in FAQ pages does not allow for a simple high-precision, high-recall solution for extracting question/answer pairs: if one assumes that only certain templates are used when presenting FAQ lists, one can obtain clean question/answer pairs at the cost of losing many other such pairs (which happen to be presented in different templates); on the other hand, assuming very loose constraints on the way information is presented on such pages, one can obtain a bountiful set of question/answer pairs, plus other pairs that do not qualify as such. We settled for a two-step approach: a first, recall-oriented pass based on universal indicators such as punctuation and lexical cues allowed us to retrieve most of the question/answer pairs, along with other noise data; a second, precision-oriented pass used several filters, such as language identification, length constraints, and lexical cues, to reduce the level of noise in the question/answer pair corpus. Using this method, we were able to collect a total of roughly 1 million question/answer pairs, exceeding by orders of magnitude the amount of data previously used for learning question/answer statistics.
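To make the URL filtering and the two-step extraction more concrete, here is a minimal sketch of heuristics in the spirit of the procedure described above; the regular expression, cue words, and length thresholds below are illustrative assumptions of ours, not the exact filters used to build the corpus.

```python
import re

def is_faq_url(url):
    # Keep URLs containing the (uncased) substring "faq":
    # recall close to 85%, precision close to 100% on genuine FAQ pages.
    return "faq" in url.lower()

QUESTION_CUE = re.compile(r"^(what|how|why|when|where|who|which|can|do|does|is|are)\b", re.I)

def extract_qa_pairs(lines):
    """Two-step extraction sketch over the text lines of one FAQ page."""
    # Step 1 (recall-oriented): punctuation and lexical cues mark question lines;
    # everything until the next question line is treated as its answer.
    pairs, question, answer = [], None, []
    for line in (l.strip() for l in lines):
        if line.endswith("?") or QUESTION_CUE.match(line):
            if question and answer:
                pairs.append((question, " ".join(answer)))
            question, answer = line, []
        elif line and question is not None:
            answer.append(line)
    if question and answer:
        pairs.append((question, " ".join(answer)))

    # Step 2 (precision-oriented): crude length and character filters to reduce noise.
    def keep(q, a):
        return (3 <= len(q.split()) <= 60
                and 5 <= len(a.split()) <= 400
                and sum(ch.isalpha() for ch in a) > 0.5 * len(a))
    return [(q, a) for q, a in pairs if keep(q, a)]
```

The real second pass also uses language identification and lexical cues; the point of the sketch is only the recall-then-precision structure.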
4 A QA System Architecture

The architecture of our QA system is presented in Figure 1. There are four separate modules that handle various stages in the system's pipeline: the first module is called Question2Query, in which questions posed in natural language are transformed into phrase-based queries before being handed down to the SearchEngine module. The second module is an Information Retrieval engine which takes a query as input and returns a sorted list of documents deemed to be relevant to the query. A third module, called Filter, is in charge of filtering the returned list of documents, in order to provide acceptable input to the next module. The fourth module, AnswerExtraction, analyzes the content presented and chooses the text fragment deemed to be the best answer to the posed question.

[Figure 1: The QA system architecture. A question Q is turned into a query by the Question2Query module and sent to the SearchEngine module; the retrieved documents pass through the Filter module to the AnswerExtraction module, which returns the answer A. The Training Corpus, mined from the Web, provides the statistics used by the system.]

This architecture allows us to flexibly test various changes in the pipeline and evaluate their overall effect. We present next detailed descriptions of how each module works, and outline several choices that present themselves as acceptable options to be evaluated.

4.1 The Question2Query Module

A query is defined to be a keyword-based string that users are expected to feed as input to a search engine. Such a string is often thought of as a representation of a user's information need, and being proficient in expressing one's need in such terms is one of the key points in successfully using a search engine. A question posed in natural language can be thought of as such a query. It has the advantage that it forces the user to pay more attention to formulating the information need (and not typing the first keywords that come to mind). It has the disadvantage that it contains not only the keywords a search engine normally expects, but also a lot of extraneous details as part of its syntactic and discourse constraints, plus an inherently underspecified unit-segmentation problem, all of which can confuse the search engine.

To counterbalance some of these disadvantages, we build a statistical chunker that uses a dynamic programming algorithm to chunk the question into chunks/phrases. The chunker is trained on the answer side of the Training corpus in order to learn 2- and 3-word collocations, defined using the likelihood ratio of Dunning (1993). Note that we are chunking the question using answer-side statistics, precisely as a measure for bridging the stylistic gap between questions and answers. Our chunker uses the extracted collocation statistics to compute an optimal chunking using a Dijkstra-style dynamic programming algorithm. In Figure 2 we present an example of the results returned by our statistical chunker. Important cues such as "differ from" and "herbal medications" are presented as phrases to the search engine, therefore increasing the recall of the search. Note that, unlike a segmentation offered by a parser (Hermjakob et al., 2001), our phrases are not necessarily syntactic constituents. A statistics-based chunker also has the advantage that it can be used as-is for question segmentation in languages other than English, provided training data (i.e., plain written text) is available.

[Figure 2: Question segmentation into query using a statistical chunker. Example: "How do herbal medications differ from conventional drugs?" is segmented into "How do" / "herbal medications" / "differ from" / "conventional" / "drugs".]
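As a rough illustration of how such a chunker can work, the sketch below segments a question into chunks of one to three words with a dynamic program that maximizes the total collocation score of the multi-word chunks. The scoring convention (precomputed likelihood-ratio scores for 2- and 3-word sequences, zero for single words) is a simplifying assumption of ours, not the exact objective used in our system.

```python
def chunk_question(words, colloc_score):
    """Segment `words` into chunks of 1-3 tokens, maximizing the summed
    collocation score of the multi-word chunks (single words score 0).
    `colloc_score` maps a 2- or 3-word tuple to its likelihood-ratio score,
    assumed to be precomputed on the answer side of the training corpus."""
    n = len(words)
    best = [float("-inf")] * (n + 1)   # best[i]: best score for the first i words
    back = [0] * (n + 1)               # back[i]: start index of the chunk ending at i
    best[0] = 0.0
    for i in range(1, n + 1):
        for size in (1, 2, 3):
            j = i - size
            if j < 0:
                continue
            gain = colloc_score.get(tuple(words[j:i]), 0.0) if size > 1 else 0.0
            if best[j] + gain > best[i]:
                best[i], back[i] = best[j] + gain, j
    chunks, i = [], n
    while i > 0:                        # follow back-pointers to recover the chunking
        chunks.append(" ".join(words[back[i]:i]))
        i = back[i]
    return list(reversed(chunks))

# With high scores for ("herbal", "medications") and ("differ", "from"), the question in
# Figure 2 comes out as ["How do", "herbal medications", "differ from", "conventional",
# "drugs"], assuming "How do" also scores as a collocation.
```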
4.2 The SearchEngine Module

This module consists of a configurable interface to available off-the-shelf search engines. It currently supports MSNSearch and Google. Switching from one search engine to another allowed us to measure the impact of the IR engine on the QA task.

4.3 The Filter Module

This module is in charge of providing the AnswerExtraction module with the content of the pages returned by the search engine, after certain filtering steps. A first step is to reduce the volume of pages returned to a manageable amount. We implement this step by choosing to return the first N hits provided by the search engine. Other filtering steps performed by the Filter module include tokenization and segmentation of the text into sentences.

One more filtering step was needed for evaluation purposes only: because both our training and test data were collected from the Web (using the procedure described in Section 3), there was a good chance that asking a previously collected question returned its already available answer, thus optimistically biasing our evaluation. The Filter module therefore had access to the reference answers for the test questions as well, and ensured that, if the reference answer matched a string in some retrieved page, that page was discarded. Moreover, we found that slight variations of the same answer could defeat the purpose of this string-matching check. For the purpose of our evaluation, we therefore also considered that if the question/reference answer pair had a string of 10 words or more identical with a string in some retrieved page, that page was discarded as well. Note that, outside the evaluation procedure, the string-matching filtering step is not needed, and our system's performance can only increase by removing it.

4.4 The AnswerExtraction Module

Authors of previous work on statistical approaches to answer finding (Berger et al., 2000) emphasized the need to bridge the lexical chasm between the question terms and the answer terms. Berger et al. showed that techniques that did not bridge the lexical chasm were likely to perform worse than techniques that did. For comparison purposes, we consider two different algorithms for our AnswerExtraction module: one that does not bridge the lexical chasm, based on N-gram co-occurrences between the question terms and the answer terms; and one that attempts to bridge the lexical chasm using Statistical Machine Translation-inspired techniques (Brown et al., 1993) in order to find the best answer for a given question.

For both algorithms, every 3 consecutive sentences from the documents provided by the Filter module form a potential answer. The choice of 3 sentences comes from the average number of sentences in the answers from our training corpus. The choice of consecutiveness comes from the empirical observation that answers built up from consecutive sentences tend to be more coherent and contain more non-redundant information than answers built up from non-consecutive sentences.

4.4.1 N-gram Co-Occurrence Statistics for Answer Extraction

N-gram co-occurrence statistics have been successfully used in automatic evaluation (Papineni et al., 2002; Lin and Hovy, 2003), and more recently as training criteria in statistical machine translation (Och, 2003). We implemented an answer extraction algorithm using the BLEU score of Papineni et al. (2002) as a means of assessing the overlap between the question and the proposed answers. For each potential answer, the overlap with the question was assessed with BLEU (with the brevity penalty set to penalize answers shorter than 3 times the length of the question). The best-scoring potential answer was presented by the AnswerExtraction module as the answer to the question.
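The following sketch shows the shape of such an N-gram co-occurrence scorer: BLEU-style n-gram precisions of the candidate answer against the question, with a brevity penalty that penalizes answers shorter than 3 times the question length. The maximum n-gram order and the smoothing used here are assumptions, not the exact BLEU configuration of NG-AE.

```python
import math
from collections import Counter

def ngram_counts(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def ng_ae_score(question, answer, max_n=4):
    """BLEU-like overlap between a candidate answer and the question."""
    q, a = question.lower().split(), answer.lower().split()
    log_precisions = []
    for n in range(1, max_n + 1):
        a_grams, q_grams = ngram_counts(a, n), ngram_counts(q, n)
        overlap = sum((a_grams & q_grams).values())          # clipped n-gram matches
        total = max(sum(a_grams.values()), 1)
        log_precisions.append(math.log(max(overlap, 0.1) / total))
    # Brevity penalty: answers shorter than 3x the question length are penalized.
    target_len = 3 * len(q)
    bp = 1.0 if len(a) >= target_len else math.exp(1.0 - target_len / max(len(a), 1))
    return bp * math.exp(sum(log_precisions) / max_n)

def best_answer(question, candidates):
    """NG-AE: return the highest-scoring candidate (each a 3-sentence string)."""
    return max(candidates, key=lambda a: ng_ae_score(question, a))
```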
4.4.2 Statistical Translation for Answer Extraction

As proposed by Berger et al. (2000), the lexical gap between questions and answers can be bridged by a statistical translation model between answer terms and question terms. Their model, however, uses only an Answer/Question translation model (see Figure 3) as a means to find the answer. A more complete model for answer extraction can be formulated in terms of a noisy channel, along the lines of Berger and Lafferty (2000) for the Information Retrieval task, as illustrated in Figure 3: an answer generation model proposes an answer A according to an answer generation probability distribution; answer A is further transformed into question Q by an answer/question translation model according to a question-given-answer conditional probability distribution. The task of the AnswerExtraction algorithm is to take the given question q and find an answer a in the potential answer list that is most likely both an appropriate and a well-formed answer.

[Figure 3: A noisy-channel model for answer extraction. An Answer Generation Model proposes an answer A, which an Answer/Question Translation Model transforms into a question Q; the Answer Extraction Algorithm inverts the channel, taking a question q and returning an answer a.]

The AnswerExtraction procedure employed depends on the task T we want it to accomplish. Let the task T be defined as "find a 3-sentence answer for a given question". Then we can formulate the algorithm as finding the a-posteriori most likely answer given question and task, and write it as p(a|q,T). We can use Bayes' law to write this as:

    p(a|q,T) = \frac{p(q|a,T) \cdot p(a|T)}{p(q|T)}    (1)

Because the denominator is fixed given question and task, we can ignore it and find the answer that maximizes the probability of being both a well-formed and an appropriate answer as:

    a = \arg\max_a \underbrace{p(a|T)}_{\text{question-independent}} \cdot \underbrace{p(q|a,T)}_{\text{question-dependent}}    (2)

The decomposition of the formula into a question-independent term and a question-dependent term allows us to separately model the quality of a proposed answer a with respect to task T, and to determine the appropriateness of the proposed answer a with respect to question q to be answered in the context of task T.

Because task T fits the characteristics of the question-answer pair corpus described in Section 3, we can use the answer side of this corpus to compute the prior probability p(a|T). The role of the prior is to help downgrade those answers that are too long or too short, or are otherwise not well-formed. We use a standard trigram language model to compute the probability distribution p(·|T).

The mapping of answer terms to question terms is modeled using Brown et al.'s (1993) simplest model, called IBM Model 1. For this reason, we call our model Model 1 as well. Under this model, a question is generated from an answer a of length n according to the following steps: first, a length m is chosen for the question, according to the distribution ψ(m|n) (we assume this distribution is uniform); then, for each position j in q, a position i in a is chosen from which q_j is generated, according to the distribution t(·|a_i). The answer is assumed to include a NULL word, whose purpose is to generate the content-free words in the question (such as in "Can you please tell me...?"). The correspondence between the answer terms and the question terms is called an alignment, and the probability p(q|a) is computed as the sum over all possible alignments. We express this probability using the following formula:

    p(q|a) = \psi(m|n) \prod_{j=1}^{m} \Big( \frac{n}{n+1} \sum_{i=1}^{n} t(q_j|a_i)\, c(a_i|a) + \frac{1}{n+1}\, t(q_j|\mathrm{NULL}) \Big)    (3)

where t(q_j|a_i) are the probabilities of translating answer terms into question terms, and c(a_i|a) are the relative counts of the answer terms. Our parallel corpus of questions and answers can be used to compute the translation table t(q_j|a_i) using the EM algorithm, as described by Brown et al. (1993). Note that, similarly to the statistical machine translation framework, we deal here with inverse probabilities, i.e. the probability of a question term given an answer, and not the more intuitive probability of an answer term given a question.

Following Berger and Lafferty (2000), an even simpler model than Model 1 can be devised by skewing the translation distribution t(·|a_i) such that all the probability mass goes to the term a_i. This simpler model is called Model 0. In Section 5 we evaluate the proficiency of both Model 1 and Model 0 in the answer extraction task.
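Putting formulas (2) and (3) together, the answer extraction step amounts to the ranking sketched below. The trigram answer language model and the EM-trained translation table t are assumed to be given; Model 0 corresponds to replacing t by the identity mapping (all mass on q_j = a_i).

```python
import math

def model1_log_prob(question, answer, t):
    """log p(q|a) under formula (3), with a uniform length distribution psi(m|n)
    (a constant, so it is dropped from the ranking).  `t[(q_word, a_word)]` is the
    EM-trained probability of a question term given an answer term (NULL included)."""
    q, a = question.lower().split(), answer.lower().split()
    n = len(a)
    counts = {w: a.count(w) / n for w in set(a)}     # c(a_i|a): relative counts
    logp = 0.0
    for qj in q:
        inside = sum(t.get((qj, ai), 0.0) * c for ai, c in counts.items())
        term = (n / (n + 1.0)) * inside + (1.0 / (n + 1.0)) * t.get((qj, "NULL"), 0.0)
        logp += math.log(max(term, 1e-12))
    return logp

def extract_answer(question, candidates, answer_lm_logprob, t):
    """Formula (2): pick the candidate maximizing log p(a|T) + log p(q|a,T),
    where p(a|T) comes from the trigram answer language model."""
    return max(candidates,
               key=lambda a: answer_lm_logprob(a) + model1_log_prob(question, a, t))
```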
5 Evaluations and Discussions

We evaluated our QA system systematically for each module, in order to assess the impact of various algorithms on the overall performance of the system. The evaluation was done by a human judge on a set of 115 Test questions, which contained a large variety of non-factoid questions. Each answer was rated as either correct (C), somehow related (S), wrong (W), or cannot tell (N). The "somehow related" option allowed the judge to indicate that the answer was only partially correct (for example, because of missing information, or because the answer was more general/specific than required by the question, etc.). The "cannot tell" option was used in those cases when the validity of the answer could not be assessed. Note that the judge did not have access to any reference answers when assessing the quality of a proposed answer. Only general knowledge and human judgment were involved when assessing the validity of the proposed answers. Also note that, mainly because our system's answers were restricted to a maximum of 3 sentences, the evaluation guidelines stated that answers that contained the right information plus other extraneous information were to be rated correct.

For the given set of Test questions, we estimated the performance of the system using the formula (|C| + 0.5|S|) / (|C| + |S| + |W|). This formula gives a score of 1 if the questions that are not N-rated are all considered correct, and a score of 0 if they are all considered wrong. A score of 0.5 means that, on average, 1 out of 2 questions is answered correctly.

5.1 Question2Query Module Evaluation

We evaluated the Question2Query module while keeping the configuration of the other modules fixed (MSNSearch as the search engine, the top 10 hits in the Filter module), except for the AnswerExtraction module, for which we tested both the N-gram co-occurrence based algorithm (NG-AE) and a Model 1 based algorithm (M1e-AE, see Section 5.4). The evaluation assessed the impact of the statistical chunker used to transform questions into queries, against the baseline strategy of submitting the question as-is to the search engine. As illustrated in Figure 4, the overall performance of the QA system increased significantly when the question was segmented before being submitted to the SearchEngine module, for both AnswerExtraction algorithms. The score increased from 0.18 to 0.23 when using the NG-AE algorithm, and from 0.34 to 0.38 when using the M1e-AE algorithm.

[Figure 4: Evaluation of the Question2Query module]

5.2 SearchEngine Module Evaluation

The evaluation of the SearchEngine module assessed the impact of different search engines on the overall system performance. We fixed the configurations of the other modules (segmented question for the Question2Query module, top 10 hits in the Filter module), except for the AnswerExtraction module, for which we tested the performance while using the NG-AE, M1e-AE, and ONG-AE algorithms for answer extraction. The latter algorithm works exactly like NG-AE, with the exception that the potential answers are compared with a reference answer available to an Oracle, rather than against the question. The performance obtained using the ONG-AE algorithm can be thought of as indicative of the ceiling in performance that can be achieved by an AE algorithm given the potential answers available. As illustrated in Figure 5, both the MSNSearch and Google search engines achieved comparable performance accuracy. The scores were 0.23 and 0.24 when using the NG-AE algorithm, 0.38 and 0.37 when using the M1e-AE algorithm, and 0.46 and 0.46 when using the ONG-AE algorithm, for MSNSearch and Google, respectively. As a side note, it is worth mentioning that only 5% of the URLs returned by the two search engines for the entire Test set of questions overlapped.
Therefore, the comparable performance accuracy was not due to the AnswerExtraction module having access to the same set of potential answers, but rather to the fact that the 10 best hits of both search engines provide similar answering options.

[Figure 5: MSNSearch and Google give similar performance both in terms of realistic AE algorithms and oracle-based AE algorithms]

5.3 Filter Module Evaluation

As mentioned in Section 4, the Filter module filters out the low-scoring documents returned by the search engine and provides a set of potential answers extracted from the N-best list of documents. The evaluation of the Filter module therefore assessed the trade-off between computation time and accuracy of the overall system: the size of the set of potential answers directly influences the accuracy of the system while increasing the computation time of the AnswerExtraction module. The ONG-AE algorithm gives an accurate estimate of the performance ceiling induced by the set of potential answers available to the AnswerExtraction module. As illustrated in Figure 6, there is a significant performance ceiling increase from considering only the document returned as the first hit (0.36) to considering the first 10 hits (0.46). There is only a slight increase in the performance ceiling, however, from considering the first 10 hits to considering the first 50 hits (0.46 to 0.49).

[Figure 6: The scores obtained using the ONG-AE answer extraction algorithm for various N-best lists]

5.4 AnswerExtraction Module Evaluation

The AnswerExtraction module was evaluated while fixing all the other module configurations (segmented question for the Question2Query module, MSNSearch as the search engine, and top 10 hits in the Filter module). The algorithm based on the BLEU score, NG-AE, and its Oracle-informed variant, ONG-AE, do not depend on the amount of training data available, and therefore they performed uniformly at 0.23 and 0.46, respectively (Figure 7). The score of 0.46 can be interpreted as a performance ceiling of the AE algorithms given the available set of potential answers. The algorithms based on the noisy-channel architecture displayed increased performance with the increase in the amount of available training data, reaching as high as 0.38. An interesting observation is that the extraction algorithm using Model 1 (M1-AE) performed worse than the extraction algorithm using Model 0 (M0-AE), for the available training data. Our explanation is that the probability distribution of question terms given answer terms learnt by Model 1 is well informed (many mappings are allowed) but badly distributed, whereas the probability distribution learnt by Model 0 is poorly informed (indeed, only one mapping is allowed), but better distributed. Note the steep learning curve of Model 1, whose performance gets increasingly better as the distribution probabilities of various answer terms (including the NULL word) become more informed (more mappings are learnt), compared to the gentle learning curve of Model 0, whose performance increases only slightly as more words become known as self-translations to the system (and the distribution of the NULL word gets better approximated).

From the above analysis, it follows that a model whose probability distribution of question terms given answer terms is both well informed and well distributed is likely to outperform both M1-AE and M0-AE. Such a model was obtained when Model 1 was trained on both the question/answer parallel corpus from Section 3 and an artificially created parallel corpus in which each question had itself as its translation. This training regime allowed the model to assign high probabilities to identity mappings (and therefore be better distributed), while also distributing some probability mass to other question-answer term pairs (and therefore be well informed). We call the extraction algorithm that uses this model M1e-AE; the top score of 0.38 was obtained by M1e-AE when trained on 1 million question/answer pairs. Note that the learning curve of algorithm M1e-AE in Figure 7 indeed indicates that this answer extraction procedure is well informed about the distribution probabilities of various answer terms (it has the same steepness in the learning curve as M1-AE), while at the same time it uses a better distribution of the probability mass for each answer term compared to M1-AE (it outperforms M1-AE by roughly a constant amount for each training set size in the evaluation).
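The training regime behind M1e-AE only changes the data that Model 1 sees; a sketch of the corpus construction, assuming the pairs are held as (question, answer) tuples:

```python
def m1e_training_corpus(qa_pairs):
    """M1e-AE training data: the original question/answer pairs plus an artificial
    parallel corpus in which each question is paired with itself, so that EM also
    learns high-probability identity mappings."""
    return list(qa_pairs) + [(q, q) for q, _ in qa_pairs]
```

Model 1 is then trained with EM on this augmented corpus exactly as before.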
[Figure 7: The performance of our QA system with various answer extraction algorithms and different training set sizes]

6 Performance issues

In building our system, we have demonstrated that a statistical model can capitalize on large amounts of readily available training data to achieve reasonable performance on answering non-factoid questions. Our system was able to successfully answer a large variety of complex, non-factoid questions, such as:

Q: Can I travel with the Ameripass in Mexico?
A: The Ameripass includes all of USA excluding Alaska. In Canada, it is valid for travel to Montreal, Quebec, Toronto, Ontario, Vancouver BC, and New Westminster BC. In Mexico it is valid for travel to Matamoros, Nuevo Laredo, Tamaulipas and Tijuana Baja California.

Q: What is a PIC Freeze?
A: How to Prevent Slamming: Institute a PIC Freeze The best way to keep from being slammed, or having a long distance company change your service without your permission, is to request a "PIC freeze" for your line. PIC, or pre-subscribed interexchange carrier, is the telecom industry's term for a long distance company.

For those questions which were not answered correctly, we identified some of the most frequent causes that led to erroneous answers:

- the answer was not in the retrieved pages (see the 46% performance ceiling given by the Oracle)

- the answer was of the wrong type (e.g., an answer for "how-to" instead of "what-is"):
Q: What are best graduate schools for AI?
A: If you are applying for grad school in AI, and you did some research for an AI person, even if it was long ago and you don't consider it to be relevant, get a recommendation if you think that the person likes you at all. [...]

- it pointed to where an answer might be instead of answering the question:
Q: What do research studies say about massage therapy?
A: It supports research on the efficacy of therapeutic massage through the public, charitable AMTA Foundation. Additional information about massage therapy and about AMTA is available via the Web at www.amtamassage.org.

- the translation model outweighed the answer language model (too good a "translation", too bad an answer):
Q: What are private and public keys?
A: Private and public keys Private and public keys Algorithms can use two types of keys: private and public.

- it did not pick up the key content word (in the example below, "eggs"):
Q: What makes eggs have thin, brittle shells?
A: The soft-shelled clams, such as steamer, razor, and geoduck clams, have thin brittle shells that can't completely close. Cod - A popular lean, firm, white meat fish from the Pacific and the North Atlantic.

It is worth pointing out that most of these errors do not arise from within a single module; rather, they are the result of various interactions between modules that miss some relevant information.

7 Conclusions

Previous work on question answering has focused almost exclusively on building systems for handling factoid questions. These systems have recently achieved impressive performance (Moldovan et al., 2002). The world beyond factoid questions, however, is largely unexplored, with few notable exceptions (Berger et al., 2001; Agichtein et al., 2002; Girju 2003). The present paper attempts to explore the portion related to answering FAQ-like questions, without restricting the domain or type of the questions to be handled, or restricting the type of answers to be provided. While we still have a long way to go in order to achieve robust non-factoid QA, this work is a step in a direction that goes beyond restricted questions and answers.

We consider the present QA system a baseline on which more finely tuned QA architectures can be built. Learning from the experience of factoid question answering, one of the most important features to be added is a question typology for the FAQ domain. Efforts towards handling specific question types, such as causal questions, are already under way (Girju 2003). A carefully devised typology, correlated with a systematic approach to fine tuning, seems to be the lesson for success in answering both factoid and beyond-factoid questions.

References

Eugene Agichtein, Steve Lawrence, and Luis Gravano. 2002. Learning to Find Answers to Questions on the Web. ACM Transactions on Internet Technology.

Adam L. Berger and John D. Lafferty. 1999. Information Retrieval as Statistical Translation. Proceedings of SIGIR 1999, Berkeley, CA.

Adam Berger, Rich Caruana, David Cohn, Dayne Freitag, and Vibhu Mittal. 2000. Bridging the Lexical Chasm: Statistical Approaches to Answer-Finding. Research and Development in Information Retrieval, pages 192-199.

Eric Brill, Jimmy Lin, Michele Banko, Susan Dumais, and Andrew Ng. 2001. Data-Intensive Question Answering. Proceedings of the TREC-2001 Conference, NIST, Gaithersburg, MD.

Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1993. The Mathematics of Statistical Machine Translation: Parameter Estimation. Computational Linguistics, 19(2):263-312.

Robin Burke, Kristian Hammond, Vladimir Kulyukin, Steven Lytinen, Noriko Tomuro, and Scott Schoenberg. 1997. Question Answering from Frequently-Asked-Question Files: Experiences with the FAQ Finder System. Tech. Rep. TR-97-05, Dept. of Computer Science, University of Chicago.

Ted Dunning. 1993. Accurate Methods for the Statistics of Surprise and Coincidence. Computational Linguistics, 19(1).

Abdessamad Echihabi and Daniel Marcu. 2003. A Noisy-Channel Approach to Question Answering. Proceedings of ACL 2003, Sapporo, Japan.

Roxana Girju. 2003. Automatic Detection of Causal Relations for Question Answering. Proceedings of the ACL 2003 Workshop on "Multilingual Summarization and Question Answering - Machine Learning and Beyond", Sapporo, Japan.
Ulf Hermjakob, Abdessamad Echihabi, and Daniel Marcu. 2002. Natural Language Based Reformulation Resource and Web Exploitation for Question Answering. Proceedings of the TREC-2002 Conference, NIST, Gaithersburg, MD.

Abraham Ittycheriah and Salim Roukos. 2002. IBM's Statistical Question Answering System - TREC 11. Proceedings of the TREC-2002 Conference, NIST, Gaithersburg, MD.

Cody C. T. Kwok, Oren Etzioni, and Daniel S. Weld. 2001. Scaling Question Answering to the Web. Proceedings of WWW10, Hong Kong.

Chin-Yew Lin and E. H. Hovy. 2003. Automatic Evaluation of Summaries Using N-gram Co-occurrence Statistics. Proceedings of HLT/NAACL 2003, Edmonton, Canada.

Dan Moldovan, Sanda Harabagiu, Roxana Girju, Paul Morarescu, Finley Lacatusu, Adrian Novischi, Adriana Badulescu, and Orest Bolohan. 2002. LCC Tools for Question Answering. Proceedings of the TREC-2002 Conference, NIST, Gaithersburg, MD.

Franz Josef Och. 2003. Minimum Error Rate Training in Statistical Machine Translation. Proceedings of ACL 2003, Sapporo, Japan.

Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a Method for Automatic Evaluation of Machine Translation. Proceedings of ACL 2002, Philadelphia, PA.

Marius Pasca and Sanda Harabagiu. 2001. The Informative Role of WordNet in Open-Domain Question Answering. Proceedings of the NAACL 2001 Workshop on WordNet and Other Lexical Resources, Carnegie Mellon University, Pittsburgh, PA.

John M. Prager, Jennifer Chu-Carroll, and Krzysztof Czuba. 2001. Use of WordNet Hypernyms for Answering What-Is Questions. Proceedings of the TREC-2002 Conference, NIST, Gaithersburg, MD.

Dragomir Radev, Hong Qi, Zhiping Zheng, Sasha Blair-Goldensohn, Zhu Zhang, Weiguo Fan, and John Prager. 2001. Mining the Web for Answers to Natural Language Questions. Tenth International Conference on Information and Knowledge Management, Atlanta, GA.

Jinxi Xu, Ana Licuanan, Jonathan May, Scott Miller, and Ralph Weischedel. 2002. TREC 2002 QA at BBN: Answer Selection and Confidence Estimation. Proceedings of the TREC-2002 Conference, NIST, Gaithersburg, MD.