A Model of Textual Affect Sensing using Real-World Knowledge

Hugo Liu
Software Agents Group
MIT Media Laboratory
Cambridge, MA 02139
+1 617 253 5334
hugo@media.mit.edu

Henry Lieberman
Software Agents Group
MIT Media Laboratory
Cambridge, MA 02139
+1 617 253 0315
lieber@media.mit.edu

Ted Selker
Context-Aware Computing Group
MIT Media Laboratory
Cambridge, MA 02139
+1 617 253 6968
selker@media.mit.edu

ABSTRACT

This paper presents a novel way of assessing the affective qualities of natural language and a scenario for its use. Previous approaches to textual affect sensing have employed keyword spotting, lexical affinity, statistical methods, and hand-crafted models. This paper demonstrates a new approach, using large-scale real-world knowledge about the inherent affective nature of everyday situations (such as getting into a car accident) to classify sentences into basic emotion categories. This commonsense approach has new robustness implications. Open Mind was used as a real-world corpus of 400,000 facts about the everyday world. Four linguistic models are combined for robustness into a society of commonsense-based affect recognition models that cooperate and compete to classify the affect of text. A system that analyzes affective qualities sentence by sentence is of practical value when people want to evaluate the text they are writing, so the system is tested in an email writing application. The results show that the approach is robust enough to be used, even in isolation, in textual affective user interfaces.

Keywords

Affective computing, affective UI, artificial intelligence, commonsense reasoning, Open Mind Common Sense.

INTRODUCTION

One of the great challenges facing the human-computer interaction community today is the design of intelligent user interfaces that are more natural and social. As Nass et al.'s studies on human-human and human-computer interactions suggest, people most naturally interact with their computers in a social and affectively meaningful way, just as they do with other people [11]. Successful social interaction means successful affective communication. Research on synthetic agents also supports this notion with the observation that a computer's capacity for affective interaction plays a vital role in making agents believable [1]. Researchers such as Picard have recognized the potential and importance of affect to human-computer interaction, dubbing work in this field "affective computing" [13].

In order for intelligent user interfaces to make use of user affect, the user's affective state must first be recognized or sensed. Researchers have tried detecting the user's affective state in many ways, such as, inter alia, through facial expressions, speech, physiological phenomena, and text (see [13] for a survey of the literature). We believe that text is a particularly important modality for sensing affect because the bulk of computer user interfaces today are textually based. It follows that the development of robust textual affect sensing technologies can have a substantial impact in transforming today's socially impoverished text-based UIs into socially intelligent ones. In addition, improved textual sensing can reinforce the accuracy of sensing in other modalities, like speech or facial expressions.
Previous approaches to analyzing language for affect have commonly included keyword spotting, lexical affinity, statistical methods, or hand-crafted models. This paper addresses textual affect sensing using an approach that introduces multiple corpus-based linguistic analyses of affect. A real-world generic knowledge base of commonsense called Open Mind is employed. The affect of text, at the sentence level, is classified into one of six basic categories of emotion, using Ekman's categories of happy, sad, angry, fearful, disgusted, and surprised [3]. The approach's robustness comes from three sources: 1) a society of multiple models that compete with each other; 2) its ability to judge the affective qualities of the underlying semantics; and 3) its reliance on application-independent, large-scale knowledge, covering a broad section of affective commonsense about the real world.

Paper's Focus

A related paper on this work [8] argued for the utility of evaluating affect using user-independent commonsense knowledge, and explored the user interface aspects of an application based on the approach that offered users, typing into an email browser, affective feedback about their story. This paper focuses on the architecture, practical methods, and linguistic models behind the textual affect sensing engine. A more detailed introduction to how the Open Mind Common Sense knowledge base (OMCS) works and how it is used in the system is given. The discussion revolves around how the system differs from, improves upon, and complements other textual approaches.

Paper's Organization

This paper is structured as follows: First, we frame our approach by comparing it to existing approaches in textual affect sensing, and briefly motivate the approach from the perspective of cognitive psychology. Second, we introduce the Open Mind Commonsense knowledge base, which is the basis of the first implementation of our approach. Third, we give a detailed treatment of the linguistic affect models used to evaluate text. Fourth, we present the architecture of the textual affect sensing engine. Fifth, we discuss how we evaluated our approach in the context of an experimental application. The paper concludes with a discussion of the impact of this work on the development of affective user interfaces, and proposals for future work.

WHAT IS THE VALUE OF ANOTHER APPROACH?

There are already a handful of approaches to textual affect sensing, and a fair question to ask is: why do we need another one? This section hopes to answer this question by examining the strengths and weaknesses of existing approaches. As our argument will show, a large-scale, real-world knowledge approach addresses many of these limitations. Existing approaches can be grouped into the following categories, with few exceptions: 1) keyword spotting, 2) lexical affinity, 3) statistical natural language processing, and 4) hand-crafted models.

Keyword spotting is the most naive approach and probably also the most popular because of its accessibility and economy. Text is classified into affect categories based on the presence of fairly unambiguous affect words like "distressed," "enraged," and "happy." Elliott's Affective Reasoner [4], for example, watches for 198 affect keywords (e.g. "distressed," "enraged"), plus affect intensity modifiers (e.g. "extremely," "somewhat," "mildly"), plus a handful of cue phrases (e.g. "did that," "wanted to"). Ortony's Affective Lexicon [12] provides an often-used source of affect words grouped into affective categories.
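To make the keyword-spotting baseline concrete, the short sketch below is our own simplified illustration in Python, not Elliott's actual system: it tags a sentence with an emotion category whenever a word from a small, hand-picked affect lexicon appears. The lexicon and category names here are purely illustrative.

    # A minimal keyword-spotting baseline (illustrative lexicon, not Elliott's 198-keyword list).
    AFFECT_LEXICON = {
        "happy": "happy", "delighted": "happy", "glad": "happy",
        "distressed": "sad", "depressed": "sad", "sad": "sad",
        "enraged": "angry", "furious": "angry",
        "scared": "fearful", "terrified": "fearful",
        "disgusted": "disgusted", "surprised": "surprised",
    }

    def keyword_spot(sentence: str) -> str:
        """Return the first emotion whose keyword appears in the sentence, else 'neutral'."""
        for token in sentence.lower().replace("!", " ").replace(".", " ").split():
            if token in AFFECT_LEXICON:
                return AFFECT_LEXICON[token]
        return "neutral"

    print(keyword_spot("Today was a happy day"))            # -> happy
    print(keyword_spot("Today wasn't a happy day at all"))  # -> happy (negation fools it)

As the second call shows, such a spotter has no notion of negation or underlying meaning, which is precisely the weakness discussed next.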
The weaknesses of this approach lie in two areas: poor recognition of affect when negation is involved, and reliance on surface features. Regarding the first weakness: while the approach will correctly classify the sentence "today was a happy day" as happy, it will likely fail on a sentence like "today wasn't a happy day at all." Regarding the second weakness: the approach relies on the presence of obvious affect words, which are only surface features of the prose. In practice, many sentences convey affect through underlying meaning rather than affect adjectives. For example, the text "My husband just filed for divorce and he wants to take custody of my children away from me" certainly evokes strong emotions, but uses no affect keywords, and therefore cannot be classified using a keyword spotting approach.

Lexical affinity is slightly more sophisticated than keyword spotting. Detecting more than just obvious affect words, the approach assigns arbitrary words a probabilistic affinity for a particular emotion. For example, "accident" might be assigned a 75% probability of indicating a negative affect, as in "car accident" or "hurt by accident." These probabilities are usually trained from linguistic corpora. Though it performs better than pure keyword spotting, we see two problems with the approach. First, lexical affinity, operating solely at the word level, can easily be tricked by sentences like "I avoided an accident" (negation) and "I met my girlfriend by accident" (other word senses). Second, lexical affinity probabilities are biased toward the genre of the corpus they were trained on, which makes it difficult to develop a reusable, domain-independent model.

Statistical natural language processing is another approach which has been applied to the problem of textual affect sensing. By feeding a machine learning algorithm a large training corpus of affectively annotated texts, it is possible for the system not only to learn the affective valence of affect keywords, as in the previous approach, but also to take into account the valence of other arbitrary keywords, punctuation, and word co-occurrence frequencies. Statistical methods such as latent semantic analysis (LSA) have been popular for affect classification of texts, and have been used by researchers on projects such as Goertzel's Webmind [5]. However, statistical methods are generally semantically weak, meaning that, with the exception of obvious affect keywords, other lexical or co-occurrence elements in a statistical model have little predictive value individually. As a result, statistical text classifiers only work with acceptable accuracy when given a sufficiently large text input. So while these methods may be able to affectively classify a user's text at the page or paragraph level, they will not work on smaller text units such as sentences.

Hand-crafted models round out our survey of existing approaches. In the tradition of Schank and Dyer, among others, affect sensing is seen as a deep story understanding problem. Dyer's DAYDREAMER models affective states through hand-crafted models of affect based on psychological theories about human needs, goals, and desires [2]. Because of the thorough nature of the approach, its application requires a deep understanding and analysis of the text. The generalizability of this approach to arbitrary text is limited because the symbolic modeling of scripts, plans, goals, and plot units must be hand-crafted, and a deeper understanding of text is required than what the state of the art in semantic parsing can provide.

An Approach Based on Large-Scale Real-World Knowledge

Given a multitude of existing approaches, why develop a new approach?
While existing approaches have their applications, they all fail at the robust affect classification of small pieces of domain-independent text such as sentences. We argue that the ability to sense differences in affect when progressing from sentence to sentence is important to a host of interactive applications that won't work given page- or half-page-level sensing capabilities. Examples of such applications include synthetic agents that want to give affective responses to user input at the sentence level, affective text-to-speech systems, and context-aware systems where user utterances are sparse.

We suggest that using large-scale real-world knowledge to tackle the textual affect sensing problem is a novel approach that addresses many of the robustness and size-of-input issues associated with existing approaches. Rather than looking at surface features of the text, as in keyword spotting, our approach evaluates the affective qualities of the underlying semantic content of text. Real-world knowledge allows us to sense the emotions of text even when affect keywords are absent. Whereas semantically weaker statistical NLP requires page or half-page inputs for reasonable accuracy, semantically stronger commonsense knowledge can sense emotions at the sentence level, and thereby enables many of the interesting applications mentioned above. Hand-crafted models tend to have smaller coverage and require deeper semantic understanding than can be achieved over arbitrary text domains. In utilizing a large-scale commonsense knowledge base like OMCS, we avoid having to hand-craft knowledge, and benefit from the robustness derived from the greater breadth and coverage offered by such a resource. Though in some ways similar to lexical affinity (i.e., concepts and everyday situations have a probabilistic emotional valence), our approach is more general and robust. Unlike lexical affinity, our approach parses text using broad-coverage shallow parsing and concept recognition, and therefore our text analyzer is not easily tricked by structural features like negation or by ambiguity at the word level. In addition, our knowledge comes uniquely from a large body of commonsense, whereas lexical affinity typically mines its statistical model from annotated corpora and dictionaries. Though in this paper our commonsense-based approach is discussed and implemented in isolation from the other aforementioned approaches, it should be noted that all of these methods have their merits and can play a role in mutual disambiguation and mutual reinforcement.

Brief Grounding for Our Approach

The approach put forth in this paper entails the notion that there is some user-independent commonality in people's affective knowledge of, and attitudes toward, everyday situations and the everyday world, which is somehow connected to people's commonsense about the world. Support for this can be found in the works of, inter alia, Aristotle, Damasio, Ortony [12], W. James [6], and Minsky [9]. Aristotle, Damasio, and Ortony have explained that emotions are an integral part of human cognition of the everyday world, and Minsky has gone further to suggest in The Emotion Machine that much of people's affective attitudes and knowledge is an integral part of their commonsense model of the world. Psychologist William James also noted that, just as with the rest of commonsense, the recognition of emotion in language depends on traditions and cultures, so people may not always understand another culture's expression of emotions.
Having framed our approach in terms of existing approaches and theoretical considerations, the rest of this paper will focus on more practical considerations, such as how a small society of commonsense affect models was constructed and integrated into a textual affect sensing architecture.

THE OPEN MIND COMMONSENSE CORPUS

Our approach relies on having large-scale real-world knowledge about people's common affective attitudes toward situations, things, people, and actions. If we want our affective sensing engine to be robust, we will have to supply it with a great breadth of knowledge that reflects the immensity and diversity of everyday knowledge. Generic commonsense knowledge bases are the best candidate sources of such knowledge because affective commonsense is generally a subset of this knowledge, and such knowledge bases are generally rather large, on the order of hundreds of thousands to millions of pieces of world knowledge.

We are aware of three large-scale generic knowledge bases of commonsense: Cyc [7], Open Mind Common Sense (OMCS) [14], and ThoughtTreasure [10]. Cyc is the largest of the three, with over 3 million assertions about the world, followed by OMCS, with close to half a million sentences in its corpus. ThoughtTreasure has around 100,000 concepts and relations. The implementation discussed in this paper mines knowledge out of OMCS because its English-sentence representation of knowledge is rather easy to manipulate and analyze using shallow language parsers. In the future, we expect to also incorporate knowledge from the other two commonsense knowledge sources. One caveat is that many of the analysis techniques and possible knowledge representations may be specific to the representation of the corpus used. Our system uses OMCS, so the discussion of our system necessarily entails discussion of some OMCS-specific methods.

In OMCS, commonsense is represented by English sentences that fit into 20 or so sentence patterns expressing a variety of different commonsense relations between concepts. Here are some examples of knowledge in OMCS (words belonging to the sentence pattern are italicized in the original):

Non-affective:
- An activity a doctor can do is examine the patient.
- You are likely to find rollercoasters in an amusement park.
- The effect of eating dinner is loss of appetite.

Affective:
- Some people find ghosts to be scary.
- A person wants popularity.
- A consequence of riding a rollercoaster may be excitement.

From OMCS, we first extract a subset of the sentences which contain some affective commonsense. This represents approximately 10% of the whole OMCS corpus. The identification of these sentences is heuristic, accomplished mainly through keyword spotting of known emotion adjectives (e.g. happy, sad, frightening), nouns (e.g. depression, delight, joy), and verbs (e.g. scare, cry, love), taken mainly from Ortony's Affective Lexicon [12]. These affect keywords serve as emotion grounds in sentences, because their affective valences are already known.
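To illustrate this filtering step, the following is a rough sketch of how the affective subset might be selected. It is our own simplification with a tiny illustrative ground lexicon, not the system's actual heuristics.

    import re

    # Hypothetical sketch of the emotion-ground filter over OMCS sentences.
    # The ground lexicon below is a tiny illustrative sample, not the real lexicon.
    EMOTION_GROUNDS = {
        "happy": ["happy", "delight", "joy", "love"],
        "sad": ["sad", "depression", "cry"],
        "angry": ["angry", "enraged"],
        "fearful": ["scary", "scare", "frightening"],
        "disgusted": ["disgusting", "disgust"],
        "surprised": ["surprising", "surprise", "exciting"],
    }

    def find_grounds(sentence):
        """Return the set of Ekman categories grounded by keywords in an OMCS sentence."""
        tokens = re.findall(r"[a-z]+", sentence.lower())
        return {emotion for emotion, keywords in EMOTION_GROUNDS.items()
                if any(k in tokens for k in keywords)}

    def affective_subset(omcs_sentences):
        """Keep only sentences that contain at least one emotion ground."""
        kept = []
        for s in omcs_sentences:
            grounds = find_grounds(s)
            if grounds:
                kept.append((s, grounds))
        return kept

    corpus = ["Some people find ghosts to be scary.",
              "You are likely to find rollercoasters in an amusement park."]
    print(affective_subset(corpus))  # only the first sentence survives, grounded as 'fearful'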
A SMALL SOCIETY OF COMMONSENSE-BASED LINGUISTIC AFFECT MODELS

After identifying a subset of the commonsense knowledge that pertains to emotions, we build a commonsense affect model enabling the analysis of the affective qualities of a user's text. In truth, such a model is a small society of different models that compete with and complement one another.

All of the models have homogeneously structured entries, each of which has a value of the form:

[a happy, b sad, c anger, d fear, e disgust, f surprise]

In each tuple, a-f are non-negative scalars representing the magnitude of the valence of the entry with respect to a particular emotion. As a starting point, we work with the six so-called basic emotions enumerated above, based on Ekman's research on universal facial expressions [3]. It should be noted that our approach can be grounded in any set of basic emotions which can be discerned through affect keywords; the most prominent candidates include sets proposed by Ekman, Frijda, W. James, and Plutchik. For a complete review of proposals for basic emotions, see [12].

The Models

In the remainder of this section, we give an in-depth review of each of the affect models generated from OMCS, talk briefly about how we generate them from OMCS, and discuss smoothing models at the inter-sentence level.

Subject-Verb-Object-Object Model. This model represents a declarative sentence as a subject-verb-object-object (SVOO) frame. For example, the sentence "Getting into a car accident can be scary" would be represented by the frame:

[subject: ep_person_class*, verb: get_into, object1: car accident, object2: ]

whose value is:

[0 happy, 0 sad, 0 anger, 1.0 fear, 0 disgust, 0 surprise]

In this example, we refer to "scary" as an emotion ground because it confers an affective quality to the event in the sentence by association. In this sentence, there are two verb chunks, "getting into" and "can be." "Can be" refers to the relation between an event and an emotion, so this relation is used to assign a value to the event "getting into a car accident." For the event phrase, the subject is omitted, but we know from this relation (sentence template) in OMCS that the implicit subject is a person, so we fill the subject slot with a default person object. The verb is get_into insofar as it is a phrasal verb. The object1 slot is a noun chunk in this case, but may be an adjective chunk. The object2 slot is empty in this example, but in general either object slot may be a noun or adjective chunk, a prepositional phrase, or a complement clause.

This example does not cover the model's treatment of negation or of multiple SVOOs in one sentence. Negation is handled as a modifier to a subject, object, or verb. If there are multiple verb chunks in a sentence, and thus multiple SVOOs, then a heuristic disambiguation strategy will try to infer the most relevant candidate and discard the rest. The strength of this model is accuracy. SVOO is the most specific of our models, and best preserves the accuracy of the affective knowledge. Proper handling of negations prevents opposite examples from triggering an entry. The limitation of SVOO, however, is that because it is rather specific, it will not always be applicable. We try to make SVOO slightly more robust by semantic class generalization techniques, which we discuss later in the paper.
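The homogeneous entry structure and the SVOO frame above might be represented roughly as follows. This is a hypothetical sketch for illustration; the field and type names are ours, not the system's actual data structures.

    from dataclasses import dataclass, field

    EMOTIONS = ("happy", "sad", "anger", "fear", "disgust", "surprise")

    @dataclass
    class EmotionValue:
        # Non-negative magnitudes for each of the six basic emotions.
        scores: dict = field(default_factory=lambda: {e: 0.0 for e in EMOTIONS})

    @dataclass
    class SVOOFrame:
        subject: str            # default person class when the subject is implicit
        verb: str               # phrasal verbs are kept whole, e.g. "get_into"
        object1: str = ""       # noun or adjective chunk, PP, or complement clause
        object2: str = ""

    # Entry for "Getting into a car accident can be scary."
    frame = SVOOFrame(subject="ep_person_class*", verb="get_into", object1="car accident")
    value = EmotionValue({**{e: 0.0 for e in EMOTIONS}, "fear": 1.0})
    print(frame, value.scores)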
Concept-level Unigram Model. For this model, concepts are extracted from each sentence. By concepts, we mean verbs, noun phrases, and standalone adjective phrases. Concepts which are obviously affectively neutral by themselves (e.g. "get," "have") are excluded using a stop list. Each concept is given the value of the emotion ground in the sentence. For example, in the sentence "Car accidents can be scary," the following concept is extracted and given a value:

[concept: car accident]
Value: [0 happy, 0 sad, 0 anger, 1.0 fear, 0 disgust, 0 surprise]

Negations are handled by fusing the prefix not_ to the affected verb. Noun phrases which contain adjectival modifiers are generalized by stripping the adjectives. Then both the original and generalized noun phrases are added to the model, with the generalized NP necessarily receiving a discounted value. Concept-level unigrams are not as accurate as SVOOs because they relate concepts, out of context, to certain affective states. However, this model is more robust than SVOO because it is more independent of the surface structure of language (the specific syntax and word choices through which knowledge is conveyed).

Concept-level Valence Model. This model differs from the above-mentioned concept-level unigram model in its value. Rather than the usual six-element tuple, the value is a single number between -1.0 and 1.0, indicating whether a word has positive or negative connotations. Evaluating valence affords us the opportunity to incorporate external semantic resources such as dictionaries into our affect model. Associated with this model is hand-coded meta-knowledge about how to reason about affect using valence. For example, knowing the valences of "wreck" and "my car," we can deduce that the sentence "I wrecked my car" has negative affect. To make this deduction, we use the following piece of meta-knowledge:

narrator + neg-valence + pos-valence → neg-valence
(e.g., "I WRECKED MY CAR")

Although this model does not give us mappings into the six emotions, it is more accurate than the concept-level unigram model. It is also useful in disambiguating a story sentence that the other models judged to fall on the cusp of a positive emotion and a negative emotion.

Modifier Unigram Model. This model assigns six-emotion tuple values to the adjectival and adverbial modifiers found in a sentence. The motivation behind this is that sometimes modifiers are wholly responsible for the emotion of a verb or noun phrase, as in the sentences "Moldy bread is disgusting" and "Fresh bread is delicious."

Generating models

In constructing each of the aforementioned models, we first choose a bag of affect keywords, pre-classified into the six basic emotions. These words act as emotion grounds with which we can interpret the OMCS sentences. To build up the models, we make a first pass in which emotion grounds propagate their value to other concepts/SVOOs/modifiers (this is model-specific) in the same sentence. To improve coverage (the number of concepts with an affect value), we make a second and a third pass over the entirety of OMCS, propagating the affect value of concepts that have a non-zero value to concepts in the same sentence which have a zero value. After each propagation, the affect value is discounted by a factor d. An example of propagation for the concept-level unigram model is given here (the six-tuple refers to happy, sad, anger, fear, disgust, and surprise):

Something exciting is both happy and surprising.
(pass 1: exciting: [1,0,0,0,0,1])
Rollercoasters are exciting.
(assume d=0.5; pass 2: rollercoaster: [0.5,0,0,0,0,0.5])
Rollercoasters are typically found at amusement parks.
(pass 3: amusement park: [0.25,0,0,0,0,0.25])
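The propagation passes can be sketched roughly as follows. This is a simplified, hypothetical rendering: it treats each OMCS sentence as a bag of concepts and ignores the relation structure and model-specific details that the real trainer uses.

    # Hypothetical sketch of propagation training for a concept-level unigram model.
    # Grounded concepts seed the model; each pass spreads values, discounted by d,
    # to co-occurring concepts that do not yet have a value.
    EMOTIONS = ("happy", "sad", "anger", "fear", "disgust", "surprise")

    def zero():
        return {e: 0.0 for e in EMOTIONS}

    def propagate(sentences_as_concepts, seed_values, passes=3, d=0.5):
        """sentences_as_concepts: list of concept lists; seed_values: concept -> six-tuple dict."""
        model = {c: v.copy() for c, v in seed_values.items()}
        for _ in range(passes):
            updates = {}
            for concepts in sentences_as_concepts:
                for src in (c for c in concepts if c in model):
                    for dst in concepts:
                        if dst not in model and dst not in updates:
                            updates[dst] = {e: model[src][e] * d for e in EMOTIONS}
            model.update(updates)   # apply at the end of the pass
        return model

    sentences = [["exciting"],                        # grounded via the seeds below
                 ["rollercoaster", "exciting"],
                 ["rollercoaster", "amusement park"]]
    seeds = {"exciting": {**zero(), "happy": 1.0, "surprise": 1.0}}
    print(propagate(sentences, seeds)["amusement park"])  # happy and surprise end up at 0.25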
With the completed commonsense-based affect models, we can evaluate texts at the sentence level and sense affect in terms of the six Ekman emotions, or determine the affect to be neutral, which means that there is not enough information or confidence to make a classification. Each sentence is classified by running each of the models on that sentence and applying a weighted scoring function to the results of each model.

Smoothing models

After sentences have been annotated with one of the six basic emotions or neutral, we apply various techniques aimed at smoothing the transition of emotions from one sentence to the next.

Decay. The most basic smoothing is decay. A good example of this is when a sentence annotated as surprised is followed by two or more neutral sentences. The neutral sentences are most likely the result of sentences not handled well by our models, and the emotion of surprise is not likely to transform abruptly into neutral. Decay in this case would revise the annotation of the first neutral sentence to 50% surprised, or whatever the decay rate is decided to be.

Interpolation. Another related smoothing technique is interpolation. For example, if a neutral sentence lies between two angry sentences, then one of the hand-coded interpolation rules will revise the annotation of the middle sentence to 75% angry, or whatever the interpolation factor is set to be.

Global mood. A third smoothing technique is global mood. In storytelling, larger sections of text, such as the paragraph or even the entire story, establish and preserve moods. Computationally, we can analyze the raw scores and affect annotations to figure out the mood of a paragraph and of an entire story. We can then add a memory of these moods onto each sentence within the paragraph or the story, respectively.

Meta-emotion. A fourth technique, and perhaps the most interesting, is the meta-emotion. We observe that certain emotions not part of the six basic emotions actually emerge out of patterns of the six basic emotions. This is highly desirable because it gives the six basic emotions more expressive ability, and allows them to have more RGB-like properties (to make an analogy to computer color blending). Several examples of meta-emotions are given below:

- Frustration: repetition of low-magnitude anger
- Relief: fear followed by happy
- Horror: sudden high-magnitude fear
- Contentment: persistent low-level happy

Of course, as with any pattern recognition, meta-emotion detection is not fool-proof, but since meta-emotions are meant to express more natural transitions between emotions, they will generally fail softly, in that the wrong meta-emotion won't be far off from the right meta-emotion.
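The decay and interpolation rules can be sketched as follows. This is a rough, hypothetical sketch under assumed rates (0.5 for decay and 0.75 for interpolation, taken from the examples above); the annotation format is our own simplification, and the real smoother also applies global mood and meta-emotion rules.

    # Hypothetical sketch of decay and interpolation over per-sentence annotations,
    # each an (emotion, magnitude) pair where emotion may be "neutral".
    def smooth(annotations, decay_rate=0.5, interpolation=0.75):
        out = list(annotations)
        for i, (emotion, _) in enumerate(out):
            if emotion != "neutral":
                continue
            prev = out[i - 1] if i > 0 else None
            nxt = annotations[i + 1] if i + 1 < len(annotations) else None
            # Interpolation: a neutral sentence flanked by the same emotion takes it on.
            if prev and nxt and prev[0] == nxt[0] != "neutral":
                out[i] = (prev[0], interpolation * prev[1])
            # Decay: otherwise inherit a decayed copy of the previous emotion.
            elif prev and prev[0] != "neutral":
                out[i] = (prev[0], decay_rate * prev[1])
        return out

    story = [("surprise", 1.0), ("neutral", 0.0), ("neutral", 0.0)]
    print(smooth(story))  # surprise, then 50%-strength surprise, then 25%-strength surprise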
Having discussed our models and methods in detail and in isolation, we now reincorporate this presentation into a demonstration of the overall architecture of the system.

ARCHITECTURE OF AN AFFECT SENSING ENGINE

The architecture of the affect sensing engine follows closely from the approach outlined in the previous section. It can be viewed in two parts: 1) the Model Trainer; and 2) the Text Analyzer.

Model Trainer Architecture

The Model Trainer architecture has three sequential modules: 1) the Linguistic Processing Suite; 2) the Affective Commonsense Filter & Grounder; and 3) the Propagation Trainer.

1) The raw OMCS corpus of nearly half a million sentences first undergoes linguistic processing. Because OMCS sentences follow a sentence-template-based ontology, we first rewrite such sentences as binary relations. To satisfy the representational needs of the different models, we also perform a suite of linguistic processing including part-of-speech tagging, phrase chunking, constituent parsing, subject-verb-object-object identification, and semantic class generalization (e.g., I → narrator; People → ep_person_class).

2) From this parsed OMCS corpus, we use emotion ground keywords classified by the six Ekman emotions, first to filter the affective commonsense from the whole of OMCS, and second to tag the emotion keywords as grounds in preparation for training the models.

3) In the third module, the propagation trainer propagates the affect valence from the emotion grounds to concepts related through commonsense relations, and from those concepts to yet other concepts. Each propagation discounts the valence by some factor d, e.g. 0.5. This propagation can be viewed as analogous to undirected inference, or spreading activation over a semantic network of concept nodes connected by commonsense relation edges.

Text Analyzer Architecture

The text analyzer architecture can be decomposed into five sequential modules: 1) Text Segmenter, 2) Linguistic Processing Suite, 3) Story Interpreter, 4) Smoother, and 5) Expressor.

The incoming story text is first segmented into paragraphs, then into sentences, and then into independent clauses, which are the smallest story units that can capture an event. In the Story Interpreter module, each parsed and processed sentence is evaluated against the trained models, and a weighted scoring function generates a six-tuple score. Disambiguation metrics help to map this final score into an emotion annotation. In the output of this module, each sentence is annotated with one of the six basic emotions, or neutral, and is also annotated with the total score. The next module, the Smoother, performs pattern matching over these emotion annotations and re-annotates each sentence to reflect the smoothing strategies of decay, interpolation, global mood, and meta-emotions. The re-annotated sentences retain the same form, but may have additional global_mood and global_mood_valence fields. They are then expressed by the Expressor, which serves as a placeholder for some output modality or application.
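To illustrate the Story Interpreter's final step, the following hypothetical sketch combines per-model six-tuple scores with fixed weights and maps the result to an emotion, or to neutral when confidence is too low. The weights and threshold are invented for illustration and are not the system's actual parameters.

    # Hypothetical sketch of the weighted scoring step in the Story Interpreter.
    EMOTIONS = ("happy", "sad", "anger", "fear", "disgust", "surprise")

    # Invented weights: the SVOO model is the most specific, so it gets the largest say.
    MODEL_WEIGHTS = {"svoo": 0.5, "concept_unigram": 0.3, "modifier_unigram": 0.2}

    def classify(model_scores, threshold=0.3):
        """model_scores: model name -> six-tuple dict of scores for one sentence."""
        combined = {e: sum(MODEL_WEIGHTS[m] * s.get(e, 0.0) for m, s in model_scores.items())
                    for e in EMOTIONS}
        best = max(combined, key=combined.get)
        return (best if combined[best] >= threshold else "neutral", combined)

    scores = {"svoo": {"fear": 1.0}, "concept_unigram": {"fear": 0.5}, "modifier_unigram": {}}
    print(classify(scores))  # -> ('fear', ...) since 0.5*1.0 + 0.3*0.5 = 0.65 >= 0.3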
EVALUATION THROUGH A PROTOTYPE APPLICATION

For an interesting way to test the technical merits of our approach, we incorporated our affect sensing engine into Chernoff-face-style feedback in an affectively responsive email browser called EmpathyBuddy. This experimental system allows us to test the robustness of our approach against everyday use, rather than against a formal corpus. We also motivate an in-application test with the question: can this approach work well enough to make a practical impact on the design of affective user interfaces? We believe that the user study evaluation shows just that. This section is subdivided into three subsections: 1) an overview of the EmpathyBuddy user interface; 2) a scenario of a user interaction with the system; and 3) a summary of our user study evaluation. For a fuller discussion of EmpathyBuddy and its evaluation, see [8].

EmpathyBuddy

EmpathyBuddy is an email browser with a Chernoff face embedded in its window that emotes in sync with the affective quality of the text being typed by the user (shown in Figure 1). EmpathyBuddy's faces express the six basic Ekman emotions, plus decayed versions of the six basic emotions, and also four transitory meta-emotions.

Figure 1. EmpathyBuddy email agent.

The layout of the email browser is meant to be familiar to the user. At the upper left corner are email header fields. In the lower left is the email body text box. A daemon frequently polls the text in the body and analyzes it using the emotion sensing engine. The avatar in the upper left changes faces to try to match the dynamic affective context of the story. Because of the limitations of pre-drawn facial expressions, aspects of the affect annotations output by the sensing engine, such as global mood, cannot be fully expressed.

A User's Encounter with the System

Let us walk through a scenario in which the user writes an email to her mom telling her that she bought a new car but then wrecked it. Thankfully, though, she was not hurt. Figure 2 gives a walkthrough of this scenario. This is a useful scenario because it highlights some of the more advanced features of the affect sensing engine.

Figure 2. User scenario.

In the sentence "It's a gorgeous new sports car!" the engine's models are not yet certain about the affect of sports cars. They show that this sentence is ambiguous and that it could be one of two emotions: surprise or anger. Three disambiguation features all concluded that the correct emotion was surprise. First, according to the concept-level valence model, this sentence was characterized by positive emotion, and since surprise is positive whereas anger is not, this feature chose surprise. Second, the previous-sentence disambiguation feature prefers surprise because that emotion also occurred in the previous sentence. Third, according to the fail-soft strategy of only showing anger and disgust in extreme cases, anger would also have been disallowed from occurring here.

The last two sentences are a good illustration of meta-emotion smoothing. The sentence "I got into an accident and I crashed it" evoked fear, while the sentence "Thankfully, I wasn't hurt" evoked happy. However, it would seem rather unnatural to change suddenly from a frightened expression to a happy expression. Humans don't easily forget the anxiety they held two seconds ago! The affect sensing engine recognizes the pattern of moving from fear to happy as a meta-emotion. It decides that happy should actually be revised into relief (from anxiety). It does so accordingly, and the emotion displayed by the EmpathyBuddy avatar reflects this.

User Study and Evaluation

A 20-person user study was conducted to quantitatively measure the performance of the EmpathyBuddy email browser in a practical scenario. Each user was asked to perform the task: "send an email to someone and tell them a brief but interesting story about something you did recently." They were asked to type into three interfaces, given in random order. All three interfaces use the look and feel of the EmpathyBuddy mail client, differing only in the behavior of the face. The baseline is Neutral face, which displays a static neutral-looking face. To control for the deliberate selection of faces by the affect sensing engine, a second baseline called Alternating, Randomized faces was created, which displays a randomly selected face at the end of each sentence. The third client is EmpathyBuddy.

Figure 3. User testing questionnaire results.

Users were asked to evaluate the system against four aspects: entertainment, interactivity, intelligence, and adoption, as shown in Figure 3. (Each line segment bisecting the ends of the bars represents one standard deviation above and below the mean.)
The results of our user testing suggest that the textual affect sensing engine works well enough to bring measurable benefit to an affective user interface application. In light of the fact that users' inputs closely mirror real application use, the results also speak to the robustness of our engine. We were gratified that the client using the affect sensing engine was judged to behave the most intelligently of the three, and that users expressed enthusiasm for adopting EmpathyBuddy as their regular mail client.

Two results were rather unexpected. First, the randomized-faces client was judged to be slightly more entertaining than EmpathyBuddy. User feedback suggests one possible explanation: the randomized faces had a quicker turnover of faces and were more outrageous (e.g., showing disgust and anger with equal probability)! It is satisfying to note, though, that quick turnover of faces mattered less to the perception of interactivity than the relevance of the faces. Second, the randomized-faces client scored better than expected on the question of system adoption, even though there was no AI behind it. Users told us that they were so bored of their static email clients that they were more than willing to flock to something, anything, more interactive and entertaining!

CONCLUSION AND FUTURE PROSPECTS

This paper presents a novel approach to classifying the affect of text into the six Ekman emotion categories of happy, sad, angry, fearful, disgusted, and surprised. By leveraging a real-world knowledge base called Open Mind, with 400,000 pieces of knowledge, we can evaluate the affective nature of the underlying semantics of sentences in a robust way. This approach addresses many of the limitations of four previously tried approaches to textual affect classification. First, while keyword spotting relies on surface features of text, our more robust approach evaluates the affective nature of the underlying story semantics. Second, lexical affinity is not robust because structural features, such as negation, can trick it; in contrast, our approach employs competing linguistic models that factor in structural features. Third, while statistical techniques can be effective, they require a large input and they are not transparent to explanation. Our approach is robust enough to work at the sentence level, which breaks an important barrier and allows for much more interactive behavior. It is also easy to explain the reasons for a particular classification, by examining a trace of the commonsense inferences involved in an evaluation decision. Fourth, our approach is more flexible than hand-crafted models because the knowledge source, OMCS, is a large, multi-purpose, collaboratively built resource. Compared to a hand-crafted model, OMCS is less biased, more domain-independent, easier to produce and extend, and easier to use for multiple purposes.

To test the suitability of the approach, in isolation, for affective interactive applications, we incorporated the affect sensing engine into Chernoff-face-style feedback in an affectively responsive email browser called EmpathyBuddy. User evaluations suggest that the approach and implementation are robust enough to be used by everyday people in an everyday task like email. In future work, it is important to demonstrate how a real-world knowledge approach can complement each of the existing approaches, and to evaluate the precision and recall of all the approaches against a standardized test corpus.
We hope to improve the linguistic sophistication and accuracy of the society of commonsense affect models, and we plan to add directed inference capabilities to the training of the models in an attempt to reduce their noise. Finally, we wish to examine some of the many possible uses for a rather robust, domain-independent, sentence-level affect analyzer. Such technology could aid the development of affective user interfaces for synthetic agents, affective text-to-speech, characters in multi-user gaming, storytelling, and context-aware systems.

ACKNOWLEDGMENTS

The authors thank our colleague Push Singh for directing Open Mind Commonsense, and Cindy Mason for her invaluable feedback during the preparation of this paper.

REFERENCES

1. Bates, J. The Role of Emotion in Believable Agents. Communications of the ACM, Special Issue on Agents, July 1994.
2. Dyer, M.G. Emotions and Their Computations: Three Computer Models. Cognition and Emotion, Vol. 1, No. 3, 323-347, 1987.
3. Ekman, P. Facial expression of emotion. American Psychologist, 48, 384-392, 1993.
4. Elliott, C. The Affective Reasoner: A Process Model of Emotions in a Multi-agent System. PhD thesis, Northwestern University, May 1992. The Institute for the Learning Sciences, Technical Report No. 32.
5. Goertzel, B., Silverman, K., Hartley, C., Bugaj, S., and Ross, M. The Baby Webmind Project. Proceedings of AISB 2000.
6. James, W. What is an emotion? In C. Calhoun and R.C. Solomon (Eds.), What is an emotion? Classic readings in philosophical psychology (pp. 127-141). New York: Oxford University Press, 1984.
7. Lenat, D.B. CYC: A large-scale investment in knowledge infrastructure. Communications of the ACM, 38(11):33-38, 1995.
8. Liu, H., Lieberman, H., and Selker, T. Automatic Affective Feedback in an Email Browser. Submitted to ACM SIGCHI 2003.
9. Minsky, M. The Emotion Machine. Pantheon, not yet available in hardcopy. Drafts available at http://web.media.mit.edu/~minsky/.
10. Mueller, E.T. ThoughtTreasure: A natural language/commonsense platform (online). http://www.signiform.com/tt/htm/overview.htm, 1998.
11. Nass, C.I., Steuer, J., and Tauber, E. Computers are social actors. In Proceedings of CHI '94 (Boston, MA), pp. 72-78, April 1994.
12. Ortony, A., Clore, G.L., and Collins, A. The cognitive structure of emotions. New York: Cambridge University Press, 1988.
13. Picard, R.W. Affective Computing. The MIT Press, Cambridge, MA, 1997.
14. Singh, P. The public acquisition of commonsense knowledge. Proceedings of the AAAI Spring Symposium. Palo Alto, CA: AAAI, 2002.
~6ܽΡʝoy;vhv|v)gcm[]h.w{ǵmEӶk]=u 4}nsmw?mȷ̆[ԳQƶc~}&z2V_a +_7 nf́3OAk"]C XVLݷ4h@urdc(d3Cእ"EpjysM (w_jmi \1wS @]Q?|l=W9oxζc'] `1 H7;Z|wж6ֵ󮋣8pmr%m -~vtoJmel֠6Zxl e!vrz!ȇKdh KmA zt4fSRޒ7~ <ly^ z;7& wYg/WPuvbKx'aO"6??n@28;;{ĚL rij' qb´W5U9wc3E_5LNa8}+=ӾklCxIb_!(d 6iu9@77[O79/-x^PW.!P9zo!Jh: ! @xJxf(`ޅ ՁBh"yG f-4IzYԘ`fI ] m[0}@0cv 1ma;x]j[Ы1Sx4P؂m)Q=f 2#L0)e}{fl.k7վ9QS޴F9Md"?}!<#~trK/@.I8}||㷼<^g2n~O']pw7Oӟ_A_ wx@LuUB*m]~W`+{0??Ͼkzt~XA1NJӀW" BW^|#G1TmRm_/~Om+& m{Sd/A>_BYB=rv׵m9o=ڶG>gGm{AސNj?"eGOy:lK| >AI5=mk +U;? o.`͛ߟ}q(eby3QpX_O_<~/?i{@)xԘ5W o}|筏ϒpㄞNtǺ^O3k~͊4*Ls__ȷݿTW¬α':f%1K꼁K =nP%dH\9 7!Ld{LH|=& ]_?L7W9 B>!߮;o^_ދŶ}Ǯm@9޴gBׯހ 8Ai-ֻzjw>d3pbKw%z]}7%/w{Ħx|ZzĵmOڼ>{h&Ei}&Jt^o=y="G Tohч@ٻsq|fNB+Eժ dJ駛}y,_x2.|$ljn+73XtI2D{qfE+zqfE21e;t 8230Sb9=2Ϭg:Anhԋ2>A7G~ Yу7]1APR?/ER)菻~߼˿wΦCh)J13!Gd ^_|u3X¼ $3d!V3+s$xtǀyY8k8!?覊a83kHf.yݢߣ9ڶ!ԔǾa^\@ )bm*&qC93y?;ȵpԸ<^;gͅ9Z^RӱжY[M1 aa$ q}ggz78 Sݛ n@;L@ff`xyZhdӇ^ Vh)37~y,G Yh@_94cv15~<~kWG>,wWw8B.J_G<!|^b>/~*'pW^c=}q\BIq=^|A79#A|Ͼ=v K-nAw&鮻q~ [rE\^ eGf@=oϾOЎ?7n߿t M%z1V=HћE}{ z|8NF>r\ yͥ{%h|oy_lvRgA$? ~w縥r^Cx!^z_.u.z#W_\B݋_ȕN'ܧ[VE5|KL#  Cz{?oAC uM3NO9SDuFo6qn>÷"d`: nsŵ<Q+[uɛ-s [ ͏.z' ;Nu12+A(~56-4s|(AG2 GbX( $%B ^\Kg^3@B'?{Ap`Ҁά10kQb@׎B0Z 8 \?w3?v2J;9!'E<%"4NVDl踎k 8N&̙ <Nd'p)ì!_v)8PV w#fw9NOq5NQwY>dLsL/l.<INq&'5l`@Yץ, K&ߎzstqLڗ=͊#H6D搟wsۣ.?aљzw{ߊOaP7'e qAD}'p!|r'Xz긞4#R'ȧ+9 ;#!ƻ@&IHh@>rBT'<ْȰc.7Ѹ]nury &w(ftX sQ}y, Wr@2Np 14Ëa=l=QHgO!@Ūޡhty=6Cx7q81rOtg[ffy)}s)R: ubgw * $i;(ID2crsI2JV?F"Hiܳ/Bp65YQKc=A1[O*>_œs3Ok!yDųsdXn8B@1zB= KR zB8l@qd` q$"R z7~=A>šK)?DM3L }<vxxf/1h`WA'Cnji4=఻+L>LHu4N- c;$97,4ʹ# %dlAV- ~ ܡx=ΰ< LDcf` 08|KtH=8tRH=nAW7f?'c:Kzl}ߩc#PH S-B4m* T%ObLCC}q (8aP2JAP#Zi'ҧr ҷws7>"y '0ȝ&t#F?;^LV`8=s7XZcR@ݎ3 L}ۑTW`:JԄjP\mxVƐ315!H)#ޑvoG>vs j,nC㩶kRI'||n5ֲlM 0+{R18H[&- ;[x'(dcm'C>YN:?l]+syd}m֛ `LNwΐ\7 Qc qQ'8W٢ha`q9ο+kq>A'>@!Fy0q4{>Apƴ OP@|z5Z*m].OA)cɗO;hWj v]qwGpr/~.!Z̊N23XkC!ހԹ@ ׶ۨ׭[OwCf?Nݛ%9wTiw(N+~q7ɷؙa!cl&`v ;F8N̬ӑ^L'8H[7N0D !D rzLvZ B= D`?>V'OO&Dx+//'D  0"@N2W$Ď`f)"p N  D`f3LDGLpUD 3#@&`'D<C_P@%/~-F.zkBM 52SD,50n IC@=9NPwH6!`ɳVl! "@A`&8$> ?z N͉=LX B+bK&FNQ24GsVR8 @Ȍ0A )W vN="KF^ԟ"p( Eۆ m+˫ kVO@~{{l X~>=2X MZ\o7&q}:WWs'9^Aggg΢m^rwܑ"|"^2Pusٞ^z)$$<(%/#쮤w~pO\_J_ei+3 mmu 8N7Cp4~Fq*?޹{쎓wݻjiv=/_t< duF. ѵSF.- ͼ~|o"\rP$+9l4YYj5M&8DkG:sH]2) ҙs/Mvs ,)[.˗ztڃ]I`C ߰HKƑ墰~ k 㻄x "p(ݟ3@/O)ak7(}%y;k)^۷ &^ܑ>K.:iVzūVUݻ)ij`aDS I$6ta˄IbM]4}xI=!p $Sq| s]ڑ.<]z@UtNƹWVSKg3l}|m#iğLClϠ von5 Ten",}g_; wCMkd 5z0Nb[PRc W ?d=-'S> ;هU s)_FN0#* aH9FZ`>d qS;㖜`qE _>m V! C> A3UqϠV3DT/ɲxK2\Όi ACF;]\ݍ 8g+[J\v!nUagb&(>:U}Hwѕ4 +dy9C>c^ΕmsXr p%Vx4#!%M U@~b^ϝ->߸ ?xEL):$Ҡ3`Q|K9Bqa>nzaU-z2$b d {v7 4l,DIʄjmBVyuܪ]W1B2$ O2))##roM\dUf򏯒 Rбj=>?F,ҞxV,~;0F:Uo0ֈ`"餲fIgѶ2=V`i6j#\#ҩ7@)ԍ_aYcdv|4YSX9;ڰq ʗ+Sܞ³Y桚eшn +7Ygcr!FYmela= {ֆVtM1ޒ[ہ]ے2<`_0J @@߁LP4FY{nŒn~ My $b2u +&q'?S?-1 ZH ݋c6Q'{@v`*(H>Ă?Mjjh&{C$4fiNcY@ "$ˌlbVRصrO@2 aiPF1}q)"`Ų1vZ˜rxxoTv>KǔDF";ɒ]vr7.,HWlK`֏ 7Qa 1!L~/ʄ@$>AO˨ߖu1JPBkPY,ٝ0Yx| &YdBNm8?R,WDr#M u]Qk*кOvY-w\D!w;Rq#gU%ɍ EUGAC[!c&)%ē;q֬6GMlKbD̊@B;~634"vV^ a~tۅQ˯6zd#W; JbZrx# :!/MEd?zvЬM‰8Mjv}0qފ~$s3}waF6VUNhmS <^Oa*T'#EmRvI.~EhRD;a6KAb4N xtx+잊jJznd3ѥ;Y“*#Qot\| P@^'e)A&XF]"P#5q WͺrK]큑!?; .HimC=c OCa y9+uw55A𻾚Q8'fQ1"Vցս COQ#(A{'.3_yQ LYpmGQilX᫗2 j;&q,Z % NQnJȲETjMLGm%k99ٺ+7,#:U1rK`"U&bL60X=oޝfB[AEB~/S@UMIV{/ҽvRe'q)"@&D@Hwf$/ĮԇE*MJK3? 
&id+PrӉeF\WgrS|w{/`j)vPFsY\c4:^%Ӹg  g' gAnOs0qhD_od6v2'&~6jsn\"*aD.Γ)VBBh ní0=fM!Dh&O6+7O:6p5]FHfEmt^~S,&'kh͓{_@ ̮ɳ=|5; pܐ9_S3'7s.=k˛\8{~"fb5 P X#CB.nTА3}Mߏt&]Auεh󑫡W{d ?sЌCHN3k[}7#*% !J`p0j mc&;<jjgτG# &K'%>8YlSmT&~Cs%ód"53pyƋ%Z/;5/]~ad0jkiWIim1~"539Bv7AqMqF/dG\ǎ ?c3AZBIyelgݰ9ȕn&% yՊ`坅vP C`aaSJۻF DHQ %tXXwmD^_`!'_GpCUF+f l5&b+Id+$aħW# ,?>'џvXRk^9Z#iVf'b3-8e6gnCΟUk4ȮIHc5Acb[b1LL;ql#tѡz.L";|۽sһXs-[*$!j7e Bݻ^ lM&u J7e]ךټFv#nUd"P}^;LIVCzphZx f*xqlS-o>6݇ɣakLS԰pD[Et:$IYk-vx;@lQ qOXޢ6YRVQ>5&R(▋#E&V hZ~ wd`& eU F4s~kg`MyF'AoTL@EP>Z| |,p! u:  o.RE%wz\4FfEJ78wG1>,~bg6g|]ޙ<0!Z?>X48W[? m mL}dr@ۛMp"pY!80^fY>hc邕+حSW34$dS An5`5w Nzy Lsj {7+D 2AZ_KjH@b ԚKhhmhnO%#'cL"׬_^55ܑ3ªvxb[ɝ~=ꌵl:9Z@7( 㣽̈́@om."#v/N.v%U(?I> oOͽ{knOgU$(H&AF?:3aD+@'Eh|0.D͵u}=t$\Ti]\H-ukϭ:+ ?ᲸpQ0a7IvZS"`C<ygE vBD@ƣy2"3s^?cH:*޽{v4Dzxdyv_Ϝ:ذRQnf t3|7sv A$:tT']T T E\YK2_r"@A@dس$y5qlz%4NȎֲ<[ f1|nY C$H\5 |3>&]t}@?`w%H?재zSLnseʼ ,·h@iBn(:c@L]uv]ihKY͜A$gdĈ^gg`(E* k]<Y)A3iRaMmifaxr=iJ) qcߔ n$ XDPd&%`Ȱjd#It"7s= +sweDK hvLa&mxq ]7B{q u1"{xW/I,,=1T#ݧ M[1[W^ 9w(Lw…C fEx&6M,/6Rln4zZC'GH78?/?bPE;1E+UpVzuv G,Ds8#a4O9-z ܋rx\#/KFF?RO4O)j&4B'<ǩuKlOpD2[xlozAswF!kEGBx-Fe.Aw`zKH lT,Ɣ,PPSU~zMeK0X^8b; @X+x̛HR;iHb͞Ea&>HC7ܹ)Wu^nV3q"+u=bDnu hv0 ;9fC'VAP؄Am%u)#>{ |c\L`_-i|L[||F'14脆[Y#7g.wqF0.p26.t{⟮DTg#+XÎ[~rH!Ў9F)lݹ{=`O8 eLGCol+.)HPmKzc_|lA1 OnX|u!@uVR7F:7gnܡj}fS+jU(\x!.+?Xd\<E)7JH *$:خ&r_t"d% IE"%4}~=V?̝a",upG# xD`%`]l]NJaEj.؃vaYjP@m#˵kBpd ׯP[+Xo^m /RL E +53Dr˞ ,e)0Ds\ gD`Y۔Z' ws\wAiRxZok}4Ƨu;nq1e\^Л{Z[$㮡5y (Oɍ{r 3V6Wuq |c{ijk$dY'WH\F6bXl&#G躽[^9\ }lQ9tvBi%!P4:† z|/!P)@P; b11AG!qU%3߄kXJG9t1K=~+;CnsȜ'E``mi2?Ag?0Z[j u<}E^/ a=k2aҧhM ZUU(b2p .k*w.|?@S ɄEQCIErF<#槎K#9d^RJ "=L*Zsk9aゥ\<W@iV5wGڭ=ijs%^.|#i~GYd2:_na x(ErZHQC^r &lU,?yZڅ!P0 "0#EɊ]hvknu ܍0ͫwB玈%kR8,Б1[ l*@rq ?arQ<0qֳT2CVAL0d~4$ّJsd:!2  ”Vúwv3dzJcAH:jC8뀫2Nјbb? c0@*h;ǥ]ב--"p! 
{(0k Hwܘ-Z|$n䷦jߍLGg1A"0Bְ\fZQgزVѨG`b7ojQcVkēX3l"6 *%Y5ӍbzKK^@ծc0dStTbk!O ʯ'f|ZI-%Gwfs~j2gz)@ X4P5VTlEN_QY}#{~Ժ1~FDJ (1(][>rJon!#;S MNLoŌt]%JjP&MIz )q?Zko*Jy&X'B9#D`K w{Jng< bO2,0y:yv &X1=W9}B( 6vx\h Z_Ǒ=BfZwRJ"#$1_fL@cDbaEaqtwewwˉV Hi_,|vFEJ]f~m ؠMQ6iϟ?[p0tŨv3ϗa%ҚOH" (+xD j T3zs<@F105pv&Wmcd Bq|:n}yHSCvSc7)"CA-ZQiBFc8 B{oۆL<< d͈~mvrm^' &w<s7c|@"<w/B0̤< EjoF F3~Cf1GTGW&:c (l :y_sE[P#+/Fp8νяpo2^GI6P_%YwEPL3Нb#X< 3[@ΞL"`"4=S} جƸE%qq"8uSjǃI5+%ynCj6]{!'dtvG֏dLE_ɔ AwyЅͬWpd>ϾɃw>o߳~u}O?y|3Z}3 RHx7|oo15nL߅eoa]4,˲օ콝Exb~OsEےf"Fe⊮XcyJ9Dus2dp ~so>[nR[F}pXqOyo16*,TՓX-N-Q&0$sഘ@&!Ќ( 9:d_&/=?No5ko?[u `8.vJ0~; $ \w!qV t^aP6ot4N*I"p@|X"혋s; 0m@b"vq :mH*.un٢U~'Uc0fYO{dα`64 N:|:c5WDZ7%'"aѿAP 7;\g\A)L?DÄ<ȯӀҴ [Ⱥc]L !'D`xcً T'"7MX"[H#,tc(nau0:dYDIjlzš ]ɣBdj5I&ya ?&!M eF.)#\Jv|V>ѭKq[dYCZ%bKEK8#!ٽ9IN<}؛TDf"[GϝIgےZ$Zyk[q'TёAKubjbK=(:T;^rJN*AV[f!9nūPGwܳFYk<筄pdXn G-ٙ5z"G@ml@PN:Z.Q%аK熆 )7Q\IV%Gr%He @bѪ˛##&DPZIg SkHcCLZ< ,z<7!+'1avfd)ɂd$C-"cmwb#e,z?PU?E|sjWè>+,uUot% ݨ"?~J`[t _sOtΙ#bsT oI[a(9i1kS85ց*e\E#^yqIb^~!jGޔwF|tw',AylEWfy\Ic'qM"Hv8I@Uց% "neVlIDI'h5\߅GYk[+xjֿ@ % N@024>"@@g|<']*n>{D(a, n [о<=&9/BжQ!9kvhYvhK&{J\$`_~ [fO#oM- vg`KCz(]LgM{o߳)~u}O?yW!D6g1wdM}3+{e ha!Y'&A0htA~IAyq2 z5q ){OAa $1 "ѡ^."N n }4Q ƕѡ^?m\~(qy?6/]FmI94"ELd[U'h1PFV?{sq`Xܕޟg qdJ6B%xXxvGq~ȶ<%pvqw0*-&D`,ANp&h$~-{9v9EKY@v{NK3b&z;)K Uk]cŸ^18ؗ"(>.dٺJw%3mtuf/pSv袗 KhDOohU_#QܮICְA ZC2Toc&D6# sC%H)"̡l.0sHlgdBL?DHѐ|TT#Fv( ű5To9]}G*$FRZw0(<^ |F, 2rzǤ{M~\34 E-+oOD %әjy>.!!q$F 7ą 6 n;ςC|(?&,3y2E.A; k'b/Zy'W۵*zTjR+ w׃ۉ@qO" *>BkLJ,E_U2]$"*}j\WwN:J:d8{az]ۻ`Rtx A9q砹C`ns;KMAsL M 2ތ n|d5N3K7_ήm%dBQӂkH4r!D`jk6 8gQwtAn=k81{ n̞-ڧm of_|{t8X9p&7"@aԤvv18م= 3]܈~b$t`5-1 4 6-Bnȡ" LQu4~5Xeډz}g$ףahP<\Iw>]S_w8CIށO˜?NI#&7lNW /:gg.<,*Օ D]~kbCܟ_O&@5N=WAЬV9t><  :~d$D#Fd-=b^ia]_ &|dT?HQ#Q{K7u;_H%~g| D8垄ڍd< o|X1%X5y$n;;Yd8Ѐ v+ysd3о@:9<:'t&)R 藝dn?8+YwDh8RgI305:A&ۋє5Y F)#X5>k$زW_Y2@H]8~HRC ӆ#YP!:*giK?3`q-#wfqƑ>Yg!A,ҿ:C߸5D7!D`*j^]fgx҆~! 'vVb/` m(֛N3c'5#%PMS ,|Gc10}\}Nҩ;vnYtᲰDg)bm;PcH1f`o%DvV!è_vbaŨɪ( GM${ŐNG &#ð =EEż-(skhŷ؀;!$w$ ۮ6@b)Gӓ>u#.vʯ.[j EC8v#(RS*봒.<߹ũ-$0>[No irWh@ҁa@PQ/97[6L41GΨyË!D\ޙs[g"CUdpd0?2{a` %kBȚڄå̖/N Baquki]F˜ l~G##t2Zٹ>"Vޙ#6-r,á7!;l)@>NGvA5%XuXyz!~\M)0ޯ puHkn =#[9 >-K^ wKNr:ADUiuDlkQ`Zj5{u I>}DlTvPY)oŗx" Jj\U|shA32}U<2Fdj<`Fq;kEyd&׺2 䂡(Md d5 $Sz68֠rCR`tCg4y,@09fvJzr;,ZGjR vYd*B.V$߈ȇ F!AN{! a+cBݦr&=f'"*jW*̮^ߙ 2<2Ӗ#2!D`4轧=dڔ1U/j@D'NjܞKTG6N˗!h g~ OS+rHC(<8}t` aܚDXfm"p (FmhBf[:q+Z`$}JXF]#d ;[2һ38Ch= 5y(L~S KmB=˧9:RHѡ[bD"G-ػj,)'qgd W$"2xC=tϮg1F;׵P7˧E3jݕv9/A#5 pL7z*d[D`bn5@jler6?Q'Rv ݅7T(+{=q"#?gI=[˅Nm*Wt$0q>gnL0+AqD"PMy"geMy inGRou8d 0 G{< Ch)W)kOY7") t@QZ^ _W ^,u&W $Hy%זE&#PY7+{>w A])XR6#@zd'S99'U$mtCX)rG`%k#n6 3Y-&P3Y`"m +~ӿHn-kļ&hB9Kf¦c@ yCa~0S E8rY4'X/{a=nUbup 0jtJkYCb~",'־,8 =J>֌Zk\iBqwۃ b ˗&rt<abG.ɄdV{\?VA̋<.E0Ɠ5T@HgO@5˥vHZkIL0@AD"PG5#8 U#ܥpW-[O{R u:mrRIB(ns)U)-j͈"j-BDX㐲[ V_̼w,tI+L0@YD#ON2xyvrKkh>gkE%ӯ s\I w * 5 AZbH@@m n hˑ@ {[.Y}h@XQ E!GL 5|*5GG ʲX7;I>'4 ]KZ'LP4"6EV3ON}dȱcj)Fq1|tr*U4<>ASI=MCpzAOI(6-c¸*,>p'8!QR3l+'j<][Fɂ6Z1=6 H8&c1$ AFzd>mdwk\!id_jKi֤ iyj#6 mLhڍIj6;@f9-"䠁rw׭6mkvfW\riXgx`C+YG@cڮڋos d&>)fb7/X-k$͚>m;ei["v#;.}dijC j 2F+ U}|V;?R[ַZ?. 
HyUC "7յ-FNc7wg?C\n E4#Y9!0 ?8y:1~&MmEc ;؎ƢƬwؖ?m8\*Li'4L0ܝȦhfխTA@'8$U0y;xҡ=mq:f0 oj o- $\{@`&ȻIW6 [;>Y:QrvqjAxHI(ȾRƈ6i;Cʻ`U 0 9G5(BGZYm5HnԂCZn\;_я3l@f'Y3]kd`6!mvegGPڞP #Ic|5H5c<&&"ĵbiCJWf&(L"CLpU&oྃ%'7q)A޸Q 2mq""@&` Dv;њd$BST{`IlvFnA->X"`g:5qgγV4 KU/-s$#bzؗv)I)v$!&|gk/B%is; 8B7ZsmkRd!o?bon ֔'2bu7!4D,3Nq40_HLp-Pp nFWY&@8L}#!])K#jH-L%ղlI-![GGhR}UD`>gLJ'Kb#V~C‡dgC.!ojT40USDpcU'aD#ֲ>\BLNAI{ P"0{3AmR­|sHX!y'&HY[};~W4#=AKMS1 \kVBC${$a ^Su`)Rr*Qv⼓T4C47Xbv6ڠ | )W,nÉmYMPyՔeJ 2rv0Zbk;\oþ]V iVjEY@ԴًǴ1W͋qZyekSP)6asR:)b$8@bΆ@pxTaH.Cߓkq!sxW.^'@ ]{tmƭbB.|\=qpR}sNDȋ6Dkiz9D<itHFRWCo^u7|L[m2И| ydQL3tY+f4 }|>{}/B5n.t4B9\ZvM!b(_߫U0MB`3nG+KG8N1`3j'g`@ĕ=&ouCDH^Bw%_Rn,,B ~Tbh(1{?|}oua9.mJD!@&XQeQU"@,T5*Ɯ|Ӫ7d80% D`AZ>d@pvʂ Y"p h1ALBXd"@Fq]ީ0ٹ>~/jf  v0A-@<.Boc|"0>A'VH$efp"p#)=A"@ALp 4G(;^[D”D &@(&vQCX/!X"p[И `Z7&Kq "p`Ĉ3Tu "dun/w8̂"pj12(V{%ш**"@&G`?`)"@GLxP"@ fOX2U@  #@&X= D`qW D,` `D ,^T"0d+"@GLxP"@ fOX2U@  #@&X= D`qW D,` `D ,^T"0d+"@GLxP"@ fOX2U@  #@&X= D`qW D,` `D ,^T"0d+"@GLxP"@ fOX2U@  #@&X= D`qW D,` `D ,^T"0d+"@GLxP"@ fOX2U@  #@&X= D`qW D,` `D ,^T"0d+"@GLxP"@ fOX2U@  #@&X=X;?n^WOO}᷷W6 "@j<샟ӧ<4Ob PkPu͗_\~tI&XE%RI"p*$Lp*jQl<Փ+ b "DvP@ {=GC@cwO`)i:Z0#"pP&Qq)Ft%ܸjg8>|~'K'Ӆ6 P!1d1/S"@V'm9bzDtOp(vGR' Z9 P5&ȣ;z'0:I& Dp7Y<"@N;!b"@ GLp+#DDL"& D4@lj K"8 KV!&DdS@uq c3D8'ʓ M*@@&("pLjF Yw̋"p NV D@?- "@Vm[21= D!@&i5"@EL/bLOX9vf߿ dD%C(C& Ai  # 'P۩:`'DL@ nxxDG &g|;}TnvF3 G@@|O!ۙ@k*O2 R D` V]}T"0d @"@ < D`HDU#@&XuQy"@ &" FLD  L"E"@V`G  @&D D2"@&@L0AX5dUW'DL`)"j>*O2 R D` V]}T"0d @"@ < D`HDU#@&XuQy"@ &" FLD  L"E"@V`G  @&D D2"@&@L0AX5dUW'DL`)"j>*O2 R D` V]}T"0d @"@ < D`HDU#@&XuQy"@ &" FLD  L"E"@V`G  @&D D2"@&@L0AX5dUW'DL`)"j>*O2 R D` V]}T"0d @"@ < D`HDU#@&XuQy"@ &" FLD  L"E"@V`G  @&D D2"@&@L0AX5dUW'DL`)"j>*O2 R D` V]}T"0d @"@ < D`HDU#@&XuQy"@ &" FLD  L"E"@V`G  @&D D2"@&@L0An9wjȈ@}<}@d@1 -~nP%OM!Wx2k"iñbJ"@k[ɠqk/E{D@Fpʵz-bSkCXy5Fqy]S!I9D}XXF z@h-wkvU ѪX(d0 %}5͑ BrFvڲH%ß}Kŀ{1@?$F^Ǐސ"웆L/bLOfk##2`R e4b VP{Q1ˁ8Y3z(ZCmp "0ŹC0Ǎ0)1ͨv}HQke#TŹ=SPi9g|;}Tv;,!D"˳~O~2|`'8@CϿ>7 G23"@#@&8 zD"0zmEd Þ9"@N2i D,`93 D4 F=P "@r Þ9"@N2i D,`93 D4 F=P "@r Þ9"@N2i D,`93 D4 F=P "@r Þ9"@N2i D,`93 D4 F=P "@r Þ9"@N2i D,`93 D4 F=P "@r Þ9"@N2i '.G:ը¼Lv )F'VM7_=yg_?~d)" 4h->O>>\S'xf5|1R>j"0?ED$ `X ]ONPA 3߾w (P"@A@ P |xCsMJ&D2:Z# O/X2U@ QߩG=fCL0LX dT$D̆`6h)"+(I 2lR0 D`% VRQT"0d٠`"@J & D`6AKD @&XIEQM"@l f +ALD -"@V`%E5 !@& Z &D2J*j"@fCL0LX dT$D̆`6h)"+(I 2lR0 D`% VRQT"0d٠`"@J & D`6AKD @&XIEQM"@l f +ALD -"@V`%E5 !@& Z &D2J*j"@fCL0LX dT$D̆`6h)"+(I 2lR0 D`% VRQT"0d٠`"@J & D`6AKD @&XIEQM"@l f +ALD`~~p=93 ,\̞O?yzTf&3KD pWOOE}ũ( S#O!H&!Isϊ)Vʓ;3AbEgqe ˙2X"@V@ [sWX깳h1}ѧ|"p:h~` yV>(LWĴ9W ?v(DGۭa ;%}7oPI ]'D.'"8 `a-*>bdCF!>`j|"p LS+2N&IPc!_]_ 8t"pMfyYX"0`X7"FT# AC`:\OZफ# 0#@&8>̑"pt21= D`a~nCdkS@@ ѝ*ut:YcsMDI#N n H'(d|͛7ÿoNo\IFąC7-@D`/ﳳYmsmu! ; ukkb ACo &Z!dVU&DD paG@ ~SC"pl e`~GDѡ#ͬJVR]I&D 7 ֩,o`)b2"@!r؈!J 4D(GT=en:2& 7@wtv{/zX58!}d9yj[#tP)"@X)`6 D`2O]& kC@|L} D`jx\xIENDB`FDd%j  C FA.emotusponens-scenario2bEXubid"EnEXubid"PNG  IHDR53PLTE #!#424$#$! !)()&%&212101,+,>=>767656TSTONOLKLJIJGFGFEFDCDBAB~yxytstsrsjijdcd̫//0--.99:QQRMMNXXYsvx-=3z|a{B(G^{@9e#l[8 Vf&}oa?{m+=r~_MPV+N~Ȳ 7pq cl%jQi"(8e|=.`|qC.3vu,YهXS&YW]fE-tklS.}CLA9]j5[ ^DS oa#DE&ş+4cQo =KXqI78}r'WL󱈟zU/G|(tG,ƀӽ:9i:fMleVY )\ᰛswyhp1;X^bq"Wܡ0q0QaG$u$*NNjZl_ng}7Glӡ#:) jcW}@QnBCX,l|!m2NLk[ TXH@ }\ݠ\0 5|>A ށ馯chUq._CL6^r,]5GkW Ԑខ'c̜5aL2&oġ4;X '`!tqA}:E P6ӷޤ1zOP57?~{@$UE-h+I3ǣ0 [ z5 `8LJ[`@$GCpF-, hP㮰V&=S:ii3,>J@KY_#IGE\ iiqcL#WL-A.R>߷Ku leeP$Qڗ6&Nj?}-[*8lX@eun/=欃PbI!EFhX4fU~ ɦ߬W7Z9J[OKAۯjC3bۍ?=,hk=4EZGhSZ"[XGU%&K&dX[y!E-9 ᵅfG{-<5b+K0ɨi: kv MW~tPt[r&z(d ÿPACH.-rAe"o~K? 
[Figure: "Performance Measurement" — embedded chart of user-study questionnaire scores (x-axis: Questionnaire Item; y-axis: Score) comparing three conditions: Neutral face, Alternating/Randomized faces, and EmpathyBuddy. Questionnaire items: "The program was entertaining," "The program was interactive," "The program behaved intelligently," and "Overall I was pleased with the program and would use it to write emails." Standard deviations were also recorded; the underlying numeric values are not recoverable from the source file.]