


A Collaborative Face-to-Face Design Support System Based on Sketching and Gesturing

Gustavo Zurita1, Nelson Baloian 2, Felipe Baytelman 2

Universidad de Chile, Santiago, Chile, 1Information Systems Department - Business School,

2 Computer Science Department – Engineering School.

gnzurita@facea.uchile.cl; nbaloian@dcc.uchile.cl; fbaytelman@dcc.uchile.cl

Abstract

This paper presents MCSketcher, a system that enables face-to-face collaborative design based on sketches using handheld devices equipped for spontaneous wireless peer-to-peer networking. It is especially targeted at supporting preliminary, in-the-field work, allowing designers to exchange ideas through sketches on empty sheets or over a recently taken photograph of the object being worked on, in a brainstorming-like working style. Pen-based human-computer interaction design is the key to supporting collaborative work.

Keywords: Techniques, methods and tools for CSCW in design. Applications and testbeds. Other Keywords: Handhelds. Gestures. Sketches. Pen-based. Concept maps.

1. Introduction

The growing acceptance of handhelds enables users to take advantage of the numerous facilities that mobile information systems can provide in environments that computer technology otherwise could not reach [2]. In addition to note-taking, scheduling, address storage, etc., handhelds can form ad-hoc wireless peer-to-peer networks and support collaborative face-to-face applications [18], [16].

Potential beneficiaries of these features include users whose activities involve on-site collaborative design sketching. A group of architects, for example, may need to work jointly on a graphic sketch of a construction site [17]. Or a group of engineers conducting an on-site inspection might require high mobility and efficient communication to exchange ideas on possible deficiencies and improvements using graphic sketches, while at a specific physical location or moving around within it [17]. The construction industry presents particular opportunities for using mobile information systems to improve collaborative design practices on building sites.

A handheld’s most natural data-entry mode is the stylus (a.k.a. a pen-based or freehand-input-based system), which imitates the mental model of using pen and paper, thereby enabling users to easily rough out their ideas and/or draw design sketches [13], [20], [21]. However, most currently available handheld applications adopt the PC application approach, which uses widgets (buttons, menus, windows) instead of freehand-input-based paradigms (via touch screens) and/or sketching [1], [14], [4].

According to [13], handheld graphics-editing applications do not require a mouse-and-palette-based interface, nor do they need to rely on such elements as colors, fonts or lines. Rather, they should use informal and natural stroke entry of freehand notations with a pen-based sketching interface [14], [4]. Applications employing this approach have been designed for scenarios such as sketching informal presentations [21] and military courses of action diagrams [7], and for use as a non-technological method of idea generation in meetings. But no system has yet been proposed that uses only interconnected handhelds in a wireless ad-hoc network with a pen-based system to create a sketch-based collaborative design-editing mechanism as is done in PC-based systems [6].

The ability to draw design sketches, as opposed to finished designs, at any physical location and even while moving allows the user, among other things, to discover opportunities for substantial improvements at an early stage of the design process and thus enhance productivity. With sketching, the user can employ visual symbols, describe spatial relations [14], explain new ideas and/or exchange opinions [11] in a rapid and efficient medium for sharing and discussing complex ideas [20], [11]. All this, however, requires support for sketch diagramming while simultaneously allowing users to engage in face-to-face communication and explanation of their rough designs [20], [11], [8]. The ability to handle graphic representations while engaging in a face-to-face communication channel is a natural mode of expression [20] crucial to knowledge creation and capture among various persons [11].

In the light of the above, we propose here the MCSketcher (Mobile Collaborative Sketch) system, which uses wirelessly connected handhelds in an ad-hoc network. With the pen-based paradigm, users can draw sketches collaboratively and show each other their “graphic opinions” while at the same time maintaining face-to-face communication to explain their designs. Two further fundamental aspects add value to the proposed application: the use of conceptual maps for organizing and structuring sketches [10], and of gestures as a simple way of implementing the data-entry functions required by the user [4].

2. Related work

According to [13], [4], sketching and gesturing with pen-based systems are natural modes of design-task-oriented interaction. In [20] it is noted that a sketch is a quick way of making designs that a) facilitates the creator’s idea generation process, b) stimulates communication of ideas with others, and c) stimulates the use of early ideas thanks to the accessibility and interpretation they provide.

The participation of various persons in the elaboration of a sketch using computer support is of key importance, given that people have been shown to think more creatively in groups [8].

Various computer systems supporting sketch-based interaction have been developed in recent years. Desktop systems providing collaborative support include:

• Networked virtual environments (net-VEs). These are distributed graphical applications that allow multiple users to interact in real time, providing a shared sense of space, presence and time, as well as a way to communicate [19].

• SKETCH. Described by Zeleznik et al. [22], this is a collaborative system based on a client-server model for conceptual design that provides geographically distant users with an area for cooperation through sketching, as well as interactive design exploration and modification. It cannot, however, provide awareness information on the presence of other users in the environment. SKETCH includes an interface for creating and editing 3D sketches of scenes [22] based on simplified (2D) drawing commands interpreted as operations to be applied to objects in a 3D world. All objects are thus 3D and rendered in orthographic views.

• NetSketch [12], an application based on the SKETCH interface [22] that supports distributed conceptual design, in which scene models are constrained to the relatively simple shapes that can be created and rendered using SKETCH. NetSketch uses a peer-to-peer network topology and cannot always guarantee model consistency among all users.

• A collaborative system for conceptual design, whose design and implementation are described in Zhe et al. [6]. It allows users located in geographically distant areas to cooperate by sketching, exploring and modifying their ideas interactively, with immediate visual feedback. The system can be used for urban and landscape design, rapid prototyping of virtual environments, animation, education and recreational activities.

Handheld-based note-taking and sketching systems have also been proposed:

• Citrin et al. [1] describe a software architecture that supports pen-based mobile applications through a client-proxy server organization, allowing many graphical applications designed with a mouse/palette-keyboard interface to be accessed through pen-based mobile devices offering shape, gesture and handwriting recognition.

• Davis et al. have designed an informal system that supports team capture of meeting notes [5]. The system, known as NotePals, uses a detail-in-context technique during note capture. The user selects a location for new information on the screen, and the system highlights the insertion location. The user then enters notes at the bottom of the display.

• PebblesDraw focuses on the use of PDAs for computer-supported cooperative work [18]. The touch screen on the PDA provides access to a remote space. In group meetings, this space is viewed as a “common ground” where PDA users can simultaneously post information. To visualize all the information in the remote space, users can switch focus between their PDAs and an external monitor displaying the shared space.

3. System design principles

As mentioned in the introduction, we want to provide users with a design support tool that can be used on-site in a collaborative way. Designs must often be produced at the very location where they are requested, and sometimes on the move. Such situations are common for engineers in construction [17], on-site construction inspectors [3], garden designers and architects. They require high mobility and support mechanisms for establishing collaborative interaction among colleagues or with employers or clients [17]. Sometimes they may even create the design over a photo captured and pasted into the handheld screen background (see section 4.3). Handheld devices are an appropriate technology for providing high mobility and portability, and for creating ad-hoc networks through peer-to-peer connections using their built-in Wi-Fi components (e.g., the Dell Axim X50). In fact, handhelds are considered a good platform for reading brief, concrete content because their interface is simple and insensitive to content formats, allowing information to be read quickly. They are also considered suitable for supporting diverse collaborative work groups [23]. However, their reduced screen size and use of virtual keyboards or widgets for entering and handling information introduce new complexities into user-handheld interaction [4]. In order to overcome these problems we propose the following design principles:

• Interaction is based exclusively on gestures, thus minimizing the number of widgets and virtual keyboards and maximizing the space available for entering content. The content is exclusively free handwriting. Although free handwritten text may take more space than typed text, it also allows a flexible combination of sketching and writing. The value of gestures is that they match the mental model of the action, unifying command and arguments in a single stroke and thus avoiding errors. According to [4] and [13], sketching and gesturing with pen-based systems are natural modes of design-task-oriented interaction. In [24] it is noted that a sketch is a quick way of making designs that a) facilitates the creator’s idea generation process, b) stimulates communication of ideas with others, c) stimulates the use of early ideas thanks to the accessibility and interpretation they provide, and d) gives the opportunity to see and be inspired by other group members’ ideas. A pen-based input interface also enables the use of gestures for interacting with the system, and of sketches for facilitating data entry [13], [20], [4], [21]. As observed in [4], collaborative design based on gestures, sketches and a pen-based interface enhances design naturally and harmoniously, enabling the sharing and exchange of design information in order to improve efficiency (see section 4).

• Many systems use the metaphor of pages and/or scrolling bars to offer more “space” to the user. This is indeed a simple and intuitive way of organizing the content of a document, but when the working area is extremely small, as is the case with handhelds, it seems better to organize the content in a structure that is also intuitive and provides additional information in the structure itself without requiring more data entry. A structure like a concept map is certainly more suitable for ordering the ideas generated during a meeting than a “list of pages” (which is in fact a simple form of concept map), since it is an intuitive yet flexible structure that holds more information relative to the content in the structure of the map itself. Handheld screens acquire greater depth with the use of concept maps. This type of shared visual space has been applied in discussion groups [10], design groups and collaborative activities.

• Sketching is a powerful means of supporting interpersonal communication [11]. Face-to-face communication may involve the use of diagrams and drawings in a way that enables users to share views while they converse. This process helps to eliminate ambiguities and to swiftly communicate new and complex ideas [13], [20]. It has been demonstrated [11] that people cannot produce finished designs in real time, or draw objects and the relations between them that result in complex designs, without interrupting the flow of verbal information. With sketching, on the other hand, only a minimum of verbal exchange among those working on a design is required to achieve a common and fluid communication channel [13], [20], [11] (see sections 5.1 and 5.3).

• Many pen-based interfaces support sketching and gesturing as separate modes between which users have to switch explicitly. Most proposed applications using sketching for design are stand-alone [2], [13], [21], [7], and the one collaborative example [6] uses personal computers and is oriented toward supporting persons located at a distance. We believe it is of vital importance that there be a mechanism facilitating collaboration in the design process, based on the positive findings on the use of sketching for communicating and interpreting new ideas [20] and for sharing sketches that are easily recognizable among various persons [13], [15] (see sections 4.2 and 5.2).

• The user is permitted either to sketch or to gesture without prior mode specification, on the assumption that selection is the primary avenue to other operations such as moving and deleting (Saund, 2003). Yang (2005) found that many pen-based interfaces support these two modes in a way that requires users to transition frequently between them, and that slow or ineffectual sketch/gesture mode-switching techniques may become a major bottleneck in the usability of a system, producing a significant source of errors, confusion and complexity, and ultimately leading to a failure of adoption. To avoid this, a single-mode interface is implemented in which any pen-based input is first analyzed to determine whether it matches a gesture known to the system, in which case it triggers an action; if not, it is considered sketch input. Of course, this means that it is not possible to enter sketches matching a gesture. Therefore the gestures (see section 5.3) are designed not to obstruct the sketches users frequently draw, and their number is kept to a minimum that is still sufficient, thus minimizing the usability and adoption problems described above. According to [15], a survey intended to shed light on the problems and benefits users experience with gestures on pen-based interfaces, the most frequent actions are selecting, moving and deleting, and users consider these actions an efficient form of interaction, as well as convenient and easy to learn, use and remember, thus potentially adding value to the interface.



4. System Description

As discussed above, MCSketcher is intended to support collaborative in-the-field design with handhelds. Since we think a design session of these characteristics should proceed as swiftly as possible, there is no login procedure to start one. Participants just start the application on their handhelds; they discover each other, and the necessary logical connections to build a collaborative session are made automatically by the software. We considered an authentication process unnecessary because it is hard to imagine that people who have started the same software but are not taking part in the design session would be present within the distance the ad-hoc network is able to reach.

4.1 The document’s structure and navigation

A working session with MCSketcher starts with a blank page, which is synchronized for all users. Any participant can draw a sketch, and it will be transmitted to all the rest. Any participant with a built-in camera can take a photograph and use it as a background image for the page; this image will also be distributed to all the rest. The use of images previously stored in the handheld is also possible.

Over this first-level page, diverse “design spots” can be defined. These are areas linked to another page, used to describe a particular object or region of the sketch in more detail (see Figure 1). This can be done recursively, creating a document structured like a tree, where pages are the nodes and design spots are the links from a parent page to its children.
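The page/design-spot structure just described can be pictured as a simple tree. The following Java sketch is illustrative only — the class and field names are our own, not MCSketcher’s — and also captures the “unvisited modified sub-node” highlighting discussed later in this section.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative model of the document structure: pages are nodes and
// design spots are the links from a parent page to its children.
// All names here are hypothetical, not taken from MCSketcher's code.
class Page {
    final String name;
    final Page parent;
    final List<Page> designSpots = new ArrayList<>();
    boolean modified = false;   // set when a peer draws on this page

    Page(String name, Page parent) {
        this.name = name;
        this.parent = parent;
        if (parent != null) {
            parent.designSpots.add(this);  // a design spot links parent to child
        }
    }

    // Depth below the root page; deeper pages get a darker screen margin.
    int depth() {
        return parent == null ? 0 : 1 + parent.depth();
    }

    // True if this page or any page below it has unvisited modifications,
    // which is what makes a design spot appear in a stronger tone.
    boolean subtreeModified() {
        if (modified) return true;
        for (Page child : designSpots) {
            if (child.subtreeModified()) return true;
        }
        return false;
    }
}
```

The recursion in `subtreeModified()` mirrors how a change deep in the tree propagates a highlight up to the spots visible on ancestor pages.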


Figure 1: MCSketcher system screenshot.

Users can create “design spots” and navigate into and out of them, building an implicit conceptual map in the form of a tree. Such maps help users express their ideas and understand those of others [10]. Because the depth and complexity of the document tree can hinder users’ location awareness, a view of the document tree (see Figure 2) can be displayed by pressing the document tree icon (shown in Figure 1).


Figure 2: Two examples of document tree view. Yellow nodes show current user locations (“living room” on the left, and “camera lens” on the right). Thick borders indicate unvisited modified nodes.

Once two or more users have signaled their desire to work in synchronized mode (by pressing the “Session menu” of Figure 1), they can interact with a shared document from their own handheld devices. In this way, all users can draw on the same screen, thus sharing ideas. They can create new nodes independently, thereby enlarging the document tree. Because users start in synchronized navigation mode, they also move together through the document: when any user changes the page by following the link of a design spot, all other users follow and have their page changed too. However, a user may want to navigate independently through the document, for example to sketch on a new page to be shown to the rest when finished, without disturbing the ongoing discussion. Users can switch to an independent navigation mode by disabling synchronized navigation (by pressing the “Session menu” icon in Figure 1). With these options designers can work in parallel, defining new nodes and drawing internal sketches in different areas, thus enhancing their collaborative design work. Users can rejoin the synchronized navigation mode by pressing the “Session” icon again. If all users decide to work unsynchronized, the last one to abandon the synchronized mode retains the “master session”, which all will join if they switch to synchronized navigation again.

Unvisited modified nodes, or nodes containing modified sub-nodes, display their design spot (the colored mark behind the sketches) in a stronger tone. In the document tree view these unvisited modified nodes are also highlighted (Figure 2). The two highlighting methods notify users of changes, inviting them to visit the modified spots.

4.2 Communication architecture

To meet all the requirements for the system it was necessary to develop a full peer-to-peer architecture. This means that all users run the same program on their handhelds and there is no central service. The program must be able to recognize the presence of other participants and establish secure communication with them in order to transfer data for synchronizing the applications. In this system this is done via multicast peer discovery and point-to-point synchronization data communication. Each running program constantly sends a multicast message revealing its presence and the parameters for establishing communication. This message is consumed by the peers and used to maintain a list of active participants, which is used whenever synchronizing information needs to be distributed. All this functionality is encapsulated in a module implementing a single send-to-all method, which can be called when synchronizing information should be sent to all participants, thus facilitating the programming of the rest of the system. This module is part of a framework previously developed by the same authors for supporting the programming of peer-to-peer mobile applications, and has also been used for implementing other handheld-based systems.
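As a rough sketch of this discovery scheme, each peer could periodically broadcast a small presence datagram on a multicast group and keep a peer table from the messages it receives. Everything below — the group address, port, message format and class names — is an assumption for illustration, not MCSketcher’s actual protocol.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

// Sketch of multicast peer discovery; addresses and layout are invented.
class Discovery {
    static final String GROUP = "230.0.0.1";  // hypothetical multicast group
    static final int PORT = 4446;

    // Presence message: "peerId;host;tcpPort" -- the parameters another
    // peer needs to open a point-to-point synchronization connection.
    static String presence(String peerId, String host, int tcpPort) {
        return peerId + ";" + host + ";" + tcpPort;
    }

    // Peer table maintained from received presence messages; this is the
    // list a send-to-all method would iterate over.
    final Map<String, String> activePeers = new HashMap<>();

    void onPresence(String msg) {
        String[] f = msg.split(";");
        activePeers.put(f[0], f[1] + ":" + f[2]);
    }

    // Periodically announce our own presence on the multicast group.
    void announce(String msg) throws Exception {
        byte[] data = msg.getBytes(StandardCharsets.UTF_8);
        try (DatagramSocket socket = new DatagramSocket()) {
            socket.send(new DatagramPacket(data, data.length,
                    InetAddress.getByName(GROUP), PORT));
        }
    }
}
```

Keeping discovery and distribution behind one small class is what lets the rest of the system call a single send-to-all entry point without caring who is currently connected.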

The synchronizing information is structured as an object that is first converted to an XML representation and then sent as text to the other applications. This was done in order to enable communication with programs and systems developed on other platforms, such as .NET. The coding and decoding of the object to its XML representation and back is done by a standard procedure provided by the NanoXML package, a small, open-source XML parser for Java.
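The encoding can be pictured as a round trip from object to XML text and back. The sketch below hand-rolls a tiny encoder for one hypothetical event type, just to show the idea; the real system delegates parsing to NanoXML, and the event name and fields here are our invention.

```java
// Hypothetical synchronization event serialized to XML text so that
// peers on other platforms (e.g. .NET) can also consume it.
class MoveEvent {
    final String node;
    final int dx, dy;

    MoveEvent(String node, int dx, int dy) {
        this.node = node;
        this.dx = dx;
        this.dy = dy;
    }

    // Encode as a single self-closing XML element.
    String toXml() {
        return "<move node=\"" + node + "\" dx=\"" + dx + "\" dy=\"" + dy + "\"/>";
    }

    // Decode by splitting on the attribute quotes; a real implementation
    // would use an XML parser such as NanoXML instead.
    static MoveEvent fromXml(String xml) {
        String[] p = xml.split("\"");
        return new MoveEvent(p[1], Integer.parseInt(p[3]), Integer.parseInt(p[5]));
    }
}
```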

Another important module of this framework provides gesture recognition functionality, which is also encapsulated separately. In this module the recognition of each gesture is implemented as a separate class, so it is very easy to extend the available gesture set by simply writing new classes.
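The one-class-per-gesture design might look like the following — the interface and class names are our guesses for illustration. The engine tries each registered recognizer in turn and falls back to treating the input as a sketch, matching the single-mode behaviour described in section 3.

```java
import java.util.ArrayList;
import java.util.List;

// Each point of a pen stroke is {x, y, timeMillis}.
// Interface and class names are illustrative, not MCSketcher's own.
interface Gesture {
    boolean matches(List<int[]> stroke);
    String action();
}

// Example recognizer: a "tap" is a stroke whose bounding box is tiny.
class TapGesture implements Gesture {
    public boolean matches(List<int[]> stroke) {
        int minX = Integer.MAX_VALUE, maxX = Integer.MIN_VALUE;
        int minY = Integer.MAX_VALUE, maxY = Integer.MIN_VALUE;
        for (int[] p : stroke) {
            minX = Math.min(minX, p[0]); maxX = Math.max(maxX, p[0]);
            minY = Math.min(minY, p[1]); maxY = Math.max(maxY, p[1]);
        }
        return (maxX - minX) < 5 && (maxY - minY) < 5;
    }
    public String action() { return "tap"; }
}

// The engine checks every registered gesture; anything unrecognized
// is plain sketch input, so no explicit mode switch is needed.
class GestureEngine {
    private final List<Gesture> gestures = new ArrayList<>();
    void register(Gesture g) { gestures.add(g); }
    String classify(List<int[]> stroke) {
        for (Gesture g : gestures) {
            if (g.matches(stroke)) return g.action();
        }
        return "sketch";
    }
}
```

Adding a gesture then means writing one more class implementing `Gesture` and registering it, which is the extensibility property the paragraph above claims.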

Finally, there is the main module, which implements the system itself and its graphical interface and uses all the other modules.

Figure 3: The MCSketcher system architecture.

5. Interface design

In order to define the set of actions that effectively allow collaborative sketching, we need a special way of triggering them. Before describing these actions, we first analyze the pen-based handheld interaction scenario.

5.1 Principles

Given that handhelds use stylus-based interaction and have limited screen space, care must be taken to distinguish which pen traces will be interpreted as actions to be executed (gestures) and which will not (sketches), and also to include a reduced set of buttons that reflect awareness of the system status.

Gestures have proved to be a natural way of representing desired actions in pen-based interfaces [13], [6]. However, too wide a range of available gestures interferes with normal sketching interaction, both confusing the user and limiting some drawing possibilities [4], [9]. Several methods and frameworks for recognizing gestures have been implemented or proposed [10].

As already noted, the role of the traditional graphical user-interface widgets used in PC-based applications for triggering actions needs to be reevaluated in the context of the small screen of a handheld device [13]. In particular, their number and size should be strictly limited. On the other hand, the range of gestures used for the same purpose must be context-limited [9]; a balance between the two techniques (sketches and gestures) is essential. Thus, some actions may be triggered through gestures directly over the drawing area, while others may be activated by more traditional controls such as interface widgets. Actions closely related to the sketches are triggered by gestures, while operations independent of the drawing or related to state control are triggered by buttons, with icons reflecting the current state and mode of the system.

5.3 Metaphors for user actions in a collaborative sketching application

Actions available in the system can be classified into three categories: drawing, session management and navigation. Drawing actions comprise background selection, basic sketching (drawing strokes), selecting, moving, cutting and pasting. Session management actions include creating a new session, synchronizing navigation, and opening, saving and exporting a session as a PDF document.

In collaborative workspaces, navigation actions are relatively complex. Because each peer user is free to move through the document tree, actions that permit joining others must be available. Navigation actions include creating pages (nodes) and moving among them, zooming in and out on design spots, displaying the document tree, and joining other users’ sessions. Also important is an indicator of how many partners are viewing a user’s spot, so that he or she can be aware of their presence and contributions while designing.

The proposed system uses its own ad-hoc gesture recognition, based mainly on the strokes and their drawing speed, to which gesture-recognition rules are applied. Such rule matching is lightweight and easy to introduce because there is no need to train the system.
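As an illustration of such a rule, the “dense dot” gesture of section 5.3 can be detected from a stroke’s path length and duration alone: the pen barely moves while time passes, so the average speed is near zero. The thresholds and names below are invented for the example, not taken from the system.

```java
import java.util.List;

// Rule-based check for a "dense dot": little movement over a noticeable
// time. Each stroke point is {x, y, timeMillis}; thresholds are illustrative.
class DenseDotRule {
    static boolean isDenseDot(List<int[]> stroke) {
        if (stroke.size() < 2) return false;
        double pathLength = 0;
        for (int i = 1; i < stroke.size(); i++) {
            int dx = stroke.get(i)[0] - stroke.get(i - 1)[0];
            int dy = stroke.get(i)[1] - stroke.get(i - 1)[1];
            pathLength += Math.sqrt(dx * dx + dy * dy);
        }
        long duration = stroke.get(stroke.size() - 1)[2] - stroke.get(0)[2];
        // Low speed: the pen dwelt in place instead of drawing a line.
        return pathLength < 8 && duration > 300;
    }
}
```

No training data is involved: the rule is a fixed predicate over the stroke geometry and timing, which is what makes this approach cheap to implement and extend.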

Most drawing actions are executed using the following natural gestures over the drawing area:

• Background selection is performed by surrounding the screen with a rectangular gesture, which triggers a standard file-open dialog. The user can then choose any image, or a recently taken photo on camera-enabled handhelds.

• A sketch’s basic strokes are performed by natural drawing.

• Selecting is done by any of three methods. The first is to click with the pen on a given trace or line (see Figure 3a), which will select all other traces touching it (Figure 3b). The second method, used to select a group of traces that are not necessarily connected, is to double-surround them with a continuous closed shape (Figures 3c and 3d). The last method, an alternative to double-surrounding, is to draw a dense dot and then, without releasing the stylus, draw a line which touches different elements (Figures 3e and 3f). This final method also copies the selection into a clipboard (see Pasting in this section). The different methods help the user select under various scenarios: for instance, double-surrounding is easy and fast for complex drawings (like writing), while dot-selection is faster for touching large drawings.

• Deselecting is done by clicking any empty space.

• Employing any of these methods more than once in succession will add or remove items from the selection so that the user can make complex selections using simple gestures.

• Pasting is done by drawing a dense point and releasing the stylus. This duplicates elements previously copied into the clipboard, and places them where the point has been drawn (see Figure 4).

• When one or more items have been selected, two small handles appear at the right edge of the selection (see Figure 5a). There is a set of simple actions for editing selected shapes:

o Moving is accomplished by dragging any selected stroke or concept. Dragging will move the selection as a whole.

o Resizing is done by dragging the red square handle, located at the selection’s upper right corner (see Figure 5a).

o Rotating is achieved by dragging the blue round handle, located at the selection’s right side (see Figure 5b).

• Removing is performed by drawing a “connected cross” (see Figure 6). If nothing is selected, this gesture removes every touched element. If one or more traces are already selected, only the selected elements will be removed.

• Comments and annotations can be made “outside a page”. Clicking and dragging any corner of the current page zooms it out, showing a darker background (see Figure 7). This backdrop represents the space “outside the page”, where notes or comments about what is shown in the page can be drawn or written. These annotations are also shared in real time with other users, which helps in understanding sketches made by other users or drawn some time ago. Users can return to the original view by dragging the corner of the zoomed page back to the screen edge.


Figure 3: a) Holding the stylus on a trace (“room” on upper left) will b) select connected graphics (partially darkened “r”, and “oom” on upper right). c) Double-surrounding (“ving” on lower left) will d) select enclosed traces (darkened “ving” on lower right). e) Drawing a dense dot and then moving the stylus over other traces will f) select all touched elements.


Figure 4: a) Drawing a dense dot and drawing over some sketches b) selects and copies touched strokes. c) With the dense dot gesture d) copied graphics are pasted.


Figure 5: a) Resize selected strokes dragging the red square handle. b) Rotate selected strokes dragging the blue round handle. These handles are always available when any selection is done, but they have been removed from other figures in this paper to simplify understanding of explained features.


Figure 6: a) Some strokes are selected; b) the connected cross gesture represents the Remove command; c) feedback appears when the command is recognized, and d) only the previously selected strokes are removed.


Figure 7: a) Original page, displayed at full screen. b) The user drags the corner of the page, displaying the space “outside the page”. c) Annotations outside the page can be made using the same sketching techniques as for drawings inside.

Drawing is done with natural sketching gestures. Content editing is done in two steps: first simple selection gestures, then basic editing gestures (Denoue, Chiu, and Fuse, 2003). The chosen selection techniques and the resizing and rotating interface allow users to create rich sketches with simple methods.

For session management there is a Session menu, including the usual New, Open, Save and Export as PDF functions, as well as a Synchronize command for inviting new participants to collaborate. The Export as PDF function builds a PDF file with the expanded document tree, similar to the document tree view (Figure 2). The file can be reproduced on any computer, including a handheld, using the optimized PDF viewer included in the OS. This feature is currently under development. Finally, the Session menu icon changes to notify the user if the session needs to be saved, or if it has not been modified since it was opened (see Figure 1).

Basic navigation actions are triggered with natural gestures. To create a new node, the user double-clicks on any area to open a new empty sheet attached to the clicked location as a new design spot. Once created, a soft colored mark (customizable and yellow by default) behind the sketches indicates that the user can enter that spot again. Double-clicking the mark will take the user inside the attached node, expanding it to full-screen size. On sub-nodes (everywhere but the root sheet) the margin of the screen will be darkened slightly depending on how deep the node is, to notify the user (see Figure 1). Double-clicking this margin returns the user to the parent node. Repeating this action will move the user to the root sheet. With these natural gestures, the user can browse through the document tree.

5.4 Awareness

Collaborative awareness is an important issue in cooperative work. Because MCSketcher allows simultaneous drawing, two or more users might want to draw in the same space. To help avoid overlapping sketches, the system displays an alert showing where other users are drawing (see Figure 8). Although this feature encourages users to avoid sketching in the same location, they may still draw in the same area if desired.


Figure 8: Three users drawing at the same time. Each one is aware of where the others are drawing thanks to the drawing feedback shown, which helps users avoid drawing in the same space.

On the other hand, gesture recognition should inform the user when a gesture has been recognized. MCSketcher therefore displays a different kind of feedback for each recognized gesture:

• Removing displays a cross.

• Entering and exiting a node displays “zoom-in/out” rectangles.

• The removing cross (see Figure 6c) is displayed for every user, not only the one who made the gesture but also those looking at the same sketches. This helps them understand that drawings have disappeared because another user chose to remove them.

• Dense-dot (for dot-selection and paste gestures) turns the “ink” into a different color to inform the user that the dot will be analyzed as a gesture and not as a sketch. This guides the user on how long to wait before continuing the gesture.
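The dense-dot detection above can be approximated as follows; this is a minimal Python sketch, and the radius and hold-time thresholds are assumptions rather than the system’s actual values:

```python
# Sketch of "dense-dot" hold detection: if the pen stays within a small
# radius long enough, the ink is recolored and the stroke is treated as a
# gesture rather than ordinary sketching. Thresholds are assumed values.

HOLD_RADIUS = 4       # pixels the pen may wander and still count as a dot
HOLD_TIME = 0.6       # seconds before the dot becomes a gesture

def is_dense_dot(samples):
    """samples: list of (x, y, t) pen positions since the pen went down."""
    if not samples:
        return False
    x0, y0, t0 = samples[0]
    for x, y, t in samples:
        if (x - x0) ** 2 + (y - y0) ** 2 > HOLD_RADIUS ** 2:
            return False          # pen moved too far: it is an ordinary stroke
    return samples[-1][2] - t0 >= HOLD_TIME   # held long enough → gesture
```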

Sound alerts or earcons [1] acknowledge and confirm a user’s action. Three such sounds have been included, one for zooming in (entering a design spot), another for zooming out (exiting a design spot) and a third for creating a new design spot.

There is also some context awareness: an indicator on the menu bar shows the number of users in the current node (the “group icon” in Figure 1). If relatively few of the other users are in the current node, clicking on the icon switches the user to the tree’s busiest node, where the rest of the group can be joined. As mentioned in Section 4.1, the document tree view displays the full document map and highlights the user’s current location, as shown in Figure 2. This view can be displayed by pressing the “document tree icon” (Figure 1). Here, the user can click any element to access that node in full screen.
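Finding the tree’s busiest node reduces to a maximum over per-node user counts. A hypothetical helper, where the data structure is illustrative and not MCSketcher’s real state:

```python
# Hypothetical helper for the "group icon" behavior described above:
# jump to the node of the document tree with the most users.

def busiest_node(users_by_node):
    """users_by_node maps a node id to the users currently viewing it."""
    return max(users_by_node, key=lambda node: len(users_by_node[node]))

locations = {
    "root":   ["Gustavo"],
    "facade": ["Nelson", "Felipe"],   # most of the group is here
}
print(busiest_node(locations))        # → facade
```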

6. Evaluation of MCSketcher

We ran a usability test of the system using the think-aloud method to measure its effectiveness. The test aimed to detect problems in: a) the design collaboration supported by the system, identifying the circumstances in which the system fails to provide adequate support for, or even hinders, collaborative design work among the members; and b) the user interface, identifying which aspects make it hard to understand and learn, and assessing user satisfaction with the various functionalities, particularly sketching, gesturing and navigating. The results show that MCSketcher is an effective tool for collaborative design in the field, and that the interface provides an effective way of communicating design ideas among the participants.

6.1 Evaluation parameters

In the think-aloud evaluation method, users verbalize their thoughts about the system while using it, thus avoiding the distortions, faulty cognitive translations or omissions that a guided questionnaire may introduce [Boren & Ramey, 2003]. The process is watched by an observer, who gathers the verbalized information in real time while, in our case, the participants move around in the field designing something related to their work. According to [Kjeldskov & Skov, 2003], this method can be used to evaluate the collaboration inside a working group supported by a computer system. In each session, the participants worked on a different real in-the-field collaborative design problem, establishing social interactions such as communications and proposals while verbalizing their thoughts. During each collaborative design session, the observer was responsible for recording its development and the way the participants used the system, registering in real time the usability problems detected and/or verbalized by the participants. Seven think-aloud tests of real collaborative designs in the field were organized with 3 to 4 people each: four in the construction branch, two of which used a photograph as background, and three in the gardening branch, only one of which used a photograph as the screen background.

Before the first session of each branch, we taught the participants how to use the system for 10 to 15 minutes. The next session of each branch was dedicated to letting participants explore the system for about 30 minutes. All other sessions lasted 25 to 35 minutes, during which the observer took notes about usability problems detected and verbally expressed by the participants.

6.2 MCSketcher effectiveness

To measure MCSketcher’s effectiveness, we looked at the difficulty and complexity of the collaborative design produced, the time needed to complete the task, how useful the system was for the participants, and how easily they used the interface while gesturing and sketching at the same time.

The results showed no critical usability or effectiveness problems, either in the support for collaborative design or in the user interface. Designers found that learning MCSketcher was not too difficult, although at first they had a few problems using the selection and deletion gestures, simply because they needed some time to get used to interacting with the system through them.

Participants were quickly able to produce a collaborative sketch-based design solution, taking no more than 35 minutes. The design problems the end users addressed were of simple to medium complexity, involving either a new proposal or the modification of a real design. Participants tended to leave parts of the design in an especially rough state. For example, they did not spend much effort resizing or rotating objects, although moving a sketch was the most frequently used action. Annotations were rarely used to describe a sketched object in more detail, because the intense communication among participants let them remember the points raised while the collaborative design was in progress. Only a few designers used the gestures for copying a sketch, since most sketches corresponded to different parts of the design each time. A few designers still preferred pencil and paper but found MCSketcher simple and similar in many ways. Many designers said the system is well suited for quickly producing a joint design proposal without getting into details, and at the same time found it very easy to present their ideas to the others while the others could also design their own proposals in the same working area.

Conceptual maps and hierarchical navigation were found to overcome the limitations of the handhelds’ small screen size. “Design-spots”, “document tree icon”, and dark screen borders were found to be simple and effective ways for notifying users where they are and allowing them to navigate readily through the nodes. On average, the designs used five screens, with no more than three design-spots for each screen.

The few gestures used (press-and-hold, double-click, double-surround and connected-cross) were sufficient to master all the functions of the system, thus keeping the collaborative design process simple and improving system usability.

Due to the simplicity and contextualization of the permitted gestures, it was not found necessary to differentiate sketching from gesture recognition as a separate mode.

The three on-screen buttons, in addition to their functionalities, ensured the user maintained continual and easily recognized awareness, thus optimizing the use of the handheld’s small screen size.

The “document tree icon” view is sufficient to maintain awareness of all hierarchical pages, and the zoom feature has therefore been discarded. Sketches were found to be easily recognizable even when reduced to a minimal size, thus obviating the need to zoom and scroll, thereby increasing the simplicity and usability of the system.

Earcons were useful for confirming user actions and eliminating ambiguity regarding the system’s response to similarly triggered actions.

PDF export expands the future use of generated documents, allowing them to be widely shared.

The final users found it easy to communicate ideas while maintaining face-to-face conversations and using sketches to explain their designs.

Ad-hoc gesture recognition was rapid compared to pattern matching and learning algorithms, since its implementation simplifies stroke analysis on CPU/memory-limited devices. It also simplifies the addition of new gestures to customized systems.
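A rough Python illustration of such ad-hoc, rule-based recognition follows; every threshold and gesture name here is an assumption chosen for the example, not the system’s actual classifier:

```python
# Illustration of "ad-hoc" recognition: instead of pattern matching or
# trained models, each gesture is tested with cheap geometric checks over
# the stroke's bounding box and endpoints. Thresholds are assumed values.

def classify_stroke(points):
    """points: list of (x, y) pen samples for one completed stroke."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    width, height = max(xs) - min(xs), max(ys) - min(ys)
    (x0, y0), (x1, y1) = points[0], points[-1]
    closes_on_itself = abs(x1 - x0) < 8 and abs(y1 - y0) < 8

    if width < 6 and height < 6:
        return "dot"              # candidate press-and-hold gesture
    if closes_on_itself and width > 20 and height > 20:
        return "surround"         # candidate selection loop
    return "sketch"               # anything else is ordinary ink
```

Adding a new gesture to a customized system then amounts to adding one more geometric check, which is the simplicity the paragraph above refers to.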

7. Discussion and Future Work

We believe that the most significant contribution of the work reported here is to have combined the communication capabilities of sketching with the mobility of wirelessly interconnected handhelds and a natural gesture-based interaction style, in a single product that facilitates collaborative design.

MCSketcher is a lightweight collaborative design system that gives group members easy access to each other’s ideas through their personal sketches, using simple gesture-based editing actions. Sketching and gesturing with a pen-based system in a single interaction mode have been shown to be especially valuable for creative, in-the-field collaborative design tasks. For designers, the ability to rapidly sketch objects with uncertain types, sizes, shapes and positions is important to the creative and proposal process. Designers can explore more ideas without being burdened by concern for inappropriate details (colors, fonts, alignment, etc.). In the early phase, sketching also improves communication, both with collaborators and with the target audience of the designed artifact. For example, an audience examining a sketched interface design will be inclined to focus on the important issues at this early stage, such as the overall structure and flow of the interaction, while not being distracted by details of the look.

The results described in Section 6 demonstrate that through the use of buttons and the method for executing action gestures, MCSketcher provides the user with constant awareness. MCSketcher is still evolving and there is plenty of room for deeper usability and utility evaluations, improvement and future development, but preliminary empirical testing suggests that our efforts are aimed in the right direction. Various new functionalities and features are already being planned. One of these is the use of more earcons for providing feedback to improve management of objects, operations and system interactions.

Another planned extension is the introduction of gesture rendering for supporting specific design work. In architectural design, for example, elements representing furniture, bathroom fixtures, etc., could be placed in the design by sketching certain strokes. Similarly, to design graphical interfaces the buttons, text input areas and dialog boxes could be incorporated using simpler strokes such as squares. To facilitate development of functionalities for supporting different design types, sets of sketches could be added/removed as plug-ins to/from the existing system. This would also avoid any conflicts resulting from a given sketch having different meanings in different contexts.
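The plug-in idea could be sketched as a small registry mapping strokes to domain elements, so the same stroke can carry different meanings in different contexts; all names in this Python sketch are hypothetical:

```python
# Sketch of the planned plug-in mechanism: domain-specific gesture sets
# (furniture for architecture, widgets for UI design) added and removed
# at runtime, avoiding conflicts between contexts.

class GesturePluginRegistry:
    def __init__(self):
        self.plugins = {}                 # plug-in name → {stroke: element}

    def add_plugin(self, name, gesture_map):
        self.plugins[name] = gesture_map

    def remove_plugin(self, name):
        self.plugins.pop(name, None)

    def resolve(self, stroke):
        """Return the element the active plug-ins assign to this stroke."""
        for gesture_map in self.plugins.values():
            if stroke in gesture_map:
                return gesture_map[stroke]
        return None                       # ordinary sketch, no special meaning

registry = GesturePluginRegistry()
registry.add_plugin("architecture", {"square": "bathtub"})
print(registry.resolve("square"))         # → bathtub
registry.remove_plugin("architecture")
registry.add_plugin("ui-design", {"square": "button"})
print(registry.resolve("square"))         # → button
```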

Acknowledgment

This paper was partially funded by Fondecyt 1050601 and DI - Universidad de Chile Nro. I2 04/01-2.

References

[1] S. Brewster, “Overcoming the Lack of Screen Space on Mobile Computers”. Personal and Ubiquitous Computing, 2002, 6, pp. 188-205.

[2] W. Citrin, P. Hamill, M. Gross and A. Warmack, "Support for Mobile PenBased Applications," In proceedings of MOBICOM 97, Budapest, Hungary, 1997, pp. 241-247.

[3] S. Cox, J. Perdomo and W. Thabet, “Construction Field Data Inspection Using Pocket PC Technology”, International Council for Research and Innovation in Building and Construction, CIB w78 conference, 2002.

[4] G. Dai and H. Wang, “Physical Object Icons Buttons Gesture (PIBG): A new Interaction Paradigm with Pen”, Proceedings of CSCWD 2004, LNCS 3168, 2005, pp. 11-20.

[5] R. Davis, J. Landay, V. Chen, J. Huang, R. Lee, J. Li, J. Lin, C. Morrey, B. Schleimer, M. Price and B. Schilit, “NotePals: Lightweight Note Sharing by the Group, for the Group”, Proceedings of CHI 99, 1999, pp. 338-345.

[6] Z. Fan, M. Chi, M. Oliveira, “A Sketch-Based Collaborative Design System”. SIBGRAPI, 2003, pp. 125-131.

[7] K. Forbus, J. Usher and V. Chapman, “Sketching for military courses of action diagrams”. Intelligent User Interfaces, 2003, pp. 61-68.

[8] V. Goel, Sketches of Thought, MIT Press, Cambridge, Mass, 1995.

[9] K. Hinckley, P. Baudisch, G. Ramos, And F. Guimbretiere, “Design and Analysis of Delimiters for Selection-Action Pen Gesture Phrases in Scriboli”, Proceeding of CHI 2005, ACM, 2005, pp. 451-460.

[10] U. Hoppe and K. Gaßner, “Integrating Collaborative Concept Mapping Tools with Group Memory and Retrieval Functions”, Proceedings of the Computer Support for Collaborative Learning (CSCL) 2002 Conference, 2002, pp. 716-725.

[11] D. Kenneth, R. Forbus and J. Usher, “Towards a computational model of sketching”, Proceedings of Intelligent User Interfaces, 2001, pp. 77-83.

[12] J. LaViola, L. Holden, A. Forsberg, D. Bhuphaibool and R. Zeleznik, “Collaborative Conceptual Modeling Using the SKETCH Framework”. Proceedings of the IASTED International Conference on Computer Graphics and Imaging, 1998, pp. 154-157.

[13] J. Landay and B. Myers, “Sketching interfaces: Toward more human interface design”, IEEE Computer, 2001 34(3), pp. 56-64.

[14] E. Lank and S. Phan, “Focus+Context sketching on a pocket PC”. Proceedings of CHI Extended Abstracts, 2004, pp. 1275-1278.

[15] A. Long, J. Landay and L. Rowe, “PDA and Gesture Use in Practice: Insights for Designers of Pen-based User Interfaces”, retrieved December 12, 2005.

[16] P. Luff and C. Heath, “Mobility in Collaboration”, Proceedings of Computer Supported Collaborative Work, CSCW ’98, ACM Press, 1998, pp. 305-314.

[17] A. May, V. Mitchell, S. Bowden, T. Thorpe, “Opportunities and challenges for location aware computing in the construction industry”, Proceedings of Mobile HCI, 2005, pp. 255 – 258.

[18] B. Myers, H. Stiel, and R. Gargiulo, “Collaboration using multiple PDAs connected to a PC”, Workshop on Shared Environments to Support Face-to-Face Collaboration at CSCW '2000, 2000, pp. 285-294.

[19] S. Singhal and M. Zyda. Networked Virtual Environments: Design and Implementation. Addison-Wesley, 1999.

[20] R. van der Lugt, “Functions of sketching in design idea generation meetings”, In TT Hewett & T Kavanagh (Eds.), Creativity & cognition, New York: ACM, 2002, pp. 72-79.

[21] L. Yang, J.A. Landay, Z. Guan, X. Ren and G. Dai, “Sketching Informal Presentations”, Fifth ACM International Conference on Multimodal Interfaces: ICMI-PUI, Vancouver, B.C., November 5-7 2003, pp. 234-240.

[22] R. Zeleznik, K. Herndon and J. Hughes. “SKETCH: An Interface for Sketching 3D Scenes”. ACM SIGGRAPH’ 96, 1996, pp. 163-170.

[23] A. Schmidt, M. Lauff and M. Beigl, “Handheld CSCW”, Workshop on Handheld CSCW, Proceedings of Computer Supported Collaborative Work, CSCW ’98, Seattle, November 1998.

[24] Y. Li, K. Hinckley, Z. Guan and J. A. Landay, “Experimental Analysis of Mode Switching Techniques in Pen-based User Interfaces”, Proceedings of CHI 2005, ACM, 2005, pp. 461-470.

[25] E. Saund and E. Lank, “Stylus Input and Editing Without Prior Selection of Mode”, Proceedings of the 16th Annual ACM Symposium on User Interface Software and Technology (UIST 2003), Vancouver, Canada, ACM, 2003, pp. 213-216.

[26] T. Boren and J. Ramey, “Thinking Aloud: Reconciling Theory and Practice”, IEEE Transactions on Professional Communication, 2003, 30(3), pp. 261-278.

[27] J. Kjeldskov and M. B. Skov, “Evaluating the Usability of a Mobile Collaborative System: Exploring Two Different Laboratory Approaches”, Proceedings of the 4th International Symposium on Collaborative Technologies and Systems, Orlando, Florida, SCS Press, 2003, pp. 134-141.

[28] L. Denoue, P. Chiu and T. Fuse, CHI ’03 Extended Abstracts on Human Factors in Computing Systems, Ft. Lauderdale, Florida, USA, ACM, 2003, pp. 710-711.

