Visualizing a meeting as a graph to increase the accessibility of meeting artifacts

Yurdaer N. Doganata
IBM Research
19 Skyline Dr., Hawthorne, New York, 10532
[email protected]

Mercan Topkara
IBM Research
19 Skyline Dr., Hawthorne, New York, 10532
[email protected]

ABSTRACT This paper focuses on capturing, correlating and visualizing the execution of a meeting from its history data. Relevant artifacts that are utilized or generated during a meeting, as well as meeting activities and resources, are mapped onto a generic meeting data model. The execution of a meeting is then captured as a graph in which the generated meeting artifacts, the participants and the meeting tasks are connected. The graph enables faster, structured access to meeting data and provides a visualization capability for users who want to quickly browse the sections of a meeting that interest them most.

Categories and Subject Descriptors E.1 [Data Structures]: Graphs and Networks; E.2 [Data Storage Representations]: Object Representation; I.2.6 [Learning]: Knowledge acquisition

General Terms Algorithms, Management, Design, Human Factors

Keywords Meeting browser, meeting history, representation of a meeting

1. INTRODUCTION Meetings are considered one of the most important activities in a business environment. Many organizations hold regular meetings as part of their routine operations. Delivering information, keeping each other updated, discussing issues around team projects, assigning tasks, tracking progress and making decisions are some of the reasons why meetings are a crucial part of professional life. Recording meetings is as important as conducting them. Members of an organization access past meeting records to recall details of a particular meeting or to catch up if they missed one. The results of a survey [1], summarized in [2], show why people frequently refer to meeting records: to check the consistency of statements and descriptions, to revisit portions of a meeting that were missed or not understood, to re-examine past positions in the light of new information, and to obtain supporting evidence. Recording and retrieving meeting data effectively has long been a challenge. Before the advancement of computer and communication technologies, a great deal of time and effort was spent producing written documents related to meetings. This process put a burden on the preparer, who may not remember all

the details or transcribe them correctly. Hence, manually transforming meeting minutes into documents suffers from a lack of accuracy, completeness and objectivity. Capturing meeting data effectively became possible with advances in computer and communication technologies and the introduction of online multimedia meetings over the internet, with desktop sharing, whiteboard, text, audio and video capture capabilities. Online meetings have become a popular means of conducting meetings among geographically dispersed users. Vast amounts of audio, visual and textual data are being recorded and stored for such online meetings. A number of indexing and searching technologies were developed to associate the meeting context with the stored multimodal data. To access meeting data effectively, various meeting miners and browsers have been proposed in the literature. A review of meeting browsing technologies, including multimedia segmentation and indexing, can be found in [1]. Meeting browsing technologies mainly focus on browsing either the speech or the video. The goal is to enable access to specific speech and video information without the need to listen to or watch the entire segment. In speech, accessing specific parts of audio files, spotting particular words [7], speaker segmentation [3], [4], automatic speech recognition [5], [6], topic identification and segmentation [8], and spoken language summarization [9] are the main challenges. Similarly, indexing and summarization are the challenges of video browsing [10]. Meeting browsers are generally developed by integrating speech and video browser technologies [11] and aim at providing an effective interface to access multimedia archives. Hence, research efforts have mainly focused on indexing, searching, retrieving and visualizing multimedia information.
While meeting browsing and mining techniques improve multimodal search of meeting data, there is a need for a meeting visualization approach in which the meeting entities, events and their relations are displayed from beginning to end within the context of the meeting. In this paper, we focus on a novel meeting visualization approach that increases the traceability and visibility of archived meeting data. Our design concept is based on transforming the recorded meeting data into a graph whose nodes are the salient meeting entities and whose edges are the relations among these entities. Our goal is to give the user the context of a meeting in a single graph, where end-to-end meeting events, the actors (e.g., presenters, participants) and the data utilized (e.g.,

the presentation slides) are made visible without the need to search. In addition, the approach we propose provides a means for users to interact with the graph for extended functionality, as well as the additional convenience of being able to use graph query languages to retrieve answers to relational questions such as “who was involved in deciding to change the task plan?”. The result of such a query also returns graph nodes that allow users to interact further with the graph to find out more about the segment of interest in matching meetings. In the next section, the components of the system that we have implemented are introduced. In section 3, the applications that are available to capture online meeting data and the types of data they capture are overviewed. In section 4, our visualization approach and the associated data model are discussed. In section 5, our meeting visualization approach is demonstrated using data extracted from an actual online meeting. The user interface of the visualizer application that we have implemented is illustrated in

section 6. Techniques that are used to extract information from graph structures are summarized in section 7. Finally, section 8 is dedicated to concluding remarks.

2. SYSTEM COMPONENTS Figure 1 shows the components of a meeting data capture and visualization system. The meeting events are captured by event listeners, converted into meeting visualization data and stored in the meeting data store. The data model for the meeting visualization system is created using the visualization data management system. Core meeting data types and their extensions are explained in the next sections. The relations between data elements are extracted through the analytical enrichment component. Once the captured meeting data is stored, information about the meeting is retrieved by means of a query interface and visualized through the meeting visualizer component.

Figure 1 Meeting visualization data recording system components Figure 2 illustrates how meeting data is extracted from meeting events. A number of technologies are available to capture meeting events; some of them are shown in Figure 2. The data produced as a result of analyzing the history of meeting events is then transformed into visualization data and stored in the

meeting data store. Meeting data includes artifacts generated or utilized in a meeting, meeting-related activities, and the resources involved in executing a meeting. A detailed model for the meeting data is discussed in the next section.

Figure 2 Creating provenance data from meeting events

Once the meeting data is extracted from the analysis of meeting events, the extracted data is mapped onto the meeting data model. As an example, the speaker identification system identifies the speaker as Susan; the participation activity indicator shows when Susan speaks; the screen change capturing system indicates which slide deck is presented, etc. In the data model, Susan is represented as a participant. Correlating the duration and timing of the screen captures with the speaker indicates that Susan presented a slide deck. In the data model, the slide deck is represented as presentation data and Susan’s presentation as a meeting task. As a result, three graph nodes are created: the first for Susan as a participant, the second for the slide deck as presentation data, and the third for the presentation as a meeting task. Susan’s status is recorded as presenter in the web meeting tool during this time frame; hence an edge is formed connecting Susan and the presentation task. The other edges of the graph are formed by analyzing the relations between the nodes.
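The three-node example above can be sketched in code. The dictionary-based graph and the field and relation names below are illustrative only, not the system's actual data structures.

```python
# Toy in-memory provenance graph for the Susan example; node and relation
# names are hypothetical, not the system's actual schema.
nodes = {
    "susan":   {"class": "resource", "displayName": "Susan"},
    "deck":    {"class": "data",     "displayName": "slide deck"},
    "present": {"class": "task",     "displayName": "Susan's presentation"},
}

# Edges are (source, target, relation) triples derived from the correlations
# described in the text.
edges = [
    ("susan", "present", "presenter"),  # Susan performed the presentation task
    ("present", "deck", "uses"),        # the task used the slide deck
]
```

Any further relation discovered by the analytics simply appends another triple to the edge list.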

3. CAPTURING MEETING EVENTS A number of online meeting applications are capable of capturing the meeting artifacts that are produced from the beginning of a meeting to its end. In this section, some of these applications and the types of data they capture are reviewed. Examples of online meeting applications with recording capabilities include LotusLive Meetings, WebEx, Office Live Meeting, and GoToMeeting. In addition, applications such as chat messaging, Twitter dialogues, camcorder recordings, and collaborative content editing enhance the interactions during an online meeting. Meeting recording devices provide information about attendees and their join/present/leave actions. For example, the audio exchange over the phone (e.g., LotusLive Meetings, WebEx) or over VOIP (e.g., Polycom, Sametime Meetings) is used to identify participants; video clips capture the screen of the presentations, etc. As a result of capturing and analyzing the meeting artifacts generated by the applications mentioned above, a meeting provenance graph is produced that contains information about who joined the meeting and when, along with what they presented. The richness of the graph depends on the complexity of the analyses through which more nodes and relations are extracted. If the meeting application supports an automatic speech transcription service [15], the initial graph can be improved by adding the transcription documents to the graph as nodes and linking them to the associated participants. In addition, speaker identification systems [16] can be used to identify the presenter and help associate the participant nodes in the graph with the presentation data nodes or video clips. The partitions of the transcripts in the speech data can be used to determine when a certain task starts and who started it.
Speaker information is inherently included in VOIP-based systems, as it is possible to identify the source of the audio feed and assign an attendee id or name to this feed at the time the meeting recording is captured.

Automatic decision detection systems reach up to 68% precision on certain types of meetings when prosodic, lexical, dialogue and topical classification features are used in the analysis [17]. Such a detection system can easily be added to the preprocessing step of our system to add “decision” task nodes. The parties involved can be correlated from the times at which they spoke. In another setting, where authors edit content collaboratively in real time [18], decision points can be identified by looking at the convergence of changes on parts of the content. A real-time content editing system can also provide information on who edited what and when. All of this information can be fed into our system to improve the coverage of the captured structured data; as the coverage improves, more details about the meeting are rendered through graph visualization. Another example of a meeting recording repository system capable of providing annotated meeting recordings is the Collaborative Recorded Meetings (CRM) application [19], available under LotusLive Labs [20]. In this paper, we have employed the CRM application to demonstrate our meeting visualization concept. CRM monitors the LotusLive Meetings launched through this service and records metadata about the meeting actions; it also fetches the video file for the recording as well as the presentation files shared during the meeting. The video file is automatically fed to a transcription service [15], and the presentation slides are automatically fed to the Slide Library [20] by CRM. Slide Library is another application available under LotusLive Labs; it extracts the text in slides and provides a keyword search API that returns individual slides as well as full decks. CRM aligns the text in slides and the transcriptions with the video timeline, as well as with participant actions such as join, leave, and present.
Furthermore, users of CRM can annotate a recording with tags and comments to elaborate further on what happened during a meeting, or to provide links to extra information about a certain topic discussed in the meeting. CRM allows users to perform keyword searches that return results from transcriptions, slides, tags and comments, where the search results take users to the specific segment in which a particular slide was discussed or a word was uttered by a speaker. Our demo system used CRM’s APIs to retrieve a dynamic feed from CRM’s database for information about archived meetings. We then processed the meeting history data to generate a meeting provenance graph. The meeting provenance graph gets more complex as the analysis gets deeper. Several recording analysis tools under themes such as topic segmentation, visual indexing, and audio indexing are reviewed in [2]; these could be used to enhance the meeting provenance graph.

4. MEETING VISUALIZATION APPROACH A meeting can be modeled, like any business process, as a set of activities executed by various actors, where textual and audiovisual data are consumed or produced at different steps. In effect, a meeting is a process with a start event, an end event and a

sequence of other events in between. Hence, the techniques that are used to generate provenance graphs for business processes [14] are applicable to meetings as well. Provenance, in a general sense, refers to the lineage of data and the tracking of events that affect data during its lifetime [12]. Activities, data and resources constitute the nodes, and the causal relations among these entities constitute the edges, of a meeting graph. Since the graph contains information about the history of a meeting, information related to the meeting history can be accessed through a graph query interface. Visualizing a meeting as a graph gives users better insight into the meeting flow and the involvement of different participants, and provides easy access to various meeting information simply by clicking on the corresponding icons. In the absence of visualization, without the meeting context, users have no anchors for navigating the artifacts of a meeting recording. Traditional multimodal keyword search of meeting logs returns a flat list of results against the keywords. Meeting provenance graph queries, on the other hand, render the results in connection with other artifacts. Extracted meeting event data contains information about meeting artifacts such as the slides presented, the roles of the people who were in the meeting, speech-to-text translation segments, etc. The first step in visualizing a meeting as a graph is to supply a data model for the various classes of graph nodes and edges. Once the node and edge types are defined, the raw meeting event data instances are mapped onto the graph types, constituting the instances of graph nodes and edges.

4.1 Meeting graph node types The types of graph nodes should be general enough to support various meeting data. We propose to extend the following data types, which have proven sufficient to represent any business process [14], for meeting visualization:
Data type: the artifacts that were produced, utilized or modified during the execution of a meeting. Typically these are presentation slides, audio or video clips, voice transcripts, chat messages and database records.
Task type: a task record is the representation of a particular meeting activity. Usually, but not necessarily, meeting activities utilize or manipulate data and are executed by the meeting participants. Making a presentation, introducing participants, holding discussions and answering questions are typical activities of a meeting.
Resource type: a resource record represents a person, or any resource that is the actor of a particular task. Participants, presenters and meeting organizers are the resources of a meeting.
Relation type: these records are generally produced as a result of correlating two records.
Meeting type: a meeting record is used to connect the artifacts that belong to a particular meeting.
These artifact types constitute the nodes of a meeting graph, and the relation records represent the edges. Figure 3 shows the icons used to represent the meeting graph data types. The icons help to identify important meeting artifacts visually when the meeting provenance graph is displayed.
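As a rough illustration of the five core types, they might be modeled as follows; the actual system defines them as extensible XML types, so the Python class and field names here are only an analogue.

```python
from dataclasses import dataclass, field

# Illustrative analogue of the core record types; names are assumptions,
# not the system's actual XML schema.
@dataclass
class GraphNode:                  # common base: every record has a display name
    id: str
    displayName: str

@dataclass
class DataRecord(GraphNode):      # slides, clips, transcripts, chat messages
    uri: str = ""

@dataclass
class TaskRecord(GraphNode):      # one meeting activity, e.g. a slide presentation
    start: float = 0.0
    duration: float = 0.0

@dataclass
class ResourceRecord(GraphNode):  # a person or other actor of a task
    role: str = "participant"

@dataclass
class RelationRecord:             # an edge produced by correlating two records
    source: str
    target: str
    relation: str

@dataclass
class MeetingRecord(GraphNode):   # ties all artifacts of one meeting together
    artifacts: list = field(default_factory=list)
```

The shared base class mirrors the Graph Node Type described in the next subsection, from which all concrete node types inherit common attributes.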

Figure 3 Iconic representations of data types used in the graph visualizer Meeting artifacts of various types are detected by the recording probes, which act as the event listeners of the underlying meeting systems. Section 3 gives an overview of some of the existing online meeting applications and their recording capabilities. In order to recreate a meeting end-to-end from the event data, the typed meeting artifacts must be connected together. This naturally translates into creating edges in the meeting provenance graph by adding relation records, which can be done in multiple steps. Basic relations between a task and the manipulated data, or between a task and the resources, can be established from the information that the task record holds. As an example, presentation is one of the most common activities of a meeting. A particular presentation task starts when the presenter starts speaking and projecting the slides. As a result, the relations between the presentation task and the slides, as well as the speaker, are established automatically. More complex relations are discovered by running analytics and locating the correct provenance records in the provenance graph. Other relations are established by utilizing data outside of the provenance graph, such as data stored in content repositories. As new relations are added, the underlying provenance graph is continuously enriched, as the creation of some relations may trigger the execution of other enrichment rules. As relations between meeting records are established, the hyperlinked structure provides for each record a context that describes its lineage, with a path into related events that occurred prior to its existence and related events that happened later. In the next section, we illustrate the process of meeting visualization with an example. The data for this example is collected from the CRM application mentioned in section 3.
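The automatic presenter-slide correlation described above can be sketched as a simple interval-overlap check. The speaker turn and slide-display timings below are hypothetical, and real analytics would of course be more involved.

```python
# Hedged sketch: infer a "presents" relation whenever a speaker turn
# overlaps a slide-display interval. All timings are hypothetical.
def overlaps(a, b):
    """True if two (start, end) intervals in seconds intersect."""
    return a[0] < b[1] and b[0] < a[1]

speaker_turns = {"susan": (0.0, 120.0)}                     # who spoke when
slide_shows = {"slide[0]": (0.0, 6.3), "slide[1]": (6.3, 30.0)}  # what was shown when

# Relation records produced by correlating the two event streams.
relations = [
    (who, "presents", slide)
    for who, turn in speaker_turns.items()
    for slide, shown in slide_shows.items()
    if overlaps(turn, shown)
]
```

Each resulting triple would be materialized as a relation record, i.e. a new edge in the provenance graph.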
The data model, on the other hand, is created by extending the core meeting entity types: data, resource, task, meeting and relation. Figure 4 shows the data schema that we extended from the core types for the example described in the next section. For that example, speech-to-text segments are typed as ‘segmentsType’, meeting participants as ‘rolesType’, presentation tasks as ‘presentationType’ and the meeting record as ‘meetingType’. The elements and attributes of each type are also illustrated in the same figure. Each record is an extensible XML data structure, and all records share the common attributes specified in the Graph Node Type. Hence, Graph Node Type is the base class for all other node types; all other types inherit its properties, such as displayName, which is used to display the name of the artifact. Data, resource and task artifacts are added to the graph as the meeting progresses. A semantic relation between two meeting artifacts is expressed by an edge between the corresponding nodes, materialized as a relation record.
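For illustration, an instance of such an extended record might look like the following fragment; the element and attribute names are hypothetical and not the exact schema of Figure 4.

```xml
<!-- Hypothetical instance of a 'rolesType' resource record;
     all names here are illustrative only. -->
<record class="resource" type="rolesType" id="r-1" displayName="Susan">
  <role>presenter</role>
</record>
```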

Figure 4 Meeting provenance graph node data model for the example in this paper

5. VISUALIZING AN ACTUAL MEETING In this section, we generate a meeting graph using the event data collected during an online department meeting. The meeting was recorded with the Collaborative Recorded Meetings application mentioned in section 3. During the meeting, some of the participants shared their impressions of CHI 2010 with the group after returning from the conference. A video clip of this meeting (Figure 5) and some raw event data files in XML format are available for extracting the meeting artifacts.

So, our starting point is the raw XML files that contain information about the slides presented, the roles of the people who were in the meeting, the speech-to-text translation segments and the meeting itself. Figures 6 to 8 show sample XML files that are processed to extract meeting information: speech-to-text segments with their durations and starting points, and participants with their roles.
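A sketch of how one of these raw files might be turned into typed segment records follows. Since the actual CRM export schema appears only in the figures, the tag and attribute names below are assumptions.

```python
import xml.etree.ElementTree as ET

# Hypothetical speech-to-text export; the real schema is shown in Figure 7.
raw = """<segments>
  <segment start="0.0" duration="6.266">Welcome everyone</segment>
  <segment start="6.266" duration="4.0">Let's begin with CHI 2010</segment>
</segments>"""

# Map each XML segment onto a typed data record for the graph.
records = [
    {
        "class": "data",
        "type": "segmentType",
        "start": float(seg.get("start")),
        "duration": float(seg.get("duration")),
        "text": seg.text,
    }
    for seg in ET.fromstring(raw).iter("segment")
]
```

The start and duration values carried by each record are what later allow the segments to be correlated with presenters and slides.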

Figure 6 Sample XML for the slides presented

Figure 7 Sample XML for speech to text segments

Figure 5 Video clip of the department meeting where CHI 2010 papers were discussed

Figure 8 XML for the roles of meeting participants As a result of extracting meeting artifacts from the event data and mapping them onto the data model extended for this example in Figure 4, the following graph node types are generated:
DataType (segmentType): speech-to-text captions
TaskType (presentationType): slide presentations
ResourceType (rolesType): participants
In addition, relation records are created between the slides, the presenter of the slides, the participants and the speech-to-text captions by using the time and duration information associated with each artifact. This way, a slide presentation and the associated speech-to-text caption are connected to the correct presenter and the

presentation. Figure 9 depicts the visualization of the meeting graph, where Janet, Scott, Jeffrey, Michael and Miriam are resource nodes, the speech-to-text captions are data nodes and the slide presentations are the task nodes. Note that there is a task node corresponding to the presentation of each slide. Each slide presentation task is connected to the prior and the next presentation slides, keeping the flow in order. The role of each participant is also displayed as an edge in the graph. The graph immediately reveals information about the meeting that is not visible from the meeting records, such as the fact that Miriam was not present when the meeting started: she joined during Jeffrey’s presentation of the 9th slide. Janet, on the other hand, left the meeting during Jeffrey’s presentation. In total, 24 slides were presented in the meeting. These are some indications of how the visibility of meeting information is increased through graph visualization. It is evident from the graph displayed in Figure 9 that Janet started the presentation with slide[0], presenting to Jeffrey, Scott and Michael. Scott is the chair of this meeting. Janet’s presentation lasted until the ninth slide, after which Janet left and Miriam joined. Jeffrey is the second presenter, presenting slide[9] to slide[24].

Figure 9 The graph of a meeting

6. USER INTERFACE We have implemented a web-based user interface to visualize the meeting provenance data, as shown in Figure 10. Our visualizer is based on ILOG’s JViews Diagrammer Java components [21]. In this version, the locations of the graph nodes are calculated automatically by the software package, based on the graph pattern, and then rendered; hence we did not have control over the layout. In the next version, we will focus more on controlling the graph visualization patterns and plan to use more of ILOG’s graph rendering capabilities. Figure 10 shows various features of the meeting visualizer user interface. The pane on the right visualizes the flow of the meeting. In this pane, each artifact is represented with an icon corresponding to the type of the artifact. Each node or edge can be clicked to see its attributes in the left pane. In the example depicted in Figure 10, the details of the first slide presentation are

displayed in the left pane, such as the title ‘CHI 2010 Summary’, the presentation start time ‘0.0 sec’, the presentation duration ‘6.266 sec’ and a URL reference to the presented slide or to the associated video clip corresponding to the presentation of this slide. Connections between the icons can also be clicked to display the type of relation. At the bottom of the left pane, the artifacts connected to ‘slide[0]’ are listed: the participants present when the slide was presented, and their roles. The bottom pane has multiple functions. It serves as a timeline on which to lay out the meeting events, as well as a search and query interface to search for a particular artifact or filter the graph based on the attributes of a provenance node. The Filter tab in the left pane enables filtering irrelevant artifacts out of the provenance graph. In the example depicted in Figure 11, the provenance graph for the meeting is filtered to render only the ‘meeting artifact’ and the associated connections,

which are the participants of the meeting. In the left pane, only the artifact of type meeting and the resources with connections to the meeting artifact are checked and selected for rendering; other details are filtered out. This way one can get an isolated view of the selected artifacts. We also developed a web services query interface for accessing the provenance graph and writing complex graph queries. This XML-based query interface allows creating conditions based on the attributes of meeting artifacts. As an example, if we would like to retrieve the e-mail address of the person who presented

“slide[0]”, we first find the nodes of the graph with class attribute “task”, type attribute “presentation” and displayName “slide[0]”. We next find all the actor or resource nodes that are connected to this presentation task with the role attribute set to “presenter”. This returns a list of all resource nodes representing the people who presented “slide[0]”. Assuming that there is only one presenter, the e-mail address can then be obtained from the value of the email attribute of the associated resource node.
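The two-step walk just described can be sketched over a toy in-memory graph. The node attributes and the sample e-mail address are hypothetical, and a dictionary stands in for the actual XML-based query interface.

```python
# Toy graph for the query "who presented slide[0], and what is their e-mail?";
# all attribute values are illustrative.
nodes = {
    "t1": {"class": "task", "type": "presentation", "displayName": "slide[0]"},
    "p1": {"class": "resource", "displayName": "Janet",
           "email": "janet@example.com"},
}
edges = [("p1", "t1", "presenter")]  # (source, target, role)

# Step 1: find the presentation task node named 'slide[0]'.
tasks = [n for n, attrs in nodes.items()
         if attrs.get("class") == "task"
         and attrs.get("displayName") == "slide[0]"]

# Step 2: follow 'presenter' edges back to the connected resource nodes.
presenters = [src for src, dst, role in edges
              if dst in tasks and role == "presenter"]
emails = [nodes[p]["email"] for p in presenters]
```

With a single presenter, `emails` holds exactly one address, as in the scenario described above.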

Figure 10 Meeting history browser user interface

Figure 11 Filtering the meeting graph

7. EXTRACTING INFORMATION FROM A GRAPH Extracting information about a meeting from its provenance graph is equivalent to extracting the properties of the graph or discovering various sub-graph patterns. There are a number of graph query languages in the prior art, with various computational complexities and flexibilities, such as SPARQL, RQL, RDF Query Language, OWL-QL, GraphLog, GOQL, GRAM, etc. The graph can be interacted with in different ways, including browsing it one step at a time, finding sub-graphs based on meeting-related constraints, or checking hypothesis expressions. As an example, one may want to find out who was present when a particular slide was presented, or browse the content of the presentation to find out when participants joined or left the meeting. One could also find answers to questions like “who was present in the room when a

particular statement was made?” by sending a graph query that extracts all the “presentation” task nodes linked to the “caption” data nodes containing the statement in question. Then the resource nodes that joined before that particular slide presentation, but did not leave, are listed. A number of applications external to the meeting visualizer can be invoked from the user interface, which enriches the functionality of the meeting visualizer. Figure 12 demonstrates the concept of invoking various applications from the user interface of the meeting visualizer. In Figure 12, Paul is a resource node with a presenter role. One can immediately contact Paul via phone, chat or email by invoking the associated application from the context menu. The audio or video clip of the presentation of a particular slide can also be played with a click, as shown in the figure. Similarly, one can open the presentation slides or send them as an e-mail.
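The membership question discussed at the start of this section reduces to comparing join/leave times against a slide's display window. The sketch below uses hypothetical event times and names; the real system would resolve the window from the caption and presentation task nodes first.

```python
# Hedged sketch of "who was present while slide[9] was shown?";
# all times (seconds) and participants are hypothetical.
slide_window = {"slide[9]": (60.0, 75.0)}           # (start, end) of display
joins  = {"Janet": 0.0, "Scott": 0.0, "Miriam": 65.0}
leaves = {"Janet": 70.0}                            # others stayed to the end

def present_during(slide, end_of_meeting=1000.0):
    """List participants whose attendance interval overlaps the slide window."""
    start, end = slide_window[slide]
    return sorted(
        who for who in joins
        if joins[who] < end and leaves.get(who, end_of_meeting) > start
    )
```

Here Janet counts as present because she left only after the window opened, and Miriam counts because she joined before it closed.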

Figure 12 Invoking external applications from the meeting visualizer

8. CONCLUDING REMARKS Visualizing the history of a meeting as a graph gives better insight into the context of the meeting. The graph representation improves the visibility of meeting logs, such as participant activities. Currently, online meetings are stored in archive servers with some log data and video clips for retrieval. The meeting visualization approach presented in this paper offers an alternative way of storing meeting data and a new search paradigm for accessing meeting history data. If the meeting graph representation is included, then when archived meetings are searched, the returned list includes the graph representation of each meeting as well. A graph is a structured form that captures the history information by selectively probing the relevant meeting event data; hence a graph pattern is a better navigation tool than a flat archived log. The number of nodes and edges in the graph depends on the amount of meeting information captured by the meeting recording devices. As speech and video processing technologies advance, the size of the meeting graph is expected to grow, with an increasing number of nodes and edges between them. The graph complexity, however, is controlled by the users: the type and amount of event data to be converted into graph data can be configured. As the graph is enriched by new analytics and recording technologies, the meeting data search and navigation experience will become richer.

Another advantage of representing meeting provenance as a graph is the convenience of using graph query languages to retrieve information. Graph query languages support semantic search over the graph data structure. The data schema accommodates semantic relations between the meeting artifacts, thus enabling ontology-assisted queries. This allows searching not only for the list of participants, but also for their roles and contributions, how they were involved in decision making, and the relations between participants. Similarly, search results will return not just a list of presentation slides, but also who presented them and when, and whether they impacted a decision point. Understanding the structure of semantic meeting queries is an area we would like to pursue as an extension of this work.

Our future research includes understanding how to generate analytics that would run rules to extract meeting-related information and relations, and to measure the contributions of participants. Another extension of this work is to generate an effectiveness measure for meetings in terms of the attainment of the original meeting goals. Possible meeting goals may include a high attendance rate (e.g., 80%), covering the agenda items, being on schedule, etc. Meeting statistics such as these can be automatically rendered on a dashboard.

9. REFERENCES
[1] Jaimes, A., Omura, K., Nagamine, T., Hirata, K.: Memory cues for meeting video retrieval. In: CARPE'04: Proceedings of the 1st ACM Workshop on Continuous Archival and Retrieval of Personal Experiences, pp. 74-85. ACM Press (2004)
[2] Bouamrane, M.-M., Luz, S.: Meeting browsing. Multimedia Systems 12(4-5), 439-457 (2007)
[3] Hindus, D., Schmandt, C.: Ubiquitous audio: capturing spontaneous collaboration. In: Proceedings of the 1992 ACM Conference on Computer-Supported Cooperative Work, CSCW'92, pp. 210-217. ACM Press (1992)
[4] Luz, S., Roy, D.: Meeting browser: a system for visualising and accessing audio in multicast meetings. In: Proceedings of the International Workshop on Multimedia Signal Processing. IEEE Signal Processing Society (1999)
[5] Rabiner, L.R., Juang, B.H.: Fundamentals of Speech Recognition. Prentice-Hall, Englewood Cliffs (1993)
[6] Jurafsky, D., Martin, J.H.: Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition. Prentice-Hall, Englewood Cliffs (2000)
[7] Rohlicek, J., Russell, W., Roukos, S., Gish, H.: Continuous hidden Markov modeling for speaker-independent word spotting. In: Proceedings of the International Conference on Acoustics, Speech, and Signal Processing, ICASSP-89, vol. 1, pp. 627-630 (1989)
[8] Marcus, M.P., Reynar, J.C.: Topic Segmentation: Algorithms and Applications. University of Pennsylvania, PA (1998)
[9] Meier, R.P., Cormier, K., Quinto-Pozos, D.: Modality and Structure in Signed and Spoken Languages. Cambridge University Press (2002)
[10] Li, F.C., Gupta, A., Sanocki, E., He, L., Rui, Y.: Browsing digital video. In: CHI '00: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 169-176. ACM Press (2000)
[11] Waibel, A., Bett, M., Finke, M., Stiefelhagen, R.: Meeting browser: tracking and summarizing meetings. In: Penrose, D.E.M. (ed.) Proceedings of the Broadcast News Transcription and Understanding Workshop, pp. 281-286. Morgan Kaufmann (1998)
[12] Ram, S., Liu, J.: A new perspective on semantics of data provenance. In: The 1st International Workshop on the Role of Semantic Web in Provenance Management (2009)
[13] Scheer, A.-W., Nüttgens, M.: ARIS Architecture and Reference Models for Business Process Management. Institut für Wirtschaftsinformatik, Universität des Saarlandes, Saarbrücken
[14] Curbera, F., Doganata, Y., Martens, A., Mukhi, N., Slominski, A.: Business provenance - a technology to increase traceability of end-to-end operations. CoopIS 2008, Monterrey, Mexico (2008)
[15] Bain, K., Basson, S., Faisman, A., Kanevsky, D.: Accessibility, transcription, and access everywhere. IBM Systems Journal (2005)
[16] Wilcox, L., Kimber, D., Chen, F.: Audio indexing using speaker identification. In: Proceedings of the Conference on Automatic Systems for the Inspection and Identification of Humans, pp. 149-157 (1994)
[17] Hsueh, P., Kilgour, J., Carletta, J., Moore, J., Renals, S.: Automatic decision detection in meeting speech. MLMI 2007
[18] Sun, C., Jia, X., Zhang, Y., Yang, Y., Chen, D.: Achieving convergence, causality preservation and intention preservation in real-time cooperative editing systems. ACM Transactions on Computer-Human Interaction (TOCHI), pp. 63-108 (1998)
[19] Tag Me While You Can: Making Online Recorded Meetings Shareable and Searchable. Anonymized for blind review
[20] http://www.lotuslive.com
[21] http://www-01.ibm.com/software/integration/visualization/jviews/diagrammer/