Cooperation in Dialogue and Discourse Structure

Brigitte Grau
LIMSI-CNRS and IIE-CNAM
BP 133, 91403 Orsay CEDEX, FRANCE

Abstract

Dealing with dialogues requires developing different levels of interpretation. An intention-based analysis must take into account discourse conventions and communication phenomena in order to produce coherent, cooperative dialogues. Such a system can be realised by distributing the tasks among independent agents. We show that the collaboration of these agents is made possible by building and using a dialogue representation whose structure and content integrate the results of the different agents. The structure is built dynamically and is based on an intentional analysis. But to go further than a purely plan-based approach, the basic units are communicative intentions and exchanges are finely structured. Thus the structure can also be used to handle conversational obligations. Since comprehension problems affect the form of a dialogue, a decomposition into precise units also allows reasoning on the structure itself, to evaluate the progress of a dialogue and to detect communication problems.

1. Introduction

Dialogues which occur in the context of seeking information, helping a novice to solve a task, or teaching a student are examples of situations where cooperativeness is essential. A dialogue system must be able to help its partner make his demand explicit or solve his problem. These aspects could be handled by an approach relying mainly on plans [Allen and Perrault 1980; Grosz and Sidner 1986]. However, solving communication problems, such as misunderstandings, and dialogue phenomena, such as adherence to discourse conventions, requires a complementary approach. Communication problems arise when speakers' intentions are badly recognised or when sentence analysis fails. They give rise to successive exchanges that do not converge towards an agreement. Since these situations are detectable in the surface form of the dialogue, we propose a dialogue model that allows an evaluation of the dialogue situation. Studies such as [Traum and Allen 1994; Airenti et al. 1993] have shown that participating in a dialogue entails obligations, such as responding to questions, which are independent of the task resolution. These obligations show that speakers also have a goal of respecting conventions, and that utterances must be interpreted from a double point of view: (a) intentions related to the task (why the speakers participate in the dialogue: seeking information, teaching, ...); (b) communicative intentions.

Anne Vilnat
LIMSI-CNRS
BP 133, 91403 Orsay CEDEX, FRANCE

These aspects can be handled by independent agents, because they correspond to different levels of interpretation, as shown in [Litman and Allen 1990; Moore and Pollack 1992; Lambert and Carberry 1992]. Keeping a cooperative dialogue coherent and natural requires that these agents collaborate. The dialogue model we propose centralises the results computed by the agents, all of whom have access to information concerning the current situation; it works as a blackboard. Thus the dialogue manager makes decisions, during the interpretation and generation processes, according to different points of view. We will show that the dialogue structure is built dynamically, and we will detail how the different agents work and collaborate. A first validation of the model has been carried out on dialogues from different kinds of corpora. The first one is a "Wizard of Oz" corpus collected by the Gene group(1) [Gene 1994]; the dialogues occur between a medical student and a teacher. The second one contains human-human dialogues between a teacher and a class; it was collected and transcribed in the context of the LHM project(2). Dialogues from both corpora have been represented within our model, showing the validity of the structure and its capacity to reveal communication phenomena and the strategies chosen by the speakers.

2. Overview of the model

The dialogue model we have defined is based on a precise structure of the exchanges between the two speakers. An exchange is like an adjacency pair [Schegloff and Sacks 1973], containing an initiative speech act and a reactive one, plus optionally an evaluation. A speech act encodes the communicative intention. The exchanges are hierarchically structured according to the part of the task they serve to perform. So, the model traditionally structures the intentions of the speakers according to the problem to be solved. But to be able to act in a cooperative manner, a dialogue manager has to recognise when its partner has difficulties solving a problem, or when misunderstandings occur. These problems are detectable from the form of the dialogue because they are necessarily expressed through exchanges. For example, the number of exchanges taking place before answering or, more generally, reacting indicates a difficulty in solving a problem. Handling these interaction problems requires an evaluation based on a precise structure of the exchanges. Thus, all the exchanges are structured, in order to differentiate the resolution of different points of the same subtask from the difficulty of solving one point. Exchanges are given a label to make their role in the dialogue explicit. Exchanges are main when they refer to a primary intention, or goal. They are preliminary when they focus on a point of a more general goal. Exchanges are complementary when they refer to additional information about a reaction. The model is built dynamically and an exchange can be shifted in the structure after its insertion. These changes are a kind of reinterpretation. A speaker is not supposed to reason in a top-down way and present his main problem before particular points. For example, in a train reservation task, a speaker can ask how much time before the departure it is necessary to make a reservation. This question may be a main goal, and after the answer of the system, the dialogue will be finished. In another case, the dialogue may continue and the speaker may ask for a reservation for the next departure of the train Paris-Marseille. This kind of dialogue occurs when the speaker knows that there are conditions attached to making a reservation, and he wants to verify that his problem can be solved before submitting it. In this case, the first exchange, first considered as main, is shifted to a preliminary of the second one, which is labelled as main. Computing these relationships between exchanges avoids asking questions which have already been answered. The notion of obligation is another property of dialogue that must be taken into account to construct a coherent whole. Coherence is due to the way a task is solved, but also to the fact that conversational rules have been respected. This conversational level is independent of the task evolution. For example, a basic rule is the obligation to give a response to a question.

(1) Gene is a group of the PRC-IA program, funded by the French Ministry of Education.
(2) LHM is a project funded by the European Community, concerning learning. The dialogues were collected by the University of Athens.
But the kind of response will be different if the question is expressed as a Y/N question or a WH question. It can also be necessary to give an evaluation of an exchange to mark an agreement. So we have defined conversational rules that relate each kind of initiative communicative intention to its appropriate reactions, plus possibly the presence of an evaluation. These rules are used to compute expectations about the next input, and can constrain the interpretation of a sentence. For example, consider the following dialogue, in which a speaker (L) asks an operator (S) of a university for a telephone number:

S1: Which department does Mr Watson belong to?
L2: Is there an English department?
S3: Yes, the phone number of Mr Watson from the English department is 12-34-56-78.

The third sentence is possible only if the intervention in L2 is analysed as a response to S1. If L2 were considered as a Y/N question, S3 would only contain 'Yes', and the speaker would be obliged to reformulate his response or his prior request. Such behaviour does not fit with cooperativeness. This example illustrates that conversational rules are not only related to surface form but also act on the evolution of the dialogue. The rules also help a dialogue manager to establish its own obligations and create its communicative intentions. The processes in charge of evaluating the dialogue, handling the interaction and recognising the intentions are conceived as agents that can collaborate thanks to the dialogue structure.

3. Dialogue representation

In this section, we detail the dialogue structure and the rules that allow it to be built dynamically. The input to the system consists of the intentions concerning the task, the turn taking decomposed into moves with the communicative intentions related to them, and a selected context of interpretation.

3.1. Relationships between intentions

Task knowledge is organised in recipes, rather analogous to those used by [Lochbaum 1994] or by [Rich and Sidner 1997]. We use the formalism defined in [Grau 1984]. A recipe contains a set of actions, with preconditions and effects, involving different parameters (an example of a recipe is given in section 4). Thus the relations between the intentions concerning the task reflect the relations between recipes, or the internal organisation of a recipe. We say that an intention I1 dominates an intention I2 (or I2 is subordinated to I1) if I2 concerns an action, a precondition, an effect or one of their parameters included in the recipe referred to by I1. Two intentions I1 and I2 are successive if neither of them dominates the other, i.e. I1 and I2 both refer to entities belonging to the same recipe. These definitions allow us to decide on the embedding of exchanges. As the dialogue proceeds, a recipe-tree is built, reflecting the relations among the activated recipes. For each utterance, the intentions related to the task are determined. They refer to actions for which recipes are known (or they are basic actions). Then the link between these recipes (or basic actions) and the recipes in the recipe-tree is computed.
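As a rough sketch of these definitions, the dominance test can be phrased over the entities of a recipe. This is only an illustration of the relation described above; the class and function names (`Recipe`, `Intention`, `dominates`) are our own, not the paper's.

```python
from dataclasses import dataclass, field

@dataclass
class Recipe:
    """Illustrative recipe: actions, preconditions, effects, parameters."""
    name: str
    actions: list = field(default_factory=list)
    preconditions: list = field(default_factory=list)
    effects: list = field(default_factory=list)
    parameters: list = field(default_factory=list)

    def entities(self):
        """All entities an intention referring to this recipe may concern."""
        return set(self.actions + self.preconditions
                   + self.effects + self.parameters)

@dataclass
class Intention:
    recipe: Recipe   # the recipe the intention refers to
    concerns: str    # the action/precondition/effect/parameter concerned

def dominates(i1, i2):
    """I1 dominates I2 if I2 concerns an entity of the recipe
    referred to by I1 (the definition given in the text)."""
    return i2.concerns in i1.recipe.entities()
```

With the train example of section 4, an intention about the reservation recipe would dominate an intention about its `R_delay` precondition, so the corresponding exchanges would be embedded.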

3.2. Communicative intentions

In a dialogue, the discourse is naturally segmented by turn taking, but we need to refine this segmentation to obtain the basic units of the dialogue structure, in order to handle the interaction and to take obligations into account. Some turns must be cut into two parts from the task point of view because they refer to different intentions. This separation results in sets of sentences. Then, for each sentence, the associated speech acts are computed. We define a speech act (corresponding to the communicative intention) as the combination of a predicate class and an associated illocutionary function [Roulet et al. 1985]. The predicate class depends on the semantics of the sentence: the predicates are classified depending on the application (such as physical action, interrogate, order, assert, agree, YNquestion, ...). This part is independent of the dialogue situation. The illocutionary functions are computed by rules taking into account the dialogue situation and the predicate class. They are labelled as initiative or reactive. The following table shows their associations, allowing the dialogue manager to take obligations into account.

  illocutionary functions (initiative)  |  illocutionary functions (reactive)
  offer-request                         |  accept / deny
  ask for information                   |  positive answer / negative answer
  ask for confirmation                  |  confirm / invalidate
  assertion                             |  positive evaluation / negative evaluation

For a given semantic interpretation in the same dialogue situation, different illocutionary functions may be found: the dialogue manager will choose the relevant pragmatic interpretation. For an information-seeking task, we have defined a set of 17 rules.
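The initiative/reactive pairings of the table above can be sketched as a lookup used to compute expectations about the next input. This is a minimal illustration; the pairing keys and the function name are our own labels, not the paper's rule set.

```python
# Assumed pairing of initiative illocutionary functions with the
# reactive functions that discharge the obligation they create.
EXPECTED_REACTIONS = {
    "offer-request":        {"accept", "deny"},
    "ask-for-information":  {"positive-answer", "negative-answer"},
    "ask-for-confirmation": {"confirm", "invalidate"},
    "assertion":            {"positive-evaluation", "negative-evaluation"},
}

def coherent_reaction(initiative, reactive):
    """Check that a reactive function is an expected reaction to an
    initiative one, as required when inserting a move in an exchange."""
    return reactive in EXPECTED_REACTIONS.get(initiative, set())
```

Such a table is what lets the dialogue manager constrain the interpretation of the next utterance, as in the Mr Watson example.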

3.3. Moves and exchanges

For each turn, the corresponding sets of sentences are associated with sets of speech acts. Three cases may occur: (a) a set is made of a unique speech act (only one sentence, or a repetition of the same speech act): it constitutes a move (the notion of move presented here is analogous to those of [Sinclair and Coulthard 1975] and [Goffman 1973]); (b) there is more than one speech act, but all of them carry the same illocutionary function: then it is also a move, associated with several speech acts (one is principal and the others are subordinated); (c) in the other cases, the set is divided into different moves with respect to one of the preceding conditions. Moves are labelled as initiative or reactive according to their illocutionary functions. Moves are combined inside exchanges. An exchange is composed of an initiative move and a reactive one. It may be completed by a third move, which evaluates the reactive move and thus is called evaluative; it generally consists of an acknowledgement of the exchange. Exchanges are labelled as preliminary, complementary or main. This classification was introduced by Roulet to represent human conversations. A preliminary exchange precedes a move: it indicates that the speaker needs some information before his move. A complementary exchange follows a move: it indicates that the speaker needs further information concerning the preceding move. A main exchange corresponds to a primary intention. The whole structure is a tree whose root is labelled dialogue and whose first branches are labelled main exchanges, corresponding to the different parts of the dialogue.
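The grouping of a turn's speech acts into moves (cases (a)-(c) above) can be sketched as follows. The class and function names are illustrative choices of ours, under the assumption that acts sharing an illocutionary function are grouped consecutively.

```python
from dataclasses import dataclass

@dataclass
class SpeechAct:
    predicate_class: str
    illocutionary_function: str  # e.g. "ask-for-information"

@dataclass
class Move:
    acts: list  # the first act is principal, the others subordinated

    @property
    def function(self):
        return self.acts[0].illocutionary_function

def group_into_moves(acts):
    """Cases (a)/(b): consecutive acts sharing an illocutionary function
    form one move; case (c): a change of function starts a new move."""
    moves = []
    for act in acts:
        if moves and moves[-1].function == act.illocutionary_function:
            moves[-1].acts.append(act)   # subordinated act, same move
        else:
            moves.append(Move([act]))    # principal act of a new move
    return moves
```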

3.4. Building the structure

The building process takes place after the agents have performed their different analyses. Its input consists of: the current move to attach to the structure and its communicative intention, the current exchange, and the relation between the intention of the current move and those of the current exchange. Coherence between the illocutionary functions is verified (see the table in section 3.2). The set of rules needed to build the structure is small (9) with regard to the different cases of insertion. We only give two of them, which are used in the following examples.

Notations: M: move, E: exchange, Ui: user's move, Sj: system's move, current move: the move to insert, current exchange: the selected exchange.

Rule 1: the current exchange becomes a preliminary, relative to the current initiative move.
If:
- current move: initiative M (Uk)
- current exchange: E [initiative M (Ui), reactive M (Sj)]
- Uk intention dominates Ui intention
Then:
E [prel E [initiative M (Ui), reactive M (Sj)], initiative M (Uk)]

Rule 2: the current move causes the opening of a new exchange, complementary to the current exchange.
If:
- current move: initiative M (Uk)
- current exchange: E [initiative M (Ui), reactive M (Sj)]
- Uk intention is subordinated to Sj intention
Then:
E [initiative M (Ui), reactive M (Sj), compl E [initiative M (Uk)]]
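The effect of these two rules on a small exchange tree can be sketched as below. The `Exchange`/`Move` classes and function names are our own illustration of the rule schemas, not the system's implementation.

```python
class Move:
    def __init__(self, kind, label):
        self.kind = kind    # "initiative" | "reactive" | "evaluative"
        self.label = label  # e.g. "U1", "S2"

class Exchange:
    def __init__(self, children, role="main"):
        self.children = children  # moves and embedded exchanges
        self.role = role          # "main" | "prel" | "compl"

def apply_rule1(current_exchange, current_move):
    """Rule 1: the current exchange becomes preliminary to a new
    exchange opened by the current move (whose intention dominates)."""
    current_exchange.role = "prel"
    return Exchange([current_exchange, current_move], role="main")

def apply_rule2(current_exchange, current_move):
    """Rule 2: the current move opens a complementary exchange attached
    to the current exchange (its intention is subordinated)."""
    compl = Exchange([current_move], role="compl")
    current_exchange.children.append(compl)
    return compl
```

Applied to the train example of section 4, Rule 1 is what demotes the U1-S2 exchange to a preliminary of the exchange opened by U3bis.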

4. A dynamic structure

The organisation of exchanges may be modified as the dialogue continues. Further information may lead to reconsidering the hierarchy of the intentions related to the task. The leaves of the structure correspond to moves: they point to parts of the user's or system's utterances. Each leaf also points to a node of the recipe-tree (R-tree in the following). Each utterance is associated with a recipe, as shown in section 3.1, and its relation with those of the R-tree is computed. In most systems, a recipe must be attached to an existing branch, or causes the creation of a new tree. That means that this recipe either concerns a different action, or is part of a preceding one: the user details his general intention by asking about a part of the action necessary to reach it. Thus it is assumed that the user always proceeds by refinements of the general goal he has presented first. In our model, we allow the R-tree to grow by the leaves or by the root. That means that the user may begin by reaching a subgoal before presenting his primary intention. Let us give an example in the train domain, with two different continuations (U3 and U3bis).

U1: How much time before the departure time is it still possible to make a reservation on a TGV?
S2: 2 hours before.
either U3: Thank you.
or U3bis: OK, so make a reservation for the Paris-Marseille this evening.

Extract of the recipe R2:
name: R_make_a_reservation
roles: train (t1), agent (a1), reserv_time (rt1), tr_time (tt1), ...
conditions: R_delay with roles rt1, tt1, t1; R_get_ticket with roles ...
description: A_reserve (t1, ...); A_fill_in_reservation_ticket (...); R_give_reservation_ticket with roles ...
results: A_get_reservation (t1, ...)

In this example, after U1, the recognised intention I1 is to obtain information about a condition attached to making a reservation. The corresponding recipe R1 (R_delay) is activated and becomes the root of the R-tree. U1 opens a main exchange. S2 is generated to satisfy this intention. If the dialogue is continued by U3, this interpretation is confirmed. But when the continuation is U3bis, the recognised intention I2 concerns a request for an action. The relation between the activated recipe R2 (R_make_a_reservation) and R1 is already determined after U1. R1 is part of R2, thus I2 dominates I1, and the R-tree "grows" by the root. The dialogue structure obtained after application of Rule 1 is shown below.

Figure: dialogue structure after U3bis. Dial → main E [prel E [initiative M (U1), reactive M (S2)], initiative M (U3bis)]; in the R-tree, R2 (R_make_a_reservation) is now the root, dominating R1 (R_delay).

Recognising U1-S2 as a preliminary exchange relative to U3bis, which opens a main exchange (and not as two successive main exchanges), makes it possible to interpret U3bis better. For example, since it is stated that the requested reservation must be made on the evening TGV, and not on another train, the system is prevented from asking "Which kind of train do you want this evening, a TGV or an express?", which would seem rather strange to the user. In such cases, the dialogue may become difficult to pursue, and the risk of misunderstandings becomes higher. The cooperativeness of the system is increased, and the cost to pay in terms of complexity is not very important: the search for the link between the recipes is bi-directional.
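The growth of the R-tree by the leaves or by the root can be sketched as below; the class and method names are illustrative, not the paper's.

```python
class RNode:
    def __init__(self, recipe):
        self.recipe = recipe
        self.parent = None
        self.children = []

class RTree:
    def __init__(self, recipe):
        self.root = RNode(recipe)

    def grow_by_leaf(self, parent, recipe):
        """Usual case: the new recipe refines one already in the tree."""
        node = RNode(recipe)
        node.parent = parent
        parent.children.append(node)
        return node

    def grow_by_root(self, recipe):
        """Our extension: the new recipe dominates the current root, as
        when R_make_a_reservation arrives after R_delay (U3bis)."""
        node = RNode(recipe)
        node.children.append(self.root)
        self.root.parent = node
        self.root = node
        return node
```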

5. Collaboration of the two analyses

To illustrate the necessity of dealing with obligations, let us examine another case of reinterpretation by detailing what happens in the "Mr Watson" example presented at the beginning of this paper.

S1: Which department does Mr Watson belong to?
L2: Is there an English department?
S3: Yes, the phone number of Mr Watson from the English department is 12-34-56-78.

The situation after S1 is the following (dashed elements denote expected moves):

Figure: E [prel E [initiative M (S1), reactive M (expected)], reactive M (expected)]; R-tree: R1.

The analysis of L2 indicates that the corresponding intention (I2) is subordinated to I1, because it concerns a recipe R2 which is part of R1 (the existence of an "English department" is a condition for belonging to this department). The speech act recognition gives two interpretations: a YNquestion with either an ask for information as illocutionary function (which is initiative), or a positive answer (which is reactive), because it contains the factual elements of an answer. Preference is given to the respect of conversational obligations, and the second interpretation is tried. In this case, Rule 3 is used:

Rule 3:
If:
- current move: reactive M (Uj) + YNquestion
- current exchange: E [initiative M (Si)]
- Uj intention reaches Si intention
Then: Uj is split into U'j + U"j, and
E [initiative M (Si), prel E [initiative M (U'j), reactive M], reactive M (U"j), evaluative M]

The application of this rule (preferred to Rule 2, which corresponds to the first alternative) produces the following structure:

Figure: E [initiative M (S1), prel E [initiative M (L'2), reactive M (expected)], reactive M (L"2), evaluative M (expected)], with R2 added under R1 in the R-tree.

This structure exhibits two main features: L2 plays both an initiative role, to which the system is obliged to react adequately, and a reactive role, which allows the system to obtain the answer to the question concerning the department. Thus S3 may be generated, with three parts: (a) a first part S3a ("Yes") to formally react to L'2; (b) a second part S3b, to acknowledge L"2 and to make explicit the inference the system made; and (c) a third part S3c to react to the first user's initiative (not mentioned in the figure), corresponding to Watson's phone number. When producing S3, the system has built the following structure:

Figure: E [initiative M (S1), prel E [initiative M (L'2), reactive M (S3a)], reactive M (L"2), evaluative M (S3b)], reactive M (S3c); R-tree: R1 dominating R2.

This interpretation is only possible if the English department really exists (to validate the inference). If this is not the case, the other analysis of the illocutionary function of L2 will be chosen, and L2 will only open a preliminary exchange (by application of Rule 2). The user will then have to find a way to answer the question asked in S1. In the case we are developing (the department exists), it would be very surprising for the user to only obtain the answer "Yes" and to have to specify "So the Watson I am looking for belongs to the English department", which seems completely obvious from his point of view after L2.

6. Communication handling

A dialogue manager has to be aware of communication problems, such as misunderstandings, in order to react skilfully and keep the dialogue coherent, friendly and natural. Reasoning only about the task knowledge does not allow the dialogue manager to evaluate how the resolution of a problem is progressing. Questions such as 'Did the user really understand the system's question?' or 'Is he able to give an answer?' can be answered by examining the set of exchanges related to the same problem (i.e. pointing to the same recipe) in the dialogue structure. Such an evaluation does not directly upgrade the comprehension level, but it gives the system some additional chances to perform a successful communication, even with a poor understanding.

6.1. Evaluation of the interaction

We have defined a dynamic evaluation process whose results are transmitted to the dialogue manager. This evaluation is based on three variables whose values are computed from the form of the structure. They deal with the notion of incidence, corresponding to the secondary exchanges useful to satisfy an intention (e.g. reformulating, iteration, restarting, explanation, confirmation or precision questions). They are derived from [Luzzati 1989], who established them from corpus studies.

• The Incidence Depth (ID) measures how long an incidence is, in terms of the number of closed preliminary exchanges before an expected reaction. The ID is related to the difficulty of finding the appropriate reaction to a given initiative intention.

Figure: example of a structure with ID = 4 (four closed preliminary exchanges, alternating system and user moves, before the expected reaction).

• The Local Height (LH) measures the height of a local incidence, in terms of the number of non-closed preliminary exchanges before a reaction. LH is related to communication problems.

Figure: example of a structure with LH = 3 (three non-closed preliminary exchanges embedded before a reaction).

• The General Depth (GD) measures the global complexity of the whole dialogue. It is the sum of the IDs relative to the main exchanges.

These variables are used by two functions that compute the complexity of the dialogue at a given moment: F1 = a·LH + ID and F2 = F1 / log(GD). F1 evaluates the difficulty of solving a problem. For measuring misunderstandings, LH is the most important parameter; this explains the presence of a weight a > 1, set a priori. If preliminary exchanges become too numerous, the system should intervene to help the user. This evaluation is not related to the task complexity, and F2 allows us to take this aspect into account, by assuming that the global complexity of the dialogue (GD) also gives a measure of the complexity of the task. Thus in F2, F1 is attenuated by GD, and a dynamic adjustment based on what has already occurred and its complexity is obtained.
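The two evaluation functions can be sketched as follows. Note that the exact form of F2 is unclear in the source, so we assume F2 = F1 / log(GD), matching the statement that F1 is attenuated by GD; the weight value a = 2 is likewise only an illustrative choice for a > 1.

```python
import math

def f1(lh, id_, a=2.0):
    """Difficulty of solving the current problem; LH carries the
    larger weight a > 1, set a priori."""
    return a * lh + id_

def f2(lh, id_, gd, a=2.0):
    """F1 attenuated by the global depth GD of the dialogue
    (assumed form: F1 / log(GD), guarded for small GD)."""
    return f1(lh, id_, a) / math.log(gd) if gd > 1 else f1(lh, id_, a)
```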

6.2. Control of the interaction

By using these evaluations, a dialogue manager can take into account the risks of misunderstanding and choose an appropriate strategy, based on the effects of its reaction on the evaluation variables. Asking for precision will make ID or LH increase. Reacting to a user's initiative will not make ID decrease. Thus, if the variables have already reached a high value, the dialogue manager can choose to restart a subpart of the dialogue by reformulating a previous question, making ID and LH decrease. On the contrary, if the variables have low values, the dialogue manager can repeat its last question, or ask for a confirmation or a precision. Handling communication phenomena allows a dialogue manager to decide on a strategy to apply based on the task resolution and the intention management, but also on the user's behaviour and its own reactions.
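The strategy choice described above can be sketched as a simple threshold on the evaluation of section 6.1. The threshold value and strategy labels are hypothetical illustrations, not the system's actual parameters.

```python
def choose_strategy(lh, id_, a=2.0, threshold=8.0):
    """Restart a subdialogue (reformulate a previous question, making
    ID and LH decrease) when the difficulty F1 = a*LH + ID is high;
    otherwise repeat the last question or ask for confirmation."""
    difficulty = a * lh + id_
    if difficulty >= threshold:
        return "restart-subdialogue"
    return "repeat-or-ask-precision"
```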

7. Selection of context

Before interpreting the role of a user's move, a context is determined. It contains all the topics the move must be related to in order to keep the dialogue coherent. The possible topics are those related to the non-closed preliminary exchanges in the current main exchange, to the current exchange at the preceding move (even if closed), and to the main exchange. Selecting exchanges in the structure to construct the current context allows us to define it relative to the pending problems in the task resolution, restricted to those that are coherent given the dialogue situation. This context guides the search for links between recipes, relating a new intention to the preceding ones. It plays the same role as the focus stack in [Grosz and Sidner 1986].
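The context selection just described can be sketched as a traversal collecting topics from the relevant exchanges. The dictionary representation and key names are our own illustration of the selection criteria, not the system's data structures.

```python
def select_context(main_exchange, current_exchange):
    """Collect, in order: topics of non-closed preliminary exchanges in
    the current main exchange, topics of the current exchange (even if
    closed), and topics of the main exchange itself."""
    topics = []
    for ex in main_exchange.get("preliminaries", []):
        if not ex.get("closed", False):
            topics.extend(ex.get("topics", []))
    topics.extend(current_exchange.get("topics", []))
    topics.extend(main_exchange.get("topics", []))
    return topics
```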

8. Validation of the model on different tasks

To validate the ability of the model to represent conversational phenomena by structuring and labelling the exchanges, we have applied it to different kinds of dialogues. We show that the evaluation of the interaction makes it possible to dynamically choose the most appropriate strategy.

8.1. Gene corpus

The dialogues concern explanations of an expert system's results. The participants are a medical student and a "Wizard of Oz" played by his teacher. The collected dialogues are long (about 30 turns) and include utterances with argumentative aspects. This corpus has been the basis for the design of a multi-agent explanation system.

Figure: extract of a dialogue structure (S: student, E: expert). A first main exchange: S "hypocalcemy: contradictory with data?" / E "hypocalcemy diagnosis: low probability". A second main exchange opened by S "pneumonopathy probability?", with a preliminary exchange S "What is pneumonopathy?" / E "Only one question", followed by S "HMD / pneumonopathy?" / E "HMD: a kind of pneumonopathy", then complementary exchanges S "HMD = pneumonopathy?" / E "HMD and pneumonopathy: different causes" and S "diagnosis for HMD?" / E "prematurity, ...".

In this task, the dialogue structure is important to keep track of what has been said and how it has been evoked. For example, the requirement for an explanation may not be explicit, and is only detected through the number of exchanges concerning a subject without reaching an agreement. The precise structuring of the exchanges (and the fact that some of them are closed or not) is very important to allow the other agents to make decisions concerning the explanations to give, and the way to argue for them in order to convince the student. A more complete study may be found in [Vilnat 1994].

8.2. LHM corpus

For this project, human-human dialogues between a teacher and his pupils (about 10 years old) have been collected in two different pedagogical contexts: (a) an experiment in which the teacher tries to bring the students to discover the physical laws by themselves; (b) a more classical teaching session in which the teacher presents the laws. The dialogue model has been used to exhibit the differences between these strategies. The dialogues are very long (about 100 utterances) and involve more than two participants, which requires enhancing the definition of speech acts (not given here).

Figure: extract of a dialogue structure in an experimental class (T: teacher, S: students). The structure shows preliminary exchanges before the principal initiative (T1-S2-T3), a chain of embedded complementary exchanges from Ar4 to T13a (T5-S6, T7-S8, T9-S10, T11-S12-T13a), and a final simple exchange (T13b-S14-T15a).

With this structure, it is clear that the different basic teaching strategies are made explicit by the dialogue structure: (a) a simple exchange, to clarify a particular point or to recall a given notion (T13b-T15a); (b) the adjunction of embedded complementary exchanges (Ar4-T13a): the teacher is never satisfied with the students' answers to his question, and tries to bring them to the appropriate one by asking a little more about the preceding answer; (c) beginning with some preliminary exchanges, before the principal initiative (T1-T3): the teacher verifies that the students have the appropriate knowledge in mind before asking his question. These strategies are often mixed. An important parameter is also the variation of the length of the exchanges: a long exchange is often followed by a short one, to prevent monotony. It is also obvious that the general shape of the structure obtained for the dialogues in the "classical" class (not illustrated here) is completely different. The differences between the chosen strategies are clearly reflected by the structure.

9. Discussion

The structure we propose in this paper is composed of a precise structure of the dialogue in exchanges, and a recipe tree (R-tree) to represent the task in progress. It can be compared to the attentional state in [Grosz and Sidner 1986], composed of discourse segments, a focus space stack and a dominance hierarchy obtained from an intentional structure. This last structure has slightly evolved in the works of [Lochbaum 1994] and [Rich and Sidner 1997], and the R-tree we propose is rather similar to these structures, except for the precise definition of the kinds of relations it allows one to recognise. The difference we introduce also concerns the growth of this tree. We do not postulate that the user will always adopt a top-down strategy to pose his problem. Most dialogue systems [Litman and Allen 1990; Pollack 1990] assume that the user gives his main goal in his first utterance. This choice is mainly justified by the fact that a novice user proceeds in this way [Carberry 1990]. But we consider that a user who uses a system often will become an expert, and thus will change his strategy. Dialogues will be more flexible and cooperative without assuming such a postulate, as shown in [Lambert and Carberry 1991]. Compared to discourse segments as defined in [Grosz and Sidner 1986], the structure we propose is completely segmented and more detailed. The basic unit is the exchange, and each exchange is labelled to indicate how it is related to the others. Our motivation is to introduce communicative intentions into this structure, and not only the intentions relative to the task accomplishment. Recognising the communicative intention makes it possible to take into account the obligations a speaker has when participating in a dialogue. The phenomena we consider here are analogous to those treated by Traum. Another motivation for our segmentation is the capability to take communication problems into account. The structure we define combines the different kinds of information and plays the role of a blackboard. Each agent is informed of it, and takes it into account to make its decisions. It is important both for the agents in charge of the interpretation of the user's utterances and for the agents having to decide on the strategy the system must adopt to continue the dialogue. Thus the dialogue structure allows a better collaboration of the different agents by providing complete information. It also permits a better distribution of the tasks to be done by different agents, each one being an expert in its particular domain.

10. Conclusion

Handling a cooperative dialogue requires awareness of different problems: satisfying the user’s intentions and managing coherent communication. To solve them, we have defined agents, each specialised in one task. Finding relations between the user’s intentions related to a task, managing discourse obligations and maintaining the communication are independent processes that share the same structure: the dialogue representation we have presented. Our structure thus gives a complete representation of all the phenomena we want to treat, and allows decisions concerning interpretation or reaction to be made with full knowledge of the situation. Rules for communicative intention recognition and structure building have been defined, and task knowledge is being processed. Finally, the whole process will be fully integrated in a general multi-agent platform, CARAMEL [Sabah and Briffault 1993].

11. References

[Airenti et al. 1993] Gabriella Airenti, Bruno G. Bara and Marco Colombetti, Conversation and Behavior Games in the Pragmatics of Dialogue, Cognitive Science, 17, 2, pp. 197-256, 1993.
[Allen and Perrault 1980] James F. Allen and C. Raymond Perrault, Analyzing intention in utterances, Artificial Intelligence, 15, pp. 143-178, 1980.
[Carberry 1990] Sandra Carberry, Plan Recognition in Natural Language Dialogue, MIT Press, Cambridge, 1990.
[Gene 1994] Gene, Modélisations d’explications sur un corpus de dialogue, Actes de l’atelier de recherche Groupe GENE du PRC-IA, Telecom Paris, 1994.
[Goffman 1973] Erving Goffman, La mise en scène de la vie quotidienne. Les relations en public, Éditions de Minuit, Paris, 1973.
[Grau 1984] Brigitte Grau, Stalking coherence in the 'topical' jungle, International Conference on Fifth Generation Computer Systems, Tokyo, 1984.
[Grosz and Sidner 1986] Barbara Grosz and Candace Sidner, Attention, intentions, and the structure of discourse, Computational Linguistics, 12, 3, pp. 175-204, 1986.
[Lambert and Carberry 1991] Lynn Lambert and Sandra Carberry, A tripartite plan-based model of dialogue, Proceedings of the 29th ACL, University of California, Berkeley, 1991.
[Litman and Allen 1990] Diane J. Litman and James F. Allen, Discourse processing and commonsense plans, in Intentions in Communication, ch. 17, MIT Press, 1990.
[Lochbaum 1994] Karen Lochbaum, Using Collaborative Plans to Model the Intentional Structure of Discourse, TR 2594, Harvard University, PhD thesis, 1994.
[Luzzati 1989] Daniel Luzzati, Recherches sur le dialogue homme-machine : modèles linguistiques et traitements automatiques, doctorat d’état ès lettres, Paris III, 1989.
[Moore and Pollack 1992] Johanna D. Moore and Martha E. Pollack, A problem for RST: the need for multi-level discourse analysis, Computational Linguistics, 18, 4, 1992.
[Pollack 1990] Martha E. Pollack, Plans as complex mental attitudes, in Intentions in Communication, Bradford Books/MIT Press, 1990.
[Rich and Sidner 1997] Charles Rich and Candace Sidner, COLLAGEN: When Agents Collaborate with People, Proceedings of Autonomous Agents, Marina Del Rey, California, 1997.
[Roulet et al. 1985] Eddy Roulet, Antoine Auchlin, Jacques Moeschler, Christian Rubattel and Marianne Schelling, L’articulation du discours en français contemporain, Peter Lang, Berne, 1985.
[Sabah and Briffault 1993] Gérard Sabah and Xavier Briffault, CARAMEL: a Step towards Reflexion in Natural Language Understanding Systems, IEEE International Conference on Tools with Artificial Intelligence, Boston, pp. 258-265, 1993.
[Schegloff and Sacks 1973] E. Schegloff and H. Sacks, Opening up closings, Semiotica, 8, pp. 289-327, 1973.
[Sinclair and Coulthard 1975] J.M. Sinclair and R.M. Coulthard, Towards an Analysis of Discourse. The English Used by Teachers and Pupils, Oxford University Press, Oxford, 1975.
[Traum and Allen 1994] David R. Traum and James F. Allen, Discourse Obligations in Dialogue Processing, Proceedings of the 32nd ACL, New Mexico State University, Las Cruces, 1994.
[Vilnat 1994] Anne Vilnat, L’historique du dialogue pour gérer des explications, Actes de l’atelier de recherche Groupe GENE du PRC-IA, Telecom Paris, 1994.