Prospective Logic Programs

Luís Moniz Pereira and Gonçalo Lopes
Centro de Inteligência Artificial - CENTRIA, Universidade Nova de Lisboa, 2829-516 Caparica, Portugal
[email protected] [email protected]

Abstract. As we face the real possibility of modelling systems capable of non-deterministic self-evolution, we are confronted with the problem of having several different possible futures for a single starting program. This issue brings the challenge of how to allow such evolving programs to look ahead, prospectively, into such hypothetical futures, in order to determine the best courses of evolution from their own present, and thence to prefer amongst them. The concept of prospective logic programs is presented as a way to address such issues. We start by building on previous theoretical background, on evolving programs and on abduction, to construe a framework for prospection, and describe an abstract procedure for its materialization. We then proceed to fully specify the ACORDA system, a concrete working implementation of the procedure previously described, and report on new developments and results achieved by our recent research on extensions to the system using state-of-the-art query resolution mechanisms, namely tabling in XSB-Prolog in combination with Smodels. We also take on several examples of modelling prospective logic programs that illustrate the proposed concepts. We conclude by elaborating on current limitations of the system and examining future work scenaria.

1 Introduction

Continuous developments in logic programming (LP) language semantics which can account for evolving programs with updates [ABLP02,ALP+00] have opened the door to new perspectives and problems amidst the LP community. Given that it is now possible for a program to talk about its own evolution, changing and adapting itself through non-monotonic self-updates, one of the new looming challenges is how to build logic programs capable of anticipating their own possible future states and of preferring among them in order to further their goals, prospectively maintaining truth and consistency in so doing. Such futures represent not just changes in the outside environment, but must also necessarily incorporate possible actions originating from the program execution itself, and perhaps even consider possible actions and hypothesized goals emerging from the activity of other programs. As we are dealing with non-monotonic logics, where knowledge about the world is incomplete and revisable, predictions about the future can represent evolving hypotheses which may be true, pending subsequent confirmation or disconfirmation by further observations.

We intend to show how rules and methodologies for the synthesis and maintenance of abductive hypotheses, extensively studied in Abductive Logic Programming [KKT98,Kow06b,Poo00,Poo97], can be used for effective and defeasible prediction. Abductive reasoning by such prospective programs must necessarily pair up with a notion of simulation, as the program is imagining the evolution of its future prior to actually taking action towards changing its own state. While being immersed in the world (virtual or real), the program should be capable of conjuring up hypothetical what-if scenaria while attending to a given set of integrity constraints, goals, and partial observations of its environment. These scenaria can be about hypothetical observations (what-if this observation were true?), about hypothetical actions (what-if this action

were performed?), or hypothetical desires (what-if this goal were pursued?). Note that in this work we are considering a very broad notion of abduction, which can account for any of these kinds of hypotheses. An abducible can be assumed only if it is a considered one, i.e. it is expected in the given situation, and moreover there is no expectation to the contrary [DP05,DP07]:

consider(A) ← expect(A), not expect_not(A).

The rules about expectations are domain-specific knowledge contained in the theory of the program, and effectively constrain the hypotheses (and hence scenaria) which are available.

It is to be expected that multiple possible scenaria become available to choose from at a given time, and thus we need some form of preference specification, which can be either a priori or a posteriori w.r.t. hypotheses making. A priori preferences are embedded in the knowledge representation theory itself and can be used to produce the most interesting or relevant conjectures about possible future states. Active research on the topic of preferences among abducibles is available to help us fulfill this purpose [DP05,DP07], and results from those works have been incorporated in the presently proposed framework.

A posteriori preferences represent choice mechanisms that enable the program to actually commit to just one single interesting hypothetical scenario engendered by the relevant abductive theories. These mechanisms may trigger additional simulations, not only to posit which new information to acquire so that a more informed choice can be enacted, but also to restrict and imagine commitment to each of the abductive futures along the way. At times, several hypotheses may be kept open simultaneously, constantly updated by information from the environment, until a choice is somehow forced during execution, or until a single scenario is preferred, or until none are possible.
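As a minimal sketch, the expectation mechanism above can be written as a small logic program. The generic consider rule is taken from the text; the domain rules and the abducible names drink_water and drink_beer, and the contexts thirsty and driving, are our own hypothetical illustrations, not part of the original system:

```prolog
% Generic rule: an abducible A is available for assumption only
% when it is expected, and there is no expectation to the contrary.
consider(A) ← expect(A), not expect_not(A).

% Hypothetical domain-specific expectation rules:
expect(drink_water) ← thirsty.
expect(drink_beer)  ← thirsty.
expect_not(drink_beer) ← driving.
```

Under this sketch, in a situation where both thirsty and driving hold, only consider(drink_water) is derivable, so only that hypothesis can enter an abductive scenario.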
The study of this new LP outlook is essentially an innovative combination of fruitful research in the area, providing a testbed for experimentation in new theories of program evolution, simulation and self-updating, while launching the foundational seeds for modeling rational self-evolving prospective agents. Our basic extensions to LP can be applied to a variety of purposes, one of which is to construct logical agent theories, but to address this a more general high-level language would have to be used and subsequently compiled into these more basic constructs. Even though we materialize some of the examples in this work as agent theories, we intend to stay close to the LP constructs at this experimentation and design phase. Building upon decades of solid LP research and implementation from the ground up is insurance towards constructive theory building and cumulative scientific inquiry.

Preliminary research results have proved themselves useful for a variety of applications and have led to the development of the ACORDA system, successfully used in modelling diagnostic situations [LP06]. This paper sketches a more formal abstract description of the procedure for designing and executing prospective logic programs and reports on new developments and results achieved by recent research on extensions to the system by using state-of-the-art query resolution mechanisms like tabling [Swi99] in combination with Smodels [NS97]. Some examples are also presented as an illustration of the proposed system capabilities, and some broad sketches are laid out concerning future research directions.

ACORDA literally means “wake-up” in Portuguese. The ACORDA system project page is temporarily set up at: http://articaserv.ath.cx/

2 Logic Programming Framework

For the foundations of prospective programming to be laid out, in this section we present a logical framework of discourse with which to approach the problem of creating futures. The task at hand is to describe how evolving logic programs can be represented as sequences of updates, and how abduction can be employed to query and complete those update sequences, so that predictive models can be correctly encoded in LP. Prior research results have shown how abduction can be successfully encoded in LP [KKT98,APS04,Kow06b,Poo00] to provide explanations for new incoming updates or to generate updates themselves, and we build on these efforts to account for abduction as a viable means to anticipate the future.

Since abduction deals constantly with multiple possible explanations, the problem of preferring between available explanations is of paramount importance. A full-blown theory of preferences has been previously presented in conjunction with EVOLP and updates [AP00] and has been extended to work with abductive theories in [DP05,DP07]. All of these are based on solid and well-established semantics such as the Stable Models [GL88] and Well-Founded Semantics [vGRS91], and exhibit thoroughly tested working implementations. In the next section, the abstract procedure for future prospection is laid out in terms of the previously mentioned semantics, effectively demonstrating how we can combine these diverse ingredients into powerful implementations of systems exhibiting promising new computational properties.

Before starting, we feel it necessary to differentiate this mechanism of prospection from the average planning problem, in which we search a state space for the means to achieve a certain goal. In planning, the search space is inherently fixed a priori, implicitly defined by the set of possible actions that we can use to search for the goal.
In prospective programming there are several main deviations from this recurrent theme that effectively set it apart from a typical planning problem. First, the presence of context-dependent abductive extensions to the initial theory, combined with iterated self-evolution of the program during the simulation itself, means that the state space is very likely to drastically change during the very prospection process. In some situations we may end our simulation with many more hypotheses than those we initially had available. In other cases they may be dramatically reduced, and these changes typically depend on non-deterministic updates on the program, either from the environment, or from the program itself. An example of this will be presented later on, where the decision to commit to a scenario may represent the expansion of the search space at later stages of prospection.

Second, in planning, the presence of a goal is mandatory to guide the search. In prospective programming, while we can indeed probe the future with a given goal in mind, abducible scenaria for the future may be generated even without a specific objective, i.e. with the purpose of merely predicting the scenaria which are likely to come up next. The resulting simulation may itself trigger new top goals for the program to execute.

Also, since we are dealing with future prediction not only in a neutral environment but possibly in a population of other rational agents, it is likely that a prospective program may want to predict the future evolutions of other such programs. In this case, it may very well turn out that it doesn't want to abduce solutions for a given goal, but in fact abduce goals or intentions of others, from solutions and actions that other agents exhibit! This is a significant departure from the habitual planning setting and, as such, demands new approaches and perspectives.

Working implementations of Dynamic Logic Programming, EVOLP and Updates plus Preferences using DLP available online at: http://centria.di.fct.unl.pt/˜jja/updates

2.1 Evolving Logic Programs

Modelling the dynamics of knowledge changing over time has been an important challenge for LP. Accounting for the specification of a program's own evolution is essential for a wide variety of modern applications, and necessary if one is to model the dynamics of real world knowledge. Several efforts were conducted towards developing a unified language that could be both expressive and simple, following the spirit of declarative programming that is characteristic of LP. The language EVOLP [ABLP02] is one of the most powerful results from this research area, with working implementations. EVOLP extends LP in order to provide a general formulation of logic program updating, by permitting rules to indicate assertive conclusions having the form of program rules. Such assertions, whenever they belong to a model of the program P, can be employed to generate an updated version of P. This process can then be iterated on the basis of the new program. When the program semantics affords several program models, non-deterministic branching evolution will occur, and hence several evolution sequences are possible. The ability of EVOLP to nest rule assertions within assertions allows rule updates to be themselves updated over time, conditional on each evolution strand. The ability to include assertive literals in rule bodies allows for looking ahead on program changes and acting on that knowledge before the changes actually take place. The body of research surrounding EVOLP has been steadily growing over the years, and we direct the reader to the references for more details about its specification and formal LP semantics [ABLP02,ALP+00].

2.2 Preferential Theory Revision

Abduction plays a crucial role in belief revision and diagnosis, and also in the development of hypotheses to explain some set of observations, or even to generate explanations for new incoming updates on dynamic knowledge bases [KKT98].
Such abductive extensions to a theory can be expressed by sets of abducibles, over which we should be able to express conditional priority relations. Abducibles may be thought of as the hypothetical solutions or possible explanations available for the conditional proof of a given query. This ability to construe plausible extensions to one's theory is also vital for logic program evolution, so that the program is capable of self-revision and theorizing, providing new and powerful ways in which it can guide the evolution of its knowledge base, by revising incorrect or incomplete behaviour. In future prospecting, abductive hypotheses can also be used to generate possible ways to reach a given goal, and to account for the triggering of likely scenaria. In the latter case, we can think of abduction as the answer to the inquiry about which scenario is more likely to accommodate future events.

The evaluation and selection among multiple alternative explanations is a central problem of abduction, because of the combinatorial explosion of available explanations. Thus, it is important to generate only those explanations relevant to the problem at hand. Several approaches have thus far been proposed, often based on some global selection criteria, which have the drawback of generally being domain independent and computationally expensive. An alternative to global criteria is to allow the theory to contain rules encoding domain-specific information about the likelihood that a particular assumption is relevant. In [DP05,DP07], preferences among abducibles can be expressed in order to discard unwanted assumptions. The notion of expectation is employed to express preconditions for assuming abducibles.
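As a hedged illustration of how such domain-specific rules restrict which explanations are even generated (all predicate and abducible names below are our own, not taken from the cited works):

```prolog
% Two candidate abductive explanations for an observation:
wet_grass ← rain.
wet_grass ← sprinkler.

% Domain-specific expectation rules: only contextually
% relevant hypotheses become available for abduction.
expect(rain)      ← cloudy.
expect(sprinkler) ← sprinkler_installed.
expect_not(sprinkler) ← winter.
```

In a winter context, only rain remains a considered explanation for wet_grass, so the combinatorial space of scenaria is pruned before any preference reasoning takes place.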

Preferring Abducibles. To express preference criteria among abducibles, we consider an extended first-order language L∗. A preference atom in L∗ is one of the form a ⊳ b, where a and b are abducibles; a ⊳ b means that the abducible a is preferred to the abducible b. A preference rule in L∗ is one of the form:

a ⊳ b ← L1, ..., Lt   (t ≥ 0)

where a ⊳ b is a preference atom and every Li (1 ≤ i ≤ t) is a literal over L∗. Although the program transformation in [DP05,DP07] accounted only for mutually exclusive abducibles, we have extended the definition to allow for sets of abducibles, so we can generate abductive stable models [DP05,DP07] having more than a single abducible. For a more detailed explanation of the adapted transformation, please consult the ACORDA project page, mentioned in a previous footnote.
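For instance, conditional preference rules of this form might read as follows; the abducibles tea and coffee and the conditions late_evening and morning are hypothetical names chosen for illustration only:

```prolog
% a ⊳ b: the abducible a is preferred to the abducible b,
% conditionally on the literals in the rule body.
tea ⊳ coffee ← late_evening.
coffee ⊳ tea ← morning, not decaf_only.
```

Such rules act as a priori preferences: when both abducibles are considered in a given context, the dispreferred one is discarded from the generated abductive stable models for that context.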

3 Prospective Logic Programming

We now present the abstract procedure for computing the prospective evolution of an evolving logic program. Although it is still too early to present a complete formal LP semantics for this combination of techniques and methodologies, as the implemented system is undergoing constant evolution and revision, it is to be expected that such a formalization will arise in the future, since the proposed architecture is built on top of logically grounded and semantically well-defined LP components. The procedure is illustrated in Figure 1, and is the basis for the implemented ACORDA system, which we will detail in Section 5, in application to some simple prospection examples in diagnosis and action selection.

Definition 1. An observation is a quaternary relation amongst the observer, the reporter, the observation name, and the truth value associated with it:

observe(Observer, Reporter, Observation, Value)

The observe literals are meant to represent observations reported by the environment into the program or from one program to another, which can also be itself (self-triggered goals). Observations can be actions, goals or perceptions. We also introduce the corresponding on_observe literal, which we consider as representing active goals that the program will attempt to satisfy by launching the corresponding observe action. The prospecting mechanism polls for those on_observe literals which are satisfied in a given situation and attempts to execute them, independently of the results for each one. In an abstract representation, we are interested in those on_observe literals which belong to the Well-Founded Model of the evolving logic program at the current knowledge state. By adopting the more skeptical Well-Founded Semantics at this stage, we guarantee a unique model for the activation of on_observe literals. Integrity constraints are also considered, and can be triggered on the basis of possible abductive scenaria, as the next example will demonstrate.

Example 1.
[Figure 1: flowchart linking Start, Knowledge Base, Active Goals + Integrity Constraints, Abductive Scenarios, a priori Preferences, Abductive Hypotheses, External Oracles, a posteriori Preferences, and Update Committed Abducibles.]

Fig. 1. Prospective simulation procedure.

Prospecting the future allows for taking action before some expected scenaria actually happen. This is vital in taking proactive actions, not only to achieve our goals, but also to prevent, or at least account for, catastrophic futures. Consider a scenario where weather forecasts have been transmitted foretelling the possibility of a tornado. It is necessary to deal with this emergency beforehand, and take preventive

measures before the event actually takes place. A prospective logic program that could deal with this scenario is encoded below. Note that the falsum literal is equivalently used here to model denial integrity constraints, instead of the usual empty head that is adopted in Smodels’ syntax. falsum