Journal of Mathematical Imaging and Vision, 4, 375–387 (1994). © 1994 Kluwer Academic Publishers, Boston. Manufactured in The Netherlands.
An Efficient Parallel Algorithm for Geometrically Characterising Drawings of a Class of 3D Objects¹

NICK D. DENDRIS AND IANNIS A. KALAFATIS
{dendris, kalafatis}@cti.gr
Department of Computer Engineering and Informatics, University of Patras, Rio, 265 00 Patras, Greece

LEFTERIS M. KIROUSIS
[email protected]
Department of Computer Engineering and Informatics, University of Patras, Rio, 265 00 Patras, Greece
Computer Technology Institute, P.O. Box 1122, 261 10 Patras, Greece

Abstract. Labelling the lines of a planar line drawing of a 3D object in a way that reflects the geometric properties of the object is a much studied problem in computer vision, considered to be an important step towards understanding the object from its 2D drawing. Combinatorially, the labellability problem is a Constraint Satisfaction Problem and has been shown to be NP-complete even for images of polyhedral scenes. In this paper, we examine scenes that consist of a set of objects, each obtained by rotating a polygon around an arbitrary axis. The objects are allowed to arbitrarily intersect or overlay. We show that for these scenes there is a sequential linear-time labelling algorithm. Moreover, we show that the algorithm has a fast parallel version that executes in O(log^3 n) time on an Exclusive-Read Exclusive-Write Parallel Random Access Machine with O(n^3 / log^3 n) processors. The algorithm not only answers the decision problem of labellability, but also produces a legal labelling, if there is one. This parallel algorithm should be contrasted with the techniques used to deal with special cases of the constraint satisfaction problem, which employ an effective, but inherently sequential, relaxation procedure in order to restrict the domains of the variables.
Keywords: image analysis, planar projections of 3D objects, line drawings, labelling of lines of drawings, efficient parallel algorithms

1 Introduction
A planar line drawing (the image) of a 3D object (the scene) is a planar graph whose lines correspond to a depth or orientation discontinuity in the scene. We assume that the drawing does not carry any information about texture, shadows, lighting, etc., and that it does not contain any isolated points. The scene, in general, is assumed to consist of opaque, solid objects. Moreover, as is often done in the literature, we assume the scene to be trihedral and nondegenerate. Trihedral means that any vertex of the scene is the intersection of at most three surfaces, while nondegenerate means that zero-width solids and cracks are not allowed. The line drawing is obtained by projecting the edges and vertices of the scene onto a plane via an orthographic projection. We assume that the projection plane satisfies the restriction of the general viewpoint, i.e., the topology of the image graph will not change if the scene is infinitesimally perturbed.

¹This research was partially supported by the European Community ESPRIT Basic Research Program under contracts 7141 (project ALCOM II) and 6019 (project Insight II).

For the purpose of understanding scenes as above whose faces are planar (i.e., polyhedra), Clowes [2] and Huffman [4] introduced the scheme of labelling. In this scheme, labels are assigned to the lines of the drawing according to certain geometric properties of the projected scene. For example, the label '+' on a line means that the corresponding edge is convex as seen from the projection plane, '−' means that
it is concave, and the label '→' means that the corresponding edge reflects a depth discontinuity where one surface occludes another (the direction of the arrow is such as to leave the occluding surface to the right). In 1987, Malik [12] generalized this labelling scheme to objects bounded by piecewise smooth surfaces. Of course, in images of curved objects, the lines of the image graph are in general not straight lines, but curves. Now, the requirement that the image is the projection of a 3D object belonging to the class of allowed scenes, and not an "impossible object", imposes severe constraints on the legal labellings of the lines that are incident onto a junction of the image. Thus, the labellability problem becomes a constraint satisfaction problem. The importance of obtaining a legal labelling (a labelling satisfying the constraints imposed by the geometry of the projected object) towards solving the practically more important problem of fully realizing the projected object has been demonstrated by many researchers (see, e.g., [12]). This importance stems mainly from the fact that many algorithms that realize an object from a line drawing depend on a labelling preprocessing stage. Such algorithms are described, for example, by Sugihara [17], and Malik and Maydan [13]. However, the labellability problem has been shown to be NP-complete even for images with planar surfaces, as, for example, in [10]. To overcome this difficulty, one can either use the methods developed to deal with a general constraint satisfaction problem or, otherwise, investigate classes of accepted scenes ("worlds") for which efficient labelling algorithms can be designed. The first approach usually makes use of algorithms that do not produce a globally legal labelling, but rather restrict the possible labels of each junction so that any remaining label can participate in a labelling which is only locally consistent with the constraints.
This "relaxation" procedure, introduced by Waltz [18], has been extensively investigated (see, e.g., [14], [15]). Of course, cases where the relaxation leads to complete solutions have also been studied. However, Kasif [7] proved that the relaxation procedure is P-complete. Consequently, algorithms using
this method are not amenable to parallelism. The second approach is to restrict the objects that may appear in the scene in a way that leads to efficient algorithms. We believe that this approach is practically important, since computer vision is usually applied to restricted environments. This method was used by Kirousis and Papadimitriou in [10], where a labelling and realizing algorithm for the Manhattan world was obtained (the Manhattan world consists only of polyhedra whose faces are perpendicular to one of the Cartesian axes). A similar approach is that of Alevizos [1], and Parodi and Torre [16], who assume that more information about the edges of the scene is provided. In this paper we follow the restricted world approach. Specifically, we restrict our scene to comprise a set of arbitrarily intersecting or overlaying objects, each obtained by rotating an arbitrary polygon (with edges that are straight-line segments) around an arbitrary axis. The objects do not intersect the projection plane. We call this permissible universe the pottery world (see Figure 1). Notice that although the objects are restricted to be obtained by rotation, there is no restriction on their orientation (cf. the Manhattan world in [10], where the orientation of the permissible objects is severely restricted). Following Malik's [12] approach in our model, solid cone apices must not belong to any other surface. For the case of hole cone apices, observe that they project into an isolated vertex. Therefore, by the initial assumptions concerning line drawings, such apices are not allowed to be visible. For example, scenes like the ones in Figure 2 are not allowed in the pottery world. Our approach to labelling is not restricted to the purely combinatorial problem of finding labellings from a list of legal labellings attached to each type of junction of the image graph.
In addition to combinatorial exclusion, we further exclude combinations of labellings on various components of the image graph, if these combinations cannot represent objects in our restricted world. This extended use of the realizability requirement (exclude not only labellings of a junction, but also combinations of labellings on collections of junctions) makes it possible to avoid the combinatorial explosion. As
Fig. 1. A scene from the permissible class of 3D objects.
2 Constraint Analysis
2.1 Types of Labels
Fig. 2. Illegal scenes.
far as we know, this method was used only by Kirousis in [8], but in a very restricted manner. Here we make strong use of it. In terms of complexity, our algorithm requires linear sequential time. Moreover, we show that it has a fast parallel version that executes in time O(log^3 n) on an Exclusive-Read Exclusive-Write Parallel Random Access Machine (a shared-memory machine; see, e.g., [5]) with O(n^3 / log^3 n) processors. The algorithm not only answers the decision problem but also finds a legal labelling, if there is one. It makes use of the fast parallel algorithms that produce a maximal independent set of vertices of a graph. It must be pointed out that our algorithms are easily implementable. Also, the approach we use to propagate the labels of an image graph is closely related to the approach in [9], which is used to obtain an efficient solution of the constraint satisfaction problem for implicational constraints (a special restricted type of constraints; see [9] for the definition).
The Clowes-Huffman-Malik labelling scheme assigns a label to each line of the image graph according to the way the corresponding edge of the scene is seen from the projection plane: For a connecting edge (i.e., an edge not belonging to the contour of an object), if it is convex as seen from the projection plane, its projection is labelled by a '+', while if it is concave, its projection is labelled by a '−'. For a contour edge, where one surface occludes another, its projection is labelled by an '→', and the direction of the arrow is such as to leave the occluding surface to the right. Finally, a limb is labelled by a '>>'. A limb is not the projection of a "physical" edge. It results when a curved surface occludes itself and the line of sight is tangential to it for all points on the limb. Again, the direction of the '>>' is such as to leave the occluding surface to the right. See Figure 3 for examples of all labels. We also use a generalized notion of a label: a multilabel. A multilabel is a set of possible labels which will be processed at a later stage to yield one label from the set (see [12]). The two multilabels that we introduce are {−, →} and {+, ←}. A line that can be labelled by only one of these multilabels can get an arrow-label of a uniquely determined direction, whereas one that can be labelled with both multilabels can get arrow-labels of both
• Y-junction: Three curves with distinct tangents. No angle between them is > π (e.g. junction f of Fig. 3).
2.3 Legal Labels of Junctions (Unary Constraints)
Fig. 3. A labelled scene.
directions (this is the reason that the arrows of the two multilabels are in opposite directions).
2.2 Types of Junctions
The junctions of the image graph are classified as follows ([12]):

• L-junction: Tangent discontinuity across the junction (e.g. junction e of Fig. 3).
• Curvature-L-junction: Tangent continuity but curvature discontinuity across the junction (e.g. junction b of Fig. 3).
• T-junction: Two lines have tangent and curvature continuity at the junction (e.g. junction g of Fig. 3).
• Phantom junction or pseudo-junction: It is not the projection of a vertex of the scene and is devised only to indicate the change of label along a line (e.g. junction a of Fig. 3). Initially, a line drawing does not contain any phantom junctions. However, such junctions are inserted in the drawing during the execution of our labelling algorithm. As in [12], we restrict ourselves to the sparse labelling problem, i.e. the labelling problem where only the labels of a line at points close to its endpoints are of interest.
• Three-tangent-junction: Three curves with common tangent. Two of them have the same curvature (e.g. junction d of Fig. 3).
• E- or arrow-junction: Three curves with distinct tangents. One angle between two of them is > π (e.g. junction c of Fig. 3).
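The distinction between E- and Y-junctions above is purely angular, so it can be tested directly on the tangent directions of the three curves at the junction. The following is a minimal sketch of such a test; it is our own illustration, not code from the paper, and the function name and representation are assumptions.

```python
import math

def classify_degree3(tangents):
    """Classify a junction where three curves with distinct tangents meet:
    an E-(arrow-)junction if one angle between two of them exceeds pi,
    a Y-junction if no angle does.  `tangents` holds the directions
    (in radians) of the three lines leaving the junction."""
    a = sorted(t % (2 * math.pi) for t in tangents)
    # the three angular sectors between consecutive lines around the junction
    gaps = [a[1] - a[0], a[2] - a[1], 2 * math.pi - (a[2] - a[0])]
    return "E" if max(gaps) > math.pi else "Y"
```

For instance, three lines at directions 0, 2π/3 and 4π/3 form a Y-junction, while three lines bunched into a half-plane leave one sector larger than π and form an E-junction.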
Figure 4 gives all legal labels on the lines of the various types of junctions. To see that these are indeed all possible labels, we first state and prove three lemmas concerning the possible labels of certain types of junctions. The catalogue of legal labels is then produced by an exhaustive search which exploits these lemmas to prune the search space.

LEMMA 1. A line that is a projection of an intersection between two surfaces and is not the projection of a base circle can only be labelled with '−'.
Proof. It is obvious that an intersection line cannot be labelled by a '>>'. Furthermore, the '←' label can be assigned to a line iff this line can also be assigned the '+' label, depending on the viewpoint (see Figure 5). Thus, we only have to show that lines that are projections of intersections and not projections of base circles cannot be assigned the '+' label. To see this, observe that a '+' label implies that the two visible parts of the surfaces form an angle greater than π. This, however, is impossible, because in our permissible world curved surfaces are surfaces of rotation and are not arbitrarily terminated, but only partly occluded by other surfaces. Similarly, plane surfaces are necessarily base surfaces partly occluded by other surfaces. []

LEMMA 2. A straight line can be assigned only the labels '−' or '>>'. Moreover, a limb can occur only in L-, curvature-L- and three-tangent-junctions.
Proof. If a straight line is a projection of an intersection of two surfaces and not a base line, then using Lemma 1 we obtain a '−' label. In every other case, a straight line must be either a limb or a projection of a base circumference. The latter case would violate the general viewpoint restriction (since a base circumference is
Fig. 4. The junction catalogue. The last two junctions contain a phantom point.
a straight line only when seen from a viewpoint that lies on the base plane, and it is an ellipse if seen from any other viewpoint). Thus it can only be a limb, and is labelled with '>>'. The second statement of the lemma follows from the observation that a limb which is not part of a T-junction, and also not part of an L-node that is the projection of a cone apex, should be tangent to the other line(s) of the junction. []
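The exhaustive search that produces the catalogue of Figure 4 can be organized as a generate-and-filter pass over per-line label domains, with Lemmas 1 and 2 pruning the domains before enumeration. The sketch below is our own illustration under that reading; the helper names and the ASCII symbols '->' (arrow) and '>>' (limb) are assumptions, not the paper's notation.

```python
from itertools import product

LABELS = ['+', '-', '->', '>>']   # convex, concave, occluding arrow, limb

def allowed_labels(is_straight, is_intersection):
    """Per-line label domain after applying Lemmas 1 and 2."""
    if is_intersection:            # Lemma 1: intersection lines get '-'
        return ['-']
    if is_straight:                # Lemma 2: straight lines get '-' or '>>'
        return ['-', '>>']
    return list(LABELS)

def candidate_labellings(line_props, junction_filter=lambda t: True):
    """Enumerate all label tuples for one junction that survive the
    per-line lemmas and a junction-specific realizability filter
    (e.g. Lemma 3 restricts E-junctions to (+,-,+) and (-,+,-))."""
    domains = [allowed_labels(s, i) for (s, i) in line_props]
    return [t for t in product(*domains) if junction_filter(t)]
```

Each `line_props` entry is a (straight?, intersection?) pair for one line of the junction; the realizability filter stands in for the geometric tests the paper performs case by case.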
Fig. 5. Equivalence between '+' and '←'.
LEMMA 3. An E-junction in our world corresponds to an arrangement of volumes either like the ones in Figure 6.a or like the ones in Figure 6.b.
realized as three-dimensional objects of the pottery world.

2.4 Higher Order Constraints
Fig. 6. Volume arrangements corresponding to E-junctions.
Fig. 7. Illegal E-junctions.
Thus it can be labelled in only two ways: either with (+, −, +), where '−' is the label of the middle line, or with (−, +, −), where '+' is again the label of the middle line.

Proof. At first, we observe that the labels '←' and '>>' cannot appear in E-junctions. This can be seen by exhaustively testing the realizability of all permutations of labellings that include at least one of the above labels. Figure 7 shows some of these illegal scenes. In this way, every E-junction is guaranteed to be a projection of the intersection of exactly three visible surfaces. Since cone apices are assumed not to lie on any surface, the tangential plane of each surface at the point of intersection is well defined. In a way similar to [12], we reach the conclusion that the E-junctions of our world can be labelled according to the three legal E-junction labellings of the Clowes-Huffman catalogue. Having excluded the '→' label from E-junctions, we are left with two possible labellings and Lemma 3 follows. []

Now, by an exhaustive search, we conclude that:
THEOREM 1. Junctions with labels that are not
included in the catalogue of Figure 4 cannot be
Because of the large size of the junction catalogue, one can easily conclude that the problem of labelling a scene of the pottery world cannot be trivially solved in sequential time which depends polynomially on n, the number of junctions of the image graph (notice that since the image graph is planar, n is of the same order as the number of lines of the image graph). The labelling problem is a Constraint Satisfaction Problem (CSP). CSPs are usually solved using backtracking. However, even if a relaxation preprocessing is applied to a CSP, in the way that Mackworth and Freuder [14] describe, backtracking is still prone to exponential explosion, especially when the constraints between the variables are only unary or binary (it is obvious that by requiring two adjacent junctions of the picture graph to share the same label at their common line, and by allowing a junction to be labelled according to the junction catalogue, we apply only binary and unary constraints). This is so because, even when the constraints are only unary or binary, a relaxation preprocessing that guarantees only local consistency does not essentially restrict the set of legal values at each variable of the CSP. However, if we take into consideration that the regions of the image graph must be realized as surfaces of the 3D scene, we get constraints of higher arity that involve not only adjacent junctions but all junctions that delimit a region of the image graph. In our approach, it is essentially the exploitation of this fact that leads to the avoidance of the combinatorial explosion. To formalize this idea, we introduce the notion of a component of an image graph. A component is a subgraph that consists of a maximal connected set of junctions that belong to a certain type. The various types of components are defined below. Our labelling algorithm will at first assign labels to the members of each component separately.
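Extracting the components is a standard connected-grouping pass: a breadth-first search restricted to junctions of the type of its start vertex. A minimal sketch, as our own illustration (the dictionary-based graph representation is an assumption):

```python
from collections import deque

def find_components(junction_type, adj):
    """Group the junctions of the image graph into maximal connected
    sets of a single type.  `junction_type[v]` is the type of junction
    v; `adj[v]` lists its neighbours.  Linear time, since the image
    graph is planar and so has O(n) lines."""
    seen = set()
    components = []
    for start in junction_type:
        if start in seen:
            continue
        seen.add(start)
        comp, queue = [start], deque([start])
        while queue:
            v = queue.popleft()
            for w in adj[v]:
                # stay inside the component: same type, not yet visited
                if w not in seen and junction_type[w] == junction_type[start]:
                    seen.add(w)
                    comp.append(w)
                    queue.append(w)
        components.append((junction_type[start], sorted(comp)))
    return components
```

Junctions whose types form no components simply come out as singletons, which the labelling algorithm may then handle individually.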
Junctions that can be either uniquely labelled or labelled in more than one way (none of which
Fig. 9. A special case of an L-junction.
Fig. 8. Junctions that are not included in the component scheme.
propagates any restriction to the labels of the neighboring junctions) are not included in the formation of components. The cases of junctions with a unique labelling can be easily found by inspecting the junction catalogue. The cases of junctions that can be labelled in more than one way, and yet do not propagate any restriction to neighboring junctions (i.e. junctions with only local ambiguity), are depicted in Figure 8. Below we analyze each case separately. The properties of the junctions that we use in the analysis are consequences of the requirement that the drawing be realized as an object of the pottery world.
• Figure 8.a: These junctions do not form chains. Moreover, the labels of such junctions are uniquely determined by the labels of their neighboring junctions, except for the degenerate case where they are connected to T-junctions; then there is only local ambiguity.
• Figure 8.b: In the case that two junctions of the type in Figure 8.b are connected to each other through their curved lines, the single curved line thus formed is labelled by a uniquely determined multilabel. In every other case, junctions of the type of Figure 8.b are connected to the ends of chains formed by the junctions of Figure 9.b. In this case, the label of the curved line is uniquely determined by the label of the L-chain.
• Figure 8.c: Such junctions are either connected to uniquely labelled ones or are connected to chains at their ends. In the latter case, the label (or the multilabel) of the junction under examination is uniquely determined by the label of the chain.
• Figure 8.d: The same argument as in the previous case applies to junctions of this type (three-tangent-type junctions).
Fig. 10. A scene with uniquely labellable chains.
Another observation is that a legal labelling of the junction in Figure 9, if at all related to the labels of the neighboring junctions, is determined by the neighboring labels to be either one of the labellings in Figure 9.a, or one of the labellings in Figure 9.b. The latter case need not be included in the component formation scheme. This is because, in the case of Figure 9.b, the requirement that the chain be geometrically realized imposes the restriction that the chain must terminate with either a curvature-L-junction or a T-junction. Moreover, this has as a consequence that the labelling of the chain is uniquely determined (see Figure 10). Finally, observe that phantom junctions do not pose any problem (i.e., need not be included in the component scheme). To see why this is the case, one has to observe that a phantom junction is equivalent to a legal L-junction with a label of (+, ←) or (+, +). In both cases, if such a phantom junction occurs in an L-component, it bears no effect on the set of possible labels of the component. We now describe the various types of components.
• Y-component of type 1: Such a component is formed by Y-junctions with exactly one straight line. In these Y-junctions the straight line is always labelled by '−' and the other two can be labelled by either '−' or '→'. Thus the legal labellings of such a component are only two, and they reflect the fact that by
Fig. 13. L-junctions that form L-components.
Fig. 11. Curvature-L ambiguity.
Fig. 14. Connections of L-junctions.
Fig. 12. Y-junctions that form Y-components of type 2.

looking at a scene like the one in Figure 11, there is an inherent ambiguity that cannot be resolved: do the two cylinders stand on, or above, the base of the third? This ambiguity will be referred to as the curvature-L ambiguity, because it is also observed in curvature-L-junctions.
• Y-component of type 2: Such components are formed by Y-junctions of the type in Figure 12. One should observe that in such a component no label of any line is a priori known. However, once a label is found to be an arrow, this arrow will uniquely propagate.
• E-component: Such components include only E-junctions. These, by Lemma 3, can have two possible labellings: (+, −, +), where '−' is the label of the middle line, or (−, +, −), where '+' is the label of the middle line. Thus, the whole component has two possible labellings; moreover, given the label of a line in the component, the labels of the whole component are uniquely determined.

So far, the components that involve junctions of degree three have been presented. We now introduce the components formed by
L-junctions. To determine all possible types of L-components, we discard all combinations of L-junctions that do not have a realization as a scene of the pottery world. The task consists of an exhaustive search among all possible connections between L-junctions. Thus, we obtain that only the following two types of L-components can have more than one label:

• L-components of type 1: This component type is formed by junctions like the ones of Figure 13.a. We use the multilabel {+, ←} for labelling these junctions (recall the case of Figure 9.b).
• L-components of type 2: These components are formed by junctions like the ones in Figure 13.b. By taking into consideration their possible realizations, one can prove that a multilabel (either {+, ←} or {−, →}) on one of their end junctions uniquely propagates to the rest. That is, if the first line in the component can be labelled as a '+' or '←', then these will be the possible labels of the second line, and so on. The same applies if the leading junction can be labelled as '−' or '→'. So in every case, even if the number of valid labellings of these components is exponential, the component propagates one multilabel throughout itself (see Figures 14.a and 14.b).

Now, the facts proved above lead to the following rules about the legal labels of the various types of components:
• Rule 1: Once we know one label of an E-component, the labels of all lines of the component are uniquely determined.
• Rule 2: L-components of type 2 connected to an E-component can have two multilabels if the label of the E-component is not given, or one, if one label in the E-component is given. That is, one can first label the E-component in one of the two possible ways, obtain a label ('+' or '−') for each line connected to the L-component, and then propagate the appropriate multilabel ({+, ←} or {−, →}, respectively) through the L-component. If a contradiction occurs, one will have to backtrack only once, then relabel the E-component and propagate the new labelling.
• Rule 3: Whenever an E-component is connected to a Y-component, the connecting line is labelled as a '−'. Therefore, the E-component is uniquely labelled.
• Rule 4: An L-component of type 2 connected to the same Y-component of type 2 by both its end junctions can have at most two labellings. That is, if the connection is made through a junction like the one in Figure 14.c, the label of the common line is '−'; else, if the connecting junction is like the one in Figure 14.d, the common line is labelled with the multilabel {−, →}. In both cases, the connecting line of the Y-junction is uniquely labelled.
• Rule 5: If a Y-component of type 1 is connected to an L-component of type 1, then they both share the curvature-L ambiguity.
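Rule 2's single backtrack can be made concrete. In the sketch below, which is our own illustration (the function names, the ASCII arrow symbols '<-' and '->', and the chain representation are assumptions), each of the two E-component labellings forces a '+' or '-' on the line shared with the L-component, which selects the multilabel to propagate; a contradiction triggers at most one relabelling.

```python
def propagate_multilabel(chain, multilabel, forced):
    """Propagate one multilabel ({'+', '<-'} or {'-', '->'}) through an
    L-component of type 2.  `forced` maps lines to labels imposed from
    outside; return None on contradiction."""
    labelling = {}
    for line in chain:
        if line in forced and forced[line] not in multilabel:
            return None
        # when unconstrained, pick the non-arrow member ('+' or '-'),
        # which min() happens to select for these two multilabels
        labelling[line] = forced.get(line, min(multilabel))
    return labelling

def label_E_then_L(l_chain, shared, forced=None):
    """Rule 2: try the two E-component labellings; each gives the shared
    line a '+' or '-', selecting the multilabel for the L-component.
    On contradiction, backtrack exactly once."""
    forced = dict(forced or {})
    for shared_label in ('+', '-'):            # the two E-labellings
        ml = {'+', '<-'} if shared_label == '+' else {'-', '->'}
        attempt = dict(forced, **{shared: shared_label})
        result = propagate_multilabel(l_chain, ml, attempt)
        if result is not None:
            return shared_label, result
    return None
```

Because a failed propagation rules out one of only two E-labellings, the retry is a single pass and the overall cost stays linear in the length of the chain.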
3 The Algorithm and Heuristics that Improve its Performance

3.1 Description of the Algorithm
• input: a graph which is an orthographic projection of a scene from the trihedral pottery world.
• output: a legal labelling satisfying the constraints imposed by the requirement that the image be realized as a scene from the pottery world.
• step 1: Disconnect the middle line of all T-junctions from the two lines forming its bar. Apply steps 2-9 separately onto each connected component thus obtained, under
the restriction that the two "bar" lines must be labelled by a '→' if they are curved lines, or by a '>>' if they are straight lines.
• step 2: Find the various components in the graph, as defined in Section 2.4.
• step 3: Label all junctions that are not part of any component with either a label or a multilabel. For any line of these junctions which cannot be consistently labelled in this way, call step 10. If the conflict remains, stop the algorithm and report that the drawing is not labellable.
• step 4: Propagate the labels of step 3 as far as they can go, by using the label catalogue of Figure 4. For any line which cannot be consistently labelled during this step, call step 10. If the conflict remains, stop the algorithm and report that the drawing is not labellable.
• step 5: Label the lines that are shared between Y-components of type 2 on the one hand and either E-components or L-components on the other (recall that the lines of Y-components of type 1 are uniquely labelled, except in the case of the inherent curvature-L ambiguity). To do this, follow the case analysis below:
− An E-component connects to a Y-component of type 2. Use Rule 3 to determine the label of the common line.
− An L-component connects to a Y-component of type 2. Then, by use of Rule 4, determine the type of the label of the common line.
• step 6: Propagate all labels that were introduced in step 5 of the algorithm, according to the rules and the catalogue. For any line which cannot be consistently labelled during this step, call step 10. If the conflict remains, stop the algorithm and report that the drawing is not labellable.
• step 7: By now, all lines shared by two components must be already labelled, except in the following case: an E-component is connected with the rest of the graph by means of L-components. Then follow this procedure: give an arbitrary label from the set {+, −} to one of the lines of the E-component (say, this
is the line a). Thus the whole E-component is uniquely labelled. Propagate the newly introduced labels as far as they can go. If at any stage a contradiction (that cannot be resolved by calling step 10) occurs, then label a with the alternative label. Propagate all labels as far as they can go. Note that this backtracking will not give rise to an exponential algorithm.
• step 8: Label the lines of the Y-components that are not shared with another component, and which are not determined by already existing arrows, by arbitrarily giving to the whole set of lines the label '−'.
• step 9: The final step of the algorithm concerns the labelling of the internal lines of an L-component. In this case, the label '−' can be arbitrarily assigned to the lines of the component, if their multilabel is {−, →}. If, on the other hand, their multilabel is {+, ←}, then the label '+' is assigned to all lines.
• step 10 (insertion of phantom points): This step possibly solves conflicts in the labelling of a line by inserting phantom points:

− If the line under consideration is curved and the conflicting labels are the ones met in labelling phantom junctions (i.e. a '+' and a '←'), then insert a phantom point in the line and return to the calling step. The labels are not conflicting anymore.
− Otherwise, return to the calling step and report the conflict.
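Step 10 is a small local repair, and its test can be sketched directly. This is our own illustration; the dictionary representation of a line and the ASCII arrow '<-' are assumptions.

```python
def resolve_conflict(line, labels):
    """Step 10: a conflict between '+' and the arrow on a curved line is
    exactly what a phantom junction expresses, so insert a phantom point
    and report success; any other conflict is genuine."""
    if line['curved'] and labels == {'+', '<-'}:
        line.setdefault('phantom_points', 0)
        line['phantom_points'] += 1
        return True      # calling step proceeds; the labels no longer conflict
    return False         # calling step reports the conflict upwards
```

A `False` return is what makes the calling step stop the algorithm and declare the drawing unlabellable.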
3.2 Heuristics as a Practical Speed-Up
To improve the performance of the algorithm (though not asymptotically), we will now present some heuristics that have to do with geometric characteristics of the lines of the drawing. These heuristics can be applied during a preprocessing stage on the image graph, whenever the geometric properties required by each heuristic can be derived from the image data. The heuristics act as a filter against many non-realizable interpretations. However, the algorithm can be implemented without any use of the heuristics.

• Heuristic 1: If a curved line is not part of
an ellipse, then its label is always '−'. This is so because a curve which is not part of an ellipse can, in our world, only occur as a projection of an intersection line and cannot be the projection of a base circle.
• Heuristic 2: If the curvature of a line changes between the junctions it connects, its label is '−'. This can be derived from the previous observation, and can be easily and reliably implemented, since, usually, it is relatively easy to detect a change in curvature.
• Heuristic 3: A phantom point can only appear in lines that are parts of an ellipse, and only at the point where the large axis of the ellipse intersects the ellipse. This is also an easy heuristic to implement, even in representations of lines as digitised images.
• Heuristic 4: A chain of arrow-labels in a Y-component of type 2 can, by Heuristic 1, only occur on curves that are parts of ellipses. Moreover, these curves can only be projections of parallel circles (in order to satisfy the requirement that the scene be realizable as a trihedral 3D object); therefore the ellipses that these lines are part of should be similar (same aspect ratio and parallel axes). This implies that in a Y-component of type 1, if a region is delimited by ellipses two of which are not similar, then the labels of the whole boundary should be '−'. (However, the reverse is not true.)
• Heuristic 5: In a Y-component of type 2, if a line connecting two Y-junctions is '−', then at least one of these Y-junctions must be labelled by (−, −, −).
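The similarity test behind Heuristic 4 is easy to state on fitted ellipse parameters. A minimal sketch, as our own illustration (representing each ellipse as a (semi-major, semi-minor, axis-angle) triple is an assumption):

```python
import math

def similar_ellipses(e1, e2, tol=1e-6):
    """Heuristic 4's filter: two ellipse arcs can be projections of
    parallel circles only if the ellipses are similar, i.e. have the
    same aspect ratio and parallel axes.  Axis angles are compared
    modulo pi, since an axis has no preferred direction."""
    a1, b1, t1 = e1
    a2, b2, t2 = e2
    same_ratio = abs(b1 / a1 - b2 / a2) <= tol
    dt = (t1 - t2) % math.pi
    parallel = min(dt, math.pi - dt) <= tol
    return same_ratio and parallel
```

In a digitised image one would of course use a looser tolerance than the default shown here, to absorb the error of the ellipse-fitting step.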
3.3 The Complexity of the Algorithm
First observe that, since the image graph is planar, its number of lines is O(n), where n is the number of its junctions. Therefore, the task of finding the various types of components can be performed sequentially in time O(n), and in parallel in time O(log^2 n) on an EREW PRAM using O(n) processors. The labelling of junctions that are not part of components can, of course, be performed sequentially in linear time, and in parallel with a linear number of processors in constant time. The propagation of labels is also
performed sequentially in linear time (observe that the backtracking employed in connections of E-components with L-components does not destroy the linearity, since if the label of a junction does not lead to an inconsistency while propagated as far as possible, it will not be changed later on during the execution of the algorithm). In parallel, these steps can be executed by the standard technique of employing Boolean matrix multiplication in order to find the transitive closure of a directed graph (here the transitive closure corresponds to the propagation of a label). Therefore these steps can be performed in O(log^2 n) time on an EREW PRAM using O(M(n)) processors, where M(n) = O(n^2.376) (see, e.g., [5]). However, as Karp and Wigderson [6] observe for the problem of 2-SAT, the transitive closure technique leads only to the solution of the decision problem. To find a legal labelling, if there is one, we have to use the technique of finding a maximal independent set. For that, we first construct a directed implication graph. That graph has a junction for each legal labelling l_e of each line e of the image graph. Also, in the implication graph, there is an edge from l_e to l'_e' if the label l_e of line e propagates on the line e' as the label l'_e'. The implication graph can be constructed in O(log^2 n) time using O(M(n)) processors by the transitive closure technique. What we do next is to construct a conflict graph. The conflict graph has the same set of junctions as the implication graph, but an (undirected) edge connects every pair of labels that are incompatible according to the implication graph. Now, a maximal independent set of junctions of the conflict graph gives a legal labelling for the image graph (if the maximal independent set has cardinality less than the number of lines in the image graph, then the image graph is not labellable). The construction of the conflict graph requires constant time with O(n^3) processors.
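The transitive-closure technique mentioned above can be illustrated by a minimal sequential sketch using repeated Boolean matrix squaring; in the parallel setting, each squaring is one round of Boolean matrix multiplication on the PRAM, so O(log n) rounds suffice. The adjacency-matrix representation (one row/column per junction of the implication graph, i.e., per line-label pair) is an assumption made for this sketch.

```python
import numpy as np

def transitive_closure(adj):
    """Reachability in a directed graph by repeated Boolean squaring
    of its adjacency matrix.  Here reach[i, j] ends up True iff label
    i propagates (possibly through intermediate lines) to label j."""
    n = adj.shape[0]
    reach = adj | np.eye(n, dtype=bool)   # every label trivially reaches itself
    steps = 1
    while steps < n:
        m = reach.astype(np.int64)
        reach = (m @ m) > 0               # one Boolean matrix product: path lengths double
        steps *= 2
    return reach
```

Incompatible pairs read off this closure are exactly the (undirected) edges of the conflict graph constructed next.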
Now, by the fast parallel algorithm of Goldberg and Spencer [3], the construction of a maximal independent set of the conflict graph requires O(log^3 n) time and O(n^2/log n) processors (O(n^2) is the number of edges of the conflict graph). Summing up, we get:
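To see how a maximal independent set yields a labelling, here is a simple sequential greedy sketch; it is a stand-in for illustration only, not the Goldberg-Spencer parallel algorithm itself, and the integer junction indices and edge-list representation are assumptions for the example.

```python
def maximal_independent_set(n, edges):
    """Greedy maximal independent set: scan the junctions of the
    conflict graph and keep each one that conflicts with nothing
    kept so far.  The result is maximal (no junction can be added)
    though not necessarily of maximum cardinality."""
    conflict = [set() for _ in range(n)]
    for u, v in edges:
        conflict[u].add(v)
        conflict[v].add(u)
    chosen = set()
    for v in range(n):
        if not conflict[v] & chosen:      # v conflicts with no chosen junction
            chosen.add(v)
    return chosen
```

In the paper's setting each junction of the conflict graph is a line-label pair; if the maximal independent set contains one junction per line of the image graph, those pairs constitute a legal labelling, and otherwise the drawing is not labellable.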
THEOREM 2. The problem of deciding whether a scene from the pottery world is labellable, and of finding a legal labelling if there is one, can be solved in linear sequential time and in O(log^3 n) time on an EREW PRAM using O(n^3/log^3 n) processors.

4 Discussion
As pointed out in the Introduction, the P-completeness result of Kasif [7] indicates that the relaxation technique cannot lead to fast parallel algorithms for the labellability problem. To overcome this, it is often assumed (as we do in this paper) that the scene belongs to a "restricted world". However, even this method is not expected to lead to tractable cases of the labelling problem if the restrictions apply only to the type of the surfaces that delimit the objects of the permissible world. This is so because any type of permissible surface can be considered, at least locally, as planar; therefore, the negative completeness results still apply. This leads to the conclusion that "nice worlds" are ones described by a kind of 3D-object-generating grammar, whose derivations can be rotation of polygons, intersection, etc. An interesting question is the existence of a fast parallel algorithm that can produce a legal labelling (when there is one) without making use of the maximal independent set parallel algorithms.

Acknowledgment
We thank Paul Spirakis for many illuminating conversations.

References

1. P. Alevizos, "A linear algorithm for labeling planar projections of polyhedra," in Proc. IEEE/RSJ Int. Workshop on Intelligent Robots and Systems, Osaka, Japan, 1991, pp. 595-601.
2. M.B. Clowes, "On seeing things," Artificial Intelligence, vol. 2, pp. 79-116, 1971.
3. M. Goldberg and T. Spencer, "Constructing a maximal independent set in parallel," SIAM J. Discrete Mathematics, vol. 2, pp. 322-328, 1989.
4. D.A. Huffman, "Impossible objects as nonsense sentences," Machine Intelligence, vol. 6, B. Meltzer and D. Michie eds., Edinburgh University Press: Edinburgh, 1971, pp. 295-323.
5. R.M. Karp and V. Ramachandran, "Parallel algorithms for shared-memory machines," Handbook of Theoretical Computer Science, vol. A, J. van Leeuwen ed., Elsevier: Amsterdam, 1990, pp. 869-942.
6. R.M. Karp and A. Wigderson, "A fast parallel algorithm for the maximal independent set problem," J. Association of Computing Machinery, vol. 32, pp. 762-773, 1985.
7. S. Kasif, "On the parallel complexity of discrete relaxation in constraint satisfaction networks," Artificial Intelligence, vol. 45, pp. 275-286, 1990.
8. L.M. Kirousis, "Effectively labeling planar projections of polyhedra," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, pp. 123-130, 1990.
9. L.M. Kirousis, "Fast parallel constraint satisfaction," Artificial Intelligence, vol. 64, pp. 147-160, 1993.
10. L.M. Kirousis and C.H. Papadimitriou, "The complexity of recognizing polyhedral scenes," Journal of Computer and System Sciences, vol. 37, pp. 14-38, 1988.
11. M. Luby, "A simple parallel algorithm for the maximal independent set problem," SIAM J. Computing, vol. 15, pp. 1036-1053, 1986.
12. J. Malik, "Interpreting line drawings of curved objects," International J. of Computer Vision, vol. 1, pp. 73-103, 1987.
13. J. Malik and D. Maydan, "Recovering three-dimensional shape from a single image of curved objects," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 11, pp. 555-566, 1989.
14. A.K. Mackworth and E.C. Freuder, "The complexity of some polynomial network consistency algorithms for constraint satisfaction problems," Artificial Intelligence, vol. 25, pp. 65-74, 1985.
15. U. Montanari and F. Rossi, "Constraint relaxation may be perfect," Artificial Intelligence, vol. 48, pp. 143-170, 1991.
16. P. Parodi and V. Torre, "A linear complexity procedure for labelling line drawings of polyhedral scenes using vanishing points," in Proc. International Conference on Computer Vision ICCV '93, Berlin, Germany, 1993, pp. 291-295.
17. K. Sugihara, "A necessary and sufficient condition for a picture to represent a polyhedral scene," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 6, pp. 578-586, 1984.
18. D. Waltz, "Understanding line drawings of scenes with shadows," The Psychology of Computer Vision, P.H. Winston ed., McGraw-Hill: New York, 1975, pp. 19-91.
Nick D. Dendris was born in 1970 in Larissa, Greece. He graduated from the Department of Computer Engineering and Informatics of the University of Patras in September 1992, and in 1993 he started working toward his Ph.D. degree. As an undergraduate student he became interested in the areas of Graphics, Computer Vision, and, in general, the Design and Analysis of Computer Algorithms. He has recently published in the latter two areas and has extensive professional experience in the first. He is also participating in applied and basic research projects funded by the European Community, such as ALCOM II, Insight II and DELTACIME.
Lefteris M. Kirousis received his Ph.D. in 1978 from the University of California, Los Angeles. He was an Assistant Professor at the University of California at Santa Barbara. In Greece, he has taught at the Technical University of Athens, the University of Crete and the Department of Mathematics of Patras University. Since 1989 he has been teaching at the Department of Computer Engineering and Informatics of Patras University, where he is now a professor. He is also a researcher at the Computer Technology Institute in Patras. His research interests include Design and Analysis of Algorithms, Distributed Computing, Computer Vision and, in general, the Foundations of Computer Science. He has over thirty publications in journals and proceedings of international conferences and is participating in several national and international funded research projects. He is a member of the Greek Mathematical Society, ACM, the IEEE Computer Society, and AMS.
Iannis A. Kalafatis was born in 1969 in Athens, Greece. He graduated from the Department of Computer Engineering and Informatics of the University of Patras in April 1993. As an undergraduate student he became interested in the areas of Computer Vision, Graphics, Data Encryption, and, in general, the Design and Analysis of Computer Algorithms. He is also interested in Computer Networks, and he has extensive professional experience in the field, having worked for IBM GmbH in the "BTX Program Distribution" project. He is currently serving in the Greek Air Force.