Immersive Content Authoring for Computer Entertainment in Augmented Reality

István Barakonyi

Dieter Schmalstieg

Graz University of Technology, Institute of Computer Graphics and Vision, Inffeldgasse 16, A-8010 Graz, Austria. Email: {bara | dieter}@icg.tu-graz.ac.at

ABSTRACT

Augmented reality (AR) has recently stepped beyond its usual scope of applications such as machine maintenance, military training and production, and has been extended to the realm of entertainment, including computer gaming. This paper demonstrates several advanced immersive content and scenario authoring techniques in AR through example applications. We pay particular attention to immersive authoring tools for character animation, as an increasing number of virtual characters are being inserted into real movie and game environments to form a mixed reality stage where real actors and props play together with virtual animated characters and synthetic background elements.

CR Categories: H.5.1 [Information Interfaces and Presentation]: Multimedia Information Systems - Artificial, Augmented, and Virtual Realities; I.3.6 [Computer Graphics]: Methodology and Techniques - Interaction Techniques.

Keywords: augmented reality, immersive authoring

1 INTRODUCTION

Bolter and Grusin's highly influential book Remediation [3] states that digital technologies such as the World Wide Web, computer games, virtual reality (VR) and augmented reality (AR) cannot separate themselves from earlier media forms such as photography, film or the stage. The authors argue that a novel visual medium should first observe and respect, and then attempt to surpass, earlier media by refashioning, or "remediating", them, just as photography remediated perspective painting, film remediated stage and photography, and television remediated film.

Augmented reality has recently stepped beyond its usual scope of applications such as machine maintenance, military training and production, and has been extended to entertainment, including computer gaming and filmmaking. This step is expected to contribute greatly to a wider acceptance of this relatively young research field, much as the spread of 2D and 3D computer games, supported by rapid hardware and software developments in computer graphics, attracted significant public interest.

In this paper we focus on entertainment applications in augmented reality environments. The contribution of our work is the application of remediation theory to authoring tools in augmented reality environments. We argue that not only the presented content and its perception need to be remediated; the communication medium should also strongly influence its authoring tools. While tools to author content for AR have strong roots in classic computer graphics and VR, we claim that augmented reality should exploit medium-specific techniques to smooth its production workflow: AR has to penetrate its own authoring pipeline.


Nevertheless, in this paper we do not offer a complete solution for creating standalone applications, as some larger frameworks do [9][12][16], but rather provide case studies that explore novel ways of authoring and enabling interaction.

2 MOTIVATION

Besides being a compelling environment for digital entertainment, augmented reality offers novel, intuitive tools for content authoring as well. MacIntyre et al. [11] make the point that the most relevant factor making AR an important and unique medium is the combination of three concurrent features: blurring the boundary between the real and the virtual world, constant user control of the viewpoint, and interactivity. Traditional tools for authoring VR and AR content, such as desktop-based 2D/3D content authoring tools, character animation programs, multimedia frameworks [12], or editors to compile and run scripts [9], offer the evident advantage of familiarity. However, an authoring environment possessing the aforementioned three qualities would be desirable to let content developers fully experience and understand the novel environment they develop for, and tailor the content to it.

From the various existing tools for authoring AR content we highlight the tangible AR-based system of Lee et al. [10], which is to our knowledge the only general-purpose immersive AR authoring system to date. This system implements an intuitive WYSIWYG editor in which users handle physical markers to manipulate virtual objects, their properties and their connections to other objects and properties in order to model interaction. Although the immersive environment and the tangible controls are similar to ours, the data flow model covers only basic functions of tangible AR applications without considering more complex aspects such as application events or multi-user collaboration. Non-general authoring systems focus on a particular subset of the immersive content creation problem; for instance, Ichikari et al. [6] present a mixed reality system for pre-visualization and camera work authoring for movies.

In this paper we pay particular attention to immersive authoring tools for character animation. An increasing number of virtual characters are being inserted into real movie and game environments, forming a mixed reality stage where real actors and props play together with virtual animated characters and synthetic background elements. Producing virtual content for physical stage elements and tailoring it to the stage's atmosphere and design is quite challenging, as there are no existing measures or tools that would ensure a coherent experience for the audience. We propose to use AR techniques to bridge the gap between physical and virtual production environments by superimposing 3D graphics on real-world objects. We have created tools based on the Studierstube AR framework [14] to improve the content creation pipeline, especially in character animation, by exploiting the physical world as a user interface.

The following sections describe the individual tools in detail. First, we present an immersive keyframe creation tool for animated characters. Then we illustrate how a PDA can be exploited as both a graphical and a tangible user interface to dynamically tweak character animation parameters and transfer data between disjoint workspaces. Finally, we discuss the potential of using embodied autonomous agents as reactive virtual actors in mixed reality production environments.

3 CASE STUDIES

3.1. Keyframe Creation for Animated Characters

Modelers and animators often rely on real-life references to create digital content for games or film production. Videotaping a real subject or manipulating physical mock-ups supports the creation of precise and expressive content in virtual authoring environments such as 3D modeling and animation packages. For character animation, professional artists use motion capture or other expensive means of acquiring motion data, such as the Monkey kinematic tracker device [5] or the Dinosaur Input Device [8], to create an essential initial data set for the final, refined animation. Similarly, within an AR environment the animated virtual model and the real-world reference can be merged to form a single interactive modeling instrument.

We use a wooden mannequin as an input device to animate skeleton-based anthropomorphic 3D characters. The head and limbs of the mannequin are pose-tracked. The system maps real-time pose data to rotation information for the joints of the character skeleton using the Cyclic Coordinate Descent (CCD) inverse kinematics technique [15], extended with rotational constraints for the joints (see Figure 1 for an illustration). We enhanced the Cal3D character animation library [4] with the CCD inverse kinematics module to let users directly manipulate joint rotations. Keyframes are stored in the standard Cal3D format; animations created with our tool can therefore be replayed within our modeling application for immediate feedback on the correctness of the animation sequence, or imported into 3D Studio MAX so that animators can refine the animation in a professional modeling and animation package.

Our AR-based immersive modeling tool not only enables close interaction with virtual models through tangible objects but also supports the creation of complex motions such as walking up stairs or lifting a ball. Animators can use an actual physical model of stairs or a ball and their respective virtual counterparts in concert with the character to create realistic motions.
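To make the joint mapping concrete, the following is a minimal planar sketch of one constrained CCD pass, assuming a simple per-joint angle limit; the actual system applies CCD to the 3D joint rotations of the Cal3D skeleton, and all types and names below are illustrative rather than taken from our implementation.

```cpp
// Minimal planar sketch of constrained CCD: rotate each joint, from the one
// nearest the end effector towards the root, so that the effector approaches
// the target, clamping every rotation to the joint's limits.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

struct Joint {
    double length;               // bone length
    double angle;                // rotation relative to the parent (radians)
    double minAngle, maxAngle;   // rotational constraint for this joint
};

struct Vec2 { double x, y; };

// Forward kinematics: world position of joint 'index'
// (index == joints.size() yields the end effector).
Vec2 jointPosition(const std::vector<Joint>& joints, size_t index)
{
    Vec2 p{0.0, 0.0};
    double a = 0.0;
    for (size_t i = 0; i < index; ++i) {
        a += joints[i].angle;
        p.x += joints[i].length * std::cos(a);
        p.y += joints[i].length * std::sin(a);
    }
    return p;
}

void ccdPass(std::vector<Joint>& joints, Vec2 target)
{
    const double kPi = 3.14159265358979323846;
    for (size_t i = joints.size(); i-- > 0; ) {
        Vec2 pivot = jointPosition(joints, i);
        Vec2 eff   = jointPosition(joints, joints.size());
        double toEff    = std::atan2(eff.y - pivot.y, eff.x - pivot.x);
        double toTarget = std::atan2(target.y - pivot.y, target.x - pivot.x);
        double delta = toTarget - toEff;
        // keep the correction in [-pi, pi] so the joint turns the short way
        while (delta >  kPi) delta -= 2.0 * kPi;
        while (delta < -kPi) delta += 2.0 * kPi;
        joints[i].angle = std::clamp(joints[i].angle + delta,
                                     joints[i].minAngle, joints[i].maxAngle);
    }
}

int main()
{
    // A three-bone "arm"; the tracked mannequin limb tip would supply the target.
    std::vector<Joint> arm = { {1.0, 0.3, -1.5, 1.5},
                               {1.0, 0.2, -2.0, 2.0},
                               {1.0, 0.1, -2.0, 2.0} };
    Vec2 target{1.2, 1.4};
    for (int iter = 0; iter < 20; ++iter) ccdPass(arm, target);
    Vec2 eff = jointPosition(arm, arm.size());
    std::printf("effector: %.3f %.3f\n", eff.x, eff.y);
}
```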

Figure 1. Immersive keyframe creation in AR: a) manipulating a wooden mannequin to create a character pose; b) turning on skeleton mode to compare real and virtual joints.

3.2. Configuration with the Personal Universal Controller

An innovative way to configure characters within AR environments is to use the Personal Universal Controller (PUC) technology [13]; Figure 2 illustrates how it works. To support PUC, a character must provide an XML-based description of its relevant, configurable attributes and the commands it supports, together with the command syntax. The character runs a PUC service that listens for incoming connections from PUC clients. The client software is implemented on various devices and platforms, including PCs, PDAs and smartphones. When connected to a PUC service, the PUC client queries the service's attribute description and then renders a graphical user interface (GUI) to control the listed attributes. A PUC service can accept multiple clients, which enables collaborative, multi-user configuration. Figure 3 shows a screenshot of a sample control GUI rendered on a PocketPC: by ticking the "skeleton" checkbox (marked with an ellipse in the screenshot), the user switches the rendering mode from mesh to skeleton to reveal the underlying bone structure.
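As a condensed illustration of this describe-then-render idea, the sketch below publishes a single boolean attribute (the skeleton toggle) and generates a textual control for it. The real system exchanges XML-based PUC descriptions over the network [13]; all types and names here are hypothetical stand-ins.

```cpp
// Sketch of PUC-style configuration: the character publishes a list of
// configurable attributes, and a client builds controls for them generically
// without any character-specific knowledge.
#include <cstdio>
#include <functional>
#include <string>
#include <vector>

struct BoolAttribute {
    std::string name;                    // e.g. "skeleton" rendering toggle
    bool value;
    std::function<void(bool)> onChange;  // applied by the character service
};

struct CharacterService {
    std::vector<BoolAttribute> attributes;   // what a PUC client would query
};

// The client renders one control per published attribute
// (here a textual stand-in for a PDA checkbox).
void renderControls(const CharacterService& service)
{
    for (const auto& a : service.attributes)
        std::printf("[%c] %s\n", a.value ? 'x' : ' ', a.name.c_str());
}

int main()
{
    bool skeletonMode = false;   // mesh vs. skeleton rendering of the character
    CharacterService monster;
    monster.attributes.push_back({"skeleton", skeletonMode,
        [&](bool v) { skeletonMode = v; /* switch render mode here */ }});

    renderControls(monster);               // GUI generated from the description
    monster.attributes[0].value = true;    // user ticks the box on the PDA
    monster.attributes[0].onChange(true);  // service applies the change
    renderControls(monster);
    std::printf("skeleton mode: %s\n", skeletonMode ? "on" : "off");
}
```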


Figure 2. Overview of the Personal Universal Controller-based character configuration pipeline

Figure 4. Tangible data transfer between disjoint character animation workspaces: a) picking up the character with the PDA; b) tangible character transfer between PCs; c) persistent agent parameters from a database: wireframe mode preserved on PC and PDA; d) transferring the character to the projection screen.

Figure 3. A PDA serving as a graphical and a tangible user interface simultaneously for tweaking character animation parameters

Mobile devices implementing the PUC technology provide an intuitive way to configure virtual characters placed in real environments: the user can simply walk up to a character in her physical environment, connect to it to query its PUC description, and then tweak attributes dynamically using the GUI generated on the fly. As the virtual character is situated in the real environment, an immediate comparison with real background elements is possible, enabling instantaneous adjustments of the character's appearance and behavior to mitigate the gap between the virtual and physical production environments. If a fiducial marker is mounted on the PDA (see Figure 3 again), a computer equipped with a webcam and running a computer vision algorithm can track the pose of the PDA in real time. Thus the mobile device acts not only as a GUI but also as a tangible user interface (TUI) [7]. By manipulating the PDA, the user can observe characters from an arbitrary angle and viewpoint at the desired physical location.
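The coupling between the tracked marker and the character can be sketched as follows; the tracker stub stands in for the vision-based marker tracking described above, and every name and type is illustrative rather than part of our implementation.

```cpp
// Sketch of the tangible-UI coupling: whatever rigid pose the vision tracker
// reports for the PDA's fiducial marker is copied to the character's root
// transform every frame, so moving the PDA moves the character in the
// augmented view.
#include <array>
#include <cstdio>

using Mat4 = std::array<double, 16>;   // row-major rigid transform

struct Character {
    Mat4 rootTransform{};
};

// Stand-in for the marker tracker: in the real setup this pose comes from
// computer vision on the webcam image; here we return a fixed dummy pose.
bool queryMarkerPose(int /*markerId*/, Mat4& poseInCameraFrame)
{
    const Mat4 dummy = {1, 0, 0, 0.10,    // 10 cm to the right,
                        0, 1, 0, 0.05,    // 5 cm up,
                        0, 0, 1, 0.60,    // 60 cm in front of the camera
                        0, 0, 0, 1};
    poseInCameraFrame = dummy;
    return true;                           // marker visible this frame
}

void updateFrame(Character& character, int pdaMarkerId)
{
    Mat4 pose;
    if (queryMarkerPose(pdaMarkerId, pose))
        character.rootTransform = pose;    // character rides on the PDA marker
    std::printf("character at (%.2f, %.2f, %.2f) in camera space\n",
                character.rootTransform[3],
                character.rootTransform[7],
                character.rootTransform[11]);
}

int main()
{
    Character monster;
    updateFrame(monster, /*pdaMarkerId=*/42);
}
```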


3.3. Tangible Data Transfer with a PDA

Character animation projects in large studios rely on the collaborative work of many people: modelers design the mesh, animators and programmers create expressive behavior, and producers supervise all stages. The production pipeline typically requires the simultaneous use of multiple computing environments. Artists prefer to work on their personal computers, while programmers need to test characters in the target production environment such as a game console, and producers report current progress to customers in the presentation room. We propose a solution to reduce the seams between workspaces by proactively migrating characters to the presentation environments demanded by the current stage of the animation pipeline (see Figure 4). The underlying technology is described in [2].

In the design stage, a PDA is used as a tangible transfer medium for characters. Similarly to the hardware setup used for the PUC technology, the PDA is pose-tracked via a fiducial marker and webcams mounted on the designer PCs. If the mesh designer wants to discuss potential modifications with the animator, he or she holds the PDA in front of the webcam on the PC monitor, which signals an intention to "pick up" the character. The character senses the PDA's spatial vicinity and "jumps over" to the handheld character platform, which is then carried to the animator's machine. There the character migrates again to the monitor once the PDA enters a predefined "hot" area around the webcam. Changes made to the character on the animator's machine are persistently stored in a database; the next time the character is transferred back to the modeler's PC, its appearance is therefore automatically updated to reflect those changes. In the presentation stage the character is taken to the presentation room's projection screen, where an Ascension Flock of Birds magnetic tracking system is installed. By mounting a magnetic receiver on the back of the PDA and entering a predefined presentation area around the projection screen, the character moves from a small, private handheld display to a large public screen to show its features to a larger audience. The continuity of the visual interface creates a spatially continuous workspace for the collaborators and thus improves productivity. To avoid discontinuities in the interface and the interaction metaphor, the characters constantly check the availability of target environments and only attempt migration if the target platforms appear to be present and capable of hosting the character. This includes, for instance, periodically checking the handheld device's battery level as a critical resource. If the battery level is too low, the character refuses to be picked up by the PDA, or escapes to the nearest available display.
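The resulting migration policy can be summarized in a short sketch; the thresholds and all names below are hypothetical illustrations of the checks described above, not values from our implementation.

```cpp
// Sketch of the migration policy: a character only "jumps" to a nearby
// platform if that platform is reachable, able to host the agent, within the
// "hot" pickup zone, and (for handhelds) not critically low on battery.
#include <cstdio>
#include <string>
#include <vector>

struct Platform {
    std::string name;          // e.g. "designer PC", "PDA", "projection wall"
    double distance;           // current distance to the character (metres)
    bool reachable;            // answered the last availability query
    bool canHostCharacter;     // enough resources to render/animate the agent
    double batteryLevel;       // 0..1; fixed at 1.0 for mains-powered hosts
};

const double HOT_AREA_RADIUS   = 0.30;  // "hot" zone around the webcam/screen
const double MIN_BATTERY_LEVEL = 0.15;  // below this, refuse to migrate

bool migrationAllowed(const Platform& target)
{
    return target.reachable
        && target.canHostCharacter
        && target.batteryLevel >= MIN_BATTERY_LEVEL
        && target.distance <= HOT_AREA_RADIUS;
}

int main()
{
    std::vector<Platform> targets = {
        {"PDA",             0.12, true, true, 0.08},  // too low on battery
        {"animator PC",     0.20, true, true, 1.00},
        {"projection wall", 2.50, true, true, 1.00},  // outside the hot area
    };
    for (const auto& t : targets)
        std::printf("%-15s -> %s\n", t.name.c_str(),
                    migrationAllowed(t) ? "migrate" : "stay");
}
```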

3.4. Reactive Characters and Level Editing in an AR Game

When used as virtual actors within a real production environment, animated characters are expected to possess a certain autonomy, allowing artists to focus on artistic aspects instead of low-level technical details such as triggering animation sequences, planning paths around physical objects during locomotion, or reacting to real- and virtual-world scene events such as actor behavior or prop arrangement on the stage. Augmented reality-based computer games provide a challenging environment in which to test animated characters against real- and virtual-world application events, and they demand various authoring tasks as well. We created MonkeyBridge, a collaborative multiplayer AR game [1], in which users place real and virtual objects onto a physical surface, thereby influencing the behavior of virtual animated characters and responsive physical objects.

Figure 6. Embodied autonomous agent behavior in MonkeyBridge: a) choosing animation and sound based on platform type; b) path planning depending on the spatial distribution of available blocks.

Figure 5. a) Screenshot of the optical tracking-based setup of the MonkeyBridge collaborative AR game; note the real game elements that show through the virtual scene. b) Game environment with the magnetic tracking setup.


A "monkey bridge" is a fragile wooden construction over a river in South-East Asia; people frequently risk their lives trying to keep their balance while crossing to the other side. In this application, two players dynamically build a monkey bridge for their monster-like characters using virtual and physical pieces of landing stage that vary in shape (Figure 5 shows two application screenshots). The goal is to reach a dedicated target in the middle of a virtual ocean. The game includes many spectacular virtual and physical visual elements, such as animated 3D ocean waves, a flock of virtual seagull boids, and a real, illuminated, smoking volcano and a lighthouse with rotating lights. Sound effects further enhance the game experience.

The game characters are embodied autonomous agents. Their behavior does not require careful and detailed scripting; instead, a dedicated control logic or virtual "brain" decides which animations and sound effects to play, which direction to turn, and whether the target has been reached. The only factors that directly influence agent behavior are the spatial distribution, pose and shape of the virtual and physical building blocks placed on the game board (Figure 6 provides an illustration). It is transparent to the agents whether they walk on physical or virtual tiles, which blurs the boundary between the real and the synthetic game environment and allows the arbitrary combination of real and virtual bridge elements on the game table. The characters autonomously choose the path they walk on, decide how to get from one platform to the next (e.g., jumping up or down when there is a slight difference in height between platform edges), pick the straightest path from several available tiles, and fall into the water if there is no suitable piece of landing stage to walk on. To add affective content to the game, they cheer happily with their hands up when they win, and cry over a lost game.
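One possible shape of this per-step decision is sketched below, with hypothetical names and thresholds: the agent inspects the neighbouring bridge tiles (real or virtual, it cannot tell the difference), prefers the tile requiring the smallest turn, derives its animation from the height difference, and falls into the water when no tile is reachable.

```cpp
// Sketch of the tile-to-tile decision logic of a MonkeyBridge agent.
#include <cmath>
#include <cstdio>
#include <string>
#include <vector>

struct Tile {
    double heading;      // direction from the agent to the tile (radians)
    double heightDelta;  // tile height minus the agent's current height
};

std::string chooseAction(double agentHeading, const std::vector<Tile>& reachable)
{
    if (reachable.empty())
        return "fall_into_water";            // no suitable landing stage

    // Prefer the straightest continuation of the current walking direction.
    const Tile* best = &reachable.front();
    for (const auto& t : reachable)
        if (std::fabs(t.heading - agentHeading) <
            std::fabs(best->heading - agentHeading))
            best = &t;

    if (best->heightDelta >  0.05) return "jump_up";
    if (best->heightDelta < -0.05) return "jump_down";
    return "walk";
}

int main()
{
    std::vector<Tile> ahead = { {0.40, 0.0}, {0.05, 0.08} };
    std::printf("next action: %s\n", chooseAction(0.0, ahead).c_str());
    std::printf("no tiles:    %s\n", chooseAction(0.0, {}).c_str());
}
```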

The different arrangements of bridge elements result in a large number of level combinations, which prevents the game from becoming monotonous. Augmented reality enables an intuitive tangible level editor: before the game starts, users arrange the physical game elements in the game area with their hands, record the manually edited physical level, and register it with its virtual counterpart to initialize the game. After the game starts, players dynamically build the virtual level elements by placing them next to the physical tiles with their interaction devices.

4 CONCLUSION AND FUTURE WORK

In this paper we presented tools and case studies for immersive content authoring built on augmented reality environments. Our techniques take steps towards advanced production tools that assist content creation for computer games and films employing virtual characters that act together with real actors and physical props on the same stage. All of our case studies rely on virtual objects that can react to meaningful events in the physical world measured by a network of sensors. An interesting direction for future work is to extend this currently unidirectional communication channel between the real and the virtual into a bidirectional channel, whereby responsive physical stage props react to attribute changes and behavioral actions of virtual characters and props with the help of built-in physical actuators.


ACKNOWLEDGEMENTS

This project has been sponsored by the Austrian Science Fund FWF (contract no. Y193). The authors wish to thank Ulrich Krispel, Christoph Schinko, Thomas Psik, and Daniel Wagner for their help with our demo applications.

REFERENCES

[1] Barakonyi, I., Weilguny, M., Psik, T., Schmalstieg, D.: MonkeyBridge: Autonomous Agents in Augmented Reality Games. In Proc. of the ACM SIGCHI International Conference on Advances in Computer Entertainment Technology (ACE'05), Valencia, Spain, 2005.
[2] Barakonyi, I., Schmalstieg, D.: Ubiquitous Animated Agents for Augmented Reality. In Proc. of the 5th IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR'06), Santa Barbara, CA, USA, 2006.
[3] Bolter, J. D., Grusin, R.: Remediation: Understanding New Media. MIT Press, Cambridge, Massachusetts, 2000. ISBN 0-262-52279-9.
[4] Cal3D website: http://gna.org/projects/cal3d/
[5] Esposito, C., Paley, W. B., Ong, J.: Of Mice and Monkeys: A Specialized Input Device for Virtual Body Animation. In Proc. of the Symposium on Interactive 3D Graphics, Providence, RI, USA, 1995.
[6] Ichikari, R., Kawano, K., Kimura, A., Shibata, F., Tamura, H.: Mixed Reality Pre-visualization and Camera-Work Authoring in Filmmaking. Poster at the 5th IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR'06), Santa Barbara, CA, USA, 2006.
[7] Ishii, H., Ullmer, B.: Tangible Bits: Towards Seamless Interfaces between People, Bits and Atoms. In Proc. of the Conference on Human Factors in Computing Systems (CHI'97), Atlanta, GA, USA, 1997, pp. 234-241.
[8] Knep, B., Hayes, C., Sayre, R., Williams, T.: Dinosaur Input Device. In Proc. of the Conference on Human Factors in Computing Systems (CHI'95), Denver, CO, USA, 1995, pp. 304-309.
[9] Ledermann, F., Schmalstieg, D.: APRIL: A High-level Framework for Creating Augmented Reality Presentations. In Proc. of the IEEE Virtual Reality 2005 Conference, Bonn, Germany, 2005, pp. 187-194.
[10] Lee, G. A., Nelles, C., Billinghurst, M., Kim, G. J.: Immersive Authoring of Tangible Augmented Reality Applications. In Proc. of the 3rd IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR'04), Arlington, VA, USA, 2004, pp. 172-181.
[11] MacIntyre, B., Bolter, J. D., Vaughan, J., Hannigan, B., Moreno, E., Haas, M., Gandy, M.: Three Angry Men: Dramatizing Point-of-View Using Augmented Reality. In Proc. of SIGGRAPH 2002 Technical Sketches, San Antonio, TX, USA, 2002.
[12] MacIntyre, B., Gandy, M.: Prototyping Applications with DART, The Designer's Augmented Reality Toolkit. In Proc. of the Software Technology for Augmented Reality Systems Workshop (STARS 2003), Tokyo, Japan, 2003.
[13] Nichols, J., Myers, B. A., Higgins, M., Hughes, J., Harris, T. K., Rosenfeld, R., Pignol, M.: Generating Remote Control Interfaces for Complex Appliances. In CHI Letters: ACM Symposium on User Interface Software and Technology (UIST'02), Paris, France, 2002, pp. 161-170.
[14] Schmalstieg, D., Fuhrmann, A., Hesina, G., Szalavári, Zs., Encarnação, M., Gervautz, M., Purgathofer, W.: The Studierstube Augmented Reality Project. In PRESENCE: Teleoperators and Virtual Environments, MIT Press, 2002.
[15] Welman, C.: Inverse Kinematics and Geometric Constraints for Articulated Figure Manipulation. Master's Thesis, Simon Fraser University, 1993.
[16] Zauner, J., Haller, M., Brandl, A., Hartmann, W.: Authoring of a Mixed Reality Assembly Instructor for Hierarchical Structures. In Proc. of the 2nd International Symposium on Mixed and Augmented Reality (ISMAR'03), Tokyo, Japan, 2003, pp. 237-246.