InvIncrements: Incremental Software to Support Visual Simulation

David C. Banks and Wilfredo Blanco
Florida State University, Tallahassee, Florida, USA

ABSTRACT
This paper describes incremental software to support interactive visual simulation. The software was used in the classroom so that students could modify a common prototype code to create diverse applications. In the prototype application, parameters of the simulation are controlled through the use of 3D widgets. The software, based on Open Inventor, has been tested in the classroom (Fall 2002) for Linux and Irix systems, and is available on the World Wide Web.

Keywords: computational simulation, interactive 3D graphics, problem solving environments (PSE), Open Inventor

Further author information: (Send correspondence to D. Banks) D. Banks: E-mail: [email protected], Telephone: 850 644 0183

1. INTRODUCTION

This paper describes incremental software we developed to support a course titled “Interactive Computational Simulation” offered within the School of Computational Science at Florida State University. The course introduced students from diverse backgrounds in the sciences and engineering to the area of computational science. Computational science is concerned with developing mathematical models of physical phenomena, implementing the models by efficient algorithms (often on high-performance parallel computers), and visualizing the resulting simulations to support a scientific investigation. Its interdisciplinary nature distinguishes it from Computer Science. In the initial course offering, nine graduate students enrolled from the departments of Computer Science (5), Physics (1), Mathematics (1), and Engineering (2).

Supercomputer users devote considerable cycles to simulations of physical phenomena (such as weather, turbulence, or combustion). These simulations are not interactive (running instead in batch mode) and attempt to employ physics faithfully. By contrast, desktop and laptop users devote considerable cycles to games. These games are very interactive, but rely on high-level models, scripting, or artificial intelligence instead of faithful physics. The combination of interaction and faithful physics has not yet attracted a large following. To the scientist, the main value of interaction appears to be as a debugging tool, but Wilcox reported that this appearance is illusory [1]. There is little published evidence that interactive 3D graphical simulations contribute significantly to science, but a few examples have been documented of these problem solving environments (PSEs) in actual use by researchers. SCIRun/BioPSE [2] is one such PSE; it combines 3D widgets with a simulation of bio-electric fields of the heart or brain. This PSE has been used by several of the project’s collaborators, leading to multiple publications.

While the case may be weak for the value of interactive simulations in research, they have value as pedagogical tools for students. One example of an instructional 3D PSE is the Web-based Optics Project (“WebTOP”) [3]. WebTOP includes modules that run in a browser (using Java and VRML) simulating such things as waves, interference, reflection, refraction, polarization, and lasers; it has been used at several universities to support undergraduate instruction in physics. (Several other PSEs have been developed as well; Parker’s PhD thesis provides a comprehensive survey [4].) Therefore, two reasons for teaching science students to write their own interactive 3D simulations are that (1) such experimental systems have value for a student acquiring expertise in a discipline, and (2) such systems show evidence of being valuable to the expert.

BioPSE/SCIRun and WebTOP were designed to be run as applications, not to serve as development tutorials. BioPSE/SCIRun comprises more than 200,000 lines of code and WebTOP’s code is not in the public domain, so neither is well suited to serve as instructional software. Our aim was to create courseware that could be used at other universities introducing a computational science curriculum or offering courses on interactive game design. With this goal in mind, we chose Open Inventor as the graphics layer for several reasons.
Open Inventor provides an interactive 3D viewer; it maintains a scene graph that can be imported or exported in plain text format for rapid prototyping of a scene; it offers 3D widgets that support direct manipulation of objects in a scene; it hides low-level OpenGL calls; and it is object oriented. These attributes are important for a student to succeed in creating an interactive simulation in a single semester.

Figure 1. Wave simulation. A 3D widget controls the state of an oscillator producing waves in a medium with wave speed c. Left: the oscillator is stationary. Middle: the oscillator moves to the right at 0.6c, producing a visible Doppler effect. Right: the oscillator moves at 1.5c, faster than the wave speed, producing a wake.

The Open Inventor source code is in the public domain (oss.sgi.com/projects/inventor/, www.coin3d.org/) and there are commercial implementations available as well (www.tgs.com/). The Inventor Mentor is available as a programming guide [5]. There is a discussion forum (comp.graphics.api.inventor) where the active user community is available for comments and help.

Although our School has high-performance computing assets (including a 168-processor IBM SP), these machines are not available to students for homework assignments, a situation that generally prevails at other universities with high-performance computers. Our instructional platform was simply a single-CPU system. Students developed their code on Intel Xeon machines (running Linux) with nVidia graphics cards. Although this platform does not support the level of simulation provided by a high-performance computer, it proved adequate for developing demonstrations of interactive simulations.

The incremental software is built on the concept of customizing a common codebase in diverse ways. For the common prototype we chose a wave simulator, inspired largely by the interactive Waves module in WebTOP. Several characteristics make a wave simulator an attractive choice as the starting point for presenting the development of an interactive simulation. Waves produce a dynamic, moving surface that clearly demonstrates the simulation is running. Waves are so familiar that a student needs little explanation of the physical phenomenon being displayed; moreover, visualizing the behavior of a moving source (e.g., producing the Doppler effect when moving slowly; generating a wake when moving faster than Mach 1) is itself of interest to many students in the sciences (figure 1). An analytic solution is available for waves propagating from an oscillator, so a numerical “solver” uses mere function evaluation (a sketch of such an evaluation appears at the end of this section). The solver can be upgraded to incorporate solution techniques for differential equations, leading naturally to discussions of numerical algorithms. The parameters of the oscillator and of the medium provide an entry point to the design of 3D widgets. Finally, with a 300×300-vertex mesh, the wave simulator runs interactively (about 10 Hz) on the students’ single-CPU systems, an additional reason for choosing it.

The first eleven weeks (and the first eleven assignments) in the semester were devoted to various aspects of developing an interactive 3D application in which the user can change parameters of a simulation and see the results in real time displayed on the screen. Although we did not use virtual reality (VR) devices such as 3D trackers or head-mounted displays, the assignments employed 3D widgets with a view toward developing VR-ready applications. The remainder of the semester was used for developing final projects. The students chose projects from diverse areas in biology, engineering, and physics (such as simulating ion transport across cell membranes, simulating mass/spring systems, and simulating collisions of elementary particles). They used the InvIncrements as their starting point, specializing the prototype code into each of their specific applications.
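For illustration, a minimal sketch of such a function evaluation is shown below. The struct, the parameter names, and the damping model are illustrative only; they are not taken from the InvIncrements source, which exposes these values through 3D widgets.

    #include <cmath>

    const float kPi = 3.14159265f;

    // Illustrative oscillator parameters; the InvIncrements set these
    // through 3D widgets rather than hard-coded constants.
    struct Oscillator { float x, y, amplitude, frequency, phase; };

    // Height of the medium at point (px, py) and time t for a stationary
    // oscillator: a damped sinusoid radiating outward at speed waveSpeed.
    // (A moving source, as in figure 1, also requires the source's past
    // positions; that bookkeeping is omitted here.)
    float waveHeight(const Oscillator &o, float waveSpeed, float damping,
                     float px, float py, float t)
    {
        float dx = px - o.x, dy = py - o.y;
        float r = std::sqrt(dx * dx + dy * dy);
        float k = 2.0f * kPi * o.frequency / waveSpeed;   // wavenumber
        float w = 2.0f * kPi * o.frequency;               // angular frequency
        return o.amplitude * std::exp(-damping * r)
             * std::sin(k * r - w * t + o.phase);
    }

    // The "solver" then merely loops over the 300x300 mesh vertices each
    // frame and assigns waveHeight(...) to the vertical coordinate.

Because the solver is a pure function of position and time, it can later be swapped for a finite-difference update of the wave equation without changing the surrounding application.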

2. STRUCTURE OF THE INVINCREMENTS

A typical computational simulation combines elements of (1) computer science, (2) applied mathematics, especially linear algebra and differential equations, and (3) a particular application discipline such as biology, chemistry, engineering, or physics [6]. We developed the prototype simulation in incremental, tutorial steps so that the non-CS students could gain proficiency in time to complete projects before the end of the semester. Each week we introduced a 500-line patch (C++ code and comments), called an InvIncrement, to a working Inventor application.

Students modified and customized the code, and gave weekly demonstrations of their individualized simulations.

The InvIncrements. In week 0 we covered the basics of Unix makefiles, program compilation, text editing, and the structure of an Open Inventor application. Each student implemented a customized parametric surface in week 1, and created a time-varying surface in week 2. Week 3 addressed the real-time concern of a re-entrant inner loop, so that the user could interactively rotate a partially updated surface while the computation of the surface mesh was still in progress. In week 4 the students applied 2D textures, a crucial visualization technique for displaying scalar quantities. In week 5 they made their applications parse the command line. In weeks 6, 7, and 8 they learned to use 3D widgets and customize their appearance and behavior; this was really the heart of the course, since the widgets communicated the user’s 3D input to the state of the simulation. Week 9 introduced them to techniques that produce graphically self-documenting code. In week 10 they created 2D menus (using X windows) to open and save state-files, or checkpoints, of the simulation. Elements of the InvIncrements are described in more detail below.

Widget design. InvIncrements numbered 6, 7, and 8 concern 3D widgets. Objects in a 3D scene can be manipulated directly via 3D widgets located on or near them, rather than via 2D sliders and dials placed on the edge of an application window [7]. Designing direct-manipulation widgets can be a challenge if a simulation has many parameters. Some parameters do not have clear geometric effects. Others have non-local effects that do not map well onto a physical realization. Different parameters (such as phase and wave speed) can produce the same geometric effect, so a change made to a geometric shape has an ambiguous interpretation in parameter space. Designing the prototype simulation provided an opportunity to show several examples of how a programmer might incorporate 3D widgets (“draggers” in Inventor parlance) within the scene, as opposed to incarnating these controls as dials and sliders docked on the side of the window. These design decisions, especially concerning the direct-manipulation paradigm, are described below.

Oscillator widget. The oscillator dragger is composed of four parts, which control its position, frequency, amplitude, and phase. When the user moves the dragger, the velocity of that motion is incorporated into the wave simulation. Each part of the dragger changes color and size when activated, and each part exploits an Inventor dragger whose dimension and topology match the domain of the simulation parameter. The largest oscillator part, and therefore the easiest to pick with the mouse, controls the position of the wave source within the two-dimensional plane R2. This part (130 lines of code) uses Inventor’s Translate2Dragger (a sketch of wiring such a dragger to the simulation state appears below). Although the source continually oscillates in the vertical direction (perpendicular to the horizontal plane of the waves), the dragger controlling it does not: chasing a dragger that continually wiggles is a nuisance. This dragger part is the largest because it most nearly expresses a one-to-one geometric mapping to the simulation. Its vertical position is offset from that of the actual oscillator so that it does not continually intersect the wave source but instead remains visible above the wave.
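The sketch below shows one minimal way to wire a Translate2Dragger to the simulation state. The SimState struct and function names are hypothetical; the actual InvIncrements dragger adds custom geometry, highlighting, and velocity tracking not shown here.

    #include <Inventor/draggers/SoTranslate2Dragger.h>
    #include <Inventor/nodes/SoSeparator.h>

    // Hypothetical simulation state; the real code keeps more parameters.
    struct SimState { float sourceX, sourceY; };

    // Motion callback: copy the dragger's translation into the oscillator position.
    static void positionChangedCB(void *userData, SoDragger *dragger)
    {
        SimState *sim = static_cast<SimState *>(userData);
        SoTranslate2Dragger *d = static_cast<SoTranslate2Dragger *>(dragger);
        SbVec3f t = d->translation.getValue();
        sim->sourceX = t[0];            // the dragger moves in its local x-y plane
        sim->sourceY = t[1];
    }

    SoSeparator *makeOscillatorPositionWidget(SimState *sim)
    {
        SoSeparator *sep = new SoSeparator;
        SoTranslate2Dragger *dragger = new SoTranslate2Dragger;
        dragger->addMotionCallback(positionChangedCB, sim);
        sep->addChild(dragger);
        // A transform (not shown) rotates the dragger's plane into the
        // horizontal plane of the waves and offsets it above the surface.
        return sep;
    }

The full oscillator widget composes four such parts (position, frequency, amplitude, phase) into a single compound dragger.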
Offsetting the dragger above the wave, however, has implications for negative amplitudes (see below). The oscillator frequency lies in the one-dimensional domain (0, ∞). Its value is controlled by a barber pole whose stripes have a spatial frequency that reflects the oscillator’s temporal frequency; “direct manipulation” is interpreted through the Fourier transform of the grating. This dragger (150 lines of code) is constructed from Inventor’s Translate1Dragger.

Figure 2. Frequency component of the oscillator widget. The mouse drags along the barber pole, changing the oscillator frequency from low (left) to high (right). The spatial frequency of the stripes on the widget changes accordingly.

The texture is scaled by a Texture2Transform. The user slides the cursor across the pole’s texture to change the oscillator frequency. Inventor’s Translate1Dragger actually moves along with the mouse; to make its barber-pole geometry remain stationary, the geometry is translated by the vector −v whenever the dragger itself translates by v. Originally it seemed reasonable to let the user directly manipulate a stripe on the barber pole to change the spacing of the stripes; this design proved overly sensitive when the stripes are closely spaced, and it is unusable when the stripes are spaced farther apart than the pole’s length. Motivated by Fitts’ law [8], we opted instead for an exponential mapping. As the mouse moves a distance x, it changes the frequency f according to f = f0 exp(b x), where b is a constant that determines the sensitivity of the dragger in response to the mouse and f0 is the frequency when the dragger is activated. Figure 2 illustrates the behavior of the frequency part of the oscillator dragger.

A cap on the barber pole controls the oscillator amplitude. The amplitude is a one-dimensional parameter, so a Translate1Dragger controls it (130 lines of code). If the amplitude is negative, the dragger lies below the surface, which occludes it and makes it unpickable. It is important that the user not lose track of the dragger, but we did not find a way to meet this design criterion without violating some other one.

A flat disk controls the oscillator phase (140 lines of code). A tick mark texture-mapped onto the disk indicates the phase angle, which lies in the domain S1, the unit circle. The user controls the angle via Inventor’s RotateDiscDragger.

Non-local widgets. The medium through which the waves propagate exposes two parameters to the user: wave speed and damping. Since the medium is not localized at a single point, there is no obvious choice of where to place the draggers that control it. We simply bundled the draggers together, front and center of the surface mesh. This choice, like dumping the controls onto the application’s window border, admittedly violates the spirit of direct manipulation. Our primary reason for installing such widgets in the 3D scene at all is to make the prototype VR-ready, with a view toward porting the prototype to a CAVE-like virtual environment [9].

The wave speed of the medium is controlled by another barber-pole dragger (145 lines of code). The reason for choosing this design was not so much to re-use code from the oscillator frequency widget, but rather because changes in wave speed can make the waves stretch or squish as though the oscillator frequency were changing. The two draggers look alike because their effects look similar.

Waves are damped as they propagate from the oscillator. The amount of damping is controlled by a Translate1Dragger (320 lines of code) positioned beside a 2D graph. The graph sketches exponential curves with different damping exponents as determined by the dragger (figure 3). We used texture mapping to draw the sketches on a quadrilateral mesh. The texture coordinates solve the inverse of the exponential function: each vertex (x, y) in the mesh is assigned texture coordinates (s, 0), where

s = √(−x² / ln y).
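A minimal sketch of that texture-coordinate assignment follows, assuming the graph mesh stores its vertices in an SoCoordinate3 node and its texture coordinates in an SoTextureCoordinate2 node; the function name and the guard on y are illustrative, not the actual InvIncrements code.

    #include <cmath>
    #include <Inventor/nodes/SoCoordinate3.h>
    #include <Inventor/nodes/SoTextureCoordinate2.h>

    // Assign each graph vertex (x, y) the texture coordinate (s, 0) with
    // s = sqrt(-x^2 / ln y), the inverse of the exponential damping curve.
    // The texture has a single dark pixel at s = 0; offsetting the texture
    // by (a, 0) then darkens exactly the vertices lying on y = exp(-x^2/a^2).
    void assignDampingTexCoords(SoCoordinate3 *coords,
                                SoTextureCoordinate2 *texCoords)
    {
        const int n = coords->point.getNum();
        for (int i = 0; i < n; i++) {
            SbVec3f p = coords->point[i];
            float x = p[0];
            float y = p[1];
            float s = 0.0f;
            if (y > 0.0f && y < 1.0f)               // ln(y) < 0 only on (0, 1)
                s = std::sqrt(-x * x / std::log(y));
            texCoords->point.set1Value(i, SbVec2f(s, 0.0f));
        }
    }

As described in the text, the damping dragger then only needs to update the translation field of an SoTexture2Transform to (a, 0) in order to slide the dark pixel along the family of curves.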

Figure 3. Widgets governing the state of the medium. The barber pole controls the wave speed. The cone-shaped slider (highlighted) governs the amount of damping. The exponential damping curve is sketched by applying a 1D texture to a mesh whose texture coordinates solve the inverse of the exponential function.

Figure 4. GNU enscript output of source code with an automatically generated figure of the oscillator dragger. The position part is changed from a sphere to a cube; the code is recompiled; a script launches the simulation with the camera focused on the highlighted part; the script converts the resulting image to encapsulated PostScript; GNU enscript inserts the portrait into the pretty-printed version of the source code.

A long, thin white texture with a single dark pixel at (0, 0) is applied to the mesh. The dragger produces a texture offset (using Inventor’s Texture2Transform) by an amount (a, 0) so that the black pixel is drawn wherever a = √(−x² / ln y). That condition holds when ln y = −x²/a², so a black curve is produced through the points where

y = exp(−x²/a²),

thereby sketching the desired curve.

Self-documenting 3D code. “Literate programming” is a phrase coined by Knuth [10] to describe the combination of a programming language and a documentation language so that the written program is intended for the human reader. Our technique for producing such self-documenting code combines elements of Open Inventor and Unix tools. The GNU “enscript” utility converts code from raw text to pretty-printed PostScript, which can then be converted to Adobe’s Portable Document Format (pdf). GNU enscript responds to escape sequences that begin with the null character \0. For example, the string \0epsf{figure.eps} causes insertion of an encapsulated PostScript figure into the output. In the prototype code, command-line flags cause the 3D viewer to point its camera at a dragger of choice, highlight it, render a single image, and exit. A shell script launches the application, saves the portrait of a dragger, and converts it to encapsulated PostScript for insertion into the pretty-printed code.

Figure 5. GNU enscript output of source code with an automatically generated figure of the scene graph. The root node (blue sphere at the top of the graph) has several children, the first of which are the viewTransform and LookAtTransform nodes indicated in the code. When the structure of the scene graph is modified in the source code, the resulting tree is displayed with the Inventor gview utility. The portrait of the scene graph is converted to encapsulated PostScript, which GNU enscript inserts into the pretty-printed version of the source code.

The Makefile serves to coordinate these activities and produce a document containing the current image of the dragger. The programmer may, for example, change the appearance of the oscillator from a sphere to a cube in line 227 of oscillatorDragger.cxx and then re-make the pdf document. The resulting document shows the changed code and also the changed appearance of the dragger (assuming the change compiles and executes) (figure 4). This feature keeps the source code self-documented in a graphical way, which invites both teacher and student to actually read it.

We also automatically insert images of scene graphs as comments in the code, again using a combination of utilities. We modified the Open Inventor program “gview” (available with the open source distribution) to draw the tree-structured scene graph. The simulation writes each scene subgraph to an Inventor-format file. The file is read and displayed by gview. The image is saved and converted to encapsulated PostScript, then inserted by GNU enscript into the pretty-printed code (figure 5).
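The sketch below shows one way such a single-frame portrait might be produced with Open Inventor's offscreen renderer; it is an illustrative reconstruction, not the actual InvIncrements code, which uses the interactive viewer together with a shell script. The function name and the assumption that the root already contains the camera and lights are ours.

    #include <cstdio>
    #include <Inventor/SbViewportRegion.h>
    #include <Inventor/SoOffscreenRenderer.h>
    #include <Inventor/nodes/SoPerspectiveCamera.h>
    #include <Inventor/nodes/SoSeparator.h>

    // Render one portrait of a dragger subgraph and write it as PostScript;
    // a Makefile rule then converts it to EPS and feeds it to GNU enscript.
    bool writeDraggerPortrait(SoSeparator *root, SoPerspectiveCamera *camera,
                              SoNode *dragger, const char *filename)
    {
        SbViewportRegion viewport(512, 512);

        // Aim the camera at the chosen dragger; viewAll frames the subgraph.
        camera->viewAll(dragger, viewport);

        // root is assumed to contain the camera, lights, and the dragger.
        SoOffscreenRenderer renderer(viewport);
        if (!renderer.render(root))
            return false;

        FILE *fp = fopen(filename, "w");
        if (!fp)
            return false;
        SbBool ok = renderer.writeToPostScript(fp);
        fclose(fp);
        return ok == TRUE;
    }

Highlighting the chosen part and exiting after the write are handled by the application's command-line flags, as described above.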

3. PERFORMANCE OF THE INVINCREMENTS

The wave simulator implemented with InvIncrements achieves update rates of about 8–10 Hz on Linux workstations equipped with nVidia graphics. We were intrigued by the prospect of using nVidia’s Cg vertex-shading language to exploit the graphics processing unit (GPU) for computing the physical simulation as well as rendering the geometry. Our straightforward implementation of the simulation’s inner loop increased the rendering rate by another factor of two, to about 15 Hz. The comparison is inexact because we were required to simplify the computation to fit the graphics card’s limit on shader instructions in OpenGL applications. This improvement is much lower than the 25-fold speedup reported by Harris for migrating a different computational simulation from the CPU to the GPU [11]. Extra care is required to move the simulation code from the Inventor-level application down to the OpenGL level because the render state (transforms, materials, lights) is not inherited at the graphics hardware level used by Cg.
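One way to interleave such low-level state with the Inventor scene is to bracket the wave mesh with SoCallback nodes that issue the Cg runtime calls during the GL render traversal. The sketch below illustrates the idea; the shader file name (wave.cg), its entry point, and its parameter names are hypothetical, and the actual InvIncrements shader is not published in this paper.

    #include <Cg/cg.h>
    #include <Cg/cgGL.h>
    #include <Inventor/actions/SoGLRenderAction.h>
    #include <Inventor/nodes/SoCallback.h>
    #include <Inventor/nodes/SoSeparator.h>

    // Illustrative Cg state; the real shader and parameters differ.
    static CGcontext cgContext;
    static CGprogram waveProgram;
    static CGprofile vertexProfile = CG_PROFILE_ARBVP1;

    static void enableWaveShaderCB(void *userData, SoAction *action)
    {
        // Only touch GL state during the render traversal.
        if (!action->isOfType(SoGLRenderAction::getClassTypeId()))
            return;
        float *simTime = static_cast<float *>(userData);
        cgGLBindProgram(waveProgram);
        cgGLEnableProfile(vertexProfile);
        cgGLSetParameter1f(cgGetNamedParameter(waveProgram, "time"), *simTime);
    }

    static void disableWaveShaderCB(void *, SoAction *action)
    {
        if (!action->isOfType(SoGLRenderAction::getClassTypeId()))
            return;
        cgGLDisableProfile(vertexProfile);
    }

    // Wrap the wave mesh between callback nodes that enable/disable the shader.
    SoSeparator *makeGPUWaveNode(SoNode *waveMesh, float *simTime)
    {
        cgContext = cgCreateContext();
        waveProgram = cgCreateProgramFromFile(cgContext, CG_SOURCE, "wave.cg",
                                              vertexProfile, "main", NULL);
        cgGLLoadProgram(waveProgram);

        SoSeparator *sep = new SoSeparator;
        SoCallback *enable = new SoCallback;
        enable->setCallback(enableWaveShaderCB, simTime);
        SoCallback *disable = new SoCallback;
        disable->setCallback(disableWaveShaderCB);
        sep->addChild(enable);
        sep->addChild(waveMesh);
        sep->addChild(disable);
        return sep;
    }

Because the Inventor render state is not inherited at this level, any transforms, materials, or lights the shader needs must be passed to it explicitly as additional Cg parameters.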

4. RESULTS AND FUTURE WORK

We described a simulation prototype that supports a graduate course on interactive computational simulation. The prototype runs under Irix (SGI) and Linux. It is built using open-source tools such as Open Inventor. Each increment of the prototype, called an InvIncrement, is about 500 lines of code. The simulation employs a variety of 3D widgets to provide direct manipulation of the 3D scene. Each widget uses an Open Inventor dragger (such as Translate1Dragger, Translate2Dragger, or RotateDiscDragger) that corresponds to the domain (such as R1, R2, or S1) of the parameter it governs. The widgets offer examples of several techniques that can be useful in interacting with computational simulations: mapping time to space via a Fourier transform; dragging a texture rather than the dragger it adorns; and sketching the shape of an abstract parameter by drawing a level set in a texture.

Figure 6. InvIncrements customized into various applications. Top row: ion transport across axon membrane; superposition of waves; tissue-constrained diffusion of water molecules. Bottom row: swimming fish; the game of cricket; morphology of seashells.

The software is self-documenting in the sense that it automatically generates images of the scene graphs and the widgets it implements. These images are inserted into the pretty-printed pdf version of the source code.

The prototype was customized by nine students enrolled in the course “Interactive Computational Simulation.” Four of the nine students came from disciplines outside of computer science. Examples of the students’ customized simulations are shown in Figure 6. These include simulations of ion transport across an axon membrane; superposition of waves; anisotropic diffusion of water within the white matter of the brain; the swimming motion of fish; the game of cricket; and seashell morphology.

We gratefully acknowledge support for this work from NSF grants #0083898 and #0430954, and editorial assistance from K. M. Smith.

REFERENCES

1. E. M. Wilcox, J. W. Atwood, M. M. Burnett, J. J. Cadiz, and C. R. Cook, “Does continuous visual feedback aid debugging in direct-manipulation programming systems?,” in Proceedings of ACM CHI 1997: Human Factors in Computing Systems, Atlanta, GA, pp. 258–265, 1997.
2. D. Weinstein, P. Krysl, and C. Johnson, “The BioPSE inverse EEG modeling pipeline,” in ISGG 7th International Conference on Numerical Grid Generation in CFS, pp. 1091–1100, 2001.
3. D. C. Banks, J. T. Foley, K. N. Vidimce, M. Kiu, and J. Brown, “Interactive 3D simulation and visualization of optical phenomena,” IEEE Computer Graphics & Applications 18(4), pp. 66–69, 1998.
4. S. G. Parker, The SCIRun Problem Solving Environment and Computational Steering Software System, PhD thesis, Department of Computer Science, University of Utah, 1999.
5. J. Wernecke, The Inventor Mentor: Programming Object-Oriented 3D Graphics with Open Inventor, Addison-Wesley, 1994.
6. Working Group on CSE Education, “Graduate education in computational science and engineering,” SIAM Review 43, pp. 163–177, March 2001.
7. D. Conner, S. Snibbe, K. Herndon, D. Robbins, R. Zeleznik, and A. van Dam, “Three-dimensional widgets,” in 1992 Symposium on Interactive 3D Graphics, pp. 183–188, 1992.
8. P. Fitts, “The information capacity of the human motor system in controlling the amplitude of movement,” Journal of Experimental Psychology 47, pp. 381–391, 1954.
9. C. Cruz-Neira, D. J. Sandin, and T. A. DeFanti, “Surround-screen projection-based virtual reality: The design and implementation of the CAVE,” in Computer Graphics (Proceedings of ACM SIGGRAPH 1993), pp. 135–142, 1993.
10. D. E. Knuth, Literate Programming, Stanford Center for the Study of Language and Information, no. 27, 1992.
11. M. J. Harris, G. Coombe, T. Scheuermann, and A. Lastra, “Physically-based visual simulation on graphics hardware,” in Graphics Hardware 2002, T. Ertl, W. Heidrich, and M. Doggett, eds., pp. 1–10, The Eurographics Association, 2002.
12. M. R. Mine, F. P. Brooks, and C. H. Sequin, “Moving objects in space: Exploiting proprioception in virtual-environment interaction,” in Computer Graphics (Proceedings of ACM SIGGRAPH 1997), pp. 19–27, 1997.