EUROGRAPHICS 2001 / A. Chalmers and T.-M. Rhyne (Guest Editors)

Volume 20 (2001), Number 3

Motion Evaluation for VR-based Motion Training

Seongmin Baek, Seungyong Lee, and Gerard Jounghyun Kim

Department of Computer Science and Engineering, Pohang University of Science and Technology (POSTECH), San 31 Hyoja-dong, Pohang, Kyungbuk, 790-784, Korea

Abstract

Virtual Reality (VR) has emerged as one of the important and effective tools for education and training. Most VR-based training systems are situation-based: trainees are trained for discrete decision-making in special situations presented by the VR environments. In contrast, this paper discusses the application of VR to a different class of training, for learning exact motions, often required in sports and the arts. For correct evaluation of the trainee and effective motion training, the reference motion data need to be retargeted according to the body size of the trainee, and performance feedback should be provided to enhance the training effect. This paper focuses on techniques for retargeting a given reference motion to an arbitrary trainee and for analyzing how well the trainee has followed the reference motion. We assume that the overall posture of a reference motion is more important than its absolute positional profile, and this makes our retargeting formulation different from previous approaches. An on-line quantitative analysis, based on a moment-by-moment comparison between the reference and captured motion data, indicates how well the trainee is doing during training. In addition, curve fitting and wavelet transformation are employed in an off-line qualitative analysis to generate corrective advice afterwards. An application of the proposed techniques to simple swordsman training is demonstrated.

1. Introduction

Training has been considered one of the most natural application areas of virtual reality (VR) [19]. Most VR-based training systems to date are oriented toward learning a sequence of discrete reactive tasks. Training occurs simply by exposing and immersing the user in a virtual environment with various situation scenarios that are otherwise difficult to experience in the real world. The given task is usually to perform a sequence of actions in reaction to important events, and the emphasis is on whether the trainee selected the right type of action rather than on how it is done kinesthetically. This paper discusses the application of VR to a different class of training, for learning exact motions, often required in sports and the arts (e.g. a golf swing, martial arts, calligraphy, sign language, and so on). In a previous work [18], we presented the concept behind VR-based motion training called "Just Follow Me (JFM)", which is based on an intuitive interaction method called the "Ghost" metaphor. Through the

Ghost metaphor, the motion of the trainer is visualized in real time as a ghost (initially superimposed on the trainee) moving out of one's body. The trainee, who sees the motion from various viewpoints, is to "follow" the ghostly master as closely (or as quickly) as possible. However, for a correct evaluation of trainees, the reference motion data need to be retargeted according to the body size of the trainee. While appropriate for animation production, conventional retargeting methods may not be appropriate for motion training, because they are oriented toward placing the end-effector of the resized character as close as possible to that of the original one. Imagine retargeting a "front kick" motion of a tall character to a smaller one: conventional approaches may produce a retargeted posture that tries to kick needlessly high, as if making an effort to reach the kicking height of the taller character. We assume that, for the reference motions we consider, the overall posture is more important than the absolute positional profile of the original motion, and this makes our retargeting formulation different from previous approaches. That is,


the final kick position, which is the constraint point, needs to be repositioned appropriately. An alternative approach is to simply copy the joint angle information, as illustrated in Figure 1; this, however, may result in a situation where a "qualitative" constraint (e.g. a hand and a foot meeting) is not satisfied.

Figure 1: Using end effector positions or joint angles for retargeting: (a) reference posture; (b) retargeted with end effector position (the constraint may not be met at all); (c) retargeted with copied joint angles only (constraints may not be met); (d) retargeted with a newly computed end effector position and joint angles.

In motion training, providing moment-by-moment performance feedback is very important for enhancing the training effect [14]. Many performance measures are possible: accuracy-based ones, such as position/orientation difference, timing difference, and number of oscillations, and speed-based ones, such as task completion time. In this paper, we present two analysis methods, one to evaluate how well the trainee follows the motion and one to provide the trainee with corrective advice. The on-line quantitative analysis evaluates the trainee by the absolute differences in position and orientation between the reference and the captured motions, giving an indication of how well the user is following the reference motion, a very important factor in motion learning. The off-line qualitative analysis employs curve fitting and wavelet transformation and provides the trainee with advice on how to improve the overall motion.

The remainder of this paper is organized as follows. In Section 2, we review other research related to motion training and evaluation. Section 3 gives an overview of the proposed VR-based motion training system in which our evaluation techniques can be applied. In Section 4, we propose techniques for repositioning the constraint points for posture preservation. Sections 5 and 6 describe the on-line and off-line motion analysis methods, respectively. Section 7 illustrates a simple application of the proposed motion training system to swordsman training. Section 8 concludes this paper.

2. Related Work

Motion training can be modeled as a process of transmitting motion information from the trainer to the trainee through a

series of interactions through some communication medium. Books and videos have been popular forms of such transmission media. Recently, the large storage capacity of CD-ROMs and increased computing power have made way for richer and more organized multimedia content for training and education, with text, voice, and short video and image clips. Despite the increased interactivity, the effect of such indirect training is questionable, especially for motion training, as the trainee must interpret a large part of the implicit motor control knowledge and evaluate one's own performance. Like any training or learning process, the interaction should be a two-way communication as much as possible, with immediate performance feedback and correction. As a technology that can provide real-time two-way communication with a multitude of interaction methods, VR-based training becomes a good alternative to the expensive and difficult-to-attain "direct" learning methods. Perhaps the most famous example is NPSNET/SIMNET, a network-based simulation for tactical military training [13]. Others include earthquake readiness training [17], hostage situation resolution training [15], battleship fire escape training [5], fear of heights treatment [7], and machine operation training [9]. One can easily see that these are mostly situation-based training systems for decision making; the user is expected to make decisions to perform a series of actions, and the particular motion is not very important. A system called CAREN, originally developed in a joint European ESPRIT research program and currently a commercial product [4], probably comes closest to our vision of the VR-based motion training system. The purpose of CAREN is to train and rehabilitate patients to overcome balance disorders.
A patient standing on a moving force plate is to correct one's way of staying in balance by looking at an avatar representing the patient (in a third-person viewpoint displayed on a large projection screen in front) and transparent boxes bounding the avatar's limbs that represent the correct posture/motion.

As for work related to motion editing and retargeting for reusing reference motion data, Bindiganavale et al. [1] proposed a method for extracting and editing key characteristics of motion capture data. Character Studio [11], a commercial product made by Kinetix, has the capability to adjust key frames while maintaining foot position and balance when applying a given motion to a new character. Hodgins et al. [8] tried to reuse motion using parametric models. Gleicher [6] solved the motion retargeting problem by minimizing a motion-difference-based objective function while solving for the space-time constraints. Lee et al. [12] applied multi-level B-splines to the same problem, while Choi et al. [2] used closed-loop inverse rate control to produce a retargeted motion on-line by minimizing the end-effector path difference. While perhaps appropriate for animation production, these methods may not be so for motion training, because they are oriented toward placing the end-effector of the resized character as close as possible to that of the original one.

3. Overview of the VR-based Motion Training System

The idea behind the Ghost metaphor in the "Just Follow Me (JFM)" system is very simple and is illustrated in Fig. 2. The motion of the trainer is visualized in real time as a ghost moving out of the trainee's body. The trainee, who sees the motion from a chosen viewpoint, is to "follow" the ghostly master as closely as possible, both timing-wise and position/orientation-wise. Fig. 3 shows the system configuration of the motion training system with the proposed interaction method, divided mainly into four parts: constraint reformulation and posture-based retargeting with reference motion animation (as a ghost overlaid on the trainee), motion capture and animation of the trainee's motion, on-line moment-by-moment motion evaluation, and a post-analysis module for overall motion evaluation.

Figure 4: Motion conversion process

Figure 3: System configuration of the proposed VR-based motion training system

Using the selected reference motion and body size data, the retargeting process starts by scaling and translating the motion data while preserving the original posture (joint angles). Then, if any inter-body constraint, such as "hand touching the shoulder" or "hand clapping", needs to be met, it is solved using a process called "IK Circle" or "IK Sphere" (see Sections 4.2 and 4.3). These methods recalculate the constraint point in order to preserve the original posture as much as possible. When satisfying the constraints, the method applies interpolation around the constraint frames to prevent the motion from changing abruptly. Fig. 4 illustrates the overall process.

The motion of the trainee is captured using eight motion tracking sensors and animated along with the ghostly converted reference motion. As the motions of the trainer and the trainee are displayed on the screen simultaneously, the system compares the two motions by the differences in joint positions (∆d) and angles (∆θ). In the evaluation, particularly important key postures or frames can be manually designated so that they are weighted more heavily in the final score. Fig. 5 illustrates the on-line motion evaluation process.

Figure 5: On-line motion evaluation

For a more qualitative evaluation of the trainee's overall performance, a post-analysis is performed that consists of a curve-fitting procedure and a wavelet analysis (see Fig. 6). The curve fitting neutralizes the differences between the two motions due to time shift (e.g. a slight delay or lookahead) or positional/angular offset (e.g. the trainee performing off the coordinate origin). The Haar wavelet is used to extract features of the trainee's motion curve with respect to the reference curve. By combining the results of the curve fitting and the wavelet analysis, simple, easy-to-understand motion advice can be generated.

Figure 2: Interacting with the ghost metaphor in the JFM system: main first-person viewpoint and alternative third-person viewpoint

Figure 6: Post-analysis of trainee's motion

4. Posture-based Motion Retargeting

In the proposed VR-based motion training system, motion data are represented in the BVH (Biovision Hierarchical Data) format. In a motion data file, the header contains the hierarchical structure and the lengths of the body segments, and the motion is represented by a sequence of postures consisting of root positions and joint angles. A reference motion to be retargeted is selected from the motion database, and the trainee's body segment lengths are given as input. The posture-based motion retargeting process changes the reference motion so that the converted motion fits the trainee's body size while preserving the original posture as much as possible. The retargeting process starts by scaling the body segment lengths according to the ratio between the

trainer's and the trainee's limb sizes. However, if we then simply copy the root positions and joint angles to the newly scaled body segments, problems may occur with respect to the content of the motion data: for instance, the feet may not touch the ground, or the hands may not meet in a clapping motion (see also Figure 1). The following sections present simple solutions to these problems that reposition the constraint points of the motion data (and calculate new joint angles at the constraint points). Interpolation is then applied to the copied or recalculated joint angle data around the constraint points, within a fixed time window. This retargeting process is relatively fast compared to other approaches, which try to derive a new set of joint angles while minimizing the end-effector position difference, a much more computationally expensive process.
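The interpolation step can be sketched as below; the paper does not specify the blending kernel, so a simple linear ease over the window (and scalar joint angles, rather than full rotations) is assumed, and the function name is ours:

```python
def blend_around_constraint(copied, recalced, frame, window):
    """Blend the recalculated joint angle (exact at the constraint
    frame) into the copied angles over a fixed window of neighboring
    frames, so the retargeted motion does not change abruptly."""
    out = list(copied)
    for f in range(max(0, frame - window), min(len(copied), frame + window + 1)):
        w = 1.0 - abs(f - frame) / (window + 1)  # 1 at the constraint frame, easing to 0
        out[f] = (1.0 - w) * copied[f] + w * recalced[f]
    return out
```

At the constraint frame itself the recalculated angle is used unchanged; frames outside the window keep their copied values.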

4.1. Translation to the Ground

To adjust the root positions, we first manually select "important" frames at which one of the feet of the reference character contacts the ground. At these important frames, the corresponding root position of the new character is translated vertically so that the scaled leg touches the ground (see Fig. 7). The root positions at the other (non-important) frames are determined by a B-spline fitting process.
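This adjustment can be sketched as follows; for brevity the sketch interpolates the non-important frames linearly rather than with the B-spline fit used in the paper, and the function name and data layout are our assumptions:

```python
import numpy as np

def ground_root_offsets(foot_heights, contact_frames, n_frames):
    """Vertical root offsets that bring the scaled foot to ground
    height (0) at the manually selected contact frames; offsets at
    the remaining frames are interpolated between contact frames.
    `foot_heights[f]` is the lowest foot height at frame f after
    segment scaling but before any root adjustment."""
    offsets = [-foot_heights[f] for f in contact_frames]
    return np.interp(np.arange(n_frames), contact_frames, offsets)
```

The returned array is added to the vertical component of the root position at every frame.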


Figure 7: Translating the root to the ground


4.2. IK Circle

A constraint of an end of a movable limb touching an immovable part of the body may no longer be satisfied after simply scaling the body segments. For example, a hand may no longer touch the shoulder because the scaled arms are too short or too long. Such a problem can be solved by applying the usual inverse kinematics approach; in this paper, we present a faster and simpler one. Assume that the hand touches the other shoulder in the reference motion but does not with the scaled upper and lower arms. The lower arm possesses two degrees of freedom, twisting and bending. However, the upper and lower arms move in the same plane, which is determined by the left and right shoulders and the elbow position. Two circles can then be formed in this plane, with the left and right shoulders as their centers and the lengths of the lower and upper arms as their radii. When the two circles meet at two intersection points, we select the intersection point closer to the original elbow position as the new elbow position (see Fig. 8(b)). If there is no intersection point, meaning the arms are too short to touch the new shoulder point, the constraint cannot be met; in this case, the arms are configured to stretch fully but will not reach the desired shoulder point. A similar approach can be applied to constraints concerning other movable limbs and immovable parts of the body. The changed joint angles at the constraint frames are propagated to the neighboring frames, as done in previous work [12], to prevent abrupt changes.

Figure 8: Example of IK Circle: a hand touches a shoulder on the other side.

4.3. IK Sphere

Analogous to the IK Circle, the IK Sphere is used for a constraint in which two movable end points need to be relocated; for example, consider two hands meeting each other in a clapping motion (see Fig. 9(a)). The IK Sphere is a generalization of the IK Circle method. Figs. 9(b) and (c) illustrate the process of the IK Sphere. We explain the process with a clapping motion as an example. Let Ct be the vector from the center between the left and right shoulders to the contact point of the two hands in the reference motion. We first reposition the constraint point to the new location determined by adding the vector Ct, scaled according to the ratio between the trainer's and the trainee's arm lengths, to the center between the left and right shoulders. Then, two spheres can be formed with their centers at the new constraint point and at the shoulder, respectively. The radii of the respective spheres are the lengths of the lower and the upper arm. Note that, contrary to the case of the IK Circle, the new constraint point may not lie on the plane formed by the shoulder, the original elbow, and the original constraint point. Hence, we do not know the plane to which the new elbow belongs and need spheres instead of circles to determine the new elbow position. The two spheres intersect to form a circle, as shown in Fig. 9(c). The new elbow position is to lie somewhere on the perimeter of this intersection circle. We select one such point by calculating the point on the circle with the shortest distance from the elbow position of the original trainer motion. To do this, we find an intersection point between the circle perimeter and a line whose direction is defined by projecting the vector from p to c onto the circle plane (see Fig. 9(c)). Using the IK Circle and IK Sphere, we can handle most types of constraints in human motion/posture, when finger, wrist, or ankle motions are ignored.
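The geometry just described can be sketched as follows. This is a sketch under our own naming and vector conventions, not the authors' code; degenerate inputs (e.g. an original elbow lying exactly on the shoulder–constraint axis) are not handled:

```python
import numpy as np

def ik_sphere_elbow(shoulder, constraint, upper_len, lower_len, old_elbow):
    """New elbow for the IK Sphere case: intersect the sphere of radius
    upper_len about the shoulder with the sphere of radius lower_len
    about the repositioned constraint point, then pick the point on the
    resulting intersection circle closest to the original elbow.  If
    the spheres do not intersect, the arm is fully stretched toward the
    constraint point (the constraint cannot be met)."""
    s, c, e = (np.asarray(v, dtype=float) for v in (shoulder, constraint, old_elbow))
    axis = c - s
    d = np.linalg.norm(axis)
    u = axis / d
    if d > upper_len + lower_len or d < abs(upper_len - lower_len):
        return s + upper_len * u                      # fully stretched arm
    a = (upper_len**2 - lower_len**2 + d**2) / (2 * d)  # distance to the circle's plane
    center = s + a * u
    radius = np.sqrt(upper_len**2 - a**2)
    radial = (e - center) - np.dot(e - center, u) * u   # project old elbow onto the plane
    radial /= np.linalg.norm(radial)
    return center + radius * radial
```

The IK Circle case falls out of the same computation when all points lie in one plane.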

5. On-Line Motion Evaluation

The on-line evaluation scheme is basically a weighted sum of positional and angular differences. Both quantities are used because taking just one or the other may result in poor evaluation. The difference in position is computed in world coordinates, while the difference in joint angles is computed in the local coordinate system. Each difference value is normalized into a score value using a discrete range table, and the scores are summed into a final score. The range table is supplied by a motion expert when the reference motion is added to the database.


In the evaluation, some joints may be assigned higher priorities or weights. For example, in a certain dance, the waist movement may be very important and be assigned a higher weight compared to the legs or the arms. Ideally, these weights should be set by the motion expert; however, this inherently data-dependent process can be very time-consuming and tedious. A good alternative is to compute the total moving distance and the mean and deviation of each joint and to assign the weights from this data: the heavier the detected movement, the higher the weight assigned in the evaluation. To evaluate the whole-body posture, the system compares the differences in all important body positions and joint angles at each frame to produce a frame-by-frame score; these scores are then summed and averaged into a final score. Certain frames can be designated as key posture frames and given higher weights so that the final score reflects this aspect. Fig. 10 illustrates this process.
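A minimal sketch of this scoring scheme follows; the function names and the exact form of the movement-based weighting heuristic are our assumptions, while the worked numbers match the example in Fig. 10:

```python
def frame_score(priorities, level_scores, frame_weight):
    """Frame-by-frame score: each joint's normalized level score
    (looked up in the expert-supplied range table) is weighted by the
    joint's priority, and the frame total is scaled by the frame's
    key-posture weight."""
    return sum(a * d for a, d in zip(priorities, level_scores)) * frame_weight

def movement_weights(trajectories):
    """Heuristic joint weights from total moving distance, as an
    alternative to hand-assigned priorities: heavier movement gets a
    higher weight."""
    dists = [sum(abs(b - a) for a, b in zip(t, t[1:])) for t in trajectories]
    total = sum(dists)
    return [d / total for d in dists]

# Worked example from Fig. 10:
# frame_score([0.35, 0.2, 0.1, 0.1, 0.25], [1, 1, 0.5, 0, 0.5], 0.4) ≈ 0.29
```

Averaging the per-frame scores over the motion then gives the final score.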


6. Post-analysis of Motion

Despite using both positional and angular information with weights and key frames, the on-line motion evaluation still fails to produce a qualitative characterization of the trainee's performance because it is a local measure. A global curve fitting technique is therefore used to neutralize the effect of the short time delay or lookahead that often occurs in a motion-following situation; without it, a well-followed motion with a slight delay would receive a low score. We also explore the use of wavelets to extract global motion characteristics with respect to the reference motion.
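The wavelet analysis used in this section builds on the Haar transform; a minimal sketch of one analysis level, assuming an even number of samples (function name ours):

```python
def haar_level(values):
    """One level of the (unnormalized) Haar transform: pairwise
    averages capture the local trend of the curve, and pairwise
    half-differences capture its local vibration.  Assumes an even
    number of samples."""
    pairs = list(zip(values[::2], values[1::2]))
    averages = [(a + b) / 2 for a, b in pairs]
    details = [(a - b) / 2 for a, b in pairs]
    return averages, details

# haar_level([9, 7, 3, 5]) -> ([8.0, 4.0], [1.0, -1.0])
```

Applying the function recursively to the averages yields the coarser levels of the coefficient hierarchy.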

Figure 9: Example of IK Sphere: clapping hands


Figure 10: Example of the calculation of the frame-by-frame score: EvaluationScore = Σ_i (Σ_j α_j δ_j) β_i = (0.35 × 1 + 0.2 × 1 + 0.1 × 0.5 + 0.1 × 0 + 0.25 × 0.5) × 0.4 = 0.29, where α_j is the priority of joint j, δ_j is its level score, and β_i is the weight of frame i.













6.1. Curve Fitting for Motion Analysis

6.1.1. Horizontal Shift


A person training by following for the first time usually exhibits a delay of at least 4 to 5 frames, because the human reaction time to visual stimuli is known to be about 120 to 200 milliseconds [10]. These delays are especially apparent at the various "turning points" or explicitly marked milestones during the motion (e.g. the ball-hitting point, or the moment of retracting the foot from a kick). These points usually appear in the motion curve as inflection points; informally, an inflection point here is where the function value changes its trend, e.g. local maximum or minimum points. Fig. 11 shows three cases of two slightly out-of-sync motion curves with their respective inflection points marked. The first case shows a simple situation where there are clearly corresponding inflection points between the two curves, with a short delay by the trainee (see Fig. 11(a)). In such a case of one-to-one correspondence, the respective points are simply moved to the left or right. If no corresponding inflection point can be found in one or the other curve, no action is taken (see Fig. 11(b)); the search is made within a small time window around the inflection points of the trainer's motion data. The third case shows many potentially matchable inflection points in the trainee's motion data (see Fig. 11(c)); in this case, we select the inflection point closest to the trainer's with the same concavity and move it forward or backward. The result of the horizontal shift applied to Fig. 11(a) is shown in Fig. 12.

Figure 11: Possible configurations of inflection points.

Figure 12: Fitting inflection points for Fig. 11(a).

6.1.2. Vertical Shift

The vertical shift is likewise applied at the inflection points, as their coincidence will generally produce a higher score. This is because the key postures, at which higher weights are given in the evaluation, often occur at these inflection points. From the difference curve between the reference and the trainee's motion, we extract the set of points at which the local maximum differences are greater than a preset threshold value. The trainee's motion data is then modified vertically by fitting these points to the reference curve, that is, moving the points upward by the respective difference values. The neighboring points are moved as well, by interpolating the vertical movements with a B-spline curve. Fig. 13 shows the motion and difference curves before and after the vertical shifts.
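The landmark detection and matching used by the horizontal shift (Section 6.1.1) can be sketched as follows, treating local extrema as the turning points; the function names and the exact matching rule are our assumptions:

```python
def turning_points(curve):
    """Indices of local extrema (the 'turning point' landmarks),
    tagged +1 for a local maximum and -1 for a local minimum."""
    pts = []
    for i in range(1, len(curve) - 1):
        if curve[i - 1] < curve[i] > curve[i + 1]:
            pts.append((i, +1))
        elif curve[i - 1] > curve[i] < curve[i + 1]:
            pts.append((i, -1))
    return pts

def match_landmarks(ref_pts, trainee_pts, window):
    """For each reference landmark, pick the closest trainee landmark
    of the same concavity within the search window; the offset (j - i)
    tells how many frames the trainee lags or leads."""
    matches = []
    for i, kind in ref_pts:
        candidates = [j for j, k in trainee_pts
                      if k == kind and abs(j - i) <= window]
        if candidates:
            j = min(candidates, key=lambda j: abs(j - i))
            matches.append((i, j, j - i))
    return matches
```

Matched pairs are then aligned by shifting the trainee's landmark frames left or right; unmatched landmarks are left untouched, as in the second case of Fig. 11.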

Figure 13: Vertical curve fitting: (a) motion and difference curves before vertical movement; (b) motion and difference curves after vertical movement.

6.1.3. Advice

With the two-step curve fitting procedure, two types of advice are possible: one reflecting the time delay or lookahead, and a difference-based advice computed on the time-fitted data. From the horizontal shift data, we can generate advice on the timing of the motion following, for instance, "You seem to start the action too late (or too early) in the kicking motion". The amount of the vertical shift can be used as the basis for segment-by-segment posture advice; for example, by analyzing the joint angle curve of the upper arm, we can say "You seem to be clapping rather timidly".

6.2. Applying Wavelet Transform to Motion Analysis

There have been prior approaches that interpret motion as a signal and edit it using operators such as Fourier transforms. However, it is difficult to apply Fourier transforms to our application, in which we would like to know motion characteristics at different points in time. Wavelets, on the other hand, provide a hierarchical representation of the motion data using averaging coefficients and detail coefficients. The most interesting dissimilarity between the two transforms is that an individual wavelet function is temporally localized, while the Fourier basis is not. Among the various types of wavelets, the Haar wavelet is the simplest: its averaging coefficients are the averages of the data values and its detail coefficients are the deviations from the averages [3, 16].

When we apply the Haar wavelet analysis to a motion curve, the averaging coefficients roughly reflect the trend of the motion over a given time interval. Similarly, the detail coefficients are related to the vibration characteristics of the motion. For instance, Fig. 14 gives three different sets of coefficients for three motion curves, where the one drawn with the solid line is the reference curve. As we can see in the table in Fig. 14, if the trainee's motion has a global trend similar to the reference motion, the averaging coefficients have similar values. In addition, the detail coefficients reflect the amount of vibration in the motion curve; in Fig. 14, motion curve 2 vibrates much more than curve 3 and has much larger detail coefficients.

Figure 14: Motion curves and the result of wavelet analysis.

This analysis with the averaging and detail coefficients can be used to provide qualitative advice such as "You seem to vibrate too much when preparing to kick". The analysis can be applied at each level of the wavelet coefficient hierarchy: as the coefficient level becomes coarser, the analysis of the trainee's motion becomes more global; conversely, at finer levels, the wavelet coefficients reflect the characteristics of short motion segments.

7. Application Example

We have applied the proposed motion evaluation method to a simple fencing training system. Fig. 15 shows a few snapshots of the system, with the converted reference motion (solid) overlaid on top of the trainee's motion (gray) captured by the tracking sensors. It would be quite difficult to directly show that the proposed method of motion evaluation helps the trainee learn better compared to other systems using different techniques. In a different paper, we showed empirically the utility of using the JFM paradigm in motion training compared to learning in the real environment (i.e. "trainer-in-residence" or through books and videos) [18]. We hypothesize that efficient and correct transmission of motion information from the trainer to the trainee is one of the important factors in improving the learnability of such a system. We believe our way of posture-based retargeting and analysis will carry more correct proprioceptive senses and feedback to the trainee.

Figure 15: Snapshots of the fencing training system

8. Conclusion

In this paper, we proposed a method to convert reference motion data for a new trainee (with a different body size than the original motion capturer), appropriate for a motion training application. In our motion training application, for the reference motions we consider, the overall posture is deemed more important than preserving the absolute positional profile of the original motion, and this makes our retargeting formulation different from, and simpler than, previous approaches to motion retargeting and reuse. We also presented various methods for analyzing the trainee's motion both on-line and off-line. These approaches should prove very useful in similar VR-based motion training applications.

Acknowledgements

Our thanks go to the Statistics and Human Factors Engineering Team of the POSTECH Industrial Engineering Department for helping us with the experimental design and analysis. This project has been supported in part by the Korea Ministry of Education BK 21 program and the Korea Science Foundation supported Virtual Reality Research Center.

References

1. R. Bindiganavale and N. I. Badler. Motion abstraction and mapping with spatial constraints. In Modelling and Motion Capture Techniques for Virtual Environments (International Workshop CAPTECH '98), pages 70–82, Nov. 1998.
2. K. J. Choi and H. S. Ko. On-line motion retargeting. In Proceedings of Pacific Graphics '99, pages 32–42, October 1999.
3. E. Aboufadel and S. Schlicker. Discovering Wavelets. John Wiley & Sons, 1999.
4. W. Eerden, E. Otten, G. May, and O. Even-Zohar. CAREN: Computer Assisted Rehabilitation Environment. Technical report, University of Amsterdam, Academic Medical Center, Dept. of Rehabilitation, on-line document: http://www.motek.org/products/medical/caren/wpaper.html, 1999.
5. S. Everette et al. Creating natural language interfaces to VR systems: experiences, observations, and lessons learned. In Proceedings of VSMM '98, 1998.
6. M. Gleicher. Retargeting motion to new characters. ACM Computer Graphics (Proc. of SIGGRAPH '98), 32:33–42, July 1998.
7. L. Hodges, B. Rothbaum, R. Kooper, D. Opdyke, T. Meyer, M. North, J. de Graff, and J. Williford. Virtual environments for treating the fear of heights. IEEE Computer, 28(7):27–34, 1995.
8. J. Hodgins and N. Pollard. Adapting simulated behaviors for new characters. ACM Computer Graphics (Proc. of SIGGRAPH '97), pages 153–162, August 1997.
9. W. L. Johnson and J. Rickel. Integrating pedagogical agents into virtual environments. Presence, 7(6):523–546, December 1998.
10. H. T. Kang. Contents of physical education. On-line document: http://tonggugch.ed.seoul.kr/study/heal/khk/43-1.htm.
11. Kinetix Division of Autodesk Inc. Character Studio, 1997.
12. J. H. Lee and S. Y. Shin. A hierarchical approach to interactive motion editing for human-like figures. ACM Computer Graphics (Proc. of SIGGRAPH '99), pages 39–48, August 1999.
13. M. R. Macedonia. NPSNET: A network software architecture for large scale virtual environments. Presence, 3(4):265–287, 1994.
14. R. Schmidt. Motor Learning and Performance. Human Kinetics Books, 1991.
15. D. Shawver. Virtual actors and avatars in a flexible user-determined scenario environment. In Proceedings of IEEE VRAIS, pages 170–177, 1997.
16. S. S. Soliman and M. D. Srinath. Continuous and Discrete Signals and Systems. Prentice Hall, 1990.
17. VR TechnoCenter. VR earthquake simulator, 1998.
18. U. Y. Yang and G. J. Kim. Just Follow Me: an immersive VR-based motion training system. In Proceedings of the International Conference on Virtual Systems and MultiMedia, September 1999.
19. C. Youngblut. Educational uses of virtual reality technology. Technical Report IDA Document D-2128, Institute for Defense Analyses, Alexandria, VA, 1998.

submitted to EUROGRAPHICS 2001.