Surg Radiol Anat DOI 10.1007/s00276-006-0128-6

MEDICAL IMAGING

Multimodal visualization interface for data management, self-learning and data presentation

S. Van Sint Jan · X. Demondion · G. Clapworthy · S. Louryan · M. Rooze · A. Cotten · M. Viceconti

Received: 14 November 2005 / Accepted: 25 April 2006
© Springer-Verlag 2006

Abstract Multimodal visualization software, called the Data Manager (DM), has been developed to increase interdisciplinary communication around the visualization and modeling of various aspects of human anatomy. Numerous tools used in radiology are integrated into the interface, which runs on standard personal computers. The available tools, combined with hierarchical data management and custom layouts, allow medical imaging data to be analyzed with advanced features outside radiological premises (for example, for patient review, conference presentations or tutorial preparation). The system is free and

S. Van Sint Jan (&) · S. Louryan · M. Rooze
Department of Anatomy (CP 619), Faculty of Medicine, Université Libre de Bruxelles, Lennik Street 808, 1070 Brussels, Belgium
e-mail: [email protected]

X. Demondion · A. Cotten
Department of Musculoskeletal Radiology, Hospital R. Salengro, CHRU Lille, France

X. Demondion
Department of Anatomy, Faculty of Medicine, Université de Lille, Lille, France

G. Clapworthy
Department of Computing and Information Systems, University of Luton, Luton, UK

S. Louryan
Department of Radiology, Erasme Hospital, Université Libre de Bruxelles, Brussels, Belgium

M. Viceconti
Laboratorio di Tecnologia Medica, Istituti Ortopedici Rizzoli, Bologna, Italy

based on an open-source software development architecture, so updates of the system for custom applications are possible.

Keywords Visualization software · Multimodal interface · Registration · 3D · Medical imaging · Open source

Introduction

Visualization of medical imaging datasets in radiology usually takes place inside a clinical department, where dedicated computers and software are available for that purpose [3, 7]. Only a few software tools allow such visualization outside clinical premises. Educational tools in radiology are limited for the same reasons, even though computer-aided instruction should offer new opportunities for more effective learning [4]. Several educational software systems have previously been described for radiology, including a few visualization tools [8, 9, 11, 12]. Unfortunately, the functionality included in these tools is often limited to standard image display. Most of them are also costly, and therefore not easily portable or affordable by individuals. Other radiology visualization tools suffer from the same limitations, even though computer-aided instruction could support more effective learning and rehearsal [5].
This paper describes a multimodal interface, called the Data Manager (DM), that includes various medical imaging visualization and management tools. This interface is distributed as freeware and runs on standard personal computers running Windows 2000



or Windows XP (a Linux version is currently under development). Visualization of heterogeneous data is usually a computationally intensive task; although the DM can run on virtually any computer configuration, user comfort is therefore higher on faster hardware (minimum: Pentium III processor, 768 MB RAM, 3D video card with 256 MB).
The DM has been implemented on top of an open-source programming architecture called the Multimod Application Framework (MAF), developed by the same authors. MAF has been developed, partly on the basis of the VTK library (website: http://www.public.kitware.com/VTK), to answer the visualization and data-exchange needs of the biomedical community [14] and to address current limitations of other available open-source libraries (for example, visualization of multiple datasets and handling of time-varying data are generally not supported). The aim behind this continuing endeavor is to promote multidisciplinary efforts through the development of common tools for the various fields involved in biomedical research (radiology, orthopedics, biomechanics, etc.). These fields produce heterogeneous data that must sometimes be combined [1, 2, 13], yet only a few tools are available to perform such data registration. The DM development aims to support advanced data visualization, fusion and registration for intuitive data analysis, as requested by clinicians [6].
The DM also includes a data management structure. For example, a senior radiologist can organize standard image data [computerized tomography (CT), ultrasound, etc.] around particular topics (e.g., the shoulder joint) and store them in the DM structure. The available material is then distributed to junior radiologists for rehearsal. Beyond rehearsal, the DM tools can be used in circumstances where data analysis is required without access to costly visualization equipment (private practice, conference presentations).

Materials and methods

Software development

This paper describes the DM's key features that are of interest to the radiology field. Details on additional features are available elsewhere [14]. Both the DM and MAF are still under development, and user feedback is welcome to improve system performance (see http://www.tecno.ior.it/multimod/ to download the software and to find more information on MAF).


Data collection procedures

Medical imaging data were obtained from standard CT and stored in DICOM format. Patient data from a shoulder arthro-CT dataset and an abdominal arterio-CT dataset are included in this paper.
Other data were collected to allow registration of CT data with joint kinematics. Both shoulders of one fresh-frozen specimen (male, 67 years old) were prepared by sawing through the 6th cervical and the 10th thoracic vertebrae. After sawing, the specimen included the thorax and both entire upper limbs. Rigid clusters equipped with reflective markers were drilled into the sternum, scapulae, clavicles and humeral bones. All clusters included four markers, except for the clavicle clusters (three markers). The whole set-up, i.e., specimen and clusters, was CT imaged. Three-dimensional (3D) reconstructions of all bones involved in the shoulder complex (thorax, humeral bone, scapula, clavicle) were obtained from the CT data using commercial software (Amira®, Germany). The spatial location of each marker was determined by computing its centroid directly from the CT dataset. The kinematics of the shoulder complex were then collected using stereophotogrammetry (Vicon 612 system, sampling frequency 120 Hz). Data related to planar motions (flexion/extension, abduction/adduction, internal/external rotation) were collected by passive mobilization of the upper limb. The trajectories of the reflective markers were stored in C3D format.
Within the DM, the 3D bone models (STL format) were registered to the joint kinematics data (C3D format) using the spatial locations of the markers [13]. After registration, all data were defined in the same reference system, and motion simulation could then be performed. Once registration and simulation were obtained, anatomical landmarks (ALs) were virtually palpated at the surface of the 3D models using the DM interface. These landmarks were located at the origins and insertions of some of the peri-scapular muscles; one pair of landmarks comprised one origin and one insertion. The length between the origin and insertion ALs of each pair was automatically computed in the DM interface during the motion simulation, and theoretical muscle elongation and shortening were derived from the length variations of the AL pairs during the above planar motions.
Digital photographs of a dissected shoulder specimen were obtained and stored in TIF format. One film radiograph was digitized using a flat-bed digitizer and stored in TIF format. The above


data were imported, processed and visualized using the tools available in the DM.
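As an illustration of the registration and muscle-length steps just described, the following sketch shows the general principle in Python/NumPy: marker centroids extracted from the CT dataset are fitted to the motion-capture markers of each frame with a least-squares rigid transform, and the distance between an origin/insertion landmark pair is recomputed for every frame. This is not the DM/MAF code; the coordinates, the `rigid_register` helper and the marker trajectories below are hypothetical stand-ins.

```python
import numpy as np

def rigid_register(source, target):
    """Least-squares rigid fit (Kabsch/SVD): rotation R and translation t
    mapping CT-space marker centroids onto one motion-capture frame."""
    src_c, tgt_c = source.mean(axis=0), target.mean(axis=0)
    H = (source - src_c).T @ (target - tgt_c)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                                # proper rotation only
    t = tgt_c - R @ src_c
    return R, t

def transform(R, t, p):
    return R @ p + t

# Hypothetical CT-space marker centroids for two bone clusters (mm).
scapula_ct = np.array([[12., 4., 7.], [20., 9., 3.], [15., 18., 6.], [9., 11., 14.]])
humerus_ct = np.array([[40., 5., 10.], [48., 9., 6.], [44., 16., 12.], [38., 12., 18.]])

# Virtually palpated landmarks: muscle origin on the scapula, insertion on the humerus.
origin_ct, insertion_ct = np.array([14., 6., 9.]), np.array([45., 8., 12.])

# Stand-in motion-capture trajectories (frames x markers x 3); in practice read
# from the C3D file. Here the humerus cluster simply drifts away from the scapula.
n_frames = 100
scapula_frames = np.stack([scapula_ct for _ in range(n_frames)])
humerus_frames = np.stack([humerus_ct + f * np.array([0.2, 0.0, 0.1]) for f in range(n_frames)])

muscle_length = []
for scap_mk, hum_mk in zip(scapula_frames, humerus_frames):
    Rs, ts = rigid_register(scapula_ct, scap_mk)      # one transform per bone per frame
    Rh, th = rigid_register(humerus_ct, hum_mk)
    o = transform(Rs, ts, origin_ct)                  # landmarks follow their own bone
    i = transform(Rh, th, insertion_ct)
    muscle_length.append(np.linalg.norm(o - i))       # theoretical muscle length per frame
```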

Results

Data importation

The heterogeneous data produced by the above-mentioned data collection procedures can be imported into the DM. Supported formats include most standards: DICOM (medical imaging); STL and VRML (3D modeling); C3D and PGD (kinematics and kinetics analysis data); BMP, JPG, PNG and TIF (images); VTK (exchange format); and MSF (proprietary format used to store the management structure). DICOM data are made fully anonymous, which guarantees patient privacy if the data are distributed.
Once a particular file has been imported into the DM, the related data become a member of the hierarchically organized VME (Virtual Medical Entity) tree (Fig. 1). VME tree members, i.e. the

imported data, can be copied, pasted and cut in the hierarchy and sorted alphabetically. Each object in the hierarchy tree corresponds to one data file stored on the hard disk. A single management file (file extension: *.msf) keeps track of each VME element's position and status (color, spatial position, scaling, etc.) in the VME tree. Non-supported formats can also be included in the VME tree (for example, the PowerPoint presentation found in the VME tree illustrated in Fig. 1), but they cannot be visualized directly with the DM tools (see next section). This is useful for presenting a coherent tutorial in which all information is accessible from a single source. In summary, the VME tree allows hierarchical management of any data stored on the hard disk.
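A minimal sketch of the kind of hierarchy the VME tree maintains is given below, assuming a simple node structure with per-element display status. The actual MSF file format and the MAF implementation are not reproduced here, and all names and paths are hypothetical; the point is only that every node, supported format or not, carries enough status information for a single management file to restore the whole tree.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class VMENode:
    """Illustrative stand-in for one Virtual Medical Entity:
    a named node pointing to a data file, plus its display status."""
    name: str
    file_path: Optional[str] = None          # e.g. "data/arthro_ct.vtk"
    color: str = "#ffffff"
    position: tuple = (0.0, 0.0, 0.0)
    scale: float = 1.0
    children: List["VMENode"] = field(default_factory=list)

    def add(self, child: "VMENode") -> "VMENode":
        self.children.append(child)
        return child

    def to_dict(self) -> dict:
        """Flatten the tree into a plain dictionary: the kind of information
        a management file would need to persist to restore the tree."""
        return {"name": self.name, "file": self.file_path, "color": self.color,
                "position": self.position, "scale": self.scale,
                "children": [c.to_dict() for c in self.children]}

# Hypothetical shoulder-tutorial tree.
root = VMENode("shoulder joint")
imaging = root.add(VMENode("F - medical imaging"))
imaging.add(VMENode("arthro-CT", "data/arthro_ct.vtk"))
dissection = root.add(VMENode("C - dissections"))
dissection.add(VMENode("photograph", "data/dissection.tif"))
root.add(VMENode("protocol", "data/protocol.ppt"))   # non-supported format, still managed
```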

Fig. 1 Left: the VME tree. Imported data can be ordered by the user along a customized hierarchy. Elements in the tree can be organized in groups (e.g., "C—dissections" or "F—medical imaging"). The elements found in the lowest branches of the data tree are the imported data (e.g., "protocol", "MRI", "femur"). Right: typical layout (see text for explanation)

Data visualization

Once imported, data can be visualized. Data in non-supported formats present in the VME tree can be visualized, through a call performed from the DM interface, using the default viewer installed on the user's computer for that particular format (for example, PDF files with Acrobat Reader®, PPT files with PowerPoint®). Supported formats can usually be displayed in several of the following viewers.

Standard viewers

Both the single slice (Fig. 2a) and orthoslice (Fig. 2b) viewers deal with medical imaging datasets (imported in DICOM format). The single slice viewer displays, in gray scale or in pseudo-colors, a single planar slice along one of the original dataset axes (horizontal, frontal or sagittal). The orthoslice viewer shows four sub-viewers: multiplanar displays along each dataset axis and a 3D viewer. The latter allows 3D models to be displayed, with or without transparency; using transparency makes it possible to study the mutual spatial relationships of the structures visible in the dataset. Contours of the 3D models are simultaneously displayed on the planar slices. Such feedback helps the user to locate the structures present in the dataset, which is of interest when estimating the extent of a structure within the subject's anatomy or when learning how to recognize anatomical features within medical imaging datasets. Note that the DM does not currently include an advanced segmentation tool to create separate 3D anatomical objects; an isosurface viewer is, however, available (see the special viewers section below).
An arbitrary slice viewer allows interpolation of slices of any orientation at any position in the


Fig. 2 Various viewers displayed simultaneously in the Data Manager (DM) environment (screen snapshot). a Single slice viewer (using pseudo-colors); b orthoslice viewer; c RX-CT viewer; d image viewers; e surface viewer. a–c Views obtained from a patient arthro-CT dataset showing a communication between the shoulder joint cavity and a geode located in the greater tubercle (arrows). d A normal X-ray image and an image obtained from dissection. e A motion simulation of the shoulder complex obtained from in vitro measurements. These views are displayed simultaneously on the screen

medical imaging dataset. This is particularly useful for obtaining planar views oriented according to the structures of interest.
The RX-CT viewer (Fig. 2c) interpolates X-ray-like images from a CT dataset and displays them together with the original slices. This is useful, for example, for rehearsing X-ray reading and comparing the result with the original dataset. This viewer, combined with the display of 3D surfaces, helps the user to visualize the boundaries of anatomical structures on X-ray images.
The image viewer (Fig. 2d) allows visualization of general-purpose images (TIF, JPEG, etc.).
The surface viewer (Fig. 2e) allows visualization of any data (geometry, motion, medical imaging) within a fully interactive 3D environment. The reference frame of each dataset available in the VME tree can be modified either by associating it with another VME element or by performing registration.
Such simultaneous use of different viewers allows the user to observe a particular dataset (e.g., medical imaging, motion analysis) from various viewpoints. For example, particular spatial relationships of some anatomical structures can be observed during motion simulation (Fig. 2e) and directly compared with the same structures viewed using another viewing protocol (Fig. 2a, b) or with reference images (Fig. 2d).
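For readers unfamiliar with how such viewers obtain their images, the sketch below illustrates the underlying idea with NumPy/SciPy: axis-aligned slices are simple index selections in the volume, whereas an arbitrary (oblique) slice is interpolated on a plane of arbitrary orientation. This only illustrates the principle, not the DM's VTK-based implementation; the stand-in volume, the `oblique_slice` helper and all parameter values are hypothetical.

```python
import numpy as np
from scipy import ndimage

# Stand-in CT volume (z, y, x); in practice built from the imported DICOM slices.
volume = np.random.rand(120, 256, 256).astype(np.float32)

# Axis-aligned slices, as in the single slice viewer.
axial    = volume[60, :, :]     # horizontal plane
coronal  = volume[:, 128, :]    # frontal plane
sagittal = volume[:, :, 128]    # sagittal plane

def oblique_slice(vol, centre, normal, size=256, spacing=1.0):
    """Interpolate a planar slice of arbitrary orientation, roughly what an
    arbitrary slice viewer does. `normal` is the plane normal in (z, y, x)."""
    normal = np.asarray(normal, dtype=float)
    normal /= np.linalg.norm(normal)
    # Build two in-plane axes orthogonal to the normal.
    helper = np.array([1.0, 0.0, 0.0]) if abs(normal[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(normal, helper); u /= np.linalg.norm(u)
    v = np.cross(normal, u)
    grid = (np.arange(size) - size / 2) * spacing
    uu, vv = np.meshgrid(grid, grid)
    coords = (np.asarray(centre, dtype=float)[:, None, None]
              + u[:, None, None] * uu + v[:, None, None] * vv)
    return ndimage.map_coordinates(vol, coords, order=1)   # trilinear interpolation

oblique = oblique_slice(volume, centre=(60, 128, 128), normal=(1.0, 0.3, 0.0))
```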


Special viewers

The DM includes several other viewers that allow medical imaging data to be displayed using more advanced technologies. The isosurface viewer generates surface models based on voxel intensity (Fig. 3). Thanks to a novel algorithm, the DM allows interactive isosurfacing even on less powerful computers. Isosurface results can be stored in the VME tree for further use (including export in standard formats).
The volume rendering viewer is based on algorithms that project rays through the dataset according to several properties (color, opacity, gradient) related to the Hounsfield unit (HU) values found in the dataset. Anatomical structures can thus be distinguished simultaneously according to their intensity properties (this viewer requires a more powerful computer than the previous ones to maintain reasonable user comfort).
The DRR (digitally reconstructed radiograph) viewer produces a "synthetic" X-ray from the data (Fig. 4). It creates the image sufficiently rapidly that the orientation of both the data and the virtual X-ray device can be manipulated interactively, so the data can be observed from any point of view. This makes it possible to obtain X-ray views that are usually not obtainable with standard systems (for example, Fig. 4c). An interactive intensity threshold allows alternate visualization of organs characterized by different Hounsfield unit values.
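The two ideas behind these special viewers can be sketched briefly: an isosurface is a polygonal surface extracted at a chosen intensity level (marching cubes is the classical algorithm, although the paper states that the DM uses a novel one), and a DRR accumulates attenuation along rays through the volume. The sketch below uses scikit-image and a simple axis-aligned ray sum; the stand-in volume, threshold values and axis choices are hypothetical, and a real DRR viewer casts rays along arbitrary view directions.

```python
import numpy as np
from skimage import measure

# Stand-in CT volume in Hounsfield units (z, y, x); in practice read from DICOM.
hu = np.random.randint(-1000, 1500, size=(120, 256, 256)).astype(np.float32)

# Isosurface extraction at ~300 HU (roughly cortical bone): marching cubes
# returns a triangulated surface that could be stored back into the VME tree.
verts, faces, normals, values = measure.marching_cubes(hu, level=300.0)

# DRR-like synthetic radiograph: clip away air, then sum the remaining
# attenuation along one axis; changing the threshold highlights different organs.
attenuation = np.clip(hu - (-200.0), 0.0, None)   # ignore values below -200 HU
drr_anteroposterior = attenuation.sum(axis=1)     # projection along the y axis
drr_lateral = attenuation.sum(axis=2)             # projection along the x axis
```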

Fig. 3 Isosurface viewer (patient data from the abdominal arterio-CT dataset). a Selection of the skin. b Selection of bone and arteries. c Display of the results from a and b within a surface viewer. d Display of 3D models segmented from the same dataset using external segmentation software and imported into the DM. e Display of 3D model contours within a single slice viewer

Fig. 4 Digitally reconstructed radiograph (DRR) viewer (same patient dataset as in Fig. 2), reconstructed X-ray. a Anterior view; b posterior view; c inferior view; d inferior view (display setting different from c). The spatial diffusion of the contrast agent is visible: joint cavity (1), axillary recess (2), subcoracoid recess (3), intertubercular tendon sheath (4), and the geode within the greater tubercle (arrow)



Layouts

The DM layout facility (see Fig. 1, right) is useful for data distribution. For example, the senior radiologist mentioned above, having gathered imaging material around a particular topic into a VME tree, selects the most suitable viewers and picks the data to display. If necessary, the selected viewer windows are then arranged on the computer screen. Finally, this organization (selected viewers, viewer positions on screen and displayed data) is stored in a layout whose label can be customized. Once stored on disk, the VME data structure (and any associated layouts) can be distributed to other users (junior radiologists or colleagues). These users open the data file and go directly to the layout menu, where they can choose among the available layouts; selecting a layout displays all data in the pre-selected viewers. The layout concept can also be used during conference presentations, or simply to retrieve patient data from various medical imaging modalities quickly (for example, during a multidisciplinary staff meeting to discuss particular patient cases).

Data operations

Several operations are available to modify the imported data or to extract new information. Volume sub-sampling and cropping of imaging datasets can be performed at two stages: either during image import (only the cropped volume is then stored in memory) or after the entire original volume has been stored in the VME tree. Isosurface models can be extracted and stored in the VME tree. Particular ALs can be selected interactively, and distances or angles (in 2D or 3D) between these landmarks can then be determined.
The following DM features are probably not all directly useful in radiology. Nevertheless, they are worth mentioning because they are used in disciplines (motion analysis, finite element analysis, etc.) that increasingly rely on medical imaging for their own research; interdisciplinary communication and mutual understanding are therefore necessary. Surface models can be decimated (i.e., simplified), smoothed, mirrored, spatially transformed (i.e., moved in space using rotation, translation, scaling or shearing), registered (i.e., aligned with another reference frame) or animated (using, for example, motion data from a clinical motion analysis system). Two kinds of registration algorithm are available: (1) registration of clusters (or clouds) of landmarks available from various datasets, which can be performed in three modes: rigid registration ("rigid"), homogeneous scaling ("similarity") and heterogeneous scaling ("affine"); (2) surface registration using ICP (iterative closest point) algorithms.
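To make the three landmark-registration modes concrete, the sketch below fits the same hypothetical landmark clouds with a uniform-scale ("similarity") transform and with a general ("affine") transform; the rigid case is the Kabsch fit already sketched in the Materials and methods section, with the scale fixed to 1. This illustrates the underlying least-squares problems only, not the DM's implementation, and all point coordinates are made up.

```python
import numpy as np

def similarity_register(source, target):
    """Landmark registration with homogeneous scaling ('similarity'):
    rotation R, uniform scale s and translation t, least squares (Umeyama)."""
    src_c, tgt_c = source.mean(axis=0), target.mean(axis=0)
    src0, tgt0 = source - src_c, target - tgt_c
    U, S, Vt = np.linalg.svd(src0.T @ tgt0)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    s = np.trace(np.diag(S) @ D) / (src0 ** 2).sum()   # optimal uniform scale
    t = tgt_c - s * R @ src_c
    return s, R, t

def affine_register(source, target):
    """Heterogeneous scaling ('affine'): full 3x4 linear map, least squares."""
    src_h = np.hstack([source, np.ones((len(source), 1))])   # homogeneous coords
    A, *_ = np.linalg.lstsq(src_h, target, rcond=None)
    return A.T                                               # 3x4 affine matrix

# Hypothetical landmark clouds digitized on two datasets: the target is a
# rotated, uniformly scaled and translated copy of the source.
src = np.array([[0., 0., 0.], [10., 0., 0.], [0., 12., 0.], [0., 0., 8.], [5., 5., 5.]])
rot_z = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])   # 90 deg about z
tgt = 1.1 * src @ rot_z.T + np.array([3., 2., 1.])

s, R, t = similarity_register(src, tgt)   # recovers s ~ 1.1, R ~ rot_z, t ~ (3, 2, 1)
A = affine_register(src, tgt)             # general affine fit of the same clouds
```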


Future versions of the DM will include material intensity mapping (pseudo-coloring of 3D surfaces according to the voxel intensities found in the original imaging dataset), an arbitrary volume sampler (to resample an image volume in any orientation from the original dataset) and interactive spatial transformation (of 3D objects and 3D image volumes).

Discussion

This paper presented the first version of a newly developed software interface, the DM, which includes both data visualization and data management features of interest for radiology applications. Features from other fields (bioengineering, computer graphics, biomechanics, pedagogy) are also included in the same environment to promote multidisciplinary collaboration, communication and data exchange.
It has been mentioned above that advanced visualization of medical imaging datasets usually requires dedicated computers and software available only within a radiology department [3, 7]. Some software tools are available for standard PCs, but they offer limited visualization functionality [8, 9, 11, 12] and do not allow advanced visualization outside clinical premises: for example, at home to prepare a presentation, or abroad during a conference talk. A more advanced visualization interface, called OsiriX, offers multimodal image navigation [10]; it is especially attractive because it allows data combination, i.e. registration, of several medical imaging modalities. A promising perspective is to join the MAF and OsiriX open-source libraries to produce an even more advanced multidisciplinary tool.
The authors acknowledge the difficulty of developing a software interface that would answer all needs and educational requirements in radiology [5]. Feedback from field users is certainly welcome to improve the current interface and to better address the constraints of filmless radiology teaching [15]. Such collaborations are also necessary to achieve a better understanding of particular anatomical systems (musculoskeletal or MS, gastrointestinal, thoracic, etc.) in both clinical and fundamental research applications. For example, the static aspects of the MS system (i.e., its gross anatomy) are relatively well described, but the MS system shows a highly complex functional organization that is far from being understood in detail. Clinically meaningful progress


in understanding the MS system can only be accomplished through multidisciplinary data collection, data exchange and data combination. Such needs also exist for the other anatomical systems (e.g., thorax, abdomen). The consortium behind both MAF and the DM strives towards that goal and is calling for more partnerships in the development of advanced data exchange and visualization tools.

Acknowledgments This study is part of the MULTIMOD and LHDL projects, which were funded by DG IST of the European Commission.

References

1. Belsole RJ, Hilbelink DR, Llewellyn JA et al (1988) Mathematical analysis of computed carpal models. J Orthop Res 6:116–122
2. Delp SL, Loan JP, Hoy MG et al (1990) An interactive graphics-based model of the lower extremity to study orthopaedic surgical procedures. IEEE Trans Biomed Eng 37:757–767
3. Fishman EK, Magid D, Ney DR et al (1988) Three-dimensional imaging and display of musculoskeletal anatomy. J Comput Assist Tomogr 12:465–467
4. Jaffe CC, Lynch PJ (1995) Computer-aided instruction in radiology: opportunities for more effective learning. Am J Roentgenol 164:463–467
5. Jaffe CC, Lynch PJ (1996) Educational challenges. Radiol Clin North Am 34:629–646

6. John NW, McCloy RF (2004) Navigating and visualizing three-dimensional data sets. Br J Radiol 77:S108–S113
7. Passadore DJ, Isoardi RA, Ariza PP et al (2001) Use of a low-cost, PC-based image review workstation at a radiology department. J Digit Imaging 14:222–223
8. Rosset A, Ratib O, Geissbuhler A et al (2002) Integration of a multimedia teaching and reference database in a PACS environment. Radiographics 22:1567–1577
9. Rosset A, Muller H, Martins M et al (2004) Casimage project: a digital teaching files authoring environment. J Thorac Imaging 19:103–108
10. Rosset A, Spadola L, Pysher L, Ratib O (2006) Navigating the fifth dimension: innovative interface for multidimensional multimodality image navigation. Radiographics 26:299–308
11. Sennst DA, Kachelriess M, Leidecker C et al (2004) An extensible software-based platform for reconstruction and evaluation of CT images. Radiographics 24:601–613
12. Tavares MA, Dinis-Machado J, Silva MC (2000) Computer-based sessions in radiological anatomy: one year's experience in clinical anatomy. Surg Radiol Anat 22:29–34
13. Van Sint Jan S, Salvia P, Hilal I et al (2002) Registration of 6-DOFs electrogoniometry and CT medical imaging for 3D joint modeling. J Biomech 35:1475–1484
14. Van Sint Jan S, Viceconti M, Clapworthy G (2004) Modern visualization tools for both research and education in biomechanics. In: Proceedings of the VIIIth international conference on information visualisation, IV. IEEE Computer Society Press, Los Alamitos, pp 9–14
15. Wiggins RH, Davidson HC, Dilda P et al (2001) The evolution of filmless radiology teaching. J Digit Imaging 14:236–237
