

Copyright 2006 Society of Photo-Optical Instrumentation Engineers. This paper was published in Proceedings of SPIE, Vol. 6342, 634215, International Optical Design Conference 2006, and is made available as an electronic reprint with permission of SPIE. One print or electronic copy may be made for personal use only. Systematic or multiple reproduction, distribution to multiple locations via electronic or other means, duplication of any material in this paper for a fee or for commercial purposes, or modification of the content of the paper are prohibited.

Invited Paper

A Near-field Goniospectroradiometer for LED Measurements

Ian Ashdown∗ and Marc Salsbury

TIR Systems Limited, 7700 Riverfront Gate, Burnaby, British Columbia, Canada V5J 5M4

∗[email protected]; phone 604-473-2335; www.tirsys.com

ABSTRACT

Designing micro-optics for light-emitting diodes must take into account the near-field radiance and relative spectral power distributions of the emitting LED die surfaces. We present the design and application of a near-field goniospectroradiometer for this purpose.

Keywords: Near-field photometry, goniophotometer, LED measurements, spectroradiometer

1. INTRODUCTION

One of the often-touted advantages of high-flux light-emitting diodes (LEDs) is that they are “point sources” of light, which in theory greatly simplifies the design of non-imaging optical systems such as architectural light fixtures (or “luminaires”) and LED backlighting systems. The situation in practice is more complex. It is often necessary to employ arrays of LEDs in order to compete with traditional incandescent and fluorescent light sources. The size and density of the array is usually limited by thermal considerations – LEDs generate considerable amounts of heat – and so the aggregate light source can no longer be considered a “point source” for optical design purposes.

One solution to this problem is to couple each LED die to a micro-optical element with refractive and possibly diffractive microstructures [1]. These elements can be mounted close to the LED die or molded directly onto the surface of its optical epoxy encapsulant. They can also be applied directly to the encapsulant surface using ultraviolet replication techniques [2].

In order to design efficient refractive and diffractive optical elements, we need to know the LED die geometry, including its size, shape, and position within its encapsulant. We also need to know the two-dimensional radiance distribution across the emitting die surface and its relative spectral power distribution. It is not sufficient to model the LED die as a uniform Lambertian emitter: depending on the current-spreading layer design and bond wire placement on the die, the spatial distribution of radiance can be highly non-uniform.

A useful technique for measuring these parameters can be adapted from instrumentation originally developed for near-field photometry of architectural luminaires.

2. NEAR-FIELD PHOTOMETRY

Near-field photometry was first developed to measure and model the near-field luminous flux distribution of architectural luminaires [3]. It uses a digital camera to measure the four-dimensional scalar field of light surrounding a volume light source such as a lamp or luminaire. With these measurements, it is possible to accurately model the illuminance distribution over any surface, regardless of its distance, orientation, or curvature with respect to the light source.

The key concept is the geometric ray of light. The IESNA Lighting Handbook [4] defines illuminance E as “the luminous flux per unit area incident at a point on a surface.” That is, illuminance is due to light coming from all directions above a surface and intersecting a point, where the “surface” can be real or imaginary.


We can think of this light as an infinite number of geometric rays coming from every direction above the surface plane, each with its own quantity of luminous flux dΦ. The IESNA Lighting Handbook similarly defines the luminance L of a geometric ray as:

L = d²Φ / (dω dA)    (1)

where dω is the differential solid angle of an elemental cone containing the ray and dA is its differential cross-sectional area (Figure 1). Thus:

E = ∫ω L cos(θ) dω    (2)

where E is the illuminance at the point P on a surface and θ is the angle between the ray and the surface normal n. If we can measure the luminance of each ray, we can calculate the illuminance at the point.

A ray of light travels in a straight line through an optically homogeneous medium such as air. Because the quantity of luminous flux within the ray does not change (neglecting scattering and absorption), neither does its luminance. We can therefore measure the luminance of a ray anywhere along its length.

Fig. 1 Geometry of a ray of light illuminating a surface
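To make Equations (1) and (2) concrete, here is a minimal numerical sketch (Python with NumPy, our own illustrative code, not part of the instrument described): it evaluates the illuminance integral on an equal-angle grid over the hemisphere and checks it against the closed-form result E = πL for a uniform luminance field.

```python
import numpy as np

def illuminance_at_point(luminance_fn, n_theta=90, n_phi=180):
    """Numerically evaluate Eq. 2, E = integral of L(theta, phi) cos(theta) dw,
    over the hemisphere above a point, using the midpoint rule on an
    equal-angle grid. luminance_fn(theta, phi) returns the luminance of
    the geometric ray arriving from direction (theta, phi), with theta
    measured from the surface normal."""
    d_theta = (np.pi / 2) / n_theta
    d_phi = (2 * np.pi) / n_phi
    theta = (np.arange(n_theta) + 0.5) * d_theta
    phi = (np.arange(n_phi) + 0.5) * d_phi
    t, p = np.meshgrid(theta, phi, indexing="ij")
    # Differential solid angle of each grid cell: dw = sin(theta) dtheta dphi
    d_omega = np.sin(t) * d_theta * d_phi
    return np.sum(luminance_fn(t, p) * np.cos(t) * d_omega)

# Sanity check: uniform luminance L over the hemisphere gives E = pi * L.
E = illuminance_at_point(lambda t, p: np.full_like(t, 1000.0))
print(E, np.pi * 1000.0)  # both ~3141.6 lx for L = 1000 cd/m^2
```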

Now, consider a planar or volumetric light source surrounded by an imaginary sphere (Figure 2). Every ray of light emitted by the light source will have to intersect this sphere at some point. We can therefore think of the light source as being surrounded by a four-dimensional scalar photic field [5], wherein each point has two position coordinates θ, φ and two direction coordinates δ, ϕ.

Fig. 2 Four-dimensional scalar photic field defined by geometric rays


We can measure the luminance of a single ray of light with a lens-type luminance meter (Figure 3). More accurately, we can measure the average luminance of a bundle of rays contained within a cone defined by the photosensor area, the lens aperture and focal length, and the focus point. In practical terms, this may still be considered representative of a single ray for a luminance meter with a sufficiently narrow field of view. (The “surface” that the luminance meter is focused on can be real or imaginary.)

Fig. 3 Lens-type luminance meter

For the purpose of a practical near-field goniophotometer, we can replace the lens-type luminance meter with a photometrically- or radiometrically-calibrated digital camera, wherein each image pixel measures the luminance or radiance of a unique geometric ray. With an image resolution of (say) 1024 × 1024 pixels, a digital camera can simultaneously measure the luminance of over one million rays that converge on the camera lens.

If we mount the camera on a moveable arm that rotates in the vertical plane about the light source and rotate the light source in the horizontal plane, the camera will circumscribe an imaginary sphere about the light source (Figure 4). By capturing images at closely spaced intervals in the vertical and horizontal planes, we can thus adequately sample the surrounding photic field. The luminance of an arbitrary geometric ray can then be interpolated from this set of measured rays.

Fig. 4 Near-field goniophotometer

Practical near-field goniophotometers have been constructed. For example, Ashdown used the resultant ray set to predict the illuminance distribution of architectural surfaces near linear fluorescent luminaires, while Rykowski characterized incandescent and high-intensity discharge lamps for non-imaging optical design using ray-tracing techniques [6,7]. Other examples include near-field goniophotometers designed and constructed for high-intensity discharge lamps, automotive headlights, and computer graphics applications [8,9,10].


3. NEAR-FIELD GONIOSPECTRORADIOMETER DESIGN

As with most laboratory instruments, there is little that is truly innovative in the design of a near-field goniophotometer; rather, the devil is in the details. Designing and constructing an instrument specifically for LED measurements raises a number of issues that must be considered for a successful implementation.

3.1 Spectral Power Distribution

Most traditional light sources, including incandescent, fluorescent, and high-intensity discharge lamps, exhibit essentially constant relative spectral power distributions under normal operating conditions, and so goniophotometric measurements (both near-field and far-field) are typically performed with photopically-corrected broadband photosensors [4]. High-flux LEDs differ in that their peak emission wavelengths vary with LED die junction temperature, typically 0.04 nm/°C for InGaN (blue and green) LEDs and 0.6 to 0.9 nm/°C for AlInGaP (red and amber) LEDs [12]. In addition, their approximately Gaussian spectral power distributions (ignoring phosphor-coated white light LEDs) exhibit spectral broadening with increasing junction temperature, according to:

Δλ = 1.25 × 10⁻⁷ λ² T    (3)

for AlInGaP LEDs, where λ is in nm and T is in Kelvin [13]. The spectral power distributions and spectral broadening characteristics of InGaN LEDs are more complicated, but have recently been successfully modeled [14].

These are not academic issues: high-flux LEDs are inherently dimmable, and so they may be expected to operate with junction temperatures ranging from ambient (25 °C) to as much as 185 °C at full power [15]. Changes in peak wavelength and spectral bandwidth affect both the chromaticity and luminous intensity of LEDs.

Phosphor-coated LEDs represent another challenge. In addition to the temperature-dependent spectral characteristics of the blue InGaN LED used to optically pump the phosphors, the phosphors themselves may exhibit temperature dependencies. Worse, the phosphor coating may have an uneven thickness, which can result in LED chromaticity variations with viewing angle [16].

Fig. 5 Combined imaging luminance meter and spectroradiometer
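As an illustration of these temperature dependencies, the short sketch below applies the coefficients quoted above to a hypothetical 625 nm AlInGaP emitter. The 0.7 nm/°C shift coefficient (mid-range of the 0.6 to 0.9 nm/°C figure), the reference wavelength, and the junction temperatures are assumed values for illustration only.

```python
def peak_shift_nm(t_junction_c, t_ref_c=25.0, coeff_nm_per_c=0.7):
    """Peak-wavelength shift with junction temperature:
    ~0.04 nm/C for InGaN, 0.6 to 0.9 nm/C for AlInGaP [12]."""
    return coeff_nm_per_c * (t_junction_c - t_ref_c)

def spectral_halfwidth_nm(peak_nm, t_junction_k):
    """Eq. 3: spectral half-width of an AlInGaP LED,
    with wavelength in nm and junction temperature in Kelvin [13]."""
    return 1.25e-7 * peak_nm**2 * t_junction_k

# Hypothetical 625 nm red AlInGaP LED, dimmed (Tj = 25 C) vs. full power:
for tj_c in (25.0, 125.0):
    peak = 625.0 + peak_shift_nm(tj_c)
    width = spectral_halfwidth_nm(peak, tj_c + 273.15)
    print(f"Tj = {tj_c:5.1f} C: peak = {peak:.1f} nm, halfwidth = {width:.1f} nm")
```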

It is a reasonable assumption that the emission spectrum of a semiconductor die will remain approximately constant across its emitting surface. However, given that the spectrum may be both temperature- and spatially-dependent, it should be monitored and recorded during image acquisition. This can be accomplished by placing a beam-splitter between the camera lens and image sensor, with a portion of the light directed to a fiberoptic-coupled spectroradiometer (Figure 5). A relative spectral power distribution can then be captured with each image and later used to generate a hyperspectral ray set for non-imaging optical design, or at least to apply the necessary corrections for ray luminance calculations.

Another reason for using a spectroradiometer instead of a photopically-corrected broadband photosensor – and this applies to all LED luminous flux measurements – is that broadband sensors with photopic filters often exhibit unacceptably large errors when measuring luminous flux from narrowband blue sources [17]. The spectroradiometer measurements can be used to calibrate the digital camera’s spectral response and thereby obtain accurate luminance or radiance values for the measured geometric rays.

3.2 Imaging Optics

Another issue of concern is the camera lens. Rea provides a good overview of the issues concerning vignetting compensation and focus, but the small size of LED dies means that diffraction and depth of field also become issues [18]. Figure 6 shows a surface-mount technology (SMT) LED that measures 1.0 mm × 0.8 mm, while Figure 7 shows a close-up view of the epoxy-encapsulated LED die with its gold bond wire. The camera lens was stopped down in order to maintain an adequate depth of field in Figure 7, but only at the expense of diffraction-limited resolution.

The tradeoff between resolution and depth of field depends on the size of the LED die and the need for precision in the interpolated ray set when designing non-imaging optics. For most applications, the resolution indicated by Figure 7 is likely adequate, as there is considerable variation in the placement of the bond wires in commercial LEDs. Regardless, any non-imaging optical design should take into account LED manufacturing tolerances.

Fig. 6 Encapsulated SMT LED (1.0 mm × 0.8 mm)

Fig. 7 Encapsulated SMT LED die with bond wire
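The tradeoff can be estimated with the standard close-up imaging approximations, as in the sketch below. This is illustrative only: the magnification, wavelength, and circle-of-confusion values are our assumptions, not the calibration used for Figures 6 and 7.

```python
def airy_disk_um(f_number, wavelength_um=0.55, magnification=5.0):
    """Airy disk diameter at the image plane, ~2.44 * lambda * N_eff,
    where the effective f-number for close-up work scales as N (1 + m)."""
    return 2.44 * wavelength_um * f_number * (1.0 + magnification)

def depth_of_field_um(f_number, coc_um, magnification=5.0):
    """Total object-side depth of field for close-up imaging:
    DOF ~ 2 N c (m + 1) / m^2, with c the circle of confusion."""
    m = magnification
    return 2.0 * f_number * coc_um * (m + 1.0) / m**2

coc = 7.4  # assume the circle of confusion equals one 7.4 um sensor pixel
for N in (2.8, 8.0, 22.0):
    print(f"f/{N:4.1f}: Airy disk = {airy_disk_um(N):6.1f} um (image side), "
          f"DOF = {depth_of_field_um(N, coc):6.1f} um (object side)")
```

Stopping down from f/2.8 to f/22 increases the depth of field nearly tenfold, but the Airy disk grows by the same factor, eventually exceeding the pixel pitch and limiting resolution.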

An interesting solution to this problem that we are currently investigating is the recently introduced plenoptic camera [19]. Coaxially interposing a lenticular microlens array between the primary objective and the image sensor produces an array of low-resolution images that effectively sample the four-dimensional photic field with a single exposure. Image analysis techniques can then be employed to re-sort the rays to where they would have terminated in a virtual camera at a slightly different position. As a result, the depth of field and focus of the camera can be adjusted a posteriori without reducing the camera aperture or increasing the exposure time.

3.3 Image Sensor

Figure 6 illustrates another problem in that the LED die is grossly overexposed. This is due to the 8-bit dynamic range of the file format used for the illustration. In practice, a minimum of 12 bits of dynamic range is usually required for image capture and analysis. This can be achieved with a cooled scientific-grade CCD camera or with a CMOS camera using different exposures and high-dynamic-range imaging techniques [20]. Multiple images can also be captured and averaged to reduce thermal noise, while a dark frame image can be captured and subtracted from the averaged images to compensate for per-pixel bias and “hot” pixels.

Most digital cameras exhibit a nonlinear response to incident radiant flux over their dynamic range. Fortunately, a simple radiometric self-calibration technique can be employed to measure and thereby linearize the camera-lens response [21].
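A minimal sketch of such a capture pipeline follows, assuming a linearized 12-bit sensor. The frame weighting and saturation threshold are illustrative choices, not the instrument’s actual processing.

```python
import numpy as np

def average_frames(frames):
    """Average repeated identical exposures to suppress thermal noise."""
    return np.mean(np.asarray(frames, dtype=np.float64), axis=0)

def hdr_merge(exposures, exposure_times, dark_frame, saturation=4095):
    """Merge dark-subtracted 12-bit exposures into one linear image,
    averaging each pixel over the exposures in which it is neither
    saturated nor below the dark-frame bias."""
    num = np.zeros(dark_frame.shape, dtype=np.float64)
    den = np.zeros(dark_frame.shape, dtype=np.float64)
    for img, t in zip(exposures, exposure_times):
        signal = img.astype(np.float64) - dark_frame
        # Trust mid-range pixels only.
        w = ((img < 0.95 * saturation) & (signal > 0)).astype(np.float64)
        num += w * signal / t  # scale to radiance per unit exposure time
        den += w
    return num / np.maximum(den, 1.0)

# Synthetic demo: three bracketed exposures of a uniform scene.
dark = np.full((4, 4), 5.0)
frames = [np.clip(dark + 100.0 * t, 0, 4095) for t in (1.0, 4.0, 16.0)]
print(hdr_merge(frames, [1.0, 4.0, 16.0], dark))  # ~100.0 everywhere
```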

3.4 Data Compression

There are also data representation issues to consider. The problem is that a data set of raw 12-bit images may require gigabytes of disk space. Ashdown presented a simple lossless algorithm using Haar wavelets to achieve 90 percent data compression that also offered extremely fast decompression for on-demand image retrieval when generating interpolated ray sets [22]. Similar work has been done in other fields, including light field rendering in computer graphics and MRI scanner data sets in medicine [23,24].

Data compression is not an important issue per se with respect to the design of a near-field goniospectroradiometer, but it is necessary in order to implement a practical instrument. Ideally, the compressed image data file is sufficiently small that it is practical for LED manufacturers to provide the data to their customers on DVD-ROM or via the Internet.

3.5 Camera Positioning

As will be understood from the following section on data interpolation, it is convenient but not essential to have the digital camera positions evenly spaced on the imaginary sphere surrounding the light source (see Figure 4). This can be accomplished by tiling the sphere with nearly identical, approximately equilateral triangles. (Tiling a hemisphere in this manner yields the familiar geodesic dome.) The advantage of this approach is that the ray luminance interpolation algorithm becomes independent of the camera position.

Calculating the exact camera positions in spherical coordinates is easily accomplished by recursively subdividing the triangular faces of an octahedron, where each subdivision quadruples the number of triangles (see Figure 8). This technique is commonly used in computer graphics to mesh the surfaces of spherical and hemispherical objects [25]. (It is so common that there is no proper name for the recursive algorithm. Paradoxically, it is only rarely discussed in computer graphics or computational geometry textbooks.)

Fig. 8 Recursive subdivision of octahedron to determine optimal camera positions

Choosing the appropriate subdivision level depends on both the three-dimensional complexity of the light source’s photic field and the desired accuracy of the interpolated ray set. For most commercially available high-flux LEDs, three levels of subdivision (145 vertices and 256 triangles) are usually sufficient. However, if the LED die or package exhibits a narrow beam pattern due to collimating lenses or photonic crystals, four levels of subdivision (545 vertices and 1024 triangles) may be required.
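A minimal implementation of the recursive subdivision follows (our own sketch). It reports full-sphere counts; the hemisphere figures quoted above are half the triangles plus the shared equator vertices (e.g., 256 triangles and 145 vertices at three levels). The unique vertices, converted to spherical coordinates, give the camera positions.

```python
import numpy as np

def subdivide_octahedron(levels):
    """Recursively subdivide the triangular faces of a unit octahedron,
    projecting each new edge midpoint onto the unit sphere. Each level
    quadruples the number of triangles."""
    v = [np.array(p, dtype=float) for p in
         [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]]
    faces = [(0, 2, 4), (2, 1, 4), (1, 3, 4), (3, 0, 4),
             (2, 0, 5), (1, 2, 5), (3, 1, 5), (0, 3, 5)]
    tris = [tuple(v[i] for i in f) for f in faces]
    for _ in range(levels):
        new_tris = []
        for a, b, c in tris:
            # Edge midpoints, pushed out to the unit sphere.
            ab = (a + b) / np.linalg.norm(a + b)
            bc = (b + c) / np.linalg.norm(b + c)
            ca = (c + a) / np.linalg.norm(c + a)
            new_tris += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
        tris = new_tris
    return tris

tris = subdivide_octahedron(3)
verts = {tuple(np.round(p, 9)) for t in tris for p in t}
print(len(tris), len(verts))  # 512 triangles, 258 vertices (full sphere)
```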

3.6 Data Capture

Image and spectra capture follow the usual procedures for digital photometric measurements. The LED is energized and allowed to thermally stabilize, while a three-axis mechanical stage is used to precisely align the LED with respect to the goniometric center of rotation. An initial dark field image is then captured with the lens covered to record the per-pixel bias values due to thermal noise, following which the camera arm and LED mount are successively positioned to each vertex of the imaginary geodesic dome.

Multiple camera images are captured at each vertex position to improve the signal-to-noise ratio while obtaining a high-dynamic-range composite image. Meanwhile, the LED spectrum is measured with the through-lens spectroradiometer, while the LED forward voltage is monitored to determine the die junction temperature.

Once the images and associated spectra have been captured and their data suitably compressed, they can be written to disk for later retrieval and data interpolation to generate ray sets.


Fig. 9 LED image and spectrum capture
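The junction temperature is typically inferred from the forward voltage, which drops nearly linearly with temperature at a fixed drive current. The sketch below shows the idea; the calibration point and the roughly −2 mV/°C coefficient are illustrative assumptions, since both must be measured for the specific LED under test.

```python
def junction_temp_c(vf, vf_cal=3.42, t_cal_c=25.0, dvf_dt=-0.002):
    """Invert Vf(Tj) = Vf_cal + (dVf/dTj)(Tj - T_cal) at fixed current.
    The calibration values here are hypothetical."""
    return t_cal_c + (vf - vf_cal) / dvf_dt

print(junction_temp_c(3.30))  # a 120 mV sag in Vf implies Tj ~ 85 C
```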

3.7 Ray Interpolation

Many non-imaging optical design programs support the importation of ray sets based on physical measurements. These include position and direction information for each ray, and optionally wavelength for refractive and diffractive systems design. Depending on the design requirements, a ray set may consist of several thousand to several million random rays. The advantage of archiving the images and associated spectra from the near-field goniospectroradiometer is that an optical engineer can generate a posteriori as many random rays as required, and can also specify the desired range of positions and angles for a particular project.

Choosing the start position for a random ray can be based on the known geometry of the LED die and its mount. In the simplest case, the LED die can be assumed to be planar, with a width delimited by the camera field of view when seen from directly above. Any ray whose interpolated luminance is less than a predetermined noise threshold can later be rejected from the ray set. Geometric models of the LED die can also be used, although manufacturing tolerances for the bond wire placement may lead to significant uncertainties. If necessary, the three-dimensional geometry of the LED die can be derived using stereo disparity techniques as described in the computer vision literature [26].

For each random ray, it is necessary to determine the closest captured images from which to interpolate the ray luminance. This can be easily done by determining which triangle the ray intersects, using an efficient ray-triangle intersection algorithm by Badouel [27]. The algorithm is as follows: given a triangle with vertices {v1, v2, v3} (where variables in bold are three-dimensional points or vectors), the normal to the plane containing the triangle is given by:

n = (v2 − v1) × (v3 − v1)    (4)

and the implicit equation of the plane is:

n · p + d = 0    (5)

for any point p on the plane, and where d is the distance of the plane from the origin (Figure 11). If the ray has origin o and direction d, its parametric representation is:

r(t) = o + dt    (6)


and so the parameter t corresponding to the ray-plane intersection point p is:

t = −(d + n · o) / (n · d)    (7)

Fig. 10 Captured and archived image display in pseudocolor to visualize full dynamic range

If the ray and plane are parallel, then n · d = 0 and no intersection occurs; otherwise, the ray intersects the plane if t ≥ 0. The intersection point p can be represented in parametric form as:

p − v1 = α(v2 − v1) + β(v3 − v1)    (8)

The point p will be inside the triangle if α ≥ 0, β ≥ 0, and (α + β) ≤ 1.

To determine α and β, the triangle is first projected onto the x-y, x-z, or y-z plane such that its projected area is maximized (as are the projections of α(v2 − v1) and β(v3 − v1)). This plane is perpendicular to the dominant axis of the triangle plane normal, which gives the index i1:

i1 = 1 if |n(x)| = max(|n(x)|, |n(y)|, |n(z)|)
i1 = 2 if |n(y)| = max(|n(x)|, |n(y)|, |n(z)|)
i1 = 3 if |n(z)| = max(|n(x)|, |n(y)|, |n(z)|)    (9)

The indices i2 and i3 (i2, i3 ∈ {1, 2, 3}) are the indices different from i1, and represent the axes of the plane onto which the triangle is projected. If we let (u, v) be the two-dimensional coordinates of a vector projected onto this plane, the projected coordinates of (p − v1), (v2 − v1), and (v3 − v1) are respectively:

u1 = p(i2) − v1(i2)    v1 = p(i3) − v1(i3)
u2 = v2(i2) − v1(i2)   v2 = v2(i3) − v1(i3)
u3 = v3(i2) − v1(i2)   v3 = v3(i3) − v1(i3)    (10)

Substitution into Equation 8 then yields:

u1 = αu2 + βu3
v1 = αv2 + βv3    (11)

and thus:

α = det[u1 u3; v1 v3] / det[u2 u3; v2 v3]
β = det[u2 u1; v2 v1] / det[u2 u3; v2 v3]    (12)

Having determined which triangle the random ray intersects, we know which three captured images and associated spectra are closest to the point where the ray intersects the imaginary sphere (Figure 2). Knowing the selected point o on the LED die from which the ray originates, it is a straightforward geometric calculation to determine the pixels P1, P2, and P3 in the images corresponding to the physical point, and their corresponding measured luminances L1, L2, and L3. With this, the interpolated ray luminance LR becomes:

LR = (1 − α − β)L1 + αL2 + βL3    (13)

Fig. 11 Ray luminance interpolation

Badouel’s algorithm is extremely efficient, and can be implemented in a few dozen lines of code. In practice, the ray interpolation process is typically I/O-limited by the need to access and decompress images from the data file.
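For reference, here is a compact sketch of Equations (4) through (13) in Python; the vector conventions follow the text, but the implementation details (NumPy, the degenerate-case handling) are our own.

```python
import numpy as np

def ray_triangle_interpolate(o, d, v1, v2, v3, L1, L2, L3):
    """Badouel ray-triangle intersection (Eqs. 4-12) followed by
    barycentric luminance interpolation (Eq. 13). Returns the
    interpolated luminance, or None if the ray misses the triangle."""
    n = np.cross(v2 - v1, v3 - v1)        # Eq. 4: plane normal
    denom = np.dot(n, d)
    if np.isclose(denom, 0.0):            # ray parallel to the plane
        return None
    dist = -np.dot(n, v1)                 # Eq. 5: n.p + dist = 0
    t = -(dist + np.dot(n, o)) / denom    # Eq. 7
    if t < 0:
        return None
    p = o + d * t                         # Eq. 6: intersection point
    # Eq. 9: project onto the plane perpendicular to the dominant axis of n.
    i1 = int(np.argmax(np.abs(n)))
    i2, i3 = [i for i in range(3) if i != i1]
    u1, w1 = p[i2] - v1[i2], p[i3] - v1[i3]      # Eq. 10 (w = projected v)
    u2, w2 = v2[i2] - v1[i2], v2[i3] - v1[i3]
    u3, w3 = v3[i2] - v1[i2], v3[i3] - v1[i3]
    det = u2 * w3 - u3 * w2
    alpha = (u1 * w3 - u3 * w1) / det     # Eq. 12, by Cramer's rule
    beta = (u2 * w1 - u1 * w2) / det
    if alpha < 0 or beta < 0 or alpha + beta > 1:
        return None                       # p lies outside the triangle
    return (1 - alpha - beta) * L1 + alpha * L2 + beta * L3  # Eq. 13

# Example: a ray aimed at the centroid of a triangle on the unit sphere
# interpolates the mean of the three vertex luminances.
v = [np.array(p, dtype=float) for p in [(1, 0, 0), (0, 1, 0), (0, 0, 1)]]
print(ray_triangle_interpolate(np.zeros(3), sum(v) / 3, *v,
                               100.0, 200.0, 300.0))  # 200.0
```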

4. CONCLUSIONS

Designing micro-optics for light-emitting diodes will present numerous engineering challenges. While current efforts have concentrated on optical designs for handheld devices such as backlit PDA and cellular phone displays, there are opportunities and challenges for backlit television displays and solid-state lighting as well.

The near-field goniospectroradiometer we have described in this paper provides detailed information on the near-field radiance and relative spectral power distributions of the emitting LED die surfaces. We have only recently completed its design and construction, and we are currently in the process of commissioning the instrument. We fully expect, however, that it will prove invaluable in our company’s efforts to develop next-generation optical components for solid-state lighting systems.

REFERENCES

1. Mark Rossi and Michael Gale, “Micro-optics promote use of LEDs in consumer goods,” LEDs Magazine 2(7):27-29, 2005.
2. M. T. Gale, “Replication technology for holograms and diffractive optical elements,” Journal of Imaging Science and Technology 41(3):211-220, 1997.
3. Ian Ashdown, “Near-field photometry: a new approach,” Journal of the Illuminating Engineering Society 22(1):163-180, 1993.
4. Mark S. Rea, ed., The IESNA Lighting Handbook, Ninth Edition, Illuminating Engineering Society of North America, New York, NY, 2000.
5. Parry Moon and Domina E. Spencer, The Photic Field, MIT Press, Cambridge, MA, 1981.
6. Ian Ashdown, “Near-field photometry in practice,” IESNA Annual Conference Technical Papers, 413-425, 1993.
7. R. Rykowski and J. F. Forkner, “Matching illumination systems with projection optics,” Projection Displays, Proc. SPIE 2407, 48-52, 1995.
8. Ian Lewin, John O’Farrell, and James E. Walker III, “Advanced techniques in lamp characterization,” Photometric Engineering of Sources and Systems, Proc. SPIE 3410, 148-157, 1997.
9. M. W. Siegel and R. D. Stock, “A general near-zone light source model and its application to computer-aided reflector design,” Optical Engineering 35(9):2661-2679, 1996.
10. M. Goesele, X. Granier, W. Heidrich, and H.-P. Seidel, “Accurate light source acquisition and rendering,” ACM Trans. Graphics 22(3):621-630, 2003.
11. E. Fred Schubert, Light-Emitting Diodes, Cambridge University Press, Cambridge, UK, 2003.
12. Lumileds Lighting, Power Light Source Luxeon Emitter Data Sheet, Technical Datasheet DS25, Lumileds Lighting LLC, San Jose, CA, 2006.
13. A. Žukauskas, M. S. Shur, and R. Gaska, Introduction to Solid-State Lighting, Wiley-Interscience, New York, NY, 2002.
14. K. Man and I. E. Ashdown, “Accurate colorimetric feedback for LED clusters,” Solid State Lighting VI, Proc. SPIE 6337 (to appear).
15. Lumileds Lighting, Power Light Source Luxeon K2 Emitter, Technical Datasheet DS51, Lumileds Lighting LLC, San Jose, CA, 2006.
16. Daniel A. Steigerwald, Jerome C. Bhat, Dave Collins, Robert M. Fletcher, Mari Ochiai Holcomb, Michael J. Ludowise, Paul S. Martin, and Serge L. Rudaz, “Illumination with solid state lighting technology,” IEEE Journal of Selected Topics in Quantum Electronics 8(2):310-320, 2002.
17. Carolyn F. Jones, “Measurement of luminous flux emission from blue LEDs using broad band photometry,” Proc. CIE LED Symposium on Standard Methods for Specifying and Measuring LED Characteristics, CIE x013, Commission Internationale de l’Eclairage, Vienna, Austria, 1997.
18. M. S. Rea and I. G. Jeffrey, “A new luminance and image analysis system for lighting and vision,” Journal of the Illuminating Engineering Society 19(1):64-72, 1990.


19. Ren Ng, Marc Levoy, Mathieu Brédif, Gene Duval, Mark Horowitz, and Pat Hanrahan, “Light field photography with a handheld plenoptic camera,” Technical Report CTSR 2005-02, Stanford University, Stanford, CA, 2005.
20. Erik Reinhard, Greg Ward, Sumanta Pattanaik, and Paul Debevec, High Dynamic Range Imaging, Morgan Kaufmann Publishers, San Francisco, CA, 2006.
21. Tomoo Mitsunaga and Shree K. Nayar, “Radiometric self calibration,” Proc. IEEE Computer Vision and Pattern Recognition, Vol. 1, 380-383, 1999.
22. Ian Ashdown and Ron Rykowski, “Making near-field photometry practical,” Journal of the Illuminating Engineering Society 27(1):67-79, 1998.
23. Bernd Girod, Chuo-Ling Chang, Prashant Ramanathan, and Xiaoqing Zhu, “Light field compression using disparity-compensated lifting,” IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP-03, Vol. 4, 760-763, 2003.
24. Gloria Menegaz and Jean-Philippe Thiran, “3D encoding / 2D decoding of medical data,” IEEE Transactions on Medical Imaging 22(3):424-440, 2003.
25. David Eberly, Tessellation of a Unit Sphere Starting with an Inscribed Convex Triangular Mesh, Geometric Tools Inc. (http://www.geometrictools.com), 1999.
26. D. Scharstein and R. Szeliski, A Taxonomy and Evaluation of Dense Two-Frame Stereo Correspondence Algorithms, Microsoft Research Technical Report MSR-TR-2001-81, 2001.
27. Didier Badouel, “An efficient ray-polygon intersection,” in Graphics Gems, A. Glassner, ed., Academic Press Professional, San Francisco, CA, 390-393, 735, 1990.
