Orientation-selective edge detection and enhancement using the irradiance transport equation

Jorge L. Flores¹ and José A. Ferrari²,*

¹Electronic Engineering Department, University of Guadalajara, Avenue Revolución 1500, Guadalajara, Jalisco, México, CP 44840
²Instituto de Física, Facultad de Ingeniería, J. Herrera y Reissig 565, 11300 Montevideo, Uruguay
*Corresponding author: [email protected]

Received 7 October 2009; accepted 5 December 2009; posted 15 December 2009 (Doc. ID 118247); published 26 January 2010

We present a method for orientation-selective edge detection and enhancement based on the irradiance transport equation. The proposed technique distinguishes the sign of the derivative of the intensity pattern along an arbitrarily selected direction. The method is based on the capacity of liquid-crystal displays to generate simultaneously a contrast-reverted replica of the image displayed on them. When both images (the original one and its replica) are imaged across a slightly defocused plane, one obtains an image with enhanced first derivatives. Unlike most Fourier methods, the proposed technique works well with a low-coherence light source, and it does not require precise alignment. The proposed method does not involve numerical processing, and thus it could be potentially useful for processing large images in real-time applications. Validation experiments are presented. © 2010 Optical Society of America OCIS codes: 100.2980, 100.2810.

1. Introduction

Image edge detection and enhancement are important operations in image processing, mainly because they help determine the edges and shape of an object. They are therefore frequently used in industrial inspection, e.g., to verify the quality of printed-circuit-board assemblies [1]. These operations are also used to improve the performance of optical pattern recognition systems. The edges of objects correspond to the high spatial frequencies of the Fourier spectrum, which in turn make the main contribution to the autocorrelation peak; other parts of the object correspond to low spatial frequencies, which make a smaller contribution to the autocorrelation peak. Therefore, edge enhancement increases the discrimination capability of the recognition system [2]. Furthermore, edge enhancement has been employed in medical imaging to


evidence structures in magnetic resonance images and to improve x-ray images [3–5]. In general, the most widely used methods for edge detection and enhancement can be classified into digital and optical techniques. In digital image processing, edge detection can be realized in either the spatial or the spatial-frequency domain. Most edge detection algorithms are based on the computation of local derivatives followed by some thresholding. These methods use small convolution masks to approximate the first derivative of the image intensity values. For instance, the well-known Roberts operator and others such as the Sobel and Prewitt operators [6] are implemented with two convolution kernels that detect edges in orthogonal directions. The computation time spent on convolution methods depends strongly on the size of the kernel. Usually very small kernels, typically 3 × 3 matrices, are used to reduce the processing time. However, the small size of the kernel reduces robustness to noise. In the same way, the optical methods for edge detection can be realized in either the spatial or the

1 February 2010 / Vol. 49, No. 4 / APPLIED OPTICS

619

spatial frequency domain. Unsharp masking is a powerful optical image processing technique operating in the spatial domain. It involves an unsharp replica (mask) of the original image, which is used to filter the original one and create an edge enhancement [7]. On the other hand, spatial frequency filtering has long been established as a powerful technique for optical image processing. By inserting masks, stops, or slits at the Fourier transform plane of an optical imaging system, we can filter out selected spatial frequency components to perform operations such as edge enhancement, feature extraction, pattern recognition, and image correlation [2,8,9]. This conventional technique of masking selected spatial frequencies at the Fourier transform plane has the following limitations. First, appropriate masks must be constructed according to the image to be processed, so the technique cannot be used for real-time operations. Second, these methods lose information and energy in the filtering process. Other methods related to spatial Fourier filtering that form an edge-enhanced image of the input object, e.g., the fractional Hilbert transform [10,11], the fractional derivative [12,13], and spiral phase filters [14,15], have recently been proposed in the literature. In applications where the local features of some edges are more interesting than others, anisotropic filters have been proposed [16–18]. These kinds of filters have been investigated because they allow adjustment of the degree of edge enhancement, and either right or left edges can be selectively enhanced. In this paper, we introduce a new method for orientation-selective edge detection and enhancement based on the irradiance transport equation [19–21]. The proposed method is implemented using a confocal imaging system with a polarization mask in the Fourier plane.
The method is based on the capacity of liquid-crystal displays to generate simultaneously a contrast-reverted replica of the image displayed on them. When both images are imaged across a slightly defocused plane, one obtains an image with enhanced first derivatives. The main advantages of the proposed system are that no bank of complex filters has to be produced, it is easy to implement optically, and it does not involve numerical processing, so it could be potentially useful for processing large images in real-time applications. In addition, accurate optical alignment is not required. In Section 2 we briefly describe the theory, in Section 3 we present validation experiments, and we conclude in Section 4.

2. Theory

A. Irradiance Transport Equation

In this section we briefly review the irradiance transport equation and clarify its physical meaning. Let the field amplitude of a wave be written as

E(x, y, z) = √I(x, y, z) exp[iφ(x, y, z)],  (1)

where I(x, y, z) is the intensity (irradiance) and φ(x, y, z) is the phase of the wave. Assuming paraxial propagation along the z axis, it is not difficult to demonstrate that the irradiance transport equation can be written as (see, e.g., Refs. [19–21] and Appendix A)

(2π/λ) ∂I/∂z = −∇I · ∇φ − I∇²φ,  (2)

where ∇ is the gradient operator in the (x, y) plane, which is normal to the direction of beam propagation, and λ is the light wavelength. The first term on the right-hand side of Eq. (2) may be called a prism term: it represents the variation caused by inhomogeneous intensity and phase distributions. The second term (I∇²φ) can be interpreted as the variation caused by convergence or divergence of the beam, and thus it can be called a lens term. The irradiance transport equation has been used by several authors to retrieve the phase profile of phase-only objects [19–21]. It is interesting to notice that, in a first-order approximation valid for small Δz, from Eq. (2) one obtains

I(x, y, z + Δz) ≈ I(x, y, z) − (λΔz/2π)(∇I · ∇φ + I∇²φ),  (3)

where I(x, y, z + Δz) is the intensity across a plane Σ at a distance Δz from the plane Σ₀. Also, it is noteworthy that expression (1) is the result of illuminating, with a plane wave of intensity I(x, y, z), a pure phase object with phase profile φ(x, y, z). Hence we conclude that under homogeneous illumination (i.e., for I(x, y, z) ≈ I₀), by simple accommodation of the naked eye it is difficult to observe a pure phase object involving small phase gradients. This is because ∇I ≈ 0 and ∇²φ ≈ 0, and therefore, from Eq. (3) one obtains I(x, y, z + Δz) ≈ I₀. On the contrary, when there is a steep spatial change in the illumination, e.g., near an illumination edge, the term ∇I · ∇φ becomes important (i.e., the phase gradient is amplified by the intensity gradient), and thus small phase gradients can be easily visualized, as reported in the literature [22,23].
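As a quick numerical illustration, the first-order propagation formula of Eq. (3) can be evaluated directly with finite differences. The sketch below is our own construction (the function name tie_first_order is not from the paper); it confirms that a uniform intensity with a purely linear (tilt) phase propagates unchanged, as argued above.

```python
import numpy as np

def tie_first_order(I, phi, wavelength, dz, dx=1.0):
    """First-order intensity transport, Eq. (3):
    I(z+dz) ≈ I - (λ dz / 2π)(∇I·∇φ + I ∇²φ)."""
    Iy, Ix = np.gradient(I, dx)       # in-plane intensity gradient
    py, px = np.gradient(phi, dx)     # in-plane phase gradient
    # Laplacian of the phase from second finite differences
    lap = np.gradient(px, dx, axis=1) + np.gradient(py, dx, axis=0)
    return I - wavelength * dz / (2 * np.pi) * (Ix * px + Iy * py + I * lap)

# Uniform intensity + pure tilt phase: ∇I ≈ 0 and ∇²φ ≈ 0,
# so the intensity should be unchanged after propagation.
I0 = np.ones((16, 16))
x = np.arange(16) * 1e-5
phi = 2 * np.pi * 0.01 * np.tile(x, (16, 1))  # linear tilt along x
I1 = tie_first_order(I0, phi, 524e-9, 1e-3, dx=1e-5)
```

This reproduces the observation that a pure phase tilt under homogeneous illumination leaves the defocused intensity equal to I₀.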

B. Edge Enhancement

In the present work we assume that the object under study is a pure amplitude object, e.g., an amplitude mask or display that modulates only the field amplitude √I(x, y, z) (but not the phase) of an incident plane wave propagating in the direction of the unit vector ê, under an angle α (≪ 1) with respect to the z axis. I(x, y, z) can be the intensity distribution immediately after the amplitude mask placed at the plane Σ₀, or it could be the image across Σ₀ obtained with the help of a 4f optical system, as shown in Fig. 1. It is important to recall that the images obtained with a single lens include a spurious quadratic phase term that is absent in a confocal optical system. The phase associated with a plane wave traveling in the ê direction can be written as

φ(x, y, z) = 2π r⃗ · ê/λ = 2π(sin(α)ξ + cos(α)z)/λ,  (4)

where r⃗ ≡ (x, y, z) and ξ is a coordinate in the direction of the projection of ê on the (x, y) plane. Since ∇²φ = 0, from Eqs. (3) and (4) it results that

I(x, y, z + Δz) ≈ I(x, y, z) − (∇I · ê)Δz.  (5)

Thus the image I(x, y, z + Δz) obtained by propagating the field in free space has an added term proportional to the intensity gradient, in the direction ê, of the original image I(x, y, z). Roughly speaking, we can say that the amplitude gradient is amplified by the oblique illumination (i.e., by the phase gradient). As we demonstrate next, Eq. (5) can be used to achieve a kind of orientation-selective edge enhancement. Suppose that we have an amplitude image I_p(x, y, z) whose edges have to be enhanced in a given direction, and define its “negative” replica as

I_n(x, y, z) ≡ 1 − I_p(x, y, z),  (6)
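Equation (5) lends itself to a direct numerical check. The sketch below is our construction (the helper name oblique_defocus is assumed, not from the paper): applied to a step edge with the illumination tilted along x, it produces the dark outline expected where the intensity rises along the illumination direction.

```python
import numpy as np

def oblique_defocus(I, e_xy, dz, dx=1.0):
    """Eq. (5): I(z+dz) ≈ I - (∇I·ê) dz, with e_xy the in-plane
    components (e_x, e_y) of the illumination unit vector ê."""
    Iy, Ix = np.gradient(I, dx)
    return I - (e_xy[0] * Ix + e_xy[1] * Iy) * dz

# Step edge along x: the rising edge (dI/dx > 0) acquires a dark outline,
# while the flat regions are left unchanged.
I = np.zeros((4, 9)); I[:, 5:] = 1.0
alpha = np.deg2rad(1.9)
out = oblique_defocus(I, (np.sin(alpha), 0.0), dz=1.0)
```

Away from the edge the output equals the input, since ∇I = 0 there; only the transition columns are modified, in proportion to the directional derivative.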

where, for the sake of simplicity, we have assumed that all images are normalized to unity. Now, consider that the “positive” image is illuminated with a plane wave under an angle of incidence α and its “negative” replica is illuminated under an angle −α. From the addition of two expressions of the form shown in Eq. (5),

I_p(x, y, z + Δz) + I_n(x, y, z + Δz) ≈ I_p(x, y, z) + I_n(x, y, z) − [∇(I_p − I_n) · ê]Δz.  (7)

Using Eq. (6) we finally obtain

I_p(x, y, z + Δz) + I_n(x, y, z + Δz) ≈ 1 − 2(∇I_p · ê)Δz ≈ 1 − 2 (∂I_p/∂ξ) sin(α)Δz.  (8)

Thus, after propagation in free space by a distance Δz, the addition of an obliquely illuminated image and its contrast-reverted (negative) replica gives a term proportional to the intensity gradient of the original image in the selected direction (added to an offset).

Fig. 1. Afocal optical imaging system using oblique illumination, i.e., the object is illuminated with an off-axis plane wave.

3. Experiments

A. Setup

The experimental setup is schematically shown in Fig. 2. For the sake of simplicity, in Fig. 2 the ξ direction was drawn coincident with the y direction. As an amplitude mask we have used a liquid-crystal display (600 × 800 pixels, Model LC2002, Holoeye Corporation) working in the “amplitude-mostly” configuration, i.e., a given orientation of the polarizer–analyzer pair (predetermined by the manufacturer) for which maximum amplitude modulation with minimal phase modulation is obtained. The direction of polarization of the polarizer (P), placed in front of the display, was taken as the y direction. Immediately after the display we have basically linearly polarized light with a polarization direction θ(x, y) that depends on the voltage signal applied to the liquid-crystal cells of the display (which corresponds to the gray levels of the original image). Thus, after an analyzer (A₁) whose direction of transmission is in the x direction, one has an intensity distribution given by Malus’s law,

I_p(x, y, z) = cos²(θ(x, y)),  (9)

where the angle θ(x, y) is measured with respect to the x direction. Clearly, using an analyzer (A₂) in the y direction gives a “negative” image with reverted contrast,

I_n(x, y, z) = sin²(θ(x, y)) = 1 − I_p(x, y, z).  (10)
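The effect predicted by Eq. (8), a uniform offset plus an enhanced directional derivative, can be emulated numerically. In this sketch (our construction; enhanced_sum is an assumed name), a step edge produces a dark outline on a uniform unit background, matching the behavior reported in the experiments.

```python
import numpy as np

def enhanced_sum(I_p, alpha, dz, dxi=1.0, axis=1):
    """Eq. (8): sum of the defocused positive and negative images,
    ≈ 1 - 2 sin(alpha) dz * dI_p/dxi (offset plus directional derivative)."""
    dI = np.gradient(I_p, dxi, axis=axis)
    return 1.0 - 2.0 * np.sin(alpha) * dz * dI

# Step edge along x: dI/dxi > 0 at the edge, so the sum dips below
# the unit background there (a dark outline), and stays at 1 elsewhere.
I_p = np.zeros((4, 9)); I_p[:, 5:] = 1.0
out = enhanced_sum(I_p, alpha=np.deg2rad(1.9), dz=1.0)
```

Reversing the sign of Δz (or of the displayed image) flips dark outlines to bright ones, which is the orientation- and sign-selective behavior exploited below.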

There are many possible methods to achieve oblique (symmetrical) illumination of the positive and negative images. The simplest is to use diffractive effects, as we explain next. For example, consider a sinusoidal grating with amplitude transmittance cos(2π sin(α)ξ/λ) placed at the input of an ideal display (without pixels). At the Fourier plane of the confocal system, along the ξ direction we have two symmetrical replicas of the same polarization distribution, which are generated by the phase terms exp[−i2π sin(α)ξ/λ] and exp[+i2π sin(α)ξ/λ], respectively. Now, if we put the analyzer A₁ covering one half of the Fourier plane and the analyzer A₂ covering the other half-plane, at the output plane Σ we obtain a “positive” image illuminated with a plane wave under an angle α and its “negative” replica illuminated under an angle −α. In principle, a sinusoidal grating with the desired period and orientation can be electrically generated by the same display used for displaying the image to be processed. For the sake of simplicity, in our setup we have simply used the diffraction generated spontaneously at the pixel matrix of the display (independently of the applied voltage signal). The sinusoidal-grating effect can be emulated by retaining only the ±1 diffraction orders in the x or y direction at the Fourier plane of the confocal system, as schematically shown in Fig. 2. The orthogonal analyzers (A₁,₂) and the mask (M) blocking diffraction orders other than ±1 do not require precise alignment, and in fact, the system works well with a low-coherence light source. The light source used in our experiments was a standard 5 mm green LED (λ ∼ 524 nm, FWHM ∼ 20 nm). Actually, Eq. (10) describes an ideal situation that can be only approximately fulfilled in practice. In our setup, at the system output we have included an additional rotatable polarizer (not shown in Fig. 2) to balance the intensities passing through A₁,₂. This intensity control can also be used to obtain a predominantly positive or negative image with some amount of enhanced derivatives. The focal lengths of the lenses L₀,₁,₂ were f₀ = 500 mm, f₁ = 200 mm, and f₂ = 100 mm, respectively. Since the pixel separation in our display is of the order of 32 μm, a simple calculation gives α ≈ 1.9°.

Fig. 2. Experimental setup: P, polarizer; LCD, liquid-crystal display; L₀,₁,₂, lenses; M, mask; and A₁,₂, orthogonal analyzers.
The quantity 2 sin(α)Δz, appearing in expression (8), is a measure of the distance Δξ between the images displaced along the ξ direction; it sets the minimum inter-edge spacing below which the method ceases to work. For example, for Δz = 1 mm one obtains Δξ = 2 sin(α)Δz ≈ 65 μm. The images were acquired with a CCD camera (244 × 753 pixels), without an objective lens, placed at the plane Σ. The pixel separations in the vertical and horizontal directions are of the order of 27 μm and 12 μm, respectively, so that the displacement in the ξ direction calculated above corresponds to roughly 3–6 pixels per millimeter of defocus (Δz).
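The displacement estimate quoted above is simple to reproduce (with α = 1.9° the computed value comes out near 66 μm, consistent with the paper's quoted ≈ 65 μm after rounding of α):

```python
import math

alpha = math.radians(1.9)   # oblique-illumination angle from the text
dz = 1e-3                   # defocus of 1 mm

# Displacement between positive and negative images, expression (8)
dxi = 2 * math.sin(alpha) * dz  # ≈ 65-66 um

# Equivalent shift in CCD pixels (27 um vertical, 12 um horizontal pitch)
shift_v = dxi / 27e-6
shift_h = dxi / 12e-6
```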

B. Experimental Results

In a first series of experiments we used the image of a printed circuit as a test pattern. Figure 3(a) shows the image displayed on the LCD, while Figs. 3(b)–3(e) show the optically processed images obtained at the plane Σ. Figure 3(b) shows the image obtained with ξ along the x direction for Δz = 1.00(0.05) mm. In this image we observe dark outlines at the edges where ∂I/∂x > 0 and bright outlines where ∂I/∂x < 0. Figure 3(c) shows the reciprocal behavior (i.e., bright outlines at the edges where ∂I/∂x > 0 and dark outlines where ∂I/∂x < 0), obtained by taking Δz = −1.00(0.05) mm. The same reverted derivative sign is obtained by displaying on the LCD the “negative” of the original image, which can be done electrically using the standard controls of the LCD. Figure 3(d) shows the optically processed image with ξ along the y direction for Δz = 1.00(0.05) mm. In this image we observe dark (bright) outlines at the edges where ∂I/∂y > 0 (< 0). Figure 3(e) illustrates the output image obtained by taking Δz = −1.00(0.05) mm. The orientation-selective edge enhancement, which distinguishes the sign of the derivative, is clear in the subfigures in the upper-right corners, which show intensity cuts along the arrow directions. To demonstrate the application of the proposed method to half-tone pictures, we displayed a Lena picture on the LCD [see Fig. 4(a)]. Figures 4(b) and 4(c) show the optically processed images obtained at the plane Σ, with ξ along the x direction, for Δz = ±0.50(0.05) mm, respectively. Clearly, an enhancement of the derivatives is obtained, and the flat regions appear as part of the background intensity.

Fig. 3. (a) Image displayed on the LCD; the x coordinate is horizontal and the y coordinate is vertical. Images (b) and (c) were obtained with ξ in the x direction for Δz = 1.00(0.05) mm and Δz = −1.00(0.05) mm, respectively. Finally, images (d) and (e) were obtained with ξ in the y direction for Δz = 1.00(0.05) mm and Δz = −1.00(0.05) mm, respectively. The subfigures in the upper-right corners show intensity cross sections taken along the arrows in the output patterns.

Fig. 4. (a) Lena picture displayed on the LCD. Images (b) and (c) show the optically processed images obtained at the plane Σ, with ξ in the x direction, for Δz = ±0.50(0.05) mm, respectively.

The images shown throughout the present work are raw images without any software processing.

4. Conclusions

We have presented a method for orientation-selective edge detection and enhancement based on the irradiance transport equation. The proposed technique distinguishes the sign of the derivative of the intensity pattern along an arbitrarily selected direction. We also presented validation experiments that show clearly enhanced (bright and dark) edges when a small amount of defocus Δz is introduced, as predicted by expression (8). Unlike numerical edge detection and enhancement schemes, the proposed method does not consume computational time, and thus it can be used for processing large (e.g., megapixel) images in real-time applications. Also, the method is easy to implement, and it does not require the high-coherence light source and precise alignment that most Fourier methods do.

Appendix A: Irradiance Transport Equation

Let us consider a light beam E(x, y, z) propagating with an average direction along the z axis after passing through a diffracting mask on the (x, y) plane,

E(x, y, z) = ∫∫ Ẽ(kₓ, k_y, 0) exp[i(kₓx + k_y y)] exp[ikz √(1 − (kₓ² + k_y²)/k²)] dkₓ dk_y,  (A1)

where k = 2π/λ, and Ẽ(kₓ, k_y, 0) is the angular spectrum of the field at z = 0. We assume that Ẽ is different from zero only when (kₓ² + k_y²)/k² ≪ 1 (paraxial approximation). Then, using the binomial expansion

kz √(1 − (kₓ² + k_y²)/k²) ≈ kz [1 − (kₓ² + k_y²)/(2k²) − (kₓ² + k_y²)²/(8k⁴) + …],

and retaining only the first two terms of the binomial approximation in the exponent of (A1), one has

E(x, y, z) = exp(ikz) A(x, y, z),  (A2)

with

A(x, y, z) ≡ ∫∫ Ẽ(kₓ, k_y, 0) exp[i(kₓx + k_y y)] exp[−i(kₓ² + k_y²)z/(2k)] dkₓ dk_y.  (A3)

It is easy to demonstrate that A(x, y, z) satisfies the parabolic equation

2ki ∂A/∂z + ∇²A = 0,  (A4)

where the operator ∇ works on the (x, y) coordinates. From Eq. (A4), or directly from Eq. (A3), it results that

A(x, y, z + Δz) ≡ ∫∫ Ẽ(kₓ, k_y, 0) exp[i(kₓx + k_y y)] exp[−i(kₓ² + k_y²)(z + Δz)/(2k)] dkₓ dk_y
≈ A(x, y, z) − (iΔz/2k) ∫∫ Ẽ(kₓ, k_y, 0) exp[i(kₓx + k_y y)] exp[−i(kₓ² + k_y²)z/(2k)] (kₓ² + k_y²) dkₓ dk_y
≈ A(x, y, z) + i(Δz/2k) ∇²A(x, y, z).  (A5)

This expression can be used to find the electric field amplitude as it propagates a distance Δz. Since I(x, y, z) ≡ E(x, y, z)·E*(x, y, z) (= A(x, y, z)·A*(x, y, z)), from Eq. (A5), to first order in Δz, it results that

I(x, y, z + Δz) = I(x, y, z) + 2Re[(iΔz/2k) A*(x, y, z) ∇²A(x, y, z)],  (A6)

where Re[…] denotes the real part, and the asterisk denotes complex conjugation. Writing A = |A| exp(iφ), one obtains

∇²A = (∇²|A|) exp(iφ) + 2i(∇φ · ∇|A|) exp(iφ) + i|A|(∇²φ) exp(iφ) − |A||∇φ|² exp(iφ).  (A7)

Hence,

Re[iA* ∇²A] = −2|A|(∇φ · ∇|A|) − |A|²∇²φ = −∇I · ∇φ − I∇²φ.  (A8)

Then, substituting Eq. (A8) into Eq. (A6), we finally obtain

I(x, y, z + Δz) = I(x, y, z) − (Δz/k)[∇I · ∇φ + I∇²φ],  (A9)

or equivalently,

k ∂I/∂z + ∇I · ∇φ + I∇²φ = 0,  (A10)

which is the irradiance transport equation [Eq. (2) of Subsection 2.A].
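The first-order result (A5) can be checked against the exact paraxial propagator of Eq. (A3) implemented with FFTs. The sketch below is our construction (the name angular_spectrum is assumed): because the transfer function is phase-only, the propagation is unitary and the total power is conserved, which the example verifies on a Gaussian amplitude.

```python
import numpy as np

def angular_spectrum(A, wavelength, dz, dx):
    """Exact paraxial propagation of A(x, y, z), Eq. (A3):
    multiply the angular spectrum by exp[-i (kx^2 + ky^2) dz / (2k)]."""
    k = 2 * np.pi / wavelength
    n = A.shape[0]  # assumes a square n x n grid for brevity
    kx = 2 * np.pi * np.fft.fftfreq(n, dx)
    KX, KY = np.meshgrid(kx, kx)
    H = np.exp(-1j * (KX**2 + KY**2) * dz / (2 * k))  # phase-only transfer
    return np.fft.ifft2(np.fft.fft2(A) * H)

# Smooth Gaussian amplitude: propagate a short distance and compare power
n, dx, lam = 64, 10e-6, 524e-9
x = (np.arange(n) - n / 2) * dx
X, Y = np.meshgrid(x, x)
A0 = np.exp(-(X**2 + Y**2) / (2 * (8 * dx) ** 2))
I1 = np.abs(angular_spectrum(A0, lam, 1e-4, dz and dx)) ** 2 if False else \
     np.abs(angular_spectrum(A0, lam, 1e-4, dx)) ** 2
```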

J. A. Ferrari acknowledges financial support from the Programa de Desarrollo de las Ciencias Básicas (PEDECIBA), Uruguay, and the Comisión Sectorial de Investigación Científica (CSIC), UdelaR, Uruguay. J. L. Flores expresses his gratitude to the Programa de Estancias Académicas, University of Guadalajara, for funding his academic stay at the Facultad de Ingeniería, UdelaR, Uruguay, where this research was developed.

References
1. M. Fak-aim, A. Seanton, and S. Kaitwanidvilai, “Automatic visual inspection of bump in flip chip using edge detection with genetic algorithm,” in Proceedings of the International Multiconference of Engineers and Computer Scientists 2008 (International Association of Engineers, 2008), Vol. I, pp. 19–21.
2. B.-L. Liang, Z.-Q. Wang, G.-G. Mu, J.-H. Guan, H.-L. Liu, and C. M. Cartwright, “Real-time edge-enhanced optical correlation with a cerium-doped potassium sodium strontium barium niobate photorefractive crystal,” Appl. Opt. 39, 2925–2930 (2000).


3. C. S. Yelleswarapu, S.-R. Kothapalli, and D. V. G. L. N. Rao, “Optical Fourier techniques for medical image processing and phase contrast imaging,” Opt. Commun. 281, 1876–1888 (2008).
4. J. A. Sorenson and C. R. Mitchell, “Evaluation of optical unsharp masking and contrast enhancement of low-scatter chest radiographs,” Am. J. Roentgenol. 149, 275–281 (1987).
5. P. G. Tahoces, J. Correa, M. Souto, C. Gonzalez, L. Gomez, and J. J. Vidal, “Enhancement of chest and breast radiographs by automatic spatial filtering,” IEEE Trans. Med. Imaging 10, 330–335 (1991).
6. R. C. Gonzalez and P. Wintz, Digital Image Processing (Addison-Wesley, 1977).
7. J. A. Ferrari, J. L. Flores, C. D. Perciante, and E. Frins, “Edge enhancement and image equalization by unsharp masking using self-adaptive photochromic filters,” Appl. Opt. 48, 3570–3579 (2009).
8. X. Lin, J. Ohtsubo, and T. Takemori, “Real-time optical image subtraction and edge enhancement using ferroelectric liquid-crystal devices based on speckle modulation,” Appl. Opt. 35, 3148–3155 (1996).
9. M. Y. Shih, A. Shishido, and I. C. Khoo, “All-optical image processing by means of a photosensitive nonlinear liquid-crystal film: edge enhancement and image addition—subtraction,” Opt. Lett. 26, 1140–1142 (2001).
10. J. A. Davis, D. E. McNamara, and D. M. Cotrell, “Analysis of the fractional Hilbert transform,” Appl. Opt. 37, 6911–6913 (1998).
11. J. A. Davis, D. E. McNamara, and D. M. Cottrell, “Image processing with the radial Hilbert transform: theory and experiments,” Opt. Lett. 25, 99–101 (2000).
12. H. Kasprzak, “Differentiation of a noninteger order and its optical implementation,” Appl. Opt. 21, 3287–3291 (1982).
13. J. A. Davis, D. A. Smith, D. E. McNamara, D. M. Cottrell, and J. Campos, “Fractional derivatives—analysis and experimental implementation,” Appl. Opt. 40, 5943–5948 (2001).
14. S. Fürhapter, A. Jesacher, S. Bernet, and M. Ritsch-Marte, “Spiral phase contrast imaging in microscopy,” Opt. Express 13, 689–694 (2005).
15. J. Mazzaferri and S. Ledesma, “Rotation invariant real-time optical edge detector,” Opt. Commun. 272, 367–376 (2007).
16. J. A. Davis and M. D. Nowak, “Selective edge enhancement of images with an acousto-optic light modulator,” Appl. Opt. 41, 4835–4839 (2002).
17. D. Cao, P. P. Banerjee, and T.-C. Poon, “Image edge enhancement with two cascaded acousto-optic cells with contra propagating sound,” Appl. Opt. 37, 3007–3014 (1998).
18. G. Situ, G. Pedrini, and W. Osten, “Spiral phase filtering and orientation-selective edge detection/enhancement,” J. Opt. Soc. Am. A 26, 1788–1797 (2009).
19. N. Streibl, “Phase imaging by the transport equation of intensity,” Opt. Commun. 49, 6–10 (1984).
20. M. R. Teague, “Deterministic phase retrieval: a Green’s function solution,” J. Opt. Soc. Am. 73, 1434–1441 (1983).
21. F. Roddier, “Wavefront sensing and the irradiance transport equation,” Appl. Opt. 29, 1402–1403 (1990).
22. C. D. Perciante, J. A. Ferrari, and A. Dubra, “Visualization of phase objects using incoherent illumination,” Opt. Commun. 183, 15–18 (2000).
23. C. D. Perciante and J. A. Ferrari, “Visualization of two-dimensional phase gradients by subtraction of a reference periodic pattern,” Appl. Opt. 39, 2081–2083 (2000).