
Employing Spatial Metrics in Urban Land-use/Land-cover Mapping: Comparing the Getis and Geary Indices
AAG REMOTE SENSING SPECIALTY GROUP 2007 AWARD WINNER(1)
Soe W. Myint, Elizabeth A. Wentz, and Sam J. Purkis

Abstract We examine the potential of supplementing per-pixel classifiers with the Getis index (Gi) in comparison to the Geary’s C on a subset of Ikonos imagery for urban land-use and landcover classification. The test is pertinent considering that the Gi is generally considered more capable of identifying clusters of points with similar attributes. We quantify the impact of varying distance thresholds on the classification product and demonstrate how well the Gi identified cold and hot spots in comparison to Geary’s C. The exercise also provides a rule of thumb for effectively measuring spatial association in connection to adjacency. We are able to support existing literature that measuring local variability improves classification over spectral information alone. The results, however, neither confirm nor deny the challenge on whether measuring cold and hot spots rather than just spatial association improves classification accuracy.

Introduction Correctly classifying urban land-cover in images and inferring urban land-use remains a significant challenge to remote sensing scientists. This problem exists because there are numerous but distinguishable land-cover features in small geographic areas and various combinations of these land-covers can represent distinct urban land-uses. This results in a complex relationship between land-cover and land-use (Barr et al., 2004; Myint, 2006a). To extract urban land-cover, researchers have successfully implemented approaches that consider groups of pixels and their shapes using methods such as object-based models and geocomputation (Medda et al., 1998; Wentz, 2000; Hay et al., 2003; Thomas et al., 2003; Chubey et al., 2006). Geospatial techniques that measure spatial arrangements of objects in space can be expected to identify multiple spectral responses from land-cover features into organized land-use classes. Object-based approaches, while proven

Soe W. Myint and Elizabeth A. Wentz are with the School of Geographical Sciences, Arizona State University, P.O. Box 870104, Tempe, AZ 85287 ([email protected]; [email protected]). Sam J. Purkis is with the National Coral Reef Institute, Nova Southeastern University, 8000 N. Ocean Drive, Dania, FL 33004 ([email protected]).

successful at extracting land-cover, have not yet mastered creating direct inferences of land-use, necessary in many applications. Three distinct problems arise when conventional pixel-by-pixel image processing approaches are used to classify land-use. Pixel-by-pixel approaches require a training set for each class, which are homogenous groups of pixels representing a single class. A single land-use class in an urban environment, however, is rarely homogeneous due to variation in the spectral response of their component surface covers (Aplin, 2003; Herold et al., 2003; Barr et al., 2004; Myint, 2006a). The first problem arises when a single land-cover class that exists only in one land-use class is selected as the training set. What happens is those area’s land-cover are correctly placed in the correct land-use category, but other land-covers also in that land-use class are not included. For example, if a residential rooftop is selected as the training sample for a residential land-use class other land-covers in residential areas (e.g., hard surfaces, swimming pools, exposed soil, or vegetation) will be incorrectly classified. The second problem occurs when a land-cover class that exists in multiple land-uses is selected as the training set, such as grass. The algorithm will identify all grasses as residential, including those in parks, commercial complexes, and golf courses. The third and final problem exists when all landcovers within a land-use are selected as the training set. In this case, the standard deviation of the training set will be very large, and the data distribution of the class will likely violate the normality assumption of statistical classifiers

1 In recognition of the 100th Anniversary of the Association of American Geographers (AAG) in 2004, the AAG Remote Sensing Specialty Group (RSSG) established a competition to recognize exemplary research scholarship in remote sensing by post-doctoral students and faculty in Geography and allied fields. Dr. Soe Myint submitted this paper, which was selected as the 2007 winner.

Photogrammetric Engineering & Remote Sensing, Vol. 73, No. 12, December 2007, pp. 1403–1415. 0099-1112/07/7312–1403/$3.00/0 © 2007 American Society for Photogrammetry and Remote Sensing


(e.g., maximum likelihood) (Barnsley et al., 1991; Sadler et al., 1991), thereby decreasing the likelihood of an accurately classified image.

Techniques associated with spatial autocorrelation (e.g., Gi and Geary's C) can quantify the potential for clusters of land-cover classes to coexist as a single land-use class. Figure 1a represents a hypothetical image of a residential area that includes a cement road (white), grassland (light grey), a tree crown (dark grey), a rooftop (grey), and a tar road (black). Instead of applying first-order statistics to this window (which would result in a high standard deviation), a measure of whether there are clusters of similar cells would tell us there are homogeneous features in the window that could be identified (e.g., the roof). Myint (2003) validated the assumption that spatial autocorrelation measures are more effective than first-order statistics (e.g., standard deviation, variance, mean) in assisting with the extraction of features and objects from digital images. Warner and Shank (1997) and Warner et al. (1999) demonstrated the usefulness of Geary's C in selecting a subset of the original bands providing an optimal trade-off between the probability of error and classification cost. LeDrew et al. (2004) used the Getis statistic to identify homogeneity and heterogeneity in images to characterize changes in images over time. Espindola et al. (2006) employed a spatial autocorrelation indicator (Moran's I) that detected separability between different spatial regions and a variance indicator that expresses the overall homogeneity of the regions. Emerson et al. (1999) measured the spatial autocorrelation (i.e., Moran's I and Geary's C) of satellite sensor imagery to observe the differing spatial structure of smooth and rough surfaces in remote sensing images.
Doubt is cast, however, by Lee and Wong (2001), who suggest that the spatial autocorrelation measures, Moran's I and Geary's C, may not be effective in identifying spatial concentrations of attributes. In other words, they may not be accurate in distinguishing "hot spots" and "cold spots," which are sets of clustered points with large or small attribute values, respectively. The general Getis statistic (Gi) was introduced as a technique to characterize the presence of hot spots and cold spots over an entire area (Getis and Ord, 1992; Ord and Getis, 1995). One key difference between Gi and the spatial autocorrelation measures is that the Gi computation multiplies the attribute values of neighboring polygons or pixels, whereas the spatial autocorrelation techniques consider differences between these attribute values. The Gi technique could be used, for example, to distinguish a group of bright pixels that represent the spectral response of a homogeneous feature (e.g., a sand dune in the red visible band) from a group of dark pixels that represent a different homogeneous feature (e.g., deep clear water in the red visible band). With Moran's I or Geary's C, these two features would share the same index.

The objective of this research is to compare the effectiveness of Gi and Geary's C with spectral per-pixel classifiers to extract land-cover and infer land-use classes from imagery. We hypothesize that the Gi measure will be more effective than Geary's C since it has the ability to distinguish clusters of high versus low values. A specialized routine was developed to compute Gi and Geary's C in classifying a remotely sensed image. Our comparison is made on two datasets: one hypothetical, the second a case study in Dallas, Texas. For comparison purposes, the performance of traditional spectral-based classification approaches was also evaluated.
We used the overall classification accuracy, user's accuracy, producer's accuracy, and kappa coefficient to make the final comparison.

Figure 1. An example illustrating how the distance threshold could influence the outcome of a group of objects in space. (a) A hypothetical image of a residential area covering a cement road (white), grassland (light grey), a tree crown (dark grey), a rooftop (grey), and a tar road (black), (b) converted features and objects in raster format; (c) attribute values or digital numbers that represent the above objects and features in the image. A color version of this figure is available at the ASPRS website: www.asprs.org.



Background

Geostatistics and Spatial Analysis
The importance of local variability in accurately classifying images has prompted the use of geostatistics and spatial statistics in numerous application contexts (Curran, 1988; St-Onge and Cavayas, 1997; Atkinson and Lewis, 2000). Boucher and Kyriakidis (2006) recognized the complexity of urban land-use classification and applied geostatistical kriging to infer probabilities of land-cover classes at fine spatial resolution. Geostatistical techniques have been used for image fusion (Pardo-Iguzquiza et al., 2006); to extract texture information (Carr and Miranda, 1998; Atkinson and Lewis, 2000; Almeida et al., 2005; Dell'Acqua, 2006); to better analyze time series (LeDrew et al., 2004; Boucher et al., 2006); and for error detection and accuracy assessment (Cressie and Kornak, 2003; Hagen-Zanker, 2006). Most commonly, spatial statistics have been used to characterize the distribution of extracted features in classified images (Stein et al., 1998). Read and Lam (2002) found that applying fractal analysis and several spatial statistics increased the ability to characterize the complexity of Landsat images. Henry and Yool (2002) measured spatial patterns of fires over time using several spatial statistical techniques, including the fractal dimension and Shannon's diversity index. Spatial statistics can also be used to estimate local variability and increase classification accuracy through the use of spatial-transformed bands. Spatial statistics are calculated on the variability in digital numbers or brightness values within local windows in addition to the values of the original spectral bands (e.g., Purkis et al., 2006). Example statistics are the standard deviation (Arai, 1993), contrast between neighboring pixels (Edwards et al., 1988), and variance (Woodcock and Harward, 1992). Using these statistics, however, is problematic when there is a large variance in the digital numbers.
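The texture-band computation described here can be sketched as a moving-window pass over a single band. The code below is an illustrative sketch, not the implementation used in the paper; the window size, the reflective border padding, and the choice of local standard deviation as the statistic are assumptions for demonstration only.

```python
import numpy as np

def texture_band(image, window=5, metric=np.std):
    """Compute a spatial-transformed (texture) band by sliding a
    window x window box over the image and evaluating `metric` on
    each neighborhood. Borders are mirror-extended so the output
    has the same dimensions as the input."""
    pad = (window - 1) // 2
    padded = np.pad(image, pad, mode="reflect")  # mirror extension
    rows, cols = image.shape
    out = np.empty((rows, cols), dtype=float)
    for r in range(rows):
        for c in range(cols):
            out[r, c] = metric(padded[r:r + window, c:c + window])
    return out

band = np.arange(25, dtype=float).reshape(5, 5)   # toy "spectral band"
std_band = texture_band(band, window=3, metric=np.std)
```

The resulting texture band would then be stacked with the original spectral bands before classification; any local statistic (variance, contrast, or a spatial index) can be plugged in as `metric`.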
Local variability has also been measured and used to improve classification by using texture-transformed bands in which the variability in digital numbers or brightness values is estimated within local windows. Texture bands are then combined with the original spectral bands. Local variability can be characterized by computing statistics of a group of pixels, e.g., contrast between neighboring pixels (Edwards et al., 1988), the standard deviation (Arai, 1993), or local variance (Woodcock and Harward, 1992). Emerson et al. (1999) measured the spatial autocorrelation of satellite imagery to observe the differing spatial structure of smooth and rough surfaces in remote sensing images. Hyppanen (1996) measured spatial autocorrelation and the optimal spatial resolution of remotely sensed data in a forested landscape. Warner and Shank (1997) and Warner et al. (1999) demonstrated the usefulness of spatial autocorrelation in selecting a subset of the original bands providing an optimal trade-off between the probability of error and classification cost. These studies suggest that spatial autocorrelation approaches can improve upon traditional spectral methodologies.

Spatial Autocorrelation
The two commonly applied spatial autocorrelation techniques are Moran's I and Geary's C (Cliff and Ord, 1973; Lee and Wong, 2001). Both assess how dispersed, uniformly distributed, or clustered points (weighted by their attributes) are in space and whether or not this pattern has occurred by chance (Goodchild, 1986). The two techniques differ in the manner in which attribute values are compared. We employed Geary's C for the purpose of comparison since the

difference in effectiveness of Moran's I and Geary's C to measure the spatial arrangements of objects and features in an image is negligible (Myint, 2003). In the context of urban remote sensing, spatial autocorrelation is a representative index with which to characterize the spatial arrangements and surface textures of urban objects and features in an image. Both statistics are based on a comparison of the attribute values of neighboring units, with strong positive spatial autocorrelation indicating similarity between those units. For the purpose of this study, Geary's C index (Geary, 1954) was selected to characterize the spatial arrangements of objects and features in an urban area. Geary's C is calculated from the following:

C(d) = \frac{\sum_{i=1}^{n} \sum_{j=1}^{n} w_{ij}(d)\,(z_i - z_j)^2}{2 \sum_{i=1}^{n} \sum_{j=1}^{n} w_{ij}(d)\, s^2},    (1)

where wij(d) is the weight at distance d, such that wij(d) = 1 if point j is within distance d from point i and wij(d) = 0 otherwise, and s^2 is the variance of the z values, computed as:

s^2 = \sum_{i} (z_i - \bar{z})^2 / (n - 1).    (2)

A Geary's C value of less than 1.0 indicates positive correlation, 1.0 indicates no correlation, and values greater than 1.0 indicate negative correlation. Geary's index is a single-lag semivariogram value. In the Geary case, the lag definition corresponds to the non-zero entries of w, whereas in the semivariogram case the lag is explicitly defined as a distance. Since both spatial autocorrelation approaches measure how dispersed or clustered points are in space with regard to attribute values, they can be applied to any satellite sensor image.

Getis Index (Gi)
The Getis statistic (Getis and Ord, 1992; Ord and Getis, 1995) is computed as:

G_i(d) = \frac{\sum_{i=1}^{n} \sum_{j=1}^{n} w_{ij}(d)\, z_i z_j}{\sum_{i=1}^{n} \sum_{j=1}^{n} z_i z_j}, \quad \text{for } i \neq j.    (3)

The Gi is defined by a distance, d, within which areal units can be regarded as neighbors of i. The weight wij(d) is 1 if areal unit j is within d and 0 otherwise. The relationship among the neighboring points is determined by the distance threshold, d, whose value needs to be defined before computing the Gi. Some of the zi, zj pairs will not be considered in the calculation of the numerator if i and j are more than d away from each other. However, the calculation of the denominator includes all zi, zj pairs within a neighborhood under consideration, regardless of the distance between i and j. The index ranges from very small numbers near 0, when low values are near one another (cold spots), to near 1, when high values are near one another (hot spots). This index, which has been widely used in epidemiological investigations (Gatrell et al., 1996; Getis et al., 2003; Cecere et al., 2004), has received scant attention for image texture analysis (Wulder and Boots, 1998). The Getis index measures both first- and second-order effects, whereas the Geary and Moran indices assume no first-order effects. There is a



potential that the Gi approach could measure spatial arrangements of surface features more effectively since this approach is designed to capture both cold and hot spots.
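To make Equations 1 through 3 concrete, the sketch below computes Geary's C and the general Getis statistic with binary rook weights (d = one pixel) on a pair of toy grids: a high-valued cluster in a low-valued field, and its reverse. The grids and values are illustrative inventions, not those used in the study; the point is to show why the squared-difference form of Geary's C cannot separate hot spots from cold spots while the cross-product form of Gi can.

```python
import numpy as np

def rook_pairs(shape):
    """Unordered index pairs of rook-adjacent cells (d = 1 pixel)."""
    rows, cols = shape
    for r in range(rows):
        for c in range(cols):
            i = r * cols + c
            if c + 1 < cols:
                yield i, i + 1        # right-hand neighbor
            if r + 1 < rows:
                yield i, i + cols     # neighbor below

def gearys_c(grid):
    """Geary's C (Equation 1) with binary rook weights."""
    z = np.asarray(grid, dtype=float).ravel()
    s2 = np.sum((z - z.mean()) ** 2) / (z.size - 1)   # Equation 2
    pairs = list(rook_pairs(np.shape(grid)))
    num = sum((z[i] - z[j]) ** 2 for i, j in pairs)
    return num / (2 * len(pairs) * s2)

def getis_g(grid):
    """General Getis statistic (Equation 3) with binary rook weights."""
    z = np.asarray(grid, dtype=float).ravel()
    num = 2 * sum(z[i] * z[j] for i, j in rook_pairs(np.shape(grid)))
    den = z.sum() ** 2 - np.sum(z ** 2)               # all i != j pairs
    return num / den

hot = np.full((5, 5), 20.0); hot[1:4, 1:4] = 80.0    # hot spot
cold = np.full((5, 5), 80.0); cold[1:4, 1:4] = 20.0  # cold spot
# gearys_c gives ~0.625 for BOTH grids; getis_g separates them
# (~0.20 for the hot grid vs ~0.12 for the cold grid)
```

Because the weights are symmetric, using each unordered pair once in both the numerator and the weight sum leaves the ratio of Equation 1 unchanged relative to the doubled ordered sums.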

Methods

Study Area and Data
The effectiveness of the Gi compared to Geary's C in remote sensing was tested on two datasets. The first is represented by two simple hypothetical images: one containing a cold spot, where a cluster of cells with small values is surrounded by large-value cells (Figure 2a); the second a hot spot image, where a cluster of cells with large values is surrounded by smaller-value cells (Figure 2b). The second dataset is a small region in central Dallas, Texas (upper left latitude 33°46′N and longitude 96°44′W, lower right latitude 32°45′N and longitude 96°43′W). The area includes urban segments (commercial and residential) and undeveloped regions (grassland, forest, and open water), giving a diversity of land-use and land-cover classes (Plate 1). This region was selected since it offers an ideal site to study the effectiveness of the selected spatial approaches because (a) Dallas is a typical city that represents many medium- to large-sized cities across the United States; (b) the selected area contains a variety of standard urban land-use and land-cover classes; (c) downtown Dallas demonstrates a complex spatial assemblage of residential and commercial classes; and (d) the selected land-use classes include many land-covers that could lead to spectral confusion when using traditional per-pixel approaches. The data for the study were extracted from a pan-sharpened Ikonos image at 1 m spatial resolution with four channels: blue: B1 (0.45 to 0.52 μm), green: B2 (0.52 to 0.60 μm), red: B3 (0.63 to 0.69 μm), and near-infrared:

Figure 2. Hypothetical images with hot spot and cold spot: (a) Geary's C = 0.6251, Getis index = 0.3472, and (b) Geary's C = 0.6250, Getis index = 0.6356.


B4 (0.76 to 0.90 μm) acquired on 03 October 2004. A subset of the Ikonos image (1,191 pixels by 1,478 pixels) covers the study area defined above. The selected area is relatively small for the sake of computational efficiency, relative ease of classification specificity, and manual delineation of the selected classes using heads-up digitizing for the performance evaluation (Figure 3). We used all points or pixels in the study area for a better accuracy assessment. In addition to the accuracy assessment of comparing each pixel in the output image with a reference source, a total of 200 points with a minimum of 40 sample points per class, chosen using a stratified random sampling technique, was also employed. We identified one training sample per class for all five classes. Our land-use training samples (i.e., residential, commercial) contain many land-cover classes within their land-use areas (e.g., grass, trees, shrubs, rooftops, cement roads, tar roads, swimming pools, exposed soil).

Calculation of Geary's C and Gi
For the hypothetical image, we calculated Geary's C and the Gi on the entire data set. For the Dallas data, the two metrics were calculated in a 63 m × 63 m window and used as spatial-transformed bands in the classification algorithm. We selected this window size because we believe it to be large enough to include all selected classes and reasonably small to minimize overlap between classes. This window size is potentially a favorable local window since we consider it to be of the characteristic scale (the approximate minimum distance between two pixels to cover a particular class) for the most complex class (e.g., residential) among all selected land-use and land-cover classes. The window size is supported by the findings of Myint (2006b), which demonstrated a 63 m × 63 m window to be optimal in the separation of urban land-use and land-cover types. Myint (2006b) examined five different window sizes (i.e., 31 × 31, 63 × 63, 95 × 95, 127 × 127, 159 × 159) using Ikonos data covering a portion of the central part of the Dallas metropolitan area. Because the algorithm assigns the computed value to the center of the window as it progresses through the image, we lose (n − 1)/2 pixels on the top, bottom, left, and right sides of the image (where n = 63, our window size). To compensate for this, we performed a mirror extension of (63 − 1)/2 = 31 pixels around the image before calculating Geary's C and Gi (Myint, 2006a). The algorithm duplicates the digital numbers in the second-to-last row/column and appends them after the last row/column; it then duplicates the third-to-last row/column and appends it beyond that, and so on. The extension is performed during the computation, and the output contains the same number of rows and columns as the original image. An example illustrating the original image and the extended image is presented in Figure 4.

Gi Approach and Distance Threshold (d)
We believe that as distance increases and more pixels are considered, an unreliable or inconsistent Gi value is generated, which could lead to increased spectral confusion. For example, the computation of the Gi numerator for a vector of 2, 3, 4, 5, 6, and 7 will be 2 × 3, 3 × 4, 4 × 5, 5 × 6, and 6 × 7 if the distance threshold is one pixel. Note that this computation follows the original spatial arrangements of digital numbers in the example (Figure 1). However, if the distance threshold is four pixels, the computation of the numerator for the same vector will include 2 × 3, 3 × 4, 4 × 5, 5 × 6, 6 × 7, 2 × 4, 2 × 5, 2 × 6, 3 × 5, 3 × 6, 3 × 7, 4 × 6, 4 × 7, and 5 × 7. When the distance threshold is equal to one, only the adjacent pixels are incorporated into the analysis, whereas when the distance threshold is equal to


Plate 1. A false color composite of Ikonos 1-meter resolution data displaying channel 4 (0.76 to 0.90 μm) in red, channel 3 (0.63 to 0.69 μm) in green, and channel 2 (0.52 to 0.60 μm) in blue.

four, many combinations of digital numbers are considered regardless of their spatial adjacency. We again reference our hypothetical image of a residential area (Figure 1), this time to clarify the impact of the distance threshold. If the distance threshold is one pixel, we consider adjacency of 90 versus 50, 50 versus 50, 50 versus 75, 75 versus 75, 75 versus 25, and 25 versus 1. If we use the distance threshold of four, in addition to the above associations, we will also consider 90 versus 75, 50 versus 25, and 75 versus 1 in the computation. In a real world situation, the cement road (90) is not adjacent to the rooftop (75), the tree crown (50) is not located next to the

Figure 3. Manually interpreted and digitized map.


grassland (25), and the rooftop (75) is not connected to the tar road (1). In general, distance thresholds greater than one consider the spatial association of objects that are not adjacent to each other. This could lead to signature confusion, since many combinations of pixels that are not adjacent to each other enter the computation. It is therefore important to use adjacent regions or polygons, rather than a particular distance threshold, to describe the level of similarity to other regions or any other pattern occurring in space, because the distance is measured between the centroid of one irregular polygon and that of another. The distance between two irregular polygons thus depends on the size and shape of both polygons under consideration. Two adjacent polygons may not be included in the computation if i (the centroid of a polygon) and j (the centroid of an adjacent polygon) are more than d away from each other, and many polygons that are not adjacent to, or are far from, the starting polygon may be included in the computation if the polygons are small and d is very large, regardless of their adjacency and association. In general, we consider two polygons, grids, or pixels adjacent only if they physically touch each other in space. In the case of remotely sensed images, two pixels are considered adjacent if one side of the first pixel touches one side of the second pixel. This implies that 25 percent of a common boundary is needed for the rook's case when considering irregularly shaped polygons. In effect, a distance threshold of one pixel is the side of a pixel, i.e., the spatial resolution of the remote sensing image.

Comparison Between Geary's C and Gi
To test whether the spatial statistical information increases the classification accuracy, different combinations of spatial-transformed bands (Geary's C-transformed images (CT) and


Gi-transformed images (GT)) and the original spectral bands (B) were compared with the spectral bands alone (Table 1). The first combination includes only the original spectral bands (Combo-A) and provides a baseline for the other combinations. Combinations two through five (Combo-B through Combo-E) test different combinations of the Gi-transformed images with different spectral bands. Combinations six through nine (Combo-F through Combo-I) were used to test Geary's C-transformed images with different combinations of the spectral bands. We do not present all possible permutations of band combinations, since we expect that results from all band combinations would be excessive and make the study unclear and/or misleading. We believe that the above band combinations are representative combinations of the original and transformed bands.

Figure 5 demonstrates a step-by-step procedure to compute a Geary's C value that represents texture features of a 2 × 2 hypothetical image (2 × 2 local window). Only those "rook's case" pixels that share common borders as neighbors are considered in the demonstration. Figure 6 illustrates a step-by-step procedure to compute a Gi value that represents texture features of a 3 × 3 hypothetical image (3 × 3 local window). A 65 × 65 square window centered at randomly selected pixels in each category was used to identify training samples. Two subsets each from band 3 and band 4 were generated for all of the selected classes, leading to 20 training samples. These subsets were used to determine the optimum distance (d) for the Gi measure (Figure 7). We tested five distance thresholds: 1 pixel (1 m), 5 pixels (5 m), 9 pixels (9 m), 13 pixels (13 m), and 17 pixels (17 m). The computed Gi values of each subset were subject to discriminant analysis. The procedure generates a discriminant function (or, for more than two groups, a set of discriminant functions) based on linear combinations of the predictor variables, which provide the greatest discrimination between the groups.

Figure 4. An example showing mirror extension using a hypothetical image: (a) original image, and (b) extended image using a 5 × 5 moving window. A color version of this figure is available at the ASPRS website: www.asprs.org.

Accuracy Assessment
For an ideal evaluation and a precise comparison among different classification approaches, it is generally accepted that a "wall-to-wall" comparison (Lillesand et al., 2004), checking every pixel in an image, is the best approach (Myint, 2006a). Hence, we delineated the selected classes by careful visual interpretation using heads-up digitizing. All pixels (i.e., 1,191 × 1,478 = 1,760,298) in the manually digitized map were used to assess the accuracy instead of randomly sampling pixels from the image. The output map is assumed to be error-free, or at least highly accurate with negligible error. The digitization procedure was conducted with the help of sound local area knowledge and a careful ground survey. The manually digitized map (Figure 3) was treated as the reference data (ground survey data). The overall classification accuracy, user's accuracy, producer's accuracy, and kappa coefficient of the selected approaches were assessed by cross-validation on the whole data set. A total of 200 randomly selected points with a minimum of 40 sample points per class was also used to perform an accuracy assessment for Combo-B and Combo-F. The sample

TABLE 1. COMBINATIONS OF THE ORIGINAL AND SPATIAL-TRANSFORMED BANDS. NOTE: B = ORIGINAL BANDS; CT = GEARY'S C-TRANSFORMED IMAGES; GT = GI-TRANSFORMED IMAGES

Name     Gi-transformed bands    Geary's C-transformed bands    Original bands
Com-A    —                       —                              1, 2, 3, 4
Com-B    1, 2, 3, 4              —                              1, 2, 3, 4
Com-C    2, 3, 4                 —                              1, 2, 3, 4
Com-D    3, 4                    —                              1, 2, 3, 4
Com-E    4                       —                              1, 2, 3, 4
Com-F    —                       1, 2, 3, 4                     1, 2, 3, 4
Com-G    —                       2, 3, 4                        1, 2, 3, 4
Com-H    —                       3, 4                           1, 2, 3, 4
Com-I    —                       4                              1, 2, 3, 4



Figure 6. A worked example (raster) for the computation of Gi for a 3 × 3 image or local window.

Figure 5. A worked example (raster) for the computation of Geary's C autocorrelation statistics for a 2 × 2 image or local window.

points were displayed on the original Ikonos multispectral image data, and their classes were identified with the help of local area knowledge, the manually digitized map, and collected ground information. This additional accuracy assessment was performed to confirm the results of the two main approaches and band combinations that achieved the highest accuracies when using the wall-to-wall approach.
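For reference, the accuracy figures used throughout the comparison (overall accuracy, producer's and user's accuracy, and the kappa coefficient) can all be derived from a single error matrix. The sketch below is a generic illustration, not the authors' code, and the example matrix is invented for demonstration.

```python
import numpy as np

def accuracy_report(cm):
    """Accuracy measures from an error (confusion) matrix whose rows
    are reference classes and columns are mapped classes."""
    cm = np.asarray(cm, dtype=float)
    total = cm.sum()
    diag = np.diag(cm)
    overall = diag.sum() / total                        # overall accuracy
    producers = diag / cm.sum(axis=1)                   # omission side
    users = diag / cm.sum(axis=0)                       # commission side
    chance = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total ** 2
    kappa = (overall - chance) / (1 - chance)           # agreement beyond chance
    return overall, producers, users, kappa

cm = [[40, 10],
      [ 5, 45]]            # hypothetical 2-class error matrix
overall, producers, users, kappa = accuracy_report(cm)
# overall ~ 0.85, kappa ~ 0.70
```

The kappa coefficient discounts the agreement expected by chance from the row and column marginals, which is why it is reported alongside the raw overall accuracy in the tables that follow.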

Results

Hypothetical Image of Hot and Cold Spots
In our hypothetical images, the Geary's C values of the two images, one with a cold spot and one with a hot spot, were nearly identical (i.e., 0.6251 and 0.6250), whereas the Gi statistics of the images

were different (i.e., 0.3472 and 0.6356). This indicates that the Gi identified differences between similar clusters with large and small attribute values that Geary's C did not; hence, we can conclude that the Gi index is capable of distinguishing cold spots from hot spots in our hypothetical images, whereas Geary's C is not. These results imply that the Gi index may be more accurate than Geary's C in discriminating different land-use and land-cover classes with a similar clustering pattern.

Gi Approach and Distance Threshold (d)
The overall classification accuracies for the distance thresholds of 1 pixel (1 m), 5 pixels (5 m), 9 pixels (9 m), 13 pixels (13 m), and 17 pixels (17 m) for the selected land-use and land-cover samples were found to be 75 percent, 65 percent, 55 percent, 50 percent, and 30 percent, respectively (Table 2). The shortest distance threshold (i.e., 1 pixel or 1 m) achieved the highest overall accuracy (75 percent) and the highest kappa coefficient (0.69). Accuracy consistently decreased with increasing distance threshold. In the case of a one-pixel distance threshold, only those zi and zj pairs that were within one pixel of each other were considered in the calculation of the numerator. In other words, the relationship among the neighboring points is based on those pixels that are contiguous. The computations at the other distance thresholds (i.e., 5, 9, 13, and 17 pixels) considered not only adjacent pixels but all pixels within the distance limit.
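This sensitivity to the distance threshold can be reproduced on the six-pixel transect used earlier in the Methods (digital numbers 2 through 7). The helper below is an illustrative sketch for a 1-D transect, not the authors' routine; it admits every zi·zj product with 0 < |i − j| ≤ d into the numerator, while the denominator always uses all pairs.

```python
import numpy as np

def gi_1d(z, d):
    """General Getis statistic for a 1-D transect with binary
    distance-band weights: wij = 1 iff 0 < |i - j| <= d."""
    z = np.asarray(z, dtype=float)
    n = z.size
    num = sum(z[i] * z[j]
              for i in range(n) for j in range(n)
              if i != j and abs(i - j) <= d)
    den = z.sum() ** 2 - np.sum(z ** 2)   # every i != j pair
    return num / den

transect = [2, 3, 4, 5, 6, 7]
# d = 1 admits only the 5 adjacent products into the numerator;
# d = 4 admits 9 additional non-adjacent products, pushing Gi upward
```

As d grows, ever more non-adjacent products enter the numerator and the statistic drifts toward 1 regardless of local structure, which is consistent with the accuracy loss observed at the larger thresholds.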


Figure 7. Selected training samples (i.e., 63 × 63) displaying channel 3 (0.63 to 0.69 μm) in gray scale: (a) commercial, (b) forest, (c) grassland, (d) residential, and (e) water.

TABLE 2. OVERALL ACCURACIES OF DIFFERENT DISTANCE THRESHOLDS (1 PIXEL (1 M), 5 PIXELS (5 M), 9 PIXELS (9 M), 13 PIXELS (13 M), AND 17 PIXELS (17 M)) USING GI AND A DISCRIMINANT FUNCTION

                           Commercial   Forest    Grass    Residential   Water
Distance 1 pixel
  Producer's accuracy      100.0%       57.1%     100.0%   100.0%        66.7%
  User's accuracy          50.0%        100.0%    50.0%    75.0%         100.0%
  Overall Accuracy = 75.0%     Kappa = 0.69
Distance 5 pixels
  Producer's accuracy      40.0%        44.4%     42.9%    20.0%         33.3%
  User's accuracy          50.0%        100.0%    75.0%    25.0%         75.0%
  Overall Accuracy = 65.0%     Kappa = 0.46
Distance 9 pixels
  Producer's accuracy      0.0%         60.0%     42.9%    60.0%         66.7%
  User's accuracy          0.0%         75.0%     75.0%    75.0%         50.0%
  Overall Accuracy = 55.0%     Kappa = 0.44
Distance 13 pixels
  Producer's accuracy      0.0%         60.0%     42.9%    50.0%         66.7%
  User's accuracy          0.0%         75.0%     75.0%    50.0%         50.0%
  Overall Accuracy = 50.0%     Kappa = 0.38
Distance 17 pixels
  Producer's accuracy      0.0%         25.0%     37.5%    50.0%         0.0%
  User's accuracy          0.0%         25.0%     75.0%    50.0%         0.0%
  Overall Accuracy = 30.0%     Kappa = 0.13

Combination of Spatial and Spectral Information
Table 3 reports the accuracy results for Combo-A, showing that the original bands alone yielded an overall accuracy of 60.99 percent and a kappa coefficient of 0.38. Since these values are low, we conclude that spectral bands alone are not effective in identifying urban land-use and land-cover classes.

Table 4 reports the results of the Gi-transformed band combinations (Combo-B through Combo-E). Combo-B achieved an overall accuracy as high as 78.47 percent and a kappa coefficient of 0.58. The highest user's accuracy (100 percent) was produced by water, and the highest producer's accuracy (96.32 percent) by residential. Combo-C had an overall accuracy of 76.4 percent and a kappa coefficient of 0.55; although not as high as Combo-B, we believe this is an acceptable level of accuracy. Combo-D achieved an overall accuracy of 74.8 percent and a kappa coefficient of 0.53, and Combo-E an overall accuracy of 70.1 percent and a kappa coefficient of 0.50. We found that the water category gave a 100 percent user's accuracy for all of the above band combinations. We also consistently found spectral confusion between



the residential and commercial categories and between the residential and grassland categories. This could be because both residential and commercial areas contain similar land-covers, such as large impervious areas and grassy areas. In all combinations, the highest producer's and user's accuracies were for the residential and water categories. In most cases, the grassland category gave the second highest user's accuracy. In general, the overall accuracy dropped about 2 percent each time we removed one spatial-transformed band; however, the accuracy dropped 4 percent when we went from two spatial-transformed bands to one. A combination of all the original bands and one spatial-transformed band gave the lowest accuracy of these combinations (70.1 percent).

Table 5 summarizes the results of the different combinations of Geary's C-transformed images and the spectral bands (Combo-F to Combo-I). Combo-F, Combo-G, Combo-H, and Combo-I produced overall accuracies of 77.7 percent, 74.8 percent, 70.1 percent, and 72.9 percent, and kappa coefficients of 0.61,


TABLE 3. OVERALL, PRODUCER'S AND USER'S ACCURACIES OF ALL THE ORIGINAL BANDS (i.e., B1 + B2 + B3 + B4)

Land Use / Land Cover       Commercial   Forest    Grass   Residential   Water
Combo-A
  Producer's accuracy            66.8%    68.6%    34.5%         61.8%   88.6%
  User's accuracy                51.1%    34.8%    67.4%         71.7%   99.9%
  Overall Accuracy = 61.0%;  Kappa = 0.38

TABLE 4. OVERALL, PRODUCER'S AND USER'S ACCURACIES OF THE DIFFERENT COMBINATIONS OF GI-TRANSFORMED IMAGES AND THE SPECTRAL BANDS (COMBO-B THROUGH COMBO-E)

Land Use / Land Cover       Commercial   Forest    Grass   Residential   Water
Combo-B
  Producer's accuracy            59.1%    57.9%    29.5%         96.3%   67.0%
  User's accuracy                75.2%    79.9%    89.9%         77.7%  100.0%
  Overall Accuracy = 78.5%;  Kappa = 0.58
Combo-C
  Producer's accuracy            60.6%    56.1%    18.3%         94.7%   68.1%
  User's accuracy                66.5%    65.7%    97.4%         78.2%  100.0%
  Overall Accuracy = 76.4%;  Kappa = 0.55
Combo-D
  Producer's accuracy            58.8%    55.0%    36.2%         89.5%   68.1%
  User's accuracy                60.9%    63.6%    93.7%         77.3%  100.0%
  Overall Accuracy = 74.8%;  Kappa = 0.53
Combo-E
  Producer's accuracy            76.3%    56.9%    37.0%         76.8%   68.2%
  User's accuracy                47.8%    51.6%    93.6%         80.4%  100.0%
  Overall Accuracy = 70.1%;  Kappa = 0.50

TABLE 5. OVERALL, PRODUCER'S AND USER'S ACCURACIES OF THE DIFFERENT COMBINATIONS OF GEARY'S C-TRANSFORMED IMAGES AND THE SPECTRAL BANDS (COMBO-F TO COMBO-I)

Land Use / Land Cover       Commercial   Forest    Grass   Residential   Water
Combo-F
  Producer's accuracy            75.2%    59.0%    58.5%         85.3%   67.0%
  User's accuracy                56.8%    97.6%    70.9%         83.0%  100.0%
  Overall Accuracy = 77.7%;  Kappa = 0.61
Combo-G
  Producer's accuracy            73.7%    52.0%    55.2%         82.6%   67.1%
  User's accuracy                59.6%    59.8%    75.2%         80.7%  100.0%
  Overall Accuracy = 74.8%;  Kappa = 0.56
Combo-H
  Producer's accuracy            77.2%    64.4%    50.7%         72.8%   67.0%
  User's accuracy                50.5%    47.0%    85.7%         81.4%  100.0%
  Overall Accuracy = 70.1%;  Kappa = 0.51
Combo-I
  Producer's accuracy            67.1%    53.0%    56.0%         81.0%   64.2%
  User's accuracy                55.4%    56.4%    78.9%         79.0%  100.0%
  Overall Accuracy = 72.9%;  Kappa = 0.53



0.56, 0.51, and 0.53, respectively. The overall accuracy of the Geary's C-transformed bands dropped about 3 to 4 percent each time one band was dropped. An exception to this trend occurred for the last combination (Combo-I), whose accuracy is approximately 3 percent higher than that produced by Combo-H, which contains two spatial-transformed bands. This implies that the Geary's C approach does not improve accuracy consistently with an increasing number of spatial-transformed bands. As with the Gi-transformed bands, the highest user's accuracy and the highest producer's accuracy were produced by water and residential, respectively.

Even though the highest overall accuracy given by the Gi approach (78.5 percent) is slightly higher than the highest accuracy produced by the Geary's C approach (77.7 percent), we cannot make a firm conclusion as to which approach is better, since Geary's C received a better kappa coefficient (0.61) than Gi (0.58). This implies that an observed classification for Geary's C is on average 61 percent more accurate than one resulting from chance, whereas Gi is only 58 percent better. The user's accuracy for the residential class (83.0 percent) was also better when using Geary's C. Grassland showed a similar situation: only 70.91 percent of the areas categorized as grassland within the classification were truly of that category when using Gi, compared with 89.85 percent when using Geary's C.

Table 6 reports the overall, producer's, and user's accuracies of Combo-B and Combo-F using a stratified random sampling approach. We found that the overall accuracy of the output map generated by a combination of all the original bands and the Gi-transformed bands reached 81.5 percent, slightly higher than the accuracy obtained when comparing every pixel in the image. A combination of all the original bands and the Geary's C-transformed bands produced a lower accuracy (77.0 percent).
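For readers unfamiliar with the statistic used throughout these comparisons, a minimal sketch (not the paper's code) of Cohen's kappa computed from a confusion matrix follows; the example matrix in the test is hypothetical.

```python
import numpy as np

def kappa(cm):
    """Cohen's kappa coefficient from a square confusion matrix
    (rows = reference classes, columns = mapped classes)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                                  # observed agreement
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2  # chance agreement
    return (po - pe) / (1.0 - pe)
```

A kappa of 0.61 thus means the classification agrees with the reference data 61 percent better than would be expected by chance alone, which is roughly the interpretation used above.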
In this accuracy assessment, the kappa coefficient of the Gi approach (0.77) is also higher than that of the Geary's C approach (0.71). Figures 8 and 9 show the classified maps based on the combination of all the original bands and the Gi-transformed bands (Combo-B) and on the combination of all the original bands and the Geary's C-transformed bands (Combo-F). Even though the Gi is capable of identifying spatial concentrations of hot and cold spots, a classic spatial autocorrelation measure such as Geary's C may be more effective in measuring how dispersed, uniformly distributed, or clustered points are in space with regard to their characteristic values.
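The stratified random sampling used in this assessment can be sketched as follows. This is a generic illustration with a hypothetical helper, not the authors' procedure: an equal number of reference pixels is drawn at random from each mapped class.

```python
import numpy as np

def stratified_sample(class_map, n_per_class, seed=None):
    """Draw up to n_per_class random (row, col) reference pixels
    from every class present in a classified map."""
    rng = np.random.default_rng(seed)
    samples = {}
    for c in np.unique(class_map):
        rows, cols = np.nonzero(class_map == c)
        pick = rng.choice(len(rows), size=min(n_per_class, len(rows)),
                          replace=False)
        samples[int(c)] = [(int(r), int(k))
                           for r, k in zip(rows[pick], cols[pick])]
    return samples
```

Sampling an equal number of pixels per stratum keeps rare classes (such as water) from being under-represented in the error matrix.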

Figure 8. Classified map generated by a combination of all the original bands and the Gi-transformed bands (i.e., B1 + B2 + B3 + B4 + GT-B1 + GT-B2 + GT-B3 + GT-B4). A color version of this figure is available at the ASPRS website: www.asprs.org.

Figure 9. Classified map generated by a combination of all the original bands and the Geary's C-transformed bands (i.e., B1 + B2 + B3 + B4 + CT-B1 + CT-B2 + CT-B3 + CT-B4). A color version of this figure is available at the ASPRS website: www.asprs.org.

TABLE 6. OVERALL, PRODUCER'S AND USER'S ACCURACIES OF COMBO-B (i.e., B1 + B2 + B3 + B4 + GT-B1 + GT-B2 + GT-B3 + GT-B4) AND COMBO-F (i.e., B1 + B2 + B3 + B4 + CT-B1 + CT-B2 + CT-B3 + CT-B4) USING A STRATIFIED RANDOM SAMPLING APPROACH

Land Use / Land Cover       Commercial   Forest    Grass   Residential   Water
Combo-B
  Producer's accuracy            87.5%    85.7%    63.9%         86.5%   95.2%
  User's accuracy                70.0%    60.0%    97.5%         80.0%  100.0%
  Overall Accuracy = 81.5%;  Kappa = 0.77
Combo-F
  Producer's accuracy            67.9%    92.9%    77.1%         55.8%   93.0%
  User's accuracy                47.5%    97.5%    67.5%         72.5%  100.0%
  Overall Accuracy = 77.0%;  Kappa = 0.71



Discussion
The results of the current study neither confirm nor deny the challenge to Warner et al. (1999) by Lee and Wong (2001) on whether measuring hot and cold spots (Gi) rather than just spatial association (Geary's C) improves the accuracy of urban land-use classification. Rather, we offer insights into how spatial-transformed bands can be used to improve classification accuracy in urban land-use mapping.

One of the first challenges was to determine the best distance threshold (d) for measuring the Gi statistic. This study revealed that a small threshold should be used for Gi to characterize spatial patterns in raster data: for equally spaced grid cells or pixels and their attribute values, the calculation should consider only adjacent cells (the rook's case) to determine the degree to which similar or different objects in space are closely associated with each other. This also applies to irregular regions and their attribute values (e.g., census-tract-level administrative boundaries in vector format). For an effective measure of spatial clustering, this study found that polygons that do not touch each other should not be considered spatially associated regions, even if they lie within a particular distance threshold. Moreover, two polygons should be considered adjacent only if their common boundary is equal to or longer than half the distance from the centroid of the smaller polygon to that boundary; note that the distance between their centroids is then always less than the distance threshold. This can be taken as a general rule of thumb for measuring the spatial association of irregular polygons in space.

We also suggest that there is no single window size that can be used effectively for all applications, and no clear method yet exists for determining the optimal window size prior to classification.
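The polygon rule of thumb above can be encoded directly. The function below is a hypothetical helper of our own (not the authors' code); it assumes the shared boundary length and the distance from the smaller polygon's centroid to that boundary have already been measured with a GIS.

```python
def considered_adjacent(shared_boundary_len, centroid_to_boundary,
                        touching=True):
    """Rule of thumb for irregular polygons: polygons that do not touch
    are never spatially associated; touching polygons count as adjacent
    only if the common boundary is at least half the distance from the
    smaller polygon's centroid to that boundary."""
    if not touching:
        return False
    return shared_boundary_len >= 0.5 * centroid_to_boundary
```

For example, two touching tracts sharing a 10-unit boundary, with the smaller tract's centroid 6 units from that boundary, would be treated as adjacent (10 >= 3), whereas a 2-unit shared boundary would not.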
An ideal approach is to test different spatial techniques with different classification options (i.e., different distance thresholds, window sizes, band combinations, classes, and samples) and to compare the results for a particular type of satellite data against specific objectives. We conclude that, under real-world conditions, the success of the above geospatial approaches for urban mapping may vary somewhat, and these tasks should be performed with an awareness of their limitations.

A third outcome from this study is that a combination of spectral (per-pixel) and spatial information (i.e., Geary's C or Gi) outperforms spectral information alone. In other words, including a measure of local variability improved the classification accuracy. In general, the highest producer's accuracies were given by either residential or water for all band combinations. This is not surprising, since these tend to be the most homogeneous (water) and most heterogeneous (residential) classes. In most cases there was spectral confusion between residential and commercial and between residential and grassland, since most residential and commercial areas contain large impervious and grassy areas. A combination of all the original bands and the Gi-transformed bands gave the highest accuracy and is considered the better approach. The classification accuracy achieved by combining the spatial-transformed bands with original bands 2, 3, and 4 (a false color composite) is comparable to that of the combination of all the original and spatial-transformed bands. This study also found that spatial-transformed bands are more powerful than per-pixel spectral information from the original bands alone in increasing classification accuracy.
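The band-stacking strategy behind the Combo experiments can be sketched as follows. The paper classifies the stacked features with a discriminant function; in this illustrative sketch a simple nearest-centroid rule stands in for it, so the code demonstrates the feature construction rather than the exact classifier.

```python
import numpy as np

def stack_features(bands, spatial_bands):
    """Stack original spectral bands with spatial-transformed bands
    (e.g., B1..B4 + GT-B1..GT-B4 for Combo-B) into a
    (pixels, features) design matrix."""
    layers = list(bands) + list(spatial_bands)
    return np.stack([b.ravel() for b in layers], axis=1)

def nearest_centroid(train_X, train_y, X):
    """Assign each pixel to the class whose mean feature vector is
    closest (a crude stand-in for a discriminant function)."""
    classes = np.unique(train_y)
    cents = np.array([train_X[train_y == c].mean(axis=0) for c in classes])
    dists = ((X[:, None, :] - cents[None, :, :]) ** 2).sum(axis=2)
    return classes[np.argmin(dists, axis=1)]
```

Dropping a spatial-transformed band simply removes one column from the design matrix, which is how the Combo-B through Combo-E (and Combo-F through Combo-I) variants differ.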
Finally, we are unable to confirm or reject our hypothesis that the Gi approach is better than Geary's C at quantifying local variability in remotely sensed images and using that information to improve classification accuracy. We were able

to confirm that Gi identifies hot spots and cold spots in our hypothetical images, but it did not always improve all aspects of classification accuracy. In the Dallas dataset, the overall accuracy of Gi is slightly higher than that of Geary's C, whereas the kappa coefficient of Geary's C is slightly higher than that of Gi. In this case, Geary's C was more effective in characterizing the spatial arrangements of objects and features as coarse and smooth textures, while Gi was more accurate in distinguishing a group of bright pixels from a group of dark pixels belonging to two different homogeneous features or textures.

In general, we believe that spatial autocorrelation may be relatively more powerful than Gi in characterizing complex spatial arrangements of objects and features in the classification of remotely sensed images. This may be because the objects and features that form different land-use and land-cover classes in images do not always correspond to hot and cold spots. In some cases, it may be more important to characterize how objects and features are arranged in space (their degree of dispersion or clustering) than what type of objects and features (dark or bright pixels) are close to each other. We expect that the Gi index is not as effective as Geary's C in detecting how dispersed objects are in space. This matters because there may be different arrangements of objects (clustered, dispersed, uniform) even within a particular land-use category. For example, high-density residential, low-density residential, residential with tree crown closure above 50 percent, and residential with tree crown closure below 50 percent will each exhibit different spatial arrangements of land-covers (e.g., roads, trees, grass, rooftops) even though they all fall under the same land-use category (residential).
Since spatial autocorrelation approaches (i.e., Moran's I and Geary's C) are designed to measure spatial dissimilarity (Goodchild, 1986), they may be more accurate in detecting the degree of dispersion or clustering, even though they cannot distinguish hot from cold spots.
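That distinction can be demonstrated numerically. The toy functions below (our own, and deliberately unstandardized) compute a Geary-style sum of squared neighbour differences and a crude Gi-style ratio over rook neighbours: inverting the grey levels of an image (turning a hot spot into a cold spot) leaves the Geary-style quantity unchanged but shifts the Gi-style ratio.

```python
import numpy as np

ROOK = ((-1, 0), (1, 0), (0, -1), (0, 1))

def geary_numerator(img):
    """Sum of squared rook-neighbour differences (the numerator of
    Geary's C); invariant when the grey levels are inverted, so a
    bright cluster and a dark cluster look identical."""
    s, (rows, cols) = 0.0, img.shape
    for i in range(rows):
        for j in range(cols):
            for di, dj in ROOK:
                ii, jj = i + di, j + dj
                if 0 <= ii < rows and 0 <= jj < cols:
                    s += (img[i, j] - img[ii, jj]) ** 2
    return s

def gi_ratio(img):
    """Crude Gi-style ratio: summed rook-neighbour values over the
    image total; sensitive to whether the cluster is bright or dark."""
    num, (rows, cols) = 0.0, img.shape
    for i in range(rows):
        for j in range(cols):
            for di, dj in ROOK:
                ii, jj = i + di, j + dj
                if 0 <= ii < rows and 0 <= jj < cols:
                    num += img[ii, jj]
    return num / img.sum()
```

For a 3 × 3 image with a bright 2 × 2 cluster and its inverted (dark-cluster) counterpart, geary_numerator returns the same value for both while gi_ratio differs, mirroring the behaviour reported for the hypothetical images above.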

Conclusion
High-resolution imagery such as Ikonos offers the promise of highly detailed and accurate classified images. This promise, however, is offset by the conundrum of not being able to see the forest for the trees. The ability to group sets of pixels with cognitively similar characteristics but different spectral information into meaningful objects is a skill not yet mastered by automated techniques. In this paper, we compared Geary's C and the Gi as statistical measures of the local variability of complex urban land-cover and land-use. We are able to support the existing literature in that measuring local variability improves classification over spectral information alone. We were unable, however, to determine whether Geary's C or the Gi approach is better. Both have advantages and limitations, and hence no single approach is optimal for all applications.

Acknowledgments This research has been supported by the National Science Foundation (Grant No. 0351899). The authors wish to thank the anonymous reviewers for their constructive comments and suggestions.

References
Almeida-Filho, R., F.P. Miranda, J.A. Lorenzzetti, E.C. Pedroso, C.H. Beisl, L. Landau, M.C. Baptista, and E.G. Camargo, 2005. RADARSAT-1 images in support of petroleum exploration: The offshore Amazon River mouth example, Canadian Journal of Remote Sensing, 31(4):289–303.


Aplin, P., 2003. Comparison of simulated Ikonos and SPOT HRV imagery for classifying urban areas, Remotely Sensed Cities (V. Mesev, editor), Taylor & Francis, London.
Arai, K., 1993. A classification method with a spatial-spectral variability, International Journal of Remote Sensing, 14:699–709.
Atkinson, P.M., and P. Lewis, 2000. Geostatistical classification for remote sensing: An introduction, Computers and Geosciences, 26(4):361–371.
Barnsley, M.J., S.L. Barr, and G.J. Sadler, 1991. Spatial re-classification of remotely sensed images for urban land-use monitoring, Proceedings of Spatial Data 2000, Oxford, 17–20 September, Remote Sensing Society, Nottingham, pp. 106–117.
Barr, S.L., M.J. Barnsley, and A. Steel, 2004. On the separability of urban land-use categories in fine spatial scale land-cover data using structural pattern recognition, Environment and Planning B, 31:397–418.
Boucher, A., and P.C. Kyriakidis, 2006. Super-resolution land-cover mapping with indicator geostatistics, Remote Sensing of Environment, 104(3):264–282.
Boucher, A., K.C. Seto, and A.G. Journel, 2006. A novel method for mapping land-cover changes: Incorporating time and space with geostatistics, IEEE Transactions on Geoscience and Remote Sensing, 44(11):3427–3435.
Carr, J.R., and F.P. de Miranda, 1998. The semivariogram in comparison to the co-occurrence matrix for classification of image texture, IEEE Transactions on Geoscience and Remote Sensing, 36(6):1945–1952.
Cecere, M.C., G.M. Vazquez-Prokopec, R.E. Gurtler, and U. Kitron, 2004. Spatio-temporal analysis of reinfestation by Triatoma infestans (Hemiptera: Reduviidae) following insecticide spraying in a rural community in northwestern Argentina, American Journal of Tropical Medicine and Hygiene, 71(6):803–810.
Chubey, M.S., S.E. Franklin, and M.A. Wulder, 2006. Object-based analysis of Ikonos-2 imagery for extraction of forest inventory parameters, Photogrammetric Engineering & Remote Sensing, 72(4):383–394.
Cliff, A.D., and J.K. Ord, 1973. Spatial Autocorrelation, Pion Press, London.
Cressie, N., and J. Kornak, 2003. Spatial statistics in the presence of location error with an application to remote sensing of the environment, Remote Sensing of Environment, 18(4):436–456.
Curran, P.J., 1988. The semivariogram in remote sensing: An introduction, Remote Sensing of Environment, 24(3):493–507.
Dell'Acqua, F., P. Gamba, and G. Trianni, 2006. Semi-automatic choice of scale-dependent features for satellite SAR image classification, Pattern Recognition Letters, 27(4):244–251.
Edwards, G., R. Landary, and K.P.B. Thomson, 1988. Texture analysis of forest regeneration sites in high-resolution SAR imagery, Proceedings of the International Geoscience and Remote Sensing Symposium (IGARSS '88), ESA SP-284, European Space Agency, Paris, pp. 1355–1360.
Emerson, C.W., N.S.N. Lam, and D.A. Quattrochi, 1999. Multi-scale fractal analysis of image texture and pattern, Photogrammetric Engineering & Remote Sensing, 65(1):51–61.
Espindola, G.M., G. Camara, I.A. Reis, L.S. Bins, and A.M. Monteiro, 2006. Parameter selection for region-growing image segmentation algorithms using spatial autocorrelation, International Journal of Remote Sensing, 27(14):3035–3040.
Gatrell, A.C., T.C. Bailey, P.J. Diggle, and B.S. Rowlingson, 1996. Spatial point pattern analysis and its application in geographical epidemiology, Transactions of the Institute of British Geographers, NS 21:256–274.
Geary, R., 1954. The contiguity ratio and statistical mapping, The Incorporated Statistician, 5:115–145.
Getis, A., and J.K. Ord, 1992. The analysis of spatial association by use of distance statistics, Geographical Analysis, 24(3):1269–1277.
Getis, A., A.C. Morrison, K. Gray, and T.W. Scott, 2003. Characteristics of the spatial pattern of the dengue vector, Aedes aegypti, Iquitos,


Peru, American Journal of Tropical Medicine and Hygiene, 69(5):494–505.
Goodchild, M.F., 1986. Spatial Autocorrelation: Concepts and Techniques in Modern Geography, CATMOG 47, GeoBooks, Norwich, 56 p.
Hagen-Zanker, A., 2006. Map comparison methods that simultaneously address overlap and structure, Journal of Geographical Systems, 8(2):165–185.
Hay, G.J., T. Blaschke, D.J. Marceau, and A. Bouchard, 2003. A comparison of three image-object methods for the multiscale analysis of landscape structure, Journal of Photogrammetry and Remote Sensing, 57:327–345.
Henry, M.C., and S.R. Yool, 2002. Characterizing fire-related spatial patterns in the Arizona Sky Islands using Landsat TM data, Photogrammetric Engineering & Remote Sensing, 68(10):1011–1019.
Herold, M., N.C. Goldstein, and K.C. Clarke, 2003. The spatiotemporal form of urban growth, Remote Sensing of Environment, 86:286–302.
Hyppanen, H., 1996. Spatial autocorrelation and optimal spatial resolution of optical remote sensing data in boreal forest environment, International Journal of Remote Sensing, 17(17):3441–3452.
LeDrew, E.F., H. Holden, M.A. Wulder, C. Derksen, and C. Newman, 2004. A spatial statistical operator applied to multidate satellite imagery for identification of coral reef stress, Remote Sensing of Environment, 91:271–279.
Lee, J., and D. Wong, 2001. Statistical Analysis with ArcView GIS, John Wiley and Sons, New York, 192 p.
Lillesand, T.M., R.W. Kiefer, and J.W. Chipman, 2004. Remote Sensing and Image Interpretation, Fifth edition, John Wiley and Sons, New York, 763 p.
Medda, F.P., P. Nijkamp, and P. Rietveld, 1998. Recognition and classification of urban shapes, Geographical Analysis, 30:304–314.
Myint, S.W., 2003. Fractal approaches in texture analysis and classification of remotely sensed data: Comparisons with spatial autocorrelation techniques and simple descriptive statistics, International Journal of Remote Sensing, 24(9):1925–1947.
Myint, S.W., 2006a. A new framework for effective urban land-use land-cover classification: A wavelet approach, GIScience and Remote Sensing, 43(2):155–178.
Myint, S.W., 2006b. Multi-resolution decomposition in relation to characteristic scales and local window sizes using an operational wavelet algorithm (under review).
Ord, J., and A. Getis, 1995. Local spatial autocorrelation statistics: Distributional issues and an application, Geographical Analysis, 27:286–306.
Pardo-Igúzquiza, E., M. Chica-Olmo, and P.M. Atkinson, 2006. Downscaling cokriging for image sharpening, Remote Sensing of Environment, 102(1–2):86–98.
Purkis, S.J., S. Myint, and B. Riegl, 2006. Enhanced detection of the coral Acropora cervicornis from satellite imagery using a textural operator, Remote Sensing of Environment, 101:82–94.
Read, J.M., and N.S.N. Lam, 2002. Spatial methods for characterising land-cover and detecting land-cover changes for the tropics, International Journal of Remote Sensing, 24(9):1925–1947.
Sadler, G.J., M.J. Barnsley, and S.L. Barr, 1991. Information extraction from remotely-sensed images for urban land analysis, Proceedings of the Second European Conference on Geographical Information Systems (EGIS '91), Brussels, Belgium, April, EGIS Foundation, Utrecht, pp. 955–964.
St-Onge, B.A., and F. Cavayas, 1997. Automated forest structure mapping from high resolution imagery based on directional semivariogram estimates, Remote Sensing of Environment, 102(1–2):86–98.
Stein, A., W.G.M. Bastiaanssen, S. De Bruin, A.P. Cracknell, P.J. Curran, A.G. Fabbri, B.G.H. Gorte, J.W. Van Groenigen, F.D. Van der Meer, and A. Saldana, 1998. Integrating spatial statistics and remote sensing, International Journal of Remote Sensing, 19(9):1793–1814.



Thomas, N., C. Hendrix, and R.G. Congalton, 2003. A comparison of urban mapping methods using high-resolution digital imagery, Photogrammetric Engineering & Remote Sensing, 69(9):963–972.
Warner, T.A., and M.C. Shank, 1997. Spatial autocorrelation analysis of hyperspectral imagery for feature selection, Remote Sensing of Environment, 60:58–70.
Warner, T.A., K. Steinmaus, and H. Foote, 1999. An evaluation of spatial autocorrelation feature selection, International Journal of Remote Sensing, 20(8):1601–1616.


Wentz, E.A., 2000. Shape definition for geographic applications based on edge, elongation, and perforation, Geographical Analysis, 32:204–213.
Woodcock, C., and V.J. Harward, 1992. Nested-hierarchical scene models and image segmentation, International Journal of Remote Sensing, 13:3167–3187.
Wulder, M., and B. Boots, 1998. Local spatial autocorrelation characteristics of remotely sensed imagery assessed with the Getis statistic, International Journal of Remote Sensing, 19:2223–2231.
