Friday, March 22, 2019

Lab 5 - Classification Accuracy Assessment

Goal and Background

The primary goal of this lab exercise was to develop an understanding of how the accuracy of image classification results is evaluated.  Because accuracy assessment is a necessary step in image classification and an important part of image post-processing, it was a concept worth understanding well.  To conduct the accuracy assessment for this lab, we first collected ground reference testing samples and then used those samples to assess the accuracy of our classified images.

Methods

Part 1: Generating ground reference testing samples for accuracy assessment

Part 1 of this lab was dedicated to the collection of ground reference testing samples that would later be used to judge the accuracy of a classified image.  To do this, we first opened our unsupervised classification image from Lab 3 - Unsupervised Classification in one viewer in ERDAS Imagine, followed by a high resolution aerial reference image of the same study area in another viewer.  Next, we clicked Raster>Supervised>Accuracy Assessment to open the Accuracy Assessment dialog window.  Once it was open, we brought our unsupervised classification image into the tool using the folder connection button and then, using the Select Viewer button, selected the viewer containing the reference image of the study area in which the ground reference sample points would be created.  With this done, we clicked Edit>Create/Add Random Points in the Accuracy Assessment window.  This opens the Random Points dialog box, which was used to select the five informational classes of our classified image using the Select Classes button, set the number of points to 125, select the Stratified Random distribution parameter, and set the Minimum Points to 15.  Once all these parameters were set, the ground reference testing samples were created so that they could be used to conduct an accuracy assessment of the unsupervised image.
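ERDAS generates these points internally, but for illustration, here is a minimal Python sketch of what stratified random sampling over a classified raster might look like.  The function name and inputs are hypothetical, chosen only to mirror the dialog settings above (125 points, a 15-point per-class minimum), and the classified image is assumed to be available as a NumPy array of class labels.

```python
import numpy as np

def stratified_random_points(classified, n_points=125, min_per_class=15, seed=42):
    """Draw stratified random sample points from a classified raster.

    Points are allocated to classes in proportion to class area, but each
    class receives at least `min_per_class` points (mirroring the Minimum
    Points setting in the ERDAS dialog).
    """
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(classified, return_counts=True)
    # Proportional allocation, then enforce the per-class minimum.
    alloc = np.maximum(np.round(n_points * counts / counts.sum()).astype(int),
                       min_per_class)
    points = []
    for cls, n in zip(classes, alloc):
        rows, cols = np.nonzero(classified == cls)
        idx = rng.choice(len(rows), size=min(n, len(rows)), replace=False)
        points.extend((r, c, cls) for r, c in zip(rows[idx], cols[idx]))
    return points  # (row, col, class) triples to check against the reference

# Example: a toy 100x100 image with five classes
demo = np.random.default_rng(0).integers(1, 6, size=(100, 100))
pts = stratified_random_points(demo)
```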

Part 2: Performing Accuracy Assessment

Once all of the ground reference testing sample points were properly generated, the next step was to analyze them to conduct the accuracy assessment.  To make the analysis easier, only 10 of the 125 points were set to be visible on the reference image at a time.  Each individual point was zoomed in on to see exactly which LULC class it fell into, and the point was assigned a value of 1-5 representing that class.  Next, in the Accuracy Assessment window, Report>Accuracy Assessment was clicked to generate the accuracy assessment report.  This report provides statistics such as overall accuracy, producer's accuracy, and user's accuracy, as well as the information necessary to fill in the error matrix table.
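The report's core statistics are simple ratios over the error matrix.  Below is a short Python sketch of how overall, producer's, and user's accuracy are derived from an error matrix; the matrix values here are made up for illustration and are not the results from this lab.

```python
import numpy as np

# Hypothetical 5x5 error matrix: rows = classified labels, columns = reference
# labels (water, forest, agriculture, urban, bare soil). Values are invented
# for illustration only -- they are not this lab's results.
error_matrix = np.array([
    [21,  0,  1,  0,  0],
    [ 0, 28,  3,  0,  2],
    [ 2,  4, 25,  1,  3],
    [ 0,  0,  2, 14,  2],
    [ 1,  1,  4,  2, 12],
])

diag = np.diag(error_matrix)
overall_accuracy = diag.sum() / error_matrix.sum()      # correct / total points
producers_accuracy = diag / error_matrix.sum(axis=0)    # per reference column
users_accuracy = diag / error_matrix.sum(axis=1)        # per classified row

print(f"Overall accuracy: {overall_accuracy:.1%}")
for i, name in enumerate(["water", "forest", "agriculture", "urban", "bare soil"]):
    print(f"{name}: producer's {producers_accuracy[i]:.1%}, "
          f"user's {users_accuracy[i]:.1%}")
```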

Part 3: Accuracy Assessment of Supervised Classification

Part 3 of this lab repeated the steps conducted in Parts 1 and 2, but with the supervised classification image created in Lab 4 rather than the unsupervised image used in the previous accuracy assessment.

Results

The results of these separate accuracy assessments show the difference in accuracy between the unsupervised classification image we created in Lab 3 and the supervised classification image we created in Lab 4.  Overall, the unsupervised image had greater accuracy than the supervised image, most likely as a result of the small number of training samples collected for the supervised classification.  In a real-world scenario, neither of these images would meet the accuracy requirements for delivery to a customer, and both the classifications and the accuracy assessments would need to be redone to obtain a higher overall accuracy.


Unsupervised Classification Error Matrix



Supervised Classification Error Matrix

Sources

Landsat satellite image is from Earth Resources Observation and Science Center, United States Geological Survey. High resolution image is from United States Department of Agriculture (USDA) National Agriculture Imagery Program.

Wednesday, March 20, 2019

Lab 4 - Pixel-Based Supervised Classification

Goal and Background

The main goal of this lab was to develop an understanding of how to conduct a pixel-based supervised classification to classify 5 different Land Use/Land Cover (LULC) classes.  The first section of this lab was dedicated to the collection of training samples for a variety of surface features.  The second portion was dedicated to evaluating the quality of the training samples collected.  Finally, the third section was dedicated to the actual production of meaningful LULC classes.

Methods

Part 1:  Collection of training samples for supervised classification

To begin this lab, our first task was to select training samples, or samples of known LULC, that could be used to train the classifier.  To do this, we used the Signature Editor window (Raster>Supervised>Signature Editor) along with the Draw>Polygons tool.  For each training sample collected, we created a polygon within the specific LULC feature and then clicked Create New Signature From AOI.  Training samples were collected first for water, then forest, then agriculture, then urban/built-up, and finally for bare soil.  Once all of these training samples were collected, the samples were labeled and numbered with their respective LULC classes.  The signature file was then saved once all training samples were collected and correctly labeled.
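Conceptually, the Create New Signature From AOI step records the per-band statistics of the pixels inside the digitized polygon.  Here is a minimal sketch of that idea, assuming the image is a NumPy array of shape (bands, rows, cols) and the AOI has already been rasterized to a boolean mask; both are hypothetical inputs that ERDAS derives internally from the viewer and AOI layer.

```python
import numpy as np

def signature_from_aoi(image, mask):
    """Compute a spectral signature (per-band mean and covariance) from the
    pixels inside an AOI polygon.

    image: (bands, rows, cols) array of DNs; mask: (rows, cols) boolean array
    that is True inside the digitized polygon.
    """
    samples = image[:, mask]          # (bands, n_pixels) inside the AOI
    mean = samples.mean(axis=1)       # per-band mean vector
    cov = np.cov(samples)             # band-to-band covariance matrix
    return mean, cov
```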

Part 2: Evaluating the quality of training samples

The next step of this lab was to check that the training samples collected in the previous part were of sufficient quality.  To do this, we displayed the signature of each training sample in the Signature Mean Plot window, accessed from the Signature Editor window, accepting the default values for the Image Alarm and the Parallelepiped Limits.  We then viewed the histogram for our data by clicking the histogram symbol and making sure that Single Signature was selected in the Histogram Plot Control Panel.  This was done for each of our individual LULC classes so that we could compare the signatures of the samples we selected against the real-world signatures of the same features.  Next, a separability report was created by clicking Evaluate>Separability in the Signature Editor window.  Once the Signature Separability window was open, we changed Layers Per Combination to 4 and chose Transformed Divergence as the distance measure.  The resulting separability report was used to evaluate the separability between spectral signatures for each band combination, so we knew which bands were best to use for our supervised classification.  Once this was completed, all of the individual spectral signatures were merged into one signature per LULC class using the Edit>Merge tool within the Signature Editor window.
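For reference, transformed divergence has a standard closed form computed from each pair of class means and covariance matrices, rescaled so that 2000 indicates perfectly separable signatures.  A sketch of that formula in Python follows; ERDAS's exact implementation may differ in detail.

```python
import numpy as np

def transformed_divergence(mean_i, cov_i, mean_j, cov_j):
    """Transformed divergence between two class signatures.

    Divergence D is computed from the class means and covariance matrices,
    then rescaled so TD saturates at 2000 (the value reported for perfectly
    separable signatures).
    """
    ci_inv, cj_inv = np.linalg.inv(cov_i), np.linalg.inv(cov_j)
    dmean = (mean_i - mean_j).reshape(-1, 1)
    d = 0.5 * np.trace((cov_i - cov_j) @ (cj_inv - ci_inv)) \
        + 0.5 * np.trace((ci_inv + cj_inv) @ (dmean @ dmean.T))
    return 2000.0 * (1.0 - np.exp(-d / 8.0))
```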

Part 3:  Performing supervised classification

The final portion of this lab was about taking all the information obtained in the first two parts and using it to actually perform a supervised classification.  To do this, we clicked Raster>Supervised>Supervised Classification, set the input image to the provided image of Eau Claire and Chippewa counties, and then set the input signature to the merged signatures created in the previous part of this lab.  We then made sure that the Non-Parametric Rule was set to None and that the Parametric Rule was set to Maximum Likelihood.  Finally, the classification tool was run and the resulting image was brought into a viewer for examination.  The supervised classification image was then compared to the unsupervised classification image created in the previous lab, and finally a map was created from the supervised classification image.
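The Maximum Likelihood parametric rule assigns each pixel to the class whose Gaussian model, built from the merged signature's mean and covariance, gives the highest likelihood.  A minimal sketch of that decision rule follows, assuming hypothetical array inputs rather than ERDAS's internal formats, and equal prior probabilities for all classes.

```python
import numpy as np

def maximum_likelihood_classify(image, signatures):
    """Assign each pixel to the class with the highest Gaussian log-likelihood.

    image: (bands, rows, cols) array; signatures: list of (mean, cov) tuples,
    one per merged class signature. Equal priors are assumed.
    """
    bands, rows, cols = image.shape
    pixels = image.reshape(bands, -1).T.astype(float)   # (n_pixels, bands)
    scores = np.empty((len(signatures), pixels.shape[0]))
    for k, (mean, cov) in enumerate(signatures):
        inv = np.linalg.inv(cov)
        _, logdet = np.linalg.slogdet(cov)
        diff = pixels - mean
        # Gaussian log-likelihood up to a constant: -0.5*(log|C| + d'C^-1 d)
        maha = np.einsum("ij,jk,ik->i", diff, inv, diff)
        scores[k] = -0.5 * (logdet + maha)
    return scores.argmax(axis=0).reshape(rows, cols)    # class index per pixel
```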

Results

Figure 1. - Spectral Signatures of all 50 Samples



 Figure 2. - Merged Spectral Signatures

Figure 3. - Final map created from the Supervised Classification image

Sources

Data sources are as follows: Landsat satellite image is from Earth Resources Observation and Science Center, United States Geological Survey. 

Saturday, March 9, 2019

Lab 3 - Unsupervised Classification

Goal and Background

The main goal of this lab exercise was to develop an understanding of the use of unsupervised classification methods to extract information about surface features from remotely sensed images.  The first part of this lab was focused on learning about inputs and configuration requirements necessary to run the unsupervised classifier.  The second part of this lab was focused on taking the spectral classes created by the classifier and recoding them into useful thematic information about land-use/land-cover (LULC).

Methods

Part 1: Experimenting with unsupervised ISODATA classification algorithm

To start off this lab, we were given an image of Eau Claire and Chippewa counties captured by Landsat ETM+ in June of 2000.  We used this image to run an Iterative Self-Organizing Data Analysis Technique (ISODATA) classification using the Unsupervised Classification tool under Raster>Unsupervised>Unsupervised Classification.  With the Unsupervised Classification tool window open, the input image of Eau Claire and Chippewa counties was entered and the output image was saved to the correct folder.  Next, the ISODATA radio button was checked under the "Method" section and the number of classes was set to 10.  The default parameters for "Initializing Options" and "Color Scheme Options" were selected, followed by the default value for the convergence threshold, before changing the "Maximum Iterations" to 250.  After all the correct inputs were finalized, the model was run and the resulting image was brought into the viewer.  With the classified image open in the viewer, the attribute table was opened and the viewer was synced to Google Earth to help identify the features to be classified.  By changing the color of each spectral class to a brighter, more identifiable color and comparing the features shown in that color against Google Earth, we assigned the classes to 5 distinct LULC classes.  These classes were water (blue), forest (dark green), agriculture (pink), urban/built-up (red), and bare soil (sienna).
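ISODATA is essentially k-means clustering with additional rules for splitting and merging clusters between iterations.  The sketch below shows just the core clustering step using scikit-learn's KMeans with the same class count (10) and iteration cap (250) as the lab; the split/merge and convergence-threshold logic that makes ISODATA distinct is omitted, so this is an approximation rather than the exact ERDAS algorithm.

```python
import numpy as np
from sklearn.cluster import KMeans

def isodata_like_cluster(image, n_classes=10, max_iter=250, seed=0):
    """Cluster pixels into spectral classes, k-means style.

    image: (bands, rows, cols) array of DNs. Returns a (rows, cols) array of
    spectral class labels numbered 1..n_classes.
    """
    bands, rows, cols = image.shape
    pixels = image.reshape(bands, -1).T          # one row per pixel
    km = KMeans(n_clusters=n_classes, max_iter=max_iter,
                n_init=1, random_state=seed).fit(pixels)
    return km.labels_.reshape(rows, cols) + 1    # classes numbered from 1
```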

Part 2:  Improving the accuracy of unsupervised classification

Part 2 of this lab was similar to Part 1 in that the same ISODATA unsupervised classification scheme was used, but with slightly different input parameters.  The only parameters that differed in this iteration were a convergence threshold of 0.92 (compared to the 0.95 used in Part 1) and a maximum of 20 classes (compared to the 10 classes created in Part 1).  Once these parameters were changed, the tool was run and the output image was brought into a viewer.  Just as in Part 1, the attribute table was opened and all 20 classes were labeled as belonging to one of the five LULC classes: water, forest, agriculture, urban/built-up, or bare soil.  Once this was done, the Raster Attribute Editor and the Column Properties window were used to better order the data in the attribute table for easier analysis.  The next step was to create a map using the 20 classifications, but to do this those 20 classes needed to be cut down to 5.  To do this, the Recode tool was used under Raster>Thematic>Recode.  Using the Recode tool and the Thematic Recode window, the 20 classes were condensed into 5 classes, one for each of the LULC classes.  Again, column properties were edited to make analysis easier, and finally the image was exported into ArcMap, where a map was created to represent the various LULC classes of Eau Claire and Chippewa counties.
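The Recode step is conceptually a lookup-table remap of class values.  A minimal sketch with a hypothetical 20-to-5 mapping follows; the real assignments came from the visual inspection against Google Earth described above.

```python
import numpy as np

# Hypothetical mapping from the 20 spectral classes to the 5 LULC classes
# (1=water, 2=forest, 3=agriculture, 4=urban/built-up, 5=bare soil). The
# assignments here are illustrative only.
recode_table = np.array([0,                        # class 0 = unclassified
                         1, 1, 2, 2, 2, 3, 3, 3, 3, 4,
                         4, 3, 5, 5, 2, 3, 1, 4, 5, 3])

# Toy 20-class image standing in for the ISODATA output.
spectral_classes = np.random.default_rng(0).integers(1, 21, size=(100, 100))
lulc = recode_table[spectral_classes]   # lookup-table recode, 20 -> 5 classes
```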

Results

Figure 1. - Difference between 10 classes (left) and 20 classes (right), showing the improved accuracy achieved with a higher number of classes.


Figure 2. - Final attribute table of five classes once Recode tool had been used.

Figure 3. - Final map showing various LULC of Eau Claire and Chippewa counties.


Sources

 Landsat satellite image is from Earth Resources Observation and Science Center, United States Geological Survey. 

Saturday, March 2, 2019

Lab 2 - Radiometric and Atmospheric Correction

Goal and Background

The goal of this lab was to develop an understanding and experience in the atmospheric correction of images using both relative and absolute atmospheric correction models.  These models included Empirical Line Calibration (ELC), Image Based Dark Object Subtraction (DOS), and Multi-Date Image Normalization.  

Methods

Part 1: Absolute atmospheric correction using empirical line calibration

For Part 1, ELC was calculated with the equation Reflectance = M × DN + L, where DN is the band to be corrected, M is a multiplicative (gain) term applied to the brightness values, and L is an additive (offset) term.
Using the Raster>Hyperspectral>Spectral Analysis Workstation tool as well as the Atmospheric Adjustment tool, we collected points on vegetation, concrete, and other surface features and compared their spectral profiles to those in the ASTER and USGS V4 spectral libraries.  In the Spectral Analysis Workstation we then ran the Preprocess>Atmospheric Adjustment model to create an atmospherically corrected output image using the ELC equation and the information collected above.  This image was then compared to the original uncorrected image to judge how well this correction method works.
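Under the hood, ELC amounts to fitting a least-squares line between image DNs and library reflectance values for the collected targets, one band at a time.  A short sketch with made-up sample values (not the lab's actual points) follows.

```python
import numpy as np

def elc_gain_offset(dn_samples, reflectance_samples):
    """Fit the ELC terms M (gain) and L (offset) for one band.

    dn_samples: image DNs at the collected targets; reflectance_samples: the
    matching reflectance values from the spectral library. A least-squares
    line gives reflectance = M * DN + L.
    """
    M, L = np.polyfit(dn_samples, reflectance_samples, deg=1)
    return M, L

# Illustrative values only -- not this lab's actual samples.
dn = np.array([23.0, 58.0, 96.0, 141.0])
ref = np.array([0.04, 0.11, 0.19, 0.27])
M, L = elc_gain_offset(dn, ref)
corrected_band = M * dn + L   # apply to the full band array in practice
```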

Part 2: Absolute atmospheric correction using enhanced image-based dark object subtraction

The first step in atmospherically correcting an image using the DOS method was to create a model that converted the satellite image to at-satellite spectral radiance for bands 1-5 and 7 simultaneously.  To do this, the standard Landsat radiance conversion, Lλ = ((LMAXλ − LMINλ) / (QCALMAX − QCALMIN)) × (QCAL − QCALMIN) + LMINλ, was used with the gain and bias data collected from the metadata text file associated with the individual band images.
Next, the at-satellite spectral radiance image for each individual band output by the model was converted to true surface reflectance using the equation ρ = π × (Lλ − Lhaze) × D² / (ESUNλ × cos θs), with the haze radiance Lhaze estimated from the histograms of the radiance images, the Earth-Sun distance D taken from an Earth-Sun distance table, the Sun zenith angle θs calculated from the image metadata file, and the ESUN value taken from a Landsat TM table.
Once both models were completed, true surface reflectance images for each band were layer stacked and then spectral profiles of a variety of features were collected to compare to the original uncorrected image.
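Putting both steps together, here is a minimal Python sketch of the enhanced DOS workflow for one band.  It assumes the gain/bias, haze DN, ESUN, Earth-Sun distance, and zenith angle values described above are supplied by the caller, and it follows the standard published equations rather than the exact model built in the lab.

```python
import numpy as np

def dos_correct(dn, lmin, lmax, qcalmin, qcalmax, dn_haze,
                esun, d_au, sun_zenith_deg):
    """Enhanced image-based dark object subtraction for one band.

    Step 1 converts DNs to at-satellite radiance using the band's gain/bias
    terms from the metadata; step 2 subtracts the haze (dark-object) radiance
    and scales to true surface reflectance.
    """
    # Step 1: DN -> at-satellite spectral radiance
    gain = (lmax - lmin) / (qcalmax - qcalmin)
    radiance = gain * (dn - qcalmin) + lmin
    haze_radiance = gain * (dn_haze - qcalmin) + lmin
    # Step 2: radiance -> true surface reflectance
    theta = np.radians(sun_zenith_deg)
    return (np.pi * (radiance - haze_radiance) * d_au**2) / (esun * np.cos(theta))
```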

Part 3: Relative atmospheric correction using multi-date image normalization

The first step in multi-date image normalization was to collect points from pseudo-invariant features in both the base image and the subsequent image.  This was done using the same Spectral Profile tool used elsewhere in the lab.  The tabular data from these spectral profiles were then copied into Excel, where scatter plots were created and regression equations were calculated.  Using the gain and offset values from the regression equations, a model was created to correct each individual band, producing a corrected image for each band.  To do this, the equation DNnormalized = gain × DNsubsequent + offset was applied to each band.
Once the model was run, the resulting corrected band images were layer stacked to create a false color image that was then compared to the original image.
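A minimal sketch of the normalization step for one band follows, assuming the pseudo-invariant sample means from both images are supplied as arrays; the function and parameter names are hypothetical.

```python
import numpy as np

def normalize_band(subsequent_band, base_samples, subsequent_samples):
    """Relative correction of one band via pseudo-invariant features.

    base_samples / subsequent_samples: mean pixel values collected at the
    same invariant targets in the base and subsequent images (the values
    plotted in Excel above). A least-squares regression gives the gain and
    offset used to rescale the subsequent image to the base image's radiometry.
    """
    gain, offset = np.polyfit(subsequent_samples, base_samples, deg=1)
    return gain * subsequent_band + offset
```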

Results



Example model and equation for conversion of image to at-satellite spectral radiance image




Example of model and equation for converting at-satellite spectral radiance to true surface reflectance


Chicago 2000 Image Spectral Profiles



Chicago 2009 Image Spectral Profiles



Scatter Plots of Mean Pixel Values per Band


Spectral Profiles of Chicago 2000 and Chicago 2009 After Relative Correction



Sources

Landsat satellite image is from Earth Resources Observation and Science Center, United States Geological Survey. Spectral signatures are from the respective spectral libraries consulted: ASTER and USGS V4.
