Sunday, May 5, 2019

Lab 10 - Radar Remote Sensing

Goal

The goal of this lab exercise was to introduce the class to the basics of working with remotely sensed radar images, including both preprocessing and processing.  This was accomplished by performing noise reduction through speckle filtering, spatial and spectral enhancement, multi-sensor fusion, texture analysis, polarimetric processing, and finally slant-range to ground-range conversion.

Methods

Part 1 - Speckle Reduction and Edge Enhancement

In this section of the lab, we used the provided radar image and the Radar Speckle Suppression tool to despeckle the data.  Using the calculated Coefficient of Variation as well as two other provided Coefficients of Variation, we ran the Radar Speckle Suppression tool three times, changing the Coefficient of Variation Multiplier and the Window Size parameters on each run.  Once all the despeckled images were created, we viewed and compared their histograms.  We then ran the Non-Directional Edge tool on both the original image and our third despeckled image, using the Prewitt Output Option for each.  Finally, we applied the Radar Speckle Suppression tool to a different provided radar image with the filter set to Gamma-MAP, and then used the Adaptive Filter tool with the data type set to Stretch to Unsigned 8-Bit, the Moving Window set to 3, and the Multiplier set to 3.
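Although the lab used the Radar Speckle Suppression tool in Erdas Imagine, the idea behind coefficient-of-variation despeckling can be sketched in a few lines of Python.  The snippet below is only an illustrative Lee-style filter built on NumPy/SciPy, not the exact algorithm Imagine implements; the window size, noise coefficient of variation, and the random test image are placeholder assumptions.

import numpy as np
from scipy.ndimage import prewitt, uniform_filter

def lee_speckle_filter(img, window=7, noise_cv=0.25):
    """Lee-style adaptive speckle filter: pixels are pulled toward the local
    mean wherever the local variation looks like speckle noise. noise_cv is
    the assumed noise coefficient of variation (a placeholder value here)."""
    img = img.astype(np.float64)
    local_mean = uniform_filter(img, size=window)
    local_sq_mean = uniform_filter(img ** 2, size=window)
    local_var = np.maximum(local_sq_mean - local_mean ** 2, 0.0)
    noise_var = (noise_cv * local_mean) ** 2
    weight = np.clip((local_var - noise_var) / (local_var + 1e-12), 0.0, 1.0)
    return local_mean + weight * (img - local_mean)

# Non-directional (gradient-magnitude) edges from Prewitt derivatives,
# comparable in spirit to the Prewitt output option used in the lab.
radar = np.random.rand(256, 256)            # stand-in for the radar image
despeckled = lee_speckle_filter(radar)
edges = np.hypot(prewitt(despeckled, axis=0), prewitt(despeckled, axis=1))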

Part 2 - Sensor Merge, Texture Analysis, and Brightness Adjustment

The next portion of this lab started with the Sensor Merge tool, merging a multispectral and a grayscale image using the IHS method, Nearest Neighbor resampling, IHS Substitution, and a data type of Stretch to Unsigned 8-Bit.  We then used the Texture Analysis tool with a Window Size of 5 and the Skewness operator.  Finally, we ran the Brightness Adjustment tool with the Output Options set to Column and the Data Type set to Float Single.
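As a rough illustration of what the IHS substitution does, the sketch below uses scikit-image's HSV conversion as a stand-in for a true IHS transform: the intensity channel of the multispectral composite is replaced with the co-registered grayscale (radar) image.  The arrays, the scaling, and the HSV-for-IHS substitution are all assumptions for illustration, not the Sensor Merge tool's actual implementation.

import numpy as np
from skimage.color import hsv2rgb, rgb2hsv

def ihs_style_merge(rgb, gray):
    """Rough IHS-substitution sketch: convert the multispectral composite to
    HSV (a close cousin of IHS), swap in the co-registered grayscale image as
    the intensity channel, and convert back. Arrays are assumed co-registered
    and the same size."""
    hsv = rgb2hsv(rgb.astype(np.float64) / 255.0)
    gray = gray.astype(np.float64)
    hsv[..., 2] = (gray - gray.min()) / (gray.max() - gray.min() + 1e-12)
    return (hsv2rgb(hsv) * 255.0).astype(np.uint8)

# Hypothetical inputs: an 8-bit multispectral composite and a radar image
multispectral = (np.random.rand(128, 128, 3) * 255).astype(np.uint8)
radar = np.random.rand(128, 128)
merged = ihs_style_merge(multispectral, radar)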

Part 3 - Polarimetric SAR Processing and Analysis

For part 3 of this lab, ENVI was used to run the Synthesize SIR-C Data tool multiple times with different parameters, namely the Transmit and Receive Ellipticity and the Transmit and Receive Orientation angles.  Each parameter combination produced a different polarization image.  The resulting images were displayed with Gaussian, Linear, or Square-Root stretches.  Next, we restored ROIs for the image and used them with the Extract Polarization Signatures > SIR-C tool to create Polarization Signature Viewer windows.
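The synthesis step amounts to combining the quad-polarized scattering matrix with chosen transmit and receive polarization states.  The sketch below is a heavily simplified, convention-dependent illustration (sign conventions for ellipticity and orientation vary between references), using a toy scattering matrix rather than real SIR-C data.

import numpy as np

def pol_vector(orientation_deg, ellipticity_deg):
    """Unit polarization (Jones) vector for given orientation and ellipticity
    angles in degrees; sign conventions differ between references."""
    psi, chi = np.deg2rad(orientation_deg), np.deg2rad(ellipticity_deg)
    return np.array([np.cos(psi) * np.cos(chi) - 1j * np.sin(psi) * np.sin(chi),
                     np.sin(psi) * np.cos(chi) + 1j * np.cos(psi) * np.sin(chi)])

def synthesized_power(S, tx, rx):
    """Backscattered power for a 2x2 complex scattering matrix S and the
    chosen transmit/receive polarization states."""
    return np.abs(rx @ S @ tx) ** 2

# Toy scattering matrix standing in for one SIR-C pixel
S = np.array([[1.0 + 0.0j, 0.1j],
              [0.1j, 0.8 + 0.0j]])
hh = synthesized_power(S, pol_vector(0, 0), pol_vector(0, 0))    # HH
vv = synthesized_power(S, pol_vector(90, 0), pol_vector(90, 0))  # VV
rr = synthesized_power(S, pol_vector(0, 45), pol_vector(0, 45))  # circular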

Part 4 - Slant-to-Ground Range Transformation

For the final portion of this lab, we first previewed the CEOS header of an image by clicking the View Generic CEOS Header button and opening the provided image.  Next, we resampled the image using the Slant to Ground Range > SIR-C tool, setting the Output Pixel Size (m) to 13.32 and the Resampling Method to Bilinear, and then ran the tool.
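Conceptually, the slant-to-ground conversion resamples each range line onto an even ground-range grid.  The sketch below shows that idea for a single row under a flat-earth assumption, with simple linear interpolation standing in for the tool's bilinear resampling; the slant range, pixel spacing, and platform altitude are hypothetical values.

import numpy as np
from scipy.interpolate import interp1d

def slant_to_ground_row(row, slant_r0, slant_dr, altitude, ground_pixel=13.32):
    """Resample one image row from slant-range to an even ground-range grid
    under a flat-earth assumption: ground = sqrt(slant^2 - altitude^2).
    slant_r0 is the slant range to the first pixel and slant_dr the slant
    pixel spacing, both in metres (hypothetical values below)."""
    slant = slant_r0 + slant_dr * np.arange(row.size)
    ground = np.sqrt(np.maximum(slant ** 2 - altitude ** 2, 0.0))
    ground_grid = np.arange(ground[0], ground[-1], ground_pixel)
    return interp1d(ground, row, kind="linear")(ground_grid)

row = np.random.rand(1000)                       # one slant-range image row
resampled = slant_to_ground_row(row, slant_r0=300000.0, slant_dr=13.32,
                                altitude=225000.0)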

Results

 Histograms of original image (upper left) and 3 subsequent despeckled images

Edge Enhancements of original image and despeckled image

Original image (left) and Image Enhanced image (right)

Merge image

Texture image

 Gaussian stretched image

 Linear stretched image

 Square-Root stretched image

Gaussian stretched color image


Polarization Signature Viewer


Resampled Slant-to-Ground Image

Sources

Erdas Imagine, 2018

ENVI, 2015

Monday, April 29, 2019

Lab 9 - Hyperspectral Remote Sensing

Goal

The goal of this lab exercise was to introduce us to the processing of hyperspectral remotely sensed images.  The lab did this by covering the basics of hyperspectral imaging and spectroscopy, using FLAASH for atmospheric correction, and applying various tools to assess vegetation health.

Methods

Part 1 - Introduction to Hyperspectral Remote Sensing

The first portion of this lab was devoted to viewing and basic analysis of hyperspectral images in the ENVI software.  In ENVI, we began by loading select bands from the image, importing ROIs (Regions of Interest), and plotting them to view their spectral signatures.  Once this was done, other spectral signatures were imported from a spectral library, in this case the JPL spectral library.  In addition to viewing the spectral signatures of ROIs and reference signatures, this portion of the lab also demonstrated animating the image data.  Using the animation tool in ENVI, we stepped through all the individual bands in a slideshow, which can be used to check for bad bands or images with errors.
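Outside of ENVI, plotting an ROI's mean spectral signature is essentially an averaging operation over the ROI pixels.  The sketch below shows that with NumPy and matplotlib; the hyperspectral cube and ROI mask are random placeholders for the lab data.

import matplotlib.pyplot as plt
import numpy as np

# Hypothetical stand-ins for the ENVI image (rows, cols, bands) and a
# restored ROI mask; the real data came from the lab's hyperspectral scene.
cube = np.random.rand(120, 120, 224)
roi = np.zeros((120, 120), dtype=bool)
roi[40:55, 60:75] = True

signature = cube[roi].mean(axis=0)   # mean spectrum over the ROI pixels
plt.plot(signature)
plt.xlabel("Band number")
plt.ylabel("Mean value")
plt.title("ROI mean spectral signature")
plt.show()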

Part 2 - Atmospheric Correction Using FLAASH

The second portion of this lab exercise was devoted to using FLAASH to conduct atmospheric correction on the hyperspectral imagery.  As UWEC does not have a license for FLAASH, we walked through the steps of using FLAASH rather than actually executing it.  The first step was to open the FLAASH window, input the desired image, and select the "Read array of scale factors (1 per band) from ASCII file" radio button to import a text file containing the desired parameters.  Other default parameters were accepted and the FLAASH tool was "run."  Next, we viewed the "corrected" image and checked which bands FLAASH flagged as bad in the bad bands list.

Part 3 - Vegetation Analysis

Part 3 of this lab was all about the various forms of vegetation analysis that can be accomplished using hyperspectral imagery.  We first used the Vegetation Index Calculator to calculate an NDVI image.  Some other indices tested at this stage of the lab included water indices, red edge indices, and lignin indices.  Once this was done, several vegetation analysis tools were run.  The first of these was the Agricultural Stress Tool, which creates an image showing areas of vegetation under stress.  The next tool was the Fire Fuel Tool, which measures areas of vegetation that are more or less susceptible to fire.  In using this tool, we also gained experience with the Mask Band tool to mask out portions of the image that we did not want to analyze, such as the urban areas in the image.  The final tool run in this portion of the lab was the Forest Health tool, which measures overall vegetation health based on parameters like water content, greenness, etc.
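The NDVI portion of this step is straightforward arithmetic, and the Mask Band idea reduces to excluding flagged pixels.  A minimal NumPy sketch is shown below, with hypothetical bands and an assumed urban mask; it is not the ENVI tools themselves.

import numpy as np

def ndvi(nir, red):
    """Standard NDVI; a stand-in for the Vegetation Index Calculator output."""
    nir, red = nir.astype(np.float64), red.astype(np.float64)
    return (nir - red) / np.maximum(nir + red, 1e-12)

# Hypothetical NIR/red bands and an urban mask (True where urban/built-up),
# mimicking the Mask Band step used before the fire fuel analysis.
nir_band = np.random.rand(200, 200)
red_band = np.random.rand(200, 200)
urban_mask = np.random.rand(200, 200) > 0.9

index = ndvi(nir_band, red_band)
masked_index = np.where(urban_mask, np.nan, index)  # exclude urban pixels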

Part 4 - Hyperspectral Transformation

The final portion of this lab was devoted to using an MNF (Minimum Noise Fraction) transform to determine image dimensionality, reduce computational requirements, and remove noise from the image.  To do this, we used the MNF Rotation > Forward MNF > Estimate Noise Statistics from Data tool and input the provided noise statistics.  Once this was done, we viewed the eigenvalue plot of the MNF-transformed image.
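A simplified way to think about the MNF transform is noise whitening followed by a PCA, with the resulting eigenvalues indicating how many components carry signal rather than noise.  The sketch below implements that simplified view with NumPy; the pixel array and noise covariance are placeholders, and ENVI's actual implementation includes additional steps.

import numpy as np

def mnf(pixels, noise_cov):
    """Simplified MNF: whiten the data by the noise covariance, then run a
    PCA on the whitened data. pixels is (n_pixels, n_bands); the returned
    eigenvalues indicate how many components carry signal rather than noise."""
    evals, evecs = np.linalg.eigh(noise_cov)
    whiten = evecs / np.sqrt(np.maximum(evals, 1e-12))   # noise-whitening matrix
    centered = pixels - pixels.mean(axis=0)
    whitened = centered @ whiten
    d_evals, d_evecs = np.linalg.eigh(np.cov(whitened, rowvar=False))
    order = np.argsort(d_evals)[::-1]                    # largest first
    return whitened @ d_evecs[:, order], d_evals[order]

# Hypothetical data and noise statistics standing in for the lab's inputs
pixels = np.random.rand(5000, 50)
noise_cov = np.diag(np.full(50, 0.01))
components, eigenvalues = mnf(pixels, noise_cov)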

Results



Sources

ENVI Tutorials. (n.d.). Retrieved April 29, 2019, from https://www.harrisgeospatial.com/Support/Self-Help-Tools/Tutorials

Saturday, April 20, 2019

Lab 8 - Advanced Classifiers 2

Goal

The goal of this lab exercise was to expose students to the use of two different advanced classification algorithms.  These advanced classifiers provide a much higher level of accuracy for classifying images than traditional classification methods.  The first advanced classification conducted in this lab was an expert system/decision tree classification with the use of ancillary data.  The second was developing and using a neural network to perform a complex image classification.

Methods

Part 1: Expert System Classification

For part 1 of this lab, we used the Knowledge Engineer tool in Erdas Imagine and created various rules and variables to classify the provided image of the Eau Claire and Chippewa Falls region.  In the Knowledge Engineer window, we created arguments and counter-arguments for each of the desired LULC classes.  The final classification scheme can be seen in Figure 1 below, showing all of the various rules and arguments used.

Figure 1. Arguments for expert system classification
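The expert-system logic boils down to stacking conditional rules that combine spectral layers with ancillary data.  The sketch below mimics that structure with NumPy; the index layers, elevation ancillary data, thresholds, and class codes are all placeholder assumptions, not the rules actually built in the Knowledge Engineer.

import numpy as np

def expert_classify(ndvi, ndwi, elevation):
    """Toy rule set in the spirit of the Knowledge Engineer's arguments; the
    thresholds and the ancillary elevation layer are placeholders."""
    rules = [
        ndwi > 0.2,                          # water
        (ndvi > 0.6) & (elevation > 280),    # forest
        (ndvi > 0.3) & (elevation <= 280),   # agriculture
        ndvi < 0.1,                          # urban/built-up
    ]
    classes = [1, 2, 3, 4]
    return np.select(rules, classes, default=5)   # 5 = bare soil / other

# Hypothetical input layers
ndvi = np.random.uniform(-0.2, 0.9, (100, 100))
ndwi = np.random.uniform(-0.5, 0.5, (100, 100))
elevation = np.random.uniform(230, 350, (100, 100))
classified = expert_classify(ndvi, ndwi, elevation)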

Part 2: Neural Network Classification

Part 2 of this lab exercise was devoted to performing a Neural Network classification using the ENVI software.  After inputting the desired image into ENVI, we used the Restore ROI tool to add in provided ROI's that would be used to train the classifier.  Next, the Neural Network Supervised classification was run with 1000 iterations and a Logistic Activation method.  Once all the parameters were set, the classification was run and the resulting image was the one below in Figure 2.

Figure 2.

Once this was done, the next part of the Neural Network portion of this lab was to collect our own training samples and run a neural network classification of the University of Northern Iowa campus.  To do this, we imported the image into ENVI just as the previous image was imported, but instead of restoring ROIs, we created our own for the classes we wanted to map, such as buildings, roads, and vegetation.  Many of the same parameters from the first Neural Network run were used and the classification was run.  It was then run a second time with the number of iterations set to the point where the Neural Net RMS Plot window showed considerable variability.  With the altered number of iterations, the classified image appeared to be more accurate for certain classes.
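For comparison, a similar feed-forward classification can be sketched with scikit-learn's MLPClassifier using a logistic activation and 1000 iterations, echoing the ENVI parameters.  The training arrays below are random placeholders, and the network architecture is an assumption rather than ENVI's actual configuration.

import numpy as np
from sklearn.neural_network import MLPClassifier

# X holds per-pixel band values for the ROI training samples, y the class
# labels; both are hypothetical here.
X = np.random.rand(500, 4)
y = np.random.randint(0, 5, size=500)

clf = MLPClassifier(hidden_layer_sizes=(16,), activation="logistic",
                    max_iter=1000, random_state=0)
clf.fit(X, y)

# Classify the full image: reshape (rows, cols, bands) -> (pixels, bands)
image = np.random.rand(64, 64, 4)
labels = clf.predict(image.reshape(-1, 4)).reshape(64, 64)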

Results

Final output map from part 1 of lab

First NN classification of University of Northern Iowa campus

Second NN classification of University of Northern Iowa campus with altered number of iterations.


Sources

Landsat imagery provided by Earth Resources Observation and Science Center, United States Geological Survey

Quickbird High resolution image of portion of University of Northern Iowa campus provided by Department of Geography, University of Northern Iowa.

Sunday, April 14, 2019

Lab 7 - Object-based Classification

Goal

The goal of this lab exercise was to develop an introductory understanding of using eCognition software to conduct an object-based classification.  As object-based classification systems are relatively new in the remote sensing world, gaining this experience sets us apart from many others who may not yet have experience with this type of work.  To train us in this form of classification, this lab was designed to teach three important parts of the object-based classification model.  The first part of this lab was devoted to the segmentation of an image into homogeneous spatial and spectral clusters, known as objects.  The second portion was the selection of samples from the objects created in the first part to train the classifiers we would be using.  The third and final portion was devoted to the execution of the object-based classification schemes, Random Forest and Support Vector Machines, followed by refinement of the classified images to mitigate any errors that occurred during classification.

Methods

Part 1 - Create a New Project

The first step in conducting an object-based classification is to launch eCognition, as Erdas Imagine does not have the object-based classification functionality that eCognition does.  When bringing the desired image into eCognition, we checked the "Use Geocoding" checkbox so that the georeference information included with the image was not lost.  To exclude pixels that should not be analyzed, we clicked on 'No Data' in the 'Create Project' dialog box.  This opens the 'Assign No Data Values' dialog box, where the 'Use Single Value for All Layers (Union)' checkbox was checked.  Once this was done, the project was created by selecting 'OK', and the 'Edit Image Layer Mixing' tool in the main toolbar was used to assign the band combination of the image to a 432 false color composite.

Part 2 - Segmentation and Sample Collection

The second step in this lab was to create the objects that would be used to conduct the object-based classification of the image.  To do this, we first opened the 'Process Tree' by clicking on 'Process > Process Tree' on the main toolbar.  In the 'Process Tree' dialog, we right-clicked and chose 'Append New' to open the 'Edit Process' dialog, changed the name to Lab7_object_classification, and clicked 'Execute', which created the Lab7_object_classification process inside the 'Process Tree'.  Inside this process we inserted a child process, changed the algorithm to 'Multiresolution Segmentation,' typed 'Level 1' in the 'Level Name' box, and set the 'Shape' criterion to 0.3 and the 'Compactness' criterion to 0.5.  We then clicked 'Execute' to create the image objects and viewed them overlaid on the original image using the show/hide image objects buttons on the main toolbar.  Once this was completed, we launched the 'Class Hierarchy' dialog from the main toolbar under the 'Classification' tab, entered all of the desired classes, and assigned a unique color to each one.  We then collected samples for each class by selecting 'Classification > Samples > Select Samples' from the main toolbar.  After selecting which class we wanted to sample, we worked through the image, double-clicking on objects that fell fully within the desired feature.  These steps were repeated until the desired number of samples had been collected for each of the created classes.
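eCognition's multiresolution segmentation is proprietary, but the general idea of grouping pixels into spectrally and spatially homogeneous objects can be approximated with an open-source graph-based segmenter.  The sketch below uses scikit-image's felzenszwalb function on a hypothetical false-color composite; its scale, sigma, and min_size parameters are placeholders and are not equivalent to eCognition's Scale, Shape, and Compactness criteria.

import numpy as np
from skimage.segmentation import felzenszwalb

# Rough stand-in for eCognition's multiresolution segmentation: a graph-based
# segmentation of a hypothetical 4-3-2 false-color composite.
image = np.random.rand(200, 200, 3)
objects = felzenszwalb(image, scale=100, sigma=0.5, min_size=50)
# 'objects' labels each pixel with the image object it belongs to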

Part 3 - Implement Object Classification

Once all of the samples were collected, the next step was to conduct a Random Forest classification.  To do this, a new variable was created through the 'Create Scene Variable' window.  This variable was named 'RF Classifier' and set to be a string type variable.  Once this was done, a new parent process named 'RF Classification' was added to the 'Process Tree' window.  Inside this process, another process was created named 'Train RF Classifier.'  Finally, inside the 'Train RF Classifier' process, another process was inserted whose parameters were set to those highlighted by the red boxes in Figure 1.
Figure 1. - Train RF Parameters

Once this was done, the desired features were added to the 'Features' tab in the process.  These included the mean object features as well as GLCM Dissimilarity (all dir) and GLCM Mean (all dir) from the Texture > Texture after Haralick tab.  This process was then executed to train the RF classifier on the selected features of the image objects created earlier in the lab.

Once the RF Classifier was trained, the next step was to apply it to create the classified image.  To do this, a process called Apply RF Classifier was added to the Process Tree, and a child process was created beneath it to actually perform the classification.  In this final process, the parameters were set to those shown by the red boxes in Figure 2.  This process was then executed to perform the object-based classification.
Figure 2. - Apply RF Parameters

Where the resulting image contained errors, we fixed them either by editing the Class Filter within the Apply Classification process or by manually editing the objects with the tools in the View > Manual Editing toolbar.  Finally, the image was exported as a raster file with Classification selected as the content type, after making sure to select all the desired classes to include in the image.  The exported image was then used to create the final map shown in the results section below.
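Stripped of the Process Tree mechanics, the train/apply pair amounts to fitting a Random Forest on per-object features and predicting a class for every object.  The scikit-learn sketch below illustrates that; the feature arrays, sample counts, and n_estimators value are hypothetical.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Per-object features (e.g. mean band values and GLCM-derived texture) for the
# collected samples, with their class labels; all hypothetical.
X_train = np.random.rand(120, 6)
y_train = np.random.randint(0, 5, size=120)

rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(X_train, y_train)                     # "Train RF Classifier" step

X_all_objects = np.random.rand(5000, 6)      # features for every image object
object_classes = rf.predict(X_all_objects)   # "Apply RF Classifier" step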

Part 4 - Support Vector Machines

Part 4 of this lab exercise was devoted to conducting another object-based classification of the same image as before, but using Support Vector Machines (SVM) instead of the Random Forest scheme.  The same processes as in the previous classification were created in the same way, with only slightly different parameters in the 'Train' process, as shown in Figure 3, and in the 'Apply' process, as shown in Figure 4.  Once these parameters were set, the processes were executed and the resulting classified image was used to create the map shown below in the results section.

Figure 3. - Train SVM Parameters

Figure 4. - Apply SVM Classifier Parameters
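The SVM run follows the same train/apply pattern; only the estimator changes.  A matching scikit-learn sketch, again with hypothetical object features and placeholder kernel parameters, is shown below.

import numpy as np
from sklearn.svm import SVC

# Same style of hypothetical object features as in the Random Forest sketch;
# only the estimator changes for the SVM run.
X_train = np.random.rand(120, 6)
y_train = np.random.randint(0, 5, size=120)

svm = SVC(kernel="rbf", C=2.0, gamma="scale").fit(X_train, y_train)
object_classes = svm.predict(np.random.rand(5000, 6))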

Part 5: Object-based classification of UAS imagery

The final portion of this lab was devoted to conducting an object-based classification of a provided high resolution UAS image.  For this portion of the lab, I chose to again use the SVM classification scheme, as I had better results with it than with the RF scheme.  The only parameter I changed for this SVM classification was the Scale Parameter used to create the objects to classify.  As this was a very high resolution image covering a small area, a larger scale parameter of 35 was used to create objects that were neither too large nor too small.

Results






Sources

 Landsat satellite image is from Earth Resources Observation and Science Center, United States Geological Survey. UAS image is from UWEC Geography & Anthropology UAS Center. 

Saturday, April 6, 2019

Lab 6 - Digital Change Detection

Goal and Background

The goal of this lab exercise was to allow us to develop an understanding of the methods to evaluate and measure LULC changes that occur over time.  In this lab, we performed quick, qualitative measurements of LULC change, quantified the change detection, and developed a model that was used to map from-to changes in LULC over time.

Methods

Part 1: Change detection using Write Function Memory Insertion

For part 1 of this lab, we performed a write function memory insertion to quickly view which pixels changed from one image to another.  To do this, the provided band 3 image and the two provided band 4 images were layer stacked and the output image was saved.  Next, under the Multispectral tab, the Set Layer Combinations window was opened so that we could specify which band was to be inserted into which color gun.  The band 3 (red) image was inserted into the red color gun and the two band 4 (NIR) images were inserted into the green and blue guns.  The output was an image in which pixels that experienced change over time are shown in pink.
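The composite itself is just three co-registered bands stacked into the red, green, and blue guns.  The NumPy sketch below shows that stacking, with placeholder arrays standing in for the layer-stacked bands.

import numpy as np

# Hypothetical layer-stacked bands: band 3 (red) from one date in the red gun,
# band 4 (NIR) from the two dates in the green and blue guns. Pixels whose NIR
# response changed between dates stand out with a pink/magenta cast.
band3_red = np.random.rand(100, 100)
band4_nir_date1 = np.random.rand(100, 100)
band4_nir_date2 = np.random.rand(100, 100)

composite = np.dstack([band3_red, band4_nir_date1, band4_nir_date2])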

Part 2: Post-classification comparison change detection

For part 2 of this lab, we first calculated the quantitative change that occurred between the two provided LULC classification images of the Milwaukee, WI area.  To do this, the Raster Attribute Editor was opened for both images so that the data in the table could be copied to an Excel sheet for calculations.  The data copied over consisted of the class names column and the histogram values column.  Using this data, the next step was to convert the histogram values to hectares for each of the LULC classes and then calculate the percent increase or decrease for each class.  Once this was done, a table was created to show how much each LULC class changed between the images' capture dates.

The next portion of this lab was devoted to creating a model that would take these same LULC classification images of the Milwaukee area and output images showing from-to LULC change over time.  To accomplish this, the Wilson-Lula algorithm was used.  The model has an input for each of the LULC images, a pair of functions for each of the from-to possibilities, and a second function for each from-to pair which combines the outputs of the first functions to create the final image showing which pixels changed from one LULC class to another.  For this example, the from-to combinations mapped were agriculture to urban/built-up, wetlands to urban/built-up, forest to urban/built-up, wetland to agriculture, and agriculture to bare soil.  In the first set of functions, each function was changed from analysis to conditional and set to EITHER IF OR.  For example, the first function, which identified agriculture in the earlier image, was EITHER 1 IF ( $n1_milwauke_2001==7 ) OR 0 OTHERWISE, with the image set equal to 7 because this is the value that corresponds to agriculture.  In the same set of functions, the second function was set to EITHER 1 IF ( $n2_milwauke_2011==3 ) OR 0 OTHERWISE, with the image set equal to 3 because this is the value of urban/built-up.  A temporary raster was created between the two layers of functions, and the second group of functions was set to bitwise, with the '&' operator used to combine the two binary outputs from the previous functions.  This final function output an image showing which pixels went from one LULC class to another.  These from-to images were then brought into ArcMap to create a map showing the changes experienced in the Milwaukee area between 2001 and 2011.
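The core of each from-to branch in the model is two binary tests joined by a bitwise AND, which can be mirrored directly in NumPy.  The sketch below uses the class codes quoted above (7 = agriculture in 2001, 3 = urban/built-up in 2011) on hypothetical rasters, and the hectare conversion assumes 30 m pixels.

import numpy as np

# Hypothetical 2001 and 2011 LULC rasters standing in for the NLCD layers
lulc_2001 = np.random.randint(1, 10, size=(100, 100))
lulc_2011 = np.random.randint(1, 10, size=(100, 100))

ag_2001 = (lulc_2001 == 7).astype(np.uint8)     # first conditional function
urban_2011 = (lulc_2011 == 3).astype(np.uint8)  # second conditional function
ag_to_urban = ag_2001 & urban_2011              # bitwise AND, as in the model

hectares_changed = ag_to_urban.sum() * 30 * 30 / 10_000  # assumes 30 m pixels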

Results

Figure 1. Write Function Memory Insertion

Figure 2. Table showing quantitative change


Figure 3. Example of Model using Wilson-Lula algorithm to develop from-to imagery

Figure 4. Final output map showing from-to change in Milwaukee area

Sources

 Landsat satellite image is from Earth Resources Observation and Science Center, United States Geological Survey. National Land Cover Dataset is from Multi-Resolution Land Characteristics Consortium (MRLC). Appropriate citation for the 2001 and 2011 National Land Cover Dataset can be found at http://www.mrlc.gov/nlcd2001.php and http://www.mrlc.gov/nlcd2011.php respectively. Milwaukee shapefile is from ESRI U.S geodatabase. 

Friday, March 22, 2019

Lab 5 - Classification Accuracy Assessment

Goal and Background

The primary goal of this lab exercise was to develop an understanding of the evaluation of the accuracy of image classification results.  As accuracy assessment is a necessary step in image classification and an important part of image post processing, this was a concept that was important to have a quality understanding of.  To conduct this accuracy assessment for this lab, we both collected ground reference testing samples and then used those samples to conduct the accuracy assessment of our classified images.

Methods

Part 1: Generating ground reference testing samples for accuracy assessment

Part 1 of this lab was dedicated to the collection of ground reference testing samples that would later be used to judge the accuracy of a classified image.  To do this, we first opened our unsupervised classification image from Lab 3 - Unsupervised Classification in one viewer in Erdas Imagine, followed by a high resolution aerial reference image of the same study area in another viewer.  Next, we clicked on Raster>Supervised>Accuracy Assessment to open the Accuracy Assessment dialog window.  Once it was open, we brought our unsupervised classification image into the tool using the folder connection button, and then used the Select Viewer button to select the viewer containing the reference image of the study area in which the ground reference sample points would be created.  Next, in the Accuracy Assessment window, we clicked on Edit>Create/Add Random Points.  This opens the Random Points dialog box, which was used to select the five informational classes of our classified image with the Select Classes button, set the number of points to 125, select the Stratified Random distribution parameter, and set the Minimum Points to 15.  Once all these parameters were set, the ground reference testing samples were created so that they could be used to conduct an accuracy assessment of the unsupervised image.

Part 2: Performing Accuracy Assessment

Once all of the ground reference testing sample points were generated, the next step was to analyze them to conduct the accuracy assessment.  To make the analysis easier, only 10 of the 125 points were set to be visible on the reference image at a time.  Each individual point on the reference image was zoomed into to see exactly which LULC class it fell within, and each point was assigned a value of 1-5 representing that class.  Next, in the Accuracy Assessment window, Report>Accuracy Assessment was clicked to generate the accuracy assessment report.  This report provides statistics like overall accuracy, producer's accuracy, and user's accuracy, as well as the information necessary to fill in the error matrix table.
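The accuracy figures in the report come straight from the error matrix.  The sketch below shows that arithmetic on a hypothetical matrix, with classified classes as rows and reference classes as columns; the counts are made up for illustration.

import numpy as np

# Hypothetical error matrix: rows = classified class, columns = reference class
matrix = np.array([[21,  2,  1,  0,  0],
                   [ 3, 18,  2,  1,  0],
                   [ 0,  4, 25,  2,  1],
                   [ 1,  0,  3, 19,  2],
                   [ 0,  1,  0,  2, 17]])

overall = np.trace(matrix) / matrix.sum()         # overall accuracy
users = np.diag(matrix) / matrix.sum(axis=1)      # user's accuracy (per row)
producers = np.diag(matrix) / matrix.sum(axis=0)  # producer's accuracy (per column)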

Part 3: Accuracy Assessment of Supervised Classification. 

Part 3 of this lab was simply a repeat of the steps that were conducted in both parts 1 and 2 but with the supervised classification image that was created in Lab 4 rather than the unsupervised image that was used in the previous accuracy assessment.  

Results

The results of these separate accuracy assessments show the difference in accuracy between the unsupervised classification image we created in Lab 3 and the supervised classification image we created in Lab 4.  Overall, the unsupervised image had greater accuracy than the supervised image, most likely as a result of the small number of training samples collected for the supervised classification.  In a real world scenario, neither of these images would meet the accuracy requirements for being submitted to a customer, and both the classifications and the accuracy assessments would need to be redone to obtain a higher overall accuracy.


Unsupervised Classification Error Matrix



Supervised Classification Error Matrix

Sources

Landsat satellite image is from Earth Resources Observation and Science Center, United States Geological Survey. High resolution image is from United States Department of Agriculture (USDA) National Agriculture Imagery Program.

Wednesday, March 20, 2019

Lab 4 - Pixel-Based Supervised Classification

Goal and Background

The main goal of this lab was to develop an understanding of conducting a pixel-based supervised classification to classify 5 different Land Use Land Cover (LULC) classes.  The first section of this lab was dedicated to the collection of training samples for a variety of surface features.  The second portion was dedicated to evaluating the quality of the training samples collected.  Finally, the third section was dedicated to the actual production of meaningful LULC classes.

Methods

Part 1:  Collection of training samples for supervised classification

To begin this lab, our first task was to select training samples, or samples of known LULC, that could be used to train the classifier.  To do this, we used the Raster>Unsupervised>Signature Editor window as well as the Draw>Polygons tool.  For each training sample collected, we created a polygon within the specific LULC feature and then clicked Create New Signature From AOI.  Training samples were collected first for water, then forest, then agriculture, then urban/built-up, and finally for bare soil.  Once all of these training samples were collected, they were labeled and numbered with their respective LULC classes.  The signature file was then saved once all training samples were collected and correctly labeled.

Part 2: Evaluating the quality of training samples

The next step of this lab was to check that the training samples we collected in the previous part were of sufficient quality.  To do this, we displayed the signatures of each training sample in the Signature Mean Plot window, accessed from the Signature Editor window.  We accepted the default values for the Image Alarm and the Parallelepiped Limits.  Next, we viewed the histogram for our data by clicking the histogram symbol and making sure that Single Signature was selected under the Histogram Plot Control Panel.  This was done for each of our individual LULC classes so that we could compare the signatures of the samples we selected to the real world signatures of those same features.  Next, a separability report was created by clicking Evaluate>Separability in the Signature Editor window.  Once the Signature Separability window was open, we changed the Layers Per Combination to 4 and chose Transformed Divergence as the distance measurement.  The resulting separability report was used to evaluate the separability between spectral signatures for each band combination so that we knew which bands were best to use for our supervised classification.  Once this was completed, the individual spectral signatures were merged into one signature for each of the LULC classes using the Edit>Merge tool within the Signature Editor window.
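Transformed divergence itself is a closed-form statistic computed from each pair of class means and covariance matrices.  The sketch below follows the form commonly given in remote-sensing texts; the example signatures are hypothetical, and ERDAS may differ in implementation details.

import numpy as np

def transformed_divergence(mean_i, cov_i, mean_j, cov_j):
    """Transformed divergence between two class signatures on the usual
    0-2000 scale, as commonly defined in the remote-sensing literature."""
    ci_inv, cj_inv = np.linalg.inv(cov_i), np.linalg.inv(cov_j)
    dm = (mean_i - mean_j).reshape(-1, 1)
    div = 0.5 * np.trace((cov_i - cov_j) @ (cj_inv - ci_inv)) \
        + 0.5 * np.trace((ci_inv + cj_inv) @ dm @ dm.T)
    return 2000.0 * (1.0 - np.exp(-div / 8.0))

# Example with two hypothetical class signatures over four bands
m1, m2 = np.array([40., 55., 60., 90.]), np.array([40., 52., 58., 120.])
c1 = np.diag([9., 12., 10., 30.])
c2 = np.diag([8., 10., 11., 45.])
td = transformed_divergence(m1, c1, m2, c2)   # values near 2000 = well separated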

Part 3:  Performing supervised classification

The final portion of this lab was about taking the information obtained and created in the first two parts and using it to actually perform a supervised classification.  To do this, we clicked on Raster>Supervised>Supervised Classification, set the input image to the provided image of Eau Claire and Chippewa counties, and set the input signature to the merged signatures created in the previous part of this lab.  We then made sure that the Non-Parametric Rule was set to None and that the Parametric Rule was set to Maximum Likelihood.  Finally, the classification tool was run and the image was brought into a viewer for examination.  The supervised classification image was then compared to the unsupervised classification image created in the previous lab, and finally a map was created using the supervised classification image.
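Maximum likelihood classification is equivalent to a per-class Gaussian model with its own covariance matrix, so scikit-learn's QuadraticDiscriminantAnalysis makes a reasonable stand-in (it additionally weights by class priors).  The training arrays and image below are random placeholders rather than the lab's merged signatures.

import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

# X: band values of the training signatures, y: class labels (hypothetical)
X = np.random.rand(300, 6)
y = np.random.randint(0, 5, size=300)

# Fit one Gaussian (mean + full covariance) per class, then assign each pixel
# to the class with the highest likelihood, as maximum likelihood does.
mlc = QuadraticDiscriminantAnalysis(store_covariance=True).fit(X, y)

image = np.random.rand(128, 128, 6)
classified = mlc.predict(image.reshape(-1, 6)).reshape(128, 128)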

Results

Figure 1. - Spectral Signatures of all 50 Samples



 Figure 2. - Merged Spectral Signatures

Figure 3. 

Sources

Data sources are as follows: Landsat satellite image is from Earth Resources Observation and Science Center, United States Geological Survey. 
