Monday, April 29, 2019

Lab 9 - Hyperspectral Remote Sensing

Goal

The goal of this lab exercise was to introduce us to the processing of hyperspectral remotely sensed images.  The lab did this through the basics of hyperspectral imaging and spectroscopy, atmospheric correction using FLAASH, and various tools for determining vegetation health.

Methods

Part 1 - Introduction to Hyperspectral Remote Sensing

The first portion of this lab was devoted to viewing and basic analysis of hyperspectral images in the ENVI software.  In ENVI, we began by loading in select bands from the image, importing ROIs (Regions of Interest), and plotting them to view their spectral signatures.  Once this was done, other spectral signatures were imported from a spectral library, in this case the JPL spectral library.  In addition to viewing the spectral signatures of ROIs and reference signatures, this portion of the lab also demonstrated animating the image data.  Using the animation tool in ENVI, we animated all the individual bands as a slideshow, which can be used to check for bad bands or bands with errors.
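The ROI plotting step boils down to averaging the spectra of the pixels inside each region.  Below is a minimal numpy/matplotlib sketch of that idea; the data cube and ROI mask are random placeholders, not the lab's actual image.

import numpy as np
import matplotlib.pyplot as plt

# Placeholder hyperspectral cube (rows, cols, bands) and a boolean ROI mask;
# in the lab these would come from the loaded ENVI image and imported ROIs.
cube = np.random.rand(100, 100, 224)
roi_mask = np.zeros((100, 100), dtype=bool)
roi_mask[40:50, 40:50] = True

# Average the spectra of all pixels inside the ROI -> one value per band
mean_spectrum = cube[roi_mask].mean(axis=0)

plt.plot(mean_spectrum)
plt.xlabel("Band number")
plt.ylabel("Mean value")
plt.title("ROI mean spectral signature")
plt.show()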

Part 2 - Atmospheric Correction Using FLAASH

The second portion of this lab exercise was devoted to using FLAASH to conduct atmospheric correction on the hyperspectral imagery.  As UWEC does not have a license for FLAASH, we walked through the steps of using FLAASH rather than actually executing it.  The first step was to open the FLAASH window, input the desired image, and select the "Read array of scale factors (1 per band) from ASCII file" radio button to import a text file containing the desired per-band scale factors.  The other default parameters were accepted and the FLAASH tool was "run." Next, we viewed the "corrected" image and checked which bands FLAASH flagged in the bad bands list.
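For reference, the scale-factor file drives a simple per-band operation: each band's digital numbers are divided by that band's scale factor to produce the radiance FLAASH expects.  A minimal sketch of that step, assuming a hypothetical scale_factors.txt with one value per band:

import numpy as np

# Placeholder DN cube (rows, cols, bands); scale_factors.txt is a hypothetical
# ASCII file with one scale factor per band, like the one imported in FLAASH.
dn_cube = np.random.randint(0, 4096, size=(100, 100, 224)).astype(float)
scale_factors = np.loadtxt("scale_factors.txt")  # shape: (224,)

# Dividing broadcasts across the band axis, giving per-band scaled radiance
radiance = dn_cube / scale_factors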

Part 3 - Vegetation Analysis

Part 3 of this lab was all about the various forms of vegetation analysis that can be accomplished using hyperspectral imagery.  First, we used the Vegetation Index Calculator to calculate an NDVI image.  Some other indices tested at this stage of the lab were water indices, red edge indices, and lignin indices.  Once this was done, several vegetation analysis tools were run.  The first of these was the Agricultural Stress Tool, which creates an image showing areas of vegetation under stress.  Next was the Fire Fuel Tool, which measures areas of vegetation that are more or less susceptible to fire.  In using this tool, we also gained experience with the Mask Band tool to mask out portions of the image that we did not want to analyze, such as the urban areas.  The final tool run in this portion of the lab was the Forest Health tool, which measures overall vegetation health based on parameters like water content, greenness, etc.
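NDVI itself is a short formula, NDVI = (NIR - Red) / (NIR + Red), so a small numpy sketch shows what the Vegetation Index Calculator computes per pixel; the band positions below are assumptions that depend on the sensor's band ordering.

import numpy as np

cube = np.random.rand(100, 100, 224)      # placeholder hyperspectral cube

nir = cube[:, :, 50]                      # assumed NIR band position
red = cube[:, :, 30]                      # assumed red band position

# Small epsilon guards against divide-by-zero over dark pixels
ndvi = (nir - red) / (nir + red + 1e-10)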

Part 4 - Hyperspectral Transformation

The final portion of this lab was devoted to using an MNF (Minimum Noise Fraction) transform to determine image dimensionality, reduce computational requirements, and remove noise from the image.  To do this, we used the MNF Rotation > Forward MNF > Estimate Noise Statistics from Data tool and input the provided noise statistics.  Once this was done, we viewed the eigenvalue plot of the MNF-transformed image.
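Conceptually, the forward MNF transform whitens the data with a noise covariance estimate and then performs a principal components rotation, so the leading eigenvalues indicate signal-dominated components.  The sketch below is a simplified numpy illustration of that textbook formulation, not ENVI's implementation; here the noise statistics are estimated from shift differences rather than read from the provided file.

import numpy as np

cube = np.random.rand(100, 100, 224)                   # placeholder cube
X = cube.reshape(-1, cube.shape[2])                    # pixels x bands

# Crude noise estimate: differences between horizontally adjacent pixels
noise = (cube[:, :-1, :] - cube[:, 1:, :]).reshape(-1, cube.shape[2])
Sigma_n = np.cov(noise, rowvar=False) / 2.0            # noise covariance
Sigma = np.cov(X, rowvar=False)                        # data covariance

# Generalized eigenproblem Sigma v = lambda Sigma_n v; large eigenvalues
# correspond to high signal-to-noise MNF components
eigvals, eigvecs = np.linalg.eig(np.linalg.solve(Sigma_n, Sigma))
order = np.argsort(eigvals.real)[::-1]
eigvals = eigvals.real[order]                          # the eigenvalue plot viewed in ENVI
mnf_bands = (X @ eigvecs.real[:, order]).reshape(cube.shape)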

Results



Sources

ENVI Tutorials. (n.d.). Retrieved April 29, 2019, from https://www.harrisgeospatial.com/Support/Self-Help-Tools/Tutorials

Saturday, April 20, 2019

Lab 8 - Advanced Classifiers 2

Goal

The goal of this lab exercise was to expose students to two different advanced classification algorithms.  These advanced classifiers can provide much higher classification accuracy than traditional classification methods.  The first advanced classification conducted in this lab was an expert system/decision tree classification with the use of ancillary data.  The second was developing and using a neural network to perform a complex image classification.

Methods

Part 1: Expert System Classification

For part 1 of this lab, we used the Knowledge Engineer tool in Erdas Imagine and created various rules and variables to classify the provided image of the Eau Claire and Chippewa Falls region.  In the Knowledge Engineer window, we created arguments and counter-arguments for each of the desired LULC classes.  The final classification scheme can be seen in Figure 1 below, showing all of the rules and arguments used for the classification.

Figure 1. Arguments for expert system classification
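Stripped of the GUI, each Knowledge Engineer hypothesis is a boolean rule over the input rasters, evaluated per pixel.  Below is a hedged sketch of that pattern; the layers, thresholds, and class codes are invented for illustration and are not the rules actually built in the lab.

import numpy as np

ndvi = np.random.rand(100, 100)          # placeholder ancillary layer
brightness = np.random.rand(100, 100)    # placeholder spectral layer

lulc = np.zeros((100, 100), dtype=np.uint8)
lulc[ndvi > 0.5] = 1                                  # e.g., forest where NDVI is high
lulc[(ndvi <= 0.5) & (brightness > 0.6)] = 2          # e.g., urban: low NDVI, bright
lulc[(ndvi <= 0.5) & (brightness <= 0.6)] = 3         # e.g., water: low NDVI, dark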

Part 2: Neural Network Classification

Part 2 of this lab exercise was devoted to performing a neural network classification using the ENVI software.  After inputting the desired image into ENVI, we used the Restore ROI tool to add in the provided ROIs that would be used to train the classifier.  Next, the Neural Network supervised classification was run with 1000 iterations and a logistic activation method.  Once all the parameters were set, the classification was run, and the resulting image is shown below in Figure 2.

Figure 2. Resulting neural network classification
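As a rough analogue of this step outside ENVI, scikit-learn's MLPClassifier can be configured with the same two parameters used here, a logistic activation and 1000 iterations.  This is a stand-in, not ENVI's neural net; the training arrays are placeholders for the restored ROI pixels.

import numpy as np
from sklearn.neural_network import MLPClassifier

X_train = np.random.rand(500, 6)          # placeholder ROI pixel spectra
y_train = np.random.randint(0, 4, 500)    # placeholder ROI class labels

clf = MLPClassifier(activation="logistic", max_iter=1000, random_state=0)
clf.fit(X_train, y_train)

pixels = np.random.rand(100 * 100, 6)              # flattened image (placeholder)
classified = clf.predict(pixels).reshape(100, 100)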

Once this was done, the next part of the neural network portion of this lab was to collect our own training samples and run a neural network classification of the University of Northern Iowa campus.  To do this, we imported the image into ENVI just as the previous image was imported, but instead of restoring ROIs, we created our own for the classes we desired, such as buildings, roads, and vegetation.  Many of the same parameters from the first neural network run were used and the classification was run.  The classification was then run a second time with the number of iterations set to the value where there was still much variability in the Neural Net RMS Plot window.  With this different number of iterations, the classified image appeared to be more accurate for certain classes.

Results

Final output map from part 1 of lab

First NN classification of University of Northern Iowa campus

Second NN classification of University of Northern Iowa campus with altered number of iterations.


Sources

Landsat imagery provided by Earth Resources Observation and Science Center, United States Geological Survey

QuickBird high resolution image of a portion of the University of Northern Iowa campus provided by the Department of Geography, University of Northern Iowa.

Sunday, April 14, 2019

Lab 7 - Object-based Classification

Goal

The goal of this lab exercise was to develop an introductory understanding of using eCognition software to conduct an object-based classification.  As object-based classification systems are relatively new in the remote sensing world, gaining this experience sets us apart from many others who may not have experience in this type of work yet.  To train us in this new form of classification, this lab was designed to teach three different and important parts of the object-based classification model.  The first part of this lab was devoted to the segmentation of an image into homogeneous spatial and spectral clusters, known as objects.  The second portion was selecting samples from the objects created in section 1 to train the classifiers that we would be using.  The third and final portion was devoted to the execution of the object-based classification schemes, Random Forest and Support Vector Machines, followed by the refinement of the classified images to mitigate any errors that occurred during classification.

Methods

Part 1 - Create a New Project

The first step in conducting an object-based classification is to launch eCognition, as Erdas Imagine does not have the object-based classification functionality that eCognition does.  When bringing the desired image into eCognition, we made sure to check the "Use Geocoding" checkbox so that none of the georeference information included with the image was lost.  To ensure that pixels that should not be analyzed are excluded, we clicked on 'No Data' in the 'Create Project' dialog box.  This opens the 'Assign No Data Values' dialog box, where the 'Use Single Value for All Layers (Union)' checkbox was checked.  Once this was done, the project was created by selecting 'OK', and the 'Edit Image Layer Mixing' tool in the main toolbar was used to assign the band combination of this image to a 432 false color composite.

Part 2 - Segmentation and Sample Collection

The second step in this lab was to create the objects that would be used to conduct the object-based classification of the image.  To do this, the first step was to open the 'Process Tree' by clicking on 'Process > Process Tree' on the main toolbar.  In the 'Process Tree' dialog, we right-clicked and selected 'Append New' to open the 'Edit Process' dialog, where we changed the name to Lab7_object_classification and clicked 'Execute'.  This created the Lab7_object_classification process (listed with its <0.001s execution time) inside the 'Process Tree'.  Inside this process we inserted a child process, changed the algorithm to 'Multiresolution Segmentation,' typed 'Level 1' in the 'Level Name' box, and set the 'Shape' criterion to 0.3 and the 'Compactness' criterion to 0.5.  We then clicked 'Execute' to create the image objects and viewed them overlaid on the original image using the view/hide image object buttons on the main toolbar.

Once this was completed, we launched the 'Class Hierarchy' dialog from the main toolbar under the 'Classification' tab and entered all of the desired classes, assigning a unique color to each one.  We then collected samples for each class by selecting 'Classification > Samples > Select Samples' from the main toolbar.  After choosing which class to collect samples for, we worked through the image, simply double-clicking on objects that fell fully within the desired cover type.  These steps were repeated until the desired number of samples had been collected for each of the created classes.
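eCognition's multiresolution segmentation algorithm is proprietary, but the idea of growing spectrally homogeneous objects under a scale parameter can be sketched with scikit-image's felzenszwalb segmentation as a rough stand-in.  The parameters below are illustrative assumptions, not equivalents of the lab's shape and compactness settings.

import numpy as np
from skimage.segmentation import felzenszwalb

image = np.random.rand(200, 200, 3)   # placeholder 3-band image

# Larger 'scale' yields larger, fewer objects, loosely analogous to the
# scale parameter in multiresolution segmentation
objects = felzenszwalb(image, scale=100, sigma=0.5, min_size=50)
print(f"{objects.max() + 1} image objects created")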

Part 3 - Implement Object Classification

Once all of the samples were collected, the next step was to conduct a Random Forest classification.  To do this, a new variable was created through the 'Create Scene Variable' window; it was named 'RF Classifier' and set to be a string type variable.  Once this was done, a new parent process named 'RF Classification' was added to the 'Process Tree' window.  Inside this process, another process was created named 'Train RF Classifier.'  Finally, inside the 'Train RF Classifier' process, another process was inserted whose variables were set to those highlighted by the red boxes in Figure 1.
Figure 1. - Train RF Parameters

Once this was done, the desired features were added in the 'Features' tab of the process.  These included the mean object features as well as GLCM Dissimilarity (all dir.) and GLCM Mean (all dir.) from the Texture > Texture after Haralick tab.  This process was then executed to train the RF classifier on the selected features of the image objects created earlier in the lab.

Once the RF classifier was trained, the next step was to apply it to create the classified image.  To do this, a process called 'Apply RF Classifier' was added to the Process Tree, and a child process was created beneath it to actually perform the classification.  In this final process, the parameters were set to those shown by the red boxes in Figure 2.  This process was then executed to perform the object-based classification.
Figure 2. - Apply RF Parameters
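The train/apply pair built in the Process Tree follows the usual supervised pattern: fit the classifier on per-object features of the sampled objects, then predict a class for every object.  Below is a hedged scikit-learn sketch of that workflow (not eCognition's internals), with placeholder feature arrays standing in for the band means and GLCM texture values.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

train_features = np.random.rand(60, 6)        # sampled objects x features (placeholder)
train_labels = np.random.randint(0, 5, 60)    # class of each sampled object (placeholder)

rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(train_features, train_labels)          # "Train RF Classifier" step

all_object_features = np.random.rand(2000, 6)       # every object in the image (placeholder)
object_classes = rf.predict(all_object_features)    # "Apply RF Classifier" step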

Where the resulting image contained errors, we fixed them either by editing the Class Filter within the Apply Classification process or by manually editing the objects with the tools in the View > Manual Editing toolbar.  Finally, the image was exported as a raster file with Classification selected as the content type and with all of the desired classes selected for inclusion.  The image was then used to create the final map shown in the results section below.

Part 4 - Support Vector Machines

Part 4 of this lab exercise was devoted to conducting another object-based classification of the same image as before, but using a Support Vector Machine (SVM) scheme instead of Random Forest.  The same processes as in the previous classification were created in the same way, with only slightly different parameters used in the 'Train' process, as shown in Figure 3, and in the 'Apply' process, as shown in Figure 4.  Once these parameters were set, the processes were executed and the resulting classified image was used to create the map shown below in the results section.

Figure 3. - Train SVM Parameters

Figure 4. - Apply SVM Classifier Parameters
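The SVM run reuses the same train/apply pattern with a different estimator, sketched below with scikit-learn's SVC as a stand-in for eCognition's SVM; the kernel choice and placeholder features are assumptions.

import numpy as np
from sklearn.svm import SVC

train_features = np.random.rand(60, 6)        # sampled objects x features (placeholder)
train_labels = np.random.randint(0, 5, 60)    # placeholder class labels

svm = SVC(kernel="rbf")
svm.fit(train_features, train_labels)               # "Train SVM" step

all_object_features = np.random.rand(2000, 6)       # placeholder object features
object_classes = svm.predict(all_object_features)   # "Apply SVM Classifier" step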

Part 5: Object-based classification of UAS imagery

The final portion of this lab was devoted to conducting an object-based classification of a provided high resolution UAS image.  For this portion of the lab, I chose to use the SVM classification scheme again, as I had better results with it than with the RF classification scheme.  The only parameter I changed for this SVM classification was the scale parameter used to create the objects to classify.  As this was a very high resolution image that encompassed a small area, a larger scale parameter of 35 was used to create objects that were neither too large nor too small.

Results






Sources

 Landsat satellite image is from Earth Resources Observation and Science Center, United States Geological Survey. UAS image is from UWEC Geography & Anthropology UAS Center. 

Saturday, April 6, 2019

Lab 6 - Digital Change Detection

Goal and Background

The goal of this lab exercise was to allow us to develop an understanding of the methods to evaluate and measure LULC changes that occur over time.  In this lab, we performed quick, qualitative measurements of LULC change, quantified the change detection, and developed a model that was used to map from-to changes in LULC over time.

Methods

Part 1: Change detection using Write Function Memory Insertion

For part 1 of this lab, we performed a write function memory insertion to quickly view pixels that changed from one image to another.  To do this, the provided band 3 image and the two provided band 4 images were layer stacked and the output image was saved.  Next, under the Multispectral tab, the Set Layer Combinations window was opened so that we could specify which band was to be inserted into which color gun.  The band 3 red image was inserted into the red color gun and the two band 4 NIR images were inserted into the green and blue color guns.  The output was an image in which pixels that experienced change over time are represented by the color pink.
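The insertion itself is just an assignment of bands to color guns, which can be sketched with numpy: the date-1 red band feeds the red gun and the two NIR bands feed green and blue, so pixels whose NIR response changed render in an off-color such as pink.  The arrays here are placeholders for the layer-stacked bands.

import numpy as np

red_t1 = np.random.rand(100, 100)    # band 3 (red), date 1 (placeholder)
nir_t1 = np.random.rand(100, 100)    # band 4 (NIR), date 1 (placeholder)
nir_t2 = np.random.rand(100, 100)    # band 4 (NIR), date 2 (placeholder)

# Stack into an RGB composite: R = red band, G and B = the two NIR dates
composite = np.dstack([red_t1, nir_t1, nir_t2])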

Part 2: Post-classification comparison change detection

For part 2 of this lab, we first calculated the quantitative change that occurred between the two provided LULC classification images of the Milwaukee, WI area.  To do this, the Raster Attribute Editor was opened for both images so that the class names and histogram values columns could be copied to an Excel sheet for calculations.  Using this data, the next step was to convert the histogram values to hectares for each LULC class and then calculate the percent increase or decrease for each class.  Once this was done, a table was created to show how much each LULC class changed over the period between the images' capture dates.

The next portion of this lab was devoted to creating a model that would take these same LULC classification images of the Milwaukee area and output images showing from-to LULC change over time.  To accomplish this, the Wilson-Lula algorithm was used.  The model that was developed has an input for each of the LULC images, a pair of functions for each from-to possibility, and a second function for each that combines the outputs of the first pair to create the final image showing which pixels changed from one LULC class to another.  For this example, the from-to options shown were agriculture to urban/built-up, wetlands to urban/built-up, forest to urban/built-up, wetland to agriculture, and agriculture to bare soil.  In the first set of functions, the function type was changed from analysis to conditional and set to EITHER IF OR.  For example, the first function, which identified 2001 agriculture for the agriculture to urban/built-up case, was EITHER 1 IF ( $n1_milwauke_2001==7 ) OR 0 OTHERWISE, with the image being set equal to 7 because this is the value that corresponds to agriculture.  In the same set of functions, the second function was set to EITHER 1 IF ( $n2_milwauke_2011==3 ) OR 0 OTHERWISE, with the image set equal to 3 because this is the value of urban/built-up.  A temporary raster was created between the two layers of functions, and the second group of functions was set to bitwise, using the '&' operator to combine the two binary values from the previous functions.  This final function output an image showing which pixels went from one LULC class to another.  These from-to images were then brought into ArcMap to create a map showing all the changes experienced in the Milwaukee area between 2001 and 2011.
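Each from-to layer in the model reduces to a pixelwise AND of the two conditional functions, and the hectare figures come from the pixel counts.  A minimal numpy sketch of the agriculture (7) to urban/built-up (3) case, assuming 30 m pixels (0.09 ha each); the rasters are placeholders.

import numpy as np

lulc_2001 = np.random.randint(1, 10, size=(100, 100))   # placeholder 2001 raster
lulc_2011 = np.random.randint(1, 10, size=(100, 100))   # placeholder 2011 raster

# EITHER 1 IF (2001 == agriculture) AND (2011 == urban/built-up) OR 0 OTHERWISE
ag_to_urban = ((lulc_2001 == 7) & (lulc_2011 == 3)).astype(np.uint8)

# Histogram counts -> hectares: a 30 m pixel covers 900 m^2 = 0.09 ha
hectares_changed = ag_to_urban.sum() * 0.09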

Results

Figure 1. Write Function Memory Insertion

Figure 2. Table showing quantitative change


Figure 3. Example of Model using Wilson-Lula algorithm to develop from-to imagery

Figure 4. Final output map showing from-to change in Milwaukee area

Sources

Landsat satellite image is from Earth Resources Observation and Science Center, United States Geological Survey. National Land Cover Dataset is from Multi-Resolution Land Characteristics Consortium (MRLC). Appropriate citations for the 2001 and 2011 National Land Cover Dataset can be found at http://www.mrlc.gov/nlcd2001.php and http://www.mrlc.gov/nlcd2011.php respectively. Milwaukee shapefile is from the ESRI U.S. geodatabase.
