Tuesday, December 9, 2014

Remote Sensing Lab 8
Ethan Nauman
12/9/14

The goal of the final lab this semester was to gain experience in measuring and interpreting the spectral reflectance (signatures) of various earth surface materials captured in a satellite image. In this lab I learned how to collect spectral signatures from remotely sensed images, graph them, and analyze them to see whether they met the spectral separability criteria we discussed in class. These techniques, combined with all the other techniques learned in previous labs, set me up to move into a more advanced remote sensing class.

Preamble-
For this lab I used a Landsat ETM+ image that covered the Eau Claire area and other regions in WI and MN. The image was captured in 2000, and I used it to collect the spectral signatures of various earth surface materials. I measured and plotted the spectral signatures of 12 different materials and surfaces:
1. Standing water
2. Moving water
3. Vegetation
4. Riparian vegetation
5. Crops
6. Urban Grass
7. Dry soil
8. Moist soil
9. Rock
10. Asphalt highway
11. Airport runway
12. Concrete surface (parking lot)
I began this lab in Erdas Imagine and brought in the Eau Claire image from 2000. I used the spectral tools in Erdas to collect the spectral signatures, but this could also be done in the field with an instrument called a spectroradiometer, which makes reflectance measurements in the visible, near-infrared, and middle-infrared portions of the EM spectrum. I began by collecting the spectral signature for standing water. I used Lake Wissota because it is a big lake and not much current runs through it. Under the drawing tools I used the polygon tool and drew a fairly small polygon in the middle of the lake. After completing the polygon, I clicked the supervised tool under the raster toolbar and used the Signature Editor. This allowed me to create an AOI, rename it 'standing water', and change its color scheme. I was also able to 'display the mean plot window', which showed me the spectral curve. By displaying the spectral curve, I could check whether the reflectance matched what should be displayed and whether there was any interference in the measurement. Below is the spectral curve for standing water.
As you can see, the reflectance was highest in the blue band and lowest in the NIR band. One spot where there was interference was at the MIR band, where there was a slight spike caused by the atmosphere.
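Outside of Erdas, the mean-plot idea can be sketched in a few lines of Python: a signature is just the per-band mean of the pixels inside the AOI polygon. The array values, band names, and AOI position below are made up for illustration, not taken from the actual Eau Claire image.

```python
import numpy as np

# Synthetic 6-band image cube (bands x rows x cols); the values are made-up
# digital numbers standing in for Landsat ETM+ reflective bands.
bands = ["blue", "green", "red", "NIR", "MIR1", "MIR2"]
rng = np.random.default_rng(0)
image = rng.integers(0, 255, size=(6, 100, 100)).astype(float)

# Boolean AOI mask playing the role of the polygon drawn over Lake Wissota.
aoi = np.zeros((100, 100), dtype=bool)
aoi[40:60, 40:60] = True

# The 'mean plot window' in the Signature Editor shows exactly this:
# the per-band mean of the pixels inside the AOI.
signature = image[:, aoi].mean(axis=1)

for name, value in zip(bands, signature):
    print(f"{name}: {value:.1f}")
```

Plotting `signature` against the band list reproduces the kind of spectral curve shown in the figures below.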

The next step was to find the spectral signatures for surfaces 2 through 12. This required reading the map well enough to pick out the other 11 surface features I had to measure. Knowing the surrounding area helped me when choosing the signatures for moving water, crops, vegetation, rocks, the airport runway, asphalt, and urban grass. Below are the rest of the spectral signatures and their reflectances.

2. Moving water- I knew the best spot to find moving water was on the river where there were rapids. This was hard to pinpoint on the Eau Claire 2000 image because the contrast was poor when zoomed in. So, I figured the areas on the river where it was a lighter color, possibly whitish, meant fast-moving water, hopefully rapids. Below is the spectral curve I collected after drawing an AOI polygon.
Similar to standing water, the blue band had the highest reflectance while the MIR band had the lowest. There also appeared to be some interference between the NIR and MIR bands, where there was a slight spike on the spectral curve.

3. Vegetation- Finding the spectral curve for vegetation took knowing the area. I knew that vegetation would appear as pink in the Eau Claire image because of the NIR band. It was also difficult to distinguish crops from other vegetation, but taking the shape of the landform into consideration, I stayed away from areas that appeared rectangular or square, knowing that crops are usually planted in that pattern. Below is the spectral curve I found for vegetation.
The red band had the lowest reflectance while the NIR band had the highest. This is because chlorophyll absorbs red light for photosynthesis while the internal structure of healthy leaves strongly reflects NIR, showing that there is healthy, mature vegetation in the AOI.
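This red-low/NIR-high pattern is the basis of the NDVI vegetation index, which we could compute directly from the two bands. A minimal sketch with hypothetical per-pixel reflectance values (the numbers are invented, not from the Eau Claire image):

```python
import numpy as np

# Hypothetical reflectances (0-1 scale) for three pixels:
# two healthy vegetation pixels and one bare-soil pixel.
red = np.array([0.05, 0.08, 0.30])
nir = np.array([0.50, 0.45, 0.35])

# NDVI = (NIR - red) / (NIR + red): high for healthy vegetation because
# chlorophyll absorbs red light while leaf structure reflects NIR.
ndvi = (nir - red) / (nir + red)
print(ndvi.round(2))
```

The two vegetation pixels come out well above the soil pixel, matching the curve described above.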

4. Riparian vegetation- Riparian vegetation is the vegetation along the banks of a water system. This was pretty easy to find since there is so much water in the Eau Claire image. I used the vegetation on the banks of the Chippewa River. Since this is also a form of vegetation, I knew the spectral curve wouldn't differ much from that of the normal vegetation.
Although it is hard to see, the riparian vegetation curve almost mirrors that of the normal vegetation.

5. Crops- Finding crops on the Eau Claire image again came down to knowing the area. It was hard to tell the difference between crops and vegetation, so I used both the location and the shape of the outline of the AOI I was looking at. Knowing that crops are usually planted in rectangular or square fields, I factored this in when selecting my AOI.
The NIR band was the highest, which means the crops are healthy and mature: like other healthy vegetation, they absorb red light for photosynthesis and reflect strongly in the NIR. Crops and the two types of vegetation have similar spectral curves.

6. Urban Grass- When searching for urban grass on the Eau Claire image, I used the area just off of campus to select my AOI. I knew the houses in the area had grass, especially my backyard near campus, with no trees in the way.
The NIR band had the highest reflectance while the MIR band dropped drastically along with the red band. There is some interference in the spectral curve at the blue band; the green band should be the highest of the visible bands.

7. Dry soil- This was difficult for me to find on the Eau Claire image. Depending on when the image was captured, it could have been the rainy season, or there could even have been snow on the ground. I knew that dry soil reflects a lot and absorbs a little, so it took trial and error with the AOI to select the right type of dry soil.
The MIR band reflected the most while the NIR band reflected the least. Also, the red band was the highest of the visible bands.

8. Moist soil- This too was difficult to find in the Eau Claire image. It also depended on when the image was captured and whether there were crops in the moist soil at the time.
The MIR band again had the highest reflectance, with the blue band being the lowest. Below you can see the difference in spectral curves between the dry soil and the moist soil.
As you can see, throughout all the bands the moist soil reflected much less than the dry soil, especially in the visible bands and the MIR band.

9. Rock- A large rock outcrop was difficult to come across on the Eau Claire image. I knew of a large rock outcrop called Big Falls, northeast of Altoona along the Chippewa River. By tracing the river on the Eau Claire image I was able to find this outcrop and select it as my AOI.
The MIR band had the highest reflectance while the NIR band had the lowest. The red band was the highest of the visible bands.

10. Asphalt highway- This was easy to find on the Eau Claire image. However, the highway appeared only as a skinny line at full extent, and when zoomed in, the colors changed slightly from one pixel to the next.
Once again the MIR band had the highest reflectance while the NIR band had the lowest; there was also a fairly large spike in the blue band at the start of the spectral curve.

11. Airport runway- I used the Eau Claire airport located just north of the city as the AOI. I zoomed in until the runway was the only item visible in my viewer and drew my AOI. The runway appeared as white on the Eau Claire image.
As you can see, the red band had the highest reflectance, while there was a large dip at the NIR band.

12. Concrete surface (parking lot)- This took finding a large enough parking lot in the area to use as my AOI. The first thing that came to mind was the mall parking lot. I found the mall and its parking lot and used it as my AOI for the spectral curve. One thing I did not take into consideration was that cars parked in the lot could cause interference.
The red band had the highest reflectance, just as with the asphalt highway, and there was once again a large dip at the NIR band. Below are all the spectral curves on one graph for spectral signatures 1 through 12.
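With all the signatures collected, the spectral separability we discussed in class can be checked numerically. One simple (if crude) measure is the Euclidean distance between mean signatures; more formal measures like transformed divergence or Jeffries-Matusita also use class covariances. A sketch with made-up 6-band mean values for three of the surfaces:

```python
import numpy as np

# Made-up 6-band mean signatures (digital numbers) for three surfaces;
# these are illustrative values, not the ones measured in the lab.
signatures = {
    "standing water": np.array([60, 45, 30, 15, 10, 8], dtype=float),
    "vegetation":     np.array([30, 40, 25, 120, 70, 35], dtype=float),
    "dry soil":       np.array([70, 80, 95, 90, 130, 110], dtype=float),
}

# Pairwise Euclidean distance between mean signatures: larger distances
# suggest the classes are easier to tell apart spectrally.
names = list(signatures)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        d = np.linalg.norm(signatures[a] - signatures[b])
        print(f"{a} vs {b}: {d:.1f}")
```

Water and vegetation come out far apart, as their curves above suggest, while spectrally similar classes (like the two vegetation types) would yield much smaller distances.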

Wednesday, December 3, 2014

Remote Sensing Lab 7
Ethan Nauman
12/2/14

The goal of this lab was to develop our skills in performing key photogrammetric tasks on aerial photos and satellite images. This lab was specifically designed to train us in the mathematical calculation of photographic scales, measurement of areas and perimeters, and calculation of relief displacement. By the end of this lab I was able to perform diverse photogrammetric tasks.

Part 1: Scales, measurements and relief displacement
Data for this portion of the lab was found in our Lab 7 folder. The first two questions of this lab dealt with figuring out the scales of two different maps. We were given the distance in feet between two points on the map and had to measure the distance on the maps using a ruler. After finding the distances, we performed calculations to find the scales of the maps.
This is the photo we used for calculating the scales.
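The calculation itself is just a ratio: scale = photo distance / ground distance, with both in the same units. A worked sketch with hypothetical numbers (not the lab's actual measurements):

```python
# Photo scale = photo distance / ground distance, in the same units.
# The numbers below are hypothetical, not the lab's actual measurements.
photo_distance_in = 2.5        # inches measured with a ruler on the photo
ground_distance_ft = 8333.3    # given ground distance in feet

ground_distance_in = ground_distance_ft * 12      # feet -> inches
scale_denominator = ground_distance_in / photo_distance_in
print(f"Scale is approximately 1:{scale_denominator:,.0f}")
```

With these example values the photo works out to roughly 1:40,000.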

Section 2: Measurement of areas of features on aerial photographs.
This section of the lab was performed in Erdas Imagine. For the first part we displayed the Eau Claire-west-southeast picture from our lab 7 folder. We were asked to find the area of the lagoon marked on the map, using the polygon measuring tool. This let me single-click points around the lagoon and then reported the final area. I could also change the measurement units to whatever the question was asking for: acres, hectares, etc. After finding the area of the lagoon, we were asked to find its perimeter. Using the same tool concepts, the only change was using the polyline tool instead of the polygon tool, so the tool reported the perimeter of the lagoon rather than the area. Again the measurement units could be changed to whatever the question asked for. Below is a picture of the lagoon, labeled with an 'X', that we had to find the perimeter and area of.
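Under the hood, the polygon tool's area measurement amounts to the shoelace formula over the clicked vertices, and the perimeter is the sum of the edge lengths. A sketch with hypothetical map coordinates (the vertex values are invented, not the lagoon's real outline):

```python
import math

# Hypothetical digitized vertices (x, y map coordinates in metres),
# standing in for the points clicked around the lagoon.
vertices = [(0.0, 0.0), (120.0, 0.0), (150.0, 80.0), (60.0, 130.0), (-10.0, 70.0)]

def polygon_area(pts):
    """Shoelace formula: area enclosed by a closed ring of vertices."""
    n = len(pts)
    s = 0.0
    for i in range(n):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def polygon_perimeter(pts):
    """Sum of edge lengths around the closed ring."""
    n = len(pts)
    return sum(math.dist(pts[i], pts[(i + 1) % n]) for i in range(n))

area_m2 = polygon_area(vertices)
print(f"area: {area_m2:.0f} m^2 ({area_m2 / 10000:.2f} hectares)")
print(f"perimeter: {polygon_perimeter(vertices):.0f} m")
```

Converting the result to acres or hectares is then a simple unit multiplication, which is all the tool's unit dropdown does.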
Section 3: Calculating relief displacement from object height.
We used the JPEG of Eau Claire-west-southeast from our lab 7 folder for this section of the lab. We were asked to find the relief displacement of the smokestack labeled in the photo. We were given the height of the aerial camera above the datum and the scale of the photo. We were also given the principal point on the photo.
Calculating the relief displacement took time and careful measurements to figure out the exact amount of relief displacement. 
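The relief displacement formula is d = (h × r) / H, where h is the object's height, r is the radial distance from the principal point to the top of the object on the photo, and H is the camera's height above the datum. A worked sketch with hypothetical numbers (not the lab's actual smokestack values):

```python
# Relief displacement d = (h * r) / H, where
#   h = object height above the datum,
#   r = radial distance from the principal point to the object's top on the photo,
#   H = camera height above the datum.
# The values below are hypothetical, not the lab's actual measurements.
h_ft = 380.0       # smokestack height, feet
r_in = 3.2         # radial distance measured on the photo, inches
H_ft = 20000.0     # flying height above the datum, feet

d_in = (h_ft * r_in) / H_ft
print(f"relief displacement: {d_in:.4f} inches (outward from the principal point)")
```

The displacement is radially outward from the principal point, which is why tall objects like the smokestack appear to lean away from the photo's center.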

Part 2: stereoscopy
For this part of the lab we needed a pair of polaroid glasses that would let us view our maps and see elevation. We began in Erdas Imagine and brought in a photo of the city of Eau Claire at 1 meter spatial resolution. I then brought a second image into another view: the DEM of the city of Eau Claire, at 10 meter spatial resolution. Together these two images can produce a 3-dimensional perspective view of the city. From the main interface I clicked on the Terrain toolbar, which allowed me to select Anaglyph. For the input DEM I used the EC-DEM, and for the input image I used the EC-City image. I also increased the vertical exaggeration and saved the output image in my personal lab 7 folder. I accepted all other parameters and ran the program. The image it gave me was not much different than the original until I put on the glasses, which let me see elevation changes throughout the image.

Part 3: Orthorectification.
This part of the lab introduced me to the Leica Photogrammetry Suite (LPS) in Erdas Imagine. It is used in photogrammetry, orthorectification, and elevation extraction. This part of the lab took a while to get used to and complete; the tasks were: create a new project, select a horizontal reference source, collect GCPs, add a second image to the block file, collect GCPs in the second image, perform automatic tie point collection, triangulate the images, orthorectify the images, view the orthoimages, and save the block file. The LPS tool was located under the Toolbox function. Once the LPS Project Manager was open, it allowed me to change the parameters in the model setup. I changed it to a polynomial-based push broom and the SPOT push broom sensor. I also had to change the horizontal reference coordinate system: I used the UTM projection, the Clarke 1866 spheroid, and the NAD27 (Conus) datum.

Section 2: Add imagery to the block and define sensor model. 
Now I brought the first of two images into the block and accepted the parameters. After accepting the parameters I activated the point measurement tool. I chose the classic point measurement tool, and upon okaying it, another viewer opened and displayed my image in three different panels: a regular view and two zoomed-in views. In this view I checked the 'use viewer as reference' box and input my second image, spot-pan. Now I had the spot-pan image on the right and the xs-ortho image on the left.
The next step was to collect GCPs on the ortho image. After referencing the GCP in our lab I was able to find where the first GCP went. My X and Y references almost matched, so I didn't have to change them. Next I had to collect the corresponding point on the block image (the right image). I moved the inquire box on the full-scale image, then moved the zoomed inquire box to the exact area I needed. This allowed me to collect the GCP for the block image; once again my X and Y references were almost identical. After collecting another GCP on both images, I activated the Automatic (x,y) Drive, which let me collect GCPs in rapid succession. I collected GCPs up through number 10; the final two GCPs were on a different image. After the 10th GCP I saved and reset the horizontal reference source, which allowed me to bring in the other image, NAPP-2m-ortho.
I then collected the final two GCPs from the second image. After collecting the final GCP I saved again and moved on to collecting elevation values for the GCPs. I reset the vertical reference source and used the Palm Springs DEM. I right-clicked on the Point # column, selected all, then used the Update Z Values tool button.

Section 4: Set type and usage, add a 2nd image to the block and collect the GCPs.
In the cell array, under 'Type' I changed all the points to Full, and under 'Usage' I changed all the points to Control.
Now that I had finished collecting reference points for the first image, I moved to the second image, spot-panb. I loaded the spot-panb image and referenced the GCPs off the first spot-pan image. I used the point measurement tool, which let me locate the points from the first spot-pan image on the second image. I collected all 12 GCPs on the spot-panb image. After saving the points again, I referenced my block image interface, which showed me where the points were located on the two images.

Section 5: Automatic tie point collection, triangulation and ortho resample.
Finally I was at the last couple of steps needed to orthorectify my images. I used the 'Automatic tie point generation properties' tool. The images used field was set to all available, the initial type button was set to exterior/header/GCP, and under the distribution tab I set the intended number of points per image to 40. I then ran the tool. An auto-tie summary was displayed, allowing me to see the accuracy of my GCPs; after looking at this summary I saved it and closed it. After completing all these steps I had all the control and tie points, and the next step was the triangulation process. I used the 'Edit Triangulation Properties' tool. I changed the iterations with relaxation from 1 to 3, and under the point tab I changed the x, y, and z fields to 15. I then ran the triangulation process. After the function ran, it gave me a report summary.
After looking over the report summary and saving it, I exited out, which brought me back to the LPS interface. I could now create my orthorectified images. After running the orthorectification process I was able to view my two images. The images overlap each other, but they blend very well together; if it weren't for the borders on the image overlay, it would be difficult to tell that there are two different images. My final two orthorectified images appeared as this.
I was very pleased with how this process turned out for me. I am thinking about using it in my final term project. The only problem is that this was considered to be "the marathon lab" because it takes some time to collect all the GCPs and tie points. With that being said, though, it was a good feeling to complete this lab.