Tuesday, December 9, 2014

Remote Sensing Lab 8
Ethan Nauman
12/9/14

The goal of the final lab this semester was to gain experience with measuring and interpreting the spectral reflectance (signatures) of various earth surface materials captured in a satellite image. In this lab I learned how to collect spectral signatures from remotely sensed images, graph them, and analyze them to see if they met the spectral separability criteria we discussed in class. These techniques, combined with all the techniques learned in previous labs, set me up to move into a more advanced remote sensing class. 

Preamble-
For this lab I used a Landsat ETM+ image that covered the Eau Claire area and other regions in WI and MN. The image was captured in 2000, and I used it to collect, measure, and plot the spectral signatures of 12 different earth surface materials and features.
1. Standing water
2. Moving water
3. Vegetation
4. Riparian vegetation
5. Crops
6. Urban Grass
7. Dry soil
8. Moist soil
9. Rock
10. Asphalt highway
11. Airport runway
12. Concrete surface (parking lot)
I began this lab in Erdas Imagine and brought in the Eau Claire image from 2000. I used the spectral tools in Erdas to collect the spectral signatures, but this technique could also be performed in the field with an instrument called a spectroradiometer, which makes reflectance measurements in the visible, near-infrared, and middle-infrared portions of the EM spectrum. I began by collecting the spectral signature for standing water. I used Lake Wissota because it is a large lake and not much current runs through it. Under the drawing tools I used the polygon tool and drew a fairly small polygon in the middle of the lake. After completing the polygon, I clicked the Supervised tool under the Raster toolbar and opened the Signature Editor. This allowed me to create an AOI, rename it to standing water, and change its color scheme. I was also able to 'display the mean plot window,' which showed me the spectral curve. By displaying the spectral curve, I could check whether the reflectance matched what should be displayed and whether there was any interference in the measurement. Below is the spectral curve for standing water. 
As you can see, the reflectance was highest in the blue band and lowest in the NIR band. One spot of interference was at the MIR band, where there was a slight spike caused by the atmosphere. 
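The mean spectral curve that Erdas plots for an AOI is just the per-band average of the pixels inside the polygon. A minimal sketch of that idea, using a made-up 6-band reflectance stack and a square stand-in for the polygon:

```python
import numpy as np

# Hypothetical 6-band reflectance stack (bands x rows x cols) and a boolean
# AOI mask standing in for the polygon drawn in Erdas; the values are random
# placeholders, not real Landsat data.
image = np.random.rand(6, 100, 100)
aoi_mask = np.zeros((100, 100), dtype=bool)
aoi_mask[40:60, 40:60] = True  # a small square "polygon" in the lake

# Mean spectral signature: average value of each band over the AOI pixels.
signature = image[:, aoi_mask].mean(axis=1)
print(signature.shape)  # one mean value per band -> (6,)
```

Plotting `signature` against band number gives the spectral curve shown in the mean plot window.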

The next step was to find the spectral reflectance for signatures 2 through 12. This required reading the map closely enough to pick out the other 11 surface features I had to measure. Knowing the surrounding area helped me when choosing the signatures for moving water, crops, vegetation, rocks, the airport runway, asphalt, and urban grass. Below are the rest of the spectral signatures and their reflectances. 

2. Moving water- I knew the best spot to find moving water was on the river where there were rapids. This was hard to pinpoint on the Eau Claire 2000 image because the contrast was poor when zoomed in. So I figured that the areas on the river where the color was lighter, possibly whitish, indicated fast-moving water, hopefully rapids. Below is the spectral curve I collected after drawing an AOI polygon. 
Similar to standing water, the blue band had the highest reflectance while the MIR band had the lowest. There also appeared to be some interference between the NIR and MIR bands, where there was a slight spike on the spectral curve. 

3. Vegetation- Finding the spectral curve for vegetation took knowing the area. I knew that vegetation would appear as pink in the Eau Claire image because of the NIR band. It was also difficult to distinguish crops from vegetation, but taking the shape of the landform into consideration, I stayed away from areas that appeared rectangular or square, knowing that crops are usually planted in this pattern. Below is the spectral curve I found for vegetation. 
The red band had the lowest reflectance while the NIR band had the highest. This is because chlorophyll absorbs red light for photosynthesis while healthy, mature vegetation strongly reflects NIR, showing that the AOI contained healthy vegetation. 

4. Riparian vegetation- Riparian vegetation is the vegetation along the banks of a water system. This was fairly easy to find since there is so much water in the Eau Claire image. I used the vegetation on the banks of the Chippewa River. Since this is also a form of vegetation, I knew the spectral curve wouldn't differ much from that of the normal vegetation. 
Although it is hard to see, the riparian vegetation curve almost mirrors that of the normal vegetation. 

5. Crops- Finding crops on the Eau Claire image again required knowing the area. It was hard to tell the difference between crops and vegetation, so I used the location and the shape of the outline of the AOI I was looking at. Knowing that crops are usually planted in rectangular or square fields, I factored this in when selecting my AOI. 
The NIR band was the highest, which means the crops are healthy and mature, since healthy vegetation reflects most of the NIR that reaches it. Crops and the two types of vegetation have similar spectral curves. 

6. Urban Grass- When searching for urban grass on the Eau Claire image, I selected my AOI in the area just off of campus. I knew the houses in that area had grass, especially my back yard near campus, with no trees in the way. 
The NIR band had the highest reflectance, while the MIR band dropped drastically along with the red band. There is some interference in the spectral curve at the blue band; the green band should be the highest of the visible light.

7. Dry soil- This was difficult for me to find on the Eau Claire image. Depending on when the image was captured, it could have been the rainy season or there could have been snow on the ground. I knew that dry soil reflects a lot and absorbs a little, so it took trial and error with the AOI to select the right type of dry soil. 
The MIR band reflected the most while the NIR band reflected the least. Also, the red band was the highest out of the visible light. 

8. Moist soil- This too was difficult for me to find in the Eau Claire image. This also depended on when the image was captured and if there were crops in the moist soil at the time. 
The MIR band again had the highest reflectance, with the blue band being the lowest. Below you can see the difference in the spectral curves of the dry soil and the moist soil. 
As you can see, the moist soil reflected much less than the dry soil throughout all the bands, especially in the visible light and the MIR band. 

9. Rock- A large rock outcrop was difficult to come across on the Eau Claire image. I knew of a large rock outcrop called Big Falls northeast of Altoona along the Chippewa River. By tracing the river on the Eau Claire image I was able to find this outcrop and select it as my AOI. 
The MIR band had the highest reflectance while the NIR had the lowest. The red band was the highest of the visible light. 

10. Asphalt highway- This was easy to find on the Eau Claire image. However, the highway appeared only as a skinny line at full extent, and when zoomed in, the colors changed slightly from one pixel to the next. 
Once again the MIR band had the highest reflectance while the NIR had the lowest; there was also a fairly large spike in the blue band at the start of the spectral curve.

11. Airport runway- I used the Eau Claire airport located just north of the city as my AOI. I zoomed in until the runway was the only item visible in my viewer and drew my AOI. The runway appeared as white on the Eau Claire image. 
As you can see, the red band had the highest reflectance, with a large dip at the NIR band. 

12. Concrete surface (parking lot)- This took finding a large enough parking lot in the area to use as my AOI. The first thing that came to mind was the mall parking lot. I found the mall and used its parking lot as my AOI for the spectral curve. One thing I did not take into consideration was that cars parked in the lot could cause interference. 
The red band had the highest reflectance, as with the asphalt highway, and there was once again a large dip at the NIR band. Below are all the spectral curves for signatures 1 through 12 on one graph. 

Wednesday, December 3, 2014

Remote Sensing Lab 7
Ethan Nauman
12/2/14

The goal of this lab was to develop our skills in performing key photogrammetric tasks on aerial photos and satellite images. This lab was specifically designed to train us in the mathematical calculation of photographic scales, the measurement of areas and perimeters, and the calculation of relief displacement. By the end of this lab I was able to perform diverse photogrammetric tasks. 

Part 1: Scales, measurements and relief displacement
Data for this portion of the lab was found in our Lab 7 folder. The first two questions dealt with figuring out the scales of two different maps. We were given the real-world distance in feet between two points and had to measure the corresponding distance on the maps with a ruler. After finding the distances, we performed calculations to find the scales of the maps. 
This is the photo we used for calculating the scales.
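The scale calculation comes down to the ratio of photo distance to ground distance in the same units. A sketch with placeholder numbers (the lab's actual measured values are not reproduced here):

```python
# Hedged sketch of the photographic scale calculation; the distances below
# are made-up stand-ins for the measured values.
photo_distance_in = 2.7      # distance between the two points on the photo, in inches
ground_distance_ft = 8822.0  # given real-world distance, in feet

# Convert both distances to the same units (inches) before taking the ratio.
ground_distance_in = ground_distance_ft * 12

scale_denominator = ground_distance_in / photo_distance_in
print(f"Scale = 1:{round(scale_denominator)}")
```

With these placeholder numbers the representative fraction comes out to roughly 1:39,209; the same ratio works for any map once the units match.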

Section 2: Measurement of areas of features on aerial photographs.
This section of the lab was performed in Erdas Imagine. For the first part we displayed the Eau Claire-west-southeast picture from our lab 7 folder. We were then asked to find the area of the lagoon marked on the map, using the polygon measuring tool. This allowed me to single-click points around the lagoon, after which the tool reported its final area. I could also change the measurement units to whatever the question asked for: acres, hectares, etc. After finding the area of the lagoon, we were asked to find its perimeter. Using the same tool concepts, the only change was using the polyline tool instead of the polygon tool, which measures the perimeter rather than the area. We could again change the units to whatever the question asked for. Below is a picture of the lagoon, labeled with an 'X', that we had to find the perimeter and area of. 
Section 3: Calculating relief displacement from object height.
We used the JPEG of Eau Claire-west-southeast from our lab 7 folder for this section. We were asked to find the relief displacement of the smokestack labeled in the photo. We were given the height of the aerial camera above the datum, the scale of the photo, and the principal point of the photo. 
Calculating the relief displacement took time and careful measurements to figure out the exact amount of relief displacement. 
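The standard relief displacement formula is d = (h × r) / H, where h is the object's real-world height, r is the radial distance from the principal point to the top of the object on the photo, and H is the flying height above the datum. A sketch with placeholder values (not the lab's actual numbers):

```python
# Sketch of the relief displacement formula d = (h * r) / H; all three
# inputs below are assumed placeholder values, not the lab's given data.
object_height_ft = 380.0   # real-world height of the object
radial_distance_in = 2.5   # principal point to object top, measured on the photo
flying_height_ft = 3980.0  # camera height above the datum

# h and H share units (feet), so d comes out in photo units (inches).
displacement_in = (object_height_ft * radial_distance_in) / flying_height_ft
print(f"Relief displacement = {displacement_in:.2f} inches")
```

The displacement grows with the object's height and its distance from the principal point, which is why tall features near the photo edge lean outward the most.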

Part 2: stereoscopy
For this part of the lab we needed a pair of polaroid glasses that would allow us to view our maps and see elevation. We began in Erdas Imagine and brought in a photo of the city of Eau Claire at 1 meter spatial resolution. I then brought a second image, the DEM of the city of Eau Claire, into another viewer; this image was at 10 meter spatial resolution. I used one form of GCPs to show a 3-dimensional perspective view of the city. From the main interface I clicked on the Terrain toolbar, which allowed me to select Anaglyph. For the input DEM I used the EC-DEM, and for the input image I used the EC-City image. I also increased the vertical exaggeration and saved the output image in my personal lab 7 folder. I accepted all other parameters and ran the program. The image it gave me was not much different than the original until I put on the glasses, which allowed me to see elevation changes throughout the image. 

Part 3: Orthorectification.
This part of the lab introduced me to the Leica Photogrammetric Suite (LPS) in the Erdas Imagine viewer, which is used in photogrammetry, orthorectification, and the extraction of elevation. This part of the lab took a while to get used to and complete; the tasks were: create a new project, select a horizontal reference source, collect GCPs, add a second image to the block file, collect GCPs in the second image, perform automatic tie point collection, triangulate the images, orthorectify the images, view the orthoimages, and save the block file. The LPS tool was located under the Toolbox function. Once the LPS Project Manager was open, it allowed me to change the parameters in the model setup. I changed it to a polynomial-based pushbroom and the SPOT pushbroom. I also had to change the horizontal reference coordinate system: I used the UTM projection, the Clarke 1866 spheroid, and the NAD27 (CONUS) datum. 

Section 2: Add imagery to the block and define sensor model. 
Now I brought the first of two images into the block and accepted the parameters. I then activated the point measurement tool, chose the classic point measurement tool, and upon okaying it, another viewer opened with my image in three panels: a regular view and two zoomed-in views. In this view I checked the 'use viewer as reference' box and input my second SPOT pan image. Now I had the SPOT pan image on the right and the xs-ortho image on the left. 
The next step was to collect GCPs on the ortho image. After referencing the GCP in our lab, I was able to find where the first GCP went. My X and Y references almost matched, so I didn't have to change them. Next I had to collect the corresponding point on the block image (the right image). I moved the inquire box on the full-scale image, then moved the zoomed inquire box to the exact area I needed, which let me collect the GCP for the block image. Once again my X and Y references were almost identical. After collecting another GCP on both images, I activated the Automatic (x, y) Drive, which allowed me to collect the GCPs in rapid succession. I collected GCPs up through number 10; the final two GCPs were on a different image. After the 10th GCP I saved and reset the horizontal reference source, which let me bring in the other image, NAPP-2m-ortho. 
I then collected the final two GCPs from the second image. After collecting the final GCP I saved again and moved on to collecting elevation values for the GCPs. I reset the vertical reference source and used the Palm Springs DEM. I right-clicked on the Point # column, selected all, then used the Update Z Values tool button. 

Section 4: Set type and usage, add a 2nd image to the block and collect the GCPs.
In the cell array, under 'Type' I changed all the points to Full, and under 'Usage' I changed all the points to Control. 
Now that I had finished collecting reference points for the first image, I moved to the second image, spot-panb. I uploaded the spot-panb image and referenced the GCPs off the first spot-pan image. I used the point measurement tool to locate the points from the first spot-pan on the second image, and collected all 12 GCPs on spot-panb. After saving the points again I referenced my block image interface, which showed me where the points were located on the two images. 

Section 5: Automatic tie point collection, triangulation and ortho resample.
Finally I was at the last couple of steps needed to orthorectify my images. I used the 'Automatic tie point generation properties' tool. The images used option was set to all available, the initial type button was set to exterior/header/GCP, and under the distribution tab I set the intended number of points per image to 40. I then ran the tool. An auto-tie summary was displayed, allowing me to see the accuracy of my GCPs; after looking at this summary I saved and closed it. After completing these steps I had all the control and tie points; the next step was the triangulation process. I used the 'Edit Triangulation Properties' tool, changed the iterations with relaxation from 1 to 3, and under the point tab changed the x, y, and z fields to 15. I then ran the triangulation process, and after the function ran it gave me a report summary. 
After looking over the report summary and saving it, I exited out, which brought me back to the LPS interface. I could now create my orthorectified images. After running the orthorectification process I was able to view my two images. The images overlap each other but blend very well into each other; if it weren't for the borders on the image overlay, it would be difficult to tell that there are two different images. My final two orthorectified images appeared as this. 
I was very pleased with how this process turned out. I am thinking about using it in my final term project. The only problem is that this was considered "the marathon lab," since it takes some time to collect all the GCPs and tie points. With that being said, it was a good feeling to complete this lab. 

Tuesday, November 18, 2014

Remote Sensing Lab 6
Ethan Nauman
11/18/14

The goal of this lab was to introduce us to a very important processing exercise known as geometric correction. This lab was set up to develop my skills in the two major types of geometric correction performed on satellite images. Technically this was a short lab, but parts of it were tedious, especially dealing with ground control points and repositioning them. 

Part 1: Image to map rectification
The skills I used in this lab were performed in the Erdas viewer. I began with a blank viewer and uploaded the Chicago_drg.img image from our lab 6 folder; this image was a USGS 7.5-minute digital raster graphic (DRG). I then opened a second viewer, uploaded the Chicago_2000.img image, and fit both images to frame. I clicked on Multispectral at the top of Erdas to activate these tools, then clicked on the Control Points tool. This opened the Set Geometric Model window, where I scrolled down, clicked on Polynomial, and clicked OK; I used a first-order polynomial equation. Selecting the geometric model opened two tools. The first was the Multipoint Geometric Correction tool, which I used as a guide in collecting and evaluating the GCPs. The second was the GCP Reference Tool Setup. I accepted the default parameters (image layer, new viewer) and clicked OK, then navigated to our lab 6 folder and added our reference DRG image, Chicago_drg.img. A window then opened that illustrated the reference system for the image; I accepted the defaults and clicked OK. This opened the Polynomial Model Properties (no file) window. At the start it said 'no solution' at the bottom of the window, but that would change once I started entering GCPs. I accepted the default parameters and clicked Close, then maximized the Multipoint Geometric Correction window for better viewing. This window contained two panes, one for the input image and one for the reference image on the right, plus two panes on top showing zoomed-in portions of the bottom panes. Once this window was open I had to delete the original GCPs that came with the image by selecting all the point #'s in the bottom portion of the window and deleting them. 
The next step was to re-add four GCPs. I fit both images to frame in their respective panes and clicked on the crosshair tool that allowed me to add GCPs. I started by adding a GCP on the input image, then added the same point on the reference image, and repeated this process until I had four GCPs. Once I added the fourth GCP, my solution changed from 'no solution' to 'solution is current.' The next step was to evaluate the locations of my GCPs using the RMS error. I could see the RMS error on each individual GCP as well as the total RMS error to make sure I was doing this correctly. To start, my RMS errors were very high; the ideal goal would be to get them under 0.5, but since this was our first time dealing with GCPs we only had to get them under 2.0. To do this I zoomed into each GCP individually and relocated it while watching the RMS error. This was a tedious process, but it had to be done correctly. After repositioning my GCPs and getting the RMS error below 2.0, my window appeared like this. 
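The RMS error Erdas reports measures how far each GCP lands from where the fitted transformation predicts it should be. A sketch of that check, using made-up residuals:

```python
import numpy as np

# Residuals are the x/y offsets between each input GCP and where the fitted
# transformation predicts it should fall; the values here are made up.
x_residuals = np.array([0.8, -0.5, 1.1, -0.3])
y_residuals = np.array([-0.6, 0.9, -0.4, 0.7])

# Per-GCP RMS error, then the total RMS error across all points.
per_point = np.sqrt(x_residuals**2 + y_residuals**2)
total_rms = np.sqrt(np.mean(x_residuals**2 + y_residuals**2))
print(total_rms < 2.0)  # this lab's requirement: total RMS error below 2.0
```

Nudging one badly placed GCP lowers its per-point value, and with it the total, which is exactly what repositioning points in the multipoint window does.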
Once my RMS error was below the 2.0 mark I was ready to run the geometric correction. I used the transformation matrix that had already been computed from my GCPs. On the Multipoint Geometric Correction toolbar I clicked on the Display Resample Image Dialog button. This opened a window that let me save the image into my personal lab 6 folder. I left all parameters at their defaults and ran the tool. After running the tool my output image appeared as this. 

Part 2: Image to image registration
I started with a blank viewer in Erdas again and displayed the Sierra Leone 1991 image from our lab 6 folder. This image had serious distortion, so I uploaded the Sierra Leone east 1991 grf image onto the original image. To actually see the distortion I used the swipe toolbar function: I right-clicked on the images and scrolled down to the Swipe function, which opened the swipe toolbar, and I moved the slider around to see how bad the distortion actually was. After I finished viewing the distortion I closed the swipe toolbar, cleared both images, and brought them into separate viewers as in the first part of this lab. My task was to correct the image just as I did in the first part with the GCPs. I clicked on Multispectral to activate the raster tools and then clicked on Control Points as before. I clicked on Polynomial under the Select Geometric Model in the Set Geometric Model window, and clicked OK on the Collect Reference Points form in the GCP Tool Reference Setup. I navigated to the lab 6 folder and added the reference image, Sierra Leone east 1991 grf, then clicked OK on the Reference Map Information window. In the Polynomial Model Properties window I changed the polynomial order from 1 to 3, then clicked Close. Just as in the first part of the lab, I deleted the original GCPs that came with the image and placed my own GCPs on the input and reference images. I ended with 12 GCPs, when only 10 were required for a 3rd-order polynomial. Placing and rearranging the 12 GCPs while maintaining an RMS error below 2.0 took a while, but I was able to get it done. After getting my total RMS error below 2.0, my window appeared like this. 
After getting the RMS error below the required 2.0, I ran the model just as in the first part of the lab, and my final map came out. The distortion was minimal, especially because I used a 3rd-order polynomial, which meant I used more GCPs than in the first part of this lab. 
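The minimum GCP count for a polynomial transformation follows from the number of coefficients being fit: an order-t polynomial needs (t + 1)(t + 2) / 2 points, which is why the 3rd-order fit above required 10. A quick sketch:

```python
# Minimum GCPs for an order-t 2D polynomial transformation: one point per
# coefficient pair, i.e. (t + 1)(t + 2) / 2.
def min_gcps(order: int) -> int:
    return (order + 1) * (order + 2) // 2

for order in (1, 2, 3):
    print(order, min_gcps(order))  # 1 -> 3, 2 -> 6, 3 -> 10
```

Collecting a couple of extra points beyond the minimum, as with the 12 used here, gives the least-squares fit redundancy for the RMS error check.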

Thursday, November 13, 2014

Lab 5 Image Mosaic

Remote Sensing Lab 5
Ethan Nauman
11/13/14

The goal of this lab was to introduce us to important analytical processes in remote sensing. We explored image mosaicking, spatial and spectral image enhancement, band ratioing, and binary change detection. The process of image mosaicking deals with combining multiple images and displaying them as one seamless image. By the end of the lab I was able to complete image mosaicking along with the other skills that I learned.

Part 1: Image Mosaicking
The first part of this lab dealt with combining multiple images into one seamless image covering a wide area of interest. To begin, I brought a provided image into Erdas. Before bringing the image in, I had to make sure all the prerequisites fit the image: since I was bringing in multiple photos, I had to check the boxes in the image file dialog that allow multiple images to be displayed. After completing this process and uploading the first image, I conducted the same steps to overlay it with a second image. Once both images were in the viewer, I performed the mosaic on them. For this process I used the simple Mosaic Express tool, located under the raster tools. Once the mosaic window appeared, I uploaded both images into the tool, then selected the folder in my personal directory where I wanted to store the mosaicked image. After completing these steps I ran the Mosaic Express tool, brought the mosaicked image into a new viewer, and it appeared as this. 
Section 2: Image mosaic with MosaicPro
In this section I was again going to mosaic the two images I stitched together, but this time with a more advanced tool, MosaicPro. To begin, I brought in the original two images that I laid over each other, went into the raster tools and the mosaic tools, and this time used the MosaicPro tool instead of Mosaic Express. Once the mosaic window was open, I had to upload both images into the tool again. Before completing the upload for each image I had to change some of the image area options: I used the Compute Active Area button, and after looking over the parameters I accepted them and uploaded the first image. I followed the same steps to upload my second image and likewise accepted all the parameters. After uploading both images I had to make sure that the second image I uploaded was the bottom image. Next I wanted to synchronize the radiometric properties in the area of intersection of the two images, so there would be a smooth color transition from one image to the other. To do this I used the Color Corrections tool and its Match Histogram option, which opened another tab where I could select the Set button. The Set button let me select the Overlap Areas option in the dialog box, which smooths the transition between the two images without affecting the brightness of the rest of the images. I accepted all other parameters in the Color Corrections dialog box. On the MosaicPro toolbar I then selected the Set Output Options icon to open the Output Image Options window, which would allow me to change map projections or pixel size if I chose. I accepted all parameters and closed the window, then clicked on the Set Overlap Function icon, which opened another window. 
I accepted the default parameter of Overlay, which uses the brightness values of the top image in the area of intersection. After working through all these parameters I ran the tool. The resulting image had a smoother transition from one image to the other, and the colors matched better. 

Part 2: Band Ratioing
In this section of the lab I performed band ratioing by computing the NDVI on the original Eau Claire image. I started with a blank viewer and uploaded the original Eau Claire image. I used the raster toolbar, the Unsupervised tool, and the NDVI tool in the drop-down, which opened the Indices window. My input file was the original Eau Claire image, and my output file was saved in my personal folder for the class. I also had to make sure that the sensor parameter was Landsat TM and that NDVI was selected under the Select Function parameter. I accepted all other parameters and ran the tool. The image that appeared caught me by surprise because it was mostly white; the white areas of the image were highly vegetated.
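The NDVI the Indices tool computes is the classic band ratio (NIR - Red) / (NIR + Red). A minimal sketch on made-up 2x2 band arrays:

```python
import numpy as np

# NDVI = (NIR - Red) / (NIR + Red); the band values here are small made-up
# reflectance arrays, not real Landsat TM data.
nir = np.array([[0.50, 0.40], [0.60, 0.30]])
red = np.array([[0.10, 0.20], [0.05, 0.25]])

ndvi = (nir - red) / (nir + red)
print(ndvi)  # values near 1 indicate dense, healthy vegetation
```

Since healthy vegetation reflects strongly in NIR and absorbs red, vegetated pixels push toward +1, which is why the output image rendered them bright.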

Part 3: Spatial and spectral image enhancement
For this section of the lab I uploaded the Chicago TM image from our class image-enhancement folder. The image demonstrated some amount of high frequency, which needed to be suppressed. For the first portion of this section I used a 5x5 low-pass convolution filter, located on the raster toolbar under the Spatial icon, following the drop-down to Convolution. This opened the Convolution window, where I changed the kernel to 5x5 low pass. My input image was the Chicago TM image, and I saved the output image in my personal folder. I accepted all other parameters and ran the tool. After the tool ran, the image didn't look much different from the original, except that it was less detailed. Since the result wasn't very different, I decided to try to improve the brightness quality with a high-pass filter on a different image. I started again with a clear image viewer and uploaded the Sierra Leone image from our lab folder. I used the same tools, but instead of the low-pass kernel I selected the 5x5 high-pass convolution kernel. The images below are the original (left) and the high-pass convolution result (right). 

After looking over the image, the next step was to perform an edge-enhancement technique. I brought in the original Sierra Leone image from our lab folder again, accessed the Convolution window, and input the image. Under the kernel selection I chose the 3x3 Laplacian edge-detection kernel, checked Fill under the Handle Edges parameter, and unchecked Normalize the Kernel. I accepted all other parameters and ran the tool. 
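Both filters in this section are just convolutions with different kernels. A sketch of a 5x5 low-pass (mean) kernel and the 3x3 Laplacian, applied with a small hand-rolled convolution (Erdas does all of this internally; the toy image is random placeholder data):

```python
import numpy as np

# Minimal 2D convolution with edge-replicate padding, standing in for the
# Erdas Convolution tool; image data below is a random placeholder.
def convolve2d(image, kernel):
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)), mode='edge')
    out = np.empty_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

image = np.random.rand(20, 20)
low_pass = np.ones((5, 5)) / 25.0                # averages away high-frequency detail
laplacian = np.array([[0, -1, 0],
                      [-1, 4, -1],
                      [0, -1, 0]], dtype=float)  # responds only where values change

smoothed = convolve2d(image, low_pass)
edges = convolve2d(image, laplacian)
print(smoothed.shape, edges.shape)
```

The Laplacian kernel's weights sum to zero, so flat areas come out as zero and only edges survive, which is why the edge-detection output looks mostly dark.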

Section 2: Spectral enhancement
In this section I performed two types of linear contrast stretch. I started with a blank image viewer and brought in the Eau Claire 1991 image. I used the metadata tab to look at the image's histogram and decided a minimum-maximum contrast stretch was best for this image. I clicked on General Contrast, followed the drop-down to the General Contrast tool, and clicked on it, which opened the Contrast Adjust interface. Under the Method tab I changed it to Gaussian, then ran the tool and the image appeared.

I decided to run a piecewise stretch after looking at the Gaussian-stretched image. I clicked on General Contrast and followed the drop-down to Piecewise to access the tool. Under the Range Specification tab I clicked on Middle and changed the dynamic range of brightness values for the last mode to 180. I applied the tool to the image and it appeared as this. 
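The simplest of the linear stretches mentioned here, the minimum-maximum stretch, rescales the band so its darkest value maps to 0 and its brightest to 255. A sketch on a tiny made-up band:

```python
import numpy as np

# Minimum-maximum linear contrast stretch on a made-up brightness band:
# the band's min maps to 0 and its max to 255, spreading out the histogram.
band = np.array([[52, 104], [78, 200]], dtype=float)

stretched = (band - band.min()) / (band.max() - band.min()) * 255.0
print(stretched.min(), stretched.max())  # 0.0 255.0
```

A piecewise stretch applies a different linear mapping to each brightness range (e.g. shadows, midtones, highlights) instead of one line over the whole range.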
Histogram Equalization
This process improves the contrast of the image to enhance visual interpretation. I opened the original image, which was the red band of the Landsat TM. I used the raster toolbar and clicked on Radiometric and then Histogram Equalization, which opened the Histogram Equalization window. The input image was the original image I brought in, and the output image was saved in my personal lab folder. I accepted all parameters and ran the tool. 
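Histogram equalization remaps each pixel through the band's normalized cumulative histogram, spreading frequently occurring brightness values apart. A sketch on a tiny made-up 8-bit band:

```python
import numpy as np

# Histogram equalization sketch: remap pixels through the normalized CDF.
# The 3x3 band below is a made-up stand-in for the Landsat TM red band.
band = np.array([[10, 10, 200], [10, 120, 200], [120, 120, 200]], dtype=np.uint8)

hist, _ = np.histogram(band, bins=256, range=(0, 256))
cdf = hist.cumsum()
cdf_min = cdf[cdf > 0].min()  # lowest occupied CDF value
cdf_norm = (cdf - cdf_min) / (cdf.max() - cdf_min) * 255.0

equalized = cdf_norm[band]  # look up each pixel's new value
print(equalized.min(), equalized.max())  # 0.0 255.0
```

Unlike a linear stretch, the mapping here adapts to how crowded each part of the histogram is, so heavily populated brightness ranges get the most contrast.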
Part 4: Binary change detection
In this part of the lab I estimated the change in brightness values of pixels in Eau Claire County and the surrounding area from 1991 to 2011.
Section 1: Image differencing
I began by opening two viewers in Erdas. I uploaded the Eau Claire image from 1991 into one viewer and the Eau Claire image from 2011 into the other. I then clicked on the raster toolbar to activate it, clicked on the Functions tab, and scrolled down to the Two Image Functions tool to access the Two Input Operations window. I inserted the 2011 image into the first input file and the 1991 image into the second. Under the output options I changed the operator from plus to minus. Underneath each image in the input files I clicked on the Layer tab and changed it from All to 4. I accepted all parameters and ran the tool. After bringing in both images I observed their histograms. I used the rule-of-thumb threshold of mean + 1.5 standard deviations to determine the cutoff points on the histograms. After viewing the histograms and calculating the changes, my histogram came out looking like this.
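The differencing step and the rule-of-thumb threshold can be sketched directly; the two bands below are random placeholder data, not the actual Eau Claire layers:

```python
import numpy as np

# Image differencing sketch: subtract the 1991 band from the 2011 band,
# then flag change with the mean + 1.5 * std rule of thumb. The band data
# below are random placeholders for the two Eau Claire layer-4 images.
rng = np.random.default_rng(0)
band_2011 = rng.integers(0, 256, size=(50, 50)).astype(float)
band_1991 = rng.integers(0, 256, size=(50, 50)).astype(float)

difference = band_2011 - band_1991
threshold = difference.mean() + 1.5 * difference.std()
changed = difference > threshold
print(changed.sum(), "pixels flagged as change")
```

On the histogram, the threshold marks the upper tail of the difference distribution, which is what the cutoff points identify.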
Section 2: Mapping change pixels in difference image using spatial modeler

In this section I mapped out the changes in Eau Claire County and the surrounding areas from 1991 to 2011. I started with a blank viewer and didn't upload any image; instead I opened the Model Maker by opening the toolbox and then clicking on the model maker tab. I constructed a simple model with two input raster objects for my 1991 and 2011 images. After bringing the two image files into the model maker, I subtracted the 2011 image file from the 1991 image file in the function. For my output raster I saved it in my personal lab folder and ran the model. After running the model and bringing the image into my viewer I opened the histogram. Upon observing it, the next step was to determine where the upper threshold of the histogram was. To do this I used the mean + (3 × standard deviation) formula. I again opened the Model Maker and set up another simple model; however, this time I only had one input raster, which was the image I saved during the last function. In the function definition I changed the option from Analysis to Conditional. I then clicked on the EITHER ... IF ... OR ... OTHERWISE function. This function keeps all pixels with values above the change/no-change threshold and masks out those that are below it. My change/no-change threshold value was 202.18. After running the model and bringing it into the viewer I realized that it was hard to read because it was such a dark image. I then opened the ArcMap interface from our programs. I brought in the NIR image of Eau Claire from 1991 with the four bands and overlaid it with the Eau Claire image from 2011. In the 2011 image I set the NoData value to no color so that I could see the areas that changed. I switched the view from data view to layout view and brought in a legend, north arrow, and scale bar. The final part of this lab looked like this.
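The conditional step amounts to "keep the pixel if it is above the threshold, otherwise output 0." A numpy sketch of that logic using the 202.18 threshold from the lab (the sample pixel values are made up):

```python
import numpy as np

# Change/no-change threshold computed in the lab (mean + 3 * std).
THRESHOLD = 202.18

def change_mask(diff_image, threshold=THRESHOLD):
    # Equivalent of the Model Maker conditional:
    # EITHER diff IF diff > threshold OR 0 OTHERWISE
    return np.where(diff_image > threshold, diff_image, 0)

diff = np.array([[250.0, 100.0], [205.0, 202.18]])  # made-up sample values
masked = change_mask(diff)
```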

Friday, October 31, 2014

Remote Sensing Lab 4
Ethan Nauman
10/30/14

The goal of this lab was to introduce us to skills in image preprocessing. A few of the skills we learned while completing this lab were: how to pick an area of interest from a larger image, how enhancing an image can aid visual interpretation, radiometric enhancement techniques for optical images, how to link satellite images with Google Earth, and an introduction to different methods of resampling.

Part 1: Image Subsetting
For the first part of the lab we learned how to select an area of interest through an inquire box on a satellite image. This method is quite simple, but it does pose a problem: usually an area of interest isn't in the shape of a square or rectangle, so this technique has limitations. Upon opening the Eau Claire image in Erdas, we selected the raster tool set. These tools allow us to open the inquire box. The inquire box came onto our Eau Claire image and we repositioned it so that it was over the Chippewa and Eau Claire area. By clicking on the outer edge of this box you can make it either smaller or bigger. After I put the inquire box over these two areas, I needed to capture this area. This is done by using the subset and chip tool under the raster tools. Upon completing this step and saving the image, I uploaded the saved image into another viewer. The image that appeared was the area that was in the subset box; that image looked like this.
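Conceptually, an inquire-box subset is just a rectangular crop of the raster. A numpy sketch, where the image and the box coordinates are made up (not the actual Eau Claire scene or box):

```python
import numpy as np

# Stand-in for one band of a satellite image (dimensions are made up).
image = np.zeros((1000, 1200), dtype=np.uint8)

# Illustrative inquire-box corners: upper-left and lower-right (col, row).
ulx, uly = 300, 200
lrx, lry = 700, 600

# Subsetting keeps only the pixels inside the box.
subset = image[uly:lry, ulx:lrx]
```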
Part 1 Section 2: Subsetting with the use of area of interest
This process is used when your area of interest isn't in the shape of a rectangle or box. This technique is quite useful. 
For this process I began with the original Eau Claire image. The next step was to upload the Eau Claire and Chippewa counties shapefile on top of this image. Once I completed this I had to select both of the counties; this was done by holding down the shift key and selecting just the two counties. After selecting the two counties I then pasted the selection as an AOI from the selected objects. This left a dotted line around my area of interest, which in turn allowed me to save the image. Upon completion of this step my final image appeared as this:
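Subsetting with an AOI generalizes the rectangle to any shape: pixels outside the AOI are replaced with a NoData value. In the numpy sketch below, the boolean mask is a made-up stand-in for the rasterized county selection:

```python
import numpy as np

image = np.arange(16, dtype=np.float64).reshape(4, 4)  # toy single-band image

# Pretend these pixels fall inside the two selected counties.
aoi = np.zeros((4, 4), dtype=bool)
aoi[1:3, 1:3] = True

NODATA = -9999.0
# Keep pixels inside the AOI, write NoData everywhere else.
subset = np.where(aoi, image, NODATA)
```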
Part 2 Image Fusion
In this portion of the lab, I fused a finer-resolution image with a coarser-resolution image, allowing for a clearer picture and better utilization. I began by opening the original Eau Claire image in one viewer and a panchromatic image in a second viewer. The panchromatic image had a 15-meter resolution while the original image had a 30-meter resolution; I would be using the panchromatic image to 'pan-sharpen' the original image. The pan sharpen tool is located under the raster tool set. Once I clicked on the pan sharpen tool I went down to the resolution merge tool. Upon clicking on the resolution merge tool, a resolution merge box appeared. This had many parameters that I needed to go through before completion. The input file was the panchromatic image of the Eau Claire area, and the multispectral input file was the original Eau Claire image. I had to create a folder to save the pan-sharpened image in and set that folder as my output file. Under the method portion of the resolution merge box, I checked the multiplicative box. Under the resampling techniques box I used the nearest neighbor option. After I completed all these steps, the tool was ready to run. The final image that I received after completing this was a sharpened image; that image looked like this:
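The multiplicative merge boils down to upsampling each 30 m multispectral band onto the 15 m pan grid and multiplying by the panchromatic band. A numpy sketch with toy arrays (Erdas also rescales the result; this omits that detail):

```python
import numpy as np

ms = np.array([[10.0, 20.0], [30.0, 40.0]])  # one 30 m multispectral band (2x2)
pan = np.ones((4, 4)) * 2.0                  # 15 m panchromatic band (4x4)

# Nearest-neighbor upsample of the multispectral band to the pan grid.
ms_up = np.repeat(np.repeat(ms, 2, axis=0), 2, axis=1)

# Multiplicative resolution merge: per-pixel product with the pan band.
sharpened = ms_up * pan
```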
Part 3: Simple radiometric enhancement techniques
In this section I learned some simple radiometric enhancement techniques that enhance image spectral and radiometric quality. The first section of this part dealt with haze reduction. The image I used for this section was in our radiometric folder. I used the raster tool set and the haze reduction tool. This brought up the haze reduction window; for the input file I used the Eau Claire image. I also had to create another folder, which I named haze reduction, and used it as my output file in the haze reduction window. Upon filling out these two simple steps I ran the program and was quite amazed at what happened. The original image had clouds over the southeast portion of the image, and after running the tool my final image appeared as this:
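Erdas's haze reduction tool is more sophisticated, but a common simple approximation of the same idea is dark-object subtraction: treat each band's minimum value as an atmospheric offset and subtract it. A rough numpy sketch, not the tool's actual algorithm:

```python
import numpy as np

def dark_object_subtract(band):
    """Subtract the band minimum (the assumed 'haze' offset) so the
    darkest pixel becomes 0. A crude stand-in for haze reduction."""
    haze = int(band.min())
    return np.clip(band.astype(int) - haze, 0, 255).astype(np.uint8)

band = np.array([[30, 50], [80, 200]], dtype=np.uint8)
corrected = dark_object_subtract(band)
```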
There are no longer clouds over any portion of the image.

Part 4: Linking image viewer to google earth
I once again started with the original Eau Claire image and fit it to frame. I then clicked on the Google Earth icon on the upper left portion of Erdas, and then Connect to Google Earth. I moved the Google Earth window over to the other desktop for easier viewing. I wanted to show the Eau Claire image in Google Earth, so I had to connect the two. To do this I clicked on Match GE to View on the Erdas interface. Next I wanted to sync the two images, so I clicked on Sync GE to View on the Erdas interface. Upon completing this portion I could then zoom in on the original Eau Claire image and it would direct Google Earth to the same area as on my Eau Claire image. I found this to be very useful.

Part 5: Resampling
Resampling is the process of changing the size of the pixels in an image, which can lead to a much clearer image. I could also go the other way with resampling, which would increase the size of the pixels, depending on what I am trying to portray in the image. The image for this section was located under our resampling folder. I brought our Eau Claire image into Erdas and fit it to frame. I then clicked on the raster toolset and used the spatial tool. This brought up a drop-down menu, from which I used the resample pixel size tool. I first resampled with nearest neighbor, then with bilinear interpolation. Both used the same process; I only had to change the method from nearest neighbor to bilinear interpolation. The images from both were very similar, and each made the image much clearer. Upon clicking on the resample pixel size tool, the resample window appeared. The original Eau Claire image was the input file. I had to create another folder on my desktop named resample outlet, which allowed me to save the image when done. Under the resample method portion of the window I clicked on bilinear interpolation. I changed the output cell size from 30x30 meters to 20x20 meters. I then accepted all other parameters and ran the tool. My final image after resampling the pixel sizes looked like this:
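The two methods can be sketched side by side in numpy: nearest neighbor copies the closest source pixel, while bilinear averages the four surrounding pixels by distance. A simplified sketch going from a 30 m to a 20 m grid (scale factor 1.5); real tools also handle georeferencing, which this omits:

```python
import numpy as np

def resample(band, scale, method="nearest"):
    h, w = band.shape
    new_h, new_w = int(h * scale), int(w * scale)
    rows = np.arange(new_h) / scale   # output pixel centers in input coords
    cols = np.arange(new_w) / scale
    if method == "nearest":
        return band[rows.astype(int)[:, None], cols.astype(int)[None, :]]
    # Bilinear: weighted average of the 4 surrounding source pixels.
    r0 = np.clip(np.floor(rows).astype(int), 0, h - 2)
    c0 = np.clip(np.floor(cols).astype(int), 0, w - 2)
    fr = np.clip(rows - r0, 0, 1)[:, None]   # fractional row offset
    fc = np.clip(cols - c0, 0, 1)[None, :]   # fractional column offset
    tl = band[r0[:, None], c0[None, :]].astype(float)
    tr = band[r0[:, None], (c0 + 1)[None, :]].astype(float)
    bl = band[(r0 + 1)[:, None], c0[None, :]].astype(float)
    br = band[(r0 + 1)[:, None], (c0 + 1)[None, :]].astype(float)
    return (tl * (1 - fr) * (1 - fc) + tr * (1 - fr) * fc
            + bl * fr * (1 - fc) + br * fr * fc)

band = np.array([[0, 100], [100, 200]], dtype=np.uint8)
nn = resample(band, 1.5, "nearest")
bl_img = resample(band, 1.5, "bilinear")
```

Bilinear output is smoother because interpolated values fall between the original brightness values, while nearest neighbor preserves the exact original values (which matters if the pixels will later be classified).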
Upon completion of this lab I reflected on the techniques I learned and realized that they can prove quite useful. My favorite technique was the haze reduction tool. If your satellite image has problems with clouds or poor quality, running this tool can prove quite effective and solve your problems.