Wil Blouin's GIS Portfolio
This blog showcases the GIS work and skills gained in my GIS 1 and remote sensing classes.
Monday, December 19, 2016
Remote Sensing Term Project: LIDAR Road Management Application
Follow this link to the full lab report:
https://drive.google.com/file/d/0B36dlU8PtG9pMmxBVjhUZzI4OGc/view?usp=sharing
Tuesday, December 13, 2016
Remote Sensing Lab 8: Spectral Signature Analysis and Resource Monitoring
Background and Overview: In this lab I became accustomed to several procedures involving spectral signatures: simple manual signature collection was practiced, as well as index-based analysis using the NDVI and Ferrous Minerals index functions.
Sources: Imagery and instruction were provided by the instructor, Dr. Cyril Wilson.
Satellite image is from Earth Resources Observation and Science Center, United States Geological Survey.
Methods:
Part 1 (Spectral Signature Analysis): Opening the image of the Eau Claire area supplied by my instructor, I clicked on the Drawing tab, then the polygon button, to draw a polygon over a large area of Lake Wissota, our area of spectral interest. I then clicked on the Raster tab, then Supervised, then Signature Editor to open the Signature Editor window. There I clicked the Create New Signature From AOI button on the window's toolbar and renamed the created signature Standing Water. After this I clicked the Draw Mean Plot Window icon to see the spectral signature drawn on a graph. I did the same for all of the following types of terrain: standing water, moving water, forest, riparian vegetation, crops, urban grass, dry soil, moist soil, rock, asphalt highway, airport runway, and concrete parking lot. All of these spectral signatures can be seen below in the results section as signature mean plots showing the spread of reflectance across bands. I was also asked a few questions about these signatures, and the answers are listed here:
- Water reflects most of its energy in the visible bands, especially the blue (hence its bluish appearance), and absorbs the infrared and thermal bands quite strongly.
- Vegetation displayed high NIR reflectance because it absorbs visible light for photosynthesis and reflects longer wavelengths to avoid damage. It looks green because it absorbs red and blue light much more strongly than green.
- Dry and moist soils differ most in their reflectance of the MIR band because water absorbs MIR strongly.
- The following are comparisons of spectral signatures: vegetation, urban grass, and riparian vegetation are very similar because they are all live plant life! Airport runway and asphalt highway are also fairly similar, though less so. Moving water and crops are very different.
- Bands three, four, five, and six show the most variation even among similar surfaces, so those are the four bands I would include in a sensor meant to differentiate these surfaces.
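The mean plots drawn by the Signature Editor are just per-band averages of the AOI pixels. A minimal sketch of the idea in Python (the pixel values below are invented for illustration, not taken from the lab image):

```python
import numpy as np

# Hypothetical AOI pixel values: one row per pixel, one column per
# reflective Landsat TM band (numbers invented for illustration).
aoi_pixels = np.array([
    [52.0, 28.0, 22.0, 12.0, 8.0, 5.0],   # water-like pixels
    [50.0, 27.0, 21.0, 11.0, 7.0, 4.0],
    [51.0, 26.0, 20.0, 10.0, 6.0, 4.0],
])

# A signature "mean plot" is simply the per-band mean over the AOI,
# plotted against band number.
signature_mean = aoi_pixels.mean(axis=0)
print(signature_mean)  # one value per band
```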
Part 2 (Resource Monitoring) Section 1 (Vegetation Health Monitoring): In this section I used the normalized difference vegetation index (NDVI) to create a new raster file from an image of the Eau Claire and Chippewa counties area. I brought the image into a viewer in ERDAS Imagine, then clicked the Raster tab, then Unsupervised, then NDVI. I input the specific image, named the output image appropriately, saved it in my folder, and selected the appropriate Landsat 7 Multispectral sensor and the NDVI index in the Indices window. I then clicked run and opened the resulting image in ArcMap, displayed the raster with an appropriate 5-class, equal-interval classification and symbology, and created a cartographically pleasing map. When asked, I noted that the white areas are areas of high index value, denoting high vegetation health, while gray and black areas denote either water or lower vegetation health. My map is included in my results.
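The NDVI computed here is a simple band ratio, (NIR − Red) / (NIR + Red), which for Landsat 7 uses bands 4 and 3. A rough sketch with invented values:

```python
import numpy as np

# Toy red (band 3) and near-infrared (band 4) arrays; real values
# would come from the Eau Claire / Chippewa image used in the lab.
red = np.array([[30.0, 60.0], [45.0, 20.0]])
nir = np.array([[90.0, 65.0], [50.0, 80.0]])

# NDVI = (NIR - Red) / (NIR + Red), guarding against division by zero.
denom = nir + red
ndvi = np.where(denom == 0, 0.0, (nir - red) / denom)
print(ndvi)  # values near +1 indicate dense, healthy vegetation
```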
Part 2 (Resource Monitoring) Section 2 (Soil Health Monitoring): In this section I used much the same procedure, but with the Ferrous Minerals index in place of NDVI. I again created a map in ArcMap with the same classification system, and it too is included in my results. When asked, I noted that ferrous minerals are more prevalent in the southwest half of the image.
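As I understand it, the Ferrous Minerals index is a ratio of the mid-infrared band to the near-infrared band (TM band 5 over band 4), with higher ratios flagging iron-bearing soil and rock. A sketch with made-up numbers:

```python
import numpy as np

# Toy MIR (TM band 5) and NIR (TM band 4) arrays with invented values.
mir = np.array([[40.0, 10.0], [25.0, 60.0]])
nir = np.array([[20.0, 50.0], [25.0, 30.0]])

# Ferrous Minerals index = MIR / NIR (higher = more ferrous material).
ferrous = np.where(nir == 0, 0.0, mir / nir)
print(ferrous)
```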
Results:
Part 1 Signature Mean Plots:
| Riparian Vegetation |
| Airport Runway |
| Asphalt Highway |
| Standing Water |
| Dry Soil |
| Vegetation |
| Urban Grass |
| Parking Lot |
| Crops |
| Moving Water |
| Ferrous Mineral Content in Eau Claire and Chippewa Counties |
| Vegetation Health (NDVI) in Eau Claire and Chippewa Counties |
Tuesday, December 6, 2016
Remote Sensing Lab 7: Photogrammetry
Goals and Background: In this lab we learned how to perform many different tasks pertaining to photogrammetry and orthorectification. These included finding scales and relief displacement, making stereoscopic images, and performing orthorectification on satellite images.
Methods:
Part 1 (Scales, Measurements and Relief Displacement) Section 1: In this section we used two equations to find scale in different problems. The first was the simple scale = photo distance / ground distance. The second used the camera's focal length and the height above ground from which the aerial image was taken (scale = focal length / flying height above ground).
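The two scale equations can be checked with a quick script (the focal length and flying height below are hypothetical, not taken from the lab problems):

```python
def scale_from_distances(photo_dist, ground_dist):
    """Scale = photo distance / ground distance (same units for both)."""
    return photo_dist / ground_dist

def scale_from_geometry(focal_length, flying_height_above_ground):
    """Scale = focal length / flying height above ground."""
    return focal_length / flying_height_above_ground

# 0.05 m on the photo covering 600 m on the ground -> a 1:12,000 scale.
print(1 / scale_from_distances(0.05, 600.0))     # 12000.0
# A 152 mm lens flown 7,600 m above ground -> a 1:50,000 scale.
print(1 / scale_from_geometry(0.152, 7600.0))    # 50000.0
```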
Part 1 Section 2: In this section we used the measurement utility to digitize a lagoon and find its perimeter and area. After opening the image supplied by my professor in a new viewer in ERDAS Imagine, I clicked on the measurement button under the Manage Data tab, then created a polyline to measure the perimeter and a polygon to find the area. The resulting length and area were then displayed in the view measurements table at the bottom of the screen.
Part 1 Section 3:
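For the relief displacement portion, the standard vertical-photo relationship is d = h · r / H, where h is the object's height, r is the radial distance from the principal point to the object's top on the photo, and H is the flying height above the local datum. A sketch with hypothetical numbers:

```python
def relief_displacement(obj_height, radial_dist, flying_height):
    """d = h * r / H for a vertical aerial photograph."""
    return obj_height * radial_dist / flying_height

# A 50 m tower imaged 0.08 m from the principal point at 2,000 m
# altitude is displaced 0.002 m (2 mm) outward on the photo.
print(relief_displacement(50.0, 0.08, 2000.0))
```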
Part 2 (Stereoscopy) Section 1: In this section, we created a stereoscopic image from a DEM and an aerial image of the city of Eau Claire using the anaglyph generation function of ERDAS Imagine. To create this image I clicked on Terrain, then Anaglyph, and tuned the settings as are shown in the image below. The resultant file could be viewed in 3D with red and blue glasses.
Part 2 Section 2: In this section I created a stereoscopic image using an aerial image of Eau Claire and a LiDAR derived DSM using the same technique as the last section. This created a much higher spatial resolution image with greater three dimensional detail that is shown below.
| Digitization for Measurement |
| DEM Anaglyph Generation |
Part 3 Section 1: In Part 3 we used already orthorectified imagery as a source of ground control points for the orthorectification of two SPOT panchromatic images of Palm Springs, California. We used the ERDAS Imagine Leica Photogrammetry Suite (LPS).
To begin Section 1 I opened a fresh viewer in ERDAS Imagine, created an orthorectification output folder in my personal storage, and opened the LPS Project Manager by clicking on Toolbox, then on IMAGINE Photogrammetry. Clicking on the create new block file icon, I created a new block file in my previously mentioned output folder with a specific name. In the resulting Model Setup window I chose the Polynomial-based Pushbroom option, then chose the SPOT Pushbroom specification in the Geometric Model Category, as my data was from SPOT. In the Block Property Setup dialog I set the Horizontal Reference Coordinate System to the appropriate UTM projection: Clarke 1866 spheroid, NAD27 (CONUS) datum, UTM zone 11 North. I then set the horizontal units to meters.
Part 3 Section 2: I now added the imagery to be orthorectified and defined the sensor model of the block. I began by clicking the image folder in the tree view on the left side of the project manager and then clicking the add frame icon next to the save icon at the top. Navigating to the Lab 7 folder, I input the first SPOT panchromatic frame. I then clicked the Show and Edit Frame Properties icon (the lowercase "i") at the top of the project manager and, after reviewing this information, clicked okay.
Part 3 Section 3: In this section I began to collect GCPs with the point measurement tool and set my vertical reference source to import Z elevation data. Clicking on the point measurement tool icon at the top of the manager (the circle with crosshairs), I started the tool, selecting the classic point measurement tool when given the option. With the tool open, I clicked the reset horizontal reference source icon in the upper right grouping of icons (the black and white circle with a horizontal double-arrowed line under it). I navigated to the first Orthorectification subfolder of the Lab 7 folder my instructor provided and selected the SPOT image to be used as a reference. After clicking okay, I checked the Use Viewer As Reference box in order to view the reference image side by side. Moving the inquire boxes to find matching points, I selected the add button, then clicked on the same point in both images to add my first reference point. I did this 9 times in total, clicking the Automatic (x, y) Drive icon after the second point to ease the finding of matching areas, and each time zooming to a large scale to get a high degree of accuracy for later computation.
After creating my ninth point, I clicked save to save my progress. I then reset the Horizontal Reference Source in order to use a different source: an already orthorectified aerial photo rather than SPOT satellite data. Creating a new point, I changed the Point ID to 11 instead of 10 (the point number) to note the change in source, and then created another point.
Now I set my vertical reference source by clicking its icon, which is similar to the set horizontal reference source icon. Selecting DEM, then find DEM, and setting my source to the supplied DEM, I set my vertical reference source. Then, with all point numbers selected, I clicked the Z icon, which updated all of my elevations from the previously specified source.
Part 3 Section 4: In this section I set the type and usage of the points collected, then added a second image to the block and collected its GCPs. I clicked on the title of the Type column to highlight it, then right clicked on the column and selected Formula, then Full, in order to label the coordinates of each point as full. I repeated this process to label the usage of every point as control, designating them GCPs.
I then saved and closed the Point Measurement Tool to get back to the manager. In the manager, following the same procedure as for the first block image, I added the second SPOT panchromatic image to be orthorectified. I again clicked the frame properties icon and clicked okay to let the software know that I had verified the properties. Opening the Classic Point Measurement Tool again, I began to collect GCPs from the first image for the second. Adding a new point and selecting a spot first in the second image and then in the original image, I matched points for use as control points for the second image. I did this for every existing point contained in the overlap between the two images, then clicked save.
Part 3 Section 5: In this section I carried out the last processes needed to finish the orthorectification. I first clicked the Automatic Tie Point Generation Properties icon. I set the images used to all available, the initial type to Exterior/Header/GCP, and the Image Layer Used for Computation to 1. Changing to the Distribution tab, I set the Intended Number of Points/Image to 40, made sure the keep all points option was unchecked so poor tie points were discarded, and clicked run. After checking the tie points for accuracy, I saved and closed the Point Measurement Tool. I then clicked edit, then triangulation properties, changing iterations with relaxation to a value of 3 and the image coordinate units for the report to pixels. On the Point tab, I changed the x, y, and z standard deviations to 15, checked the Simple Gross Error Check Using box, and clicked run. After opening and saving the report generated by the triangulation, I clicked the Start Ortho Resampling Process icon, selected my appropriate DEM file, set the output cell sizes to 10 for both x and y, set a descriptive output name in my personal storage, set the resampling method to bilinear interpolation, added my second image to correct, and clicked run to finish the entire process. I saved my block and then viewed my orthorectified images.
Results:
| Final Result of Orthorectification (both images shown) |
| LiDAR Derived Stereoscopic Image |
National Agriculture Imagery Program (NAIP) images are from United States Department of
Agriculture, 2005.
Digital Elevation Model (DEM) for Eau Claire, WI is from United States Department of
Agriculture Natural Resources Conservation Service, 2010.
Lidar-derived surface model (DSM) for sections of Eau Claire and Chippewa are from Eau
Claire County and Chippewa County governments respectively.
SPOT satellite images are from ERDAS Imagine, 2009.
Digital elevation model (DEM) for Palm Springs, CA is from ERDAS Imagine, 2009.
National Aerial Photography Program (NAPP) 2 meter images are from ERDAS Imagine, 2009.
Sunday, November 20, 2016
Remote Sensing Lab 6: Geometric Correction
Goals and Background: This lab was meant to familiarize me with geometric correction of images, a preprocessing step performed before images can be studied for spatial-statistical information about observed phenomena. We used a USGS 7.5 minute raster of Chicago to correct a Landsat TM image of the same area (image to map rectification), and we used a Landsat TM image of Sierra Leone to correct a distorted image of the same area (image to image registration).
Sources:
Data was supplied by my instructor. Satellite images are from Earth Resources Observation and Science Center, United States Geological Survey. Digital raster graphic (DRG) is from Illinois Geospatial Data Clearing House.
Methods:
Part 1 (Image to Map Rectification): Starting with a new 2D viewer in ERDAS Imagine, I opened the USGS Chicago DRG reference file. I then opened a new view and displayed the image of the same area I intended to geometrically correct, making sure the view of the image to be corrected was highlighted. Next, I clicked on multispectral, then control points, opening the Set Geometric Model dialog. I clicked on Polynomial, then okay; this correction was done with a first order polynomial process. Next I clicked on image layer (new view), and okay. Navigating to my personal folder I added the USGS image I wished to use as a reference, and clicked okay on the reference map information dialog. Going on, I accepted the default model properties and maximized the window to have more space for the sensitive work of finding and moving matching points on the maps. I then deleted the current GCPs and created three new ones, selecting precisely the same place on the two images, then moving one of them while zoomed in very far and watching my RMS value decrease. Once my total RMS error was low enough for my standards, I clicked the display resample image dialog button, specified my output location and name, accepted the other default parameters, and clicked okay.
Part 2 (Image to Image Registration): Bringing both the image I wanted to correct and the reference image into a single viewer in ERDAS Imagine, I right clicked in the viewer and clicked swipe. I used the slider to observe the difference between the images (the extent to which the one image was distorted). I then closed the viewer swipe window and cleared the reference image from the viewer. Next I clicked on multispectral, then control points, selecting polynomial and setting my reference image file in the windows that appear before the main interface window. In one of the windows I observed the coordinate system the image was in, and in another I changed the polynomial order to 3. I deleted the GCPs already present and created 10 of my own, adjusting as needed to get an adequate RMS error before adding more automatically placed GCPs for good measure and to prevent the warp tool error. I then saved and named the file and changed the resample method to bilinear interpolation.
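The RMS error the control point editor reports is the distance between where each GCP was placed and where the fitted polynomial retransforms it. A small sketch with hypothetical residuals, in pixels:

```python
import math

# Hypothetical GCP residuals (dx, dy): input position minus the
# position the polynomial model predicts, in pixels.
residuals = [(0.4, -0.3), (-0.2, 0.1), (0.5, 0.2)]

# Per-point RMS error, and the total RMS over all control points.
per_point = [math.hypot(dx, dy) for dx, dy in residuals]
total_rms = math.sqrt(sum(dx * dx + dy * dy for dx, dy in residuals)
                      / len(residuals))
print(per_point)
print(total_rms)  # the value watched while nudging GCPs into place
```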
Results:
| Corrected Chicago |
| Corrected Sierra Leone |
Remote Sensing Lab 5: LIDAR
Goals and Background: This lab familiarized me with basic LIDAR processing, and had me make DTMs and DSMs, and an intensity image.
Methods:
Part 1: I copied the 40 individual LAS files from my department server into a personal folder for LIDAR processing. I then opened them all in a new 2D viewer in ERDAS Imagine, declining the software's request to compute levels of detail (LOD). I then took a look at the metadata in this software in order to familiarize myself with the dataset.
Part 2: I opened ArcMap and ArcCatalog. In ArcMap I right clicked on the top interface to turn on the LAS Dataset toolbar. I also clicked on customize, then extensions, and turned on 3D Analyst and Spatial Analyst. I then clicked on geoprocessing, then environments, to set my scratch and current workspaces, afterward saving the ArcMap map file in my folder for future ease of work on this lab. Next, I right clicked on my folder in ArcCatalog and created a new LAS dataset. I named it and went into its properties to add all of my LAS tile files, and to click statistics, then calculate. I examined the statistics and compared them to other research for quality assurance purposes. Examining the metadata supplied with the data in an external .xml file, I found the horizontal and vertical coordinate systems and their respective units. With this information I set the coordinate systems of the dataset in ArcMap by right clicking on the dataset in the catalog and selecting properties.
I now dragged this correctly configured dataset from the catalog into the map. Loading a shapefile containing the borders of the county as a reference, I checked that the LIDAR tiles were located in the appropriate place. I then explored the LAS Dataset toolbar, displaying the dataset in a variety of ways, changing the classification settings in the symbology layer properties and switching the display among slope, elevation, and contour.
In order to create a profile of a bridge in Eau Claire, I clicked on the appropriate button on the toolbar and positioned the resulting box over the bridge. I did the same with the 3D interactive profile function, and also used the two functions to explore other areas of the city.
Part 3: In this section I created a DTM and a DSM, hillshades of both, and an intensity image. Setting the data to points of elevation and first return, I began my DSM. I opened the LAS Dataset to Raster tool and set the value field to elevation, the cell assignment type to maximum, void fill to nearest neighbor, the sampling type to cell size with a value of 6.56168, and saved the output in the right place with a descriptive title. After displaying this new file I ran the hillshade tool with the DSM as input and examined the result.
Now I created a DTM by using the LAS Dataset to Raster tool again, changing the settings to binning, natural neighbor, minimum, and the same cell size as the DSM. Before this I made sure to set the data to ground, and to points of elevation. I again ran the hillshade tool to make another hillshade image.
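Behind the tool, binning just drops each return into a grid cell and keeps an aggregate: the maximum for a first-return DSM, the minimum for a ground-return DTM. A toy sketch of that idea (coordinates and elevations invented; a real tile holds millions of returns, and DTM points should first be filtered to the ground class):

```python
import numpy as np

# Toy LIDAR returns as (x, y, z); invented coordinates and elevations.
points = np.array([
    [0.2, 0.1, 301.0],   # canopy return
    [0.6, 0.3, 295.2],   # ground return in the same cell
    [1.4, 0.8, 296.0],
    [1.7, 0.2, 310.5],
])

cell = 1.0
cols = np.floor(points[:, 0] / cell).astype(int)
rows = np.floor(points[:, 1] / cell).astype(int)

dsm = np.full((rows.max() + 1, cols.max() + 1), np.nan)  # max binning
dtm = np.full_like(dsm, np.nan)                          # min binning
for r, c, z in zip(rows, cols, points[:, 2]):
    dsm[r, c] = z if np.isnan(dsm[r, c]) else max(dsm[r, c], z)
    dtm[r, c] = z if np.isnan(dtm[r, c]) else min(dtm[r, c], z)
print(dsm)  # surface: tops of trees and buildings
print(dtm)  # bare earth, once non-ground returns are excluded
```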
Next I created the intensity image. To do this I changed my dataset to points of elevation and first return, then ran the LAS Dataset to Raster tool again. I set the parameters to intensity, average, and natural neighbor, with the same cell size. I then opened this image in ERDAS Imagine because it automatically enhances the display.
Results:
| DSM |
| DTM |
| DTM Hillshade |
| DSM Hillshade |
| Intensity Displayed in ERDAS Imagine |
Data was provided by my instructor. Lidar point cloud and Tile Index are from Eau Claire County, 2013.
Eau Claire County Shapefile is from Mastering ArcGIS 6th Edition data by Margaret Price,
2014.
Wednesday, November 2, 2016
Remote Sensing Lab 4: Miscellaneous Image Functions
Goals and Background: The goal of this lab was to make me accustomed to various functions that aid interpretation of aerial imagery in the ERDAS Imagine software. Specifically: cropping an AOI (area of interest) and a rectangular area using the inquire box, linking an aerial image to Google Earth as an interpretation aid, resampling to make images easier on the eyes during interpretation, using the radiometric haze reduction functionality, image mosaicking using both the express and pro methods for different results, and binary change detection using simple graphical modeling.
Methods: I used data that was given to me by my professor. Each different miscellaneous image function is in a different following part.
Part 1 Section 1: I first made a subset image using the inquire box method. I opened an aerial image of the Eau Claire area in a new viewer, then clicked on Raster to find the raster tools, after which I right clicked on the image and clicked Inquire Box to show an inquire box. Clicking and dragging the box, and resizing it by its sides, I moved it over the Eau Claire city area, then clicked apply in the inquire box viewer window. I then clicked Subset and Chip, then Create Subset Image. I selected the output folder I had made earlier for this part of the lab, gave the new image a unique name, and clicked From Inquire Box in the subset window. I clicked okay, and after the process finished I clicked dismiss and closed the Process List window. I then brought in the finished subset image and screen captured it for use in my report. This subset image can be seen below in the results section.
Part 1 Section 2: In this section I used a shapefile of Eau Claire and Chippewa Counties to create a subset image from an AOI. In a new viewer containing the same aerial image as the last section, I brought in the shapefile my instructor provided. To see it in the Select Layers to Add window I selected Files of type, then clicked the shapefile option in the drop-down menu. I shift-clicked both counties' shapes to select both, changing them from a shade of blue to a bright yellow. I then clicked Home, then Paste From Selected Object, which created an area of interest around the shapes, denoted by dashed lines. Next I clicked File, then Save As - AOI Layer As, and saved this AOI with a unique file name and a .aoi ending. I then opened the image in a new viewer and screen captured it for use in my report. This subset image can be seen in my results section below.
Part 2: In this part I created a higher spatial resolution image from a lower resolution image and a panchromatic image for easier interpretation. I opened the image I was given by my instructor in a new viewer, then clicked raster, pan sharpen, then resolution merge from the dropdown menu. In the resulting Resolution Merge window I opened the supplied panchromatic image in the High Resolution Input File area, and the supplied multispectral low resolution image in the Multispectral Input File area. I created a unique name for my output image and saved it in the previously created output folder via the Output File area of the Resolution Merge window. I then clicked the multiplicative and nearest neighbor radio buttons. Clicking okay, I created the new image and opened a new viewer to view it.
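My understanding of the multiplicative merge is that each resampled multispectral band is simply multiplied by the panchromatic band and rescaled, which is why it tends to brighten urban features. A toy sketch with invented pixel values, one band shown:

```python
import numpy as np

# Toy panchromatic band and one multispectral band already resampled
# (nearest neighbor) onto the pan pixel grid; values are invented.
pan = np.array([[100.0, 120.0], [80.0, 140.0]])
ms = np.array([[40.0, 40.0], [60.0, 60.0]])

# Multiplicative merge: multiply, then rescale back into 8-bit range.
merged = ms * pan
merged = merged / merged.max() * 255.0
print(merged.round(1))
```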
Part 3: In this part I used the radiometric haze reduction technique to remove the haze from an image. After opening the image I was supplied with, I clicked on Radiometric, then Haze Reduction. I next browsed to my output folder in the resulting window, and entered a unique output image name. I then clicked okay, using all of the default parameters, then opened the image in a second viewer in order to see the difference the Haze Reduction algorithm had made.
Part 4: In this part I linked google maps for interpretation help. I first opened the image I was provided in a new viewer. Next, I clicked on Google Earth, then connect to Google Earth. Now, with google earth opened, I clicked on Link GE to View, then Sync GE to View in order to have the scale and area being looked at synced on both windows.
Part 5: In this part I resampled an image using both the Nearest Neighbor and Bilinear Interpolation methods. Both resamplings followed the same procedure, differing only in the method chosen. I first opened the image I was provided, then clicked metadata to see that the spatial resolution was 30 meters. Next, I clicked Spatial, then Resample Pixel Size, to open the resampling window. Inputting the same image I had opened earlier, I output the new image with a unique file name in my output folder. Next, I changed the output cell size from 30 x 30 to 15 x 15. Clicking Square Cells and my resample method, I accepted the remaining default parameters and clicked okay.
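The difference between the two methods: nearest neighbor copies the value of the closest original pixel, while bilinear interpolation blends the four surrounding pixels by distance, which is what smooths the resampled image. A sketch on a toy 2 x 2 raster (edge handling omitted):

```python
import numpy as np

grid = np.array([[10.0, 30.0],
                 [50.0, 70.0]])  # toy raster; think 30 m cells

def nearest(g, x, y):
    """Nearest neighbor: copy the closest original pixel."""
    return g[int(round(y)), int(round(x))]

def bilinear(g, x, y):
    """Bilinear: distance-weighted blend of the 4 surrounding pixels."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    fx, fy = x - x0, y - y0
    top = g[y0, x0] * (1 - fx) + g[y0, x0 + 1] * fx
    bot = g[y0 + 1, x0] * (1 - fx) + g[y0 + 1, x0 + 1] * fx
    return top * (1 - fy) + bot * fy

print(nearest(grid, 0.6, 0.6))   # snaps to the lower-right pixel
print(bilinear(grid, 0.5, 0.5))  # averages all four pixels
```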
Part 6: In this part I practiced image mosaicking with both the express and pro functions. I first used express, with two images captured in May 1995 by the Landsat TM satellite. Adding each image one by one, I opened the Select Layers to Add window, clicked multiple, then Multiple Images in Virtual Mosaic, and made sure that in the Raster Options tab the Background Transparent option was checked. I then clicked okay. I repeated the same process for the next image and saw the two overlap in the viewer.
Part 6 Section 1: In this section I used mosaic express. I clicked raster, then mosaic, then mosaic express. Next I added the image I wanted stacked on top first to the area in the input tab, then the image I wanted on the bottom. Clicking on the output tab, I then specified my unique output file name, and specified the output folder I created previously. I clicked finish to create the simple mosaic.
Part 6 Section 2: In this section MosaicPro was used. I selected Mosaic, then MosaicPro to begin. I clicked the Add Images button, then found and selected the first image to import. In the Image Area Options tab I clicked Compute Active Area, then okay. I did the exact same thing for the second image. To synchronize the radiometric properties of the images, I clicked the color correction button, checked Use Histogram Matching, clicked Set, and selected Overlap Areas as the matching method, clicking okay on each window. Next, I clicked the overlap function icon, set the method to Overlay, and clicked okay. I then ran the mosaic, saving the file in my output folder with a unique name.
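The histogram-matching color correction can be illustrated with a minimal NumPy sketch: map each pixel of one image through the two cumulative distribution functions so its histogram resembles the reference image's. This is a simplified version of the general technique, not the ERDAS code (which matches over overlap areas only); the toy arrays are assumptions.

```python
import numpy as np

def histogram_match(source, reference):
    """Remap `source` values so their distribution matches `reference`'s."""
    s_vals, s_idx, s_counts = np.unique(source.ravel(),
                                        return_inverse=True,
                                        return_counts=True)
    r_vals, r_counts = np.unique(reference.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size      # source cumulative distribution
    r_cdf = np.cumsum(r_counts) / reference.size   # reference cumulative distribution
    # For each source value, find the reference value at the same CDF position
    matched = np.interp(s_cdf, r_cdf, r_vals)
    return matched[s_idx].reshape(source.shape)

src = np.array([[0., 1.], [2., 3.]])      # dark image
ref = np.array([[10., 20.], [30., 40.]])  # brighter reference image
out = histogram_match(src, ref)
```

After matching, the two scenes share a similar brightness distribution, which is why seams in the mosaic become much less visible.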
Part 7 Section 1: In this section I practiced binary change detection. I first displayed the images I was given in two different viewers. Next, I clicked Raster, then Functions, then Two Image Functions. I cleared the default first and second input files and replaced them with the images I had been supplied with, then changed the function from + to - for differencing. I set my output file to a unique name inside my output folder, changed the layer on both inputs to only layer 4 for simplicity's sake, and clicked okay. I then investigated the histogram of the resulting image by opening it in a new viewer and clicking metadata. Finally, I used the mean and standard deviation to find and delineate, for my lab report, the areas which had substantially changed: I multiplied the standard deviation by 1.5 and added and subtracted that value from the mean to find the upper and lower cutoff values.
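The thresholding step above reduces to a few lines of NumPy. This sketch differences one band between two dates and flags pixels whose change falls outside mean ± 1.5 standard deviations, mirroring the cutoff calculation described; the toy arrays are illustrative, not the lab data.

```python
import numpy as np

def binary_change(band_a, band_b, k=1.5):
    """Difference one band between two dates; flag pixels beyond mean +/- k*SD."""
    diff = band_a.astype(float) - band_b.astype(float)
    lower = diff.mean() - k * diff.std()   # lower cutoff
    upper = diff.mean() + k * diff.std()   # upper cutoff
    return (diff < lower) | (diff > upper)

# Toy band-4 pair: a single pixel changed sharply between the two dates
date1 = np.zeros((3, 3))
date2 = np.zeros((3, 3))
date2[0, 0] = 100.0
mask = binary_change(date1, date2)
```

Pixels inside the cutoff band are treated as no-change (sensor noise, phenology), while the tails of the difference histogram are delineated as real change.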
Figures: Histogram; Mosaic Express (Part 1 Section 1); Mosaic Pro (Part 1 Section 2).
Reference: Satellite images are from Earth Resources Observation and Science Center, United States Geological Survey. Shapefile is from Mastering ArcGIS 6th edition Dataset by Maribeth Price, McGraw Hill, 2014.
Data used was given to me through department server access by instructor. Available upon request with the permission of my instructor.
Monday, May 16, 2016
Final Project
Goals and Background: The goal of this project was to apply the geospatial skills I learned this semester to a real-life problem. I asked: where should we look for a place to study high-permeability soil that is also near an interstate, for ease of travel, and near a hospital, in case a member of our team needed urgent medical attention for a condition that could possibly necessitate it? I used various tools I had learned during the semester to answer this question and produced a map showing the areas meeting my criteria.
Methods: To answer this question I used data from ESRI and from the Wisconsin DNR stored on my university's servers. For each dataset I found, I checked the metadata for the scales at which it could appropriately be used, making sure it would be suitable to display at the scale I would be mapping. I found a counties dataset, then overlaid data for hospitals, interstates, and soils on top of it. From the counties layer, I selected by attributes the counties in Wisconsin and created a new layer from the selection. I then used these counties to clip the hospitals and interstates layers down to Wisconsin only. Next, I selected by attributes the high-permeability soils and made a new layer from the selection. I intersected this layer with a 5-mile buffer I made from the hospitals layer, then intersected the result with a 10-mile buffer I made from the interstates layer. This final layer took all of my criteria into consideration and is what I displayed on my final map. Below (figure 1) is the flow chart I made for this geoprocessing.
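The select-buffer-intersect workflow above is, logically, an intersection of criteria. That logic can be sketched as a boolean overlay on a hypothetical coarse grid, one mask per criterion; the grids here are invented for illustration and stand in for the actual vector layers, with the element-wise AND playing the role of the Intersect tool.

```python
import numpy as np

# Hypothetical 3x3 grid over a study area; True marks cells meeting one criterion.
high_perm = np.array([[1, 1, 0],
                      [0, 1, 1],
                      [0, 0, 1]], dtype=bool)  # high-permeability soils
near_hosp = np.array([[1, 0, 0],
                      [1, 1, 0],
                      [1, 1, 1]], dtype=bool)  # within 5 mi of a hospital
near_int  = np.array([[1, 1, 1],
                      [1, 1, 0],
                      [0, 1, 1]], dtype=bool)  # within 10 mi of an interstate

# Intersection of all criteria: a cell is suitable only if every mask is True there
suitable = high_perm & near_hosp & near_int
```

Only the cells satisfying all three masks survive, just as only the polygons surviving both Intersect operations appeared on the final map.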