Posts

Module 2 - Land Use Land Cover Classification (LULC), Ground Truthing & Accuracy Assessment.

This week’s assignment focused on learning how to classify land use and land cover (LULC) from high-resolution satellite imagery and then evaluate how accurate those classifications were. We worked with an aerial image of Pascagoula, Mississippi, and created our own data by digitizing different land use types. First, I created a new polygon feature class and digitized different land cover areas by identifying patterns of tone, texture, shape, and association. I assigned each polygon a Level II LULC code and a short description. My final categories included residential (11), commercial and services (12), deciduous forest (41), forested wetlands (61), non-forested wetlands (62), lakes (52), streams and canals (51), and bays and estuaries (54). Learning to recognize these features based on visual cues made me more aware of how land use reflects both natural environments and human activity. Next, I conducted a ground truth accuracy assessment. I created 30 sample points arou...
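
To make the accuracy step concrete, here is a minimal sketch (with made-up class pairs, not my actual 30 points) of how overall accuracy is computed: it is simply the share of ground-truth points whose observed class matches the class that was digitized.

```python
# Minimal sketch (hypothetical data): computing overall accuracy from
# ground-truth sample points. Each tuple pairs the LULC code that was
# digitized with the code observed at that point during ground truthing.
samples = [
    (11, 11), (12, 12), (41, 41), (61, 62), (52, 52),
    (51, 51), (54, 54), (11, 12), (41, 41), (62, 62),
]  # ...would continue to 30 points in the real assessment

correct = sum(1 for classified, observed in samples if classified == observed)
overall_accuracy = correct / len(samples)
print(f"Overall accuracy: {overall_accuracy:.0%}")  # 80% for these made-up pairs
```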

Module 1 Lab: Visual Interpretation

This week’s assignment focused on learning how to interpret aerial photographs using different visual cues such as tone, texture, shape, size, shadow, pattern, and association. Even though aerial images may seem like just pictures from above, this lab helped me understand how much information can be extracted when we look closely and intentionally. In Exercise 1, I worked with a grayscale aerial photo to identify differences in tone and texture. I created a feature class for tone and labeled five areas ranging from very light to very dark. I also created another feature class for texture, identifying areas from very fine to very coarse. This process showed me how brightness and surface roughness can indicate different types of land cover. For example, paved areas appeared very light, while water bodies showed up very dark, and forests had coarser textures. It made me realize that even without color, we can still interpret the landscape fairly well. In Exercise 2, I pract...

Topic 3 - Module 1: Scale Effect and Spatial Data Aggregation

Scale Effects on Vector Data: In vector data, the scale at which features are represented can dramatically affect spatial analysis results. Larger scales (zoomed in) capture more detail, while smaller scales (zoomed out) generalize features. For example, analyzing counties versus ZIP codes can produce different statistical outcomes because aggregation over larger areas smooths out local variability, a phenomenon known as the Modifiable Areal Unit Problem (MAUP).

Resolution Effects on Raster Data: Raster data are made of grid cells, and the cell size determines the spatial resolution. High-resolution rasters capture fine detail but require more storage and processing power, whereas low-resolution rasters generalize information and may hide local variation. This affects analyses such as slope, elevation, or population density surfaces, where resolution can change both the visual and the statistical results.

Gerrymandering: Gerrymandering is the practice of drawing political district boundaries to fa...
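
A tiny numerical sketch (hypothetical values, not from the lab) of the aggregation effect described above: summarizing the same values over larger zones smooths away local variability.

```python
# Hypothetical illustration of the MAUP: the same underlying values produce
# different summaries depending on how they are aggregated.
values = [2, 4, 6, 8, 10, 12, 14, 16]  # a variable measured at 8 small units

# Aggregate into four zones of two units vs. two zones of four units.
fine_zones = [sum(values[i:i + 2]) / 2 for i in range(0, 8, 2)]
coarse_zones = [sum(values[i:i + 4]) / 4 for i in range(0, 8, 4)]

print("Fine aggregation means:  ", fine_zones)    # [3.0, 7.0, 11.0, 15.0]
print("Coarse aggregation means:", coarse_zones)  # [5.0, 13.0]
# The spread of zone means shrinks as zones grow: local variability is smoothed away.
```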

Module 2.2: Surface Interpolation

Inverse Distance Weighting (IDW) Interpolation
This week’s assignment focused on using surface interpolation methods to visualize water quality in Tampa Bay. The goal was to estimate biochemical oxygen demand (BOD) concentrations across areas where sampling points were sparse, using techniques such as IDW, spline, and Thiessen polygons. Interpolation methods are useful for creating continuous surfaces from discrete sampling points. IDW assumes that nearby points are more similar and produces smooth, gradual transitions. Spline generates very smooth surfaces that can sometimes exaggerate peaks and valleys. Thiessen polygons assign each location the value of its nearest sample point, creating distinct zones rather than smooth gradients. Comparing these methods shows that each represents the data differently, highlighting the importance of choosing an approach based on the data characteristics and the goals of the analysis.
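
Below is a from-scratch sketch of the IDW idea, not the ArcGIS tool itself; the sample points and power parameter are hypothetical. The estimate at an unsampled location is a weighted average of nearby sample values, with weights that fall off as 1 / distance^power.

```python
import math

# Hypothetical (x, y, BOD value) sample points.
samples = [(0, 0, 2.1), (3, 1, 4.5), (1, 4, 3.2), (5, 5, 5.0)]

def idw_estimate(x, y, points, power=2):
    """Estimate a value at (x, y) as an inverse-distance-weighted average."""
    num, den = 0.0, 0.0
    for px, py, value in points:
        d = math.hypot(x - px, y - py)
        if d == 0:
            return value              # exactly on a sample point
        w = 1.0 / d ** power          # closer points get larger weights
        num += w * value
        den += w
    return num / den

print(idw_estimate(2, 2, samples))    # estimate at a location between the samples
```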

Module 2.1 Lab: Surfaces - TINs and DEMs

This week’s lab introduced different methods for working with elevation data and applying them to real-world suitability analysis. I began by creating several raster surfaces from a DEM, including slope, aspect, and reclassified versions of each. These reclassified rasters were then combined into a weighted overlay to build a final ski run suitability map, using weights of 25% aspect, 40% elevation, and 35% slope. The result was visualized in 3D, applying vertical exaggeration, lighting effects, and clear symbology to highlight the areas most suitable for ski runs. Then, I created and explored TIN models as another way to represent elevation. By adjusting the TIN symbology, I was able to view slope, aspect, and contours, as well as examine the edges of the triangles to better understand how the terrain was being modeled. This showed how TINs preserve the accuracy of the original points while still allowing terrain characteristics to be visualized. Finally, I compared the TIN to...
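
As a rough illustration of the weighted overlay arithmetic (the cell scores below are made up; only the 25/40/35 weights come from the lab), each cell’s suitability is the weighted sum of its reclassified input scores.

```python
# Weights from the lab; all other values are hypothetical.
weights = {"aspect": 0.25, "elevation": 0.40, "slope": 0.35}

# One cell's reclassified scores (1 = least suitable, 9 = most suitable).
cell_scores = {"aspect": 7, "elevation": 9, "slope": 5}

suitability = sum(weights[layer] * cell_scores[layer] for layer in weights)
print(round(suitability, 2))  # 0.25*7 + 0.40*9 + 0.35*5 = 7.1
```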

Module 1.3: Data Quality - Assessment

  The objective of this assignment is to learn how to assess the quality of road networks. Specifically, we evaluate the completeness of road networks by comparing the total length of roads across two different datasets: the county Street-Centerlines and the TIGER road network. By examining differences in road lengths at both the county and grid levels, we can identify spatial gaps and better understand the relative coverage of each dataset. To perform the analysis, all datasets were first projected into a common coordinate system to ensure accurate distance measurements. Both road networks were clipped to the study area and intersected with grid polygons, splitting road segments at grid boundaries and assigning them to their respective grid cells. Lengths were recalculated for each segment, and total road lengths were summarized per grid for both datasets. Percent differences were then calculated using the county Street-Centerlines as the base, where positive values indicate...
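
A small sketch of the per-grid comparison, assuming the percent difference is computed as (centerlines − TIGER) / centerlines × 100 with hypothetical summed lengths; under that formula, positive values mean the county Street-Centerlines are longer than TIGER in that cell.

```python
# Hypothetical summed road lengths (meters) per grid cell.
grid_lengths = {
    # grid_id: (centerlines_length, tiger_length)
    101: (12500.0, 11800.0),
    102: (8400.0, 9100.0),
    103: (15200.0, 15200.0),
}

for grid_id, (centerline, tiger) in grid_lengths.items():
    pct_diff = (centerline - tiger) / centerline * 100  # centerlines as the base
    print(f"Grid {grid_id}: {pct_diff:+.1f}%")
```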

Module 1.2 Lab: Data Quality - Standards

  This week’s assignment focused on performing a horizontal positional accuracy assessment of two street datasets: the City Streets layer and StreetMapUSA. The goal was to quantify how accurately each dataset represents the true locations of intersections using orthophotos as reference data. By selecting 20 representative test points, digitizing their true locations, and comparing them to the datasets, we calculated RMSE values and generated formal NSSDA accuracy statements. This exercise reinforced key GIS skills, including feature selection, digitizing reference points, coordinate extraction, and error analysis. To assess this, 20 test points were selected across all quadrants of the study area. Reference points were digitized using orthophotos to represent the ‘true’ location of intersections. Coordinates for the reference points and corresponding points from both datasets were recorded. Differences in X and Y coordinates were calculated, Euclidean errors determined, and RMSE va...
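
For reference, here is a minimal sketch of the RMSE and NSSDA calculation with hypothetical coordinate pairs; it assumes the standard NSSDA horizontal statistic, Accuracy_r ≈ 1.7308 × RMSE_r, which applies when the X and Y errors are roughly equal.

```python
import math

# Hypothetical (reference_x, reference_y, test_x, test_y) pairs in meters.
points = [
    (500010.0, 3300020.0, 500012.5, 3300018.0),
    (500250.0, 3300410.0, 500247.0, 3300413.5),
    (500780.0, 3300955.0, 500781.0, 3300951.0),
]

# Sum of squared X and Y differences, then horizontal RMSE (RMSE_r).
sum_sq = sum((rx - tx) ** 2 + (ry - ty) ** 2 for rx, ry, tx, ty in points)
rmse = math.sqrt(sum_sq / len(points))
nssda_95 = 1.7308 * rmse  # NSSDA horizontal accuracy at the 95% confidence level

print(f"RMSE: {rmse:.2f} m, NSSDA accuracy: {nssda_95:.2f} m")
```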