79 Cards in this Set

  • Front
  • Back
RS as a process
(Lecture 1:11-18)
1. Energy Source
2. Radiation and Atmosphere
3. Interaction with target
4. Recording of Energy by Sensor
5. Transmission, reception and processing
6. Interpretation and analysis
7. Application
Electromagnetic Spectrum and main RS Segments
(Lecture 1:31-35)
1. UV: shortest wavelengths used in RS; useful for rocks and minerals
2. Visible: 0.4-0.7 µm (violet shortest, red longest)
3. Infrared: 0.7-100 µm (reflected and thermal)
4. Microwave: 1 mm to 1 m (longest wavelengths used in RS)
Atmospheric Energy and Matter Interactions
(Lecture 1:40-47+)
1. Transmission/Refraction, volume, passes through
2. Absorption, volume, gives up energy to heating
3. Reflection:surface, equal and opposite, smooth
4. Scattering:surface, deflected, rough
5. Emitted: surface, blackbody radiation, longer Wave lengths
Atmospheric Windows
(Lecture 1:55-58)
Those areas of the spectrum which are not severely influenced by atmospheric absorption and thus, are useful to remote sensors.

Certain wavelengths of radiation are affected far more by absorption than by scattering. Heat energy emitted by the Earth corresponds to a window around 10 micrometers in the thermal IR portion of the spectrum, while the large window at wavelengths beyond 1 mm is associated with the microwave region.
Spectral Signatures
(Lecture 1:68-82)
By measuring the energy that is reflected (or emitted) by targets on the Earth's surface over a variety of different wavelengths, we can build up a spectral response (or spectral signature) for that object.

EM-target interaction depends on
the physical nature of the object,
the wavelengths of radiation involved and the condition of the feature.

For example, water and vegetation may reflect somewhat similarly in the visible wavelengths, but are almost always separable in the infrared.
Active and Passive Sensors
(Lecture 2:15-17)
Passive Sensors: Remote sensing systems which measure energy that is naturally available. Record reflected and emitted energy.

Active Sensors: detect and record EMR emitted from the sensor itself. Radar and Lidar
Three Models of RS
(Lecture 2:18-22)
1. Passive: Reflected solar radiation
2. Passive sensor: emitted terrestrial radiation
3. Active sensor: reflected own energy
Scanning Systems
(Lecture 2:32-41)
If the scene is sensed point by point (equivalent to small areas within the scene) along successive lines over a finite time, this mode of measurement makes up a scanning system.

Across Track: Whiskbroom
Along Track: Pushbroom
Image Resolutions
(Lecture 2: 43-69)
1. Spatial resolution: smallest discernible physical object or detail in the image.
2. Spectral resolution: "thickness" of bands or channels (minimum and maximum wavelength sensed).
3. Radiometric resolution: the difference between the minimum and maximum measurement, or the number of steps between them (e.g., 8-bit data records 2^8 = 256 levels).
4. Temporal resolution: time between repeated sensing.
GeoStationary Orbits
(Lecture 3: 8)
Satellites at very high altitudes (about 36,000 km), which view the same portion of the Earth's surface at all times, have geostationary orbits.
Near-Polar Orbits
(Lecture 3: 9)
Many remote sensing platforms are designed to follow an orbit (basically north-south) which, in conjunction with the Earth's rotation (west-east), allows them to cover most of the Earth's surface over a certain period of time.

Travels northwards on one side of the Earth and then toward the southern pole on the second half of its orbit.  
These are called ascending (South-North) and descending passes (North-South), respectively.
Sun-synchronous Orbits
(Lecture 3:11)
They cover each area of the world at a constant local time of day called local sun time.

At any given latitude, the position of the sun in the sky as the satellite passes overhead will be the same within the same season.

The ascending pass is most likely on the shadowed side of the Earth while the descending pass is on the sunlit side.
Swath
(Lecture 3:12)
As a satellite revolves around the Earth, the sensor "sees" a certain portion of the Earth's surface.

The satellite orbit runs from pole to pole and, because of the Earth's rotation (west to east), the swath shifts with each new pass, so the sensor covers a different area of the Earth on each orbit.
Revolution Period
(Lecture 3:13)
Revolution or orbital cycle will be completed when the satellite retraces its path, passing over the same point on the Earth's surface directly below the satellite (called the nadir point) for a second time.
Revisit Period
(Lecture 3: 13)
The interval of time required for the satellite to image the same area on the Earth.

The revisit period is a function of the revolution cycle, latitude, and off-nadir viewing capability.

The revolution cycle is not necessarily the same as the revisit period.
Multispectral Imaging
(Lecture 3:17-28)
Usually composed of about 4 to 10 bands of relatively large bandwidths (e.g., 70-400 nm).

True Color: RGB bands displayed as red, green, and blue.
False Color: RGI composite; NIR, red, and green bands displayed as red, green, and blue, respectively.
Pre-Processing Digital Data
(Lecture 5:10)
Reconstruction/correction functions involve those operations that are normally required prior to the main data analysis and extraction of information.

Image restoration and rectification are intended to correct for sensor- and platform-specific radiometric and geometric distortions of the data, respectively.

Radiometric and geometric corrections may be necessary due to variations in scene illumination and viewing geometry, atmospheric conditions, and sensor noise and response, etc.
Radiometric Corrections
(Lecture 5:21-31)
To convert and/or calibrate the data to known (absolute) radiation or reflectance units to facilitate comparison between data.

Necessary due to variations in scene illumination and viewing geometry, atmospheric conditions, and sensor noise and response.

When the emitted or reflected electro-magnetic energy is observed by a sensor on board an aircraft or spacecraft, the observed energy does not coincide with the energy emitted or reflected from the same object observed from a short distance.

Therefore, in order to obtain the real irradiance or reflectance, those radiometric distortions must be corrected.

This is due to the sun's azimuth and elevation, atmospheric conditions such as fog or aerosols, the sensor's response, etc.
Geometric Corrections
(Lecture 5:21, 32-48)
To move the image pixels into correct horizontal positions and georeferenced space.

A geometric distortion is an error on an image between the actual image coordinates and the ideal image coordinates that would be obtained with an ideal sensor under ideal conditions.

These distortions may be due to several factors, including: the perspective of the sensor optics; the motion of the scanning system; the motion of the platform; the platform altitude, attitude, and velocity; the terrain relief; and, the curvature and rotation of the Earth.

Geometric corrections are intended to compensate for these distortions so that the geometric representation of the imagery will be as close as possible to the real world.
Image Enhancement
(Lecture 5:50-54)
Used to make imagery easier to interpret and understand visually.

Remote sensing devices, particularly those operated from satellite platforms, often produce images that are not suitable for visual image interpretation.

Radiometric corrections and other image processing techniques operate on the DNs (digital numbers), so DN image histograms must be introduced first.
Image Enhancement Techniques
(Lecture 5:55-56)
1. Radiometric enhancement deals with individual values of the pixels in image (applied to one band).

2. Spatial enhancement takes into account the values of neighbouring pixels (modifies pixel values based on the values of surrounding pixels; deals with spatial frequency).
Radiometric Enhancement Stretches
(Lecture 5:56-65)
Radiometric enhancement or contrast stretching involves changing the original values so that more of the available range is used, thereby increasing the contrast between targets and their backgrounds.

Four contrast methods:
Minimum-Maximum Linear Contrast Stretch
Percentage Linear Contrast Stretch
Piecewise Linear Contrast Stretch
Nonlinear Contrast Stretch
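
A minimal NumPy sketch of the first two stretches; the function names and the 8-bit output range are assumptions, not the lecture's notation:

```python
import numpy as np

def minmax_stretch(band, out_max=255):
    """Minimum-maximum linear stretch: map [min, max] onto [0, out_max]."""
    band = band.astype(np.float64)
    lo, hi = band.min(), band.max()
    return ((band - lo) / (hi - lo) * out_max).round().astype(np.uint8)

def percent_stretch(band, pct=2, out_max=255):
    """Percentage linear stretch: clip the histogram tails, then stretch."""
    lo, hi = np.percentile(band, [pct, 100 - pct])
    clipped = np.clip(band.astype(np.float64), lo, hi)
    return ((clipped - lo) / (hi - lo) * out_max).round().astype(np.uint8)

# A low-contrast band occupying only DNs 60..108 fills 0..255 after the stretch.
band = np.random.randint(60, 109, size=(4, 4))
print(minmax_stretch(band))
```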
Spatial Enhancement Filtering
(Lecture 5:66-70)
Designed to highlight or suppress specific features in an image based on their spatial frequency.

Spatial frequency is related to the concept of image texture.

It refers to the frequency of the variations in tone that appear in an image.

"Rough" textured areas of an image, where the changes in tone are abrupt over a small area, have high spatial frequencies, while "smooth" areas with little variation in tone over several pixels, have low spatial frequencies.
Image Transformations
(Lecture 5;72)
The manipulation of multiple bands of data, whether from a single multispectral image or from two or more images of the same area acquired at different times (i.e. multi-temporal image data).
Image Algebra
(Lecture 5:73-75)
Typically, two geometrically registered images are used: the pixel (brightness) values in one image are subtracted from those in the other, and the result is scaled by adding a constant (usually 127) to the output values to produce a suitable 'difference' image.

Image subtraction is often used to identify changes that have occurred between images collected on different dates.
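
A sketch of that subtraction, assuming two co-registered 8-bit bands:

```python
import numpy as np

def difference_image(date1, date2):
    """Change detection by subtraction, offset by 127 so that
    "no change" maps to mid-grey in the 8-bit output."""
    diff = date2.astype(np.int16) - date1.astype(np.int16)  # range -255..255
    return np.clip(diff + 127, 0, 255).astype(np.uint8)
```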
Spectral Ratioing
(Lecture 5:75-77)
Highlights subtle variations in the spectral responses of various surface covers.
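
The Normalized Difference Vegetation Index (NDVI) is the classic ratio-based example; a sketch assuming NIR and red reflectance arrays:

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red).

    Vegetation reflects strongly in the NIR and absorbs red light, so
    NDVI approaches +1 over dense canopy and stays near 0 over bare soil.
    """
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + 1e-10)  # epsilon avoids divide-by-zero
```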
Vegetation Indexes
(Lecture 5:78-81)
A group of algorithms containing a number of simple band combinations commonly used for either vegetation or mineral delineation.

Image transformation techniques based on complex processing of the statistical characteristics of multi-band data sets can be used to reduce data redundancy and correlation between bands.

One such transform is called Principal Components Analysis.
Principal Components Analysis
(Lecture 5:82-85)
To reduce the dimensionality (i.e. the number of bands) in the data.

To compress as much of the information in the original bands as possible into fewer bands.

To reduce data redundancy and correlation between bands.

To identify the most informative bands of the original image from their correlation with the PCA bands.
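
A minimal sketch of PCA on an image stack via the band covariance matrix; the (bands, rows, cols) layout is an assumption:

```python
import numpy as np

def principal_components(stack):
    """PCA on a (bands, rows, cols) image stack.

    Returns component images ordered by decreasing explained variance;
    the first few components carry most of the non-redundant information.
    """
    bands, rows, cols = stack.shape
    X = stack.reshape(bands, -1).astype(np.float64)
    X -= X.mean(axis=1, keepdims=True)        # center each band
    cov = np.cov(X)                           # bands x bands covariance
    eigvals, eigvecs = np.linalg.eigh(cov)    # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]         # sort descending by variance
    pcs = eigvecs[:, order].T @ X             # project pixels onto components
    return pcs.reshape(bands, rows, cols), eigvals[order]
```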
Image Classification and Analysis: Unsupervised Classification
(Lecture 6:33-47)
Spectral classes are grouped first, based solely on the numerical information in the data, and are then matched by the analyst to information classes (if possible).

A basic sequence of operations for unsupervised classifiers:

Run a clustering algorithm and classify the image.

Match the created spectral classes to information classes.

Evaluate and edit the spectral signatures of the information classes.

Evaluate the results:
Generalization or aggregation of classes.

Accuracy assessment (may include visiting the scene or ground truthing).

Programs called clustering algorithms (classifiers) are used to determine the natural (statistical) groupings or structures in the data.

At the same time, the RS analyst has limited control over the unsupervised classification process.

By changing the parameters of the algorithms, one can get quite different classification results.

One can manage controllable parameters in order to change and improve unsupervised classification:

Number of classes.

Separation distance among the clusters.

Variation within each cluster.

Unsupervised classification has a posterior decision: from spectral classes in feature space to information classes in the image.
Image Classification and Analysis: Supervised Classification
(Lecture 6:9-32)
The analyst first identifies homogeneous, representative samples of the different surface cover types (information classes) of interest in the imagery; afterwards a classification algorithm is applied in order to group spectral classes.

Define training sites (may include visiting the scene or ground truthing);
Extract, evaluate, and edit the spectral signatures of the training sites;
Classify the image;
Evaluate the results:

In-process classification assessment.
Generalization of classes.
Accuracy assessment (may include visiting the scene or ground truthing).

The next step of the supervised classification process is extracting, evaluating, and editing the spectral signatures of the training sites.

Once the computer has determined the signatures for each class, each pixel in the image is compared to these signatures and labelled as the class it most closely "resembles" digitally.

In Remote Sensing, for supervised classification, the following algorithms are commonly used:
Minimum-distance-to-means classifier
Minimum-distance-to-means classifier with standardized distance
Parallelepiped classifier
Maximum Likelihood classifier

Supervised classification has a prior decision: from information classes in the image to spectral classes in feature space.
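
As an illustration, a sketch of the first of those algorithms, the minimum-distance-to-means classifier; the array layouts are assumptions:

```python
import numpy as np

def min_distance_classify(image, training):
    """Minimum-distance-to-means supervised classifier.

    image:    (bands, rows, cols) array.
    training: dict mapping class id -> (bands, n_samples) training pixels.
    Each pixel is labelled with the class whose mean spectral signature
    is nearest in Euclidean distance.
    """
    bands, rows, cols = image.shape
    pixels = image.reshape(bands, -1).astype(np.float64)       # (bands, N)
    ids = sorted(training)
    means = np.stack([training[c].mean(axis=1) for c in ids])  # (k, bands)
    # Distance from every pixel to every class-mean signature.
    d = np.linalg.norm(pixels.T[:, None, :] - means[None, :, :], axis=2)
    labels = np.array(ids)[d.argmin(axis=1)]
    return labels.reshape(rows, cols)
```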
XY Event
(Lecture 7:6, 8-10)
Simple numerical coordinate pairs of information that describe the location of a feature.

XY geocoding requires a table consisting of a location name, an X-coordinate, and a Y-coordinate.

The values in the X and Y fields may represent any coordinate system and units such as latitude and longitude or meters.
Address Geocoding
(Lecture 7:6, 12-14)
Features can be located based on address matching with a street network or other features with a text address identifier, such as postal code, lot number, street and house number, city, or country name.

Requires two sets of data: a reference dataset and an event attribute table for geocoding.

The process of taking addresses and estimating their locations in a GIS coordinate system.

This is done by relating an address to a spatial reference layer.
Linear Referencing
(Lecture 7:6, 28-30)
Events are associated with a linear measuring system along routes and rely on the dynamic segmentation of a network.
Records the locations of events using relative positions along existing linear features.

Allows multiple sets of attributes, such as speed limits, accidents, or pavement conditions along highway routes, to be associated with existing linear features.
Geocoding Workflow
(Lecture 7:15-27)
1. Build or obtain reference dataset.

2. Determine an address locator style or create a new one.

3. Build an address locator from the reference data with the chosen locator style.

4. Geocode or match addresses as points.

5. Check and rematch tied and unmatched addresses.
Dynamic Segmentation
(Lecture 7:28-31)
The process of computing the map location of route events along linear reference features and displaying and analyzing them on a map.

Derived from the concept that line features need not be split or segmented each time an attribute or event value changes in order to locate the event dynamically.

Multiple sets of route events can be associated with any portion of an existing linear feature independently of where it begins or ends.

Requires two sets of data: a linear reference dataset and a route event attribute table for geocoding.
Dynamic Segmentation: Elements
(Lecture 7:32-35)
Consists of routes, measures, and events.

1. Routes are computed at run time; each represents a linear feature, such as a city street, highway, or river.

2. Measures are stored with the geometry of the line network dataset. They describe distance along a route: a linear measuring method consisting of a starting value and other values along the route.

3. Route events are stored in an event table. They are attributes associated with a route, such as pavement quality or traffic accidents.
Events
(Lecture 7:35)
Linearly referenced or measured data: attributes associated with a route, such as pavement quality or traffic accidents.

In a highway application, pavement quality can be labeled according to the measures: poor from 0 to 150, good from 150 to 316, and fair from 316 to 400 (see the sketch below).
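
A hypothetical sketch of the underlying measure-to-location step: finding the point at which a given measure falls along a route polyline (the route geometry here is invented):

```python
import math

def locate_measure(route, measure):
    """Return the (x, y) at a given measure along a polyline route."""
    travelled = 0.0
    for (x1, y1), (x2, y2) in zip(route, route[1:]):
        seg = math.hypot(x2 - x1, y2 - y1)
        if travelled + seg >= measure:
            t = (measure - travelled) / seg  # fraction along this segment
            return x1 + t * (x2 - x1), y1 + t * (y2 - y1)
        travelled += seg
    return route[-1]  # measure beyond the route end: snap to the end

route = [(0, 0), (100, 0), (100, 200)]
print(locate_measure(route, 150))  # -> (100.0, 50.0)
```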
Types of Events
(Lecture 7:36-39)
Three types are point, linear, and continuous:

Point: associated with a measured location at a point along a route; for example, traffic accident positions along a highway.

Linear: describe a portion of a route using from- and to-measure values; for example, a pavement reconstruction section from 28 to 30 kilometers from the start of the road.

Continuous: describe attributes that run along the entire route; for example, bus fares along bus routes, or pavement quality along an entire highway.
Spatial Network
(Lecture 8:8-9)
A set of edge, junction, and turn elements and the connectivity between them, whose attributes share a common theme primarily related to flow.

For example, a transportation network is an undirected network: the objects travelling it have autonomous control, and travel may occur in either direction along the network.

There are physical and logical networks.
Elements of Transportation Networks
(Lecture 8:9-14)
The physical network consists of the polylines (or arcs) and nodes that make up the underlying layer on which the network is built.

Physical networks are generally identical to the underlying polyline layer on which they are based.

Commodities flow from sources to destinations.

The main elements of a logical network are edges, junctions, and turns.

Edges provide additional information for arcs; junctions and turns provide additional information for nodes.
Arcs
(Lecture 8:10)
Polyline segments that represent portions of the network between intersections.
Nodes
(Lecture 8:10)
Locations where arcs join together, and correspond to intersections or connections.
Properties of Networks
(Lecture 8:15)
In order to accurately model transportation and utility networks, a network must include connectivity, restrictions (including directionality), and impedance.
Connectivity
(Lecture 8:16)
The ability to trace the route from one point on the network to another point on the network by following a series of edges and turns.

Edges are connected at junctions, and junctions are connected together by edges.
Restrictions
(Lecture 8:17)
Some roads may be permanently closed, restricted to certain types of traffic such as buses and taxis, or restricted based on the direction or time of travel.
Impedance
(Lecture 8:18)
Values may be assigned either to edges or to turns, indicating an element's limitations.

Edge impedance is the cost (e.g., time) required to traverse an edge; turn impedance is the cost (time) required to complete a turn.
Network Analysis
(Lecture 8:23)
Answers questions about structure of connections and movement or location along a network.
Optimal Routing
(Lecture 8:26)
Finding and highlighting the best route to get from one location to one or more other locations.

The "best route" could be the shortest, the quickest, or the most scenic.

The vector data structure ensures that off-road travel is not allowed.
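
Optimal routing on such a network is classically solved with Dijkstra's algorithm; a sketch over a hypothetical street graph with travel-time impedances:

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm over edges with impedances (costs).

    graph: dict mapping node -> list of (neighbour, cost) pairs.
    Returns (total_cost, path) for the least-cost route.
    """
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)  # cheapest frontier node
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbour, edge_cost in graph.get(node, []):
            if neighbour not in visited:
                heapq.heappush(queue, (cost + edge_cost, neighbour, path + [neighbour]))
    return float("inf"), []  # no route: network not connected

# Hypothetical street network; costs are travel times in minutes.
streets = {"A": [("B", 4), ("C", 2)],
           "B": [("D", 5)],
           "C": [("B", 1), ("D", 8)],
           "D": []}
print(shortest_path(streets, "A", "D"))  # -> (8.0, ['A', 'C', 'B', 'D'])
```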
Finding Closest Facilities
(Lecture 8:39-40)
A special type of optimal routing problem where one is trying to find the closest points to a given location.

Typically, the points are called facilities and the given location is called an event location.
Resource Allocation
(Lecture 8:42-43)
Allocation of resources from supply centers to customers on a network.

A type of spatial analysis that creates a raster surface, calculating for each cell its nearest source based on the least accumulative cost (impedance) through the network.

Features are assigned to centers until the service criteria or the capacity of the facility is met.
For example, streets may be assigned to the nearest ambulance station, but only within a 10-minute drive radius; or pupils may be assigned to the nearest school.
Geostatistics
(Lecture 10:14)
The analysis of spatially referenced phenomena observed via discrete measurements.

It uses spatial coordinates to formulate a continuous model of the analyzed phenomenon based on interpolation, smoothing, estimation, and/or prediction techniques.
Interpolation
(Lecture 11:4-5)
The process of estimating unknown values that fall between known values.

These points with known values are called known points, control points, sampled points, or observations.

Only works where values are spatially dependent, or spatially autocorrelated; that is, where nearby locations tend to have similar values.
Inverse Distance Weighted (IDW)
(Lecture 11:6-18)
Follows the principle of the First Law of Geography.

Determines cell values using a linearly weighted combination of points.

The technique estimates the Z value (e.g., height) at an unknown point by giving increased weight to nearer points, creating an inverse relationship between weight and distance.

Works best for dense, evenly spaced sample points.

It does not consider trends in the data and cannot make estimates above the maximum or below the minimum sample values.

Local.
Deterministic.

Often judged by the root mean square (RMS) error of the differences between the measured (control point) and interpolated values.
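
A minimal sketch of IDW with a power of 2; the function signature is an assumption:

```python
import numpy as np

def idw(xy_known, z_known, xy_unknown, power=2):
    """Inverse Distance Weighted estimates at unknown points.

    Weights are 1/d**power, so nearer samples dominate; an unknown point
    that coincides with a sample simply takes that sample's value.
    """
    xy_known = np.asarray(xy_known, dtype=float)
    z_known = np.asarray(z_known, dtype=float)
    estimates = []
    for p in np.asarray(xy_unknown, dtype=float):
        d = np.linalg.norm(xy_known - p, axis=1)
        if np.any(d == 0):                    # exact hit on a sample point
            estimates.append(z_known[d == 0][0])
            continue
        w = 1.0 / d**power
        estimates.append(np.sum(w * z_known) / np.sum(w))
    return np.array(estimates)

samples = [(0, 0), (10, 0), (0, 10)]
values = [100.0, 80.0, 120.0]
print(idw(samples, values, [(2, 2)]))  # stays within min/max of the samples
```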
Trend Surface Analysis (Global Polynomials)
(Lecture 11: 19-25)
A surface can be approximated by a global polynomial expansion of the geographic coordinates of the control points; the coefficients of the polynomial function are found by the method of least squares, ensuring that the sum of the squared deviations from the trend surface is a minimum.

Creates a slowly varying surface using low-order polynomials that possibly describe some physical process (e.g., a dominant direction of pollution and the wind direction).

Analysis can help identify global trends in the input dataset.

Locations at the edge of the data can have a large effect on the surface.

There are no assumptions required of the data.

There is no statistical assessment of prediction errors.
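
A sketch of a least-squares trend surface using NumPy; the term ordering and helper names are assumptions:

```python
import numpy as np

def trend_surface(x, y, z, order=1):
    """Fit a global polynomial trend surface by least squares.

    order=1 fits z = b0 + b1*y + b2*x; higher orders add polynomial terms.
    Returns the coefficients and a predict(x, y) function.
    """
    terms = [(i, j) for i in range(order + 1)
             for j in range(order + 1 - i)]           # exponent pairs (i, j)
    A = np.column_stack([x**i * y**j for i, j in terms])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)    # minimizes squared error

    def predict(px, py):
        return sum(c * px**i * py**j for c, (i, j) in zip(coeffs, terms))

    return coeffs, predict

x = np.array([0.0, 1, 2, 0, 1, 2])
y = np.array([0.0, 0, 0, 1, 1, 1])
z = np.array([1.0, 2, 3, 2, 3, 4])     # lies on the plane z = 1 + x + y
coeffs, predict = trend_surface(x, y, z)
print(predict(1.5, 0.5))               # -> 3.0 (the plane is recovered)
```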
Splines (Radial Basis Functions)
(Lecture 11:26-28)
A series of exact interpolation techniques that fit the surface through each measured sample value.

Radial basis functions are deterministic interpolators that are exact.

Radial basis functions make no assumptions about the data.

There is no statistical assessment of prediction errors.

Conceptually similar to fitting a rubber membrane through the measured sample values while minimizing the total curvature of the surface.

A deterministic interpolator, meaning that a mathematical formula (a polynomial function) is used to make a prediction for an unknown point.

Produce smooth surfaces, which are ideal for the modeling of gently varying surfaces such as elevation.

Unable to model abrupt discontinuities, such as cliffs, canyons, or geological faults.

To solve this, a variant of the spline command has been developed which allows the user to input lines representing discontinuities.
Kriging 01
(Lecture 12:4-5)
Predicts the best linear unbiased estimates of a surface at specified locations, based on the assumptions that the surface is stationary and the correct form of the semivariogram has been chosen.

Procedures incorporate measures of error and uncertainty in determining estimates.

The calculation of unknown values is similar to IDW and is based on weights assigned to known values.

However, in kriging the weights are statistically optimal, and the semivariogram is used to calculate them.

The semivariogram weights depend on the distribution of the known samples: their distance and direction.
Spatial Dependence
(Lecture 12:6)
The notion that things near to each other are more similar than things farther apart; this is the First Law of Geography.
In this case, points are not independent.
Variogram
(Lecture 12:6)
A representation that plots semivariances against separation distance.
Variogram Types
(Lecture 12:9-12)
1. The experimental semivariogram is calculated: each pair of observation points has a semivariance, so the variogram is a point "cloud".

2. The empirical semivariogram is calculated by averaging; a directional empirical variogram can also be calculated.

3. The theoretical variogram is estimated by fitting a model.
Sill
(Lecture 12:12)
The maximum semivariance, or the height that the semivariogram reaches; it represents variability in the absence of spatial dependence.

It is often composed of two parts: a discontinuity at the origin, called the nugget effect, and the partial sill, which together produce the sill.
Range
(Lecture 12:12)
The separation between point pairs at which the sill is reached, or the distance beyond which there is no evidence of spatial dependence.
Empirical Semivariogram
(Lecture 12:14)
A graph of the averaged semivariance values on the y-axis against distance (lag h) on the x-axis (see the sketch below).
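
A sketch of how those averages are computed, assuming simple equal-width lag bins:

```python
import numpy as np
from itertools import combinations

def empirical_semivariogram(xy, z, lag_width, n_lags):
    """Average the point-pair semivariances into lag bins.

    The semivariance of a pair is 0.5 * (z_i - z_j)**2; plotting the bin
    averages against lag distance gives the empirical semivariogram.
    """
    xy, z = np.asarray(xy, float), np.asarray(z, float)
    sums = np.zeros(n_lags)
    counts = np.zeros(n_lags, dtype=int)
    for i, j in combinations(range(len(z)), 2):
        h = np.linalg.norm(xy[i] - xy[j])        # separation distance
        b = int(h // lag_width)                  # which lag bin the pair falls in
        if b < n_lags:
            sums[b] += 0.5 * (z[i] - z[j]) ** 2
            counts[b] += 1
    lags = (np.arange(n_lags) + 0.5) * lag_width  # bin mid-points
    gamma = np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)
    return lags, gamma
```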
Anisotropy
(Lecture 12:19)
A characteristic of a random process that shows higher autocorrelation in one direction than another.

Defines the spatial variation among points separated by a spatial lag and direction.

Indicators are that the semivariogram changes notably with direction: ranges and/or sills are considerably different for the respective directions.
Theoretical Semivariogram
(Lecture 12:20-21)
The empirical semivariogram does not provide information for all possible directions and distances.

We need to know semivariances for all possible directions and distances.

It is necessary to fit or approximate the empirical semivariogram/covariance cloud by a continuous function or curve (the theoretical variogram model), which expresses semivariance as a function of separation vector.

This fitted model is used in the kriging equations.

Must obey mathematical constraints so that the resulting kriging equations are solvable (e.g., cannot introduce negative standard errors for the predictions).
Kriging 02
(Lecture 12:24-26)
Uses regionalized variables, which fall between random variables and completely deterministic variables. The concept assumes that the spatial variation of a regionalized variable is the sum of:

A structural, deterministic component having a constant mean or trend.

Random, but spatially correlated component.

Spatially uncorrelated random noise.

The weights used in kriging involve not only the semivariances between the points to be established and the known points, but also those between the known points.

Simple kriging is linear with a known trend.

It allows the quantification of prediction errors and representation of these errors as an error surface or map.
Deterministic interpolation methods (IDW, splines, etc.) do not consider errors!
Ordinary Kriging
(Lecture 12:27-29)
Linear with an unknown flat trend.

The value at a location is the sum of a regional mean and a spatially correlated random component.

Predicts at points, with an unknown mean (which must be estimated) and no trend (or a flat trend).

Assumes there is no trend or strata; the regional mean must be estimated from the sample.

The difference between the predictions and the actual values should average zero, making the prediction "unbiased."
Universal Kriging
(Lecture 12:33-34)
Linear with an unknown polynomial trend.

The value of the variable at a location is modeled as the sum of a regional non-stationary trend and a spatially correlated random component.

The semivariances are based on the residuals, not the original data, because the random part of the spatial structure applies only to these residuals.
Co-Kriging
(Lecture 12:35)
Linear and multivariate and can have different types of trends.

Method of using supplementary information on co-regionalized variables (co-variables) to improve the prediction of a target variable.

The idea of co-regionalization is that the process that drives one variable is the same as, or at least related to, the process that drives the other variables.

For example, the distribution of heavy metals in soil can relate to pollution, or the distribution of water pH levels can relate to pollution and elevation.
Cross-Validation
(Lecture 13:11-12)
"How does one calculate the RMSE for an exact interpolator?"

Each sample point is removed in turn; the interpolation surface is created without that point, and the predicted value at the removed location is compared with the measured value.

This process is repeated for each sample point.

The result is a summary statistic quantifying the error of the prediction surface.
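
A sketch of that leave-one-out loop; `interpolate` is any function matching the signature of the idw() sketch above:

```python
import numpy as np

def loo_rmse(xy, z, interpolate):
    """Leave-one-out cross-validation for an exact interpolator.

    interpolate(xy_known, z_known, xy_unknown) must return estimates at
    the unknown points. Returns (RMSE, mean error).
    """
    xy, z = np.asarray(xy, float), np.asarray(z, float)
    errors = []
    for k in range(len(z)):
        mask = np.arange(len(z)) != k                     # drop sample k
        pred = interpolate(xy[mask], z[mask], xy[k:k + 1])[0]
        errors.append(pred - z[k])                        # prediction error at k
    errors = np.array(errors)
    return np.sqrt(np.mean(errors**2)), errors.mean()
```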
Errors
(Lecture 13:13-14)
When one compares these models, one should look for a model that satisfies the following conditions:
A lower root-mean-squared error.
A mean error near zero.

Deterministic interpolation methods (IDW, splines, etc) do not consider errors!

Therefore, only the RMSE can be used for comparisons between a deterministic method and another interpolation method.

The mean standardized prediction error should be as close as possible to zero.
The average prediction error should be as close as possible to the root-mean-squared error.
The standardized root-mean-squared prediction error should be as close as possible to one.
Best Interpolation Methods (When and Where?)
(Lecture13:25-26)
The value of a variable at a point in space or time is related to its value at nearby points.

Knowing the values of sample points allows one to predict (with some degree of certainty) the value at any given chosen point.

Correlation between variables is applied within a single variable (autocorrelation), using distance or time to model the spatial relation.

"How does one describe the autocorrelation?"

A function of the distance and direction separating two locations is used to quantify autocorrelation.

Spatial structure can be described by range, direction, and strength.
Datum
(Lecture 14:15, 22)
Provides a frame of reference for measuring locations on the surface of the earth.

It defines the origin and orientation of latitude and longitude lines.

Important because coordinates based on different datums will put features that are coincident on the Earth’s surface at different locations on a map.

Since a datum is tied to a specific ellipsoid, it is equally important to know which ellipsoid was used to determine the geographic coordinates of the data you want to use.
Horizontal Datum
(Lecture 14:16-)
A particular application of an ellipsoid.

Defines the position of the approximated spheroid.

Two types of horizontal datums:

An earth-centered, or geocentric datum uses the earth’s center of mass as the origin.
Based on satellite orbital data.

A local datum aligns its spheroid to closely fit the earth's surface in a particular area.

Based on geodetic ground survey measurements.
Geocentric Horizontal Datum
(Lecture 14:17-19)
An earth-centered datum (e.g., WGS 1984) that uses the earth's center of mass as the origin.

Its ellipsoid is aligned to fit the earth's surface as a whole rather than a particular region.
Local Horizontal Datum
(Lecture 14:20)
A local datum (e.g., NAD27) aligns its ellipsoid to closely fit the earth's surface in a particular area; its origin is not at the center of the earth!

A datum is specified by the ellipsoid used and the location of that ellipsoid.

The local origin point of the datum is matched to a particular position on the surface of the earth, and all other points are calculated from it (for NAD27, a point in Kansas).
Transformation Methods For Horizontal Datums
(Lecture 14:24-28)
Changes the underlying datum, which can include changing the spheroid.

Grid-based methods:
A geographic transformation (grid-based method) converts between geographic (longitude-latitude) coordinates directly.

Uses a grid of differences to convert values directly from one datum to another.

Potentially the most accurate (NAD27 to NAD83 accurate to ~0.15 m for the continental US).
E.g., NADCON (US), NTv2 (Canada).

The National Transformation version 1 (NTv1) uses a single grid to model the differences between NAD 1927 and NAD 1983 in Canada.
NTv2 is designed to check multiple grids for the most accurate shift information.

Equation-based methods: convert the geographic coordinates to geocentric (X, Y, Z) coordinates, transform the X, Y, Z coordinates, and convert the new values back to geographic coordinates.

E.g., Geocentric Translation (three-parameter), Coordinate Frame methods (Bursa-Wolf or seven-parameter), Molodensky, and Abridged Molodensky methods.
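
A sketch of the equation-based route, here the forward half of the three-parameter geocentric translation; the shift values in the example are illustrative only:

```python
import math

def geodetic_to_xyz(lat, lon, h, a, f):
    """Convert geodetic coordinates (degrees, metres) to geocentric XYZ."""
    e2 = f * (2 - f)                                 # first eccentricity squared
    lat, lon = math.radians(lat), math.radians(lon)
    N = a / math.sqrt(1 - e2 * math.sin(lat) ** 2)   # prime vertical radius
    x = (N + h) * math.cos(lat) * math.cos(lon)
    y = (N + h) * math.cos(lat) * math.sin(lon)
    z = (N * (1 - e2) + h) * math.sin(lat)
    return x, y, z

def translate(xyz, dx, dy, dz):
    """Apply the three-parameter datum shift in geocentric space."""
    x, y, z = xyz
    return x + dx, y + dy, z + dz

# WGS84 ellipsoid parameters: semi-major axis (m) and flattening.
a, f = 6378137.0, 1 / 298.257223563
xyz = geodetic_to_xyz(45.0, -75.0, 100.0, a, f)
print(translate(xyz, -8.0, 160.0, 176.0))  # illustrative shift values (m)
```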
Vertical Datum
(Lecture 14:15, 29-30)
A level from which elevation is measured.

Defines the “zero reference” point for elevation (z).
E.g., North American Vertical Datum of 1988 (NAVD88).
Canadian Geodetic Vertical Datum of 1928 (CGVD28) which is based on the mean sea level determined from several tidal gauges located in strategic areas of the country.

Takes into account a map of gravity anomalies between the ellipsoid and the geoid, which are relatively constant.

The Earth's gravity field isn't the same everywhere because the material that makes up the earth varies from place to place.
Measuring these differences makes it possible to figure out the earth's shape, the geoid.

Mean Sea Level (the same as the geoid) is the average level of the ocean surface, halfway between the highest and lowest levels recorded.
COGO
(Lecture 14:38-39)
A set of algorithms for converting survey data (bearings, distances, and angles) into coordinate data.

Direction-Distance - the calculation of a coordinate from an existing coordinate using known distance and direction values.

Deflection-Angle-Distance - compute a new coordinate by defining a deflection angle offset, based on a reference direction, and a distance from a known point.

Delta XY - compute coordinates for a survey point based on a known difference in coordinates from a given start point.
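
A sketch of the Direction-Distance computation, using the survey convention that bearings are measured clockwise from north:

```python
import math

def direction_distance(x, y, bearing_deg, distance):
    """COGO Direction-Distance: new coordinate from a known point.

    The bearing is clockwise from north (survey convention), so the
    east displacement uses sin() and the north displacement uses cos().
    """
    b = math.radians(bearing_deg)
    return x + distance * math.sin(b), y + distance * math.cos(b)

# From (1000, 1000), bearing 90 degrees (due east), distance 50 m:
print(direction_distance(1000.0, 1000.0, 90.0, 50.0))  # -> (1050.0, 1000.0)
```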
Traverse
(Lecture 14:40)
Used to compute a sequence of survey point locations starting from an initial known point.
Two methods are used:

The Bearing method.

The Angle Right method.

Used for defining coordinates based on values taken from subdivision plans.

The TPS Traverse computation can be used for processing field traverse data.
There are two primary categories of traverses: open and closed.