
GPS

a collection of government satellites designed to provide accurate positioning and navigation information anywhere on the surface of the earth

Components of GPS

1 - space segment - satellites


2- control segment - ground stations


3 - user segment - receivers

space segment

-uses 24 satellites (21 active, 3 spares)


-each makes a complete orbit every 12 hours


-this provides 5-8 satellites above the horizon from any point on earth

control segment

-5 ground based control stations

user segment

-consists of GPS receivers and the user community


-receivers use satellite signals to compute position, velocity and time estimates


-these estimates are used primarily for navigation

how the GPS system works (in 5 easy steps)

-it is a ranging system (triangulation)



1) a GPS receiver ("the user") detects 1-way ranging signals from several satellites


-each transmission is time-tagged


-each transmission contains the satellite's position


2) the time-of-arrival is compared to time-of-transmission


3) the delta-T is multiplied by the speed of light to obtain the range


4) each range puts the user on a sphere about the satellite


5) intersecting several of these yields a user position

triangulation for location

-1 satellite puts us anywhere on one sphere


-2 satellites puts us anywhere that the two spheres overlap


-3 satellites puts us on one of two points where all three spheres meet. generally only one is reasonable



*more satellites used to triangulate equals higher accuracy

distance measuring

Distance = Rate x Time



-the whole system revolves around time


-Rate = 186,000 miles per second (speed of light)


-Time = time it takes signal to travel from the SV to GPS receiver

measuring travel time

SV clocks


-$100,000 - $500,000 each


-atomic clocks based on Cesium and Rubidium




Receiver Clock


-similar to quartz watches


-always an error between satellite and receiver clocks (delta-t)


-require 4 satellites to solve for x,y,z, and delta-t

GPS signals

*broadcast on 2 L-band carriers


-L1: 1575.42 MHz - modulated by C/A-code and P-code


-L2: 1227.60 MHz - modulated by P-code



*most low-cost receivers track only L1


*If L2 tracked, then the phase difference (L1-L2) can be used to filter out ionospheric delay


-this is true even if the receiver cannot decrypt the P-code


-L1-only receivers use a simplified correction model

NAVSTAR Carrier Signals (L1 & L2)

-a combination of carrier wave and code


-L1 signal carries navigation message and SPS (standard positioning service) code signals


-L2 signal is used to measure ionospheric delay by PPS (precise positioning service) equipped receivers

NAVSTAR signals (C/A)

-Coarse Acquisition code - a repeating 1 MHz pseudo-random noise (PRN) code


-this is a binary number that repeats every millisecond


-Carrier Phase GPS is based on L1 & L2 signals


-Code Phase GPS is based on C/A code

The Almanac

-each SV broadcasts info about ALL other SVs, in addition to its own nav data (in a reduced-accuracy format)


-permits receiver to predict, from a cold start, "where to look" for SVs when powered up


-GPS orbits are so predictable that the almanac may be valid for months


-the data is large: 12.5 minutes to transfer in its entirety

Time is Distance

-for example: the PRN code is sent from a satellite and simultaneously generated by the GPS receiver


-code from satellite reaches receiver


-difference between code received and code generated by receiver = time = distance





Sources of GPS Positional Error

-noise


-bias


-blunders

Multipath errors

*inaccurate measurements resulting from:



-radio waves cannot pass through some objects (e.g. buildings, trees, mountains), so the signal will not be received


-radio waves can bounce or be reflected off of certain objects (e.g. water, large metal objects, buildings), so the signal will not be accurate

to get the best measurement

-remain in the open


-avoid buildings and tall trees


-take several measurements


-wait for another satellite to come into view of the receiver

D.O.P.

Dilution of Precision



-a measure of the geometry of the visible GPS constellation



good: satellites are in all directions


poor: when satellites are clustered together

types of DOP

PDOP = Position DOP (most commonly used)


VDOP = Vertical DOP


GDOP = Geometric DOP


HDOP = Horizontal DOP


TDOP = Time DOP

DOP quality

very good = 1-3


good = 4-5


fair = 6


suspect = >6



*mission planning is critical to obtain good DOP

cost of errors

(typical error contributions, in meters)


device & PRN noise = 1-2


clock errors = 1


ephemeris errors = 1


tropospheric delays = 1


ionospheric delays = 10


multipath errors = 0.5


Selective Availability = 30-100



w/ SA, total error = 45-115 meters


w/o SA, total error = 10-20 meters


w/ differential correction = 1-3 meters

differential correction

-depends on a base station, at a known position, to compute corrections for remote receivers


-corrections can be made in real time, or by post-processing


types of differential correction

coverage


-local area


-wide area



methods


-real-time


-post processing

typical position accuracy

-SA = 100 meters


-typical GPS w/o SA = 15 meters


-typical differential GPS (DGPS) = 3-5 meters


-typical WAAS = <3 meters

WAAS

Wide Area Augmentation System



-a wide-area differential GPS system


-includes reference receivers, an error correction message broadcast from another satellite, and the field receiver

CDGPS

Canadian Differential GPS service



-wide-area (satellite-based) service that broadcasts CDGPS corrections


-optimized for Canada, resulting in superior meter-level accuracy with most GPS receivers and sub-meter accuracy with dual-frequency technology

GPS to GIS

-use a combo of GPS hardware and software to create point, line and area objects


-use software to attach attribute data to these objects



start by:


-checking for high PDOP values & # of SVs


-create a data dictionary


-set critical values (i.e. feature logging rate, not in feature rate, position mode etc)

And, Or, Xor

-And: where they overlap


-Or: where either or both occurs


-Xor: where either occurs, but not both

multi-part features

a feature that has multiple separate parts that are classified as the same (i.e. the islands of Hawaii)

ways to generalize objects

-dissolve


-eliminate


-simplify line


-smooth line

dissolve

-aggregates features based on specified attribute(s)



*i.e. "code" - all adjacent polygons that have the same "code" are merged



*makes fewer larger polygons instead of more smaller ones


eliminate

-overlaying layers can create sliver polygons (where the borders don't quite line up)


-merges the sliver polygons with neighbouring ones; usually with the longest shared border or largest area

simplify

point remove or bend simplify

smoothing

smoothing the edges of a polygon

proximity analysis

-buffer


-multiple ring buffer


-near


-point distance


-thiessen polygons

buffer analysis

-a type of proximity analysis where a buffer zone is created


-specify a distance from a point (radius), line (corridor) or area (area)

buffer generation

-on points, lines or polygons


-different buffer distances


-concentric buffers


-overlapped or dissolved zones


spatial proximity - buffer

-constant width


-variable width (weighted)

multiple ring buffer

-creates multiple concentric buffers around the same area


near

computes the distance from each point in the input feature class or layer to the nearest point or polyline in the near feature class or layer, within the maximum search radius



-the results are recorded in the Input Features attribute table


-fields for distance and feature ID of the closest feature are added or updated

PointDistance

-computes the distances between point features in one feature class or layer and those in another, within the specified search radius


- {search_radius}

topological overlay

-uses polygons as the overlay theme


-the output is the same feature type as the input

types of overlay

*point in polygon


-determine which point features of specified characteristics fall in specified areas



*line in polygon


-determine which line features of specified characteristics fall in specified areas



*polygon on polygon


-overlapping two or more polygons to create a new layer

overlay w/o attribute transfer

-clip


-erase


-split

clip

-extracts those features from an input feature class or layer that overlap with a clip feature class or layer


-the most frequently used polygon overlay command; extracts a portion of a feature class to create a new feature class



*think: cutting a cookie out of dough with a cookie cutter to get the shape you want

erase

-erases the input features that overlap with the erase feature class polygons


-opposite of clip



*think: leaves you with the "leftover dough" in the cookie cutter analogy

split

breaks the input features into multiple output feature classes



i.e. into zones

overlay w/ attribute transfer (merging)

-union


-intersect


-identity



-update

union

-combines all the features of both feature classes


-computes a geometric intersection of the Input Features


-all features will be written to the Output Feature Class with the attributes of the Input Features they overlap

intersect

-only those features in the area common to both feature classes will be preserved


-computes a geometric intersection of the input features


-features or portion of features common to all layers and/or feature classes will be written to the output

identity

all features of the input feature classes, as well as those features of the identity feature classes that overlap the input feature classes, are preserved in the output feature class



-computes a geometric intersection of the input features and the identity features


-the input features or portion thereof which overlap identity features will get the attributes of those identity features

update

-replaces the input coverage areas with the update coverage polygons using a cut-and-paste operation



i.e. {BORDERS | NO_BORDERS} - specifies whether or not the outside border of the update features will be kept after they are inserted into the input features

Ian McHarg

-did environmental analysis using mylar overlays from maps


-combined colours for different cultural and environmental sensitivities


-overlay analysis (decomposition and georeferencing) referred to as "McHargian analysis"

cartographic modeling

1) derivative modelling or representation


-create new data layers based on geographic principles


-programmed into the software or knowledge the user applies to the database


~knowledge of the reference system (i.e. distance)


~knowledge of context or spatial relationships (i.e. slope)


~knowledge of other relationships (i.e. suitability)



2) process or systems modelling


-add knowledge of processes to the database; use the GIS as a surrogate environment to play out scenarios

overlay suitability models

-derivative modelling


-find suitable routes/locations for things depending on the criteria


process modelling

*what-if scenarios


-how long does it take water to flow from a given point to the sea?


-how long would it take if a dam were built?

capability modeling

-involves IDing areas/features that could support a given activity, use or process and eliminating areas/features that would not be able to fulfill these needs


-usually outcome is represented as a binary classification



i.e. land capability models for forest harvesting practices (combine stream and road buffers)

modeling process

-ID the problem


-break it down (simplify it)


-organize the data required to solve it


-develop a clear & logical flowchart using well define operations


-run the model and modify it if necessary


Suitability modeling

-typically refers to an ordinal classification of the capable areas/features to denote the relative abilities of these areas/features to fulfill these needs


-often this number is represented as an index that goes from 0 to 1 where: 1 = most suitability possible; 0 = no suitability

overlay combination rules

-first, the overlay process assembles all the attributes of a given location



Dominance Rule:


-one value wins


-operation consists of choosing one value


- i.e. minimum, maximum etc



Contributory Rule:


-each attribute value contributes to result


-operations like addition allow each source to contribute to result



Interaction Rule:


-pairs of values contribute to result


-decisions in each step may differ

horizontal integration

solves problems caused by:


-geometric distortions that commonly occur in source maps


-they may be introduced by imperfect registration in map compilation, lack of geodetic control in source data, or a variety of other causes



horizontal integration issues

mismatched features:


-alignment problems


-shape problems



mismatched attributes:


-attributes might not agree in different feature classes

merging features

-adjacent, overlapping or nonadjacent features can be merged


i.e. interior boundary removed, one record in the attribute table

why merge vector coverages?

study areas usually contain more than one map sheet, each with potentially different:


-horizontal control marks


-feature extents


-projection

data transformation methods

affine


-based on polynomials


-differential scaling, skew, rotation, translation



projective



differential


-based on control points


-rubber sheeting


-uses displacement links to distort data to best fit the data you are working with

vertical integration

-registering features from different themes, but with common boundaries, to a common location



common errors:


-sliver polygons: created when digitizing the same line twice and in overlay operations

georeferencing steps

1 - add the image to the map


2 - open the georeferencing toolbar


3 - select layer to be transformed


4 - create displacement links which link source coordinates to destination coordinates


5 - select a transformation method


6 - check RMS error


7 - perform the actual transformation and save results

2 classifications of data integrity

-spatial integrity


-attribute integrity

what changes with the ArcGIS topology

-tracks edits you make to the features


-flags areas that have been modified



-the validate topology command analyses the edited areas and evaluates the rules



-if a rule is violated, a topology error feature is created


-you correct the errors or mark them as exceptions to the rule (or you may ignore the errors altogether)

topology rules

-25 w/in ArcGIS 9


-single feature class or between feature classes


-set at the feature class or subtype level


-categorized by geometry type (point, line, poly)



6 new topology rules supported in 10.0:


-area must contain one point


-line must not intersect with line


-line must be inside area


-line must not intersect line or touch interior


-point must be disjoint


-point must be coincident with point

coincident geometry

-editing updates multiple features without slivers and gaps



edge:


-coincident line geometry between features


-an independent line feature



node:


-vertices at the ends of an edge

defining a topology

1. name the topology


2. apply a cluster tolerance


3. select participating feature classes


4. set coordinate accuracy ranks


5. apply integrity rules


6. validate the topology

attribute integrity

*valid attribute values for a feature (by unique value or by range)


-ex. water pipe material or diameter, soil classification


*attribute dependencies


-type or diameter of pipe defines the list of acceptable material types


*default values set when creating new features

what is a domain?

-defines and constrains an attribute field with a list of valid values


-used to validate attribute values

types of domains

*coded value


-a residential parcel can only have a residential or vacant land use


*range


-a pipe must be between 25 and 500 cm wide


*domains are a property of the geodatabase


-multiple objects may have the same field, use same domain

subtypes and domains

-prevent illegal attribute assignment to features, tables


-set in ArcCatalog, use in ArcMap


-apply default values for new features



i.e. feature class: buildings


subtypes: residential, commercial


domains: occupancyR 1-25, OccupancyC 1-100

data quality

-GIS is a garbage magnifier


-> garbage in/garbage out



-most failed GIS projects are due to poor planning and poor data quality

introduction to error in GIS

-was not part of mainstream GIS projects until very recently


-its greatest strength is also its greatest weakness (GIS cross-references many types of data by location; every time a new dataset is imported, the GIS inherits its errors)


-error accumulates and grows exponentially


-error encompasses both the imprecision of data and inaccuracies


-how you deal with error (or not) will "make or break" most GIS projects

types of error

-positional accuracy and precision


-attribute accuracy and precision


-conceptual accuracy and precision


-logical accuracy and precision

positional accuracy and precision

-applies to both horizontal and vertical positions


-to meet agreed-upon accuracy standards in Canada and the US, 90% of all locations should be within:



1:1,200 +/- 1m


1:2,400 +/- 2m


1:4,800 +/- 4m


1:10,000 +/- 8m


1:12,000 +/- 10m


1:24,000 +/- 12m


1:63,360 +/- 32m


1:100,000 +/- 50m

root mean square error

the square root of the average squared errors



*typically expressed as a probability that no more than P% of points will be further than S distance from their true location



*loosely: tells us how far recorded points in the GIS are from their true location on the ground, on average

Attribute Accuracy and Precision

-non-spatial data linked to location may also be inaccurate or imprecise


-inaccuracies may result from mistakes of many sorts


-non-spatial data can also vary greatly in precision

Conceptual Accuracy and Precision

-GIS depends upon the abstraction and classification of real-world phenomena


-sometimes users may use inappropriate categories or misclassify info




i.e. classifying streams and power lines together because they are both line features would not allow for much in the way of useful analysis

Logical Accuracy and Precision

-info stored in a database can be employed illogically


-GIS systems are typically unable to warn the user if inappropriate comparisons are being made or if data are being used incorrectly

Sources of Inaccuracy and Imprecision

-obvious sources of error (age, map scale, format)


-errors resulting from natural variation or from original measurements


-errors arising through processing

Coverage & Completeness

-many data sets do not have a uniform coverage of information


-some districts may have fragmented coverage of some layers


-features may be mapped differently


-roads as lines instead of polygons

Accessibility

-not all data are equally accessible



*land resource data may be publicly available in one country and a state secret in another

Errors Resulting from Natural Variations or Original Measurements

*positional accuracy


-result of poor field work etc


-correction through rubbersheeting


*accuracy of content


-errors caused by miscoding


*measurement error


-accuracy vs. precision


-lab errors


Errors Arising Through Processing

-numerical errors


-errors in topological analysis


-classification and generalization problems


-digitizing and geocoding errors

Numerical Errors

-numerical precision


-single precision (8 digits) vs double precision (16 digits)


-some GIS store coordinates as Integer values, so large areas may not be stored precisely


-this is why it is important to understand how a number is actually stored in the computer


Faults Arising through Topological Analysis

-overlaying multiple layers of maps can result in problems such as:


*slivers


*overshoots


*dangles

Classification and Generalization Problems

-we have small brains and like to deal with small numbers of categories (7, +/- 2)


-classification and generalization of attributes used in GIS are subject to interpolation error and may introduce irregularities in the data that are hard to detect

Resolution Errors

-generalization may improperly represent size and shape


-entire regions may be eliminated (islands, peninsulas, etc.)

Errors Associated with Spatial Analysis

*errors in digitizing a map


-source errors (distortion, boundaries have a "thickness")


-digital representation (curves approximated, boundaries are not absolute)



*boundary problems


-definitely in/out


-possibly in/out


-ambiguous (on the digitized border line)



*Polygon overlay and boundary intersection


-sliver polygons


-coordinate drift - "fuzzy" creep


-error zones



*rasterizing a vector map

Propagation of Error

*occurs when one error leads to another



i.e. if a map registration point has been mis-digitized in one coverage and is then used to register a second coverage, the second coverage will propagate the first mistake



-in this way, a single error may lead to others and spread until it corrupts data throughout the entire GIS project



-to avoid this problem use the largest scale map to register your points

Cascading Error

*when errors are allowed to propagate unchecked from layer to layer repeatedly



-effects can be very difficult to predict


-may be additive or multiplicative and can vary depending on how information is combined

Triangulated Irregular Network (TIN)

-the world represented as a network of linked triangles drawn between irregularly spaced points with x, y, and z values


-a specialized vector format because it does not represent individual features


-represents a surface

TIN Creation

-mass points, breaklines


-clip, erase, replace


-hard/soft edge enforcement

Interpolation

*generate surfaces from point measurements


-natural neighbors


-minimum curvature spline


-spline with barriers


-TopoToRaster


-kriging


-polynomial trend surface


-inverse distance weighted (IDW)



Contouring

*create contours from surfaces


-Batch GP tools


-Interactive contour button

Natural Neighbors

*use if you know nothing about your data


-the most conservative


-assumes all highs and lows are sampled, will not create artifacts


-based on voronoi tessellation



Strengths:


-conservatively smooth


-interpolated z range within range of data


-smart selection of samples



Supported input data types:


-points


-TIN to Raster

TopoToRaster

*use if your input data is contours


-optimized for contour input


-if not creating a DEM, turn off drainage enforcement option



Strengths:


-handles contour data on input


-hydrologically sound output



Supported input data types:


-contours with height attribute


-2D points with attribute


-2D polylines (rivers)

Spline

*use if you know the highs and lows are not sampled


-be careful of points that are near in space but very different in value, as they can create unnatural artifacts



Strengths:


-smooth


-infers trends



Supported input data types:


-points (2D with attribute, 3D)


-barriers

Spline with Barriers

*use if your surface is not continuous


-i.e. there are faults or other discontinuities in the surface



-minimum curvature spline that honours barriers, faults and void areas



Kriging

Strengths:


-probabilistic


-well behaved with sparse data


-options to tailor results based on input data characteristics


-statistical assessment of estimates



Supported input data types:


-points (2D with attribute, 3D)

What is Linear Referencing?

method of storing geographic features by using relative positions along a measured line feature



i.e. road is 100 miles long, where is mile 60?

Why use Linear Referencing?

-model relative locations along linear features


(the concrete road starts at mile 5, not lat/long)



-enables the association of multiple sets of attributes to portions of linear features without segmenting the underlying line each time attribute values change

Linear System of Measure

-feature locations are determined using a linear system of measure values, instead of using x,y coordinates

Features Containing One-to-Many Relationships

-two or more pieces of information associated with the same location on a line



i.e. river, salmon habitat, speed

Features Containing Frequently Segmented Data

-some types of features have attributes that change frequently


-subdivided by different colours/tones based on material or quality

Routes

-linear features - in a polyline feature class


-must have an identifier & measurement system


-store in a geodatabase, shape file, or coverage



-route geometry: single part polyline, multi-part polyline


-routes can model: looping, branching, 180-degree turns

Measures

*the distance along a route from an origin


-can be miles, meters, feet, time (do not have to be same units as the x,y coordinates)


-can increase, remain constant, or decrease


*route vertices store measure values


-NaN (Not a Number - unknown) measures may exist

Route Locations

-use a route identifier to locate the appropriate route



*2 types:


-Point (uses a single measure value): discrete location on a route


-Line (uses from- and to- measure values): describe a portion of a route

Event Tables

*route locations thematically stored in tables are called "route events" or simply "events"


*2 types:


-Point event table (i.e. accidents)


-Line event table (i.e. pavement conditions, riverbank composition)



*can be any type of table supported by ArcGIS

Point Event Tables

*two required fields


-route ID identifies the route on which the event is located


-measure is the event's location on the route



*also includes other attributes about the event

Line Event Tables

*three required fields:


-route ID identifies route on which event is located


-From Measure and To Measure define beginning and end of line event



*also includes other attributes about the event

Overlay Events

*logical union or intersection of two input event tables


-creates new event table with attributes from both tables


*non-geometric way of performing


-line-on-line, line-on-point, & point-on-point overlays

Metadata

*metadata is information about your data


-contains vital information



*should NEVER be viewed or treated as a separate entity



*critical and integral component of any complete data set

Why create metadata?

*protects investment in data


-staff turnover, memory loss


-makes it easier to reuse and update data


-provides documentation of sources, quality



*helps user understand data


-provides consistent terminology


-helps user determine fitness for use


-facilitates data transfer, interpretation by new users



*enables discovery


-provides info to clearinghouses


-provides flexibility in searching


-key in sharing spatial data


-historical documentation



*limits liability: can prevent inappropriate use, provides protection if inappropriately used



*reduces workload, questions about data



*cuts costs: allows for automation



*becoming a more common requirement, and easier to do

Digital Metadata Formats

-notes or log files


-text files


-HTML file in outline format


-customizable (eXtensible Markup Language - XML)

Metadata Standards

*International


-Dublin Core: lightweight (15 elements), web focus, industry support



*US Federal


-Federal Geographic Data Committee Content Standard for Digital Geospatial Metadata (FGDC CSDGM) Minimal: middleweight (~30/300 elements), production rules


-FGDC CSDGM - Biological Data Profile (NBII): heavyweight (over 400 elements), data focus

Metadata Content Standard answers these important questions:

WHO


-collected/processed the data


-wrote the metadata


-to contact for Qs



WHERE


-were they collected/processed


-are the data located



WHAT


-are the data about


-format are they in


-quality


-constraints of their use



WHEN


-were they collected/processed



WHY


-the data were collected



HOW


-were the data collected/processed


-do I access/order it


-much do they cost

FGDC - Content Standard for Digital Geospatial Metadata

-Identification


-Data Quality


-Spatial Organization


-Spatial Reference


-Entity and Attribute


-Distribution


-Metadata reference

Identification Metadata

-creator


-age of data


-extent of coverage


-why was the data compiled?


-limitations of use


-reliability of provider

Data Quality Metadata

*largest # of questions here


*related sources and procedures used


-where did it come from


-what medium was it originally produced in


-at what scale was it digitized


-accuracy, positional and attribute


-how was the data checked


-does it seem logical and consistent

Spatial Organization Metadata

-internal structure of the data


-data model

Spatial Reference Metadata

-projection


-coordinate system


-datum

Entity and Attribute Metadata

-is this relevant to what I'm doing


-are attributes properly defined and explained


-is there other documentation I'll need to make sense of the fields


-what do all of these crazy values actually mean

Distribution Metadata

-format


-file size


-how do I access the data

Metadata Reference

-metadata creator


-contact information

Data Dictionary

-a catalog of all data held in a database, or a listing of all items giving data names and structures


-sometimes referred to as a Data Dictionary/Directory (DD/D)


-this pulls all of your metadata together for your entire project


Issues of Quality Assurance and Control

*QA/QC is something that needs to be integrated into all aspects of GIS database development


*look for:


-completeness


-validity


-logical consistency


-physical consistency


-referential integrity


-positional accuracy

Completeness

*data must adhere to database design standard



-topology


-table structure


-precision


-projection


-coordinate system

Validity

*attribute accuracy measure


*defined domain


-i.e. postal codes


*defined range


-i.e. range of Postal Code values for Vancouver

Logical Consistency

*measures the interaction between values of two or more functionally related attributes


*if the value of one of these attributes changes, then the other must also change

Physical Consistency

*measures the topological correctness and geographic extent of the database

Referential Integrity

*measures the associativity of related tables based on primary and foreign key relationships


*keys must exist, and must have associated data sets in tables with predefined rules for each table

Positional Accuracy

*measures how well each object's position matches reality


*errors introduced in multiple ways


*errors can be random, systematic, or cumulative


*positional accuracy must always be qualified because maps are representations of reality

Cartography

the art, science, and technology of making maps

Topographic Maps

*relief depiction (changes in elevation)


*contour lines - connect points of equal elevation


*contour intervals can also be shaded to add more visual cues

Thematic Maps

*show one type, or "theme", of information


i.e. population, wheat production, habitat suitability


*ways of showing data on thematic maps:


-isopleth


-choropleth


-cartogram


-dot


-proportional symbol

Isopleth Maps

*isopleths are lines that connect points of equal value (i.e. temp, ppt)


*contour lines on topographic maps connect points of equal elevation

Choropleth Maps

*political or other units (i.e. counties, voting districts) are shaded according to the value of the theme that is mapped


*usually, the map is divided into several classes of values (often 5-7)

Cartogram

-vary the SIZE of the areas being mapped to show changes in values

Dot Maps

*each dot represents a given number of the theme being mapped (i.e. bushels of corn, toxic waste sites)


*show detailed distributions (i.e. areas of relative concentration and sparseness)

Proportional Symbol Maps

*symbols (i.e. circles, cubes) are used to show the amount or volume of something at a location


*symbols vary in size according to the value being mapped

Symbolization

*quantitative data


-symbols should have visual progression corresponding to data values


-for polygons, use monochromatic colour ramp; same colour (hue), different saturation or value


-for point symbols, use different SIZES of the same symbol



*qualitative data


-for polygons, use different colours (polychromatic), or different fill patterns


-for point symbols, use same size of different symbols

Dimensions of quality

*use value to symbolize ordinal, interval or ratio data


*use a moderate level of saturation but avoid 100% saturation


*hue can be used to symbolize categorical variables



Human perceptual limitations

-max 12 colours


-max 7-8 shades