Remote Sensing Data Acquisition and Initial Processing
George Raber, Jason Tullis, and John Jensen

The following tutorial was extracted from a course titled "Geospatial Primer" that is being developed for the Institute for Advanced Education in Geospatial Sciences (IAEGS). It targets current or future management professionals who are interested in implementing (or have already implemented) geospatial technologies in the workplace. They may be responsible for supervising and guiding the work of the expert technicians and analysts who create information products using geospatial data. This course is also appropriate for students exploring the possibility of continued studies in geospatial technology who have not yet acquired the mathematical and statistical background required for a regular introductory course, such as high school-age students or undergraduate students with a non-science background. IAEGS currently offers several courses in the geospatial sciences, ranging from introductory to advanced topics, all delivered online. The project was funded by NASA through IAEGS. For more information about the entire course please contact [email protected].

Basic Principles of Remote Sensing

Electromagnetic Radiation
Remote sensing is the practice of measuring an object or a phenomenon without being in direct contact with it. It is non-intrusive. This requires the use of a sensor situated remotely from the target of interest. A sensor is the instrument (e.g. a camera) that takes the remote measurements. There are many different types of sensors, but almost all of them share something: what they "sense" (or take measurements of) is usually electromagnetic radiation, or light energy. Electromagnetic radiation (EMR) is a complex subject that is worthy of an entire physics course in its own right. Our discussion will be limited to remote sensing that makes use of EMR. Energy is defined as the ability to do work. EMR is energy propagated through space in the form of tiny energy packets called photons that exhibit both wave-like and particle-like properties. Unlike other modes of energy transport, such as conduction (e.g. heating a metal skillet) or convection (e.g. flying a hot air balloon), radiation (as in EMR) is capable of propagating through the vacuum of space. The speed of EMR in a vacuum (e.g. outer space) is approximately 300,000 kilometers per second (3 x 10^8 meters per second, or 186,000 miles per second). This is an extremely fast communications medium! Visible light, with its red, green, and blue colors that we see daily, is just one example of EMR, but there is a much larger spectrum of such energy. We often characterize this spectrum (or range) in terms of the wavelengths of different kinds of EMR. For a variety of reasons, some wavelengths of EMR are more commonly used in remote sensing than others.
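As a quick illustration of how wavelength characterizes EMR, the following sketch relates wavelength and frequency through the speed of light (frequency = speed / wavelength). The wavelength used is an arbitrary example from the visible range.

```python
# Illustrative sketch: relating wavelength and frequency of EMR
# (frequency = speed of light / wavelength). The wavelength below is an
# arbitrary example value in the visible range.

C = 3.0e8  # approximate speed of EMR in a vacuum, meters per second

def frequency_from_wavelength(wavelength_m):
    """Return the frequency (in hertz) of EMR with the given wavelength (in meters)."""
    return C / wavelength_m

green_light_m = 0.55e-6  # ~0.55 micrometers, a wavelength of visible green light
print(frequency_from_wavelength(green_light_m))  # roughly 5.5e14 Hz
```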
Recording Electromagnetic Radiation

There are two broad categories of sensor systems used in remote sensing — active and passive. Passive sensors rely on EMR from existing sources, most commonly the Sun. Because of its extreme temperatures and internal nuclear activity, this massive energy source emits a broad and continuous range of EMR (of which visible light is only a small fraction). EMR emitted from the Sun travels through the vacuum of space, interacts with the atmosphere, and reflects off objects and phenomena on Earth's surface. That EMR must again interact with the atmosphere before arriving at a remote sensor system in the air or in orbit. Some of the Sun's energy is absorbed by target objects (e.g. water, rocks, etc. on the surface of Earth), and these objects are often heated as a result. Absorbed energy can then be re-emitted at longer wavelengths - that is, the objects that absorbed the Sun's energy now become themselves the source of EMR. Certain passive sensor systems are designed to record portions of this emitted (as opposed to reflected) energy. Active sensors, on the other hand, themselves generate the EMR that they need to remotely sense objects or phenomena. The active sensors' EMR propagates from the sensor, interacts with the atmosphere, arrives at target objects (trees, rocks, buildings, etc.), interacts with these objects, and must be reflected in order to travel back through the atmosphere and be recorded at the sensor. Generally there are two types of active sensors:
- RADAR (radio detection and ranging) systems, which transmit and record microwave energy
- LiDAR (light detection and ranging) systems, which transmit and record pulses of laser light

Active sensors are often used to measure elevation and in applications that require a detailed understanding of the texture of the landscape.
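Both kinds of active sensor estimate distance by timing the round trip of the pulses they emit. The sketch below shows the basic range calculation under that simple timing model; the travel time is an invented example value, not data from any particular instrument.

```python
# Minimal sketch of active-sensor ranging: distance from round-trip pulse travel time.
# The travel time below is an arbitrary illustrative value.

C = 3.0e8  # approximate speed of an EMR pulse in a vacuum, meters per second

def range_from_travel_time(round_trip_seconds):
    """Return the one-way distance (meters) to a target from the round-trip time."""
    return C * round_trip_seconds / 2.0

# A pulse that returns after about 6.67 microseconds traveled to a target ~1,000 m away.
print(range_from_travel_time(6.67e-6))  # ~1000.5 meters
```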
Reflectance of Electromagnetic Energy
Remote sensing would be of little use if every object or phenomenon on Earth behaved in exactly the same way when interacting with EMR. Fortunately, different objects reflect portions of the electromagnetic spectrum with differing degrees of efficiency. Similarly, different objects emit previously absorbed EMR with differing degrees of efficiency. In the visible spectrum these differences in reflective efficiency account for the myriad of colors that we see. For example, green plants appear green because they reflect more green light than blue or red light. Plotting the spectral reflectance of a given object or phenomenon by wavelength yields a spectral reflectance curve, or spectral signature. This signature is the remote sensing key to distinguishing one type of target from another (e.g., the signature of a deciduous tree vs. that of an evergreen).
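To make the idea of a spectral signature concrete, the sketch below stores two hypothetical signatures as per-band reflectance values and finds the band in which they differ most. All of the reflectance numbers are invented for illustration.

```python
# Illustrative sketch: spectral signatures stored as per-band reflectance values.
# The reflectance numbers are hypothetical, chosen only to show the idea of
# comparing two targets band by band.

bands = ["blue", "green", "red", "near_infrared"]

deciduous = [0.05, 0.12, 0.07, 0.50]   # hypothetical deciduous-canopy reflectance
evergreen = [0.04, 0.09, 0.06, 0.35]   # hypothetical evergreen-canopy reflectance

# Find the band where the two signatures differ most -- the best single band
# for telling these two targets apart in this made-up example.
differences = [abs(d - e) for d, e in zip(deciduous, evergreen)]
best_band = bands[differences.index(max(differences))]
print(best_band)  # near_infrared
```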
Analog (Film-based) Sensors

Today we hear the terms analog and digital when referring to a wide range of electronic devices. In general, analog devices operate using dynamic physical properties (e.g., chemical changes) while digital devices operate using numbers (e.g., 0110111011). Remote sensor systems record patterns in incoming EMR using analog detectors. While all remote sensor systems have at least a partial complement of analog components, some sensor systems are completely analog. A prime example is the film-based aerial camera, like the one used by the U.S. military during the early 1960s to document Soviet nuclear missile deployments on the island of Cuba. The emulsion of silver halide crystals in film (the detector element) responds chemically to EMR exposure. Further analog processing is used to generate negative and/or positive transparencies and hardcopy photographs. In an analog aerial camera, the length of exposure to incoming EMR is controlled through a shutter that opens for just a fraction of a second. While the shutter is open, the incoming light is focused on the film plane at the back of the camera using a high-quality lens. With each exposure, the focused image of EMR causes a lasting chemical change to the exposed portion of film, and a new unexposed section of film is needed in order to repeat the process. A film-based camera used for remote sensing differs in a few ways from a typical camera you might purchase at a store. For one thing, the film itself is much larger (e.g., nine inches wide). For another, the camera's focal length (the distance between the film plane and the center of the lens) is much longer (e.g., 175 millimeters). Without delving into the science of photography in detail, these differences allow the aerial camera to take better, larger-scale photographs even from a moving platform. Most cameras designed for this purpose are metric, meaning that their internal dimensions have been precisely calibrated and are reported to the user. This is vital to the practice of photogrammetry, or taking detailed measurements from photographs and photographic maps.
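As a rough illustration of why focal length matters, the sketch below applies the standard approximation for the scale of a vertical aerial photograph over flat terrain (scale ≈ focal length / flying height above the terrain). The flying height is an invented example value.

```python
# Illustrative sketch: approximate scale of a vertical aerial photograph.
# Scale ~= focal_length / flying_height_above_terrain (same units for both).
# The numbers below are hypothetical example values.

focal_length_m = 0.175        # a 175 mm lens, as in the example above
flying_height_m = 3500.0      # hypothetical flying height above the terrain

scale = focal_length_m / flying_height_m
print(f"Photo scale is approximately 1:{round(1 / scale):,}")  # ~1:20,000
```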
Digital Sensors

Digital sensors also measure patterns in incoming EMR using analog detectors. However, measurements of EMR taken by each detector element (sometimes there are thousands of these) are recorded, not using an analog medium (e.g., film), but using numbers. These measurements are digitized through a process called analog-to-digital (A-to-D) conversion. Possible values fall within a pre-defined range, such as 0 to 255. Each recorded numerical value is then stored on some kind of digital medium, such as a hard disk, as part of a raster dataset. The value in each raster cell represents the amount of energy received at the sensor from a particular circular area (the instantaneous-field-of-view, or IFOV) on the ground. Digital sensors make use of the same basic technology as a computer document scanner or a digital camera. In fact, specialized digital cameras are often used to acquire remote sensor data, and professional-grade document scanners are often used to convert analog (film-based) remote sensing data to digital data. The detectors in a digital sensor can be arranged in a number of different ways. One method utilizes a single detector for each spectral band. A scanning mirror is then used to capture EMR at each IFOV along a scan line. The forward motion of the sensor allows for additional scan lines and therefore a two-dimensional image. This type of instrument is often referred to as a scanning mirror sensor.
A second method is to have a linear array of detectors for each band. Each detector in an array (a single linear array can have thousands) records EMR for a single IFOV in the cross-track dimension (i.e., perpendicular to the direction of flight). The forward motion of the sensor again allows for repeated measurements and two-dimensional imagery. This type of sensor system is often called a linear array push-broom scanner. Push-broom systems have several advantages over scanning mirror sensors. They have fewer moving parts, so they are generally more durable. Also, the process of assigning coordinates to push-broom data is much easier.
A third digital sensor configuration is the one that most resembles the operation of analog film-based systems. In this case, an entire area array of detectors is placed at the back of the sensor. Energy is focused through a lens onto this bank of detectors. These types of sensors are called digital cameras, or area array sensors. They are often used in applications similar to those of film-based cameras.
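The sketch below illustrates, in simplified form, the A-to-D conversion described earlier: continuous detector measurements are scaled into integer digital numbers within a fixed range (here 8 bits, or 0 to 255) and stored as a small rows x columns x bands raster. All values, including the sensor's measurable range, are invented.

```python
# Simplified sketch of analog-to-digital (A-to-D) conversion for a digital sensor.
# Continuous detector measurements are scaled into integer digital numbers (DNs)
# within a pre-defined range -- here 8 bits, i.e. 0 to 255. All values are invented.
import numpy as np

bits = 8
levels = 2 ** bits                     # 256 discrete values for an 8-bit sensor

# Hypothetical analog measurements (e.g., radiance) for a tiny 2 x 2 pixel,
# 3-band scene: rows x columns x bands.
radiance = np.array([[[0.02, 0.10, 0.40], [0.05, 0.15, 0.35]],
                     [[0.01, 0.08, 0.45], [0.03, 0.12, 0.30]]])

# Scale the sensor's measurable range (assumed 0.0-0.5 here) into 0-255 integers.
sensor_min, sensor_max = 0.0, 0.5
dn = np.round((radiance - sensor_min) / (sensor_max - sensor_min) * (levels - 1))
dn = dn.astype(np.uint8)

print(dn.shape)      # (2, 2, 3): rows, columns, bands -- a small image "cube"
print(dn[0, 0, :])   # the three band values recorded for the upper-left pixel
```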
Types of Resolution

Resolution quantifies how distinguishable the individual parts of an object or phenomenon are. When discussing the specifications of remote sensor systems, we generally speak of four different types of resolution.

Temporal Resolution

Temporal resolution is how often a sensor visits, or can visit, a particular site to collect data. This is important because many applications depend on observing change in phenomena over time. A remote sensing instrument is mounted on a platform — such as a satellite, an aircraft, a hot air balloon, or even a kite. The platform on which a sensor is mounted is the greatest determinant of that sensor's temporal resolution. Some satellites orbit Earth so that they pass over a given location at about the same local Sun time on each visit - these are in Sun-synchronous orbit. Other satellites maintain a fixed position above the rotating Earth - these are in geo-synchronous orbit. In either case, these satellites have a regular and predictable temporal resolution (e.g., every 16 days). Some satellite-based sensors are more flexible than others because of their ability to point at various targets near their default field-of-view. These more flexible sensors may have a temporal resolution range (e.g. 2-3 days). Sensors mounted on aircraft fly ad-hoc or on-demand missions with less predictable but more flexible temporal resolution (e.g., every hour, if funds are available).

Spatial Resolution
Spatial resolution describes the size of the individual measurements taken by the remote sensor system. This concept is closely related to scale. With an analog sensor, such as film, the spatial resolution is commonly expressed in the same terms as the scale (e.g., 1:500). Since a digital sensor records information in raster format, the spatial resolution is the cell size (e.g., 3 x 3 meters) in ground units.
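For a digital sensor, the ground cell size at nadir can be approximated from the detector's angular IFOV and the sensor's altitude above the terrain. The sketch below uses the small-angle approximation with invented values for both quantities.

```python
# Illustrative sketch: approximate ground cell size from a sensor's IFOV and altitude.
# Uses the small-angle approximation (cell size ~= altitude * IFOV in radians) for a
# pixel at nadir; both input values below are hypothetical.

ifov_radians = 4.3e-6        # hypothetical angular IFOV of a single detector
altitude_m = 700_000.0       # hypothetical sensor altitude above the terrain

cell_size_m = altitude_m * ifov_radians
print(f"Ground cell size is roughly {cell_size_m:.1f} x {cell_size_m:.1f} meters")  # ~3 x 3 m
```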
Spectral Resolution

Spectral resolution describes a sensor system's ability to distinguish different portions of the EMR spectrum. Some sensors are sensitive to visible light only, while others can also capture near-infrared energy. The portions (ranges) of the spectrum to which an instrument is sensitive are referred to as its bands. A sensor can have multiple bands, and bands can be of varying widths. Spectral resolution refers both to the number and the width of the bands for a given sensor. A panchromatic band is a wide band that encompasses a large spectral range (often the entire visible spectrum). Commonly we call film that is sensitive to the entire visible range "black and white" film, because we often print images from this sort of film in grayscale. However, there are analog (film-based) and digital sensors with wide panchromatic bands that also encompass the near-infrared portion of the spectrum. When a sensor records only a few portions of the spectrum (i.e., contains only a few, relatively wide bands), it is said to be a multispectral system. A multispectral sensor might have two or three bands in the visible range (i.e. red, green, and blue) and it might also have a few near-infrared or middle-infrared bands. Typical multispectral systems have between 4 and 10 bands. Hyperspectral sensors have a large number of relatively narrow bands. By definition, hyperspectral sensors have a higher spectral resolution than multispectral sensors. Commonly a sensor is considered hyperspectral when it has at least 20 or 30 bands; many such sensors have hundreds of bands. In general, a sensor with more spectral bands has a greater ability to distinguish between two objects with similar spectral properties. Each band in a digital dataset can be thought of as an individual raster layer. Visualize an image in three dimensions, with rows, columns, and bands filling the x, y, and z coordinates of a cube.

Radiometric Resolution

Radiometric resolution describes the number of unique values that can be recorded by a sensor system when measuring reflected or emitted EMR. In a digital system this is easily quantified as a number (e.g., 256, 2,048, etc.). Since the digital numbers in remote sensor data are stored in a computer, they are often expressed in terms of how many bits (or powers of two) are used to store that variety of numbers (e.g., 8-bit, 11-bit, etc.). An 8-bit sensor stores a value for each measurement in an integer range from 0 to 255; this range has 2^8 (or 256) discrete values. With analog, or film-based, systems it is the quality of the film that determines its radiometric resolution.

Turning Remote Sensing Data into Geospatial Data

In situ Data Collection

Remote sensing applications are rarely successful without at least some direct measurements being taken within the study area. Often these measurements are referred to as "ground truth." However, "truth" is really a misnomer, since there is always at least some error in measurements, even if they are taken directly. "Ground reference" would be a better descriptor, but what if the measurements aren't taken on the ground? A correct term for measurements taken directly (as opposed to remote measurements) is in situ data collection. Several types of in situ measurements may be necessary for a given project or application. Almost all remote sensing projects require some amount of in situ data collection in order to perform geometric and radiometric calibration.
Additional in situ data may be required to create reference maps of spatial variables, including biophysical properties.

Geometric Correction
When remote sensor data is initially collected it is not yet geospatial data. To make the transition to geospatial data, geometric correction must be applied in order to place the data into a real-world coordinate system. Beyond having no real-world coordinates assigned, the raw data also contains geometric distortion. This means that the objects and phenomena visible in the data are not all displaced by the same amount relative to a desired coordinate system. Distortion generally increases away from the point (or points) in the data that were acquired at nadir (looking straight down). Distortion therefore differs depending on the sensor configuration (e.g., scanning mirror sensors vs. area array digital cameras). Another source of distortion is variation in the terrain and in objects on the terrain: tall objects and steeply sloping terrain lead to more distortion than flat objects on flat terrain. A basic method for geometric correction involves the use of a GPS receiver in the field. GPS measurements are taken at locations that are also easily identifiable in the imagery. These types of locations will vary according to the spatial resolution of the remote sensor data. Ideally, the smallest features that can be visualized in the data should be located in the field and their positions surveyed. These features should also be permanently situated. The recorded locations of these features in the study area are collectively known as control points. Road intersections typically make good control points. Features above the ground surface do not make good control points because their positions in the imagery are displaced by relief. Control points should be collected at locations spaced evenly throughout the remote sensor image; in fact, the relative location of the control points is at least as important as the number of points. Once enough control points have been collected, they can be used to adjust the data to its approximate spatial position within a coordinate system. Most geospatial software packages provide an interface for doing this. As part of the process, the software package will typically report a number (often a root-mean-square error, or RMSE) indicating the degree to which the desired transformation (represented by the control points) was successfully implemented. The result depends in part on the amount of distortion present in the raw data. Once the remote sensor data has undergone this process it is said to be georectified.
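A common first-order approach is to fit an affine transformation from image coordinates to map coordinates by least squares and report the RMSE of the fit. The sketch below shows that idea with invented control point coordinates; real workflows typically use more points and may use higher-order or rigorous sensor models.

```python
# Illustrative sketch: fitting a first-order (affine) transformation from image
# (column, row) coordinates to map (x, y) coordinates using control points, then
# reporting the root-mean-square error (RMSE). All coordinates are invented.
import numpy as np

# Control points: pixel location in the raw image and the surveyed map coordinate.
image_col_row = np.array([[10, 15], [480, 22], [460, 505], [25, 490], [250, 260]])
map_x_y = np.array([[500010.0, 4200985.0], [500480.0, 4200975.0],
                    [500462.0, 4200495.0], [500020.0, 4200510.0],
                    [500248.0, 4200742.0]])

# Solve map_x = a0 + a1*col + a2*row (and likewise for map_y) by least squares.
design = np.column_stack([np.ones(len(image_col_row)), image_col_row])
coeffs, _, _, _ = np.linalg.lstsq(design, map_x_y, rcond=None)

predicted = design @ coeffs
rmse = np.sqrt(np.mean(np.sum((predicted - map_x_y) ** 2, axis=1)))
print(f"RMSE of the fitted transformation: {rmse:.2f} map units")
```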
Photogrammetry

The type of correction discussed above is sufficient for many applications. However, in order to create an image that is free from all major distortions, the terrain- and sensor-induced distortions must be accounted for explicitly. This is done by using a combination of GPS control points, a digital elevation model (DEM), and a detailed report of the distortion present in the sensor system. The details of this process are beyond the scope of this tutorial. When data has been corrected in this manner it is said to be orthorectified. In an orthorectified image, all points are in their proper x, y position, aligned as they would appear if one were looking straight down at them. The practice of orthorectification is part of photogrammetry — the art of taking direct measurements from photos and other remotely sensed data. Measurements derived using photogrammetric techniques include the height of objects on the terrain, their x, y location, and the ground distance between objects.

Radiometric Correction

In addition to geometric distortion, EMR that is received by the sensor contains radiometric distortions. The source of these distortions is primarily the atmosphere and its dynamic constituents. If there were no atmosphere with which to contend, the EMR recorded by the sensor would be a much more faithful representation of the EMR reflected or emitted from the target object or phenomenon. However, along the path between the target and the sensor, EMR must interact twice with the atmosphere. Some of this energy is scattered and some of it is absorbed. Atmospheric constituents such as water vapor and pollution vary across space and time, and the resulting distortions make it particularly difficult to compare datasets collected at different times (or even, sometimes, different areas of the same image). There are various ways to minimize this distortion. Between-date radiometric differences can be minimized if the datasets are collected at similar times (of the day and of the year) so that the Sun's position is held roughly constant. Also, acquiring data on a clear day will minimize the amount of water vapor and cloud cover. Even after taking these measures, many applications require additional radiometric correction to account for differences and distortions in the EMR values recorded at the sensor. This can be done in a few different ways, each with some degree of difficulty and some level of uncertainty in the results. Three examples, of the many that exist, follow. One simple radiometric correction technique is to rescale all of the pixel brightness values in an image by identifying one of the darkest pixels and one of the brightest pixels. The darkest pixel is re-assigned a value of 0, and the brightest a value of 255. The intermediate values are then rescaled to fit evenly in between. Although this method is very easy and requires no additional input data, it is the least reliable. This technique is known as a min-max contrast stretch. A second, simple method is referred to as empirical line calibration. In this method, several in situ radiometric measurements are taken over various objects concurrently with the acquisition of the remote sensor data. The instrument used for these measurements is called a radiometer. Unlike the remote sensor system, the radiometer takes its measurements in situ, with almost no atmosphere with which to contend. The data collected using the radiometer is used to develop a simple linear mathematical function to predict what the radiometric values should be over the entire image. A third method is more complex than either of the previous two. It relies on collecting explicit information on the environmental conditions at the time of the remote sensor data acquisition. This information might include a detailed profile of temperature and humidity within the atmospheric column, the Sun-Earth geometry, and the position of the sensor with respect to each pixel. This method is actually a group of methods, each requiring different information. Automated computer algorithms are then used to process the remote sensor data along with the ancillary data to produce a radiometrically corrected image.
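The min-max rescaling described above is easy to express directly; the sketch below applies it to a tiny band of invented brightness values.

```python
# Illustrative sketch of the min-max rescaling described above: the darkest pixel
# maps to 0, the brightest to 255, and everything else is scaled in between.
# The pixel values are invented.
import numpy as np

band = np.array([[37, 52, 61],
                 [45, 88, 74],
                 [40, 66, 93]], dtype=np.float64)   # hypothetical raw brightness values

rescaled = (band - band.min()) / (band.max() - band.min()) * 255.0
rescaled = np.round(rescaled).astype(np.uint8)

print(rescaled)   # the darkest input (37) becomes 0, the brightest (93) becomes 255
```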
Visual Image Interpretation

With the power of the human visual system, much (if not most) information in remote sensor data can be acquired simply by visual inspection. Examples include the spatial extent of a lake, the location of roads, and the number of houses in a community. These are all variables that can be "seen" on the terrain and interpreted directly by visualizing the imagery. In these cases a trained image analyst uses a combination of real-world experience and heuristic rules of thumb to interpret what is seen in the image and to determine its significance. The process of image interpretation can be broken down into its fundamental elements, including tone (or color), size, shape, texture, pattern, shadow, height, and site and association.
Color Composites

At this point it is useful to discuss how visible light mixes to create color. White light from the Sun is composed of EMR from all wavelengths within the visible spectrum. We can see this clearly when white light passes through a prism and separates into a rainbow. Combining these colors of the rainbow back together yields white light, while adding back only some portions of that light results in a different color. One can create any color by mixing the three primary colors, red, green, and blue (additive color theory). Each pixel in a computer screen is actually produced using three different light "guns," one for each of these primary colors. These guns respond to commands from the computer to display at various intensities. The addition of the EMR emitted by these three guns determines what color the user perceives. The initial visualization of remote sensor data is an important aspect of an effective interpretation effort. Digital remote sensor data is displayed by assigning recorded brightness values to the three color guns mentioned above. When the red, green, and blue bands in the visible spectrum are assigned to their respective red, green, and blue color guns, the displayed result is said to be a true color composite. However, remote sensor systems often measure EMR outside the visible range, requiring the creation of false color composites. For example, near-infrared bands are often displayed using the red color gun. When looking at a false color composite image, special care must be taken to interpret it correctly.

Automated Classification

Although manual image interpretation is valuable and often provides highly detailed and accurate information, many applications require that objects on the ground be classified faster and more cost-effectively. In these cases it is necessary to automatically interpret, or classify, the image using computer algorithms. There are primarily two different ways to approach this goal. Both are based on the simple concept that similar objects or phenomena have similar spectral reflectance properties. The first method is referred to as an unsupervised classification. In this method, the computer algorithm operates without any prior knowledge of the scene. Pixels are grouped together based on the similarity of their spectral characteristics. These clusters of similar pixels, representing unique spectral classes, are then reported to the user, who is responsible for transforming them into information classes. This process can be aided with in situ data and/or manual image interpretation. Supervised classification, the second method, requires that the user have some knowledge of the actual objects and phenomena within the image. This knowledge could have been acquired through in situ data collection or manual image interpretation. The user specifies the classes (e.g. water, forest, crops) and then instructs, or trains, a computer algorithm by feeding it the exact locations of several training examples for each class throughout the image. The computer algorithm examines the properties of these areas and then seeks similar regions throughout the image, eventually classifying the entire image. Spectral data is often the primary data source considered in the process, although recently more effort has been made to incorporate more advanced aspects, such as object shape and relative position.
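One simple way to realize supervised classification is a minimum-distance-to-means classifier: class means are computed from the analyst's training pixels, and every pixel is assigned to the nearest mean in spectral space. The sketch below illustrates this with invented band values and class names; it stands in for the many more sophisticated algorithms in actual use.

```python
# Illustrative sketch of a simple supervised classification: a minimum-distance-
# to-means classifier. Training statistics are built from user-identified example
# pixels, then every pixel is assigned to the nearest class mean in spectral space.
# All band values and class names are invented.
import numpy as np

# A tiny 3-band image, flattened to (number_of_pixels, number_of_bands).
pixels = np.array([[30, 90, 200], [35, 85, 190], [120, 140, 60],
                   [115, 150, 55], [60, 70, 80], [58, 72, 85]], dtype=float)

# Training examples the analyst located in the image (e.g., via in situ visits).
training = {
    "water":  np.array([[28, 92, 205], [33, 88, 195]], dtype=float),
    "forest": np.array([[118, 145, 58], [122, 148, 62]], dtype=float),
    "urban":  np.array([[61, 69, 82], [57, 74, 83]], dtype=float),
}

class_names = list(training)
means = np.array([training[name].mean(axis=0) for name in class_names])

# Assign each pixel to the class whose mean is closest in spectral space.
distances = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
labels = [class_names[i] for i in distances.argmin(axis=1)]
print(labels)   # ['water', 'water', 'forest', 'forest', 'urban', 'urban']
```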
Mapping Spatial Variables

There are certain properties of objects and phenomena that can only be measured directly, in contact with the object. For example, it is impossible to directly measure the live biomass (the amount of living matter) present in a stand of vegetation without harvesting the vegetation, processing it to remove water and foreign substances, and then weighing it. However, it is possible for a trained person to estimate the biomass present in a particular stand of vegetation without coming in direct contact with it. In a similar manner, remote sensing provides a way to quantify what is "seen" in an image and estimate biophysical variables such as biomass. Mapping biomass (a biophysical variable) requires taking some in situ measurements of the vegetation of interest. These measurements are used to build a mathematical model relating the quantity of biomass to the spectral reflectance values in the remote sensor data. An example of this type of equation might be:

Biomass = Bias + (Constant A x Near-infrared) + (Constant B x Red)

In addition to using the band values directly (in this case near-infrared and red), it has been shown that specific mathematical combinations of band values are effective for mapping various phenomena. For example, the Normalized Difference Vegetation Index (NDVI) is often highly correlated with a number of vegetation properties, including green biomass. There are many other band indices for use in vegetation, geologic, and other application areas.
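The sketch below computes NDVI, defined as (near-infrared - red) / (near-infrared + red), and fits a model of the form shown above by least squares using a few hypothetical in situ plots. All reflectance and biomass values are invented.

```python
# Illustrative sketch: computing NDVI per pixel and fitting the simple biomass model
# shown above (Biomass = Bias + A x Near-infrared + B x Red) from hypothetical
# in situ plots. All reflectance and biomass values are invented.
import numpy as np

# Hypothetical red and near-infrared reflectance for a tiny 2 x 2 image.
red = np.array([[0.08, 0.10], [0.05, 0.12]])
nir = np.array([[0.45, 0.40], [0.55, 0.30]])

ndvi = (nir - red) / (nir + red)
print(ndvi)   # higher values generally indicate denser green vegetation

# Hypothetical in situ plots: measured biomass plus the reflectance at those plots.
plot_red = np.array([0.06, 0.09, 0.11, 0.07])
plot_nir = np.array([0.52, 0.42, 0.33, 0.48])
plot_biomass = np.array([8.1, 6.0, 4.2, 7.3])       # e.g., kg per square meter

design = np.column_stack([np.ones_like(plot_red), plot_nir, plot_red])
(bias, coef_nir, coef_red), _, _, _ = np.linalg.lstsq(design, plot_biomass, rcond=None)

# Apply the fitted model to every pixel to map estimated biomass.
biomass_map = bias + coef_nir * nir + coef_red * red
print(biomass_map)
```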
Creating Elevation Data from Remote Sensing Data

Elevation is another example of a continuous variable that can be remotely sensed. This can be done in several different ways. One is through collecting stereoscopic pairs of images. In each pair, the images partially overlap (e.g. by 60 percent). The fact that the two images are acquired from different positions allows us to extract 3D (2D plus height) information from the overlapping portion. This method is a branch of photogrammetry, and its operating principle is closely related to how our eyes perceive depth by combining the different images from our two eyes. In fact, stereoscopic pairs of remote sensor images can be viewed through a device called a stereoscope, enabling the user to see the terrain in 3D. Today there are specialized computer software packages that allow users to make quantitative elevation calculations directly from stereoscopic imagery. When a large number of these measurements are taken, an elevation surface (or DEM) can be derived. A second, related method is radar interferometry. Differences between radar signals acquired over the same area from different positions can be used to create an elevation surface. Recently, the Space Shuttle carried a radar instrument for this purpose and mapped the elevation of much of the world (including the tropics, which are often hidden beneath cloud cover) at a spatial resolution of approximately 30 x 30 meters. This elevation product is called SRTM (Shuttle Radar Topography Mission) data. A third method of deriving elevation data is to use LiDAR. Most of the time, the collection of LiDAR data results in a series of x, y, z points. Once the points that have reflected off the ground are separated from those that reflected off other objects above the ground, a digital surface representing the ground can be created.

About the Authors

Dr. George Raber is an Assistant Professor at The University of Southern Mississippi in the Department of Geography and Geology. He teaches GIS and remote sensing courses at Stennis Space Center as part of the Masters in Geospatial Information Technology program that Southern Mississippi offers on site. Dr. Jason Tullis is an Assistant Professor at the University of Arkansas in the Department of Geosciences. He is also associated with the Center for Advanced Spatial Technologies at the University of Arkansas. Dr. John Jensen is a Carolina Distinguished Professor at the University of South Carolina.