The Concept of Remote Sensing
If you have heard the term "remote sensing" before, you may have asked, "what does it mean?" It is a rather simple, familiar activity that we all do as a matter of daily life, but one that gets complicated when we increase the scale. As you view the screen of your computer monitor, you are actively engaged in remote sensing.
A physical quantity (light) emanates from that screen, which is a source of radiation. The radiated light passes over a distance, and thus is "remote" to some extent, until it encounters and is captured by a sensor (your eyes). Each eye sends a signal to a processor (your brain) which records the data and interprets this into information. Several of the human senses gather their awareness of the external world almost entirely by perceiving a variety of signals, either emitted or reflected, actively or passively, from objects that transmit this information in waves or pulses. Thus, one hears disturbances in the atmosphere carried as sound waves, experiences sensations such as heat (either through direct contact or as radiant energy), reacts to chemical signals from food through taste and smell, is cognizant of certain material properties such as roughness through touch, and recognizes shapes, colors, and relative positions of exterior objects and classes of materials by means of seeing visible light issuing from them. In the previous sentence, all sensations that are not received through direct contact are remotely sensed.
I-1 Do you know how the human eye acts to form images?
However, in practice we do not usually think of our bodily senses as remote sensors in the way we use the term technically. A formal and comprehensive definition of applied remote sensing, as it is customarily formulated to include determination of geophysical parameters, is:
The acquisition and measurement of data/information on some property(ies) of a phenomenon, object, or material by a recording device not in physical, intimate contact with the feature(s) under surveillance; techniques involve amassing knowledge pertinent to environments by measuring force fields, electromagnetic radiation, or acoustic energy employing cameras, radiometers and scanners, lasers, radio frequency receivers, radar systems, sonar, thermal devices, seismographs, magnetometers, gravimeters, scintillometers, and other instruments.
I-2 To help remember the principal ideas within this definition, make a list of its key words.
This is a rather lengthy and all-inclusive definition. Perhaps two more simplified definitions are in order: The first, more general, includes in the term this idea: Remote Sensing involves gathering data and information about the physical "world" by detecting and measuring radiation, particles, and fields associated with objects located beyond the immediate vicinity of the sensor device(s). The second is more restricted but is pertinent to most of the subject matter of this Tutorial: Remote Sensing is a technology for sampling electromagnetic radiation to acquire and interpret non-immediate geospatial data from which to extract information about features, objects, and classes on the Earth's land surface, oceans, and atmosphere (and, where applicable, on the exteriors of other bodies in the solar system, or, in the broadest framework, celestial bodies such as stars and galaxies).
I-3 What is the meaning of "geospatial"? Are there any differences in meaning among the terms "features", "objects", and "classes"?
Or, try this variation: Applied terrestrial Remote Sensing involves detecting and measuring the electromagnetic energy (usually photons) emanating from distant objects made of various materials, so that the user can identify and categorize these objects by class or type, substance, and spatial distribution. Generally, this more conventional description of remote sensing has a specific criterion by which its products point to this use of the term: images much like photos are a main output of the sensed surfaces of the objects of interest. However, the data often can also be shown as "maps" and, to a lesser extent, "graphs", and in this regard are like the common data displays resulting from geophysical remote sensing. As applied to meteorological remote sensing, both images (e.g., clouds) and maps (e.g., temperature variations) can result; atmospheric studies (especially of the gases in the air, and their properties) can be claimed by both traditionalists and geophysicists.
All of these statements are valid and, taken together, should give you a reasonable insight into the meaning and use of the term "Remote Sensing", but its precise meaning depends on the context in which it is used.
Thus, some technical purists arbitrarily stretch the scope or sphere of remote sensing to include other measurements of physical properties from sources "at a distance" that are more properly included in the general term "geophysics". This would take in such geophysical methods as seismic, magnetic, gravitational, acoustical, and nuclear decay radiation surveys. Magnetic and gravitational measurements respond to variations in field forces, so these can be carried out from satellites. Remote sensing, as defined in this context, would be a subset within the branch of science known as Geophysics. However, practitioners of remote sensing, in its narrower meaning, tend to exclude these other areas of geophysics from their understanding of the meaning implicit in the term.
Still, space systems - mostly on satellites - have made enormous contributions to regional and global geophysical surveys. This is because it is very difficult and costly to conduct ground and aerial surveys over large areas and then to coordinate the individual surveys by joining them together. To obtain coherent gravity and magnetic data sets on a world scale, operating from the global perspective afforded by orbiting satellites is the only reasonable way to provide total coverage.
One could argue that this subject deserves a Section of its own, but in the remainder of this Tutorial we choose to confine our attention to those systems that produce data by measuring in the electromagnetic radiation (EMR) spectrum (principally in the Visible, Infrared, and Radio regions). Nevertheless, just to "peek at" the kinds of non-EMR geophysical data being collected from space, we will take a "detour" from the main theme of this Section by providing on the next page several examples of the use of satellite instruments to obtain information on particles and fields around the Earth; in Sections 19 and 20 (Planets and Cosmology) there will also be some illustrations of several types of geophysical measurements.
Multi-spectral scanners measure reflected EMR in selected wavelength bands. Satellite-based scanners have been restricted to detecting EMR in a small number of bands. As a result, the bands have been carefully selected to suit particular objectives. This situation is changing with a new generation of "hyper-spectral" scanners which are capable of detecting much larger numbers of narrow wavelength bands.
Multi-spectral scanners are of two types: whisk broom and push broom scanners. Whisk broom scanners use an oscillating mechanism to deflect the sensor back and forth along a scan line that runs perpendicular to the line of flight. Push broom scanners do not require an oscillating mechanism since they use an array of sensors that detects an entire scan line instantaneously.
Multi-spectral scanners record EMR in analogue format as an electrical signal that varies proportionally to the brightness of a pixel in the sampled wavelength band. However, the analogue signal is converted into a digital number (DN) before transmission to an Earth-based receiving station. The digital numbers represent gray scale brightness values. Gray scales can have 64, 128 or 256 gray levels depending on whether 6-bit, 7-bit or 8-bit data formats are used.
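The quantization step can be sketched in a few lines; the 64, 128 and 256 gray levels follow directly from the bit depth (the signal values below are invented for illustration):

```python
import numpy as np

def to_digital_number(signal, bits=8):
    """Quantize a normalized analogue signal (0.0-1.0) into integer DNs.

    The number of gray levels is 2**bits: 64 for 6-bit, 128 for 7-bit,
    256 for 8-bit data formats.
    """
    levels = 2 ** bits
    dn = np.clip(np.round(signal * (levels - 1)), 0, levels - 1)
    return dn.astype(np.uint16)

# Invented analogue voltages already normalized to the 0.0-1.0 range
voltages = np.array([0.0, 0.5, 1.0])
dn_8bit = to_digital_number(voltages, bits=8)   # values on a 0-255 gray scale
dn_6bit = to_digital_number(voltages, bits=6)   # values on a 0-63 gray scale
```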
Multi-spectral data use a variety of data formats, including BIP (band interleaved by pixel), BIL (band interleaved by line) and BSQ (band sequential). These formats identify the ordering of data in the image files. In BIP format, the brightness values for all bands for a given pixel are listed, followed by the values for the next pixel. In BIL format, the data are organized into complete scan lines. The band 1 values are listed for the first scan line, followed by the band 2 values for the same scan. After the last band values for the first scan line, the pattern repeats for the second scan line. BSQ format presents a complete band 1 image, followed by a band 2 image, etc.
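The three orderings can be illustrated with a tiny invented image (2 bands, 2 scan lines, 3 pixels per line); only the axis order of the flattened data differs:

```python
import numpy as np

# Invented image stored as (band, line, pixel); DNs 0-11 for easy tracing.
image = np.arange(12).reshape(2, 2, 3)

# BSQ: complete band 1 image, followed by the complete band 2 image.
bsq = image.flatten()

# BIL: band 1 values for scan line 1, then band 2 values for scan line 1,
# then the same pattern for scan line 2.
bil = image.transpose(1, 0, 2).flatten()

# BIP: all band values for pixel 1, then all band values for pixel 2, etc.
bip = image.transpose(1, 2, 0).flatten()
```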
Image Resolution
Multi-spectral data are available at a variety of resolutions. However, the term resolution has four distinct meanings. Spatial resolution refers to the size of the instantaneous field of view (IFOV) of the sensor. This is equivalent to the size of a pixel on the ground and determines the amount of spatial detail contained in the image. Higher spatial resolution imagery allows smaller objects to be detected. Radiometric resolution refers to the number of brightness levels in the image. This can range from two levels (black and white), which can be obtained using high contrast photographic film, to 256 gray levels for an 8-bit image. Higher radiometric resolution produces a continuous tone image and may make it easier to detect different types of features in the image. Spectral resolution refers to the wavelength intervals detected by the sensors. The narrower the wavelength bands detected, the higher the spectral resolution of the sensor. Temporal resolution refers to the time interval between repeat coverage of a given area. Although this varies depending on the region of interest, repeat coverage from satellite imagery is usually available within 16 days.
As is the case with airphotos, visual interpretation of satellite imagery can be used to identify features and interpret ground conditions. This can be accomplished by preparing hard copy prints of the imagery or by displaying selected image bands on a computer screen. Computer monitors use three electron guns - blue, green and red - to excite screen phosphors to display an image. By assigning a different image band to each gun, we can create a false colour composite image to aid in visual interpretation. The selection of bands to be displayed depends on the objectives of the analysis and the types of features or conditions we are trying to identify.
However, much interpretation of satellite data is done using digital image analysis techniques. These can be grouped into four broad classes of operations: image rectification, image enhancement, image classification and change detection.
Image Rectification
Image rectification is designed to remove distortion from the satellite image. Radiometric correction attempts to remove atmospheric effects such as haze from the image. This may not be required if you are working with a single image but is important if you need to piece together multiple images to obtain coverage of your study area or if you are doing certain types of temporal analysis that require use of multiple images at different times. By removing haze from the image, we can standardize the brightness values across the set of images, making subsequent analysis easier and more reliable.
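One common simple approach to haze removal (a sketch of one technique, not necessarily the method any particular system uses) is dark-object subtraction: the darkest pixel in a band is assumed to be truly black, so its positive DN is attributed to atmospheric haze and subtracted from every pixel:

```python
import numpy as np

def dark_object_subtraction(band):
    """Subtract the darkest DN in the band, attributing it to atmospheric haze.

    After correction the darkest pixel has DN 0, which helps standardize
    brightness values across a set of images.
    """
    haze = band.min()
    return band - haze

# Invented DNs for a small hazy band (no DN reaches 0)
raw = np.array([[12, 60], [35, 110]])
corrected = dark_object_subtraction(raw)
```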
Geometric correction attempts to adjust for the effect of the Earth's rotation on its axis during image acquisition and to register the image to a known co-ordinate system such as UTM. Geometric correction is accomplished by resampling the image. This process involves three stages. The first step is to define a set of control points that are easy to find on the image and have known UTM co-ordinates. Usually the UTM co-ordinates are obtained from inspection of a topographic map in either digital or paper form. Once the control points have been identified, we can use their image and UTM co-ordinates to define a transformation that will convert the image co-ordinates into UTM. The final step is to recalculate the DN values for pixels in the transformed output image based on pixel values in the input image. This can be done using nearest neighbour interpolation, bi-linear interpolation or cubic convolution; the spatial resolution of the image can also be modified in this process. Nearest neighbour assigns the brightness value of the nearest pixel in the input image to the pixel in the output image. This method is best suited for use with classified image data. Bi-linear interpolation estimates the output pixel value by interpolating between the centre points of input pixels that overlap the output pixel. This method works well with continuous surfaces that are relatively smooth. Cubic convolution assigns the output image pixel a weighted average of the input pixels within a rectangular window centred on the output pixel. This has the effect of smoothing the output image and removing unwanted noise. The resampled image can be combined with other data sets in the same co-ordinate system.
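The nearest neighbour case can be sketched as follows, assuming a simple scale-and-offset inverse mapping in place of a transformation fitted from control points (the input image and coefficients are invented):

```python
import numpy as np

def nearest_neighbour_resample(src, out_shape, scale, offset):
    """Copy, for each output pixel, the DN of the nearest input pixel."""
    out_rows, out_cols = np.indices(out_shape)
    # Inverse mapping from output grid positions back to input image
    # coordinates (a real transformation would be fitted from control points).
    in_rows = np.clip(np.round(out_rows * scale + offset).astype(int),
                      0, src.shape[0] - 1)
    in_cols = np.clip(np.round(out_cols * scale + offset).astype(int),
                      0, src.shape[1] - 1)
    return src[in_rows, in_cols]

src = np.arange(16).reshape(4, 4)                     # invented 4x4 input image
out = nearest_neighbour_resample(src, (2, 2), scale=2.0, offset=0.0)
```

Because each output DN is copied unchanged from an input pixel, this method never invents intermediate brightness values, which is why it is preferred for classified data.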
Image Enhancement
Numerous image enhancement techniques are available. In general, these techniques produce a new image that can be used in subsequent analysis.
Thresholding divides an image into two classes. For example, in the near IR band, water has low reflectance values while land areas, either vegetated or bare ground, have higher reflectance values. By examining a frequency distribution of the brightness values, we may be able to determine that water bodies have brightness values less than 40 (on a scale of 0 - 255). We can use this threshold to separate water from land.
We can extend this procedure to include multiple thresholds defining different land cover types. Continuing the previous example, we may find that vegetated areas have brightness values ranging from 40 to 180 and that bare ground or paved surfaces have brightness values greater than 190. We could use a threshold of 185 to separate these two cover types. The process of using multiple thresholds to classify an image is called density slicing.
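Density slicing with the thresholds from this example (below 40 water, 40 to 185 vegetation, above 185 bare ground or paved) reduces to a single array operation; the sample DNs are invented:

```python
import numpy as np

def density_slice(band, thresholds=(40, 185)):
    """Assign class codes 0, 1, 2, ... by slicing the DN range at the thresholds."""
    return np.digitize(band, bins=thresholds)

dn = np.array([10, 40, 100, 185, 200])      # invented sample DNs
classes = density_slice(dn)                 # 0 = water, 1 = vegetation, 2 = bare/paved
```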
Visual interpretation is often difficult based on the raw image data because the brightness values are concentrated in a narrow range rather than being spread out over the entire gray scale range from 0 to 255. We can overcome this limitation by using a contrast stretch. As the term implies, we are rescaling the horizontal axis of the frequency distribution of brightness values so that the full range is used. If the raw brightness values are all in the range from 60 to 180, the image will have little visual contrast and will contain a series of similar gray tones that are difficult to tell apart. If we stretch the range of brightness values to use the full range from 0 to 255, the image will have much greater visual contrast, making it easier to interpret. A linear stretch preserves the overall shape of the frequency distribution while a histogram stretch results in an equal number of pixels being assigned to each unique brightness level. A histogram stretch is generally preferred because it maximizes the information content of the image.
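A minimal linear stretch, using the 60-180 example above (sample DNs invented):

```python
import numpy as np

def linear_stretch(band):
    """Rescale the band's brightness values to span the full 0-255 range."""
    lo, hi = float(band.min()), float(band.max())
    stretched = (band - lo) / (hi - lo) * 255.0
    return np.round(stretched).astype(np.uint8)

raw = np.array([60, 120, 180])    # raw DNs confined to a narrow range
out = linear_stretch(raw)         # same relative spacing, full 0-255 range
```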
It is also possible to extract new features from the original images to create new information channels for use in subsequent analysis. Three commonly used methods of feature extraction are: calculating texture measures, image ratioing, and principal components analysis. Texture measures attempt to capture spatial variability in brightness values within a rectangular window centred on a pixel. This can be useful in distinguishing features such as residential areas in which the brightness values represent a mixture of pavement, grass and roof tops. Image ratioing is commonly used in vegetation studies. The most widely used measure is a normalized difference vegetation index (NDVI) which is calculated by taking the difference in brightness values between the near IR and the red bands and dividing that difference by the sum of the same two bands. For example, using Thematic Mapper data, band 4 is the near IR and band 3 is red:
NDVI = (TM4 - TM3) / (TM4 + TM3)
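The formula can be computed per pixel with array arithmetic (the DN values below are invented):

```python
import numpy as np

def ndvi(near_ir, red):
    """Normalized difference vegetation index: (NIR - red) / (NIR + red)."""
    near_ir = near_ir.astype(float)
    red = red.astype(float)
    return (near_ir - red) / (near_ir + red)

tm4 = np.array([200, 50])   # invented near-IR (TM band 4) DNs
tm3 = np.array([50, 50])    # invented red (TM band 3) DNs
result = ndvi(tm4, tm3)     # healthy vegetation gives values near +1
```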
The NDVI measure has been found to make it easier to differentiate between vegetation types and has also been useful in biomass estimation. Principal components analysis is a multivariate statistical technique that attempts to group together highly correlated variables into a single index. This can be useful in reducing the number of data channels required for a given analysis and the components themselves may have useful spatial interpretations.
Image Classification
The purpose of image classification is to group together pixels that have similar patterns of brightness values across a series of image bands or information channels. There are two general approaches: unsupervised and supervised classification. In unsupervised classification, a statistical technique called k-means cluster analysis is used. In this procedure, the analyst specifies the number of classes required. Pixels are initially assigned to classes at random. Once all pixels have been assigned to a class, group means are calculated for each class. Each pixel is compared to each class and is reassigned to the class with which it has the highest similarity. Once all pixels have been reassigned, class means are recalculated and the process iterates until no further changes in class membership occur. The output is a new image in which each pixel is represented by its class identifier.
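A minimal sketch of this procedure, following the steps above (random initial assignment, mean calculation, reassignment to the nearest mean, iteration until membership stops changing); the pixel values are invented:

```python
import numpy as np

def kmeans_classify(pixels, n_classes, seed=0):
    """Cluster pixel vectors (rows of band values) into n_classes groups."""
    rng = np.random.default_rng(seed)
    # Initial random assignment of every pixel to a class
    labels = rng.integers(0, n_classes, size=len(pixels))
    while True:
        # Class means; an empty class is re-seeded with a random pixel
        means = np.array([
            pixels[labels == k].mean(axis=0) if np.any(labels == k)
            else pixels[rng.integers(len(pixels))]
            for k in range(n_classes)
        ])
        # Reassign each pixel to the class with the nearest mean
        distances = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
        new_labels = distances.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            return labels       # converged: no change in class membership
        labels = new_labels

# Invented two-band pixels forming two obvious clusters
pixels = np.array([[0.0, 0.0], [1.0, 1.0], [10.0, 10.0], [11.0, 11.0]])
labels = kmeans_classify(pixels, n_classes=2)
```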
Unsupervised classification is useful for exploring what cover types can be detected using the available imagery. However, the analyst has no control over the nature of the classes. The final classes will be relatively homogeneous but may not correspond to any useful land cover classes.
Supervised classification requires the user to identify the cover types of interest. Samples of pixels are then selected based on available ground truth information to represent each cover type. These samples are called training areas. The brightness values in the input image bands are analyzed to generate a spectral signature for each cover type. All pixels in the image are then compared to the spectral signatures of each cover and assigned to the cover class with which the pixel has the highest degree of similarity. Nearest neighbour, parallelepiped, and maximum likelihood classifiers can be used, although the maximum likelihood method is generally preferred.
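The overall flow can be sketched with a minimum-distance-to-means classifier, a simpler relative of the classifiers named above; the training areas and pixel values are hypothetical:

```python
import numpy as np

def classify(pixels, training):
    """Assign each pixel to the class whose mean spectral signature is nearest."""
    names = list(training)
    # Spectral signature of each cover type: the mean of its training pixels
    means = np.array([np.mean(training[n], axis=0) for n in names])
    distances = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
    return [names[i] for i in distances.argmin(axis=1)]

# Hypothetical training pixels in two bands (red, near-IR)
training = {
    "water":  np.array([[20.0, 10.0], [24.0, 12.0]]),   # low near-IR reflectance
    "forest": np.array([[60.0, 90.0], [64.0, 94.0]]),   # high near-IR reflectance
}
pixels = np.array([[22.0, 11.0], [62.0, 92.0]])
labels = classify(pixels, training)
```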
Change Detection
Because of the repeat coverage made possible by satellite imagery, monitoring change over time using multi-temporal imagery is an important application. Change detection involves comparison of a pair of images to identify areas that have distinctly different brightness values. New images representing change can be created by taking the difference between images. The change image can be subjected to image classification techniques to determine the different types of change that have occurred.
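A sketch of simple image differencing, assuming co-registered images from two dates and an arbitrary change threshold (all DNs invented):

```python
import numpy as np

def change_mask(image_t1, image_t2, threshold=25):
    """Flag pixels whose brightness changed by more than the threshold."""
    # Cast to signed integers so the subtraction cannot wrap around
    diff = image_t2.astype(int) - image_t1.astype(int)
    return np.abs(diff) > threshold

t1 = np.array([[100, 100], [50, 200]])     # invented date-1 DNs
t2 = np.array([[102, 160], [48, 120]])     # invented date-2 DNs
mask = change_mask(t1, t2)                 # True where substantial change occurred
```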
Change detection can be extended to series of images over a long period of time. For example, Piwowar used nine years of monthly AVHRR data to analyze changes in sea ice conditions in the Arctic Basin.