TECH NOTES

In years to come, extracting GIS features from high-resolution imagery to produce maps, or to update or generate GIS databases, will be a common task in remote sensing. Sensors of lower resolution, such as LANDSAT TM or SPOT, may be appropriate for features relevant at small scales. High-resolution image data, whether scanned orthophotos or data from high-resolution airborne or space-borne sensors, generally permits the extraction of features relevant at larger scales. However, due to the enormous amount of information contained within such images, it is necessary to make their contents manageable by employing one or more suitable image segmentation methods. Additional information about the segments, such as texture, form criteria, or context, must then be described in an appropriate way in order to derive improved classification results.
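As a rough illustration of the workflow described above, the sketch below segments an image and describes each segment by simple spectral, textural, and form statistics. It is not eCognition's patented multiresolution segmentation, which is proprietary; scikit-image's SLIC superpixels stand in for it, and the parameter values and feature choices are assumptions made only for the example.

```python
# Illustrative sketch only: SLIC superpixels stand in for a proper
# multiresolution segmentation; features are simple per-segment statistics.
import numpy as np
from skimage.segmentation import slic
from skimage.measure import regionprops

def describe_segments(image: np.ndarray, n_segments: int = 500):
    """Segment an (H, W, bands) image and return per-segment feature vectors."""
    labels = slic(image, n_segments=n_segments, compactness=10, start_label=1)
    features = {}
    for region in regionprops(labels):
        mask = labels == region.label
        pixels = image[mask]                                # (n_pixels, n_bands)
        features[region.label] = {
            "mean_reflectance": pixels.mean(axis=0),        # spectral information
            "std_reflectance": pixels.std(axis=0),          # crude texture measure
            "area_px": region.area,                         # size (form criterion)
            "eccentricity": region.eccentricity,            # shape (form criterion)
        }
    return labels, features
```

In practice, the segmentation parameters and the choice of texture and form features would be tuned to the scale of the features being mapped, which is exactly the point of working at several segmentation levels.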
eCognition from Definiens uses a newly patented segmentation technique that makes it possible to generate a hierarchical net of image segments at several levels of scale. It thus becomes possible to derive meaningful image segments on the one hand, and to describe the physical and contextual characteristics of each segment on the other. Classification can be executed either by a nearest-neighbor approach or by fuzzy membership functions. Thanks to eCognition's object-oriented approach, classified segments can pass down their properties to child classes. Semantic groupings of the resulting classes help to combine them into meaningful superior classes. While the hierarchical net of image segments describes the content within the image domain, the class hierarchy describes it within the feature domain. Together, both hierarchies act as a semantic net of the image content, thereby coming very close to the semantic description of the contents of maps or GIS databases.

With this in mind, two classification approaches to feature extraction that consider global context have been compared by P. Hofmann and W. Reinhardt, using airborne DPA data from Bueckeburg, Germany. In the bottom-up approach, objects were classified by taking the classification of their sub-objects into account; in the top-down approach, the classification results of larger objects influenced the classification of smaller objects on lower image levels. Compared to a conventional, pixel-based maximum-likelihood classification, which typically produces salt-and-pepper-like results, both segment-based approaches led to more homogeneous classifications. The authors rated the results derived from eCognition as closer to a human visual interpretation, which was generated as a reference. eCognition produced misclassifications only where even the human eye could hardly distinguish the features.
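To make the idea of classifying segments with fuzzy membership functions concrete, the sketch below assigns each segment to the class whose feature rules it satisfies best. The class names, feature ranges, and the `fuzzy_range` helper are invented for illustration and do not reflect eCognition's actual rule base or API.

```python
# Hedged sketch of fuzzy rule-based classification of image segments.
# Each class is described by membership curves over segment features;
# a segment gets the class with the highest combined (minimum) membership.

def fuzzy_range(value, low, high, soft=0.1):
    """Trapezoidal membership: 1 inside [low, high], falling linearly to 0 outside."""
    width = soft * (high - low)
    if value < low - width or value > high + width:
        return 0.0
    if value < low:
        return (value - (low - width)) / width
    if value > high:
        return ((high + width) - value) / width
    return 1.0

# Assumed classes and typical value ranges for scalar per-segment features.
CLASS_RULES = {
    "water":  {"mean_nir": (0.0, 0.1), "std_reflectance": (0.00, 0.05)},
    "forest": {"mean_nir": (0.3, 0.6), "std_reflectance": (0.05, 0.20)},
    "urban":  {"mean_nir": (0.1, 0.3), "std_reflectance": (0.10, 0.40)},
}

def classify_segment(segment_features: dict):
    """Return (best_class, membership) for one segment's scalar feature values."""
    best_class, best_score = "unclassified", 0.0
    for cls, rules in CLASS_RULES.items():
        memberships = [fuzzy_range(segment_features[f], lo, hi)
                       for f, (lo, hi) in rules.items()]
        score = min(memberships)          # minimum operator acts as a fuzzy "and"
        if score > best_score:
            best_class, best_score = cls, score
    return best_class, best_score
```

The minimum operator here plays the role of a fuzzy "and" across the feature rules; a full rule base of the kind described in the article would also draw on the class hierarchy and on the context of neighboring, sub-, and super-objects.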