AIRBORNE: Map Accuracy Specifications, Part One of a Two Part Series

Where did they come from, what are they, and what do they really mean?

By Bob Fowler

This article has been written for both users and producers of map information. I have written it with a typical mapmaker's compromise of technical explanation, in the hope that there is sufficient detail to explain the "why" to the novice, but not so much basic information that it bores the initiated. Right at the start I should say this is a contentious issue. There could be as many opinions as there are salespeople out there selling products and equipment. There are also major issues concerning what is possible versus what is necessary, what is real versus what is not, and what is precise versus what is accurate.

This is a complex subject, because map accuracy is really not based on any one simple formula. Any assessment of map accuracy should consider: how good the source data are; where they came from; what sort of instrument produced them (and how good that instrument was); how carefully the source data were used; how accurate any single point is, both in absolute position and relative to another point; how much can be shown at any scale; and how accurate any particular type of detail is.

If you will bear with me through some history and the physics of accuracy - the facts connected with the topic - I hope I can illustrate how some of the concepts and misconceptions have come about. We can then examine the current situation and, finally, a more realistic approach to accuracy issues in mapping, which could be termed the politics of it all.

The whole accuracy issue in mapping is based on the history associated with the analog survey equipment used years ago. Old survey instruments, like any other scientific instruments for that matter, were accurate according to their engineering precision - or, more precisely, how carefully and accurately a measuring instrument could have its scale divided. As well, in most older instruments there were physical phenomena which were impossible to control, so a whole body of practice surrounded proper instrument construction, maintenance and use.

How and why some of the rules are the way they are

In the old days (not really so very long ago - 30 years - although it seems like a totally different era to me!) surveying and engineering equipment had to be used in a specific way. If you set up a transit theodolite, there was a specific procedure which had to be used to make a sighting. This procedure ensured that you compensated for the inherent instrument errors - both mechanical and optical - any backlash in the adjustment screw systems, and any misalignments in the measuring plates (transit theodolites have a base plate and a top plate). Each order of survey had specific equipment requirements and a methodology which had to be followed. If you were, for example, doing third order work, you would be expected to use an instrument that could read angles to one second of arc. The methodology was an integral part of the accuracy requirements because, regardless of the instrument, you could not hope to achieve the order of accuracy unless you did things the right way. Using this instrument you would need to read angles a minimum of three times with different random base plate settings, and each set of angles would be read with the telescope set one way, then "transited" (i.e. turned head over heels 180 degrees and swung back onto the target) and the angle read again.
This transit procedure gave you two angles which were 180 degrees different on the plates. There were two reasons for this: one, it avoided making a simple calculation error, and two (more importantly), if there was a manufacturing error in the alignment of the telescope or of the plates, the mean of the two angles (forward looking and back looking) would be the "real angle." The three sets of meaned angles, according to common specifications, should be within 10 seconds of each other, and none of the angles should be more than five seconds from the final mean. If, on the other hand, you were surveying to second order specifications, the same one second reading theodolite could be used, but now the minimum is four sets with a spread of no more than five seconds and no angle exceeding two seconds from the mean. A first order survey requires a higher degree of instrumentation - one that can essentially be read to one tenth of a second - and a minimum of six sets of angles with a spread of less than 2.5 seconds about the mean, repeated preferably three times on three different days. So you can see that methodology affects accuracy as well as the equipment itself.

The computation of traverses or triangulation nets also required a set methodology and rules of computation. Again, there were relatively limited ways of doing this sort of thing. Usually, at the end of a primary traverse (a traverse which would bound the area being mapped), any errors would be apportioned over the length of the traverse according to certain rules - such as the compass rule. Internal traverses were then adjusted to the bounding traverse. There's nothing wrong with these rules, and they are still valid today under particular circumstances for certain methodologies.

But it has all changed - hasn't it?

Yes and no. Over the last few decades there has been a quantum leap in the methodology, manufacture and engineering of measurement equipment, and in many cases complete changes in technology. Some of the older concepts of error assessment and adjustment are, in fact, no longer valid (unless you are still using some of the older equipment). In addition, with the advent of the computer it became possible to perform more complex adjustments, such as least squares adjustments. Least squares adjustment software, in effect, computes the location of a position every possible way, finds the solution that minimizes the sum of the squared differences between the measurements and that solution, and then determines a most likely position, along with an error ellipse which provides the user with the probable maximum error boundaries.

If all that sounds fuzzy, it gets worse. Because most computers work to eight decimal places, we end up with a position which is computed to eight decimal places from input information which is unlikely to be anywhere near that level of accuracy - it could be good only to the integer. This is where a lot of confusion reigns in the modern world. Users see the eight places of decimals and assume everything is much more accurate than reality. On the other hand, the chances are very good that the computed position is better than the real life measurement - just don't ask anyone to prove it! This decimal place problem runs through every piece of equipment and software used in the surveying, mapping and GIS world.
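To put some flesh on that, here is a minimal sketch of the false-precision problem, assuming nothing about any particular adjustment package. The coordinates and the number of observations are invented for illustration; for a point observed several times, the least squares estimate is simply the mean of the observations (the mean minimizes the sum of squared residuals), and a crude error ellipse falls out of the residual covariance.

```python
# A minimal sketch of false precision in an adjustment, with invented numbers.
import numpy as np

# Five observations of the same point, each only good to the nearest meter.
obs = np.array([
    [612_403.0, 5_042_118.0],
    [612_404.0, 5_042_117.0],
    [612_402.0, 5_042_119.0],
    [612_404.0, 5_042_118.0],
    [612_403.0, 5_042_117.0],
])

estimate = obs.mean(axis=0)              # least squares position for repeated observations
residuals = obs - estimate
cov = residuals.T @ residuals / (len(obs) - 1)
semi_axes = np.sqrt(np.linalg.eigh(cov)[0])   # error ellipse semi-axes (1 sigma), meters

# The computer happily reports eight decimal places...
print(f"Adjusted position: {estimate[0]:.8f}, {estimate[1]:.8f}")
# ...even though the input was only good to the integer, as the ellipse shows.
print(f"Error ellipse semi-axes (m): {semi_axes[0]:.2f}, {semi_axes[1]:.2f}")
```

The eight decimal places say nothing about accuracy; the error ellipse, crude as it is here, says a great deal more.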
Regardless of how good the equipment is, the data logger plops the measurement into a computer which immediately uses every decimal place it has to create means, least squares solutions and adjustments. The result is coordinate values listed to thousandths of a millimeter - when the input data could have been read with a 10 second transit and a distance measuring device good to the nearest meter. The modern mapping world is notorious for trying to get the most precision it can out of every aspect of the process - even when it isn't there - and we have people who look at the figures coming out of their equipment and assume they are achieving some sort of precision an order of magnitude greater than reality. Today, for instance, you could collect a digital terrain model from 1" = 5,000' (1:60,000) scale photography, and a computer will "invent" one foot contours for you. The reality is that you can read 20 foot contours with a proven degree of certainty from that photo scale, and they are likely to be accurate to half of that value (+/-10 feet).

A lot of mapping people will already be screaming that you can do much better than that, because I have used an example based on standard, analog photogrammetric instrumentation. The answer to that is yes, you can do better - but how much better, and who says? (And this is where the subject gets even fuzzier.) Because, who does make these decisions?

• The manufacturers who make the equipment? (Likely a vested interest here!)
• The people selling the very competitive mapping service? (In cahoots with the instrumentation sellers!)
• Or is there just a vaguely held public belief or folklore (something like conspiracy theories on the Kennedy assassination)?

The first reality is that instrumentation sellers do tend to push the limits of their equipment - and why not, everybody else does it - and they do get the best results under controlled laboratory conditions. The second reality is that service providers, believing the instrumentation sellers, have a tendency to push the limits a notch further (surveying, mapping and GIS is a very competitive business).

Real Testing

The proven reality comes mainly from the independent assessors. These independent assessors tend to be national and occasionally state or provincial governments which have the time to investigate and bench test equipment. Unfortunately, in these times of cutbacks, governments have fewer and fewer resources to devote to ensuring manufacturers' claims are based on solid evidence. Nevertheless, in the past many fairly definitive tests have been done on the gamut of surveying and mapping equipment. Less has been done on GIS software, but in many cases user groups have tended to be very forthcoming about the relative merits and flaws of one software package versus another.

Until possibly 10 years ago, there was fairly rigorous testing of new equipment. Any new distance measuring equipment was tested against known standards, on base lines which had been measured multiple times with various methodologies. Thus a new electronic distance measuring system could easily be checked against a standard under a variety of conditions to determine its consistency. This sort of baseline checking is relatively easy and inexpensive and, to be fair, most reputable manufacturers do it well. However, when we move into the satellite based world, appraisal becomes somewhat murkier.
We start running into situations over which we may or may not have full knowledge and even less control, and the small print in the instrument specifications start to read like a lawyer's non-liability clause. A GPS receiver can provide you with positions which appear to be close to a centimeter in precision on any point within a couple of hundred kilometers of a base station (using differential techniques- where the known position of the base station is used to coordinate the new station through the apparent position of the satellites). The problem is that there are a vast number of external influences affecting the satellite position - compared to, say, a tape measure laid flat between two points where the only external effect is probably the expansion of the tape due to temperature. In a satellite position, we assume the satellite is in the orbit and position it is supposed to be. We assume that the temperature gradient is consistent throughout the signal space, that the radio waves are not being bent or affected by solar radiation, and a whole lot of other stuff (some of which cancels each other out) is not really happening. With GPS equipment what is really happening is we are measuring time and phase of wave lengths, and these seemingly very infinitesimal small aberrations create possible errors of several centimeters. We know from experience that, overall, the differences which occur are generally acceptable over the long haul, but many people are pushing the envelope so that the inherent errors are starting to become significant. An example of the difficulty in determining map accuracies can be shown by the 2 centimeters (1 inch error) inherent in a GPS measurement at a single ground point. The positional accuracy of that point can be taken for what it's worth. However when you make a comparison to another point surveyed the same way a couple of kilometers away, now we have an accuracy figure we can quote. Two centimeters spread over two kilometers is an expected accuracy of 1:100,000. However, that 2cms in each position will provide an expected accuracy of only 1:2,500 if the two GPS points are 100 meters apart. This serves to illustrate two points. One, that positional errors using GPS depend on how close measured points are to each other (even though theoretically nothing has changed in the instrumentation or methodology). And, secondly, errors in measuring using older technology tended to get bigger the further you measured compared to satellite technology where the errors are almost the same regardless of how far away you measure. Almost the same, you are saying. Well, you see, there are other factors which affect GPS measurements which we didn't mention yet: from the shape of the Earth to the effects of gravity. So the greater your distances between points being measured the larger the unknowns in these areas become. Satellite technology is based on orbits based on the centre of mass of the Earth - not the physical centre of the Earth, which isn't uniformly round anyway, and gravity varies depending on the mass of solid material in the area. For example, a well known survey effect is that a plumb bob hanging from a tripod on an ocean front set-up will "lean off vertical" ever so slightly away from the ocean and toward the mountains on the landward side. Radio waves, like every other wave in the electromagnetic spectrum, are subject to bending and distortion according to the influence of mass. 
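For the arithmetic behind those two relative accuracy figures, here is a minimal sketch. The numbers are the ones quoted above; how the per-point errors are combined is my own reading (the 2 cm quoted directly over the long line, and the worst case of 2 cm at each end, i.e. 4 cm, over the short one).

```python
# A minimal sketch of relative accuracy as a ratio: error over separation.

def relative_accuracy(error_m: float, separation_m: float) -> str:
    """Express a positional error over a point separation as a ratio 1:N."""
    return f"1:{separation_m / error_m:,.0f}"

# 2 cm quoted over a 2 km line between the two GPS points.
print(relative_accuracy(0.02, 2_000))   # 1:100,000

# 2 cm at each end (4 cm worst case) over only 100 m.
print(relative_accuracy(0.04, 100))     # 1:2,500
```

Nothing about the instrument or the method has changed between the two cases; only the separation of the points has, which is exactly the point being made.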
When all is said and done, though, surveying equipment generally has been tested very systematically and its limitations are fairly well documented, along with generally accepted error ellipses - even if there are some of us who are not willing to admit those error ellipses are still there.

Accuracy in mapping & GIS

When we get into the mapping and GIS fields, documentation of limitations is not as clear. The older analog photogrammetric instruments have been around long enough, and have been sufficiently tested against field results, to establish their capability for detection of detail and elevations. Some general rules have been established by mapping agencies around the world concerning the level of accuracy which can be expected under specific circumstances. These specific circumstances usually rely on various input assumptions:

- that the aerial photography was flown at a certain height,
- that there were no problems with it (it hadn't been subject to big changes in temperature, wasn't flown in hazy conditions, was developed properly, etc.),
- that the ground control was properly executed and adjusted, and
- that the analog instruments were themselves in good adjustment.

All of these factors influence accuracy, and in reality much more than most mapping contractors care to admit. The smallest contour interval obtainable from a specific scale of photography assumes that everything is perfect - when in real life some link in the chain isn't. John Thorpe's excellent article in the June issue of EOM examines these weak links very well.

When analytical instruments came along, an increase in accuracy was predicted and experienced. Part of this was due to more precision in the measuring capability of the instrumentation - simply through more precision in the engineering and in the recording of the information - and part of it was due to being able to "zoom" in on the photography at a greater degree of magnification. This resulted in a greater thrust by some of the service providers in the mapping field to push the limits again, to appear more competitive than their peers. The degree of extra accuracy obtainable through the use of analytical equipment, however, is not as significant as is often touted. For example, it is a generally accepted rule that to obtain one foot contours on an analog instrument, the aerial photography should be flown no higher than a scale of 1" = 250' (1:3,000) with a six inch focal length lens. With an analytical instrument this can be extended to 1" = 300' (1:3,600), providing a 20 percent efficiency improvement for subsequent downstream processing. Significant, but not by any means a doubling of the accuracy potential. More and more, driven by cash conscious clients, today we see the extension of mapping specifications beyond that which can be guaranteed. Occasionally you will see people proposing to obtain one foot contours from 1" = 350' photography or even 1" = 600' photography. And part of the problem is, you can use the argument that you can produce a one foot contour from higher-flown photography using the technique of digital terrain model collection followed by processing. This process is where a photogrammetrist collects single point data, usually in some form of a grid pattern. But the photogrammetrist is human! It is a well known, measured phenomenon that a photogrammetrist collecting contours by following the lie of the land with his instrument mark will be slightly under or slightly above the ground for a significant portion of the contour. "On average" his mark will be on the ground.
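As a quick sanity check on those photo-scale rules, the sketch below converts them to flying heights using the standard relation between contact photo scale, focal length and flying height (H = f x scale denominator) for the six inch lens mentioned above. The specific heights are my own arithmetic, offered only for illustration, not figures from any specification.

```python
# A minimal sketch, assuming the standard relation H = f * (scale denominator).
FOCAL_LENGTH_FT = 0.5          # six inch lens

def flying_height_ft(scale_denominator: float) -> float:
    """Flying height above ground implied by a contact photo scale."""
    return FOCAL_LENGTH_FT * scale_denominator

analog_rule = flying_height_ft(3_000)      # 1" = 250' photography -> 1,500 ft
analytical_rule = flying_height_ft(3_600)  # 1" = 300' photography -> 1,800 ft

print(f"Analog rule:     {analog_rule:,.0f} ft above ground")
print(f"Analytical rule: {analytical_rule:,.0f} ft above ground")
print(f"Improvement:     {analytical_rule / analog_rule - 1:.0%}")  # 20%
```

That 20 percent is a ratio of scale denominators, not a doubling of accuracy; and it applies to the dynamic, contour-following mode of collection just described, where the measuring mark floats slightly above or below the ground for much of the time.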
On single point collection, however, the photogrammetrist usually stops and places the mark on the ground more precisely. This provides a greater degree of accuracy on the specific point: usually reckoned to be about one quarter to one sixth of the contour interval. So if you were using 1:3,600 photography to provide one foot contours, an individual spot elevation should be good to plus or minus 3 inches (compared with plus or minus 6 inches on the contour). Theoretically, a good interpolation program generating contours from a net of closely spaced spot elevations will provide better contour accuracy - as long as the ground is uniformly smooth. The reality is, of course, that the lie of the land is generally unpredictable. As far as I know, no one has ever actually spent the time and money to find out if this is really the case under a wide variety of terrain types and, more specifically, how closely together the spot elevations must be collected to guarantee a better accuracy. The general consensus among the photogrammetrists I have spoken to is that there should be an improvement in accuracy when producing contours this way, but probably not very much, and no one is willing to bet a month's salary on it. Again, we are relying on a computed accuracy - not necessarily a real one.

There are many people, also, who say that because you can zoom in for a greater degree of magnification and have more precision in measuring, the accuracy using analytical instrumentation can be further extended. The problem with statements like this is that they are hard to prove or disprove. It is true that looking at a photograph at a greater magnification does allow the interpreter to pick out more detail - sometimes. However, the general consensus of people who actually work with photography is that if you can't see it properly at six times magnification, you won't see it any better at 12 times. Indeed, a blurry object at six times magnification may be interpretable (by rough shape, position and the intuition of the interpreter, who makes an informed judgment that it appears to be a manhole, catch basin or whatever), but when you look at the same blurry blob at 12 times magnification it is just a blur. You can test this yourself. If you have a blurry 3" x 5" print of Uncle John from your Instamatic, try getting it blown up to 16" x 20" and see what happens!

The second point, about measuring, is also subject to discussion. If you are measuring to a blurry blob on the photo that you think is a manhole, finding the center of the blur is reduced to guesswork. Probably quite good guesswork, but still, nevertheless, guesswork. There are a large number of analytical photogrammetric instruments which will measure to one micron (1/1,000 of a millimeter), but using the optics in the instrument no one can see one micron. On top of that, the resolving power of most aerial film itself is around seven microns, so even if you could see one micron you would be looking at lumps of emulsion on the film. Conversely (of course, there has to be a converse), in tests with a number of photogrammetrists, our organization has found that in aerial triangulation, at least, many operators do seem to produce consistent results, between five and 10 microns, in repeated readings of the same spot on the photograph. This would lead you to suppose that some of them can technically interpret better than the grain of the film allows. However, this is really another scientific fallacy - all it really means is that they are exceptionally good at averaging.
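To make the interpolation point concrete, here is a minimal sketch of the kind of arithmetic a contouring program performs between two collected spot elevations along a grid line. The spot heights and spacing are invented, and real packages do something considerably more elaborate over the whole grid; the principle, though, is the same.

```python
# A minimal sketch of locating where a contour crosses a grid line by linear
# interpolation between two spot elevations. The values are invented; the
# answer is only as good as the assumption that the ground runs smoothly
# between the two spots.

def contour_crossing(x0: float, z0: float, x1: float, z1: float,
                     contour_elev: float):
    """Return the position along the grid line where the contour crosses,
    or None if it does not cross between the two spot elevations."""
    lo, hi = sorted((z0, z1))
    if z0 == z1 or not (lo <= contour_elev <= hi):
        return None
    t = (contour_elev - z0) / (z1 - z0)     # fraction of the way from x0 to x1
    return x0 + t * (x1 - x0)

# Two spots 50 ft apart, each collected to roughly +/- 3 inches.
x = contour_crossing(x0=0.0, z0=101.7, x1=50.0, z1=103.2, contour_elev=102.0)
print(f"The 102 ft contour crosses at {x:.1f} ft along the grid line")
```

The interpolated crossing comes out to a tidy tenth of a foot, but its real accuracy still rests on the +/- 3 inch spot heights and on whatever the ground actually does between them - which is exactly why this remains a computed accuracy rather than a proven one.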
What about softcopy?

Now, when we get into the newest line of photogrammetry equipment - the softcopy stations - things become even more blurry, if you'll pardon the pun. Now photography is being scanned and dumped into a computer file. Scanners come in all shapes and sizes with all kinds of specifications. Judging how good anything really is, is probably limited to believing the manufacturer's blurb. We have had scans made by a fair number of service bureaus on a wide variety of scanners. We have compared them with scans from our old Optronics drum scanner and have, surprise surprise, noted very little difference. Some images that were in fact scanned more precisely, on better specified scanners, actually looked worse - more indistinct, and no discernibly better positionally when viewed side by side and superimposed - than those made on our old in-house system. (Do we have a fluke or what?) This concerns us somewhat, as it raises the question of how accurate is accurate, and whether the extra precision really means anything. We'd like to be convinced it does but, at the moment, we can't prove it.

The new scanners provide a number of advantages which older scanners do not have, yet the best of them have a maximum scan resolution of 6.5 to 7 microns. As mentioned before, this is very close to the grain size of the film, so with current emulsions there is no point in scanning any finer. The scanner scans a discrete square (pixel), assigning one of 256 gray scale levels to the pixel. If the film is color, the pixel is in effect scanned three times, once for each of the additive primary colors (red, green and blue). Interestingly, while the scanners will provide 256 levels of gray, most people have difficulty discerning much more than 20 to 30 levels of gray in an image. On the other hand (again!), most people viewing an image scanned with 16 levels of gray next to an image scanned with the full 256 levels can discern that the 256 level image does appear slightly better.

The reason I am mentioning this is that there is a very fine distinction here. The photographic film is a continuous tone image with random grains of emulsion whose size is affected by the light rays and by the chemistry of the processing. The scanner provides an image where the "dots" are all of a specific size and carry a meaned value for each pixel. Does this affect accuracy? Probably not. The photogrammetrist who is looking at a film positive in an analytical plotting instrument and consistently, repeatedly reading a point to 5-10 micron precision is averaging anyway, and he is only doing that on the points where he is making a concerted effort to be precise. The photogrammetrist viewing a three dimensional image on a computer screen scanned at 6.5 microns is looking at something close to the grain of the film - in reality a bit of fuzz, akin to looking at a single dot in a newspaper photo. By itself this is a meaningless bit of information. The same person looking at a pixel scanned at 25 microns (roughly four times coarser linearly, or about 15 times by area, but a fairly often used scanning resolution) is viewing a coherent averaging of the light patterns forming that pixel, which, even if it is an average, has some value. So, is a reasonable scanning resolution to employ somewhere in between? I'm not going to answer that!
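For a rough feel of what these scanning resolutions mean on the ground, the sketch below multiplies the scan pixel size by the photo scale denominator. The pairings of scan resolution and photo scale are simply drawn from figures quoted earlier in the article, and the arithmetic is mine, not a published specification.

```python
# A minimal sketch of the ground footprint of a scanned pixel:
# ground pixel size = scan pixel size * photo scale denominator.
# The pairings below are illustrative only.

def ground_pixel_m(scan_microns: float, scale_denominator: float) -> float:
    """Ground distance covered by one scanned pixel, in meters."""
    return scan_microns * 1e-6 * scale_denominator

cases = [
    (6.5, 3_600),    # near-film-grain scan of 1:3,600 photography
    (25.0, 3_600),   # common 25 micron scan of the same photography
    (25.0, 60_000),  # the same scan of 1:60,000 photography
]

for microns, scale in cases:
    size = ground_pixel_m(microns, scale)
    print(f"{microns:>5} micron scan of 1:{scale:,} photo -> {size * 100:.1f} cm on the ground")
```

Whether a 2 cm ground pixel actually yields a correspondingly better map position than a 9 cm one is, of course, exactly the question this article is raising.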
Well, we have been reviewing the physical and mechanical implications of accuracy and precision, but the answer to what accuracy is, or needs to be, actually comes down to something which is surprisingly independent of everything preceding.

Editor's Note: This concludes part one of this two part series. Watch for part two in November's issue.

About the Author: Robert Fowler, O.L.S., C.S.T., C.E.T., is proposals manager for Intermap Technologies in Ontario, Canada. He has written on a number of subjects previously for EOM. He has more than 30 years' experience in surveying and mapping, has written mapping specifications for a number of clients, including the Canadian Department of National Defence, and has contributed to the mapping specifications of a number of other countries' mapping agencies. He may be reached at 613-226-5442.