GPS Consumer Series: Averaging GPS Data Without Applying Differential Correction
By Chuck Gilbert
The GPS Consumer Series is a monthly column that
explores the issues associated with GPS data collection.
This column explores the benefits provided by various GPS
receiver features on today's market. Issues commonly
encountered in differential GPS data capture are examined
from the user's perspective.
Introduction
If you walk outside right
now, turn on a GPS receiver, then read the position on the
screen, the answer will almost always be inaccurate. Any
one GPS receiver, operating on its own, will be accurate
to better than 100 meters 95 percent of the time. In my
book, that's not very accurate. It's not that the GPS
system itself is that error prone. The vast majority of
this error is due to intentional degradation of the GPS
signal by the U.S. Department of Defense (DoD). The DoD
cites reasons of national defense. I do not intend to
debate the wisdom or effectiveness of this policy. I
merely wish to elaborate a little on how this degradation
impacts the typical GPS user.
My thanks to Paul Malley
and Rob Peterson, two astronomers in Texas, who have
pressed for more information, and inspired these words.
While their application may be esoteric, their data
collection needs are not atypical. I will use their
application as an example. These astronomers go to remote
locations to observe and record solar eclipse data. It is
important that they are able to determine relatively
accurate geographic coordinates for their telescope
locations. In the interest of simplicity, I will not state
precisely what their accuracy requirements are - that is
not relevant. Suffice it to say that their fundamental
needs are not very different from those of any biologist,
geologist, or any other worker in the field who desires
spatial coordinates for their 'things' in the field.
It Takes Two (or More) to Tango
If the GPS signal were not degraded, the accuracy of one
receiver, operating on its own (or autonomously) would
typically be about 10-15 meters. The error in excess of
this 10-15 meters is a result of the DoD policy to degrade
the GPS signal for non-military users. (This degradation
is known as Selective Availability or S/A.)
Fortunately there is a
relatively easy way to circumvent this degradation. If two
or more GPS receivers collect data at the same time, it is
possible to use the data from one receiver to remove most
of the error from any number of other receivers, providing
that a few simple criteria are met.
The main criteria for
differential correction are listed below:
a) Both receivers must record data at the same time.
b) One receiver must be stationary at a location of known
coordinates. This stationary receiver is usually referred
to as a base or reference receiver.
c) Both receivers must record the correct type of
information (positions only are not enough). A variety of
details about the satellites and their orbits are required
for successful correction.
d) The two receivers should be in the same geographic
region. Usually, two receivers within 500 km (300 miles)
of each other are close enough together.
The process of using two
receivers at the same time to remove errors is known as
differential correction. After differential processing the
accuracy of GPS data can be improved to better than one
centimeter with survey-grade GPS receivers, and to better than one meter with mapping-grade GPS
receivers.
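To make the idea concrete, here is a minimal sketch in Python of the principle at work. The function name and the position-domain simplification are mine, not part of any real product; correction software actually works on the individual satellite range measurements rather than on finished positions, so treat this only as an illustration.

    def differential_correct(base_known, base_measured, rover_positions):
        # Error observed at the base station: measured minus true, per axis.
        err_x = base_measured[0] - base_known[0]
        err_y = base_measured[1] - base_known[1]
        # Remove that same error from every rover fix logged at the same time.
        return [(x - err_x, y - err_y) for (x, y) in rover_positions]

Because both receivers see essentially the same degraded signal at the same moment, the error measured at the base is a good estimate of the error at the rover, which is why criteria (a) and (d) above matter.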
For a more thorough
discussion of the requirements and details of differential
correction refer to this column in the October 1993 issue
of Earth Observation Magazine. Alternatively, reprints of
the October 1993 issue can be obtained from Trimble. For
more detail still, I recommend the book Global Navigation,
A GPS User's Guide, by Neil Ackroyd and Robert Lorimer,
ISBN #1-85044-232-0.
Double or Nothing?
For the majority of applications, the 100 meter accuracy
available under the influence of S/A is not good enough.
In fact, even without S/A, the resulting 10-15 meter
accuracy is still not good enough for many applications.
It is the superior accuracy of differential GPS that makes
GPS a viable solution for innumerable applications.
The cost of acquiring differential accuracy is that users must either add a GPS receiver to serve as a reference receiver or obtain GPS base data from another source. An alternative source of base data may not be readily available (particularly in remote areas), leaving the user only one choice: to buy yet another GPS receiver.
Obviously, this adds to the cost of using GPS. In some
cases, acquiring a second receiver will immediately double
the cost of using GPS because users must buy two receivers
(base and rover) instead of just one.
Cheating the Hangman?
Some users try to get around this dilemma by simply using
one receiver, collecting more than one position at each
site, then averaging together multiple positions. The
premise is that the average of many positions is likely to
be better than any individual position. This is correct.
However, the big question is, "How long must you
average to obtain some particular accuracy?"
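The averaging itself is trivial arithmetic. A minimal sketch in Python, treating each fix as a hypothetical x/y pair, could look like this:

    def average_position(fixes):
        # fixes is a list of (x, y) coordinate pairs from one receiver.
        n = len(fixes)
        mean_x = sum(x for x, _ in fixes) / n
        mean_y = sum(y for _, y in fixes) / n
        return (mean_x, mean_y)

The hard part is not computing the average; it is knowing how long you must collect fixes before the average is worth anything.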
The answer is, "It depends." The severity of
Selective Availability varies with time. During periods of
severe degradation, you must average for a longer time
than when S/A is mild. To illustrate, let's examine the
GPS data for a 24 hour period selected at random.
Figure 2 illustrates the
result of averaging position data for varying periods. The
data represented here was collected continuously over a 24
hour period at a high precision geodetic control point.
Positions were collected once per second for a total of
more than 86,000 positions. The X-axis represents the
amount of time over which the positions were averaged. The
Y-axis represents the distance from truth of the resulting
average position.
For example, the graph
indicates that after averaging data for two hours (7,200
positions), the average location of these 7,200 positions
was 41.73 meters from the true location. Since there were
24 hours of data to choose from, and because the average
for one 2 hour period could be different from the result
at another, the full 24 hour data set was divided into as
many unique two hour data sets as possible (12 two hour
sets). All 12 of the two hour data sets were averaged to
generate 12 different answers, then the 12 answers were
averaged together to provide a single representative value
for the time "2 Hours." The table on page 44
(Figure 1) summarizes the 12 data sets that were combined
to produce the value on the graph in Figure 2.
Note the wide variability
of the 12 two hour averages. After averaging over 7,000
positions the answers range from as much as 82 meters from
truth to as little as 7 meters from truth. It is important
that users recognize the unpredictability of averaging
data without differential correction.
The same procedure was used
for all of the time periods on the graph. Thus, the value
in the graph representing 15 minutes (0.25 hours) was
derived from 96 different 15 minute samples. On the other
hand, the values in the graph representing 16 and 24 hours
were derived from a single set of 16 hours and 24 hours of
data respectively.
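Readers who want to repeat this kind of experiment with their own receiver log could follow roughly the same procedure. The sketch below (Python; the once-per-second fix list and the known true position are assumptions for illustration, not data from this study) splits a track into non-overlapping windows of a given length, averages each window, and reports the mean distance of those window averages from truth.

    from math import hypot

    def mean_error_for_window(fixes, truth, window_seconds):
        # Divide a once-per-second track into non-overlapping windows,
        # average the fixes in each window, and measure how far each
        # window's average falls from the known true position.
        errors = []
        for start in range(0, len(fixes) - window_seconds + 1, window_seconds):
            chunk = fixes[start:start + window_seconds]
            mean_x = sum(x for x, _ in chunk) / len(chunk)
            mean_y = sum(y for _, y in chunk) / len(chunk)
            errors.append(hypot(mean_x - truth[0], mean_y - truth[1]))
        # One representative value per window length, as in Figure 2.
        return sum(errors) / len(errors)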
It is interesting to note
that the first data point (15 minute average) is more
accurate than the 30 minute and the one hour averages.
This also is indicative of the large variability of
uncorrected, averaged data. It is conceivable that, once
in a while, a user could obtain a very accurate position
by averaging only a few hours of data. This is merely
luck; the problem is that the user never knows when the
average was lucky and when it was not.
It is important to be aware
that the results plotted in Figure 2 are specific for only
that particular 24 hour period. It is very likely that
other 24 hour periods would be similar in that they would show error decreasing with time; however, the magnitude of
the error could vary significantly from day to day.
What If You Keep On Going?
What happens if you average data for more than 24 hours?
Several data sets have been collected ranging from
1,500,000 to 14,000,000 positions. In general, the average
wanders around within about 5 meters of truth for about
three weeks (or about 1,500,000 positions at once per
second). From three weeks out to as much as six months, the average position never improves to better than about 1-2 meters from truth.
Summary
The previous data indicate that averaging uncorrected data
is not a reliable way to generate dependable, accurate
positions. Be aware that after position data is averaged
together in the field, it is no longer differentially
correctable. For a successful differential correction, GPS
receivers must store much more information than just
position. Therefore, if a single averaged position is
stored in a GPS receiver, it will not have the details
that are required to correct it later via post-processing.
There are many GPS systems on the market today that allow users to average position data while in the field. This is a very dangerous feature, and I advise extreme caution in using it. If you require reasonably accurate data and you plan to average while in the field, consider the two ways that position averaging and differential correction can be used together. First, ALL of the position data can be stored, differentially corrected, and only then averaged. Alternatively, the differential corrections can be performed in real time via a telemetry link with your base receiver; this real-time correction scenario is useful primarily when users plan to navigate accurately with their GPS receivers while in the field. If either of these two techniques is used, you can have the best of both worlds.
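As a sketch of the first (correct-then-average) workflow, reusing the hypothetical helpers from the earlier examples and assuming the base and rover data have already been loaded:

    # All rover fixes are stored, corrected against the base data first,
    # and only then averaged into a single coordinate.
    corrected_fixes = differential_correct(base_known, base_measured, rover_fixes)
    final_position = average_position(corrected_fixes)

The essential point is the order of operations: averaging happens last, after the correction, never before it.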
About the Author:
Chuck Gilbert has over a decade of experience as
a GPS user. He has been employed as an applications
engineer for Trimble Navigation since 1989. If you have a
suggestion or request for a future article, please drop a
line to Chuck care of Earth Observation Magazine.