GIS Identifies Areas of Small Risk for Natural Disasters
By Clyde H. Spencer
Natural disasters are
topping the news lately. Northern California experienced
devastating floods this January, followed by the severe
Japanese earthquake near Kobe. Last year, Southern
California experienced the Northridge earthquake, followed
by severe wildfires in the summer. Man will never be able
to live on Earth without risk from natural disasters.
However, much of the loss of life and property can be
directly attributed to a lack of planning and foresight.
Some areas of the Earth's surface are more hazardous than
others, and placing large populations in these areas is
simply a disaster waiting to happen.
It is impossible to build engineering structures that will be earthquake-proof.
There are too many unknowns related to the general
severity, which is a function of duration, acceleration,
and mix of frequencies of P, S, and surface waves. An
unpredictable mix of amplitudes can result in constructive
interference that peaks well beyond the average
experienced in the general area. There even comes a point
for most buildings when the law of diminishing returns
makes it prohibitively expensive to design earthquake
resistance above a certain threshold. Therefore, to reduce
the risk of earthquake damage, one should site structures
- when possible - in areas that have little or no risk of
earthquakes.
In 1969, Ian L. McHarg,
noted landscape architect, wrote an influential book
entitled Design With Nature. If you can find a copy
somewhere, it is recommended reading. My personal copy is
a discard from the Redwood City Public Library. In it, and
through his previous work, he popularized the approach of
transparent polygonal overlay analysis for evaluating
optimal land use. His use of colored and variable density
acetate overlays anticipated the development of raster
geographic information system (GIS) overlay analysis. One
might arguably call McHarg the spiritual father of GIS
overlay analysis. In any event, he has been a seminal
influence in the objective analysis of landscape for
purposes of rational planning and development.
Interestingly, his approach of simultaneously evaluating
several layers of information has not been duplicated in
commercial GIS.
Most GISs require performing algebraic or Boolean manipulations on two layers (or three at most) and compounding the results pairwise for the final output. This is tedious if one has more than a half dozen data layers to evaluate simultaneously. In fact, when one converts a multispectral image into a thematic map of land-use classes, appropriately reclassifying the theme digital number (DN) values can shortcut the process of combining different data layers derived from field mapping or extant maps, if the intent is to do a polygonal analysis of weighted classes.
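For readers who want to see the mechanics, here is a minimal sketch (not any particular commercial package's syntax) of how pairwise compounding works once each layer has been rasterized to a common grid and recoded to weighted risk values. The small test arrays are hypothetical.

```python
import numpy as np

def combine_pairwise(layers):
    """Compound a stack of recoded risk layers two at a time,
    as most raster GISs of the era required."""
    composite = layers[0].astype(np.int32)
    for layer in layers[1:]:
        composite = composite + layer  # one pairwise operation per pass
    return composite

# Hypothetical 3x3 test layers standing in for two recoded risk maps.
quake = np.array([[50, 30, 0], [30, 10, 0], [10, 0, 0]])
tornado = np.array([[0, 10, 30], [10, 30, 50], [30, 50, 50]])
print(combine_pairwise([quake, tornado]))
```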
As an example of the
computer application of polygonal overlay analysis, say
the federal government wished to locate an environmentally
dangerous facility - such as a nuclear fuel reprocessing
plant or experimental nuclear reactor - with absolutely
minimal chance of some natural disaster causing a release
of radioactive nuclides into the atmosphere or
groundwater. Some essential considerations would be:
• Earthquake damage risk
• Tornado frequency
• Hurricane frequency
• Flood and forest fire
potential
• Bedrock geology
The first step would be to
evaluate the earthquake, tornado, and hurricane risk.
Then, when general regions were identified that minimized
these risks, larger scale maps showing bedrock geology and
topography could be analyzed further. One would want to site above the 1,000-year flood plain, on a relatively flat area, away from steep slopes that could produce a rock avalanche or channel waters from a cloudburst. One
could use a digital elevation model (DEM) to both identify
the benches of the flood plains and to derive a slope map.
A proximity analysis could be used to identify flat to
gentle slopes at a safe distance from steep slopes and
mountain stream courses.
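As a sketch of how that DEM-based screening might be scripted, the following derives a slope map and flags flat ground at a safe distance from steep slopes. The 30-meter cell size, 5-degree "flat" threshold, 20-degree "steep" threshold, and 500-meter buffer are all assumptions chosen for illustration.

```python
import numpy as np
from scipy import ndimage

def slope_degrees(dem, cell_size=30.0):
    """Slope map, in degrees, from a regularly gridded DEM."""
    dz_dy, dz_dx = np.gradient(dem.astype(float), cell_size)
    return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

def safe_flat_areas(dem, cell_size=30.0, flat_max=5.0,
                    steep_min=20.0, buffer_m=500.0):
    """Flat to gentle slopes that are also a safe distance from steep slopes."""
    slope = slope_degrees(dem, cell_size)
    steep = slope >= steep_min
    # Distance, in meters, from each cell to the nearest steep cell.
    distance = ndimage.distance_transform_edt(~steep) * cell_size
    return (slope <= flat_max) & (distance >= buffer_m)
```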
The bedrock geology is necessary for several reasons. It is desirable to have a stable building foundation since, all other things being equal, the stronger the foundation material, the less structural damage can be expected in the unlikely event of an earthquake. Should the unthinkable happen and
the hazardous facility be damaged, one does not want
anything percolating into the ground. Therefore, the site
should be chosen for a shallow soil and impermeable
bedrock. The judgement of an experienced engineering
geologist might be necessary to properly code this data
layer. However, as a first approximation, one could read
the type of rocks from a geologic map and obtain what is
called the seismic-wave propagation velocity from a
reference book. High seismic velocities generally are
desirable for foundations. Low seismic velocities,
particularly those associated with sediments, are to be
avoided. One cannot get the detail necessary on a national scale for foundation site selection. The same is true for flooding potential. So, a facility siting analysis like this would have to be done in two steps.
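A first-approximation sketch of coding that bedrock layer might look like the following. The rock types and P-wave velocities are illustrative values of the kind found in reference books, not a definitive table, and the score breakpoints are assumptions.

```python
# Approximate P-wave propagation velocities, km/s (illustrative values only).
ROCK_VELOCITY_KMS = {
    "granite": 5.5,
    "basalt": 5.8,
    "limestone": 4.5,
    "sandstone": 3.0,
    "unconsolidated sediment": 1.5,
}

def foundation_risk_score(rock_type):
    """Higher seismic velocity -> more competent foundation -> lower risk score."""
    velocity = ROCK_VELOCITY_KMS.get(rock_type.lower(), 0.0)
    if velocity >= 5.0:
        return 0    # desirable foundation material
    if velocity >= 3.5:
        return 10
    if velocity >= 2.0:
        return 30
    return 50       # low-velocity sediments and unknowns: avoid
```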
Let's do the first step.
Examine the earthquake risk map (Figure 1). This is a
shaded isoline map representing the subjective opinion of seismologists of the U.S. Geological Survey as to the damage risk, based on the location of known earthquake faults and the record of historical earthquakes. One can
see that there are four arbitrary levels of risk. A simple
Boolean AND/OR analysis approach (with only 0 and 1 values
coded) may overstate or understate the risks.
The first step in analysis
would be to digitize the line map so that it can be input
into a GIS. The highest-risk area was coded with a value (50) equal to the number of tornadoes in the highest-frequency tornado region. One might reasonably argue that
any particular earthquake affects a much larger area than
a single tornado and therefore earthquakes should be
weighted more heavily. However, since this is primarily an
exercise in illustrating overlay analysis principles, we
will keep it simple.
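One way to express that recoding in a script: the digitized map gives each cell a zone number from lowest to highest risk, and a lookup table converts zones to weights. Only the top value of 50 is stated above; the intermediate weights here simply mirror the tornado coding described next and are an assumption.

```python
import numpy as np

def recode_quake_zones(zones):
    """zones: integer array of 0-3, lowest to highest USGS damage-risk level."""
    lookup = np.array([0, 10, 30, 50])  # weights; only 50 is given in the text
    return lookup[zones]
```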
The converted digital map
of tornado frequency (Figure 2) is a shaded isoline map
showing the areal occurrences of tornadoes over a number
of years. The values were recoded from brightness to a
digital number that corresponds to the occurrence
frequency interval. The values are: 0 for unrecorded to fewer than 10 tornadoes; 10 for 10 to fewer than 30; 30 for 30 to fewer than 50; and 50 for 50 or more tornadoes.
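A short sketch of that recoding, assuming the digitized map stores a raw tornado count (or interval value) per cell:

```python
import numpy as np

def recode_tornado_counts(counts):
    """Recode tornado occurrence counts to the interval DN values 0/10/30/50."""
    dn = np.zeros_like(counts)
    dn[(counts >= 10) & (counts < 30)] = 10
    dn[(counts >= 30) & (counts < 50)] = 30
    dn[counts >= 50] = 50
    return dn
```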
Hurricanes would next be
dealt with in a manner similar to what was done with
tornadoes. Although hurricanes, unlike tornadoes, usually are accompanied by flooding, the weighting could be the same since potential flooding could be dealt with explicitly in another data layer. We won't
actually work with the hurricane data layer in this
example, however.
The earthquake and tornado
data layers can be combined by arithmetic addition. The
resulting map could contain values from 0 to 100,
although no values greater than 80 were actually produced.
As coded, the values with the highest numbers should be
avoided as potential location sites.
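The combination step itself is a single pixel-by-pixel addition; a sketch (promoting the layers to a wider integer type so an 8-bit raster cannot wrap around):

```python
import numpy as np

def add_layers(quake_dn, tornado_dn):
    """Composite risk, 0-100, from two layers coded 0-50."""
    return quake_dn.astype(np.int16) + tornado_dn.astype(np.int16)
```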
Once general regions of low
risk are identified, then these regions can be further
analyzed by the use of additional overlays as explained
above. Another analysis approach would have been to
multiply the two data layers, pixel by pixel. That would
give a much greater weight to co-occurrences of these two
risks. However, that would require rescaling everything to
prevent an overflow condition with digital number values
greater than 255.
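A sketch of that multiplicative alternative: the raw product of two 0-50 layers can reach 2,500, so it has to be rescaled back into the 0-255 range of an 8-bit raster. The linear rescaling shown here is one reasonable choice, not the only one.

```python
import numpy as np

def multiply_layers(quake_dn, tornado_dn, max_value=50):
    """Pixel-by-pixel product, rescaled to 0-255 to avoid 8-bit overflow."""
    product = quake_dn.astype(np.int32) * tornado_dn.astype(np.int32)
    return np.round(product * 255.0 / (max_value * max_value)).astype(np.uint8)
```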
It might be desirable to perform low-pass filtering of the resulting image map
to blur the boundaries. The original boundaries were
arbitrary and the original maps were stylized to make them
easier to interpret. However, for our application, it
makes sense to explicitly deal with the reality that there
is a transition from one probability zone to the next. A
large convolution kernel for blurring would be
appropriate.
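For instance, a uniform (box) kernel applied over a fairly large neighborhood would smear the arbitrary zone boundaries into gradual transitions; the 15-cell kernel size below is an assumption.

```python
from scipy import ndimage

def blur_boundaries(risk_map, kernel_size=15):
    """Low-pass filter the composite risk map with a large uniform kernel."""
    return ndimage.uniform_filter(risk_map.astype(float), size=kernel_size)
```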
Figure 3 is the sum of the
two data layers and has been pseudocolored with red hues
of decreasing intensity. Speaking in general terms, the
riskiest area in the country would be central Oklahoma and
Kansas. (A Kansas salt mine was once seriously considered
as an underground repository for nuclear waste, but that's
another story.) The safest area would appear to be in
southern Texas, around the Rio Grande. Of slightly greater
risk (because of potential small earthquakes) would be the
Rocky Mountains and the extreme northern Great Plains.
Also, an elongate area northwest of the Appalachians, comprising eastern Kentucky, West Virginia, and western Pennsylvania, has a combination of low earthquake damage risk and infrequent tornadoes.
This example is a rather simplistic approach to overlay analysis, chosen because it is easy to follow and understand. There are more sophisticated things that could have been done. Boolean algebra, using a sort of go/no-go test, is often used when a particular parameter absolutely must not be present, or when two or more conditions must occur simultaneously to force an acceptance or rejection decision. Complex mathematical formulas can be applied to each layer to recode the data.
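A sketch of such a go/no-go screen, with hypothetical layer names: any excluded condition disqualifies a cell outright, while all required conditions must coincide for acceptance.

```python
import numpy as np

def go_no_go(flood_plain, endangered_habitat, shallow_soil, impermeable_rock):
    """Boolean screen: True marks cells that pass every test."""
    excluded = flood_plain | endangered_habitat   # any one of these disqualifies
    required = shallow_soil & impermeable_rock    # all of these must be present
    return required & ~excluded
```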
Proximity analysis would be
important both for cost and availability of construction
materials and subsequent shipment of nuclear materials. In
the real world, there are many other things that might
well be considered in the analysis, not the least of which
might be the environmental impact of construction on rare
or endangered species.
However, all of this goes
beyond what space allows here. The intent was to
demonstrate how a traditional, hand method of overlay
analysis can be implemented with a modern raster GIS, with
very little re-learning of principles necessary.
About the Author:
Clyde H. Spencer is a consultant specializing in
technical and market aspects of remote sensing and GIS. In
addition, he is currently teaching GIS courses part-time
for the University of California, Berkeley. He can be
reached at 408-263-6779.