Welcome to the latest edition of MapLab. Sign up to receive this newsletter in your inbox here.
This spring, the U.S. Midwest and Southeast were deluged by rain. More than a month’s worth of wet stuff dropped in a single day in some areas, with flash floods hitting Tennessee, Kentucky, Texas, Oklahoma, North Carolina, and Florida. Downpours in Iowa, Illinois, and Missouri caused record crests on the Mississippi River, at least $3 billion in damage, and three deaths.
Yet the vast majority of properties affected by this unprecedented soak were not insured by the National Flood Insurance Program, which provides federally backed flood insurance policies for homeowners and renters. According to environmental researchers and disaster protection watchdogs, part of the reason is that FEMA, the U.S. agency that administers the program, fails to adequately communicate flood risk to people who settle in vulnerable areas.
For example, FEMA’s colorful maps show the relative probability of a significant flood occurring in communities across the country. These metrics are based on how often, and how severely, the area has been deluged in the past. But that leaves out critical risk factors, including how climate change has altered storm patterns and raised sea levels, or where new development has covered permeable surfaces with pavement and concrete.
That has led numerous researchers to conclude that FEMA underestimates flood risk, by a long shot. According to a study from 2018, 41 million Americans are at risk of experiencing a 100-year flood, nearly triple FEMA’s official count.
What’s more, FEMA has kept other facts away from citizens who might find them useful. For example, even though the agency keeps a list of properties that have flooded repeatedly, most states have no disclosure requirement for such information when a home changes hands. And most residents can only learn about a property’s history after they take out a flood insurance policy. But the chances of doing so might be pretty slim, if the true level of risk has never been communicated.
After years of prodding, FEMA is finally beginning to spill some of its secrets. In June, the agency released a big data set that contains all the policy claims that have been filed since the beginning of the flood insurance program in the late 1960s. It’s now possible to see where, and how often, requests for insurance money have been made over the decades, by zip code or Census tract. This lets researchers correlate changes in claims with other trends, such as storm patterns. And it shows at a more localized level where the biggest gaps are in flood risk and insurance subscriptions.
But there’s still a huge missing piece, said Anna Weber, a senior policy analyst at the Natural Resources Defense Council, who has been analyzing and making maps out of the new FEMA data. Even though consumers can learn the history of, say, a used car they're considering, or a property that used to be a meth lab, it’s still not possible to easily learn about their own property’s flood claim history. People might make different choices if they did. So might developers, like the ones building a big new shopping center smack in the Missouri flood plain.
“We’d save a lot of heartache and stress if people had access to this information in the way we did about other parts of our lives,” Weber said.
Read more: “America Is Flooding, and It’s Our Fault.” (CityLab)
Readers react: A friendly neighborhood “digital twin”?
The last edition of MapLab discussed so-called “digital twin” technologies, whereby officials can track and control the movements of individual vehicles in the real world, using a SimCity-esque virtual replica. A few days later, I wrote about how the L.A. Department of Transportation has tried to rally other U.S. cities around such a vision for the future.
Advocates say it could help optimize traffic flows; opponents raise privacy concerns. And numerous MapLab subscribers sent in their wide-ranging reactions.
One reader, Rene St. Marie, saw nothing more than a “privacy infringement” to “surveil everything,” useless in the face of bigger social issues:
How is it senior leadership at Microsoft, Google, and IBM don’t see a problem with this? Worse, how are politicians allowing this to happen? Here’s a thought… why don’t you have AI and digital twin find the most abusive polluters of CO2, or better yet put all this technology to work at solving the imminent issues of our overheating planet.
Another, Sarma Sadhu, could see the possible benefits of new traffic-taming technology—but worried about what happens when the humanity of drivers is taken out of the equation:
Enhancing [traffic] efficiency, constantly harnessing the ever developing technology, is always welcome. But it cannot be the sole factor. We cannot afford to neglect the human factor, whatever the field. Willingness of the public to observe prescribed road rules … is also important.
A third reader, Chris, said he now essentially expects that his movements are watchable any time he’s in public. But he asked a different question:
Should Americans rethink this whole notion of data privacy and which actors they’re fearful of? Why is it ingrained in us to wring our hands over a government having this sort of info but not private industry? Even with the recent Facebook scandals, we’re still nowhere near as concerned, as a culture, with that sort of intrusion.
Read my full story on “digital twins” here.
Speaking of flood risk, Kansas researchers think they have a better way to measure it. (Earth) ♦ Speaking of flood insurance, one Illinois town’s levee will not be reaccredited by FEMA’s program because it just keeps getting deluged. (Southern Illinoisan) ♦ A Berlin-based artist crafts models of human organs using old paper maps. (The Guardian) ♦ Digital twins, be gone: New York City is mulling a ban on cellphone data collection. (New York Times) ♦ Navigation by memory: even poison frogs build mental maps. (National Geographic)
Love MapLab? Share it with friends. They can sign up here.