Rumors and misinformation that spread online during emergencies, like hurricanes Helene and Milton, raise concern that content created by artificial intelligence (AI) might make the job of emergency managers even more difficult. After Hurricane Helene, law enforcement and government agencies were forced to devote precious time to combating conspiracy theories, including allegations that land had been bulldozed to cover up dead bodies. Countering this misinformation, some of it AI-generated, understandably took a toll on responder morale. But at the same time, AI’s ability to ingest and synthesize data on hazards, vulnerabilities, and capacities could also prove invaluable in addressing one of the biggest long-standing challenges of emergency management: truly engaging the whole community.
What “Whole Community” Is and Why It Matters
For the Federal Emergency Management Agency (FEMA) and other emergency management agencies, the whole community is both a philosophy and an approach: one in which all community members – people, organizations, and businesses – work together to mitigate, prepare for, respond to, and recover from disasters, and together share the benefits of these emergency management efforts.
Helping the whole community is the mission of emergency management. Because no one is immune to disasters and their effects, emergency management must plan to support all community members. This support may be indirect, like structural mitigation to protect critical infrastructure. Or it may be direct, such as individual assistance programs available to survivors after a disaster.
Likewise, each member of the community has a support role to play. Decades of evidence show that everyone can address risk – from preparing households for disasters to the more general building of community and social infrastructure essential to resilience. These individual actions collectively create whole community resilience to disaster, which in turn bolsters the resilience of individuals and households within the community.
Current Challenges to Whole Community Engagement
Truly engaging the whole community can be daunting. It takes time, which is often in short supply during an emergency. For instance, in response and recovery situations, decisions about allocating limited resources must be made quickly based on limited information.
Engaging the community also requires understanding its specific needs, which can be difficult given the complex interplay of vulnerability, hazard, and capacity. Widely used vulnerability metrics, which incorporate socioeconomic factors, offer only a general view and are not designed to capture community-specific needs and capacities that are products of place and scale. Hurricane Maria, which struck Puerto Rico in 2017, offers an example. Spanish is the dominant language on the island, so people who spoke only English were at a disadvantage in getting their needs met – even though vulnerability metrics typically treat limited English proficiency as the marker of vulnerability. The hurricane also displaced many Puerto Ricans, many of whom spoke English as a second language, to the mainland, where they were out of sight and out of mind, making it difficult to ascertain their needs and support them in the recovery process. Beyond this example, metrics might also fail to capture other elements related to vulnerability, such as certain access and functional needs characteristics.
True engagement means partnering with communities to manage disaster risk. This requires dialogue, not just one-way information sharing. As more is learned about community needs, goals, and capacities, these elements should be incorporated into emergency plans. Understanding a community is critical for local-level planning, but must be supported by regional and higher-level infrastructures. Unfortunately, efforts to use appropriate communication channels, languages, and (trusted) messengers for that dialogue can be limited due to cultural norms of emergency management, resources required for such engagement, and other factors.
Ways That AI Can Support Whole Community Engagement
Current emergency management structures are adaptable in some ways. However, they are also beholden to bureaucratic rules that make it difficult to meet whole community needs. AI technologies may be suited to tackle some of these challenges.
Take chatbots, for example. By talking to people in simple, clear language, chatbots might help with crisis-related communication before, during, and after emergencies. They might be particularly useful for marginalized populations who may be less able to navigate bureaucratic assistance programs. A chatbot on, say, a relief worker’s phone could pose conversational questions that determine a person’s needs and eligibility. Chatbots could be deployed to spread awareness about local hazards, as well as the steps people could take to prepare. People have already begun to turn to chatbots for mental health support, and that too might grow in use after a disaster.
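To make the idea concrete, a needs-and-eligibility conversation like the one described above could, at its simplest, be driven by rules mapping plain-language answers to referrals. The sketch below is purely illustrative: the questions, referral names, and logic are invented assumptions, not real FEMA criteria or any deployed system.

```python
# Hypothetical sketch of a rule-based needs-assessment flow, like one a
# relief worker's chatbot might follow. All questions, referral names,
# and rules are illustrative assumptions, not real eligibility criteria.

QUESTIONS = [
    ("housing", "Is your home currently safe to live in? (yes/no)"),
    ("medical", "Does anyone in your household need medical equipment or medication? (yes/no)"),
    ("language", "What language do you prefer for follow-up messages?"),
]

def assess_needs(answers: dict) -> list:
    """Map plain-language answers to possible assistance referrals."""
    referrals = []
    if answers.get("housing", "").lower().startswith("n"):
        referrals.append("temporary shelter program")
    if answers.get("medical", "").lower().startswith("y"):
        referrals.append("medical-needs registry")
    # Record the language preference so later outreach uses the right channel.
    if answers.get("language"):
        referrals.append("follow-up in " + answers["language"])
    return referrals

print(assess_needs({"housing": "no", "medical": "yes", "language": "Spanish"}))
# → ['temporary shelter program', 'medical-needs registry', 'follow-up in Spanish']
```

In practice, a deployed chatbot would layer natural-language understanding on top of such rules; the point here is only that the underlying triage logic can stay simple, auditable, and easy for emergency managers to review.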
Other forms of AI, like prediction algorithms, might be used to match people in need with businesses or nonprofits with resources or, perhaps, identify people who are in need but unable to contact emergency managers. Prediction algorithms can parse massive amounts of data to discern trends that might otherwise not be apparent. In emergency management, algorithms already improve hazard prediction for wildfires, tornadoes, hurricanes, and other risks. Existing AI tools help homeowners and emergency planners identify vulnerabilities, build community resilience, and aid crisis response. These could be expanded to analyze the social dynamics of risk across an entire community and then made available to all involved.
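The matching idea above can also be sketched in miniature. The toy example below greedily pairs requests with providers that still have capacity and flags anyone left unmatched for direct outreach; the organizations, resources, and capacities are invented for illustration, and a real system would use far richer data and prediction models.

```python
# Illustrative sketch of matching survivors' needs to organizations with
# resources, the kind of pairing a matching algorithm might automate.
# Provider names, resources, and capacities are invented for the example.

def match_needs(requests, providers):
    """Greedily assign each (person, need) request to a provider that
    stocks that resource and still has capacity.
    Returns (assignments, unmet)."""
    capacity = {name: cap for name, _, cap in providers}
    stock = {name: resource for name, resource, _ in providers}
    assignments, unmet = [], []
    for person, need in requests:
        for name in capacity:
            if stock[name] == need and capacity[name] > 0:
                capacity[name] -= 1
                assignments.append((person, name))
                break
        else:
            unmet.append(person)  # flag for direct outreach by responders
    return assignments, unmet

requests = [("Ana", "water"), ("Ben", "water"), ("Cruz", "generator")]
providers = [("Relief depot", "water", 1), ("Hardware co-op", "generator", 2)]
print(match_needs(requests, providers))
# → ([('Ana', 'Relief depot'), ('Cruz', 'Hardware co-op')], ['Ben'])
```

Notably, the unmet list is as valuable as the assignments: it surfaces people whose needs the available resources cannot cover, which is exactly the gap the text suggests AI could help emergency managers find.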
Toward an AI Whole Community Approach
Although AI could be a powerful tool for whole community engagement, it is likely to require additional governance, training, and research and development. Government agencies and first responders may need to adapt AI regulations and procedures, such as the National Institute of Standards and Technology Artificial Intelligence Risk Management Framework, to the emergency management context.
Emergency management workforces will also need training to use AI effectively and safely in different roles, and personnel may harbor concerns about their jobs being replaced by AI. Leaders should be aware of and prepared to address these concerns.
Given ongoing challenges with bias in AI, any AI tools will need to be culturally sensitive and free of unwarranted assumptions that could lead to inequitable distribution of benefits or mistrust in the tools themselves. Here, emergency managers can build on an existing track record of involving communities in disaster-related knowledge creation by partnering with them on AI research, development, and implementation efforts. To guard against AI-generated misinformation, emergency managers can ensure that their own social media posts clearly cite evidence, acknowledge uncertainty, and engage users with calls to action.
Emergency managers today recognize the importance of engaging the whole community in their work. AI has the potential to accelerate these efforts. Using AI as a whole community tool may require, for instance, that policymakers develop regulations or policies that encourage AI tools to protect the whole community from harm while maintaining benefits. Emergency management leaders could develop training for their workforces to effectively deploy prediction models in a way that accounts for potential uncertainties and does not create false certainty. Finally, those in the research community can continue to advance knowledge on the benefits and risks that emergency management AI technologies might bring to communities. All these activities must engage – and be conducted in partnership with – the whole community, ensuring that emergency management benefits all who need it.
Douglas Yeung
Douglas Yeung is an associate director of the Management, Technology, and Capabilities Program within the RAND Homeland Security Research Division, a senior behavioral and social scientist at RAND, and a professor of policy analysis at the Pardee RAND Graduate School.
Aaron Clark-Ginsberg
Aaron Clark-Ginsberg is a behavioral and social scientist at RAND and a professor of policy analysis at the Pardee RAND Graduate School.