Faye works as a Senior Scientist in the Weather Impacts team at the Met Office. In this blog post she tells us about early warning systems and why robust evaluation processes are essential for ensuring effective early warning of severe weather.
Importance of Evaluation for Early Warnings
I work within the Met Office Weather Impacts Team, which develops processes and tools to support early warning systems. Experts in their field, some of my colleagues have even written a book on the subject of early warnings! While I would highly recommend this comprehensive work to anyone interested, I would like to highlight another particularly critical aspect of early warning systems: their evaluation.
What are early warnings?
In essence, early warnings are systems that warn of impending hazards, allowing people to take action to reduce the societal and economic impact of natural hazards. While this may sound like a straightforward endeavour, effective early warning systems require a co-ordinated interdisciplinary approach spanning a wide range of physical and social sciences (Figure 1). Recognising the importance of early warnings in reducing the impacts and losses resulting from natural hazards, the United Nations has an ‘Early Warnings for All’ initiative, with the aim of ensuring that every person on Earth is protected by an early warning system by 2027.
The Met Office is contributing to this effort, providing early warnings through the National Severe Weather Warning Service when there is a risk of impacts resulting from severe weather (rain, thunderstorms, wind, snow, lightning, ice, fog, and now extreme heat).
Additionally, through knowledge sharing and partnerships (for example the Weather and Climate Science for Service Partnership Programme) the Met Office is helping nations globally to implement their own early warning systems. It is expected that climate change will increase the intensity, frequency, and duration of extreme weather events. Providing people with access to timely and accurate early warnings is therefore an important part of ensuring our resilience to extreme weather and adapting to climate change.
How can we demonstrate that early warnings are effective?
There has been a shift in the types of early warning systems used for weather hazards, from threshold-based warnings to impact-based warnings. This shift is often described as moving from warning of ‘what the weather will be’ to ‘what the weather will do’. While this evolution intuitively makes sense (providing people with more specific and relevant information on how weather may affect them is likely a good thing), the added benefits of adopting impact-based forecasts and warnings have not yet been fully measured.
Recently the World Meteorological Organization released ‘Guidelines on Multi-hazard Impact-based Forecast and Warning Services’. These guidelines highlight the need to demonstrate the value of impact-based forecasts and warnings. One way to demonstrate the value of impact-based forecasts and early warnings is to develop a comprehensive evaluation strategy to assess any warnings that are issued. Evaluation of warnings can demonstrate value by quantifying the improvement of impact-based warnings over traditional weather forecasts. Additionally, evaluation can be used to measure improvement over time as warning systems are updated and refined.
How are early warnings currently evaluated?
There are many ways of evaluating warnings. Two commonly used approaches are subjective evaluation and objective evaluation. These two approaches go hand in hand, and both are necessary to fully demonstrate the value of warnings.
Subjective evaluation assesses the performance of warnings using qualitative approaches such as case studies, focus groups and expert discussion panels. It allows for a deeper understanding of the accuracy of warnings and can be used to interrogate different aspects of their performance. At the Met Office, subjective evaluation is routinely conducted for all amber and red warnings that are issued, to determine whether they provided good guidance. This approach yields detailed feedback on how well each warning forecast the timing, location, and severity of the impacts observed.
Objective evaluation relies on the use of standardised scores and such approaches are commonly used to verify traditional weather forecasts. Recent work by researchers at the Met Office has started to explore how objective evaluation can be used to evaluate impact-based forecasts and warnings. Objective evaluation is particularly useful when evaluating impact models and can be used to measure any improvements to model performance as they are refined and developed.
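As a simple illustration of what objective evaluation can look like (a minimal sketch using standard categorical verification scores and invented counts, not the Met Office's own verification system), warnings and observed impacts can be summarised in a 2x2 contingency table and scored:

```python
# Minimal sketch of objective warning verification using a contingency
# table. The scores (probability of detection, false alarm ratio,
# critical success index) are standard categorical verification
# measures; the event counts below are invented for illustration.

def contingency_scores(hits, false_alarms, misses):
    """Compute common categorical verification scores from event counts."""
    pod = hits / (hits + misses)                 # probability of detection
    far = false_alarms / (hits + false_alarms)   # false alarm ratio
    csi = hits / (hits + false_alarms + misses)  # critical success index
    return {"pod": pod, "far": far, "csi": csi}

# Hypothetical season: 18 warned-and-observed impact events,
# 6 warnings with no observed impact, 3 unwarned impact events.
scores = contingency_scores(hits=18, false_alarms=6, misses=3)
print({k: round(v, 2) for k, v in scores.items()})
# → {'pod': 0.86, 'far': 0.25, 'csi': 0.67}
```

Tracking scores like these over successive versions of a warning system gives a quantitative measure of whether refinements are actually improving performance.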
Social science studies investigating the reach of warnings and how warnings lead to action are also important to fully understand the value of warnings and how they can be improved upon. Focus groups, interviews and questionnaires can provide valuable insights into how well warnings are received, perceived, and ultimately acted upon. For example, according to post-event research conducted by the Met Office, 97% of those in the red warning area for July’s heat were aware of the warning and 91% felt that the warning was useful.
What do we need to evaluate warnings?
To evaluate whether impact-based forecasts can accurately warn of impacts from severe weather, we need data against which those forecasts can be assessed. Moving from traditional threshold-based warnings to impact-based warnings has required us to obtain data not only on what the weather did, but also on what impacts the weather caused. Both objective and subjective evaluation approaches require us to compare the impacts that were forecast against the impacts that were observed.
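As a simple illustration of this forecast-versus-observed comparison (using invented events and impact levels, not real warning data), the forecast and observed impact level for each warning event can be paired and differenced:

```python
# Hypothetical sketch: pairing the forecast impact level with the
# observed impact level for each warning event. Event records and
# the ordered impact scale are invented for illustration.

LEVELS = ["none", "low", "medium", "high"]  # ordered impact scale

events = [
    {"id": "storm-1", "forecast": "high",   "observed": "medium"},
    {"id": "storm-2", "forecast": "medium", "observed": "medium"},
    {"id": "storm-3", "forecast": "low",    "observed": "high"},
]

def level_error(event):
    """Positive = impacts over-forecast, negative = under-forecast."""
    return LEVELS.index(event["forecast"]) - LEVELS.index(event["observed"])

for e in events:
    print(e["id"], level_error(e))
```

Aggregating such per-event comparisons is what allows both objective scores and case-study discussions to be grounded in the same evidence.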
There are many sources of impact observations that can be used, including reports from companies, agencies and organisations, news articles and even social media. Recent work by researchers at the University of Exeter has highlighted how social media can be used to identify impactful weather. Additionally, crowd-sourced initiatives can help to provide the impact observations required to evaluate impact-based forecasts and warnings, for example the Met Office Weather Observation Website, which allows users to submit observations of impacts.
Future challenges and opportunities for evaluation of early warnings
While many observations of impacts are available, they are rarely created for the purpose of evaluating warnings. As such, much effort is required to collect, analyse, and format impact observations to use in evaluation. Current research is demonstrating how pulling together observations from many sources is important in order to develop robust and well-rounded observations of impacts that suit the needs of warning evaluation. Development of systems and processes that can identify, format, and compile observations automatically is an area of active research within the Met Office and beyond.
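As an illustration of what such a compilation step might involve (a hypothetical sketch, not the Met Office's actual system; all field names and reports are invented), impact reports from different sources could be normalised into a common record format:

```python
# Hypothetical sketch: normalising impact reports from different
# sources (news articles, social media posts) into one common record
# format suitable for warning evaluation. All fields are invented.

from dataclasses import dataclass

@dataclass
class ImpactObservation:
    source: str    # where the report came from
    hazard: str    # e.g. "wind", "flood"
    location: str
    date: str      # ISO date of the impact

def from_news(article):
    return ImpactObservation("news", article["hazard"],
                             article["place"], article["published"])

def from_social(post):
    # Keep only the date part of the post timestamp.
    return ImpactObservation("social_media", post["tag"],
                             post["geo"], post["timestamp"][:10])

observations = [
    from_news({"hazard": "wind", "place": "Exeter", "published": "2022-02-18"}),
    from_social({"tag": "flood", "geo": "York", "timestamp": "2022-02-20T09:30"}),
]
print([o.source for o in observations])
```

Once reports share a common structure like this, they can be filtered, matched to warning areas and dates, and fed into the same evaluation pipeline regardless of where they originated.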
A significant challenge when evaluating warnings against observations of impacts is how to account for mitigating actions that were taken because of a warning. The overarching goal of issuing warnings is to provide people with information so that they can take action to stay safe and thrive. Hopefully this results in fewer impacts being observed. This poses a challenge for evaluation approaches that compare predicted impacts against observed impacts, as it is difficult to determine whether a lack of observed impacts was due to an incorrect warning or because people took protective action. Collecting information on the actions that were taken, in addition to the impacts that were observed, may help us to address this challenge.
Just as the creation of early warning systems requires a co-ordinated interdisciplinary approach, so too does the evaluation of early warning systems. Luckily for me this means that I get to work with a diverse group of incredibly talented people both within the Met Office and with external academic and international partners. Although many challenges remain for the evaluation of early warnings, I’m confident that working together we can start to better quantify the value that early warning systems provide.