We aim to develop benchmarks of available weather forecasts to inform decision-making for large-scale dissemination to end-users. Our approach is distinct in that it is human-centered and operational: our benchmarking is not focused solely on accuracy from a meteorological perspective, but also takes into account end-users' needs and the constraints of real-time forecasting operations.

Benchmarking is a critical step in democratizing weather forecasting, especially in the age of AI. While papers on the performance of AI weather models and benchmarking studies are growing in number, they often consider only standard meteorological metrics and so cannot inform decision-making, for example by a government for a given region.

Our benchmarking efforts are conducted by multidisciplinary teams and centered on building feedback loops between the research and dissemination teams. We use the best available validation data and human-centered metrics, and we draw on fundamentals of meteorology, economics, and AI, as well as practical constraints.
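To illustrate the distinction between a standard meteorological metric and a decision-relevant one, here is a minimal sketch in Python. The data, the 20 mm action threshold, and the metric choices (RMSE versus hit rate and false-alarm rate for threshold exceedance) are all hypothetical assumptions for illustration, not the metrics or thresholds used in our benchmarks.

```python
import numpy as np

# Hypothetical daily rainfall observations (mm) and a noisy forecast.
rng = np.random.default_rng(0)
observed = rng.gamma(shape=2.0, scale=10.0, size=365)
forecast = observed + rng.normal(0.0, 5.0, size=365)

# Standard meteorological metric: root-mean-square error.
rmse = np.sqrt(np.mean((forecast - observed) ** 2))

# Decision-relevant metric: does the forecast correctly signal that
# rainfall will exceed the threshold at which a user takes action?
threshold = 20.0  # mm; hypothetical action threshold
event = observed > threshold
warned = forecast > threshold
hit_rate = np.mean(warned[event])
false_alarm_rate = np.mean(warned[~event])

print(f"RMSE: {rmse:.2f} mm")
print(f"Hit rate: {hit_rate:.2f}, false-alarm rate: {false_alarm_rate:.2f}")
```

Two forecasts with similar RMSE can differ sharply in hit rate and false-alarm rate, which is what matters to a farmer deciding whether to plant.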

Featured Project

Researchers are combining new advances in artificial intelligence with knowledge of monsoon dynamics to predict the onset of the rainy season for Indian farmers.