Results Report:
The search for the perfect “recipe” for rain-snow partitioning models

January 2023 - Led by Keith Jennings
Written by Sonia Nieminen

How your observations help assess the performance of various rain-snow partitioning models.

In the same way that a baker might try multiple recipes to bake the perfect loaf of bread, scientists often compare how different models perform based on the data they use, how they are constructed, and how the end product shapes up. Instead of flour and yeast, models have data as “ingredients”, and the “recipe” is a way of analyzing the data that best represents the real world.

Just like the quest for the perfect loaf, many hydrologists are seeking the best model to predict the temperature at which rain transitions to snow in a way that is specific to region and storm conditions. As we explain here, the rain-snow transition has big implications for water budgets, safety, and other societal impacts.

The wealth of Mountain Rain or Snow observations that you have submitted in current and previous seasons serves as an excellent standard against which to compare the performance of various models.

Here is an overview of what we did.

We compared 2,248 of your ground-based observations from the northern Sierra Nevada from fall 2019 to early 2021 to model-based predictions of precipitation phase. Understanding variation across different models is like comparing the taste, texture, and aroma of different bread recipes. For each observation of rain, snow, or mixed precipitation, we matched it with certain types of data - like temperature and elevation - entered this information into each model, and then checked how well the results matched what was actually happening on the ground.

In total, we compared fourteen models, which fall into a few different categories: temperature threshold, temperature range, and binary logistic regression (an equation that uses a few different variables). The “ingredients” used by the models included air temperature, wet bulb temperature, and dewpoint, which are three different ways of measuring how much energy is in the air. Only dewpoint and wet bulb temperature take humidity into account. Temperature threshold models use a single temperature for prediction (above, rain; below, snow), while temperature range models also predict mixed precipitation between certain temperatures.
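To make the two simpler model categories concrete, here is a minimal Python sketch of a threshold model and a range model. The temperature cutoffs are illustrative placeholders, not values from the study:

```python
def classify_threshold(temp_c, threshold=1.0):
    """Single-threshold model: snow at or below the cutoff, rain above.
    The 1.0 degree C default is an illustrative placeholder."""
    return "snow" if temp_c <= threshold else "rain"

def classify_range(temp_c, snow_max=0.0, rain_min=3.0):
    """Range model: all snow below snow_max, all rain above rain_min,
    and mixed precipitation in between (illustrative bounds)."""
    if temp_c <= snow_max:
        return "snow"
    if temp_c >= rain_min:
        return "rain"
    return "mixed"

print(classify_threshold(-2.0))  # snow
print(classify_range(1.5))       # mixed
```

A logistic regression model would replace these hard cutoffs with an equation that turns several variables (such as temperature and humidity) into a probability of snow.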

With these “recipes” for rain-snow partitioning, we determined the variability, success rate, and bias of each model (bias measures how often a model predicts a given phase compared to how often observers reported that phase). In addition, we analyzed ground-based radar that looks at falling precipitation to find where its reflectivity changes. Finally, we utilized advanced precipitation measurement tools from the NASA Global Precipitation Measurement (GPM) mission.
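As a rough illustration of the success-rate and bias metrics described above, the following sketch computes both from paired lists of predicted and observed phases. The example data are made up for demonstration:

```python
def success_rate(predicted, observed):
    """Fraction of cases where the predicted phase matches the observation."""
    hits = sum(p == o for p, o in zip(predicted, observed))
    return hits / len(observed)

def phase_bias(predicted, observed, phase="snow"):
    """Difference between predicted and observed frequency of a phase,
    in percentage points; positive means the model overpredicts that phase."""
    pred_freq = sum(p == phase for p in predicted) / len(predicted)
    obs_freq = sum(o == phase for o in observed) / len(observed)
    return 100 * (pred_freq - obs_freq)

# Made-up example: four reports, one mismatch
pred = ["snow", "rain", "rain", "snow"]
obs = ["snow", "snow", "rain", "snow"]
print(success_rate(pred, obs))        # 0.75
print(phase_bias(pred, obs, "snow"))  # -25.0 (model underpredicts snow)
```

A negative snow bias, as in this toy example, corresponds to the rain-overprediction tendency discussed below.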

How the models measured up to your observations.


The partitioning methods varied in how well they matched the reports of precipitation phase on the ground. The average snow frequency across all fourteen models was 60.7% (observers reported snow 64% of the time), with a standard deviation of 18% (the minimum predicted snow frequency was 23.5% and the maximum was 84.7%). This represents a lot of variability – the models did not agree with each other on how often to predict snow given the same inputs.

The figure to the left shows the success rate of each partitioning model (abbreviated on the y-axis) in the context of the air temperature on the ground (x-axis). The right panel of the figure shows the success rates when mixed precipitation was classified as rain. Darker colors correspond to poorer success rates. Success rates drop at and just above freezing (0°C to 10°C).

The figure to the right compares the performance of two partitioning techniques: IMERG, a product of the NASA GPM mission (blue), and a 0.5°C wet bulb temperature threshold (mint green). The actual snowfall frequency computed from observations is shown as a black dotted line. Notice that the success rates of IMERG and the wet bulb threshold (solid blue and green lines) decreased for air temperatures between 0°C and 10°C, and that the snowfall frequencies of both (dashed blue and green lines) were lower than the observed snowfall frequency within this range, meaning both underpredicted snow.

We also found that the method with the highest success rate (71%) uses a dewpoint temperature of 0.0°C as the threshold for predicting snow. Overall, the fourteen models tended to overpredict rain. Interestingly, the NASA GPM product that predicts precipitation phase from meteorological information also overpredicted rain, forecasting liquid precipitation with a bias of +17.9%. This research also showed that models that consider humidity (via dewpoint and wet bulb temperature) are more effective than those that use air temperature alone.


Why does it matter that there is so much variability in rain-snow partitioning models?

The fact that these models’ precipitation phase partitioning results are variable highlights the inadequacies in our current rain-snow partitioning methods. In areas like the Sierra Nevada, recognizing the tendency for models to predict rain when it is actually going to snow is crucial for assessing the risks posed by storms. In addition, analyzing model performance helps scientists learn about how the region’s unique meteorological characteristics impact the results of modeling techniques.

In order to further improve our methods of determining whether rain or snow will fall, more observations are needed. Each observation of precipitation phase helps us to more accurately assess the performance of rain-snow partitioning. Crucially, observations with greater spatial and temporal variety are needed so that scientists can perform similar analyses of rain-snow partitioning techniques for other regions. This season of Mountain Rain or Snow will help us take one step closer to doing so!

This article is a summary of recent research conducted by our team of Mountain Rain or Snow scientists. The background information, data, and figures are from:

Jennings, K.S., Arienzo, M.A., Collins, M., Hatchett, B., Nolin, A.W., and Aggett, G.R. (In review). Crowdsourced Data Highlight Precipitation Phase Partitioning Variability in Rain-Snow Transition Zone. Earth and Space Science.

Results Report:
Predicting snow above freezing

November 2022 - Led by Keith Jennings and Nayoung Hur
Written by Sonia N

Using your observations to predict the probability of snow and rain.

The first step toward improving our ability to predict snow versus rain was to calculate the probability of snowfall. By estimating the probability of snowfall for each region, we can make better predictions for the future, in the same way that you might predict that you have a 1 in 2 chance of tossing heads on a coin.

To do this, we paired each observation of rain, snow, or mixed precipitation with known local temperature data. We then created a graph with temperature on the x-axis and the number of observations at each temperature on the y-axis (light green dots on the figure called the "Anatomy of a snowfall probability curve"). Just as with repeated coin tosses, the number of observations of snow at each temperature allows us to calculate the probability of snowfall at that temperature in each region. For example, if more people in a certain location report snow at 4°C (39.2°F) than report rain, then over time we can conclude that the probability of snow at 4°C (39.2°F) is higher than the probability of rain.

The last step in creating snowfall probability curves is to fit a smooth line through the data points, which is essentially a simple model. On the right, you can see the black line is "fitted" between the light green dots to create the curve.
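The two steps above (grouping observations by temperature, then describing the snow fraction with a smooth curve) can be sketched in a few lines of Python. The bin width, curve shape, and parameter values here are illustrative assumptions, not the project's fitted regional results, and mixed reports are set aside for simplicity:

```python
import math
from collections import defaultdict

def snow_probability_by_bin(observations, bin_width=1.0):
    """Group (temperature_c, phase) reports into temperature bins and
    compute the snow fraction per bin. Mixed reports are skipped here,
    a simplification of this sketch."""
    counts = defaultdict(lambda: [0, 0])  # bin -> [snow count, total count]
    for temp_c, phase in observations:
        if phase == "mixed":
            continue
        b = math.floor(temp_c / bin_width) * bin_width
        counts[b][1] += 1
        if phase == "snow":
            counts[b][0] += 1
    return {b: snow / total for b, (snow, total) in sorted(counts.items())}

def logistic_curve(temp_c, t50=1.5, slope=1.2):
    """Smooth probability-of-snow curve fitted through the binned points;
    t50 is the 50% transition temperature. Values are illustrative."""
    return 1.0 / (1.0 + math.exp(slope * (temp_c - t50)))

# Made-up example reports: (temperature in C, reported phase)
obs = [(-1.2, "snow"), (-0.4, "snow"), (0.3, "snow"), (0.6, "rain"),
       (1.8, "snow"), (2.1, "rain"), (3.5, "rain"), (4.0, "rain")]
print(snow_probability_by_bin(obs))  # snow fraction per 1 degree C bin
```

In practice, the curve parameters would be fitted to the binned observer data for each region, which is what produces the distinct regional curves discussed next.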

If you'd like a refresher on why our team is studying the rain-snow transition, check out our posts on the EGU Blog or on SciStarter.

"Anatomy" of a snowfall probability curve.

Below are the snowfall probability curves for some of the Mountain Rain or Snow regions. See if you can spot the differences in the probability of snow at 0°C (32°F) across the regions. These curves differ because the meteorological processes driving the rain-snow transition differ by region.

Comparison of snowfall probability curves.

Now look at the comparison of snowfall probability curves. Follow the dotted line at 50% probability of snowfall: the temperature at which rain is likely to transition to snow is not the same for all regions. Here is a snapshot of the rain-snow transition in some of your regions: 4.6°C (40.3°F) in the Rockies; 2.1°C (35.8°F) in the Sierra Nevada; and 0.7°C (33.3°F) in the Northeast US.

This means that a model or algorithm using a threshold of 0°C (32°F) for all regions is unlikely to be accurate when predicting rain versus snow near the freezing point.

The key takeaway here is that the temperatures at which rain transitions to snow vary by region, and we need to take this variability into account in future predictions.

Using probability of snowfall and rainfall to create a "report card" on the success rate of satellite technology.

Now that we have calculated the probability of snowfall with community observer data, we can compare those predictions to those generated by satellite technologies. Specifically, we assessed the prediction accuracy of NASA's Global Precipitation Measurement (GPM) mission, which uses an algorithm abbreviated as “IMERG” to predict rainfall and snowfall. We can compare the probability of snowfall or rainfall generated by Mountain Rain or Snow observers with that of IMERG, region by region.

The probability of liquid precipitation from NASA GPM was compared to the frequencies of precipitation phase reported by community observers. This helped us see the IMERG success rate at different temperatures in different regions.

Community observations can help us to pinpoint where the IMERG algorithm struggles to predict the correct precipitation phase. When comparing the ground-based observations from community observers with estimates from NASA, we can see some areas in which IMERG struggles. In the figure on success rates, the darker orange colors indicate lower success rates, and the lighter orange colors indicate higher success (white indicates "no data").

What’s interesting is that the success rate of IMERG is not consistent across regions: see how it predicted too much rain in the Sierra Nevada region just above 0°C (32°F), and too much snow in the Southern Rockies at warmer temperatures.

This is a big step towards improving the technologies that predict the rain-snow transition.

Success rates of IMERG satellites.

Regional differences: Why they matter.

Recognizing regional differences gives us the traction we need to determine what other meteorological processes need to be included in models that predict rain-snow thresholds. Our scientists will continue working behind the scenes, using the latest machine learning and modeling methods to advance algorithms that predict the rain-snow line. Your continued participation will keep the project moving forward.

Observations from geographically and climatically diverse areas are essential to this project. Your observations, from high in the mountains all the way to low-lying valleys, help us make locally relevant predictions and improve the technology behind rain-snow estimates.

We appreciate your continued involvement in the project; these scientific advances would not be possible without you.

Results Report:
The Impact of Our Observers

November 2022 - Led by Meghan Collins and Monica Arienzo

Mountain Rain or Snow is a collaborative science effort.

This successful project went from "small" to "big" with your help. You joined over 1,100 other observers to contribute to making this science possible.

Community observers shared over 15,000 reports of rain, snow, or mixed precipitation with us last winter season! This is a sixfold increase over the previous season, when the project was focused on just one region. The heatmap shows the geographic distribution of the observations that were submitted. Reds and oranges indicate the highest density of observations, followed by yellow, blue, and purple. Sometimes scientists group these areas into “ecoregions” with hydroclimatic similarities, and there were 35 ecoregions represented in last year’s dataset!

As of the end of last season, our largest regional network was the Northeast with 318 observers, followed by the Rocky Mountains of Colorado with 310 observers, the Sierra Nevada with 219 observers, and the Cascade Mountains with 30 observers. Numerous dedicated people also send observations from outside these regions, as you can see from the heatmap.

Heatmap of observations from the 2021-2022 season.

Real-time communication with a real human.

One of the best features of participating in this project is that observers can communicate with a real human from the project team in real-time.

How does it work? Well, you likely signed up for the project by texting a regionally-specific keyword to our project number, which is 855-909-0798 (if you aren't signed up, find your keyword on the list below). This automatically signs you up for weather alerts and provides guidance on how to participate. At any time, you can reply to an alert with questions about how to submit observations or the weather in your area.

We aim to keep the training for the project quick and simple. We send out a series of short text messages with the training information. In our Observer Input Survey, 85% of respondents said that the training texts were helpful. We also compiled an extensive list of FAQs in response to observer questions.

Observers like receiving text alerts about upcoming storms.

We pay attention to weather in every region and send out alerts when storms are in the forecast with predicted temperatures near freezing. Observers can reply directly to text messages with questions or comments for project organizers. We send alerts between November and May each year.

Our intention is to strike a balance with these alerts: frequent enough to be useful for community observers while being respectful of everyone’s time and attention. In our survey, 72% of respondents said that we sent “just enough” text messages, and over a quarter of respondents (27%) said that there were “not enough”. We actually received many comments about the need for those extra reminders! So, we will continue to let you know when there are storms of interest on the way.

Community observations driving Mountain Rain or Snow science forward.

You do high quality work. In the regions of focus, 96% of the observations submitted passed our rigorous quality control procedures.

With your help, our project team has achieved a lot in the last year!

  • 10 news articles featuring the project

  • 10 community presentations reaching nearly 400 people

  • 5 presentations at international academic conferences

  • 2 scientific manuscripts in preparation

  • $1.14 million in additional funding from NASA to support 12 scientists and students to drive this research forward

Why does your input matter?

As we grow, we want to offer a positive experience for our observers. Hearing your input on the ways that Mountain Rain or Snow communicates with community observers and learning how you feel about this communication helps Mountain Rain or Snow make future seasons more straightforward, more effective, and more fun.

At any time, you can:

  • Reply directly to our text alerts to reach a real person

  • Email the project team for help with the app or troubleshooting technology

  • Request a presentation of the scientific results for groups of 15 or more

We look forward to hearing from you!

Thank you for your dedication! We look forward to another successful season working together.