Oh, the NAM. The source of so many colorful snowfall projections that spread like wildfire across social media. The weather model that gives meteorologists fits multiple times per season. The weather model that can pick out thunderstorms two days in advance, yet completely mishandle a snowstorm at a 24-hour lead time. And now, the weather model that absolutely nailed the forecast for the Blizzard of 2016 in New York City.
The Blizzard of 2016 will likely go down as one of the most effectively modeled snowstorms in meteorological history. The signal for the storm was evident as far out as 8 days in advance (arguably longer via long-range pattern recognition). Long-range ensemble guidance and even individual operational model runs showed the storm system's evolution consistently 6 to 8 days in advance, with increasing agreement on a major low pressure system off the East Coast. And, up until Day 5 or so, the agreement was nearly unanimous among the major global models such as the GFS, Euro and Canadian.
All of that changed when the storm was 3 or 4 days away. The GFS and Euro began waffling on exactly how the storm would play out. The evolution changed on these forecast models, especially in the mid and upper levels of the atmosphere. Suddenly, the Euro was “cutting off” the mid level trough over the Plains and Mississippi River Valley, meaning the surface low would track off the coast of the Carolinas and never make it here.
The GFS trended southward by the time the storm was 3 to 4 days out, as well. The Canadian started heading that way, too. All global models increased confluence to the north (reaffirming high pressure) and trended toward cutting off the storm system earlier. Suddenly, it seemed like the southerly solution was gaining traction, and the low pressure center wouldn’t make it to a far enough north latitude to impact our area.
And then came the NAM. The storm system finally appeared within the NAM’s window when it was 78-84 hours away.
From the get-go, the NAM had the storm system more amplified than the globals had shown for days. The low pressure center emerged off the Southeast Coast, with tremendous atmospheric dynamics guiding the storm northward to a position off the Mid-Atlantic Coast. Incredible dynamics moved over the New York City area, leaving nearly two feet of snow on the ground by the time the storm ended.
It couldn’t be right. Could it?
As the days passed by, the NAM never wavered. Precipitation amounts fluctuated, and banding location varied slightly, but while other global models were largely showing a miss, and trending worse, the NAM maintained that the storm would heavily impact our area. A day or so before the storm's approach, it became clear the NAM was on to something. And it was.
We re-ran these NAM runs on our model server to visualize the output. Analysis of NAM model runs at two- to three-day lead times shows just how incredible this performance was in New York City. New York City ended the Blizzard of 2016 with 2.3″ of liquid equivalent and 26.8″ of snow. The NAM was consistently showing liquid equivalent totals of 1.50-2.50″ for days prior to the storm, trending upward to 2.50-3.00″ right before the storm arrived. For comparison, the GFS, Euro and Canadian were at one point showing less than 0.50″ of liquid, or in some cases a total miss.

There are many reasons why the NAM could have been so consistent. The NAM, while largely inaccurate in the past, has shown a tendency to handle confluence well during major snowstorms. In the "Snowmageddon" event of 2/6/2010, the NAM correctly predicted the sharp gradient that left NYC with 0″ of snow while Philly received 28″. The NAM understood the overlying confluent pattern that would lead to such a sharp cutoff, while other models were still waffling and giving the NYC area a decent amount of snow. That storm also initially featured a sharp confluent zone with a large plume of moisture coming up from the south, although the actual evolution aloft was quite different from this year's Blizzard. Nonetheless, the NAM's consistency with our storm was certainly a red flag, given how consistent it was with the 2/6/10 storm, where it also had to handle an initially strong confluence zone.

Additionally, it is important to remember that the NAM is a mesoscale model designed to handle convection. Oftentimes, its convective scheme makes it too "detailed" to handle the evolution of a general synoptic pattern. But with this storm, the general synoptics were well agreed upon, as evidenced by the fact that models were consistently signaling this storm with up to ten days of lead time.
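As a rough sanity check on those numbers, the observed totals imply a snow-to-liquid ratio a bit above the standard 10:1. The short sketch below (the helper function and variable names are ours for illustration, not from any model output) converts the NAM's late-cycle QPF range into snowfall under that implied ratio.

```python
# Sketch: converting a liquid-equivalent forecast (QPF) into snowfall
# via a snow-to-liquid ratio (SLR). Observed totals are from the article;
# everything else here is an illustrative assumption.

def snowfall_from_qpf(qpf_inches: float, slr: float = 10.0) -> float:
    """Estimate snowfall (inches) from liquid equivalent using an assumed SLR."""
    return qpf_inches * slr

# Observed in NYC for the Blizzard of 2016:
observed_liquid = 2.3    # inches of liquid equivalent
observed_snow = 26.8     # inches of snow

# Implied snow-to-liquid ratio for this storm:
implied_slr = observed_snow / observed_liquid   # ≈ 11.7:1

# The NAM's final 2.50-3.00" QPF range, mapped through that ratio:
low = snowfall_from_qpf(2.50, implied_slr)      # ≈ 29.1"
high = snowfall_from_qpf(3.00, implied_slr)     # ≈ 35.0"
```

In other words, even the lower end of the NAM's final QPF range verified well against the 26.8″ that actually fell, while a sub-0.50″ QPF from the globals would have implied only a few inches of snow.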
The main discrepancy was the strength of the confluence (as discussed above), and when the surface low would start shifting east. Other models indicated that the confluence was having too much of an impact, which let the 500mb low cut off very far to our south. This meant the positive vorticity advection could not gain much latitude, and by the time it did, the storm's dynamics were decaying. The NAM kept the 500mb low an open trough a bit longer, which allowed the positive vorticity advection to streak northward and then curl back toward the coast, leading to a tucked-in surface low. Since the other models had the positive vorticity advection farther southeast, over the warmer ocean waters, lift and thus convection developed in that spot. Surface lows often "look" for areas of strong lift (since lift lowers the pressure beneath it), and so the other models "jumped" the surface low well south and east of where the NAM had it. The NAM better handled the fact that the convection would not overwhelm the storm's development, and that the forcing for a tucked-in surface low was much more prevalent. This led the NAM to consistently tuck the surface low in close to the coast, while other models did no such thing.

These thoughts popped into our heads as the NAM kept showing the same thing: 'If it's the global models vs. the NAM in an overall pattern and a more synoptically driven storm, of course the global models typically win. But if it's the global models vs. the NAM in terms of chasing convection, then perhaps the NAM isn't as lopsided an underdog as it usually is.'
It wasn’t just the precipitation and banding in the NYC area that the NAM correctly predicted — it was also excellent in predicting the storm’s evolution both aloft and at the surface, in both the large-scale features and the small-scale details. Not only that, but it had these features correct on every single one of its model runs, while other guidance — including the much-touted European model — was far too suppressed.
In all honesty, the NAM’s performance leading up to the Blizzard of 2016 may be one of the best computer modeling performances we have ever seen, especially coming from a model with a reputation for unreliability. But when analyzing how the storm behaved, perhaps it wasn’t so shocking that the NAM got it right after all. More than likely, we need to improve our skill in utilizing it correctly.
This article was written and edited by John Homenuk, Doug Simonian, and Miguel Pierre.