

    The Secrets of Weather Forecast Models, Exposed

    By Jesse Ferrell, Meteorologist/Community Director
    3/04/2010, 8:24:16 AM


    A meteorologist's biggest job is deciding which Weather Forecast Model has the right idea for an upcoming storm. Some of the most frequently asked questions from blog readers and Forum users are as follows:

    "Why are the models so inaccurate?" "Who is providing oversight for improvements to the models?" "Is the GFS model really worse than the ECMWF?"

    If you search for "computer weather forecast model accuracy" on Google, you won't find much. But I am here today to answer these questions for you based on some knowledge that I've picked up over the last few years. Here are the facts as I see them:

    - The models are inaccurate due to a number of factors
    - The U.S. Government (NOAA) and educational institutions monitor the accuracy of the models
    - The models are routinely tweaked by NOAA to increase accuracy
    - The U.S. GFS model is less accurate than the European ECMWF model
    - NOAA believes it may have found why and is working to change the GFS


    [Photo: banks of NOAA supercomputers]

    Here are the "long answers" to those questions.

    Before we begin, let's define "computer forecast model." A model is a computer algorithm that predicts the weather (the entire process is known as "Numerical Weather Prediction"). Models are typically run on huge computers operated by the U.S. and other governments, or by institutions such as the ECMWF, an "independent international organisation supported by 31 European States". These organizations typically produce their own maps and statistics from the models and provide the raw data to commercial companies such as AccuWeather. Some computer models are small enough to run on a personal computer, but their output is generally limited to a coarse resolution or a limited geographical area.

    "Why are the models so inaccurate?" The models are inaccurate because of a number of factors. Here are some:

    1. LACK OF COMPUTING POWER: Many people believe that the limits of computing power are one problem. When I was in college, my professors told me that we could probably run a model with near 100% accuracy for tomorrow's forecast, but it wouldn't finish running until the day after tomorrow. That might be an exaggeration, but it's hard to believe that our forecasts won't improve as computing power increases, though they could reach a point where they can't get much better without more initialization data (see below). Right now, because of that lack of computing power, we have to pick between high resolution, a wide coverage area, or forecast length. For example, it takes about the same time to run the GFS worldwide out to Day 15 at coarse resolution as it takes to run the 4-km WRF over half of the U.S. at extremely high resolution. Of course, if you believe in chaos theory, there are limits to what computing power will buy us.

    2. INITIALIZATION DATA: In the latter part of the 20th Century, the models took in only data from a sparse network of upper-air stations, which send up balloons that transmit weather data back to earth; put together, those reports form a 3-D picture of the current state of the atmosphere (called the "Initialization"). The models then apply algorithms to predict how that atmosphere will behave. As you'll see below, a bad initialization can ruin the forecast. This is what's known in the computer industry as "GIGO" (Garbage In, Garbage Out); a toy illustration of how a tiny initialization error grows follows this list. Sadly, the "upper-air" network of balloon releases hasn't improved much over the years, but models now also take surface mesonets, satellite data, airport observations, and more into account when putting together the Initialization. One would assume that the more (accurate) data pumped in, the better the Initialization will be, and therefore the better the forecast. But the more data you input, the slower the initialization is, and you've run back into #1.

    3. BUGS: Like any computer program, the models are subject to occasional bugs in their hundreds of thousands of lines of code, created by humans capable of typos or other mistakes. The major models are mature enough that this does not affect things over a wide area, but I've been a computer programmer long enough to know that there are probably dozens, if not hundreds of bugs hidden within these algorithms that are causing all sorts of small problems that may add up to inaccuracy.

    4. MODEL BIAS: Each model seems to have "bias" regarding certain weather systems or situations, just as a human might have a biased political view. This could be due to the way the algorithms were originally built (because of the people who built them, or the types of storms that they programmed in equations for) or inaccuracies in #2 or #3.
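    Here is the toy illustration promised in point #2 (it also shows the chaos limit mentioned in point #1). This is a minimal sketch, not any operational model: it uses the Lorenz-63 equations, a classic classroom stand-in for the atmosphere, and shows two runs whose starting points differ by only one part in a million drifting apart anyway.

```python
# Minimal sketch of "Garbage In, Garbage Out" and chaos: the Lorenz-63
# system (a toy stand-in for the atmosphere, NOT the GFS or ECMWF).
# Two runs start almost identically and still end up far apart.

def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz-63 equations one small step (simple Euler integration)."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

def run(x, y, z, steps):
    """Integrate forward and return the trajectory of the x variable."""
    xs = []
    for _ in range(steps):
        x, y, z = lorenz_step(x, y, z)
        xs.append(x)
    return xs

# "Truth" run, and a run whose initialization is off by one part in a million.
truth = run(1.0, 1.0, 1.0, steps=3000)
perturbed = run(1.000001, 1.0, 1.0, steps=3000)

for step in (500, 1000, 2000, 3000):
    diff = abs(truth[step - 1] - perturbed[step - 1])
    print(f"step {step:4d}: difference between the two runs = {diff:.5f}")
```

    The tiny starting difference is invisible at first and then balloons, which is exactly why a sparse or inaccurate Initialization limits the forecast no matter how good the algorithms are.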

    "Who is providing oversight for improvements to the models?" Fortunately, the U.S. government and various educational institutions are watching the accuracy of the models and suggesting changes (which the government or whatever institution runs the model can implement). These two links are hard to find, but I think they prove that statement:

    - NOAA NCEP Model Performance Statistics
    - NOAA NCEP Model Bias Tracking


    [Graph: 23-year accuracy history of the GFS, ECMWF, UKMET, and CDAS models]

    The first link provides the above graph of the 23-year accuracy of the U.S. GFS, the European ECMWF, the U.K. Government's UKMET, and a model called CDAS which has never been modified, to serve as a "constant." (On the actual chart, the Southern Hemisphere is shown below the top graph.) As you would expect, model accuracy is gradually increasing (1.0 is 100% accurate), but it's clear that the ECMWF beats the GFS. That leads us to our next question.
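    A quick aside on what that "accuracy" number actually is: charts like this typically plot an anomaly correlation score (often for 500-mb heights), which measures how well the forecast's departures from climatology line up with the observed departures; 1.0 is perfect. Below is a minimal sketch of a simplified, unweighted version of that score using made-up numbers on a tiny grid. The operational verification is more elaborate, but the idea is the same.

```python
import numpy as np

def anomaly_correlation(forecast, verifying, climatology):
    """Simplified (uncentered, unweighted) anomaly correlation: 1.0 means the
    forecast's departures from climatology match the analysis perfectly."""
    fa = forecast - climatology   # forecast anomaly
    va = verifying - climatology  # verifying-analysis anomaly
    return np.sum(fa * va) / np.sqrt(np.sum(fa ** 2) * np.sum(va ** 2))

# Made-up 500-mb height fields (meters) on a tiny 3x3 grid, purely for illustration.
climo = np.array([[5700, 5640, 5580],
                  [5640, 5580, 5520],
                  [5580, 5520, 5460]], dtype=float)
analysis = climo + np.array([[30, 20, 10], [20, 0, -10], [10, -10, -30]], dtype=float)
forecast = climo + np.array([[25, 22, 5], [15, 5, -20], [5, -15, -25]], dtype=float)

print(f"anomaly correlation = {anomaly_correlation(forecast, analysis, climo):.3f}")
```

    A perfect forecast scores 1.0; the score drops toward zero as the forecast's anomalies stop resembling what actually happened.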

    But before I go there, here's a 4-year model accuracy chart - note the seasonal dips in the Northern Hemisphere, showing that the models are less accurate during the summer than the winter (not so in the Southern Hemisphere, maybe because there is less land there to cause problems?).

    "Is the GFS model really worse than the ECMWF?" OK, I went there. Forecasters amateur and professional have long-claimed the U.S. GFS model was more inaccurate than the ECWMF. The graph above proves it, and it is the basis for the business model of the ECWMF's institution, which sells the data at exorbitant prices (the GFS data is free -- a quarter of a million dollars will buy you the rights to use redistribute the ECMWF 25-day forecast, but not their weekly or monthly forecasts which go as far as a year out). Although that makes for a compelling reason to keep their secrets to themselves, they have recently started working with the U.S. government to help determine what's wrong with the GFS.

    Last month, a breakthrough was discovered: when the GFS is run with the ECMWF Initialization data (see above), the accuracy improves dramatically (you can read the AMS presentation here). Unfortunately, implementing that is not as easy as you'd think: outside of the cost of using the ECMWF data, it only runs twice a day, so the GFS would no longer be able to run at 06Z & 18Z (midnight & noon). I suppose one other option is that we fix our own initialization data, but I haven't heard much about that option taking shape. It would probably be a big undertaking. In any case, I'm thrilled that we now know what's wrong with the GFS.

    Separate from the model accuracy, there is a movement afoot, headed by the American Meteorological Society (AMS as mentioned above) and involving our Elliot Abrams (PREMIUM | PRO), to make weather forecasts (derived from those models) better and more user-friendly. Elliot is co-chairing the unit with Dr. Paul Hirschberg, chief of staff to NOAA National Weather Service Director Jack Hayes. I believe this work is very important and you can read about their ideas and progress in an AMS report here.

    "So Now What?"

    All of this, while enlightening, may be depressing to weather enthusiasts who watch models. There is no easy solution to these problems, and models will continue to be fairly inaccurate for the foreseeable future. Here are a few things you can remember while examining model data, featuring products that are available on our Pro site (join today and get the rest of winter* free!).

    1. Be wary of forecasts that are based on only one model. If nearly every model is on board with a solution, then you can be more confident in your forecast. Look at all the models, preferably on one map.

    2. Look at model trends. If the low pressure moved east with this run, what did it do on the run before that? For the GFS, look at a couple of days of 00Z and 12Z runs for consistency. Avoid the 06Z and 18Z runs when a 00Z or 12Z run is available; in the U.S. these runs don't include the weather balloon network data (balloons are only sent up twice per day), and are therefore radically different and more likely to have bias.

    3. Remember that accuracy generally decreases with increasing time, decreasing resolution, and for snowfall (because snowfall is around 10 times the liquid equivalent, so any precipitation error is magnified; see the sketch after this list). Remember this when you're looking at a coarse 15-day forecast of snowfall (read my blog on White Christmas inaccuracy).

    4. Ensembles help mitigate inaccuracy. If possible, look at the Ensembles instead of just one model. These exist for the GFS, Canadian, WRF, NMM and SREF models. As I've explained before, Ensembles take the same model and run it several times with slightly different input. This gives a range of possibilities and lets you know how "confident" the model is in itself (see the sketch after this list).

    5. The models now have new output built in that does things meteorologists used to have to do in their heads. This includes precip type, snowfall amounts, and severe weather probability. Don't stay stuck on that 500-mb chart calculating thickness; save time and check out these newer products.
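    Here is the sketch promised in points 3 and 4: a toy example with made-up numbers (not output from any real ensemble product) showing how an ensemble's members give you a mean and a spread, and how a rough 10:1 snow-to-liquid ratio magnifies whatever uncertainty is in the liquid-equivalent forecast.

```python
import statistics

# Hypothetical liquid-equivalent precipitation forecasts (inches) for one
# location from eight ensemble members -- made-up numbers for illustration.
members_liquid = [0.55, 0.70, 0.40, 0.85, 0.62, 0.30, 0.75, 0.58]

# A rough 10:1 snow-to-liquid ratio; the real ratio varies with temperature.
SNOW_RATIO = 10.0
members_snow = [liquid * SNOW_RATIO for liquid in members_liquid]

def summarize(label, values, unit):
    """Print the ensemble mean, spread (standard deviation), and range."""
    mean = statistics.mean(values)
    spread = statistics.stdev(values)
    print(f"{label}: mean {mean:.2f} {unit}, spread {spread:.2f} {unit}, "
          f"range {min(values):.2f}-{max(values):.2f} {unit}")

summarize("Liquid equivalent", members_liquid, "in")
summarize("Snowfall at 10:1 ", members_snow, "in")

# A tight spread suggests the ensemble is "confident"; a wide spread says
# treat any single run with caution. Note that the same relative uncertainty
# becomes about ten times larger when expressed as inches of snow.
```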

    I am by no means an expert on forecast modeling, and I can't even confess to frequently reading the AMS Journals, where a lot of verification papers and new ideas are published; I just wanted to let you know what I know, because I get a lot of questions on this. If you're really interested in finding out more, there is a book at Amazon about the History of Numerical Weather Prediction. If you have corrections to what I have said, or other important information you think my blog readers should be aware of, please leave a Comment.

    Comments (26):

    Anders Persson:

    A partly alternative view might be found in my ECMWF User Guide, to be found at www.ecmwf.int/Forecasts, written during my time there and later updated.

    So, for example, I do not believe in trying to find the model of the day; rather, one should weight the existing deterministic forecasts together into some sort of consensus forecast.

    Posted by Anders Persson | March 1, 2009 8:20 AM

    Gary:

    Outstanding Article...

    Posted by Gary | February 24, 2009 11:37 AM

    John:

    Why could you not just run all the models with high resolution on a tiny area, say, 100 miles by 100 miles and then paste them all together?

    There are 76? frames on the GFS (180 hours / 3 hours per run = 61 frames (including the original) + 180 hours / 12 hours per run = 15 frames. 61 + 15 = 76 frames.)

    I realize it would take a lot of manpower to put all the pieces together, but is it not worth it to get better forecasting?

    FROM JESSE: Actually it's 384 hours / 3 hours. The resolution is inherent in the model, I believe; it's not something you could change. It's not only pixel size but the number of levels in the atmosphere. The DGEX is probably the closest thing to what you describe. It uses the mid-range GFS initialization to run a high-res model for parts of the U.S.

    Posted by John | February 9, 2009 9:18 AM

    Rich in Franklin,MA:

    Jess, great article, especially after last week's storm prediction conundrum. I have a better understanding of why the forecast can be difficult. Can you keep us updated as to the progress of Elliot Abrams' work on improving forecast accuracy?

    Posted by Rich in Franklin,MA | February 6, 2009 7:57 AM

    Richard Savage:

    As a retired meteorologist, long out of school, it's nice to be able to catch up on modern methods and technology (I still remember most of the theory, like chaos).

    And, as a former software manager, I understand your comment that software bugs are always present. Unfortunately.

    Richard Savage Wisconsin-Madison, 1976

    Posted by Richard Savage | February 5, 2009 11:44 AM

    foresmj:

    Jesse,

    Thank you. I have been looking for an article like this for a long time!!!

    Posted by foresmj | February 4, 2009 6:40 PM

    DCDaren:

    Jesse,

    Great post - clearly a lot of work put into this. Thanks for the information and clearing up some misconceptions about the models! I wish the storm worked out better for DC...

    Posted by DCDaren | February 4, 2009 3:10 PM

    RJ:

    Very educational post -- thanks for the thought, research, and time you obviously put into writing this. This is a keeper, and I'm bookmarking it. Thanks again.

    Posted by RJ | February 4, 2009 11:42 AM

    Dan Roberts:

    Great blog, Jesse.

    I love how you go into detail and support your facts with links. That is what you call true researching.

    Great JOB!!!!!!!!!!!!

    Dan Roberts

    Posted by Dan Roberts | February 4, 2009 9:39 AM

    Paul WX:

    This was one of the best posts I've ever seen on this topic. I wonder if the AMS has considered cloud computing as a way of handling the computations...

    Posted by Paul WX | February 4, 2009 9:04 AM

    Steve Enders:

    Jesse -

    Check out last night's radar in the Lititz/Ephrata areas of Lancaster County, PA. Some pretty impressive snow bursts going on. Some guys reported between 8-12 inches at their houses, while just 10 miles west there was nothing. Immediately thought of your blog when I saw that.

    Posted by Steve Enders | February 4, 2009 8:47 AM

    Joe M:

    Jesse,

    You continue to post content with such quality and research that it's refreshing.

    We need more data! And quality data! For instance, my nearest airport - KLNS in Lancaster, PA - was offline for over three days, but still returned the same old obs in metar consistently! Problems like this, along with many others, are things that can be fixed relatively easily and will help us get better model output.

    (I'm a met major and taking an Instruments and Observations class next semester to learn about things like this.)

    Posted by Joe M | February 4, 2009 8:38 AM

    LJG:

    This post should be one of the "headlines" - and as another post said - REQUIRED READING! THANKS, LJG

    Posted by LJG | February 4, 2009 8:16 AM

    Power_Wagon:

    I as well always thought that AccuWeather had their own servers to generate these models. I wonder how often these servers are maintained and updated.... Could be that they are legacy computers with legacy software; maybe they need to be updated, but I know that is very expensive.

    Great post Jesse... very insightful

    Posted by Power_Wagon | February 4, 2009 6:22 AM

    Alain:

    Thanks Jesse. Terrific Post! I had always assumed that Accuweather had a model of its own, but now I see I was wrong and I understand the whole thing a lot better.

    Posted by Alain | February 4, 2009 12:04 AM

    DavePa:

    Much work and excellent effort Jesse!! In my humble opinion, the individuals running the show are as good as the results. There are examples of poor management across the board in many businesses in our country. When a company has a CEO that has performed poorly, that CEO is ousted and replaced with, hopefully, an individual that will produce better results. Simply, if the data extracted from the model is poor, then the individuals delegating and programming are at fault. Some models give better data than others for a reason. The level of complexity that is involved needs individuals who really are educated with the knowledge to provide the public with a reasonably good product. A change is needed!!

    Posted by DavePa | February 3, 2009 10:25 PM

    Jose:

    What does it take to be able to write code for these models? I'm a computer programmer. Should I go out and get a degree in Meteorology (which was my dream ever since I was a teenager)?

    Posted by Jose | February 3, 2009 9:59 PM

    jim frazee:

    Wow! In one post, you've cleared up SO MANY of my questions!

    Thank you!!!

    Jim the weather weenie from Sewell NJ

    Posted by jim frazee | February 3, 2009 9:49 PM

    Jon in Collingswood:

    This is a terrific explanation of the art and mystery of computer meteorological modeling. A great post and must read for all interested in the weather.

    Posted by Jon in Collingswood | February 3, 2009 9:31 PM

    Diane:

    Thanks Jesse, that was a really great post. I love the pic of the super computer!

    Posted by Diane | February 3, 2009 9:27 PM

    John Manning:

    That was a GREAT posting. Much appreciated from what Lundberg calls "weather weenies". I'm one; much to my (frequent) embarrassment.

    Posted by John Manning | February 3, 2009 7:46 PM

    Betsy:

    Great post Jesse! (Maybe it should be required reading (with a quiz, of course) for anyone that has to be moderated on the forums!)

    Thanks to you and Henry (and Mrs. Henry) for your cool heads during a tough week...

    Posted by Betsy | February 3, 2009 5:23 PM

    MAC292OH10:

    Hi Jesse,

    Great article, very informative as to the workings of numerical WX prediction....

    Wondering about the "summer inaccuracy" - could this be due to tropical activity ramping up????

    Also, do you think in the future it would be possible to create an experimental "numerical WX modeling" project along the lines of the "folding@home" theory ... allowing users worldwide to donate their CPU/GPU processing power to numerical WX prediction???

    Posted by MAC292OH10 | February 3, 2009 5:05 PM

    Kenneth Simmons:

    Jesse,

    Thank you for explaining the complexity of the tools used in the forecast models, and for providing an insider's perspective on the issue.

    Posted by Kenneth Simmons | February 3, 2009 4:51 PM

    Jim:

    Wow, all that made my head hurt! Could we all link up our home computers like folding@home to increase the computing power?

    Posted by Jim | February 3, 2009 4:21 PM

    Ionizer:

    Awesome background and information on the models and their development. Thanks!

    Posted by Ionizer | February 3, 2009 4:11 PM
    The views expressed are those of the author and not necessarily those of AccuWeather, Inc. or AccuWeather.com

