FiveThirtyEight, the operation created by the political number-crunching guru Nate Silver, doesn’t just have a list of which U.S. House races are in play. No, the site assigns the precise likelihood of victory for candidates in every single House race in the country. 

On the one hand, I can’t help but look at such things, especially regarding the unusually large crop of competitive U.S. House races in Minnesota this year. But on the other hand, the idea that there is a way to describe a Dean Phillips victory over Erik Paulsen in Minnesota’s hotly contested Third Congressional District as 65.79 percent likely seems a bit over the top. (Most of the other race-raters just put it in the “toss-up” category, which seems, compared to a precise percentage likelihood, appropriately humble about one’s ability to see the future, or even the present.)

But for the political junkies among us (including me), I’ll pass along the percentages anyway. First, though, to cut to the chase: Minnesota has four of its eight U.S. House races rated as “toss-ups” by all or most of the sites that do such ratings.

According to FiveThirtyEight, all four of them are likely to “flip,” meaning the candidate of the party currently holding the seat will lose. But, since our four hot races are evenly divided between two currently held by a Republican and two by a Democrat, if all four seats flip, they would cancel each other out in partisan terms and Minnesota would continue to have a U.S. House delegation of five Dems and three Repubs, just a lot of new ones.

So, as of this morning, FiveThirtyEight rates our four relatively “safe” incumbents as this likely to be back for another term:

  • CD 4, Betty McCollum, DFLer, 99.98 percent likely to be reelected.

  • CD 5, an open seat but so overwhelmingly DFL a district that 538 rates it as more than 99 percent likely that the DFL nominee, Ilhan Omar, will be elected.

  • CD 6, regarded as the reddest of our districts, where Republican incumbent Tom Emmer is rated 99.81 percent likely to win another term.

  • CD 7, where long-time “blue dog” Democrat Collin Peterson is rated just 85.48 percent likely to win an astonishing 15th term, despite serving an otherwise overwhelmingly red district.

But, as I mentioned above, if FiveThirtyEight is right, all four of our other districts will flip parties. Here are the percentages they assign.

In the First Congressional District, the southern Minnesota district currently represented by Democrat Tim Walz (who is leaving to run for governor), FiveThirtyEight rates Republican Jim Hagedorn as 54.66 percent likely to defeat Democrat Dan Feehan. All the other raters call this one a toss-up, which it clearly is. But FiveThirtyEight’s willingness to assign percentages to the second decimal makes it almost impossible to have a toss-up.

But FiveThirtyEight doesn’t see the rest of our toss-ups as all that toss-up-y.

In the south suburban Second Congressional District, FiveThirtyEight rates second-time challenger DFLer Angie Craig as 76 percent likely (76-24) to unseat incumbent Republican Jason Lewis. That seems a lot less like a toss-up than others rate it.

In the west suburban Third Congressional District, which everyone in the world has been treating as a toss-up, FiveThirtyEight rates Democratic challenger Dean Phillips as 65.79 percent likely to unseat incumbent Republican Erik Paulsen.

And in the huge northeastern Eighth Congressional District, which stretches from the northernmost suburbs of the metro to the Canadian border (incumbent DFLer Rick Nolan is not seeking another term), FiveThirtyEight says that Republican nominee Pete Stauber is 64.56 percent likely to defeat Democrat Joe Radinovich.

As I said above, assigning likelihoods to the second decimal point in the chaos of an ongoing political campaign (and doing it for every district in the country) should be considered beyond daunting, and maybe silly. But if you want to know how the serious numbers geeks see our state’s races, you have it here, or read for yourself FiveThirtyEight’s overview of every House race in the country here.

In that big national picture, the FiveThirtyEight-ers conclude that:

“The Classic version of our model gives Democrats a near certainty (about a 98 percent chance) of winning more votes than the GOP in the race for the House — but “only” a 3 in 4 chance of winning the majority of seats.”

28 Comments

  1. Well…

    Stats programs like SAS and SPSS will kick out data to whatever decimal you ask them to, but that doesn’t increase reliability, because the analysis is derived from whatever data you put in and calculated using whatever assumptions and parameters you input. And by the way, it’s not THAT much more work to program more decimals into the output on these programs… the program does the work. So… decimals shmecimals.

    I’m not sure why anyone would grant these guys any more credibility than anyone else, given their spectacular failures of the past. The whiz-bang infatuation with meta-analysis should have worn off after the TV show “Numb3rs” was canceled.

    With this kind of statistical problem you basically start out with a 50-50 chance of getting it right, and then you blow it (or not) from there. I’ve never been convinced that 538 really has any special skills.

    1. Bayes Theorem

      Actually, this sort of problem is a classic case for Bayesian analysis.
      We usually do have some a priori probabilities for events that are NOT .50, so calculating the likelihood of these predicted outcomes rather than a simple random choice makes sense.
      And I’m not sure where meta-analysis (combining the outcomes of many different studies) comes in here.
      It’s not the same thing as combining -data- from different sources.
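
The Bayesian point above can be sketched with the textbook Beta-Binomial update: start from a prior that is NOT 50-50 and revise it with new poll data. This is only an illustration; the prior and the poll numbers below are invented.

```python
def posterior_support(prior_a, prior_b, poll_yes, poll_n):
    """Beta-Binomial update: a Beta(prior_a, prior_b) prior on a
    candidate's support share, updated with a poll of poll_n
    respondents, poll_yes of whom back the candidate.
    Returns the posterior mean support share."""
    a = prior_a + poll_yes
    b = prior_b + (poll_n - poll_yes)
    return a / (a + b)

# A prior centered near 55% support (Beta(55, 45)), then a
# hypothetical 400-person poll comes in at 180/400 = 45%:
print(posterior_support(55, 45, 180, 400))  # 0.47
```

The stronger the prior (the larger prior_a + prior_b), the less a single poll moves the estimate, which is one reason two Bayesian modelers with different priors can report different “percent likely” figures from the same polls.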

      1. Silver is using meta-analytical modeling, and all statistics are derived from data (statisticians don’t compare text, after all), whether you’re using data from “studies” or different polls. Silver treats multiple polls as if they’re studies. Bayesian analysis surely applies, but it’s not more reliable simply because it’s Bayesian; one guy’s a priori assumption is another’s fallacy… i.e., “Clinton has a 94% chance of winning” on November 3rd, 2016. The error in that analysis was the a priori assumptions.

        1. No

          Polls are data, not statistical analyses.
          And no prediction is better than the data it’s based on.
          It’s hard to take into account things like Comey’s last minute statements and Russian interventions.
          Remember, Clinton won by 3 million votes. Most polls are based on samples of voters, not EC representatives, who are not known that far in advance.

          1. Dude…

            Polls are surveys; surveys are analyzed statistically. Mean, median, mode, and SD are standard features of that statistical analysis; this is elementary. You can’t redefine the nature of math and statistical analysis in order to preserve 538’s credibility. Silver’s own post-election analysis did not find that Comey’s revelation (which was seven days prior to the election, not in the last minutes before the polls opened) was a deciding factor.

            And again, I don’t know why we have to keep pointing this out: Clinton lost the election. Trump is our president. 538 was wrong when they predicted with 90+% confidence that Clinton would win. This is documented history. We can talk about the mistaken assumptions behind 538’s analysis, but we can’t change the fact that their analysis was clearly based on mistaken assumptions… they got it wrong.

            1. Wait

              FiveThirtyEight never said Clinton would win; saying she had a 90 percent chance of winning is far from that. If a meteorologist says there is a 90 percent chance of rain Monday and it doesn’t rain, that is not an error. It means that on 100 days with weather like that, 90 of those days will have rain. The analysis on 538 is much more nuanced than that.

              If I try to fill an inside straight, the odds are the same whether I am successful or not.
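
The meteorologist analogy above is easy to check by simulation: a calibrated 90 percent forecast is supposed to miss about one day in ten. A quick illustrative sketch:

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

days = 100_000
# Suppose rain really does occur with the forecast probability, 0.90.
rainy_days = sum(random.random() < 0.90 for _ in range(days))

# The empirical rain rate lands near the stated 90 percent, even
# though roughly 10,000 individual forecasts "failed."
print(round(rainy_days / days, 2))  # 0.9
```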

            2. Umm…. no. Just no.

              Nate Silver’s post-election analysis says Comey’s revelation probably COST Clinton the election:

              https://fivethirtyeight.com/features/the-comey-letter-probably-cost-clinton-the-election/

              Silver found that Clinton was statistically significantly ahead nine days out from the election, by about 6%.
              Each time Comey made an email announcement (he did that twice), Clinton’s polling went down 3%, so her lead was only 3% by Election Day, within the margin of error.

              ALSO: While the New York Times and other sources had Clinton winning odds at 90+%, 538 was more like 70% and was headed downward as election day approached.

              https://fivethirtyeight.com/features/the-media-has-a-probability-problem/

              You can review his whole series of post-election articles here:
              https://fivethirtyeight.com/tag/the-real-story-of-2016/

              1. Thanks for the correction, Mark

                Yes, I noticed that my 90% figure was in error; I stand corrected.

                As for the article: 538 has now published a number of articles attempting to explain their failure to predict Clinton’s defeat, and this one does blame Comey, but it came out five months after the election. Immediately after the election, Silver blamed independents, not Comey.

                But here’s the thing: First, had Clinton been a strong candidate, her lead would never have been in the single digits to begin with. All Silver is really demonstrating here is that Clinton’s margin was too thin to guarantee a victory. Second, Comey was just one of many liabilities and problems Clinton faced throughout the campaign. You can argue about the 3% Comey cost her, but this pales compared to the double-digit loss she suffered because of her historically high unpopularity and lack of trust among voters. Silver can play with the effects of smaller influences for months if he wants, but he’s never explained why he ignored Clinton’s most obvious and durable handicap. Clinton did indeed suffer a death from a thousand cuts in many ways, but she started out with a huge slash to begin with. Had Clinton not started with such a huge deficit of trust and popularity, the Comey revelation, and all the other sundry plagues she wore around her neck, would not have done so much damage.

              2. Another problem with 538 and Silver’s Comey claim

                I don’t know how interested anyone else is in this thread, but I’ll just make a couple more observations. Looking at the articles Mr. Ohm is pointing to, a couple of other issues arise.

                To begin with, one problem with 538’s (and Silver’s) methodology is that they don’t collect ANY of their own data; they themselves conduct no surveys or polls, relying entirely on someone else’s work.

                I can’t do strikethroughs here, but I need to point out that my paragraph above is in error: 538 DID collaborate on a few polls with other pollsters, although the nature of that collaboration isn’t clear. That sentence should read: “To begin with, 538 collects VERY LITTLE of its own data and relies heavily on other people’s work.” Anyway, the problem remains, and THAT is that unless someone is collecting the data you need, you can’t have the data you need to work the problem. So, for instance, unless someone collects data that quantifies the effect of a candidate’s unpopularity or low trustworthiness, you can’t factor that into your analysis. However, just because you can’t factor it in doesn’t mean it’s not a factor.

                Another problem with Silver’s analysis, especially the one we’re looking at here regarding the “Comey effect,” is that he’s drifting somewhat away from his usual playground. When you claim that there was a “Comey effect” you’re making a causal connection, and you can’t do that with indirect correlation. This is where relying on someone else’s data really becomes an issue. Yes, you can look at polling numbers that show a drop after something happens: Clinton’s numbers dropped after Comey’s statements to Congress. But that’s a correlation. We know from 538’s own analysis that there were unusually large numbers of undecided voters prior to this election, and we know that it was THOSE voters who swung for Trump on election night. However, we can’t conclude that they swung for Trump because of the “Comey effect” just because Comey’s announcement happened days before the election. In order to establish that causal connection you would have to have actual data about those voters AND WHY they voted for Trump. In other words, you need a survey of those undecided voters that reveals the “Comey effect” as the deciding factor. Silver doesn’t have that. If Silver really wants to argue that there was a “Comey effect,” he needs to survey the undecided voters who went with Trump and produce results showing that Comey made up their minds. Otherwise, it could just be a coincidence. It’s not as if there were NO other reasons for undecided voters to flip for Trump, and they were undecided, which means no one knew how they were going to vote in any event.

                I’m not saying Comey had NO effect, but Comey was one tree in a very crowded forest of factors influencing that election. I can’t remember what it’s called, but there is a statistical fallacy that describes the effect of focusing on the smallest factors while ignoring or factoring out the larger ones.

        2. 2016

        I followed 538 very closely through the fall of 2016. Silver maintained throughout that Donald Trump had a slim but real chance of losing the popular vote and winning the EC.

  2. It’s math

    They run the algorithm and they get a number. So I can’t fault them for listing the number they get. Taking it to two decimal places – now maybe *that’s* a bit silly. But it’s not silly showing the number their formula produced.

    It would probably be helpful to also include the error bars. That way, it’s not just a stark number. Error bars would help to turn it into more of a number with fuzzy edges and therefore be more reflective of reality.
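
One conventional way to give a poll number the “fuzzy edges” suggested above is a margin of error for a sampled proportion. A sketch using the standard normal approximation (the poll figures are hypothetical):

```python
from math import sqrt

def margin_of_error(p, n, z=1.96):
    """95% margin of error (normal approximation) for an observed
    proportion p from a simple random sample of n respondents."""
    return z * sqrt(p * (1 - p) / n)

# A candidate polling at 52% in a 600-person survey:
moe = margin_of_error(0.52, 600)
print(round(100 * moe, 1))  # about 4.0 percentage points
```

So “52 percent” is really “52, plus or minus about 4,” which is why a two-decimal win probability can convey more precision than the underlying data supports.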

    1. Actually it’s statistical probability analysis

      While statistical analysis is a mathematical process, there is a significant difference. Simple math always yields predictable and incontrovertible results; i.e., no matter how, who, where, or when anyone adds 2 of something to 2 of something else, you can only get a total of 4.

      Probability doesn’t work that way, especially complex probability work involving algorithms. Different approaches to the same problem will yield different results; rarely do any two analyses yield the exact same numbers. This is why, for instance, meteorologists run multiple models and get multiple predictions when they run different data sets.

      And it’s important to understand that MORE data doesn’t necessarily lead to better or more reliable analysis. Crunching ever-larger data sets can actually increase the odds of producing garbage analysis. It’s the ability to discriminate between relevant data and irrelevant or garbage data that makes a good analysis… there’s nothing magical about algorithms: if you put garbage in, you will get garbage out… and your garbage can go down to however many decimal points you tell the program to.

      1. I assume

        that by “simple” math you mean pure math, which only has to show internal consistency (no real world to worry about).
        However, even in the realm of ‘pure’ math some equations (see chaos theory) can have more than one valid solution.

        1. “Pure Math”?

          Figuring out how much money there is in a bank account or how much fuel an airplane has left in its fuel tank certainly are “real world” considerations… and that’s basic math, not chaos theory. The difference between “validity” and “reliability” is an interesting discussion, but it has little to do with the subject at hand. Suffice it to say that you can deploy the correct statistical model (a condition of validity) but still get it wrong (a failure of reliability). Chaos theory doesn’t suggest that there can be more than one correct solution; it simply tries to describe the complexity involved in calculating the correct solution.

          The only thing this has to do with 538 is that they claim to be able to master the complexity, and that’s not a reliable claim for the most part, since they tend to get it wrong whenever they try to analyze unusually complex situations. The Trump-Clinton contest was many things, but it was certainly NOT a typical campaign cycle. 538 got it wrong; for all their fancy calculating, you had a better chance of predicting that outcome by flipping a coin than by relying on their calculations.

          1. Arithmetic and maths

            I can see that you’ve had an undergraduate stat course.
            To be more specific, reliability measures the likelihood that repeated observations will yield the same results.
            And chaos theory says that even slight random variability at the beginning of a process (the infamous ‘butterfly’s wing’) can produce widely divergent outcomes.
            And once again, 538 was predicting the vote (which Clinton won by 3 million), not the Electoral College outcome.

            1. No

              538’s analysis did not simply predict the popular vote; their entire project was based on a detailed analysis of each district, which would also yield the EC result, and they concluded that Clinton would win the election (not just the popular vote) with 72% confidence. I’ve been claiming a 90+% confidence level, but I was mistaken; I apologize. Here’s what 538 published on the night of the election:

              “Our forecast has Clinton favored in states and congressional districts totaling 323 electoral votes, including all the states President Obama won in 2012 except Ohio and Iowa, but adding North Carolina.”

              Now it’s true, they hedged their bet by adding: “However, because our forecasts are probabilistic, and because Clinton’s leads in North Carolina and Florida especially are tenuous, the average number of electoral votes we forecast for Clinton is 302, which would be equivalent to her winning either Florida or North Carolina but not both.”

              However in the end they stuck by their prediction that Clinton would become POTUS:

              “Despite what you might think, we haven’t been trying to scare anyone with these updates. The goal of a probabilistic model is not to provide deterministic predictions (“Clinton will win Wisconsin”) but instead to provide an assessment of probabilities and risks. In 2012, the risks to Obama were lower than was commonly acknowledged, because of the low number of undecided voters and his unusually robust polling in swing states. In 2016, just the opposite is true: There are lots of undecideds, and Clinton’s polling leads are somewhat thin in swing states. Nonetheless, Clinton is probably going to win, and she could win by a big margin.”

              Note, they’re not saying she’ll win the popular vote but lose the electoral college. Also note, no reference to chaos theory.

              1. Key statement

                “Clinton is probably going to win”
                Predictions are statements of probability, not certainty.
                Predicting the votes in individual districts does not predict what the electors will do, and Trump’s key wins were within the margin of error of most statistical statements.
                Unlikely events do happen; that is why they’re unlikely, not impossible.

                1. Really?

                  Probability, NOT certainty? Huh. THAT must be what all the stuff about 70% and 90% CHANCE of winning we’ve been discussing was about? Who knew? Thanks for clarifying. Who IS this learned man?

  3. 538 isn’t always right

    As a long-time 1st District resident, my feeling is that Dan Feehan has a slight edge over Jim Hagedorn, who seems a bit shopworn (maybe due to the fact that he’s the son of a local congressman and a three-time loser). On the other hand, before Tim Walz the district was traditionally Republican. So we’ll see.

  4. Dan Feehan has what it takes to be victorious…..

    Jim Hagedorn supports Donald Trump in everything. I have it on good authority that, at Farmfest, farmers were not happy with Hagedorn’s support of the tariffs that are hurting farmers in the 1st District. The fact that Trump’s job approval is in the 30% range does not bode well for Republicans who tie themselves to him. Jim Hagedorn wants to be part of a Congress UNWILLING to hold Trump accountable on anything. Forget about checks and balances. House Republicans are letting Trump trample on the U.S. Constitution and destroy the institutions that keep this country safe and prosperous.

    Dan Feehan is an Iraq War veteran. He did two separate tours there, leading an Army unit that destroyed IEDs. He has faced actual life-and-death situations that required rapid decision-making and leadership skills to keep his men alive. His experience at the Dept. of Defense as an under secretary gives him knowledge of veterans’ issues, the ugliness of war, etc.

    Dan Feehan will go toe to toe with Jim Hagedorn issue for issue. The 1st CD may be more Conservative, but voters have been well served by having a Democrat in the seat.

  5. CD 1

    Organizing farmers is like herding cats; just ask the Farmers’ Union or the NFO. In this case, that probably (hopefully) works against the Republican. While many farmers might be willing to cut Trump some slack regarding the tariffs, a lot of them are pretty disgusted. Sufficient numbers of them found a way to support Walz. If Dan Feehan can make a good case, they may support him too.

  6. Statistics and Math… numbers shmumbers.

    I’m looking at the comments and I think I see some basic confusion about the nature of “math”, statistics, and probability.

    For instance, some comments seem to be trying to claim 538 wasn’t REALLY wrong when they predicted Trump’s defeat, or that a failed prediction isn’t a failed prediction, for abstract reasons having to do with semantics or chaos theory, etc. I’ve tried to explain this once, but I’ll try again.

    When something that’s been predicted with a 95% confidence level (and yes, that IS a prediction) fails to happen, that IS a failed prediction.

    Now we can try to understand why predictions fail, but oftentimes with statistical analysis the failure isn’t a “math” failure, so it can look like we have a paradox: the math is correct but the outcome is wrong. So how does this happen? It all has to do with data selection and assumptions. Say you’re in charge of counting apples at harvest time; three guys come up to you and deliver three baskets, each said to hold 25 apples. You add that up for a total of 75 apples. Does that mean you actually have 75 apples? You couldn’t know unless someone actually counts the apples, and if they do, and they find 68 apples instead of 75, it doesn’t mean you did your math wrong; one or more of the baskets had fewer than 25 apples in it. You can study the math forever and never figure out why the count was off.

    Similarly with statistics: when predictions are off, the problem usually won’t be found in the math; it’s the assumptions behind the data (i.e., that there were 25 apples in each basket) that yield the error. With 538, actually, THEY don’t do any math; the computer does the math, and computers don’t get math wrong. It’s the data and the assumptions behind the data that yield an accurate or inaccurate result. So with 538, or anyone else for that matter, we can assume that their “math” is correct, but the results can still be garbage, because correct math can produce garbage results. It’s an incontrovertible fact that 3×25=75, but if there are only 11 apples in each basket you ain’t gonna have 75 apples no matter how many times you run the numbers. You can run 538’s numbers till the cows come home and get the same results over and over again… but Trump won the election.
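
The apple arithmetic above can be made concrete in a few lines: the calculation is exact, yet the answer is only as good as the assumed inputs. (The basket counts are the hypothetical ones from the comment.)

```python
baskets = 3
assumed_per_basket = 25           # the assumption fed into the "model"
actual_counts = [25, 21, 22]      # what the baskets really held

predicted = baskets * assumed_per_basket  # the math is correct: 75
actual = sum(actual_counts)               # but reality says: 68

print(predicted, actual)  # 75 68
```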

    1. Measuring voters and issues- Flawed at best.

      The edifying comments regarding statistical methods are well written, persuasive and educational.

      But we have the internet now.

      Keith Ellison’s nemesis just used social media to create an eleventh hour doubt.

      Kinda like Comey.

      Anyone can pull an eleventh hour attack, which, if the candidates are close, can change the outcome.

      Like Facebook ads targeting individuals who are already screened for malleability. Pizza pedos?

      The rise of ISIS, the destruction of countless reputations, and the organized take-down of Tesla and Elon Musk by short sellers and rumor-mongers- we are living in a new age.

      Add to all that uncertainty the immense amounts of money coming from people and places unknown and going to people and places unknown. What were the GOP appropriators doing in Russia on the 4th of July?

      Even with the best statistics package and the best available data, still: “you can’t manage what you can’t measure.”

    2. Yes, 538 was wrong, but they seem to have been less wrong than most others. Besides giving Trump better odds than most other people making predictions, there were articles like https://fivethirtyeight.com/features/election-update-yes-donald-trump-has-a-path-to-victory/ which, IIRC, they received criticism for being too generous to Trump’s odds of winning.

      Setting aside the comparison to other prognosticators, one data point does not tell us much about whether the methodology is reasonable. Certainly not enough to label it a “spectacular failure”.

      1. This brand loyalty to 538 may be impressive but…

        538’s Trump-Clinton projection WAS a spectacular failure. They gave a candidate that actually had a ZERO chance of winning a 74% chance of winning. Again, the baseline is 50-50. Yes, 538 produced more than one model, but “expertise” isn’t a function of whether or not ONE of your models was closer to the actual outcome than some of your others; expertise is the ability to select the model that comes closest to the actual outcome… and 538 simply failed to do that. THEY DID NOT PREDICT A TRUMP VICTORY. They didn’t even predict a toss-up.

        Look, I was not in the least surprised on election night to find Clinton losing to Trump. I would have given Clinton a 50-50 chance, which means my prediction was more accurate than 538’s. Now, that doesn’t make me a better expert, or any expert at all, but it does mean 538 isn’t the impressive expert they pretend to be. Sure, they have a decent record when it comes to analyzing mundane contests, but it’s the complex, anomalous scenarios that make or break “experts.” Any dope could predict with 80% confidence that Klobuchar will win her next election; that’s child’s play. You had a better chance of predicting Trump’s election by flipping a coin than you did looking at 538’s analysis. When you do THAT much work and still get it wrong… that’s a fail.

        I didn’t do any complex analysis whatsoever. All I did was look at the polls, and all I was looking at was the fact that, with the exception of a week or two, Clinton NEVER had a double-digit lead; she was always in the single digits, and her lead was decreasing, not increasing, heading into November. Given the decreased reliability of polling data over the last 10-15 years, for a variety of reasons, anything less than a double-digit lead pretty much means a toss-up. I’m not bragging about my “analysis”; I’m just saying it wasn’t THAT difficult to see that Clinton was in trouble, and many of us realized she was in serious trouble even before she got the nomination. If 538 were the experts they brand themselves as, they would have recognized it as well.

        1. “ZERO chance of winning”? Really? Trump’s margin of victory in the states that swung the EC vote was rather small (under one percentage point difference in MI, WI, PA, and 1.2 in FL). It was quite plausible that they could have gone the other way. We can see in retrospect that they didn’t, but that’s not the same as zero chance relative to reasonably available information before the election. 74% is not all that strong of a prediction (I’d say Klobuchar’s odds are far beyond 80%, BTW); such predictions would be expected to be wrong around 1 out of 4 times, even in the absence of a last-minute surprise. This happened to be a really high profile prediction that ended up being on the wrong side of 50%, but it was hardly the only time they’ve made predictions in difficult races.

          I would not call single digit polling a “toss up”. It’s certainly not guaranteed, but I’d much rather be up by 5 points than down by 5 points. If you look at a large number of elections, over multiple years, I doubt you’d find that people down by 5 points in polling win roughly as often as those who are up by that amount (if I’m wrong, then please show the data). “It could happen” is not the same as “50-50 chance”, and if predicting possibilities based on imperfect information were a useless concept, then insurance companies would all go out of business. Even the best insurance company will occasionally end up selling a life insurance policy to someone who dies within a year.
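
The “1 out of 4” point above is straightforward to simulate: even a perfectly calibrated model that calls one side a 74 percent favorite will see that side lose about a quarter of the time. An illustrative sketch:

```python
import random

random.seed(1)  # fixed seed for reproducibility

trials = 100_000
# In each trial the 74% favorite wins with exactly that probability.
upsets = sum(random.random() >= 0.74 for _ in range(trials))

# The upset rate lands near 0.26: the favorite losing once is
# expected behavior, not proof the 74% figure was wrong.
print(round(upsets / trials, 2))  # 0.26
```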
