8 February, 2017
What The Election Taught Us About Predictive Analytics
Predictive analytics is sometimes lauded as a crystal ball, guiding companies away from business risks and toward business opportunities. But after the mistaken election polls, is predictive analytics as flawed as a real crystal ball?
If nearly all the polls were wrong in predicting that Donald Trump had no chance of winning the election, predictive analytics would seem to be a waste of time. One week before the election, Moody's Analytics' predictive models projected that Hillary Clinton would win "by a landslide."
Even the New York Times got it wrong. Two weeks before the votes were tallied, the newspaper gave Mr. Trump an 8% chance of winning. A chastened Times, in an editorial titled "How Data Failed Us in Calling an Election," blamed its failed prognostication on predictive analytics, reprimanding it as a "young science" and a "blunt instrument, missing context and nuance."
How did all but a few polls fail to pick up the undercurrents of what was really happening across the electorate? If the pollsters' predictive models can miss the mark by a mile, can businesses depend on the same technology to assess business conditions in making their forecasts?
In the decades I’ve spent advising companies on their data and analytics strategy, few mainstream events have sparked such discussion and elicited so many questions — from technology colleagues and business customers alike — as our recent election.