Keys to the White House

13-key-to-white-house

This is a very important article about election prediction. Of course, you have to be a historian, with a deep understanding of the underlying process, to make a proper judgement. The keys are evaluated by humans, not machines.

His simple linear regression beats all modern machine learning algorithms today!

The Keys to the White House, developed by Allan Lichtman, is a system for predicting the popular-vote result of American presidential elections, based upon the theory of pragmatic voting. America’s electorate, according to this theory, chooses a president, not according to events of the campaign, but according to how well the party in control of the White House has governed the country. If the voters are content with the party in power, it gains four more years in the White House; if not, the challenging party prevails. Thus, the choice of a president does not turn on debates, advertising, speeches, endorsements, rallies, platforms, promises, or campaign tactics. Rather, presidential elections are primarily referenda on the performance of the party holding the White House.

V = 37.2 + 1.8 × (number of true keys) = 37.2 + 1.8 × 7 = 49.8
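For readers who want to play with the numbers, here is a minimal Python sketch of that formula, assuming V is the incumbent party's share of the popular vote (in percent) and the only input is the count of true keys; the coefficients 37.2 and 1.8 are simply the ones quoted above.

```python
def predict_vote_share(true_keys: int) -> float:
    """Predicted incumbent-party popular-vote share (%) from the number
    of true keys, using the coefficients quoted above."""
    return 37.2 + 1.8 * true_keys

# Example: 7 true keys -> 37.2 + 1.8 * 7 = 49.8,
# just under 50% on a two-party-share reading.
print(predict_vote_share(7))
```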

Reference: Keys to the White House

Donald Trump’s win was predicted by Allan Lichtman — the US election expert who has called every result since 1984

allan-lichtman

Political analyst concluded ‘Hillary doesn’t fit the bill’ partly because she lacked Barack Obama’s charisma. Allan Lichtman, a political analyst who has correctly predicted the results of every presidential election since 1984, foresaw that Mr Trump would be the 45th US President.

Unlike many experts who fixated on Mr Trump’s controversial campaign when assessing the election outcome, Professor Lichtman largely focused his calculations on the incumbent party’s potential for another victory, based on 13 key assessments. The system entails “mathematically and specifically” measuring the performance of the party in office. It is a historically based prediction system: he derived it by looking at every American presidential election from 1860 to 1980.

One of his keys is whether or not the sitting president is running for re-election, and right away, [the Democrats] are down that key.

Another one of his keys is whether or not the candidate of the White House party is, like Obama was in 2008, charismatic. Hillary Clinton doesn’t fit the bill.
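The commonly cited decision rule behind the 13 keys is that six or more false keys predict the defeat of the party holding the White House. The sketch below illustrates that counting logic; the key names and boolean values are hypothetical placeholders based on the two examples above, not Lichtman's exact wording.

```python
def keys_prediction(keys: dict) -> str:
    """Commonly cited threshold: six or more false keys (keys counting
    against the party holding the White House) predict that the
    challenging party wins."""
    false_keys = sum(1 for favors_incumbent in keys.values() if not favors_incumbent)
    return "challenging party wins" if false_keys >= 6 else "incumbent party retains the White House"

# Two of the keys mentioned above, as hypothetical booleans (the other 11 omitted).
example_keys = {
    "sitting_president_running_for_reelection": False,  # Obama was not on the ballot
    "charismatic_incumbent_candidate": False,           # "Hillary doesn't fit the bill"
}

# Pad the remaining 11 keys as hypothetically true, just to make the call runnable.
print(keys_prediction({**example_keys, **{f"key_{i}": True for i in range(11)}}))
```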

Check out the articles below for details of his calculation:

 

Who got it right? These 3 unusual, unlikely things predicted Trump’s win

An employee holds up masks depicting Democratic presidential nominee Hillary Clinton and Republican presidential nominee Donald Trump at Hollywood Toys & Costumes in Los Angeles

Earlier this week, CBS News profiled five strange and unexpected things that have correctly predicted the results of the presidential election for decades. Now, it seems, three of those five predictors were right — forecasting Donald Trump would be the 45th president of the United States.

  • A mystic monkey in Changsha, China.
  • The “Halloween mask index” had Donald Trump ahead of Hillary Clinton, 55 to 45 percent.
  • The American University professor Allan Lichtman.

Reference: Who got it right? These 3 unusual, unlikely things predicted Trump’s win

Winner and Loser of the 2016 US Presidential Election

2016-election

Well, this morning, everyone should already know the winner and loser of the 2016 US presidential election. So, in the spirit of this blog, let’s look at the winner and loser in infographics and predictive analytics as a result of the 2016 US presidential election.

Winner: Google Election

Google made it so easy to understand the election status and results in one simple page, with tabs for overview, president, senate, house, etc. The bar between the two presidential candidates is so clear that we know who is winning and who is catching up, as well as how many electoral votes are needed to win.

google-dashboard

I especially like the semi-transparent red and blue used to indicate the majority stakeholders by state and the status of swing states in the lower part of the dashboard. In contrast to what Google is doing, other media outlets do not use semi-transparent colors for the remaining states, so it is less clear who would win the rest of the electoral votes and harder to see the trend.

nbc-dashboard

On the other hand, the bar between the candidates in the Google dashboard offsets the bias introduced by the area-based US map (which implies that states with large areas carry more electoral votes).

The only item I would add to the Google election dashboard is to apply the semi-transparent colors to the bar between candidates as well. This would make the dashboard perfect.
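As a rough illustration of that suggestion, here is a minimal matplotlib sketch of a candidate bar with semi-transparent extensions for the electoral votes not yet called; the vote counts and color splits are made-up mid-count numbers, not Google's actual design or data.

```python
import matplotlib.pyplot as plt

# Hypothetical mid-count totals, just to illustrate the semi-transparent treatment.
trump_called, clinton_called = 244, 215
remaining = 538 - trump_called - clinton_called  # electoral votes not yet called

fig, ax = plt.subplots(figsize=(8, 1.5))
# Solid colors for called electoral votes...
ax.barh(0, trump_called, color="red", label="Trump (called)")
ax.barh(0, clinton_called, left=538 - clinton_called, color="blue", label="Clinton (called)")
# ...and the same colors at low alpha for the votes still leaning each way.
ax.barh(0, remaining / 2, left=trump_called, color="red", alpha=0.3, label="leaning Trump")
ax.barh(0, remaining / 2, left=trump_called + remaining / 2, color="blue", alpha=0.3, label="leaning Clinton")

ax.axvline(270, color="black", linestyle="--", linewidth=1)  # 270 electoral votes to win
ax.set_xlim(0, 538)
ax.set_yticks([])
ax.set_xlabel("Electoral votes")
ax.legend(loc="upper center", ncol=4, fontsize=8)
plt.tight_layout()
plt.show()
```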

In summary, the Google election dashboard does an excellent job for the US presidential election. It brings clarity to both the status and the trend of the election results in a very precise manner. It deserves to be the winner of BI dashboard design for this election.

Loser: Predictive Analytics

Predictive analytics did a very poor job in this presidential election. All the predictive models consistently said Mrs. Clinton would win the election over Mr. Trump. As we all know this morning, the prediction was a total failure.

538 (FiveThirtyEight) is a pretty popular predictive analytics site. The following image shows its forecast during the night of the election results. The forecast had been in favor of Mrs. Clinton until 10 PM, when the curve started to swing in favor of Mr. Trump. The switch did not become evident until 11:30 PM (at the big gap where the red line rises above the blue line in the lower part of the chart), by which time some people could already tell the trend ahead of the forecast.

538-forecast

However, we probably should not blame predictive analytics for such a big failure. The strength of predictive analytics is to predict a “major trend”, not a single outcome; and most of our predictive analytics today rely solely on “data” and nothing else.

As I pointed out in my recent blog on my site and on KDnuggets, if predictive analytics is purely based on data, without understanding the underlying process, its forecast is subject to noise and bias in the data and can be very inaccurate. This became evident during this presidential election: because the data were biased towards Mrs. Clinton, the models predicted Mrs. Clinton to win.
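A tiny simulation, with purely made-up numbers, illustrates the point: when every poll shares the same systematic bias, averaging more data does not remove the bias, and the forecast picks the wrong winner.

```python
import numpy as np

rng = np.random.default_rng(0)

true_margin = -1.0      # hypothetical true margin (points) in favor of candidate B
systematic_bias = 3.0   # every poll overstates candidate A by the same amount
noise_sd = 2.0

# 1,000 polls: random noise averages out, but the shared bias does not.
polls = true_margin + systematic_bias + rng.normal(0.0, noise_sd, size=1000)

print(f"average polled margin: {polls.mean():+.2f}")  # about +2.0 -> model predicts A wins
print(f"true margin:           {true_margin:+.2f}")   # B actually wins
```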

In addition, since the presidential election result is a single outcome and involves a lot of human factors which cannot be quantified analytically, predictive analytics may not be the right tool for the prediction at all! The totally failed prediction makes predictive analytics the loser of this election.

Predictive analytics still works well in a controlled context, but it may not be the right tool for election prediction unless we (1) quantify human factors and correlations accurately, (2) stop depending solely on data, and (3) fully disclose the prediction errors.

 

Elon Musk: Robots will take your jobs, government will have to pay your wage

nyt-robotic-manufacturing

Another article talks about robotics taking over future jobs, which I have been claiming for a long time. Governments need to prepare for the trend by working on regulations for the future job market. I proposed a social security contribution by companies employing robots, whereas people are talking about a “universal income.” The only problem with universal income is that it may make the government too big or too powerful for some people to accept. Either way, governments need to be ready for such a job market trend.

Reference: Elon Musk: Robots will take your jobs, government will have to pay your wage

My previous posts:

Robots May Pay Taxes Under European Proposals

Why our recent technology advancement is not a revolution to economy?

A 19-year-old made a free robot lawyer that has appealed $3 million in parking tickets

The Bank of England has a chart that shows whether a robot will take your job

These adorable robots could someday put construction workers out of a job

Sustain Social Security under the Impact of Automation and a Shrinking Job Market

Study: Allowing guns on college campuses won’t reduce mass shootings

no_weapons102416

Policies allowing civilians to bring guns on to college campuses are unlikely to reduce mass shootings on campus and are likely to lead to more shootings, homicides, and suicides on campus—especially among students—a new report concludes.

Reference: Allowing guns on college campuses won’t reduce mass shootings

How can Lean Six Sigma help Machine Learning?

cover

Note that this article was submitted to and accepted by KDnuggets, the most popular blog site about machine learning and knowledge discovery.

I have been using Lean Six Sigma (LSS) to improve business processes for the past 10+ years and am very satisfied with its benefits. Recently, I’ve been working with a consulting firm and a software vendor to implement a machine learning (ML) model to predict the remaining useful life (RUL) of service parts. The result I am most frustrated with is the low accuracy of the resulting model. As shown below, if we measure the deviation as the absolute difference between the actual part life and the predicted one, the resulting model has 127, 60, and 36 days of average deviation for the 3 selected parts. I could not understand why the deviations are so large with machine learning.

lss_ml_1
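For reference, the “average deviation” quoted above is just the mean absolute difference between actual and predicted part life. A minimal sketch with made-up numbers (the real RUL data are not reproduced here):

```python
import numpy as np

# Hypothetical actual vs. predicted remaining useful life (days) for one part type.
actual_life    = np.array([420, 365, 510, 298, 450])
predicted_life = np.array([300, 410, 395, 350, 520])

mean_abs_deviation = np.mean(np.abs(actual_life - predicted_life))
print(f"average deviation: {mean_abs_deviation:.0f} days")
```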

After working with the consultants and data scientists, it appears that they can improve the deviation by only 10%. This puzzles me a lot. I thought machine learning was a great new tool to make forecasting simple and quick, but I did not expect it to have such a large deviation. To me, such a deviation, even after the 10% improvement, still renders the forecast useless to the business owners. This forces me to ask myself the following questions:

  • Is machine learning really a good forecasting tool?
  • What do people NOT know about machine learning?
  • What is missing in machine learning? Can Lean Six Sigma fill the gap?

Note that machine learning, in general, targets two major categories of problems: unsupervised and supervised learning. This article focuses on a supervised learning problem, using the regression analysis of machine learning.

Lean Six Sigma

The objective of Lean Six Sigma (LSS) is to improve process performance by reducing its variance. Here, the variance is measured as the sum of squared differences between the actual values and the forecast of the LSS model, following the classical statistical notion of variance.
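In code, that measure is simply the sum of squared residuals between the actual values and the forecast; a quick sketch with placeholder numbers:

```python
import numpy as np

actual   = np.array([10.2, 9.8, 11.1, 10.5, 9.4])    # observed process output
forecast = np.array([10.0, 10.0, 10.0, 10.0, 10.0])  # LSS model forecast / target

variance_measure = np.sum((actual - forecast) ** 2)  # sum of squared differences
print(variance_measure)
```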

The result of LSS is essentially a statistical function (model) between a set of input / independent variables and the output / dependent variable(s), as shown in the chart below.

lss_ml_2

By identifying the correlations between the input and output variables, the LSS model tells us how to control the input variables in order to move the output variable(s) toward our target values. Most importantly, LSS also requires the monitored process to be “stable”, i.e., the variance of the output variable(s) is minimized by minimizing the variance of the input variables, in order to achieve the so-called “breakthrough” state.

lss_ml_3
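As a deliberately simplified sketch of that idea, the snippet below fits a linear model between two input variables and one output, then uses the fitted correlations to choose an input setting that moves the output toward a target value; the data and the target are placeholders, not from the RUL project.

```python
import numpy as np

# Placeholder process data: two input variables (columns) and one output.
X = np.array([[1.0, 20.0], [2.0, 22.0], [3.0, 19.0], [4.0, 25.0], [5.0, 24.0]])
y = np.array([10.5, 12.1, 13.8, 16.2, 17.9])

# Fit y ~ b0 + b1*x1 + b2*x2 by least squares: the "statistical function" of the LSS model.
A = np.column_stack([np.ones(len(X)), X])
b0, b1, b2 = np.linalg.lstsq(A, y, rcond=None)[0]

# Use the fitted correlations to pick x1 so the output hits a target, holding x2 at its mean.
target, x2_fixed = 15.0, X[:, 1].mean()
x1_needed = (target - b0 - b2 * x2_fixed) / b1
print(f"set x1 to about {x1_needed:.2f} (with x2 = {x2_fixed:.1f}) to move the output toward {target}")
```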

As the chart below shows, if you get to your target (center) without variance control (the spread around the target in the left chart), there is no guarantee you will stay on target; if you reduce the variance without getting to the target (right chart), you miss your target. Only by keeping the variance small and the process centered can LSS ensure that the target is reached precisely and that process performance is sustainable and optimal. This is the major contribution of LSS.

lss_ml_4
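In numbers, the two failure modes in the chart correspond to a spread problem (large standard deviation around the target) versus a centering problem (mean away from the target); a toy comparison with made-up samples:

```python
import numpy as np

target = 10.0
on_target_high_variance = np.array([7.5, 12.8, 9.1, 13.0, 7.6])    # centered, but spread out
off_target_low_variance = np.array([11.9, 12.1, 12.0, 11.8, 12.2])  # tight, but off target

for name, sample in [("on target, high variance", on_target_high_variance),
                     ("off target, low variance", off_target_low_variance)]:
    print(f"{name}: mean offset = {sample.mean() - target:+.2f}, std dev = {sample.std(ddof=1):.2f}")
```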

Machine Learning (ML)

Supervised machine learning looks for a function between a set of input variables and the output variable(s), coming up with an “approximation” of the ideal function, as shown by the green curve below.

lss_ml_5
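A minimal supervised-learning sketch of that approximation, fitting scikit-learn's LinearRegression to made-up data (the "ideal" function y = 3x + 5 and the noise level are assumptions for illustration only):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)

# Made-up "ideal" relationship plus noise: y = 3x + 5 + noise.
X = rng.uniform(0, 10, size=(200, 1))
y = 3.0 * X[:, 0] + 5.0 + rng.normal(0.0, 2.0, size=200)

model = LinearRegression().fit(X, y)     # the learned approximation of the ideal function
print(model.coef_[0], model.intercept_)  # should land near 3 and 5
```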

Similarly, unsupervised machine learning looks for a function which best differentiates a set of clusters.

lss_ml_6
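And a minimal unsupervised sketch, using k-means from scikit-learn to separate two made-up clusters:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Two made-up clusters of points in 2-D.
cluster_a = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(100, 2))
cluster_b = rng.normal(loc=[5.0, 5.0], scale=0.5, size=(100, 2))
points = np.vstack([cluster_a, cluster_b])

# k-means learns a partition that best differentiates the two clusters.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(points)
print(np.bincount(labels))  # roughly 100 points per cluster
```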

Comparison between LSS and ML

It is well known that, due to bias and natural randomness, a process is random in nature, i.e., it has variance. Both classical statistics and LSS have therefore shown that, if the input variables have large variance, we should expect large variance in the output variable(s).

lss_ml_7
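A quick Monte Carlo illustration of that propagation, using a made-up linear process y = 2*x1 + 0.5*x2: doubling the standard deviation of the inputs roughly doubles the standard deviation of the output.

```python
import numpy as np

rng = np.random.default_rng(1)

def process_output_sd(x1_sd: float, x2_sd: float, n: int = 100_000) -> float:
    """Std dev of the made-up process y = 2*x1 + 0.5*x2 for given input variation."""
    x1 = rng.normal(0.0, x1_sd, n)
    x2 = rng.normal(0.0, x2_sd, n)
    return (2.0 * x1 + 0.5 * x2).std()

print(process_output_sd(1.0, 1.0))  # about 2.06
print(process_output_sd(2.0, 2.0))  # about twice as large, about 4.12
```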

This strongly suggests that a machine learning model will be inaccurate when its input variables have large variance. This is why, I think, my recent machine learning project has such large prediction errors, and also the reason why the data science consultants can improve the accuracy by only 10%.

People may argue that machine learning does have a step called data cleansing to improve the quality of prediction. Well, the problem is that the data cleansing of ML is not the same as the variance reduction of LSS. In LSS, people go back to examine the business process to find the sources of variance in the input variables (factors), in order to eliminate bias or reduce the variance of those inputs, whereas in ML people do not go back to revisit the business process; instead, they only try to correct data errors or eliminate data which do not make sense. As a result, such a data cleansing approach does not actually reduce variance; in fact, it may not change the input variance at all. Therefore, the ML model cannot be expected to work well if people do not understand the role of variance.

As an example, if the left chart below represents the data points after data cleansing, we would get the red curve as the optimal ML model. But if the right chart below represents the data points after variance reduction, the resulting ML model would be much more accurate.

lss_ml_8
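The sketch below mimics that comparison with synthetic data: the same linear fit is run once on a noisy sample (data cleansing only) and once on a sample with the noise cut in half (after variance reduction), and the prediction error shrinks accordingly. All numbers are made up for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)

def fit_error(noise_sd: float, n: int = 500) -> float:
    """Mean absolute prediction error of a linear fit on y = 3x + 5 + noise."""
    X = rng.uniform(0, 10, size=(n, 1))
    y = 3.0 * X[:, 0] + 5.0 + rng.normal(0.0, noise_sd, size=n)
    model = LinearRegression().fit(X, y)
    return float(np.mean(np.abs(model.predict(X) - y)))

print(f"cleansing only (noise sd = 4):   error about {fit_error(4.0):.2f}")
print(f"variance reduced (noise sd = 2): error about {fit_error(2.0):.2f}")
```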

In summary, I think the current data cleansing step of ML needs to include the variance reduction techniques of LSS in order to produce an accurate, reliable, and effective model for either supervised or unsupervised learning. People need to spend the effort to review the underlying business process and reduce input variance to make ML work better for real-world problems.

Software vendors and data science consulting firms should embrace variance reduction techniques in the data cleansing phase of ML to deliver the real value of ML.