How can Lean Six Sigma help Machine Learning?

Note that this article was submitted to and accepted by KDnuggets, one of the most popular blogs about machine learning and knowledge discovery.

I have been using Lean Six Sigma (LSS) to improve business processes for the past 10+ years and am very satisfied with its benefits. Recently, I worked with a consulting firm and a software vendor to implement a machine learning (ML) model to predict the remaining useful life (RUL) of service parts. What frustrated me most was the low accuracy of the resulting model. As shown below, measuring deviation as the absolute difference between the actual part life and the predicted one, the model averages 127, 60, and 36 days of deviation for the three selected parts. I could not understand why the deviations from machine learning are so large.

[Figure: lss_ml_1]
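
For concreteness, here is a minimal sketch of how such a deviation figure can be computed. The arrays of actual and predicted part lives below are hypothetical stand-ins, not the real project data.

```python
import numpy as np

# Hypothetical actual and predicted remaining-useful-life values (in days)
# for one service part; stand-ins for the real project data.
actual_life = np.array([380, 310, 450, 290, 520])
predicted_life = np.array([250, 400, 330, 410, 450])

# Average deviation as defined above: the mean absolute difference
# between actual part life and predicted part life.
avg_deviation = np.mean(np.abs(actual_life - predicted_life))
print(f"Average deviation: {avg_deviation:.0f} days")
```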

After working with the consultants and data scientists, it appears that they can improve the deviation by only 10%. This puzzled me. I had thought machine learning was a great new tool for making forecasts simply and quickly; I did not expect it to produce such large deviations. To me, such a deviation, even after the 10% improvement, still renders the forecast useless to the business owners. This forced me to ask myself the following questions:

  • Is machine learning really a good forecasting tool?
  • What do people NOT know about machine learning?
  • What is missing in machine learning? Can Lean Six Sigma fill the gap?

Note that machine learning, in general, targets two major categories of problems: unsupervised and supervised learning. This article focuses on a supervised learning problem, using regression analysis.
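
As a concrete illustration of that supervised, regression-style setup, here is a minimal sketch that fits a regression model to predict RUL from a few input features. The feature names, data, and model choice are all assumptions for illustration, not the actual project model.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical input features for service parts: usage hours, operating
# temperature, and load factor. None of these come from the real project.
X = rng.uniform([0, 20, 0.1], [5000, 90, 1.0], size=(200, 3))

# Hypothetical "true" relationship plus noise, standing in for real part life.
y = 600 - 0.08 * X[:, 0] - 2.0 * X[:, 1] - 150 * X[:, 2] + rng.normal(0, 40, 200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LinearRegression().fit(X_train, y_train)

pred = model.predict(X_test)
print(f"Average deviation: {np.mean(np.abs(y_test - pred)):.0f} days")
```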

Lean Six Sigma

The objective of Lean Six Sigma (LSS) is to improve process performance by reducing its variance. Variance here is defined, as in classical statistics, as the mean squared difference between the actual values and the model's forecast.
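
In code, that definition is simply the mean squared deviation from the forecast; a minimal sketch with made-up numbers:

```python
import numpy as np

actual = np.array([102.0, 98.5, 101.2, 99.7, 100.6])
forecast = actual.mean()  # the classical-statistics case: forecast = the mean

# Variance: the average squared difference between actual values
# and the model's forecast.
variance = np.mean((actual - forecast) ** 2)
print(f"Variance: {variance:.3f}")
```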

The result of LSS is essentially a statistical function (model) between a set of input/independent variables and the output/dependent variable(s), as shown in the chart below.

[Figure: lss_ml_2]

By identifying the correlations between the input and output variables, the LSS model tells us how to control the input variables in order to move the output variable(s) toward our target values. Most importantly, LSS also requires the monitored process to be "stable": the variance of the output variable(s) is minimized by minimizing the variance of the input variables, in order to achieve the so-called "breakthrough" state.

[Figure: lss_ml_3]
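
A hedged sketch of that idea: fit a simple linear model between inputs and output, then read the coefficients to see which inputs to adjust, and by how much, to move the output toward its target. The variable names and numbers are illustrative only.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Illustrative process data: two controllable inputs and one output.
x1 = rng.normal(50, 5, 300)   # e.g., machine speed
x2 = rng.normal(10, 2, 300)   # e.g., material thickness
y = 3.0 * x1 - 4.0 * x2 + rng.normal(0, 1, 300)

model = LinearRegression().fit(np.column_stack([x1, x2]), y)
print("Coefficients:", model.coef_)  # roughly [3, -4]

# If the output target is, say, 120, the coefficients tell us how a unit
# change in each input moves the output toward (or away from) that target.
```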

As the chart below shows, if you reach your target (center) on average but without variance control (the spread around the target in the left chart), there is no guarantee that any individual outcome lands near the target; if you reduce the variance without reaching the target (right chart), you consistently miss it. Only by keeping the variance small and the process centered can LSS ensure the target is reached precisely, with sustainable and optimal process performance. This is the major contribution of LSS.

[Figure: lss_ml_4]
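
A small simulation (illustrative numbers only) makes the same point numerically: a process that is on target on average but widely spread, or tightly spread but off target, rarely lands within specification; only the centered-and-tight case does so reliably.

```python
import numpy as np

rng = np.random.default_rng(2)
target = 100.0
n = 10_000

# On target on average, but large variance (left chart).
centered_wide = rng.normal(loc=100.0, scale=10.0, size=n)
# Small variance, but biased away from the target (right chart).
offset_tight = rng.normal(loc=110.0, scale=1.0, size=n)
# Centered AND tight: the LSS "breakthrough" state.
centered_tight = rng.normal(loc=100.0, scale=1.0, size=n)

for name, x in [("centered, wide", centered_wide),
                ("offset, tight", offset_tight),
                ("centered, tight", centered_tight)]:
    within_spec = np.mean(np.abs(x - target) <= 3.0)
    print(f"{name:15s} -> {within_spec:.1%} of outcomes within +/-3 of target")
```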

Machine Learning (ML)

Supervised machine learning looks for a function between a set of input variables and output variable(s), arriving at an "approximation" of the ideal function, as shown by the green curve below.

[Figure: lss_ml_5]

Similarly, unsupervised machine learning looks for a function that best differentiates a set of clusters.

[Figure: lss_ml_6]
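
For completeness, here is a minimal sketch of that unsupervised case, using k-means on synthetic two-dimensional data to separate two groups; the data is made up for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)

# Two synthetic clusters of points; stand-ins for real unlabeled data.
cluster_a = rng.normal(loc=[0, 0], scale=0.5, size=(100, 2))
cluster_b = rng.normal(loc=[3, 3], scale=0.5, size=(100, 2))
X = np.vstack([cluster_a, cluster_b])

# k-means looks for the grouping that best differentiates the clusters.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("Cluster sizes:", np.bincount(labels))
```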

Comparison between LSS and ML

It is well known that, due to bias and natural randomness, every process is random in nature; i.e., it has variance. Both classical statistics and LSS show that if the input variables have large variance, we should expect large variance in the output variable(s).

[Figure: lss_ml_7]
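
A simple simulation (with made-up numbers) demonstrates the effect: the same model, fit to the same underlying relationship, predicts far worse when the measured inputs carry more variance.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
n = 1000

# True (unobserved) input and the true relationship to the output.
x_true = rng.uniform(0, 10, n)
y = 5.0 * x_true + rng.normal(0, 1, n)

for input_sd in (0.1, 3.0):  # small vs. large variance in the measured input
    x_measured = x_true + rng.normal(0, input_sd, n)  # noisy observation
    model = LinearRegression().fit(x_measured.reshape(-1, 1), y)
    pred = model.predict(x_measured.reshape(-1, 1))
    mae = np.mean(np.abs(y - pred))
    print(f"input noise sd={input_sd}: average deviation = {mae:.2f}")
```

The widening deviation comes entirely from the noisier inputs; nothing about the model or the true relationship has changed.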

This strongly suggests that a machine learning model will be inaccurate when its input variables have large variance. This, I think, is why my recent machine learning project produced such inaccurate predictions, and why the data science consultants could improve the accuracy by only 10%.

People may argue that machine learning does have a step, called data cleansing, to improve the quality of prediction. The problem is that data cleansing in ML is not the same as variance reduction in LSS. In LSS, people go back to the business process to find the sources of variance in the input variables, in order to eliminate bias or reduce the variance of those inputs (factors). In ML, people do not revisit the business process; they only correct data errors or remove data that does not make sense. As a result, data cleansing does not actually reduce variance; in fact, it may not change the input variance at all. The ML model therefore cannot be expected to work well if people do not understand the role of variance.
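
The distinction is easy to demonstrate with hypothetical data: removing invalid records barely changes the spread of a noisy input, whereas fixing the process that generates the noise does.

```python
import numpy as np

rng = np.random.default_rng(5)

# A noisy input variable with a few obviously invalid readings mixed in.
readings = rng.normal(100, 15, 500)          # high process variance
readings[:5] = [-999, -999, 10_000, -1, 0]   # sensor errors / nonsense values

# ML-style data cleansing: drop records that "do not make sense".
cleansed = readings[(readings > 0) & (readings < 500)]
print(f"std after cleansing:          {cleansed.std():.1f}")  # still ~15

# LSS-style variance reduction: fix the process so readings are less noisy.
improved_process = rng.normal(100, 3, 500)
print(f"std after variance reduction: {improved_process.std():.1f}")  # ~3
```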

As an example, if the left chart below represents the data points after data cleansing, we would get the red curve as the optimal ML model. But if the right chart below represents the data points after variance reduction, the resulting ML model would be much more accurate.

[Figure: lss_ml_8]

In summary, I think the data cleansing step of ML needs to incorporate the variance reduction techniques of LSS in order to produce accurate, reliable, and effective models for either supervised or unsupervised learning. People need to spend the effort to review the underlying business process and reduce input variance, so that ML works better for real-world problems.

Software vendors and data science consulting firms should embrace variance reduction techniques in the data cleansing phase of ML in order to deliver the real value of ML.