Any time there's a quality event and a customer is involved, the problem needs to be contained and fixed ASAP. The trouble is, in the absence of good data, it's difficult to know what actually led to the quality event. The containment action tends to be an all-encompassing "reset" of the process (e.g., a machine clean-out) as we try to bring everything back to the state it was in when everything was working well. If the problem goes away, everyone celebrates and gets back to business. If the quality event was big enough, a new preventive maintenance schedule gets established, and now the system is being reset at a higher frequency. The problem is, every reset is expensive, requires downtime, and hurts productivity. And it might not even address the real cause of the quality issue in the first place.
Attached is a case study we did for one of our clients. They struggled to consistently meet a new cleanliness specification imposed by a new customer. The initial reaction was to increase the target concentration of the chemical cleaner and increase the dump-and-recharge frequency. Both would have resulted in higher operating costs. Since they didn't have the time or resources to do a true 5-Why study to determine the KPIs that were impacting cleanliness, we were asked to help. The case study below outlines the steps we took to isolate the variables that had a direct impact on part cleanliness. By understanding the specific KPIs that were affecting cleanliness, we were able to use condition-based management instead of time-based preventive measures to keep the process in control and costs optimized.
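The shift from time-based to condition-based management can be sketched in a few lines: instead of dumping and recharging the bath on a fixed calendar, the decision is driven by whether measured KPIs are still inside their control limits. This is a minimal illustrative sketch only; the KPI names (cleaner concentration, residue weight) and the limit values are hypothetical assumptions, not the client's actual monitored variables or thresholds.

```python
from dataclasses import dataclass

@dataclass
class BathReading:
    """One snapshot of washer-bath KPIs (names and units are hypothetical)."""
    cleaner_concentration_pct: float  # titrated cleaner concentration
    residue_mg: float                 # gravimetric residue from a test part
    days_since_recharge: int          # only used by the time-based rule

# Hypothetical control limits -- in practice these come out of the
# variable-isolation (5-Why) study, not from a calendar.
MIN_CONCENTRATION_PCT = 2.0
MAX_RESIDUE_MG = 5.0
TIME_BASED_INTERVAL_DAYS = 14

def needs_recharge_time_based(reading: BathReading) -> bool:
    """Time-based rule: recharge on a fixed schedule, in or out of control."""
    return reading.days_since_recharge >= TIME_BASED_INTERVAL_DAYS

def needs_recharge_condition_based(reading: BathReading) -> bool:
    """Condition-based rule: recharge only when a KPI drifts out of control,
    regardless of how long the bath has been in service."""
    return (reading.cleaner_concentration_pct < MIN_CONCENTRATION_PCT
            or reading.residue_mg > MAX_RESIDUE_MG)
```

With healthy KPIs, the condition-based rule skips a recharge the calendar would have forced (saving downtime); with a drifting KPI, it triggers one the calendar would have missed.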