The safety net of innovation
This is where reliability engineering steps in, collecting data, drafting requirements and load profiles, performing risk assessments and designing field tests, to get the clearest possible picture of how your new product will perform years into the future. Reliability is the safety net of innovation, making sure your new product is robust enough to get through its expected lifecycle in one piece (pun intended).
Take the example of a company designing a new gearbox for a car. To model its lifecycle, a reliability engineer (RE) would need to establish things like how often drivers shift from second to third gear and from third to fourth. Only then can they get a good estimate of the forces the individual gears have to endure, information the engineering department can use to adjust the design. Later on, the RE designs a test plan to obtain the most accurate estimate of how long the gearbox will work before it starts to malfunction and is due for maintenance.
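To make this concrete, here is a minimal sketch of how an RE might turn usage assumptions into a load-cycle estimate for one gear pair. All the numbers and names are hypothetical, invented for illustration, not real product data.

```python
# Hypothetical usage assumptions for the gearbox example.
SHIFTS_2_TO_3_PER_KM = 0.8   # assumed average 2nd-to-3rd shifts per km driven
DESIGN_LIFE_KM = 300_000     # assumed design life of the car in km

def expected_shift_cycles(shifts_per_km: float, design_life_km: float) -> float:
    """Expected number of gear engagements over the design life."""
    return shifts_per_km * design_life_km

cycles = expected_shift_cycles(SHIFTS_2_TO_3_PER_KM, DESIGN_LIFE_KM)
```

The resulting cycle count would then be compared against the gear's fatigue limit (for instance from an S-N curve) to decide whether the design needs strengthening.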
Still, for most companies reliability engineering is just an afterthought. It’s shocking to me how often I get a blank look when I ask management or engineering how long they expect their product to last before it starts to malfunction or breaks down. Without this crucial piece of information, your maintenance department might get overrun when gearboxes all of a sudden come back for repairs after just one year.
There are many examples of companies paying dearly for ignoring their reliability homework. They seem to think a simple functionality test is enough. Okay, so the product works, but does it still work after five years? When it comes to reliability testing, most companies are overly optimistic and assume everything will turn out fine. When it doesn't, it's too late to do anything about it. Long story short: taking reliability seriously can extend a product's lifecycle, save maintenance costs and prevent painful incidents and accidents.
Time to market pressures
One of the problems is that in product development today, time to market (TTM) has become all-important. Under TTM pressure, new products are released without proper reliability testing, sometimes with dire consequences. Remember how Samsung had to recall its brand-new Galaxy Note 7 phones in 2016 after they started catching fire in consumers' bags and purses? Again, an avoidable incident had proper reliability engineering been involved.
If for one reason or another there was limited time for field tests, it's essential to closely monitor the performance of your product during the first few months after its release. With the help of reliable first-user data you can still tweak the product and launch a better, updated version later on. Needless to say, this requires proper implementation of a first-user feedback loop.
That’s why data science is such an amazing addition to reliability engineering. Data has always been important for lifecycle management, but now that data is more readily available everywhere, the opportunities are plentiful. Apart from getting real-time feedback from first users, you are also able to follow each machine or product individually, which makes maintenance much more efficient and cost effective. Plus you also get valuable information about the load profile for different customers, so product improvements can be implemented early on.
Discovering patterns and trends
Let me illustrate this with the example of equipment used to install wind turbines at sea. When equipment breaks down somewhere on the open sea, the maintenance department has to fly in spare parts by helicopter, a very expensive operation. By using data science to digitally monitor the equipment, you can make detailed user and load profiles. By logging how long and how frequently the equipment is used, which weather conditions it had to endure and so forth, you can predict when that particular piece of equipment needs a maintenance checkup. And the checkup can be done when the equipment is on shore, instead of helplessly floating somewhere in the middle of the ocean.
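The scheduling logic behind such a checkup can be sketched in a few lines. This is a toy model, not a real maintenance algorithm: the rated-hours threshold, the field names and the assumption that heavy-sea hours consume life three times faster are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class UsageLog:
    """Logged usage for one piece of offshore equipment (hypothetical fields)."""
    operating_hours: float   # total hours of use
    heavy_sea_hours: float   # hours of those spent in rough weather

def remaining_hours(log: UsageLog,
                    rated_hours: float = 2_000.0,
                    heavy_weather_factor: float = 3.0) -> float:
    """Hours of normal operation left before a shore-side checkup is due.

    Assumes each hour in heavy seas consumes life `heavy_weather_factor`
    times faster than an hour in calm conditions.
    """
    consumed = (log.operating_hours - log.heavy_sea_hours) \
               + heavy_weather_factor * log.heavy_sea_hours
    return max(rated_hours - consumed, 0.0)

log = UsageLog(operating_hours=1_200.0, heavy_sea_hours=200.0)
hours_left = remaining_hours(log)
```

With a model like this per machine, the maintenance department can plan the checkup for the next scheduled time the equipment is on shore, rather than reacting to a breakdown at sea.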
One of the challenges is to collect all that data from the various sources inside and outside the company. You need data from the engineering department about technical details, data from service support about problems and malfunctions, data from sales about how many products are circulating in the field, and data from the users themselves. It takes a little IT knowledge to gather the data and string it together intelligently to discover meaningful trends and patterns.
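Stringing the departmental data together usually comes down to joining records on a shared key, such as a serial number. Here is a toy illustration of that idea; the serial number, field names and values are all invented for the example.

```python
# Hypothetical per-department records, keyed on a shared serial number.
engineering = {"GBX-001": {"gear_ratio": 3.2}}
service     = {"GBX-001": {"reported_failures": 2}}
sales       = {"GBX-001": {"in_field_since": "2022-03"}}

def merge_on_serial(*sources: dict) -> dict:
    """Combine per-serial records from several departmental sources."""
    merged: dict = {}
    for source in sources:
        for serial, fields in source.items():
            merged.setdefault(serial, {}).update(fields)
    return merged

fleet = merge_on_serial(engineering, service, sales)
```

Once each unit's engineering, service and sales data sit in one record, the pattern-hunting described above becomes a matter of straightforward queries over the merged view.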
A word of caution here. It’s easy to just grab any dataset and start extracting patterns, but datasets can be woefully biased or incomplete. Then patterns and trends don’t mean much and it can even be dangerous to build any business proposition on them. You need statistical knowledge to judge what a dataset does and does not show.