Elements of a Successful Fluid Analysis Program

March 28, 2013
Patrick J. Kilbane

Predictive maintenance is part science, part art—and the highest level of condition monitoring.

Fluid analysis remains an integral part of maintenance and reliability activities. To reach the highest level of condition monitoring, you must master the science of predictive maintenance.

Not long ago, maintenance was simply performed when equipment broke down. Although run-to-failure can be appropriate in specific circumstances, it is typically the exception and not the rule.

Maintenance then became a calendar-based (or time-based) event, such as changing your oil every 3,000 miles or changing the batteries in your smoke detector every six months. Although such activities can prevent some failures and reduce costs compared with reactive maintenance, they tend to mask many of the problems that can occur and require maintenance on equipment that is otherwise working just fine.

One example is an engine that has a small coolant leak. If we change the oil that contains the coolant before it causes damage to the engine, everything is fine and we don’t even know there’s a problem. However, if we miss an oil change or try to extend an oil drain interval, the small coolant leak can now cause damage and possible failure of the engine.

Predictive maintenance uses non-destructive testing to identify problems prior to failure. This testing includes fluid analysis, vibration analysis, thermography, ultrasound and other tests designed to analyze specific conditions of operating equipment. Here maintenance is scheduled on equipment that needs service based on the testing results, not just because it was next on the list or it’s listed on a calendar.

The Electric Power Research Institute calculated that preventive (calendar-based) maintenance saved roughly 24% of maintenance costs versus reactive maintenance. Predictive maintenance, in turn, saved an additional 30% over preventive maintenance and 47% over reactive maintenance. Return on investment will depend on the program, but returns of $30-$40 per sample on average are typical and can be higher.
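The two EPRI percentages are consistent with each other, which is easy to verify with a little arithmetic. In the sketch below, the $500,000 reactive-maintenance baseline is a hypothetical figure chosen purely for illustration:

```python
# Rough check of the EPRI savings figures cited above; the $500,000
# reactive baseline is a hypothetical number for illustration only.
reactive = 500_000                    # annual reactive maintenance spend

preventive = reactive * (1 - 0.24)    # ~24% savings over reactive
predictive = preventive * (1 - 0.30)  # additional ~30% over preventive

print(f"Preventive: ${preventive:,.0f}")
print(f"Predictive: ${predictive:,.0f}")
print(f"Predictive savings vs reactive: {1 - predictive / reactive:.0%}")
```

Compounding the two savings rates (0.76 × 0.70 ≈ 0.53 of the reactive cost) reproduces the 47% figure quoted for predictive over reactive.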

Our focus for this article is on fluid analysis. However, many, if not most, of the points we are discussing easily can be adapted for the other types of testing such as vibration, thermography, ultrasound and motor testing.

Perhaps the most important aspect of your predictive maintenance program is its objective or goal. It is crucial that program goals be carefully thought out and realistic. A generic or broad goal is acceptable to start with but will need to be clarified and defined to really maximize the effectiveness of the program.

Here are some examples of a generic or broad goal that will need to be better defined:
• Increase reliability
• Decrease costs
• Minimize downtime
• Eliminate safety hazards

Once you have decided on the program goal or goals, you need to select the equipment to be tested. Equipment selection should be determined with program goals in mind.

One way to identify the equipment that should be part of the program is to use a modified Failure Mode and Effects Analysis (FMEA). Although this procedure was originally designed around manufacturing processes and product failures, it can easily be adapted to condition monitoring and the evaluation of equipment failures.

The FMEA approach provides the information needed to identify and prioritize potential failures. Failure modes are rated by their occurrence, severity and detection levels; equipment with the highest scores should be evaluated against our goals for inclusion in the program.
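The standard way to combine those three FMEA ratings is a Risk Priority Number (RPN), the product of occurrence, severity and detection. A minimal sketch of that scoring, using hypothetical equipment names and ratings, might look like this:

```python
# Minimal sketch of modified-FMEA scoring for equipment selection.
# Occurrence, severity and detection are each rated 1-10 (higher is
# worse); the equipment names and ratings below are hypothetical.
equipment = {
    "Compressor #1":  {"occurrence": 7, "severity": 8, "detection": 6},
    "Gearbox A":      {"occurrence": 4, "severity": 9, "detection": 5},
    "Hydraulic pump": {"occurrence": 3, "severity": 4, "detection": 2},
}

def rpn(ratings):
    """Risk Priority Number: occurrence x severity x detection."""
    return ratings["occurrence"] * ratings["severity"] * ratings["detection"]

# Rank candidates; the highest scores are evaluated first for inclusion.
ranked = sorted(equipment, key=lambda name: rpn(equipment[name]), reverse=True)
for name in ranked:
    print(name, rpn(equipment[name]))
```

Here the compressor (RPN 336) would be the first candidate evaluated against the program goals.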

Regardless of how we select our equipment, we also need to take into account the type of lubrication system, lubricant capacity, typical failure modes, frequency of failure and more. The equipment selected should have a high degree of impact on our goals.

We also want to start with a relatively small population of equipment to make sure the program can be implemented properly. The population of equipment in the program can always be increased once the program is functioning as intended. Starting with large populations and then making modifications tends to cause confusion, errors and overall dissatisfaction with the performance of the program until the changes have worked through the system.

The next step is defining the testing requirements. Successful testing identifies contamination, failure modes or other conditions that affect our goals. Here we need to be specific regarding our goals such as optimizing drain intervals to lower lubricant and labor costs while reducing downtime due to these maintenance activities. Another goal might be reducing contamination levels to increase equipment lifetime and overall performance. Testing requirements initially will be set by the application type, typical failures, OEM recommendations and other service factors.

For example, if a compressor is identified as a unit that impacts your program goals, and water is a major cause of failure, we’d want to ensure that water is measured in our testing. If the amount of water it takes to cause damage is well over 1,000 ppm or 0.1%, we would be able to get by with a simple crackle test. These requirements provide the basis for the creation of reporting limits. These limits, in turn, provide an initial framework on how to evaluate the test results and provide detection of wear, contamination and operational problems. They also can be used to trigger additional testing as needed. Keeping the equipment below or within specific limits extends its longevity.
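The reporting limits described above can be thought of as a simple decision rule: compare each result against caution and critical thresholds, and trigger follow-up testing when a threshold is exceeded. The sketch below illustrates this for the water example; the specific ppm thresholds are hypothetical, not laboratory recommendations:

```python
# Sketch of evaluating a water result against reporting limits.
# The thresholds are illustrative only; actual limits should come from
# the application, OEM recommendations and laboratory guidance.
WATER_LIMITS_PPM = {"caution": 500, "critical": 1_000}

def evaluate_water(ppm):
    """Return a severity flag and whether follow-up testing is triggered."""
    if ppm >= WATER_LIMITS_PPM["critical"]:
        return "critical", True   # e.g. confirm with Karl Fischer titration
    if ppm >= WATER_LIMITS_PPM["caution"]:
        return "caution", True
    return "normal", False

print(evaluate_water(1_200))  # -> ('critical', True)
```

The same pattern extends to wear metals, viscosity or particle counts: each limit both grades the result and decides whether additional testing is needed.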

Analysis of our program is next. We need to take a step back and make sure everything makes sense. Do we have goals for our program that are clearly identified and defined? Are our goals or subgoals specific enough and easily measurable? Are the test packages designed properly to detect the failure modes before failure occurs or determine fluid usability?

If the answer to these questions is no, we will need to reexamine these items to determine if they should be removed or if modifications are needed before we move on. If we cannot measure our results in terms that are relevant to our goals, our program will only have minimal success.

Once the analysis is completed, we need to put an implementation plan together. We will need to assess the frequency of sampling needed (monthly, bimonthly or quarterly). Sampling less frequently than quarterly is considered troubleshooting and does not lend itself to trending and identification of the conditions necessary to identify failure modes. We need to take into consideration who will take the samples, how the samples will be taken, where the testing will be performed (in-house or at an outside laboratory) and the cost of testing. We also need to know how we will measure success. We may have selected a reduction in costs as one of our goals, but we should put a number against this, such as a 5% overall reduction in overhauls, emergency repairs, etc., and be realistic. All goals need to be measurable, spelled out, documented and agreed upon by all parties involved.
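Putting a number against a goal makes the success test mechanical. A sketch of the 5% overhaul-reduction example, with hypothetical counts, might be as simple as:

```python
# Sketch of checking a measurable goal; the overhaul counts and the
# 5% target are hypothetical numbers for illustration.
baseline_overhauls = 40    # overhauls in the year before the program
current_overhauls = 37     # overhauls after one year on the program
target_reduction = 0.05    # the agreed, documented goal

actual = (baseline_overhauls - current_overhauls) / baseline_overhauls
status = "met" if actual >= target_reduction else "not met"
print(f"Reduction: {actual:.1%} (target {target_reduction:.0%}) -> {status}")
```

However it is computed, the point is that the metric, the target and the data sources are agreed upon and documented before implementation begins.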

The next step is implementing our plan. In this phase we might discover problems or identify information that challenges assumptions made early on in our process. This discovery requires us to tweak or modify our plan as necessary.

Proper sampling procedures are crucial to a successful program. Samples need to be representative of the lubricant in the system and contain the information required based on our goals. In other words, if wear failure identification is one or part of our goals, we need to make sure our samples are taken where wear particles can be collected and not after filtration has occurred. We also need to sample at the proper frequency as described earlier so that our main failure modes, contamination levels and other factors are identifiable and can be trended prior to the occurrence of a critical event.

Sampling consistency is crucial; inconsistent sampling leads to inconsistent data. A standard procedure for submitting and shipping samples to the lab should be established so that samples reach the lab reliably and promptly. Appropriate information should be included with each sample so that the test data is as useful as possible.

Measuring our results is one of the most important aspects of this process. As we start analyzing our results, we can identify weaknesses in the overall process. What have we missed? Have we collected the correct metrics for measuring success? We will be looking at cost benefits in most situations, so we’ll need metrics on parts costs, labor costs, downtime or lost production costs, utility costs and other data that may not be readily available.

Communication is a key factor and needs to occur between all parties involved so everyone knows the purpose and scope of the project. The lines of communication must be open and used continuously for your program to succeed.

An additional benefit from this process is the ability to create customized and meaningful limits based on our equipment’s operation and environment. If limits were defined initially, we can now make adjustments to the limits as needed. Otherwise we can use the results to create limits for our equipment based on its operation and current condition.
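One common way to derive such customized limits is statistical: set caution and critical thresholds a few standard deviations above the unit's own historical trend. The iron readings below are hypothetical, and real programs would also weigh rate-of-change and industry guidance:

```python
# Sketch of deriving customized limits from a unit's own trend data.
# The iron readings are hypothetical; this is a statistical baseline,
# not a substitute for application- or OEM-specific limits.
from statistics import mean, stdev

iron_ppm = [12, 14, 13, 15, 16, 14, 13, 17]  # historical wear-metal trend

avg, sd = mean(iron_ppm), stdev(iron_ppm)
caution = avg + 2 * sd    # flag results beyond 2 standard deviations
critical = avg + 3 * sd   # escalate results beyond 3 standard deviations

print(f"caution > {caution:.1f} ppm, critical > {critical:.1f} ppm")
```

As more results accumulate, the mean and standard deviation are recomputed, so the limits track the equipment's actual operation and environment rather than a generic table.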

The process doesn’t stop here; it continues. Our goals may change, we may add additional equipment or we may simply modify our existing program, but the program should evolve over time to provide the information needed to meet our goals. There is always room for improvement. New technologies may provide capabilities that do not currently exist for monitoring certain applications, equipment or environments.

Regardless, all programs should be reviewed regularly to measure their success and determine what modifications, if any, are needed based on your requirements. Having a good line of communication with the laboratory helps ensure the service conforms to program goals, and the data provided is understood and meaningful.

So ask yourself—does your program have an identified goal and if so, is your program meeting your expectations?

Patrick Kilbane is the business development manager for ALS Tribology. You can reach him at patrick.kilbane@alsglobal.com 

©2008 STLE All rights reserved.