Mechanics & Industry, Volume 23 (2022), Article Number 9, 15 pages
DOI: https://doi.org/10.1051/meca/2022006
Published online: 14 June 2022
Open Access

© H. Wang et al., Published by EDP Sciences 2022

Licence: Creative Commons Attribution. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

1 Introduction

Lubrication systems and their components are a critical part of large and complex rotating machinery, such as aero-engines, wind power gearboxes, steam turbines, and marine power plants. Bearings, gears, and other parts support the shaft and transmit power, while the lubrication system supplies oil that lubricates and cools these components. More than 80% of mechanical equipment failures are related to wear [1], so the information obtained from oil monitoring technology based on tribology theory can directly reflect the operating status of the system. The maintenance costs of large equipment's lubricated components are very high. For example, repairing the main-shaft transmission of an aero-engine can cost as much as $1 million, and a single maintenance of a wind power gearbox can cost as much as $250,000 [2]. Therefore, to avoid failures, it is necessary to monitor the health status of the lubricating system and its lubricated components and take preventive measures before abnormalities occur.

Lubricating oil monitoring is an effective means of generating early failure warnings because the oil contains information about degradation and damage to lubricated components [3]. Wear particle monitoring is one of the most common methods for monitoring the health of a mechanical transmission system. By monitoring the wear particles in the lubricating fluid, the operating status of the mechanical system can be obtained. Oil debris monitoring usually reflects operational failures earlier than temperature and vibration monitoring and is more intuitive.

The detection of wear particles is essential for determining abnormal wear conditions of a machine. Offline measurement methods such as spectroscopy and ferrography are still the most common methods used for oil debris monitoring. Although offline methods can provide comprehensive wear information, they often require expensive, complex equipment and skilled analysts, making them time-consuming and labor-intensive. Offline methods also do not provide real-time machine health information and no longer meet the real-time requirements of current advanced prognostics and health management (PHM) systems. As a result, engineers have been conducting online monitoring studies of wear particles since 1980 [4,5]. Because online monitoring is an effective means of ensuring reliable equipment operation and achieving condition-based maintenance, it has become a research hotspot in the field of mechanical fault diagnosis [6].

The current online wear debris monitoring methods are divided into four categories: optical-based debris monitoring [7–9]; induction-based debris monitoring [10]; capacitance- and resistance-based debris monitoring [11–13]; and acoustic-based debris monitoring [14]. The advantages and disadvantages of the various monitoring methods are shown in Table 1 [15].

As can be seen from Table 1, the inductive sensor has low sensitivity, the resistive-capacitive sensor is susceptible to oil deterioration, and acoustic monitoring is easily affected by oil viscosity, flow speed, and mechanical vibration. The traditional optical oil debris monitoring method has high sensitivity but, because it generally relies on the light-shading principle [16], cannot distinguish between bubbles and debris. Bubbles generated in the oil circuit therefore interfere with debris monitoring, limiting its engineering application.

An inductive sensor developed by Zhu [17] can detect particles as small as 50 μm, has high throughput, and is not affected by bubbles. However, as an inductive sensor, it generally responds only to ferromagnetic and non-ferromagnetic metal particles in the oil, whereas an optical sensor can monitor all contaminants in the oil.

The LaserNet Fines automated wear particle analyzer developed by Lockheed Martin uses laser-based image processing to integrate two common oil-monitoring functions: wear particle morphology recognition and particle counting [18]. The instrument uses a CCD camera with a resolution of 640 × 480, a 4× lens, an observation channel of 1.6 mm × 1.2 mm, and a high-power pulsed short-duration laser. Bubbles larger than 20 μm are identified by roundness calculation. Although the product can classify debris with an image algorithm, the camera resolution is low and the flow path is small, making it difficult to apply in a real industrial environment with high flow speeds and larger oil pipes. Because the analyzer also distinguishes bubbles only by the single parameter of roundness, its monitoring accuracy is open to question.

The OILPAS company developed a debris monitoring sensor named INNOSIRIS [19]. The core of the measurement instrument is a 1/1.7-inch color CCD sensor array with 14.7-megapixel resolution and a maximum frame rate of 1 Hz. In real-time operation, a particle stream of approximately 2 L/min is extracted from a closed liquid loop and returned to the circulation at a suitable location, and a small portion of the extracted stream is continuously guided through a flow unit and captured on the image sensor by a system consisting of a variable-interval flash and a lens. INNOSIRIS's camera resolution is high, so the quality of the collected images is assured. However, the flow channel is small, resulting in a low flow rate in the working environment and low-frequency image sampling, so it cannot achieve full-flow monitoring, only sampled monitoring.

Based on magnetic deposition and image analysis, Xie et al. [20,21] have, since 2005, developed an online visual ferrograph (OLVF) sensor with a direct-reading function and the corresponding image acquisition and processing software, which outputs the index of particle coverage area (IPCA) as a measure of relative wear debris concentration. The OLVF sensor combines magnetic and optical technology: the electromagnetic force generated by a coil deposits wear particles from the measured oil flowing through a deposition tube, and an optical lens observes and measures the deposition area [22]. The OLVF sensor can achieve high-flow-rate monitoring and can collect morphological image information for some wear particles. However, the following problems remain: (1) electromagnets are needed, resulting in a large volume and complex structure; (2) electromagnetic adsorption during sampling causes debris overlap, chain arrangement, and other problems [23] that hinder subsequent image analysis; and (3) the main monitoring indicator is the particle coverage area, so the debris information is not comprehensive and cannot be resolved to a single particle.

Hao et al. [24] proposed a particle analysis method based on microfluidics and image recognition to monitor oil contamination. The system uses a micro-channel and a micro-pump to divert the flow, which makes operation cumbersome, and because the inner diameter of the flow channel is only 150 μm, the method is suitable only for offline analysis.

The above-mentioned research shows that current monitoring methods based on optical images generally use micro-flow channels, so they can only be used for offline monitoring or branch sampling of a micro-flow. Because they cannot perform online full-flow monitoring, their applicability and monitoring effectiveness in real industrial scenarios are greatly reduced. Bubble interference is also a serious problem in optical oil debris monitoring, but there is little research on it; the only reported recognition method is based on the single parameter of roundness.

To address the problems of the above-mentioned monitoring systems, this paper designs a lubricating oil optical monitoring system that can be used with relatively large-diameter pipes and high flow rates. An experimental platform for generating and monitoring debris and bubbles is then built, and a convolutional neural network (CNN)-based image recognition algorithm is used to identify the particles. The remainder of this paper is organized as follows. Section 2 introduces the optical monitoring system and the oil circuit experimental platform. The CNN-based particle identification approach is presented in Section 3. Section 4 describes the oil monitoring experiment and compares the performance of the presented method with four traditional algorithms. Finally, conclusions are drawn in Section 5.

Table 1

Comparison of four wear debris monitoring methods.

2 System composition

2.1 Online optical image monitoring system

The online optical image full-flow monitoring system designed in this paper includes an observation cell, a microscope optical device, a light source, a high-speed camera, and an image acquisition and storage system, as shown in Figure 1. The working principle is as follows: in the fluid channel, the oil passes through the transparent observation cell at a certain speed, and an optical microscopic imaging system placed perpendicular to the observation cell images the particles in the oil flowing through the observation window in real time. After the particles in the fluid are optically imaged, the image signal is acquired by a high-speed CMOS (complementary metal oxide semiconductor) camera, converted by an image acquisition card, and sent to a computer for storage and display. The high-speed camera is a DAHENG MER-131-210U3C with 1280 × 1024 resolution and a maximum frame rate of 200 Hz. The optical system is a JIANGNAN NOVEL OPTICS JSZ6S trinocular stereo microscope, which provides 0.8×–5× magnification.

Because light scattering in ordinary circular pipes prevents images from being acquired properly, a cuboid transparent observation cell was designed for monitoring, as shown in Figure 2a. To strengthen and protect the flow channel, an aluminum alloy shell was designed, as shown in Figure 2b. A through-hole viewing window was arranged in the center of the shell according to the observed field of view. The flow channel of the observation cell is shallow and wide, which suits the field-of-view and depth-of-field characteristics of the micro-optical system. Standard round pipes on both ends of the device make it easy to connect to the lubrication oil pipeline.

The field of view under the 0.8× objective lens can be up to 21 mm × 16.8 mm, and the maximum depth of field is about 1.5 mm, so we designed the flow channel of the observation cell with a width of 16 mm and a depth of 3 mm. According to formula (1) [25], the equivalent pipe diameter of the flow channel is about 5 mm, which greatly improves on the applicable conditions of current image monitoring.

\[ D_d = \frac{2 \times b \times h}{b + h} \tag{1} \]

where D_d is the equivalent diameter, b is the flow channel width, and h is the flow channel depth.

When a particle passes through the monitoring field of view, it can be detected provided its transit time is not less than the interval between two frames of the high-speed camera, so that it appears in at least one frame. Based on the maximum frame rate of the high-speed camera and the size of the flow channel, the theoretical maximum monitoring flow rate of the system is about 8 L/min. This is a significant improvement over current image monitoring systems, which usually handle only milliliter-level flows.
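As a back-of-envelope sketch (not the authors' calculation), formula (1) and the maximum-flow estimate can be reproduced as follows; the effective traversal length L_eff is an assumed value, chosen so that the estimate matches the quoted ~8 L/min:

```python
# Back-of-envelope check of the channel design figures (assumptions noted).

b, h = 16e-3, 3e-3            # flow channel width and depth [m]
d_eq = 2 * b * h / (b + h)    # equivalent diameter, formula (1)
print(f"equivalent diameter: {d_eq * 1e3:.2f} mm")        # ~5.05 mm

fps = 200.0      # camera maximum frame rate [frames/s]
L_eff = 14e-3    # ASSUMED effective traversal length [m]; the full field
                 # is 21 mm, but some margin is needed for a particle to
                 # be captured fully within one frame interval
v_max = L_eff * fps           # fastest particle speed still detected [m/s]
q_max = v_max * b * h         # volumetric flow rate [m^3/s]
print(f"theoretical max flow: {q_max * 6e4:.1f} L/min")   # ~8 L/min
```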

Fig. 1

Online optical image monitoring system.

Fig. 2

Cuboid transparent observation cell: (a) internal channel, (b) the protection shell.

2.2 Oil-monitoring test bench

To verify the online optical image monitoring system and to collect enough bubble and debris samples for image classification training, an experimental platform for full-flow simulation of the lubrication system was designed and constructed, as shown in Figure 3. The test platform mainly consists of a pin-disk friction and wear tester, the online optical image monitoring system, a filter, two oil tanks, an oil pump, four gate valves, a controller, and other components. The system diagram of the experimental platform is shown in Figure 4.

Fig. 3

Physical drawing of the experimental platform.

Fig. 4

System diagram of the experimental platform.

3 Particle identification based on CNN

The overall flow chart of debris and bubble image recognition based on CNN is shown in Figure 5. It is described in detail below.

Fig. 5

Overall flow chart of debris and bubble image recognition.

3.1 Image sample collection

To validate the debris identification method developed in this paper, a large number of image samples needed to be collected. The resolution of the original captured images is 1280 × 1024 pixels. The acquired original debris and bubble images are shown in Figures 6 and 7; in each figure, the left and right images are two adjacent frames, in which the movement of the particles from left to right can be seen clearly. As these pictures show, it is difficult to distinguish between debris and bubbles with the naked eye. Therefore, debris and bubble images were collected separately to facilitate the construction of a large set of labeled training samples.

Fig. 6

Original images of bubbles.

Fig. 7

Original images of debris.

3.2 Sample preprocessing

For the original images captured by the camera, it is necessary to judge whether wear particles or bubbles are present, and then to extract them.

3.2.1 Moving object detection

In this paper, a background difference algorithm, a widely used method for detecting moving objects [26], is used to detect the moving objects. The objects in a general image can be divided into foreground and background; in this paper, the foreground areas of concern are the wear particles and bubbles. The basic principle of the method is to detect a moving object from the difference between the current frame (a debris or bubble image) and the reference frame (the background image), calculated as follows [27]:

\[ H_D(x, y) = H_i(x, y) - H_B(x, y) \tag{2} \]

where H_i(x, y) is the pixel value of the current image, and H_B(x, y) is the pixel value of the background image. Given a threshold value T, the final extracted object is:

\[ H_D(x, y) = \begin{cases} H_i(x, y), & \text{if } |H_D(x, y)| > T \\ 255, & \text{otherwise} \end{cases} \tag{3} \]

The threshold T was calculated using the Otsu method, an efficient image binarization algorithm proposed by Otsu in 1979 [28]. For the image H(x, y), the threshold separating foreground (i.e., the bubbles or debris) from background is denoted T. The proportion of pixels belonging to the foreground is denoted ω0, with average grayscale μ0; the proportion of pixels in the background is denoted ω1, with average grayscale μ1. The total average grayscale of the image is denoted μ, and the interclass variance is denoted g. Assuming that the background of the image is light and the image size is M × N, let N0 be the number of pixels whose grayscale value is less than the threshold T and N1 the number of pixels whose grayscale is greater than T. Then:

\[ \omega_0 = \frac{N_0}{M \times N} \tag{4} \]

\[ \omega_1 = \frac{N_1}{M \times N} \tag{5} \]

\[ N_0 + N_1 = M \times N \tag{6} \]

\[ \omega_0 + \omega_1 = 1 \tag{7} \]

\[ \mu = \omega_0 \mu_0 + \omega_1 \mu_1 \tag{8} \]

\[ g = \omega_0 (\mu_0 - \mu)^2 + \omega_1 (\mu_1 - \mu)^2 \tag{9} \]

Substituting formula (8) into formula (9) gives the equivalent formula:

\[ g = \omega_0 \, \omega_1 \, (\mu_0 - \mu_1)^2 \tag{10} \]

Equation (10) expresses the interclass variance. The required threshold T is obtained iteratively as the value that maximizes the interclass variance g.
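For illustration, a minimal sketch of the background difference of equations (2) and (3) combined with Otsu thresholding, using OpenCV (the file names are placeholders, not from the paper):

```python
import cv2

# Load the saved background frame and a current frame (placeholder paths).
background = cv2.imread("background.bmp", cv2.IMREAD_GRAYSCALE)
frame = cv2.imread("frame_0001.bmp", cv2.IMREAD_GRAYSCALE)

# Equation (2): absolute background difference |H_i - H_B|.
diff = cv2.absdiff(frame, background)

# Otsu's method picks the threshold T that maximizes the
# interclass variance g of equation (10).
T, mask = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
print(f"Otsu threshold T = {T}")

# Equation (3): keep original pixels where the difference exceeds T,
# set everything else to white (the background is light).
result = frame.copy()
result[mask == 0] = 255
cv2.imwrite("foreground.bmp", result)
cv2.imwrite("foreground_mask.bmp", mask)  # binary mask for the next step
```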

3.2.2 Sample extraction and preservation

Since the original image is large and several particles or bubbles may appear simultaneously in a single image, the raw frames are not suitable for direct feature extraction and classification. For each original image captured by the camera, every object must be extracted and saved individually so that each final sample image contains only one particle or bubble.

Experimental comparison verified that including additional edge background pixels has no obvious effect on image classification accuracy. Therefore, only the effective pixels containing the wear particle or bubble need to be extracted from the original image, with as little background as possible, to reduce computational complexity. The effective area of the wear particles and bubbles fits within 28 × 28 pixels. Based on the results of the moving-object detection above, the center coordinates of each moving object (a wear particle or bubble) are found, and the surrounding 28 × 28 pixel area centered on this point is extracted. Finally, all extracted objects are saved as 28 × 28 pixel BMP (bitmap) images.
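A minimal sketch of this extraction step, assuming the binary foreground mask from the previous stage; using connected-component centroids is one plausible implementation, not necessarily the authors':

```python
import cv2
import numpy as np

PATCH = 28  # sample size used in this paper

# Binary foreground mask (objects non-zero) and the original frame.
mask = cv2.imread("foreground_mask.bmp", cv2.IMREAD_GRAYSCALE)
frame = cv2.imread("frame_0001.bmp", cv2.IMREAD_GRAYSCALE)

# Find connected foreground objects and their centroids.
n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
for i in range(1, n):  # label 0 is the background
    cx, cy = centroids[i].astype(int)
    half = PATCH // 2
    # Clamp the 28x28 window so it stays inside the image.
    x0 = int(np.clip(cx - half, 0, frame.shape[1] - PATCH))
    y0 = int(np.clip(cy - half, 0, frame.shape[0] - PATCH))
    patch = frame[y0:y0 + PATCH, x0:x0 + PATCH]
    cv2.imwrite(f"sample_{i:04d}.bmp", patch)
```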

3.3 CNN image classification algorithm

CNN is a multi-layer neural network now widely used in image classification. It consists of alternating convolutional and pooling layers, each composed of multiple independent neurons. The convolutional layers achieve automated image feature extraction, avoiding excessive human intervention, while local connections, weight sharing, and other techniques effectively reduce the number of network parameters, thereby reducing the computational load of the model and improving its generalization capability [29]. A CNN usually consists of convolutional layers, pooling layers, a fully connected layer, and an output layer.

3.3.1 Convolutional layer

The convolutional layer in a CNN is formed by convolving different trainable kernels with all the feature maps of the previous layer, adding a bias, and passing the result through an activation function to produce the feature maps of the current layer.

3.3.2 Pooling layer

The pooling layer generally follows the convolutional layers and down-samples the input. There are various down-sampling methods, such as maximum pooling and average pooling. Maximum pooling takes the largest pixel value within the area covered by the filter; the resulting features generally preserve image texture well. Average pooling averages all non-zero pixels within that area; it usually preserves background information better than maximum pooling. This paper uses maximum pooling.

3.3.3 Fully connected layer

The fully connected layer converts the two-dimensional array corresponding to each feature map in the previous layer into a one-dimensional array, and then concatenates all the converted one-dimensional arrays into a feature long vector as the input of the fully connected layer.

3.3.4 Classification layer

The last layer of the CNN is a softmax classifier, a multi-output competitive classifier. The classification produces multiple outputs, each divided by their cumulative sum so that all outputs sum to 1. Each value represents the predicted probability of the corresponding category, and the category with the maximum probability is output as the final classification result.
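In standard form (where, strictly, the normalization is applied to the exponentiated outputs), for raw network outputs z_1, …, z_K the softmax is:

\[ \mathrm{softmax}(z)_i = \frac{e^{z_i}}{\sum_{j=1}^{K} e^{z_j}}, \qquad \sum_{i=1}^{K} \mathrm{softmax}(z)_i = 1 \]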

3.4 CNN-based wear particle detection

The CNN-based debris detection process is as follows.

  • Separately collect sufficient original image sample data through experiments. Organize and prepare a collection of all image samples, both positive samples (debris) and negative samples (bubbles).

  • After sufficient image samples have been collected, image preprocessing is performed, including moving-object detection, extraction, and storage. The samples are then trimmed to a uniform size; in this paper, all sample images are adjusted to 28 × 28 pixels.

  • Label the sample for all positive and negative samples. In this paper, all debris samples are labeled 1, and all bubble samples are labeled 0.

  • Divide all samples into three parts: training samples, validation samples, and test samples.

  • Use the training and validation samples to train the CNN and optimize the classification results by adjusting the network parameters; then save the final classification network and model parameters.

  • Input the test samples into the obtained classification model, obtain the output labels, compare them with the true labels, analyze the classification accuracy of the model, and verify the final detection effect.

4 Experiment and result analysis

4.1 Experimental procedure

4.1.1 Sample collection

Shell 15W-40 lubricating oil, a type widely used in industry, was used in this experiment. The final results show that the monitoring system can clearly capture wear particle images and is not negatively affected by the light transmittance of the oil.

An experimental platform, as described in Section 2.2, was designed and constructed to collect enough image samples for subsequent experimental analysis. At the beginning of the experiment, the background picture was collected and saved for particle or bubble detection.

When collecting the bubble image samples, new lubricating oil is used, the pin-disk tester is turned off, gate valves 1 and 4 are opened, gate valves 2 and 3 are closed, and the oil pump is turned on for circulating operation. The oil continuously generates bubbles during circulation owing to the disturbance of the oil in tank 1, and there are no particles in the oil because it is new and filtered.

When a sufficient sample of bubble images has been collected, debris images can be acquired. The observation cell is placed downstream of a 2 m straight pipeline, and bends and turns in the pipeline are minimized to avoid turbulence and bubbles during operation. Before each experiment, the oil is left in tank 1 for 2 hours so that all bubbles in the oil dissipate. Then gate valves 1 and 2 are opened, gate valves 3 and 4 are closed, and the pin-disk tester and the oil pump are turned on. At this point, the oil circuit is not circulating: the oil is simply transferred from tank 1 to tank 2, so no bubbles are generated as the oil passes through the online optical image monitoring system, only wear particles generated by the pin-disk tester. One experiment ends each time the oil in tank 1 has been transferred to tank 2. Before collecting debris images again, gate valves 3 and 4 are opened, gate valves 1 and 2 are closed, the oil pump is turned on, and the oil is drawn from tank 2 back to tank 1. The oil is again left for 2 hours, and the experimental steps are repeated to collect more wear particle images.

By strictly executing the above experimental steps, images of the debris and bubble samples can be accurately captured, respectively, as shown in Figures 6 and 7.

4.1.2 Image sample preprocessing

The background difference and Otsu algorithms described in the previous section were applied to extract and save the sets of moving debris and bubble images. A total of 4500 debris and 4500 bubble images were extracted from the experiments, constituting all the training, validation, and testing sample sets. Figures 8 and 9 show examples of 100 bubble and 100 debris image samples, respectively.

Fig. 8

Bubble image samples.

Fig. 9

Debris image samples.

4.2 CNN network construction and wear debris monitoring

As can be seen from Figures 8 and 9, it is difficult to distinguish bubbles and debris with the naked eye, so it is essential to apply an automatic image classification algorithm to distinguish the wear particles from the bubble interference.

The CNN network structure constructed in this paper is shown in Figure 10, including an input layer, three pairs of convolutional and pooling layers, a fully connected layer, and an output layer. The specific structural parameters of the network are also shown in this figure.
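For concreteness, a sketch of a network of this kind is given below (three convolution/pooling pairs with 3 × 3 max pooling, one fully connected layer, and a softmax output). The filter counts and kernel sizes are assumptions; the paper's exact values appear only in Figure 10.

```python
import torch.nn as nn

# Sketch of a CNN for 28 x 28 grayscale samples and 2 classes
# (debris vs. bubble). Channel counts and 3x3 kernels are assumptions.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),   # conv pair 1
    nn.ReLU(),
    nn.MaxPool2d(3),                              # 28x28 -> 9x9
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # conv pair 2
    nn.ReLU(),
    nn.MaxPool2d(3),                              # 9x9 -> 3x3
    nn.Conv2d(32, 64, kernel_size=3, padding=1),  # conv pair 3
    nn.ReLU(),
    nn.MaxPool2d(3),                              # 3x3 -> 1x1
    nn.Flatten(),
    nn.Linear(64, 2),                             # fully connected layer
    # softmax is folded into the cross-entropy loss during training
)
```

With 3 × 3 pooling, 28 × 28 inputs shrink to 9 × 9, then 3 × 3, then 1 × 1, consistent with the later remark that three layers is the maximum for a 3 × 3 spatial pooling area.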

Nine thousand samples were collected in this experiment (4500 debris, 4500 bubbles). One thousand of these were selected as test samples (500 debris and 500 bubbles), and then 1000 of the remaining 8000 samples were randomly selected as validation samples during the training process, and the remaining 7000 were used as training samples of the CNN model.

The 7000 training samples were used as input to the CNN model to perform iterative training of the network parameters. During training, the model was evaluated on the validation samples every 30 iterations. To reduce overfitting, we introduced L2 regularization and set the L2 regularization factor to 0.02. The accuracy during training is shown in Figure 11, and the loss function during training is shown in Figure 12. The model results stabilized when the number of iterations reached about 800, after which the final network model and parameters were saved.
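A sketch of such a training setup, continuing the architecture sketch above (the optimizer, learning rate, batch size, and epoch count are assumptions; the 0.02 L2 factor and the 30-iteration validation cadence follow the text):

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder random tensors standing in for the 7000 training and
# 1000 validation samples (28 x 28 grayscale; labels 0 = bubble, 1 = debris).
train_ds = TensorDataset(torch.randn(7000, 1, 28, 28),
                         torch.randint(0, 2, (7000,)))
val_ds = TensorDataset(torch.randn(1000, 1, 28, 28),
                       torch.randint(0, 2, (1000,)))
train_loader = DataLoader(train_ds, batch_size=64, shuffle=True)
val_loader = DataLoader(val_ds, batch_size=256)

criterion = nn.CrossEntropyLoss()
# weight_decay implements the L2 regularization factor 0.02 from the text;
# SGD and lr=0.01 are assumptions.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, weight_decay=0.02)

step = 0
for epoch in range(10):                     # epoch count is an assumption
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
        step += 1
        if step % 30 == 0:                  # validate every 30 iterations
            model.eval()
            with torch.no_grad():
                correct = sum((model(x).argmax(1) == y).sum().item()
                              for x, y in val_loader)
            print(f"step {step}: validation accuracy {correct / 1000:.3f}")
            model.train()
```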

The confusion matrix for the validation sample set is shown in Table 2. The final classification accuracy of the validation data is 94.4%.

The 1000 test samples were input to the network model saved above to obtain the corresponding prediction results, which were compared with the true labels of the samples. The confusion matrix of the test samples is shown in Table 3, and the final classification accuracy on the test data is 91.9%.

The interpretability of convolutional neural network parameters remains a research difficulty, with no unified conclusion. To tune the parameters of the deep learning network, we compared many values and selected the optimal ones. For the input image size, we tried 100 × 100 (91.8% test accuracy), 50 × 50 (91.8% test accuracy), and 28 × 28 (91.9% test accuracy) and found that it had little effect on the final test accuracy. Compared with the cropped 28 × 28 pixel image, the 100 × 100 pixel image contains only additional background information, indicating that extra background does not help to improve image classification accuracy. Therefore, 28 × 28, which has the lowest computational complexity, was selected. For the number of network layers, we tried 1-, 2-, and 3-layer structures (with a 3 × 3 spatial pooling area, the maximum number of layers is 3), as shown in Table 4, and finally chose the 3-layer network, which achieved the highest test accuracy.

The CNN network constructed in this paper was compared with the popular LeNet and AlexNet networks; their classification accuracies on the test set are shown in Table 5. The CNN network constructed in this paper has higher classification accuracy. Compared with the classical LeNet network, our network increases the network complexity, for example by increasing the number of convolution filters and adding a third maximum pooling layer, and achieves higher classification accuracy.

Fig. 10

The detailed structure of the CNN network.

Fig. 11

Accuracy changes in CNN training process.

Fig. 12

Loss changes in CNN training process.

Table 2

Confusion matrix for validation data.

Table 3

Confusion matrix for test data.

Table 4

The classification accuracy of different network layer number for test samples.

Table 5

The classification accuracy of different networks for test samples.

4.3 Comparison with other classification methods

To verify the effectiveness of the classification algorithm proposed in this paper, the above classification results were compared with several widely used traditional feature extraction and classification algorithms.

For the feature extraction of particles, traditional methods generally extract morphology-based feature parameters. In this paper, 13 morphological and color feature parameters [30,31] (area, equivalent circle diameter, major axis length, minor axis length, major-to-minor axis ratio, roundness, perimeter, RGB (red-green-blue) means and standard deviations, etc.) of debris and bubbles were extracted and compared with HOG (histogram of oriented gradients) feature parameters. The specific morphological parameters are calculated as shown in Table 6, where pR(k), pG(k), and pB(k) represent the pixel frequencies of the R, G, and B channels falling at the k-th level, respectively, 0 ≤ k ≤ 255.
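As an illustration of how such parameters can be computed (a sketch using scikit-image's regionprops; the exact formulas of Table 6 may differ, and the roundness definition below is one common choice):

```python
import numpy as np
from skimage.measure import label, regionprops

def shape_color_features(mask, rgb):
    """Morphology and color features for the largest object in a binary mask.

    mask: 2-D boolean array (object pixels True); rgb: HxWx3 uint8 image.
    Returns a subset of the 13 parameters described in the text.
    """
    props = max(regionprops(label(mask)), key=lambda p: p.area)
    roundness = 4 * np.pi * props.area / props.perimeter ** 2
    pixels = rgb[mask]                      # object pixels only, Nx3
    return {
        "area": props.area,
        "equivalent_diameter": props.equivalent_diameter,
        "major_axis": props.major_axis_length,
        "minor_axis": props.minor_axis_length,
        "axis_ratio": props.major_axis_length / props.minor_axis_length,
        "roundness": roundness,
        "perimeter": props.perimeter,
        "rgb_mean": pixels.mean(axis=0),
        "rgb_std": pixels.std(axis=0),
    }
```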

The essence of HOG is the statistical description of image gradients. HOG has several advantages over other descriptors. First, because HOG operates on local square cells of the image, it is largely invariant to geometric and optical deformations of the image, which appear only over larger spatial domains. Second, under coarse spatial sampling, fine directional sampling, and strong local optical normalization, subtle changes in the monitored object can be ignored without affecting detection. HOG is thus widely used in target detection and classification [32].
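For comparison, HOG features can be extracted with scikit-image; the cell and block sizes below are assumptions chosen to suit 28 × 28 samples, and the file name is a placeholder:

```python
from skimage.feature import hog
from skimage.io import imread

img = imread("sample_0001.bmp", as_gray=True)   # 28 x 28 sample
features = hog(img,
               orientations=9,          # fine directional sampling
               pixels_per_cell=(7, 7),  # coarse spatial cells for 28x28
               cells_per_block=(2, 2),  # local contrast normalization
               block_norm="L2-Hys")
print(features.shape)                   # 1-D feature vector for SVM / KNN
```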

Traditional classification algorithms, which are generally paired with such feature parameters, include SVM (support vector machine) and KNN (k-nearest neighbors). SVM is a supervised learning method widely used in statistical classification and regression analysis; it shows unique advantages in solving nonlinear and high-dimensional pattern recognition problems, and its efficiency has been demonstrated in many recognition applications [33]. KNN, in contrast, classifies by measuring the distance between feature vectors. The parameters of the SVM and KNN algorithms were adjusted and optimized during training. In this paper, the SVM uses an RBF kernel with a penalty factor of 10 and a kernel scale of 2, and KNN uses the Euclidean distance with a nearest-neighbor number k of 5.
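Equivalent scikit-learn classifiers might look as follows; mapping "kernel scale 2" to gamma = 1/2² = 0.25 is an assumption, based on toolboxes that divide predictors by the kernel scale:

```python
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

# RBF-kernel SVM: C is the penalty factor 10 from the text;
# gamma = 1 / kernel_scale**2 = 0.25 is an assumed translation.
svm = SVC(kernel="rbf", C=10, gamma=0.25)

# KNN with Euclidean distance and k = 5, as in the text.
knn = KNeighborsClassifier(n_neighbors=5, metric="euclidean")

# X_train / y_train would be the HOG or morphological feature vectors
# and their labels, e.g.:
# svm.fit(X_train, y_train); print(svm.score(X_test, y_test))
```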

The classification accuracy of test samples of the five methods is shown in Table 7.

Table 7 shows that the CNN classification algorithm achieves the highest classification accuracy on the test samples. Compared with the traditional feature extraction and classification algorithms, CNN does not need the tedious process of feature parameter selection and extraction and also significantly improves recognition accuracy.

Table 6

Morphological parameters.

Table 7

The classification accuracy of test samples.

5 Conclusion

To solve the problem that current image monitoring of oil wear particles is not suitable for online full-flow working environments and is susceptible to bubble interference, a new optical image monitoring system was constructed for online debris image monitoring. An oil-monitoring test bench was designed and built to acquire wear particle and bubble images. These images were extracted by background difference and Otsu algorithms, and a CNN (convolutional neural network) classification algorithm was applied to distinguish the wear particles from the bubble interference. The CNN algorithm eliminates the manual feature selection and extraction process. The experimental results show that the CNN algorithm applied in this paper achieves a classification accuracy of up to 91.9%, significantly higher than the traditional approaches based on morphological or HOG feature extraction with SVM or KNN classification, whose maximum classification accuracy reaches only 83.8%. Unlike traditional microfluidic or online visual ferrograph image monitoring systems, this oil image monitoring system provides a new way to monitor lubrication oil debris: it can be used for full-flow online monitoring and can obtain separate particle images. Because of the intuitiveness of image monitoring, this system also differs from indirect debris sensors such as inductive, resistive, capacitive, and acoustic sensors, and it provides a feasible way to compare and verify the performance of these non-visual online wear particle monitoring sensors.

While the optical monitoring system and image processing algorithm designed in this paper have achieved positive results in debris monitoring, more work remains. The system can currently count and size particles (including oil contaminants); in further research, we will focus on the surface texture information of wear particles, distinguish wear particles from contaminants, and differentiate between specific types of debris to obtain more accurate and comprehensive wear information.

Acknowledgements

This work was supported by the National Natural Science Foundation of China (Grant No. U1933202). The authors also thank the China Scholarship Council (CSC) for its support (Grant No. 201906830027).

References

  1. X. Wu, Research on Intelligent Monitoring of Aeroengine Wear State Based on Oil Analysis. Master’s Thesis, Nanjing University of Aeronautics and Astronautics, Nanjing, China, 2012 [Google Scholar]
  2. C.A. Walford, Wind turbine reliability: understanding and minimizing wind turbine operation and maintenance costs, Sandia Natl. Laborat. 5, 1–27 (2006) [Google Scholar]
  3. R. Jia, B. Ma, C. Zheng, et al., Comprehensive improvement of the sensitivity and detectability of a large-aperture electromagnetic wear particle detector, Sensors 19, 3162–3180 (2019) [CrossRef] [Google Scholar]
  4. H. Xiao, X. Wang, H. Li, J. Luo, S. Feng, An inductive debris sensor for a large-diameter lubricating oil circuit based on a high-gradient magnetic field, Appl. Sci. 9, 1546–1558 (2019) [CrossRef] [Google Scholar]
  5. L. Han, S. Feng, G. Qiu, J. Luo, H. Xiao, J. Mao, Segmentation of online ferrograph images with strong interference based on uniform discrete curvelet transformation, Sensors 19, 1546–1559 (2019) [CrossRef] [Google Scholar]
  6. X. Zhu, C. Zhong, J. Zhe, Lubricating oil conditioning sensors for online machine health monitoring – a review, Tribol. Int. 109, 473–484 (2017) [CrossRef] [Google Scholar]
  7. L. Yan, W. ShiZhu, X. YouBai, Z. Fang, Advances in research on a multi-channel on-line ferrograph, Tribol. Int. 30, 279–282 (1997) [CrossRef] [Google Scholar]
  8. S. Feng, G. Qiu, J. Luo, L. Han, J. Mao, Y. Zhang, A wear debris segmentation method for direct reflection online visual ferrography, Sensors 19, 723–734 (2019) [CrossRef] [Google Scholar]
  9. J. Reintjes, J.E. Tucker, S.E. Thomas, A. Schultz, Lasernet fines wear debris analysis technology: application to mechanical fault detection, AIP Conf. Proc. 1590–1597 (2003) [CrossRef] [Google Scholar]
  10. W. Hong, S. Wang, M.M. Tomovic, H. Liu, X. Wang, A new debris sensor based on dual excitation sources for online debris monitoring, Measur. Sci. Technol. 26, 1–12 (2015) [Google Scholar]
  11. J. Zhe, F.K. Choy, S.V. Murali, M.A. Sarangi, R. Wilfong, Oil debris detection using capacitance and ultrasonic measurements, in 2007 International Joint Tribology Conference (IJTC 2007), California, U.S.A., 22–24 October 2007; pp. 113–115 [Google Scholar]
  12. M.R. Mauntz, J. Gegner, U. Kuipers, S. Klingau, A sensor system for online oil condition monitoring of operating components, Tribology 11, 305–321 (2013) [Google Scholar]
  13. S. Murali, A.V. Jagtiani, X. Xia, J. Carletta, J. Zhe, A microfluidic Coulter counting device for metal wear detection in lubrication oil, Rev. Sci. Instrum. 80, 1–3 (2009) [Google Scholar]
  14. L. Du, J. Zhe, An integrated ultrasonic–inductive pulse sensor for wear debris detection, Smart Mater. Struct. 22, 1–9 (2012) [Google Scholar]
  15. W. Hong, W. Cai, S. Wang, M.M. Tomovic, Mechanical wear debris feature, detection, and diagnosis: a review, Chin. J. Aeron. 31, 867–882 (2018) [CrossRef] [MathSciNet] [Google Scholar]
  16. C. Yao, J. Zhao, Q. Zhang, Research on contamination monitoring technology in hydraulic fluid, Lubric. Eng. 10, 196 (2006) [Google Scholar]
  17. X. Zhu, L. Du, J. Zhe, A 3 × 3 wear debris sensor array for real time lubricant oil conditioning monitoring using synchronized sampling, Mech. Syst. Signal Process 83, 296–304 (2017) [CrossRef] [Google Scholar]
  18. M. Lukas, D.P. Anderson, T. Sebok, D. Filicky, LaserNet Fines – a new tool for the oil analysis toolbox, Practis. Oil Anal. 8, 1–11 (2002) [Google Scholar]
  19. S. Lars, M. Gerhard, R. Nils, S. Krause, OILPAS – online imaging of liquid particle suspensions – how to prevent a sudden engine breakdown, SAE Int. J. Fuels Lubric. 3, 336–345 (2010) [CrossRef] [Google Scholar]
  20. Y. Zhang, J. Mao, Y. Xie, Engine wear monitoring with OLVF, Tribol. Trans. 54, 201–207 (2011) [CrossRef] [Google Scholar]
  21. T. Wu, J. Mao, J. Wang, Y. Xie, A new on-line visual ferrograph, Tribol. Trans. 52, 623–631 (2009) [CrossRef] [Google Scholar]
  22. W. Cao, W. Chen, G. Dong, J. Wu, Y. Xie, Wear condition monitoring and working pattern recognition of piston rings and cylinder liners using on-line visual ferrograph, Tribol. Trans. 57, 690–699 (2014) [CrossRef] [Google Scholar]
  23. H. Wu, T. Wu, Y. Peng, Z. Peng, Watershed-based morphological separation of wear debris chains for on-line ferrograph analysis, Tribol. Lett. 53, 411–420 (2014) [CrossRef] [Google Scholar]
  24. Y. Hao, X. Pan, Z. Yan, Q. Chang, B. Pan, Q. Ji, Y. Shen, Recognition for particles in lubricating oil based on micro-image method, Lubric. Eng. 4, 10–16 (2017) [Google Scholar]
  25. Y. Peng, T. Wu, S. Wang, Y. Du, N. Kwok, Z. Peng, A microfluidic device for three-dimensional wear debris imaging in online condition monitoring, Proc. Inst. Mech. Eng. J 231, 965–974 (2017) [CrossRef] [Google Scholar]
  26. E.J. Fernandez-Sanchez, J. Diaz, E. Ros, Background subtraction based on color and depth using active sensors, Sensors 13, 8895–8915 (2013) [CrossRef] [PubMed] [Google Scholar]
  27. Y. Peng, T. Wu, S. Wang, N. Kwok, Z. Peng, Motion-blurred particle image restoration for on-line wear monitoring, Sensors 15, 8173–8191 (2015) [CrossRef] [PubMed] [Google Scholar]
  28. N. Otsu, A threshold selection method from gray-level histograms, IEEE Trans. Syst. Man Cybern. 9, 62–66 (1979) [CrossRef] [Google Scholar]
  29. M. Wu, L. Chen, Image recognition based on deep learning. 2015 Chinese Automation Congress (CAC2015), Wuhan, China, 27-29 November 2015; pp. 542–546 [Google Scholar]
  30. Z. Wu, H. Zuo, L. Guo, Debris micro-morphology analysis based on AI techniques, Chin. J. Aeron. 14, 30–36 (2001) [Google Scholar]
  31. Z. Wu, Research on Engine Wear Fault Diagnosis Technology Based on Debris Particle Analysis and Information Fusion. PhD Thesis, Nanjing University of Aeronautics and Astronautics, Nanjing, China, 2002 [Google Scholar]
  32. N. Dalal, B. Triggs, Histograms of oriented gradients for human detection. 2005 IEEE computer society conference on computer vision and pattern recognition (CVPR 2005), California, U.S.A., 20-25 June 2005, pp. 886–893. [CrossRef] [Google Scholar]
  33. B. Heisele, T. Serre, S. Prentice, T. Poggio, Hierarchical classification and feature reduction for fast face detection with support vector machines, Pattern Recogn. 36, 2007–2017 (2003) [CrossRef] [Google Scholar]

Cite this article as: Han Wang, Hongfu Zuo, Zhenzhen Liu, Di Zhou, Hongsheng Yan, Xin Zhao, Michael Pecht, Online monitoring of oil wear debris image based on CNN, Mechanics & Industry 23, 9 (2022)
