Rolling bearing fault diagnosis method based on 2D grayscale images and Wasserstein Generative Adversarial Nets under unbalanced sample condition
Abstract
Accurate diagnosis of rolling bearing faults plays a crucial role in ensuring the stable operation of rotating machinery systems. However, in actual engineering applications, there is a significant disparity between the volume of normal data and the volume of fault data collected, which impairs diagnostic performance; bearing fault diagnosis under such sample imbalance is a recurring engineering challenge. To improve fault diagnosis accuracy under unbalanced sample conditions, a rolling bearing fault diagnosis method based on 2D grayscale images and Wasserstein Generative Adversarial Networks (WGAN) is proposed. The method consists of three main steps. First, the acquired bearing vibration signals are transformed into 2D grayscale images. Second, the WGAN generation model is used to generate additional fault samples. Finally, both the original samples and the generated samples are used to train a Convolutional Neural Network (CNN) classification model. The validity and effectiveness of the proposed method are evaluated and compared with other bearing fault diagnosis approaches on the Case Western Reserve University Bearing Data Center dataset. The experimental results demonstrate the superior quality of the generated samples and the improved fault identification accuracy achieved by the proposed method.
1. INTRODUCTION
As modern industry continues to advance, rotating machinery is increasingly utilized in practical engineering applications, with a growing emphasis on integration and intelligence. Rolling bearings, vital transmission components in rotating machinery, are susceptible to failure due to long-term operation in harsh conditions such as high-speed operation and overload. These failures can lead to abnormal machinery operation and substantial economic losses. Therefore, accurate and efficient fault diagnosis of rolling bearings holds immense practical importance.
Traditional methods of rolling bearing fault diagnosis primarily emphasize extracting meaningful features from vibration signals, which are then analyzed using signal processing techniques and empirical knowledge. In recent years, the application of deep learning to intelligent fault diagnosis has gained significant attention. This approach leverages the robust feature learning capability and end-to-end diagnostic characteristics of deep learning, making it a prominent research area in artificial intelligence[1]. Xing et al. proposed a locally connected restricted Boltzmann machine, built on the conventional RBM, that achieves bearing fault diagnosis by obtaining features directly from the original signal[2]. Jia et al. implemented bearing fault feature mining and intelligent diagnosis from frequency-domain data using an autoencoder network with a deep architecture[3]. Shao et al. proposed a convolutional Deep Belief Network (DBN) and used the exponential moving average technique to improve the performance of the diagnostic model[4]. Although these methods improve the accuracy of rolling bearing fault diagnosis, they all require roughly as many fault samples as normal samples. However, in actual industrial production, it is difficult to collect sufficient fault data to train a deep-learning fault diagnosis model to high accuracy. Therefore, how to train a rolling bearing fault diagnosis model with only a small number of fault samples is a challenge that merits investigation.
At present, experts and scholars have mainly approached the sample imbalance problem from two directions: data and algorithms. The former aims to expand the number of minority-class samples by resampling or data generation, while the latter increases the model's sensitivity to, and penalty for, the minority class to reduce diagnostic error, or uses ensemble learning methods to train a better-performing classifier. On the data side, Chawla et al. introduced the Synthetic Minority Oversampling Technique (SMOTE) to address class imbalance in the training set[5]. This technique involves the random generation of virtual samples. On the algorithmic side, current research focuses on developing new algorithms tailored to the data characteristics or improving the structure of existing algorithms. Jia et al. proposed a deep normalized convolutional neural network (CNN) to categorize unbalanced fault data in a way that maximizes the activation of neurons[6]. Sun et al. added a cost parameter to the AdaBoost framework to adjust the weights of minority samples[7]. Sampling techniques are frequently employed in fault diagnosis to enhance data. However, these methods improve the data only at a superficial level, generating new signals through linear interpolation without exploring the underlying features and distribution patterns in depth. Furthermore, they may produce incorrect or unnecessary samples and fail to expand the diversity of the dataset. With the rapid development of intelligent fault diagnosis technology, transfer learning and the Generative Adversarial Network (GAN) have gained significant attention as a research focus in fault diagnosis.
Data augmentation can be accomplished by employing transfer learning-based methods that leverage other relevant datasets to reweight data samples[8-12]. However, the performance of transfer learning depends on the similarity of the data distributions in the source and target domains. If there is a large deviation between the source and target domains, negative transfer may occur in the target diagnostic task, resulting in poor diagnostic performance. A GAN is a data generation model proposed by Goodfellow et al. that can generate data with a distribution similar to that of the original data from random noise[13]. The generated data can be added to a fault dataset with an insufficient number of samples, thus reducing the imbalance in the training dataset.
Wang et al. proposed a hybrid approach for gearbox fault diagnosis, combining GAN and Stacked Denoising Autoencoders (SDAE)[14]. GANs were employed to augment the minority samples, while SDAEs served as classifiers for diagnosing the final fault type. Lee et al. employed Empirical Mode Decomposition to obtain energy spectrum data from the minority-class samples[15]. They then used GANs to generate augmented energy spectrum data, enabling fault diagnosis in the presence of imbalanced samples. Zhou et al. achieved fault diagnosis under unbalanced sample conditions by using GANs to generate the fault features extracted by autoencoders from a few fault samples, rather than generating the fault data themselves[16]. Current research has made significant advances in addressing imbalanced fault diagnosis. However, the aforementioned papers overlook the difficulty that the adversarial mechanism of GANs has in bringing the generator and discriminator to a Nash equilibrium. Furthermore, existing GAN-based models struggle to maintain high-quality data generation across all fault types, as their ability to learn deep features from the original vibration signals is limited. This limitation compromises the accuracy and robustness of fault detection.
To better solve the above problems, this paper proposes a rolling bearing fault diagnosis method based on 2D grayscale images and WGAN. By converting time-domain signals into 2D grayscale images, the noise in the signals is transformed into non-relevant factors such as image brightness and grayscale, which effectively reduces the impact of signal noise on the final image classification results. The Deep Convolutional Generative Adversarial Network (DCGAN), with its stack of two-dimensional convolutional layers, can extract image features and generate images well, while WGAN introduces the Wasserstein distance to make training more stable and to avoid the gradient vanishing and mode collapse problems that occur in GANs[17]. Table 1 compares the advantages and disadvantages of GAN, DCGAN, and WGAN.
Comparison of GAN, DCGAN, and WGAN
Methods | Advantages | Disadvantages |
GAN | High quality of generation, no need for prior knowledge | Limited diversity, training instability, gradient vanishing, and mode collapse |
DCGAN | Powerful image generation, structural stability | Training complexity, high computational costs |
WGAN | Addresses gradient vanishing and mode collapse, stable training | Training complexity, hyperparameter sensitivity |
Therefore, by combining the advantages of DCGAN and WGAN, a WGAN equipped with deep two-dimensional convolutional layers can generate samples of higher quality and richer diversity. Our main contributions are as follows:
(1) To address the training instability in DCGAN and the limited feature extraction capability of WGAN for images, a WGAN with deep two-dimensional convolutional layers is designed by combining the advantages of DCGAN and WGAN.
(2) To generate data that closely resemble the fault data distribution, a data generation approach utilizing 2D grayscale images and WGAN is developed.
The paper is structured as follows: Section 2 provides an overview of the theoretical background. Section 3 describes the flow of the proposed method. Section 4 verifies the effectiveness of the proposed method through experiments. Section 5 concludes the whole paper.
2. THEORETICAL BACKGROUND
2.1 Generative adversarial network
The GAN is a type of unsupervised generative model comprising two components: a Generator (G) and a Discriminator (D). The basic structure of GAN is illustrated in Figure 1. The generator produces pseudo-samples from random vectors. The generated samples, along with real samples, are then input to the discriminator, whose task is to distinguish fake samples from real ones.
During model optimization, the discriminator and generator are trained alternately against each other. The generator continuously improves its generation ability so that the generated samples approach the real samples, while the discriminator continuously improves its discriminative ability to separate fake samples from real ones as well as possible. Eventually, the discriminator and the generator reach an equilibrium state in which the discriminator can hardly tell whether a sample is real or generated. The objective function of GAN can be represented by Equation (1).
$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim P_{data}}[\log D(x)] + \mathbb{E}_{z \sim P_z}[\log(1 - D(G(z)))] \tag{1}$$
where $P_{data}$ denotes the distribution of the real data and $P_z$ denotes the distribution of the random variable $z$.
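As a reference point for the WGAN formulation in the next subsection, the following is a minimal PyTorch sketch of the discriminator and generator losses implied by Equation (1). The tensor shapes and the non-saturating generator loss are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def gan_losses(D, G, real, z):
    """Standard GAN losses from Equation (1), written with binary cross-entropy.

    D: discriminator mapping a batch of samples to logits of shape (N, 1)
    G: generator mapping noise z to fake samples
    real: batch of real samples; z: batch of noise vectors (illustrative shapes)
    """
    fake = G(z)
    ones = torch.ones(real.size(0), 1)
    zeros = torch.zeros(real.size(0), 1)

    # Discriminator: maximize log D(x) + log(1 - D(G(z)))
    d_loss = F.binary_cross_entropy_with_logits(D(real), ones) + \
             F.binary_cross_entropy_with_logits(D(fake.detach()), zeros)

    # Generator: in practice, maximize log D(G(z)) (the "non-saturating" form)
    g_loss = F.binary_cross_entropy_with_logits(D(fake), ones)
    return d_loss, g_loss
```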
2.2 Wasserstein generative adversarial network
The original GAN uses the Jensen-Shannon (JS) divergence to measure the similarity between two distributions, which leads to problems such as gradient vanishing and mode collapse during training. To solve this problem, Arjovsky et al. proposed the Wasserstein GAN (WGAN)[18]. The Wasserstein distance better reflects the difference between two distributions and can be interpreted as the minimum cost of transporting the generated data distribution onto the real data distribution, as shown in Equation (2).
$$W(P_{real}, P_g) = \inf_{\gamma \in \Pi(P_{real}, P_g)} \mathbb{E}_{(x, y) \sim \gamma}[\| x - y \|] \tag{2}$$
where $\Pi(P_{real}, P_g)$ denotes the set of all joint distributions $\gamma(x, y)$ whose marginal distributions are $P_{real}$ and $P_g$, respectively.
From the above definition, it is clear that the greatest advantage of the Wasserstein distance over the JS divergence is that the Wasserstein distance can still describe the distance between $P_{real}$ and $P_g$ even if the two distributions do not overlap. Therefore, combining the Wasserstein distance metric with GAN not only fundamentally solves the problems of gradient vanishing, training instability, unclear optimization objectives, and mode collapse that exist in GAN but also makes the degree of training observable through the Wasserstein distance. However, Equation (2) is difficult to solve directly, so it is transformed into its dual form using duality theory, as shown in Equation (3).
$$W(P_{real}, P_g) = \frac{1}{K} \sup_{\|f\|_L \le K} \left( \mathbb{E}_{x \sim P_{real}}[f(x)] - \mathbb{E}_{x \sim P_g}[f(x)] \right) \tag{3}$$
The above equation represents the least upper bound of $\mathbb{E}_{x \sim P_{real}}[f(x)] - \mathbb{E}_{x \sim P_g}[f(x)]$ taken over all $K$-Lipschitz functions $f$.
From Section 2.1, it can be seen that the discriminator of the original GAN solves a binary classification problem. In WGAN, the discriminator network is instead used to fit the Wasserstein distance between the real samples and the generated samples, so the final sigmoid layer of the original GAN discriminator is removed. The discriminator aims to maximize the estimated Wasserstein distance between real and generated samples, while the generator aims to minimize it. Therefore, the objective function of the discriminator network can be expressed as Equation (4).
$$\max_D \; \mathbb{E}_{x \sim P_{real}}[D(x)] - \mathbb{E}_{\tilde{x} \sim P_g}[D(\tilde{x})] \tag{4}$$
where $\tilde{x} = G(z)$ denotes a generated sample.
The objective function of the generator network can be expressed as Equation (5).
$$\min_G \; -\mathbb{E}_{\tilde{x} \sim P_g}[D(\tilde{x})] = \min_G \; -\mathbb{E}_{z \sim P_z}[D(G(z))] \tag{5}$$
From Equations (4) and (5), the discriminator objective function reflects the distance between the two distributions; therefore, the training progress of the model can be observed through this function: the smaller the distance, the better trained the model and the more realistic the generated samples.
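To make Equations (4) and (5) concrete, the following is a minimal sketch of one WGAN training iteration in PyTorch, with weight clipping as in the original WGAN paper[18]. The clipping threshold, the number of critic updates per generator update, and the reuse of one real batch are assumptions for illustration, since the paper does not report them.

```python
import torch

def wgan_train_step(D, G, real, opt_D, opt_G, nz=100, n_critic=5, clip=0.01):
    """One WGAN iteration: Equation (4) for the discriminator (critic),
    Equation (5) for the generator. Hyperparameters are illustrative."""
    batch = real.size(0)

    # --- Train the critic n_critic times (Equation (4)) ---
    for _ in range(n_critic):
        z = torch.randn(batch, nz, 1, 1)
        fake = G(z).detach()
        # Minimize E[D(fake)] - E[D(real)], i.e. maximize the Wasserstein estimate
        d_loss = D(fake).mean() - D(real).mean()
        opt_D.zero_grad()
        d_loss.backward()
        opt_D.step()
        # Weight clipping keeps D approximately K-Lipschitz
        for p in D.parameters():
            p.data.clamp_(-clip, clip)

    # --- Train the generator once (Equation (5)) ---
    z = torch.randn(batch, nz, 1, 1)
    g_loss = -D(G(z)).mean()   # minimize -E[D(G(z))]
    opt_G.zero_grad()
    g_loss.backward()
    opt_G.step()
    return d_loss.item(), g_loss.item()
```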
3. THE PROPOSED METHOD
In this paper, a fault diagnosis method of rolling bearings based on 2D grayscale images and WGAN under unbalanced sample conditions is proposed. Figure 2 illustrates the overall framework of the proposed method. The method is divided into three main steps: Signal-to-image conversion, fault sample generation, and sample classification.
3.1 Signal-to-image conversion method
Data preprocessing methods play a crucial role in extracting relevant features from voluminous historical data. However, selecting appropriate features can be a time-consuming task that greatly influences the final outcomes. This paper uses a data preprocessing method that converts one-dimensional vibration signals into 2D grayscale images[19].
As shown in Figure 3, in this conversion method, a signal segment of length $M^2$ is randomly taken from the raw time-domain signal and arranged row by row to form an image of size $M \times M$. The preprocessing method is defined in Equation (6).
$$P(j, k) = \mathrm{round}\left( \frac{L\big((j-1) \times M + k\big) - \min(L)}{\max(L) - \min(L)} \times 255 \right) \tag{6}$$
where $L(i)$, $i = 1, 2, \ldots, M^2$, denotes the amplitude of the raw time-domain signal and $P(j, k)$ ($j = 1, 2, \ldots, M$; $k = 1, 2, \ldots, M$) denotes the pixel intensity of the image. The normalization maps each value into the range 0 to 255, and the $\mathrm{round}(\cdot)$ function rounds it to an integer, which is exactly the pixel intensity of the grayscale image.
The advantages of this data processing method include the elimination of manual extraction of signal features, direct processing of the raw time-domain signal, no need for pre-set calculation parameters, and minimizing reliance on the experience of experts.
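For illustration, a minimal NumPy sketch of the conversion in Equation (6) is given below. The image size M = 64 and the synthetic test signal are assumptions used only for the example; the paper does not state the image size used.

```python
import numpy as np

def signal_to_grayscale(signal, M=64):
    """Convert a 1D vibration signal segment of length M*M into an M x M
    grayscale image following Equation (6). Assumes the segment is not constant."""
    seg = np.asarray(signal[:M * M], dtype=float)
    if seg.size < M * M:
        raise ValueError("signal must contain at least M*M points")
    # Normalize amplitudes to [0, 255] and round to integer pixel intensities
    seg = (seg - seg.min()) / (seg.max() - seg.min()) * 255.0
    return np.round(seg).reshape(M, M).astype(np.uint8)

# Example usage on a synthetic signal
if __name__ == "__main__":
    x = np.sin(np.linspace(0, 200 * np.pi, 64 * 64)) + 0.1 * np.random.randn(64 * 64)
    img = signal_to_grayscale(x, M=64)
    print(img.shape, img.dtype)  # (64, 64) uint8
```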
3.2 WGAN generation model
Since two-dimensional convolution has a powerful feature learning ability for images, a WGAN generation model with two-dimensional convolutional layers is adopted to strengthen the learning of deep features from the raw vibration signal and to improve the quality of the generated data, while avoiding gradient vanishing and mode collapse and increasing the diversity of the generated data. The network structure of the WGAN generator, shown in Figure 4, contains four transposed convolutional layers; the network structure of the discriminator, shown in Figure 5, contains four convolutional layers.
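Since the exact layer sizes in Figures 4 and 5 are not reproduced here, the following PyTorch sketch shows one plausible realization of a four-transposed-convolution generator and a four-convolution discriminator for 64 x 64 grayscale images. The channel counts, kernel sizes, latent dimension, and the initial projection layer are assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Four transposed convolutional layers: latent vector -> 64x64 grayscale image.
    The output uses Tanh, so images are assumed to be scaled to [-1, 1] for training."""
    def __init__(self, nz=100):
        super().__init__()
        self.fc = nn.Linear(nz, 256 * 4 * 4)  # project and reshape the noise to 4x4
        self.net = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),  # 8x8
            nn.ConvTranspose2d(128, 64, 4, 2, 1),  nn.BatchNorm2d(64),  nn.ReLU(True),  # 16x16
            nn.ConvTranspose2d(64, 32, 4, 2, 1),   nn.BatchNorm2d(32),  nn.ReLU(True),  # 32x32
            nn.ConvTranspose2d(32, 1, 4, 2, 1),    nn.Tanh(),                           # 64x64
        )

    def forward(self, z):
        x = self.fc(z.view(z.size(0), -1)).view(-1, 256, 4, 4)
        return self.net(x)

class Discriminator(nn.Module):
    """Four convolutional layers; no sigmoid, since the WGAN critic outputs a real-valued score."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, 2, 1),    nn.LeakyReLU(0.2, True),  # 32x32
            nn.Conv2d(32, 64, 4, 2, 1),   nn.LeakyReLU(0.2, True),  # 16x16
            nn.Conv2d(64, 128, 4, 2, 1),  nn.LeakyReLU(0.2, True),  # 8x8
            nn.Conv2d(128, 256, 4, 2, 1), nn.LeakyReLU(0.2, True),  # 4x4
        )
        self.fc = nn.Linear(256 * 4 * 4, 1)

    def forward(self, x):
        return self.fc(self.net(x).flatten(1))
```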
3.3 CNN classification model
After the 1D vibration signals are converted into grayscale images, a CNN classification model can be trained to classify them. The CNN classification model used in this paper contains two alternating pairs of convolutional and pooling layers followed by two fully connected layers; its network structure is shown in Figure 6.
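A minimal sketch of such a classifier is shown below, assuming 64 x 64 single-channel inputs and ten health-condition classes. The filter counts, kernel sizes, and hidden-layer width are assumptions, since Figure 6 is not reproduced here.

```python
import torch.nn as nn

class CNNClassifier(nn.Module):
    """Two conv+pool stages followed by two fully connected layers (ten classes)."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(True), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(True), nn.MaxPool2d(2),  # 32 -> 16
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 128), nn.ReLU(True), nn.Dropout(0.5),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))
```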
3.4 General procedures of the proposed method
The flow chart of the rolling bearing fault diagnosis method based on 2D grayscale images and WGAN is shown in Figure 7; it includes six main steps, and a code-level sketch of the whole pipeline is given after Step 6.
Step 1: Collect the vibration signals of the bearing components on the test bench using acceleration sensors.
Step 2: Segment the vibration signals with a rolling-window acquisition method and, at the same time, convert the segments into grayscale images; then divide the obtained grayscale images into a training set and a test set.
Step 3: Build the WGAN sample generation model, input a small number of fault samples from each training set into the WGAN model for training, and then save the model parameters after the training is completed.
Step 4: Use the trained WGAN generation model to expand the fault samples in each training set so that the number of samples under each health condition is the same.
Step 5: Build the CNN classification model, input the expanded balanced training set into the CNN classification model for training, and save the model parameters after the training is completed.
Step 6: Input the test set into the trained CNN classification model for testing and get the classification results.
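To tie Steps 1-6 together, the sketch below outlines the pipeline at a high level. The helper functions rolling_windows, split_train_test, group_by_class, train_wgan, balance_with_generators, train_cnn, and evaluate are hypothetical placeholders for the operations described in the steps above; only signal_to_grayscale and CNNClassifier correspond to the sketches given in Sections 3.1 and 3.3.

```python
# High-level sketch of the diagnosis pipeline (helper functions are hypothetical placeholders).
def run_pipeline(raw_signals, labels, M=64):
    # Step 2: rolling-window segmentation and grayscale conversion
    images = [signal_to_grayscale(seg, M) for seg in rolling_windows(raw_signals, M * M)]
    train_set, test_set = split_train_test(images, labels)

    # Step 3: train one WGAN generation model per (minority) fault class
    wgans = {cls: train_wgan(samples) for cls, samples in group_by_class(train_set).items()}

    # Step 4: expand each fault class until the training set is balanced
    balanced_set = balance_with_generators(train_set, wgans)

    # Step 5: train the CNN classifier on the balanced training set
    classifier = train_cnn(CNNClassifier(num_classes=10), balanced_set)

    # Step 6: evaluate on the held-out test set
    return evaluate(classifier, test_set)
```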
4. EXPERIMENTAL VERIFICATION
For these case studies, Python 3.8 is utilized as the programming language, and Pytorch 2.0 serves as the deep learning framework. The computer setup includes a Windows 64-bit operating system, a Core i7-10700 CPU @ 2.90 GHz with 16 GB RAM, and an added GPU (NVIDIA Quadro P2200) with 5 GB memory to enhance the training speed.
4.1 Laboratory bearing dataset
The experimental data used in this paper are from the rolling bearing dataset provided by the Case Western Reserve University (CWRU) Bearing Data Center. The CWRU rolling bearing dataset was acquired on a test stand with a sampling frequency of 12 kHz, and the bearing being monitored was an SKF 6205. Three types of bearing faults were tested: inner race fault (IF), outer race fault (OF), and roller fault (RF), each with three damage sizes: 0.18 mm, 0.36 mm, and 0.54 mm. Together with the normal condition, a total of ten health conditions are obtained.
Six datasets, A, B, C, D, E, and F, were produced for this experiment, as shown in Table 2. Dataset A is a balanced dataset with 1,000 samples under each health condition. Datasets B, C, D, and E are obtained by randomly subsampling the fault samples in Dataset A at imbalance ratios of 1:5, 1:10, 1:20, and 1:40, leaving 200, 100, 50, and 25 samples under each fault condition, respectively, while the 1,000 normal samples are retained. Dataset F is the test dataset, with 100 samples under each health condition.
Description of bearing datasets
Datasets | Number of samples | Fault type | Fault diameter (mm) | Label
A/B/C/D/E/F | 1000/1000/1000/1000/1000/100 | N | 0 | 0
A/B/C/D/E/F | 1000/200/100/50/25/100 | IF | 0.18 | 1
A/B/C/D/E/F | 1000/200/100/50/25/100 | IF | 0.36 | 2
A/B/C/D/E/F | 1000/200/100/50/25/100 | IF | 0.54 | 3
A/B/C/D/E/F | 1000/200/100/50/25/100 | OF | 0.18 | 4
A/B/C/D/E/F | 1000/200/100/50/25/100 | OF | 0.36 | 5
A/B/C/D/E/F | 1000/200/100/50/25/100 | OF | 0.54 | 6
A/B/C/D/E/F | 1000/200/100/50/25/100 | RF | 0.18 | 7
A/B/C/D/E/F | 1000/200/100/50/25/100 | RF | 0.36 | 8
A/B/C/D/E/F | 1000/200/100/50/25/100 | RF | 0.54 | 9
4.2 Fault diagnosis results and analysis
The specific parameters of the method are as follows: the batch size is 32, the learning rate of both the generator and the discriminator is 0.00005, and the maximum number of iterations is 10,000. To prevent overfitting during training, the Dropout method is used. Dropout randomly removes neurons during the learning process: at each training step, neurons in the hidden layer are randomly selected and temporarily dropped, so the dropped neurons no longer transmit signals.
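The reported hyperparameters can be wired up as in the sketch below, using the Generator and Discriminator sketched in Section 3.2. The choice of RMSprop follows the original WGAN recipe[18] and is an assumption, since the paper reports only the learning rate, batch size, and iteration count; the dropout rate is likewise illustrative.

```python
import torch

def build_optimizers(G, D, learning_rate=5e-5):
    """RMSprop optimizers for the generator G and discriminator D.
    RMSprop is assumed here, following the original WGAN training recipe."""
    opt_G = torch.optim.RMSprop(G.parameters(), lr=learning_rate)
    opt_D = torch.optim.RMSprop(D.parameters(), lr=learning_rate)
    return opt_G, opt_D

batch_size = 32          # reported in the paper
max_iterations = 10_000  # reported in the paper

# Dropout randomly zeroes activations during training to reduce overfitting;
# it is active in train() mode and disabled in eval() mode (rate is illustrative).
dropout = torch.nn.Dropout(p=0.5)
```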
4.2.1 Comparing generated samples and real samples
A random selection of three fault conditions is made for comparison. Figures 8 and 9 display the time domain and frequency spectrum of both real and generated samples. The two figures demonstrate that the generated samples have good diversity under different fault conditions while maintaining the key features of the raw signal. Therefore, the generated signals are similar to the real signals, which indicates that the data generated using the generative model based on grayscale images and WGAN can be expanded to the original dataset to solve the data imbalance phenomenon.
Figure 8. The time-domain waveform comparison between generated samples (right) and real samples (left).
Figure 9. The frequency spectrum comparison between generated samples (red) and real samples (blue).
The cosine similarity is used to quantitatively evaluate the generated samples. The cosine value ranges from -1 to 1, with larger values indicating greater similarity between the two signals. The cosine similarities between the generated and real samples for the three fault conditions shown in Figures 8 and 9 were calculated as 0.939, 0.945, and 0.907, respectively.
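The similarity measure can be computed as in the short sketch below; it assumes the generated and real samples are compared as flattened vectors of equal length (e.g., flattened grayscale images or time-domain segments), which is an illustrative choice rather than the authors' stated procedure.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length signals or flattened images.
    Returns a value in [-1, 1]; values closer to 1 indicate greater similarity."""
    a = np.asarray(a, dtype=float).ravel()
    b = np.asarray(b, dtype=float).ravel()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```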
4.2.2 Diagnosis results under different imbalance ratio datasets
First, the imbalanced Datasets B, C, D, and E are fed directly into the CNN classification model for training, and the diagnostic results are obtained on the test Dataset F. Second, the imbalanced Datasets B, C, D, and E are input to the WGAN generation model for sample expansion; after the samples are balanced, they are input to the CNN classification model for training, and the diagnostic results are again obtained on the test Dataset F. The experimental results are shown in Table 3.
Recognition accuracy before and after sample expansion
Dataset B | Dataset C | Dataset D | Dataset E | |
Before expansion | 91.9% | 84.4% | 77.3% | 68.1% |
After expansion | 98.9% | 98.5% | 94.2% | 89.5% |
As can be seen from Table 3, the fault recognition accuracy gradually decreases as the imbalance ratio increases, from 91.9% down to 68.1%, which indicates that on an imbalanced dataset the classifier cannot effectively learn the features of the minority classes, leading to a drop in classification accuracy. However, after the minority classes in the unbalanced datasets are expanded by the WGAN generation model, the recognition rates all increase, and the improvement becomes more pronounced as the imbalance ratio grows; even for Dataset E, with the most severe imbalance ratio, the recognition accuracy improves from 68.1% to 89.5%.
Taking Dataset C as an example, the fault recognition rate for each category is calculated, and the confusion matrix is drawn. The comparison of the confusion matrix before and after sample expansion is shown in Figures 10 and 11. From Figure 10, it is evident that under the unbalanced sample condition, the recognition rate of some categories is very low. From Figure 11, it can be seen that the recognition rate of each category is improved after sample expansion.
4.2.3 Performance under noise environment
In this section, to verify the noise immunity of the proposed algorithm, Gaussian white noise is added to the original signal to obtain composite signals with different signal-to-noise ratios (SNRs). These composite signals are then used to train the model and obtain the diagnostic accuracy in a noisy environment. Taking Dataset C as an example, the results of the proposed model on the noisy signals are shown in Table 4.
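The composite signals can be produced with a standard SNR-controlled noise injection such as the sketch below; the exact procedure used by the authors is not reported, so this is an illustrative implementation.

```python
import numpy as np

def add_white_noise(signal, snr_db):
    """Add Gaussian white noise so the result has the requested SNR in dB."""
    signal = np.asarray(signal, dtype=float)
    signal_power = np.mean(signal ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10.0))
    noise = np.random.normal(0.0, np.sqrt(noise_power), size=signal.shape)
    return signal + noise

# Example: corrupt a segment at SNR = -4 dB before converting it to an image
# noisy = add_white_noise(raw_segment, snr_db=-4)
```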
Classification results at different SNRs
SNR (dB) | -6 | -4 | -2 | 0 | 2 | 4 |
Accuracy | 98.22% | 97.78% | 98.32% | 98.4% | 98.12% | 98.46% |
Table 4 shows the fault classification accuracies for SNRs ranging from -6 dB to 4 dB. At an SNR of -4 dB, the accuracy is lowest at 97.78%, which is still very close to the accuracy measured without added noise (98.5%). It can be seen that the model proposed in this paper has a good noise immunity capability.
4.2.4 Comparison with other methods
To further demonstrate the effectiveness of the proposed method in dealing with the sample imbalance problem, it is compared with the approach based on time-domain signals and GAN, the approach based on time-domain signals and WGAN, and the approach based on grayscale images and DCGAN to evaluate its fault diagnosis capability. To minimize potential random errors, ten tests were performed for each approach. Table 5 shows the fault identification accuracy of the different approaches on the imbalanced datasets.
Diagnosis accuracy of different approaches
Approach | Dataset B | Dataset C | Dataset D | Dataset E |
Time-domain signals and GAN | 96.6% | 95.6% | 94.0% | 89.0% |
Time-domain signals and WGAN | 98.6% | 96.3% | 94.2% | 90.0% |
Grayscale images and DCGAN | 97.1% | 94.1% | 83.8% | 75.6% |
Grayscale images and WGAN | 98.9% | 98.5% | 94.2% | 89.5% |
From Table 5, it can be concluded that the fault identification accuracy of the proposed approach is higher than that of the other approaches on Datasets B and C. On Dataset D, the proposed approach performs as well as the approach based on time-domain signals and WGAN and better than the other two approaches. On Dataset E, its recognition rate is slightly lower than that of the approach based on time-domain signals and WGAN. Overall, the diagnostic ability of the proposed approach is superior to that of the other approaches.
5. CONCLUSIONS
In this paper, a new data generation method based on 2D grayscale images and WGAN is designed to address the issue of low fault recognition rates under sample imbalance conditions. First, the raw vibration signals are converted into 2D grayscale images. Second, the fully connected layers in the original GAN network are replaced by two-dimensional convolutional layers, which enhances the learning of deep features from the raw vibration signals. Finally, the Wasserstein distance is utilized in the loss function of GAN to address issues such as gradient vanishing and mode collapse, which improves the quality and diversity of the generated samples. The experimental results show that the bearing fault diagnosis model based on 2D grayscale images and WGAN can solve the problem of a low fault recognition rate under sample imbalance conditions. Moreover, compared with other methods, the samples generated by the proposed method are of higher quality and yield a higher fault identification rate.
However, the diagnostic results obtained using the data generated by the proposed method still differ from those obtained using the original data, which suggests that there is still room for improvement in the data generation algorithm presented in this paper. Additionally, the proposed method is not suitable for machines operating under variable working conditions. In the future, bearing fault diagnosis under variable working conditions needs further research.
DECLARATIONS
Authors’ contributions
Writing-Original Draft and Conceptualization: He J
Technical Support: Lv Z
Validation and Supervision: Chen X
Availability of data and materials
Not applicable.
Financial support and sponsorship
None.
Conflicts of interest
All authors declared that there are no conflicts of interest.
Ethical approval and consent to participate
Not applicable.
Consent for publication
Not applicable.
Copyright
© The Author(s) 2023.
REFERENCES
1. Diez-Olivan A, Del Ser J, Galar D, Sierra B. Data fusion and machine learning for industrial prognosis: trends and perspectives towards Industry 4.0. Inf Fusion 2019;50:92-111.
2. Xing S, Lei Y, Jia F, Lin J. Intelligent fault diagnosis of rotating machinery using locally connected restricted Boltzmann machine in big data era. In: 2017 IEEE International Conference on Industrial Engineering and Engineering Management (IEEM); 2017. pp. 1930-34.
3. Jia F, Lei Y, Lin J, Zhou X, Lu N. Deep neural networks: a promising tool for fault characteristic mining and intelligent diagnosis of rotating machinery with massive data. Mech Syst Signal Process 2016;72-73:303-15.
4. Shao H, Jiang H, Zhang H, Duan W, Liang T, Wu S. Rolling bearing fault feature learning using improved convolutional deep belief network with compressed sensing. Mech Syst Signal Process 2018;100:743-65.
5. Chawla NV, Bowyer KW, Hall LO, Kegelmeyer WP. SMOTE: synthetic minority over-sampling technique. J Artif Intell Res 2002;16:321-57.
6. Jia F, Lei Y, Lu N, Xing S. Deep normalized convolutional neural network for imbalanced fault classification of machinery and its understanding via visualization. Mech Syst Signal Process 2018;110:349-67.
7. Sun Y, Kamel MS, Wong AK, Wang Y. Cost-sensitive boosting for classification of imbalanced data. Pattern Recognit 2007;40:3358-78.
8. Deng Y, Shichang D, Shiyao J, Chen Z, Zhiyuan X. Prognostic study of ball screws by ensemble data-driven particle filters. J Manuf Syst 2020;56:359-72.
9. Deng Y, Huang D, Du S, Li G, Zhao C, Lv J. A double-layer attention based adversarial network for partial transfer learning in machinery fault diagnosis. Comput Ind 2021;127:103399.
10. Jia S, Deng Y, Lv J, Du S, Xie Z. Joint distribution adaptation with diverse feature aggregation: a new transfer learning framework for bearing diagnosis across different machines. Measurement 2022;187:110332.
11. Deng Y, Du S, Wang D, Shao Y, Huang D. A calibration-based hybrid transfer learning framework for RUL prediction of rolling bearing across different machines. IEEE Trans Instrum Meas 2023;72:1-15.
12. Deng Y, Lv J, Huang D, Du S. Combining the theoretical bound and deep adversarial network for machinery open-set diagnosis transfer. Neurocomputing 2023;548:126391.
13. Goodfellow I, Pouget-Abadie J, Mirza M, et al. Generative adversarial networks. Commun ACM 2020;63:139-44.
14. Wang Z, Wang J, Wang Y. An intelligent diagnosis scheme based on generative adversarial learning deep neural networks and its application to planetary gearbox fault pattern recognition. Neurocomputing 2018;310:213-22.
15. Lee YO, Jo J, Hwang J. Application of deep neural network and generative adversarial network to industrial maintenance: a case study of induction motor fault detection. In: 2017 IEEE international conference on big data (big data). 2017. pp. 3248-53.
16. Zhou F, Yang S, Fujita H, Chen D, Wen C. Deep learning fault diagnosis method based on global optimization GAN for unbalanced data. Knowl Based Syst 2020;187:104837.
17. Radford A, Metz L, Chintala S. Unsupervised representation learning with deep convolutional generative adversarial networks. 2016. Available from: https://arxiv.org/abs/1511.06434 [Last accessed on 14 August 2023].
18. Arjovsky M, Chintala S, Bottou L. Wasserstein generative adversarial networks. 2017. pp. 214-23. Available from: https://arxiv.org/abs/1701.07875 [Last accessed on 14 August 2023].