Abstract
Breast cancer is a prominent cause of death among women worldwide. Infrared thermography, due to its cost-effectiveness and non-ionizing radiation, has emerged as a promising tool for early breast cancer diagnosis. This article presents a hybrid model approach for breast cancer detection using thermography images, designed to process and classify these images into healthy or cancerous categories, thus supporting disease diagnosis. Multiple pre-trained convolutional neural networks are employed for image feature extraction, feature filter methods are proposed for feature selection, and diverse classifiers are utilized for image classification. Evaluation on the DMR-IR test set revealed that the combination of ResNet34, the Chi-square (\(\chi^2\)) filter, and an SVM classifier demonstrated superior performance, achieving the highest accuracy of \(99.62\%\). Furthermore, the largest accuracy improvement over regular convolutional neural networks, \(18.33\%\), was obtained when using the KNN classifier with the Chi-square filter. The results confirm that the proposed method, with its high accuracy and lightweight model, outperforms state-of-the-art methods for breast cancer detection from thermography images, making it a good choice for computer-aided diagnosis.
Introduction
Breast cancer (BC), responsible for 15.5% of annual female cancer deaths, can be treated with early detection [1]. Currently, biomedical imaging is a highly endorsed approach for BC detection, and imaging techniques like magnetic resonance imaging, ultrasound, and thermal imaging are widely utilized [2]. Thermography, also known as infrared imaging, employs infrared (IR) cameras to capture temperature patterns on the surface of the breast, allowing for the identification of tumors based on the temperature variations. In contrast to other methods, thermography stands out as a non-invasive, radiation-free, cost-effective technique. Further, the imaging system is simple, requiring only a specific thermal camera to capture thermal radiation from the skin’s surface [3].
There are many studies on using artificial intelligence (AI) to detect breast cancer based on thermal images. These studies focused on proposing machine learning models for high accuracy [4,5,6,7,8,9]. However, most of these studies require considerable computational processing, resulting in relatively slow data processing speed and a demand for computing systems with substantial resources [10]. The diagnostic method for breast cancer using thermography uses a simple imaging system, requiring fast processing with limited computational resources. It is crucial for any new AI method for breast cancer detection from thermography images to process data quickly with limited resources and yet achieve high accuracy. Therefore, our research presented in this paper aims to develop a method for a computer-assisted diagnostic system that can process thermographic breast images and classify preprocessed thermal images into healthy or cancer categories with a lightweight model. Our proposed method is fast and requires limited resources but achieves high accuracy. We accomplish this by combining a small pre-trained convolutional neural network (CNN) for image feature extraction, optimizing the extracted features with straightforward filters, and implementing an efficient classifier.
The common pre-trained CNNs from the ResNet [11], Xception [12], InceptionNet [13], MobileNet [14], and DenseNet [15] families were employed for feature extraction in this study. The weights the models learned on the ImageNet dataset were transferred to extract features from breast thermal images. Typical machine learning classifiers, including Support Vector Machine (SVM) [16], Random Forest (RF) [17], Adaptive Boosting (AdaBoost) [18], Extreme Gradient Boosting (XGBoost) [19], and K-Nearest Neighbors (KNN), were employed to categorize thermal breast images and evaluate their efficacy. Moreover, this research proposed the adoption of filter methods for feature selection, wherein features are chosen based on an assessment metric without considering the particular computational method for data modeling in use. The Chi-square (\(\chi^2\)) [20] and MIFS (mutual information-based feature selection) [21] filters were employed and evaluated. The study’s investigation found that ResNet34 along with the Chi-square filter and SVM classifier demonstrated optimal performance, achieving an accuracy rate of \(99.62\%\). The main contributions include:
i. Implementing various pre-trained CNNs to extract features from breast thermal images for classifying them into healthy/cancer groups.
ii. Execution of feature optimization using Chi-square and MIFS filter methods to improve breast cancer detection accuracy.
iii. Utilizing diverse classifiers, including RF, KNN, SVM, AdaBoost, and XGBoost, for breast thermal image classification.
iv. Using DMR-IR, a publicly available dataset of breast cancer thermography images, for the training and testing of the proposed method. The obtained performance results were compared and evaluated.
v. Demonstrating, on an independent test set separate from the training set, that the model achieves high accuracy, approaching 100%.
The remainder of this article consists of the “Related Works,” “Materials and Methods,” “Experimental Method,” “Results and Findings,” “Comparative Discussion,” and “Conclusion” sections. The literature review is included in the “Related Works” section. The “Materials and Methods” section explains the CNN-based feature extractors, the Chi-square and MIFS filters, and the classifiers in detail. The DMR-IR dataset, evaluation method, and experimental setting are described in the “Experimental Method” section. The “Results and Findings” section includes all calculation and classification results from the proposed method. The model’s performance is evaluated and compared in the “Comparative Discussion” section. Finally, conclusions are drawn in the “Conclusion” section.
Related Works
Breast Thermography
Thermography, commonly referred to as infrared imaging, utilizes infrared (IR) cameras to record temperature patterns on the breast’s surface, enabling the detection of tumors through temperature fluctuations [3]. The primary concept behind IR image diagnosis is that objects inherently release thermal signals at distinct temperature levels. The characteristics of the object determine the type and temperature range of the signals it emits. In regular conditions, the human body, similar to other entities, releases infrared signals. These signals vary from one part of the body to another due to differences in heat. This principle is widely applied in medical examinations, particularly in the context of breast cancer screenings. The presence of any malignant growth in the breast is associated with increased inflammation and blood vessel development, both of which result in higher temperature profiles [22]. Diagnosing breast cancer through thermography relies on recognizing distinct characteristics within the evolving heat patterns of the breast. These typical features encompass (1) a temperature distribution difference between the healthy and affected breasts, with one side exhibiting an anomaly; (2) identifiable hot spots that signal anomalies or irregularities; (3) changes in hypothermic vascular patterns linked to the progression of a tumor; and (4) variations in thermal patterns within the areolar and periareolar regions [23, 24].
Figure 1 presents the common process of utilizing thermography for breast cancer screening. It commences with a visual examination of the breast’s surface, allowing the doctor to compare any irregular findings with the heat map. Subsequently, the patient must remain in a temperature- and humidity-controlled room for 15 min to adapt to the environment. During this time, the upper portion of the body, extending from the waist to the chin, should be uncovered. After the body temperature has stabilized, individuals are directed to position their hands on their right side to ease the examination of the pertinent surfaces. Then, the imaging procedure is initiated to conclude the process [25]. Thermal imaging protocols can be classified into two main categories based on how the body behaves regarding heat transfer: static and dynamic. Static acquisitions involve the patient achieving thermal stability within the imaging environment. This approach captures consistent thermal data when the body is at a relatively stable temperature. Dynamic acquisitions are employed to monitor changes in skin temperature, particularly the recovery process following thermal stress, such as when a patient has been cooled. This method assesses how the body responds and recovers from temperature variations [26].
Breast Cancer Detection Using Machine Learning
Machine learning finds extensive application in the analysis of medical images, a challenging task even for seasoned experts. Machine learning is a set of algorithms designed to investigate features of data. In the context of breast cancer detection, the predominant application of machine learning algorithms primarily revolves around classification tasks. These algorithms aim to differentiate between healthy breasts and those with cancerous tumors, requiring training on thermographic images of both healthy and malignant breasts. Numerous research studies have been dedicated to machine learning methods utilizing mammograms, CT scans, ultrasound images, and thermographic images showcasing impressive performance metrics. Table 1 provides a thorough summary of current research on breast cancer detection.
Chaves et al. [4] used several pre-trained CNN models for breast thermal image classification, such as AlexNet, GoogLeNet, VGG-16, VGG-19, and ResNet18, all trained on a dataset consisting of 440 thermographic images sourced from the Vision Lab dataset. VGG-16 outperformed the other models, demonstrating superior performance metrics. In the research of Kiymet et al. [5], InceptionV3, ResNet50, VGG16, and VGG19 were compared. Through experimental investigations, the ResNet50 network attained the top test performance in detecting breast cancer, with an accuracy of 88.89%. Cabıoğlu et al. [6] implemented several CNNs and applied transfer learning. Using transfer learning with CNN, they accomplished an accuracy of 94.3%, sensitivity of 93.3%, and recall of 93.3%. In the work of Zuluaga-Gomez et al. [28], various CNN models, including VGG16, ResNet, InceptionResNetV2, Xception, and SeResNet, were utilized to categorize thermal images obtained from the DMR-IR dataset. On the 57-patient database, they obtained an accuracy of 92% and an F1 score of 92%.
Roslidar et al. [29] introduced a mobile neural network model named “BreaCNet” for breast cancer detection. This model incorporates an effective segmentation method for breast thermal images and a classifier based on a mobile CNN. The classifier was created using ShuffleNet with the addition of a convolutional layer containing 1028 filters. It was observed that using the modified ShuffleNet alone achieved an accuracy of 72%. However, when combined with the suggested segmentation method, the performance improved to an accuracy of 100%. In [30], the authors introduced a method that combines two distinct CNN architectures into one model for efficiently classifying mitotic cells. This proposed hybrid CNN model, incorporating data preprocessing with a median filter and an Otsu-based segmentation technique, was trained and tested on 50,000 images each. It achieved an overall accuracy of 98.9%. Al Husaini et al. [31] built classifiers based on deep convolutional neural networks modeled on Inception V3, Inception V4, and a modified version of the latter called Inception MV4 for thermal-based early breast cancer detection. They achieved an accuracy of 99.748% with the Inception MV4 network on the DMR dataset. Alshehri et al. [32] explored the potential of CNNs with attention mechanisms (AMs) for thermal breast cancer image classification. The integration of AM with the CNN model resulted in promising test accuracy rates of 99.46%, 99.37%, and 99.30% on the breast thermal dataset. In the research conducted by Chatterjee et al. [8], a two-phase model was proposed for the detection of breast cancer in thermal images: initially, VGG16 is utilized to extract features from the images; subsequently, for the selection of the most suitable subset of features, the Dragonfly Algorithm (DA), a meta-heuristic algorithm, is employed. To improve the performance of DA, they introduced a memory-based variant of DA by incorporating the Grunwald-Letnikov (GL) algorithm. When tested on the DMR-IR dataset, the model effectively identifies and removes non-essential features. It achieved a diagnostic accuracy of 100% while utilizing 82% fewer features than VGG16.
In [33], Morovati et al. introduced reduced deep convolutional activation features (R-DeCAF) to enhance the accuracy of breast cancer diagnosis. This framework employs pre-trained CNNs such as AlexNet, VGG-16, and VGG-19 as feature extractors in transfer learning mode. DeCAF features are derived from the first fully connected layer of these CNNs, and classification is performed using a support vector machine. Research by Aidossov et al. [9] presents a new integrated diagnostic system that utilizes IR thermal images in combination with CNNs and Bayesian Networks (BNs). They illustrate how the fusion of transfer learning models like ResNet50 and the integration of BNs with artificial neural network techniques like CNNs create a cutting-edge expert system that delivers high performance and offers interpretability. The study’s findings indicated that the most effective models among the approaches implemented achieved accuracy rates ranging from around 91 to 93%. Additionally, the precision values fell within the 91 to 95% range, while sensitivity ranged from 91 to 92%, and specificity ranged from 91 to 97%. In [34], the authors conducted an extensive review of emerging advancements in thermography and CNNs for breast cancer detection. They suggest that integrating CNNs with thermography provides a robust and efficient method for early breast cancer detection, which can be applied in clinical settings for routine screening and diagnosis.
In summary, studies applying machine learning to detect breast cancer using thermography generally yield quite good results. However, some limitations still need to be addressed. First, several studies [4,5,6, 9, 28] have used CNN models with rather large architectures for the classification of breast thermal images, yet the results achieved have not been highly accurate. Moreover, the majority of current research, as indicated by the authors, suffers from a shortage of thermogram databases. Therefore, methods that integrate CNN networks with other machine learning techniques are needed to enhance accuracy even with small datasets. Next, several studies [8, 29, 30, 32] have incorporated additional techniques to improve the accuracy of the models. However, these techniques are quite complex and require significant computational processing, leading to relatively slow data processing speeds and demanding large computational resources. Meanwhile, infrared imaging is a simple system with compact devices and fast computation times, meeting the requirements for quick breast cancer diagnosis. Thus, this article employs a hybrid model approach for breast cancer detection based on thermography images. We utilize a CNN model for feature extraction; the extracted features are subsequently optimized and used to train multiple classifiers. The outcomes validate that the proposed approach surpasses existing methods for breast cancer detection from thermography images, achieving high accuracy with a lightweight model.
Materials and Methods
Overview of the Proposed Method
The research aims to address the problem of breast thermal image classification, involving predicting patients’ categories (either healthy or cancer) using their respective breast thermal images:
$$y = f(t), \quad y \in \{\text{healthy}, \text{cancer}\},$$
where t are thermal images that identify the temperature maps of the thermographies.
To achieve this objective, we propose the scheme shown in Fig. 2. The tasks outlined in this proposal encompass (i) thermal image preprocessing, (ii) feature extraction using CNNs, (iii) feature selection using feature filters, and (iv) classification using a binary classifier. First, the thermograms are collected from the DMR-IR database. Then, the thermograms are preprocessed before passing through the CNN extractors. Next, the features of the thermal images are extracted using the CNNs. A classification task is executed to assess each CNN’s performance with the Softmax classifier. The deep features obtained from the CNN extractors are optimized using filters to get a dominant feature vector, which is input to classifiers for final classification.
The Pre-Trained CNN Feature Extractors
Feature extraction plays a crucial role in improving the efficiency of the learning process by producing compact, meaningful representations of the input images. In contrast to alternative feature extraction techniques, CNNs can directly capture the images’ features within the input dataset. The fundamental concept underlying CNNs is the interpretation of input data as images, which reduces the number of parameters employed, thereby enhancing processing speed. CNN architecture comprises convolutional layers, pooling layers, fully connected layers, and the rectified linear unit (ReLU) as its constituent layers. Convolutional layers specialize in learning convolutions and optimizing data categorization performance. Pooling layers play a vital role in mitigating overfitting, ensuring stable transformation, and improving computational efficiency by reducing the convolution’s structural output. The ReLU activation function enhances the network’s nonlinear characteristics [35]. When CNNs are employed to extract features, the features are obtained from the convolutional and fully connected layers of the CNNs, and these features encompass abstract visual attributes.
In image processing research using deep learning techniques, training a model demands a substantial amount of data and computational resources, while the dataset of thermograms is relatively small. Therefore, this study used different pre-trained networks and transfer learning to avoid overfitting. Transfer learning involves leveraging a pre-trained model as the foundation for a new model. This approach not only minimizes the necessary training data but also expedites the training process. In this work, due to the limited size of the thermogram dataset, there is a risk of overfitting when using models with a large number of parameters. As a result, small pre-trained CNNs were chosen to serve as feature extractors. These include ResNet18, ResNet34, Xception, InceptionV3, MobileNet, MobileNetV2, and DenseNet121 [36], all of which were trained on the ImageNet dataset.
Figure 3 presents the proposed feature extractor using a pre-trained CNN model. The feature extractor was implemented as follows: the base model was created using the pre-trained CNN architecture, and the pre-trained weights were obtained. The weights from the pre-trained models were then utilized to initialize the model’s weights for the classification of thermal images. The base model has 1000 units in the final output layer, while thermal image classification has two outputs (healthy/cancer). Thus, the base model’s final layer (SoftMax 1000) was removed, and a final output layer (SoftMax 2) compatible with the thermal image classification problem was added. The next step was to freeze the layers from the pre-trained model so they do not change during training. After that, the new layers were trained on the thermal image dataset. Finally, the feature extractor was obtained by removing the last layer (SoftMax 2) of the trained model.
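As a concrete illustration, the sketch below walks through these steps in Keras for the MobileNetV2 backbone (a minimal sketch, not the authors’ published code; the layer indexing assumes the standard keras.applications model, and the extracted feature dimensionality depends on the chosen backbone):

```python
from tensorflow import keras

# Base model with its original ImageNet classification head.
base = keras.applications.MobileNetV2(weights="imagenet",
                                      input_shape=(224, 224, 3))
base.trainable = False                    # freeze the pre-trained layers

# Remove the SoftMax-1000 head and attach a 2-class (healthy/cancer) head.
penultimate = base.layers[-2].output      # features feeding the old head
head = keras.layers.Dense(2, activation="softmax")(penultimate)
model = keras.Model(base.input, head)

# ... compile and train `model` on the thermogram dataset (see Table 2) ...

# Finally, drop the SoftMax-2 layer to obtain the feature extractor.
extractor = keras.Model(model.input, model.layers[-2].output)
# deep_features = extractor.predict(images)  # one feature vector per image
```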
Feature Filter
This study proposed a scheme for breast cancer detection. In the proposed scheme, after the CNN feature extractors, we receive feature vectors with a length of 1000. Not all of these features are useful for fitting the classification models. Using unnecessary features diminishes the model’s generalization ability and potentially lowers the classifier’s overall accuracy. Furthermore, increasing the number of features in a model elevates its overall complexity. Thus, a feature selection/reduction procedure is necessary in the proposed scheme to avoid overfitting and improve the model’s performance.
Feature selection, a technique employed in machine learning, aims to identify the most critical features from a more extensive set of features in a dataset. This process enhances the performance of a machine learning model and helps prevent overfitting by reducing the number of features considered. A feature filter is a form of feature selection that selects inherent feature characteristics based on univariate statistics instead of cross-validation performance. With a feature filter, the feature selection procedure is carried out independently of any particular machine learning algorithm; instead, it utilizes statistical measures to assess and rank the features. Feature filters are faster and less computationally expensive than other feature selection methods [37]. Hence, this work employed feature filters to select the features from the CNN feature extractors for the classifiers. The MIFS and Chi-square filters were used and evaluated.
Mutual information [38] can serve as a tool for feature selection by assessing the value each variable contributes concerning the target variable. It is computed by assessing the relationship between two variables and measures the reduction in uncertainty about one variable when the value of the other variable is known. The mutual information between two random variables X and Y can be formally presented as
$$MI(X; Y) = H(X) - H(X|Y),$$
where MI(X; Y) is the mutual information for X and Y, H(X) is the entropy for X, and H(X|Y) is the conditional entropy for X given Y. Mutual information quantifies the degree of dependence or “mutual dependence” between two random variables, and it is symmetrical, indicating that \(MI(X; Y) = MI(Y; X)\).
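To make the identity concrete, the following sketch verifies \(MI(X;Y) = H(X) - H(X|Y)\) on a toy binarized feature and compares the result with scikit-learn’s estimate (the arrays are illustrative, not taken from the dataset):

```python
import numpy as np
from scipy.stats import entropy
from sklearn.metrics import mutual_info_score

x = np.array([0, 0, 1, 1, 1, 0, 1, 0])   # a binarized feature
y = np.array([0, 0, 1, 1, 0, 0, 1, 1])   # healthy/cancer labels

# H(X): entropy of the feature's marginal distribution (in nats).
h_x = entropy(np.bincount(x) / len(x))

# H(X|Y): entropy of X within each class, weighted by P(Y = c).
h_x_given_y = sum(
    np.mean(y == c) * entropy(np.bincount(x[y == c], minlength=2) / np.sum(y == c))
    for c in np.unique(y))

print(h_x - h_x_given_y)            # MI via H(X) - H(X|Y)
print(mutual_info_score(y, x))      # the same value from scikit-learn
```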
The Chi-square (\(\chi^2\)) statistic [39] quantifies the disparity between the frequencies of observed and expected outcomes within a collection of events or variables. This statistical measure seeks to ascertain whether a difference between the observed and expected data results from random chance or indicates a correlation between the variables under investigation. The formula for Chi-square is
$$\chi^2_c = \sum_i \frac{(O_i - E_i)^2}{E_i},$$
where c is the degree of freedom, \(O_i\) is the observed value(s), and \(E_i\) is the expected value(s). The Chi-square test is employed for categorical features in a dataset. It involves computing the Chi-square value between each feature and the target variable and choosing the desired number of features with the highest Chi-square scores.
We employ statistical scores, such as Chi-squared and mutual information, to evaluate and rank features according to their association with the output variable. Subsequently, we choose the top K features with the highest scores for inclusion in the ultimate feature subset. When the statistical score is based on the Chi-squared measure, the feature filter is called the Chi-square filter. Conversely, when the statistical score is determined by mutual information, the feature filter is known as the MIFS filter. Algorithm 1 is the proposed feature filter algorithm.
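Algorithm 1 is not reproduced here, but the same score-rank-select-top-K procedure can be sketched with scikit-learn as follows (the min-max scaling step is an assumption, added because sklearn’s chi2 scorer requires non-negative inputs):

```python
from sklearn.feature_selection import SelectKBest, chi2, mutual_info_classif
from sklearn.preprocessing import MinMaxScaler

def filter_features(X_train, y_train, X_test, k=100, score="chi2"):
    """Score and rank features, then keep the top k (Chi-square or MIFS filter)."""
    if score == "chi2":
        # Chi-square requires non-negative values, so rescale to [0, 1].
        scaler = MinMaxScaler().fit(X_train)
        X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)
        selector = SelectKBest(chi2, k=k)
    else:
        selector = SelectKBest(mutual_info_classif, k=k)
    return selector.fit_transform(X_train, y_train), selector.transform(X_test)

# e.g., keep the 100 highest-scoring of the 1000 deep features:
# Xtr_k, Xte_k = filter_features(Xtr, ytr, Xte, k=100, score="chi2")
```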
The Classifiers for Breast Cancer Classification
The last part of the proposed scheme is a classifier that automatically categorizes data into one of two classes: healthy or cancer. In this work, five machine learning classifiers were implemented and evaluated, including Support Vector Machine (SVM), Random Forest (RF), Adaptive Boosting (AdaBoost), Extreme Gradient Boosting (XGBoost), and K-Nearest Neighbors (KNN). For the Support Vector Machine, a non-linear SVM classifier is employed with an RBF kernel. The Random Forest model uses an RF classifier with a node size of 2, a maximum tree depth of 9, and a forest consisting of 10 trees. The K-Nearest Neighbor model applies a KNN classifier with \(K=5\) using the BallTree algorithm to calculate distances.
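These settings map directly onto scikit-learn and XGBoost constructors, as sketched below (interpreting the stated node size as the minimum number of samples per leaf; any hyperparameter not mentioned in the text is left at the library default):

```python
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.neighbors import KNeighborsClassifier
from xgboost import XGBClassifier

classifiers = {
    # Non-linear SVM with an RBF kernel.
    "SVM": SVC(kernel="rbf"),
    # RF: forest of 10 trees, maximum depth 9, node size 2.
    "RF": RandomForestClassifier(n_estimators=10, max_depth=9,
                                 min_samples_leaf=2),
    "AdaBoost": AdaBoostClassifier(),
    "XGBoost": XGBClassifier(),
    # KNN with K = 5 and BallTree-based distance computation.
    "KNN": KNeighborsClassifier(n_neighbors=5, algorithm="ball_tree"),
}

# for name, clf in classifiers.items():
#     clf.fit(X_train_selected, y_train)
#     print(name, clf.score(X_test_selected, y_test))
```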
Experimental Method
Dataset and Preprocessing
This study used thermal images from the Database for Mastology Research with Infrared Image (DMR-IR) [26], an openly accessible dataset containing thermographic images of breast cancer, for training the models and assessing the effectiveness of the proposed method. The dataset comprises thermograms captured using both static and dynamic acquisition protocols. In the static protocol, a single image is taken after patients have rested for 10–15 min to achieve thermal stabilization. Conversely, the dynamic protocol involves capturing a series of thermograms every 15 s over a 5-min period. The images were captured from 5 different positions for the static protocol (i.e., front, right lateral 45°, right lateral 90°, left lateral 45°, and left lateral 90°). The dynamic protocol produces 20 sequential images in the front position and two lateral images (right lateral 90°, left lateral 90°). The thermograms obtained were then labeled into healthy and unhealthy classes. Additionally, the dataset incorporates segmented images that exclusively depict the temperatures of the breasts, excluding the temperatures of other body parts. Figure 4 shows examples of thermal images in the DMR-IR dataset.
In this work, we used the frontal thermal images with the dynamic protocol of 56 patients, including 37 cancer and 19 healthy patients. There were 380 images, from 19 healthy patients, labeled as normal breast thermograms and 740 images, from cancer patients, labeled as abnormal breast thermograms. Thus, we had a dataset of 1120 breast thermograms that were categorized into normal and abnormal classes. We established the training and testing sets using breast thermograms from 47 patients (31 with cancer and 16 healthy) for training and 9 patients (6 with cancer and 3 healthy) for testing. To avoid bias and ensure the independence of the datasets, all thermograms from a single patient were kept together in the same set, either training or testing. Accordingly, the training set consists of 620 thermograms showing abnormalities and 320 considered normal, while the test dataset comprises 120 abnormal thermograms and 60 normal ones. In total, 940 breast thermograms were used for training and 180 for testing. Additionally, 10% of the training data was allocated to create the validation set. This approach ensures that our training, validation, and testing sets are completely independent, with no overlap of images from the same patient across different sets.
The DMR-IR dataset provides float temperature matrices in “txt” files for the thermograms. The float temperature matrices have a dimension of 640 × 480 pixels. Thus, it is necessary to preprocess the thermograms to match the input requirements of the CNN extractors. Figure 5 presents the thermogram preprocessing. Using OpenCV, the collected thermograms from DMR-IR are converted from float temperature matrices to “jpg” images, and each converted image is resized to 224 \(\times \) 224 \(\times \) 3 pixels.
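A sketch of this conversion step is shown below (the file path, the plain-text layout assumed by np.loadtxt, and the grayscale-to-RGB mapping are illustrative assumptions; the paper does not publish its preprocessing code):

```python
import cv2
import numpy as np

def thermogram_to_image(txt_path, size=(224, 224)):
    """Convert a DMR-IR float temperature matrix to a 224 x 224 x 3 image."""
    temps = np.loadtxt(txt_path)                  # float temperature matrix
    # Min-max normalize the temperatures into the 0-255 grayscale range.
    gray = cv2.normalize(temps, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    rgb = cv2.cvtColor(gray, cv2.COLOR_GRAY2RGB)  # replicate to 3 channels
    return cv2.resize(rgb, size)                  # match the CNN input shape

# img = thermogram_to_image("patient_001_front.txt")
# cv2.imwrite("patient_001_front.jpg", img)
```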
Evaluation Method
In this study, to attain a thorough assessment of the model’s performance in the classification task, we employ well-known evaluation functions such as accuracy, recall, precision, and F1 score, as outlined in [40]. These metrics are utilized to evaluate the congruence between the model’s classification results and the assigned class labels for the thermograms. The recall in Eq. 4, \(\text{Recall} = TP/(TP+FN)\), indicates the proportion of samples in the positive class that are correctly classified. Precision, or positive predictive value (PPV), is computed by Eq. 5, \(\text{Precision} = TP/(TP+FP)\). The F1 score combines both precision and recall into a single value, calculated using Eq. 6, \(F1 = 2 \cdot \text{Precision} \cdot \text{Recall}/(\text{Precision}+\text{Recall})\). The cost function of the proposed algorithm is defined by the accuracy of the classifier, shown by Eq. 7, \(\text{Accuracy} = (TP+TN)/(TP+FP+FN+TN)\), where the sum of TP and FP is the total number of subjects with a positive test and the sum of FN and TN is the total number of subjects with a negative test. AUC (area under the ROC curve) serves as a performance measure employed to assess the efficacy of a binary classification model. It quantifies the model’s capacity to differentiate between positive and negative classes at various thresholds. The ROC curve (receiver operating characteristic curve) is created by plotting the true positive rate (sensitivity) against the false positive rate (1 − specificity) at various threshold settings. The AUC represents the area under this curve, ranging from 0 to 1.
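These metrics correspond one-to-one to scikit-learn calls, as sketched below with placeholder predictions:

```python
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

y_true = [1, 0, 1, 1, 0, 1]              # 1 = cancer, 0 = healthy
y_pred = [1, 0, 1, 0, 0, 1]              # hard class decisions
y_prob = [0.9, 0.2, 0.8, 0.4, 0.1, 0.7]  # cancer probabilities for the AUC

print("accuracy :", accuracy_score(y_true, y_pred))   # Eq. 7
print("precision:", precision_score(y_true, y_pred))  # Eq. 5 (PPV)
print("recall   :", recall_score(y_true, y_pred))     # Eq. 4
print("F1 score :", f1_score(y_true, y_pred))         # Eq. 6
print("AUC      :", roc_auc_score(y_true, y_prob))    # area under the ROC curve
```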
Experimental Setting
We used 940 breast thermal images of 47 patients from the DMR-IR dataset for the training phase and 180 images of the other nine patients for the testing phase. In the training phase, the data were divided into training and validation sets, with 846 images (90%) allocated to training and 94 (10%) to validation. We employed a k-fold cross-validation technique with \(k = 10\), dividing the training set into ten smaller subsets. With each of these k “folds,” the model is trained using k − 1 of the subsets as the training data, and then the resulting model is validated on the remaining portion of the data. The performance metric reported by k-fold cross-validation is the average of the values calculated in each “fold.” This method may be computationally demanding but minimizes data wastage, making it particularly advantageous for problems with limited sample sizes, such as the DMR-IR dataset.
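For reference, a minimal sketch of such a 10-fold loop with scikit-learn is shown below (the feature and label arrays are random stand-ins, and the use of stratified folds is an assumption, since the paper does not state how the folds were formed):

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC

rng = np.random.default_rng(42)
X = rng.normal(size=(940, 100))      # stand-in for the deep feature vectors
y = rng.integers(0, 2, size=940)     # stand-in healthy/cancer labels

skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
fold_scores = []
for train_idx, val_idx in skf.split(X, y):
    clf = SVC(kernel="rbf").fit(X[train_idx], y[train_idx])
    fold_scores.append(clf.score(X[val_idx], y[val_idx]))

# The reported metric is the average over the ten folds.
print("mean CV accuracy:", np.mean(fold_scores))
```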
At the beginning of CNN extractor training, we employ pre-trained weights from the pre-trained CNN trained on the ImageNet dataset. Subsequently, we freeze the network and update the last layer using the Adam optimizer. The model produced during the epoch with the highest accuracy value on the validation set is selected as our final model. The hyperparameters for training the CNN extractors are uniformly set and shown in Table 2. The input shape is set to (224, 224, 3), which indicates that the models process images of size 224 × 224 pixels with three color channels (RGB). A batch size of 8 was chosen, meaning that the model updates its parameters after processing every 8 images. The training was conducted over 50 epochs, allowing the model multiple passes over the entire training dataset to refine its learning. A learning rate of 0.0001 was used, which controls the step size during optimization, ensuring gradual and stable convergence. The Adam optimizer, known for its efficiency and effectiveness in training deep learning models, was employed to minimize the loss function and optimize the model’s performance. In practice, such hyperparameters are optimized through a combination of grid search, random search, or more advanced methods like Bayesian optimization. The final values are selected based on the combination that yields the best performance on the validation dataset, taking into account factors such as accuracy, loss, and training time. In this work, we selected the hyperparameters experimentally based on these factors.
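Concretely, the Table 2 settings translate into the following Keras calls (a sketch continuing the feature-extractor example above; `model` is that 2-class network, the zero-filled arrays stand in for the preprocessed thermograms, and the checkpoint file name is an assumption):

```python
import numpy as np
from tensorflow import keras

# Placeholder data; in practice these are the preprocessed thermograms.
x_train = np.zeros((940, 224, 224, 3), dtype="float32")
y_train = keras.utils.to_categorical(np.zeros(940, dtype=int), num_classes=2)

model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-4),
              loss="categorical_crossentropy", metrics=["accuracy"])

# Keep the weights from the epoch with the best validation accuracy.
ckpt = keras.callbacks.ModelCheckpoint("best_extractor.keras",
                                       monitor="val_accuracy",
                                       save_best_only=True)

model.fit(x_train, y_train,
          validation_split=0.10,    # 10% of the training data for validation
          batch_size=8, epochs=50, callbacks=[ckpt])
```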
Furthermore, all algorithms were developed and trained using the Keras framework with a TensorFlow backend and the scikit-learn library on a PC with a GeForce GTX 1080 Ti GPU.
Results and Findings
Performance of Deep Learning Networks for Breast Cancer Detection
In the initial step, we conducted experiments using pre-trained CNN networks for breast thermal image classification. These networks employ the SoftMax classifier to perform the classification task. The SoftMax classifier of the original pre-trained CNN network is replaced with a SoftMax classifier with two output classes (healthy or cancer) to suit the classification task of breast thermography images. Then, the network is retrained on the DMR-IR dataset to fine-tune the parameters of the new SoftMax classification layer. We trained and tested these networks utilizing images from the DMR-IR dataset described in Section “Dataset and Preprocessing.” Table 3 lists the performance results of the networks on the test set with the same experimental method as presented in Section “Experimental Method.”
Table 3 highlights distinct differences in accuracy, AUC (area under the curve), F1 score, precision, and recall of the various deep learning models. Among the models, InceptionV3 and MobileNetV2 exhibit the highest accuracy at 96.67%, with MobileNetV2 achieving a marginally superior AUC of 99.37% compared to InceptionV3’s 98.71%. Both models also show consistent performance across F1 score, precision, and recall, each at 96.88%. ResNet34 also demonstrates strong performance with a 95% accuracy and an AUC of 98.39%, slightly trailing InceptionV3 and MobileNetV2. Xception, with an accuracy of 91.11% and an AUC of 96.16%, presents a balanced performance but does not reach the levels of the top-performing models. ResNet18 and DenseNet121, while still effective, lag behind with accuracies of 80.56% and 84.44%, respectively. MobileNet, despite its small size (16 MB) and low parameter count (4.3M), shows the least performance with an accuracy of 77.78% and an AUC of 85.29%. Notably, MobileNetV2, with the smallest model size (14 MB) and the fewest parameters (3.5M), achieves the highest overall performance, demonstrating that model efficiency does not necessarily compromise accuracy. This suggests that MobileNetV2 is particularly well-suited for deployment in resource-constrained environments where model size and computational efficiency are critical factors. Overall, the analysis underscores that while larger models like InceptionV3 and ResNet34 perform well, optimized architectures such as MobileNetV2 can deliver comparable or even superior results with significantly lower computational demands.
Accuracy of Models with Pre-Trained CNN Feature Extractors and Different Classifiers
After training the CNN networks on the DMR-IR dataset, we use part of these networks as feature extractors and employ other classifiers (besides SoftMax) to classify breast thermography images. Subsequently, we test these classifiers on a test set consisting of 9 patients with 180 images, separate from the training dataset as described in Section “Dataset and Preprocessing.”
Table 4 compares breast cancer detection accuracy across various models using pre-trained CNN feature extractors with different classifiers. ResNet34 stands out, achieving the highest accuracy of 99.44% with an SVM classifier and consistently high accuracy with others, like Random Forest (96.11%) and KNN (95.56%). InceptionV3 also performs well, especially with Random Forest (97.78%), though it dips slightly with XGBoost (91.11%). MobileNetV2 shows strong results with SVM (97.78%), while DenseNet121 excels with SVM (96.67%) and AdaBoost (96.11%). MobileNet’s performance varies, peaking with AdaBoost (86.67%) but dropping with KNN (75.0%). Xception performs best with KNN (95.56%). ResNet18’s performance peaks with KNN (90.56%) but is lower with SVM and AdaBoost (73.89%). Overall, it can be seen that with different classifiers, each model yields significantly different classification accuracy results. These accuracy results can be either higher or lower than the accuracy of the deep learning model with the SoftMax classification layer. Moreover, the results indicate that the choice of classifier significantly influences the accuracy of breast cancer detection models; boosting classifiers like AdaBoost and XGBoost generally yield lower accuracy here than simpler classifiers like SVM, RF, and KNN.
With each feature extractor, the classifier that gave the highest accuracy consistently improves over the corresponding deep learning model. Figure 6 illustrates the improvement in model accuracy with pre-trained CNN feature extractors and a classifier compared to the deep learning model with a SoftMax classifier. The model with the DenseNet121 feature extractor and SVM classifier (DenseNet121-SVM) achieved the highest accuracy improvement of approximately 12%, while the ResNet18-KNN model ranked second with a 10% improvement. In contrast, the MobileNetV2-SVM and InceptionV3-RF models showed only a marginal gain of about 1%.
Breast Cancer Detection Performance of Classifiers with Features Selected by Filters
The next step involves conducting experiments using MIFS and Chi-square filters to select features to feed into the classifiers. We experimented with the number of selected features being 100, 200, 300, 400, and 500 and chose the number of selected features that yielded the highest accuracy for the classifier. Tables 5 and 6 present the performance of classifiers on the test set with features selected by the MIFS and Chi-square filters, respectively.
The performance evaluation of breast cancer detection classifiers with features selected by the MIFS filter, as presented in Table 5, highlights the significant impact of the combination of feature extractors and classifiers. ResNet34 paired with SVM outperforms all other models, achieving near-perfect metrics across accuracy (99.44%), AUC (99.44%), F1 score (99.45%), precision (99.45%), and recall (99.45%) with 300 features, indicating its superior ability to extract and classify relevant features. Other models such as ResNet18 and Xception, both utilizing KNN, also demonstrate strong performance with 97.22% accuracy. InceptionV3, combined with Random Forest, shows similar accuracy but excels in precision (97.92%), suggesting effective minimization of false positives. MobileNetV2 with SVM also delivers commendable results, particularly with a larger feature set (400 features), achieving 98.33% accuracy. However, MobileNet with AdaBoost significantly underperforms, with only 86.67% accuracy, underscoring the necessity of optimal pairings for robust breast cancer detection.
The evaluation of breast cancer detection using classifiers with features selected by the Chi-square filter, as detailed in Table 6, reveals varying levels of performance across different models. ResNet34 paired with SVM stands out, achieving the highest accuracy (99.62%) and F1 score (99.63%), underscoring its superior feature extraction and classification capabilities with a reduced feature set of 100. ResNet18 with KNN also performs exceptionally well, reaching 98.89% across all metrics, demonstrating robustness even with a larger feature set. MobileNetV2, when paired with SVM, achieves strong results with 98.33% accuracy, although slightly lower than ResNet34. InceptionV3 and DenseNet121 show solid performance with accuracies of 96.67% and 97.78%, respectively, while Xception lags slightly behind at 95.00%. Notably, MobileNet combined with AdaBoost shows the lowest performance, with 87.78% accuracy, indicating that this combination is less effective.
In summary, the highest-performing model is ResNet34-SVM with both filters. When using the MIFS filter, the performance metrics of this model are nearly equal, approximately 99.45%, with 300 selected features. When using the Chi-square filter, the performance metrics are also nearly equal, approximately 99.6%, with 100 selected features.
In addition, we compared the accuracy of the classification model with and without the use of feature filters to observe the effectiveness of the feature filters. Tables 7 and 8 present the corresponding comparison results for MIFS and Chi-square filters.
The comparison of classification model accuracies, both with and without the MIFS filter, as presented in Table 7, reveals the impact of feature selection on performance. ResNet18, when paired with KNN, shows a significant accuracy improvement from 90.56 to 97.22% with the use of 400 selected features, highlighting the effectiveness of MIFS in enhancing model performance. ResNet34 combined with SVM demonstrates outstanding performance with or without the filter, maintaining accuracy above 99%, suggesting that its feature extraction capabilities are robust. MobileNetV2 with SVM also benefits from MIFS, showing a modest increase in accuracy from 97.78 to 98.33%. In contrast, models like MobileNet with AdaBoost and Xception with KNN show minimal improvement or a slight decrease in accuracy, indicating that MIFS may not always provide significant benefits depending on the feature extractor and classifier used.
The comparison of classification model accuracy with and without the application of the Chi-square filter, as presented in Table 8, highlights the varying impact of feature selection on model accuracy. ResNet34 paired with SVM shows minimal improvement (0.18%) in accuracy when using 100 features selected by Chi-square, maintaining a high performance of 99.62%. ResNet18 with KNN sees a significant accuracy boost of 8.33% with 400 selected features, indicating the filter’s effectiveness in reducing dimensionality. Conversely, Xception and InceptionV3 paired with KNN exhibit slight decreases in accuracy when using selected features, with drops of 0.56% and 0.55%, respectively, suggesting that feature selection may not always be beneficial. MobileNet, combined with AdaBoost, and DenseNet121 with SVM both experience moderate accuracy gains of 1.11% using the Chi-square filter, while MobileNetV2 with SVM also benefits, with a 0.55% increase.
It can be observed that, overall, the feature filters consistently improve the accuracy of the classification models. Most notably, the ResNet18-KNN model showed the highest accuracy improvement, with an increase of 8.33% using the Chi-square filter and 6.66% using the MIFS filter.
Summary of Classification Models’ Accuracy with Various Pre-Trained CNN Feature Extractors, Feature Filters, and Classifiers
This study proposes a method for breast cancer detection based on thermal breast images using an image classification model consisting of a pre-trained CNN feature extractor, feature filter, and classifier components. We trained and tested the models with diverse combinations of pre-trained CNN feature extractors, feature filters, and classifiers. The analysis of classification models using various pretrained CNN feature extractors, classifiers, and feature selection methods reveals key insights into performance optimization.
Table 9 provides a summary of the accuracies achieved by the implemented models. ResNet18, despite being smaller in size (43 MB) with 11.9M parameters, achieves significant accuracy improvements through feature selection. Initially, it records 80.56% accuracy with the SoftMax classifier (SM) but reaches 98.89% with 400 features selected by the Chi-square filter and the KNN classifier, indicating the effectiveness of feature reduction. ResNet34, with 21.32M parameters, starts at 95% accuracy and achieves a near-perfect result of 99.62% with 100 features selected by the Chi-square filter and SVM, demonstrating robustness even with reduced features. The Xception and InceptionV3 models, although large with over 22M parameters, show consistent performance, with Xception achieving 97.22% accuracy using 200 features filtered by MIFS and KNN, while InceptionV3 maintains 97.22% with Random Forest across different feature sets. MobileNet and MobileNetV2, smaller models with fewer parameters (4.3M and 3.5M, respectively), also benefit from feature selection, particularly MobileNetV2, which achieves 98.33% accuracy with both 400 and 300 features filtered by MIFS and Chi-square, respectively. DenseNet121, though relatively compact, shows a marked increase in accuracy to 97.78% with 400 features selected by Chi-square and SVM, highlighting the importance of feature selection in enhancing model performance.
Overall, the model with the ResNet34 feature extractor, Chi-square filter, and SVM classifier achieved the highest accuracy of 99.62%. The model combining the ResNet18 feature extractor, Chi-square filter, and KNN classifier and the model combining the MobileNetV2 feature extractor, Chi-square/MIFS filter, and SVM classifier both yield quite good results, with corresponding accuracies of 98.89% and 98.33%. Moreover, they are lightweight models with small parameter counts and memory requirements. Therefore, these models demonstrate high competitiveness. Figure 7 presents the confusion matrices and ROC curves of the highest-accuracy models of the proposed method.
Furthermore, to assess the effectiveness of the proposed method, we compare the accuracy of the enhanced models, which use alternative classifiers and the Chi-square/MIFS filters, with that of the regular CNN networks with the SoftMax classifier. Figure 8 illustrates the accuracy improvement of the proposed models compared to the regular CNN networks. The figure highlights the effectiveness of using classifiers like KNN, SVM, and AdaBoost, as well as the impact of applying the MIFS and Chi-square filters. Among the models, ResNet18 paired with KNN shows the most significant improvement, with an 18.33% increase in accuracy when the Chi-square filter is applied, followed closely by a 16.66% increase with the MIFS filter. This underscores the importance of feature selection in boosting model performance. DenseNet121-SVM also exhibits notable gains, achieving a 13.34% accuracy improvement with the Chi-square filter, indicating its potential for refining model predictions through feature selection. In contrast, InceptionV3-RF demonstrates minimal change, with only a 0.55% increase in accuracy regardless of the filter applied, suggesting that this model might be less sensitive to feature selection. MobileNet and MobileNetV2 show moderate improvements across different classifiers and filters, with MobileNet-AdaBoost achieving a 10% accuracy increase using the Chi-square filter. This finding highlights the potential of combining lightweight models with advanced classifiers and feature selection to enhance predictive accuracy. Overall, the analysis suggests that applying alternative classifiers and feature selection techniques can significantly improve the accuracy of deep learning models, particularly for certain model-classifier combinations like ResNet18-KNN (an 18.33% rise with the Chi-square filter) and DenseNet121-SVM (a 13.34% improvement with the Chi-square filter). However, the degree of improvement varies, emphasizing the need to tailor these techniques to the specific model and task at hand.
Comparative Discussion
In this section, we present the breast cancer detection results from existing methods. We have chosen studies that used the same DMR-IR dataset as we did. Due to the absence of evaluation scenarios facilitating an equitable comparison between methods and the unavailability of code for the methods mentioned, the comparison was conducted based on the performance metrics documented in the literature. The results are presented in Table 10, which shows the accuracy comparison between methods. It can be observed that the accuracy of breast cancer detection in current studies is quite high. Some studies achieved approximately 100% accuracy, including studies [8, 29, 41] and our study. The breast cancer detection methods in studies [29, 41] both involve two phases, breast region segmentation and classification, with the classification step carried out using deep learning networks. Study [8], on the other hand, utilizes the VGG16 network combined with the Dragonfly Algorithm, a meta-heuristic algorithm, for feature selection. In contrast, our study proposes a method that involves only one phase, image classification, with a classification model comprising a feature extractor using a lightweight CNN, a feature filter, and an SVM classifier for breast cancer detection. This makes our model more compact, requiring fewer computational resources and achieving faster execution times. These advantages are crucial in the context of a CAD system supporting breast cancer detection in infrared imaging processes.
Conclusion
As mentioned earlier, breast cancer ranks as the second leading cause of death in women. However, it is highly manageable when detected in its initial phases. Consequently, timely diagnosis plays a crucial role. Thermography is a good alternative and supplementary method to the gold-standard breast cancer detection methods. The advantages of thermography include being a non-invasive, radiation-free, cost-effective technique with a simple imaging system. This paper proposes a hybrid model approach for identifying breast cancer in breast thermography images. First, features were extracted from the images using a pre-trained CNN, with the feature extractor trained using the transfer learning method. Then, to select the optimal feature subset, we applied feature filters that use statistical scores, including Chi-square and mutual information, to score and rank the features based on their relationship with the output variable. Finally, a classifier automatically categorizes the data into one of two classes: healthy or cancer. The proposed model was trained and tested using the DMR-IR dataset, a publicly available dataset of breast cancer thermography images for diagnostic purposes. The test results on an independent test set, separate from the training set, indicate that the model achieves high accuracy, approaching 100%. Furthermore, the proposed model is a lightweight model with a small parameter count and minimal memory requirements. Therefore, the proposed method can be utilized as a CAD tool in the diagnosis of breast cancer in women.
The main limitation of this study is that we tested the proposed method on only one publicly available dataset of breast cancer thermography images. Therefore, in future work, we aim to address the limitations associated with the DMR-IR database, particularly its age, geographic specificity, and lack of representation of diverse breast properties, including those of the Asian population and male breasts. To enhance the generalizability and accuracy of our models, we plan to incorporate more recent and diverse datasets, and further testing will be conducted on different datasets, not limited to thermal images, but potentially including mammographic images and ultrasound images of the breast. Additionally, we will focus on integrating evaluations of thermo-physical properties, which are crucial for improving the effectiveness of machine learning models in breast cancer detection through thermal imaging.
Data Availability
The data supporting the findings of this study are openly available in the Database for Mastology Research with Infrared Image at https://visual.ic.uff.br/dmi/
References
Sung, H., Ferlay, J., Siegel, R.L., Laversanne, M., Soerjomataram, I., Jemal, A., Bray, F.: Global cancer statistics 2020: Globocan estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA: a cancer journal for clinicians. 71(3), 209–249 (2021)
Zuluaga-Gomez, J., Zerhouni, N., Al Masry, Z., Devalland, C., Varnier, C.: A survey of breast cancer screening techniques: thermography and electrical impedance tomography. Journal of medical engineering & technology. 43(5), 305–322 (2019)
Mashekova, A., Zhao, Y., Ng, E.Y., Zarikas, V., Fok, S.C., Mukhmetov, O.: Early detection of the breast cancer using infrared technology–a comprehensive review. Thermal science and engineering progress. 27, 101142 (2022)
Chaves, E., Gonçalves, C.B., Albertini, M.K., Lee, S., Jeon, G., Fernandes, H.C.: Evaluation of transfer learning of pre-trained cnns applied to breast cancer detection on infrared images. Applied optics. 59(17), 23–28 (2020)
Kiymet, S., Aslankaya, M.Y., Taskiran, M., Bolat, B.: Breast cancer detection from thermography based on deep neural networks. In: 2019 Innovations in Intelligent Systems and Applications Conference (ASYU), pp. 1–5 (2019). IEEE
Cabıoğlu, Ç., Oğul, H.: Computer-aided breast cancer diagnosis from thermal images using transfer learning. In: Bioinformatics and Biomedical Engineering: 8th International Work-Conference, IWBBIO 2020, Granada, Spain, May 6–8, 2020, Proceedings 8, pp. 716–726 (2020). Springer
Gonçalves, C.B., Souza, J.R., Fernandes, H.: Cnn architecture optimization using bio-inspired algorithms for breast cancer detection in infrared images. Computers in Biology and Medicine. 142, 105205 (2022)
Chatterjee, S., Biswas, S., Majee, A., Sen, S., Oliva, D., Sarkar, R.: Breast cancer detection from thermal images using a grunwald-letnikov-aided dragonfly algorithm-based deep feature selection method. Computers in biology and medicine. 141, 105027 (2022)
Aidossov, N., Zarikas, V., Zhao, Y., Mashekova, A., Ng, E.Y.K., Mukhmetov, O., Mirasbekov, Y., Omirbayev, A.: An integrated intelligent system for breast cancer detection at early stages using ir images and machine learning methods with explainability. SN Computer Science. 4(2), 184 (2023)
Tsietso, D., Yahya, A., Samikannu, R.: A review on thermal imaging-based breast cancer detection using deep learning. Mobile Information Systems. 2022, 1–19 (2022)
He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016)
Chollet, F.: Xception: Deep learning with depthwise separable convolutions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1251–1258 (2017)
Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–9 (2015)
Howard, A.G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., Andreetto, M., Adam, H.: Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861. (2017)
Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017)
Hearst, M.A., Dumais, S.T., Osuna, E., Platt, J., Scholkopf, B.: Support vector machines. IEEE Intelligent Systems and their applications. 13(4), 18–28 (1998)
Biau, G., Scornet, E.: A random forest guided tour. Test. 25, 197–227 (2016)
Freund, Y., Schapire, R.E.: A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 55(1), 119–139 (1997)
Chen, T., He, T., Benesty, M., Khotilovich, V., Tang, Y., Cho, H., Chen, K., Mitchell, R., Cano, I., Zhou, T., et al.: Xgboost: extreme gradient boosting. R package version 0.4-2. 1(4), 1–4 (2015)
Witten, I.H., Frank, E., Hall, M.A., Pal, C.: Data mining: Practical machine learning tools and techniques. Morgan Kaufmann, San Francisco (2005)
Hoque, N., Bhattacharyya, D.K., Kalita, J.K.: Mifs-nd: A mutual information-based feature selection method. Expert Systems with Applications. 41(14), 6371–6385 (2014)
Yahara, T., Koga, T., Yoshida, S., Nakagawa, S., Deguchi, H., Shirouzu, K.: Relationship between microvessel density and thermographic hot areas in breast cancer. Surgery today. 33, 243–248 (2003)
Garduño-Ramón, M.A., Vega-Mancilla, S.G., Morales-Henández, L.A., Osornio-Rios, R.A.: Supportive noninvasive tool for the diagnosis of breast cancer using a thermographic camera as sensor. Sensors. 17(3), 497 (2017)
Pramanik, S., Bhattacharjee, D., Nasipuri, M.: Texture analysis of breast thermogram for differentiation of malignant and benign breast. In: 2016 International Conference on Advances in Computing, Communications and Informatics (ICACCI), pp. 8–14 (2016). IEEE
Mohamed, N.A.E.-R.: Breast cancer risk detection using digital infrared thermal images. (2015)
Silva, L., Saade, D., Sequeiros, G., Silva, A., Paiva, A., Bravo, R., Conci, A.: A new database for breast research with infrared image. Journal of Medical Imaging and Health Informatics. 4(1), 92–100 (2014)
Husaini, M.A.S.A., Habaebi, M.H., Hameed, S.A., Islam, M.R., Gunawan, T.S.: A systematic review of breast cancer detection using thermography and neural networks. IEEE Access. 8, 208922–208937 (2020). https://doi.org/10.1109/ACCESS.2020.3038817
Zuluaga-Gomez, J., Al Masry, Z., Benaggoune, K., Meraghni, S., Zerhouni, N.: A cnn-based methodology for breast cancer diagnosis using thermal images. Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization. 9(2), 131–145 (2021)
Roslidar, R., Syaryadhi, M., Saddami, K., Pradhan, B., Arnia, F., Syukri, M., Munadi, K., Roslidar, R., Syaryadhi, M., Saddami, K., et al.: Breacnet: A high-accuracy breast thermogram classifier based on mobile convolutional neural network. Math. Biosci. Eng. 19(2), 1304–1331 (2022)
Angayarkanni, S.P.: Hybrid convolution neural network in classification of cancer in histopathology images. Journal of Digital Imaging. 35(2), 248–257 (2022)
Al Husaini, M.A.S., Habaebi, M.H., Gunawan, T.S., Islam, M.R., Elsheikh, E.A., Suliman, F.: Thermal-based early breast cancer detection using inception v3, inception v4 and modified inception mv4. Neural Computing and Applications. 34(1), 333–348 (2022)
Alshehri, A., AlSaeed, D.: Breast cancer detection in thermography using convolutional neural networks (cnns) with deep attention mechanisms. Applied Sciences. 12(24), 12922 (2022)
Morovati, B., Lashgari, R., Hajihasani, M., Shabani, H.: Reduced deep convolutional activation features (r-decaf) in histopathology images to improve the classification performance for breast cancer diagnosis. Journal of Digital Imaging. 36(6), 2602–2612 (2023)
Iyadurai, J., Chandrasekharan, M., Muthusamy, S., Panchal, H.: An extensive review on emerging advancements in thermography and convolutional neural networks for breast cancer detection. Wireless Personal Communications, 1–25 (2024)
Albawi, S., Mohammed, T.A., Al-Zawi, S.: Understanding of a convolutional neural network. In: 2017 International Conference on Engineering and Technology (ICET), pp. 1–6 (2017). IEEE
Iakubovskii, P.: Classification models Zoo - Keras. https://github.com/qubvel/classification_models. [Online; accessed 30-October-2023]
Patel, A.: Feature Engineering and Feature Selection. https://github.com/ashishpatel26/Amazing-Feature-Engineering. [Online; accessed 30-October-2023]
Witten, I.H., Frank, E., Hall, M.A., Pal, C.J.: Data mining: Practical machine learning tools and techniques, Fourth edition. Elsevier (2017)
Biswal, A.: What is a Chi-Square Test? Formula, Examples and Application. https://www.simplilearn.com/tutorials/statistics-tutorial/chi-square-test. [Online; accessed 30-October-2023]
Powers, D.M.: Evaluation: from precision, recall and f-measure to roc, informedness, markedness and correlation. arXiv preprint arXiv:2010.16061. (2020)
Tello-Mijares, S., Woo, F., Flores, F., et al.: Breast cancer identification via thermography image segmentation with a gradient vector flow and a convolutional neural network. Journal of healthcare engineering. 2019 (2019)
Funding
The authors declare that no funds, grants, or other support were received during the preparation of this manuscript.
Author information
Authors and Affiliations
Contributions
All authors contributed to the study conception and design. Material preparation, data collection, and analysis were performed by Thanh Nguyen Chi and Tu Doan Quang. The first draft of the manuscript was written by Hong Le Thi Thu, and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript.
Corresponding author
Ethics declarations
Ethics Approval
This article uses a public dataset, so we, the authors, confirm that no ethical approval is required.
Consent to Participate
No consent is required.
Consent for Publication
No consent is required.
Conflict of Interest
The authors declare no competing interests.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
Cite this article
Nguyen Chi, T., Le Thi Thu, H., Doan Quang, T., et al.: A lightweight method for breast cancer detection using thermography images with optimized CNN feature and efficient classification. J. Imaging Inform. Med. (2024). https://doi.org/10.1007/s10278-024-01269-6