ResNet-50 and ResNeXt-50 for Multiclass Classification of Chronic Wound Images under Gaussian Blur

Reynaldi Andhika, Sugiyarto Surono, Aris Thobirin

Abstract


Chronic wound image classification is important for supporting the assessment of conditions such as diabetic foot ulcers (DFU) and pressure ulcers (PU). While convolutional neural network (CNN)-based approaches have shown promising results, most previous studies have focused on binary classification and rarely evaluate robustness in multiclass chronic wound scenarios. This study investigates multiclass classification of chronic wound images, distinguishing DFU, PU, and Normal Skin, using ResNet-50 and ResNeXt-50 architectures. A total of 2,146 publicly available images were stratified at the image level into training (70%), validation (15%), and test (15%) sets. Both models were trained under an identical configuration using data augmentation and a class-weighted loss. On clean test images, ResNet-50 and ResNeXt-50 achieved strong and comparable performance, with accuracies of 0.9877 and 0.9938 and macro-averaged F1-scores of 0.9866 and 0.9928, respectively. Robustness was evaluated by applying Gaussian blur at the inference stage to simulate image defocus. Under stronger blur (σ = 2.0), ResNeXt-50 maintained higher performance (accuracy 0.9723, macro-F1 0.9679) than ResNet-50 (accuracy 0.9200, macro-F1 0.9123). These results highlight the contribution of this study in evaluating blur robustness for multiclass chronic wound image classification, while emphasizing that the robustness demonstrated here is limited to image blur or defocus.
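
To make the evaluation protocol concrete, the sketch below is a minimal PyTorch/torchvision illustration under stated assumptions, not the authors' published code. It shows how ResNet-50 and ResNeXt-50 can be adapted to the three classes, trained with class-weighted cross-entropy, and tested with Gaussian blur applied only at inference. The class weights, input size, blur kernel size, and the helper names build_model and evaluate are assumptions; only the class set and σ = 2.0 are taken from the abstract.

```python
# Minimal sketch (assumptions noted in comments); requires torch and torchvision.
import torch
import torch.nn as nn
from torchvision import models, transforms

NUM_CLASSES = 3  # DFU, PU, Normal Skin

def build_model(name: str) -> nn.Module:
    """Adapt an ImageNet-pretrained backbone to the three wound classes."""
    if name == "resnet50":
        m = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    elif name == "resnext50":
        m = models.resnext50_32x4d(
            weights=models.ResNeXt50_32X4D_Weights.IMAGENET1K_V1)
    else:
        raise ValueError(name)
    m.fc = nn.Linear(m.fc.in_features, NUM_CLASSES)  # replace the classifier head
    return m

# Class-weighted cross-entropy; these weights are placeholders, e.g. inverse
# class frequencies estimated from the training split.
class_weights = torch.tensor([1.0, 1.4, 0.8])
criterion = nn.CrossEntropyLoss(weight=class_weights)

# Shared preprocessing (224x224 input and ImageNet normalization are assumptions).
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])
clean_tf = transforms.Compose([
    transforms.Resize((224, 224)), transforms.ToTensor(), normalize])

# Robustness test: Gaussian blur applied only at inference. sigma=2.0 matches
# the strongest blur level reported in the abstract; the kernel size is assumed.
blur_tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.GaussianBlur(kernel_size=9, sigma=2.0),
    transforms.ToTensor(), normalize])

@torch.no_grad()
def evaluate(model: nn.Module, loader) -> float:
    """Top-1 accuracy on a DataLoader built with clean_tf or blur_tf."""
    model.eval()
    correct = total = 0
    for images, labels in loader:
        preds = model(images).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return correct / total
```

Evaluating the same trained model on a test loader built with clean_tf and then with blur_tf reproduces the clean-versus-blurred comparison summarized above; macro-F1 can be computed analogously from the per-class predictions.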


Keywords


deep learning; convolutional neural network; medical image classification; diabetic foot ulcer; pressure ulcer; robustness evaluation






DOI: https://doi.org/10.18860/cauchy.v11i1.40323
