Hybrid Otsu-Morphological Pre-processing for EfficientNetB4-Based Acute Lymphoblastic Leukemia Classification
Abstract
Image quality plays a crucial role in improving the performance of image-based classification models, particularly when raw images exhibit noise, uneven illumination, and unclear object boundaries. This study proposes a hybrid segmentation approach to enhance object separation by reducing background interference and refining object contours. The method combines Otsu thresholding for initial object–background separation with elliptical morphological operations to improve region consistency and boundary definition.
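The segmentation stage described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: Otsu's threshold is computed from the grayscale histogram, and the resulting binary mask is refined with morphological opening and closing using a circular (elliptical) structuring element. The function names (`otsu_threshold`, `elliptical_se`, `segment`) and the structuring-element size are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage


def otsu_threshold(gray: np.ndarray) -> int:
    """Pick the threshold that maximizes between-class variance (Otsu's method)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = hist.sum()
    cum_w = np.cumsum(hist)                      # class-0 weight up to each t
    cum_mu = np.cumsum(hist * np.arange(256))    # class-0 first moment up to each t
    mu_total = cum_mu[-1]
    w0, w1 = cum_w, total - cum_w
    valid = (w0 > 0) & (w1 > 0)
    mu0 = np.where(valid, cum_mu / np.maximum(w0, 1), 0.0)
    mu1 = np.where(valid, (mu_total - cum_mu) / np.maximum(w1, 1), 0.0)
    between = np.where(valid, w0 * w1 * (mu0 - mu1) ** 2, -1.0)
    return int(np.argmax(between))


def elliptical_se(size: int) -> np.ndarray:
    """Boolean circular structuring element of given odd size (illustrative)."""
    r = size // 2
    y, x = np.ogrid[-r:r + 1, -r:r + 1]
    return (x ** 2 + y ** 2) <= r ** 2


def segment(gray: np.ndarray, se_size: int = 5) -> np.ndarray:
    """Otsu mask refined by morphological opening (removes specks)
    and closing (fills small holes); background is zeroed out."""
    mask = gray > otsu_threshold(gray)
    se = elliptical_se(se_size)
    mask = ndimage.binary_opening(mask, structure=se)
    mask = ndimage.binary_closing(mask, structure=se)
    return np.where(mask, gray, 0)


# Synthetic example: a bright disk (the "cell") on a dark, noisy background.
rng = np.random.default_rng(0)
img = rng.integers(0, 40, (64, 64)).astype(np.uint8)
yy, xx = np.ogrid[:64, :64]
img[(xx - 32) ** 2 + (yy - 32) ** 2 <= 15 ** 2] = 200
out = segment(img)
```

On the synthetic image, Otsu separates the bimodal histogram cleanly, so the disk survives the morphological refinement while the noisy background is suppressed to zero.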
The segmented grayscale images are replicated into three channels and resized to 224×224 pixels before being fed to an EfficientNetB4-based classification model trained with the AdamW optimizer and fine-tuning. Under identical data splits, training settings, and fine-tuning protocols, the proposed segmentation-based method achieves a final test accuracy of 97%, outperforming a baseline trained on raw images with the same EfficientNetB4-AdamW configuration (95% test accuracy). These results demonstrate that incorporating segmentation in the preprocessing stage enhances discriminative feature learning and improves overall classification performance.
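The input-preparation step (channel replication and resizing to 224×224) can be sketched as below. This is a simplified illustration using nearest-neighbor resampling; the function name `to_model_input` is an assumption, and a production pipeline would typically use an image library's bilinear interpolation instead. The resulting (224, 224, 3) array matches the input shape expected by an EfficientNetB4 backbone, e.g. `tf.keras.applications.EfficientNetB4` compiled with `tf.keras.optimizers.AdamW` in recent TensorFlow versions.

```python
import numpy as np


def to_model_input(seg_gray: np.ndarray, size: int = 224) -> np.ndarray:
    """Replicate a segmented grayscale image into three identical channels
    and resize it to (size, size) with nearest-neighbor sampling."""
    h, w = seg_gray.shape
    rows = np.arange(size) * h // size   # source row index for each output row
    cols = np.arange(size) * w // size   # source column index for each output column
    resized = seg_gray[rows][:, cols]    # (size, size)
    return np.stack([resized] * 3, axis=-1)  # (size, size, 3)


# Example: a 64x64 segmented grayscale image becomes a 224x224x3 tensor.
seg = np.arange(64 * 64, dtype=np.uint8).reshape(64, 64)
x = to_model_input(seg)
```

Because the three channels are copies of the same grayscale plane, the pretrained RGB convolution filters of EfficientNetB4 can be reused without architectural changes.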
DOI: https://doi.org/10.18860/cauchy.v11i1.40730
Copyright (c) 2026 Maretta Mia Audina, Sugiyarto Surono, Aris Thobirin, Goh Khang Wen

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
Editorial Office
Mathematics Department,
Universitas Islam Negeri Maulana Malik Ibrahim Malang
Gajayana Street 50 Malang, East Java, Indonesia 65144
Facsimile (+62) 341 558933
e-mail: cauchy@uin-malang.ac.id

CAUCHY: Jurnal Matematika Murni dan Aplikasi is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.