Enhancing Image Classification Performance through Hybrid Self-Supervised Learning Strategies

Siddalingappa, Rashmi ORCID logoORCID: https://orcid.org/0000-0001-9786-8436 and S, Deepa (2025) Enhancing Image Classification Performance through Hybrid Self-Supervised Learning Strategies. International Journal of Electronics and Communication Engineering, 12 (7). pp. 90-101.

Rashmi_Deepa.pdf - Published Version
Available under License Creative Commons Attribution Non-commercial No Derivatives.

Abstract

Image classification is a cornerstone of computer vision, with applications spanning healthcare, autonomous driving, and security. The dependence of supervised learning on large labeled datasets poses significant challenges, particularly in specialized fields where labeled data is scarce and expensive to obtain. Self-supervised learning (SSL) has emerged as a promising paradigm, enabling models to learn useful representations from unlabeled data through pretext tasks that generate pseudo-labels. However, SSL faces limitations in handling complex data distributions and achieving robust generalization. This paper explores hybrid self-supervised learning strategies that combine multiple SSL techniques, such as contrastive learning, masked image modeling, and clustering, to enhance image classification performance and reduce dependence on labeled data. The study proposes a comprehensive framework that integrates data augmentation, feature extraction, and hybrid learning mechanisms, evaluated on the CIFAR-100 dataset. The experimental results demonstrate that hybrid SSL approaches achieve significant performance improvements: the combination of SimCLR and masked image modeling (MAE) achieves a Top-1 accuracy of 77.8% on the clean test set and 71.4% on the domain-shifted set, while self-distillation with contrastive learning (DINO) achieves the highest Top-1 accuracy of 78.4% on the clean test set and 72.1% on the domain-shifted set. Advanced data augmentation techniques, such as CutMix and RandAugment, further enhance model robustness, with SwAV (contrastive clustering) achieving 76.5% Top-1 accuracy on the clean test set and 70.1% on the domain-shifted set. These findings highlight the effectiveness of hybrid SSL methods in addressing the challenge of limited labeled data, offering valuable insights for future research and applications in image classification.
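The hybrid objective the abstract describes — combining a contrastive term with masked image modeling — can be sketched as a weighted sum of two standard losses. The following is a minimal PyTorch sketch, not the paper's exact formulation: the function names, the weighting parameter `lam`, and the temperature value are illustrative assumptions; it pairs a SimCLR-style NT-Xent loss over two augmented views with an MAE-style mean-squared reconstruction loss computed only on masked patches.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """SimCLR-style NT-Xent contrastive loss between two augmented views."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)          # (2N, d) stacked embeddings
    sim = z @ z.t() / temperature           # scaled cosine similarities
    n = z1.size(0)
    sim.fill_diagonal_(float("-inf"))       # exclude self-similarity
    # The positive for sample i is its other view, offset by n.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

def masked_recon_loss(pred, target, mask):
    """MAE-style reconstruction loss averaged over masked patches only."""
    per_patch = ((pred - target) ** 2).mean(dim=-1)   # (N, num_patches)
    return (per_patch * mask).sum() / mask.sum().clamp(min=1)

def hybrid_ssl_loss(z1, z2, pred, target, mask, lam=0.5):
    """Weighted combination of the contrastive and reconstruction terms."""
    return lam * nt_xent_loss(z1, z2) + (1 - lam) * masked_recon_loss(pred, target, mask)
```

In practice `z1` and `z2` would come from a projection head over two augmentations of the same batch, while `pred` and `target` would be the decoder's patch reconstructions and the original patches; `lam` balances the two pretext signals.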

Item Type: Article
Status: Published
DOI: 10.14445/23488549/IJECE-V12I7P108
Subjects: Q Science > Q Science (General) > Q325 Machine learning
School/Department: London Campus
URI: https://ray.yorksj.ac.uk/id/eprint/13305