Osagie, Efosa ORCID: https://orcid.org/0009-0004-3462-7175, Ji, Wei and Helian, Na (2024) Burnt-in Text Recognition from Medical Imaging Modalities: Existing Machine Learning Practices. Journal of Advanced Computational Intelligence and Intelligent Informatics.
Text: Fujipress_JACIII-28-1-12.pdf - Published Version. Available under License Creative Commons Attribution No Derivatives.
Abstract
In recent times, medical imaging has become a significant component of clinical diagnosis and examination for detecting and evaluating various medical conditions. The interpretation of these examinations and the patient's demographics are usually textual data, which is burned into the pixel content of medical image modalities (MIM). Examples of these MIM include ultrasound and X-ray imaging. As artificial intelligence advances in medical applications, there is growing demand for access to this burned-in textual data for various needs. This paper reviews the significance of burned-in textual data recognition in MIM and recent research on machine learning approaches, together with the challenges and open issues that warrant further investigation. The paper identifies the principal problems in this area as the low resolution of the textual data and interference from the image background. Finally, the paper suggests applying more advanced deep learning ensemble algorithms as possible solutions.
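To make the problem concrete, the sketch below shows one common baseline for recognising burned-in text: enlarge the frame to offset low resolution, threshold away the anatomical background, and pass the result to an off-the-shelf OCR engine. This is an illustrative example only, not the method reviewed in the paper; the use of OpenCV and pytesseract, the threshold value, and the file name "ultrasound_frame.png" are all assumptions for the sake of the sketch.

```python
# Illustrative baseline (not the paper's method): isolate burned-in text
# from a medical image frame and recognise it with an off-the-shelf OCR engine.
# Assumes OpenCV (cv2) and pytesseract are installed; the input file name is hypothetical.
import cv2
import pytesseract

# Load the frame as grayscale; burned-in annotations are typically rendered
# as bright, high-contrast pixels over the darker anatomical background.
image = cv2.imread("ultrasound_frame.png", cv2.IMREAD_GRAYSCALE)

# Upscale to mitigate the low-resolution problem noted in the abstract.
image = cv2.resize(image, None, fx=2, fy=2, interpolation=cv2.INTER_CUBIC)

# Simple global threshold to suppress background interference so that mostly
# the bright text pixels remain (the deep learning detectors surveyed in the
# paper would replace this hand-tuned step).
_, binary = cv2.threshold(image, 200, 255, cv2.THRESH_BINARY)

# Recognise the remaining text regions with Tesseract.
text = pytesseract.image_to_string(binary)
print(text)
```

A fixed threshold like this is exactly where such baselines break down on noisy, low-resolution frames, which is why the paper points toward deep learning ensemble approaches instead.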
| Item Type: | Article |
|---|---|
| Status: | Published |
| DOI: | 10.20965/jaciii.2024.p0103 |
| Subjects: | Q Science > QA Mathematics > QA75 Electronic computers. Computer science; Q Science > QA Mathematics > QA76 Computer software; Q Science > QA Mathematics > QA76.9.H85 Human-Computer Interaction; Virtual Reality; Mixed Reality; Augmented Reality; Extended Reality |
| School/Department: | School of Science, Technology and Health |
| URI: | https://ray.yorksj.ac.uk/id/eprint/12902 |