SVTR model for Kazakh Handwritten Text Recognition
DOI: https://doi.org/10.47344/sdubnts.v65i2.1183
Keywords: optical character recognition, handwritten text recognition, deep learning, KOHTD, SVTR
Abstract
Handwritten Text Recognition (HTR) plays a crucial role in transforming historical and contemporary handwritten documents into digital formats, facilitating easier access, searchability, and analysis. The SVTR model, known for its state-of-the-art performance in scene text recognition (STR), stands out for its minimal resource use and fast inference. In this study, we apply the SVTR model to the Kazakh Offline Handwritten Text Dataset (KOHTD) to assess its capability in handwritten text recognition. Achieving a Character Error Rate (CER) of 4.59% and a Word Error Rate (WER) of 20%, our research establishes new accuracy benchmarks for KOHTD. The findings underscore the SVTR model's effectiveness in recognizing handwritten text.
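For context on the reported metrics, CER and WER are conventionally computed as an edit (Levenshtein) distance between the model's prediction and the reference transcription, normalized by the reference length, at the character and word level respectively. The snippet below is a minimal Python sketch of that standard computation; it is not taken from the paper or from any KOHTD tooling, and the sample strings and function names are illustrative assumptions.

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two sequences (single-row DP)."""
    dp = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        prev, dp[0] = dp[0], i
        for j, h in enumerate(hyp, start=1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,         # deletion
                        dp[j - 1] + 1,     # insertion
                        prev + (r != h))   # substitution or match
            prev = cur
    return dp[-1]

def cer(reference: str, hypothesis: str) -> float:
    """Character Error Rate: char-level edit distance / reference length."""
    return edit_distance(list(reference), list(hypothesis)) / max(len(reference), 1)

def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: word-level edit distance / number of reference words."""
    ref_words, hyp_words = reference.split(), hypothesis.split()
    return edit_distance(ref_words, hyp_words) / max(len(ref_words), 1)

if __name__ == "__main__":
    ref = "қазақ тілі"   # hypothetical reference transcription
    hyp = "казақ тілі"   # hypothetical prediction with one character error
    print(f"CER = {cer(ref, hyp):.2%}, WER = {wer(ref, hyp):.2%}")
```

On the hypothetical pair above, a single substituted character yields a CER of 10% (1 error over 10 characters) but a WER of 50% (1 erroneous word out of 2), which illustrates why the paper's WER (20%) is substantially higher than its CER (4.59%).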