TexTAR – Textual Attribute Recognition in Multi-domain and Multi-lingual Document Images

Accepted at ICDAR 2025 (ORAL)

Rohan Kumar · Jyothi Swaroopa Jinka · Ravi Kiran Sarvadevabhatla
International Institute of Information Technology Hyderabad

Abstract

Recognising textual attributes such as bold, italic, underline and strikeout is essential for understanding text semantics, structure and visual presentation. Existing methods struggle with computational efficiency or adaptability in noisy, multilingual settings. To address this, we introduce TexTAR, a multi-task, context-aware Transformer for Textual Attribute Recognition (TAR). Our data-selection pipeline enhances context awareness and our architecture employs a 2-D RoPE mechanism to incorporate spatial context for more accurate predictions. We also present MMTAD, a diverse multilingual dataset annotated with text attributes across real-world documents. TexTAR achieves state-of-the-art performance in extensive evaluations.
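The 2-D RoPE mechanism mentioned above is not detailed on this page. As an illustration only, the sketch below shows one common way to extend rotary position embeddings to two dimensions: split the feature vector in half and rotate one half by the row position and the other by the column position. The function names, the `base` constant, and the half-split layout are our assumptions for this sketch, not TexTAR's actual implementation.

```python
import math

def rope_1d(vec, pos, base=10000.0):
    """Standard 1-D RoPE: rotate consecutive feature pairs by
    position-dependent angles. Rotation preserves the vector norm."""
    out = []
    for i in range(0, len(vec), 2):
        theta = pos / (base ** (i / len(vec)))
        c, s = math.cos(theta), math.sin(theta)
        x, y = vec[i], vec[i + 1]
        out.extend([x * c - y * s, x * s + y * c])
    return out

def rope_2d(vec, row, col):
    """Illustrative 2-D RoPE: the first half of the features encodes
    the row (vertical) position, the second half the column position."""
    half = len(vec) // 2
    return rope_1d(vec[:half], row) + rope_1d(vec[half:], col)
```

At position (0, 0) all rotation angles are zero, so the embedding is left unchanged; at any other position the features are rotated but their norm is preserved, which is the property that makes RoPE encode *relative* spatial offsets in attention scores.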

Textual Attributes in the Dataset

Figure – sample crops from the dataset with their T1/T2 attribute labels, e.g. underline (Spanish), underline + bold (Punjabi), underline + strikeout (Telugu), bold + italic, italic + underline, and bold + strikeout.

Chart – distribution of annotated attributes in our dataset.

Data-selection Pipeline

Figure – overview of the data-selection pipeline.

Model Architecture

Figure – overview of the model architecture.

Comparison with State-of-the-Art Approaches

Method                 normal  bold  italic  b & i  underline  strikeout  u & s  Average

Baselines
ResNet-18 [5]            0.97  0.75    0.88   0.77       0.68       0.97   0.99     0.86
ResNet-50 [5]            0.97  0.74    0.89   0.69       0.73       0.98   0.99     0.86
ResNeXt-101 [18]         0.97  0.77    0.91   0.74       0.78       0.99   0.99     0.88
EfficientNet-b4 [15]     0.97  0.75    0.90   0.62       0.75       0.98   0.99     0.85

Variants
DeepFont [17]            0.97  0.72    0.80   0.44       0.64       0.93   0.98     0.78
DropRegion† [21]         0.98  0.77    0.90   0.61       0.75       0.97   0.99     0.85
MTL [9]                  0.97  0.75    0.89   0.64       0.70       0.97   0.99     0.84
TaCo† [10]               0.97  0.79    0.90   0.60       0.78       0.86   0.89     0.83
CONSENT† [12]            0.98  0.86    0.93   0.84       0.81       0.96   0.98     0.91

TexTAR (Ours)            0.99  0.92    0.95   0.90       0.87       0.99   0.99     0.94

All scores are F1. The T1 group covers bold, italic, and b & i (bold & italic); the T2 group covers underline, strikeout, and u & s (underline & strikeout). † = our re-implementation.
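As a reminder of the metric behind these numbers, the snippet below is a minimal pure-Python sketch of the standard binary F1 score computed per attribute. The exact averaging scheme used for the table's "Average" column is not specified on this page, so treat this as illustrative only.

```python
def f1(y_true, y_pred):
    """Binary F1 = harmonic mean of precision and recall.
    y_true / y_pred are parallel lists of 0/1 labels for one attribute."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        # Degenerate case (no true positives): F1 is conventionally 0.
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```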

Figure – visualization of results for a subset of baselines and variants, compared with TexTAR.

Download the Dataset and Weights

Model weights and the MMTAD test set can be downloaded from the link. For access to the full dataset, please contact ravi.kiran@iiit.ac.in.

Citation

@inproceedings{Kumar2025TexTAR,
  title     = {TexTAR: Textual Attribute Recognition in Multi-domain and Multi-lingual Document Images},
  author    = {Rohan Kumar and Jyothi Swaroopa Jinka and Ravi Kiran Sarvadevabhatla},
  booktitle = {International Conference on Document Analysis and Recognition (ICDAR)},
  year      = {2025}
}

Acknowledgements

International Institute of Information Technology Hyderabad, India.

Contact

rohan.kumar@students.iiit.ac.in
jinka.swaroopa@research.iiit.ac.in
ravi.kiran@iiit.ac.in