Consolidating and Developing Benchmarking Datasets for the Nepali Natural Language Understanding Tasks

Planned for publication

By IRIIS



Abstract

The Nepali language has distinct linguistic features, especially its complex script (Devanagari), morphology, and various dialects, which pose unique challenges for natural language processing (NLP) evaluation. While the Nepali General Language Understanding Evaluation (Nep-gLUE) benchmark provides a foundation for evaluating models, it remains limited in scope, covering only four tasks, which restricts its utility for comprehensive assessment of NLP models. To address this limitation, we introduce eight new datasets, creating a new benchmark, the Nepali Language Understanding Evaluation (NLUE) benchmark, which covers a total of 12 tasks for evaluating model performance across a diverse set of Natural Language Understanding (NLU) tasks. The added tasks include single-sentence classification, similarity and paraphrase tasks, and Natural Language Inference (NLI) tasks. Evaluating models on the added tasks, we observe that existing models fall short in handling complex NLU tasks effectively. This expanded benchmark sets a new standard for evaluating, comparing, and advancing models, contributing significantly to the broader goal of advancing NLP research for low-resource languages.
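To illustrate how a benchmark task of this kind is typically used, the sketch below evaluates a pretrained multilingual encoder on a single-sentence classification split. This is not the authors' evaluation code: the dataset file `nlue_task_valid.csv`, its column names (`text`, `label`), the label count, and the choice of `bert-base-multilingual-cased` are all assumptions made purely for illustration.

```python
# Minimal sketch (assumed setup, not the NLUE release): scoring a multilingual
# encoder on one single-sentence classification task with macro-F1.
import torch
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from sklearn.metrics import f1_score

MODEL_NAME = "bert-base-multilingual-cased"  # any Devanagari-capable encoder
NUM_LABELS = 3                               # hypothetical label set size

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=NUM_LABELS
)
model.eval()

# Hypothetical local CSV export of one task's validation split
# with "text" and "label" columns.
data = load_dataset("csv", data_files={"validation": "nlue_task_valid.csv"})["validation"]

preds, golds = [], []
with torch.no_grad():
    for example in data:
        inputs = tokenizer(
            example["text"], truncation=True, max_length=128, return_tensors="pt"
        )
        logits = model(**inputs).logits
        preds.append(int(logits.argmax(dim=-1)))
        golds.append(int(example["label"]))

# Macro-F1 treats all classes equally, a common choice for NLU benchmarks.
print("macro-F1:", f1_score(golds, preds, average="macro"))
```

In practice the model would first be fine-tuned on the task's training split; the snippet only shows the evaluation loop.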

Citation Request

Please consider citing our work if you utilize any of our resources or results. Your acknowledgment would be greatly appreciated. Thank You!

@misc{nyachhyon2024,
      title={Consolidating and Developing Benchmarking Datasets for the Nepali Natural Language Understanding Tasks}, 
      author={Jinu Nyachhyon and Mridul Sharma and Prajwal Thapa and Bal Krishna Bal},
      year={2024},
      eprint={2411.19244},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2411.19244}, 
}


Paper: https://arxiv.org/abs/2411.19244