Article

AI-Driven Intelligent Answer Script Evaluation System

Authors: LANKAVALASA SHANMUKHA RAO, KALLEPALLI RAJYA LAXMI, BOTCHA TEJESWARARAO, HOBBILI KIRAN KUMAR, Mr. S.V.R. MURTHY

Abstract

The traditional method of evaluating academic answer sheets, especially descriptive or subjective responses, is manual and labor-intensive, and it often suffers from inconsistency, evaluator fatigue, human bias, and significant delays in returning results to students. As educational institutions worldwide handle growing volumes of examinations across multiple subjects and academic levels, an intelligent, efficient, and scalable way to automate the evaluation process has become critical.

This paper proposes an AI-based answer script evaluation system that applies Natural Language Processing (NLP), machine learning, and semantic analysis to assess written student responses against pre-defined model answers. Both student and model answers pass through NLP preprocessing, including tokenization, stopword removal, and lemmatization, before TF-IDF vector representations are computed. Semantic similarity between the answers is then measured with cosine similarity, yielding a content-based score that goes beyond simple keyword matching to capture meaning and context. The system also incorporates spelling and grammar correction modules based on language models, and it evaluates response coherence and logical structure through sentence flow analysis. The final score is a weighted combination of semantic similarity (60%), grammar quality (20%), and coherence (20%), calibrated against expert human evaluators.

Evaluation on 500 answer-key pairs across 5 academic subjects shows a Pearson correlation coefficient of 0.91 with expert evaluators and reduces evaluation time from an average of 15 minutes per paper to 1.2 seconds, while the deterministic scoring yields perfect consistency (inter-rater reliability κ = 1.0).
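The pipeline the abstract describes can be sketched in plain Python. This is an illustrative sketch, not the paper's implementation: the stopword list, example answers, and function names below are assumed for the demonstration, the lemmatization and grammar/coherence modules are stubbed out as plain inputs, and the smoothed TF-IDF formula follows a common convention rather than anything specified in the paper. Only the 60/20/20 weighting is taken directly from the text.

```python
import math
import re
from collections import Counter

# Tiny illustrative stopword list; the described system would use a full NLP
# stopword set plus a lemmatizer (e.g. from an NLP toolkit), omitted here.
STOPWORDS = {"the", "a", "an", "is", "of", "and", "to", "in", "into", "that"}

def preprocess(text):
    """Tokenize, lowercase, and drop stopwords."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return [t for t in tokens if t not in STOPWORDS]

def tfidf_vectors(docs):
    """Map each token list to a sparse {term: tf-idf weight} dict.

    Uses smoothed idf = log((1 + N) / (1 + df)) + 1, a common convention.
    """
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))          # document frequency per term
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({
            t: (c / len(doc)) * (math.log((1 + n) / (1 + df[t])) + 1)
            for t, c in tf.items()
        })
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse non-negative vectors, in [0, 1]."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def final_score(semantic, grammar, coherence):
    """Weighted combination reported in the paper: 60% / 20% / 20%."""
    return 0.6 * semantic + 0.2 * grammar + 0.2 * coherence

# Hypothetical model answer and student response, for illustration only.
model = preprocess("Photosynthesis converts light energy into chemical energy")
student = preprocess("Plants convert light energy to chemical energy by photosynthesis")
v_model, v_student = tfidf_vectors([model, student])
sim = cosine(v_model, v_student)   # semantic-similarity component, in [0, 1]
```

In the full system, `sim` would be combined via `final_score` with grammar and coherence scores produced by the language-model and sentence-flow modules; here those two inputs would simply be supplied as numbers in [0, 1].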

