Abstract:
Systems for automatically scoring textual answers have been available commercially,
and some progress has been made in applying them to score short, factual answers
to purpose-written questions.
Automatic scoring of short text responses to educational assessment items is a
challenging task, particularly because large amounts of labelled data (i.e.,
human-scored responses) may not be available, given the variety of possible
questions and topics. It therefore seems desirable to integrate various approaches:
making use of model answers from experts (e.g., to give higher scores to responses
that are similar, as sketched below), prescored student responses (e.g., to learn
direct associations between particular phrases and scores), and so on.
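As an illustration of the model-answer approach mentioned above, the following is a minimal sketch, not the paper's implementation, that scores a response by its TF-IDF cosine similarity to an expert model answer; the function name score_by_similarity and the example texts are purely illustrative.

```python
# Hypothetical sketch: scoring a response by its similarity to an
# expert model answer (one of the approaches mentioned above).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def score_by_similarity(response: str, model_answer: str,
                        max_score: float = 5.0) -> float:
    """Scale the TF-IDF cosine similarity between the student's
    response and the model answer onto the score range."""
    vectors = TfidfVectorizer().fit_transform([model_answer, response])
    similarity = cosine_similarity(vectors[0], vectors[1])[0, 0]
    return similarity * max_score

# Illustrative usage with made-up texts:
print(score_by_similarity(
    "A deadlock occurs when processes wait on each other's resources.",
    "Deadlock arises when a set of processes each hold a resource and "
    "wait for a resource held by another process in the set."))
```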
This research work presents an Intelligent System that acquires the knowledge of
Subject Matter Experts (SMEs) in a specific computing field, “Advanced Operating
System”; uses a built-in Inference Engine designed with Information Extraction
techniques and a Fuzzy-Scoring Model to assess students’ free-text answers to
short-response questions; and thereby computes the correctness of students’
answers with respect to lecturers’ underlying model answers or templates.
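To make the fuzzy-scoring idea concrete, here is a hypothetical sketch, assumed rather than taken from the system itself, in which the fraction of expected key phrases extracted from a response is fuzzified into Low/Medium/High membership and then defuzzified into a crisp score; all names and thresholds are illustrative.

```python
# Hypothetical fuzzy-scoring step: fuzzify a key-phrase coverage ratio,
# then defuzzify with a weighted average. Thresholds are illustrative.

def triangular(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_score(coverage: float, max_score: float = 5.0) -> float:
    # Membership of the coverage ratio in three fuzzy sets.
    low = triangular(coverage, -0.01, 0.0, 0.5)
    medium = triangular(coverage, 0.2, 0.5, 0.8)
    high = triangular(coverage, 0.5, 1.0, 1.01)
    # Each fuzzy set is associated with a representative score level.
    weights = [(low, 0.2), (medium, 0.6), (high, 1.0)]
    total = sum(m for m, _ in weights)
    if total == 0.0:
        return 0.0
    # Centroid-style defuzzification to a crisp score.
    return max_score * sum(m * s for m, s in weights) / total

# e.g. a response covering 70% of the expected key phrases:
print(fuzzy_score(0.7))
```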
The newly developed Expert System (ES) was adapted to an academic course in the
University System, and its performance was evaluated using the Pearson correlation
coefficient. The evaluation results were compared with those of existing Automated
Scoring Systems; the comparative analysis shows that the computed correlation for
the proposed expert system’s scoring reliability is close to that of Project Essay
Grader (PEG) and exceeds that of the Expert System for Essay Scoring (ES4ES)
developed by Wakama. This indicates that the proposed system performs well.
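For reference, the Pearson correlation used in such an evaluation can be computed as in the following minimal sketch; the score lists here are illustrative placeholders, not the study's data.

```python
# Minimal sketch of the evaluation step: Pearson correlation between
# the system's scores and the human (SME) scores for the same responses.
from scipy.stats import pearsonr

system_scores = [4.0, 3.5, 2.0, 5.0, 1.5]  # illustrative values only
human_scores = [4.5, 3.0, 2.5, 5.0, 1.0]   # illustrative values only

r, p_value = pearsonr(system_scores, human_scores)
print(f"Pearson r = {r:.3f} (p = {p_value:.3f})")
```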