Oliveira, Ema Patrícia de Lima

Publications (2)
  • Using Learning Analytics to evaluate the quality of multiple-choice questions: a perspective with Classical Test Theory and Item Response Theory
    Publication . Azevedo, José Manuel; Oliveira, Ema; Beites, P. D.
    Purpose: The purpose of this paper is to find appropriate forms of analysis of multiple-choice questions (MCQ) in order to obtain an assessment method that is as fair as possible to students. The authors intend to ascertain whether it is possible to control the quality of the MCQ contained in a bank of questions implemented in Moodle, presenting evidence from Item Response Theory (IRT) and Classical Test Theory (CTT). The techniques used can be considered a type of Descriptive Learning Analytics, since they allow the measurement, collection, analysis and reporting of data generated by students' assessment.
    Design/methodology/approach: A representative data set of students' grades from tests, randomly generated from a bank of questions implemented in Moodle, was used for the analysis. The data were extracted from the Moodle database using MySQL via an ODBC connector and collected in MS Excel worksheets, where appropriate macros programmed in VBA were applied. The CTT analysis was carried out with Excel formulas, and the IRT analysis with an Excel add-in.
    Findings: The Difficulty and Discrimination Indexes were calculated for all questions with enough answers. The majority of the questions presented acceptable values for these indexes, which supports the conclusion that they are of good quality. The analysis also showed that the bank of questions presents some internal consistency and, consequently, some reliability. Groups of questions with similar features were obtained, which is very important for teachers developing tests that are as fair as possible.
    Originality/value: The main contribution and originality of this research is the definition of groups of questions with similar features regarding their difficulty and discrimination properties. These groups allow the identification of difficulty levels within the bank of questions, enabling teachers to build tests, randomly generated with Moodle, that include questions at several difficulty levels, as should be done. To the best of the authors' knowledge, there are no similar results in the literature.
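The Difficulty and Discrimination Indexes mentioned in the abstract are standard Classical Test Theory statistics. As a minimal sketch (not the authors' Excel/VBA implementation), the difficulty index is the proportion of correct answers to an item, and a common form of the discrimination index compares the top and bottom scoring groups (often the upper and lower 27%). The function names, the 27% fraction, and the sample data below are illustrative assumptions.

```python
from typing import List

def difficulty_index(responses: List[int]) -> float:
    """Proportion of students answering the item correctly (1 = correct, 0 = wrong)."""
    return sum(responses) / len(responses)

def discrimination_index(item: List[int], totals: List[float], frac: float = 0.27) -> float:
    """Upper-lower group discrimination index for one item.

    item   -- per-student correctness on this question (0/1)
    totals -- per-student total test score, used to rank students
    frac   -- fraction of students in each extreme group (27% is a common choice)
    """
    n = len(item)
    k = max(1, round(frac * n))
    # Rank students by total score, highest first.
    order = sorted(range(n), key=lambda i: totals[i], reverse=True)
    upper, lower = order[:k], order[-k:]
    p_upper = sum(item[i] for i in upper) / k
    p_lower = sum(item[i] for i in lower) / k
    return p_upper - p_lower

# Toy example: 6 students; the 3 strongest all got the item right.
item = [1, 1, 1, 0, 0, 0]
totals = [10, 9, 8, 3, 2, 1]
print(difficulty_index(item))            # 0.5 -> medium difficulty
print(discrimination_index(item, totals))  # 1.0 -> perfectly discriminating item
```

An item with a discrimination index near zero (or negative) fails to separate strong from weak students, which is the kind of low-quality question this analysis is designed to flag in the question bank.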
  • Evaluating e-assessment: A practical application using statistical methods
    Publication . Azevedo, José; Beites, P. D.; Oliveira, Ema
    The use of Information and Communication Technologies (ICT) in the assessment process is becoming an asset, giving rise to so-called computer-based assessment, or e-assessment, whose use is increasingly common in Higher Education Institutions. Closed question formats, namely Multiple Choice, are the most commonly used. This chapter presents a literature review of the main aspects of this topic, including the main modalities of assessment (continuous assessment and summative assessment). Issues related to Multiple Choice Questions (MCQ) are discussed in more detail, covering the various MCQ formats and their advantages and limitations, with a particular focus on their use in mathematics tests. Some guidelines for assuring the quality of MCQ are also included.