Writing to Learn with Automated Feedback through (LSA) Latent Semantic Analysis: Experiences Dealing with Diversity in Large Online Courses

Author(s): Miguel Santamaria Lancho, Mauro Hernandez, Jose Maria Luzon Encabo, Guillermo Jorge-Botana
Subject(s): Social Sciences, Education, Higher Education
Published by: European Distance and E-Learning Network
Keywords: Assessment and evaluation; Distance and e-learning methodology; E-skills; E-competences; Learning innovation; Learning psychology; Lifelong learning; MOOCs; Tutoring; Student support

Summary/Abstract: The increasing demand for higher education and lifelong training has induced a rising supply of online courses, provided both by distance education institutions and by conventional face-to-face universities. At the same time, public universities’ budgets have suffered serious cuts, at least in Europe. Given this shortage of human and material resources, large online courses face great challenges in providing an extremely diverse student community with quality formative assessment, especially the kind that offers rich and personalized feedback. Peer-to-peer assessment can partially address the problem, but it involves its own shortcomings. The act of writing has been identified as a high-impact learning tool across disciplines, and competence in writing has been shown to aid access to higher education and retention. Writing to learn (WTL) is also a way to foster critical thinking and a suitable method to train soft skills such as analysis and synthesis; these skills underpin other complex learning methodologies such as problem-based learning (PBL) and the case method. The WTL approach, however, requires regular feedback from dedicated lecturers, and assessing free-text answers consistently is more difficult than is usually assumed, especially in large or massive courses. Multiple-choice objective assessment appears to be an obvious alternative, but the authors consider it seriously deficient when the intended learning outcomes rest on written expression and complex analysis. To face this dilemma, the authors tested an LSA-based automatic assessment tool, named GRubric, developed by researchers from the Developmental and Educational Psychology Department at UNED (Spanish National Distance Education University). The experience was launched in 2014-2015. Using GRubric, we provided students with automated, formative, and iterative feedback on their answers to open-ended questions (70-200 words). This allowed students to improve their answers and practice their writing skills, thus helping them both to organize concepts and to build knowledge. In this paper, we present the encouraging results of our first two experiences with UNED Business Degree students in 2014/15 and 2015/16.
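
The abstract does not detail GRubric's internals, but the core LSA technique it builds on can be illustrated briefly. The sketch below, in Python with scikit-learn, builds a latent semantic space from a small illustrative corpus and scores a student answer against a reference answer by cosine similarity. The corpus, answers, dimensionality, and feedback thresholds are all hypothetical; GRubric's actual model, training corpus, and scoring rules may differ.

```python
# Minimal sketch of LSA-based answer scoring (illustrative, not GRubric's code).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical domain corpus used to learn the semantic space.
corpus = [
    "Supply and demand determine the market price of a good.",
    "A budget deficit occurs when expenditure exceeds revenue.",
    "Inflation is a sustained rise in the general price level.",
    # ... a real system would train on hundreds of domain documents ...
]

reference_answer = "Inflation is a sustained increase in the general price level."
student_answer = "Inflation means prices across the economy keep rising over time."

# Build the latent semantic space: TF-IDF weighting followed by truncated SVD.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(corpus)
svd = TruncatedSVD(n_components=2)  # production systems typically use ~100-300 dimensions
svd.fit(X)

# Project both answers into the latent space and compare them.
vecs = svd.transform(vectorizer.transform([reference_answer, student_answer]))
score = cosine_similarity(vecs[0:1], vecs[1:2])[0, 0]

# Turn the similarity into simple iterative feedback (threshold is illustrative).
if score > 0.8:
    print(f"Good coverage of the key concepts (similarity {score:.2f}).")
else:
    print(f"Your answer misses key ideas; revise and resubmit (similarity {score:.2f}).")
```

Because the answer is compared in the latent space rather than word by word, a student can earn a high score with wording that differs from the reference, which is what makes iterative feedback on short free-text answers feasible at scale.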

  • Issue Year: 2017
  • Issue No: 1
  • Page Range: 331-340
  • Page Count: 10
  • Language: English