How far do we agree on the quality of translation?

Author(s): Maria Kunilovskaya
Subject(s): Language studies, Language and Literature Studies, Applied Linguistics
Published by: Нов български университет
Keywords: TQA; translation mistakes; inter-rater reliability; error-based evaluation; error-annotated corpus; RusLTC

Summary/Abstract: The article aims to describe the inter-rater reliability of translation quality assessment (TQA) in translator training, calculated as a measure of raters’ agreement either on the number of points awarded to each translation under a holistic rating scale or on the types and number of translation mistakes marked by raters in the same translations. We analyze three different samples of student translations assessed by several panels of raters who used different methods of assessment, and draw conclusions about the statistical reliability of real-life TQA results in general and about objective trends in this essentially subjective activity in particular. We also try to identify the more objective data in error-analysis-based TQA and suggest an approach to ranking error-marked translations which can be used for subsequent relative grading in translator training.

  • Issue Year: I/2015
  • Issue No: 1
  • Page Range: 18-31
  • Page Count: 14
  • Language: English