The primary purpose of this study was to explore the efficacy of an automated essay scoring (AES) program, namely IntelliMetric, in scoring Korean high school students' English compositions in a large-scale standardized writing assessment setting. In addition, this study examined the relative utility of 5-point and 4-point analytic rating scales for scoring essays from this particular group of secondary EFL students in Korea. The results showed that the computer scoring models built from a large number of student essays with known scores (i.e., the training set) could successfully predict the scores of essays set aside for internal validation (i.e., the validation set), providing evidence for the efficacy of the scoring models developed. This finding was supported by three measures comparing the scores assigned by the AES program with those assigned by trained human raters: mean scores, agreement rates, and Pearson correlations. Given the program's ability to apply scoring criteria and standards consistently over time, it appears, although far from perfect, to have the potential to score essays reliably in large-scale assessment contexts such as a nationwide writing test.
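To make the three comparison measures concrete, the minimal sketch below computes mean scores, exact and adjacent (within one score point) agreement rates, and the Pearson correlation for a pair of machine-assigned and human-assigned score lists. The scores shown are hypothetical placeholders on a 5-point scale, not data from this study, and the functions are illustrative rather than any part of IntelliMetric's actual implementation.

```python
from statistics import mean

def pearson(x, y):
    """Pearson product-moment correlation between two equal-length score lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

def agreement_rates(machine, human):
    """Exact and adjacent (|difference| <= 1) agreement rates between raters."""
    n = len(machine)
    exact = sum(m == h for m, h in zip(machine, human)) / n
    adjacent = sum(abs(m - h) <= 1 for m, h in zip(machine, human)) / n
    return exact, adjacent

# Hypothetical scores on a 5-point analytic scale (illustrative only)
machine_scores = [3, 4, 2, 5, 3, 4, 3, 2, 4, 5]
human_scores   = [3, 4, 3, 5, 3, 3, 3, 2, 4, 4]

exact, adjacent = agreement_rates(machine_scores, human_scores)
print(f"Mean (machine):     {mean(machine_scores):.2f}")
print(f"Mean (human):       {mean(human_scores):.2f}")
print(f"Exact agreement:    {exact:.0%}")
print(f"Adjacent agreement: {adjacent:.0%}")
print(f"Pearson r:          {pearson(machine_scores, human_scores):.3f}")
```

High exact and adjacent agreement together with a strong positive Pearson correlation and comparable mean scores would indicate, as in this study, that the automated scores track trained human ratings closely.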