A Development of English Proficiency Test of Undergraduate English Major Students of University of Phayao

Authors

  • Sanit Yeunsak, English Department, School of Liberal Arts, University of Phayao
  • Chuanpit Sriwichai, English Department, School of Liberal Arts, University of Phayao
  • Chanapa Duangfai, English Department, School of Liberal Arts, University of Phayao
  • Narisa Paichareon, English Department, School of Liberal Arts, University of Phayao

Keywords:

-

Abstract

This study aimed to develop English proficiency tests for undergraduate English major students of the School of Liberal Arts, University of Phayao, based on the CEFR C1 level, and to evaluate the quality of the developed tests. To pursue these objectives, two versions of the test were developed and administered. Each version consisted of 60 items divided into four parts of 15 items each: vocabulary, reading, grammar and writing, and conversation. The sample comprised 174 fourth-year English major students of the School of Liberal Arts, University of Phayao, in the 2021 academic year. The content validity of the two tests was verified by three experts in English language and in educational evaluation and assessment using the Index of Item-Objective Congruence (IOC). The tests were then tried out with the sample to analyze item difficulty and discrimination using Chung Teh-Fan's 27% technique, under which the difficulty index should fall between 0.20 and 0.80 and the discrimination index should be at least 0.20. Finally, test reliability was estimated using the KR-20 formula.
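For reference, the statistics named above are conventionally computed as follows. These are standard textbook formulations assumed here, since the abstract does not state the formulas: N is the number of experts rating an item, H and L are the numbers of correct responses to an item in the upper and lower 27% groups of size n, k is the number of items, p_j the proportion of examinees answering item j correctly, q_j = 1 - p_j, and \sigma_X^2 the variance of total scores.

\mathrm{IOC} = \frac{1}{N}\sum_{i=1}^{N} R_i, \quad R_i \in \{-1, 0, +1\}

p = \frac{H + L}{2n}, \qquad r = \frac{H - L}{n}

\mathrm{KR}\text{-}20 = \frac{k}{k-1}\left(1 - \frac{\sum_{j=1}^{k} p_j q_j}{\sigma_X^{2}}\right)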

The research results were as follows:

  1. For the first objective, developing the University of Phayao standard test, the study produced two versions of the C1 Standard Test of English Proficiency for fourth-year English major students at the University of Phayao, each consisting of 60 items in four parts: vocabulary, reading, grammar and writing, and conversation.

  2. For the second objective, evaluating the quality of the University of Phayao standard test of English proficiency at the C1 level, the Index of Item-Objective Congruence (IOC), for which a mean above 0.50 is considered satisfactory, ranged from 0.33 to 1.00 with a mean of 0.94 for version one and from 0.67 to 1.00 with a mean of 0.99 for version two; both versions thus showed high content validity. With respect to the difficulty and discrimination criteria, 57 items qualified in the first version and all items qualified in the second. The reliability index was 0.82 for version one and 0.83 for version two, indicating that both tests reached an acceptable level of reliability.
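As an illustration of how the screening criteria reported above can be applied, the following is a minimal Python sketch. The function names are hypothetical and the response data are simulated placeholders, not the authors' analysis scripts or data; it flags items with a difficulty index between 0.20 and 0.80 and a discrimination index of at least 0.20 using the 27% technique, and computes KR-20 reliability.

import numpy as np

def difficulty_discrimination(scores, item):
    # Difficulty (p) and discrimination (r) for one item via the 27% technique:
    # rank examinees by total score and compare the top and bottom 27% groups.
    totals = scores.sum(axis=1)
    order = np.argsort(totals)
    n = max(1, int(round(0.27 * len(totals))))   # size of each extreme group
    low = scores[order[:n], item]                # lower 27% of examinees
    high = scores[order[-n:], item]              # upper 27% of examinees
    p = (high.sum() + low.sum()) / (2 * n)       # difficulty index
    r = (high.sum() - low.sum()) / n             # discrimination index
    return p, r

def kr20(scores):
    # Kuder-Richardson formula 20 for dichotomous (0/1) item scores.
    k = scores.shape[1]
    prop = scores.mean(axis=0)                   # proportion correct per item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - (prop * (1 - prop)).sum() / total_var)

# Simulated responses: 174 examinees x 60 items (placeholder for real data).
rng = np.random.default_rng(0)
scores = (rng.random((174, 60)) < rng.uniform(0.3, 0.9, size=60)).astype(int)

qualified = []
for item in range(scores.shape[1]):
    p, r = difficulty_discrimination(scores, item)
    if 0.20 <= p <= 0.80 and r >= 0.20:          # criteria stated in the abstract
        qualified.append(item)

print(f"{len(qualified)} of {scores.shape[1]} items qualified; KR-20 = {kr20(scores):.2f}")

With real item-level responses in place of the simulated matrix, the same screening loop reproduces the kind of item counts and reliability values reported above.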



Published

2022-12-29