Comparison of Final Examination Question Papers by Means of Difficulty and Discrimination Indices
DOI: https://doi.org/10.56225/ijgoia.v2i1.164
Keywords: examination, course learning outcome, difficulty index, discrimination index
Abstract
Examination is one of the important elements in measuring student achievement and in determining whether the course learning outcomes are achieved. The best examination paper is one that can evaluate students' achievement and thereby satisfy the course learning outcomes. The purpose of this study is to compare examination papers and to identify whether the course learning outcomes are achieved, based on students' performance in the examinations. The sample is stratified by year of study (Year 1 to Year 4) and by programme, with students drawn from three academic sessions: 2015/2016, 2016/2017 and 2017/2018. The data are obtained from their mid-semester and final examination marks, and difficulty and discrimination indices are evaluated for each item in the examination papers. The findings show that the most ideal mid-semester examination questions are from the 2017/2018 session, with difficulty and discrimination indices ranging from 0.4 to 0.8, while the most ideal final examination questions are from the 2016/2017 session, with difficulty indices in the range 0.4–0.6 and higher discrimination indices than the other sessions. As a result, student performance in the 2017/2018 session increased significantly in the overall assessment, with 13.0% of students obtaining grade A compared with 11.1% in 2016/2017 and only 4.9% in 2015/2016. The failure rate also fell to 11.4% in 2017/2018, compared with 32.1% in 2015/2016 and 11.9% in 2016/2017.
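For reference, the sketch below shows how the two item-analysis indices are commonly computed. The abstract does not reproduce the study's exact formulas, so the standard definitions are assumed here: difficulty as the proportion of the available marks obtained on an item, and discrimination as the difference between the upper- and lower-group means, with groups formed from the top and bottom 27% of overall scorers. All names and the sample data are illustrative only.

```python
# Minimal sketch, assuming the standard item-analysis definitions (the paper does
# not list its exact computation): difficulty index = proportion of the available
# marks obtained on an item; discrimination index = upper-group mean minus
# lower-group mean, with groups taken as the top and bottom 27% of overall scorers.

def item_indices(item_marks, total_marks, max_item_mark, group_fraction=0.27):
    """Return (difficulty, discrimination) for one exam item.

    item_marks    -- per-student marks on this item
    total_marks   -- per-student overall exam marks (used to rank students)
    max_item_mark -- maximum mark obtainable on this item
    """
    n = len(item_marks)
    k = max(1, round(group_fraction * n))                  # size of upper/lower groups
    # Rank students by overall exam mark, best first
    order = sorted(range(n), key=lambda i: total_marks[i], reverse=True)

    def group_mean(indices):
        return sum(item_marks[i] for i in indices) / (len(indices) * max_item_mark)

    difficulty = sum(item_marks) / (n * max_item_mark)      # 0 = very hard, 1 = very easy
    discrimination = group_mean(order[:k]) - group_mean(order[-k:])
    return difficulty, discrimination


# Hypothetical example: ten students, one 10-mark question
item = [9, 8, 10, 4, 6, 7, 3, 5, 8, 2]
total = [85, 78, 92, 40, 61, 70, 35, 55, 74, 28]
p, d = item_indices(item, total, max_item_mark=10)
print(f"difficulty = {p:.2f}, discrimination = {d:.2f}")
```

Under the thresholds referred to in the abstract, an item whose difficulty index falls between roughly 0.4 and 0.8 and whose discrimination index is comparatively high would be regarded as an ideal question.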
License
Copyright (c) 2023 Authors
This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted copying and redistribution of the material in any medium or format, and remixing, transforming, and building upon the material for any purpose, even commercially.