Analysis of Task Comparability in Digital Environment by the Case of Metacognitive Skills

Abstract

This article addresses the problem of task comparability using scenario-based tasks for measuring metacognitive skills. Drawing on data from the «4C» critical thinking assessment tool (N=500), the comparability of two scenarios set in an identical digital environment and scored with a single set of indicators was investigated. The scenarios differ mainly in their contextual characteristics. Measurement invariance of the instrument was examined using confirmatory factor analysis. The results show that even when the construct structure and task characteristics are equivalent, the scenario context affects student performance. The largest differences were recorded for tasks involving interaction with the environment, in which the test-taker constructed an object from elements. Tasks involving work with text in the digital environment can be considered comparable when the content of their elements is changed. Possible reasons for the observed differences between the scenarios are discussed.
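For readers unfamiliar with the procedure, the sketch below illustrates the general logic of a measurement invariance check with confirmatory factor analysis in Python, using the semopy package. It is a minimal illustration, not the study's actual code: the file name responses.csv, the indicator columns ind1–ind6, the scenario column, and the one-factor structure are all hypothetical placeholders; the real model and indicators of the «4C» instrument are described in the article itself.

```python
# Minimal sketch of a configural invariance check across two scenarios,
# assuming a CSV with indicator columns ind1..ind6 and a 'scenario' column.
# The one-factor structure below is illustrative, not the actual 4C model.
import pandas as pd
from semopy import Model, calc_stats

MODEL_DESC = """
critical_thinking =~ ind1 + ind2 + ind3 + ind4 + ind5 + ind6
"""

data = pd.read_csv("responses.csv")  # hypothetical file name

# Configural step: fit the same factor structure separately in each scenario
# group; comparable fit suggests the construct structure holds in both contexts.
for scenario, group in data.groupby("scenario"):
    model = Model(MODEL_DESC)
    model.fit(group.drop(columns="scenario"))
    stats = calc_stats(model)
    print(scenario, stats[["CFI", "RMSEA"]])

# Stricter invariance levels (metric, scalar) add cross-group equality
# constraints on loadings and intercepts; a CFI drop larger than about .01
# between nested models is commonly read as non-invariance (Chen, 2007).
```

Stricter levels require a multigroup model with equality constraints; the loop above only demonstrates the first, configural, step of the usual sequence.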

General Information

Keywords: critical thinking, test comparability, scenario-based tasks, contextualized items, confirmatory factor analysis, measurement invariance

Journal rubric: Educational Psychology

Article type: scientific article

DOI: https://doi.org/10.17759/pse.2022270605

Funding. The reported study was funded by the Ministry of Education and Science of the Russian Federation, project number 075-15-2022-325 dated 25.04.2022.

Acknowledgements. The author is grateful to Uglanova I.L. for her help and comments on this article.

Received: 22.10.2021

For citation: Gracheva D.A. Analysis of Task Comparability in Digital Environment by the Case of Metacognitive Skills. Psikhologicheskaya nauka i obrazovanie = Psychological Science and Education, 2022. Vol. 27, no. 6, pp. 57–67. DOI: 10.17759/pse.2022270605.

References

  1. Gracheva D.A., Tarasova K.V. Podhody k razrabotke variantov zadanij scenarnogo tipa v ramkah metoda dokazatel'noj argumentacii [Approaches to the development of scenario-based task forms within the framework of evidence-centered design]. Otechestvennaja i zarubezhnaja pedagogika [Domestic and foreign pedagogy], 2022, no. 3(1), pp. 83–97. (In Russ.).
  2. Uglanova I.L., Orel E.A., Brun I.V. Izmerenie kreativnosti i kriticheskogo myshlenija v nachal'noj shkole [Measuring creativity and critical thinking in primary school]. Psihologicheskij Zhurnal [Psychological Journal], 2020, no. 6(41), pp. 96–107. (In Russ.).
  3. Buerger S. [et al.]. What makes the difference? The impact of item properties on mode effects in reading assessments. Studies in Educational Evaluation, 2019. Vol. 62, pp. 1–9. DOI:10.1016/j.stueduc.2019.04.005
  4. Chen F.F. Sensitivity of goodness of fit indexes to lack of measurement invariance. Structural Equation Modeling: A Multidisciplinary Journal, 2007. Vol. 14, no. 3, pp. 464–504. DOI:10.1080/10705510701301834
  5. Crisp V. Exploring features that affect the difficulty and functioning of science exam questions for those with reading difficulties. Irish Educational Studies, 2011. Vol. 30, no. 3, pp. 323–343.
  6. Davey T. [et al.]. Psychometric considerations for the next generation of performance assessment. Washington, DC: Center for K-12 Assessment & Performance Management, Educational Testing Service, 2015, pp. 1–100.
  7. Kuhn D. A Role for Reasoning in a Dialogic Approach to Critical Thinking. Topoi, 2018. Vol. 37, no. 1, pp. 121–128. DOI:10.1007/s11245-016-9373-4
  8. Lee H.-K., Anderson C. Validity and topic generality of a writing performance test. Language Testing, 2007. Vol. 24, no. 3, pp. 307–330. DOI:10.1177/0265532207077200
  9. Li J. Establishing Comparability Across Writing Tasks With Picture Prompts of Three Alternate Tests. Language Assessment Quarterly, 2018. Vol. 15, no. 4, pp. 368–386. DOI:10.1080/15434303.2017.1405422
  10. Nelson J., Guegan J. “I’d like to be under the sea”: Contextual cues in virtual environments influence the orientation of idea generation. Computers in Human Behavior, 2019. Vol. 90, pp. 93–102.
  11. Oliveri M.E. Considerations for Designing Accessible Educational Scenario-Based Assessments for Multiple Populations: A Focus on Linguistic Complexity [Electronic resource]. Frontiers in Education, 2019. Vol. 4. DOI:10.3389/feduc.2019.00088
  12. Roos J.M., Bauldry S. Confirmatory factor analysis. SAGE Publications, 2021. 144 p.
  13. Ruiz-Primo M.A., Li M. The Relationship between Item Context Characteristics and Student Performance: The Case of the 2006 and 2009 PISA Science Items. Teachers College Record, 2015. Vol. 117, no. 1, pp. 1–36.
  14. Schmit M.J. [et al.]. Frame-of-reference effects on personality scale scores and criterion-related validity. Journal of Applied Psychology, 1995. Vol. 80, no. 5, pp. 607–620. DOI:10.1037/0021-9010.80.5.607
  15. Şengün S. [et al.]. Do players communicate differently depending on the champion played? Exploring the Proteus effect in League of Legends [Electronic resource]. Technological Forecasting and Social Change, 2022. Vol. 177. DOI:10.1016/j.techfore.2022.121556
  16. Wang Y., Lu H. Validating items of different modalities to assess the educational technology competency of pre-service teachers [Electronic resource]. Computers & Education, 2021. Vol. 162. DOI:10.1016/j.compedu.2020.104081
  17. Solano-Flores G., Shavelson R.J. Development of performance assessments in science: Conceptual, practical, and logistical issues. Educational Measurement: Issues and Practice, 1997. Vol. 16, no. 3, pp. 16–24.

Information About the Authors

Daria A. Gracheva, Junior Research Fellow, Laboratory for New Construct Measurement and Test Design, Centre for Psychometrics and Measurement in Education, National Research University Higher School of Economics, Moscow, Russia, ORCID: https://orcid.org/0000-0002-4646-7349, e-mail: dgracheva@hse.ru
