Towards an Individualised Approach to Learners’ Errors

 
When I started working as a school teacher (which was before I began my Ph.D. studies), I decided to try a clever way of marking my students’ written assignments: correcting the errors of weaker students and only underlining those made by better ones. I soon found that it was often hard to assess my students’ abilities properly, as I started getting complaints from them that they did not understand what their mistakes had been. Unfortunately, my enthusiasm waned, and I gradually abandoned my practice of treating the errors of students of different abilities differently.

But every cloud has a silver lining: this negative experience was one of the reasons I became interested in tests identifying learners’ strengths and weaknesses. Eventually, when I was shaping the plan of my Ph.D. research, I chose diagnostic assessment, which provides learners and/or teachers with detailed feedback on learners’ problems and strengths, as my research topic. I also found that tests diagnosing learners’ problems in foreign languages were few and that there was little research on the topic. That was the second reason I decided to study diagnostic assessment: I hoped to add to the body of research on it, believing it would help other teachers to succeed where I, as a teacher, had failed.

As a matter of fact, identifying my students’ strengths and weaknesses would not have been that much of a problem if it had not required a lot of time, which I did not have. That is why, soon after I began to study diagnostic assessment, I started considering the possibilities of computer-based diagnostic tests, rightly assuming that computerised tests take less time and make it possible to give learners immediate feedback, the latter being one of the defining characteristics of diagnostic tests. Later on, computer adaptive tests, i.e., computerised tests that adapt their content to test-takers’ ability levels, caught my attention for the same reason, as well as for their greater measurement precision and the individualised approach to learners they allow. In fact, I thought it would be useful to find out which characteristics of adaptive tests, and which adaptive test designs, could add to the precision of diagnostic tests. In addition, as both diagnostic and adaptive tests have their limitations, I decided it was also worth studying what a computer adaptive diagnostic test can diagnose; in particular, which problems with grammar or lexis in English as a foreign language can be identified, and how. To answer the latter, I decided to design a number of tests, basing their content mainly on the findings of two projects, CEFLING (www.jyu.fi/cefling) and TOPLING (www.jyu.fi/topling), both of which aim to determine how grammatical structures and vocabulary develop.
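For readers unfamiliar with how computer adaptive tests work, the sketch below illustrates the core idea in Python. It is not my actual test design: it assumes a simple Rasch (one-parameter logistic) model and an invented five-item bank, and the update rule is a deliberately crude stand-in for proper ability estimation. The loop repeatedly selects the unanswered item whose difficulty is closest to the current ability estimate and adjusts the estimate after each response.

```python
import math

# Hypothetical item bank for illustration: item id -> difficulty (logits).
ITEM_BANK = {"q1": -1.5, "q2": -0.5, "q3": 0.0, "q4": 0.7, "q5": 1.4}

def p_correct(ability: float, difficulty: float) -> float:
    """Rasch model: probability of a correct response."""
    return 1.0 / (1.0 + math.exp(difficulty - ability))

def next_item(ability: float, used: set) -> str:
    """Select the unused item whose difficulty is closest to the current
    ability estimate (the most informative item under the Rasch model)."""
    candidates = {i: d for i, d in ITEM_BANK.items() if i not in used}
    return min(candidates, key=lambda i: abs(candidates[i] - ability))

def update(ability: float, difficulty: float, correct: bool,
           step: float = 0.5) -> float:
    """Nudge the estimate up after a correct answer and down after an
    incorrect one, in proportion to how surprising the response was."""
    residual = (1.0 if correct else 0.0) - p_correct(ability, difficulty)
    return ability + step * residual

# Example run with made-up responses: True = correct, False = incorrect.
ability, used = 0.0, set()
for response in [True, True, False]:
    item = next_item(ability, used)
    used.add(item)
    ability = update(ability, ITEM_BANK[item], response)
    print(f"{item}: {'correct' if response else 'wrong'} "
          f"-> ability ~ {ability:.2f}")
```

The point of the sketch is the adaptive loop itself: each correct answer steers the test towards harder items and each wrong answer towards easier ones, which is what makes such tests both faster and more precise than fixed-form ones.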

Those were the questions I wanted to answer. But I also felt that, to be sure the outcomes of my research would have practical value for teachers and learners, I had to gather their experiences with the tests I designed. Thus I am not just aiming to study how my tests work, presenting numbers arranged into tables and graphs that will, hopefully, show that the tests can diagnose learners’ problems in English as a foreign language; I also want to know how learners and teachers feel about the diagnostic tests I design and what exactly they find useful in them.

So far, I have conducted a case study in which I gathered learners’ (n=19) and their teacher’s experiences with a diagnostic test identifying learners’ problems with the production of questions in English as a foreign language. In the test, the feedback learners received was adapted to their abilities. Although the number of participating learners was small, the test appears to have helped identify their problems with English questions, according to their own and their teacher’s reports. It also seemed to allow for an individualised handling of learners’ errors, indicating how much assistance each learner needed to self-correct their mistakes.

At the moment, I am planning to try the same test design with different content: word derivation. I find the latter interesting because, in contrast to the production of questions in a foreign language, there is little research on the development of word derivation skills. Thus, before trying to discover which problems in word derivation can be diagnosed and how, I first need to know what the specific problems in word derivation are and what the reasons behind them might be. To that end, I have decided to co-operate with the TOPLING project team, and we are currently conducting a study aimed at answering these two questions. My further plans include trying out other adaptive test designs and probably other content, with the aim of creating a test that diagnoses learners’ problems quickly and efficiently, thus minimising teachers’ workload, but also one that guides learners to recognise their own mistakes and to self-correct them, that is, one that helps them to learn.

I feel that diagnostic tests are underestimated and see great potential in them, and, like Charles Alderson (2005), I hope that one day we will be able to integrate diagnostic tests into the learning process so well that assessment becomes an indistinguishable part of it.

 

The author is a Ph.D. student at the Centre for Applied Language Studies at the University of Jyväskylä.

 

References

Alderson, J.C. (2005). Diagnosing Foreign Language Proficiency: The Interface between Learning and Assessment. London: Continuum International Publishing Group Ltd.

 