Problem: Learning another language takes time. The ultimate paradox is that the better you get at a language, the slower you progress.
Problem: The existing proficiency scales employ levels so broad that it can be particularly challenging to: 1) communicate to the learner exactly where s/he is within a level, and 2) communicate to the learner how much progress s/he has made during a short course of learning.
Problem: Teachers are often unclear about how to operationalize proficiency scales and articulate student learning outcomes, materials, and activities appropriate for learners at a given level.
Yup. These are all problems.
My professional training was in the US government’s Interagency Language Roundtable (ILR), the original language proficiency scale, with roots going back over 70 years. Some 40 years later, the Common European Framework of Reference (CEFR) emerged as the ILR’s counterpart. Working in academia/industry, where students have a finite amount of instructional contact time (e.g., short immersion experiences or language training prior to academic enrollment), I found the CEFR could do what the ILR could not: label the learner’s abilities at a more granular level, with corresponding can-do statements.
Still, even the CEFR has limitations.
As students progress, everyone – from students to teachers – expects that one semester of coursework will equal one level of progression on the CEFR scale, or that one course level is equivalent to one CEFR level, which is simply not the way things work. Intensive English Programs do not graduate C1s on the CEFR scale (definitely not for productive skills – maybe, maybe, for receptive skills). The B-level students are by and large going to be at the B level for a long, long time, even though their teachers move on to C-level (and beyond) materials and activities.
This is an intriguing disconnect.
Especially if we think about the problem just in terms of our ability to clearly identify students’ levels. A functional proficiency scale capable of granular distinctions is a tremendously powerful tool.
Pearson’s Global Scale of English certainly appears to be the emerging solution to a pervasive problem. Sara Davila, from Pearson, gave a fantastic overview presentation of the GSE at the Fall CATESOL 2017 Conference. Pearson’s team of psychometricians and content developers has been working to identify skills within much narrower bands on the CEFR scale and to tag them with numeric values corresponding to the GSE. Per Davila, all of the work that has gone into this process is free to the public due to a caveat with the CEFR folks. Interesting.
Setting the administrivia aside, the applied value of this work is on point. Davila gave three examples:
- Map student learning outcomes (SLOs) to the GSE/CEFR can-do statements. Are the SLO levels consistent?
- Integrate GSE numbers into entrance/exit exams to demonstrate students’ before/after proficiency – especially useful in courses serving learner populations with high expectations over a short period of time (e.g., professional clientele in a 4-week program). The ability to make learning visible to the learner is vital for these programs.
- Integrate GSE numbers into entrance exams for placement purposes. Learners always place in a range; use the GSE numbers to more fully understand where they place and if it would be appropriate to move them up/down a level.
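To make the last two ideas concrete, here is a minimal sketch of what numeric, GSE-style scores make possible in a placement or pre/post-testing workflow. The band cut-offs and function names below are entirely hypothetical illustrations, not Pearson’s official GSE-to-CEFR alignment:

```python
# Hypothetical sketch: using numeric proficiency scores (GSE-style) for
# placement and for making short-course progress visible. The course bands
# and cut-off scores below are invented for illustration only.

COURSE_BANDS = [
    ("Level 1", 22, 35),
    ("Level 2", 36, 50),
    ("Level 3", 51, 66),
]

def place(score):
    """Return the course band a score falls into, the learner's relative
    position within that band (0.0 = bottom, 1.0 = top), and whether the
    placement is borderline enough to consider moving them up or down."""
    for name, low, high in COURSE_BANDS:
        if low <= score <= high:
            position = round((score - low) / (high - low), 2)
            borderline = position <= 0.1 or position >= 0.9
            return name, position, borderline
    return None  # score falls outside all defined bands

def progress(entry_score, exit_score):
    """Report numeric gain between entrance and exit exams, so that progress
    is visible even when the broad CEFR level has not changed."""
    return exit_score - entry_score

print(place(37))        # just above the Level 2 cut-off: flagged borderline
print(progress(43, 48)) # a visible gain within a single level
```

The point of the sketch is the granularity: a learner who enters at 43 and exits at 48 has demonstrable, reportable progress, even though both scores would likely sit inside the same broad CEFR level.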
It will be fascinating to see how the field adopts/adapts to the GSE in the coming years. One of the biggest challenges will be educating administrators and course developers. As a practical exercise, you can map the SLOs from a single course against the GSE can-do statements as a gauge, and then go from there.
Read more about the GSE from Pearson here.