The Testing, Assessment and Feedback SIG (TAFSIG) invites you to complete a short survey to help guide our future activities. It aims to identify areas of testing, assessment and feedback that EAP Practitioners would like to develop, share and build expertise in. There are also a few questions to collect feedback on our activities so far.
You can find it by clicking HERE. It should take you between 5 and 10 minutes to complete and will close on Tuesday 19th January.
This is open to all EAP Practitioners (not only BALEAP members), so please share widely.
by Robert Playfair
Going through a UKVI audit on a pre-sessional course
'UK Visas and Immigration' (UKVI) is part of the Home Office. According to their website, their role is to make "millions of decisions every year about who has the right to visit or stay in the country, with a firm emphasis on national security and a culture of customer satisfaction for people who come here legally." This blog post is about what happens when the world of EAP practitioners in the UK collides with the Home Office, in what are known as 'UKVI audits'.
Since 2010 and the introduction of 'Secure English Language Tests' (SELTs), developers of these language tests have taken on responsibilities of immigration control, or 'border work' (Harding, Brunfaut & Unger, 2020). As pre-sessional test scores in the UK have the same decision-making value as a SELT (students progress / do not progress on to their degree courses), this immigration control role has also become part of the job of EAP Practitioners involved in in-house pre-sessional assessment development, alongside the responsibility of preparing students for their future studies. For these people there is a tension between developing assessments which best prepare students for higher education study and providing evidence of language competency for UKVI audits. This concern predates SELTs, with Cyril Weir (2005) noting the impact this has on test design:
“The increased expectation that providers of educational services should be made accountable to external bodies for the impact of their work has been a powerful driving force behind [the emphasis on summative evaluation]. It has encouraged a swing from viewing tests as instruments for assisting in the development and improvement of student language ability to treating them as indicators of performance for outside agencies” (Weir, 2005: 39)
It has also been explored in other contexts, e.g., an EAP unit in Hong Kong (Bruce & Hamp-Lyons, 2015).
BALEAP has developed guidelines for understanding existing SELT tests; however, there is relatively little guidance on developing 'UKVI compliant' in-house assessments. Perhaps as a result, this is a common point of discussion for those with in-house assessment development responsibilities: a recent search of the past two years of the BALEAP mailing list shows over 200 messages containing 'UKVI'. This blog post collects the current government guidance that is available and offers some snapshots of narratives from BALEAP colleagues who have undergone UKVI audits over the past few years.
Government guidance on in-house language tests
'Tier 4' visas have recently been renamed 'Student Route' visas. However, the guidance from the Home Office on assessing English language competence for in-house EAP departments has not changed (as of December 2020).
The main document which addresses this is the 'Student Sponsor Guidance Document 2: Sponsorship Duties' (version 12/2020, available here). Sections 5.10-5.12, 'Students studying at degree level and above on the Student route', are particularly relevant. For example:
" 5.10 a. If you are an HEP [Higher Education Provider] with a track record of compliance, we will allow you to choose your own way to assess it... However, you must ensure they are proficient to level B2 in each of the four components (speaking, listening, reading and writing), unless they are exempt from being proficient in a component because of a disability." (p.29)
"5.11 You must take all reasonable steps to ensure that you are satisfied through your assessment that the applicant meets the language competence requirements. For example, you could interview students. If you have doubts about any documents then you should verify them with the appropriate body." (p.30)
Another source of information is the Government webpage 'Prove your English language abilities with a secure English language test (SELT)', which recently (October 2020) added guidance about integrated testing (i.e., tests generating scores for more than one skill) in SELTs. This could be referred to when justifying scores from similar tests developed by in-house EAP teams:
"Where 2 or more components (reading, writing, speaking and listening) of a test are examined and awarded together, for example a combined exam and certificate for reading and writing skills, you must show that you achieved the required scores in all the relevant components during a single sitting of that examination, unless you were exempt from sitting a component on the basis of a disability."
UKVI audit case studies
TAFSIG is not yet in a position to either challenge this guidance or offer advice about interpreting it for in-house assessment development. For now, we would like to contribute the following anonymised case studies as a window into the UKVI audit experience for people working in high stakes in-house language testing in universities.
If you have an experience related to this topic you would like to share, please get in touch via our email: email@example.com
Case study 1
We were audited 2 or 3 years ago. UKVI audited the whole university, not just the pre-sessional course or even just our language centre - it was admissions for all international students.
For our summer pre-sessional course, we don't have an end of course test, and we don't test the 4 skills separately - we have integrated task-based assessments, e.g. the extended writing task assesses reading and writing together. However, on the marking criteria reading is a specific category, and it is a must-pass category, with the pass being CEFR B2. Overall, of the 4 skills required by UKVI, each has at least 1 must-pass B2 criterion on the assessments, which is how we demonstrate that students have met UKVI requirements.
We also have a detailed assessment specification document which explains how everything works.
Case study 2
The UKVI visited us in 2016, with very little advance notice given to our department (around 2 weeks). This was about 6 weeks before the main presessional courses began.
They were not there just for the presessional but for all aspects of managing Tier 4 students, including our relevant UK-based franchised courses, and were particularly exercised by record-keeping. In advance of the visit, I was asked to provide documentation relating to the content, assessment strategy, and outcomes from the previous presessionals on my watch.
I think there were 3 inspectors in total, and they stayed for two days. There were two big meetings and a number of side meetings with individuals or small groups. I was not required to attend a side-meeting.
The first big meeting was about administration and record-keeping across the University and I was not asked to attend.
The second meeting was more about course management and assessment, and involved management and academic representatives from each relevant course. It became clear that the inspectors had only a sketchy knowledge of pedagogical principles and issues. They were very keen on separate assessments for each of the four skills, or at least a distinct CEFR-related mark being awarded for each. However, we on the presessional favoured integrative assessments, so I faced some questioning about that. Our plans for the summer courses were well advanced by then, so I was loth to change things that had already been settled. However, we did meet them halfway in the end, by hastily introducing a specific listening test, which turned out to be a disaster but hey ho. I understand that my successors got pushed further in the direction of separate assessments.
The inspectors were also very interested in the credibility of the courses, insofar as having clear criteria for success and failure, and progression to degree courses. I had no problems in this regard, but the representative from one of the private-sector franchised courses was given a very hard time as they had *never* failed anyone on academic grounds on their relevant courses.
In general, the inspectors seemed to appreciate colleagues who had done their homework and were well-briefed. You need to have the facts at your fingertips.
A few weeks later, we learned that the University had passed the inspection with no serious concerns.
Case study 3
The audit we had this year involved about 40-45 mins with a UKVI panel of three auditors, and they looked at the assessments from both our Presessional course and our internal Progress Test. I wasn't the person directly involved but I was in the 'green room' for the debrief. Their main concerns appeared to be the security and secure storage of test materials and the recording of spoken presentations rather than the content of assignments. Attendance monitoring also seemed to be a key focus.
One of the auditors did have a background in English language teaching and commented favourably on the range of test items we had written; however, I think that was incidental. We are still waiting to have sight of the audit for more detail.
Case study 4
This relates to late 2019 when we went through a successful UKVI inspection of both our Pre-Sessional and Foundation programmes:
We carried out some informal research prior to our inspection to find out the experience of other HEIs, although it soon became clear that experiences were very varied and some claimed the rigour and approach depended on the inspectors involved. Nevertheless, tapping into the experiences of others was really useful.
UKVI inspected all our relevant programmes including Foundation, Pre-Masters and Pre-Sessional. I worked hard to package together a suite of documents for the inspectors which explained our language tests and provided a rationale and defence of the robustness of our assessment system. This package (which included samples of tests, marking criteria, CEFR alignment, test specs and validity report) was given to the inspectors in good time before the actual inspection. When it came to the face-to-face interrogation(!), the inspectors focussed much more on the Pre-Sessional than our other programmes. We never found out why this was, but some of our Pre-Sessional assessment is coursework-based and not carried out in exam conditions, so maybe this was why they homed in on it, or maybe it was due to the student numbers going through each programme. Whatever the reasons, the inspectors were friendly and interested to learn about our programmes, but although manageable, that interrogation of the Pre-Sessional was quite a grilling and covered various aspects of the testing, both practical and pedagogical.
As well as being prepared from a testing perspective and being ready to defend the validity and reliability of assessment instruments, it was also essential that the administration of our assessment procedures was in order. The UKVI team requires a selection of student samples; some of these are to be submitted to the inspectors prior to their visit but there is also a chunk they ask for on the day of their visit. These need to be readily accessible and so administration staff need to be well involved in the preparation of any inspection.
Bruce, E. and Hamp-Lyons, L. (2015) Opposing tensions of local and international standards for EAP writing programmes: Who are we assessing for? Journal of English for Academic Purposes 18:64-77.
Harding, L., Brunfaut, T. and Unger, J. W. (2020) Language Testing in the 'Hostile Environment': The Discursive Construction of 'Secure English Language Testing' in the UK. Applied Linguistics 41(5): 662-687.
Weir, C. (2005) Language Testing and Validation: An Evidence-Based Approach. Palgrave.
Dear BALEAP Colleagues,
The Testing, Assessment and Feedback SIG (TAFSIG) is developing a network of universities and language centres to facilitate piloting, pre-testing and/or trialling of tests with one another. This is an exciting opportunity not only for reciprocal help in a key stage of the test design cycle, but also to build relationships with other teams and institutions. Please email us on firstname.lastname@example.org if you are interested in finding out more about this and want to be added to the mailing list.
The TAFSIG team
TAFSIG Interview with Susie Cowley-Haselden.
In this interview, Jo Kukuczka asks Susie Cowley-Haselden about building knowledge through Academic Reading Circles (ARC) as a speaking assessment.
Susie is a Senior Teaching Fellow and Course Director of EAP on the International Foundation Programme at the University of Warwick. She has worked in EAP since 2009, after 12 years in EFL/ESOL. She has used Seburn's Academic Reading Circle model in various teaching contexts since 2012. She is currently in the writing-up phase of her PhD at Coventry University. Her PhD explores the impact of the ARC model on knowledge building in the EAP classroom. Twitter: @susiecowley
The interview is structured around four questions:
1. What is ARC?
2. Why ARC as a speaking assessment on an EAP course?
3. How do you assess speaking using ARC?
4. What literature would you recommend for EAP professionals interested in exploring speaking through ARC?
by Dr Emma Bruce
In this interview Emma Bruce asks Liz Hamp-Lyons about how she originally became interested in EAP assessment. The conversation gives some interesting insights into Liz's experience researching ELTS, the precursor to IELTS. Many thanks to Liz for giving her time and sharing her stories with the TAFSIG community.
TAFSIG podcast with Sophia Vanttinen-Newton
In this podcast Jo Kukuczka asks Sophia Vanttinen-Newton about her most recent research on student perceptions of teacher emoticon usage in feedback.
The four questions we ask Sophia in the podcast are:
1. Why your interest in the use of emoticons in feedback?
2. In your research, what did you find in terms of implications for the use of emoticons in pedagogic feedback? In particular, in terms of implications for students?
3. What were the limitations of your study?
4. What is next on your research agenda? And why?
Sophia began her career teaching abroad in Japan, Romania and France, like many in EAP, as a TEFL teacher in both state and private schools. She moved into teaching and course development in EAP in the UK after taking the DELTA. She has worked in London on EAP and IELTS pre-sessional courses and at the University of Kent teaching on credit-bearing, non-credit-bearing and bespoke in-sessional EAP modules. Between 2017 and 2019 she was In-sessional and Short Courses Manager at Kent, leading and managing the in-sessional and short courses provision. Now, at the University of Bristol, she is an IFP EAP Coordinator working on IFP EAP units. More recently she has moved into research, where her interest lies in perceptions of emoticons in online virtual teaching environments and assessment feedback. Her recent MA dissertation explored student perceptions of teacher emoticon usage in credit-bearing assignment feedback.
TAFSIG Interview with Lynda Taylor - by Fiona Orel.
Many thanks indeed to Lynda Taylor for giving her time and sharing her expertise with the BALEAP TAFSIG community. In this video Lynda talks with us about some of the key issues in speaking assessment and the challenges of working with an evolving construct of speaking. As well as sharing experience and insights, Lynda encourages us to consider our audience, purpose, and context, and to ponder the current shift in skills needed in an online environment and the impact of this on the speaking construct. Lynda makes reference to Cyril Weir's construct of academic literacy and the three paradigms that it offers for examining the EAP context. To explore Lynda's discussion of interactional competence, it is also well worth taking a look at Galaczi, E. and Taylor, L. (2018) Interactional Competence: Conceptualisations, Operationalisations, and Outstanding Questions. Language Assessment Quarterly 15(3): 219-236.
by Wayne Rimmer.
With his extensive background in the research and practice of assessment, Tony Green, Professor of Language Assessment at the University of Bedfordshire, shares his wealth of experience on current topical issues in the field. These include assessment literacy, the legitimacy of high-stakes tests and the evolution of assessment to reflect changes in the experiences and expectations of learners in higher education. The conversation is a fascinating snapshot of the challenges facing educators and assessment professionals working in an EAP environment.
TAFSIG Interview with Dan Isbell & Benjamin Kremmel
TAFSIG is a forum for sharing knowledge, ideas, practice and research on testing, assessment and feedback, and a facilitator for events that further research and practice in the field of English for Academic Purposes (EAP) and beyond.