A genre theory informed dynamic assessment approach offers higher education students individualised support with academic writing and conceptual development.
by Dr Prithvi Shrestha
Academic writing is the main mode of meaning making and student assessment in higher education. Although many students join higher education with the required academic writing expertise, others find it challenging.
National student satisfaction in UK higher education (Office for Students, 2020) is quite low regarding assessment and feedback. A theory-driven assessment approach may help address the issue.
Drawing on my research reported in my recent book (Shrestha, 2020), in this blog, I share how a theory-based assessment approach combined with a genre theory can be used to support our first year students with their disciplinary writing, and thus have an impact on student satisfaction across higher education.
Why do we need an assessment theory?
As EAP practitioners, we have conducted our students’ academic writing assessment many, many times. But, how often have we wondered whether our assessment practices are informed by any particular theories? I’m sure we tend to follow the tradition in our field and what is practised in our institutions. And possibly, we also build our assessment approach on our own personal experience of being assessed in our formal studies and professional development. Our assessment practices may be full of tensions, conflicts, uncertainties and multiple interpretations depending on our sociocultural contexts (Davison, 2004).
If we want our assessment practices to be systematic, credible and context-sensitive serving our student needs, I argue that we need to adopt a theory-informed assessment approach to academic writing (cf. Davison & Leung, 2009). Obviously, our teaching of academic writing is informed by pedagogical theories such as task-based teaching, online learning and communicative language teaching. There is no reason in principle why the same theories could not inform our assessment practices.
Among many learning theories, Vygotsky-inspired sociocultural theories (SCT) of learning (Vygotsky, 1978) have been influential in language and academic literacy education (Coffin & Donohue, 2014; Prior, 2008). Within SCT, an assessment approach called dynamic assessment (DA) has been developed, and DA is the focus of my research. DA blends instruction with assessment by targeting and further developing students’ Zones of Proximal Development (ZPD, learning potential) (Lantolf & Poehner, 2004).
The focus of DA is on students’ future academic writing potential rather than their past performance. It promotes dialogic feedback between the teacher and the student on academic writing assessment. To put it crudely, the teacher-assessor in DA intentionally promotes learning by targeting their support at the student’s ZPD and it is thus a learning-oriented assessment approach unlike standardised tests. As a theory of assessment and learning, however, it lacks a systematic language theory to teach and assess academic writing.
What view of language and genre?
In order to fill this gap in DA, I draw on Hallidayan Systemic Functional Linguistics (SFL) theory (Halliday & Matthiessen, 2014) to provide teaching support, assess academic writing and analyse my data. SFL views language as a choice-based, meaning-making semiotic resource that is socio-culturally shaped in a particular discourse community. It has been influential in EAP teaching and in assessing academic writing (Gardner & Donohue, 2020).
In my work, I also deploy the genre theory/pedagogy developed within SFL (Martin & Rose, 2007) to design academic writing assessment and provide dialogic feedback to students. In SFL, genre is defined as ‘a staged, goal-oriented social process. Social because we participate in genres with other people; goal-oriented because we use genres to reach our goals’ (Martin & Rose, 2007, p. 8). It is well established in our field that genre awareness is crucial for students to become members of a discourse community such as physical sciences or business management.
How does a theory driven assessment approach support students with academic writing?
As DA has no explicit linguistic theory, an SFL-based genre theory comes in handy when adopting DA to teach and assess academic writing. I used genre theory to teach key generic features of the business case study analysis genre (e.g., Orientation, Application of business frameworks, Recommendations) through DA. My focus was on aspects such as macro-Theme (introduction to text), hyper-Theme (also known as topic sentence) and technicality (use of subject-relevant lexis).
In my research, I used SFL to provide linguistic evidence for student academic writing development (ZPD) systematically while applying DA to assessing academic writing in distance education. I have developed a set of DA procedures for academic writing teachers and demonstrated the application of SFL to track undergraduate business management students’ development of Textual (i.e., organisation of message) and Ideational (i.e., subject matter) meanings as construed in their written assignments.
My study provides insights into students’ maturing academic writing abilities in a discipline. The analysis of tutor-student interaction (i.e., dialogic feedback) enables us to track writing development over time. My research shows that focused tutor mediation (face-to-face or online) provides effective support for academic writing development.
Another important result of my study was the potential of DA for learning transfer (transfer of learning to another new context). Learning transfer is crucial in any academic writing programme because we want our students to transfer what they learn to their disciplinary writing. My study showed that SFL genre theory informed DA helped first year students to transfer case study analysis genre features and conceptual knowledge to second and third year business studies modules.
To theorise or not to theorise our assessment practices?
Whether to theorise or not to theorise may still be a question facing many of us who are under enormous pressure to complete student assessment to meet institutional requirements in the ongoing pandemic situation. But are we doing justice to our students (and ourselves) by simply adhering to institutional requirements? Probably not. A theory-informed assessment approach like SFL genre theory informed DA helps us make our assessment more sensitive to our own distinct sociocultural, political and institutional contexts, and more responsive to the ever-changing academic writing needs of our diverse student body.
In fact, I have applied the findings of my study to a large-population (about 2,000 students a year) first year business communication online course at The Open University. It was not possible to adopt DA strictly as in my research but a flexible DA approach informed by SFL was worth applying. Taking this approach, students in this course complete their first assignment making notes on a case study. They get tutor feedback on these notes which they use to write their second assignment (case study analysis) applying a business framework. Additionally, they have one formative assignment writing week each in preparation for third and fourth assignments when their tutors provide formative feedback on their drafts before they submit their final versions. This promotes dialogic feedback.
There are benefits to adopting an SFL informed DA approach in our assessment practices. A theory informed assessment approach like this one comes with challenges too. But we EAP practitioners need to rise to them by being more dynamic, creative and flexible looking beyond existing EAP pedagogies and assessment practices!
Coffin, C., & Donohue, J. (2014). A Language as Social Semiotic Based Approach to Teaching and Learning in Higher Education. John Wiley & Sons Inc.
Davison, C. (2004). The contradictory culture of teacher-based assessment: ESL teacher assessment practices in Australian and Hong Kong secondary schools. Language Testing, 21(3), 305-334. https://doi.org/10.1191/0265532204lt286oa
Davison, C., & Leung, C. (2009). Current Issues in English Language Teacher-Based Assessment. TESOL Quarterly, 43(3), 393-415. https://doi.org/10.1002/j.1545-7249.2009.tb00242.x
Gardner, S., & Donohue, J. (2020). Introduction to the special collection: Halliday’s influence on EAP practice. Journal of English for Academic Purposes, 44, 100831. https://doi.org/10.1016/j.jeap.2019.100831
Halliday, M. A. K., & Matthiessen, C. M. I. M. (2014). Halliday's introduction to functional grammar (Fourth Edition. ed.). Routledge.
Lantolf, J. P., & Poehner, M. E. (2004). Dynamic assessment of L2 development: bringing the past into the future. Journal of Applied Linguistics, 1(1), 49-72. http://libezproxy.open.ac.uk/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=ufh&AN=14488498&site=ehost-live&scope=site
Martin, J. R., & Rose, D. (2007). Working with Discourse: Meaning beyond the Clause. Continuum.
Office for Students. (2020). National Student Survey Results 2020. https://www.officeforstudents.org.uk/advice-and-guidance/student-information-and-data/national-student-survey-nss/get-the-nss-data/
Prior, P. (2008). A sociocultural theory of writing. In C. A. MacArthur, S. Graham, & J. Fitzgerald (Eds.), Handbook of Writing Research (pp. 54-66). The Guildford Press.
Shrestha, P. N. (2020). Dynamic Assessment of Students’ Academic Writing: Vygotskian and Systemic Functional Linguistic Perspectives. Springer. https://doi.org/10.1007/978-3-030-55845-1
Vygotsky, L. S. (1978). Mind in Society: The Development of Higher Psychological Processes. Harvard University Press.
Dr Prithvi N. Shrestha, an award-winning author (British Council ELTons finalist 2019), is Senior Lecturer in English Language at The Open University, UK. He has led or co-led a number of funded international research projects. He has published over 40 research outputs, including one research monograph (Dynamic Assessment of Students’ Academic Writing (Springer, 2020)) and an edited volume, covering academic writing assessment in distance education, language assessment, English language education in developing countries, English medium instruction and mobile learning. His research is informed by Systemic Functional Linguistics and sociocultural theory.
For more information about him, please visit: http://www.open.ac.uk/people/pns52
Testing Assessment and Feedback SIG (TAFSIG) invites you to complete a short survey to help guide our future activities. It aims to identify areas of testing, assessment and feedback that EAP Practitioners would like to develop, share and build expertise in. There are also a few questions to collect feedback on our activities so far.
You can find it by clicking HERE. It should take you between 5-10 minutes to complete and will close on Tuesday 19th January.
This is open to all EAP Practitioners (not only BALEAP members), so please share widely.
by Robert Playfair
Going through a UKVI audit on a pre-sessional course
'UK Visas and Immigration' (UKVI) is part of the Home Office department of government. According to their website, their role is to make "millions of decisions every year about who has the right to visit or stay in the country, with a firm emphasis on national security and a culture of customer satisfaction for people who come here legally." This blog post is about when the world of EAP practitioners in the UK collides with the Home Office, in what is known as a 'UKVI audit'.
Since 2010 and the introduction of 'Secure English Language Tests' (SELTs), developers of these language tests have taken on responsibilities of immigration control, or ‘border work’ (Harding, Brunfaut & Unger, 2020). As pre-sessional test scores in the UK have the same decision-making value as a SELT (students progress / do not progress on to their degree courses), this immigration control role has also become part of the job of EAP Practitioners involved in in-house pre-sessional assessment development, alongside the responsibility of preparing students for their future studies. For these people there is a tension between developing assessments which best prepare students for higher education study and providing evidence of language competency for UKVI audits. This concern predates SELTs, with Cyril Weir (2005) noting the impact this has on test design:
“The increased expectation that providers of educational services should be made accountable to external bodies for the impact of their work has been a powerful driving force behind [the emphasis on summative evaluation]. It has encouraged a swing from viewing tests as instruments for assisting in the development and improvement of student language ability to treating them as indicators of performance for outside agencies” (Weir, 2005: 39)
It has also been explored in other contexts, e.g., an EAP unit in Hong Kong (Bruce & Hamp-Lyons, 2015).
BALEAP has developed guidelines for understanding existing SELT tests; however, there is relatively little guidance on developing 'UKVI compliant' in-house assessments. Perhaps as a result, this is a common point of discussion for those with in-house assessment development responsibilities: a recent search of the past two years of the BALEAP mailing list shows over 200 messages containing 'UKVI'. This blog post collects the current government guidance that is available and offers some snapshots of narratives from BALEAP colleagues who have undergone UKVI audits over the past few years.
Government guidance on in-house language tests
'Tier-4' visas have recently changed their name to 'Student Route' visas. However, the guidance from the Home Office on assessing English language competence for in-house EAP departments has not changed (as of December 2020).
The main document which addresses this is the 'Student Sponsor Guidance Document 2: Sponsorship Duties' (version 12/2020, available here), in particular sections 5.10-5.12, 'Students studying at degree level and above on the Student route'. For example:
" 5.10 a. If you are an HEP [Higher Education Provider] with a track record of compliance, we will allow you to choose your own way to assess it... However, you must ensure they are proficient to level B2 in each of the four components (speaking, listening, reading and writing), unless they are exempt from being proficient in a component because of a disability." (p.29)
"5.11 You must take all reasonable steps to ensure that you are satisfied through your assessment that the applicant meets the language competence requirements. For example, you could interview students. If you have doubts about any documents then you should verify them with the appropriate body." (p.30)
Another source of information is the Government webpage, 'Prove your English language abilities with a secure English language test (SELT)', which recently (October 2020) added guidance about integrated testing (i.e., tests generating scores for more than one skill) for SELTs, which could be referred to when justifying scores from similar tests developed by in-house EAP teams:
"Where 2 or more components (reading, writing, speaking and listening) of a test are examined and awarded together, for example a combined exam and certificate for reading and writing skills, you must show that you achieved the required scores in all the relevant components during a single sitting of that examination, unless you were exempt from sitting a component on the basis of a disability."
UKVI audit case studies
TAFSIG is not yet in a position to either challenge this guidance or offer advice about interpreting it for in-house assessment development. For now, we would like to contribute the following anonymised case studies as a window into the UKVI audit experience for people working in high stakes in-house language testing in universities.
If you have an experience related to this topic you would like to share, please get in touch via our email: email@example.com
Case study 1
We were audited 2 or 3 years ago. UKVI audited the whole university, not just the pre-sessional course or even just our language centre - it was admissions for all international students.
For our summer pre-sessional course, we don't have an end of course test, and we don't test the 4 skills separately - we have integrated task-based assessments, e.g. the extended writing task assesses reading and writing together. However, on the marking criteria reading is a specific category, and it is a must-pass category, with the pass being CEFR B2. Overall, of the 4 skills required by UKVI, each has at least 1 must-pass B2 criterion on the assessments, which is how we demonstrate that students have met UKVI requirements.
We also have a detailed assessment specification document which explains how everything works.
Case study 2
The UKVI visited us in 2016, with very little advance notice given to our department (around 2 weeks). This was about 6 weeks before the main presessional courses began.
They were not there just for the presessional but for all aspects of managing Tier 4 students, including our relevant UK-based franchised courses, and were particularly exercised by record-keeping. In advance of the visit, I was asked to provide documentation relating to the content, assessment strategy, and outcomes from the previous presessionals on my watch.
I think there were 3 inspectors in total, and they stayed for two days. There were two big meetings and a number of side meetings with individuals or small groups. I was not required to attend a side-meeting.
The first big meeting was about administration and record-keeping across the University and I was not asked to attend.
The second meeting was more about course management and assessment, and involved management and academic representatives from each relevant course. It became clear that the inspectors had only a sketchy knowledge of pedagogical principles and issues. They were very keen on separate assessments for each of the four skills, or at least a distinct CEFR-related mark being awarded for each. However, we on the presessional favoured integrative assessments, so I faced some questioning about that. Our plans for the summer courses were well advanced by then, so I was loth to change things that had already been settled. However, we did meet them halfway in the end, by hastily introducing a specific listening test, which turned out to be a disaster but hey ho. I understand that my successors got pushed further in the direction of separate assessments.
The inspectors were also very interested in the credibility of the courses, insofar as having clear criteria for success and failure, and progression to degree courses. I had no problems in this regard, but the representative from one of the private-sector franchised courses was given a very hard time as they had *never* failed anyone on academic grounds on their relevant courses.
In general, the inspectors seemed to appreciate colleagues who had done their homework and were well-briefed. You need to have the facts at your fingertips.
A few weeks later, we learned that the University had passed the inspection with no serious concerns.
Case study 3
The audit we had this year involved about 40-45 mins with a UKVI panel of three auditors and they looked at the assessments for both our Presessional course and also from our internal Progress Test. I wasn't the person directly involved but I was in the 'green room' for the debrief. Their main concerns appeared to be the security and secure storage of test materials and the recording of spoken presentations rather than the content of assignments. Attendance monitoring also seemed to be a key focus.
One of the auditors did have a background in English language teaching and commented favourably on the range of test items we had written, however I think that was incidental. We are still waiting to have sight of the audit for more detail.
Case study 4
This relates to late 2019 when we went through a successful UKVI inspection of both our Pre-Sessional and Foundation programmes:
We carried out some informal research prior to our inspection to find out about the experiences of other HEIs, although it soon became clear that experiences were very varied and some claimed the rigour and approach depended on the inspectors involved. Nevertheless, tapping into the experiences of others was really useful.
UKVI inspected all our relevant programmes including Foundation, Pre-Masters and Pre-Sessional. I worked hard to package together a suite of documents for the inspectors which explained our language tests and provided a rationale and defence of the robustness of our assessment system. This package (which included samples of tests, marking criteria, CEFR alignment, test specs and validity report) was given to the inspectors in good time before the actual inspection. When it came to the face-to-face interrogation(!), the inspectors focussed much more on the Pre-Sessional than our other programmes. We never found out why this was, but some of our Pre-Sessional assessment is coursework-based and not carried out in exam conditions, so maybe this was why they honed in on it, or maybe it was due to student numbers going through each programme. Whatever the reasons, the inspectors were friendly and interested to learn about our programmes, but although manageable, that interrogation of the Pre-Sessional was quite a grilling and covered various aspects of the testing, both practical and pedagogical.
As well as being prepared from a testing perspective and being ready to defend the validity and reliability of assessment instruments, it was also essential that the administration of our assessment procedures was in order. The UKVI team requires a selection of student samples; some of these are to be submitted to the inspectors prior to their visit but there is also a chunk they ask for on the day of their visit. These need to be readily accessible and so administration staff need to be well involved in the preparation of any inspection.
Bruce, E. and Hamp-Lyons, L. (2015) Opposing tensions of local and international standards for EAP writing programmes: Who are we assessing for? Journal of English for Academic Purposes 18:64-77.
Harding, L. Brunfaut, T. and Unger, J. W. (2020) Language Testing in the ‘Hostile Environment’: The Discursive Construction of ‘Secure English Language Testing’ in the UK. Applied Linguistics 41(5):662–687.
Weir, C. (2005) Language Testing and Validation: An Evidence-Based Approach. Palgrave.
Dear BALEAP Colleagues,
The Testing, Assessment and Feedback SIG (TAFSIG) is developing a network of universities and language centres to facilitate piloting, pre-testing and/or trialling of tests with one another. This is an exciting opportunity not only for reciprocal help in a key stage of the test design cycle, but also to build relationships with other teams and institutions. Please email us on firstname.lastname@example.org if you are interested in finding out more about this and want to be added to the mailing list.
The TAFSIG team
TAFSIG Interview with Susie Cowley-Haselden.
In this interview, Jo Kukuczka asks Susie Cowley-Haselden about building knowledge through Academic Reading Circles (ARC) as speaking assessment.
Susie is a Senior Teaching Fellow and Course Director of EAP on the International Foundation Programme at the University of Warwick. She has worked in EAP since 2009, after 12 years in EFL/ESOL. She has used Seburn’s Academic Reading Circle model in various teaching contexts since 2012. She is currently in the writing-up phase of her PhD at Coventry University. Her PhD explores the impact of the ARC model on knowledge building in the EAP classroom. Twitter: @susiecowley
The interview is structured around four questions:
1. What is ARC?
2. Why ARC as a speaking assessment on an EAP course?
3. How do you assess speaking using ARC?
4. What literature would you recommend for EAP professionals interested in exploring speaking through ARC?
by Dr Emma Bruce
In this interview Emma Bruce asks Liz Hamp-Lyons about how she originally became interested in EAP assessment. The conversation gives some interesting insights into Liz’s experience researching ELTS, the pre-cursor to IELTS. Many thanks to Liz for giving her time and sharing her stories with the TAFSIG community.
TAFSIG podcast with Sophia Vanttinen-Newton
In this podcast Jo Kukuczka asks Sophia Vanttinen-Newton about her most recent research on student perceptions of teacher emoticon usage in feedback.
The four questions we ask Sophia in the podcast are:
1. Why your interest in the use of emoticons in feedback?
2. In your research, what did you find in terms of implications for the use of emoticons in pedagogic feedback? In particular, in terms of implications for students?
3. What were the limitations of your study?
4. What is next on your research agenda? And why?
Sophia began her career teaching abroad in Japan, Romania and France, like many in EAP, as a TEFL teacher in both state and private schools. She moved into teaching and course development in EAP in the UK after taking the DELTA. She has worked in London on EAP and IELTS pre-sessional courses and at the University of Kent teaching on credit-bearing, non-credit-bearing and bespoke in-sessional EAP modules. Between 2017 and 2019 she was In-sessional and Short Courses Manager at Kent, leading and managing the in-sessional and short courses provision. Now, at the University of Bristol, she is an IFP EAP Coordinator working on IFP EAP units. More recently she has moved into research, where her interest lies in perceptions of emoticons in online virtual teaching environments and assessment feedback. Her recent MA dissertation explored student perceptions of teacher emoticon usage in credit-bearing assignment feedback.
TAFSIG Interview with Lynda Taylor - by Fiona Orel.
Many thanks indeed to Lynda Taylor for giving her time and sharing her expertise with the BALEAP TAFSIG community. In this video Lynda talks with us about some of the key issues in speaking assessment and the challenges of working with an evolving construct of speaking. As well as sharing experience and insights, Lynda encourages us to consider our audience, purpose, and context and to ponder on the current shift in skills needed in an online environment and the impact of this on the speaking construct. Lynda makes reference to Cyril Weir’s construct of academic literacy and the three paradigms that it offers for examining the EAP context. To explore Lynda’s discussion of interactional competence it’s also well worth taking a look at Galaczi, E. & Taylor, L. 2018, “Interactional Competence: Conceptualisations, Operationalisations, and Outstanding Questions”, Language assessment quarterly, vol. 15, no. 3, pp. 219 – 236.
by Wayne Rimmer.
With his extensive background in the research and practice of assessment, Tony Green, Professor of Language Assessment at the University of Bedfordshire, shares his wealth of experience on current topical issues in the field. These include assessment literacy, the legitimacy of high-stakes tests and the evolution of assessment to reflect changes in the experiences and expectations of learners in higher education. The conversation is a fascinating snapshot of the challenges facing educators and assessment professionals working in an EAP environment.
TAFSIG Interview with Dan Isbell & Benjamin Kremmel
TAFSIG is a forum for sharing knowledge, ideas, practice and research on testing, assessment and feedback, and a facilitator for events that further research and practice in the field of English for Academic Purposes (EAP) and beyond.