Assessment Symposium 2025
Emerging trends and insights in language assessment: What’s shaping the future?
Face-to-Face Event – Saturday, 19 July 2025
This year’s Assessment Symposium will be future-focused, looking at the emerging trends and insights in language assessment that will shape our practices in the coming years.
The symposium is a great opportunity for professionals in the field of English language assessment to come together to share, learn and network with colleagues.
Date: Saturday, 19 July 2025
Time: 9:30am – 4:30pm
Location:
Susan Wakil Health Building
Western Ave
Camperdown NSW 2050




With thanks to our Event Sponsors
- Cambridge
- IDP IELTS
- LanguageCert
- Pearson




Event Program
Face to Face – Saturday, 19 July 2025
Registration and Tea/Coffee
Location: Foyer Area Level 4 (D18.04.C400)
9:30 – 10:00 AEST
Acknowledgement of/Welcome to Country
Welcome to the UECA Assessment Symposium
Location: Lecture Theatre 308 (D18.03.308)
10:00 – 10:15 AEST
Heather Thomas, UECA Vice President and Deputy General Manager/Director Global Programs, UOW College
Panel Keynote with Q&A
Location: Lecture Theatre 308 (D18.03.308)
10:15 – 11:45 AEST
In this year’s keynote panel session, our language assessment experts will present and discuss their diverse perspectives on the symposium theme, ‘Emerging trends and insights in language assessment: What’s shaping the future?’
Through the panel discussion, they will explore common and contrasting themes around the use of generative AI in test development, scoring and test preparation, approaches to academic integrity, adapting to the changing university assessment contexts and assessment for learning vs assessment as learning. The session will be interactive, and the panel will respond to questions posed by the audience.
Professor Ute Knoch is the Director of the Language Testing Research Centre at the University of Melbourne. Her research interests are in the areas of policy in language testing, and assessing languages for academic and professional purposes. She is currently the vice president of the Association for Language Testing and Assessment of Australia and New Zealand.
Professor Aek Phakiti is Professor of TESOL at the Sydney School of Education and Social Work at The University of Sydney. His interests include language testing and assessment, second language acquisition, AI in language education and research methods. Aek is the author of numerous books, including Language Testing and Assessment: Theory to Practice (Bloomsbury, forthcoming, 2025) and Assessment for Language Teaching (with Constant Leung, Cambridge, 2024).
Cara Dinneen is Education Manager for English Language Programs and English Medium Instruction at Macquarie University College and Convenor for the English Australia Assessment SIG. She holds an M.TESOL, Graduate Certificates in Educational Research and Business Educational Leadership, and has 22 years’ experience in teaching and leadership in Australia, Oman, and Spain.
Tracer Study Presentation
Location: Lecture Theatre 308 (D18.03.308)
11:45 – 12:15 AEST
Lunch
Location: Foyer Area & Upper Garden Level 4 (D18.04.C400 C400B)
12:15 – 13:15 AEST
Session 1A – Reimagining Assessment: Innovation and Impact
Location: 308 (D18.03.308)
13:15 – 14:00 AEST
Steve McIver, Cambridge University Press and Assessment
Abstract
This session offers insight into how Cambridge’s digital exams are being used in varied settings to support language learning, certification, and program goals in a changing educational landscape. As digital transformation continues to shape education systems, assessment must respond with equal flexibility. The session explores developments in Cambridge’s digital exams, focusing on Linguaskill and the Cambridge English Skills Test (CEST), which offer new approaches to measuring language proficiency across a range of contexts.
Using international case studies, we will examine how organisations are implementing these assessments alongside adaptations of Cambridge learning materials to address evolving educational and professional needs. The session will also explore the assessment and educational principles underpinning these tools and outline key practical considerations for implementation and delivery.
Bio
Prior to joining Cambridge University Press in 2018, Steve spent ten years teaching English, both in the ELICOS sector in Sydney and as an EFL instructor in Brazil, where he also managed his own teaching and translation business. Since joining Cambridge, he has delivered presentations at Australian institutions and internationally, including engagements with the Ministry of Education in Vietnam, focusing on the importance of assessment and the use of educational products. Most recently, he completed a research trip to Cambridge and IATEFL in Edinburgh to remain up to date with the latest developments and innovations in materials and language assessment.
Session 1B – Making our assessments relevant: CET’s Integrated Writing and Interactive Assessment Tasks
Location: 409 (D18.04.409)
13:15 – 14:00 AEST
Irma Basu, The University of Sydney – CET
Mohammed Sameer, The University of Sydney – CET
Abstract
The Centre for English Teaching (CET) is finalising a major curriculum overhaul initiated by an external review. The new curriculum aims to offer a direct entry program that is on par with best practice, addresses current issues, and meets university demands. This has led to a rethinking of how we assess our students.
This presentation will introduce CET’s two new assessment tasks, the designs of which have been informed by best practice. The first is our listening and reading to write task that is based on the principles of integrated assessments (Plakans, 2020; Dineen et al., 2024) and that assesses listening, reading and integrated writing skills. The listening and reading question items are designed to help students comprehend sources and facilitate the extraction of ideas for the subsequent writing task. The second assessment task is our Interactive Presentation task, which focuses on extracting “genuine and unscripted” (Sotiriadou et al., 2020) language from students for assessment purposes. This task is informed by current best practice in the field of interactive oral assessments. In this presentation, we will highlight ways in which both assessment tasks support meaningful engagement and mirror university-type assessment situations. We will also show how they address concerns regarding unethical AI use.
Our presentation will outline the assessment design process, focusing on the University of Sydney’s admission criteria, principles of secure and open assessments, and graduate qualities. We will refer to our specification documents, samples of input texts, listening and reading question items and examples of our interactive presentation questions.
Bios
Irma Basu is a member of CET’s Assessment Quality Team and Curriculum Review Team. She has a Master of Education (Educational Psychology) and has a keen interest in researching student motivation and teacher identity, and how they play out both inside and outside of the classroom.
Mohammed Sameer is part of CET’s curriculum review team and develops assessments for CET’s direct entry courses. He has a PhD in Education and a Master of Arts in Linguistics. His PhD highlights the importance of needs assessment as a primary step for curriculum development.
Session 1C – What’s left to assess? Rubrics optimised to acknowledge and reward learning
Location: 408 (D18.04.408)
13:15 – 14:00 AEST
Stuart Parker, Australian Catholic University
Abstract
Generative AI apps are fundamentally reshaping how students’ unsupervised (Lane 2) academic responses are being produced and assessed. As described by Liu and Bridgeman (2025), “AI forces us to think more deeply about what it means to be human”. Over the past two years, ACU’s Education Pathways English language programs have been on a strategic journey to respond effectively to this rapidly evolving technology, addressing both the constructive incorporation of AI tools and their potential misuse in masking competency gaps within Unit Learning Outcomes. Rubrics are emerging as a key focal point for outlining how and where GenAI capabilities can be appropriately utilised in spoken and written assessment tasks and what scoring penalties can apply when these skills are underdeveloped.
This presentation will explore:
- GenAI acknowledgement protocols enhancing transparency in submissions.
- Teaching strategies promoting ethical and meaningful student engagement with GenAI.
- Rubric descriptors distinguishing acceptable from inappropriate GenAI use.
- Techniques encouraging authentic student voices, prioritising original thought over AI-generated content.
The presentation will offer examples of rubric design and assessment scaffolding. Attendees will be invited to comment on and critically evaluate traditional rubric descriptors vulnerable to GenAI manipulation and to explore adaptive strategies that seek to maintain assessment integrity and support genuine student learning.
Bio
I have been at the Australian Catholic University, Melbourne campus for 16 years and in that time have worked as an Academic Manager, Education Pathways Coordinator, Lecturer in Charge and teacher in such programs as Foundation Studies, English for Academic Purposes, and IELTS/PTE Preparation courses. Currently, I am the coordinator for both domestic and international cohorts in Academic English & Communication Skills units.
In 2024, I presented the Reassessing Rubrics workshop at UECA’s Melbourne Symposium. In 2023, I also participated in the symposium as a member of the expert panel and as presenter for the session Learning Strategies Interrupted.
Session 2A – Factors in English language settings in admissions
Location: 308 (D18.03.308)
14:00 – 14:45 AEST
Ute Knoch, University of Melbourne Language Testing Research Centre
Abstract
To be admitted into higher education institutions in English-speaking countries, international students often take a large-scale English language proficiency test. University admission staff are therefore important test score users. Previous research has focussed on the assessment literacy of admissions staff in relation to specific large-scale English language tests (e.g., Baker et al., 2014; O’Loughlin, 2011, 2013) or a specific set of tests (Ginther & Elder, 2013). What has not yet been investigated is how admissions policy settings in relation to English language tests are made or amended, who is involved in making these decisions, and what factors might influence changes to admissions policies. The current study aims to fill this gap.
Interviews were conducted with admissions managers from 31 of 42 Australian universities. In the interviews, staff were asked to describe examples of instances when English language test requirements in their admissions policy were changed, including the factors that initiated discussions about changes. The findings showed that a range of both internal and external factors triggered changes, or change discussions, including feedback from academic teaching staff, results from systematic tracking studies, comparisons with other similar universities, major global events such as the Covid-19 pandemic, and changes to external regulations. The findings are explained drawing on the multiple streams framework (Herweg et al., 2024). The study has both theoretical and practical implications, including for language testers interested in better understanding processes involved in admissions policy-making and for efforts to improve the assessment literacy of policy makers.
Bio
Professor Ute Knoch is the Director of the Language Testing Research Centre at the University of Melbourne. Her research interests are in the areas of policy in language testing, and assessing languages for academic and professional purposes. She is the co-author of ‘Scoring Second Language Spoken and Written Performance’ (2021, Equinox, with Judith Fairbairn and Jin Yan), ‘Fairness, Justice and Language Assessment’ (2019, OUP, with Tim McNamara and Jason Fan), and ‘Assessing English for Professional Purposes’ (2020, with Susy Macqueen). She is currently the vice president of the Association for Language Testing and Assessment of Australia and New Zealand.
Session 2B – AI and Academic Integrity in ELT Assessments
Location: 409 (D18.04.409)
14:00 – 14:45 AEST
Michael Rochecouste, Torrens University Language Centre
Abstract
The rise of generative AI has had a significant impact on academic integrity across universities and education systems globally. This impact is especially pronounced in university English language centres, where the gatekeeping role of language proficiency experts is particularly vulnerable to compromise by generative AI.
This presentation will draw on interviews with Torrens University staff, policy analysis, theory, and current research, while acknowledging that AI is now a permanent feature of the digital learning landscape. A central focus will be how increasing students’ AI literacy can serve as a first line of defence against misuse—by establishing guided frameworks for ethical use, rather than imposing outright prohibitions on language learning models.
The session will then explore the distinction between integrative and reformative approaches to assessment, and how these can be applied to discourage or tightly regulate AI use. Attendees will leave with practical strategies for designing assessments and a theoretical framework to support and justify their choices. In addition, the presentation will offer insights into how policy, pedagogy, and AI literacy together shape the evolving role of AI in ELT classrooms.
Bio
Michael Rochecouste teaches English for Academic Purposes (EAP) at the Blue Mountains International Hotel Management School in Leura, a campus of Torrens University Australia. He holds a Master’s degree in Applied Linguistics (TESOL) and has over a decade of experience, including ten years as an IELTS examiner. His current professional interest lies in helping educators navigate and adapt their teaching and assessment practices in response to the rapid emergence of Artificial Intelligence in educational contexts.
Session 2C – Developing Integrated Speaking and Writing Rubrics
Location: 408 (D18.04.408)
14:00 – 14:45 AEST
Kate Randazzo
Amelia Mercieca
Abstract
This presentation examines how research undertaken by UNSW College as part of the UECA Integrated Assessment Project has developed into comprehensive master speaking and writing rubrics, which capture the core competencies across the input and output of integrated tasks at all levels from entry to exit in the College’s direct entry programs. This recently launched project addresses the need for consistent, transparent assessment practices while maintaining flexibility for diverse assessment requirements.
The master rubrics employ a competency-based framework, aligned with external proficiency standards, that identifies fundamental speaking and writing skills essential for university success. These rubrics serve as foundational assessment tools that can be systematically adapted to meet specific course requirements and assessment contexts, ensuring both consistency and pedagogical relevance across programs.
A key innovation is the development of accompanying speaking assessment checklists that translate complex rubric criteria into accessible, practical tools for educators. These checklists streamline the assessment process, reducing cognitive load for assessors while maintaining clear evaluation standards. The transparency afforded by this dual-tool approach enhances student understanding of assessment expectations and learning objectives.
The presentation will demonstrate how this system promotes clearer guidance and expectations between educators and students, supports more effective teaching practices, and facilitates meaningful feedback.
Bios
TBC
Afternoon Tea
Location: Foyer Area & Upper Garden Level 4 (D18.04.C400 C400B)
14:45 – 15:15 AEST
Session 3A – Reading Groups: Building Assessment Literacy
Location: 308 (D18.03.308)
15:15 – 16:00 AEST
Christopher Wasow, University of Adelaide
Abstract
This presentation showcases how our Centre is piloting a structured ‘Assessment Reading Group’ that leverages short, targeted research readings and guided reflection to shift teacher talk toward actionable change both in the classroom and within our Centre.
Each session in the ten-session program (mapped to Section 3 of the English Australia CPD Framework) explicitly targets at least one competency related to teacher assessment literacy. Each threads foundational and contemporary research articles together with preview reflections, guided Centre-specific discussion prompts, and in-session application tasks, such as test evaluation or AI prompting for assessment design, that support immediate, practice-oriented dialogue and activities.
In this session, we will:
- outline the planning steps and design logic behind the program, including how we balanced foundational sources with contemporary research;
- share early impacts: positive teacher feedback and richer assessment-design discourse through a forum that gives teachers a voice in shaping our practices;
- discuss lessons learned – how aligning sessions with key points in the assessment cycle boosted engagement; why semi-structured discussion created space for meaningful teacher voice; which article-to-practice connections had the biggest impact; and how group insights have influenced leadership conversations and continuous improvement.
Participants will gain practical know-how and digital resources to start—or supercharge—a reading-for-action program in their own context, ensuring that low-cost reading discussions can convert research into measurable classroom and program improvements.
Bio
Christopher Wasow is currently the Education Program Manager, Short Term and Customised Programs at the University of Adelaide’s English Language Centre (ELC), where he has taught and led academic programs for over a decade. He has previously served as the Centre’s Assessment Team Lead and Academic Integrity Officer, and currently acts as a Deputy Convenor of the Assessment Special Interest Group (SIG) for English Australia. Christopher holds a Master of Education (TESOL) with distinction from Queensland University of Technology and is particularly interested in program design, assessment systems, and leveraging efficient processes to enhance quality outcomes in English language education.
Session 3B – Viva Voce: Elevating Academic Integrity in the AI Era
Location: 409 (D18.04.409)
15:15 – 16:00 AEST
Mahnaz Armat, UNSW College
Abstract
The widespread adoption of AI has presented significant challenges for educators, particularly in maintaining academic integrity and accurately assessing student understanding. These issues are especially pronounced in theory-intensive courses where written communication is a critical component of evaluation. This has catalysed a rethinking of assessment design, leading to innovative solutions. The Viva Voce assessment has been adopted in a Diploma in Media course at UNSW College, a university pathway program.
This spoken assessment does not replace the original evaluation method but serves as a supplementary measure that enhances the overall assessment process. With a higher weighting than the written assessment component, viva voce serves as a gateway assessment ensuring that only students who meet the expected learning standards pass the course.
Quantitative data from Learning Management System (LMS) analytics and educator feedback indicate that this assessment approach has resulted in increased alignment between learners’ demonstrated spoken competence and their written performance. It ensures fairer allocation of marks and boosts educator confidence in grading student written submissions. Additionally, offering diverse assessment forms benefits learners with varying learning styles, ranging from auditory learners to those who excel in written communication.
Bio
Mahnaz Armat is currently a leading Arts (Social Science) education professional at UNSW College, a university pathway program in Sydney, Australia. She has extensive experience in ESL teaching, supporting non-native English learners in enhancing their language proficiency and communication skills. Her primary focus is on assisting international students in meeting tertiary entry requirements by fostering independent thinking, problem-solving abilities, and academic integrity. Additionally, she works to raise cultural awareness and develop strategies to better prepare these learners for a new academic environment.
Session 3C – Developing curriculum-wide scoring rubrics
Location: 408 (D18.04.408)
15:15 – 16:00 AEST
John Gardiner, The University of Sydney – CET
Abstract
Developing scoring rubrics that align with curriculum outcomes is a crucial component of effective test creation. However, the process of developing rubrics for high-stakes tests is frequently overlooked and underappreciated. The importance of appropriate rubric development stems from its use for measuring the intended course outcomes and making decisions that have a significant impact on students’ future study plans. Washback from important exams is acknowledged, but the washback from the scoring rubric may be even more influential. Available publications on scoring rubrics tend to describe the stages or explain the relevant theories without layering this with the ‘real-life experience’ of the rubric development process. The multiple considerations and feedback spirals involved in the stages of rubric development are aspects of testing that deserve more attention.
This presentation focuses on practical aspects of developing a new writing rubric to closely align with the Intended Learning Outcomes (ILOs) of a new Direct Entry Course curriculum at the Centre for English Teaching (CET), University of Sydney. It offers practical and unique insights into the difficulties encountered, the considerations in the design and development of the rubric, and the responses to feedback at various stages of its development. The presentation includes actual examples from the Design Stage, Development Stage, and Pre-Operational Stage. Recommendations are made for addressing the many issues related to the process of developing rubrics at other university language centres.
Bio
John Gardiner is a teacher at the Centre for English Teaching (CET), University of Sydney. He has extensive teaching, test writing and curriculum development experience on direct entry post-graduate EAP programs. Currently, John is a member of the Assessment Quality Team at CET and develops tests and rubrics.