Assessment Symposium 2023
English Language Assessment in a Post-Covid World
Face-to-Face Event – Saturday, 23 September 2023

Join us for the UECA Assessment Symposium 2023, presented by University English Centres Australia (UECA) in conjunction with Monash College, on Saturday, 23 September, at the Monash College campus in Melbourne’s CBD.

This year’s symposium will be a great opportunity for professionals in the field of English language assessment to come together to share, learn and network around the theme ‘English Language Assessment in a Post-Covid World’.

The symposium will feature three conference streams, each delving into different aspects of English language assessment:

  • Generative AI and Assessment
  • Learning-Oriented Assessment (LOA)
  • Assessment Development, Validation, and Benchmarking

Save the date for the UECA Assessment Symposium 2023 and join us on Saturday, 23 September 2023 for a day of thought-provoking discussions and valuable insights that will add to the conversation around the future of English language assessment.

Morning and afternoon refreshments and lunch are included.

Register Here



The Call for Presenters is now open

If you are interested in presenting, please submit an abstract (up to 250 words) addressing this year’s theme, using the button below.

The closing date for Presenter Applications is Sunday, 30 July 2023.

The authors of successful abstracts will be contacted by Friday, 18 August 2023.

Presenter Application


This year, our keynote speaker is a leading academic in assessment design, feedback, and learning in a digital world.

Professor Margaret Bearman
Centre for Research in Assessment and Digital Learning (CRADLE), Deakin University

Assessment in a time of genAI

Abstract
With the advent of ChatGPT, higher education assessment has been called into question. There are concerns around academic integrity: how can institutions assure that graduates can be appropriately credentialled? At the same time, for many, genAI is seen as an enhancement to learning, teaching and assessment, rather than a threat.

While rapidly emerging technologies can make it difficult to predict how to proceed, there are core educational ideas that may help guide assessment designs. This keynote explores the many implications of generative Artificial Intelligence (genAI) for what and how we assess students.  

Bio
Margaret Bearman is a Research Professor within the Centre for Research in Assessment and Digital Learning (CRADLE), Deakin University. She holds a first-class honours degree in computer science and a PhD in medical education. Margaret is known for her work in assessment design, feedback, and learning in a digital world. Her 2023 publication, ‘Learning to work with the black box: Pedagogy for a world with artificial intelligence’, reflects her long-standing interest in the implications of AI for higher education.

Find out more:

www.linkedin.com/in/margaret-bearman

Register Here


With thanks to our Event Sponsors

  • Bookery
  • Cambridge University Press
  • Duolingo English Test
  • ETS TOEFL
  • IDP IELTS
  • Language Cert
  • Pearson

All sessions are now available at UECA Online

You can access all session videos and download presentation materials from UECA Online.

Give Feedback Here

Registrations open on Monday, 29 August 2023.

Face-to-Face Event – Saturday, 23 September 2023

Registration
Foyer
9:30 – 10:00 AEST

Refreshments & coffee/tea served.

Welcome & Acknowledgement of Country
The Auditorium, Level 2
10:00 – 10:10 AEST

Macgregor Haines, Manager Learning and Teaching Excellence, Monash College
Simon Winetroube, UECA President and Director English, Curtin University

Keynote: Assessment in a time of genAI
The Auditorium, Level 2
10:10 – 10:50 AEST

Professor Margaret Bearman, Research Professor – Centre for Research in Assessment and Digital Learning (CRADLE), Deakin University

Expert Panel – Reflections on the Keynote
The Auditorium, Level 2
10:50 – 11:30 AEST

Prof. Thomas Roche, SCU College
Dr. Pamela Humphreys, Macquarie University College
Dr. Phuong Tran, Monash College
Mr. Stuart Parker, Australian Catholic University

UECA Integrated Assessment Grant Presentations
The Auditorium, Level 2
11:30 – 12:15 AEST

Hosted by:
Prof. Thomas Roche, SCU College
Dr. Pamela Humphreys, Macquarie University College

Lunch
Level 5
12:15 – 13:15 AEST

Lunch is provided.

Session 1A: Generative AI and Assessment
Room 5.62, Level 5
13:15 – 14:00 AEST

Learning Strategies Disrupted: AI-generated testing and possible negative learning outcomes
Stuart Parker & Cecilia Liddle, Australian Catholic University

Abstract
Amongst the myriad challenges facing international students in Australia, there is arguably none greater than achieving a prerequisite score in an external English language proficiency test. These scores can determine whether an international student will be able to progress to their intended mainstream degrees and/or meet the language proficiency criteria of the statutory authority pertaining to their field. This can be seen with international students intending to undertake a Bachelor of Nursing degree, as an IELTS 7 [or equivalent] in all bands (Assessment criteria, 2022) is required before commencement. To support students in meeting this requirement, the Australian Catholic University’s bridging programs [TPP, EAP and FS] offer test preparation courses and workshops. However, a noticeable trend has emerged amongst these cohorts related to the common perception that online AI-assessed language proficiency tests are susceptible to ‘tricks’ and shortcuts.

This presentation will seek to outline how counterproductive test washback has been observed in students preparing for AI-assessed language proficiency tests, and what approaches have been utilised to develop effective, transferable learning strategies. The presentation will not consider the efficacy of AI-generated tests themselves, but rather will explore the following:

– The perception of students in bridging courses that “gaming” AI-generated language proficiency tests is efficacious in achieving desirable outcomes

– The implications these attitudes have for test preparation courses and how they can compromise positive test washback

– Strategies and approaches that have proven effective in promoting suitable learning strategies for developing language proficiency and academic skills

Bios
Stuart Parker has worked at the Australian Catholic University for 14 years in such programs as English for Academic Purposes, Tertiary Literacy and IELTS/PTE Preparation. For the past six years he has also worked as an academic manager. One of the highlights during this time has been teaching Burmese tertiary students in the Thai-Myanmar border region. Prior to ACU, he worked as an English language teacher in universities, high schools and companies in Japan and Thailand. He has presented at UECA PD Fests on two previous occasions, on topics related to teaching marking guides and incorporating action learning into classrooms.

Cecilia Liddle has been working at ACU with international health sciences students in pathways programs for the past 14 years. Her background is in nursing, midwifery, and education, with work experience in regional and remote Australia and in different postings overseas. The decision to introduce an English language proficiency requirement for Bachelor of Nursing entry coincided with the arrival of the pandemic and a massive change in the way education was delivered to students. Many students have struggled to reach the required level, and she is interested in ways to motivate students to improve their English language skills, rather than have their focus distorted by a desire for test-score achievement alone.

Session 1B: Assessment Development, Validation, and Benchmarking
Room 5.64, Level 5
13:15 – 14:00 AEST

Testing the waters: rethinking post-covid reading tests
John Gardiner, Mohammed Sameer & Sharon Cullen, The University of Sydney

Abstract
With the return to face-to-face EAP classes post-COVID-19, many assessments that were rapidly designed and developed at ELICOS centres for online course delivery platforms are now undergoing review. This presentation offers valuable insights into our reassessment of, and reflection on, the reading tests at the Centre for English Teaching, University of Sydney, which have moved from paper-based, thematically connected assessments to an online, thematically unconnected or unit-based format. This significant change generated major test redesign, administration, and scoring issues that needed to be addressed and, indeed, are currently being reviewed. However, some of the solutions to these issues had their own challenges that need further refinement. This discussion will present the context of our reading assessment, including examples of past and present reading assessment items. We will then offer pragmatic solutions to common online delivery problems by exploring the issues and challenges that arose from our decisions. Many of the reflections we will present are based on our trial and error with the new format.

Bios
John Gardiner has an M.A. in TESOL and a Dip.Teach. (Primary). He teaches academic English at the Centre for English Teaching, University of Sydney, and is a member of the Centre’s Assessment Quality Team responsible for test development. He has extensive teaching, test writing, and curriculum development experience.

Mohammed Sameer has an M.A. in Linguistics and a PhD in Education. He teaches academic English at the Centre for English Teaching at the University of Sydney. He is also part of the Centre’s Assessment Quality Team, primarily responsible for writing assessment tasks and maintaining rater standards through processes of standardisation and moderation.

Sharon Cullen is an English language teacher with over 20 years’ experience across a range of programs with the last decade devoted to teaching EAP and Direct Entry Programs at the Centre for English Teaching + Learning Hub, University of Sydney. She joined the Assessment Quality Team in 2019 and has played a pivotal role in modernising the centre’s assessment systems. 

Session 2A: Generative AI and Assessment
Room 5.62, Level 5
14:00 – 14:45 AEST

Designing an academic writing assessment in the age of GenAI
Ruth Hellema, Monash College

Abstract
Since the irruption of ChatGPT late last year, centres have been endeavouring to understand Generative AI (Gen AI) and its full impact not only on assessment, but across education as a whole. This is further complicated by the evolving nature of Gen AI and the associated uncertainty. In response to this new development, learning institutions need to develop assessments that are designed to preserve academic integrity while also taking advantage of the affordances that Generative AI can offer.

The research paper, whether a report or an essay, is a key assessment in many direct entry programs. In the Gen AI era, it is also one of the assessments most vulnerable to academic integrity breaches. This presentation will showcase an example of a redesigned research paper that aims to promote academic integrity, provide an opportunity for authentic use of Gen AI in academic writing, and encourage the use of higher-order thinking skills, within the context of preparing students for the successful completion of their studies at university.

This presentation will be of interest to all those involved in designing assessments that both incorporate the use of Gen AI and address the academic integrity concerns it presents.

Bio
Ruth Hellema is an Assessment Design Specialist at Monash College. She has extensive experience in the design, development and delivery of a range of English language assessments, curriculum design and teaching, and has also held coordination roles. Her interests include authentic assessment, student feedback and exploring the potential of generative artificial intelligence in assessment design. Most recently, she has completed the Professional Certificate in Language Assessment at the University of Melbourne.

Session 2B: Learning-oriented Assessment
Room 5.64, Level 5
14:00 – 14:45 AEST

“I agree with you up to a point, and I’d like to make another point”: In search of real talk in interactive speaking tests
Cara Dinneen, Macquarie University College

Abstract
Our attempts at creating interactive speaking tests are often met with static, rehearsed performances by students, who deliver a series of long turns around the table linked together by a succession of overused conversational segues. This session looks at the development of two new interactive speaking tests that create the conditions for a more ‘authentic’ style of discussion and provide valuable positive washback for classroom practice.

Bio
Cara Dinneen is the Education Manager of English Language Programs at Macquarie University College. Cara holds a Master of TESOL, Grad Cert Educational Research, Grad Cert Business Educational Leadership, Trinity Diploma of TESOL, and a BA Communications. Cara has over 20 years’ experience in English language teaching, teacher training and leadership, having taught and managed programs in Australia, Oman and Spain. Cara has a strong background in teaching methodology and a creative, participant-focused approach to lesson design and teacher development. She also has a keen interest in learning and assessment design and is the Convenor of the English Australia Assessment Special Interest Group.

Session 2C: Assessment Development, Validation, and Benchmarking
Room 5.61, Level 5
14:00 – 14:45 AEST

IELTS Online: Stakeholder perspectives & validation
Reza Tasviri, IELTS

Co-authors:
Dr. Tony Clark, Head of IELTS Research, Research and Thought Leadership, Cambridge University Press and Assessment
Dr. Emma Bruce, Researcher, Assessment Research Group, British Council

Abstract
Since the beginning of the COVID pandemic, many English language test providers have launched online testing solutions to allow prospective international students to progress their applications. Providing securely delivered, longer-term assessment capabilities in the post-pandemic high-stakes domain is now an established practice.

The test delivery platform for IELTS Online (IOL) promotes accessibility for candidates previously reliant on attending test centres. Moving an existing test online presents different challenges to developing a new online test, and the use of validity evidence to support decisions must be part of an ongoing and transparent process (Isbell & Kremmel 2020).

Although previous IELTS validation studies provide a solid research basis, a new extensive research program is currently underway to investigate the features of IOL which are different from in-centre IELTS and the potential impact of these differences.

Maintaining fairness is central to this migration online, and collecting the views of key stakeholders is necessary (Chalhoub-Deville & O’Sullivan 2020) to help refine the forward-looking IELTS assessment model. Stakeholder perspectives (test takers, proctors, and examiners) are used in the IOL validation program as evidence to validate the assumptions and decisions made in the development and rollout of IOL.

In the presentation, we report test-taker perceptions of the at-home test environment and bespoke remote proctoring platform, as well as examiner insights into conducting a high-stakes Speaking test remotely, collected through a post-test survey and follow-up interviews.

Bios
Reza Tasviri is the IELTS Assessment Research Lead at IDP Education. He holds a postgraduate degree in Applied Linguistics, has over 25 years of experience in the ESL sector as an ESL/EAP/ESP teacher, and has lectured in teaching methodology, assessment and research methodology. Reza has been closely involved with IELTS for 15 years, is currently a member of a number of IELTS research cross-partner working groups, and has been involved in a number of high-profile partnered research projects. Reza’s research interests are in language assessment, discursive psychology and discourse analysis. He has been involved in developing and benchmarking assessment scales, training raters, and researching writer identity and assessment literacy.

Dr Tony Clark is Head of IELTS Research at Cambridge University Press & Assessment. Areas of research interest include test preparation, diagnostic assessment and accommodations in language testing. He has published in several major language assessment journals, regularly presents at international conferences, and is an active member of the academic community.  

Dr Emma Bruce’s specific focus as part of the Assessment Research Group at the British Council is research and validation of IELTS. She has over 25 years of experience working in the tertiary sector in the UK and overseas. Her research interests include online language testing, integrated assessment and test impact in different contexts.

Session 3A: Generative AI and Assessment
Room 5.62, Level 5
14:45 – 15:30 AEST

Navigating Targeted Language Use in University Business Education in the era of Generative AI: Implications for teaching and assessing in University Language Centres
Dr. Hora Hedayati & Dr. Sharyn Black, UNSW Business School

Abstract
The widespread availability of large language models and generative AI presents significant opportunities to enhance language acquisition by providing increased exposure to various language genres. However, the accessibility of generative AI in higher education has also resulted in disruptions across the teaching, learning and assessment domains. One notable concern is the potential of generative AI to undermine an essential standard outlined in the Higher Education Standards Framework (HESF): that students are only awarded degrees upon demonstrating achievement of the learning outcomes. One of the main program learning outcomes in Business education is business communication.

The aim of this presentation is to provide insights into how teaching, learning and assessment practices are being reconsidered at the university and Business School levels. This transformation is driven by the increasing accessibility of Generative AI in education and its widespread application across various industries and the business sector.

These developments impact the language and communication requirements for Business students, and universities are being compelled to adapt their entry requirements and language support accordingly to maintain high quality, reputation and student experience.

These changes in Target Language Use (Bachman and Palmer, 2010) have implications for the curriculum and assessment design in university language centres which are preparing language learners for success in their university programs.

Bios
Dr Hora Hedayati is a senior lecturer at the UNSW Business School where she works on education transformation initiatives. As the academic lead for assessment and portfolio in the Master of Commerce, she is researching and providing guidance on authentic assessments and implications of generative AI on authentic assessment.

Dr Sharyn Black is the Senior Project Officer, Quality Assurance and Accreditation in the UNSW Business School at the University of New South Wales, Sydney, Australia, where she is responsible for the development and implementation of the English Language Proficiency Program. Her research interests include academic literacy, education, and corpus linguistics.

Session 3B: Learning-oriented Assessment
Room 5.64, Level 5
14:45 – 15:30 AEST

New assessments for the new time
Dr. Phuong Tran & Mark Rooney, Monash College

Abstract
Assessment design and delivery are fraught with challenges and difficulties. Some of these stem from the inherent nature of assessment, while others come from institutional, organisational or societal contexts. As these contexts are ever-changing and evolving, assessment must evolve with them.

This presentation will outline the changes that were made to assessment when a suite of English language courses at Monash College English Language Centre were transformed into one streamlined course to reduce duplication and better fulfil learners’ needs in the post-COVID market. Following a principled approach, informed by Monash College’s newly written Assessment Framework, existing assessments across six former courses were re-packaged, re-written or replaced with new assessments that were more formative, innovative and personalised. This was a shift in approach from assessment of learning to assessment for and as learning. In addition, we sought to make our assessment processes and procedures more agile and streamlined, reducing the administrative load while maintaining assessment rigour.

This session will be of interest to those seeking to improve the design and delivery of assessments in their centres.

Bios
Dr Phuong Tran specialises in English language assessment and educational measurement. She is Manager of Assessment at Monash College. Phuong has a PhD in Education from the University of Melbourne specialising in Assessment in Education and English language testing. She also has a Master of Education (TESOL) from Monash University specialising in assessment design, curriculum development and teacher training. Phuong has extensive experience in assessment development, assessment validation and benchmarking, language teaching, and teacher training. Her current interests are transition education and quality assessment development in the face of generative artificial intelligence.

Mark Rooney is currently a program leader at Monash College English, where he has also worked as a learning skills adviser and teacher developer. His current areas of interest are building teacher autonomy and the role of mentoring in education.

Closing Remarks
Level 5
15:30 – 15:45 AEST

Afternoon Tea
Level 5
15:45 – 16:30 AEST

Social event. Refreshments & coffee/tea served.

Registrations open on Monday, 29 August 2023.

Register Here