The 5th Annual International Conference of the Asian Association for Language Assessment (AALA)

2018 AALA Pre-Conference Workshops

Presentation type | Speaker | Title
Workshop 1 (One-day, Oct. 18) | Lin Gu & Keelan Evanini | Automated feedback on spoken and written language production
Workshop 2 (One-day, Oct. 18) | Barry O’Sullivan & Tan Jin | Using Online Resources to Identify Suitable Reading Texts for Learning and Assessment
Workshop 3 (Half day, morning of Oct. 18) | Ute Knoch | Rating scale development
Workshop 4 (Half day, afternoon of Oct. 18) | Hanan Khalifa | Assessing Reading: frameworks, models, checklists & top tips

 

Workshop 1: Automated feedback on spoken and written language production

Speaker: Lin Gu, Research Scientist, Educational Testing Service

Dr. Lin Gu is a Research Scientist in the Center for English Language Learning and Assessment at Educational Testing Service in Princeton, NJ. Lin has broad training in the areas of language pedagogy, language testing, and educational measurement. She received her Ph.D. in Second Language Acquisition from the University of Iowa. Her dissertation research addressed critical issues at the interface between language testing and language acquisition. Since joining ETS, Lin has focused her research on validity issues in assessing English language learners, technology-enhanced language assessments, and feedback used in learning and assessment contexts. Lin has published in a variety of journals, including Language Testing, Language Assessment Quarterly, and the Journal of English for Academic Purposes.

 

Speaker: Keelan Evanini, Research Director, Educational Testing Service

Dr. Keelan Evanini is a Research Director at Educational Testing Service in Princeton, NJ. His research interests include automated assessment of non-native spoken English for large-scale assessments, automated feedback in computer-assisted language learning applications, and spoken dialog systems. He leads the research team that develops SpeechRater, the ETS capability for automated spoken language assessment. He also leads the team of research engineers who work on applying state-of-the-art natural language processing, speech processing, and machine learning technology to a wide range of research projects aimed at improving assessment and learning. He has published over 70 papers on topics related to spoken language processing in peer-reviewed journals and conference proceedings, has been awarded 9 patents, and is a senior member of the IEEE.

 

Abstract: Feedback is a crucial component of successful language learning. Recent advances in technology have increased the prospects for automated systems to provide feedback on second language (L2) learners’ spoken and written language and to help guide learners towards improved language proficiency. As opportunities for classroom teachers to give feedback are often limited, automated systems can serve as an additional learning resource by providing immediate and individualized feedback to L2 learners. Such systems can be especially valuable in parts of the world where access to high-quality L2 instruction is limited.

In this workshop we will discuss automated feedback as it is used in both learning and assessment contexts. The workshop will be divided into two parts.

Part I of the workshop focuses on the use of automated feedback in learning contexts. We will begin with an overview of recent technological and pedagogical advances in state-of-the-art learning technologies. We will review the most common types of automated feedback provided by existing language learning systems, as well as research that has been conducted to evaluate the effectiveness of that feedback. Next, we will provide hands-on demonstrations of several research prototype feedback systems currently being developed by Educational Testing Service (ETS), including automated systems that process complex language production that is spontaneous, interactive, and multimodal in nature. We will then review key feedback-related research findings in Second Language Acquisition (SLA). Following that, we will engage workshop participants in using criteria based on empirically researched and validated SLA practices to evaluate the types of feedback offered by the various learning products.

The second part of the workshop will consider the use of automated feedback in the context of language proficiency assessment. We will first introduce influential validity frameworks for evaluating automated scoring systems (e.g., Williamson, Xi, & Breyer, 2012), as the principles and protocols outlined in these frameworks are directly relevant to validating automated feedback in assessment contexts. We will then describe the design and development of an ETS system that provides automated feedback on spontaneous speech produced by English language learners in a low-stakes practice test. Through a series of hands-on activities, workshop participants will be introduced to the various validity considerations that guide the selection and presentation of the feedback information, as well as the design of the feedback report.

Through the variety of presentations and hands-on activities offered in this workshop, participants will enhance their knowledge of the development and evaluation of automated feedback on L2 spoken and written language production.

 

Workshop 2: Using Online Resources to Identify Suitable Reading Texts for Learning and Assessment

Speaker: Barry O'Sullivan FAcSS FAALA, British Council

Professor Barry O’Sullivan is the Head of Assessment Research & Development at the British Council. He has undertaken research across many areas of language testing and assessment, including its history, and has worked on the development and refinement of the socio-cognitive model of test development and validation since 2000. He is particularly interested in the communication of test validation and in test localisation. He has presented his work at many conferences around the world, and almost 100 of his publications have appeared in a range of international journals, books and technical reports. He has published five books, the most recent being English on the Global Stage: The British Council and English Language Testing 1941-2016 (with Cyril Weir, 2017). He has worked on many test development and validation projects over the past 25 years and advises ministries and institutions on assessment policy and practice.

He is the founding president of the UK Association for Language Testing and Assessment (UKALTA) and holds honorary and visiting chairs at a number of universities globally. In 2016 he was awarded fellowship of the Academy of Social Sciences in the UK, and he was elected to Fellowship of the Asian Association for Language Assessment in 2017.

 

Speaker: Tan Jin, Sun Yat-sen University

Dr. Tan Jin is Associate Professor in the School of Foreign Languages at Sun Yat-sen University (SYSU), where he is Executive Director of the Research Centre for SYSU English Proficiency Test. He obtained his PhD in Education from the Chinese University of Hong Kong in 2012. His research interests include language assessment, artificial intelligence, and blended learning. His current research involves the development of an intelligent system that integrates data-driven standards to facilitate teachers’ selection and adaptation of academic reading materials to support English learners’ language development as well as science learning. His publications have appeared in numerous academic journals, including Language Testing, TESOL Quarterly, Information Sciences, and IEEE Transactions on Knowledge and Data Engineering. He is Co-Editor of the UNIPUS Course Series “Language, Data and Research: Developing Data Literacy for Language Teachers” and Principal Investigator of the project “Features and Computational Models of Content Difficulty of Academic English Reading Texts”.

 

Abstract: In this workshop we will demonstrate a range of free-to-use online resources that you can use to identify reading texts at a suitable level of difficulty for your class. These texts can then be used as both learning and assessment materials.

The workshop is structured in three phases:

Phase 1

Here we introduce a pair of checklists that are useful in any preliminary overview of a text. The first checklist focuses on the usefulness of the text in terms of your context and what you hope to use it for. This relates both to the type of activity you plan to use it for and to its structure and organisation – this is to ensure that it will allow you to create a set of tasks or questions at a later stage. The second checklist focuses on the appropriateness of the text for your students in terms of topic and language.

At this point you are using the checklists to make initial decisions about the text. To ensure that it is at the correct language level, we then proceed to the next phases.

Phase 2

Here we explore and use a number of online resources that look at the readability, the cohesive structure and the vocabulary of the text. These resources are freely available and are quite easy to use and interpret. We will use a set of texts that we provide, but we suggest that you bring along some texts you currently use in your classroom to gain the most benefit from the workshop.

Phase 3

Numerous text complexity indices are now available to assist with the text selection process. However, little support is available to assist with the text adaptation process, despite the fact that text adaptation is challenging even for experienced teachers. In light of this situation, the final phase introduces a data-driven approach to text adaptation, in which a range of corpus annotation techniques and corpus-based benchmarks are used to inform and support teachers’ text adaptation practices. We will demonstrate how this approach is implemented by providing a free online text adaptation tool together with step-by-step guidelines on how teachers can use the tool to adapt texts. Additionally, we will offer practical suggestions for using the data-driven approach and discuss ways to integrate it effectively into future teacher education and professional development programs.

 

Workshop 3: Rating scale development

Speaker: Ute Knoch, University of Melbourne

Associate Professor Ute Knoch is the Director of the Language Testing Research Centre at the University of Melbourne. Her research interests are in the areas of writing assessment, rating processes, assessing languages for academic and professional purposes, and placement testing. She has been successful in securing grant funding, including grants from the Educational Testing Service in the US, IELTS, the British Council, Pearson and the Australian Research Council. She was Co-president of the Association for Language Testing and Assessment of Australia and New Zealand (ALTAANZ) from 2015-2016 and served on the Executive Board of the International Language Testing Association (ILTA) from 2011 to 2014 and has done so again since 2017. In 2014, Dr Knoch was awarded the TOEFL Outstanding Young Scholar Award by the Educational Testing Service (Princeton, US), recognizing her contribution to language assessment. In 2016, Dr Knoch was awarded a Thomson Reuters Women in Research citation award.

 

Abstract: Rating scales, or rating criteria, are used in many language assessment contexts to provide guidance to raters when assessing productive skills such as speaking or writing. Rating scales provide an operationalisation of the test construct of an assessment and are therefore a key ingredient in achieving ratings that reflect test taker ability and scores that lead to valid inferences and decisions about test takers. However, the process of developing scales is not always well documented, which often leads test developers and teachers to adopt or adapt scales from other contexts without sufficient reflection on the implications of choosing an instrument developed for a different purpose.

In this workshop, we will examine in detail the different types of rating scales that are available and how these may be used for different purposes. Participants will gain an understanding of how the test construct is represented in the scales. Practical activities will give participants the opportunity to try out different scale development methods during the workshop. We will also examine how rating scales can be integrated into the teaching and learning process. The workshop will conclude by giving participants the opportunity to plan some basic scale validation activities.

 

Workshop 4: Assessing Reading: frameworks, models, checklists & top tips

Speaker: Hanan Khalifa, Cambridge Assessment

Dr. Hanan Khalifa is a leading language testing and evaluation expert. Since 1993, she has developed national examinations, validated international assessments, and led the alignment of locally and internationally produced curricula and examinations with international standards, namely the CEFR. As a researcher, she has focused on assessing receptive skills and the impact of education reform initiatives.

Dr Khalifa is a skilful presenter and an accomplished author with widely cited publications. Examining Reading, written with Professor Weir, continues to be a key textbook and reference work in ALTE institutes and on master’s programs in UK universities. In 1989, she was a recipient of the Hornby award for ELT; in 2007 she joined the Council of Europe as a CEFR Expert member and the EAQUALS inspection committee; in 2013 she won the IEAA award for innovation in International Education together with Professor Burns and Brandon; in 2018 she became the first international expert to join the Board of Directors at a Malaysian state university.

Hanan has previously worked for the Egyptian Ministry of Education and Higher Education, international development agencies, and world-renowned educational firms (BC, AMIDEAST, AIR, FHI 360). Since 2003, she has been with Cambridge Assessment English, leading on research and then on transforming language education with governments worldwide.

 

Abstract: Assessment practitioners are increasingly required by various stakeholders to go beyond demonstrating that their examinations/tests are valid, reliable, and fair, and to report learner-focused results aimed at continuous improvement. An explicit test development and validation framework utilised in a learner-oriented assessment context is an excellent starting point to ensure test quality, relevant feedback and positive impact of assessment on the learning and teaching processes.

In this workshop, we will explore Cambridge Assessment English’s socio-cognitive approach to test development with a particular focus on assessing Reading skills (see Khalifa & Weir, 2009) and on operating within a learning-oriented assessment context (see Jones & Saville, 2006). We will address three key features of an assessment context and answer a number of related questions:

1. LEARNER

How do learner characteristics affect the design or selection of a Reading test?

2. TASK

  1. What are the cognitive processes a learner goes through or needs to demonstrate when reading a text and responding to a task?
  2. How do these processes differ according to CEFR levels of proficiency?
  3. Which features of Reading texts/tasks do we need to be aware of when designing or selecting a Reading test?
  4. Do these features vary according to proficiency levels? 

3. FEEDBACK

  1. What feedback can we give the learners based on their performance on a Reading test?
  2. How can that feedback be addressed in the learning and teaching process?

The first part of the workshop will focus on theory while engaging participants in thinking critically about their own context and how the information provided applies to their local context.

In the second part of the workshop, participants will have more hands-on opportunities to become familiar with the CEFR scales for Reading ability, design reading tasks following a mind-map methodology, and critique the designed tasks and associated feedback. Participants will also be invited to consider the potential that technology offers for extending the approach and enabling innovative task designs.

By the end of the workshop, participants will have access to the following set of materials:

  • A framework for developing and validating reading tests
  • A carefully selected set of CEFR scales from the new CEFR Companion Volume
  • Procedures for the mind-mapping methodology
  • An evaluation checklist for various response formats
  • A relevant list of readings for professional development related to the workshop

 

 

Important Dates

  • Abstract Submission Deadline: 2018.04.15
  • Notification of Acceptance: 2018.06.15
  • Registration Starting From: 2018.07.01
  • Early Bird Discount Registration Period: 2018.07.01-2018.08.15
  • Registration Closing On: 2018.09.30
  • Conference Dates: 2018.10.18-20