2018 AALA Plenary Speeches
Presentation type | Speaker | Title
Plenary 1 | Xiaoming Xi | Language standards – Putting the construct definition in the spotlight
Plenary 2 | Barry O’Sullivan | Localisation
Plenary 3 | Nick Saville | Setting standards for language learning and assessment - a multilingual perspective
Plenary 4 | Yan Jin & Jessica Wu | Developing Guidelines for Practice for Language Assessment in Asia: Principles and Methodology
Plenary 1: Language standards – Putting the construct definition in the spotlight
Speaker: Xiaoming Xi, Educational Testing Service
Xiaoming Xi is Executive Director at Educational Testing Service (ETS), with responsibility for innovation in the areas of global education and workforce, and for conceptualizing, prioritizing, developing and deploying assessments, learning tools and services that support ETS’s social mission of advancing quality and equity in education. In her previous position as Senior Director of the Center for English Language Learning and Assessment at ETS, she led foundational research advancing English language learning and assessment for learners worldwide, as well as research support for ETS’s English language assessments, including the TOEFL® and TOEIC® programs and assessments for English learners in the U.S.
Her research spans broad areas of theory and practice, including validity and fairness issues in the broader context of test use, test validation methods, approaches to defining test constructs, validity frameworks for automated scoring, automated scoring of speech, the role of technology in language assessment and learning, and test design, rater and scoring issues. She is currently Series Co-editor of the Routledge book series “Innovations in Language Learning and Assessment” and Associate Editor of Language Assessment Quarterly, and serves on the Editorial Board of Language Testing. She received her doctoral degree in second/foreign language assessment from the University of California, Los Angeles.
Abstract: Language standards and language proficiency frameworks are becoming increasingly dominant, and there have been sweeping movements towards aligning assessments, curricula, and learning materials to them. Various standards and frameworks provide descriptors of different levels of language ability, which embody different conceptualizations of language abilities. Given the pivotal role that language standards and frameworks play in shaping language education and assessment, it is critical that the characterization of language abilities be congruent with current views so that it can guide the design of assessment, curricula and teaching practices.
Against this backdrop, I will give an overview of the debates surrounding approaches to construct definition and discuss current perspectives. While theoretical approaches to defining language abilities have continued to be debated over the last two decades, more recent conceptualizations have converged on the interactionalist view, which highlights the important role of language use context (Chapelle, 1998; Chalhoub-Deville, 2004; Bachman, 2007). The interactionalist approach has a strong potential to build on the strengths of two prevalent, yet seemingly contrasting, approaches: task-based (Norris, 2002) and construct-based (Bachman, 2002).
In alignment with the interactionalist view, I will provide a brief orientation to the multi-faceted approach to construct definition, which consists of two key elements: foundational and higher-order language skills, and language use contexts (Xi, 2015). Central to the application of this approach are a model of communicative competence and a framework for representing the contexts of tasks. As an illustration, I will present an oral communicative competence model that is intended to provide an organizing structure for describing all underlying components relevant to oral communication. I will also describe a scheme for characterizing key contextual factors of oral communication.
I argue that when the interactionalist approach is used to define language abilities in language standards and frameworks that consist of multi-level descriptors, the constructs require a more dynamic and fluid representation. The primary focus is on defining a learning progression model that places learners on a learning trajectory, one that needs to reflect both the expansion of language use contexts and growth in higher-order and foundational skills in relation to particular contexts.
Following the discussion of conceptual perspectives on defining language abilities, I will provide a critical review of a number of prevalent language proficiency frameworks. I will analyze how language abilities and learning progressions are characterized in each, and the extent to which each characterization is consistent with, or departs from, contemporary views of language constructs. I will also discuss the implications for future work on language standards and frameworks.
Plenary 2: Localisation
Speaker: Barry O'Sullivan FAcSS FAALA, British Council
Professor Barry O’Sullivan is the Head of Assessment Research & Development at the British Council. He has undertaken research across many areas of language testing and assessment, including its history, and has worked on the development and refinement of the socio-cognitive model of test development and validation since 2000. He is particularly interested in the communication of test validation and in test localisation. He has presented his work at many conferences around the world, and almost 100 of his publications have appeared in a range of international journals, books and technical reports. He has published five books, the most recent being English on the Global Stage: The British Council and English Language Testing 1941-2016 (with Cyril Weir, 2017). He has worked on many test development and validation projects over the past 25 years and advises ministries and institutions on assessment policy and practice. He is the founding president of the UK Association for Language Testing and Assessment (UKALTA) and holds honorary and visiting chairs at a number of universities globally. In 2016 he was awarded a fellowship of the Academy of Social Sciences in the UK, and in 2017 he was elected to a fellowship of the Asian Association for Language Assessment.
Abstract: My thinking around the idea of localisation began with a discussion with a friend who was interested in computer games, from the standpoints of both playing and development. During our discussion, I learnt that developers have been making changes to their games to suit specific market needs ever since the beginning of the craze for ‘video’ games in the 1970s. Over the years, technology has allowed for greater degrees of localisation, generally along a fixed continuum, from no changes at all to full localisation, in which all aspects of a game (packaging, all language elements, graphics and manuals) are affected.
At the time of this discussion, I had reached the conclusion that positioning the candidate at the centre of development and validation in the socio-cognitive approach effectively meant that tests needed to reflect the needs of real people, as opposed to writing off a test population as heterogeneous and therefore impossible to define. It became increasingly clear to me that a convincing validity argument can only be made for tests that can be shown to be appropriate, on a range of levels, to the individual candidate. The implications of such a position for the majority of current language tests are significant. Essentially, if there is no evidence that a test is appropriate for individual learners, then it is highly unlikely to support valid decisions about those individuals. This argument undermines the validity claims of developers of current international English language tests that claim to be candidate-agnostic in targeting so-called general proficiency across an international population. The solution was suggested to me by the developers of games, in their early recognition of the commercial need for localisation.
In this paper, I will present a theoretical basis for the concept of language test localisation and exemplify how it has already been applied in the world of test development. An existing operational taxonomy of localisation, based on a combination of the approaches of game developers and language test developers, will be presented and discussed. I will demonstrate how this view of localisation has been operationalised in a specific project with an existing test, while also discussing the implications for test development in general. I will argue that in order to reach a conceptually pure level of localisation, tests will have to change radically, as the level of localisation required for truly personalised tests is beyond the capacity of current approaches to test design and delivery. In addition, test developers are likely to continue to avoid any meaningful engagement with localisation, since its potential to radically disrupt the international testing market is massive, while the technology-based solutions introduced to date do not appear to offer a meaningful way forward.
Plenary 3: Setting standards for language learning and assessment - a multilingual perspective
Speaker: Nick Saville, Cambridge Assessment English, University of Cambridge
Dr. Nick Saville is Director of the Research and Thought Leadership Division and a member of Cambridge Assessment English’s Senior Management Team. He is Secretary-General of the Association of Language Testers in Europe (ALTE), a member of the Board of Trustees of The International Research Foundation for English Language Education (TIRF), and a Board member of Cambridge University’s Institute for Automated Language Teaching and Assessment (ALTA). He was a founding associate editor of the journal Language Assessment Quarterly and is joint editor, with Prof Cyril Weir, of Studies in Language Testing (SiLT, CUP). He publishes widely and recently completed a chapter on digital assessment (Digital Language Learning and Teaching, Routledge) and a volume on Learning Oriented Assessment (LOA) with Dr Neil Jones (SiLT 45).
Abstract: Standards and standardisation have many different meanings, and it is only through a careful analysis of these concepts and their application that we can understand the complexities and dynamics of language education. I will explore three such meanings and their interaction with each other in the real world of language policy and practice.
In thinking about the interrelatedness of these three meanings, we must take into account the context in which the language learning takes place and the intended uses of the target language – e.g. outside the classroom in the home, in the workplace, for further study and for global mobility.
The challenge for educators is ultimately a multilingual one involving the language used in the family, the language of schooling, and the language to be learned as a curriculum subject – often English.
I will outline some of the dilemmas faced by educators in applying standards that are relevant in a global context while at the same time respecting diversity and the need for locally-determined norms and context-specific educational policies and practices.
Plenary 4: Developing Guidelines for Practice for Language Assessment in Asia: Principles and Methodology
Speaker: Yan Jin, Shanghai Jiao Tong University
Yan Jin is a professor of Linguistics and Applied Linguistics at the School of Foreign Languages, Shanghai Jiao Tong University. Since 1991, she has been involved in the design and development of the College English Test (CET) in China, and she has been Chair of the National College English Testing Committee since 2004. She is currently co-President of the Asian Association for Language Assessment and co-editor-in-chief of the Springer open-access journal Language Testing in Asia. She has been involved in many research projects related to the development and validation of large-scale language assessments and has published many articles in academic journals. She serves on the editorial boards of international journals such as Language Testing, Language Assessment Quarterly, Classroom Discourse and The International Journal of Computer-Assisted Language Learning and Teaching, and of Chinese journals such as Foreign Language Testing and Teaching, Foreign Languages in China, Foreign Language World, Foreign Language Education in China and Contemporary Foreign Languages Studies.
Speaker: Jessica Wu, The Language Training and Testing Center
Jessica Wu holds a PhD in Language Testing. She is currently Program Director of the R&D Office at the Language Training and Testing Center (LTTC). She also teaches language testing courses at the graduate level. She has been deeply involved in the research and development of the General English Proficiency Test (GEPT), which targets English learners at all levels in Taiwan. She also serves as an adviser to the government on the development of L1 tests. She has published numerous articles and book chapters in the field of language testing and has presented her work at conferences around the world. She is currently President of the Asian Association for Language Assessment (AALA).
Abstract: In his “crystal-ball gazing into possible directions for the future”, Bachman (2000) stresses the importance of “two areas in which language testing and language testers must continue to grow and develop: the professionalization of the field, and validation research” (p. 18). In response to the call for professionalization, regional and national testing organizations have developed various codes or guidelines for language assessment practice (e.g., ALTE, 2001; JLTA, 2002; EALTA, 2006). Internationally, the International Language Testing Association’s Guidelines for Practice were officially implemented in 2007 (ILTA, 2007). Professionalization is an even more pressing need for the field of language assessment in Asia, given the increasing trend towards developing and using locally produced tests of English and the huge number of English language learners in the region (Cheng & Curtis, 2010; Weir & Wu, forthcoming; Yu & Jin, 2016). While acknowledging the usefulness and relevance of international guidelines to language assessment practice in Asia, we argue that there is a need for Asian language assessment organizations and associations, such as the Academic Forum on English Language Testing in Asia (AFELTA) and the Asian Association for Language Assessment (AALA), to collaborate and develop guidelines for practice that better fit the Asian context.
In this presentation, the rationale for such an endeavor is first provided through an analysis of macro-level social and educational contexts as well as micro-level features of language assessment in Asia. The analysis highlights the need for language assessment researchers and practitioners in Asia to improve the quality of their professional service, and the need for other stakeholders, such as teachers and educational policy-makers, to better understand how language assessment works. Second, principles for developing the guidelines are discussed. Chalhoub-Deville’s (2016) framework of validity for reform-driven accountability testing is considered to provide guiding principles for developing guidelines for large-scale assessment practice, while Turner and Purpura’s (2015) working framework of learning-oriented assessment (LOA) is deemed most suitable for guiding the development of guidelines for classroom-based language assessment practice. Third, a methodology for developing the guidelines is suggested, drawing on the experience of empirically developing a code of practice for language assessment in China (Fan, 2011; Fan & Jin, 2013; Fan, 2018). Finally, the presentation ends with a discussion of the challenges facing the daunting task of developing and implementing guidelines for practice for language assessment in Asia.