111 |
Zkouška z češtiny pro trvalý pobyt v České republice / The examination in the Czech language for permanent residence in the Czech Republic. Fimanová, Barbora. January 2019.
This thesis deals with the examination in the Czech language for permanent residence in the Czech Republic. The aim is to identify its basic characteristics and the success rate of its participants, and to determine whether it corresponds to the required A1 level. The theoretical part presents basic concepts of migration and integration and the conditions for granting permanent residence in different countries. The practical part elaborates the analysis itself, which consists mainly of a comparison of the exam with the publication Referenční popis češtiny pro účely zkoušky z českého jazyka pro trvalý pobyt v ČR - úrovně A1, A2 and an evaluation of its integration potential. The considerations include data on the average results and success rates of candidates in the period January 24, 2018 - April 24, 2019. From a didactic point of view, the thesis also deals with the available preparatory materials: the handbook Připravte se s námi na zkoušku z českého jazyka pro trvalý pobyt v ČR. Nový formát testu A1, the web portal www.cestina-pro-cizince.cz, and related documents. This part evaluates the information provided about the test and the extent to which the materials correspond to the exam. The results showed that the success rate of candidates in the period under review is only 61.82%. Despite the low results, we...
112 |
DEVELOPMENT OF FLUENCY, COMPLEXITY, AND ACCURACY IN SECOND LANGUAGE ORAL PROFICIENCY: A LONGITUDINAL STUDY OF TWO INTERNATIONAL TEACHING ASSISTANTS IN THE U.S. Qiusi Zhang (16641342). 27 July 2023.
I collected two types of data throughout Weeks 1-14, with the original purpose of enhancing teaching and learning in ENGL620. The data included weekly assignment recordings and weekly surveys.

The primary data were students' speech data, collected through 14 weekly timed speaking assessments conducted from Week 1 to Week 14. These assignments were made available on Monday at midnight and were required to be completed and submitted by Sunday at midnight. The assignments were delivered, and responses were collected, using Extempore (www.extemporeapp.com), a website specifically designed to support oral English assessment and practice.

To conduct more comprehensive assessments of students' performances, I incorporated two OEPT item types into the weekly assignments: PROS and CONS (referred to as "PC") and LINE GRAPH (referred to as "LG"). See Appendix B for the assignment items. The PC item presented challenging scenarios ITAs may encounter and required the test-takers to make a decision and discuss the pros and cons associated with that decision. An example item is "Imagine you have a student who likes to come to your office hours but often talks about something irrelevant to the course. What would you do in this situation? What are the pros and cons associated with the decision?" The LG item asked students to describe a line graph illustrating two or three lines and provide possible reasons behind those trends. It can be argued that the two tasks targeted slightly different language abilities and background knowledge. The two item types were selected because they represented two key skills that the OEPT tests: the PC task focused on stating one's decision and presenting an argument within a personal context, while the LG item assessed students' ability to describe visual information and engage in discussions about broader topics such as gender equality, employment, economic growth, and college policy. The PC and LG items are the most difficult items in the test (Yan et al., 2019); therefore, progress on the two tasks can be a good indicator of improvement in the speaking skills required in this context. All the items were either taken from retired OEPT items or developed by the researcher following the specifications for OEPT item development. In particular, the items were designed to avoid assuming specific prior knowledge and to ensure that students could discuss them without excessive cognitive load.

For each task, the students were allocated 2 minutes for preparation and a maximum of 2 minutes to deliver their response to the assigned topic. The responses were monologic, resembling short classroom presentations. During the preparation time, the participants were permitted to take notes. Each item allowed only one attempt, which aimed to capture students' online production of speech and their use of language resources. Table 2 presents the descriptive statistics of the responses.

The PC prompt was deliberately kept consistent for Week 2 and Week 12, which were randomly selected as time points at the beginning and end of the semester. This deliberate choice of using the same prompt at these two distinct stages serves multiple purposes. First, it provides a valuable perspective for analyzing growth over time, adding depth to the study results and conclusions through additional evidence and triangulation. Second, it addresses one of the specific challenges identified by Ortega and Iberri-Shea (2005) in studies involving multiple data collection points, as maintaining consistency in the prompt can minimize potential variations in task difficulty or topic-related factors.

After completing each speaking assignment, the students were asked to rate the level of difficulty of each item on a scale of 1 (Very Easy) to 5 (Very Difficult). Additionally, they were asked to fill out a weekly survey using Qualtrics. The Qualtrics survey contained six questions related to the frequency of their English language use outside of the classroom and their focus on language skills in the previous and upcoming week. These questions were considered interesting as potential contributing factors to changes in their performances throughout the semester. Refer to Appendix C for the survey questions.
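The passage above describes only the data collection, but the study's focus on fluency, complexity, and accuracy implies quantifying each weekly response. The sketch below is a minimal, hypothetical illustration of how one common fluency proxy (speech rate in words per minute) and a rough complexity proxy might be computed from already-transcribed responses; the data layout, function names, and example transcripts are assumptions for illustration, not the study's actual pipeline.

```python
# Minimal sketch (assumed data layout, not the study's actual analysis code):
# each weekly response is a transcript string plus its speaking duration in seconds.
from statistics import mean

def speech_rate_wpm(transcript: str, duration_sec: float) -> float:
    """Fluency proxy: words produced per minute of speaking time."""
    return len(transcript.split()) / (duration_sec / 60.0)

def mean_unit_length(transcript: str) -> float:
    """Very rough complexity proxy: mean words per sentence-like unit."""
    units = [u for u in transcript.replace("?", ".").split(".") if u.strip()]
    return mean(len(u.split()) for u in units) if units else 0.0

# Hypothetical example: track one learner's measures at two time points.
weekly_responses = {
    2: ("I would tell the student to come back during class time ...", 105.0),
    12: ("In this situation I would first set a clear agenda for the meeting ...", 98.0),
}
for week, (text, dur) in sorted(weekly_responses.items()):
    print(week, round(speech_rate_wpm(text, dur), 1), round(mean_unit_length(text), 1))
```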
113 |
An English for Specific Purposes Curriculum to Prepare English Learners to Become Nursing Assistants. Romo, Abel Javier. 11 July 2006.
This project details the design and implementation of an English for Specific Purposes (ESP) curriculum to prepare English learners to become Certified Nursing Assistants (CNAs) at Utah Valley Regional Medical Center (UVRMC) in Provo, Utah. UVRMC, which is owned by Intermountain Health Care (IHC), employs a group of about 40 non-native speakers of English. They work as housekeepers and are interested in learning English and thereby acquiring new skills they could use in better jobs to improve their quality of life. UVRMC would like these employees to obtain additional education in order to provide them with better employment opportunities. UVRMC allowed two graduate students in the Department of Linguistics and English Language at Brigham Young University to design and implement an ESP course to help UVRMC housekeepers improve their language skills in preparation to apply for and participate in a Certified Nursing Assistant (CNA) course offered through IHC University. This report covers the linguistic needs analysis of the participants, a situational analysis of UVRMC in terms of the support given to the curriculum, the design of goals and objectives, the syllabus, the teaching of the syllabus, some materials development, and the assessment of language learning. It also describes the instruments used to obtain information during each step of the curriculum's design and implementation, analyzes that information, presents results, assesses the curriculum's efficacy, and explains the implications for other ESP curricula in the field of nursing and other scientific fields.
114 |
La evaluación de la competencia pragmática en lengua extranjera a través de una prueba adaptativa. Martín Marchante, Beatriz. 07 January 2016.
[EN] ABSTRACT
Most standardized tests of English as a second language (ESL) and/or English as a foreign language (EFL) are high-stakes tests, and a growing number of them offer two versions: a traditional pen-and-paper version and a computerized one. Some examining bodies also implement adaptive tests. The technological advance of these standardized, usually commercial, tests is clear and beneficial at first sight. However, some important aspects of the curriculum, such as pragmatic competence, are not usually measured by such tests. The only computerized tests that include items measuring this competence in their written sections are the Next Generation TOEFL iBT test and the Oxford Online Placement Test (OOPT).
This thesis aims to check whether the OOPT, which is used to certify levels of EFL, is a valid indicator of the abilities of interest, such as pragmalinguistic competence, and to show whether this ability is measured appropriately for the intended use. Moreover, it delves into the reasons why a group of 44 students taking a first-year course at the Facultat de Magisteri (Universitat de València) miss the questions in the OOPT that specifically assess their pragmalinguistic competence.
Unlike previous studies, this thesis analyzes not only which test-method facets influence these students' production of errors, but also which personal characteristics are involved in their error production, all from a cognitive and metalinguistic perspective and according to the students' own perception. For this purpose, a retrospective questionnaire was designed and administered to the group. Several quantitative analyses (correlation, multiple regression, and correspondence analysis) were carried out to analyze, respectively, the concurrent validity of the test, the weight of pragmatic ability in the assessment, and the reasons for errors. In addition, a descriptive analysis of the pragmatics items was conducted to check the content validity of the pragmatics part of the OOPT. The results clearly indicate which abilities require greater attention from both teachers and designers of this type of test. The analysis indicates that the pragmatics block, along with the listening block, accounts for the highest number of errors made by examinees. Moreover, the main source of error highlighted by the examinees is unfamiliarity with certain lexical units contained in the pragmatics items.
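As a rough illustration of the kind of quantitative analyses mentioned above (correlation and multiple regression over examinees' scores), the following sketch correlates a test total with an external criterion measure and regresses the total on section scores. The file name, column names, and criterion variable are hypothetical assumptions; this is not the thesis's actual analysis code.

```python
# Hedged sketch: correlation and multiple regression over hypothetical score data.
import pandas as pd
import statsmodels.api as sm
from scipy.stats import pearsonr

# Assumed layout: one row per examinee, with OOPT section scores and an
# external criterion score used as a rough concurrent-validity check.
scores = pd.read_csv("oopt_scores.csv")  # hypothetical file

# Concurrent validity: correlation between the OOPT total and the criterion.
r, p = pearsonr(scores["oopt_total"], scores["criterion"])
print(f"Pearson r = {r:.2f} (p = {p:.3f})")

# Weight of each block in the overall result: regress the total on section scores.
X = sm.add_constant(scores[["use_of_english", "listening", "pragmatics"]])
model = sm.OLS(scores["oopt_total"], X).fit()
print(model.summary())
```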
Finally, some proposals consistent with the results are presented in order to help improve the quality of adaptive language testing, and some ideas are suggested for overcoming current limitations in the teaching of pragmalinguistic competence in formal academic EFL contexts.
Martín Marchante, B. (2015). La evaluación de la competencia pragmática en lengua extranjera a través de una prueba adaptativa [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/59437
115 |
Maturitní zkouška z českého jazyka v úpravě pro neslyšící ve světle testování češtiny jako cizího jazyka / The Adaptation of the Czech Language Leaving Examination for the Deaf in the Light of Testing Czech as a Foreign Language. Andrejsek, Jan. January 2015.
This diploma thesis focuses on the adaptation of the Czech Language Leaving Examination for the deaf, which deaf students in the Czech Republic take as part of their school-leaving examination. As background to this main theme, it also examines the current state of research on reading literacy. The main part of the thesis presents this examination in detail, together with the results of deaf students and other statistical information; examples of test items and students' written work are also attached. Another part of the thesis describes examinations of Czech as a foreign language and gives detailed information about the most important of these exams. In conclusion, the author explores how the language skills of deaf students in other countries are examined in their school-leaving examinations.
116 |
Automatic Assessment of L2 Spoken English. Bannò, Stefano. 18 May 2023.
In an increasingly interconnected world where English has become the lingua franca of business, culture, entertainment, and academia, the number of learners of English as a second language (L2) has been steadily growing. This has contributed to an increasing demand for automatic spoken language assessment systems, both for formal settings and for practice situations in Computer-Assisted Language Learning. One common misunderstanding about automated assessment is the assumption that machines should replicate the human process of assessment. Instead, computers are programmed to identify, extract, and quantify features in learners' productions, which are subsequently combined and weighted in a multidimensional space to predict a proficiency level or grade. In this regard, transferring human assessment knowledge and skills into an automatic system is a challenging task, since this operation has to take into account the complexity and the specificities of the proficiency construct. This PhD thesis presents research on methods and techniques for the automatic assessment of, and feedback on, L2 spoken English, focusing mainly on the application of deep learning approaches. In addition to overall proficiency grades, the main forms of feedback explored in this thesis are feedback on grammatical accuracy and assessment related to particular aspects of proficiency (e.g., grammar, pronunciation, rhythm, and fluency). The first study explores the use of written data and the impact of features extracted through grammatical error detection on proficiency assessment, while the second illustrates a pipeline that starts from disfluency detection and removal, passes through grammatical error correction, and ends with proficiency assessment. Grammar, as well as rhythm, pronunciation, and lexical and semantic aspects, is also considered in the third study, which investigates whether systems targeting specific facets of proficiency can be used analytically when only holistic scores are available. Finally, the last two studies investigate the use of self-supervised speech representations for both holistic and analytic proficiency assessment. While aiming to enhance the performance of state-of-the-art automatic systems, the present work pays particular attention to the validity and interpretability of both holistic and analytic assessment, and intends to pave the way to a deeper and more insightful understanding of automatic systems for speaking assessment and feedback.
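As a hedged illustration of the last approach mentioned (self-supervised speech representations for proficiency assessment), the sketch below mean-pools wav2vec 2.0 hidden states into an utterance-level vector and feeds it to a small regression head that predicts a holistic score. The checkpoint, pooling strategy, and regression head are assumptions chosen for illustration, not the thesis's actual architecture.

```python
# Hedged sketch: self-supervised speech features -> holistic proficiency score.
import torch
import torch.nn as nn
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base")
encoder = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base")

class ProficiencyRegressor(nn.Module):
    """Mean-pool frame-level SSL representations, then predict a single score."""
    def __init__(self, hidden_size: int = 768):
        super().__init__()
        self.head = nn.Sequential(nn.Linear(hidden_size, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, waveform: torch.Tensor) -> torch.Tensor:
        inputs = extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")
        with torch.no_grad():                            # encoder kept frozen in this sketch
            frames = encoder(**inputs).last_hidden_state  # shape: (1, T, hidden_size)
        pooled = frames.mean(dim=1)                       # utterance-level representation
        return self.head(pooled).squeeze(-1)              # predicted holistic score

# Example with 5 seconds of dummy 16 kHz audio.
model = ProficiencyRegressor()
audio = torch.randn(16000 * 5)
print(model(audio))
```

In practice the head would be trained on rated responses, and analytic variants could use separate heads for facets such as pronunciation or rhythm, but those details are beyond this sketch.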