11 |
Terminografia didático-pedagógica: metodologia para elaboração de recursos voltados ao ensino de inglês para fins específicos / Didactic-pedagogical terminography: a methodology for developing resources for teaching English for specific purposes. Fadanelli, Sabrina Bonqueves. January 2017.
This thesis concerns the development of support materials for teaching reading skills in English in technical and technological courses in Brazil, particularly in the areas of Electrotechnics and Electrical Engineering. The work has two aims: a) to introduce Didactic-Pedagogical Terminography (TD-P), a new methodology for developing teaching materials for ESP (English for Specific Purposes) teachers; and b) to verify with ESP teachers whether they consider the methodology useful and replicable. The methodology is based on the theoretical precepts of ESP teaching, Terminology from a textual perspective, the Socio-Cognitive Theory of Terminology, and Corpus Linguistics. Following the TD-P steps, lexical and grammatical data were extracted from a corpus composed of the textual genre most relevant to the Electrotechnics/Electrical Engineering teaching environment, in order to build the prototype of a glossary designed for work with this specialized area. First, 30 datasheets covering 11 different electrical components were compiled, totalling 21,467 tokens. Term candidates were extracted with the tools AntConc (ANTHONY, 2004) and TermoStat (DROUIN, 2003). A needs analysis followed: the same datasheets were distributed to 108 students of technical and undergraduate courses in the domain, who marked the lexical constructions whose meaning they found unclear. These data were then contrasted with the analysis of a specialist in the application domain. The specific differences that emerged from this comparison guided the development of the prototype terminographic tool, GlossElectric. To assess whether other professionals consider the TD-P methodology replicable, a video about it was produced and distributed, together with a questionnaire, to 11 teachers specialized in ESP. The results point to a satisfactory degree of replicability, with an average of 84% of participants considering the methodology highly reproducible in their work environment.
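The extraction step described above can be pictured with a small corpus-keyness sketch. AntConc and TermoStat are standalone tools, so the snippet below is not their implementation; it is a minimal, hypothetical Python illustration of frequency-based term-candidate extraction (Dunning's log-likelihood keyness against a reference corpus), with invented stand-in strings in place of real datasheet text.

```python
# Hypothetical sketch only -- not AntConc or TermoStat. It ranks words from a
# "specialized" corpus by log-likelihood keyness against a reference corpus.
import math
import re
from collections import Counter

def tokenize(text):
    # Lowercase word tokenizer; real datasheets would first need PDF extraction.
    return re.findall(r"[a-z]+", text.lower())

# Toy corpora: datasheet-like text vs. general English (invented stand-ins).
specialized = ("The transistor collector current depends on the base voltage. "
               "Maximum collector current is 100 mA at 25 degrees.")
reference = ("The weather today depends on the wind. "
             "The maximum temperature is mild at noon.")

spec_counts = Counter(tokenize(specialized))
ref_counts = Counter(tokenize(reference))
spec_total = sum(spec_counts.values())
ref_total = sum(ref_counts.values())

def log_likelihood(word):
    # Dunning's log-likelihood keyness of `word` in the specialized corpus.
    a, b = spec_counts[word], ref_counts.get(word, 0)
    e1 = spec_total * (a + b) / (spec_total + ref_total)
    e2 = ref_total * (a + b) / (spec_total + ref_total)
    ll = a * math.log(a / e1)  # a >= 1 for every word we iterate over
    if b > 0:
        ll += b * math.log(b / e2)
    return 2 * ll

candidates = sorted(spec_counts, key=log_likelihood, reverse=True)
print(candidates[:5])  # term candidates such as 'collector', 'current', ...
```

In the TD-P workflow such a ranked list would only be a starting point: the student needs analysis and the specialist review described above decide which candidates actually enter the glossary.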
12 |
Increasing Reproducibility Through Provenance, Transparency and Reusability in a Cloud-Native Application for Collaborative Machine Learning. Ekström Hagevall, Adam; Wikström, Carl. January 2021.
The purpose of this thesis was to develop new features in the cloud-native and open-source machine learning platform STACKn, aiming to strengthen the platform's support for conducting reproducible machine learning experiments through provenance, transparency and reusability. Adhering to the definition of reproducibility as the ability of independent researchers to exactly duplicate scientific results with the same material as in the original experiment, two concepts were explored for this goal: 1) increased support for standardized textual documentation of machine learning models and their corresponding datasets; and 2) increased support for provenance, tracking the lineage of machine learning models by making code, data and metadata readily available and stored for future reference. We set out to investigate to what degree these features could increase reproducibility in STACKn, both when used in isolation and when combined. Once the features had been implemented through an exhaustive software engineering process, an evaluation was conducted to quantify the degree of reproducibility that STACKn supports. The evaluation showed that the implemented features, especially the provenance features, substantially increase the possibilities for conducting reproducible experiments in STACKn, as opposed to when none of the developed features are used. While the employed evaluation method was not entirely objective, these features are clearly a good first step toward meeting current recommendations and guidelines on how computational science can be made reproducible.
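As a rough illustration of the provenance concept described above (not STACKn's actual API; every name and field here is invented), a lineage record can pair a model with content hashes of the exact code and data that produced it, appended to an immutable log:

```python
# Hypothetical provenance record -- illustrative only, not STACKn's implementation.
import hashlib
import json
import time

def sha256_bytes(data: bytes) -> str:
    # Content hash used as an immutable reference to code or data.
    return hashlib.sha256(data).hexdigest()

# In a real system these bytes would be the training script and dataset files.
code_bytes = b"print('training script stand-in')"
data_bytes = b"x,y\n1,2\n3,4\n"

record = {
    "model": "demo-classifier",                # invented model name
    "timestamp": time.time(),
    "code_sha256": sha256_bytes(code_bytes),
    "data_sha256": sha256_bytes(data_bytes),
    "metadata": {"framework": "scikit-learn", "accuracy": 0.93},
}

# Append-only log: past records are never rewritten, only added to,
# so a model can always be traced back to its exact inputs.
with open("provenance.jsonl", "a") as log:
    log.write(json.dumps(record) + "\n")
```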
13 |
Different or the Same? Determination of Discriminatory Power Threshold and Category Formation for Vague Linguistic Frequency Expressions. Bocklisch, Franziska. 24 July 2019.
In psychological research, many questionnaires use verbal response scales with vague linguistic terms (e.g., frequency expressions). The words' meanings can be formalized and evaluated using fuzzy membership functions (MFs), which allow the construction of distinct and equidistant response scales. The discriminatory power value of MFs indicates how distinct the functions, and hence the verbal expressions, are. The present manuscript interrogates the threshold of discriminatory power necessary to indicate a sufficient difference in meaning. In an empirical validation procedure, participants (N = 133) (1) estimated three correspondence values for each verbal expression to determine MFs, and (2) rated the similarity of words in pairwise comparisons. Results show a non-linear relationship between discriminatory power and similarity, yield fuzzy MFs for the expressions, and identify the sought threshold value for discriminatory power. Implications for the selection of verbal expressions and the construction of verbal categories in questionnaire response scales are discussed.
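For readers unfamiliar with fuzzy MFs, the sketch below shows how three correspondence values (left bound, peak, right bound) can define a triangular MF, and how the distinctness of two neighboring frequency expressions might be quantified. The paper's exact discriminatory-power formula is not reproduced here; the overlap-based proxy and all parameter values are illustrative assumptions.

```python
# Illustrative sketch: triangular fuzzy membership functions and an
# overlap-based distinctness proxy (NOT the paper's exact formula).
import numpy as np

def triangular_mf(x, left, peak, right):
    # Membership rises from `left` to 1 at `peak`, then falls to 0 at `right`.
    return np.clip(np.minimum((x - left) / (peak - left),
                              (right - x) / (right - peak)), 0.0, 1.0)

x = np.linspace(0, 100, 10001)
# Hypothetical mean estimates for two frequency expressions on a 0-100 scale:
mf_sometimes = triangular_mf(x, 20, 40, 65)
mf_often = triangular_mf(x, 45, 70, 90)

# Shared area relative to the smaller MF: 0 = identical meaning, 1 = disjoint.
overlap = np.trapz(np.minimum(mf_sometimes, mf_often), x)
smaller = min(np.trapz(mf_sometimes, x), np.trapz(mf_often, x))
distinctness = 1 - overlap / smaller
print(f"distinctness proxy: {distinctness:.2f}")
```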
14 |
The Meaningfulness of Effect Sizes in Psychological Research: Differences Between Sub-Disciplines and the Impact of Potential Biases. Schäfer, Thomas; Schwarz, Marcus A. 15 April 2019.
Effect sizes are the currency of psychological research. They quantify the results of a study to answer the research question and are used to calculate statistical power. The interpretation of effect sizes—when is an effect small, medium, or large?—has been guided by the recommendations Jacob Cohen gave in his pioneering writings starting in 1962: Either compare an effect with the effects found in past research or use certain conventional benchmarks. The present analysis shows that neither of these recommendations is currently applicable. From past publications without pre-registration, 900 effects were randomly drawn and compared with 93 effects from publications with pre-registration, revealing a large difference: Effects from the former (median r = 0.36) were much larger than effects from the latter (median r = 0.16). That is, certain biases, such as publication bias or questionable research practices, have caused a dramatic inflation in published effects, making it difficult to compare an actual effect with the real population effects (as these are unknown). In addition, there were very large differences in the mean effects between psychological sub-disciplines and between different study designs, making it impossible to apply any global benchmarks. Many more pre-registered studies are needed in the future to derive a reliable picture of real population effects.
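The inflation mechanism the authors describe can be demonstrated with a small simulation (an illustration only, not the paper's analysis; the true effect, sample size, and number of studies are invented). If only significant results are "published", the median published correlation lands well above the true population value:

```python
# Illustrative publication-bias simulation -- parameters are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
true_r, n_per_study, n_studies = 0.16, 50, 5000

all_effects, published = [], []
for _ in range(n_studies):
    # Bivariate-normal data with the true population correlation.
    xy = rng.multivariate_normal([0, 0], [[1, true_r], [true_r, 1]],
                                 size=n_per_study)
    r, p = stats.pearsonr(xy[:, 0], xy[:, 1])
    all_effects.append(r)
    if p < 0.05:  # publication filter: significant results only
        published.append(r)

print(f"median of all effects:      {np.median(all_effects):.2f}")  # close to 0.16
print(f"median of 'published' ones: {np.median(published):.2f}")    # inflated
```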
15 |
Adjusting Sample Sizes for Different Categories of Embodied Cognition Research. Skulmowski, Alexander; Rey, Günter Daniel. 20 June 2019.
Introduction
Research in the field of embodied cognition addresses a variety of questions stemming from the idea that cognition is deeply connected with bodily aspects such as perception and action (Barsalou, 1999, 2008). However, some embodiment studies have been found to exhibit problems such as non-replicable results (Lakens, 2014). With this article, we wish to accomplish three aims: to exemplify informative ways of categorizing embodied cognition research; to provide guidelines for identifying problematic study designs; and to suggest solutions for potentially problematic designs.
Within the field of embodied cognition, several aspects are investigated, as outlined by Wilson (2002). One example of embodiment mentioned by Wilson (2002) is gesturing (for an overview on gesturing, see Hostetter and Alibali, 2008). Embodied cognition theory can be used to analyze the relation between gestures and mental processes (e.g., Hostetter and Alibali, 2008). Furthermore, there is a debate about whether language and meaning are grounded in perceptual contents experienced through the body (e.g., Borghi et al., 2004; for an overview on grounded cognition, see Barsalou, 2010). Besides research on cognition, principles of embodied cognition have been applied to fields such as social psychology (see Meier et al., 2012, for an overview) and educational psychology (see Paas and Sweller, 2012, for an overview). For instance, research on embodiment in the context of social cognition has provided evidence for the claim that bodily sensations such as weight can alter judgments of importance (e.g., Ackerman et al., 2010). In educational psychology, one application of embodiment theory is the design of interactive learning environments (e.g., Johnson-Glenberg et al., 2014).
In response to the current replication crisis in psychology (for discussions, see Pashler and Wagenmakers, 2012; Maxwell et al., 2015), several solutions have been proposed to improve the quality of psychological research (e.g., Chambers, 2013; Simons, 2014; LeBel, 2015; for overviews, see Ferguson, 2015; Zwaan et al., 2017). Benjamin et al. (2018) argue for changing the standard 0.05 alpha level, proposing to lower the default alpha for novel findings in psychology to 0.005. Importantly, sample size and statistical power have been described as pivotal contributors to replicable results (Fraley and Vazire, 2014).
Multiple types of embodied cognition research face the problem of non-replicable results, as discussed in the literature (e.g., Rabelo et al., 2015). Perugini et al. (2014) present a method for calculating sample sizes for replication studies and confirmatory research that takes into account that observed effect sizes may be inaccurate estimates. They suggest basing sample size calculations on an effect size taken from the lower bound of the confidence interval computed for the observed effect size (Perugini et al., 2014). Another method is presented by Simonsohn (2015), who argues that sample size calculations for replication studies should not merely reuse the effect sizes reported in the original research. He explains that by increasing the sample size by a factor of 2.5, a replication study can assess whether an effect is too small to have been appropriately captured in the original study (Simonsohn, 2015). This method has already been used in a recent replication study on embodied cognition effects (Ronay et al., 2017). We suggest using one of the aforementioned methods of sample size calculation for studies involving embodiment-based manipulation types that are known for potential problems; a sketch of both methods follows below. In the following, we will present three important aspects that can be used to check whether an embodied cognition study design needs amendments such as an increased sample size.
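A hedged sketch of the two strategies just mentioned, using the standard Fisher-z approximation for correlation tests; the observed effect and original sample size are invented, and the 60% confidence level shown for the Perugini et al. (2014) approach is one common choice rather than a fixed rule:

```python
# Illustrative sample-size planning for a replication -- numbers are invented.
import math
from scipy.stats import norm

def required_n_for_r(r, alpha=0.05, power=0.80):
    # Approximate n for a two-sided test of a correlation via Fisher's z.
    z_r = 0.5 * math.log((1 + r) / (1 - r))
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    return math.ceil(((z_a + z_b) / z_r) ** 2 + 3)

r_obs, n_orig = 0.30, 40  # hypothetical original study

# Perugini et al. (2014): plan for the lower bound of a confidence interval
# around the observed effect (here a two-sided 60% CI, one common choice).
z = 0.5 * math.log((1 + r_obs) / (1 - r_obs))
se = 1 / math.sqrt(n_orig - 3)
r_lower = math.tanh(z - norm.ppf(0.80) * se)
print("safeguard n:", required_n_for_r(r_lower))   # much larger than for r_obs

# Simonsohn (2015): multiply the original sample size by 2.5.
print("Simonsohn n:", math.ceil(2.5 * n_orig))     # 100
```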
16 |
Genetic architecture of complex disease in humans: a cross-population exploration. Martínez Marigorta, Urko. 12 November 2012.
The aetiology of common diseases is shaped by the effects of genetic and environmental factors. Considerable effort has been devoted to unravelling the genetic basis of disease, in the hope that it will help to develop new therapeutic treatments and to achieve personalized medicine. With the development of high-throughput genotyping technologies, hundreds of association studies have described many loci associated with disease. However, the depiction of disease architecture remains incomplete. The aim of this work is to perform exhaustive comparisons across human populations to address pressing open questions. Our results provide new insights into the allele frequencies of risk variants, their sharing across populations, and the likely architecture of disease.
17 |
La technique de la mise en abyme dans l'oeuvre romanesque d'Umberto Eco / The 'mise en abyme' technique in Umberto Eco's fictional work. Craciun, Marinela-Denisa. 09 February 2016.
At once an artistic and literary device and an intellectual reflection, mise en abyme is one of the novelist Umberto Eco's favorite creative strategies. Some authors have used the technique merely to create "simple" games of mirrors (as did the French writers known as the Nouveaux Romanciers). In Umberto Eco's work, mise en abyme is meant to bring out both the intelligibility and the structuring of the work. It is a recursive principle for generating fractal narrative figures and forms: the reasoning, so to speak, that underlies the creation of a fictional universe that is self-reflective par excellence. In a very concise (but essential) formulation, mise en abyme is the narrative device of inserting one or more stories within the Story; by reproducing the latter's characteristics, the embedded stories illustrate and explain it and highlight the theme or themes of the work.