
Kognitionsbasierte Mensch-Technik Interaktion in Cyber-Physischen Systemen am Applikationsbeispiel „Thermisches Spritzen“ / Cognition-Based Human-Technology Interaction in Cyber-Physical Systems, Illustrated by the Application Example of Thermal Spraying

Bocklisch, Franziska, Drehmann, Rico, Lampke, Thomas 01 April 2020
This article outlines a methodological approach for analyzing and designing human-technology interactions that explicitly takes the cognitive processes of the human operator/user into account (cognition-based human-technology interaction, Ko-MTI). The approach is embedded in the conception of cyber-physical systems and explicitly extends it with the human perspective. Using an application example from surface engineering (thermal spraying), the first Ko-MTI phase, "holistic system analysis", is outlined and illustrated with results from an observational study using eye tracking.
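The observational study above relies on eye tracking; the abstract does not specify the analysis pipeline, but a common first step with such recordings is detecting fixations from raw gaze samples. A dispersion-threshold (I-DT) sketch is shown below; the thresholds and data format are illustrative assumptions, not values from the study:

```python
def _dispersion(window):
    # Spread of a gaze-point window: (max x - min x) + (max y - min y).
    xs = [p[0] for p in window]
    ys = [p[1] for p in window]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def detect_fixations(samples, max_dispersion=30.0, min_samples=5):
    # Dispersion-threshold (I-DT) fixation detection on (x, y) gaze samples.
    # Thresholds here are illustrative, not taken from the study.
    fixations = []
    start = 0
    while start + min_samples <= len(samples):
        end = start + min_samples
        if _dispersion(samples[start:end]) <= max_dispersion:
            # Grow the window while the points stay tightly clustered.
            while end < len(samples) and _dispersion(samples[start:end + 1]) <= max_dispersion:
                end += 1
            window = samples[start:end]
            cx = sum(p[0] for p in window) / len(window)
            cy = sum(p[1] for p in window) / len(window)
            fixations.append((start, end - 1, (cx, cy)))
            start = end
        else:
            start += 1
    return fixations
```

Fixation counts and dwell times per area of interest, as used in studies like this one, can then be derived from the returned centroids.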

Dopady zavedení web scale discovery systémů v akademických knihovnách / Impact of Web Scale Discovery Services in Academic Libraries

Čejka, Marek January 2016
This diploma thesis discusses a modern concept of information retrieval and search engines for libraries and other academic institutions. The concept, named "web scale discovery", comprises search engines whose main characteristics are simplicity and user-friendliness for end users while maintaining all the functional qualities of traditional research databases. Users can search a wide variety of international research databases, as well as an institution's local sources, combined within a large central index. The theoretical section presents definitions of web scale discovery that conceptually situate the new method of information retrieval within the field of information and library science. A graphic scheme of the basic functionality of a web scale discovery system is presented. Also discussed are the requirements for a modern discovery system, an overview of the current situation in the Czech Republic, and a brief characterization of commercially available discovery systems. The theoretical part concludes with a literature review of selected foreign research studying user satisfaction with the new solution, the impact on electronic and print resources in libraries, and usability testing. The practical part presents an original research study - usability testing of EBSCO Discovery...
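The central index described above can be pictured as a single inverted index built from records harvested from many sources and queried through one search box. The toy sketch below illustrates that model only; the record fields and matching rules are illustrative assumptions, not any vendor's API:

```python
from collections import defaultdict

class CentralIndex:
    # Toy central index: records harvested from multiple sources are merged
    # into one inverted index and searched with a single query, mimicking
    # the one-search-box model of web scale discovery services.
    def __init__(self):
        self.records = []
        self.inverted = defaultdict(set)

    def harvest(self, source, records):
        # Ingest records (dicts with a 'title' field) from one source.
        for rec in records:
            doc_id = len(self.records)
            self.records.append({**rec, "source": source})
            for term in rec["title"].lower().split():
                self.inverted[term].add(doc_id)

    def search(self, query):
        # Return records whose title contains every query term.
        terms = query.lower().split()
        ids = set.intersection(*(self.inverted.get(t, set()) for t in terms)) if terms else set()
        return [self.records[i] for i in sorted(ids)]
```

Real discovery services add relevance ranking, deduplication, and access control on top of this basic harvest-and-search cycle.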

Extraction d’une image dans une vidéo en vue de la reconnaissance du visage / Extraction of an image in order to apply face recognition methods

Pyun, Nam Jun 09 November 2015
Une vidéo est une source particulièrement riche en informations. Parmi tous les objets que nous pouvons y trouver, les visages humains sont assurément les plus saillants, ceux qui attirent le plus l’attention des spectateurs. Considérons une séquence vidéo dont chaque trame contient un ou plusieurs visages en mouvement. Ils peuvent appartenir à des personnes connues ou qui apparaissent de manière récurrente dans la vidéo. Cette thèse a pour but de créer une méthodologie afin d’extraire une ou plusieurs images de visage en vue d’appliquer, par la suite, un algorithme de reconnaissance du visage. La principale hypothèse de cette thèse réside dans le fait que certains exemplaires d’un visage sont meilleurs que d’autres en vue de sa reconnaissance. Un visage est un objet 3D non rigide projeté sur un plan pour obtenir une image. Ainsi, en fonction de la position relative de l’objectif par rapport au visage, l’apparence de ce dernier change. Considérant les études sur la reconnaissance de visages, on peut supposer que les exemplaires d’un visage les mieux reconnus sont ceux de face. Afin d’extraire les exemplaires les plus frontaux possibles, nous devons d’une part estimer la pose de ce visage. D’autre part, il est essentiel de pouvoir suivre le visage tout au long de la séquence. Faute de quoi, extraire des exemplaires représentatifs d’un visage perd tout son sens. Les travaux de cette thèse présentent trois parties majeures. Dans un premier temps, lorsqu’un visage est détecté dans une séquence, nous cherchons à extraire position et taille des yeux, du nez et de la bouche. Notre approche se base sur la création de cartes d’énergie locale principalement à direction horizontale. Dans un second temps, nous estimons la pose du visage en utilisant notamment les positions relatives des éléments que nous avons extraits. Un visage 3D a trois degrés de liberté : le roulis, le lacet et le tangage.
Le roulis est estimé grâce à la maximisation d’une fonction d’énergie horizontale globale au visage. Il correspond à la rotation qui s’effectue parallèlement au plan de l’image. Il est donc possible de le corriger pour qu’il soit nul, contrairement aux autres rotations. Enfin, nous proposons un algorithme de suivi de visage basé sur le suivi des yeux dans une séquence vidéo. Ce suivi repose sur la maximisation de la corrélation des cartes d’énergie binarisées ainsi que sur le suivi des éléments connexes de cette carte binaire. L’ensemble de ces trois méthodes permet alors tout d’abord d’évaluer la pose d’un visage qui se trouve dans une trame donnée puis de lier tous les visages d’une même personne dans une séquence vidéo, pour finalement extraire plusieurs exemplaires de ce visage afin de les soumettre à un algorithme de reconnaissance du visage. / The aim of this thesis is to create a methodology for extracting one or a few representative face images from a video sequence with a view to applying a face recognition algorithm. A video is a particularly rich source of information. Among all the objects present in a video, human faces are certainly the most salient. Let us consider a video sequence where each frame contains a face of the same person. The primary assumption of this thesis is that some samples of this face are better than others in terms of face recognition. A face is a non-rigid 3D object that is projected onto a plane to form an image. Hence, the face's appearance changes according to the relative positions of the camera and the face. Many works in the field of face recognition require faces that are as frontal as possible. To extract the most frontal face samples, on the one hand, we have to estimate the head pose. On the other hand, tracking the face is also essential; otherwise, extracting representative face samples is pointless. This thesis contains three main parts.
First, once a face has been detected in a sequence, we try to extract the positions and sizes of the eyes, the nose, and the mouth. Our approach is based on local energy maps, mainly with a horizontal direction. In the second part, we estimate the head pose using the relative positions and sizes of the salient elements detected in the first part. A 3D face has three degrees of freedom: the roll, the yaw, and the pitch. The roll is estimated by the maximization of a global energy function computed on the whole face. Since the roll corresponds to the rotation that is parallel to the image plane, it is possible to correct it to obtain a face with a null roll, contrary to the other rotations. In the last part, we propose a face tracking algorithm based on the tracking of the region containing both eyes. This tracking is based on the maximization of a similarity measure between two consecutive frames. Therefore, we are able to estimate the pose of the face in a given video frame and to link all the faces of the same person across a video sequence. Finally, we can extract several samples of this face in order to apply a face recognition algorithm to them.
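The roll-estimation step described above, maximizing a global horizontal energy function over candidate rotations, can be sketched as follows. The energy definition (variance of the row-projection profile, which peaks when horizontal structures such as brows, eyes, and mouth align with image rows) and the angle grid are illustrative assumptions, not the exact functions used in the thesis:

```python
import numpy as np
from scipy import ndimage

def horizontal_energy(image):
    # One simple global "horizontal energy": the variance of the row-sum
    # profile. Horizontal facial structures concentrate mass in a few rows,
    # so the profile is peaky when the roll has been cancelled.
    return float(np.var(image.sum(axis=1)))

def estimate_roll(face, candidates=range(-30, 31)):
    # Rotate the patch through candidate roll angles and keep the angle
    # whose rotation maximizes the horizontal energy; applying that angle
    # cancels the roll. The 1-degree grid is an illustrative choice.
    scores = [horizontal_energy(ndimage.rotate(face, a, reshape=False, order=1))
              for a in candidates]
    return float(list(candidates)[int(np.argmax(scores))])
```

On a patch rolled by +12 degrees, `estimate_roll` should return a value near -12, the correction that restores horizontal alignment.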

Une approche computationnelle de la complexité linguistique par le traitement automatique du langage naturel et l'oculométrie / A Computational Approach to Linguistic Complexity through Natural Language Processing and Eye-Tracking

Loignon, Guillaume 05 1900
Le manque d'intégration des sciences cognitives et de la psychométrie est régulièrement déploré – et ignoré. En mesure et évaluation de la lecture, une manifestation de ce problème est l’évitement théorique concernant les sources de difficulté linguistiques et les processus cognitifs associés à la compréhension de texte. Pour faciliter le rapprochement souhaité entre sciences cognitives et psychométrie, nous proposons d’adopter une approche computationnelle. En considérant les procédures informatiques comme des représentations simplifiées et partielles de théories cognitivistes, une approche computationnelle facilite l’intégration d’éléments théoriques en psychométrie, ainsi que l’élaboration de théories en psychologie cognitive. La présente thèse étudie la contribution d’une approche computationnelle à la mesure de deux facettes de la complexité linguistique, abordées à travers des perspectives complémentaires. La complexité intrinsèque du texte est abordée du point de vue du traitement automatique du langage naturel, avec pour objectif d'identifier et de mesurer les attributs (caractéristiques mesurables) qui modélisent le mieux la difficulté du texte. L'article 1 présente ALSI (pour Analyseur Lexico-syntaxique intégré), un nouvel outil de traitement automatisé du langage naturel qui extrait une variété d'attributs linguistiques, principalement issus de la recherche en psycholinguistique et en linguistique computationnelle. Nous évaluons ensuite le potentiel des attributs pour estimer la difficulté du texte. L'article 2 emploie ALSI et des méthodes d’apprentissage statistique pour estimer la difficulté de textes scolaires québécois. Dans le second volet de la thèse, la complexité associée aux processus de lecture est abordée sous l'angle de l'oculométrie, qui permet de faire des inférences quant à la charge cognitive et aux stratégies d’allocation de l’attention visuelle en lecture. 
L'article 3 décrit une méthodologie d'analyse des enregistrements d’oculométrie mobile à l'aide de techniques de vision par ordinateur (une branche de l'intelligence artificielle); cette méthodologie est ensuite testée sur des données de simulation. L'article 4 déploie la même méthodologie dans le cadre d’une expérience pilote d’oculométrie comparant les processus de lecture de novices et d'experts répondant à un test de compréhension du texte argumentatif. Dans l’ensemble, nos travaux montrent qu’il est possible d’obtenir des résultats probants en combinant des apports théoriques à une approche computationnelle mobilisant des techniques d’apprentissage statistique. Les outils créés ou perfectionnés dans le cadre de cette thèse constituent une avancée significative dans le développement des technologies numériques en mesure et évaluation de la lecture, avec des retombées à anticiper en contexte scolaire comme en recherche. / The lack of integration of cognitive science and psychometrics is commonly deplored - and ignored. In the assessment of reading, one manifestation of this problem is a theoretical avoidance regarding sources of text difficulty and cognitive processes underlying text comprehension. To facilitate the desired integration of cognitive science and psychometrics, we adopt a computational approach. By considering computational procedures as simplified and partial representations of cognitivist models, a computational approach facilitates the integration of theoretical elements in psychometrics, as well as the development of theories in cognitive psychology. This thesis studies the contribution of a computational perspective to the measurement of two facets of linguistic complexity, using complementary perspectives. Intrinsic text complexity is approached from the perspective of natural language processing, with the goal of identifying and measuring text features that best model text difficulty. 
Paper 1 introduces ISLA (Integrated Lexico-Syntactic Analyzer), a new natural language processing tool that extracts a variety of linguistic features from French text, primarily taken from research in psycholinguistics and computational linguistics. We then evaluate the features’ potential to estimate text difficulty. Paper 2 uses ISLA and statistical learning methods to estimate the difficulty of texts used in primary and secondary education in Quebec. In the second part of the thesis, the complexity associated with reading processes is addressed using eye-tracking, which allows inferences to be made about cognitive load and visual attention allocation strategies in reading. Paper 3 describes a methodology for analyzing mobile eye-tracking recordings using computer vision techniques (a branch of artificial intelligence); this methodology is then tested on simulated data. Paper 4 deploys the same methodology in the context of an eye-tracking pilot experiment comparing reading processes in novices and experts during an argumentative text comprehension test. Overall, our work demonstrates that it is possible to obtain convincing results by combining theoretical contributions with a computational approach using statistical learning techniques. The tools created or perfected in the context of this thesis constitute a significant advance in the development of digital technologies for the measurement and evaluation of reading, with easy-to-identify applications in both academic and research contexts.
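The feature-based approach of Papers 1 and 2 can be illustrated with a minimal sketch: two classic surface features (mean word length, mean sentence length) stand in for the much richer psycholinguistic feature set the analyzer extracts, and ordinary least squares stands in for the statistical learning methods. All names and values below are illustrative:

```python
import numpy as np

def surface_features(text):
    # Two classic surface proxies for text difficulty: mean word length
    # and mean sentence length (in words). Illustrative stand-ins only.
    sentences = [s for s in text.replace('!', '.').replace('?', '.').split('.') if s.strip()]
    words = text.split()
    mean_word_len = sum(len(w.strip('.,!?')) for w in words) / len(words)
    mean_sent_len = len(words) / len(sentences)
    return [mean_word_len, mean_sent_len]

def fit_difficulty_model(texts, difficulty):
    # Ordinary least squares: difficulty ~ intercept + surface features.
    X = np.array([[1.0] + surface_features(t) for t in texts])
    y = np.asarray(difficulty, float)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def predict_difficulty(coef, text):
    return float(np.dot([1.0] + surface_features(text), coef))
```

A production readability model would replace these two features with dozens of lexical, syntactic, and psycholinguistic attributes and a regularized learner.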

Physiological Reactions To Uncanny Stimuli: Substantiation Of Self-assessment And Individual Perception

Ballion, Tatiana 01 January 2012
There is abundant anecdotal evidence substantiating Mori's initial observation of the "uncanny valley", a point at which human comfort with non-human entities drops sharply (Mori, 1970), and the construct itself has a long-standing history in both robotics and psychology. Currently, many fields such as design, training, entertainment, and education make use of heuristic approaches to accommodate the anticipated needs of the user/consumer/audience in certain important aspects. This is due to the lack of empirical substantiation or, in some cases, the impossibility of rigorous quantification; one such area is the user's experience of uncanniness, a feeling of "eeriness" or "wrongness" when interacting with artefacts or environments. Uncanniness, however, continues to be defined and measured in a largely subjective way, and often after the fact; an experience or product's uncanny features are pointed out only after the item has been markedly avoided or complained about by the general public. These studies are among the first seeking to determine a constellation of personality traits and physiological responses that incline the user to have a more frequent or profound "uncanny" reaction when presented with stimuli meeting the criteria for a level of "eeriness". In Study 1, 395 adults were asked to categorize 200 images as uncanny, neutral, pleasant, or other. In Study 2, physiological and eye-tracking data were collected from twenty-two adults as they viewed uncanny, neutral, and pleasant images culled from Study 1. This research identifies components of the uncanny valley related to subjective assessment, personality factors (using the HEXACO and Anthropomorphic Tendencies Scale), and biophysical measures, and found that traits unique to Emotionality on the HEXACO inventory, combined with a form of anthropomorphism, show a relationship to the subjective experience of uncanny stimuli.
There is evidence that HEXACO type and forms of anthropomorphic perception mediate the biophysical expression and the subjective perception of the stimuli. In keeping with psychological hypotheses, the stimuli to which participants had the greatest response centered on death, the threat of death, or mismatched/absent facial features.

Investigating The Universality And Comprehensive Ability Of Measures To Assess The State Of Workload

Abich, Julian 01 January 2013
Measures of workload have been developed on the basis of various definitions: some are designed to capture the multi-dimensional aspects of a unitary resource pool (Kahneman, 1973), while others are developed on the basis of multiple resource theory (Wickens, 2002). Although many theory-based workload measures exist, others have often been constructed to serve the purposes of specific experimental tasks. As a result, it is likely that not every workload measure is reliable and valid for all tasks, much less for every domain. To date, no single measure that has been systematically tested across experimental tasks, domains, and other measures is considered a universal measure of workload. Most researchers would argue that multiple measures from various categories should be applied to a given task to comprehensively assess workload. Study 1 achieved its goal of establishing task load manipulations for two theoretically different tasks that induce distinct levels of workload, as assessed by both subjective and performance measures. The subjective responses support the standardization and validation of the tasks, and of their demands, for investigating workload. After investigating the use of subjective and objective measures of workload to identify a universal and comprehensive measure or set of measures, it can only be concluded from Study 2 that no such measure or set of measures exists. Arguably, this is not to say that one will never be conceived and developed, but at this time, none resides in the psychometric catalog. Instead, it appears that a more suitable approach is to customize a set of workload measures based on the task. The novel approach of assessing the sensitivity and comprehensive ability of conjointly utilizing subjective, performance, and physiological workload measures for theoretically different tasks within the same domain contributes to the theory by laying the foundation for improving the methodology for researching workload.
The applicable contribution of this project is a stepping-stone towards developing complex profiles of workload for use in closed-loop systems, such as human-robot team interaction. Identifying the best combination of workload measures enables human factors practitioners, trainers, and task designers to improve the methodology and evaluation of system designs, training requirements, and personnel selection.

3D Gaze Estimation on Near Infrared Images Using Vision Transformers / 3D Ögonblicksuppskattning på Nära Infraröda Bilder med Vision Transformers

Vardar, Emil Emir January 2023
Gaze estimation is the process of determining where a person is looking, which has recently become a popular research area due to its broad range of applications. For example, tools that estimate gaze are used in research, medical diagnosis, virtual and augmented reality, driver assistance systems, and much more; better products are therefore widely sought. Gaze estimation methods typically use images of only the eyes or of the whole face, since these are the most practical and convenient options. Recently, Convolutional Neural Networks (CNNs) have been appealing candidates for estimating gaze. Nevertheless, the recent success of Vision Transformers (ViTs) in image classification tasks has introduced a new potential alternative. Hence, this work investigates the potential of using ViTs to estimate gaze in Near-Infrared (NIR) images, in terms of both average error and computational complexity. Furthermore, this work examines not only pure ViTs but also other models, such as hybrid ViTs and CNN-Formers, which combine CNNs and ViTs. The empirical results showed that hybrid ViTs are the only models that can outperform state-of-the-art CNNs such as MobileNetV2 and ResNet-18 while maintaining computational complexity similar to ResNet-18. The results on hybrid ViTs indicate that the convolutional stem is their most crucial part: improved convolutional stems lead to better outcomes. Moreover, in this work, we defined a new training algorithm for hybrid ViTs, the hybrid Data-Efficient Image Transformer (DeiT) procedure, which has shown remarkable results. It is 3.5% better than the pretrained ResNet-18 while having the same time complexity. / Blickuppskattning är processen att uppskatta en persons blick, vilket nyligen har blivit ett populärt forskningsområde på grund av dess breda användningsområde.
Till exempel används verktyg för blickuppskattning inom forskning, medicinsk diagnos, virtuell och förstärkt verklighet, förarassistanssystem och mycket mer. Därför eftersträvas bättre produkter för blickuppskattning av många. Blickuppskattningsmetoder använder vanligtvis bilder av endast ögonen eller hela ansiktet för att uppskatta blicken, eftersom denna typ av metoder är de mest praktiska och lämpliga alternativen. På sistone har Convolutional Neural Networks (CNNs) varit tilltalande kandidater för att uppskatta blicken. Dock har den senaste framgången med Vision Transformers (ViTs) i bildklassificeringsuppgifter introducerat ett nytt potentiellt alternativ. Därför undersöker detta arbete potentialen i att använda ViTs för att uppskatta blicken på nära-infraröda (NIR) bilder. Undersökningen görs både i termer av medelfel och beräkningskomplexitet. Detta arbete undersöker dock inte enbart rena ViTs utan även andra modeller, som hybrida ViTs och CNN-Formers, vilka kombinerar CNNs och ViTs. De empiriska resultaten visade att hybrida ViTs är de enda modellerna som kan överträffa toppmoderna CNNs som MobileNetV2 och ResNet-18 samtidigt som de bibehåller liknande beräkningskomplexitet som ResNet-18. Resultaten på hybrida ViTs indikerar att faltningsstammen är deras mest avgörande del: ju bättre faltningsstam, desto bättre resultat. Dessutom definierade vi i detta arbete en ny träningsalgoritm för hybrida ViTs, som vi kallar den hybrida Data-Efficient Image Transformer-proceduren (DeiT), som har visat anmärkningsvärda resultat. Den är 3,5 % bättre än den förtränade ResNet-18 samtidigt som den har samma tidskomplexitet.
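The average error reported above is, in gaze estimation benchmarks, typically the mean angular error between predicted and ground-truth 3D gaze vectors. A minimal sketch of that metric (not the thesis's exact evaluation code) is:

```python
import numpy as np

def angular_error_deg(pred, target):
    # Angle in degrees between two 3D gaze direction vectors.
    pred = np.asarray(pred, float)
    target = np.asarray(target, float)
    cos = np.dot(pred, target) / (np.linalg.norm(pred) * np.linalg.norm(target))
    # Clip guards against floating-point values slightly outside [-1, 1].
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def mean_angular_error(preds, targets):
    # Average angular error over a batch: the usual benchmark number.
    return float(np.mean([angular_error_deg(p, t) for p, t in zip(preds, targets)]))
```

Because the metric depends only on direction, both vectors may be given at any magnitude; normalization happens inside the function.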

Augmenting High-Dimensional Data with Deep Generative Models / Högdimensionell dataaugmentering med djupa generativa modeller

Nilsson, Mårten January 2018
Data augmentation is a technique that can be performed in various ways to improve the training of discriminative models. The recent developments in deep generative models offer new ways of augmenting existing data sets. In this thesis, a framework for augmenting annotated data sets with deep generative models is proposed, together with a method for quantitatively evaluating the quality of the generated data sets. Using this framework, two data sets for pupil localization were generated with different generative models, including both well-established models and a novel model proposed for this purpose. The novel model was shown, both qualitatively and quantitatively, to generate the best data sets. A set of smaller experiments on standard data sets also revealed cases where this generative model could improve the performance of an existing discriminative model. The results indicate that generative models can be used to augment or replace existing data sets when training discriminative models. / Dataaugmentering är en teknik som kan utföras på flera sätt för att förbättra träningen av diskriminativa modeller. De senaste framgångarna inom djupa generativa modeller har öppnat upp nya sätt att augmentera existerande dataset. I detta arbete har ett ramverk för augmentering av annoterade dataset med hjälp av djupa generativa modeller föreslagits. Utöver detta så har en metod för kvantitativ evaluering av kvaliteten hos genererade dataset tagits fram. Med hjälp av detta ramverk har två dataset för pupillokalisering genererats med olika generativa modeller. Både väletablerade modeller och en ny modell utvecklad för detta syfte har testats. Den unika modellen visades både kvalitativt och kvantitativt generera de bästa dataseten. Ett antal mindre experiment på standardiserade dataset visade exempel på fall där denna generativa modell kunde förbättra prestandan hos en existerande diskriminativ modell.
Resultaten indikerar att generativa modeller kan användas för att augmentera eller ersätta existerande dataset vid träning av diskriminativa modeller.
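As a toy illustration of the augmentation idea above (not the deep generative models used in the thesis), one can fit a deliberately simple generative model, a multivariate Gaussian, to annotated data and append samples drawn from it. Every name below is illustrative:

```python
import numpy as np

class GaussianGenerator:
    # A deliberately simple stand-in for a deep generative model (GAN, VAE):
    # fit a multivariate Gaussian to the real data and sample from it.
    def fit(self, X):
        X = np.asarray(X, float)
        self.mean = X.mean(axis=0)
        # Small diagonal jitter keeps the covariance positive definite.
        self.cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])
        return self

    def sample(self, n, seed=0):
        rng = np.random.default_rng(seed)
        return rng.multivariate_normal(self.mean, self.cov, size=n)

def augment(X, generator, n_extra):
    # Append generated samples to the annotated set.
    return np.vstack([X, generator.sample(n_extra)])
```

The thesis's quantitative evaluation question then becomes: does a discriminative model trained on the augmented set outperform one trained on the original set?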

THEORY OF AUTOMATICITY IN CONSTRUCTION

Ikechukwu Sylvester Onuchukwu (17469117) 30 November 2023
Automaticity, an essential attribute of skill, is developed when a task is executed repeatedly with minimal attention, and it can have both positive (e.g., productivity, skill acquisition) and negative (e.g., accident involvement) implications for workers' performance. However, the implications of automaticity in construction are unknown despite their significance. To address this knowledge gap, this research aimed to examine methods that are indicative of the development of automaticity on construction sites and its implications for construction safety and productivity. The objectives of the dissertation include: 1) examining the development of automaticity during the repetitive execution of a primary roofing construction task and a concurrent secondary task (a computer-generated audio-spatial processing task) used to measure attentional resources; 2) using eye-tracking metrics to distinguish between automatic and nonautomatic subjects and to determine the significant factors contributing to the odds of automatic behavior; 3) determining which personal characteristics (such as personality traits and mindfulness dimensions) better explain the variability in workers' attention while developing automaticity. To achieve these objectives, 28 subjects were recruited to take part in a longitudinal study involving a total of 22 repetitive sessions of a simulated roofing task. The task involved the installation of 17 pieces of 25 ft² shingles on a low-sloped roof model that was 8 ft wide, 8 ft long, and 4 ft high, over one month in a laboratory. The collected data were analyzed using multiple statistical and data mining techniques such as repeated measures analysis of variance (RM-ANOVA), pairwise comparisons, principal component analysis (PCA), support vector machines (SVM), binary logistic regression (BLR), relative weight analysis (RWA), and advanced bootstrapping techniques to address the research questions.
First, the findings showed that as the experiment progressed, there were significant improvements in the mean automatic performance measures, such as the mean primary task duration, mean primary task accuracy, and mean secondary task score, over the repeated measurements (p-value < 0.05). These findings demonstrate that automaticity develops during repetitive construction activities, because these performance measures provide an index for assessing feature-based changes that are synonymous with automaticity development. Second, this study successfully used supervised machine learning methods, including SVM, to classify subjects into automatic and nonautomatic states based on their eye-tracking data (with an accuracy of 76.8%). Also, BLR was used to estimate the probability of exhibiting automaticity based on eye-tracking metrics and to ascertain the variables contributing significantly to it. Eye-tracking variables collected toward the safety harness and anchor, hammer, and work area AOIs were found to be significant predictors (p < 0.05) of the probability of exhibiting automatic behavior. Third, the results revealed that higher levels of agreeableness are significantly associated with larger changes in attention to productivity-related cues during automatic behavior. Additionally, higher levels of nonreactivity to inner experience significantly reduce the changes in attention to safety-related AOIs while developing automaticity. The findings of this study provide metrics to assess training effectiveness, and they can be used by practitioners to better understand the positive and negative consequences of developing automaticity, measure workers' performance more accurately, and personalize learning for workers.
In the long term, the findings of this study will also aid in improving human-AI teaming, since the AI will better understand the cognitive state of its human counterpart and can adapt to it more precisely.
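The BLR step above, estimating the odds of automatic behavior from eye-tracking metrics, can be sketched with a plain gradient-descent logistic regression. The feature values and names below are illustrative, not the study's data:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, lr=0.1, epochs=2000):
    # Plain gradient-descent logistic regression: a stand-in for the BLR
    # the study used to model the odds of automatic behavior from
    # eye-tracking metrics (e.g., dwell time on harness, hammer, work-area AOIs).
    X = np.column_stack([np.ones(len(X)), np.asarray(X, float)])
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        grad = X.T @ (sigmoid(X @ w) - y) / len(y)
        w -= lr * grad
    return w

def prob_automatic(w, x):
    # Probability that a subject with features x is in the automatic state.
    return float(sigmoid(w @ np.concatenate([[1.0], np.asarray(x, float)])))
```

The fitted coefficients play the role of the study's significant predictors: a positive weight means the corresponding metric raises the odds of automatic behavior.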

Exploring the Correlation Between Reading Ability and Mathematical Ability: KTH Master thesis report

Sol, Richard, Rasch, Alexander January 2023
Reading and mathematics are two essential subjects for academic success and cognitive development. Several studies show a correlation between the reading ability and mathematical ability of pupils (Korpershoek et al., 2015; Ní Ríordáin & O’Donoghue, 2009; Reikerås, 2006; Walker et al., 2008). The didactical part of this thesis presents a study investigating the correlation between reading ability and mathematical ability among pupils in upper secondary schools in Sweden. The study collaborated with Lexplore AB to measure reading ability using machine learning and eye-tracking. Mathematical ability was measured with Mathematics 1c grades and Stockholmsprovet, a diagnostic mathematics test. Although no correlation was found, the results yield several insights about selection and measurement that may improve future studies on the subject. This thesis finds that the result could have been affected by a biased selection of participants. It also suggests that the machine-learning and eye-tracking measure used in the study may not fully capture the concept of reading ability as defined in previous studies. The technological aspect of this thesis focuses on modifying and improving the model used to calculate users’ reading ability scores. As the model’s estimates tend to plateau after the fifth year of compulsory school, the study aims to maintain the same level of progression observed before this point. Previous research indicates that silent reading, being unconstrained by vocalization, is faster than reading aloud. To address this flattening of progression, a grid search algorithm was employed to adjust hyperparameters and assign appropriate weights to silent and aloud reading. The findings emphasize that reading aloud should be prioritized in the weighted average and the corresponding hyperparameters adjusted accordingly.
Furthermore, gathering more data for older pupils could improve the machine learning model by accounting for individual reading strategies. Introducing different word complexity factors could also enhance the model’s performance. / Läsning och matematik är två avgörande ämnen för akademisk framgång och kognitiv utveckling. Flera studier visar på ett samband mellan elevers läsförmåga och matematiska förmåga (Korpershoek et al., 2015; Ní Ríordáin & O’Donoghue, 2009; Reikerås, 2006; Walker et al., 2008). Den didaktiska delen av denna rapport presenterar en studie som undersöker sambandet mellan läsförmåga och matematisk förmåga hos elever på gymnasiet i Sverige. Studien samarbetade med Lexplore AB för att använda maskininlärning och ögonspårning för att mäta läsförmåga. Matematisk förmåga mättes genom matematikbetyg och Stockholmsprovet, som är ett diagnostiskt matematiktest. Trots att inget samband hittades ges insikter om urvalet och mätningarna som kan förbättra framtida studier i ämnet. Rapporten konstaterar att resultatet kan ha påverkats av ett snedvridet urval av deltagare. Dessutom föreslår rapporten att mätningen genom maskininlärning och ögonspårning som användes i studien kanske inte helt fångar upp begreppet läsförmåga som det definieras i tidigare studier. Teknikdelen av denna rapport fokuserar på att modifiera och förbättra modellen som används för att beräkna användarnas läsförmågepoäng. Eftersom modellens uppskattning tenderar att avplattas efter femte året i grundskolan, syftar studien till att bibehålla samma nivå av progression som observerats före denna punkt. Tidigare forskning indikerar att tyst läsning, som inte begränsas av att orden uttalas, är snabbare än högläsning. För att adressera denna avplattning av progressionen användes en rutnätssökningsalgoritm för att justera hyperparametrar och tilldela lämplig viktning åt tyst läsning och högläsning.
Resultaten betonar att högläsning bör prioriteras i det viktade medelvärdet och att motsvarande justeringar av hyperparametrarna bör implementeras. Dessutom kan insamling av mer data för äldre elever förbättra maskininlärningsmodellen genom att ta hänsyn till individuella lässtrategier. Införandet av olika faktorer för ordkomplexitet kan också förbättra modellens prestanda.
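The grid search over the silent/aloud weighting described above can be sketched as follows; the score arrays, the squared-error objective, and the 0.01-step grid are illustrative assumptions, not Lexplore's actual model:

```python
import numpy as np

def weighted_score(aloud, silent, w):
    # Weighted average of aloud and silent reading scores; w is the weight
    # given to reading aloud (the study found aloud should dominate).
    return w * np.asarray(aloud, float) + (1.0 - w) * np.asarray(silent, float)

def grid_search_weight(aloud, silent, target, grid=np.linspace(0.0, 1.0, 101)):
    # Pick the weight minimizing squared error against a reference ability
    # score, mirroring the hyperparameter grid search described in the thesis.
    errors = [np.mean((weighted_score(aloud, silent, w) - np.asarray(target, float)) ** 2)
              for w in grid]
    return float(grid[int(np.argmin(errors))])
```

With reference scores generated from a known 0.8/0.2 mix, the search should recover a weight of about 0.8 for reading aloud.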
