371 |
Transforming First Language Learning Platforms towards Adaptivity and Fairness / Models, Interventions and Architecture. Rzepka, Nathalie, 10 October 2023
In this work, I show in a large-scale experiment the effects of adding adaptive elements to an online learning platform. I discuss how the current research on online learning platforms in L1 acquisition is mainly descriptive and how only a few adaptive learning environments are in use in practice. In this dissertation, I develop a concept for integrating adaptive learning into L1 online learning platforms and analyse whether it leads to improved learning experiences. I focus on the effectiveness and fairness of predictions and interventions, as well as on a software architecture suitable for use in practice. First, I develop different prediction models, which are particularly useful in blended classroom scenarios. Subsequently, I develop an architectural concept (adaptive learning as a service) to transform existing learning platforms into adaptive learning platforms using microservices. Based on this, a large-scale online controlled experiment with more than 11,000 users and more than 950,000 submitted spelling tasks is carried out. In the final study, the prediction models are examined for algorithmic bias by comparing different machine learning models, varying metrics of fairness, and multiple demographic categories. Furthermore, I test various bias mitigation techniques. The success of bias mitigation approaches depends on the demographic group and metric; however, in-process methods have proven particularly successful. This work provides a holistic view of adaptive learning in online L1 learning. By examining several key aspects (prediction models, interventions, architecture, and fairness), the work allows conclusions to be drawn for both research and practice.
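The abstract names an architectural concept, adaptive learning as a service, in which prediction runs as a microservice behind existing platforms. As a rough illustration of that idea only, here is a minimal sketch of such a service in Python/Flask; the route, payload fields, toy model, and intervention threshold are all hypothetical assumptions, not the dissertation's actual API.

```python
# Hypothetical "adaptive learning as a service" endpoint: a host learning
# platform POSTs a learner's recent task history and receives a success
# prediction plus a suggested intervention flag.
from flask import Flask, request, jsonify

app = Flask(__name__)

def predict_success(history):
    """Toy stand-in for a trained model: fraction of recent tasks solved."""
    if not history:
        return 0.5  # no evidence yet, so assume average difficulty
    return sum(1 for task in history if task["correct"]) / len(history)

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json(silent=True) or {}
    p = predict_success(payload.get("recent_tasks", []))
    # The host platform decides how to render the intervention
    # (hint, easier exercise, encouragement, ...).
    return jsonify({"success_probability": p,
                    "intervene": p < 0.4})  # illustrative threshold

if __name__ == "__main__":
    app.run(port=5000)
```

Keeping the prediction behind a small HTTP contract like this is what lets a non-adaptive platform become adaptive without rewriting its core, which is the point of the microservice framing.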
|
372 |
Predicting the Effects of Sedative Infusion on Acute Traumatic Brain Injury Patients. McCullen, Jeffrey Reynolds, 09 April 2020
Healthcare analytics has traditionally relied upon linear and logistic regression models to address clinical research questions, mostly because they produce highly interpretable results [1, 2]. These results contain valuable statistics such as p-values, coefficients, and odds ratios that provide healthcare professionals with knowledge about the significance of each covariate and exposure for predicting the outcome of interest [1]. Thus, they are often favored over newer deep learning models that are generally more accurate but less interpretable and scalable. However, the statistical power of linear and logistic regression is contingent upon satisfying modeling assumptions, which usually requires altering or transforming the data, thereby hindering interpretability. Generalized additive models are therefore useful for overcoming this limitation while still preserving interpretability and accuracy.
The major research question in this work involves investigating whether particular sedative agents (fentanyl, propofol, versed, ativan, and precedex) are associated with different discharge dispositions for patients with acute traumatic brain injury (TBI). To address this, we compare the effectiveness of various models (traditional linear regression (LR), generalized additive models (GAMs), and deep learning) in providing guidance for sedative choice. We evaluated the performance of each model using metrics for accuracy, interpretability, scalability, and generalizability. Our results show that the new deep learning models were the most accurate, while the traditional LR and GAM models maintained better interpretability and scalability. The GAMs provided enhanced interpretability through pairwise interaction heat maps and generalized well to other domains and class distributions, since they do not require satisfying the modeling assumptions used in LR. By evaluating the model results, we found that versed was associated with better discharge dispositions while ativan was associated with worse discharge dispositions. We also identified other significant covariates, including age, the Northeast region, the Acute Physiology and Chronic Health Evaluation (APACHE) score, the Glasgow Coma Scale (GCS), and ethanol level. The versatility of versed may account for its association with better discharge dispositions, while ativan may have negative effects when used to facilitate intubation. Additionally, most of the significant covariates pertain to the clinical state of the patient (APACHE, GCS, etc.), whereas most non-significant covariates were demographic (gender, ethnicity, etc.). Though we found that deep learning improved slightly over LR and generalized additive models after fine-tuning the hyperparameters, the deep learning results were less interpretable and therefore not ideal for making the aforementioned clinical insights. However, deep learning may be preferable in cases with greater complexity and more data, particularly in situations where interpretability is not as critical. Further research is necessary to validate our findings, investigate alternative modeling approaches, and examine other outcomes and exposures of interest. / Master of Science / Patients with Traumatic Brain Injury (TBI) often require sedative agents to facilitate intubation and to prevent further brain injury by reducing anxiety and decreasing the level of consciousness. It is important for clinicians to choose the sedative that is most conducive to optimizing patient outcomes. Hence, the purpose of our research is to provide guidance to aid this decision. Additionally, we compare different modeling approaches to provide insight into their relative strengths and weaknesses.
To achieve this goal, we investigated whether exposure to particular sedatives (fentanyl, propofol, versed, ativan, and precedex) was associated with different hospital discharge locations for patients with TBI. From best to worst, these discharge locations are home, rehabilitation, nursing home, remains hospitalized, and death. Our results show that versed was associated with better discharge locations and ativan with worse discharge locations. The fact that versed is often used for alternative purposes may account for its association with better discharge locations. Further research is necessary to investigate this, as well as the possible negative effects of using ativan to facilitate intubation. We also found that discharge disposition is influenced by age, the Northeast region, and other variables pertaining to the clinical state of the patient (severity-of-illness metrics, etc.). By comparing the different modeling approaches, we found that the new deep learning methods were difficult to interpret but provided a slight improvement in performance after optimization. Traditional methods such as linear regression allowed us to interpret the model output and make the aforementioned clinical insights. However, generalized additive models (GAMs) are often more practical because they can better accommodate other class distributions and domains.
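As a hedged illustration of the modeling comparison described above (not the author's code), the sketch below fits a logistic regression and a generalized additive model to synthetic TBI-like data; the feature set, the data-generating process, and the use of the pygam library are all assumptions made for the example.

```python
# Compare LR vs. GAM on synthetic data shaped like the study's covariates.
import numpy as np
from sklearn.linear_model import LogisticRegression
from pygam import LogisticGAM, s, f

rng = np.random.default_rng(0)
n = 1000
age = rng.uniform(18, 90, n)
gcs = rng.integers(3, 16, n).astype(float)    # Glasgow Coma Scale, 3-15
versed = rng.integers(0, 2, n).astype(float)  # sedative exposure flag

# Synthetic outcome: good discharge more likely with higher GCS,
# nonlinear effect of age (illustrative only, not clinical truth).
logit = 0.3 * (gcs - 9) - 0.002 * (age - 50) ** 2 / 10 + 0.4 * versed
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([age, gcs, versed])
lr = LogisticRegression().fit(X, y)
gam = LogisticGAM(s(0) + s(1) + f(2)).fit(X, y)  # smooth terms + factor

print("LR accuracy :", lr.score(X, y))
print("GAM accuracy:", gam.accuracy(X, y))
```

The GAM's smooth terms are what let it capture nonlinear covariate effects (like the age term here) without the data transformations that plain LR assumptions would force.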
|
373 |
Dashboard versus Google Analytics : Advantages of a custom-built dashboard compared with Google Analytics' own interface. Platakidou, Déspina; Dahlgren, David, January 2024
Understanding a user's behavior on a website can be of great help to the company behind the website when making decisions about future development. Web-analytics tools such as Google Analytics 4 can be used to accomplish such tasks. The goal of this thesis is to determine whether Google Analytics 4 is a suitable tool for managing data, with high usability, or whether a custom-designed dashboard is preferred. To determine which approach has the higher usability, a custom-made dashboard was created within the framework React. Five tasks that Nobia, the company behind the request, found important were performed both in Google Analytics 4 and in the custom-designed dashboard. A literature study was conducted with the help of earlier studies in the form of surveys, reports, documentation and articles; the results gave insight into earlier views on web-analytics tools like Google Analytics 4. Thereafter followed semi-structured interviews in combination with a questionnaire including the System Usability Scale (SUS) to measure the result. The study showed that the users at Nobia favoured the custom-designed dashboard. In the interviews, comments were made on their thoughts about both systems. The main differences concerned the time it took to navigate each system as well as their usability. The users found the custom-made dashboard far more intuitive, with everything concentrated in one place.
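The study measures usability with the System Usability Scale (SUS). For reference, this is the standard published SUS scoring rule (not code from the thesis): odd-numbered items contribute (response - 1), even-numbered items contribute (5 - response), and the sum is scaled by 2.5 onto a 0-100 range.

```python
def sus_score(responses):
    """responses: list of 10 answers on a 1-5 scale, item 1 first."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # 0-based even index = odd item
        for i, r in enumerate(responses)
    )
    return total * 2.5

# Example: a fairly positive questionnaire
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # -> 85.0
```

Scores above roughly 68 are conventionally read as above-average usability, which is how SUS results from the two systems can be compared directly.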
|
374 |
IDE-based learning analytics for assessing introductory programming skill. Beck, Phyllis J., 08 August 2023
Providing personalized feedback on students' current level of strategic knowledge, within the context of the natural programming environment, through IDE-based learning analytics would transform learning outcomes for introductory programming students. However, sufficient insight into the programming process has previously been inaccessible, owing to the lack of sufficiently complex and scalable data collection methods, and of a wider variety of metrics, for understanding programming metacognition and the full programming process.
This research developed a custom-built web-based IDE and event compression system to investigate two of the five components of a five-dimensional model of cognition for programming skill estimation: (1) Design Cohesion and (2) Development Path over Time. The IDE captured programming process data for 25 participants, each of whom completed two programming sessions requiring both a design and a code phase. For Design Cohesion, the alignment between flowchart design and source code implementation was investigated and manually classified. The classification process produced three Design Cohesion metrics: Design Cohesion Level, Granularity Level, and Granularity Score. The relationship between programming skill and Design Cohesion was explored using the newly developed metrics and a case-study approach. For the Development Path over Time, the compressed programming events were used to create a Timeline of Events for each participant, which was manually examined for distinct clusters of programming patterns and behavior, such as execution behavior and debugging patterns. Custom visualizations were developed to display the timelines. The timelines were then used to compare programming behaviors for participants with different programming skill levels. The results of the investigation into Design Cohesion and Development Path over Time contribute to the fundamental understanding of differences between beginner, intermediate, and advanced programmers and of the contexts in which specific programming difficulties arise. This work produced insight into students' programming processes that can be used to advance the model of cognition for programming skill estimation and to provide personalized feedback supporting the development of programming skills and expertise. Additionally, this research produced tools and metrics that can be used in future studies examining programming metacognition.
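A minimal sketch of the kind of event compression the study describes: consecutive low-level IDE events of the same type are collapsed into single timeline entries with a count and time span. The event schema here is an assumption for illustration; the dissertation's actual compression rules are not reproduced.

```python
# Collapse runs of same-type IDE events into a compact Timeline of Events.
from itertools import groupby

raw_events = [  # (event_type, timestamp_in_seconds)
    ("edit", 0.0), ("edit", 1.2), ("edit", 2.0),
    ("run", 10.5), ("error", 11.0),
    ("edit", 15.0), ("edit", 16.3),
    ("run", 30.0),
]

def compress(events):
    timeline = []
    for kind, group in groupby(events, key=lambda e: e[0]):
        times = [t for _, t in group]
        timeline.append({"event": kind, "count": len(times),
                         "start": times[0], "end": times[-1]})
    return timeline

for entry in compress(raw_events):
    print(entry)
```

Patterns such as long edit runs punctuated by run/error pairs are exactly the debugging and execution behaviors a compressed timeline makes visible at a glance.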
|
375 |
Time Is On My Side . . . Or Is It?: Time of Day and Achievement in Asynchronous Learning Environments. Gilleland, Angela, 13 May 2016
Previous research suggests that the optimal time of day (TOD) for cognitive function in young adults occurs in the afternoon and evening (Allen et al., 2008; May et al., 1993). The implication is that college students may be more successful if they schedule classes and tests in the afternoon and evening, but in asynchronous learning environments, “class” and tests take place at whatever TOD (or night) a student might choose. The problem is that there may be a disadvantage for students choosing to take tests at certain TODs. As educators, we need to be aware of potential barriers to student success and be prepared to offer guidance to students.
This research study found a significant negative correlation between TOD and assessment scores on tests taken between 16:01 and 22:00 hours, as measured in military time. While academic performance on asynchronous assessments was high at 16:00 hours, student performance diminished significantly by 22:00 hours. When efforts were made to mitigate the extraneous variables related to test complexity and individual academic achievement, the effect TOD had on assessment achievement during this time period was comparable to the effect of test complexity on that achievement. However, when a small subset of the data was analyzed, neither GPA nor TOD could be used to predict student scores on tests taken between 16:01 and 22:00 hours. Finally, individual circadian arousal types (evening, morning and neutral) (Horne & Ostberg, 1976) and the actual TOD at which students took tests were analyzed to determine whether synchrony, the match between circadian arousal type and peak cognitive performance, existed. The synchrony effect could not be confirmed among morning-type students taking this asynchronous online course, but evidence suggests that synchrony could have contributed to the success of evening-type students.
The implication of this study is that online instructors, instructional designers and students should consider TOD a factor affecting achievement in asynchronous online courses. The results are intended to motivate further research into TOD effects in asynchronous online settings, and to offer guidance to online students as well as to online instructors and instructional designers faced with setting deadlines and advising students on how to be successful when learning online.
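The core analysis reduces to correlating submission time with score inside the 16:01-22:00 window. A sketch of that computation follows; the data are synthetic, with the negative trend built in purely to illustrate the method, not to reproduce the study's numbers.

```python
# Correlate time of day (decimal clock hours) with assessment scores.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
hours = rng.uniform(16.0, 22.0, 200)                     # submission times
scores = 90 - 2.5 * (hours - 16) + rng.normal(0, 6, 200)  # synthetic decline

r, p = pearsonr(hours, scores)
print(f"r = {r:.2f}, p = {p:.4f}")  # expect a significant negative r
```

Controlling for test complexity and GPA, as the study does, would amount to partialling those covariates out before interpreting r.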
|
376 |
Nisse i Hökarängen 2.0 : A study of web analytics and gatekeeping within the journalistic field. Brolin, Pär; Svedström, Alexandra, January 2016
Online newspapers have enabled a new way for the audience to interact with the news. They have also created new ways for journalists to study audience behavior. Through web analytics, the public's opportunities to get involved in the creation of news increase. At the same time, the journalistic field faces a difficult economic situation in which a shrinking readership is a disturbing fact. The following study has focused on explaining and developing an understanding of how online journalism is evolving along with the emergence of web analytics. The aim of the study was to examine how journalistic standards and ways of working evolve in combination with web analytics. A case study was carried out, including nine interviews with online journalists from the genres of general news, sports and culture. The study's results demonstrate that online journalists do not see web analytics as a necessary step in the process of news selection; rather, metrics generated by web analytics, such as click counts and visitor traffic, are used as complementary interpretation variables. Two factors that contribute to online journalists' use of web analytics within the journalistic field have been identified: a perception of economic instability at the organizational level, and the desire to maintain readership by producing the content journalists know the audience wants. In addition, three forces that inhibit the use of web analytics have been demonstrated: the older generation of journalists' traditional approach to the creation of news, a conservative view of news value, and journalists' desire to keep their credibility and perceived level of knowledge stable.
Another contribution is that aspects like these should not be studied without regard to genre context; in particular, cultural journalism should be studied as a separate field within the journalistic field.
|
377 |
Usability evaluation framework for e-commerce websites in developing countries. Hasan, Layla, January 2009
The importance of evaluating the usability of e-commerce websites is well recognised, and this area has attracted research attention for more than a decade. Nearly all the studies that evaluated the usability of e-commerce websites employed either user-based (i.e. user testing) or evaluator-based (i.e. heuristic evaluation) usability evaluation methods; no research has employed software-based methods (i.e. Google Analytics) in the evaluation of such sites. Furthermore, the studies which employed user testing and/or heuristic evaluation in the evaluation of the usability of e-commerce websites did not offer detail about the benefits and drawbacks of these methods with respect to the identification of specific types of usability problems. This research developed a methodological framework for the usability evaluation of e-commerce websites which involves user testing and heuristic evaluation methods together with Google Analytics software. The framework was developed by comparing the benefits and drawbacks of these methods in terms of the specific areas of usability problems that they could or could not identify on e-commerce websites. The framework employs Google Analytics software as a preliminary step to provide a quick, easy and cheap indication of general potential usability problem areas on an e-commerce website and its specific pages. The framework then enables evaluators to choose other methods to provide in-depth detail about specific problems on the site. For instance, the framework suggests that user testing is good for identifying specific major usability problems related to four areas: navigation, design, the purchasing process, and accessibility and customer service, while heuristic evaluation is good for identifying a large number of specific minor usability problems related to eight areas, including navigation, internal search, the site architecture, the content, the design, accessibility and customer service, inconsistency, and missing capabilities. The framework also suggests that heuristic evaluation is good at identifying major security and privacy problems. The framework was developed based on an extensive evaluation of the effectiveness of the three methods in identifying specific usability problems in three case studies (e-commerce websites) in Jordan. This highlighted the usefulness of the methods and therefore helps e-commerce retailers to determine the usability method that best matches their needs. The framework was tested, and the results indicated its usefulness in raising awareness of usability and usability evaluation methods among e-commerce retailers in Jordan. This will help them address usability in the design of their websites, thus helping them to survive, grow and achieve success.
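The framework's first step uses analytics data as a cheap triage for potential problem pages. The sketch below mimics that step on an exported page-view log rather than calling the Google Analytics API: pages where an unusually high share of sessions end (a high exit rate) are flagged for deeper user testing or heuristic evaluation. The log format and the 0.6 threshold are illustrative assumptions.

```python
# Flag pages with high exit rates as candidate usability problem areas.
from collections import Counter

# (session_id, page) pairs in visit order
log = [
    (1, "/home"), (1, "/product/42"), (1, "/checkout"),
    (2, "/home"), (2, "/checkout"),
    (3, "/home"), (3, "/product/42"),
    (4, "/product/42"),
]

views = Counter(page for _, page in log)
last_page = {}
for session, page in log:       # the final page seen wins per session
    last_page[session] = page
exits = Counter(last_page.values())

for page in views:
    exit_rate = exits[page] / views[page]
    flag = "  <- inspect further" if exit_rate > 0.6 else ""
    print(f"{page:14s} views={views[page]} exit_rate={exit_rate:.2f}{flag}")
```

A high exit rate is only an indication, not a diagnosis: a checkout confirmation page exits legitimately, which is exactly why the framework follows this step with user testing or heuristic evaluation.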
|
378 |
What are the Potential Impacts of Big Data, Artificial Intelligence and Machine Learning on the Auditing Profession? Evett, Chantal, 01 January 2017
To maintain public confidence in the financial system, it is essential that most financial fraud is prevented and that incidents of fraud are detected and punished. The responsibility for uncovering creatively implemented fraud is placed, in large part, on auditors. Recent advancements in technology are helping auditors turn the tide against fraudsters. Big Data, made possible by the proliferation, widespread availability and amalgamation of diverse digital data sets, has become an important driver of technological change. Big Data analytics are already transforming the traditional audit. Sampling and testing a limited number of random samples has turned into a much more comprehensive audit that analyzes the entire population of transactions within an account, allowing auditors to flag and investigate all sorts of potentially fraudulent anomalies that were previously invisible. Artificial intelligence (AI) programs, typified by IBM's Watson, can mimic the thought processes of the human mind and will soon be adopted by the auditing profession. Machine learning (ML) programs, with the ability to change when exposed to new data, are developing rapidly and may take over many of the decision-making functions currently performed by auditors. The SEC has already implemented pioneering fraud-detection software based on AI and ML programs. The evolution of the auditor's role has already begun. Current accounting students must understand that the traditional auditing skillset will no longer be sufficient. While facing a future with fewer auditing positions available due to increased automation, auditors will need training for roles that are more data-analytical and computer-science based.
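One classic population-level screen of the kind the essay alludes to (chosen here as an illustration; the essay does not prescribe a specific test) is Benford's law: in many natural financial datasets, leading digit d appears with probability log10(1 + 1/d), and large deviations can flag fabricated figures for follow-up.

```python
# Compare observed leading-digit frequencies against Benford's law.
import math
from collections import Counter

def benford_rows(amounts):
    """Return (digit, observed, expected) for leading digits 1-9."""
    digits = [int(str(abs(a)).lstrip("0.")[0]) for a in amounts if a]
    counts = Counter(digits)
    n = len(digits)
    return [(d, counts[d] / n, math.log10(1 + 1 / d)) for d in range(1, 10)]

amounts = [123.45, 1890.00, 234.10, 129.99, 310.00, 145.00,
           920.50, 188.20, 265.00, 101.10, 475.60, 133.70]
for d, obs, exp in benford_rows(amounts):
    print(f"digit {d}: observed {obs:.2f}, Benford {exp:.2f}")
```

Because the test runs over every transaction rather than a sample, it exemplifies the shift the essay describes from sampling-based to population-based audit analytics.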
|
379 |
Systematising glyph design for visualization. Maguire, Eamonn James, January 2014
The digitalisation of information now affects most fields of human activity. From the social sciences to biology to physics, the volume, velocity, and variety of data exhibit exponential growth trends. With such rates of expansion, efforts to understand and make sense of datasets of such scale, however driven and directed, progress only at an incremental pace. The challenges are significant. For instance, the ability to display an ever-growing amount of data is physically bound by the dimensions of the average-sized display. A synergistic interplay between statistical analysis and visualisation approaches outlines a path for significant advances in the field of data exploration. We can turn to statistics to provide principled guidance for prioritisation of information to display. Using statistical results, and combining knowledge from the cognitive sciences, visual techniques can be used to highlight salient data attributes. The purpose of this thesis is to explore the link between computer science, statistics, visualisation, and the cognitive sciences, in order to define and develop more systematic approaches towards the design of glyphs. Glyphs represent the variables of multivariate data records by mapping those variables to one or more visual channels (e.g., colour, shape, and texture). They offer a unique, compact solution to the presentation of a large amount of multivariate information. However, composing a meaningful, interpretable, and learnable glyph can pose a number of problems. The first of these problems lies in the subjectivity involved in the process of data-to-visual-channel mapping, and in the organisation of those visual channels to form the overall glyph. Our first contribution outlines a computational technique to help systematise many of these otherwise subjective elements of the glyph design process. For visual information compression, common patterns (motifs) in time series or graph data, for example, may be replaced with more compact visual representations. Glyph-based techniques can provide such representations, which can help users find common patterns more quickly and, at the same time, bring attention to anomalous areas of the data. However, replacing any data with a glyph is not going to make tasks such as visual search easier. A key problem is the selection of semantically meaningful motifs with the potential to compress large amounts of information. A second contribution of this thesis is a computational process for the systematic design of such glyph libraries and their subsequent glyphs. A further problem in the glyph design process is their evaluation. Evaluation is typically a time-consuming, highly subjective process; moreover, domain experts are not always plentiful, so obtaining statistically significant evaluation results is often difficult. A final contribution of this work is to investigate whether there are areas of evaluation that can be performed computationally.
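A minimal sketch of the core glyph idea the abstract describes: each record's variables are mapped to visual channels (position, size, colour, shape). This is a generic illustration of variable-to-channel mapping, not the thesis's systematic design procedure; the records and mapping choices are invented for the example.

```python
# Map each multivariate record onto visual channels of a scatter glyph.
import matplotlib.pyplot as plt

records = [
    {"x": 1, "y": 2, "magnitude": 30, "category": "A", "score": 0.2},
    {"x": 2, "y": 1, "magnitude": 80, "category": "B", "score": 0.7},
    {"x": 3, "y": 3, "magnitude": 50, "category": "A", "score": 0.9},
]
shape_for = {"A": "o", "B": "s"}  # category -> marker shape

fig, ax = plt.subplots()
for r in records:
    ax.scatter(r["x"], r["y"],
               s=r["magnitude"] * 5,                    # size channel
               c=[[r["score"], 0.2, 1 - r["score"]]],   # colour channel (RGB)
               marker=shape_for[r["category"]])         # shape channel
ax.set_xlabel("x")
ax.set_ylabel("y")
plt.show()
```

The subjectivity the thesis targets is visible even here: whether score should drive colour rather than size is exactly the kind of mapping decision its computational technique aims to systematise.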
|
380 |
Trajectory Analytics. Santiteerakul, Wasana, 05 1900
The numerous surveillance videos recorded by a single stationary wide-angle-view camera motivate the use of a moving point as the representation of each small object in a wide video scene. The sequence of positions of each moving point can be used to generate a trajectory containing both the spatial and temporal information of the object's movement. In this study, we investigate how the relationship between two trajectories can be used to recognize multi-agent interactions. For this purpose, we present a simple set of qualitative, atomic, disjoint trajectory-segment relations which can be utilized to represent the relationships between two trajectories. Given a pair of adjacent concurrent trajectories, we segment the trajectory pair to get the ordered sequence of related trajectory segments. Each pair of corresponding trajectory segments is then assigned a token associated with the trajectory-segment relation, which leads to the generation of a string called a pairwise trajectory-segment relationship sequence. From a group of pairwise trajectory-segment relationship sequences, we utilize an unsupervised learning algorithm, specifically k-medians clustering, to detect interesting patterns that can be used to classify low-level multi-agent activities. We evaluate the effectiveness of the proposed approach by comparing the activity classes predicted by our method to the actual classes from a ground-truth set obtained using crowdsourcing. The results show that the relationships between a pair of trajectories can signify low-level multi-agent activities.
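A sketch of the pipeline the abstract describes: each pair of concurrent trajectories becomes a string of relation tokens, and the strings are clustered. The three atomic relations used here (A = approaching, P = parallel, D = diverging) are illustrative stand-ins, since the thesis defines its own relation set; for brevity the strings are reduced to token histograms and clustered with a tiny k-medians (L1 distance, per-coordinate median), echoing the study's choice of k-medians.

```python
# Cluster pairwise trajectory-segment relationship sequences.
import numpy as np

sequences = ["AAPP", "APPP", "DDPA", "DDDP", "PPAA", "DPDD"]
tokens = "APD"

def histogram(seq):
    """Reduce a token string to normalized per-token frequencies."""
    return np.array([seq.count(t) / len(seq) for t in tokens])

X = np.stack([histogram(s) for s in sequences])

def k_medians(X, k=2, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Assign each point to the nearest center under L1 distance.
        labels = np.argmin(np.abs(X[:, None] - centers[None]).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = np.median(X[labels == j], axis=0)
    return labels

print(dict(zip(sequences, k_medians(X))))  # sequence -> cluster id
```

Histogramming discards token order, which the study's sequences retain; a faithful reproduction would cluster with a string distance instead, but the grouping step is the same.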
|