31.
Early Warning System of Students Failing a Course : A Binary Classification Modelling Approach at Upper Secondary School Level / Förebyggande Varningssystem av elever med icke godkänt betyg : Genom applicering av binär klassificeringsmodell inom gymnasieskolan
Karlsson, Niklas; Lundell, Albin. January 2022.
Only 70% of Swedish students graduate from upper secondary school within the given time frame. Earlier research has shown that unfinished degrees disadvantage the individual student, policy makers and society. A first step towards preventing dropouts is to identify students who are about to fail courses. The purpose of the thesis is therefore to identify tendencies that indicate whether a student will pass a course or not. In addition, the thesis describes the development of an Early Warning System that can be applied to signal which students need additional support from a professional teacher. The Random Forest algorithm was used as a binary classification model of a failing grade against a passing grade. The data in the study consist of approximately 700 students from an upper secondary school in the Stockholm municipality. The chosen method follows a Design Science Research Methodology, which allows the stakeholders to be involved in the process. The results showed that the most dominant indicators for correct classification were absence, previous grades and results from the mathematics diagnosis (Stockholmsprovet, a municipal mathematics diagnostic test). Furthermore, variables from the Learning Management System were predominant indicators when that system was also used by teachers. The prediction accuracy of the algorithm indicates a positive tendency towards correct classification. On the other hand, the small number of data points casts doubt on whether an Early Warning System can be applied in its current state. One conclusion is therefore that further studies need to increase the number of data points; suggestions for addressing this are given in the Discussion. Moreover, the results are analysed together with a review of the potential Early Warning System from a didactic perspective, and the ethical aspects of the thesis are discussed thoroughly.
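To make the modelling approach concrete, the sketch below shows how a Random Forest pass/fail classifier could be set up around the indicator types named in the abstract (absence, previous grades, a mathematics diagnostic, LMS activity). The synthetic data, feature names, weights and split sizes are illustrative assumptions only and do not reproduce the thesis's dataset or pipeline.

```python
# Illustrative sketch only: a Random Forest pass/fail classifier built on the
# indicator types named in the abstract (absence, previous grades, maths diagnostic,
# LMS activity). The synthetic data and feature names are assumptions for demonstration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n_students = 700  # roughly the sample size mentioned in the abstract

# Hypothetical features: absence rate (0-1), previous grade (F=0 .. A=5),
# mathematics diagnostic score (0-100), LMS activity count.
X = np.column_stack([
    rng.uniform(0, 0.6, n_students),
    rng.integers(0, 6, n_students),
    rng.uniform(0, 100, n_students),
    rng.poisson(20, n_students),
])
# Synthetic label: 1 = fails the course, 0 = passes (a made-up generating rule).
risk = 2.5 * X[:, 0] - 0.3 * X[:, 1] - 0.01 * X[:, 2] + rng.normal(0, 0.3, n_students)
y = (risk > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

print(classification_report(y_test, model.predict(X_test), target_names=["pass", "fail"]))

# Feature importances indicate which signals the forest leans on most, in the same
# spirit as the abstract's ranking of absence, previous grades and the diagnostic.
for name, imp in zip(["absence", "previous_grade", "maths_diagnostic", "lms_activity"],
                     model.feature_importances_):
    print(f"{name}: {imp:.2f}")
```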
32.
Further development and optimisation of the CNN classification algorithm of Alfrödull for more accurate aerial image detection of decentralised solar energy systems : A study on how the performance of neural networks can be improved through additional training data, image preprocessing, class balancing and sliding window classification
Lindvall, Erik. January 2024.
The global use of solar power is growing at an unprecedented rate, making the need to accurately track the energy generation of decentralised solar energy systems (SES) more and more relevant. The purpose of this thesis is to further develop a binary image classifier for the simulation system framework known as Alfrödull, which will be used to detect and segment SES from aerial images in order to simulate the energy generation within a given Swedish municipality on an hourly basis. The project focuses on improving the Alfrödull classifier through four different analyses. The first examines how additional training data from publicly available datasets affects model performance. The second investigates how the model can be improved through various image pre-processing techniques. The third studies how the model can be improved by balancing the training datasets to compensate for the low number of positive images, as well as by using model ensembles for joint classification. Finally, the fourth analysis employs a sliding window approach to classify overlapping image tiles. The results show that training data which represents the environment the model will be used in is crucial, that image augmentation policies can significantly improve model performance, that compensating for class imbalance and using ensemble methods positively affects model performance, and that a sliding window approach to classifying overlapping images significantly decreases the number of missed SES at the cost of clusters of negative images falsely classified as positive (false positives). In conclusion, this thesis serves as an important stepping stone in the practical implementation of the Alfrödull framework, showcasing the key aspects of building a well performing binary image classifier of SES in Sweden.
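As a rough illustration of the sliding-window idea, the sketch below cuts an image into overlapping tiles and sends each tile to a binary classifier. The tile size, stride, threshold and the placeholder classify_tile function are assumptions for demonstration; the actual Alfrödull CNN, its preprocessing, augmentation policy and ensembling are not reproduced here.

```python
# Illustrative sketch of sliding-window classification over overlapping tiles.
# Tile size, stride and the placeholder classifier are assumptions; the real
# Alfrödull CNN and its training are not shown.
import numpy as np

def classify_tile(tile: np.ndarray) -> float:
    """Placeholder for the trained CNN: returns a pseudo 'solar panel' probability.

    Here mean brightness is used purely as a stand-in score.
    """
    return float(tile.mean() / 255.0)

def sliding_window_predictions(image: np.ndarray, tile=64, stride=32, threshold=0.5):
    """Classify overlapping tiles; the overlap reduces the chance of missing a system
    that straddles a tile boundary, at the cost of more false-positive clusters."""
    hits = []
    h, w = image.shape[:2]
    for y in range(0, h - tile + 1, stride):
        for x in range(0, w - tile + 1, stride):
            score = classify_tile(image[y:y + tile, x:x + tile])
            if score >= threshold:
                hits.append((x, y, score))
    return hits

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    aerial = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)  # fake aerial image
    detections = sliding_window_predictions(aerial)
    print(f"{len(detections)} candidate tiles flagged")
```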
33.
A Dynamic Security And Authentication System For Mobile Transactions : A Cognitive Agents Based Approach
Babu, B Sathish. 05 1900.
In a world of high mobility, there is a growing need for people to communicate with each other and have timely access to information regardless of the location of the individuals or the information. This need is supported by advances in networking, wireless communications and portable computing devices, which, together with the reduction in the physical size of computers, have led to the rapid development of mobile communication infrastructure. Mobile and wireless networks therefore present many challenges to application, hardware, software and network designers and implementers. One of the biggest challenges is to provide a secure mobile environment. Security plays a more important role in mobile communication systems than in systems that use wired communication, mainly because the ubiquitous nature of the wireless medium makes it more susceptible to security attacks than wired communication.
The aim of the thesis is to develop an integrated dynamic security and authentication system for mobile transactions. The proposed system operates at the transaction level of a mobile application by intelligently selecting the suitable security technique and authentication protocol for the ongoing transaction. To do this, we have designed two schemes: the transactions-based security selection scheme and the transactions-based authentication selection scheme. These schemes use transaction sensitivity levels and the usage context, which includes user behaviors, the network used, the device used, and so on, to decide the required security and authentication levels. Based on this analysis, the requisite security technique and authentication protocol are applied for the transaction in process. The Behaviors-Observations-Beliefs (BOB) model is developed using cognitive agents to supplement the working of the security and authentication selection schemes. A transaction classification model is proposed to classify transactions into various sensitivity levels.
The BOB model
The BOB model is a cognitive-theory-based model that generates beliefs about a user by observing the various behaviors the user exhibits during transactions. The BOB model uses two types of Cognitive Agents (CAs): mobile CAs (MCAs) and static CAs (SCAs). The MCAs are deployed on the client devices to formulate beliefs by observing various behaviors of a user during transaction execution. The SCA performs belief analysis and identifies belief deviations with respect to established beliefs. We have developed four constructs to implement the BOB model, namely the behaviors identifier, observations generator, beliefs formulator and beliefs analyser. The BOB model is developed with emphasis on minimal computation and minimal code size, keeping in mind the resource restrictiveness of mobile devices and infrastructure. The organisation of knowledge using cognitive factors helps in selecting a rational approach for deciding the legitimacy of a user or a session. It also reduces the solution search space by consolidating user behaviors into high-level data such as beliefs, as a result of which the decision-making time is reduced considerably.
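A heavily simplified sketch of this behaviours-to-beliefs flow is given below. The behaviour names, the averaging rule and the deviation threshold are illustrative assumptions; the thesis's four cognitive-agent constructs are richer than this.

```python
# Illustrative sketch of the Behaviors-Observations-Beliefs (BOB) flow.
# Behaviour names, the scoring rule and the deviation threshold are assumptions
# made for demonstration; they only mimic the idea of folding observed behaviours
# into beliefs and checking for deviations from an established profile.
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class BeliefFormulator:
    """Client-side (MCA-like) component: folds observed behaviours into a belief score."""
    observations: list = field(default_factory=list)

    def observe(self, behaviour: str, value: float) -> None:
        # e.g. ("usual_login_hour", 1.0), ("known_device", 0.0), ("typing_rhythm_match", 0.7)
        self.observations.append((behaviour, value))

    def belief(self) -> float:
        # Belief = aggregated evidence that the session belongs to the legitimate user.
        return mean(v for _, v in self.observations) if self.observations else 0.0

def belief_deviation(current_belief: float, established_belief: float) -> float:
    """Server-side (SCA-like) analysis: deviation of this session from the user's profile."""
    return abs(current_belief - established_belief)

# Example session
formulator = BeliefFormulator()
formulator.observe("usual_login_hour", 1.0)
formulator.observe("known_device", 0.0)
formulator.observe("typing_rhythm_match", 0.7)

deviation = belief_deviation(formulator.belief(), established_belief=0.85)
print(f"belief={formulator.belief():.2f}, deviation={deviation:.2f}")
flagged = deviation > 0.25  # assumed threshold for escalating security/authentication
print("escalate checks" if flagged else "session looks consistent")
```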
The transactions classification model
This model is proposed to classify the given set of transactions of an application service into four sensitivity levels. The grouping of transactions is based on the operations they perform and the amount of risk or loss involved if they are misused. The four levels are: transactions whose execution may cause no damage (level-0), minor damage (level-1), significant damage (level-2) and substantial damage (level-3). A policy-based transaction classifier is developed and incorporated in the SCA to decide the sensitivity level of a given transaction.
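The sketch below illustrates what a policy-based classifier over the four sensitivity levels could look like. The transaction names, the policy table and the amount-based escalation rule are hypothetical examples, not the policies used in the thesis.

```python
# Illustrative policy-based transaction classifier for the four sensitivity levels
# (0 = no damage, 1 = minor, 2 = significant, 3 = substantial). The transaction
# names and the policy mapping are assumptions for demonstration only.
from enum import IntEnum

class Sensitivity(IntEnum):
    NO_DAMAGE = 0
    MINOR_DAMAGE = 1
    SIGNIFICANT_DAMAGE = 2
    SUBSTANTIAL_DAMAGE = 3

# A policy table such as the one the SCA could hold (hypothetical entries).
POLICY = {
    "view_balance": Sensitivity.NO_DAMAGE,
    "view_statement": Sensitivity.MINOR_DAMAGE,
    "pay_registered_bill": Sensitivity.SIGNIFICANT_DAMAGE,
    "transfer_to_new_payee": Sensitivity.SUBSTANTIAL_DAMAGE,
}

def classify(transaction: str, amount: float = 0.0) -> Sensitivity:
    """Look up the base level, then escalate high-value operations (an assumed rule)."""
    level = POLICY.get(transaction, Sensitivity.SIGNIFICANT_DAMAGE)  # unknown -> cautious
    if amount > 10_000 and level < Sensitivity.SUBSTANTIAL_DAMAGE:
        level = Sensitivity(level + 1)
    return level

print(classify("view_balance"))                # Sensitivity.NO_DAMAGE
print(classify("pay_registered_bill", 25000))  # escalated to SUBSTANTIAL_DAMAGE
```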
Transactions-based security selection scheme (TBSS-Scheme)
Traditional security schemes at the application level are either session-, transaction- or event-based. They secure the application data with prefixed security techniques for mobile transactions or events. Mobile transactions generally possess different security risk profiles, so there is a need for various levels of data security in the mobile communications environment, which faces resource insufficiency in terms of bandwidth, energy and computation capabilities.
We have proposed an intelligent security technique selection scheme at the application level, which dynamically decides the security technique to be used for a given transaction in real time. The TBSS-Scheme uses the BOB model and the transaction classification model while deciding the required security technique; the selection is based on the transaction sensitivity level and user behaviors. A security techniques repository is used in the proposed scheme, organised into three levels based on the complexity of the security techniques. The complexities are decided based on time and space complexity and the strength of the technique against some of the latest security attacks. Credibility factors for the transaction network and the transaction device, computed by the credibility module, are also used when choosing the security technique from a particular level of the repository. Analytical models are presented for belief analysis, security threat analysis and the average security cost incurred during a transaction session. The results of this scheme are compared with regular schemes, and the advantages and limitations of the proposed scheme are discussed. A case study applying the proposed security selection scheme to a mobile banking application is conducted, and the results are presented.
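As a rough illustration of this selection logic, the sketch below maps a transaction's sensitivity level, the belief deviation and the credibility factors to one of three repository levels. The repository entries, weights and thresholds are assumptions for demonstration and do not reproduce the scheme's actual rules.

```python
# Illustrative sketch of transactions-based security selection (TBSS-like logic).
# The repository contents, weighting rule and thresholds are assumptions; they only
# mimic the idea of picking a stronger technique for riskier transactions/sessions.
SECURITY_REPOSITORY = {
    0: ["lightweight stream cipher (placeholder)"],  # low-complexity level (hypothetical)
    1: ["AES-128"],                                  # medium-complexity level (hypothetical)
    2: ["AES-256 with HMAC-SHA256"],                 # high-complexity level (hypothetical)
}

def select_security_level(sensitivity: int, belief_deviation: float,
                          network_credibility: float, device_credibility: float) -> int:
    """Map transaction sensitivity (0-3) and session risk signals to a repository level."""
    risk = sensitivity / 3.0
    risk += 0.5 * belief_deviation            # suspicious behaviour raises risk
    risk += 0.25 * (1 - network_credibility)  # untrusted network raises risk
    risk += 0.25 * (1 - device_credibility)   # untrusted device raises risk
    if risk < 0.5:
        return 0
    if risk < 1.0:
        return 1
    return 2

level = select_security_level(sensitivity=2, belief_deviation=0.3,
                              network_credibility=0.6, device_credibility=0.9)
print("chosen technique:", SECURITY_REPOSITORY[level][0])
```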
Transactions-based authentication selection scheme (TBAS-Scheme)
Authentication protocols and schemes are used at the application level to authenticate the genuine users/parties and the devices used in the application. Most of these protocols challenge the user/device for authentication information rather than deploying methods to identify the validity of a user/device. Therefore, there is a need for an authentication scheme which intelligently authenticates a user by continuously monitoring the genuineness of the activities/events/behaviors/transactions throughout the session.
The transactions-based authentication selection scheme provides a new dimension in authenticating users of services. It enables strong authentication at the transaction level, based on the sensitivity level of the given transaction and user behaviors. The proposed approach strengthens the authentication procedure by selecting authentication schemes using the BOB model and the transaction classification model. It provides an effective authentication solution by relieving conventional authentication systems from depending only on the strength of authentication identifiers. We compare the performance of the transactions-based authentication selection scheme with a session-based authentication scheme in terms of the identification of various active attacks, and analyse the average authentication delay and average authentication cost. We also show how the proposed scheme works in inter-domain and intra-domain hand-off scenarios, and discuss its merits compared with the Mobile IP authentication scheme. A case study applying the proposed authentication selection scheme to authenticating personalised multimedia services is presented.
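The sketch below illustrates per-transaction challenge selection in the same spirit: the challenge strengthens with transaction sensitivity and is stepped up when the session's belief deviation is high. The challenge ladder and the threshold are assumptions for illustration only.

```python
# Illustrative sketch of transactions-based authentication selection (TBAS-like logic).
# The challenge ladder and threshold are assumptions for demonstration only.
CHALLENGES = [
    "none (session already trusted)",
    "PIN",
    "one-time password",
    "PIN + one-time password + out-of-band confirmation",
]

def select_challenge(sensitivity: int, belief_deviation: float) -> str:
    """Pick an authentication challenge per transaction rather than once per session."""
    index = sensitivity  # base requirement grows with the sensitivity level (0-3)
    if belief_deviation > 0.25:  # behaviour looks unusual: step the challenge up
        index = min(index + 1, len(CHALLENGES) - 1)
    return CHALLENGES[index]

for sens, dev in [(0, 0.05), (1, 0.4), (3, 0.1)]:
    print(f"sensitivity={sens}, deviation={dev}: {select_challenge(sens, dev)}")
```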
Implementation of the TBSS and TBAS schemes for a mobile commerce application
We have implemented the integrated working of both the TBSS and TBAS schemes for a mobile commerce application. Details are given on identifying the vendor selection, day of purchase, time of purchase, transaction value and frequency of purchase behaviors. A sample list of mobile commerce transactions is presented along with their classification into the various sensitivity levels. The working of the system is discussed using three purchase cases, and results on transaction distribution, deviation factor generation, security technique selection and authentication challenge generation are presented.
In summary, we have developed an integrated dynamic security and authentication system for mobile transactions using the above-mentioned selection schemes, incorporating the BOB model, the transaction classification model and the credibility modules. We have successfully implemented the proposed schemes using a cognitive-agents-based middleware. The experimental results suggest that incorporating user behaviors and transaction sensitivity levels brings dynamism and adaptiveness to the security and authentication system, through which mobile communication security can be made more robust to attacks and more resource-savvy, with reduced bandwidth and computation requirements, by using an appropriate security and authentication technique/protocol.
34.
Open source quality control tool for translation memory using artificial intelligence
Bhardwaj, Shivendra. 08 1900.
Translation Memory (TM) plays a decisive role during translation and is the go-to database for most language professionals. However, TMs are highly prone to noise, and this noise has no single specific source. Significant efforts have been made to clean TMs, especially for training better Machine Translation systems. In this thesis, we also try to clean the TM, but with a broader goal: maintaining its overall quality and making it robust enough for internal use in institutions. We propose a two-step process: first clean an institutional, almost clean TM, i.e. remove noise, and then detect texts translated by neural machine translation systems.
For the noise removal task, we propose an architecture involving five approaches based on heuristics, feature engineering and deep learning, and we evaluate this task by both manual annotation and Machine Translation (MT). We report a notable gain of +1.08 BLEU over a state-of-the-art, off-the-shelf TM cleaning system. We also propose a web-based tool, “OSTI: An Open-Source Translation-memory Instrument”, that automatically annotates incorrect (including misaligned) translations so that institutions can maintain an error-free TM.
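As an illustration of the heuristic end of such a cleaning architecture, the sketch below flags suspicious TM units with a few common-sense rules (empty side, untranslated copy, implausible length ratio). These particular rules and thresholds are assumptions and only hint at one of the five approaches; the feature-engineering and deep-learning components are not shown.

```python
# Illustrative sketch of heuristic noise filters for translation-memory segment pairs.
# The rules and thresholds below are common-sense assumptions standing in for the
# thesis's five-approach cleaning architecture; they are not its actual filters.
def looks_noisy(source: str, target: str, max_len_ratio: float = 3.0) -> bool:
    """Flag a TM unit as suspicious; flagged units would go to annotation/review."""
    src, tgt = source.strip(), target.strip()
    if not src or not tgt:
        return True  # one side is empty
    if src == tgt:
        return True  # target is an untranslated copy of the source
    ratio = max(len(src), len(tgt)) / max(1, min(len(src), len(tgt)))
    if ratio > max_len_ratio:
        return True  # implausible length mismatch (possible misalignment)
    return False

tm_units = [
    ("Veuillez signer le formulaire.", "Please sign the form."),
    ("Article 12", ""),                # empty target
    ("Confidentiel", "Confidentiel"),  # untranslated copy
]
for src, tgt in tm_units:
    print(f"noisy={looks_noisy(src, tgt)}: {src!r} -> {tgt!r}")
```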
Deep neural models have tremendously improved MT systems, and these systems translate an immense amount of text every day. The automatically translated text finds its way into TMs, and storing these translation units in a TM is not ideal. We propose a detection module under two settings: a monolingual task, in which the classifier only looks at the translation; and a bilingual task, in which the source text is also taken into consideration. Using deep-learning classifiers, we report a mean accuracy of around 85% in-domain and 75% out-of-domain for the bilingual task, and 81% in-domain and 63% out-of-domain for the monolingual task.
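As a much-simplified stand-in for the monolingual setting, the sketch below trains a bag-of-words classifier that sees only the translation. The toy sentences and labels are invented for illustration; the thesis's detection module uses deep-learning classifiers, not this baseline.

```python
# Illustrative, much-simplified stand-in for the monolingual detection setting:
# a TF-IDF + logistic regression classifier that only sees the translation, not
# the source. The toy sentences and labels below are invented for demonstration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "The committee shall review the request within thirty days.",                   # human (assumed)
    "Please find attached the revised agreement for your records.",                 # human (assumed)
    "The request will be reviewed by the committee in the delay of thirty days.",   # MT-like (assumed)
    "You will find attached the agreement revised for your archives.",              # MT-like (assumed)
]
labels = [0, 0, 1, 1]  # 0 = human translation, 1 = machine translation (toy labels)

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["Attached please find the signed form for your records."]))
```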