461. Watching Scenarios Of Undue Influence : Impact On Emotional Reaction Depends On Viewing Technology / Att Se Scenarion av Otillbörlig Påverkan : Påverkan på Känslomässig Reaktion Påverkas av Visningsteknik. Calado Zettergren, Carl Alexander. January 2021.
The Swedish Police Authority is currently undertaking projects to strengthen the safety of the working environment for police employees. An issue identified in audits related to these projects is that police employees face what is called undue influence: events that affect employees on an emotional level and can induce a social-anxiety-like mindset of self-censorship. This project explores the use of 360-degree video as material for preparing for undue influence and for facilitating discussion about definitions of undue influence among police employees and across regional departments. This involves the production of 360-degree video material and a comparative study. The video production is an example both of how a scenario of undue influence can be portrayed and of what resources are needed to capture such a scenario. The study explored how feelings of presence and emotional reactions towards the scenario differ depending on the viewing technology. The compared viewing technologies were flat-screen monitors and a head-mounted display. Results indicate that viewing with the head-mounted display prompted stronger reactions on some emotion subscales and on self-location-related feelings of presence. / Den svenska Polismyndigheten genomför för närvarande projekt för att stärka säkerheten i arbetsmiljön för polisanställda. En fråga som identifierats i interna granskningar relaterade till dessa projekt är att polisanställda möter det som kallas otillbörlig påverkan: händelser som påverkar på en emotionell nivå och kan framkalla det socialångestliknande fenomenet självcensur. Detta projekt utforskar användningen av 360-graders video som material för att förbereda sig för otillbörlig påverkan och för att underlätta diskussioner om definitioner av otillbörlig påverkan bland polisanställda och mellan polisregioner. Detta innefattar framställning av 360-graders videomaterial och utförande av en jämförande studie. Videoproduktionen är ett exempel på både hur ett scenario av otillbörlig påverkan kan porträtteras och vilka resurser som behövs för att skapa ett sådant scenario. Studien undersökte skillnader i känslor av närvaro samt känslomässiga reaktioner gentemot scenariot beroende på visningstekniken. De jämförda visningsteknikerna var platta skärmar och en head-mounted display. Resultaten indikerar att visning med head-mounted display framkallade starkare reaktioner på vissa av de uppmätta känsloskalorna och på platsillusionsrelaterade känslor av närvaro.
462. Gain Estimation using Multi-Armed Bandit Policies / Gain estimering med Multi-Arm Bandit policyer. Chou, Chia-Hsuan. January 2021.
This thesis investigates a new method to estimate a system norm using reinforcement learning. Given an unknown system, we aim to estimate its H∞-norm with a model-free approach, which involves solving a sequential input design problem. This problem is modeled as a multi-armed bandit, which provides a way to study optimal decision making under uncertainty. In the multi-armed bandit framework, there are two different types of policies: index and Bayesian policies. The main goal of this thesis is to compare the performance of these two classes of policies. We take Thompson Sampling as a representative of Bayesian policies and five different UCB-type algorithms from the class of index policies. We compare these algorithms in two different setups depending on the class of input signals allowed to be applied to the system, denoted the single-frequency and power-spreading strategies. The input design method provides an asymptotically optimal way to collect input-output measurements of the system without having to rely on a model, providing asymptotically the best possible information for estimating the H∞-norm of the system. Simulation results show that algorithms with Bayesian policies are able to estimate the H∞-norm accurately under both the single-frequency and power-spreading strategies, while index policies compare very well in the power-spreading case but perform worse with the other class of input signals. / Detta examensarbete undersöker en ny metod för att skatta en systemnorm med hjälp av förstärkningsinlärning. Givet ett okänt system försöker vi estimera H∞-normen via en modellfri metod, vilket involverar att lösa ett sekventiellt insignaldesignproblem. Problemet omformuleras som ett multi-armed bandit-problem, vilket ger oss ett sätt att undersöka optimalt beslutsfattande under osäkerhet. Det finns två olika sorters policyer som används inom multi-armed bandit-ramverket: indexpolicyer och Bayesianska policyer. Huvudmålet med detta arbete är att jämföra prestandan mellan de två klasserna av policyer. Thompson Sampling används som representant för Bayesianska policyer och fem olika UCB-algoritmer representerar indexpolicyer. Vi jämför dessa algoritmer i två olika uppställningar som skiljer sig i vilken klass av insignaler som tillåts appliceras på systemet: en singelfrekvensstrategi och en power-spreading-strategi. Denna insignaldesignmetod ger oss ett asymptotiskt optimalt sätt att samla in in- och utsignalmätningar från systemet utan att förlita sig på en modell, och ger därmed asymptotiskt den bästa möjliga informationen för att estimera systemets H∞-norm. Simuleringarna visar att algoritmer med Bayesianska policyer kunde estimera H∞-normen noggrant för både singelfrekvens- och power-spreading-strategierna. Indexpolicyerna fungerade väl i power-spreading-fallet men presterade sämre med den andra klassen av insignaler.
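As an illustration of the index-policy side of this comparison, the following is a minimal sketch of a UCB1-style bandit running the single-frequency strategy on a small invented second-order system: each arm applies a sinusoid at one candidate frequency, the measured output/input amplitude ratio is the reward, and the largest empirical mean gain serves as the H∞-norm estimate. The system, frequency grid and constants are assumptions for illustration only; the thesis's actual algorithms and experimental setup are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical unknown system: a stable, resonant second-order IIR filter.
# Its H-infinity norm is the peak of the amplitude response over frequency.
b, a1, a2 = 0.2, 1.2, -0.72

def run_system(u, noise_std=0.05):
    """Apply input u to the unknown system and return a noisy output sequence."""
    y = np.zeros(len(u))
    for t in range(len(u)):
        y_prev1 = y[t - 1] if t >= 1 else 0.0
        y_prev2 = y[t - 2] if t >= 2 else 0.0
        y[t] = b * u[t] + a1 * y_prev1 + a2 * y_prev2 + noise_std * rng.standard_normal()
    return y

def measure_gain(w, n=400):
    """One experiment: excite with a sinusoid at frequency w, estimate the amplitude gain."""
    t = np.arange(n)
    u = np.sin(w * t)
    y = run_system(u)
    tail = slice(n // 2, None)                       # discard the transient
    return np.sqrt(np.mean(y[tail] ** 2) / np.mean(u[tail] ** 2))

# Arms: a grid of candidate input frequencies (the single-frequency strategy).
freqs = np.linspace(0.05, np.pi - 0.05, 20)
counts = np.ones(len(freqs))
means = np.array([measure_gain(w) for w in freqs])   # pull each arm once to initialise

# UCB1 index policy: repeatedly pull the arm with the highest upper confidence bound.
for it in range(len(freqs), 300):
    ucb = means + np.sqrt(2.0 * np.log(it + 1) / counts)
    k = int(np.argmax(ucb))
    g = measure_gain(freqs[k])
    counts[k] += 1
    means[k] += (g - means[k]) / counts[k]           # incremental mean update

print("estimated H-infinity norm (largest mean gain):", means.max())
print("most frequently excited frequency:", freqs[int(np.argmax(counts))])
```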
463. A case study on pragmatic software reuse. Goncharuk, Elena. January 2021.
Software reuse has become a very active and demanding research area. The research on this topic is extensive, but there is a gap between theory and industrial reuse practices. There is a need to connect the theoretical and practical aspects of reuse, especially for reuse that is not performed in a planned, systematic manner. This case study investigates a real-world case of pragmatic software reuse with the help of existing academic research. The investigation includes a literature study on software reuse processes, which are summarised in the form of a general process model of pragmatic software reuse. A case of pragmatic reuse performed in industry is then analysed and compared to the proposed model, as well as to additional academic research. The real-world reuse process is shown to closely follow the proposed general model. / Återanvändning av mjukvara är ett krävande och väldigt aktivt forskningsområde. Det forskas mycket i ämnet, men det finns ett glapp mellan teorin och hur återanvändning går till i industriella miljöer. Det finns ett behov av att koppla samman teoretiska och praktiska aspekter av återanvändning, speciellt för sådan som inte utförs på ett planerat, systematiserat sätt. Den här fallstudien undersöker ett fall av pragmatisk mjukvaruåteranvändning i praktiken med hjälp av existerande akademisk forskning. Den inkluderar en litteraturstudie om återanvändningsprocesser som sammanfattas i form av en generell processmodell för pragmatisk återanvändning. Ett industriexempel på en sådan process analyseras sedan och jämförs med processmodellen och annan akademisk forskning. Det analyserade återanvändningsfallet har visat sig stämma väl överens med den generella processmodellen.
464. Combining Sidescan Sonar and Multibeam Echo Sounder to Improve Bathymetric Resolution per Ping / Kombinera sidescan sonar och multibeam echo sounder i syfte att öka den batymetriska upplösningen per ping. Hestell, Filip. January 2021.
This master thesis degree project is carried out in the area of robotics, particularly autonomous underwater vehicles (AUVs) and seafloor mapping. The goal of the project is to investigate whether sidescan sonar data can be used to increase the bathymetric resolution. This is done on a per-ping basis by using echoed intensity data from the sidescan sonar and sparse bathymetry from the lower-resolution multibeam echo sounder. This ping approach can be seen as a proof of concept, with the desired outcome of being able to infer a high-resolution bathymetry. An analogy of this ping approach to camera images would be to perform depth estimation per row instead of per image. To achieve this, a method for matching the sidescan with the multibeam was developed, and a Multilayer Perceptron was used to infer the increased bathymetric resolution. The method relied on finding a set of sequences per ping, with two bathymetric points as boundaries and sidescan sonar intensities in between. The method needs improvement, as the results show potential but limited accuracy compared to other, simpler models. / Detta examensarbete är utfört inom området robotik och autonoma undervattensfarkoster, i syfte att kartlägga havsbotten. Målet med arbetet var att undersöka huruvida sidescan sonar-data kan användas för att öka den batymetriska upplösningen. Detta görs per ping genom att använda intensitetsdata från sidescan sonar och batymetri från det lågupplösta multibeam-ekolodet. Denna pingbaserade metod kan ses som ett koncepttest, där det önskade resultatet är att kunna estimera en högupplöst batymetri. Detta koncept har en analogi till djupestimering av kamerabilder, där detta skulle ske per rad istället för per bild. För att åstadkomma detta utvecklades först en metod för att matcha de två sonartyperna med varandra, där den matchade datan sedan användes för att träna en Multilayer Perceptron. Med hjälp av denna estimerades en högupplöst batymetri. Den utvecklade metoden var baserad på att hitta en mängd sekvenser per ping, där varje sekvens innehöll två batymetriska punkter vilka agerade randvärden, med tillhörande sidescan sonar-intensiteter emellan dessa. Metoden behöver förbättras då den påvisar potential, men rätt begränsade resultat jämfört med andra, enklare modeller.
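The sketch below is a toy version of the per-ping learning setup described above, with entirely synthetic data standing in for the matched sonar measurements: each training example consists of two boundary depths from the multibeam plus the sidescan-like intensities between them, and a multilayer perceptron regresses the intermediate depths. The sequence length, data generator and network sizes are assumptions and do not reproduce the thesis's matching method or model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

SEQ = 16   # hypothetical number of sidescan intensity samples between two multibeam hits

def make_sequence():
    """Generate one synthetic training sequence (stand-in for real matched sonar data).

    Target: a smooth depth profile between two known multibeam depths.
    Input:  the two boundary depths plus sidescan-like intensities, here modelled
            as a noisy function of the local seafloor slope.
    """
    d0, d1 = rng.uniform(20, 60, size=2)                    # boundary depths from the multibeam
    x = np.linspace(0.0, 1.0, SEQ + 2)
    bump = rng.normal(0.0, 2.0) * np.sin(np.pi * x * rng.integers(1, 4))
    depth = d0 + (d1 - d0) * x + bump                       # "true" high-resolution profile
    slope = np.gradient(depth)
    intensity = 1.0 / (1.0 + np.abs(slope)) + 0.05 * rng.standard_normal(len(x))
    features = np.concatenate(([d0, d1], intensity[1:-1]))
    targets = depth[1:-1]                                   # depths to be inferred
    return features, targets

X, Y = zip(*(make_sequence() for _ in range(3000)))
X, Y = np.array(X), np.array(Y)
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)

mlp = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=500, random_state=0)
mlp.fit(X_tr, Y_tr)

pred = mlp.predict(X_te)
print("mean absolute depth error (m):", np.abs(pred - Y_te).mean())
```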
465. En undersökning av möjligheten att automatiskt rätta studentrapporter i kursen EH2720. Hilani, Ahmad; Youssef, Rafi. January 2021.
I den här studien undersöks möjligheten att automatiskt rätta rapporter. Under studien implementeras en kod som automatiskt ska kunna rätta rapporten i kursen EH2720 Project Management and Business Development. Denna rapport innehåller olika avsnitt som ska undersökas och automatiskt rättas efter kursens krav. Studien började med en teoretisk analys av flera välkända biblioteksalgoritmer. Biblioteksalgoritmerna används i syfte att läsa pdf-rapporten samt analysera tabellerna. Koden implementerades med objektorienterad programmering, vilket ger möjligheten att bryta ner programmet i mindre koddelar så att vi enklare kan jobba med ett avsnitt i taget, får bättre kodkvalitet samt möjlighet till förbättringar i framtida arbete. Efter att vi hittat lämpliga biblioteksalgoritmer strukturerade vi koden efter de olika avsnitt som finns i rapporten. Sedan implementerade vi koden för de olika avsnitten i rapporten enligt kraven. Därefter undersökte vi vilka avsnitt i rapporten som vi kunde rätta på ett effektivt sätt samt vilka avsnitt vi inte lyckades rätta effektivt. Vi har kommit fram till att det är möjligt att automaträtta en del av de vanligaste misstagen i rapporten, men inte alla delar. Vi diskuterade varför det var svårt att rätta dessa avsnitt på ett effektivt sätt. Slutligen diskuterade vi lösningar som skulle ge bättre resultat samt vilken kod som måste implementeras i framtida arbete för att hantera de avsnitt vi inte lyckades rätta effektivt. / This study examines the possibility of automatically correcting reports. During the study, code is implemented that can automatically correct the report in the course EH2720 Project Management and Business Development. This report contains various sections that must be examined and automatically corrected according to the course requirements. The study began with a theoretical analysis of several well-known library algorithms. The library algorithms are used to read the pdf report and analyse its tables. The code was implemented with object-oriented programming, which makes it possible to break the program down into smaller parts so that we can more easily work with one section at a time, obtain better code quality, and leave room for improvements in future work. After finding suitable library algorithms, we structured the code around the different sections of the report and implemented the code for each section according to the requirements. We then examined which sections of the report we could correct effectively, as well as the sections where we failed to do so, and discussed why those sections were difficult to correct. We have come to the conclusion that it is possible to automatically correct some of the most common mistakes in the report, but not all parts. Finally, we discussed solutions that would give better results and what code must be implemented in future work to handle the sections we did not manage to correct effectively.
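As a hedged illustration of what such automatic checking could look like, the sketch below reads a PDF with pdfplumber (one of several libraries that could serve; the study does not name the libraries it used) and verifies that a set of required section headings exist and meet a minimum length. The headings, word-count threshold and file name are invented and do not reflect the actual EH2720 requirements.

```python
import re
import pdfplumber   # one of several libraries that can read PDF text and tables

# Hypothetical course requirements: section headings that must appear, and a
# minimum word count per section. These are illustrative, not the real EH2720 rules.
REQUIRED_SECTIONS = ["Introduction", "Stakeholder Analysis", "Risk Analysis", "Time Plan"]
MIN_WORDS = 150

def check_report(path):
    """Return a simple verdict per required section: 'missing', 'too short', or 'ok'."""
    with pdfplumber.open(path) as pdf:
        text = "\n".join(page.extract_text() or "" for page in pdf.pages)

    results = {}
    for i, section in enumerate(REQUIRED_SECTIONS):
        # Take the text between this heading and the next required heading (or end of file).
        nxt = REQUIRED_SECTIONS[i + 1] if i + 1 < len(REQUIRED_SECTIONS) else None
        pattern = re.escape(section) + r"(.*?)" + (re.escape(nxt) if nxt else r"$")
        match = re.search(pattern, text, flags=re.DOTALL | re.IGNORECASE)
        if match is None:
            results[section] = "missing"
        else:
            n_words = len(match.group(1).split())
            results[section] = "ok" if n_words >= MIN_WORDS else f"too short ({n_words} words)"
    return results

if __name__ == "__main__":
    for section, verdict in check_report("report.pdf").items():
        print(f"{section:25s} {verdict}")
```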
466. Fördelar med att applicera Collaborative Filtering på Steam : En utforskande studie / Benefits of Applying Collaborative Filtering on Steam : An explorative study. Bergqvist, Martin; Glansk, Jim. January 2018.
The use of recommender systems is everywhere. On popular platforms such as Netflix and Amazon, you are always given new recommendations on what to consume next, based on your specific profile. This is done by cross-referencing users and products to find probable patterns. The aim of this study was to compare the two main ways of generating recommendations on an unorthodox dataset where "best practice" might not apply. Recommendation efficiency was therefore compared between Content-Based Filtering and Collaborative Filtering on the gaming platform Steam, in order to establish whether there was potential for a better solution. We approached this by gathering data from Steam, building a representative baseline Content-Based Filtering recommendation engine based on what is currently used by Steam, and a competing Collaborative Filtering engine based on a standard implementation. In the course of this study, we found that while Content-Based Filtering performance initially grew linearly as the player base of a game increased, Collaborative Filtering performance grew exponentially from a small player base and plateaued at a performance level exceeding that of the comparison. The practical consequence of these findings would be a justification for applying Collaborative Filtering even on smaller, more complex sets of data than is normally done; the usual justification for Content-Based Filtering is that it is easier to implement and yields decent results. With our findings showing such a big discrepancy even for basic models, this attitude might well change. Collaborative Filtering has rarely been used on more multifaceted datasets, but our results show that the potential to exceed Content-Based Filtering is rather easily attainable on such sets as well. This potentially benefits all platforms that combine purchases with a community, as usage after purchase can be monitored online, allowing misrepresentative factors to be adjusted for as they appear.
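For illustration, the following is a minimal sketch of an item-based Collaborative Filtering engine of the standard kind referred to above, run on an invented toy user-game matrix rather than real Steam data; the game names and interactions are placeholders, and the thesis's actual engines are not reproduced.

```python
import numpy as np

# Toy user-game interaction matrix (rows: users, columns: games), standing in for
# Steam ownership/playtime data. 1 = the user owns/plays the game, 0 = no interaction.
games = ["GameA", "GameB", "GameC", "GameD", "GameE"]
R = np.array([
    [1, 1, 0, 0, 1],
    [1, 1, 1, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 1, 1, 1],
    [1, 0, 0, 1, 1],
], dtype=float)

def cosine_similarity_matrix(M):
    """Item-item cosine similarity between the columns of M."""
    norms = np.linalg.norm(M, axis=0)
    norms[norms == 0] = 1.0            # avoid division by zero for unseen items
    normalized = M / norms
    return normalized.T @ normalized

def recommend(user_vector, sim, k=3):
    """Score unseen items by a similarity-weighted sum over the user's items."""
    scores = sim @ user_vector
    scores[user_vector > 0] = -np.inf  # do not recommend what the user already has
    top = np.argsort(scores)[::-1][:k]
    return [(games[i], float(scores[i])) for i in top if np.isfinite(scores[i])]

sim = cosine_similarity_matrix(R)
print(recommend(R[0], sim))   # recommendations for the first user
```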
467. Autonoma drönare : Modifiering av belöningsfunktion i AirSim. Dzeko, Elvir; Carlsson, Markus. January 2018.
Drones are growing in popularity, and so is research in the field of autonomous drones. There are several research problems around autonomous vehicles in general, but one interesting problem covered by this study is the autonomous manoeuvring of drones. One interesting path for autonomous drones is deep reinforcement learning, which is a combination of deep neural networks and reinforcement learning. Problems that researchers often encounter within the field range from time-consuming training and effective manoeuvring to problems with unpredictability and security. Even the high costs of testing can be an issue. With the help of simulation programs, we are able to test algorithms without concern for cost or other real-world factors that could limit our work. Microsoft's simulator AirSim lets users control the vehicle through an application programming interface, which makes it possible to test a variety of algorithms. The research question addressed in this study is how the pre-existing reward function can be improved with respect to avoiding obstacles and moving the drone from start to goal. The goal of this study is to find improvements to the reward function of AirSim's pre-existing Deep Q-Network algorithm and test them in two different simulated environments. By conducting several experiments and storing the evaluation metrics produced by the agents, it was possible to observe a result. The observed evaluation metrics included the average reward that the agent received over time, the number of collisions, and overall performance in the respective environment. We were not able to gather enough data to measure an improvement of the evaluation metrics for the modified reward function. The modified function performed well but did not display any substantially improved performance. To determine whether one reward function is better than the other, more research needs to be done. Given the difficulties of gathering data, the conclusion is that we created a reward function for which we cannot tell whether it is better or worse than the pre-existing one.
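To make the notion of a reward function concrete, the sketch below shows one possible shaping of rewards for reaching a goal while avoiding collisions. The weights, thresholds and inputs are assumptions chosen for illustration; they do not reproduce AirSim's pre-existing reward function or the modified one evaluated in this study.

```python
import numpy as np

# Hypothetical reward shaping for a drone navigating from start to goal while avoiding
# obstacles. All constants below are illustrative assumptions.
GOAL_BONUS = 100.0
COLLISION_PENALTY = -100.0
STEP_PENALTY = -0.5          # discourages wandering
PROGRESS_WEIGHT = 10.0       # rewards closing the distance to the goal
GOAL_RADIUS = 2.0            # metres

def reward(prev_pos, pos, goal, collided):
    """Return (reward, done) for one transition, given positions as numpy arrays."""
    if collided:
        return COLLISION_PENALTY, True

    dist_prev = np.linalg.norm(goal - prev_pos)
    dist_now = np.linalg.norm(goal - pos)
    if dist_now < GOAL_RADIUS:
        return GOAL_BONUS, True

    # Dense shaping term: positive when the drone moved closer to the goal this step.
    progress = dist_prev - dist_now
    return PROGRESS_WEIGHT * progress + STEP_PENALTY, False

# Example transition: the drone moves one metre toward a goal 10 m away.
r, done = reward(np.array([0.0, 0.0, -5.0]),
                 np.array([1.0, 0.0, -5.0]),
                 np.array([10.0, 0.0, -5.0]), collided=False)
print(r, done)   # 10*1 - 0.5 = 9.5, not done
```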
468. Lung-segmentering : för behandling av medicinsk data vid predicering med konvolutionella neurala nätverk. Gustavsson, Robin; Jakobsson, Johan. January 2018.
In 2017, the Swedish National Board of Health and Welfare reported that lung cancer was the most common cause of cancer-related death among women and the second most common among men. One way to find out whether a patient has lung cancer is for a doctor to study a computed tomography scan of the patient's lungs. This introduces the chance of human error and could lead to fatal consequences. To prevent mistakes, it is possible to use computers and advanced algorithms to train a network model to detect details and deviations in the scans. This technique is called deep structured learning. It is both time-consuming and highly challenging to create such a model, which underlines the importance of proper training, and many studies cover this subject. What these studies fail to emphasize is the significance of the preprocessing technique called lung segmentation. We therefore investigated how the accuracy and loss of a convolutional network model are affected when lung segmentation is applied to the model's training and test data. In this study, a number of models were trained and evaluated on data where lung segmentation was applied, compared to when it was not. The final conclusion of this report is that the technique counteracts overfitting of a model, and we argue that this study can facilitate further research within the same area.
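The sketch below shows a much-simplified version of lung segmentation as a preprocessing step, using the common threshold-and-morphology approach on a toy slice in Hounsfield units; the threshold, structuring element and fake data are assumptions and do not reproduce the preprocessing used in the study.

```python
import numpy as np
from scipy import ndimage

def segment_lungs(ct_slice_hu, air_threshold=-320):
    """Very simplified lung segmentation of a single CT slice in Hounsfield units (HU).

    Illustrative assumption: lungs are the large air-filled regions (< threshold HU)
    that do not touch the image border.
    """
    binary = ct_slice_hu < air_threshold              # air and lung tissue
    labels, _ = ndimage.label(binary)

    # Remove air connected to the image border (the background around the patient).
    border_labels = np.unique(np.concatenate([labels[0, :], labels[-1, :],
                                              labels[:, 0], labels[:, -1]]))
    lung_mask = binary & ~np.isin(labels, border_labels)

    # Fill vessels/noise inside the lungs and smooth the mask slightly.
    lung_mask = ndimage.binary_closing(lung_mask, structure=np.ones((5, 5)))
    lung_mask = ndimage.binary_fill_holes(lung_mask)
    return lung_mask

def apply_mask(ct_slice_hu, mask, fill_value=-1000):
    """Keep only lung voxels; everything else is set to air before feeding the CNN."""
    return np.where(mask, ct_slice_hu, fill_value)

# Toy example: a fake 128x128 "slice" with two low-HU blobs inside a body of soft tissue.
slice_hu = np.full((128, 128), 40.0)                      # soft tissue ~40 HU
slice_hu[:, :10] = slice_hu[:, -10:] = -1000              # air outside the body
slice_hu[40:90, 30:55] = slice_hu[40:90, 75:100] = -700   # two "lungs"
mask = segment_lungs(slice_hu)
print("lung pixels found:", int(mask.sum()))
```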
469. Deep learning i Starcraft 2 : Autoencoders för att förbättra end-to-end learning. Frick, Victor; Mattsson, Kristoffer. January 2018.
Complex environments have long been a problem for deep learning. Recently, end-to-end learning agents have been used to master Atari games by processing raw pixel data into features and using these features to make good decisions. This method has not had the same success in the real-time strategy game Starcraft 2, and the authors of this paper decided to investigate the possibility of using autoencoders to train feature extractors and thereby improve the rate of learning for reinforcement learning agents. Asynchronous Advantage Actor-Critic agents are used to investigate the difference, and the PySC2 API enables tests in the Starcraft 2 environment. The results show that the agents need more training before the pros and cons of a pretrained feature extractor can be evaluated. However, the training time of the autoencoder was short, and if it turns out to improve performance, the authors see no argument against using an autoencoder to pretrain a feature extractor in Starcraft 2.
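The sketch below illustrates the general idea of pretraining a convolutional autoencoder as a feature extractor on screen-like observations, with random tensors standing in for PySC2 feature layers; after pretraining, the encoder could be placed in front of the A3C policy and value heads. The architecture and training loop are assumptions and do not reproduce the agents used in this paper.

```python
import torch
from torch import nn

class AutoEncoder(nn.Module):
    """Compresses 1x64x64 observations into a compact code and reconstructs them."""

    def __init__(self, code_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=4, stride=2, padding=1),   # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2, padding=1),  # 32 -> 16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, code_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 32 * 16 * 16),
            nn.ReLU(),
            nn.Unflatten(1, (32, 16, 16)),
            nn.ConvTranspose2d(32, 16, kernel_size=4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(16, 1, kernel_size=4, stride=2, padding=1),   # 32 -> 64
        )

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code), code

model = AutoEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Pretraining loop on random stand-in observations; in practice these would be
# recorded game frames. After pretraining, model.encoder would be reused (and
# optionally frozen) as the feature extractor in front of the A3C heads.
for step in range(50):
    batch = torch.rand(32, 1, 64, 64)
    recon, _ = model(batch)
    loss = loss_fn(recon, batch)
    opt.zero_grad()
    loss.backward()
    opt.step()

print("final reconstruction loss:", float(loss))
```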
470. Requirements Analysis For AI Solutions : A Study On How Requirements Analysis Is Executed When Developing AI Solutions. Olsson, Anton; Joelsson, Gustaf. January 2019.
Requirements analysis is an essential part of the System Development Life Cycle (SDLC) and of achieving success in a software development project. There are several methods, techniques and frameworks used for expressing, prioritizing and managing requirements in IT projects. It is widely established that it is difficult to determine requirements for traditional systems, so a question naturally arises about how requirements analysis is executed when AI solutions (which even fewer individuals can grasp) are being developed. Little research has been done on how the vital requirements phase is executed during the development of AI solutions. This research aims to investigate the requirements analysis phase during the development of AI solutions. To explore this topic, an extensive literature review was conducted, and in order to collect new information, interviews were performed with five suitable organizations (i.e., organizations that develop AI solutions). The research concludes that requirements analysis does not differ between the development of AI solutions and the development of traditional systems. However, the research showed some deviations that can be deemed particularly unique to the development of AI solutions and that affect the requirements analysis. These are: (1) the need for an iterative and agile systems development process, with an associated iterative and agile requirements analysis, (2) the importance of having a large set of quality data, (3) the relative deprioritization of user involvement, and (4) the difficulty of establishing the timeframe, results/feasibility and behavior of the AI solution beforehand.