11 |
On Practical Machine Learning and Data Analysis — Gillblad, Daniel, January 2008
This thesis discusses and addresses some of the difficulties associated with practical machine learning and data analysis. Introducing data-driven methods in, e.g., industrial and business applications can lead to large gains in productivity and efficiency, but the cost and complexity are often overwhelming. Creating machine learning applications in practice often involves a large amount of manual labour, which typically needs to be performed by an experienced analyst who may nevertheless lack significant experience with the application area. We discuss some of the hurdles faced in a typical analysis project and suggest measures and methods to simplify the process. One of the most important issues when applying machine learning methods to complex data, such as in industrial applications, is that the processes generating the data are modelled in an appropriate way. Relevant aspects have to be formalised and represented in a way that allows us to perform our calculations efficiently. We present a statistical modelling framework, Hierarchical Graph Mixtures, based on a combination of graphical models and mixture models. It allows us to create consistent, expressive statistical models that simplify the modelling of complex systems. Using a Bayesian approach, we allow for the encoding of prior knowledge, making the models applicable in situations where relatively little data are available. Detecting structures in data, such as clusters and dependency structure, is very important both for understanding an application area and for specifying the structure of, e.g., a hierarchical graph mixture. We discuss how this structure can be extracted for sequential data. By using the inherent dependency structure of sequential data, we construct an information-theoretic measure of correlation that does not suffer from the problems most common correlation measures have with this type of data.
In many diagnosis situations it is desirable to perform classification in an iterative and interactive manner. The matter is often complicated by very limited amounts of knowledge and examples when a new system to be diagnosed is initially brought into use. We describe how to create an incremental classification system based on a statistical model that is trained from empirical data, and show how the limited available background information can still be used initially to provide a functioning diagnosis system. To minimise the effort with which results are achieved within data analysis projects, we need to address not only the models used but also the methodology and applications that can help simplify the process. We present a methodology for data preparation and a software library intended for rapid analysis, prototyping, and deployment. Finally, we study a few example applications, presenting tasks within classification, prediction and anomaly detection. The examples include demand prediction for supply chain management, approximating complex simulators for increased speed in parameter optimisation, and fraud detection and classification within a media-on-demand system.
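The information-theoretic correlation measure for sequential data mentioned above can be illustrated with a small sketch: normalized mutual information between a symbol sequence and a lagged copy of itself. This is an illustrative stand-in under invented conventions, not the exact measure defined in the thesis.

```python
from collections import Counter
from math import log2

def entropy(seq):
    """Shannon entropy (bits) of a symbol sequence."""
    n = len(seq)
    return -sum((c / n) * log2(c / n) for c in Counter(seq).values())

def lagged_mi(seq, lag=1):
    """Mutual information (bits) between seq[t] and seq[t+lag]."""
    pairs = list(zip(seq[:-lag], seq[lag:]))
    n = len(pairs)
    px = Counter(p[0] for p in pairs)
    py = Counter(p[1] for p in pairs)
    pxy = Counter(pairs)
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def sequential_correlation(seq, lag=1):
    """MI normalized to [0, 1] by the mean marginal entropy."""
    h = 0.5 * (entropy(seq[:-lag]) + entropy(seq[lag:]))
    return lagged_mi(seq, lag) / h if h > 0 else 0.0
```

A perfectly periodic sequence scores near 1, while an i.i.d. sequence scores near 0, which is the kind of behaviour one wants from a dependency measure on sequential data.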
|
12 |
Performance, Processing and Perception of Communicative Motion for Avatars and Agents — Alexanderson, Simon, January 2017
Artificial agents and avatars are designed with a large variety of face and body configurations. Some of these (such as virtual characters in films) may be highly realistic and human-like, while others (such as social robots) have considerably more limited expressive means. In both cases, human motion serves as the model and inspiration for the non-verbal behavior displayed. This thesis focuses on increasing the expressive capacities of artificial agents and avatars using two main strategies: 1) improving the automatic capturing of the most communicative areas for human communication, namely the face and the fingers, and 2) increasing communication clarity by proposing novel ways of eliciting clear and readable non-verbal behavior. The first part of the thesis covers automatic methods for capturing and processing motion data. In paper A, we propose a novel dual sensor method for capturing hands and fingers using optical motion capture in combination with low-cost instrumented gloves. The approach circumvents the main problems with marker-based systems and glove-based systems, and it is demonstrated and evaluated on a key-word signing avatar. In paper B, we propose a robust method for automatic labeling of sparse, non-rigid motion capture marker sets, and we evaluate it on a variety of marker configurations for finger and facial capture. In paper C, we propose an automatic method for annotating hand gestures using Hierarchical Hidden Markov Models (HHMMs). The second part of the thesis covers studies on creating and evaluating multimodal databases with clear and exaggerated motion. The main idea is that this type of motion is appropriate for agents under certain communicative situations (such as noisy environments) or for agents with reduced expressive degrees of freedom (such as humanoid robots). In paper D, we record motion capture data for a virtual talking head with variable articulation style (normal-to-over articulated). 
In paper E, we use techniques from mime acting to generate clear non-verbal expressions custom tailored for three agent embodiments (face-and-body, face-only and body-only).
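Paper C's use of Markov models for gesture annotation can be sketched with a flat-HMM Viterbi decoder; the hierarchical variant (HHMM) adds nested state structure on top of this idea. The "rest"/"stroke" states and all probabilities below are invented for illustration, not taken from the paper.

```python
from math import log

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden-state path for an observation sequence (log-space)."""
    V = [{s: log(start_p[s]) + log(emit_p[s][obs[0]]) for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            # Best predecessor state for s at time t.
            prev = max(states, key=lambda p: V[t - 1][p] + log(trans_p[p][s]))
            V[t][s] = V[t - 1][prev] + log(trans_p[prev][s]) + log(emit_p[s][obs[t]])
            back[t][s] = prev
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):  # backtrack through the pointers
        path.append(back[t][path[-1]])
    return path[::-1]
```

With toy emission probabilities, a run of "move" observations gets labelled as a gesture stroke bracketed by rest states, which is the essence of segmentation-by-decoding.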
|
13 |
Digitalization Dynamics: User Interface Innovation in an Automotive Setting — Hylving, L., January 2015
No description available.
|
14 |
Predictive Model: Using Text Mining for Determining Factors Leading to High-Scoring Answers in Stack Overflow — Quintana Selleras, Raul, 01 January 2020
With the advent of knowledge-based economies, knowledge transfer within online forums has become increasingly important to the work of IT teams. Stack Overflow, for example, is an online community in which computer programmers can interact and consult with one another to achieve information flow efficiencies and bolster their reputations, which are numerical representations of their standings within the platform. The high volume of information available in Stack Overflow, combined with significant variance in members' expertise and hence in the quality of their posts, hinders knowledge transfer and causes developers to waste valuable time locating good answers. Additionally, invalid answers can introduce security vulnerabilities and/or legal risks. By conducting text analytics and regression, this research presents a predictive model to optimize knowledge transfer among software developers. The model incorporates the identification of factors (e.g., good tagging, answer character count, tag frequency) that reliably lead to high-scoring answers in Stack Overflow. Upon applying natural language processing, the following variables were found to be significant: (a) the number of answers per question, (b) the cumulative tag score, (c) the cumulative comment score, and (d) the bag-of-words frequency. Additional methods were used to identify the factors that contribute to an answer being selected by the user who posted the question, the community at large, or both. Predicting what constitutes a good, accurate answer helps not only developers but also Stack Overflow itself, as the site can redesign its user interface to make better use of its knowledge repository and transfer knowledge more effectively. Likewise, companies that use the platform can decrease the amount of time and resources invested in training, fix software bugs faster, and complete challenging projects in a timely fashion.
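The kind of regression behind such a predictive model can be sketched with ordinary least squares over hand-picked answer features. The feature layout and the numbers below are hypothetical placeholders, not values from the study.

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for small linear systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):  # back substitution
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_ols(X, y):
    """Least-squares coefficients for y ~ X @ beta via the normal equations.
    Each row of X is [1, feature1, feature2, ...] (bias column included)."""
    n, k = len(X), len(X[0])
    XtX = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(k)] for a in range(k)]
    Xty = [sum(X[i][a] * y[i] for i in range(n)) for a in range(k)]
    return solve(XtX, Xty)
```

Here a row might encode, say, answer character count and tag count for one answer, with the fitted coefficients indicating how strongly each feature predicts the answer's score.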
|
15 |
An IT Forensics Investigation Using Free Tools / En IT Forensik utredning med fria verktyg — Ekman, Sebastian, January 2019
No description available.
|
16 |
Evaluation of solutions for a virtual reality cockpit in fighter jet simulation / Utvärdering av lösningar för virtuell cockpit inom flygsimulering — Martinsson, Tobias, January 2019
Virtual reality has become widespread in areas other than gaming. How this type of technology can be used in, for example, flight simulation still needs to be explored. In this thesis, virtual reality technology and free-hand interactions are examined in the context of a fighter jet cockpit. Design principles and visualization techniques are used to examine how a virtual reality cockpit and its interactions can be designed with high usability. From user test sessions and an accompanying questionnaire, some guidelines are gathered for how this type of interaction should be designed; specifically, how objects that can be interacted with, and the distance to them, should be visualized with regard to free-hand interaction. Different ways of providing feedback to the user are also discussed. Finally, it is determined that the technology used is a good fit for the context and task, but the implementation of interaction components needs more work. Alternative ways of tracking hand motions, and other sensor configurations, should be examined in the same context.
|
17 |
A Mathematical Model of Hacking the 2016 US Presidential Election — Nilsson Sjöström, Dennis, January 2018
After the 2016 US presidential election, allegations were published that the electronic voting machines used throughout the US could have been manipulated. These claims arose due to attacks on voter registration databases reported by the Department of Homeland Security. The US is particularly vulnerable to these types of attacks since electronic voting machines are the most prevalent voting method. To reduce election costs, other countries are also considering replacing paper ballots with electronic voting machines. This, however, imposes a risk: by attacking the electronic voting machines, an attacker could change the outcome of an election. A well-executed attack would be designed to be highly successful while keeping the risk of detection low. The question evaluated in this paper is whether such an attack would be possible and, if so, how much it would cost to execute. This paper presents a mathematical model of the 2016 US presidential election. The model is based on voting machine equipment data and polling data, and it is used to simulate how rational attackers would maximize their effect on the election while minimizing their effort by hacking voting machines. By using polls, it was possible to determine the effort needed to change the outcome of the 2016 US presidential election and thus estimate the costs. Based on the model, the estimated cost to hack the 2016 US presidential election would amount to at least ten million dollars. The results show that these attacks are possible by attacking only one manufacturer of electronic voting machines. Hence, the use of electronic voting machines poses too great a risk for democracy, and paper ballots should still be considered for elections. This kind of model can be applied to the elections of other countries that use electronic voting machines.
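The attacker's optimisation described above can be sketched as a small knapsack-style problem: choose the cheapest set of states whose electoral votes reach the required total. All state names, vote counts and costs below are invented toy numbers, not data from the thesis.

```python
def min_cost_to_flip(states, ev_needed):
    """states: list of (name, electoral_votes, cost_to_flip).
    Returns (cost, names) for the cheapest subset whose electoral
    votes total at least ev_needed; (inf, []) if impossible."""
    if sum(ev for _, ev, _ in states) < ev_needed:
        return float("inf"), []
    # best[v] = (cost, chosen states) to secure v electoral votes,
    # with v capped at ev_needed so the table stays small.
    best = {0: (0.0, [])}
    for name, ev, cost in states:
        # Snapshot before updating: classic 0/1 knapsack, each state used once.
        for v, (c, chosen) in sorted(best.items()):
            nv = min(ev_needed, v + ev)
            nc = c + cost
            if nv not in best or nc < best[nv][0]:
                best[nv] = (nc, chosen + [name])
    return best[ev_needed]
```

A dynamic program like this captures the thesis's framing of a rational attacker: maximum electoral effect for minimum effort, which is what makes a concentrated attack on a few close states (or a single machine manufacturer) so dangerous.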
|
18 |
Pulse Repetition Interval Time Series Modeling for Radar Waves using Long Short-Term Memory Artificial Recurrent Neural Networks — Lindell, Adam, January 2019
This project is a performance study of Long Short-Term Memory artificial neural networks in the context of a specific time series prediction problem consisting of radar pulse trains. The network is tested both for accuracy on a regular time series and on an incomplete time series from which values have been removed, in order to test its robustness to small errors. The results indicate that the network can perform very well when no values are removed and can be trained relatively quickly using the parameters set in this project, although the robustness of the network seems to be quite low using this particular implementation.
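The core of an LSTM network is its gated cell update. A minimal single-unit cell in plain Python shows the gate equations used for this kind of sequence prediction; the weights are arbitrary placeholders, not the trained network from the project.

```python
from math import exp, tanh

def sigmoid(x):
    return 1.0 / (1.0 + exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    """One forward step of a single-unit LSTM cell.
    w maps each gate name to (input weight, recurrent weight, bias)."""
    i = sigmoid(w["i"][0] * x + w["i"][1] * h_prev + w["i"][2])  # input gate
    f = sigmoid(w["f"][0] * x + w["f"][1] * h_prev + w["f"][2])  # forget gate
    o = sigmoid(w["o"][0] * x + w["o"][1] * h_prev + w["o"][2])  # output gate
    g = tanh(w["g"][0] * x + w["g"][1] * h_prev + w["g"][2])     # candidate value
    c = f * c_prev + i * g   # cell state: forget old memory, admit new
    h = o * tanh(c)          # hidden state / output
    return h, c

def run_sequence(xs, w):
    """Feed a pulse-interval sequence through the cell; return hidden outputs."""
    h, c = 0.0, 0.0
    outs = []
    for x in xs:
        h, c = lstm_step(x, h, c, w)
        outs.append(h)
    return outs
```

The additive cell-state update (`f * c_prev + i * g`) is what lets gradients flow over long pulse trains, and the forget gate is what lets the cell discard intervals corrupted by missing values.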
|
19 |
Information as Evidence in Archival and Information Science and e-Discovery / Information som bevis inom arkiv- och informationsvetenskap och e-Discovery — Lendin, Emma, January 2019
This essay presents a qualitative study undertaken with the purpose of showing the similarities and differences between the views on preservation of records as evidence within archival and information science and those within e-Discovery. A further goal of the study was to increase the understanding of the demands placed on preserved information for it to be usable as evidence, both short and long term. The data gathered consist mainly of academic articles and other texts, and a qualitative content analysis has been used to interpret the latent information in these texts. The theory used to conduct this analysis is the Records Continuum theory, of which the evidence axis has been of the most use. The results show that the views on preservation of records as evidence differ between archival science and e-Discovery, but that there are also similarities. They also show that the period of time during which information is considered worth preserving differs between the two, because their areas of focus are different. This ultimately affects the contexts in which information is preserved, and its value as evidence.
|
20 |
Possibilities of Encrypted NFC Implementation: An Exploratory Study within Swedish Healthcare — Veljkovic, Andrea, January 2019
This master thesis investigates the possibilities of using encrypted Near Field Communication (NFC) in Swedish healthcare. Issues such as lack of resources, high costs and inefficiency face not only the healthcare system but Swedish society as a whole. A literature study, as well as interviews and observations at five different public hospitals, stands as the basis for a prototype of a possible solution implementing the technology. By evaluating and discussing the results, an assessment of the future of encrypted NFC in Swedish healthcare is made. It is concluded that the properties of the technology, in combination with governmental goals and the optimistic attitude of patients and visitors, provide a promising outset for the implementation of encrypted NFC in Swedish healthcare.
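For a flavour of what handling an encrypted NFC payload might involve, here is a toy encrypt-then-MAC sketch using only the Python standard library (a SHA-256 counter-mode keystream plus an HMAC tag). A real deployment would use a vetted cipher such as AES-GCM; this is purely illustrative and not the thesis's prototype.

```python
import hashlib
import hmac
import os

def _keystream(key, nonce, length):
    """Derive a keystream by hashing key || nonce || counter (SHA-256)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def seal(key, payload):
    """Encrypt-then-MAC: returns nonce || ciphertext || HMAC-SHA256 tag."""
    nonce = os.urandom(12)
    ct = bytes(a ^ b for a, b in zip(payload, _keystream(key, nonce, len(payload))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def open_sealed(key, blob):
    """Verify the tag first, then decrypt; raises ValueError on tampering."""
    nonce, ct, tag = blob[:12], blob[12:-32], blob[-32:]
    if not hmac.compare_digest(tag, hmac.new(key, nonce + ct, hashlib.sha256).digest()):
        raise ValueError("authentication failed")
    return bytes(a ^ b for a, b in zip(ct, _keystream(key, nonce, len(ct))))
```

Verifying the tag before decrypting is the property that matters in a hospital setting: a tag mismatch means the NFC payload was altered in transit and must be rejected outright.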
|