51

En forensisk analys av iOS

Ohlsson, Oliver January 2013 (has links)
Since Apple introduced the iPhone in 2007, the use of smartphones has grown steadily. They are used not only at home but also in companies and in the military. Company phones increasingly hold important information such as email, text messages, and sensitive files. A hacker could therefore gain access to an entire company by breaking into a mobile phone used in its operations. To counter this, security features such as encryption have been implemented in today's mobile phones. The goal of this work has been to examine these security features and what information can be extracted from an iPhone. The research questions are answered by examining which security features have been implemented and how much information can be extracted. A number of works have been written on iOS security, but most of them cover older versions of the operating system. In this work, the latest version, iOS 6.1.4, is tested with the tool XRY.
52

Informationssäkerhetsarbete på tillverkande småföretag

Johansson, Berit January 2014 (has links)
Information security is a more topical subject than ever. The media continuously report on disruptions caused by technical breakdowns, human error, and sabotage. Major changes have occurred over the last twenty years in technology, product lifespans, and staffing. Protecting strategically important information should be more important than ever, in order to build credibility and to avoid production interruptions and financial losses. Earlier research has contributed models for how to work to increase information security within an organization. Surveys have also been made of the situation in public services, large companies, and homes. The purpose of this thesis is to fill the remaining gap and describe the current situation in small manufacturing companies. How does company management work with information security? A qualitative study, based on personal interviews, was carried out at three small and medium-sized manufacturing companies in Småland. The study clearly shows that the companies have carried out some risk assessments and measures, but that they do not work systematically with information security. The changes in technology, products, and staffing have not been matched by more systematic security thinking.
53

Using WebGL to create TV-centric user interfaces

Karlsson, Jonathan January 2014 (has links)
In recent years the user interfaces of the TV platform have been powered by HTML, but since the platform is starting to support new techniques it may be time to shift focus. HTML is a good choice for interface development because of its high level of abstraction and platform independence; however, when performance is critical and the requirements are high, HTML can impose serious restrictions. WebGL is a technology released in 2011 that brings a low-level graphics API to the web. The API allows for the development of advanced 3D graphics and visual effects that were impossible or impractical in the HTML world. The problem is that the effort of using pure WebGL is in most cases too great to overcome. In this thesis a proof of concept was developed to investigate the issues and limitations of WebGL. The conclusion was that even though the performance was not as good as expected, WebGL might still be viable in some settings.
54

A topic model-based approach for ontology extension in the computational materials science domain

Zhang, Tong January 2020 (has links)
With the continuous development of human society, the demand for advanced materials in all walks of life is growing day by day. From the agrarian age to the information age, humans have studied materials tirelessly, and computational materials science explores computational methods for doing so. However, as the research deepens, the volume of materials science data keeps growing, and each research institution maintains its own materials information management system. The diversity of data structures and storage formats makes the data ambiguous and difficult to integrate. To make data findable and reusable, scientists have borrowed the concept of an ontology from philosophy to describe the context and structure of data. An ontology captures the representative concepts of a field and the relationships between those concepts. One of the few ontologies in the computational materials science domain is the Materials Design Ontology (MDO). This thesis mines representative concepts and relations in order to extend MDO. To achieve this goal, an improved ToPMine framework was deployed, containing a new frequent phrase mining algorithm and an improved phrase-based Latent Dirichlet Allocation (LDA) topic model. The improved ToPMine framework introduces part-of-speech tagging and weighted coefficients; its time and space complexity is reduced from quadratic to linear, and the perplexity of the phrase-based LDA is reduced by 26.7%, indicating more concentrated and accurate results. In addition, a concept lattice is constructed using formal concept analysis to extend the relations of the domain ontology. In brief, this thesis studies the titles and abstracts of more than 9000 collected publications in the field to extend MDO, and demonstrates the practicality of the framework by comparing the experimental results with existing algorithms.
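The abstract does not reproduce the improved ToPMine implementation itself. As a rough illustration of the general pipeline it describes (frequent phrase mining followed by a phrase-based topic model), the Python sketch below uses a naive bigram merge and an off-the-shelf LDA; the toy corpus, frequency threshold, and topic count are assumptions for the example, not the thesis's configuration.

```python
# Rough sketch of a phrase-mining + topic-model pipeline in the spirit of ToPMine.
# The corpus, frequency threshold, and topic count are illustrative placeholders.
from collections import Counter

from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "density functional theory calculation of band structure",
    "band structure and density of states from first principles",
    "machine learning interatomic potentials for molecular dynamics",
]

# 1) Naive frequent-phrase mining: merge adjacent word pairs that occur often enough.
tokenised = [d.split() for d in docs]
bigram_counts = Counter(
    (a, b) for tokens in tokenised for a, b in zip(tokens, tokens[1:])
)
frequent = {pair for pair, n in bigram_counts.items() if n >= 2}

def merge_phrases(tokens):
    out, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) in frequent:
            out.append(tokens[i] + "_" + tokens[i + 1])  # e.g. band_structure
            i += 2
        else:
            out.append(tokens[i])
            i += 1
    return " ".join(out)

phrase_docs = [merge_phrases(t) for t in tokenised]

# 2) Plain (not phrase-weighted) LDA over the phrase-augmented corpus.
vectoriser = CountVectorizer()
X = vectoriser.fit_transform(phrase_docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = vectoriser.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-5:][::-1]]
    print(f"topic {k}: {top}")
```

Merged phrases such as band_structure survive as single tokens in the topic-word distributions, which is the basic idea that the improved framework builds on.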
55

Implementation of an abstract module for entity resolution to combine data sources with the same domain information

Chowdhury, Ziaul Islam January 2021 (has links)
Increasing digitalization is creating a lot of data every day. Sometimes the same real-world entity is stored in multiple data sources but lacks a common reference. This creates a significant challenge for the integration of data sources and may cause duplicates and inconsistencies if not resolved correctly. The core idea of this thesis is to implement an abstract module for entity resolution to combine multiple data sources with similar domain information. The CRISP-DM process was used as the methodology, starting with an understanding of the business and the data. Two open datasets containing product details from e-commerce sites, Abt-Buy and Amazon-Google, are used to conduct the research. The datasets have similar structures and contain product name, description, manufacturer name, and price. Both datasets contain gold-standard data for evaluating the performance of the model. In the data exploration phase, various aspects of the datasets are explored, such as word clouds of important words in the product name and description, bigrams and trigrams of the product name, histograms, and the standard deviation, mean, minimum, and maximum length of the product name. The data preparation phase contains an NLP-based preprocessing pipeline consisting of case normalization, removal of special characters and stop words, tokenization, and lemmatization. In the modeling phase of the CRISP-DM process, various similarity and distance measures are applied to the product name and/or description, and the weighted scores are summed to form the total score of the fuzzy matching. A set of threshold values is applied to the total score, and the performance of the model is evaluated against the ground truth. The implemented model scored more than 60% F1-score on both datasets. Moreover, the abstract module can be applied to other datasets with similar domain information. The model has not been deployed to a production environment, which is left as future work. Blocking or indexing techniques could also be applied in the future, together with big data technologies, to reduce the quadratic nature of the entity resolution problem.
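As a sketch of the weighted fuzzy-matching step described above, the snippet below combines a character-level similarity and token-overlap measures into one total score and compares it against a threshold. The field names, weights, threshold, and example records are assumptions for illustration, not the values used in the thesis.

```python
# Minimal sketch of weighted fuzzy matching between two product records.
# Field names, weights, and the 0.6 threshold are illustrative assumptions.
from difflib import SequenceMatcher

def normalise(text: str) -> str:
    return " ".join(text.lower().split())

def char_similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a, b).ratio()

def token_jaccard(a: str, b: str) -> float:
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def match_score(rec_a: dict, rec_b: dict) -> float:
    name_a, name_b = normalise(rec_a["name"]), normalise(rec_b["name"])
    desc_a, desc_b = normalise(rec_a["description"]), normalise(rec_b["description"])
    # Weighted sum of similarity scores; the name contributes more than the description.
    return (
        0.5 * char_similarity(name_a, name_b)
        + 0.3 * token_jaccard(name_a, name_b)
        + 0.2 * token_jaccard(desc_a, desc_b)
    )

a = {"name": "Apple iPod Nano 8GB Silver", "description": "mp3 player with 8gb storage"}
b = {"name": "apple ipod nano 8 gb silver", "description": "8gb mp3 player in silver"}

THRESHOLD = 0.6
score = match_score(a, b)
print(score, score >= THRESHOLD)  # prints the total score and the match decision
```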
56

Bayesian variable selection in linear mixed effects models

Tran, Vuong January 2017 (has links)
Variable selection techniques have been well researched and used in many different fields. There is a rich literature on Bayesian variable selection in linear regression models, but only a few studies address mixed effects. The topic of this thesis is Bayesian variable selection in linear mixed effects models, achieved by inducing different shrinkage priors. Both unimodal shrinkage priors and spike-and-slab priors are used and compared. The distributions chosen, either as unimodal priors or as parts of the spike-and-slab priors, are the Normal distribution, the Student-t distribution, and the Laplace distribution. Both simulation and real dataset studies were carried out, with the intention of investigating and evaluating how well the chosen distributions work as shrinkage priors. Results from the real dataset show that spike-and-slab priors yield a stronger shrinkage effect than unimodal priors do. However, inducing spike-and-slab priors carelessly, without considering whether the dataset is sufficiently large, may lead to poor model parameter estimates. Results from the simulation studies indicate that a mixture of Laplace distributions for both the spike and slab components is the prior that yields the strongest shrinkage effect among the investigated shrinkage priors.
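For illustration, the sketch below draws regression coefficients from a Laplace spike-and-slab prior of the kind compared in the thesis: each coefficient comes either from a narrow spike near zero or from a wide slab, according to a prior inclusion probability. The scales and inclusion probability are assumptions, and the sketch shows only the prior structure, not the full posterior computation for a linear mixed effects model.

```python
# Sketch of a Laplace spike-and-slab prior: each coefficient is drawn from a
# narrow "spike" (near zero) or a wide "slab" with prior inclusion probability pi.
# The scales and pi are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

def sample_spike_slab_laplace(n_coef, pi=0.2, spike_scale=0.01, slab_scale=1.0):
    # Indicator: True -> coefficient comes from the slab (kept), False -> spike (shrunk).
    included = rng.random(n_coef) < pi
    scales = np.where(included, slab_scale, spike_scale)
    return rng.laplace(loc=0.0, scale=scales), included

beta, included = sample_spike_slab_laplace(n_coef=10)
for b, keep in zip(beta, included):
    print(f"{b: .4f}  {'slab' if keep else 'spike'}")
```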
57

Brain Emotional Learning-Inspired Models

Parsapoor, Mahboobeh January 2014 (has links)
In this thesis the mammalian nervous system and brain have been used as inspiration to develop a computational intelligence model based on the neural structure of fear conditioning, extending the structure of the previously proposed amygdala-orbitofrontal model. The proposed model can be seen as a framework for developing general computational intelligence based on the emotional system of the human brain instead of traditional models based on its rational system. The suggested model can be considered a new data-driven model and is referred to as the brain emotional learning-inspired model (BELIM). Structurally, a BELIM consists of four main parts that mimic those parts of the brain's emotional system responsible for activating the fear response. In this thesis the model is initially investigated for prediction and classification. The performance has been evaluated using various benchmark data sets from prediction applications, e.g. sunspot numbers for solar activity prediction, the auroral electrojet (AE) index for geomagnetic storm prediction, and the Hénon map and Lorenz time series. In most of these cases, the model was tested for both long-term and short-term prediction. The performance of BELIM has also been evaluated for classification, by classifying binary and multiclass benchmark data sets.
58

Evaluation of models for process time delay estimation in a pulp bleaching plant

Dahlbäck, Marcus January 2020 (has links)
The chemical processes used to manufacture pulp are always in development to cope with increasing environmental demands and competition. With a deeper understanding of the processes, the pulping industry can become both more profitable and more effective at maintaining an even, good pulp quality while reducing emissions. One step in this direction is to more accurately determine the time delay of a process, defined as the time it takes for a change in input to affect the process's output. This information can then be used to control the process more efficiently. The methods used today to estimate the time delay rely on simple models and assumptions about the processes, for example that the pulp behaves like a "plug" that never changes its shape throughout the process. The problem with these assumptions is that they are only valid under ideal circumstances where there are no disturbances. This Master's thesis aims to investigate whether it is possible to measure the process time delay using only the input and output data from the process, and whether this estimate is more accurate than the existing model-based methods. Another aim is to investigate whether the process time delay can be monitored in real time. We investigated three methods: cross-correlation applied to the raw input and output data, cross-correlation applied to the derivatives of the input and output data, and a convolutional neural network trained to identify the process time delay from the input and output data. The results show that it is possible to find the time delay, but with significant deviations from the models used today. Due to a lack of data where the time delay was actually measured, the reason for this deviation requires more research. The results also show that the three methods are unsuitable for real-time estimation. However, the models can likely monitor how the process time delay develops over long periods.
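As an illustration of the first method, the sketch below estimates a time delay by cross-correlating a synthetic input step with a delayed, noisy output and picking the lag with the highest correlation. The signal shapes, noise level, and delay are made-up values, not plant data.

```python
# Sketch of time-delay estimation with cross-correlation on synthetic signals.
import numpy as np

rng = np.random.default_rng(1)

n = 500
true_delay = 40                          # samples
u = np.zeros(n)
u[100:] = 1.0                            # input: a step change at sample 100
y = np.zeros(n)
y[100 + true_delay:] = 1.0               # output: the same step, delayed
y += 0.02 * rng.standard_normal(n)       # plus measurement noise

# Cross-correlate the zero-mean signals and take the lag with maximum correlation.
u0, y0 = u - u.mean(), y - y.mean()
corr = np.correlate(y0, u0, mode="full")
lags = np.arange(-n + 1, n)              # lag of y relative to u
estimated_delay = lags[np.argmax(corr)]

print(f"true delay: {true_delay}, estimated delay: {estimated_delay}")
```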
59

Procedurally generating an initial character state for interesting role-playing game experiences

Lindholm, Emil January 2020 (has links)
No description available.
60

Relief Planning Management Systems - Investigation of the Geospatial Components

Ngo, Duc Khanh January 2013 (has links)
No description available.
