  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
421

Power Modeling and Scheduling of Tests for Core-based System Chips

Samii, Soheil January 2005 (has links)
Today's technology makes it possible to integrate a complete system on a single chip, called a "system-on-chip" (SOC). Nowadays SOC designers use previously designed hardware modules, called cores, together with their user-defined logic (UDL), to form a complete system on a single chip. The manufacturing process may result in defective chips, for instance due to the base material, and therefore chips are tested after production to ensure they are fault-free. The testing time for a chip affects its final cost, so it is important to minimize it. For core-based SOCs this can be done by testing several cores at the same time, instead of testing the cores sequentially. However, this results in higher activity in the chip and hence higher power consumption. Due to several factors in the manufacturing process, the power consumption of a chip is limited; these power limits must therefore be carefully considered when planning the testing of a chip, otherwise it can be damaged during test due to overheating. This leads to the problem of minimizing testing time under such power constraints. In this thesis we discuss test power modeling and its application to SOC testing. We review previous work in this area and conclude that current power modeling techniques in SOC testing are rather pessimistic. We therefore propose a more accurate power model based on analysis of the test data. Furthermore, we present techniques for test pattern reordering, with the objective of partitioning the test power consumption into low and high parts. The power model is included in a tool for SOC test architecture design and test scheduling, where the scheduling heuristic is designed for SOCs with fixed-width test bus architectures. Several experiments have been conducted to evaluate the proposed approaches. The results show that using the presented power modeling techniques in test scheduling algorithms yields lower testing times and thus lower test cost.
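The core idea of power-constrained test scheduling — start core tests concurrently as long as their summed power stays under the chip's cap — can be sketched as a simple greedy scheduler. This is an illustrative toy, not the thesis's actual heuristic or power model; all core names, durations, and power values are hypothetical.

```python
def schedule_tests(tests, power_cap):
    """tests: list of (name, duration, power); assumes each power <= power_cap.
    Returns {name: start_time}. Longest tests are placed first, each at the
    earliest start time where the total power over its whole interval fits."""
    placed = []    # (start, end, power) of already scheduled tests
    schedule = {}
    for name, dur, power in sorted(tests, key=lambda t: -t[1]):
        # candidate start times: 0 and the end of every placed test,
        # since the power profile only drops at those instants
        candidates = sorted({0.0} | {end for _, end, _ in placed})
        for t in candidates:
            # check the power peak over [t, t+dur): evaluate at t and at
            # every placed start inside the window
            points = {t} | {s for s, e, _ in placed if t < s < t + dur}
            ok = all(
                sum(p for s, e, p in placed if s <= q < e) + power <= power_cap
                for q in points
            )
            if ok:
                schedule[name] = t
                placed.append((t, t + dur, power))
                break
    return schedule
```

With a 10-unit power cap, two 5-unit tests can run in parallel while a third must wait, giving a shorter total test time than a purely sequential schedule.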
422

Data-Centric Network of Things : A Method for Exploiting the Massive Amount of Heterogeneous Data of Internet of Things in Support of Services

Xiao, Bin January 2017 (has links)
The Internet of Things (IoT) generates a massive amount of heterogeneous data, which should be efficiently utilized to support services in different domains. Specifically, data need to be supplied to services by understanding both the needs of those services and changes in the environment, so that necessary data can be provided efficiently but without overfeeding. However, it is still very difficult for IoT to fulfill such data supply with only the existing support of communication, networks, and infrastructure, while the most essential issues remain unaddressed, namely heterogeneity, resource coordination, and the dynamicity of environments. This necessitates a specific study of those issues and a method for utilizing the massive amount of heterogeneous data to support services in different domains. This dissertation presents a novel method, called the data-centric network of things (DNT), which handles heterogeneity, coordinates resources, and understands the changing relations among IoT entities in dynamic environments in order to supply data in support of services. As a result, various IoT-based services (e.g., smart cities, smart transport, smart healthcare, smart homes) are supported by receiving enough necessary data without overfeeding. The contributions of the DNT to IoT and big data research are as follows. Firstly, the DNT enables IoT to perceive data, resources, and the relations among IoT entities in dynamic environments; this perceptibility helps IoT handle heterogeneity at different levels. Secondly, the DNT coordinates IoT edge resources to process and disseminate data based on the perceived results, which relieves, to a certain degree, the big data pressure caused by centralized analytics. Thirdly, the DNT manages entity relations for data supply by handling environment dynamicity. Finally, the DNT supplies the necessary data to satisfy different service needs, avoiding both data-hungry and data-overfed states.
423

Deficiencies in IT security in Swedish shipping? : A qualitative study of IT security on Swedish ships

Gustafsson, Daniel, Hamid, Mohammadi January 2017 (has links)
Certain types of cyberattacks increased sharply between 2015 and 2016; this has been acknowledged both on land and at sea. Given the unique situation of shipping, it is of interest to investigate how shipping companies have chosen to protect themselves from cyberattacks. Four shipping companies were interviewed regarding the cybersecurity of their vessels. The results of the interviews were then compared against IMO's guidelines, released in 2016, to investigate whether there are deficiencies in cybersecurity on Swedish ships. The results show that there are shortcomings at several of the shipping companies interviewed, primarily in the training of personnel. The results also show clear contrasts between smaller and larger shipping companies: the smaller companies have fewer measures in place to prevent or handle a cyberattack, while the larger companies have more.
424

Advances Towards Data-Race-Free Cache Coherence Through Data Classification

Davari, Mahdad January 2017 (has links)
Providing a consistent view of the shared memory based on precise and well-defined semantics—memory consistency model—has been an enabling factor in the widespread acceptance and commercial success of shared-memory architectures. Moreover, cache coherence protocols have been employed by the hardware to remove from the programmers the burden of dealing with the memory inconsistency that emerges in the presence of the private caches. The principle behind all such cache coherence protocols is to guarantee that consistent values are read from the private caches at all times. In its most stringent form, a cache coherence protocol eagerly enforces two invariants before each data modification: i) no other core has a copy of the data in its private caches, and ii) all other cores know where to receive the consistent data should they need the data later. Nevertheless, by partly transferring the responsibility for maintaining those invariants to the programmers, commercial multicores have adopted weaker memory consistency models, namely the Total Store Order (TSO), in order to optimize the performance for more common cases. Moreover, memory models with more relaxed invariants have been proposed based on the observation that more and more software is written in compliance with the Data-Race-Free (DRF) semantics. The semantics of DRF software can be leveraged by the hardware to infer when data in the private caches might be inconsistent. As a result, hardware ignores the inconsistent data and retrieves the consistent data from the shared memory. DRF semantics therefore removes from the hardware the burden of eagerly enforcing the strong consistency invariants before each data modification. Instead, consistency is guaranteed only when needed. This results in manifold optimizations, such as reducing the energy consumption and improving the performance and scalability. 
The efficiency of detecting and discarding the inconsistent data is an important factor affecting the efficiency of such coherence protocols. For instance, discarding the consistent data does not affect the correctness, but results in performance loss and increased energy consumption. In this thesis we show how data classification can be leveraged as an effective tool to simplify the cache coherence based on the DRF semantics. In particular, we introduce simple but efficient hardware-based private/shared data classification techniques that can be used to efficiently detect the inconsistent data, thus enabling low-overhead and scalable cache coherence solutions based on the DRF semantics.
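As a rough illustration of runtime private/shared data classification (in the spirit of page-granularity schemes, not the specific mechanism of this thesis), a page can be marked Private to its first accessor and demoted to Shared once a second core touches it; Shared data is what a DRF-based protocol must treat conservatively at synchronization points. All addresses and core IDs below are hypothetical.

```python
class PageClassifier:
    """Toy first-touch classifier: a page is Private to the first core that
    accesses it and becomes Shared when any other core accesses it."""

    def __init__(self):
        self.owner = {}    # page number -> first core that touched it
        self.shared = set()

    def access(self, core, addr, page_bits=12):
        page = addr >> page_bits           # 4 KiB pages by default
        if page not in self.owner:
            self.owner[page] = core        # first touch: Private to this core
        elif self.owner[page] != core:
            self.shared.add(page)          # second core: reclassify as Shared
        return "Shared" if page in self.shared else "Private"
```

Once a page is Shared it stays Shared, which is conservative but safe: the coherence layer only ever needs to self-invalidate data the classifier has marked Shared.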
425

Search engines and how we evaluate them : - A comparison between the search engines Elasticsearch and EPiServer Find

Nilsson, Kim, Larsson, Anton January 2017 (has links)
Sigma IT Consulting in Växjö tasked the authors of this paper with changing the search engine in one of their projects. To give the project a scientific framing, the authors specified two research questions connected to Sigma's problem. The problem is interesting to investigate since search engines are used frequently all over the world and are often more user-friendly than classic menu navigation. To find a solution, two experiments were conducted. The results showed that the search engine EPiServer Find, ever so slightly, outperformed Elasticsearch in terms of relevance. The response times, however, showed no significant difference.
426

Continuous architecture in a large distributed agile organization : A case study at Ericsson

Standar, Magnus January 2017 (has links)
Agile practices have become the norm, also in large-scale organizations. Applying agile methods includes introducing continuous practices, including continuous architecture. For web-scale applications, microservices are a rising star. This thesis investigates whether microservices could also be an answer for embedded systems, to tackle the synchronization problem of many parallel teams.
427

A latency comparison of IoT protocols in MES

Lindén, Erik January 2017 (has links)
Many industries are now moving several of their processes into the cloud computing sphere. One important process is collecting machine data in an effective way. Moving signal collection processes to the cloud instead of running them on premise raises many questions about performance, scalability, security, and cost. This thesis focuses on some of the market-leading and cutting-edge protocols appropriate for industrial production data collection. It investigates and compares the pros and cons of the protocols with respect to the demands of industrial systems. The thesis also presents examples of how the protocols can be used to collect data all the way to a higher-level system such as an ERP or MES. The protocols focused on are MQTT and AMQP (in OPC-UA). The possibilities of OPC-UA in cloud computing are of extra interest to investigate in this thesis due to its increasing usage and development.
428

Anomaly Detection in an e-Transaction System using Data Driven Machine Learning Models : An unsupervised learning approach in time-series data

Avdic, Adnan, Ekholm, Albin January 2019 (has links)
Background: Detecting anomalies in time-series data is a task that can be done with the help of data-driven machine learning models. This thesis investigates if, and how well, different machine learning models with an unsupervised approach can detect anomalies in the e-Transaction system Ericsson Wallet Platform. The anomalies in our domain context are delays in the system. Objectives: The objective of this thesis is to compare four different machine learning models in order to find the most relevant one. The best-performing models are determined by the evaluation metric F1-score. An intersection of the best models is also evaluated in order to decrease the number of false positives and make the detection more precise. Methods: A relevant time-series data sample with 10-minute-interval data points from the Ericsson Wallet Platform was used. A number of steps were taken, such as data handling, pre-processing, normalization, training, and evaluation. Two relevant features were trained separately as one-dimensional data sets. The two features relevant for finding delays in the system are the mean wait time (ms) and Mean * N, where N is the number of calls to the system. The evaluation metrics used are true positives, true negatives, false positives, false negatives, accuracy, precision, recall, F1-score, and the Jaccard index. The Jaccard index reveals how similar the algorithms' detections are; since detection is binary, each data point in the time-series data is classified. Results: The results reveal the two best-performing models with regard to F1-score. The intersection evaluation reveals if, and how well, a combination of the two best-performing models can reduce the number of false positives. Conclusions: The conclusion of this work is that some algorithms perform better than others. It is a proof of concept that such classification algorithms can separate normal from non-normal behavior in the domain of the Ericsson Wallet Platform.
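The intersection idea — flag a data point only when both detectors agree, trading some recall for precision — and the F1/Jaccard metrics can be sketched as follows. The detector outputs and labels are made-up toy data, not results from the Ericsson Wallet Platform.

```python
def f1(pred, truth):
    """F1-score for binary predictions (1 = anomaly)."""
    tp = sum(p and t for p, t in zip(pred, truth))
    fp = sum(p and not t for p, t in zip(pred, truth))
    fn = sum(t and not p for p, t in zip(pred, truth))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0

def jaccard(a, b):
    """Jaccard index between two binary detection vectors."""
    inter = sum(x and y for x, y in zip(a, b))
    union = sum(x or y for x, y in zip(a, b))
    return inter / union if union else 1.0

def intersect(a, b):
    # flag a point only when both detectors agree -> fewer false positives
    return [x and y for x, y in zip(a, b)]
```

On toy data where each detector has one spurious alarm in a different place, the intersection removes both false positives while keeping the true detections.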
429

Implementing Bayesian Networks for online threat detection

Pappaterra, Mauro José January 2018 (has links)
Cybersecurity threats have surged in the past decades. Experts agree that conventional security measures will soon not be enough to stop the propagation of more sophisticated and harmful cyberattacks. Recently, there has been growing interest in mastering the complexity of cybersecurity by adopting methods borrowed from Artificial Intelligence (AI) in order to support automation. Moreover, entire security frameworks, such as DETECT (Decision Triggering Event Composer and Tracker), have been designed with the aim of automatic and early detection of threats against systems, using model analysis and recognising sequences of events and other patterns inherent to attacks. In this project, I concentrate on cybersecurity threat assessment through the translation of Attack Trees (AT) into probabilistic detection models based on Bayesian Networks (BN). I also show how these models can be integrated and dynamically updated as a detection engine in the existing DETECT framework for automated threat detection, thus enabling both offline and online threat assessment. Integration in DETECT is important to allow real-time model execution and evaluation for quantitative threat assessment. Finally, I apply my methodology to some real-world case studies, evaluate the models with sample data, perform data sensitivity analyses, and then present and discuss the results.
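A common AT-to-BN translation maps an OR gate to a noisy-OR-style combination of its children and an AND gate to a product, assuming independent leaves. The sketch below uses a hypothetical two-level attack tree with made-up leaf probabilities; it is not the model of any case study in the thesis.

```python
def p_and(children):
    """AND gate: all children must succeed (independence assumed)."""
    p = 1.0
    for c in children:
        p *= c
    return p

def p_or(children):
    """OR gate: at least one child succeeds (independence assumed)."""
    q = 1.0
    for c in children:
        q *= (1.0 - c)
    return 1.0 - q

# hypothetical leaf probabilities, e.g. derived from observed sensor events
p_phish = 0.3      # credential phishing observed
p_exploit = 0.2    # service exploit observed
p_lateral = 0.5    # lateral movement observed

# attack tree: "gain access" = phish OR exploit; "breach" = access AND lateral
p_access = p_or([p_phish, p_exploit])   # 1 - 0.7 * 0.8 = 0.44
p_breach = p_and([p_access, p_lateral]) # 0.44 * 0.5 = 0.22
```

In a full BN these gate probabilities would become conditional probability tables, and observed events would update the leaves dynamically, which is what makes online assessment possible.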
430

Quality and real-time performance assessment of color-correction methods : A comparison between histogram-based prefiltering and global color transfer

Nilsson, Linus January 2018 (has links)
In the field of computer vision, and more specifically multi-camera systems, color correction is an important topic of discussion. The need for color-tone similarity among the multiple images used to construct a single scene is self-evident. The strengths and weaknesses of color-correction methods can be assessed by using metrics that measure structural and color-tone similarity and by timing the methods. Color transfer has better structural similarity than histogram-based prefiltering but worse color-tone similarity. The color transfer method is also faster than histogram-based prefiltering. Color transfer is the better method if the focus is a structurally similar image after correction; if better color-tone similarity at the cost of structural similarity is acceptable, histogram-based prefiltering is the better choice. Color transfer is faster and easier to run with a parallel computing approach than histogram-based prefiltering, and might therefore be a better pick for real-time applications. There is, however, more room to optimize an implementation of histogram-based prefiltering utilizing parallel computing.
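Global color transfer in the spirit of Reinhard et al. shifts and scales each color channel of the source image so that its mean and standard deviation match the target's. A minimal one-channel sketch on made-up pixel values (real implementations typically work per channel in a decorrelated color space such as lαβ):

```python
from statistics import mean, pstdev

def transfer_channel(src, tgt):
    """Match the mean and standard deviation of src to those of tgt.
    src, tgt: flat lists of pixel values for one color channel."""
    m_s, m_t = mean(src), mean(tgt)
    s_s, s_t = pstdev(src), pstdev(tgt)
    scale = s_t / s_s if s_s else 1.0   # guard against a constant channel
    return [(v - m_s) * scale + m_t for v in src]
```

Because the operation is an independent affine map per pixel, it parallelizes trivially, which is consistent with the observation above that color transfer suits real-time, parallel implementations.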
