  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
121

Securing Microsoft Azure : Automatic Access Control List Rule Generation to Secure Microsoft Azure Cloud Networks

Matsson, Carl Philip January 2017 (has links)
No description available.
122

Software based memory correction for a miniature satellite in low-Earth orbit / Mjukvarustyrd rättning av minnesfel för en miniatyrsatellit i låg omloppsbana

Wikman, John, Sjöblom, Johan January 2017 (has links)
The harsh radiation environment of space is known to cause bit flips in computer memory. The conventional way to combat this is through error detection and correction (EDAC) circuitry, but for low-budget space missions software EDAC can be used. One such mission is the KTH project Miniature Student Satellite (MIST), which aims to send a 3U CubeSat into low-Earth orbit. To ensure a high level of data reliability on board MIST, this thesis investigates the performance of different types of EDAC algorithms. First, a prediction of the bit flip susceptibility of DRAM memory in the planned trajectory is made. After that, data reliability models of Hamming and Reed-Solomon (RS) codes are proposed, and their respective running times on the MIST onboard computer are approximated. Finally, the performance of the different codes is discussed with regards to data reliability, memory overhead, and CPU usage. The findings of this thesis suggest that using an EDAC algorithm would greatly increase the data reliability. Among the codes investigated, three good candidates are RS(28,24), RS(196,192) and RS(255,251), depending on how much memory overhead can be accepted.
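The abstract above compares Hamming and Reed-Solomon codes for software EDAC. As a minimal illustration of the idea, the sketch below implements Hamming(7,4), which corrects any single bit flip in a 7-bit codeword; this is a generic textbook code for illustration only, not the RS(28,24)/RS(196,192)/RS(255,251) codes the thesis actually evaluates.

```python
# Minimal software EDAC sketch using Hamming(7,4): 4 data bits are encoded
# into a 7-bit codeword that can detect and correct any single bit flip.
# Illustrative only -- not the Reed-Solomon codes evaluated in the thesis.

def hamming74_encode(d):
    """d: list of 4 data bits -> 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # parity over positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # parity over positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # parity over positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """c: 7-bit codeword -> (corrected 4 data bits, flipped position or 0)."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 * 1 + s2 * 2 + s3 * 4   # 1-based position of the flipped bit
    c = list(c)
    if syndrome:
        c[syndrome - 1] ^= 1              # correct the single bit flip
    return [c[2], c[4], c[5], c[6]], syndrome
```

A radiation-induced bit flip anywhere in the codeword is located by the syndrome and corrected, at the cost of 3 parity bits per 4 data bits of memory overhead.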
123

Prioritizing Tests with Spotify’s Test & Build Data using History-based, Modification-based & Machine Learning Approaches

Öhlin, Petra January 2017 (has links)
This thesis intends to determine the extent to which machine learning can be used to solve the regression test prioritization (RTP) problem. RTP is used to order tests with respect to probability of failure. This optimizes for a fast failure, which is desirable if a test suite takes a long time to run or uses a significant amount of computational resources. A common machine learning task is to predict probabilities; this makes RTP an interesting application of machine learning. A supervised learning method is investigated to train a model to predict probabilities of failure, given a test case and a code change. The features investigated are chosen based on previous research on history-based and modification-based RTP. The main motivation for looking at these research areas is that they resemble the data provided by Spotify. The results show that it is possible to improve how tests are ordered with RTP using machine learning. Nevertheless, a much simpler history-based approach performs best: it looks at the history of test results, and the more failures recorded for a test case over time, the higher priority that test case gets. Less is sometimes more.
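The simple history-based approach described above (more recorded failures means higher priority) can be sketched in a few lines; the test names and failure counts here are invented for illustration and are not Spotify's data.

```python
# Hedged sketch of history-based regression test prioritization: order test
# cases by their recorded failure counts, most failure-prone first.
# Test names and counts below are hypothetical.

def prioritize_by_history(failure_counts):
    """failure_counts: dict of test name -> number of past failures.
    Returns test names ordered so likely-failing tests run first."""
    return sorted(failure_counts, key=lambda t: failure_counts[t], reverse=True)

history = {"test_login": 0, "test_payment": 7, "test_search": 2}
print(prioritize_by_history(history))  # most failure-prone test first
```

Because Python's sort is stable, tests with equal failure counts keep their original relative order, which makes the prioritization deterministic across runs.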
124

Empirical Study on Quantitative Measurement Methods for Big Image Data : An Experiment using five quantitative methods

Sravanam, Ramya January 2016 (has links)
Context. With the increasing demand for image processing in multimedia applications, research on image quality assessment has received great interest. The goal of Image Quality Assessment is to find efficient Image Quality Metrics that correlate closely with human visual perception, and over the last three decades researchers have produced a large body of literature on emerging Image Quality Assessment techniques. Emphasis is given here to Full-Reference Image Quality Assessment, where quality measurement algorithms are analyzed against the original reference image, as this is closest to perceptual visual quality. Objectives. In this thesis we investigate five widely used Image Quality Metrics (Peak Signal to Noise Ratio (PSNR), Structural SIMilarity Index (SSIM), Feature SIMilarity Index (FSIM), Visual Saliency Index (VSI), and Universal Quality Index (UQI)) in an experiment on a chosen image dataset, containing different types of distortions from different image processing applications, and identify the most efficient metric with respect to the dataset used. This analysis may help researchers working on big image data projects, where selecting an appropriate Image Quality Metric is of major significance. Our study details the dataset used and the experimental results, where the image set highly influences the outcome. Methods. The goal of this study is achieved by conducting a literature review of existing Image Quality Assessment research and Image Quality Metrics, and by performing an experiment. The image dataset used in the experiment was obtained from the LIVE Image Quality Assessment database. The Matlab software engine was used to run the image processing experiment, and descriptive analysis (including statistical analysis) was employed to analyze the results. Results. For the distortion types involved (JPEG 2000 compression, JPEG compression, white Gaussian noise, and Gaussian blur), SSIM measured image quality most efficiently for JPEG 2000 compressed and white Gaussian noise images, while PSNR was most efficient for JPEG compressed and Gaussian blurred images, with respect to the original image. Conclusions. This study shows that SSIM and PSNR are efficient for Image Quality Assessment on the dataset used, and that the level of distortion in the image dataset highly influences the results.
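Of the full-reference metrics compared above, PSNR is the simplest to state. The sketch below computes it for two equal-shaped 8-bit images with NumPy; it is a generic illustration, not the thesis's Matlab implementation.

```python
import numpy as np

# Minimal full-reference PSNR computation for 8-bit images: higher is better,
# identical images give infinite PSNR. Illustrative sketch only.

def psnr(reference, distorted, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB between two equal-shaped arrays."""
    diff = reference.astype(np.float64) - distorted.astype(np.float64)
    mse = np.mean(diff ** 2)          # mean squared error
    if mse == 0:
        return float("inf")           # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

Unlike SSIM, which models local structure, PSNR depends only on the pixel-wise mean squared error, which is one reason the two metrics rank distortion types differently in the study above.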
125

Procedural Terrain Generation Using Ray Marching

Roosvall, Oscar January 2016 (has links)
No description available.
126

Informationssamhället och internetbedrägerier

Noresson, Jolina January 2016 (has links)
This study examines internet-related fraud in Sweden and how it has developed alongside the expansion of the information society and the escalating use of the internet. The aim of the study is to investigate how internet-related fraud has evolved since the early 2000s, when internet use began to have an ever greater impact on our everyday lives. The study also intends to examine how criminals' reasoning around fraud offences may have changed with the increasing use of the internet. To achieve this aim and answer the research questions, a review of previous research on internet-related fraud and the development of the internet in Sweden was conducted. Swedish crime statistics form the basis for the statistical development of internet fraud. To complement the statistics, a semi-structured elite interview with a police inspector was carried out, in order to give, from a different angle, a deeper picture of what the statistics actually show. The results of the study are then analyzed using previous research and routine activity theory. The study shows that internet-related fraud has for some time followed an upward trend, and that this trend will most likely continue until we have become further acquainted with the information society and its technologies. The study also shows that increased internet use affects and contributes to the rise in internet fraud. With the development of the information society, conditions for fraudsters have also become increasingly favorable, as control and legislation do not keep pace with that development.
127

VizzAnalyzer C/C++ Front-End Using Eclipse CDT

Wang, Xuan January 2009 (has links)
VizzAnalyzer is a stand-alone tool for analyzing and visualizing the structures of large software systems. Currently, it only supports analysis of Java and UML programs. Considering the wide acceptance of the C/C++ programming languages, we think it is necessary to create this C/C++ front-end to enable VizzAnalyzer to analyze C/C++ programs. To create the C/C++ front-end, we first need a C/C++ front-end meta-model. For this, we selected Eclipse CDT as the compiler for C/C++ source files. Secondly, we create a mapping from the C/C++ front-end meta-model to the common meta-model. The mapping result is used by VizzAnalyzer to do further analysis work. This Bachelor thesis documents the theory relevant to this C/C++ front-end and how it has been developed and implemented.
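The front-end-to-common meta-model mapping described above can be pictured as a table-driven translation of node kinds. The sketch below is a heavily simplified, hypothetical illustration: the node kind names are invented and are not Eclipse CDT's actual AST classes or VizzAnalyzer's real meta-model.

```python
# Hypothetical sketch of mapping front-end meta-model node kinds to a common
# meta-model, in the spirit of the thesis above. All names here are invented
# for illustration; they are not Eclipse CDT or VizzAnalyzer identifiers.

FRONTEND_TO_COMMON = {
    "CppFunctionDefinition": "Method",
    "CppClassSpecifier": "Class",
    "CppVariableDeclaration": "Field",
}

def map_node(frontend_kind):
    """Translate one front-end node kind to its common meta-model kind,
    falling back to a generic node for kinds without a specific mapping."""
    return FRONTEND_TO_COMMON.get(frontend_kind, "GenericNode")
```

A table like this keeps the language-specific front-end decoupled from the analysis core: adding another front-end only means supplying another mapping table, not changing the analyzer.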
128

Evaluation of OKL4 / Virtualisering med OKL4

Bylund, Mathias January 2009 (has links)
Virtualization is not a new concept in computer science. It has been used since the mid-sixties, and software companies have now taken an interest in the technology. Virtualization is used on the server side to maximize capacity and reduce power consumption. This thesis focuses on virtualization in embedded systems. The technology uses a hypervisor, or virtual machine monitor, as a software layer that provides the virtual machines and isolates the underlying hardware. One of the most interesting properties is that it supports several operating systems and applications running on the same hardware platform, while the hypervisor has complete control of system resources. The company Open Kernel Labs is one of the leading providers of embedded systems software virtualization technology, and OKL4 is one of their products, based on the L4 family of second-generation microkernels. In this thesis, we evaluate the kernel contents, performance, security, and environment of OKL4. Finally, we conclude with the advantages and disadvantages of the product and the technology.
129

Frequency Oriented Scheduling on Parallel Processors

Zhong, Siqi January 2009 (has links)
No description available.
