11

Securing the Future of 5G Smart Dust: Optimizing Cryptographic Algorithms for Ultra-Low SWaP Energy-Harvesting Devices

Ryu, Zeezoo 12 July 2023 (has links)
While 5G energy harvesting makes 5G smart dust possible, stretching computation across power cycles affects cryptographic algorithms. This effect may lead to new security issues that make the system vulnerable to adversarial attacks. Therefore, security measures are needed to protect data at rest and in transit across the network. In this paper, we identify the security requirements of existing 5G networks and the best-of-breed cryptographic algorithms for ultra-low SWaP devices in an energy-harvesting context. To do this, we quantify the performance vs. energy tradespace, investigate the device features that impact the tradespace the most, and assess the security impact when the attacker has access to intermediate results. We provide algorithm and device recommendations along with open-source, energy-harvesting-tolerant versions of the cryptographic algorithms optimized for ultra-low SWaP energy-harvesting devices. / Master of Science / Smart dust is a network of tiny, energy-efficient devices that gather data from the environment using various sensors, such as temperature, pressure, and humidity sensors. These devices are extremely small, often as small as a grain of sand or smaller, and have numerous applications, including environmental monitoring, structural health monitoring, and military surveillance. One of the main challenges of smart dust is its small size and limited energy resources, which make the devices difficult to power and the collected data difficult to process. However, advances in energy harvesting and low-power computing are being developed to overcome these challenges. In the case of 5G, energy harvesting technologies can be used to power small sensors and devices that are part of the 5G network, such as Internet of Things (IoT) devices. Examples of IoT devices are wearable fitness trackers, smart thermostats, security cameras, home automation systems, and industrial sensors. Since 5G energy harvesting affects the daily lives of people using these devices, our research seeks to determine what measures are necessary to guarantee their security.
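As a rough illustration of the performance side of such a tradespace, the sketch below times two widely used lightweight AEAD ciphers on a host machine; wall-clock time stands in for the per-operation energy that would be measured on the actual ultra-low SWaP hardware. The cipher choices, payload size, and use of the third-party `cryptography` package are assumptions for this example, not the algorithm selection or measurement setup of the thesis.

```python
# Illustrative sketch only: times two candidate AEAD ciphers on a desktop CPU
# as a crude stand-in for the per-operation energy measurements an
# energy-harvesting study would perform on the target hardware.
# Assumes the third-party "cryptography" package is installed.
import os
import time

from cryptography.hazmat.primitives.ciphers.aead import AESGCM, ChaCha20Poly1305

PAYLOAD = os.urandom(64)    # a small sensor reading
NONCE = os.urandom(12)      # NOTE: nonce reuse is acceptable only because these ciphertexts are discarded
ITERATIONS = 10_000

def benchmark(name, encrypt):
    start = time.perf_counter()
    for _ in range(ITERATIONS):
        encrypt(PAYLOAD)
    elapsed = time.perf_counter() - start
    print(f"{name}: {elapsed / ITERATIONS * 1e6:.2f} us per encryption")

aes = AESGCM(AESGCM.generate_key(bit_length=128))
chacha = ChaCha20Poly1305(ChaCha20Poly1305.generate_key())

benchmark("AES-128-GCM", lambda m: aes.encrypt(NONCE, m, None))
benchmark("ChaCha20-Poly1305", lambda m: chacha.encrypt(NONCE, m, None))
```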
12

Shortening time-series power flow simulations for cost-benefit analysis of LV network operation with PV feed-in

López, Claudio David January 2015 (has links)
Time-series power flow simulations are consecutive power flow calculations on each time step of a set of load and generation profiles that represent the time horizon under which a network needs to be analyzed. These simulations are one of the fundamental tools for carrying out cost-benefit analyses of grid planning and operation strategies in the presence of distributed energy resources; unfortunately, their execution time is quite substantial. In the specific case of cost-benefit analyses, the execution time of time-series power flow simulations can easily become excessive, as typical time horizons are in the order of a year and different scenarios need to be compared, which results in time-series simulations that require a rather large number of individual power flow calculations. It is often the case that only a set of aggregated simulation outputs is required for assessing grid operation costs, examples of which are total network losses, power exchange through MV/LV substation transformers, and total power provision from PV generators. It can thus be beneficial to explore alternatives that approximate the required results with an accuracy suitable for cost-benefit analyses while taking less time to compute than time-series simulations on complete input data. This thesis explores and compares different methods for shortening time-series power flow simulations by reducing the amount of input data, and thus the required number of individual power flow calculations, and focuses its attention on two of them: one consists in reducing the time resolution of the input profiles through downsampling, while the other consists in finding similar time steps in the input profiles through vector quantization and simulating them only once. The results show that considerable execution time reductions and sufficiently accurate results can be obtained with both methods, but vector quantization requires much less data to produce the same level of accuracy as downsampling. Vector quantization delivers a far superior trade-off between data reduction, time savings, and accuracy when the simulations consider voltage control or when more than one simulation with the same input data is required, as in such cases the data reduction process can be carried out only once. One disadvantage of these methods is that they do not reproduce peak values in the result profiles accurately, which is due to the way downsampling disregards certain time steps in the input profiles and to the averaging effect vector quantization has on them. This disadvantage makes simulations shortened through these methods less precise, for example, for detecting voltage violations.
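To make the two reduction strategies concrete, here is a minimal sketch, assuming NumPy and scikit-learn are available and with the power-flow step replaced by a stand-in function: the input profiles are either downsampled or vector-quantized with k-means, and each cluster representative is simulated once and weighted by its relative cluster size. The profile data and the loss function are fabricated for illustration and are not taken from the thesis.

```python
# Minimal sketch (not the thesis code) of downsampling vs. vector quantization
# for shortening a time-series power-flow simulation.
import numpy as np
from sklearn.cluster import KMeans

def run_power_flow(step):
    """Stand-in for a single power flow calculation; returns network losses (kW)."""
    load, pv = step
    return 0.02 * load ** 2 + 0.01 * max(load - pv, 0.0)

rng = np.random.default_rng(0)
steps = 35_040                                               # one year at 15-minute resolution
profiles = np.column_stack([rng.uniform(10, 100, steps),     # aggregated load
                            rng.uniform(0, 60, steps)])      # PV feed-in

# Reference: simulate every time step (expensive with a real network model).
losses_full = np.mean([run_power_flow(s) for s in profiles])

# (a) Downsampling: keep every 4th time step (hourly resolution).
losses_ds = np.mean([run_power_flow(s) for s in profiles[::4]])

# (b) Vector quantization: cluster similar (load, PV) steps, simulate each
# centroid once, and weight its result by the relative cluster size.
km = KMeans(n_clusters=200, n_init=10, random_state=0).fit(profiles)
weights = np.bincount(km.labels_, minlength=200) / steps
losses_vq = sum(w * run_power_flow(c) for c, w in zip(km.cluster_centers_, weights))

print(f"full simulation : {losses_full:.2f} kW mean losses ({steps} power flows)")
print(f"downsampled     : {losses_ds:.2f} kW mean losses ({steps // 4} power flows)")
print(f"vector-quantized: {losses_vq:.2f} kW mean losses (200 power flows)")
```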
13

From Theory to Implementation of Embedded Control Applications : A Case Study

Fize, Florian January 2016 (has links)
Control applications are used in almost all scientific domains and are subject to timing constraints. Moreover, different applications can run on the same platform, which leads to even more complex timing behaviors. However, some of the timing issues are not always considered in the implementation of such applications, and this can make the system fail. In this thesis, the timing issues are considered, i.e., the problem of non-constant delay in the control of an inverted pendulum with a real-time kernel running on an ATmega328p micro-controller. The study shows that control performance is affected by this problem. In addition, the thesis reports the adaptation of an existing real-time kernel, based on an EDF (Earliest Deadline First) scheduling policy, to the architecture of the ATmega328p. Moreover, a new server-based kernel approach is implemented in this thesis, still on the same Atmel micro-controller.
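For readers unfamiliar with the scheduling policy mentioned above, the following is a simplified host-side model of EDF dispatching, not the adapted ATmega328p kernel itself: at every scheduling point, the ready job with the earliest absolute deadline runs first. The task names and timing values are invented for illustration.

```python
# Simplified host-side model of the EDF policy (not the ATmega328p kernel):
# at each scheduling point the ready task with the earliest absolute
# deadline is dispatched.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    period_ms: int          # release period
    wcet_ms: int            # worst-case execution time
    next_deadline_ms: int   # absolute deadline of the current job

def pick_edf(ready_tasks):
    """Return the ready task with the earliest absolute deadline."""
    return min(ready_tasks, key=lambda t: t.next_deadline_ms)

tasks = [Task("pendulum_control", 10, 2, 10),
         Task("logging", 100, 5, 100),
         Task("lcd_update", 50, 3, 50)]

now = 0
for _ in range(5):                       # dispatch a few jobs
    job = pick_edf(tasks)
    now += job.wcet_ms                   # pretend the job runs to completion
    job.next_deadline_ms += job.period_ms
    print(f"t={now:3d} ms  ran {job.name}")
```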
14

Certified Compilation and Worst-Case Execution Time Estimation / Compilation formellement vérifiée et estimation du pire temps d'exécution

Maroneze, André Oliveira 17 June 2014 (has links)
Les systèmes informatiques critiques - tels que les commandes de vol électroniques et le contrôle des centrales nucléaires - doivent répondre à des exigences strictes en termes de sûreté de fonctionnement. Nous nous intéressons ici à l'application de méthodes formelles - ancrées sur de solides bases mathématiques - pour la vérification du comportement des logiciels critiques. Plus particulièrement, nous spécifions formellement nos algorithmes et nous les prouvons corrects, à l'aide de l'assistant à la preuve Coq - un logiciel qui vérifie mécaniquement la correction des preuves effectuées et qui apporte un degré de confiance très élevé. Nous appliquons ici des méthodes formelles à l'estimation du Temps d'Exécution au Pire Cas (plus connu par son abréviation en anglais, WCET) de programmes C. Le WCET est une propriété importante pour la sûreté de fonctionnement des systèmes critiques, mais son estimation exige des analyses sophistiquées. Pour garantir l'absence d'erreurs lors de ces analyses, nous avons formellement vérifié une méthode d'estimation du WCET fondée sur la combinaison de deux techniques principales : une estimation de bornes de boucles et une estimation du WCET via la méthode IPET (Implicit Path Enumeration Technique). L'estimation de bornes de boucles est elle-même décomposée en trois étapes : un découpage de programmes, une analyse de valeurs opérant par interprétation abstraite, et une méthode de calcul de bornes. Chacune de ces étapes est formellement vérifiée dans un chapitre qui lui est dédié. Le développement a été intégré au compilateur C formellement vérifié CompCert. Nous prouvons que le résultat de l'estimation est correct et nous évaluons ses performances sur des ensembles de benchmarks de référence dans le domaine. Les contributions de cette thèse incluent la formalisation des techniques utilisées pour estimer le WCET, l'outil d'estimation lui-même (obtenu à partir de la formalisation), et l'évaluation expérimentale des résultats. Nous concluons que le développement fondé sur les méthodes formelles permet d'obtenir des résultats intéressants en termes de précision, mais il exige des précautions particulières pour s'assurer que l'effort de preuve reste maîtrisable. Le développement en parallèle des spécifications et des preuves est essentiel à cette fin. Les travaux futurs incluent la formalisation de modèles de coût matériel, ainsi que le développement d'analyses plus sophistiquées pour augmenter la précision du WCET estimé. / Safety-critical systems - such as electronic flight control systems and nuclear reactor controls - must satisfy strict safety requirements. We are interested here in the application of formal methods - built upon solid mathematical bases - to verify the behavior of safety-critical systems. More specifically, we formally specify our algorithms and then prove them correct using the Coq proof assistant - a program capable of mechanically checking the correctness of our proofs, providing a very high degree of confidence. In this thesis, we apply formal methods to obtain safe Worst-Case Execution Time (WCET) estimations for C programs. The WCET is an important property related to the safety of critical systems, but its estimation requires sophisticated techniques. To guarantee the absence of errors during WCET estimation, we have formally verified a WCET estimation technique based on the combination of two main methods: a loop bound estimation and the WCET estimation via the Implicit Path Enumeration Technique (IPET). The loop bound estimation itself is decomposed into three steps: a program slicing stage, a value analysis based on abstract interpretation, and a loop bound calculation stage. Each stage has a chapter dedicated to its formal verification. The entire development has been integrated into the formally verified C compiler CompCert. We prove that the final estimation is correct and we evaluate its performance on a set of reference benchmarks. The contributions of this thesis include (a) the formalization of the techniques used to estimate the WCET, (b) the estimation tool itself (obtained from the formalization), and (c) the experimental evaluation. We conclude that our formally verified development obtains interesting results in terms of precision, but it requires special precautions to ensure the proof effort remains manageable. The parallel development of specifications and proofs is essential to this end. Future work includes the formalization of hardware cost models, as well as the development of more sophisticated analyses to improve the precision of the estimated WCET.
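As a sketch of the IPET idea on a toy control-flow graph (not the Coq-verified tool described above), the WCET bound can be computed as the optimum of a small linear program: maximize the sum of per-block execution times weighted by execution counts, subject to flow-conservation constraints and a loop bound. The block times, the loop bound of 100, and the use of SciPy's LP solver (an LP relaxation of the usual integer program) are assumptions for this example.

```python
# Toy illustration of IPET: the WCET bound is the objective value of a small
# linear program over basic-block execution counts.
import numpy as np
from scipy.optimize import linprog

# Basic blocks: entry, loop header, loop body, exit -- per-block times (cycles).
t = np.array([5.0, 2.0, 20.0, 3.0])

# Flow constraints (equalities):  x_entry = 1
#                                 x_header = x_entry + x_body
#                                 x_header = x_body  + x_exit
A_eq = np.array([[1, 0, 0, 0],
                 [-1, 1, -1, 0],
                 [0, -1, 1, 1]])
b_eq = np.array([1, 0, 0])

# Loop bound (inequality): the body executes at most 100 times per entry.
A_ub = np.array([[-100, 0, 1, 0]])
b_ub = np.array([0])

# linprog minimizes, so negate the objective to maximize total time.
res = linprog(-t, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print(f"estimated WCET: {-res.fun:.0f} cycles")   # 5 + 101*2 + 100*20 + 3 = 2210
```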
16

Predicting and Estimating Execution Time of Manual Test Cases - A Case Study in Railway Domain

Ameerjan, Sharvathul Hasan January 2017 (has links)
Testing plays a vital role in the software development life cycle by verifying and validating the software's quality. Since software testing is considered an expensive activity, and due to the limitations of budget and resources, it is necessary to know the execution time of test cases for efficient planning of test-related activities such as test scheduling, prioritizing test cases, and monitoring test progress. In this thesis, an approach is proposed to predict and estimate the execution time of manual test cases written in English natural language. The method uses test specifications and historical data that are available from previously executed test cases. Our approach works by obtaining timing information from every step of previously executed test cases. The collected data is used to estimate the execution time of non-executed test cases by mapping them to the executed ones using text from their test specifications. Using natural language processing, text is extracted from the test specification document and mapped to the obtained timing information. After estimating the time from this mapping, a linear regression analysis is used to predict the execution time of non-executed test cases. A case study has been conducted at Bombardier Transportation (BT), where the proposed method was implemented and the results were validated. The obtained results show that the predicted execution times of the studied test cases are close to their actual execution times.
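A rough sketch of this mapping-plus-regression idea, assuming scikit-learn and using invented toy data rather than the Bombardier test specifications, could look as follows: the natural-language text of executed test steps is vectorized, a linear regression is fitted against the measured step durations, and the model then predicts durations for the steps of a not-yet-executed test case.

```python
# Rough sketch (assumed library: scikit-learn) of predicting manual test
# execution time from test-step text and historical step durations.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LinearRegression

# Historical test steps with measured durations (minutes) -- toy data.
executed_steps = [
    "apply power to the train control unit and verify startup sequence",
    "set target speed to 40 km/h and verify traction command",
    "trigger emergency brake and verify brake pressure indication",
    "verify event log entry is created for the brake application",
]
durations = [4.0, 6.5, 3.0, 2.5]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(executed_steps)
model = LinearRegression().fit(X, durations)

# Predict the duration of steps from a test case that has never been run.
new_steps = [
    "apply power to the control unit and verify the startup sequence",
    "trigger emergency brake and verify the event log entry",
]
predicted = model.predict(vectorizer.transform(new_steps))
print(f"predicted execution time: {predicted.sum():.1f} minutes")
```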
17

Maximizing the VR Play Space by Using Procedurally Generated Impossible Spaces : Research on VR Play Spaces and Their Impact on Game Development

Eklund, Vendela January 2022 (has links)
Background. Virtual Reality is a growing sector that provides the most immersive gaming experiences, especially when the natural-walking locomotion technique is used. However, it is always limited by the physical play space available to the user. Impossible Spaces, also called Overlapping Maps, combined with Procedural Content Generation based on the user's play space, could maximize the experience. Objectives. The aim of this thesis is to implement a potential solution for procedurally generated impossible spaces that are sized according to the VR user's play space, and ultimately to test how the execution time is affected when subjecting the implementation to differently sized play areas and numbers of maps to overlap. In addition, the thesis examines the play space setup of VR users of varying experience. Methods. The three core algorithms of the implementation, grid and maze generation as well as portal placement, are evaluated in terms of execution time. A questionnaire was created to gather data on VR users and their play space setups. Results. The questionnaire gathered 45 responses in total. A majority of respondents had access to a play space of 2-5 square meters. The VR users' experience affected the size of their play space. The execution times of the core algorithms were promising. Conclusions. Since most VR users do not have a large play space and the proposed solution performed well, it could be used to enhance the VR experience.
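As an illustration of one of the three core algorithms, the sketch below generates a maze with an iterative depth-first (recursive-backtracker) search over a grid whose dimensions are derived from the user's physical play space. It is not the thesis implementation; in particular, the impossible-space overlap and portal placement steps are omitted, and the cell size is an assumed parameter.

```python
# Minimal sketch: carve a maze over a grid sized from the physical play space.
import random

def maze_from_play_space(width_m, depth_m, cell_m=1.0, seed=None):
    cols, rows = max(2, int(width_m / cell_m)), max(2, int(depth_m / cell_m))
    rng = random.Random(seed)
    visited = [[False] * cols for _ in range(rows)]
    passages = set()                       # unordered pairs of connected cells

    stack = [(0, 0)]
    visited[0][0] = True
    while stack:                           # iterative depth-first search
        r, c = stack[-1]
        neighbours = [(r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                      if 0 <= r + dr < rows and 0 <= c + dc < cols
                      and not visited[r + dr][c + dc]]
        if neighbours:
            nr, nc = rng.choice(neighbours)
            passages.add(frozenset({(r, c), (nr, nc)}))   # carve a passage
            visited[nr][nc] = True
            stack.append((nr, nc))
        else:
            stack.pop()                    # dead end: backtrack
    return rows, cols, passages

rows, cols, passages = maze_from_play_space(width_m=3.0, depth_m=4.0, seed=42)
print(f"{rows}x{cols} maze with {len(passages)} carved passages")
```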
18

Evaluation of Cloud Native Solutions for Trading Activity Analysis / Evaluering av cloud native lösningar för analys av transaktionsbaserad börshandel

Johansson, Jonas January 2021 (has links)
Cloud computing has become increasingly popular over recent years, allowing computing resources to be scaled on-demand. Cloud native applications are specifically created to run on the cloud service model. Currently, there is a research gap regarding the design and implementation of cloud native applications, especially regarding how design decisions affect metrics such as execution time and scalability of systems. The problem investigated in this thesis is whether the execution time and quality scalability, ηt, of cloud native solutions are affected when housing the functionality of multiple use cases within the same cloud native application. In this work, a cloud native application for trading data analysis is presented, where the functionality of three use cases is implemented in the application: (1) creating reports of trade prices, (2) anomaly detection, and (3) analysis of relation diagrams of trades. The execution time and scalability of the application are evaluated and compared to readily available solutions, which serve as a baseline for the evaluation. The results of use cases 1 and 2 are compared to Amazon Athena, while use case 3 is compared to Amazon Neptune. The results suggest that combining functionalities into the same application could improve both execution time and scalability of the system. The impact depends on the use case and hardware configuration. When executing the use cases in a sequence, the mean execution time of the implemented system decreased by up to 17.2%, while the quality scalability score improved by 10.3% for use case 2. The implemented application had significantly lower execution time than Amazon Neptune but did not surpass Amazon Athena for the respective use cases. The scalability of the systems varied depending on the use case. While not surpassing the baseline in all use cases, the results show that the execution time of a cloud native system could be improved by housing the functionality of multiple use cases within one system. However, the potential performance gains differ depending on the use case and might be smaller than the performance gains of choosing another solution. / Cloud computing har de senaste åren blivit alltmer populärt och möjliggör att skala beräkningskapacitet och resurser på begäran. Cloud native-applikationer är specifikt skapade för att köras på distribuerad infrastruktur. För närvarande finns det luckor i forskningen gällande design och implementering av cloud native-applikationer, särskilt angående hur designbeslut påverkar mätbara värden som exekveringstid och skalbarhet. Problemet som undersöks i denna uppsats är huruvida exekveringstiden och måttet av kvalitetsskalbarhet, ηt, påverkas när funktionaliteten av flera användningsfall integreras i samma cloud native-applikation. I det här arbetet skapades en cloud native-applikation som kombinerar flera användningsfall för att analysera transaktionsbaserad börshandelsdata. Funktionaliteten av tre användningsfall implementeras i applikationen: (1) generera rapporter över handelspriser, (2) detektering av avvikelser och (3) analys av relationsgrafer. Applikationens exekveringstid och skalbarhet utvärderas och jämförs med kommersiella cloudtjänster, vilka fungerar som en baslinje för utvärderingen. Resultaten från användningsfall 1 och 2 jämförs med Amazon Athena, medan användningsfall 3 jämförs med Amazon Neptune. Resultaten antyder att systemets exekveringstid och skalbarhet kan förbättras genom att funktionalitet för flera användningsfall implementeras i samma system.
Effekten varierar beroende på användningsfall och hårdvarukonfiguration. När samtliga användningsfall körs i en sekvens, minskar den genomsnittliga körtiden för den implementerade applikationen med upp till 17,2 %, medan kvalitetsskalbarheten ηt förbättrades med 10,3 % för användningsfall 2. Den implementerade applikationen har betydligt kortare exekveringstid än Amazon Neptune men överträffar inte Amazon Athena för respektive användningsfall. Systemens skalbarhet varierade beroende på användningsfall. Även om det inte överträffar baslinjen i alla användningsfall, visar resultaten att exekveringstiden för en cloud native-applikation kan förbättras genom att kombinera funktionaliteten hos flera användningsfall inom ett system. De potentiella prestandavinsterna varierar dock beroende på användningsfallet och kan vara mindre än vinsterna av att välja en annan lösning.
19

An Evaluation of WebAssembly Pre-Initialization for Faster Startup Times / En Evaluering av Förinitialisering av WebAssembly för Snabbare Uppstartstider

Stackenäs, William January 2023 (has links)
WebAssembly (Wasm) has emerged as a new technology for the web that enables complex and interactive web applications, while utilizing a compact and platform-independent bytecode format. Due to its flexibility, portability, and built-in security, it has since evolved to be used in many other embeddings, such as internet-of-things, server applications, and even mobile applications. While a goal of Wasm is near-native performance, research has found that its performance is not as great as initially expected. Due to this, projects like The WebAssembly Pre-Initializer (Wizer) have emerged as potential solutions to this problem. Wizer is a tool developed by the coalition Bytecode Alliance with the purpose of speeding up the startup time, or Critical Path to Interactive (CPTI), of a Wasm module by pre-initializing it and saving a snapshot of the Wasm instance state into a new Wasm module. Wizer has been evaluated using two benchmark programs. However, no larger-scale investigation into the CPTI improvement brought by its pre-initialization has been conducted. Furthermore, saving a snapshot of the module is likely to result in a larger module in terms of file size, leading to increased compile time or, for use cases where it is relevant, network latency. This project investigates, mainly within the field of Wasm in non-web environments, the extent to which Wizer is able to improve CPTI for a Wasm module. The purpose of this is to allow both Wasm maintainers and developers to form an opinion on whether pre-initialization could be standardized for use in Wasm compilers and toolchains, or whether pre-initialization should be applied to their Wasm module based on its CPTI before pre-initialization. Results are obtained by compiling a number of sample programs down to Wasm, measuring their CPTI in terms of elapsed CPU cycles both without and with pre-initialization using Wizer, and comparing them. This is made possible through an extension to the Sightglass benchmarking framework, also developed by the Bytecode Alliance. The results show that pre-initialization using Wizer increases the CPTI if the Wasm module cannot be compiled to native CPU instructions in advance. However, if compilation can be done in advance, Wizer is able to reduce the CPTI of a Wasm module by a factor of two to six, depending on how it is initialized. / WebAssembly (Wasm) har framträtt som en ny teknik för webben som möjliggör komplexa och interaktiva webbapplikationer, genom ett kompakt och plattformsoberoende bytekodformat. Tack vare teknikens flexibilitet, portabilitet och inbyggda säkerhet, har den även utvecklats till att användas i andra sammanhang, exempelvis i sakernas internet, serverapplikationer, och även mobilapplikationer. Trots att ett mål med Wasm är prestanda jämförbar med native applikationer, har forskning funnit att den inte presterat så väl som man tidigare trott. Därför har projekt som The WebAssembly Pre-Initializer (Wizer) framträtt som möjliga lösningar till detta problem. Wizer är ett verktyg utvecklat av koalitionen Bytecode Alliance med syftet att snabba upp uppstartstider, även kallat Critical Path to Interactive (CPTI), av en Wasm-modul genom att förinitialisera den och spara en ögonblicksbild av Wasm-instansens tillstånd som en ny Wasm-modul. Wizer har evaluerats genom två testprogram. Dock har inte någon storskalig undersökning utförts inom CPTI-förbättringen som dess förinitialisering kan medföra.
Dessutom är det sannolikt att sparandet av en ögonblicksbild av en modul leder till en filstorleksmässigt större modul, vilket gör att kompileringstiden, och även nätverkslatensen i användningsfall där det förekommer, kan öka. Det här projektet undersöker, huvudsakligen inom Wasm utanför webbläsaren, omfattningen av Wizers förbättring av CPTI för en Wasm-modul. Syftet med detta är att möjliggöra för Wasm-designers och utvecklare att förstå huruvida förinitialisering skulle kunna bli standardiserat i Wasm-kompilerare eller verktyg, eller om förinitialisering borde tillämpas på en Wasm-modul under utveckling baserat på dess CPTI innan förinitialisering. Resultat samlas genom att kompilera flera exempelprogram ner till Wasm, mäta deras CPTI genom passerade CPU-cykler både med och utan förinitialisering med Wizer, och jämföra mätningarna. Detta möjliggörs genom en utökning av testramverket Sightglass som också utvecklats av Bytecode Alliance. Resultaten visar att förinitialisering med Wizer ökar CPTI om Wasm-modulen inte kan i förväg kompileras till instruktioner som kan köras på CPU:n. Om kompilering i förväg dock är möjlig kan Wizer minska CPTI för en given Wasm-modul med en faktor av två upp till sex gånger, beroende på den typ av förinitialisering som den gör.
20

A Dual-Port Data Cache with Pseudo-Direct Mapping Function

Gade, Arul Sandeep 07 May 2005 (has links)
Conventional on-chip (L1) data caches such as Direct-Mapped (DM) and 2-way Set-Associative Caches (SAC) have been widely used for high-performance uni- (or multi-) processors. Unfortunately, these schemes suffer from high conflict misses since more than one address is mapped onto the same cache line. To reduce conflict misses, much research has been done on developing different cache architectures such as the 2-way Skewed-Associative cache (Skew cache). The 2-way Skew cache has a hardware complexity equivalent to that of a 2-way SAC and a miss rate approaching that of a 4-way SAC. However, the reduction in the miss rate achievable with a Skew cache is limited by the confined space available to disperse the conflicting accesses over small memory banks. This research proposes a dual-port data cache called the Pseudo-Direct Cache (PDC) to minimize conflict misses by dispersing addresses effectively over a single memory bank. Our simulation results show that the PDC reduces those misses significantly compared to the conventional L1 caches and also achieves 10-15% lower miss rates than a 2-way Skew cache. The SimpleScalar simulator is used for these simulations with the SPEC95FP benchmark programs; similar results were also observed for the SPEC2000FP benchmark programs. Simulations with CACTI 3.0 were performed to evaluate the hardware implications of the PDC relative to the Skew cache. The results show that the PDC has a hardware complexity similar to that of a 2-way SAC and a 4-15% better AMAT (average memory access time) than a 2-way Skew cache. The PDC also reduces execution cycles significantly.
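To illustrate why skewing (and, by extension, alternative mapping functions) reduces conflict misses, the toy placement model below compares a direct-mapped cache, a 2-way set-associative cache, and a 2-way skewed-associative cache in which each bank uses a different index hash. The hash functions, address trace, and single-line sets are illustrative assumptions and do not reflect the proposed PDC design.

```python
# Toy placement model: addresses that collide under one index function
# rarely collide under a second, skewed index function.
NUM_SETS = 8                                     # sets per bank, one line per set

def dm_index(addr):
    return addr % NUM_SETS

def skew_index(addr, bank):
    if bank == 0:
        return addr % NUM_SETS
    return (addr ^ (addr // NUM_SETS)) % NUM_SETS  # illustrative skewing hash

def simulate(addresses, index_fns):
    """Count misses; on a miss, fill an empty candidate slot or evict from bank 0."""
    banks = [dict() for _ in index_fns]          # set index -> cached block address
    misses = 0
    for a in addresses:
        slots = [fn(a) for fn in index_fns]
        if any(banks[b].get(s) == a for b, s in enumerate(slots)):
            continue                             # hit
        misses += 1
        for b, s in enumerate(slots):
            if s not in banks[b]:
                banks[b][s] = a                  # fill an empty slot
                break
        else:                                    # all candidate slots occupied
            banks[0][slots[0]] = a               # naive eviction from bank 0
    return misses

trace = [0, 8, 16, 24] * 3                       # blocks that all collide in set 0
print("direct-mapped misses:      ", simulate(trace, [dm_index]))
print("2-way set-assoc. misses:   ", simulate(trace, [dm_index, dm_index]))
print("2-way skewed-assoc. misses:", simulate(trace, [lambda a: skew_index(a, 0),
                                                      lambda a: skew_index(a, 1)]))
```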
