561

Design and implementation of Internet mail servers with embedded data compression

Nand, Alka 01 January 1997 (has links)
No description available.
562

Investigating temperature measurement methods and their impact in embedded systems

Nene, Zhaneta January 2020 (has links)
Testing is one of the most important aspects of developing new products. A product can undergo different types of testing, from hardware durability tests to software tests. Embedded systems are closely tied to hardware, and key features of them are reliability and dependability. To ensure that these features remain intact no matter where the embedded systems operate, it is very important to conduct standardized testing and validation. The purpose of this thesis is to research the temperature-testing procedure and develop a measurement guideline based on several key points. The guideline is closely related to the standards, and for this reason some of the most frequently used standards are taken into consideration. Temperature measurement involves different tools and equipment. One interesting technology used for this purpose is infrared imaging, through the investigation provided by IR cameras. It is beneficial to integrate this technology with contact measurements because it depicts temperature variation by color, information that is very important in the first steps of the test procedure.
563

Runtime Monitoring of Automated Driving Systems

Mehmed, Ayhan January 2019 (has links)
We live in a period of history in which technological progress has reached a level that enables the first steps towards the development of vehicles with automated driving capabilities. The swift response from a significant portion of the industry has resulted in a race whose finish line is the introduction of vehicles with full automated driving capabilities. Vehicles with automated driving capabilities aim to make driving safer, more comfortable, and economically more efficient by assisting the driver or by taking responsibility for different driving tasks. While vehicles with assistance and partial-automation capabilities are already in series production, the ultimate goal is the introduction of vehicles with full automated driving capabilities. Reaching this level of automation will require shifting all responsibilities, including the responsibility for overall vehicle safety, from the human to the computer-based system responsible for the automated driving functionality (i.e., the Automated Driving System (ADS)). Such a shift makes the ADS highly safety-critical, requiring a safety level comparable to that of an aircraft system. It is paramount to understand that ensuring such a level of safety is a complex interdisciplinary challenge. Traditional approaches to ensuring safety rely on fault-tolerance techniques that are unproven in the automated driving domain. Moreover, existing safety assurance methods (e.g., ISO 26262) suffer from requirements incompleteness in the automated driving context. The use of artificial-intelligence-based components in the ADS further complicates the matter due to their non-deterministic behavior. At present, there is no single straightforward solution to these challenges. Instead, the consensus of cross-domain experts is to use a set of complementary safety methods that together are sufficient to ensure the required level of safety. In that context, runtime monitors that verify the safe operation of the ADS during execution are a promising complementary approach for ensuring safety. However, developing a runtime monitoring solution for an ADS involves a wide range of challenges. On a conceptual level, the complex and opaque technology used in ADSs often makes researchers ask: how should an ADS be verified in order to judge that it is operating safely? Once the initial Runtime Verification (RV) concept is developed, researchers and practitioners have to deal with research and engineering challenges encountered while realizing the RV approaches as an actual runtime monitoring solution for the ADS. These challenges range from estimating different safety parameters of the runtime monitors and finding solutions to various technical problems, to meeting scalability and efficiency requirements. The focus of this thesis is to propose novel runtime monitoring solutions for verifying the safe operation of an ADS. This encompasses (i) defining novel RV approaches explicitly tailored to automated driving, and (ii) developing concepts, methods, and architectures for realizing the RV approaches as an actual runtime monitoring solution for the ADS. Contributions to the former include defining two RV approaches, namely the Computer Vision Monitor (CVM) and the Safe Driving Envelope Verification.
Contributions to the latter include (i) estimating the sufficient diagnostic test interval of the runtime verification approaches (in particular the CVM), (ii) addressing the out-of-sequence measurement problem in sensor fusion-based ADS, and (iii) developing an architectural solution for improving the scalability and efficiency of the runtime monitoring solution. / RetNet
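To illustrate the runtime-monitoring idea in general terms (this is a minimal hypothetical sketch, not the thesis's implementation), the following C fragment shows an envelope-style monitor invoked every control cycle; all names, fields, and thresholds are illustrative assumptions:

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical state reported by the ADS each control cycle. */
typedef struct {
    double speed_mps;        /* current vehicle speed          */
    double ttc_s;            /* time-to-collision estimate     */
    double lateral_offset_m; /* distance from lane center      */
} vehicle_state_t;

/* Assumed safe-driving-envelope bounds; a real system would derive
 * these from the operational design domain, not fixed constants. */
#define MAX_SPEED_MPS        36.0  /* ~130 km/h */
#define MIN_TTC_S             2.0
#define MAX_LATERAL_OFFSET_M  0.8

/* Returns true if the current state lies inside the safe envelope.
 * A production monitor would also verify sensor-data freshness and
 * plausibility before trusting the inputs. */
static bool envelope_ok(const vehicle_state_t *s)
{
    return s->speed_mps        <= MAX_SPEED_MPS &&
           s->ttc_s            >= MIN_TTC_S     &&
           s->lateral_offset_m <= MAX_LATERAL_OFFSET_M;
}

/* Called once per control cycle; on a violation the monitor would
 * trigger a degraded or safe-stop mode via a separate channel. */
void monitor_step(const vehicle_state_t *s, void (*safe_stop)(void))
{
    if (!envelope_ok(s) && safe_stop != NULL)
        safe_stop();
}
```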
564

Real-Time Software Transactional Memory: Contention Managers, Time Bounds, and Implementations

El-Shambakey, Mohammed Talat 02 October 2013 (has links)
Lock-based concurrency control suffers from programmability, scalability, and composability challenges. These challenges are exacerbated in emerging multicore architectures, on which improved software performance must be achieved by exposing greater concurrency. Transactional memory (TM) is an emerging alternative synchronization model for shared memory objects that promises to alleviate these difficulties. In this dissertation, we consider software transactional memory (STM) for concurrency control in multicore real-time software, and present a suite of real-time STM contention managers for resolving transactional conflicts. The contention managers are called ECM, RCM, LCM, PNF, and FBLT. RCM and ECM resolve conflicts using fixed and dynamic priorities of real-time tasks, respectively, and are naturally intended to be used with fixed-priority (e.g., G-RMA) and dynamic-priority (e.g., G-EDF) multicore real-time schedulers, respectively. LCM resolves conflicts based on task priorities as well as atomic section lengths, and can be used with G-EDF or G-RMA schedulers. Transactions under ECM, RCM, and LCM may retry due to conflicts with higher-priority tasks even when there are no shared objects, i.e., transitive retry. PNF avoids transitive retry and optimizes processor usage by lowering the priority of retrying transactions, thereby enabling other non-conflicting transactions to proceed. PNF, however, requires a priori knowledge of all requested objects for each atomic section, which is inconsistent with the semantics of dynamic STM. Moreover, its centralized design increases overhead. FBLT avoids transitive retry, does not require a priori knowledge of requested objects, and has a decentralized design. We establish upper bounds on transactional retry costs and task response times under the contention managers through schedulability analysis. Since ECM and RCM preserve the semantics of the underlying real-time scheduler, their maximum transactional retry cost is double the maximum atomic section length. This is improved in the design of LCM, which achieves shorter retry costs and tighter upper bounds. As PNF avoids transitive retry and improves processor usage, it yields shorter retry costs and tighter upper bounds than ECM, RCM, and LCM. FBLT's upper bounds are similarly tight because it combines the advantages of PNF and LCM. We formally compare the proposed contention managers with each other, with lock-free synchronization, and with multiprocessor real-time locking protocols. Our analysis reveals that, in most cases, ECM, RCM, and LCM achieve higher schedulability than lock-free synchronization only when the atomic section length does not exceed half of lock-free synchronization's retry loop length. With equal periods and greater access times for shared objects, the atomic section length under ECM, RCM, and LCM can be much larger than the retry loop length while still achieving better schedulability. With proper values for LCM's design parameters, the atomic section length can be larger than the retry loop length for better schedulability. Under PNF, the atomic section length can exceed lock-free's retry loop length and still achieve better schedulability in certain cases. FBLT achieves equal or better schedulability than lock-free synchronization with appropriate values for its design parameters.
The schedulability advantage of the contention managers over multiprocessor real-time locking protocols such as Global OMLP and RNLP depends upon the value of $s_{max}/L_{max}$, the ratio of the maximum transaction length to the maximum critical section length. FBLT's schedulability is equal to or better than Global OMLP and RNLP if $s_{max}/L_{max} \le 2$. Checkpointing enables partial roll-back of transactions by recording transaction execution states (i.e., checkpoints) during execution, allowing roll-back to a previous checkpoint instead of the transaction start, improving task response time. We extend FBLT with checkpointing to develop CP-FBLT, and identify the conditions under which CP-FBLT achieves equal or better schedulability than FBLT. We implement the contention managers in the Rochester STM framework and conduct experimental studies using a multicore real-time Linux kernel. Our studies reveal that, among the contention managers, CP-FBLT has the best average-case performance. CP-FBLT's higher performance is due to the fact that PNF's and LCM's advantages are combined in the design of FBLT, which is the base of CP-FBLT. Moreover, checkpointing improves task response time. The contention managers were also found to have equal or better average-case performance than lock-free synchronization: more jobs meet their deadlines using CP-FBLT, FBLT, and PNF than lock-free synchronization by 34.6%, 28.5%, and 32.4% (on average), respectively. The superiority of the contention managers is directly due to their better conflict resolution policies. Locking protocols such as OMLP and RNLP were found to perform better: more jobs meet their deadlines under OMLP and RNLP than under any contention manager by 12.4% and 13.7% (on average), respectively. However, the proposed contention managers have numerous qualitative advantages over locking protocols. Locks do not compose, whereas STM transactions do. To allow multiple objects to be accessed in a critical section, OMLP assigns objects to non-conflicting groups, where each group is protected by a distinct lock. RNLP assumes that objects are accessed in a specific order to prevent deadlocks. In contrast, STM allows multiple objects to be accessed in a transaction in any order, while guaranteeing deadlock-freedom, which significantly increases programmability. Moreover, STM offers platform independence: the proposed contention managers can be entirely implemented in user space as a library. In contrast, real-time locking protocols such as OMLP and RNLP must be supported by the underlying platform (i.e., operating system or virtual machine). / Ph. D.
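For intuition only (a hypothetical sketch, not the dissertation's code), a contention manager in the style of ECM resolves a conflict by comparing the dynamic priorities of the two conflicting transactions' jobs, which under G-EDF are their absolute deadlines, and letting the higher-priority one proceed:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical transaction descriptor; under G-EDF the dynamic
 * priority of a job is its absolute deadline (earlier = higher). */
typedef struct {
    uint64_t abs_deadline_ns; /* absolute deadline of the enclosing job */
    int      task_id;
} txn_t;

/* ECM-style resolution sketch: on a conflict, the transaction of the
 * job with the later deadline aborts and retries, so conflicts are
 * resolved consistently with the underlying G-EDF scheduler.
 * Returns true if 'mine' may proceed and 'other' must abort. */
static bool ecm_resolve(const txn_t *mine, const txn_t *other)
{
    if (mine->abs_deadline_ns != other->abs_deadline_ns)
        return mine->abs_deadline_ns < other->abs_deadline_ns;
    /* Tie-break deterministically to avoid livelock. */
    return mine->task_id < other->task_id;
}
```

Because the losing transaction re-executes its atomic section after the winner commits, this style of resolution is consistent with the retry-cost bound quoted above of twice the maximum atomic section length per conflict.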
565

Applications of Physical Unclonable Functions on ASICs and FPGAs

Usmani, Mohammad 04 April 2018 (has links)
With the ever-increasing demand for security in embedded systems and wireless sensor networks, we need to integrate security primitives for authentication into these devices. One such primitive is the Physically Unclonable Function (PUF). It can provide security at low cost, since a key or digital signature can be generated by dedicating a small part of the silicon die to these primitives, which produce a fingerprint unique to each device. The fingerprint produced by a PUF is called its response. The response of a PUF depends upon the process variation that occurs during manufacturing. In embedded systems, and especially in wireless sensor networks, there is a need to secure the data collected from the sensors. To tackle this problem, we propose the use of SRAM-based PUFs to detect the temperature of the system. This is done by using the PUF response to generate temperature-based keys; each key acts as proof of the temperature of the system. In SRAM PUFs, it has been experimentally determined that, as temperature varies, some cells' responses flip from zero to one and vice versa. This variation can be exploited to generate random but repeatable keys at different temperatures. To evaluate our approach, we first analyze the key metrics of a PUF, namely reliability and uniqueness. To test the idea of using the PUF as a temperature-based key generator, we collect data from a total of ten SRAM chips at fixed temperature steps. We first calculate the reliability, which is related to the bit error rate (an important parameter with respect to error correction), at various temperatures to verify the stability of the responses. We then identify the temperature of the system using a temperature sensor and encode the key, offset by the PUF response at that temperature, using BCH codes. This key-temperature pair can then be used to establish secure communication between the nodes. Thus, this scheme helps establish secure keys, as the key generation has an extra variable to produce confusion. We also developed a novel PUF for Xilinx FPGAs and evaluated its quality metrics: it is very compact and has high uniqueness and reliability. We implement two different PUF configurations to allow per-device selection of the best PUFs, reducing the area and power required for key generation. Finally, we evaluate the temperature response of this PUF and show improvement in the response by using per-device selection.
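To make the key-generation pattern concrete, here is a minimal sketch of the enroll/reconstruct scheme commonly used with noisy PUF responses (a code-offset construction); the `bch_encode`/`bch_decode` routines are assumed placeholders (declarations only), and the sizes are illustrative, not the thesis's parameters:

```c
#include <stdint.h>
#include <stddef.h>

#define KEY_BYTES 16  /* assumed key/codeword size, for illustration */

/* Hypothetical BCH codec; a real design picks (n, k, t) to cover the
 * worst-case PUF bit-error rate measured across temperatures. */
void bch_encode(const uint8_t key[KEY_BYTES], uint8_t code[KEY_BYTES]);
void bch_decode(const uint8_t noisy[KEY_BYTES], uint8_t key[KEY_BYTES]);

/* Enrollment: helper = PUF response XOR codeword. The helper data can
 * be stored publicly; it reveals neither the key nor the response. */
void enroll(const uint8_t puf[KEY_BYTES], const uint8_t key[KEY_BYTES],
            uint8_t helper[KEY_BYTES])
{
    uint8_t code[KEY_BYTES];
    bch_encode(key, code);
    for (size_t i = 0; i < KEY_BYTES; i++)
        helper[i] = puf[i] ^ code[i];
}

/* Reconstruction: a fresh, noisy PUF readout XOR helper yields a noisy
 * codeword; BCH decoding removes the bit errors and recovers the key. */
void reconstruct(const uint8_t puf_noisy[KEY_BYTES],
                 const uint8_t helper[KEY_BYTES], uint8_t key[KEY_BYTES])
{
    uint8_t noisy_code[KEY_BYTES];
    for (size_t i = 0; i < KEY_BYTES; i++)
        noisy_code[i] = puf_noisy[i] ^ helper[i];
    bch_decode(noisy_code, key);
}
```

In the temperature-keyed scheme described above, one such enrollment would presumably be performed per temperature step, binding each key to the response observed at that temperature.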
566

A Study on Controlling Power Supply Ramp-Up Time in SRAM PUFs

Ramanna, Harshavardhan 29 October 2019 (has links)
With growing connectivity in the modern era, the risk of encrypted data stored in hardware being exposed to third-party adversaries is higher than ever. The security of encrypted data depends on the secrecy of the stored key. Conventional methods of storing keys in Non-Volatile Memory have been shown to be susceptible to physical attacks. Physically Unclonable Functions provide a unique alternative to conventional key storage. SRAM PUFs utilize inherent process variation caused during manufacturing to derive secret keys from the power-up values of SRAM memory cells. This thesis analyzes the effect of supply ramp-up times on the reliability of SRAM PUFs. We use SPICE simulations as the platform to observe the effect of supply ramp times at the circuit level using carefully controlled supply voltages during power-up. We also measure the effect of supply ramp times on commercially available SRAM ICs by performing reliability and uniqueness measurements on two commercial SRAM models. Finally, a hardware implementation is proposed in a commercial 16nm FinFET technology to establish the design flow for taping out a custom SRAM IC with separated peripheral and core power supplies that would allow for experimental evaluation of sequenced power supplies on the SRAM PUF.
567

Detecting and identifying radio jamming attacks in low-power wireless sensor networks

Kanwar, John January 2021 (has links)
Wireless sensor networks (WSNs) are used in many different sectors, ranging from agriculture and the environment to healthcare and the military. Embedded systems such as sensor nodes are low-power and have little memory, which creates a challenge for their security. One of a WSN's worst enemies is the radio jamming attack: easy to construct and execute, but hard to detect and identify. In this thesis, we tackle the problems of detecting, but most importantly identifying and distinguishing, the most commonly used radio jamming attacks. We present SpeckSense++, firmware that makes it possible for low-power embedded systems to detect, identify, and distinguish radio jamming attacks as well as, to a certain degree, unintentional interference such as Bluetooth and WiFi. It achieves an accuracy of 90 to 96% for proactive jammers, 89% for reactive jammers, and 85 to 92% for unintentional interference.
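As a rough illustration of the general RSSI-profiling approach such tools build on (a simplified sketch under assumed thresholds, not SpeckSense++ itself): sample the channel's RSSI at a high rate, segment it into busy bursts, and classify the interference by how often and how long the channel is busy.

```c
#include <stdint.h>

#define N_SAMPLES 1024
#define BUSY_DBM  (-85)   /* assumed busy/idle RSSI threshold */

typedef enum { IF_NONE, IF_PROACTIVE, IF_BURSTY } verdict_t;

/* Very simplified classifier: a channel busy nearly all the time
 * suggests a proactive (constant) jammer, while repeated short
 * bursts suggest reactive jamming or unintentional interference
 * such as WiFi/Bluetooth traffic. Thresholds are illustrative. */
verdict_t classify(const int8_t rssi_dbm[N_SAMPLES])
{
    int busy = 0, bursts = 0, prev_busy = 0;
    for (int i = 0; i < N_SAMPLES; i++) {
        int b = rssi_dbm[i] > BUSY_DBM;
        busy += b;
        if (b && !prev_busy)
            bursts++;           /* count busy-burst onsets */
        prev_busy = b;
    }
    if (busy > (N_SAMPLES * 9) / 10)
        return IF_PROACTIVE;    /* channel almost always busy */
    if (bursts > 8)
        return IF_BURSTY;       /* needs finer-grained analysis */
    return IF_NONE;
}
```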
568

Embedded systém pro sběr dat / Embedded system for data collection

Varga, Kamil January 2010 (has links)
This master's thesis describes the problems of embedded systems and their development. Furthermore, the GNU/Linux operating system and its cross-compilation are described. Hardware problems are analyzed as well. The concrete solution is a system based on the OpenWrt distribution, running on an Edimax BR-6104KP router, that is able to boot from a USB disk. The whole solution is described in a way that can be generalized to any embedded system.
569

Design of a multi-core dataflow cryptoprocessor

Alzahrani, Ali Saeed 28 August 2018 (has links)
Embedded multi-core systems are implemented as systems-on-chip that rely on packet store-and-forward networks-on-chip for communication. These systems use neither buses nor a global clock; instead, routers move data between the cores, and each core uses its own local clock. This implies concurrent asynchronous computing. Implementing algorithms on such systems is greatly facilitated by dataflow concepts. In this work, we propose a methodology for implementing algorithms on dataflow platforms. The methodology can be applied to multi-threaded platforms, multi-core platforms, or a combination of the two. It is based on a novel dataflow graph representation of the algorithm. We applied the proposed methodology to obtain a novel dataflow multi-core computing model for the Secure Hash Algorithm-3 (SHA-3). The resulting hardware was implemented on an FPGA to verify the performance parameters. The proposed model of computation has advantages such as flexible I/O timing in terms of scheduling policy, execution of tasks as soon as possible, and a self-timed, event-driven system. In other words, I/O timing and correctness of algorithm evaluation are dissociated in this work. The main advantage of this proposal is the ability to dynamically obfuscate algorithm evaluation to thwart side-channel attacks without having to redesign the system. This has important implications for cryptographic applications. The dissertation also proposes four countermeasure techniques against side-channel attacks for SHA-3 hashing. The countermeasure techniques are based on choosing stochastic or deterministic input-data scheduling strategies. Extensive simulations of the SHA-3 algorithm and the proposed countermeasure approaches were performed using object-oriented MATLAB models to verify and validate the effectiveness of the techniques. The design immunity of the proposed countermeasures is assessed. / Graduate / 2020-11-19
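As a generic illustration of the self-timed dataflow model of computation described above (a sketch of the textbook firing rule, not the dissertation's design), a node fires as soon as tokens are present on all of its inputs, with no reference to a global clock:

```c
#include <stdbool.h>
#include <stddef.h>

#define MAX_INPUTS 4

/* Hypothetical dataflow node: one FIFO occupancy counter per input. */
typedef struct {
    size_t tokens[MAX_INPUTS]; /* tokens waiting on each input arc   */
    size_t n_inputs;
    void (*kernel)(void);      /* computation performed when firing  */
} df_node_t;

/* Firing rule: a node may fire only when every input arc holds at
 * least one token; this is what makes execution event-driven and
 * self-timed rather than clock-driven. */
static bool can_fire(const df_node_t *n)
{
    for (size_t i = 0; i < n->n_inputs; i++)
        if (n->tokens[i] == 0)
            return false;
    return true;
}

/* Fire: consume one token per input, then run the node's kernel.
 * Production of output tokens to successor nodes is omitted here. */
static void fire(df_node_t *n)
{
    if (!can_fire(n))
        return;
    for (size_t i = 0; i < n->n_inputs; i++)
        n->tokens[i]--;
    n->kernel();
}
```

Randomizing which ready node fires next is one way such a model could obfuscate the evaluation order against side-channel observation, consistent with the stochastic scheduling strategies mentioned in the abstract.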
570

Fiabilité des composants enfouis dans les circuits imprimés / Reliability of embedded components in printed circuit boards

Balmont, Mickael 14 November 2019 (has links)
The desire to miniaturize electronic circuits has led the electronics industry to develop new assembly methods. Progress has come through more complex functions, new interconnections linking the circuit to the component, and architectural choices that optimize volume. After pushing the limits of optimization with three-dimensional assemblies, the technology turned to a volume present in every electronic board yet playing no active role in it: the support of the functions, the PCB. The solution is to embed components in this volume. The first benefits of this solution appear quickly, volume savings and protection of the components, which is why it is developing rapidly in industry. Based on this premise, Valeo wants to adapt this technology to reduce the size of a rear-view camera for the automotive market. As automotive requirements are stricter than in other industries, further investigation of embedding technology is required. The objective is to validate the reliability and robustness of the circuit for a given manufacturing method. The IMS laboratory in Bordeaux therefore joined the EDDEMA project to provide expertise, via finite-element thermomechanical simulations, on the design of the circuit. Within this thesis, and to meet the expectations of the project, two lines of study are pursued. First, a general methodology is proposed to identify the interconnections considered most fragile when embedding technology is used, and to justify the use of finite-element simulations against the expected requirements. The objective is to determine the lifetime of an interconnection linking the component to the circuit according to its nature (solder joint, via, ...) and the characteristics of the component and the circuit (dimensions, height, ...), and to validate technological choices, such as materials and techniques, made at manufacturing time. This study is a local analysis around the component. The second study focuses on the circuit developed within the project. The impact of the position of the active components embedded in the PCB on the circuit (deformation, stresses) and the representation of the passive components in this structure are studied in order to define, from thermomechanical considerations, the positioning limits in the circuit design. The model is refined using measurements on the first prototypes to corroborate the simulations. All of this leads to determining the advantages of embedding technology and the gains in reliability and robustness of the circuit and its components, and to validating its use in the automotive sector.
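For context only: interconnect lifetime under thermal cycling is commonly estimated from finite-element results with a Coffin-Manson-type low-cycle fatigue relation; the thesis does not state which model it uses, so the formula below is purely illustrative.

```latex
% Coffin-Manson-type fatigue model (illustrative, not from the thesis):
%   N_f              : mean number of thermal cycles to failure
%   \Delta\epsilon_p : plastic strain range per cycle (from FEM)
%   \epsilon_f', c   : fatigue ductility coefficient and exponent
\[
  \frac{\Delta\epsilon_p}{2} = \epsilon_f' \,(2N_f)^{c}
  \quad\Longrightarrow\quad
  N_f = \frac{1}{2}\left(\frac{\Delta\epsilon_p}{2\,\epsilon_f'}\right)^{1/c}
\]
```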
