1

Fine-grained error detection techniques for fast repair of FPGAs

Nazar, Gabriel Luca January 2013 (has links)
Field Programmable Gate Arrays (FPGAs) are reconfigurable hardware components that have found great commercial success over the past years in a wide variety of application niches. High processing throughput, flexibility, and reduced design time are among the main assets of such devices and are essential to their commercial success. These features are also valuable for critical systems, which often face stringent performance constraints. Furthermore, the possibility of post-deployment reprogramming is relevant, as it allows adding new functionality or correcting design mistakes, extending the system lifetime. Such devices, however, rely on large memories to store the configuration bitstream, which defines the current FPGA function. Faults affecting this configuration can therefore cause functional failures, posing a major dependability threat. The most traditional means of removing such errors, configuration scrubbing, consists of periodically overwriting the memory with its intended contents. However, due to the memory's significant size and limited access bandwidth, scrubbing suffers from a long mean time to repair, which keeps growing as FPGAs become larger and more complex with each generation. Reconfigurable partitions help reduce this time, as they allow a local repair procedure to be performed on the affected partition. For that purpose, fast error detection mechanisms are required to quickly trigger this localized scrubbing and reduce error latency. Moreover, precise diagnosis is necessary to identify the error location within the configuration address space. Fine-grained redundancy techniques have the potential to provide both, but they usually introduce significant costs due to the need for numerous redundancy checkers. In this work we propose a fine-grained error detection technique that makes use of abundant and underused resources found in state-of-the-art FPGAs, namely the carry propagation chains. The technique thereby provides the main benefits of fine-grained redundancy while minimizing its main drawback, and very significant reductions in error latency are attainable with the proposed approach. A heuristic mechanism to exploit the diagnosis provided by techniques of this nature is also proposed. This mechanism aims to identify the most likely error locations in the configuration memory, based on the fine-grained diagnosis, and to use this information to minimize the repair time of scrubbing.
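The repair-time reduction argued for above rests on two ingredients: fine-grained checkers that flag an error quickly, and a diagnosis heuristic that turns the set of flagging checkers into an ordered list of configuration frames to scrub first. The dissertation's own algorithm is not reproduced in this abstract, so the following Python sketch is only a generic illustration of the idea; the `coverage` map from checkers to configuration frames and the `scrub_frame` callback are hypothetical.

```python
from collections import defaultdict

def rank_frames(coverage, flagged_checkers):
    """Rank configuration frames by how many flagged checkers observe them.

    coverage: dict mapping checker id -> iterable of frame ids it covers
    flagged_checkers: iterable of checker ids that reported an error
    Returns frame ids sorted from most to least suspicious.
    """
    score = defaultdict(int)
    for checker in flagged_checkers:
        for frame in coverage[checker]:
            score[frame] += 1
    return sorted(score, key=score.get, reverse=True)

def localized_scrub(coverage, flagged_checkers, scrub_frame):
    """Rewrite the most suspicious frames first to shorten the repair time."""
    for frame in rank_frames(coverage, flagged_checkers):
        scrub_frame(frame)  # e.g. partial reconfiguration of a single frame

# Toy example: two checkers flag an error; the frame covered by both ranks first.
coverage = {"chk_a": ["frame_10", "frame_12"],
            "chk_b": ["frame_12", "frame_13"],
            "chk_c": ["frame_20"]}
print(rank_frames(coverage, ["chk_a", "chk_b"]))
# ['frame_12', 'frame_10', 'frame_13']
```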
2

The Implementation of Total Productive Maintenance (TPM) in a Manufacturing Company: A Case Study of XYZ Plastics Manufacturing Company in Nigeria

Labiyi, Femi Gbenga January 2019 (has links)
The purpose of this thesis is to implement Total Productive Maintenance (TPM) in a Nigerian plastics manufacturing company. Manufacturing companies around the world spend huge amounts of money on new equipment to boost production, yet little or nothing is done to obtain the full output the machines are intended to deliver. Small losses of time, or deviations from planned or calculated capacity, are accepted as normal machine performance. With today's improved capability levels and the demand for quality products at lower prices, however, purchasing the latest machines and equipment is not a solution unless they are fully utilized. Total Productive Maintenance (TPM) is a method that involves everybody, from top management to shop-floor workers, in implementing a complete maintenance program for all machines and equipment throughout their life. This method results in maximum equipment effectiveness, better-trained workers, a tidy working area, and a clean working environment. A framework is developed that can evaluate the impact of implementing Total Productive Maintenance. By evaluating the outcome of TPM, manufacturing companies can make sound decisions to improve the efficiency and quality of their machines, equipment, and products, as illustrated for the XYZ Plastics Manufacturing Company in Nigeria.
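TPM implementations are conventionally assessed through Overall Equipment Effectiveness (OEE), the product of availability, performance, and quality. The thesis abstract does not state which formulas its evaluation framework uses, so the following Python sketch only shows the standard OEE calculation; the sample figures are invented.

```python
def oee(planned_time_min, downtime_min, ideal_cycle_time_min,
        total_count, defect_count):
    """Standard Overall Equipment Effectiveness (OEE) calculation.

    OEE = Availability x Performance x Quality, each expressed as a ratio.
    """
    run_time = planned_time_min - downtime_min
    availability = run_time / planned_time_min
    # Performance compares actual output with the output an ideally fast
    # machine would have produced in the same run time.
    performance = (ideal_cycle_time_min * total_count) / run_time
    quality = (total_count - defect_count) / total_count
    return availability * performance * quality


# Illustrative shift for one injection-moulding machine (made-up numbers).
print(f"OEE = {oee(480, 60, 0.9, 400, 12):.2%}")
```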
3

Episodic Perspectives of Wireless Network Dependability

Chen, Yachuan 25 April 2006 (has links)
No description available.
4

Effects of interference on carrier tracking in fading and symbol synchronization

Emad, Amin 11 1900 (has links)
Synchronization is an essential part of every digital communication receiver. While frequency and phase synchronization are crucial to reliable bandpass coherent transmission, symbol timing recovery is a necessary part of every baseband and bandpass coherent receiver. This dissertation deals with the problem of synchronization in the presence of fading and interference. First, the performance of an automatic frequency control (AFC) loop is investigated using two parameters, the average switching rate and the mean time to loss of lock. These parameters are derived in closed form, or as integral-form formulas, for different scenarios of modulated and unmodulated signals in different fading channels when one interference signal is present at the input of the AFC. The results are then generalized to the noisy fading scenario, and it is shown that in the Rayleigh fading case the performance of the AFC improves when the desired signal is noisier. In the second part, the problem of symbol timing recovery is investigated in systems subject to intersymbol interference, and a non-data-aided maximum likelihood synchronizer is derived for these channels. A new, simple bound on the performance of synchronizers is then derived and compared with previously known lower bounds. It is shown that while this lower bound addresses the shortcomings of the well-known modified Cramér-Rao bound at small values of signal-to-noise ratio, it is much easier to compute than another well-known bound, the detection theory bound. / Communications
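Mean time to loss of lock, one of the two figures of merit above, can also be estimated by simulation when closed-form expressions are unavailable. The Python sketch below does this for a deliberately simplified first-order loop with additive disturbances; the loop model, parameters, and lock range are assumptions chosen for illustration and do not correspond to the analytical models in the dissertation.

```python
import numpy as np

def mean_time_to_loss_of_lock(gain=0.05, noise_std=0.08, lock_range=0.5,
                              trials=500, max_steps=20_000, seed=0):
    """Monte Carlo estimate of the mean time (in loop updates) until the
    frequency error of a toy first-order AFC loop leaves the lock range."""
    rng = np.random.default_rng(seed)
    times = []
    for _ in range(trials):
        error = 0.0
        for step in range(1, max_steps + 1):
            # Restoring correction plus an additive disturbance term
            # standing in for noise, fading and interference.
            error += -gain * error + rng.normal(0.0, noise_std)
            if abs(error) > lock_range:
                times.append(step)
                break
        else:
            times.append(max_steps)  # censored: lock never lost in this trial
    return float(np.mean(times))

print(f"estimated mean time to loss of lock ~ "
      f"{mean_time_to_loss_of_lock():.0f} loop updates")
```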
5

Effects of interference on carrier tracking in fading and symbol synchronization

Emad, Amin Unknown Date
No description available.
6

An empirical assessment of the predictive quality of internal product metrics to predict software maintainability in practice

Wu, Xinhao, Zhang, Maike January 2020 (has links)
Background. Maintainability of software products continues to be an area of importance and interest both for practice and research. The time spent on maintenance usually exceeds 70% of the whole software development process. At present, a large number of metrics have been suggested to indicate the maintainability of a software product. However, there is a gap in the validation of proposed source code metrics against the external quality of software maintainability. Objectives. In this thesis, we aim to catalog the proposed metrics for software maintainability. From this catalog we validate a subset of commonly proposed maintainability indicators. Methods. Through a literature review with a systematic search and selection approach, we collated maintainability metrics from secondary studies on software maintainability. A subset of the commonly proposed metrics identified in the literature review was validated in a retrospective study. The retrospective study used the large open source software "Elastic Search" as a case. We collected internal source code metrics and a proxy for maintainability of the system for 911 bug fixes in 14 versions (11 experimental samples, 3 verification samples) of the product. Results. Following a systematic search and selection process, we identified 11 secondary studies on software maintainability. From these studies we identified 290 source code metrics that are claimed to be indicators of the maintainability of a software product. We used mean time to repair (MTTR) as a proxy for the maintainability of a product. Our analysis reveals that, for the "elasticsearch" software, the values of the four indicators LOC, CC, WMC and RFC have the strongest correlation with MTTR. Conclusions. In this thesis, we validated a subset of commonly proposed source code metrics for predicting maintainability. The empirical validation using a popular large-scale open source system reveals that some metrics show a stronger correlation with a proxy for maintainability in use. This study provides important empirical evidence towards a better understanding of source code attributes and maintainability in practice. However, a single case and a retrospective study are insufficient to establish a cause-effect relation. Therefore, further replications of our study design with more diverse cases can increase the confidence in the predictive ability and thus the usefulness of the proposed metrics.
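The validation hinges on a rank correlation between per-release source code metrics and the MTTR proxy. The thesis does not include its analysis scripts, so the Python sketch below merely illustrates how such a correlation can be computed; the column names and the small example table are invented.

```python
import pandas as pd
from scipy.stats import spearmanr

# One row per analyzed release: averaged source code metrics and the observed
# mean time to repair (MTTR) proxy. Numbers are illustrative only.
releases = pd.DataFrame({
    "LOC":  [410_000, 455_000, 500_000, 540_000],
    "CC":   [5.1, 5.4, 5.8, 6.1],
    "WMC":  [18.2, 19.0, 19.9, 21.3],
    "RFC":  [31.5, 33.0, 34.8, 36.2],
    "MTTR_days": [3.1, 3.6, 4.0, 4.4],
})

for metric in ["LOC", "CC", "WMC", "RFC"]:
    rho, p_value = spearmanr(releases[metric], releases["MTTR_days"])
    print(f"{metric}: Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```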
7

Design and Analysis of Adaptive Fault Tolerant QoS Control Algorithms for Query Processing in Wireless Sensor Networks

Speer, Ngoc Anh Phan 02 May 2008 (has links)
Data sensing and retrieval in wireless sensor networks (WSNs) have great applicability in military, environmental, medical, home and commercial applications. In query-based WSNs, a user issues a query with quality of service (QoS) requirements in terms of reliability and timeliness, and expects a correct response to be returned within the deadline. Satisfying these QoS requirements requires fault tolerance mechanisms based on redundancy, which may cause the energy of the system to deplete quickly. This dissertation presents the design and validation of adaptive fault tolerant QoS control algorithms with the objective of satisfying the desired QoS requirements while maximizing the system lifetime in query-based WSNs. We analyze the effect of redundancy on the mean time to failure (MTTF) of query-based cluster-structured WSNs and show that an optimal redundancy level exists such that the MTTF of the system is maximized. We develop a hop-by-hop data delivery (HHDD) mechanism and an Adaptive Fault Tolerant Quality of Service Control (AFTQC) algorithm in which we utilize "source" and "path" redundancy with the goal of satisfying application QoS requirements while maximizing the lifetime of WSNs. To deal with network dynamics, we investigate proactive and reactive methods to dynamically collect channel and delay conditions and determine the optimal redundancy level at runtime. AFTQC can adapt to network dynamics that change the node density, residual energy, sensor failure probability, and radio range due to energy consumption, node failures, and changes in node connectivity. Further, AFTQC can deal with software faults, concurrent query processing with distinct QoS requirements, and data aggregation. We compare our design with a baseline design without redundancy, based on acknowledgement for data transmission and geographical routing for relaying packets, to demonstrate the feasibility. We validate analytical results with extensive simulation studies. Given query QoS requirements in terms of reliability and timeliness, our AFTQC design allows optimal "source" and "path" redundancies to be identified and applied dynamically in response to network dynamics, so that not only are query QoS requirements satisfied, as long as adequate resources are available, but the lifetime of the system is also prolonged. / Ph. D.
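The existence of an optimal redundancy level comes from a trade-off: more redundant paths raise per-query reliability but drain the energy budget faster. The dissertation derives this analytically; the Python sketch below only illustrates the trade-off numerically under an assumed toy model (independent paths, fixed per-path energy cost), not the AFTQC formulation itself.

```python
def expected_lifetime(m_paths, p_path=0.7, e_per_path=1.0,
                      e_budget=10_000.0, r_required=0.9):
    """Toy trade-off behind choosing a path redundancy level.

    Each query reply is sent over m_paths independent paths, each succeeding
    with probability p_path. More paths raise per-query reliability,
    1 - (1 - p_path)**m_paths, but each query then costs m_paths * e_per_path
    energy, so fewer queries fit in the energy budget e_budget. Returns the
    expected number of successfully answered queries, or 0 if the reliability
    requirement r_required cannot be met at this redundancy level.
    """
    reliability = 1.0 - (1.0 - p_path) ** m_paths
    if reliability < r_required:
        return 0.0
    queries_until_depletion = e_budget / (m_paths * e_per_path)
    return queries_until_depletion * reliability

# Sweep candidate redundancy levels and pick the one maximizing useful lifetime.
best = max(range(1, 11), key=expected_lifetime)
for m in range(1, 6):
    print(m, round(expected_lifetime(m), 1))
print("optimal path redundancy:", best)
```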
8

Design Techniques to Improve Time Dependent Dielectric Breakdown Based Failure for CMOS Circuits

Tarog, Emanuel S 01 January 2010 (has links) (PDF)
This project investigates the failure of various CMOS circuits as a result of Time Dependent Dielectric Breakdown (TDDB) and explores design techniques to increase the mean time to failure (MTTF) of large-scale circuits. Time Dependent Dielectric Breakdown is a phenomenon in which the oxide underneath the gate degrades as a result of the electric field across the material. Currently, there are few well-documented design techniques that can increase lifetime, but with a tool chain I created, called the MTTF Analyzing Program (MAP), I was able to test circuits under various conditions in order to identify weak links, discover relationships, and iterate on my design to observe the improvements and effects. The tool chain calculates power consumption, performance, temperature, and MTTF for a 'real life' circuit. Electric VLSI, an Electronic Design Automation tool, outputs a Spice file that yields parasitic quantities and spatial dimensions. LTspice, a high-performance Spice simulator, was used to calculate the voltage and current data. Finally, I created MAP to monitor the voltage, current, and dimension data and process them in conjunction with HotSpot, a thermal modeling tool, to calculate an MTTF for each MOSFET. Analysis of the data from the software infrastructure showed that transistor sizing plays a role in the MTTF. To maximize the MTTF of a transistor in a CMOS inverter, the activity of the pull-up transistor should be balanced with that of the transistor in the pull-down chain, ensuring the electric fields are balanced across both transistors. While it is impossible to completely balance an arbitrary CMOS circuit's activity for an arbitrary set of input signals, circuits can be intelligently skewed to help maximize the MTTF without increasing power consumption and without sacrificing circuit performance. Consequently, attaining a maximum MTTF does not come at a cost, as it is possible to design a circuit with a high MTTF that performs better and uses less power than a circuit with low MTTF.
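Per-transistor MTTF figures of this kind are normally produced by an empirical TDDB model driven by the oxide field and temperature. The abstract does not say which model MAP implements, so the Python sketch below uses the widely cited thermochemical E-model, MTTF = A·exp(−γ·E_ox)·exp(E_a/(kT)); the prefactor and parameter values are placeholders, not fitted data.

```python
import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def tddb_mttf(v_gate_v, t_ox_nm, temp_k,
              a_prefactor=1.0, gamma_per_mv_cm=4.0, e_a_ev=0.7):
    """Thermochemical E-model estimate of TDDB-limited lifetime (arbitrary units).

    MTTF = A * exp(-gamma * E_ox) * exp(E_a / (k * T)), with the oxide field
    E_ox = V_gate / t_ox expressed in MV/cm. All parameter values here are
    placeholders chosen for illustration, not fitted to any technology.
    """
    t_ox_cm = t_ox_nm * 1e-7                   # nm -> cm
    e_ox_mv_cm = (v_gate_v / t_ox_cm) / 1e6    # V/cm -> MV/cm
    return (a_prefactor
            * math.exp(-gamma_per_mv_cm * e_ox_mv_cm)
            * math.exp(e_a_ev / (BOLTZMANN_EV * temp_k)))

# A hotter, harder-driven gate degrades faster (smaller relative MTTF).
print(tddb_mttf(v_gate_v=1.0, t_ox_nm=2.0, temp_k=330))
print(tddb_mttf(v_gate_v=1.1, t_ox_nm=2.0, temp_k=360))
```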
