171 |
A comparative study of capacitor voltage balancing techniques for flying capacitor multi-level power electronic converters. Yadhati, Vennela. January 2010 (has links) (PDF)
Thesis (M.S.)--Missouri University of Science and Technology, 2010. / Vita. The entire thesis text is included in the file. Title from title screen of thesis/dissertation PDF file (viewed July 26, 2010). Includes bibliographical references (p. 96-102).
|
172 |
Reliability modelling of complex systems. Mwanga, Alifas Yeko. January 2006 (has links)
Thesis (Ph.D.) (Industrial and Systems Engineering)--University of Pretoria, 2006. / Includes summary. Includes bibliographical references. Available on the Internet via the World Wide Web.
|
173 |
Network reliability as a result of redundant connectivity. Binneman, Francois J. A. 03 1900 (has links)
Thesis (MSc (Logistics))--University of Stellenbosch, 2007. / There exists, for any connected graph G, a minimum set of vertices that, when removed, disconnects
G. Such a set of vertices is known as a minimum cut-set, the cardinality of which is known as the
connectivity number κ(G) of G. A connectivity preserving [connectivity reducing, respectively] spanning
subgraph G′ ⊆ G may be constructed by removing certain edges of G in such a way that κ(G′) = κ(G)
[κ(G′) < κ(G), respectively]. The problem of constructing such a connectivity preserving or reducing
spanning subgraph of minimum weight is known to be NP-complete.
This thesis contains a summary of the most recent results (as in 2006) from a comprehensive survey of
literature on topics related to the connectivity of graphs.
Secondly, the computational problems of constructing a minimum weight connectivity preserving or
connectivity reducing spanning subgraph for a given graph G are considered in this thesis. In particular,
three algorithms are developed for constructing such spanning subgraphs. The theoretical basis for each
algorithm is established and discussed in detail. The practicality of the algorithms is compared in terms
of their worst-case running times as well as their solution qualities. The fastest of these three algorithms
has a worst-case running time that compares favourably with the fastest algorithm in the literature.
Finally, a computerised decision support system, called Connectivity Algorithms, is developed which is
capable of implementing the three algorithms described above for a user-specified input graph.
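The cut-set notions above can be made concrete with a brute-force sketch (the helpers `is_connected` and `connectivity` are illustrative inventions, not the thesis's algorithms, which target large weighted graphs):

```python
from itertools import combinations

def is_connected(adj, removed=frozenset()):
    """BFS over the graph induced by deleting the `removed` vertices."""
    nodes = [v for v in adj if v not in removed]
    if not nodes:
        return True
    seen, stack = {nodes[0]}, [nodes[0]]
    while stack:
        for w in adj[stack.pop()]:
            if w not in removed and w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == len(nodes)

def connectivity(adj):
    """Brute-force connectivity number kappa(G): the size of a minimum
    cut-set, i.e. the smallest vertex set whose removal disconnects G."""
    for k in range(len(adj)):
        for cut in combinations(adj, k):
            if not is_connected(adj, frozenset(cut)):
                return k
    return len(adj) - 1  # complete graph: no cut-set disconnects it

# a 4-cycle: removing any one vertex leaves a path, but two opposite
# vertices disconnect it, so kappa = 2
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(connectivity(cycle))  # 2
```

The exponential enumeration here is exactly why the minimum-weight subgraph problems studied in the thesis are hard and call for the specialised algorithms it develops.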
|
174 |
Time synchronization and communication network redundancy for power network automation. Guo, Hao. January 2017 (has links)
Protection and Control (P&C) devices requiring accurate timing within a power transmission substation are commonly synchronized by distributed Global Positioning System (GPS) receivers. However, utilities now request a timing system that is less dependent on the direct use of distributed GPS receivers, because of reliability concerns with GPS receivers. In addition, to reduce device-to-device cabling and enable interoperability among devices from multiple vendors, utilities are looking to adopt the Ethernet-based IEC 61850 protocol suites to complement or replace a conventional hardwired secondary P&C system. The IEEE 1588-2008 synchronization protocol is a network-based time synchronization technique which can co-exist with IEC 61850 applications and deliver sub-microsecond timing accuracy. A number of IEC 61850 applications require seamless communication redundancy, whilst existing technologies used in a substation only recover communications tens of milliseconds after a communication failure. Alternatively, the newly released IEC 62439-3 Parallel Redundancy Protocol (PRP) and High-availability Seamless Redundancy (HSR) can achieve seamless redundancy by transmitting duplicate data packets simultaneously over parallel networks, and this can satisfy the extremely high reliability requirements of transmission substations. Considering the benefits, a unified network integrating IEEE 1588 and IEC 62439 PRP/HSR can be foreseen in future substations, but utilities need confidence in these technologies before real deployment. Hence, it is necessary to conduct comprehensive tests on such a timing system so that better insight into its performance and limitations can be obtained. This thesis first investigates the feasibility of integrating IEEE 1588 and IEC 62439 PRP into a single Ethernet network using a simulation tool and subsequently presents how the hardware testbed is established.
Meanwhile, although GPS receivers are commonly used for time synchronization in the power industry, their performance might not be fully investigated before deployment. Hence, this thesis also proposes a procedure to assess the performance, in terms of long-term stability and transient behaviour, of a timing system based merely on GPS receivers and one based on a mixture of GPS receivers and IEEE 1588 devices. Test results indicate that, whichever system is used, careful design of equipment, proper installation and appropriate engineering are required to satisfy the stringent accuracy requirements for critical automation applications in power systems.
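The seamless redundancy principle of PRP described above can be sketched as a toy duplicate-discard receiver (illustrative only; real IEC 62439-3 PRP uses a redundancy control trailer and a bounded duplicate-detection window, not an unbounded set, and the class name here is invented):

```python
class PrpReceiver:
    """Toy duplicate-discard receiver: each frame carries a (source, sequence)
    pair and is sent twice, once over each of two independent LANs. The first
    copy to arrive is delivered; the late twin is silently dropped, so the
    failure of either LAN costs zero recovery time."""

    def __init__(self):
        self.delivered = set()  # (source, seq) pairs already passed up

    def receive(self, source, seq, lan):
        if (source, seq) in self.delivered:
            return None  # duplicate from the other LAN: discard
        self.delivered.add((source, seq))
        return (source, seq, lan)  # deliver whichever copy arrived first

rx = PrpReceiver()
print(rx.receive("relay1", 7, "LAN_A"))  # delivered
print(rx.receive("relay1", 7, "LAN_B"))  # None: seamless discard of the twin
```

Because both copies are always in flight, no failover or reconvergence is needed, which is the contrast with the tens-of-milliseconds recovery of conventional ring or spanning-tree redundancy noted above.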
|
175 |
Designing single event upset mitigation techniques for large SRAM-based FPGA components / Desenvolvimento de técnicas de tolerância a falhas transientes em componentes programáveis por SRAM. Kastensmidt, Fernanda Gusmão de Lima. January 2003 (has links)
This thesis presents the study and development of fault-tolerant techniques for programmable architectures, the well-known Field Programmable Gate Arrays (FPGAs), customizable by SRAM. FPGAs are becoming more valuable for space applications because of their high density, high performance, reduced development cost and re-programmability. In particular, SRAM-based FPGAs are very valuable for remote missions because of the possibility of being reprogrammed by the user as many times as necessary in a very short period. SRAM-based FPGAs and micro-controllers represent a wide range of components in space applications, and as a result are the focus of this work, more specifically the Virtex® family from Xilinx and the architecture of the 8051 micro-controller from Intel. Triple Modular Redundancy (TMR) with voters is a common high-level technique to protect ASICs against single event upsets (SEU) and it can also be applied to FPGAs. The TMR technique was first tested in the Virtex® FPGA architecture by using a small design based on counters. Faults were injected in all sensitive parts of the FPGA and a detailed analysis of the effect of a fault in a TMR design synthesized on the Virtex® platform was performed. Results from fault injection and from a radiation ground test facility showed the efficacy of TMR for the case study circuit.
Although TMR showed high reliability, this technique presents some limitations, such as area overhead, three times more input and output pins and, consequently, a significant increase in power dissipation. Aiming to reduce TMR costs and improve reliability, an innovative high-level technique for designing fault-tolerant systems in SRAM-based FPGAs was developed, without modifications to the FPGA architecture. This technique combines time and hardware redundancy to reduce overhead and to ensure reliability. It is based on duplication with comparison and concurrent error detection. The new technique proposed in this work was specifically developed for FPGAs to cope with transient faults in the user combinational and sequential logic, while also reducing pin count, area and power dissipation. The methodology was validated by fault injection experiments on an emulation board. The thesis presents comparative results in fault coverage, area and performance for the discussed techniques.
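The TMR voting discussed above can be sketched as a bitwise 2-out-of-3 majority function (a minimal illustration of the principle, not the thesis's FPGA implementation; the function name and values are invented):

```python
def majority_voter(a, b, c):
    """Bitwise 2-out-of-3 vote over three replica outputs: any single
    faulty replica is outvoted by the two fault-free copies."""
    return (a & b) | (a & c) | (b & c)

golden = 0b1011
upset = golden ^ 0b0100  # an SEU flips bit 2 of one replica
print(bin(majority_voter(golden, upset, golden)))  # 0b1011: fault masked
```

The triplication behind this one-line vote is exactly the source of the area, pin and power overheads the abstract cites, which motivates the duplication-with-comparison alternative.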
|
176 |
The redundancy effect in human causal learning : attention, uncertainty, and inhibition. Zaksaite, Gintare. January 2017 (has links)
Using an allergist task, Uengoer, Lotz and Pearce (2013) found that in a design A+/AX+/BY+/CY-, the blocked cue X was indicated to cause the outcome to a greater extent than the uncorrelated cue Y. This finding has been termed “the redundancy effect” by Pearce and Jones (2015). According to Vogel and Wagner (2017), the redundancy effect “presents a serious challenge for those theories of conditioning that compute learning through a global error-term” (p. 119). One such theory is the Rescorla-Wagner (1972) model, which predicts the opposite result, that Y will have a stronger association with the outcome than X. This thesis explored the basis of the redundancy effect in human causal learning. Evidence from Chapter 2 suggested that the redundancy effect was unlikely to have been due to differences in attention between X and Y. Chapter 3 explored whether differences in participants’ certainty about the causal status of X and of Y contributed to the redundancy effect. Manipulations aimed at disambiguating the effects that X had on the outcome, including outcome-additivity training and low outcome rate, resulted in lower ratings for this cue and a smaller redundancy effect. However, the redundancy effect was still significant with both manipulations, suggesting that while participants’ uncertainty about the causal status of X contributed to it, there may have been other factors. Chapter 4 investigated whether another factor was a lack of inhibition for cue C. In a scenario where inhibition was more plausible than in an allergist task, a negative correlation between causal ratings for C and for Y, and a positive correlation between ratings for C and the magnitude of the redundancy effect, were found. In addition, establishing C as inhibitory resulted in a smaller redundancy effect than establishing C as neutral. 
Overall, findings of this thesis suggest that the redundancy effect in human causal learning is the result of participants’ uncertainty about the causal status of X, and a lack of inhibition for C. Further work is recommended to explore whether combining manipulations targeting X and Y would reverse the redundancy effect, whether effects of outcome additivity and outcome rate on X are the result of participants’ uncertainty about this cue, and the extent to which participants rely on single versus summed error.
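The Rescorla-Wagner prediction that the thesis argues against can be checked with a short simulation (a sketch with assumed learning rate, trial ordering and epoch count; the function name is an invention, not the thesis's procedure):

```python
import random

def rescorla_wagner(trials, alpha=0.2, n_epochs=200, seed=0):
    """Rescorla-Wagner (1972) update: every cue present on a trial changes
    by alpha * (lambda - V_total), where V_total is the summed associative
    strength of all cues present -- the global error term."""
    rng = random.Random(seed)
    trials = list(trials)
    V = {}
    for _ in range(n_epochs):
        rng.shuffle(trials)
        for cues, lam in trials:
            err = lam - sum(V.get(c, 0.0) for c in cues)
            for c in cues:
                V[c] = V.get(c, 0.0) + alpha * err
    return V

# the A+/AX+/BY+/CY- design from Uengoer, Lotz and Pearce (2013)
design = [(["A"], 1.0), (["A", "X"], 1.0), (["B", "Y"], 1.0), (["C", "Y"], 0.0)]
V = rescorla_wagner(design)
# The global error term drives the blocked cue X toward zero while the
# uncorrelated cue Y settles near 0.5 -- the opposite ordering to the
# empirically observed redundancy effect (X rated above Y).
print(V["Y"] > V["X"])  # True
```

Running this makes the challenge concrete: the model's asymptotic ordering V(Y) > V(X) is exactly what the human ratings reported above reverse.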
|
177 |
Investigating techniques to reduce soft error rate under single-event-induced charge sharing / Investigando técnicas para reduzir a taxa de erro de soft sob evento único induzido de carga compartilhada. Almeida, Antonio Felipe Costa de. January 2014 (has links)
The interaction of radiation with integrated circuits can provoke transient faults due to the deposit of charge in sensitive nodes of transistors. Because of shrinking feature sizes in process technology, charge sharing between transistors placed close to each other has been observed more and more often. This phenomenon can lead to multiple transient faults. Therefore, it is important to analyze the effect of multiple transient faults in integrated circuits and investigate mitigation techniques able to cope with multiple faults. This work investigates the effect known as single-event-induced charge sharing in integrated circuits. Two main techniques are analyzed to cope with this effect. First, a placement constraint methodology is proposed. This technique uses placement constraints in standard-cell-based circuits. The objective is to achieve a layout for which the Soft-Error Rate (SER) due to charge shared between adjacent cells is reduced. A set of fault injection campaigns was performed, and the results show that the SER due to single-event-induced charge sharing can be minimized according to the layout structure. Results show that by using placement constraints, it is possible to reduce the error rate from 12.85% to 10.63% for double faults. Second, Triple Modular Redundancy (TMR) schemes with different levels of granularity delimited by majority voters are analyzed under multiple faults. The TMR versions are implemented using a standard design flow based on a traditional commercial standard cell library. An extensive fault injection campaign is then performed in order to verify the soft-error rate due to single-event-induced charge sharing in multiple nodes. Results show that the proposed methodology becomes crucial to find the best trade-off in area, performance and soft-error rate when TMR designs are considered under multiple upsets.
Results have been evaluated in a case-study circuit, the Advanced Encryption Standard (AES), synthesized to a 90nm Application Specific Integrated Circuit (ASIC) library, and they show that by combining the two techniques, the error rate resulting from multiple faults can be minimized or masked. By using TMR with different granularities together with the placement constraint methodology, it is possible to reduce the error rate from 11.06% to 0.00% for double faults. A detailed study of triple, quadruple and quintuple faults combining both techniques is also described. We also tested the TMR schemes with different granularities on an SRAM-based FPGA platform. Results show that the versions with a fine-grain scheme (FGTMR) were more effective in masking multiple faults, similar to the results observed in the ASICs. In summary, the main contribution of this master's thesis is the investigation of charge sharing effects in ASICs and the use of a combination of techniques based on TMR redundancy and placement to improve tolerance under multiple faults.
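Why charge sharing between adjacent cells threatens TMR, and hence why the placement constraints above matter, can be seen in a minimal sketch: a single upset is outvoted, but an event that flips the same bit in two neighbouring replicas wins the vote (illustrative bit patterns, not the thesis's fault model):

```python
def tmr_vote(a, b, c):
    """Bitwise majority of three replica outputs."""
    return (a & b) | (a & c) | (b & c)

golden = 0b1010
# a single upset in one replica is outvoted by the other two copies
one_fault = tmr_vote(golden ^ 0b0001, golden, golden)
# charge sharing can flip the same bit in TWO adjacent replicas at once,
# and the corrupted value then carries the 2-out-of-3 vote
two_faults = tmr_vote(golden ^ 0b0001, golden ^ 0b0001, golden)
print(bin(one_fault))   # 0b1010: masked
print(bin(two_faults))  # 0b1011: double fault defeats the voter
```

Keeping the replicas physically apart, as the placement constraint methodology does, makes such correlated double flips less likely in the first place.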
|
180 |
An Investigation of Kinematic Redundancy for Reduced Error in Micromilling. January 2014 (has links)
abstract: Small metallic parts of size less than 1 mm, with features measured in tens of microns and tolerances as small as 0.1 micron, are in demand for research in many fields, such as electronics, optics, and biomedical engineering. Because of various drawbacks with non-mechanical micromanufacturing processes, micromilling has shown itself to be an attractive alternative manufacturing method. Micromilling is a microscale manufacturing process that can be used to produce a wide range of small parts, including those that have complex 3-dimensional contours. Although the micromilling process is superficially similar to conventional-scale milling, the physical processes of micromilling are unique due to scale effects. These scale effects occur due to unequal scaling of the parameters from macroscale to microscale milling. One key example of a scale effect in the micromilling process is a geometrical source of error known as chord error. The chord error forces the feedrate down to a reduced value to produce features within machining tolerances. In this research, it is hypothesized that the increase of chord error in micromilling can be alleviated by intelligent modification of the kinematic arrangement of the micromilling machine. Currently, all 3-axis micromilling machines are constructed with a Cartesian kinematic arrangement with three perpendicular linear axes. In this research, the cylindrical kinematic arrangement is introduced, and an analytical expression for the chord error for this arrangement is derived. Numerical simulations are performed to evaluate the chord errors for the cylindrical kinematic arrangement. It is found that the cylindrical kinematic arrangement gives reduced chord error for some types of desired toolpaths. Then, kinematic redundancy is introduced to design a novel kinematic arrangement. Several desired toolpaths have been numerically simulated to evaluate the chord error for the kinematically redundant arrangement.
It is concluded that this arrangement gives an error up to 5 times smaller for all the desired toolpaths considered, and allows significant gains in allowable feedrates. / Dissertation/Thesis / Masters Thesis Mechanical Engineering 2014
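The chord-error-feedrate trade-off described above follows from circle geometry: a straight segment of length L = feedrate × interpolation period approximating an arc of radius r deviates by e = r − sqrt(r² − (L/2)²) ≈ L²/(8r). A sketch with hypothetical numbers (the function names and parameter values are illustrative, not the thesis's derivation for the cylindrical or redundant arrangements):

```python
import math

def chord_error(radius, feedrate, period):
    """Deviation e = r - sqrt(r^2 - (L/2)^2) of a straight segment of
    length L = feedrate * period approximating an arc of radius r."""
    half = feedrate * period / 2.0
    return radius - math.sqrt(radius ** 2 - half ** 2)

def max_feedrate(radius, tol, period):
    """Largest feedrate whose chord error stays within tolerance:
    invert e = r - sqrt(r^2 - (L/2)^2) for L, then divide by the period."""
    return 2.0 * math.sqrt(tol * (2.0 * radius - tol)) / period

# hypothetical numbers: a 100 micron arc radius, 0.1 micron tolerance,
# 1 ms interpolation period (lengths in mm, times in s)
r, tol, T = 0.100, 0.0001, 0.001
f = max_feedrate(r, tol, T)
print(round(f, 2), "mm/s")                  # allowable feedrate
print(chord_error(r, f, T) <= tol + 1e-12)  # True: tolerance met
```

The small radii of microscale features shrink the allowable segment length, and with it the feedrate, which is the limitation the kinematically redundant arrangement is designed to relax.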
|