About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
91

Výpočtový systém pro vyhodnocení výrobních ukazatelů spaloven komunálních odpadů / Computational tool for processing of production data from waste-to-energy systems

Machát, Ondřej January 2013 (has links)
This thesis evaluates crucial operational indicators of a waste-to-energy plant, above all the lower heating value of municipal solid waste and the boiler efficiency. An approach for improving the evaluation by mathematical methods is proposed and implemented in a computational tool developed in Microsoft Excel. The approach is tested and subsequently applied to operational data from a real waste-to-energy plant.
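The core evaluation can be sketched as a simple energy balance (a hedged illustration, not the thesis's Excel tool; the function name, the single-stream balance, and the example figures are all assumptions):

```python
def lower_heating_value(steam_heat_mj: float,
                        boiler_efficiency: float,
                        waste_mass_t: float) -> float:
    """Estimate the lower heating value of municipal solid waste (MJ/t):
    heat delivered to the steam divided by the boiler efficiency gives
    the heat released by the waste; divide by the mass burned."""
    heat_released_mj = steam_heat_mj / boiler_efficiency
    return heat_released_mj / waste_mass_t

# Example: 90 000 MJ transferred to steam at 80 % boiler efficiency
# while burning 12.5 t of waste -> LHV of 9 000 MJ/t
print(lower_heating_value(90_000, 0.80, 12.5))  # -> 9000.0
```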
92

AHEAD: Adaptable Data Hardening for On-the-Fly Hardware Error Detection during Database Query Processing

Kolditz, Till, Habich, Dirk, Lehner, Wolfgang, Werner, Matthias, de Bruijn, S. T. J. 13 June 2022 (has links)
It has long been known that hardware components are not perfect and that soft errors in the form of single bit flips happen all the time. Up to now, these single bit flips have mainly been addressed in hardware using general-purpose protection techniques. However, recent studies have shown that future hardware components are becoming less and less reliable overall and that multi-bit flips occur regularly rather than exceptionally. Additionally, hardware aging effects will lead to error models that change during run-time. Scaling hardware-based protection techniques to cover changing multi-bit flips is possible, but it introduces large performance, chip area, and power overheads, which will become unaffordable in the future. To tackle this, an emerging research direction employs protection techniques in higher software layers such as compilers or applications, where the available knowledge can be used to specialize and adapt the protection. In this paper we therefore propose AHEAD, a novel adaptable and on-the-fly hardware error detection approach for database systems. AHEAD provides configurable error detection in an end-to-end fashion and reduces the overhead (storage and computation) compared to other techniques at this level. Our approach uses an arithmetic error coding technique that, on the one hand, allows query processing to work entirely on hardened data and, on the other hand, enables on-the-fly detection during query processing of (i) errors that modify data stored in memory or transferred on an interconnect and (ii) errors induced during computations. Our exhaustive evaluation clearly shows the benefits of the AHEAD approach.
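The arithmetic error coding mentioned in the abstract can be illustrated with a minimal AN-code sketch (the constant A and the helper names are illustrative, not taken from AHEAD):

```python
A = 641  # encoding constant; larger, well-chosen A detects more flip patterns

def encode(value: int) -> int:
    """Harden a value by multiplying it with A (an AN code)."""
    return value * A

def is_valid(code: int) -> bool:
    """A codeword unaffected by bit flips stays divisible by A."""
    return code % A == 0

def decode(code: int) -> int:
    if not is_valid(code):
        raise ValueError("bit flip detected")
    return code // A

# Arithmetic can run directly on hardened data: sums of codewords are codewords.
total = encode(7) + encode(35)
assert is_valid(total) and decode(total) == 42

# A single flipped bit breaks divisibility by A and is detected on the fly.
corrupted = encode(7) ^ (1 << 5)
assert not is_valid(corrupted)
```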
93

Community-Based Optimal Scheduling of Smart Home Appliances Incorporating Occupancy Error

Ansu-Gyeabour, Ernest 22 August 2013 (has links)
No description available.
94

Fast error detection method for additive manufacturing process monitoring using structured light three dimensional imaging technique

Jack Matthew Girard (17584095) 19 January 2024 (has links)
Monitoring of additive manufacturing (AM) processes saves time and materials by detecting and addressing errors as they occur. When monitoring is fast and efficient, each unit can be completed in less time, improving overall economics and allowing the user to accept more AM requests with the same number of machines. Among existing AM process monitoring solutions, it is very challenging for any approach to analyze full-resolution sensor data yielding three-dimensional (3D) topological information in closed-loop real-time applications, and simultaneously to offer plug-and-play operation once the AM hardware and sensor subsystems are configured. This thesis presents a novel method that speeds up error detection in an AM process by minimizing the necessary 3D reconstruction and comparison. A structured light 3D imaging technique is developed that has a native pixel-by-pixel mapping between the captured two-dimensional (2D) absolute phase image and the reconstructed 3D point cloud. This mapping allows error detection to be performed in the 2D absolute phase image domain prior to 3D point cloud generation, which drastically reduces complexity and computational time. For each layer of an AM process, an artificial threshold phase image is generated and compared against the measured absolute phase image to identify error regions. Compared to an existing AM error detection method based on 3D reconstruction and point cloud processing, experimental results from a material extrusion (MEX) AM process demonstrate that the proposed method has comparable error detection capabilities. The proposed method also significantly increases the error detection speed, where the speed improvement factor follows a power-law relationship with the percentage of erroneous pixels in the captured 2D image. The proposed method was also successfully used to implement closed-loop error correction, demonstrating a potential process monitoring application.
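The per-layer comparison described above reduces to a pixel-wise operation on two phase images; a minimal NumPy sketch under stated assumptions (the function name, the tolerance value, and the toy data are illustrative, and the thesis's threshold-image generation is far more involved):

```python
import numpy as np

def detect_error_regions(measured_phase: np.ndarray,
                         threshold_phase: np.ndarray,
                         tolerance: float = 0.05) -> np.ndarray:
    """Flag pixels whose measured absolute phase falls short of the
    artificial threshold phase by more than a tolerance, indicating
    under-deposited material in the current layer."""
    return (threshold_phase - measured_phase) > tolerance

# Toy 3x3 layer with one defective pixel
expected = np.full((3, 3), 1.0)   # threshold phase image for this layer
measured = expected.copy()
measured[1, 1] = 0.7              # phase deficit of 0.3 at one pixel
mask = detect_error_regions(measured, expected)
print(int(mask.sum()))            # -> 1 erroneous pixel found
```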
95

Single Event Upset error detection on routing tracks of Xilinx FPGAs

Taj, Billy 24 September 2014 (has links)
This thesis proposes a new method to detect routing switch alterations on FPGAs in real time. By sampling the circuit path at the source and the destination and comparing the samples, it is possible to determine whether the routing of the path has changed. We compare and contrast this probing method with previously established techniques such as cyclic redundancy checks, built-in self-tests, triple modular redundancy, duplication with comparison, and redesigning the FPGA. The probe method finds the routing error in one clock cycle, using pre-existing elements on the FPGA, while the FPGA remains operational. The method works on all FPGAs that use the Wilton-style switchbox. An automated tool that applies the probe scheme to a design circuit built on the Xilinx Virtex-4 FPGA is also presented. / Master of Applied Science (MASc)
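The probe idea — sample the path at source and destination, then compare — can be mimicked in software as a sketch (purely illustrative; the actual scheme operates on FPGA routing hardware, not on Python lists):

```python
def probe_mismatch(source_samples, dest_samples):
    """Compare samples taken at the source and the destination of a
    routing track; any disagreement signals a routing alteration."""
    return [i for i, (s, d) in enumerate(zip(source_samples, dest_samples))
            if s != d]

src = [1, 0, 1, 1, 0, 1]
dst = [1, 0, 0, 1, 0, 1]   # an upset altered the track carrying sample 2
print(probe_mismatch(src, dst))  # -> [2]
```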
96

New Fault Detection, Mitigation and Injection Strategies for Current and Forthcoming Challenges of HW Embedded Designs

Espinosa García, Jaime 03 November 2016 (has links)
Thesis by compendium / The relevance of electronics to the safety of everyday devices keeps growing, as an ever larger share of their functionality is assigned to electronic components. This, of course, comes along with a constant need for higher performance to fulfill those functionality requirements while keeping power and cost low. In this scenario, industry is struggling to provide a technology that meets all the performance, power, and price specifications, at the cost of increased vulnerability to several known fault types and the appearance of new ones. To deal with the new and growing faults in these systems, designers have been using traditional techniques from safety-critical applications, which in general yield suboptimal results. In fact, modern embedded architectures offer the possibility of optimizing dependability properties by enabling hardware, firmware, and software levels to interact in the process; however, that potential has not yet been fully realized. Advances at every level are needed if flexible, robust, resilient, and cost-effective fault tolerance is to be achieved. The work presented here focuses on the hardware level, with a potential integration into a holistic approach as a background consideration.
The efforts in this thesis have focused on several issues: (i) introducing the additional fault models required to adequately represent physical effects emerging in modern manufacturing technologies, (ii) providing tools and methods to efficiently inject both the proposed models and classical ones, (iii) analyzing the optimum method for assessing system robustness through extensive fault injection and later correlation with higher-level layers, in an effort to cut development time and cost, (iv) providing new detection methodologies to cope with the challenges modeled by the proposed fault models, (v) proposing mitigation strategies aimed at tackling such new threat scenarios, and (vi) devising an automated methodology for deploying many fault tolerance mechanisms in a systematic, robust way. The outcomes of the thesis constitute a suite of tools and methods that help the designer of critical systems develop robust, validated, on-time designs tailored to the application. / Espinosa García, J. (2016). New Fault Detection, Mitigation and Injection Strategies for Current and Forthcoming Challenges of HW Embedded Designs [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/73146 / Compendio
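A fault-injection campaign of the kind this thesis automates typically builds on a primitive such as a random single-bit flip; a minimal sketch (the function name and word width are assumptions, and real campaigns inject into hardware or simulation models rather than Python integers):

```python
import random

def inject_bit_flip(word: int, width: int = 32, rng=random):
    """Flip one randomly chosen bit of a data word, emulating a
    single-bit fault for an injection campaign. Returns the faulty
    word and the flipped bit position."""
    pos = rng.randrange(width)
    return word ^ (1 << pos), pos

rng = random.Random(0)               # seeded for a repeatable campaign
faulty, pos = inject_bit_flip(0xDEADBEEF, rng=rng)
assert faulty != 0xDEADBEEF
assert faulty ^ 0xDEADBEEF == 1 << pos   # exactly one bit differs
```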
97

Cost Beneficial Solution for High Rate Data Processing

Mirchandani, Chandru, Fisher, David, Ghuman, Parminder 10 1900 (has links)
International Telemetering Conference Proceedings / October 25-28, 1999 / Riviera Hotel and Convention Center, Las Vegas, Nevada / GSFC, in keeping with the tenets of NASA, has been aggressively investigating new technologies for spacecraft and ground communications and processing. The application of these technologies, together with standardized telemetry formats, makes it possible to build systems that provide high performance at low cost within a short development cycle. The High Rate Telemetry Acquisition System (HRTAS) prototype is one such effort that has validated Goddard's push toward faster, better, and cheaper. The HRTAS system architecture is based on the Peripheral Component Interconnect (PCI) bus and VLSI application-specific integrated circuits (ASICs). These ASICs perform frame synchronization, bit-transition density decoding, cyclic redundancy code (CRC) error checking, Reed-Solomon error detection/correction, data unit sorting, packet extraction, annotation, and other service processing. This processing is performed at sustained rates of 150 Mbps and higher using a high-end workstation running a standard UNIX O/S (DEC 4100 with DEC UNIX or better). ASICs are also used for the digital reception of intermediate frequency (IF) telemetry as well as for the spacecraft command interface for commands and data simulations. To improve the efficiency of the back-end processing, the level zero processing sorting element is being developed. This will provide a complete hardware solution for extracting and sorting source data units and making them available as separate files on a remote disk system. Research is ongoing to extend this development to higher levels of the science data processing pipeline. Because level 1 and higher processing is instrument dependent, an acceleration approach utilizing ASICs is not feasible.
The advent of field programmable gate array (FPGA) based computing, referred to as adaptive or reconfigurable computing, provides a processing performance close to ASIC levels while maintaining much of the programmability of traditional microprocessor based systems. This adaptive computing paradigm has been successfully demonstrated and its cost performance validated, to make it a viable technology for the level one and higher processing element for the HRTAS. Higher levels of processing are defined as the extraction of useful information from source telemetry data. This information has to be made available to the science data user in a very short period of time. This paper will describe this low cost solution for high rate data processing at level one and higher processing levels. The paper will further discuss the cost-benefit of this technology in terms of cost, schedule, reliability and performance.
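The CRC error checking that the HRTAS ASICs perform in hardware can be illustrated in software; this sketch uses CRC-32 from Python's standard library rather than the CCSDS polynomial the actual hardware would use, and the frame layout is invented for the example:

```python
import zlib

def frame_ok(payload: bytes, stored_crc: int) -> bool:
    """Recompute the CRC over a telemetry frame payload and compare it
    with the CRC carried in the frame trailer."""
    return zlib.crc32(payload) == stored_crc

frame = b"\x1a\xcf\xfc\x1d" + b"telemetry payload"
crc = zlib.crc32(frame)          # trailer CRC written by the sender
assert frame_ok(frame, crc)                       # clean frame passes
assert not frame_ok(frame[:-1] + b"\x00", crc)    # corrupted byte is caught
```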
98

Metodologia para depuração off-line de parâmetros série e shunt de linhas de transmissão através de diversas amostras de medidas / Methodology for off-line validation of transmission line parameters via several measurement snapshots

Albertini, Madeleine Rocio Medrano Castillo 08 September 2010 (has links)
A practical and efficient off-line approach to detect, identify and correct series and shunt branch parameter errors is proposed in this thesis. The transmission lines, or branches of the bus-branch model, suspected of having parameter errors are identified by means of the Suspicious Index (SI). The SI of a branch is the ratio between the number of measurements incident to that branch whose normalized residuals are larger than a specified threshold value and the total number of measurements incident to that branch. Using several measurement snapshots, the suspicious parameters are sequentially estimated via an augmented state and parameter estimator, based on the normal equations, which extends the V-θ state vector to include the suspicious parameters. Several simulation results (with the IEEE 14-, 30- and 57-bus systems) have demonstrated the high accuracy and reliability of the proposed approach in dealing with single and multiple parameter errors in adjacent and non-adjacent branches, as well as in parallel transmission lines with series compensation. The practical viability of the proposed approach is confirmed by tests performed on two subsystems of Hydro-Québec Trans-Énergie.
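The Suspicious Index defined above can be sketched directly (the measurement names and the residual threshold are illustrative, not taken from the thesis):

```python
def suspicious_index(normalized_residuals: dict, incident: list,
                     threshold: float = 3.0) -> float:
    """SI of a branch: the fraction of measurements incident to the
    branch whose normalized residual exceeds the threshold."""
    flagged = sum(1 for m in incident
                  if abs(normalized_residuals[m]) > threshold)
    return flagged / len(incident)

# Hypothetical branch 1-2 with four incident measurements
residuals = {"P12": 4.1, "Q12": 3.5, "P1": 0.8, "V2": 1.2}
si = suspicious_index(residuals, ["P12", "Q12", "P1", "V2"])
print(si)  # -> 0.5, so half the incident residuals look suspicious
```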
100

Using On-Chip Error Detection to Estimate FPGA Design Sensitivity to Configuration Upsets

Keller, Andrew Mark 01 April 2017 (has links)
SRAM-based FPGAs provide valuable computation resources and reconfigurability; however, ionizing radiation can cause designs operating on these devices to fail. The sensitivity of an FPGA design to configuration upsets, its SEU sensitivity, is an indication of the design's failure rate. SEU mitigation techniques can reduce the SEU sensitivity of FPGA designs in harsh radiation environments. The reliability benefits of these techniques must be determined before they can be used in mission-critical applications, and can be determined by comparing the SEU sensitivity of an FPGA design with and without the techniques applied. Many approaches can be taken to evaluate the SEU sensitivity of an FPGA design. This work describes a low-cost, easier-to-implement approach that uses additional logic resources on the same FPGA as the design under test to determine when the design has failed, i.e., deviated from its specified behavior. Three SEU mitigation techniques were evaluated using this approach: triple modular redundancy (TMR), configuration scrubbing, and user-memory scrubbing. Significant reductions in SEU sensitivity are demonstrated through fault injection and radiation testing. Two LEON3 processors operating in lockstep are compared against each other using on-chip error detection logic on the same FPGA; the design's SEU sensitivity is reduced by 27x when TMR and configuration scrubbing are applied, and by approximately 50x when TMR, configuration scrubbing, and user-memory scrubbing are applied together. Using this approach, an SEU sensitivity comparison is also made between designs implemented on an Altera Stratix V FPGA and a Xilinx Kintex 7 FPGA: several instances of a finite state machine are compared against each other and against a set of golden output vectors, all on the same FPGA, and instances of an AES cryptography core are chained together, with the outputs of two chains compared using on-chip error detection.
Fault injection and neutron radiation testing reveal several similarities between the two FPGA architectures, with SEU mitigation techniques reducing the SEU sensitivity of the two designs by between 4x and 728x. Finally, protecting the on-chip functional error detection logic itself with TMR is compared against protecting it with duplication with compare (DWC); fault injection results suggest that DWC is the more favorable choice for protecting error detection logic.
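The two protection schemes compared here, TMR masking and DWC detection, reduce to a few bitwise operations; a minimal sketch (not the LEON3 or AES setup from the thesis):

```python
def tmr_vote(a: int, b: int, c: int) -> int:
    """Bitwise majority vote over three redundant copies; an upset in
    any single copy is masked."""
    return (a & b) | (a & c) | (b & c)

def dwc_mismatch(a: int, b: int) -> bool:
    """Duplication with compare: an upset is detected but not masked."""
    return a != b

good = 0b1011
upset = good ^ 0b0100          # one bit flipped in a single copy
assert tmr_vote(good, upset, good) == good   # TMR masks the error
assert dwc_mismatch(good, upset)             # DWC only flags it
```

DWC costs roughly two copies plus a comparator, while TMR costs three copies plus voters, which is one reason the thesis finds DWC attractive for protecting detection logic, where flagging an error is enough.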
