  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
371

Cost-effective dynamic repair for FPGAs in real-time systems / Reparo dinâmico de baixo custo para FPGAs em sistemas tempo-real

Santos, Leonardo Pereira January 2016 (has links)
Field-Programmable Gate Arrays (FPGAs) are widely used in digital systems due to characteristics such as flexibility, low cost and high density. These characteristics stem from the use of SRAM cells in the configuration memory, which makes these devices susceptible to radiation-induced errors such as SEUs. TMR is the most widely used mitigation technique, but it has a high cost in both area and energy, restricting its use in low-cost and/or low-power applications. As an alternative to TMR, we propose the use of DMR associated with a repair mechanism of the FPGA configuration memory called scrubbing. The repair of FPGAs in real-time systems presents a specific set of challenges. Besides guaranteeing the correct computation of the data, this computation must be carried out entirely within the available time slot, finishing before a time limit (deadline). The difference between the computation time and the deadline is called the slack, and it is the time available to repair the system. This work uses dynamic shifted scrubbing, which aims to maximize the repair probability of the FPGA configuration memory within the available slack, based on a diagnosis of the error. Shifted scrubbing has already been used with fine-grained diagnosis techniques (NAZAR, 2015). This work proposes the use of coarse-grained diagnosis techniques for shifted scrubbing, avoiding the performance penalties and area costs associated with fine-grained techniques. Circuits from the MCNC suite were protected with the proposed techniques and subjected to error-injection campaigns (NAZAR; CARRO, 2012a). 
The collected data were analyzed and the best scrubbing starting positions were calculated for each circuit. Failure-in-Time (FIT) rates were calculated to compare the different proposed diagnosis techniques. The obtained results confirmed the initial hypothesis of this work: the reduction in the number of sensitive bits and the low degradation of the clock period allowed a reduced FIT rate when compared with fine-grained diagnosis techniques. Finally, a comparison is made between the three proposed techniques, considering the performance and area costs associated with each one.
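The core idea of shifted scrubbing — choosing where the scrub cycle starts so that the diagnosed fault region is rewritten before the deadline — can be sketched in a few lines. This is an illustrative sketch, not the thesis's algorithm; all names, frame counts, and timings are hypothetical.

```python
# Illustrative sketch (not from the thesis): given a coarse-grained diagnosis
# that narrows the fault to a set of suspect configuration frames, pick the
# scrubbing start frame that maximizes the number of suspect frames
# rewritten within the available slack. All numbers are hypothetical.

def frames_repaired_in_slack(start, n_frames, slack_ns, frame_ns):
    """Configuration frames rewritten (wrapping around) before the deadline."""
    budget = slack_ns // frame_ns              # frames we can scrub in the slack
    return [(start + i) % n_frames for i in range(min(budget, n_frames))]

def best_start(suspect_frames, n_frames, slack_ns, frame_ns):
    """Scrub start position covering the most suspect frames within the slack."""
    suspects = set(suspect_frames)
    def coverage(s):
        return len(suspects & set(frames_repaired_in_slack(s, n_frames, slack_ns, frame_ns)))
    return max(range(n_frames), key=coverage)

# Example: 100 frames, 20 us of slack, 1 us per frame, diagnosis points at frames 40..49.
start = best_start(range(40, 50), 100, slack_ns=20_000, frame_ns=1_000)
```

With a finer diagnosis the suspect set shrinks, which is exactly why the grain of the diagnosis technique trades repair probability against area and performance cost.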
372

Development of a programmable load

Minnaar, Ulrich John 14 November 2006 (has links)
Student Number : 0400486V - MSc (Eng) dissertation - School of Electrical and Information Engineering - Faculty of Engineering and the Built Environment / The Voltage Dip Test Facility at the University of the Witwatersrand utilises a resistive load during testing of variable speed drives. This method produces valuable results regarding the performance of drives under dip conditions. It has been shown that load type does influence the performance of drives and this variation cannot be tested under current conditions as only linear loading is attainable with resistive loads. This thesis proposes a programmable load based on the concept of field-oriented control of an induction motor. The concepts involved with field-oriented control are discussed and shown to be suitable for this application. An implementation strategy utilising custom-designed software and an off-the-shelf VSD is developed and executed. The performance of the programmable load is analysed under both steady-state and dynamic conditions.
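Field-oriented control rests on the Clarke and Park transforms, which map three-phase stator quantities into a rotating d-q frame where flux and torque can be regulated independently. Below is a minimal sketch of the two transforms in their amplitude-invariant form (assuming balanced currents); this is standard textbook material, not code from the dissertation.

```python
import math

def clarke(ia, ib, ic):
    """Three-phase (a, b, c) -> stationary two-phase (alpha, beta).
    Amplitude-invariant form, assuming balanced currents (ia + ib + ic = 0)."""
    alpha = ia
    beta = (ia + 2.0 * ib) / math.sqrt(3.0)
    return alpha, beta

def park(alpha, beta, theta):
    """Stationary (alpha, beta) -> rotating (d, q) at rotor flux angle theta."""
    d = alpha * math.cos(theta) + beta * math.sin(theta)
    q = -alpha * math.sin(theta) + beta * math.cos(theta)
    return d, q
```

For a balanced sinusoidal set, the d-q outputs are constant, which is what makes the rotating frame convenient for control loops.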
373

Construção de um gerador de pulsos programável para experiência em RMNp / A programmable pulse generator for experiments in Pulsed Nuclear Magnetic Resonance

Paiva, Maria Stela Veludo de 19 December 1984 (has links)
This work describes the development and construction of an 8-channel pulse generator with an interface for external microcomputer control. The generator has 16 programmable steps defining pulse widths between 200 ns and 10 seconds, with 100 ns resolution. Automatic repetition of a selected step range is also provided. The microcomputer has full control of the pulse generator, including programming of memories and execution and interruption of pulse sequences. The generator was built to be used in Pulsed Nuclear Magnetic Resonance experiments, controlling the high-power RF gate and the detection system.
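With a 100 ns timing resolution, each programmable step's width can be stored as a tick count for a hardware counter. The sketch below illustrates that conversion; the names and range checks are hypothetical, not taken from the thesis hardware.

```python
# Hypothetical illustration of width-to-counter conversion for a pulse
# generator with 100 ns resolution and a 200 ns .. 10 s range.
RESOLUTION_NS = 100
MIN_NS, MAX_NS = 200, 10_000_000_000  # 200 ns .. 10 s

def width_to_ticks(width_ns):
    if not (MIN_NS <= width_ns <= MAX_NS):
        raise ValueError("width out of range")
    if width_ns % RESOLUTION_NS:
        raise ValueError("width must be a multiple of 100 ns")
    return width_ns // RESOLUTION_NS

# A hypothetical 3-step sequence: 200 ns, 1 us, 500 us.
program = [width_to_ticks(w) for w in (200, 1_000, 500_000)]
```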
374

The Hybrid Architecture Parallel Fast Fourier Transform (HAPFFT)

Palmer, Joseph M. 16 June 2005 (has links)
The FFT is an efficient algorithm for computing the DFT. It drastically reduces the cost of implementing the DFT on digital computing systems. Nevertheless, the FFT is still computationally intensive, and continued technological advances demand larger and faster implementations of this algorithm. Past attempts at producing high-performance, small FFT implementations have focused on custom hardware (ASICs and FPGAs). Ultimately, the most efficient have been single-chip, streaming I/O, pipelined FFT architectures. These architectures increase computational concurrency through hardware pipelining. Streaming I/O, pipelined FFT architectures are capable of accepting a single data sample every clock cycle. In principle, the maximum clock frequency of such a circuit is limited only by its critical delay path. The delay of the critical path may be decreased by the addition of pipeline registers; nevertheless, this solution gives diminishing returns. Thus, the streaming I/O, pipelined FFT is ultimately limited in the maximum performance it can provide. Attempts have been made to map the Parallel FFT algorithm to custom hardware. Yet the Parallel FFT was formulated and optimized to execute on a machine with multiple, identical processing elements. When executed on such a machine, the FFT incurs a large communication expense. Therefore, a direct mapping of the Parallel FFT to custom hardware results in a circuit with complex control and global data movement. This thesis proposes the Hybrid Architecture Parallel FFT (HAPFFT) as an alternative. The HAPFFT is an improved formulation for building Parallel FFT custom hardware modules. It provides improved performance, efficient resource utilization, and reduced design time. The HAPFFT is modular in nature. It includes a custom front-end parallel processing unit which produces intermediate results. The intermediate results are sent to multiple, independent FFT modules. 
These independent modules form the back-end of the HAPFFT and are generic, meaning that any preexisting FFT architecture may be used. With P back-end modules, a speedup of P is achieved in comparison to a single FFT module. Furthermore, the HAPFFT defines the front-end processing unit as a function of P. It hides the high communication costs typically seen in Parallel FFTs. Reductions in control complexity, memory demands, and logic resources are achieved. An extraordinary result of the HAPFFT formulation is sublinear area-time growth. This phenomenon is often also called superlinear speedup; the two are equivalent terms, and this thesis subsequently uses the term superlinear speedup to refer to the HAPFFT's outstanding speedup behavior. A further benefit of the HAPFFT formulation is reduced design time: because the HAPFFT defines only the front-end module, and because the back-end parallel modules may be composed of any preexisting FFT modules, the total design time for a HAPFFT is greatly reduced.
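The idea of a front-end feeding P independent FFT back-ends can be illustrated with the classic decimation-in-time split of the DFT — a textbook identity, not the HAPFFT formulation itself: X[k] = sum over p of exp(-2j*pi*p*k/N) * X_p[k mod N/P], where X_p is the (N/P)-point DFT of every P-th sample starting at offset p.

```python
import cmath

def dft(x):
    """Naive N^2 DFT, used here as the reference and as the back-end kernel."""
    n = len(x)
    return [sum(x[m] * cmath.exp(-2j * cmath.pi * m * k / n) for m in range(n))
            for k in range(n)]

def parallel_dft(x, p_modules):
    """Split an N-point DFT into P independent (N/P)-point DFTs plus a
    twiddle-factor recombination stage (decimation in time)."""
    n = len(x)
    assert n % p_modules == 0
    # "Back-end": P independent small DFTs that could run on separate modules.
    subs = [dft(x[p::p_modules]) for p in range(p_modules)]
    # Recombination with twiddle factors.
    return [sum(cmath.exp(-2j * cmath.pi * p * k / n) * subs[p][k % (n // p_modules)]
                for p in range(p_modules))
            for k in range(n)]
```

In software the recombination cost is explicit; the HAPFFT's contribution, per the abstract, is a hardware front-end formulation that hides this communication cost.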
375

Deteção coerente de sinais acústicos para localização robusta de veículos subaquáticos / Coherent detection of acoustic signals for robust localization of underwater vehicles

Alves, Miguel Antenor Anjos Soares January 2013 (has links)
Integrated master's thesis. Electrical and Computer Engineering, Telecommunications major. Faculdade de Engenharia, Universidade do Porto, 2013.
376

ASSESSMENT OF DISAGGREGATING THE SDN CONTROL PLANE

Adib Rastegarnia (7879706) 20 November 2019 (has links)
Current SDN controllers have been designed based on a monolithic approach that integrates all services and applications into a single, huge program. The monolithic design of SDN controllers restricts programmers who build management applications to the specific programming interfaces and services that a given SDN controller provides, making application development dependent on the controller and thereby restricting the portability of management applications across controllers. Furthermore, the monolithic approach means an SDN controller must be recompiled whenever a change is made, and does not provide an easy way to add new functionality or scale to handle large networks. To overcome the weaknesses inherent in the monolithic approach, the next generation of SDN controllers must use a distributed, microservice architecture that disaggregates the control plane by dividing the monolithic controller into a set of cooperative microservices. Disaggregation allows a programmer to choose a programming language that is appropriate for each microservice. In this dissertation, we describe steps taken towards disaggregating the SDN control plane, consider potential ways to achieve the goal, and discuss the advantages and disadvantages of each. We propose a distributed architecture that disaggregates controller software into a small controller core and a set of cooperative microservices. In addition, we present a software-defined network programming framework called Umbrella that provides a set of abstractions that programmers can use for writing SDN management applications independently of the northbound (NB) APIs that SDN controllers provide. Finally, we present an intent-based network programming framework called OSDF that provides a high-level, policy-based API for programming network devices using SDN.
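To make the notion of an intent-based, controller-independent abstraction concrete, here is a purely hypothetical sketch of what an "intent" and its compilation into per-switch rules might look like. This is not Umbrella's or OSDF's actual API; every name here is invented for illustration.

```python
# Hypothetical sketch of an intent abstraction decoupled from any
# controller's NB API. Names and rule format are invented.
from dataclasses import dataclass

@dataclass(frozen=True)
class ConnectivityIntent:
    src_host: str
    dst_host: str
    allow: bool = True

def compile_intent(intent, path):
    """Translate one intent into per-switch match/action rules along a path."""
    action = "forward" if intent.allow else "drop"
    return [{"switch": sw,
             "match": {"src": intent.src_host, "dst": intent.dst_host},
             "action": action}
            for sw in path]

rules = compile_intent(ConnectivityIntent("h1", "h2"), ["s1", "s2"])
```

The point of the abstraction is that the same intent could be compiled to different controllers' rule formats by swapping the back-end of `compile_intent`.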
377

Semantic-aware Stealthy Control Logic Infection Attack

Kalle, Sushma 06 August 2018 (has links)
In this thesis work we present CLIK, a new, automated, remote attack on the control logic of a programmable logic controller (PLC) in industrial control systems. The CLIK attack automatically modifies the control logic running in a remote target PLC to disrupt a physical process. We implement the CLIK attack on a real PLC. The attack is initiated by subverting the security measures that protect the control logic in a PLC. We found a critical (zero-day) vulnerability, which allows the attacker to overwrite the password hash in the PLC during the authentication process. Next, CLIK retrieves and decompiles the original logic, injects a malicious logic into it, and then transfers the infected logic back to the PLC. To hide the infection, we propose a virtual PLC that engages the software: when the software requests the control logic, the virtual PLC intercepts the request and responds with the original (uninfected) control logic.
378

Modifications to a Cavity Ringdown Spectrometer to Improve Data Acquisition Rates

Bostrom, Gregory Alan 04 March 2015 (has links)
Cavity ringdown spectroscopy (CRDS) makes use of light retention in an optical cavity to enhance the sensitivity to absorption or extinction of light from a sample inside the cavity. When light entering the cavity is stopped, the output is an exponential decay with a decay constant that can be used to determine the quantity of the analyte if the extinction or absorption coefficient is known. The precision of CRDS depends on the rate at which the system acquires and processes ringdowns, assuming randomly distributed errors. We have demonstrated a CRDS system with a ringdown acquisition rate of 1.5 kHz, extendable to a maximum of 3.5 kHz, using new techniques that significantly changed the way in which the ringdowns are both initiated and processed. On the initiation side, we combined a custom high-resolution laser controller with a linear optical feedback configuration and a novel optical technique for initiating a ringdown. Our optical injection "unlock" method switches the laser off-resonance, while allowing it to return to resonance immediately after the unlock is terminated, permitting another ringdown (on the same cavity resonance mode). This part of the system had a demonstrated ringdown initiation rate of 3.5 kHz. To take advantage of this rate, we developed an optimized, cost-effective FPGA-based data acquisition and processing system for CRDS, capable of determining decay constants at a maximum rate of 4.4 kHz, by modifying a commercial ADC-FPGA evaluation board and programming it to apply a discrete Fourier transform-based algorithm for determining decay constants. The entire system shows promise, with a demonstrated ability to determine gas concentrations for H2O with a measured concentration accuracy of ±3.3%. The system achieved an absorption coefficient precision of 0.1% (95% confidence interval). 
It also exhibited a linear response for varying H2O concentrations, a 2.2% variation (1σ) for repeated measurements at the same H2O concentration, and a corresponding precision of 0.6% (standard error of the mean). The absorption coefficient limit of detection was determined to be 1.6 x 10^-8 cm^-1 (root mean square of the baseline residual). Proposed modifications to our prototype system offer the promise of more substantial gains in both precision and limit of detection. The system components developed here for faster ringdown acquisition and processing have broader applications for CRDS in atmospheric science and other fields that need fast response systems operating at high precision.
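The thesis extracts decay constants with a DFT-based algorithm on the FPGA; as a simpler software illustration of the same quantity, here is a least-squares fit of the decay rate k = 1/tau from the logarithm of a noise-free ringdown signal. This is a stand-in for the thesis's algorithm, not a reimplementation of it.

```python
import math

def decay_constant(samples, dt):
    """Fit y = A*exp(-k*t) by linear regression of ln(y) against t; returns k."""
    n = len(samples)
    ts = [i * dt for i in range(n)]
    ys = [math.log(s) for s in samples]
    t_mean = sum(ts) / n
    y_mean = sum(ys) / n
    slope = (sum((t - t_mean) * (y - y_mean) for t, y in zip(ts, ys))
             / sum((t - t_mean) ** 2 for t in ts))
    return -slope  # slope of ln(y) is -k

# Example: a ringdown with tau = 50 us sampled at 10 MHz (hypothetical numbers).
k_true = 1.0 / 50e-6
sig = [math.exp(-k_true * i * 1e-7) for i in range(500)]
k_est = decay_constant(sig, 1e-7)
```

On real, noisy data the log transform distorts the error distribution at low amplitudes, which is one reason hardware implementations favor DFT-based estimators.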
379

Logic design using programmable logic devices

Nguyen, Loc Bao 01 January 1988 (has links)
The Programmable Logic Devices, PLDs, have caused a major impact in logic design of digital systems in this decade. For instance, a twenty-pin PLD device can replace from three hundred to six hundred Transistor-Transistor Logic gates, which people have designed with since the 1960s. Therefore, by using PLD devices, designers can squeeze in more features, reduce chip counts, reduce power consumption, and enhance the reliability of digital systems. This thesis covers the most important aspects of logic design using PLD devices: Logic Minimization and State Assignment. In addition, the thesis also covers a seldom-used but very useful design style, Self-Synchronized Circuits. The thesis introduces a new method to minimize Two-Level Boolean Functions using Graph Coloring Algorithms, and the result is very encouraging. The raw speed of the coloring algorithms is as fast as Espresso, the industry-standard minimizer from Berkeley, and the solution is equally good. The thesis also introduces a rule-based state assignment method which gives equal or better solutions than STASH (an Intel automatic CAD tool) by as much as twenty percent. One of the problems with Self-Synchronized circuits is that implementing the circuit takes many extra components. The thesis shows how it can be designed using PLD devices and also suggests the idea of a Clock Chip to reduce the chip count and make the design style more attractive.
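Graph coloring is the subroutine underlying the minimization method described above. A minimal greedy coloring sketch follows — a generic highest-degree-first heuristic, not the thesis's algorithm; in the minimization setting, vertices might represent implicants with edges between incompatible ones, so each color class can be merged.

```python
def greedy_coloring(adjacency):
    """Greedy coloring, highest-degree vertices first.
    adjacency: dict vertex -> set of neighbours. Returns dict vertex -> color."""
    colors = {}
    for v in sorted(adjacency, key=lambda v: -len(adjacency[v])):
        used = {colors[u] for u in adjacency[v] if u in colors}
        colors[v] = next(c for c in range(len(adjacency)) if c not in used)
    return colors

# A 4-cycle is 2-colorable, and the greedy heuristic finds that here.
g = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
coloring = greedy_coloring(g)
```

Greedy coloring is not optimal in general; the quality of the solution depends heavily on the vertex ordering, which is where heuristics like the thesis's can improve on the baseline.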
380

A New Approach to the Decomposition of Incompletely Specified Functions Based on Graph Coloring and Local Transformation and Its Application to FPGA Mapping

Wan, Wei 08 May 1992 (has links)
The thesis presents a new approach to the decomposition of incompletely specified functions and its application to FPGA (Field Programmable Gate Array) mapping. Five methods: Variable Partitioning, Graph Coloring, Bond Set Encoding, CLB Reusing and Local Transformation are developed in order to efficiently perform decomposition and FPGA (Lookup-Table based FPGA) mapping. 1) Variable Partitioning is a high-quality heuristic method used to find the "best" partitions, avoiding the very time-consuming testing of all possible decomposition charts, which is impractical when there are many input variables in the input function. 2) Graph Coloring is another high-quality heuristic, used to perform the quasi-optimum don't-care assignment, making it possible for the program to accept incompletely specified functions and perform a quasi-optimum assignment to the unspecified part of the function. 3) The Bond Set Encoding algorithm is used to simplify the decomposed blocks during the process of decomposition. 4) The CLB Reusing algorithm is used to reduce the number of CLBs used in the final mapped circuit. 5) The Local Transformation concept is introduced to transform nondecomposable functions into decomposable ones, thus making it possible to apply the decomposition method to FPGA mapping. All the above methods are incorporated into a program named TRADE, which performs global optimization over the input functions, while most of the existing methods recursively perform local optimization over some kinds of network-like graphs, and few of them can handle incompletely specified functions. Cube calculus is used in the TRADE program; the operations are global and very fast. A short description of the TRADE program and the evaluation of the results are provided at the end of the thesis. For many benchmarks the TRADE program gives better results than any program published in the literature.
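The decomposition chart mentioned above can be sketched directly: f(X) decomposes as F(g(B), free variables) when the chart — bound-set variables indexing columns, free-set variables indexing rows — has at most 2^k distinct columns for k intermediate signals (Ashenhurst's column-multiplicity test). The code below is illustrative only, not the TRADE program, and handles completely specified functions.

```python
from itertools import product

def column_multiplicity(f, bound_vars, free_vars):
    """Count distinct columns of the decomposition chart of f.
    f: callable taking a dict {var: 0/1} over bound_vars + free_vars."""
    columns = set()
    for b in product([0, 1], repeat=len(bound_vars)):
        col = tuple(f({**dict(zip(bound_vars, b)), **dict(zip(free_vars, fr))})
                    for fr in product([0, 1], repeat=len(free_vars)))
        columns.add(col)
    return len(columns)

# f = a XOR b XOR c with bound set {a, b}: only 2 distinct columns, so
# f = F(g(a, b), c) with a single intermediate signal g.
f = lambda v: v["a"] ^ v["b"] ^ v["c"]
mult = column_multiplicity(f, ["a", "b"], ["c"])
```

Don't-care entries make this test harder: unspecified chart cells may be assigned either value, and choosing the assignment that minimizes column multiplicity is exactly where graph-coloring heuristics like the thesis's come in.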
