351

HE-MT6D: A Network Security Processor with Hardware Engine for Moving Target IPv6 Defense (MT6D) over 1 Gbps IEEE 802.3 Ethernet

Sagisi, Joseph Lozano 28 July 2017 (has links)
Traditional static network addressing allows attackers the incredible advantage of taking time to plan and execute attacks against a network. To counter, Moving Target IPv6 Defense (MT6D) provides a network host obfuscation technique that dynamically obscures network and transport layer addresses. Software-driven implementations have posed many challenges, namely constant code maintenance to remain compliant with all library and kernel dependencies, less than optimal throughput, and the requirement for dedicated general-purpose hardware. This thesis presents the Network Security Processor and Hardware Engine for MT6D (HE-MT6D) to overcome these challenges. HE-MT6D is a soft-core Intellectual Property (IP) block developed in full Register Transfer Level (RTL) and is the first hardware-oriented design of MT6D. Major contributions of HE-MT6D include the complete separation of the data and control planes, the development of a nonlinear Complex Instruction Set Computer (CISC) Network Security Processor for in-flight packet modification, a specialized Packet Assembly language, a configurable and parallelized memory search through a tag-based Hybrid Content Addressable Memory (HCAM) L1 write-through cache, a full RTL Network Time Protocol version 4 hardware module, and a modular crypto engine. HE-MT6D supports multiple nodes and provides a 1,025% throughput increase over the earlier C-based MT6D at 863 Mbps with full encapsulation and decapsulation, and it matches bare-wire throughput for all other traffic. The HE-MT6D IP block can be configured as an independent physical gateway device, built as an embedded Application Specific Integrated Circuit (ASIC), or serve as a System on Chip (SoC) integrated submodule. / Master of Science / Traditional static network addressing allows attackers the incredible advantage of taking time to plan and execute attacks against a network. One approach to counter this is dynamic addressing through Moving Target Defense, which the Department of Homeland Security Cyber Security Division (CSD) designated as one of the fourteen primary Technical Topic Areas for securing federal networks and the larger Internet. A specific application for Internet Protocol version 6 (IPv6) networks is Moving Target IPv6 Defense (MT6D), which provides tunneling and dynamic cryptographic network address translation, where new addresses are cryptographically generated every few seconds. This thesis presents a Network Security Processor and Hardware Engine for MT6D (HE-MT6D). HE-MT6D is the first hardware-oriented implementation of MT6D developed in full Register Transfer Level (RTL) logic and provides a 1,025% performance increase over the earlier C-based MT6D at 863 Mbps full-duplex throughput. It also supports multiple nodes. The HE-MT6D Intellectual Property (IP) block is modular for maximum deployment flexibility: it can be configured as an independent physical gateway device, built as an embedded Application Specific Integrated Circuit (ASIC), or serve as a System on Chip (SoC) integrated submodule.
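As an illustration of the address-hopping idea behind MT6D (not HE-MT6D's RTL implementation), the minimal sketch below derives a rotating 64-bit interface identifier by hashing a host identifier, a shared secret, and the current time window; the hash choice, field ordering, and rotation period are assumptions for illustration only.

```python
import hashlib
import struct
import time

ROTATION_INTERVAL_S = 10  # assumed rotation period; MT6D rotates every few seconds

def mt6d_interface_id(host_iid: bytes, shared_key: bytes, now=None) -> bytes:
    """Derive an obscured 64-bit interface identifier for the current time window.

    Follows the general MT6D scheme: hash(host IID || shared secret || time window),
    truncated to 64 bits. Hash function and field ordering here are illustrative.
    """
    now = time.time() if now is None else now
    window = int(now // ROTATION_INTERVAL_S)           # quantize time so both endpoints agree
    digest = hashlib.sha256(host_iid + shared_key + struct.pack(">Q", window)).digest()
    return digest[:8]                                   # low 64 bits become the new IID

# Both tunnel endpoints derive the same ephemeral address within a window.
iid = mt6d_interface_id(b"\x02\x1a\x2b\xff\xfe\x3c\x4d\x5e", b"pre-shared-secret")
print(iid.hex())
```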
352

Securing Software Intellectual Property on Commodity and Legacy Embedded Systems

Gora, Michael Arthur 25 June 2010 (has links)
The proliferation of embedded systems into nearly every aspect of modern infrastructure and society has seen their deployment in such diverse roles as monitoring the power grid and processing commercial payments. Software intellectual property (SWIP) is a critical component of these increasingly complex systems and represents a significant investment for its developers. However, deeply immersed in their environment, embedded systems are difficult to secure. As a result, developers want to ensure that their SWIP is protected from being reverse engineered or stolen by unauthorized parties. Many techniques have been proposed to address the issue of SWIP protection for embedded systems. These range from secure memory components to complete shifts in processor architectures. While powerful, these approaches often require the development of systems from the ground up or the application of specialized and often expensive hardware components. As a result, they are poorly suited to address the security concerns of legacy embedded systems or systems based on commodity components. This work explores the protection of SWIP on heavily constrained, legacy, and commodity embedded systems. We accomplish this by evaluating a generic embedded system to identify its security concerns in the context of SWIP protection. The evaluation is applied to determine the limitations of a software-only approach on a real-world legacy embedded system that lacks any specialized security hardware features. We improve upon this system by developing a prototype system using only commodity components. Finally, we propose a Portable Embedded Software Intellectual Property Security (PESIPS) system that can easily be deployed as a framework on both legacy and commodity systems. / Master of Science
353

Implementation of a Trusted I/O Processor on a Nascent SoC-FPGA Based Flight Controller for Unmanned Aerial Systems

Kini, Akshatha Jagannath 26 March 2018 (has links)
Unmanned Aerial Systems (UAS) are aircraft without a human pilot on board. They are composed of a ground-based autonomous or human-operated control system, an unmanned aerial vehicle (UAV), and a communication, command and control (C3) link between the two systems. UAS are widely used in military warfare, wildfire mapping, aerial photography, etc., primarily to collect and process large amounts of data. While they are highly efficient in data collection and processing, they are susceptible to software espionage and data manipulation. This research aims to provide a novel solution to enhance the security of the flight controller, thereby contributing to a secure and robust UAS. The proposed solution begins by introducing a new technology in the domain of flight controllers and how it can be leveraged to overcome the limitations of current flight controllers. The idea is to decouple the applications running on the flight controller from the task of data validation. The authenticity of all external data processed by the flight controller can be checked without any additional overhead on the flight controller, allowing it to focus on more important tasks. To achieve this, we introduce an adjacent controller whose sole purpose is to verify the integrity of the sensor data. The controller is designed using minimal resources from the reconfigurable logic of an FPGA. The secondary I/O processor is implemented on an incipient Zynq SoC-based flight controller. The soft-core microprocessor running on the configurable logic of the FPGA serves as a first-level check on the sensor data coming into the flight controller, thereby forming a trusted boundary layer. / Master of Science / A UAV is an aerial vehicle that does not carry a human operator, uses aerodynamic forces to lift the vehicle, and is controlled either autonomously by an onboard computer or remotely by a pilot on the ground. The software application running on the onboard computer is known as the flight controller. It is responsible for the guidance and trajectory-tracking capabilities of the aircraft. A UAV consists of various sensors to measure parameters such as orientation, acceleration, air speed, altitude, etc. A sensor is a device that detects or measures a physical property. The flight controller continuously monitors the sensor values to guide the UAV along a specific trajectory. Successful maneuvering of a UAV depends entirely on the data from sensors, thus making it vulnerable to sensor data attacks using fabricated physical stimuli. These kinds of attacks can trigger an undesired response or mask the occurrence of actual events. In this thesis, we propose a novel approach in which we perform a first-level check on the incoming sensor data using dedicated low-cost hardware designed to protect data integrity. The data is then forwarded to the flight controller for further access and processing.
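The sketch below is a minimal software illustration of the kind of first-level check such a trusted boundary layer could apply before forwarding samples to the flight controller; the sensor names and the range and rate-of-change limits are hypothetical, and the thesis implements the check in FPGA soft-core hardware, not in Python.

```python
from dataclasses import dataclass

@dataclass
class Limits:
    lo: float          # minimum plausible value
    hi: float          # maximum plausible value
    max_delta: float   # maximum plausible change between consecutive samples

# Hypothetical per-sensor limits; real bounds would come from the airframe and sensor datasheets.
LIMITS = {
    "baro_alt_m": Limits(-100.0, 9000.0, 50.0),
    "accel_z_g": Limits(-16.0, 16.0, 8.0),
}

_last = {}  # last accepted sample per sensor

def first_level_check(sensor: str, value: float) -> bool:
    """Return True if the sample passes the range and rate-of-change checks."""
    lim = LIMITS[sensor]
    prev = _last.get(sensor)
    ok = lim.lo <= value <= lim.hi and (prev is None or abs(value - prev) <= lim.max_delta)
    if ok:
        _last[sensor] = value   # only trusted samples update the history
    return ok
```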
354

Cost Beneficial Solution for High Rate Data Processing

Mirchandani, Chandru, Fisher, David, Ghuman, Parminder 10 1900 (has links)
International Telemetering Conference Proceedings / October 25-28, 1999 / Riviera Hotel and Convention Center, Las Vegas, Nevada / GSFC, in keeping with the tenets of NASA, has been aggressively investigating new technologies for spacecraft and ground communications and processing. The application of these technologies, together with standardized telemetry formats, makes it possible to build systems that provide high performance at low cost in a short development cycle. The High Rate Telemetry Acquisition System (HRTAS) Prototype is one such effort that has validated Goddard's push towards faster, better and cheaper. The HRTAS system architecture is based on the Peripheral Component Interconnect (PCI) bus and VLSI Application-Specific Integrated Circuits (ASICs). These ASICs perform frame synchronization, bit-transition density decoding, cyclic redundancy code (CRC) error checking, Reed-Solomon error detection/correction, data unit sorting, packet extraction, annotation, and other service processing. This processing is performed at sustained rates of up to and greater than 150 Mbps using a high-end workstation running a standard UNIX OS (DEC 4100 with DEC UNIX or better). ASICs are also used for the digital reception of Intermediate Frequency (IF) telemetry as well as the spacecraft command interface for commands and data simulations. To improve the efficiency of the back-end processing, the level zero processing sorting element is being developed. This will provide a complete hardware solution to extracting and sorting source data units and making these available in separate files on a remote disk system. Research is ongoing to extend this development to higher levels of the science data processing pipeline. Because level 1 and higher processing is instrument dependent, an acceleration approach utilizing ASICs is not feasible. The advent of field programmable gate array (FPGA) based computing, referred to as adaptive or reconfigurable computing, provides processing performance close to ASIC levels while maintaining much of the programmability of traditional microprocessor-based systems. This adaptive computing paradigm has been successfully demonstrated and its cost performance validated, making it a viable technology for the level one and higher processing element of the HRTAS. Higher levels of processing are defined as the extraction of useful information from source telemetry data. This information has to be made available to the science data user in a very short period of time. This paper describes this low-cost solution for high-rate data processing at level one and higher processing levels. The paper further discusses the cost-benefit of this technology in terms of cost, schedule, reliability and performance.
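As one concrete example of the frame-level service processing mentioned above, the sketch below performs a CRC-16/CCITT check of the kind commonly applied to telemetry transfer frames; it illustrates the operation only and is not the HRTAS ASIC logic.

```python
def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    """Bitwise CRC-16/CCITT-FALSE (polynomial 0x1021, initial value 0xFFFF),
    as commonly used for telemetry transfer-frame error control."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def frame_ok(frame: bytes) -> bool:
    """Accept a frame whose trailing 16-bit field matches the CRC of the body."""
    body, trailer = frame[:-2], frame[-2:]
    return crc16_ccitt(body) == int.from_bytes(trailer, "big")

# Example: append the CRC to a frame body and verify it.
body = bytes(range(32))
frame = body + crc16_ccitt(body).to_bytes(2, "big")
print(frame_ok(frame))   # True
```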
355

Implementation of decision trees for embedded systems

Badr, Bashar January 2014 (has links)
This research work develops real-time incremental learning decision tree solutions suitable for real-time embedded systems by virtue of having both a defined memory requirement and an upper bound on the computation time per training vector. In addition, the work provides embedded systems with the capability of rapid processing and training on streamed data problems, and adopts electronic hardware solutions to improve the performance of the developed algorithm. Two novel decision tree approaches, namely the Multi-Dimensional Frequency Table (MDFT) and the Hashed Frequency Table Decision Tree (HFTDT), represent the core of this research work. Both methods successfully incorporate a frequency table technique to produce a complete decision tree. The MDFT and HFTDT learning methods were designed with the ability to generate application-specific code for both training and classification purposes according to the requirements of the targeted application. The MDFT allows the memory architecture to be specified statically before learning takes place, within a deterministic execution time. The HFTDT method is a development of the MDFT in which a reduction in memory requirements is achieved, again within a deterministic execution time. The HFTDT achieved low memory usage compared to existing decision tree methods, and hardware acceleration improved execution time by up to 10 times.
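To illustrate why a frequency-table representation yields a bounded memory footprint and a fixed cost per training vector, here is a minimal sketch of a binned class-count classifier; it is not the MDFT or HFTDT algorithm itself, and the uniform binning scheme is an assumption made for the example.

```python
from collections import defaultdict, Counter

class FrequencyTableClassifier:
    """Minimal frequency-table learner: each training vector is binned and the
    per-bin class counts are updated in constant time, so memory is bounded by
    (number of occupied bins x classes) and training cost per vector is fixed."""

    def __init__(self, bins_per_feature: int, lo: float, hi: float):
        self.bins = bins_per_feature
        self.lo, self.hi = lo, hi
        self.table = defaultdict(Counter)   # key: tuple of bin indices -> class counts

    def _bin(self, x: float) -> int:
        step = (self.hi - self.lo) / self.bins
        return min(self.bins - 1, max(0, int((x - self.lo) / step)))

    def train(self, vector, label):
        key = tuple(self._bin(x) for x in vector)
        self.table[key][label] += 1         # O(1) incremental update per training vector

    def classify(self, vector):
        key = tuple(self._bin(x) for x in vector)
        counts = self.table.get(key)
        return counts.most_common(1)[0][0] if counts else None

clf = FrequencyTableClassifier(bins_per_feature=8, lo=0.0, hi=1.0)
clf.train([0.1, 0.9], "A")
clf.train([0.12, 0.88], "A")
print(clf.classify([0.11, 0.91]))   # "A"
```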
356

Towards the development of a reliable reconfigurable real-time operating system on FPGAs

Hong, Chuan January 2013 (has links)
In the last two decades, Field Programmable Gate Arrays (FPGAs) have rapidly developed from simple "glue logic" to a powerful platform capable of implementing a System on Chip (SoC). Modern FPGAs achieve not only high performance compared with General Purpose Processors (GPPs), thanks to hardware parallelism and dedication, but also better programming flexibility than Application Specific Integrated Circuits (ASICs). Moreover, the hardware programming flexibility of FPGAs is further harnessed for both performance and manipulability, which makes Dynamic Partial Reconfiguration (DPR) possible. DPR allows a part or parts of a circuit to be reconfigured at run-time without interrupting the rest of the chip's operation. As a result, hardware resources can be exploited more efficiently, since chip resources can be reused by swapping hardware tasks in and out of the chip in a time-multiplexed fashion. In addition, DPR improves fault tolerance against transient errors and permanent damage; for example, Single Event Upsets (SEUs) can be mitigated by reconfiguring the FPGA to avoid error accumulation. Furthermore, power and heat can be reduced by removing finished or idle tasks from the chip. For all these reasons, DPR has significantly promoted Reconfigurable Computing (RC) and has become a very active research topic. However, since hardware integration is increasing at an exponential rate, and applications are becoming more complex with the growth of user demands, high-level application design and low-level hardware implementation are increasingly separated and layered. As a consequence, users can obtain little advantage from DPR without the support of system-level middleware. To bridge the gap between the high-level application and the low-level hardware implementation, this thesis presents important contributions towards a Reliable, Reconfigurable and Real-Time Operating System (R3TOS), which facilitates user exploitation of DPR from the application level by managing the complex hardware in the background. In R3TOS, hardware tasks behave just like software tasks: they can be created, scheduled, and mapped to different computing resources on the fly. The novel contributions of this work are: 1) a novel implementation of an efficient task scheduler and allocator; 2) the implementation of a novel real-time scheduling algorithm (FAEDF) and two efficacious allocating algorithms (EAC and EVC), which schedule tasks in real time and circumvent emerging faults while maintaining more compact empty areas; 3) the design and implementation of a fault-tolerant microprocessor by harnessing existing FPGA resources, such as Error Correction Code (ECC) and configuration primitives; 4) a novel symmetric multiprocessing (SMP)-based architecture that supports a shared-memory programming interface; 5) two demonstrations of the integrated system, including a) the K-Nearest Neighbour classifier, a non-parametric classification algorithm widely used in various fields of data mining, and b) pairwise sequence alignment, namely the Smith-Waterman algorithm, used for identifying similarities between two biological sequences. R3TOS gives considerably higher flexibility to support scalable multi-user, multitasking applications, whereby resources can be dynamically managed with respect to user requirements and hardware availability. As a result, not only can hardware resources be used more efficiently, but system performance can also be significantly increased.
Results show that scheduling and allocation efficiency have been improved by up to 2x, and overall system performance is further improved by ~2.5x. Future work includes the development of a Network on Chip (NoC), which is expected to further increase communication throughput, as well as the standardization and automation of our system design, which will be carried out in line with the enablement of other high-level synthesis tools, to allow application developers to benefit from the system more efficiently.
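As a simplified illustration of real-time hardware-task scheduling on a reconfigurable fabric, the sketch below performs a plain earliest-deadline-first selection with a naive area-fit check; the thesis's FAEDF, EAC and EVC algorithms additionally manage empty-area compaction and fault avoidance, which are not modeled here.

```python
import heapq

def edf_select(ready_tasks, free_area):
    """Pick the ready task with the earliest deadline that fits in the free
    reconfigurable area. This is plain EDF plus an area check -- a simplification
    of FAEDF/EAC/EVC, which also shape empty regions and route around faults."""
    heap = [(t["deadline"], i, t) for i, t in enumerate(ready_tasks) if t["area"] <= free_area]
    heapq.heapify(heap)
    return heap[0][2] if heap else None

task = edf_select(
    [{"name": "fir", "deadline": 12, "area": 300},
     {"name": "fft", "deadline": 8, "area": 900}],
    free_area=1000,
)
print(task["name"] if task else "no task fits")   # -> "fft"
```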
357

Modeling, exploration and estimation of power consumption in dynamically reconfigurable heterogeneous architectures

Bonamy, Robin 12 July 2013 (has links)
The use of reconfigurable accelerators when designing heterogeneous systems-on-chip has the potential to increase performance and reduce energy consumption. Indeed, these accelerators are commonly used alongside one (or more) processor(s) to offload intensive computations and data-stream processing. The concept of dynamic reconfiguration, supported by some FPGA vendors, enables much more flexible systems, including the ability to sequence the execution of computation blocks in time on the same silicon area, thereby reducing resource requirements. However, dynamic reconfiguration is not without impact on overall system performance, and it is hard to estimate the effect of configuration decisions on energy consumption. The main objective of this thesis is to provide an exploration methodology for assessing the impact of the implementation choices for the tasks of an application on a system-on-chip containing a dynamically reconfigurable resource, in order to optimize energy consumption or execution time. To this end, we have established power consumption models of reconfigurable components, particularly FPGAs, which assist the designer.
Using a measurement methodology on a Virtex-5, we first show that it is possible to generate hardware accelerators of various sizes with diverse execution times and energy consumptions. Then, in order to quantify the implementation costs of these accelerators, we build three power models of dynamic and partial reconfiguration. Finally, from the defined models and the generated accelerators, we develop an algorithm for exploring the implementation and allocation possibilities for a complete system. Based on a high-level modeling platform, it analyzes the implementation costs of the tasks and their execution on the various available resources (processor or reconfigurable region). The solutions offering the best performance with respect to the design constraints are retained.
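A coarse first-order view of the reconfiguration cost being modeled is sketched below: energy as reconfiguration power times the time needed to stream the partial bitstream through the configuration port. The bandwidth and power figures are placeholders, not the calibrated Virtex-5 models of the thesis.

```python
def reconfiguration_energy(bitstream_bytes: int,
                           icap_bandwidth_mbps: float = 400.0,   # illustrative; the ICAP's theoretical peak is higher
                           reconfig_power_mw: float = 150.0) -> float:
    """First-order model: E = P_reconf * t_reconf, with t_reconf = size / bandwidth.
    The thesis derives finer-grained models from measurements; the numbers used
    here are placeholders, not measured values."""
    t_s = (bitstream_bytes * 8) / (icap_bandwidth_mbps * 1e6)   # reconfiguration time in seconds
    return (reconfig_power_mw / 1e3) * t_s                       # energy in joules

print(reconfiguration_energy(200_000))   # energy to load a ~200 kB partial bitstream
```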
358

Fast Code Exploration for Pipeline Processing in FPGA Accelerators

Rosa, Leandro de Souza 31 May 2019 (has links)
The increasing demand for energy-efficient computing has encouraged the use of Field-Programmable Gate Arrays to create hardware accelerators for large and complex codes. However, implementing such accelerators involves two complex decisions. The first is deciding which code snippet is the best candidate for an accelerator, and the second is how to implement that accelerator. When both decisions are considered concomitantly, the problem becomes more complicated, since the code snippet implementation affects the code snippet choice, creating a combined design space to be explored. As such, fast design space exploration of accelerator implementations is crucial to enable the exploration of different code snippets. However, such design space exploration suffers from several time-consuming tasks during the compilation and evaluation steps, making it unviable for snippet exploration. In this work, we focus on the efficient implementation of pipelined hardware accelerators and present our contributions to speeding up pipeline creation and design space exploration. For loop pipelining, the proposed approaches achieve up to a 100× speed-up compared to state-of-the-art methods, saving 164 hours in a full design space exploration with less than 1% impact on the quality of the final results. For design space exploration, the proposed methods achieve up to a 9.5× speed-up while keeping the impact on result quality below 1%.
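For context, the basic latency relation that loop pipelining trades off is sketched below; this is the textbook high-level-synthesis formula, not the thesis's scheduling algorithms.

```python
def pipelined_loop_cycles(trip_count: int, depth: int, ii: int) -> int:
    """Classic loop-pipelining latency: the first iteration takes `depth` cycles
    and each subsequent iteration starts `ii` (initiation interval) cycles later."""
    return depth + ii * (trip_count - 1)

# A 1000-iteration loop with a 12-stage pipeline:
print(pipelined_loop_cycles(1000, depth=12, ii=1))   # 1011 cycles when fully pipelined
print(pipelined_loop_cycles(1000, depth=12, ii=4))   # 4008 cycles if a dependence forces II = 4
```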
359

Analysis of the use of redundancy in circuits generated by high-level synthesis for SRAM-programmed FPGAs under transient faults

Santos, André Flores dos January 2017 (has links)
This work studies and analyzes the susceptibility to radiation effects of circuit designs generated by a High-Level Synthesis tool for Field Programmable Gate Arrays (FPGAs), that is, programmable circuits and Systems-on-Chip (SoCs). Through an emulation-based fault injector using the ICAP (Internal Configuration Access Port) located inside the FPGA, it is possible to inject single or accumulated Single Event Upset (SEU) faults, defined as disturbances that can affect the correct functioning of the device through the inversion of a bit by a charged particle. SEUs fall within the classification of Single Event Effects (SEEs), which can occur when high-energy particles from space and from the sun (cosmic and solar rays) penetrate the Earth's atmosphere and collide with nitrogen and oxygen atoms, producing showers of secondary particles, most of them neutrons. In this context, in addition to analyzing the susceptibility of designs generated by a High-Level Synthesis tool, it is relevant to study redundancy techniques such as Triple Modular Redundancy (TMR) for error detection and correction, and to compare against unprotected designs to assess reliability. The results show that in single-fault injection mode the TMR-protected designs prove to be effective. Under accumulated fault injection, the multi-channel design showed better reliability than the unprotected and single-channel-redundancy designs, tolerating a greater number of faults before its operation was compromised.
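For reference, the TMR principle evaluated here reduces to a bitwise majority vote across three redundant copies, as in the sketch below; this illustrates the voting logic only, not the thesis's generated circuits or injection setup.

```python
def tmr_vote(a: int, b: int, c: int) -> int:
    """Bitwise majority voter: each output bit is the value reported by at least
    two of the three redundant copies, masking a single upset in any one copy."""
    return (a & b) | (a & c) | (b & c)

# A bit flip in one copy is outvoted by the other two:
golden = 0b1011_0010
assert tmr_vote(golden, golden ^ 0b0000_1000, golden) == golden
print("single upset masked")
```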
360

Analysis and implementation of control structures in an FPGA device applied to a Buck converter

Lucas, Ricardo 08 May 2015 (has links)
This work discusses several control techniques, comparing their performance and robustness when applied to a Buck converter. It starts with the PID (Proportional, Integral, Derivative) controller, widely explored and mastered in industry, which is adopted in this work as the comparison reference for the other techniques developed. Another strategy presented here is the GANLPID (Gaussian Adaptive Nonlinear PID), a nonlinear technique whose gains vary with the error according to a Gaussian function. Pole placement control, in its basic form, has no integral term, so this term must be included to minimize the steady-state error. The main performance metrics analyzed are settling time and overshoot. All techniques are implemented in FPGA (Field Programmable Gate Array) devices, which have some advantages over microcontrollers and DSPs (Digital Signal Processors) because they can execute tasks in parallel, allowing faster execution of the algorithm. The chosen control techniques were simulated using the DSP Builder tool and compiled directly into HDL (hardware description language) code. Simulation and experimental results are presented and discussed in order to validate the proposed designs.
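As a software illustration of one discrete control step with a Gaussian-scheduled gain (one plausible reading of GANLPID; the exact law in the thesis and its FPGA realization may differ), consider the sketch below, where all gains, the sigma, and the sample period are placeholders.

```python
import math

class GaussianScheduledPID:
    """Discrete PID whose proportional gain is modulated by a Gaussian of the error.
    Illustrative only: the gain law is an assumption, not the thesis's GANLPID."""

    def __init__(self, kp, ki, kd, kp_boost=0.5, sigma=0.2, dt=1e-4):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.kp_boost, self.sigma, self.dt = kp_boost, sigma, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        # Gain grows smoothly with |error| and relaxes back to kp near the setpoint.
        kp_eff = self.kp * (1.0 + self.kp_boost * (1.0 - math.exp(-(error / self.sigma) ** 2)))
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return kp_eff * error + self.ki * self.integral + self.kd * derivative

pid = GaussianScheduledPID(kp=0.8, ki=120.0, kd=1e-4)
duty = pid.step(setpoint=5.0, measurement=4.7)   # duty-cycle command for the Buck converter
print(duty)
```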
