31

Segurança cibernética com hardware reconfigurável em subestações de energia elétrica utilizando o padrão IEC 61850 / Cyber security with reconfigurable hardware in power substations using the IEC 61850 standard

Juliano Coêlho Miranda 20 September 2016 (has links)
With digital technology, communication networks have become fundamental to the proper operation of power substations. Created in 2002, the IEC 61850 standard seeks to harmonize the diversity of equipment and manufacturers and to enable data integration so that the maximum benefit can be extracted. In this context, the GOOSE (Generic Object Oriented Substation Event) message, defined by the IEC 61850 standard, is a multicast datagram designed to operate on the local or wide-area network that interconnects power substations. In the wide-area setting, traffic crossing the substation boundary should pass through a firewall, yet current firewall technology cannot distinguish genuine GOOSE messages from those originated by an attack, and it affects their transfer time, which on the communication link must not exceed 5 ms. The objective of this research is therefore to develop a firewall in reconfigurable hardware, on the NetFPGA platform, so that the increase in the propagation time of a Type 1A (Trip) GOOSE message crossing the security device does not exceed 20% of the total time allocated to the communication link. Because the NetFPGA acts as an accelerator built on reconfigurable FPGA (Field Programmable Gate Array) hardware and drives Gigabit links, it is possible to examine GOOSE traffic and establish initial authorization or denial rules by inspecting the fields of the ISO/IEC 8802-3 frame. The increase in the maximum propagation time of a 1518-byte message was 77.39 μs, with an average of 77.38 μs. An encryption algorithm and an authentication algorithm were also tested, and forged messages failed to cross the firewall. At the current stage of the research, it is concluded that the NetFPGA firewall, as part of the set of hardware and software resources intended to secure a network, is able to reject forged GOOSE messages and to protect the active devices of a substation without additional delays exceeding 1 ms.
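Not from the thesis itself, but as a rough software analogy of the kind of rule such a firewall applies, the sketch below accepts or drops raw Ethernet frames based on ISO/IEC 8802-3 header fields. It assumes the EtherType registered for GOOSE (0x88B8) and a made-up whitelist of publisher MAC addresses; real GOOSE frames may additionally carry an 802.1Q VLAN tag, which is ignored here.

```python
import struct

GOOSE_ETHERTYPE = 0x88B8          # EtherType assigned to GOOSE in IEC 61850-8-1
ALLOWED_SOURCES = {               # hypothetical whitelist of publisher MACs
    bytes.fromhex("01ab23cd45ef"),
}

def filter_goose(frame: bytes) -> bool:
    """Return True if the raw Ethernet frame may pass, False to drop it."""
    if len(frame) < 14:                       # too short to hold an 802.3 header
        return False
    dst, src = frame[0:6], frame[6:12]
    (ethertype,) = struct.unpack("!H", frame[12:14])
    if ethertype != GOOSE_ETHERTYPE:          # only GOOSE frames are of interest here
        return False
    if src not in ALLOWED_SOURCES:            # unknown publisher: treat as forged
        return False
    return True

# Example: a forged frame from an unknown source MAC is rejected.
forged = (bytes.fromhex("01ff00ff00ff" "deadbeef0001")
          + struct.pack("!H", GOOSE_ETHERTYPE) + b"\x00" * 50)
print(filter_goose(forged))  # False
```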
32

Modélisation et réalisation de la couche physique du système de communication numérique sans fil, WiMax, sur du matériel reconfigurable / Modelling and implementation of the physical layer of the WiMAX wireless digital communication system on reconfigurable hardware

Ezzeddine, Mazen January 2009 (has links)
Thesis digitized by the Division de la gestion de documents et des archives of the Université de Montréal
33

Hardware Architecture of an XML/XPath Broker/Router for Content-Based Publish/Subscribe Data Dissemination Systems

El-Hassan, Fadi 25 February 2014 (has links)
The dissemination of various types of data faces ongoing challenges with the growing need to access manifold information. Since interest in content is what drives data networks, new technologies and ideas attempt to cope with these challenges by developing content-based rather than address-based architectures. The publish/subscribe paradigm is a promising approach to content-based data dissemination, especially since it provides total decoupling between publishers and subscribers. However, in content-based publish/subscribe systems, subscriptions are expressive and information is often delivered based on the matched expressive content, which may not do much to alleviate the considerable performance challenges involved. This dissertation explores a hardware solution for disseminating data in content-based publish/subscribe systems. The solution consists of an efficient hardware architecture for an XML/XPath broker that can route information based on content either to other XML/XPath brokers or to end users. A network of such brokers represents an overlay structure for XML content-based publish/subscribe data dissemination systems. Each broker can simultaneously process many XPath subscriptions, efficiently parse XML publications, and subsequently forward the notifications that result from high-performance matching processes. At the core of the broker architecture lies an XML parser that uses a novel Skeleton CAM-Based XML Parsing (SCBXP) technique, in addition to an XPath processor and a high-performance matching engine. Moreover, the broker employs effective mechanisms for content-based routing, so that subscriptions, publications, and notifications are routed through the network based on content. The inherent reconfigurability of the broker’s hardware allows the system architecture to reside in any FPGA device of moderate logic density, and such a system-on-chip architecture is upgradable should future hardware add-ons be needed. The current architecture is nevertheless mature and could effectively be implemented on an ASIC device. Finally, this thesis presents and analyzes the experiments conducted on an FPGA prototype implementation of the proposed broker/router. The experiments cover tests of the SCBXP alone and of two development phases of the whole broker. The corresponding results indicate the high performance that the parsing, storing, matching, and routing processes involved can achieve.
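The broker described above is a hardware design; purely as a software analogy of the content-based matching it performs, the following sketch matches a set of simple XPath subscriptions against an incoming XML publication and returns which subscribers to notify. The subscription set and the document are made-up examples, not data from the dissertation.

```python
import xml.etree.ElementTree as ET

# Hypothetical subscriptions: subscriber id -> simple XPath expression.
subscriptions = {
    "sub-1": ".//book[@genre='fantasy']",
    "sub-2": ".//journal/title",
    "sub-3": ".//book[@genre='history']",
}

publication = """<library>
  <book genre="fantasy"><title>The Hobbit</title></book>
  <journal><title>FPGA Letters</title></journal>
</library>"""

def match_subscriptions(xml_text, subs):
    """Return the ids of all subscriptions whose XPath matches the publication."""
    root = ET.fromstring(xml_text)
    return [sid for sid, xpath in subs.items() if root.findall(xpath)]

print(match_subscriptions(publication, subscriptions))  # ['sub-1', 'sub-2']
```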
34

Sistema de hardware reconfigurável para navegação visual de veículos autônomos / Reconfigurable hardware system for autonomous vehicles visual navigation

Mauricio Acconcia Dias 04 October 2016 (has links)
The number of vehicular accidents has increased worldwide, and the leading cause is human failure. The design of autonomous vehicles has gained prominence in research groups around the world, and one of its main goals is to provide a means of avoiding such accidents. The navigation systems used in these vehicles must be extremely reliable and robust, with real-time performance, which requires the design of specific solutions to the problem. Because of their low cost and the richness of the information they provide, cameras are among the sensors most used for autonomous navigation (and in driver-assistance systems). Information about the environment is extracted by processing the images captured by the camera and is then used by the navigation system. The main goal of this thesis is the design, implementation, testing, and optimization, in hardware, of an ensemble of Artificial Neural Networks used in computer vision systems for autonomous vehicles (specifically the model proposed and developed at the Mobile Robotics Laboratory (LRM)), seeking to accelerate its execution time so that it can be used as an image classifier in the autonomous vehicles developed by the LRM research group.
The main contributions of this work are: an FPGA-based hardware design that performs fast signal propagation through a neural network ensemble with low energy consumption compared to a general-purpose computer; practical results evaluating the trade-off between precision, hardware consumption, and timing for this class of applications using fixed-point representation; an automatic generator of the look-up tables used to replace the exact computation of activation functions in MLP networks; a hardware/software co-design that achieved relevant results for the implementation of the backpropagation training algorithm; and, considering all the results, a structure that enables a wide range of future work on hardware for robotics, since it implements an image-processing system in hardware.
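As an illustration of the look-up-table idea mentioned above (not the thesis' actual generator), the sketch below builds a small table approximating the logistic sigmoid and evaluates it for a fixed-point input. The Q4.12 format, table size, and input range are assumptions chosen for the example, standing in for the precision/hardware-cost trade-off discussed in the abstract.

```python
import math

FRAC_BITS = 12                 # assumed Q4.12 fixed-point format
SCALE = 1 << FRAC_BITS
LUT_SIZE = 256                 # number of table entries
X_MIN, X_MAX = -8.0, 8.0       # sigmoid is nearly saturated outside this range

# Precompute the table once; in hardware this would sit in block RAM.
STEP = (X_MAX - X_MIN) / (LUT_SIZE - 1)
SIGMOID_LUT = [round(1.0 / (1.0 + math.exp(-(X_MIN + i * STEP))) * SCALE)
               for i in range(LUT_SIZE)]

def sigmoid_fixed(x_fixed: int) -> int:
    """Approximate sigmoid for a Q4.12 input, returning a Q4.12 output."""
    x = x_fixed / SCALE
    x = min(max(x, X_MIN), X_MAX)             # clamp to the table's range
    index = round((x - X_MIN) / STEP)
    return SIGMOID_LUT[index]

x = round(1.5 * SCALE)                         # 1.5 in Q4.12
approx = sigmoid_fixed(x) / SCALE
exact = 1.0 / (1.0 + math.exp(-1.5))
print(f"approx={approx:.4f} exact={exact:.4f}")  # error bounded by the table step
```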
35

ChipCflow - validação e implementação do modelo de partição e protocolo de comunicação no grafo a fluxo de dados dinâmico / ChipCflow - validation and implementation of the partition model and communication protocol in the dynamic dataflow graph

Francisco de Souza Júnior 24 January 2011 (has links)
The ChipCflow tool has been developed over the last four years, initially as a dynamic dataflow architecture design in reconfigurable hardware and now as a compilation tool. It aims to execute algorithms using the dataflow architecture model combined with the concept of partially reconfigurable devices. Its main feature is to accelerate the execution time of programs written in High Level Languages, particularly their most processing-intensive parts. This is done by implementing those parts of the code directly in reconfigurable hardware, using FPGA (Field Programmable Gate Array) technology, and leveraging the natural parallelism of the dataflow model and the characteristics of partially reconfigurable hardware. In this work, the goal is a proof of concept of the partition process and of the communication protocol between the partitions defined from a Data Flow Graph (DFG), for direct execution in reconfigurable hardware using Dynamic Partial Reconfiguration. This required designing a partition mechanism and a communication protocol between the partitions, since Dynamic Partial Reconfiguration introduces technological limitations not found in more traditional reconfigurable hardware. The mechanism developed proved partially adequate for the proof of concept, meaning that DFGs can be executed on the partially reconfigurable platform. However, the reconfiguration time adds a large overhead to the execution time, which made the initial proposal of using Dynamic Partial Reconfiguration to reduce the tag-matching time of dynamic DFGs unfeasible.
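The tag-matching step whose latency ultimately limited the partial-reconfiguration approach can be pictured in software roughly as follows. This is an illustrative sketch of the dynamic dataflow model in general, not ChipCflow's implementation: a two-input operator fires only when tokens carrying the same tag have arrived on both of its inputs.

```python
from collections import defaultdict

class MatchingUnit:
    """Toy matching store of a dynamic dataflow machine: pairs tokens by tag."""
    def __init__(self, op):
        self.op = op                      # the two-input operator to fire
        self.waiting = defaultdict(dict)  # tag -> {port: value}

    def receive(self, tag, port, value):
        slot = self.waiting[tag]
        slot[port] = value
        if len(slot) == 2:                # both operands present: fire and clear
            del self.waiting[tag]
            return tag, self.op(slot[0], slot[1])
        return None                       # still waiting for the partner token

adder = MatchingUnit(lambda a, b: a + b)
print(adder.receive(tag=7, port=0, value=10))   # None, partner not yet arrived
print(adder.receive(tag=3, port=1, value=5))    # None, different loop iteration
print(adder.receive(tag=7, port=1, value=32))   # (7, 42): tag 7 now fires
```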
36

Aplicação de controlador evolutivo a pêndulo servo acionado / Application of an evolutionary controller to a servo-driven pendulum

Delai, Andre Luiz 12 August 2018 (has links)
Advisor: Jose Raimundo de Oliveira / Master's dissertation - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação / Abstract: The use of evolutionary techniques employing genetic algorithms to obtain designs of analogue and digital electronic circuits is already a reality and has been studied for some years. In this context, the objective of this work was to implement in reconfigurable hardware a controller for a non-linear damped pendulum, obtained through an Evolvable Hardware approach. Technologies such as Field Programmable Gate Arrays (FPGAs) and the VHSIC Hardware Description Language (VHDL), among other resources, were used to develop a physical model based on the theoretical (simulated) model. / Master's degree in Electrical Engineering (Computer Engineering)
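Purely to make the evolutionary idea concrete (the actual controller was evolved for FPGA hardware described in VHDL, not in Python), here is a minimal genetic algorithm that evolves the gains of a toy controller toward a hypothetical target; the target vector stands in for the fitness that a pendulum simulation would provide.

```python
import random

random.seed(1)
TARGET = [2.0, -0.5, 1.2]          # stands in for the unknown "ideal" controller gains

def fitness(genome):
    # In the real setting this would come from simulating the damped pendulum;
    # here it is simply the negative squared error against the hidden target.
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def evolve(pop_size=30, generations=60, mutation=0.2):
    population = [[random.uniform(-3, 3) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, 3)               # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g + random.gauss(0, mutation) for g in child]   # mutation
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

print([round(g, 2) for g in evolve()])   # approaches [2.0, -0.5, 1.2]
```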
37

Embedded wavelet image reconstruction in parallel computation hardware

Guevara Escobedo, Jorge January 2016 (has links)
In this thesis an algorithm is demonstrated for the reconstruction of hard-field tomography images from localized block areas, obtained in parallel within a multiresolution framework. The block areas are subsequently tiled to assemble the full-size image. Given its property of preserving compact support after ramp filtering, the wavelet transform has to date received much attention as a promising route to radiation-dose reduction in medical imaging through the reconstruction of essentially localized regions. In this work, that characteristic is exploited with the aim of reducing the time and complexity of the standard reconstruction algorithm. Reconstructing block images independently, with a geometry that completely covers the reconstructed frame as a single output image, allows the individual blocks to be reconstructed in parallel and the approach to be evaluated on a multiprocessor reconfigurable hardware system (i.e. an FPGA). Projection data from a simulated Radon Transform (RT) were obtained at 180 evenly spaced angles. In order to define every relevant block area within the sinogram, the forward RT was applied to template phantoms representing block frames. Reconstruction was then performed in a domain extending beyond the block-frame limits, to allow calibration overlaps when fitting adjacent block images. The 256-by-256 Shepp-Logan phantom was used to test both the parallel multiresolution and the parallel block reconstruction generalisations. It is shown that the reconstruction of a single block image in a 3-scale multiresolution framework runs around 48 times faster than the standard methodology. Assuming a parallel implementation, the reconstruction time of a single tile should be very closely related to that of the full-size, full-resolution image.
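As a software analogy of the tiling step described above (illustrative only; the thesis implements it on FPGA hardware, and NumPy is assumed here for the array handling), independently reconstructed block images with a small calibration overlap can be cropped and stitched back into the full-size frame as follows.

```python
import numpy as np

FULL, BLOCK, OVERLAP = 256, 64, 8       # assumed frame size, block size, overlap

def reconstruct_block(top, left):
    """Stand-in for the per-block reconstruction: returns a padded block image."""
    size = BLOCK + 2 * OVERLAP
    return np.full((size, size), fill_value=top * 1000 + left, dtype=np.float32)

def tile_blocks():
    image = np.zeros((FULL, FULL), dtype=np.float32)
    for top in range(0, FULL, BLOCK):
        for left in range(0, FULL, BLOCK):
            padded = reconstruct_block(top, left)              # could run in parallel
            core = padded[OVERLAP:OVERLAP + BLOCK, OVERLAP:OVERLAP + BLOCK]
            image[top:top + BLOCK, left:left + BLOCK] = core   # discard the overlap
    return image

print(tile_blocks().shape)   # (256, 256)
```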
38

Hardware Architecture of an XML/XPath Broker/Router for Content-Based Publish/Subscribe Data Dissemination Systems

El-Hassan, Fadi January 2014 (has links)
The dissemination of various types of data faces ongoing challenges with the growing need to access manifold information. Since interest in content is what drives data networks, new technologies and ideas attempt to cope with these challenges by developing content-based rather than address-based architectures. The publish/subscribe paradigm is a promising approach to content-based data dissemination, especially since it provides total decoupling between publishers and subscribers. However, in content-based publish/subscribe systems, subscriptions are expressive and information is often delivered based on the matched expressive content, which may not do much to alleviate the considerable performance challenges involved. This dissertation explores a hardware solution for disseminating data in content-based publish/subscribe systems. The solution consists of an efficient hardware architecture for an XML/XPath broker that can route information based on content either to other XML/XPath brokers or to end users. A network of such brokers represents an overlay structure for XML content-based publish/subscribe data dissemination systems. Each broker can simultaneously process many XPath subscriptions, efficiently parse XML publications, and subsequently forward the notifications that result from high-performance matching processes. At the core of the broker architecture lies an XML parser that uses a novel Skeleton CAM-Based XML Parsing (SCBXP) technique, in addition to an XPath processor and a high-performance matching engine. Moreover, the broker employs effective mechanisms for content-based routing, so that subscriptions, publications, and notifications are routed through the network based on content. The inherent reconfigurability of the broker’s hardware allows the system architecture to reside in any FPGA device of moderate logic density, and such a system-on-chip architecture is upgradable should future hardware add-ons be needed. The current architecture is nevertheless mature and could effectively be implemented on an ASIC device. Finally, this thesis presents and analyzes the experiments conducted on an FPGA prototype implementation of the proposed broker/router. The experiments cover tests of the SCBXP alone and of two development phases of the whole broker. The corresponding results indicate the high performance that the parsing, storing, matching, and routing processes involved can achieve.
39

Rekonfigurierbare Hardwarekomponenten im Kontext von Cloud-Architekturen / Reconfigurable hardware components in the context of cloud architectures

Knodel, Oliver 30 August 2018 (has links)
Reconfigurable circuits (Field Programmable Gate Arrays, FPGAs) have been a key technology for accelerating applications for many years. The world's leading data-center operators and providers of cloud infrastructures, namely Microsoft, IBM, and soon Amazon, use FPGAs on their application platforms, both to increase computing performance and to reduce power consumption and thus operating costs. The central question of this work is how FPGAs can be virtualized for flexible and dynamic deployment in cloud infrastructures. In addition to the virtualization of FPGA resources, service models for the provision of virtualized FPGAs (vFPGAs) are developed and embedded into a resource management system in order to evaluate the behaviour of the cloud system. The objective is not to build a cloud architecture as such, but to examine selected aspects of cloud systems with regard to the integration of reconfigurable hardware. Classical system virtualization is transferred to the reconfigurable hardware in order to abstract from the physical FPGA, its size, and its architecture, so that several independent, concurrently operating user cores can share the same physical device and be migrated to other compute nodes. The FPGAs are not only virtualized but, unlike in many other projects, the entire system and the application are taken into account; as a result, the vFPGAs are used dynamically and adaptively at different locations and topologies in the cloud architecture, depending on the user's requirements. A prototypical implementation of such a cloud system was developed and evaluated in several projects. The virtualization on state-of-the-art FPGAs shows that homogeneous regions can be established, which is necessary because of certain fixed structures on the devices; migration of a partial FPGA context is likewise possible with current FPGA architectures, but comes at a cost in hardware resources. A simulation was also carried out to determine whether virtualization and migration can contribute to a more efficient utilization of resources in a larger cloud system without impairing the service level agreements. In summary, both the developed virtualization and the possibility of migration make it possible to reduce the amount of necessary resources, and thus energy, in a modern cloud system.
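To illustrate the service-model idea in software terms (a made-up sketch, not the prototype from the dissertation), the following is a trivial resource manager that places vFPGA requests onto physical boards and "migrates" a vFPGA by freeing its slot on one board and re-allocating it on another.

```python
class FpgaCloud:
    """Toy resource manager: each physical FPGA hosts a fixed number of vFPGA slots."""
    def __init__(self, boards, slots_per_board=4):
        self.free = {board: slots_per_board for board in boards}
        self.placement = {}                      # vfpga id -> board

    def allocate(self, vfpga_id):
        board = max(self.free, key=self.free.get)        # pick the least-loaded board
        if self.free[board] == 0:
            raise RuntimeError("no capacity left")
        self.free[board] -= 1
        self.placement[vfpga_id] = board
        return board

    def migrate(self, vfpga_id, target):
        # Real migration must also move the partial FPGA context; here we only
        # update the bookkeeping, which is the part a scheduler reasons about.
        source = self.placement[vfpga_id]
        if self.free[target] == 0:
            raise RuntimeError("target board is full")
        self.free[source] += 1
        self.free[target] -= 1
        self.placement[vfpga_id] = target

cloud = FpgaCloud(["node-a", "node-b"])
print(cloud.allocate("vfpga-1"))        # whichever board currently has more free slots
cloud.migrate("vfpga-1", "node-b")
print(cloud.placement)                  # {'vfpga-1': 'node-b'}
```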
40

Accelerated Large-Scale Multiple Sequence Alignment with Reconfigurable Computing

Lloyd, G Scott 20 May 2011 (has links) (PDF)
Multiple Sequence Alignment (MSA) is a fundamental analysis method used in bioinformatics and many comparative genomic applications. The time to compute an optimal MSA grows exponentially with respect to the number of sequences. Consequently, producing timely results on large problems requires more efficient algorithms and the use of parallel computing resources. Reconfigurable computing hardware provides one approach to the acceleration of biological sequence alignment. Other acceleration methods typically encounter scaling problems that arise from the overhead of inter-process communication and from the lack of parallelism. Reconfigurable computing allows a greater scale of parallelism with many custom processing elements that have a low-overhead interconnect. The proposed parallel algorithms and architecture accelerate the most computationally demanding portions of MSA. An overall speedup of up to 150 has been demonstrated on a large data set when compared to a single processor. The reduced runtime for MSA allows researchers to solve the larger problems that confront biologists today.
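The computational core that dominates MSA runtime is dynamic-programming alignment of sequence pairs or profiles. As a point of reference only (this is not the accelerated kernel from the thesis), a minimal Needleman-Wunsch global alignment score looks like this, with illustrative match/mismatch/gap weights.

```python
def nw_score(a, b, match=2, mismatch=-1, gap=-2):
    """Needleman-Wunsch global alignment score between sequences a and b."""
    rows, cols = len(a) + 1, len(b) + 1
    score = [[0] * cols for _ in range(rows)]
    for i in range(1, rows):
        score[i][0] = i * gap                 # leading gaps in b
    for j in range(1, cols):
        score[0][j] = j * gap                 # leading gaps in a
    for i in range(1, rows):
        for j in range(1, cols):
            diag = score[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            score[i][j] = max(diag,
                              score[i-1][j] + gap,    # gap in b
                              score[i][j-1] + gap)    # gap in a
    return score[-1][-1]

print(nw_score("GATTACA", "GCATGCU"))   # score for a small example pair
```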
