181

A HIGH SPEED REAL TIME SPACE QUALIFIED TIME DIVISION MULTIPLEXED DATA FORMATTER

Schwartz, Paul D., Hersman, Christopher B. October 1994
International Telemetering Conference Proceedings / October 17-20, 1994 / Town & Country Hotel and Conference Center, San Diego, California / A system to generate a contiguous high-speed time division multiplexed (TDM) spacecraft downlink data stream has been developed. The 25 Mbps downlink data stream contains high-rate real-time imager data, intermediate-rate subsystem processor data, and low-rate spacecraft housekeeping data. Imager data is transferred directly into the appropriate TDM downlink data window using control signals and clocks generated in the central data formatter and distributed to the data sources. Cable and electronics delays inherent in this process can amount to several clock periods, while the uncertainty and variation in those delays (e.g. temperature effects) can exceed the clock period. Unique (patent pending) electronic circuitry has been included in the data formatter to sense the total data-gathering delay for each high-speed data source and use the results to control series programmable delay elements, equalizing the delays from all sources and permitting the formation of a contiguous output data stream.
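
A minimal C sketch of the delay-equalization idea described above, assuming hypothetical hooks measure_delay_cycles() and set_programmable_delay() that stand in for the patent-pending sensing circuitry and the series delay elements; the actual formatter implements this in hardware.

```c
#define NUM_SOURCES 4

extern unsigned measure_delay_cycles(int source);   /* assumed HW hook */
extern void set_programmable_delay(int source, unsigned cycles);

void equalize_source_delays(void) {
    unsigned delay[NUM_SOURCES];
    unsigned max = 0;

    /* Sense the total data-gathering delay of each high-speed source. */
    for (int s = 0; s < NUM_SOURCES; s++) {
        delay[s] = measure_delay_cycles(s);
        if (delay[s] > max)
            max = delay[s];
    }
    /* Pad every faster path up to the slowest one so all sources align. */
    for (int s = 0; s < NUM_SOURCES; s++)
        set_programmable_delay(s, max - delay[s]);
}
```
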
182

TECHNOLOGY EVOLUTION AND INNOVATION IN SPACECRAFT COMMUNICATIONS

Voudouris, Thanos October 1998
International Telemetering Conference Proceedings / October 26-29, 1998 / Town & Country Resort Hotel and Convention Center, San Diego, California / This paper discusses the evolution of ground satellite communication systems and the efforts made by the Goddard Space Flight Center's (GSFC) Advanced Architectures and Automation (AAA) branch, Code 588, to bring satellite scientific data to the user's desktop. Primarily, it describes the next-generation desktop system, its architecture, and its processing capabilities, which provide autonomous high-performance telemetry acquisition at the lowest possible cost. It also discusses the planning processes and the applicability of new technologies to communication needs in the next century. The paper is presented in terms simple enough for readers who are not familiar with current space programs to follow.
183

AN INTEGRATED GPS TRACKING AND TELEMETRY SYSTEM FOR RANGE APPLICATIONS

Wells, Lawrence L., Montgomery, Robert S. October 1998
International Telemetering Conference Proceedings / October 26-29, 1998 / Town & Country Resort Hotel and Convention Center, San Diego, California / This paper describes a highly integrated, low-cost GPS translator/telemetry system for use on missile platforms: the Digital GPS Translator (DGT), a component of the Translated GPS Range System (TGRS). The DGT provides translated GPS tracking combined with transmission of telemetry at rates of up to 10 Mbps, with optional encoding and/or encryption. This integrated approach to GPS tracking and telemetry yields a significant reduction in hardware size and cost compared to a segregated approach. The TGRS includes a ground-processing unit that provides real-time processing of both the GPS and telemetry portions of the DGT transmission.
184

APPLICATION OF A STORAGE AREA NETWORK IN A HIGH-RATE TELEMETRY GROUND STATION

Ozkan, Siragan, Zimmerman, Bryan, Williams, Mike, DeShong, Monica October 2001
International Telemetering Conference Proceedings / October 22-25, 2001 / Riviera Hotel and Convention Center, Las Vegas, Nevada / A traditional Front-end Processor (FEP) with local RAID storage can limit the operational throughput of a high-rate telemetry ground station. The front-end processor must perform pass processing (frame synchronization, decoding, routing, and storage), post-pass processing (level-zero processing), and tape archiving. A typical fifteen-minute high-rate satellite pass can produce data files of 10 to 20 GB. The FEP may require up to 2 hours to perform the post-pass processing and tape archiving functions for files of this size; during this time, it is not available to support real-time pass operations. Honeywell faced this problem in the design of the data management system for the DataLynx™ ground stations. Avtec Systems, Inc. and Honeywell worked together to develop a data management system that utilizes a Storage Area Network (SAN) in conjunction with multiple High-speed Front-end Processors (HSFEPs) for pass processing (PFEP), multiple HSFEPs for post-pass processing (PPFEP), and a dedicated tape archive server. A SAN consists of a high-capacity, high-bandwidth shared RAID that is connected to multiple nodes using 1 Gbps Fibre Channel interfaces. All of the HSFEPs, as well as the tape archive server, have direct access to the shared RAID via a Fibre Channel network. The SAN supports simultaneous read/write transfers between the nodes at aggregate rates of up to 120 Mbytes/sec. With the Storage Area Network approach, the high-speed front-end processors can quickly transfer the data captured during a pass to the shared RAID for post-processing and tape archiving, so that they are available to support another satellite pass. This paper discusses the architecture of the Storage Area Network and how it optimizes ground station data management in a high-rate environment.
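
A back-of-the-envelope check of the figures quoted in this abstract, written as a small C program; the pass length, file size, and SAN bandwidth come from the text, while the derived rates are illustrative arithmetic rather than measured results.

```c
#include <stdio.h>

/* Sanity-check the abstract's numbers: the sustained rate implied by a
 * 20 GB file captured in a 15-minute pass, and how quickly that file can
 * move over the SAN's 120 MB/s aggregate compared with the ~2 h post-pass
 * turnaround of the traditional FEP. */
int main(void) {
    double pass_s  = 15.0 * 60.0;     /* 15-minute pass            */
    double file_mb = 20.0 * 1000.0;   /* 20 GB capture, in MB      */
    double san_mbs = 120.0;           /* shared-RAID aggregate     */

    printf("implied capture rate: %.1f MB/s (~%.0f Mbps)\n",
           file_mb / pass_s, file_mb / pass_s * 8.0);
    printf("moving 20 GB at SAN rate: %.1f min (vs ~120 min on the FEP)\n",
           file_mb / san_mbs / 60.0);
    return 0;
}
```
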
185

Performance and Energy Efficient Building Blocks for Network-on-Chip Architectures

Vangal, Sriram R. January 2006
The ever-shrinking size of MOS transistors brings the promise of scalable Network-on-Chip (NoC) architectures containing hundreds of processing elements with on-chip communication, all integrated into a single die. Such a computational fabric will provide high levels of performance in an energy-efficient manner. To mitigate the emerging wire-delay problem and to address the need for substantial interconnect bandwidth, packet-switched routers are fast replacing shared buses and dedicated wires as the interconnect fabric of choice. With on-chip communication consuming a significant portion of the chip power and area budgets, there is a compelling need for compact, low-power routers. While applications dictate the choice of the compute core, the advent of multimedia applications, such as 3D graphics and signal processing, places stronger demands on self-contained, low-latency floating-point processors with increased throughput. Therefore, this work focuses on two key building blocks critical to the success of NoC design: high-performance, area- and energy-efficient router and floating-point processor architectures. This thesis first presents a six-port four-lane 57 GB/s non-blocking router core based on wormhole switching. The router features double-pumped crossbar channels and destination-aware channel drivers that dynamically configure themselves based on the current packet destination. This enables a 45% reduction in crossbar channel area, a 23% reduction in overall router area, up to a 3.8X reduction in peak channel power, and a 7.2% improvement in average channel power, with no performance penalty over a published design. In a 150nm six-metal CMOS process, the 12.2mm2 router contains 1.9 million transistors and operates at 1GHz at 1.2V. We next present a new pipelined single-precision floating-point multiply-accumulator core (FPMAC) featuring a single-cycle accumulate loop using base-32 and internal carry-save arithmetic with delayed addition techniques. Combined algorithmic, logic, and circuit techniques enable multiply-accumulates at speeds exceeding 3GHz with single-cycle throughput. Unlike existing FPMAC architectures, the design eliminates scheduling restrictions between consecutive FPMAC instructions. The optimizations allow the costly normalization step to be moved out of the critical accumulate loop and its logic to be conditionally powered down, using dynamic sleep transistors, during long accumulate operations, saving active and leakage power. In addition, an improved leading-zero anticipator (LZA) and overflow detection logic applicable to the carry-save format are presented. In a 90nm seven-metal dual-VT CMOS process, the 2mm2 custom design contains 230K transistors. The fully functional first silicon achieves 6.2 GFLOPS of performance while dissipating 1.2W at 3.1GHz from a 1.3V supply. It is clear that the realization of successful NoC designs requires well-balanced decisions at all levels: architecture, logic, circuit, and physical design. Our results from key building blocks demonstrate the feasibility of pushing the performance limits of compute cores and communication routers while keeping active and leakage power, and area, under control. / Report code: LiU-TEK-LIC-2006:36.
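
The single-cycle accumulate loop rests on carry-save arithmetic, in which the running total is held as separate sum and carry words so no carry has to propagate on each accumulation. The C sketch below shows the idea on plain integers; it is illustrative only, since the thesis applies the technique inside a base-32 floating-point datapath.

```c
#include <stdint.h>
#include <stdio.h>

/* Carry-save accumulation: each accumulate step is a 3:2 compression with
 * no carry propagation; the expensive carry-propagating add is deferred
 * until the final result is needed. */
typedef struct { uint64_t sum; uint64_t carry; } csa_t;

static void csa_accumulate(csa_t *acc, uint64_t x) {
    uint64_t s = acc->sum ^ acc->carry ^ x;            /* bitwise sum      */
    uint64_t c = (acc->sum & acc->carry) |
                 (acc->sum & x) | (acc->carry & x);    /* majority carry   */
    acc->sum   = s;
    acc->carry = c << 1;                               /* carries shift up */
}

static uint64_t csa_resolve(const csa_t *acc) {
    return acc->sum + acc->carry;  /* one full add at the very end */
}

int main(void) {
    csa_t acc = {0, 0};
    for (uint64_t i = 1; i <= 100; i++)
        csa_accumulate(&acc, i);
    printf("%llu\n", (unsigned long long)csa_resolve(&acc)); /* 5050 */
    return 0;
}
```
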
186

Resource Optimization of MPSoC for Industrial Use-cases

Kågesson, Filip, Cederbom, Simon January 2019
Today's embedded systems require more and more performance while still having to meet power constraints. Single-processor systems can deliver high performance, but this leads to high power consumption. One solution to this problem is to use a multiprocessor system instead, which can provide high performance and at the same time meet the power constraints, because it can run at a lower clock frequency than a comparable single-processor system. The focus of the project is to explore possibilities when developing new multiprocessor systems. The project compares asymmetric multiprocessing (AMP) systems and symmetric multiprocessing (SMP) systems in terms of task management and communication between the processors. A comparison is made between the Advanced High-performance Bus (AHB) interface and the Advanced eXtensible Interface (AXI), and the fixed-priority and round-robin arbitration algorithms are also compared. The project also contains a practical part, in which a demo was developed to show that inter-processor communication using exclusive access can be implemented. The theoretical part of the project results in comparisons that give a good overview of what to use when developing new Multiprocessor System-on-Chip (MPSoC) designs. The demo developed in this project failed to meet the requirement of having a fully functional spinlock; this problem can be solved in the future if new hardware is developed.
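
For readers unfamiliar with exclusive-access synchronization, the sketch below shows a minimal spinlock built on C11 atomics, which on ARM cores compile down to exclusive-access pairs (LDREX/STREX). It is a generic illustration of the concept, not the demo's actual code, whose exact mechanism the abstract does not describe.

```c
#include <stdatomic.h>

/* A generic spinlock: the atomic test-and-set maps to an exclusive
 * load/store pair on ARM, which is the functionality the demo exercised
 * over the shared bus. */
typedef struct { atomic_flag locked; } spinlock_t;

static spinlock_t lock = { ATOMIC_FLAG_INIT };

static void spin_lock(spinlock_t *l) {
    /* Spin until the atomic test-and-set observes the flag clear. */
    while (atomic_flag_test_and_set_explicit(&l->locked, memory_order_acquire))
        ;   /* busy-wait; a real port might yield or use WFE here */
}

static void spin_unlock(spinlock_t *l) {
    atomic_flag_clear_explicit(&l->locked, memory_order_release);
}
```
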
187

Projeto e desenvolvimento de uma arquitetura de baixo consumo de potência para microprocessadores. / Design and implementation of a low-power architecture for microcontrollers.

Morita, Augusto Ken 29 June 2015
This work presents the design and development of a low-power processor, in a simplified way, exploring microarchitecture techniques to achieve low power consumption. A logical development sequence is presented, starting from basic concepts and circuit structures and building on them to reach the complete microarchitecture of the processor. A new methodology for microcontroller padring creation, based on the reuse of information from previous projects, is presented. This methodology was developed for fast prototyping and reduces possible errors in the generation of the microcontroller padring code. The new microarchitecture is compared with three previously studied processors: the first an original synchronous version, the second an asynchronous version, and the third an improved version of the first model with register and circuit minimizations. Area and power consumption results are compared with the new proposed architecture, and results obtained with the new padring-generation methodology are also presented. The new model uses multiple buses, with access timing tuned for different internal blocks, to decrease the number of machine cycles needed per instruction. In addition, it presents macro-block circuit partitioning and circuit reuse to minimize the circuitry needed for implementation. Finally, the new implementation is compared with the three previous models; the performance gain obtained with these structures was 18%, which, converted into power consumption, represents a 13% saving relative to the best case among the processors compared. The technology used in the development of the processors was TSMC 250nm CMOS.
188

Single-phase laminar flow heat transfer from confined electron beam enhanced surfaces

Ferhati, Arben January 2015
The continuing demand for computational processing power, multi-functional devices, and component miniaturization has emphasised the need for thermal management systems able to maintain temperatures at safe operating conditions. The thermal management industry is constantly seeking new cutting-edge, efficient, cost-effective heat transfer enhancement technologies. The aim of this study is to utilize electron beam treatment to increase the heat transfer area of liquid-cooled plates and to evaluate the resulting performance experimentally. Considering the complexity of the technology, this thesis focuses on the design and production of electron beam enhanced test samples, the construction of the test facility, the testing procedure, and the evaluation of thermal and hydraulic characteristics. In particular, the research presented in this thesis contains a number of challenging technological developments, including: (1) an overview of the semiconductor industry, its cooling requirements, and the market for thermal management systems; (2) an integral literature review of pin-fin enhancement technology; (3) design and fabrication of the electron beam enhanced test samples; (4) upgrade and construction of the experimental test rig and development of the test procedure; (5) reduction and analysis of the experimental data to evaluate thermal and hydraulic performance. The experimental results show that electron beam treatment can improve the thermal efficiency of current untreated liquid-cooled plates by a factor of approximately three. The highest heat transfer rate was observed for sample S3; this is attributed to the irregularities of the enhanced structure, which increase the heat transfer area, improve mixing, and disturb the thermal and velocity boundary layers. For all three samples, the heat transfer enhancement was accompanied by an increase in pressure drop. The electron beam enhancement technique is a rapid, cost-effective process with zero material waste. It allows thermal management systems to be produced smaller and faster and reduces material usage, without compromising safety, labour cost, or the environment.
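
The data reduction mentioned in item (5) typically reduces measured power and temperatures to a heat transfer coefficient and a Nusselt number. The C sketch below shows the standard single-phase relations with invented placeholder values; these are not results from the experiments.

```c
#include <stdio.h>

/* Standard single-phase data reduction: h = q / (A * dT), Nu = h * D_h / k.
 * All numerical values are placeholders for illustration only. */
int main(void) {
    double q      = 150.0;    /* heater power, W (placeholder)            */
    double area   = 0.0025;   /* enhanced wetted area, m^2 (placeholder)  */
    double t_wall = 65.0;     /* wall temperature, degC (placeholder)     */
    double t_bulk = 25.0;     /* coolant bulk temperature, degC           */
    double d_h    = 0.004;    /* hydraulic diameter, m (placeholder)      */
    double k_f    = 0.6;      /* water thermal conductivity, W/(m K)      */

    double h  = q / (area * (t_wall - t_bulk));
    double nu = h * d_h / k_f;
    printf("h  = %.0f W/(m^2 K)\nNu = %.1f\n", h, nu);
    return 0;
}
```
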
189

The effects of compiler optimizations on embedded processor reliability

Lins, Filipe Maciel January 2017
The recent advances in embedded processors have increased compiler complexity and the use of heterogeneous resources, such as Field Programmable Gate Arrays (FPGA) and Graphics Processing Units (GPU), integrated with the processors. In addition, the use of Commercial off-the-shelf (COTS) devices instead of radiation-hardened chips in safety-critical applications has grown, because COTS parts can be more flexible and inexpensive, have a faster time-to-market, and consume less power. Even with these advantages, however, it is still necessary to guarantee high reliability in systems that use COTS devices for safety-critical applications, because they are susceptible to failures; in the case of real-time applications, timing requirements must also be respected. As a case study, this work uses the Zynq, a COTS device classified as an All Programmable System-on-Chip (APSoC) with an embedded ARM Cortex-A9 processor. This research investigates the impact of faults affecting the register file on the reliability of embedded processors. To this end, fault-injection and heavy-ion radiation experiments were performed, and an evaluation was made of how different levels of compiler optimization modify the usage and failure probability of the processor register file. Six representative benchmarks were selected, each compiled with three different levels of compiler optimization. Exhaustive fault-injection campaigns were performed to measure the Architectural Vulnerability Factor (AVF) of the registers for each code and configuration, identifying the registers most likely to generate Silent Data Corruption (SDC) or Single Event Functional Interruption (SEFI). The observed reliability variations were also correlated with register file utilization. Finally, two of the selected benchmarks, each compiled with two different levels of optimization, were irradiated with heavy ions. The results show that the best performance, the lowest register file usage, or the lowest AVF does not guarantee that an application will achieve the highest Mean Workload Between Failures (MWBF). For example, the best performance of the Matrix Multiplication (MxM) application is achieved at the highest optimization level, yet in the fault-injection experiments the highest reliability is obtained at the lowest optimization level, which has the lowest AVFs and the lowest register file usage. The results also show that the impact of the optimizations is strongly related to the executed algorithm and to how the compiler optimizes it.
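
As a sketch of how the AVF falls out of such a campaign: for each register, the AVF is the fraction of injected faults that produce a visible failure (SDC or SEFI). The C fragment below illustrates the bookkeeping with invented counts; it is not thesis data.

```c
#include <stdio.h>

/* Per-register fault-injection bookkeeping: AVF is the fraction of
 * injections that end in a visible failure (SDC or SEFI). */
enum outcome { MASKED, SDC, SEFI };

struct reg_stats {
    unsigned long injections;
    unsigned long sdc;
    unsigned long sefi;
};

static double avf(const struct reg_stats *r) {
    if (r->injections == 0)
        return 0.0;
    return (double)(r->sdc + r->sefi) / (double)r->injections;
}

int main(void) {
    /* Invented counts for one register of one benchmark/optimization pair. */
    struct reg_stats r4 = { .injections = 10000, .sdc = 310, .sefi = 42 };
    printf("AVF(r4) = %.4f\n", avf(&r4));   /* 0.0352 */
    return 0;
}
```
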
190

A platform to evaluate the fault sensitivity of superscalar processors

Tonetto, Rafael Billig January 2017
Aggressive transistor scaling, which has led to reductions in operating voltage, has provided enormous benefits in computational power while keeping energy consumption at an acceptable level. However, as feature sizes and voltages shrink, susceptibility to faults tends to increase, and evaluations under faults grow in importance. Superscalar processors, which dominate today's market, are a significant example of systems that benefit from these technological improvements and are more susceptible to errors. Alongside this, several fault-injection methods exist, fault injection being an efficient means of evaluating the resilience of these processors. However, traditional fault-injection methods, such as hardware-based techniques, require the processor to be physically implemented before tests can be conducted, and they do not provide reasonable levels of controllability. On the other hand, techniques based on software-implemented simulators offer high levels of controllability. Yet while high-level SW simulators (which are fast) can lead to an incomplete, or even misleading, evaluation of system resilience, since they do not model internal hardware components (such as pipeline registers), low-level SW simulators are extremely slow and are rarely available at RTL (Register-Transfer Level). Given this scenario, we propose a platform that fills the gap between the HW and SW approaches for evaluating faults in superscalar processors: it is fast, highly controllable, available in software, flexible, and, most importantly, models the processor at RTL. The tool was implemented on top of the platform used to generate The Berkeley Out-of-Order Machine (BOOM), a highly scalable and parameterizable superscalar processor. This property allowed us to experiment with three different architectures of the processor: single-, dual-, and quad-issue; by analyzing how fault resilience is influenced by the complexity of the different processors, we used them to validate our tool.
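
To make the injection mechanism concrete, the following C fragment sketches the core of a simulator-based injector: flip one randomly chosen bit of one randomly chosen register at a randomly chosen moment, then run to completion and classify the outcome. The register-file array is invented for illustration; the platform itself instruments BOOM's RTL model rather than a flat array.

```c
#include <stdint.h>
#include <stdlib.h>

/* Single-event-upset model for a simulator-based fault injector: XOR one
 * random bit of one random architectural register. The regfile symbol is
 * an assumed piece of simulator state, named here only for illustration. */
#define NUM_REGS 32

extern uint64_t regfile[NUM_REGS];   /* assumed simulator state */

void inject_single_bit_flip(void) {
    int reg = rand() % NUM_REGS;
    int bit = rand() % 64;
    regfile[reg] ^= (uint64_t)1 << bit;
    /* The campaign then runs the workload to completion and classifies
     * the result as masked, SDC, or a functional interruption. */
}
```
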
