161

A VERY LOW COST 150 MBPS DESKTOP CCSDS GATEWAY

Davis, Don, Bennett, Toby, Harris, Jonathan November 1995 (has links)
International Telemetering Conference Proceedings / October 30-November 02, 1995 / Riviera Hotel, Las Vegas, Nevada / The wide use of standard packet telemetry protocols based on the Consultative Committee for Space Data Systems (CCSDS) recommendations in future space science missions has created a large demand for low-cost ground CCSDS processing systems. Some of the National Aeronautics and Space Administration (NASA) missions using CCSDS telemetry include Small Explorer, Earth Observing System (EOS), Space Station, and Advanced Composition Explorer. For each mission, ground telemetry systems are typically used in a variety of applications including spacecraft development facilities, mission control centers, science data processing sites, tracking stations, launch support equipment, and compatibility test systems. The future deployment of EOS spacecraft allowing direct broadcast of data to science users will further increase demand for such systems. For the last ten years, the Data Systems Technology Division (DSTD) at NASA Goddard Space Flight Center (GSFC) has been applying state-of-the-art commercial Very Large Scale Integration (VLSI) Application Specific Integrated Circuit (ASIC) technology to further reduce the cost of ground telemetry data systems. As a continuation of this effort, a new desktop CCSDS processing system is being prototyped that offers up to 150 Mbps performance at a replication cost of less than $20K. This system acts as a gateway that captures and processes CCSDS telemetry streams and delivers them to users over standard commercial network interfaces. This paper describes the development of this prototype system based on the Peripheral Component Interconnect (PCI) bus and 0.6 micron complementary metal oxide semiconductor (CMOS) ASIC technology. The system performs frame synchronization, bit transition density decoding, cyclic redundancy code (CRC) error checking, Reed-Solomon decoding, virtual channel sorting/filtering, packet extraction, and quality annotation and accounting at data rates up to and beyond 150 Mbps.
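As a rough illustration of the frame-level checks listed in this abstract, the sketch below shows a CRC-16 verification of the kind applied to telemetry transfer frames. The CCITT polynomial, the all-ones initial value, and the placement of the CRC in the last two bytes of the frame are generic conventions assumed here for illustration, not details taken from the prototype described above.

```python
def crc16_ccitt(data: bytes, poly: int = 0x1021, init: int = 0xFFFF) -> int:
    """Bitwise CRC-16 over a frame body (CCITT polynomial, all-ones init)."""
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def frame_passes_crc(frame: bytes) -> bool:
    """Assume the transmitted CRC occupies the last two bytes (big-endian)."""
    body, received = frame[:-2], int.from_bytes(frame[-2:], "big")
    return crc16_ccitt(body) == received
```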
162

Asynchronous spike event coding scheme for programmable analogue arrays and its computational applications

Gouveia, Luiz Carlos Paiva January 2012 (has links)
This work is the result of the definition, design and evaluation of a novel method to interconnect the computational elements - commonly known as Configurable Analogue Blocks (CABs) - of a programmable analogue array. This method is proposed as a total or partial replacement for conventional methods, which suffer serious limitations in terms of scalability. With this method, named the Asynchronous Spike Event Coding (ASEC) scheme, analogue signals from CAB outputs are encoded as time instants (spike events) dependent upon those signals' activity and are transmitted asynchronously using the Address Event Representation (AER) protocol. Power dissipation depends on input signal activity, and no spike events are generated when the input signal is constant. On-line, programmable computation is intrinsic to the ASEC scheme and is performed without additional hardware. The ability of the communication scheme to perform computation enhances the computational power of the programmable analogue array. The design methodology and a CMOS implementation of the scheme are presented together with test results from prototype integrated circuits (ICs).
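The central ASEC idea, namely that spike events are produced only when a signal changes and a constant input produces none, can be sketched in software as a send-on-delta encoder. This is a generic behavioural illustration with an assumed threshold `delta`; it is not the circuit-level encoder or the AER transport described in the thesis.

```python
def send_on_delta_encode(samples, delta):
    """Emit (sample_index, +1/-1) events whenever the input drifts more than
    `delta` away from the last encoded level; a constant input yields nothing."""
    events = []
    level = samples[0]
    for i, x in enumerate(samples[1:], start=1):
        while x - level >= delta:
            level += delta
            events.append((i, +1))
        while level - x >= delta:
            level -= delta
            events.append((i, -1))
    return events

# A constant signal generates no events, so no power is spent on communication.
assert send_on_delta_encode([1.0, 1.0, 1.0], delta=0.1) == []
```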
163

Shrinking the Cost of Telemetry Frame Synchronization

Ghuman, Parminder, Bennett, Toby, Solomon, Jeff November 1995 (has links)
International Telemetering Conference Proceedings / October 30-November 02, 1995 / Riviera Hotel, Las Vegas, Nevada / To support initiatives for cheaper, faster, better ground telemetry systems, the Data Systems Technology Division (DSTD) at NASA Goddard Space Flight Center is developing a new Very Large Scale Integration (VLSI) Application Specific Integrated Circuit (ASIC) targeted to dramatically lower the cost of telemetry frame synchronization. This single VLSI device, known as the Parallel Integrated Frame Synchronizer (PIFS) chip, integrates most of the functionality contained in high density 9U VME card frame synchronizer subsystems currently in use. In 1987, a first generation 20 Mbps VMEBus frame synchronizer based on 2.0 micron CMOS VLSI technology was developed by the Data Systems Technology Division. In 1990, this subsystem architecture was recast using 0.8 micron ECL & GaAs VLSI to achieve 300 Mbps performance. The PIFS chip, based on 0.7 micron CMOS technology, will provide a superset of the current VMEBus subsystem functions at rates up to 500 Mbps at approximately one-tenth of current replication costs. Functions performed by this third generation device include true and inverted 64 bit marker correlation with programmable error tolerances, programmable frame length and marker patterns, programmable search-check-lock-flywheel acquisition strategy, slip detection, and CRC error detection. Acquired frames can optionally be annotated with a quality trailer and time stamp. A comprehensive set of cumulative accounting registers is provided on-chip for data quality monitoring. Prototypes of the PIFS chip are expected in October 1995. This paper will describe the architecture and implementation of this new low-cost, high-functionality device.
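The marker-correlation step performed by the PIFS device can be sketched as a sliding Hamming-distance comparison of the incoming bitstream against a 64-bit pattern, for both the true and the inverted marker, with a programmable error tolerance. The function below is a behavioural sketch only; the integer bit-packing of the stream and any particular marker value are assumptions made for illustration.

```python
def find_sync_marker(bits: int, nbits: int, marker: int,
                     marker_len: int = 64, max_errors: int = 0):
    """Slide a marker over a bitstream held as an integer of `nbits` bits
    (MSB first); report offsets whose Hamming distance to the true or the
    inverted marker is within the programmed error tolerance."""
    mask = (1 << marker_len) - 1
    hits = []
    for offset in range(nbits - marker_len + 1):
        window = (bits >> (nbits - marker_len - offset)) & mask
        true_err = bin(window ^ marker).count("1")
        inv_err = bin(window ^ (~marker & mask)).count("1")
        if min(true_err, inv_err) <= max_errors:
            hits.append((offset, min(true_err, inv_err), inv_err < true_err))
    return hits
```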
164

A Parallel Genetic Algorithm for Placement and Routing on Cloud Computing Platforms

Berlier, Jacob A. 05 May 2011 (has links)
The design and implementation of today's most advanced VLSI circuits and multi-layer printed circuit boards would not be possible without automated design tools that assist with the placement of components and the routing of connections between these components. In this work, we investigate how placement and routing can be implemented and accelerated using cloud computing resources. A parallel genetic algorithm approach is used to optimize component placement and the routing order supplied to a Lee's algorithm maze router. A study of mutation rate, dominance rate, and population size is presented to suggest favorable parameter values for arbitrary-sized printed circuit board problems. The algorithm is then used to successfully design a Microchip PIC18 breakout board and Micrel Ethernet Switch. Performance results demonstrate that a 50X runtime performance improvement over a serial approach is achievable using 64 cloud computing cores. The results further suggest that significantly greater performance could be achieved by requesting additional cloud computing resources for additional cost. It is our hope that this work will serve as a framework for future efforts to improve parallel placement and routing algorithms using cloud computing resources.
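The inner routing step named in this abstract, Lee's algorithm, is a breadth-first wave expansion over the routing grid followed by a backtrace. The single-net sketch below illustrates that step; it is not the parallel genetic algorithm framework itself, and the grid encoding (0 = free, 1 = blocked) is an assumption made for illustration.

```python
from collections import deque

def lee_route(grid, src, dst):
    """Wave expansion from src over a grid of 0 (free) / 1 (blocked) cells,
    then backtrace from dst to recover one shortest rectilinear path."""
    rows, cols = len(grid), len(grid[0])
    dist = {src: 0}
    frontier = deque([src])
    while frontier:
        cell = frontier.popleft()
        if cell == dst:
            break
        r, c = cell
        for n in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= n[0] < rows and 0 <= n[1] < cols \
                    and grid[n[0]][n[1]] == 0 and n not in dist:
                dist[n] = dist[cell] + 1
                frontier.append(n)
    if dst not in dist:
        return None                      # net cannot be routed on this grid
    path, cell = [dst], dst
    while cell != src:                   # walk back down the distance field
        r, c = cell
        cell = next(n for n in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1))
                    if dist.get(n) == dist[(r, c)] - 1)
        path.append(cell)
    return path[::-1]
```

In the genetic algorithm described above, a routine like this would be invoked once per net, with the chromosome supplying the component placement and the order in which nets are routed.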
165

TRAPP: a tool for partitioning/placement of cells for the TRANCA methodology

Schermer, Paulo Armando January 1995 (has links)
This work proposes and evaluates a new algorithm for placing the cells of circuits designed with the TRANCA design methodology. The proposed algorithm performs placement by partitioning into n blocks, based on the concept of net balancing, and carries out a global pre-routing. Most placement-by-partitioning algorithms are based on the Kernighan-Lin [KER 70] and Fiduccia-Mattheyses [FID 82] heuristics with group migration; they use a minimum-cut objective to reduce the number of nets crossing between the two partitions, which tends to produce saturated regions. Net balancing therefore seeks an equilibrium in connection length so as to avoid creating saturated regions, reducing computation time and easing the routing stage. An overview of automatic synthesis is given: the most widely used design styles are described, and the cell partitioning and placement problem is defined and analysed. The main characteristics of the TRANCA methodology are presented, and the TRANCA synthesis tools are summarised, highlighting the partitioning and placement steps of each so that their positive features can be exploited. To ground the concepts used in developing the algorithm, the most relevant placement methods are reviewed, with emphasis on those based on partitioning, and some of the existing heuristics are described. The concepts behind the algorithm are then presented. The algorithm essentially distributes the connections using a congestion map of the circuit, which characterises a global pre-routing; the congestion map is built over the partitions generated in the circuit, and the net paths are described over a model defined to control net crossings. After these concepts are defined, the environment created for the algorithm is presented. To validate the concepts studied and proposed, a prototype called TRAPP (TRAnsparent Placement by Partitioning) was implemented, together with a placement viewer called CIPPATO. Finally, some results from the prototype and an evaluation of its behaviour are presented, and alternative implementations and directions for future work are proposed.
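For context, the min-cut heuristics cited above (Kernighan-Lin and Fiduccia-Mattheyses) move cells between two partitions according to how many nets each move would cut or uncut. The sketch below shows that gain calculation in its textbook form; it illustrates the baseline those heuristics optimise, not TRAPP's net-balancing cost function.

```python
def cut_nets(nets, side):
    """Nets whose cells span both sides of a bipartition.
    `nets` maps a net name to the cells it connects; `side` maps cell -> 0/1."""
    return [name for name, cells in nets.items()
            if len({side[c] for c in cells}) > 1]

def move_gain(cell, nets, side):
    """Fiduccia-Mattheyses-style gain: nets that become uncut minus nets that
    become cut if `cell` were moved to the opposite side."""
    gain = 0
    for cells in nets.values():
        if cell not in cells:
            continue
        same = sum(1 for c in cells if side[c] == side[cell]) - 1
        other = len(cells) - 1 - same
        if same == 0:    # cell is the only pin on its side: the move uncuts this net
            gain += 1
        if other == 0:   # net lies entirely on this side: the move cuts it
            gain -= 1
    return gain
```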
166

Generic low power reconfigurable distributed arithmetic processor

Liu, Zhenyu January 2009 (has links)
Higher performance, lower cost, ever smaller integrated circuit components, and higher packaging density are ongoing goals of the microelectronics and computer industry. As these goals are met, however, power consumption and flexibility increasingly become bottlenecks that the new generation of Very Large-Scale Integrated (VLSI) designs must address. Modern systems require more energy to deliver the computational capability demanded of them, and these demands drive changes in standards not only in audio and video broadcasting but also in communications, such as wireless links and network protocols. Flexibility and low power consumption tend to work against each other, yet combining them in one system is the ultimate goal of designers. A generic, domain-specific, low-power reconfigurable processor for the distributed arithmetic algorithm is presented in this dissertation. This domain-reconfigurable processor is highly efficient in area, power, and delay, approaching the performance of an ASIC design while retaining the flexibility of programmable platforms. The architecture supports not only the typical distributed arithmetic algorithms found in most still-picture compression and video-conferencing standards, but also other distributed arithmetic algorithms used in digital signal processing, telecommunication protocols, and automatic control. Within this processor, a simple reconfigurable low-power control unit is implemented with good area, power, and timing performance. The generic character of the architecture makes it applicable to any small or medium-size finite state machine; such machines implement complex system behaviour as control units and appear in almost all engineering disciplines. Furthermore, to map target applications efficiently onto the proposed architecture, a new algorithm is introduced for finding the best set of common shared terms while keeping the area and power consumption of the implementation low. A software implementation of this algorithm is presented, usable not only for the architecture proposed in this dissertation but for any implementation based on adder-based distributed arithmetic. In addition, several low-power design techniques are applied in the architecture, such as an unsymmetrical design style covering unsymmetrical interconnection arrangement, unsymmetrical PTB selection, and unsymmetrical mapping of basic computing units; these techniques achieve substantial savings in power consumption and should extend to other low-power designs and architectures. The processor presented in this dissertation can implement complex, high-performance distributed arithmetic algorithms for communication and image-processing applications at lower area and power cost than traditional methods.
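The distributed arithmetic technique at the heart of this processor replaces multipliers with a lookup table of coefficient partial sums and a shift-and-add accumulation over the bit planes of the inputs. The sketch below shows the unsigned case with assumed example coefficients; two's-complement handling and the hardware term sharing described in the dissertation are omitted.

```python
def da_inner_product(coeffs, xs, nbits):
    """Distributed-arithmetic evaluation of sum(c * x) for unsigned `nbits`-bit
    inputs: a LUT of coefficient partial sums replaces the multipliers."""
    k = len(coeffs)
    # LUT entry for every possible bit pattern across the k inputs.
    lut = [sum(c for c, bit in zip(coeffs, format(idx, f"0{k}b")) if bit == "1")
           for idx in range(1 << k)]
    acc = 0
    for b in range(nbits - 1, -1, -1):      # most significant bit plane first
        address = 0
        for x in xs:                        # gather bit b of every input
            address = (address << 1) | ((x >> b) & 1)
        acc = (acc << 1) + lut[address]     # shift-and-add, no multiplier
    return acc

# Matches a direct multiply-accumulate for unsigned inputs.
assert da_inner_product([3, 5, 7], [2, 4, 6], nbits=4) == 3 * 2 + 5 * 4 + 7 * 6
```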
167

Hardware Implementation Techniques for JPEG2000.

Dyer, Michael Ian, Electrical Engineering & Telecommunications, Faculty of Engineering, UNSW January 2007 (has links)
JPEG2000 is a recently standardized image compression system that provides substantial improvements over the existing JPEG compression scheme. This improvement in performance comes with an associated cost in increased implementation complexity, such that a purely software implementation is inefficient. This work identifies the arithmetic coder as a bottleneck in efficient hardware implementations, and explores various design options to improve arithmetic coder speed and size. The designs produced improve the critical path of the existing arithmetic coder designs, and then extend the coder throughput to 2 or more symbols per clock cycle. Subsequent work examines more system-level implementation issues. This work examines the communication between hardware blocks and utilizes certain modes of operation to add flexibility to buffering solutions. It becomes possible to significantly reduce the amount of intermediate buffering between blocks, whilst maintaining a loose synchronization. Full hardware implementations of the standard are necessarily limited in the number of features they can offer, in order to constrain complexity and cost. To circumvent this, a hardware/software codesign is produced using the Altera NIOS II softcore processor. By keeping the majority of the standard implemented in software and using hardware to accelerate those time-consuming functions, generality of implementation can be retained, whilst implementation speed is improved. In addition to this, there is the opportunity to explore parallelism by providing multiple identical hardware blocks to code multiple data units simultaneously.
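The arithmetic coder identified here as the bottleneck is inherently serial: each symbol updates an interval on which the next symbol depends, which is what limits throughput in symbols per clock cycle. The sketch below shows one encoding step of a textbook binary arithmetic coder with a fixed zero-probability p0 (assumed to be neither 0 nor 1); it is not the MQ coder specified by JPEG2000, and stream termination is omitted.

```python
def encode_symbol(low, high, pending, bit, p0, out, precision=16):
    """One step of a binary arithmetic encoder: split the interval [low, high]
    in proportion to p0, keep the half matching `bit`, then renormalize,
    emitting settled bits (plus carry-pending bits) as the interval narrows."""
    top = (1 << precision) - 1
    half, quarter = (top >> 1) + 1, (top >> 2) + 1
    split = low + int((high - low + 1) * p0) - 1
    low, high = (low, split) if bit == 0 else (split + 1, high)
    while True:
        if high < half:                              # settled in the lower half
            out.append(0); out.extend([1] * pending); pending = 0
        elif low >= half:                            # settled in the upper half
            out.append(1); out.extend([0] * pending); pending = 0
            low -= half; high -= half
        elif low >= quarter and high < 3 * quarter:  # straddling the midpoint
            pending += 1
            low -= quarter; high -= quarter
        else:
            break
        low, high = 2 * low, 2 * high + 1            # renormalize
    return low, high, pending

# Encode a few bits; note each call needs the state left by the previous one.
low, high, pending, out = 0, (1 << 16) - 1, 0, []
for bit in (0, 1, 1, 0, 1):
    low, high, pending = encode_symbol(low, high, pending, bit, 0.6, out)
```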
168

Models and algorithms for physical cryptanalysis

Lemke-Rust, Kerstin January 2007 (has links)
Also published as: Bochum, Univ., dissertation, 2007 / Bibliography pp. 229-232
169

Quelques algorithmes systoliques pour le calcul scientifique (Some systolic algorithms for scientific computing)

Robert, Yves 16 December 1982 (has links) (PDF)
170

The Design Procedure Language Manual

Batali, John, Hartheimer, Anne 01 September 1980 (has links)
This manual describes the Design Procedure Language (DPL) for LSI design. DPL creates and maintains a representation of a design in a hierarchically organized, object-oriented LISP data-base. Designing in DPL involves writing programs (Design Procedures) which construct and manipulate descriptions of a project. The programs use a call-by-keyword syntax and may be entered interactively or written by other programs. DPL is the layout language for the LISP-based Integrated Circuit design system (LISPIC) being developed at the Artificial Intelligence Laboratory at MIT. The LISPIC design environment will combine a large set of design tools that interact through a common data-base. This manual is for prospective users of the DPL and covers the information necessary to design a project with the language. The philosophy and goals of the LISPIC system as well as some details of the DPL data-base are also discussed.
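As a loose analogue of the call-by-keyword, database-backed design style described above, the sketch below builds a tiny cell hierarchy. DPL itself is Lisp-based, so the class, method, and keyword names here are hypothetical Python stand-ins, not DPL syntax.

```python
class Cell:
    """A hierarchical layout object held in an in-memory design database."""
    def __init__(self, name, **properties):
        self.name = name
        self.properties = dict(properties)   # arbitrary keyword attributes
        self.children = []                   # (sub-cell, placement) pairs

    def instance(self, child, **placement):
        """Place a sub-cell using keyword arguments (e.g. x=, y=, rotation=)."""
        self.children.append((child, placement))
        return child

# Construct a two-level hierarchy purely through keyword calls.
inverter = Cell("inverter", width=4, height=6, layer="metal1")
buffer_cell = Cell("buffer")
buffer_cell.instance(inverter, x=0, y=0)
buffer_cell.instance(inverter, x=5, y=0, rotation=90)
```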
