121

Preemptive HW/SW-Threading by combining ESL methodology and coarse grained reconfiguration

Rößler, Marko, Heinkel, Ulrich 14 January 2014 (has links)
Modern systems fulfil calculation tasks across the hardware-software boundary. Tasks are divided into coarse parallel subtasks that run on distributed resources. These resources are classified into a software (SW) and a hardware (HW) domain. The software domain usually contains processors for general-purpose or digital signal calculations. Dedicated co-processors such as encryption or video en-/decoding units belong to the hardware domain. Nowadays, the decision in which domain a certain subtask will be executed is usually taken during system-level design. This is done on the basis of assumptions about the system requirements that might not hold at runtime. The HW/SW partitioning is static and cannot adapt to dynamically changing system requirements at runtime. Our contribution to tackle this is to combine an ESL-based HW/SW codesign methodology with a coarse-grained reconfigurable System on Chip architecture. We propose this as Preemptive HW/SW-Threading.
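As a loose illustration of the underlying idea only (not the authors' actual framework), the Python sketch below shows a runtime dispatcher that picks a subtask's execution domain from current load instead of relying on a static design-time partitioning; all names, cost metrics and thresholds are hypothetical.

# Hypothetical sketch: runtime HW/SW dispatch instead of a static design-time partitioning.
# Domain names, load metrics and thresholds are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class Subtask:
    name: str
    sw_cycles: int      # estimated cost on the processor
    hw_cycles: int      # estimated cost on the reconfigurable fabric
    hw_available: bool  # is a matching coarse-grained configuration loaded?

def dispatch(task: Subtask, cpu_load: float, fabric_load: float) -> str:
    """Pick a domain at runtime from current load and availability."""
    if not task.hw_available:
        return "SW"
    sw_cost = task.sw_cycles * (1.0 + cpu_load)
    hw_cost = task.hw_cycles * (1.0 + fabric_load)
    return "HW" if hw_cost < sw_cost else "SW"

if __name__ == "__main__":
    t = Subtask("fir_filter", sw_cycles=20000, hw_cycles=4000, hw_available=True)
    print(dispatch(t, cpu_load=0.7, fabric_load=0.2))  # -> "HW"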
122

Optimizing the instruction scheduler of high-level synthesis tool / Optimera instruktion schemaläggaren för högnivå syntes verktyg

Xu, Zihao January 2023 (has links)
With the increasing complexity of chip architecture designs that meet different application requirements, the corresponding instruction scheduler of a high-level synthesis tool needs to solve complex scheduling problems. The Dynamically Reconfigurable Resource Array (DRRA) is a novel architecture based on Coarse-Grained Reconfigurable Architecture (CGRA) on the SiLago platform. The instruction scheduler of Vesyla-II, the dedicated High-Level Synthesis (HLS) tool targeting DRRA, needs to schedule the specific instruction sets designed for the Distributed Two-Level Control System (D2LC). These instructions have different lifetimes and are fully cooperative and persistent. Because of these features, the instruction scheduler must apply a scheduling algorithm under complex constraints. The previously existing naive algorithm shows poor scalability and low efficiency. This thesis designs and implements a new scheduling algorithm to improve the performance of the constraint-programming-engine-based scheduler. The new algorithm is heuristic: the scheduler predicts the scheduling order during the resource scheduling process. In addition, a test bench covering different instruction scheduling behaviours is designed; it can generate the maximum boundary of the schedule for performance profiling of the developed algorithm. Several experiments compare the proposed method against the previous naive algorithm, using execution time and quality of result to decide which algorithm performs better. The results show that the scheduler with the heuristic algorithm reduces execution time while delivering comparable schedule quality, and it solves all the test cases, whereas the naive algorithm can only solve part of them.
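The abstract does not give the algorithm itself; purely as a hedged illustration of heuristic, priority-ordered scheduling under a resource constraint, the sketch below shows a generic list scheduler. The Vesyla-II scheduler, its instruction set and its constraint model are certainly more involved than this.

# Illustrative list scheduler (not the Vesyla-II algorithm): operations are ordered by a
# heuristic priority and assigned to the earliest cycle with a free resource slot.

def list_schedule(ops, deps, slots_per_cycle):
    """ops: {name: latency}; deps: {name: set of predecessors}; returns {name: start cycle}."""
    # Heuristic priority: schedule long operations with many dependents first.
    dependents = {o: 0 for o in ops}
    for o, preds in deps.items():
        for p in preds:
            dependents[p] += 1
    order = sorted(ops, key=lambda o: (dependents[o], ops[o]), reverse=True)

    start, used = {}, {}          # used[cycle] = number of occupied slots
    for _ in range(len(ops)):
        for o in order:
            if o in start or any(p not in start for p in deps.get(o, ())):
                continue
            ready = max((start[p] + ops[p] for p in deps.get(o, ())), default=0)
            c = ready
            while used.get(c, 0) >= slots_per_cycle:
                c += 1
            start[o], used[c] = c, used.get(c, 0) + 1
            break
    return start

if __name__ == "__main__":
    ops = {"load_a": 1, "load_b": 1, "mul": 2, "add": 1}
    deps = {"mul": {"load_a", "load_b"}, "add": {"mul"}}
    print(list_schedule(ops, deps, slots_per_cycle=1))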
123

High-Level Language Programming Environment for Parallel Real-Time Telemetry Processor

LaPlante, John R., Barge, Steve G. 11 1900 (has links)
International Telemetering Conference Proceedings / October 30-November 02, 1989 / Town & Country Hotel & Convention Center, San Diego, California / The difficulty of incorporating custom real-time processing into a conventional telemetry system frustrates many design engineers. Custom algorithms such as data compression/conversion, software decommutation, signal processing or sensitive defense-related algorithms are often executed on expensive and time-consuming mainframe computers during post-processing. The cost to implement such algorithms on real-time hardware is greater, because programming for such hardware is usually done in assembly language or microcode, resulting in: * The need for specially trained software specialists * Long and often unpredictable development time * Poor maintainability * Non-portability to new applications or hardware. This paper presents an alternative to host-based, post-processing telemetry systems. The Loral System 500 offers an easy-to-use, high-level language programming environment that couples real-time performance with fast development time, portability and easy maintenance. Targeted to Weitek's XL-Series 32- and 64-bit floating point processors, delivering 20 MFLOPS peak performance, the environment transparently integrates the C programming environment with a parallel data-flow telemetry processing architecture. Supporting automatic human interface generation, symbolic high-level debugging and a complete floating point math library, the System 500 programming environment extends to parallel execution transparently. It handles process scheduling, memory management and data conversion automatically. Configured to run under UNIX, the system's development environment is powerful and portable. The platform can be migrated to PCs and other hosts, facilitating eventual integration with an array of standard off-the-shelf tools.
125

SiLago: Enabling System Level Automation Methodology to Design Custom High-Performance Computing Platforms : Toward Next Generation Hardware Synthesis Methodologies

Farahini, Nasim January 2016 (has links)
126

Design and Multi-Technology Multi-objective Comparative Analysis of Families of MPSOC.

Wang, Zhoukun 12 November 2009 (has links) (PDF)
Multiprocessor systems on chip (MPSOC) have strongly emerged in the past decade in communication, multimedia, networking and other embedded domains, and the MPSOC has become a new paradigm of high-performance embedded application design. This thesis addresses the design and the physical implementation of a Network on Chip (NoC) based Multiprocessor System on Chip. We studied several aspects at different design stages: high-level synthesis, architecture design, FPGA implementation, application evaluation and ASIC physical implementation, and we analyse the impact of these aspects on the MPSOC's final performance, power consumption and area cost. We implemented a NoC-based sixteen-processor embedded system as an FPGA prototype. Three NoCs provide different functionalities for the sixteen PE tiles. We also demonstrated the use of our performance monitoring system for software debugging and tuning. With the bi-synchronous FIFO method, our GALS architecture successfully solves the long clock signal distribution problem and allows each clock domain to run at its own clock frequency. On the other hand, we successfully implemented the AES and TDES block cipher cryptographic algorithms on this platform, and the results show linear speedup in computation time. The network part of our architecture has been implemented in ASIC technology and has been explored with different timing constraints and different library categories of STMicroelectronics' 65nm/45nm technologies. The experimental results of ASIC and FPGA are compared, and we discuss the impact of the technology change on parallel programming.
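As a back-of-the-envelope illustration of why block-cipher workloads scale almost linearly on such a platform (the figures below are hypothetical assumptions, not the thesis' measurements), independent cipher blocks can simply be split across the processing elements:

# Rough speedup estimate for splitting independent cipher blocks over N processing
# elements; cycle counts and overheads are illustrative assumptions, not measured data.

def parallel_time(n_blocks, cycles_per_block, n_pe, noc_overhead_cycles=0):
    blocks_per_pe = -(-n_blocks // n_pe)          # ceiling division
    return blocks_per_pe * cycles_per_block + noc_overhead_cycles

n_blocks, cycles = 4096, 1200                      # hypothetical AES workload
t1 = parallel_time(n_blocks, cycles, 1)
t16 = parallel_time(n_blocks, cycles, 16, noc_overhead_cycles=2000)
print(f"speedup on 16 PEs: {t1 / t16:.1f}x")       # close to 16x when overhead is small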
127

Towards hardware synthesis of a flexible radio from a high-level language / Synthèse matérielle d'une radio flexible et reconfigurable depuis un langage de haut niveau dédié aux couches physiques radio

Tran, Mai-Thanh 13 November 2018 (has links)
Software defined radio (SDR) is a promising technology to tackle the flexibility requirements of new generations of communication standards: the radio can be reprogrammed at the software level to implement different waveforms. When relying on a software-based technology such as microprocessors, this approach is clearly flexible and quite easy to design, but it usually provides low computing capability and therefore low throughput. To tackle this issue, FPGA technology turns out to be a good alternative for implementing SDRs. FPGAs offer both high computing power and reconfiguration capacity, so including FPGAs in the SDR concept may allow more waveforms, with stricter requirements, to be supported than a processor-based approach. However, the main drawbacks of FPGA design are the level of the input description language, which basically needs to be the hardware level, and the reconfiguration time, which may exceed run-time requirements if the complete FPGA is reconfigured. To overcome these issues, this PhD thesis proposes a design methodology that leverages both high-level synthesis tools and dynamic reconfiguration. The proposed methodology is a guideline to completely build a flexible radio for FPGA-based SDR, which can be reconfigured at run-time.
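A minimal sketch of the run-time side of such a flow, assuming a hypothetical reconfiguration API: the waveform names, bitstream catalogue and load_partial_bitstream() call below are invented for illustration and do not reproduce the thesis' actual methodology or tooling.

# Hypothetical run-time manager for an FPGA-based SDR: each waveform maps to a partial
# bitstream generated offline by HLS; switching waveforms reloads only that region.
# load_partial_bitstream() stands in for a vendor reconfiguration call and is not a real API.

PARTIAL_BITSTREAMS = {            # invented catalogue: waveform -> partial bitstream file
    "qpsk_tx": "pr_region0_qpsk.bit",
    "ofdm_tx": "pr_region0_ofdm.bit",
}

def load_partial_bitstream(path: str) -> None:
    print(f"[stub] reconfiguring region with {path}")   # placeholder for the real driver call

class RadioManager:
    def __init__(self):
        self.active = None

    def switch_waveform(self, waveform: str) -> None:
        if waveform == self.active:
            return                                      # region already holds this waveform
        load_partial_bitstream(PARTIAL_BITSTREAMS[waveform])
        self.active = waveform

if __name__ == "__main__":
    mgr = RadioManager()
    mgr.switch_waveform("qpsk_tx")
    mgr.switch_waveform("ofdm_tx")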
128

Low Power Technology Mapping and Performance Driven Placement for Field Programmable Gate Arrays

Li, Hao, 09 November 2004 (has links)
As technology geometries have shrunk into the deep sub-micron (DSM) region, the chip density and clock frequency of FPGAs have increased significantly. This makes computer-aided design (CAD) for FPGAs very important and challenging. Due to the increasing demands of portable devices and mobile computing, low-power design is crucial in CAD nowadays. In this dissertation, we present a framework to optimize power consumption for technology mapping onto FPGAs. We propose a low-power technology mapping scheme that is able to predict the impact of choosing a subnetwork covering on the ultimate mapping solution. We dynamically update the power estimation for a sequence of options and choose the one that yields the least power consumption. This technique outperforms the best low-power mapping algorithms reported in the literature. We further extend this work to generate mapping solutions with optimal delay. We also propose placement algorithms to optimize the performance of the placed circuit. A net-cluster-based methodology is designed to ensure that closely connected nets will be routed in the same region; net clusters are obtained by clique partitioning on the net dependency graph. Net positions, and consequently cell positions, are computed with a force-directed approach that drags connected nets to closer positions. We further study the performance-driven placement problem for high-level synthesis. We use the Automatic Design Instantiation (AUDI) high-level synthesis system to generate a register-transfer level (RTL) netlist. This RTL netlist is fed into a CAD tool for physical synthesis. We do not necessarily go through the entire physical design process, which is usually quite time-consuming. Instead, we have created an accurate wirelength/timing estimator working on the floorplan. If the estimated timing information does not meet the constraints, guidance is generated and provided to the AUDI system. The guidance consists of the estimated timing information and instructions to produce a new netlist in order to improve the performance. Finally, the circuit is placed and routed, yielding a satisfactory design. This performance-driven placement framework yields better results than a commercial CAD tool.
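The abstract names a force-directed approach for computing net and cell positions; as a generic illustration of that idea only (not the dissertation's algorithm), each movable cell can be pulled iteratively toward the centroid of the cells it connects to:

# Generic force-directed placement step (illustrative, not the dissertation's algorithm):
# every movable cell is pulled toward the average position of its connected cells.

def force_directed(positions, nets, fixed, iterations=50):
    """positions: {cell: (x, y)}; nets: list of cell lists; fixed: set of pinned cells."""
    neighbours = {c: set() for c in positions}
    for net in nets:
        for c in net:
            neighbours[c].update(x for x in net if x != c)

    for _ in range(iterations):
        for c, nbrs in neighbours.items():
            if c in fixed or not nbrs:
                continue
            x = sum(positions[n][0] for n in nbrs) / len(nbrs)
            y = sum(positions[n][1] for n in nbrs) / len(nbrs)
            positions[c] = (x, y)
    return positions

if __name__ == "__main__":
    pos = {"pad_in": (0, 0), "pad_out": (10, 10), "u1": (9, 1), "u2": (1, 9)}
    nets = [["pad_in", "u1"], ["u1", "u2"], ["u2", "pad_out"]]
    print(force_directed(pos, nets, fixed={"pad_in", "pad_out"}))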
129

CHESS: A Tool for CDFG Extraction and High-Level Synthesis of VLSI Systems

Namballa, Ravi K 08 July 2003 (has links)
In this thesis, a new tool, named CHESS, is designed and developed for control and data-flow graph (CDFG) extraction and the high-level synthesis of VLSI systems. The tool consists of three individual modules for: (i) CDFG extraction, (ii) scheduling and allocation of the CDFG, and (iii) binding, which are integrated to form a comprehensive high-level synthesis system. The first module, for CDFG extraction, includes a new algorithm in which certain compiler-level transformations are applied first, followed by a series of behavior-preserving transformations on the given VHDL description. Experimental results indicate that the proposed conversion tool is quite accurate and fast. The CDFG is fed to the second module, which schedules it for resource optimization under a given set of time constraints. The scheduling algorithm improves on the Tabu Search based algorithm described in [6] in terms of execution time; the improvement is achieved by moving the step of identifying mutually exclusive operations to the CDFG extraction phase, which is otherwise normally done during scheduling. The last module of the proposed tool implements a new binding algorithm based on a game-theoretic approach: the problem of binding is formulated as a non-cooperative finite game, for which a Nash equilibrium function is applied to achieve a power-optimized binding solution. Experimental results for several high-level synthesis benchmarks are presented which establish the efficacy of the proposed synthesis tool.
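To suggest the game-theoretic flavour of the binding step only (the cost model and equilibrium search below are invented and do not reproduce the thesis' formulation), each operation can repeatedly pick the functional unit that minimises its own power cost given the other operations' current choices, until no operation wants to deviate — a simple best-response search for a Nash equilibrium:

# Illustrative best-response search for a binding "game" (hypothetical cost model):
# sharing a functional unit is penalised, standing in for multiplexing/switching power.

def bind(ops, units, base_power, sharing_penalty, max_rounds=100):
    """Return {op: unit} such that no single op can lower its own cost by switching unit."""
    choice = {op: units[0] for op in ops}
    for _ in range(max_rounds):
        changed = False
        for op in ops:
            def cost(u):
                load = sum(1 for o in ops if o != op and choice[o] == u)
                return base_power[u] + sharing_penalty * load
            best = min(units, key=cost)
            if cost(best) < cost(choice[op]):
                choice[op], changed = best, True
        if not changed:                 # equilibrium: nobody wants to deviate
            break
    return choice

if __name__ == "__main__":
    ops = ["add1", "add2", "add3"]
    units = ["ALU0", "ALU1"]
    print(bind(ops, units, base_power={"ALU0": 1.0, "ALU1": 1.2}, sharing_penalty=0.5))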
130

Using Ontologies to Support Interoperability in Federated Simulation

Rathnam, Tarun 20 August 2004 (has links)
A vast array of computer-based simulation tools is used to support engineering design and analysis activities. Several such activities call for the simulation of various coupled sub-systems in parallel, typically to study the emergent behavior of large, complex systems. Most sub-systems have their own simulation models associated with them, which need to interoperate with each other in a federated fashion to simulate system-level behavior. The run-time exchange of information between federate simulations requires a common information model that defines the representation of simulation concepts shared between federates. However, most federate simulations employ disparate representations of shared concepts. Therefore, it is often necessary to implement transformation stubs that convert concepts from their common representation to those used in federate simulations, and vice versa. The tasks of defining a common representation for shared simulation concepts and building translation stubs around it add to the cost of performing a system-level simulation. In this thesis, a framework to support automation and reuse in the process of achieving interoperability between federate simulations is developed. This framework uses ontologies as a means to capture the semantics of the different simulation concepts shared in a federation in a formal, reusable fashion. Using these semantics, a common representation for shared simulation entities, and a corresponding set of transformation stubs to convert entities from their federate to common representations (and vice versa), are derived automatically. As a foundation for this framework, a schema to enable the capture of simulation concepts in an ontology is specified. Also, a graph-based algorithm is developed to extract the appropriate common information model and transformation procedures between federate and common simulation entities. As a proof of concept, this framework is applied to support the development of a federated air traffic simulation. To progress with the design of an airport, the combined operation of its individual systems (air traffic control, ground traffic control, and ground-based aircraft services) in handling varying volumes of aircraft traffic is to be studied. To do so, the individual simulation models corresponding to the different sub-systems of the airport need to be federated, for which the ontology-based framework is applied.
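The core mechanism — deriving conversion stubs between federate-specific and common representations from declared correspondences — can be suggested with a small sketch. The mapping table, unit factors and attribute names below are hypothetical and are far simpler than an ontology-driven derivation.

# Hypothetical sketch: generate converters between a federate's representation of a shared
# concept and a common representation from a declarative attribute/unit mapping.

MAPPING = {   # invented correspondence: common attribute -> (federate attribute, scale factor)
    "altitude_m": ("alt_ft", 0.3048),
    "groundspeed_mps": ("gs_kts", 0.514444),
}

def make_stubs(mapping):
    def to_common(federate_obj: dict) -> dict:
        return {c: federate_obj[f] * k for c, (f, k) in mapping.items()}
    def to_federate(common_obj: dict) -> dict:
        return {f: common_obj[c] / k for c, (f, k) in mapping.items()}
    return to_common, to_federate

if __name__ == "__main__":
    to_common, to_federate = make_stubs(MAPPING)
    aircraft = {"alt_ft": 35000, "gs_kts": 450}
    common = to_common(aircraft)
    print(common)                     # values now in metres and metres per second
    print(to_federate(common))        # round-trips back to the federate's units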
