41

VLSI REALIZATION OF AHPL DESCRIPTIONS AS STORAGE LOGIC ARRAY.

CHIANG, CHEN HUEI. January 1982 (has links)
A methodology for the automatic translation of a Hardware Description Language (HDL) formulation of a VLSI system into a structured, array-type target realization is the subject of this investigation. A particular combination of input HDL and target technology has been implemented as part of the exercise, and a detailed evaluation of the result is presented. The HDL used in the study is AHPL, a synchronous clock-mode language that accepts a description of the hardware at the register transfer level. The target technology selected is the Storage Logic Array (SLA), an evolution of the PLA concept. Use of the SLA has a distinct advantage, notably the ability to sidestep the interconnection routing problem, an expensive and time-consuming process in normal IC design. Over the years, an enormous amount of effort has gone into generating layout from an interconnection list; this conventional approach seems to complicate the placement and routing processes in later stages. In this research project the major emphasis has therefore been on extracting relevant global information from the higher-level description to guide the subsequent placement and routing algorithms, effectively generating the lower-level layout directly from the higher-level description. A special version of the AHPL compiler (stage 3) has been developed as part of the project. The SLA data structure formats and the implementation of the Data and Control Sections of the target are described in detail. The evaluation and possibilities for future research are also discussed.
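To make concrete the idea of extracting global information from a register-transfer description to guide placement, the sketch below builds a simple connectivity graph from a list of register transfers, so that strongly coupled registers can be placed near each other. It is an illustration only: the transfer list, function names, and weighting scheme are assumptions for this example and do not reproduce AHPL syntax or the compiler's actual data base.

```python
# Hypothetical illustration: derive a register-connectivity graph from a
# register-transfer description so that strongly connected registers can be
# placed in adjacent SLA rows/columns. Names and structures are assumptions.
from collections import defaultdict
from itertools import combinations

# Each transfer is (destination register, source registers/buses it reads).
transfers = [
    ("ACC", ["ACC", "BUS"]),
    ("MAR", ["PC"]),
    ("PC",  ["PC", "IR"]),
    ("IR",  ["BUS"]),
]

def connectivity_graph(transfers):
    """Count how often each pair of registers appears in the same transfer."""
    weight = defaultdict(int)
    for dest, sources in transfers:
        nodes = {dest, *sources}
        for a, b in combinations(sorted(nodes), 2):
            weight[(a, b)] += 1
    return weight

# Pairs with the highest weight are candidates for adjacent placement.
for pair, w in sorted(connectivity_graph(transfers).items(),
                      key=lambda kv: -kv[1]):
    print(pair, w)
```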
42

A new quadratic formulation for incremental timing-driven placement

Fogaça, Mateus Paiva January 2016 (has links)
The interconnection delay is a dominant factor for achieving timing closure in nanoCMOS circuits. During physical synthesis, placement aims to spread cells in the available area while optimizing an objective function with respect to the design constraints. It is therefore a key step in determining the total wirelength and hence in achieving timing closure. Incremental placement techniques aim to improve the quality of a given solution. Two quadratic approaches for incremental timing-driven placement that mitigate late violations through path smoothing and net load balancing are proposed in this work. Unlike previous works, the proposed formulations include a delay model in the quadratic function. Quadratic placement is applied incrementally through an operation called neutralization, which helps to preserve the qualities of the initial placement solution. In both techniques, the quadratic wirelength is weighted by cells' drive strengths and pin criticalities. The final results outperform the state of the art by 9.4% and 7.6% on average for WNS and TNS, respectively.
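As an illustration of the kind of objective such formulations build on, a criticality-weighted quadratic wirelength can be written as follows. This is only the generic form; the thesis's contributions of embedding a delay model and applying the neutralization operation are not reproduced here, and the symbols alpha and beta are illustrative weighting factors.

```latex
% Generic criticality-weighted quadratic wirelength (illustrative only).
\begin{equation}
  \Phi(\mathbf{x},\mathbf{y}) = \sum_{(i,j)\in E} w_{ij}
      \bigl[(x_i - x_j)^2 + (y_i - y_j)^2\bigr],
  \qquad
  w_{ij} = \alpha_{ij}\bigl(1 + \beta\,\mathrm{crit}_{ij}\bigr)
\end{equation}
```

Here, (x_i, y_i) is the position of cell i, E is the set of two-pin connections after net decomposition, crit_ij in [0, 1] is a pin criticality, and alpha_ij folds in electrical factors such as driver strength. Minimizing this objective reduces to solving two sparse linear systems, one per coordinate, which is what makes quadratic placement attractive for incremental use.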
43

Fluigi: an end-to-end software workflow for microfluidic design

Huang, Haiyao 17 February 2016 (has links)
One goal of synthetic biology is to design and build genetic circuits in living cells for a range of applications with implications in health, materials, and sensing. Computational design methodologies allow for increased performance and reliability of these circuits. Major challenges that remain include increasing the scalability and robustness of engineered biological systems and streamlining and automating the synthetic biology workflow of "specify-design-build-test." I summarize the advances in microfluidic technology, particularly microfluidic large-scale integration, that can be used to address the challenges facing each step of the synthetic biology workflow for genetic circuits. Microfluidic technologies allow precise control over the flow of biological content within microscale devices, and thus may provide more reliable and scalable construction of synthetic biological systems. However, adoption of microfluidics for synthetic biology has been slow due to the expert knowledge and equipment needed to fabricate and control devices. I present an end-to-end workflow for a computer-aided design (CAD) tool, Fluigi, for designing microfluidic devices and for integrating biological Boolean genetic circuits with microfluidics. The workflow starts with a "netlist" input describing the connectivity of the microfluidic device to be designed, and proceeds through placement, routing, and design rule checking in a process analogous to electronic CAD. The output is an image of the device for printing as a mask for photolithography or for computer numerical control (CNC) machining. I also introduce a second workflow to allocate biological circuits to microfluidic devices and to generate the valve control scheme that enables biological computation on the device. I used the CAD workflow to generate 15 designs, including gradient generators, rotary pumps, and devices for housing biological circuits. I fabricated two designs, a gradient generator with CNC machining and a device for computing a biological XOR function with multilayer soft lithography, and verified their functions with dye. My efforts here show a first end-to-end demonstration of an extensible and foundational microfluidic CAD tool from design concept to fabricated device. This work provides a platform that, when completed, will automatically synthesize high-level functional and performance specifications into fully realized microfluidic hardware, control software, and synthetic biological wetware.
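A minimal sketch of what a netlist-driven microfluidic CAD flow can look like is given below, with toy placement and design-rule-check steps. Class and function names (Component, Netlist, place, design_rule_check) are assumptions for illustration and are not Fluigi's actual API.

```python
# Toy netlist-driven microfluidic CAD flow: netlist -> placement -> DRC.
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str          # e.g. "mixer1", "valve3"
    kind: str          # e.g. "mixer", "valve", "port"
    pos: tuple = None  # (x, y) in microns, filled in by placement

@dataclass
class Netlist:
    components: dict = field(default_factory=dict)
    channels: list = field(default_factory=list)   # (src, dst) pairs

    def add(self, name, kind):
        self.components[name] = Component(name, kind)

    def connect(self, src, dst):
        self.channels.append((src, dst))

def place(netlist, pitch=1000):
    """Naive grid placement; a real tool would optimize channel length."""
    for i, c in enumerate(netlist.components.values()):
        c.pos = ((i % 4) * pitch, (i // 4) * pitch)

def design_rule_check(netlist, min_spacing=200):
    """Flag component pairs closer than the minimum spacing."""
    comps = list(netlist.components.values())
    violations = []
    for i, a in enumerate(comps):
        for b in comps[i + 1:]:
            dx, dy = a.pos[0] - b.pos[0], a.pos[1] - b.pos[1]
            if (dx * dx + dy * dy) ** 0.5 < min_spacing:
                violations.append((a.name, b.name))
    return violations

nl = Netlist()
nl.add("in1", "port"); nl.add("mixer1", "mixer"); nl.add("out1", "port")
nl.connect("in1", "mixer1"); nl.connect("mixer1", "out1")
place(nl)
print(design_rule_check(nl))
```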
44

Considering Manufacturing in the Design of Thick-Panel Origami Mechanisms

Crampton, Erica Brunson 01 October 2017 (has links)
Origami has been investigated and demonstrated for engineering applications in recent years. Many techniques for accommodating the thickness of most engineering materials have been developed. In this work, tables comparing performance and manufacturing characteristics are presented. These tables can serve as useful design tools for engineers when selecting an appropriate thickness-accommodation technique for their application. The use of bent sheet metal for panels in thick-origami mechanisms shows promise as a panel design approach that mitigates several trade-offs between performance and manufacturing characteristics. A process is described and demonstrated that can be employed to use sheet metal in designs of origami-adapted mechanisms that utilize specific thickness-accommodation techniques. Data structures based on origami can be useful in the automation of thick-origami mechanism design. The use of such data structures is explained and shown in the context of a program that will automatically create the 3D CAD models and assembly of a thick-origami mechanism using the tapered panels technique based on the input origami crease pattern. Manufacturability in the design of origami-adapted mechanisms is discussed through presenting and examining three examples of origami-adapted mechanisms. As the manufacturability of origami-adapted products is addressed and improved, their robustness will also improve, thereby enabling greater use of origami-adapted design.
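As a rough illustration of the kind of origami-based data structure that can drive automated CAD model generation, the sketch below encodes a crease pattern as vertices plus creases with mountain/valley assignments and fold angles. The field names and the panel-thickness parameter are assumptions for this example, not the structures used by the program described above.

```python
# Illustrative crease-pattern data structure for thick-panel mechanism design.
from dataclasses import dataclass

@dataclass
class Crease:
    v1: int              # index of first endpoint vertex
    v2: int              # index of second endpoint vertex
    assignment: str      # "M" (mountain) or "V" (valley)
    fold_angle: float    # target fold angle in degrees

@dataclass
class CreasePattern:
    vertices: list       # [(x, y), ...] in the flat state
    creases: list        # [Crease, ...]
    panel_thickness: float = 3.0  # mm, drives the tapered-panel geometry

# A single-crease example: two panels joined by one mountain fold.
cp = CreasePattern(
    vertices=[(0, 0), (100, 0), (100, 50), (0, 50), (50, 0), (50, 50)],
    creases=[Crease(4, 5, "M", 180.0)],
)
print(len(cp.vertices), "vertices,", len(cp.creases), "crease(s)")
```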
45

Design automation methodologies for extensible processor platform

Cheung, Newton, Computer Science & Engineering, Faculty of Engineering, UNSW January 2005 (has links)
This thesis addresses two ubiquitous trends in the embedded-system world: the increasing importance of design turnaround time as a design metric, and the move towards closing the design productivity gap. Adopting the right design approach has been recognised as an integral part of the design flow in order to meet desired characteristics such as increasing software content, satisfying the growing complexity of an application, reusing off-the-shelf components, and exploring design-metric trade-offs, which closes the design productivity gap. The importance of design turnaround time is motivated by the intensive competition between manufacturers, especially makers of mainstream electronic consumer products, who shrink product life cycles and require faster time-to-market to maximise economic benefits. This thesis presents a suite of design automation methodologies to automatically design embedded systems for an application in the state-of-the-art design approach: the extensible processor platform. These design automation methodologies systematise the extensible processor platform's design flow, with particular emphasis on solving four challenging design problems: i) code segment identification; ii) instruction generation; iii) architectural customisation selection; and iv) processor evaluation. Our suite of design automation methodologies includes: i) a semi-automatic design system, to design an extensible processor that maximises application performance while satisfying the area constraint; by specifying a fitting function to identify suitable code segments within an application, a two-level hierarchical selection algorithm is used to first select a predefined processor and then select the right instructions, and a performance estimator is used to estimate an application's performance; ii) an instruction matching tool, to automatically match pre-designed instructions with computationally intensive code segments, reducing verification time and effort; iii) an instruction estimation model, to estimate the area overhead, latency, and power consumption of extensible instructions, exploring a larger design space; and iv) an instruction generation tool, to generate new extensible instructions that maximise the speedup while minimising power dissipation. A number of techniques, such as system decomposition, combinational equivalence checking, and regression analysis, have been relied upon heavily in the creation of the final design system. This thesis shows results at every stage to demonstrate the efficacy of our design methodologies in the creation of extensible processors. The methodologies and results presented in this thesis demonstrate that automating the design process for an extensible processor platform results in a significant performance increase: on average, an increase of 4.74x (up to 15.71x) compared to the original base processor. Our system achieves significant design turnaround time savings (2.5% of the full simulation time for the entire design space) with the majority of Pareto points obtained (91% on average), and can lead to fewer and faster design iterations. Our instruction matching tool is 7.3x faster on average compared to the best known approaches to the problem (partial simulations). Our estimation model has a mean absolute error as small as 3.4% (6.7% max.) for area overhead, 5.9% (9.4% max.) for latency, and 4.2% (7.2% max.) for power consumption, compared to estimation through the time-consuming synthesis and simulation steps using commercial tools.
Finally, the instruction generation tool reduces energy consumption by a further 5.8% on average (up to 17.7%) compared to extensible instructions generated by previous approaches.
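The abstract above mentions regression analysis as one of the techniques behind the instruction estimation model. The sketch below shows, in general terms, how a linear regression estimator for area overhead could be fitted and queried; the feature set and the numbers are invented for illustration and are not the thesis's model or data.

```python
# Illustrative regression-based estimator for extensible-instruction area.
import numpy as np

# Per-instruction features: [operator count, operand bit-width, multiplier
# count]; targets: synthesized area overhead (arbitrary units). Invented data.
X = np.array([[3, 16, 0],
              [5, 32, 1],
              [8, 32, 2],
              [2,  8, 0],
              [6, 16, 1]], dtype=float)
area = np.array([120.0, 540.0, 910.0, 60.0, 430.0])

# Fit a linear model area ~ X*w + b by least squares.
A = np.hstack([X, np.ones((X.shape[0], 1))])
coeffs, *_ = np.linalg.lstsq(A, area, rcond=None)

def estimate_area(features):
    """Estimate area overhead of a candidate instruction from its features."""
    return float(np.dot(np.append(features, 1.0), coeffs))

print(round(estimate_area([4, 32, 1]), 1))
```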
46

The reuse of design rules by product and process documentation: A descriptive case study

Andersson, Emma January 2010 (has links)
The problem of automating design processes is often related to the difficulties of updating, maintaining, and sharing information. This thesis provides a descriptive case study of a large company's design automation process and the difficulties of reusing already existing solutions. The main purpose of the thesis has been to trace a product family from its requirements specification to a complete design program. An account is given of the documentation written during the product development process, of the different data storage systems, and of how the company has implemented design automation in its process. The results have been reached through a series of interviews as well as previous studies and material from the company. From an analysis of the results, solutions are proposed; they focus on the low quality of the documentation and on how this is a result of the company's rapid growth.
47

Analysis and Optimization for Testing Using IEEE P1687

Ghani Zadegan, Farrokh January 2010 (has links)
The IEEE P1687 (IJTAG) standard proposal aims at providing a standardized interface between on-chip embedded test, debug, and monitoring logic (instruments), such as scan chains and temperature sensors, and the Test Access Port of IEEE Standard 1149.1, which is mainly used for board test. A key feature of P1687 is the inclusion of Segment Insertion Bits (SIBs) in the scan path. SIBs make it possible to construct a multitude of different P1687 networks for the same set of instruments, and provide flexibility in test scheduling. The work presented in this thesis consists of two parts. In the first part, an analysis of test application time is given for P1687 networks under two test schedule types, namely concurrent and sequential scheduling. Furthermore, formulas and novel algorithms are presented to compute the test time for a given P1687 network and a given schedule type. The algorithms are implemented and employed in extensive experiments on realistic industrial designs. In the second part, the design of IEEE P1687 networks is studied. Designing the P1687 network that results in the least test application time for a given set of instruments is a time-consuming task in the absence of automated design tools. In this thesis work, novel algorithms are presented for the automated design of P1687 networks that are optimized with respect to test application time and the required number of SIBs. The algorithms are implemented and demonstrated in experiments on industrial SOCs.
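As a rough illustration of the kind of test-time computation involved, the sketch below models a flat network with one SIB per instrument under a sequential schedule and counts only shift cycles. It ignores SIB reprogramming and the IEEE 1149.1 capture/update overhead, so it is a simplified first-order model, not the formulas developed in the thesis.

```python
# Simplified shift-cycle count for a flat, one-SIB-per-instrument network
# tested with a sequential schedule (illustrative only).
def sequential_shift_cycles(instruments):
    """instruments: list of (register_length_bits, num_test_patterns)."""
    n_sibs = len(instruments)          # every SIB adds one bit to the scan path
    total = 0
    for length, patterns in instruments:
        # With only this instrument's SIB open, each pattern shifts through
        # all SIB bits plus the instrument's shift register.
        total += patterns * (n_sibs + length)
    return total

# Example: three instruments with different register lengths / pattern counts.
print(sequential_shift_cycles([(8, 100), (32, 10), (128, 5)]))
```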
48

Unified Design and Optimization Tools for Digital Microfluidic Biochips

Zhao, Yang January 2011 (has links)
Digital microfluidics is an emerging technology that provides fluid-handling capability on a chip. Biochips based on digital microfluidics have therefore enabled the automation of laboratory procedures in biochemistry. By reducing the rate of sample and reagent consumption, digital microfluidic biochips allow continuous sampling and analysis for real-time biochemical analysis, with application to clinical diagnostics, immunoassays, and DNA sequencing. Recent advances in technology and applications serve as a powerful driver for research on computer-aided design (CAD) tools for biochips.

This thesis research is focused on a design automation framework that addresses chip synthesis, droplet routing, control-pin mapping, testing and diagnosis, and error recovery. In contrast to prior work on automated design techniques for digital microfluidics, the emphasis here is on practical CAD optimization methods that can target different design problems in a unified manner. Constraints arising from the underlying technology and the application domain are directly incorporated in the optimization framework.

The avoidance of cross-contamination during droplet routing is a key design challenge for biochips. As a first step in this thesis research, a droplet-routing method based on disjoint droplet routes has been developed to avoid cross-contamination during the design of droplet flow paths. A wash-operation synchronization method has been developed to synchronize wash-droplet routing steps with sample/reagent droplet-routing steps by controlling the order of arrival of droplets at cross-contamination sites.

In pin-constrained digital microfluidic biochips, concurrently-implemented fluidic operations may involve pin-actuation conflicts if they are not carefully synchronized. A two-phase optimization method has been proposed to identify and synchronize these fluidic operations. The goal is to implement these fluidic operations without pin-actuation conflict, and minimize the duration of implementing the outcome sequence after synchronization.

Due to the interdependence between droplet routing and pin-count reduction, this thesis presents two optimization methods to concurrently solve the droplet-routing and the pin-mapping design problems. First, an integer linear programming (ILP)-based optimization method has been developed to minimize the number of control pins. Next an efficient heuristic approach has been developed to tackle the co-optimization problem.

Dependability is an important system attribute for microfluidic biochips. Robust testing methods are therefore needed to ensure correct results. This thesis presents a built-in self-test (BIST) method for digital microfluidic biochips. This method utilizes digital microfluidic logic gates to implement the BIST architecture. A cost-effective fault diagnosis method has also been proposed to locate a single defective cell, multiple rows/columns with defective cells, as well as an unknown number of rows/columns-under-test with defective cells. A BIST method for on-line testing of digital microfluidic biochips has been proposed. An automatic test pattern generation (ATPG) method has been proposed for non-regular digital microfluidic chips. A pin-count-aware online testing method has been developed for pin-constrained designs to support the execution of both fault testing and the target bioassay protocol.

To better monitor and manage the execution of bioassays, control flow has been incorporated in the design and optimization framework. A synthesis method has been developed to incorporate control paths and an error-recovery mechanism during chip design. This method addresses the problem of recovering from fluidic errors that occur during on-chip bioassay execution.

In summary, this thesis research has led to a set of unified design tools for digital microfluidics. This work is expected to reduce human effort during biochip design and biochip usage, and enable low-cost manufacture and more widespread adoption for laboratory procedures.
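Control-pin reduction in pin-constrained biochips rests on the observation that electrodes whose actuation sequences never conflict can share a pin, which can be viewed as coloring a conflict graph. The greedy sketch below illustrates that idea only; it is not the ILP-based or heuristic co-optimization methods developed in the thesis, and the conflict set shown is invented.

```python
# Greedy conflict-graph coloring as an illustration of control-pin sharing.
def greedy_pin_mapping(num_electrodes, conflicts):
    """conflicts: set of (i, j) electrode pairs that must not share a pin."""
    adjacent = {e: set() for e in range(num_electrodes)}
    for i, j in conflicts:
        adjacent[i].add(j)
        adjacent[j].add(i)
    pin_of = {}
    for e in range(num_electrodes):          # assign pins electrode by electrode
        used = {pin_of[n] for n in adjacent[e] if n in pin_of}
        pin = 0
        while pin in used:
            pin += 1
        pin_of[e] = pin
    return pin_of

mapping = greedy_pin_mapping(6, {(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)})
print(mapping, "pins used:", len(set(mapping.values())))
```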
49

VLSI DESIGN AUTOMATION USING A HARDWARE PROGRAMMING LANGUAGE

Navabi, Zainalabedin, 1952- January 1981 (has links)
Manual design methods used successfully up to now for SSI and MSI parts are inadequate for logically complex and densely packed VLSI circuitry. Automating the design process has therefore become an essential goal of present-day practice. Hardware description languages form a useful front end to the design-automation process, which ultimately generates masks suitable for chip fabrication. AHPL has long been in use as a vehicle for the description of clock-mode digital systems. Supporting software packages include a simulator which allows the designer to debug a design at a functional level. A subsequent three-stage compiler extracts global information contained in the original AHPL description to produce a comprehensive data base. It then generates hardware specifications suitable for downstream design and manufacturing activities. The SLA is an evolution of the PLA concept. Design with SLAs has the notable advantage of allowing hardware representation of functional and layout information, while sidestepping the costly and time-consuming placement and routing problem. This dissertation describes a methodology for translating an AHPL description into an SLA form of hardware realization. The global information extracted from the AHPL data base plays a prominent part in guiding the heuristic placement and routing algorithms.