  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Bluenose II: Towards Faster Design and Verification of Pipelined Circuits

Chan, Ca Bol
The huge demand for electronic devices has driven semiconductor companies to create better products in terms of area, speed, power, etc., and to deliver them to market faster. Delay to market can result in lost opportunities, and the length of the design cycle directly affects the time to market. However, inadequate time for design and verification can introduce bugs that cause further delays, and correcting an error after manufacturing is very expensive: a bug in an ASIC found after fabrication requires respinning the mask at a cost of several million dollars. Even as the pressure to shorten design cycles grows, the size and complexity of digital hardware circuits have increased, which puts even greater pressure on design and verification productivity. Pipelining is one optimization technique that has contributed to the increased complexity of hardware design. Pipelining increases throughput by overlapping the execution of instructions. Pipelines are challenging to design and verify because the specification describes how instructions execute in sequence, while multiple instructions can be in flight in the pipeline at one time. The overlapping of instructions adds further complexity to the hardware in the form of hazards, which arise from resource conflicts, data dependencies, or speculation of parcels due to branch instructions. To address these issues, we present PipeNet, a metamodel for describing hardware designs at a higher level of abstraction, and Bluenose II, a graphical tool for manipulating a PipeNet model. PipeNet is based on a pipeline model in a formal pipeline verification framework; the pipeline model contains arbiters, flow-control state machines, datapath, and data routing. The designer describes the pipeline design using PipeNet, and from the PipeNet model Bluenose II generates synthesizable VHDL code and a HOL verification script. The ability to generate HOL scripts turns the HOL theorem prover into Bluenose II's external verification environment; a direct connection to HOL is implemented as a console that displays results from HOL directly in Bluenose II. The data structures that represent PipeNet are evaluated for their extensibility to accommodate future changes. Finally, a case study based on an implementation of a two-wide superscalar 32-bit RISC integer pipeline is conducted to examine the quality of the generated code and the entire design process in Bluenose II. VHDL code generation is improved over that of Bluenose I, Bluenose II's predecessor.
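To make the hazard discussion concrete, the following is a minimal sketch (not taken from the thesis, and unrelated to PipeNet's VHDL/HOL output) of a four-stage in-order pipeline that stalls on read-after-write hazards; the instruction format and register names are hypothetical.

```python
# Minimal pipeline model: four stages, in-order, no forwarding, so a dependent
# instruction stalls in decode until its producer has left writeback.
from collections import namedtuple

Instr = namedtuple("Instr", ["op", "dst", "srcs"])
STAGES = ["fetch", "decode", "execute", "writeback"]

def simulate(program):
    pipe = {s: None for s in STAGES}
    pc, cycle = 0, 0
    while pc < len(program) or any(pipe.values()):
        cycle += 1
        # Fetch a new instruction into the front of the pipe if there is room.
        if pipe["fetch"] is None and pc < len(program):
            pipe["fetch"], pc = program[pc], pc + 1
        # Read-after-write hazard: decode needs a register still in flight.
        busy = {i.dst for s in ("execute", "writeback") if (i := pipe[s])}
        stall = pipe["decode"] is not None and bool(set(pipe["decode"].srcs) & busy)
        print(f"cycle {cycle:2d}: " + "  ".join(
            f"{s}:{pipe[s].op if pipe[s] else '-':<4}" for s in STAGES)
            + ("  <- stall" if stall else ""))
        # End of cycle: advance back to front; a stall freezes fetch/decode
        # and injects a bubble into execute.
        pipe["writeback"] = pipe["execute"]
        pipe["execute"] = None if stall else pipe["decode"]
        if not stall:
            pipe["decode"], pipe["fetch"] = pipe["fetch"], None
    return cycle

prog = [Instr("load", "r1", []),
        Instr("add",  "r2", ["r1"]),   # depends on r1 -> stalls behind the load
        Instr("sub",  "r3", ["r2"])]   # depends on r2 -> stalls again
print("3 overlapped instructions finished in", simulate(prog), "cycles")
```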
22

EDA Solutions for Double Patterning Lithography

Mirsaeedi, Minoo January 2012
Extending optical lithography to the 32-nm node and beyond is impossible with existing single-exposure systems. As such, double patterning lithography (DPL) is the most promising option for achieving the required lithography resolution, where the target layout is printed with two separate imaging processes. Among the different DPL techniques, litho-etch-litho-etch (LELE) and self-aligned double patterning (SADP) are the most popular; they apply, respectively, two complete exposure-lithography steps, and one exposure-lithography step followed by a chemical imaging process. To realize double patterning, patterns located within a sub-resolution distance must be assigned to one of the imaging sub-processes, a step called layout decomposition. To achieve optimal design yield, the layout decomposition problem should be solved with respect to the characteristics and limitations of the applied DPL method. For example, although patterns can be split between the two sub-masks in the LELE method to generate conflict-free masks, such pattern splits are not favorable because of their sensitivity to lithography imperfections such as overlay error. On the other hand, pattern splitting is forbidden in the SADP method because it results in non-resolvable gap failures in the final image. In addition to functional yield, layout decomposition affects the parametric yield of designs printed by double patterning. To deal with both the functional and parametric challenges of DPL in dense and large layouts, this thesis addresses EDA solutions for DPL. To this end, we propose a statistical method to determine interconnect width and space for the LELE method under the effect of random overlay error. In addition to maximizing yield and achieving a near-optimal trade-off between different parametric requirements, the proposed method provides valuable insight into the trend of parametric and functional yields in future technology nodes. Next, we focus on self-aligned double patterning and propose layout design and decomposition methods that provide SADP-compatible layouts and litho-friendly decomposed layouts. Specifically, a grid-based ILP formulation of SADP decomposition is proposed to avoid decomposition conflicts and improve the overall printability of layout patterns. To overcome the limited applicability of this ILP-based method to fully decomposable layouts, a partitioning-based method is also proposed, which is additionally faster than the grid-based ILP decomposition. Moreover, an A*-based SADP-aware detailed routing method is proposed that performs detailed routing and layout decomposition simultaneously to avoid litho-limited layout configurations; the proposed router preserves the uniformity of pattern density between the two sub-masks of the SADP process. We finally extend our decomposition method from double patterning to triple patterning and formulate self-aligned triple patterning (SATP) decomposition as an integer linear program. In addition to conventional minimum-width and minimum-spacing constraints, the proposed decomposition method minimizes mandrel-trim co-defined edges and maximizes the layout features printed by structural spacers to achieve minimum pattern distortion. This thesis is among the earliest works to investigate the concept of litho-friendliness in SADP-aware layout design and decomposition. As shown by experimental results, the proposed methods advance prior state-of-the-art algorithms in several respects: the SADP decomposition methods improve the total length of sensitive trim edges, total edge placement error (EPE), and the overall printability of the attempted designs; the SADP-aware detailed routing method provides SADP-decomposable layouts in which trim patterns are highly robust to lithography imperfections; and the SATP decomposition results show that the total length of overlay-sensitive layout patterns, total EPE, and overall printability are also improved considerably. Additionally, the methods in this thesis reveal several insights that can be used to improve the manufacturability of upcoming technology nodes.
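The decomposition step described above is often pictured as two-coloring a conflict graph. The sketch below is a generic illustration under that assumption, not the thesis's ILP or partitioning algorithm; the feature coordinates and spacing rule are invented.

```python
from collections import deque
from itertools import combinations
import math

# Hypothetical feature centers and a minimum same-mask spacing rule.
features = {"A": (0.0, 0.0), "B": (1.0, 0.0), "C": (2.0, 0.0), "D": (1.0, 1.0)}
MIN_SAME_MASK_SPACING = 1.5

# Conflict edge between any two features too close to share a mask.
conflicts = {f: set() for f in features}
for (f, p), (g, q) in combinations(features.items(), 2):
    if math.dist(p, q) < MIN_SAME_MASK_SPACING:
        conflicts[f].add(g)
        conflicts[g].add(f)

# BFS two-coloring of the conflict graph; a same-color edge marks a native
# conflict (an odd cycle) that LELE could break with a stitch but SADP cannot.
mask, native = {}, set()
for start in features:
    if start in mask:
        continue
    mask[start] = 0
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in conflicts[u]:
            if v not in mask:
                mask[v] = 1 - mask[u]
                queue.append(v)
            elif mask[v] == mask[u]:
                native.add(frozenset((u, v)))

print("mask assignment:", mask)
print("native conflicts:", [tuple(sorted(e)) for e in native])
```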
23

Algorithms for VLSI Circuit Optimization and GPU-Based Parallelization

Liu, Yifang, May 2010
This research addresses several critical challenges in VLSI design automation, including sophisticated solution search on DAG topologies, simultaneous multi-stage design optimization, optimization for multi-scenario and multi-core designs, and GPU-based parallel computing for runtime acceleration. Discrete optimization for VLSI design automation problems is often quite complex, due to the inconsistency and interference between solutions on reconvergent paths in a directed acyclic graph (DAG). This research proposes a systematic solution search guided by a global view of the solution space. The key idea is joint relaxation and restriction (JRR), which is similar in spirit to mathematical relaxation techniques such as Lagrangian relaxation; relaxation and restriction together provide a global view and iteratively improve the solution. Traditionally, circuit optimization is carried out in a sequence of separate optimization stages. The problem with sequential optimization is that the best solution in one stage may be worse for another. To overcome this difficulty, we perform multiple optimization techniques simultaneously: by searching in the combined solution space of several techniques, a broader view of the problem leads to better overall optimization results. This research takes this approach on two problems, namely simultaneous technology mapping and cell placement, and simultaneous gate sizing and threshold voltage assignment. Modern processors have multiple working modes, which trade off power consumption against performance or maintain a certain performance level in a power-efficient way. As a result, the design of a circuit needs to accommodate different scenarios, such as different supply voltage settings. This research handles the multi-scenario optimization problem with Lagrangian relaxation: multiple scenarios are taken care of simultaneously through the balance imposed by the Lagrangian multipliers, and multiple objectives and constraints are likewise dealt with simultaneously. A new method is proposed to calculate the subgradients of the Lagrangian function and solve the Lagrangian dual problem more effectively. Multi-core architectures also pose new problems and challenges for design automation. For example, multiple cores on the same chip may have identical designs in some parts while differing in the rest. In the case of buffer insertion, the identical part has to be carefully optimized for all the cores, each with different environmental parameters; this problem has much higher complexity than buffer insertion on a single core. This research proposes an algorithm that optimizes the buffering solution for multiple cores simultaneously, based on critical component analysis. Under intensifying time-to-market pressure, circuit optimization not only needs to find high-quality solutions but also has to produce results fast. Recent advances in general-purpose graphics processing unit (GPGPU) technology provide massive parallel computing power. This research turns the complex computation task of circuit optimization into many subtasks processed by parallel threads; the proposed task partitioning and scheduling methods take advantage of the GPU's computing power and achieve significant speedup without sacrificing solution quality.
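As a rough illustration of the Lagrangian-relaxation idea described above, the following toy sketch sizes a three-gate path for two scenarios using subgradient multiplier updates. It is not the dissertation's algorithm; the delay/power models, scenarios, and step sizes are all made up.

```python
SIZES = [1, 2, 4]                                  # discrete gate sizes
GATES = ["g1", "g2", "g3"]                         # a three-gate path
SCENARIOS = {"nominal": 1.0, "low_vdd": 1.6}       # delay scaling per scenario
TARGET = {"nominal": 6.0, "low_vdd": 9.0}          # path delay budget per scenario

def power(x):                  # toy model: power grows linearly with size
    return 1.0 * x

def delay(x, scale):           # toy model: delay shrinks with size
    return scale * 4.0 / x

lam = {s: 0.0 for s in SCENARIOS}                  # one multiplier per constraint
best = None

for it in range(50):
    # Inner problem: with path delay a sum over gates, the Lagrangian separates,
    # so each gate minimizes its own local cost independently.
    choice = {g: min(SIZES, key=lambda x: power(x) + sum(
        lam[s] * delay(x, SCENARIOS[s]) for s in SCENARIOS)) for g in GATES}
    # Subgradient of the dual = constraint violation in each scenario.
    slack = {s: sum(delay(choice[g], SCENARIOS[s]) for g in GATES) - TARGET[s]
             for s in SCENARIOS}
    if all(v <= 0 for v in slack.values()):
        best = dict(choice)                        # remember a feasible sizing
    step = 0.1 / (1 + it)                          # diminishing step size
    lam = {s: max(0.0, lam[s] + step * slack[s]) for s in SCENARIOS}

print("multipliers:", {s: round(v, 3) for s, v in lam.items()})
print("feasible sizing found:", best,
      "power =", sum(power(x) for x in best.values()) if best else None)
```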
24

Physical design for performance and thermal and power-supply reliability in modern 2D and 3D microarchitectures

Healy, Michael Benjamin 27 August 2010
The main objective of this research is to examine the performance, power-noise, and thermal trade-offs in modern traditional (2D) and three-dimensionally integrated (3D) architectures and to present design automation tools and physical design methodologies that enable higher reliability while maintaining microarchitectural performance for these systems. Five main research topics support this goal: the first focuses on thermal reliability; the second, third, and fourth examine power-supply noise; and the final topic presents a set of physical design and analysis methodologies used to produce a 3D design that was sent for fabrication in March of 2010. The first section of this dissertation details a microarchitectural floorplanning algorithm that lets the user choose and adjust the trade-off between microarchitectural performance and general operating temperature in both 2D and 3D systems, a major determinant of overall reliability and chip lifetime. Simulation results demonstrate that the algorithm performs as expected and successfully provides the user with the desired trade-off. The first section also presents a thermal-aware microarchitectural floorplanning algorithm designed to help reduce the operating temperature of the cores in the unique environment within multi-core processors; heat coupling between neighboring cores is considered during the optimization process to provide floorplans that result in lower maximum temperature. The second section explores power-supply noise in processors caused by fine-grained clock-gating and describes a floorplanning algorithm created to work with an active noise-canceling clock-gating controller. Simulation results show that combining these two techniques results in lower power-supply noise with minimal impact on processor performance. The third section turns to future 3D systems with a large number of stacked active layers (many-tier systems) and examines the power-supply delivery challenges in these systems. Parasitic resistance, capacitance, and inductance are calculated for the 3D vias, and the results of scaling various parameters in the power-supply-network design are presented; several techniques for reducing power-supply-network noise in these many-tier systems are explored. The fourth section describes a layout-level analysis of a novel power-distribution through-silicon-via topology and its effect on IR drop and dynamic noise. Simulations show that both types of power-supply noise can be reduced by more than 20% in systems with non-uniform per-tier power dissipation when using the proposed topology. The final section explains the physical design and analysis techniques used to produce the layouts for 3D-MAPS, a 64-core 3D-stacked memory-on-processor system targeted at demonstrating large memory bandwidth using 3D connections. The 3D-aware physical design flow, built on non-3D-aware commercial tools, is detailed, along with the techniques and add-ons developed to enable this process.
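A crude sketch of the temperature/performance trade-off in floorplanning follows: a toy annealer swaps blocks on a grid and weights a wirelength proxy against a heat-coupling proxy through a user parameter alpha. It only illustrates the idea of a tunable trade-off, not the dissertation's floorplanner; the blocks, powers, and thermal model are invented.

```python
import math
import random

random.seed(0)

BLOCKS = {"fetch": 3.0, "decode": 4.0, "regfile": 6.0,
          "alu": 8.0, "fpu": 9.0, "cache": 2.0}            # block -> power (W)
NETS = [("fetch", "decode"), ("decode", "regfile"), ("regfile", "alu"),
        ("regfile", "fpu"), ("alu", "cache")]              # connectivity
SLOTS = [(x, y) for x in range(3) for y in range(2)]       # 3x2 grid of slots

def wirelength(place):          # performance proxy: total Manhattan net length
    return sum(abs(place[a][0] - place[b][0]) + abs(place[a][1] - place[b][1])
               for a, b in NETS)

def hotspot(place):             # thermal proxy: own power + half of neighbours'
    occupied = {p: b for b, p in place.items()}
    worst = 0.0
    for b, (x, y) in place.items():
        coupled = sum(BLOCKS[occupied[(x + dx, y + dy)]]
                      for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                      if (x + dx, y + dy) in occupied)
        worst = max(worst, BLOCKS[b] + 0.5 * coupled)
    return worst

def cost(place, alpha):
    return alpha * wirelength(place) + (1.0 - alpha) * hotspot(place)

def anneal(alpha, temp=10.0, cooling=0.995, moves=3000):
    place = dict(zip(BLOCKS, SLOTS))
    cur = cost(place, alpha)
    best, best_cost = dict(place), cur
    for _ in range(moves):
        a, b = random.sample(list(BLOCKS), 2)
        place[a], place[b] = place[b], place[a]            # swap two blocks
        new = cost(place, alpha)
        if new < cur or random.random() < math.exp((cur - new) / temp):
            cur = new                                      # accept the move
            if new < best_cost:
                best, best_cost = dict(place), new
        else:
            place[a], place[b] = place[b], place[a]        # undo the swap
        temp *= cooling
    return best

for alpha in (0.9, 0.1):   # performance-leaning vs. temperature-leaning weight
    p = anneal(alpha)
    print(f"alpha={alpha}: wirelength={wirelength(p)} hotspot={hotspot(p):.1f}")
```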
25

Robust, Low Power, Discrete Gate Sizing

Casagrande, Anthony Joseph 01 January 2015
Ultra-deep-submicron circuits require accurate modeling of gate delay in order to meet aggressive timing constraints. With the lack of statistical data, variability due to the mechanical manufacturing process and its chemical properties poses a challenging problem. Discrete gate sizing requires (i) accurate models that take into account random parametric variation and (ii) a fair allocation of resources to optimize the solution. The proposed GTFUZZ gate sizing algorithm handles both tasks. Gate sizing is modeled as a resource allocation problem using fuzzy game theory; delay is modeled as a constraint and power is optimized. In GTFUZZ, delay is modeled as a fuzzy goal with fuzzy parameters to capture the imprecision of gate delay early in the design phase, when extensive empirical data is absent. Dynamic power is modeled as a fuzzy goal without varying coefficients. The fuzzy goals provide a flexible platform for multimetric optimization. The robust GTFUZZ algorithm is compared against fuzzy linear programming (FLP) and deterministic worst-case FLP (DWCFLP) algorithms. The benchmark circuits are first synthesized, placed, routed, and optimized for performance using the Synopsys University 32/28 nm standard cell library and technology files. Operating at the optimized clock frequency, results show an average power reduction of about 20% versus DWCFLP and 9% versus variation-aware gate sizing with FLP. Timing and timing yield are verified by both Synopsys PrimeTime and Monte Carlo HSPICE simulations of the critical paths.
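The sketch below illustrates the generic max-min fuzzy-goal idea that fuzzy sizing formulations like those compared above build on: each candidate size gets a membership (satisfaction) value for the delay goal and for the power goal, and the size maximizing the smaller membership is chosen. It is not the GTFUZZ algorithm itself, and all numbers are invented.

```python
def membership(value, good, bad):
    """1.0 when value <= good, 0.0 when value >= bad, linear in between."""
    if value <= good:
        return 1.0
    if value >= bad:
        return 0.0
    return (bad - value) / (bad - good)

# Candidate sizes with toy delay (ns) and dynamic power (uW) estimates.
candidates = {1: (2.4, 10.0), 2: (1.6, 18.0), 4: (1.1, 33.0)}

DELAY_GOOD, DELAY_BAD = 1.2, 2.5     # fuzzy delay goal (aspiration / rejection)
POWER_GOOD, POWER_BAD = 12.0, 35.0   # fuzzy power goal

def satisfaction(size):
    d, p = candidates[size]
    return min(membership(d, DELAY_GOOD, DELAY_BAD),
               membership(p, POWER_GOOD, POWER_BAD))

best = max(candidates, key=satisfaction)
print({s: round(satisfaction(s), 3) for s in candidates}, "-> pick size", best)
```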
26

Physical design automation of structured high-performance integrated circuits

Ward, Samuel Isaac 06 February 2014
During the last forty years, advances have pushed state-of-the-art placers to impressive performance, placing modern multimillion-gate designs in under an hour. Wide industry adoption of the analytical framework indicates the quality of these approaches. However, modern designs present significant challenges in addressing the multi-objective requirements of multi-GHz designs. As devices continue to scale, wires become more resistive and power constraints significantly dampen performance gains, so continued improvement in placement quality is necessary. Additionally, placement has become more challenging with the integration of multi-objective constraints such as routability, timing, and reliability. These constraints intensify the challenge of producing quality placement solutions and must be handled carefully. Exacerbating the issue, shrinking schedules and budgets are requiring increased automation, blurring the boundary between manual and automated placement. An example of this new hybrid design style is the integration of structured placement constraints within traditional ASIC-style circuit structures. Structure-aware placement is a significant challenge for modern high-performance physical design flows. The goal of this dissertation is to develop enhancements to state-of-the-art placement flows that overcome their inadequacies for structured circuits. A key observation is that specific structures exist where modern analytical placement frameworks significantly underperform. Accurately measuring the suboptimality of a particular placement solution, however, is very challenging. As such, this work begins by designing a series of structured placement benchmarks. Generating placements for the benchmarks manually offers the opportunity to accurately quantify placer performance. The latest generation of academic placers is then compared to evaluate how they perform on these design styles. Results of this work lead to discoveries in three key aspects of modern physical design flows. Datapath placement is the first aspect examined. This work narrows the focus to datapath-style circuits that contain high-fanout nets; as the datapath benchmarks showed, these high-fanout nets misdirect analytical placement flows. To handle these circuit styles effectively, this work proposes a new unified placement flow that simultaneously places random-logic and datapath cells. The flow is built on top of a leading academic force-directed placer and significantly improves the quality of datapath placement while leveraging the speed and flexibility of existing algorithms. Effectively placing these circuits is not enough, because in modern high-performance designs datapath circuits are often embedded within a larger ASIC-style circuit and thus are not known in advance. As such, the next aspect of structured placement applies data-learning techniques to train, predict, and evaluate potential structured circuits. Extracted circuits are mapped to groups that are aligned and simultaneously placed with the random logic. The third aspect that can be enhanced with improved structured placement is local clock tree synthesis. Performance and power requirements for multi-GHz microprocessors necessitate a grid-based clock network methodology, wherein a global clock grid is overlaid on the entire die area and followed by local buffered clock trees. This clock mesh methodology is driven by three key reasons: first, full trees do not offer enough performance for modern microprocessors; second, clock trees offer significant power savings over full clock meshes; and third, local clock trees reduce local clock wiring demands at the lower metal layers compared to full meshes. To meet these demands, a shift in latch placement methodology is proposed using structured placement templates: placement configurations with significantly lower capacitance are identified a priori, and these solutions are developed into placement templates. Careful experimentation demonstrates the effectiveness of these approaches and their potential impact on modern high-speed designs.
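As a small illustration of why latch-placement templates can be evaluated a priori, the sketch below compares the clock wiring capacitance a local clock buffer would see for two hypothetical latch arrangements. It is not the dissertation's methodology; the geometries and capacitance-per-unit figure are invented.

```python
CAP_PER_UM = 0.2e-15      # F per um of clock wire, illustrative only

def clock_cap(buffer_xy, latch_positions):
    """Crude estimate: star routing from the buffer, Manhattan distance per latch."""
    bx, by = buffer_xy
    wire = sum(abs(x - bx) + abs(y - by) for x, y in latch_positions)
    return wire * CAP_PER_UM

templates = {
    # 16 latches in a single row vs. a 4x4 cluster around the buffer.
    "row":     [(i * 2.0, 0.0) for i in range(16)],
    "cluster": [(2.0 * (i % 4), 2.0 * (i // 4)) for i in range(16)],
}

buffer_xy = (4.0, 2.0)    # hypothetical local clock buffer location (um)
for name, latches in templates.items():
    print(name, f"{clock_cap(buffer_xy, latches) * 1e15:.1f} fF")
```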
27

VLSI physical design automation for double patterning and emerging lithography

Yuan, Kun, 1983- 07 February 2011
Due to aggressive scaling in the semiconductor industry, the traditional optical lithography system faces great challenges in printing circuit layouts at 32 nm and below. Various promising nanolithography techniques have been developed as alternative solutions for patterning sub-32-nm feature sizes. This dissertation studies physical-design optimization problems for these emerging methodologies, focusing mainly on double patterning and electron beam lithography. Double Patterning Lithography (DPL) decomposes a single layout into two masks and patterns the chip in two exposure steps. As a benefit, the pitch size is doubled, which enhances resolution. However, the decomposition process is not a trivial task: conflicts and stitches are its two main manufacturing challenges. First, a post-routing layout decomposer has been developed to perform simultaneous conflict and stitch minimization, making use of integer linear programming and efficient graph-reduction techniques. Compared to previous work, which optimizes conflicts and stitches separately, the proposed method produces significantly better results. Redundant via insertion, another key yield-improvement technique, can increase the difficulty of DPL compliance: if not carefully planned and inserted, it can easily introduce unmanufacturable conflicts. Two algorithms have been developed to handle this redundant-via DPL-compliance problem on the design side. When the design itself is not DPL-friendly, post-routing decomposition may not achieve satisfactory solution quality; an efficient framework, WISDOM, has therefore been proposed to perform wire spreading for better conflict and stitch elimination. The solution quality is improved to a great extent, with only small extra layout perturbations. As another promising solution for sub-22 nm, electron beam lithography (EBL) is a maskless technology that writes the desired patterns directly onto a silicon wafer with a charged-particle beam. EBL overcomes the diffraction limit of light in current optical lithography systems; however, low throughput remains its key technical hurdle. The last part of this dissertation formulates and investigates a bin-packing problem for reducing the processing time of EBL.
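The bin-packing angle mentioned at the end can be illustrated with a simple first-fit-decreasing heuristic, sketched below under the assumption that patterns of given sizes are packed into fixed-capacity regions; this is a generic illustration, not the dissertation's formulation, and the sizes are invented.

```python
def first_fit_decreasing(items, capacity):
    bins = []                                  # each bin is a list of (name, size)
    for name, size in sorted(items.items(), key=lambda kv: -kv[1]):
        for b in bins:                         # try the first bin with room
            if sum(s for _, s in b) + size <= capacity:
                b.append((name, size))
                break
        else:                                  # no open bin fits: open a new one
            bins.append([(name, size)])
    return bins

patterns = {"p1": 7, "p2": 5, "p3": 4, "p4": 4, "p5": 3, "p6": 2, "p7": 1}
for i, b in enumerate(first_fit_decreasing(patterns, capacity=10)):
    print(f"region {i}: {b} (used {sum(s for _, s in b)}/10)")
```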
29

Automated Design Analysis Of Anti-roll Bars

Caliskan, Kemal 01 January 2003
Vehicle anti-roll bars are suspension components used to limit body roll angle, and they have a direct effect on the handling characteristics of a vehicle. Design changes to anti-roll bars are quite common at various stages of vehicle production, and a design analysis must be performed for each change. Finite Element Analysis (FEA) can be used effectively in the design analysis of anti-roll bars. However, due to the high number of repeated design analyses, the analysis time and cost associated with general-purpose FEA packages can be a considerable disadvantage. In this study, an automated design program is developed for performing design analysis of vehicle anti-roll bars. The program is composed of two parts: the user interface and the FEA macro. The FEA macro contains the code for performing deformation, stress, fatigue, and modal analysis of anti-roll bars in ANSYS 7.0. The user interface, written in Visual Basic 6.0, provides the forms for data input and result output. With the developed software, FEA of anti-roll bars is reduced to simple data entry through the user interface; the program controls the flow of the analysis, and the finite element analysis is performed by ANSYS in the background. The developed software can perform design analysis for a wide range of anti-roll bars: the bar centerline can have any 3D shape, the cross-section can be solid or hollow circular, the end connections can be of pin or spherical-joint type, and the bushings can be mounted at any position on the bar with a user-defined bushing length. The effects of anti-roll bar design parameters on the final anti-roll bar properties are also evaluated by performing sample analyses with the automated design program developed in this study.
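For context, the sketch below shows the textbook hand calculation for an idealized anti-roll bar (pure torsion of a straight circular central section with rigid lever arms), the kind of quick estimate the FEA-based flow above goes well beyond; it is not part of the described software, and all dimensions are illustrative.

```python
import math

def torsion_constant(d_outer, d_inner=0.0):
    """Polar second moment of area J for a solid or hollow circular section (m^4)."""
    return math.pi * (d_outer**4 - d_inner**4) / 32.0

def tip_stiffness(G, L, arm, d_outer, d_inner=0.0):
    """Vertical stiffness at one arm tip (N/m) with the other end held:
    torque T = F*arm, twist theta = T*L/(G*J), deflection = theta*arm,
    hence k = G*J / (arm**2 * L). Rigid arms, pure torsion only."""
    J = torsion_constant(d_outer, d_inner)
    return G * J / (arm**2 * L)

G_STEEL = 79.3e9          # shear modulus of steel, Pa
L_BAR = 1.1               # length of the straight central section, m
ARM = 0.30                # lever-arm length, m
solid = tip_stiffness(G_STEEL, L_BAR, ARM, d_outer=0.025)
hollow = tip_stiffness(G_STEEL, L_BAR, ARM, d_outer=0.028, d_inner=0.018)
print(f"solid 25 mm bar:     {solid / 1000:.1f} kN/m")
print(f"hollow 28x18 mm bar: {hollow / 1000:.1f} kN/m")
```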
30

NANOPIPELINED THRESHOLD SYNTHESIS USING GATE REPLICATION

Pierce, Luke 01 August 2011
Threshold logic gates allow complex multi-input functions to be implemented in a single gate, reducing the power and area of the circuit. Clocked threshold gates have the additional advantage that they can be nanopipelined to increase network throughput. To produce a threshold network, the proposed algorithm accepts a traditional algebraic Boolean network as input and resynthesizes it into a nanopipelined threshold logic network. To our knowledge, the algorithm is the first that not only minimizes the number of clusters produced when synthesizing the algebraic Boolean network but also minimizes the associated buffer-insertion overhead in producing a clocked threshold gate network.
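A threshold gate outputs 1 exactly when the weighted sum of its inputs reaches the threshold. The sketch below, which is only an illustration and not the proposed synthesis algorithm, checks that a single unit-weight gate with threshold 2 realizes the carry-out (majority) function of a full adder, a function that otherwise needs several ordinary Boolean gates.

```python
from itertools import product

def threshold_gate(inputs, weights, T):
    # Output 1 iff the weighted input sum reaches the threshold T.
    return int(sum(w * x for w, x in zip(weights, inputs)) >= T)

# Carry-out of a full adder: majority(a, b, cin) with unit weights, threshold 2.
for a, b, cin in product((0, 1), repeat=3):
    gate = threshold_gate((a, b, cin), weights=(1, 1, 1), T=2)
    boolean = (a & b) | (a & cin) | (b & cin)
    assert gate == boolean
print("one threshold gate matches the multi-gate Boolean carry network on all inputs")
```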
