391

On the Parameter Selection Problem in the Newton-ADI Iteration for Large Scale Riccati Equations

Benner, Peter, Mena, Hermann, Saak, Jens 26 November 2007 (has links) (PDF)
The numerical treatment of linear-quadratic regulator problems for parabolic partial differential equations (PDEs) on infinite time horizons requires the solution of large scale algebraic Riccati equations (ARE). The Newton-ADI iteration is an efficient numerical method for this task. It includes the solution of a Lyapunov equation by the alternating directions implicit (ADI) algorithm in each iteration step. On finite time intervals the solution of a large scale differential Riccati equation is required; this can be computed by a backward differentiation formula (BDF) method, which in turn needs to solve an ARE in each time step. Here, we study the selection of shift parameters for the ADI method. This leads to a rational min-max problem which has been considered by many authors. Since computing the optimal solution requires knowledge of the complete complex spectrum, it is infeasible for the large scale systems arising from finite element discretizations of PDEs. Therefore several alternatives for computing suboptimal parameters are discussed and compared on numerical examples.
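To make the ADI building block concrete, here is a minimal numpy sketch of the dense two-step ADI iteration for a Lyapunov equation AX + XA^T = -W with a cycled set of real negative shifts. It is an illustration, not the authors' implementation: the function names and toy problem are ours, the shifts here are cheaply spread over the true eigenvalues (which is exactly what is unavailable at large scale; Penzl-style heuristics use Ritz values from a few Arnoldi steps instead), and production Newton-ADI codes keep the iterate in low-rank factored form.

```python
import numpy as np

def adi_lyapunov(A, W, shifts, iters=30):
    """Dense two-step ADI for A X + X A^T = -W with real negative shifts.
    Sketch only: large-scale codes work with low-rank factors of X and
    reuse sparse factorizations of (A + p I)."""
    n = A.shape[0]
    I = np.eye(n)
    X = np.zeros((n, n))
    for k in range(iters):
        p = shifts[k % len(shifts)]
        # Half step: (A + p I) X_half = -W - X (A^T - p I)
        X_half = np.linalg.solve(A + p * I, -W - X @ (A.T - p * I))
        # Full step: X_new (A^T + p I) = -W - (A - p I) X_half,
        # solved via its transpose (A + p I) X_new^T = RHS^T.
        X = np.linalg.solve(A + p * I, (-W - (A - p * I) @ X_half).T).T
    return X

# Toy stable system; in the PDE setting the spectrum is not available,
# so suboptimal shifts come from Ritz-value heuristics rather than eigvals.
rng = np.random.default_rng(0)
A = -np.diag(np.logspace(0, 3, 50)) + 0.01 * rng.standard_normal((50, 50))
B = rng.standard_normal((50, 2))
W = B @ B.T
shifts = np.sort(np.linalg.eigvals(A).real)[::12]  # a few spread-out values
X = adi_lyapunov(A, W, shifts)
print("Lyapunov residual:", np.linalg.norm(A @ X + X @ A.T + W))
```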
392

Gramian-Based Model Reduction for Data-Sparse Systems

Baur, Ulrike, Benner, Peter 27 November 2007 (has links) (PDF)
Model reduction is a common theme within the simulation, control and optimization of complex dynamical systems. For instance, in control problems for partial differential equations, the associated large-scale systems have to be solved very often. To attack these problems in reasonable time it is absolutely necessary to reduce the dimension of the underlying system. We focus on model reduction by balanced truncation, where a system theoretical background provides some desirable properties of the reduced-order system. The major computational task in balanced truncation is the solution of large-scale Lyapunov equations, which makes the method of limited use for really large-scale applications. We develop an effective implementation of balancing-related model reduction methods by exploiting the structure of the underlying problem. This is done by a data-sparse approximation of the large-scale state matrix A using the hierarchical matrix format. Furthermore, we integrate the corresponding formatted arithmetic into the sign function method for computing approximate solution factors of the Lyapunov equations. This approach is well-suited for a class of practically relevant problems and allows the application of balanced truncation and related methods to systems coming from 2D and 3D FEM and BEM discretizations.
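For orientation, the dense square-root form of balanced truncation fits in a few lines. The thesis's contribution is to replace the dense Gramian computation below with sign-function iterations in hierarchical-matrix arithmetic, which this sketch does not attempt; all names and the toy system are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, svd

def psd_factor(M):
    """Symmetric factor F with M = F F^T (robust to tiny negative eigs)."""
    w, V = np.linalg.eigh((M + M.T) / 2)
    return V * np.sqrt(np.clip(w, 0.0, None))

def balanced_truncation(A, B, C, r):
    """Dense square-root balanced truncation to order r (sketch only)."""
    P = solve_continuous_lyapunov(A, -B @ B.T)    # controllability Gramian
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)  # observability Gramian
    S, R = psd_factor(P), psd_factor(Q)           # P = S S^T, Q = R R^T
    U, hsv, Vt = svd(R.T @ S)                     # Hankel singular values
    Tl = np.diag(hsv[:r] ** -0.5) @ U[:, :r].T @ R.T   # left projection
    Tr = S @ Vt[:r, :].T @ np.diag(hsv[:r] ** -0.5)    # right projection
    return Tl @ A @ Tr, Tl @ B, C @ Tr, hsv

# Toy stable system; hsv shows how fast the Hankel singular values decay.
rng = np.random.default_rng(1)
n = 40
A = rng.standard_normal((n, n))
A -= (np.linalg.eigvals(A).real.max() + 1.0) * np.eye(n)  # force stability
B, C = rng.standard_normal((n, 2)), rng.standard_normal((2, n))
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=6)
print("retained / first discarded HSV:", hsv[5], hsv[6])
```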
393

MULTIRATE INTEGRATION OF TWO-TIME-SCALE DYNAMIC SYSTEMS

Keepin, William North. January 1980 (has links)
Simulation of large physical systems often leads to initial value problems in which some of the solution components contain high frequency oscillations and/or fast transients, while the remaining solution components are relatively slowly varying. Such a system is referred to as two-time-scale (TTS), which is a partial generalization of the concept of stiffness. When using conventional numerical techniques for integration of TTS systems, the rapidly varying components dictate the use of small stepsizes, with the result that the slowly varying components are integrated very inefficiently. This could mean that the computer time required for integration is excessive. To overcome this difficulty, the system is partitioned into "fast" and "slow" subsystems, containing the rapidly and slowly varying components of the solution respectively. Integration is then performed using small stepsizes for the fast subsystem and relatively large stepsizes for the slow subsystem. This is referred to as multirate integration, and it can lead to substantial savings in computer time required for integration of large systems having relatively few fast solution components. This study is devoted to multirate integration of TTS initial value problems which are partitioned into fast and slow subsystems. Techniques for partitioning are not considered here. Multirate integration algorithms based on explicit Runge-Kutta (RK) methods are developed. Such algorithms require a means for communication between the subsystems. Internally embedded RK methods are introduced to aid in computing interpolated values of the slow variables, which are supplied to the fast subsystem. The use of averaging in the fast subsystem is discussed in connection with communication from the fast to the slow subsystem. Theoretical support for this is presented in a special case. A proof of convergence is given for a multirate algorithm based on Euler's method. Absolute stability of this algorithm is also discussed. Four multirate integration routines are presented. Two of these are based on a fixed-step fourth order RK method, and one is based on the variable step Runge-Kutta-Merson scheme. The performance of these routines is compared to that of several other integration schemes, including Gear's method and Hindmarsh's EPISODE package. For this purpose, both linear and nonlinear examples are presented. It is found that multirate techniques show promise for linear systems having eigenvalues near the imaginary axis. Such systems are known to present difficulty for Gear's method and EPISODE. A nonlinear TTS model of an autopilot is presented. The variable step multirate routine is found to be substantially more efficient for this example than any other method tested. Preliminary results are also included for a pressurized water reactor model. Indications are that multirate techniques may prove fruitful for this model. Lastly, an investigation of the effects of the step-size ratio (between subsystems) is included. In addition, several suggestions for further work are given, including the possibility of using multistep methods for integration of the slow subsystem.
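Since the dissertation's convergence proof concerns an Euler-based multirate algorithm, a minimal sketch of such a scheme may help fix ideas: a slowest-first ordering, with the slow variables linearly interpolated for the fast subsystem (the job the internally embedded RK methods do at higher order). The coupling example and all names below are our own illustration, not the dissertation's routines.

```python
import numpy as np

def multirate_euler(f_slow, f_fast, ys, yf, t, t_end, H, m):
    """One slow Euler step of size H per m fast substeps of size H/m.
    Slow values seen by the fast subsystem are linearly interpolated
    across the slow step. Fixed-step sketch only."""
    ys, yf = np.asarray(ys, float), np.asarray(yf, float)
    h = H / m
    while t < t_end - 1e-9:
        ys_new = ys + H * f_slow(t, ys, yf)        # slowest-first step
        for j in range(m):                         # fast substeps
            ys_j = ys + (j / m) * (ys_new - ys)    # interpolated slow value
            yf = yf + h * f_fast(t + j * h, ys_j, yf)
        ys, t = ys_new, t + H
    return ys, yf

# Two-time-scale test problem: a slow decay coupled to a fast mode.
f_slow = lambda t, ys, yf: -ys + 0.05 * yf
f_fast = lambda t, ys, yf: -50.0 * (yf - ys)
ys, yf = multirate_euler(f_slow, f_fast, [1.0], [0.0], 0.0, 5.0, H=0.02, m=20)
print(ys, yf)
```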
394

MODULAR IMPLEMENTATION OF A DIGITAL HARDWARE DESIGN AUTOMATION SYSTEM

Masud, Manzer, 1950- January 1981 (has links)
With the advent of LSI and VLSI technology, the demand for and affordability of custom-tailored designs have increased considerably. A short turnaround time is desirable along with more credible testing techniques. For a low-production device it is necessary to reduce the time and money spent in the design process. Traditional hardware design automation techniques rely on extensive engineer interaction. A detailed description of the circuit to be manufactured must be entered manually. It is often necessary to prepare a separate description for each phase of the design process. In order to be successful, a modern design automation system must be capable of supporting all phases of design activities from a single circuit description. It must also provide an adequate level of abstraction so that the circuit may be described conveniently and concisely. Such abstraction is provided by computer hardware description languages (CHDL). In this research, an automation system based on AHPL (A Hardware Programming Language) has been developed. The project may be divided into three distinct phases: (1) upgrading of AHPL to make it more universally applicable; (2) implementation of a compiler for the language; and (3) illustration of how the compiler may be used to support several phases of design activities. Several new features have been added to AHPL. These include: application-dependent parameters, multiple clocks, asynchronous results, functional registers and primitive functions. The new language, called Universal AHPL, has been defined rigorously. The compiler design is modular. The parsing is done by an automatic parser generated from the SLR(1) BNF grammar of the language. The compiler produces two data bases from the AHPL description of a circuit. The first one is a tabular representation of the circuit, and the second one is a detailed interconnection linked list. The two data bases provide a means to interface the compiler to application-dependent CAD systems. Finally, a discussion on how the AHPL compiler can be interfaced to other CAD systems is given, followed by examples from current applications and from ongoing research projects. These applications illustrate the usefulness of a CHDL-based approach to the design of digital hardware automation systems.
395

VLSI DESIGN AUTOMATION USING A HARDWARE PROGRAMMING LANGUAGE

Navabi, Zainalabedin, 1952- January 1981 (has links)
Manual design methods used successfully up to now for SSI and MSI parts are inadequate for logically complex and densely packed VLSI circuitry. Automating the design process has, therefore, become an essential goal of present-day practice. Hardware description languages form a useful front-end to the design-automation process which ultimately generates masks suitable for chip fabrication. AHPL has long been in use as a vehicle for the description of clock-mode digital systems. Supporting software packages include a simulator which allows the designer to debug his design at a functional level. A subsequent 3-stage compiler extracts global information contained in the original AHPL description to produce a comprehensive database. It then generates hardware specifications suitable for downstream design and manufacturing activities. The SLA is an evolution of the PLA concept. Design with SLAs has the notable advantage of allowing hardware representation of functional and layout information, while sidestepping the costly and time-consuming placement and routing problem. This dissertation describes a methodology for translating an AHPL description into an SLA hardware realization. The global information extracted from the AHPL database plays a prominent part in guiding the heuristic placement and routing algorithms.
396

APTMC: AN INTERFACE PROGRAM FOR USE WITH ANSYS FOR THERMAL AND THERMALLY INDUCED STRESS MODELING/SIMULATION OF LEVEL 1 AND LEVEL 2 VLSI PACKAGING

Shiang, Jyue-Jon, 1956- January 1987 (has links)
ANSYS Packaging Thermal/Mechanical Calculator (APTMC) is an interface program developed for use with ANSYS and specially designed to handle thermal and thermally induced stress modeling/simulation of Level 1 and Level 2 VLSI packaging structures and assemblies. APTMC is written in PASCAL and operates in an interactive I/O mode. This user-friendly tool leads an analyst/designer through the process of creating appropriate thermal and thermally induced stress models and through the other operations necessary to run ANSYS. It includes steps such as the following: (1) construction of ANSYS commands through string processing; (2) creation of a dynamic data structure which expands and contracts during program execution, based on the data storage requirements of the program sets that control model generation; (3) access to material data and model parameters from the developed INTERNAL DATABANK, which contains: (a) a material data list; (b) heat transfer modes; and (c) a library of structures; (4) formation of ANSYS PREP7 and POSTn command files. (Abstract shortened with permission of author.)
397

Interactive Visualization Of Large Scale Time-Varying Datasets

Frishert, Willem Jan January 2008 (has links)
Visualization of large scale time-varying volumetric datasets is an active topic of research. Technical limitations in terms of bandwidth and memory usage become a problem when visualizing these datasets on commodity computers at interactive frame rates. The overall objective of this work is to overcome these limitations by adapting the methods of an existing Direct Volume Rendering pipeline; it serves as a proof of concept to assess the feasibility of visualizing large scale time-varying datasets with this pipeline. The pipeline consists of components from previous research, which make extensive use of graphics hardware to visualize large scale static data on commodity computers. This report presents a diploma work which adapts the pipeline to visualize flow features concealed inside a large scale Computational Fluid Dynamics dataset. The work provides a foundation for addressing the technical limitations of the commodity computer when visualizing time-varying datasets. The report describes the components making up the Direct Volume Rendering pipeline together with the adaptations. It also briefly describes the Computational Fluid Dynamics simulation, the flow features, and an earlier visualization approach, to show the system's limitations when exploring the dataset.
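The per-pixel core of any Direct Volume Rendering pipeline is front-to-back compositing along a viewing ray. The scalar sketch below shows only that accumulation rule; the actual pipeline runs it on graphics hardware over bricked, time-varying volumes, which this sketch does not attempt to reproduce, and the transfer function is a made-up placeholder.

```python
import numpy as np

def composite_ray(samples, transfer_fn, step):
    """Front-to-back compositing of scalar samples along one ray."""
    color, alpha = np.zeros(3), 0.0
    for s in samples:
        c, a = transfer_fn(s)              # emission (RGB) and opacity
        a = 1.0 - (1.0 - a) ** step        # opacity correction for step size
        color += (1.0 - alpha) * a * np.asarray(c)
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:                   # early ray termination
            break
    return color, alpha

# Hypothetical transfer function mapping a normalized scalar to color/opacity.
tf = lambda s: ((s, 0.2, 1.0 - s), 0.4 * s)
print(composite_ray(np.linspace(0.0, 1.0, 64), tf, step=1.0))
```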
398

Area efficient PLA's for the recognition of regular expression languages

Chandrasekhar, Muthyala. January 1985 (has links)
No description available.
399

The design of digital machines tolerant of soft errors

Savaria, Yvon, 1958- January 1985 (has links)
This thesis deals primarily with the problem of soft-error tolerance in digital machines. The possible sources of soft errors are reviewed. It is shown that the significance of ionizing radiation increases with the scaling down of MOS technologies. The characteristics of electromagnetic interference sources are also discussed. After presenting the conventional methods of dealing with soft errors, a new approach to this problem is suggested. The new approach, called Soft-Error Filtering (SEF), consists of filtering every output of the logic before latching it, in such a way that a transient injected into a machine does not change the final result of an operation. An analysis of the reduction in the error rate that is obtained by using SEF is presented. For example, this analysis demonstrates that the error rate due to alpha particles generated by the decay of radioactive elements becomes negligible. A great deal of attention is devoted to the design of filtering latches, which are an essential component for implementing SEF machines. Three structures are considered and a CMOS implementation is proposed in each case. The double-filter latch is the best of the three implementations. It features a nearly optimum performance in the time domain and it is relatively insensitive to process fluctuations. An overhead analysis demonstrates that SEF usually results in a small overhead, both in area and in time simultaneously. In conclusion, SEF is the best approach to the problem of designing a machine tolerant of short transients.
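One way to picture soft-error filtering: a value reaches the latch only after it has persisted longer than the filter's time constant, so a short injected transient dies out before the latching instant. The discrete-time behavioral model below is our own illustration of that pulse-width filtering idea, not the thesis's CMOS double-filter latch.

```python
def filter_transients(samples, tau):
    """Pass a new logic value through only after it has held for tau
    consecutive samples; shorter pulses (soft-error transients) are
    suppressed. Behavioral sketch of soft-error filtering (SEF)."""
    out, current = [], samples[0]
    last, run = samples[0], 0
    for s in samples:
        run = run + 1 if s == last else 1
        last = s
        if run >= tau:                 # value has been stable long enough
            current = s
        out.append(current)
    return out

# A 2-sample glitch on an otherwise clean signal never reaches the output.
signal = [0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0]
print(filter_transients(signal, tau=3))   # all zeros
```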
400

Time domain space mapping optimization of digital interconnect circuits

Haddadin, Baker. January 2009 (has links)
Microwave circuit design, including the design of interconnect circuits, is proving to be a hard and complex process in which the use of CAD tools is becoming essential for reducing design time and producing more accurate results. Space mapping, a relatively new and very efficient optimization approach used for microwave filters and structures, is investigated in this thesis and applied to the time domain optimization of digital interconnects. Its main advantage is that the optimization is driven by simpler models, called coarse models, that approximate the more complex fine model of the real system; these provide better insight into the problem and at the same time reduce the optimization time. The results are always mapped back to the real system, and a mapping is established between both systems that speeds up convergence. In this thesis, we study the optimization of interconnects, where we build practical error functions to evaluate performance in the time domain. The space mapping method is formulated to avoid problems found in the original formulation: we apply the modifications necessary for Trust Region Aggressive Space Mapping (TRASM) to be applicable to the design process in the time domain. This new method, modified TRASM (MTRASM), is then evaluated and tested on multiple circuits with different configurations, and the results are compared to those obtained with TRASM.
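For concreteness, a plain aggressive space mapping loop (without the trust-region safeguard that TRASM, and the MTRASM variant proposed here, add) can be sketched as follows; the toy models, names, and tolerances are illustrative assumptions, not the thesis's formulation.

```python
import numpy as np
from scipy.optimize import minimize

def aggressive_space_mapping(fine, coarse, x_coarse_opt, x0, iters=15):
    """Quasi-Newton iteration on g(x) = P(x) - x_c*, where the parameter
    extraction P(x) finds coarse parameters reproducing the fine response.
    Broyden-updated Jacobian, no trust region (TRASM adds that)."""
    x = np.asarray(x0, float)
    B = np.eye(len(x))                    # estimate of the mapping Jacobian
    x_prev = g_prev = None
    for _ in range(iters):
        fx = fine(x)
        # Parameter extraction: match the coarse response to the fine one.
        p = minimize(lambda z: np.sum((coarse(z) - fx) ** 2), x).x
        g = p - x_coarse_opt
        if x_prev is not None:
            dx, dg = x - x_prev, g - g_prev
            B += np.outer(dg - B @ dx, dx) / (dx @ dx)   # Broyden update
        if np.linalg.norm(g) < 1e-6:
            break
        x_prev, g_prev = x, g
        x = x - np.linalg.solve(B, g)                    # quasi-Newton step
    return x

# Toy responses: the fine model is the coarse model with a parameter shift,
# so the optimal fine design is the coarse optimum minus that shift.
coarse = lambda z: np.array([(z[0] - 1.0) ** 2, (z[1] + 0.5) ** 2])
fine = lambda x: coarse(x + np.array([0.2, -0.1]))
x_c_opt = np.array([1.0, -0.5])          # coarse optimum (zero response)
print(aggressive_space_mapping(fine, coarse, x_c_opt, x0=x_c_opt))
```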
