81

Design of a Table-Driven Function Evaluation Generator Using Bit-Level Truncation Methods

Tseng, Yu-ling 30 August 2011 (has links)
Function evaluation is one of the key arithmetic operations in many applications, including 3D graphics and stereo. Among the various designs of hardware-based function evaluators, piecewise polynomial approximation methods are the most popular: they interpolate the function curve within each sub-interval using a polynomial whose coefficients are stored in an entry of a ROM. Conventional piecewise methods usually determine the bit-widths of each ROM entry and of the multipliers and adders by analyzing the various error sources separately, including the polynomial approximation error, coefficient quantization errors, truncation errors of the arithmetic operations, and the final rounding error. In this thesis, we present a new piecewise function evaluation design that considers all the error sources together. By combining the errors introduced during approximation, quantization, truncation, and rounding, we can efficiently reduce the area cost of the ROM and of the corresponding arithmetic units. The proposed method is applied to piecewise function evaluators with both uniform and non-uniform segmentation.
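As a rough illustration of the table-driven scheme described in this abstract, the sketch below evaluates a function on [1, 2) with uniform segmentation: the leading fraction bits address a coefficient ROM holding degree-1 polynomial coefficients, and the remaining bits feed a truncated fixed-point multiply-add. The bit-widths, the target function, and the truncation points are illustrative assumptions, not the bit-level optimization proposed in the thesis.

```python
# Hedged sketch of uniform-segmentation, degree-1 table-based function evaluation.
# All bit-widths and the choice of f = log2 are illustrative assumptions.
import math

SEG_BITS = 6          # 2**6 = 64 sub-intervals on [1, 2)
FRAC_BITS = 16        # input fraction bits
COEF_FRAC = 20        # fixed-point fraction bits for stored coefficients

def build_rom(f=math.log2):
    """Store (c0, c1) per segment so that f(x) ~= c0 + c1 * dx on the segment."""
    rom = []
    step = 1.0 / (1 << SEG_BITS)
    for seg in range(1 << SEG_BITS):
        x0 = 1.0 + seg * step
        c1 = (f(x0 + step) - f(x0)) / step          # slope over the segment
        c0 = f(x0)                                  # value at the segment start
        rom.append((round(c0 * (1 << COEF_FRAC)),   # quantized coefficients
                    round(c1 * (1 << COEF_FRAC))))
    return rom

def evaluate(x_frac, rom):
    """x_frac: integer fraction of x in [1,2), i.e. x = 1 + x_frac / 2**FRAC_BITS."""
    seg = x_frac >> (FRAC_BITS - SEG_BITS)              # ROM address
    dx = x_frac & ((1 << (FRAC_BITS - SEG_BITS)) - 1)   # offset within the segment
    c0, c1 = rom[seg]
    # Truncated multiply-add: the shift discards low product bits, which is
    # where arithmetic truncation error enters the datapath.
    prod = (c1 * dx) >> FRAC_BITS
    return (c0 + prod) / (1 << COEF_FRAC)

rom = build_rom()
x = 1.3719
x_frac = int((x - 1.0) * (1 << FRAC_BITS))
print(evaluate(x_frac, rom), math.log2(x))   # the two values should agree closely
```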
82

Design of a CORDIC Function Generator Using Table-Driven Function Evaluation with Bit-Level Truncation

Hsu, Wei-Cheng 10 September 2012 (has links)
Function evaluation is one of the key arithmetic operations in many applications, including 3D graphics and stereo. Among the various designs of hardware-based function evaluation methods, piecewise polynomial approximation is the most popular approach: it interpolates the function curve within each sub-interval using a polynomial whose coefficients are stored in an entry of a lookup-table ROM. Conventional piecewise methods usually determine the bit-widths of each ROM entry, the multipliers, and the adders by analyzing the various error sources separately, including the polynomial approximation error, coefficient quantization errors, truncation errors of the arithmetic operations, and the final rounding error. In this thesis, we present a new piecewise function evaluation design that considers all the error sources together. By combining the errors introduced during approximation, quantization, truncation, and rounding, we can efficiently reduce the area cost of the ROM and of the corresponding arithmetic units in the design of CORDIC processors.
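For context, a minimal software model of the CORDIC rotation recurrence that such a processor implements is sketched below. The iteration count, the word format, and the use of floating point instead of truncated fixed-point arithmetic are simplifying assumptions; the table-driven, bit-level-truncated refinement proposed in the thesis is not shown.

```python
# Hedged sketch of rotation-mode CORDIC computing (cos(theta), sin(theta)).
# Floating point is used for clarity; a hardware design would use truncated
# fixed-point datapaths, which is where the bit-level error analysis applies.
import math

N_ITER = 24
ANGLES = [math.atan(2.0 ** -i) for i in range(N_ITER)]      # arctangent lookup table
GAIN = 1.0
for i in range(N_ITER):
    GAIN *= math.sqrt(1.0 + 2.0 ** (-2 * i))                # accumulated CORDIC gain

def cordic_cos_sin(theta):
    """Rotation mode: drive the residual angle z to zero with shift-add steps."""
    x, y, z = 1.0 / GAIN, 0.0, theta        # pre-scale so the final vector has unit length
    for i in range(N_ITER):
        d = 1.0 if z >= 0 else -1.0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i  # shift-and-add micro-rotation
        z -= d * ANGLES[i]
    return x, y                              # ~ (cos(theta), sin(theta))

print(cordic_cos_sin(0.7), (math.cos(0.7), math.sin(0.7)))
```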
83

Improved Bit-Level Truncation with Joint Error Analysis for Table-Based Function Evaluation

Lin, Shin-hung 12 September 2012 (has links)
Function evaluation is used in many science and engineering applications. To reduce computation time, various hardware implementations have been proposed to accelerate function evaluation. Table-based piecewise polynomial approximation is one of the major methods used in hardware function evaluation designs; it requires only simple hardware components to achieve the desired precision. The piecewise polynomial method approximates the original function in each partitioned sub-interval using low-degree polynomials whose coefficients are stored in look-up tables. Errors are introduced by the hardware implementation. Conventional error analysis for piecewise polynomial methods considers four error sources: the polynomial approximation error, the coefficient quantization error, the arithmetic truncation error, and the final rounding error. The typical design approach is to pre-allocate a maximum allowable error budget to each individual hardware component so that the total error induced by these individual errors satisfies the required bit accuracy. In this thesis, we present a new design approach that jointly considers the error sources in designing all the hardware components, including the look-up tables and arithmetic units, so that the total area cost is reduced compared to previously published designs.
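The contrast between the conventional per-component budgets and a joint analysis can be written compactly. The notation below is generic (not the thesis's own) and assumes a design whose target is an overall error below half a unit in the last place of an n-fraction-bit result.

```latex
% Generic notation (assumed for illustration): E_a = approximation error,
% E_q = coefficient quantization error, E_t = arithmetic truncation error,
% E_r = final rounding error; target accuracy is n fractional bits.
\begin{align*}
\text{per-component budgets:}\quad
  & |E_a|\le\varepsilon_a,\ |E_q|\le\varepsilon_q,\ |E_t|\le\varepsilon_t,\ |E_r|\le\varepsilon_r,
    \qquad \varepsilon_a+\varepsilon_q+\varepsilon_t+\varepsilon_r \le 2^{-n-1},\\
\text{joint constraint:}\quad
  & \max_x \bigl|E_a(x)+E_q(x)+E_t(x)+E_r(x)\bigr| \le 2^{-n-1}.
\end{align*}
% The joint form constrains only the combined worst-case error, leaving more
% freedom to shrink table and datapath bit-widths than separate budgets allow.
```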
84

An Extension To The Variational Iteration Method For Systems And Higher-order Differential Equations

Altintan, Derya 01 June 2011 (has links) (PDF)
Differential equations are widely used to model real-life problems. Although analytical solutions can be obtained for some of them, it is in general difficult to find closed-form solutions of differential equations. Finding approximate solutions has therefore been the subject of research in many different areas. In this thesis, we propose a new approach to the Variational Iteration Method (VIM) for obtaining the solutions of systems of first-order differential equations. The main contribution of the thesis to VIM is that the proposed approach uses restricted variations only for the nonlinear terms and builds up a matrix-valued Lagrange multiplier, leading to an extended version of the method (EVIM). The close relation between the matrix-valued Lagrange multipliers and the fundamental solutions of the differential equations highlights the connection between the extended variational iteration method and the classical variation-of-parameters formula. It is proved that the exact solution of initial value problems for (nonhomogeneous) linear differential equations can be obtained by such a generalisation using only a single variational step. Since higher-order equations can be reduced to first-order systems, the proposed approach is capable of solving such equations too; indeed, without such a reduction, the variational iteration method is also extended to higher-order scalar equations, and the close connection with the associated first-order systems is presented. This extension of the method to higher-order equations is then applied to solve boundary value problems, both linear and nonlinear. Although the corresponding Lagrange multiplier resembles the Green's function, the extended variational iteration method is applied systematically to solve boundary value problems, including nonlinear ones, without the need for the latter. In order to show the applicability of the method, we apply the EVIM to various real-life problems: classical Sturm-Liouville eigenvalue problems, the Brusselator reaction-diffusion system, and chemical master equations. Results show that the method is simple, but powerful and effective.
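For a linear first-order system, the extended correction functional can be sketched as follows. The notation is a plausible reconstruction from the abstract, not the thesis's own; in the nonlinear case the nonlinearity would be treated as a restricted variation.

```latex
% Sketch (assumed notation): first-order system u'(t) = A u(t) + f(t), u(0) = u_0.
% EVIM correction functional with a matrix-valued Lagrange multiplier \Lambda(t,s):
\begin{align*}
u_{n+1}(t) &= u_n(t) + \int_0^t \Lambda(t,s)\,\bigl[u_n'(s) - A u_n(s) - f(s)\bigr]\,\mathrm{d}s,
\qquad \Lambda(t,s) = -e^{A(t-s)}.
\end{align*}
% With this choice, a single step starting from u_0(t) \equiv u_0 already yields the
% variation-of-parameters solution
%   u_1(t) = e^{At}u_0 + \int_0^t e^{A(t-s)} f(s)\,\mathrm{d}s,
% i.e. the exact solution of the nonhomogeneous linear initial value problem.
```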
85

Envelopes, duality, and multipliers for certain non-locally convex Hardy-Lorentz spaces

Lengfield, Marc. Oberlin Daniel M. January 2004 (has links)
Thesis (Ph. D.)--Florida State University, 2004. Advisor: Dr. Daniel M. Oberlin, Florida State University, College of Arts and Sciences, Dept. of Mathematics. Title and description from dissertation home page (June 18, 2004). Includes bibliographical references.
86

Total delay optimization for column reduction multipliers considering non-uniform arrival times to the final adder

Waters, Ronald S. 26 June 2014 (has links)
Column reduction multiplier techniques provide the fastest multiplier designs and involve three steps. First, a partial product array of terms is formed by logically ANDing each bit of the multiplier with each bit of the multiplicand. Second, adders or counters are used to reduce the number of terms in each bit column to a final two; this activity is commonly described as column reduction and occurs in multiple stages. Finally, some form of carry propagate adder (CPA) is applied to the final two terms to sum them and produce the final product of the multiplication. Since forming the partial products in the first step is simply forming an array of the logical ANDs of two bits, there is little opportunity for delay improvement in that step. Much work has been done on optimizing the reduction stages of the second step. All of the reduction approaches of the second step produce non-uniform arrival times at the inputs of the final carry propagate adder in the last step. Carry propagate adder designs, however, have traditionally assumed that all input bits arrive at the same time, and it is not evident how the non-uniform arrival times from the columns impact the performance of the multiplier. A thorough analysis of several column reduction methods together with carry propagate adder designs, aimed at the fastest possible final results across a range of multiplier widths, has not previously been undertaken. This dissertation investigates the impact of three carry propagate adders with different performance attributes on the final delay of four column reduction multipliers, and suggests general ways to optimize the total delay of the multipliers.
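The non-uniform arrival times mentioned above can be seen with a small behavioural model. The unit delays and the greedy 3:2 reduction below are illustrative assumptions, not any of the specific reduction schemes analysed in the dissertation.

```python
# Hedged sketch: track bit arrival times through full-adder (3:2 counter) column
# reduction and report the (non-uniform) arrival profile at the final CPA inputs.
XOR_DELAY, CARRY_DELAY = 1.0, 1.0   # assumed full-adder sum / carry delays (arbitrary units)

def partial_product_columns(n_bits):
    """n_bits x n_bits multiplier: column c holds all AND terms of weight 2**c."""
    cols = [[] for _ in range(2 * n_bits)]
    for i in range(n_bits):
        for j in range(n_bits):
            cols[i + j].append(0.0)          # every AND-gate output arrives at t = 0
    return cols

def reduce_to_two(cols):
    """Greedily apply full adders until every column holds at most two bits."""
    while any(len(c) > 2 for c in cols):
        for idx, col in enumerate(cols):
            if len(col) > 2:
                col.sort()
                a, b, c = col.pop(0), col.pop(0), col.pop(0)   # three earliest bits
                t = max(a, b, c)
                col.append(t + XOR_DELAY)                      # sum stays in this column
                cols[idx + 1].append(t + CARRY_DELAY)          # carry moves one column left
    return cols

cols = reduce_to_two(partial_product_columns(16))
arrival = [max(c) if c else 0.0 for c in cols]
print(arrival)   # middle columns arrive latest -> non-uniform CPA input times
```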
87

TOWARDS IMPROVED IDENTIFICATION OF SPATIALLY-DISTRIBUTED RAINFALL RUNOFF MODELS

Pokhrel, Prafulla January 2010 (has links)
Distributed rainfall-runoff hydrologic models can be highly effective in improving flood forecasting capabilities at ungauged, interior locations of a watershed. However, their implementation in operational decision-making is hindered by the high dimensionality of the state-parameter space and by a lack of methods and understanding of how to properly exploit and incorporate available spatio-temporal information about the system. This dissertation is composed of a sequence of five studies whose overall goal is to improve understanding of problems relating to parameter identifiability in distributed models and to develop methodologies for their calibration. The first study proposes and investigates an approach for calibrating catchment-scale distributed rainfall-runoff models using conventionally available data. The process, called regularization, uses spatial information about soils and land use that is embedded in prior parameter estimates (Koren et al. 2000), together with knowledge of watershed characteristics, to constrain and reduce the dimensionality of the feasible parameter space. The methodology is further extended in the second and third studies to improve the extraction of 'hydrologically relevant' information from the observed streamflow hydrograph. Hydrological relevance is provided by using signature measures (Yilmaz et al. 2008) that correspond to major watershed functions. While the second study applies a manual selection procedure to constrain parameter sets from the subset of post-calibration solutions, the third develops an automatic procedure based on a penalty function optimization approach. The fourth study investigates the relative impact of the commonly used multiplier approach to distributed model calibration, in comparison with other spatial regularization strategies, and also investigates whether calibration to data at the catchment outlet can provide improved performance at interior locations. The model calibration study, conducted for three mid-sized catchments in the US, led to the important finding that basin outlet hydrographs may not generally contain information regarding the spatial variability of the parameters, and that calibration of the overall mean of the spatially distributed parameter fields may be sufficient for flow forecasting at the outlet. This motivated the fifth study, which investigates to what degree the spatial characteristics of parameter and rainfall fields are observable in catchment outlet hydrographs.
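A minimal sketch of the multiplier-style calibration idea referred to above: instead of calibrating every grid cell, one multiplier per parameter field scales a spatially distributed prior, so the search is over a handful of scalars. The grid values, the toy simulation, and the error metric below are placeholders, not the dissertation's formulation.

```python
# Hedged sketch of the "multiplier approach" to distributed-model calibration:
# each distributed parameter field is a fixed prior grid scaled by one calibrated
# multiplier, so dimensionality drops from (cells x parameters) to (parameters).
import numpy as np
from scipy.optimize import minimize

prior_fields = {                      # prior parameter grids (e.g. from soils / land use)
    "storage":   np.array([[50.0, 60.0], [45.0, 70.0]]),
    "recession": np.array([[0.30, 0.25], [0.35, 0.20]]),
}

def simulate_outlet_flow(fields, rain):
    """Placeholder bucket model, not a real rainfall-runoff scheme."""
    k = fields["recession"].mean()
    store = fields["storage"].mean()
    flow = []
    for p in rain:
        store += p
        q = k * store
        store -= q
        flow.append(q)
    return np.array(flow)

def calibrate(rain, observed):
    names = list(prior_fields)
    def loss(multipliers):
        fields = {n: m * prior_fields[n] for n, m in zip(names, multipliers)}
        return np.mean((simulate_outlet_flow(fields, rain) - observed) ** 2)
    res = minimize(loss, x0=np.ones(len(names)), method="Nelder-Mead")
    return dict(zip(names, res.x))

rain = np.array([0.0, 5.0, 12.0, 3.0, 0.0, 0.0])
observed = simulate_outlet_flow({n: 0.8 * g for n, g in prior_fields.items()}, rain)
# Several multiplier combinations may fit the outlet series equally well --
# an echo of the identifiability issue discussed in the abstract.
print(calibrate(rain, observed))
```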
88

Type I multiplier representations of locally compact groups

Holzherr, A. K. (Anton Karl) January 1982 (has links)
Includes bibliographical references. 123, [10] leaves ; 30 cm. Title page, contents and abstract only; the complete thesis in print form is available from the University Library. Thesis (Ph.D.)--University of Adelaide, Dept. of Pure Mathematics, 1984.
89

Small area, low power, mixed-mode circuits for hybrid neural network applications

Fang, Xuefeng. January 1994 (has links)
Thesis (Ph. D.)--Ohio University, November 1994. Title from PDF t.p.
90

An ADMM approach to the numerical solution of state constrained optimal control problems for systems modeled by linear parabolic equations

Song, Yongcun 05 July 2018 (has links)
We address in this thesis the numerical solution of state-constrained optimal control problems for systems modeled by linear parabolic equations. For the unconstrained or control-constrained optimal control problem, the first-order optimality condition can be obtained in a general way and the associated Lagrange multiplier has low regularity, for example lying in L²(Ω). However, for state-constrained optimal control problems, additional assumptions are in general required to guarantee the existence and regularity of Lagrange multipliers. The resulting optimality system leads to difficulties both for the numerical solution and for the theoretical analysis. The approach discussed here combines the alternating direction method of multipliers (ADMM) with a conjugate gradient (CG) algorithm, both operating in well-chosen Hilbert spaces. The ADMM approach allows the decoupling of the state constraints and the parabolic equation, so that in each iteration we need only solve an unconstrained parabolic optimal control problem and compute a projection onto the admissible set. It has been shown in the literature that the CG method applied to unconstrained optimal control problems modeled by linear parabolic equations is very efficient. To tackle the issue of the associated Lagrange multiplier, we prove the convergence of our proposed algorithm without assuming the existence and regularity of Lagrange multipliers. Furthermore, a worst-case O(1/k) convergence rate in the ergodic sense is established. For numerical purposes, we employ the finite difference method combined with the finite element method to implement the time-space discretization. After full discretization, the numerical results we obtain validate the methodology discussed in this thesis.
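A schematic version of the splitting described above, in generic notation assumed for illustration (not taken verbatim from the thesis): introducing an auxiliary variable for the state decouples the pointwise state constraint from the parabolic dynamics.

```latex
% Generic sketch (assumed notation). Reduced tracking-type objective with state
% constraint y(u) \in K, regularization parameter \alpha > 0, penalty r > 0:
%   \min_{u}\ J(u) := \tfrac12\,\|y(u)-y_d\|^2 + \tfrac{\alpha}{2}\,\|u\|^2
%   \quad\text{s.t.}\quad y(u)\in K .
% Splitting with an auxiliary variable z = y(u) and multiplier \lambda gives the
% augmented Lagrangian
%   L_r(u,z,\lambda) = J(u) + (\lambda,\,y(u)-z) + \tfrac{r}{2}\,\|y(u)-z\|^2 ,
% and one ADMM sweep reads
\begin{align*}
u^{k+1} &= \arg\min_{u}\; L_r\bigl(u, z^{k}, \lambda^{k}\bigr)
            && \text{(unconstrained parabolic optimal control, solved by CG)},\\
z^{k+1} &= P_K\!\bigl(y(u^{k+1}) + \lambda^{k}/r\bigr)
            && \text{(projection onto the admissible set)},\\
\lambda^{k+1} &= \lambda^{k} + r\,\bigl(y(u^{k+1}) - z^{k+1}\bigr).
\end{align*}
```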
