461 |
Polynomial containment in refinement spaces and wavelets based on local projection operators
Moubandjo, Desiree V. (2007)
Dissertation (PhD)--University of Stellenbosch, 2007. / ENGLISH ABSTRACT: See full text for abstract. / AFRIKAANSE OPSOMMING: See full text for abstract.
|
462 |
HILBERT POLYNOMIALS AND STRONGLY STABLE IDEALS
Moore, Dennis (01 January 2012)
Strongly stable ideals are important in algebraic geometry, commutative algebra, and combinatorics. Motivated, for example, by combinatorial approaches to studying Hilbert schemes and by the existence of maximal total Betti numbers among saturated ideals with a given Hilbert polynomial, three algorithms are presented. Each of these algorithms produces all strongly stable ideals with some prescribed property: the saturated strongly stable ideals with a given Hilbert polynomial, the almost lexsegment ideals with a given Hilbert polynomial, and the saturated strongly stable ideals with a given Hilbert function. Bounds for the complexity of our algorithms are included, along with some applications of these algorithms and some estimates for counting strongly stable ideals with a fixed Hilbert polynomial.
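As background for the algorithms above, strong stability of a monomial ideal is a finitely checkable exchange condition on the minimal generators. The sketch below (monomials encoded as exponent tuples, variable order x0 > x1 > ... assumed) is a generic illustration of that condition, not one of the three algorithms from the thesis.

```python
def divides(m, n):
    """True if monomial m divides monomial n (exponentwise comparison)."""
    return all(a <= b for a, b in zip(m, n))

def in_ideal(m, gens):
    """A monomial lies in a monomial ideal iff some generator divides it."""
    return any(divides(g, m) for g in gens)

def is_strongly_stable(gens):
    """Borel exchange test on the generators: for every generator m,
    every variable x_j dividing m, and every i < j, the monomial
    x_i * m / x_j must again lie in the ideal."""
    for m in gens:
        for j, e in enumerate(m):
            if e == 0:
                continue
            for i in range(j):
                swapped = list(m)
                swapped[j] -= 1
                swapped[i] += 1
                if not in_ideal(tuple(swapped), gens):
                    return False
    return True

# Example in k[x0, x1, x2]: (x0^2, x0*x1, x1^2) is strongly stable,
# while (x0^2, x1^2) is not, because x0 * (x1^2) / x1 = x0*x1 is missing.
print(is_strongly_stable([(2, 0, 0), (1, 1, 0), (0, 2, 0)]))  # True
print(is_strongly_stable([(2, 0, 0), (0, 2, 0)]))             # False
```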
|
463 |
Mathematical approach to channel codes with a diagonal matrix structure
Mitchell, David G. M. (January 2009)
Digital communications have now become a fundamental part of modern society. In communications, channel coding is an effective way to trade information rate for reliability: by reducing the rate to below the channel capacity, information can be transmitted reliably through the channel. This thesis is devoted to studying the mathematical theory and analysis of channel codes that possess a useful diagonal structure in the parity-check and generator matrices.

The first aspect of these codes that is studied is the ability to describe the parity-check matrix of a code with sliding diagonal structure using polynomials. Using this framework, an efficient new method is proposed to obtain a generator matrix G from certain types of parity-check matrices with a so-called defective cyclic block structure. By the nature of this method, G can also be completely described by a polynomial, which leads to efficient encoder design using shift registers. In addition, there is no need for the matrices to be in systematic form, thus avoiding the need for Gaussian elimination.

Following this work, we proceed to explore some of the properties of diagonally structured low-density parity-check (LDPC) convolutional codes. LDPC convolutional codes have been shown to be capable of achieving the same capacity-approaching performance as LDPC block codes with iterative message-passing decoding. The first crucial property studied is the minimum free distance of LDPC convolutional code ensembles, an important parameter contributing to the error-correcting capability of the code. Here, asymptotic methods are used to form lower bounds on the ratio of the free distance to constraint length for several ensembles of asymptotically good, protograph-based LDPC convolutional codes. Further, it is shown that this ratio of free distance to constraint length for such LDPC convolutional codes exceeds the ratio of minimum distance to block length for corresponding LDPC block codes.

Another interesting property of these codes is the way in which the structure affects performance in the infamous error floor (which occurs at high signal-to-noise ratio) of the bit error rate curve. It has been suggested that "near-codewords" may be a significant factor affecting decoding failures of LDPC codes over an additive white Gaussian noise (AWGN) channel. A near-codeword is a sequence that satisfies almost all of the check equations. These near-codewords can be associated with so-called "trapping sets" that exist in the Tanner graph of a code. In the final major contribution of the thesis, trapping sets of protograph-based LDPC convolutional codes are analysed. Here, asymptotic methods are used to calculate a lower bound on the trapping set growth rates for several ensembles of asymptotically good protograph-based LDPC convolutional codes. This value can be used to predict where the error floor will occur for these codes under iterative message-passing decoding.
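To make the near-codeword definition above concrete, the sketch below counts the parity checks that a binary sequence fails: a low-weight sequence of weight a that leaves only b checks unsatisfied is the (a, b) trapping-set pattern referred to above. The parity-check matrix here is a small arbitrary example, not one of the protograph-based convolutional ensembles analysed in the thesis.

```python
import numpy as np

def unsatisfied_checks(H, x):
    """Number of parity-check equations of H violated by the binary vector x."""
    syndrome = H.dot(x) % 2
    return int(syndrome.sum())

# Toy binary parity-check matrix of a small LDPC-like code (arbitrary example).
H = np.array([
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 0, 0, 1, 1],
    [0, 0, 1, 1, 0, 1],
], dtype=int)

candidate = np.array([1, 1, 1, 0, 0, 0])
a = int(candidate.sum())                 # weight of the sequence
b = unsatisfied_checks(H, candidate)     # checks it fails
print(f"({a},{b}) trapping-set candidate: {b} of {H.shape[0]} checks unsatisfied")
```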
|
464 |
Spectral technique in relaxation-based simulation of MOS circuits
Guarini, Marcello Walter (1989)
A new method for transient simulation of integrated circuits has been developed and investigated. The method utilizes expansion of circuit variables into Chebyshev series. A prototype computer simulation program based on this technique has been implemented and applied to the transient simulation of several MOS circuits, and the results have been compared with those generated by SPICE. The method has also been combined with the waveform relaxation technique. Several algorithms were developed using the Gauss-Seidel and Gauss-Jacobi iterative procedures. The algorithms based on the Gauss-Seidel iterative procedure were implemented in the prototype software; they offer substantial CPU time savings in comparison with SPICE without compromising the accuracy of solutions. A description of the prototype computer simulation program and a summary of the results of simulation experiments are included.
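The central idea, expanding circuit waveforms in truncated Chebyshev series over an analysis window, can be sketched with NumPy's Chebyshev routines. The waveform, window, and degree below are illustrative choices, not details of the prototype simulator described above.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Normalized time window t in [-1, 1]; a real simulator would map each
# analysis window onto this interval.
t = np.linspace(-1.0, 1.0, 201)

# A transient-like waveform: an exponentially settling step response.
v = 1.0 - np.exp(-3.0 * (t + 1.0))

# Fit a truncated Chebyshev series and evaluate it back on the window.
degree = 8
coeffs = C.chebfit(t, v, degree)
v_hat = C.chebval(t, coeffs)

max_err = np.max(np.abs(v - v_hat))
print(f"degree-{degree} Chebyshev fit, max error = {max_err:.2e}")
```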
|
465 |
POLYNOMIAL FIT OF INTERFEROGRAMS
KIM, CHEOL-JUNG (January 1982)
The conventional Zernike polynomial fit of circular aperture interferograms is reviewed and a more quantitative and statistical analysis is added. Some standard questions, such as the required number of polynomials, the sampling requirements, and how to determine the optimum reference surface, are answered. The analysis is then applied to the polynomial fit of noncircular aperture interferograms and axicon interferograms, and the problems and limitations of using Zernike polynomials are presented. A method of obtaining the surface figure error information from several smaller subaperture interferograms is analyzed. The limitations of the analysis for testing a large flat, a large parabola, or an aspheric surface are presented, and the analysis is compared with the local connection method using overlapped wavefront information. Finally, the subaperture interferogram analysis is used to average several interferograms and to analyze lateral shearing interferograms.
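In its simplest form, the Zernike fit is a least-squares projection of sampled wavefront data onto a handful of low-order Zernike terms over a unit-radius pupil. The sketch below (synthetic data; only piston, tilt, defocus, and astigmatism terms) illustrates that fitting step, not the full subaperture analysis of the dissertation.

```python
import numpy as np

# Sample points inside a unit-radius circular aperture.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 2000)
y = rng.uniform(-1, 1, 2000)
inside = x**2 + y**2 <= 1.0
x, y = x[inside], y[inside]
rho, theta = np.hypot(x, y), np.arctan2(y, x)

def zernike_basis(rho, theta):
    """Low-order Zernike terms: piston, x/y tilt, defocus, two astigmatisms."""
    return np.column_stack([
        np.ones_like(rho),            # piston
        rho * np.cos(theta),          # tilt x
        rho * np.sin(theta),          # tilt y
        2 * rho**2 - 1,               # defocus
        rho**2 * np.cos(2 * theta),   # astigmatism 0/90
        rho**2 * np.sin(2 * theta),   # astigmatism 45
    ])

# Synthetic wavefront: mostly defocus plus a little x tilt.
w = 0.5 * (2 * rho**2 - 1) + 0.1 * rho * np.cos(theta)

A = zernike_basis(rho, theta)
coeffs, *_ = np.linalg.lstsq(A, w, rcond=None)
print("fitted Zernike coefficients:", np.round(coeffs, 3))
```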
|
466 |
Efficient Analysis for Nonlinear Effects and Power Handling Capability in High Power HTSC Thin Film Microwave Circuits
Tang, Hongzhen (January 2000)
In this study, two nonlinear analysis methods are proposed for investigating the nonlinear effects of high-temperature superconductive (HTSC) thin-film planar microwave circuits. The MoM-HB combination method is based on a combined formulation of the moment method (MoM) and the harmonic balance (HB) technique; it consists of linear and nonlinear solvers. The power series method treats the voltages at higher-order frequencies as the excitations at the corresponding frequencies, and the higher-order current distributions are then obtained by applying the moment method again. The power series method is simple and fast for finding the output power at higher-order frequencies. The MoM-HB combination method is suitable for strong nonlinearity, and it can also be used to find the fundamental current redistribution, the conductor loss, and the variation of the scattering parameters at the fundamental frequency. These two proposed methods are efficient, accurate, and suitable for distributed HTSC nonlinearity, and they can easily be incorporated into commercial electromagnetic CAD software to expand its capabilities. The two nonlinear analysis methods are validated by analyzing an HTSC stripline filter and HTSC dipole antenna circuits.

HTSC microstrip lines are then investigated for the nonlinear effects of the HTSC material on the current density distribution over the cross section and on the conductor loss as a function of the applied power. HTSC microstrip patch filters are then studied to show that the HTSC interconnecting line can dominate the behavior of the circuit at high power; the variation of the transmission and reflection coefficients with the applied power and the third-order output power are calculated.

Finally, an HTSC microstrip line structure with gilded edges is proposed for improving the power handling capability of HTSC thin-film circuits, based on a specified limit on harmonic generation and conductor loss. A general analysis approach suitable for any thickness of the gilding layer is developed by integrating multi-port network theory into the aforementioned nonlinear analysis methods. The conductor loss and harmonic generation of the gilded HTSC microstrip line are investigated.
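To make the notion of output power at higher-order frequencies concrete, the toy script below drives a memoryless cubic nonlinearity i(v) = g1*v + g3*v^3 with a single tone and reads the fundamental and third-harmonic current amplitudes off an FFT. The coefficient values, frequency, and drive levels are arbitrary placeholders, and the model is far simpler than the distributed HTSC nonlinearity treated in the thesis.

```python
import numpy as np

# Toy nonlinear conductance: i(v) = g1*v + g3*v^3 (coefficients are arbitrary).
g1, g3 = 1.0e-2, 4.0e-4

f0 = 1.0e9                      # 1 GHz fundamental tone
n = 1024
t = np.arange(n) / (n * f0)     # exactly one period, n samples

for v1 in (0.1, 1.0, 10.0):     # increasing drive level
    v = v1 * np.cos(2 * np.pi * f0 * t)
    i = g1 * v + g3 * v**3
    # Scale so that a cosine of amplitude A appears with magnitude A in its bin.
    spectrum = np.abs(np.fft.rfft(i)) * 2 / n
    # Bin k corresponds to harmonic k of f0 because t spans exactly one period.
    i1, i3 = spectrum[1], spectrum[3]
    print(f"V1 = {v1:5.1f} V: |I1| = {i1:.3e} A, |I3| = {i3:.3e} A")
```

The third-harmonic current grows as the cube of the drive, which is why harmonic generation limits the power handling capability discussed above.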
|
467 |
Les polynômes orthogonaux matriciels et la méthode de factorisation (Matrix orthogonal polynomials and the factorization method)
Greavu, Cristina
The factorization method is applied to the initial data of an already solved quantum mechanics problem. The solutions (eigenvalues and eigenfunctions) are almost all rederived.
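As a reminder of what the factorization method delivers, the textbook case of the harmonic oscillator (a standard illustration, not necessarily the problem revisited in this mémoire) factorizes the Hamiltonian into ladder operators and reads off the spectrum and eigenfunctions directly:

```latex
H=\frac{p^{2}}{2m}+\frac{1}{2}m\omega^{2}x^{2}
 =\hbar\omega\Bigl(a^{\dagger}a+\tfrac{1}{2}\Bigr),
\qquad
a=\sqrt{\frac{m\omega}{2\hbar}}\Bigl(x+\frac{ip}{m\omega}\Bigr),
\qquad
[a,a^{\dagger}]=1,
\]
\[
a\,\psi_{0}=0
\;\Longrightarrow\;
E_{n}=\hbar\omega\Bigl(n+\tfrac{1}{2}\Bigr),
\qquad
\psi_{n}\propto\bigl(a^{\dagger}\bigr)^{n}\psi_{0},
\quad n=0,1,2,\dots
```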
|
468 |
The determinant method and applications
Reuss, Thomas (January 2015)
The thesis is structured into five chapters as follows.
Chapter 1 is an introduction to the tools and methods we use most frequently.
Chapter 2, Pairs of k-free numbers, consecutive square-full numbers. In this chapter, we refine the approximate determinant method of Heath-Brown. We present applications to asymptotic formulas for consecutive k-free integers, and more generally for k-free integers represented by r-tuples of linear forms. We also show how the method can be used to derive an upper bound for the number of consecutive square-full integers. Finally, we apply the method to make a statement about the size of the fundamental solution of Pell equations.
Chapter 3, Power-free values of polynomials. A conjecture of Erdős states that for any irreducible polynomial f of degree d ≥ 3 with no fixed (d-1)-th power prime divisor, there are infinitely many primes p such that f(p) is (d-1)-free. We prove this conjecture and derive the corresponding asymptotic formulas.
Chapter 4, Integer points on bilinear and trilinear equations. In the fourth chapter, we derive upper bounds for the number of integer solutions of bilinear and trilinear forms.
Chapter 5. In the fifth chapter, we present a method to count the monomials that occur in the projective determinant method when the method is applied to cubic varieties.
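The simplest case (k = 2) of the consecutive k-free counts in Chapter 2 can be checked numerically: the proportion of n ≤ N with both n and n + 1 squarefree tends to the classical constant ∏_p (1 − 2/p²) ≈ 0.3226. The sketch below is an independent numerical check of that asymptotic, not code from the thesis.

```python
import numpy as np

def squarefree_sieve(limit):
    """Boolean array: sf[n] is True iff n is squarefree (n >= 1)."""
    sf = np.ones(limit + 1, dtype=bool)
    sf[0] = False
    d = 2
    while d * d <= limit:
        sf[d * d :: d * d] = False   # strike multiples of d^2
        d += 1
    return sf

N = 10**6
sf = squarefree_sieve(N + 1)
pairs = np.count_nonzero(sf[1:N] & sf[2 : N + 1])   # n and n+1 both squarefree

# Predicted density: product over primes p of (1 - 2/p^2), truncated at 1000.
primes = [p for p in range(2, 1000)
          if all(p % q for q in range(2, int(p**0.5) + 1))]
density = np.prod([1.0 - 2.0 / p**2 for p in primes])

print(f"observed {pairs / (N - 1):.6f} vs predicted {density:.6f}")
```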
|
469 |
On Lagrangian meshless methods in free-surface flows
Silverberg, Jon P.
Classically, fluid dynamics has been dealt with analytically because of the lack of numerical resources (Yeung, 1982). With the development of computational ability, many formulations have been developed which typically use the traditional Navier-Stokes equations along with an Eulerian grid. Today, there exists the possibility of using a moving (Lagrangian) grid along with a meshless discretization.

The first issue in meshless fluid dynamics is the equations of motion. There are currently two types of Lagrangian formulations. Smoothed Particle Hydrodynamics (SPH) is a method which calculates all equations of motion explicitly. The Moving Particle Semi-implicit (MPS) method uses a mathematical foundation based on SPH; however, instead of calculating all laws of motion explicitly, a fractional time step is performed to calculate pressure. A proposed method, Lagrange Implicit Fraction Step (LIFS), has been created which improves the mathematical formulation on the fluid domain. The LIFS method returns to continuum mechanics to construct the laws of motion by decomposing all forces acting on a volume; it is assumed that all forces on this volume can be linearly superposed to calculate the accelerations of each mass. The LIFS method calculates pressure from a boundary value problem with the inclusion of proper flux boundary conditions.

The second issue in meshless Lagrangian dynamics is the calculation of derivatives across a domain. The Monte Carlo Integration (MCI) method uses weighted averages to calculate operators; however, the MCI method can be very inaccurate and is not suitable for sparse grids. The Radial Basis Function (RBF) method is introduced and studied as a possibility for calculating meshless operators. The RBF method involves the solution of a system of equations to calculate interpolants, and its computational expense is shown to limit its viability for large domains. A new method of calculation has been created, called Multi-dimensional Lagrange Interpolating Polynomials (MLIP). While Lagrange interpolating polynomials are essentially a one-dimensional interpolation, the use of "dimensional cuts" and Gaussian quadratures can provide multi-dimensional interpolation.

This paper is divided into three sections. The first section specifies the equations of motion. The second section provides the mathematical basis for meshless calculations. The third section evaluates the effectiveness of the meshless calculations and compares two fluid-dynamic codes. / Fund number: N62271-97-G-0041. / US Navy (USN) author.
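As an illustration of the RBF interpolation step and of the cost that limits it on large domains, the sketch below builds the dense Gaussian-RBF system for scattered 2-D nodes, solves it for the interpolation weights, and evaluates the interpolant. The kernel, shape parameter, node count, and test function are arbitrary choices, not values from the thesis.

```python
import numpy as np

def rbf_interpolate(nodes, values, eval_points, eps=3.0):
    """Gaussian RBF interpolation: solve A w = f, then evaluate
    sum_j w_j * phi(|x - x_j|) at the evaluation points."""
    def phi(r):
        return np.exp(-(eps * r) ** 2)

    # Pairwise node distances give the dense, symmetric system matrix.
    diff = nodes[:, None, :] - nodes[None, :, :]
    A = phi(np.linalg.norm(diff, axis=-1))
    w = np.linalg.solve(A, values)      # O(n^3) solve: the cost noted above

    diff_e = eval_points[:, None, :] - nodes[None, :, :]
    return phi(np.linalg.norm(diff_e, axis=-1)).dot(w)

def surface_height(p):
    """Toy smooth field standing in for a free-surface height."""
    return np.sin(2 * np.pi * p[:, 0]) * np.cos(np.pi * p[:, 1])

rng = np.random.default_rng(1)
nodes = rng.uniform(0.0, 1.0, size=(200, 2))     # scattered particle positions
values = surface_height(nodes)

test = rng.uniform(0.1, 0.9, size=(50, 2))
approx = rbf_interpolate(nodes, values, test)
print("max interpolation error:", np.max(np.abs(approx - surface_height(test))))
```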
|
470 |
L'œuvre mathématique de Descartes dans La Géométrie / The mathematical work of Descartes in La Géométrie
Warusfel, André (21 June 2010)
Descartes's La Géométrie can be read as a treatise devoted to the (graphical) solution of all polynomial equations by means of a tool forged for the purpose, a tool that would allow man to create the quantitative sciences and to come close to fulfilling the goal set in the first chapter of Genesis: to rule over the world. That tool is the calculus of coordinates, an exceptional invention whose full power Descartes himself did not foresee. What he did know was simply that, beyond making it possible to define and construct an infinite stock of curves, it allowed him, or so he believed, to give a definitive answer to the problem of finding the roots of equations, and also, through this technique, to reduce any question of geometry to a calculation, in short, to mechanize in some sense the last open questions of the mathematics of his time. This reading should be set against the more conservative view according to which the book is an application of the Method, or even of the Mathesis, grounded in the algebraization of classical geometry, rather than an advent of geometry coming to the rescue of algebra.
|