1. Development and Validation of a Method of Moments approach for modeling planar antenna structures. Kulkarni, Shashank D. 20 April 2007.

In this dissertation, a Method of Moments (MoM) modeling approach based on the Volume Integral Equation (VIE), suitable for a patch or slot antenna on a thin finite dielectric substrate, is developed and validated. The method has two new key features: the use of proper dielectric basis functions, and proper conditioning of the VIE close to the metal surface, where the surface boundary condition of zero tangential electric field must be extended into the adjacent tetrahedra. This extended boundary condition is exact for piecewise-constant dielectric basis functions. It allows good accuracy to be achieved with a single layer of tetrahedra for a thin dielectric substrate and thereby greatly reduces the computational cost. The use of low-order basis functions also permits low-order integration schemes and faster filling of the impedance matrix. For several common patch and slot antennas, the VIE-based modeling approach gives an error of about 1% or less in the resonant frequency for one-layer tetrahedral meshes with a relatively small number of unknowns. This error is obtained by comparison with fine finite element method (FEM) simulations, with measurements, or with the analytical mode-matching approach. The approach is therefore competitive with both the MoM surface integral equation approach and the FEM approach for printed antennas on thin dielectric substrates.

Along with the MoM development, the dissertation also presents models and design procedures for a number of practical antenna configurations, in particular (i) a compact linearly polarized broadband planar inverted-F antenna (PIFA) and (ii) a circularly polarized turnstile bowtie antenna. Both antennas are designed to operate in the low UHF band and are used for indoor positioning/indoor geolocation.
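For orientation, a standard textbook form of the electric-field volume integral equation for a dielectric body in free space is shown below; this is only a generic statement of the VIE, and the dissertation's exact formulation, basis functions, and conditioning near the metal surface may differ. In a MoM discretization, the unknown field inside the substrate is expanded in basis functions (here assumed piecewise constant) on the tetrahedral mesh, and testing the equation yields the dense impedance matrix mentioned above.

    \[
    \mathbf{E}^{\mathrm{inc}}(\mathbf{r})
    = \mathbf{E}(\mathbf{r})
    - \bigl(k_0^{2} + \nabla\nabla\cdot\bigr)
      \int_{V} g(\mathbf{r},\mathbf{r}')\,
      \bigl(\varepsilon_r(\mathbf{r}') - 1\bigr)\,
      \mathbf{E}(\mathbf{r}')\,dV',
    \qquad
    g(\mathbf{r},\mathbf{r}') =
      \frac{e^{-jk_0|\mathbf{r}-\mathbf{r}'|}}{4\pi\,|\mathbf{r}-\mathbf{r}'|}.
    \]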

2. Fast Order Basis and Kernel Basis Computation and Related Problems. Zhou, Wei. 28 November 2012.

In this thesis, we present efficient deterministic algorithms for polynomial matrix computation problems, including the computation of order basis, minimal kernel basis, matrix inverse, column basis, unimodular completion, determinant, Hermite normal form, rank, and rank profile for matrices of univariate polynomials over a field. The algorithm for kernel basis computation also immediately provides an efficient deterministic algorithm for solving linear systems. The algorithm for column basis likewise gives efficient deterministic algorithms for computing matrix GCDs, column reduced forms, and Popov normal forms for matrices of any dimension and any rank.
We reduce all of these problems to polynomial matrix multiplication. The computational costs of our algorithms are therefore similar to the cost of multiplying matrices whose dimensions match those of the input matrices in the original problems and whose degrees, in most cases, equal the average column degrees of the original input matrices. Using the average column degrees instead of the commonly used matrix degrees (equivalently, the maximum column degrees) makes our cost bounds tighter and more precise. In addition, the shifted minimal bases computed by our algorithms are more general than standard minimal bases.
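For illustration only, the following minimal SymPy sketch shows what a (right) kernel basis of a univariate polynomial matrix is. The example matrix F and the denominator-clearing step are chosen just for this sketch; this naive computation over the fraction field is not the thesis's fast, multiplication-based algorithm and in general does not produce a minimal or shifted minimal basis.

    # Naive illustration of a right kernel basis of a polynomial matrix over Q[x].
    # NOT the thesis's algorithm: we take a nullspace basis over the fraction
    # field Q(x) and clear denominators to obtain polynomial kernel vectors.
    from functools import reduce
    from sympy import Matrix, symbols, fraction, together, lcm, cancel

    x = symbols('x')

    # A 2x3 polynomial matrix of full row rank; its right kernel has dimension 1.
    F = Matrix([[x,     x**2 + 1, 1],
                [x + 1, x**2,     x]])

    kernel = []
    for v in F.nullspace():                                        # basis over Q(x)
        d = reduce(lcm, [fraction(together(e))[1] for e in v])     # common denominator
        kernel.append(v.applyfunc(lambda e: cancel(d * e)))        # polynomial entries

    for k in kernel:
        print(k.T)                               # a polynomial vector k with F*k = 0
        print((F * k).applyfunc(cancel))         # verify: the zero vector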