251

Inverse Analysis of Transient Heat Source from Arc Erosion

Li, Yung-Yuan 02 July 2001 (has links)
An inverse method is developed to analyze the transient heat source produced by arc erosion. The temperature along the arc-erosion contour is assumed to be the melting point, and the temperatures at the grid points at the final time are obtained by interpolation, which introduces measurement errors. The unknown parameters of the transient heat source are then solved by a linear least-squares method. These parameters are the plasma radius at the anode surface (which grows with time), the arc power, and the plasma flushing efficiency at the anode. Because the temperatures at the measuring points include measurement errors, an accurate solution can only be found when fewer unknowns are considered; the inverse method is sensitive to measurement errors.
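The core of the method is an overdetermined linear solve: the interpolated grid-point temperatures are modeled as a linear combination of the unknown source parameters, which are then recovered by least squares. A minimal sketch, assuming a hypothetical design matrix of sensitivity coefficients (the thesis's actual heat-conduction model is not reproduced here):

```python
import numpy as np

# Hypothetical example: T = A @ p + noise, where the columns of A are
# sensitivity coefficients relating each unknown source parameter
# (plasma radius growth rate, arc power, flushing efficiency) to the
# interpolated grid-point temperatures.
rng = np.random.default_rng(0)
n_points, n_params = 50, 3
A = rng.uniform(0.5, 2.0, size=(n_points, n_params))   # stand-in sensitivities
p_true = np.array([0.8, 120.0, 0.35])                  # stand-in parameters
T = A @ p_true + rng.normal(0.0, 0.5, n_points)        # noisy "measurements"

# Linear least-squares estimate of the source parameters.
p_est, residuals, rank, sv = np.linalg.lstsq(A, T, rcond=None)
print(p_est)  # close to p_true; degrades as measurement noise grows
```

As the abstract notes, the estimate deteriorates as the noise level or the number of unknowns grows, which is the sensitivity the inverse method exhibits.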
252

Adaptive Rake Multiuser Receiver with Linearly Constrained Sliding Window RLS Algorithm for DS-CDMA Systems

Lee, Hsin-Pei 04 July 2003 (has links)
The direct-sequence code-division multiple access (DS-CDMA) cellular technique has been the focus of increased attention. In this thesis, we consider a DS-CDMA environment in which asynchronous narrowband interference from other systems suddenly joins the CDMA system; such interference can cause the system to break down. The main concern of this thesis is the cancellation of this suddenly appearing narrowband interference. Adaptive filtering algorithms based on a sliding-window criterion and a variable forgetting factor are known to be very attractive in rapidly changing environments. In this thesis, a new sliding-window linearly constrained recursive least squares (SW LC-RLS) algorithm and a variable forgetting factor linearly constrained recursive least squares (VFF LC-RLS) algorithm, built on the modified minimum mean squared error (MMSE) structure [9], are devised for the RAKE receiver in DS-CDMA systems over multipath fading channels, where channel estimation is accomplished at the output of the adaptive filter. The proposed SW LC-RLS and VFF LC-RLS algorithms have the advantages of faster convergence and better tracking ability, and can be applied in environments where narrowband interference suddenly joins the system to achieve the desired performance. Via computer simulations, we show that their performance, in terms of mean square error (MSE) and signal-to-interference-plus-noise ratio (SINR), is superior to that of the conventional LC-RLS and orthogonal decomposition-based LMS algorithms based on the MMSE structure [9].
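The sliding-window recursion underlying such receivers both updates with the newest sample and downdates the sample leaving the window, so the filter forgets stale data abruptly rather than exponentially, which is what makes it respond quickly when interference suddenly appears. A minimal sketch of plain sliding-window RLS (without the linear constraint or variable forgetting factor developed in the thesis):

```python
import numpy as np
from collections import deque

class SlidingWindowRLS:
    """Plain sliding-window RLS: rank-one update for the newest sample,
    rank-one downdate for the sample that leaves the window."""
    def __init__(self, n_taps, window, delta=1e2):
        self.P = delta * np.eye(n_taps)   # inverse correlation matrix
        self.w = np.zeros(n_taps)         # filter weights
        self.buf = deque(maxlen=window)   # samples inside the window

    def step(self, x, d):
        if len(self.buf) == self.buf.maxlen:      # downdate the oldest sample
            x0, d0 = self.buf[0]
            g = self.P @ x0 / (1.0 - x0 @ self.P @ x0)
            self.w = self.w - g * (d0 - x0 @ self.w)
            self.P = self.P + np.outer(g, x0 @ self.P)
        k = self.P @ x / (1.0 + x @ self.P @ x)   # update with the new sample
        self.w = self.w + k * (d - x @ self.w)
        self.P = self.P - np.outer(k, x @ self.P)
        self.buf.append((x, d))                   # deque evicts the old sample
        return d - x @ self.w                     # a-posteriori error
```

Both rank-one corrections follow from the Sherman-Morrison formula applied to adding and removing one sample from the windowed correlation matrix.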
253

TOA Wireless Location Algorithm with NLOS Mitigation Based on LS-SVM in UWB Systems

Lin, Chien-hung 29 July 2008 (has links)
One of the major problems encountered in wireless location is the effect of non-line-of-sight (NLOS) propagation. When the direct path from the mobile station (MS) to the base stations (BSs) is blocked by obstacles or buildings, the signal arrival times are delayed, so the measurements include an error due to the excess path length. Using NLOS measurements for localization greatly degrades positioning performance. In this thesis, a time-of-arrival (TOA) based location system with an NLOS mitigation algorithm is proposed. The proposed method uses a least squares support vector machine (LS-SVM), with optimal parameter selection by particle swarm optimization (PSO), to establish a regression model used to estimate propagation distances and reduce NLOS propagation errors. Using a weighted objective function, the distance estimates are combined with suitable weight factors derived from the differences between the estimated and measured distances. By exploiting the optimality of the weighted objective function, the method is capable of mitigating NLOS effects and reducing range errors. Computer simulation results in ultra-wideband (UWB) environments show that the proposed NLOS mitigation algorithm efficiently reduces the mean and variance of the NLOS measurements, and that the proposed method outperforms other methods in localization accuracy under different NLOS conditions.
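LS-SVM regression replaces the SVM's inequality constraints with equality constraints, so training reduces to a single linear solve of the KKT system. A minimal sketch with an RBF kernel (the PSO parameter search and the UWB ranging data are omitted; gamma and sigma are assumed fixed here, and the toy data are stand-ins):

```python
import numpy as np

def rbf_kernel(X1, X2, sigma):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    """Solve the LS-SVM regression KKT system:
       [0   1^T         ] [b    ]   [0]
       [1   K + I/gamma ] [alpha] = [y]"""
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    b, alpha = sol[0], sol[1:]
    return lambda Xq: rbf_kernel(Xq, X, sigma) @ alpha + b

# Toy usage: learn a corrected range from biased range measurements.
rng = np.random.default_rng(1)
X = rng.uniform(0, 10, (40, 1))        # measured ranges (stand-in data)
y = 0.9 * X[:, 0] - 0.5                # "true" ranges after removing NLOS bias
predict = lssvm_fit(X, y)
print(predict(np.array([[5.0]])))      # ≈ 4.0
```

In the thesis the hyperparameters gamma and sigma are chosen by PSO rather than fixed as above.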
254

Development and Application of Kinetic Meshless Methods for Euler Equations

C, Praveen 07 1900 (has links)
Meshless methods are a relatively new class of schemes for the numerical solution of partial differential equations. Their special characteristic is that they do not require a mesh but only need a distribution of points in the computational domain. The approximation at any point of spatial derivatives appearing in the partial differential equations is performed using a local cloud of points called the "connectivity" (or stencil). A point distribution can be generated more easily than a grid since there are fewer constraints to satisfy. The present work uses two meshless methods: an existing scheme called the Least Squares Kinetic Upwind Method (LSKUM) and a new scheme called the Kinetic Meshless Method (KMM). LSKUM is a "kinetic" scheme which uses a "least squares" approximation for discretizing the derivatives occurring in the partial differential equations. The first part of the thesis is concerned with some theoretical properties and the application of LSKUM to 3-D point distributions. Using previously established results we show that first order LSKUM in 1-D is positivity preserving under a CFL-like condition. The 3-D LSKUM is applied to point distributions obtained from a FAME mesh. FAME, which stands for Feature Associated Mesh Embedding, is a composite overlapping grid system developed at QinetiQ (formerly DERA), UK, for store separation problems. The FAME mesh has a cell-based data structure; this is first converted to a node-based data structure, which leads to a point distribution. For each point in this distribution we find a set of nearby nodes which forms the connectivity. The connectivity at each point (which is also the "full stencil" for that point) is split along each of the three coordinate directions, so that we need six split (or half, or one-sided) stencils at each point. The split stencils are used in LSKUM to calculate the split-flux derivatives arising in kinetic schemes, which gives the upwind character to LSKUM. The "quality" of each of these stencils affects the accuracy and stability of the numerical scheme. In this work we focus on developing numerical criteria to quantify the quality of a stencil for meshless methods like LSKUM. The first test is based on the singular value decomposition of the over-determined problem; the singular values are used to measure the ill-conditioning (generally caused by a flat stencil). If any of the split stencils is found to be ill-conditioned, we use the full stencil for calculating the corresponding split-flux derivative. A second test is based on an accuracy measurement. The idea of this test is that a "good" stencil must give accurate estimates of derivatives, and vice versa. If the error in the computed derivatives is above some specified tolerance, the stencil is classified as unacceptable. In this case we either enhance the stencil (to remove a disc-type degenerate structure) or switch to the full stencil. It is found that the full stencil almost always behaves well in terms of both tests. The use of these two tests and the associated automatic modification of defective stencils allows the solver to converge without any blow-up. The results obtained for a 3-D configuration compare favorably with wind tunnel measurements, and the framework developed here provides a rational basis for approaching the connectivity selection problem. The second part of the thesis deals with a new scheme called the Kinetic Meshless Method (KMM), which was developed as a consequence of the experience obtained with LSKUM and the FAME mesh.
As mentioned before, the full stencil is generally better behaved than the split stencils. Hence the new scheme is constructed so that it does not require split stencils but operates on a full (centered-type) stencil. To obtain an upwind bias we introduce mid-point states (between a point and its neighbour), and the least-squares fitting is performed using these mid-point states. The mid-point states are defined in an upwind-biased manner at the kinetic/Boltzmann level, and the moment-method strategy leads to an upwind scheme at the Euler level. On a standard 4-point Cartesian stencil this scheme reduces to a finite volume method with KFVS fluxes. We also show the rotational invariance of the scheme, which is an important property of the governing equations themselves. The KMM is extended to higher-order accuracy using a reconstruction procedure similar to that of finite volume schemes, even though we do not have (or need) any cells in the present case. Numerical studies on a model 2-D problem show second-order accuracy. Some theoretical and practical advantages of using a kinetic formulation for deriving the scheme are recognized. Several 2-D inviscid flows are solved, demonstrating many important characteristics. The subsonic test cases show that the scheme produces less numerical entropy than LSKUM and is also better at preserving the symmetry of the flow. The test cases involving discontinuous flows show that the new scheme is capable of resolving shocks very sharply, especially with adaptation. The robustness of the scheme is also very good, as shown in the supersonic test cases.
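The building block of such schemes is a least-squares fit over the local cloud: differences in function values between a point and its neighbours are fitted to a truncated Taylor expansion to estimate spatial derivatives, and the singular values of the overdetermined system expose ill-conditioned (e.g. flat) stencils, as in the first quality test above. A minimal 2-D sketch, assuming a hypothetical stencil and test function:

```python
import numpy as np

def ls_derivatives(p, neighbours, f):
    """Least-squares estimate of (df/dx, df/dy) at point p from a cloud.
    Fits df_i ≈ dx_i * fx + dy_i * fy over the stencil; the singular
    values of the overdetermined system measure stencil quality."""
    dX = neighbours - p                      # offsets, shape (n, 2)
    df = f(neighbours) - f(p)                # function differences, shape (n,)
    grad, _, _, sv = np.linalg.lstsq(dX, df, rcond=None)
    cond = sv[0] / sv[-1]                    # large => flat/degenerate stencil
    return grad, cond

f = lambda q: np.sin(q[..., 0]) + np.cos(q[..., 1])
p = np.array([0.3, 0.7])
stencil = p + 0.01 * np.array([[1, 0], [-1, 0], [0, 1], [0, -1], [1, 1]])
grad, cond = ls_derivatives(p, stencil, f)
print(grad, cond)   # grad ≈ (cos 0.3, -sin 0.7); cond near 1 for a good stencil
```

A nearly collinear stencil drives the smallest singular value toward zero, which is exactly the ill-conditioning the SVD-based test flags.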
255

Acoustic Emission in Composite Laminates - Numerical Simulations and Experimental Characterization

Johnson, Mikael January 2002 (has links)
No description available.
256

Semiparametric estimation of unimodal distributions

Looper, Jason K. January 2003 (has links)
Thesis (M.S.)--University of South Florida, 2003. Includes bibliographical references. / ABSTRACT: One often wishes to understand the probability distribution of stochastic data from experiments or computer simulations. However, where no model is given, practitioners must resort to parametric or non-parametric methods in order to gain information about the underlying distribution. Others have initially used a nonparametric estimator to understand the underlying shape of a set of data, and then later returned with a parametric method to locate the peaks; however, they are interested in estimating spectra, which may have multiple peaks, whereas in this work we are interested in approximating the peak position of a single-peak probability distribution. One method of analyzing a distribution of data is by fitting a curve to, or smoothing, the data. Polynomial regression and least-squares fitting are examples of smoothing methods. Initial understanding of the underlying distribution can be obscured depending on the degree of smoothing; problems such as under- and oversmoothing must be addressed in order to determine the shape of the underlying distribution. Furthermore, smoothing of skewed data can give a biased estimate of the peak position. We propose two new approaches for statistical mode estimation based on the assumption that the underlying distribution has only one peak. The first method imposes the global constraint of unimodality locally, by requiring negative curvature over some domain. The second method performs a search that assumes a position of the distribution's peak and requires positive slope to the left and negative slope to the right. Each approach entails a constrained least-squares fit to the raw cumulative probability distribution. We compare the relative efficiencies [12] of these two estimators in finding the peak location for artificially generated data from known families of distributions: Weibull, beta, and gamma. Within each family a parameter controls the skewness or kurtosis, quantifying the shapes of the distributions for comparison. We also compare our methods with other estimators such as the kernel-density estimator, adaptive histogram, and polynomial regression. By comparing the effectiveness of the estimators, we can determine which estimator best locates the peak position. We find that our estimators do not perform better than other known estimators, and that they are biased. Overall, an adaptation of kernel estimation proved to be the most efficient. The results of this thesis will be submitted, in a different form, for publication by D.A. Rabson and J.K. Looper.
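The baseline the thesis's estimators are measured against, kernel-density mode estimation, can be sketched in a few lines: estimate the density with a Gaussian kernel and take the argmax over a grid. (A sketch only; the constrained least-squares fits to the cumulative distribution proposed in the thesis are not reproduced here.)

```python
import numpy as np
from scipy.stats import gaussian_kde

def kde_mode(samples, grid_size=512):
    """Estimate the mode of a unimodal sample: Gaussian KDE + grid argmax."""
    kde = gaussian_kde(samples)
    grid = np.linspace(samples.min(), samples.max(), grid_size)
    return grid[np.argmax(kde(grid))]

# Skewed unimodal test data from the gamma family used in the comparison.
rng = np.random.default_rng(2)
samples = rng.gamma(shape=3.0, scale=2.0, size=2000)
print(kde_mode(samples))   # true mode of Gamma(3, 2) is (3 - 1) * 2 = 4
```

The skewness illustrates the bias problem the abstract raises: oversmoothing a skewed density drags the estimated peak toward the heavy tail.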
257

Partial least squares structural equation modelling with incomplete data : an investigation of the impact of imputation methods

Mohd Jamil, J. B. January 2012 (has links)
Despite considerable advances in missing-data imputation methods over the last three decades, the problem of missing data remains largely unsolved. Many techniques have emerged in the literature as candidate solutions. These techniques can be categorised into two classes: statistical methods of data imputation and computational intelligence methods of data imputation. Due to the longstanding use of statistical methods in handling missing-data problems, it has taken some time for computational intelligence methods to gain attention, even though these methods have comparable accuracy. The merits of both classes have been discussed at length in the literature, but only limited studies make a significant comparison between them. This thesis contributes to knowledge by, firstly, conducting a comprehensive comparison of standard statistical methods of data imputation, namely mean substitution (MS), regression imputation (RI), expectation maximization (EM), tree imputation (TI) and multiple imputation (MI), on missing completely at random (MCAR) data sets. Secondly, this study compares the efficacy of these methods with a computational intelligence method of data imputation, namely a neural network (NN), on missing not at random (MNAR) data sets. The significance of the differences in performance between the methods is presented. Thirdly, a novel procedure for handling missing data is presented: a hybrid combination of each of these statistical methods with a NN, known here as the post-processing procedure, adopted to approximate MNAR data sets. Simulation studies for each of these imputation approaches have been conducted to assess the impact of missing values on partial least squares structural equation modelling (PLS-SEM), based on the estimated accuracy of both structural and measurement parameters. The best method for dealing with each missing-data mechanism is identified. Several significant insights were deduced from the simulation results. For the problem of MCAR using statistical methods of data imputation, MI performs better than the other methods at all percentages of missing data. Another unique contribution is found when comparing the results before and after the NN post-processing procedure: the improvement in accuracy may result from the neural network's ability to derive meaning from the imputed data set produced by the statistical methods. Based on these results, the NN post-processing procedure can assist MS in producing a significant improvement in the accuracy of the approximated values. This is a promising result, as MS is the weakest method in this study, and an informative one, as MS is often the default method available to users of PLS-SEM software.
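The two simplest statistical methods compared above are easy to sketch directly: mean substitution fills each missing cell with the column mean, while regression imputation predicts the missing cells of one variable by least squares on the observed values of the others. A minimal sketch on a NumPy array with NaNs marking missing values (toy data, MCAR missingness):

```python
import numpy as np

def mean_substitution(X):
    """Fill each missing cell (NaN) with its column mean."""
    X = X.copy()
    col_means = np.nanmean(X, axis=0)
    rows, cols = np.where(np.isnan(X))
    X[rows, cols] = col_means[cols]
    return X

def regression_imputation(X, target):
    """Impute one column by least-squares regression on the other columns
    (the predictor columns are assumed complete in this sketch)."""
    X = X.copy()
    others = np.delete(np.arange(X.shape[1]), target)
    miss = np.isnan(X[:, target])
    A = np.column_stack([np.ones(X.shape[0]), X[:, others]])
    beta, *_ = np.linalg.lstsq(A[~miss], X[~miss, target], rcond=None)
    X[miss, target] = A[miss] @ beta
    return X

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 3))
X[:, 2] += 0.8 * X[:, 0]                 # column 2 depends on column 0
X[rng.random(100) < 0.2, 2] = np.nan     # 20% MCAR missingness in column 2
print(mean_substitution(X)[:3])
print(regression_imputation(X, target=2)[:3])
```

Mean substitution ignores the dependence between columns, which is why it is the weakest method in the study; the thesis's NN post-processing step then refines such crude imputations.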
258

Acoustic impulse detection algorithms for application in gunshot localization

Van der Merwe, J. F. January 2012 (has links)
M. Tech. Electrical Engineering. / This work attempts to find computationally efficient ways to identify and extract gunshot impulses from signals. Areas of study include Generalised Cross Correlation (GCC), sidelobe minimisation utilising Least Squares (LS) techniques, and training algorithms using a Reproducing Kernel Hilbert Space (RKHS) approach. It also incorporates Support Vector Machines (SVM) to train a network to recognise gunshot impulses. By combining these individual research areas, more optimal solutions are obtainable.
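In gunshot localization, GCC is typically used to estimate the time difference of arrival of an impulse between two microphones; the PHAT weighting whitens the spectra so the correlation peak stays sharp for impulsive sounds. A minimal GCC-PHAT sketch (one common GCC weighting; the thesis's specific choice is not stated in the abstract):

```python
import numpy as np

def gcc_phat(sig, ref, fs):
    """Time difference of arrival of sig relative to ref via GCC-PHAT."""
    n = len(sig) + len(ref)
    S = np.fft.rfft(sig, n=n) * np.conj(np.fft.rfft(ref, n=n))
    S /= np.abs(S) + 1e-12                 # PHAT: keep phase, drop magnitude
    cc = np.fft.irfft(S, n=n)
    cc = np.concatenate((cc[-(n // 2):], cc[: n // 2 + 1]))  # center zero lag
    return (np.argmax(np.abs(cc)) - n // 2) / fs

fs = 48_000
impulse = np.exp(-np.arange(200) / 20.0)       # toy decaying impulse
ref = np.zeros(2048); ref[100:300] = impulse
sig = np.zeros(2048); sig[148:348] = impulse   # arrives 48 samples later
print(gcc_phat(sig, ref, fs))                  # ≈ +0.001 s (48 / 48000)
```

Delays estimated this way at several microphone pairs are the inputs to the actual localization step.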
259

On the QR Decomposition of H-Matrices

Benner, Peter, Mach, Thomas 28 August 2009 (has links)
The hierarchical (H-) matrix format allows storing a variety of dense matrices from certain applications in a special data-sparse way with linear-polylogarithmic complexity. Many operations from linear algebra, like matrix-matrix and matrix-vector products, matrix inversion and LU decomposition, can be implemented efficiently using the H-matrix format. Due to its importance in solving many problems in numerical linear algebra, like least-squares problems, it is also desirable to have an efficient QR decomposition of H-matrices. In the past, two different approaches for this task have been suggested. We will review the resulting methods and suggest a new algorithm to compute the QR decomposition of an H-matrix. Like other H-arithmetic operations, the HQR decomposition is of linear-polylogarithmic complexity. We will compare our new algorithm with the older ones using two series of test examples and discuss the benefits and drawbacks of the new approach.
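The role of the QR decomposition in least-squares problems, the motivation cited above, is easy to illustrate in the dense case: factor A = QR and solve Rx = Qᵀb. A dense sketch only; the point of the paper is performing this factorization at linear-polylogarithmic cost in the H-matrix format, which NumPy does not provide:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.normal(size=(200, 5))          # dense stand-in for an H-matrix
x_true = np.arange(1.0, 6.0)
b = A @ x_true + rng.normal(0.0, 0.01, 200)

Q, R = np.linalg.qr(A)                 # reduced QR: Q is 200x5, R is 5x5
x = np.linalg.solve(R, Q.T @ b)        # R is triangular; a dedicated
                                       # triangular solver would exploit that
print(x)                               # ≈ [1, 2, 3, 4, 5]
```

Solving via QR avoids forming AᵀA, which squares the condition number; the same numerical argument motivates wanting QR in H-arithmetic.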
260

Least-squares variational principles and the finite element method: theory, formulations, and models for solid and fluid mechanics

Pontaza, Juan Pablo 30 September 2004 (has links)
We consider the application of least-squares variational principles and the finite element method to the numerical solution of boundary value problems arising in the fields of solid and fluid mechanics. For many of these problems, least-squares principles offer many theoretical and computational advantages in the implementation of the corresponding finite element model that are not present in the traditional weak-form Galerkin finite element model. Most notably, the use of least-squares principles leads to a variational unconstrained minimization problem where stability conditions such as inf-sup conditions (typically arising in mixed methods using weak-form Galerkin finite element formulations) never arise. In addition, the least-squares based finite element model always yields a discrete system of equations with a symmetric positive definite coefficient matrix. These attributes, amongst many others highlighted and detailed in this work, allow the development of robust and efficient finite element models for problems of practical importance. The research documented herein encompasses least-squares based formulations for incompressible and compressible viscous fluid flow, the bending of thin and thick plates, and the analysis of shear-deformable shell structures.
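The key property noted above, that a least-squares principle turns the boundary value problem into unconstrained minimization with a symmetric positive definite system, can be seen in one dimension: minimizing the squared residual of -u'' = f over a basis gives normal equations AᵀA c = Aᵀf with AᵀA symmetric positive definite. A minimal sketch using a sine basis that satisfies the boundary conditions (a spectral stand-in for the thesis's finite element spaces):

```python
import numpy as np

# Solve -u'' = f on (0, 1), u(0) = u(1) = 0, by least squares: minimize the
# PDE residual at collocation points over the basis sin(k*pi*x).
f = lambda x: np.pi**2 * np.sin(np.pi * x)        # exact solution: sin(pi*x)
x = np.linspace(0.01, 0.99, 200)                  # collocation points
K = 8                                             # number of basis functions
ks = np.arange(1, K + 1)

# Residual matrix: -d2/dx2 of sin(k*pi*x) is (k*pi)^2 * sin(k*pi*x).
A = (ks * np.pi) ** 2 * np.sin(np.outer(x, ks * np.pi))

# Normal equations A^T A c = A^T f: the coefficient matrix is symmetric
# positive definite, the hallmark of least-squares formulations.
c = np.linalg.solve(A.T @ A, A.T @ f(x))
print(np.round(c, 6))                             # ≈ [1, 0, 0, ...]

u = np.sin(np.outer(x, ks * np.pi)) @ c           # reconstructed solution
print(np.max(np.abs(u - np.sin(np.pi * x))))      # small error
```

No inf-sup condition enters anywhere: the minimization is unconstrained, and any symmetric positive definite solver applies.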
