371

Partial least squares structural equation modelling with incomplete data : an investigation of the impact of imputation methods

Mohd Jamil, J. B. January 2012 (has links)
Despite considerable advances in missing data imputation methods over the last three decades, the problem of missing data remains largely unsolved. Many techniques have emerged in the literature as candidate solutions. These techniques can be categorised into two classes: statistical methods of data imputation and computational intelligence methods of data imputation. Owing to the longstanding use of statistical methods for handling missing data, computational intelligence methods have taken some time to gain serious attention, even though they offer comparable accuracy. The merits of both classes have been discussed at length in the literature, but only a limited number of studies make a substantive comparison between them. This thesis contributes to knowledge by, firstly, conducting a comprehensive comparison of standard statistical methods of data imputation, namely mean substitution (MS), regression imputation (RI), expectation maximization (EM), tree imputation (TI) and multiple imputation (MI), on missing completely at random (MCAR) data sets. Secondly, this study compares the efficacy of these methods with a computational intelligence method of data imputation, namely a neural network (NN), on missing not at random (MNAR) data sets. The significant differences in performance between the methods are presented. Thirdly, a novel procedure for handling missing data is presented: a hybrid combination of each of these statistical methods with a NN, referred to here as the post-processing procedure, was adopted to approximate MNAR data sets. Simulation studies for each of these imputation approaches were conducted to assess the impact of missing values on partial least squares structural equation modelling (PLS-SEM), based on the estimated accuracy of both structural and measurement parameters. The best method for dealing with each missing data mechanism is identified. Several significant insights were deduced from the simulation results. For MCAR data handled with statistical methods of imputation, MI was found to perform better than the other methods at all percentages of missing data. Another unique contribution emerges when comparing the results before and after the NN post-processing procedure. The improvement in accuracy may result from the neural network's ability to derive meaning from the data set imputed by the statistical methods. Based on these results, the NN post-processing procedure is capable of assisting MS to produce significant improvements in the accuracy of the approximated values. This is a promising result, as MS is the weakest method in this study. It is also informative, since MS is often the default method available to users of PLS-SEM software.
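The contrast the thesis draws between the weakest method (MS) and model-based imputation can be made concrete with a small sketch. The data, missingness rate, and use of scikit-learn are illustrative assumptions; this is not the thesis's code and omits the PLS-SEM evaluation step entirely.

```python
# Minimal sketch: mean substitution (MS) vs. a regression-style iterative
# imputer on MCAR data, scored by RMSE on the entries that were removed.
# All names and parameters here are illustrative assumptions.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import SimpleImputer, IterativeImputer

rng = np.random.default_rng(0)
n, p = 500, 5
cov = 0.6 * np.ones((p, p)) + 0.4 * np.eye(p)   # correlated covariates
X = rng.multivariate_normal(np.zeros(p), cov, size=n)

mask = rng.random(X.shape) < 0.2                # 20% missing completely at random
X_miss = np.where(mask, np.nan, X)

for name, imputer in [("mean substitution", SimpleImputer(strategy="mean")),
                      ("regression-style", IterativeImputer(random_state=0))]:
    X_hat = imputer.fit_transform(X_miss)
    rmse = np.sqrt(np.mean((X_hat[mask] - X[mask]) ** 2))
    print(f"{name:>17}: RMSE on imputed entries = {rmse:.3f}")
```

Because the covariates are correlated, the regression-style imputer can exploit the observed columns, which is exactly the information MS discards.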
372

Acoustic impulse detection algorithms for application in gunshot localization

Van der Merwe, J. F. January 2012 (has links)
M. Tech. Electrical Engineering. / Attempts to find computationally efficient ways to identify and extract gunshot impulses from signals. Areas of study include Generalised Cross Correlation (GCC), sidelobe minimisation utilising Least Squares (LS) techniques, and training algorithms using a Reproducing Kernel Hilbert Space (RKHS) approach. It also incorporates Support Vector Machines (SVM) to train a network to recognise gunshot impulses. Combining these individual research areas yields better solutions than any one of them alone.
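As one example of the techniques named above, generalised cross-correlation with phase transform (GCC-PHAT) weighting is a standard way to estimate the time delay of an impulse between two microphones. The sketch below is the textbook formulation under assumed signal parameters, not the thesis's specific implementation.

```python
# Sketch of GCC-PHAT time-delay estimation between two channels.
# A standard formulation; signal parameters are illustrative assumptions.
import numpy as np

def gcc_phat(x, y, fs):
    """Estimate the lag (in seconds) by which y is delayed relative to x."""
    n = len(x) + len(y)                       # zero-pad to avoid wraparound
    X = np.fft.rfft(x, n=n)
    Y = np.fft.rfft(y, n=n)
    cross = Y * np.conj(X)
    cross /= np.abs(cross) + 1e-12            # PHAT: keep phase, drop magnitude
    cc = np.fft.irfft(cross, n=n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / fs

fs = 48_000
t = np.arange(0, 0.05, 1 / fs)
burst = np.exp(-2_000 * t) * np.sin(2 * np.pi * 1_000 * t)  # toy impulse
d = 37                                                       # true delay, samples
x = np.concatenate((burst, np.zeros(d)))
y = np.concatenate((np.zeros(d), burst))                     # y lags x by d samples
print(f"estimated delay: {gcc_phat(x, y, fs) * fs:.0f} samples (true: {d})")
```

The PHAT weighting whitens the spectrum, which sharpens the correlation peak for impulsive sources such as gunshots.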
373

Innovate On A Shoestring : Product development for the Least Developed Countries and what we can re-use in the Established Markets

Ottosson, Hans January 2015 (has links)
By understanding current approaches and methods of product development (PD), combined with knowledge of the needs and know-how of customers in the least developed countries (LDCs), associated risks and excessive costs can be avoided. The main purpose of this thesis is to highlight the pressing need to develop products and services for the LDCs, to survey current PD practices, and to distill these into one method for developing products pertinent to LDC needs and markets. The second purpose of this thesis is to examine LDC-based development tools that may also be applicable when designing for the more established markets. There are also crucial social, cultural, economic and political reasons for addressing LDC-related issues. The goal is to show companies of all sizes that it can be profitable to expand into new markets in the LDCs, that the steps used there can help generate new revenue when implemented in their current markets, and to provide them with a model for doing so. This thesis clearly demonstrates the importance of development involvement at the local level and the benefit of using complementors. The thesis data and conclusions are based on literature studies and an extended stay in the Dominican Republic. It is observed that by getting closer to the end customer, a company gains an understanding and knowledge that provide an advantage over the competition. For companies to succeed in the LDCs, the three most significant things to consider are: 1) finding the specific needs of the customer, 2) designing for affordability, and 3) sourcing and manufacturing locally. The benefits of such an approach are shown to extend outward in essentially all directions.
374

On the QR Decomposition of H-Matrices

Benner, Peter, Mach, Thomas 28 August 2009 (has links)
The hierarchical (H-) matrix format allows storing a variety of dense matrices from certain applications in a special data-sparse way with linear-polylogarithmic complexity. Many operations from linear algebra, like matrix-matrix and matrix-vector products, matrix inversion and LU decomposition, can be implemented efficiently using the H-matrix format. Due to its importance in solving many problems in numerical linear algebra, like least-squares problems, it is also desirable to have an efficient QR decomposition of H-matrices. In the past, two different approaches for this task have been suggested. We will review the resulting methods and suggest a new algorithm to compute the QR decomposition of an H-matrix. Like other H-arithmetic operations, the H-QR decomposition is of linear-polylogarithmic complexity. We will compare our new algorithm with the older ones by using two series of test examples and discuss benefits and drawbacks of the new approach.
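The least-squares motivation mentioned in the abstract is easy to see in the dense case: once A = QR is available, the normal equations reduce to a triangular solve. The sketch below uses plain numpy on a small dense matrix; the hierarchical format and the H-QR algorithm itself are not involved.

```python
# Dense illustration of QR-based least squares: min ||Ax - b||_2.
# Plain numpy; the H-matrix format is not used in this sketch.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 10))      # tall, full-rank system
x_true = rng.standard_normal(10)
b = A @ x_true + 0.01 * rng.standard_normal(200)

Q, R = np.linalg.qr(A)                  # reduced QR: Q is 200x10, R is 10x10
x_ls = np.linalg.solve(R, Q.T @ b)      # triangular system R x = Q^T b
print(f"recovery error: {np.linalg.norm(x_ls - x_true):.2e}")
```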
375

Least-squares variational principles and the finite element method: theory, formulations, and models for solid and fluid mechanics

Pontaza, Juan Pablo 30 September 2004 (has links)
We consider the application of least-squares variational principles and the finite element method to the numerical solution of boundary value problems arising in the fields of solid and fluid mechanics. For many of these problems, least-squares principles offer theoretical and computational advantages in the implementation of the corresponding finite element model that are not present in the traditional weak form Galerkin finite element model. Most notably, the use of least-squares principles leads to an unconstrained variational minimization problem where stability conditions such as inf-sup conditions (typically arising in mixed methods using weak form Galerkin finite element formulations) never arise. In addition, the least-squares based finite element model always yields a discrete system of equations with a symmetric positive definite coefficient matrix. These attributes, amongst many others highlighted and detailed in this work, allow the development of robust and efficient finite element models for problems of practical importance. The research documented herein encompasses least-squares based formulations for incompressible and compressible viscous fluid flow, the bending of thin and thick plates, and the analysis of shear-deformable shell structures.
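The construction underlying these formulations can be stated compactly for an abstract first-order system Lu = f; the display below is the generic textbook statement, not a formulation specific to any one chapter of the thesis.

```latex
% Least-squares functional for \mathcal{L}u = f on a domain \Omega:
J(u) = \tfrac{1}{2}\,\lVert \mathcal{L}u - f \rVert_{0,\Omega}^{2}

% Its minimization over a finite element space V_h yields:
% find u_h \in V_h such that
(\mathcal{L}u_h,\, \mathcal{L}v_h)_{0,\Omega}
  = (f,\, \mathcal{L}v_h)_{0,\Omega}
  \qquad \forall\, v_h \in V_h
```

This is an unconstrained minimization, so no inf-sup compatibility condition is needed, and the resulting coefficient matrix is symmetric positive definite, exactly the attributes the abstract highlights.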
376

Periodinių sistemų parametrų įvertinimas / Estimation of the parameters of periodically time-varying systems

Gajevski, Pavel 11 June 2004 (has links)
This work discusses a block parameter estimation method for linear periodically time-varying systems. The work consists of two parts, theoretical and practical. The theoretical part covers the model's description, construction and structure; it presents the Markov estimate, that is, the generalized least squares estimate, together with a description of the generalized model. The practical part is devoted to carrying out the experiments and describing them. Conclusions are drawn about the performance of the block parameter estimation method. The experiments were carried out using Matlab, including its matrix and period computations. The results of the experiments are given in tables and charts.
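The flavour of such an identification experiment can be shown with ordinary least squares on a simple time-invariant difference equation; the periodic block formulation of the thesis is more elaborate, so the sketch below is a generic stand-in with assumed parameters.

```python
# Generic least-squares parameter estimation for
# y[t] = a*y[t-1] + b*u[t-1] + noise; not the thesis's periodic model.
import numpy as np

rng = np.random.default_rng(2)
a_true, b_true, T = 0.8, 1.5, 400
u = rng.standard_normal(T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = a_true * y[t - 1] + b_true * u[t - 1] + 0.05 * rng.standard_normal()

Phi = np.column_stack((y[:-1], u[:-1]))          # regressor matrix
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
print(f"estimated a = {theta[0]:.3f}, b = {theta[1]:.3f}")
```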
377

Curvelet-domain preconditioned "wave-equation" depth-migration with sparseness and illumination constraints

Herrmann, Felix J., Moghaddam, Peyman P. January 2004 (has links)
A non-linear edge-preserving solution to the least-squares migration problem with sparseness & illumination constraints is proposed. The applied formalism explores Curvelets as basis functions. By virtue of their sparseness and locality, Curvelets not only reduce the dimensionality of the imaging problem but they also naturally lead to a dense preconditioning that almost diagonalizes the normal/Hessian operator. This almost diagonalization allows us to recast the imaging problem into a ’simple’ denoising problem. As such, we are in the position to use non-linear estimators based on thresholding. These estimators exploit the sparseness and locality of Curvelets and allow us to compute a first estimate for the reflectivity, which approximates the least-squares solution of the seismic inverse scattering problem. Given this estimate, we impose sparseness and additional amplitude corrections by solving a constrained optimization problem. This optimization problem is initialized and constrained by the thresholded image and is designed to remove remaining imaging artifacts and imperfections in the estimation and reconstruction.
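The denoising step described above amounts, in the simplest setting, to shrinking small transform coefficients toward zero. The sketch below shows plain soft-thresholding on a sparse coefficient vector; curvelets, and the migration operators themselves, are not involved.

```python
# Soft-thresholding as the elementary form of the denoising estimator.
# Illustrative only: a generic sparse vector, not curvelet coefficients.
import numpy as np

def soft_threshold(c, lam):
    """Shrink coefficients toward zero; entries below lam vanish."""
    return np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)

rng = np.random.default_rng(3)
coeffs = np.zeros(100)
coeffs[[5, 20, 60]] = [4.0, -3.0, 5.0]        # sparse "reflectivity"
noisy = coeffs + 0.3 * rng.standard_normal(100)
estimate = soft_threshold(noisy, lam=0.9)     # threshold at ~3 sigma
print(f"nonzeros kept: {np.count_nonzero(estimate)}")
```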
378

Optimization strategies for sparseness- and continuity-enhanced imaging : Theory

Herrmann, Felix J., Moghaddam, Peyman P., Kirlin, Rodney L. January 2005 (has links)
Two complementary solution strategies to the least-squares migration problem with sparseness and continuity constraints are proposed. The applied formalism explores the sparseness of curvelets on the reflectivity and their invariance under the demigration-migration operator. Sparseness is enhanced by (approximately) minimizing a (weighted) l1-norm on the curvelet coefficients. Continuity along imaged reflectors is brought out by minimizing the anisotropic diffusion or total variation norm, which penalizes variations along and in between reflectors. A brief sketch of the theory is provided, as well as a number of synthetic examples. Technical details on the implementation of the optimization strategies are deferred to an accompanying paper: implementation.
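Schematically, the combined objective described above can be written as a sparsity term plus a continuity penalty under a data-consistency constraint; the symbols below are generic placeholders rather than the authors' notation.

```latex
% x: curvelet coefficients, C^T x: the image, A: modelling operator,
% b: data, W: weights; \alpha, \beta, \epsilon are tuning parameters.
\min_{x}\; \alpha\,\lVert W x \rVert_{1}
        + \beta\,\mathrm{TV}\!\left(C^{T} x\right)
\quad \text{subject to} \quad \lVert A x - b \rVert_{2} \le \epsilon
```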
379

Investigation of wireless local area network facilitated angle of arrival indoor location

Wong, Carl Monway 11 1900 (has links)
As wireless devices become more common, the ability to position a wireless device has become a topic of importance. Accurate positioning through technologies such as the Global Positioning System is possible in outdoor environments. Indoor environments pose a different challenge, and research into positioning users indoors continues. Due to the prevalence of wireless local area networks (WLANs) in many indoor spaces, it is prudent to determine their capabilities for positioning purposes. Signal-strength and time-based positioning systems have been studied for WLANs. Direction or angle of arrival (AOA) based positioning will become possible with multiple antenna arrays, such as those included with upcoming devices based on the IEEE 802.11n standard. The potential performance of such a system is evaluated here; it depends on the accuracy of the AOA estimation as well as on the positioning algorithm. Two different maximum-likelihood (ML) derived algorithms are used to determine the AOA of the mobile user: a specialized simple ML algorithm, and the space-alternating generalized expectation-maximization (SAGE) channel parameter estimation algorithm. The algorithms' AOA estimation errors are determined using real wireless signals captured in an indoor office environment. The statistics of the AOA error are then used in a positioning simulation to predict positioning performance. A least squares (LS) technique as well as the popular extended Kalman filter (EKF) are used to combine the AOAs to determine position. The simulation shows that AOA-based positioning using WLANs indoors has the potential to locate a wireless user with an accuracy of about 2 m, which is comparable to other positioning systems previously developed for WLANs.
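The LS step of such a system is easy to sketch: each access point's bearing measurement constrains the user to a line, and the position estimate is the least-squares intersection of those lines. The geometry below is an assumed toy setup, not the thesis's measured data, and the EKF variant is omitted.

```python
# Least-squares position fix from AOA bearings at known access points.
# Toy geometry with assumed noise; not the thesis's dataset or EKF.
import numpy as np

aps = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])  # AP positions (m)
target = np.array([4.0, 6.0])                            # true user position

rng = np.random.default_rng(4)
angles = np.arctan2(target[1] - aps[:, 1], target[0] - aps[:, 0])
angles += np.deg2rad(2.0) * rng.standard_normal(len(aps))  # bearing noise

# A bearing a_i at AP (x_i, y_i) constrains the user to the line
# sin(a_i)*x - cos(a_i)*y = sin(a_i)*x_i - cos(a_i)*y_i.
A = np.column_stack((np.sin(angles), -np.cos(angles)))
b = np.sin(angles) * aps[:, 0] - np.cos(angles) * aps[:, 1]
est, *_ = np.linalg.lstsq(A, b, rcond=None)
print(f"estimated position: {est}, error: {np.linalg.norm(est - target):.2f} m")
```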
380

Analysis of Additive Risk Model with High Dimensional Covariates Using Partial Least Squares

Zhou, Yue 09 June 2006 (has links)
In this thesis, we consider the problem of constructing an additive risk model based on right-censored survival data to predict the survival times of cancer patients, especially when the dimension of the covariates is much larger than the sample size. For microarray gene expression data, the number of gene expression levels is far greater than the number of samples. Such "small n, large p" problems have attracted researchers investigating the association between cancer patient survival times and gene expression profiles in recent years. We apply partial least squares to reduce the dimension of the covariates and obtain the corresponding latent variables (components), and these components are used as new regressors to fit the extended additive risk model. We also employ the time-dependent AUC curve (area under the Receiver Operating Characteristic (ROC) curve) to assess how well the model predicts survival time. Finally, this approach is illustrated by re-analysis of the well-known AML data set and a breast cancer data set. The results show that the model fits both data sets very well.
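The dimension-reduction step can be sketched in a few lines: extract a handful of PLS components from p >> n covariates and pass them on as regressors. The sketch below uses a continuous response as a stand-in; censoring and the additive risk fit, which are the substance of the thesis, are omitted.

```python
# PLS dimension reduction for "small n, large p" data; the extracted
# components would serve as regressors in the downstream risk model.
# Synthetic data as a stand-in for gene expression; no censoring here.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(5)
n, p = 60, 2000                         # far more covariates than samples
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:10] = 1.0                         # only a few covariates carry signal
y = X @ beta + 0.5 * rng.standard_normal(n)

pls = PLSRegression(n_components=3)
pls.fit(X, y)
components = pls.transform(X)           # n x 3 latent variables (scores)
print(components.shape)
```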
