331

Regularization Techniques for Linear Least-Squares Problems

Suliman, Mohamed Abdalla Elhag 04 1900 (has links)
Linear estimation is a fundamental branch of signal processing that deals with estimating the values of parameters from corrupted measured data. Throughout the years, several optimization criteria have been used to achieve this task. The most prominent among these is linear least-squares. Although this criterion has enjoyed wide popularity in many areas due to its attractive properties, it suffers from some shortcomings. Alternative optimization criteria have therefore been proposed; these new criteria allow, in one way or another, the incorporation of further prior information into the problem. Among these alternative criteria is regularized least-squares (RLS). In this thesis, we propose two new algorithms to find the regularization parameter for linear least-squares problems. In the constrained perturbation regularization algorithm (COPRA) for random matrices and COPRA for linear discrete ill-posed problems, an artificial perturbation matrix with a bounded norm is forced into the model matrix. This perturbation is introduced to enhance the singular-value structure of the matrix, so that the modified model provides a better, more stable solution when used to estimate the original signal by minimizing the worst-case residual error function. Unlike many other regularization algorithms that seek to minimize the estimated data error, the two proposed algorithms select the artificial perturbation bound and the regularization parameter in a way that approximately minimizes the mean-squared error (MSE) between the original signal and its estimate under various conditions. The first proposed COPRA method estimates the regularization parameter when the measurement matrix is complex Gaussian with centered, unit-variance (standard), independent and identically distributed (i.i.d.) entries. The second proposed COPRA method deals with discrete ill-posed problems in which the singular values of the linear transformation matrix decay very quickly to a significantly small value. For both proposed algorithms, the regularization parameter is obtained as the solution of a non-linear characteristic equation. We provide a detailed study of the general properties of these functions and address the existence and uniqueness of the root. To demonstrate the performance of the derivations, the first proposed COPRA method is applied to estimate different signals with various characteristics, while the second is applied to a large set of real-world discrete ill-posed problems. Simulation results demonstrate that the two proposed methods outperform a set of benchmark regularization algorithms in most cases. In addition, the algorithms are shown to have the lowest run time.
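The listing gives only the abstract, but the regularized least-squares estimator that both COPRA variants build on is easy to sketch. The minimal Python example below (the function name and the i.i.d. Gaussian test setup are illustrative, not taken from the thesis) solves the RLS normal equations for a grid of regularization parameters and reports the MSE against the true signal:

```python
import numpy as np

def regularized_ls(A, y, lam):
    """Regularized least-squares estimate: argmin ||A x - y||^2 + lam ||x||^2.

    Solved via the normal equations (A^T A + lam I) x = A^T y.
    """
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

# Hypothetical example: i.i.d. Gaussian model matrix, as in the first COPRA setting.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = rng.standard_normal(20)
y = A @ x_true + 0.1 * rng.standard_normal(50)

for lam in (0.0, 0.1, 1.0):
    x_hat = regularized_ls(A, y, lam)
    print(lam, np.mean((x_hat - x_true) ** 2))  # MSE versus the true signal
```

COPRA's contribution is choosing lam from the data; the sweep above only shows why the choice matters.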
332

Signály s omezeným spektrem, jejich vlastnosti a možnosti jejich extrapolace / Bandlimited signals, their properties and extrapolation capabilities

Mihálik, Ondrej January 2019 (has links)
The work is concerned with band-limited signal extrapolation using a truncated series of prolate spheroidal wave functions. Our aim is to investigate the extent to which a signal can be extrapolated from its samples taken in a finite interval. It is often believed that this extrapolation method depends on computing definite integrals. We show an alternative approach using the least-squares method and compare it with methods based on numerical integration. We also consider their performance in the presence of noise and the possibility of using these algorithms for real-time data processing. Finally, all proposed algorithms are tested on real data from a microphone array so that their performance can be compared.
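As a rough illustration of the least-squares approach the abstract mentions, here is a sketch using the discrete analogue of the prolate spheroidal wave functions, the Slepian (DPSS) sequences available in SciPy; the test signal, interval split, and series length are all hypothetical:

```python
import numpy as np
from scipy.signal.windows import dpss

M, NW, K = 256, 4, 6             # full length, time-bandwidth product, series length
basis = dpss(M, NW, K).T         # columns: discrete prolate spheroidal sequences

# Hypothetical band-limited test signal, observed only on the first half.
t = np.arange(M)
x = np.cos(2 * np.pi * 0.005 * t) + 0.5 * np.sin(2 * np.pi * 0.012 * t)
obs = slice(0, M // 2)

# Least-squares fit of the truncated series on the observed interval ...
coef, *_ = np.linalg.lstsq(basis[obs], x[obs], rcond=None)
# ... then extrapolate by evaluating the series on the full interval.
x_ext = basis @ coef
print(np.max(np.abs(x_ext[M // 2:] - x[M // 2:])))  # error on the unseen half
```

The fit requires no definite integrals, which is the point of the least-squares alternative; accuracy degrades as noise is added, which is what the thesis investigates.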
333

On the MSE Performance and Optimization of Regularized Problems

Alrashdi, Ayed 11 1900 (has links)
The amount of data measured, transmitted/received, and stored has increased dramatically in recent years, so today we live in the world of big data. Fortunately, in many applications we can take advantage of structures and patterns in the data to overcome the curse of dimensionality. The best-known structures include sparsity, low-rankness, and block sparsity, with applications ranging over machine learning, medical imaging, signal processing, social networks, and computer vision. This has also led to specific interest in recovering signals from noisy compressed measurements (the Compressed Sensing (CS) problem). Such problems are generally ill-posed unless the signal is structured; the structure can be captured by a regularizer function. This gives rise to interest in regularized inverse problems, where reconstructing the structured signal is modeled as a regularized problem. This thesis focuses on finding the optimal regularization parameter for such problems as ridge regression, LASSO, square-root LASSO, and low-rank Generalized LASSO. Our goal is to optimally tune the regularizer to minimize the mean-squared error (MSE) of the solution when the noise variance or structure parameters are unknown. The analysis is based on the framework of the Convex Gaussian Min-max Theorem (CGMT), which has recently been used to precisely predict performance errors.
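The thesis derives the MSE-optimal parameter analytically via the CGMT; as a point of reference, the brute-force "oracle" tuning it improves upon can be sketched for ridge regression (the sparse, underdetermined test problem below is invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, k = 100, 200, 10                               # underdetermined: structure needed
x0 = np.zeros(p); x0[:k] = rng.standard_normal(k)    # sparse ground-truth signal
A = rng.standard_normal((n, p)) / np.sqrt(n)
y = A @ x0 + 0.05 * rng.standard_normal(n)

def ridge(A, y, lam):
    p = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(p), A.T @ y)

# Oracle tuning: sweep lambda and keep the MSE-minimizing value.
lams = np.logspace(-4, 1, 30)
mses = [np.mean((ridge(A, y, lam) - x0) ** 2) for lam in lams]
print("best lambda:", lams[int(np.argmin(mses))])
```

The oracle sweep needs the true signal; the CGMT analysis predicts the same optimum without it, which is what makes the thesis's tuning practical.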
334

Samonastavitelná regulace elektrického motoru / Self-tuning control of electric motor

Havlíček, Jiří January 2017 (has links)
The diploma thesis deals with self-tuning PSD controllers. The parameters of the model are obtained by a non-recursive (batch) least-squares method. With the assistance of Matlab/Simulink, the individual variants of the PSD controller are compared on a second-order system. The thesis also presents a simulation of self-tuning cascade control of a PMSM's current and speed loops. The final part covers the implementation of the individual algorithms on the dSPACE platform for a real PMSM.
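A minimal sketch of the batch (non-recursive) least-squares identification step the abstract describes, for a hypothetical second-order ARX model (the coefficients and noise level are invented for illustration):

```python
import numpy as np

# Hypothetical second-order model: y[k] = a1*y[k-1] + a2*y[k-2] + b1*u[k-1] + b2*u[k-2]
rng = np.random.default_rng(2)
a1, a2, b1, b2 = 1.5, -0.7, 0.5, 0.3
u = rng.standard_normal(500)
y = np.zeros(500)
for k in range(2, 500):
    y[k] = a1*y[k-1] + a2*y[k-2] + b1*u[k-1] + b2*u[k-2] + 0.01*rng.standard_normal()

# Batch least squares: stack all regressor rows and solve once (no recursion).
Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1], u[:-2]])
theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
print(theta)  # estimates of a1, a2, b1, b2
```

The estimated parameters would then feed the controller-tuning rule; that step, and the PMSM cascade itself, is specific to the thesis and not reproduced here.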
335

Trialability, perceived risk and complexity of understanding as determinants of cloud computing services adoption

Etsebeth, Eugene Everard 16 February 2013 (has links)
In 2011, one-third of South African organisations did not intend to adopt cloud computing services because IT decision-makers lacked understanding of the related concepts and benefits (Goldstuck, 2011). This research develops a media-oriented model to examine the adoption of these services in South Africa. The model uses the technology acceptance model (TAM) and innovation diffusion theory (IDT) to develop variables considered determinants of adoption, including trialability, complexity of understanding, perceived risk, perceived ease of use, and perceived usefulness. An electronic survey was sent to 107 IT decision-makers; over 80% of the respondents were C-suite executives. The Partial Least Squares (PLS) method, a second-generation technique with advantages over ordinary regression models, was chosen to depict and test the proposed model. The data analysis included evaluating and modifying the model, assessing the new measurement model, testing the hypotheses of the model structure, and presenting the structural model. The research found that media, experts, and word of mouth mitigate perceived risks, including bandwidth, connectivity, and power. Furthermore, trialability and perceived usefulness were affected by social influence, as well as influencing adoption. The results enable service providers and marketers to develop product roadmaps and pinpoint media messages. / Dissertation (MBA)--University of Pretoria, 2012. / Gordon Institute of Business Science (GIBS) / unrestricted
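The dissertation uses PLS in the structural-equation-modeling sense; the sketch below instead uses scikit-learn's PLS regression on invented survey-style data, purely to illustrate the flavor of the method (all variable definitions and effect sizes are hypothetical):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Hypothetical survey data: Likert-scale predictors and an adoption-intention outcome.
rng = np.random.default_rng(3)
X = rng.integers(1, 8, size=(107, 5)).astype(float)  # trialability, risk, complexity, PEOU, PU
y = 0.4*X[:, 0] - 0.3*X[:, 1] + 0.5*X[:, 4] + rng.normal(0, 1, 107)

pls = PLSRegression(n_components=2).fit(X, y)
print(pls.coef_.ravel())   # coefficient for each determinant
print(pls.score(X, y))     # R^2 of the fitted model
```

A full PLS-SEM analysis, as in the dissertation, additionally models latent constructs and their measurement items, which dedicated tools handle.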
336

Least-squares Migration and Full Waveform Inversion with Multisource Frequency Selection

Huang, Yunsong 09 1900 (has links)
Multisource Least-Squares Migration (LSM) of phase-encoded supergathers has shown great promise in reducing the computational cost of conventional migration. For the marine acquisition geometry, however, this approach faces the challenge of erroneous misfit due to the mismatch between the limited number of live traces per shot recorded in the field and the pervasive number of traces generated by finite-difference modeling. To tackle this mismatch, I present a frequency-selection strategy for LSM of supergathers. The key idea is, at each LSM iteration, to assign a unique frequency band to each shot gather, so that the spectral overlap among those shots, and therefore their crosstalk, is zero. Consequently, each receiver can unambiguously identify and then discount the superfluous sources, i.e., those not associated with the receiver in marine acquisition. Compared with standard migration, the proposed method applied to the 2D SEG/EAGE salt model yields better-resolved images at about 1/8 the cost; results for the 3D SEG/EAGE salt model with an Ocean Bottom Seismometer (OBS) survey show a speedup of 40×. The strategy is next extended to multisource Full Waveform Inversion (FWI) of supergathers for marine streamer data, with the same advantages in computational efficiency and storage. In the Finite-Difference Time-Domain (FDTD) method, to mitigate spectral leakage due to delayed onsets of sine waves detected at receivers, I double the simulation time and retain only the second half of the simulated records. Compared with standard FWI, the proposed method applied to the 2D SEG/EAGE salt velocity model and to Gulf of Mexico (GOM) field data achieves speedups of about 4× and 8×, respectively. Formulas are then derived for the resolution limits of the various constituent wavepaths in FWI: diving waves, primary reflections, diffractions, and multiple reflections. They suggest that inverting multiples can provide low- and intermediate-wavenumber components of the velocity model not available in the primaries. In addition, diffractions can provide twice or better the resolution of specular reflections for comparable reflector and diffractor depths. The width of the diffraction-transmission wavepath is on the order of λ at the diffractor location.
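The bookkeeping behind the key idea, assigning disjoint frequency bands to shots so that their crosstalk vanishes, might look like the following sketch (rotating the assignment across iterations is my assumption for how every shot eventually covers all frequencies, not a detail taken from the thesis):

```python
import numpy as np

def assign_frequency_bands(n_shots, n_freqs, iteration):
    """Assign each shot a unique, non-overlapping set of frequency bins.

    Disjoint bands mean zero spectral overlap between encoded shots, so a
    receiver can separate and discount sources it never recorded.
    """
    bins = np.roll(np.arange(n_freqs), iteration)  # rotate bands across iterations
    return np.array_split(bins, n_shots)           # disjoint partition of the bins

for it in range(2):
    print(it, [b.tolist() for b in assign_frequency_bands(3, 9, it)])
```

Each shot then keeps only its assigned bins of the source spectrum at that iteration; the migration or FWI update proceeds on the band-limited supergather.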
337

Public Opinion on Tobacco, Alcohol, and Sugar Policy and its Economic Implications in Sweden : A study on sociodemographic factors’ effects on health policy attitudes of Swedes

Karlsson, Jonas January 2020 (has links)
Using paired-samples t-tests, this study examines attitudes toward government intervention to decrease the consumption of tobacco, alcohol, and sugar in order to improve public health in Sweden. The effects of four sociodemographic variables (gender, age, education, and income) on attitudes toward health policies are tested using Ordinary Least Squares and ordered probit regressions. The research uses cross-sectional data supplied by a national survey. The results show that respondents want tobacco regulated the most, followed by alcohol and lastly sugar. According to the respondents, tobacco and alcohol consumption need clear societal restrictions, while individuals should be responsible for their own sugar consumption. This implies that tobacco and alcohol restrictions introduced by the government should be effective and should therefore reduce consumption and subsequently decrease a country's economic costs; the opposite is true for sugar policy. Women, younger people, highly educated people, and people with higher incomes show more support for tobacco restrictions. Women, younger people, and highly educated people show more support for alcohol restrictions. Lastly, respondents with higher levels of education are more supportive of sugar restrictions.
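Both estimators named in the abstract are available in statsmodels; the sketch below fits them to invented survey-style data (the variable coding, sample size, and effect sizes are hypothetical):

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Hypothetical survey data: attitude on a 1-5 scale, four sociodemographic predictors.
rng = np.random.default_rng(4)
n = 400
X = np.column_stack([
    rng.integers(0, 2, n),         # gender (female = 1)
    rng.integers(18, 80, n),       # age in years
    rng.integers(0, 2, n),         # higher education (yes = 1)
    rng.normal(30000, 8000, n),    # income
]).astype(float)
latent = 0.5*X[:, 0] - 0.02*X[:, 1] + 0.4*X[:, 2] + rng.normal(size=n)
attitude = np.digitize(latent, np.quantile(latent, [0.2, 0.4, 0.6, 0.8])) + 1

ols = sm.OLS(attitude, sm.add_constant(X)).fit()
probit = OrderedModel(attitude, X, distr='probit').fit(method='bfgs', disp=False)
print(ols.params, probit.params, sep='\n')
```

Running both, as the study does, checks that the simpler OLS coefficients tell the same story as the model that respects the ordinal scale.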
338

Using Second-Order Information in Training Deep Neural Networks

Ren, Yi January 2022 (has links)
In this dissertation, we are concerned with the advancement of optimization algorithms for training deep learning models, in particular practical second-order methods that take into account the structure of deep neural networks (DNNs). Although first-order methods such as stochastic gradient descent have long been the predominant optimization algorithms used in deep learning, second-order methods are of interest because of their ability to use curvature information to accelerate optimization. After background material in Chapter 1, Chapters 2 and 3 focus on the development of practical quasi-Newton methods for training DNNs. We analyze the Kronecker-factored structure of the Hessian matrix of multi-layer perceptrons and convolutional neural networks and consequently propose block-diagonal Kronecker-factored quasi-Newton methods named K-BFGS and K-BFGS(L). To handle the non-convex nature of DNNs, we also establish new double-damping techniques for our proposed methods. K-BFGS and K-BFGS(L) have memory requirements comparable to first-order methods and incur only mild overhead in per-iteration time complexity. In Chapter 4, we develop a new approximate natural-gradient method named Tensor Normal Training (TNT), in which the Fisher matrix is viewed as the covariance matrix of a tensor normal distribution (a generalized form of the normal distribution). The tractable Kronecker-factored approximation to the Fisher information matrix that results enables TNT to enjoy memory requirements and per-iteration computational costs only slightly higher than those of first-order methods. Notably, unlike KFAC and K-BFGS/K-BFGS(L), TNT requires only the shapes of the trainable parameters of a model and does not depend on the specific model architecture. In Chapter 5, we consider subsampled versions of Gauss-Newton and natural-gradient methods applied to DNNs. Because of the low-rank nature of the subsampled matrices, we use the Sherman-Morrison-Woodbury formula along with backpropagation to compute their inverses efficiently. We also show that, under rather mild conditions, the algorithm converges to a stationary point if Levenberg-Marquardt damping is used. The results of a substantial number of numerical experiments, reported in Chapters 2 through 5, compare our methods with state-of-the-art methods for training DNNs and demonstrate the efficiency and effectiveness of the proposed second-order methods.
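The Chapter 5 device is concrete enough to sketch: with a subsampled Jacobian, the damped Gauss-Newton system is low-rank plus a multiple of the identity, so the Sherman-Morrison-Woodbury identity reduces it to a small solve. A minimal illustration (the shapes and damping value below are invented, and backpropagation is replaced by an explicit matrix):

```python
import numpy as np

def gn_step_woodbury(J, g, mu):
    """Solve (J^T J + mu I) d = g via Sherman-Morrison-Woodbury.

    With a subsampled Jacobian J of shape (m, n) and m << n, factoring the
    small m x m matrix (mu I + J J^T) is far cheaper than the n x n system.
    """
    m = J.shape[0]
    small = mu * np.eye(m) + J @ J.T              # m x m, cheap to factor
    return (g - J.T @ np.linalg.solve(small, J @ g)) / mu

# Sanity check against the direct solve on a hypothetical subsampled system.
rng = np.random.default_rng(5)
J = rng.standard_normal((20, 500))                # 20 samples, 500 parameters
g = rng.standard_normal(500)
d = gn_step_woodbury(J, g, mu=0.1)
d_ref = np.linalg.solve(J.T @ J + 0.1 * np.eye(500), g)
print(np.max(np.abs(d - d_ref)))                  # agreement to round-off
```

Here mu plays the role of the Levenberg-Marquardt damping that the convergence result in Chapter 5 relies on.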
339

Multivariate analysis and GIS in generating vulnerability map of acid sulfate soils.

Nguyen, Nga January 2015 (has links)
The study employed multivariate methods to generate vulnerability maps for acid sulfate (AS) soils in Norrbotten County, Sweden. The relationships between the reclassified datasets and each biogeochemical element were carefully evaluated with Kruskal-Wallis one-way ANOVA and PLS analysis. The Kruskal-Wallis results provided useful knowledge of the relationships between the preliminary vulnerability ranks in the classified datasets and the amount of each biogeochemical element. This statistical knowledge, together with expert knowledge, was then used to generate the final vulnerability ranks of AS soils in the classified datasets, which served as the input independent variables in the PLS analyses. The Kruskal-Wallis and PLS results showed a strong correlation between higher levels of total Cu2+, Ni2+, and S and the higher vulnerability ranks in the classified datasets; hence, total Cu2+, Ni2+, and S were chosen as the dependent variables for further PLS analyses. In particular, the Variable Importance in the Projection (VIP) value of each classified dataset was standardized to generate its weight, and the vulnerability map of AS soils resulted from a linear combination of the standardized values in the classified datasets and their weights. Seven weight sets were formed from either univariate or multivariate PLS analyses. Accuracy tests were done by checking the classification of measured pH values from 74 soil profiles against the different vulnerability maps and by evaluating the areas that were not AS soil within the medium-to-high AS-soil-probability groups in the land-cover and soil-type datasets. Compared with the other weight sets, the weight sets from multivariate PLS analysis of total Ni2+ & S or total Cu2+ & S had the most robust predictive performance. Sensitivity analysis was done for the total Ni2+ & S weight set, and its results showed that the availability of ditches, changes in the terrain surface, the altitude level, and the slope had a high influence on the vulnerability map of AS soils. The study showed that multivariate analysis is a very good methodology for predicting the probability of acid sulfate soil.
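A sketch of the VIP computation the abstract relies on, using scikit-learn's PLS regression on invented data (the VIP formula is the standard one; the dataset names, sizes, and response are hypothetical stand-ins for the classified terrain datasets and element totals):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def vip_scores(pls):
    """Variable Importance in Projection for a fitted PLSRegression model."""
    W = pls.x_weights_                    # (n_features, n_components)
    T = pls.x_scores_                     # (n_samples, n_components)
    Q = pls.y_loadings_                   # (n_targets, n_components)
    p, _ = W.shape
    ssy = np.sum(T ** 2, axis=0) * np.sum(Q ** 2, axis=0)  # y-variance per component
    wnorm = (W / np.linalg.norm(W, axis=0)) ** 2
    return np.sqrt(p * (wnorm @ ssy) / ssy.sum())

# Hypothetical predictors (e.g. classified datasets) and response (e.g. total S).
rng = np.random.default_rng(6)
X = rng.standard_normal((100, 8))
y = X[:, 0] - 0.5 * X[:, 3] + 0.1 * rng.standard_normal(100)
pls = PLSRegression(n_components=3).fit(X, y)
print(vip_scores(pls))                    # informative variables score VIP > 1
```

Standardizing these VIP values gives the per-dataset weights that the study combines linearly into the vulnerability map.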
340

Minimax D-optimal designs for regression models with heteroscedastic errors

Yzenbrandt, Kai 20 April 2021 (has links)
Minimax D-optimal designs for regression models with heteroscedastic errors are studied and constructed. These designs are robust against possible misspecification of the error variance in the model. We propose a flexible assumption for the error variance and use a minimax approach to define robust designs. As usual, robust designs are hard to find analytically, since the associated design problem is not a convex optimization problem. However, the minimax D-optimal design problem has an objective function that is a difference of two convex functions. An effective algorithm is developed to compute minimax D-optimal designs under the least-squares estimator and the generalized least-squares estimator. The algorithm can be applied to construct minimax D-optimal designs for any linear or nonlinear regression model with heteroscedastic errors. In addition, several theoretical results are obtained for the minimax D-optimal designs. / Graduate
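The minimax layer is beyond a short sketch, but the underlying D-optimality criterion for a heteroscedastic model is simple to evaluate; the following compares two hypothetical designs under an assumed variance function (the quadratic model, design points, and weights are all illustrative):

```python
import numpy as np

def d_criterion(design_pts, weights, var_fn):
    """log-det of the information matrix for the quadratic model
    y = b0 + b1*x + b2*x^2 with heteroscedastic error variance var_fn(x).
    """
    F = np.column_stack([np.ones_like(design_pts), design_pts, design_pts**2])
    M = (F.T * (weights / var_fn(design_pts))) @ F  # sum_i w_i f(x_i) f(x_i)^T / var(x_i)
    return np.linalg.slogdet(M)[1]

# Hypothetical comparison of two candidate designs on [-1, 1].
x = np.linspace(-1, 1, 5)
uniform = np.full(5, 1 / 5)
endpoint_heavy = np.array([0.3, 0.1, 0.2, 0.1, 0.3])
var = lambda x: 1 + 0.5 * x**2                      # one assumed variance structure
print(d_criterion(x, uniform, var), d_criterion(x, endpoint_heavy, var))
```

The minimax design maximizes the worst case of this criterion over a whole class of variance functions rather than the single assumed one, which is the non-convex problem the thesis's algorithm addresses.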
