531

Estimation Using Low Rank Signal Models

Mahata, Kaushik January 2003 (has links)
Designing estimators based on low rank signal models is a common practice in signal processing. Some of these estimators are designed to use a single snapshot vector, while others employ multiple snapshots. This dissertation deals with both cases in different contexts.

Separable nonlinear least squares is a popular tool for extracting parameter estimates from a single snapshot vector. Asymptotic statistical properties of the separable nonlinear least squares estimates are explored in the first part of the thesis. The assumptions imposed on the noise process and the data model are general, so the results are useful in a wide range of applications. Sufficient conditions are established for consistency, asymptotic normality and statistical efficiency of the estimates. An expression for the asymptotic covariance matrix is derived, and it is shown that the estimates are circular. The analysis is also extended to constrained separable nonlinear least squares problems.

Nonparametric estimation of material functions from wave propagation experiments is the topic of the second part. This is a typical application in which a single snapshot vector is employed. Numerical and statistical properties of the least squares algorithm are explored in this context. Boundary conditions in the experiments are used to achieve superior estimation performance. Subsequently, a subspace based estimation algorithm is proposed. The subspace algorithm is not only computationally efficient, but also equivalent to the least squares method in accuracy.

Estimation of the frequencies of multiple real valued sine waves is the topic of the third part, where multiple snapshots are employed. A new low rank signal model is introduced. Subsequently, an ESPRIT-like method named R-Esprit and a weighted subspace fitting approach are developed based on the proposed model. Compared to ESPRIT, R-Esprit is not only computationally more economical but also equivalent in performance. The weighted subspace fitting approach shows a significant improvement in the resolution threshold and is also robust to additive noise.
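As a hedged illustration of the separable structure exploited above, the following minimal Python sketch fits a single-snapshot model y ≈ A(θ)c by variable projection: the linear amplitudes c are solved by ordinary least squares inside an outer search over the nonlinear parameter θ. The basis function and all numerical values are illustrative assumptions, not the thesis's model.

import numpy as np
from scipy.optimize import minimize_scalar

def basis(theta, t):
    # Hypothetical low rank basis: two decaying components, linear in the
    # amplitudes c but nonlinear in theta (illustrative choice only).
    return np.column_stack([np.exp(-theta * t), np.exp(-2.0 * theta * t)])

def projected_residual(theta, t, y):
    A = basis(theta, t)
    c, *_ = np.linalg.lstsq(A, y, rcond=None)  # inner linear LS for c
    r = y - A @ c
    return r @ r                               # squared residual norm

t = np.linspace(0.0, 1.0, 50)
rng = np.random.default_rng(0)
y = basis(0.8, t) @ np.array([1.0, 0.5]) + 0.01 * rng.standard_normal(t.size)

# Outer one-dimensional search over the nonlinear parameter only.
res = minimize_scalar(projected_residual, bounds=(0.1, 3.0), args=(t, y),
                      method="bounded")
print("estimated theta:", res.x)

Concentrating out the linear amplitudes leaves an outer problem with far fewer parameters than the joint one, which is what makes asymptotic analysis of separable estimates tractable.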
532

A Study of Missing Data Imputation and Predictive Modeling of Strength Properties of Wood Composites

Zeng, Yan 01 August 2011 (has links)
Problem: Real-time process and destructive test data were collected from a wood composite manufacturer in the U.S. to develop real-time predictive models of two key strength properties (Modulus of Rupture (MOR) and Internal Bond (IB)) of a wood composite manufacturing process. Sensor malfunction and data “send/retrieval” problems led to null fields in the company’s data warehouse, resulting in information loss. Many manufacturers attempt to build accurate predictive models by excluding entire records with null fields or by using summary statistics such as the mean or median in place of the null field. However, predictive model errors in validation may be higher in the presence of information loss. In addition, the selection of predictive modeling methods poses another challenge to many wood composite manufacturers.

Approach: This thesis consists of two parts addressing the above issues: 1) how to improve data quality using missing data imputation; 2) which predictive modeling method is better in terms of prediction precision (measured by root mean square error, RMSE). The first part summarizes an application of missing data imputation methods in predictive modeling. After variable selection, two missing data imputation methods were selected from a comparison of six candidate methods. Predictive models of imputed data were developed using partial least squares regression (PLSR) and compared with models of non-imputed data using ten-fold cross-validation. Root mean square error of prediction (RMSEP) and normalized RMSEP (NRMSEP) were calculated.

The second part presents a series of comparisons among four predictive modeling methods using imputed data without variable selection.

Results: The first part concludes that the expectation-maximization (EM) algorithm and multiple imputation (MI) using Markov Chain Monte Carlo (MCMC) simulation achieved the most precise results. Predictive models based on imputed datasets generated more precise predictions (average NRMSEP of 5.8% for the MOR model and 7.2% for the IB model) than models based on non-imputed datasets (average NRMSEP of 6.3% for MOR and 8.1% for IB). The second part finds that the Bayesian Additive Regression Tree (BART) produced more precise predictions (average NRMSEP of 7.7% for the MOR model and 8.6% for the IB model) than the other three methods: PLSR, LASSO, and Adaptive LASSO.
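For reference, a small sketch of the error measures reported above; treating NRMSEP as RMSEP normalized by the observed range of the response is an assumption here, since normalization conventions vary.

import numpy as np

def rmsep(y_true, y_pred):
    # Root mean square error of prediction over held-out samples.
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def nrmsep(y_true, y_pred):
    # Normalized RMSEP; dividing by the response range is an assumption
    # here -- some authors normalize by the mean of the response instead.
    y_true = np.asarray(y_true)
    return rmsep(y_true, y_pred) / (y_true.max() - y_true.min())

# Toy values only (not the thesis data).
y_true = np.array([18.2, 20.1, 19.5, 21.0, 17.8])
y_pred = np.array([18.5, 19.6, 19.9, 20.4, 18.1])
print(f"RMSEP = {rmsep(y_true, y_pred):.3f}, NRMSEP = {nrmsep(y_true, y_pred):.1%}")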
533

On Some Properties of Interior Methods for Optimization

Sporre, Göran January 2003 (has links)
This thesis consists of four independent papers concerning different aspects of interior methods for optimization. Three of the papers focus on theoretical aspects while the fourth one concerns some computational experiments.

The systems of equations solved within an interior method applied to a convex quadratic program can be viewed as weighted linear least-squares problems. In the first paper, it is shown that the sequence of solutions to such problems is uniformly bounded. Further, boundedness of the solution to weighted linear least-squares problems for more general classes of weight matrices than the one arising in the convex quadratic programming application is obtained as a byproduct.

In many linesearch interior methods for nonconvex nonlinear programming, the iterates can "falsely" converge to the boundary of the region defined by the inequality constraints in such a way that the search directions do not converge to zero, but the step lengths do. In the second paper, it is shown that the multiplier search directions then diverge. Furthermore, the direction of divergence is characterized in terms of the gradients of the equality constraints along with the asymptotically active inequality constraints.

The third paper gives a modification of the analytic center problem for the set of optimal solutions in linear semidefinite programming. Unlike the normal analytic center problem, the solution of the modified problem is the limit point of the central path, without any strict complementarity assumption. For the strict complementarity case, the modified problem is shown to coincide with the normal analytic center problem, which is known to give a correct characterization of the limit point of the central path in that case.

The final paper describes some computational experiments concerning the possibility of reusing previous information when solving the systems of equations arising in interior methods for linear programming.

Keywords: interior method, primal-dual interior method, linear programming, quadratic programming, nonlinear programming, semidefinite programming, weighted least-squares problems, central path.

Mathematics Subject Classification (2000): Primary 90C51, 90C22, 65F20, 90C26, 90C05; Secondary 65K05, 90C20, 90C25, 90C30.
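To make the weighted least-squares connection in the first paper concrete, here is a hedged sketch in generic primal-dual notation (the exact scaling used in the thesis may differ). For a convex quadratic program min c^T x + (1/2) x^T H x subject to Ax = b, x >= 0, with X = diag(x) and S = diag(s) formed from the current primal-dual iterate, eliminating the primal and slack directions from the Newton system leaves a condensed system of the form

\[ A\,(H + X^{-1}S)^{-1}A^{T}\,\Delta y \;=\; A\,(H + X^{-1}S)^{-1}\,g , \]

where g collects the residual terms. These are the normal equations of the weighted linear least-squares problem

\[ \min_{\Delta y}\;\bigl\|\,W^{1/2}\bigl(A^{T}\Delta y - g\bigr)\bigr\|_{2}, \qquad W = (H + X^{-1}S)^{-1}, \]

so the uniform boundedness result of the first paper is a statement about the solutions of exactly these weighted problems as the weight matrix W varies along the iterates.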
534

Modeling financial volatility: A functional approach with applications to Swedish limit order book data

Elezovic, Suad January 2009 (has links)
This thesis is designed to offer an approach to modeling volatility in the Swedish limit order market. Realized quadratic variation is used as an estimator of the integrated variance, which is a measure of the variability of a stochastic process in continuous time. Moreover, a functional time series model for the realized quadratic variation is introduced, and a two-step estimation procedure for such a model is proposed. Some properties of the proposed two-step estimator are discussed and illustrated through an application to high-frequency financial data and simulated experiments.

In Paper I, the concept of realized quadratic variation, obtained from the bid and ask curves, is presented. In particular, an application to the Swedish limit order book data is performed using signature plots to determine an optimal sampling frequency for the computations. The paper is the first study that introduces realized quadratic variation in a functional context.

Paper II introduces functional time series models and applies them to the modeling of volatility in the Swedish limit order book. More precisely, a functional approach to the estimation of the volatility dynamics of the spreads (differences between the bid and ask prices) is presented through a case study. For that purpose, a two-step procedure for the estimation of functional linear models is adapted to the estimation of a functional dynamic time series model.

Paper III studies a two-step estimation procedure for the functional models introduced in Paper II. For that purpose, data are simulated using the Heston stochastic volatility model, thereby obtaining time series of realized quadratic variations as functions of relative quantities of shares. In the first step, a dynamic time series model is fitted to each time series, resulting in a set of inefficient raw estimates of the coefficient functions. In the second step, the raw estimates are smoothed. The second step improves on the first since it yields both smooth and more efficient estimates; in this simulation, the smooth estimates are shown to perform better in terms of mean squared error.

Paper IV introduces an alternative to the two-step estimation procedure mentioned above by taking into account the correlation structure of the error terms obtained in the first step. The proposed estimator is based on a seemingly unrelated regression representation. A multivariate generalized least squares estimator is used in the first step and its smooth version in the second. Some of the asymptotic properties of the resulting two-step procedure are discussed, and the new procedure is illustrated with functional high-frequency financial data.
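A brief sketch of the estimator named above (its standard form for a scalar price series; the thesis applies it functionally across the order book): realized quadratic variation over a trading period is the sum of squared high-frequency log-returns, and recomputing it at coarser sampling intervals gives the points of a signature plot.

import numpy as np

def realized_variance(prices):
    # Realized quadratic variation: sum of squared log-returns.
    r = np.diff(np.log(np.asarray(prices, dtype=float)))
    return np.sum(r ** 2)

def signature_plot_points(prices, max_step=20):
    # RV at coarser and coarser sampling; plotting RV against the step
    # is the signature plot used to choose a sampling frequency.
    return [(k, realized_variance(prices[::k])) for k in range(1, max_step + 1)]

# Toy simulated price path (not limit order book data).
rng = np.random.default_rng(1)
prices = 100.0 * np.exp(np.cumsum(0.0005 * rng.standard_normal(2000)))
for k, rv in signature_plot_points(prices, max_step=5):
    print(f"step {k:2d}: RV = {rv:.6f}")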
535

Modelling and simulation of turbulence subject to system rotation

Grundestam, Olof January 2006 (has links)
Simulation and modelling of turbulent flows under the influence of streamline curvature and system rotation have been considered. Direct numerical simulations have been performed for fully developed rotating turbulent channel flow using a pseudo-spectral code. The rotation numbers considered are larger than unity; for the range studied, an increase in rotation number has a damping effect on the turbulence. DNS data obtained from previous simulations are used to perform a priori tests of different pressure-strain and dissipation rate models. Furthermore, the ideal behaviour of the coefficients of different model formulations is investigated.

The main part of the modelling is focused on explicit algebraic Reynolds stress models (EARSMs). An EARSM based on a pressure-strain rate model including terms that are tensorially nonlinear in the mean velocity gradients is proposed. The new model is tested for a number of flows, including a high-lift aeronautics application, and the linear extensions are demonstrated to have a significant effect on the predictions.

Representation techniques for EARSMs based on incomplete sets of basis tensors are also considered. It is shown that a least-squares approach is favourable compared to the Galerkin method. The corresponding optimality aspects are considered, and it is deduced that Galerkin-based EARSMs are not optimal in a stricter sense, whereas EARSMs derived with the least-squares method are optimal in the sense that the error of the underlying implicit relation is minimized. It is further demonstrated that the predictions of the least-squares EARSMs are in significantly better agreement with the corresponding complete EARSMs when tested for fully developed rotating turbulent pipe flow.
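A hedged numerical sketch of the least-squares representation idea above, using generic symmetric tensors rather than the thesis's actual basis: projecting a target tensor onto an incomplete set of basis tensors by minimizing the Frobenius-norm error reduces to a small Gram system; the thesis's least-squares EARSMs minimize the analogous error of the underlying implicit algebraic relation.

import numpy as np

def ls_tensor_coefficients(basis, target):
    # Minimize ||sum_k c_k T_k - N||_F over c: solve the Gram system
    # G c = b with G_kl = <T_k, T_l> and b_k = <T_k, N> (double-dot products).
    G = np.array([[np.tensordot(Tk, Tl) for Tl in basis] for Tk in basis])
    b = np.array([np.tensordot(Tk, target) for Tk in basis])
    return np.linalg.solve(G, b)

# Illustrative incomplete basis of 3x3 symmetric traceless tensors.
S = np.array([[0.0, 1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 0.0]])
T1 = S
T2 = S @ S - (np.trace(S @ S) / 3.0) * np.eye(3)
N = 0.7 * T1 + 0.2 * T2 + 0.05 * np.diag([1.0, -1.0, 0.0])  # partly outside span
print("least-squares coefficients:", ls_tensor_coefficients([T1, T2], N))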
536

Parameter estimation methods for biological systems

Mu, Lei 13 April 2010
The inverse problem of modeling biochemical processes mathematically from measured time course data falls into the category of system identification and parameter estimation. Analyzing the time course data can provide valuable insights into the model structure and dynamics of the biochemical system. Based on the types of biochemical reactions involved, such as metabolic networks and genetic networks, several modeling frameworks have been proposed, developed and proved effective, including the Michaelis-Menten equation and the Biochemical System Theory (BST). One bottleneck in analyzing the obtained data is the estimation of parameter values within the system model.

As most models of molecular biological systems are nonlinear with respect to both parameters and system state variables, estimating the parameters in these models from experimental measurement data is a nonlinear estimation problem. In principle, any algorithm for nonlinear optimization can be used to deal with this problem, for example the Gauss-Newton iteration method and its variants. However, these methods do not take the special structure of biological system models into account, and when the number of parameters to be determined increases, applying these conventional methods becomes challenging and computationally expensive.

In this research, several methods are proposed for estimating parameters in two classes of widely used biological system models, the S-system model and the linear fractional model (LFM), by exploiting their structural specialties. For the S-system, two estimation methods are designed: 1) based on the two-term structure (production and degradation) of the model, an alternating iterative least squares method is proposed; 2) a separation nonlinear least squares method is proposed to deal with the partially linear structure of the model. For the LFM, two estimation methods are provided: 1) the separation nonlinear least squares method can also be adopted to treat the partially linear structure of the LFM, and a modified iterative version is included; 2) a special strategy using the separation principle and the weighted least squares method is implemented to turn the cost function into a quadratic form, so that the parameter estimates can be solved for analytically. Simulation results demonstrate the effectiveness of the proposed methods, which show better performance in terms of estimation accuracy and computation time than conventional nonlinear estimation methods.
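A hedged sketch of the alternating idea for the S-system's two-term structure, with a toy one-species model and hypothetical variable names (convergence of this bare iteration is not guaranteed in general): with slopes dx/dt estimated from the time course, fix the degradation term, move it to the left-hand side so the production term is isolated, fit the production parameters by linear regression in log-space, then swap the roles of the two terms and repeat.

import numpy as np

# Toy S-system, one species with two regulators:
#   dx/dt = alpha * x1**g1 * x2**g2  -  beta * x1**h1 * x2**h2

def fit_term(X, y):
    # Regress log y on [1, log x1, log x2]: a power law is linear in logs.
    A = np.column_stack([np.ones(len(y)), np.log(X)])
    coef, *_ = np.linalg.lstsq(A, np.log(y), rcond=None)
    return np.exp(coef[0]), coef[1:]           # rate constant, kinetic orders

def alternating_ls(X, dxdt, n_iter=100, eps=1e-8):
    beta, h = 1.0, np.zeros(X.shape[1])        # crude initial degradation guess
    for _ in range(n_iter):
        degr = beta * np.prod(X ** h, axis=1)
        # Production target must be positive for the log; clip as a guard.
        alpha, g = fit_term(X, np.maximum(dxdt + degr, eps))
        prod = alpha * np.prod(X ** g, axis=1)
        beta, h = fit_term(X, np.maximum(prod - dxdt, eps))
    return alpha, g, beta, h

# Noise-free synthetic slopes from known parameters (illustration only).
rng = np.random.default_rng(2)
X = rng.uniform(0.5, 2.0, size=(200, 2))
dxdt = 2.0 * X[:, 0] ** 0.5 * X[:, 1] ** -0.3 - 1.0 * X[:, 0] ** 1.2
alpha, g, beta, h = alternating_ls(X, dxdt)
print("alpha, g =", alpha, g, "  beta, h =", beta, h)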
537

Electrophysiological Events Related to Top-down Contrast Sensitivity Control

Misic, Bratislav 14 July 2009 (has links)
Stimulus-driven changes in the gain of sensory neurons are well-documented, but relatively little is known about whether analogous gain-control can also be effected in a top-down manner. A recent psychophysical study demonstrated that sensitivity to luminance contrast can be modulated by a priori knowledge (de la Rosa et al., in press). In the present study, event-related potentials were used to resolve the stages of information processing that facilitate such knowledge-driven adjustments. Groupwise independent component analysis identified two robust spatiotemporal patterns of endogenous brain activity that captured experimental effects. The first pattern was associated with obligatory processing of contextual information, while the second pattern was associated with selective initiation of contrast gain adjustment. These data suggest that knowledge-driven contrast gain control is mediated by multiple independent electrogenic sources.
538

Identification Of Periodic Autoregressive Moving Average Models

Akgun, Burcin 01 September 2003 (has links) (PDF)
In this thesis, identification of the periodically varying orders of univariate Periodic Autoregressive Moving-Average (PARMA) processes is studied. The identification of the varying orders of a PARMA process is carried out by generalizing the well-known Box-Jenkins techniques to a seasonwise manner; only the identification of pure periodic moving-average (PMA) and pure periodic autoregressive (PAR) models is considered. For PARMA model identification, the Periodic Autocorrelation Function (PeACF) and Periodic Partial Autocorrelation Function (PePACF), which play the same roles as their ARMA counterparts, are employed. For parameter estimation, which is considered only to refine model identification, the conditional least squares estimation (LSE) method is used, which is applicable to PAR models. Estimation becomes very complicated and difficult, and may give unsatisfactory results, when a moving-average (MA) component exists in the model. To overcome this difficulty, seasons following PMA processes are instead modeled as PAR processes with reasonable orders so that LSE can be employed. Diagnostic checking, through the residuals of the fitted model, is also performed, with its rationale and methods described. The last part of the study demonstrates the application of the identification techniques through the analysis of two seasonal hydrologic time series consisting of average monthly streamflows. For this purpose, computer programs were developed specifically for PARMA model identification.
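A hedged sketch of the sample periodic autocorrelation used for order identification above (one common definition; centering and normalization conventions vary across authors):

import numpy as np

def peacf(x, period, season, lag):
    # Sample periodic autocorrelation at a given season and lag: the
    # correlation, taken across cycles, between observations falling in
    # `season` (0-based) and the observations `lag` steps earlier.
    x = np.asarray(x, dtype=float)
    idx = np.arange(season, len(x), period)
    idx = idx[idx - lag >= 0]
    a, b = x[idx], x[idx - lag]
    a, b = a - a.mean(), b - b.mean()
    return np.sum(a * b) / np.sqrt(np.sum(a ** 2) * np.sum(b ** 2))

# Toy monthly series (simulated; not the streamflow data analyzed above).
rng = np.random.default_rng(3)
x = rng.standard_normal(240)            # 20 "years" of monthly data
x[1::12] += 0.8 * x[0::12]              # make February depend on January
print("PeACF(season=1, lag=1):", round(peacf(x, 12, 1, 1), 3))

Seasonwise identification then mirrors Box-Jenkins practice: a PeACF that cuts off after lag q in a given season suggests a PMA(q) order for that season, while the PePACF plays the corresponding role for PAR orders.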
540

Modeling and experimental evaluation of the effective bulk modulus for a mixture of hydraulic oil and air

2013 September 1900 (has links)
The bulk modulus of pure hydraulic oil and its dependency on pressure and temperature have been studied extensively over the years. A comprehensive review of some of the more common definitions of fluid bulk modulus is conducted in this thesis, with comments on some of the confusion over definitions and on the different methods of measuring fluid bulk modulus. In practice, it is known that some form of air is always present in hydraulic systems, which substantially decreases the oil bulk modulus. The term effective bulk modulus is used to account for the effect of air and/or the compliance of transmission lines.

A summary of the effective bulk modulus models for a mixture of hydraulic oil and air found in the literature is presented. Based on the review, these models are divided into two groups: “compression only” models and “compression and dissolve” models. A comparison of various “compression only” models, in which only the volumetric compression of air is considered, shows that the models do not match each other at the same operating conditions. The reason for this difference is explained, and after some modifications a theoretical “compression only” model is suggested. The “compression and dissolve” models from the literature include the effects of the volumetric compression of air and the volumetric reduction of air due to air dissolving into the oil. It is found that the existing “compression and dissolve” models have a discontinuity at some critical pressure and, as a result, do not match the experimental results very well. The reason for the discontinuity is discussed, and a new “compression and dissolve” model is proposed by introducing some new parameters into the theoretical model. A new critical pressure (PC) definition is presented based on the saturation limit of the oil: air stops dissolving into the oil once this critical pressure is reached, and any remaining air is only compressed afterwards.

An experimental apparatus is designed and fabricated to verify the newly proposed models and to reproduce the operating conditions that underlie the model assumptions. The pressure range is 0 to 6.9 MPa and the temperature is kept constant at °C. Air is added to the oil in different forms, and the amount of air varies from about 1 to 5%. Experiments are conducted in three phases: baseline (without adding air to the oil), lumped air (air added as a pocket at the top of the oil column) and distributed air (air distributed in the oil in the form of small bubbles). The effects of different forms and amounts of air and of various volume change rates are investigated experimentally, and it is shown that the value of PC is strongly affected by the volume change rate and by the form and amount of air. It is also shown that the new model represents the experimental data with great accuracy. The new “compression and dissolve” model can be considered a general model of the effective bulk modulus of a mixture of oil and air, applicable to any form of such a mixture; however, its parameters must be identified from experimental measurements. A method of identifying the model parameters is introduced and the modeling errors are evaluated. An attempt is also made to independently verify the values of some of the parameters.

The new proposed model can be used in analyzing pressure variations and improving the accuracy of simulations of low pressure hydraulic systems. The new method of modeling air dissolving into the oil can also be used to improve the modeling of cavitation phenomena in hydraulic systems.
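As a hedged illustration of the kind of “compression only” model compared above (one common textbook form with polytropic air compression; the thesis's modified model and its new “compression and dissolve” model differ in detail), the compliances of the oil and the entrained air add in proportion to their current volume fractions:

def effective_bulk_modulus(p, beta_oil=1.5e9, x0=0.02, p0=101325.0, gamma=1.4):
    # "Compression only" mixture model. x0 is the entrained-air volume
    # fraction at reference pressure p0; all parameter values here are
    # illustrative assumptions, not values identified in the thesis.
    v_air = x0 * (p0 / p) ** (1.0 / gamma)  # air volume per unit mixture at p0
    v_oil = 1.0 - x0                        # oil volume change neglected here
    beta_air = gamma * p                    # tangent bulk modulus of the gas
    compliance = (v_oil / beta_oil + v_air / beta_air) / (v_oil + v_air)
    return 1.0 / compliance

for p in (0.2e6, 1.0e6, 6.9e6):
    print(f"p = {p/1e6:.1f} MPa: beta_e = {effective_bulk_modulus(p)/1e9:.3f} GPa")

Even a few percent of air collapses the effective bulk modulus at low pressure, which is why the dissolving behaviour around the critical pressure PC matters for low pressure hydraulic simulation.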
