About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
141

Hierarchical Sampling for Least-Squares Policy Iteration

Schwab, Devin 26 January 2016 (has links)
No description available.
142

Calculations for positioning with the Global Navigation Satellite System

Cheng, Chao-heh January 1998 (has links)
No description available.
143

Optimization Based Domain Decomposition Methods for Linear and Nonlinear Problems

Lee, Hyesuk Kwon 05 August 1997 (has links)
Optimization based domain decomposition methods for the solution of partial differential equations are considered. The crux of the method is a constrained minimization problem for which the objective functional measures the jump in the dependent variables across the common boundaries between subdomains; the constraints are the partial differential equations. First, we consider a linear constraint. The existence of optimal solutions for the optimization problem is shown as is its convergence to the exact solution of the given problem. We then derive an optimality system of partial differential equations from which solutions of the domain decomposition problem may be determined. Finite element approximations to solutions of the optimality system are defined and analyzed as is an eminently parallelizable gradient method for solving the optimality system. The linear constraint minimization problem is also recast as a linear least squares problem and is solved by a conjugate gradient method. The domain decomposition method can be extended to nonlinear problems such as the Navier-Stokes equations. This results from the fact that the objective functional for the minimization problem involves the jump in dependent variables across the interfaces between subdomains. Thus, the method does not require that the partial differential equations themselves be derivable through an extremal problem. An optimality system is derived by applying a Lagrange multiplier rule to a constrained optimization problem. Error estimates for finite element approximations are presented as is a gradient method to solve the optimality system. We also use a Gauss-Newton method to solve the minimization problem with the nonlinear constraint. / Ph. D.
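As a hedged illustration of the setup this abstract describes (written here for a model Poisson problem on two subdomains; the notation is mine, not the thesis's, and the thesis treats the general linear and nonlinear cases), the constrained minimization takes the form

\min_{g_1, g_2}\ \mathcal{J} = \frac{1}{2}\int_{\Gamma} |u_1 - u_2|^2 \, d\Gamma
\quad \text{subject to} \quad
-\Delta u_i = f \ \text{in } \Omega_i, \qquad
u_i = 0 \ \text{on } \partial\Omega_i \setminus \Gamma, \qquad
\frac{\partial u_i}{\partial n_i} = g_i \ \text{on } \Gamma, \quad i = 1, 2,

where the domain \Omega is partitioned into \Omega_1 and \Omega_2 with common interface \Gamma and the interface data g_i act as controls. Applying a Lagrange multiplier rule to this problem yields the optimality system referred to above, and the gradient method updates the g_i to drive the interface jump u_1 - u_2 toward zero, one subdomain solve at a time — which is what makes the method eminently parallelizable.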
144

A method of determining modal residues using an improved residual model and least squares

Kochersberger, Kevin B. 24 October 2005 (has links)
A new approach to determining mode vectors is presented which uses predetermined global parameters and an improved residual model to iteratively determine modal residues. The motivation for such a technique is to determine modal parameters rapidly so that, as data acquisition techniques become faster, more structural degrees of freedom can be measured without significantly slowing down the parameter estimation process. The technique requires an accurate determination of the global parameters of natural frequency and damping by means of an FRF curve fit. More than one structural point is recommended to determine the global parameters since they will be used in determining the mode vectors. A structurally damped curve fitter which uses one or two FRFs is described and can be used for determining the global parameters. Examples of curve fitting simulated and measured data are presented and a comparison is made to a commercially available curve fitter. Once a frequency range-of-interest is selected, frequencies will be chosen at which the mobility is measured using sine excitation. The in-range modal response is represented by a matrix-vector product where the vector contains the residues for the modes of interest. The out-of-range modal content is also represented by a matrix-vector product and forms the improved residual model. The residual content is removed from the measured mobility by an iterative technique which allows for an accurate determination of the residues of interest. An evaluation of the technique is carried out by simulating a dynamic system including the shaker and power supply. The simulated system is closely modeled after a real system used to evaluate the technique on experimental data. Convergence rates are shown for cases of close modes, low-amplitude modes, and errors in the global parameters. The results of using the technique on experimental data show that convergence typically occurs in under 15 iterations. Regenerating the FRF from the modal parameters shows close agreement with the original FRF, and better agreement than the regeneration from modal parameters derived from a commercially available curve fitter. / Ph. D.
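Schematically, and assuming a structurally (hysteretically) damped modal model — the notation below is mine, not the thesis's — the measured mobility at the chosen sine-test frequencies can be written as

Y(\omega_k) \;=\; \sum_{r=1}^{m} \frac{i\,\omega_k A_r}{\omega_r^2 (1 + i\eta_r) - \omega_k^2} \;+\; \text{(out-of-range residual terms)}, \qquad k = 1, \dots, N,

or, stacked over the measurement frequencies, y = P a + R b, where a collects the in-range residues A_r (the unknowns of interest), b collects the residual-model coefficients, and P and R are built from the predetermined global parameters \omega_r and \eta_r. The iteration then alternates between estimating b, subtracting R b from the measurement, and solving the least squares problem y - R b \approx P a for the residues.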
145

Evaluation and improvement of tree stump volume prediction models in the eastern United States

Barker, Ethan Jefferson 06 June 2017 (has links)
Forests are considered among the best carbon stocks on the planet. After harvest, the residual tree stumps persist on the site for years, continuing to store carbon. Moreover, the component ratio method requires stump volume in order to obtain total tree aboveground biomass, so stump volumes contribute to the National Carbon Inventory. Agencies and organizations concerned with carbon accounting would benefit from an improved method for predicting tree stump volume. In this work, many model forms are evaluated for their accuracy in predicting stump volume. Stump profile and stump volume predictions were produced for both outside- and inside-bark measurements. Fitting previously used models to a larger data set allows for improved regression coefficients and potentially more flexible and accurate models. The data set was compiled from a large selection of legacy data as well as some newly collected field measurements. Analysis was conducted for thirty of the most numerous tree species in the eastern United States, yielding an improved method for inside- and outside-bark stump volume estimation. / Master of Science
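For context, one widely used family of stump-profile models expresses diameter as a function of stump height, with volume following by integration of the cross-sectional area. A representative form — shown only to illustrate the kind of model being refit, not necessarily the exact forms evaluated in the thesis — is

\frac{d(h)}{D} \;=\; a + b\,\frac{4.5 - h}{h + 1},
\qquad
V \;=\; \frac{\pi}{4 \cdot 144} \int_{0}^{h_s} d(h)^2 \, dh,

where d(h) is stump diameter in inches at height h in feet, D is diameter at breast height (4.5 ft), a and b are the regression coefficients being refit to the larger data set, h_s is stump height, and V is stump volume in cubic feet. Separate coefficient sets give the inside- and outside-bark estimates.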
146

A New Method of Determining the Transmission Line Parameters of an Untransposed Line using Synchrophasor Measurements

Lowe, Bradley Shayne 10 September 2015 (has links)
Transmission line parameters play a significant role in a variety of power system applications, and the accuracy of these parameters is of paramount importance. Traditional methods of determining transmission line parameters must take a large number of factors into consideration, and it is difficult, in most cases impractical, to include every possible factor when calculating parameter values. A modern approach to the parameter identification problem is an online method by which the parameter values are calculated using synchronized voltage and current measurements from both ends of a transmission line. One of the biggest problems facing the synchronized measurement method is line transposition. Several methods have been proposed that demonstrate how the line parameters of a transposed line may be estimated. In today's power systems, however, the majority of transmission lines are untransposed, so while transposed-line methods have value, they cannot be applied in most real-world scenarios. Future efforts to use synchronized measurements to estimate transmission line parameters must focus on developing and refining untransposed-line methods. This thesis reviews the existing methods of estimating transmission line parameters using synchrophasor measurements and proposes a new method of estimating the parameters of an untransposed line. After the proposal of this new method, a sensitivity analysis is conducted to determine its performance when noise is present in the measurements. / Master of Science
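As a minimal sketch of the measurement-based approach — a nominal-pi line model solved in two linear least squares steps, an assumption-laden illustration rather than the thesis's algorithm — the 3x3 phase-domain impedance and admittance matrices of an untransposed line can be estimated from synchronized two-ended phasors roughly as follows:

import numpy as np

def estimate_untransposed_line(Vs, Is, Vr, Ir):
    """Estimate the 3x3 series impedance Z and total shunt admittance Y
    of an untransposed line from K synchronized phasor snapshots.

    Vs, Is, Vr, Ir: complex arrays of shape (K, 3); per-phase voltage
    and current phasors at the sending/receiving ends, with both end
    currents referenced *into* the line.  Assumes a nominal-pi model
    with half the shunt admittance lumped at each end:
        Is + Ir = (Y/2) (Vs + Vr)          # shunt current balance
        Vs - Vr = Z (Is - (Y/2) Vs)        # series voltage drop
    Identifiability requires snapshots from varying operating points.
    """
    # Step 1: least squares for Yh = Y/2.  Each snapshot gives three
    # equations; stacked over snapshots, (Vs + Vr) @ Yh.T = (Is + Ir).
    X, *_ = np.linalg.lstsq(Vs + Vr, Is + Ir, rcond=None)
    Yh = X.T
    # Step 2: with Yh known, the series current leaving the sending end
    # is Is - Yh Vs, and Vs - Vr = Z (Is - Yh Vs) is linear in Z.
    Iseries = Is - (Yh @ Vs.T).T
    X, *_ = np.linalg.lstsq(Iseries, Vs - Vr, rcond=None)
    return X.T, 2.0 * Yh   # Z, Y

Because the line is untransposed, Z and Y come out as full phase-coupled matrices rather than reducing to balanced sequence values, and noise in the synchrophasor measurements propagates directly into the estimates — which is what a sensitivity analysis of such a method has to examine.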
147

The Sherman Morrison Iteration

Slagel, Joseph Tanner 17 June 2015 (has links)
The Sherman Morrison iteration method is developed to solve regularized least squares problems. Notions of pivoting and splitting are considered to make the method more robust. The Sherman Morrison iteration method is shown to be effective when dealing with an extremely underdetermined least squares problem. The performance of the Sherman Morrison iteration is compared to that of classic direct methods, as well as iterative methods, in a number of experiments. A specific Matlab implementation of the Sherman Morrison iteration is discussed, with Matlab code for the method available in the appendix. / Master of Science
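A minimal Python sketch of the core idea (my notation, and a bare-bones variant — the thesis adds pivoting and splitting for robustness, and its own codes are in Matlab): the Tikhonov normal-equations matrix is lam^2 I plus a sum of rank-one terms, so its inverse can be accumulated one row of A at a time with the Sherman-Morrison formula.

import numpy as np

def sherman_morrison_solve(A, b, lam):
    """Solve  min_x ||A x - b||^2 + lam^2 ||x||^2  by applying the
    Sherman-Morrison formula one row of A at a time.

    The normal-equations matrix  lam^2 I + A^T A  equals  lam^2 I  plus
    the rank-one terms a_i a_i^T (a_i = i-th row of A), so its inverse
    can be accumulated by the recursion
        B_0 = I / lam^2,
        B_i = B_{i-1} - (B_{i-1} a_i a_i^T B_{i-1}) / (1 + a_i^T B_{i-1} a_i).
    Sketch only: B is formed explicitly (n x n), fine for moderate n;
    a production version would apply the updates to vectors instead.
    """
    m, n = A.shape
    B = np.eye(n) / lam**2
    for i in range(m):
        a = A[i]
        Ba = B @ a                       # B stays symmetric throughout
        B -= np.outer(Ba, Ba) / (1.0 + a @ Ba)
    return B @ (A.T @ b)

# quick check against the direct solution
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 200))       # extremely underdetermined: m << n
b = rng.standard_normal(20)
x = sherman_morrison_solve(A, b, lam=0.1)
x_ref = np.linalg.solve(0.01 * np.eye(200) + A.T @ A, A.T @ b)
print(np.linalg.norm(x - x_ref))         # should be tiny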
148

HATLINK: a link between least squares regression and nonparametric curve estimation

Einsporn, Richard L. January 1987 (has links)
For both least squares and nonparametric kernel regression, prediction at a given regressor location is obtained as a weighted average of the observed responses. For least squares, the weights used in this average are a direct consequence of the form of the parametric model prescribed by the user. If the prescribed model is not exactly correct, then the resulting predictions and subsequent inferences may be misleading. On the other hand, nonparametric curve estimation techniques, such as kernel regression, obtain prediction weights solely on the basis of the distance of the regressor coordinates of an observation to the point of prediction. These methods therefore ignore information that the researcher may have concerning a reasonable approximate model. In overlooking such information, the nonparametric curve fitting methods often fit anomalous patterns in the data. This paper presents a method for obtaining an improved set of prediction weights by striking the proper balance between the least squares and kernel weighting schemes. The method is called "HATLINK," since the appropriate balance is achieved through a mixture of the hat matrices corresponding to the least squares and kernel fits. The mixing parameter is determined adaptively through cross-validation (PRESS) or by a version of the Cp statistic. Predictions obtained through the HATLINK procedure are shown through simulation studies to be robust to model misspecification by the researcher. It is also demonstrated that the HATLINK procedure can be used to perform many of the usual tasks of regression analysis, such as estimating the error variance, providing confidence intervals, testing for lack of fit of the user's prescribed model, and assisting in the variable selection process. In accomplishing all of these tasks, the HATLINK procedure provides a model-robust alternative to the standard model-based approach to regression. / Ph. D.
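In schematic form (notation assumed from the abstract rather than quoted from the dissertation), the HATLINK predictions are

\hat{y}(\theta) = H(\theta)\, y, \qquad H(\theta) = \theta\, H_{LS} + (1 - \theta)\, H_K, \qquad 0 \le \theta \le 1,

with the mixing parameter chosen by minimizing the PRESS criterion for a linear smoother,

\mathrm{PRESS}(\theta) = \sum_{i=1}^{n} \left( \frac{y_i - \hat{y}_i(\theta)}{1 - h_{ii}(\theta)} \right)^{2},

where H_{LS} and H_K are the hat matrices of the least squares and kernel fits and h_{ii}(\theta) are the diagonal entries of H(\theta). Since \theta = 1 recovers the pure parametric fit and \theta = 0 the pure kernel fit, the adaptively chosen \theta quantifies how much the data support the user's prescribed model.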
149

Confirmatory factor analysis with ordinal data : effects of model misspecification and indicator nonnormality on two weighted least squares estimators

Vaughan, Phillip Wingate 22 October 2009 (has links)
Full weighted least squares (full WLS) and robust weighted least squares (robust WLS) are currently the two primary estimation methods designed for structural equation modeling with ordinal observed variables. These methods assume that continuous latent variables were coarsely categorized by the measurement process to yield the observed ordinal variables, and that the model proposed by the researcher pertains to these latent variables rather than to their ordinal manifestations. Previous research has strongly suggested that robust WLS is superior to full WLS when models are correctly specified. Given the realities of applied research, it was critical to examine these methods with misspecified models. This Monte Carlo simulation study examined the performance of full and robust WLS for two-factor, eight-indicator confirmatory factor analytic models that were either correctly specified, overspecified, or misspecified in one of two ways. Seven conditions of five-category indicator distribution shape at four sample sizes were simulated. These design factors were completely crossed for a total of 224 cells. Previous findings of the relative superiority of robust WLS with correctly specified models were replicated, and robust WLS was also found to perform better than full WLS given overspecification or misspecification. Robust WLS parameter estimates were usually more accurate for correct and overspecified models, especially at the smaller sample sizes. In the face of misspecification, full WLS better approximated the correct loading values whereas robust estimates better approximated the correct factor correlation. Robust WLS chi-square values discriminated between correct and misspecified models much better than full WLS values at the two smaller sample sizes. For all four model specifications, robust parameter estimates usually showed lower variability and robust standard errors usually showed lower bias. These findings suggest that robust WLS should likely remain the estimator of choice for applied researchers. Additionally, highly leptokurtic distributions should be avoided when possible. It should also be noted that robust WLS performance was arguably adequate at the sample size of 100 when the indicators were not highly leptokurtic. / text
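For context, both estimators minimize the same quadratic form in the polychoric correlations — the standard formulation, stated here as background rather than taken from the dissertation:

F(\theta) = \big(s - \sigma(\theta)\big)^{\top} W^{-1} \big(s - \sigma(\theta)\big),

where s stacks the estimated polychoric correlations (and thresholds), \sigma(\theta) holds their model-implied values, and W is the weight matrix. Full WLS uses the complete asymptotic covariance matrix of s as W, while robust (diagonally weighted) WLS inverts only its diagonal and applies corrections to the standard errors and chi-square statistic afterward. The full matrix is large and noisy at small samples, which is the usual explanation for the better small-sample behavior of robust WLS reported above.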
150

Analysis of 3D objects at multiple scales : application to shape matching

Mellado, Nicolas 06 December 2012 (has links)
Over the last decades, the evolution of acquisition techniques has led to the generalization of detailed 3D objects, represented as huge point sets composed of millions of vertices. The complexity of the involved data often requires analyzing them for the extraction and characterization of pertinent structures, which are potentially defined at multiple scales. Among the wide variety of methods proposed to analyze digital signals, scale-space analysis is today a standard for the study of 2D curves and images. However, its adaptation to 3D data leads to instabilities and requires connectivity information, which is not directly available when dealing with point sets.
In this thesis, we present a new multi-scale analysis framework that we call the Growing Least Squares (GLS). It consists of a robust local geometric descriptor that can be evaluated on point sets at multiple scales using an efficient second-order fitting procedure. We propose to analytically differentiate this descriptor to extract continuously the pertinent structures in scale-space. We show that this representation and the associated toolbox define an efficient way to analyze 3D objects represented as point sets at multiple scales. To this end, we demonstrate its relevance in various application scenarios.
A challenging application is the analysis of acquired 3D objects coming from the Cultural Heritage field. In this thesis, we study a real-world dataset composed of the fragments of the statues that were surrounding the legendary Alexandria Lighthouse. In particular, we focus on the problem of fractured object reassembly, consisting of few fragments (up to about ten), but with missing parts due to erosion or deterioration. We propose a semi-automatic formalism to combine both the archaeologist's knowledge and the accuracy of geometric matching algorithms during the reassembly process. We use it to design two systems, and we show their efficiency in concrete cases.
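To make the second-order fitting procedure concrete, here is an illustrative Python sketch of a GLS-style descriptor. Assumed details are mine: oriented normals are available and a compactly supported polynomial kernel is used; the actual GLS work also reparametrizes the fit into more stable quantities and differentiates it analytically in scale.

import numpy as np

def gls_descriptor(points, normals, x, scales):
    """Multi-scale algebraic-sphere fit in the spirit of Growing Least
    Squares.  At each scale t, fit  s(q) = u_c + u_l . q + u_q ||q||^2
    to the neighbors of x within radius t via the linear constraints
        s(p_i) ~ 0             (the point lies on the sphere)
        u_l + 2 u_q p_i ~ n_i  (the gradient matches the normal),
    weighted by a smooth kernel.  Returns one coefficient vector
    u = (u_c, u_l, u_q) per scale; u_q tracks curvature across scales.
    """
    out = []
    for t in scales:
        d = np.linalg.norm(points - x, axis=1)
        idx = np.flatnonzero(d < t)          # neighbors at this scale
        rows, rhs, wts = [], [], []
        for i in idx:
            p, n = points[i], normals[i]
            w = (1.0 - (d[i] / t) ** 2) ** 2  # compact smooth weight
            # value constraint: u_c + u_l . p + u_q ||p||^2 = 0
            rows.append(np.r_[1.0, p, p @ p]); rhs.append(0.0); wts.append(w)
            # gradient constraints, one equation per axis
            for k in range(3):
                e = np.zeros(5)
                e[1 + k], e[4] = 1.0, 2.0 * p[k]
                rows.append(e); rhs.append(n[k]); wts.append(w)
        sw = np.sqrt(np.asarray(wts))
        u, *_ = np.linalg.lstsq(np.asarray(rows) * sw[:, None],
                                np.asarray(rhs) * sw, rcond=None)
        out.append(u)
    return out

Tracking how the fitted coefficients vary with t is what exposes structures at their natural scales; the analytic scale-derivative developed in the thesis replaces this finite sampling of scales with a continuous analysis.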
