201

Evaluation and improvement of tree stump volume prediction models in the eastern United States

Barker, Ethan Jefferson 06 June 2017 (has links)
Forests are considered among the best carbon stocks on the planet. After forest harvest, residual tree stumps persist on the site for years, continuing to store carbon. More importantly, the component ratio method requires stump volume estimates in order to obtain total tree aboveground biomass, so stump volumes contribute directly to the National Carbon Inventory. Agencies and organizations concerned with carbon accounting would therefore benefit from an improved method for predicting tree stump volume. In this work, many model forms are evaluated for their accuracy in predicting stump volume. Stump profile and stump volume predictions were produced for both outside- and inside-bark measurements. Fitting previously used models to a larger data set allows for improved regression coefficients and potentially more flexible and accurate models. The data set was compiled from a large selection of legacy data as well as some newly collected field measurements. Analysis was conducted for thirty of the most numerous tree species in the eastern United States, providing an improved method for inside- and outside-bark stump volume estimation. / Master of Science
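As a hedged illustration of the kind of model fitting described above, the sketch below fits a stump-profile equation of the form d(h) = DBH · (a + b·(4.5 − h)/(h + 1)) (heights in feet, diameters in inches, in the style of Raile's widely used stump model) to synthetic diameter measurements, then integrates the profile's cross-sectional area to a stump volume. The coefficients, data, and tree dimensions are illustrative assumptions, not the thesis's fitted results.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.integrate import quad

def profile(h, a, b, dbh=12.0):
    # Predicted stump diameter (inches) at height h (feet) for a tree of given DBH.
    return dbh * (a + b * (4.5 - h) / (h + 1.0))

# Synthetic diameter measurements along a 1.5-ft stump (true a=0.8, b=0.2 + noise)
rng = np.random.default_rng(0)
h_obs = np.linspace(0.0, 1.5, 10)
d_obs = profile(h_obs, 0.8, 0.2) + rng.normal(0, 0.05, h_obs.size)

(a_hat, b_hat), _ = curve_fit(lambda h, a, b: profile(h, a, b), h_obs, d_obs)

# Stump volume (cubic feet): integrate cross-sectional area over height;
# diameters are in inches, so divide by 12 to convert to feet.
vol, _ = quad(lambda h: np.pi / 4.0 * (profile(h, a_hat, b_hat) / 12.0) ** 2, 0.0, 1.5)
print(round(vol, 3))
```

Because the model is linear in a and b, the fit is well conditioned even with few measurements; the same recipe applies to inside-bark diameters with separate coefficients.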
202

GIS based optimal design of sewer networks and pump stations

Agbenowosi, Newland Komla 11 June 2009 (has links)
In the planning and design of sewer networks, most decisions are spatially dependent because of right-of-way considerations and the desire to have flow by gravity. This research addresses the application of combined optimization-geographic information system (GIS) technology in the design process. The program developed for the design uses selected manhole locations to generate candidate sewer networks. The design area is delineated into subwatersheds for determining the locations of lift stations when gravity flow is not possible. Flows from upstream subwatersheds are transported to downstream subwatersheds via a force main. The path and destination of each force main in the system are determined by applying Dijkstra's shortest path algorithm to select the least cost path from a set of potential paths. This method seeks to minimize the total dynamic head. A modified length is used to represent the length of each link or force main segment. The modified length is the physical length of the link (representing the friction loss) plus an equivalent length (representing the static head). The least cost path for the force main is the path with the least total modified length. The design approach is applied to two areas in the town of Blacksburg, Virginia, and the resulting network and force main paths are discussed. / Master of Science
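The modified-length idea above can be sketched directly: weight each candidate force-main link by its physical length plus an equivalent length for the static lift, then run Dijkstra's algorithm. The node names, lengths, elevation gains, and equivalent-length factor `k` below are hypothetical illustration values, not the thesis's data.

```python
import heapq

def dijkstra(graph, source, target):
    # graph: {node: [(neighbor, modified_length), ...]}; returns (path, total cost)
    dist, prev, pq = {source: 0.0}, {}, [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == target:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [], target
    while node != source:
        path.append(node)
        node = prev[node]
    return [source] + path[::-1], dist[target]

def modified_length(physical_len, elev_gain, k=50.0):
    # Physical length stands in for friction loss; k feet of equivalent pipe
    # per foot of static lift represents the static head (k is illustrative).
    return physical_len + max(elev_gain, 0.0) * k

edges = {  # (physical length ft, elevation gain ft) per candidate link
    "A": [("B", modified_length(800, 5)), ("C", modified_length(600, 20))],
    "B": [("D", modified_length(700, 0))],
    "C": [("D", modified_length(300, 0))],
}
path, cost = dijkstra(edges, "A", "D")
print(path, cost)
```

Note how the shorter physical route A-C-D loses to A-B-D once its 20-ft lift is converted to equivalent length, which is exactly the trade-off the modified length encodes.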
203

A New Method of Determining the Transmission Line Parameters of an Untransposed Line using Synchrophasor Measurements

Lowe, Bradley Shayne 10 September 2015 (has links)
Transmission line parameters play a significant role in a variety of power system applications, and their accuracy is of paramount importance. Traditional methods of determining transmission line parameters must take a large number of factors into consideration, and it is difficult, and in most cases impractical, to include every possible factor when calculating parameter values. A modern approach to the parameter identification problem is an online method in which the parameter values are calculated from synchronized voltage and current measurements taken at both ends of a transmission line. One of the biggest obstacles facing the synchronized measurement method is line transposition. Several methods have been proposed that demonstrate how the parameters of a transposed line may be estimated. However, the majority of transmission lines in today's power systems are untransposed, so while transposed-line methods have value, they cannot be applied directly in most real-world scenarios. Future efforts to estimate transmission line parameters from synchronized measurements must therefore focus on developing and refining untransposed-line methods. This thesis reviews the existing methods of estimating transmission line parameters using synchrophasor measurements and proposes a new method for estimating the parameters of an untransposed line. A sensitivity analysis is then conducted to determine the new method's performance when noise is present in the measurements. / Master of Science
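A minimal sketch of the two-ended synchrophasor idea on a single-phase (positive-sequence) pi model, not the thesis's untransposed three-phase method, which must handle the full coupled impedance matrix: given synchronized phasors at both ends, the series impedance Z and total shunt admittance Y follow from Kirchhoff's current law at each terminal. The line parameters and phasor values below are synthetic.

```python
import numpy as np

def estimate_pi_params(Vs, Is, Vr, Ir):
    # KCL on the pi model, with Ir measured flowing out of the receiving end:
    #   Is = Vs*(Y/2) + (Vs - Vr)/Z
    #   Ir = (Vs - Vr)/Z - Vr*(Y/2)
    # Subtracting gives Y; substituting back gives Z.
    Y = 2 * (Is - Ir) / (Vs + Vr)
    Z = (Vs - Vr) / (Is - Vs * Y / 2)
    return Z, Y

# Forward-simulate a line with known parameters, then recover them.
Z_true = 2.0 + 20.0j                 # ohms
Y_true = 1e-4j                       # siemens
Vs = 230e3 * np.exp(1j * 0.1)        # sending-end voltage phasor
Vr = 225e3 * np.exp(1j * 0.0)        # receiving-end voltage phasor
I_line = (Vs - Vr) / Z_true
Is = Vs * Y_true / 2 + I_line
Ir = I_line - Vr * Y_true / 2
Z_est, Y_est = estimate_pi_params(Vs, Is, Vr, Ir)
print(Z_est, Y_est)
```

With noise-free phasors the recovery is exact; the sensitivity analysis described in the abstract amounts to repeating this kind of estimation with perturbed measurements.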
204

The Sherman Morrison Iteration

Slagel, Joseph Tanner 17 June 2015 (has links)
The Sherman Morrison iteration method is developed to solve regularized least squares problems. Notions of pivoting and splitting are discussed to make the method more robust. The Sherman Morrison iteration method is shown to be effective when dealing with extremely underdetermined least squares problems. The performance of the Sherman Morrison iteration is compared to classic direct methods, as well as iterative methods, in a number of experiments. A specific Matlab implementation of the Sherman Morrison iteration is discussed, with Matlab code for the method available in the appendix. / Master of Science
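The identity the iteration builds on can be shown in a few lines: (A + uvᵀ)⁻¹b is computable from solves with A alone, which is cheap when A is easy to invert (for instance, the regularization term of a regularized least squares problem). This is a generic rank-one-update sketch, not the thesis's full pivoting and splitting scheme.

```python
import numpy as np

def sherman_morrison_solve(solve_A, u, v, b):
    # Sherman-Morrison: (A + u v^T)^{-1} b
    #   = A^{-1} b - A^{-1} u (v . A^{-1} b) / (1 + v . A^{-1} u)
    # solve_A(x) must return A^{-1} x; two such solves suffice.
    Ainv_b = solve_A(b)
    Ainv_u = solve_A(u)
    return Ainv_b - Ainv_u * (v @ Ainv_b) / (1.0 + v @ Ainv_u)

rng = np.random.default_rng(1)
n = 5
lam = 0.5
u, v, b = rng.normal(size=(3, n))
A = lam * np.eye(n)                  # e.g. a Tikhonov regularization term
solve_A = lambda x: x / lam          # trivial solve with the diagonal A
x = sherman_morrison_solve(solve_A, u, v, b)
x_direct = np.linalg.solve(A + np.outer(u, v), b)
print(np.allclose(x, x_direct))
```

Applying this update once per data row is the basic mechanism by which such an iteration can sweep through an extremely underdetermined system without ever forming or factoring the full matrix.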
205

Least Cost Path Modeling Between Inka and Amazon Civilizations

Lewis, Colleen Paige 09 June 2022 (has links)
Least Cost Path Analysis (LCPA) is a GIS-based approach for calculating the most efficient route between a start and end point, often in terms of shortest time or least energy. The approach is often applied in archaeology to estimate the locations of sites and the routes between them. We applied LCPA to estimate how sites in the Andes in the eastern portion of the Inka empire may have connected to sites in the western Amazon Basin. Our approach further used the known Inka Road network to test the performance of two types of LCP models (linear vs. areal calculation) and four types of cost functions. LCPs can be calculated with an areal approach, where each cell of the DEM is given one overall slope value, or linearly, where the direction of travel across a cell affects the slope value. Four different cost functions were tested: Tobler's Hiking Function (1993), Tobler's Hiking Function with a vertical exaggeration of 2.3 based on human perceptions of slope (Pingel 2010), Pingel's empirical estimation approach (2010), and Pandolf et al.'s energy expenditure equation (1977); each was tested with both the areal and the linear approach. An initial study was conducted in the Cusco region, and results were compared to the Inka Road network using the linear accuracy assessment method of Goodchild and Hunter (1997) and Güimil-Fariña and Parcero-Oubiña (2015). The findings suggest that the empirical estimation and caloric cost methods were the most accurate and performed similarly, that both were more accurate than travel-time based costs, and that linear methods outperformed areal methods when using higher resolution DEM inputs. / Master of Science / Least Cost Path Analysis (LCPA) is a method for determining the most efficient route between a start and end point, often in terms of shortest time or least energy. The approach is often applied in archaeology to estimate the locations of sites and the routes between them.
We applied LCPA to estimate how sites in the Andes in the eastern portion of the Inka empire may have connected to sites in the western Amazon Basin. Our approach further used the known Inka Road network to test the performance of two types of Least Cost Path (LCP) models (linear vs. areal calculation) and four types of cost functions. LCPs can be calculated with an areal approach, where each cell in an elevation dataset is given one overall slope value, or linearly, where the direction of travel across a cell affects the slope value. Four different ways of calculating cost were tested: Tobler's Hiking Function (1993) using time as a cost, Tobler's Hiking Function with a vertical exaggeration of 2.3, where the cost is based on human perceptions of slope (Pingel 2010), Pingel's empirical estimation approach (2010) based on the preexisting Inka Road system, and Pandolf et al.'s energy expenditure equation (1977). All four were used with both an areal and a linear approach. An initial study was conducted in the Cusco region, and results were compared to the Inka Road network by measuring what percent of each LCP fell within 500 m of the Inka Road. The findings suggest that the empirical estimation and energy based methods were the most accurate and performed similarly, that both were more accurate than travel-time based costs, and that linear methods outperformed areal methods when using higher resolution elevation inputs.
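The first of the cost functions named above is compact enough to sketch: Tobler's Hiking Function predicts walking speed (km/h) from slope (rise over run), the crossing-time cost of a cell is then distance divided by speed, and the vertical-exaggeration variant simply scales the slope before evaluation. The specific slope values below are illustrative.

```python
import math

def tobler_speed(slope, vertical_exaggeration=1.0):
    # Tobler (1993): speed = 6 * exp(-3.5 * |slope + 0.05|) km/h.
    return 6.0 * math.exp(-3.5 * abs(slope * vertical_exaggeration + 0.05))

def crossing_time_hours(distance_km, slope, vertical_exaggeration=1.0):
    # Time cost accumulated along a least cost path for one cell crossing.
    return distance_km / tobler_speed(slope, vertical_exaggeration)

flat = tobler_speed(0.0)        # walking on flat ground
downhill = tobler_speed(-0.05)  # peak speed at a gentle -5% grade
uphill = tobler_speed(0.3)      # a steep climb is much slower
print(round(flat, 2), round(downhill, 2), round(uphill, 2))
```

The asymmetry around the -5% grade is what makes the linear (direction-aware) slope calculation matter: crossing the same cell uphill versus downhill yields different costs.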
206

HATLINK: a link between least squares regression and nonparametric curve estimation

Einsporn, Richard L. January 1987 (has links)
For both least squares and nonparametric kernel regression, prediction at a given regressor location is obtained as a weighted average of the observed responses. For least squares, the weights used in this average are a direct consequence of the form of the parametric model prescribed by the user. If the prescribed model is not exactly correct, then the resulting predictions and subsequent inferences may be misleading. On the other hand, nonparametric curve estimation techniques, such as kernel regression, obtain prediction weights solely on the basis of the distance of the regressor coordinates of an observation to the point of prediction. These methods therefore ignore information that the researcher may have concerning a reasonable approximate model. In overlooking such information, the nonparametric curve fitting methods often fit anomalous patterns in the data. This paper presents a method for obtaining an improved set of prediction weights by striking the proper balance between the least squares and kernel weighting schemes. The method is called "HATLINK," since the appropriate balance is achieved through a mixture of the hat matrices corresponding to the least squares and kernel fits. The mixing parameter is determined adaptively through cross-validation (PRESS) or by a version of the Cp statistic. Predictions obtained through the HATLINK procedure are shown through simulation studies to be robust to model misspecification by the researcher. It is also demonstrated that the HATLINK procedure can be used to perform many of the usual tasks of regression analysis, such as estimating the error variance, providing confidence intervals, testing for lack of fit of the user's prescribed model, and assisting in the variable selection process. In accomplishing all of these tasks, the HATLINK procedure provides a model-robust alternative to the standard model-based approach to regression. / Ph. D.
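The hat-matrix mixture can be sketched concretely: blend the least squares hat matrix with a kernel (Nadaraya-Watson) hat matrix and choose the mixing parameter by PRESS. The Gaussian kernel, bandwidth, grid of mixing values, and data below are illustrative assumptions, not the dissertation's exact setup.

```python
import numpy as np

def hat_ls(X):
    # Least squares hat matrix H = X (X^T X)^{-1} X^T.
    return X @ np.linalg.solve(X.T @ X, X.T)

def hat_kernel(x, bandwidth=0.1):
    # Nadaraya-Watson smoother matrix: Gaussian weights, rows normalized to 1.
    W = np.exp(-0.5 * ((x[:, None] - x[None, :]) / bandwidth) ** 2)
    return W / W.sum(axis=1, keepdims=True)

def press(H, y):
    # Leave-one-out PRESS via the standard shortcut e_i / (1 - h_ii).
    resid = (y - H @ y) / (1.0 - np.diag(H))
    return float(resid @ resid)

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0, 1, 60))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, 60)  # true curve is not linear

X = np.column_stack([np.ones_like(x), x])           # misspecified linear model
H_ls, H_k = hat_ls(X), hat_kernel(x)
lams = np.linspace(0, 1, 21)
scores = [press((1 - lam) * H_ls + lam * H_k, y) for lam in lams]
best = lams[int(np.argmin(scores))]
print(best)
```

With a badly misspecified linear model, PRESS pushes the weight toward the kernel fit; were the prescribed model nearly correct, the selected mixture would instead lean on the parametric weights.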
207

Choosing summary statistics by least angle regression for approximate Bayesian computation

Faisal, Muhammad, Futschik, A., Hussain, I., Abd-el.Moemen, M. 01 February 2016 (has links)
Bayesian statistical inference relies on the posterior distribution. Depending on the model, the posterior can be more or less difficult to derive. In recent years, there has been a lot of interest in complex settings where the likelihood is analytically intractable. In such situations, approximate Bayesian computation (ABC) provides an attractive way of carrying out Bayesian inference. To obtain reliable posterior estimates, however, it is important to keep the approximation errors in ABC small, and the choice of an appropriate set of summary statistics plays a crucial role in this effort. Here, we report the development of a new algorithm, based on least angle regression, for choosing summary statistics. In two population genetic examples, the performance of the new algorithm is better than a previously proposed approach that uses partial least squares. / Higher Education Commission (HEC), College Deanship of Scientific Research, King Saud University, Riyadh, Saudi Arabia - research group project RGP-VPP-280.
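The selection idea can be illustrated with a toy stand-in for the paper's population genetic setting: regress the parameter of interest on candidate summary statistics over simulated pairs, and keep the statistics that enter the least angle regression path first. The simulator and the candidate statistics below are hypothetical.

```python
import numpy as np
from sklearn.linear_model import Lars

rng = np.random.default_rng(3)
n_sim = 500
theta = rng.uniform(0, 10, n_sim)            # parameter drawn from its prior

# Candidate summary statistics: two informative, three pure noise.
informative1 = theta + rng.normal(0, 1.0, n_sim)
informative2 = 0.5 * theta + rng.normal(0, 1.0, n_sim)
noise = rng.normal(0, 1.0, (n_sim, 3))
stats = np.column_stack([informative1, informative2, noise])

# Least angle regression of theta on the statistics; the first statistics to
# become active are the ones kept for the ABC distance.
lars = Lars(n_nonzero_coefs=2).fit(stats, theta)
selected = sorted(int(i) for i in lars.active_)
print(selected)
```

Because LARS adds predictors in order of correlation with the current residual, the informative statistics enter the path before the noise columns, which is exactly the behavior the selection procedure exploits.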
208

Confirmatory factor analysis with ordinal data : effects of model misspecification and indicator nonnormality on two weighted least squares estimators

Vaughan, Phillip Wingate 22 October 2009 (has links)
Full weighted least squares (full WLS) and robust weighted least squares (robust WLS) are currently the two primary estimation methods designed for structural equation modeling with ordinal observed variables. These methods assume that continuous latent variables were coarsely categorized by the measurement process to yield the observed ordinal variables, and that the model proposed by the researcher pertains to these latent variables rather than to their ordinal manifestations. Previous research has strongly suggested that robust WLS is superior to full WLS when models are correctly specified. Given the realities of applied research, it was critical to examine these methods with misspecified models. This Monte Carlo simulation study examined the performance of full and robust WLS for two-factor, eight-indicator confirmatory factor analytic models that were either correctly specified, overspecified, or misspecified in one of two ways. Seven conditions of five-category indicator distribution shape at four sample sizes were simulated. These design factors were completely crossed for a total of 224 cells. Previous findings of the relative superiority of robust WLS with correctly specified models were replicated, and robust WLS was also found to perform better than full WLS given overspecification or misspecification. Robust WLS parameter estimates were usually more accurate for correct and overspecified models, especially at the smaller sample sizes. In the face of misspecification, full WLS better approximated the correct loading values, whereas robust estimates better approximated the correct factor correlation. Robust WLS chi-square values discriminated between correct and misspecified models much better than full WLS values at the two smaller sample sizes. For all four model specifications, robust parameter estimates usually showed lower variability and robust standard errors usually showed lower bias.
These findings suggest that robust WLS should likely remain the estimator of choice for applied researchers. Additionally, highly leptokurtic distributions should be avoided when possible. It should also be noted that robust WLS performance was arguably adequate at the sample size of 100 when the indicators were not highly leptokurtic. / text
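The measurement assumption described above, a continuous latent response coarsely categorized by thresholds into a five-category ordinal indicator, can be sketched in a few lines. The thresholds below are illustrative, not the study's design values; shifting them is how a simulation produces the symmetric versus skewed (or leptokurtic) indicator shapes the study manipulates.

```python
import numpy as np

def categorize(latent, thresholds):
    # Map continuous latent scores to ordinal categories 0..len(thresholds).
    return np.searchsorted(thresholds, latent)

rng = np.random.default_rng(4)
latent = rng.normal(0, 1, 10_000)   # continuous latent response

symmetric = categorize(latent, [-1.5, -0.5, 0.5, 1.5])
skewed = categorize(latent, [0.5, 1.0, 1.5, 2.0])  # piles mass in category 0

print(np.bincount(symmetric, minlength=5) / latent.size)
print(np.bincount(skewed, minlength=5) / latent.size)
```

The two sets of category proportions show how the same latent distribution yields very different observed indicator shapes, which is the design factor crossed with sample size in the simulation.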
209

Analysis of 3D objects at multiple scales : application to shape matching

Mellado, Nicolas 06 December 2012 (has links)
Over the last decades, the evolution of acquisition techniques has led to the generalization of detailed 3D objects, represented as huge point sets composed of millions of vertices. The complexity of the involved data often requires analyzing them to extract and characterize the most pertinent structures, which are potentially defined at multiple scales. Among the wide variety of methods proposed to analyze digital signals, scale-space analysis is today a standard for the study of 2D curves and images. However, its adaptation to 3D data leads to instabilities and requires connectivity information, which is not directly available when dealing with point sets.
In this thesis, we present a new multi-scale analysis framework that we call the Growing Least Squares (GLS). It consists of a robust local geometric descriptor that can be evaluated on point sets at multiple scales using an efficient second-order fitting procedure. We propose to analytically differentiate this descriptor to extract continuously the pertinent structures in scale-space. We show that this representation and the associated toolbox define an efficient way to analyze 3D objects represented as point sets at multiple scales, and we demonstrate its relevance in various application scenarios.
A challenging application is the analysis of acquired 3D objects from the Cultural Heritage field. In this thesis, we study a real-world dataset composed of the fragments of the statues that once surrounded the legendary Alexandria Lighthouse, Seventh Wonder of the World. In particular, we focus on the problem of fractured object reassembly, where objects consist of few fragments (up to about ten) but have parts missing due to erosion or deterioration. We propose a semi-automatic formalism to combine both the archaeologist's knowledge and the accuracy of geometric matching algorithms during the reassembly process. We use it to design two systems, and we show their efficiency in concrete cases. / Ph. D.
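The second-order fitting step underlying such a descriptor can be sketched as an algebraic least squares sphere fit to a local point neighborhood: writing the sphere as |p|² = 2c·p + d (so r² = d + |c|²) makes the problem linear in the unknowns, and growing the neighborhood radius simply re-runs the fit at larger scales. This is a generic sketch of that fitting idea, not the thesis's exact GLS formulation; the data below is a synthetic noisy sphere.

```python
import numpy as np

def fit_sphere(points):
    # Algebraic sphere fit: solve [2p | 1] [c; d] = |p|^2 in least squares,
    # then recover the radius from r^2 = d + |c|^2.
    A = np.column_stack([2 * points, np.ones(len(points))])
    b = (points ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center, d = sol[:3], sol[3]
    radius = np.sqrt(d + center @ center)
    return center, radius

rng = np.random.default_rng(6)
dirs = rng.normal(size=(200, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
pts = np.array([1.0, -2.0, 0.5]) + 3.0 * dirs + rng.normal(0, 0.01, (200, 3))
center, radius = fit_sphere(pts)
print(np.round(center, 2), round(float(radius), 2))
```

Tracking how the fitted center and radius vary as the neighborhood grows is what turns a single local fit like this into a multi-scale description.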
210

Algorithm-Based Efficient Approaches for Motion Estimation Systems

Lee, Teahyung 14 November 2007 (has links)
121 pages. Directed by Dr. David V. Anderson.
This research addresses algorithms for efficient motion estimation systems. With the growth of the wireless video system market, including mobile imaging, digital still and video cameras, and video sensor networks, low-power consumption is increasingly desirable for embedded video systems. Motion estimation typically requires considerable computation and is a basic building block for many video applications. To enable low-power video systems using embedded devices and sensors, a CMOS imager has been developed that allows low-power computations on the focal plane. In this dissertation, efficient motion estimation algorithms are presented to complement this platform. In the first part of the dissertation, we propose two algorithms for gradient-based optical flow estimation (OFE) that reduce computational complexity while maintaining high performance. The first is a checkerboard-type filtering (CBTF) algorithm for prefiltering and spatiotemporal derivative calculations. The second is a spatially recursive OFE framework that uses recursive least squares (RLS) and/or matrix refinement to reduce the computational complexity of solving the linear system of image-intensity derivatives in least-squares (LS) OFE. Simulation results show that CBTF and spatially recursive OFE improve computational efficiency over conventional approaches with similar or better performance. In the second part of the dissertation, we propose a new algorithm for video coding that improves motion estimation and compensation performance in the wavelet domain. The algorithm performs wavelet-based multi-resolution motion estimation (MRME) using temporal aliasing detection (TAD) to enhance rate-distortion (RD) performance under temporal aliasing noise.
This technique gives competitive or better performance in terms of RD compared to conventional MRME and MRME with motion vector prediction through median filtering.
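The least-squares OFE step that the first part of the dissertation accelerates can be sketched as the classic windowed solve: stack the brightness-constancy constraints Ix·u + Iy·v = −It over a window and solve the resulting linear system for the flow (u, v). The batch solve below is a generic baseline, not the dissertation's RLS or CBTF variant, and the derivative data is synthetic.

```python
import numpy as np

def solve_flow(Ix, Iy, It):
    # Stack one brightness-constancy equation per pixel in the window and
    # solve the overdetermined system A [u, v]^T = b in least squares.
    A = np.column_stack([Ix.ravel(), Iy.ravel()])
    b = -It.ravel()
    return np.linalg.lstsq(A, b, rcond=None)[0]

rng = np.random.default_rng(5)
true_uv = np.array([0.8, -0.3])
Ix = rng.normal(0, 1, (7, 7))   # spatial derivatives over a 7x7 window
Iy = rng.normal(0, 1, (7, 7))
It = -(Ix * true_uv[0] + Iy * true_uv[1])  # temporal derivative consistent
u, v = solve_flow(Ix, Iy, It)              # with the true flow (noise-free)
print(round(u, 3), round(v, 3))
```

A recursive LS variant updates this solution one pixel (one row of A) at a time instead of refactoring the whole system, which is the source of the computational savings the abstract describes.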
