201
GIS based optimal design of sewer networks and pump stations
Agbenowosi, Newland Komla, 11 June 2009
In the planning and design of sewer networks, most decisions are spatially dependent because of right-of-way considerations and the desire to convey flow by gravity. This research addresses the application of combined optimization-geographic information system (GIS) technology in the design process. The program developed for the design uses selected manhole locations to generate candidate sewer networks. The design area is delineated into subwatersheds to determine the locations of lift stations where gravity flow is not possible. Flows from upstream subwatersheds are transported to downstream subwatersheds via a force main. The path and destination of each force main in the system are determined by applying Dijkstra's shortest path algorithm to select the least-cost path from a set of potential paths. This method seeks to minimize the total dynamic head. A modified length is used to represent the length of each link or force main segment. The modified length is the physical length of the link (representing the friction loss) plus an equivalent length (representing the static head). The least-cost path for the force main is the path with the least total modified length. The design approach is applied to two areas in the town of Blacksburg, Virginia. The resulting network and the force main paths are discussed. / Master of Science
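The routing idea can be illustrated with a short sketch (not the author's program): candidate force-main links are weighted by a modified length, and Dijkstra's algorithm selects the path with the least total modified length. The node names, lengths, and the factor converting static lift into an equivalent length are all illustrative assumptions.

```python
import heapq

def modified_length(physical_length_m, static_lift_m, equiv_m_per_m_lift=50.0):
    """Modified length = friction term (physical length) plus an equivalent
    length standing in for static head. The conversion factor is illustrative."""
    return physical_length_m + equiv_m_per_m_lift * static_lift_m

def dijkstra(graph, source, target):
    """Least-cost path by total modified length over candidate force-main links."""
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    visited = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in visited:
            continue
        visited.add(node)
        if node == target:
            break
        for nbr, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    # Reconstruct the least-cost path by walking the predecessor map backwards
    path, node = [], target
    while node != source:
        path.append(node)
        node = prev[node]
    path.append(source)
    return list(reversed(path)), dist[target]

# Candidate links: (neighbour, modified length); hypothetical manholes and lift station
graph = {
    "LS1": [("MH2", modified_length(420, 3.0)), ("MH3", modified_length(610, 1.0))],
    "MH2": [("WWTP", modified_length(800, 2.5))],
    "MH3": [("WWTP", modified_length(750, 0.5))],
}
print(dijkstra(graph, "LS1", "WWTP"))
```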
202
A New Method of Determining the Transmission Line Parameters of an Untransposed Line using Synchrophasor Measurements
Lowe, Bradley Shayne, 10 September 2015
Transmission line parameters play a significant role in a variety of power system applications. The accuracy of these parameters is of paramount importance. Traditional methods of determining transmission line parameters must take a large number of factors into consideration. It is difficult and in most cases impractical to include every possible factor when calculating parameter values. A modern approach to the parameter identification problem is an online method by which the parameter values are calculated using synchronized voltage and current measurements from both ends of a transmission line.
One of the biggest problems facing the synchronized measurement method is line transposition. Several methods have been proposed that demonstrate how the line parameters of a transposed line may be estimated. However, the majority of transmission lines in today's power systems are untransposed. While transposed-line methods have value, they cannot be applied to most real-world lines. Future efforts to estimate transmission line parameters from synchronized measurements must therefore focus on developing and refining untransposed-line methods.
This thesis reviews the existing methods of estimating transmission line parameters from synchrophasor measurements and proposes a new method for estimating the parameters of an untransposed line. A sensitivity analysis is then conducted to determine the performance of the proposed method when noise is present in the measurements. / Master of Science
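To make the two-ended measurement idea concrete, the sketch below recovers the series impedance and shunt admittance of a single-phase (positive-sequence) nominal pi line from synthetic sending- and receiving-end phasors. This is only the simplified, effectively transposed case, not the untransposed three-phase method proposed in the thesis; all numerical values are illustrative.

```python
import numpy as np

# True line constants for the synthetic example (per-phase, ohms / siemens)
Z_true = 2.0 + 20.0j          # series impedance
Y_true = 1e-4j                # total shunt admittance

rng = np.random.default_rng(0)
snapshots = []
for _ in range(50):
    V_r = (340e3 + rng.normal(0, 2e3)) * np.exp(1j * rng.uniform(-0.1, 0.1))
    I_r = (400 + rng.normal(0, 30)) * np.exp(1j * rng.uniform(-0.4, 0.0))
    # Nominal pi model: forward-compute the sending-end phasors
    I_series = I_r + (Y_true / 2) * V_r
    V_s = V_r + Z_true * I_series
    I_s = I_series + (Y_true / 2) * V_s
    # Add light measurement noise to every phasor
    noise = lambda x: x * (1 + rng.normal(0, 1e-4)) * np.exp(1j * rng.normal(0, 1e-4))
    snapshots.append(tuple(map(noise, (V_s, I_s, V_r, I_r))))

# Per-snapshot closed-form estimates, then averaged (a crude least-squares stand-in)
Y_est, Z_est = [], []
for V_s, I_s, V_r, I_r in snapshots:
    half_Y = (I_s - I_r) / (V_s + V_r)          # I_s - I_r = (Y/2)(V_s + V_r)
    Y_est.append(2 * half_Y)
    Z_est.append((V_s - V_r) / (I_r + half_Y * V_r))

print("Z estimate:", np.mean(Z_est))
print("Y estimate:", np.mean(Y_est))
```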
203
The Sherman Morrison Iteration
Slagel, Joseph Tanner, 17 June 2015
The Sherman Morrison iteration method is developed to solve regularized least squares problems. Notions of pivoting and splitting are considered to make the method more robust. The Sherman Morrison iteration method is shown to be effective when dealing with an extremely underdetermined least squares problem. The performance of the Sherman Morrison iteration is compared to classic direct methods, as well as iterative methods, in a number of experiments. A specific Matlab implementation of the Sherman Morrison iteration is discussed, with Matlab code for the method available in the appendix. / Master of Science
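The core of the method can be sketched as follows: because A^T A is a sum of rank-one terms, the regularized normal-equations inverse can be built one Sherman-Morrison update at a time. This is a minimal dense illustration, without the pivoting and splitting refinements discussed in the thesis.

```python
import numpy as np

def sherman_morrison_solve(A, b, lam):
    """Solve min ||Ax - b||^2 + lam*||x||^2 by building (lam*I + A^T A)^{-1}
    one rank-one row update at a time with the Sherman-Morrison formula."""
    m, n = A.shape
    B = np.eye(n) / lam                      # inverse of lam*I
    for i in range(m):
        a = A[i]                             # adding a a^T to the matrix being inverted
        Ba = B @ a
        B -= np.outer(Ba, Ba) / (1.0 + a @ Ba)
    return B @ (A.T @ b)

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 500))           # extremely underdetermined system
b = rng.standard_normal(20)
lam = 1e-2

x_sm = sherman_morrison_solve(A, b, lam)
x_direct = np.linalg.solve(A.T @ A + lam * np.eye(500), A.T @ b)
print(np.allclose(x_sm, x_direct))           # the two solutions agree
```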
204
HATLINK: a link between least squares regression and nonparametric curve estimation
Einsporn, Richard L., January 1987
For both least squares and nonparametric kernel regression, prediction at a given regressor location is obtained as a weighted average of the observed responses. For least squares, the weights used in this average are a direct consequence of the form of the parametric model prescribed by the user. If the prescribed model is not exactly correct, then the resulting predictions and subsequent inferences may be misleading. On the other hand, nonparametric curve estimation techniques, such as kernel regression, obtain prediction weights solely on the basis of the distance of the regressor coordinates of an observation to the point of prediction. These methods therefore ignore information that the researcher may have concerning a reasonable approximate model. In overlooking such information, the nonparametric curve fitting methods often fit anomalous patterns in the data.
This paper presents a method for obtaining an improved set of prediction weights by striking the proper balance between the least squares and kernel weighting schemes. The method is called "HATLINK," since the appropriate balance is achieved through a mixture of the hat matrices corresponding to the least squares and kernel fits. The mixing parameter is determined adaptively through cross-validation (PRESS) or by a version of the Cp statistic. Predictions obtained through the HATLINK procedure are shown through simulation studies to be robust to model misspecification by the researcher. It is also demonstrated that the HATLINK procedure can be used to perform many of the usual tasks of regression analysis, such as estimating the error variance, providing confidence intervals, testing for lack of fit of the user's prescribed model, and assisting in the variable selection process. In accomplishing all of these tasks, the HATLINK procedure provides a model-robust alternative to the standard model-based approach to regression. / Ph. D.
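A minimal sketch of the hat-matrix mixture is given below, assuming a one-regressor linear fit, a Gaussian (Nadaraya-Watson) kernel smoother, and a simple grid search of the mixing parameter by PRESS; the bandwidth and mixing grid are illustrative choices, not those of the HATLINK procedure itself.

```python
import numpy as np

def ls_hat(x):
    """Hat matrix for ordinary least squares on a one-regressor design with intercept."""
    Xd = np.column_stack([np.ones(len(x)), x])
    return Xd @ np.linalg.solve(Xd.T @ Xd, Xd.T)

def kernel_hat(x, bandwidth):
    """Nadaraya-Watson (Gaussian kernel) smoother matrix."""
    D = x[:, None] - x[None, :]
    W = np.exp(-0.5 * (D / bandwidth) ** 2)
    return W / W.sum(axis=1, keepdims=True)

def press(H, y):
    """Leave-one-out (PRESS) criterion for a linear smoother y_hat = H y."""
    resid = y - H @ y
    return np.sum((resid / (1.0 - np.diag(H))) ** 2)

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0, 10, 60))
y = 1.0 + 0.5 * x + 0.8 * np.sin(x) + rng.normal(0, 0.3, 60)   # a straight-line model would be misspecified

H_ls, H_k = ls_hat(x), kernel_hat(x, bandwidth=0.8)
lambdas = np.linspace(0, 1, 21)
scores = [press((1 - lam) * H_ls + lam * H_k, y) for lam in lambdas]
best = lambdas[int(np.argmin(scores))]
print("mixing weight on the kernel fit chosen by PRESS:", best)
```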
205
Evaluation and improvement of tree stump volume prediction models in the eastern United States
Barker, Ethan Jefferson, 06 June 2017
Forests are considered among the best carbon stocks on the planet. After forest harvest, the residual tree stumps persist on the site for years, continuing to store carbon. In addition, the component ratio method requires an estimate of stump volume to obtain total tree aboveground biomass, so stump volumes contribute to the National Carbon Inventory. Agencies and organizations concerned with carbon accounting would benefit from an improved method for predicting tree stump volume. In this work, many model forms are evaluated for their accuracy in predicting stump volume. Stump profile and stump volume predictions were produced for both outside- and inside-bark measurements. Fitting previously used models to a larger data set allows for improved regression coefficients and potentially more flexible and accurate models. The data set was compiled from a large selection of legacy data as well as some newly collected field measurements. Analyses were conducted for thirty of the most numerous tree species in the eastern United States, providing an improved method for inside- and outside-bark stump volume estimation. / Master of Science / Forests are considered among the best carbon stocks on the planet, and estimates of total tree aboveground biomass are needed to maintain the National Carbon Inventory. Tree stump volumes contribute to total tree aboveground biomass estimates. Agencies and organizations concerned with carbon accounting would benefit from an improved method for predicting tree stump volume. In this work, existing mathematical equations used to estimate tree stump volume are evaluated. A larger and more inclusive data set was utilized to improve the current equations and to gather more insight into which equations are best for different tree species and different areas of the eastern United States.
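The evaluation workflow can be illustrated with a small sketch that fits two candidate linear-in-parameters volume forms to synthetic stump measurements and compares their fit; the model forms and data are hypothetical stand-ins, not the published equations or legacy data used in the thesis.

```python
import numpy as np

# Synthetic stump measurements (illustrative only): diameter outside bark at the
# stump (cm), stump height (m), and "measured" stump volume (m^3).
rng = np.random.default_rng(3)
D = rng.uniform(15, 60, 80)
h = rng.uniform(0.1, 0.5, 80)
V = 1e-4 * D**2 * h * rng.normal(1.0, 0.05, 80)

def fit_and_rmse(design, V):
    """Fit a linear-in-parameters volume model by least squares; report RMSE."""
    coef, *_ = np.linalg.lstsq(design, V, rcond=None)
    resid = V - design @ coef
    return coef, np.sqrt(np.mean(resid**2))

# Two candidate forms, both hypothetical stand-ins for published stump-volume equations
ones = np.ones_like(D)
combined = np.column_stack([ones, D**2 * h])          # V = b0 + b1 * D^2 * h
separate = np.column_stack([ones, D, D**2, h])        # V = b0 + b1*D + b2*D^2 + b3*h

for name, X in [("combined-variable", combined), ("polynomial", separate)]:
    coef, rmse = fit_and_rmse(X, V)
    print(f"{name:18s} RMSE = {rmse:.4f} m^3")
```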
206
Choosing summary statistics by least angle regression for approximate Bayesian computation
Faisal, Muhammad; Futschik, A.; Hussain, I.; Abd-el.Moemen, M., 01 February 2016
Bayesian statistical inference relies on the posterior distribution. Depending on the model, the posterior can be more or less difficult to derive. In recent years, there has been a lot of interest in complex settings where the likelihood is analytically intractable. In such situations, approximate Bayesian computation (ABC) provides an attractive way of carrying out Bayesian inference. For obtaining reliable posterior estimates, however, it is important to keep the approximation errors small in ABC. The choice of an appropriate set of summary statistics plays a crucial role in this effort. Here, we report the development of a new algorithm that is based on least angle regression for choosing summary statistics. In two population genetic examples, the performance of the new algorithm is better than a previously proposed approach that uses partial least squares. / Higher Education Commission (HEC), College Deanship of Scientific Research, King Saud University, Riyadh, Saudi Arabia - research group project RGP-VPP-280.
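A toy sketch of the approach, assuming a trivially simple simulator (a normal location model) in place of the population-genetic models, a deliberately redundant pool of candidate summaries, and scikit-learn's Lars for the least angle regression step:

```python
import numpy as np
from sklearn.linear_model import Lars

rng = np.random.default_rng(4)

def simulate(theta, n=100):
    """Toy model: data are Normal(theta, 1); in practice this would be the simulator."""
    return rng.normal(theta, 1.0, n)

def candidate_stats(x):
    # A redundant pool of candidate summary statistics
    return np.array([x.mean(), np.median(x), x.var(), x.min(), x.max(),
                     np.percentile(x, 25), np.percentile(x, 75)])

# 1) Pilot simulations; choose summaries by least angle regression on the parameter
thetas = rng.uniform(-3, 3, 2000)
S = np.array([candidate_stats(simulate(t)) for t in thetas])
lars = Lars(n_nonzero_coefs=2).fit(S, thetas)
chosen = np.flatnonzero(lars.coef_)
print("chosen summary statistics (column indices):", chosen)

# 2) ABC rejection sampling using only the chosen summaries
observed = simulate(1.5)
s_obs = candidate_stats(observed)[chosen]
prior_draws = rng.uniform(-3, 3, 20000)
sims = np.array([candidate_stats(simulate(t))[chosen] for t in prior_draws])
dist = np.linalg.norm(sims - s_obs, axis=1)
accepted = prior_draws[dist <= np.quantile(dist, 0.01)]   # keep the closest 1%
print("ABC posterior mean of theta:", accepted.mean())
```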
207
Sanctuary and Community in the Chora of Metaponto
Davidson, Christine, January 2025
The following thesis investigates routes of communication and connectivity within the chora of Metaponto. Using digital methodologies within a project Geographic Information System, Least Cost Path (LCP) analysis is used to reconstruct ancient routes between rural farmsteads and sanctuaries. LCPs are lines drawn over digitized terrain to reconstruct the easiest route of travel between two points with respect to cost factors such as slope and distance. The results of the LCP analysis are then compared against topographical anomalies identified within the landscape of the chora, many of which confirm the likelihood that these linear anomalies were used as ancient routes of travel. A survey of ancient scholarship regarding the parceling of land in both colonial and rural contexts suggests that the lines identified in the chora and supported by this LCP analysis are evidence of rural land division, urban planning, and ancient roads dating from as early as the 6th century BCE.
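A minimal sketch of a least cost path computation is given below, assuming a synthetic elevation model and a simple slope-based cost surface routed with scikit-image; the cost model, terrain, and site locations are illustrative, not those of the project GIS.

```python
import numpy as np
from skimage.graph import route_through_array

# Synthetic DEM (in practice this comes from the project GIS); cell size in metres
cell = 30.0
yy, xx = np.mgrid[0:200, 0:200]
dem = 40 * np.sin(xx / 35.0) + 25 * np.cos(yy / 50.0)        # metres a.s.l.

# Slope magnitude from the DEM, then a simple cost surface: flat ground costs 1,
# steeper ground costs proportionally more (an illustrative cost model only).
dz_dy, dz_dx = np.gradient(dem, cell)
slope = np.hypot(dz_dx, dz_dy)
cost = 1.0 + 10.0 * slope

farmstead = (10, 15)        # row, col of a hypothetical rural site
sanctuary = (180, 170)      # row, col of a hypothetical extra-urban sanctuary
path, total_cost = route_through_array(cost, farmstead, sanctuary,
                                       fully_connected=True, geometric=True)
print(f"least-cost route crosses {len(path)} cells, accumulated cost {total_cost:.1f}")
```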
This evidence is used to suggest that independent communities formed within the Metapontine chora, using rural sanctuaries as locations for agglomeration in both cultic and secular contexts. The founding of these sanctuaries created spheres of influence within which a nucleated collection of inhabitants of the countryside formed. The boundaries of these rural communities are identifiable using a combination of Cost-Distance Allocation and a system of land division first proposed by Giovanni Uggeri in 1969.
This thesis concludes that communities formed within the chora of Metaponto, exercising self-governance in local affairs related to life in the countryside. Identities within these communities were of a composite nature, at once members of a polis and of a unique regional community centered upon their nearest extra-urban sanctuary. Residents of the chora used these sacred spaces as the symbolic capital of their neighbourhood, and the sanctuaries themselves communicated the limits of Metapontine influence and protection. / Thesis / Doctor of Philosophy (PhD) / This thesis investigates the nature of rural habitation and the use of sacred spaces in the countryside of the Greek settlement of Metaponto. Using digital methodologies within the field of landscape archaeology and a robust database of archaeological material, routes of access are reconstructed between these rural settlements and their nearest sanctuary beyond the walls of the city proper. When visualizing these digitally reconstructed routes, it becomes clear that they meaningfully interact with linear topographical anomalies identifiable in the landscape of the countryside. This provides confirmation of the use of these routes in antiquity, as well as of regular frequentation at these sacred sites. These data are then used to explore autonomy outside the city of Metaponto and to suggest that regions formed in which sacred spaces were used as points of contact among residents of the countryside, creating communities which self-managed and evolved semi-independently from the asty.
208
Information and distances
Epstein, Samuel, 23 September 2015
We prove that all randomized sampling methods produce outliers. Given a computable measure P over natural numbers or infinite binary sequences, there is no method that can produce an arbitrarily large sample such that all its members are typical of P. The second part of this dissertation describes a computationally inexpensive method to approximate Hilbertian distances. This method combines the semi-least squares inverse technique with the kernel trick, a canonical modern machine learning technique. In the task of distance approximation, our method was shown to be comparable in performance to a solution employing the Nyström method. Using the kernel semi-least squares method, we developed and incorporated the Kernel-Subset-Tracker into the Camera Mouse, a video-based mouse-replacement software for people with movement disabilities. The Kernel-Subset-Tracker is an exemplar-based method that uses a training set of representative images to produce online templates for positional tracking. Our experiments with test subjects show that augmenting the Camera Mouse with the Kernel-Subset-Tracker yields a statistically significant improvement in communication bandwidth.
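The distance-approximation task can be sketched with a subset-based (Nyström-style) feature map and an RBF kernel; this uses an eigendecomposition of the subset kernel matrix rather than the semi-least squares inverse developed in the dissertation, and the data and kernel settings are illustrative.

```python
import numpy as np

def rbf(A, B, gamma=0.05):
    """Gaussian (RBF) kernel matrix between the rows of A and B."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

rng = np.random.default_rng(5)
X = rng.standard_normal((300, 10))                    # e.g. vectorised image templates
subset = X[rng.choice(len(X), 30, replace=False)]     # small representative subset

# Subset-based feature map: phi(x) = K_xm U diag(1/sqrt(lam)) from K_mm = U diag(lam) U^T
K_mm = rbf(subset, subset)
lam, U = np.linalg.eigh(K_mm)
keep = lam > 1e-10                                    # discard numerically null directions
phi = rbf(X, subset) @ U[:, keep] / np.sqrt(lam[keep])

# Compare an approximate Hilbertian distance with the exact kernel-induced one
i, j = 3, 200
d_exact = np.sqrt(rbf(X[i:i+1], X[i:i+1]) - 2 * rbf(X[i:i+1], X[j:j+1])
                  + rbf(X[j:j+1], X[j:j+1]))[0, 0]
d_approx = np.linalg.norm(phi[i] - phi[j])
print(f"exact {d_exact:.4f}  approximate {d_approx:.4f}")
```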
209
Confirmatory factor analysis with ordinal data: effects of model misspecification and indicator nonnormality on two weighted least squares estimators
Vaughan, Phillip Wingate, 22 October 2009
Full weighted least squares (full WLS) and robust weighted least squares (robust WLS) are currently the two primary estimation methods designed for structural equation modeling with ordinal observed variables. These methods assume that continuous latent variables were coarsely categorized by the measurement process to yield the observed ordinal variables, and that the model proposed by the researcher pertains to these latent variables rather than to their ordinal manifestations.

Previous research has strongly suggested that robust WLS is superior to full WLS when models are correctly specified. Given the realities of applied research, it was critical to examine these methods with misspecified models. This Monte Carlo simulation study examined the performance of full and robust WLS for two-factor, eight-indicator confirmatory factor analytic models that were either correctly specified, overspecified, or misspecified in one of two ways. Seven conditions of five-category indicator distribution shape at four sample sizes were simulated. These design factors were completely crossed for a total of 224 cells.

Previous findings of the relative superiority of robust WLS with correctly specified models were replicated, and robust WLS was also found to perform better than full WLS given overspecification or misspecification. Robust WLS parameter estimates were usually more accurate for correct and overspecified models, especially at the smaller sample sizes. In the face of misspecification, full WLS better approximated the correct loading values whereas robust estimates better approximated the correct factor correlation. Robust WLS chi-square values discriminated between correct and misspecified models much better than full WLS values at the two smaller sample sizes. For all four model specifications, robust parameter estimates usually showed lower variability and robust standard errors usually showed lower bias.

These findings suggest that robust WLS should likely remain the estimator of choice for applied researchers. Additionally, highly leptokurtic distributions should be avoided when possible. It should also be noted that robust WLS performance was arguably adequate at the sample size of 100 when the indicators were not highly leptokurtic.
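The assumed measurement process, continuous latent responses coarsely cut into five ordered categories, can be sketched for a two-factor, eight-indicator design; the loadings, factor correlation, and thresholds below are illustrative, and fitting with full or robust WLS would then be carried out in SEM software.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 500

# Two correlated factors, four indicators each (an illustrative population model)
phi = 0.4                                   # factor correlation
factors = rng.multivariate_normal([0, 0], [[1, phi], [phi, 1]], size=n)
loadings = np.array([[0.7, 0.0], [0.8, 0.0], [0.6, 0.0], [0.7, 0.0],
                     [0.0, 0.7], [0.0, 0.8], [0.0, 0.6], [0.0, 0.7]])
uniq = np.sqrt(1 - (loadings ** 2).sum(axis=1))      # keep latent responses standardized
latent_responses = factors @ loadings.T + rng.standard_normal((n, 8)) * uniq

# Coarse categorization: thresholds turn each continuous response into a
# five-category ordinal indicator; shifting the thresholds would mimic skewed items.
thresholds = np.array([-1.5, -0.5, 0.5, 1.5])        # symmetric case (illustrative)
ordinal_items = np.digitize(latent_responses, thresholds)    # integer values 0..4

print(np.bincount(ordinal_items[:, 0], minlength=5))  # category counts for item 1
# The ordinal data matrix would then be passed to SEM software and estimated with
# a full or robust WLS estimator based on polychoric correlations.
```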
210
Analysis of 3D objects at multiple scales: application to shape matching
Mellado, Nicolas, 06 December 2012
Over the last decades, the evolution of acquisition techniques has led to the generalization of detailed 3D objects, represented as huge point sets composed of millions of vertices. The complexity of the involved data often requires analyzing them for the extraction and characterization of pertinent structures, which are potentially defined at multiple scales. Among the wide variety of methods proposed to analyze digital signals, scale-space analysis is today a standard for the study of 2D curves and images. However, its adaptation to 3D data leads to instabilities and requires connectivity information, which is not directly available when dealing with point sets.

In this thesis, we present a new multi-scale analysis framework that we call the Growing Least Squares (GLS). It consists of a robust local geometric descriptor that can be evaluated on point sets at multiple scales using an efficient second-order fitting procedure. We propose to analytically differentiate this descriptor to extract continuously the pertinent structures in scale-space. We show that this representation and the associated toolbox define an efficient way to analyze 3D objects represented as point sets at multiple scales. To this end, we demonstrate its relevance in various application scenarios.

A challenging application is the analysis of acquired 3D objects coming from the Cultural Heritage field. In this thesis, we study a real-world dataset composed of the fragments of the statues that were surrounding the legendary Alexandria Lighthouse. In particular, we focus on the problem of fractured object reassembly, consisting of few fragments (up to about ten), but with missing parts due to erosion or deterioration. We propose a semi-automatic formalism to combine both the archaeologist's knowledge and the accuracy of geometric matching algorithms during the reassembly process. We use it to design two systems, and we show their efficiency in concrete cases.
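A simplified sketch of the multi-scale fitting idea: for growing neighbourhood radii, a second-order primitive (here a plain algebraic sphere, without the normal constraints of the actual GLS fit) is fitted by least squares and its curvature is read off. The point cloud and radii are illustrative; on a noisy sphere of radius 2, the fitted curvature stabilizes around 0.5 as the scale grows.

```python
import numpy as np

def fit_sphere(P):
    """Algebraic sphere fit: solve ||p||^2 = 2 c.p + d in the least-squares sense,
    giving centre c and radius sqrt(d + ||c||^2). A simplified stand-in for the
    normal-constrained second-order fit used by the GLS descriptor."""
    A = np.column_stack([2 * P, np.ones(len(P))])
    b = (P ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    c, d = sol[:3], sol[3]
    r = np.sqrt(d + c @ c)
    return c, r

# Noisy samples of a sphere of radius 2 (a toy stand-in for a scanned fragment)
rng = np.random.default_rng(7)
dirs = rng.standard_normal((20000, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
cloud = 2.0 * dirs + rng.normal(0, 0.01, (20000, 3))

x = cloud[0]                         # evaluation point
for scale in (0.5, 1.0, 1.5, 2.0):   # growing neighbourhood radii
    nbrs = cloud[np.linalg.norm(cloud - x, axis=1) < scale]
    _, r = fit_sphere(nbrs)
    print(f"scale {scale:.1f}: {len(nbrs):5d} neighbours, fitted curvature {1.0 / r:.3f}")
```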