181 |
Studying the Impact of Solar Photovoltaic on Transient Stability of Power Systems using Direct Methods. Mishra, Chetan, 07 December 2017
The increasing penetration of inverter-based renewable generation in the form of solar photovoltaic (PV) or wind has introduced numerous operational challenges and uncertainties. One of the major challenges is the impact on the transient stability of the grid. On the other hand, direct methods for transient stability assessment of power systems have also evolved considerably over the past 30 years. This set of techniques, inspired by Lyapunov's direct method, provides clear insight into how system stability changes with a changing grid. The most attractive feature of these techniques is the substantial reduction in computational burden achieved by cutting down on simulation time. These advancements, however, were aimed at analyzing the stability of a non-linear autonomous dynamical system, a definition the conventional power system fits well. Due to changing renewable portfolio standards, the power system is undergoing serious structural and performance alterations. The very notion of power system stability is changing, and work on direct methods has not kept pace with these changes. This dissertation aims to employ pre-existing direct methods as well as develop new techniques to visualize and analyze the stability of a power system with the added complexities introduced by PV generation. / Ph. D. / The increasing penetration of inverter-based renewable generation in the form of solar photovoltaic (PV) or wind has introduced numerous operational challenges and uncertainties. One of the major challenges is the impact on the transient stability of the grid. A set of techniques called direct methods significantly cuts down the simulation time required for transient stability studies. However, these techniques have not kept pace with the changing power system dynamics introduced by renewable generation, and thus there is a need to develop new methods to study this changing system, which is the aim of this thesis.
|
182 |
A New Method of Determining the Transmission Line Parameters of an Untransposed Line using Synchrophasor Measurements. Lowe, Bradley Shayne, 10 September 2015
Transmission line parameters play a significant role in a variety of power system applications. The accuracy of these parameters is of paramount importance. Traditional methods of determining transmission line parameters must take a large number of factors into consideration. It is difficult and in most cases impractical to include every possible factor when calculating parameter values. A modern approach to the parameter identification problem is an online method by which the parameter values are calculated using synchronized voltage and current measurements from both ends of a transmission line.
One of the biggest problems facing the synchronized measurement method is line transposition. Several methods have been proposed that demonstrate how the line parameters of a transposed line may be estimated. However, in today's power systems the majority of transmission lines are untransposed. While transposed-line methods have value, they cannot be applied in these real-world scenarios. Future efforts to use synchronized measurements for estimating transmission line parameters must therefore focus on developing and refining untransposed-line methods.
This thesis reviews the existing methods of estimating transmission line parameters using synchrophasor measurements and proposes a new method of estimating the parameters of an untransposed line. After the proposal of this new method, a sensitivity analysis is conducted to determine its performance when noise is present in the measurements. / Master of Science
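For intuition, here is a minimal sketch of the two-ended idea for the much simpler case of a single-phase (or perfectly transposed, positive-sequence) pi-model line. The function, variable names, and the averaging over snapshots are illustrative assumptions; the untransposed, mutually coupled three-phase case treated in this thesis requires a full phase-domain formulation rather than this scalar one.

```python
import numpy as np

def pi_line_parameters(Vs, Is, Vr, Ir):
    """Estimate series impedance Z and total shunt admittance Y of a
    single-phase pi-model line from synchronized phasors at both ends.

    Vs, Is, Vr, Ir: complex arrays of sending/receiving-end voltage and
    current phasors over several measurement snapshots (load conditions).
    """
    Vs, Is, Vr, Ir = map(np.asarray, (Vs, Is, Vr, Ir))
    # Pi model: Is = Vs*Y/2 + (Vs - Vr)/Z   and   (Vs - Vr)/Z = Ir + Vr*Y/2
    Y_half = (Is - Ir) / (Vs + Vr)          # per-snapshot shunt estimate
    Z = (Vs - Vr) / (Ir + Vr * Y_half)      # per-snapshot series estimate
    # Average the per-snapshot estimates (a least-squares fit over all
    # snapshots would be used instead when measurement noise is present).
    return Z.mean(), 2 * Y_half.mean()

# Synthetic check: a line with known Z, Y reproduces its own parameters.
Z_true, Y_true = 2 + 20j, 1e-4j
Vr = np.array([230e3, 225e3, 228e3]) * np.exp(1j * np.array([0.0, -0.05, 0.02]))
Ir = np.array([300.0, 420.0, 150.0]) * np.exp(1j * np.array([-0.3, -0.4, -0.1]))
Vs = Vr + Z_true * (Ir + Vr * Y_true / 2)
Is = Vs * Y_true / 2 + (Ir + Vr * Y_true / 2)
print(pi_line_parameters(Vs, Is, Vr, Ir))   # ~ ((2+20j), 1e-4j)
```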
|
183 |
The Sherman Morrison Iteration. Slagel, Joseph Tanner, 17 June 2015
The Sherman Morrison iteration method is developed to solve regularized least squares problems. Notions of pivoting and splitting are considered to make the method more robust. The Sherman Morrison iteration method is shown to be effective when dealing with extremely underdetermined least squares problems. The performance of the Sherman Morrison iteration is compared to classic direct methods, as well as iterative methods, in a number of experiments. A specific Matlab implementation of the Sherman Morrison iteration is discussed, with Matlab code for the method available in the appendix. / Master of Science
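As a rough sketch of the identity such a method rests on (not necessarily the thesis's exact algorithm, pivoting, or splitting scheme), the code below applies rank-one Sherman Morrison updates to solve a Tikhonov-regularized least squares problem one observation row at a time. It is written in Python rather than Matlab for brevity, and the function and variable names are illustrative.

```python
import numpy as np

def ridge_via_sherman_morrison(A, b, lam):
    """Solve min_x ||A x - b||^2 + lam*||x||^2 by processing one row of A
    at a time, maintaining (lam*I + A^T A)^{-1} with Sherman-Morrison
    rank-one updates instead of refactorizing."""
    m, n = A.shape
    M_inv = np.eye(n) / lam          # inverse of the starting matrix lam*I
    rhs = np.zeros(n)                # accumulates A^T b
    for i in range(m):
        a = A[i]
        Ma = M_inv @ a
        # (M + a a^T)^{-1} = M^{-1} - M^{-1} a a^T M^{-1} / (1 + a^T M^{-1} a)
        M_inv -= np.outer(Ma, Ma) / (1.0 + a @ Ma)
        rhs += b[i] * a
    return M_inv @ rhs

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 500))   # extremely underdetermined: 20 rows, 500 columns
b = rng.standard_normal(20)
x = ridge_via_sherman_morrison(A, b, lam=1e-2)
x_ref = np.linalg.solve(1e-2 * np.eye(500) + A.T @ A, A.T @ b)
print(np.allclose(x, x_ref))         # True
```

Each update costs O(n^2), so the regularized solution is refreshed after every new row without ever factorizing the full normal-equations matrix from scratch.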
|
184 |
HATLINK: a link between least squares regression and nonparametric curve estimation. Einsporn, Richard L., January 1987
For both least squares and nonparametric kernel regression, prediction at a given regressor location is obtained as a weighted average of the observed responses. For least squares, the weights used in this average are a direct consequence of the form of the parametric model prescribed by the user. If the prescribed model is not exactly correct, then the resulting predictions and subsequent inferences may be misleading. On the other hand, nonparametric curve estimation techniques, such as kernel regression, obtain prediction weights solely on the basis of the distance of the regressor coordinates of an observation to the point of prediction. These methods therefore ignore information that the researcher may have concerning a reasonable approximate model. In overlooking such information, the nonparametric curve fitting methods often fit anomalous patterns in the data.
This paper presents a method for obtaining an improved set of prediction weights by striking the proper balance between the least squares and kernel weighting schemes. The method is called "HATLINK," since the appropriate balance is achieved through a mixture of the hat matrices corresponding to the least squares and kernel fits. The mixing parameter is determined adaptively through cross-validation (PRESS) or by a version of the Cp statistic. Predictions obtained through the HATLINK procedure are shown through simulation studies to be robust to model misspecification by the researcher. It is also demonstrated that the HATLINK procedure can be used to perform many of the usual tasks of regression analysis, such as estimating the error variance, providing confidence intervals, testing for lack of fit of the user's prescribed model, and assisting in the variable selection process. In accomplishing all of these tasks, the HATLINK procedure provides a model-robust alternative to the standard model-based approach to regression. / Ph. D.
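A minimal sketch of the underlying idea is given below, assuming a Gaussian kernel, a grid search over the mixing parameter, and PRESS computed from the diagonal of the mixed hat matrix; the exact weighting, bandwidth selection, and Cp variant used by HATLINK are not reproduced here, and all names are illustrative.

```python
import numpy as np

def hat_mixture_fit(X, y, bandwidth, lambdas=np.linspace(0, 1, 51)):
    """Blend the least squares and kernel (Nadaraya-Watson) hat matrices,
    H(lam) = lam*H_ls + (1 - lam)*H_k, choosing lam by minimizing PRESS."""
    n = len(y)
    Xd = np.column_stack([np.ones(n), X])                 # design matrix with intercept
    H_ls = Xd @ np.linalg.solve(Xd.T @ Xd, Xd.T)          # least squares hat matrix
    D = (X[:, None] - X[None, :]) / bandwidth
    K = np.exp(-0.5 * D**2)
    H_k = K / K.sum(axis=1, keepdims=True)                # kernel weights (rows sum to 1)

    def press(lam):
        H = lam * H_ls + (1 - lam) * H_k
        resid = y - H @ y
        return np.sum((resid / (1 - np.diag(H)))**2)      # leave-one-out criterion

    best = min(lambdas, key=press)
    H = best * H_ls + (1 - best) * H_k
    return H @ y, best                                    # fitted values, chosen mixing

# Example: a straight-line model prescribed for data with a mild local bump.
rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 10, 80))
y = 1 + 0.5 * x + 0.8 * np.exp(-(x - 5)**2) + rng.normal(0, 0.3, 80)
yhat, lam = hat_mixture_fit(x, y, bandwidth=0.8)
print(f"chosen mixing parameter: {lam:.2f}")
```

When the prescribed parametric model is nearly correct the chosen mixing parameter drifts toward the least squares end; when the data show structure the model misses, more weight shifts to the kernel fit.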
|
185 |
Evaluating and improvement of tree stump volume prediction models in the eastern United States. Barker, Ethan Jefferson, 06 June 2017
Forests are considered among the best carbon stocks on the planet. After harvest, residual tree stumps persist on the site for years, continuing to store carbon. More importantly, the component ratio method requires stump volume in order to obtain total tree aboveground biomass, so stump volumes contribute to the National Carbon Inventory. Agencies and organizations concerned with carbon accounting would benefit from an improved method for predicting tree stump volume. In this work, many model forms are evaluated for their accuracy in predicting stump volume. Stump profile and stump volume predictions were produced for both outside- and inside-bark measurements. Fitting previously used models to a larger data set allows for improved regression coefficients and potentially more flexible and accurate models. The data set was compiled from a large selection of legacy data as well as some newly collected field measurements. Analysis was conducted for thirty of the most numerous tree species in the eastern United States, providing an improved method for inside- and outside-bark stump volume estimation. / Master of Science / Forests are considered among the best carbon stocks on the planet, and estimates of total tree aboveground biomass are needed to maintain the National Carbon Inventory. Tree stump volumes contribute to total tree aboveground biomass estimates. Agencies and organizations concerned with carbon accounting would benefit from an improved method for predicting tree stump volume. In this work, existing mathematical equations used to estimate tree stump volume are evaluated. A larger and more inclusive data set was utilized to improve the current equations and to gather more insight into which equations are best for different tree species and different areas of the eastern United States.
|
186 |
Information and distances. Epstein, Samuel, 23 September 2015
We prove that all randomized sampling methods produce outliers. Given a computable measure P over natural numbers or infinite binary sequences, there is no method that can produce an arbitrarily large sample such that all its members are typical of P. The second part of this dissertation describes a computationally inexpensive method to approximate Hilbertian distances. This method combines the semi-least squares inverse technique with the canonical modern machine learning technique known as the kernel trick. In the task of distance approximation, our method was shown to be comparable in performance to a solution employing the Nyström method. Using the kernel semi-least squares method, we developed and incorporated the Kernel-Subset-Tracker into the Camera Mouse, a video-based mouse-replacement software for people with movement disabilities. The Kernel-Subset-Tracker is an exemplar-based method that uses a training set of representative images to produce online templates for positional tracking. Our experiments with test subjects show that augmenting the Camera Mouse with the Kernel-Subset-Tracker yields a statistically significant improvement in communication bandwidth.
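The abstract does not spell out the kernel semi-least squares construction, so the sketch below only illustrates the general subset-projection idea it is related to: feature-space points are projected onto the span of a small landmark subset by a least-squares (pseudoinverse) fit, and distances are measured between the projections, which is closely related to the Nyström approach it was compared against. The kernel choice, landmark selection, and names are assumptions.

```python
import numpy as np

def rbf(A, B, gamma=0.5):
    """Gaussian RBF kernel matrix between the rows of A and the rows of B."""
    sq = ((A[:, None, :] - B[None, :, :])**2).sum(-1)
    return np.exp(-gamma * sq)

def approx_kernel_distance(x, y, landmarks, gamma=0.5):
    """Approximate the Hilbertian (feature-space) distance between x and y
    using only kernel evaluations against a small landmark subset: phi(x) is
    projected onto span{phi(s_j)} via least squares, coefficients K_SS^+ k_S(x)."""
    K_ss_pinv = np.linalg.pinv(rbf(landmarks, landmarks, gamma))
    d = rbf(landmarks, x[None, :], gamma) - rbf(landmarks, y[None, :], gamma)
    val = max((d.T @ K_ss_pinv @ d).item(), 0.0)   # guard against tiny negatives
    return float(np.sqrt(val))

rng = np.random.default_rng(2)
data = rng.standard_normal((200, 5))
landmarks = data[rng.choice(200, size=30, replace=False)]   # representative subset
x, y = data[0], data[1]
exact = np.sqrt(2 - 2 * rbf(x[None, :], y[None, :])[0, 0])  # exact, since k(x,x)=1 for RBF
print(exact, approx_kernel_distance(x, y, landmarks))
```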
|
187 |
Confirmatory factor analysis with ordinal data: effects of model misspecification and indicator nonnormality on two weighted least squares estimators. Vaughan, Phillip Wingate, 22 October 2009
Full weighted least squares (full WLS) and robust weighted least squares (robust WLS) are currently the two primary estimation methods designed for structural equation modeling with ordinal observed variables. These methods assume that continuous latent variables were coarsely categorized by the measurement process to yield the observed ordinal variables, and that the model proposed by the researcher pertains to these latent variables rather than to their ordinal manifestations. Previous research has strongly suggested that robust WLS is superior to full WLS when models are correctly specified. Given the realities of applied research, it was critical to examine these methods with misspecified models. This Monte Carlo simulation study examined the performance of full and robust WLS for two-factor, eight-indicator confirmatory factor analytic models that were either correctly specified, overspecified, or misspecified in one of two ways. Seven conditions of five-category indicator distribution shape at four sample sizes were simulated. These design factors were completely crossed for a total of 224 cells. Previous findings of the relative superiority of robust WLS with correctly specified models were replicated, and robust WLS was also found to perform better than full WLS given overspecification or misspecification. Robust WLS parameter estimates were usually more accurate for correct and overspecified models, especially at the smaller sample sizes. In the face of misspecification, full WLS better approximated the correct loading values, whereas robust estimates better approximated the correct factor correlation. Robust WLS chi-square values discriminated between correct and misspecified models much better than full WLS values at the two smaller sample sizes. For all four model specifications, robust parameter estimates usually showed lower variability and robust standard errors usually showed lower bias. These findings suggest that robust WLS should likely remain the estimator of choice for applied researchers. Additionally, highly leptokurtic distributions should be avoided when possible. It should also be noted that robust WLS performance was arguably adequate at a sample size of 100 when the indicators were not highly leptokurtic.
|
188 |
Analysis of 3D objects at multiple scales: application to shape matching. Mellado, Nicolas, 06 December 2012
In recent years, the evolution of acquisition techniques has led to the widespread use of very dense 3D objects, represented as point clouds of several million vertices. Given the complexity of these data, it is often necessary to analyze them to extract the most pertinent structures, which are potentially defined at multiple scales. Among the many methods traditionally used to analyze digital signals, scale-space analysis is today a standard for the study of curves and images. However, its adaptation to 3D data raises instability problems and requires connectivity information, which is not directly available for point clouds. In this thesis, we present a suite of mathematical tools for the analysis of 3D objects, called Growing Least Squares (GLS). We propose to represent the geometry described by a point cloud via a second-order primitive fitted by least-squares minimization, at multiple scales. This description is then differentiated analytically to continuously extract the most pertinent structures in both space and scale. We show through several examples and comparisons that this representation and the associated tools define an efficient solution for multi-scale point cloud analysis. An interesting challenge is the analysis of 3D objects acquired in the context of cultural heritage studies. In this thesis, we study the data generated by the acquisition of fragments of the statues that once surrounded the Lighthouse of Alexandria, Seventh Wonder of the World. More precisely, we are interested in the reassembly of objects fractured into few fragments (around ten), but with many parts missing or heavily degraded by the passage of time. We propose a formalism for the design of semi-automatic virtual reassembly systems, making it possible to combine the knowledge of archaeologists with the precision of assembly algorithms. We present two systems based on this design, and we show their effectiveness in concrete cases. / Over the last decades, the evolution of acquisition techniques yields the generalization of detailed 3D objects, represented as huge point sets composed of millions of vertices. The complexity of the involved data often requires analyzing them for the extraction and characterization of pertinent structures, which are potentially defined at multiple scales. Among the wide variety of methods proposed to analyze digital signals, scale-space analysis is today a standard for the study of 2D curves and images. However, its adaptation to 3D data leads to instabilities and requires connectivity information, which is not directly available when dealing with point sets. In this thesis, we present a new multi-scale analysis framework that we call the Growing Least Squares (GLS). It consists of a robust local geometric descriptor that can be evaluated on point sets at multiple scales using an efficient second-order fitting procedure. We propose to analytically differentiate this descriptor to extract continuously the pertinent structures in scale-space. We show that this representation and the associated toolbox define an efficient way to analyze 3D objects represented as point sets at multiple scales.
To this end, we demonstrate its relevance in various application scenarios. A challenging application is the analysis of acquired 3D objects coming from the Cultural Heritage field. In this thesis, we study a real-world dataset composed of the fragments of the statues that were surrounding the legendary Alexandria Lighthouse. In particular, we focus on the problem of fractured object reassembly, consisting of few fragments (up to about ten), but with missing parts due to erosion or deterioration. We propose a semi-automatic formalism to combine both the archaeologist's knowledge and the accuracy of geometric matching algorithms during the reassembly process. We use it to design two systems, and we show their efficiency in concrete cases.
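As a simplified sketch of the kind of fit GLS builds on, the code below performs a weighted least-squares fit of an algebraic sphere s(x) = u_c + u_l·x + u_q‖x‖² to an oriented point neighborhood at a given scale t. The actual GLS descriptor additionally applies a specific reparameterization (offset, normal direction, curvature) and normalization, which are not reproduced here; the weight function and all names are assumptions.

```python
import numpy as np

def fit_algebraic_sphere(points, normals, center, t):
    """Weighted least-squares fit of s(x) = uc + ul.x + uq*|x|^2 to the
    oriented points within radius t of `center`; the scale t is the knob
    a multi-scale (GLS-style) analysis would sweep."""
    d = np.linalg.norm(points - center, axis=1)
    w = np.where(d < t, (1 - (d / t)**2)**2, 0.0)          # smooth, compactly supported weight
    mask = w > 0
    p, n, sw = points[mask], normals[mask], np.sqrt(w[mask])

    rows, rhs = [], []
    for pi, ni, s in zip(p, n, sw):
        # gradient constraints: grad s(pi) = ul + 2*uq*pi should match the normal
        for k in range(3):
            e = np.zeros(3); e[k] = 1.0
            rows.append(s * np.concatenate(([0.0], e, [2.0 * pi[k]])))
            rhs.append(s * ni[k])
        # value constraint: s(pi) should vanish on the surface
        rows.append(s * np.concatenate(([1.0], pi, [pi @ pi])))
        rhs.append(0.0)
    u, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    uc, ul, uq = u[0], u[1:4], u[4]
    kappa = 2 * uq / np.sqrt(max(ul @ ul - 4 * uc * uq, 1e-12))  # curvature of fitted sphere
    return uc, ul, uq, kappa

# Synthetic check: points sampled on a sphere of radius 2 give kappa ~ 0.5.
rng = np.random.default_rng(3)
dirs = rng.standard_normal((400, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
pts, nrm = 2.0 * dirs, dirs
print(fit_algebraic_sphere(pts, nrm, center=np.array([0.0, 0.0, 2.0]), t=1.5)[3])
```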
|
189 |
Algorithm-Based Efficient Approaches for Motion Estimation Systems. Lee, Teahyung, 14 November 2007
121 pages. Directed by Dr. David V. Anderson.
This research addresses algorithms for efficient motion estimation systems. With the growth of the wireless video system market, including mobile imaging, digital still and video cameras, and video sensor networks, low power consumption is increasingly desirable for embedded video systems. Motion estimation typically requires considerable computation and is a basic building block for many video applications. To implement low-power video systems using embedded devices and sensors, a CMOS imager has been developed that allows low-power computations on the focal plane. In this dissertation, efficient motion estimation algorithms are presented to complement this platform.
In the first part of the dissertation, we propose two algorithms for gradient-based optical flow estimation (OFE) that reduce computational complexity while maintaining high performance. The first is a checkerboard-type filtering (CBTF) algorithm for prefiltering and spatiotemporal derivative calculations. The second is a set of spatially recursive OFE frameworks using recursive least squares (RLS) and/or matrix refinement to reduce the computational complexity of solving the linear system of image intensity derivatives in least-squares (LS) OFE. Simulation results show that CBTF and spatially recursive OFE offer improved computational efficiency compared to conventional approaches, with higher or similar performance.
In the second part of the dissertation, we propose a new algorithm for video coding applications that improves motion estimation and compensation performance in the wavelet domain. The new algorithm performs wavelet-based multi-resolution motion estimation (MRME) using temporal aliasing detection (TAD) to enhance rate-distortion (RD) performance under temporal aliasing noise. This technique gives competitive or better RD performance compared to conventional MRME and MRME with motion vector prediction through median filtering.
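The baseline these contributions build on is the standard gradient-based least-squares OFE solve (Lucas-Kanade style), sketched below with a Gaussian prefilter. The thesis's CBTF prefilter and recursive/refined solvers are cheaper alternatives to these steps whose details are not reproduced here; the function names and window size are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ls_optical_flow(frame0, frame1, y, x, half=7, sigma=1.5):
    """Estimate the (vx, vy) motion at pixel (y, x) by solving the 2x2
    least-squares system built from spatiotemporal intensity derivatives
    over a (2*half+1)^2 window."""
    f0 = gaussian_filter(frame0.astype(float), sigma)   # prefiltering step
    f1 = gaussian_filter(frame1.astype(float), sigma)
    Iy, Ix = np.gradient(f0)                            # spatial derivatives
    It = f1 - f0                                        # temporal derivative
    sl = (slice(y - half, y + half + 1), slice(x - half, x + half + 1))
    ix, iy, it = Ix[sl].ravel(), Iy[sl].ravel(), It[sl].ravel()
    A = np.array([[ix @ ix, ix @ iy],
                  [ix @ iy, iy @ iy]])                  # normal equations of the LS problem
    b = -np.array([ix @ it, iy @ it])
    return np.linalg.solve(A, b)                        # (vx, vy)

# Synthetic check: a Gaussian blob shifted by (dx, dy) = (1, 0.5) pixels.
yy, xx = np.mgrid[0:64, 0:64]
blob = lambda cy, cx: np.exp(-((yy - cy)**2 + (xx - cx)**2) / 20.0)
print(ls_optical_flow(blob(32, 32), blob(32.5, 33), y=32, x=32))   # ~ [1.0, 0.5]
```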
|
190 |
On the nonnegative least squares. Santiago, Claudio Prata, 19 August 2009
In this document, we study the nonnegative least squares primal-dual method for solving linear programming problems. In particular, we investigate connections between this primal-dual method and the classical Hungarian method for the assignment problem. Firstly, we devise a fast procedure for computing the unrestricted least squares solution of a bipartite matching problem by exploiting the special structure of the incidence matrix of a bipartite graph. Moreover, we explain how to extract a solution for the cardinality matching problem from the nonnegative least squares solution. We also give an efficient procedure for solving the cardinality matching problem on general graphs using the nonnegative least squares approach. Next, we look into some theoretical results concerning the minimization of p-norms and separable differentiable convex functions, subject to linear constraints described by node-arc incidence matrices for graphs. Our main result is the reduction of the assignment problem to a single nonnegative least squares problem. This means that the primal-dual approach can be made to converge in one step for the assignment problem. This method does not reduce the primal-dual approach to one step for general linear programming problems, but it appears to give a good starting dual feasible point for the general problem.
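The abstract does not give the exact single-NNLS formulation, so the sketch below is only a small experiment in the same spirit: it solves a nonnegative least squares problem built from the node-edge incidence matrix of a random bipartite graph, greedily extracts a matching from the support of the solution, and compares its size against a standard maximum-matching routine. The formulation, thresholds, and names here are assumptions, not the dissertation's reduction.

```python
import numpy as np
from scipy.optimize import nnls
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import maximum_bipartite_matching

rng = np.random.default_rng(4)
n_left, n_right, n_edges = 8, 8, 20
edges = list({(rng.integers(n_left), rng.integers(n_right)) for _ in range(n_edges)})

# Node-edge incidence matrix: one row per node, one column per edge.
A = np.zeros((n_left + n_right, len(edges)))
for j, (u, v) in enumerate(edges):
    A[u, j] = 1.0
    A[n_left + v, j] = 1.0

# NNLS relaxation: push each node's total incident edge weight toward 1.
x, _ = nnls(A, np.ones(n_left + n_right))

# Greedily read a matching off the support of x (largest weights first).
matched_l, matched_r, matching = set(), set(), []
for j in np.argsort(-x):
    u, v = edges[j]
    if x[j] > 1e-9 and u not in matched_l and v not in matched_r:
        matching.append((u, v))
        matched_l.add(u); matched_r.add(v)

# Reference maximum matching on the same graph, for comparison.
bi = np.zeros((n_left, n_right), dtype=int)
for u, v in edges:
    bi[u, v] = 1
ref = maximum_bipartite_matching(csr_matrix(bi), perm_type='column')
print(len(matching), np.sum(ref >= 0))
```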
|