231. Estimation of Aerodynamic Parameters in Real-Time: Implementation and Comparison of a Sequential Frequency Domain Method and a Batch Method. Nyman, Lina. January 2016.
The flight testing and evaluation of collected data must be efficient during intensive flight-test programs such as those conducted during the development of new aircraft. The aim of this thesis has thus been to produce a first version of an aerodynamic-derivative estimation program to be used during real-time flight tests. The program is to give a first estimate of the aerodynamic derivatives and to check the quality of the collected data, thus serving as decision support during tests. The work performed includes processing the data for use in the computations, comparing a batch and a sequential estimation method on real-time data, and programming a user interface. All computations and programming have been done in Matlab. The estimation methods that have been compared are both built on transforming data to the frequency domain using a chirp z-transform and then estimating the aerodynamic derivatives using complex least squares with instrumental variables. The sequential frequency-domain method produces estimates at a given interval, while the batch method performs one estimation at the end of the maneuver. Both methods produce identical results; the continuous updates of the sequential method were, however, found to be better suited for a real-time application than the single estimate of the batch method. The telemetric data received from the aircraft must be synchronized to a common frequency of 60 Hz, missing samples of the data stream must be linearly interpolated, and different units of measured parameters must be corrected in order to perform these estimations in the real-time test environment.
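The frequency-domain estimation described above can be sketched compactly. The following is a minimal NumPy illustration, not the thesis's Matlab program: it evaluates the Fourier transform of measured signals on an arbitrary frequency grid (the role the chirp z-transform plays) and solves a complex least-squares problem for the parameters of a hypothetical first-order model. The model, function names and parameter values are illustrative assumptions, and the thesis's instrumental variables are omitted.

```python
import numpy as np

def dtft_at(x, t, freqs_hz):
    # Evaluate the Fourier transform of the sampled signal x(t) on an
    # arbitrary frequency grid -- the role the chirp z-transform plays.
    dt = t[1] - t[0]
    w = 2 * np.pi * np.asarray(freqs_hz)
    return (x[:, None] * np.exp(-1j * np.outer(t, w))).sum(axis=0) * dt

def freq_domain_ls(y, regressors, t, freqs_hz):
    # Complex least squares for a model dy/dt = theta_1*u_1 + ...:
    # in the frequency domain, jw*Y(w) = sum_i theta_i * U_i(w).
    w = 2 * np.pi * np.asarray(freqs_hz)
    lhs = 1j * w * dtft_at(y, t, freqs_hz)   # transform of dy/dt
    X = np.column_stack([dtft_at(u, t, freqs_hz) for u in regressors])
    # Stack real and imaginary parts so the real-valued parameters
    # solve an ordinary real least-squares problem.
    A = np.vstack([X.real, X.imag])
    b = np.concatenate([lhs.real, lhs.imag])
    theta, *_ = np.linalg.lstsq(A, b, rcond=None)
    return theta

# Hypothetical check on dy/dt = a*y + b*u, sampled at 60 Hz as in the
# thesis. The input is switched off early so the response decays and the
# end effects in the derivative theorem jw*Y stay small.
t = np.arange(0, 10, 1 / 60)
u = np.sin(2 * np.pi * 0.5 * t) * (t < 8)
a_true, b_true = -2.0, 1.5
y = np.zeros_like(t)
for k in range(len(t) - 1):                  # Euler integration
    y[k + 1] = y[k] + (t[1] - t[0]) * (a_true * y[k] + b_true * u[k])
print(freq_domain_ls(y, [y, u], t, np.linspace(0.1, 2.0, 20)))
# should land near (a_true, b_true) up to discretization/end effects
```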
232. CHEMOMETRIC ANALYSIS OF COMPREHENSIVE TWO-DIMENSIONAL LIQUID CHROMATOGRAPHIC-DIODE ARRAY DETECTION DATA: PEAK RESOLUTION, QUANTIFICATION AND RAPID SCREENING. Bailey, Hope P. 09 October 2012.
This research project sought to explore, compare and develop chemometric methods with the goal of resolving chromatographically overlapped peaks through the use of the spectral information contained in the four-way data sets produced by comprehensive two-dimensional liquid chromatography with diode array detection (LC × LC-DAD). A chemometric method combining iterative key set factor analysis (IKSFA) and multivariate curve resolution-alternating least squares (MCR-ALS) was developed. In the section of urine data analyzed, over 50 peaks were found: 18 visually observable and 32 additional compounds found only after application of the chemometric method. Upon successful chemometric resolution of chromatographically overlapped peaks, accurate and precise quantification was then necessary. Of the quantification methods compared, the manual baseline method was determined to offer the best precision. Of the 50 peaks found in the urine analysis, 34 were successfully quantified using the manual baseline method, with percent relative standard deviations ranging from 0.09 to 16. The accuracy of quantification was then investigated through the analysis of wastewater treatment plant effluent (WWTPE) samples. The chemometrically determined concentration of the unknown phenytoin sample was found not to differ significantly from the result obtained by the LC-MS/MS reference method, and the precision of the IKSFA-ALS method was better than that of the LC-MS/MS analysis. Chromatographic factors (data complexity, large dynamic range, retention time shifting, chromatographic and spectral peak overlap, and background removal) were all found to affect the quantification results. The last part of this work focused on rapid screening methods capable of locating peaks that exhibited significant differences in concentration between samples. The aim here was to reduce the amount of data to be resolved and quantified to only those peaks of interest, thereby reducing the time required to analyze large, complex samples by eliminating the need to first quantify all peaks in every sample. Both the similarity index (SI) method and the Fisher ratio (FR) method were found to fulfill this requirement as rapid means of screening fifteen wine samples.
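The MCR-ALS step at the core of the method can be illustrated on a two-way slice of such data. The sketch below is a hedged NumPy illustration, not the dissertation's implementation: it alternates nonnegativity-constrained least-squares updates of concentration profiles and spectra, with a random initialization standing in for the IKSFA key-set estimates; the simulated two-component example and all names are assumptions.

```python
import numpy as np

def mcr_als(D, k, n_iter=200, tol=1e-10, rng=None):
    # Multivariate curve resolution - alternating least squares:
    # factor D (elution points x wavelengths) as D ~ C @ S.T, where C
    # holds concentration profiles and S holds spectra. Nonnegativity
    # is imposed by clipping after each unconstrained LS step. Note
    # MCR solutions carry the usual scale/permutation ambiguity.
    rng = rng or np.random.default_rng(0)
    # Random nonnegative initialization; the thesis instead seeds ALS
    # with IKSFA estimates of the pure spectra.
    S = np.abs(rng.standard_normal((D.shape[1], k)))
    prev = np.inf
    for _ in range(n_iter):
        C = np.clip(np.linalg.lstsq(S, D.T, rcond=None)[0].T, 0, None)
        S = np.clip(np.linalg.lstsq(C, D, rcond=None)[0].T, 0, None)
        err = np.linalg.norm(D - C @ S.T)
        if prev - err < tol:
            break
        prev = err
    return C, S

# Two heavily overlapped elution peaks with distinct spectra:
t = np.linspace(0, 1, 120); wl = np.linspace(0, 1, 80)
C_true = np.stack([np.exp(-((t - 0.45) / 0.08) ** 2),
                   np.exp(-((t - 0.55) / 0.08) ** 2)], axis=1)
S_true = np.stack([np.exp(-((wl - 0.3) / 0.1) ** 2),
                   np.exp(-((wl - 0.6) / 0.1) ** 2)], axis=1)
D = C_true @ S_true.T + 1e-3 * np.random.default_rng(1).standard_normal((120, 80))
C, S = mcr_als(D, k=2)   # recovers the profiles up to scale/order
```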
233. THE STRATEGIC ASSOCIATION BETWEEN ENTERPRISE CONTENT MANAGEMENT AND DECISION SUPPORT. Alalwan, Jaffar. 03 April 2012.
To deal with increasing information overload and with the complexity of structured and unstructured data, many organizations have implemented enterprise content management (ECM) systems. Published research on ECM is so far very limited, and reports on ECM implementations were scarce until recently (Tyrväinen et al. 2006). The available ECM literature shows that many organizations using ECM focus on operational benefits, while strategic decision-making benefits are rarely considered; strategic capabilities of ECM, such as decision-making capabilities, are not fully investigated in the current literature. In addition, although several published studies discuss ECM strategy, the literature lacks a strategic management framework (SMF) that links strategies, business objectives, and performance management. Such a framework would seem essential to effectively manage ECM strategy formulation, implementation, and performance evaluation (Kaplan and Norton 1996; Ittner and Larcker 1997). The absence of an appropriate strategic management framework keeps organizations from effective strategic planning, implementation, and evaluation, which affects overall organizational capabilities. Therefore, the objective of this dissertation is to determine the decision support capabilities of ECM and to specify how ECM strategies can be formulated, implemented, and evaluated in order to fully utilize ECM's strategic capabilities. Structural equation modeling as well as design science approaches will be adopted to achieve the dissertation objectives.
234. Direct L2 Support Vector Machine. Zigic, Ljiljana. 01 January 2016.
This dissertation introduces a novel model for solving the L2 support vector machine, dubbed Direct L2 Support Vector Machine (DL2 SVM). DL2 SVM is a new classification model that transforms the SVM's underlying quadratic programming problem into a system of linear equations with nonnegativity constraints: the system matrix is symmetric positive definite and the solution vector is constrained to be nonnegative.
Furthermore, this dissertation introduces a novel algorithm, dubbed Non-Negative Iterative Single Data Algorithm (NN ISDA), which solves DL2 SVM's constrained system of equations. This solver shows significant speedup compared to several other state-of-the-art algorithms, and the training-time improvement comes at no cost in accuracy. All the experiments that support this claim were conducted on various datasets within a strict double cross-validation scheme. DL2 SVM solved with NN ISDA has faster training times on both medium and large datasets.
In addition to the comprehensive DL2 SVM model, we introduce and derive three of its variants. Three different solvers for DL2's system of linear equations with nonnegativity constraints were implemented, presented and compared in this dissertation.
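The dissertation's NN ISDA solver is not reproduced here, but a projected (clipped) Gauss-Seidel iteration conveys the single-data flavor of solving a symmetric positive definite system under nonnegativity constraints. The following is a minimal Python sketch under that assumption, not the dissertation's exact algorithm; the KKT check at the end is illustrative.

```python
import numpy as np

def nn_gauss_seidel(K, b, n_epochs=200, tol=1e-8):
    # Projected Gauss-Seidel / coordinate descent for K a = b, a >= 0,
    # with K symmetric positive definite: one coordinate is updated per
    # step and clipped at zero, in the single-data spirit of NN ISDA.
    a = np.zeros(len(b))
    for _ in range(n_epochs):
        delta = 0.0
        for i in range(len(b)):
            r_i = b[i] - K[i] @ a            # residual of equation i
            new = max(0.0, a[i] + r_i / K[i, i])
            delta = max(delta, abs(new - a[i]))
            a[i] = new
        if delta < tol:
            break
    return a

# Hypothetical check on a random SPD system: at the solution, either
# a_i = 0 or the i-th equation holds exactly (complementarity).
rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6)); K = M @ M.T + 6 * np.eye(6)
b = rng.standard_normal(6)
a = nn_gauss_seidel(K, b)
print(a.min() >= 0, np.allclose(np.minimum(a, K @ a - b), 0, atol=1e-6))
```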
235. Development of novel electrical power distribution system state estimation and meter placement algorithms suitable for parallel processing. Nusrat, Nazia. January 2015.
The increasing penetration of distributed generation, responsive loads and emerging smart-metering technologies will continue the transformation of distribution systems from passive to active network conditions. In such active networks, State Estimation (SE) tools will be essential in order to enable extensive monitoring and enhanced control technologies. For future distribution management systems, electrical power distribution system SE must be developed in a scalable manner so as to accommodate networks from small to massive size, and it must remain operable with limited real-time measurements and within a restricted time frame. Furthermore, a significant phase of new sensor deployment is inevitable to enable distribution system SE, since present-day distribution networks lack the required level of measurement and instrumentation. In this context, the research presented in this thesis investigates five SE optimization solution methods, with case studies reflecting expected scenarios of future distribution networks, to determine their suitability. Hachtel's Augmented Matrix method is proposed and developed as a potential SE optimizer for distribution systems because of its performance characteristics with regard to accuracy and convergence. The Differential Evolution Algorithm (DEA) and the Overlapping Zone Approach (OZA) are investigated as routes to scalability of SE tools, following which a network-division-based OZA is proposed and developed. An OZA requiring additional measurements is also proposed to provide a feasible solution for voltage estimation at reduced computation cost. Recognizing the need to deploy additional measurements to enable distribution system SE, a novel meter placement algorithm that provides economical and feasible solutions is developed and demonstrated. The algorithm focuses strongly on reducing voltage estimation errors and is capable of reducing the error below a desired threshold with limited measurements. The scalable SE solution and the meter placement algorithm are applied on a multi-processor system in order to examine the effective reduction of computation time. Significant improvement in computation time is observed in both cases by dividing the problem into smaller segments; however, finer network division reduces computation time further at the cost of estimation accuracy. Different networks, both idealised (16-, 77-, 356- and 711-node UKGDS) and real (40- and 43-node EG) distribution network data, are used as appropriate to the requirements of the applications throughout this thesis.
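For a linear measurement model z = Hx + e with error covariance R, Hachtel's Augmented Matrix method solves the weighted least-squares problem through an augmented (tableau) system rather than the normal equations, which is the numerical motivation usually cited for it. Below is a minimal NumPy sketch; it is dense and purely illustrative (the practical appeal of the method lies in the sparse, better-conditioned structure), and the example values are assumptions.

```python
import numpy as np

def hachtel_wls(H, R, z):
    # Hachtel's augmented formulation of weighted least squares:
    #   [ R    H ] [ mu ]   [ z ]     with  mu = R^{-1} (z - H x),
    #   [ H^T  0 ] [ x  ] = [ 0 ]
    # Eliminating mu recovers the WLS normal equations
    #   H^T R^{-1} H x = H^T R^{-1} z, without forming H^T R^{-1} H.
    m, n = H.shape
    A = np.block([[R, H], [H.T, np.zeros((n, n))]])
    b = np.concatenate([z, np.zeros(n)])
    return np.linalg.solve(A, b)[m:]        # estimated state x

# Hypothetical linear example: 8 measurements of a 3-element state.
rng = np.random.default_rng(0)
H = rng.standard_normal((8, 3))
x_true = np.array([1.0, 0.98, 1.02])
R = np.diag(np.full(8, 1e-4))               # measurement error covariance
z = H @ x_true + rng.normal(scale=1e-2, size=8)
print(hachtel_wls(H, R, z))                 # close to x_true
```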
236. Autoregresní modely typu NIAR(1) / Near integrated AR(1) models. Onderko, Martin. January 2015.
This thesis first covers basic elements of the theory of stochastic processes, both to make the thesis more self-contained and to introduce key concepts. The autoregressive model AR(1) is defined among the basic linear time series models, and the least-squares estimator of its parameter is introduced. The classical limit theory for this estimator is presented. Furthermore, models whose parameter depends on the number of observations are introduced, and near-integrated NIAR(1) models are defined. The classical limit theory for the least-squares estimator is then complemented by the limit theory for these models. A category of more general models is introduced, and the corresponding results for the AR(1) model are derived from the acquired theory. The thesis treats these questions for NIAR(1) models, and the bootstrap also falls within its scope. The theoretical part of the thesis is supplemented by a practical part consisting of numerical studies.
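As a hedged illustration of the object studied, the following Python sketch simulates a near-integrated AR(1) process with parameter rho_n = 1 + c/n and computes its least-squares estimate; the parameter values are arbitrary assumptions.

```python
import numpy as np

def simulate_niar1(n, c, sigma=1.0, rng=None):
    # Near-integrated AR(1): X_t = rho_n X_{t-1} + e_t, rho_n = 1 + c/n,
    # so the autoregressive parameter drifts toward unity as n grows.
    rng = rng or np.random.default_rng()
    rho = 1.0 + c / n
    x = np.zeros(n + 1)
    for t in range(1, n + 1):
        x[t] = rho * x[t - 1] + rng.normal(scale=sigma)
    return x

def ls_ar1(x):
    # Least-squares estimate: rho_hat = sum(x_t x_{t-1}) / sum(x_{t-1}^2)
    return np.dot(x[1:], x[:-1]) / np.dot(x[:-1], x[:-1])

x = simulate_niar1(n=500, c=-2.0, rng=np.random.default_rng(1))
print(ls_ar1(x))   # roughly rho_n = 1 - 2/500 = 0.996, up to O(1/n) bias
```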
237. The Distribution of Cotton Fiber Length. Belmasrour, Rachid. 05 August 2010.
By testing a fiber beard, certain cotton fiber length parameters can be obtained rapidly. This is the method used by the High Volume Instrument (HVI). This study aims to explore approaches to, and obtain inference on, the length distributions of HVI beard samples in order to develop new methods that can help find the distribution of original fiber lengths and further improve HVI length measurements. First, mathematical functions were sought to describe three different types of length distributions related to the beard method as used in HVI: the lengths of the original fiber population before being picked by the HVI Fibrosampler, the lengths of fibers picked by the HVI Fibrosampler, and the fiber beard's projecting portion that is actually scanned by the HVI. Eight sets of cotton samples with a wide range of fiber lengths were selected and tested on the Advanced Fiber Information System (AFIS). The measured single-fiber length data are used for finding the underlying theoretical length distributions and thus can be considered the population distributions of the cotton samples. In addition, fiber length distributions by number and by weight are discussed separately; in both cases a mixture of two Weibull distributions shows a good fit to the fiber length data. To confirm the findings, Kolmogorov-Smirnov goodness-of-fit tests were conducted. Furthermore, various length parameters, such as Mean Length (ML) and Upper Half Mean Length (UHML), are compared between the original distribution from the experimental data and the fitted distributions. Finally, the obtained fiber length distributions are used in a Partial Least Squares (PLS) regression, in which the distribution of the original fiber length is estimated from the distribution of the projected one.
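A fit of a two-component Weibull mixture of the kind reported can be sketched with SciPy. The following is an illustration on simulated data, not the thesis's analysis of the AFIS measurements; the starting values and bounds are crude assumptions, and mixture likelihoods can have local optima.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min, kstest

def neg_loglik(params, x):
    # Mixture of two Weibulls: p*W(k1, lam1) + (1-p)*W(k2, lam2)
    p, k1, lam1, k2, lam2 = params
    pdf = (p * weibull_min.pdf(x, k1, scale=lam1)
           + (1 - p) * weibull_min.pdf(x, k2, scale=lam2))
    return -np.sum(np.log(pdf + 1e-300))

def fit_weibull_mixture(x):
    # Crude starting point: split the scales at the quartiles.
    x0 = [0.5, 1.5, np.percentile(x, 25), 3.0, np.percentile(x, 75)]
    bounds = [(0.01, 0.99)] + [(0.1, 20), (1e-3, None)] * 2
    res = minimize(neg_loglik, x0, args=(x,), bounds=bounds,
                   method="L-BFGS-B")
    return res.x

# Simulated fiber lengths from two Weibull components (hypothetical):
rng = np.random.default_rng(0)
x = np.concatenate([weibull_min.rvs(1.8, scale=12, size=300, random_state=rng),
                    weibull_min.rvs(4.0, scale=28, size=700, random_state=rng)])
p, k1, lam1, k2, lam2 = fit_weibull_mixture(x)

# Kolmogorov-Smirnov goodness-of-fit check against the fitted mixture CDF,
# mirroring the test used in the thesis:
mix_cdf = lambda t: (p * weibull_min.cdf(t, k1, scale=lam1)
                     + (1 - p) * weibull_min.cdf(t, k2, scale=lam2))
print(kstest(x, mix_cdf))
```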
238. Completely Recursive Least Squares and Its Applications. Bian, Xiaomeng. 02 August 2012.
The matrix-inversion-lemma based recursive least squares (RLS) approach is recursive in form and free of explicit matrix inversion, and it has excellent computational and memory performance in solving the classic least-squares (LS) problem. It is important to generalize RLS to the generalized LS (GLS) problem, and it is also valuable to develop an efficient initialization for any RLS algorithm.
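For reference, the classic matrix-inversion-lemma based RLS recursion that the dissertation builds on can be written in a few lines. This NumPy sketch shows the standard exponentially weighted form, not the unified GLS procedure developed in the dissertation; parameter values in the usage example are assumptions.

```python
import numpy as np

class RLS:
    # Exponentially weighted RLS via the matrix inversion lemma: tracks
    # the w minimizing sum_i lam^(n-i) (d_i - x_i^T w)^2 while updating
    # the inverse correlation matrix P directly, with no explicit
    # matrix inversion at any step.
    def __init__(self, order, lam=0.99, delta=1e2):
        self.w = np.zeros(order)
        self.P = delta * np.eye(order)   # regularized initial P
        self.lam = lam

    def update(self, x, d):
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)     # gain vector
        e = d - self.w @ x               # a priori error
        self.w += k * e
        self.P = (self.P - np.outer(k, Px)) / self.lam
        return e

# Hypothetical system-identification check:
rng = np.random.default_rng(0)
w_true = np.array([0.5, -0.3, 0.2])
rls = RLS(order=3)
for _ in range(2000):
    x = rng.standard_normal(3)
    rls.update(x, w_true @ x + 0.01 * rng.standard_normal())
print(rls.w)   # close to w_true
```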
In Chapter 2, we develop a unified RLS procedure to solve the unconstrained/linear-equality (LE) constrained GLS. We also show that the LE constraint is in essence a set of special error-free observations and further consider the GLS with implicit LE constraint in observations (ILE-constrained GLS).
Chapter 3 treats RLS initialization-related issues, including rank checking, a convenient method to compute the involved matrix inverse/pseudoinverse, and the resolution of underdetermined systems. Based on auxiliary observations, the RLS recursion can start from the first real observation, and possible LE constraints are also imposed recursively. The rank of the system is checked implicitly; if the rank is deficient, a set of refined non-redundant observations is determined instead.
In Chapter 4, based on [Li07], we show that the linear minimum mean square error (LMMSE) estimator, as well as the optimal Kalman filter (KF) accounting for various correlations, can be calculated by solving an equivalent GLS using the unified RLS.
In Chapters 5 and 6, an approach to joint state-and-parameter estimation (JSPE) in power systems monitored by synchrophasors is adopted, in which the original nonlinear parameter problem is reformulated as two loosely coupled linear subproblems: state tracking and parameter tracking. Chapter 5 deals with state tracking, which determines the voltages in JSPE; the dynamic behavior of voltages under possible abrupt changes is studied. Chapter 6 focuses on the subproblem of parameter tracking in JSPE, where a new prediction model for parameters with moving means is introduced. Adaptive filters are developed for the two subproblems, both based on the optimal KF accounting for various correlations. Simulations indicate that the proposed approach yields accurate parameter estimates and improves the accuracy of state estimation compared with existing methods.
239. Quadrados latinos balanceados para a vizinhança - planejamento e análise de dados sensoriais por meio da ADQ / Latin squares balanced for the neighborhood - planning and analysis of sensory data obtained by the QDA. Sanches, Paula da Fonte. 25 January 2010.
Sensory evaluations are taking an increasingly important position within centers that produce and sell food and other products. In these, the ultimate goal of the work in development, production and marketing is the consumer, whose evaluation is based mainly on the acceptability and cost of the products. In such experiments, a series of treatments is given to each panelist, and a major problem is that the response depends not only on the treatment currently applied but also on the one that preceded it, the so-called residual (carryover) effects. For a better-quality product, increasingly rigorous analyses are required. A method often used is descriptive analysis, which aims to describe and evaluate the intensity of the sensory attributes of the products under evaluation, guiding eventual modifications of their characteristics in order to meet consumer demands. When performed by trained panelists with the ability to discriminate, it is called quantitative descriptive analysis (QDA). Given the limitations on the number of successive tasting trials and the frequent presence of residual effects, the planning and analysis of QDA experiments are of fundamental importance. To address this problem, Williams (1949) presented Latin square designs balanced for the neighborhood which, in general, ensure that the residual effects of the treatments do not influence the comparison of treatment effects. Appropriate methods of construction, randomization and analysis of such designs using QDA are described and adapted to the problem. The results of a sensory analysis experiment with different cachaças, planned and conducted by the author, are also presented, analyzed and discussed. From the results obtained, it is concluded that, for planning quantitative descriptive analysis (QDA) trials, Latin squares balanced for the neighborhood, with the last column repeated, are an important alternative.
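The cyclic construction of Williams's balanced designs can be sketched directly. The Python illustration below builds the basic square for n treatments (for even n, each ordered pair of treatments occurs exactly once in adjacent periods; for odd n, the mirror-image square is also needed); the extra-period variant with the last column repeated, which the thesis recommends, would simply append a copy of the final column.

```python
def williams_square(n):
    # Williams (1949) construction: the first sequence interleaves
    # 0, 1, n-1, 2, n-2, ... and the remaining rows (panelists) are its
    # cyclic shifts; columns are the successive tasting positions.
    base, lo, hi = [0], 1, n - 1
    for j in range(1, n):
        if j % 2 == 1:
            base.append(lo); lo += 1
        else:
            base.append(hi); hi -= 1
    return [[(b + i) % n for b in base] for i in range(n)]

for row in williams_square(4):
    print(row)
# [0, 1, 3, 2]
# [1, 2, 0, 3]
# [2, 3, 1, 0]
# [3, 0, 2, 1]
# Every treatment appears once per column, and each ordered pair of
# treatments occurs exactly once in adjacent positions (carryover balance).
```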
240. On the regularization of the recursive least squares algorithm / Sobre a regularização do algoritmo dos mínimos quadrados recursivos. Tsakiris, Manolis. 25 June 2010.
This thesis is concerned with the regularization of the Recursive Least-Squares (RLS) algorithm. In the first part of the thesis, a novel regularized exponentially weighted array RLS algorithm is developed, which circumvents the problem of fading regularization that is inherent to the standard regularized exponentially weighted RLS formulation, while allowing the employment of generic time-varying regularization matrices. The standard equations are directly perturbed via a chosen regularization matrix; the resulting recursions are then extended to the array form. The price paid is an increase in computational complexity, which becomes cubic. The superiority of the algorithm over alternative algorithms is demonstrated via simulations in the context of adaptive beamforming, where low filter orders are employed, so that complexity is not an issue. In the second part of the thesis, an alternative criterion is motivated and proposed for the dynamic regulation of regularization in the standard RLS algorithm. The regularization is implicitly achieved via dithering of the input signal. The proposed criterion is of general applicability and aims at achieving a balance between the accuracy of the numerical solution of a perturbed linear system of equations and its distance from the analytical solution of the original system, for a given computational precision. Simulations show that the proposed criterion can be used effectively to compensate for large condition numbers, low finite precision and unnecessarily large regularization values.
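The effect of dithering-based regularization can be illustrated in a few lines: adding zero-mean white noise of variance var_d to the input regressors drives the sample correlation matrix toward R + var_d·I, improving its condition number at the price of a ridge-like bias in the solution, which is exactly the balance the proposed criterion regulates. The sketch below is a hedged NumPy illustration with arbitrary values, not the thesis's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 2000, 4
# Ill-conditioned input: two regressor directions are much weaker.
X = rng.standard_normal((n, p)) @ np.diag([1.0, 1.0, 1e-3, 1e-3])
w_true = np.array([1.0, -0.5, 0.25, 0.1])
d = X @ w_true + 1e-3 * rng.standard_normal(n)

var_d = 1e-4                                      # dither variance (assumed)
Xd = X + np.sqrt(var_d) * rng.standard_normal(X.shape)

R, Rd = X.T @ X / n, Xd.T @ Xd / n
print(np.linalg.cond(R), np.linalg.cond(Rd))      # conditioning improves
w_hat = np.linalg.solve(Rd, Xd.T @ d / n)
# w_hat ~ (R + var_d*I)^{-1} R w_true: better conditioned, but the weak
# directions are shrunk -- larger dither means more bias, which is the
# accuracy-versus-distance trade-off the criterion manages.
print(w_hat)
```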