191
The Distribution of Cotton Fiber Length. Belmasrour, Rachid, 05 August 2010
By testing a fiber beard, certain cotton fiber length parameters can be obtained rapidly. This is the method used by the High Volume Instrument (HVI). This study aims to explore approaches to inferring the length distributions of HVI beard samples, in order to develop new methods that can help us find the distribution of original fiber lengths and further improve HVI length measurements. First, mathematical functions were sought to describe three types of length distributions related to the beard method as used in HVI: the lengths of the original fiber population before it is picked by the HVI Fibrosampler, the lengths of fibers picked by the HVI Fibrosampler, and the beard's projecting portion that is actually scanned by HVI. Eight sets of cotton samples with a wide range of fiber lengths were selected and tested on the Advanced Fiber Information System (AFIS). The measured single-fiber length data were used to find the underlying theoretical length distributions, which can thus be considered the population distributions of the cotton samples. Fiber length distributions by number and by weight are discussed separately; in both cases a mixture of two Weibull distributions fits the fiber length data well. To confirm the findings, Kolmogorov-Smirnov goodness-of-fit tests were conducted. Furthermore, length parameters such as Mean Length (ML) and Upper Half Mean Length (UHML) are compared between the original distribution obtained from the experimental data and the fitted distributions. Finally, Partial Least Squares (PLS) regression is used to estimate the distribution of the original fiber length from the distribution of the projected one.
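As a rough, hedged illustration of the fitting procedure this abstract describes, the sketch below fits a two-component Weibull mixture to length data by maximum likelihood and checks it with a Kolmogorov-Smirnov test. It is not the author's code; the starting values, bounds, and synthetic data are assumptions.

```python
# A minimal sketch (not the author's code): fit a two-component Weibull
# mixture by maximum likelihood, then run a Kolmogorov-Smirnov test.
import numpy as np
from scipy import optimize, stats

def mixture_pdf(x, w, k1, l1, k2, l2):
    # PDF of a two-component Weibull mixture with weight w on component 1
    return (w * stats.weibull_min.pdf(x, k1, scale=l1)
            + (1.0 - w) * stats.weibull_min.pdf(x, k2, scale=l2))

def mixture_cdf(x, w, k1, l1, k2, l2):
    return (w * stats.weibull_min.cdf(x, k1, scale=l1)
            + (1.0 - w) * stats.weibull_min.cdf(x, k2, scale=l2))

def fit_weibull_mixture(lengths):
    # Maximum-likelihood fit of (w, k1, l1, k2, l2); start values are rough guesses
    def nll(p):
        return -np.sum(np.log(np.maximum(mixture_pdf(lengths, *p), 1e-300)))
    x0 = [0.5, 1.5, np.median(lengths), 3.0, np.mean(lengths)]
    bounds = [(0.01, 0.99), (0.1, 20.0), (1e-3, None), (0.1, 20.0), (1e-3, None)]
    return optimize.minimize(nll, x0, bounds=bounds, method="L-BFGS-B").x

# Synthetic stand-in for single-fiber length data; real AFIS data would be used
rng = np.random.default_rng(0)
lengths = np.concatenate([0.4 * rng.weibull(1.8, 600), 1.1 * rng.weibull(4.0, 1400)])
params = fit_weibull_mixture(lengths)
ks = stats.kstest(lengths, lambda x: mixture_cdf(x, *params))
print(params, ks.statistic, ks.pvalue)
```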
192
Completely Recursive Least Squares and Its Applications. Bian, Xiaomeng, 02 August 2012
The matrix-inversion-lemma-based recursive least squares (RLS) approach has a recursive form, is free of matrix inversion, and offers excellent computational and memory performance in solving the classic least-squares (LS) problem. It is important to generalize RLS to the generalized LS (GLS) problem, and it is also valuable to develop an efficient initialization for any RLS algorithm.
In Chapter 2, we develop a unified RLS procedure to solve the unconstrained and the linear-equality (LE) constrained GLS problems. We also show that an LE constraint is in essence a set of special error-free observations, and we further consider GLS with LE constraints implicit in the observations (ILE-constrained GLS).
Chapter 3 treats RLS initialization issues, including rank checking, a convenient method to compute the required matrix inverse or pseudoinverse, and the resolution of underdetermined systems. Based on auxiliary observations, the RLS recursion can start from the first real observation, and any LE constraints are imposed recursively. The rank of the system is checked implicitly; if it is deficient, a set of refined, non-redundant observations is determined instead.
In Chapter 4, based on [Li07], we show that the linear minimum mean square error (LMMSE) estimator, as well as the optimal Kalman filter (KF) accounting for various correlations, can be computed by solving an equivalent GLS problem with the unified RLS.
In Chapters 5 and 6, an approach to joint state-and-parameter estimation (JSPE) in power systems monitored by synchrophasors is adopted, in which the original nonlinear parameter problem is reformulated as two loosely coupled linear subproblems: state tracking and parameter tracking. Chapter 5 deals with state tracking, which determines the voltages in JSPE, and studies the dynamic behavior of voltages under possible abrupt changes. Chapter 6 focuses on the parameter-tracking subproblem, introducing a new prediction model for parameters with moving means. Adaptive filters, both based on the optimal KF accounting for various correlations, are developed for the two subproblems. Simulations indicate that, compared with existing methods, the proposed approach yields accurate parameter estimates and improves the accuracy of the state estimation.
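For context, the matrix-inversion-lemma recursion that standard RLS builds on can be sketched as follows. This is the textbook algorithm, not the thesis's unified GLS/ILE procedure; the variable names and initialization constant are assumptions.

```python
# Textbook RLS update via the matrix inversion lemma: a minimal sketch for
# context, not the unified GLS/ILE procedure developed in the thesis.
import numpy as np

class RLS:
    def __init__(self, n, delta=1e3):
        self.w = np.zeros(n)          # parameter estimate
        self.P = delta * np.eye(n)    # inverse correlation matrix (initialization)

    def update(self, x, d):
        # One recursion for observation d = x @ w_true + noise
        Px = self.P @ x
        k = Px / (1.0 + x @ Px)       # gain vector (unit-weight classic LS)
        e = d - x @ self.w            # a priori error
        self.w += k * e
        self.P -= np.outer(k, Px)     # inversion-lemma update, no matrix inverse
        return e

# Usage: identify a 3-parameter linear system from noisy observations
rng = np.random.default_rng(1)
w_true = np.array([0.5, -1.0, 2.0])
est = RLS(3)
for _ in range(500):
    x = rng.standard_normal(3)
    est.update(x, x @ w_true + 0.01 * rng.standard_normal())
print(est.w)  # should approach w_true
```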
193
On the regularization of the recursive least squares algorithm / Sobre a regularização do algoritmo dos mínimos quadrados recursivos. Tsakiris, Manolis, 25 June 2010
This thesis is concerned with the regularization of the Recursive Least-Squares (RLS) algorithm. In the first part of the thesis, a novel regularized exponentially weighted array RLS algorithm is developed, which circumvents the problem of fading regularization inherent to the standard regularized exponentially weighted RLS formulation, while allowing the use of generic time-varying regularization matrices. The standard equations are directly perturbed via a chosen regularization matrix; the resulting recursions are then extended to array form. The price paid is an increase in computational complexity, which becomes cubic. The superiority of the algorithm over alternatives is demonstrated via simulations in the context of adaptive beamforming, where low filter orders are employed, so that complexity is not an issue. In the second part of the thesis, an alternative criterion is motivated and proposed for the dynamic adjustment of regularization in the standard RLS algorithm. The regularization is implicitly achieved via dithering of the input signal. The proposed criterion is of general applicability and aims at balancing the accuracy of the numerical solution of a perturbed linear system of equations against its distance from the analytical solution of the original system, for a given computational precision. Simulations show that the proposed criterion can effectively compensate for large condition numbers, small finite precision, and unnecessarily large regularization values.
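The dithering idea can be illustrated with a small sketch: zero-mean white noise of variance sigma^2 added to the regressors inflates the Gram matrix by about n·sigma^2·I in expectation, which acts on average like a ridge (Tikhonov) term. This is an assumption-laden demo; the thesis's criterion for selecting the dither level is not reproduced here.

```python
# Illustration of regularization via input dithering: white noise of variance
# sigma2 on the regressors inflates X'X by roughly n*sigma2*I, acting on
# average like a ridge term. A demo only, not the thesis's selection criterion.
import numpy as np

rng = np.random.default_rng(0)
n, p, sigma2 = 5000, 4, 0.05

# Ill-conditioned regressors: two nearly collinear columns
base = rng.standard_normal((n, 1))
X = np.hstack([base, base + 1e-4 * rng.standard_normal((n, 1)),
               rng.standard_normal((n, p - 2))])
d = X @ np.array([1.0, -1.0, 0.5, 2.0]) + 0.01 * rng.standard_normal(n)

print(np.linalg.cond(X.T @ X))            # huge condition number

Xd = X + np.sqrt(sigma2) * rng.standard_normal(X.shape)   # dithered input
print(np.linalg.cond(Xd.T @ Xd))          # much smaller

w_dither = np.linalg.solve(Xd.T @ Xd, Xd.T @ d)
w_ridge = np.linalg.solve(X.T @ X + n * sigma2 * np.eye(p), X.T @ d)
print(w_dither, w_ridge)                  # similar, well-behaved solutions
```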
194
The Least Squares Monte Carlo method and basket options: a quantitative study / Least Squares Monte Carlo-metoden & korgoptioner: En kvantitativ studie. Sandin, Måns, January 2019
In the banking and insurance industry, there is a need for forecasts and risk measures connected to financial instruments. To create the price distributions on which such risk measures are based, nested simulation is sometimes used: a large number of outer scenarios is simulated for some asset underlying a financial instrument, by simulating prices over a given time period, which defines the time horizon of the price distribution. From each outer scenario, a number of inner scenarios is then simulated and used to price the financial instrument in that outer scenario. A common method for pricing the outer scenarios is the Monte Carlo method, which requires a large number of inner scenarios for the pricing to be accurate; this makes the method costly in time and computing power. The Least Squares Monte Carlo method is an alternative that uses least-squares regression to perform the pricing with a smaller number of inner scenarios. A regression function is fitted to the values of the outer scenarios and then used to revalue them, reducing the errors that a smaller number of random draws would otherwise cause. The regression function can also be used to price points outside those used for fitting, making it reusable in similar computations. This thesis examines how well the Least Squares Monte Carlo method describes the price distribution of basket options, i.e. options on several underlying assets. Tests are made for different parameter values, with emphasis on the effect of the time length of the outer scenarios and on the accuracy in the tails of the distribution. The results are somewhat hard to analyze due to extreme values, but they show that the method has difficulty pricing longer outer scenarios, possibly because the regression function has trouble fitting to, and valuing, more dispersed price distributions. The method also performed worse in the lower part of the distribution, a weakness it shares with standard Monte Carlo. More research is needed to ascertain the effect of other regression functions.
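A minimal sketch of the nested simulation and regression step described above, for a European basket call under geometric Brownian motion with a quadratic polynomial basis. The dynamics, basis, and parameter values are illustrative assumptions, not the thesis's setup.

```python
# Minimal LSMC sketch: outer scenarios to a risk horizon, a few inner paths
# each, then a least-squares regression smooths the noisy inner-scenario
# prices. GBM dynamics, a European basket call, and the quadratic basis are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)
n_outer, n_inner, n_assets = 2000, 10, 3
s0, r, vol, horizon, maturity, strike = 100.0, 0.02, 0.2, 1.0, 2.0, 100.0

def gbm_step(s, dt, size):
    z = rng.standard_normal(size)
    return s * np.exp((r - 0.5 * vol**2) * dt + vol * np.sqrt(dt) * z)

# Outer scenarios: asset values at the risk horizon
outer = gbm_step(s0, horizon, (n_outer, n_assets))

# Inner scenarios: crude Monte Carlo price of the basket call per outer scenario
tau = maturity - horizon
inner = gbm_step(outer[:, None, :], tau, (n_outer, n_inner, n_assets))
payoff = np.maximum(inner.mean(axis=2) - strike, 0.0)     # basket = mean of assets
noisy_price = np.exp(-r * tau) * payoff.mean(axis=1)      # few inner paths -> noisy

# Least-squares step: regress noisy prices on a quadratic basis of the basket level
basket = outer.mean(axis=1)
basis = np.column_stack([np.ones_like(basket), basket, basket**2])
coef, *_ = np.linalg.lstsq(basis, noisy_price, rcond=None)
smoothed_price = basis @ coef                             # LSMC revaluation

# The smoothed values approximate the horizon price distribution,
# e.g. for a lower-tail risk measure:
print(np.quantile(smoothed_price, 0.05))
```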
195
Split algorithms for LMS adaptive systems. Ho King Choi, January 1991
Thesis (Ph.D.)--Chinese University of Hong Kong, 1991. Includes bibliographical references. Contents:
1. Introduction
   1.1 Adaptive Filter and Adaptive System
   1.2 Applications of Adaptive Filter (1.2.1 System Identification; 1.2.2 Noise Cancellation; 1.2.3 Echo Cancellation; 1.2.4 Speech Processing)
   1.3 Chapter Summary
2. Adaptive Filter Structures and Algorithms
   2.1 Filter Structures for Adaptive Filtering
   2.2 Adaptation Algorithms (2.2.1 The LMS Adaptation Algorithm, with 2.2.1.1 Convergence Analysis and 2.2.1.2 Steady State Performance; 2.2.2 The RLS Adaptation Algorithm)
   2.3 Chapter Summary
3. Parallel Split Adaptive System
   3.1 Parallel Form Adaptive Filter
   3.2 Joint Process Estimation with a Split-Path Adaptive Filter (3.2.1 The New Adaptive System Identification Configuration; 3.2.2 Analysis of the Split-Path System Modeling Structure; 3.2.3 Comparison with the Non-Split Configuration; 3.2.4 Some Notes on the Even Filter Order Case; 3.2.5 Simulation Results)
   3.3 Autoregressive Modeling with a Split-Path Adaptive Filter (3.3.1 The Split-Path Adaptive Filter for AR Modeling; 3.3.2 Analysis of the Split-Path AR Modeling Structure; 3.3.3 Comparison with the Traditional AR Modeling System; 3.3.4 Selection of Step Sizes; 3.3.5 Some Notes on the Odd Filter Order Case; 3.3.6 Simulation Results; 3.3.7 Application to Noise Cancellation)
   3.4 Chapter Summary
4. Serial Split Adaptive System
   4.1 Serial Form Adaptive Filter
   4.2 Time Delay Estimation with a Serial Split Adaptive Filter (4.2.1 Adaptive TDE; 4.2.2 Split Filter Approach to Adaptive TDE; 4.2.3 Analysis of the New TDE System, with 4.2.3.1 Least-Mean-Square Solution and 4.2.3.2 Adaptation Algorithm and Performance Evaluation; 4.2.4 Comparison with the Traditional Adaptive TDE Method; 4.2.5 System Implementation; 4.2.6 Simulation Results; 4.2.7 Constrained Adaptation for the New TDE System)
   4.3 Chapter Summary
5. Extension of the Split Adaptive Systems
   5.1 The Generalized Parallel Split System
   5.2 The Generalized Serial Split System
   5.3 Comparison between the Parallel and the Serial Split Adaptive System
   5.4 Integration of the Two Forms of Split Predictors
   5.5 Application of the Integrated Split Model to Speech Encoding
   5.6 Chapter Summary
6. Conclusions
(Each chapter is followed by its own references.)
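For reference, the LMS adaptation at the heart of this thesis is a one-line stochastic-gradient update; the sketch below is the textbook algorithm applied to system identification, not the split-path variants the thesis develops.

```python
# Textbook LMS filter, w <- w + mu * e * u: the baseline algorithm that the
# thesis's split structures build on, not the split-path variants themselves.
import numpy as np

def lms_identify(x, d, order, mu):
    # Identify an FIR system from input x and desired response d
    w = np.zeros(order)
    for n in range(order - 1, len(x)):
        u = x[n - order + 1:n + 1][::-1]  # regressor [x[n], ..., x[n-order+1]]
        e = d[n] - w @ u                  # a priori error
        w += mu * e * u                   # stochastic-gradient (LMS) update
    return w

# Usage: recover a 4-tap channel from noisy observations
rng = np.random.default_rng(3)
h = np.array([1.0, 0.5, -0.3, 0.1])
x = rng.standard_normal(5000)
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
print(lms_identify(x, d, order=4, mu=0.01))  # should approach h
```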
196
Improving collaborative forecasting performance in the food supply chain. Eksoz, Can, January 2014
The dynamic structure of the Food Supply Chain (FSC) distinguishes it from other supply chains. Providing food to customers in a healthy and fresh manner requires significant effort from manufacturers and retailers. In practice, while these partners collaboratively forecast time-sensitive and/or short-life product-groups (e.g. perishable, seasonal, promotional and newly launched products), they confront significant challenges that prevent them from generating accurate forecasts and conducting long-term collaborations. These challenges are not limited to the fluctuating demand of time-sensitive product-groups and continuously evolving consumer choices; they are also largely related to the partners' conflicting expectations, which arise mainly during the practices of integration, forecasting and information exchange in the FSC. This research focuses specifically on Collaborative Forecasting (CF) practices in the FSC, addressed from the manufacturers' point of view as they collaboratively forecast perishable, seasonal, promotional and newly launched products with retailers. The reasons are twofold: there is a paucity of research studying CF from the manufacturers' standpoint, and the associated product-groups decay at short notice while their demand is influenced by uncertain consumer behaviour and the dynamic FSC environment. The aim of the research is to identify factors that have a significant influence on CF performance; generating accurate forecasts for the aforementioned product-groups and sustaining long-term collaborations (one year or more) between partners are its two major performance criteria. The research systematically reviews the literature on Collaborative Planning, Forecasting and Replenishment (CPFR), which combines the supply chain practices of upstream and downstream members by linking their planning, forecasting and replenishment operations. The review also covers supply chain integration, the forecasting process and information sharing, because partners' CF is not limited to forecasting practices: it also encapsulates the integration of chains and bilateral information sharing for accurate forecasts. A semi-structured interview with a UK-based food manufacturer and three online group discussions on the business-oriented social networking service LinkedIn enrich the research with pragmatic, qualitative data, which are coded and analysed with the software package QSR NVivo 9. Refining the results of the literature review with these qualitative data makes it possible to develop a rigorous conceptual model and associated hypotheses. A comprehensive online survey questionnaire is then delivered to food manufacturers located in the UK and Ireland, North America and Europe, and analysed empirically with the exploratory technique of Partial Least Squares (PLS). The most significant contributions of this research are (i) extending the literature by offering a new CF practice aimed at improving forecast accuracy and long-term collaborations, and (ii) providing managerial implications through a rigorous conceptual model that guides practitioners in implementing the CF practice to achieve accurate forecasts and long-term collaborations.

In detail, the findings primarily emphasise that manufacturers' interdepartmental integration plays a vital role in successful CF and in integration with retailers. Effective integration with retailers, in turn, encourages manufacturers to conduct stronger CF in the FSC. Partners' forecasting meetings are another significant factor for CF, and the role of forecasters in these meetings is crucial as well, implying an indirect influence of forecasters on CF. Complementing past studies, this research further explores the manufacturers' information sources that are significant for CF and should be shared with retailers, and shows that the quality of the shared information must be maintained, suggesting that information quality is indirectly important for CF. Two major elements contribute to the literature. First, concentrating on particular product-groups in the FSC and examining CF from the manufacturers' point of view not only closes a pragmatic gap in the literature but also identifies new areas for future study. Second, the CF practice proposed here demonstrably increases manufacturers' forecast satisfaction for the associated product-groups; given manufacturers' subjective forecast expectations, shaped by organisational objectives and market dynamics, demonstrating this significant impact supports generalising the practice across the FSC. Practitioners can draw on this research when they aim to collaboratively generate accurate forecasts and conduct long-term collaborations for the associated product-groups. The benefits are not limited to the FSC: manufacturers in other industries can benefit when they collaborate with retailers over similar product-groups with a short shelf life and/or a need for timely and reliable forecasts. The research also opens new fields for academic work in supply chains, forecasting and information exchange, and draws academics' attention to particular product-groups in the FSC for future research. Nevertheless, this research is limited to dyadic manufacturer-retailer forecast collaborations over a limited range of product-groups, which is a further opportunity for academics to extend it to other types of collaborations and products.
197
Use of demand forecasting techniques as a support tool for the management of highly congested hospital emergency departments / Uso de técnicas de previsão de demanda como ferramenta de apoio à gestão de emergências hospitalares com alto grau de congestionamento. Calegari, Rafael, January 2016
Emergency departments (ED) play a key role in the health system, serving as a gateway to hospitals and providing care for patients with injuries and serious illnesses. However, EDs worldwide suffer from increased demand and overcrowding. Multiple factors converge simultaneously to produce this overcrowding, and optimizing patient flow management can help reduce the problem. In this context, the length of stay of patients in the ED (LSED) is consolidated in the literature as an indicator of patient flow quality. This thesis deals with demand forecasting and management in EDs with a high degree of congestion. The subject is covered in three scientific papers, all analyzing data from the emergency department of the Hospital de Clínicas de Porto Alegre. The first paper applies four forecasting models to predict demand for ED service, evaluating the influence of climatic and calendar factors. The second paper uses partial least squares (PLS) regression to predict four indicators related to LSED; the mean length of stay in the ED yielded the best-fitting model, with a mean absolute percentage error (MAPE) of 5.68%. The third paper presents a simulation study to identify the internal hospital factors influencing LSED; the number of CT exams and the occupancy rate of the clinical and surgical wards were the most influential factors.
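A minimal sketch of the PLS-regression step used in the second paper, with scikit-learn and synthetic data; the features, number of components, and data are placeholders, not the hospital's variables.

```python
# Minimal PLS-regression sketch in the spirit of the second paper: predict a
# length-of-stay indicator from correlated operational features and score it
# with MAPE. Features and data are synthetic placeholders.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(7)
n = 365                                   # one year of daily observations
arrivals = rng.poisson(300, n).astype(float)
ct_exams = 0.2 * arrivals + rng.normal(0, 5, n)
ward_occupancy = np.clip(0.85 + 0.0005 * arrivals + rng.normal(0, 0.03, n), 0, 1)
X = np.column_stack([arrivals, ct_exams, ward_occupancy])

# Hypothetical mean length of stay (hours), driven by the same factors
y = 4 + 0.01 * arrivals + 0.05 * ct_exams + 6 * ward_occupancy + rng.normal(0, 0.5, n)

train, test = slice(0, 300), slice(300, n)
pls = PLSRegression(n_components=2)       # PLS handles the collinear features
pls.fit(X[train], y[train])
pred = pls.predict(X[test]).ravel()

mape = 100 * np.mean(np.abs((y[test] - pred) / y[test]))
print(f"MAPE: {mape:.2f}%")
```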
198
Real-time forecasting of river levels with an adaptive regression model: an application to the Uruguay River basin / Previsão de níveis fluviais em tempo atual com modelo de regressão adaptativo: aplicação na bacia do rio Uruguai. Moreira, Giuliana Chaves, January 2016
This study evaluated the potential of the recursive least squares (RLS) technique for adjusting, in real time, the parameters of autoregressive models with exogenous variables (ARX), the exogenous inputs being upstream river levels, in order to improve the performance of real-time river level forecasts. Three aspects were studied jointly: the forecast lead time, the proportion of controlled area in upstream basins, and the area of the basin defined by the forecast section. The research was conducted along three main dimensions: (a) methodological (without recursion; with recursion; with recursion and a forgetting factor); (b) temporal (six lead times: 10, 24, 34, 48, 58 and 72 hours); and (c) spatial (variation of the controlled area of the basin and of the basin area defined by the forecast section). The study area was the Uruguay River basin with its outlet at the Uruguaiana gauging station (190,000 km²) and its nested sub-basins of Itaqui (131,000 km²), Passo São Borja (125,000 km²), Garruchos (116,000 km²), Porto Lucena (95,200 km²), Alto Uruguai (82,300 km²) and Iraí (61,900 km²). River level data, with daily readings at 7 a.m. and 5 p.m. from 1 January 1991 to 30 June 2015, were provided by the Companhia de Pesquisa de Recursos Minerais (CPRM). Model performance was assessed with the Nash-Sutcliffe coefficient (NS) and the 0.95 quantile of the absolute errors (EA(0.95): the error not exceeded with frequency 0.95). The EA(0.95) errors of the best models obtained for each basin always increase as the controlled area is reduced; that is, forecast quality decreases as the control section moves upstream. The gain in forecast quality from the adaptive schemes is most evident in the EA(0.95) values, since this statistic is more sensitive than the NS coefficient, showing larger differences, and it is more representative of the large errors that occur precisely during flood events. In general, the smaller the basin area, the shorter the lead times for which useful forecasts can be obtained; however, a larger controlled upstream area improves the performance of smaller basins, particularly as measured by EA(0.95). On the other hand, if the proportion of the controlled upstream basin is already very large, as in alternatives 1 and 2 used for forecasting at Itaqui (88.5% and 95.4% of controlled area, respectively), the adaptive schemes make little difference. When basins with smaller controlled upstream areas are considered, as in the case of Porto Lucena under alternative 2 (65% controlled area), the performance gain from the full adaptive scheme (RLS with forgetting factor, MQR+f.e.) becomes relevant.
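A minimal sketch of the adaptive scheme this abstract evaluates: recursive least squares with a forgetting factor tracking the parameters of a small ARX level model. The model order, lag, forgetting factor, and synthetic data are illustrative assumptions.

```python
# RLS with forgetting factor lam applied to an ARX model
#   h[t] = a1*h[t-1] + b1*u[t-lag] + e[t],
# where h is the level at the forecast section and u an upstream level.
# Order, lag, lam and the synthetic data are illustrative assumptions.
import numpy as np

def rls_ff_arx(h, u, lag, lam=0.98, delta=1e3):
    # Track ARX parameters theta = [a1, b1] with exponential forgetting
    theta = np.zeros(2)
    P = delta * np.eye(2)
    preds = np.full(len(h), np.nan)
    for t in range(lag, len(h)):
        phi = np.array([h[t - 1], u[t - lag]])    # regressor
        preds[t] = phi @ theta                    # one-step-ahead forecast
        e = h[t] - preds[t]
        k = P @ phi / (lam + phi @ P @ phi)
        theta += k * e
        P = (P - np.outer(k, phi @ P)) / lam      # forgetting-factor update
    return theta, preds

# Synthetic levels with a slowly drifting upstream-to-downstream gain
rng = np.random.default_rng(11)
T, lag = 2000, 3
u = 2 + np.cumsum(rng.normal(0, 0.01, T))        # upstream level (random walk)
h = np.zeros(T)
for t in range(lag, T):
    b = 0.3 + 0.2 * t / T                        # time-varying parameter
    h[t] = 0.7 * h[t - 1] + b * u[t - lag] + rng.normal(0, 0.02)
theta, preds = rls_ff_arx(h, u, lag)
print(theta)   # tracks [0.7, b(t)] near the end of the record
```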
199
Parameter estimation of induction machines through a no-load startup test / Estimação de parâmetros de máquinas de indução através de ensaio de partida em vazio. Sogari, Paulo Antônio Brudna, January 2017
In this work, methods are proposed to estimate the parameters of induction motors through the least squares method, measuring only voltages, currents, and stator resistance in a no-load startup test. The procedures for processing the measured signals are detailed, along with the estimation of the magnetic flux and the mechanical speed of the rotor. For the estimation of the electrical parameters, methods are proposed that differ in their requirements and in whether they treat the parameters as time-invariant or time-varying. For the latter case, parameters are estimated over data windows, applying a model with time-invariant parameters locally to different parts of the test. Simulations are performed to validate the proposed methods, and test data from three motors of different power ratings are used to analyze the scale of parameter variation during startup. The results obtained with and without consideration of parameter variation are compared.
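The windowed least-squares idea can be sketched on a deliberately simplified electrical model: fitting v = R·i + L·di/dt on successive windows, so that slowly varying parameters are tracked by locally time-invariant fits. The R-L circuit is a stand-in assumption; the thesis's full induction machine model is not reproduced here.

```python
# Windowed least-squares sketch: fit v = R*i + L*di/dt on successive windows,
# treating parameters as constant within each window. The simple R-L circuit
# stands in for the full induction machine model used in the thesis.
import numpy as np

def windowed_ls(v, i, dt, win):
    # Return per-window estimates of (R, L)
    didt = np.gradient(i, dt)
    out = []
    for s in range(0, len(v) - win, win):
        sl = slice(s, s + win)
        A = np.column_stack([i[sl], didt[sl]])      # regressors [i, di/dt]
        params, *_ = np.linalg.lstsq(A, v[sl], rcond=None)
        out.append(params)                          # params = [R, L]
    return np.array(out)

# Synthetic startup-like signals with an inductance that drifts over time
# (e.g. due to saturation); 10 kHz sampling, 1 s record
dt, n = 1e-4, 10_000
t = np.arange(n) * dt
R_true, L_true = 2.0, 0.05 * (1 + 0.5 * t)          # L drifts upward
i = 10 * (1 - np.exp(-t / 0.1)) * np.sin(2 * np.pi * 50 * t)
v = R_true * i + L_true * np.gradient(i, dt) + np.random.default_rng(5).normal(0, 0.1, n)

est = windowed_ls(v, i, dt, win=500)
print(est[0], est[-1])   # the L estimate should drift with L_true
```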
200
Time-varying linear predictive coding of speech signals. Hall, Mark Gilbert, January 1977
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1977. Microfiche copy available in Archives and Engineering. Includes bibliographical references.