131

Alocação de Medidores para a Estimação de Estado em Redes Elétricas Inteligentes / Meter Placement for State Estimation in Smart Grids

Raposo, Antonio Adolpho Martins 26 February 2016 (has links)
To plan and operate a Smart Grid (SG) properly, many new technical issues in the context of distribution systems must be considered, for example: stability (due to the installation of Distributed Generation (DG)), load and generation dispatch, management of energy storage devices, and the assessment of the impact of electric vehicle connection on the distribution system. The main prerequisite for many of these new functions in the distribution system control center is to determine the electrical network state (magnitude and angle of the nodal voltages) in real time from measurement devices installed in it. In transmission system control centers, this task is performed by the state estimation tool. Thus, Distribution System State Estimation (DSSE) is one of the cornerstones for the implementation of an SG. The presence of a small number of measurements can make the grid unobservable in the context of the DSSE; that is, the state variables (magnitude and angle of the nodal voltages at all buses) cannot be determined from the set of measurements by a state estimator. Because of this, a large number of pseudo-measurements is usually added to the existing measurement plan to ensure observability and to enable the DSSE. A drawback of this strategy is that the accuracy of the estimated state is compromised, because the errors associated with the pseudo-measurements are considerably larger than those of real measurements. Consequently, it is necessary to allocate meters (voltage magnitudes, active and reactive power flows, current magnitudes, etc.) to guarantee the accuracy of the DSSE. The meter placement problem for state estimation in transmission networks is usually solved with the objective of ensuring observability. On the other hand, meter placement for the DSSE aims to minimize probabilistic indices associated with the errors between the true and estimated state vectors. An important component of the method used to solve the meter placement problem is the probabilistic technique used to estimate the objective function. Due to the nonlinear nature of the DSSE problem, the best option has been to use Monte Carlo Simulation (MCS). A disadvantage of using MCS to estimate the objective function of the allocation problem is its high computational cost, owing to the need to solve a nonlinear state estimation problem for each sample element. The main objective of this dissertation is to propose probabilistic techniques to improve the computational performance of existing meter placement methodologies without reducing the accuracy of the estimated state. This compromise is established using two strategies. In the first, a linearized model is used to estimate the state and MCS is applied to determine the risks of the objective function. In the second, a closed analytical formula is used to determine the risks based on the linearized model. Furthermore, the improved versions of the meter placement algorithms proposed in this dissertation consider the effect of correlation among the measurements. The proposed meter placement algorithms were tested on the 95-bus British distribution system.
The test results demonstrate that introducing the proposed strategies into a meter placement algorithm significantly reduces its computational cost. Moreover, improvements in accuracy were observed in some cases, because the risk estimates provided by MCS are not accurate with small samples.
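To make the two strategies concrete, the sketch below evaluates a toy linearized state estimator both by Monte Carlo simulation over the measurement noise and by a closed-form expression derived from the estimation-error covariance. It is only an illustration of the idea described above: the measurement Jacobian, the split between real meters and pseudo-measurements, the noise levels and the tolerance are all invented, and the dissertation's own formulation is not reproduced here.

```python
# Illustrative sketch (not the dissertation's implementation): a toy linearized
# state estimator whose error risk is evaluated two ways, mirroring the two
# strategies described above. H, the noise levels and the tolerance are invented.
import math
import numpy as np

rng = np.random.default_rng(0)
n_states, n_meas = 4, 8
H = rng.normal(size=(n_meas, n_states))              # hypothetical linearized Jacobian
sigma = np.where(np.arange(n_meas) < 5, 0.01, 0.10)  # "real" meters vs. pseudo-measurements
R = np.diag(sigma ** 2)

# Weighted least squares gain and estimation-error covariance (linear model)
W = np.linalg.inv(R)
cov_x = np.linalg.inv(H.T @ W @ H)
gain = cov_x @ H.T @ W
tol = 0.02                                            # accuracy tolerance per state variable

# Strategy 1: Monte Carlo over measurement noise (no nonlinear solver per sample)
errors = (gain @ (sigma[:, None] * rng.normal(size=(n_meas, 20000)))).T
mc_risk = np.mean(np.any(np.abs(errors) > tol, axis=1))

# Strategy 2: closed-form per-state risk from the error covariance
std = np.sqrt(np.diag(cov_x))
per_state_risk = 1.0 - np.array([math.erf(tol / (math.sqrt(2.0) * s)) for s in std])

print(mc_risk, per_state_risk)
```

The Monte Carlo route mirrors the first strategy (no nonlinear estimator has to be solved for each sample element), while the per-state closed-form risks mirror the second.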
132

A distribuição beta generalizada semi-normal / The beta generalized half-normal distribution

Rodrigo Rossetto Pescim 29 January 2010 (has links)
A new family of distributions, the beta generalized half-normal distribution, which includes some important distributions as special cases, such as the half-normal and generalized half-normal (Cooray and Ananda, 2008) distributions, is proposed in this work. For this new family of distributions, we studied the probability density function, cumulative distribution function and failure rate (or hazard) function, which do not depend on complicated mathematical functions. We obtained formal expressions for the moments, the moment generating function, the density function of the order statistics, the mean deviations, the entropy, the reliability, and the Bonferroni and Lorenz curves. We examined maximum likelihood estimation of the parameters and derived the expected information matrix. This work also proposes a regression model using the beta generalized half-normal distribution. The usefulness of the new distribution is illustrated through two data sets, showing that it is more flexible for analyzing lifetime data than other distributions in the literature.
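As a rough sketch of how such a family can be assembled (under the common beta-G construction with the generalized half-normal as baseline; the parameter names a, b, alpha, theta and the exact parameterization are my assumptions, not necessarily those of the thesis):

```python
# Sketch of a beta-G construction applied to the generalized half-normal (GHN)
# baseline. Parameterization is assumed, not taken from the thesis.
import numpy as np
from scipy.stats import beta
from scipy.special import erf

def ghn_cdf(x, alpha, theta):
    """Generalized half-normal CDF, G(x) = 2*Phi((x/theta)^alpha) - 1 for x > 0."""
    return erf((x / theta) ** alpha / np.sqrt(2.0))

def bghn_cdf(x, a, b, alpha, theta):
    """Beta generalized half-normal CDF: regularized incomplete beta evaluated at G(x)."""
    return beta.cdf(ghn_cdf(x, alpha, theta), a, b)

x = np.linspace(0.01, 3.0, 5)
print(bghn_cdf(x, a=2.0, b=1.5, alpha=1.2, theta=1.0))
```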
133

Single and Multiple Emitter Localization in Cognitive Radio Networks

Ureten, Suzan January 2017 (has links)
Cognitive radio (CR) is often described as a context-intelligent radio, capable of changing its transmit parameters dynamically based on interaction with the environment in which it operates. The work in this thesis explores the problem of using received signal strength (RSS) measurements taken by a network of CR nodes to generate an interference map of a given geographical area and to estimate the locations of multiple primary transmitters operating simultaneously in the area. A probabilistic model of the problem is developed, and algorithms to address the location estimation challenges are proposed. Three approaches are proposed to solve the localization problem. The first estimates the locations from the generated interference map when no information about the propagation model or any of its parameters is available. The second approximates the maximum likelihood (ML) estimate of the transmitter locations with a grid search when the model is known and its parameters are available. The third also requires knowledge of the model parameters, but is based on generating samples from the joint posterior of the unknown location parameters with Markov chain Monte Carlo (MCMC) methods, as an alternative to the computationally complex grid search approach. For the RF cartography generation problem, we study global and local interpolation techniques, specifically Delaunay-triangulation-based techniques, as the use of an existing triangulation provides a computationally attractive solution. We present a comparative performance evaluation of these interpolation techniques in terms of RF field strength estimation and emitter localization. Even though the estimates obtained from the generated interference maps are less accurate than those of the ML estimator, these rough estimates are used to initialize a more accurate algorithm, such as the MCMC technique, to reduce its complexity. The complexity of ML estimators based on a full grid search is also addressed by various types of iterative grid search methods. One challenge in applying the ML estimation algorithm to the multiple-emitter localization problem is that it requires a pdf approximation for sums of log-normal random variables in the likelihood calculations at each grid location. This motivates our investigation of the sum-of-log-normal approximations studied in the literature, in order to select the approximation appropriate to our model assumptions. As a final extension of this work, we propose our own approximation based on fitting a distribution to a set of simulated data and compare it with the well-known Fenton-Wilkinson approximation, a simple and computationally efficient approach that fits a log-normal distribution to a sum of log-normals by matching the first and second central moments of the random variables. We demonstrate that the location estimation accuracy of the grid search technique obtained with our proposed approximation is higher than that obtained with the Fenton-Wilkinson approximation in many different scenarios.
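The Fenton-Wilkinson idea mentioned above can be sketched in a few lines: fit a single log-normal to a sum of independent log-normals by matching its mean and variance. The parameter values below are arbitrary and only serve to compare the fit against simulation.

```python
# Sketch of the Fenton-Wilkinson moment-matching approximation for a sum of
# independent log-normals. Parameter values are arbitrary illustration.
import numpy as np

def fenton_wilkinson(mu, sigma):
    """Return (mu_z, sigma_z) of a log-normal matching the mean and variance
    of sum_i exp(N(mu_i, sigma_i^2)) with independent summands."""
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    mean = np.sum(np.exp(mu + 0.5 * sigma ** 2))
    var = np.sum((np.exp(sigma ** 2) - 1.0) * np.exp(2.0 * mu + sigma ** 2))
    sigma_z2 = np.log(1.0 + var / mean ** 2)
    return np.log(mean) - 0.5 * sigma_z2, np.sqrt(sigma_z2)

# Quick sanity check against simulation
rng = np.random.default_rng(1)
mu, sigma = [0.0, 0.5, -0.3], [0.6, 0.8, 0.7]
samples = np.exp(rng.normal(mu, sigma, size=(200000, 3))).sum(axis=1)
print(fenton_wilkinson(mu, sigma))
print(np.log(samples).mean(), np.log(samples).std())  # rough agreement expected
```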
134

Robustness of Sequential Probability Ratio Tests in Case of Nuisance Parameters

Eger, Karl-Heinz, Tsoy, Evgeni Borisovich 27 June 2010 (has links)
This paper deals with the computation of the OC- and ASN-functions of sequential probability ratio tests in the multi-parameter case. Generalizing the method of conjugated parameter pairs, Wald-like approximations are presented for the OC- and ASN-functions. These characteristics can be used to describe the robustness properties of a sequential test in the presence of nuisance parameters. As examples, tests for the mean and the variance of a normal distribution are considered.
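For orientation, the sketch below evaluates the classical Wald approximations of the OC- and ASN-functions for the textbook single-parameter case of an SPRT on the mean of a normal distribution with known variance; the error probabilities and hypothesized means are arbitrary, and the multi-parameter construction with conjugated parameter pairs treated in the paper is not reproduced here.

```python
# Sketch of Wald's approximations for the OC and ASN functions of an SPRT on
# the mean of N(mu, sigma^2) with known sigma. All numbers are illustrative.
import numpy as np

alpha, beta = 0.05, 0.10
mu0, mu1, sigma = 0.0, 1.0, 2.0
A, B = (1 - beta) / alpha, beta / (1 - alpha)   # Wald's boundary approximations

def oc(mu):
    """Approximate probability of accepting H0 when the true mean is mu."""
    h = (mu0 + mu1 - 2 * mu) / (mu1 - mu0)
    return np.where(np.isclose(h, 0.0),
                    np.log(A) / (np.log(A) - np.log(B)),
                    (A ** h - 1) / (A ** h - B ** h))

def asn(mu):
    """Approximate expected sample size at mu (drift must be nonzero, so the
    midpoint mu = (mu0 + mu1)/2 is excluded from the grid below)."""
    drift = (mu1 - mu0) * (mu - 0.5 * (mu0 + mu1)) / sigma ** 2  # E[log-LR increment]
    L = oc(mu)
    return (L * np.log(B) + (1 - L) * np.log(A)) / drift

mus = np.array([-0.5, 0.0, 0.25, 0.75, 1.0, 1.5])
print(np.round(oc(mus), 3))
print(np.round(asn(mus), 1))
```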
135

Utilizing self-similar stochastic processes to model rare events in finance

Wesselhöfft, Niels 24 February 2021 (has links)
In statistics and mathematics, the normal distribution is the dominant underlying stochastic term for the majority of models. We show that the corresponding stochastic process, Brownian Motion, does not account for three crucial empirical observations in financial data: heavy tails, long memory and scaling laws. A self-similar process that is able to account for long-memory behavior is Fractional Brownian Motion, which has a possible non-Gaussian limit under convolution of the increments. The increments of Fractional Brownian Motion can exhibit long memory through a parameter H, the Hurst exponent. For Fractional Brownian Motion this scaling (Hurst) exponent would be constant over moments of different order, i.e., unifractal. Empirically, however, we observe varying Hölder exponents, a continuum of Hurst exponents, which implies multifractal behavior. We explain this multifractal behavior through the changing alpha-stable indices of the alpha-stable distributions across sampling frequencies, by applying filters for seasonality and time dependence (long memory) over different sampling frequencies, starting from one-minute high-frequency data. By applying a filter for long memory we show that the low-sampling-frequency process, with the time dependence component removed, can be governed by an alpha-stable motion.
Under the alpha-stable motion we propose a semiparametric method coined the Frequency Rescaling Methodology (FRM), which allows the filtered high-frequency data set to be rescaled to a lower sampling frequency. The data sets obtained by rescaling high-frequency data to, e.g., weekly frequency with the FRM are more heavy-tailed than the weekly data observed empirically. We show that a subset of the whole data set suffices for the FRM to obtain a better forecast, in terms of risk, for the whole data set. Specifically, the FRM would have been able to account for the tail events of the 2008 financial crisis.
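A minimal sketch of the kind of scaling analysis referred to above: generalized Hurst exponents H(q) estimated from the scaling of absolute moments of increments, here on a simulated standard Brownian motion (so all H(q) should be close to 0.5). A clear dependence of H(q) on q in real data would point to multifractal behavior. This is an illustration, not the thesis code or its filtering pipeline.

```python
# Estimate generalized Hurst exponents H(q) from the scaling law
# E|X(t+tau) - X(t)|^q ~ tau^(q*H(q)), on a toy Brownian-motion path.
import numpy as np

rng = np.random.default_rng(2)
x = np.cumsum(rng.normal(size=100_000))          # toy path: standard Brownian motion

def generalized_hurst(x, q, lags=range(1, 50)):
    lags = np.array(list(lags))
    moments = np.array([np.mean(np.abs(x[l:] - x[:-l]) ** q) for l in lags])
    slope = np.polyfit(np.log(lags), np.log(moments), 1)[0]  # slope = q * H(q)
    return slope / q

for q in (0.5, 1.0, 2.0, 3.0):
    print(q, round(generalized_hurst(x, q), 3))
```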
136

Rozpoznávání pozic a gest / Recognition of Poses and Gestures

Jiřík, Leoš January 2008 (has links)
This thesis surveys existing methods in the field of image recognition with regard to gesture recognition. Some methods have been chosen for deeper study and are discussed later on. The second part presents the concept of an algorithm capable of robust gesture recognition based on data acquired within the AMI and M4 projects. New ways to obtain precise information on participants' positions are suggested, along with dynamic data-processing approaches to recognition. As an alternative, recognition using Gaussian Mixture Models and periodicity analysis is introduced. The gesture class in focus is speech-supporting gestures. The last part demonstrates the results and discusses future work.
137

[en] A POISSON-LOGNORMAL MODEL TO FORECAST THE IBNR QUANTITY VIA MICRO-DATA / [pt] UM MODELO POISSON-LOGNORMAL PARA PREVISÃO DA QUANTIDADE IBNR VIA MICRO-DADOS

JULIANA FERNANDES DA COSTA MACEDO 02 February 2016 (has links)
[en] The main objective of this dissertation is to forecast the IBNR reserve. To this end, a statistical model combining distributions was developed, seeking an adequate representation of the data. The IBNR reserve, short for Incurred But Not Reported, represents the amount that insurers must hold to pay claims that have already occurred but have not yet been reported to the insurer by the present date. Given the importance of this reserve, several methods for estimating it have been proposed. One of the methods most used by insurers is the Chain Ladder, which is based on run-off triangles, a format that groups the data according to occurrence and reporting dates. However, this grouping causes the loss of important information. This dissertation, based on other articles and works that consider ungrouped data, proposes a new model for the non-aggregated data. The proposed model combines the distribution of the reporting delay, represented here by a truncated log-normal distribution (since information is available only up to the last observed date); the distribution of the total number of claims incurred in a given period, modeled by a Poisson distribution; and the distribution of the number of claims incurred in a given period and reported by the last observed date, characterized by a binomial distribution. Finally, the IBNR claim count was estimated by this method and by the Chain Ladder, and the forecasting capacity of both was evaluated. Although the delay distribution of the proposed model fits the data well, the proposed model obtained results inferior to the Chain Ladder in terms of forecasting.
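For reference, a small sketch of the Chain Ladder mechanics on an invented cumulative run-off triangle: development factors are estimated column by column and used to complete the lower-right part of the triangle, and the IBNR amount is the projected ultimate minus the latest observed diagonal. The figures are not from the dissertation.

```python
# Chain Ladder sketch on a made-up cumulative run-off triangle
# (rows: occurrence years, columns: development years).
import numpy as np

tri = np.array([
    [100., 160., 190., 200.],
    [110., 175., 205., np.nan],
    [120., 190., np.nan, np.nan],
    [130., np.nan, np.nan, np.nan],
])

n = tri.shape[1]
factors = []
for j in range(n - 1):
    rows = ~np.isnan(tri[:, j + 1])                      # rows with both columns observed
    factors.append(tri[rows, j + 1].sum() / tri[rows, j].sum())

# Complete the lower-right part of the triangle with the development factors
full = tri.copy()
for i in range(tri.shape[0]):
    for j in range(n - 1):
        if np.isnan(full[i, j + 1]):
            full[i, j + 1] = full[i, j] * factors[j]

ibnr = full[:, -1] - np.array([row[~np.isnan(row)][-1] for row in tri])
print(np.round(factors, 3), np.round(ibnr, 1))
```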
138

Unsupervised Change Detection Using Multi-Temporal SAR Data : A Case Study of Arctic Sea Ice / Oövervakad förändringsdetektion med multitemporell SAR data : En fallstudie över arktisk havsis

Fröjse, Linda January 2014 (has links)
The extent of Arctic sea ice has decreased over the years and the importance of sea ice monitoring is expected to increase. Remote sensing change detection compares images acquired over the same geographic area at different times in order to identify changes that might have occurred in the area of interest. Change detection methods have been developed for cryospheric topics. The Kittler-Illingworth thresholding algorithm has proven to be an effective change detection tool, but has not been used for sea ice. Here it is applied to Arctic sea ice data. The objective is to investigate the unsupervised detection of changes in Arctic sea ice using multi-temporal SAR images. The well-known Kittler-Illingworth algorithm is tested using two density function models, i.e., the generalized Gaussian and the log-normal model. The difference image is obtained using the modified ratio operator. The histogram of the change image, which approximates its probability distribution, is considered to be a combination of two classes, i.e., the changed and unchanged classes. Histogram fitting techniques are used to estimate the unknown density functions and the prior probabilities. The optimum threshold is selected using a criterion function directly related to the classification error. In this thesis three datasets were used, covering parts of the Beaufort Sea from the years 1992, 2002, 2007 and 2009. The SAR and ASAR C-band data came from the ERS and ENVISAT satellites, respectively. All three datasets were interpreted visually. For all three, the generalized Gaussian detected a lot of change, whereas the log-normal detected less. Only one small subset of a dataset was validated against reference data. The log-normal distribution then obtained a 0% false alarm rate in all trials, while the generalized Gaussian obtained false alarm rates around 4% in most trials. The generalized Gaussian achieved detection accuracies around 95%, whereas the log-normal achieved detection accuracies around 70%. The overall accuracies for the generalized Gaussian were about 95% in most trials; the log-normal achieved overall accuracies around 85%. The KHAT for the generalized Gaussian was in the range 0.66-0.93, and for the log-normal in the range 0.68-0.77. Using one additional speckle filter iteration increased the accuracy for the log-normal distribution. Generally, positive change was detected with a higher level of accuracy than negative change. Visual inspection shows that the generalized Gaussian distribution probably over-estimates the change, while the log-normal distribution consistently detects less change than the generalized Gaussian. Lack of validation data made validation of the results difficult. The performed validation might not be reliable, since the available validation data consisted only of SAR imagery, and differentiating change from no-change is difficult in the area. Further, due to the lack of reference data, it could not be decided with certainty which distribution performed best.
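To make the thresholding criterion concrete, here is a sketch of the classical Kittler-Illingworth minimum-error threshold under Gaussian class-conditional models; the thesis instead models the two classes with generalized Gaussian or log-normal densities, but the selection principle (minimize a criterion tied to classification error over candidate thresholds) is the same. The toy "ratio image" below is simulated.

```python
# Classical Kittler-Illingworth minimum-error thresholding (Gaussian class
# models) on a simulated bimodal histogram; illustrative only.
import numpy as np

def kittler_illingworth(image, bins=256):
    hist, edges = np.histogram(image.ravel(), bins=bins)
    p = hist.astype(float) / hist.sum()
    mids = 0.5 * (edges[:-1] + edges[1:])
    best_t, best_j = None, np.inf
    for t in range(1, bins - 1):
        p1, p2 = p[:t].sum(), p[t:].sum()
        if p1 <= 0 or p2 <= 0:
            continue
        m1 = (p[:t] * mids[:t]).sum() / p1
        m2 = (p[t:] * mids[t:]).sum() / p2
        s1 = np.sqrt((p[:t] * (mids[:t] - m1) ** 2).sum() / p1) + 1e-12
        s2 = np.sqrt((p[t:] * (mids[t:] - m2) ** 2).sum() / p2) + 1e-12
        # Criterion function related to the Bayes classification error
        j = 1 + 2 * (p1 * np.log(s1) + p2 * np.log(s2)) \
              - 2 * (p1 * np.log(p1) + p2 * np.log(p2))
        if j < best_j:
            best_t, best_j = mids[t], j
    return best_t

rng = np.random.default_rng(3)
img = np.concatenate([rng.normal(0.2, 0.05, 5000), rng.normal(0.8, 0.1, 1000)])
print(kittler_illingworth(img))
```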
139

Coincidence Factor and Battery Energy Storage Systems for Industrial Electrical Power Supply : A Field Study of Building 178 at Scania AB, Södertälje / Sammanlagringsfaktor och energilagringssystem i försörjningsssytem för elkraft : En modell för byggnad 178 hos Scania AB, Södertälje

Wallhager, Lucas January 2023 (has links)
Coincidence factors have been researched since the late 1800s; they describe the ratio between the maximum coincident power usage of a system and the sum of the maximum individual loads of the system. Accurate estimates of the highest coincident power usage allow for minimal material usage when constructing substations, transformers, overhead lines, and cables for power transmission. Scania is a large bus and truck manufacturer in Sweden and has realised that it over-estimates the largest coincident power usage of its production facilities, which leads to unnecessary investment costs and an unnecessarily high power subscription with the distribution utility. This study is unusual, as coincidence factors for industrial purposes are rarely investigated. In combination with modelling of a Battery Energy Storage System (BESS) for power supply, this study investigates established methods of calculating coincidence factors for industrial purposes and their relevance, comparing the results with actual values from measurements. The results showed that Velander's method, used by utilities in Sweden and a few other countries, is not well suited for estimating the highest coincident power usage, as it requires accurate estimation of yearly energy usage and of two other parameters, k1 and k2. The normal distribution is better for this purpose but also requires accurate data. This study proposes a method based on the normal distribution, which requires follow-up in order to guarantee that it is accurate in multiple cases. In addition, a BESS was modelled in Matlab, with the initial aim of peak shaving. Since this did not prove profitable by Scania's standards, the modelling instead aimed simply at profitability, using Net Present Value as the economic tool for evaluating it. The results showed many profitable BESS sizes, with the battery becoming profitable after a minimum of five years.
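As a simple illustration of the quantities discussed above, the sketch below computes a coincidence factor from hypothetical feeder load profiles and evaluates a Velander-style peak-demand estimate P_max ≈ k1·W + k2·√W from annual energy W; the load profiles and the constants k1 and k2 are placeholders, not Scania's or any utility's actual values.

```python
# Coincidence factor and a Velander-style peak estimate, with invented numbers.
import numpy as np

# Hypothetical 15-minute load profiles for three feeders [kW]
rng = np.random.default_rng(4)
loads = np.abs(rng.normal([50, 80, 120], [10, 15, 25], size=(96, 3)))

coincident_peak = loads.sum(axis=1).max()        # peak of the aggregate profile
sum_of_peaks = loads.max(axis=0).sum()           # sum of individual feeder peaks
coincidence_factor = coincident_peak / sum_of_peaks
print(round(coincidence_factor, 3))

def velander(W_mwh, k1=0.25, k2=2.5):
    """Velander-style peak demand estimate [kW] from annual energy [MWh];
    k1 and k2 are placeholder constants, not calibrated values."""
    return k1 * W_mwh + k2 * np.sqrt(W_mwh)

print(round(velander(1200.0), 1))
```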
140

Inference for Generalized Multivariate Analysis of Variance (GMANOVA) Models and High-dimensional Extensions

Jana, Sayantee 11 1900 (has links)
A Growth Curve Model (GCM) is a multivariate linear model used for analyzing longitudinal data with short to moderate time series. It is a special case of the Generalized Multivariate Analysis of Variance (GMANOVA) model. Analysis using the GCM involves comparison of mean growth among different groups. The classical GCM, however, has some limitations, including distributional assumptions, the assumption of an identical degree of polynomial for all groups, and the requirement of a sample size larger than the number of time points. In this thesis, we relax some of the assumptions of the traditional GCM and develop appropriate inferential tools for its analysis, with the aim of reducing bias, improving precision and gaining power, as well as overcoming the limitations of high dimensionality. Existing methods for estimating the parameters of the GCM assume that the underlying distribution of the error terms is multivariate normal. In practical problems, however, we often come across skewed data, and hence estimation techniques developed under the normality assumption may not be optimal. Simulation studies conducted in this thesis show, in fact, that existing methods are sensitive to the presence of skewness in the data: estimators suffer increased bias and mean square error (MSE) when the normality assumption is violated. Methods appropriate for skewed distributions are therefore required. In this thesis, we relax the distributional assumption of the GCM and provide estimators for the mean and covariance matrices of the GCM under the multivariate skew normal (MSN) distribution. An estimator for the additional skewness parameter of the MSN distribution is also provided. The estimators are derived using the expectation maximization (EM) algorithm, and extensive simulations are performed to examine their performance. Comparisons with existing estimators show that our estimators perform better when the underlying distribution is multivariate skew normal. An illustration using a real data set is also provided, wherein triglyceride levels from the Framingham Heart Study are modelled over time. The GCM assumes an equal degree of polynomial for each group; therefore, when group means follow polynomials of different shapes, the GCM fails to accommodate this difference in one model. We consider an extension of the GCM, wherein mean responses from different groups can have different shapes, represented by polynomials of different degree. Such a model is referred to as the Extended Growth Curve Model (EGCM). We extend our work on the GCM to the EGCM and develop estimators for the mean and covariance matrices under MSN errors. We adopted the Restricted Expectation Maximization (REM) algorithm, which is based on the multivariate Newton-Raphson (NR) method and Lagrangian optimization. However, the multivariate NR method, and hence the existing REM algorithm, is applicable to vector parameters, whereas the parameters of interest in this study are matrices. We therefore extended the NR approach to matrix parameters, which consequently allowed us to extend the REM algorithm to matrix parameters. The performance of the proposed estimators was examined using extensive simulations, and a motivating real data example is provided to illustrate the application of the proposed estimators. Finally, this thesis deals with high-dimensional applications of the GCM.
Existing methods for the GCM are developed under the assumption of 'small p, large n' (n >> p) and are not appropriate for analyzing high-dimensional longitudinal data, due to singularity of the sample covariance matrix. In previous work, we used the Moore-Penrose generalized inverse to overcome this challenge. However, that method has limitations near singularity, when p ≈ n. In this thesis, a Bayesian framework is used to derive a test of the linear hypothesis on the mean parameter of the GCM that is applicable in high-dimensional situations. Extensive simulations are performed to investigate the performance of the test statistic and establish its optimality characteristics. Results show that the test performs well under different conditions, including the near-singularity zone. Sensitivity of the test to mis-specification of the parameters of the prior distribution is also examined empirically. A numerical example is provided to illustrate the usefulness of the proposed method in practical situations. / Thesis / Doctor of Philosophy (PhD)
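For context, here is a sketch of the classical Growth Curve Model Y = XBZ + E and its standard normal-theory estimator of B on simulated data; this is the textbook Potthoff-Roy/Khatri-type estimator, not the skew-normal EM or Bayesian machinery developed in the thesis, and the simulated design is invented.

```python
# Classical GCM Y = X B Z + E and its normal-theory estimator of B (sketch).
import numpy as np

rng = np.random.default_rng(5)
n, p, k, q = 60, 5, 2, 2                      # subjects, time points, groups, polynomial terms
groups = rng.integers(0, k, size=n)
X = np.eye(k)[groups]                         # between-individual design (group indicators)
t = np.linspace(0, 1, p)
Z = np.vstack([np.ones(p), t])                # within-individual design (intercept, slope)
B_true = np.array([[1.0, 2.0], [0.5, 3.0]])   # group-wise growth parameters
Y = X @ B_true @ Z + rng.normal(scale=0.3, size=(n, p))

P_X = X @ np.linalg.inv(X.T @ X) @ X.T
S = Y.T @ (np.eye(n) - P_X) @ Y               # residual sums-of-squares-and-products matrix
S_inv = np.linalg.inv(S)
B_hat = (np.linalg.inv(X.T @ X) @ X.T @ Y @ S_inv @ Z.T
         @ np.linalg.inv(Z @ S_inv @ Z.T))
print(np.round(B_hat, 2))                     # should be close to B_true
```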
