361

Multivariate EWMA Control Chart and Application to a Semiconductor Manufacturing Process

Huh, Ick 09 1900
The multivariate cumulative sum (MCUSUM) and the multivariate exponentially weighted moving average (MEWMA) control charts are the two leading methods for monitoring a multivariate process. This thesis focuses on the MEWMA control chart. Specifically, using the Markov chain method, we study in detail several aspects of the run length distribution for both the on- and off-target cases. For the on-target run length analysis, we express the probability mass function of the run length distribution, the average run length (ARL), the variance of run length (VRL), and higher moments of the run length distribution in closed form. In previous studies of the off-target performance of the MEWMA control chart, the process mean shift was usually assumed to occur at the beginning of the process. We extend the classical off-target case and introduce a generalization of the probability mass function of the run length distribution, the ARL, and the VRL; the results of Prabhu and Runger (1996) can be derived from our new model. By evaluating the off-target ARL values for the MEWMA control chart, we determine the optimal smoothing parameters using the partition method, which provides an easy algorithm for finding them, and study how they respond as the process mean shift time changes. We compare the ARL performance of the MEWMA control chart with that of the multivariate Shewhart control chart to see whether the MEWMA chart remains effective in detecting a small mean shift as the process mean shift time changes. To apply the model to semiconductor manufacturing processes, we use a bivariate normal distribution to generate sample data and compare the MEWMA control chart with the multivariate Shewhart control chart to evaluate how the MEWMA chart behaves when a delayed mean shift occurs. We also apply the variation transmission model introduced by Lawless et al. (1999) to the semiconductor manufacturing process and show an extension of the model that makes the application more realistic. All programming and calculations were done in R. / Master of Science (MS)
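
To make the charting scheme concrete, here is a minimal sketch of the MEWMA recursion with a Monte Carlo estimate of the run length under a delayed mean shift. It is written in Python rather than the R used in the thesis, and the smoothing parameter `lam` and control limit `h` are illustrative placeholders, not values calibrated by the Markov chain method.

```python
import numpy as np

def mewma_signal_time(x, mean, cov, lam=0.1, h=10.0):
    """Run a MEWMA chart over observations x (n x p). Returns the first
    sample index at which the chart signals (the run length), or None."""
    z = np.zeros(mean.shape[0])
    for i, xi in enumerate(x, start=1):
        z = lam * (xi - mean) + (1 - lam) * z
        # exact covariance of z at time i; it converges to lam/(2-lam)*cov
        sigma_z = lam * (1 - (1 - lam) ** (2 * i)) / (2 - lam) * cov
        t2 = z @ np.linalg.solve(sigma_z, z)
        if t2 > h:
            return i
    return None

# Monte Carlo run length for a bivariate process whose mean shifts at time tau
rng = np.random.default_rng(1)
mean0, cov = np.zeros(2), np.eye(2)
shift, tau, n = np.array([1.0, 0.0]), 50, 5000
rls = []
for _ in range(500):
    x = rng.multivariate_normal(mean0, cov, size=n)
    x[tau:] += shift                    # delayed mean shift
    rl = mewma_signal_time(x, mean0, cov)
    if rl is not None:
        rls.append(rl)
print("average run length:", np.mean(rls))
```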
362

[en] TECHNIQUES FOR DETECTION OF BIAS IN DEMAND FORECASTING: PERFORMANCE COMPARISON / [pt] TÉCNICAS PARA DETECÇÃO DE VIÉS EM PREVISÃO DE DEMANDA: COMPARAÇÃO DE DESEMPENHOS

FELIPE SCHOEMER JARDIM 09 November 2021
In a globalized world in continuous transformation, changes in the demand pattern are increasingly frequent. If not rapidly detected, they can have a negative and persistent impact on the well-being of a business through persistently poor sales forecasts, which begin to generate values systematically above or below actual demand, indicating the presence of bias. To avoid this, formal statistical techniques for bias detection can be incorporated into the demand forecasting process. Against this background, this dissertation compares, through simulation, the performance of the main formal bias-detection techniques for demand forecasting found in the literature. Six such techniques are identified and analyzed: four are based on the Tracking Signal statistic and two are adapted from Statistical Process Control. The forecasting models monitored by these techniques are structured time series models, using simple exponential smoothing for series with a constant mean level and Holt's method for series with a trend. Three types of changes in the demand pattern, each of which biases the forecasts, are examined: the first consists of changes in the mean level of constant-level time series; the second also considers constant-level series but focuses on the emergence of trends; the third consists of changes in the trend of series with a pre-existing trend. Among the results, one stands out: for most of the situations studied, the techniques based on the Tracking Signal statistic outperformed the others in the efficiency of bias detection.
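
As a concrete illustration of the family of techniques compared above, here is a minimal sketch of one classical tracking-signal scheme (Trigg's smoothed-error signal) monitoring simple exponential smoothing forecasts. The smoothing constants, the alarm threshold, and the warm-up length are illustrative assumptions, not values from the dissertation.

```python
import numpy as np

def ses_with_tracking_signal(demand, alpha=0.2, beta=0.1,
                             threshold=0.5, warmup=5):
    """Simple exponential smoothing with Trigg's smoothed-error tracking
    signal; returns forecasts and the time indices where bias is flagged."""
    level = demand[0]
    e_smooth = 0.0        # exponentially smoothed forecast error
    mad = 1e-8            # exponentially smoothed mean absolute deviation
    forecasts, alarms = [], []
    for t, d in enumerate(demand[1:], start=1):
        forecast = level
        forecasts.append(forecast)
        err = d - forecast
        e_smooth = beta * err + (1 - beta) * e_smooth
        mad = beta * abs(err) + (1 - beta) * mad
        ts = e_smooth / mad            # tracking signal, always in [-1, 1]
        # skip the first few periods so the smoothed statistics settle
        if t > warmup and abs(ts) > threshold:
            alarms.append(t)
        level = level + alpha * err    # SES level update
    return np.array(forecasts), alarms
```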
363

Statistical Predictions Based on Accelerated Degradation Data and Spatial Count Data

Duan, Yuanyuan 04 March 2014
This dissertation develops methods for statistical prediction based on several types of data from different areas, with applications in reliability and spatial epidemiology. Chapter 1 gives a general introduction to statistical prediction. Chapters 2 and 3 investigate the photodegradation of an organic coating, which is mainly caused by ultraviolet (UV) radiation but is also affected by environmental factors, including temperature and humidity. In Chapter 2, we identify a physically motivated nonlinear mixed-effects model, including the effects of environmental variables, to describe the degradation path; unit-to-unit variability is modeled through random effects. Maximum likelihood is used to estimate parameters from accelerated test data collected in the laboratory. The developed model is then extended to allow for time-varying covariates and used to predict outdoor degradation, where the explanatory variables vary over time. Chapter 3 introduces a class of models for analyzing degradation data with dynamic covariate information. We use a general path model with random effects to describe the degradation paths and a vector time series model to describe the covariate process; shape-restricted splines estimate the effects of dynamic covariates on the degradation process. The unknown parameters of these models are estimated by maximum likelihood, and algorithms for computing the estimated lifetime distribution are described. The proposed methods are applied to predict the photodegradation path of an organic coating in a complicated dynamic environment. Chapter 4 investigates the emergence of Lyme disease in Virginia at the census tract level. Based on areal (census tract level) counts of Lyme disease cases in Virginia from 1998 to 2011, we analyze the spatial patterns of the disease using statistical smoothing techniques, and use space and space-time scan statistics to reveal clusters in the spatial and spatio-temporal distribution of Lyme disease. Chapter 5 builds a predictive model for Lyme disease based on historical data and environmental/demographic information for each census tract. We propose a Divide-Recombine method to take advantage of parallel computing; simulation studies show that it provides comparable fitting and prediction accuracy while being far more computationally efficient. We also apply the proposed method to the Virginia Lyme disease spatio-temporal data; the method makes large-scale spatio-temporal prediction possible. Chapter 6 reviews the contributions of this dissertation and discusses directions for future research. / Ph. D.
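
The Divide-Recombine idea in Chapter 5 can be sketched in a few lines: fit the same model independently on disjoint blocks of the data, in parallel, then recombine the block estimates. The sketch below uses ordinary least squares as a stand-in for the dissertation's spatio-temporal model and simple averaging as the recombination rule; both are assumptions for illustration only.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def fit_ols(block):
    """Fit the per-block model; OLS stands in for the real model here."""
    X, y = block
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

def divide_recombine(X, y, n_blocks=8):
    """Split the data into disjoint blocks, fit each block in parallel,
    and recombine by averaging the block estimates."""
    blocks = list(zip(np.array_split(X, n_blocks),
                      np.array_split(y, n_blocks)))
    # process pools require a module-level fit function and, on some
    # platforms, an `if __name__ == "__main__":` guard around the caller
    with ProcessPoolExecutor() as pool:
        betas = list(pool.map(fit_ols, blocks))
    return np.mean(betas, axis=0)
```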
364

Graph Cut Based Mesh Segmentation Using Feature Points and Geodesic Distance

Liu, L., Sheng, Y., Zhang, G., Ugail, Hassan January 2015
Both prominent feature points and geodesic distance are key factors for mesh segmentation. Using these two factors, this paper proposes a graph cut based mesh segmentation method. The mesh is first preprocessed by Laplacian smoothing. Candidate feature points are then selected according to Gaussian curvature using a predefined threshold. With DBSCAN (Density-Based Spatial Clustering of Applications with Noise), the candidate points are separated into clusters, and the point with the maximum curvature in each cluster is taken as a final feature point. We label these feature points and regard the faces of the mesh as nodes for graph cut. Our energy function is constructed from the ratio between the geodesic distance and the Euclidean distance of vertex pairs of the mesh. The final segmentation is obtained by minimizing the energy function using graph cut. The proposed algorithm is pose-invariant and robustly segments the mesh into parts consistent with the selected feature points.
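
A hedged sketch of the feature-point selection step described above, using scikit-learn's DBSCAN: vertices whose Gaussian curvature exceeds a threshold become candidates, the candidates are clustered, and the highest-curvature vertex per cluster is kept. Clustering on Euclidean vertex coordinates with the parameters `eps` and `min_samples` is an illustrative choice; the abstract does not specify these details.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def select_feature_points(vertices, gauss_curv, curv_thresh,
                          eps=0.05, min_samples=3):
    """Pick candidate vertices by a Gaussian-curvature threshold, cluster
    them with DBSCAN, and keep the highest-curvature vertex per cluster."""
    cand = np.where(gauss_curv > curv_thresh)[0]
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(vertices[cand])
    feature_pts = []
    for lab in set(labels) - {-1}:          # label -1 marks DBSCAN noise
        members = cand[labels == lab]
        feature_pts.append(members[np.argmax(gauss_curv[members])])
    return np.array(feature_pts)
```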
365

EFFICIENT INFERENCE AND DOMINANT-SET BASED CLUSTERING FOR FUNCTIONAL DATA

Xiang Wang (18396603) 03 June 2024
<p dir="ltr">This dissertation addresses three progressively fundamental problems for functional data analysis: (1) To do efficient inference for the functional mean model accounting for within-subject correlation, we propose the refined and bias-corrected empirical likelihood method. (2) To identify functional subjects potentially from different populations, we propose the dominant-set based unsupervised clustering method using the similarity matrix. (3) To learn the similarity matrix from various similarity metrics for functional data clustering, we propose the modularity guided and dominant-set based semi-supervised clustering method.</p><p dir="ltr">In the first problem, the empirical likelihood method is utilized to do inference for the mean function of functional data by constructing the refined and bias-corrected estimating equation. The proposed estimating equation not only improves efficiency but also enables practically feasible empirical likelihood inference by properly incorporating within-subject correlation, which has not been achieved by previous studies.</p><p dir="ltr">In the second problem, the dominant-set based unsupervised clustering method is proposed to maximize the within-cluster similarity and applied to functional data with a flexible choice of similarity measures between curves. The proposed unsupervised clustering method is a hierarchical bipartition procedure under the penalized optimization framework with the tuning parameter selected by maximizing the clustering criterion called modularity of the resulting two clusters, which is inspired by the concept of dominant set in graph theory and solved by replicator dynamics in game theory. The advantage offered by this approach is not only robust to imbalanced sizes of groups but also to outliers, which overcomes the limitation of many existing clustering methods.</p><p dir="ltr">In the third problem, the metric-based semi-supervised clustering method is proposed with similarity metric learned by modularity maximization and followed by the above proposed dominant-set based clustering procedure. Under semi-supervised setting where some clustering memberships are known, the goal is to determine the best linear combination of candidate similarity metrics as the final metric to enhance the clustering performance. Besides the global metric-based algorithm, another algorithm is also proposed to learn individual metrics for each cluster, which permits overlapping membership for the clustering. This is innovatively different from many existing methods. This method is superiorly applicable to functional data with various similarity metrics between functional curves, while also exhibiting robustness to imbalanced sizes of groups, which are intrinsic to the dominant-set based clustering approach.</p><p dir="ltr">In all three problems, the advantages of the proposed methods are demonstrated through extensive empirical investigations using simulations as well as real data applications.</p>
366

Likelihood Ratio Combination of Multiple Biomarkers and Change Point Detection in Functional Time Series

Du, Zhiyuan 24 September 2024
Utilizing multiple biomarkers in medical research is crucial for the diagnostic accuracy of disease detection. An optimal method for combining these biomarkers is essential to maximize the area under the receiver operating characteristic (ROC) curve (AUC). The optimality of the likelihood ratio has been proven, but challenges persist in estimating it, chiefly in the estimation of multivariate density functions. In this study, we propose a nonparametric approach that uses smoothing spline density estimation to approximate the full likelihood function for both the diseased and non-diseased groups, which together compose the likelihood ratio. Simulation results demonstrate the efficiency of our method compared with other biomarker combination techniques across various settings for the generated biomarker values. We also apply the proposed method to a real-world study aimed at detecting childhood autism spectrum disorder (ASD), showcasing its practical relevance and potential for future applications in medical research.

Change point detection for functional time series has attracted considerable attention from researchers. Existing methods either rely on functional principal component analysis (FPCA), which may perform poorly with complex data, or use bootstrap approaches in forms that fall short of detecting diverse change functions effectively. We propose a novel self-normalized (SN) test for functional time series, implemented via a non-overlapping block bootstrap to avoid reliance on FPCA. The SN factor ensures both monotonic power and adaptability for detecting diverse change functions in complex data. We also demonstrate the test's robustness in detecting changes in the autocovariance operator. Simulation studies confirm the superior performance of the test across various settings, and real-world applications further illustrate its practical utility. / Doctor of Philosophy / In medical research, it is crucial to accurately detect diseases and predict patient outcomes using multiple health indicators, also known as biomarkers. Combining these biomarkers effectively can significantly improve our ability to diagnose and treat various health conditions, yet finding the best way to combine them has been a long-standing challenge. In this study, we propose a new, easy-to-understand method for combining multiple biomarkers using advanced estimation techniques. The method takes various factors into account and provides a more accurate way to evaluate the combined information from different biomarkers. Through simulations, we demonstrate that it performs better than existing methods under a variety of scenarios, and we apply it to a real-world study on detecting childhood autism spectrum disorder (ASD), highlighting its practical value. Detecting changes in patterns over time, especially shifts in averages, is another important focus of data analysis. Existing methods often rely on techniques that may not perform well with complex data or are limited in the types of changes they can detect. We introduce a new approach that improves the accuracy of detecting changes in complex data patterns; it is flexible and can identify changes in both the mean and the variation of the data over time. Simulations demonstrate that this approach is more accurate than current methods, and an application to real-world climate research data illustrates its practical value.
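
A minimal sketch of the likelihood-ratio combination idea, with Gaussian kernel density estimation standing in for the smoothing spline density estimator actually proposed: estimate the joint biomarker density in each group and score subjects by the density ratio, which by the Neyman-Pearson argument yields the AUC-optimal combination when the densities are well estimated.

```python
import numpy as np
from scipy.stats import gaussian_kde

def likelihood_ratio_score(train_pos, train_neg):
    """Combine multiple biomarkers via an estimated likelihood ratio.
    Inputs are (p, n) arrays of biomarker values for the diseased and
    non-diseased training groups; returns a scoring function."""
    f_pos = gaussian_kde(train_pos)    # joint density, diseased group
    f_neg = gaussian_kde(train_neg)    # joint density, non-diseased group

    def score(x):
        # x is a (p, m) array of m subjects; guard against division by ~0
        return f_pos(x) / np.maximum(f_neg(x), 1e-300)

    return score
```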
367

Some Advanced Model Selection Topics for Nonparametric/Semiparametric Models with High-Dimensional Data

Fang, Zaili 13 November 2012
Model and variable selection have attracted considerable attention in application areas where datasets often contain thousands of variables. Variable selection is a critical step in reducing the dimension of high-dimensional data by eliminating irrelevant variables. The general objective of variable selection is not only to obtain a cost-effective set of predictors but also to improve prediction while controlling prediction variance. We make several contributions to this problem across a range of advanced topics: providing a graphical view of Bayesian variable selection (BVS), recovering sparsity in multivariate nonparametric models, and proposing a testing procedure for evaluating nonlinear interaction effects in a semiparametric model. For the first topic, we propose a new Bayesian variable selection approach via the graphical model and the Ising model, which we refer to as the "Bayesian Ising Graphical Model" (BIGM). The BIGM has several advantages: it (1) supports both single-site updating and cluster-updating algorithms, which suit problems with small sample sizes and a larger number of variables, (2) extends to nonparametric regression models, and (3) can incorporate graphical prior information. For the second topic, we propose a Nonnegative Garrote on a Kernel machine (NGK) to recover the sparsity of input variables in smoothing functions. We model the smoothing function with a least squares kernel machine and construct a nonnegative garrote on the kernel model as a function of the similarity matrix; an efficient coordinate descent/backfitting algorithm is developed. The third topic involves a genetic pathway dataset in which the pathways interact with environmental variables. We propose a semiparametric method to model the pathway-environment interaction, and then employ a restricted likelihood ratio test and a score test to evaluate the main pathway effect and the pathway-environment interaction. / Ph. D.
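
The single-site updating mentioned in advantage (1) can be sketched generically: sweep over the binary inclusion vector and flip each coordinate with its Gibbs conditional probability. The `log_post` function, which in the BIGM would combine the Ising prior with the marginal likelihood of the selected model, is deliberately left abstract here; its form is an assumption of the sketch.

```python
import numpy as np

def single_site_sweep(gamma, log_post, rng):
    """One sweep of single-site Gibbs updating over a binary inclusion
    vector gamma. log_post(gamma) returns the unnormalized log posterior,
    e.g. an Ising prior plus the marginal likelihood of the chosen model."""
    for j in range(len(gamma)):
        flipped = gamma.copy()
        flipped[j] = 1 - flipped[j]
        # Gibbs conditional: P(flip) = post(flipped)/(post(gamma)+post(flipped))
        log_odds = np.clip(log_post(flipped) - log_post(gamma), -50, 50)
        if rng.random() < 1.0 / (1.0 + np.exp(-log_odds)):
            gamma = flipped
    return gamma
```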
368

盈餘品質與盈餘管理實證研究-以台灣上市公司為例 / The Empirical Study of Earning Quality and Motivation of Earning Management – The Example of publicly listed Taiwanese companies

林鈺凱, Lin, Yu Kai Unknown Date
During the last few years, there have been numerous cases of financial manipulation and scandals, and the participation of management in creative accounting has grown worse, imposing disciplinary risks and unnecessary costs on the capital market. To draw investors' attention to earnings quality and to provide a more objective understanding of earnings management, this study proposes two earnings-quality classifications built on different bases, and examines how financial characteristics and earnings-management components of publicly listed Taiwanese companies differ under the two classifications. Most existing local literature discusses the nature of earnings quality or the phenomenon of earnings management separately; combining the two topics in one study is a new attempt. The first part of the study defines earnings quality; the second part examines earnings management in detail. A regression analysis was conducted on 381 publicly listed firms in Taiwan from the third quarter of 2002 to the third quarter of 2004, for a total of 3,429 sample points. First, the samples' earnings quality was classified on two bases: (1) the relation between earnings and operating cash flow, and (2) the comparison between the growth rate of accounts receivable and the growth rate of revenue. After classification, earnings quality was used to test valuation ability and earnings persistence. Earnings quality based on the relation between earnings and operating cash flow added incremental valuation ability, whereas the classification based on receivables growth versus revenue growth did not; both classifications contributed significantly to earnings persistence. The second part separated the samples into earnings-smoothing and non-smoothing firms to examine the strength of earnings valuation, and found that smoothing does not affect valuation ability. Introducing earnings quality and cross-grouping, the group combining high earnings quality with non-smoothing ("Quality Non-Smoother") had the highest earnings valuation coefficient. Earnings were further decomposed into three components: operating cash flow, discretionary accruals, and non-discretionary accruals. Focusing on discretionary accruals and again introducing earnings quality, discretionary accruals were found to have valuation ability, but under both classifications high-earnings-quality discretionary accruals added no incremental valuation contribution; for persistence, both earnings-quality indicators contributed positively to discretionary accruals. Finally, to test how earnings quality responds to earnings-management incentives, the incentives were split into two targets, reaching break-even and exceeding prior-period earnings, and earnings quality was added to observe the interaction. Earnings management was indeed present under both targets, and under both classifications the earnings-quality variables suppressed both management incentives.
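
A hedged sketch of the two classification bases described above; the column names and the direction of each comparison are illustrative assumptions, since the abstract does not give the exact cutoffs used in the thesis.

```python
import pandas as pd

def classify_earnings_quality(df):
    """Tag firm-quarters as high/low earnings quality under the two bases
    described above; column names are hypothetical stand-ins."""
    out = df.copy()
    # Basis 1: earnings backed by operating cash flow
    out["eq_cash_based"] = (out["op_cash_flow"] >= out["earnings"]).map(
        {True: "high", False: "low"})
    # Basis 2: receivables growth should not outpace revenue growth
    out["eq_ar_based"] = (out["ar_growth"] <= out["revenue_growth"]).map(
        {True: "high", False: "low"})
    return out
```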
369

Les inégalités sociales dans la durée de vie la plus commune : la répartition des décès selon l'âge et le quintile de défavorisation au Québec en 2000-2002 et 2005-2007

Lecours, Chantale 10 1900
"Social inequalities in the most common age at death: the distribution of deaths by age and deprivation quintile in Quebec in 2000-2002 and 2005-2007." We chose to focus our analysis on social inequalities in mortality at older ages specifically. The modal age at death, combined with the dispersion of deaths above this age, is particularly well suited to capturing such disparities, because these measures do not depend on premature mortality. From the distribution of ages at death by level of deprivation in Quebec during the periods 2000-2002 and 2005-2007, we determined the most common age at death and the dispersion of lifespans above it. The distribution of deaths by age and level of deprivation was estimated with a nonparametric P-spline smoothing approach developed by Nadine Ouellette in her doctoral thesis. Our results show that the modal age at death does not detect disparities in mortality among women by level of deprivation in Quebec in 2000-2002 or 2005-2007. Nevertheless, female mortality shifted toward older ages while the compression of mortality appears to have stabilized. For men, social inequalities in mortality are particularly large between the most and the least favored subgroups. The most common male lifespan shifted toward older ages regardless of the level of deprivation; however, unlike for their female counterparts, the compression of mortality still appears to be ongoing.
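
A minimal sketch of the measurement pipeline described above: smooth the death counts, take the modal age at death, and compute the dispersion of deaths above the mode. A discrete Whittaker smoother stands in here for the Poisson P-spline approach of Ouellette's thesis, and the penalty `lam` is an illustrative choice.

```python
import numpy as np

def whittaker_smooth(y, lam=100.0, d=2):
    """Discrete P-spline (Whittaker) smoother: penalizes d-th differences."""
    n = len(y)
    D = np.diff(np.eye(n), n=d, axis=0)
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, y)

def modal_age_and_dispersion(deaths, ages, lam=100.0):
    """Return the modal age at death and the standard deviation of ages
    at death above the mode, computed from a smoothed death distribution."""
    dx = whittaker_smooth(deaths.astype(float), lam)
    m_idx = np.argmax(dx)              # modal age at death
    mode = ages[m_idx]
    w, a = dx[m_idx:], ages[m_idx:]    # deaths above the mode
    sd_above = np.sqrt(np.sum(w * (a - mode) ** 2) / np.sum(w))
    return mode, sd_above
```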
370

Tillämpning av batterilager som energitjänsten lastutjämnare : En studie om batterilagring för en medelstor abonnent i Varberg Energis elnät / Application of battery energy storage as smoothening of power fluctuation

Al-imarah, Amena, Stenberg, Elin January 2016
This thesis, a literature review and a quantitative study of battery energy storage as a provider of the load-smoothing energy service, is based on the 2015 load profile of a supermarket in the city of Varberg. The aim has been to assess the suitability of a battery storage for load smoothing. To that end, the characteristics of battery storage were surveyed in the literature review, and storage sizes were dimensioned for two operating modes, here called technical dimensioning and economic dimensioning; the economic savings potential was calculated for both. The technical dimensioning smooths the power drawn from the grid, while the economic dimensioning lets the supermarket buy more energy during low-price hours. Based on monthly load characteristics, each mode yielded two candidate storage sizes, one medium and one small. The technical dimensioning resulted in storages of 617 kWh and 555 kWh, corresponding to 7.1% and 5.8% of the supermarket's daily energy use; the economic dimensioning resulted in storages of 597 kWh and 233 kWh, corresponding to 6.8% and 2.8% of daily energy use. The economic savings potential is greatest for a mixed operation of the two modes, switching according to the expected daily load and Elspot prices. Even though the savings were estimated under ideal conditions, with no losses and no degraded performance, investing in a battery storage solely to provide load smoothing does not pay off today; such an investment only becomes potentially profitable when the storage can supply several energy services or when the alternative cost is higher.
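
A hedged sketch of the technical-dimensioning (load-smoothing) mode described above, as a greedy peak-shaving dispatch against a target grid level. The mean-load target, the half-full initial state of charge, and the loss-free battery model are illustrative assumptions, consistent with the thesis's ideal-conditions estimates but not taken from it.

```python
import numpy as np

def peak_shave(load_kw, cap_kwh, p_max_kw, dt_h=1.0):
    """Greedy peak-shaving dispatch: discharge above a target grid level,
    recharge below it. Returns the smoothed grid load profile (kW)."""
    target = np.mean(load_kw)          # flatten the profile toward its mean
    soc = cap_kwh / 2                  # assume the battery starts half full
    grid = np.empty_like(load_kw, dtype=float)
    for i, p in enumerate(load_kw):
        if p > target:                 # discharge to shave the peak
            dis = min(p - target, p_max_kw, soc / dt_h)
            soc -= dis * dt_h
            grid[i] = p - dis
        else:                          # recharge in the valley
            chg = min(target - p, p_max_kw, (cap_kwh - soc) / dt_h)
            soc += chg * dt_h
            grid[i] = p + chg
    return grid
```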
