Elasticity in IaaS Cloud, Preserving Performance SLAs

Dhingra, Mohit January 2014 (has links) (PDF)
Infrastructure-as-a-Service (IaaS), one of the service models of cloud computing, provides resources in the form of Virtual Machines (VMs). Many applications hosted on the IaaS cloud have time-varying workloads. These kinds of applications benefit from the on-demand provisioning characteristic of cloud platforms. Applications with time-varying workloads demand time-varying resources in IaaS, which requires elastic resource provisioning so that their performance remains intact. In current IaaS cloud systems, VMs are static in nature, as their configurations do not change once they are instantiated. Therefore, fluctuation in resource demand is handled in two ways: allocating more VMs to the application (horizontal scaling) or migrating the application to another VM with a different configuration (vertical scaling). This forces customers to characterize their workloads at a coarse-grained level, which potentially leads to under-utilized VM resources or an under-performing application. Furthermore, the current IaaS architecture does not provide performance guarantees to applications, because of two major factors: 1) performance metrics of the application are not used in the resource allocation mechanisms of the IaaS, and 2) current resource allocation mechanisms do not consider virtualization overheads, which can significantly impact the application's performance, especially for I/O workloads. In this work, we develop an Elastic Resource Framework for IaaS, which provides a flexible resource provisioning mechanism and at the same time preserves the performance of applications as specified by the Service Level Agreement (SLA). For identification of workloads that need elastic resource allocation, variability is defined as a metric and is associated with the definition of elasticity of a resource allocation system. We introduce new components, a Forecasting Engine based on a Cost Model and a Resource Manager, into the OpenNebula IaaS cloud, which compute an optimal resource requirement for the next scheduling cycle based on prediction. The scheduler takes this as input and enables fine-grained resource allocation by dynamically adjusting the size of the VM. Since the prediction may not always be entirely correct, there might be under-allocation or over-allocation of resources due to forecast errors. The design of the cost model accounts for both over-allocation of resources and SLA violations caused by under-allocation of resources. Also, proper resource allocation requires consideration of the virtualization overhead, which is not captured by current monitoring frameworks. We modify existing monitoring frameworks to monitor virtualization overhead and provide fine-grained monitoring information in the Virtual Machine Monitor (VMM) as well as the VMs. In our approach, the performance of the application is preserved by 1) binding the application-level performance SLA to resource allocation, and 2) accounting for virtualization overhead while allocating resources. The proposed framework is implemented using forecasting strategies such as the Seasonal AutoRegressive Integrated Moving Average (Seasonal ARIMA) model and the Gaussian Process model; however, the framework is generic enough to use any other forecasting strategy as well. It is applied to real workloads, namely web server and mail server workloads, obtained from the Supercomputer Education and Research Centre, Indian Institute of Science.
The results show that a significant reduction in resource requirements can be obtained while preserving application performance by restricting SLA violations. We further show that more intelligent scaling decisions can be made using the monitoring information provided by the modified monitoring framework.
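As an illustration of the prediction-driven sizing idea described in this abstract, the following is a minimal sketch, not code from the thesis: it assumes hourly CPU-demand samples, an illustrative SARIMA(1,0,1)(1,0,1)24 order, and a hypothetical 5% headroom standing in for the thesis's actual cost model.

```python
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

def forecast_next_cycle(cpu_demand: pd.Series, season: int = 24) -> float:
    """Predict demand one scheduling cycle ahead with a seasonal ARIMA fit."""
    fitted = SARIMAX(cpu_demand, order=(1, 0, 1),
                     seasonal_order=(1, 0, 1, season)).fit(disp=False)
    return float(fitted.forecast(steps=1).iloc[0])

def next_allocation(cpu_demand: pd.Series, headroom: float = 0.05) -> float:
    """Size the VM slightly above the forecast to absorb under-prediction,
    trading a little over-allocation against SLA-violation risk."""
    return forecast_next_cycle(cpu_demand) * (1.0 + headroom)
```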

Forecasting Mortality Rates using the Weighted Hyndman-Ullah Method

Ramos, Anthony Kojo January 2021 (has links)
The performance of three methods of mortality modelling and forecasting is compared: the basic Lee–Carter model and two functional demographic models, the basic Hyndman–Ullah and the weighted Hyndman–Ullah. Using age-specific data from the Human Mortality Database for two developed countries, France and the UK (England & Wales), these methods are compared through within-sample forecasting for the years 1999-2018. The weighted Hyndman–Ullah method is judged superior among the three through a comparison of mean forecast errors and qualitative inspection of the datasets of the selected countries. The weighted HU method is then used to conduct a 32-year-ahead forecast to the year 2050.
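For orientation, here is a minimal sketch of the basic Lee–Carter fit, the baseline model the thesis compares against: estimation by SVD with the period index extrapolated as a random walk with drift. The matrix layout and normalisation follow the standard textbook convention, not anything specific to this thesis.

```python
import numpy as np

def lee_carter_forecast(log_rates: np.ndarray, horizon: int) -> np.ndarray:
    """log_rates: (ages x years) matrix of log death rates."""
    ax = log_rates.mean(axis=1)                      # average age pattern
    U, s, Vt = np.linalg.svd(log_rates - ax[:, None], full_matrices=False)
    bx = U[:, 0] / U[:, 0].sum()                     # normalise so sum(bx) = 1
    kt = s[0] * Vt[0] * U[:, 0].sum()                # mortality period index
    drift = (kt[-1] - kt[0]) / (len(kt) - 1)         # random walk with drift
    kt_future = kt[-1] + drift * np.arange(1, horizon + 1)
    return ax[:, None] + np.outer(bx, kt_future)     # forecast log rates
```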

Investigation and forecasting drift component of a gas sensor

Chowdhury Tondra, Farhana January 2021 (has links)
Chemical-sensor-based systems used for the detection, identification, or quantification of various gases are very complex in nature. Sensor response data, collected as multivariate time series signals, undergo a gradual change of the sensor characteristics (known as sensor drift) for several reasons. In this thesis, the drift component of a silicon carbide Field-Effect Transistor (SiC-FET) sensor was analyzed using time series methods. The data were collected from an experiment measuring the output response of the sensor to gases emitted by a test object at different temperatures. An Augmented Dickey–Fuller (ADF) test was carried out to analyze the sensor drift, which revealed that a stochastic trend along with a deterministic trend characterized the drift components of the sensor. The drift built up across the daily measurements, contributing to the total drift. / The traditional Autoregressive Integrated Moving Average (ARIMA) model and the deep-learning-based Long Short-Term Memory (LSTM) algorithm were used to forecast the sensor drift on a reduced dataset. However, reducing the data size degraded forecasting accuracy and caused loss of information. Therefore, the data were instead selected carefully, using only one temperature from the temperature cycle rather than all time points. Forecasts of sensor drift from this selected sensor-array data outperformed those from the reduced dataset for both the traditional and the deep learning methods.
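A small sketch of the stationarity check described above, run on a synthetic drifting signal standing in for one sensor channel (the signal and the 5% level are assumptions); with regression="ct" the alternative hypothesis includes a deterministic trend, so a non-rejection points toward a stochastic trend, i.e. drift.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
# Toy stand-in for one SiC-FET channel: a random walk with a small drift
sensor_response = np.cumsum(rng.normal(0.01, 1.0, 2000))

stat, pvalue, *_ = adfuller(sensor_response, regression="ct")
if pvalue > 0.05:
    print(f"ADF p = {pvalue:.3f}: unit root not rejected -> stochastic trend")
else:
    print(f"ADF p = {pvalue:.3f}: series looks (trend-)stationary")
```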

Forecasting anomalies in time series data from online production environments

Sseguya, Raymond January 2020 (has links)
Anomaly detection on time series forecasts can be used by many industries, especially in forewarning systems that can predict anomalies before they happen. Infor (Sweden) AB is a software company that provides Enterprise Resource Planning cloud solutions. Infor is interested in predicting anomalies in its data, and that is the motivation for this thesis work. The general idea is to first forecast the time series and then detect and classify anomalies on the forecasted values. The forecasting is done using two strategies, namely the recursive strategy and the direct strategy. The recursive strategy includes two methods: AutoRegressive Integrated Moving Average and Neural Network AutoRegression. The direct strategy is implemented with ForecastML-eXtreme Gradient Boosting. The three methods are then compared with respect to forecasting performance. The anomaly detection and classification is done by setting a decision rule based on a threshold. Since the true anomaly thresholds were not previously known, an arbitrary initial anomaly threshold is set using a combination of statistical methods for outlier detection and human judgement by the company commissioners. These statistical methods include Seasonal and Trend decomposition using Loess + InterQuartile Range, Twitter + InterQuartile Range, and Twitter + GESD (Generalized Extreme Studentized Deviate). After defining what an anomaly threshold is in the usage context of Infor (Sweden) AB, a decision rule is set and used to classify anomalies in the time series forecasts. The results from comparing the classifications of the forecasts from the three forecasting methods are inconclusive, and no recommendation is made concerning which model or algorithm should be used by Infor (Sweden) AB. However, the thesis concludes by recommending other methods that can be tried in future research.
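To make the threshold idea concrete, here is a sketch of one of the named heuristics (STL decomposition + InterQuartile Range on the remainder); the period and the 1.5 fence factor are conventional defaults, not values from the thesis.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL

def iqr_threshold(series: pd.Series, period: int = 24, k: float = 1.5):
    """Anomaly fences computed from the remainder of an STL decomposition."""
    remainder = STL(series, period=period).fit().resid
    q1, q3 = np.percentile(remainder, [25, 75])
    return q1 - k * (q3 - q1), q3 + k * (q3 - q1)

# Decision rule: a forecasted point whose residual falls outside the
# (lo, hi) fences is classified as an anomaly.
```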

A Statistical Methodology for Classifying Time Series in the Context of Climatic Data

Ramírez Buelvas, Sandra Milena 24 February 2022 (has links)
According to different European standards and several studies, it is necessary to monitor and analyze the microclimatic conditions in museums and similar buildings, with the goal of preserving artworks. With the aim of offering tools to monitor the climatic conditions, a new statistical methodology for classifying time series of different climatic parameters, such as relative humidity and temperature, is proposed in this dissertation. The methodology consists of applying a classification method using variables that are computed from time series. The first two classification methods are versions of known sparse methods which had not been applied to time-dependent data. The third method is a new proposal that uses two known algorithms. These classification methods are based on different versions of sparse partial least squares discriminant analysis (sPLS-DA, SPLSDA, and sPLS) and Linear Discriminant Analysis (LDA). The variables computed from the time series correspond to parameter estimates from functions, methods, or models commonly found in the area of time series, e.g., the seasonal ARIMA model, the seasonal ARIMA-TGARCH model, the seasonal Holt-Winters method, the spectral density function, the autocorrelation function (ACF), the partial autocorrelation function (PACF), and the moving range (MR), among other functions. Also, some variables employed in the field of astronomy (for classifying stars) were proposed. The methodology consists of two parts. First, the different variables are computed by applying the methods, models, or functions mentioned above to the time series. Then, once the variables are calculated, they are used as input for a classification method such as sPLS-DA, SPLSDA, or sPLS with LDA (the new proposal). When there was no prior information about the clusters of the different time series, the first two components from principal component analysis (PCA) were used as input to the k-means algorithm for identifying possible clusters of the time series. In addition, results from the random forest algorithm were compared with results from sPLS-DA. This study analyzed three sets of time series of relative humidity or temperature, recorded in different buildings (Valencia's Cathedral, the archaeological site of L'Almoina, and the baroque church of Saint Thomas and Saint Philip Neri) in Valencia, Spain. The clusters of the time series were analyzed according to the different zones or different sensor heights used for monitoring the climatic conditions in these buildings. The random forest algorithm and the different versions of sparse PLS helped identify the main variables for classifying the time series. The results from sPLS-DA and random forest were very similar when the input variables came from the seasonal Holt-Winters method or from functions applied to the time series. Although the classification error rates of random forest were slightly better than those of sPLS-DA, the results from sPLS-DA were easier to interpret. When the different versions of sparse PLS used variables from the seasonal Holt-Winters method as input, the clusters of the time series were discriminated most effectively; these variables gave the best, or second best, results according to the classification error rate. Among the different versions of sparse PLS proposed, sPLS with LDA achieved the best discrimination of the time series, with the lowest classification error rate while using the fewest, or second fewest, variables. We therefore propose using a sparse version of PLS (sPLS-DA, or sPLS with LDA) with variables computed from time series in order to classify the series. For the different data sets studied, the methodology produced parsimonious models with few variables and achieved a satisfactory, easily interpreted discrimination of the different clusters of the time series. This methodology can be useful for characterizing and monitoring the microclimatic conditions in museums or similar buildings according to zone or height, in order to prevent conservation problems with artworks. / I gratefully acknowledge the financial support of Pontificia Universidad Javeriana Cali – PUJ and Instituto Colombiano de Crédito Educativo y Estudios Técnicos en el Exterior – ICETEX, which awarded me the scholarships 'Convenio de Capacitación para Docentes O. J. 086/17' and 'Programa Crédito Pasaporte a la Ciencia ID 3595089 foco-reto salud', respectively. The scholarships were essential for obtaining the Ph.D. I also gratefully acknowledge the financial support of the European Union's Horizon 2020 research and innovation programme under grant agreement No. 814624. / Ramírez Buelvas, SM. (2022). A Statistical Methodology for Classifying Time Series in the Context of Climatic Data [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/181123 / TESIS
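A rough sketch of the pipeline's overall shape, under stated assumptions: each series is reduced to a feature vector (here only a few ACF lags and Holt-Winters smoothing parameters), and clusters are discriminated with plain PLS-DA (PLS regression on one-hot labels plus argmax). scikit-learn has no sparse PLS, so the sparsity penalty central to the thesis is omitted here.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from statsmodels.tsa.stattools import acf
from statsmodels.tsa.holtwinters import ExponentialSmoothing

def features(series, period=24):
    """Summarise one series: first 5 ACF lags + Holt-Winters parameters."""
    hw = ExponentialSmoothing(series, trend="add", seasonal="add",
                              seasonal_periods=period).fit()
    # 'smoothing_*' keys are the statsmodels (>= 0.12) parameter names
    return np.r_[acf(series, nlags=5)[1:],
                 hw.params["smoothing_level"],
                 hw.params["smoothing_trend"],
                 hw.params["smoothing_seasonal"]]

def plsda_fit_predict(X_train, y_train, X_new, n_components=2):
    """PLS-DA: regress one-hot labels on features, pick the top class score."""
    Y = np.eye(int(y_train.max()) + 1)[y_train]   # one-hot cluster labels
    pls = PLSRegression(n_components=n_components).fit(X_train, Y)
    return pls.predict(X_new).argmax(axis=1)
```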

Risk Management In Reservoir Operations In The Context Of Undefined Competitive Consumption

Salami, Yunus 01 January 2012 (has links)
Dams and reservoirs with multiple purposes require effective management to fully realize their purposes and maximize efficiency. For instance, a reservoir intended mainly for flood control and hydropower generation is a system whose primary objectives conflict with each other: higher hydraulic heads are required to achieve the hydropower generation objective, while relatively lower reservoir levels are required to fulfill flood control objectives. Protracted imbalances between the two can increase the susceptibility of the system to risks of water shortage or flood, depending on inflow volumes and the effectiveness of operational policies. The magnitudes of these risks become even more pronounced when upstream use of the river is unregulated and uncoordinated, so that upstream consumptions and releases are arbitrary. As a result, safe operational practices and risk management alternatives must be structured after an improved understanding of historical and anticipated inflows, actual and speculative upstream uses, and the overall hydrology of the catchments upstream of the reservoir. One such system, with an almost yearly occurrence of floods and shortages due to both natural and anthropogenic factors, is the dual reservoir system of Kainji and Jebba in Nigeria. To analyze and manage these risks, a methodology combining stochastic and deterministic approaches was employed. Using methods outlined by Box and Jenkins (1976), autoregressive integrated moving average (ARIMA) models were developed for forecasting Niger River inflows at Kainji reservoir based on twenty-seven years of historical inflow data (1970-1996). These were then validated using seven years of inflow records (1997-2003). The model with the best correlation was a seasonal multiplicative ARIMA(2,1,1)×(2,1,2)₁₂ model. Supplementary validation of this model was done with discharge rating curves developed for the inlet of the reservoir using in situ inflows and satellite altimetry data. By comparing net inflow volumes with storage deficit, flood and shortage risk factors at the reservoir were determined based on (a) actual inflows, (b) forecasted inflows (up to 2015), and (c) simulated scenarios depicting undefined competitive upstream consumption. Calculated high-risk years matched actual flood years, again suggesting the reliability of the model. Monte Carlo simulations were then used to prescribe safe outflows and storage allocations in order to reduce future risk factors. The theoretical safety levels achieved indicated risk factors below threshold values and showed that this methodology is a powerful tool for estimating and managing flood and shortage risks in reservoirs with undefined competitive upstream consumption.
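A sketch of fitting the reported model form, ARIMA(2,1,1)×(2,1,2)₁₂, to a monthly inflow series; the function name and the date-sliced calibration window are assumptions about data layout, not code from the thesis.

```python
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

def fit_and_validate(inflows: pd.Series) -> pd.Series:
    """`inflows`: monthly Niger River inflows with a DatetimeIndex."""
    train = inflows.loc["1970":"1996"]               # calibration period
    res = SARIMAX(train, order=(2, 1, 1),
                  seasonal_order=(2, 1, 2, 12)).fit(disp=False)
    return res.forecast(steps=7 * 12)                # 1997-2003 validation
```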

A Gasoline Demand Model For The United States Light Vehicle Fleet

Rey, Diana 01 January 2009 (has links)
The United States is the world's largest oil consumer, demanding about twenty-five percent of total world oil production. Whenever there are difficulties in supplying the increasing quantities of oil demanded by the market, the price of oil escalates, leading to what are known as oil price spikes or oil price shocks. The last oil price shock, the longest sustained oil price run-up in history, began in 2004 and ended in 2008. This shock initiated recognizable changes in transportation dynamics: transit operators realized that commuters switched to transit to save on gasoline costs, consumers began to search the market for more efficient vehicles, leading car manufacturers to close 'gas guzzler' plants, and the government enacted a new law, the Energy Independence and Security Act of 2007, which called for the progressive improvement of the fuel efficiency of the light vehicle fleet up to 35 miles per gallon by 2020. The past trend of gasoline consumption will probably change, so a gasoline consumption model was developed in this thesis to ascertain how some of these changes will impact future gasoline demand. Gasoline demand was expressed in oil-equivalent million barrels per day in a two-step Ordinary Least Squares (OLS) explanatory-variable model. In the first step, vehicle miles traveled, expressed in trillion vehicle miles, was regressed on the independent variables: vehicles, expressed in million vehicles, and the price of oil, expressed in dollars per barrel. In the second step, fuel consumption in million barrels per day was regressed on vehicle miles traveled and on the fuel efficiency indicator, expressed in miles per gallon. The explanatory model was run in EViews, which allows checking for normality, heteroskedasticity, and serial correlation. Serial correlation was addressed by including autoregressive or moving average error-correction terms. Multicollinearity was addressed by first differencing. The 36-year sample (1970-2006) was divided into a 30-year sub-period for calibration and a 6-year "hold-out" sub-period for validation. The Root Mean Square Error (RMSE) criterion was adopted to select the "best model" among the possible choices, although other criteria were also recorded. Three scenarios for the size of the light vehicle fleet in a forecasting period up to 2020 were created, equivalent to growth rates of 2.1, 1.28, and about 1 percent per year. The last, most optimistic vehicle growth scenario from the gasoline consumption perspective appeared consistent with the theory of vehicle saturation. One scenario for the average miles-per-gallon indicator was created for each fleet size scenario by redistributing the fleet every year assuming a 7 percent replacement rate. Three scenarios for the price of oil were also created: the first used the average price of oil in the sample since 1970, the second was obtained by extending the price trend by exponential smoothing, and the third used a long-term forecast supplied by the Energy Information Administration. The three oil price scenarios covered a range between a low of about 42 dollars per barrel and highs in the low 100s. The 1970-2006 gasoline consumption trend was extended to 2020 by ARIMA Box-Jenkins time series analysis, leading to a gasoline consumption value of about 10 million barrels per day in 2020. This trend line was taken as the reference or baseline of gasoline consumption, and the savings that resulted from applying the explanatory-variable OLS model were measured against it. Even in the most pessimistic scenario, the savings obtained through the progressive improvement of the fuel efficiency indicator appear sufficient to offset the increase in consumption that would otherwise have occurred by extension of the trend, leaving consumption at 2006 levels, or about 9 million barrels per day. The most optimistic scenario led to savings of up to about 2 million barrels per day below the 2006 level, or about 3 million barrels per day below the baseline in 2020. The "expected" or average consumption in 2020 is about 8 million barrels per day, 2 million barrels below the baseline or 1 million below the 2006 consumption level. More savings are possible if technologies such as plug-in hybrids, which have already been implemented in other countries, take over soon, are efficiently promoted, or are given incentives or subsidies such as tax credits. Future savings in gasoline consumption may help stabilize the price of oil as worldwide demand is tamed by oil-saving policy changes implemented in the United States.
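The two-step structure lends itself to a compact sketch; the column names below are assumptions, the model is fitted in first differences as the abstract describes, and the AR/MA error-correction terms added in EViews are left out here.

```python
import pandas as pd
import statsmodels.api as sm

def two_step_gasoline_model(df: pd.DataFrame):
    """df columns (assumed names): vmt, vehicles, oil_price, mpg, fuel."""
    d = df[["vmt", "vehicles", "oil_price", "mpg", "fuel"]].diff().dropna()
    # Step 1: vehicle miles traveled ~ fleet size + price of oil
    step1 = sm.OLS(d["vmt"],
                   sm.add_constant(d[["vehicles", "oil_price"]])).fit()
    # Step 2: fuel consumption ~ vehicle miles traveled + fuel efficiency
    step2 = sm.OLS(d["fuel"], sm.add_constant(d[["vmt", "mpg"]])).fit()
    return step1, step2
```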

Short-term Forecasting of EV Charging Stations Power Consumption at Distribution Scale

Clerc, Milan January 2022 (has links)
Due to the intermittent nature of renewable energy production, maintaining the stability of the power supply system is becoming a significant challenge of the energy transition. Moreover, the penetration of Electric Vehicles (EVs) and the development of a large network of charging stations will inevitably increase the pressure on the electrical grid. However, this network and the batteries connected to it also constitute a significant resource for providing ancillary services, and therefore a new opportunity to stabilize the power grid. This requires the ability to produce accurate short-term forecasts of the power consumption of charging stations at distribution scale. This work proposes a full forecasting framework, from the transformation of discrete charging-session logs into a continuous aggregated load profile, to the pre-processing of the time series and the generation of predictions. The framework is used to identify the most appropriate model for providing two-day-ahead predictions of the hourly load profile of large charging station networks. Using three years of data collected at Amsterdam's public stations, the performance of several state-of-the-art forecasting models, including Gradient Boosted Trees (GBTs) and Recurrent Neural Networks (RNNs), is evaluated and compared to a classical time series model (AutoRegressive Integrated Moving Average, ARIMA). The best performance is obtained with an Extreme Gradient Boosting (XGBoost) model using harmonic terms, past consumption values, calendar information, and temperature forecasts as prediction features. The study also highlights periodic patterns in charging behavior, as well as strong calendar effects and an influence of temperature on EV usage.
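A sketch of the winning feature recipe under stated assumptions: an hourly DatetimeIndex, two daily harmonics, and lags of 48 h and 168 h so nothing leaks inside the two-day-ahead horizon. Names and hyperparameters are illustrative, not the thesis's configuration.

```python
import numpy as np
import pandas as pd
import xgboost as xgb

def make_features(load: pd.Series, temp_forecast: pd.Series) -> pd.DataFrame:
    """Harmonic terms, past values, calendar fields, temperature forecasts."""
    t = np.arange(len(load))
    X = pd.DataFrame(index=load.index)          # hourly DatetimeIndex assumed
    for k in (1, 2):                            # daily and half-daily harmonics
        X[f"sin{k}"] = np.sin(2 * np.pi * k * t / 24)
        X[f"cos{k}"] = np.cos(2 * np.pi * k * t / 24)
    X["lag_48"] = load.shift(48)                # >= 48 h back, so no leakage
    X["lag_168"] = load.shift(168)              # same hour one week earlier
    X["hour"] = load.index.hour                 # calendar information
    X["weekday"] = load.index.weekday
    X["temp"] = temp_forecast                   # aligned forecast series
    return X.dropna()

# X = make_features(load, temp)
# model = xgb.XGBRegressor(n_estimators=500).fit(X, load.loc[X.index])
```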

INTELLIGENT MULTIPLE-OBJECTIVE PROACTIVE ROUTING IN MANET WITH PREDICTIONS ON DELAY, ENERGY, AND LINK LIFETIME

Guo, Zhihao January 2008 (has links)
No description available.

Efficient Resource Management : A Comparison of Predictive Scaling Algorithms in Cloud-Based Applications

Dahl, Johanna, Strömbäck, Elsa January 2024 (has links)
This study aims to explore predictive scaling algorithms used to predict and manage workloads in a containerized system. The goal is to identify which predictive scaling approach delivers the most effective results, contributing to research on cloud elasticity and resource management. This potentially leads to reduced infrastructure costs while maintaining efficient performance, enabling a more sustainable cloud-computing technology. The work involved the development and comparison of three different autoscaling algorithms with an interchangeable prediction component. For the predictive part, three different time-series analysis methods were used: XGBoost, ARIMA, and Prophet. A simulation system with the necessary modules was developed, as well as a designated target service to experience the load. Each algorithm's scaling accuracy was evaluated by comparing its suggested number of instances to the optimal number, with each instance representing a simulated CPU core. The results showed varying efficiency: XGBoost and Prophet excelled with richer datasets, while ARIMA performed better with limited data. Although XGBoost and Prophet maintained 100% uptime, this could lead to resource wastage, whereas ARIMA's lower uptime percentage possibly suggested a more resource-efficient, though less reliable, approach. Further analysis, particularly experimental investigation, is required to deepen the understanding of these predictors' influence on resource allocation.
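To make the evaluation step concrete, here is a sketch of how a predicted load might be mapped to a suggested instance count (one instance per simulated CPU core) and scored against the optimum; the ceiling rule and 10% buffer are guesses at a plausible policy, not the study's actual algorithms.

```python
import math

def instances_needed(predicted_load: float, core_capacity: float,
                     buffer: float = 0.10) -> int:
    """Round the provisioned core count up so forecast + headroom is covered."""
    return max(1, math.ceil(predicted_load * (1 + buffer) / core_capacity))

def scaling_accuracy(suggested: list[int], optimal: list[int]) -> float:
    """Fraction of time steps where the suggested count matched the optimum."""
    return sum(s == o for s, o in zip(suggested, optimal)) / len(optimal)
```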
