1. Munch: an efficient modularisation strategy on sequential source code check-ins — Arzoky, Mahir, January 2015
As developers create increasingly sophisticated applications, software systems grow in both complexity and size. When source code is easy to understand, the system is more maintainable, which reduces costs. Better-structured code also allows new requirements to be introduced more efficiently and with fewer issues. However, the maintenance and evolution of systems can be frustrating: it is difficult for developers to keep a fixed understanding of the system's structure, as the structure changes during maintenance. Software module clustering is the process of automatically partitioning the structure of the system, using low-level dependencies in the source code, to improve the system's structure. A large number of studies have applied the Search Based Software Engineering approach to the software module clustering problem. A software clustering tool, Munch, was developed and employed in this study to modularise a unique dataset of sequential source code software versions. The tool is based on Search Based Software Engineering techniques and consists of a number of components, including the clustering algorithm and a number of fitness functions and metrics used for measuring and assessing the quality of the clustering decompositions. The tool provides a framework for evaluating a number of clustering techniques and strategies. The dataset used in this study was provided by Quantel Limited; it derives from processed source code of a product line architecture library that has delivered numerous products. The dataset analysed is the persistence engine used by all products, comprising over 0.5 million lines of C++ across 503 software versions. This study investigates whether search-based software clustering approaches can help stakeholders understand how inter-class dependencies of the software system change over time.
It performs efficient modularisation on a time series of source code relationships, taking advantage of the fact that the nearer two versions of the source code are in time, the more similar their modularisations are expected to be. This study introduces a seeding concept and shows how it can be used to significantly reduce the runtime of the modularisation. The dataset is not treated as a sequence of separate modularisation problems; instead, the result of the previous modularisation is used to give the next graph a head start. Code structure and sequence are used to obtain more effective modularisation and to reduce the runtime of the process. To evaluate the efficiency of the modularisation, numerous experiments were conducted on the dataset; the results present strong evidence in support of the seeding strategy. To reduce the runtime further, statistical techniques for controlling the number of iterations of the modularisation, based on the similarities between time-adjacent graphs, are introduced. The convergence of the heuristic search technique is examined, and a number of stopping criteria are estimated and evaluated. Extensive experiments were conducted on the time-series dataset, and evidence is presented to support the proposed techniques. In addition, this thesis investigated and evaluated the starting clustering arrangement of Munch's clustering algorithm, and introduced and experimented with a number of starting clustering arrangements, including a uniformly random clustering arrangement strategy. Moreover, this study investigates whether the dataset used for the modularisation resembles a random graph by computing the probabilities of observing certain connectivity. This thesis demonstrates that modularisation is not possible with data that resembles random graphs, and that the dataset being used does not resemble a random graph, except for small sections where there were large maintenance activities.
Furthermore, it explores and shows how the random-graph metric can be used as a tool to indicate areas of interest in the dataset, without the need to run the modularisation. Finally, a huge amount of software has been and will continue to be developed, yet very little has been learnt from how code evolves over time. The intention of this study is also to help developers and stakeholders model the internal structure of software, to aid in modelling development trends and biases, and to try to predict the occurrence of large changes and potential refactorings. To this end, industrial feedback on the research was obtained. This thesis presents work on the detection of refactoring activities, and discusses possible applications of the findings of this research in industrial settings.
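The seeding idea described in this abstract can be sketched in a few lines. The following is a hedged, minimal stdlib illustration of hill-climbing modularisation with a seeded start; it is not Munch's actual algorithm, and the fitness form and all names here are hypothetical stand-ins for the tool's real fitness functions:

```python
import random

def fitness(deps, part):
    """Crude modularisation quality: fraction of intra-module dependencies,
    minus a penalty for oversized modules (not Munch's actual MQ fitness)."""
    intra = sum(1 for a, b in deps if part[a] == part[b]) / len(deps)
    sizes = {}
    for m in part.values():
        sizes[m] = sizes.get(m, 0) + 1
    penalty = sum((s / len(part)) ** 2 for s in sizes.values())
    return intra - penalty

def modularise(deps, nodes, seed_partition=None, iters=2000, rng=None):
    """Hill-climb a node -> module assignment. Passing the previous version's
    result as seed_partition gives the next graph a head start (the seeding idea)."""
    rng = rng or random.Random(0)
    # Start from the seed if given, otherwise from singleton modules.
    part = dict(seed_partition) if seed_partition else {n: i for i, n in enumerate(nodes)}
    modules = sorted(set(part.values()))
    best = fitness(deps, part)
    for _ in range(iters):
        n = rng.choice(nodes)
        old = part[n]
        part[n] = rng.choice(modules)  # try moving one node to another module
        score = fitness(deps, part)
        if score >= best:
            best = score
        else:
            part[n] = old  # revert a worsening move
    return part, best
```

Passing the previous version's partition as `seed_partition` starts the search at or near a good solution, so far fewer iterations are spent rediscovering structure shared between time-adjacent graphs.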
2. Combining to succeed: a novel strategy to improve forecasts from exponential smoothing models — Tiago Mendes Dantas, 04 February 2019
This thesis is situated in the context of time series forecasting. Although many approaches have been developed, simple methods such as exponential smoothing usually produce extremely competitive results, often surpassing approaches with a higher level of complexity. Seminal papers in time series forecasting showed that the combination of forecasts has the potential to dramatically reduce the forecast error. Specifically, the combination of forecasts generated by exponential smoothing has been explored in recent papers. Although this combination can be done in many ways, a recently proposed method called Bagged.BLD.MBB.ETS uses a technique called Bootstrap Aggregating (Bagging) in combination with exponential smoothing methods to generate forecasts, showing that the approach can generate more accurate monthly forecasts than all the analysed benchmarks. The approach was considered the state of the art in the use of Bagging and exponential smoothing until the development of the results obtained in this thesis. This thesis initially validates Bagged.BLD.MBB.ETS on a data set relevant from the point of view of a real application, thus expanding the fields of application of the methodology. Subsequently, the relevant sources of the error reduction are identified, and a new methodology using Bagging, exponential smoothing and clusters is proposed to treat the covariance effect, not previously identified in the method's literature. The proposed approach was tested using different types of time series from three forecasting competitions (M3, CIF 2016 and M4), as well as simulated data. The empirical results point to a substantial reduction in variance and forecast error.
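The bagging-plus-exponential-smoothing idea can be illustrated with a deliberately simplified stdlib sketch: a moving block bootstrap resamples the series, simple exponential smoothing forecasts each replica, and the forecasts are median-combined. The real Bagged.BLD.MBB.ETS additionally involves a Box-Cox transform, a decomposition step, and the full ETS model family, so everything below is an assumption-laden stand-in rather than the method itself:

```python
import random
import statistics

def ses_forecast(series, alpha=0.3):
    """Simple exponential smoothing; returns the one-step-ahead forecast."""
    level = series[0]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return level

def moving_block_bootstrap(series, block_size, rng):
    """Resample overlapping blocks and stitch them into a series of equal length."""
    blocks = [series[i:i + block_size] for i in range(len(series) - block_size + 1)]
    out = []
    while len(out) < len(series):
        out.extend(rng.choice(blocks))
    return out[:len(series)]

def bagged_ses_forecast(series, n_boot=50, block_size=4, rng=None):
    """Median-combine SES forecasts over bootstrapped replicas of the series."""
    rng = rng or random.Random(42)
    forecasts = [ses_forecast(series)]  # include the forecast from the original series
    for _ in range(n_boot):
        forecasts.append(ses_forecast(moving_block_bootstrap(series, block_size, rng)))
    return statistics.median(forecasts)
```

The median combination is what drives the variance reduction the abstract describes: each bootstrap replica perturbs the fitted forecast, and aggregating over replicas damps the effect of any single bad fit.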
3. Clustering and forecasting for rain attenuation time series data — Li, Jing, January 2017
Clustering is an unsupervised learning technique that groups similar objects into the same cluster, so that objects within a cluster are more similar to each other than to those in other clusters. Forecasting makes predictions based on past data and efficient artificial intelligence models to anticipate how the data will develop, which supports making appropriate decisions ahead of time. The datasets used in this thesis are signal attenuation time series from microwave networks. Microwave networks are communication systems that transmit information between two fixed locations on the earth. They can support the increasing capacity demands of mobile networks and play an important role in next-generation wireless communication technology. However, their inherent vulnerability to random fluctuations, such as rainfall, can cause significant degradation of network performance. In this thesis, k-means, fuzzy c-means and a 2-state Hidden Markov Model are used to develop one-step and two-step clustering models for rain attenuation data. The forecasting models are based on the k-nearest-neighbour method, implemented with linear regression, to predict rain attenuation in real time, in order to help microwave transport networks mitigate the impact of rain, make proper decisions ahead of time and improve overall performance.
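The forecasting approach described here, k-nearest-neighbour matching on recent history, can be sketched as follows. This minimal version finds the past windows most similar to the latest one and averages the values that followed them; it omits the linear-regression refinement the thesis uses, so treat it as an illustrative assumption rather than the thesis's model:

```python
def knn_forecast(series, window=3, k=2):
    """Predict the next value: find the k past windows most similar to the
    latest window (Euclidean distance) and average their successor values."""
    query = series[-window:]
    candidates = []
    for i in range(len(series) - window):  # only windows whose successor is known
        w = series[i:i + window]
        dist = sum((a - b) ** 2 for a, b in zip(w, query)) ** 0.5
        candidates.append((dist, series[i + window]))
    candidates.sort(key=lambda t: t[0])
    nearest = candidates[:k]
    return sum(v for _, v in nearest) / len(nearest)
```

On a strongly periodic series the nearest windows are exact repeats of the current one, so the forecast reproduces the pattern; on noisy attenuation data, averaging over k neighbours smooths out individual anomalies.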
4. A Statistical Methodology for Classifying Time Series in the Context of Climatic Data — Ramírez Buelvas, Sandra Milena, 24 February 2022
According to different European standards and several studies, it is necessary to monitor and analyse the microclimatic conditions in museums and similar buildings, with the goal of preserving artworks. With the aim of offering tools to monitor the climatic conditions, a new statistical methodology for classifying time series of different climatic parameters, such as relative humidity and temperature, is proposed in this dissertation. The methodology consists of applying a classification method using variables that are computed from the time series. The first two classification methods are versions of known sparse methods that had not previously been applied to time-dependent data; the third is a new proposal that combines two known algorithms. These classification methods are based on different versions of sparse partial least squares discriminant analysis (sPLS-DA, SPLSDA and sPLS) and linear discriminant analysis (LDA). The variables computed from the time series correspond to parameter estimates from functions, methods or models commonly found in the time series literature, e.g., the seasonal ARIMA model, the seasonal ARIMA-TGARCH model, the seasonal Holt-Winters method, the spectral density function, the autocorrelation function (ACF), the partial autocorrelation function (PACF) and the moving range (MR), among other functions. Some variables employed in the field of astronomy (for classifying stars) were also proposed.

The methodology consists of two parts. First, the different variables are computed by applying the methods, models or functions mentioned above to the time series. Once the variables are calculated, they are used as input to a classification method such as sPLS-DA, SPLSDA, or sPLS with LDA (the new proposal). When there was no information about the clusters of the different time series, the first two components from principal component analysis (PCA) were used as input to the k-means method to identify possible clusters of time series. In addition, results from the random forest algorithm were compared with results from sPLS-DA. This study analysed three sets of time series of relative humidity or temperature, recorded in different buildings (Valencia's Cathedral, the archaeological site of L'Almoina, and the baroque church of Saint Thomas and Saint Philip Neri) in Valencia, Spain. The clusters of the time series were analysed according to the different zones or the different sensor heights used for monitoring the climatic conditions in these buildings. The random forest algorithm and the different versions of sparse PLS helped identify the main variables for classifying the time series. The results from sPLS-DA and random forest were very similar when the input variables came from the seasonal Holt-Winters method or from functions applied to the time series. Although the random forest results were slightly better in terms of classification error rate, the sPLS-DA results were easier to interpret. When the different versions of sparse PLS used variables from the seasonal Holt-Winters method as input, the clusters of the time series were discriminated most effectively; these variables yielded the best, or second best, results according to the classification error rate. Among the different versions of sparse PLS proposed, sPLS with LDA classified the time series with the fewest variables and the lowest classification error rate. We propose using a version of sparse PLS (sPLS-DA, or sPLS with LDA) with variables computed from time series for classifying time series. For the different data sets studied, the methodology produced parsimonious models with few variables and achieved satisfactory, easily interpreted discrimination of the different clusters of the time series. This methodology can be useful for characterising and monitoring microclimatic conditions in museums, or similar buildings, to prevent problems with artwork.

I gratefully acknowledge the financial support of Pontificia Universidad Javeriana Cali (PUJ) and Instituto Colombiano de Crédito Educativo y Estudios Técnicos en el Exterior (ICETEX), which awarded me the scholarships 'Convenio de Capacitación para Docentes O. J. 086/17' and 'Programa Crédito Pasaporte a la Ciencia ID 3595089 foco-reto salud' respectively. The scholarships were essential for obtaining the Ph.D. I also gratefully acknowledge the financial support of the European Union's Horizon 2020 research and innovation programme under grant agreement No. 814624.

Ramírez Buelvas, S. M. (2022). A Statistical Methodology for Classifying Time Series in the Context of Climatic Data [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/181123
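The two-part methodology, features computed from each series followed by a classifier over those features, can be sketched with stdlib code. Here a nearest-centroid classifier stands in for the sPLS-DA/LDA step (which would require a PLS implementation), and the feature set shown is only a small, hypothetical subset of the functions listed in the abstract:

```python
import statistics

def ts_features(series):
    """A few of the kinds of per-series summaries the methodology computes:
    level, spread, lag-1 autocorrelation, and mean moving range."""
    m = statistics.fmean(series)
    s = statistics.pstdev(series)
    acf1 = (sum((a - m) * (b - m) for a, b in zip(series, series[1:]))
            / sum((x - m) ** 2 for x in series))
    mr = statistics.fmean(abs(b - a) for a, b in zip(series, series[1:]))
    return [m, s, acf1, mr]

def nearest_centroid(train, labels, query):
    """Classify a feature vector by the closest class centroid; a simple
    stand-in for the sPLS-DA / LDA classification step."""
    cents = {}
    for lab in set(labels):
        rows = [f for f, l in zip(train, labels) if l == lab]
        cents[lab] = [statistics.fmean(col) for col in zip(*rows)]
    return min(cents, key=lambda lab: sum((a - b) ** 2
                                          for a, b in zip(cents[lab], query)))
```

In the same spirit as the dissertation, series recorded in a stable zone and in a more variable zone separate cleanly in this small feature space, so even a simple classifier recovers the zone labels.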
5. High-Dimensional Data Representations and Metrics for Machine Learning and Data Mining — Radovanović, Miloš, 11 February 2011
In the current information age, massive amounts of data are gathered at a rate prohibiting their effective structuring, analysis, and conversion into useful knowledge. This information overload is manifested both in the large numbers of data objects recorded in data sets and in the large numbers of attributes, also known as high dimensionality. This dissertation deals with problems originating from the high dimensionality of data representation, referred to as the "curse of dimensionality," in the context of machine learning, data mining, and information retrieval. The described research follows two angles: studying the behavior of (dis)similarity metrics with increasing dimensionality, and exploring feature-selection methods, primarily with regard to document representation schemes for text classification. The main results of the dissertation, relevant to the first research angle, include theoretical insights into the concentration behavior of cosine similarity, and a detailed analysis of the phenomenon of hubness, which refers to the tendency of some points in a data set to become hubs by being included in unexpectedly many k-nearest neighbor lists of other points. The mechanisms behind the phenomenon are studied in detail, from both a theoretical and an empirical perspective, linking hubness with the (intrinsic) dimensionality of data, describing its interaction with the cluster structure of data and the information provided by class labels, and demonstrating the interplay of the phenomenon with well-known algorithms for classification, semi-supervised learning, clustering, and outlier detection, with special consideration given to time-series classification and information retrieval. Results pertaining to the second research angle include quantification of the interaction between various transformations of high-dimensional document representations and feature selection, in the context of text classification.