1 |
Auto-scaling Prediction using Machine Learning Algorithms: Analysing Performance and Feature Correlation. Ahmed, Syed Saif; Arepalli, Harshini Devi. January 2023 (has links)
Despite its drawbacks, Covid-19 has recently helped highlight the significance of cloud computing. The great majority of enterprises and organisations have shifted to a hybrid mode that enables users or workers to access their work environment from any location. This made it possible for businesses to save on-premises costs by moving their operations to the cloud. It has become essential to allocate resources effectively, especially through predictive auto-scaling. Although many algorithms have been studied for predictive auto-scaling, further analysis and validation are needed. The objectives of this thesis are to implement machine-learning algorithms for predicting auto-scaling and to compare their performance on common grounds. The secondary objective is to find data connections amongst features within the dataset and evaluate their correlation coefficients. The methodology adopted for this thesis is experimentation, chosen so that the auto-scaling algorithms can be tested in practical situations and their results compared to identify the best algorithm according to the selected metrics. This experiment can assist in determining whether the algorithms operate as predicted. Metrics such as Accuracy, F1-Score, Precision, Recall, Training Time and Root Mean Square Error (RMSE) are calculated for the chosen algorithms: Random Forest (RF), Logistic Regression, Support Vector Machine and Naive Bayes Classifier. The correlation coefficients of the features in the data are also measured, which helped increase the accuracy of the machine-learning model. In conclusion, the features related to our target variable (CPU usage, p95_scaling) often had high correlation coefficients compared to other features. The relationships between these variables could potentially be influenced by other variables that are unrelated to the target variable.
The experimentation also shows that the Random Forest Classifier is the optimal algorithm for determining how cloud resources should be scaled.
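The thesis's secondary objective, ranking features by how strongly they correlate with the scaling target, can be sketched with a plain Pearson coefficient. A minimal sketch under stated assumptions: the workload numbers below are invented for illustration, and the real dataset and its p95_scaling feature are not reproduced here.

```python
import math

def pearson(xs, ys):
    # Pearson correlation coefficient between two equal-length sequences.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical workload snapshots: feature values versus a binary scaling label.
cpu   = [20, 35, 50, 65, 80, 90]   # CPU usage (%)
mem   = [30, 34, 29, 33, 30, 32]   # memory usage (%), deliberately uninformative
scale = [0, 0, 0, 1, 1, 1]         # 1 = scale out

print(round(pearson(cpu, scale), 3))   # strong positive correlation
print(round(pearson(mem, scale), 3))   # weak correlation
```

Features with coefficients near zero, like `mem` here, are candidates for removal before training, which is one way feature correlation can raise model accuracy.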
|
2 |
Application of Load Updating to a Complex Three-Dimensional Frame Structure. Nichols, Jonathan Tyler. 28 June 2017 (has links)
This thesis presents a novel method, known as the load updating method, for correlating FEM results with experimental test results. Specifically, the load updating method takes the math model from the FEM and the strains measured from experimental or flight-test data as inputs, and then predicts the loads in the FEM that would produce strains correlating best with the measured strains in the least-squares sense. In this research, the load updating method is applied to the analysis of a complex frame structure whose validation is challenging due to the complex nature of its structural behavior, its load distributions, and error derived from residual strains. A FEM created for this structure is used to generate strain data for thirty-two different load cases. These same thirty-two load cases are replicated in an experimental setup consisting of the frame, supporting structure, and thirty actuators that load the frame according to the specifications of each load condition. A force-strain matrix is created from the math model in NASTRAN by separately applying unit loads at each load point and extracting strain results at each of the seventy-four strain-gage locations. The strain data from the structural test and the force-strain matrix are then input into a MATLAB code created to perform the load updating method. This algorithm delivers a set of coefficients which in turn gives the updated loads. These loads are applied to the FEM and the strain values extracted for correlation with the strains from the test data. It is found that the load updating method applied to this structure produces strains that correlate well with the experimental strain data. Although the loads found using the load updating method do not perfectly match those applied during the test, this error is primarily attributed to residual strains within the structure.
In summary, the load updating method provides a way to predict loads which, when applied to the FEM, result in strains that correlate best with the experimental strains. Ultimately, this method could prove especially useful for predicting loads in experimental and flight-test structures and could greatly aid the Federal Aviation Administration (FAA) certification process. / Master of Science / The research presented in this thesis provides a new way of correlating data obtained during structural testing with results obtained from computer analysis using the finite element method (FEM). When certifying an aircraft structure with the FAA, it is important to demonstrate that the results obtained for a given structure with a computer model match the results produced by a real-world experiment within a reasonable tolerance. Traditionally, differences between these two results have been accounted for by adjusting the computer model until its results match those from the test. In this research, however, the loads applied to the computer model are changed instead, until loads are found that produce computer results matching those from testing. This method, known as the load updating method, therefore provides a way to predict loads on a structure where the loads are unknown, such as a flight-test article. Here, the ability of the load updating method to predict loads on a complex three-dimensional frame structure is explored, and the accuracy of the results is studied by comparison with a structural test whose loads are known. It was found that the load updating method does indeed predict unknown loads to a reasonable accuracy and could aid future design efforts immensely.
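At its core the method is an ordinary least-squares fit: given a force-strain matrix A built from unit-load cases and a vector of measured strains, find the loads f that minimize the misfit between A f and the measurements. A minimal sketch with invented numbers (three gages, two load points; the thesis's seventy-four gages and thirty-two load cases are not reproduced here):

```python
# Hypothetical force-strain matrix: rows = strain gages, cols = load points;
# A[i][j] = strain at gage i caused by a unit load at point j.
A = [[2.0, 0.5],
     [1.0, 1.5],
     [0.5, 2.0]]
measured = [5.0, 5.0, 5.0]   # strains from the (hypothetical) structural test

# Solve the 2x2 normal equations (A^T A) f = A^T e by hand.
AtA = [[sum(A[i][j] * A[i][k] for i in range(3)) for k in range(2)]
       for j in range(2)]
Atb = [sum(A[i][j] * measured[i] for i in range(3)) for j in range(2)]
det = AtA[0][0] * AtA[1][1] - AtA[0][1] * AtA[1][0]
f = [(AtA[1][1] * Atb[0] - AtA[0][1] * Atb[1]) / det,
     (AtA[0][0] * Atb[1] - AtA[1][0] * Atb[0]) / det]
print(f)  # updated loads that best reproduce the measured strains: [2.0, 2.0]
```

Applying the updated loads `f` back to the model reproduces the measured strains exactly here because the toy data were built that way; with residual strains in the test article, as in the thesis, the fit is close but not perfect.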
|
3 |
Linear and Nonlinear Analysis of Human Postural Sway. Celik, Huseyin. 01 September 2008 (has links) (PDF)
Human upright posture exhibits an everlasting oscillatory behavior of a complex nature, called human postural sway. Variations in the position of the Center of Pressure (CoP) were used to describe the sway. In this study, CoP data experimentally collected from 28 subjects (14 males and 14 females, aged 6 to 84), divided into 4 groups according to age, were analyzed. Data collection from each subject was performed in 5 successive trials, each lasting 180 seconds. Linear analysis methods such as the variance/standard deviation, the Fast Fourier Transform, and Power Spectral Density estimates were applied to the detrended CoP signal of human postural sway. The Run test and ensemble-average methods were used to test the stationarity and ergodicity of the CoP signal, respectively. Furthermore, to reveal the nonlinear characteristics of human postural sway, its dynamics were reconstructed in m-dimensional state space from the CoPx signals, and correlation dimension (D2) estimates were calculated from the embedded dynamics. Additionally, the statistical and dynamical measures computed were checked for significant changes that may occur with aging. The results of the study suggest that human postural sway is a stationary process when 180-second biped quiet-stance data are considered. In addition, it exhibits a variable dynamical structure complex in nature (112 trials indicating deterministic chaos versus 28 stochastic time series) across the five successive trials of the 28 subjects. Moreover, we found that the groups differed significantly in the correlation dimension (D2) measure (p ≤ 0.0003). Finally, the behavior of the experimental CoPx signals was checked against two types of linear processes using the surrogate-data method.
The shuffled CoPx signals (Surrogate I) suggested that the temporal order of CoPx is important; however, phase randomization (Surrogate II) did not change the behavioral characteristics of the CoPx signal.
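The Surrogate I idea can be illustrated with a small sketch: shuffling a signal preserves its amplitude distribution but destroys its temporal order, which shows up in the lag-1 autocorrelation. The slowly drifting signal below is an invented stand-in for a CoPx record, not the thesis data.

```python
import random

def lag1_autocorr(x):
    # Lag-1 autocorrelation: how strongly each sample predicts the next.
    n = len(x)
    m = sum(x) / n
    var = sum((v - m) ** 2 for v in x)
    return sum((x[i] - m) * (x[i + 1] - m) for i in range(n - 1)) / var

random.seed(1)
# Hypothetical CoPx-like record: a slow drift plus small noise.
signal = [i * 0.01 + random.gauss(0, 0.05) for i in range(500)]

surrogate = signal[:]          # Surrogate I: the same samples, shuffled order
random.shuffle(surrogate)

print(round(lag1_autocorr(signal), 2))      # near 1: strong temporal order
print(round(lag1_autocorr(surrogate), 2))   # near 0: order destroyed
print(sorted(signal) == sorted(surrogate))  # True: same amplitude distribution
```

If a nonlinear statistic (like D2) computed on the original differs from its value on such surrogates, the temporal structure, not the amplitude distribution, must carry that property; this is the logic behind the surrogate-data test.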
|
4 |
Correlation between phonetic-phonological characteristics of speech and orthographic characteristics of writing in children with speech sound disorders. Guilherme, Jhulya. January 2020 (has links)
Advisor: Lourenço Chacon / Abstract: The proposal of this research was to investigate a possible correlation between the phonetic-phonological characteristics of speech and the orthographic characteristics of writing in children with speech sound disorders (SSD), because the literature presents divergences on this subject.
Regarding these divergences, we formulated two research questions: (1) is there a relation of dependence between speech production errors and orthographic errors?; and (2) if this relation does exist, what would be its nature? Considering possible answers to these questions, we developed the hypotheses: (i) children with SSD may also present orthographic errors and, moreover, their speech production errors may correlate positively with their orthographic errors. However, as there are SSD subtypes, it is expected that (ii) this difference shows up in the segmental characteristics of speech and in the orthographic characteristics of the children according to subtype, especially regarding phonological classes. We assume that children with SSD would also have problems in their phonological representation; hence, as orthography in Brazilian Portuguese is based (also) on phonological principles, this aspect of these children's writing would also be affected. The first aim of this research was to compare and correlate phonetic-phonological findings of speech and orthographic findings in children with SSD, and the second aim was to explore the nature of SSD and or... (complete abstract: click electronic access below) / Master's
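Research question (1), whether speech errors and spelling errors are dependent, is the kind of hypothesis a chi-square test on a contingency table addresses. A hedged sketch only: the 2x2 counts below are invented, not the thesis data.

```python
# Hypothetical counts of children cross-classified by error type:
#                  spelling error   no spelling error
# speech error           30                10
# no speech error        12                48
table = [[30, 10], [12, 48]]

row = [sum(r) for r in table]
col = [sum(table[i][j] for i in range(2)) for j in range(2)]
total = sum(row)

# Chi-square statistic: squared deviation of observed counts from the
# counts expected under independence, summed over all four cells.
chi2 = 0.0
for i in range(2):
    for j in range(2):
        expected = row[i] * col[j] / total
        chi2 += (table[i][j] - expected) ** 2 / expected

print(round(chi2, 2))  # well above 3.84, the 5% critical value for 1 df
```

A statistic above the critical value rejects independence, supporting a relation of dependence between the two error types; the follow-up question of the relation's nature then calls for correlation or subtype analysis.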
|
5 |
Vis-Scholar: a methodology for data visualization and analysis in education. Costa, Jean Carlos Araújo. 01 March 2016 (has links)
Data visualization techniques can help in several areas of human activity, especially in understanding data and information from the different phenomena to be studied. The more variables are related to a phenomenon, the more challenging its treatment and visual representation become. Considering education in Brazil and its open databases, as well as the academic databases existing in institutions, the use of mathematical techniques to correlate data sets and of visualization methods to present those correlations, made available in a tool that is easy to access and operate, can disclose information on the quality of education in a given region, state, county or educational institution. Another benefit could be the indication of factors previously ignored as investment targets, and help in the development of national or regional public policies that make education more efficient, comprehensive and inclusive. Initiatives of non-governmental organizations, and of some linked to the Brazilian government, have produced tools for filtering information and disseminating data on quality and investment of resources in education. The Brazilian government uses performance indicators to assess its higher-education institutions; the Preliminary Course Concept (CPC) is one of these.
This work presents a solution with this profile, aiming to develop a data-visualization methodology through a web application built with open-source technologies. It uses the principal component analysis (PCA) method as the mathematical technique for variable correlation and plots the results on a map using the Google Maps API. The focus is on measuring the level of influence of different factors, including some not directly related to education, on the performance of educational institutions and the academic achievement of students, taking as a case study the analysis of a performance index in higher education.
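The PCA step can be sketched in the simplest two-variable case: the leading eigenvalue of the covariance matrix says how much of the joint variance a single component explains. The per-institution scores below are invented for illustration; the actual CPC indicators are not reproduced here.

```python
import math

# Two hypothetical indicators per institution, e.g. (infrastructure score,
# faculty score). PCA finds the direction of greatest joint variance.
data = [(2.0, 1.9), (3.0, 3.2), (4.0, 3.9), (5.0, 5.1), (6.0, 6.0)]

n = len(data)
mx = sum(x for x, _ in data) / n
my = sum(y for _, y in data) / n
sxx = sum((x - mx) ** 2 for x, _ in data) / (n - 1)
syy = sum((y - my) ** 2 for _, y in data) / (n - 1)
sxy = sum((x - mx) * (y - my) for x, y in data) / (n - 1)

# Eigenvalues of the 2x2 covariance matrix [[sxx, sxy], [sxy, syy]]:
# lambda = tr/2 +- sqrt(tr^2/4 - det).
tr, det = sxx + syy, sxx * syy - sxy * sxy
lam1 = tr / 2 + math.sqrt(tr * tr / 4 - det)   # leading eigenvalue
explained = lam1 / tr                          # share of variance on PC1
print(round(explained, 3))  # close to 1.0: one component captures the spread
```

When one component explains most of the variance, the two indicators are nearly redundant; plotting institutions by their PC1 score on a map is the kind of reduction a Vis-Scholar-style tool can exploit.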
|
6 |
Spatially Correlated Data Accuracy Estimation Models in Wireless Sensor Networks. Karjee, Jyotirmoy. January 2013 (links) (PDF)
One of the major applications of wireless sensor networks is to sense accurate and reliable data from the physical environment with or without a priori knowledge of data statistics. To extract accurate data from the physical environment, we investigate spatial data correlation among sensor nodes to develop data accuracy models. We propose three data accuracy models namely Estimated Data Accuracy (EDA) model, Cluster based Data Accuracy (CDA) model and Distributed Cluster based Data Accuracy (DCDA) model with a priori knowledge of data statistics.
Due to the high density of deployed sensor nodes, the observed data are highly correlated among sensor nodes, which form distributed clusters in space. We describe two clustering algorithms, the Deterministic Distributed Clustering (DDC) algorithm and the Spatial Data Correlation based Distributed Clustering (SDCDC) algorithm, implemented under the CDA model and the DCDA model respectively. Moreover, because of this data correlation, the data collected by sensor nodes are redundant. Hence, it is not necessary for all sensor nodes to transmit their highly correlated data to the central node (sink node or cluster head node); an optimal subset of sensor nodes is capable of measuring and transmitting accurate, precise data to the central node. This reduces data redundancy, energy consumption and data transmission cost, increasing the lifetime of the sensor network.
Finally, we propose a fourth accuracy model, called the Adaptive Data Accuracy (ADA) model, that does not require any a priori knowledge of data statistics. The ADA model can sense a continuous data stream at regular time intervals to estimate accurate data from the environment and select an optimal set of sensor nodes for data transmission to the network. Data transmission can be further reduced for these optimal sensor nodes by transmitting a subset of sensor data using a methodology called the Spatio-Temporal Data Prediction (STDP) model under data reduction strategies. Furthermore, we implement the data accuracy model when the network is under threat of malicious attack.
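The idea of letting only an optimal subset of correlated nodes transmit can be sketched greedily: skip any node whose readings are nearly a linear copy of an already-selected node's. This is a simplified stand-in for the thesis's accuracy models, not their implementation; the node series and the 0.95 threshold are invented.

```python
import math

def corr(a, b):
    # Pearson correlation between two equal-length reading series.
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                    sum((y - mb) ** 2 for y in b))
    return num / den

def select_nodes(readings, threshold=0.95):
    # Greedy selection: keep a node only if it is not strongly correlated
    # with any node already selected; redundant nodes may sleep.
    selected = []
    for node, series in readings.items():
        if all(abs(corr(series, readings[s])) < threshold for s in selected):
            selected.append(node)
    return selected

# Hypothetical temperature series from four densely deployed nodes:
readings = {
    "n1": [20.0, 20.5, 21.0, 21.5, 22.0],
    "n2": [20.1, 20.6, 21.1, 21.6, 22.1],  # near-duplicate of n1
    "n3": [25.0, 24.0, 25.5, 23.5, 24.5],  # independent fluctuation
    "n4": [19.9, 20.4, 20.9, 21.4, 21.9],  # near-duplicate of n1
}
print(select_nodes(readings))  # redundant nodes n2 and n4 are pruned
```

Only the surviving nodes report to the cluster head; the pruned nodes' readings can be reconstructed from their correlated neighbors, which is the energy saving the CDA/DCDA models formalize.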
|
7 |
Energy Conservation for Collaborative Applications in Wireless Sensor Networks. Demigha, Oualid. 29 November 2015 (has links)
Wireless Sensor Networks are an emerging technology, enabled by recent advances in Micro-Electro-Mechanical Systems that led to the design of tiny wireless sensor nodes with small sensing, data-processing and communication capacities. To accomplish complex tasks such as target tracking, data collection and zone surveillance, these nodes need to collaborate with each other to overcome their limited battery capacity. Since battery hardware develops very slowly, the optimization effort must inevitably focus on the software layers of the nodes' protocol stacks and operating systems. In this thesis, we investigated the energy problem in the context of collaborative applications and proposed an approach based on node selection using predictions and data correlations, to meet application requirements in terms of energy efficiency and quality of data. First, we surveyed almost all the recent approaches proposed in the literature that treat the energy efficiency of prediction-based target-tracking schemes, in order to extract the relevant recommendations. Next, we proposed a dynamic clustering protocol based on an enhanced version of the Distributed Kalman Filter, used as a prediction algorithm, to design an energy-efficient target-tracking scheme. Our scheme uses these predictions to anticipate the actions of the nodes and their roles, so as to minimize the number of nodes involved in each task.
Based on our findings from the simulation data, we generalized our approach to any data-collection scheme that uses a geographic clustering algorithm. We formulated the problem of minimizing energy consumption under data-precision constraints, called EMDP, as a binary integer linear program in order to find its exact solution in the general context, and we showed that this problem is NP-complete. We validated the model and proved some of its fundamental properties. Finally, given the complexity of the problem, we proposed and evaluated a heuristic solution: a correlation-based adaptive clustering algorithm for data collection, called CORAD, which adapts the network topology to the dynamics of the sensed data. We showed that, by relaxing some constraints of the problem, our heuristic solution achieves an acceptable level of energy efficiency while preserving the quality of data.
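The prediction-driven node selection can be sketched with a plain constant-velocity extrapolation standing in for the thesis's enhanced Distributed Kalman Filter: predict where the target will be next, and wake only the nodes whose sensing range covers that point. All coordinates, node names and the range value below are invented.

```python
def predict_next(p_prev, p_curr):
    # Constant-velocity extrapolation: next ~ current + (current - previous).
    return (2 * p_curr[0] - p_prev[0], 2 * p_curr[1] - p_prev[1])

def nodes_to_wake(nodes, predicted, sensing_range):
    # Only nodes expected to sense the target stay awake; the rest sleep
    # and save energy until the prediction moves into their range.
    return [name for name, (x, y) in nodes.items()
            if (x - predicted[0]) ** 2 + (y - predicted[1]) ** 2
               <= sensing_range ** 2]

# Hypothetical node deployment and two past target positions:
nodes = {"a": (0, 0), "b": (10, 0), "c": (20, 0), "d": (0, 20)}
pred = predict_next((4, 0), (7, 0))     # target moving along the x-axis
print(pred)                             # (10, 0)
print(nodes_to_wake(nodes, pred, 6.0))  # only the node near x = 10
```

A Kalman filter would replace `predict_next` with a state estimate that also weighs measurement noise, but the energy argument is the same: fewer awake nodes per tracking step means a longer network lifetime.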
|