31 |
Machine Learning Based Failure Detection in Data Centers. Piran Nanekaran, Negin. January 2020 (has links)
This work proposes a new approach to fast detection of abnormal behaviour of cooling, IT, and power distribution systems in micro data centers based on machine learning techniques. Conventional protection of micro data centers focuses on monitoring individual parameters, such as temperature at different locations, and triggering an alarm when these parameters reach certain high values. This research employs machine learning techniques to extract the normal and abnormal behaviour of the cooling and IT systems. The developed data acquisition system, together with unsupervised learning methods, quickly learns the physical dynamics of normal operation and can detect deviations from such behaviour. This provides an efficient way not only to produce a health index for the micro data center, but also to build a rich label-logging system that can be used by supervised learning methods. The effectiveness of the proposed detection technique is evaluated on a micro data center located at the Computing Infrastructure Research Center (CIRC) at McMaster Innovation Park (MIP), McMaster University. / Thesis / Master of Science (MSc)
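A minimal sketch of the idea described above, assuming scikit-learn's IsolationForest as the unsupervised learner and invented steady-state sensor statistics; the thesis does not specify this particular algorithm or these features.

```python
# Learn normal operating behaviour from multivariate sensor readings and
# flag deviations. Feature names and the use of IsolationForest are
# illustrative assumptions, not the thesis's actual pipeline.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical normal operation: inlet temperature (C), fan speed (RPM),
# IT power draw (kW) fluctuating around steady-state values.
normal = np.column_stack([
    rng.normal(24.0, 0.5, 5000),    # inlet temperature
    rng.normal(3000, 100, 5000),    # fan speed
    rng.normal(12.0, 0.8, 5000),    # IT power
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A cooling fault: temperature climbs while the fan speed saturates.
faulty = np.array([[31.0, 4200, 12.5]])
print(model.predict(faulty))        # -1 -> flagged as abnormal
print(model.score_samples(faulty))  # lower score -> less "normal" (health index proxy)
```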
|
32 |
Performance evaluation of two machine learning algorithms for classification in a production line: Comparing artificial neural network and support vector machine using a quasi-experiment. Jörlid, Olle; Sundbeck, Erik. January 2024 (has links)
This thesis investigated the possibility of using machine learning algorithms to classify items in a queuing system in order to optimize a production line. The evaluated algorithms are Artificial Neural Network (ANN) and Support Vector Machine (SVM), selected based on other research projects. A quasi-experiment evaluates the two machine learning algorithms trained on the same data. The dataset used in the experiment was complex and contained 47,212 rows of samples with features of items from a production setting. Both models performed better than the current system, with ANN reaching 97.5% and SVM 98% on all measurements. The models differed in training time: ANN took almost 205 seconds while SVM took 1.97 seconds; ANN was, however, 20 times faster at classification. We conclude that ANN and SVM are feasible options for using Artificial Intelligence (AI) to classify items in industrial environments with similar scenarios.
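The comparison below is an illustrative reconstruction, not the thesis's experiment: it times training and inference of an MLP-based ANN and an RBF-kernel SVM on a synthetic dataset (downsized from the 47,212-row original for speed) using scikit-learn.

```python
# Train both classifiers on the same data and time fitting and prediction
# separately, in the spirit of the quasi-experiment above.
import time
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Synthetic stand-in for the production-line dataset (reduced size).
X, y = make_classification(n_samples=10000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

for name, clf in [("ANN", MLPClassifier(max_iter=300, random_state=0)),
                  ("SVM", SVC(kernel="rbf"))]:
    t0 = time.perf_counter()
    clf.fit(X_tr, y_tr)
    t_fit = time.perf_counter() - t0
    t0 = time.perf_counter()
    pred = clf.predict(X_te)
    t_pred = time.perf_counter() - t0
    print(f"{name}: acc={accuracy_score(y_te, pred):.3f} "
          f"train={t_fit:.2f}s predict={t_pred:.2f}s")
```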
|
33 |
Model selection using the regularization path for quadratic cost support vector machines. Bonidal, Rémi. 19 June 2013 (has links)
Model selection is of major interest in statistical learning. In this document, we introduce model selection methods for bi-class and multi-class support vector machines. We focus on quadratic loss machines, i.e., machines for which the empirical term of the objective function of the learning problem is a quadratic form. For SVMs, model selection consists in finding the optimal value of the regularization coefficient and choosing an appropriate kernel (or the values of its parameters). The proposed methods combine path-following techniques with new model selection criteria. This document is structured around three main contributions. The first is a method performing model selection through the use of the regularization path for the l2-SVM. In this framework, we introduce new approximations of the generalization error. The second main contribution extends the first to the multi-category setting, more precisely to the M-SVM². This study led us to derive a new M-SVM, the least squares M-SVM. Additionally, we present new model selection criteria for the hard-margin M-SVM of Lee, Lin and Wahba (and thus the M-SVM²): an upper bound on the leave-one-out cross-validation error and approximations of this error. The third main contribution deals with the optimization of the values of the kernel parameters. Our method builds on the principle of centered kernel-target alignment and extends it through the introduction of a regularization term. Experimental validation of all the developed methods was performed on benchmarks frequently used in the literature, toy datasets, and real-world datasets.
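The thesis follows the exact regularization path of the l2-SVM; the sketch below substitutes a simple cross-validated grid over the regularization coefficient C to convey the selection principle. The grid, dataset, and use of scikit-learn's L2-loss LinearSVC are assumptions.

```python
# Select the regularization coefficient C of an L2-loss (squared hinge) SVM
# by minimizing cross-validated error over a coarse grid, a stand-in for the
# exact breakpoints that path-following would compute.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=500, n_features=20, random_state=1)

grid = np.logspace(-3, 3, 13)
scores = [cross_val_score(LinearSVC(C=C, max_iter=10000), X, y, cv=5).mean()
          for C in grid]
best = grid[int(np.argmax(scores))]
print(f"selected C = {best:g}, CV accuracy = {max(scores):.3f}")
```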
|
34 |
An IoT Solution for Urban Noise Identification in Smart Cities: Noise Measurement and Classification. Alsouda, Yasser. January 2019 (has links)
Noise is defined as any undesired sound. Urban noise and its effect on citizens are a significant environmental problem, and the increasing level of noise has become a critical problem in some cities. Fortunately, noise pollution can be mitigated by better planning of urban areas or controlled by administrative regulations. However, the execution of such actions requires well-established systems for noise monitoring. In this thesis, we present a solution for noise measurement and classification using a low-power and inexpensive IoT unit. To measure the noise level, we implement an algorithm for calculating the sound pressure level in dB, achieving a measurement error of less than 1 dB. Our machine learning-based method for noise classification uses Mel-frequency cepstral coefficients for audio feature extraction and four supervised classification algorithms: support vector machine, k-nearest neighbors, bootstrap aggregating, and random forest. We evaluate our approach experimentally on a dataset of about 3000 sound samples grouped into eight sound classes (such as car horn, jackhammer, or street music). We explore the parameter space of the four algorithms to estimate the optimal parameter values for classifying the sound samples in the dataset under study, and achieve noise classification accuracy in the range of 88% to 94%.
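A hedged sketch of the two stages described above: sound pressure level from a signal's RMS, and MFCC features fed to an SVM. The synthetic clips, the 20 µPa reference, and all parameter choices are illustrative assumptions, not the thesis's exact implementation.

```python
# (1) Sound pressure level in dB from RMS; (2) mean-MFCC descriptors
# classified with an SVM, using librosa for feature extraction.
import numpy as np
import librosa
from sklearn.svm import SVC

sr = 22050
t = np.linspace(0, 1.0, sr, endpoint=False)

def spl_db(x, ref=20e-6):
    """Sound pressure level in dB relative to 20 micro-pascals."""
    rms = np.sqrt(np.mean(x ** 2))
    return 20 * np.log10(rms / ref)

def mfcc_features(x):
    """Mean MFCC vector as a fixed-length descriptor of one sound clip."""
    return librosa.feature.mfcc(y=x, sr=sr, n_mfcc=13).mean(axis=1)

# Two toy "classes": tonal signals (siren-like) vs. broadband noise.
tones = [np.sin(2 * np.pi * (400 + 50 * i) * t) for i in range(10)]
noise = [np.random.default_rng(i).normal(0, 0.5, sr) for i in range(10)]

X = np.array([mfcc_features(x) for x in tones + noise])
y = np.array([0] * 10 + [1] * 10)

clf = SVC(kernel="rbf").fit(X, y)
print("SPL of first tone: %.1f dB" % spl_db(0.02 * tones[0]))
print("predicted class:", clf.predict([mfcc_features(noise[0])]))
```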
|
35 |
Optimization of compression techniques for still images and video for the characterization of materials: mechanical applications. Eseholi, Tarek Saad Omar. 17 December 2018 (links)
This Ph.D. thesis focuses on the optimization of compression techniques for still images and video for the characterization of materials in mechanical science applications, and is part of the MEgABIt (MEchAnic Big Images Technology) research project supported by the Université Polytechnique Hauts-de-France (UPHF). The scientific objective of the MEgABIt project is to investigate the ability to compress the large data flows produced by mechanical instrumentation of deformations, with large volumes in both the spatial and frequency domains. We propose to design original algorithms for processing in the compressed domain in order to make the evaluation of mechanical parameters computationally feasible, while preserving as much as possible of the information provided by the acquisition systems (high-speed imaging, 3D tomography). To be relevant, the compression of high-definition, high-dynamic-range measurements of material deformation must allow the optimal computation of morpho-mechanical parameters without losing the essential characteristics of the mechanical surface images, which could otherwise lead to erroneous analysis or classification. In this thesis, we use the state-of-the-art HEVC (High Efficiency Video Coding) standard prior to the analysis, classification, or processing used to evaluate the mechanical parameters. We first quantify the impact of compressing video sequences from a high-speed camera. The experimental results show that compression ratios of up to 100:1 can be applied without significant degradation of the material's mechanical surface response as measured by the VIC-2D analysis tool. We then develop an original classification method in the compressed domain for a database of surface topography images. The topographic image descriptor is obtained from the prediction modes computed by intra-image prediction during lossless HEVC compression of the images. A support vector machine (SVM) is also introduced to strengthen the performance of the proposed system. Experimental results show that the compressed-domain classifier is robust for classifying our six categories of mechanical topographies using either single-scale or multi-scale analysis methodologies, with achieved lossless compression ratios of up to 6:1 depending on image complexity. We also evaluate the effects of the surface filtering type (high-pass, low-pass, and band-pass filters) and of the analysis scale on the efficiency of the proposed classifier. The large scale of the high-frequency components of the surface profile is the most appropriate for classifying our topography database, with an accuracy reaching 96%.
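The descriptor idea can be sketched as follows, assuming per-block intra prediction mode maps (HEVC defines 35 intra modes: planar, DC, and 33 angular) are already available; extracting them from a real HEVC encoder is outside this sketch, so they are simulated here.

```python
# Normalized histogram of HEVC intra prediction modes as an image
# descriptor, classified with an SVM. Mode maps are simulated placeholders.
import numpy as np
from sklearn.svm import SVC

N_MODES = 35
rng = np.random.default_rng(0)

def mode_histogram(mode_map):
    """Normalized histogram of intra prediction modes over one image."""
    h = np.bincount(mode_map.ravel(), minlength=N_MODES).astype(float)
    return h / h.sum()

# Simulated mode maps: smooth surfaces favor planar/DC (modes 0-1),
# directional textures favor the 33 angular modes (modes 2-34).
p_smooth = np.full(N_MODES, 0.3 / 33); p_smooth[0] = p_smooth[1] = 0.35
p_texture = np.full(N_MODES, 0.9 / 33); p_texture[0] = p_texture[1] = 0.05
maps_smooth = [rng.choice(N_MODES, size=(64, 64), p=p_smooth) for _ in range(20)]
maps_texture = [rng.choice(N_MODES, size=(64, 64), p=p_texture) for _ in range(20)]

X = np.array([mode_histogram(m) for m in maps_smooth + maps_texture])
y = np.array([0] * 20 + [1] * 20)

clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict([mode_histogram(maps_texture[0])]))  # expected: [1]
```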
|
36 |
Applications of Soft Computing for Power-Quality Detection and Electric Machinery Fault Diagnosis. Wu, Chien-Hsien. 20 November 2008 (has links)
With the deregulation of the power industry and market competition, a stable and reliable power supply is a major concern of the independent system operator (ISO). Power-quality (PQ) study has therefore become an increasingly important subject. Harmonics, voltage swells, voltage sags, and power interruptions can degrade service quality. In recent years, high-speed railway (HSR) and mass rapid transit (MRT) systems have developed rapidly, with widespread application of semiconductor technologies in auto-traction systems. The harmonic distortion level worsens with this increased use of electronic equipment and non-linear loads. To ensure PQ, the detection of power-quality disturbances (PQD) becomes important, and a detection method with classification capability is helpful for identifying disturbance locations and types.
Electric machinery fault diagnosis is another issue receiving considerable attention from utilities and customers. ISOs need to provide a high-quality service to retain their customers. Fault diagnosis of turbine-generators has a great effect on the profitability of power plants. A generator fault not only damages the generator itself, but also causes outages and loss of profits. Under high temperature, high pressure, and factors such as thermal fatigue, many components may fail, leading not only to great economic loss but sometimes also to a threat to public safety. It is therefore necessary to detect generator faults and take immediate action to cut the loss. In addition, induction motors play a major role in power systems. To save cost, it is important to run periodic inspections to detect incipient faults inside the motor; preventive techniques for early detection can find incipient faults and avoid outages. This dissertation develops various soft computing (SC) algorithms for power-quality disturbance (PQD) detection, turbine-generator fault diagnosis, and induction motor fault diagnosis. The proposed SC algorithms include the support vector machine (SVM), grey clustering analysis (GCA), and the probabilistic neural network (PNN). By integrating the proposed diagnostic procedures with existing monitoring instruments, a well-monitored power system can be constructed without extra devices. All the methods in the dissertation give reasonable and practical estimates; compared with conventional methods, the test results show high accuracy, good robustness, and faster processing performance.
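As an illustration of one of the soft-computing tools named above, here is a minimal probabilistic neural network (PNN), i.e., a Parzen-window classifier with a Gaussian kernel; the fault-feature data are simulated placeholders, not the dissertation's measurements.

```python
# PNN: classify each test point by the class whose training samples give the
# highest average Gaussian-kernel density (the PNN summation layer).
import numpy as np

def pnn_predict(X_train, y_train, X_test, sigma=0.5):
    classes = np.unique(y_train)
    scores = np.empty((len(X_test), len(classes)))
    for j, c in enumerate(classes):
        Xc = X_train[y_train == c]
        # Squared distances between every test point and class-c samples.
        d2 = ((X_test[:, None, :] - Xc[None, :, :]) ** 2).sum(axis=2)
        scores[:, j] = np.exp(-d2 / (2 * sigma ** 2)).mean(axis=1)
    return classes[np.argmax(scores, axis=1)]

rng = np.random.default_rng(0)
# Hypothetical 2-D fault features: normal condition vs. two fault signatures.
X_tr = np.vstack([rng.normal(m, 0.3, (30, 2)) for m in ([0, 0], [2, 0], [0, 2])])
y_tr = np.repeat([0, 1, 2], 30)
print(pnn_predict(X_tr, y_tr, np.array([[1.9, 0.1], [0.1, 0.1]])))  # [1 0]
```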
|
37 |
Forecasting Mid-Term Electricity Market Clearing Price Using Support Vector Machines. 2014 May 1900 (has links)
In a deregulated electricity market, offering the appropriate amount of electricity at the right time and at the right bidding price is of paramount importance. Forecasting the electricity market clearing price (MCP) means predicting the future electricity price from given forecasts of electricity demand, temperature, sunshine, fuel cost, precipitation, and other related factors. Many techniques are currently available for short-term electricity MCP forecasting, but very little has been done in the area of mid-term forecasting, which targets a time frame of one to six months. Developing mid-term electricity MCP forecasting is essential for mid-term planning and decision making, such as generation plant expansion and maintenance scheduling, reallocation of resources, bilateral contracts, and hedging strategies.
Six mid-term electricity MCP forecasting models are proposed and compared in this thesis: 1) a single support vector machine (SVM) forecasting model, 2) a single least squares support vector machine (LSSVM) forecasting model, 3) a hybrid SVM and auto-regressive moving average with exogenous input (ARMAX) forecasting model, 4) a hybrid LSSVM and ARMAX forecasting model, 5) a multiple SVM forecasting model, and 6) a multiple LSSVM forecasting model. PJM Interconnection data are used to test the proposed models. A cross-validation technique was used to optimize the control parameters and the selection of training data for the six proposed models. Three evaluation metrics, mean absolute error (MAE), mean absolute percentage error (MAPE), and mean square root error (MSRE), are used to analyze the forecasting accuracy. According to the experimental results, the multiple SVM forecasting model works the best among all six proposed models. It contains a data classification module and a price forecasting module: the classification module first pre-processes the input data into corresponding price zones, and the forecasting module then forecasts the electricity price using four SVMs designed in parallel. This model achieves the best forecasting accuracy on both peak prices and the overall system compared with the other five forecasting models proposed in this thesis.
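A sketch of the two-module structure described above, under invented features, zone boundaries, and hyperparameters: an SVM classifier routes each input to a price zone, and a zone-specific SVM regressor then forecasts the MCP.

```python
# Module 1 classifies inputs into price zones; module 2 holds one SVR per
# zone, trained only on that zone's samples. Data are synthetic placeholders.
import numpy as np
from sklearn.svm import SVC, SVR

rng = np.random.default_rng(0)
# Hypothetical inputs: [demand (GW), temperature (C)] -> price ($/MWh).
X = np.column_stack([rng.uniform(20, 80, 600), rng.uniform(-10, 35, 600)])
price = 0.8 * X[:, 0] + 0.3 * np.abs(X[:, 1] - 18) + rng.normal(0, 2, 600)

# Module 1: label each sample with its price zone (here: price quartiles).
zones = np.digitize(price, np.quantile(price, [0.25, 0.5, 0.75]))
zone_clf = SVC(kernel="rbf").fit(X, zones)

# Module 2: one SVR per zone, fitted on that zone's samples only.
regressors = {z: SVR(kernel="rbf", C=100).fit(X[zones == z], price[zones == z])
              for z in np.unique(zones)}

x_new = np.array([[65.0, 30.0]])            # hot day, high demand
z = zone_clf.predict(x_new)[0]              # route query to its price zone
print(f"zone {z}, forecast MCP = {regressors[z].predict(x_new)[0]:.1f} $/MWh")
```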
|
38 |
Metamodel-Based Multidisciplinary Design Optimization of Automotive Structures. Ryberg, Ann-Britt. January 2017 (has links)
Multidisciplinary design optimization (MDO) can be used in computer-aided engineering (CAE) to efficiently improve and balance the performance of automotive structures. However, large-scale MDO is not yet generally integrated within automotive product development due to several challenges, of which excessive computing time is the most important. In this thesis, a metamodel-based MDO process that fits normal company organizations and CAE-based development processes is presented. The introduction of global metamodels offers a means to increase computational efficiency and distribute work without implementing complicated multi-level MDO methods. The presented MDO process is proven to be efficient for thickness optimization studies with the objective of minimizing mass. It can also be used for spot-weld optimization if the models are prepared correctly. A comparison of different methods reveals that topology optimization, which requires less model preparation and computational effort, is an alternative if load cases involving simulations of linear systems are judged to be of major importance. A technical challenge when performing metamodel-based design optimization is the lack of accuracy of metamodels representing complex responses that include discontinuities, which are common in, for example, crashworthiness applications. The decision boundary from a support vector machine (SVM) can be used to identify the border between different types of deformation behaviour. In this thesis, this information is used to improve the accuracy of feedforward neural network metamodels. Three different approaches are tested: splitting the design space and fitting separate metamodels for the different regions; adding estimated guiding samples to the fitting set along the boundary before a global metamodel is fitted; and using a special SVM-based sequential sampling method. Substantial improvements in accuracy are observed, and implementing SVM-based sequential sampling and estimated guiding samples can result in successful optimization studies in cases where more conventional methods fail.
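The first of the three approaches can be sketched as follows, on an invented discontinuous test response: an SVM learns the border between two deformation behaviours, and a separate neural-network metamodel is fitted per region. All settings are assumptions.

```python
# Split the design space along an SVM decision boundary and fit one
# feedforward-network metamodel per region, then route queries by region.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (400, 2))
behaviour = (X[:, 0] + X[:, 1] > 0).astype(int)      # two deformation modes
# Discontinuous response: a different trend plus a jump between the modes.
y = np.where(behaviour == 1, 5 + 2 * X[:, 0], -3 * X[:, 1]) + rng.normal(0, 0.1, 400)

svm = SVC(kernel="rbf").fit(X, behaviour)            # learn the behaviour border
models = {b: MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                          random_state=0).fit(X[behaviour == b], y[behaviour == b])
          for b in (0, 1)}

x_new = np.array([[0.4, 0.3]])
region = svm.predict(x_new)[0]                       # route query to its region
print(f"region {region}, metamodel prediction = {models[region].predict(x_new)[0]:.2f}")
```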
|
39 |
Sparse Multiclass And Multi-Label Classifier Design For Faster Inference. Bapat, Tanuja. 12 1900 (has links) (PDF)
Many real-world problems, like hand-written digit recognition or semantic scene classification, are treated as multiclass or multi-label classification problems. Solutions to these problems using support vector machines (SVMs) are well studied in the literature. In this work, we focus on building sparse max-margin classifiers for multiclass and multi-label classification. A sparse representation of the resulting classifier is important from both the efficient-training and fast-inference viewpoints, especially when the training and test set sizes are large. Very few of the existing multiclass and multi-label classification algorithms give importance to directly controlling the sparsity of the designed classifiers, and these algorithms were not found to be scalable. Motivated by this, we propose new formulations for sparse multiclass and multi-label classifier design and give efficient algorithms to solve them. The formulation for sparse multi-label classification also incorporates prior knowledge of label correlations. In both cases, the classification model is designed using a common set of basis vectors across all the classes. These basis vectors are greedily added to an initially empty model to approximate the target function. The sparsity of the classifier can be controlled by a user-defined parameter, d_max, which indicates the maximum number of common basis vectors. The computational complexity of these algorithms for multiclass and multi-label classifier design is O(l k^2 d_max^2), where l is the number of training set examples and k is the number of classes. The inference time for the proposed classifiers is O(k d_max). Numerical experiments on various real-world benchmark datasets demonstrate that the proposed algorithms yield sparse classifiers that require fewer basis vectors than state-of-the-art algorithms to attain the same generalization performance. A very small value of d_max results in a significant reduction in inference time. Thus, the proposed algorithms provide useful alternatives to existing algorithms for sparse multiclass and multi-label classifier design.
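A simplified sketch of the greedy scheme: basis vectors (kernel centres shared by all k classes) are added one at a time up to a budget d_max, each chosen to best reduce the residual of a least-squares multiclass fit. The matching-pursuit-style selection rule here is a stand-in for the thesis's actual algorithm, not a reproduction of it.

```python
# Greedily grow a common set of kernel basis vectors for all classes,
# refitting least-squares weights after each addition.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(0, 1, (300, 2))
labels = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1).astype(int) + (X[:, 0] > 1)
Y = np.eye(3)[labels]                       # one-hot targets, k = 3 classes

def rbf(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

K = rbf(X, X)                               # candidate columns = training points
basis, d_max = [], 10
R = Y.copy()                                # residual of the current model
for _ in range(d_max):
    # Pick the candidate most correlated with the residual across classes.
    corr = ((K.T @ R) ** 2).sum(axis=1)
    corr[basis] = -1.0                      # never reselect a basis vector
    basis.append(int(np.argmax(corr)))
    # Refit least-squares weights on the selected columns, update residual.
    W, *_ = np.linalg.lstsq(K[:, basis], Y, rcond=None)
    R = Y - K[:, basis] @ W

pred = np.argmax(K[:, basis] @ W, axis=1)
print(f"{len(basis)} basis vectors, train accuracy = {(pred == labels).mean():.3f}")
```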
|
40 |
Identifying Categorical Land Use Transition and Land Degradation in Northwestern Drylands of Ethiopia. Zewdie, Worku; Csaplovics, Elmar. 08 June 2016
Land use transition in dryland ecosystems is one of the major driving forces of landscape change that directly impacts human welfare. In this study, the support vector machine (SVM) classification algorithm and cross-tabulation matrix analysis are used to identify systematic and random processes of change. The magnitude and prevailing signals of land use transitions are assessed taking into account net change and swap change. Moreover, spatiotemporal patterns and the relationship between precipitation and the Normalized Difference Vegetation Index (NDVI) are explored to evaluate landscape degradation. The assessment showed that 44% net change and about 54% total change occurred during the study period, with the latter being due to swap change. The conversion of over 39% of woodland to cropland accounts for the greatest loss of valuable ecosystems in the region. The spatial relationship between NDVI and precipitation also showed an R² below 0.5 over 55% of the landscape, with no significant change in the precipitation trend, an indicative symptom of land degradation. This in-depth analysis of random and systematic landscape change is crucial for designing policy interventions to halt woodland degradation in this fragile environment.
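The cross-tabulation analysis can be sketched as follows, on simulated three-class maps: build the transition matrix between two dates and decompose each class's total change into net change and swap change. The maps and class names are placeholders.

```python
# Cross-tabulation (transition) matrix between two categorical land-use maps,
# with per-class decomposition: total change = net change + swap change.
import numpy as np

rng = np.random.default_rng(0)
CLASSES = ["woodland", "cropland", "grassland"]
t1 = rng.integers(0, 3, (100, 100))                 # map at time 1
t2 = np.where(rng.random((100, 100)) < 0.2,         # ~20% of pixels change
              rng.integers(0, 3, (100, 100)), t1)   # map at time 2

# Transition matrix: rows = class at time 1, columns = class at time 2.
M = np.zeros((3, 3), int)
np.add.at(M, (t1.ravel(), t2.ravel()), 1)

for i, name in enumerate(CLASSES):
    loss = M[i].sum() - M[i, i]                     # pixels leaving class i
    gain = M[:, i].sum() - M[i, i]                  # pixels entering class i
    net = abs(gain - loss)
    swap = 2 * min(gain, loss)                      # simultaneous gain and loss
    print(f"{name}: total={gain + loss}, net={net}, swap={swap}")
```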
|