231

Prediction of the company insolvency using machine learning methods in the EU passenger transport industry

Čarnogurská, Anna January 2019 (has links)
The diploma thesis focuses on the application of Support Vector Machines (SVM) to bankruptcy prediction. The theoretical part gives an overview of the EU passenger transport industry for each mode of transport individually and presents potential causes of bankruptcy in the industry based on real examples. The empirical analysis examines the accuracy of an SVM classifier with different types of kernels and compares its predictive power with a logistic regression model. Finally, the obtained results are summarized, interpreted in economic terms, and discussed in relation to selected studies.
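As a rough illustration of the kind of comparison described above (not the author's code), a minimal scikit-learn sketch pits SVM classifiers with different kernels against a logistic regression baseline under cross-validation; the synthetic data stands in for the firms' financial features.

```python
# Sketch: compare SVM kernels against a logistic-regression baseline with
# cross-validated accuracy, on synthetic data standing in for financial ratios.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for financial-ratio features of solvent/insolvent firms.
X, y = make_classification(n_samples=500, n_features=10, n_informative=6,
                           weights=[0.8, 0.2], random_state=0)

models = {
    "SVM (linear)": SVC(kernel="linear"),
    "SVM (RBF)": SVC(kernel="rbf"),
    "SVM (polynomial)": SVC(kernel="poly", degree=3),
    "Logistic regression": LogisticRegression(max_iter=1000),
}

for name, clf in models.items():
    pipe = make_pipeline(StandardScaler(), clf)   # scale features before fitting
    scores = cross_val_score(pipe, X, y, cv=5)    # 5-fold cross-validated accuracy
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```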
232

Cross Site Product Page Classification with Supervised Machine Learning / Webbsideöverskridande klassificering av produktsidor med övervakad maskininlärning

Huss, Jakob January 2016 (has links)
This work outlines a possible technique for identifying webpages that contain product specifications. Using support vector machines, a product-page classifier was constructed and tested with various settings. The final classifier reached a precision of 0.958 and a recall of 0.796 for product pages. These scores suggest that the method could be a valid technique for real-world web classification tasks if additional features and more data were made available.
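A minimal sketch of this style of supervised product-page classification, assuming a simple bag-of-words representation and a linear SVM (the thesis's exact features are not reproduced here); the tiny example pages and labels are placeholders.

```python
# Sketch: a bag-of-words product-page classifier evaluated on precision and
# recall. The example "pages" are placeholder strings, not real web data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

pages = [
    "Add to cart price USD specifications weight dimensions battery",
    "Product details SKU in stock shipping technical specifications",
    "About us company history our team careers contact",
    "Blog post news archive comments share subscribe",
] * 25                                  # repeated placeholders for illustration
labels = [1, 1, 0, 0] * 25              # 1 = product page, 0 = other page

X_train, X_test, y_train, y_test = train_test_split(
    pages, labels, test_size=0.3, random_state=0, stratify=labels)

clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(X_train, y_train)
pred = clf.predict(X_test)
print("precision:", precision_score(y_test, pred))
print("recall:   ", recall_score(y_test, pred))
```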
233

Mapping forest habitats in protected areas by integrating LiDAR and SPOT Multispectral Data

Alvarez, Manuela January 2016 (has links)
KNAS (Continuous Habitat Mapping of Protected Areas) is a Metria AB project that produces vegetation and habitat maps of protected areas in Sweden. Vegetation and habitat mapping is challenging because of the heterogeneity, spatial variability, and complex vertical and horizontal structure of vegetation. Traditionally, multispectral data is used because it provides information about the horizontal structure of vegetation. LiDAR data contains information about the vertical structure of vegetation and therefore helps improve classification accuracy when used together with spectral data. The objectives of this study are to integrate LiDAR and multispectral data for KNAS and to determine the contribution of LiDAR data to the classification accuracy. To achieve these goals, two object-based classification schemes are proposed and compared: a spectral classification scheme and a spectral-LiDAR classification scheme. The spectral data consists of four SPOT-5 bands acquired in 2005 and 2006. The spectral-LiDAR data includes the same four SPOT-5 bands and nine LiDAR-derived layers produced from NH point cloud data from airborne laser scanning, acquired in 2011 and 2012 from the Swedish Mapping, Cadastral and Land Registration Authority. Processing of the point cloud data includes filtering, buffer and tile creation, height normalization, and rasterization. Owing to the complexity of KNAS production, the classification schemes are based on a simplified KNAS workflow and a selection of KNAS forest classes. The classification schemes include segmentation, database creation, collection of training and validation areas, SVM classification, and accuracy assessment. Spectral-LiDAR data fusion is performed during segmentation in eCognition. The segmentation results are used to build a database of segmented objects with mean values of the spectral or spectral-LiDAR data. The databases are used in Matlab to perform SVM classification with cross-validation. Cross-validation accuracy, overall accuracy, kappa coefficient, and producer's and user's accuracies are computed. The training and validation areas are common to both classification schemes. The results show an improvement in overall classification accuracy for the spectral-LiDAR scheme compared to the spectral scheme, with gains of 21.9 %, 11.0 % and 21.1 % for the study areas of Linköping, Örnsköldsvik and Vilhelmina, respectively.
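The thesis runs segmentation in eCognition and classification in Matlab; purely as an illustration of the classification step, the Python sketch below compares per-segment spectral features against spectral-plus-LiDAR features with an SVM, reporting cross-validated overall accuracy and kappa. The feature arrays are synthetic stand-ins for the four SPOT-5 bands and nine LiDAR-derived layers.

```python
# Sketch of the classification step only: per-segment mean features, SVM with
# cross-validation, overall accuracy and kappa. Features are synthetic.
import numpy as np
from sklearn.metrics import accuracy_score, cohen_kappa_score
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_segments, n_classes = 600, 5
y = rng.integers(0, n_classes, n_segments)                 # forest-class labels
spectral = rng.normal(y[:, None], 1.0, (n_segments, 4))    # 4 SPOT-5 bands
lidar = rng.normal(y[:, None], 1.5, (n_segments, 9))       # 9 LiDAR-derived layers

for name, X in {"spectral": spectral,
                "spectral + LiDAR": np.hstack([spectral, lidar])}.items():
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    pred = cross_val_predict(clf, X, y, cv=5)
    print(f"{name}: overall accuracy {accuracy_score(y, pred):.3f}, "
          f"kappa {cohen_kappa_score(y, pred):.3f}")
```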
234

Support Vector Machine Algorithm applied to Industrial Robot Error Recovery / Support Vector Machine algoritm tillämpad inom felhantering på industrirobotar

Lau, Cidney January 2015 (has links)
A machine learning approach for error recovery in an industrial robot for the plastic mold industry is proposed in this master thesis project. The goal was to improve the present error recovery method by providing a learning algorithm to the system instead of using the traditional algorithm-based control. The chosen method was the Support Vector Machine (SVM), due to its robustness and good generalization performance in real-world applications. Furthermore, SVM generates good classifiers even with a minimal number of training examples. In production, there will be no need for a human operator to train the SVM with hundreds or thousands of training examples to achieve good generalization. The advantage of SVM is that good accuracy can be achieved with only a couple of training examples, provided they are well designed. Firstly, the proposed algorithm was evaluated experimentally. The experiments assessed classification performance on the training examples, which formed a hand-coded data set created with defined input and output signals. Secondly, the results from the experiments were tested in a simulated environment. Using only a few training examples, the SVM reached perfect performance. In conclusion, SVM is a good tool for classification and a suitable method for error recovery on the industrial robot for the plastic mold industry. / A machine learning strategy for error handling in industrial robots in the plastic mold industry is presented in this thesis. The goal was to improve the current error handling by applying a learning algorithm instead of the robot's traditional pre-programmed system. The chosen method is the Support Vector Machine (SVM), as SVM is a robust method that gives good performance in real-world applications. SVM generates good classifiers even with a minimal number of training examples. The advantage of SVM is that good precision can be achieved with only a couple of training examples, provided that the training examples are well designed. This means that operators in production do not need to train the SVM with hundreds or thousands of training examples to achieve good generalization. In the project, the SVM method was evaluated experimentally and then tested in a simulation program. The results showed that the SVM method achieved perfect precision using only a small number of training examples. A conclusion from this study is that SVM is a good method for classification and suitable for error handling in industrial robots in the plastics industry.
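As an illustrative sketch (not the thesis's actual signal set), an SVM can be trained on a handful of hand-coded examples that map robot status signals to recovery actions; the signal layout and action names below are hypothetical.

```python
# Sketch: an SVM trained on a few hand-coded examples mapping robot status
# signals to recovery actions. Signal columns and action names are hypothetical.
from sklearn.svm import SVC

# Each row: [gripper_closed, part_present, mold_open, vacuum_ok]  (hypothetical)
X_train = [
    [1, 1, 1, 1],   # normal cycle
    [1, 0, 1, 1],   # part dropped
    [0, 1, 0, 1],   # mold not open
    [1, 1, 1, 0],   # vacuum failure
]
y_train = ["continue", "retry_pick", "wait_for_mold", "stop_and_alert"]

clf = SVC(kernel="linear")          # a linear kernel is enough for few examples
clf.fit(X_train, y_train)

print(clf.predict([[1, 0, 1, 1]]))  # -> ['retry_pick']
```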
235

Behavioral Monitoring on Smartphones for Intrusion Detection in Web Systems : A Study of Limitations and Applications of Touchscreen Biometrics / Bevakning av användarbeteende på mobila enheter för identifiering av intrång i webbsystem

Lövmar, Anton January 2015 (has links)
Touchscreen biometrics is the process of measuring user behavior on a touchscreen and using this information for authentication. This thesis uses SVM and k-NN classifiers to test the applicability of touchscreen biometrics in a web environment for smartphones. Two new concepts are introduced: model training using the Local Outlier Factor (LOF), and building custom models for touch behaviour in the context of individual UI components instead of the whole screen. The lowest error rate achieved was 5.6 %, using the k-NN classifier, with a standard deviation of 2.29 %. No real benefit of using the LOF algorithm in the way presented in this thesis could be found. The method of using contextual models was found to yield better performance than looking at the entire screen. Lastly, ideas for using touchscreen biometrics as an intrusion detection system are presented. / Touchscreen biometrics means measuring the behavior of a user operating a touchscreen and recognizing the user based on that information. In this thesis, SVM and k-NN classifiers are used to test the applicability of this type of biometrics in a web environment for smartphones. Two new concepts are introduced: model training with the Local Outlier Factor, and building models for user interactions with individual interface elements instead of the screen as a whole. The best results for the classifiers had an error rate of 5.6 % with a standard deviation of 2.29 %. No advantage of using LOF for training over random training could be found. However, the results improved when contextual models were used. Finally, ideas are presented for how such a system can be used to detect intrusions in web systems.
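One way LOF and k-NN could fit together in this setting (a sketch under assumed, synthetic touch features, not the thesis's pipeline) is to filter a user's own outlying touch samples with the Local Outlier Factor before fitting the classifier that separates the genuine user from others.

```python
# Sketch: filter a user's touch-feature samples with Local Outlier Factor,
# then fit a k-NN classifier that separates that user from others. The
# feature columns (pressure, area, duration, ...) are hypothetical.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier, LocalOutlierFactor

rng = np.random.default_rng(0)
user_touches = rng.normal(0.0, 1.0, (200, 5))    # genuine user's touch features
other_touches = rng.normal(1.5, 1.0, (200, 5))   # other users' touch features

# Drop the genuine user's own outlying samples before training.
inliers = LocalOutlierFactor(n_neighbors=20).fit_predict(user_touches) == 1
X = np.vstack([user_touches[inliers], other_touches])
y = np.array([1] * inliers.sum() + [0] * len(other_touches))

knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)
new_touch = [[0.1, -0.2, 0.0, 0.3, 0.1]]
print("predicted class (1 = genuine user):", knn.predict(new_touch))
```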
236

Combining RGB and Depth Images for Robust Object Detection using Convolutional Neural Networks / Kombinera RGB- och djupbilder för robust objektdetektering med neurala faltningsnätverk

Thörnberg, Jesper January 2015 (has links)
We investigated the advantage of combining RGB images with depth data to obtain more robust object classifications and detections using pre-trained deep convolutional neural networks. We relied upon the raw images from publicly available datasets captured using Microsoft Kinect cameras. The raw images varied in size and therefore required resizing to fit our network; we designed a resizing method called "bleeding edge" to avoid distorting the objects in the images. We present a novel method of interpolating the missing depth pixel values by comparing them to similar RGB values, which proved superior to the other methods tested. We showed that a simple colormap transformation of the depth image can provide close to state-of-the-art performance. Using our methods, we achieve state-of-the-art performance on the Washington Object dataset and we provide some results on the Washington Scenes (V1) dataset. Specifically, for the detection, we used contours at different thresholds to find the likely object locations in the images. For the classification task we report state-of-the-art results using only RGB and RGB-D images, while depth data alone gave close to state-of-the-art results. For the detection task we found the RGB-only detector to be superior to the other detectors.
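The colormap idea mentioned above can be illustrated with a short sketch: a single-channel depth map is mapped to a three-channel color image so that an RGB-pretrained CNN can consume it. The depth array below is synthetic; real data would come from a Kinect sensor, and the choice of the "jet" colormap is an assumption for illustration.

```python
# Sketch of the colormap transformation: turn a one-channel depth map into a
# three-channel color image suitable as input to an RGB-pretrained CNN.
import numpy as np
from matplotlib import cm

depth = np.random.default_rng(0).uniform(0.5, 4.0, (480, 640))   # metres (synthetic)

# Normalise to [0, 1], apply a colormap, keep the RGB channels as uint8.
d = (depth - depth.min()) / (depth.max() - depth.min())
depth_rgb = (cm.jet(d)[..., :3] * 255).astype(np.uint8)          # shape (480, 640, 3)

print(depth_rgb.shape, depth_rgb.dtype)   # ready to resize and feed to a CNN
```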
237

Individualized Motion Monitoring by Wearable Sensor : Pre-impact fall detection using SVM and sensor fusion / Individanpassad rörelsemonitorering med hjälp av bärbara sensorer

Carlsson, Tor January 2015 (has links)
Among the elderly, falling represents a major threat to individual health and is considered a major source of morbidity and mortality. In Sweden alone, three elderly people die each day in accidents related to falling, and those who survive a fall are likely to suffer a decreased quality of life. As the percentage of elderly people in the population increases worldwide, the need for preventive methods and tools will grow drastically in order to cope with rising health-care costs. This report is the result of a conceptual study in which an algorithm for individualized motion monitoring and pre-impact fall detection is developed. The algorithm learns the normal state of the wearer in order to detect anomalous events such as a fall. Furthermore, this report presents the requirements and issues related to the implementation of such a system. The result of the study is presented as a comparison between the individualized system and a more generalized fall detection system. The conclusion is that the presented type of algorithm is capable of learning the user's behaviour and is able to detect a fall before the user hits the ground, with a mean lead time of 301 ms. / Among the elderly, the risk of fall-related injuries is considerable, often with severe physical injuries and psychological effects as a consequence. With an increasing share of elderly people in the population, society's cost of care is also expected to rise. Through active and preventive measures, the degree of personal suffering and the fall-related costs to society can be reduced. This report is the result of a conceptual study in which an algorithm for active, individualized fall detection has been developed. The algorithm learns the user's normal movement patterns and then distinguishes these from abnormal movement patterns. The report describes the requirements and questions relevant to the development of such a system. The result of the study is presented as a comparison between an individualized and a general system. The study shows that the algorithm can learn the user's usual movement patterns and then distinguish these from a fall, on average 301 ms before the user hits the ground.
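Learning the wearer's "normal state" and flagging deviations is, in spirit, a one-class problem; a minimal sketch with a one-class SVM on synthetic inertial-window features (not the thesis's actual feature set or classifier configuration) looks like this.

```python
# Sketch: learn the wearer's normal motion with a one-class SVM and flag
# anomalous windows. Features (e.g. per-window acceleration statistics) are
# synthetic stand-ins for real IMU data.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
normal_windows = rng.normal(1.0, 0.2, (1000, 6))            # everyday movement
fall_window = np.array([[3.5, 2.8, 3.1, 0.2, 0.1, 2.9]])    # sudden acceleration

scaler = StandardScaler().fit(normal_windows)
model = OneClassSVM(nu=0.01, gamma="scale").fit(scaler.transform(normal_windows))

print(model.predict(scaler.transform(normal_windows[:3])))  # mostly  1 (normal)
print(model.predict(scaler.transform(fall_window)))         # -1 (anomalous)
```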
238

Using machine learning to predict power deviations at Forsmark

Björn, Albin January 2021 (has links)
The power output at the Forsmark nuclear power plant sometimes deviates from the expected value, and the causes of these deviations are sometimes known and sometimes unknown. Three types of machine learning methods (k-nearest neighbors, support vector machines and linear regression) were trained to predict whether or not the power deviation would fall outside an expected interval. The data used to train the models was gathered from points in the power production process, and the signals consisted mostly of temperatures, pressures and flows. A large part of the project was dedicated to preparing the data before using it to train the models. Temperature signals were shown to be the best predictors of deviation in power, followed by pressure and flow. The model type that performed best was k-nearest neighbors, followed by support vector machines and linear regression. Principal component analysis was performed to reduce the size of the training datasets; models trained on the reduced data performed as well on the prediction task as those trained without principal component analysis.
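A minimal sketch of the best-performing combination described above, assuming synthetic sensor signals and an arbitrary deviation rule in place of the plant data: PCA for dimensionality reduction followed by a k-NN classifier predicting whether the deviation falls outside the expected interval.

```python
# Sketch: PCA-reduced sensor features feeding a k-NN classifier that predicts
# whether the power deviation is outside the expected interval. The sensor
# matrix and labels are synthetic stand-ins for temperatures/pressures/flows.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(0, 1, (2000, 60))                 # 60 plant signals (synthetic)
y = (X[:, :10].sum(axis=1) > 2).astype(int)      # 1 = deviation outside interval

pipe = make_pipeline(StandardScaler(), PCA(n_components=10),
                     KNeighborsClassifier(n_neighbors=7))
print("CV accuracy:", cross_val_score(pipe, X, y, cv=5).mean())
```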
239

Shearlet-Based Descriptors and Deep Learning Approaches for Medical Image Classification

Al-Insaif, Sadiq 07 June 2021 (has links)
In this Ph.D. thesis, we develop effective techniques for medical image classification, particularly for histopathological and magnetic resonance images (MRI). Our techniques are capable of handling the high variability in the content of such images. Handcrafted techniques based on texture analysis are used for the classification task. We also use deep learning models, but training such models from scratch can be a challenging process; instead, we employ deep features and transfer learning. First, we propose a combined texture-based feature representation that is computed in the complex shearlet domain for histopathological image classification. With complex coefficients, we examine both the magnitude and relative phase of shearlets to form the feature space. Our proposed techniques are successful for histopathological image classification. Furthermore, we investigate their ability to generalize to MRI datasets, which present an additional challenge, namely high dimensionality: an MRI sample consists of a large number of slices, so our shearlet-based feature representation for histopathological images cannot be used without adjustment. We therefore consider the 3D shearlet transform, given the volumetric nature of MRI data; an advantage of the 3D shearlet transform is that it takes adjacent slices of MRI data into consideration. Second, we study the classification of histopathological images using pre-trained deep learning models. A pre-trained deep learning model can act as a starting point for datasets with a limited number of samples, so we used various models either as unsupervised feature extractors or as weight initializers to classify histopathological images. When it comes to MRI samples, fine-tuning a deep learning model is not straightforward: pre-trained models are trained on RGB images with three channels, whereas an MRI sample has a larger number of slices, so fine-tuning a convolutional neural network (CNN) requires adjusting the model to work with MRI data. We fine-tune pre-trained models and then use them as feature extractors. Thereafter, we demonstrate the effectiveness of fine-tuned deep features with classical machine learning (ML) classifiers, namely a support vector machine and a decision tree bagger. Furthermore, instead of using a classical ML classifier for the MRI samples, we built a custom CNN that takes both the 3D shearlet descriptors and deep features as input. This custom network processes our feature representation end-to-end and then classifies an MRI sample. Our custom CNN is more effective than a classical ML classifier on a hidden MRI dataset, an indication that our CNN model is less susceptible to over-fitting.
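The final classification stage described above, handcrafted descriptors concatenated with deep features and fed to a support vector machine or a bagged decision-tree ensemble, can be sketched as follows; both feature blocks are random stand-ins for the shearlet-domain and CNN-derived features, so the numbers are not meaningful.

```python
# Sketch: concatenate a handcrafted descriptor with deep features and compare
# an SVM against a bagged decision-tree ensemble. Features are random stand-ins.
import numpy as np
from sklearn.ensemble import BaggingClassifier      # default base: decision tree
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
shearlet_feats = rng.normal(0, 1, (300, 64))        # handcrafted descriptor block
deep_feats = rng.normal(0, 1, (300, 512))           # pooled CNN-feature block
X = np.hstack([shearlet_feats, deep_feats])
y = rng.integers(0, 2, 300)                         # binary image labels

classifiers = {"SVM": make_pipeline(StandardScaler(), SVC()),
               "Decision-tree bagger": BaggingClassifier(n_estimators=50)}
for name, clf in classifiers.items():
    print(name, cross_val_score(clf, X, y, cv=5).mean())
```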
240

Techniques d'amélioration des performances de compression dans le cadre du codage vidéo distribué / Techniques for improving the performance of distributed video coding

Abou El Ailah, Abdalbassir 14 December 2012 (has links)
Distributed video coding (DVC) is a recently proposed video coding technique that is particularly suited to a new class of applications such as wireless video surveillance, multimedia sensor networks, and mobile phones. In DVC, side information (SI) is estimated at the decoder from the available decoded frames and used for the decoding and reconstruction of the other frames. In this thesis, we propose new techniques that improve the quality of the SI. First, iterative refinement of the SI is performed after the decoding of each DCT sub-band. Next, a new SI generation method is proposed that uses bidirectional motion vector estimation and quad-tree refinement. New approaches are then proposed to combine the global and local estimations using the differences between the corresponding blocks and the SVM technique. In addition, algorithms are proposed to improve the fusion during decoding. Furthermore, the segmented objects of the reference frames are used in the fusion, using elastic curves and object-based motion compensation. Extensive simulations have been carried out to evaluate the performance of the proposed techniques, showing significant gains over the classical DISCOVER codec. Moreover, the DVC performance obtained by applying the proposed algorithms exceeds that of H.264/AVC Intra and H.264/AVC No motion for the tested sequences. In addition, the gap with respect to H.264/AVC Inter (IB...IB) is considerably reduced. / Distributed Video Coding (DVC) is a recently proposed paradigm in video communication, which fits well with emerging applications such as wireless video surveillance, multimedia sensor networks, wireless PC cameras, and mobile camera phones. These applications require low-complexity encoding, while possibly affording high-complexity decoding. In DVC, Side Information (SI) is estimated at the decoder, using the available decoded frames, and used for the decoding and reconstruction of other frames. In this PhD thesis, we propose new techniques in order to improve the quality of the SI. First, successive refinement of the SI is performed after each decoded DCT band. Then, a new scheme for SI generation based on backward and forward motion estimation and quad-tree refinement is proposed. Furthermore, new methods for combining global and local motion estimations are proposed to further improve the SI, using the differences between the corresponding blocks and a Support Vector Machine (SVM). In addition, algorithms are proposed to refine the fusion during the decoding process. Furthermore, the foreground objects are used in the combination of the global and local motion estimations, using elastic curves and foreground-object motion compensation. Extensive experiments have been conducted showing that important gains are obtained by the proposed techniques compared to the classical DISCOVER codec. In addition, the performance of DVC applying the proposed algorithms now outperforms H.264/AVC Intra and H.264/AVC No motion for the tested sequences. Besides that, the gap with H.264/AVC in an Inter IB…IB configuration is significantly reduced.
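Purely as a sketch of the SVM-based fusion idea (not the thesis's actual feature set, labels, or codec integration): a classifier can choose, per block, between the globally and locally motion-compensated candidates from block-difference statistics. All values below are synthetic, and the training label here is simply whichever candidate happens to have the smaller difference.

```python
# Sketch: per-block fusion of global and local motion-compensated predictions
# with an SVM, using block-difference statistics as features. All values are
# synthetic; real features would come from the decoded reference frames.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_blocks = 2000
# Hypothetical per-block features: differences of each candidate prediction
# against the co-located reference block, and between the two candidates.
sad_global = rng.gamma(2.0, 10.0, n_blocks)
sad_local = rng.gamma(2.0, 10.0, n_blocks)
sad_between = np.abs(sad_global - sad_local) + rng.gamma(1.0, 5.0, n_blocks)
X = np.column_stack([sad_global, sad_local, sad_between])
y = (sad_local < sad_global).astype(int)     # 1 = pick the local estimation

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print("CV accuracy of the fusion classifier:", cross_val_score(clf, X, y, cv=5).mean())
```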
