451 |
Automatic loose gravel condition detection using acoustic observations. Kyros, Gionian; Myrén, Elias. January 2022
Evaluating a road's condition is essential for its upkeep, and this is particularly true for gravel roads. When loose gravel is not adequately maintained, it can pose a hazard to drivers, who may lose control of their vehicle and cause accidents. Current maintenance procedures are either laborious or time-consuming, so road agencies and institutions are looking for more effective techniques. This study seeks to establish an automatic method for estimating loose gravel using acoustic observation. On gravel roads, recordings from a car's interior were evaluated and matched to the road's state. The first strategy examined road sections with a four-tier (multiclass) manual classification based on their perceived loose gravel condition, in accordance with the Swedish road administration authority's guidelines. The second examined a two-tier (binary) manual classification, distinguishing between roads with low and high maintenance needs. Sound features were extracted and processed for subsequent analysis. Several supervised machine learning methods and algorithms, combined with selected data preprocessing strategies, were deployed. The performance of each strategy and model was determined by assessing its classification accuracy along with other performance metrics. The SVM classifier performed best in classifying both multiclass and binary gravel road conditions, achieving an accuracy of 57.8% on the four-tier scale and 82% on the two-tier scale. These results indicate some merit in using audio features as predictive features in the automatic classification of loose gravel conditions on gravel roads.
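As a rough illustration of the kind of pipeline this abstract describes, the sketch below extracts simple spectral (MFCC) features from audio clips and feeds them to an SVM classifier. It assumes librosa and scikit-learn, uses synthetic signals in place of the in-cabin recordings, and the feature choices and labels are illustrative rather than the thesis's actual configuration.

```python
# Illustrative sketch only: MFCC summary features + SVM on synthetic "clips".
import numpy as np
import librosa
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

SR = 22050  # assumed sampling rate

def clip_features(y, sr=SR, n_mfcc=13):
    """Summarize one audio clip by the mean and std of its MFCCs."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Synthetic stand-ins for the in-cabin recordings of labelled road sections.
rng = np.random.default_rng(0)
clips = [rng.standard_normal(2 * SR).astype(np.float32) for _ in range(40)]
labels = np.array([0, 1, 2, 3] * 10)  # four-tier condition labels (placeholder)

X = np.vstack([clip_features(c) for c in clips])
model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print("CV accuracy:", cross_val_score(model, X, labels, cv=5).mean())
```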
|
452 |
Diagnostic prediction on anamnesis in digital primary health care / Diagnostisk predicering genom anamnes inom den digitala primärvården. Kindblom, Marie. January 2018
Primary health care is facing extensive changes due to digitalization, while the field of application for machine learning is expanding. The merging of these two fields could result in a range of outcomes, one of them being an improved and more rigorous adoption of clinical decision support systems. Clinical decision support systems have been around for a long time but are still not fully adopted in primary health care due to insufficient performance and interpretability. Clinical decision support systems have a range of supportive functions to assist the clinician during decision making, where one of the most researched topics is diagnostic support. This thesis investigates how self-described anamnesis in the form of free text and multiple-choice questions performs in the prediction of diagnostic outcome. The chosen approach is to compare text to different subsets of multiple-choice questions for diagnostic prediction across a range of classification methods. The results indicate that the text data holds a substantial amount of information, and that the multiple-choice questions used in this study are of varying quality yet suboptimal compared to the text data. The overall tendency is that Support Vector Machines perform well on text classification and that Random Forests and Naive Bayes have performance equal to Support Vector Machines on multiple-choice questions. / Primary health care is expected to undergo widespread digitalization in the coming years, while machine learning gains ever broader fields of application. The merging of these two fields enables a range of improved techniques, one of which would be an improved and more rigorous adoption of clinical decision support systems. Variants of clinical decision support systems have existed for a long time, but they have not yet been fully incorporated into primary health care, above all due to insufficient performance and interpretability. Clinical decision support offers a range of functions for clinicians during decision making, where one of the most researched topics is diagnostic support. This thesis investigates how self-described anamnesis in the form of free text and multiple-choice questions performs in predicting the diagnosis. The chosen approach has been to compare text with different subsets of multiple-choice questions using a range of classification methods. The results indicate that the text data contains considerably more information than the multiple-choice questions, and that the multiple-choice questions used in this study are of varying quality and generally suboptimal in performance compared to the text data. The general tendency is that Support Vector Machines perform well for text classification, while Random Forests and Naive Bayes are equivalent alternatives to Support Vector Machines for prediction using multiple-choice questions.
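The comparison described above can be illustrated with a small sketch: free-text anamnesis represented with TF-IDF versus multiple-choice answers represented with one-hot encoding, each evaluated with the three classifier families named in the abstract. The toy symptoms, diagnoses, and settings are invented for illustration and are not the clinical material or the study's setup.

```python
# Toy comparison of free-text vs. multiple-choice features for diagnostic prediction.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder
from sklearn.svm import LinearSVC

texts = ["sore throat and fever for two days", "itchy rash on both arms",
         "sore throat, mild fever", "red itchy skin after new detergent"] * 5
mc_answers = [["fever:yes", "rash:no"], ["fever:no", "rash:yes"],
              ["fever:yes", "rash:no"], ["fever:no", "rash:yes"]] * 5
labels = ["pharyngitis", "dermatitis", "pharyngitis", "dermatitis"] * 5

classifiers = {
    "SVM": LinearSVC(),
    "Random Forest": RandomForestClassifier(n_estimators=100),
    "Naive Bayes": MultinomialNB(),
}
for name, clf in classifiers.items():
    text_model = make_pipeline(TfidfVectorizer(), clf)                      # free text -> TF-IDF
    mc_model = make_pipeline(OneHotEncoder(handle_unknown="ignore"), clf)   # answers -> one-hot
    print(name,
          "text:", cross_val_score(text_model, texts, labels, cv=5).mean().round(2),
          "multiple choice:", cross_val_score(mc_model, mc_answers, labels, cv=5).mean().round(2))
```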
|
453 |
Classification of Healthy and Alzheimer's Patients Using Electroencephalography and Supervised Machine Learning / Klassifiering av friska och alzheimers patienter med hjälp av elektroencefalografi och maskininlärning. Javanmardi, Ramtin; Rehman, Dawood. January 2018
Alzheimer's disease is one of the most costly illnesses that exist today, and the number of people with the disease is expected to increase by 100 million by the year 2050. The medications that exist today are most effective if Alzheimer's is detected at an early stage, since they do not cure the disease but only slow its progression. As a diagnostic tool, electroencephalography (EEG) is a relatively cheap method in comparison to, for example, magnetic resonance imaging. However, it is not clear how a human analyst should deduce from EEG data alone whether a patient has Alzheimer's disease. This is the underlying motivation for our investigation: can supervised machine learning methods, using only the spectral power of EEG data, recognize patterns that tell whether an individual has Alzheimer's disease or not? The trained supervised machine learning models achieved an average accuracy above 80%. This indicates that there is a difference in the neural oscillations of the brain between healthy individuals and Alzheimer's patients which the machine learning methods are able to detect through pattern recognition. / Alzheimer's is one of the most costly diseases that exist today, and the number of people with Alzheimer's is expected to increase by around 100 million by 2050. The medical help available today is most effective if Alzheimer's is detected at an early stage, since current medications do not cure the disease but act as progression-slowing treatment. Electroencephalography is a relatively cheap diagnostic method compared with magnetic resonance imaging. However, it is not obvious how a physician or other trained individual should interpret the EEG data to determine whether the patient they are examining has Alzheimer's. The underlying motivation for our investigation is therefore: can supervised machine learning, combined with the spectral power from the EEG data, produce models that can determine whether a patient has Alzheimer's or not? The mean accuracy of our models was above 80%. This suggests that there is an actual difference between the brain signals of a patient with Alzheimer's and those of a healthy individual, and that machine learning can find differences that a human easily misses.
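A minimal sketch of the general approach the abstract describes, using synthetic signals rather than the study's EEG recordings: per-channel spectral power in the classical frequency bands is computed with Welch's method and used as features for a supervised classifier. The sampling rate, band limits, and channel count are assumptions, not the study's parameters.

```python
# Illustrative sketch: band-power features from synthetic "EEG" + SVM classifier.
import numpy as np
from scipy.signal import welch
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

FS = 256  # assumed sampling rate in Hz
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(eeg):
    """eeg: (n_channels, n_samples) -> mean power per channel in each band."""
    freqs, psd = welch(eeg, fs=FS, nperseg=FS * 2, axis=-1)
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[:, mask].mean(axis=-1))
    return np.concatenate(feats)

rng = np.random.default_rng(1)
X = np.vstack([band_powers(rng.standard_normal((19, 10 * FS))) for _ in range(30)])
y = np.array([0, 1] * 15)  # 0 = healthy, 1 = Alzheimer's (placeholder labels)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```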
|
454 |
Detecting, Tracking, And Recognizing Activities In Aerial Video. Reilly, Vladimir. 01 January 2012
In this dissertation, we address the problem of detecting humans and vehicles, tracking them in crowded scenes, and finally determining their activities in aerial video. Even though this is a well-explored problem in the field of computer vision, many challenges still remain when one is presented with realistic data. These challenges include large camera motion, strong scene parallax, fast object motion, large object density, strong shadows, and insufficiently large action datasets. Therefore, we propose a number of novel methods based on exploiting scene constraints from the imagery itself to aid in the detection and tracking of objects. We show, via experiments on several datasets, that superior performance is achieved with the use of the proposed constraints. First, we tackle the problem of detecting moving, as well as stationary, objects in scenes that contain parallax and shadows. We do this on both regular aerial video and the new and challenging domain of wide area surveillance. This problem poses several challenges: large camera motion, strong parallax, a large number of moving objects, a small number of pixels on target, single channel data, and low video frame rate. We propose a method for detecting moving and stationary objects that overcomes these challenges, and evaluate it on the CLIF and VIVID datasets. In order to find moving objects, we use median background modelling, which requires few frames to obtain a workable model and is very robust when there is a large number of moving objects in the scene while the model is being constructed. We then remove false detections from parallax and registration errors using gradient information from the background image. Relying merely on motion to detect objects in aerial video may not be sufficient to provide complete information about the observed scene. First of all, objects that are permanently stationary may be of interest as well, for example to determine how long a particular vehicle has been parked at a certain location. Secondly, moving vehicles that are being tracked through the scene may sometimes stop and remain stationary at traffic lights and railroad crossings. These prolonged periods of non-motion make it very difficult for the tracker to maintain the identities of the vehicles. Therefore, there is a clear need for a method that can detect stationary pedestrians and vehicles in UAV imagery. This is a challenging problem due to the small number of pixels on the target, which makes it difficult to distinguish objects from background clutter and results in a much larger search space. We propose a method for constraining the search based on a number of geometric constraints obtained from the metadata. Specifically, we obtain the orientation of the ground plane normal, the orientation of the shadows cast by out-of-plane objects in the scene, and the relationship between object heights and the size of their corresponding shadows. We utilize the above information in a geometry-based shadow and ground plane normal blob detector, which provides an initial estimation of the locations of shadow casting out of plane (SCOOP) objects in the scene. These SCOOP candidate locations are then classified as either human or clutter using a combination of wavelet features and a Support Vector Machine. Additionally, we combine regular SCOOP and inverted SCOOP candidates to obtain vehicle candidates. We show impressive results on sequences from the VIVID and CLIF datasets, and provide comparative quantitative and qualitative analysis.
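A minimal sketch of the median background modelling and gradient-based false-positive suppression described above, run on synthetic registered frames; the frame count, thresholds, and object placement are illustrative, not the dissertation's values.

```python
# Illustrative sketch: pixel-wise median background + gradient-based suppression.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(2)
frames = rng.normal(100.0, 2.0, size=(7, 240, 320))   # stack of registered frames
frames[-1, 100:110, 150:158] += 40.0                  # a synthetic "moving object"

background = np.median(frames, axis=0)                # few frames suffice for a workable model
diff = np.abs(frames[-1] - background)
motion_mask = diff > 10.0                             # raw motion detections

# Suppress detections sitting on strong background gradients, where parallax
# and registration errors tend to produce spurious differences.
gy, gx = np.gradient(background)
gradient_mag = np.hypot(gx, gy)
motion_mask &= gradient_mag < 5.0

blob_labels, n_objects = ndimage.label(motion_mask)
print(n_objects, "candidate moving-object blobs")
```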
We also show that we can extend the SCOOP detection method to automatically estimate the orientation of the shadow in the image without relying on metadata. This is useful in cases where metadata is either unavailable or erroneous. Simply detecting objects in every frame does not provide sufficient understanding of the nature of their existence in the scene. It may be necessary to know how the objects have travelled through the scene over time and which areas they have visited. Hence, there is a need to maintain the identities of the objects across different time instances. The task of object tracking can be very challenging in videos that have low frame rate, high density, and a very large number of objects, as is the case in the WAAS data. Therefore, we propose a novel method for tracking a large number of densely moving objects in an aerial video. In order to keep the complexity of the tracking problem manageable when dealing with a large number of objects, we divide the scene into grid cells, solve the tracking problem optimally within each cell using bipartite graph matching, and then link the tracks across the cells. Besides tractability, grid cells also allow us to define a set of local scene constraints, such as road orientation and object context. We use these constraints as part of the cost function to solve the tracking problem; this allows us to track fast-moving objects in low frame rate videos. In addition to moving through the scene, the humans that are present may be performing individual actions that should be detected and recognized by the system. A number of different approaches exist for action recognition in both aerial and ground level video. One of the requirements for the majority of these approaches is the existence of a sizeable dataset of examples of a particular action from which a model of the action can be constructed. Such a luxury is not always possible in aerial scenarios, since it may be difficult to fly a large number of missions to observe a particular event multiple times. Therefore, we propose a method for recognizing human actions in aerial video from as few examples as possible (a single example in the extreme case). We use a bag-of-words action representation and a one-vs-all multi-class classification framework. We assume that most of the classes have many examples, and construct Support Vector Machine models for each class. Then, we use the Support Vector Machines that were trained for classes with many examples to improve the decision function of the Support Vector Machine that was trained using few examples, via late weighted fusion of decision values.
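The per-cell association step can be sketched as a bipartite assignment between detections in consecutive frames, solved with the Hungarian algorithm; here the cost is plain Euclidean distance, whereas the dissertation also folds road orientation and object context into the cost function. The coordinates and gating threshold below are invented for illustration.

```python
# Illustrative sketch: frame-to-frame association inside one grid cell.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

prev_dets = np.array([[12.0, 40.0], [55.0, 80.0], [90.0, 15.0]])  # (x, y) in frame t
curr_dets = np.array([[14.0, 43.0], [88.0, 18.0]])                # (x, y) in frame t+1

cost = cdist(prev_dets, curr_dets)          # Euclidean distance as the matching cost
row, col = linear_sum_assignment(cost)      # optimal one-to-one assignment

MAX_MOVE = 20.0                             # gate: reject implausibly long links
links = [(int(r), int(c)) for r, c in zip(row, col) if cost[r, c] < MAX_MOVE]
print("track links (prev index -> curr index):", links)
```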
|
455 |
A Cloud-Based Intelligent and Energy Efficient Malware Detection Framework. A Framework for Cloud-Based, Energy Efficient, and Reliable Malware Detection in Real-Time Based on Training SVM, Decision Tree, and Boosting using Specified Heuristics Anomalies of Portable Executable Files. Mirza, Qublai K.A. January 2017
The continuing financial and other related losses due to cyber-attacks demonstrate the substantial growth of malware and the lethality of their proliferation techniques. Every successful malware attack highlights the weaknesses in the defence mechanisms responsible for securing the targeted computer or network. Recent cyber-attacks reveal sophistication and intelligence in malware behaviour, with the ability to conceal their code and operate autonomously within the system. Conventional detection mechanisms not only fall short in malware detection capability, they also consume a large amount of resources while scanning the system for malicious entities. Many recent reports have highlighted this issue, along with the challenges faced by alternative solutions and studies conducted in the same area. There is an unprecedented need for a resilient and autonomous solution that takes a proactive approach against modern malware with stealthy behaviour. This thesis proposes a multi-aspect solution comprising an intelligent malware detection framework and an energy-efficient hosting model. The malware detection framework is a combination of conventional and novel malware detection techniques. The proposed framework incorporates comprehensive feature heuristics of files generated by a bespoke static feature extraction tool. These comprehensive heuristics are used to train three machine learning algorithms (Support Vector Machine, Decision Tree, and Boosting) to differentiate between clean and malicious files. The two techniques, feature heuristics and machine learning, are combined to form a two-factor detection mechanism. This thesis also presents a cloud-based, energy-efficient, and scalable hosting model, which combines multiple infrastructure components of Amazon Web Services to host the malware detection framework. This hosting model follows a client-server architecture, where the client is a lightweight service running on the host machine and the server is based in the cloud. The proposed framework and the hosting model were evaluated individually and in combination through specifically designed experiments using separate repositories of clean and malicious files. The experiments were designed to evaluate the malware detection capabilities and the energy efficiency while operating within a system. The proposed malware detection framework and the hosting model showed significant improvement in malware detection while consuming few CPU resources during operation.
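As a rough sketch of the training step described above, the following code fits the three classifier families named in the thesis to synthetic feature vectors standing in for the static PE heuristics, and compares them by cross-validated accuracy. The features, labels, hyperparameters, and choice of gradient boosting as the concrete boosting variant are assumptions for illustration.

```python
# Illustrative sketch: SVM, Decision Tree, and a boosting classifier on synthetic
# stand-ins for static PE feature heuristics (0 = clean, 1 = malicious).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier  # one concrete boosting choice
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)
X = rng.standard_normal((200, 12))                 # placeholder feature vectors
y = (X[:, 0] + X[:, 1] > 0).astype(int)            # synthetic clean/malicious labels

models = {
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    "Decision Tree": DecisionTreeClassifier(max_depth=5),
    "Boosting": GradientBoostingClassifier(n_estimators=100),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: {acc:.2f}")
```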
|
456 |
Mobile Machine Learning for Real-time Predictive Monitoring of Cardiovascular Disease. Boursalie, Omar. January 2016
Chronic cardiovascular disease (CVD) is increasingly becoming a burden for global healthcare systems. This burden can be attributed in part to traditional methods of managing CVD in an aging population, which involve periodic meetings between the patient and their healthcare provider. There is growing interest in developing continuous monitoring systems to assist in the management of CVD. Monitoring systems can utilize advances in wearable devices and health records, which provide minimally invasive methods of monitoring a patient's health. Despite these advances, the algorithms deployed to automatically analyze wearable sensor and health data are considered too computationally expensive to run on the mobile device. Instead, current mobile devices continuously transmit the collected data to a server for analysis, at great computational and data transmission expense.
In this thesis, a novel mobile system designed for monitoring CVD is presented. Unlike existing systems, the proposed system allows for continuous monitoring of physiological sensors and data from a patient's health record, with analysis of the data performed directly on the mobile device using machine learning algorithms (MLAs) to predict an individual's CVD severity level. The system successfully demonstrated that a mobile device can act as a complete monitoring system without requiring constant communication with a server. A comparative analysis between the support vector machine (SVM) and multilayer perceptron (MLP), exploring the effectiveness of each algorithm for monitoring CVD, is also discussed. Both models were able to classify CVD risk, with the SVM achieving the highest accuracy (63%) and specificity (76%). Finally, unlike in current systems, the resource requirements of each component in the system were evaluated. The MLP was found to be more efficient than the SVM when running on the mobile device. The results of this thesis also show that the MLAs' complexity was not a barrier to deployment on a mobile device. / Thesis / Master of Applied Science (MASc) / In this thesis, a novel mobile system for monitoring cardiovascular disease (CVD) is presented. The system allows for continuous monitoring of physiological sensors and data from a patient's health record, with analysis of the data performed directly on the mobile device using machine learning algorithms (MLAs) to predict an individual's CVD severity level. The system successfully demonstrated that a mobile device can act as a complete monitoring system without requiring constant communication with a remote server. A comparative analysis between the support vector machine (SVM) and multilayer perceptron (MLP), exploring the effectiveness of each MLA for monitoring CVD, is also discussed. Both models were able to classify CVD severity, with the SVM achieving the highest accuracy (63%) and specificity (76%). Finally, the resource requirements of each component in the system were evaluated. The results show that the MLAs' complexity was not a barrier to deployment on a mobile device.
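A small sketch of the kind of model comparison reported above, using synthetic features in place of the sensor and health-record data: an SVM and a multilayer perceptron are trained on the same split and compared by accuracy and specificity. All data, labels, and parameter choices here are assumptions for illustration.

```python
# Illustrative sketch: SVM vs. MLP on synthetic stand-in features.
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(4)
X = rng.standard_normal((300, 8))                                         # stand-in vitals/record features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, 300) > 0).astype(int)   # synthetic severity labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
models = {
    "SVM": make_pipeline(StandardScaler(), SVC()),
    "MLP": make_pipeline(StandardScaler(), MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000)),
}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
    print(name, "accuracy:", round(accuracy_score(y_te, pred), 2),
          "specificity:", round(tn / (tn + fp), 2))
```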
|
457 |
Development of new data fusion techniques for improving snow parameters estimation. De Gregorio, Ludovica. 26 November 2019
Water stored in snow is a critical contribution to the world's available freshwater supply and is fundamental to the sustenance of natural ecosystems, agriculture and human societies. The importance of snow for the natural environment and for many socio-economic sectors in several mid- to high-latitude mountain regions around the world leads scientists to continuously develop new approaches to monitoring and studying snow and its properties. The need to develop new monitoring methods arises from the limitations of in situ measurements, which are pointwise, only possible in accessible and safe locations, and do not allow continuous monitoring of the evolution of the snowpack and its characteristics. These limitations have been overcome by the increasingly used methods of remote monitoring with space-borne sensors, which make it possible to capture the wide spatial and temporal variability of the snowpack. Snow models, based on modeling the physical processes that occur in the snowpack, are an alternative to remote sensing for studying snow characteristics. However, the literature makes it evident that both remote sensing and snow models suffer from limitations while also having significant strengths that would be worth exploiting jointly to achieve improved snow products. Accordingly, the main objective of this thesis is the development of novel methods for the estimation of snow parameters by exploiting the different properties of remote sensing and snow model data. In particular, the following specific novel contributions are presented in this thesis:
i. A novel data fusion technique for improving snow cover mapping. The proposed method exploits the snow cover maps derived from the AMUNDSEN snow model and the MODIS product, together with their quality layers, in a decision-level fusion approach by means of a machine learning technique, namely the Support Vector Machine (SVM).
ii. A new approach for improving the snow water equivalent (SWE) product obtained from AMUNDSEN model simulations. The proposed method exploits auxiliary information from optical remote sensing and from the topographic characteristics of the study area in an approach that differs from classical data assimilation and is instead based on estimating the AMUNDSEN error with respect to ground data through a k-NN algorithm (a sketch of this idea follows below). The new product has been validated with ground measurement data and by comparison with MODIS snow cover maps. In a second step, the contribution of information derived from X-band SAR imagery acquired by the COSMO-SkyMed constellation has been evaluated, exploiting simulations from a theoretical model to enlarge the dataset.
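A minimal sketch of the idea behind the second contribution, on synthetic data: a k-NN regressor learns the snow model's SWE error from auxiliary predictors (here invented: elevation, slope, and an optical snow index), and the predicted error is subtracted from the modelled SWE. The predictors, units, and magnitudes are illustrative, not the study's actual variables or error model.

```python
# Illustrative sketch: k-NN estimation of modelled SWE error from auxiliary predictors.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(5)
n = 400
aux = np.column_stack([
    rng.uniform(800, 3000, n),    # elevation [m] (invented predictor)
    rng.uniform(0, 40, n),        # slope [deg] (invented predictor)
    rng.uniform(0, 1, n),         # optical snow index (invented predictor)
])
swe_model = rng.uniform(50, 500, n)                     # modelled SWE [mm]
error = 0.005 * aux[:, 0] + rng.normal(0, 5, n)         # synthetic model error [mm]
swe_ground = swe_model - error                          # synthetic "ground truth" at stations

A_tr, A_te, e_tr, e_te, m_tr, m_te, g_tr, g_te = train_test_split(
    aux, error, swe_model, swe_ground, test_size=0.25, random_state=0)

knn = KNeighborsRegressor(n_neighbors=5).fit(A_tr, e_tr)  # learn error from auxiliary data
swe_corrected = m_te - knn.predict(A_te)                  # corrected SWE estimate
print("mean abs error before:", np.abs(m_te - g_te).mean().round(1),
      "after:", np.abs(swe_corrected - g_te).mean().round(1))
```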
|
458 |
Modeling, Detection, and Prevention of Electricity Theft for Enhanced Performance and Security of Power Grid. Depuru, Soma Shekara. 24 September 2012
No description available.
|
459 |
Discriminative Articulatory Feature-based Pronunciation Models with Application to Spoken Term Detection. Prabhavalkar, Rohit Prakash. 27 September 2013
No description available.
|
460 |
Experiments with Support Vector Machines and Kernels. Kohram, Mojtaba. 21 October 2013
No description available.
|