321

Realtidsövervakning av multicastvideoström / Monitoring of multicast video streaming in realtime

Hassan, Waleed, Hellström, Martin January 2017
The enormous increase in multicast services has exposed the limitations of traditional network management tools for multicast quality monitoring. New monitoring techniques are needed that are not hardware-based solutions, such as increased link throughput, buffer length, and capacity, to enhance the quality of experience. This paper examines the use of the FFmpeg and OpenCV libraries, together with the no-reference image quality assessment algorithm BRISQUE, to improve both the quality of service (QoS) and the quality of experience (QoE). By detecting image quality deficiencies as well as bit errors in the video stream, QoS and QoE can be improved. The purpose of this project was to develop a monitoring system that detects fluctuations in image quality and bit errors in a multicast video stream in real time and then notifies the service provider using SNMP traps. The tests performed in this work show positive results for the proposed hybrid solution; neither BRISQUE nor FFmpeg alone is sufficiently adapted for this purpose. FFmpeg can detect decoding errors, which usually occur because of serious bit errors, while the BRISQUE algorithm was developed to analyse images and determine their subjective image quality. According to the test results, BRISQUE can be used for multicast video analysis because the subjective image quality can be determined with good reliability. The combination of these methods has shown good results but needs further investigation and development.
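A minimal sketch of the hybrid idea described above, assuming opencv-contrib-python (which provides the cv2.quality module) and the pretrained BRISQUE model files distributed with OpenCV's samples. The stream URL, file paths, score threshold, and the trap function are hypothetical placeholders, not the thesis's implementation.

```python
import cv2

STREAM_URL = "udp://239.0.0.1:1234"        # hypothetical multicast address
MODEL_PATH = "brisque_model_live.yml"      # pretrained BRISQUE model (OpenCV samples)
RANGE_PATH = "brisque_range_live.yml"
SCORE_THRESHOLD = 60.0                     # assumed: higher BRISQUE score = worse quality

def send_snmp_trap(message: str) -> None:
    # Placeholder: in the thesis, an SNMP trap notifies the service provider.
    print("SNMP trap:", message)

cap = cv2.VideoCapture(STREAM_URL)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        # A failed read often follows a decoding error caused by bit errors.
        send_snmp_trap("frame decode failure on " + STREAM_URL)
        continue
    # BRISQUE returns a no-reference quality score for the frame.
    score = cv2.quality.QualityBRISQUE_compute(frame, MODEL_PATH, RANGE_PATH)[0]
    if score > SCORE_THRESHOLD:
        send_snmp_trap(f"low image quality detected (BRISQUE={score:.1f})")
```

On the FFmpeg side, decode errors can be surfaced by running the stream through a null decode, e.g. `ffmpeg -v error -i udp://239.0.0.1:1234 -f null -`, and watching stderr for error lines.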
322

Learning to Grasp Unknown Objects using Weighted Random Forest Algorithm from Selective Image and Point Cloud Feature

Iqbal, Md Shahriar 01 January 2014
This work demonstrates an approach to determining the best grasping location on an unknown object using the Weighted Random Forest algorithm. It takes the RGB-D values of an object as input and outputs a suitable rectangular grasping region. To accomplish this, it uses a subspace of the most important features drawn from a very high-dimensional feature space containing both image and point cloud features. Using only the most important features makes the system computationally fast while preserving maximal information gain. The Random Forest operates with optimized parameters (e.g., the number of trees, the number of features considered at each node, and the information gain criterion), ensuring efficient learning with the highest possible accuracy in minimal time in a practical setting. The Weighted Random Forest, chosen over Support Vector Machine (SVM), Decision Tree, and AdaBoost for the implementation of the grasping system, outperforms those algorithms in both training and testing accuracy and other performance estimates. The grasping system learns a score function and selects the top-scoring rectangle as the grasping region. The system is implemented and tested on a Baxter Research Robot with a parallel-plate gripper.
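A sketch of scoring candidate grasp rectangles with a class-weighted random forest, assuming features have already been extracted from RGB-D data. Feature counts, weights, and shapes are illustrative, not the thesis's exact setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 40))            # 40 selected image/point-cloud features
y_train = rng.integers(0, 2, size=500)          # 1 = graspable rectangle, 0 = not

# "Weighted" is realized here through per-class weights, emphasizing positives.
forest = RandomForestClassifier(
    n_estimators=100,           # number of trees
    max_features="sqrt",        # features considered at each node
    criterion="entropy",        # information-gain splitting criterion
    class_weight={0: 1.0, 1: 2.0},
)
forest.fit(X_train, y_train)

# Score each candidate rectangle and grasp at the top-scoring one.
candidates = rng.normal(size=(20, 40))
scores = forest.predict_proba(candidates)[:, 1]
best_rectangle = int(np.argmax(scores))
print(f"grasp at rectangle {best_rectangle} (score {scores[best_rectangle]:.2f})")
```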
323

Remote Sensing of Urbanization and Environmental Impacts

Haas, Jan January 2013
The unprecedented growth of urban areas across the globe is nowadays perhaps most apparent in China, which has undergone rapid urbanization since the late 1970s. The need for new residential, commercial, and industrial areas creates new urban regions, challenging sustainable development, the maintenance and creation of a high living standard, and the preservation of ecological functionality. Timely and reliable information on land-cover changes and their environmental impacts is therefore needed to support sustainable urban development. The objective of this research is the analysis of land-cover changes, especially the development of urban areas in terms of speed, magnitude, and resulting implications for the natural and rural environment, using satellite imagery, and the quantification of environmental impacts through the concepts of ecosystem services and landscape metrics. The study areas are the cities of Shanghai and Stockholm and the three highly urbanized Chinese regions Jing-Jin-Ji, the Yangtze River Delta, and the Pearl River Delta. The analyses are based on classification of optical satellite imagery (Landsat TM/ETM+ and HJ-1A/B) over the past two decades. The images were first co-registered and mosaicked, whereupon GLCM texture features were generated and tasseled cap transformations performed to improve class separability. The mosaics were classified with a pixel-based SVM and a random forest decision tree ensemble classifier. Based on the classification results, two urbanization indices were derived that indicate both the absolute amount of urban land and the speed of urban development. The spatial composition and configuration of the landscape was analysed with landscape metrics. Environmental impacts were quantified by attributing ecosystem service values to the classifications and observing value changes over time. The results from the comparative study between Shanghai and Stockholm show a decrease in all natural land-cover classes and agricultural areas, whereas urban areas increased by approximately 120% in Shanghai, nearly ten times as much as in Stockholm, where no significant land-cover changes other than a 12% urban expansion could be observed. The landscape metrics analysis indicates that fragmentation in both study regions occurred mainly due to the growth of high-density built-up areas in previously more natural environments, while the expansion of low-density built-up areas was mostly in conjunction with pre-existing patches. Urban growth resulted in ecosystem service value losses of ca. 445 million US dollars in Shanghai, mostly due to a decrease in natural coastal wetlands. In Stockholm, a 4 million US dollar increase in ecosystem service values could be observed, explained by the maintenance and development of urban green spaces. Total urban growth in Shanghai was 1,768 km2 compared to 100 km2 in Stockholm. Regarding the comparative study of urbanization in the three Chinese regions, a total increase in urban land of about 28,000 km2 could be detected, with a simultaneous decrease in ecosystem service values corresponding to ca. 18.5 billion Chinese Yuan Renminbi. The speed and relative urban growth in Jing-Jin-Ji were highest, followed by the Yangtze River Delta and the Pearl River Delta. The increase in urban land occurred predominantly at the expense of cropland. Wetlands decreased due to land reclamation in all study areas.
An increase in landscape complexity in terms of land-cover composition and configuration could be detected. Urban growth in Jing-Jin-Ji contributed most to the decrease in ecosystem service values, closely followed by the Yangtze River Delta and the Pearl River Delta.
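A sketch of the pixel-based classification step, assuming a co-registered image cube of spectral bands plus texture and tasseled cap features, with labelled training pixels. Band counts and class names are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
H, W, BANDS = 100, 100, 10                       # mosaic height, width, feature bands
cube = rng.normal(size=(H, W, BANDS))            # spectral bands + GLCM texture + tasseled cap
classes = ["urban", "cropland", "forest", "water", "wetland"]

# Labelled training pixels (row, col) with class indices from reference data.
train_rc = rng.integers(0, 100, size=(300, 2))
train_y = rng.integers(0, len(classes), size=300)
X_train = cube[train_rc[:, 0], train_rc[:, 1], :]

clf = RandomForestClassifier(n_estimators=200)   # decision-tree ensemble classifier
clf.fit(X_train, train_y)

# Classify every pixel by flattening the cube to (pixels, features).
labels = clf.predict(cube.reshape(-1, BANDS)).reshape(H, W)
urban_fraction = np.mean(labels == classes.index("urban"))
print(f"urban share of mosaic: {urban_fraction:.1%}")
```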
324

A Cloud-Based Intelligent and Energy Efficient Malware Detection Framework. A Framework for Cloud-Based, Energy Efficient, and Reliable Malware Detection in Real-Time Based on Training SVM, Decision Tree, and Boosting using Specified Heuristics Anomalies of Portable Executable Files

Mirza, Qublai K.A. January 2017
The continuing financial and other related losses due to cyber-attacks demonstrate the substantial growth of malware and the lethality of its proliferation techniques. Every successful malware attack highlights the weaknesses in the defence mechanisms responsible for securing the targeted computer or network. Recent cyber-attacks reveal sophistication and intelligence in malware behaviour, including the ability to conceal code and operate autonomously within the system. Conventional detection mechanisms not only lack adequate malware detection capabilities, they also consume a large amount of resources while scanning for malicious entities in the system. Many recent reports have highlighted this issue, along with the challenges faced by alternate solutions and studies conducted in the same area. There is an unprecedented need for a resilient and autonomous solution that takes a proactive approach against modern malware with stealth behaviour. This thesis proposes a multi-aspect solution comprising an intelligent malware detection framework and an energy-efficient hosting model. The malware detection framework is a combination of conventional and novel malware detection techniques. The proposed framework incorporates comprehensive feature heuristics of files generated by a bespoke static feature extraction tool. These heuristics are used to train the machine learning algorithms (Support Vector Machine, Decision Tree, and Boosting) to differentiate between clean and malicious files. The two techniques, feature heuristics and machine learning, are combined to form a two-factor detection mechanism. This thesis also presents a cloud-based, energy-efficient, and scalable hosting model, which combines multiple infrastructure components of Amazon Web Services to host the malware detection framework. The hosting model follows a client-server architecture, where the client is a lightweight service running on the host machine and the server resides in the cloud. The proposed framework and the hosting model were evaluated both individually and in combination through specifically designed experiments using separate repositories of clean and malicious files. The experiments were designed to evaluate malware detection capability and energy efficiency while operating within a system. The proposed malware detection framework and the hosting model showed significant improvement in malware detection while consuming low CPU resources during operation.
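A sketch of the framework's machine-learning stage: training SVM, Decision Tree, and Boosting on static file-feature heuristics and comparing them. The feature extraction itself (the bespoke tool) is mocked here with a hypothetical feature matrix.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Hypothetical heuristics per PE file: section entropy, import count,
# header anomalies, etc.; label 1 = malicious, 0 = clean.
X = rng.normal(size=(1000, 25))
y = rng.integers(0, 2, size=1000)

models = {
    "SVM": SVC(kernel="rbf"),
    "Decision Tree": DecisionTreeClassifier(max_depth=10),
    "Boosting": GradientBoostingClassifier(n_estimators=100),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: {acc:.3f} cross-validated accuracy")
```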
325

Mobile Machine Learning for Real-time Predictive Monitoring of Cardiovascular Disease

Boursalie, Omar January 2016
Chronic cardiovascular disease (CVD) is increasingly becoming a burden for global healthcare systems. This burden can be attributed in part to traditional methods of managing CVD in an aging population, which involve periodic meetings between the patient and their healthcare provider. There is growing interest in developing continuous monitoring systems to assist in the management of CVD. Monitoring systems can utilize advances in wearable devices and health records, which provide minimally invasive methods to monitor a patient’s health. Despite these advances, the algorithms deployed to automatically analyze wearable sensor and health data are considered too computationally expensive to run on the mobile device. Instead, current mobile devices continuously transmit the collected data to a server for analysis, at great computational and data transmission expense. In this thesis a novel mobile system designed for monitoring CVD is presented. Unlike existing systems, the proposed system allows for the continuous monitoring of physiological sensors and data from a patient’s health record, with analysis of the data performed directly on the mobile device using machine learning algorithms (MLAs) to predict an individual’s CVD severity level. The system successfully demonstrated that a mobile device can act as a complete monitoring system without requiring constant communication with a server. A comparative analysis between the support vector machine (SVM) and multilayer perceptron (MLP), exploring the effectiveness of each algorithm for monitoring CVD, is also discussed. Both models were able to classify CVD risk, with the SVM achieving the highest accuracy (63%) and specificity (76%). Finally, unlike current systems, the resource requirements for each component in the system were evaluated. The MLP was found to be more efficient when running on the mobile device compared to the SVM. The results of the thesis also show that the MLAs’ complexity was not a barrier to deployment on a mobile device.
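A sketch of the comparative evaluation, assuming tabular features assembled from wearable sensors and health records with binary severity labels. The specificity computation from the confusion matrix mirrors the metrics reported above; the data are illustrative.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, confusion_matrix

rng = np.random.default_rng(0)
X = rng.normal(size=(800, 12))              # e.g. heart rate, blood pressure, age...
y = rng.integers(0, 2, size=800)            # 1 = high CVD severity, 0 = low

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
models = [("SVM", SVC()), ("MLP", MLPClassifier(hidden_layer_sizes=(32,), max_iter=500))]
for name, model in models:
    pred = model.fit(X_tr, y_tr).predict(X_te)
    tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
    specificity = tn / (tn + fp)            # true-negative rate
    print(f"{name}: accuracy={accuracy_score(y_te, pred):.2f}, specificity={specificity:.2f}")
```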
326

DEVELOPMENT OF NOISE AND VIBRATION BASED FAULT DIAGNOSIS METHOD FOR ELECTRIFIED POWERTRAIN USING SUPERVISED MACHINE LEARNING CLASSIFICATION

Joohyun Lee (17552055) 06 December 2023
The industry's interest in vehicles equipped with electrified powertrains has increased for environmental and economic reasons. Electrified powertrains generally produce lower sound and vibration levels than internal combustion engines, making noise and vibration (N&V) from other, non-engine powertrain components more perceptible. One such N&V type that concerns both vehicle manufacturers and passengers is gear growl, but the signal characteristics of gear growl noise and vibration, and the thresholds of those characteristics that determine whether a gear growl requires attention, are not yet well understood. This study focuses on developing a method to detect gear growl based on N&V measurements and on determining thresholds for various severities of gear growl using supervised machine learning classification. In general, a machine learning classifier requires sufficient high-quality training data with strong information independence to ensure accurate classification performance. In industrial practice, acquiring high-quality vehicle NVH data is expensive in terms of finance, time, and effort. A physically informed data augmentation method is therefore proposed to generate realistic powertrain NVH signals from high-quality measurements, which not only provides a larger training data set but also enriches the signal feature variations included in it. More specifically, this method extracts physical information such as angular speed, tonal amplitude distribution, and broadband spectrum shape from the measurement data, and then recreates a synthetic signal that mimics the measurement data. The measured and simulated (via data augmentation) signals are transformed into a feature matrix representation so that the N&V signals can be used in the classification model training process. Features describing signal characteristics are studied, extracted, and selected. While the root-mean-square (RMS) of the vibration signal and spectral entropy were sufficient for detecting gear growl with a test accuracy of 0.9828, the acoustic signal required more features due to background noise, which makes the data linearly inseparable. The minimum Redundancy Maximum Relevance (mRMR) feature scoring method was used to assess the importance of acoustic signal features in classification. The five most important features by importance score were the angular acceleration of the driveshaft, the time derivative of the RMS, the tone-to-noise ratio (TNR), the time derivative of the spectral spread of the tonal component of the acoustic signal, and the time derivative of the spectral spread of the original acoustic signal (before tonal and broadband separation). A supervised classification model is developed using a support vector machine on the extracted acoustic signal features. The data used in training and testing consist of steady-state vehicle operation at 25, 35, 45, and 55 mph, with two vehicles with two different powertrain specifications: axles with 4.56 and 6.14 gear ratios. The dataset includes powertrains with swapped axles (four configurations in total). Techniques such as cost weighting, median filtering, and hyperparameter tuning are implemented to improve classification performance, where the model classifies whether a segment in the signal represents a gear-growl event or not. The average accuracy on test data was 0.918.
A multi-class classification model is further implemented to classify different severities based on preliminary subjective listening studies. Data augmentation using signal simulation showed improvement in binary classification applications. In this study, only gear growl was used as a fault type; nevertheless, the data augmentation, feature extraction and selection, and classification methods can be generalized to NVH-signal-based fault diagnosis applications. Further listening studies are suggested to improve multi-class classification performance.
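A sketch of the vibration branch: compute the RMS and spectral entropy of each signal segment and feed them to a cost-weighted SVM, since these two features alone sufficed for vibration-based detection above. The sampling rate, segment length, and labels are assumed.

```python
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC

FS = 10_000  # assumed sampling rate [Hz]

def segment_features(segment: np.ndarray) -> list[float]:
    rms = np.sqrt(np.mean(segment ** 2))
    f, psd = welch(segment, fs=FS)
    p = psd / psd.sum()                       # normalize PSD to a distribution
    spectral_entropy = -np.sum(p * np.log2(p + 1e-12))
    return [rms, spectral_entropy]

rng = np.random.default_rng(0)
# Hypothetical labelled 0.1 s vibration segments: 1 = gear growl, 0 = normal.
segments = rng.normal(size=(200, 1000))
labels = rng.integers(0, 2, size=200)

X = np.array([segment_features(s) for s in segments])
clf = SVC(class_weight="balanced")            # cost weighting for the rarer class
clf.fit(X, labels)
print("predicted:", clf.predict(X[:5]))
```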
327

Development of new data fusion techniques for improving snow parameters estimation

De Gregorio, Ludovica 26 November 2019
Water stored in snow is a critical contribution to the world’s available freshwater supply and is fundamental to the sustenance of natural ecosystems, agriculture, and human societies. The importance of snow for the natural environment and for many socio-economic sectors in several mid- to high-latitude mountain regions around the world leads scientists to continuously develop new approaches to monitor and study snow and its properties. The need for new monitoring methods arises from the limitations of in situ measurements, which are pointwise, only possible in accessible and safe locations, and do not allow for continuous monitoring of the evolution of the snowpack and its characteristics. These limitations have been overcome by the increasingly used methods of remote monitoring with space-borne sensors, which allow monitoring of the wide spatial and temporal variability of the snowpack. Snow models, based on modeling the physical processes that occur in the snowpack, are an alternative to remote sensing for studying snow characteristics. However, the literature makes clear that both remote sensing and snow models suffer from limitations, while also having significant strengths that are worth exploiting jointly to achieve improved snow products. Accordingly, the main objective of this thesis is the development of novel methods for the estimation of snow parameters by exploiting the complementary properties of remote sensing and snow model data. In particular, the following specific novel contributions are presented in this thesis:
i. A novel data fusion technique for improving snow cover mapping. The proposed method exploits the snow cover maps derived from the AMUNDSEN snow model and the MODIS product, together with their quality layers, in a decision-level fusion approach by means of a machine learning technique, namely the Support Vector Machine (SVM).
ii. A new approach for improving the snow water equivalent (SWE) product obtained from AMUNDSEN model simulations. The proposed method exploits auxiliary information from optical remote sensing and from topographic characteristics of the study area in an approach that differs from classical data assimilation and is based on estimating the AMUNDSEN error with respect to ground data through a k-NN algorithm (a sketch of this error-correction step follows below). The new product has been validated with ground measurement data and by comparison with MODIS snow cover maps. In a second step, the contribution of information derived from X-band SAR imagery acquired by the COSMO-SkyMed constellation has been evaluated, exploiting simulations from a theoretical model to enlarge the dataset.
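A sketch of the k-NN error-estimation idea: learn the model's SWE error at ground stations from auxiliary predictors, then subtract the predicted error elsewhere. The choice of predictors (elevation, slope, snow-cover fraction) and all values are assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
# At stations: auxiliary predictors, simulated SWE, and measured SWE.
predictors = rng.normal(size=(150, 3))             # e.g. elevation, slope, SCF
model_swe = rng.uniform(50, 300, size=150)         # AMUNDSEN simulation [mm]
ground_swe = model_swe + rng.normal(0, 20, size=150)

error = model_swe - ground_swe                     # model error at stations
knn = KNeighborsRegressor(n_neighbors=5)
knn.fit(predictors, error)

# Correct the simulated SWE at unmeasured pixels with the predicted error.
new_pixels = rng.normal(size=(4, 3))
new_model_swe = rng.uniform(50, 300, size=4)
corrected = new_model_swe - knn.predict(new_pixels)
print("corrected SWE [mm]:", np.round(corrected, 1))
```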
328

Implementering av maskinginlärningsmodeller för detektering av ett objekt baserad på endimensionell elektromagnetisk strålningsdata / Implementation of machine learning models for detecting an object based on one-dimensional electromagnetic radiation data

Heinke, Simon, Åberg, Marcus January 2020
Clinical trials are experiments or observations of patients’ responses to different medical treatments. Such trials are heavily regulated and must meet certain quality standards, and clinical adherence is a determining factor in the success of a study. However, it has historically been difficult to systematically follow and understand patient adherence to medical ordinations, predominantly due to a lack of proper tools. One new type of tool is the digital pillbox, which can be used to supply pills to participants in clinical trials. This paper examines the implementation of two supervised machine learning models to detect whether an object (a pill) is present in an enclosed compartment (a pillbox) based on electromagnetic radiation data from a proximity sensor. Support Vector Machine (SVM) and Random Forest (RF) were evaluated on a data set of N=1,485 observations consisting of five classes: four different pills and ‘no pill’. RF performed best, with an accuracy of 98.0% and a weighted average precision of 98.0%. SVM achieved 97.3% accuracy and 97.6% weighted average precision. Best performance was achieved at N=1,000 for RF and N=1,100 for SVM. The conclusion is that high accuracy and precision can be achieved using either RF or SVM. The classification model strengthens the value proposition of a digital pillbox and can help clinical trials achieve better data quality. However, for the model to contribute actual economic value, digital pillboxes must become common practice in clinical trials.
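A sketch of the sample-size experiment: train RF and SVM on growing subsets of the sensor readings and report weighted precision, mirroring the comparison of performance at different N above. Feature dimensions and the data themselves are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1485, 8))                 # proximity-sensor feature vectors
y = rng.integers(0, 5, size=1485)              # four pill types + 'no pill'

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
for n in (500, 1000, 1100):
    for name, model in [("RF", RandomForestClassifier()), ("SVM", SVC())]:
        pred = model.fit(X_tr[:n], y_tr[:n]).predict(X_te)
        wp = precision_score(y_te, pred, average="weighted", zero_division=0)
        print(f"N={n} {name}: weighted precision {wp:.3f}")
```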
329

Predicting Purchase of Airline Seating Using Machine Learning / Förutsägelse på köp av sätesreservation med maskininlärning.

El-Hage, Sebastian January 2020
With the continuing surge in digitalization within the travel industry and the increased demand for personalized services, understanding customer behaviour is becoming a survival requirement for travel agencies. The number of studies addressing this problem is increasing, and machine learning is expected to be the enabling technique. This thesis trains two different models, a multi-layer perceptron and a support vector machine, to reliably predict whether a customer will add a seat reservation to their flight booking. The models are trained on a large dataset consisting of 69 variables and over 1.1 million historical bookings spanning 2017 to 2020. The results from the trained models are satisfactory, and the models are able to classify the data with an accuracy of around 70%, with the multi-layer perceptron performing best on both evaluation metrics, accuracy and F1 score. This shows that this type of problem is solvable with the techniques used. The results moreover suggest that further exploration of models and additional data could be of interest, since this could help increase the level of performance.
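A sketch of the seat-reservation classifier, assuming encoded booking records with 69 features and a binary purchased/not-purchased label. Accuracy and F1 score match the two metrics used in the evaluation above; the network architecture and data are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 69))               # encoded booking variables
y = rng.integers(0, 2, size=5000)             # 1 = seat reservation purchased

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
mlp = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300)
pred = mlp.fit(X_tr, y_tr).predict(X_te)
print(f"accuracy={accuracy_score(y_te, pred):.2f}, F1={f1_score(y_te, pred):.2f}")
```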
330

Cost-Sensitive Learning-based Methods for Imbalanced Classification Problems with Applications

Razzaghi, Talayeh 01 January 2014
Analysis and predictive modeling of massive datasets is an extremely significant problem that arises in many practical applications. The task of predictive modeling becomes even more challenging when data are imperfect or uncertain. Real data are frequently affected by outliers, uncertain labels, and uneven distribution of classes (imbalanced data). Such uncertainties create bias and make predictive modeling an even more difficult task. In the present work, we introduce a cost-sensitive learning (CSL) method to deal with the classification of imperfect data. Most traditional approaches to classification demonstrate poor performance in an environment with imperfect data. We propose the use of CSL with the Support Vector Machine, a well-known data mining algorithm. The results reveal that the proposed algorithm produces more accurate classifiers and is more robust with respect to imperfect data. Furthermore, we explore the best performance measures for tackling imperfect data, along with addressing real problems in quality control and business analytics.
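A sketch of cost-sensitive learning with an SVM: misclassification costs enter through per-class weights, so errors on the minority class are penalized more heavily. The 10:1 imbalance and the cost ratio are illustrative, not the thesis's settings.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n_major, n_minor = 900, 90                    # imbalanced classes (e.g. conforming vs defective)
X = np.vstack([rng.normal(0.0, 1, size=(n_major, 5)),
               rng.normal(1.5, 1, size=(n_minor, 5))])
y = np.array([0] * n_major + [1] * n_minor)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
# class_weight scales the SVM's penalty C per class, i.e. the misclassification cost.
csl_svm = SVC(class_weight={0: 1.0, 1: 10.0})
csl_svm.fit(X_tr, y_tr)
print(classification_report(y_te, csl_svm.predict(X_te)))
```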
