271 |
Rozpoznávač hudebního stylu z MP3 / Music Style Recognizer from MP3. Deutscher, Michael. January 2009 (has links)
This document describes the concept of music style recognition. It gives a brief overview of the digitization of music data and of how music data are stored in computers. It also describes the features used for music style recognition and how they are extracted. The main part of the document compares the success of music genre recognition using features extracted directly from audio data in mp3 format against features extracted by conventional signal analysis.
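The abstract does not specify which features the thesis used; as a rough illustration of the kind of conventional analysis features it refers to, the sketch below summarises audio by MFCC statistics and trains a simple classifier. The signals, labels, and classifier choice are placeholder assumptions, not taken from the thesis.

```python
# Illustrative sketch only -- the thesis's actual feature sets are not specified here.
# Real mp3 files would be decoded with librosa.load(path); synthetic tones stand in below.
import numpy as np
import librosa
from sklearn.svm import SVC

sr = 22050

def mfcc_features(y, sr=sr, n_mfcc=13):
    """Summarise an audio signal by the mean and std of its MFCCs."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Placeholder "tracks": tones at different pitches standing in for decoded mp3 audio.
rng = np.random.default_rng(0)
t = np.linspace(0, 3, 3 * sr, endpoint=False)
tracks = [np.sin(2 * np.pi * f * t) + 0.05 * rng.standard_normal(t.size)
          for f in (220.0, 440.0, 880.0)]
labels = ["low", "mid", "high"]          # stand-ins for genre labels

X = np.vstack([mfcc_features(y) for y in tracks])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict([mfcc_features(np.sin(2 * np.pi * 225.0 * t))]))
```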
|
272 |
A Perception Payload for Small-UAS Navigation in Structured Environments. Bharadwaj, Akshay S. 26 September 2018 (has links)
No description available.
|
273 |
Functional Principal Component Analysis of Vibrational Signal Data: A Functional Data Analytics Approach for Fault Detection and Diagnosis of Internal Combustion Engines. McMahan, Justin Blake. 14 December 2018 (has links)
Fault detection and diagnosis (FDD) is a critical component of operations management systems. The goal of FDD is to identify the occurrence and causes of abnormal events. While many approaches are available, data-driven approaches to FDD have proven to be robust and reliable. Exploiting these advantages, the present study applied functional principal component analysis (FPCA) to carry out feature extraction for fault detection in internal combustion engines. A feature subset explaining 95% of the variance of the original vibrational sensor signal was then used in a multilayer perceptron to carry out prediction for fault diagnosis. Across the engine states studied in the present work, the proposed approach achieved an overall prediction accuracy of 99.72%. These results are encouraging because they demonstrate the feasibility of applying FPCA for feature extraction, which has not previously been discussed in the fault detection and diagnosis literature.
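A minimal sketch of the kind of pipeline the abstract describes: for densely and regularly sampled curves, FPCA is approximated here by ordinary PCA on the discretised vibration signals, keeping enough components to explain 95% of the variance, followed by a multilayer perceptron. The signals, labels, and network size are placeholder assumptions; the thesis's exact FPCA implementation may differ.

```python
# Sketch of FPCA-style feature extraction followed by an MLP classifier.
# For densely, regularly sampled signals, FPCA can be approximated by ordinary PCA
# on the discretised curves; the data below are synthetic placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_signals, n_samples = 200, 1024
t = np.linspace(0, 1, n_samples)

# Placeholder vibration signals: two simulated engine states with different spectra.
labels = rng.integers(0, 2, size=n_signals)
signals = np.array([
    np.sin(2 * np.pi * (50 + 20 * y) * t) + 0.5 * rng.standard_normal(n_samples)
    for y in labels
])

# Keep enough functional principal components to explain 95% of the variance.
fpca = PCA(n_components=0.95)
scores = fpca.fit_transform(signals)
print("components retained:", fpca.n_components_)

X_train, X_test, y_train, y_test = train_test_split(scores, labels, random_state=0)
mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
mlp.fit(X_train, y_train)
print("test accuracy:", mlp.score(X_test, y_test))
```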
|
274 |
Estimating Pinyon and Juniper Cover Across Utah Using NAIP Imagery. Roundy, Darrell B. 01 June 2015 (has links) (PDF)
Expansion of Pinus L. (pinyon) and Juniperus L. (juniper) (P-J) trees into sagebrush (Artemisia L.) steppe communities can lead to negative effects on hydrology, loss of wildlife habitat, and a decrease in desirable understory vegetation. Tree reduction treatments are often implemented to mitigate these negative effects. In order to prioritize and effectively plan these treatments, rapid, accurate, and inexpensive methods are needed to estimate tree canopy cover at the landscape scale. We used object-based image analysis (OBIA) software (Feature Analyst™ for ArcMap 10.1®, ENVI Feature Extraction®, and Trimble eCognition Developer 8.2®) to extract tree canopy cover from NAIP (National Agricultural Imagery Program) imagery. We then compared our extractions with ground-measured tree canopy cover (crown diameter and line point) on 309 subplots across 44 sites in Utah. Extraction methods did not consistently over- or under-estimate ground-measured P-J canopy cover except where tree cover was > 45%. Estimates of tree canopy cover using OBIA techniques were strongly correlated with estimates using the crown diameter method (r = 0.93 for ENVI, 0.91 for Feature Analyst, and 0.92 for eCognition). Tree cover estimates using OBIA techniques had lower correlations with tree cover measurements using the line-point method (r = 0.85 for ENVI, 0.83 for Feature Analyst, and 0.83 for eCognition). Results from this study suggest that OBIA techniques may be used to extract P-J tree canopy cover accurately and inexpensively. All software packages accurately extracted P-J canopy cover from NAIP imagery when the imagery was not blurred and when P-J cover was not mixed with Amelanchier alnifolia (Utah serviceberry) and Quercus gambelii (Gambel's oak), shrubs with spectral values similar to those of P-J.
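As a small illustration of the agreement statistics reported above, the sketch below computes Pearson correlations between OBIA-extracted canopy cover and ground-measured cover for three hypothetical software packages. The cover values and noise levels are synthetic placeholders, not the study's data.

```python
# Illustrative only: Pearson correlation between extracted and ground-measured canopy cover.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n_subplots = 309

# Hypothetical ground-measured cover (%) from the crown-diameter method.
ground_cover = rng.uniform(0, 60, size=n_subplots)

# Hypothetical OBIA extractions for three software packages, each with its own noise level.
extractions = {
    "ENVI": ground_cover + rng.normal(0, 4, n_subplots),
    "Feature Analyst": ground_cover + rng.normal(0, 5, n_subplots),
    "eCognition": ground_cover + rng.normal(0, 5, n_subplots),
}

for name, extracted in extractions.items():
    r, p = pearsonr(ground_cover, np.clip(extracted, 0, 100))
    print(f"{name}: r = {r:.2f} (p = {p:.1e})")
```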
|
275 |
Scalable Extraction and Visualization of Scientific Features with Load-Balanced Parallelism. Xu, Jiayi. January 2021 (has links)
No description available.
|
276 |
Generative Models and Feature Extraction on Patient Images and Structure Data in Radiation Therapy / Generativa modeller för patientbilder inom strålterapi. Gruselius, Hanna. January 2018 (has links)
This Master's thesis focuses on generative models for medical patient data in radiation therapy. The objective of the project is to implement a Variational Autoencoder (VAE) and investigate its characteristics when applied to this diverse and versatile data. The questions this thesis aims to answer are: (i) whether the VAE can capture salient features of medical image data, and (ii) whether these features can be used to compare similarity between patients; furthermore, (iii) whether the VAE network can successfully reconstruct its input and, lastly, (iv) whether the VAE can generate artificial data with a reasonable anatomical appearance. The experiments carried out indicated that the VAE is a promising method for feature extraction, since it appeared to ascertain similarity between patient images. Moreover, the reconstruction of training inputs demonstrated that the method is capable of identifying and preserving anatomical details. Regarding the generative abilities, the artificial samples generally showed fairly realistic anatomical structures. Future work could investigate the VAE's ability to generalize, with respect to both the amount of data required and the underlying probabilistic assumptions.
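A minimal sketch of a Variational Autoencoder of the kind the abstract describes, written in PyTorch on flattened image slices. The architecture, image size, and training data are illustrative assumptions and do not reproduce the thesis's network; the latent mean vectors stand in for the extracted features used to compare patients.

```python
# Minimal VAE sketch (PyTorch) on flattened image slices; architecture and sizes are
# illustrative assumptions, not the network used in the thesis.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, n_pixels=64 * 64, latent_dim=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_pixels, 512), nn.ReLU())
        self.mu = nn.Linear(512, latent_dim)
        self.logvar = nn.Linear(512, latent_dim)
        self.dec = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, n_pixels), nn.Sigmoid(),
        )

    def encode(self, x):
        h = self.enc(x)
        return self.mu(h), self.logvar(h)

    def forward(self, x):
        mu, logvar = self.encode(x)
        std = torch.exp(0.5 * logvar)
        z = mu + std * torch.randn_like(std)      # reparameterisation trick
        return self.dec(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    # Reconstruction term plus KL divergence to the standard normal prior.
    bce = F.binary_cross_entropy(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return bce + kld

# Placeholder training data: random "images" standing in for patient slices.
x = torch.rand(16, 64 * 64)
model = VAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(5):                               # a few illustrative training steps
    recon, mu, logvar = model(x)
    loss = vae_loss(recon, x, mu, logvar)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Latent means can serve as features for comparing patient similarity.
with torch.no_grad():
    features, _ = model.encode(x)
print(features.shape)
```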
|
277 |
Feature Extraction and Feature Selection for Object-based Land Cover Classification: Optimisation of Support Vector Machines in a Cloud Computing Environment. Stromann, Oliver. January 2018 (has links)
Mapping the Earth's surface and its rapid changes with remotely sensed data is a crucial tool for understanding the impact of an increasingly urban world population on the environment. However, the impressive amount of freely available Copernicus data is only marginally exploited in common classifications. One of the reasons is that measuring the properties of training samples, the so-called 'features', is costly and tedious. Furthermore, handling large feature sets is not easy in most image classification software. This often leads to the manual choice of a few, allegedly promising features. In this Master's thesis degree project, I use the computational power of Google Earth Engine and Google Cloud Platform to generate an oversized feature set in which I explore feature importance and analyse the influence of dimensionality reduction methods. I use Support Vector Machines (SVMs) for object-based classification of satellite images - a commonly used method. A large feature set is evaluated to find the most relevant features to discriminate the classes and thereby contribute most to high classification accuracy. In doing so, one can bypass the sensitive knowledge-based but sometimes arbitrary selection of input features. Two kinds of dimensionality reduction methods are investigated: the feature extraction methods Linear Discriminant Analysis (LDA) and Independent Component Analysis (ICA), which transform the original feature space into a projected space of lower dimensionality, and the filter-based feature selection methods chi-squared test, mutual information and Fisher criterion, which rank and filter the features according to a chosen statistic. I compare these methods against the default SVM in terms of classification accuracy and computational performance. The classification accuracy is measured in overall accuracy, prediction stability, inter-rater agreement and the sensitivity to training set sizes. The computational performance is measured in the decrease in training and prediction times and the compression factor of the input data. I conclude on the best performing classifier with the most effective feature set based on this analysis. In a case study of mapping urban land cover in Stockholm, Sweden, based on multitemporal stacks of Sentinel-1 and Sentinel-2 imagery, I demonstrate the integration of Google Earth Engine and Google Cloud Platform for an optimised supervised land cover classification. I use dimensionality reduction methods provided in the open-source scikit-learn library and show how they can improve classification accuracy and reduce the data load. At the same time, this project gives an indication of how the exploitation of big earth observation data can be approached in a cloud computing environment. The preliminary results highlighted the effectiveness and necessity of dimensionality reduction methods but also strengthened the need for inter-comparable object-based land cover classification benchmarks to fully assess the quality of the derived products. To facilitate this need and encourage further research, I plan to publish the datasets (i.e. imagery, training and test data) and provide access to the developed Google Earth Engine and Python scripts as Free and Open Source Software (FOSS).
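A minimal scikit-learn sketch of the two families of dimensionality reduction the abstract contrasts: a feature-extraction transform (LDA) and a filter-based selection (mutual information), each feeding an SVM. The feature matrix and class labels are synthetic placeholders; the thesis's Google Earth Engine feature set, tuning, and evaluation metrics are not reproduced.

```python
# Sketch: comparing a feature-extraction transform (LDA) with a filter-based
# feature selection (mutual information) in front of an SVM classifier.
# The feature matrix below is a synthetic placeholder, not the GEE feature set.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=200, n_informative=20,
                           n_classes=5, n_clusters_per_class=1, random_state=0)

pipelines = {
    "SVM (all features)": make_pipeline(StandardScaler(), SVC()),
    "LDA -> SVM": make_pipeline(StandardScaler(),
                                LinearDiscriminantAnalysis(n_components=4), SVC()),
    "MI top-20 -> SVM": make_pipeline(StandardScaler(),
                                      SelectKBest(mutual_info_classif, k=20), SVC()),
}

for name, pipe in pipelines.items():
    acc = cross_val_score(pipe, X, y, cv=5).mean()
    print(f"{name}: overall accuracy = {acc:.3f}")
```

Wrapping each reduction step in a pipeline keeps the transform fitted only on training folds during cross-validation, which mirrors the comparison against the default SVM described above.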
|
278 |
Classifying Electrocardiogram with Machine Learning Techniques. Jarrar, Hillal. 01 December 2021 (has links) (PDF)
Classifying the electrocardiogram is of clinical importance because the classification can be used to diagnose patients with cardiac arrhythmias. Many industries utilize machine learning techniques consisting of feature extraction methods followed by Naive-Bayesian classification in order to detect faults within machinery. Machine learning techniques that analyze vibrational machine data in a mechanical application may also be used to analyze electrical data in a physiological application. Three of the most common feature extraction methods used to prepare machine vibration data for Naive-Bayesian classification are the Fourier transform, the Hilbert transform, and the Wavelet Packet transform. Each machine learning technique consists of a different feature extraction method to prepare the data for Naive-Bayesian classification. The effectiveness of the different machine learning techniques, when applied to the electrocardiogram, is assessed by measuring the sensitivity and specificity of the classifications. Comparing the sensitivity and specificity of each machine learning technique to those of the others revealed that the Wavelet Packet transform, followed by Naive-Bayesian classification, is the most effective machine learning technique.
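A rough sketch of the best-performing pipeline the abstract describes: wavelet-packet energy features extracted with PyWavelets, followed by a Gaussian Naive Bayes classifier, with sensitivity and specificity computed from the confusion matrix. The ECG signals, labels, wavelet choice, and decomposition level are illustrative assumptions, not the thesis's data or settings.

```python
# Sketch: wavelet-packet energy features + Gaussian Naive Bayes, with
# sensitivity/specificity from the confusion matrix. Signals are synthetic placeholders.
import numpy as np
import pywt
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

def wavelet_packet_energies(signal, wavelet="db4", level=3):
    """Energy of each terminal node of a wavelet packet decomposition."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, mode="symmetric", maxlevel=level)
    nodes = wp.get_level(level, order="freq")
    return np.array([np.sum(np.square(node.data)) for node in nodes])

rng = np.random.default_rng(0)
n_beats, n_samples = 300, 256
labels = rng.integers(0, 2, size=n_beats)     # 0 = normal, 1 = arrhythmic (placeholder)
t = np.linspace(0, 1, n_samples)
signals = [np.sin(2 * np.pi * (5 + 10 * y) * t) + 0.3 * rng.standard_normal(n_samples)
           for y in labels]

X = np.vstack([wavelet_packet_energies(s) for s in signals])
X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)

clf = GaussianNB().fit(X_train, y_train)
tn, fp, fn, tp = confusion_matrix(y_test, clf.predict(X_test)).ravel()
print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))
```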
|
279 |
Combining Machine Learning and Empirical Engineering Methods Towards Improving Oil Production Forecasting. Allen, Andrew J. 01 July 2020 (links) (PDF)
Current methods of production forecasting such as decline curve analysis (DCA) or numerical simulation require years of historical production data, and their accuracy is limited by the choice of model parameters. Traditional forecasting methods have proven challenging to apply to unconventional resources because these resources lack long production histories and have extremely variable model parameters. This research proposes a data-driven alternative to reservoir simulation and production forecasting techniques. We create a proxy-well model for predicting cumulative oil production by selecting statistically significant well completion parameters and reservoir information as independent predictor variables in regression-based models. Then, principal component analysis (PCA) is applied to extract key features of a well's time-rate production profile, and these features are used to estimate cumulative oil production. The efficacy of the models is examined on field data from over 400 wells in the Eagle Ford Shale in South Texas, supplied from an industry database. The results of this study can be used to help oil and gas companies determine the estimated ultimate recovery (EUR) of a well and in turn inform financial and operational decisions based on available production and well completion data.
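A minimal sketch of the PCA step the abstract describes: each well's time-rate production profile is reduced to a few principal component scores, which then serve as predictors of cumulative production in a regression. The decline curves, well count, and regression form are placeholder assumptions, not the study's data or model.

```python
# Sketch: PCA features from time-rate production profiles used in a regression
# for cumulative oil production. The decline curves below are synthetic placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_wells, n_months = 400, 36
months = np.arange(1, n_months + 1)

# Placeholder hyperbolic-style decline curves with well-to-well variation.
qi = rng.uniform(200, 1000, n_wells)                  # initial rate
b = rng.uniform(0.5, 1.5, n_wells)                    # decline exponent
rates = qi[:, None] / (1 + 0.1 * b[:, None] * months) ** (1 / b[:, None])
cum_oil = rates.sum(axis=1)                           # cumulative production target

# Extract key features of the time-rate profile with PCA.
pca = PCA(n_components=3)
scores = pca.fit_transform(rates)
print("explained variance ratio:", pca.explained_variance_ratio_)

X_train, X_test, y_train, y_test = train_test_split(scores, cum_oil, random_state=0)
reg = LinearRegression().fit(X_train, y_train)
print("R^2 on held-out wells:", reg.score(X_test, y_test))
```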
|
280 |
Clinical dose feature extraction for prediction of dose mimicking parameters / Extrahering av features från kliniska dosbilder för prediktion av doshärmande parametrar. Finnson, Anton. January 2021 (has links)
Treating cancer with radiotherapy requires precise planning. Several planning pipelines rely on reference dose mimicking, where one tries to find the machine parameters that best mimic a given reference dose. Dose mimicking relies on having a function that quantifies dose similarity well, necessitating methods for feature extraction from dose images. In this thesis we investigate ways of extracting features from clinical dose images, and propose a few proof-of-concept dose mimicking functions using the extracted features. We extend current techniques and lay the foundation for new techniques for feature extraction, using mathematical frameworks developed in entirely different areas. In particular we give an introduction to wavelet theory, which provides signal decomposition techniques suitable for analysing local structure, and propose two different dose mimicking functions using wavelets. Furthermore, we extend ROI-based mimicking functions to use artificial ROIs, and we investigate variational autoencoders and their application to the clinical dose feature extraction problem. We conclude that the proposed functions have the potential to address certain shortcomings of current dose mimicking functions. The four methods all seem to approximately capture some notion of dose similarity. Used in combination with the current framework, they have the potential of improving dose mimicking results. However, the numerical tests supporting this are brief, and more thorough numerical investigations are necessary to properly evaluate the usefulness of the new dose mimicking functions.
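A rough sketch of one way a wavelet-based dose similarity function could look: both 3D dose arrays are decomposed with a multilevel wavelet transform and compared through the distance between their coefficient vectors. The wavelet, decomposition level, and distance weighting are illustrative assumptions and not the functions proposed in the thesis.

```python
# Sketch of a wavelet-based dose similarity measure on 3D dose arrays.
# Wavelet, level, and weighting are illustrative assumptions, not the thesis's functions.
import numpy as np
import pywt

def wavelet_coefficient_vector(dose, wavelet="haar", level=2):
    """Flatten a multilevel 3D wavelet decomposition into one coefficient vector."""
    coeffs = pywt.wavedecn(dose, wavelet=wavelet, level=level)
    flat, _ = pywt.coeffs_to_array(coeffs)
    return flat.ravel()

def dose_similarity(dose_a, dose_b, wavelet="haar", level=2):
    """Negative Euclidean distance between wavelet coefficients (higher = more similar)."""
    va = wavelet_coefficient_vector(dose_a, wavelet, level)
    vb = wavelet_coefficient_vector(dose_b, wavelet, level)
    return -np.linalg.norm(va - vb)

# Placeholder reference dose and two candidate doses on a small 3D grid.
rng = np.random.default_rng(0)
reference = rng.random((32, 32, 16))
candidate_close = reference + 0.01 * rng.standard_normal(reference.shape)
candidate_far = rng.random((32, 32, 16))

print("close candidate:", dose_similarity(reference, candidate_close))
print("far candidate:  ", dose_similarity(reference, candidate_far))
```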
|