About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
121

Diagnóza Parkinsonovy choroby z řečového signálu / Parkinson disease diagnosis using speech signal analysis

Karásek, Michal January 2011 (has links)
The thesis deals with the recognition of Parkinson's disease from the speech signal. The first part reviews the principles of speech signals and the characteristics of speech produced by patients suffering from Parkinson's disease. It then describes the issues of speech-signal processing, the basic features used for the diagnosis of Parkinson's disease (e.g. VAI, VSA, FCR, VOT) and the reduction of these features. The next part presents a block diagram of the program for the diagnosis of Parkinson's disease. The main objective of this thesis is a comparison of two feature-selection methods (mRMR and SFFS). Two different classification methods were used: the first is k-nearest neighbours (kNN) and the second is the Gaussian mixture model (GMM).
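The two classifiers compared in the thesis can be sketched with scikit-learn on synthetic stand-in features; the values below are random placeholders, not real VAI/VSA/FCR/VOT measurements:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic 4-dimensional "speech features" for two groups; placeholders only.
healthy = rng.normal(0.0, 1.0, size=(60, 4))
patients = rng.normal(1.5, 1.0, size=(60, 4))
X = np.vstack([healthy, patients])
y = np.array([0] * 60 + [1] * 60)

# Method 1: kNN votes among the nearest training samples.
knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)

# Method 2: a GMM classifier fits one mixture per class and assigns each
# sample to the class with the higher log-likelihood.
gmms = {c: GaussianMixture(n_components=2, random_state=0).fit(X[y == c]) for c in (0, 1)}

def gmm_predict(samples):
    scores = np.column_stack([gmms[c].score_samples(samples) for c in (0, 1)])
    return scores.argmax(axis=1)

print(knn.score(X, y), (gmm_predict(X) == y).mean())
```

The GMM route models each class's feature distribution generatively, while kNN is purely discriminative; comparing both on the same features mirrors the thesis's setup.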
122

Analýza experimentálních EKG / Analysis of experimental ECG

Mackových, Marek January 2016 (has links)
This thesis is focused on the analysis of experimental ECG records acquired from isolated rabbit hearts and aims to describe changes in the ECG caused by ischemia and left ventricular hypertrophy. It consists of a theoretical analysis of the problems in evaluating the ECG during ischemia and hypertrophy, and describes the experimental ECG recording. The theoretical part is followed by a practical section that describes the method for calculating morphological parameters, applies ROC analysis to evaluate their suitability for the classification of hypertrophy, and concludes with the classification itself.
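The ROC analysis used to judge whether a morphological parameter separates the two groups can be sketched as follows; the parameter values are simulated, not taken from the rabbit-heart recordings:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(1)
# Hypothetical morphological parameter, simulated as higher on average in
# hypertrophic records.
param_normal = rng.normal(0.0, 1.0, 200)
param_hyper = rng.normal(1.0, 1.0, 200)
values = np.concatenate([param_normal, param_hyper])
labels = np.concatenate([np.zeros(200), np.ones(200)])

auc = roc_auc_score(labels, values)
fpr, tpr, thresholds = roc_curve(labels, values)
# Youden's index picks the threshold maximising (sensitivity - false positive rate).
best = (tpr - fpr).argmax()
print(round(auc, 3), round(thresholds[best], 3))
```

An AUC near 0.5 would mean the parameter carries no class information; the closer to 1, the better it suits the hypertrophy classifier.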
123

Topologieoptimierung mittels Deep Learning / Topology optimization using deep learning

Halle, Alex, Hasse, Alexander 05 July 2019 (has links)
Topology optimization is the search for an optimal component geometry for a given application. For complex problems, topology optimization can require a great deal of time and computing capacity because of its high level of detail. These drawbacks are to be reduced by means of deep learning, so that topology optimization can serve the design engineer as an aid that responds within seconds. Deep learning is the extension of artificial neural networks, with which patterns or rules of behaviour can be learned. The topology optimization that has so far been computed numerically is thus to be solved with a deep learning approach. To this end, approaches, a computation scheme and first conclusions are presented and discussed.
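The core idea, a network that maps load parameters straight to a geometry, can be sketched minimally. The training fields below are synthetic surrogates (a smooth bump around the load position), not real finite-element topology-optimization results:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
# Hypothetical training pairs: 2 load parameters -> flattened 8x8 density field.
n = 200
loads = rng.uniform(0, 1, size=(n, 2))
grid = np.linspace(0, 1, 8)
gx, gy = np.meshgrid(grid, grid)
fields = np.stack([
    np.exp(-((gx - p[0]) ** 2 + (gy - p[1]) ** 2) / 0.1).ravel() for p in loads
])

# Once trained, a single forward pass stands in for a costly optimization run.
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
net.fit(loads, fields)
pred = net.predict(loads[:1])
print(pred.shape)
```

In practice the training targets would come from a conventional numerical topology optimizer, and the network amortizes that cost across future queries.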
124

Curating news sections in a historical Swedish news corpus

Rekathati, Faton January 2020 (has links)
The National Library of Sweden uses optical character recognition software to digitize its collections of historical newspapers. The purpose of such software is first to automatically segment text and images from scanned newspaper pages, and second to read the contents of the identified text regions. While the raw text is often digitized successfully, important contextual information about whether the text constitutes, for example, a header, a section title or the body text of an article is not captured. These characteristics are easy for a human to distinguish, yet they remain difficult for a machine to recognize. The main purpose of this thesis is to investigate how well section titles in the newspaper Svenska Dagbladet can be classified using so-called image embeddings as features. A secondary aim is to examine whether section titles become harder to classify in older newspaper data. Lastly, we explore whether manual annotation work can be reduced by using the predictions of a semi-supervised classifier to help in the labeling process. Results indicate that the use of image embeddings helps quite substantially in classifying section titles. Datasets from three time periods (1990-1997, 2004-2013, and 2017 onwards) were sampled and annotated. The best performing model (XGBoost) achieved macro F1 scores of 0.886, 0.936 and 0.980 for the respective time periods. The results also showed that classification became more difficult on older newspapers. Furthermore, a semi-supervised classifier managed an average precision of 83% with only single examples of section titles, showing promise as a way to speed up manual annotation of data.
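Once each text region is represented as a fixed-length image embedding, the task reduces to ordinary tabular classification. A sketch with synthetic 64-dimensional embeddings and scikit-learn's gradient boosting as a stand-in for XGBoost:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(3)
# Synthetic 64-dimensional "image embeddings" for three region classes
# (e.g. header, section title, body text); real embeddings would come
# from a pretrained vision model.
centers = rng.normal(0, 1, size=(3, 64))
X = np.vstack([c + rng.normal(0, 0.5, size=(100, 64)) for c in centers])
y = np.repeat([0, 1, 2], 100)

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
macro_f1 = f1_score(y_test, clf.predict(X_test), average="macro")
print(round(macro_f1, 3))
```

Macro F1 averages the per-class F1 scores, so rare region types count as much as common ones — the same metric the thesis reports per time period.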
125

Knowledge Discovery and Data Mining Using Demographic and Clinical Data to Diagnose Heart Disease. / Knowledge Discovery och Data mining med hjälp av demografiska och kliniska data för att diagnostisera hjärtsjukdomar.

Fernandez Sanchez, Javier January 2018 (has links)
Cardiovascular disease (CVD) is the leading cause of morbidity, mortality, premature death and reduced quality of life for the citizens of the EU. It has been reported that CVD represents a major economic load on health care systems in terms of hospitalizations, rehabilitation services, physician visits and medication. Data mining techniques applied to clinical data have become an interesting tool to prevent, diagnose or treat CVD. In this thesis, Knowledge Discovery and Data Mining (KDD) was employed to analyse clinical and demographic data that could be used to diagnose coronary artery disease (CAD). The exploratory data analysis (EDA) showed that female patients of an elderly age with a higher level of cholesterol, maximum achieved heart rate and ST-depression are more prone to be diagnosed with heart disease. Furthermore, patients with atypical angina are more likely to be of an elderly age with a slightly higher level of cholesterol and maximum achieved heart rate than patients with asymptomatic chest pain. Moreover, patients with exercise-induced angina showed lower values of maximum achieved heart rate than those who do not experience it. We could verify that patients who experience exercise-induced angina and asymptomatic chest pain are more likely to be diagnosed with heart disease. In addition, Logistic Regression, K-Nearest Neighbors, Support Vector Machines, Decision Tree, Bagging and Boosting methods were evaluated by adopting a stratified 10-fold cross-validation approach. The learning models provided an average of 78-83% F-score and a mean AUC of 85-88%. Among all the models, the highest score is given by Radial Basis Function Kernel Support Vector Machines (RBF-SVM), achieving 82.5% ± 4.7% F-score and an AUC of 87.6% ± 5.8%. Our research confirmed that data mining techniques can support physicians in their interpretation of heart disease diagnosis in addition to clinical and demographic characteristics of patients.
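The stratified 10-fold cross-validation protocol used to compare the models can be sketched as follows, with a synthetic 13-feature dataset standing in for the clinical records:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for the clinical dataset: 13 features, binary outcome.
X, y = make_classification(n_samples=300, n_features=13, random_state=0)

# Stratified folds preserve the class ratio in every fold.
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "rbf-svm": SVC(kernel="rbf"),
    "knn": KNeighborsClassifier(),
}
# Each model is scored on the same folds, so the F-scores are comparable.
results = {name: cross_val_score(m, X, y, cv=cv, scoring="f1").mean()
           for name, m in models.items()}
for name, score in results.items():
    print(f"{name}: F1 = {score:.3f}")
```

Stratification matters here because diagnosis datasets are typically imbalanced; a plain random split could leave a fold with almost no positive cases.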
126

Analyzing the Need for Nonprofits in the Housing Sector: A Predictive Model Based on Location

Oerther, Catie 03 August 2023 (has links)
No description available.
127

Predicting Customer Churn in a Subscription-Based E-Commerce Platform Using Machine Learning Techniques

Aljifri, Ahmed January 2024 (has links)
This study investigates the performance of Logistic Regression, k-Nearest Neighbors (KNN), and Random Forest algorithms in predicting customer churn within an e-commerce platform. These algorithms were chosen for the particular characteristics of the dataset and the distinct perspective and value provided by each of them. Iterative model examinations, encompassing preprocessing techniques, feature engineering, and rigorous evaluations, were conducted. Logistic Regression showcased moderate predictive capabilities but lagged in accurately identifying potential churners because of its assumption of linearity between the log odds and the predictors. KNN emerged as the most accurate classifier, achieving superior sensitivity and specificity (98.22% and 96.35%, respectively) and outperforming the other models. Random Forest, with sensitivity and specificity of 91.75% and 95.83% respectively, excelled in specificity but slightly lagged in sensitivity. Feature importance analysis highlighted "Tenure" as the most impactful variable for churn prediction. Preprocessing techniques differed in performance across models, emphasizing the importance of tailored preprocessing. The study's findings underscore the significance of continuous model refinement and optimization in addressing complex business challenges like customer churn. The insights serve as a foundation for businesses to implement targeted retention strategies, mitigate customer attrition, and promote growth in e-commerce platforms.
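Sensitivity and specificity for a kNN churn model can be read off the confusion matrix. The customer data below is synthetic, with a hypothetical rule making short tenure drive churn, which echoes the feature-importance finding:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(4)
# Synthetic customers: column 0 is tenure; an assumed toy rule makes
# short-tenure customers churn.
n = 1000
tenure = rng.exponential(10, n)
other = rng.normal(0, 1, size=(n, 3))
churn = (tenure < 5).astype(int)
X = np.column_stack([tenure, other])

X_train, X_test, y_train, y_test = train_test_split(X, churn, stratify=churn, random_state=0)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)

tn, fp, fn, tp = confusion_matrix(y_test, knn.predict(X_test)).ravel()
sensitivity = tp / (tp + fn)  # share of churners correctly flagged
specificity = tn / (tn + fp)  # share of non-churners correctly kept
print(round(sensitivity, 3), round(specificity, 3))
```

For retention campaigns sensitivity is usually the costlier metric to miss: a false negative is a customer who leaves without being targeted.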
128

A deep learning based anomaly detection pipeline for battery fleets

Khongbantabam, Nabakumar Singh January 2021 (has links)
This thesis proposes a deep learning anomaly detection pipeline to detect possible anomalies during the operation of a fleet of batteries and presents its development and evaluation. The pipeline employs sensors that connect to each battery in the fleet to remotely collect real-time measurements of their operating characteristics, such as voltage, current, and temperature. The deep-learning-based time-series anomaly detection model was developed using the Variational Autoencoder (VAE) architecture, with either Long Short-Term Memory (LSTM) or Gated Recurrent Unit (GRU) networks as the encoder and decoder (LSTMVAE and GRUVAE). Both variants were evaluated against three well-known conventional anomaly detection algorithms: Isolation Nearest Neighbour (iNNE), Isolation Forest (iForest), and kth Nearest Neighbour (k-NN). All five models were trained using two variations of the training dataset (a full-year dataset and a partial recent dataset), producing a total of 10 different model variants. The models were trained using the unsupervised method and the results were evaluated using a test dataset consisting of a few known anomaly days in the past operation of the customer's battery fleet. The results demonstrated that k-NN and GRUVAE performed close to each other, outperforming the rest of the models by a notable margin. LSTMVAE and iForest performed moderately, while iNNE and the iForest variant trained with the full dataset performed the worst in the evaluation. A general observation also reveals that limiting the training dataset to only a recent period produces better results nearly consistently across all models.
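Of the baselines, the k-NN detector is the simplest to sketch: score each window by its mean distance to the nearest normal training windows and flag scores above a high percentile. The telemetry below is synthetic, not the customer fleet's data:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(5)
# Synthetic battery telemetry windows (voltage, current, temperature);
# normal operation assumed to sit around 3.7 V / 1 A / 25 C.
normal = rng.normal([3.7, 1.0, 25.0], [0.05, 0.2, 1.0], size=(500, 3))
# A few anomalous windows: voltage sag with overheating.
anomalies = rng.normal([3.3, 1.0, 45.0], [0.05, 0.2, 1.0], size=(5, 3))

nn = NearestNeighbors(n_neighbors=5).fit(normal)

def score(samples):
    # Anomaly score = mean distance to the 5 nearest normal training windows.
    dist, _ = nn.kneighbors(samples)
    return dist.mean(axis=1)

# Flag anything scoring above the 99th percentile of the normal scores.
threshold = np.percentile(score(normal), 99)
flags = score(anomalies) > threshold
print(int(flags.sum()), "of", len(anomalies), "anomalies flagged")
```

The VAE variants replace the distance score with reconstruction error, but the thresholding logic on an unsupervised score is the same.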
129

Αναγνώριση βασικών κινήσεων του χεριού με χρήση ηλεκτρομυογραφήματος / Recognition of basic hand movements using electromyography

Σαψάνης, Χρήστος 13 October 2013 (has links)
The aim of this work was to recognize six basic hand movements using two systems. As an interdisciplinary topic, it required study of the anatomy of the forearm muscles, biosignals, the method of electromyography (EMG) and methods of pattern recognition. Moreover, the signal contained considerable noise and had to be analysed using EMD to extract features; their dimensionality was then reduced using RELIEF and PCA to improve the classification success rate. The first part uses an EMG system from Delsys, initially on one individual and then on six people, with the average successful classification rate for these six movements exceeding 80%. The second part involves the construction of an autonomous EMG system using an Arduino microcontroller, EMG sensors and electrodes, which are mounted on an elastic glove. Classification results in this case reached 75%.
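The dimensionality-reduction-plus-classification stage can be sketched with PCA feeding a kNN classifier; this is a simplification (the thesis also uses EMD-derived features and RELIEF), and the feature vectors here are synthetic:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
# Synthetic 30-dimensional feature vectors for 6 movement classes;
# stand-ins for EMD-derived EMG features.
centers = rng.normal(0, 1, size=(6, 30))
X = np.vstack([c + rng.normal(0, 0.6, size=(40, 30)) for c in centers])
y = np.repeat(np.arange(6), 40)

# Reduce dimensionality first, then classify the compressed features.
pipe = make_pipeline(PCA(n_components=10), KNeighborsClassifier(n_neighbors=5))
accuracy = cross_val_score(pipe, X, y, cv=5).mean()
print(round(accuracy, 3))
```

Wrapping PCA and the classifier in one pipeline ensures the projection is refit on each training fold, avoiding leakage from the held-out data.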
130

Artificial Neural Networks in Greenhouse Modelling

Miranda Trujillo, Luis Carlos 24 August 2018 (has links)
One facet of the current developments in precision horticulture is highly technified production under cover. Intensive production in modern greenhouses relies heavily on instrumentation and control techniques to automate many tasks. Among these techniques are control strategies, which can also include methods developed within the field of Artificial Intelligence. This document presents research on Artificial Neural Networks (ANN), a technique derived from Artificial Intelligence, and aims to shed light on their applicability in greenhouse vegetable production. In particular, this work focuses on the suitability of ANN-based models for greenhouse environmental control. To this end, two models were built: a short-term climate prediction model (air temperature and relative humidity on a time scale of minutes), and a model of the plant response to the climate, the latter regarding phytometric measurements of leaf temperature, transpiration rate and photosynthesis rate. A dataset comprising three years of tomato cultivation was used to build and test the models. It was found that this kind of model is very sensitive to the fine-tuning of the metaparameters and can produce different results even with the same architecture. Nevertheless, it was shown that ANN are useful for simulating complex biological signals and for estimating future microclimate trends. Furthermore, two connection schemes are proposed to assemble several models in order to generate more complex simulations, such as long-term prediction chains and photosynthesis forecasts. It was concluded that ANN could be used in greenhouse automation systems as part of the control strategy, as they are robust and can cope with the complexity of the system. However, a number of problems and difficulties are pointed out, including the importance of the architecture, the need for large datasets to build the models, and problems arising from the different time constants in the whole greenhouse system.
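A short-term climate model of this kind is, at its core, nonlinear regression. A minimal sketch with a small MLP on synthetic minute-scale data; the variables and the generating rule are assumptions, not the thesis's tomato dataset:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
# Synthetic minute-scale records: outside temperature, solar radiation and
# vent opening drive inside temperature via an assumed toy rule.
n = 1000
outside = rng.uniform(5, 25, n)
radiation = rng.uniform(0, 800, n)
vents = rng.uniform(0, 1, n)
inside = outside + 0.01 * radiation * (1 - vents) + rng.normal(0, 0.3, n)
X = np.column_stack([outside, radiation, vents])

X_train, X_test, y_train, y_test = train_test_split(X, inside, random_state=0)
# Standardizing the inputs matters: radiation is two orders of magnitude
# larger than the other features.
net = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0),
)
net.fit(X_train, y_train)
r2 = net.score(X_test, y_test)
print(round(r2, 3))
```

The sensitivity to metaparameters reported in the thesis shows up even here: changing the hidden-layer sizes or the random seed shifts the fit noticeably.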
