11

HVD-LSTM Based Recognition of Epileptic Seizures and Normal Human Activity

Khan, Pritam, Khan, Yasin, Kumar, Sudhir, Khan, Mohammad S., Gandomi, Amir H. 01 September 2021 (has links)
In this paper, we detect the occurrence of epileptic seizures in patients, as well as the activities of standing, walking, and exercising in healthy persons, leveraging EEG (electroencephalogram) signals. Using Hilbert vibration decomposition (HVD) on the non-linear and non-stationary EEG signals, we obtain multiple monocomponents varying in amplitude and frequency. After decomposition, we extract features from the monocomponent matrix of the EEG signals. The instantaneous amplitude of the HVD monocomponents varies because of the motion artifacts present in EEG signals; hence, statistical features acquired from the instantaneous amplitude help in identifying the epileptic seizures and the normal human activities. The features selected by a correlation-based Q-score are classified using an LSTM (Long Short-Term Memory) based deep learning model in which a feature-based weight update maximizes the classification accuracy. For epilepsy diagnosis using the Bonn dataset and activity recognition leveraging our Sensor Networks Research Lab (SNRL) data, we achieve testing classification accuracies of 96.00% and 83.30%, respectively, with the proposed method.
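A minimal sketch of the feature-extraction and classification stages this abstract describes, assuming pre-segmented EEG windows. HVD itself is not implemented here; as a stand-in, the Hilbert transform (scipy.signal.hilbert) provides an instantaneous-amplitude envelope from which statistical features are computed and fed to an LSTM classifier. The window length, sub-segment count, feature set, and class count are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy.signal import hilbert
import tensorflow as tf

def amplitude_features(segment):
    # statistical features of the instantaneous amplitude of one EEG sub-segment
    env = np.abs(hilbert(segment))
    return [env.mean(), env.std(), env.min(), env.max(), np.median(env)]

def window_to_sequence(window, n_sub=8):
    # split one EEG window into sub-segments and compute features per sub-segment,
    # giving a short feature sequence the LSTM can read
    return np.array([amplitude_features(s) for s in np.array_split(window, n_sub)])

# synthetic stand-in for segmented EEG: 200 windows of 512 samples, 4 classes
rng = np.random.default_rng(0)
X = np.stack([window_to_sequence(w) for w in rng.standard_normal((200, 512))])  # (200, 8, 5)
y = rng.integers(0, 4, size=200)

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=X.shape[1:]),
    tf.keras.layers.Dense(4, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=16, verbose=0)
```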
12

DeepDSSR: Deep Learning Structure for Human Donor Splice Sites Recognition

Alam, Tanvir, Islam, Mohammad Tariqul, Househ, Mowafa, Bouzerdoum, Abdesselam, Kawsar, Ferdaus Ahmed 01 January 2019 (has links)
Human genes often, through alternative splicing of pre-messenger RNAs, produce multiple mRNAs and protein isoforms that may have similar or completely different functions. Identification of splice sites is, therefore, crucial to understanding the gene structure and the variants of mRNA and protein isoforms produced from the primary RNA transcripts. Although many computational methods have been developed to detect splice sites in humans, this remains a challenging problem, and further improvement of computational models is still achievable. Accordingly, we developed DeepDSSR (deep donor splice site recognizer), a novel deep learning based architecture, for predicting human donor splice sites. The proposed method, built upon a publicly available and highly imbalanced benchmark dataset, is comparable with the leading deep learning based methods for detecting human donor splice sites. Performance evaluation metrics show that DeepDSSR outperformed the existing deep learning based methods. Future work will improve the predictive capabilities of our model, and we will build a model for the prediction of acceptor splice sites.
13

LSTM Networks for Detection and Classification of Anomalies in Raw Sensor Data

Verner, Alexander 01 January 2019 (has links)
In order to ensure the validity of sensor data, it must be thoroughly analyzed for various types of anomalies. Traditional machine learning methods for anomaly detection in sensor data are based on domain-specific feature engineering. A typical approach is to use domain knowledge to analyze sensor data and manually create statistics-based features, which are then used to train the machine learning models to detect and classify the anomalies. Although this methodology is used in practice, it has a significant drawback: feature extraction is usually labor intensive and requires considerable effort from domain experts. An alternative approach is to use deep learning algorithms. Research has shown that modern deep neural networks are very effective in automated extraction of abstract features from raw data in classification tasks. Long short-term memory networks, or LSTMs for short, are a special kind of recurrent neural network capable of learning long-term dependencies. These networks have proved especially effective in the classification of raw time-series data in various domains. This dissertation systematically investigates the effectiveness of the LSTM model for anomaly detection and classification in raw time-series sensor data. As a proof of concept, this work used time-series data from sensors that measure blood glucose levels. A large number of time-series sequences was created based on a genuine medical diabetes dataset. Anomalous series were constructed by six methods that interspersed patterns of common anomaly types in the data. An LSTM network model was trained with k-fold cross-validation on both anomalous and valid series to classify raw time-series sequences into one of seven classes: non-anomalous, and classes corresponding to each of the six anomaly types. As a control, the accuracy of detection and classification of the LSTM was compared to that of four traditional machine learning classifiers: support vector machines, random forests, naive Bayes, and shallow neural networks. The performance of all the classifiers was evaluated based on nine metrics: precision, recall, and the F1-score, each measured from the micro, macro, and weighted perspectives. While the traditional models were trained on vectors of features derived from the raw data, based on knowledge of common sources of anomaly, the LSTM was trained on raw time-series data. Experimental results indicate that the performance of the LSTM was comparable to the best traditional classifiers, achieving 99% in all nine metrics. The model requires no labor-intensive feature engineering, and the fine-tuning of its architecture and hyper-parameters can be done in a fully automated way. This study therefore finds LSTM networks to be an effective solution for anomaly detection and classification in sensor data.
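A minimal sketch of the experimental setup this abstract describes, not the dissertation's exact model: an LSTM trained directly on raw time-series windows to assign one of seven classes (non-anomalous plus six anomaly types), evaluated with k-fold cross-validation. The sequence length, layer sizes, and synthetic data are assumptions.

```python
import numpy as np
import tensorflow as tf
from sklearn.model_selection import StratifiedKFold

rng = np.random.default_rng(0)
X = rng.standard_normal((700, 96, 1)).astype("float32")   # 700 raw sequences of 96 readings
y = rng.integers(0, 7, size=700)                          # 7 classes: normal + 6 anomaly types

def build_model():
    # LSTM reads the raw sequence; no hand-engineered features are required
    return tf.keras.Sequential([
        tf.keras.layers.LSTM(64, input_shape=(96, 1)),
        tf.keras.layers.Dense(7, activation="softmax"),
    ])

# k-fold cross-validation over the labeled sequences, as in the study design
for train_idx, test_idx in StratifiedKFold(n_splits=5, shuffle=True, random_state=0).split(X, y):
    model = build_model()
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    model.fit(X[train_idx], y[train_idx], epochs=3, batch_size=32, verbose=0)
    loss, acc = model.evaluate(X[test_idx], y[test_idx], verbose=0)
```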
14

Deep Learning Based Electrocardiogram Delineation

Abrishami, Hedayat 01 October 2019 (has links)
No description available.
15

LSTM Based Deep Learning Models for Prediction of Univariate Time Series Data (An Experiment to Predict New Daily Cases of Covid-19)

Zarean, Zeinab 15 September 2022 (has links)
No description available.
16

ACCELERATED CELLULAR TRACTION CALCULATION BY PREDICTIONS USING DEEP LEARNING

Ibn Shafi, Md. Kamal 01 December 2023 (has links) (PDF)
This study presents a novel approach for predicting future cellular traction in a time series. The proposed method leverages two distinct look-ahead Long Short-Term Memory (LSTM) models, one for the cell boundary and the other for the traction data, to achieve rapid and accurate predictions. These LSTM models are trained using real Fourier Transform Traction Cytometry (FTTC) output data, ensuring consistency and reliability in the underlying calculations. To account for variability among cells, each cell is trained separately, mitigating generalized errors. The predictive performance is demonstrated by accurately forecasting tractions for the next 30 time instances, with an error rate below 7%. Moreover, a strategy for real-time traction calculation is proposed, involving the capture of a bead reference image before cell placement in a controlled environment. By doing so, we eliminate the need for cell removal and enable real-time calculation of tractions. Combining these two ideas, our tool speeds up the traction calculations by a factor of 1.6 by limiting TFM use. Because a walk-forward prediction method is implemented, combining predicted values with real data for future predictions, further speedup is expected. The predictive capabilities of this approach offer valuable insights, with potential applications in identifying cancerous cells based on their traction behavior over time. Additionally, we present an advanced cell boundary detection algorithm that autonomously identifies cell boundaries from obscure cell images, reducing human intervention and bias. This algorithm significantly streamlines data collection, enhancing the efficiency and accuracy of our methodology.
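A minimal sketch of the look-ahead/walk-forward idea this abstract describes, not the authors' traction pipeline: an LSTM is trained to map a history window of a univariate series to its next value, and its predictions are fed back as inputs to forecast 30 future steps. The window length, series, and training settings are illustrative assumptions.

```python
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 40, 600)) + 0.05 * rng.standard_normal(600)  # stand-in signal

win = 20
X = np.array([series[i:i + win] for i in range(len(series) - win)])[..., np.newaxis]
y = series[win:]

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(win, 1)),
    tf.keras.layers.Dense(1),                       # next-step prediction
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

# walk-forward: append each prediction to the history and predict the next step
history = list(series[-win:])
forecast = []
for _ in range(30):
    pred = model.predict(np.array(history[-win:])[np.newaxis, :, np.newaxis], verbose=0)
    nxt = float(pred[0, 0])
    forecast.append(nxt)
    history.append(nxt)
```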
17

A Deep Recurrent Neural Network-Based Energy Management Strategy for Hybrid Electric Vehicles

Jamali Oskoei, Helia Sadat January 2021 (has links)
The automotive industry is inevitably experiencing a paradigm shift from fossil fuels to electric powertrains, with significant technological breakthroughs in vehicle electrification. Emerging hybrid electric vehicles were one of the first steps towards cleaner and greener vehicles with higher fuel economy and lower emission levels. The energy management strategy in hybrid electric vehicles determines the power flow pattern and significantly affects vehicle performance. Therefore, in this thesis, a learning-based strategy is proposed to address the energy management problem of a hybrid electric vehicle in various driving conditions. The idea of a deep recurrent neural network-based energy management strategy is proposed, developed, and evaluated. First, a hybrid electric vehicle model with a rule-based supervisory controller is constructed for this case study to obtain training data for the deep recurrent neural network and to evaluate the performance of the proposed energy management strategy. Second, owing to its capability to remember historical data, a long short-term memory recurrent neural network is designed and trained to estimate the powertrain control variables from vehicle parameters. Extensive simulations are conducted to improve the model accuracy and ensure its generalization capability, and several hyper-parameters and structures are tuned specifically for this purpose. The proposed energy management strategy takes sequential data as input to capture the characteristics of both driver and controller behaviors and improve the estimation/prediction accuracy. The energy management controller is formulated as a time-series problem, and a network predictor module is implemented in the system-level controller of the hybrid electric vehicle model. According to the simulation results, the proposed strategy and prediction model demonstrated lower fuel consumption and higher accuracy compared to other learning-based energy management strategies. / Thesis / Master of Applied Science (MASc)
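A minimal sketch of the supervised setup this abstract describes, assuming the rule-based controller's outputs serve as training targets: an LSTM maps a short history of vehicle signals to powertrain control variables. The choice of signals, horizon, and targets are illustrative assumptions, not the thesis's configuration.

```python
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
# 10-step history of 3 assumed vehicle signals (e.g., speed, acceleration, battery SOC)
X = rng.standard_normal((2000, 10, 3)).astype("float32")
# 2 assumed control targets produced by the rule-based supervisory controller
Y = rng.standard_normal((2000, 2)).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, input_shape=(10, 3)),   # reads the signal history
    tf.keras.layers.Dense(2),                        # regression outputs for control variables
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, Y, epochs=3, batch_size=64, verbose=0)
```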
18

Deep Quantile Regression for Unsupervised Anomaly Detection in Time-Series

Tambuwal, Ahmad I., Neagu, Daniel 18 November 2021 (has links)
Yes / Time-series anomaly detection receives increasing research interest given the growing number of data-rich application domains. Recent additions to anomaly detection methods in the research literature include deep neural networks (DNNs: e.g., RNN, CNN, and Autoencoder). The nature and performance of these algorithms in sequence analysis enable them to learn hierarchical discriminative features and the temporal nature of time series. However, their performance is affected by the usual assumption of a Gaussian distribution on the prediction error, which is either ranked or thresholded to label data instances as anomalous or not. An exact parametric distribution, though, is often not directly relevant in many applications. This can produce faulty decisions from false anomaly predictions due to high variations in data interpretation. The expectation is to produce outputs characterized by a level of confidence. Thus, implementations need a Prediction Interval (PI) that quantifies the level of uncertainty associated with the DNN point forecasts, which helps in making better-informed decisions and mitigates false anomaly alerts. An effort has been made to reduce false anomaly alerts through the use of quantile regression for the identification of anomalies, but it is limited to the use of the quantile interval to identify uncertainties in the data. In this paper, an improved time-series anomaly detection method called deep quantile regression anomaly detection (DQR-AD) is proposed. The proposed method goes further to use the quantile interval (QI) as an anomaly score and compares it with a threshold to identify anomalous points in time-series data. Tests of the proposed method on publicly available anomaly benchmark datasets demonstrate its effective performance over other methods that assume a Gaussian distribution on the prediction or reconstruction cost for the detection of anomalies. This shows that our method is potentially less sensitive to data distribution than existing approaches. / Petroleum Technology Development Fund (PTDF) PhD Scholarship, Nigeria (Award Number: PTDF/ ED/PHD/IAT/884/16)
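A minimal sketch of the quantile-interval idea this abstract describes, not the paper's exact DQR-AD model: an LSTM is trained with pinball (quantile) loss to forecast lower and upper quantiles of the next value, and the interval width (QI) is used as an anomaly score compared against a threshold. The quantile levels, window length, and threshold rule are assumptions.

```python
import numpy as np
import tensorflow as tf

def pinball_loss(q):
    # standard quantile (pinball) loss for quantile level q
    def loss(y_true, y_pred):
        e = y_true - y_pred
        return tf.reduce_mean(tf.maximum(q * e, (q - 1.0) * e))
    return loss

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 60, 1000)) + 0.1 * rng.standard_normal(1000)  # stand-in data

win = 30
X = np.array([series[i:i + win] for i in range(len(series) - win)])[..., np.newaxis]
y = series[win:].reshape(-1, 1)

inp = tf.keras.Input(shape=(win, 1))
h = tf.keras.layers.LSTM(32)(inp)
lo = tf.keras.layers.Dense(1, name="q05")(h)    # 5th-percentile forecast
hi = tf.keras.layers.Dense(1, name="q95")(h)    # 95th-percentile forecast
model = tf.keras.Model(inp, [lo, hi])
model.compile(optimizer="adam", loss=[pinball_loss(0.05), pinball_loss(0.95)])
model.fit(X, [y, y], epochs=5, batch_size=32, verbose=0)

q05, q95 = model.predict(X, verbose=0)
qi = (q95 - q05).ravel()                         # quantile interval as anomaly score
threshold = qi.mean() + 3 * qi.std()             # an assumed simple thresholding rule
anomalies = np.where(qi > threshold)[0]
```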
19

Dynamic Load Modeling from PSSE-Simulated Disturbance Data using Machine Learning

Gyawali, Sanij 14 October 2020 (has links)
Load models have evolved from the simple ZIP model to a composite model that incorporates the transient dynamics of motor loads. This research utilizes recent trends in machine learning to build a reliable and accurate composite load model. A composite load model is a combination of a static (ZIP) model in parallel with a dynamic model. The dynamic model, recommended by the Western Electricity Coordinating Council (WECC), is an induction motor representation. In this research, a dual-cage induction motor with 20 parameters pertaining to its dynamic behavior, starting behavior, and per-unit calculations is used as the dynamic model. Machine learning algorithms require a large amount of data. The required PMU field data and the corresponding system models are considered Critical Energy Infrastructure Information (CEII), and access to them is limited. The next best option for the required amount of data is a simulation environment such as PSSE. The IEEE 118-bus system is used as a test setup in PSSE, and dynamic simulations generate the required data samples. Each sample contains data on bus voltage, bus current, and bus frequency, with the corresponding induction motor parameters as target variables. It was determined that the Artificial Neural Network (ANN) with a multivariate-input to single-parameter-output approach worked best. A Recurrent Neural Network (RNN) was also tested side by side to see whether the additional information of timestamps would help the model's predictions. Moreover, a different definition of the dynamic model, based on a transfer-function load, is also studied. Here, the dynamic model is defined as a mathematical representation of the relation between bus voltage, bus frequency, and the active/reactive power flowing in the bus. With this form of load representation, Long Short-Term Memory (LSTM), a variation of the RNN, performed better than competing algorithms such as Support Vector Regression (SVR). The result of this study is a load model consisting of parameters defining the load at the load bus, whose predictions are compared against simulated parameters to examine their validity for use in contingency analysis. / Master of Science / Independent System Operators (ISOs) and Distribution System Operators (DSOs) have a responsibility to provide an uninterrupted power supply to consumers. That, along with the desire to keep operating costs to a minimum, leads engineers and planners to study the system beforehand and seek the optimum capacity for each of the power system elements, such as generators, transformers, and transmission lines. They then test the overall system using power system models, which are mathematical representations of the real components, to verify the stability and strength of the system. However, the verification is only as good as the system models that are used. As most power system components are controlled by the operators themselves, it is easy to develop a model from their perspective. The load is the only component controlled by consumers; hence the need for better load models. Several studies have been made on static load modeling, and the performance is on par with real behavior. But dynamic loading, which is load behavior dependent on time, is rather difficult to model. Some attempts at dynamic load modeling can already be found. Physical component-based and mathematical transfer-function-based dynamic models are quite widely used for this purpose. These load structures are largely accepted as a good representation of the system's dynamic behavior.
With a load structure in hand, the next task is estimating its parameters. In this research, we tested new machine learning methods to accurately estimate the parameters. Thousands of simulated data samples are used to train the machine learning models. After training, we validated the models on unseen data. This study finally goes on to recommend better methods for load modeling.
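A minimal sketch of the "multivariate input to single parameter output" ANN approach this abstract reports as working best, not the thesis's exact model: one feed-forward regressor is trained per induction-motor parameter from features of a simulated disturbance (bus voltage, current, frequency). The feature vector and synthetic data are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 30))    # assumed features from V, I, f disturbance windows
theta = rng.standard_normal(2000)      # one target motor parameter (e.g., rotor resistance)

X_tr, X_te, y_tr, y_te = train_test_split(X, theta, test_size=0.2, random_state=0)

# one ANN per parameter; repeat for each of the 20 induction-motor parameters
model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
model.fit(X_tr, y_tr)
print("R^2 on held-out disturbances:", model.score(X_te, y_te))
```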
20

[en] A DEPENDENCY TREE ARC FILTER / [pt] UM FILTRO PARA ARCOS EM ÁRVORES DE DEPENDÊNCIA

RENATO SAYAO CRYSTALLINO DA ROCHA 13 December 2018 (has links)
[pt] A tarefa de Processamento de Linguagem Natural consiste em analisar linguagens naturais de forma computacional, facilitando o desenvolvimento de programas capazes de utilizar dados falados ou escritos. Uma das tarefas mais importantes deste campo é a Análise de Dependência. Tal tarefa consiste em analisar a estrutura gramatical de frases visando extrair e aprender dados sobre suas relações de dependência. Em uma sentença, essas relações se apresentam em formato de árvore, onde todas as palavras são interdependentes. Devido ao seu uso em uma grande variedade de aplicações como Tradução Automática e Identificação de Papéis Semânticos, diversas pesquisas com diferentes abordagens são feitas nessa área visando melhorar a acurácia das árvores previstas. Uma das abordagens em questão consiste em encarar o problema como uma tarefa de classificação de tokens e dividi-la em três classificadores diferentes, um para cada sub-tarefa, e depois juntar seus resultados de forma incremental. As sub-tarefas consistem em classificar, para cada par de palavras que possuam relação pai-dependente, a classe gramatical do pai, a posição relativa entre os dois e a distância relativa entre as palavras. Porém, observando pesquisas anteriores nessa abordagem, notamos que o gargalo está na terceira sub-tarefa, a predição da distância entre os tokens. Redes Neurais Recorrentes são modelos que nos permitem trabalhar utilizando sequências de vetores, tornando viáveis problemas de classificação onde tanto a entrada quanto a saída do problema são sequenciais, fazendo delas uma escolha natural para o problema. Esse trabalho utiliza-se de Redes Neurais Recorrentes, em específico Long Short-Term Memory, para realizar a tarefa de predição da distância entre palavras que possuam relações de dependência como um problema de classificação sequence-to-sequence. Para sua avaliação empírica, este trabalho segue a linha de pesquisas anteriores e utiliza os dados do corpus em português disponibilizado pela Conference on Computational Natural Language Learning 2006 Shared Task. O modelo resultante alcança 95.27 por cento de precisão, resultado que é melhor do que o obtido por pesquisas feitas anteriormente para o modelo incremental. / [en] The Natural Language Processing task of Dependency Parsing consists of analyzing the grammatical structure of a sentence written in natural language, aiming to learn, identify, and extract information related to its dependency structure. This data can be structured as a tree, since every word in a sentence has a head-dependent relation to another word from the same sentence. Since Dependency Parsing is used in many applications such as Machine Translation, Semantic Role Labeling, and Part-Of-Speech Tagging, researchers aiming to improve the accuracy of their models are approaching this task in many different ways. One of the approaches consists of treating this task as a token classification problem, using a different classifier for each sub-task and joining them in an incremental way. These sub-tasks consist of classifying, for each head-dependent pair, the Part-Of-Speech tag of the head, the relative position between the two words, and the distance between them. However, previous research using this approach shows that the bottleneck lies in the distance classifier. Recurrent Neural Networks are a kind of neural network that allows us to work with sequences of vectors, enabling classification problems where both the input and the output are sequences, which makes them a great choice for the problem at hand.
This work studies the use of Recurrent Neural Networks, specifically Long Short-Term Memory networks, for the head-dependent distance classifier sub-task, framed as a sequence-to-sequence classification problem. To evaluate its efficiency, this work follows the line of previous research and makes use of the Portuguese corpus of the Conference on Computational Natural Language Learning 2006 Shared Task. The resulting model attains 95.27 percent precision, which is better than the previous results obtained using incremental models.
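A minimal sketch of the sequence-to-sequence formulation described above, not the thesis's exact architecture: a bidirectional LSTM reads a sentence as a sequence of token indices and emits, per token, a class corresponding to the (bucketed) distance to its head. Vocabulary size, embedding dimension, sentence length, and the number of distance classes are assumptions.

```python
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
vocab, seq_len, n_dist = 5000, 40, 21              # 21 assumed distance buckets (e.g., -10..+10)
X = rng.integers(1, vocab, size=(800, seq_len))    # token indices per sentence
Y = rng.integers(0, n_dist, size=(800, seq_len))   # per-token distance class

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab, 64),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64, return_sequences=True)),
    tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(n_dist, activation="softmax")),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(X, Y, epochs=3, batch_size=32, verbose=0)
```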
