11

HVD-LSTM Based Recognition of Epileptic Seizures and Normal Human Activity

Khan, Pritam, Khan, Yasin, Kumar, Sudhir, Khan, Mohammad S., Gandomi, Amir H. 01 September 2021 (has links)
In this paper, we detect the occurrence of epileptic seizures in patients, as well as the activities of standing, walking, and exercising in healthy persons, from EEG (electroencephalogram) signals. Applying Hilbert vibration decomposition (HVD) to the non-linear and non-stationary EEG signal yields multiple monocomponents varying in amplitude and frequency. After decomposition, we extract features from the monocomponent matrix of the EEG signals. The instantaneous amplitude of the HVD monocomponents varies because of the motion artifacts present in EEG signals; hence, the statistical features acquired from the instantaneous amplitude help in identifying epileptic seizures and normal human activities. The features selected by a correlation-based Q-score are classified using an LSTM (Long Short-Term Memory) based deep learning model in which a feature-based weight update maximizes the classification accuracy. For epilepsy diagnosis using the Bonn dataset and activity recognition using our Sensor Networks Research Lab (SNRL) data, the proposed method achieves testing classification accuracies of 96.00% and 83.30%, respectively.
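The abstract includes no implementation, but the envelope-features-to-LSTM pipeline it describes can be sketched roughly as below. Hedges apply throughout: HVD is not available in standard libraries, so a single Hilbert-transform envelope stands in for the monocomponents, and all shapes, class counts, and hyper-parameters are illustrative assumptions rather than the authors' settings.

```python
# Illustrative sketch only: the paper's HVD step is approximated by a single
# Hilbert-transform envelope; names, shapes, and hyper-parameters are
# assumptions, not the authors' implementation.
import numpy as np
from scipy.signal import hilbert
import tensorflow as tf

def amplitude_features(eeg, n_segments=10):
    """Statistical features of the instantaneous amplitude of one EEG epoch."""
    envelope = np.abs(hilbert(eeg))              # instantaneous amplitude
    segments = np.array_split(envelope, n_segments)
    # mean/std/max per segment -> an (n_segments, 3) feature matrix
    return np.stack([[s.mean(), s.std(), s.max()] for s in segments])

# Toy data: 200 epochs of 1024 samples, 5 classes (seizure + activities)
X = np.stack([amplitude_features(e) for e in np.random.randn(200, 1024)])
y = np.random.randint(0, 5, size=200)

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=X.shape[1:]),
    tf.keras.layers.Dense(5, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=3, batch_size=32, verbose=0)
```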
12

DeepDSSR: Deep Learning Structure for Human Donor Splice Sites Recognition

Alam, Tanvir, Islam, Mohammad Tariqul, Househ, Mowafa, Bouzerdoum, Abdesselam, Kawsar, Ferdaus Ahmed 01 January 2019 (has links)
Human genes often produce, through alternative splicing of pre-messenger RNAs, multiple mRNAs and protein isoforms that may have similar or completely different functions. Identifying splice sites is therefore crucial to understanding gene structure and the variants of mRNA and protein isoforms produced from primary RNA transcripts. Although many computational methods have been developed to detect splice sites in humans, the problem remains substantially challenging, and further improvement of computational models is foreseeable. Accordingly, we developed DeepDSSR (deep donor splice site recognizer), a novel deep learning based architecture for predicting human donor splice sites. The proposed method, built upon a publicly available and highly imbalanced benchmark dataset, is comparable with the leading deep learning based methods for detecting human donor splice sites, and performance evaluation metrics show that DeepDSSR outperformed the existing deep learning based methods. Future work will improve the predictive capabilities of our model, and we will build a model for the prediction of acceptor splice sites.
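As a rough illustration of the task setup (not DeepDSSR itself, whose architecture is not reproduced here), a minimal convolutional classifier over one-hot encoded donor-site windows might look like this, with class weights standing in for the paper's handling of the highly imbalanced data:

```python
# Hedged sketch: a generic classifier for donor-site windows, not DeepDSSR.
# Window length, architecture, and class ratio are illustrative assumptions.
import numpy as np
import tensorflow as tf

BASES = {"A": 0, "C": 1, "G": 2, "T": 3}

def one_hot(seq):
    x = np.zeros((len(seq), 4), dtype=np.float32)
    for i, b in enumerate(seq):
        x[i, BASES[b]] = 1.0
    return x

# Toy imbalanced data: 15-nt windows around a candidate donor site
rng = np.random.default_rng(0)
seqs = ["".join(rng.choice(list("ACGT"), 15)) for _ in range(1000)]
X = np.stack([one_hot(s) for s in seqs])
y = (rng.random(1000) < 0.1).astype(int)       # ~10% true donor sites

model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(32, 5, activation="relu", input_shape=(15, 4)),
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X, y, epochs=2, class_weight={0: 1.0, 1: 9.0}, verbose=0)
```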
13

LSTM Networks for Detection and Classification of Anomalies in Raw Sensor Data

Verner, Alexander 01 January 2019 (has links)
In order to ensure the validity of sensor data, it must be thoroughly analyzed for various types of anomalies. Traditional machine learning methods of anomaly detection in sensor data are based on domain-specific feature engineering. A typical approach is to use domain knowledge to analyze sensor data and manually create statistics-based features, which are then used to train machine learning models to detect and classify the anomalies. Although this methodology is used in practice, it has a significant drawback: feature extraction is usually labor-intensive and requires considerable effort from domain experts. An alternative approach is to use deep learning algorithms. Research has shown that modern deep neural networks are very effective at automatically extracting abstract features from raw data in classification tasks. Long short-term memory networks, or LSTMs for short, are a special kind of recurrent neural network capable of learning long-term dependencies. These networks have proved especially effective in the classification of raw time-series data in various domains. This dissertation systematically investigates the effectiveness of the LSTM model for anomaly detection and classification in raw time-series sensor data. As a proof of concept, this work used time-series data from sensors that measure blood glucose levels. A large number of time-series sequences was created based on a genuine medical diabetes dataset. Anomalous series were constructed by six methods that interspersed patterns of common anomaly types in the data. An LSTM network model was trained with k-fold cross-validation on both anomalous and valid series to classify raw time-series sequences into one of seven classes: non-anomalous, and the classes corresponding to each of the six anomaly types. As a control, the detection and classification accuracy of the LSTM was compared to that of four traditional machine learning classifiers: support vector machines, random forests, naive Bayes, and shallow neural networks. The performance of all the classifiers was evaluated on nine metrics: precision, recall, and the F1-score, each measured from the micro, macro, and weighted perspectives. While the traditional models were trained on feature vectors, derived from the raw data, that were based on knowledge of common sources of anomaly, the LSTM was trained on raw time-series data. Experimental results indicate that the performance of the LSTM was comparable to the best traditional classifiers, achieving 99% accuracy on all nine metrics. The model requires no labor-intensive feature engineering, and the fine-tuning of its architecture and hyper-parameters can be done in a fully automated way. This study therefore finds LSTM networks an effective solution to anomaly detection and classification in sensor data.
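A minimal sketch of the protocol described above (seven-class classification of raw series with an LSTM under k-fold cross-validation) is given below; the glucose data, anomaly-injection methods, and hyper-parameters are all replaced by toy stand-ins.

```python
# Sketch under stated assumptions: fixed-length raw series, seven balanced
# classes, 5-fold CV. None of the dissertation's data or settings are used.
import numpy as np
import tensorflow as tf
from sklearn.model_selection import KFold

X = np.random.randn(700, 128, 1).astype("float32")   # 700 series, 128 steps
y = np.repeat(np.arange(7), 100)                     # valid + 6 anomaly types

def build_model():
    m = tf.keras.Sequential([
        tf.keras.layers.LSTM(64, input_shape=(128, 1)),
        tf.keras.layers.Dense(7, activation="softmax"),
    ])
    m.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
    return m

for fold, (tr, te) in enumerate(
        KFold(n_splits=5, shuffle=True, random_state=0).split(X)):
    model = build_model()                 # fresh model per fold
    model.fit(X[tr], y[tr], epochs=2, verbose=0)
    _, acc = model.evaluate(X[te], y[te], verbose=0)
    print(f"fold {fold}: accuracy {acc:.3f}")
```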
14

Deep Learning Based Electrocardiogram Delineation

Abrishami, Hedayat 01 October 2019 (has links)
No description available.
15

LSTM Based Deep Learning Models for Prediction of Univariate Time Series Data (An Experiment to Predict New Daily Cases of Covid-19)

Zarean, Zeinab 15 September 2022 (has links)
No description available.
16

ACCELERATED CELLULAR TRACTION CALCULATION BY PREDICTIONS USING DEEP LEARNING

Ibn Shafi, Md. Kamal 01 December 2023 (has links) (PDF)
This study presents a novel approach for predicting future cellular tractions in a time series. The proposed method leverages two distinct look-ahead Long Short-Term Memory (LSTM) models (one for the cell boundary and the other for the traction data) to achieve rapid and accurate predictions. These LSTM models are trained on real Fourier Transform Traction Cytometry (FTTC) output data, ensuring consistency and reliability in the underlying calculations. To account for variability among cells, each cell is trained separately, mitigating generalization errors. The predictive performance is demonstrated by accurately forecasting tractions for the next 30 time instances, with an error rate below 7%. Moreover, a strategy for real-time traction calculation is proposed, involving the capture of a bead reference image before cell placement in a controlled environment. By doing so, we eliminate the need for cell removal and enable real-time calculation of tractions. Combining these two ideas, our tool speeds up traction calculations 1.6 times by limiting TFM use, and because walk-forward prediction is implemented by combining predicted values with real data for subsequent predictions, further speedup is indicated. The predictive capabilities of this approach offer valuable insights, with potential applications in identifying cancerous cells based on their traction behavior over time. Additionally, we present an advanced cell boundary detection algorithm that autonomously identifies cell boundaries from obscure cell images, reducing human intervention and bias. This algorithm significantly streamlines data collection, enhancing the efficiency and accuracy of our methodology.
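The walk-forward idea (feeding each prediction back into the input window so later steps are forecast from a mix of real and predicted values) can be sketched as follows; the window size, model, and toy traction trace are assumptions, not the thesis implementation.

```python
# Illustrative walk-forward loop: an LSTM trained on a traction history
# predicts the next value, and each prediction is appended to the window.
import numpy as np
import tensorflow as tf

WINDOW = 10
series = np.sin(np.linspace(0, 20, 200)).astype("float32")  # toy traction trace

X = np.stack([series[i:i + WINDOW] for i in range(len(series) - WINDOW)])
y = series[WINDOW:]

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(16, input_shape=(WINDOW, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X[..., None], y, epochs=5, verbose=0)

# Forecast the next 30 instances, feeding predictions back in
history = list(series[-WINDOW:])
forecast = []
for _ in range(30):
    window = np.array(history[-WINDOW:], dtype="float32")[None, :, None]
    nxt = float(model.predict(window, verbose=0)[0, 0])
    forecast.append(nxt)
    history.append(nxt)
```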
17

A Deep Recurrent Neural Network-Based Energy Management Strategy for Hybrid Electric Vehicles

Jamali Oskoei, Helia Sadat January 2021 (has links)
The automotive industry is inevitably experiencing a paradigm shift from fossil fuels to electric powertrains, with significant technological breakthroughs in vehicle electrification. Emerging hybrid electric vehicles were one of the first steps towards cleaner and greener vehicles, with higher fuel economy and lower emission levels. The energy management strategy in a hybrid electric vehicle determines the power flow pattern and significantly affects vehicle performance. Therefore, in this thesis, a learning-based strategy is proposed to address the energy management problem of a hybrid electric vehicle under various driving conditions. A deep recurrent neural network-based energy management strategy is proposed, developed, and evaluated. First, a hybrid electric vehicle model with a rule-based supervisory controller is constructed to obtain training data for the deep recurrent neural network and to evaluate the performance of the proposed energy management strategy. Second, owing to its capability to remember historical data, a long short-term memory recurrent neural network is designed and trained to estimate the powertrain control variables from vehicle parameters. Extensive simulations are conducted to improve the model accuracy and ensure its generalization capability, and several hyper-parameters and structures are specifically tuned for this purpose. The proposed energy management strategy takes sequential data as input to capture the characteristics of both driver and controller behavior and improve estimation/prediction accuracy. The energy management controller is formulated as a time-series problem, and a network predictor module is implemented in the system-level controller of the hybrid electric vehicle model. According to the simulation results, the proposed strategy and prediction model demonstrate lower fuel consumption and higher accuracy compared to other learning-based energy management strategies. / Thesis / Master of Applied Science (MASc)
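The core supervised setup (an LSTM mapping a window of vehicle signals to powertrain control variables, trained on logs from the rule-based controller) might be sketched as below; all signal names and shapes are assumptions for illustration.

```python
# Minimal sketch of the idea only: sequential vehicle signals (e.g. speed,
# torque demand, battery SOC) -> control variables (e.g. power split).
import numpy as np
import tensorflow as tf

T, F = 20, 3          # 20 time steps of 3 vehicle signals per sample
X = np.random.randn(500, T, F).astype("float32")   # stand-in for drive-cycle logs
y = np.random.randn(500, 2).astype("float32")      # stand-in control variables

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, input_shape=(T, F)),
    tf.keras.layers.Dense(2),                      # predicted control variables
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=3, validation_split=0.2, verbose=0)
```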
18

A comparative analysis on the predictive performance of LSTM and SVR on Bitcoin closing prices.

Rayyan, Hakim January 2022 (has links)
Bitcoin has, since its inception in 2009, seen its market capitalisation rise to a staggering 846 billion US dollars, making it the world's leading cryptocurrency. This has attracted financial analysts as well as researchers to experiment with different models, with the aim of developing one capable of predicting Bitcoin closing prices. The aim of this thesis was to examine how well the LSTM and SVR models perform in predicting Bitcoin closing prices. As measures of performance, the RMSE, NRMSE, and MAPE were used, with the random walk without drift as a benchmark to further contextualise the performance of both models. The empirical results show that the random walk without drift yielded the best results for both the RMSE and NRMSE, scoring 1624.638 and 0.02525 respectively, while the LSTM outperformed both the random walk without drift and the SVR model in terms of the MAPE, scoring 0.0272 against 0.0274 for both the random walk without drift and the SVR. Given the performance of the random walk against both models, it cannot be inferred that the LSTM and SVR models yielded statistically significant predictions. / Aaron Green
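For reference, the benchmark and metrics are straightforward to reproduce in outline: the random walk without drift simply predicts the previous day's close. The sketch below uses a toy price series, and the NRMSE normalisation (by range) is one common choice; the thesis may normalise differently.

```python
# Hedged sketch of the evaluation setup with toy data in place of real prices.
import numpy as np

prices = 40000 + np.cumsum(np.random.randn(365) * 500)   # toy daily closes
actual, naive = prices[1:], prices[:-1]                  # random-walk forecast

rmse = np.sqrt(np.mean((actual - naive) ** 2))
nrmse = rmse / (actual.max() - actual.min())             # range normalisation
mape = np.mean(np.abs((actual - naive) / actual))        # fractional MAPE

print(f"RMSE={rmse:.1f}  NRMSE={nrmse:.4f}  MAPE={mape:.4f}")
```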
19

Deep Quantile Regression for Unsupervised Anomaly Detection in Time-Series

Tambuwal, Ahmad I., Neagu, Daniel 18 November 2021 (has links)
Time-series anomaly detection receives increasing research interest given the growing number of data-rich application domains. Recent additions to anomaly detection methods in the research literature include deep neural networks (DNNs, e.g., RNNs, CNNs, and autoencoders). The nature and performance of these algorithms in sequence analysis enable them to learn hierarchical discriminative features and the temporal nature of time series. However, their performance is affected by the usual assumption of a Gaussian distribution on the prediction error, which is either ranked or thresholded to label data instances as anomalous or not. An exact parametric distribution is often not directly relevant in many applications, though, and this can produce faulty decisions from false anomaly predictions due to high variation in data interpretation. The expectation is to produce outputs characterized by a level of confidence. Implementations therefore need a Prediction Interval (PI) that quantifies the uncertainty associated with the DNN point forecasts, which helps in making better-informed decisions and mitigates false anomaly alerts. An effort has been made to reduce false anomaly alerts through the use of quantile regression for the identification of anomalies, but it is limited to the use of the quantile interval to identify uncertainties in the data. In this paper, an improved time-series anomaly detection method called deep quantile regression anomaly detection (DQR-AD) is proposed. The proposed method goes further, using the quantile interval (QI) as an anomaly score and comparing it with a threshold to identify anomalous points in time-series data. Tests of the proposed method on publicly available anomaly benchmark datasets demonstrate its effective performance over other methods that assume a Gaussian distribution on the prediction or reconstruction cost for the detection of anomalies. This shows that our method is potentially less sensitive to data distribution than existing approaches. / Petroleum Technology Development Fund (PTDF) PhD Scholarship, Nigeria (Award Number: PTDF/ED/PHD/IAT/884/16)
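The core mechanism (training at two quantiles with the pinball loss and using the width of the resulting interval as an anomaly score) can be sketched as below. This illustrates the idea only, not the authors' DQR-AD code; the model, quantile levels, and data are assumptions.

```python
# Sketch: a two-head LSTM trained with the pinball (quantile) loss; the
# interval width q95 - q05 serves as a per-window anomaly score.
import numpy as np
import tensorflow as tf

def pinball_loss(q):
    def loss(y_true, y_pred):
        e = y_true - y_pred
        # q*e when under-predicting, (q-1)*e when over-predicting
        return tf.reduce_mean(tf.maximum(q * e, (q - 1.0) * e))
    return loss

X = np.random.randn(1000, 10, 1).astype("float32")             # toy windows
y = (X.sum(axis=(1, 2))[:, None]
     + np.random.randn(1000, 1)).astype("float32")             # toy targets

inp = tf.keras.Input(shape=(10, 1))
h = tf.keras.layers.LSTM(16)(inp)
lo = tf.keras.layers.Dense(1, name="q05")(h)
hi = tf.keras.layers.Dense(1, name="q95")(h)
model = tf.keras.Model(inp, [lo, hi])
model.compile(optimizer="adam",
              loss={"q05": pinball_loss(0.05), "q95": pinball_loss(0.95)})
model.fit(X, {"q05": y, "q95": y}, epochs=2, verbose=0)

q05, q95 = model.predict(X, verbose=0)
anomaly_score = (q95 - q05).ravel()    # wide interval = high uncertainty
```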
20

Control of Grid-Connected Converters using Deep Learning

Ghidewon-Abay, Sengal 12 January 2023 (has links)
With the rise of inverter-based resources (IBRs) within the power system, the control of grid-connected converters (GCCs) has become pertinent, because they interface IBRs to the grid. The conventional method of control for a GCC such as the voltage-sourced converter (VSC) is a decoupled control loop in the synchronous reference frame. However, this model-based control method is sensitive to parameter changes, causing deterioration in controller performance. Data-driven approaches such as machine learning can be utilized to design controllers capable of operating GCCs under various system conditions. This work reviews different machine learning applications in power systems as well as the conventional method of controlling a VSC. It explores a deep learning-based control method for a three-phase grid-connected VSC, specifically utilizing a long short-term memory (LSTM) network for robust control. Simulations of a conventionally controlled VSC are conducted in Simulink to collect data for training the LSTM-based controller. The LSTM model is built and trained using the Keras and TensorFlow libraries in Python and tested in Simulink. The performance of the LSTM-based controller is evaluated in different case studies and compared to the conventional method of control. Simulation results demonstrate the effectiveness of this approach, which outperforms the conventional controller and maintains stability under different system parameter changes. / Master of Science / The desire to minimize the use of fossil fuels and reduce carbon footprints has increased the usage of renewable energy resources, also known as inverter-based resources (IBRs), within the power grid. These resources add a level of complexity to operating the grid due to the fluctuating nature of IBRs, and they are connected to the power grid through grid-connected converters (GCCs). The control method conventionally used for GCCs is derived by accounting for the system parameters, creating a mathematical model under constant parameters. However, the parameters of the system are susceptible to change under different operating and environmental conditions. This results in poor performance from the controller under various operating conditions due to its inability to adapt to the system. Data-driven approaches such as machine learning are becoming increasingly popular for their ability to capture the dynamics of a system with limited knowledge. The applications of machine learning within power systems include fault diagnosis, energy management, and cybersecurity. This work explores the use of deep learning techniques as a robust approach to controlling GCCs.
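The imitation-learning step (training an LSTM in Keras/TensorFlow on logged controller data, as the thesis describes) might look roughly like the sketch below; the signal names (dq-frame currents in, modulation commands out), window length, and architecture are assumptions, not the thesis's design.

```python
# Hedged sketch of the training step only: an LSTM learns to imitate the
# conventional decoupled dq-frame controller from logged simulation data.
import numpy as np
import tensorflow as tf

T, F = 50, 4          # 50-step windows of 4 logged signals (e.g. id, iq, errors)
X = np.random.randn(2000, T, F).astype("float32")   # stand-in for Simulink logs
Y = np.random.randn(2000, T, 2).astype("float32")   # controller outputs md, mq

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, return_sequences=True, input_shape=(T, F)),
    tf.keras.layers.Dense(2),   # one (md, mq) command per time step
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, Y, epochs=3, batch_size=64, verbose=0)
```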
