About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
751

Deep learning role in scoliosis detection and treatment

Guanche, Luis 29 January 2024 (has links)
Scoliosis is a common skeletal condition in which a curvature forms along the coronal plane of the spine. Although scoliosis has long been recognized, its pathophysiology and the best mode of treatment are still debated. Currently, definitive diagnosis of scoliosis and its progression is performed on anterior-posterior (AP) radiographs by measuring the angle of coronal curvature, referred to as the Cobb angle. Cobb angle measurements can be performed by deep learning algorithms, which are currently being investigated as a possible diagnostic tool for clinicians. This thesis focuses on the role of deep learning in the diagnosis and treatment of scoliosis and proposes a study design that uses these algorithms to better understand and classify the disease.
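The angle computation itself is straightforward once a network has localized the vertebrae; a minimal sketch in Python, assuming a hypothetical landmark-detection model has already produced per-vertebra endplate coordinates:

```python
import numpy as np

def endplate_slopes(landmarks):
    """Convert endplate landmark pairs to inclinations in degrees.

    `landmarks` has shape (n_vertebrae, 2, 2): for each vertebra, the
    (x, y) image coordinates of the left and right endpoints of an
    endplate, e.g. as predicted by a landmark-detection CNN.
    """
    dx = landmarks[:, 1, 0] - landmarks[:, 0, 0]
    dy = landmarks[:, 1, 1] - landmarks[:, 0, 1]
    return np.degrees(np.arctan2(dy, dx))

def cobb_angle(slopes_deg):
    """Cobb angle = angle between the most-tilted endplates of the curve,
    i.e. the maximum pairwise difference in endplate inclination."""
    slopes = np.asarray(slopes_deg, dtype=float)
    return float(slopes.max() - slopes.min())

# Example: five vertebrae with progressively tilting endplates.
pts = np.array([[[0, 0], [10, -1]],
                [[0, 10], [10, 7]],
                [[0, 20], [10, 16]],
                [[0, 30], [10, 28]],
                [[0, 40], [10, 40]]], dtype=float)
print(f"Cobb angle = {cobb_angle(endplate_slopes(pts)):.1f} degrees")
```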
752

A Comprehensive Analysis of Deep Learning for Interference Suppression, Sample and Model Complexity in Wireless Systems

Oyedare, Taiwo Remilekun 12 March 2024 (has links)
The wireless spectrum is limited, and demand for it is increasing due to technological advances in wireless communication, resulting in persistent interference issues. Despite progress in addressing interference, it remains a challenge to effective spectrum usage, particularly in license-free and managed shared bands and other opportunistic spectrum access solutions. Efficient and interference-resistant spectrum usage schemes are therefore critical. In the past, most interference solutions have relied on avoidance techniques and expert-system-based mitigation approaches. Recently, researchers have applied artificial intelligence/machine learning techniques at the physical (PHY) layer, particularly deep learning, to suppress or compensate for the interfering signal rather than simply avoiding it. Deep learning has also been used in recent years to address various difficult problems in wireless communications, such as transmitter classification, interference classification, and modulation recognition, amongst others. To this end, this dissertation presents a thorough analysis of deep learning techniques for interference classification and suppression and examines the complexity (sample and model) issues that arise from using deep learning. First, we address the knowledge gap in the literature with respect to the state of the art in deep learning-based interference suppression. To delineate the limitations of deep learning-based interference suppression techniques, we discuss several challenges, including lack of interpretability, the stochastic nature of the wireless channel, issues with open set recognition (OSR), and challenges with implementation. We also provide a technical discussion of the prominent deep learning algorithms proposed in the literature and offer guidelines for their successful implementation. Next, we investigate convolutional neural network (CNN) architectures for interference and transmitter classification tasks. In particular, we use a CNN architecture to classify interference, investigate the model complexity of CNN architectures for classifying homogeneous and heterogeneous devices, and examine its impact on test accuracy. Next, we explore issues of sample size and sample quality in the training data for deep learning, and based on the findings of our sample complexity study we propose a rule of thumb for transmitter classification using CNNs. Finally, in cases where interference cannot be avoided, it is important to suppress it. To achieve this, we build on autoencoder work from other fields to design a convolutional neural network (CNN)-based autoencoder model that suppresses interference, thereby enabling the coexistence of different wireless technologies in both licensed and unlicensed bands. / Doctor of Philosophy / Wireless communication has advanced considerably in recent years, but it is still hard to use the limited available spectrum without interference from other devices. In the past, researchers tried to avoid interference using expert systems. Now, researchers are using artificial intelligence and machine learning, particularly deep learning, to mitigate interference in a different way. Deep learning has also been used to solve other tough problems in wireless communication, such as classifying the type of device transmitting a signal and classifying or avoiding the signal itself.
This dissertation presents a comprehensive review of deep learning techniques for reducing interference in wireless communication. It leverages a deep learning model called a convolutional neural network (CNN) to classify interference and investigates how the complexity of the CNN affects its performance. It also examines the relationship between model performance and dataset size (i.e., sample complexity) in wireless communication. Finally, it discusses a CNN-based autoencoder technique to suppress interference in digital amplitude-phase modulation systems. All of these techniques are important for making sure different wireless technologies can work together in both licensed and unlicensed bands.
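As an illustration of the suppression idea, below is a minimal sketch of a CNN-based denoising autoencoder over I/Q samples; the layer sizes and the paired corrupted/clean training windows are assumptions for the sketch, not the dissertation's exact architecture:

```python
import torch
import torch.nn as nn

class IQDenoisingAutoencoder(nn.Module):
    """1-D convolutional autoencoder over I/Q samples.

    Input/output shape: (batch, 2, n_samples) — the two channels are the
    in-phase and quadrature components of the baseband signal. Trained to
    map an interference-corrupted window to its clean counterpart.
    """
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(2, 32, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=7, stride=2, padding=3), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(64, 32, kernel_size=7, stride=2,
                               padding=3, output_padding=1), nn.ReLU(),
            nn.ConvTranspose1d(32, 2, kernel_size=7, stride=2,
                               padding=3, output_padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = IQDenoisingAutoencoder()
corrupted = torch.randn(8, 2, 1024)   # stand-in for interfered I/Q windows
clean = torch.randn(8, 2, 1024)       # stand-in for interference-free targets
loss = nn.functional.mse_loss(model(corrupted), clean)
loss.backward()  # one reconstruction-loss training step would follow
```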
753

Multi-Template Temporal Siamese Network for Visual Object Tracking

Sekhavati, Ali 04 January 2023 (has links)
Visual object tracking is the task of assigning a unique ID to an object in a video, determining whether it is present in the current frame and, if it is, precisely localizing its position. There are numerous challenges in object tracking, such as changes of illumination, partial or full occlusion, changes of target appearance, blurring caused by camera movement, the presence of objects similar to the target, changes in video image quality over time, etc. Due to these challenges, traditional computer vision techniques cannot perform high-quality tracking, especially long-term tracking. Almost all state-of-the-art methods in object tracking now use artificial intelligence, and more specifically Convolutional Neural Networks. In this work, we present a Siamese-based tracker that differs from previous work in two ways. Firstly, most Siamese-based trackers take the target in the first frame as the ground truth. Despite the success of such methods in previous years, this does not guarantee robust tracking, as it cannot handle many of the challenges that change target appearance, such as blurring caused by camera movement, occlusion, and pose variation. In this work, while keeping the first frame as a template, we add five additional templates that are dynamically updated and replaced based on the target classification score in different frames. Diversity, similarity, and recency are the criteria for choosing the members of the bag, which we call the bag of dynamic templates. Secondly, many Siamese-based trackers are vulnerable to mistakenly tracking another, similar-looking object instead of the intended target. Many researchers have proposed computationally expensive approaches, such as tracking all distractors along with the given target and discriminating among them in every frame. In this work, we handle this issue by estimating the target's position in the next frame from its bounding-box coordinates in previous frames. We use a temporal network over the history of several previous frames, measure the classification scores of candidates against the templates in the bag of dynamic templates, and maintain a sequential confidence value that reflects how confident the tracker has been in previous frames. We call this module the robustifier; it prevents the tracker from continuously switching between the target and possible distractors. Extensive experiments on the OTB50, OTB100 and UAV20L datasets demonstrate the superiority of our work over state-of-the-art methods.
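A minimal sketch of how such a bag of dynamic templates could be maintained; the thresholds and the cosine-distance diversity measure here are assumptions for illustration, not the thesis's exact update rules:

```python
import numpy as np

class TemplateBag:
    """Bag of dynamic templates for a Siamese tracker (a sketch).

    The fixed slot holds the first-frame template; up to `capacity`
    additional templates are replaced over time based on classification
    score, diversity (distance to existing templates), and recency.
    """
    def __init__(self, first_template, capacity=5,
                 score_thresh=0.9, diversity_thresh=0.2):
        self.fixed = first_template
        self.capacity = capacity
        self.score_thresh = score_thresh
        self.diversity_thresh = diversity_thresh
        self.dynamic = []   # list of (feature_vector, frame_index)

    def _dissimilarity(self, a, b):
        # Cosine distance between template feature vectors.
        return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

    def maybe_update(self, feat, score, frame_idx):
        """Insert `feat` only if the tracker was confident and it adds diversity."""
        if score < self.score_thresh:
            return
        dists = [self._dissimilarity(feat, f) for f, _ in self.dynamic]
        if dists and min(dists) < self.diversity_thresh:
            return  # too similar to an existing template
        if len(self.dynamic) < self.capacity:
            self.dynamic.append((feat, frame_idx))
        else:
            # Recency criterion: evict the oldest dynamic template.
            oldest = min(range(self.capacity), key=lambda i: self.dynamic[i][1])
            self.dynamic[oldest] = (feat, frame_idx)

    def templates(self):
        return [self.fixed] + [f for f, _ in self.dynamic]
```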
754

ACOUSTIC EMISSION MONITORING OF THE POWDER BED FUSION PROCESS WITH MACHINE LEARNING APPROACH

Ghayoomi Mohammadi, Mohammad January 2021 (has links)
Laser powder bed fusion (L-PBF) is an additive manufacturing process in which a heat source (such as a laser) consolidates material in powder form to build three-dimensional parts. This thesis investigates the performance of several machine learning (ML) techniques for online defect detection within the L-PBF process, where defects such as pores and cracks can be detected from acoustic emission (AE) signals captured in real time. The goal is to improve consistency in product quality and process reliability, and applying AE sensors to receive the elastic waves released during printing is a cost-effective way of meeting it. As a first step, stainless steel 316L samples were printed under eight different process-parameter settings, and the AE signals received during printing were collected and analyzed. Several time- and frequency-domain features were extracted from the AE signals. K-means clustering was employed for unsupervised learning, and a neural network was used for supervised learning on the dataset. Data labelling was conducted using laser power, clustering results, and signal durations. The results showed the potential of real-time quality monitoring using AE in the L-PBF process. Process parameters were then intentionally adjusted to create three levels of defects in H13 tool steel samples: the first class was printed with minimal defects, the second with intentional cracks, and the third with intentional cracks and porosities. AE signals were acquired during the samples' manufacture, and three different ML techniques were applied to analyze and interpret the data. First, the data was labelled using hierarchical K-means clustering, followed by a supervised deep learning neural network (DL) that matched acoustic signals to defect type. Second, principal component analysis (PCA) was used to reduce the dimensionality of the data, and a Gaussian mixture model (GMM) enabled fast defect detection suitable for online monitoring. Third, a variational autoencoder (VAE) was used to obtain general features of the signal, which could be used as input to a classifier. Quality trends in AE signals collected from 316L samples were successfully detected using a supervised DL model trained on the H13 tool steel dataset. The VAE approach represents a new method for detecting defects within L-PBF processes that could eliminate the need to retrain models for different materials. / Thesis / Master of Applied Science (MASc)
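A minimal sketch of the kind of time- and frequency-domain feature extraction described above; the specific feature set and the sampling rate are illustrative assumptions, not the thesis's exact choices:

```python
import numpy as np
from scipy import signal, stats

def ae_features(x, fs):
    """Extract common time- and frequency-domain features from one AE window.

    x : 1-D array of acoustic-emission samples; fs : sampling rate in Hz.
    """
    rms = np.sqrt(np.mean(x ** 2))
    feats = {
        "rms": rms,
        "peak": np.max(np.abs(x)),
        "crest_factor": np.max(np.abs(x)) / rms,
        "kurtosis": stats.kurtosis(x),
        "skewness": stats.skew(x),
    }
    # Frequency domain: Welch power spectral density, normalized to sum to 1.
    f, pxx = signal.welch(x, fs=fs, nperseg=min(1024, len(x)))
    psd = pxx / pxx.sum()
    feats["spectral_centroid"] = float(np.sum(f * psd))
    feats["spectral_entropy"] = float(-np.sum(psd * np.log2(psd + 1e-12)))
    return feats

# Example on a synthetic burst (placeholder for a real AE window).
fs = 1_000_000  # 1 MHz sampling, a typical order of magnitude for AE sensors
t = np.arange(4096) / fs
burst = np.exp(-t * 2e4) * np.sin(2 * np.pi * 150e3 * t)
print(ae_features(burst, fs))
```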
755

MECHANISMS OF VENOUS THROMBUS STABILITY

Shaya, Shana January 2022 (has links)
Whether a patient presents with deep vein thrombosis (DVT) or pulmonary embolism (PE) varies with clinical factors. Patients with factor V Leiden (FVL) typically present with DVT, while cancer patients present with PE. The biological mechanisms that determine DVT stability in the progression of DVT to PE are not known; thus, little is known about the mechanism of thrombus stability, the factors involved, or the effect of anticoagulants on embolization and PE burden. To answer these questions, we set out to (i) develop a mouse model to evaluate DVT stability and its relationship with PE burden under anticoagulant treatment, (ii) determine whether anticoagulants, by inhibiting thrombin, require FXIII to decrease thrombus stability, (iii) determine the effects of attenuating fibrinolysis, using epsilon aminocaproic acid (ε-ACA or EACA), supplemental FXIII, and α2-AP, on clot stability, and (iv) use our model to explain the FVL paradox. For our thrombus stability model, the femoral vein of C57BL/6, FXIII-deficient (FXIII-/-), FVL heterozygous, or FVL homozygous female mice was subjected to ferric chloride (FeCl3) injury to initiate a non-occlusive thrombus. Treatment with saline, dalteparin, dabigatran, EACA, or FXIII was administered 12 minutes after thrombus formation. Intravital videomicroscopy recorded thrombus size and embolic events leaving the thrombus for 2 hours. Lungs were harvested, sectioned, and stained for the presence of PE. Total and large embolic events were highest after dabigatran treatment compared to saline or dalteparin in wild-type (WT) mice. Variations in the number of embolic events were not attributable to variations in thrombus size, since thrombus size was similar between groups. The number of emboli per lung slice was higher in dabigatran-treated mice, and large embolic events correlated positively with the number of emboli per lung slice independent of treatment. Dabigatran treatment in FXIII-/- mice did not alter embolization patterns, suggesting that FXIII is required for dabigatran to decrease thrombus stability. EACA increased thrombus size significantly and therefore would not be a feasible alternative to IVC filters, as it would increase DVT size; FXIII increased thrombus size only marginally. Treatment with FXIII decreased total and large embolic events in saline-, dalteparin-, or dabigatran-treated mice, similar to EACA-treated mice. The number of emboli per lung slice was reduced after treatment with FXIII or EACA compared to untreated mice, and PE burden was not significantly different between FXIII-treated and EACA-treated mice. Large embolic events correlated positively with PE burden. FVL heterozygous and homozygous mice had significantly reduced embolization, and their thrombi grew significantly over time; this contrasted with WT mice, in which thrombus size remained similar to the initial injury. PE burden was significantly reduced in FVL mice compared to WT. Collectively, these data show that we have successfully developed a mouse model of acute venous thrombus stability that can quantify emboli and PE burden. Consistent with clinical data, dabigatran, a direct thrombin inhibitor (DTI), acutely decreased thrombus stability and increased PE burden compared to LMWH or saline, an effect that was FXIII-dependent. Attenuating fibrinolysis with EACA, but not FXIII, increased thrombus size; both, however, increased DVT stability and decreased PE burden. Supplementing α2-AP did not alter thrombus stability.
This suggests that administration of FXIII may be a better treatment option than EACA for DVT patients who are bleeding, since EACA may increase DVT size. Lastly, our model can explain the FVL paradox: those with FVL form stable thrombi, leading to an increased incidence of symptomatic DVT and a decreased risk of PE. / Thesis / Doctor of Philosophy (PhD)
756

Predicting Transfer Learning Performance Using Dataset Similarity for Time Series Classification of Human Activity Recognition / Transfer Learning Performance Using Dataset Similarity on Realtime Classification

Clark, Ryan January 2022 (has links)
Deep learning is increasingly becoming a viable way of classifying many types of data. Modern deep learning algorithms, such as one-dimensional convolutional neural networks, have demonstrated excellent performance in classifying time series data because of their ability to identify time-invariant features. A primary challenge of deep learning for time series classification is the large amount of data required for training, and many application domains, such as medicine, have difficulty obtaining sufficient data. Transfer learning is a deep learning method used to apply feature knowledge from one deep learning model to another; it is a powerful tool when both training datasets are similar, as it offers smaller datasets the power of more robust, larger datasets. This makes it vital that the best source dataset be selected when performing transfer learning, and presently there is no metric for this purpose. In this thesis, a metric for predicting the performance of transfer learning is proposed. To develop this metric, the research focuses on classification and transfer learning for human-activity-recognition time series data. For general time series data, finding temporal relations between signals is computationally intensive using non-deep-learning techniques. Rather than time-series signal processing, a neural network autoencoder was used to first transform the source and target datasets into a time-independent feature space. To compare and quantify the suitability of transfer learning datasets, two metrics were examined: i) the average embedded signal from each dataset was used to calculate the distance between the datasets' centroids, and ii) a generative adversarial network (GAN) was trained, with the discriminator portion of the GAN then used to assess the dissimilarity between source and target. This thesis measures the correlation between the inter-dataset centroid distance and dataset similarity, as well as between the GAN discriminator's output and dataset similarity; the discriminator metric, however, suffers from an upper limit on measurable dissimilarity. These metrics were then used to predict the success of transfer learning from one dataset to another for general time series classification. / Thesis / Master of Applied Science (MASc) / Over the past decade, advances in computational power and increases in data quantity have made deep learning a useful method for complex pattern recognition and classification in data. There is a growing desire to use these complex algorithms on smaller quantities of data. To achieve this, a deep learning model is first trained on a larger dataset and then retrained on the smaller dataset; this is called transfer learning. For transfer learning to be effective, there needs to be a level of similarity between the two datasets, so that properties from the larger dataset can be learned and then refined using the smaller one. It is therefore of great interest to understand what level of similarity exists between the two datasets. The goal of this research is to provide a similarity metric between two time series classification datasets so that potential performance gains from transfer learning can be better understood. Measuring the similarity between two time series datasets presents a unique challenge due to the nature of the data. To address this challenge, an encoder approach was implemented to transform the time series data into a form where each signal example can be compared against the others.
In this thesis, different similarity metrics were evaluated and correlated with the performance of a deep learning model, allowing prediction of how effective transfer learning may be when applied.
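A minimal sketch of the first metric, the centroid distance; the embeddings are assumed to come from a trained autoencoder's encoder, which is not shown here:

```python
import numpy as np

def centroid_distance(source_embedded, target_embedded):
    """Distance between dataset centroids in an encoder's feature space.

    Each argument has shape (n_examples, embedding_dim): the time series
    of each dataset after passing through a (hypothetical) trained
    autoencoder's encoder. A smaller distance suggests the datasets are
    more similar, and hence better candidates for transfer learning.
    """
    c_src = source_embedded.mean(axis=0)
    c_tgt = target_embedded.mean(axis=0)
    return float(np.linalg.norm(c_src - c_tgt))

# Stand-in embeddings for two human-activity-recognition datasets.
rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(500, 64))
tgt = rng.normal(0.3, 1.0, size=(400, 64))
print(f"centroid distance = {centroid_distance(src, tgt):.3f}")
```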
757

Sensor capture and point cloud processing for off-road autonomous vehicles

Farmer, Eric D 01 May 2020 (has links)
Autonomous vehicles are complex robotic and artificial intelligence systems working together to achieve safe operation in unstructured environments. The objective of this work is to provide a foundation for developing more advanced off-road autonomy algorithms. The project explores the sensors used for off-road autonomy and the data capture process. Additionally, point cloud data captured from lidar sensors is processed to restore some of the geometric information lost during sensor sampling. Because ground-truth values are needed for quantitative comparison, the MAVS was leveraged to generate a large off-road dataset spanning a variety of ecosystems. The results demonstrate data capture from the sensor suite and successful reconstruction of the selected geometric information; using this geometric information, the point cloud data is segmented more accurately with the SqueezeSeg network.
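As one example of geometric information that can be recovered from sampled lidar returns, here is a sketch of per-point surface-normal estimation via local PCA; this is a standard technique presented as an assumption, not necessarily the method used in the thesis:

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points, k=16):
    """Per-point surface normals via PCA over k-nearest neighborhoods.

    points: (n, 3) lidar point cloud. The normal at each point is the
    eigenvector of the local covariance matrix with the smallest
    eigenvalue — one kind of geometry recoverable after sampling.
    """
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    normals = np.empty_like(points)
    for i, nbrs in enumerate(idx):
        nbhd = points[nbrs] - points[nbrs].mean(axis=0)
        # np.linalg.eigh returns eigenvalues in ascending order.
        _, vecs = np.linalg.eigh(nbhd.T @ nbhd)
        normals[i] = vecs[:, 0]
    return normals

# Example: noisy samples from a plane should yield normals near (0, 0, 1).
rng = np.random.default_rng(1)
pts = rng.uniform(-1, 1, size=(2000, 3))
pts[:, 2] = 0.01 * rng.normal(size=2000)
print(np.abs(estimate_normals(pts))[:3].round(2))
```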
758

An explainable method for prediction of sepsis in ICUs using deep learning

Baghaei, Kourosh T 30 April 2021 (has links)
As a complicated, lethal medical emergency, sepsis is not easily diagnosed until it is too late to take life-saving action. Early prediction of sepsis in ICUs may reduce the inpatient mortality rate. Although deep learning models can predict the outcome of ICU stays with high accuracy, the opacity of such neural networks decreases their reliability, particularly in ICU settings, where time is not on the doctors' side and every mistake increases the chance of patient mortality. It is therefore crucial for the predictive model to provide some form of reasoning alongside its predictions, so that medical staff can avoid acting on false alarms. To address this problem, we propose adding an attention layer to a deep recurrent neural network that learns the relative importance of each parameter of the multivariate ICU-stay data. Our approach provides explainability through the attention mechanism. We compare our method with state-of-the-art methods and show the superiority of our approach in terms of providing explanations.
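A minimal sketch of the idea, a recurrent network with an attention layer whose weights can be inspected for explanation; the GRU backbone, layer sizes, and input dimensions are assumptions for the sketch, not the thesis's exact architecture:

```python
import torch
import torch.nn as nn

class AttentiveGRUClassifier(nn.Module):
    """GRU with an additive attention layer over time steps (a sketch).

    Returns the sepsis probability and the attention weights, which can
    be inspected to see which hours of the ICU stay drove the prediction.
    """
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(n_features, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):                      # x: (batch, time, features)
        h, _ = self.rnn(x)                     # (batch, time, hidden)
        weights = torch.softmax(self.attn(h).squeeze(-1), dim=1)  # (batch, time)
        context = torch.sum(weights.unsqueeze(-1) * h, dim=1)     # (batch, hidden)
        prob = torch.sigmoid(self.out(context)).squeeze(-1)
        return prob, weights

model = AttentiveGRUClassifier(n_features=12)
vitals = torch.randn(4, 48, 12)   # 4 stays, 48 hourly steps, 12 vitals/labs
prob, attn = model(vitals)
print(prob.shape, attn.shape)     # torch.Size([4]) torch.Size([4, 48])
```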
759

Volume CT Data Inspection and Deep Learning Based Anomaly Detection for Turbine Blade

Wang, Kan January 2017 (has links)
No description available.
760

The effects of grain size on the strength of magnesite aggregates deforming by low temperature plasticity and diffusion creep

McDaniel, Caleb Alan 26 July 2018 (has links)
No description available.
