
A Deep-learning based Approach for Foot Placement Prediction

Lee, Sung-Wook 24 May 2023 (has links)
Foot placement prediction can be important for exoskeleton and prosthesis controllers, human-robot interaction, or body-worn systems to prevent slips or trips. Previous studies investigating foot placement prediction have been limited to predicting foot placement during the swing phase, and do not fully consider contextual information such as the preceding step or the stance phase before push-off. In this study, a deep learning-based foot placement prediction approach was proposed, where the deep learning models were designed to sequentially process data from three IMU sensors mounted on the pelvis and feet. The raw sensor data are pre-processed to generate multi-variable time-series data for training two deep learning models, where the first model estimates the gait progression and the second model subsequently predicts the next foot placement. The ground truth gait phase data and foot placement data are acquired from a motion capture system. Ten healthy subjects were invited to walk naturally at different speeds on a treadmill. In cross-subject learning, the trained models had a mean distance error of 5.93 cm for foot placement prediction. In single-subject learning, the prediction accuracy improved with additional training data, and a mean distance error of 2.60 cm was achieved by fine-tuning the cross-subject validated models with the target subject's data. Even for predictions made between 25% and 81% of the gait cycle, mean distance errors were only 6.99 cm and 3.22 cm for cross-subject learning and single-subject learning, respectively. / Master of Science / This study proposes a new approach for predicting where a person's foot will land during walking, which could be useful in controlling robots and wearable devices that work with humans, preventing events such as slips and falls, and allowing for smoother human-robot interaction. Although foot placement prediction has great potential in various domains, current work in this area is limited in terms of practicality and accuracy.
The proposed approach uses data from inertial sensors attached to the pelvis and feet, and two deep learning models are trained to estimate the person's walking pattern and predict their next foot placement. The approach was tested on ten healthy individuals walking at different speeds on a treadmill, and achieved state-of-the-art results. The results suggest that this approach could be a promising method when sufficient data from multiple people are available.
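The two-stage design described above, where windowed IMU data feed a gait-progression estimator whose output then conditions a foot-placement predictor, can be sketched in a few lines of Python. This is a minimal illustration: the window size, stride, and both stand-in models are assumptions for demonstration, not the thesis's trained networks.

```python
import numpy as np

def sliding_windows(signal, width, stride):
    """Split a multi-channel IMU time series of shape (T, C) into overlapping windows."""
    starts = range(0, signal.shape[0] - width + 1, stride)
    return np.stack([signal[s:s + width] for s in starts])

# Stand-ins for the two trained deep models: the first maps an IMU window to a
# gait-progression estimate in [0, 1]; the second maps the window plus that
# estimate to a 2-D foot-placement offset. Both would be learned in practice.
def estimate_gait_phase(window):
    return float(np.clip(window.mean(), 0.0, 1.0))

def predict_placement(window, phase):
    return np.array([phase * 0.6, window.std() * 0.1])  # illustrative output only

T, C = 200, 9  # 200 time samples, 3 IMUs x 3 axes
imu = np.abs(np.random.default_rng(0).standard_normal((T, C))) * 0.1
windows = sliding_windows(imu, width=50, stride=25)

predictions = []
for w in windows:
    phase = estimate_gait_phase(w)                   # stage 1: gait progression
    predictions.append(predict_placement(w, phase))  # stage 2: next foot placement
predictions = np.array(predictions)
print(windows.shape, predictions.shape)
```

Chaining the models this way lets the placement predictor exploit the contextual gait-phase information that swing-phase-only approaches discard.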

Parkinson's Disease Automated Hand Tremor Analysis from Spiral Images

DeSipio, Rebecca E. 05 1900 (has links)
Parkinson’s Disease is a neurological degenerative disease affecting more than six million people worldwide. It is a progressive disease, impacting a person’s movements and thought processes. In recent years, computer vision and machine learning researchers have been developing techniques to aid in the diagnosis. This thesis is motivated by the exploration of hand tremor symptoms in Parkinson’s patients from the Archimedean Spiral test, a paper-and-pencil test used to evaluate hand tremors. This work presents a novel Fourier Domain analysis technique that transforms the pencil content of hand-drawn spiral images into frequency features. Our technique is applied to an image dataset consisting of spirals drawn by healthy individuals and people with Parkinson’s Disease. The Fourier Domain analysis technique achieves 81.5% accuracy predicting images drawn by someone with Parkinson’s, a result 6% higher than previous methods. We compared this method against the results using extracted features from the ResNet-50 and VGG16 pre-trained deep network models. The VGG16 extracted features achieve 95.4% accuracy classifying images drawn by people with Parkinson’s Disease. The extracted features of both methods were also used to develop a tremor severity rating system which scores the spiral images on a scale from 0 (no tremor) to 1 (severe tremor). The results show correlation with the Unified Parkinson’s Disease Rating Scale (MDS-UPDRS) developed by the International Parkinson and Movement Disorder Society. These results can be useful for aiding in early detection of tremors, the medical treatment process, and symptom tracking to monitor the progression of Parkinson’s Disease. / M.S. / Parkinson’s Disease is a neurological degenerative disease affecting more than six million people worldwide. It is a progressive disease, impacting a person’s movements and thought processes.
In recent years, computer vision and machine learning researchers have been developing techniques to aid in the diagnosis. This thesis is motivated by the exploration of hand tremor symptoms in Parkinson’s patients from the Archimedean Spiral test, a paper-and-pencil test used to evaluate hand tremors. This work presents a novel spiral analysis technique that converts the pencil content of hand-drawn spirals into numeric values, called features. The features measure spiral smoothness. Our technique is applied to an image dataset consisting of spirals drawn by healthy individuals and individuals with Parkinson’s Disease. The spiral analysis technique achieves 81.5% accuracy predicting images drawn by someone with Parkinson’s. We compared this method against the results using extracted features from pre-trained deep network models. The VGG16 pre-trained model extracted features achieve 95.4% accuracy classifying images drawn by people with Parkinson’s Disease. The extracted features of both methods were also used to develop a tremor severity rating system which scores the spiral images on a scale from 0 (no tremor) to 1 (severe tremor). The results show a similar trend to the tremor evaluations rated using the Unified Parkinson’s Disease Rating Scale (MDS-UPDRS) developed by the International Parkinson and Movement Disorder Society. These results can be useful for aiding in early detection of tremors, the medical treatment process, and symptom tracking to monitor the progression of Parkinson’s Disease.
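The key intuition behind the Fourier Domain technique, that tremor appears as high-frequency spectral energy once the smooth spiral trend is removed, can be demonstrated on synthetic drawings. The Archimedean model, tremor frequency, noise level, and cutoff below are illustrative assumptions, not the thesis's exact feature definition.

```python
import numpy as np

def high_freq_energy_ratio(tremor_amp, n_turns=5, n_points=2048, seed=0):
    """Synthesize an Archimedean spiral r = a*theta, optionally with a
    tremor-like oscillation, and return the fraction of spectral energy
    above an arbitrary high-frequency cutoff."""
    rng = np.random.default_rng(seed)
    theta = np.linspace(0, n_turns * 2 * np.pi, n_points)
    r = 0.5 * theta + tremor_amp * np.sin(40 * theta) + 0.01 * rng.standard_normal(n_points)
    detrended = r - 0.5 * theta            # remove the smooth spiral trend
    spectrum = np.abs(np.fft.rfft(detrended))
    cutoff = len(spectrum) // 8
    return spectrum[cutoff:].sum() / spectrum.sum()

steady = high_freq_energy_ratio(tremor_amp=0.0)   # a smooth, healthy drawing
shaky = high_freq_energy_ratio(tremor_amp=0.3)    # a tremor-affected drawing
print(round(steady, 3), round(shaky, 3))
```

A classifier then operates on such frequency features rather than on raw pixels.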

Accelerating Conceptual Design Analysis of Marine Vehicles through Deep Learning

Jones, Matthew Cecil 02 May 2019 (has links)
Evaluation of the flow field imparted by a marine vehicle reveals the underlying efficiency and performance. However, the relationship between precise design features and their impact on the flow field is not well characterized. The goal of this work is first, to investigate the thermally-stratified near field of a self-propelled marine vehicle to identify the significance of propulsion and hull-form design decisions, and second, to develop a functional mapping between an arbitrary vehicle design and its associated flow field to accelerate the design analysis process. The unsteady Reynolds-Averaged Navier-Stokes equations are solved to compute near-field wake profiles, showing good agreement with experimental data and providing a balance between simulation fidelity and numerical cost, given the database of cases considered. Machine learning through convolutional networks is employed to discover the relationship between vehicle geometries and their associated flow fields with two distinct deep-learning networks. The first network directly maps explicitly-specified geometric design parameters to their corresponding flow fields. The second network considers the vehicle geometries themselves as tensors of geometric volume fractions to implicitly learn the underlying parameter space. Once trained, both networks effectively generate realistic flow fields, accelerating the design analysis from a process that takes days to one that takes a fraction of a second. The implicit-parameter network successfully learns the underlying parameter space for geometries within the scope of the training data, showing comparable performance to the explicit-parameter network. With additions to the size and variability of the training database, this network has the potential to abstractly generalize the design space for arbitrary geometric inputs, even those beyond the scope of the training data.
/ Doctor of Philosophy / Evaluation of the flow field of a marine vehicle reveals the underlying performance; however, the exact relationship between design features and their impact on the flow field is not well established. The goal of this work is first, to investigate the flow surrounding a self-propelled marine vehicle to identify the significance of various design decisions, and second, to develop a functional relationship between an arbitrary vehicle design and its flow field, thereby accelerating the design analysis process. Near-field wake profiles are computed through simulation, showing good agreement with experimental data. Machine learning is employed to discover the relationship between vehicle geometries and their associated flow fields with two distinct approaches. The first approach directly maps explicitly-specified geometric design parameters to their corresponding flow fields. The second approach considers the vehicle geometries themselves to implicitly learn the underlying relationships. Once trained, both approaches generate a realistic flow field corresponding to a user-provided vehicle geometry, accelerating the design analysis from a multi-day process to one that takes a fraction of a second. The implicit-parameter approach successfully learns from the underlying geometric features, showing comparable performance to the explicit-parameter approach. With a larger and more diverse training database, this network has the potential to abstractly learn the design space relationships for arbitrary marine vehicle geometries, even those beyond the scope of the training database.
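The implicit-parameter representation, a tensor of geometric volume fractions pushed through convolutions to produce a field, can be sketched with a single hand-written convolution standing in for the trained deep network. The grid size, circular cross-section, and edge-detecting kernel are assumptions for illustration.

```python
import numpy as np

def volume_fractions(radius, grid=32):
    """Encode a hull cross-section as a tensor of geometric volume fractions:
    1 inside the body, 0 outside (fractional values would appear on the
    boundary in a finer discretization)."""
    ys, xs = np.mgrid[0:grid, 0:grid]
    c = (grid - 1) / 2.0
    return ((xs - c) ** 2 + (ys - c) ** 2 <= radius ** 2).astype(float)

def surrogate_flow(geometry, kernel):
    """A single 3x3 convolution as a stand-in for the trained deep network
    that maps a geometry tensor to its flow field."""
    g = np.pad(geometry, 1)
    out = np.zeros_like(geometry)
    for i in range(geometry.shape[0]):
        for j in range(geometry.shape[1]):
            out[i, j] = (g[i:i + 3, j:j + 3] * kernel).sum()
    return out

geom = volume_fractions(radius=8.0)
edge_kernel = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], float)  # responds at the hull boundary
field = surrogate_flow(geom, edge_kernel)
print(field.shape, abs(field).max())
```

A real network stacks many such learned layers, but the data flow is the same: geometry tensor in, field tensor out, with no explicit parameter list required.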

End-To-End Text Detection Using Deep Learning

Ibrahim, Ahmed Sobhy Elnady 19 December 2017 (has links)
Text detection in the wild is the problem of locating text in images of everyday scenes. It is a challenging problem due to the complexity of everyday scenes. This problem possesses a great importance for many trending applications, such as self-driving cars. Previous research in text detection has been dominated by multi-stage sequential approaches which suffer from many limitations including error propagation from one stage to the next. Another line of work is the use of deep learning techniques. Some of the deep methods used for text detection are box detection models and fully convolutional models. Box detection models suffer from the nature of the annotations, which may be too coarse to provide detailed supervision. Fully convolutional models learn to generate pixel-wise maps that represent the location of text instances in the input image. These models suffer from the inability to create accurate word-level annotations without heavy post-processing. To overcome the aforementioned problems, we propose a novel end-to-end system based on a mix of novel deep learning techniques. The proposed system consists of an attention model, based on a new deep architecture proposed in this dissertation, followed by a deep network based on Faster-RCNN. The attention model produces a high-resolution map that indicates likely locations of text instances. A novel aspect of the system is an early fusion step that merges the attention map directly with the input image prior to word-box prediction. This approach suppresses but does not eliminate contextual information from consideration. Progressively larger models were trained in three separate phases. The resulting system has demonstrated an ability to detect text under difficult conditions related to illumination, resolution, and legibility. The system has exceeded the state of the art on the ICDAR 2013 and COCO-Text benchmarks with F-measure values of 0.875 and 0.533, respectively. / Ph. D.
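The early-fusion step, merging the attention map directly with the input image before word-box prediction, can in its simplest form be channel concatenation. This sketch assumes that form; the dissertation's actual merge operation may differ.

```python
import numpy as np

def early_fusion(image, attention):
    """Merge a text-attention map with the RGB input before detection by
    appending it as a fourth channel (one simple realization of early fusion)."""
    assert image.shape[:2] == attention.shape
    return np.concatenate([image, attention[..., None]], axis=-1)

H, W = 64, 64
image = np.random.default_rng(1).random((H, W, 3))
attention = np.zeros((H, W))
attention[20:40, 10:50] = 1.0       # high-resolution map of likely text locations
fused = early_fusion(image, attention)
print(fused.shape)                  # the downstream detector now sees RGB + attention
```

Because the attention channel accompanies rather than masks the image, contextual information is suppressed but not eliminated, as described above.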

CloudCV: Deep Learning and Computer Vision on the Cloud

Agrawal, Harsh 20 June 2016 (has links)
We are witnessing a proliferation of massive visual data. Visual content is arguably the fastest growing data on the web. Photo-sharing websites like Flickr and Facebook now host more than 6 and 90 billion photos, respectively. Unfortunately, scaling existing computer vision algorithms to large datasets leaves researchers repeatedly solving the same algorithmic and infrastructural problems. Designing and implementing efficient and provably correct computer vision algorithms is extremely challenging. Researchers must repeatedly solve the same low-level problems: building and maintaining a cluster of machines, formulating each component of the computer vision pipeline, designing new deep learning layers, writing custom hardware wrappers, etc. This thesis introduces CloudCV, an ambitious system that contains algorithms for end-to-end processing of visual content. The goal of the project is to democratize computer vision; one should not have to be a computer vision, big data, and deep learning expert to have access to state-of-the-art distributed computer vision algorithms. We provide researchers, students, and developers access to state-of-the-art distributed computer vision and deep learning algorithms as a cloud service through a web interface and APIs. / Master of Science

Deep Learning Neural Network-based Sinogram Interpolation for Sparse-View CT Reconstruction

Vekhande, Swapnil Sudhir 14 June 2019 (has links)
Computed Tomography (CT) finds applications across domains like medical diagnosis, security screening, and scientific research. In medical imaging, CT allows physicians to diagnose injuries and disease more quickly and accurately than other imaging techniques. However, CT is one of the most significant contributors of radiation dose to the general population, and the required radiation dose for scanning could lead to cancer. On the other hand, too low a radiation dose could sacrifice image quality, causing misdiagnosis. To reduce the radiation dose, sparse-view CT, which involves capturing a smaller number of projections, becomes a promising alternative. However, the image reconstructed from linearly interpolated views possesses severe artifacts. Recently, deep learning-based methods have increasingly been used to interpolate the missing data by learning the nature of the image formation process. The current methods are promising but operate mostly in the image domain, presumably due to a lack of projection data. Another limitation is the use of simulated data with less sparsity (up to 75%). This research aims to interpolate the missing sparse-view CT data in the sinogram domain using deep learning. To this end, a residual U-Net architecture has been trained with patch-wise projection data to minimize the Euclidean distance between the ground truth and the interpolated sinogram. The model can generate the missing projection data even at high sparsity. The results show improvements in SSIM and RMSE of 14% and 52%, respectively, with respect to linear interpolation-based methods. Thus, experimental sparse-view CT data with 90% sparsity has been successfully interpolated while improving CT image quality. / Master of Science / Computed Tomography is a commonly used imaging technique due to its remarkable ability to visualize internal organs, bones, soft tissues, and blood vessels. It involves exposing the subject to X-ray radiation, which could lead to cancer.
On the other hand, the radiation dose is critical for the image quality and subsequent diagnosis. Thus, image reconstruction using only a small number of projections is an open research problem. Deep learning techniques have already revolutionized various computer vision applications. Here, we have used a method that fills in missing CT projection data, even when it is highly sparse. The results show that the deep learning-based method outperforms standard linear interpolation-based methods while improving the image quality.
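The baseline that the residual U-Net is compared against, linear interpolation of the missing projection angles, can be reproduced on a synthetic sinogram. The single-point phantom and the every-tenth-view (90% sparsity) pattern below are illustrative assumptions.

```python
import numpy as np

n_angles, n_dets = 180, 64
angles = np.linspace(0, np.pi, n_angles, endpoint=False)
dets = np.linspace(-1, 1, n_dets)
# A smooth synthetic sinogram of one off-centre point object: a Gaussian
# whose detector position traces a cosine as the gantry angle changes.
sino = np.exp(-((dets[None, :] - 0.4 * np.cos(angles)[:, None]) ** 2) / 0.02)

keep = 10                          # 90% sparsity: keep every 10th view
known = np.arange(0, n_angles, keep)
interp = np.empty_like(sino)
for d in range(n_dets):            # fill missing views per detector channel
    interp[:, d] = np.interp(np.arange(n_angles), known, sino[known, d])

rmse = np.sqrt(np.mean((interp - sino) ** 2))
print(round(rmse, 4))
```

The network in the thesis is instead trained on patch-wise projection data to predict the same missing rows, reducing RMSE well below this linear baseline.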

Revealing the Determinants of Acoustic Aesthetic Judgment Through Algorithmic

Jenkins, Spencer Daniel 03 July 2019 (has links)
This project represents an important first step in determining the fundamental aesthetically relevant features of sound. Though there has been much effort in revealing the features learned by a deep neural network (DNN) trained on visual data, little effort has been made to apply these techniques to networks trained on audio data. Importantly, these efforts in the audio domain often impose strong biases about relevant features (e.g., musical structure). In this project, a DNN is trained to mimic the acoustic aesthetic judgment of a professional composer. A unique corpus of sounds and corresponding professional aesthetic judgments is leveraged for this purpose. By applying a variation of Google's "DeepDream" algorithm to this trained DNN, and limiting the assumptions introduced, we can begin to listen to and examine the features of sound fundamental for aesthetic judgment. / Master of Science / The question of what makes a sound aesthetically “interesting” is of great importance to many, including biologists, philosophers of aesthetics, and musicians. This project serves as an important first step in determining the fundamental aesthetically relevant features of sound. First, a computer is trained to mimic the aesthetic judgments of a professional composer; if the composer would deem a sound “interesting,” then so would the computer. During this training, the computer learns for itself what features of sound are important for this classification. Then, a variation of Google’s “DeepDream” algorithm is applied to allow these learned features to be heard. By carefully considering the manner in which the computer is trained, this algorithmic “dreaming” allows us to begin to hear aesthetically salient features of sound.
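At its core, the "dreaming" procedure is activation maximization: gradient ascent on the input signal to increase a trained unit's response. A toy one-layer stand-in for the composer-mimicking DNN makes the mechanics concrete; the linear-plus-tanh model, input size, and step size are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
w = rng.standard_normal(n)   # stand-in weights for a learned "interesting" unit

def score(x):
    """Activation of the aesthetic-judgment unit for input x. A stand-in for
    the trained DNN's output; the real network is deep and nonlinear."""
    return float(w @ np.tanh(x))

x = rng.standard_normal(n) * 0.01      # start from a near-silent input
s0 = score(x)
for _ in range(200):                   # gradient ascent on the *input*, DeepDream-style
    grad = w * (1 - np.tanh(x) ** 2)   # d(score)/dx for this toy model
    x += 0.1 * grad
s1 = score(x)
print(round(s0, 2), round(s1, 2))
```

For a deep network the input gradient is obtained by backpropagation rather than by hand, but the loop is the same: the input is nudged until the unit it excites is strongly active, and the result can then be listened to.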

Neural network modelling of RC deep beam shear strength

Yang, Keun-Hyeok, Ashour, Ashraf, Song, J-K., Lee, E-T. January 2008 (has links)
A 9 x 18 x 1 feed-forward neural network (NN) model trained using a resilient back-propagation algorithm and early stopping technique is constructed to predict the shear strength of deep reinforced concrete beams. The input layer covering geometrical and material properties of deep beams has nine neurons, and the corresponding output is the shear strength. Training, validation and testing of the developed neural network have been achieved using a comprehensive database compiled from 362 simple and 71 continuous deep beam specimens. The shear strength predictions of deep beams obtained from the developed NN are in better agreement with test results than those determined from strut-and-tie models. The mean and standard deviation of the ratio between predicted capacities using the NN and measured shear capacities are 1.028 and 0.154, respectively, for simple deep beams, and 1.0 and 0.122, respectively, for continuous deep beams. In addition, the trends ascertained from parametric study using the developed NN have a consistent agreement with those observed in other experimental and analytical investigations.
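The 9 x 18 x 1 architecture translates directly into a forward pass: nine inputs for the beam's geometrical and material properties, one hidden layer of 18 neurons, and a single shear-strength output. The weights below are random stand-ins for illustration; the actual model was trained with resilient back-propagation and early stopping on the 433-specimen database described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random stand-in weights with the paper's 9 x 18 x 1 shape.
W1, b1 = rng.standard_normal((9, 18)) * 0.3, np.zeros(18)
W2, b2 = rng.standard_normal((18, 1)) * 0.3, np.zeros(1)

def predict_shear(x):
    """Forward pass of the 9-18-1 feed-forward network."""
    h = np.tanh(x @ W1 + b1)   # hidden layer, 18 neurons
    return (h @ W2 + b2)[0]    # predicted shear strength (normalized units)

beam = rng.random(9)           # one normalized 9-feature beam specimen
print(predict_shear(beam))
```

Training would adjust W1, b1, W2, b2 to minimize prediction error on the training subset while early stopping monitors a validation subset.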

Shear capacity of reinforced concrete beams using neural network

Yang, Keun-Hyeok, Ashour, Ashraf, Song, J-K. January 2007 (has links)
Optimum multi-layered feed-forward neural network (NN) models using a resilient back-propagation algorithm and early stopping technique are built to predict the shear capacity of reinforced concrete deep and slender beams. The input layer neurons represent geometrical and material properties of reinforced concrete beams and the output layer produces the beam shear capacity. Training, validation and testing of the developed neural networks have been achieved using 50%, 25%, and 25%, respectively, of a comprehensive database compiled from 631 deep and 549 slender beam specimens. The predictions obtained from the developed neural network models are in much better agreement with test results than those determined from shear provisions of different codes, such as KBCS, ACI 318-05, and EC2. The mean and standard deviation of the ratio between shear capacities predicted using the neural network models and measured shear capacities are 1.02 and 0.18, respectively, for deep beams, and 1.04 and 0.17, respectively, for slender beams. In addition, the influence of different parameters on the shear capacity of reinforced concrete beams predicted by the developed neural network shows consistent agreement with that observed experimentally.
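The 50%/25%/25% partition of the 1,180-specimen database (631 deep plus 549 slender beams) can be sketched as a shuffled index split; the seed is arbitrary.

```python
import numpy as np

def split_database(n, seed=0):
    """Shuffle specimen indices and split them 50/25/25 into training,
    validation, and test sets."""
    idx = np.random.default_rng(seed).permutation(n)
    a, b = n // 2, n // 2 + n // 4
    return idx[:a], idx[a:b], idx[b:]

train, val, test = split_database(631 + 549)  # deep + slender beam specimens
print(len(train), len(val), len(test))
```

Early stopping monitors error on the validation subset during training, while the test subset is held out entirely for the final accuracy comparison against the code provisions.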

Accuracy of Open MRI for Guiding Injection of the Equine Deep Digital Flexor Tendon within the Hoof

Groom, Lauren M. 22 May 2017 (has links)
Lesions of the distal deep digital flexor tendon (DDFT) are frequently diagnosed using magnetic resonance imaging (MRI) in horses with foot pain. The prognosis for horses with DDFT lesions to return to previous levels of performance is poor. Treatment options are limited, consisting of conservative therapy, desmotomy of the accessory ligament of the deep digital flexor tendon, injection of the digital sheath or navicular bursa, navicular bursoscopy, or intralesional injection. Intralesional injection of biologic therapeutics shows promise in tendon healing, with an increasing number of experimental and clinical studies finding positive results. However, accurate injection of DDFT lesions within the hoof is difficult and requires general anesthesia. The Hallmarq open, low-field MRI unit was used to develop an MRI-guided technique to inject structures within the hoof. This procedure has been previously reported for injecting the collateral ligaments of the distal interphalangeal joint. Four clinical cases of deep digital flexor tendinopathy have been treated with MRI-guided injections using a similar technique. The aim of this study was to evaluate the accuracy of a technique for injection of the deep digital flexor tendon within the hoof using MRI guidance, which could be performed in standing patients. We hypothesized that injection of the DDFT within the hoof could be accurately guided using open low-field MRI to target either the lateral or medial lobe at a specific location. Ten cadaver limbs were positioned in an open, low-field MRI unit to mimic a standing horse. Each DDFT lobe was assigned to have a proximal (adjacent to the proximal aspect of the navicular bursa) or distal (adjacent to the navicular bone) injection. A titanium needle was inserted into each tendon lobe, guided by T1-weighted transverse images acquired simultaneously during injection. Oil-based colored dye was injected as a marker.
Post-injection MRI and gross sections were assessed by three blinded investigators experienced in equine MRI. The success of injection as evaluated on gross section was 85% (70% proximal, 100% distal). The success of injection as evaluated by MRI was 65% (60% proximal, 70% distal). There was no significant difference between the success of injecting the medial versus lateral lobe. The major limitation of this study was the use of cadaver limbs with normal tendons. The authors concluded that injection of the DDFT within the hoof is possible using MRI guidance. Future work should be focused on using the technique in live horses with tendon lesions, and more clinical studies are needed to determine the most efficacious biologic therapeutic for tendon healing. / Master of Science
