131

The correlation between cervical proprioception and cranio-cervical flexion tests in patients with whiplash-associated disorders

Snyckers, Merle 03 March 2008 (has links)
ABSTRACT: Whiplash-associated disorders are a common occurrence. Physiotherapy rehabilitation of such disorders includes, among other interventions, improving the recruitment ability of the deep cervical flexor muscles. Cervical proprioception, which has recently gained attention, is not commonly addressed. Evidence points to a possible link between cervical proprioception and deep cervical flexor recruitment ability. This study aimed to determine whether such a correlation exists; this is significant because it would highlight the role that recruitment training of the deep cervical flexors plays in cervical proprioception. A correlation study design was employed involving 29 patients with whiplash-associated disorders. They were tested on their ability to perform the cranio-cervical flexion test and Revel's test for proprioception. Linear regression was used to interpret the results. The study concluded that a correlation exists between the ability to perform the cranio-cervical flexion test and cervical proprioception.
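As a rough illustration of the analysis this abstract describes, the sketch below computes a Pearson correlation and a simple linear regression between two paired test scores using NumPy. All names and values are hypothetical; the thesis data is not reproduced here.

```python
import numpy as np

# Hypothetical paired scores (the thesis data is not public):
# cranio-cervical flexion test performance vs. Revel's test repositioning error.
ccft_score = np.array([22, 24, 26, 22, 28, 24, 30, 26, 24, 28], dtype=float)
revel_error = np.array([7.1, 6.4, 5.2, 7.8, 4.6, 6.0, 3.9, 5.5, 6.2, 4.1])

# Pearson correlation coefficient between the two measures.
r = np.corrcoef(ccft_score, revel_error)[0, 1]

# Simple linear regression: revel_error ~ slope * ccft_score + intercept.
slope, intercept = np.polyfit(ccft_score, revel_error, deg=1)

print(f"r = {r:.3f}, slope = {slope:.3f}, intercept = {intercept:.3f}")
```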
132

A Deep Learning Approach To Target Recognition In Side-Scan Sonar Imagery

Unknown Date (has links)
Automatic target recognition in autonomous underwater vehicles has been a daunting task, largely due to the noisy nature of sonar imagery and the lack of publicly available sonar data. Machine learning techniques have made great strides in tackling this problem, although little research has been done on deep learning techniques for side-scan sonar imagery. Here, a state-of-the-art deep learning object detection method is adapted for side-scan sonar imagery, with results supporting a simple yet robust method for detecting objects and anomalies along the seabed. A systematic transfer learning procedure was employed on a pre-trained convolutional neural network in order to learn the pixel-intensity-based features of seafloor anomalies in sonar images. Using this process, newly trained convolutional neural network models were produced from relatively small training datasets and tested, showing reasonably accurate anomaly detection and classification with few to no false alarms. / Includes bibliography. / Thesis (M.S.)--Florida Atlantic University, 2018. / FAU Electronic Theses and Dissertations Collection
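The thesis does not name the detector or backbone it adapts, so the sketch below shows only the general transfer-learning pattern the abstract describes: start from an ImageNet-pretrained network, freeze the feature extractor, and retrain a small head on sonar tiles. The network choice, class labels, and sizes are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

# A stand-in pretrained backbone; the thesis does not specify its network.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Freeze the pretrained feature extractor so only the new head is trained.
for param in model.parameters():
    param.requires_grad = False

# Replace the classifier head for a small set of seafloor-anomaly classes
# (hypothetical labels, e.g. background / debris / mine-like object).
num_classes = 3
model.fc = nn.Linear(model.fc.in_features, num_classes)

# Only the new head's parameters are optimized.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)

# One training step on a dummy batch of sonar tiles replicated to 3 channels.
x = torch.randn(8, 3, 224, 224)
y = torch.randint(0, num_classes, (8,))
optimizer.zero_grad()
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
optimizer.step()
```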
133

IMPROVING THE REALISM OF SYNTHETIC IMAGES THROUGH THE MIXTURE OF ADVERSARIAL AND PERCEPTUAL LOSSES

Atapattu, Charith Nisanka 01 December 2018 (has links)
This research describes a novel method to generate synthetic images with improved realism while preserving annotation information and the eye gaze direction. Furthermore, it describes how the perceptual loss can be utilized alongside basic features and techniques from adversarial networks for better results.
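The abstract describes mixing an adversarial loss with a perceptual loss so refined images look real while preserving content such as gaze direction. A minimal sketch of one common way to combine the two terms follows; the VGG feature extractor, loss weights, and tensor shapes are assumptions, not the thesis's exact configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

# Frozen VGG features as the perceptual feature extractor (a common choice;
# the thesis may use a different network or layers).
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features[:16].eval()
for p in vgg.parameters():
    p.requires_grad = False

bce = nn.BCEWithLogitsLoss()
mse = nn.MSELoss()

def refiner_loss(refined, synthetic, disc_logits, lam_adv=0.01, lam_perc=1.0):
    """Mixture of adversarial and perceptual losses; weights are hypothetical."""
    # Adversarial term: push refined images toward the real-image distribution.
    adv = bce(disc_logits, torch.ones_like(disc_logits))
    # Perceptual term: keep the refined image close to its synthetic source in
    # feature space, preserving annotations such as eye gaze direction.
    perc = mse(vgg(refined), vgg(synthetic))
    return lam_adv * adv + lam_perc * perc

refined = torch.rand(4, 3, 224, 224)     # refiner output (dummy)
synthetic = torch.rand(4, 3, 224, 224)   # original synthetic input (dummy)
disc_logits = torch.randn(4, 1)          # discriminator output (dummy)
print(refiner_loss(refined, synthetic, disc_logits).item())
```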
134

Computer Vision System-On-Chip Designs for Intelligent Vehicles

Zhou, Yuteng 24 April 2018 (has links)
Intelligent vehicle technologies are growing rapidly; they can enhance road safety, improve transport efficiency, and aid driver operations through sensors and intelligence. The advanced driver assistance system (ADAS) is a common platform for intelligent vehicle technologies. Many sensors, such as LiDAR, radar, and cameras, have been deployed on intelligent vehicles. Among these, optical cameras are the most widely used due to their low cost and easy installation. However, most computer vision algorithms are complicated and computationally slow, making them difficult to deploy on power-constrained systems. This dissertation investigates several mainstream ADAS applications and proposes corresponding efficient digital circuit implementations. It presents three software/hardware algorithm divisions for three ADAS applications: lane detection, traffic sign classification, and traffic light detection. By using an FPGA to offload critical parts of each algorithm, the entire computer vision system is able to run in real time while maintaining low power consumption and a high detection rate. Keeping up with the advent of deep learning in computer vision, we also present two deep-learning-based hardware implementations on application-specific integrated circuits (ASICs) to achieve even lower power consumption and higher accuracy.

The real-time lane detection system is implemented on the Xilinx Zynq platform, which has a dual-core ARM processor and FPGA fabric, integrating the software programmability of an ARM processor with the hardware programmability of an FPGA. For the lane detection task, the FPGA handles the majority of the work: region-of-interest extraction, edge detection, image binarization, and the Hough transform. The ARM processor then takes in the Hough transform results and highlights lanes using the Hough peaks algorithm. The entire system processes a 1080p video stream at a constant 69.4 frames per second, achieving real-time capability.

An efficient system-on-chip (SoC) design that classifies up to 48 traffic signs in real time is also presented. The traditional histogram of oriented gradients (HoG) and support vector machine (SVM) prove very effective for traffic sign classification, with an average accuracy of 93.77%. The biggest challenge comes from the low execution efficiency of HoG on embedded processors. By dividing the HoG algorithm into three fully pipelined stages and leveraging extra on-chip memory to store intermediate results, we achieved a throughput of 115.7 frames per second at 1080p resolution. The proposed generic HoG hardware implementation can also be used as an individual IP core by other computer vision systems.

A real-time traffic light detection system demonstrates an efficient hardware implementation of traditional grass-fire blob detection. The traditional grass-fire method iterates over the input image multiple times to compute connected blobs. In our digital circuit, five extra on-chip block memories store intermediate results, so all connected-blob information is obtained in a single pass over the image. The proposed hardware-friendly blob detection runs at 72.4 frames per second on 1080p video input. Applying HoG + SVM as the feature extractor and classifier, we obtain a 92.11% recall rate and 99.29% precision rate on red lights, and a 94.44% recall rate and 98.27% precision rate on green lights.

Convolutional neural networks (CNNs) are revolutionizing computer vision through learnable layer-by-layer feature extraction, but they are usually slow to train and slow to execute. In this dissertation, we study the implementation of the principal component analysis network (PCANet), which strikes a balance between algorithm robustness and computational complexity. Compared to a regular CNN, PCANet needs only one training iteration and typically has at most a few tens of convolutions in a single layer. Compared to hand-crafted feature extraction methods, PCANet better reflects the variance in the training dataset and can better adapt to difficult conditions. PCANet achieves accuracy rates of 96.8% and 93.1% on road marking detection and traffic light detection, respectively. Implemented in Synopsys 32nm process technology, the proposed chip can classify 724,743 32-by-32 image candidates per second while consuming only 0.5 W.

Finally, the binary neural network (BNN) is adopted as a potential detector for intelligent vehicles. The BNN constrains all activations and weights to +1 or -1. Compared to a CNN with the same network configuration, the BNN achieves 50 times better resource usage with only a 1%-2% accuracy loss. Taking car detection and pedestrian detection as examples, the BNN achieves an average accuracy rate of over 95%. Furthermore, a BNN accelerator implemented in Synopsys 32nm process technology is presented; its elastic architecture enables it to process any number of convolutional layers with high throughput. The BNN accelerator consumes only 0.6 W and does not rely on external memory for storage.
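The dissertation implements the HoG + SVM pipeline in hardware; as a software analogue, the sketch below runs the same two stages with scikit-image and scikit-learn on dummy data. Image size, class count, and HoG parameters here are illustrative assumptions.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Dummy 32x32 grayscale patches standing in for traffic-sign candidates.
num_classes, per_class = 4, 25
images = rng.random((num_classes * per_class, 32, 32))
labels = np.repeat(np.arange(num_classes), per_class)

# HoG descriptor: gradient-orientation histograms over small cells,
# block-normalized -- the feature the dissertation pipelines on-chip.
features = np.array([
    hog(img, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    for img in images
])

# A linear SVM classifies the HoG descriptors.
clf = LinearSVC().fit(features, labels)
print("training accuracy:", clf.score(features, labels))
```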
135

Incorporating Rich Features into Deep Knowledge Tracing

Zhang, Liang 14 April 2017 (has links)
The desire to follow student learning within intelligent tutoring systems in near real time has led to the development of several models that anticipate the correctness of the next item as students work through an assignment. Such models have included Bayesian Knowledge Tracing (BKT), Performance Factors Analysis (PFA), and, more recently with developments in deep learning, Deep Knowledge Tracing (DKT). The DKT model, based on a recurrent neural network, exhibited promising results in [PBH+15]. Thus far, however, the model has only considered the knowledge components of the problems and correctness as input, neglecting the breadth of other features collected by computer-based learning platforms. This work seeks to improve upon the DKT model by incorporating more features at the problem level and student level. With this higher-dimensional input, an adaptation of the original DKT model structure is also proposed, incorporating an autoencoder network layer to convert the input into a low-dimensional feature vector, reducing both the resources and the time needed to train. Experimental results show that our adapted DKT model, which includes more combinations of features, can effectively improve accuracy.
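A minimal sketch of the architecture pattern the abstract describes follows: an autoencoder compresses a rich per-interaction feature vector into a low-dimensional code that feeds a recurrent knowledge-tracing network. All dimensions and layer choices are assumptions, not the thesis's exact model.

```python
import torch
import torch.nn as nn

# Hypothetical dimensions: rich per-interaction features are compressed by an
# autoencoder before feeding the recurrent knowledge-tracing network.
feat_dim, code_dim, hidden_dim = 200, 32, 64

encoder = nn.Sequential(nn.Linear(feat_dim, code_dim), nn.Tanh())
decoder = nn.Linear(code_dim, feat_dim)
rnn = nn.LSTM(input_size=code_dim, hidden_size=hidden_dim, batch_first=True)
head = nn.Linear(hidden_dim, 1)  # predicts P(next answer correct)

# Dummy batch: 8 students, 20 logged interactions each.
x = torch.randn(8, 20, feat_dim)

code = encoder(x)                                       # compress features
recon_loss = nn.functional.mse_loss(decoder(code), x)   # autoencoder objective
out, _ = rnn(code)                                      # knowledge state over time
p_correct = torch.sigmoid(head(out))                    # per-step prediction
print(p_correct.shape, recon_loss.item())
```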
136

Deep Learning Binary Neural Network on an FPGA

Redkar, Shrutika 27 April 2017 (has links)
In recent years, deep neural networks have attracted a great deal of attention in the fields of computer vision and artificial intelligence. A convolutional neural network exploits spatial correlations in an input image by performing convolution operations in local receptive fields. Compared with fully connected neural networks, convolutional neural networks have fewer weights and are faster to train. Much research has been conducted to further reduce the computational complexity and memory requirements of convolutional neural networks, to make them applicable to low-power embedded applications. This thesis focuses on a special class of convolutional neural network with only binary weights and activations, referred to as binary neural networks. Weights and activations for the convolutional and fully connected layers are binarized to take only two values, +1 and -1, so computation and memory requirements are reduced significantly. The proposed binary neural network architecture has been implemented on an FPGA as a real-time, high-speed, low-power computer vision platform, using only on-chip memories. The FPGA implementation is evaluated on the CIFAR-10 benchmark and achieves a processing speed of 332,164 images per second with a classification accuracy of about 86.06%.
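The key saving the abstract mentions comes from replacing multiply-accumulates with bit operations: a dot product of +1/-1 vectors equals 2 * popcount(XNOR) - n. The sketch below verifies this identity in NumPy on hypothetical data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Binarize real-valued weights and activations to +1/-1 with the sign function.
w = np.sign(rng.standard_normal(256))
a = np.sign(rng.standard_normal(256))
w[w == 0] = 1  # guard the (measure-zero) case where sign() returns 0
a[a == 0] = 1

# Reference: the ordinary dot product of the +/-1 vectors.
ref = int(w @ a)

# Bit-level equivalent exploited by BNN hardware: encode -1 as 0,
# XNOR the bit vectors, then popcount.
wb, ab = (w > 0), (a > 0)
matches = np.count_nonzero(~(wb ^ ab))
dot = 2 * matches - len(w)

assert dot == ref
print(ref, dot)
```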
137

Robust Auto-encoders

Zhou, Chong 27 April 2016 (has links)
In this thesis, our aim is to improve deep auto-encoders, an important topic in deep learning which has shown connections to latent feature discovery models in the literature. Our model is inspired by robust principal component analysis: we build an outlier filter on top of a basic deep auto-encoder. With this filter, we split the input data X into two parts, X = L + S, where L can be well reconstructed by a deep auto-encoder and S contains the anomalous parts of the original data X. Filtering out the anomalies increases the robustness of the standard auto-encoder, so we name our model the "Robust Auto-encoder". We also propose a novel solver for the robust auto-encoder which alternately optimizes the reconstruction cost of the deep auto-encoder and the sparsity of the outlier filter in pursuit of the optimal solution. This solver is inspired by the Alternating Direction Method of Multipliers, back-propagation, and the alternating projection method; we demonstrate the convergence properties of this algorithm and its superior performance on standard image recognition tasks. Last but not least, we apply our model to multiple domains, especially cyber-data analysis, where deep models are currently seldom used.
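A minimal sketch of the alternating X = L + S scheme follows, in NumPy. A truncated SVD stands in for the trained deep auto-encoder (an assumption made only to keep the sketch self-contained), and soft-thresholding enforces sparsity on S; the penalty and rank are hypothetical.

```python
import numpy as np

def soft_threshold(M, lam):
    """Proximal step enforcing sparsity on the outlier part S."""
    return np.sign(M) * np.maximum(np.abs(M) - lam, 0.0)

def reconstruct(M, rank=5):
    """Stand-in for the deep auto-encoder: rank-r SVD reconstruction.
    In the thesis, a trained auto-encoder plays this role instead."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5)) @ rng.standard_normal((5, 50))  # clean part
X[rng.random(X.shape) < 0.02] += 10.0                             # sparse outliers

L = np.zeros_like(X)
S = np.zeros_like(X)
for _ in range(20):
    L = reconstruct(X - S)              # fit the reconstructable part
    S = soft_threshold(X - L, lam=1.0)  # absorb anomalies into S

print("fraction of entries flagged as outliers:", float(np.mean(S != 0)))
```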
138

Deep Learning on Attributed Sequences

Zhuang, Zhongfang 02 August 2019 (has links)
Recent research in feature learning has been extended to sequence data, where each instance consists of a sequence of heterogeneous items with a variable length. However, in many real-world applications, the data exists in the form of attributed sequences, each composed of a set of fixed-size attributes and a variable-length sequence with dependencies between them. In the attributed sequence context, feature learning remains challenging due to the dependencies between sequences and their associated attributes. In this dissertation, we focus on analyzing and building deep learning models for four new problems on attributed sequences. First, we propose a framework, called NAS, to produce feature representations of attributed sequences in an unsupervised fashion. NAS is capable of producing task-independent embeddings that can be used in various mining tasks on attributed sequences. Second, we study the problem of deep metric learning on attributed sequences, where the goal is to learn a distance metric based on pairwise user feedback. For this task, we propose a framework, called MLAS, to learn a distance metric that measures the similarity and dissimilarity between attributed sequence feedback pairs. Third, we study the problem of one-shot learning on attributed sequences, which is important for a variety of real-world applications ranging from fraud prevention to network intrusion detection. We design a deep learning framework, OLAS, to tackle this problem. Once OLAS is trained, we can use it to make predictions not only for new data but also for entire previously unseen classes. Lastly, we investigate the problem of attributed sequence classification with an attention model. This is challenging because we must assess the importance of each item in each sequence, considering both the sequence itself and the associated attributes. We propose a framework, called AMAS, to classify attributed sequences using the information from the sequences, the metadata, and the computed attention. Our extensive experiments on real-world datasets demonstrate that the proposed solutions significantly improve the performance of each task over state-of-the-art methods on attributed sequences.
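The abstract does not describe the internals of NAS, MLAS, OLAS, or AMAS, so the sketch below only illustrates the shared underlying pattern: fusing a fixed-size attribute vector with a variable-length sequence into one embedding. Sizes, layers, and the fusion strategy are assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical sizes: each instance pairs a fixed-size attribute vector with a
# variable-length sequence of categorical items.
attr_dim, vocab, item_dim, hidden = 10, 500, 32, 64

attr_net = nn.Sequential(nn.Linear(attr_dim, hidden), nn.ReLU())
item_emb = nn.Embedding(vocab, item_dim)
seq_net = nn.LSTM(item_dim, hidden, batch_first=True)

def encode(attrs, items):
    """Fuse attribute and sequence information into one embedding.
    The attribute encoding initializes the LSTM state -- one simple way
    to model the attribute-sequence dependency."""
    h0 = attr_net(attrs).unsqueeze(0)   # (1, batch, hidden)
    c0 = torch.zeros_like(h0)
    _, (hn, _) = seq_net(item_emb(items), (h0, c0))
    return hn.squeeze(0)                # final hidden state as the embedding

attrs = torch.randn(4, attr_dim)
items = torch.randint(0, vocab, (4, 15))  # batch of length-15 sequences
print(encode(attrs, items).shape)         # torch.Size([4, 64])
```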
139

Volumetric gas usage of the basic-sport scuba diver in water temperatures of 18.3, 22.2, 25.6, and 29.4 degrees Celsius

Wittlieff, Michael J January 2011 (has links)
Digitized by Kansas Correctional Industries
140

Geometry and uncertainty in deep learning for computer vision

Kendall, Alex Guy January 2019 (has links)
Deep learning and convolutional neural networks have become the dominant tools for computer vision. These techniques excel at learning complicated representations from data using supervised learning. In particular, image recognition models now outperform human baselines under constrained settings. However, the science of computer vision aims to build machines which can see, which requires models that extract richer information than recognition from images and video. In general, applying these deep learning models beyond recognition to other problems in computer vision is significantly more challenging. This thesis presents end-to-end deep learning architectures for a number of core computer vision problems: scene understanding, camera pose estimation, stereo vision, and video semantic segmentation. Our models outperform traditional approaches and advance the state of the art on a number of challenging computer vision benchmarks. However, these end-to-end models are often not interpretable and require enormous quantities of training data. To address this, we make two observations: (i) we do not need to learn everything from scratch, because we know a lot about the physical world, and (ii) we cannot know everything from data, so our models should be aware of what they do not know. This thesis explores these ideas using concepts from geometry and uncertainty. Specifically, we show how to improve end-to-end deep learning models by leveraging the underlying geometry of the problem. We explicitly model concepts such as epipolar geometry to learn with unsupervised learning, which improves performance. Secondly, we introduce ideas from probabilistic modelling and Bayesian deep learning to understand uncertainty in computer vision models. We show how to quantify different types of uncertainty, improving safety for real-world applications.
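The thesis develops its own Bayesian deep learning estimators; as a simple illustration of quantifying what a model does not know, the sketch below uses Monte Carlo dropout, one standard approximation: keep dropout active at test time, run several stochastic forward passes, and read the spread as model uncertainty. The network and all sizes are hypothetical.

```python
import torch
import torch.nn as nn

# A small regression network with dropout (hypothetical architecture).
model = nn.Sequential(
    nn.Linear(4, 64), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(64, 1),
)

def mc_dropout_predict(x, T=50):
    """Run T stochastic forward passes with dropout active and summarize."""
    model.train()  # keep dropout enabled at test time
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(T)])
    mean = samples.mean(dim=0)
    var = samples.var(dim=0)  # spread across passes ~ epistemic uncertainty
    return mean, var

x = torch.randn(8, 4)
mean, var = mc_dropout_predict(x)
print(mean.shape, var.shape)
```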
