161

Object Identification Using Mobile Device for Visually Impaired Person

Akarapu, Deepika 09 August 2021 (has links)
No description available.
162

Land Cover Classification on Satellite Image Time Series Using Deep Learning Models

Wang, Zhihao January 2020 (has links)
No description available.
163

A comparison of TV news coverage of the American medium (CNN) and the Middle East medium (Al-Jazeera) on the Iraq War

Benjamin, Adrenna 01 January 2004 (has links)
No description available.
164

Image Segmentation Using Deep Learning Methods

Lukačovič, Martin January 2017 (has links)
This thesis deals with current methods of semantic segmentation using deep learning. Other approaches to neural networks in the area of deep learning are also discussed. It covers historical solutions of neural networks, their development, and basic principles. Convolutional neural networks are nowadays the preferred networks for solving tasks such as detection, classification, and image segmentation. The functionality was verified on a freely available environment based on conditional random fields as recurrent neural networks and compared with deep convolutional neural networks using conditional random fields as post-processing. The latter method became the basis for training new models on two different datasets. There are various environments for implementing neural networks using deep learning, which offer diverse capabilities. For demonstration purposes, a Python application leveraging the BVLC/Caffe framework was created. The best achieved accuracy of a trained model is 50.74% for clothing segmentation and 68.52% for segmentation of VOC objects. The application allows interactive image segmentation based on the trained models.
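As a rough sketch of the inference step such a pipeline performs, the snippet below runs a pre-trained semantic segmentation network and takes the per-pixel argmax over class logits. It uses PyTorch/torchvision (0.13 or newer) as a stand-in for the thesis's BVLC/Caffe setup; the model choice and input file name are assumptions.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# A segmentation network pre-trained on VOC-style classes (stand-in model).
model = models.segmentation.deeplabv3_resnet50(weights="DEFAULT").eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],  # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

img = Image.open("example.jpg").convert("RGB")  # hypothetical input image
with torch.no_grad():
    logits = model(preprocess(img).unsqueeze(0))["out"]  # (1, 21, H, W)
labels = logits.argmax(dim=1).squeeze(0)  # per-pixel class indices
```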
165

Classifying Liver Fibrosis Stage Using Gadoxetic Acid-Enhanced MR Images

Lu, Yi Cheng January 2019 (has links)
The purpose is to classify liver fibrosis stage using gadoxetic acid-enhanced MR images. At first, a method proposed by a Korean group was examined in an attempt to reproduce their result; however, the performance was not as impressive as theirs. Then several gray-scale image feature extraction methods were used. Last but not least, the most popular method of recent years, the Convolutional Neural Network (CNN), was utilized. Finally, both approaches were evaluated. The results show that with manual feature extraction, the AdaBoost model works well, achieving an AUC of 0.9. Furthermore, the ResNet-18 network, a deep learning architecture, reaches an AUC of 0.93, and all the hyperparameters and training settings used on ResNet-18 transfer well to ResNet-50, ResNet-101, and InceptionV3. The best model obtained is ResNet-101, with an AUC of 0.96, higher than all current publications on machine learning methods for staging liver fibrosis.
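A minimal sketch of this kind of transfer-learning setup: swap the ImageNet head of ResNet-18 for a fibrosis-stage classifier and score it with AUC. The two-class grouping, the data loader, and the hyperparameters are assumptions for illustration, not the thesis's actual protocol.

```python
import torch
import torch.nn as nn
from torchvision import models
from sklearn.metrics import roc_auc_score

# Start from ImageNet weights and replace the classification head.
model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, 2)  # e.g. low vs. high fibrosis stage

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def evaluate_auc(model, loader):
    """AUC over a loader yielding (mr_image_batch, stage_label_batch)."""
    model.eval()
    scores, labels = [], []
    with torch.no_grad():
        for x, y in loader:
            p = torch.softmax(model(x), dim=1)[:, 1]  # P(high stage)
            scores += p.tolist()
            labels += y.tolist()
    return roc_auc_score(labels, scores)
```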
166

Hardware Efficient Deep Neural Network Implementation on FPGA

Shuvo, Md Kamruzzaman 01 December 2020 (has links)
In recent years, there has been a significant push to implement Deep Neural Networks (DNNs) on edge devices, which requires power- and hardware-efficient circuits to carry out the intensive matrix-vector multiplication (MVM) operations. This work presents hardware-efficient MVM implementation techniques using bit-serial arithmetic and a novel MSB-first computation circuit. The proposed designs take advantage of the pre-trained network weight parameters, which are already known at the design stage. Thus, partial computation results can be pre-computed and stored in look-up tables, and the MVM results can then be computed in a bit-serial manner without using multipliers. The proposed circuit implementation for convolution filters and the rectified linear activation function used in deep neural networks conducts computation in an MSB-first bit-serial manner. It can predict early whether the outcome of a filter computation will be negative and terminate the remaining computation to save power. The benefits of the proposed MVM implementation techniques are demonstrated by comparing the proposed design with a conventional implementation. The proposed circuit is implemented on an FPGA and shows significant power and performance improvements compared to conventional designs implemented on the same FPGA.
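The MSB-first early-termination idea can be modeled in software. The sketch below is a plain-Python behavioral model, not the thesis's FPGA circuit: it accumulates a dot product bit-serially from the most significant activation bit downward and stops as soon as the ReLU output is provably zero.

```python
def relu_dot_msb_first(weights, xs, nbits=8):
    """Behavioral model of an MSB-first bit-serial ReLU(w . x).

    weights: fixed pre-trained integer weights (known at design time, so
             hardware can precompute the per-bit partial sums in LUTs).
    xs:      unsigned nbits-bit integer activations, consumed bit-serially.
    """
    max_per_bit = sum(w for w in weights if w > 0)  # best-case gain per bit weight
    acc = 0
    for b in range(nbits - 1, -1, -1):              # MSB first
        # In hardware this inner sum would be a look-up table read.
        acc += sum(w for w, x in zip(weights, xs) if (x >> b) & 1) << b
        # The remaining bits b-1..0 can add at most (2^b - 1) * max_per_bit.
        if acc + ((1 << b) - 1) * max_per_bit <= 0:
            return 0          # ReLU output is provably 0: terminate early
    return max(acc, 0)        # ReLU of the fully accumulated dot product

# Example: the sum is strongly negative, so only the MSB cycle is evaluated.
print(relu_dot_msb_first([3, -5, 2], [17, 200, 33]))  # -> 0
```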
167

Real-time 2D Static Hand Gesture Recognition and 2D Hand Tracking for Human-Computer Interaction

Popov, Pavel Alexandrovich 11 December 2020 (has links)
The topic of this thesis is hand gesture recognition and hand tracking for user interface applications. Three systems were produced, as well as datasets for recognition and tracking, along with UI applications to prove the concept of the technology. These represent significant contributions to resolving the hand recognition and tracking problems for 2D systems. The systems were designed to work in video-only contexts, be computationally light, provide recognition and tracking of the user's hand, and operate without user-driven fine-tuning and calibration. Existing systems require user calibration, use depth sensors and do not work in video-only contexts, or are computationally heavy, requiring a GPU to run in live situations. A two-step static hand gesture recognition system was created which can recognize three different gestures in real time: a detection step detects hand gestures using machine learning models, and a validation step rejects false positives. The gesture recognition system was combined with hand tracking, so that it recognizes and then tracks a user's hand in video in an unconstrained setting. The tracking uses two collaborative strategies: a contour tracking strategy guides a minimization-based template tracking strategy, making it real-time, robust, and recoverable, while the template tracking provides stable input for UI applications. Lastly, an improved static gesture recognition system addresses the drawbacks of stratified colour sampling of the detection boxes in the detection step: it uses the entire presented colour range and clusters it into constituent colour modes, which are then used for segmentation, improving the overall gesture recognition rates. One dataset was produced for static hand gesture recognition, which allowed for the comparison of multiple different machine learning strategies, including deep learning. Another dataset was produced for hand tracking, providing a challenging series of user scenarios to test the gesture recognition and hand tracking system. Both datasets are significantly larger than other available datasets. The hand tracking algorithm was used to create a mouse cursor control application, a paint application for Android mobile devices, and an FPS video game controller. The latter in particular demonstrates how the collaborating hand tracking can fulfill the demanding nature of responsive aiming and movement controls.
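As a loose illustration of contour-based hand localization in a video-only setting (not the thesis's actual two-strategy tracker), a minimal OpenCV 4 sketch might look like the following; the HSV skin-colour range and the largest-contour heuristic are assumptions.

```python
import cv2

# Locate a hand candidate per frame via colour segmentation and contours.
cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Hypothetical skin-tone range; a real system would cluster colour modes.
    mask = cv2.inRange(hsv, (0, 40, 60), (25, 255, 255))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        hand = max(contours, key=cv2.contourArea)  # assume largest blob is the hand
        x, y, w, h = cv2.boundingRect(hand)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```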
168

Automated Building Extraction from Aerial Imagery with Mask R-CNN

Zilong Yang (9750833) 14 December 2020 (has links)
Buildings are one of the fundamental sources of geospatial information for urban planning, population estimation, and infrastructure management. Although building extraction research has made considerable progress through neural network methods, the labeling of training data still requires manual operations, which are time-consuming and labor-intensive. Aiming to improve this process, this thesis developed an automated building extraction method based on the boundary following technique and the Mask Regional Convolutional Neural Network (Mask R-CNN) model. First, assisted by known building footprints, a boundary following method was used to automatically label the training image datasets. In the next step, the Mask R-CNN model was trained with the labeling results and then applied to building extraction. Experiments with datasets of urban areas of Bloomington and Indianapolis with 2016 high-resolution aerial images verified the effectiveness of the proposed approach. With the help of existing building footprints, the automatic labeling process took only five seconds for a 500×500-pixel image without human interaction. A 0.951 intersection over union (IoU) between the labeled mask and the ground truth was achieved due to the high quality of the automatic labeling step. In the training process, the ResNet-50 network and the feature pyramid network (FPN) were adopted for feature extraction, and the region proposal network (RPN) was trained end-to-end to create region proposals. The performance of the proposed approach was evaluated in terms of building detection and mask segmentation in the two datasets. The building detection results of 40 test tiles in Bloomington and Indianapolis showed that the Mask R-CNN model achieved F1-scores of 0.951 and 0.968, respectively. In addition, 84.2% of the newly built buildings in the Indianapolis dataset were successfully detected. According to the segmentation results on these two datasets, the Mask R-CNN model achieved a mean pixel accuracy (MPA) of 92% and 88% for Bloomington and Indianapolis, respectively. The performance of the mask segmentation and contour extraction became less satisfactory as building shapes and roofs became more complex. It is expected that the method developed in this thesis can be adapted for large-scale use under varying urban setups.
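The IoU and F1 figures reported above are standard metrics and can be computed from binary masks and detection counts; a minimal sketch, with same-shape binary arrays assumed as input, follows.

```python
import numpy as np

def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection over union of two same-shape binary building masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as a perfect match
    return np.logical_and(pred, truth).sum() / union

def f1(tp: int, fp: int, fn: int) -> float:
    """F1-score from true-positive, false-positive, false-negative counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```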
169

Gatekeeping and Citizen Journalism: A Qualitative Examination of Participatory Newsgathering

Channel, Amani 02 March 2010 (has links)
For nearly sixty years, scholars have studied how information is selected, vetted, and shared by news organizations. The process, known as gatekeeping, is an enduring mass communications theory that describes how news is gathered and filtered to audiences. It has been suggested, however, that in the wake of online communications the traditional function of media gatekeeping is changing. The infusion of citizen-gathered media into news programming is resulting in what some call a paradigm shift. As mainstream news outlets adopt and encourage public participation, it is important that researchers have a greater understanding of the theoretical implications related to participatory media and gatekeeping. This study will be among the first to examine the adoption of citizen journalism by a major cable news network. It focuses on CNN's citizen journalism online news community, iReport, which allows the public to share and submit "unfiltered" content. Vetted submissions that are deemed newsworthy can then be broadcast across CNN's networks and published on CNN.com. This journalism practice appears to follow the thoughts of Nguyen (2006), who states that "future journalists will need to be trained to not only become critical gate-keepers but also act as listeners, discussion and forum leaders/mediators in an intimate interaction with their audiences." The goal of the paper is to lay a foundation for understanding how participatory media is utilized by a news network, to help researchers develop new models and hypotheses related to gatekeeping theory.
170

Evaluation of Face Recognition Accuracy in Surveillance Video

Tuvskog, Johanna January 2020 (has links)
Automatic Face Recognition (AFR) can be useful in the forensic field when identifying people in surveillance footage. AFR systems commonly use deep neural networks, which perform well as long as image quality stays above a certain level. This is a problem when applying AFR to surveillance data, since the quality of those images can be very poor. In this thesis, the CNN FaceNet has been used to evaluate how different quality parameters influence the accuracy of face recognition. The goal is to draw conclusions about how to improve recognition by exploiting or avoiding certain parameters depending on the conditions. The parameters experimented with are face angle, image quality, occlusion, colour, and lighting. This was achieved by using datasets with different properties or by altering the images. The parameters are meant to simulate situations that can occur in surveillance footage and that are difficult for the network to handle. Three different models were evaluated, with different embedding sizes and different training data. The results show that the two models trained on the VGGFace2 dataset perform much better than the one trained on CASIA-WebFace. Every model's performance drops on low-quality images compared to high-quality images, because the training data consists mostly of high-quality images. In some cases, the recognition results can be improved by altering the images, for example by using one frontal and one profile image when trying to identify a person, or by occluding parts of the face shape when it is mistaken for other persons with similar face shapes. One main improvement would be to extend the training datasets with more low-quality images; to some extent, this could be achieved with data augmentation such as artificial occlusion and down-sampled images.
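Networks like FaceNet decide identity by comparing embedding vectors rather than raw pixels. The sketch below shows that matching step in outline; the gallery structure and the distance threshold are assumptions (a usable threshold depends on the trained model, which is part of what such an evaluation measures).

```python
import numpy as np

def match(probe_emb: np.ndarray, gallery: dict, threshold: float = 1.1):
    """Identify a probe face by nearest embedding in a gallery.

    gallery: hypothetical dict mapping person name -> embedding vector,
             with embeddings assumed L2-normalized as FaceNet produces.
    """
    best_name, best_dist = None, float("inf")
    for name, emb in gallery.items():
        dist = np.linalg.norm(probe_emb - emb)  # Euclidean distance
        if dist < best_dist:
            best_name, best_dist = name, dist
    # Reject the match entirely if even the nearest face is too far away.
    return best_name if best_dist < threshold else None
```

Low-quality probe images tend to push embeddings away from their high-quality counterparts, which is one way the quality parameters studied above degrade accuracy.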
