  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
211

Privacy-Preserving Facial Recognition Using Biometric-Capsules

Tyler Stephen Phillips (8782193) 04 May 2020 (has links)
In recent years, developers have used the proliferation of biometric sensors in smart devices, along with recent advances in deep learning, to implement an array of biometrics-based recognition systems. Though these systems demonstrate remarkable performance and have seen wide acceptance, they present unique and pressing security and privacy concerns. One proposed method which addresses these concerns is the elegant, fusion-based Biometric-Capsule (BC) scheme. The BC scheme is provably secure, privacy-preserving, cancellable and interoperable in its secure feature fusion design.

In this work, we demonstrate that the BC scheme is uniquely fit to secure state-of-the-art facial verification, authentication and identification systems. We compare the performance of the unsecured underlying biometric systems to the performance of the BC-embedded systems in order to directly demonstrate the minimal effect of the privacy-preserving BC scheme on underlying system performance. Notably, we demonstrate that, when seamlessly embedded into state-of-the-art FaceNet and ArcFace verification systems which achieve accuracies of 97.18% and 99.75% on the benchmark LFW dataset, the BC-embedded systems achieve accuracies of 95.13% and 99.13% respectively. Furthermore, we demonstrate that the BC scheme outperforms or performs as well as several other proposed secure biometric methods.
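For context, the embedding-based face verification that systems like FaceNet and ArcFace perform can be sketched as comparing two face embeddings against a similarity threshold. The toy embeddings, encoder stand-in and threshold below are placeholders for illustration only and do not represent the BC scheme itself:

```python
# A generic sketch of embedding-based face verification: two faces are judged to
# match if their embeddings are close enough. The embeddings here are random
# placeholders standing in for a real face encoder's output (an assumption).
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(emb_a, emb_b, threshold=0.5):
    """Return True if the two face embeddings are judged to be the same person."""
    return cosine_similarity(emb_a, emb_b) >= threshold

# toy 128-d embeddings standing in for a face encoder's output
rng = np.random.default_rng(0)
emb_a = rng.normal(size=128)
emb_b = emb_a + 0.1 * rng.normal(size=128)   # slightly perturbed "same person"
print(verify(emb_a, emb_b))
```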
212

Physics Informed Neural Networks for Engineering Systems

Sukirt (8828960) 13 May 2020 (has links)
This thesis explores the application of deep learning techniques to problems in fluid mechanics, with particular focus on physics informed neural networks. Physics informed neural networks leverage the information gathered over centuries in the form of physical laws, mathematically represented as partial differential equations, to make up for the dearth of data associated with engineering and physical systems. To demonstrate the capability of physics informed neural networks, an inverse and a forward problem are considered. The inverse problem involves discovering a spatially varying concentration field from observations of the concentration of a passive scalar. A forward problem involving conjugate heat transfer is solved as well, where the boundary conditions on velocity and temperature are used to discover the velocity, pressure and temperature fields in the entire domain. The predictions of the physics informed neural networks are compared against simulated data generated using OpenFOAM.
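To make the physics-informed loss idea concrete, here is a minimal hypothetical sketch in PyTorch for a 1D steady advection-diffusion equation; the network size, coefficients, collocation points and boundary values are assumptions for illustration, not the thesis's actual setup:

```python
# Minimal sketch of a physics-informed loss: the PDE u*dc/dx = D*d2c/dx2 is
# enforced as a soft constraint at collocation points, plus a boundary term.
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
D, u = 0.1, 1.0                               # assumed diffusivity and velocity
x = torch.rand(200, 1, requires_grad=True)    # collocation points in (0, 1)
xb = torch.tensor([[0.0], [1.0]])             # boundary points
cb = torch.tensor([[0.0], [1.0]])             # assumed Dirichlet boundary values

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):
    opt.zero_grad()
    c = net(x)
    dc = torch.autograd.grad(c, x, torch.ones_like(c), create_graph=True)[0]
    d2c = torch.autograd.grad(dc, x, torch.ones_like(dc), create_graph=True)[0]
    pde_residual = u * dc - D * d2c           # how badly the PDE is violated
    loss = (pde_residual ** 2).mean() + ((net(xb) - cb) ** 2).mean()
    loss.backward()
    opt.step()
```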
213

Réseaux de neurones convolutionnels profonds pour la détection de petits véhicules en imagerie aérienne / Deep neural networks for the detection of small vehicles in aerial imagery

Ogier du Terrail, Jean 20 December 2018 (has links)
This thesis tackles the problem of detecting and discriminating small vehicles in vertical aerial imagery through the use of deep learning techniques. The specific character of the problem allows the use of original techniques that leverage the invariances and self-similarities of automobiles and aircraft seen from the sky. We begin with a systematic study of so-called single-shot detectors, then analyze the contribution of multi-stage decision systems to detection performance. Finally, we address the domain adaptation problem through the generation of increasingly realistic synthetic data and its use in training these detectors.
214

Security Framework for the Internet of Things Leveraging Network Telescopes and Machine Learning

Shaikh, Farooq Israr Ahmed 04 April 2019 (has links)
The recent advancements in computing and sensor technologies, coupled with improvements in embedded system design methodologies, have resulted in the novel paradigm called the Internet of Things (IoT). IoT is essentially a network of small embedded devices enabled with sensing capabilities that can interact with multiple entities to relay information about their environments. This sensing information can also be stored in the cloud for further analysis, thereby reducing storage requirements on the devices themselves. The above factors, coupled with the ever-increasing need of modern society to stay connected at all times, have resulted in IoT technology penetrating all facets of modern life. In fact, IoT systems are already seeing widespread application across multiple industries such as transport, utilities, manufacturing, healthcare and home automation. Although the above developments promise tremendous benefits in terms of productivity and efficiency, they also bring forth a plethora of security challenges. Namely, the current design philosophy of IoT devices, which focuses more on rapid prototyping and usability, results in security often being an afterthought. Furthermore, unlike traditional computing systems, these devices operate under tight resource constraints. This makes IoT devices a lucrative target for exploitation by adversaries. This inherent flaw of IoT setups has manifested itself in the form of various distributed denial-of-service (DDoS) attacks that have achieved massive throughputs without the need for techniques such as amplification. Furthermore, once exploited, an IoT device can function as a pivot point for adversaries to move laterally across the network and exploit other, potentially more valuable, systems and services. Finally, vulnerable IoT devices operating in industrial control systems and other critical infrastructure can cause sizable loss of property and, in some cases, even lives.

In light of the above, this dissertation research presents several novel strategies for identifying known and zero-day attacks against IoT devices, as well as identifying infected IoT devices present inside a network, along with some mitigation strategies. To this end, network telescopes are leveraged to generate Internet-scale notions of maliciousness, in conjunction with signatures that can be used to identify such devices in a network. This strategy is further extended by developing a taxonomy-based methodology capable of categorizing unsolicited IoT behavior by leveraging machine learning (ML) techniques, such as ensemble learners, to identify similar threats in near-real time. Furthermore, to overcome the challenge of insufficient (malicious) training data within the IoT realm, a generative adversarial network (GAN) based framework is developed to identify known and unseen attacks on IoT devices. Finally, a software-defined networking (SDN) based solution is proposed to mitigate threats from unsolicited IoT devices.
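As a hedged illustration of the ensemble-learning step mentioned above (not the dissertation's actual features, telescope data or models), a random forest can be trained on simple per-flow features to flag unsolicited IoT-like traffic:

```python
# Toy sketch: classify traffic flows as IoT-scanner-like or not using an
# ensemble learner. The feature set and labels below are synthetic assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# toy flow features: [packets/s, mean packet size, distinct destination ports]
X = rng.normal(size=(1000, 3))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)    # synthetic "unsolicited" label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```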
215

DEVELOPING A DEEP LEARNING PIPELINE TO AUTOMATICALLY ANNOTATE GOLD PARTICLES IN IMMUNOELECTRON MICROSCOPY IMAGES

Unknown Date (has links)
Machine learning has been utilized in bio-imaging in recent years; however, as it is relatively new and evolving, some researchers who wish to use machine learning tools have limited access because of a lack of programming knowledge. In electron microscopy (EM), immunogold labeling is commonly used to identify target proteins, but the manual annotation of the gold particles in the images is a time-consuming and laborious process. Conventional image processing tools can provide semi-automated annotation, but they require users to make manual adjustments at every step of the analysis. To create a new high-throughput image analysis tool for immuno-EM, I developed a deep learning pipeline designed to deliver completely automated annotation of immunogold particles in EM images. The program was made accessible to users without prior programming experience and was also extended to work on different types of immuno-EM images. / Includes bibliography. / Thesis (M.S.)--Florida Atlantic University, 2020. / FAU Electronic Theses and Dissertations Collection
216

Exploring Entity Relationship in Pairwise Ranking: Adaptive Sampler and Beyond

Yu, Lu 12 1900 (has links)
Living in the booming age of information, we have to rely on powerful information retrieval tools, such as personalized search engines and recommendation systems, to seek out the desired piece of knowledge in such a big-data world. As one of the core components, a ranking model appears almost everywhere a relative order of desired/relevant entities is needed. Based on the general and intuitive assumption that entities without user actions (e.g., clicks, purchases, comments) are of less interest than those with user actions, the objective function of pairwise ranking models is formulated by measuring the contrast between positive (with actions) and negative (without actions) entities. This contrastive relationship is the core of pairwise ranking models, and the construction of these positive-negative pairs has a great influence on model inference accuracy. It is especially challenging to explore entity relationships in heterogeneous information networks. In this thesis, we aim at advancing the methodologies and principles of mining heterogeneous information networks by learning entity relations from a pairwise learning-to-rank optimization perspective. More specifically, we first show the connections between different relation-learning objectives derived from different ranking metrics, including both pairwise and list-wise objectives, and prove that most popular ranking metrics can be optimized through the same lower bound. Secondly, we study the class-imbalance problem imposed by entity relation comparison in ranking objectives, and show that it can lead to frequency clustering and gradient vanishing problems. In response, we point out that developing a fast adaptive sampling method is essential to boost pairwise ranking models. To model dynamic entity dependencies, we propose to unify individual-level and union-level interactions, resulting in a multi-order attentive ranking model that improves preference inference from multiple views.
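For readers unfamiliar with pairwise ranking objectives, the following is a generic BPR-style sketch of the positive-negative contrast described above; the embedding model, dimensions and uniform negative sampler are illustrative stand-ins, not the thesis's adaptive sampler or multi-order attentive model:

```python
# Sketch of a pairwise (BPR-style) ranking step: push the score of an observed
# (positive) item above that of a sampled unobserved (negative) item.
import torch

n_users, n_items, dim = 100, 500, 32          # assumed toy sizes
U = torch.nn.Embedding(n_users, dim)
V = torch.nn.Embedding(n_items, dim)
opt = torch.optim.Adam(list(U.parameters()) + list(V.parameters()), lr=1e-2)

def bpr_step(users, pos_items, neg_items):
    """One optimization step on the positive-negative contrast."""
    s_pos = (U(users) * V(pos_items)).sum(-1)
    s_neg = (U(users) * V(neg_items)).sum(-1)
    loss = -torch.nn.functional.logsigmoid(s_pos - s_neg).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# toy batch with uniform negative sampling (the thesis argues for adaptive sampling)
users = torch.randint(0, n_users, (64,))
pos = torch.randint(0, n_items, (64,))
neg = torch.randint(0, n_items, (64,))
print(bpr_step(users, pos, neg))
```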
217

Interpretability for Deep Learning Text Classifiers

Lucaci, Diana 14 December 2020 (has links)
The ubiquitous presence of automated decision-making systems with performance comparable to humans has brought attention to the necessity of interpretability for the generated predictions. Whether the goal is predicting the system's behavior when the input changes, building user trust, or assisting experts in improving the machine learning methods, interpretability is paramount when the problem is not sufficiently validated in real applications and when unacceptable results lead to significant consequences. While for humans there are no standard interpretations for the decisions they make, the complexity of systems with advanced information-processing capacities conceals the detailed explanations for individual predictions, encapsulating them under layers of abstraction and complex mathematical operations. Interpretability for deep learning classifiers thus becomes a challenging research topic where the ambiguity of the problem statement allows for multiple exploratory paths. Our work focuses on generating natural language interpretations for individual predictions of deep learning text classifiers. We propose a framework for extracting and identifying the phrases of the training corpus that influence the prediction confidence the most, through unsupervised key phrase extraction and neural predictions. We assess the contribution margin that the added justification has when the deep learning model predicts the class probability of a text instance, by introducing and defining a contribution metric that quantifies the fidelity of the explanation to the model. We assess both the performance impact of the proposed approach on the classification task, as a quantitative analysis, and the quality of the generated justifications, through extensive qualitative and error analysis. This methodology manages to capture the most influential phrases of the training corpus as explanations that reveal the linguistic features used for individual test predictions, allowing humans to predict the behavior of the deep learning classifier.
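A contribution-style metric of this kind can be illustrated with a toy sketch: measure how much the predicted class probability drops when a candidate phrase is removed from the input. The classify() function below is a placeholder, not the thesis's deep learning classifier or its defined metric:

```python
# Hedged sketch of a phrase-contribution measurement by ablation.
import numpy as np

def classify(text):
    # placeholder classifier: probability of the "positive" class grows with
    # occurrences of the word "great" (purely illustrative behavior)
    p = min(0.5 + 0.4 * text.lower().count("great"), 0.95)
    return np.array([1 - p, p])

def phrase_contribution(text, phrase, target_class=1):
    """Drop in target-class probability when the phrase is removed from the text."""
    full = classify(text)[target_class]
    ablated = classify(text.replace(phrase, ""))[target_class]
    return full - ablated

print(phrase_contribution("this movie was great fun", "great"))  # 0.4
```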
218

AN END TO END PIPELINE TO LOCALIZE NUCLEI IN MICROSCOPIC ZEBRAFISH EMBRYO IMAGES

Juan Andres Carvajal (9524642) 16 December 2020 (has links)
Determining the locations of nuclei in Zebrafish embryos is crucial for the study of the spatio-temporal behavior of these cells during the development process. With image segmentation, not only can the location of each cell be determined, but also whether each pixel is background or part of a nucleus. Traditional image processing techniques have been thoroughly applied to this problem, but they suffer from poor generalization, often relying on heuristics that apply only to a specific type of image to reach high accuracy in pixel-by-pixel segmentation. In previous work from our research lab, wavelet image segmentation was applied, but its heuristics relied on the expected nucleus size.

Machine learning techniques, and more specifically convolutional neural networks, have recently revolutionized image processing and computer vision in general. By relying on vast amounts of data and deep networks, problems in computer vision such as classification and semantic segmentation have reached new state-of-the-art performance, and these techniques continue to improve and push the boundaries of the state of the art.

The lack of labeled data to use as input to a machine learning model was the main bottleneck. To overcome this, this work utilized the Amazon Mechanical Turk platform, which allows users to create a task and give instructions to "Workers", who agree to a price to complete each task. The data was preprocessed before being presented to the workers, and revised to make sure it was properly labeled.

Once the labeled data was ready, the images and their corresponding segmentation labels were used to train a U-Net model. In a nutshell, this model maps the input image, at different scales, to a smaller vector, and then, again at different scales, reconstructs an image from this vector. During training, the weights of the model are updated so that the reconstructed image minimizes the difference between the label image and the predicted pixel segmentation.

We show that this method not only fits the ground-truth images labeled by the workers better, but also generalizes well to other images of Zebrafish embryos. Once the model is trained, inference to obtain the segmented image is also orders of magnitude faster than previous techniques, including our previous wavelet segmentation method.
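As a rough illustration of the U-Net idea summarized above (contract the image to a compact representation, then reconstruct it with skip connections), here is a compact sketch; the layer sizes and loss are assumptions for illustration, not the trained model from the thesis:

```python
# Tiny U-Net-style encoder-decoder for binary (nucleus vs. background) masks.
import torch
import torch.nn as nn

def block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU())

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1, self.enc2 = block(1, 16), block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = block(32, 16)                 # 32 = upsampled + skip channels
        self.head = nn.Conv2d(16, 1, 1)          # per-pixel nucleus logit

    def forward(self, x):
        e1 = self.enc1(x)                        # full-resolution features
        e2 = self.enc2(self.pool(e1))            # half-resolution features
        d = self.dec(torch.cat([self.up(e2), e1], dim=1))   # skip connection
        return self.head(d)

model = TinyUNet()
logits = model(torch.randn(1, 1, 64, 64))        # (batch, 1, H, W) segmentation map
loss = nn.functional.binary_cross_entropy_with_logits(
    logits, torch.zeros(1, 1, 64, 64))           # toy all-background label
```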
219

Object Recognition with Progressive Refinement for Collaborative Robots Task Allocation

Wu, Wenbo 18 December 2020 (has links)
With the rapid development of deep learning techniques, the application of Convolutional Neural Networks (CNNs) has benefited the task of target object recognition, and several state-of-the-art object detectors have achieved excellent precision. When the detection results are applied to the real-world setting of collaborative robots, the reliability and robustness of the target object detection stage are essential to support efficient task allocation. In this work, collaborative robot task allocation is based on the assumption that each individual robotic agent possesses specialized capabilities to be matched with detected targets, which represent tasks to be performed in the surrounding environment and impose specific requirements. The goal is to reach a specialized labor distribution among the individual robots by best matching their specialized capabilities with the corresponding requirements imposed by the tasks. In order to further improve task recognition with convolutional neural networks in the context of robotic task allocation, this thesis proposes an innovative approach for progressively refining the target detection process by taking advantage of the fact that additional images can be collected by mobile cameras installed on robotic vehicles. The proposed methodology combines a CNN-based object detection module with a refinement module. For the detection module, a two-stage object detector, Mask RCNN, for which some adaptations to region proposal generation are introduced, and a one-stage object detector, YOLO, are experimentally investigated in the context considered. The generated recognition scores serve as input for the refinement module, in which the current detection result is treated as a priori evidence to enhance the next detection of the same target, with the goal of iteratively improving the target recognition scores. Both the Bayesian method and Dempster-Shafer theory are experimentally investigated to achieve the data fusion involved in the refinement process. The experimental validation is conducted on indoor search-and-rescue (SAR) scenarios, and the results presented in this work demonstrate the feasibility and reliability of the proposed progressive refinement framework, especially when the combination of adapted Mask RCNN and D-S theory data fusion is exploited.
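The Bayesian flavor of such a refinement module can be sketched as repeated belief updating over successive detections of the same target; the class scores below are made-up values for illustration, not the thesis's fusion rules or data:

```python
# Hedged sketch: treat each new detection's class scores as a likelihood and
# fuse them with the running belief about the target's class.
import numpy as np

def bayes_refine(prior, likelihood):
    """Fuse the current class belief with a new detection's class scores."""
    posterior = prior * likelihood
    return posterior / posterior.sum()

belief = np.array([1 / 3, 1 / 3, 1 / 3])            # uniform belief over 3 task classes
for scores in [np.array([0.5, 0.3, 0.2]),            # detector outputs from new views
               np.array([0.6, 0.3, 0.1]),
               np.array([0.7, 0.2, 0.1])]:
    belief = bayes_refine(belief, scores)
print(belief)  # belief progressively concentrates on class 0
```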
220

OBJECT DETECTION IN DEEP LEARNING

Haoyu Shi (8100614) 10 December 2019 (has links)
Thanks to advances in computing and the availability of GPUs (Graphics Processing Units) for mathematical computation, the field of deep learning has become more popular and prevalent. Object detection with deep learning, which is part of image processing, plays an important role in autonomous driving and computer vision. Object detection comprises object localization and object classification. In object localization, the computer scans the image and outputs the correct coordinates that localize the object; in object classification, the computer assigns the detected targets to different categories. The traditional object detection pipeline follows the Fast/Faster R-CNN idea [32] [58]: a region proposal network generates the areas containing objects and passes them to a classifier. The first step is object localization and the second step is object classification. This pipeline is not time-efficient. To address this problem, the You Only Look Once (YOLO) network [4] was introduced. YOLO is a single end-to-end neural network pipeline whose image processing speed reaches 45 frames per second for real-time network prediction. In this thesis, convolutional neural networks are introduced, including the state-of-the-art convolutional neural networks of recent years, and the YOLO implementation details are illustrated step by step. We adopt the YOLO network for our applications since it has a faster convergence rate in training, provides high accuracy, and is an end-to-end architecture, which makes the network easy to optimize and train.
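To illustrate the single-pass grid output that distinguishes YOLO from the two-stage pipeline, here is a hedged sketch of decoding an S x S grid of box predictions; the grid size, box count and class count are assumptions for illustration, not the exact network described in the thesis:

```python
# A YOLO-style detector emits one tensor over an SxS grid; each cell predicts B
# boxes (x, y, w, h, confidence) plus C class scores, all in one forward pass.
import torch

S, B, C = 7, 2, 20                    # assumed grid cells, boxes per cell, classes
pred = torch.rand(S, S, B * 5 + C)    # stand-in for a network's output tensor

def decode(pred, conf_thresh=0.5):
    """Turn grid predictions into (x, y, w, h, confidence, class) detections."""
    detections = []
    for i in range(S):
        for j in range(S):
            cell = pred[i, j]
            cls = int(cell[B * 5:].argmax())
            for b in range(B):
                x, y, w, h, conf = cell[b * 5: b * 5 + 5].tolist()
                if conf >= conf_thresh:
                    # map cell-relative (x, y) offsets to image-relative coords
                    detections.append(((j + x) / S, (i + y) / S, w, h, conf, cls))
    return detections

print(len(decode(pred)))
```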
