591

DEVELOPING A DEEP LEARNING PIPELINE TO AUTOMATICALLY ANNOTATE GOLD PARTICLES IN IMMUNOELECTRON MICROSCOPY IMAGES

Unknown Date (has links)
Machine learning has been utilized in bio-imaging in recent years; however, because the field is relatively new and still evolving, some researchers who wish to use machine learning tools have limited access to them due to a lack of programming knowledge. In electron microscopy (EM), immunogold labeling is commonly used to identify target proteins, but manually annotating the gold particles in the images is a time-consuming and laborious process. Conventional image processing tools can provide semi-automated annotation, but they require users to make manual adjustments at every step of the analysis. To create a new high-throughput image analysis tool for immuno-EM, I developed a deep learning pipeline designed to deliver fully automated annotation of immunogold particles in EM images. The program was made accessible to users without prior programming experience and was also extended to work on different types of immuno-EM images. / Includes bibliography. / Thesis (M.S.)--Florida Atlantic University, 2020. / FAU Electronic Theses and Dissertations Collection
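The annotation task this pipeline automates can be illustrated with a deliberately simple classical baseline: gold particles appear as small dark blobs, so thresholding plus connected-component labeling yields candidate centroids. This is only a hedged sketch of the problem setting, not the thesis's deep learning method; the function name and threshold value are illustrative.

```python
def annotate_particles(image, threshold=50):
    """Return centroids of dark blobs in a 2D grayscale image (list of rows)."""
    h, w = len(image), len(image[0])
    mask = [[image[y][x] < threshold for x in range(w)] for y in range(h)]
    seen = [[False] * w for _ in range(h)]
    centroids = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                # flood-fill one connected component (4-connectivity)
                stack, pixels = [(y, x)], []
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy + 1, cx), (cy - 1, cx), (cy, cx + 1), (cy, cx - 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                # centroid = mean pixel position of the component
                my = sum(p[0] for p in pixels) / len(pixels)
                mx = sum(p[1] for p in pixels) / len(pixels)
                centroids.append((my, mx))
    return centroids
```

A trained network replaces the fixed threshold with learned per-pixel predictions, which is what removes the per-image manual adjustment step.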
592

Exploring Entity Relationship in Pairwise Ranking: Adaptive Sampler and Beyond

Yu, Lu 12 1900 (has links)
Living in the booming age of information, we have to rely on powerful information retrieval tools, such as personalized search engines and recommendation systems, to find the desired piece of knowledge in such a big-data world. As one of the core components, a ranking model appears almost everywhere that a relative order of desired/relevant entities is needed. Based on the general and intuitive assumption that entities without user actions (e.g., clicks, purchases, comments) are of less interest than those with user actions, the objective function of pairwise ranking models is formulated by measuring the contrast between positive (with actions) and negative (without actions) entities. This contrastive relationship is the core of pairwise ranking models, and the construction of these positive-negative pairs has great influence on model inference accuracy. Exploring entity relationships is especially challenging in heterogeneous information networks. In this thesis, we aim to advance the methodologies and principles of mining heterogeneous information networks by learning entity relations from a pairwise learning-to-rank optimization perspective. More specifically, we first show the connections between different relation learning objectives derived from different ranking metrics, including both pairwise and list-wise objectives, and prove that most popular ranking metrics can be optimized through the same lower bound. Secondly, we describe the class-imbalance problem imposed by entity relation comparison in ranking objectives, and prove that it can lead to frequency clustering and gradient vanishing problems. In response, we show that a fast adaptive sampling method is essential to boosting pairwise ranking models.
To model dynamic entity dependencies, we propose to unify individual-level and union-level interactions, resulting in a multi-order attentive ranking model that improves preference inference from multiple views.
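The pairwise objective described in this abstract is commonly formulated as a BPR-style loss; the sketch below shows that loss together with a naive "hardest negative" pick standing in for an adaptive sampler. This is a generic illustration under standard assumptions, not the thesis's exact model or sampler.

```python
import math

def bpr_loss(pos_score, neg_score):
    """-log(sigmoid(pos - neg)): small when the positive outscores the negative."""
    return -math.log(1.0 / (1.0 + math.exp(-(pos_score - neg_score))))

def hardest_negative(pos_score, neg_scores):
    """Pick the highest-scoring negative: the most informative pair for training.

    An adaptive sampler approximates this cheaply instead of scoring
    every negative, which is where the speed-up discussed above comes from.
    """
    return max(range(len(neg_scores)), key=lambda i: neg_scores[i])
```

Uniform negative sampling mostly draws easy negatives with near-zero gradients, which is the gradient-vanishing issue the abstract refers to.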
593

Interpretability for Deep Learning Text Classifiers

Lucaci, Diana 14 December 2020 (has links)
The ubiquitous presence of automated decision-making systems with performance comparable to humans has brought attention to the necessity of interpretability for the generated predictions. Whether the goal is predicting the system's behavior when the input changes, building user trust, or assisting experts in improving machine learning methods, interpretability is paramount when the problem is not sufficiently validated in real applications and when unacceptable results lead to significant consequences. While for humans there are no standard interpretations for the decisions they make, the complexity of systems with advanced information-processing capacities conceals the detailed explanations for individual predictions, encapsulating them under layers of abstraction and complex mathematical operations. Interpretability for deep learning classifiers thus becomes a challenging research topic in which the ambiguity of the problem statement allows for multiple exploratory paths. Our work focuses on generating natural language interpretations for individual predictions of deep learning text classifiers. We propose a framework for extracting and identifying the phrases of the training corpus that most influence the prediction confidence, through unsupervised key phrase extraction and neural predictions. We assess the contribution margin of the added justification when the deep learning model predicts the class probability of a text instance, introducing and defining a contribution metric that quantifies the fidelity of the explanation to the model. We assess both the performance impact of the proposed approach on the classification task, through quantitative analysis, and the quality of the generated justifications, through extensive qualitative and error analysis.
This methodology captures the most influential phrases of the training corpus as explanations that reveal the linguistic features used for individual test predictions, allowing humans to predict the behavior of the deep learning classifier.
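The contribution-metric idea, quantifying how much a key phrase shifts prediction confidence, can be sketched generically: compare the classifier's confidence with and without the phrase. The toy word-counting classifier below is purely illustrative and is not the thesis's model or its exact metric definition.

```python
def contribution(predict_proba, text, phrase):
    """Confidence drop when `phrase` is removed from `text`.

    `predict_proba` stands in for any text classifier returning
    P(predicted class | text); a large positive value means the phrase
    strongly supports the prediction.
    """
    return predict_proba(text) - predict_proba(text.replace(phrase, ""))

def toy_predict_proba(text):
    """Toy sentiment scorer: smoothed fraction of positive cue words."""
    pos = sum(text.count(w) for w in ("good", "great"))
    neg = sum(text.count(w) for w in ("bad", "awful"))
    return (1 + pos) / (2 + pos + neg)
```

Ranking candidate phrases by this score is one simple way to select which extracted key phrases to present as a justification.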
594

Efficient deep networks for real-world interaction

Abhishek Chaurasia (6864272) 16 December 2020 (has links)
<div><p>Deep neural networks are essential in applications such as image categorization, natural language processing, autonomous driving, home automation, and robotics. Most of these applications require instantaneous data processing and decision making. In general, existing neural networks are computationally expensive, and hence they fail to perform in real time. Models performing semantic segmentation are extensively used in self-driving vehicles, and autonomous vehicles need not only segmented output but also a control system capable of processing that output and deciding actuator outputs such as speed and direction.</p> <p><br></p> <p>In this thesis we propose efficient neural network architectures with fewer operations and parameters than current state-of-the-art algorithms. Our work mainly focuses on designing deep neural network architectures for semantic segmentation. First, we introduce a few network modules and concepts that help reduce model complexity. Later on, we show that in terms of accuracy our proposed networks perform better than, or at least on par with, state-of-the-art neural networks. We also compare our networks' performance on edge devices such as the Nvidia TX1. Lastly, we present a control system capable of predicting the steering angle and speed of a vehicle based on the neural network output.</p></div>
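The kind of module-level complexity reduction described here is often achieved with depthwise-separable convolutions; a quick parameter count (biases ignored) shows the saving. The layer sizes below are generic examples, not the thesis's actual architectures.

```python
def conv_params(c_in, c_out, k):
    """Parameters of a standard k x k convolution (no bias)."""
    return c_in * c_out * k * k

def separable_params(c_in, c_out, k):
    """Depthwise k x k filter per input channel, then a 1x1 pointwise mix."""
    return c_in * k * k + c_in * c_out
```

For a 128-to-128-channel 3x3 layer this drops the count from 147,456 to 17,536 parameters, roughly an 8x reduction, which is the kind of saving that makes real-time inference on edge devices feasible.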
595

AN END TO END PIPELINE TO LOCALIZE NUCLEI IN MICROSCOPIC ZEBRAFISH EMBRYO IMAGES

Juan Andres Carvajal (9524642) 16 December 2020 (has links)
<div><div><div><p>Determining the locations of nuclei in Zebrafish embryos is crucial for studying the spatio-temporal behavior of these cells during development. With image segmentation, not only can the location of each cell be known, but every pixel can be classified as background or part of a nucleus. Traditional image processing techniques have been thoroughly applied to this problem, but they suffer from poor generalization, often relying on heuristics that apply only to a specific type of image in order to reach high accuracy in pixel-by-pixel segmentation. In previous work from our research lab, wavelet image segmentation was applied, but its heuristics relied on the expected nuclei size.</p><p>Machine learning techniques, and more specifically convolutional neural networks, have recently revolutionized image processing and computer vision in general. By relying on vast amounts of data and deep networks, problems in computer vision such as classification and semantic segmentation have reached new state-of-the-art performance, and these techniques continue to improve and push the boundaries of the state of the art.</p><p>The lack of labeled data to use as input to a machine learning model was the main bottleneck. To overcome this, this work utilized the Amazon Mechanical Turk platform, which allows users to create a task with instructions for 'Workers', who agree to a price to complete each task. The data was preprocessed before being presented to the workers, and revised to make sure it was properly labeled.</p><p>Once the labeled data was ready, the images and their corresponding segmented labels were used to train a U-Net model. In a nutshell, this model maps the input image, at different scales, down to a smaller vector, and then, again at different scales, reconstructs an image from this vector. During training, the weights of the model are updated so that the reconstructed image minimizes the difference from the label image's pixel segmentation.</p><p>We show that this method not only fits the worker-labeled ground truth better, but also generalizes well to other images of Zebrafish embryos. Once the model is trained, inference to obtain the segmented image is also orders of magnitude faster than previous techniques, including our previous wavelet segmentation method.</p></div></div></div>
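One standard way to measure how well a predicted mask "fits" the labeled ground truth is the Dice coefficient; the minimal version below operates on flat 0/1 pixel lists and is offered only as an illustrative evaluation metric, not the thesis's actual code.

```python
def dice(pred, truth):
    """Dice coefficient between two binary masks given as flat 0/1 lists.

    2 * |intersection| / (|pred| + |truth|); 1.0 means a perfect match.
    """
    inter = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 1.0 if total == 0 else 2.0 * inter / total
```

Dice is preferred over plain pixel accuracy here because nuclei occupy a small fraction of each image, so a model predicting "all background" would still score a misleadingly high accuracy.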
596

Object Recognition with Progressive Refinement for Collaborative Robots Task Allocation

Wu, Wenbo 18 December 2020 (has links)
With the rapid development of deep learning techniques, the application of Convolutional Neural Networks (CNNs) has benefited the task of target object recognition, and several state-of-the-art object detectors have achieved excellent recognition precision. When applying detection results to the real-world operation of collaborative robots, the reliability and robustness of the target detection stage are essential to support efficient task allocation. In this work, collaborative robot task allocation is based on the assumption that each individual robotic agent possesses specialized capabilities to be matched with detected targets, which represent tasks to be performed in the surrounding environment and impose specific requirements. The goal is to reach a specialized labor distribution among the individual robots by best matching their specialized capabilities with the requirements imposed by the tasks. To further improve task recognition with convolutional neural networks in the context of robotic task allocation, this thesis proposes an innovative approach that progressively refines the target detection process by taking advantage of the fact that additional images can be collected by mobile cameras installed on the robotic vehicles. The proposed methodology combines a CNN-based object detection module with a refinement module. For the detection module, a two-stage object detector, Mask RCNN, with some adaptations to region proposal generation, and a one-stage object detector, YOLO, are experimentally investigated in the context considered. The generated recognition scores serve as input for the refinement module, in which the current detection result is treated as a priori evidence to enhance the next detection of the same target, iteratively improving the target recognition scores.
Both the Bayesian method and Dempster-Shafer theory are experimentally investigated for the data fusion involved in the refinement process. The experimental validation is conducted on indoor search-and-rescue (SAR) scenarios, and the results presented in this work demonstrate the feasibility and reliability of the proposed progressive refinement framework, especially when the combination of the adapted Mask RCNN and D-S theory data fusion is exploited.
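Dempster's rule of combination, one of the two fusion schemes investigated above, can be sketched for mass functions whose focal sets are frozensets of hypotheses; conflicting mass is discarded and the rest renormalized. The two-class example in the usage is illustrative only, not the thesis's actual evidence model.

```python
def combine(m1, m2):
    """Fuse two Dempster-Shafer mass functions (dict: frozenset -> mass).

    Mass assigned to disjoint focal sets is conflict; the remainder is
    renormalized by 1 - conflict (Dempster's rule).
    """
    fused, conflict = {}, 0.0
    for a, pa in m1.items():
        for b, pb in m2.items():
            inter = a & b
            if inter:
                fused[inter] = fused.get(inter, 0.0) + pa * pb
            else:
                conflict += pa * pb
    k = 1.0 - conflict
    return {s: v / k for s, v in fused.items()}
```

Applied iteratively, each new detection's mass function is combined with the accumulated one, which is how successive views of the same target can sharpen an initially uncertain recognition score.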
597

OBJECT DETECTION IN DEEP LEARNING

Haoyu Shi (8100614) 10 December 2019 (has links)
<p>With advances in computing and the availability of GPUs (Graphics Processing Units) for mathematical computation, the field of deep learning has become more popular and prevalent. Object detection with deep learning, a part of image processing, plays an important role in autonomous driving and computer vision. Object detection comprises object localization and object classification: in localization, the computer searches the image and outputs the coordinates that locate each object; in classification, the computer assigns detected targets to different categories. The traditional object detection pipeline follows the Fast/Faster R-CNN idea [32] [58]: a region proposal network generates areas that contain objects and passes them to a classifier, so localization and classification happen in two separate steps, and the time cost of this pipeline is not efficient. To address this problem, the You Only Look Once (YOLO) [4] network was introduced. YOLO is a single end-to-end neural network pipeline whose image processing speed reaches 45 frames per second for real-time prediction. In this thesis, convolutional neural networks are introduced, including state-of-the-art convolutional neural networks from recent years, and the YOLO implementation details are illustrated step by step. We adopt the YOLO network for our applications because it has a faster convergence rate in training, provides high accuracy, and its end-to-end architecture makes the network easy to optimize and train. </p>
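Detectors like YOLO are typically scored against ground truth with intersection-over-union (IoU); the sketch below uses a generic (x1, y1, x2, y2) box convention rather than YOLO's internal grid encoding, purely to illustrate the metric.

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0
```

A detection usually counts as correct when its IoU with a ground-truth box exceeds a threshold such as 0.5, and IoU also drives non-maximum suppression of overlapping predictions.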
598

Deep Neural Networks Based Disaggregation of Swedish Household Energy Consumption

Bhupathiraju, Praneeth Varma January 2020 (has links)
Context: In recent years, household energy consumption has risen to levels that are no longer sustainable, creating a dire need to use energy more sustainably. One of the main causes of this unsustainable usage is that users are not well acquainted with the energy consumed by the smart appliances (dishwasher, refrigerator, washing machine, etc.) in their households, so energy analytics companies must analyze the energy consumed by each appliance and report it to the household users. To achieve this, Kelly et al. [7] performed the task of energy disaggregation using deep neural networks, producing good results, and Zhang et al. [8] went a step further by improving the deep neural networks proposed by Kelly et al. The task was performed with the non-intrusive load monitoring (NILM) technique. Objectives: The thesis aims to assess the performance of the deep neural networks proposed by Kelly et al. [7] and Zhang et al. [8]. We use deep neural networks to disaggregate dishwasher energy consumption, in the presence of vampire loads such as electric heaters, in a Swedish household setting. We also measure the training time of the proposed deep neural networks. Methods: An intensive literature review was done to identify state-of-the-art deep neural network techniques used for energy disaggregation. All experiments are performed on a dataset provided by the energy analytics company Eliq AB, collected from 4 households in Sweden. All the households contain a vampire load, an electrical heater, whose power consumption appears in the main power sensor; a separate smart plug is used to collect the dishwasher power consumption data.
Each algorithm is trained on data from two of the houses, with the remaining two held out for testing. The metrics used for analyzing the algorithms are accuracy, recall, precision, root mean square error (RMSE), and F1 measure; these metrics help identify the algorithm best suited for disaggregating dishwasher energy in our case. Results: The results of our study show that the gated recurrent unit (GRU) performed best compared to the other neural networks in our study: the simple recurrent neural network (SRN), convolutional neural network (CNN), long short-term memory (LSTM), and recurrent convolutional neural network (RCNN). The accuracy, RMSE, and F1 score of the GRU algorithm are higher than those of the other algorithms. If, however, the user disregards F1 score and RMSE and instead prioritizes training time, the simple recurrent neural network outperforms all the other networks with an average training time of 19.34 minutes.
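The precision, recall, and F1 metrics listed above can be computed from per-timestep appliance on/off states; this is a generic sketch of those metrics, not Eliq's or the thesis's actual evaluation pipeline.

```python
def prf1(pred, truth):
    """Precision, recall, and F1 from per-timestep on/off (1/0) sequences."""
    tp = sum(p and t for p, t in zip(pred, truth))
    fp = sum(p and not t for p, t in zip(pred, truth))
    fn = sum(t and not p for p, t in zip(pred, truth))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

F1 matters in NILM because appliances like dishwashers are off most of the time; a model that always predicts "off" gets high accuracy but zero recall, which F1 exposes.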
599

Applicability of deep learning for mandibular growth prediction

Jiwa, Safeer 29 July 2020 (has links)
OBJECTIVES: Cephalometric analysis is a tool used in orthodontics for craniofacial growth assessment. The magnitude and direction of mandibular growth pose challenges that may impede successful orthodontic treatment, and accurate growth prediction enables the practitioner to improve diagnostics and orthodontic treatment planning. Deep learning provides a novel method due to its ability to analyze massive quantities of data. We compared the growth prediction capabilities of a novel deep learning algorithm with an industry-standard method. METHODS: Using OrthoDx™, 17 mandibular landmarks were plotted on selected serial cephalograms of 101 growing subjects obtained from the Forsyth Moorrees Twin Study. The deep learning algorithm (DLA) was trained for a 2-year prediction with 81 subjects: X/Y coordinates of initial and final landmark positions were fed into a multilayer perceptron trained over several iterations to improve its growth prediction accuracy. The trained model was then applied to 20 test subjects, and its predictions were compared to the ground-truth landmark locations to compute accuracy. The growth of the same 20 subjects was also predicted using Ricketts's growth prediction (RGP) in Dolphin Imaging™ 11.9 and compared to the ground truth. The mean absolute errors (MAE) of Ricketts and the DLA were then compared to each other, with human landmark detection error used as a clinical reference mean (CRM). RESULTS: The 2-year mandibular growth prediction MAE was 4.21 mm for the DLA and 3.28 mm for RGP. The DLA's error for skeletal landmarks was 2.11x larger than the CRM, while RGP's was 1.78x larger. For dental landmarks, the DLA was 2.79x and Ricketts 1.73x larger than the CRM. CONCLUSIONS: The DLA is currently not on par with RGP for a 2-year growth prediction. However, an increase in data volume and additional training may improve the DLA's prediction accuracy.
Regardless, significant future improvements to all growth prediction methods would more accurately assess growth from lateral cephalograms and improve orthodontic diagnoses and treatment plans.
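The landmark MAE reported above averages the Euclidean error of each predicted landmark against its ground-truth position; the sketch below is a generic reading of that metric, not the thesis's evaluation code.

```python
import math

def landmark_mae(pred, truth):
    """Mean Euclidean distance between paired (x, y) landmarks, in mm."""
    return sum(math.dist(p, t) for p, t in zip(pred, truth)) / len(pred)
```

Comparing this value to the human landmark-plotting error (the CRM) gives the "x times larger" ratios quoted in the results.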
600

Determinación y diseño del tipo de cimentación profunda con pilotes en puentes sobre suelos arenosos en Tumbes mediante un modelo computarizado / Determination and design of the type of deep pile foundation for bridges over sandy soils in Tumbes using a computerized model

Orellana Castillo, Javier Steven, Paitán Alejos, Juan Pablo 09 July 2020 (has links)
In 2017, after 19 years, Peru suffered the El Niño Costero phenomenon. This disaster mainly affected the north coast of the country, leaving numerous homes and facilities buried by flooding. In addition, several bridges collapsed, cutting off entire villages.
It follows that not all bridges are prepared for this type of phenomenon, with soil studies and structural designs based on scarce information among the possible causes. For these reasons, this thesis addresses the design and determination of the most efficient pile type for deep bridge foundations on sandy soils in Tumbes using a computerized model. The method is applied to the Canoas bridge, seeking to optimize construction performance while ensuring the foundation can withstand the acting loads and suit the soil characteristics. An alternative design is proposed for the superstructure which, together with a deep foundation on the analyzed piles, yields a project with optimal construction time without neglecting load-bearing capacity or cost. The proposal consists of a bridge with a 50 m span and steel girders, with 16 m high abutments at each support. Each abutment rests on a pile cap with 12 CPI-8 type piles installed by continuous flight auger (CFA). The superstructure is designed in SAP2000, the abutments in GEO5, and the piles by two methods (FHWA 1999 and analytical), checking their group resistance against the efficiency of the 12-pile group. / Thesis
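Group-efficiency checks for pile caps like the 12-pile ones above are often done with the Converse-Labarre formula; the sketch below assumes that formula (the thesis itself uses the FHWA 1999 and analytical methods), and the diameter and spacing values in the usage are placeholders, not the actual design values.

```python
import math

def converse_labarre(m, n, diameter, spacing):
    """Converse-Labarre group efficiency for m rows of n piles.

    theta is the pile diameter/spacing angle in degrees; the result is a
    factor (0, 1] multiplying the sum of individual pile capacities.
    """
    theta = math.degrees(math.atan(diameter / spacing))
    return 1 - theta / 90.0 * ((n - 1) * m + (m - 1) * n) / (m * n)
```

As expected, efficiency rises toward 1 as the spacing-to-diameter ratio grows, since closely spaced piles share overlapping stress zones in the sandy soil.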
