571

Interpretable Fine-Grained Visual Categorization

Guo, Pei 16 June 2021 (has links)
Not all categories are created equal in object recognition. Fine-grained visual categorization (FGVC) is a branch of visual object recognition that aims to distinguish subordinate categories within a basic-level category. Examples include classifying an image of a bird into specific species like "Western Gull" or "California Gull". Such subordinate categories exhibit characteristics like small inter-category variation and large intra-class variation, making distinguishing them extremely difficult. To address such challenges, an algorithm should be able to focus on object parts and be invariant to object pose. Like many other computer vision tasks, FGVC has witnessed phenomenal advancement following the resurgence of deep neural networks. However, the proposed deep models are usually treated as black boxes. Network interpretation and understanding aims to unveil the features learned by neural networks and explain the reason behind network decisions. It is not only a necessary component for building trust between humans and algorithms, but also an essential step towards continuous improvement in this field. This dissertation is a collection of papers that contribute to FGVC and neural network interpretation and understanding. Our first contribution is an algorithm named Pose and Appearance Integration for Recognizing Subcategories (PAIRS) which performs pose estimation and generates a unified object representation as the concatenation of pose-aligned region features. As the second contribution, we propose the task of semantic network interpretation. For filter interpretation, we represent the concepts a filter detects using an attribute probability density function. We propose the task of semantic attribution using textual summarization that generates an explanatory sentence consisting of the most important visual attributes for decision-making, as found by a general Bayesian inference algorithm. 
Pooling has been a key component of convolutional neural networks and is of special interest in FGVC. Our third contribution is an empirical and experimental study providing a thorough yet intuitive understanding and an extensive benchmark of popular pooling approaches. Our fourth contribution is LMPNet, a novel network for weakly-supervised keypoint discovery. A novel leaky max pooling layer explicitly encourages the network to learn sparse feature maps, and a learnable clustering layer groups the keypoint proposals into final keypoint predictions. 2020 marks the 10th year since the beginning of fine-grained visual categorization, so it is of great importance to summarize the representative works in this domain. Our last contribution is a comprehensive survey of FGVC containing nearly 200 relevant papers that cover 7 common themes.
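The leaky max pooling idea can be illustrated with a minimal numpy sketch, assuming one plausible formulation (keep each channel's maximum, penalize the non-maximal activations by a small leak factor); the thesis's actual LMPNet layer may be defined differently:

```python
import numpy as np

def leaky_max_pool(feature_map, leak=0.01):
    """Global pooling sketch: keep the channel maximum, subtract a small
    fraction of the non-maximal activations. The negative leak penalizes
    non-peak values during training, encouraging sparse feature maps.
    (Hypothetical formulation, not the thesis's exact layer.)"""
    flat = feature_map.reshape(feature_map.shape[0], -1)  # (channels, H*W)
    peak = flat.max(axis=1)
    rest = flat.sum(axis=1) - peak       # mass outside the peak
    return peak - leak * rest

fmap = np.zeros((1, 4, 4))
fmap[0, 1, 2] = 5.0   # one sharp peak (the keypoint evidence)
fmap[0, 0, 0] = 1.0   # a weaker, spurious activation
pooled = leaky_max_pool(fmap, leak=0.1)
# 5.0 - 0.1 * 1.0 -> 4.9: the spurious activation lowers the score
```

A standard max pool would ignore the spurious activation entirely; the leak term gives the network a gradient reason to suppress it.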
572

Classification of brain tumors in weakly annotated histopathology images with deep learning

Hrabovszki, Dávid January 2021 (has links)
Brain and nervous system tumors were responsible for around 250,000 deaths worldwide in 2020. Correctly identifying different tumors is very important, because treatment options largely depend on the diagnosis. This is an expert task, but recently machine learning, and especially deep learning models, have shown huge potential in tumor classification problems and can provide fast and reliable support for pathologists in the decision-making process. This thesis investigates the classification of two brain tumors, glioblastoma multiforme and lower grade glioma, in high-resolution H&E-stained histology images using deep learning. The dataset is publicly available from TCGA, and 220 whole slide images were used in this study. Ground truth labels were only available at the whole-slide level, and the whole slide images, due to their large size, could not be processed directly by convolutional neural networks. Therefore, patches were extracted from the whole slide images in two sizes and fed into separate networks for training. Preprocessing steps ensured that irrelevant background information was excluded and that the images were stain normalized. The patch-level predictions were then combined to the slide level, and classification performance was measured on a test set. Experiments were conducted on the usefulness of pre-trained CNN models and data augmentation techniques, and the best method was selected after statistical comparisons. Following the patch-level training, five slide aggregation approaches were studied and compared to build a whole-slide classifier. The best performance was achieved with small patches (336 x 336 pixels), a pre-trained CNN model without frozen layers, and mirroring data augmentation. The majority-voting slide aggregation method resulted in the best whole-slide classifier, with 91.7% test accuracy and 100% sensitivity. In many comparisons, however, statistical significance could not be shown because of the relatively small size of the test set.
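The majority-voting slide aggregation can be sketched as follows; the `GBM`/`LGG` labels are illustrative stand-ins for the two tumor classes, and the real pipeline votes over CNN patch predictions rather than hand-made lists:

```python
from collections import Counter

def slide_prediction(patch_labels):
    """Majority vote over patch-level predictions for one whole slide.
    patch_labels: one predicted class label per extracted patch."""
    votes = Counter(patch_labels)
    return votes.most_common(1)[0][0]

# A slide whose patches are mostly classified as glioblastoma:
patches = ["GBM"] * 7 + ["LGG"] * 3
label = slide_prediction(patches)
# -> "GBM"
```

The appeal of majority voting is that a minority of misclassified patches (e.g. near the slide background or tumor boundary) cannot flip the slide-level decision.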
573

Concept Based Knowledge Discovery From Biomedical Literature

Radovanovic, Aleksandar January 2009 (has links)
Philosophiae Doctor - PhD / Advancement in biomedical research and the continuous growth of scientific literature available in electronic form call for innovative methods and tools for information management, knowledge discovery, and data integration. Many biomedical fields such as genomics, proteomics, metabolomics, and genetics, as well as emerging disciplines like systems biology and conceptual biology, require synergy between experimental, computational, data mining, and text mining technologies. The large amount of biomedical information available in repositories such as the US National Library of Medicine Bibliographic Database is a potential source of textual data for knowledge discovery. Text mining, the application of natural language processing and machine learning technologies to problems of knowledge discovery, is one of the most challenging fields in bioinformatics. This thesis introduces novel methods for knowledge discovery and presents a software system that extracts information from biomedical literature, reviews interesting connections between various biomedical concepts, and in so doing generates new hypotheses. The experimental results obtained using the methods described in this thesis are compared to currently published results obtained by other methods, and a number of case studies are described. This thesis shows how the technology presented can be integrated with researchers' own knowledge, experimentation, and observations for optimal progression of scientific research.
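The idea of generating hypotheses from connections between concepts can be illustrated with a Swanson-style ABC co-occurrence sketch (a deliberate simplification; the thesis's actual system is far richer). If concept A co-occurs with B in the literature, and B with C, but A and C never co-occur directly, the A-C link is a candidate hypothesis:

```python
def abc_hypotheses(ab_links, bc_links):
    """Swanson-style ABC discovery over co-occurrence links.
    ab_links / bc_links: dicts mapping a concept to the set of concepts
    it co-occurs with. Returns candidate (A, C) pairs that are only
    connected indirectly through some intermediate concept B."""
    hypotheses = set()
    for a, bs in ab_links.items():
        for b in bs:
            for c in bc_links.get(b, set()):
                if c != a and c not in ab_links.get(a, set()):
                    hypotheses.add((a, c))
    return hypotheses

# Swanson's classic example: fish oil -> blood viscosity -> Raynaud's
ab = {"fish oil": {"blood viscosity"}}
bc = {"blood viscosity": {"Raynaud's disease"}}
candidates = abc_hypotheses(ab, bc)
# -> {("fish oil", "Raynaud's disease")}
```

Real systems weight such links by co-occurrence statistics and filter them through controlled vocabularies; this sketch shows only the combinatorial core.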
574

A NEURAL-NETWORK-BASED CONTROLLER FOR MISSED-THRUST INTERPLANETARY TRAJECTORY DESIGN

Paul A Witsberger (12462006) 26 April 2022 (has links)
The missed-thrust problem is a modern challenge in the field of mission design. While some methods exist to quantify its effects, there is still room for improvement in algorithms that can fully anticipate and plan for a realistic set of missed-thrust events. The present work investigates the use of machine learning techniques to provide a robust controller for a low-thrust spacecraft. The spacecraft's thrust vector is provided by a neural network controller which guides the spacecraft to the target along a trajectory that is robust to missed thrust, and the controller does not need to re-optimize any trajectories if it veers off its nominal course. The algorithms used to train the controller to account for missed thrust are supervised learning and neuroevolution. Supervised learning entails showing a neural network many examples of what its inputs and outputs should look like, with the network learning over time to duplicate the patterns it has seen. Neuroevolution involves testing many neural networks on a problem and using the principles of biological evolution and survival of the fittest to produce increasingly competitive networks. Preliminary results show that a controller designed with these methods alone gives mixed results, but performance can be greatly boosted if the controller's output is used as an initial guess for an optimizer. With an optimizer, the success rate ranges from around 60% to 96% depending on the problem.

Additionally, this work analyzes a novel hyperbolic rendezvous strategy originally conceived by Dr. Buzz Aldrin. Instead of rendezvousing on the outbound leg of a hyperbolic orbit (traveling away from Earth), the spacecraft performs a rendezvous on the inbound leg (traveling towards Earth). This allows a relatively low Delta-v abort option for the spacecraft to return to Earth if a problem arises during rendezvous. Previous work on hyperbolic rendezvous has always assumed rendezvous on the outbound leg, because the total Delta-v (total propellant) required for the insertion alone is minimal with that strategy. However, I show that when an abort maneuver is taken into consideration, inserting on the inbound leg both requires lower Delta-v overall and provides an abort window that is up to a full day longer.
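The neuroevolution training loop described above can be sketched as follows, with a toy fitness function standing in for the trajectory simulator that would score a controller in the thesis's setting:

```python
import random

def evolve(fitness, dim, pop_size=20, generations=50, sigma=0.1):
    """Minimal neuroevolution: mutate weight vectors, keep the fittest
    quarter each generation, refill by Gaussian mutation of survivors.
    (Illustrative sketch; real neuroevolution also evolves topology
    and evaluates fitness on a full mission simulation.)"""
    random.seed(0)
    pop = [[random.uniform(-1, 1) for _ in range(dim)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 4]           # survival of the fittest
        pop = elite + [
            [w + random.gauss(0, sigma) for w in random.choice(elite)]
            for _ in range(pop_size - len(elite))
        ]
    return max(pop, key=fitness)

# Toy fitness: drive the "controller weights" toward a known target.
target = [0.5, -0.3]
best = evolve(lambda w: -sum((a - b) ** 2 for a, b in zip(w, target)),
              dim=2)
```

Because the elite survive unchanged, the best fitness never regresses; the Gaussian mutations provide the exploration.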
575

Use of Somatic Mutations for Classification of Endometrial Carcinomas with CpG Island Methylator Phenotype

Feige, Jonathan Robert 23 May 2022 (has links)
No description available.
576

Bilinear Gaussian Radial Basis Function Networks for classification of repeated measurements

Sjödin Hällstrand, Andreas January 2020 (has links)
The Growth Curve Model is a bilinear statistical model which can be used to analyse several groups of repeated measurements. Normally the Growth Curve Model is defined in such a way that the permitted sampling frequency of the repeated measurements is limited by the number of observed individuals in the data set. In this thesis, we examine the possibilities of utilizing highly frequently sampled measurements to increase classification accuracy for real-world data. That is, we look at the case where the regular Growth Curve Model is not defined, due to the relationship between the sampling frequency and the number of observed individuals. For this high-frequency data, we develop a new method of basis selection for the regression analysis, which yields what we call a Bilinear Gaussian Radial Basis Function Network (BGRBFN), and compare it to more conventional polynomial and trigonometric function bases. Finally, we examine whether Tikhonov regularization can be used to further increase classification accuracy in the high-frequency case. Our findings suggest that the BGRBFN performs better than the conventional methods in both classification accuracy and functional approximability. The results also suggest that both high-frequency data and Tikhonov regularization can be used to increase classification accuracy.
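The Gaussian radial basis underlying the BGRBFN can be sketched as a design matrix evaluated over the sampling times; this shows only the basis idea, not the bilinear model structure, and the centre/width choices here are illustrative:

```python
import math

def gaussian_rbf_design(times, centres, width):
    """Design matrix of Gaussian radial basis functions: entry (i, j) is
    exp(-((t_i - c_j) / width) ** 2), i.e. basis function j evaluated at
    sampling time i. Replacing a polynomial basis with such localized
    bumps is the kind of basis selection the thesis explores."""
    return [[math.exp(-((t - c) / width) ** 2) for c in centres]
            for t in times]

X = gaussian_rbf_design(times=[0.0, 0.5, 1.0],
                        centres=[0.0, 1.0],
                        width=0.5)
# Each basis function peaks at exactly 1.0 at its own centre.
```

Unlike polynomial bases, each Gaussian bump is local, so densely sampled measurements can be fitted without the columns of the design matrix becoming nearly collinear.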
577

Winner Prediction of Blood Bowl 2 Matches with Binary Classification

Gustafsson, Andreas January 2019 (has links)
Being able to predict the outcome of a game is useful in many respects: it can help designers understand how the game is played by the players, and it can help them balance the elements within the game. If one could predict the outcome of games with certainty, the design process could evolve into a more experiment-based approach, where cause and effect can be observed to some degree. It has previously been shown that it is possible to predict the outcomes of games with varying degrees of success. However, there is a lack of research that compares and evaluates several different models on the same domain with common aims. To narrow this identified gap, an experiment is conducted to compare and analyze seven different classifiers within the same domain. The classifiers are ranked on accuracy against each other with the help of appropriate statistical methods. The classifiers compete on the task of predicting which team will win or lose a match of the game Blood Bowl 2. For nuance, three different datasets are made for the models to be trained on. While the results vary between the models on the various datasets, an identifiable common pattern emerges in the statistical rejections. The results also indicate strong accuracy for the Support Vector Machine and Logistic Regression across all the datasets.
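Ranking classifiers on accuracy "with appropriate statistical methods" can be sketched with an exact McNemar test on paired predictions; this is one common choice for comparing two classifiers on the same test matches, not necessarily the tests used in the thesis:

```python
from math import comb

def mcnemar_exact_p(b, c):
    """Exact (binomial) McNemar test for paired classifiers.
    b: matches classifier A predicted correctly and B incorrectly;
    c: the reverse. Under the null, discordant outcomes split 50/50,
    so the two-sided p-value comes from Binomial(b + c, 0.5)."""
    n = b + c
    k = min(b, c)
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# 10 discordant matches: A right where B is wrong on 9, reverse on 1.
p = mcnemar_exact_p(9, 1)
# p = 2 * (C(10,0) + C(10,1)) / 2**10 = 22/1024 ≈ 0.021
```

A p-value this small would let A's accuracy advantage over B count as a statistically significant "rejection" in a pairwise ranking.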
578

Stronger Together? An Ensemble of CNNs for Deepfakes Detection / Starkare Tillsammans? En Ensemble av CNNs för att Identifiera Deepfakes

Gardner, Angelica January 2020 (has links)
Deepfakes technology is a face-swap technique that enables anyone to replace faces in a video with highly realistic results. Despite its usefulness, if used maliciously this technique can have a significant impact on society, for instance through the spreading of fake news or cyberbullying. This makes deepfakes detection a problem of utmost importance. In this paper, I tackle the problem of deepfakes detection by identifying deepfakes forgeries in video sequences. Inspired by the state-of-the-art, I study the ensembling of different machine learning solutions built on convolutional neural networks (CNNs) and use these models as objects for comparison between ensemble and single-model performance. Existing work in the research field of deepfakes detection suggests that the escalated challenges posed by modern deepfake videos make detection increasingly difficult for existing methods. I evaluate that claim by testing the detection performance of four single CNN models as well as six stacked ensembles on three modern deepfakes datasets. I compare various ensemble approaches for combining single models and for how their predictions should be incorporated into the ensemble output. The results I found were that the best approach for deepfakes detection is to create an ensemble, though the choice of ensemble approach plays a crucial role in detection performance. The final proposed solution is an ensemble of all available single models that uses soft (weighted) voting to combine its base learners' predictions. Results show that this proposed solution significantly improved deepfakes detection performance and substantially outperformed all single models.
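The soft (weighted) voting used to combine the base learners can be sketched as follows; the probabilities, weights, and 0.5 threshold are illustrative, not the thesis's exact configuration:

```python
def soft_vote(probabilities, weights, threshold=0.5):
    """Weighted soft voting: average the base learners' predicted
    probabilities for the 'deepfake' class, weighted per model, then
    threshold the averaged probability."""
    total = sum(weights)
    avg = sum(p * w for p, w in zip(probabilities, weights)) / total
    return avg, avg >= threshold

# Three CNNs disagree; the higher-weighted models dominate the vote.
probs = [0.9, 0.8, 0.3]
weights = [1.0, 1.0, 0.5]
avg, is_fake = soft_vote(probs, weights)
# (0.9 + 0.8 + 0.15) / 2.5 = 0.74 -> classified as deepfake
```

Unlike hard (majority) voting, soft voting lets a confident model outvote two lukewarm ones, which is why the weighting of base learners matters so much to ensemble performance.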
579

Semi-Supervised Learning Algorithm for Large Datasets Using Spark Environment

Kacheria, Amar January 2021 (has links)
No description available.
580

Dynamic Information Density for Image Classification in an Active Learning Framework

Morgan, Joshua Edward 01 May 2020 (has links)
No description available.
