1

Sheaf Theory as a Foundation for Heterogeneous Data Fusion

Mansourbeigi, Seyed M-H 01 December 2018 (has links)
A major impediment to scientific progress in many fields is the inability to make sense of the huge amounts of data that have been collected via experiment or computer simulation. This dissertation provides tools to visualize, represent, and analyze the collection of sensors and data all at once in a single combinatorial geometric object. Encoding and translating heterogeneous data into a common language are modeled by supporting objects. In this methodology, the behavior of the system, based on the detection of noise, possible failures in data exchange, and the recognition of redundant or complementary sensors, is studied via related geometric objects. Applications of the constructed methodology are described by two case studies: one from wildfire threat monitoring and the other from air traffic monitoring. Both cases are distributed (spatial and temporal) information systems. The systems deal with temporal and spatial fusion of heterogeneous data obtained from multiple sources, where the schema, availability, and quality vary. The behavior of both systems is explained thoroughly in terms of the detection of failures in the systems and the recognition of redundant and complementary sensors. A comparison between the methodology in this dissertation and alternative methods is described to further verify the validity of the sheaf theory method, which is shown to have lower computational complexity in both space and time.
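The core fusion idea, checking that overlapping sensors agree once their data are restricted to a common region, can be illustrated with a minimal sketch. Everything below (the sensor names, readings, and tolerance) is hypothetical and is not the dissertation's implementation; it only shows how restriction maps and a consistency check can flag noise or a failure in data exchange.

```python
import numpy as np

# Toy sensor network: two sensors report temperature over overlapping regions A and B.
# The sheaf assigns data to each region and restriction maps to the overlap;
# fused data must agree (approximately) on that overlap.

reading_A = {"temp_overlap": 21.4, "temp_A_only": 19.0}  # section over region A
reading_B = {"temp_overlap": 21.9, "temp_B_only": 23.5}  # section over region B

def restrict_A(section):
    """Restriction map A -> (A intersect B): keep only the overlap measurement."""
    return section["temp_overlap"]

def restrict_B(section):
    """Restriction map B -> (A intersect B): keep only the overlap measurement."""
    return section["temp_overlap"]

# Inconsistency of the assignment: how far the restricted sections are from agreeing.
inconsistency = abs(restrict_A(reading_A) - restrict_B(reading_B))
print(f"Inconsistency on the overlap: {inconsistency:.2f}")

# A small inconsistency suggests the sensors are redundant/complementary and can be
# fused; a large one flags noise or a possible failure in data exchange.
THRESHOLD = 1.0  # hypothetical tolerance
print("Fusion OK" if inconsistency <= THRESHOLD else "Possible sensor failure")
```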
2

Topological Data Analysis of Properties of Four-Regular Rigid Vertex Graphs

Conine, Grant Mcneil 24 June 2014 (has links)
Homologous DNA recombination and rearrangement have been modeled with a class of four-regular rigid vertex graphs called assembly graphs, which can also be represented by double occurrence words. Various invariants have been suggested for these graphs, some based on the structure of the graphs and some biologically motivated. In this thesis we use a novel method of data analysis based on a technique known as partial clustering and an algorithm known as Mapper to examine the relationships between these invariants. We introduce some of the basic machinery of topological data analysis, including the construction of simplicial complexes on a data set, clustering analysis, and the workings of the Mapper algorithm. We define assembly graphs and three specific invariants of these graphs: assembly number, nesting index, and genus range. We apply Mapper to the set of all assembly graphs with up to 6 vertices and compare relationships between these three properties. We make several observations based upon the results of this analysis and conclude with suggestions for further research based upon our findings.
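For readers unfamiliar with the Mapper workflow referenced above, the following is a minimal sketch using the open-source kepler-mapper library on synthetic data; the lens, cover parameters, and clusterer are illustrative choices, not those used in the thesis.

```python
import kmapper as km
from sklearn import cluster, datasets

# Synthetic point cloud standing in for a feature matrix (e.g. graph invariants).
data, _ = datasets.make_circles(n_samples=500, noise=0.05, factor=0.4)

mapper = km.KeplerMapper(verbose=0)

# Lens (filter function): here, projection onto the first coordinate.
lens = mapper.fit_transform(data, projection=[0])

# Cover the lens range with overlapping intervals, cluster each preimage,
# and connect clusters that share points to obtain the Mapper graph.
graph = mapper.map(
    lens,
    data,
    cover=km.Cover(n_cubes=10, perc_overlap=0.3),
    clusterer=cluster.DBSCAN(eps=0.3, min_samples=5),
)

# Write an interactive HTML visualization of the resulting Mapper graph.
mapper.visualize(graph, path_html="mapper_output.html", title="Mapper sketch")
```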
3

Persistent homology for the quantification of prostate cancer morphology in two and three-dimensional histology

Lawson, Peter January 2020 (has links)
The current system for evaluating prostate cancer architecture is the Gleason grading system, which divides the morphology of cancer into five distinct architectural patterns, labeled numerically in increasing levels of aggressiveness, and generates a score by summing the labels of the two most dominant patterns. The Gleason score is currently the most powerful prognostic predictor of patient outcomes; however, it suffers from problems in reproducibility and consistency due to high intra-observer and inter-observer variability among pathologists. In addition, the Gleason system lacks the granularity to address potentially prognostic architectural features beyond Gleason patterns. We look towards persistent homology, a tool from topological data analysis, to provide a means of evaluating prostate cancer glandular architecture. The objective of this work is to demonstrate the capacity of persistent homology to capture architectural features independently of Gleason patterns in a representation suitable for unsupervised and supervised machine learning. Specifically, using persistent homology we compute topological representations of purely graded prostate cancer histopathology images of Gleason patterns and show that discrete representations of persistent homology are capable of clustering prostate cancer histology into architectural groups in both two-dimensional and three-dimensional histopathology. We then demonstrate the performance of persistent homology-based features in common machine learning classifiers, indicating that persistent homology not only separates unique architectures in prostate cancer but is also predictive of prostate cancer aggressiveness. Our results indicate the ability of persistent homology to cluster histology into unique groups whose dominant architectural patterns are consistent with the continuum of Gleason patterns. Of particular interest is the sensitivity of persistent homology in identifying specific sub-architectural groups within single Gleason patterns, suggesting that persistent homology could represent a robust quantification method for prostate cancer architecture with higher granularity than the existing semi-quantitative measures. This work develops a framework for segregating prostate cancer aggressiveness by architectural subtype using topological representations in a supervised machine learning setting, and lays the groundwork for augmenting traditional approaches with topological features for improved diagnosis and prognosis.
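As an illustration of the kind of pipeline described above, the sketch below computes sublevel-set cubical persistent homology of grayscale image patches with the GUDHI library and feeds simple summary features to a standard classifier. The featurization (total persistence and bar counts), the random data, and the labels are assumptions for demonstration, not the representation or data used in the dissertation.

```python
import numpy as np
import gudhi
from sklearn.ensemble import RandomForestClassifier

def topological_features(image):
    """Sublevel-set cubical persistence of a grayscale image -> simple summary features.

    Illustrative featurization only: total persistence and number of finite bars
    in dimensions 0 and 1.
    """
    cc = gudhi.CubicalComplex(top_dimensional_cells=image)
    cc.persistence()  # compute the persistence diagram of the cubical filtration
    feats = []
    for dim in (0, 1):
        bars = cc.persistence_intervals_in_dimension(dim)
        lifetimes = [death - birth for birth, death in bars if np.isfinite(death)]
        feats.append(float(np.sum(lifetimes)))  # total persistence in this dimension
        feats.append(float(len(lifetimes)))     # number of finite bars
    return np.array(feats)

# Hypothetical data: small grayscale patches with binary aggressiveness labels.
rng = np.random.default_rng(0)
images = rng.random((40, 32, 32))
labels = rng.integers(0, 2, size=40)

X = np.stack([topological_features(img) for img in images])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
print("Training accuracy:", clf.score(X, labels))
```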
4

Diagnostic Analysis of Postural Data using Topological Data Analysis

Siegrist, Kyle W. 02 August 2019 (has links)
No description available.
5

A Progressive Refinement of Postural Human Balance Models Based on Experimental Data Using Topological Data Analysis

Larson, Michael Andrew 31 July 2020 (has links)
No description available.
6

Topological Analysis of Averaged Sentence Embeddings

Holmes, Wesley J. January 2020 (has links)
No description available.
7

Unravel the Geometry and Topology behind Noisy Networks

Tian, Minghao January 2020 (has links)
No description available.
8

Computing Topological Features for Data Analysis

Shi, Dayu January 2017 (has links)
No description available.
9

NETWORK AND TOPOLOGICAL ANALYSIS OF SCHOLARLY METADATA: A PLATFORM TO MODEL AND PREDICT COLLABORATION

Lance C Novak (7043189) 15 August 2019 (has links)
The scale of the scholarly community complicates searches within scholarly databases, necessitating keywords to index the topics of any given work. As a result, an author's choice of keywords affects the visibility of each publication, making the sum of these choices a key representation of the author's academic profile. As such, the underlying network of investigators is often viewed through the lens of their keyword networks. Current keyword networks connect publications only if they use the exact same keyword, meaning uncontrolled keyword choice prevents connections despite semantic similarity. Computational understanding of semantic similarity has already been achieved through the process of word embedding, which transforms words into numerical vectors with context-correlated values. The resulting vectors preserve semantic relations and can be analyzed mathematically. Here we develop a model that uses embedded keywords to construct a network that circumvents the limitations caused by uncontrolled vocabulary. The model pipeline begins with a set of faculty, whose publications and keywords are retrieved via the SCOPUS API. These keywords are processed and then embedded. This work develops a novel method of network construction that leverages the interdisciplinarity of each publication, resulting in a unique network construction for any given set of publications. Post-construction, the network is visualized and analyzed with topological data analysis (TDA). TDA is used to calculate the connectivity and the holes within the network, referred to as the zeroth and first homology. These homologies inform how each author connects and where publication data is sparse. This platform has successfully modelled collaborations within the biomedical department at Purdue University and provides insight into potential future collaborations.
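A minimal sketch of the homology computation described above: given keyword embedding vectors (random stand-ins here, in place of real embeddings), build a semantic distance matrix and compute the zeroth and first homology with the ripser library. The embedding dimension, threshold, and data are hypothetical.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from ripser import ripser

# Hypothetical stand-in for real keyword embeddings; each row is one keyword vector.
rng = np.random.default_rng(1)
keyword_vectors = rng.normal(size=(50, 100))

# Semantic distance between keywords: cosine distance on the embedded vectors.
D = squareform(pdist(keyword_vectors, metric="cosine"))

# Vietoris-Rips persistent homology of the keyword network, up to dimension 1.
dgms = ripser(D, distance_matrix=True, maxdim=1)["dgms"]
h0, h1 = dgms[0], dgms[1]

# Zeroth homology: connectivity, i.e. components still separate at a chosen scale.
scale = 0.6  # hypothetical distance threshold
components_at_scale = int(np.sum(h0[:, 1] > scale))
print(f"Connected components at scale {scale}: {components_at_scale}")

# First homology: holes in the network, pointing to regions where data is sparse.
print(f"One-dimensional holes detected: {len(h1)}")
```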
10

Classifying RGB Images with multi-colour Persistent Homology

Byttner, Wolf January 2019 (has links)
In Image Classification, pictures of the same type of object can have very different pixel values. Traditional norm-based metrics therefore fail to identify objects in the same category. Topology is a branch of mathematics that deals with homeomorphic spaces by discarding length. With topology, we can discover patterns in the image that are invariant to rotation, translation and warping. Persistent Homology is a new approach in Applied Topology that studies the presence of connected regions and holes in an image. It has been used successfully for image segmentation and classification [12]. However, current approaches in image classification require a grayscale image to generate the persistence modules. This means information encoded in colour channels is lost. This thesis investigates whether the red, green and blue colour channels of an RGB image hold additional information that could help algorithms classify pictures. We apply two recent methods, one by Adams [2] and the other by Hofer [25], on the CUB-200-2011 birds dataset [40] and find that Hofer's method produces significant results. Additionally, a modified method based on Hofer's that uses the RGB colour channels produces significantly better results than the baseline, with over 48 % of images correctly classified, compared to 44 %, and with a more significant improvement at lower resolutions. This indicates that colour channels do provide significant new information and that generating one persistence module per colour channel is a viable approach to RGB image classification.
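The per-channel idea can be sketched with the GUDHI library as follows: compute one set of cubical persistence diagrams for each of the red, green, and blue channels rather than a single grayscale diagram. The input image and the sublevel-set filtration are assumptions for illustration, not the exact pipeline of the thesis.

```python
import numpy as np
import gudhi

def per_channel_persistence(rgb_image):
    """Compute persistence diagrams (dimensions 0 and 1) for each colour channel.

    Illustrative sketch of the per-channel approach: sublevel-set cubical
    persistence of each channel treated as a grayscale image.
    """
    diagrams = {}
    for idx, name in enumerate(("red", "green", "blue")):
        channel = rgb_image[:, :, idx].astype(float)
        cc = gudhi.CubicalComplex(top_dimensional_cells=channel)
        cc.persistence()  # compute persistence pairs for this channel
        diagrams[name] = {
            dim: cc.persistence_intervals_in_dimension(dim) for dim in (0, 1)
        }
    return diagrams

# Hypothetical 64x64 RGB image with values in [0, 1].
rng = np.random.default_rng(2)
image = rng.random((64, 64, 3))

dgms = per_channel_persistence(image)
for name, by_dim in dgms.items():
    print(name, "H0 bars:", len(by_dim[0]), "H1 bars:", len(by_dim[1]))
```

The resulting per-channel diagrams can then be vectorized (for example, with persistence images) and concatenated into a single feature vector for a classifier.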
