1

Discrete Nodal Domain Theorems

18 May 2001
No description available.
2

Discrete Nodal Domain Theorems

Davies, Brian E., Gladwell, Graham M. L., Leydold, Josef, Stadler, Peter F. January 2000 (PDF)
We give a detailed proof for two discrete analogues of Courant's Nodal Domain Theorem. (author's abstract) / Series: Preprint Series / Department of Applied Statistics and Data Processing
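For orientation, the two bounds in question can be summarized as follows; this is our hedged paraphrase of the published theorem, not a quotation from the paper:

```latex
% Hedged paraphrase of the Discrete Nodal Domain Theorem: f an eigenfunction
% of the k-th eigenvalue \lambda_k (eigenvalues in non-decreasing order) of a
% generalized Laplacian of a connected graph, r the multiplicity of \lambda_k.
\#\{\text{weak nodal domains of } f\} \;\le\; k,
\qquad
\#\{\text{strong nodal domains of } f\} \;\le\; k + r - 1 .
```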
3

Graphs of Given Order and Size and Minimum Algebraic Connectivity

Biyikoglu, Türker, Leydold, Josef 10 1900 (PDF)
The structure of connected graphs of given size and order that have minimal algebraic connectivity is investigated. It is shown that they must consist of a chain of cliques. Moreover, an upper bound for the number of maximal cliques of size 2 or larger is derived. (author's abstract) / Series: Research Report Series / Department of Statistics and Mathematics
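For readers who want to experiment, here is a minimal sketch (our code, not the authors') that builds one plausible "chain of cliques" and computes its algebraic connectivity, i.e., the second-smallest Laplacian eigenvalue. The helper names and the vertex-sharing gluing rule are our own assumptions, not the paper's construction:

```python
import networkx as nx
import numpy as np

def algebraic_connectivity(G):
    """Second-smallest eigenvalue of the combinatorial Laplacian of G."""
    L = nx.laplacian_matrix(G).toarray().astype(float)
    return float(np.linalg.eigvalsh(L)[1])  # eigvalsh returns ascending order

def chain_of_cliques(sizes):
    """Cliques of the given sizes glued in a path, consecutive cliques sharing one vertex."""
    G = nx.Graph()
    start = 0
    for s in sizes:
        G.add_edges_from((u, v) for u in range(start, start + s)
                                for v in range(u + 1, start + s))
        start += s - 1  # last vertex of this clique is shared with the next
    return G

G = chain_of_cliques([4, 3, 4])
print(G.number_of_nodes(), G.number_of_edges(), algebraic_connectivity(G))
```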
4

Discrete Nodal Domain Theorems

Davies, Brian E., Leydold, Josef, Stadler, Peter F. January 2000 (PDF)
We give a detailed proof for two discrete analogues of Courant's Nodal Domain Theorem. (author's abstract) / Series: Preprint Series / Department of Applied Statistics and Data Processing
5

Feature network methods for machine learning

Mu, Xinying 17 February 2021
We develop a graph structure for feature vectors in machine learning, which we denote as a feature network (FN); this differs from sample-based networks, in which nodes simply represent samples. FNs reveal the underlying relationships among feature vector components and re-represent features as functions on a network. Our study focuses on using FN structures to extract underlying information and thus improve machine learning performance. Once feature vectors are represented as such functions, graph signal processing, or graph functional analytic, techniques can be applied, including analytic operations such as differentiation and integration of feature vectors. Our motivation originated in a study of infrared spectroscopy data, where domain experts prefer the second-derivative information to the original data; this illustrates the potential power of understanding the underlying feature structure.

We begin by developing a classification method based on the premise that data from different classes (e.g., different cancer subtypes) have distinct underlying graph structures, for graphs with genes as nodes and gene covariances as edges. That is, a feature vector from one class will tend to be "smooth" on the related FN and to "fluctuate" on the other FNs. This method, which uses a set of features entirely different from standard ones, on its own somewhat outperforms SVM and KNN in classifying cancer subtypes on infrared spectroscopy data and gene expression data. It also effectively projects high-dimensional data into a low-dimensional representation of graph smoothness, providing a distinctive form of data visualization.

Additionally, FNs represent a new way of thinking about data. With a graph structure on feature vectors, graphical functional analysis can extract various types of information not apparent in the original feature vectors. Specifically, operations such as calculus, Fourier transforms, and convolutions can be performed in the graph vertex domain. We introduce a family of calculus-like operators in reproducing kernel Hilbert spaces for feature vector regularization, addressing two types of data deficiency that we designate as noise and blurring. These operations generalize ones widely used in computer vision. The derivative operations on feature vectors provide additional information by amplifying differences between highly correlated features, while integrating feature vectors smooths and denoises them. Applications show that these denoising and deblurring operators can improve classification algorithms. Finally, the feature network combines naturally with deep learning and extends to graph convolutional networks: we propose a deep multiscale clustering structure with low learning complexity on general graph distance structures. This framework substantially reduces the number of parameters and allows general machine learning algorithms, such as SVM, to serve as feed-forward components in the deep structure.
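To make the smoothness idea concrete, here is a toy sketch (our illustration, not code from the thesis) that builds a feature-network Laplacian per class from a covariance estimate and assigns a sample to the class on whose network it is smoothest. All names and the covariance threshold are hypothetical:

```python
import numpy as np

def laplacian_from_cov(C, threshold=0.5):
    """Feature-network Laplacian from a covariance matrix: connect features
    whose absolute covariance exceeds a threshold (hypothetical rule)."""
    A = (np.abs(C) > threshold).astype(float)
    np.fill_diagonal(A, 0.0)
    return np.diag(A.sum(axis=1)) - A

def smoothness(x, L):
    """Laplacian quadratic form x^T L x: small when x varies little across edges."""
    return float(x @ L @ x)

def classify(x, laplacians):
    """Predict the class whose feature network makes x smoothest."""
    return min(laplacians, key=lambda c: smoothness(x, laplacians[c]))

# Hypothetical usage, given per-class covariance estimates C_by_class = {"A": ..., "B": ...}:
# laplacians = {c: laplacian_from_cov(C) for c, C in C_by_class.items()}
# label = classify(sample, laplacians)
```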
6

Largest Laplacian Eigenvalue and Degree Sequences of Trees

Biyikoglu, Türker, Hellmuth, Marc, Leydold, Josef January 2008 (PDF)
We investigate the structure of trees that have greatest maximum eigenvalue among all trees with a given degree sequence. We show that in such an extremal tree the degree sequence is non-increasing with respect to an ordering of the vertices that is obtained by breadth-first search. This structure is uniquely determined up to isomorphism. We also show that the maximum eigenvalue in such classes of trees is strictly monotone with respect to majorization. (author's abstract) / Series: Research Report Series / Department of Statistics and Mathematics
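As an illustration (our code, not the authors'), the following sketch assigns a non-increasing degree sequence to vertices in breadth-first order, the extremal structure described above, and computes the resulting Laplacian spectral radius; the construction details are our own reading of the abstract:

```python
from collections import deque
import networkx as nx
import numpy as np

def greedy_bfs_tree(degrees):
    """Realize a degree sequence as a tree whose degrees are non-increasing in BFS order."""
    degrees = sorted(degrees, reverse=True)
    G = nx.Graph()
    G.add_node(0)
    queue, nxt = deque([0]), 1
    while queue and nxt < len(degrees):
        v = queue.popleft()
        want = degrees[v] - (0 if v == 0 else 1)  # one edge already used to reach v
        for _ in range(want):
            if nxt >= len(degrees):
                break
            G.add_edge(v, nxt)
            queue.append(nxt)
            nxt += 1
    return G

def laplacian_spectral_radius(G):
    L = nx.laplacian_matrix(G).toarray().astype(float)
    return float(np.linalg.eigvalsh(L)[-1])

# Degree sequence sums to 2(n-1), so it is realizable as a tree on n = 10 vertices.
T = greedy_bfs_tree([3, 3, 3, 2, 2, 1, 1, 1, 1, 1])
print(nx.is_tree(T), laplacian_spectral_radius(T))
```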
7

Hypoelliptic Diffusion Maps and Their Applications in Automated Geometric Morphometrics

Gao, Tingran January 2015
We introduce Hypoelliptic Diffusion Maps (HDM), a novel semi-supervised machine learning framework for the analysis of collections of anatomical surfaces. Triangular meshes obtained from discretizing these surfaces are high-dimensional, noisy, and unorganized, which makes it difficult to consistently extract robust geometric features for the whole collection. Traditionally, biologists put equal numbers of "landmarks" on each mesh and study the "shape space" with this fixed number of landmarks to understand patterns of shape variation in the collection of surfaces; we propose here a correspondence-based, landmark-free approach that automates this process while maintaining morphological interpretability. Our methodology avoids explicit feature extraction and is thus related to kernel methods, but the equivalent notion of "kernel function" takes values in pairwise correspondences between triangular meshes in the collection. Under the assumption that the data set is sampled from a fibre bundle, we show that the new graph Laplacian defined in the HDM framework is the discrete counterpart of a class of hypoelliptic partial differential operators.

This thesis is organized as follows: Chapter 1 is the introduction; Chapter 2 describes the correspondences between anatomical surfaces used in this research; Chapters 3 and 4 discuss the HDM framework in detail; Chapter 5 illustrates some interesting applications of this framework in geometric morphometrics. / Dissertation
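HDM itself operates on pairwise correspondences between meshes, but for orientation here is a sketch of the standard diffusion-maps construction it generalizes (our code; the Gaussian kernel and the alpha = 1 density normalization are our assumptions):

```python
import numpy as np

def diffusion_maps(X, eps=1.0, n_coords=2, t=1):
    """X: (n_samples, n_features) -> first n_coords nontrivial diffusion coordinates."""
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    K = np.exp(-D2 / eps)                                # Gaussian kernel
    q = K.sum(axis=1)
    K = K / np.outer(q, q)                               # density normalization (alpha = 1)
    d = K.sum(axis=1)
    S = K / np.sqrt(np.outer(d, d))   # symmetric conjugate of the Markov matrix D^-1 K
    w, V = np.linalg.eigh(S)
    w, V = w[::-1], V[:, ::-1]        # descending eigenvalues
    Psi = V / V[:, [0]]               # eigenvectors of D^-1 K; Psi[:, 0] is constant
    return (w[1:n_coords + 1] ** t) * Psi[:, 1:n_coords + 1]
```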
8

Graph Laplacians, Nodal Domains, and Hyperplane Arrangements

Biyikoglu, Türker, Hordijk, Wim, Leydold, Josef, Pisanski, Tomaz, Stadler, Peter F. January 2002 (PDF)
Eigenvectors of the Laplacian of a graph G have received increasing attention in the recent past. Here we investigate their so-called nodal domains, i.e., the connected components of the maximal induced subgraphs of G on which an eigenvector ψ does not change sign. An analogue of Courant's nodal domain theorem provides upper bounds on the number of nodal domains depending on the location of ψ in the spectrum. This bound, however, is not sharp in general. In this contribution we consider the problem of computing minimal and maximal numbers of nodal domains for a particular graph. The class of Boolean Hypercubes is discussed in detail. We find that, despite the simplicity of this graph class, for which complete spectral information is available, the computations are still non-trivial. Nevertheless, we obtained some new results and a number of conjectures. (author's abstract) / Series: Preprint Series / Department of Applied Statistics and Data Processing
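For readers who want to reproduce such counts, here is a small sketch (our code) that counts strong nodal domains, i.e., the sign-consistent connected components described above, for each Laplacian eigenvector of the 3-dimensional hypercube. Eigenvectors within degenerate eigenspaces are basis-dependent, so counts there can vary:

```python
import networkx as nx
import numpy as np

def strong_nodal_domains(G, psi, tol=1e-9):
    """Components of the subgraphs induced on strictly positive and strictly negative vertices."""
    pos = [v for v in G if psi[v] > tol]
    neg = [v for v in G if psi[v] < -tol]
    return (nx.number_connected_components(G.subgraph(pos)) +
            nx.number_connected_components(G.subgraph(neg)))

# Example: the Boolean hypercube Q3, relabeled with integer vertices 0..7.
G = nx.convert_node_labels_to_integers(nx.hypercube_graph(3), ordering="sorted")
L = nx.laplacian_matrix(G).toarray().astype(float)
w, V = np.linalg.eigh(L)
for k in range(len(w)):
    print(k + 1, round(w[k], 6), strong_nodal_domains(G, V[:, k]))
```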
9

Semiregular Trees with Minimal Laplacian Spectral Radius

Biyikoglu, Türker, Leydold, Josef January 2009 (PDF)
A semiregular tree is a tree where all non-pendant vertices have the same degree. Among all semiregular trees with fixed order and degree, a graph with minimal (adjacency / Laplacian) spectral radius is a caterpillar. Counterexamples show that the result cannot be generalized to the class of trees with a given (non-constant) degree sequence. / Series: Research Report Series / Department of Statistics and Mathematics
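A quick numerical illustration of the statement (our construction, not the paper's): both trees below are semiregular with internal degree 3 and the same order, and per the result the caterpillar should attain the smaller Laplacian spectral radius:

```python
import networkx as nx
import numpy as np

def lap_radius(G):
    L = nx.laplacian_matrix(G).toarray().astype(float)
    return float(np.linalg.eigvalsh(L)[-1])

def semiregular_from_internal(T_int, d):
    """Pad every vertex of an internal tree with pendant leaves until it has degree d."""
    G = T_int.copy()
    nxt = G.number_of_nodes()
    for v in list(G.nodes()):
        while G.degree(v) < d:
            G.add_edge(v, nxt)
            nxt += 1
    return G

path5 = nx.path_graph(5)                              # path of internal vertices -> caterpillar
spider = nx.Graph([(0, 1), (0, 2), (0, 3), (1, 4)])   # branched internal tree
cat = semiregular_from_internal(path5, 3)
alt = semiregular_from_internal(spider, 3)
print(cat.number_of_nodes(), lap_radius(cat))   # same order and internal degree ...
print(alt.number_of_nodes(), lap_radius(alt))   # ... caterpillar radius should not exceed this
```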
10

Nodal Domain Theorems and Bipartite Subgraphs

Biyikoglu, Türker, Leydold, Josef, Stadler, Peter F. January 2005 (PDF)
The Discrete Nodal Domain Theorem states that an eigenfunction of the k-th largest eigenvalue of a generalized graph Laplacian has at most k (weak) nodal domains. We show that the number of strong nodal domains cannot exceed the size of a maximal induced bipartite subgraph and that this bound is sharp for generalized graph Laplacians. Similarly, the number of weak nodal domains is bounded by the size of a maximal bipartite minor. (author's abstract) / Series: Preprint Series / Department of Applied Statistics and Data Processing
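In symbols, and writing size(·) exactly as the abstract does (the paper fixes the precise measure), the two bounds read:

```latex
% S(f), W(f): numbers of strong and weak nodal domains of an eigenfunction f
% of a generalized graph Laplacian of G; a paraphrase of the abstract above.
S(f) \;\le\; \max\{\operatorname{size}(H) : H \text{ an induced bipartite subgraph of } G\},
\qquad
W(f) \;\le\; \max\{\operatorname{size}(H) : H \text{ a bipartite minor of } G\}.
```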
