About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

A non-asymptotic study of low-rank estimation of smooth kernels on graphs

Rangel Walteros, Pedro Andres 12 January 2015 (has links)
This dissertation investigates the problem of estimating a kernel over a large graph based on a sample of noisy observations of linear measurements of the kernel. We are interested in solving this estimation problem in the case when the sample size is much smaller than the ambient dimension of the kernel. As is typical in high-dimensional statistics, we are able to design a suitable estimator based on a small number of samples only when the target kernel belongs to a subset of restricted complexity. In our study, we restrict the complexity by considering scenarios where the target kernel is both low-rank and smooth over a graph. Using standard tools of non-parametric estimation, we derive a minimax lower bound on the least squares error in terms of the rank and the degree of smoothness of the target kernel. To establish the optimality of this lower bound, we develop matching upper bounds on the error of a least-squares estimator based on a non-convex penalty. The proof of these upper bounds relies on bounds for estimators over uniformly bounded function classes in terms of Rademacher complexities. We also propose a computationally tractable estimator based on least squares with a convex penalty, and derive an upper bound for it in terms of a coherence function introduced in this work. Finally, we present scenarios in which this upper bound achieves a near-optimal rate. The motivation for studying such problems comes from real-world applications such as recommender systems and social network analysis.
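
The abstract describes estimators that combine a low-rank penalty with smoothness over a graph. As a rough illustration of the convex-penalty variant only — not the dissertation's actual estimator, with all function names and parameters invented for the example — a proximal-gradient sketch in Python/NumPy might look like this, assuming the measurements are noisy observed entries of the kernel and smoothness is penalized through a graph Laplacian L:

```python
import numpy as np

def svt(M, tau):
    """Singular-value soft-thresholding: proximal operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def estimate_kernel(Y, mask, L, lam=1.0, gamma=0.1, step=0.2, iters=300):
    """Hypothetical proximal-gradient estimator for a kernel K observed through
    noisy entries Y on the index set `mask`, combining a nuclear-norm penalty
    (low rank) with a graph-smoothness penalty gamma * tr(K^T L K)."""
    K = np.zeros_like(Y)
    for _ in range(iters):
        grad = mask * (K - Y) + 2.0 * gamma * (L @ K)  # data fit + smoothness
        K = svt(K - step * grad, step * lam)           # low-rank proximal step
    return K
```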
2

Large Scale Matrix Completion and Recommender Systems

Amadeo, Lily 04 September 2015 (has links)
"The goal of this thesis is to extend the theory and practice of matrix completion algorithms, and how they can be utilized, improved, and scaled up to handle large data sets. Matrix completion involves predicting missing entries in real-world data matrices using the modeling assumption that the fully observed matrix is low-rank. Low-rank matrices appear across a broad selection of domains, and such a modeling assumption is similar in spirit to Principal Component Analysis. Our focus is on large scale problems, where the matrices have millions of rows and columns. In this thesis we provide new analysis for the convergence rates of matrix completion techniques using convex nuclear norm relaxation. In addition, we validate these results on both synthetic data and data from two real-world domains (recommender systems and Internet tomography). The results we obtain show that with an empirical, data-inspired understanding of various parameters in the algorithm, this matrix completion problem can be solved more efficiently than some previous theory suggests, and therefore can be extended to much larger problems with greater ease. "
3

Novel adaptive reconstruction schemes for accelerated myocardial perfusion magnetic resonance imaging

Lingala, Sajan Goud 01 December 2013 (has links)
Coronary artery disease (CAD) is one of the leading causes of death in the world. In the United States alone, it is estimated that approximately every 25 seconds a new CAD event occurs, and approximately every minute someone dies of one. Detecting CAD in its early stages is critical to reducing mortality rates. Magnetic resonance imaging of myocardial perfusion (MR-MPI) has received significant attention over the last decade due to its ability to provide a unique view of the microcirculation blood flow in the myocardial tissue through the coronary vascular network. The ability of MR-MPI to detect changes in microcirculation during early stages of ischemic events makes it a useful tool in identifying myocardial tissue that is alive but at risk of dying. However, this technique is not yet fully established clinically due to fundamental limitations imposed by the MRI device physics. The limitations of current MRI schemes often make it challenging to simultaneously achieve high spatio-temporal resolution, sufficient spatial coverage, and good image quality in myocardial perfusion MRI. Furthermore, the acquisitions are typically set up to acquire images during breath holding, which often results in motion artifacts due to improper breath-hold patterns. This dissertation deals with developing novel image reconstruction methods, in conjunction with non-Cartesian sampling, for the reconstruction of dynamic MRI data from highly accelerated / under-sampled Fourier measurements. The reconstruction methods are based on adaptive signal models that represent the dynamic data using few model coefficients. Three novel adaptive reconstruction methods are developed and validated: (a) low-rank and sparsity based modeling, (b) blind compressed sensing, and (c) motion-compensated compressed sensing. The developed methods are applicable to a wide range of dynamic imaging problems. In the context of MR-MPI, this dissertation demonstrates the feasibility of using the developed methods to enable free-breathing myocardial perfusion MRI acquisitions with high spatio-temporal resolution (< 2 mm x 2 mm, 1 heartbeat) and slice coverage (up to 8 slices).
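
As background for the "(a) low-rank and sparsity based modeling" scheme mentioned above, a dynamic image series is often stacked into a pixels-by-time (Casorati) matrix and split into a low-rank plus sparse pair. The sketch below illustrates only that image-domain decomposition idea in NumPy; the methods in the dissertation reconstruct from undersampled, non-Cartesian Fourier data, which this example does not attempt, and the parameters are illustrative:

```python
import numpy as np

def soft(x, tau):
    """Entrywise soft-thresholding (proximal operator of the l1 norm)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def svt(M, tau):
    """Singular-value soft-thresholding (proximal operator of the nuclear norm)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def low_rank_plus_sparse(X, lam_L=1.0, lam_S=0.05, iters=100):
    """Split a pixels-by-time (Casorati) matrix X into a low-rank component L
    (slowly varying background and contrast dynamics) and a sparse residual S
    by alternating singular-value and entrywise soft-thresholding."""
    L = np.zeros_like(X)
    S = np.zeros_like(X)
    for _ in range(iters):
        L = svt(X - S, lam_L)
        S = soft(X - L, lam_S)
    return L, S
```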
4

Cooperative Wideband Spectrum Sensing Based on Joint Sparsity

Jowkar, Ghazaleh 01 January 2017 (has links)
In this thesis, the problem of wideband spectrum sensing in cognitive radio (CR) networks using sub-Nyquist sampling and sparse signal processing techniques is investigated. To mitigate multi-path fading, it is assumed that a group of spatially dispersed secondary users (SUs) collaborate for wideband spectrum sensing, to determine whether or not a channel is occupied by a primary user (PU). Due to the underutilization of the spectrum by the PUs, the spectrum matrix has only a small number of non-zero rows. In existing state-of-the-art approaches, the spectrum sensing problem was solved using the low-rank matrix completion technique involving matrix nuclear-norm minimization. Motivated by the fact that the spectrum matrix is not only low-rank but also sparse, a spectrum sensing approach is proposed based on minimizing a mixed norm of the spectrum matrix instead of low-rank matrix completion, to promote joint sparsity among the column vectors of the spectrum matrix. Simulation results demonstrate that the proposed mixed-norm minimization approach outperforms the low-rank matrix completion based approach in terms of PU detection performance. Further, the mixed-norm minimization model is applied to multi-time-frame detection. Simulation results show that increasing the number of time frames improves the detection performance up to a point, beyond which the performance decreases dramatically.
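
The mixed-norm idea described here (few non-zero rows shared across all measurements) corresponds to an l2,1-regularized least-squares problem. The following NumPy sketch of a generic proximal-gradient solver is included only to make that notion concrete; the measurement model, variable names, and parameters are illustrative rather than taken from the thesis:

```python
import numpy as np

def row_shrink(X, tau):
    """Row-wise soft-thresholding: proximal operator of tau * l2,1 norm."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    return X * np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)

def joint_sparse_recovery(Y, A, lam=0.1, iters=500):
    """Proximal gradient for min_X 0.5*||A X - Y||_F^2 + lam*||X||_{2,1},
    which promotes a small number of non-zero rows (jointly occupied channels)."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
    X = np.zeros((A.shape[1], Y.shape[1]))
    for _ in range(iters):
        grad = A.T @ (A @ X - Y)             # gradient of the quadratic data term
        X = row_shrink(X - step * grad, step * lam)
    return X
```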
5

Low-rank Matrix Estimation

Fan, Xing 01 January 2024 (has links) (PDF)
The first part of this dissertation focuses on matrix-covariate regression models. While they have been studied in many existing works, classical statistical and computational methods for the analysis of regression coefficient estimation are highly affected by high-dimensional matrix-valued covariates. To address these issues, we propose a framework of matrix-covariate regression models based on a low-rank constraint and an additional regularization for structured signals, with considerations of models for both continuous and binary responses. In the second part, we examine a Mixture Multilayer Stochastic Block Model (MMLSBM), where layers can be grouped into sets of similar networks. Each group of networks is endowed with a unique Stochastic Block Model. The objective is to partition the multilayer network into clusters of similar layers and identify communities within those layers. We present an alternative approach called the Alternating Minimization Algorithm (ALMA), which aims to simultaneously recover the layer partition and estimate the matrices of connection probabilities for the distinct layers. In the last part, we demonstrate the effectiveness of the projected gradient descent algorithm. Firstly, its local convergence rate is independent of the condition number. Secondly, under conditions where the objective function is rank-2r restricted L-smooth and μ-strongly convex, with L/μ < 3, projected gradient descent with an appropriate step size converges linearly to the solution. Moreover, a perturbed version of this algorithm effectively navigates away from saddle points, converging to an approximate solution or a second-order local minimizer across a wide range of step sizes. Furthermore, we establish that there are no spurious local minimizers in estimating asymmetric low-rank matrices when the objective function satisfies L/μ < 3.
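
To make the projected-gradient-descent setting of the last part concrete, the following generic NumPy sketch projects each gradient step back onto the set of rank-r matrices via a truncated SVD. It is a textbook-style illustration with made-up names and a toy matrix-completion objective, not the analysis or code from the dissertation:

```python
import numpy as np

def project_rank(M, r):
    """Projection onto the set of rank-r matrices via truncated SVD."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]

def projected_gradient_descent(grad_f, X0, r, step, iters=500):
    """Iterate X <- P_r(X - step * grad_f(X)) for a smooth objective f."""
    X = X0
    for _ in range(iters):
        X = project_rank(X - step * grad_f(X), r)
    return X

# Toy usage: complete a rank-2 matrix from half of its entries,
# with f(X) = 0.5 * ||mask * (X - Y)||_F^2.
rng = np.random.default_rng(1)
Y_full = rng.standard_normal((60, 2)) @ rng.standard_normal((2, 50))
mask = (rng.random(Y_full.shape) < 0.5).astype(float)
Y = mask * Y_full
X_hat = projected_gradient_descent(lambda X: mask * (X - Y), np.zeros_like(Y), r=2, step=1.0)
```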
6

Semidefinite Facial Reduction for Low-Rank Euclidean Distance Matrix Completion

Krislock, Nathan January 2010 (has links)
The main result of this thesis is the development of a theory of semidefinite facial reduction for the Euclidean distance matrix completion problem. Our key result shows a close connection between cliques in the graph of the partial Euclidean distance matrix and faces of the semidefinite cone containing the feasible set of the semidefinite relaxation. We show how using semidefinite facial reduction allows us to dramatically reduce the number of variables and constraints required to represent the semidefinite feasible set. We have used this theory to develop a highly efficient algorithm capable of solving many very large Euclidean distance matrix completion problems exactly, without the need for a semidefinite optimization solver. For problems with a low level of noise, our SNLSDPclique algorithm outperforms existing algorithms in terms of both CPU time and accuracy. Using only a laptop, problems of size up to 40,000 nodes can be solved in under a minute and problems with 100,000 nodes require only a few minutes to solve.
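
One way to picture the clique-to-face connection described above: all distances within a clique are known, so the clique's configuration can be recovered (up to a rigid motion) by classical multidimensional scaling, which is what allows the semidefinite feasible set to be restricted to a face. The NumPy sketch below shows only that classical MDS step, not the SNLSDPclique algorithm itself:

```python
import numpy as np

def classical_mds(D, dim):
    """Given a complete matrix D of squared pairwise Euclidean distances
    (e.g., for a clique of the partial EDM), recover point coordinates in
    `dim` dimensions, up to a rigid motion, via classical multidimensional scaling."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    G = -0.5 * J @ D @ J                     # Gram matrix of the centered points
    w, V = np.linalg.eigh(G)
    idx = np.argsort(w)[::-1][:dim]          # keep the top `dim` eigenpairs
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))
```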
7

A probabilistic framework and algorithms for modeling and analyzing multi-instance data

Behmardi, Behrouz 28 November 2012 (has links)
Multi-instance data, in which each object (e.g., a document) is a collection of instances (e.g., words), are widespread in machine learning, signal processing, computer vision, bioinformatics, music, and the social sciences. Existing probabilistic models, e.g., latent Dirichlet allocation (LDA), probabilistic latent semantic indexing (pLSI), and discrete component analysis (DCA), have been developed for modeling and analyzing multi-instance data. Such models introduce a generative process for multi-instance data which includes a low-dimensional latent structure. While such models offer great freedom in capturing the natural structure in the data, their inference may present challenges. For example, sensitivity to the choice of hyper-parameters in such models requires careful inference (e.g., through cross-validation), which results in large computational complexity. Inference for fully Bayesian models, which contain no hyper-parameters, often involves slowly converging sampling methods. In this work, we develop approaches for addressing such challenges and further enhancing the utility of such models. This dissertation demonstrates a unified convex framework for probabilistic modeling of multi-instance data. The three main aspects of the proposed framework are as follows. First, joint regularization is incorporated into multiple density estimation to simultaneously learn the structure of the distribution space and infer each distribution. Second, a novel confidence-constraints framework is used to facilitate a tuning-free approach to control the amount of regularization required for the joint multiple density estimation, with theoretical guarantees on correct structure recovery. Third, we formulate the problem using a convex framework and propose efficient optimization algorithms to solve it. This work addresses the unique challenges associated with both discrete and continuous domains. In the discrete domain, we propose a confidence-constrained rank minimization (CRM) to recover the exact number of topics in topic models, with theoretical guarantees on the recovery probability and the mean squared error of the estimation. We provide a computationally efficient optimization algorithm for the problem to further the applicability of the proposed framework to large real-world datasets. In the continuous domain, we propose to use the maximum entropy (MaxEnt) framework for multi-instance datasets. In this approach, bags of instances are represented as distributions using the principle of MaxEnt. We learn basis functions which span the space of distributions for jointly regularized density estimation. The basis functions are analogous to topics in a topic model. We validate the efficiency of the proposed framework in the discrete and continuous domains through an extensive set of experiments on synthetic datasets as well as real-world image and text datasets, and compare the results with state-of-the-art algorithms. / Graduation date: 2013
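
For the discrete-domain CRM idea sketched above, a convex confidence-constrained surrogate can be written down directly: minimize the nuclear norm (a convex proxy for rank) subject to a confidence ball around the empirical estimate. The snippet below is a hedged CVXPY sketch of that generic formulation, with a placeholder `eps` that would in practice come from concentration bounds; it is not the dissertation's algorithm or code:

```python
import cvxpy as cp

def crm_estimate(P_hat, eps):
    """Confidence-constrained rank minimization (convex surrogate):
    minimize the nuclear norm of X subject to ||X - P_hat||_F <= eps."""
    X = cp.Variable(P_hat.shape)
    problem = cp.Problem(cp.Minimize(cp.norm(X, "nuc")),
                         [cp.norm(X - P_hat, "fro") <= eps])
    problem.solve()
    return X.value
```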
8

A hierarchical control system for scheduling and supervising flexible manufacturing cells

Fahmy, Sherif 23 April 2009 (has links)
A hierarchical control system is proposed for automated flexible manufacturing cells (FMC) that operate in a job shop flow setting. The control system is made up of a higher level scheduler/reactive scheduler, which optimizes the production flow within the cell, and a lower level supervisor that implements the decisions of the scheduler on the shop floor. Previous studies have regularly considered the production scheduling and the supervisory control as two separate problems. This has led to: i) deadlock-prone optimized schedules that cannot be implemented in an automated setting, ii) deadlock-free optimized schedules that lack the means to be transformed into shop floor supervisors, or iii) supervisors that can safely drive the system with no consideration for production performance. The proposed control system combines mathematical models and an insertion heuristic to solve the deadlock-free scheduling problem in job shops, a deadlock-free reactive scheduling heuristic that can revise the schedules upon the occurrence of a wide variety of disruptions, and a systematic procedure that can transform schedules into readily implementable Petri net (PN) supervisors. The integration of these modules into one control hierarchy guarantees a correct, optimized and agile behavior of the controlled system. The performances of the mathematical models, the scheduling and the reactive scheduling heuristics were evaluated by comparison to performances of previous approaches. Experimental results showed that the proposed modules performed consistently better than the other corresponding approaches. The supervisor realization procedure and the overall control architecture were validated by simulation and implementation in an experimental robotic FMC. The control system developed was capable of driving the experimental cell to satisfactorily complete the processing of different product mixes that featured complex processing routes through the cell.
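
To make the Petri net (PN) supervisor ingredient concrete, the sketch below shows only the bare mechanics such a supervisor works with — places, tokens, and transition firing — for a made-up two-transition cell. It is purely illustrative and is not the supervisor-synthesis procedure developed in the thesis:

```python
class PetriNet:
    """A minimal place/transition net: `marking` maps places to token counts;
    `transitions` maps a name to (input arcs, output arcs), each a dict of
    place -> token count. A supervisor constrains which enabled transitions
    may fire so the cell stays deadlock-free."""

    def __init__(self, marking, transitions):
        self.marking = dict(marking)
        self.transitions = dict(transitions)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= k for p, k in inputs.items())

    def fire(self, name):
        if not self.enabled(name):
            raise ValueError(f"transition {name!r} is not enabled")
        inputs, outputs = self.transitions[name]
        for p, k in inputs.items():
            self.marking[p] -= k
        for p, k in outputs.items():
            self.marking[p] = self.marking.get(p, 0) + k

# Toy cell: a part is loaded onto a machine only while the machine is free.
net = PetriNet(
    marking={"buffer": 2, "machine_free": 1, "machine_busy": 0},
    transitions={
        "load":   ({"buffer": 1, "machine_free": 1}, {"machine_busy": 1}),
        "unload": ({"machine_busy": 1}, {"machine_free": 1, "done": 1}),
    },
)
net.fire("load")      # buffer -> machine
net.fire("unload")    # machine -> done
```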
