  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Privacy Concerned D2D-Assisted Delay-Tolerant Content Distribution System

Ma, Guoqing 28 April 2019 (has links)
It is foreseeable that device-to-device (D2D) communication will become a standard feature in future networks, since it offloads data traffic from network infrastructure to user devices. Recent research shows that delivering delay-tolerant content through a content delivery network (CDN) via D2D helps network operators increase spectral and energy efficiency. However, protecting the private information of mobile users in a D2D-assisted CDN is a primary concern, one that directly affects the willingness of mobile users to share their resources with others. In this thesis, we propose a privacy-aware top-layer system for selecting a sub-optimal set of mobile nodes as initial mobile content providers (MCPs) for content delivery in general D2D communications; the proposed system does not rely on private user information such as location, affinity, or personal preferences. We model the initial content carrier set problem as an incentive maximization problem that optimizes the rewards for network operators and content providers. We then use Markov random field (MRF) theory to build a probabilistic graphical model that performs inference on observations of delivered content. Furthermore, we propose a greedy algorithm to solve the resulting non-linear binary integer programming (NLBIP) problem for selecting the initial content carrier set. We evaluate the proposed system on both a simulated dataset and a real-world collected dataset, corresponding to off-line and on-line scenarios.
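The abstract does not spell out the thesis's greedy NLBIP solver; as a hedged illustration, the sketch below shows the standard greedy pattern for such set-selection problems. The `coverage` table, the `reward` function, and the candidate IDs are all hypothetical, standing in for the incentive model described above.

```python
def greedy_carrier_selection(candidates, reward, k):
    """Greedily grow the carrier set: at each step add the candidate
    with the largest marginal gain in reward, a standard heuristic
    for NP-hard selection problems such as NLBIP."""
    selected = set()
    for _ in range(k):
        best, best_gain = None, float("-inf")
        for c in sorted(candidates - selected):
            gain = reward(selected | {c}) - reward(selected)
            if gain > best_gain:
                best, best_gain = c, gain
        selected.add(best)
    return selected

# Hypothetical reward: number of distinct content regions covered
# by the chosen initial carriers.
coverage = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c"}, 4: {"d", "e"}}
def reward(s):
    return len(set().union(*(coverage[c] for c in s))) if s else 0

chosen = greedy_carrier_selection(set(coverage), reward, 2)
```

Greedy selection of this kind gives no optimality guarantee in general, which is consistent with the abstract's description of the selected set as sub-optimal.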
22

Inter-annual stability of land cover classification: explorations and improvements

Abercrombie, Stewart Parker 22 January 2016 (has links)
Land cover information is a key input to many earth system models, and thus accurate and consistent land cover maps are critically important to global change science. However, existing global land cover products show unrealistically high levels of year-to-year change. This thesis explores methods to improve the accuracy of global land cover classifications, with a focus on reducing spurious year-to-year variation in results derived from MODIS data. In the first part of this thesis I use clustering to identify spectrally distinct sub-groupings within defined land cover classes, and assess the spectral separability of the resulting sub-classes. Many of the sub-classes are difficult to separate due to a high degree of overlap in spectral space. In the second part of this thesis, I examine two methods to reduce year-to-year variation in classification labels. First, I evaluate a technique that constructs training data for a per-pixel supervised classification algorithm by combining multiple years of spectral measurements. The resulting classifier achieves higher accuracy and lower levels of year-to-year change than a reference classifier trained on a single year of data. Second, I use a spatio-temporal Markov random field (MRF) model to post-process the predictions of a per-pixel classifier. The MRF framework reduces spurious label change to a level comparable to that achieved by a post-hoc heuristic stabilization technique. The timing of label change in the MRF-processed maps better matched disturbance events in a reference dataset, whereas the heuristic stabilization produced label changes that lagged several years behind disturbance events.
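The thesis's MRF couples spatial and temporal neighbours; a minimal temporal-only sketch (with made-up class probabilities, and iterated conditional modes rather than whatever inference the thesis actually uses) shows how a smoothness prior suppresses a spurious one-year label flip:

```python
import numpy as np

def icm_temporal_smooth(probs, beta=1.0, iters=5):
    """Iterated conditional modes over a 1-D temporal chain: each
    year's label trades off classifier evidence (log-probability)
    against agreement with the neighbouring years' labels."""
    labels = probs.argmax(axis=1)
    T, K = probs.shape
    logp = np.log(probs)
    for _ in range(iters):
        for t in range(T):
            scores = logp[t].copy()
            for u in (t - 1, t + 1):
                if 0 <= u < T:
                    scores += beta * (np.arange(K) == labels[u])
            labels[t] = scores.argmax()
    return labels

# Three classes over six years; year 3 is a spurious flip to class 1.
probs = np.array([[.8, .1, .1], [.7, .2, .1], [.6, .3, .1],
                  [.3, .6, .1], [.7, .2, .1], [.8, .1, .1]])
smoothed = icm_temporal_smooth(probs, beta=0.7)
```

With `beta=0.7` the isolated year-3 flip is overruled by its stable neighbours, while a genuine multi-year change (strong evidence across consecutive years) would survive.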
23

A Markov Random Field Approach to Improving Classification of Remotely Sensed Imagery by Incorporating Spatial and Temporal Contexts

Xu, Min 16 October 2015 (has links)
No description available.
24

Subsurface Simulation Using Stochastic Modeling Techniques for Reliability Based Design of Geo-structures

Li, Zhao 04 October 2016 (has links)
No description available.
25

Bayesian Dynamical Modeling of Count Data

Zhuang, Lili 20 October 2011 (has links)
No description available.
26

Computational Modeling for Differential Analysis of RNA-seq and Methylation data

Wang, Xiao 16 August 2016 (has links)
Computational systems biology is an interdisciplinary field that aims to develop computational approaches for a system-level understanding of biological systems. Advances in high-throughput biotechnology offer broad scope and high resolution in multiple disciplines. However, it remains a major challenge to extract biologically meaningful information from the overwhelming amount of data generated by biological systems, and effective computational approaches are urgently needed to reveal the functional components. In this dissertation, we develop computational approaches for differential analysis of RNA-seq and methylation data to detect aberrant events associated with cancers. We develop a novel Bayesian approach, BayesIso, to identify differentially expressed isoforms from RNA-seq data. BayesIso features a joint model of the variability of RNA-seq data and the differential state of isoforms: it not only accounts for the variability of RNA-seq data but also treats the differential states of isoforms as hidden variables for differential analysis. The differential states are estimated jointly with other model parameters through a sampling process, improving the detection of weakly differentially expressed isoforms. We further develop a novel probabilistic approach, DM-BLD, in a Bayesian framework to identify differentially methylated genes. DM-BLD features a hierarchical model, built upon Markov random field models, that captures both the local dependency of measured loci and the dependency of methylation change. A Gibbs sampling procedure is designed to estimate the posterior distribution of the methylation change of CpG sites. The differential methylation score of a gene is then calculated from the estimated methylation changes of the involved CpG sites, and the significance of genes is assessed by permutation-based statistical tests.
We have demonstrated the advantage of the proposed Bayesian approaches over conventional methods for differential analysis of RNA-seq and methylation data. Joint estimation of the posterior distributions of the variables and model parameters by sampling proves advantageous in detecting isoforms and methylated genes with weak differential signal. Applications to breast cancer data shed light on the molecular mechanisms underlying breast cancer recurrence, with the aim of identifying new molecular targets for breast cancer treatment. / Ph. D.
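The DM-BLD sampler itself is not reproduced in the abstract; the sketch below is a toy analogue under assumed forms: binary differential states on a chain of CpG sites, a logistic data term `obs[i]`, and a neighbour-coupling strength `beta`, with posterior marginals estimated from post-burn-in Gibbs samples.

```python
import math
import random

def gibbs_differential_states(obs, beta=1.0, sweeps=200, seed=0):
    """Toy Gibbs sampler: a binary differential state s_i per CpG site,
    coupled to its chain neighbours (an MRF prior) and to an observed
    evidence score obs[i]. Returns estimated marginals P(s_i = 1)."""
    rng = random.Random(seed)
    n = len(obs)
    s = [0] * n
    counts = [0] * n
    for sweep in range(sweeps):
        for i in range(n):
            # conditional log-odds: data term + neighbour agreement
            logit = obs[i]
            for j in (i - 1, i + 1):
                if 0 <= j < n:
                    logit += beta * (2 * s[j] - 1)
            p1 = 1.0 / (1.0 + math.exp(-logit))
            s[i] = 1 if rng.random() < p1 else 0
        if sweep >= sweeps // 2:          # discard burn-in samples
            for i in range(n):
                counts[i] += s[i]
    kept = sweeps - sweeps // 2
    return [c / kept for c in counts]

# Two sites with strong positive evidence, two with strong negative.
marg = gibbs_differential_states([2.0, 2.0, 1.5, -2.0, -2.0])
```

A gene-level score in the spirit of the abstract could then aggregate the marginals of the CpG sites a gene contains, with significance assessed by permutation.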
27

Image segmentation and class validation in a stochastic approach

Gerhardinger, Leandro Cavaleri 13 April 2006 (has links)
An important stage of the automatic image analysis process is segmentation, which aims to split an image into regions whose pixels exhibit a certain degree of similarity. Texture, usually defined as a random combination of pixel intensities, is an efficient feature that provides enough discriminative power to distinguish pixels from distinct regions. A considerable amount of research has been done on non-supervised techniques for image segmentation based on stochastic models, in which textures are modeled as Markov random fields.
An important method in this category is EM/MPM, an iterative algorithm that combines the maximum-likelihood parameter estimation method EM with the MPM segmentation algorithm, whose aim is to minimize the number of misclassified pixels in the image. This work presents a study of stochastic models for segmentation and an implementation of the EM/MPM algorithm, together with a multiresolution approach. A new threshold-based scheme for estimating the initial parameters of the EM/MPM model is proposed, as well as a combination with simulated annealing to improve segmentation. The work also studies the class validation problem (the search for the correct number of classes in the image), surveying the main techniques in the literature and proposing a new approach based on the gray-level distribution of the classes. Finally, the model is extended to the segmentation of two- and three-dimensional meshes.
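A minimal 1-D sketch of the EM/MPM idea, under assumed forms (a Gaussian class model with equal mixing weights and shared variance, a chain neighbourhood, and toy data; the thesis works on 2-D images and meshes):

```python
import numpy as np

def em_mpm(img, k=2, em_iters=20, beta=0.8, mpm_samples=50, seed=0):
    """EM/MPM sketch on a 1-D 'image': EM fits class means and a shared
    variance; MPM then labels each pixel with its most frequent class
    over Gibbs samples drawn from the MRF posterior."""
    rng = np.random.default_rng(seed)
    mu = np.quantile(img, np.linspace(0.1, 0.9, k))   # initial means
    var = img.var()
    for _ in range(em_iters):            # EM on a Gaussian mixture
        ll = -0.5 * (img[:, None] - mu) ** 2 / var
        w = np.exp(ll - ll.max(axis=1, keepdims=True))
        w /= w.sum(axis=1, keepdims=True)
        mu = (w * img[:, None]).sum(axis=0) / w.sum(axis=0)
        var = (w * (img[:, None] - mu) ** 2).sum() / len(img)
    # MPM: Gibbs sampling with a chain-neighbour smoothness prior
    labels = np.abs(img[:, None] - mu).argmin(axis=1)
    counts = np.zeros((len(img), k))
    for _ in range(mpm_samples):
        for i in range(len(img)):
            e = -0.5 * (img[i] - mu) ** 2 / var
            for j in (i - 1, i + 1):
                if 0 <= j < len(img):
                    e += beta * (np.arange(k) == labels[j])
            p = np.exp(e - e.max())
            p /= p.sum()
            labels[i] = rng.choice(k, p=p)
        counts[np.arange(len(img)), labels] += 1
    return counts.argmax(axis=1), mu

img = np.array([0.1, 0.0, 0.2, 0.1, 1.0, 0.9, 1.1, 1.0])
seg, means = em_mpm(img)
```

Taking the most frequent label per pixel across samples is what distinguishes MPM from a MAP labeling: it minimizes the expected number of misclassified pixels rather than the joint labeling energy.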
28

Generalized Survey Propagation

Tu, Ronghui 09 May 2011 (has links)
Survey propagation (SP) has recently emerged as an efficient algorithm for solving classes of hard constraint-satisfaction problems (CSPs). Powerful as it is, SP is still a heuristic algorithm, and further understanding its algorithmic nature, improving its effectiveness, and extending its applicability are highly desirable. Prior to the work in this thesis, Maneva et al. introduced a Markov random field (MRF) formalism for k-SAT problems, under which SP may be viewed as a special case of the well-known belief propagation (BP) algorithm. This result has sometimes been interpreted as meaning that "SP is BP", and it allows a rigorous extension of SP to a "weighted" version, or a family of algorithms, for k-SAT problems. SP has also been generalized, in a non-weighted fashion, to non-binary CSPs, but that generalization is presented in the language of statistical physics and is somewhat inaccessible to a more general audience. This thesis generalizes SP both in its applicability to non-binary problems and in introducing "weights" that extend SP to a family of algorithms. Under a generic formulation of CSPs, we first present an interpretation of non-weighted SP for arbitrary CSPs in terms of "probabilistic token passing" (PTP). We then show that this probabilistic interpretation makes non-weighted SP naturally generalizable to a weighted version, which we call weighted PTP. Another main contribution of this thesis is a disproof of the folk belief that "SP is BP": the fact that SP is a special case of BP for k-SAT problems is rather incidental, and for more general CSPs, SP and generalized SP do not reduce to BP. We also establish the conditions under which generalized SP does reduce to a special case of BP. To explore the benefit of generalizing SP to a wide family of algorithms and to arbitrary, particularly non-binary, problems, we devised a simple weighted-PTP-based algorithm for solving 3-COL problems.
Experimental results, compared against an existing non-weighted SP-based algorithm, reveal the potential performance gain that generalized SP may bring.
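SP itself is intricate, but the BP algorithm it is contrasted with can be shown compactly. The following sum-product sketch computes exact marginals on a small chain MRF by forward/backward message passing; the potentials are illustrative, not taken from the thesis.

```python
import numpy as np

def chain_bp_marginals(unary, pairwise):
    """Sum-product belief propagation on a chain MRF. On a tree-
    structured graph such as a chain, BP's messages yield exact
    single-variable marginals."""
    n = len(unary)
    fwd = [np.ones_like(unary[0])]            # messages from the left
    for i in range(1, n):
        m = pairwise.T @ (unary[i - 1] * fwd[i - 1])
        fwd.append(m / m.sum())
    bwd = [np.ones_like(unary[0]) for _ in range(n)]
    for i in range(n - 2, -1, -1):            # messages from the right
        m = pairwise @ (unary[i + 1] * bwd[i + 1])
        bwd[i] = m / m.sum()
    margs = []
    for i in range(n):
        b = unary[i] * fwd[i] * bwd[i]        # belief at node i
        margs.append(b / b.sum())
    return margs

unary = [np.array([0.9, 0.1]), np.array([0.5, 0.5]), np.array([0.2, 0.8])]
pairwise = np.array([[0.8, 0.2], [0.2, 0.8]])   # favours agreement
margs = chain_bp_marginals(unary, pairwise)
```

On loopy graphs, and in particular on the MRFs arising from hard CSP instances, BP is no longer exact, which is part of why the relationship between SP and BP examined in the thesis is subtle.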
30

Facial Soft Tissue Segmentation in MRI Using an Unlabeled Atlas

Rezaeitabar, Yousef 01 August 2011 (has links) (PDF)
Segmentation of individual facial soft tissues has received relatively little attention in the literature due to the complicated structure of these tissues. There is a need to incorporate prior information, usually in the form of atlases, into the segmentation process. In this thesis we evaluate several segmentation methods that take advantage of prior knowledge for facial soft tissue segmentation. An atlas-based method and three methods based on expectation maximization and Markov random fields (EM-MRF) are tested for two-dimensional (2D) segmentation of the masseter muscle in the face. The atlas-based method uses manually labeled atlases as prior information. We implement the EM-MRF based method in three ways: without prior information, with prior information used for initialization, and with a labeled atlas used as prior information. The differences between these methods and the influence of the prior information are discussed by comparing the results. Finally, a new EM-MRF based method is proposed in this study, which aims to use prior information without performing manual segmentation, a very complicated and time-consuming task. Ten MRI sets are used as experimental data, and a leave-one-out technique is used to perform segmentation on all sets: the test data is modeled as a Markov random field in which the unlabeled training data, i.e., the other nine sets, serve as prior information. The model parameters are estimated by the maximum likelihood approach, with expectation maximization iterations handling the hidden labels. The performance of all segmentation methods is computed and compared to the manually segmented ground truth. We then apply the new 2D segmentation method to three-dimensional (3D) segmentation of the two masseter and two temporalis tissues in each data set and visualize the segmented tissue volumes.
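The abstract does not name its performance metric; the Dice similarity coefficient is a standard choice for scoring a binary segmentation against manual ground truth, sketched here as an assumption rather than the thesis's actual evaluation code.

```python
import numpy as np

def dice(seg, truth):
    """Dice similarity coefficient between a binary segmentation and
    ground truth: 2|A∩B| / (|A| + |B|), ranging from 0 (no overlap)
    to 1 (perfect agreement)."""
    a, b = seg.astype(bool), truth.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * (a & b).sum() / denom if denom else 1.0

# Toy 1-D masks: one false negative and one false positive.
truth = np.array([0, 1, 1, 1, 0, 0])
seg   = np.array([0, 1, 1, 0, 0, 1])
score = dice(seg, truth)   # 2*2 / (3+3)
```

In a leave-one-out setting like the one above, one would compute this score once per held-out MRI set and report the mean across the ten folds.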
