  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
51

Odhad varianční matice pro filtraci ve vysoké dimenzi / Covariance estimation for filtering in high dimension

Turčičová, Marie January 2021 (has links)
Estimating large covariance matrices from small samples is an important problem in many fields, including spatial statistics and data assimilation. In this thesis, we deal with several methods of covariance estimation with emphasis on regularization and covariance models useful in filtering problems. We prove several properties of estimators and propose a new filtering method. After a brief summary of basic estimating methods used in data assimilation, the attention is shifted to covariance models. We show a distinct type of hierarchy in nested models applied to the spectral diagonal covariance matrix: explicit estimators of parameters are computed by the maximum likelihood method, and the asymptotic variance of these estimators is shown to decrease when the maximization is restricted to a subspace that contains the true parameter value. A similar result is obtained for general M-estimators. For more complex covariance models, the maximum likelihood method cannot provide explicit parameter estimates. In the case of a linear model for a precision matrix, however, a consistent estimator in closed form can be computed by the score matching method. Modelling of the precision matrix is particularly beneficial in Gaussian Markov random fields (GMRF), which possess a sparse precision matrix. The...
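The spectral-diagonal model mentioned above admits a compact illustration. The sketch below (NumPy only; all names and sizes are illustrative, not from the thesis) estimates one variance per frequency from a small ensemble — n parameters instead of n(n+1)/2 — and applies the implied covariance without ever forming the full matrix:

```python
# Minimal sketch of a spectral-diagonal covariance estimate, assuming the FFT
# as the spectral basis. Illustrative only; the thesis's estimators and their
# asymptotic properties are not reproduced here.
import numpy as np

rng = np.random.default_rng(0)
n, N = 256, 10                              # state dimension >> ensemble size
ensemble = rng.standard_normal((N, n))      # hypothetical ensemble, one member per row

# Spectral transform of each centered member.
anomalies = ensemble - ensemble.mean(axis=0)
spectral = np.fft.fft(anomalies, axis=1) / np.sqrt(n)

# Diagonal model in spectral space: one variance per frequency.
spec_var = np.mean(np.abs(spectral) ** 2, axis=0)

def apply_covariance(v):
    """Apply the estimated covariance to a vector via FFT, never forming it."""
    return np.real(np.fft.ifft(spec_var * np.fft.fft(v)))

print(apply_covariance(rng.standard_normal(n))[:5])
```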
52

SEGMENTATION OF WHITE MATTER, GRAY MATTER, AND CSF FROM MR BRAIN IMAGES AND EXTRACTION OF VERTEBRAE FROM MR SPINAL IMAGES

PENG, ZHIGANG 02 October 2006 (has links)
No description available.
53

Modèles d'encodage parcimonieux de l'activité cérébrale mesurée par IRM fonctionnelle / Parsimonious encoding models for brain activity measured by functional MRI

Bakhous, Christine 10 December 2013 (has links)
Functional magnetic resonance imaging (fMRI) is a noninvasive technique allowing the study of brain activity via the measurement of hemodynamic changes. Recently, a joint detection-estimation (JDE) framework was developed that alternates between (1) detection of the brain activity induced by stimulation and (2) estimation of the hemodynamic response function characterizing the vascular dynamics, two steps that are generally addressed separately. The JDE approach relies on an a priori parcellation of the brain into functionally homogeneous areas and alternates (1) and (2) on each parcel successively. The standard JDE analysis assumes that all delivered stimuli (e.g. visual, auditory) may generate a response everywhere in the brain, although the functional specialization of brain regions means that activity in a given region is induced by only some stimulus types. Including irrelevant stimuli in the analysis can degrade the results. Since the subset of relevant stimulus types changes across brain areas, a model selection procedure would be computationally expensive; moreover, criteria for selecting the relevant conditions prior to activation detection are not always available, especially in pathological cases. This thesis proposes a JDE extension that automatically selects the relevant conditions (stimulus types) according to the brain activity they elicit, simultaneously with the analysis and adaptively across brain regions. Analyses on simulated and real datasets illustrate the ability of the proposed parsimonious JDE approach to select the relevant conditions and its advantage over the standard JDE analysis.
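The detection/estimation alternation at the heart of JDE can be sketched in a few lines. The toy below reduces it to alternating least squares on a single parcel; the real JDE framework is Bayesian, with priors and condition-relevance variables, so this only shows the structure of the alternation. All names and sizes are hypothetical.

```python
# Schematic JDE-style alternation: (1) estimate per-condition response levels
# given the current HRF; (2) re-estimate the HRF given the levels.
import numpy as np

rng = np.random.default_rng(1)
T, L, M = 200, 25, 2                   # scans, HRF length, stimulus types
onsets = rng.random((T, M)) < 0.05     # hypothetical binary event trains

def toeplitz_design(events, L):
    """Lagged copies of one event train, so that X @ h = events * h (convolution)."""
    X = np.zeros((len(events), L))
    for lag in range(L):
        X[lag:, lag] = events[:len(events) - lag]
    return X

Xs = [toeplitz_design(onsets[:, m].astype(float), L) for m in range(M)]
y = 1.5 * Xs[0] @ np.hanning(L) + rng.standard_normal(T)  # only condition 0 is relevant

h = np.hanning(L)                      # initial HRF guess
for _ in range(20):
    # (1) detection step: response level per condition, given h
    D = np.column_stack([X @ h for X in Xs])
    a, *_ = np.linalg.lstsq(D, y, rcond=None)
    # (2) estimation step: HRF given the levels, with a unit-norm constraint
    Xa = sum(a[m] * Xs[m] for m in range(M))
    h, *_ = np.linalg.lstsq(Xa, y, rcond=None)
    h /= np.linalg.norm(h)

print("estimated response levels:", np.round(a, 2))  # a near-zero level flags an irrelevant condition
```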
54

Non-parametric synthesis of volumetric textures from a 2D sample

Urs, Radu Dragos 29 March 2013 (has links) (PDF)
This thesis deals with the synthesis of anisotropic volumetric textures from a single 2D observation. We present variants of non-parametric, multi-scale algorithms. Their main specificity lies in the fact that the 3D synthesis process relies on the sampling of a single 2D input sample, ensuring consistency across the different views of the 3D texture. Two types of approaches are investigated, both multi-scale and based on a Markovian hypothesis. The first category brings together a set of algorithms based on fixed-neighbourhood search, adapted from existing algorithms for texture synthesis from multiple 2D sources. The principle is that, starting from a random initialisation, the 3D texture is modified voxel by voxel in a deterministic manner, ensuring that the local grey-level configurations on the orthogonal slices containing the voxel are similar to configurations of the input image. The second category introduces an original probabilistic approach which aims at reproducing in the textured volume the interactions between pixels learned from the input image. The learning is done by non-parametric Parzen windowing. Optimization is handled voxel by voxel by a deterministic ICM-type algorithm. Several variants are proposed regarding the strategies used for the simultaneous handling of the orthogonal slices containing the voxel. These synthesis methods are first applied to a set of structured textures of varied regularity and anisotropy. A comparative study and a sensitivity analysis are carried out, highlighting the strengths and weaknesses of the different algorithms. Finally, they are applied to the simulation of volumetric textures of carbon composite materials, using nanometric-scale snapshots obtained by transmission electron microscopy. The proposed experimental benchmark allows a quantitative and objective evaluation of the performance of the different methods.
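A minimal sketch of the fixed-neighbourhood idea follows: each voxel is repeatedly reassigned the exemplar grey level whose 2D neighbourhood best matches the voxel's current neighbourhoods in orthogonal slices. For brevity only two of the three orthogonal slices are matched and the multi-scale pyramid is omitted; sizes and names are illustrative.

```python
# Toy fixed-neighbourhood 3D synthesis from a single 2D sample (two slices only).
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

rng = np.random.default_rng(2)
exemplar = rng.random((24, 24))        # stand-in for the 2D input sample
S, K = 12, 3                           # volume side, odd neighbourhood size
vol = rng.random((S, S, S))            # random initialisation

patches = sliding_window_view(exemplar, (K, K)).reshape(-1, K * K)
centers = exemplar[K // 2:-(K // 2), K // 2:-(K // 2)].ravel()

def neighbourhood(plane, i, j):
    return plane[i - K // 2:i + K // 2 + 1, j - K // 2:j + K // 2 + 1].ravel()

for _ in range(2):                     # a couple of deterministic sweeps
    for x in range(K // 2, S - K // 2):
        for y in range(K // 2, S - K // 2):
            for z in range(K // 2, S - K // 2):
                # neighbourhoods of voxel (x, y, z) in the xy and xz slices
                n_xy = neighbourhood(vol[:, :, z], x, y)
                n_xz = neighbourhood(vol[:, y, :], x, z)
                d = ((patches - n_xy) ** 2).sum(1) + ((patches - n_xz) ** 2).sum(1)
                vol[x, y, z] = centers[np.argmin(d)]   # best-matching exemplar pixel

print(vol.shape, vol.min(), vol.max())
```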
55

Robust estimation for spatial models and the skill test for disease diagnosis

Lin, Shu-Chuan 25 August 2008 (has links)
This thesis focuses on (1) statistical methodologies for the estimation of spatial data with outliers and (2) the classification accuracy of disease diagnosis.

Chapter I, Robust Estimation for Spatial Markov Random Field Models: Markov random field (MRF) models are useful in analyzing spatial lattice data collected from semiconductor device fabrication and printed circuit board manufacturing processes or agricultural field trials. When outliers are present in the data, classical parameter estimation techniques (e.g., least squares) can be inefficient and potentially mislead the analyst. This chapter extends the MRF model to accommodate outliers and proposes robust parameter estimation methods such as the robust M- and RA-estimates. Asymptotic distributions of the estimates with differentiable and non-differentiable robustifying functions are derived. Extensive simulation studies explore the robustness properties of the proposed methods in situations with various amounts of outliers in different patterns. Also provided are analyses of grid data with and without edge information. Three data sets taken from the literature illustrate the advantages of the methods.

Chapter II, Extending the Skill Test for Disease Diagnosis: For diagnostic tests, we present an extension to the skill plot introduced by Mozer and Briggs (2003). The method is motivated by diagnostic measures for osteoporosis in a study. By restricting the area under the ROC curve (AUC) according to the skill statistic, we obtain an improved diagnostic test for practical applications that accounts for misclassification costs. We also construct relationships, using the Koziol-Green model and the mean-shift model, between the diseased group and the healthy group for improving the skill statistic. Asymptotic properties of the skill statistic are provided. Simulation studies compare the theoretical results and the estimates under various disease rates and misclassification costs. We apply the proposed method to the classification of osteoporosis data.
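The robust-estimation idea of Chapter I can be illustrated generically: fit a spatial-interaction parameter on a lattice with a Huber loss solved by iteratively reweighted least squares, so that outlying sites are downweighted. This is a textbook M-estimate on a toy conditional-autoregression, not the thesis's M-/RA-estimators; all names and constants are illustrative.

```python
# Huber M-estimate (via IRLS) of a lattice interaction parameter, with outliers.
import numpy as np

rng = np.random.default_rng(3)
n, beta_true = 40, 0.2
field = rng.standard_normal((n, n))
for _ in range(50):                    # crude iterative pass to induce spatial correlation
    nb = (np.roll(field, 1, 0) + np.roll(field, -1, 0)
          + np.roll(field, 1, 1) + np.roll(field, -1, 1))
    field = beta_true * nb + rng.standard_normal((n, n))
field.flat[rng.choice(n * n, 30, replace=False)] += 8.0   # gross outliers

nb = (np.roll(field, 1, 0) + np.roll(field, -1, 0)
      + np.roll(field, 1, 1) + np.roll(field, -1, 1))
x, y = nb.ravel(), field.ravel()       # regress each site on its neighbour sum

beta, c = 0.0, 1.345                   # Huber tuning constant
for _ in range(20):                    # IRLS: reweight, then solve weighted least squares
    r = y - beta * x
    s = np.median(np.abs(r)) / 0.6745 + 1e-12        # robust scale (MAD)
    w = np.clip(c * s / np.abs(r + 1e-12), None, 1.0)  # Huber weights
    beta = np.sum(w * x * y) / np.sum(w * x * x)

print(f"robust estimate {beta:.3f} vs least squares {np.dot(x, y) / np.dot(x, x):.3f}")
```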
56

Krylov subspace methods for approximating functions of symmetric positive definite matrices with applications to applied statistics and anomalous diffusion

Simpson, Daniel Peter January 2008 (has links)
Matrix function approximation is a current focus of worldwide interest and finds application in a variety of areas of applied mathematics and statistics. In this thesis we focus on the approximation of A^{-α/2}b, where A ∈ R^{n×n} is a large, sparse symmetric positive definite matrix and b ∈ R^n is a vector. In particular, we focus on matrix function techniques for sampling from Gaussian Markov random fields in applied statistics and for the solution of fractional-in-space partial differential equations. Gaussian Markov random fields (GMRFs) are multivariate normal random variables characterised by a sparse precision (inverse covariance) matrix. GMRFs are popular models in computational spatial statistics, as the sparse structure can be exploited, typically through the use of the sparse Cholesky decomposition, to construct fast sampling methods. It is well known, however, that for sufficiently large problems, iterative methods for solving linear systems outperform direct methods. Fractional-in-space partial differential equations arise in models of processes undergoing anomalous diffusion. Unfortunately, as the fractional Laplacian is a non-local operator, numerical methods based on the direct discretisation of these equations typically require the solution of dense linear systems, which is impractical for fine discretisations. In this thesis, novel applications of Krylov subspace approximations to matrix functions are investigated for both of these problems. Matrix functions arise when sampling from a GMRF by noting that the Cholesky decomposition A = LL^T is, essentially, a 'square root' of the precision matrix A. Therefore, we can replace the usual sampling method, which forms x = L^{-T}z, with x = A^{-1/2}z, where z is a vector of independent and identically distributed standard normal random variables. Similarly, the matrix transfer technique can be used to build solutions to the fractional Poisson equation of the form u = A^{-α/2}b, where A is the finite difference approximation to the Laplacian. Hence both applications require the approximation of f(A)b, where f(t) = t^{-α/2} and A is sparse. In this thesis we compare the Lanczos approximation, the shift-and-invert Lanczos approximation, the extended Krylov subspace method, rational approximations and the restarted Lanczos approximation for approximating matrix functions of this form. A number of new and novel results are presented in this thesis. Firstly, we prove the convergence of the matrix transfer technique for the solution of the fractional Poisson equation and give conditions under which the finite difference discretisation can be replaced by other methods for discretising the Laplacian. We then investigate a number of methods for approximating matrix functions of the form A^{-α/2}b and investigate stopping criteria for these methods. In particular, we derive a new method for restarting the Lanczos approximation to f(A)b. We then apply these techniques to the problem of sampling from a GMRF and construct a full suite of methods for sampling conditioned on linear constraints and for approximating the likelihood. Finally, we consider the problem of sampling from a generalised Matérn random field, which combines our techniques for solving fractional-in-space partial differential equations with our method for sampling from GMRFs.
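The central operation, f(A)b with f(t) = t^{-1/2}, has a short Lanczos sketch. The version below is the plain Lanczos approximation only (no restarting, shift-and-invert, or the thesis's stopping criteria) and uses a lattice-Laplacian-plus-identity precision matrix as a stand-in GMRF example; names are illustrative.

```python
# Lanczos approximation of A^{-1/2} b for sparse SPD A: a GMRF sample
# x = A^{-1/2} z needs only matrix-vector products with A.
import numpy as np
import scipy.sparse as sp

def lanczos_inv_sqrt(A, b, m=50):
    """Approximate A^{-1/2} b with m Lanczos steps (no reorthogonalisation)."""
    n = len(b)
    V = np.zeros((n, m))
    alpha, beta = np.zeros(m), np.zeros(m - 1)
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]
        if j > 0:
            w -= beta[j - 1] * V[:, j - 1]
        alpha[j] = V[:, j] @ w
        w -= alpha[j] * V[:, j]
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    # f(T_m) e_1 via the eigendecomposition of the small tridiagonal T_m
    theta, U = np.linalg.eigh(np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1))
    fTe1 = U @ (theta ** -0.5 * U[0, :])
    return np.linalg.norm(b) * (V @ fTe1)   # ||b|| V_m f(T_m) e_1

# Example: sample from a GMRF whose precision is a 2D lattice Laplacian + I.
n = 30
L1 = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
A = sp.kron(sp.eye(n), L1) + sp.kron(L1, sp.eye(n)) + sp.eye(n * n)
z = np.random.default_rng(4).standard_normal(n * n)
x = lanczos_inv_sqrt(A.tocsr(), z)          # approximate sample with covariance A^{-1}
print(x[:5])
```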
57

Multiple classifier systems for the classification of hyperspectral data / Système de classifieurs multiples pour la classification de données hyperspectrales

Xia, Junshi 23 October 2014 (has links)
In this thesis, we propose several new techniques for the classification of hyperspectral remote sensing images based on multiple classifier systems (MCS). The proposed framework introduces significant innovations with regard to previous approaches in the same field, many of which are based mainly on an individual algorithm. First, we propose to use Rotation Forests with several linear feature extraction techniques and compare them with traditional ensemble approaches such as Bagging, Boosting, Random Subspace and Random Forest. Second, the integration of support vector machines (SVM) with the rotation subspace framework for context classification is investigated: SVM and rotation subspaces are two powerful tools for high-dimensional data classification, and combining them can further improve classification performance. Third, we extend the work on Rotation Forests by incorporating a local feature extraction technique and spatial contextual information, modelled with a Markov random field (MRF), to design robust spatial-spectral methods. Finally, we present a new general framework, the random subspace (RS) ensemble, to train a series of effective classifiers, including decision trees and extreme learning machines (ELM), with extended multi-attribute profiles (EMAPs) for classifying hyperspectral data. Six RS ensemble methods, including random subspace with decision trees (RSDT), Random Forest (RF), Rotation Forest (RoF), Rotation Random Forest (RoRF), RS with ELM (RSELM) and rotation subspace with ELM (RoELM), are constructed from multiple base learners. The effectiveness of the proposed techniques is illustrated by comparison with state-of-the-art methods on real hyperspectral data sets in different contexts.
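The Rotation Forest construction the abstract builds on can be sketched briefly: per tree, split the features into random groups, run PCA within each group, rotate the data with the resulting block-diagonal loading matrix, and train the tree on the rotated features. The toy below uses scikit-learn on synthetic data as a stand-in for the hyperspectral experiments; it is a hedged sketch, not the thesis's implementation.

```python
# Minimal Rotation Forest: PCA-per-feature-group rotation + one tree per rotation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(5)
X, y = make_classification(n_samples=400, n_features=24, n_informative=12, random_state=0)

def fit_rotation_forest(X, y, n_trees=25, n_groups=4):
    forest = []
    for _ in range(n_trees):
        perm = rng.permutation(X.shape[1])
        groups = np.array_split(perm, n_groups)
        R = np.zeros((X.shape[1], X.shape[1]))   # block-diagonal rotation matrix
        for g in groups:
            boot = rng.choice(len(X), size=int(0.75 * len(X)), replace=True)
            R[np.ix_(g, g)] = PCA().fit(X[boot][:, g]).components_.T
        forest.append((R, DecisionTreeClassifier().fit(X @ R, y)))
    return forest

def predict(forest, X):
    votes = np.stack([tree.predict(X @ R) for R, tree in forest])
    return np.round(votes.mean(axis=0))          # majority vote (binary labels)

forest = fit_rotation_forest(X, y)
print("training accuracy:", (predict(forest, X) == y).mean())
```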
58

Explorando caminhos de mínima informação em grafos para problemas de classificação supervisionada / Exploring minimum information paths in graphs for supervised classification problems

Hiraga, Alan Kazuo 05 May 2014 (has links)
Classification is a very important step in pattern recognition, as it aims to categorize objects from a set of inherent features by labeling them. This process can be supervised, when there is a set of labeled training samples that represent the classes satisfactorily; semi-supervised, when the set of labeled samples is limited or nearly inexistent; or unsupervised, when there are no labeled samples. This work proposes to explore minimum information paths in graphs for classification problems, through the definition of a supervised, non-parametric, graph-based classification method following a contextual approach. The method constructs a graph from the set of training samples, where samples are represented by vertices and edges link samples belonging to a neighborhood system. From this graph, the method computes the local observed Fisher information, a measure based on the Potts model, for every vertex, quantifying how informative each sample is. Vertices of different classes connected by an edge generally carry high information (they lie on class borders). The edges are then weighted by a function that penalizes connections between vertices with high information. During this process, highly informative vertices are identified and selected as prototype vertices, that is, the vertices that define the class boundaries. Once the prototypes are defined, each prototype conquers the remaining samples by offering the shortest path, in terms of information, so that when a sample is conquered it receives the label of the winning prototype, producing the classification. To evaluate the proposed method, statistical methods for estimating error rates, such as hold-out, k-fold and leave-one-out cross-validation, are considered. The obtained results indicate that the method can be a viable alternative to existing classification techniques.
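The conquest scheme above maps naturally onto multi-source shortest paths. The sketch below substitutes a simple label-disagreement score for the thesis's Potts-based observed Fisher information and uses a kNN graph with Dijkstra; all thresholds and names are illustrative assumptions.

```python
# Prototype "conquest" via information-weighted shortest paths on a kNN graph.
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import dijkstra
from sklearn.datasets import make_moons
from sklearn.neighbors import kneighbors_graph

X, y = make_moons(n_samples=200, noise=0.25, random_state=0)
adj = kneighbors_graph(X, n_neighbors=8, mode="connectivity")
adj = adj.maximum(adj.T).tolil()               # symmetric kNN graph

# Local information proxy: fraction of a vertex's neighbours with another label.
info = np.array([(y[adj.rows[i]] != y[i]).mean() for i in range(len(X))])

W = lil_matrix(adj.shape)
for i in range(len(X)):                        # penalise edges with informative endpoints
    for j in adj.rows[i]:
        W[i, j] = np.exp(4.0 * (info[i] + info[j]))

prototypes = np.flatnonzero(info > 0.3)        # high-information (boundary) vertices
dist = dijkstra(W.tocsr(), indices=prototypes) # one row of distances per prototype
labels = y[prototypes[np.argmin(dist, axis=0)]]  # each vertex conquered by nearest prototype

print("agreement with true labels:", (labels == y).mean())
```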
59

Hierarchical Logcut: A Fast And Efficient Way Of Energy Minimization Via Graph Cuts

Kulkarni, Gaurav 06 1900 (has links) (PDF)
Graph cuts have emerged as an important combinatorial optimization tool for many problems in vision. Most computer vision problems are discrete labeling problems: in stereopsis, for example, labels represent disparity, and in image restoration, labels correspond to image intensities. Finding a good labeling involves optimization of an energy function. In computer vision, energy functions for discrete labeling problems can be elegantly formulated through Markov random field (MRF) based modeling, and graph cut algorithms have been found to efficiently optimize a wide class of such energy functions. The main contribution of this thesis lies in developing an efficient combinatorial optimization algorithm applicable to a wide class of energy functions. Generally, graph cut algorithms deal sequentially with each label in the labeling problem at hand, so their time complexity increases linearly with the number of labels. Our algorithm finds a solution/labeling in logarithmic time complexity without compromising the quality of the solution. In our work, we present an improved Logcut algorithm [24]. The Logcut algorithm [24] finds the individual bit values in the integer representation of the labels; it has logarithmic time complexity, but requires training over the data set. Our improved Logcut (Heuristic-Logcut, or H-Logcut) algorithm eliminates the need for training and obtains results comparable to the original Logcut algorithm. The original Logcut algorithm cannot be initialized with a known labeling. We present a new algorithm, Sequential Bit Plane Correction (SBPC), which overcomes this drawback: SBPC starts from a known labeling and individually corrects each bit of a label. This algorithm, too, has logarithmic time complexity. SBPC in combination with the H-Logcut algorithm further improves the rate of convergence and the quality of results. Finally, a hierarchical approach to graph cut optimization is used to further improve the rate of convergence. Generally, in a hierarchical approach, a solution is first computed at a coarser level and then used to initialize the algorithm at a finer level. Here we present a novel way of initializing the algorithm at the finer level through the fusion move [25]. The SBPC and H-Logcut algorithms are extended to accommodate the hierarchical approach. This approach is found to drastically improve the rate of convergence and to attain a very low energy labeling. The effectiveness of our approach is demonstrated on stereopsis, where the algorithm significantly outperforms all existing algorithms in terms of quality of solution as well as rate of convergence.
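The bit-plane loop of SBPC can be sketched on a 1D denoising toy: labels are B-bit integers, and the algorithm sweeps from the most to the least significant bit, making a binary decision per site. The real SBPC solves each binary subproblem exactly with a graph cut; the sketch below substitutes a few ICM passes, so it only illustrates the bit-plane structure, and all names and constants are assumptions.

```python
# Schematic Sequential Bit Plane Correction with an ICM stand-in for graph cuts.
import numpy as np

rng = np.random.default_rng(6)
B, n, lam = 6, 200, 2.0
truth = (np.sin(np.linspace(0, 3 * np.pi, n)) * 28 + 32).astype(int)
obs = np.clip(truth + rng.integers(-6, 7, n), 0, 2 ** B - 1)

def energy_site(x, i, v):
    e = abs(v - obs[i])                   # data term
    if i > 0:
        e += lam * abs(v - x[i - 1])      # smoothness terms
    if i < n - 1:
        e += lam * abs(v - x[i + 1])
    return e

x = obs.copy()                            # known initial labeling
for b in range(B - 1, -1, -1):            # MSB -> LSB bit planes
    for _ in range(3):                    # ICM passes for this binary subproblem
        for i in range(n):
            v0, v1 = x[i] & ~(1 << b), x[i] | (1 << b)   # bit b cleared / set
            x[i] = v0 if energy_site(x, i, v0) <= energy_site(x, i, v1) else v1

print(f"mean abs error: {np.abs(obs - truth).mean():.2f} -> {np.abs(x - truth).mean():.2f}")
```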
60

Protein Structural Modeling Using Electron Microscopy Maps

Eman Alnabati (13108032) 19 July 2022 (has links)
Proteins are significant components of living cells. They perform a diverse range of biological functions, such as maintaining cell shape and metabolism. The functions of proteins are determined by their three-dimensional structures. Cryogenic electron microscopy (cryo-EM) is a technology known for determining the structure of large macromolecular assemblies, including protein complexes. When individual atomic protein structures are available, a critical task in structure modeling is fitting the individual structures into the cryo-EM density map.

In my research, I report a new computational method, MarkovFit, a machine-learning-based method that performs simultaneous rigid fitting of the atomic structures of individual proteins into cryo-EM maps of medium to low resolution to model the three-dimensional structure of protein complexes. MarkovFit uses a Markov random field (MRF), which allows probabilistic evaluation of fitted models. MarkovFit starts by searching the conformational space using FFT for potential poses of the protein structures, then computes scores that quantify the goodness of fit between each individual protein and the cryo-EM map, as well as the interactions between the proteins. Afterwards, the proteins and their interactions are represented as an MRF graph, whose nodes exchange information using a belief propagation algorithm; the best conformations are then extracted and refined using two structural refinement methods.

The performance of MarkovFit was tested on three datasets: a dataset of simulated cryo-EM maps at resolution 10 Å, a dataset of high-resolution experimentally determined cryo-EM maps, and a dataset of experimentally determined cryo-EM maps of medium to low resolution. In addition, the performance of MarkovFit was compared to two state-of-the-art methods on their datasets. Lastly, MarkovFit modeled protein complexes from the individual protein atomic models generated by AlphaFold, an AI-based model developed by DeepMind for predicting the 3D structure of proteins from their amino acid sequences.
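The FFT pose search in the first stage has a compact core: scoring every translation of a fixed-orientation component density against the map via FFT cross-correlation. The sketch below shows only that step on synthetic arrays standing in for real cryo-EM densities; the rotational search, goodness-of-fit features, and the MRF/belief-propagation stage are omitted.

```python
# Exhaustive translational scoring of a component density against a map via 3D FFTs.
import numpy as np

rng = np.random.default_rng(7)
em_map = rng.random((48, 48, 48))          # stand-in for the cryo-EM density map
probe = np.zeros_like(em_map)
probe[:8, :8, :8] = rng.random((8, 8, 8))  # component density in a map-sized box
em_map += 3.0 * np.roll(probe, (20, 11, 5), axis=(0, 1, 2))  # plant a true pose

# corr[t] = sum_x map[x] * probe[x - t], for every translation t at once.
corr = np.real(np.fft.ifftn(np.fft.fftn(em_map) * np.conj(np.fft.fftn(probe))))
best = np.unravel_index(np.argmax(corr), corr.shape)
print("best translation:", best)           # should recover (20, 11, 5)
```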
