  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
111

Non-parametric synthesis of volumetric textures from a 2D sample

Urs, Radu Dragos 29 March 2013 (has links) (PDF)
This thesis deals with the synthesis of anisotropic volumetric textures from a single 2D observation. We present variants of non-parametric, multi-scale algorithms whose main specificity is that the 3D synthesis process relies on sampling a single 2D input, ensuring consistency across the different views of the 3D texture. Two types of approaches are investigated, both multi-scale and based on a Markovian hypothesis. The first brings together a set of algorithms based on fixed-neighbourhood search, adapted from existing algorithms for texture synthesis from multiple 2D sources. The principle is that, starting from a random initialisation, the 3D texture is modified voxel by voxel in a deterministic manner, ensuring that the local grey-level configurations on the orthogonal slices through each voxel are similar to configurations in the input image. The second is an original probabilistic approach which aims to reproduce, in the textured volume, the pixel interactions learned from the input image. Learning is done by non-parametric Parzen windowing, and optimisation is handled voxel by voxel by a deterministic ICM-type algorithm. Several variants are proposed regarding the strategies used to handle simultaneously the orthogonal slices through each voxel. These synthesis methods are first applied to a set of structured textures of varied regularity and anisotropy. A comparative study and a sensitivity analysis are carried out, highlighting the strengths and weaknesses of the different algorithms. Finally, the methods are applied to the simulation of volumetric textures of carbon composite materials, using nanometre-scale snapshots obtained by transmission electron microscopy. The proposed experimental benchmark allows the performance of the different methods to be evaluated quantitatively and objectively.
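The fixed-neighbourhood search the abstract describes can be sketched in 2D (the thesis works on orthogonal slices of a 3D volume; the function and parameter names below are illustrative, not the thesis code): each pixel is deterministically replaced by the centre of the best-matching exemplar patch.

```python
import numpy as np

def synthesize_2d(exemplar, out_shape, half=1, n_sweeps=2, seed=0):
    """Fixed-neighbourhood synthesis sketch (2D analogue of the 3D method):
    starting from a random initialisation, each pixel is replaced,
    deterministically, by the centre of the exemplar patch whose grey-level
    configuration best matches the pixel's current neighbourhood (L2)."""
    rng = np.random.default_rng(seed)
    out = rng.choice(exemplar.ravel(), size=out_shape)   # random initialisation
    h, w = exemplar.shape
    patches, centres = [], []
    for i in range(half, h - half):                      # all exemplar patches
        for j in range(half, w - half):
            patches.append(exemplar[i-half:i+half+1, j-half:j+half+1].ravel())
            centres.append(exemplar[i, j])
    patches = np.asarray(patches, dtype=float)
    centres = np.asarray(centres)
    H, W = out_shape
    for _ in range(n_sweeps):                            # deterministic raster sweeps
        for i in range(half, H - half):
            for j in range(half, W - half):
                nb = out[i-half:i+half+1, j-half:j+half+1].ravel().astype(float)
                out[i, j] = centres[np.argmin(((patches - nb) ** 2).sum(axis=1))]
    return out

ex = np.arange(16).reshape(4, 4)      # toy 4x4 exemplar
tex = synthesize_2d(ex, (6, 6))
```

In the 3D setting, the same matching is performed against the single 2D input for each orthogonal slice through the voxel, which is what keeps the different views of the volume consistent.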
112

Robust estimation for spatial models and the skill test for disease diagnosis

Lin, Shu-Chuan 25 August 2008 (has links)
This thesis focuses on (1) statistical methodologies for the estimation of spatial data with outliers and (2) the classification accuracy of disease diagnosis. Chapter I, Robust Estimation for Spatial Markov Random Field Models: Markov Random Field (MRF) models are useful in analyzing spatial lattice data collected from semiconductor device fabrication and printed circuit board manufacturing processes or agricultural field trials. When outliers are present in the data, classical parameter estimation techniques (e.g., least squares) can be inefficient and can mislead the analyst. This chapter extends the MRF model to accommodate outliers and proposes robust parameter estimation methods, namely the robust M- and RA-estimates. Asymptotic distributions of the estimates with differentiable and non-differentiable robustifying functions are derived. Extensive simulation studies explore the robustness properties of the proposed methods with varying amounts of outliers in different patterns. Also provided are analyses of grid data with and without edge information. Three data sets taken from the literature illustrate the advantages of the methods. Chapter II, Extending the Skill Test for Disease Diagnosis: For diagnostic tests, we present an extension of the skill plot introduced by Mozer and Briggs (2003). The method is motivated by a study of diagnostic measures for osteoporosis. By restricting the area under the ROC curve (AUC) according to the skill statistic, we obtain a diagnostic test better suited to practical applications because it accounts for misclassification costs. We also construct relationships between the diseased and healthy groups, using the Koziol-Green and mean-shift models, to improve the skill statistic. Asymptotic properties of the skill statistic are provided. Simulation studies compare the theoretical results and the estimates under various disease rates and misclassification costs. We apply the proposed method to the classification of osteoporosis data.
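To illustrate the kind of robust M-estimation the first chapter builds on (for the location case, not the spatial MRF estimates of the thesis), here is a minimal Huber M-estimate computed by iteratively reweighted least squares; the tuning constant 1.345 is the standard choice for 95% efficiency at the normal:

```python
import numpy as np

def huber_location(x, c=1.345, tol=1e-8, max_iter=100):
    """Huber M-estimate of location via iteratively reweighted least squares.
    Observations far from the current estimate (scaled by the MAD) are
    down-weighted, so a few outliers barely move the estimate."""
    x = np.asarray(x, dtype=float)
    mu = np.median(x)                                   # robust starting point
    scale = np.median(np.abs(x - mu)) / 0.6745 or 1.0   # MAD-based scale
    for _ in range(max_iter):
        r = (x - mu) / scale
        w = np.where(np.abs(r) <= c, 1.0, c / np.abs(r))  # Huber weights
        mu_new = np.sum(w * x) / np.sum(w)
        if abs(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu

data = np.array([9.8, 10.1, 10.0, 9.9, 10.2, 50.0])  # one gross outlier
mu = huber_location(data)
```

Here the sample mean is pulled to about 16.7 by the outlier, while the M-estimate stays near 10 — the inefficiency-versus-breakdown trade-off the chapter studies for spatial models.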
113

Multiple classifier systems for the classification of hyperspectral data

Xia, Junshi 23 October 2014 (has links)
In this thesis, we propose several new techniques for the classification of hyperspectral remote sensing images based on multiple classifier systems (MCS). The proposed framework introduces significant innovations with respect to previous approaches in the same field, many of which are based mainly on a single algorithm. First, we propose to use Rotation Forests with several linear feature extraction techniques and compare them with traditional ensemble approaches such as Bagging, Boosting, Random Subspace and Random Forests. Second, the integration of support vector machines (SVM) with the rotation subspace framework for contextual classification is investigated. SVM and rotation subspaces are two powerful tools for high-dimensional data classification, so combining them can further improve classification performance. Third, we extend the Rotation Forest work by incorporating a local feature extraction technique and spatial contextual information via a Markov random field (MRF) to design robust spatial-spectral methods. Finally, we present a new general framework, the random subspace (RS) ensemble, to train series of effective classifiers, including decision trees and extreme learning machines (ELM), with extended multi-attribute profiles (EMAPs) for classifying hyperspectral data. Six RS ensemble methods, namely random subspace with decision trees (RSDT), Random Forest (RF), Rotation Forest (RoF), Rotation Random Forest (RoRF), RS with ELM (RSELM) and rotation subspace with ELM (RoELM), are constructed from multiple base learners. The effectiveness of the proposed techniques is illustrated by comparison with state-of-the-art methods on real hyperspectral data sets from different contexts.
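The Rotation Forest idea — per tree, split the features into random subsets, fit PCA on a bootstrap sample of each subset, assemble a block-diagonal rotation, and train a tree on the rotated data — can be sketched as follows (a toy sketch on scikit-learn components, not the thesis implementation; class and parameter names are invented for illustration):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import load_iris

class TinyRotationForest:
    """Minimal Rotation Forest sketch: each tree sees the data through its own
    block-diagonal PCA rotation, which diversifies the ensemble while keeping
    every base learner accurate."""
    def __init__(self, n_trees=10, n_subsets=2, seed=0):
        self.n_trees, self.n_subsets = n_trees, n_subsets
        self.rng = np.random.default_rng(seed)
        self.models = []   # list of (rotation matrix, fitted tree)

    def fit(self, X, y):
        n, d = X.shape
        for _ in range(self.n_trees):
            perm = self.rng.permutation(d)          # random feature subsets
            R = np.zeros((d, d))
            for subset in np.array_split(perm, self.n_subsets):
                rows = self.rng.choice(n, size=int(0.75 * n), replace=True)
                pca = PCA().fit(X[np.ix_(rows, subset)])
                R[np.ix_(subset, subset)] = pca.components_.T
            tree = DecisionTreeClassifier(random_state=0).fit(X @ R, y)
            self.models.append((R, tree))
        return self

    def predict(self, X):
        votes = np.stack([t.predict(X @ R) for R, t in self.models])
        # majority vote across the rotated trees
        return np.apply_along_axis(lambda c: np.bincount(c).argmax(), 0, votes)

X, y = load_iris(return_X_y=True)
clf = TinyRotationForest().fit(X, y)
```

Unlike Bagging or Random Subspace, every base tree here is trained on all features (just rotated), which is why Rotation Forests tend to pair low individual error with high ensemble diversity.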
114

Exploring minimum information paths in graphs for supervised classification problems

Hiraga, Alan Kazuo 05 May 2014 (has links)
Classification is a very important step in pattern recognition, as it aims to categorize objects by labeling them according to a set of inherent features. This process can be supervised, when there is a set of labeled training samples representing the classes; semi-supervised, when the number of labeled samples is limited or nearly nonexistent; or unsupervised, when there are no labeled samples. This work proposes to explore minimum-information paths in graphs for classification problems, through the definition of a supervised, non-parametric, graph-based classification method following a contextual approach. The method constructs a graph from the set of training samples, where samples are represented by vertices and edges link samples belonging to a neighbourhood system. From this graph, the method computes the local observed Fisher information, a measure based on the Potts model, for every vertex, quantifying the amount of information each sample carries. Generally, vertices of different classes connected by an edge have high information (they lie on class borders). The edges are then weighted by a function that penalizes connections between highly informative vertices. During this process, the highly informative vertices are identified and selected as prototype vertices, i.e., the vertices that define the class boundaries. Once the prototypes are defined, each prototype conquers the remaining samples by offering the shortest path in terms of information; when a sample is conquered, it receives the label of the winning prototype, completing the classification. To evaluate the method, statistical procedures for estimating error rates, such as hold-out, k-fold and leave-one-out cross-validation, are considered. The results obtained indicate that the method can be a viable alternative to existing classification techniques.
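The "conquest" step — prototypes competing for the remaining vertices along least-cost paths — is essentially multi-source Dijkstra on the information-weighted graph. A minimal sketch (the graph, weights and prototype labels below are toy stand-ins for the Fisher-information weighting of the thesis):

```python
import heapq

def conquer(n_nodes, edges, prototypes):
    """Multi-source Dijkstra sketch: each prototype 'conquers' the remaining
    vertices along least-cost (least-information) paths; a vertex takes the
    label of whichever prototype reaches it most cheaply."""
    adj = {v: [] for v in range(n_nodes)}
    for u, v, w in edges:                 # undirected weighted graph
        adj[u].append((v, w))
        adj[v].append((u, w))
    dist = {v: float('inf') for v in range(n_nodes)}
    label = {v: None for v in range(n_nodes)}
    heap = []
    for p, lab in prototypes.items():     # seed every prototype at cost 0
        dist[p], label[p] = 0.0, lab
        heapq.heappush(heap, (0.0, p, lab))
    while heap:
        d, u, lab = heapq.heappop(heap)
        if d > dist[u]:                   # stale heap entry
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v], label[v] = d + w, lab
                heapq.heappush(heap, (d + w, v, lab))
    return label

# two prototypes (vertex 0 labelled 'A', vertex 5 labelled 'B') compete
# for a 6-vertex path; the heavy middle edge acts as the class border
edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 5.0), (3, 4, 1.0), (4, 5, 1.0)]
labels = conquer(6, edges, {0: 'A', 5: 'B'})
```

The heavy edge (2, 3) plays the role of a high-information border: vertices on each side of it end up with the label of the nearer prototype.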
115

Hierarchical Logcut: A Fast and Efficient Way of Energy Minimization via Graph Cuts

Kulkarni, Gaurav 06 1900 (has links) (PDF)
Graph cuts have emerged as an important combinatorial optimization tool for many problems in vision. Most computer vision problems are discrete labeling problems: in stereopsis, for example, labels represent disparity, and in image restoration, labels correspond to image intensities. Finding a good labeling involves optimization of an energy function. In computer vision, energy functions for discrete labeling problems can be elegantly formulated through Markov Random Field (MRF) based modeling, and graph cut algorithms have been found to efficiently optimize a wide class of such energy functions. The main contribution of this thesis lies in developing an efficient combinatorial optimization algorithm applicable to a wide class of energy functions. Graph cut algorithms generally deal with each label in turn, so their time complexity increases linearly with the number of labels. Our algorithm finds a solution in logarithmic time complexity without compromising the quality of the solution. In our work, we present an improved Logcut algorithm [24]. The Logcut algorithm [24] finds the individual bit values in the integer representation of the labels; it has logarithmic time complexity but requires training over a data set. Our improved Logcut (Heuristic-Logcut, or H-Logcut) algorithm eliminates the need for training and obtains results comparable to the original Logcut algorithm. The original Logcut algorithm cannot be initialized from a known labeling. We present a new algorithm, Sequential Bit Plane Correction (SBPC), which overcomes this drawback: SBPC starts from a known labeling and corrects each bit of a label individually. This algorithm, too, has logarithmic time complexity, and in combination with H-Logcut it further improves the rate of convergence and the quality of the results.
Finally, a hierarchical approach to graph cut optimization is used to improve the rate of convergence further. In a hierarchical approach, a solution is first computed at a coarser level and its result is then used to initialize the algorithm at a finer level. We present a novel way of initializing the finer level through the fusion move [25], and extend the SBPC and H-Logcut algorithms to the hierarchical setting. This approach is found to improve the rate of convergence drastically and to attain very low-energy labelings. The effectiveness of our approach is demonstrated on stereopsis, where the algorithm significantly outperforms all existing algorithms in terms of both quality of solution and rate of convergence.
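The bit-plane idea behind SBPC — start from a known labeling and correct one bit plane at a time, most significant bit first — can be illustrated on a toy 1D MRF. This is a greedy per-pixel sketch of the principle (the thesis optimizes each bit plane with a graph cut, not the per-pixel flips used here; the energy and its weights are invented for illustration):

```python
import numpy as np

def energy(labels, observed, lam=3.0):
    """Toy MRF energy: L1 data term plus truncated-linear smoothness."""
    data = np.abs(labels - observed).sum()
    smooth = lam * np.minimum(np.abs(np.diff(labels)), 3).sum()
    return data + smooth

def sbpc(observed, n_bits=4, lam=3.0):
    """Sequential Bit Plane Correction sketch: start from the observed
    labelling and, from the most significant bit down, flip each pixel's bit
    whenever doing so lowers the energy (one pass per bit plane)."""
    labels = observed.copy()
    for bit in range(n_bits - 1, -1, -1):
        mask = 1 << bit
        for i in range(len(labels)):
            flipped = labels.copy()
            flipped[i] ^= mask
            if energy(flipped, observed, lam) < energy(labels, observed, lam):
                labels = flipped
    return labels

obs = np.array([3, 3, 3, 12, 3, 3, 3])   # one noisy spike in a flat signal
result = sbpc(obs)
```

Clearing the spike's most significant bit (12 → 4) already removes most of the smoothness penalty; the example also shows the per-bit greediness, since the spike ends at 4 rather than 3. Each bit plane here is a binary problem, which is what makes the label dimension logarithmic rather than linear.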
116

Integration of uncertainty and definition of critical thresholds for CO2 storage risk assessment

Okhulkova, Tatiana 15 December 2015 (has links)
The main goal of the thesis is to define how uncertainty can be accounted for in the risk assessment process for CO2 storage, and to quantify, by means of numerical models, scenarios of leakage by lateral migration and through the caprock. The chosen scenarios are quantified using a system modeling approach for which ad-hoc predictive numerical models are developed. A probabilistic parametric uncertainty propagation study using a polynomial chaos expansion is performed. Matters of spatial variability as a source of uncertainty are also discussed, and a comparison between homogeneous and heterogeneous representations of permeability is provided.
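A polynomial chaos expansion replaces an expensive model with a polynomial surrogate in the random inputs, from which statistics follow analytically. A minimal least-squares sketch for one standard-normal input (the quadratic "response" is a toy stand-in, not the thesis's leakage model), using the orthogonality E[He_j He_k] = k! δ_jk of the probabilists' Hermite polynomials:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander

def fit_pce(model, degree=4, n_train=200, seed=0):
    """Least-squares polynomial chaos sketch for a scalar model of one
    standard-normal input: regress model outputs on the probabilists'
    Hermite polynomials He_0..He_degree."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n_train)
    Psi = hermevander(x, degree)              # design matrix of He_k(x)
    coef, *_ = np.linalg.lstsq(Psi, model(x), rcond=None)
    return coef

model = lambda x: x**2 + 0.5 * x + 1.0        # toy response surface
coef = fit_pce(model)
# With E[He_j He_k] = k! * delta_jk: mean = c_0, variance = sum_{k>=1} k! c_k^2
fact = np.cumprod([1, 1, 2, 3, 4])            # k! for k = 0..4
mean, var = coef[0], np.sum(fact[1:] * coef[1:] ** 2)
```

For this toy response the surrogate recovers the exact mean 2 and variance 2.25 without any further Monte Carlo sampling, which is the appeal of the meta-model for propagating parametric uncertainty.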
117

Pravděpodobnostní diskrétní model porušování betonu / Probabilistic discrete model of concrete fracturing

Kaděrová, Jana January 2018 (has links)
The thesis presents the results of a numerical study of the performance of a 3D discrete meso-scale lattice-particle model of concrete. The existing model was extended by introducing spatial variability of a chosen material parameter in the form of a random field. Experimental data from bending tests on notched and unnotched beams were used both to identify the model parameters and to subsequently validate its performance. With the basic and the extended (randomized) versions of the model, numerical simulations were run so that the influence of the rate of fluctuation of the random field (governed by its correlation length) could be observed. The final part of the thesis describes the size and shape of the region of the beam that is active during the test, in which most of the fracture energy is released. This region determines the strength of the whole member and, as shown in the thesis, it does not have a constant size: it is influenced by the geometrical setup and by the correlation length of the random field.
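The role of the correlation length can be seen in a minimal 1D sketch of sampling a stationary Gaussian random field (a generic squared-exponential covariance with Cholesky sampling, assumed for illustration; the thesis's field and covariance model may differ):

```python
import numpy as np

def gaussian_field_1d(n, corr_len, sigma=1.0, seed=0):
    """Sketch: sample a 1D stationary Gaussian random field with a
    squared-exponential covariance via Cholesky factorisation. The
    correlation length controls how fast the parameter fluctuates in space."""
    x = np.linspace(0.0, 1.0, n)
    d = np.abs(x[:, None] - x[None, :])
    C = sigma**2 * np.exp(-(d / corr_len) ** 2)
    L = np.linalg.cholesky(C + 1e-8 * np.eye(n))   # jitter for stability
    rng = np.random.default_rng(seed)
    return x, L @ rng.standard_normal(n)

# short correlation length -> rapid fluctuation; long -> nearly constant field
x, rough = gaussian_field_1d(200, corr_len=0.02)
x, smooth = gaussian_field_1d(200, corr_len=0.5)
```

A correlation length much shorter than the beam's active fracture region averages out local weaknesses, while a long one makes whole regions uniformly weak or strong — which is why the member strength in the study depends on the interplay of geometry and correlation length.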
118

Protein Structural Modeling Using Electron Microscopy Maps

Eman Alnabati (13108032) 19 July 2022 (has links)
Proteins are significant components of living cells. They perform a diverse range of biological functions, such as maintaining cell shape and carrying out metabolism. The functions of proteins are determined by their three-dimensional structures. Cryogenic electron microscopy (cryo-EM) is a technology known for determining the structure of large macromolecular assemblies, including protein complexes. When individual atomic protein structures are available, a critical task in structure modeling is fitting the individual structures into the cryo-EM density map.
In my research, I report a new computational method, MarkovFit, a machine learning-based method that performs simultaneous rigid fitting of the atomic structures of individual proteins into cryo-EM maps of medium to low resolution, to model the three-dimensional structure of protein complexes. MarkovFit uses a Markov random field (MRF), which allows probabilistic evaluation of fitted models. MarkovFit starts by searching the conformational space using FFT for potential poses of the protein structures, then computes scores that quantify the goodness-of-fit between each individual protein and the cryo-EM map, as well as the interactions between the proteins. The proteins and their interactions are then represented as an MRF graph, whose nodes exchange information via a belief propagation algorithm; the best conformations are extracted and refined using two structural refinement methods.
The performance of MarkovFit was tested on three datasets: a dataset of simulated cryo-EM maps at 10 Å resolution, a dataset of high-resolution experimentally determined cryo-EM maps, and a dataset of experimentally determined cryo-EM maps of medium to low resolution. In addition, the performance of MarkovFit was compared to that of two state-of-the-art methods on their datasets. Lastly, MarkovFit modeled protein complexes from the individual protein atomic models generated by AlphaFold, an AI-based model developed by DeepMind for predicting the 3D structure of proteins from their amino acid sequences.
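The belief-propagation step — nodes (proteins) with candidate poses exchanging messages so that the jointly best configuration can be read off — can be sketched with max-sum message passing on a toy chain-structured MRF (a generic Viterbi-style sketch with random toy scores, not MarkovFit's actual graph or scoring functions):

```python
import numpy as np
from itertools import product

def chain_map_bp(unary, pairwise):
    """Max-sum belief propagation on a chain-structured MRF: the forward pass
    accumulates, per state, the best attainable score so far; the backward
    pass follows the stored argmax pointers to read off the MAP labelling."""
    backptr = []
    m = np.zeros(len(unary[0]))                   # message into the first node
    for u, P in zip(unary[:-1], pairwise):
        scores = (u + m)[:, None] + P             # shape (states_i, states_{i+1})
        backptr.append(scores.argmax(axis=0))
        m = scores.max(axis=0)
    cfg = [int((unary[-1] + m).argmax())]
    for bp in reversed(backptr):
        cfg.append(int(bp[cfg[-1]]))
    return list(reversed(cfg))

rng = np.random.default_rng(1)
unary = [rng.normal(size=4) for _ in range(3)]          # fit score per candidate pose
pairwise = [rng.normal(size=(4, 4)) for _ in range(2)]  # pairwise interaction scores

# sanity check against exhaustive search over the 4**3 configurations
best = max(product(range(4), repeat=3),
           key=lambda c: unary[0][c[0]] + unary[1][c[1]] + unary[2][c[2]]
                       + pairwise[0][c[0], c[1]] + pairwise[1][c[1], c[2]])
```

On a tree-structured graph this message passing is exact; on loopy graphs like a general protein-interaction MRF it becomes an approximation, which is one reason candidate conformations are subsequently refined.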
119

A Spatial-Temporal Contextual Kernel Method for Generating High-Quality Land-Cover Time Series

Wehmann, Adam 25 September 2014 (has links)
No description available.
120

Exploiting non-redundant local patterns and probabilistic models for analyzing structured and semi-structured data

Wang, Chao 08 January 2008 (has links)
No description available.
