111

Contour Matching Using Local Affine Transformations

Bachelder, Ivan A. 01 April 1992 (has links)
Partial constraints are often available in visual processing tasks requiring the matching of contours in two images. We propose a non-iterative scheme to determine contour matches using locally affine transformations. The method assumes that contours are approximated by the orthographic projection of planar patches within oriented neighborhoods of varying size. For degenerate cases, a minimal matching solution is chosen closest to the minimal pure translation. Performance on noisy synthetic and natural contour imagery is reported.
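The core operation the abstract describes, estimating a local affine transformation between corresponding contour neighborhoods, can be sketched as a linear least-squares fit. This is an illustrative reading, not Bachelder's exact non-iterative scheme; the function name and toy data are assumptions.

```python
# Sketch: fit a local affine transform (A, t) mapping one contour
# neighborhood onto another by linear least squares.
# Illustrative only -- not the thesis's exact non-iterative scheme.
import numpy as np

def fit_local_affine(src: np.ndarray, dst: np.ndarray):
    """Least-squares affine fit: dst ~ src @ A.T + t.

    src, dst: (N, 2) arrays of corresponding contour points
    within one oriented neighborhood.
    """
    n = src.shape[0]
    # Design matrix: each row [x, y, 1] for the homogeneous fit.
    X = np.hstack([src, np.ones((n, 1))])
    # Solve X @ M = dst for M (3x2); M[:2].T is A, M[2] is t.
    M, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return M[:2].T, M[2]

# Toy usage: recover a known affine map from noisy samples.
rng = np.random.default_rng(0)
src = rng.uniform(-1, 1, size=(20, 2))
A_true = np.array([[1.1, 0.2], [-0.1, 0.9]])
t_true = np.array([0.5, -0.3])
dst = src @ A_true.T + t_true + 0.01 * rng.normal(size=src.shape)
A_est, t_est = fit_local_affine(src, dst)
```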
112

Perceptually-based Comparison of Image Similarity Metrics

Russell, Richard, Sinha, Pawan 01 July 2001 (has links)
The image comparison operation, assessing how well one image matches another, forms a critical component of many image analysis systems and models of human visual processing. Two norms used commonly for this purpose are L1 and L2, which are specific instances of the Minkowski metric. However, there is often not a principled reason for selecting one norm over the other. One way to address this problem is by examining whether one metric better captures the perceptual notion of image similarity than the other. With this goal, we examined perceptual preferences for images retrieved on the basis of the L1 versus the L2 norm. These images were either small fragments without recognizable content, or larger patterns with recognizable content created via vector quantization. In both conditions the subjects showed a consistent preference for images matched using the L1 metric. These results suggest that, in the domain of natural images of the kind we have used, the L1 metric may better capture human notions of image similarity.
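A minimal sketch of the two metrics under comparison. The toy patches below are constructed so the two norms disagree, which is why the choice matters; the data and names are illustrative, not the stimuli used in the study.

```python
# Sketch: retrieving the closest patch under the L1 vs. the L2 norm,
# the two Minkowski metrics compared in the study above.
# Patches are synthetic; this is not the experimental stimulus set.
import numpy as np

def minkowski(a: np.ndarray, b: np.ndarray, p: int) -> float:
    """Minkowski distance between two equally sized patches."""
    return float(np.sum(np.abs(a - b) ** p) ** (1.0 / p))

def best_match(query, candidates, p):
    """Index of the candidate closest to the query under L_p."""
    return int(np.argmin([minkowski(query, c, p) for c in candidates]))

# Two candidates that the norms rank differently: one concentrated
# error (a single pixel off by 4) vs. a diffuse error (all 16 pixels
# off by 0.3). L1 scores them 4.0 vs 4.8; L2 scores them 4.0 vs 1.2.
query = np.zeros((4, 4))
concentrated = query.copy()
concentrated[0, 0] = 4.0
diffuse = query + 0.3
print(best_match(query, [concentrated, diffuse], p=1))  # 0: L1 prefers concentrated
print(best_match(query, [concentrated, diffuse], p=2))  # 1: L2 prefers diffuse
```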
113

Matching Interest Points Using Projective Invariant Concentric Circles

Chiu, Han-Pang, Lozano-Pérez, Tomás 01 1900 (has links)
We present a new method to perform reliable matching between different images. This method exploits a projective invariant property between concentric circles and the corresponding projected ellipses to find complete region correspondences centered on interest points. The method matches interest points allowing for a full perspective transformation and exploiting all the available luminance information in the regions. Experiments have been conducted on many different data sets to compare our approach to SIFT local descriptors. The results show the new method offers increased robustness to partial visibility, object rotation in depth, and viewpoint angle change. / Singapore-MIT Alliance (SMA)
114

An efficient Bayesian formulation for production data integration into reservoir models

Leonardo, Vega Velasquez 17 February 2005 (has links)
Current techniques for production data integration into reservoir models can be broadly grouped into two categories: deterministic and Bayesian. The deterministic approach relies on imposing parameter smoothness constraints using spatial derivatives to ensure large-scale changes consistent with the low resolution of the production data. The Bayesian approach is based on prior estimates of model statistics such as parameter covariance and data errors and attempts to generate posterior models consistent with the static and dynamic data. Both approaches have been successful for field-scale applications although the computational costs associated with the two methods can vary widely. This is particularly the case for the Bayesian approach that utilizes a prior covariance matrix that can be large and full. To date, no systematic study has been carried out to examine the scaling properties and relative merits of the methods. The main purpose of this work is twofold. First, we systematically investigate the scaling of the computational costs for the deterministic and the Bayesian approaches for realistic field-scale applications. Our results indicate that the deterministic approach exhibits a linear increase in the CPU time with model size compared to a quadratic increase for the Bayesian approach. Second, we propose a fast and robust adaptation of the Bayesian formulation that preserves the statistical foundation of the Bayesian method and at the same time has a scaling property similar to that of the deterministic approach. This can lead to orders of magnitude savings in computation time for model sizes greater than 100,000 grid blocks. We demonstrate the power and utility of our proposed method using synthetic examples and a field example from the Goldsmith field, a carbonate reservoir in west Texas. The use of the new efficient Bayesian formulation along with the Randomized Maximum Likelihood method allows straightforward assessment of uncertainty. The former provides computational efficiency and the latter avoids rejection of expensive conditioned realizations.
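For readers unfamiliar with the formulation, a minimal sketch of the classical linearized Bayesian (MAP) update follows. The dense prior covariance C_M appearing here is the term whose size drives the quadratic scaling the abstract refers to. The toy dimensions and forward model are assumptions, and this is the classical update, not the thesis's fast reformulation.

```python
# Sketch: linearized Bayesian (MAP) update for a linear(ized) forward
# model d = G m. The dense prior covariance C_M is what makes the
# classical approach expensive as the model grows.
# Illustrative only -- not the thesis's efficient formulation.
import numpy as np

def bayesian_map_update(m_prior, C_M, G, C_D, d_obs):
    """m_MAP = m_prior + C_M G^T (G C_M G^T + C_D)^-1 (d_obs - G m_prior)."""
    S = G @ C_M @ G.T + C_D            # data-space innovation covariance
    K = C_M @ G.T @ np.linalg.inv(S)   # gain operator mapping residual to model
    return m_prior + K @ (d_obs - G @ m_prior)

# Toy problem: 50 model parameters, 5 measurements.
rng = np.random.default_rng(1)
n, k = 50, 5
m_true = rng.normal(size=n)
G = rng.normal(size=(k, n))
C_M = np.eye(n)                        # dense and full in realistic settings
C_D = 0.01 * np.eye(k)
d_obs = G @ m_true + rng.multivariate_normal(np.zeros(k), C_D)
m_map = bayesian_map_update(np.zeros(n), C_M, G, C_D, d_obs)
```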
115

Evolutionary study of the Hox gene family with matrix-based bioinformatics approaches

Thomas-Chollier, Morgane 27 June 2008 (has links)
Hox transcription factors are extensively investigated in diverse fields of molecular and evolutionary biology. Hox genes belong to the family of homeobox transcription factors characterised by a 60-amino-acid region called the homeodomain. These genes are evolutionarily conserved and play crucial roles in the development of animals. In particular, they are involved in the specification of segmental identity and in tetrapod limb differentiation. In vertebrates, this family of genes can be divided into 14 groups of homology. Common methods to classify Hox proteins focus on the homeodomain, but classification is hampered by the high conservation of this short domain. Since phylogenetic tree reconstruction is time-consuming, it is not suitable for classifying the growing number of Hox sequences. The first goal of this thesis is therefore to design an automated approach to classify vertebrate Hox proteins into their groups of homology. This approach classifies Hox proteins on the basis of their scores for a combination of protein generalised profiles. The resulting program, HoxPred, combines predictive accuracy and time efficiency. We used this program to detect and classify Hox genes in several teleost fish genomes. In particular, it allowed us to clarify the evolutionary history of the HoxC1a genes in teleosts. Overall, HoxPred could efficiently contribute to the bioinformatics toolbox commonly used to annotate vertebrate Hox sequences. This program was then evaluated on non-vertebrate species. Although not intended for the classification of Hox proteins in distantly related species, HoxPred showed high accuracy in bilaterians. It has also given insights into the evolutionary relationships between bilaterian posterior Hox genes, which are notoriously difficult to classify with phylogenetic trees. As transcription factors, Hox proteins regulate target genes by specifically binding DNA at cis-regulatory elements. Only a few of these target genes have been identified so far. The second goal of this work was therefore to evaluate whether computational approaches can detect Hox cis-regulatory elements in genomic sequences. Regulatory Sequence Analysis Tools (RSAT) is a suite of bioinformatics tools dedicated to the detection of cis-regulatory elements in genomes. We participated in the development of matrix-based pattern-matching approaches in RSAT. After performing a statistical validation of the pattern-matching scores, we focused on a case study based on the vertebrate HoxB1 protein, which binds DNA with its cofactors Pbx and Meis. This study aimed at predicting combinations of cis-regulatory elements for these three transcription factors.
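A minimal sketch of profile-based classification in the spirit of HoxPred: score a sequence window against per-class scoring matrices and keep the best-scoring homology group. HoxPred's generalised profiles are considerably richer; the random matrices and class labels below are stand-ins.

```python
# Sketch: classifying a protein fragment by its scores against
# per-class position-specific scoring matrices (PSSMs). HoxPred's
# generalised profiles are richer; this is a stand-in illustration.
import numpy as np

AA = "ACDEFGHIKLMNPQRSTVWY"
IDX = {a: i for i, a in enumerate(AA)}

def score(seq: str, pssm: np.ndarray) -> float:
    """Sum of per-position log-odds scores for a gapless alignment."""
    return sum(pssm[pos][IDX[aa]] for pos, aa in enumerate(seq))

def classify(seq: str, profiles: dict) -> str:
    """Return the homology-group label whose profile scores highest."""
    return max(profiles, key=lambda label: score(seq, profiles[label]))

# Toy profiles for two hypothetical homology groups over a 6-residue
# window (random log-odds values, for illustration only).
rng = np.random.default_rng(2)
profiles = {
    "group_1": rng.normal(size=(6, 20)),
    "group_13": rng.normal(size=(6, 20)),
}
print(classify("WFQNRR", profiles))   # WFQNRR: a conserved homeodomain motif
```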
116

Analysis of Microstrip Lines on Substrates Composed of Several Dielectric Layers under the Application of the Discrete Mode Matching

Sotomayor Polar, Manuel Gustavo January 2008 (has links)
Microstrip structures became very attractive with the development of cost-effective dielectric materials. Among the several techniques suitable for the analysis of such structures, the discrete mode matching (DMM) method is a full-wave approach that allows a fast solution of the Helmholtz equation. Combined with a full-wave equivalent circuit, the DMM allows fast and accurate analysis of microstrip lines on multilayered substrates. Knowledge of properties such as dispersion and the electromagnetic fields is essential in the implementation of such transmission lines, so a MATLAB computer code based on the DMM was developed to perform this analysis. The principal focus of the analysis is the use of different dielectric profiles to reduce dispersion in comparison with a one-layer cylindrical microstrip line, showing a reduction of almost 50%. The analysis also includes the current density distribution and a representation of the electromagnetic fields. Finally, the data is compared with Ansoft HFSS to validate the results. / The German Aerospace Center has rights over the thesis work
117

Metabolic Network Alignments and their Applications

Cheng, Qiong 01 December 2009 (has links)
The accumulation of high-throughput genomic and proteomic data allows for the reconstruction of increasingly large and complex metabolic networks. In order to analyze the accumulated data and the reconstructed networks, it is critical to identify network patterns and evolutionary relations between metabolic networks, but even finding similar networks is computationally challenging. This dissertation addresses these challenges with discrete optimization and the corresponding algorithmic techniques. Based on the properties of gene duplication and function sharing in biological networks, we have formulated the network alignment problem, which asks for the optimal vertex-to-vertex mapping allowing path contraction, vertex deletion, and vertex insertion. We have proposed the first polynomial-time algorithm for aligning an acyclic metabolic pattern pathway with an arbitrary metabolic network. We have also proposed a polynomial-time algorithm for patterns with small treewidth and implemented it for series-parallel patterns, which are commonly found among metabolic networks. We have developed a metabolic network alignment tool for free public use. We have performed pairwise mapping of all pathways among five organisms and found a set of statistically significant pathway similarities. We have also applied the network alignment to identifying inconsistencies, inferring missing enzymes, and finding potential candidates.
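The simplest special case of the alignment problem described above, two linear pathways with insertion and deletion penalties, can be sketched as a Needleman-Wunsch-style dynamic program. The gap cost and the EC-number similarity heuristic are illustrative assumptions; the dissertation's algorithms additionally handle path contraction and treewidth-bounded patterns.

```python
# Sketch: aligning a linear pattern pathway against a linear target
# pathway with insertion/deletion penalties -- the simplest special
# case of the network alignment problem formulated above. The gap
# cost and EC-number similarity heuristic are illustrative choices.

def ec_similarity(a: str, b: str) -> float:
    """Number of leading EC-number fields two enzymes share."""
    shared = 0
    for x, y in zip(a.split("."), b.split(".")):
        if x != y:
            break
        shared += 1
    return float(shared)

def align_pathways(pattern, target, sim=ec_similarity, gap=-1.0):
    """Needleman-Wunsch-style DP over two enzyme (EC-number) lists."""
    n, m = len(pattern), len(target)
    dp = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = i * gap                      # delete pattern vertices
    for j in range(1, m + 1):
        dp[0][j] = j * gap                      # insert target vertices
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dp[i][j] = max(
                dp[i - 1][j - 1] + sim(pattern[i - 1], target[j - 1]),
                dp[i - 1][j] + gap,             # deletion
                dp[i][j - 1] + gap,             # insertion
            )
    return dp[n][m]

# Toy fragments: the best alignment matches the two shared enzymes
# and pays one insertion for the extra target vertex.
print(align_pathways(["1.1.1.1", "2.7.1.2"],
                     ["1.1.1.27", "4.2.1.11", "2.7.1.2"]))  # 6.0
```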
118

Bayesian Logistic Regression with Jaro-Winkler String Comparator Scores Provides Sizable Improvement in Probabilistic Record Matching

Jann, Dominic 1983- 14 March 2013 (has links)
Record matching is a fundamental and ubiquitous part of today's society. Anything from typing in a password to access your email to connecting existing health records in California with new health records in New York requires matching records together. In general, there are two types of record matching algorithms: deterministic, a more rules-based approach, and probabilistic, a model-based approach. Both types have their advantages and disadvantages. If the amount of data is relatively small, deterministic algorithms yield very high success rates. However, the number of common mistakes, and subsequent rules, becomes astronomically large as the sizes of the datasets increase. This leads to a highly labor-intensive process of updating and maintaining the matching algorithm. On the other hand, probabilistic record matching implements a mathematical model that can take keying mistakes into account, does not require as much maintenance and overhead, and provides a probability that two particular entities should be linked. At the same time, as a model, assumptions need to be met, fitness has to be assessed, and predictions can be incorrect. Regardless of the type of algorithm, nearly all utilize a 0/1 field-matching structure, including the Fellegi-Sunter algorithm from 1969: either the fields match entirely, or they do not match at all. As a result, records that differ only by typographical errors are scored as complete non-matches, and false negatives can result. My research shows that using Jaro-Winkler string comparator scores as predictors in a Bayesian logistic regression model, in lieu of the restrictive binary structure, yields marginal improvement over current methodologies.
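A minimal sketch of the idea: compute Jaro-Winkler similarities per field and use them as continuous predictors of a match, instead of 0/1 agreement indicators. A plain maximum-likelihood logistic fit stands in for the Bayesian model here, and the toy record pairs are fabricated for illustration.

```python
# Sketch: Jaro-Winkler similarity as a continuous predictor for
# record matching. A plain logistic fit stands in for the Bayesian
# logistic regression in the thesis; all data here is illustrative.
import numpy as np

def jaro(s1: str, s2: str) -> float:
    """Jaro similarity in [0, 1]."""
    if s1 == s2:
        return 1.0
    l1, l2 = len(s1), len(s2)
    if not l1 or not l2:
        return 0.0
    window = max(max(l1, l2) // 2 - 1, 0)
    m1, m2 = [False] * l1, [False] * l2
    matches = 0
    for i, c in enumerate(s1):            # count matching characters
        for j in range(max(0, i - window), min(l2, i + window + 1)):
            if not m2[j] and s2[j] == c:
                m1[i] = m2[j] = True
                matches += 1
                break
    if not matches:
        return 0.0
    t, j = 0, 0                           # count transpositions
    for i in range(l1):
        if m1[i]:
            while not m2[j]:
                j += 1
            if s1[i] != s2[j]:
                t += 1
            j += 1
    t //= 2
    return (matches / l1 + matches / l2 + (matches - t) / matches) / 3

def jaro_winkler(s1: str, s2: str, p: float = 0.1) -> float:
    """Boost the Jaro score for a shared prefix of up to 4 characters."""
    j = jaro(s1, s2)
    prefix = 0
    for a, b in zip(s1, s2):
        if a != b or prefix == 4:
            break
        prefix += 1
    return j + prefix * p * (1 - j)

def fit_logistic(X, y, lr=0.5, steps=5000):
    """Maximum-likelihood logistic regression by gradient ascent
    (a stand-in for the Bayesian fit in the thesis)."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p_hat = 1 / (1 + np.exp(-X @ w))
        w += lr * X.T @ (y - p_hat) / len(y)
    return w

# Toy training pairs: (name A, name B, 1 = same entity).
pairs = [("MARTHA", "MARHTA", 1), ("JONES", "JOHNSON", 0),
         ("DWAYNE", "DUANE", 1), ("SMITH", "WRIGHT", 0)]
X = np.array([[1.0, jaro_winkler(a, b)] for a, b, _ in pairs])  # intercept + score
y = np.array([label for *_, label in pairs], dtype=float)
w = fit_logistic(X, y)
```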
119

Performance of Assisted History Matching Techniques When Utilizing Multiple Initial Geologic Models

Aggarwal, Akshay 14 March 2013 (has links)
History matching is a process wherein changes are made to an initial geologic model of a reservoir so that the predicted reservoir performance matches the known production history. Changes are made to the model parameters, which include rock and fluid parameters (viscosity, compressibility, relative permeability, etc.) or properties within the geologic model. Assisted History Matching (AHM) provides an algorithmic framework to minimize the mismatch in simulation and aids in accelerating this process. The changes made by AHM techniques, however, cannot ensure a geologically consistent reservoir model. In fact, the performance of these techniques depends on the initial starting model. In order to understand the impact of the initial model, this project explored the performance of the AHM approach using a specific field case, but working with multiple distinct geologic scenarios. The project involved an integrated seismic-to-simulation study, wherein I interpreted the seismic data, assembled the geological information, and performed petrophysical log evaluation along with well test data calibration. The ensemble of static models obtained was carried through the AHM methodology. I used sensitivity analysis to determine the most important dynamic parameters that affect the history match. These parameters govern the large-scale changes in the reservoir description and are optimized using the Evolutionary Strategy Algorithm. Finally, streamline-based techniques were used for local modifications to match the water cut well by well. The following general conclusions were drawn from this study: (a) The use of multiple simple geologic models is extremely useful for screening possible geologic scenarios, and especially for discarding unreasonable alternative models; this was especially true for the large-scale architecture of the reservoir. (b) The AHM methodology was very effective in exploring a large number of parameters, running the simulation cases, and generating the calibrated reservoir models. The calibration step consistently worked better if the models had more spatial detail, instead of the simple models used for screening. (c) The AHM methodology implemented a sequence of pressure and water cut history matching; an examination of specific models indicated that a better geologic description minimized the conflict between these two match criteria.
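The parameter-calibration step can be sketched as a small (mu, lambda) evolution strategy. The quadratic mismatch below stands in for a full reservoir-simulation run, and the parameter names in the comment are hypothetical; this illustrates the algorithm class, not the project's actual AHM workflow.

```python
# Sketch: a minimal (mu, lambda) evolution strategy of the kind used
# to calibrate large-scale parameters during assisted history
# matching. The quadratic "mismatch" stands in for a reservoir
# simulation run; everything here is an illustrative assumption.
import numpy as np

def mismatch(params: np.ndarray) -> float:
    """Toy objective: squared error against hypothetical 'true' params
    (e.g. a permeability multiplier, skin factor, aquifer strength)."""
    target = np.array([2.0, -1.0, 0.5])
    return float(np.sum((params - target) ** 2))

def evolution_strategy(f, dim, mu=5, lam=20, sigma=0.5, gens=100, seed=0):
    rng = np.random.default_rng(seed)
    parents = rng.normal(size=(mu, dim))
    for _ in range(gens):
        # Each offspring mutates a randomly chosen parent.
        idx = rng.integers(0, mu, size=lam)
        offspring = parents[idx] + sigma * rng.normal(size=(lam, dim))
        scores = np.array([f(x) for x in offspring])
        parents = offspring[np.argsort(scores)[:mu]]   # (mu, lambda) selection
        sigma *= 0.97                                  # simple step-size decay
    return parents[0]

best = evolution_strategy(mismatch, dim=3)
```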
120

Contributions to Content-Based Image Retrieval Using Pictorial Queries

Borràs Agnosto, Agnès 06 November 2009 (has links)
Mass access to digital cameras, personal computers, and the Internet has driven the creation of huge volumes of data in digital format. In this context, tools designed to organize information and facilitate its retrieval become increasingly relevant. Images are a particular kind of data that require specific description and indexing techniques. The area of computer vision devoted to the study of these techniques is called Content-Based Image Retrieval (CBIR). CBIR systems do not use text-based descriptions; instead, they rely on features extracted from the images themselves. In contrast to the more than 6000 languages spoken in the world, descriptions based on visual features offer a universal means of expression. The intense research in the field of CBIR systems has been applied to very diverse knowledge areas: CBIR applications have been developed for medicine, intellectual-property protection, journalism, graphic design, Internet search, cultural-heritage preservation, and so on. One of the key points of a CBIR application lies in the design of the user functions. The user is responsible for formulating the queries from which the image search is performed. We have focused on systems in which the query is formulated from a pictorial representation. We propose a taxonomy of query systems composed of four different paradigms: Query-by-Selection, Query-by-Iconic-Composition, Query-by-Sketch, and Query-by-Illustration. Each paradigm provides a different level of expressive power to the user. From the simple selection of an image to the creation of a colour illustration, the user takes control of the system's input data. Throughout the chapters of this thesis we analyze the influence that each query paradigm exerts on the internal processes of a CBIR system. Along the way we propose a set of contributions, which we illustrate from a practical point of view by means of a final application.
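A minimal sketch of the simplest of the four paradigms, Query-by-Selection: retrieve the database images whose global colour histograms are nearest (here under the L1 distance) to a selected example. The descriptor and the random toy database are illustrative assumptions, not the thesis's implementation.

```python
# Sketch: Query-by-Selection, the simplest paradigm in the taxonomy
# above -- rank database images by the distance between their global
# colour histograms and the selected example's. Illustrative only.
import numpy as np

def colour_histogram(img: np.ndarray, bins: int = 8) -> np.ndarray:
    """Normalised joint RGB histogram of an (H, W, 3) uint8 image."""
    h, _ = np.histogramdd(img.reshape(-1, 3),
                          bins=(bins,) * 3, range=((0, 256),) * 3)
    return (h / h.sum()).ravel()

def query_by_selection(query_img, database, k=3):
    """Indices of the k database images closest to the query."""
    q = colour_histogram(query_img)
    dists = [np.sum(np.abs(q - colour_histogram(d))) for d in database]
    return np.argsort(dists)[:k]

# Toy database of random "images"; the query itself ranks first.
rng = np.random.default_rng(3)
db = [rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8)
      for _ in range(10)]
print(query_by_selection(db[0], db, k=3))
```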
