361. Convex Mathematical Programs for Relational Matching of Object Views. Schellewald, Christian. January 2005.
Dissertation, University of Mannheim, 2004.
362. Fast longest prefix matching: algorithms, analysis, and applications. Waldvogel, Marcel. January 2000.
Diss. no. 13266, Technical Sciences, Swiss Federal Institute of Technology (SFIT) Zurich. / Trade edition: Aachen: Shaker. Includes bibliographical references.
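As background on the problem named in the title, a minimal sketch, assuming one hash table per prefix length and a longest-to-shortest probe; the scheme this thesis is associated with (binary search over prefix lengths using precomputed markers) accelerates exactly this lookup and is not implemented here:

```python
# Baseline longest-prefix match: one hash table per prefix length,
# probed from the longest length down. Addresses are plain bit strings
# for simplicity; the next-hop labels are illustrative.

def build_tables(prefixes):
    # prefixes: dict mapping bit-string prefix -> next hop
    tables = {}
    for p, hop in prefixes.items():
        tables.setdefault(len(p), {})[p] = hop
    return tables

def longest_prefix_match(tables, addr):
    for length in sorted(tables, reverse=True):
        hop = tables[length].get(addr[:length])
        if hop is not None:
            return hop
    return None  # no matching prefix (no default route)

tables = build_tables({"0": "A", "01": "B", "0110": "C"})
assert longest_prefix_match(tables, "01101111") == "C"
assert longest_prefix_match(tables, "0100") == "B"
```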
363. Computation of weights for probabilistic record linkage using the EM algorithm. Bauman, G. John. January 2006.
Project (M.S.)--Brigham Young University. Dept. of Statistics, 2006. / Includes bibliographical references (p. 45-46).
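As background, a minimal sketch of how EM can estimate Fellegi-Sunter agreement weights under conditional independence; the initial values and binary comparison vectors are illustrative assumptions, not the project's actual data or implementation:

```python
import numpy as np

# gamma[i, k] = 1 if record pair i agrees on field k, else 0.
def em_linkage_weights(gamma, n_iter=100):
    n, K = gamma.shape
    p = 0.1                      # initial guess: fraction of true matches
    m = np.full(K, 0.9)          # P(field agrees | pair is a match)
    u = np.full(K, 0.1)          # P(field agrees | pair is a non-match)
    for _ in range(n_iter):
        # E-step: posterior probability that each pair is a match
        lm = p * np.prod(m**gamma * (1 - m)**(1 - gamma), axis=1)
        lu = (1 - p) * np.prod(u**gamma * (1 - u)**(1 - gamma), axis=1)
        g = lm / (lm + lu)
        # M-step: re-estimate parameters from the expected counts
        p = g.mean()
        m = (g[:, None] * gamma).sum(axis=0) / g.sum()
        u = ((1 - g)[:, None] * gamma).sum(axis=0) / (1 - g).sum()
    # Agreement/disagreement weights in log2 form, as in Fellegi-Sunter
    w_agree = np.log2(m / u)
    w_disagree = np.log2((1 - m) / (1 - u))
    return w_agree, w_disagree, p
```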
364. Efficient Image Matching with Distributions of Local Invariant Features. Grauman, Kristen; Darrell, Trevor. 22 November 2004.
Sets of local features that are invariant to common image transformations are an effective representation to use when comparing images; current methods typically judge feature sets' similarity via a voting scheme (which ignores co-occurrence statistics) or by comparing histograms over a set of prototypes (which must be found by clustering). We present a method for efficiently comparing images based on their discrete distributions (bags) of distinctive local invariant features, without clustering descriptors. Similarity between images is measured with an approximation of the Earth Mover's Distance (EMD), which quickly computes the minimal-cost correspondence between two bags of features. Each image's feature distribution is mapped into a normed space with a low-distortion embedding of EMD. Examples most similar to a novel query image are retrieved in time sublinear in the number of examples via approximate nearest-neighbor search in the embedded space. We also show how the feature representation may be extended to encode the distribution of geometric constraints between the invariant features appearing in each image. We evaluate our technique with scene recognition and texture classification tasks.
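As a rough illustration of the matching cost being approximated, a minimal sketch; the equal-size bags, random descriptors, and brute-force assignment are assumptions for illustration, since the paper's actual contribution is the EMD embedding plus approximate nearest-neighbor search:

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.optimize import linear_sum_assignment

# For two equal-size bags with uniform point weights, EMD reduces to a
# minimal-cost one-to-one correspondence between the descriptors.
def emd_equal_bags(feats_a, feats_b):
    # feats_a, feats_b: (n, d) arrays of local descriptors (e.g., SIFT)
    cost = cdist(feats_a, feats_b)            # pairwise ground distances
    rows, cols = linear_sum_assignment(cost)  # minimal-cost correspondence
    return cost[rows, cols].mean()

# Example: two bags of 50 random 128-dimensional descriptors
rng = np.random.default_rng(0)
d = emd_equal_bags(rng.normal(size=(50, 128)), rng.normal(size=(50, 128)))
```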
365. Combinatorial problems for graphs and partially ordered sets. Wang, Ruidong. 13 November 2015.
This dissertation has three principal components. The first component concerns the connections between the dimension of posets and the size of matchings in comparability and incomparability graphs. In 1951, Hiraguchi proved that for any finite poset P, the dimension of P is at most half the number of points in P. We develop new inequalities for the dimension of finite posets, which are then used to bound dimension in terms of the maximum size of matchings. We prove that if the dimension of P is d and d is at least 3, then there is a matching of size d in the comparability graph of P, and a matching of size d in the incomparability graph of P. The bounds in both theorems are best possible, and each result has Hiraguchi's theorem as an immediate corollary.

The second component focuses on an extremal graph theory problem whose solution relied on the construction of a special kind of poset. In a landmark 1959 paper, Paul Erdős proved, using the probabilistic method, the existence of graphs with arbitrarily large girth and arbitrarily large chromatic number. In a 1991 paper, Kříž and Nešetřil introduced a new graph parameter eye(G) and showed that there are graphs with large girth and large chromatic number among the class of graphs having eye parameter at most three. Answering a question of Kříž and Nešetřil, we strengthen their result and show that there are graphs with large girth and large chromatic number among the class of graphs having eye parameter at most two.

The last component concerns random posets, the poset analogue of the Erdős–Rényi random graphs. In 1991, Erdős, Kierstead and Trotter (EKT) investigated random height-2 posets and obtained several upper and lower bounds on their dimension. Motivated by extremal problems involving conditions which force a poset to contain a large standard example, we revisit this subject. Our sharpened analysis allows us to conclude that as p approaches 1, the expected value of dimension first increases and then decreases, a subtlety not identified in EKT. Along the way, we establish connections with classical topics in analysis as well as with Latin rectangles. Using structural insights drawn from this research, we also make progress on the motivating extremal problem via an application of the asymmetric form of the Lovász Local Lemma.
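For context, the classical standard example (background material, not the thesis's own construction) shows that both Hiraguchi's bound and the matching bound above are tight:

```latex
% Standard example S_n (n >= 2): minimal elements a_1,...,a_n and
% maximal elements b_1,...,b_n, with a_i below b_j exactly when i != j.
\[
  S_n = \{a_1,\dots,a_n\} \cup \{b_1,\dots,b_n\}, \qquad
  a_i < b_j \iff i \neq j .
\]
% Here |S_n| = 2n and dim(S_n) = n, so Hiraguchi's bound
% dim(P) <= |P|/2 holds with equality; moreover the incomparable
% pairs {a_i, b_i} form a maximum matching of size n = dim(S_n)
% in the incomparability graph.
\[
  \dim(S_n) = n = \tfrac{|S_n|}{2}.
\]
```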
366. Implémentation de PCM (Process Compact Models) pour l'étude et l'amélioration de la variabilité des technologies CMOS FDSOI avancées / Implementation of PCM (Process Compact Models) for the study and improvement of variability in advanced FD-SOI CMOS technologies. Denis, Yvan. 16 June 2016.
Recently, the race toward miniaturization has slowed because of the technological challenges it entails. Among these obstacles is the growing impact of local and process variability, stemming from the increasing complexity of the fabrication process and from miniaturization itself, in addition to the difficulty of shrinking the channel length. To meet these challenges, new architectures, very different from the traditional (bulk) one, have been proposed. However, these new architectures require more effort to industrialize: greater complexity and longer development times demand larger financial investments, so there is a real need to improve how devices are developed and optimized. This work offers some avenues toward these goals. The idea is to reduce the number of trials needed to find the optimal fabrication process, that is, the process that yields a device whose performance and performance spread meet predefined targets. The approach developed in this thesis combines TCAD tools and compact models to build and calibrate what is called a PCM (Process Compact Model). A PCM is an analytical model that links the process parameters of the MOSFET to its electrical parameters. It draws on the strengths of both TCAD (since it relates process parameters directly to electrical parameters) and compact modeling (since the model is analytical and therefore fast to evaluate). A sufficiently predictive and robust PCM can be used, together with a suitable optimization algorithm, to optimize the transistor's performance and overall variability. This approach differs from classical development methods, which rely heavily on scientific expertise and successive trials to improve the device; instead, it gives the problem a deterministic and robust mathematical framework. The concept was developed, tested, and applied to 28 nm and 14 nm FD-SOI transistors as well as to TCAD simulations. The results are presented, along with the recommendations needed to deploy the technique at industrial scale, and some perspectives and applications are suggested.
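As a rough illustration of the PCM workflow described above, a minimal sketch; the quadratic surrogate form, the parameter names, and the synthetic calibration data are assumptions for illustration, not the thesis's actual model:

```python
import numpy as np

def quad_features(X):
    # X: (n, p) normalized process parameters,
    # e.g., [channel length, implant dose, anneal temperature]
    n, p = X.shape
    cross = [X[:, i] * X[:, j] for i in range(p) for j in range(i, p)]
    return np.column_stack([np.ones(n), X] + cross)

# Calibration data: a small TCAD design-of-experiments (simulated here)
rng = np.random.default_rng(1)
X_doe = rng.uniform(-1, 1, size=(40, 3))
vth = 0.35 - 0.05 * X_doe[:, 0] + 0.02 * X_doe[:, 1] ** 2 \
      + 0.005 * rng.normal(size=40)   # stand-in for a TCAD output

# Least-squares calibration of the analytical surrogate
coef, *_ = np.linalg.lstsq(quad_features(X_doe), vth, rcond=None)

def pcm_vth(x):
    """Evaluate the calibrated PCM at a process point x (cheap, analytic)."""
    return quad_features(np.atleast_2d(x)) @ coef

# The calibrated PCM can now drive an optimizer over the process window
# instead of re-running full TCAD simulations at every trial point.
```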
367. Essays on labor market dynamics with worker heterogeneity. Pizzinelli, Carlo. January 2018.
This thesis comprises three chapters that discuss topics related to labor market dynamics from a macroeconomic perspective. Although each chapter is self-standing in terms of research question and methodology, they are united by a common interest in the macroeconomic implications of worker heterogeneity. The chapters vary with respect to the time horizon over which they study aggregate dynamics, covering business-cycle frequency, the economy's long-run steady state, and households' life cycle. Furthermore, they develop the concept of heterogeneity across different dimensions: stages of the life cycle, households' income and wealth, observed worker characteristics, and worker-firm productivity levels. The overall purpose of this thesis is therefore to contribute to the study of labor markets and labor policies through a multi-faceted approach.
368. Essays on two-sector matching, status rewards and liability. Gola, Paweł. January 2015.
This thesis consists of three self-contained chapters.

Chapter 1 develops a two-sector, bivariate matching model in which each sector uses a different dimension of skill in the production process. I show that there exists a unique assignment of agents to sectors and derive comparative statics. The main result is that if jobs are scarce, both an increase in the spread of sector-one skills and a technological improvement increase the supply of talent in sector one but decrease it in sector two. In sector two, this raises wages and wage inequality. In sector one, the effects are ambiguous in general, but wages increase for the most talented agents and decrease for the least talented.

Chapter 2 studies the impact of social status on occupational sorting in a two-sector matching framework. Talent is two-dimensional, and thus status is not a zero-sum game; it depends both on occupational prestige and on within-sector rank (local status). I show that the weights with which these two components enter (the structure of status) crucially influence the way agents self-select into sectors, and I argue that these weights likely differ across occupations. The more important the individual component of status in a sector, or the less important the collective component, the more talented the agents who join that sector, which has important implications for total payoffs, wage levels, inequality, and profits. I also show that the stable assignment is typically inefficient, driven by the distortion of relative status rewards rather than by status concerns per se.

Chapter 3 investigates whether directors of companies should have limited liability. I develop a three-player model in which (a) debtholders and equityholders are defined by their control rights and (b) the project is run by the directors. The main result is that increased liability forces directors to internalise more of the downside risk of the project and hence reduces their risk-taking. This is optimal if over-investment was a problem initially. I show that the extent to which over-investment is a problem depends on how well debtholders are protected relative to equityholders. If debtholders are strong, increased liability can cause under-investment.
369. Uma abordagem semi-automática para geração incremental de correspondências entre ontologias / A semi-automatic approach for generating incremental correspondences between ontologies. Hortêncio Filho, Fernando Wagner Brito. January 2011.
HORTÊNCIO FILHO, Fernando Wagner Brito. Uma abordagem semi-automática para geração incremental de correspondências entre ontologias. 2011. 80 f. Dissertação (Mestrado em Ciência da Computação) - Universidade Federal do Ceará, Fortaleza-CE, 2011.
The discovery of semantic correspondences between schemas is an important task in several application domains, such as data integration, data warehousing, and data mashups. In most cases, the data sources involved are heterogeneous and dynamic, which makes the task even harder. Ontologies have been used to define the common vocabularies that describe the elements of the schemas involved in a given application. The ontology matching problem consists in discovering correspondences between the terms of the vocabularies (represented by ontologies) used across applications. The solutions proposed in the literature, although fully automatic, are heuristic in nature and may produce unsatisfactory results; the problem intensifies for large data sources. The goal of this work is to propose a method for the incremental generation and refinement of correspondences between ontologies. The proposed approach uses ontology-filtering techniques, as well as user feedback, to support the generation and refinement of these correspondences. For validation purposes, a tool was developed and experiments were conducted.
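As a rough illustration of the generate-and-refine loop described above, a minimal sketch; the string-similarity measure and the feedback policy are illustrative assumptions, not the dissertation's actual method:

```python
from difflib import SequenceMatcher

def candidate_matches(onto_a, onto_b, threshold=0.7):
    # onto_a, onto_b: lists of term labels from the two ontologies;
    # propose pairs ranked by a simple string-similarity score.
    pairs = []
    for a in onto_a:
        for b in onto_b:
            score = SequenceMatcher(None, a.lower(), b.lower()).ratio()
            if score >= threshold:
                pairs.append((score, a, b))
    return sorted(pairs, reverse=True)

def refine(pairs, feedback):
    # feedback: dict mapping (a, b) -> True (accepted) / False (rejected);
    # rejected pairs are pruned, undecided ones go to the next round.
    accepted = [(s, a, b) for s, a, b in pairs if feedback.get((a, b), False)]
    undecided = [(s, a, b) for s, a, b in pairs if (a, b) not in feedback]
    return accepted, undecided

matches = candidate_matches(["Author", "Title"], ["writer", "title", "isbn"])
accepted, todo = refine(matches, {("Title", "title"): True})
```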
370. Holistic Boolean Twig Pattern Matching for Efficient XML Query Processing. Ding, Dabin. 01 May 2014.
Efficient twig pattern matching is essential to XML queries and other tree-based queries. Numerous so-called holistic algorithms have been proposed for efficiently processing the twig patterns in XML queries. However, a more general form of twig pattern, called Boolean twig (B-twig for short), which allows an arbitrary combination of an arbitrary number of the three logical connectives AND, OR, and NOT in a twig pattern, has not been adequately addressed. The theme of this study is holistic and efficient B-twig pattern matching using the region encoding and Dewey encoding schemes. We first adopt region encoding and propose a novel, direct approach called DBTwigMerge for holistic B-twig pattern matching, which, although it enjoys a certain theoretical "beauty" and "elegance", does not always outperform our prior approach, BTwigMerge. Based on the experience gained and an in-depth investigation, we then develop another new and more efficient approach, FBTwigMerge, which proves to be the overall winner among the holistic approaches using region encoding. We also study holistic B-twig pattern matching using Dewey encoding; the unique properties of Dewey encoding bring both challenges and benefits to this problem. By carefully addressing those challenges, this dissertation presents the first Dewey-based holistic approach, called DeweyNOT, for efficiently solving the pattern matching problem for a subclass of B-twigs, namely twig queries involving arbitrary AND/NOT predicates. Extensive experimental studies demonstrate the viability and outstanding performance of the proposed approaches.
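As background, a minimal sketch of the region-encoding primitive that holistic twig-join algorithms build on; the element names and label values are illustrative, and the dissertation's algorithms layer the holistic B-twig logic on top of such interval tests:

```python
# Region encoding: each XML element gets (start, end, level) from a
# document traversal, and structural relationships reduce to interval
# and level comparisons.

def is_ancestor(a, d):
    # a, d: (start, end, level) labels
    return a[0] < d[0] and d[1] < a[1]

def is_parent(a, d):
    return is_ancestor(a, d) and d[2] == a[2] + 1

# <book><title/><author><name/></author></book>
book   = (1, 8, 1)
title  = (2, 3, 2)
author = (4, 7, 2)
name   = (5, 6, 3)

assert is_ancestor(book, name) and not is_parent(book, name)
assert is_parent(author, name)
```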