  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
191

Graph by Example: an Exploratory Graph Query Interface for RDF Databases

Yang, Cheng 26 January 2016 (has links)
No description available.
192

Private Record Linkage: A Comparison of Selected Techniques for Name Matching

Grzebala, Pawel B. 06 May 2016 (has links)
No description available.
193

Parallel implementation of template matching on hypercube array processors

Chai, Sin-Kuo January 1989 (has links)
No description available.
194

Children's perception of the emotional content of music

Trunk, Barry January 1981 (has links)
No description available.
195

Implementation of Pattern Matching Calculus Using Type-Indexed Expressions

Ji, Xiaoheng 09 1900 (has links)
<p> The pattern matching calculus introduced by Kahl provides a fine-grained mechanism for modelling non-strict pattern matching in modern functional programming languages. By changing the rule for interpreting the empty expression that results from matching failures, the pattern matching calculus can be transformed into another calculus that abstracts a "more successful" evaluation. Kahl also showed that the two calculi have both a confluent reduction system and the same normalising strategy, which constitute the operational semantics of the pattern matching calculi.</p> <p> As a new technique based on Haskell's language extensions of type-safe cast, arbitrary-rank polymorphism and generalised algebraic data types, type-indexed expressions, introduced by Kahl, demonstrate a uniform way of defining all expressions as type-indexed to guarantee type safety.</p> <p> In this thesis, we implemented the type-indexed syntax and operational semantics of the pattern matching calculi using type-indexed expressions. Our type-indexed syntax mirrors the definition of the pattern matching calculi. We implemented the operational semantics of the two calculi faithfully and provided reduction and normalisation examples showing that the pattern matching calculus can be a useful basis for modelling non-strict pattern matching.</p> <p> We formalised and implemented the bimonadic semantics of the pattern matching calculi using categorical concepts and type-indexed expressions respectively. The bimonadic semantics employs two monads to reflect two kinds of computational effects, which correspond to the two major syntactic categories of the pattern matching calculi, i.e. expressions and matchings.
Thus, the resulting implementation provides a denotational model of non-strict pattern matching with greater accuracy.</p> <p> Finally, from a practical programming viewpoint, our implementation is a good demonstration of how to program in the pure type-indexed setting by taking full advantage of Haskell's language extensions of type-safe cast, arbitrary-rank polymorphism and generalised algebraic data types.</p> / Thesis / Master of Science (MSc)
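The central idea of the abstract above can be illustrated with a toy sketch (this is not Kahl's calculus or its Haskell implementation, and all names here are invented for illustration): matching failure produces a distinguished empty value, and the meaning of a whole match expression depends on how that empty value is interpreted.

```python
# Toy illustration: pattern matching where a failed match yields a
# distinguished EMPTY value, and the overall result depends on how
# EMPTY is interpreted (raise, default, try-next-alternative, ...).

EMPTY = object()  # stands for the empty expression produced by match failure

def match(pattern, value):
    """Match a (possibly nested) tuple pattern against a value.
    Variables are strings; returns a binding dict or EMPTY on failure."""
    if isinstance(pattern, str):          # variable pattern: always succeeds
        return {pattern: value}
    if isinstance(pattern, tuple):        # constructor pattern
        if not isinstance(value, tuple) or len(pattern) != len(value):
            return EMPTY
        env = {}
        for p, v in zip(pattern, value):
            sub = match(p, v)
            if sub is EMPTY:
                return EMPTY
            env.update(sub)
        return env
    return {} if pattern == value else EMPTY  # literal pattern

def first_match(alternatives, value, on_empty):
    """Try alternatives in order; on_empty fixes how total failure is read."""
    for pattern, result in alternatives:
        env = match(pattern, value)
        if env is not EMPTY:
            return result(env)
    return on_empty()

# Two readings of the empty expression: a "stuck" default here, but one
# could equally raise, or fall through to an enclosing match.
alts = [((1, "x"), lambda e: e["x"]), (("y", 2), lambda e: e["y"])]
print(first_match(alts, (1, 42), lambda: "stuck"))  # -> 42
print(first_match(alts, (3, 4), lambda: "stuck"))   # -> stuck
```

Swapping the `on_empty` handler is the untyped analogue of the abstract's rule change for interpreting the empty expression; the thesis itself does this in a typed, type-indexed Haskell setting.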
196

Selection of Video Descriptors: Generating Compact Descriptor Sets for Video Pairwise-Matching

YIN, TING January 2017 (has links)
This thesis presents several descriptor selection schemes for video content pairwise-matching tasks. The proposed schemes attempt to leverage two significant properties of videos: temporal correlation and motion information. Aiming to find an efficient and descriptive representation for a video sequence, the concept of descriptor persistency is defined; descriptors that satisfy this definition are called persistent descriptors. In order to exploit descriptor persistency, an encoder is proposed. The proposed encoder consists of five main components. First, keyframe labelling is introduced to reduce complexity and ensure a reasonable size of persistent sets. After that, persistent descriptor detection is performed on each group of pictures (GOP) separately. The second component is standard SIFT descriptor extraction. The third part, persistent descriptor extraction, identifies persistent descriptors from each GOP; at this stage, three different methods are proposed: a direct method and two approximation approaches. Persistent descriptor selection, the fourth stage, is carried out to control the size of the persistent set. For this stage, three selection methods are proposed, all of which attempt to utilize motion information to select the most descriptive descriptors among all the persistent descriptors in the GOP. Finally, in order to perform pairwise-matching, a simple but efficient pairwise-matching method is proposed. Experiments are carried out to evaluate the performance of the proposed schemes. The datasets used for performance evaluation are subsets of the categories described in [1]. Two metrics defined in [2], namely false positive rate (FPR) and true positive rate (TPR), are used for the performance evaluation. / Thesis / Master of Applied Science (MASc)
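The core notion of descriptor persistency can be sketched in miniature (this is an illustrative stand-in, not the thesis's actual extraction methods): treat a GOP as a list of per-frame descriptor sets and call a descriptor persistent if it recurs in enough frames.

```python
from collections import Counter

def persistent_descriptors(gop_frames, min_frames=3):
    """Toy stand-in for persistent-descriptor extraction: a descriptor is
    'persistent' if it recurs in at least min_frames frames of the GOP.
    Real SIFT descriptors are 128-D vectors and would need approximate
    nearest-neighbour matching across frames, not exact equality."""
    counts = Counter()
    for frame in gop_frames:
        counts.update(set(frame))  # count each descriptor once per frame
    return {d for d, c in counts.items() if c >= min_frames}

# One GOP of four frames; letters stand in for descriptor identities.
gop = [
    {"a", "b", "c"},  # frame 1
    {"a", "c", "d"},  # frame 2
    {"a", "c", "e"},  # frame 3
    {"b", "c", "f"},  # frame 4
]
print(sorted(persistent_descriptors(gop)))  # -> ['a', 'c']
```

Raising `min_frames` plays the role of the selection stage: it shrinks the persistent set, trading recall for a more compact video representation.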
197

A Separator-Based Framework for Graph Matching Problems

Lahn, Nathaniel Adam 29 May 2020 (has links)
Given a graph, a matching is a set of vertex-disjoint edges. Graph matchings have been well studied, since they play a fundamental role in algorithmic theory as well as motivate many practical applications. Of particular interest is the problem of finding a maximum cardinality matching of a graph. Also of interest is the weighted variant: the problem of computing a minimum-cost maximum cardinality matching. For an arbitrary graph with m edges and n vertices, there are known, long-standing combinatorial algorithms that compute a maximum cardinality matching in O(m\sqrt{n}) time. For graphs with non-negative integer edge costs at most C, it is known how to compute a minimum-cost maximum cardinality matching in roughly O(m\sqrt{n} log(nC)) time using combinatorial methods. While non-combinatorial methods exist, they are generally impractical and not well understood due to their complexity. As a result, there is great interest in obtaining faster matching algorithms that are purely combinatorial in nature. Improving existing combinatorial algorithms for arbitrary graphs is considered to be a very difficult problem. To make the problem more approachable, it is desirable to make some additional assumptions about the graph. For our work, we make two such assumptions. First, we assume the graph is bipartite. Second, we assume that the graph has a small balanced separator, meaning it is possible to split the graph into two roughly equal-size components by removing a relatively small portion of the graph. Several well-studied classes of graphs have separator-like properties, including planar graphs, minor-free graphs, and geometric graphs. For such graphs, we describe a framework: a general set of techniques for designing efficient algorithms. We demonstrate this framework by applying it to yield polynomial-factor improvements for several open problems in bipartite matching.
/ Doctor of Philosophy / Assume we are given a list of objects, and a list of compatible pairs of these objects. A matching consists of a chosen subset of these compatible pairs, where each object participates in at most one chosen pair. For any chosen pair of objects, we say that these two objects are matched. Generally, we seek to maximize the number of compatible matches. A maximum cardinality matching is a matching with the largest possible size. In many cases, there are multiple options for maximizing the number of compatible pairings. While maximizing the size of the matching is often the primary concern, one may also seek to minimize the cost of the matching. This is known as the minimum-cost maximum-cardinality matching problem. These two matching problems have been well studied, since they play a fundamental role in algorithmic theory as well as motivate many practical applications. Our interest is in the design of algorithms for both of these problems that are efficiently scalable, even as the number of objects involved grows very large. To aid in the design of scalable algorithms, we observe that some inputs have good separators, meaning that by removing some subset S of objects, one can divide the remaining objects into two sets V and V', where all pairs of objects between V and V' are incompatible. We design several new algorithms that exploit good separators, and prove that these algorithms scale better than previously existing approaches.
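For readers unfamiliar with the baseline this thesis improves on, the classical combinatorial approach to maximum cardinality bipartite matching repeatedly finds augmenting paths. A minimal sketch (the textbook O(VE) augmenting-path method, not the separator-based algorithms of the thesis):

```python
def max_bipartite_matching(adj, n_right):
    """Maximum cardinality matching in a bipartite graph via augmenting
    paths (Hungarian-style DFS). adj[u] lists the right-vertices adjacent
    to left-vertex u. Runs in O(V*E); Hopcroft-Karp reaches O(m*sqrt(n))
    by augmenting along many shortest paths per phase."""
    match_right = [-1] * n_right  # right vertex -> matched left vertex

    def augment(u, seen):
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            # v is free, or its current partner can be re-matched elsewhere
            if match_right[v] == -1 or augment(match_right[v], seen):
                match_right[v] = u
                return True
        return False

    return sum(augment(u, set()) for u in range(len(adj)))

# 3 left vertices, 3 right vertices
adj = [[0, 1], [0], [1, 2]]
print(max_bipartite_matching(adj, 3))  # -> 3
```

The thesis's contribution is, roughly, to exploit small balanced separators so that such augmentation work can be organized recursively over graph pieces rather than over the whole graph at once.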
198

Shape Matching for Reduced Order Models of High-Speed Fluid Flows

Dennis, Ethan James 30 August 2024 (has links)
While computational fluid dynamics (CFD) simulations are an indispensable tool in modern aerospace engineering design, they bear a severe computational burden in applications where simulation results must be found quickly or repeatedly. Therefore, creating computationally inexpensive models that can capture complex fluid behaviors is a long-sought-after goal. As a result, methods to construct these reduced order models (ROMs) have seen increasing research interest. Still, parameter-dependent high-speed flows that contain shock waves are a particularly challenging class of problems that introduces many complications in ROM frameworks. To make approximations in a linear space, ROM techniques for these problems require that basis functions are transformed such that discontinuities are aligned into a consistent reference frame. Techniques to construct these transformations, however, fail when the topology of shocks is not consistent between data snapshots. In this work, we first identify key features of these topology changes, and how that constrains transformations of this kind. We then construct a new modeling framework that can effectively deal with shockwave interactions that are known to cause failures. The capabilities of the resulting model were evaluated by analyzing supersonic flows over a wedge and a forward-facing step. In the case of the forward-facing step, when shock topology changes with Mach number, our method exhibits significant accuracy improvements. Suggestions for further developments and improvements to our methodology are also identified and discussed. / Master of Science / While computational fluid dynamics (CFD) simulations are an indispensable tool in modern aerospace engineering design, they bear a severe computational burden in applications where simulation results must be found quickly or repeatedly. Therefore, creating computationally inexpensive models that can capture complex fluid behaviors is a long-sought-after goal.
As a result, methods to construct these reduced order models (ROMs) have seen increasing research interest. Still, high-speed flows that contain shock waves are a particularly challenging class of problems that introduces many complications in ROM frameworks. First, we identify some of the common failure modes in previous ROM methodologies. We then construct a new modeling framework that can effectively deal with shockwave interactions that are known to cause these failures. The capabilities of the resulting model were evaluated by analyzing supersonic flows over a wedge and a forward-facing step. In cases where previous modeling frameworks are known to fail, our method exhibits significant accuracy improvements. Suggestions for further developments and improvements to our methodology are also identified and discussed.
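The alignment idea at the heart of this abstract can be shown on a 1-D toy problem (an illustrative sketch only; the thesis works with 2-D supersonic flows and far more sophisticated transformations): locate the discontinuity in each snapshot and shift the snapshots into a common reference frame before building any linear basis.

```python
def shock_location(profile):
    """Index of the largest jump in a 1-D snapshot (the 'shock')."""
    jumps = [abs(profile[i + 1] - profile[i]) for i in range(len(profile) - 1)]
    return max(range(len(jumps)), key=jumps.__getitem__)

def align_snapshots(snapshots, ref_index):
    """Shift each snapshot so its shock sits at ref_index, padding with
    edge values. With discontinuities aligned, a linear (e.g. POD) basis
    represents the snapshot family far more compactly."""
    aligned = []
    for snap in snapshots:
        shift = ref_index - shock_location(snap)
        n = len(snap)
        aligned.append([snap[min(max(i - shift, 0), n - 1)] for i in range(n)])
    return aligned

# Two step profiles whose jump sits at different positions
# (e.g. the same shock at two parameter values).
s1 = [1.0] * 3 + [0.2] * 5  # shock after index 2
s2 = [1.0] * 6 + [0.2] * 2  # shock after index 5
a1, a2 = align_snapshots([s1, s2], ref_index=4)
print(shock_location(a1), shock_location(a2))  # -> 4 4
```

This single-shift picture is exactly what breaks when shock topology changes between snapshots (shocks appearing, merging, or reflecting), which is the failure mode the thesis's new framework is built to handle.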
199

Stable matching in preference relationships

Philpin, Elizabeth Mary 30 November 2006 (has links)
It is the aim of this paper to review some of the work done on stable matching, and on stable marriage problems in particular. Variants of the stable marriage problem will be considered, and the similarities and differences from a mathematical point of view will be highlighted. The correlation between preference and stability is a main theme, and the way in which diluted or incomplete preferences affect stability is explored. Since these problems have a wide range of practical applications, it is of interest to develop useful algorithms for the derivation of solutions. Time-complexity is a key factor in designing computable algorithms, making workload a strong consideration for practical purposes. Average and worst-case complexity are discussed. The number of different solutions that are possible for a given problem instance is surprising, and counter-intuitive. This leads naturally to a study of the solution sets and the lattice structure of solutions that emerges for any stable marriage problem. Many theorems derive from the lattice structure of stable solutions, and it is shown that this can lead to the design of more efficient algorithms. The research on this topic is well established, and many theorems have been proved and published, although some published proofs have omitted the details. In this paper, the author selects some key theorems, providing detailed proofs or alternative proofs, showing the mathematical richness of this field of study. Various applications are discussed, particularly with relevance to the social sciences, although mention is made of applications in computer science, game theory, and economics. The current research that is evident in this subject area, by reference to technical papers in periodicals and on the internet, suggests that it will remain a key topic for some time to come. / MATHEMATICAL SCIENCES / MSC (MATHEMATICS)
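The canonical algorithm underlying this entire literature is Gale-Shapley deferred acceptance, which always terminates with a stable matching. A minimal sketch (standard textbook version, not any specific variant from the dissertation):

```python
def gale_shapley(men_prefs, women_prefs):
    """Classic Gale-Shapley deferred acceptance: returns a stable matching
    (woman -> man) that is optimal for the proposing side. O(n^2) time."""
    n = len(men_prefs)
    # rank[w][m] = position of man m in woman w's preference list
    rank = [{m: i for i, m in enumerate(prefs)} for prefs in women_prefs]
    next_choice = [0] * n  # index of the next woman each man proposes to
    engaged = {}           # woman -> man
    free_men = list(range(n))
    while free_men:
        m = free_men.pop()
        w = men_prefs[m][next_choice[m]]
        next_choice[m] += 1
        if w not in engaged:
            engaged[w] = m                        # w was free: accept
        elif rank[w][m] < rank[w][engaged[w]]:    # w prefers m to her fiance
            free_men.append(engaged[w])
            engaged[w] = m
        else:
            free_men.append(m)                    # rejected; propose again later
    return engaged

men = [[0, 1], [1, 0]]    # each man's ranked list of women
women = [[0, 1], [0, 1]]  # each woman's ranked list of men
print(sorted(gale_shapley(men, women).items()))  # -> [(0, 0), (1, 1)]
```

The multiplicity of stable solutions mentioned in the abstract arises because running the same algorithm with the other side proposing generally yields a different stable matching; the full solution set forms the distributive lattice the paper studies.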
200

Signals in two-sided search

Poeschel, Friedrich Gerd January 2011 (has links)
We introduce signals into search models of two-sided matching markets and explore the implications for efficiency. In a labour market model in which firms can advertise wages and workers can choose effort, we find that advertisements can help overcome the Diamond paradox. Advertisements fix workers' beliefs, so that workers will react if firms renege on advertisements. Firms then prefer to advertise truthfully. Next, we consider a market with two-sided heterogeneity in which types are only privately observable. We identify a simple condition on the match output function for agents to signal their types truthfully and for the matching to exhibit positive assortative matching despite search frictions. While our theoretical work implies that the efficiency of matching increases as information technology spreads, empirical matching functions typically suggest that it declines. By estimating more general matching functions, we show that the result of declining efficiency can partly be attributed to omitted variable bias.
