91

On the Role of Partition Inequalities in Classical Algorithms for Steiner Problems in Graphs

Tan, Kunlun January 2006 (has links)
The Steiner tree problem is a classical, well-studied, $\mathcal{NP}$-hard optimization problem. Here we are given an undirected graph $G=(V,E)$, a subset $R$ of $V$ of terminals, and non-negative costs $c_e$ for all edges $e$ in $E$. A feasible Steiner tree for a given instance is a tree $T$ in $G$ that spans all terminals in $R$. The goal is to compute a feasible Steiner tree of smallest cost. In this thesis we focus on approximation algorithms for this problem: a $c$-approximation algorithm is an algorithm that returns a tree of cost at most $c$ times that of an optimum solution for any given input instance.

In a series of papers throughout the last decade, the approximation guarantee $c$ for the Steiner tree problem has been improved to the currently best known value of 1.55 (Robins, Zelikovsky). Robins and Zelikovsky's algorithm, as well as most of its predecessors, are greedy algorithms.

Apart from algorithmic improvements, there has also been substantial work on obtaining tight linear-programming relaxations for the Steiner tree problem. Many undirected and directed formulations have been proposed over the last 25 years; their use, however, has so far been mostly restricted to the field of exact optimization. There are few examples of algorithms for the Steiner tree problem that make use of these LP relaxations. The best known such algorithm for general graphs is a 2-approximation (for the more general Steiner forest problem) due to Agrawal, Klein and Ravi. Their analysis is tight, as the LP relaxation used in their work is known to be weak: it has an IP/LP gap of approximately 2.

Most recent efforts to obtain LP-based algorithms for the Steiner tree problem have focused on directed relaxations. In this thesis we present an undirected relaxation and show that the algorithm of Robins and Zelikovsky returns a Steiner tree whose cost is at most 1.55 times the optimum solution value.
In fact, we show that this algorithm can be viewed as a primal-dual algorithm.

The Steiner forest problem is a generalization of the Steiner tree problem: instead of a single terminal set, we are given several terminal sets. A feasible Steiner forest is a forest that, for each terminal set, connects all terminals in that set. The goal is to find a minimum-cost feasible Steiner forest. In this thesis, a new set of facet-defining inequalities for the Steiner forest polyhedron is introduced.
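The 2-approximation guarantee mentioned above is easiest to see via the classical minimum-spanning-tree heuristic: compute shortest-path distances between terminals (the metric closure) and take an MST there. The sketch below is a plain-Python illustration of that folklore heuristic under our own naming conventions, not the primal-dual algorithm of Agrawal, Klein and Ravi or the Robins-Zelikovsky algorithm:

```python
import heapq

def dijkstra(adj, src):
    # Shortest-path distances from src in a weighted undirected graph,
    # given as adj[u] = list of (neighbor, edge cost) pairs.
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def mst_steiner_cost(adj, terminals):
    # 2-approximation sketch: MST over the metric closure of the terminals
    # (Prim's algorithm on the complete terminal graph).
    terminals = list(terminals)
    dist = {t: dijkstra(adj, t) for t in terminals}
    in_tree = {terminals[0]}
    cost = 0
    while len(in_tree) < len(terminals):
        w, v = min((dist[u][t], t) for u in in_tree
                   for t in terminals if t not in in_tree)
        cost += w
        in_tree.add(v)
    return cost
```

On a path a-b-c with unit edges, terminals {a, c}, and a direct a-c edge of cost 3, the metric closure gives d(a, c) = 2 and the heuristic returns 2, using the non-terminal b implicitly.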
92

Automated Partition and Identification of Wave System for Wave Spectrum in Finite Water Depth

Hsu, Cheng-Jung 24 July 2012 (has links)
When investigating ocean surface waves at a location, the waves are commonly described by a spectrum. A wave spectrum measures the distribution of wave energy over wave trains of all directions and periods at that location. Since the waves at a location contain both seas and swells (waves generated by local and remote wind systems, respectively), a wave spectrum can be viewed as composed of different portions of sea and swell spectra. How to precisely partition a wave spectrum into its wave systems (seas and swells) is of practical importance in engineering, wave forecasting, and coastal water management. At present there are many different partitioning schemes, but not all are reliable and feasible enough for operational use. This report presents a wave-system partition scheme followed by an identification scheme, based on the TMA spectrum, to identify the different wave systems. The partition and identification scheme is intended for operational use in separating swell and wind sea in spectra at finite water depth. The automated spectral partition and identification procedure has been applied to evaluate swell systems in data recorded during typhoon Meari.
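The TMA spectrum used for identification combines a deep-water JONSWAP shape with a finite-depth attenuation factor. The sketch below is illustrative only: the default parameters (alpha = 0.0081, gamma = 3.3) and the piecewise Thompson-Vincent approximation of the Kitaigorodskii depth factor are standard textbook choices, not values taken from this thesis.

```python
import math

G = 9.81  # gravitational acceleration [m/s^2]

def kitaigorodskii(f, h):
    # Depth-attenuation factor at frequency f [Hz] and depth h [m]
    # (piecewise Thompson-Vincent approximation).
    wh = 2 * math.pi * f * math.sqrt(h / G)
    if wh <= 1:
        return 0.5 * wh ** 2
    if wh < 2:
        return 1 - 0.5 * (2 - wh) ** 2
    return 1.0

def tma_spectrum(f, fp, h, alpha=0.0081, gamma=3.3):
    # TMA = deep-water JONSWAP spectrum scaled by the depth factor;
    # fp is the peak frequency [Hz].
    sigma = 0.07 if f <= fp else 0.09
    peak = gamma ** math.exp(-(f - fp) ** 2 / (2 * sigma ** 2 * fp ** 2))
    jonswap = (alpha * G ** 2 * (2 * math.pi) ** -4 * f ** -5
               * math.exp(-1.25 * (fp / f) ** 4) * peak)
    return jonswap * kitaigorodskii(f, h)
```

In deep water the attenuation factor tends to 1 and the TMA spectrum reduces to JONSWAP; at finite depth, spectral density near the peak is reduced, which is what a depth-aware identification scheme must account for.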
93

A Sliding-Window Approach to Mining Maximal Large Itemsets for Large Databases

Chang, Yuan-feng 28 July 2004 (has links)
Mining association rules means a process of nontrivial extraction of implicit, previously unknown, and potentially useful information from data in databases. Mining maximal large itemsets is a further step beyond mining association rules: it aims to find the set of maximal large (frequent) itemsets, whose subsets collectively represent all large itemsets. Previous algorithms for mining maximal large itemsets can be classified into two approaches: exhaustive and shortcut. The shortcut approach generates a smaller number of candidate itemsets than the exhaustive approach, resulting in better performance in terms of time and storage space. On the other hand, when updates to the transaction database occur, one possible approach is to re-run the mining algorithm on the whole database. The other approach is incremental mining, which aims for efficient maintenance of discovered association rules without re-running the mining algorithms. However, previous algorithms for mining maximal large itemsets based on the shortcut approach cannot support incremental mining, while algorithms for incremental mining, e.g., the SWF algorithm, cannot efficiently support mining maximal large itemsets, since they are based on the exhaustive approach. Therefore, in this thesis, we focus on the design of an algorithm that provides good performance both for mining maximal itemsets and for incremental mining. Based on observations such as "if an itemset is large, all its subsets must be large; therefore, those subsets need not be examined further", we propose a sliding-window approach, the SWMax algorithm, for efficiently mining maximal large itemsets with support for incremental mining. Our SWMax algorithm is a two-pass partition-based approach. We find all candidate 1-itemsets ($C_1$), candidate 3-itemsets ($C_3$), large 1-itemsets ($L_1$), and large 3-itemsets ($L_3$) in the first pass.
We generate the virtual maximal large itemsets after the first pass. Then we use $L_1$ to generate $C_2$, use $L_3$ to generate $C_4$, use $C_4$ to generate $C_5$, and so on until no further $C_k$ is generated. In the second pass, we use the virtual maximal large itemsets to prune $C_k$ and decide the maximal large itemsets. For incremental mining, we consider two cases: (1) data insertion and (2) data deletion. In both cases, if an itemset of size 1 is not large in the original database, it cannot be found in the updated database by the SWF algorithm. That is, a missing case can occur in the incremental mining process of the SWF algorithm, because the SWF algorithm keeps only the $C_2$ information. Our SWMax algorithm, in contrast, supports incremental mining correctly, since both $C_1$ and $C_3$ are maintained. In our simulation, we generate synthetic databases to simulate real transaction databases. The results show that our SWMax algorithm generates fewer candidates and needs less time than the SWF algorithm.
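The downward-closure observation quoted above is what all of these algorithms exploit. The sketch below is neither SWMax nor SWF: it is a minimal level-wise (Apriori-style) enumeration of frequent itemsets followed by extraction of the maximal ones, included only to make the terminology concrete.

```python
def frequent_itemsets(transactions, minsup):
    # Level-wise enumeration using downward closure: every subset of a
    # frequent itemset is frequent, so candidates at level k+1 are built
    # only from frequent itemsets at level k.  (No full Apriori subset
    # pruning -- this is a sketch, not an optimized miner.)
    items = sorted({i for t in transactions for i in t})
    freq, level = [], [frozenset([i]) for i in items]
    while level:
        counts = {c: sum(c <= t for t in transactions) for c in level}
        cur = [c for c, n in counts.items() if n >= minsup]
        freq += cur
        level = list({a | b for a in cur for b in cur
                      if len(a | b) == len(a) + 1})
    return freq

def maximal(itemsets):
    # Maximal large itemsets: those with no frequent proper superset.
    return [s for s in itemsets if not any(s < t for t in itemsets)]
```

For four transactions over {a, b, c} with minimum support 2, the frequent itemsets are all singletons and pairs, and the maximal ones are exactly the three pairs, which compactly represent the whole collection.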
94

An ID-Tree Index Strategy for Information Filtering in Web-Based Systems

Wang, Yi-Siang 10 July 2006 (has links)
With the booming development of the WWW, many search engines have been developed to help users find useful information in vast quantities of data. However, users may have different needs in different situations. In contrast to Information Retrieval, where users actively search for data, Information Filtering (IF) sends information from servers to passive users through broadcast media rather than having users search for it. Each user therefore has a profile stored in the database, where a profile records a set of interest items representing the user's interests or habits. To efficiently store many user profiles on servers and to filter out irrelevant users, many signature-based index techniques have been applied in IF systems. By using signatures, an IF system does not need to compare each item of every profile to filter out irrelevant ones. However, because signatures carry incomplete information about profiles, it is very hard to answer complex queries using the signatures alone. Therefore, a critical issue for a signature-based IF service is how to index the signatures of user profiles for an efficient filtering process. There are two common types of queries in signature-based IF systems: inexact filtering and similarity search. In inexact filtering, the query is an incoming document, and the task is to find the profiles whose interest items are all contained in the query. In similarity search, the query is a user profile, and the task is to find the users whose interest items are similar to those of the query user. In this thesis, we propose an ID-tree index strategy, which indexes signatures of user profiles by partitioning them into subgroups with a binary tree structure according to the items on which they differ. Our ID-tree index strategy is essentially a kind of signature tree. In an ID-tree, each path from the root to a leaf node is the signature of the profile pointed to by that leaf node.
Because each profile is pointed to by exactly one leaf node of the ID-tree, there are no collisions in the structure; in other words, no two profiles are assigned the same signature. Moreover, only the items that differ among subgroups of profiles are checked at a time to filter out irrelevant profiles for a query. Therefore, our strategy can answer inexact filtering and similarity search queries while accessing fewer profiles than previous strategies. In addition, building the index of signatures requires less time when batch-loading a large number of database profiles. Our simulation results show that our strategy accesses fewer profiles to answer queries than Chen's signature tree strategy for inexact filtering and Aggarwal et al.'s SG-table strategy for similarity search.
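The core idea, recursively splitting profiles on an item on which they differ and then pruning whole subtrees during inexact filtering, can be sketched as follows. This is a simplified illustration with hypothetical names, not the ID-tree construction of the thesis:

```python
def build_tree(profiles):
    # profiles: dict mapping user id -> frozenset of interest items.
    # Recursively split on an item present in some but not all profiles.
    if len(profiles) <= 1:
        return profiles
    items = set().union(*profiles.values())
    split = next((i for i in items
                  if 0 < sum(i in p for p in profiles.values()) < len(profiles)),
                 None)
    if split is None:
        return profiles  # identical profiles: keep together as a leaf
    with_i = {u: p for u, p in profiles.items() if split in p}
    without = {u: p for u, p in profiles.items() if split not in p}
    return (split, build_tree(with_i), build_tree(without))

def inexact_filter(node, doc):
    # Inexact filtering: find users whose every interest item is in doc.
    if isinstance(node, dict):               # leaf
        return [u for u, p in node.items() if p <= doc]
    item, with_i, without = node
    hits = inexact_filter(without, doc)
    if item in doc:   # the with-branch can only match if doc has the item
        hits += inexact_filter(with_i, doc)
    return hits
```

The pruning step is the line guarded by `if item in doc`: when the document lacks the splitting item, the entire with-branch is skipped without touching any of its profiles, which is the source of the "fewer accessed profiles" behaviour.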
95

A Low-Power and High-Performance Function Generator for Multiplier-Based Arithmetic Operations

Jan, Jeng-Shiun 23 June 2002 (has links)
In this thesis, we develop an automatic hardware synthesizer for multiplier-based arithmetic functions such as parallel multipliers, multiplier-accumulators, and inner-product calculators. The synthesizer is divided into two major phases. In the first phase, pre-layout netlist generation, the synthesizer generates the gate-level Verilog code and the corresponding test fixture file for pre-layout simulation. The second phase, layout generation, produces the CIF file of the final physical layout based on the gate-level netlist generated in the first phase. This thesis focuses on the first phase. The irregular connections of the Wallace tree in the parallel multiplier are optimized to reduce overall delay and power. In addition to the conventional 3:2 counter usually included in standard cell libraries, our synthesizer can select other compression elements that are full-custom designed using pass-transistor logic. We also propose several methods to partition the final addition part of the parallel multiplier into several regions in order to further reduce the critical path delay and the area cost. Thus, our multiplier generator combines the advantages of three basic design approaches: high-level synthesis, cell-based design, and full-custom design, along with area and power optimization.
96

Judicious partitions of graphs and hypergraphs

Ma, Jie 04 May 2011 (has links)
Classical partitioning problems, like the Max-Cut problem, ask for partitions that optimize one quantity and are important to fields such as VLSI design, combinatorial optimization, and computer science. Judicious partitioning problems on graphs or hypergraphs ask for partitions that optimize several quantities simultaneously. In this dissertation, we work on judicious partitions of graphs and hypergraphs, and solve or asymptotically solve several open problems of Bollobás and Scott on judicious partitions, using the probabilistic method and extremal techniques.
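For contrast with the judicious setting, the single-quantity case is easy to illustrate: the folklore local-search algorithm for Max-Cut (a standard textbook sketch, not from this dissertation) always terminates at a partition cutting at least half of all edges, since at a local optimum every vertex has at least as many cut as uncut incident edges.

```python
def local_max_cut(n, edges):
    # Local search: flip a vertex across the cut while doing so strictly
    # increases the number of cut edges.  Terminates because the cut
    # size increases at each flip and is bounded by len(edges).
    side = [0] * n
    improved = True
    while improved:
        improved = False
        for v in range(n):
            # gain of flipping v = (# uncut incident) - (# cut incident)
            gain = sum(1 if side[u] == side[w] else -1
                       for u, w in edges if v in (u, w))
            if gain > 0:
                side[v] ^= 1
                improved = True
    cut = sum(side[u] != side[w] for u, w in edges)
    return side, cut
```

A judicious version would additionally bound the number of edges left inside each side, which is precisely where the probabilistic and extremal arguments of the dissertation come in.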
97

Mise en correspondance de partitions en vue du suivi d'objets

Gomila, Cristina 12 September 2001 (has links) (PDF)
In the field of multimedia applications, the forthcoming standards will open new ways of communicating with, accessing, and manipulating audiovisual information that go well beyond the simple compression to which previous coding standards were limited. Among the new functionalities, users are expected to gain access to image content by editing and manipulating the objects present. However, standardization covers only the representation and coding of these objects, leaving a wide field of development open for the problems of extracting these objects and tracking them as they evolve along a video sequence. This thesis addresses precisely this point. We first studied and developed generic filtering and segmentation algorithms, since these tools are the basis of any system for analyzing the content of an image or a sequence. More concretely, we studied in detail a new class of morphological filters known as levelings, as well as a variation of segmentation algorithms based on the constrained flooding of a gradient image. Segmentation techniques aim to produce a partition of the image as close as possible to the one the human eye would make, with a view to subsequent object recognition. In most cases, however, this last task can only be done through human interaction, and yet, when one wants to find an object in a large collection of images, or to follow its evolution along a sequence, supervising each partition becomes impossible. This calls for matching algorithms capable of propagating information through a series of images, limiting human interaction to a single initialization step.

Moving from still images to sequences, the central part of this thesis is devoted to the problem of partition matching. The method we developed, the joint segmentation-and-matching technique (SAC, from the French "Segmentation et Appariement Conjoint"), can be described as hybrid in nature. It combines classical graph-matching algorithms with new editing techniques based on the hierarchies of partitions provided by morphological segmentation. This combination yields a very robust algorithm, despite the instability typically associated with segmentation processes. The segmentations of two images can differ strongly when considered from the viewpoint of a single partition; we have shown, however, that they are much more stable when considered as hierarchies of nested partitions, in which all the contours present appear, each with a valuation indicating its strength. The results obtained with the SAC technique make it a very promising approach. Flexible and powerful, it is able to recognize an object when it reappears after occlusion, thanks to the management of a memory graph. Although we were particularly interested in the tracking problem, the algorithms developed have a much broader field of application in indexing, in particular for searching for objects in a database of images or sequences. Finally, within the European project M4M (MPEG f(o)ur mobiles), we tackled the implementation of a real-time segmentation demonstrator able to detect, segment, and track a person in videophone sequences. In this application, the real-time constraint became the great challenge to overcome, forcing us to simplify and optimize our algorithms.

The main interest in terms of new services is twofold: on the one hand, automatic extraction of the speaker's silhouette would allow object-based coding, saving bit rate without loss of quality in the regions of interest; on the other hand, it would allow personalized editing of the sequences by changing the composition of the scene, for example by introducing a new background or by placing several speakers in a virtual conference room.
98

Boolean Partition Algebras

Van Name, Joseph Anthony 01 January 2013 (has links)
A Boolean partition algebra is a pair $(B,F)$ where $B$ is a Boolean algebra and $F$ is a filter on the semilattice of partitions of $B$ where $\bigcup F=B\setminus\{0\}$. In this dissertation, we shall investigate the algebraic theory of Boolean partition algebras and their connection with uniform spaces. In particular, we shall show that the category of complete non-Archimedean uniform spaces is equivalent to a subcategory of the category of Boolean partition algebras, and notions such as supercompleteness of non-Archimedean uniform spaces can be formulated in terms of Boolean partition algebras.
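As a finite-set analogue of the partition semilattice appearing in the definition, the meet of two partitions is their common refinement. The sketch below illustrates this for ordinary set partitions only; in the dissertation, partitions are partitions of a Boolean algebra $B$, i.e. sets of pairwise disjoint nonzero elements of $B$ joining to $1$, so this is an informal analogy rather than the actual structure.

```python
def common_refinement(p, q):
    # Meet of two set partitions: all nonempty pairwise intersections
    # of a block of p with a block of q.
    return {a & b for a in p for b in q if a & b}

def is_partition(p, ground):
    # Blocks are nonempty, pairwise disjoint, and cover the ground set.
    blocks = list(p)
    return (frozenset().union(*blocks) == ground
            and sum(len(b) for b in blocks) == len(ground))
```

Every block of the common refinement lies inside a block of each argument, so the refinement is below both partitions in the refinement order, which is the semilattice structure the filter $F$ lives on.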
99

New correlation for predicting the best surfactant and co-solvent structures to evaluate for chemical EOR

Chang, Leonard Yujya 03 February 2015 (has links)
The focus of this study was the development of an improved correlation that more accurately quantifies the relationships between optimum salinity, optimum solubilization ratios, chemical formulation variables such as surfactant and co-solvent structures, and the EACN. Included in this study are improved correlations for co-solvent partition coefficients and correlations of the optimum salinity and solubilization ratio with EACN. Several trends in the oil-water partition coefficient were observed with alcohol type (IBA and phenol), the number of ethylene oxide groups in the co-solvent, the EACN of the oil, temperature, and salinity. New EACN measurements were made using optimized formulations containing various combinations of primary surfactants, co-surfactants, co-solvents, and alkali. The new EACN measurements ranged from 11.3 to 21.1. These new data significantly expand the total number of reliable EACN values available for understanding and correlating chemical EOR formulation results. An improved correlation that more accurately quantifies the relationship between surfactant structure, co-solvents, oil, temperature, and optimum salinity was developed using a new and much larger high-quality formulation dataset now available from studies done in recent years at the Center for Petroleum and Geosystems Engineering at the University of Texas at Austin. The correlation is useful for understanding the now very large number of microemulsion phase behavior experiments, as well as the uncertainties associated with these data, and for suggesting new chemical structures to develop and test.
100

Seismic reflector characterization by a multiscale detection-estimation method

Maysami, Mohammad, Herrmann, Felix J. January 2007 (has links)
Seismic transitions of the subsurface are typically modeled as zero-order singularities (step functions). According to this model, the conventional deconvolution problem aims at recovering the seismic reflectivity as a sparse spike train. However, recent multiscale analysis of sedimentary records has revealed accumulations of varying-order singularities in the subsurface, which give rise to fractional-order discontinuities. This observation not only calls for a richer class of seismic reflection waveforms, but also requires a different methodology to detect and characterize these reflection events; for instance, the assumptions underlying conventional deconvolution no longer hold. Because of the bandwidth limitation of seismic data, multiscale analysis methods based on the decay rate of wavelet coefficients may yield ambiguous results. We avoid this problem by formulating the estimation of the singularity orders as a parametric nonlinear inversion.
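The decay-rate idea that the abstract contrasts with can be sketched directly: with L1-normalized (averaging) Haar differences, the response at a Hölder-order-$\alpha$ transition scales like $s^{\alpha}$ with the scale $s$, so the log-log slope across scales estimates the singularity order. The plain-Python illustration below works only under idealized noise-free, unlimited-bandwidth assumptions, which is exactly the regime the abstract notes does not hold for real seismic data.

```python
import math

def haar_response(f, t0, s):
    # Averaging (L1-normalized) Haar difference at location t0, scale s:
    # mean of f over [t0, t0 + s) minus mean over [t0 - s, t0).
    right = sum(f(t0 + k + 0.5) for k in range(s)) / s
    left = sum(f(t0 - k - 0.5) for k in range(s)) / s
    return right - left

def singularity_order(f, t0, scales=(2, 4, 8, 16, 32)):
    # |W(s)| ~ s^alpha for a Holder-alpha transition at t0 under this
    # normalization; estimate alpha as the slope of log|W| vs log s.
    xs = [math.log(s) for s in scales]
    ys = [math.log(abs(haar_response(f, t0, s))) for s in scales]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
```

A step (zero-order singularity) yields slope 0, a ramp onset (first-order) yields slope 1; band limitation blurs these responses at fine scales, which is what motivates the parametric inversion approach instead.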
