171

A Tree-based Summarization Framework For Differences Between Two Data Sets

Wang, Dong 21 January 2009 (has links)
No description available.
172

Rough Sets, Similarity, and Optimal Approximations

Lenarcic, Adam 11 1900 (has links)
Rough sets have been studied for over 30 years, and the basic concepts of lower and upper approximations have been analysed in detail, yet nowhere has the idea of an 'optimal' rough approximation been proposed or investigated. In this thesis, several concepts are used in proposing a generalized definition: measures, rough sets, similarity, and approximation are each surveyed. Measure theory allows us to generalize the definition of the 'size' of a set. Rough set theory is the foundation that we use to define the term 'optimal' and what constitutes an 'optimal rough set'. Similarity indexes are used to compare two sets and determine how alike or different they are. These sets can be rough or exact. We use similarity indexes to compare sets to intermediate approximations, and isolate the optimal rough sets. The historical roots of these concepts are explored, and the foundations are formally defined. A definition of an optimal rough set is proposed, as well as a simple algorithm to find it. Properties of optimal approximations such as minimum, maximum, and symmetry are explored, and examples are provided to demonstrate algebraic properties and illustrate the mechanics of the algorithm. / Thesis / Doctor of Philosophy (PhD) / Until now, in the context of rough sets, only an upper and lower approximation had been proposed. Here, a concept of an optimal/best approximation is proposed, and a method to obtain it is presented.
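The abstract does not spell out the algorithm, but its ingredients (approximations from a partition, a similarity index, a search between the lower and upper approximations) suggest the minimal sketch below. The Jaccard index and all function names are illustrative assumptions, not the thesis's actual method.

```python
from itertools import combinations

def approximations(target, blocks):
    """Classical rough approximations of `target` w.r.t. a partition into blocks."""
    lower, upper = set(), set()
    for b in blocks:
        if b <= target:
            lower |= b      # blocks fully inside the target
        if b & target:
            upper |= b      # blocks touching the target
    return lower, upper

def jaccard(a, b):
    """Similarity index used to compare a candidate approximation with the target."""
    return len(a & b) / len(a | b) if (a | b) else 1.0

def optimal_approximation(target, blocks):
    """Search every union of boundary blocks added to the lower approximation
    and keep the candidate most similar to the target."""
    lower, _ = approximations(target, blocks)
    boundary = [b for b in blocks if (b & target) and not (b <= target)]
    best, best_sim = lower, jaccard(lower, target)
    for r in range(1, len(boundary) + 1):
        for combo in combinations(boundary, r):
            cand = lower.union(*combo)
            sim = jaccard(cand, target)
            if sim > best_sim:
                best, best_sim = cand, sim
    return best, best_sim

# Example: universe partitioned into pairs; the target straddles block boundaries.
blocks = [{0, 1}, {2, 3}, {4, 5}, {6, 7}]
print(optimal_approximation({1, 2, 3, 4}, blocks))
```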
173

Spatially localised membrane systems

Csuhaj-Varju, E., Gheorghe, Marian, Stannett, M., Vaszil, G. January 2015 (has links)
No / In this paper we investigate the use of general topological spaces in connection with a generalised variant of membrane systems. We provide an approach which produces a fine-grained description of local operations occurring simultaneously in sets of compartments of the system by restricting the interactions between objects. This restriction is given by open sets of a topology and multisets of objects associated with them, which dynamically change during the functioning of the system and which together define a notion of vicinity for the objects taking part in the interactions. / The first, the second, and the third authors were partially supported under the Royal Society International Exchanges Scheme (ref. IE110369). The second author was also partially supported by the project MuVet, Romanian National Authority for Scientific Research (CNCS – UEFISCDI), grant number PNII-ID-PCE-2011-3-0688. This work was partially completed whilst the third author was a visiting fellow at the Isaac Newton Institute for the Mathematical Sciences in the programme 'Semantics & Syntax: A Legacy of Alan Turing'. The work of the first and the fourth authors was also supported in part by the Hungarian Scientific Research Fund (OTKA), grant no. K75952. The fourth author was supported by the European Union through the TÁMOP-4.2.2.C-11/1/KONV-2012-0001 project, which is co-financed by the European Social Fund.
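The mechanics can be glimpsed in a toy fragment: compartments labelled with points of a finite space, designated open sets restricting which compartments' objects may interact, and multiset contents per compartment. All names are illustrative, and this sketch is only loosely inspired by the paper; the actual semantics also associates multisets with the open sets themselves and lets them change dynamically.

```python
from collections import Counter

# Designated open sets over compartment labels: objects may interact only
# when their compartments lie in a common open set (their "vicinity").
open_sets = [frozenset({"c1", "c2"}), frozenset({"c3"})]

# Multiset contents of each compartment.
contents = {"c1": Counter({"a": 2}), "c2": Counter({"b": 1}), "c3": Counter({"c": 3})}

def share_vicinity(x, y):
    """True if some designated open set contains both compartments."""
    return any(x in o and y in o for o in open_sets)

def apply_rule(src, dst, consumed, produced):
    """Rewrite one `consumed` object in src into a `produced` object in dst,
    but only if the topology places the two compartments in a common vicinity."""
    if not share_vicinity(src, dst) or contents[src][consumed] == 0:
        return False
    contents[src][consumed] -= 1
    contents[dst][produced] += 1
    return True

print(apply_rule("c1", "c2", "a", "b"))  # True: c1 and c2 share an open set
print(apply_rule("c1", "c3", "a", "c"))  # False: no open set contains both
```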
174

Modeling Transportation Problems Using Concepts of Swarm Intelligence and Soft Computing

Lucic, Panta 26 March 2002 (has links)
Many real-world problems can be formulated to fit the form required for discrete optimization. Discrete optimization problems can be solved by numerous techniques that have been developed over time. Some of these techniques provide optimal solutions to a problem, while others give "good enough" solutions. The fundamental reason for developing techniques capable of producing solutions that are not necessarily optimal is the fact that many discrete optimization problems are NP-complete. Metaheuristic algorithms are a common name for a set of general-purpose techniques developed to provide solutions to discrete optimization problems, and most of these techniques are based on natural metaphors. Discrete optimization can be applied to countless problems in transportation engineering. Recently, researchers have studied the behavior of social insects (ants) in an attempt to use the swarm intelligence concept to develop artificial systems with the ability to search a problem's solution space in a way that is similar to the foraging search by a colony of social insects. The development of artificial systems does not entail the complete imitation of natural systems, but explores them in search of ideas for modeling. This research is partially devoted to the development of a new system based on the foraging behavior of bee colonies, the Bee System, which was tested on many instances of the Traveling Salesman Problem. Many transportation-engineering problems, besides being combinatorial in nature, are characterized by uncertainty. To address such problems, the second part of the research is devoted to the development of algorithms that combine existing results in the area of swarm intelligence (the Ant System) with approximate reasoning. The proposed approach, the Fuzzy Ant System, is tested on two examples: the Stochastic Vehicle Routing Problem and Schedule Synchronization in Public Transit. / Ph. D.
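The abstract names the classical Ant System as the swarm-intelligence base being combined with approximate reasoning. For context, here is a minimal sketch of the standard Ant System rules for the TSP; these are the textbook formulas, not the thesis's Bee System or Fuzzy Ant System, and the parameter names alpha, beta, rho are the conventional ones.

```python
import random

def ant_tour(dist, tau, alpha=1.0, beta=2.0):
    """Construct one tour: at each step, pick the next city with probability
    proportional to pheromone^alpha * (1/distance)^beta (the Ant System rule)."""
    n = len(dist)
    start = random.randrange(n)
    tour, unvisited = [start], set(range(n)) - {start}
    while unvisited:
        i = tour[-1]
        weights = [(j, (tau[i][j] ** alpha) * ((1.0 / dist[i][j]) ** beta))
                   for j in unvisited]
        total = sum(w for _, w in weights)
        r, acc = random.random() * total, 0.0
        for j, w in weights:          # roulette-wheel selection
            acc += w
            if acc >= r:
                tour.append(j)
                unvisited.remove(j)
                break
    return tour

def update_pheromone(tau, tours, dist, rho=0.5, Q=1.0):
    """Evaporate, then deposit pheromone inversely proportional to tour length."""
    n = len(tau)
    for i in range(n):
        for j in range(n):
            tau[i][j] *= (1.0 - rho)
    for tour in tours:
        length = sum(dist[tour[k]][tour[(k + 1) % len(tour)]] for k in range(len(tour)))
        for k in range(len(tour)):
            i, j = tour[k], tour[(k + 1) % len(tour)]
            tau[i][j] += Q / length
            tau[j][i] += Q / length

# Tiny symmetric instance (distances are illustrative).
dist = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 3], [10, 4, 3, 0]]
tau = [[1.0] * len(dist) for _ in dist]
for _ in range(20):                       # 20 colony iterations of 8 ants
    tours = [ant_tour(dist, tau) for _ in range(8)]
    update_pheromone(tau, tours, dist)
```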
175

Generalized total and partial set covering problems

Parrish, Edna L. January 1986 (has links)
This thesis is concerned with the development of two generalized set covering models. The first model is formulated for the total set covering problem, where cost is minimized subject to the constraint that each customer must be served by at least one facility. The second model is constructed for the partial set covering problem, in which customer coverage is maximized subject to a budget constraint. The conventional formulations of both the total set covering and partial set covering problems are shown to be special cases of the two generalized models that are developed. Appropriate solution strategies are discussed for each generalized model. A specialized algorithm for a particular case of the partial covering problem is constructed and computational results are presented. / M.S.
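For reference, the conventional formulations that the thesis generalizes are standard and can be stated as follows (a textbook rendering; the thesis's generalized models extend these constraint structures).

```latex
% Total set covering: choose facilities x_j at cost c_j so that
% every customer i is served by at least one chosen facility.
\min \sum_{j} c_j x_j
\quad \text{s.t.} \quad \sum_{j \,:\, i \in S_j} x_j \ge 1 \quad \forall i,
\qquad x_j \in \{0,1\}.

% Partial set covering: maximize covered demand a_i under budget B,
% where y_i = 1 only if some chosen facility covers customer i.
\max \sum_{i} a_i y_i
\quad \text{s.t.} \quad y_i \le \sum_{j \,:\, i \in S_j} x_j \quad \forall i,
\qquad \sum_{j} c_j x_j \le B, \qquad x_j,\, y_i \in \{0,1\}.
```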
176

Geometry of Self-Similar Sets

Roinestad, Kristine A. 22 May 2007 (has links)
This paper examines self-similar sets and some of their properties, including the natural equivalence relation found in bilipschitz equivalence. Both dimension and preservation of paths are shown to be invariant under this equivalence. In addition, more sophisticated techniques, one involving the use of directed graphs, are used to establish the equivalence of two spaces. / Master of Science
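The dimension invariant in question is, for self-similar sets satisfying the open set condition, the similarity dimension given by Moran's equation, the d solving sum_k r_k^d = 1. A short sketch of that standard computation (not code from the paper):

```python
def similarity_dimension(ratios, lo=0.0, hi=10.0, tol=1e-12):
    """Solve sum(r**d) == 1 for d by bisection (Moran's equation). This equals
    the Hausdorff dimension when the IFS satisfies the open set condition."""
    f = lambda d: sum(r ** d for r in ratios) - 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:    # sum still too large -> the dimension is larger
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Middle-thirds Cantor set: two maps of ratio 1/3 -> log 2 / log 3 ≈ 0.6309
print(similarity_dimension([1/3, 1/3]))
```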
177

Distance Sets and Gap Lemma

Boone, Zackary Ryan 26 May 2022 (has links)
Many problems in geometric measure theory center on finding conditions and structures on a set that guarantee its distance set must be large. Two notions of structure that are important in this work are Hausdorff dimension and thickness. Recent progress has been made on generalizing the notion of thickness, and part of this work generalizes previous results using this newer version of thickness. We also show why a famous conjecture about distance sets does not hold on the real line, and thus why the conjecture must be posed in higher dimensions. Furthermore, we give explicit distance set and thickness calculations for a special class of self-similar sets. / Master of Science / Part of geometric measure theory is concerned with identifying structures to place on a set, and thresholds on those structures, that guarantee the set has some interesting geometric property. An example is determining when one can guarantee that a set contains many distinct distances between its elements. This work presents several types of structures that help investigate when a set is guaranteed to have this property.
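The central object here is the distance set Δ(A) = {|x − y| : x, y ∈ A}. As a concrete illustration (ours, not a calculation from the thesis), the sketch below computes the distance set of a finite-level approximation of the middle-thirds Cantor set, a typical self-similar test case:

```python
def cantor_points(level):
    """Left endpoints of the level-n middle-thirds Cantor construction."""
    pts = [0.0]
    scale = 1.0
    for _ in range(level):
        scale /= 3.0
        pts = [p for q in pts for p in (q, q + 2 * scale)]
    return pts

def distance_set(points):
    """Delta(A) = {|x - y| : x, y in A} for a finite point set A."""
    return sorted({abs(x - y) for x in points for y in points})

pts = cantor_points(5)
d = distance_set(pts)
print(len(pts), len(d))  # compare the sizes of the point set and its distance set
```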
178

Iterative decoding beyond belief propagation for low-density parity-check codes / Décodage itératif pour les codes LDPC au-delà de la propagation de croyances

Planjery, Shiva Kumar 05 December 2012 (has links)
At the heart of modern coding theory lies the fact that low-density parity-check (LDPC) codes can be efficiently decoded by message-passing algorithms, traditionally based on the belief propagation (BP) algorithm. The BP algorithm operates on a graphical model of a code known as the Tanner graph, and computes marginals of functions on the graph. While inference using BP is exact only on loop-free graphs (trees), BP still provides surprisingly close approximations to exact marginals on loopy graphs, and LDPC codes can asymptotically approach Shannon's capacity under BP decoding.

However, on finite-length codes whose corresponding graphs are loopy, BP is sub-optimal and therefore gives rise to the error-floor phenomenon. The error floor is an abrupt degradation in the slope of the error-rate performance of the code in the high signal-to-noise regime, where certain harmful structures present in the Tanner graph of the code, generically termed trapping sets, cause the decoder to fail. Moreover, the effects of finite precision introduced during hardware realizations of BP can further contribute to the error-floor problem.

In this dissertation, we introduce a new paradigm for finite-precision iterative decoding of LDPC codes over the binary symmetric channel (BSC). These novel decoders, referred to as finite alphabet iterative decoders (FAIDs) to signify that the message values belong to a finite alphabet, are capable of surpassing BP in the error-floor region. The messages propagated by FAIDs are not quantized probabilities or log-likelihoods, and the variable node update functions do not mimic the BP decoder, in contrast to traditional quantized BP decoders. Rather, the update functions are simple maps designed to ensure a higher guaranteed error-correction capability by using knowledge of potentially harmful topologies that could be present in a given code. We show that on several column-weight-three codes of practical interest, there exist 3-bit precision FAIDs that can surpass BP (floating-point) in the error floor without any compromise in decoding latency. Hence, they achieve superior performance compared to BP with only a fraction of its complexity.

Additionally, we propose decimation-enhanced FAIDs for LDPC codes, where the technique of decimation is incorporated into the variable node update function of FAIDs. Decimation, which involves fixing certain bits of the code to a particular value during the decoding process, can significantly reduce the number of iterations required to correct a fixed number of errors while maintaining the good performance of a FAID, thereby making such decoders more amenable to analysis. We illustrate this for 3-bit precision FAIDs on column-weight-three codes. We also show how decimation can be used adaptively to further enhance the guaranteed error-correction capability of FAIDs that are already good on a given code. The proposed adaptive decimation scheme has marginally added complexity but can significantly improve the slope of the error-floor performance of a particular FAID. On certain high-rate column-weight-three codes of practical interest, we show that adaptive decimation-enhanced FAIDs can achieve a guaranteed error-correction capability that is close to the theoretical limit achieved by maximum-likelihood decoding.
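To ground the terminology, here is the simplest hard-decision iterative decoder on a Tanner graph, a bit-flipping sketch over the BSC. It is a far cruder relative of the FAIDs described above, whose variable-node updates are instead carefully designed finite-alphabet maps; the (7,4) Hamming parity-check matrix is chosen for brevity, not taken from the dissertation.

```python
import numpy as np

def bit_flip_decode(H, y, max_iters=50):
    """Hard-decision bit-flipping decoding over the BSC: iterate between
    checking parities and flipping the bits involved in the most failures."""
    x = y.copy()
    for _ in range(max_iters):
        syndrome = H @ x % 2
        if not syndrome.any():
            return x, True               # all parity checks satisfied
        unsat = H.T @ syndrome           # unsatisfied-check count per bit
        flip = unsat == unsat.max()      # flip the most "suspicious" bits
        x = (x + flip.astype(int)) % 2
    return x, False

# (7,4) Hamming code with a single bit error on the BSC.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
y = np.zeros(7, dtype=int); y[2] = 1
print(bit_flip_decode(H, y))             # recovers the all-zero codeword
```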
179

Global Supply Sets in Graphs

Moore, Christian G 01 May 2016 (has links)
For a graph G=(V,E), a set S⊆V is a global supply set if every vertex v∈V\S has at least one neighbor, say u, in S such that u has at least as many neighbors in S as v has in V\S. The global supply number, denoted γgs(G), is the minimum cardinality of a global supply set. We introduce global supply sets and determine the global supply number for selected families of graphs. We also give bounds on the global supply number for general graphs, trees, and grid graphs.
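The definition is directly checkable. A brute-force sketch, assuming the networkx library for graph handling (the function names are ours, not the thesis's):

```python
from itertools import combinations
import networkx as nx  # assumption: networkx supplies the graph structure

def is_global_supply_set(G, S):
    """Check the definition: every v outside S needs a neighbor u in S
    whose neighbor count inside S is at least v's neighbor count outside S."""
    S = set(S)
    for v in set(G) - S:
        demand = sum(1 for w in G[v] if w not in S)      # |N(v) \ S|
        if not any(sum(1 for w in G[u] if w in S) >= demand
                   for u in G[v] if u in S):
            return False
    return True

def global_supply_number(G):
    """Brute-force gamma_gs(G): smallest cardinality of a global supply set."""
    nodes = list(G)
    for k in range(1, len(nodes) + 1):
        for S in combinations(nodes, k):
            if is_global_supply_set(G, S):
                return k

print(global_supply_number(nx.cycle_graph(5)))  # prints 3 for the 5-cycle
```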
180

Estimación probabilística del grado de excepcionalidad de un elemento arbitrario en un conjunto finito de datos: aplicación de la teoría de conjuntos aproximados de precisión variable / Probabilistic estimation of the degree of exceptionality of an arbitrary element in a finite data set: an application of variable precision rough set theory

Fernández Oliva, Alberto 27 September 2010 (has links)
No description available.
